Publication Date
In 2025: 0
Since 2024: 6
Since 2021 (last 5 years): 19
Since 2016 (last 10 years): 23
Descriptor
Natural Language Processing: 22
Artificial Intelligence: 15
Models: 9
Computational Linguistics: 8
Prediction: 6
Reading Comprehension: 6
Scores: 6
Accuracy: 5
Automation: 5
Classification: 5
Educational Technology: 5
Author
Danielle S. McNamara: 23
Mihai Dascalu: 14
Renu Balyan: 7
Stefan Ruseti: 7
Tracy Arner: 6
Scott A. Crossley: 5
Micah Watanabe: 4
Rod D. Roscoe: 3
Stefan Trausan-Matu: 3
Bogdan Nicula: 2
Ionut Paraschiv: 2
Publication Type
Reports - Research: 20
Journal Articles: 8
Speeches/Meeting Papers: 6
Reports - Descriptive: 3
Information Analyses: 2
Education Level
Higher Education: 3
Postsecondary Education: 3
Elementary Education: 1
Grade 7: 1
Grade 8: 1
Grade 9: 1
High Schools: 1
Junior High Schools: 1
Middle Schools: 1
Secondary Education: 1
Location
Arizona: 1
California: 1
Florida: 1
Assessments and Surveys
Gates MacGinitie Reading Tests: 1
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is complex and time-consuming. Existing methods for automated generation either struggle to propose challenging distractors or fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
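The abstract above is truncated before the filtering step is described. Purely as a hedged illustration of the general idea (not the authors' method), the sketch below drops candidate distractors whose embedding similarity to the correct answer exceeds an assumed threshold; the model name, threshold, and helper function are assumptions.

```python
# Hypothetical sketch: filter candidate distractors that are near-synonyms of the
# correct answer using sentence embeddings (illustrative; not the paper's method).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose embedding model

def filter_distractors(correct_answer, candidates, max_similarity=0.75):
    """Keep candidates whose cosine similarity to the correct answer stays
    below the (assumed) threshold, discarding choices that are too close."""
    answer_emb = model.encode(correct_answer, convert_to_tensor=True)
    candidate_embs = model.encode(candidates, convert_to_tensor=True)
    similarities = util.cos_sim(answer_emb, candidate_embs)[0]
    return [c for c, s in zip(candidates, similarities) if s.item() < max_similarity]

print(filter_distractors("photosynthesis", ["carbon fixation", "cellular respiration", "mitosis"]))
```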
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions range from handcrafted linguistic features to large Transformer-based models, requiring significant effort for feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions range from handcrafted linguistic features to large Transformer-based models, requiring significant effort for feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
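The two AES entries above are truncated before the AutoML pipeline is detailed. The sketch below is a loose, scaled-down stand-in for AutoML-style model selection (TF-IDF features plus a small estimator search), not the pipeline the authors propose; the toy essays, scores, and parameter grid are assumptions.

```python
# Minimal stand-in for AutoML-style model selection on essay scoring
# (illustrative only; not the AutoML pipeline described in the paper).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

essays = ["First sample essay text.", "Second sample essay text.",
          "Third sample essay text.", "Fourth sample essay text."]
scores = [2.0, 3.5, 4.0, 1.5]  # toy human-assigned scores

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", Ridge()),
])

# Search over hyperparameters and the estimator itself, mimicking (very loosely)
# what an AutoML system automates at much larger scale.
param_grid = [
    {"model": [Ridge()], "model__alpha": [0.1, 1.0, 10.0]},
    {"model": [RandomForestRegressor(n_estimators=50, random_state=0)]},
]
search = GridSearchCV(pipeline, param_grid, cv=2, scoring="neg_mean_absolute_error")
search.fit(essays, scores)
print(search.best_params_)
```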
Matthew T. McCrudden; Linh Huynh; Bailing Lyu; Jonna M. Kulikowich; Danielle S. McNamara – Grantee Submission, 2024
Readers build a mental representation of text during reading. The coherence-building processes readers use to construct this representation are key to comprehension. We examined the effects of self-explanation on coherence-building processes as undergraduates (n = 51) read five complementary texts about natural selection and…
Descriptors: Reading Processes, Reading Comprehension, Undergraduate Students, Evolution
Ying Fang; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Artificial Intelligence (AI)-based assessments are commonly used in a variety of settings including business, healthcare, policing, manufacturing, and education. In education, AI-based assessments undergird intelligent tutoring systems as well as many tools used to evaluate students and, in turn, guide learning and instruction. This chapter…
Descriptors: Artificial Intelligence, Computer Assisted Testing, Student Evaluation, Evaluation Methods
Dragos-Georgian Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Modeling reading comprehension processes is a critical task for Learning Analytics, as accurate models of the reading process can be used to match students to texts, identify appropriate interventions, and predict learning outcomes. This paper introduces an improved version of the Automated Model of Comprehension, namely version 4.0. AMoC has its…
Descriptors: Computer Software, Artificial Intelligence, Learning Analytics, Natural Language Processing
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide a wealth of information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
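As a hedged illustration of the kind of automated textual analysis that NLP substitutes for hand scoring, the sketch below computes a few generic descriptive indices (word count, mean sentence length, type-token ratio); it is not tied to any specific tool from the entry above.

```python
# Illustrative sketch of simple automated text indices (generic; not a specific tool).
import re

def text_indices(text):
    """Return basic descriptive indices for a passage of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "n_words": len(words),
        "n_sentences": len(sentences),
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

print(text_indices("Natural selection acts on variation. Variation arises from mutation."))
```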
Dragos Corlatescu; Micah Watanabe; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Reading comprehension is essential for both knowledge acquisition and memory reinforcement. Automated modeling of the comprehension process provides insights into the efficacy of specific texts as learning tools. This paper introduces an improved version of the Automated Model of Comprehension, version 3.0 (AMoC v3.0). AMoC v3.0 is based on two…
Descriptors: Reading Comprehension, Models, Concept Mapping, Graphs
Renu Balyan; Danielle S. McNamara; Scott A. Crossley; William Brown; Andrew J. Karter; Dean Schillinger – Grantee Submission, 2022
Online patient portals that facilitate communication between patient and provider can improve patients' medication adherence and health outcomes. The effectiveness of such web-based communication measures can be influenced by the health literacy (HL) of a patient. In the context of diabetes, low HL is associated with severe hypoglycemia and high…
Descriptors: Computational Linguistics, Patients, Physicians, Information Security
Bogdan Nicula; Mihai Dascalu; Tracy Arner; Renu Balyan; Danielle S. McNamara – Grantee Submission, 2023
Text comprehension is an essential skill in today's information-rich world, and self-explanation practice helps students improve their understanding of complex texts. This study was centered on leveraging open-source Large Language Models (LLMs), specifically FLAN-T5, to automatically assess the comprehension strategies employed by readers while…
Descriptors: Reading Comprehension, Language Processing, Models, STEM Education
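A minimal sketch of the general approach named in the entry above, assuming the public google/flan-t5-base checkpoint from Hugging Face and an invented prompt and label set (paraphrasing, bridging, elaboration); the study's actual prompts, labels, and any fine-tuning are not reproduced here.

```python
# Hedged sketch: prompting an open-source FLAN-T5 model to label a self-explanation
# with a comprehension strategy. Prompt wording and label set are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

self_explanation = "So the cell uses the sugar it made earlier as energy, like fuel for a car."
prompt = (
    "Classify the reading strategy used in this self-explanation as "
    "paraphrasing, bridging, or elaboration.\n"
    f"Self-explanation: {self_explanation}\nStrategy:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```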
Bogdan Nicula; Marilena Panaite; Tracy Arner; Renu Balyan; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2023
Self-explanation practice is an effective method to support students in better understanding complex texts. This study focuses on automatically assessing the comprehension strategies employed by readers while understanding STEM texts. Data from 3 datasets (N = 11,833) with self-explanations annotated on different comprehension strategies (i.e.,…
Descriptors: Reading Strategies, Reading Comprehension, Metacognition, STEM Education
Razvan Paroiu; Stefan Ruseti; Mihai Dascalu; Stefan Trausan-Matu; Danielle S. McNamara – Grantee Submission, 2023
The exponential growth of scientific publications increases the effort required to identify relevant articles. Moreover, study scale is a frequent barrier to research: most studies are small or medium-scale, lack statistical power, and do not generalize well. As such, we introduce an automated method that supports the…
Descriptors: Science Education, Educational Research, Scientific and Technical Information, Journal Articles
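The entry above is truncated before its method is described. Purely as a generic illustration of reducing the effort of identifying relevant articles (not the authors' method), the sketch below ranks toy abstracts by TF-IDF cosine similarity to a query describing the topic of interest.

```python
# Generic relevance-ranking sketch: TF-IDF similarity between a query and abstracts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Self-explanation training improves comprehension of science texts.",
    "A survey of convolutional architectures for image classification.",
    "Automated essay scoring with transformer-based language models.",
]
query = "natural language processing for reading comprehension in education"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vecs = vectorizer.fit_transform(abstracts)
query_vec = vectorizer.transform([query])
scores = cosine_similarity(query_vec, doc_vecs)[0]

# Print abstracts from most to least similar to the query.
for score, abstract in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```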
Danielle S. McNamara; Tracy Arner; Reese Butterfuss; Debshila Basu Mallick; Andrew S. Lan; Rod D. Roscoe; Henry L. Roediger; Richard G. Baraniuk – Grantee Submission, 2022
The learning sciences inherently involve interdisciplinary research with the overarching objective of advancing theories of learning and informing the design and implementation of effective instructional methods and learning technologies. In these endeavors, the learning sciences encompass diverse constructs, measures, processes, and outcomes…
Descriptors: Artificial Intelligence, Learning Processes, Learning Motivation, Educational Research
Robert-Mihai Botarleanu; Micah Watanabe; Mihai Dascalu; Scott A. Crossley; Danielle S. McNamara – International Journal of Artificial Intelligence in Education, 2024
Age of Acquisition (AoA) scores approximate the age at which a language speaker fully understands a word's semantic meaning and represent a quantitative measure of the relative difficulty of words in a language. AoA word lists exist across various languages, with English having the most complete lists that capture the largest percentage of the…
Descriptors: Multilingualism, English (Second Language), Second Language Learning, Second Language Instruction
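A minimal sketch of how AoA norms quantify relative word difficulty: average the AoA of a text's words against a lookup table. The tiny table and its values below are hypothetical; in practice the published AoA word lists mentioned above would be loaded from a file.

```python
# Hedged sketch: mean Age of Acquisition (AoA) of a text as a difficulty estimate.
import re

AOA_NORMS = {"dog": 2.5, "run": 3.1, "energy": 6.8, "photosynthesis": 11.2}  # hypothetical values

def mean_aoa(text, norms=AOA_NORMS):
    """Average AoA over the words found in the norms; None if no word is covered."""
    words = re.findall(r"[a-z]+", text.lower())
    known = [norms[w] for w in words if w in norms]
    return sum(known) / len(known) if known else None

print(mean_aoa("Photosynthesis converts light energy."))
```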
Ying Fang; Tong Li; Linh Huynh; Katerina Christhilf; Rod D. Roscoe; Danielle S. McNamara – Grantee Submission, 2023
Literacy assessment is essential for effective literacy instruction and training. However, traditional paper-based literacy assessments are typically decontextualized and may cause stress and anxiety for test takers. In contrast, serious games and game environments allow for the assessment of literacy in more authentic and engaging ways, which has…
Descriptors: Literacy, Student Evaluation, Educational Games, Literacy Education