Publication Date
In 2025 | 0 |
Since 2024 | 5 |
Since 2021 (last 5 years) | 9 |
Since 2016 (last 10 years) | 17 |
Since 2006 (last 20 years) | 18 |
Source
Grantee Submission | 18 |
Author
Danielle S. McNamara | 5 |
Mihai Dascalu | 5 |
Stefan Ruseti | 4 |
Allen, Laura K. | 3 |
McNamara, Danielle S. | 3 |
Andreea Dutulescu | 2 |
Dascalu, Mihai | 2 |
Aaron Haim | 1 |
Aleven, Vincent | 1 |
Amy M. Johnson | 1 |
Baker, Doris Luft | 1 |
Publication Type
Reports - Research | 15 |
Speeches/Meeting Papers | 12 |
Journal Articles | 2 |
Reports - Descriptive | 2 |
Reports - Evaluative | 1 |
Education Level
Elementary Education | 2 |
Higher Education | 2 |
Postsecondary Education | 2 |
Early Childhood Education | 1 |
Grade 2 | 1 |
Grade 8 | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Primary Education | 1 |
Secondary Education | 1 |
Location
Massachusetts | 1 |
Assessments and Surveys
Autism Diagnostic Observation… | 1 |
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
Jessica Andrews-Todd; Jonathan Steinberg; Michael Flor; Carolyn M. Forsyth – Grantee Submission, 2022
Competency in skills associated with collaborative problem solving (CPS) is critical for many contexts, including school, the workplace, and the military. Innovative approaches for assessing individuals' CPS competency are necessary, as traditional assessment types such as multiple-choice items are not well suited for such a process-oriented…
Descriptors: Automation, Classification, Cooperative Learning, Problem Solving
Robert-Mihai Botarleanu; Mihai Dascalu; Scott Andrew Crossley; Danielle S. McNamara – Grantee Submission, 2022
The ability to express yourself concisely and coherently is a crucial skill, both for academic purposes and professional careers. An important aspect to consider in writing is an adequate segmentation of ideas, which in turn requires a proper understanding of where to place paragraph breaks. However, these decisions are often performed…
Descriptors: Paragraph Composition, Text Structure, Automation, Identification
Aaron Haim; Eamon Worden; Neil T. Heffernan – Grantee Submission, 2024
Since GPT-4's release, it has shown novel abilities in a variety of domains. This paper explores the use of LLM-generated explanations as on-demand assistance for problems within the ASSISTments platform. In particular, we are studying whether GPT-generated explanations are better than nothing on problems that have no supports and whether…
Descriptors: Artificial Intelligence, Learning Management Systems, Computer Software, Intelligent Tutoring Systems
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Zhang, Haoran; Litman, Diane – Grantee Submission, 2018
This paper presents an investigation of using a co-attention based neural network for source-dependent essay scoring. We use a co-attention mechanism to help the model learn the importance of each part of the essay more accurately. Also, this paper shows that the co-attention based neural network model provides reliable score prediction of…
Descriptors: Essays, Scoring, Automation, Artificial Intelligence
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Nicula, Bogdan; Perret, Cecile A.; Dascalu, Mihai; McNamara, Danielle S. – Grantee Submission, 2020
Theories of discourse argue that comprehension depends on the coherence of the learner's mental representation. Our aim is to create a reliable automated representation to estimate readers' level of comprehension based on different productions, namely self-explanations and answers to open-ended questions. Previous work relied on Cohesion Network…
Descriptors: Network Analysis, Reading Comprehension, Automation, Artificial Intelligence
Fancsali, Stephen E.; Holstein, Kenneth; Sandbothe, Michael; Ritter, Steven; McLaren, Bruce M.; Aleven, Vincent – Grantee Submission, 2020
Extensive literature in artificial intelligence in education focuses on developing automated methods for detecting cases in which students struggle to master content while working with educational software. Such cases have often been called "wheel-spinning," "unproductive persistence," or "unproductive struggle." We…
Descriptors: Artificial Intelligence, Automation, Persistence, Intelligent Tutoring Systems
McCarthy, Kathryn S.; Allen, Laura K.; Hinze, Scott R. – Grantee Submission, 2020
Open-ended "constructed responses" promote deeper processing of course materials. Further, evaluation of these explanations can yield important information about students' cognition. This study examined how students' constructed responses, generated at different points during learning, relate to their later comprehension outcomes…
Descriptors: Reading Comprehension, Prediction, Responses, College Students
Stefan Ruseti; Mihai Dascalu; Amy M. Johnson; Danielle S. McNamara; Renu Balyan; Kathryn S. McCarthy; Stefan Trausan-Matu – Grantee Submission, 2018
Summarization enhances comprehension and is considered an effective strategy to promote and enhance learning and deep understanding of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation requires a lot of effort and time. Although the need for automated support is pressing, there are only a…
Descriptors: Documentation, Artificial Intelligence, Educational Technology, Writing (Composition)
Cai, Zhiqiang; Hu, Xiangen; Graesser, Arthur C. – Grantee Submission, 2019
Conversational Intelligent Tutoring Systems (ITSs) are expensive to develop. While simple online courseware could be easily authored by teachers, the authoring of conversational ITSs usually involves a team of experts with different expertise, including domain experts, linguists, instruction designers, programmers, artists, computer scientists,…
Descriptors: Programming, Intelligent Tutoring Systems, Courseware, Educational Technology