Publication Date
  In 2025: 3
  Since 2024: 20
  Since 2021 (last 5 years): 69
  Since 2016 (last 10 years): 105
Source
  Grantee Submission: 105
Author
  Danielle S. McNamara: 18
  McNamara, Danielle S.: 12
  Mihai Dascalu: 12
  Renu Balyan: 7
  Dascalu, Mihai: 6
  Stefan Ruseti: 6
  Tracy Arner: 5
  Aleven, Vincent: 4
  Allen, Laura K.: 4
  Balyan, Renu: 4
  Graesser, Arthur C.: 4
Publication Type
  Reports - Research: 82
  Speeches/Meeting Papers: 59
  Journal Articles: 21
  Reports - Evaluative: 13
  Reports - Descriptive: 10
  Tests/Questionnaires: 1
Assessments and Surveys
  Flesch Kincaid Grade Level…: 2
  Autism Diagnostic Observation…: 1
  Flesch Reading Ease Formula: 1
  Torrance Tests of Creative…: 1
  Woodcock Johnson Tests of…: 1

Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2024
Assessing students' answers, and in particular natural language answers, is a crucial challenge in the field of education. Advances in transformer-based models, such as Large Language Models (LLMs), have led to significant progress on various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks,…
Descriptors: Student Evaluation, Computer Assisted Testing, Artificial Intelligence, Comprehension
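This style of answer assessment is often prototyped as semantic similarity between a student answer and a reference answer. A minimal sketch in Python, assuming a sentence-transformers model (the model choice and example items are illustrative, not the authors' pipeline):

```python
# Semantic-similarity grading sketch: NOT the authors' method, just a
# generic illustration of transformer-based answer assessment.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def score_answer(student_answer: str, reference_answer: str) -> float:
    """Cosine similarity between the student and reference answers."""
    emb = model.encode([student_answer, reference_answer], convert_to_tensor=True)
    return float(util.cos_sim(emb[0], emb[1]))

# Hypothetical item
print(score_answer("A loop repeats code.", "Loops execute a block repeatedly."))
```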
Qiwei He; Qingzhou Shi; Elizabeth L. Tighe – Grantee Submission, 2023
Increased use of computer-based assessments has facilitated data collection processes that capture both response product data (i.e., correct and incorrect responses) and response process data (e.g., time-stamped action sequences). Evidence suggests a strong relationship between respondents' correct/incorrect responses and their problem-solving proficiency…
Descriptors: Artificial Intelligence, Problem Solving, Classification, Data Use
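The product-plus-process idea can be made concrete with a toy classifier that combines correctness with features derived from a time-stamped action log. The features, logs, and labels below are hypothetical, not the study's analysis:

```python
# Toy combination of product data (correct/incorrect) and process features
# derived from a time-stamped action log. Everything here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def process_features(actions):
    """actions: list of (timestamp_seconds, action_name) tuples."""
    times = [t for t, _ in actions]
    return [len(actions),                  # number of actions taken
            times[-1] - times[0],          # total time on task
            len({a for _, a in actions})]  # distinct action types

# (action log, correct?) pairs for two hypothetical respondents
logs = [([(0, "read"), (5, "select"), (9, "submit")], 1),
        ([(0, "read"), (30, "submit")], 0)]
X = np.array([process_features(a) + [c] for a, c in logs])
y = np.array([1, 0])  # proficient vs. not, hypothetical labels
clf = LogisticRegression().fit(X, y)
```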
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is complex and time-consuming. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
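One filtering step the abstract alludes to, dropping candidates that sit too close to the correct answer in embedding space, can be sketched as follows; the model and threshold are assumptions for illustration:

```python
# Drop candidate distractors that are too similar to the correct answer.
# Model and threshold are illustrative assumptions, not the paper's values.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def filter_distractors(correct: str, candidates: list[str], max_sim: float = 0.8):
    cand_emb = model.encode(candidates, convert_to_tensor=True)
    correct_emb = model.encode(correct, convert_to_tensor=True)
    sims = util.cos_sim(correct_emb, cand_emb)[0]
    return [c for c, s in zip(candidates, sims) if float(s) < max_sim]

print(filter_distractors("Paris", ["The capital of France", "Lyon", "Berlin"]))
```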

Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
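For contrast, here is the kind of shallow difficulty baseline the abstract argues against, shown only to make the prediction task concrete; the questions and difficulty values are fabricated:

```python
# Shallow difficulty-prediction baseline (the kind the abstract contrasts
# with deeper models). Data are fabricated placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

questions = ["What is the main idea of the passage?",
             "How does paragraph 3 qualify the claim made in paragraph 1?"]
difficulty = [0.2, 0.7]  # e.g., 1 - proportion correct (hypothetical)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(questions, difficulty)
print(model.predict(["Which detail supports the author's argument?"]))
```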

Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate the development and evaluation of novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
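A dataset like this is typically used to score system summaries against references. A minimal evaluation sketch with the rouge-score package (the texts are invented placeholders; consult the dataset's own documentation for its actual schema and metrics):

```python
# Scoring a system summary against a reference with the rouge-score package.
# Texts are invented; REFLECTSUMM's actual fields and metrics may differ.
from rouge_score import rouge_scorer

reference = "Students found the pacing too fast and asked for more examples."
system = "Many reflections said the lecture moved quickly and wanted examples."

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(scorer.score(reference, system))
```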
Danielle S. McNamara – Grantee Submission, 2024
Our primary objective in this Special Issue was to respond to criticisms that AIED risks "perpetuating poor pedagogic practices, datafication, and introducing classroom surveillance" and to comment on the future of AIED in its coming of age. My overarching assumption in response to this line of critique is that humans…
Descriptors: Educational Practices, Educational Quality, Intelligent Tutoring Systems, Artificial Intelligence
Kole Norberg; Husni Almoubayyed; Stephen E. Fancsali; Logan De Ley; Kyle Weldon; April Murphy; Steve Ritter – Grantee Submission, 2023
Large Language Models have recently achieved high performance on many writing tasks. In a recent study, math word problems in Carnegie Learning's MATHia adaptive learning software were rewritten by human authors to improve their clarity and specificity. The randomized experiment found that emerging readers who received the rewritten word problems…
Descriptors: Word Problems (Mathematics), Mathematics Instruction, Artificial Intelligence, Intelligent Tutoring Systems
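The natural follow-up is whether an LLM can produce such clarity-focused rewrites automatically. A hedged sketch using the OpenAI Python SDK, where the model name and prompt are illustrative assumptions rather than the study's setup:

```python
# Hedged sketch of LLM-based rewriting for clarity. Model name and prompt
# are illustrative assumptions, not the study's setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rewrite_for_clarity(problem: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Rewrite this math word problem for clarity and "
                        "specificity without changing the underlying math."},
            {"role": "user", "content": problem},
        ],
    )
    return resp.choices[0].message.content

print(rewrite_for_clarity("Ann had 3 apples and got 2 more. How many now?"))
```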
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
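The general recipe behind ML-assisted coding can be sketched in a few lines: hand-code a subset, train a classifier, predict the remaining documents, and compare arms on the predicted codes. Estimators in this literature additionally correct for classifier error, which this toy version omits; all data are placeholders:

```python
# Toy version of ML-assisted coding of text outcomes. Real estimators also
# correct for classifier error; data here are fabricated.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["strong evidence and clear logic", "off topic",
        "well reasoned argument", "unclear claim"]
hand_codes = [1, 0, 1]             # human codes for the first three docs
treated = np.array([1, 1, 0, 0])   # arm assignment per document

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs[:3], hand_codes)      # train on the hand-coded subset
pred = clf.predict(docs)           # scale coding to the full corpus
effect = pred[treated == 1].mean() - pred[treated == 0].mean()
print(effect)
```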
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied to education. Solutions range from handcrafted linguistic features to large Transformer-based models, both of which entail significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
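As a stand-in for an AutoML pipeline, a plain scikit-learn grid search over a simple essay-scoring pipeline conveys the idea of automated model and feature selection; this is not the authors' system, and the essays and scores are placeholders:

```python
# Stand-in for AutoML: a scikit-learn grid search over a simple scoring
# pipeline. Not the authors' system; essays and scores are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

essays = ["First placeholder essay text.", "Second placeholder essay text.",
          "Third placeholder essay text.", "Fourth placeholder essay text."]
scores = [2.0, 3.5, 1.0, 4.0]

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("reg", Ridge())])
search = GridSearchCV(pipe,
                      param_grid={"tfidf__ngram_range": [(1, 1), (1, 2)],
                                  "reg__alpha": [0.1, 1.0, 10.0]},
                      cv=2)
search.fit(essays, scores)
print(search.best_params_)
```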
Hadis Anahideh; Nazanin Nezami; Abolfazl Asudeh – Grantee Submission, 2025
It is of critical importance to be aware of the historical discrimination embedded in the data and to consider a fairness measure to reduce bias throughout the predictive modeling pipeline. Given various notions of fairness defined in the literature, investigating the correlation and interaction among metrics is vital for addressing unfairness.…
Descriptors: Correlation, Measurement Techniques, Guidelines, Semantics
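The abstract's question, how fairness metrics co-vary across candidate models, can be illustrated with fairlearn's metric functions on fabricated predictions; the study's metrics, models, and data differ:

```python
# How do fairness metrics co-vary across candidate models? Fabricated
# predictions with fairlearn's metric functions; the study's setup differs.
import numpy as np
from fairlearn.metrics import (demographic_parity_difference,
                               equalized_odds_difference)

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
models = [np.array([1, 0, 1, 0, 1, 1, 1, 0]),
          np.array([1, 1, 1, 0, 0, 0, 1, 0]),
          np.array([0, 0, 1, 0, 1, 1, 0, 1])]

dp = [demographic_parity_difference(y_true, p, sensitive_features=group)
      for p in models]
eo = [equalized_odds_difference(y_true, p, sensitive_features=group)
      for p in models]
print(np.corrcoef(dp, eo)[0, 1])  # correlation across the model pool
```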
Jessica Andrews-Todd; Jonathan Steinberg; Michael Flor; Carolyn M. Forsyth – Grantee Submission, 2022
Competency in skills associated with collaborative problem solving (CPS) is critical for many contexts, including school, the workplace, and the military. Innovative approaches for assessing individuals' CPS competency are necessary, as traditional assessment types such as multiple-choice items are not well suited for such a process-oriented…
Descriptors: Automation, Classification, Cooperative Learning, Problem Solving
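A common realization of automated CPS assessment is classifying chat utterances into skill categories. The labels and utterances below are invented placeholders, not the study's coding scheme:

```python
# Classify chat utterances into CPS skill categories. Labels and utterances
# are invented placeholders, not the study's coding scheme.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = ["let's split up the tasks", "I think the answer is 42",
              "can you share your screen?", "we should test that idea"]
labels = ["coordination", "knowledge", "coordination", "monitoring"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(utterances, labels)
print(clf.predict(["who wants to record our plan?"]))
```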
Li, Chenglu; Xing, Wanli; Leite, Walter – Grantee Submission, 2021
To support online learners at a large scale, extensive studies have adopted machine learning (ML) techniques to analyze students' artifacts and predict their learning outcomes automatically. However, limited attention has been paid to the fairness of prediction with ML in educational settings. This study intends to fill the gap by introducing a…
Descriptors: Learning Analytics, Prediction, Models, Electronic Learning
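A minimal fairness audit in this spirit compares a performance metric across learner groups, for example with fairlearn's MetricFrame; the data and grouping are fabricated, and the paper's framework goes well beyond this check:

```python
# Group-wise audit of an outcome predictor with fairlearn's MetricFrame.
# Data and grouping are fabricated; the paper's framework is broader.
import numpy as np
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0])
group = np.array(["online", "online", "online", "hybrid", "hybrid", "hybrid"])

mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)
print(mf.difference())  # gap in accuracy between groups
```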
Oscar Clivio; Avi Feller; Chris Holmes – Grantee Submission, 2024
Reweighting a distribution to minimize a distance to a target distribution is a powerful and flexible strategy for estimating a wide range of causal effects, but can be challenging in practice because optimal weights typically depend on knowledge of the underlying data generating process. In this paper, we focus on design-based weights, which do…
Descriptors: Evaluation Methods, Causal Models, Error of Measurement, Guidelines
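A standard design-based weighting move this line of work builds on: estimate propensity scores and reweight the comparison group toward the treated covariate distribution. A synthetic sketch with ATT-style odds weights, not the paper's estimator:

```python
# Estimate propensity scores, then reweight the comparison group toward the
# treated covariate distribution (ATT-style odds weights). Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                     # covariates
t = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment depends on X

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
w = ps / (1 - ps)                                 # odds weights for controls

# The weighted control mean should track the treated covariate mean.
print(X[t == 1, 0].mean(), np.average(X[t == 0, 0], weights=w[t == 0]))
```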

Benjamin Motz; Harmony Jankowski; Jennifer Lopatin; Waverly Tseng; Tamara Tate – Grantee Submission, 2024
Platform-enabled research services can control, manage, and measure learner experiences within the platform. In this paper, we consider the need for research services that examine learner experiences "outside" the platform. For example, we describe an effort to conduct an experiment on peer assessment in a college writing course, where…
Descriptors: Educational Technology, Learning Management Systems, Electronic Learning, Peer Evaluation

Conrad Borchers; Jeroen Ooge; Cindy Peng; Vincent Aleven – Grantee Submission, 2025
Personalized problem selection enhances student practice in tutoring systems. Prior research has focused on transparent problem selection that supports learner control but rarely engages learners in selecting practice materials. We explored how different levels of control (i.e., full AI control, shared control, and full learner control), combined…
Descriptors: Intelligent Tutoring Systems, Artificial Intelligence, Learner Controlled Instruction, Learning Analytics
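The three control levels can be made concrete with a toy mastery model: full AI control picks the top-ranked problem, shared control offers the AI's top-k for the learner to choose from, and full learner control is an unconstrained pick. The ranking rule and mastery values are invented:

```python
# Toy mastery model for the three control levels. Ranking rule and values
# are invented, not the study's problem-selection policy.
def rank_problems(mastery: dict[str, float]) -> list[str]:
    """Lowest estimated mastery first (an illustrative selection rule)."""
    return sorted(mastery, key=mastery.get)

mastery = {"fractions-1": 0.9, "fractions-2": 0.4, "ratios-1": 0.6}

ai_choice = rank_problems(mastery)[0]       # full AI control
shared_menu = rank_problems(mastery)[:2]    # shared control: learner picks from k=2
learner_menu = list(mastery)                # full learner control
print(ai_choice, shared_menu, learner_menu)
```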