Showing 1 to 15 of 105 results
Peer reviewed
Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2024
Assessing students' answers, and in particular natural language answers, is a crucial challenge in the field of education. Advances in transformer-based models such as Large Language Models (LLMs) have led to significant progress in various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks,…
Descriptors: Student Evaluation, Computer Assisted Testing, Artificial Intelligence, Comprehension
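As a point of reference for the kind of task Oli et al. study, a minimal lexical baseline for scoring a free-text answer against a reference is cosine similarity over TF-IDF vectors. The function name and example answers below are hypothetical, and this is emphatically not the authors' LLM-based approach.

```python
# Illustrative baseline only -- not the authors' method. Transformer/LLM
# approaches like the one studied above aim to improve on exactly this
# kind of shallow lexical match.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_score(student_answer: str, reference_answer: str) -> float:
    """Return a 0..1 lexical similarity between two short answers."""
    vec = TfidfVectorizer().fit([student_answer, reference_answer])
    X = vec.transform([student_answer, reference_answer])
    return float(cosine_similarity(X[0], X[1])[0, 0])

print(similarity_score(
    "Photosynthesis turns sunlight into chemical energy.",
    "Plants convert light energy into chemical energy via photosynthesis.",
))
```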
Peer reviewed
PDF on ERIC: Download full text
Qiwei He; Qingzhou Shi; Elizabeth L. Tighe – Grantee Submission, 2023
Increased use of computer-based assessments has facilitated data collection processes that capture both response product data (i.e., correct and incorrect) and response process data (e.g., time-stamped action sequences). Evidence suggests a strong relationship between respondents' correct/incorrect responses and their problem-solving proficiency…
Descriptors: Artificial Intelligence, Problem Solving, Classification, Data Use
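To make the product/process distinction concrete, here is a hedged sketch (not the paper's model) that derives two simple features from hypothetical time-stamped action sequences and relates them to correct/incorrect responses.

```python
# Hedged sketch: turn time-stamped action sequences (process data) into
# simple features and relate them to correct/incorrect responses (product
# data). The toy logs and feature choices are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each log: list of (timestamp_seconds, action) pairs for one respondent/item.
logs = [
    [(0, "start"), (12, "click_tab"), (40, "type"), (95, "submit")],
    [(0, "start"), (5, "submit")],
    [(0, "start"), (20, "click_tab"), (33, "click_tab"), (80, "submit")],
]
correct = [1, 0, 1]  # product data: 1 = correct response

def features(log):
    n_actions = len(log)
    total_time = log[-1][0] - log[0][0]
    return [n_actions, total_time]

X = [features(log) for log in logs]
clf = LogisticRegression().fit(X, correct)
print(clf.predict(X))
```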
Peer reviewed
Direct link
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
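One naive version of the filtering step the abstract alludes to is dropping candidate distractors whose surface form is too close to the correct answer. The threshold and examples below are assumptions; the paper's actual filtering is more sophisticated.

```python
# Illustrative filter only: drop candidate distractors that are
# near-duplicates of the correct answer by string similarity.
from difflib import SequenceMatcher

def filter_distractors(correct: str, candidates: list[str], max_sim: float = 0.8) -> list[str]:
    """Keep candidates whose similarity to the correct answer is below max_sim."""
    keep = []
    for cand in candidates:
        sim = SequenceMatcher(None, correct.lower(), cand.lower()).ratio()
        if sim < max_sim:
            keep.append(cand)
    return keep

print(filter_distractors(
    "mitochondria",
    ["mitochondrion", "ribosome", "chloroplast", "the mitochondria"],
))
```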
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty rely frequently on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
Peer reviewed
Yang Zhong; Mohamed Elaraby; Diane Litman; Ahmed Ashraf Butt; Muhsin Menekse – Grantee Submission, 2024
This paper introduces REFLECTSUMM, a novel summarization dataset specifically designed for summarizing students' reflective writing. The goal of REFLECTSUMM is to facilitate developing and evaluating novel summarization techniques tailored to real-world scenarios with little training data, with potential implications in the opinion summarization…
Descriptors: Documentation, Writing (Composition), Reflection, Metadata
Peer reviewed
Direct link
Danielle S. McNamara – Grantee Submission, 2024
Our primary objective in this Special Issue was to respond to potential criticisms of AIED as "perpetuating poor pedagogic practices, datafication, and introducing classroom surveillance" and to comment on the future of AIED in its coming of age. My overarching assumption in response to this line of critiques is that humans…
Descriptors: Educational Practices, Educational Quality, Intelligent Tutoring Systems, Artificial Intelligence
Peer reviewed
PDF on ERIC: Download full text
Kole Norberg; Husni Almoubayyed; Stephen E. Fancsali; Logan De Ley; Kyle Weldon; April Murphy; Steve Ritter – Grantee Submission, 2023
Large Language Models have recently achieved high performance on many writing tasks. In a recent study, math word problems in Carnegie Learning's MATHia adaptive learning software were rewritten by human authors to improve their clarity and specificity. The randomized experiment found that emerging readers who received the rewritten word problems…
Descriptors: Word Problems (Mathematics), Mathematics Instruction, Artificial Intelligence, Intelligent Tutoring Systems
Peer reviewed
Direct link
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
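The general workflow the abstract describes can be sketched as: hand-code a small sample, train a text classifier on it, and machine-label the remaining documents. The toy pipeline below illustrates only that scaling step, not the paper's approach to valid statistical inference with machine-generated codes; all data are hypothetical.

```python
# Sketch of the general idea (not the authors' estimator): a small
# hand-coded sample trains a classifier that labels the rest of the corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

hand_coded_docs = ["great essay with clear argument", "off topic and unclear"]
hand_codes = [1, 0]  # e.g., 1 = exhibits the construct of interest
uncoded_docs = ["clear, well argued response", "rambling and off topic"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(hand_coded_docs, hand_codes)
machine_codes = model.predict(uncoded_docs)
print(machine_codes)
```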
Peer reviewed
Direct link
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, which entail significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
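For orientation, a minimal handcrafted-feature AES baseline looks like the sketch below (hypothetical features and scores); the paper's contribution is an AutoML pipeline that searches over such design choices automatically rather than fixing them by hand.

```python
# Toy handcrafted-feature AES baseline -- the kind of manual setup that an
# AutoML pipeline would replace. Features and scores are hypothetical.
from sklearn.linear_model import Ridge

def essay_features(text: str):
    words = text.split()
    n_words = len(words)
    avg_word_len = sum(len(w) for w in words) / max(n_words, 1)
    n_sentences = max(text.count("."), 1)
    return [n_words, avg_word_len, n_words / n_sentences]

essays = [
    "Short essay. Not much here.",
    "A longer essay that develops its argument across several sentences. It elaborates. It concludes.",
]
scores = [2.0, 4.0]  # hypothetical holistic scores

model = Ridge().fit([essay_features(e) for e in essays], scores)
print(model.predict([essay_features("Another essay to score. It has two sentences.")]))
```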
Peer reviewed
Direct link
Hadis Anahideh; Nazanin Nezami; Abolfazl Asudeh – Grantee Submission, 2025
It is of critical importance to be aware of the historical discrimination embedded in the data and to consider a fairness measure to reduce bias throughout the predictive modeling pipeline. Given various notions of fairness defined in the literature, investigating the correlation and interaction among metrics is vital for addressing unfairness.…
Descriptors: Correlation, Measurement Techniques, Guidelines, Semantics
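As a concrete example of two fairness notions whose interplay such work studies, the sketch below computes a demographic parity gap and an equal-opportunity (true-positive-rate) gap on hypothetical predictions; the paper's analysis of correlation and interaction among metrics goes well beyond this single-model comparison.

```python
# Toy illustration of two fairness metrics (not the paper's framework),
# split by a binary sensitive attribute. All arrays are hypothetical.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # sensitive attribute

def positive_rate(mask):
    return y_pred[mask].mean()

# Demographic parity difference: gap in positive prediction rates.
dp_gap = abs(positive_rate(group == 0) - positive_rate(group == 1))

def tpr(mask):
    pos = mask & (y_true == 1)
    return y_pred[pos].mean()

# Equal-opportunity difference: gap in true-positive rates.
tpr_gap = abs(tpr(group == 0) - tpr(group == 1))
print(dp_gap, tpr_gap)  # correlations among such metrics are studied across many models
```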
Jessica Andrews-Todd; Jonathan Steinberg; Michael Flor; Carolyn M. Forsyth – Grantee Submission, 2022
Competency in skills associated with collaborative problem solving (CPS) is critical for many contexts, including school, the workplace, and the military. Innovative approaches for assessing individuals' CPS competency are necessary, as traditional assessment types such as multiple-choice items are not well suited for such a process-oriented…
Descriptors: Automation, Classification, Cooperative Learning, Problem Solving
Chenglu Li; Wanli Xing; Walter Leite – Grantee Submission, 2021
To support online learners at a large scale, extensive studies have adopted machine learning (ML) techniques to analyze students' artifacts and predict their learning outcomes automatically. However, limited attention has been paid to the fairness of prediction with ML in educational settings. This study intends to fill the gap by introducing a…
Descriptors: Learning Analytics, Prediction, Models, Electronic Learning
Peer reviewed
Direct link
Oscar Clivio; Avi Feller; Chris Holmes – Grantee Submission, 2024
Reweighting a distribution to minimize a distance to a target distribution is a powerful and flexible strategy for estimating a wide range of causal effects, but can be challenging in practice because optimal weights typically depend on knowledge of the underlying data generating process. In this paper, we focus on design-based weights, which do…
Descriptors: Evaluation Methods, Causal Models, Error of Measurement, Guidelines
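A common design-based choice that fits the abstract's framing is inverse-propensity-style balancing weights, sketched below on synthetic data. This illustrates reweighting a source sample toward a target covariate distribution; it is not the estimator developed in the paper.

```python
# Hedged sketch of inverse-propensity-style balancing weights on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Covariates for the source sample and the target sample we want to match.
X_source = rng.normal(0.5, 1.0, size=(200, 2))
X_target = rng.normal(0.0, 1.0, size=(200, 2))

# Model membership in target vs. source; the odds give balancing weights.
X = np.vstack([X_source, X_target])
z = np.array([0] * 200 + [1] * 200)
p = LogisticRegression().fit(X, z).predict_proba(X_source)[:, 1]
weights = p / (1 - p)          # reweights source toward target covariates
weights /= weights.mean()      # normalize to mean 1

# Weighted source means should move toward the target means (~0).
print(X_source.mean(axis=0), (weights[:, None] * X_source).mean(axis=0))
```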
Peer reviewed
Benjamin Motz; Harmony Jankowski; Jennifer Lopatin; Waverly Tseng; Tamara Tate – Grantee Submission, 2024
Platform-enabled research services can control, manage, and measure learner experiences within the platform. In this paper, we consider the need for research services that examine learner experiences "outside" the platform. For example, we describe an effort to conduct an experiment on peer assessment in a college writing course, where…
Descriptors: Educational Technology, Learning Management Systems, Electronic Learning, Peer Evaluation
Peer reviewed
Conrad Borchers; Jeroen Ooge; Cindy Peng; Vincent Aleven – Grantee Submission, 2025
Personalized problem selection enhances student practice in tutoring systems. Prior research has focused on transparent problem selection that supports learner control but rarely engages learners in selecting practice materials. We explored how different levels of control (i.e., full AI control, shared control, and full learner control), combined…
Descriptors: Intelligent Tutoring Systems, Artificial Intelligence, Learner Controlled Instruction, Learning Analytics
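The three control conditions can be framed schematically as a question of who chooses from the problem pool: the AI, the learner from an AI shortlist (shared control), or the learner alone. The function below is a hypothetical sketch, not the study's tutoring system.

```python
# Schematic only -- hypothetical names, not the study's system.
def select_problem(pool, ai_rank, mode, learner_choice=None):
    """pool: list of problem ids; ai_rank: ids sorted by AI-estimated benefit."""
    if mode == "full_ai":
        return ai_rank[0]                      # system picks the top problem
    if mode == "shared":
        shortlist = ai_rank[:3]                # AI narrows, learner decides
        return learner_choice if learner_choice in shortlist else shortlist[0]
    if mode == "full_learner":
        return learner_choice                  # learner picks from whole pool
    raise ValueError(mode)

rank = ["p4", "p2", "p1", "p3"]
print(select_problem(["p1", "p2", "p3", "p4"], rank, "shared", learner_choice="p2"))
```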