Source: Grantee Submission (88)
Publication Type: Reports - Research (88) | Speeches/Meeting Papers (49) | Journal Articles (19)
Showing 1 to 15 of 88 results
Peer reviewed
Priti Oli; Rabin Banjade; Jeevan Chapagain; Vasile Rus – Grantee Submission, 2024
Assessing students' answers, and in particular natural language answers, is a crucial challenge in the field of education. Advances in transformer-based models, such as Large Language Models (LLMs), have led to significant progress in various natural language tasks. Nevertheless, amidst the growing trend of evaluating LLMs across diverse tasks,…
Descriptors: Student Evaluation, Computer Assisted Testing, Artificial Intelligence, Comprehension
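As a concrete illustration of automated natural-language answer assessment (a minimal sketch, not the authors' method; the model choice and the 0.6 cutoff are assumptions):

```python
# Hypothetical sketch: grade a short answer by semantic similarity to a
# reference answer using a sentence-transformer encoder.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

reference = "Photosynthesis converts light energy into chemical energy stored in glucose."
student = "Plants use sunlight to make sugar, storing the energy chemically."

emb_ref, emb_stu = model.encode([reference, student], convert_to_tensor=True)
score = util.cos_sim(emb_ref, emb_stu).item()  # cosine similarity in [-1, 1]

# The 0.6 threshold is an illustrative cutoff, not an empirically tuned one.
print(f"similarity={score:.2f} ->", "accept" if score >= 0.6 else "flag for review")
```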
Peer reviewed
Qiwei He; Qingzhou Shi; Elizabeth L. Tighe – Grantee Submission, 2023
Increased use of computer-based assessments has facilitated data collection processes that capture both response product data (i.e., correct and incorrect) and response process data (e.g., time-stamped action sequences). Evidence suggests a strong relationship between respondents' correct/incorrect responses and their problem-solving proficiency…
Descriptors: Artificial Intelligence, Problem Solving, Classification, Data Use
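One way the response-process idea could be operationalized (an illustrative sketch with toy data, not the paper's model) is to reduce each time-stamped action sequence to simple features and classify correctness:

```python
# Turn time-stamped action sequences into simple features and classify
# correct vs. incorrect responses.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each response: list of (timestamp_seconds, action) pairs; label 1 = correct.
logs = [
    ([(0.0, "start"), (4.2, "click_A"), (9.8, "submit")], 1),
    ([(0.0, "start"), (2.0, "click_B"), (2.5, "click_B"), (3.0, "submit")], 0),
    ([(0.0, "start"), (6.1, "click_A"), (12.4, "click_C"), (20.0, "submit")], 1),
    ([(0.0, "start"), (1.1, "submit")], 0),
]

def featurize(seq):
    times = [t for t, _ in seq]
    return [len(seq),                 # number of actions
            times[-1] - times[0],     # total time on task
            np.mean(np.diff(times))]  # mean pause between actions

X = np.array([featurize(seq) for seq, _ in logs])
y = np.array([label for _, label in logs])
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))
```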
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Denis Iorga; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
The process of generating challenging and appropriate distractors for multiple-choice questions is a complex and time-consuming task. Existing methods for automated generation have limitations in proposing challenging distractors, or they fail to effectively filter out incorrect choices that closely resemble the correct answer, share synonymous…
Descriptors: Multiple Choice Tests, Artificial Intelligence, Attention, Natural Language Processing
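The filtering step the abstract describes could look like the following sketch (the model name, the example items, and the 0.85 threshold are illustrative assumptions, not details from the paper):

```python
# Discard candidate distractors that are near-paraphrases of the correct answer.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

key = "The mitochondrion produces most of the cell's ATP."
candidates = [
    "The mitochondrion generates the majority of cellular ATP.",  # too close to key
    "The nucleus produces most of the cell's ATP.",
    "Ribosomes are the main site of ATP synthesis.",
]

key_emb = model.encode(key, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)
sims = util.cos_sim(key_emb, cand_embs)[0]

# Keep distractors that are plausible but not paraphrases of the correct answer.
kept = [c for c, s in zip(candidates, sims) if s.item() < 0.85]
print(kept)
```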
Peer reviewed
Andreea Dutulescu; Stefan Ruseti; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Assessing the difficulty of reading comprehension questions is crucial to educational methodologies and language understanding technologies. Traditional methods of assessing question difficulty frequently rely on human judgments or shallow metrics, often failing to accurately capture the intricate cognitive demands of answering a question. This…
Descriptors: Difficulty Level, Reading Tests, Test Items, Reading Comprehension
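For context, a minimal example of the "shallow metrics" the abstract critiques is a surface readability score such as Flesch-Kincaid grade level (the syllable counter below is a naive approximation for illustration only):

```python
# Estimate question "difficulty" from surface readability alone; this is the
# kind of shallow baseline that ignores cognitive demands.
import re

def count_syllables(word):
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

question = "Why does the narrator's attitude toward the expedition shift after the storm?"
print(f"FK grade: {fk_grade(question):.1f}")  # surface difficulty only
```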
Peer reviewed
Regan Mozer; Luke Miratrix – Grantee Submission, 2024
For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require that each document first be manually coded for constructs of interest by trained human raters. This process, the current standard, is both time-consuming and limiting: even the largest human coding efforts are typically constrained to…
Descriptors: Artificial Intelligence, Coding, Efficiency, Statistical Inference
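The general idea of scaling up hand-coding could be sketched as follows (toy documents and codes; this illustrates the setup, not the authors' estimator):

```python
# Train a text classifier on a small hand-coded sample, then machine-code the
# remaining documents from the trial.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

hand_coded_docs = ["The essay argues its thesis clearly and cites sources.",
                   "No clear thesis is given; mostly restatement.",
                   "Each claim is well supported with evidence.",
                   "Mostly summary, little argument."]
hand_codes = [1, 0, 1, 0]  # 1 = construct present, per trained human raters

uncoded_docs = ["The writer supports each claim with evidence.",
                "This response restates the prompt without arguing."]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(hand_coded_docs, hand_codes)
print(clf.predict(uncoded_docs))  # machine codes for the uncoded remainder
```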
Peer reviewed
Stefan Ruseti; Ionut Paraschiv; Mihai Dascalu; Danielle S. McNamara – Grantee Submission, 2024
Automated Essay Scoring (AES) is a well-studied problem in Natural Language Processing applied in education. Solutions vary from handcrafted linguistic features to large Transformer-based models, implying a significant effort in feature extraction and model implementation. We introduce a novel Automated Machine Learning (AutoML) pipeline…
Descriptors: Computer Assisted Testing, Scoring, Automation, Essays
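A minimal sketch of what an AutoML-style search for essay scoring can look like (illustrative only; the paper's pipeline is more involved, and the features and candidate models here are assumptions):

```python
# Search over candidate regressors and hyperparameters with cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

essays = ["First toy essay text here.", "Second toy essay text here.",
          "Third toy essay text here.", "Fourth toy essay text here."]
scores = [2.0, 4.0, 3.0, 5.0]  # toy holistic scores

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("model", Ridge())])
search = GridSearchCV(
    pipe,
    param_grid=[{"model": [Ridge()], "model__alpha": [0.1, 1.0]},
                {"model": [SVR()], "model__C": [0.1, 1.0]}],
    cv=2, scoring="neg_mean_squared_error",
)
search.fit(essays, scores)
print(search.best_params_)
```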
Peer reviewed
Hadis Anahideh; Nazanin Nezami; Abolfazl Asudeh – Grantee Submission, 2025
It is of critical importance to be aware of the historical discrimination embedded in the data and to consider a fairness measure to reduce bias throughout the predictive modeling pipeline. Given various notions of fairness defined in the literature, investigating the correlation and interaction among metrics is vital for addressing unfairness.…
Descriptors: Correlation, Measurement Techniques, Guidelines, Semantics
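In the spirit of the abstract, examining how fairness metrics co-vary might look like the following sketch (the metrics, simulated models, and data are assumptions for illustration):

```python
# Compute two common fairness notions across several simulated models and
# check how the metrics correlate.
import numpy as np

def demographic_parity_diff(y_pred, group):
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 500)
y_true = rng.integers(0, 2, 500)

dp, eo = [], []
for _ in range(20):  # 20 hypothetical models with group-dependent positives
    y_pred = (rng.random(500) < 0.4 + 0.2 * group).astype(int)
    dp.append(demographic_parity_diff(y_pred, group))
    eo.append(equal_opportunity_diff(y_true, y_pred, group))

print("correlation between metrics:", np.corrcoef(dp, eo)[0, 1])
```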
Jessica Andrews-Todd; Jonathan Steinberg; Michael Flor; Carolyn M. Forsyth – Grantee Submission, 2022
Competency in skills associated with collaborative problem solving (CPS) is critical for many contexts, including school, the workplace, and the military. Innovative approaches for assessing individuals' CPS competency are necessary, as traditional assessment types such as multiple-choice items are not well suited for such a process-oriented…
Descriptors: Automation, Classification, Cooperative Learning, Problem Solving
Chenglu Li; Wanli Xing; Walter Leite – Grantee Submission, 2021
To support online learners at a large scale, extensive studies have adopted machine learning (ML) techniques to analyze students' artifacts and predict their learning outcomes automatically. However, limited attention has been paid to the fairness of prediction with ML in educational settings. This study intends to fill the gap by introducing a…
Descriptors: Learning Analytics, Prediction, Models, Electronic Learning
Peer reviewed
Oscar Clivio; Avi Feller; Chris Holmes – Grantee Submission, 2024
Reweighting a distribution to minimize a distance to a target distribution is a powerful and flexible strategy for estimating a wide range of causal effects, but can be challenging in practice because optimal weights typically depend on knowledge of the underlying data generating process. In this paper, we focus on design-based weights, which do…
Descriptors: Evaluation Methods, Causal Models, Error of Measurement, Guidelines
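A generic reweighting sketch for intuition (the paper's design-based weights are more specific; the simulated data and propensity-odds weighting here are assumptions):

```python
# Weight controls by the propensity odds so their covariate distribution
# matches the treated group's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
t = (rng.random(1000) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)  # confounded treatment

ps = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
w = np.where(t == 1, 1.0, ps / (1 - ps))  # ATT-style odds weights for controls

print("treated mean:", X[t == 1, 0].mean())
print("raw control mean:", X[t == 0, 0].mean())
print("reweighted control mean:",
      np.average(X[t == 0, 0], weights=w[t == 0]))
```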
Peer reviewed
Benjamin Motz; Harmony Jankowski; Jennifer Lopatin; Waverly Tseng; Tamara Tate – Grantee Submission, 2024
Platform-enabled research services control, manage, and measure learner experiences within a given platform. In this paper, we consider the need for research services that examine learner experiences "outside" the platform. For example, we describe an effort to conduct an experiment on peer assessment in a college writing course, where…
Descriptors: Educational Technology, Learning Management Systems, Electronic Learning, Peer Evaluation
Peer reviewed
Conrad Borchers; Jeroen Ooge; Cindy Peng; Vincent Aleven – Grantee Submission, 2025
Personalized problem selection enhances student practice in tutoring systems. Prior research has focused on transparent problem selection that supports learner control but rarely engages learners in selecting practice materials. We explored how different levels of control (i.e., full AI control, shared control, and full learner control), combined…
Descriptors: Intelligent Tutoring Systems, Artificial Intelligence, Learner Controlled Instruction, Learning Analytics
Jennifer Hill; George Perrett; Vincent Dorie – Grantee Submission, 2023
Estimation of causal effects requires making comparisons across groups of observations exposed and not exposed to a treatment or cause (intervention, program, drug, etc.). To interpret differences between groups causally, we need to ensure that they have been constructed in such a way that the comparisons are "fair." This can be…
Descriptors: Causal Models, Statistical Inference, Artificial Intelligence, Data Analysis
Anjali Adukia; Alex Eble; Emileigh Harrison; Hakizumwami Birali Runesha; Teodora Szasz – Grantee Submission, 2023
Books shape how children learn about society and norms, in part through representation of different characters. We use computational tools to characterize representation in children's books widely read in homes, classrooms, and libraries over the last century, and describe economic forces that may contribute to these patterns. We introduce new…
Descriptors: Self Concept, Racism, Gender Bias, Childrens Literature
Robert-Mihai Botarleanu; Mihai Dascalu; Scott Andrew Crossley; Danielle S. McNamara – Grantee Submission, 2022
The ability to express yourself concisely and coherently is a crucial skill, both for academic purposes and professional careers. An important aspect to consider in writing is an adequate segmentation of ideas, which in turn requires a proper understanding of where to place paragraph breaks. However, these decisions are often performed…
Descriptors: Paragraph Composition, Text Structure, Automation, Identification
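The segmentation task can be framed as classification over consecutive sentence pairs, sketched below (toy data and feature choices are assumptions, not the authors' model):

```python
# Predict whether a paragraph break belongs between two consecutive sentences.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each item joins a sentence pair; label 1 = a paragraph break belongs between them.
pairs = ["And that concludes the setup. Turning to the results, we see gains.",
         "The sample was large. It included students from twelve schools.",
         "That ends the argument. A separate question is cost.",
         "He opened the door. The room was dark."]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(pairs, labels)
print(clf.predict(["The study ends here. In contrast, future work will expand it."]))
```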