Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022

Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…

Descriptors: Automation, Scoring, Documentation, Likert Scales
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021

Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors, including both the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary manually scored on a 4-point Likert scale.…

Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software