Publication Date
In 2025 (0)
Since 2024 (0)
Since 2021, last 5 years (2)
Since 2016, last 10 years (3)
Since 2006, last 20 years (3)
Descriptor
Artificial Intelligence (3)
Natural Language Processing (3)
Scoring (3)
Accuracy (2)
Automation (2)
Models (2)
Algorithms (1)
Classification (1)
Computer Assisted Testing (1)
Computer Software (1)
Documentation (1)
Source
Grantee Submission (3)
Author
Allen, Laura K. (3)
McNamara, Danielle S. (3)
Botarleanu, Robert-Mihai (2)
Crossley, Scott Andrew (2)
Dascalu, Mihai (2)
Crossley, Scott A. (1)
Kim, Minkyung (1)
Publication Type
Speeches/Meeting Papers (3)
Reports - Research (2)
Reports - Descriptive (1)
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2022
Automated scoring of student language is a complex task that requires systems to emulate complex and multi-faceted human evaluation criteria. Summary scoring brings an additional layer of complexity to automated scoring because it involves two texts of differing lengths that must be compared. In this study, we present our approach to automate…
Descriptors: Automation, Scoring, Documentation, Likert Scales
Crossley, Scott A.; Kim, Minkyung; Allen, Laura K.; McNamara, Danielle S. – Grantee Submission, 2019
Summarization is an effective strategy to promote and enhance learning and deep comprehension of texts. However, summarization is seldom implemented by teachers in classrooms because the manual evaluation of students' summaries requires time and effort. This problem has led to the development of automated models of summarization quality. However,…
Descriptors: Automation, Writing Evaluation, Natural Language Processing, Artificial Intelligence
Botarleanu, Robert-Mihai; Dascalu, Mihai; Allen, Laura K.; Crossley, Scott Andrew; McNamara, Danielle S. – Grantee Submission, 2021
Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for various factors including the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary being manually scored on a 4-point Likert scale.…
Descriptors: Computer Assisted Testing, Scoring, Natural Language Processing, Computer Software