Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 0 |
Since 2006 (last 20 years) | 2 |
Descriptor
Essays | 2 |
Evaluators | 2 |
Scoring | 2 |
Vocabulary Skills | 2 |
Writing Evaluation | 2 |
Writing Skills | 2 |
Accuracy | 1 |
Comparative Analysis | 1 |
Computational Linguistics | 1 |
Computer Software | 1 |
Connected Discourse | 1 |
Source
Grantee Submission | 2 |
Author
Crossley, Scott A. | 2 |
McNamara, Danielle S. | 2 |
Allen, Laura K. | 1 |
Guo, Liang | 1 |
Kyle, Kristopher | 1 |
Roscoe, Rod D. | 1 |
Snow, Erica L. | 1 |
Varner, Laura K. | 1 |
Publication Type
Reports - Research | 2 |
Journal Articles | 1 |
Speeches/Meeting Papers | 1 |
Tests/Questionnaires | 1 |
Education Level
High Schools | 1 |
Assessments and Surveys
Gates MacGinitie Reading Tests | 1 |
Test of English as a Foreign Language (TOEFL) | 1 |
Roscoe, Rod D.; Crossley, Scott A.; Snow, Erica L.; Varner, Laura K.; McNamara, Danielle S. – Grantee Submission, 2014
Automated essay scoring tools are often criticized on the basis of construct validity. Specifically, it has been argued that computational scoring algorithms may be unaligned with higher-level indicators of quality writing, such as writers' demonstrated knowledge and understanding of the essay topics. In this paper, we consider how and whether the…
Descriptors: Correlation, Essays, Scoring, Writing Evaluation
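A minimal sketch of the kind of check the first record motivates: correlating automated essay scores against a human-rated, higher-level quality indicator such as demonstrated topic knowledge. The arrays below are invented placeholders for illustration, not data or methods from the paper; a weak correlation in such a check would echo the construct-validity concern the abstract raises.

```python
# Hypothetical illustration only (not the study's data or procedure):
# correlate automated essay scores with human ratings of topic knowledge.
from scipy.stats import pearsonr

automated_scores = [3.5, 2.0, 4.5, 3.0, 4.0, 2.5]   # placeholder AES scores (1-6 scale)
knowledge_ratings = [4, 2, 5, 3, 5, 2]               # placeholder human knowledge ratings

r, p = pearsonr(automated_scores, knowledge_ratings)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```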
Crossley, Scott A.; Kyle, Kristopher; Allen, Laura K.; Guo, Liang; McNamara, Danielle S. – Grantee Submission, 2014
This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a…
Descriptors: Computational Linguistics, Essays, Scoring, Writing Evaluation
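As a rough illustration of the second record's setup (not its actual pipeline), linguistic microfeatures can be entered into a regression model to predict human essay ratings. The feature names and values below are hypothetical placeholders; in the study, indices of this kind would come from tools such as Coh-Metrix or the Writing Assessment Tool.

```python
# Hypothetical sketch: predict human essay ratings from a few linguistic
# microfeatures using ordinary least-squares regression.
import numpy as np
from sklearn.linear_model import LinearRegression

# rows = essays, columns = [word_count, cohesion_index, lexical_sophistication]
X = np.array([
    [250, 0.42, 0.31],
    [480, 0.55, 0.47],
    [320, 0.38, 0.40],
    [610, 0.61, 0.52],
    [150, 0.30, 0.25],
])
y = np.array([2.5, 4.0, 3.0, 4.5, 2.0])  # placeholder human holistic ratings

model = LinearRegression().fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 2))
print("Feature weights:", model.coef_)
```

In practice, studies of this kind report how much variance in the human ratings the computational indices explain, typically with cross-validation rather than training-set fit.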