Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 4 |
Since 2016 (last 10 years) | 5 |
Since 2006 (last 20 years) | 6 |
Descriptor
Accuracy | 6 |
Elementary School Students | 6 |
Writing Tests | 6 |
Curriculum Based Assessment | 5 |
Scoring | 5 |
Automation | 4 |
Writing Evaluation | 3 |
Grade 3 | 2 |
Grade 4 | 2 |
Predictive Validity | 2 |
Scores | 2 |
Source
Grantee Submission | 6 |
Publication Type
Reports - Research | 6 |
Journal Articles | 2 |
Education Level
Elementary Education | 6 |
Early Childhood Education | 2 |
Grade 3 | 2 |
Grade 4 | 2 |
Intermediate Grades | 2 |
Primary Education | 2 |
Grade 2 | 1 |
Grade 5 | 1 |
Middle Schools | 1 |
Secondary Education | 1 |
Location
Texas | 1 |
Assessments and Surveys
Oral and Written Language… | 1 |
Wechsler Individual… | 1 |
Woodcock Johnson Tests of… | 1 |
Matta, Michael; Keller-Margulis, Milena A.; Mercer, Sterett H. – Grantee Submission, 2022
Although researchers have investigated the technical adequacy and usability of written-expression curriculum-based measures (WE-CBM), the economic implications of different scoring approaches have largely been ignored. The absence of such knowledge can undermine the effective allocation of resources and lead to the adoption of suboptimal measures for…
Descriptors: Cost Effectiveness, Scoring, Automation, Writing Tests
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Grantee Submission, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation and of written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Mercer, Sterett H.; Cannon, Joanna E.; Squires, Bonita; Guo, Yue; Pinco, Ella – Grantee Submission, 2021
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be used to accurately computer-score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1-12 who received 1:1 academic tutoring through a community-based organization completed…
Descriptors: Curriculum Based Assessment, Automation, Scoring, Writing Tests
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend the evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity, and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Wilson, Joshua; Rodrigues, Jessica – Grantee Submission, 2020
The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the "Project Essay Grade" (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common…
Descriptors: Writing Tests, Screening Tests, Classification, Accuracy
Kim, Young-Suk; Al Otaiba, Stephanie; Wanzek, Jeanne; Gatlin, Brandy – Grantee Submission, 2015
We had three aims in the present study: (a) to examine the dimensionality of various evaluative approaches to scoring writing samples (e.g., quality, productivity, and curriculum-based measurement [CBM] writing scoring), (b) to investigate unique language and cognitive predictors of the identified dimensions, and (c) to examine the gender gap in the…
Descriptors: Writing (Composition), Gender Differences, Curriculum Based Assessment, Scoring