Showing all 7 results
Peer reviewed
Matta, Michael; Mercer, Sterett H.; Keller-Margulis, Milena A. – School Psychology, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Peer reviewed
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Reading and Writing: An Interdisciplinary Journal, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Mercer, Sterett H.; Cannon, Joanna E.; Squires, Bonita; Guo, Yue; Pinco, Ella – Canadian Journal of School Psychology, 2021
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be accurately used to computer score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1 to 12 who received 1:1 academic tutoring through a community-based organization…
Descriptors: Curriculum Based Assessment, Automation, Scoring, Writing Tests
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Grantee Submission, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Mercer, Sterett H.; Cannon, Joanna E.; Squires, Bonita; Guo, Yue; Pinco, Ella – Grantee Submission, 2021
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be accurately used to computer score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1-12 who received 1:1 academic tutoring through a community-based organization completed…
Descriptors: Curriculum Based Assessment, Automation, Scoring, Writing Tests
Peer reviewed
Reed, Deborah K.; Mercer, Sterett H. – School Psychology, 2023
Interim and summative assessments often are used to make decisions about student writing skills and needs for instruction, but the extent to which different raters and score types might introduce bias for some groups of students is largely unknown. To evaluate this possibility, we analyzed interim writing assessments and state summative test data…
Descriptors: Writing Tests, Summative Evaluation, Scoring, Bias
Peer reviewed
Mercer, Sterett H.; Martinez, Rebecca S.; Faust, Dennis; Mitchell, Rachel R. – School Psychology Quarterly, 2012
We investigated the criterion-related validity of four indicators of curriculum-based measurement in writing (WCBM) when using expository versus narrative writing prompts as compared to the validity of passage copying speed. Specifically, we compared criterion-related validity of production-dependent (total words written, correct word sequences),…
Descriptors: Expository Writing, Writing Evaluation, Curriculum Based Assessment, Writing Tests