Peer reviewed
ERIC Number: EJ1110012
Record Type: Journal
Publication Date: 2013-Oct
Pages: 41
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: 2330-8516
Available Date: N/A
The Impact of Sampling Approach on Population Invariance in Automated Scoring of Essays. Research Report. ETS RR-13-18
Zhang, Mo
ETS Research Report Series, Oct 2013
Many testing programs use automated scoring to grade essays. One issue in automated essay scoring that has not been examined adequately is population invariance and its causes. The primary purpose of this study was to investigate the impact of sampling in model calibration on the population invariance of automated scores. The study analyzed scores produced by the "e-rater"® scoring engine on a "GRE"® assessment data set. Results suggested that an equal-allocation, stratified-by-language sampling approach performed best at maximizing population invariance, whether human/e-rater agreement or differences in correlation patterns with external variables served as the evaluation criterion. Guidelines are given to assist practitioners in choosing a sampling design for model calibration. Potential causes of a lack of population invariance, study limitations, and directions for future research are discussed.
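For readers unfamiliar with the sampling design named in the abstract, the following is a minimal sketch of equal-allocation stratified sampling for a calibration set. It is illustrative only and is not the procedure used in the report: the DataFrame, the "language" column name, and the sample size are hypothetical, and the actual e-rater calibration pipeline is not described in this record.

```python
import pandas as pd

def equal_allocation_sample(df: pd.DataFrame, group_col: str,
                            total_n: int, random_state: int = 0) -> pd.DataFrame:
    """Draw an equal-allocation stratified sample: the same number of
    essays from each group (e.g., each language background), regardless
    of each group's share of the full population.

    Assumes every group contains at least total_n // n_groups essays.
    """
    groups = df[group_col].unique()
    per_group = total_n // len(groups)
    parts = [
        df[df[group_col] == g].sample(n=per_group, random_state=random_state)
        for g in groups
    ]
    return pd.concat(parts, ignore_index=True)

# Hypothetical usage: calibration_pool has one row per essay and a
# 'language' column identifying the examinee's language group.
# calibration_sample = equal_allocation_sample(calibration_pool, "language", 3000)
```

In contrast to proportional allocation, this design deliberately over-represents small language groups in the calibration sample, which is the property the study evaluates with respect to population invariance.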
Educational Testing Service. Rosedale Road, MS19-R Princeton, NJ 08541. Tel: 609-921-9000; Fax: 609-734-5410; e-mail: RDweb@ets.org; Web site: https://www.ets.org/research/policy_research_reports/ets
Publication Type: Journal Articles; Reports - Research
Education Level: Higher Education; Postsecondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Assessments and Surveys: Graduate Record Examinations
Grant or Contract Numbers: N/A
Author Affiliations: N/A