Peer Reviewed
ERIC Number: EJ1109680
Record Type: Journal
Publication Date: 2015-Jun
Pages: 30
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: 2330-8516
Evaluation of "e-rater"® for the "Praxis I"® Writing Test. Research Report. ETS RR-15-03
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.
ETS Research Report Series, Jun 2015
Automated scoring models were trained and evaluated for the essay task in the "Praxis I"® writing test. Prompt-specific and generic "e-rater"® scoring models were built, and evaluation statistics such as quadratic weighted kappa, Pearson correlation, and standardized differences in mean scores were examined to evaluate e-rater model performance against human scores. Performance of the scoring models was also evaluated across demographic subgroups using the same statistics. Additionally, correlations of automated scores with external measures were examined for validity evidence. Analyses were performed to establish appropriate agreement thresholds between human and e-rater scores for unusual essays and to examine the impact of using e-rater on operational scores and classification rates. The generic e-rater scoring model was recommended for operational use to produce contributory scores within a discrepancy threshold of 1.5 points of the human score.
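As a rough illustration of the agreement statistics named in the abstract, the Python sketch below computes quadratic weighted kappa, Pearson correlation, a standardized mean difference, and the 1.5-point discrepancy check on hypothetical human and e-rater score vectors. The sample data and the pooled-SD standardization convention are assumptions for illustration, not the report's operational procedure.

import numpy as np
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 2, 5, 4, 3, 4, 2])   # hypothetical human scores
erater = np.array([3, 4, 3, 4, 4, 3, 5, 2])  # hypothetical e-rater scores

# Quadratic weighted kappa: chance-corrected agreement with squared
# penalties for larger human/machine discrepancies.
qwk = cohen_kappa_score(human, erater, weights="quadratic")

# Pearson correlation between human and automated scores.
r = np.corrcoef(human, erater)[0, 1]

# Standardized difference in mean scores (divided here by the pooled SD;
# standardization conventions vary across reports, so this is an assumption).
pooled_sd = np.sqrt((human.var(ddof=1) + erater.var(ddof=1)) / 2)
std_diff = (erater.mean() - human.mean()) / pooled_sd

# Discrepancy-threshold check: essays where |human - e-rater| exceeds 1.5
# would be routed for additional human adjudication.
flagged = np.abs(human - erater) > 1.5

print(f"QWK={qwk:.3f}  Pearson r={r:.3f}  std. diff={std_diff:.3f}")
print("Essays exceeding threshold:", np.flatnonzero(flagged))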
Educational Testing Service. Rosedale Road, MS19-R Princeton, NJ 08541. Tel: 609-921-9000; Fax: 609-734-5410; e-mail: RDweb@ets.org; Web site: https://www.ets.org/research/policy_research_reports/ets
Publication Type: Journal Articles; Reports - Research
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: N/A
Identifiers - Assessments and Surveys: Praxis Series
Grant or Contract Numbers: N/A