Showing all 8 results
Peer reviewed
Attali, Yigal; Lewis, Will; Steier, Michael – Language Testing, 2013
Automated essay scoring can produce reliable scores that are highly correlated with human scores, but is limited in its evaluation of content and other higher-order aspects of writing. The increased use of automated essay scoring in high-stakes testing underscores the need for human scoring that is focused on higher-order aspects of writing. This…
Descriptors: Scoring, Essay Tests, Reliability, High Stakes Tests
Peer reviewed
Attali, Yigal; Saldivia, Luis; Jackson, Carol; Schuppan, Fred; Wanamaker, Wilbur – ETS Research Report Series, 2014
Previous investigations of the ability of content experts and test developers to estimate item difficulty have, for the most part, produced disappointing results. These investigations were based on a noncomparative method of independently rating the difficulty of items. In this article, we argue that, by eliciting comparative judgments of…
Descriptors: Test Items, Difficulty Level, Comparative Analysis, College Entrance Examinations
Peer reviewed
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. This study explored the value added of reporting four trait scores for each of these two tasks over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Peer reviewed
Attali, Yigal; Powers, Don – Educational and Psychological Measurement, 2010
Two experiments examine the psychometric effects of providing immediate feedback on the correctness of answers to open-ended questions, and allowing participants to revise their answers following feedback. Participants answering verbal and math questions are able to correct many of their initial incorrect answers, resulting in higher revised…
Descriptors: Feedback (Response), Psychometrics, Test Anxiety, Error Correction
Peer reviewed
Attali, Yigal; Powers, Don; Hawthorn, John – ETS Research Report Series, 2008
Registered examinees for the GRE® General Test answered open-ended sentence-completion items. For half of the items, participants received immediate feedback on the correctness of their answers and up to two opportunities to revise their answers. A significant feedback-and-revision effect was found. Participants were able to correct many of their…
Descriptors: College Entrance Examinations, Graduate Study, Sentences, Psychometrics
Peer reviewed
Katz, Irvin R.; Elliot, Norbert; Attali, Yigal; Scharf, Davida; Powers, Donald; Huey, Heather; Joshi, Kamal; Briller, Vladimir – ETS Research Report Series, 2008
This study presents an investigation of information literacy as defined by the ETS iSkills™ assessment and by the New Jersey Institute of Technology (NJIT) Information Literacy Scale (ILS). As two related but distinct measures, both iSkills and the ILS were used with undergraduate students at NJIT during the spring 2006 semester. Undergraduate…
Descriptors: Information Literacy, Information Skills, Skill Analysis, Case Studies
Peer reviewed
Attali, Yigal; Burstein, Jill – Journal of Technology, Learning, and Assessment, 2006
E-rater® has been used by the Educational Testing Service for automated essay scoring since 1999. This paper describes a new version of e-rater (V.2) that is different from other automated essay scoring systems in several important respects. The main innovations of e-rater V.2 are a small, intuitive, and meaningful set of features used for…
Descriptors: Educational Testing, Test Scoring Machines, Scoring, Writing Evaluation
Peer reviewed
Attali, Yigal; Burstein, Jill – ETS Research Report Series, 2005
The e-rater® system has been used by ETS for automated essay scoring since 1999. This paper describes a new version of e-rater (v.2.0) that differs from the previous one (v.1.3) with regard to the feature set and model building approach. The paper describes the new version, compares the new and previous versions in terms of performance, and…
Descriptors: Essay Tests, Automation, Scoring, Comparative Analysis