Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 4
Descriptor
Models: 5
College Entrance Examinations: 3
Correlation: 3
Evaluation Methods: 3
Essays: 2
Evaluation Research: 2
Graduate Study: 2
Item Analysis: 2
Item Response Theory: 2
Scoring: 2
Test Construction: 2
Author
Ramineni, Chaitanya: 2
Williamson, David M.: 2
Albano, Anthony D.: 1
Almond, Russell G.: 1
Attali, Yigal: 1
Breyer, F. Jay: 1
Bridgeman, Brent: 1
Davey, Tim: 1
Duchnowski, Matthew: 1
Embretson, Susan E.: 1
Gorin, Joanna S.: 1
Publication Type
Journal Articles: 4
Reports - Research: 3
Reports - Descriptive: 2
Speeches/Meeting Papers: 1
Education Level
Higher Education: 3
Postsecondary Education: 3
Assessments and Surveys
Graduate Record Examinations: 5
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
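The check-score design described in this abstract can be sketched in a few lines: the automated score confirms a single human rating, and a large discrepancy triggers a second human rating. The following is a minimal illustration only; the threshold, the "human score stands" policy, and the function name are assumptions, not values taken from the report.

    # Minimal sketch of a check-score workflow; the threshold and the
    # resolution policy are illustrative assumptions, not values from
    # Breyer et al. (2014).
    def resolve_essay_score(human: float, machine: float,
                            threshold: float = 1.0):
        """Return the final score, or None when the two scores are
        discrepant and a second human rating is needed."""
        if abs(human - machine) <= threshold:
            return human      # machine score confirms the human rating
        return None           # route the essay to a second human rater

    final = resolve_essay_score(4.0, 4.5)
    print("second rating needed" if final is None else f"score: {final}")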
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Automated scoring models for the "e-rater"® scoring engine were built and evaluated for the "GRE"® argument and issue-writing tasks. Prompt-specific, generic, and generic with prompt-specific intercept scoring models were built and evaluation statistics such as weighted kappas, Pearson correlations, standardized difference in…
Descriptors: Scoring, Test Scoring Machines, Automation, Models
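The evaluation statistics named in this abstract are standard agreement measures. A minimal sketch follows, assuming paired human and automated scores are available as arrays; the data are invented for illustration, and the standardized difference is shown in one common form (machine minus human mean, divided by the human-score standard deviation).

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.metrics import cohen_kappa_score

    # Invented paired scores for illustration only.
    human   = np.array([3, 4, 4, 5, 2, 3, 4, 5, 3, 4])
    machine = np.array([3, 4, 5, 5, 2, 3, 3, 4, 3, 4])

    # Quadratically weighted kappa: chance-corrected agreement that
    # penalizes large score disagreements more heavily than small ones.
    qwk = cohen_kappa_score(human, machine, weights="quadratic")

    # Pearson correlation between the two score series.
    r, _ = pearsonr(human, machine)

    # Standardized mean difference between machine and human scores.
    smd = (machine.mean() - human.mean()) / human.std(ddof=1)

    print(f"weighted kappa = {qwk:.3f}, r = {r:.3f}, SMD = {smd:.3f}")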
Albano, Anthony D. – Journal of Educational Measurement, 2013
In many testing programs it is assumed that the context or position in which an item is administered does not have a differential effect on examinee responses to the item. Violations of this assumption may bias item response theory estimates of item and person parameters. This study examines the potentially biasing effects of item position. A…
Descriptors: Test Items, Item Response Theory, Test Format, Questioning Techniques
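One common way to formalize the position effect this abstract describes is to add a position term to a Rasch-type model; the parameterization below is a generic sketch, not necessarily the one Albano uses:

    P(X_{ij} = 1 \mid \theta_j) =
        \frac{\exp(\theta_j - b_i - \delta\, p_{ij})}
             {1 + \exp(\theta_j - b_i - \delta\, p_{ij})}

Here \theta_j is examinee ability, b_i is item difficulty, p_{ij} is the position at which examinee j encounters item i, and \delta is the position effect. A positive \delta means an item functions as harder when it appears later in the test, and ignoring a nonzero \delta biases the b_i and \theta_j estimates, which is the concern the abstract raises.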
Mislevy, Robert J.; Steinberg, Linda S.; Almond, Russell G. – 1999
Tasks are the most visible element in an educational assessment. Their purpose, however, is to provide evidence about targets of inference that cannot be directly seen at all: what examinees know and can do, more broadly conceived than can be observed in the context of any particular set of tasks. This paper concerns issues in an assessment design…
Descriptors: Educational Assessment, Evaluation Methods, Higher Education, Models
Gorin, Joanna S.; Embretson, Susan E. – Applied Psychological Measurement, 2006
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to correctly solve a problem. Reading comprehension items have been more…
Descriptors: Difficulty Level, Test Items, Modeling (Psychology), Paragraph Composition
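Algorithmic item generation, as described in this abstract, enumerates item variants from combinations of processing-relevant features. A minimal sketch follows, with invented feature names; in actual studies the feature sets are derived from a cognitive model of the processing required to solve the item, not chosen ad hoc as here.

    import itertools

    # Hypothetical item features for illustration; real feature sets
    # come from a cognitive model of the targeted processing.
    features = {
        "passage_length": ["short", "long"],
        "vocabulary_level": ["common", "rare"],
        "question_type": ["literal", "inferential"],
    }

    # Each combination is a structural specification for one item variant.
    for combo in itertools.product(*features.values()):
        spec = dict(zip(features, combo))
        print(spec)

Because every item is built from a known feature combination, its difficulty can be modeled from the specification itself, which is the link to the difficulty modeling named in the descriptors.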