Yen, Shu Jing; Ochieng, Charles; Michaels, Hillary; Friedman, Greg – Online Submission, 2005
Year-to-year rater variation may result in constructed-response (CR) parameter changes, making CR items inappropriate for use in anchor sets for linking or equating. This study demonstrates how rater severity affected writing and reading scores. Rater adjustments were made to statewide results using an item response theory (IRT) methodology…
Descriptors: Test Items, Writing Tests, Reading Tests, Measures (Individuals)
Phelps, Richard P. – Online Submission, 2005
John J. Cannell's late-1980s "Lake Wobegon" reports suggested widespread deliberate educator manipulation of norm-referenced standardized test (NRT) administrations and results, producing artificial test score gains. The Cannell studies have been cited in education research since, but as evidence that high stakes (and not cheating or lax…
Descriptors: Testing Programs, Achievement Gains, Standardized Tests, Norm Referenced Tests