Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 4
Descriptor
Item Analysis: 5
Item Response Theory: 5
Goodness of Fit: 3
Achievement Tests: 2
Models: 2
Regression (Statistics): 2
Statistical Analysis: 2
Bayesian Statistics: 1
Comparative Analysis: 1
Context Effect: 1
Correlation: 1
Source
ETS Research Report Series: 1
Educational Measurement:…: 1
Educational Testing Service: 1
Journal of Educational and…: 1
Large-scale Assessments in…: 1
Author
Sinharay, Sandip: 5
Haberman, Shelby J.: 2
Johnson, Matthew S.: 2
Almond, Russell: 1
Guo, Hongwen: 1
Holland, Paul W.: 1
Steinhauer, Eric W.: 1
Sweeney, Sandra M.: 1
Yan, Duanli: 1
van Rijn, Peter W.: 1
Publication Type
Journal Articles: 4
Reports - Research: 3
Reports - Evaluative: 2
Education Level
Elementary Secondary Education: 1
Assessments and Surveys
National Assessment of…: 1
Trends in International…: 1
Sweeney, Sandra M.; Sinharay, Sandip; Johnson, Matthew S.; Steinhauer, Eric W. – Educational Measurement: Issues and Practice, 2022
The focus of this paper is on the empirical relationship between item difficulty and item discrimination. Two studies--an empirical investigation and a simulation study--were conducted to examine the association between item difficulty and item discrimination under classical test theory and item response theory (IRT), and the effects of the…
Descriptors: Correlation, Item Response Theory, Item Analysis, Difficulty Level
van Rijn, Peter W.; Sinharay, Sandip; Haberman, Shelby J.; Johnson, Matthew S. – Large-scale Assessments in Education, 2016
Latent regression models are used for score-reporting purposes in large-scale educational survey assessments such as the National Assessment of Educational Progress (NAEP) and Trends in International Mathematics and Science Study (TIMSS). One component of these models is based on item response theory. While there exists some research on assessment…
Descriptors: Goodness of Fit, Item Response Theory, Regression (Statistics), National Competency Tests
Guo, Hongwen; Sinharay, Sandip – Journal of Educational and Behavioral Statistics, 2011
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Descriptors: Testing Programs, Measurement, Item Analysis, Error of Measurement
Haberman, Shelby J.; Holland, Paul W.; Sinharay, Sandip – ETS Research Report Series, 2006
Bounds are established for log cross-product ratios (log odds ratios) involving pairs of items for item response models. First, expressions for bounds on log cross-product ratios are provided for unidimensional item response models in general. Then, explicit bounds are obtained for the Rasch model and the two-parameter logistic (2PL) model.…
Descriptors: Item Response Theory, Models, Goodness of Fit, Item Analysis
Sinharay, Sandip; Almond, Russell; Yan, Duanli – Educational Testing Service, 2004
Model checking is a crucial part of any statistical analysis. As educators tie models for testing to cognitive theory of the domains, there is a natural tendency to represent participant proficiencies with latent variables representing the presence or absence of the knowledge, skills, and proficiencies to be tested (Mislevy, Almond, Yan, &…
Descriptors: Statistical Analysis, Epistemology, Educational Assessment, Item Response Theory