Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 0 |
Since 2016 (last 10 years) | 3 |
Since 2006 (last 20 years) | 12 |
Author
Hambleton, Ronald K. | 47 |
Rogers, H. Jane | 6 |
Jones, Russell W. | 4 |
Xing, Dehui | 4 |
Cook, Linda L. | 3 |
Han, Kyung T. | 3 |
Sireci, Stephen G. | 3 |
Wells, Craig S. | 3 |
Liang, Tie | 2 |
Rizavi, Saba M. | 2 |
Swaminathan, Hariharan | 2 |
Education Level
Higher Education | 1 |
Audience
Researchers | 5 |
Location
Estonia | 1 |
Netherlands | 1 |
Assessments and Surveys
National Assessment of… | 2 |
California Achievement Tests | 1 |
Graduate Management Admission… | 1 |
Medical College Admission Test | 1 |
United States Medical… | 1 |
Yoo, Hanwook; Hambleton, Ronald K. – Educational Measurement: Issues and Practice, 2019
Item analysis is an integral part of operational test development and is typically conducted within two popular statistical frameworks: classical test theory (CTT) and item response theory (IRT). In this digital ITEMS module, Hanwook Yoo and Ronald K. Hambleton provide an accessible overview of operational item analysis approaches within these…
Descriptors: Item Analysis, Item Response Theory, Guidelines, Test Construction
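The module above contrasts item analysis under CTT and IRT; a minimal sketch of the classical side (item difficulty as proportion correct and corrected item-total discrimination) is shown below. The response matrix and values are illustrative, not taken from the module.

```python
import numpy as np

# Illustrative 0/1 response matrix: rows = examinees, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])

# Classical item difficulty: proportion of examinees answering correctly.
p_values = responses.mean(axis=0)

# Corrected item-total (point-biserial) discrimination:
# correlate each item with the total score excluding that item.
total = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (p, r) in enumerate(zip(p_values, discrimination)):
    print(f"Item {j + 1}: p = {p:.2f}, corrected r_pb = {r:.2f}")
```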
Yavuz, Guler; Hambleton, Ronald K. – Educational and Psychological Measurement, 2017
The application of multidimensional item response theory (MIRT) modeling procedures depends on the quality of the parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…
Descriptors: Item Response Theory, Models, Comparative Analysis, Computer Software
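Parameter-recovery studies of this kind typically compare the generating (true) item parameters with the estimates each program returns. A hedged sketch of the usual summary statistics (bias, RMSE, correlation) follows; the arrays are placeholders, not output from BMIRT or flexMIRT.

```python
import numpy as np

def recovery_summary(true_params: np.ndarray, est_params: np.ndarray) -> dict:
    """Bias, RMSE, and correlation between true and estimated parameters."""
    diff = est_params - true_params
    return {
        "bias": diff.mean(),
        "rmse": np.sqrt((diff ** 2).mean()),
        "correlation": np.corrcoef(true_params, est_params)[0, 1],
    }

# Placeholder values standing in for one replication of a simulation study.
true_b = np.array([-1.2, -0.4, 0.0, 0.6, 1.3])
est_b = np.array([-1.1, -0.5, 0.1, 0.7, 1.2])
print(recovery_summary(true_b, est_b))
```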
Clauser, Jerome C.; Hambleton, Ronald K.; Baldwin, Peter – Educational and Psychological Measurement, 2017
The Angoff standard setting method relies on content experts to review exam items and make judgments about the performance of the minimally proficient examinee. Unfortunately, at times content experts may have gaps in their understanding of specific exam content. These gaps are particularly likely to occur when the content domain is broad and/or…
Descriptors: Scores, Item Analysis, Classification, Decision Making
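In the unmodified Angoff procedure, each judge estimates the probability that a minimally proficient examinee would answer each item correctly, and the cut score is typically the sum of those probabilities averaged across judges. A small illustrative sketch (with invented ratings, not data from this study):

```python
import numpy as np

# Illustrative Angoff ratings: rows = judges, columns = items.
# Each entry is a judge's estimate of P(correct) for a minimally
# proficient examinee.
ratings = np.array([
    [0.60, 0.75, 0.40, 0.85],
    [0.55, 0.70, 0.50, 0.80],
    [0.65, 0.80, 0.45, 0.90],
])

# Each judge's implied cut score is the sum of their item ratings;
# the panel cut score is the mean of those sums.
judge_cut_scores = ratings.sum(axis=1)
panel_cut_score = judge_cut_scores.mean()

print("Judge cut scores:", judge_cut_scores)        # [2.60 2.55 2.80]
print("Panel-recommended cut score:", round(panel_cut_score, 2))
```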
Wells, Craig S.; Hambleton, Ronald K.; Kirkpatrick, Robert; Meng, Yu – Applied Measurement in Education, 2014
The purpose of the present study was to develop and evaluate two procedures for flagging consequential item parameter drift (IPD) in an operational testing program. The first procedure was based on flagging items that exhibit a meaningful magnitude of IPD using a critical value defined to represent barely tolerable IPD. The second procedure…
Descriptors: Test Items, Test Bias, Equated Scores, Item Response Theory
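The first procedure above flags items whose drift exceeds a critical value representing barely tolerable IPD. As a hedged illustration (the threshold and parameter values are invented, not those used in the study), flagging items by the absolute change in their difficulty parameter between administrations might look like this:

```python
import numpy as np

# Illustrative b-parameter estimates for the same anchor items
# at two administrations (values are invented).
b_time1 = np.array([-0.8, 0.1, 0.5, 1.2, -0.3])
b_time2 = np.array([-0.7, 0.6, 0.5, 1.3, -0.2])

# Critical value standing in for "barely tolerable" drift.
CRITICAL_DRIFT = 0.3

drift = np.abs(b_time2 - b_time1)
flagged = np.where(drift > CRITICAL_DRIFT)[0]

for j in flagged:
    print(f"Item {j + 1} flagged: |drift| = {drift[j]:.2f} > {CRITICAL_DRIFT}")
```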
Han, Kyung T.; Wells, Craig S.; Hambleton, Ronald K. – Practical Assessment, Research & Evaluation, 2015
In item response theory test scaling/equating with the three-parameter model, the scaling coefficients A and B have no impact on the c-parameter estimates of the test items, since the c-parameter estimates are not adjusted in the scaling/equating procedure. The main research question in this study concerned how serious the consequences would be if…
Descriptors: Item Response Theory, Monte Carlo Methods, Scaling, Test Items
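The point about the c-parameter follows from the form of the linear scale transformation in the three-parameter model: A and B rescale the a- and b-parameters (and abilities), while the lower asymptote is a probability and is left unchanged. A minimal sketch, with invented parameter values:

```python
def transform_3pl(a, b, c, A, B):
    """Place 3PL item parameters on a new scale defined by theta* = A*theta + B."""
    a_new = a / A        # discrimination shrinks or grows inversely with A
    b_new = A * b + B    # difficulty is rescaled and shifted
    c_new = c            # pseudo-guessing is a probability; it is not adjusted
    return a_new, b_new, c_new

# Invented item parameters and scaling coefficients.
print(transform_3pl(a=1.2, b=0.5, c=0.2, A=1.1, B=-0.3))
# approximately (1.09, 0.25, 0.2): only a and b change; c stays 0.2
```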
Keller, Lisa A.; Hambleton, Ronald K. – Journal of Educational Measurement, 2013
Because recent research on equating methodologies indicates that some methods may be more susceptible to the accumulation of equating error over multiple administrations, the sustainability of several item response theory methods of equating over time was investigated. In particular, the paper focuses on two equating methodologies: fixed common…
Descriptors: Item Response Theory, Scaling, Test Format, Equated Scores
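Linking methods of this kind typically estimate scaling coefficients from the common (anchor) items at each administration, for example with the mean/sigma method applied to the anchor items' b-parameters. A hedged sketch of that step (not the study's own procedure, and with invented values):

```python
import numpy as np

def mean_sigma_coefficients(b_new_form, b_base_form):
    """Mean/sigma linking: find A, B so that A*b_new + B matches the base scale."""
    A = np.std(b_base_form, ddof=1) / np.std(b_new_form, ddof=1)
    B = np.mean(b_base_form) - A * np.mean(b_new_form)
    return A, B

# Invented anchor-item difficulty estimates from two calibrations.
b_base = np.array([-1.0, -0.2, 0.4, 1.1])
b_new = np.array([-0.8, 0.0, 0.5, 1.3])

A, B = mean_sigma_coefficients(b_new, b_base)
print(f"A = {A:.3f}, B = {B:.3f}")
```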
Liang, Tie; Wells, Craig S.; Hambleton, Ronald K. – Journal of Educational Measurement, 2014
As item response theory has become more widely applied, investigating the fit of a parametric model has become an important part of the measurement process. Promising solutions for detecting model misfit in IRT are still lacking. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…
Descriptors: Item Response Theory, Measurement Techniques, Nonparametric Statistics, Models
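RISE-type statistics quantify the discrepancy between a nonparametric (e.g., kernel-smoothed) estimate of an item response function and the fitted parametric curve across the ability scale. A hedged sketch of that idea, not necessarily the exact formulation of Douglas and Cohen, with invented curves:

```python
import numpy as np

# Evaluation points on the ability scale and their weights
# (e.g., proportions of examinees near each point).
theta = np.linspace(-3, 3, 13)
weights = np.exp(-0.5 * theta ** 2)
weights /= weights.sum()

# Parametric (2PL) curve vs. an invented nonparametric estimate.
p_parametric = 1 / (1 + np.exp(-1.2 * (theta - 0.3)))
p_nonparametric = p_parametric + 0.04 * np.sin(theta)  # placeholder misfit

# Root integrated (weighted) squared error between the two curves.
rise = np.sqrt(np.sum(weights * (p_nonparametric - p_parametric) ** 2))
print(f"RISE-type index: {rise:.4f}")
```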
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K. – Applied Psychological Measurement, 2013
DIMPACK Version 1.0, software for assessing test dimensionality based on a nonparametric conditional covariance approach, is reviewed. The software was originally distributed by Assessment Systems Corporation and can now be freely accessed online. It consists of Windows-based interfaces for three components: DIMTEST, DETECT, and CCPROX/HAC, which…
Descriptors: Item Response Theory, Nonparametric Statistics, Statistical Analysis, Computer Software
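The conditional covariance approach underlying DIMTEST and DETECT examines covariances between item pairs after conditioning on a proxy for overall ability, typically the rest score; covariances near zero are consistent with unidimensionality. A small illustrative sketch of that core computation (not DIMPACK's implementation), using simulated placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated unidimensional 0/1 responses: 500 examinees, 6 items.
theta = rng.normal(size=500)
b = np.linspace(-1.0, 1.0, 6)                      # item difficulties
prob = 1 / (1 + np.exp(-(theta[:, None] - b)))     # 1PL response probabilities
responses = (rng.random((500, 6)) < prob).astype(int)

def conditional_covariance(resp, i, j):
    """Covariance of items i and j averaged over rest-score groups."""
    rest = resp.sum(axis=1) - resp[:, i] - resp[:, j]
    covs, sizes = [], []
    for s in np.unique(rest):
        group = resp[rest == s]
        if len(group) > 1:
            covs.append(np.cov(group[:, i], group[:, j])[0, 1])
            sizes.append(len(group))
    return np.average(covs, weights=sizes)

print(f"Conditional covariance of items 1 and 2: "
      f"{conditional_covariance(responses, 0, 1):.4f}")
```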
Lyren, Per-Erik; Hambleton, Ronald K. – International Journal of Testing, 2011
The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…
Descriptors: Evidence, Test Items, Ability Grouping, Item Response Theory
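The equivalent-groups design assumes that the groups taking the two forms are randomly equivalent in ability, so checking the assumption amounts to comparing the two groups' score or ability distributions. A hedged sketch of one simple check, a standardized mean difference on invented scores (not the evidence examined in this study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented raw scores for the two test-taker groups.
scores_form_x = rng.normal(loc=20.0, scale=5.0, size=1000)
scores_form_y = rng.normal(loc=20.8, scale=5.0, size=1000)

# Standardized mean difference as a rough check of the
# equal-ability-distribution assumption.
pooled_sd = np.sqrt((scores_form_x.var(ddof=1) + scores_form_y.var(ddof=1)) / 2)
smd = (scores_form_x.mean() - scores_form_y.mean()) / pooled_sd
print(f"Standardized mean difference: {smd:.3f}")
```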
Liang, Tie; Han, Kyung T.; Hambleton, Ronald K. – Applied Psychological Measurement, 2009
This article discusses ResidPlots-2, a software package that provides a powerful tool for IRT graphical residual analyses. ResidPlots-2 consists of two components: one for computing residual statistics and another for communicating with users and plotting the residual graphs. The features of the ResidPlots-2 software are…
Descriptors: Computer Software, Statistics, Item Response Theory, Graphs
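Graphical IRT residual analysis generally bins examinees by ability, compares the observed proportion correct in each bin with the model-implied probability, and plots standardized residuals. A minimal sketch of that computation (not ResidPlots-2 code), using an invented 2PL item and simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented abilities and a 2PL item (a = 1.1, b = 0.2).
theta = rng.normal(size=2000)
p_model = 1 / (1 + np.exp(-1.1 * (theta - 0.2)))
responses = (rng.random(2000) < p_model).astype(int)

# Bin examinees by ability and compute a standardized residual per bin.
edges = np.linspace(-3, 3, 11)
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (theta >= lo) & (theta < hi)
    n = in_bin.sum()
    if n == 0:
        continue
    observed = responses[in_bin].mean()
    expected = p_model[in_bin].mean()
    resid = (observed - expected) / np.sqrt(expected * (1 - expected) / n)
    print(f"[{lo:+.1f}, {hi:+.1f}): n={n:4d}  std. residual = {resid:+.2f}")
```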
Hambleton, Ronald K.; Swaminathan, H. – 1985
Comments are made on the review papers presented by six Dutch psychometricians: Ivo Molenaar, Wim van der Linden, Ed Roskam, Arnold Van den Wollenberg, Gideon Mellenbergh, and Dato de Gruijter. Molenaar has embraced a pragmatic viewpoint on Bayesian methods, using both empirical and pure approaches to solve educational research problems. Molenaar…
Descriptors: Bayesian Statistics, Decision Making, Elementary Secondary Education, Foreign Countries
Monahan, Patrick O.; Stump, Timothy E.; Finch, Holmes; Hambleton, Ronald K. – Applied Psychological Measurement, 2007
DETECT is a nonparametric "full" dimensionality assessment procedure that clusters dichotomously scored items into dimensions and provides a DETECT index of magnitude of multidimensionality. Four factors (test length, sample size, item response theory [IRT] model, and DETECT index) were manipulated in a Monte Carlo study of bias, standard error,…
Descriptors: Test Length, Sample Size, Monte Carlo Methods, Geometric Concepts
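In a Monte Carlo study such as this one, bias and standard error are computed across replications of the simulated statistic. A generic sketch of those summaries (the replication values are invented, not DETECT output):

```python
import numpy as np

def monte_carlo_summary(estimates: np.ndarray, true_value: float) -> dict:
    """Bias and empirical standard error of an estimator across replications."""
    return {
        "bias": estimates.mean() - true_value,
        "standard_error": estimates.std(ddof=1),
    }

# Invented index estimates from 10 replications of one simulation condition.
replicated_estimates = np.array([0.42, 0.39, 0.45, 0.41, 0.38,
                                 0.44, 0.40, 0.43, 0.37, 0.46])
print(monte_carlo_summary(replicated_estimates, true_value=0.40))
```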
Hambleton, Ronald K.; Xing, Dehui – Applied Measurement in Education, 2006
Now that many credentialing exams are being routinely administered by computer, new computer-based test designs, along with item response theory models, are being aggressively researched to identify specific designs that can increase the decision consistency and accuracy of pass-fail decisions. The purpose of this study was to investigate the…
Descriptors: Test Construction, Objective Tests, Item Response Theory, Feedback
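Decision consistency and accuracy for pass/fail decisions are usually summarized from a cross-classification: consistency compares classifications on two parallel or replicated forms, while accuracy compares the observed classification with the examinee's true status. A hedged sketch with invented classifications, not the designs evaluated in this study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented pass/fail (True/False) decisions from two parallel computer-based
# forms, plus the simulated "true" status of each examinee.
true_status = rng.random(5000) < 0.7
form_1 = true_status ^ (rng.random(5000) < 0.1)   # ~10% classification noise
form_2 = true_status ^ (rng.random(5000) < 0.1)

decision_consistency = np.mean(form_1 == form_2)
decision_accuracy = np.mean(form_1 == true_status)

print(f"Decision consistency: {decision_consistency:.3f}")
print(f"Decision accuracy:    {decision_accuracy:.3f}")
```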
Hambleton, Ronald K.; Cook, Linda L. – 1978
The purpose of the present research was to study, systematically, the "goodness-of-fit" of the one-, two-, and three-parameter logistic models. We studied, using computer-simulated test data, the effects of four variables: variation in item discrimination parameters, the average value of the pseudo-chance level parameters, test length,…
Descriptors: Career Development, Difficulty Level, Goodness of Fit, Item Analysis
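For reference, the three logistic models compared here differ only in which item parameters are free: the three-parameter response function is P(theta) = c + (1 - c) / (1 + exp(-D * a * (theta - b))), with the two-parameter model obtained by fixing c = 0 and the one-parameter model by additionally fixing a common discrimination. A small sketch (parameter values are illustrative):

```python
import math

def logistic_icc(theta, b, a=1.0, c=0.0, D=1.7):
    """1-, 2-, or 3-parameter logistic item characteristic curve.

    1PL: leave a and c at their defaults (common a, c = 0).
    2PL: supply a, keep c = 0.
    3PL: supply a and c (the pseudo-chance level).
    """
    return c + (1 - c) / (1 + math.exp(-D * a * (theta - b)))

theta = 0.0
print("1PL:", round(logistic_icc(theta, b=0.5), 3))
print("2PL:", round(logistic_icc(theta, b=0.5, a=1.4), 3))
print("3PL:", round(logistic_icc(theta, b=0.5, a=1.4, c=0.2), 3))
```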
Swaminathan, Hariharan; Hambleton, Ronald K.; Sireci, Stephen G.; Xing, Dehui; Rizavi, Saba M. – 2003
The primary objective of this study was to investigate how incorporating prior information improves estimation of item parameters in two small samples. The factors that were investigated were sample size and the type of prior information. To investigate the accuracy with which item parameters in the Law School Admission Test (LSAT) are estimated,…
Descriptors: Estimation (Mathematics), Item Response Theory, Sample Size, Sampling
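Incorporating prior information in item calibration typically means maximizing a posterior rather than the likelihood alone, i.e., adding log-prior terms for the item parameters to the item log-likelihood. A hedged sketch of that idea for a single 2PL item; the priors, data, and sample size are illustrative and are not those used with the LSAT in this study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)

# Small invented calibration sample for one 2PL item.
theta = rng.normal(size=80)
p_true = 1 / (1 + np.exp(-1.0 * (theta - 0.3)))
y = (rng.random(80) < p_true).astype(int)

def negative_log_posterior(params):
    log_a, b = params           # log scale keeps discrimination positive
    a = np.exp(log_a)
    p = 1 / (1 + np.exp(-a * (theta - b)))
    p = np.clip(p, 1e-9, 1 - 1e-9)          # numerical safety
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    # Illustrative priors: log a ~ N(0, 0.5), b ~ N(0, 1).
    log_prior = norm.logpdf(log_a, 0, 0.5) + norm.logpdf(b, 0, 1)
    return -(log_lik + log_prior)

result = minimize(negative_log_posterior, x0=[0.0, 0.0])
print("MAP estimates: a =", round(float(np.exp(result.x[0])), 3),
      " b =", round(float(result.x[1]), 3))
```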