Stefanie A. Wind; Benjamin Lugu; Yurou Wang – International Journal of Testing, 2025
Mokken Scale Analysis (MSA) is a nonparametric approach that offers exploratory tools for understanding the nature of item responses while emphasizing invariance requirements. MSA is often discussed as it relates to Rasch measurement theory, which also emphasizes invariance, but uses parametric models. Researchers who have compared and combined…
Descriptors: Item Response Theory, Scaling, Surveys, Evaluation Methods
Stefanie A. Wind; Benjamin Lugu – Applied Measurement in Education, 2024
Researchers who use measurement models for evaluation purposes often select models with stringent requirements, such as Rasch models, which are parametric. Mokken Scale Analysis (MSA) offers a theory-driven nonparametric modeling approach that may be more appropriate for some measurement applications. Researchers have discussed using MSA as a…
Descriptors: Item Response Theory, Data Analysis, Simulation, Nonparametric Statistics
Ryan M. Cook; Stefanie A. Wind – Measurement and Evaluation in Counseling and Development, 2024
The purpose of this article is to discuss reliability and precision through the lens of a modern measurement approach, item response theory (IRT). Reliability evidence in counseling is primarily generated using Classical Test Theory (CTT) approaches, although recent studies in the field have shown the benefits of using…
Descriptors: Item Response Theory, Measurement, Reliability, Accuracy
Matthew J. Madison; Stefanie A. Wind; Lientje Maas; Kazuhiro Yamaguchi; Sergio Haab – Journal of Educational Measurement, 2024
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or nonproficiency on specified latent characteristics. These models are well suited for providing diagnostic and actionable feedback to support intermediate and formative assessment efforts. Several DCMs have been developed…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, few studies have considered the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
Stefanie A. Wind; Yangmeng Xu – Educational Assessment, 2024
We explored three approaches to resolving or re-scoring constructed-response items in mixed-format assessments: rater agreement, person fit, and targeted double scoring (TDS). We used a simulation study to consider how the three approaches impact the psychometric properties of student achievement estimates, with an emphasis on person fit. We found…
Descriptors: Interrater Reliability, Error of Measurement, Evaluation Methods, Examiners