Publication Date
  In 2025: 12
  Since 2024: 187
  Since 2021 (last 5 years): 818
  Since 2016 (last 10 years): 1951
  Since 2006 (last 20 years): 4074
Descriptor
  Item Response Theory: 5553
  Test Items: 1817
  Foreign Countries: 1196
  Models: 1148
  Psychometrics: 918
  Scores: 782
  Comparative Analysis: 761
  Test Construction: 750
  Simulation: 740
  Statistical Analysis: 659
  Difficulty Level: 570
Author
  Sinharay, Sandip: 48
  Wilson, Mark: 45
  Cohen, Allan S.: 43
  Meijer, Rob R.: 43
  Tindal, Gerald: 42
  Wang, Wen-Chung: 40
  Alonzo, Julie: 37
  Ferrando, Pere J.: 36
  Cai, Li: 35
  van der Linden, Wim J.: 35
  Glas, Cees A. W.: 34
Location
  Turkey: 94
  Australia: 89
  Germany: 79
  United States: 74
  Netherlands: 68
  Taiwan: 59
  Indonesia: 53
  China: 51
  Canada: 49
  Japan: 38
  Florida: 37
What Works Clearinghouse Rating
  Meets WWC Standards without Reservations: 4
  Meets WWC Standards with or without Reservations: 4
Kuan-Yu Jin; Thomas Eckes – Educational and Psychological Measurement, 2024
Insufficient effort responding (IER) refers to a lack of effort when answering survey or questionnaire items. Such items typically offer more than two ordered response categories, with Likert-type scales as the most prominent example. The underlying assumption is that the successive categories reflect increasing levels of the latent variable…
Descriptors: Item Response Theory, Test Items, Test Wiseness, Surveys
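The assumption described in the abstract, that successive categories reflect increasing levels of the latent variable, is typically formalized with an ordered-category IRT model. As a sketch only (the snippet does not state which model the authors use), Samejima's graded response model writes the cumulative category probabilities for item j with K_j categories as

  P(X_{ij} \ge k \mid \theta_i) = \frac{\exp\{a_j(\theta_i - b_{jk})\}}{1 + \exp\{a_j(\theta_i - b_{jk})\}}, \qquad k = 1, \ldots, K_j - 1,

with P(X_{ij} = k) = P(X_{ij} \ge k) - P(X_{ij} \ge k + 1) and ordered thresholds b_{j1} < \cdots < b_{j,K_j-1}; insufficient effort responding shows up as response behavior that is not governed by \theta at all.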
Huan Liu – ProQuest LLC, 2024
In many large-scale testing programs, examinees are categorized into different performance levels. These classifications are then used to make high-stakes decisions about examinees in contexts such as licensure, certification, and educational assessment. Numerous approaches to estimating the consistency and accuracy of this…
Descriptors: Classification, Accuracy, Item Response Theory, Decision Making
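As background to the classification consistency and accuracy mentioned here (the snippet does not say which estimators the dissertation compares), a common IRT-based formulation works as follows. Cut scores partition the ability scale into H performance levels; let p_{ih} be the model-implied probability that examinee i would be placed in level h on a single administration, and h_i^* the level indicated by the examinee's ability estimate. Then

  \mathrm{accuracy}_i = p_{i, h_i^*}, \qquad \mathrm{consistency}_i = \sum_{h=1}^{H} p_{ih}^2,

and marginal indices are obtained by averaging these quantities over examinees.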
Nana Kim; Daniel M. Bolt – Journal of Educational and Behavioral Statistics, 2024
Some previous studies suggest that response times (RTs) on rating scale items can be informative about the content trait, but a more recent study suggests they may also be reflective of response styles. The latter result raises questions about the possible consideration of RTs for content trait estimation, as response styles are generally viewed…
Descriptors: Item Response Theory, Reaction Time, Response Style (Tests), Psychometrics

Marcelo Andrade da Silva; A. Corinne Huggins-Manley; Jorge Luis Bazán; Amber Benedict – Grantee Submission, 2024
A Q-matrix is a binary matrix that defines the relationship between items and latent variables and is widely used in diagnostic classification models (DCMs), and can also be adopted in multidimensional item response theory (MIRT) models. The construction process of the Q-matrix is typically carried out by experts in the subject area of the items…
Descriptors: Q Methodology, Matrices, Item Response Theory, Educational Assessment
Marcelo Andrade da Silva; A. Corinne Huggins-Manley; Jorge Luis Bazán; Amber Benedict – Applied Measurement in Education, 2024
A Q-matrix is a binary matrix that defines the relationship between items and latent variables and is widely used in diagnostic classification models (DCMs), and can also be adopted in multidimensional item response theory (MIRT) models. The construction process of the Q-matrix is typically carried out by experts in the subject area of the items…
Descriptors: Q Methodology, Matrices, Item Response Theory, Educational Assessment
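Both records above describe the Q-matrix as a binary item-by-attribute matrix. A minimal sketch of that structure is below; the items, attributes, and the Q-masking of a MIRT slope matrix are illustrative assumptions, not the authors' construction or validation procedure.

```python
import numpy as np

# Hypothetical Q-matrix: 4 items (rows) by 3 latent attributes (columns).
# Q[j, k] = 1 means item j measures attribute k; 0 means it does not.
Q = np.array([
    [1, 0, 0],   # item 1 measures attribute A only
    [1, 1, 0],   # item 2 measures attributes A and B
    [0, 1, 1],   # item 3 measures attributes B and C
    [0, 0, 1],   # item 4 measures attribute C only
])

# In a compensatory MIRT model the Q-matrix typically acts as a mask on the
# slope (discrimination) matrix: only slopes flagged in Q are freely estimated.
rng = np.random.default_rng(0)
A_free = rng.uniform(0.5, 2.0, size=Q.shape)  # unconstrained candidate slopes
A = A_free * Q                                # Q-constrained slope matrix
print(A)
```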
Jochen Ranger; Christoph König; Benjamin W. Domingue; Jörg-Tobias Kuhn; Andreas Frey – Journal of Educational and Behavioral Statistics, 2024
In the existing multidimensional extensions of the log-normal response time (LNRT) model, the log response times are decomposed into a linear combination of several latent traits. These models are fully compensatory, as low levels on some traits can be counterbalanced by high levels on others. We propose an alternative multidimensional extension…
Descriptors: Models, Statistical Distributions, Item Response Theory, Response Rates (Questionnaires)
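For reference, the unidimensional log-normal response time model underlying the extensions discussed here writes the log response time of person i on item j as

  \log T_{ij} = \beta_j - \tau_i + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \alpha_j^{-2}),

where \tau_i is the person's speed, \beta_j the item's time intensity, and \alpha_j its precision. The fully compensatory multidimensional extensions mentioned in the abstract replace \tau_i with a weighted sum \mathbf{a}_j^{\top}\boldsymbol{\tau}_i, so a deficit on one speed trait can be offset by a surplus on another; the alternative the authors propose is not described in the snippet.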
Javed Iqbal; Tanweer Ul Islam – Educational Research and Evaluation, 2024
Economic efficiency demands accurate assessment of individual ability for selection purposes. This study investigates Classical Test Theory (CTT) and Item Response Theory (IRT) for estimating true ability and ranking individuals. Two Monte Carlo simulations and real data analyses were conducted. Results suggest a slight advantage for IRT, but…
Descriptors: Item Response Theory, Monte Carlo Methods, Ability, Statistical Analysis
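A minimal sketch of the kind of Monte Carlo comparison described, under assumptions that are not taken from the study itself: a 2PL data-generating model, the raw sum score as the CTT ability estimate, and a simple grid-based EAP (with known item parameters) as the IRT estimate.

```python
import numpy as np
from scipy.stats import spearmanr, norm

rng = np.random.default_rng(1)
n_persons, n_items = 1000, 30

# Generate 2PL item parameters, true abilities, and item responses.
a = rng.uniform(0.8, 2.0, n_items)        # discriminations
b = rng.normal(0.0, 1.0, n_items)         # difficulties
theta = rng.normal(0.0, 1.0, n_persons)   # true abilities
p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))
x = rng.binomial(1, p)

# CTT ability estimate: raw sum score.
sum_score = x.sum(axis=1)

# IRT ability estimate: EAP on a grid, treating item parameters as known.
grid = np.linspace(-4, 4, 81)
pg = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))
loglik = x @ np.log(pg).T + (1 - x) @ np.log(1 - pg).T   # persons x grid points
post = np.exp(loglik) * norm.pdf(grid)
eap = (post * grid).sum(axis=1) / post.sum(axis=1)

# Compare how well each estimate recovers the true ability ranking.
print("sum score rank correlation:", spearmanr(theta, sum_score)[0])
print("IRT EAP rank correlation:  ", spearmanr(theta, eap)[0])
```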
Jiaying Xiao; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Accurate item parameters and standard errors (SEs) are crucial for many multidimensional item response theory (MIRT) applications. A recent study proposed the Gaussian Variational Expectation Maximization (GVEM) algorithm to improve computational efficiency and estimation accuracy (Cho et al., 2021). However, the SE estimation procedure has yet to…
Descriptors: Error of Measurement, Models, Evaluation Methods, Item Analysis
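For context, the multidimensional 2PL model to which such estimation algorithms are typically applied (the GVEM standard-error procedure itself is not shown in the snippet) is

  P(X_{ij} = 1 \mid \boldsymbol{\theta}_i) = \frac{1}{1 + \exp\{-(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)\}},

with slope vector \mathbf{a}_j and intercept d_j for item j; standard errors for these item parameters are usually derived from an (approximate) information matrix.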
Reeta Neittaanmäki; Iasonas Lamprianou – Language Testing, 2024
This article focuses on rater severity and consistency and their relation to major changes in the rating system in a high-stakes testing context. The study is based on longitudinal data collected from 2009 to 2019 from the second language (L2) Finnish speaking subtest in the National Certificates of Language Proficiency in Finland. We investigated…
Descriptors: Foreign Countries, Interrater Reliability, Evaluators, Item Response Theory
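Rater severity of the kind examined here is often modeled with a many-facet Rasch model; whether this particular study uses that model cannot be determined from the snippet. In that model, the log-odds of examinee n receiving category k rather than k-1 from rater j on task i is

  \log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k,

where \theta_n is examinee ability, \delta_i task difficulty, \alpha_j rater severity, and \tau_k the threshold for category k; changes in \alpha_j across administrations can then be tracked longitudinally.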
Ibrahim Kasujja; Hugo Melgar-Quinonez; Joweria Nambooze – SAGE Open, 2023
Background: Evaluating school feeding programs requires measuring food insecurity, a more objective indicator, within schools in low-income countries. The Global Child Nutrition Foundation (GCNF) uses subjective indicators to report school feeding coverage rates across many countries that participate in the global survey of school meal…
Descriptors: Hunger, Food, Program Effectiveness, Psychometrics
Kirya, Kent Robert; Mashood, Kalarattu Kandiyi; Yadav, Lakhan Lal – Journal of Turkish Science Education, 2022
In this study, we administered and evaluated circular motion concept question items with a view to developing an inventory suitable for the Ugandan context. Before the circular motion concept items were administered, six physics experts and ten undergraduate physics students carried out face and content validation. One hundred eighteen undergraduate…
Descriptors: Motion, Scientific Concepts, Test Construction, Test Items
Paganin, Sally; Paciorek, Christopher J.; Wehrhahn, Claudia; Rodríguez, Abel; Rabe-Hesketh, Sophia; de Valpine, Perry – Journal of Educational and Behavioral Statistics, 2023
Item response theory (IRT) models typically rely on a normality assumption for subject-specific latent traits, which is often unrealistic in practice. Semiparametric extensions based on Dirichlet process mixtures (DPMs) offer a more flexible representation of the unknown distribution of the latent trait. However, the use of such models in the IRT…
Descriptors: Bayesian Statistics, Item Response Theory, Guidance, Evaluation Methods
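As a generic sketch of the semiparametric extension described (not necessarily the exact specification in the article), a binary IRT model with a Dirichlet process mixture prior on the latent trait replaces the usual \theta_i \sim N(\mu, \sigma^2) with

  \theta_i \mid \mu_i, \sigma_i^2 \sim N(\mu_i, \sigma_i^2), \qquad (\mu_i, \sigma_i^2) \sim G, \qquad G \sim \mathrm{DP}(\alpha, G_0),

so the latent-trait distribution becomes an infinite mixture of normals rather than a single normal.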
Finch, W. Holmes – Educational and Psychological Measurement, 2023
Psychometricians have devoted much research and attention to categorical item responses, leading to the development and widespread use of item response theory for the estimation of model parameters and identification of items that do not perform in the same way for examinees from different population subgroups (e.g., differential item functioning…
Descriptors: Test Bias, Item Response Theory, Computation, Methods
Cook, Ryan M.; Sackett, Corrine R.; Wind, Stefanie A. – Measurement and Evaluation in Counseling and Development, 2023
Item response theory was used to study the psychometric properties of the Client Meaningful Experiences Scale (CMES). In a sample of 306 adult counseling clients, we examined the dimensional structure of the scale, item-fit, and person-fit statistics. Implications of these findings for counselors, counselors-in-training, and counseling researchers…
Descriptors: Test Construction, Experience, Counseling Effectiveness, Measures (Individuals)
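The item-fit and person-fit statistics referred to here are commonly the mean-square fit statistics from Rasch-type models (the snippet does not identify the specific statistics used). With standardized residuals z_{ij} = (x_{ij} - E[x_{ij}]) / \sqrt{\mathrm{Var}(x_{ij})}, item-level outfit and infit are

  \mathrm{Outfit}_j = \frac{1}{N}\sum_i z_{ij}^2, \qquad \mathrm{Infit}_j = \frac{\sum_i (x_{ij} - E[x_{ij}])^2}{\sum_i \mathrm{Var}(x_{ij})},

with person-fit versions obtained by summing over items instead of persons; values near 1 indicate adequate fit.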
Chalmers, R. Philip – Journal of Educational Measurement, 2023
Several marginal effect size (ES) statistics suitable for quantifying the magnitude of differential item functioning (DIF) have been proposed in the area of item response theory; for instance, the Differential Functioning of Items and Tests (DFIT) statistics, signed and unsigned item difference in the sample statistics (SIDS, UIDS, NSIDS, and…
Descriptors: Test Bias, Item Response Theory, Definitions, Monte Carlo Methods
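As a sketch of the sample-based effect sizes named in the snippet (sign conventions and notation differ across sources), let \hat{T}_{jF}(\theta) and \hat{T}_{jR}(\theta) be the expected score on item j under the focal-group and reference-group item parameter estimates. Then

  \mathrm{SIDS}_j = \frac{1}{N_F}\sum_{i \in F}\bigl[\hat{T}_{jF}(\theta_i) - \hat{T}_{jR}(\theta_i)\bigr], \qquad \mathrm{UIDS}_j = \frac{1}{N_F}\sum_{i \in F}\bigl|\hat{T}_{jF}(\theta_i) - \hat{T}_{jR}(\theta_i)\bigr|,

so SIDS is the average signed difference in expected item scores for focal-group examinees and UIDS its unsigned counterpart.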