Showing 1 to 15 of 19 results
Peer reviewed
Egberink, Iris J. L.; Meijer, Rob R.; Tendeiro, Jorge N. – Educational and Psychological Measurement, 2015
A popular method to assess measurement invariance of a particular item is based on likelihood ratio tests with all other items as anchor items. The results of this method are often only reported in terms of statistical significance, and researchers proposed different methods to empirically select anchor items. It is unclear, however, how many…
Descriptors: Personality Measures, Computer Assisted Testing, Measurement, Test Items
Peer reviewed
Meijer, Rob R.; Egberink, Iris J. L. – Educational and Psychological Measurement, 2012
In recent studies, different methods were proposed to investigate invariant item ordering (IIO), but practical IIO research is an unexploited field in questionnaire construction and evaluation. In the present study, the authors explored the usefulness of different IIO methods to analyze personality scales and clinical scales. From the authors'…
Descriptors: Test Items, Personality Measures, Questionnaires, Item Response Theory
Peer reviewed
Tendeiro, Jorge N.; Meijer, Rob R. – Applied Psychological Measurement, 2012
This article extends the work by Armstrong and Shi on CUmulative SUM (CUSUM) person-fit methodology. The authors present new theoretical considerations concerning the use of CUSUM person-fit statistics based on likelihood ratios for the purpose of detecting cheating and random guessing by individual test takers. According to the Neyman-Pearson…
Descriptors: Cheating, Individual Testing, Adaptive Testing, Statistics
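The CUSUM person-fit methodology discussed in the entry above can be illustrated with a minimal sketch. This is not the likelihood-ratio CUSUM of Tendeiro and Meijer; it is a simpler cumulative-sum chart on item residuals u_i - P_i in the same spirit, and the reference value and any flagging threshold are illustrative assumptions.

```python
def cusum_person_fit(responses, probs, reference=0.0):
    """Illustrative CUSUM person-fit chart on item residuals.

    Tracks upper (C+) and lower (C-) cumulative sums of the
    residuals u_i - P_i across the items in administration order.
    A large upper sum suggests a run of unexpectedly correct
    answers (e.g., successful guessing or cheating); a large
    negative lower sum suggests a run of unexpected errors.

    responses: list of 0/1 item scores
    probs: model-implied probabilities P_i of a correct answer
    reference: slack value subtracted/added before accumulating
    """
    c_plus = c_minus = 0.0
    upper = lower = 0.0
    for u, p in zip(responses, probs):
        t = u - p                                  # item residual
        c_plus = max(0.0, c_plus + t - reference)  # upper CUSUM
        c_minus = min(0.0, c_minus + t + reference)  # lower CUSUM
        upper = max(upper, c_plus)
        lower = min(lower, c_minus)
    return upper, lower
```

For example, ten correct answers on items with P_i = 0.2 drive the upper sum to 8.0, while a response pattern consistent with the model keeps both sums near zero.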
Peer reviewed
Emons, Wilco H. M.; Sijtsma, Klaas; Meijer, Rob R. – Psychological Methods, 2007
Short tests containing at most 15 items are used in clinical and health psychology, medicine, and psychiatry for making decisions about patients. Because short tests have large measurement error, the authors ask whether they are reliable enough for classifying patients into a treatment and a nontreatment group. For a given certainty level,…
Descriptors: Psychiatry, Patients, Error of Measurement, Test Length
Peer reviewed
Nering, Michael L.; Meijer, Rob R. – Applied Psychological Measurement, 1998
Compared the person-response function (PRF) method for identifying examinees who respond to test items in a manner divergent from the underlying test model to the "l(z)" index of Drasgow and others (1985). Although performance of the "l(z)" index was superior in most cases, the PRF was useful in some conditions. (SLD)
Descriptors: Comparative Analysis, Item Response Theory, Models, Responses
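The l(z) index of Drasgow et al. (1985) compared in the entry above is the standardized log-likelihood of a response pattern. A minimal sketch follows; the item probabilities are assumed to come from an already-fitted IRT model evaluated at the examinee's ability estimate.

```python
import math

def lz_statistic(responses, probs):
    """Standardized log-likelihood person-fit statistic l_z
    (Drasgow et al., 1985) for dichotomous items.

    responses: list of 0/1 item scores
    probs: model-implied probabilities P_i of a correct answer

    Returns (l_0 - E[l_0]) / sqrt(Var[l_0]); large negative values
    indicate a response pattern that misfits the model.
    """
    # Observed log-likelihood of the response pattern
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    # Expectation and variance under the model
    exp_l0 = sum(p * math.log(p) + (1 - p) * math.log(1 - p)
                 for p in probs)
    var_l0 = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2
                 for p in probs)
    return (l0 - exp_l0) / math.sqrt(var_l0)
```

A pattern that matches the model (correct on easy items, wrong on hard ones) yields l_z near or above zero; a reversed pattern yields a strongly negative value.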
Meijer, Rob R.; Sijtsma, Klaas – 1994
Methods for detecting item score patterns that are unlikely (aberrant) given that a parametric item response theory (IRT) model gives an adequate description of the data or given the responses of the other persons in the group are discussed. The emphasis here is on the latter group of statistics. These statistics can be applied when a…
Descriptors: Foreign Countries, Identification, Item Response Theory, Nonparametric Statistics
Peer reviewed
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – Applied Psychological Measurement, 2002
Compared the nominal and empirical null distributions of the standardized log-likelihood statistic for polytomous items for paper-and-pencil (P&P) and computerized adaptive tests (CATs). Results show that the empirical distribution of the statistic differed from the assumed standard normal distribution for both P&P tests and CATs. Also…
Descriptors: Adaptive Testing, Computer Assisted Testing, Item Response Theory, Statistical Distributions
Peer reviewed
Meijer, Rob R.; Sijtsma, Klaas – Applied Measurement in Education, 1995
Methods for detecting item score patterns that are unlikely, given that a parametric item response theory model gives an adequate description of the data or given the responses of other persons in the group, are discussed. The use of person-fit statistics in empirical data analysis is briefly discussed. (SLD)
Descriptors: Identification, Item Response Theory, Nonparametric Statistics, Patterns in Mathematics
Peer reviewed
Meijer, Rob R. – Applied Measurement in Education, 1996
This special issue is devoted to person-fit analysis, which is also referred to as appropriateness measurement. An introduction to person-fit research is given. Several types of aberrant response behavior on a test are discussed; and whether person-fit statistics can be used to detect dominant score patterns is explored. (SLD)
Descriptors: Identification, Item Response Theory, Research Methodology, Responses
van Krimpen-Stoop, Edith M. L. A.; Meijer, Rob R. – 2000
Item scores that do not fit an assumed item response theory model may cause the latent trait value to be estimated inaccurately. For computerized adaptive tests (CAT) with dichotomous items, several person-fit statistics for detecting nonfitting item score patterns have been proposed. Both for paper-and-pencil (P&P) tests and CATs, detection of…
Descriptors: Adaptive Testing, Computer Assisted Testing, Goodness of Fit, Item Response Theory
Peer reviewed
Meijer, Rob R. – Journal of Educational Measurement, 2002
Used empirical data from a certification test to study methods from statistical process control that have been proposed to classify an item score pattern as fitting or misfitting the underlying item response theory model in computerized adaptive testing. Results for 1,392 examinees show that different types of misfit can be distinguished. (SLD)
Descriptors: Certification, Classification, Goodness of Fit, Item Response Theory
Meijer, Rob R.; Sijtsma, Klaas – 1999
Methods are discussed that can be used to investigate the fit of an item score pattern to a test model. Model-based tests and personality inventories are administered to more than 100 million people a year and, as a result, individual fit is of great concern. Item Response Theory (IRT) modeling and person-fit statistics that are formulated in the…
Descriptors: Evaluation Methods, Goodness of Fit, Item Response Theory, Personality Measures
Peer reviewed
Emons, Wilco H. M.; Sijtsma, Klaas; Meijer, Rob R. – Multivariate Behavioral Research, 2004
The person-response function (PRF) relates the probability of an individual's correct answer to the difficulty of items measuring the same latent trait. Local deviations of the observed PRF from the expected PRF indicate person misfit. We discuss two new approaches to investigate person fit. The first approach uses kernel smoothing to estimate…
Descriptors: Probability, Simulation, Item Response Theory, Test Items
Meijer, Rob R.; van Krimpen-Stoop, Edith M. L. A. – 1998
Several person-fit statistics have been proposed to detect item score patterns that do not fit an item response theory model. To classify response patterns as not fitting a model, a distribution of a person-fit statistic is needed. The null distributions of several fit statistics have been investigated using conventionally administered tests, but…
Descriptors: Ability, Adaptive Testing, Foreign Countries, Item Response Theory
Peer reviewed
Meijer, Rob R.; And Others – Applied Psychological Measurement, 1994
The power of the nonparametric person-fit statistic, U3, is investigated through simulations as a function of item characteristics, test characteristics, person characteristics, and the group to which examinees belong. Results suggest conditions under which relatively short tests can be used for person-fit analysis. (SLD)
Descriptors: Difficulty Level, Group Membership, Item Response Theory, Nonparametric Statistics
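The nonparametric U3 statistic studied in the entry above (originally due to van der Flier) can be sketched as follows, assuming items are ordered from easiest to hardest by group proportion-correct; the example values are illustrative.

```python
import math

def u3_statistic(responses, item_props):
    """Nonparametric U3 person-fit statistic for dichotomous items.

    U3 = 0 for a perfect Guttman pattern (all correct answers on
    the easiest items) and 1 for a fully reversed pattern; larger
    values suggest aberrant responding.

    responses: 0/1 scores, items ordered from easiest to hardest
    item_props: group proportions-correct pi_i in the same order
    """
    logits = [math.log(p / (1 - p)) for p in item_props]
    r = sum(responses)
    if r == 0 or r == len(responses):
        return 0.0  # all-correct/all-wrong patterns trivially fit
    observed = sum(l for u, l in zip(responses, logits) if u == 1)
    w_max = sum(logits[:r])   # logit sum if the r easiest items were correct
    w_min = sum(logits[-r:])  # logit sum if the r hardest items were correct
    return (w_max - observed) / (w_max - w_min)
```

For five items with proportions-correct 0.9, 0.8, 0.7, 0.6, 0.5, the Guttman pattern [1, 1, 0, 0, 0] gives U3 = 0 and the reversed pattern [0, 0, 0, 1, 1] gives U3 = 1.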