Wang, Chun; Chang, Hua-Hua; Boughton, Keith A. – Applied Psychological Measurement, 2013
Multidimensional computerized adaptive testing (MCAT) is able to provide a vector of ability estimates for each examinee, which could be used to provide a more informative profile of an examinee's performance. The current literature on MCAT focuses on fixed-length tests, which can generate less accurate results for those examinees whose…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Length, Item Banks
Magis, David; Beland, Sebastien; Raiche, Gilles – Applied Psychological Measurement, 2011
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
Descriptors: Test Length, Computation, Item Response Theory, Maximum Likelihood Statistics
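The divergence of maximum likelihood (ML) estimates for extreme proficiency levels is easy to see in a minimal sketch. Assuming a simple Rasch (one-parameter logistic) model with hypothetical item difficulties, the log-likelihood of an all-correct response pattern increases without bound in theta, so no finite ML estimate exists:

```python
import math

def rasch_loglik(theta, b, responses):
    """Log-likelihood of a binary response pattern under the Rasch model."""
    ll = 0.0
    for bi, x in zip(b, responses):
        p = 1.0 / (1.0 + math.exp(-(theta - bi)))
        ll += x * math.log(p) + (1 - x) * math.log(1 - p)
    return ll

b = [-1.0, 0.0, 1.0]   # hypothetical item difficulties
perfect = [1, 1, 1]    # all-correct response pattern

# The log-likelihood keeps rising as theta grows: the ML estimate is infinite.
lls = [rasch_loglik(t, b, perfect) for t in (0.0, 2.0, 4.0, 8.0)]
assert lls == sorted(lls)
```

This is the pathology the article addresses; weighted likelihood or Bayesian modal estimators are the usual remedies because they keep the estimate finite.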
Pei, Lai Kwan; Li, Jun – Applied Psychological Measurement, 2010
Differential item functioning (DIF) of items has become an important issue in test fairness and equity in large-scale assessments. DIF occurs when subgroups of test takers have equal trait levels but differ in their probabilities of a correct response. DIF items may threaten the validity of test scores for subgroups and can mislead researchers…
Descriptors: Test Bias, Item Response Theory, Regression (Statistics), Statistical Analysis
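The definition of DIF above (equal trait levels, unequal probabilities of a correct response) can be illustrated directly. In this hedged sketch under a two-parameter logistic (2PL) model, the hypothetical difficulty shift for the focal group produces uniform DIF: two examinees with identical theta get different success probabilities:

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Same ability, but the item is effectively harder for the focal group.
theta = 0.0
p_ref = p_correct(theta, a=1.0, b=0.0)    # reference-group difficulty
p_focal = p_correct(theta, a=1.0, b=0.5)  # hypothetical focal-group shift
assert p_ref > p_focal                    # uniform DIF: gap at equal theta
```

The item parameters here are illustrative only; DIF detection methods such as the regression-based approach in this article test whether such a gap persists after conditioning on the matching variable.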
de la Torre, Jimmy – Applied Psychological Measurement, 2009
Various sources of information that are usually available in testing situations, namely ancillary variables and the correlational structure of the latent abilities, are often ignored in ability estimation. A general model that incorporates these sources of information is proposed in this article. The model has a general…
Descriptors: Scoring, Multivariate Analysis, Ability, Computation
de la Torre, Jimmy; Song, Hao – Applied Psychological Measurement, 2009
Assessments consisting of different domains (e.g., content areas, objectives) are typically multidimensional in nature but are commonly assumed to be unidimensional for estimation purposes. The different domains of these assessments are further treated as multi-unidimensional tests for the purpose of obtaining diagnostic information. However, when…
Descriptors: Ability, Tests, Item Response Theory, Data Analysis
de la Torre, Jimmy – Applied Psychological Measurement, 2008
Recent work has shown that multidimensionally scoring responses from different tests can provide better ability estimates. For educational assessment data, applications of this approach have been limited to binary scores. Of the different variants, the de la Torre and Patz model is considered more general because implementing the scoring procedure…
Descriptors: Markov Processes, Scoring, Data Analysis, Item Response Theory
Dimitrov, Dimiter M. – Applied Psychological Measurement, 2007
The validation of cognitive attributes required for correct answers on binary test items or tasks has been addressed in previous research through the integration of cognitive psychology and psychometric models using parametric or nonparametric item response theory, latent class modeling, and Bayesian modeling. All previous models, each with their…
Descriptors: Individual Testing, Test Items, Psychometrics, Probability

Wells, Craig S.; Subkoviak, Michael J.; Serlin, Ronald C. – Applied Psychological Measurement, 2002
Investigated the effect of item parameter drift on ability estimates under item response theory, simulating item response data for two testing occasions for the two-parameter logistic model under several crossed conditions. Results show that item parameter drift under the simulated conditions had a small effect on ability estimates. (SLD)
Descriptors: Ability, Estimation (Mathematics), Item Response Theory, Simulation
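Simulation studies of this kind start by generating binary responses under the two-parameter logistic (2PL) model. A minimal sketch, with hypothetical item parameters (drift would be mimicked by shifting the difficulties between the two testing occasions):

```python
import math
import random

def simulate_2pl(theta, a, b, rng):
    """Simulate one examinee's binary responses under the 2PL model."""
    responses = []
    for ai, bi in zip(a, b):
        p = 1.0 / (1.0 + math.exp(-ai * (theta - bi)))
        responses.append(1 if rng.random() < p else 0)
    return responses

rng = random.Random(42)       # seeded for reproducibility
a = [1.2, 0.8, 1.5]           # hypothetical discriminations
b = [-0.5, 0.0, 1.0]          # hypothetical difficulties
resp = simulate_2pl(0.0, a, b, rng)
```

Repeating this over many examinees and two occasions, with and without perturbed `b` values, is the basic design the abstract describes.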

Wang, Tianyou; Zeng, Lingjia – Applied Psychological Measurement, 1998
Develops an item-parameter estimation procedure using an EM algorithm for the continuous-response model (F. Samejima, 1973). Examines the usefulness of this model using simulated data. Results show that the procedure performs well in estimating items and theta parameters. (SLD)
Descriptors: Ability, Estimation (Mathematics), Item Response Theory, Models

Narayanan, Pankaja; Swaminathan, H. – Applied Psychological Measurement, 1996
This study compared the Mantel-Haenszel procedure, the simultaneous item bias approach (SIB), and the logistic regression approach (LR) with respect to Type I error rates and power to detect nonuniform differential item functioning (DIF). The SIB and LR procedures were equally powerful in detecting nonuniform DIF. (SLD)
Descriptors: Ability, Identification, Item Bias, Item Response Theory
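The Mantel-Haenszel procedure compared in this study reduces to a common odds ratio pooled over matched score strata. A minimal sketch with hypothetical counts (a value near 1 indicates no DIF):

```python
def mh_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across score strata.

    Each table is (a, b, c, d): reference-correct, reference-incorrect,
    focal-correct, focal-incorrect counts at one matched score level.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical counts at three matched score levels.
tables = [(30, 10, 28, 12), (50, 25, 47, 28), (20, 30, 18, 32)]
ratio = mh_odds_ratio(tables)
```

Note that, as the abstract points out, this statistic is aimed at uniform DIF; nonuniform DIF (an interaction between group and ability) is what the SIB and LR approaches detect more powerfully.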

Veldkamp, Bernard P. – Applied Psychological Measurement, 2002
Presents two mathematical programming approaches for the assembly of ability tests from item pools calibrated under a multidimensional item response theory model. Item selection is based on the Fisher information matrix. Illustrates the method through empirical examples for a two-dimensional mathematics item pool. (SLD)
Descriptors: Ability, Item Banks, Item Response Theory, Selection

Kirisci, Levent; Hsu, Tse-chi; Yu, Lifa – Applied Psychological Measurement, 2001
Studied the effects of test dimensionality, theta distribution shape, and estimation program (BILOG, MULTILOG, or XCALIBRE) on the accuracy of item and person parameter estimates through simulation. Derived guidelines for estimating parameters of multidimensional test items using unidimensional item response theory models. (SLD)
Descriptors: Ability, Computer Software, Estimation (Mathematics), Item Response Theory

Skaggs, Gary; Lissitz, Robert W. – Applied Psychological Measurement, 1988
Item response theory equating invariance was examined by simulating vertical equating of two sets of examinee ability data comparing Rasch, three-parameter, and equipercentile equating methods. All three were reasonably invariant, suggesting that multidimensionality is likely to be the cause of lack of invariance found in real data sets. (SLD)
Descriptors: Ability, Elementary Secondary Education, Equated Scores, Latent Trait Theory

Kim, Jee-Seon; Hanson, Bradley A. – Applied Psychological Measurement, 2002
Presents a characteristic curve procedure for comparing transformations of the item response theory ability scale assuming the multiple-choice model. Illustrates the use of the method with an example equating American College Testing mathematics tests. (SLD)
Descriptors: Ability, Equated Scores, Item Response Theory, Mathematics Tests

Reckase, Mark D.; McKinley, Robert L. – Applied Psychological Measurement, 1991
The concept of item discrimination is generalized to the case in which more than one ability is required to determine the correct response to an item, using the conceptual framework of item response theory and the definition of multidimensional item difficulty previously developed by M. Reckase (1985). (SLD)
Descriptors: Ability, Definitions, Difficulty Level, Equations (Mathematics)