Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 0
  Since 2016 (last 10 years): 2
  Since 2006 (last 20 years): 2
Descriptor
  Item Response Theory: 3
  Classification: 2
  Reliability: 2
  Accuracy: 1
  Comparative Analysis: 1
  Computer Interfaces: 1
  Computer Software: 1
  Documentation: 1
  Equated Scores: 1
  Error of Measurement: 1
  Estimation (Mathematics): 1
Author
  Lee, Won-Chan: 3
  Brennan, Robert L.: 1
  Choi, Jiwon: 1
  Hanson, Bradley A.: 1
  Kang, Yujin: 1
  Kim, Stella Y.: 1
  Malatesta, Jaime: 1
Publication Type
  Reports - Evaluative: 3
  Journal Articles: 2
Software Review of IRTEQ, STUIRT, and POLYEQUATE for Item Response Theory Scale Linking and Equating
Malatesta, Jaime; Lee, Won-Chan – Measurement: Interdisciplinary Research and Perspectives, 2019
This article reviews several software programs designed to conduct item response theory (IRT) scale linking and equating. The programs reviewed include IRTEQ, STUIRT, and POLYEQUATE. Features and functionalities of each program are discussed and an example analysis using the common-item non-equivalent groups design in IRTEQ is provided.
Descriptors: Item Response Theory, Equated Scores, Computer Software, Computer Interfaces
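As context for what linking programs such as those reviewed above compute, the sketch below illustrates mean/sigma scale linking under a common-item design: linking constants A and B are estimated from the anchor items' difficulty parameters and then used to place new-form ability and item parameters on the old-form scale. The item values here are hypothetical, and mean/sigma is only one of the linking methods commonly implemented by such programs (alongside mean/mean and the characteristic-curve methods); this is a minimal illustration, not the programs' actual implementation.

```python
import numpy as np

# Hypothetical difficulty (b) estimates for the same anchor items,
# calibrated separately with the old form and the new form.
b_old = np.array([-1.2, -0.4, 0.1, 0.8, 1.5])
b_new = np.array([-1.0, -0.2, 0.3, 1.1, 1.7])

# Mean/sigma linking constants that place the new-form scale onto the old-form scale.
A = b_old.std(ddof=1) / b_new.std(ddof=1)
B = b_old.mean() - A * b_new.mean()

def to_old_scale(theta, a=None, b=None):
    """Rescale new-form ability and item parameters to the old-form metric."""
    out = {"theta": A * theta + B}
    if a is not None:
        out["a"] = a / A          # discriminations are divided by the slope
    if b is not None:
        out["b"] = A * b + B      # difficulties follow the linear transformation
    return out

print(f"A = {A:.3f}, B = {B:.3f}")
print(to_old_scale(theta=0.0, a=1.2, b=0.5))
```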
Lee, Won-Chan; Kim, Stella Y.; Choi, Jiwon; Kang, Yujin – Journal of Educational Measurement, 2020
This article considers psychometric properties of composite raw scores and transformed scale scores on mixed-format tests that consist of a mixture of multiple-choice and free-response items. Test scores on several mixed-format tests are evaluated with respect to conditional and overall standard errors of measurement, score reliability, and…
Descriptors: Raw Scores, Item Response Theory, Test Format, Multiple Choice Tests
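To make the quantities named in this abstract concrete, the sketch below shows the classical relationship between conditional standard errors of measurement (CSEMs) and overall reliability: overall error variance is taken as the average of the squared conditional SEMs, and reliability as one minus the ratio of error variance to observed-score variance. The scores and CSEMs are simulated placeholders, not values from the article, and the article's actual procedures for mixed-format composite and scale scores are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data for 1,000 examinees: observed composite scores and
# model-based conditional SEMs at each examinee's score level.
observed = rng.normal(loc=30, scale=6, size=1000)
csem = rng.uniform(1.5, 3.0, size=1000)

error_variance = np.mean(csem**2)                  # overall error variance
total_variance = observed.var(ddof=1)              # observed-score variance
reliability = 1 - error_variance / total_variance  # 1 - sigma_E^2 / sigma_X^2

print(f"overall SEM = {np.sqrt(error_variance):.2f}")
print(f"reliability = {reliability:.3f}")
```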
Lee, Won-Chan; Hanson, Bradley A.; Brennan, Robert L. – 2000
This paper describes procedures for estimating various indices of classification consistency and accuracy for multiple category classifications using data from a single test administration. The estimates of the classification consistency and accuracy indices are compared under three different psychometric models: the two-parameter beta binomial,…
Descriptors: Classification, Estimation (Mathematics), Item Response Theory, Reliability
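The sketch below illustrates the general idea of single-administration classification indices under a beta-binomial true-score model, one of the models the paper compares: given assumed beta parameters for the true proportion-correct distribution, classification consistency is the probability that two parallel forms place an examinee in the same category, and classification accuracy is the probability that the observed category matches the true-score category. The cut scores and beta parameters are hypothetical, and this Monte Carlo approximation stands in for the analytical estimators developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 40 dichotomous items, two proportion-correct cut scores
# defining three performance categories, and assumed beta-binomial parameters.
n_items = 40
cuts = np.array([0.50, 0.75])
alpha, beta_param = 8.0, 4.0

n_examinees = 100_000
true_p = rng.beta(alpha, beta_param, size=n_examinees)   # true proportion-correct scores
form1 = rng.binomial(n_items, true_p) / n_items          # simulated parallel administrations
form2 = rng.binomial(n_items, true_p) / n_items

# Category index = number of cut scores at or below the proportion-correct score.
true_cat = np.searchsorted(cuts, true_p, side="right")
cat1 = np.searchsorted(cuts, form1, side="right")
cat2 = np.searchsorted(cuts, form2, side="right")

consistency = np.mean(cat1 == cat2)     # P(same category on two parallel forms)
accuracy = np.mean(cat1 == true_cat)    # P(observed category equals true category)

print(f"classification consistency = {consistency:.3f}")
print(f"classification accuracy    = {accuracy:.3f}")
```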