Wang, Wen-Chung; Chen, Po-Hsi; Cheng, Ying-Yao – Psychological Methods, 2004
A conventional way to analyze item responses in multiple tests is to apply unidimensional item response models separately, one test at a time. This unidimensional approach, which ignores the correlations between latent traits, yields imprecise measures when tests are short. To resolve this problem, one can use multidimensional item response models…
Descriptors: Item Response Theory, Test Items, Testing, Test Validity
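The core idea here, that correlated latent traits let short tests borrow strength from one another, can be illustrated numerically. The sketch below is not the authors' model or estimation procedure; it simply compares the posterior precision of one Rasch-scaled trait when the prior treats two traits as independent (separate unidimensional scoring) versus bivariate normal with correlation 0.8 (a multidimensional prior). The item difficulties, the correlation value, and the grid quadrature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two short Rasch-scaled tests (5 items each), one per latent trait.
# The traits theta1 and theta2 are assumed to correlate at rho = 0.8.
n_items = 5
b1 = np.linspace(-1, 1, n_items)   # item difficulties, test 1 (illustrative values)
b2 = np.linspace(-1, 1, n_items)   # item difficulties, test 2 (illustrative values)
rho = 0.8

def rasch_loglik(theta, b, x):
    """Log-likelihood of a dichotomous response vector x under the Rasch model."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

# Simulate one examinee with true traits (0.5, 0.5)
theta_true = np.array([0.5, 0.5])
x1 = (rng.random(n_items) < 1.0 / (1.0 + np.exp(-(theta_true[0] - b1)))).astype(int)
x2 = (rng.random(n_items) < 1.0 / (1.0 + np.exp(-(theta_true[1] - b2)))).astype(int)

# Quadrature grid over (theta1, theta2)
grid = np.linspace(-4, 4, 81)
T1, T2 = np.meshgrid(grid, grid, indexing="ij")

# Bivariate normal prior with correlation rho (multidimensional approach)
log_prior_joint = -0.5 * (T1**2 - 2 * rho * T1 * T2 + T2**2) / (1 - rho**2)

# Independent standard normal priors (separate unidimensional calibrations)
log_prior_indep = -0.5 * (T1**2 + T2**2)

# Log-likelihood surface for the observed responses (factorizes over the two tests)
ll = np.zeros_like(T1)
for i, t1 in enumerate(grid):
    for j, t2 in enumerate(grid):
        ll[i, j] = rasch_loglik(t1, b1, x1) + rasch_loglik(t2, b2, x2)

def posterior_sd_theta1(log_prior):
    """Posterior standard deviation of theta1 under the given prior."""
    post = np.exp(ll + log_prior)
    post /= post.sum()
    m = (T1 * post).sum()
    return np.sqrt(((T1 - m) ** 2 * post).sum())

print("posterior SD of theta1, separate priors:  ", posterior_sd_theta1(log_prior_indep))
print("posterior SD of theta1, correlated prior: ", posterior_sd_theta1(log_prior_joint))
```

With the correlated prior, responses to the second test inform the first trait through the prior, so the posterior standard deviation of theta1 shrinks; that precision gain for short tests is the benefit the multidimensional approach is meant to deliver.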
Wang, Wen-Chung; Chen, Hsueh-Chu – Educational and Psychological Measurement, 2004
As item response theory (IRT) becomes popular in educational and psychological testing, there is a need to report IRT-based effect size measures. In this study, we show how the standardized mean difference can be generalized into such a measure. A disattenuation procedure based on the IRT test reliability is proposed to correct the attenuation…
Descriptors: Test Reliability, Rating Scales, Sample Size, Error of Measurement
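As a rough illustration of the disattenuation idea (not the authors' exact procedure), the sketch below computes a standardized mean difference on the latent-trait metric and divides it by the square root of an empirical IRT reliability estimate. The reliability formula, the simulated trait estimates, and the constant standard errors are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_reliability(theta_hat, se):
    """One common empirical reliability estimate for ML trait estimates:
    (variance of estimates - mean error variance) / variance of estimates."""
    total_var = np.var(theta_hat, ddof=1)
    error_var = np.mean(se ** 2)
    return max(total_var - error_var, 1e-12) / total_var

def irt_effect_size(theta_g1, se_g1, theta_g2, se_g2):
    """Standardized mean difference on the latent-trait metric, with a
    classical disattenuation correction based on the reliability estimate."""
    pooled_sd = np.sqrt((np.var(theta_g1, ddof=1) + np.var(theta_g2, ddof=1)) / 2.0)
    d_obs = (np.mean(theta_g1) - np.mean(theta_g2)) / pooled_sd   # attenuated by error
    rel = empirical_reliability(np.concatenate([theta_g1, theta_g2]),
                                np.concatenate([se_g1, se_g2]))
    return d_obs / np.sqrt(rel)                                   # disattenuated estimate

# Illustrative data: trait estimates (true trait + measurement error) from two groups
theta_g1 = rng.normal(0.4, 1.0, 200) + rng.normal(0.0, 0.5, 200)
theta_g2 = rng.normal(0.0, 1.0, 200) + rng.normal(0.0, 0.5, 200)
se_g1 = np.full(200, 0.5)
se_g2 = np.full(200, 0.5)

print("disattenuated d:", irt_effect_size(theta_g1, se_g1, theta_g2, se_g2))
```

Because measurement error inflates the spread of the trait estimates, the raw standardized mean difference is biased toward zero; dividing by the square root of the reliability pulls it back toward the effect size on the error-free trait scale (here, close to the generating value of 0.4).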