Showing all 5 results
Peer reviewed
Direct link
Jana Welling; Timo Gnambs; Claus H. Carstensen – Educational and Psychological Measurement, 2024
Disengaged responding poses a severe threat to the validity of large-scale educational assessments, because item responses from unmotivated test-takers do not reflect their actual ability. Existing identification approaches rely primarily on item response times, which bears the risk of misclassifying fast but engaged or slow but disengaged responses…
Descriptors: Foreign Countries, College Students, Guessing (Tests), Multiple Choice Tests
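The response-time approaches this abstract refers to typically flag a response as potentially disengaged when it falls below an item-level time threshold. A minimal sketch of one such heuristic (a fraction-of-median rule); the data, threshold value, and variable names are illustrative, not taken from the article:

```python
import numpy as np

# Hypothetical data: response times (seconds) for 200 persons x 20 items.
rng = np.random.default_rng(0)
rt = rng.lognormal(mean=3.0, sigma=0.5, size=(200, 20))

# Fraction-of-median heuristic: flag a response as potentially
# disengaged if its time falls below a fraction of the item's median.
THRESHOLD_FRACTION = 0.10  # illustrative value, not from the article
item_medians = np.median(rt, axis=0)
flagged = rt < THRESHOLD_FRACTION * item_medians

print(f"Share of responses flagged: {flagged.mean():.3f}")
```

The article's point is precisely that rules like this one, based on time alone, can mislabel fast but engaged or slow but disengaged responses.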
Peer reviewed
Direct link
Völlinger, Vanessa A.; Spörer, Nadine; Lubbe, Dirk; Brunstein, Joachim C. – Journal of Educational Research, 2018
This study examined a theoretical model hypothesizing that reading strategies mediate the effects of intrinsic reading motivation, reading fluency, and vocabulary knowledge on reading comprehension. Using path-analytic methods, we tested the direct and indirect effects specified in the hypothesized model in a sample of 1,105 fifth graders. In…
Descriptors: Path Analysis, Reading Strategies, Mediation Theory, Models
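The mediation structure described here can be approximated with two regressions: the predictors on the mediator (a-paths) and the outcome on the mediator plus predictors (b-path), with indirect effects as their products. A minimal sketch on simulated data; the variable names and effect sizes are illustrative, not the study's:

```python
import numpy as np
import statsmodels.api as sm

# Simulated stand-ins for the constructs named in the abstract.
rng = np.random.default_rng(1)
n = 1105
motivation = rng.normal(size=n)
fluency = rng.normal(size=n)
vocabulary = rng.normal(size=n)
strategies = 0.3 * motivation + 0.2 * fluency + 0.2 * vocabulary + rng.normal(size=n)
comprehension = 0.4 * strategies + 0.1 * fluency + 0.3 * vocabulary + rng.normal(size=n)

# a-paths: predictors -> mediator (reading strategies)
Xa = sm.add_constant(np.column_stack([motivation, fluency, vocabulary]))
a = sm.OLS(strategies, Xa).fit().params[1:]

# b-path: mediator -> outcome, controlling for the predictors
Xb = sm.add_constant(np.column_stack([strategies, motivation, fluency, vocabulary]))
b = sm.OLS(comprehension, Xb).fit().params[1]

# Indirect effect of each predictor through strategies: a_k * b
print(dict(zip(["motivation", "fluency", "vocabulary"], np.round(a * b, 3))))
```

A full path analysis estimates all paths simultaneously in one model; the two-regression version shown here recovers the same indirect effects in this simple setting.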
Peer reviewed
Direct link
Wendt, Heike; Kasper, Daniel; Trendtel, Matthias – Large-scale Assessments in Education, 2017
Background: Large-scale cross-national studies designed to measure student achievement use various social, cultural, economic, and other background variables to explain observed differences in that achievement. Prior to their inclusion in a prediction model, these variables are commonly scaled into latent background indices. To allow…
Descriptors: Measurement, Achievement Tests, Cultural Differences, Socioeconomic Influences
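The abstract does not name the scaling method, but one common way to condense several background variables into a single index is to extract the first principal component (large-scale studies often use IRT scaling instead). A minimal sketch on simulated indicator data; all values are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical background indicators (e.g., home-possession items)
# generated from one latent variable plus noise.
rng = np.random.default_rng(2)
latent = rng.normal(size=500)
loadings = np.array([0.8, 0.7, 0.6, 0.5])
items = latent[:, None] * loadings + rng.normal(scale=0.5, size=(500, 4))

# Standardize, then take the first principal component as the index.
z = StandardScaler().fit_transform(items)
index = PCA(n_components=1).fit_transform(z).ravel()

print(f"Correlation with generating variable: {np.corrcoef(index, latent)[0, 1]:.2f}")
```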
Peer reviewed
Direct link
Pohl, Steffi; Gräfe, Linda; Rose, Norman – Educational and Psychological Measurement, 2014
Data from competence tests usually show a number of missing responses on test items due to both omitted and not-reached items. Different approaches for dealing with missing responses exist, and there are no clear guidelines on which to use. While classical approaches rely on an ignorable missing data mechanism, the most recently developed…
Descriptors: Test Items, Achievement Tests, Item Response Theory, Models
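Two classical treatments of the kind the abstract alludes to are easy to contrast directly: scoring missing responses as incorrect versus ignoring them when computing scores. A minimal sketch on simulated data (the missingness rate and matrix dimensions are illustrative):

```python
import numpy as np

# Hypothetical response matrix: 1 = correct, 0 = incorrect,
# nan = missing (omitted or not reached).
rng = np.random.default_rng(3)
resp = rng.integers(0, 2, size=(100, 15)).astype(float)
resp[rng.random(resp.shape) < 0.1] = np.nan  # inject ~10% missingness

# (a) Score missing responses as incorrect.
as_incorrect = np.nan_to_num(resp, nan=0.0).mean(axis=1)

# (b) Ignore missing responses (proportion correct on answered items).
ignored = np.nanmean(resp, axis=1)

print(f"Mean score, missing-as-incorrect: {as_incorrect.mean():.3f}")
print(f"Mean score, missing-ignored:      {ignored.mean():.3f}")
```

The two rules diverge whenever missingness is substantial, which is why the choice of treatment, and whether the missingness mechanism is ignorable, matters for the resulting ability estimates.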
Peer reviewed
Direct link
Hartig, Johannes; Frey, Andreas; Nold, Gunter; Klieme, Eckhard – Educational and Psychological Measurement, 2012
The article compares three methods of estimating the effects of task characteristics and using these estimates for model-based proficiency scaling: prediction of item difficulties from the Rasch model, the linear logistic test model (LLTM), and an LLTM including random item effects (LLTM+e). The methods are applied to empirical data from a…
Descriptors: Item Response Theory, Models, Methods, Computation
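The first of the three methods, predicting estimated Rasch item difficulties from task characteristics, amounts to a linear regression of difficulties on a feature (Q) matrix; the LLTM imposes the same decomposition inside the measurement model, and LLTM+e adds a random residual per item. A minimal sketch of the regression step on simulated difficulties; all numbers are illustrative:

```python
import numpy as np

# Hypothetical Rasch item difficulties generated from a Q-matrix of
# binary task characteristics plus item-specific error.
rng = np.random.default_rng(4)
n_items = 40
Q = rng.integers(0, 2, size=(n_items, 3)).astype(float)  # item features
eta_true = np.array([0.8, -0.5, 1.2])                    # feature effects
b = Q @ eta_true + rng.normal(scale=0.3, size=n_items)   # difficulties

# Least-squares estimate of the feature effects (the decomposition
# applied post hoc to estimated difficulties; the LLTM proper
# estimates eta inside the IRT model).
eta_hat, *_ = np.linalg.lstsq(Q, b, rcond=None)
print(np.round(eta_hat, 2))
```

The residual term in the simulation corresponds to what LLTM+e models explicitly: the plain LLTM assumes task characteristics explain item difficulty exactly, which rarely holds in empirical data.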