Showing 1 to 15 of 28 results
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Jorge Salas – Grantee Submission, 2024
Despite the growing interest in incorporating response time data into item response models, there has been a lack of research investigating how the effect of speed on the probability of a correct response varies across different groups (e.g., experimental conditions) for various items (i.e., differential response time item analysis). Furthermore,…
Descriptors: Item Response Theory, Reaction Time, Models, Accuracy
Peer reviewed
PDF on ERIC Download full text
Buyukatak, Emrah; Anil, Duygu – International Journal of Assessment Tools in Education, 2022
The purpose of this research was to determine the classification accuracy of factors affecting the success of students' reading skills based on PISA 2018 data, using Artificial Neural Networks, Decision Trees, K-Nearest Neighbor, and Naive Bayes data mining classification methods, and to examine the general characteristics of success groups. In…
Descriptors: Classification, Accuracy, Reading Tests, Achievement Tests
Peer reviewed
Direct link
Parkin, Jason R. – Journal of Psychoeducational Assessment, 2021
The simple views of reading (SVR) and writing (SVW) provide useful foundations for the interpretation of psychoeducational achievement batteries. Research has established that oral language, decoding, and transcription explain significant variance in reading comprehension and written composition, respectively. However, the specific task demands of…
Descriptors: Achievement Tests, Reading Tests, Writing Tests, Oral Language
Peer reviewed
Direct link
Trendtel, Matthias; Robitzsch, Alexander – Journal of Educational and Behavioral Statistics, 2021
A multidimensional Bayesian item response model is proposed for modeling item position effects. The first dimension corresponds to the ability that is to be measured; the second dimension represents a factor that allows for individual differences in item position effects called persistence. This model allows for nonlinear item position effects on…
Descriptors: Bayesian Statistics, Item Response Theory, Test Items, Test Format
Peer reviewed
PDF on ERIC Download full text
Ningsih, Tutuk; Yuwono, Dwi Margo; Sholehuddin, M. Sugeng; Suharto, Abdul Wachid Bambang – Journal of Social Studies Education Research, 2021
Learning at home should not only provide written assignments converted into electronic form but must also reflect student learning outcomes at home. Accordingly, the researchers used literary reading so that students would not become bored with learning Indonesian language literacy and character education. However, improving literacy skills is not just reading…
Descriptors: Indonesian, Computer Assisted Testing, Fiction, Literacy
Peer reviewed
Direct link
Contini, Dalit; Cugnata, Federica – Large-scale Assessments in Education, 2020
The development of international surveys on children's learning, such as PISA, PIRLS, and TIMSS, which deliver comparable achievement measures across educational systems, has revealed large cross-country variability in average performance and in the degree of inequality across social groups. A key question is whether and how institutional differences…
Descriptors: International Assessment, Achievement Tests, Scores, Family Characteristics
Peer reviewed
Direct link
Philip Capin; Sharon Vaughn; Joseph E. Miller; Jeremy Miciak; Anna-Mari Fall; Greg Roberts; Eunsoo Cho; Amy E. Barth; Paul K. Steinle; Jack M. Fletcher – Grantee Submission, 2024
Purpose: This study investigated the reading profiles of middle school Spanish-speaking emergent bilinguals (EBs) with significantly below grade level reading comprehension and whether these profiles varied in their reading comprehension performance over time. Method: Latent profile analyses were used to classify Grade 6 and 7 Hispanic EBs (n =…
Descriptors: Profiles, Reading Comprehension, Reading Difficulties, Middle School Students
Peer reviewed
Direct link
Chen, Huilin; Chen, Jinsong – Language Assessment Quarterly, 2016
Cognitive diagnosis models (CDMs) are psychometric models developed mainly to assess examinees' specific strengths and weaknesses in a set of skills or attributes within a domain. By adopting the Generalized-DINA model framework, the recently developed general modeling framework, we attempted to retrofit the PISA reading assessments, a…
Descriptors: Reading Tests, Diagnostic Tests, Models, Test Items
Peer reviewed
Direct link
Baldonado, Angela Argo; Svetina, Dubravka; Gorin, Joanna – Applied Measurement in Education, 2015
Applications of traditional unidimensional item response theory models to passage-based reading comprehension assessment data have been criticized based on potential violations of local independence. However, simple rules for determining dependency, such as including all items associated with a particular passage, may overestimate the dependency…
Descriptors: Reading Tests, Reading Comprehension, Test Items, Item Response Theory
Peer reviewed
Direct link
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
Peer reviewed
Direct link
Lee, HyeSun; Geisinger, Kurt F. – International Journal of Testing, 2014
Differential item functioning (DIF) analysis is important in terms of test fairness. While DIF analyses have mainly been conducted with manifest grouping variables, such as gender or race/ethnicity, it has been recently claimed that not only the grouping variables but also contextual variables pertaining to examinees should be considered in DIF…
Descriptors: Test Bias, Gender Differences, Regression (Statistics), Statistical Analysis
Peer reviewed
Direct link
Chen, Huilin; Chen, Jinsong – Educational Psychology, 2016
By analysing the test data of 1029 British secondary school students' performance on 20 Programme for International Student Assessment English reading items through the generalised deterministic input, noisy "and" gate (G-DINA) model, the study conducted two investigations exploring the relationships among the five reading…
Descriptors: Reading Comprehension, Reading Skills, Models, Foreign Countries
Peer reviewed
Direct link
Kwiatkowska-White, Bozena; Kirby, John R.; Lee, Elizabeth A. – Journal of Psychoeducational Assessment, 2016
This longitudinal study of 78 Canadian English-speaking students examined the applicability of the stability, cumulative, and compensatory models in reading comprehension development. Archival government-mandated assessments of reading comprehension at Grades 3, 6, and 10, and the Canadian Test of Basic Skills measure of reading comprehension…
Descriptors: Longitudinal Studies, Reading Comprehension, Reading Achievement, Models
Peer reviewed
PDF on ERIC Download full text
Baghaei, Purya; Carstensen, Claus H. – Practical Assessment, Research & Evaluation, 2013
Standard unidimensional Rasch models assume that persons with the same ability parameters are comparable. That is, the same interpretation applies to persons with identical ability estimates as regards the underlying mental processes triggered by the test. However, research in cognitive psychology shows that persons at the same trait level may…
Descriptors: Item Response Theory, Models, Reading Comprehension, Reading Tests
Peer reviewed
PDF on ERIC Download full text
Gu, Lin; Turkan, Sultan; Gomez, Pablo Garcia – ETS Research Report Series, 2015
ELTeach is an online professional development program developed by Educational Testing Service (ETS) in collaboration with National Geographic Learning. The ELTeach program consists of two courses: English-for-Teaching and Professional Knowledge for English Language Teaching (ELT). Each course includes a coordinated assessment leading to a score…
Descriptors: Item Analysis, Test Items, English (Second Language), Second Language Instruction