Showing 1 to 15 of 237 results
Peer reviewed
Direct link
Maria Bolsinova; Jesper Tijmstra; Leslie Rutkowski; David Rutkowski – Journal of Educational and Behavioral Statistics, 2024
Profile analysis is one of the main tools for studying whether differential item functioning can be related to specific features of test items. While relevant, profile analysis in its current form has two restrictions that limit its usefulness in practice: It assumes that all test items have equal discrimination parameters, and it does not test…
Descriptors: Test Items, Item Analysis, Generalizability Theory, Achievement Tests
Peer reviewed
Direct link
Pham, Duy N.; Wells, Craig S.; Bauer, Malcolm I.; Wylie, E. Caroline; Monroe, Scott – Applied Measurement in Education, 2021
Assessments built on a theory of learning progressions are promising formative tools to support learning and teaching. The quality and usefulness of those assessments depend, in large part, on the validity of the theory-informed inferences about student learning made from the assessment results. In this study, we introduced an approach to address…
Descriptors: Formative Evaluation, Mathematics Instruction, Mathematics Achievement, Middle School Students
Peer reviewed
PDF on ERIC
Oluwaseyi Aina Gbolade Opesemowo – Research in Social Sciences and Technology, 2023
Local Item Dependence (LID) is a violation of Local Item Independence (LII) which can lead to overestimating or underestimating a candidate's ability on mathematics items and create validity problems. The study investigated the intra- and inter-LID of mathematics items. The study made use of ex-post facto research. The population encompassed all…
Descriptors: Foreign Countries, Secondary School Students, Item Response Theory, Test Items
Laura Laclede – ProQuest LLC, 2023
Because non-cognitive constructs can influence student success in education beyond academic achievement, it is essential that they are reliably conceptualized and measured. Within this context, there are several gaps in the literature related to correctly interpreting the meaning of scale scores when a non-standard response option like I do not…
Descriptors: High School Students, Test Wiseness, Models, Test Items
Peer reviewed
Direct link
Tony Albano; Brian F. French; Thao Thu Vo – Applied Measurement in Education, 2024
Recent research has demonstrated an intersectional approach to the study of differential item functioning (DIF). This approach expands DIF to account for the interactions between what have traditionally been treated as separate grouping variables. In this paper, we compare traditional and intersectional DIF analyses using data from a state testing…
Descriptors: Test Items, Item Analysis, Data Use, Standardized Tests
Peer reviewed
PDF on ERIC
Chidubem Deborah Adamu – Research in Social Sciences and Technology, 2024
The Fourth Industrial Revolution Chemistry Teachers Effectiveness Scale (4IRCTES) was evaluated in secondary schools in Southwest Nigeria to determine its item discrimination, item parameters, and model-data fit. This study utilised a descriptive survey research design and included 4,986 Chemistry teachers in the southwestern region of Nigeria,…
Descriptors: Secondary School Teachers, Science Teachers, Chemistry, Foreign Countries
Peer reviewed
PDF on ERIC
Qiwei He – International Journal of Assessment Tools in Education, 2023
Collaborative problem solving (CPS) is inherently an interactive, conjoint, dual-strand process that considers how a student reasons about a problem as well as how s/he interacts with others to regulate social processes and exchange information (OECD, 2013). Measuring CPS skills presents a challenge for obtaining consistent, accurate, and reliable…
Descriptors: Cooperative Learning, Problem Solving, Test Items, International Assessment
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Peer reviewed
Direct link
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Peer reviewed
PDF on ERIC
Purwo Susongko; Chockchai Yuenyong; Mobinta Kusuma; Yuni Arfiani – Pegem Journal of Education and Instruction, 2024
This study aims to develop a Nature of Science (NOS) measurement instrument for high school students in the Mathematics and Natural Sciences program using COVID-19 cases that had occurred since the end of 2019. This study also attempts to validate test items with the Rasch model approach. The participants in this study are 194 high school students…
Descriptors: Test Construction, Science Tests, Scientific Principles, COVID-19
Peer reviewed
Direct link
Joo, Sean; Ali, Usama; Robin, Frederic; Shin, Hyo Jeong – Large-scale Assessments in Education, 2022
We investigated the potential impact of differential item functioning (DIF) on group-level mean and standard deviation estimates using empirical and simulated data in the context of large-scale assessment. For the empirical investigation, PISA 2018 cognitive domains (Reading, Mathematics, and Science) data were analyzed using Jackknife sampling to…
Descriptors: Test Items, Item Response Theory, Scores, Student Evaluation
Peer reviewed
Direct link
Yi-Hsuan Lee; Yue Jia – Applied Measurement in Education, 2024
Test-taking experience is a consequence of the interaction between students and assessment properties. We define a new notion, rapid-pacing behavior, to reflect two types of test-taking experience -- disengagement and speededness. To identify rapid-pacing behavior, we extend existing methods to develop response-time thresholds for individual items…
Descriptors: Adaptive Testing, Reaction Time, Item Response Theory, Test Format
Peer reviewed
PDF on ERIC
Deniz, Kaan Zulfikar; Ilican, Emel – International Journal of Assessment Tools in Education, 2021
This study aims to compare the G and Phi coefficients as estimated by D studies for a measurement tool with the G and Phi coefficients obtained from real cases in which items of differing difficulty levels were added and also to determine the conditions under which the D studies estimated reliability coefficients closer to reality. The study group…
Descriptors: Generalizability Theory, Test Items, Difficulty Level, Test Reliability
Peer reviewed
Direct link
Amber Dudley; Emma Marsden; Giulia Bovolenta – Language Testing, 2024
Vocabulary knowledge strongly predicts second language reading, listening, writing, and speaking. Yet, few tests have been developed to assess vocabulary knowledge in French. The primary aim of this pilot study was to design and initially validate the Context-Aligned Two Thousand Test (CA-TTT), following open research practices. The CA-TTT is a…
Descriptors: French, Vocabulary Development, Secondary School Students, Language Tests
Alicia A. Stoltenberg – ProQuest LLC, 2024
Multiple-select multiple-choice items, or multiple-choice items with more than one correct answer, are used to quickly assess content on standardized assessments. Because there are multiple keys to these item types, there are also multiple ways to score student responses to these items. The purpose of this study was to investigate how changing the…
Descriptors: Scoring, Evaluation Methods, Multiple Choice Tests, Standardized Tests