Showing 1 to 15 of 57 results
Peer reviewed; full-text PDF available on ERIC
Muh. Fitrah; Anastasia Sofroniou; Ofianto; Loso Judijanto; Widihastuti – Journal of Education and e-Learning Research, 2024
This research uses Rasch model analysis to estimate the reliability and separation index of an integrated mathematics test instrument with a cultural architecture structure, used to measure students' mathematical thinking abilities. The study involved 357 eighth-grade students from six public junior high schools in Bima. The selection of schools was…
Descriptors: Mathematics Tests, Item Response Theory, Test Reliability, Indexes
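For readers unfamiliar with the Rasch statistics named in the entry above, the following Python sketch shows one generic way to compute a person reliability and separation index from estimated person measures and their standard errors. The numbers are hypothetical and the calculation is a textbook illustration, not the analysis reported in the study.

import numpy as np

# Hypothetical person measures (logits) and their standard errors.
theta = np.array([-1.2, -0.4, 0.1, 0.6, 1.3, 2.0])
se = np.array([0.45, 0.40, 0.38, 0.39, 0.42, 0.50])

observed_var = theta.var(ddof=1)        # variance of the estimated measures
error_var = np.mean(se ** 2)            # mean square measurement error
true_var = max(observed_var - error_var, 0.0)

reliability = true_var / observed_var   # Rasch-style person reliability
separation = (reliability / (1 - reliability)) ** 0.5 if reliability < 1 else float("inf")

print(f"person reliability ~ {reliability:.2f}, separation index ~ {separation:.2f}")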
Peer reviewed
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
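As a point of reference for the DIF terminology in the entry above, here is a minimal sketch of Mantel-Haenszel DIF screening, a common non-IRT approach; the stratified counts are hypothetical, and this is not the model-based analysis the article itself describes.

import numpy as np

# Per total-score stratum: reference correct/incorrect, focal correct/incorrect.
counts = np.array([
    [30, 20, 22, 28],
    [45, 15, 35, 25],
    [60, 10, 52, 18],
])
A, B, C, D = counts.T
N = counts.sum(axis=1)

alpha_mh = (A * D / N).sum() / (B * C / N).sum()  # common odds ratio across strata
mh_d_dif = -2.35 * np.log(alpha_mh)               # ETS delta scale; absolute values of
                                                  # about 1.5 or more are conventionally
                                                  # flagged as large DIF

print(f"alpha_MH = {alpha_mh:.2f}, MH D-DIF = {mh_d_dif:.2f}")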
Peer reviewed; full-text PDF available on ERIC
Gübes, Nese; Uyar, Seyma – International Journal of Progressive Education, 2020
This study aims to compare the performance of different small sample equating methods in the presence and absence of differential item functioning (DIF) in common items. In this research, Tucker linear equating, Levine linear equating, unsmoothed and pre-smoothed (C=4) chained equipercentile equating, and simplified circle arc equating methods…
Descriptors: Test Bias, Equated Scores, Test Items, Methods
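As background for the equating methods listed above, the sketch below implements the generic linear equating function that the Tucker and Levine methods build on (the two differ in how the synthetic-population means and standard deviations are estimated from the common items). The scores are hypothetical, and this simplified form is an illustration rather than any of the specific methods compared in the study.

import numpy as np

form_x = np.array([18, 22, 25, 27, 30, 33, 36])  # hypothetical scores on Form X
form_y = np.array([20, 23, 27, 29, 31, 35, 38])  # hypothetical scores on Form Y

mu_x, sd_x = form_x.mean(), form_x.std(ddof=1)
mu_y, sd_y = form_y.mean(), form_y.std(ddof=1)

def linear_equate(x):
    # Map a Form X score onto the Form Y scale (mean-sigma linear equating).
    return sd_y / sd_x * (x - mu_x) + mu_y

print(f"Form Y equivalent of a Form X score of 28: {linear_equate(28):.1f}")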
Peer reviewed
Gantt, Allison L.; Paoletti, Teo; Corven, Julien – International Journal of Science and Mathematics Education, 2023
Covariational reasoning (or the coordination of two dynamically changing quantities) is central to secondary STEM subjects, but research has yet to fully explore its applicability to elementary and middle-grade levels within various STEM fields. To address this need, we selected a globally referenced STEM assessment--the Trends in International…
Descriptors: Incidence, Abstract Reasoning, Mathematics Education, Science Education
Peer reviewed
Ma, Wenchao; de la Torre, Jimmy – Journal of Educational and Behavioral Statistics, 2019
Solving a constructed-response item usually requires successfully performing a sequence of tasks. Each task could involve different attributes, and those required attributes may be "condensed" in various ways to produce the responses. The sequential generalized deterministic input noisy "and" gate model is a general cognitive…
Descriptors: Test Items, Cognitive Measurement, Models, Hypothesis Testing
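For orientation, the sketch below implements the item response function of the basic DINA model, a simpler relative of the sequential generalized DINA model named above; the Q-matrix row, attribute profile, and slip and guessing parameters are all hypothetical.

import numpy as np

q_row = np.array([1, 0, 1])   # attributes the item requires (one row of a Q-matrix)
alpha = np.array([1, 1, 0])   # examinee's attribute-mastery profile
slip, guess = 0.1, 0.2        # item slip and guessing parameters

eta = int(np.all(alpha >= q_row))            # 1 only if every required attribute is mastered
p_correct = (1 - slip) ** eta * guess ** (1 - eta)

print(f"P(correct) = {p_correct:.2f}")       # 0.20 here, because one required attribute is missing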
Peer reviewed; full-text PDF available on ERIC
Cascella, Clelia; Giberti, Chiara; Bolondi, Giorgio – Education Sciences, 2021
This study is aimed at exploring how different formulations of the same mathematical item may influence students' answers, and whether or not boys and girls are equally affected by differences in presentation. An experimental design was employed: the same stem-items (i.e., items with the same mathematical content and question intent) were…
Descriptors: Mathematics Achievement, Mathematics Tests, Achievement Tests, Scores
Peer reviewed
Lin, Jing-Wen; Yu, Ruan-Ching – Asia Pacific Journal of Education, 2022
Modelling ability is one of the essential elements of the latest educational reforms, and the Trends in International Mathematics and Science Study (TIMSS) is a curriculum-based assessment that allows educational systems worldwide to inspect curricular influences. The aims of this study were to examine the role of modelling ability in the…
Descriptors: Grade 8, Educational Change, Cross Cultural Studies, Test Items
Peer reviewed
Delafontaine, Jolien; Chen, Changsheng; Park, Jung Yeon; Van den Noortgate, Wim – Large-scale Assessments in Education, 2022
In cognitive diagnosis assessment (CDA), the impact of misspecified item-attribute relations (or "Q-matrix") designed by subject-matter experts has been a great challenge to real-world applications. This study examined parameter estimation of the CDA with the expert-designed Q-matrix and two refined Q-matrices for international…
Descriptors: Q Methodology, Matrices, Cognitive Measurement, Diagnostic Tests
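To make the Q-matrix idea concrete, here is a minimal sketch that lists the ideal (deterministic) response patterns implied by a small, hypothetical Q-matrix; Q-matrix refinement methods work by comparing observed responses against structures of this kind.

import itertools
import numpy as np

Q = np.array([        # rows = items, columns = attributes
    [1, 0, 0],
    [0, 1, 0],
    [1, 1, 0],
    [0, 1, 1],
])

profiles = np.array(list(itertools.product([0, 1], repeat=Q.shape[1])))

# An ideal response of 1 means the profile masters every attribute the item requires.
ideal = (profiles @ Q.T == Q.sum(axis=1)).astype(int)

for profile, responses in zip(profiles, ideal):
    print(profile, "->", responses)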
Peer reviewed; full-text PDF available on ERIC
Ilhan, Mustafa; Öztürk, Nagihan Boztunç; Sahin, Melek Gülsah – Participatory Educational Research, 2020
In this research, the effect of an item's type and cognitive level on its difficulty index was investigated. The data source of the study consisted of the responses of the 12,535 students in the Turkey sample of TIMSS 2015 (6,079 eighth-grade and 6,456 fourth-grade students). The responses covered a total of 215 items at the eighth-grade…
Descriptors: Test Items, Difficulty Level, Cognitive Processes, Responses
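For reference, the classical difficulty (p) index examined above is the proportion of examinees answering an item correctly, or, for a constructed-response item, the mean score divided by the maximum score. A minimal sketch with hypothetical responses:

import numpy as np

mc_item = np.array([1, 0, 1, 1, 0, 1, 1, 0])  # scored multiple-choice responses (0/1)
cr_item = np.array([2, 1, 0, 2, 1, 1, 2, 0])  # constructed-response scores, maximum = 2

p_mc = mc_item.mean()        # proportion answering correctly
p_cr = cr_item.mean() / 2    # mean score divided by the maximum score

print(f"p (multiple choice) = {p_mc:.2f}, p (constructed response) = {p_cr:.2f}")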
Peer reviewed
Liou, Pey-Yan; Bulut, Okan – Research in Science Education, 2020
The purpose of this study was to examine eighth-grade students' science performance in terms of two test design components: item format and cognitive domain. The Taiwanese data came from the 2011 administration of the Trends in International Mathematics and Science Study (TIMSS), one of the major international large-scale assessments…
Descriptors: Foreign Countries, Middle School Students, Grade 8, Science Achievement
National Assessment Governing Board, 2019
Since 1973, the National Assessment of Educational Progress (NAEP) has gathered information about student achievement in mathematics. The NAEP assessment in mathematics has two components that differ in purpose. One assessment measures long-term trends in achievement among 9-, 13-, and 17-year-old students by using the same basic design each time.…
Descriptors: National Competency Tests, Mathematics Achievement, Grade 4, Grade 8
Moyer, Eric L.; Galindo, Jennifer – National Assessment Governing Board, 2022
The National Assessment Governing Board has a legislatively mandated responsibility to develop National Assessment of Educational Progress (NAEP) achievement levels. The Board Policy Statement on Developing Student Achievement Levels for the National Assessment of Educational Progress provides policy definitions of "NAEP Basic,"…
Descriptors: Reading Achievement, Mathematics Achievement, Reading Tests, Mathematics Tests
Peer reviewed
Fishbein, Bethany; Martin, Michael O.; Mullis, Ina V. S.; Foy, Pierre – Large-scale Assessments in Education, 2018
Background: TIMSS 2019 is the first assessment in the TIMSS transition to a computer-based assessment system, called eTIMSS. The TIMSS 2019 Item Equivalence Study was conducted in advance of the field test in 2017 to examine the potential for mode effects on the psychometric behavior of the TIMSS mathematics and science trend items induced by the…
Descriptors: Mathematics Achievement, Science Achievement, Mathematics Tests, Elementary Secondary Education
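As a simple illustration of what a mode-effect check can look like, the sketch below compares hypothetical per-item proportions correct between a paper and a digital administration of the same items; it is not the psychometric analysis used in the eTIMSS equivalence study.

import numpy as np

p_paper = np.array([0.62, 0.55, 0.71, 0.48, 0.80])    # hypothetical item p-values, paper mode
p_digital = np.array([0.58, 0.54, 0.66, 0.47, 0.79])  # hypothetical item p-values, digital mode

diff = p_digital - p_paper
print("per-item difference:", diff.round(2))
print(f"mean mode effect = {diff.mean():+.3f} (negative = harder on screen)")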
Peer reviewed
Ring, Malte; Brahm, Taiga; Randler, Christoph – International Journal of Science Education, 2019
In science, graphs are used to visualise data, relationships and scientific principles. Basic graph reading operations, i.e. reading single data points, recognising trends, and conducting small extrapolations from the data, are important skills for pupils and lay the groundwork for a comprehensive understanding of data visualisations across…
Descriptors: Graphs, Difficulty Level, Data Interpretation, Test Items
Peer reviewed
Wang, Yan; Kim, Eun Sook; Dedrick, Robert F.; Ferron, John M.; Tan, Tony – Educational and Psychological Measurement, 2018
Wording effects associated with positively and negatively worded items have been found in many scales. Such effects may threaten construct validity and introduce systematic bias in the interpretation of results. A variety of models have been applied to address wording effects, such as the correlated uniqueness model and the correlated traits and…
Descriptors: Test Items, Test Format, Correlation, Construct Validity
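To make the idea of a wording effect concrete, the following small simulation (not the correlated-uniqueness or correlated-traits models compared in the article) gives negatively worded items a shared nuisance method factor, so that even after reverse-scoring they correlate more strongly with one another than with the positively worded items; all values are simulated.

import numpy as np

rng = np.random.default_rng(0)
n = 2000
trait = rng.normal(size=n)    # substantive construct
method = rng.normal(size=n)   # nuisance factor tied to negative wording

pos = np.stack([0.7 * trait + rng.normal(scale=0.7, size=n) for _ in range(3)], axis=1)
neg = np.stack([-0.7 * trait + 0.4 * method + rng.normal(scale=0.6, size=n) for _ in range(3)], axis=1)

items = np.hstack([pos, -neg])   # reverse-score the negatively worded items
corr = np.corrcoef(items, rowvar=False)

# The last three (reverse-scored) items correlate more strongly with each
# other than with the first three, the footprint of a wording effect.
print(corr.round(2))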