Publication Date
In 2025: 0
Since 2024: 5
Since 2021 (last 5 years): 22
Since 2016 (last 10 years): 54
Since 2006 (last 20 years): 91
Descriptor
Test Items: 92
Item Response Theory: 83
Foreign Countries: 43
Grade 8: 41
Middle School Students: 39
Difficulty Level: 34
Mathematics Tests: 34
Grade 7: 26
Test Construction: 23
Test Reliability: 21
Test Validity: 21
Author
Tindal, Gerald: 8
Alonzo, Julie: 5
Anderson, Daniel: 4
Ketterlin-Geller, Leanne R.: 4
Liu, Kimy: 4
Bulut, Okan: 3
Kan, Adnan: 3
Park, Bitnara Jasmine: 3
Yovanoff, Paul: 3
Goodwin, Amanda: 2
Atar, Hakan Yavuz: 2
Education Level
Junior High Schools: 92
Middle Schools: 90
Secondary Education: 84
Elementary Education: 58
Grade 8: 42
Grade 7: 26
Intermediate Grades: 24
Grade 6: 20
High Schools: 16
Grade 5: 15
Grade 4: 14
Location
Turkey: 13
Germany: 5
Taiwan: 4
California: 3
Idaho: 3
Indonesia: 3
Massachusetts: 3
Singapore: 3
South Korea: 3
United States: 3
Arkansas: 2
Pham, Duy N.; Wells, Craig S.; Bauer, Malcolm I.; Wylie, E. Caroline; Monroe, Scott – Applied Measurement in Education, 2021
Assessments built on a theory of learning progressions are promising formative tools to support learning and teaching. The quality and usefulness of those assessments depend, in large part, on the validity of the theory-informed inferences about student learning made from the assessment results. In this study, we introduced an approach to address…
Descriptors: Formative Evaluation, Mathematics Instruction, Mathematics Achievement, Middle School Students
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Grantee Submission, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
Sun-Joo Cho; Amanda Goodwin; Matthew Naveiras; Paul De Boeck – Journal of Educational Measurement, 2024
Explanatory item response models (EIRMs) have been applied to investigate the effects of person covariates, item covariates, and their interactions in the fields of reading education and psycholinguistics. In practice, it is often assumed that the relationships between the covariates and the logit transformation of item response probability are…
Descriptors: Item Response Theory, Test Items, Models, Maximum Likelihood Statistics
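For orientation, both records above concern explanatory item response models. A generic doubly explanatory form (the notation below is standard textbook notation, not taken from these papers) writes the logit of a correct response as a linear function of person and item covariates:

```latex
\operatorname{logit} P(Y_{pi} = 1)
  = \Big( \sum_{j} \vartheta_j Z_{pj} + \varepsilon_p \Big)
    - \sum_{k} \beta_k X_{ik},
  \qquad \varepsilon_p \sim N(0, \sigma^2_{\varepsilon}),
```

where Z_pj are person covariates and X_ik are item covariates; the linearity of these covariate effects on the logit appears to be the assumption the truncated abstract refers to.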
Yi-Hsuan Lee; Yue Jia – Applied Measurement in Education, 2024
Test-taking experience is a consequence of the interaction between students and assessment properties. We define a new notion, rapid-pacing behavior, to reflect two types of test-taking experience -- disengagement and speededness. To identify rapid-pacing behavior, we extend existing methods to develop response-time thresholds for individual items…
Descriptors: Adaptive Testing, Reaction Time, Item Response Theory, Test Format
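As a rough illustration of item-level response-time thresholding of the kind the abstract describes, the sketch below flags responses faster than a fixed fraction of each item's median response time. The fraction-of-median rule and all values are assumptions for illustration, not the authors' method.

```python
import numpy as np

def flag_rapid_responses(rt, fraction=0.10):
    """Flag responses faster than a per-item threshold.

    rt       : 2-D array of response times, shape (n_examinees, n_items)
    fraction : threshold as a fraction of each item's median response time
               (an illustrative rule, not the authors' procedure)
    Returns a boolean array of the same shape; True marks a rapid response.
    """
    rt = np.asarray(rt, dtype=float)
    thresholds = fraction * np.nanmedian(rt, axis=0)  # one threshold per item
    return rt < thresholds                            # broadcast across examinees

# Invented example: three examinees on four items (times in seconds)
rt = np.array([[12.0, 30.5,  8.0, 22.0],
               [ 0.9,  1.2,  0.8,  1.1],   # consistently fast: likely rapid pacing
               [15.3, 28.0, 10.4, 19.7]])
print(flag_rapid_responses(rt))
```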
Deniz, Kaan Zulfikar; Ilican, Emel – International Journal of Assessment Tools in Education, 2021
This study aims to compare the G and Phi coefficients estimated by D studies for a measurement tool with the G and Phi coefficients obtained from real cases in which items of differing difficulty levels were added, and to determine the conditions under which D studies estimate reliability coefficients closer to reality. The study group…
Descriptors: Generalizability Theory, Test Items, Difficulty Level, Test Reliability
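For readers unfamiliar with the coefficients being compared: in a single-facet person-by-item design, the G (generalizability) and Phi (dependability) coefficients are simple functions of the estimated variance components and the number of items, which is what a D study varies. A minimal sketch (the variance-component values are invented for illustration):

```python
def d_study_coefficients(var_p, var_i, var_pi_e, n_items):
    """G and Phi coefficients for a p x i design with n_items items.

    var_p    : person variance component
    var_i    : item variance component
    var_pi_e : person-by-item interaction (plus error) variance component
    """
    rel_error = var_pi_e / n_items                # relative error variance
    abs_error = (var_i + var_pi_e) / n_items      # absolute error variance
    g_coef = var_p / (var_p + rel_error)          # generalizability (relative decisions)
    phi_coef = var_p / (var_p + abs_error)        # dependability (absolute decisions)
    return g_coef, phi_coef

# Illustrative (made-up) variance components from a G study
for n in (10, 20, 40):
    g, phi = d_study_coefficients(var_p=0.35, var_i=0.10, var_pi_e=0.55, n_items=n)
    print(f"n_items={n:3d}  G={g:.3f}  Phi={phi:.3f}")
```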
Chen, Chia-Wen; Andersson, Björn; Zhu, Jinxin – Journal of Educational Measurement, 2023
The certainty of response index (CRI) measures respondents' confidence level when answering an item. In conjunction with the answers to the items, previous studies have used descriptive statistics and arbitrary thresholds to identify student knowledge profiles with the CRIs. Whereas this approach overlooked the measurement error of the observed…
Descriptors: Item Response Theory, Factor Analysis, Psychometrics, Test Items
Ayva Yörü, Fatma Gökçen; Atar, Hakan Yavuz – Journal of Pedagogical Research, 2019
The aim of this study is to examine whether the items in the mathematics subtest of the Centralized High School Entrance Placement Test [HSEPT] administered in 2012 by the Ministry of National Education in Turkey show DIF according to gender and type of school. For this purpose, SIBTEST, Breslow-Day, Lord's chi-square and Raju's area…
Descriptors: Test Bias, Mathematics Tests, Test Items, Gender Differences
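Of the DIF methods named above, Raju's area has a convenient closed form. The sketch below follows Raju's (1988) exact unsigned-area result for the 2PL case; it is a generic illustration, not the study's implementation, and the parameter values are invented.

```python
import math

def raju_unsigned_area(a_ref, b_ref, a_foc, b_foc, D=1.7):
    """Exact unsigned area between two 2PL item characteristic curves (Raju, 1988).

    a_*, b_* : discrimination and difficulty estimates for the reference
               and focal groups (assumed already on a common scale)
    D        : the usual logistic scaling constant
    """
    if math.isclose(a_ref, a_foc):
        return abs(b_foc - b_ref)              # equal discriminations: area is the b difference
    k = D * a_ref * a_foc * (b_foc - b_ref) / (a_foc - a_ref)
    term = 2.0 * (a_foc - a_ref) / (D * a_ref * a_foc) * math.log1p(math.exp(k))
    return abs(term - (b_foc - b_ref))

# Invented item parameter estimates for one item
print(raju_unsigned_area(a_ref=1.2, b_ref=0.10, a_foc=0.9, b_foc=0.45))
```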
Muh. Fitrah; Anastasia Sofroniou; Ofianto; Loso Judijanto; Widihastuti – Journal of Education and e-Learning Research, 2024
This research uses Rasch model analysis to estimate the reliability and separation index of a mathematics test instrument integrated with cultural architecture structures for measuring students' mathematical thinking abilities. The study involved 357 eighth-grade students from six public junior high schools in Bima. The selection of schools was…
Descriptors: Mathematics Tests, Item Response Theory, Test Reliability, Indexes
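The reliability and separation indices reported in Rasch analyses are tied by a fixed identity: separation equals sqrt(R / (1 - R)), and (4 x separation + 1) / 3 gives the conventional estimate of distinguishable strata. A small sketch, with an invented reliability value:

```python
import math

def separation_from_reliability(reliability):
    """Rasch separation index implied by a reliability coefficient,
    plus the conventional estimate of statistically distinct strata."""
    sep = math.sqrt(reliability / (1.0 - reliability))
    strata = (4.0 * sep + 1.0) / 3.0
    return sep, strata

# Illustrative (made-up) person reliability
sep, strata = separation_from_reliability(0.82)
print(f"separation={sep:.2f}, strata={strata:.2f}")
```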
Atilgan, Hakan; Demir, Elif Kübra; Ogretmen, Tuncay; Basokcu, Tahsin Oguz – International Journal of Progressive Education, 2020
The reliability that can be achieved when open-ended questions are used in large-scale selection tests has become a critical question. One of the aims of the present study is to determine what the reliability would be in the event that the answers given by test-takers are scored by experts when open-ended short answer questions are used in…
Descriptors: Foreign Countries, Secondary School Students, Test Items, Test Reliability
Kim, Dong-In; Julian, Marc; Hermann, Pam – Online Submission, 2022
In test equating, one critical equating property is the group invariance property which indicates that the equating function used to convert performance on each alternate form to the reporting scale should be the same for various subgroups. To mitigate the impact of disrupted learning on the item parameters during the COVID-19 pandemic, a…
Descriptors: COVID-19, Pandemics, Test Format, Equated Scores
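As a generic illustration of what a group-invariance check can look like, the sketch below fits a simple mean-sigma linear conversion per subgroup and summarizes the largest disagreement with the total-group conversion. The mean-sigma method and the summary statistic are assumptions for this sketch, not the program's actual procedure, and the data are invented.

```python
import numpy as np

def linear_equating(x_scores, y_scores):
    """Mean-sigma linear conversion of form X scores onto the form Y scale:
    e(x) = slope * x + intercept."""
    slope = np.std(y_scores, ddof=1) / np.std(x_scores, ddof=1)
    intercept = np.mean(y_scores) - slope * np.mean(x_scores)
    return slope, intercept

def invariance_check(groups, score_points):
    """Compare each subgroup's conversion with the total-group conversion.

    groups       : dict mapping labels to (x_scores, y_scores) pairs; must include "total"
    score_points : raw-score points at which conversions are compared
    Returns, per subgroup, the largest absolute difference in converted scores.
    """
    slope_t, int_t = linear_equating(*groups["total"])
    result = {}
    for label, (x, y) in groups.items():
        if label == "total":
            continue
        slope_g, int_g = linear_equating(x, y)
        diffs = np.abs((slope_g - slope_t) * score_points + (int_g - int_t))
        result[label] = float(diffs.max())
    return result

# Invented example: total group and one subgroup
rng = np.random.default_rng(1)
groups = {
    "total": (rng.normal(25, 6, 500), rng.normal(27, 5, 500)),
    "subgroup A": (rng.normal(23, 6, 200), rng.normal(26, 5, 200)),
}
print(invariance_check(groups, score_points=np.arange(0, 51)))
```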
Soysal, Sumeyra; Yilmaz Kogar, Esin – International Journal of Assessment Tools in Education, 2021
In this study, whether item position effects lead to DIF when different test booklets are used was investigated. To do this, the methods of Lord's chi-square and Raju's unsigned area with the 3PL model, both with and without item purification, were used. When the performance of the methods was compared, it was revealed that…
Descriptors: Item Response Theory, Test Bias, Test Items, Comparative Analysis
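Since Lord's chi-square appears both here and in the Ayva Yörü and Atar record above, its general form is worth stating. In generic notation (not taken from either paper), with v-hat_iR and v-hat_iF the item-parameter estimates for the reference and focal groups after scale linking and Sigma-hat their estimated covariance matrices:

```latex
\chi^2_i \;=\; (\hat{v}_{iR} - \hat{v}_{iF})^{\top}
               \left( \hat{\Sigma}_{iR} + \hat{\Sigma}_{iF} \right)^{-1}
               (\hat{v}_{iR} - \hat{v}_{iF}),
```

which is referred to a chi-square distribution with degrees of freedom equal to the number of item parameters compared (three under the 3PL model).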
Stella Eteng-Uket – Numeracy, 2023
This paper describes a study that focused on developing, validating and standardizing a dyscalculia test, henceforth called the Dyscalculia Test. Out of the 4,758,800 students in Nigeria's upper primary and junior secondary schools, I randomly drew a sample of 2,340 students, using a multistage sampling procedure that applied various sampling…
Descriptors: Test Construction, Learning Disabilities, Elementary School Students, Junior High School Students
Qi Huang; Daniel M. Bolt; Weicong Lyu – Large-scale Assessments in Education, 2024
Large scale international assessments depend on invariance of measurement across countries. An important consideration when observing cross-national differential item functioning (DIF) is whether the DIF actually reflects a source of bias, or might instead be a methodological artifact reflecting item response theory (IRT) model misspecification.…
Descriptors: Test Items, Item Response Theory, Test Bias, Test Validity
Alhadabi, Amal; Aldhafri, Said – European Journal of Educational Research, 2021
The current study investigated the psychometric properties of the Student-Teacher Relationship Measure (STRM) using Rasch analysis in a sample of middle school female students (N = 995). Rasch Principal Components Analysis revealed psychometric support for two subscales (i.e., Academic and Social Relations). Summary statistics showed good psychometric…
Descriptors: Measures (Individuals), Teacher Student Relationship, Rating Scales, Item Response Theory
Ramadhani, Rahmi; Saragih, Sahat; Napitupulu, E. Elvis – Mathematics Teaching Research Journal, 2022
Statistical reasoning ability is one of the essential skills in developing competence, which is one of the Sustainable Development Goals (SDGs). This study aims to explore the statistical reasoning ability of junior high school students in descriptive statistics learning. The investigation directs students to determine their level of statistical…
Descriptors: Statistics, Thinking Skills, Statistics Education, Junior High School Students