Showing 1 to 15 of 5,553 results
Peer reviewed
Direct link
Gerhard Tutz; Pascal Jordan – Journal of Educational and Behavioral Statistics, 2024
A general framework of latent trait item response models for continuous responses is given. In contrast to classical test theory (CTT) models, which traditionally distinguish between true scores and error scores, the responses are clearly linked to latent traits. It is shown that CTT models can be derived as special cases, but the model class is…
Descriptors: Item Response Theory, Responses, Scores, Models
Peer reviewed
PDF on ERIC Download full text
Karen Leary Duseau – North American Chapter of the International Group for the Psychology of Mathematics Education, 2023
Assessment is a topic of concern to all stakeholders in our educational system. Pattern-Based Questions are an assessment tool that offers an alternative to standardized assessments; they are grounded in generative learning pedagogy, which shows promise for engaging all learners and for supporting teaching and learning, but their validity has not yet…
Descriptors: Undergraduate Students, College Mathematics, Mathematics Skills, Thinking Skills
Peer reviewed
Direct link
Maria Bolsinova; Jesper Tijmstra; Leslie Rutkowski; David Rutkowski – Journal of Educational and Behavioral Statistics, 2024
Profile analysis is one of the main tools for studying whether differential item functioning can be related to specific features of test items. While relevant, profile analysis in its current form has two restrictions that limit its usefulness in practice: It assumes that all test items have equal discrimination parameters, and it does not test…
Descriptors: Test Items, Item Analysis, Generalizability Theory, Achievement Tests
Peer reviewed
PDF on ERIC Download full text
Eray Selçuk; Ergül Demir – International Journal of Assessment Tools in Education, 2024
This research aims to compare the ability and item parameter estimates of Item Response Theory under maximum likelihood and Bayesian approaches across different Monte Carlo simulation conditions. For this purpose, depending on changes in the prior distribution type, sample size, test length, and logistic model, the ability and item…
Descriptors: Item Response Theory, Item Analysis, Test Items, Simulation
Peer reviewed
PDF on ERIC Download full text
Bruno D. Zumbo – International Journal of Assessment Tools in Education, 2023
In line with the journal volume's theme, this essay considers lessons from the past and visions for the future of test validity. In the first part of the essay, a description of historical trends in test validity since the early 1900s leads to the natural question of whether the discipline has progressed in its definition and description of test…
Descriptors: Test Theory, Test Validity, True Scores, Definitions
Custer, Michael; Kim, Jongpil – Online Submission, 2023
This study uses an analysis of diminishing returns to examine the relationship between sample size and item parameter estimation precision when applying Masters' Partial Credit Model to polytomous items. Item data from the standardization of the Battelle Developmental Inventory, 3rd Edition were used. Each item was scored with a…
Descriptors: Sample Size, Item Response Theory, Test Items, Computation
Peer reviewed
Direct link
Anthony, Christopher J.; Styck, Kara M.; Volpe, Robert J.; Robert, Christopher R. – School Psychology, 2023
Although Direct Behavior Ratings (DBRs) were originally conceived as a marriage of direct behavioral observation and indirect behavior rating scales, recent research has indicated that they are affected by rater idiosyncrasies (rater effects), similar to other indirect forms of behavioral assessment. Most of this research has been conducted using…
Descriptors: Item Response Theory, Generalizability Theory, Interrater Reliability, Behavior Rating Scales
Yvette Jackson – ProQuest LLC, 2023
Rater-mediated activities in educational research occur when an expert judge or rater uses an instrument to judge persons or items and generates scale scores. Because scale scores arise from subjective judgment, they must undergo a quality control measure called rating quality. Rating quality in this study is broadly defined as the extent to which…
Descriptors: Educational Research, Evaluators, Test Theory, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Stemler, Steven E.; Naples, Adam – Practical Assessment, Research & Evaluation, 2021
When students receive the same score on a test, does that mean they know the same amount about the topic? The answer to this question is more complex than it may first appear. This paper compares classical and modern test theories in terms of how they estimate student ability. Crucial distinctions between the aims of Rasch Measurement and IRT are…
Descriptors: Item Response Theory, Test Theory, Ability, Computation
Peer reviewed
PDF on ERIC Download full text
Kartianom Kartianom; Heri Retnawati; Kana Hidayati – Journal of Pedagogical Research, 2024
Conducting a fair test is important for educational research. Unfair assessments can lead to gender disparities in academic achievement, ultimately resulting in disparities in opportunities, wages, and career choice. Differential Item Functioning (DIF) analysis is presented to provide evidence of whether a test is truly fair, where it does not harm…
Descriptors: Foreign Countries, Test Bias, Item Response Theory, Test Theory
Peer reviewed
Direct link
Jianbin Fu; Xuan Tan; Patrick C. Kyllonen – Applied Measurement in Education, 2024
A process is proposed to create the one-dimensional expected item characteristic curve (ICC) and test characteristic curve (TCC) for each trait in multidimensional forced-choice questionnaires based on the Rank-2PL (two-parameter logistic) item response theory models for forced-choice items with two or three statements. Some examples of ICC and…
Descriptors: Item Response Theory, Questionnaires, Measurement Techniques, Statistics
Peer reviewed
PDF on ERIC Download full text
Huebner, Alan; Skar, Gustaf B. – Practical Assessment, Research & Evaluation, 2021
Writing assessments often consist of students responding to multiple prompts, which are judged by more than one rater. To establish the reliability of these assessments, there exist different methods to disentangle variation due to prompts and raters, including classical test theory, Many Facet Rasch Measurement (MFRM), and Generalizability Theory…
Descriptors: Error of Measurement, Test Theory, Generalizability Theory, Item Response Theory
Peer reviewed
PDF on ERIC Download full text
Basman, Munevver – International Journal of Assessment Tools in Education, 2023
Ensuring the validity of a test requires checking that all items yield similar results across different groups of individuals. However, differential item functioning (DIF) occurs when individuals with equal ability levels from different groups perform differently on the same test item. Based on Item Response Theory and Classical Test…
Descriptors: Test Bias, Test Items, Test Validity, Item Response Theory
Peer reviewed
Direct link
Stefanie A. Wind; Benjamin Lugu; Yurou Wang – International Journal of Testing, 2025
Mokken Scale Analysis (MSA) is a nonparametric approach that offers exploratory tools for understanding the nature of item responses while emphasizing invariance requirements. MSA is often discussed as it relates to Rasch measurement theory, which also emphasizes invariance, but uses parametric models. Researchers who have compared and combined…
Descriptors: Item Response Theory, Scaling, Surveys, Evaluation Methods
Peer reviewed
Direct link
Zyxcban G. Wolfs; Saskia Brand-Gruwel; Henny P. A. Boshuizen – SAGE Open, 2023
The objective of this study was to develop and validate an instrument measuring the perception and interpretation of several distinct musical features (pitch, tonality, timing, loudness, and timbre). To this end, we developed the Implicit Tonal Ability Test (ITAT), a listening test containing 49 multiple-choice items. A total of 233 children aged 6…
Descriptors: Elementary School Students, Test Validity, Test Reliability, Age Differences