Showing 1 to 15 of 140 results
Peer reviewed
Direct link
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
Peer reviewed
Direct link
van der Linden, Wim J.; Belov, Dmitry I. – Journal of Educational Measurement, 2023
A test of item compromise is presented which combines the test takers' responses and response times (RTs) into a statistic defined as the number of correct responses on the item for test takers with RTs flagged as suspicious. The test has null and alternative distributions belonging to the well-known family of compound binomial distributions, is…
Descriptors: Item Response Theory, Reaction Time, Test Items, Item Analysis
Peer reviewed
Direct link
E. Damiano D'Urso; Jesper Tijmstra; Jeroen K. Vermunt; Kim De Roover – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance (MI) is required for validly comparing latent constructs measured by multiple ordinal self-report items. Non-invariances may occur when disregarding (group differences in) an acquiescence response style (ARS; an agreeing tendency regardless of item content). If non-invariance results solely from neglecting ARS, one should…
Descriptors: Error of Measurement, Structural Equation Models, Construct Validity, Measurement Techniques
Peer reviewed
Direct link
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
Peer reviewed
Direct link
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
Peer reviewed
Full text available (PDF on ERIC)
Babcock, Ben; Siegel, Zachary D. – Practical Assessment, Research & Evaluation, 2022
Research about repeated testing has revealed that retaking the same exam form generally does not advantage or disadvantage failing candidates in selected-response style credentialing exams. Feinberg, Raymond, and Haist (2015) found a contributing factor to this phenomenon: people answering items incorrectly on both attempts give the same incorrect…
Descriptors: Multiple Choice Tests, Item Analysis, Test Items, Response Style (Tests)
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
Peer reviewed
Direct link
Spratto, Elisabeth M.; Leventhal, Brian C.; Bandalos, Deborah L. – Educational and Psychological Measurement, 2021
In this study, we examined the results and interpretations produced from two different IRTree models--one using paths consisting of only dichotomous decisions, and one using paths consisting of both dichotomous and polytomous decisions. We used data from two versions of an impulsivity measure. In the first version, all the response options had…
Descriptors: Comparative Analysis, Item Response Theory, Decision Making, Data Analysis
Peer reviewed
Full text available (PDF on ERIC)
Alweis, Richard L.; Fitzpatrick, Caroline; Donato, Anthony A. – Journal of Education and Training Studies, 2015
Introduction: The Multiple Mini-Interview (MMI) format appears to mitigate individual rater biases. However, the format itself may introduce structural systematic bias, favoring extroverted personality types. This study aimed to gain a better understanding of these biases from the perspective of the interviewer. Methods: A sample of MMI…
Descriptors: Interviews, Interrater Reliability, Qualitative Research, Semi Structured Interviews
Peer reviewed
Direct link
Yorke, Mantz; Orr, Susan; Blair, Bernadette – Studies in Higher Education, 2014
There has long been the suspicion amongst staff in Art & Design that the ratings given to their subject disciplines in the UK's National Student Survey are adversely affected by a combination of circumstances--a "perfect storm". The "perfect storm" proposition is tested by comparing ratings for Art & Design with those…
Descriptors: Student Surveys, National Surveys, Art Education, Design
Peer reviewed
Direct link
Glass, Arnold Lewis; Sinha, Neha – Educational Psychology, 2013
In the context of an upper-level psychology course, even when students were given an opportunity to refer to text containing the answers and change their exam responses in order to improve their exam scores, their performance on these questions improved slightly or not at all. Four experiments evaluated competing explanations for the students'…
Descriptors: Academic Achievement, Item Analysis, Test Norms, Comparative Testing
Peer reviewed
Direct link
Dube, Chad; Rotello, Caren M.; Heit, Evan – Psychological Review, 2010
A belief bias effect in syllogistic reasoning (Evans, Barston, & Pollard, 1983) is observed when subjects accept more valid than invalid arguments and more believable than unbelievable conclusions and show greater overall accuracy in judging arguments with unbelievable conclusions. The effect is measured with a contrast of contrasts, comparing…
Descriptors: Response Style (Tests), Item Analysis, Error of Measurement, Replication (Evaluation)
Peer reviewed
Direct link
Ferrando, Pere J.; Lorenzo-Seva, Urbano; Chico, Eliseo – Structural Equation Modeling: A Multidisciplinary Journal, 2009
This article proposes procedures for simultaneously assessing and controlling acquiescence and social desirability in questionnaire items. The procedures are based on a semi-restricted factor-analytic tridimensional model, and can be used with binary, graded-response, or more continuous items. We discuss procedures for fitting the model (item…
Descriptors: Factor Analysis, Response Style (Tests), Questionnaires, Test Items
Peer reviewed
Direct link
Bolt, Daniel M.; Johnson, Timothy R. – Applied Psychological Measurement, 2009
A multidimensional item response theory model that accounts for response style factors is presented. The model, a multidimensional extension of Bock's nominal response model, is shown to allow for the study and control of response style effects in ordered rating scale data so as to reduce bias in measurement of the intended trait. In the current…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Individual Differences
Peer reviewed
Jobson, J. D. – Educational and Psychological Measurement, 1976
Given a sample of responses to a pair of questionnaire items with interval scale values, it is sometimes of interest to know the degree to which respondents select the same response for both items. The coefficient of equality measures the departure from independence in the direction of equality. (RC)
Descriptors: Correlation, Item Analysis, Questionnaires, Response Style (Tests)