Publication Date
In 2025: 1
Since 2024: 30
Since 2021 (last 5 years): 100
Descriptor
Response Style (Tests): 100
Item Response Theory: 38
Foreign Countries: 36
Test Items: 33
Reaction Time: 19
Achievement Tests: 16
International Assessment: 14
Models: 14
Accuracy: 13
Scores: 13
Secondary School Students: 13
Author
Ames, Allison J.: 3
Bolt, Daniel M.: 3
Hsieh, Shu-Hui: 3
Leventhal, Brian C.: 3
Braeken, Johan: 2
Bulut, Hatice Cigdem: 2
Bulut, Okan: 2
Gummer, Tobias: 2
Kim, Nana: 2
Tijmstra, Jesper: 2
Ulitzsch, Esther: 2
Publication Type
Journal Articles: 93
Reports - Research: 88
Dissertations/Theses -…: 5
Reports - Descriptive: 3
Reports - Evaluative: 3
Tests/Questionnaires: 3
Speeches/Meeting Papers: 2
Information Analyses: 1
Audience
Practitioners: 2
Researchers: 2
Location
Germany: 13
Czech Republic: 4
Greece: 4
Taiwan: 4
Australia: 3
China: 3
Italy: 3
Lithuania: 3
New Zealand: 3
Norway: 3
South Korea: 3
Papanastasiou, Elena C.; Stylianou-Georgiou, Agni – Assessment in Education: Principles, Policy & Practice, 2022
A frequently used indicator to reflect student performance is that of a test score. However, although tests are designed to assess students' knowledge or skills, other factors, such as test-taking strategies, can also affect test results. Therefore, the purpose of this study was to model the interrelationships among test-taking strategy instruction…
Descriptors: Test Wiseness, Metacognition, Multiple Choice Tests, Response Style (Tests)
J. A. Bialo; H. Li – Educational Assessment, 2024
This study evaluated differential item functioning (DIF) in achievement motivation items before and after using anchoring vignettes as a statistical tool to account for group differences in response styles across gender and ethnicity. We applied the nonparametric scoring of the vignettes to motivation items from the 2015 Programme for…
Descriptors: Test Bias, Student Motivation, Achievement Tests, Secondary School Students
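To make the vignette idea concrete: in the nonparametric scoring tradition this abstract draws on, a respondent's self-rating is re-expressed relative to their own ordered vignette ratings, which absorbs individual response-style differences. A minimal sketch (the two-vignette example and function name are illustrative assumptions; real implementations handle ties and vignette order violations more carefully):

```python
def rescore_with_vignettes(self_rating, vignette_ratings):
    """Nonparametric anchoring-vignette recoding (illustrative sketch).

    self_rating: the respondent's raw Likert self-assessment.
    vignette_ratings: the same respondent's ratings of J hypothetical
        vignettes, ordered from lowest to highest intended intensity.
    Returns a score on a 1..(2J + 1) scale: odd values mean the
    self-rating falls between (or outside) the vignette ratings,
    even values mean it ties one of them.
    """
    score = 1
    for v in vignette_ratings:
        if self_rating > v:
            score += 2          # clearly above this vignette
        elif self_rating == v:
            score += 1          # tied with this vignette
            break
        else:
            break               # below this vignette: stop
    return score


# A respondent rates two vignettes 2 and 4; a self-rating of 3 lands
# between them and maps to 3 on the 1..5 recoded scale.
print(rescore_with_vignettes(3, [2, 4]))  # -> 3
```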
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
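The decomposition the abstract describes can be illustrated with the common three-node coding of a 5-point rating into binary pseudo-items (midpoint, direction, extremity); this is a standard IRTree textbook example, not necessarily the exact tree used in the article:

```python
import numpy as np

# Map each 5-point response to three binary pseudo-items:
#   m: midpoint chosen?   d: agree side? (only defined off the midpoint)
#   e: extreme category?  (only defined off the midpoint)
# np.nan marks sub-decisions that are never reached on a branch.
TREE_CODING = {
    1: (0, 0, 1),            # strongly disagree: off-midpoint, disagree, extreme
    2: (0, 0, 0),            # disagree
    3: (1, np.nan, np.nan),  # midpoint: later nodes never reached
    4: (0, 1, 0),            # agree
    5: (0, 1, 1),            # strongly agree: extreme
}

responses = [3, 5, 2, 4, 1]
pseudo = np.array([TREE_CODING[r] for r in responses])
print(pseudo)  # each column can then be fit with its own IRT model
```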
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
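A rough sketch of the two adaptation levels, under simplifying assumptions (2PL items, maximum-information item selection, and a crude one-step ability update standing in for the proper estimator in the article's two-level model):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_2pl(theta, a, b):
    """2PL probability of a keyed response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def pick_next_item(theta, a, b, administered):
    """Within-subtest level: choose the unused item maximizing Fisher
    information I(theta) = a^2 * p * (1 - p) at the current theta."""
    p = p_2pl(theta, a, b)
    info = a ** 2 * p * (1.0 - p)
    info[list(administered)] = -np.inf       # never reuse an item
    return int(np.argmax(info))

# Two hypothetical 5-item subtest banks (a = discrimination, b = difficulty).
banks = [
    (rng.uniform(0.8, 2.0, 5), rng.normal(0.0, 1.0, 5)),
    (rng.uniform(0.8, 2.0, 5), rng.normal(0.0, 1.0, 5)),
]
true_theta, theta_hat = 0.5, 0.0
for a, b in banks:                 # between-subtest level: theta_hat from the
    administered = set()           # previous subtest seeds the next one
    for _ in range(3):             # 3 adaptive items per subtest
        j = pick_next_item(theta_hat, a, b, administered)
        administered.add(j)
        u = rng.random() < p_2pl(true_theta, a[j], b[j])  # simulated answer
        # crude one-step update toward the response residual (placeholder
        # for a real EAP/ML update)
        theta_hat += 0.5 * (u - p_2pl(theta_hat, a[j], b[j]))
    print(f"theta_hat after subtest: {theta_hat:.2f}")
```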
Bulut, Hatice Cigdem – International Journal of Assessment Tools in Education, 2021
Several studies have been published on disengaged test respondents, and others have analyzed disengaged survey respondents separately. For many large-scale assessments, students answer questionnaire and test items in succession. This study examines the percentage of students who continuously engage in disengaged responding behaviors across…
Descriptors: Reaction Time, Response Style (Tests), Foreign Countries, International Assessment
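Disengaged responding on tests is typically flagged from response times. A minimal sketch of such a flag, using a hypothetical 10%-of-median threshold rather than any rule from the study itself:

```python
import numpy as np

def flag_disengaged(rt_seconds, thresholds):
    """Flag likely disengaged (rapid) responses: an answer given faster
    than its item's threshold is treated as a rapid guess/skim.
    Operational threshold rules in the literature are more elaborate."""
    rt = np.asarray(rt_seconds, dtype=float)
    return rt < thresholds

# Hypothetical response times (rows = students, columns = items).
rts = np.array([[12.0, 30.0, 2.0],
                [ 1.5,  2.0, 1.0],
                [20.0, 25.0, 9.0]])
thresholds = 0.10 * np.median(rts, axis=0)   # 10% of each item's median
flags = flag_disengaged(rts, thresholds)
pct_disengaged = flags.mean(axis=1) * 100    # per-student disengagement rate
print(pct_disengaged)
```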
Courey, Karyssa A.; Lee, Michael D. – AERA Open, 2021
Student evaluations of teaching are widely used to assess instructors and courses. Using a model-based approach and Bayesian methods, we examine how the direction of the scale, labels on scales, and the number of options affect the ratings. We conduct a within-participants experiment in which respondents evaluate instructors and lectures using…
Descriptors: Student Evaluation of Teacher Performance, Rating Scales, Response Style (Tests), College Students
Cannon, Edmund; Cipriani, Giam Pietro – Assessment & Evaluation in Higher Education, 2022
Student evaluations of teaching may be subject to halo effects, where answers to one question are contaminated by answers to the other questions. Quantifying halo effects is difficult since correlation between answers may be due to underlying correlation of the items being tested. We use a novel identification procedure to test for a halo effect…
Descriptors: Student Evaluation of Teacher Performance, Bias, Response Style (Tests), Foreign Countries
He, Qingping; Meadows, Michelle; Black, Beth – Research Papers in Education, 2022
A potential negative consequence of high-stakes testing is inappropriate test behaviour involving individuals and/or institutions. Inappropriate test behaviour and test collusion can result in aberrant response patterns and anomalous test scores and invalidate the intended interpretation and use of test results. A variety of statistical techniques…
Descriptors: Statistical Analysis, High Stakes Tests, Scores, Response Style (Tests)
Herwin, Herwin; Dahalan, Shakila Che – Pegem Journal of Education and Instruction, 2022
This study aims to analyze and describe the response patterns of school exam participants based on the person fit method. This quantitative study focuses on a social science elementary school examination comprising 15 multiple-choice items and 137 participants' answer sheets. Data collection techniques were carried out…
Descriptors: Response Style (Tests), Multiple Choice Tests, Emotional Response, Psychological Patterns
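A standard person-fit index for spotting aberrant patterns like those analyzed here is the standardized log-likelihood statistic l_z; the sketch below assumes a dichotomous IRT model and is illustrative rather than the study's exact procedure:

```python
import numpy as np

def lz_person_fit(u, p):
    """Standardized log-likelihood person-fit statistic l_z for one
    examinee under a dichotomous IRT model.

    u: 0/1 item responses; p: model-implied success probabilities
    (e.g., from a Rasch/2PL model at the examinee's theta estimate).
    Large negative l_z suggests an aberrant response pattern."""
    u, p = np.asarray(u, float), np.asarray(p, float)
    logit = np.log(p / (1 - p))
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * logit ** 2)
    return (l0 - mean) / np.sqrt(var)

# Hypothetical: an examinee who misses easy items but solves hard ones.
p = np.array([0.9, 0.8, 0.7, 0.4, 0.2])   # model-implied probabilities
u = np.array([0,   0,   1,   1,   1  ])   # observed responses
print(lz_person_fit(u, p))                # clearly negative -> misfit
```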
Leventhal, Brian C.; Gregg, Nikole; Ames, Allison J. – Measurement: Interdisciplinary Research and Perspectives, 2022
Response styles introduce construct-irrelevant variance as a result of respondents systematically responding to Likert-type items regardless of content. Methods to account for response styles through data analysis as well as approaches to mitigating the effects of response styles during data collection have been well-documented. Recent approaches…
Descriptors: Response Style (Tests), Item Response Theory, Test Items, Likert Scales
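For readers new to the topic, simple count-based indices show what "response style" means operationally; model-based approaches like those reviewed in the article estimate these tendencies jointly with the trait rather than from raw counts:

```python
import numpy as np

def response_style_counts(X, n_categories=5):
    """Descriptive response-style indices for Likert data
    (rows = respondents, columns = items, categories 1..n_categories).
    A quick diagnostic only, not a substitute for a joint IRT model."""
    X = np.asarray(X)
    ers = np.isin(X, [1, n_categories]).mean(axis=1)   # extreme RS
    mrs = (X == (n_categories + 1) // 2).mean(axis=1)  # midpoint RS
    ars = (X > (n_categories + 1) // 2).mean(axis=1)   # acquiescence-like
    return ers, mrs, ars

X = np.array([[1, 5, 5, 1, 5],     # extreme responder
              [3, 3, 2, 3, 3],     # midpoint responder
              [4, 4, 5, 4, 3]])    # mild agree tendency
print(response_style_counts(X))
```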
Esther Ulitzsch; Steffi Pohl; Lale Khorramdel; Ulf Kroehne; Matthias von Davier – Journal of Educational and Behavioral Statistics, 2024
Questionnaires are by far the most common tool for measuring noncognitive constructs in psychology and educational sciences. Response bias may pose an additional source of variation between respondents that threatens the validity of conclusions drawn from questionnaire data. We present a mixture modeling approach that leverages response time data from…
Descriptors: Item Response Theory, Response Style (Tests), Questionnaires, Secondary School Students
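The core mixture idea can be sketched with a two-component normal mixture on log response times, fit by EM; the article's model is considerably richer (it also uses the responses themselves), so treat this as a toy illustration:

```python
import numpy as np

def two_class_rt_mixture(log_rt, n_iter=200):
    """EM for a two-component normal mixture on log response times:
    a fast 'disengaged' class and a slower 'engaged' class."""
    x = np.asarray(log_rt, float)
    mu = np.array([x.min(), x.max()])            # crude initialization
    sd = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: class responsibilities (normal constant cancels out)
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / sd
        resp = pi * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted means, SDs, and class proportions
        n_k = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-6
        pi = n_k / len(x)
    return pi, mu, sd, resp   # resp[:, 0] ~ P(fast class | log RT)

rng = np.random.default_rng(1)
log_rt = np.concatenate([rng.normal(0.5, 0.3, 50),    # rapid responders
                         rng.normal(3.0, 0.5, 200)])  # engaged responders
pi, mu, sd, resp = two_class_rt_mixture(log_rt)
print(pi.round(2), mu.round(2))
```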
Clariana, Roy B.; Park, Eunsung – Educational Technology Research and Development, 2021
Cognitive and metacognitive processes during learning depend on accurate monitoring; this investigation examines the influence of immediate item-level knowledge-of-correct-response feedback on cognition monitoring accuracy. In an optional end-of-course computer-based review lesson, participants (n = 68) were randomly assigned to groups to receive…
Descriptors: Feedback (Response), Cognitive Processes, Accuracy, Difficulty Level
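"Monitoring accuracy" is usually quantified by comparing item-level confidence with correctness. Two common descriptive indices, sketched under the assumption that confidence is elicited on a 0-1 scale (not necessarily the measures used in this study):

```python
import numpy as np

def monitoring_accuracy(confidence, correct):
    """Two simple calibration indices for item-level monitoring:
    bias (mean confidence minus proportion correct; > 0 = overconfidence)
    and absolute accuracy (mean squared confidence-correctness gap)."""
    c = np.asarray(confidence, float)   # confidence in [0, 1]
    k = np.asarray(correct, float)      # 1 = correct, 0 = incorrect
    bias = c.mean() - k.mean()
    absolute = np.mean((c - k) ** 2)
    return bias, absolute

# Hypothetical learner: confident throughout, right on only two of five.
print(monitoring_accuracy([0.9, 0.8, 0.9, 0.6, 0.7], [1, 0, 1, 0, 0]))
```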
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
Zachary J. Roman; Patrick Schmidt; Jason M. Miller; Holger Brandt – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Careless and insufficient effort responding (C/IER) is a situation where participants respond to survey instruments without considering the item content. This phenomenon adds noise to the data, leading to erroneous inferences. There are multiple approaches to identifying and accounting for C/IER in survey settings; of these approaches, the best performing…
Descriptors: Structural Equation Models, Bayesian Statistics, Response Style (Tests), Robustness (Statistics)
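A widely used first-pass screen for C/IER, complementary to the model-based Bayesian approach the abstract refers to, is the longstring index (the longest run of identical consecutive answers):

```python
import numpy as np

def longstring(row):
    """Longest run of identical consecutive answers; unusually long runs
    are a classic flag for careless/insufficient-effort responding."""
    row = np.asarray(row)
    best = run = 1
    for prev, cur in zip(row[:-1], row[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

X = [[3, 3, 3, 3, 3, 3, 2],   # suspicious: six identical answers in a row
     [4, 2, 5, 3, 4, 1, 2]]   # unremarkable
print([longstring(r) for r in X])   # -> [6, 1]
```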
Eirini M. Mitropoulou; Leonidas A. Zampetakis; Ioannis Tsaousis – Evaluation Review, 2024
Unfolding item response theory (IRT) models are important alternatives to dominance IRT models in describing the response processes on self-report tests. Their usage is common in personality measures, since they indicate potential differentiations in test score interpretation. This paper aims to gain a better insight into the structure of trait…
Descriptors: Foreign Countries, Adults, Item Response Theory, Personality Traits
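The dominance-versus-unfolding contrast is easy to visualize: under a dominance model the endorsement probability rises monotonically with the trait, whereas an unfolding (ideal-point) model peaks near the item's location. A schematic comparison (a Gaussian-shaped curve stands in for a full unfolding model such as the GGUM):

```python
import numpy as np

theta = np.linspace(-3, 3, 7)   # trait levels
delta = 0.0                     # item location

# Dominance model: endorsement rises monotonically with the trait.
p_dominance = 1 / (1 + np.exp(-(theta - delta)))

# Unfolding sketch: endorsement peaks where the person's trait level
# matches the item location and falls off on both sides.
p_unfolding = np.exp(-0.5 * (theta - delta) ** 2)
p_unfolding /= p_unfolding.max()   # scale to a 0-1 ceiling for comparison

for t, p_dom, p_unf in zip(theta, p_dominance, p_unfolding):
    print(f"theta={t:+.0f}  dominance={p_dom:.2f}  unfolding={p_unf:.2f}")
```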