Publication Date
  In 2025: 0
  Since 2024: 4
  Since 2021 (last 5 years): 14
  Since 2016 (last 10 years): 27
  Since 2006 (last 20 years): 66

Descriptor
  Models: 89
  Response Style (Tests): 89
  Item Response Theory: 29
  Test Items: 22
  Simulation: 14
  Statistical Analysis: 14
  Probability: 13
  Computation: 12
  Measurement Techniques: 12
  Comparative Analysis: 11
  Bias: 10

Author
  Bolt, Daniel M.: 4
  Johnson, Timothy R.: 3
  Dube, Chad: 2
  Heit, Evan: 2
  Huang, Hung-Yu: 2
  Kalisch, Stanley James, Jr.: 2
  Rotello, Caren M.: 2
  Rouder, Jeffrey N.: 2
  Smithson, Michael: 2
  Tutz, Gerhard: 2
  Verkuilen, Jay: 2

Publication Type
  Journal Articles: 65
  Reports - Research: 55
  Reports - Descriptive: 11
  Reports - Evaluative: 6
  Dissertations/Theses -…: 4
  Opinion Papers: 4
  Speeches/Meeting Papers: 4
  Collected Works - Proceedings: 1
  Dissertations/Theses: 1

Education Level
  Higher Education: 6
  Postsecondary Education: 4
  Elementary Education: 3
  Early Childhood Education: 2
  Elementary Secondary Education: 2
  Secondary Education: 2
  Grade 3: 1
  Grade 4: 1
  Grade 5: 1
  Grade 6: 1
  Grade 7: 1

Audience
  Researchers: 1

Location
  California: 2
  Germany: 2
  Wisconsin: 2
  Belgium: 1
  China: 1
  France: 1
  Italy: 1
  Japan: 1
  Kazakhstan: 1
  Kuwait: 1
  Massachusetts: 1

Assessments and Surveys
  Program for International…: 3
  Trends in International…: 3
  NEO Personality Inventory: 2
  Eysenck Personality Inventory: 1
  Hidden Figures Test: 1
  Rosenberg Self Esteem Scale: 1
  Wisconsin Knowledge and…: 1
Gregory M. Hurtz; Regi Mucino – Journal of Educational Measurement, 2024
The Lognormal Response Time (LNRT) model measures the speed of test-takers relative to the normative time demands of items on a test. The resulting speed parameters and model residuals are often analyzed for evidence of anomalous test-taking behavior associated with fast and poorly fitting response time patterns. Extending this model, we…
Descriptors: Student Reaction, Reaction Time, Response Style (Tests), Test Items
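The lognormal response time model the abstract above refers to is commonly written as log T_ij ~ Normal(beta_j - tau_i, sigma^2), with person speed tau and item time intensity beta. A minimal simulation sketch of that structure, with purely illustrative parameter values (not taken from the paper), showing how standardized residuals can flag unusually fast response patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 200, 30

# Person speed (tau) and item time intensity (beta); values are illustrative.
tau = rng.normal(0.0, 0.3, n_persons)    # higher tau => faster examinee
beta = rng.normal(4.0, 0.4, n_items)     # expected log-time demand per item
sigma = 0.5                              # residual SD on the log scale

# LNRT structure: log T_ij ~ Normal(beta_j - tau_i, sigma^2)
log_rt = beta[None, :] - tau[:, None] + rng.normal(0.0, sigma, (n_persons, n_items))
rt = np.exp(log_rt)                      # response times, always positive

# Standardized residuals; large negative values mark unexpectedly fast responses.
resid = (log_rt - (beta[None, :] - tau[:, None])) / sigma
fast_flags = (resid < -2).mean(axis=1)   # per-person rate of "too fast" items
```

In practice tau, beta, and sigma would be estimated from data rather than known; this sketch only illustrates the residual-based flagging idea the abstract describes.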
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
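Extreme response style can be illustrated with a toy simulation: two groups of respondents with the same underlying trait level but different category-choice probabilities on a 5-point Likert scale. The probabilities below are hypothetical, chosen only to show how ERS inflates endpoint use (and response variance) without shifting the expected score:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1000, 5  # respondents per group, 5-point Likert scale

# Hypothetical category probabilities for the same trait level:
content_only = np.array([0.10, 0.20, 0.40, 0.20, 0.10])  # content-driven responding
with_ers     = np.array([0.25, 0.10, 0.30, 0.10, 0.25])  # extreme response style

resp_plain = rng.choice(k, size=n, p=content_only) + 1  # categories 1..5
resp_ers   = rng.choice(k, size=n, p=with_ers) + 1

def extreme_rate(x):
    """Share of responses in the endpoint categories (1 or 5)."""
    return np.mean((x == 1) | (x == 5))
```

Here both distributions are symmetric around category 3, so group means barely differ while endpoint use roughly doubles under ERS — the kind of construct-irrelevant variance the IRT models in these papers aim to separate out.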
Sijia Huang; Seungwon Chung; Carl F. Falk – Journal of Educational Measurement, 2024
In this study, we introduced a cross-classified multidimensional nominal response model (CC-MNRM) to account for various response styles (RS) in the presence of cross-classified data. The proposed model allows slopes to vary across items and can explore impacts of observed covariates on latent constructs. We applied a recently developed variant of…
Descriptors: Response Style (Tests), Classification, Data, Models
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate constructs to be measured and response tendencies, specific constraints have to be imposed on varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
Hong, Maxwell; Rebouças, Daniella A.; Cheng, Ying – Journal of Educational Measurement, 2021
Response time has started to play an increasingly important role in educational and psychological testing, prompting the proposal of many response time models in recent years. However, response time modeling can be adversely impacted by aberrant response behavior. For example, test speededness can cause response time to certain items to deviate…
Descriptors: Reaction Time, Models, Computation, Robustness (Statistics)
Hsieh, Shu-Hui; Perri, Pier Francesco – Sociological Methods & Research, 2022
We propose some theoretical and empirical advances by supplying the methodology for analyzing the factors that influence two sensitive variables when data are collected by randomized response (RR) survey modes. First, we provide the framework for obtaining the maximum likelihood estimates of logistic regression coefficients under the RR simple and…
Descriptors: Surveys, Models, Response Style (Tests), Marijuana
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
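The IRTree idea mentioned above treats each Likert response as the outcome of sequential decisions (e.g., midpoint vs. not, response direction, extremity). One common pseudo-item coding for a 5-point scale can be sketched as follows; this is a generic scheme for illustration, not necessarily the exact tree used in the paper:

```python
def irtree_code(resp):
    """Decompose a 1-5 Likert response into (midpoint, direction, extreme)
    pseudo-items; None marks a node that is not reached for that response."""
    mid = 1 if resp == 3 else 0
    direction = None if resp == 3 else (1 if resp > 3 else 0)
    extreme = None if resp == 3 else (1 if resp in (1, 5) else 0)
    return mid, direction, extreme
```

Each pseudo-item is then modeled with its own IRT parameters, which is how the substantive trait (direction node) can be separated from response-style tendencies (midpoint and extremity nodes).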
Schweizer, Karl; Wang, Tengfei; Ren, Xuezhu – Journal of Experimental Education, 2022
The essay reports two studies on confirmatory factor analysis of speeded data with an effect of selective responding. This response strategy leads test takers to choose their own working order instead of completing the items in the given order. Methods for detecting speededness despite such a deviation from the given order are proposed and…
Descriptors: Factor Analysis, Response Style (Tests), Decision Making, Test Items
Ge, Yuan – ProQuest LLC, 2022
My dissertation research explored responder behaviors (e.g., response styles, carelessness, and misconceptions) that compromise psychometric quality and impact the interpretation and use of assessment results. Identifying these behaviors can help researchers understand and minimize their potentially construct-irrelevant…
Descriptors: Test Wiseness, Response Style (Tests), Item Response Theory, Psychometrics
Colombi, Roberto; Giordano, Sabrina; Tutz, Gerhard – Journal of Educational and Behavioral Statistics, 2021
A mixture of logit models is proposed that discriminates between responses to rating questions that are affected by a tendency to prefer middle or extremes of the scale regardless of the content of the item (response styles) and purely content-driven preferences. Explanatory variables are used to characterize the content-driven way of answering as…
Descriptors: Rating Scales, Response Style (Tests), Test Items, Models
Lubbe, Dirk; Schuster, Christof – Journal of Educational and Behavioral Statistics, 2020
Extreme response style is the tendency of individuals to prefer the extreme categories of a rating scale irrespective of item content. It has been shown repeatedly that individual response style differences affect the reliability and validity of item responses and should, therefore, be considered carefully. To account for extreme response style…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Models
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
Iannario, Maria; Manisera, Marica; Piccolo, Domenico; Zuccolotto, Paola – Sociological Methods & Research, 2020
In analyzing data from attitude surveys, it is common to consider the "don't know" responses as missing values. In this article, we present a statistical model commonly used for the analysis of responses/evaluations expressed on Likert scales and extended to take into account the presence of don't know responses. The main objective is to…
Descriptors: Response Style (Tests), Likert Scales, Statistical Analysis, Models
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
The forced-choice (FC) item formats used for noncognitive tests typically develop a set of response options that measure different traits and instruct respondents to make judgments among these options in terms of their preference to control the response biases that are commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Ulitzsch, Esther; von Davier, Matthias; Pohl, Steffi – Educational and Psychological Measurement, 2020
So far, modeling approaches for not-reached items have considered one single underlying process. However, missing values at the end of a test can occur for a variety of reasons. On the one hand, examinees may not reach the end of a test due to time limits and lack of working speed. On the other hand, examinees may not attempt all items and quit…
Descriptors: Item Response Theory, Test Items, Response Style (Tests), Computer Assisted Testing