Publication Date
  In 2025: 0
  Since 2024: 3
  Since 2021 (last 5 years): 10
  Since 2016 (last 10 years): 18
  Since 2006 (last 20 years): 27
Author
  Bolt, Daniel M.: 3
  Harvill, Leo M.: 3
  Arce-Ferrer, Alvaro J.: 2
  Bagby, R. Michael: 2
  Nana Kim: 2
  Tutz, Gerhard: 2
  Abts, Koen: 1
  Ace, Merle E.: 1
  Adams, Kay Angona: 1
  Alliger, George M.: 1
  Allison, Howard K., II: 1
Publication Type
  Reports - Research: 47
  Journal Articles: 41
  Speeches/Meeting Papers: 11
  Reports - Evaluative: 6
  Dissertations/Theses -…: 3
  Reports - Descriptive: 3
  Tests/Questionnaires: 3
  Numerical/Quantitative Data: 1
Education Level
  Higher Education: 8
  Postsecondary Education: 4
  Elementary Education: 1
  Elementary Secondary Education: 1
  High Schools: 1
Audience
  Researchers: 3
Location
  California (Irvine): 1
  Canada: 1
  Florida: 1
  Germany: 1
  Guyana: 1
  Maine: 1
  Massachusetts: 1
  Mexico: 1
  Netherlands: 1
  Nicaragua: 1
  United States: 1
Nana Kim; Daniel M. Bolt – Journal of Educational and Behavioral Statistics, 2024
Some previous studies suggest that response times (RTs) on rating scale items can be informative about the content trait, but a more recent study suggests they may also be reflective of response styles. The latter result raises questions about the possible consideration of RTs for content trait estimation, as response styles are generally viewed…
Descriptors: Item Response Theory, Reaction Time, Response Style (Tests), Psychometrics
Rebekka Kupffer; Susanne Frick; Eunike Wetzel – Educational and Psychological Measurement, 2024
The multidimensional forced-choice (MFC) format is an alternative to rating scales in which participants rank items according to how well the items describe them. Currently, little is known about how to detect careless responding in MFC data. The aim of this study was to adapt a number of indices used for rating scales to the MFC format and…
Descriptors: Measurement Techniques, Alternative Assessment, Rating Scales, Questionnaires
Zou, Tongtong; Bolt, Daniel M. – Measurement: Interdisciplinary Research and Perspectives, 2023
Person misfit and person reliability indices in item response theory (IRT) can play an important role in evaluating the validity of a test or survey instrument at the respondent level. Prior empirical comparisons of these indices have been based on binary item response data and suggest that the two types of indices return very similar results.…
Descriptors: Item Response Theory, Rating Scales, Response Style (Tests), Measurement
Colombi, Roberto; Giordano, Sabrina; Tutz, Gerhard – Journal of Educational and Behavioral Statistics, 2021
A mixture of logit models is proposed that discriminates between responses to rating questions that are affected by a tendency to prefer middle or extremes of the scale regardless of the content of the item (response styles) and purely content-driven preferences. Explanatory variables are used to characterize the content-driven way of answering as…
Descriptors: Rating Scales, Response Style (Tests), Test Items, Models
Höhne, Jan Karem; Krebs, Dagmar – International Journal of Social Research Methodology, 2021
Measuring respondents' attitudes is a crucial task in numerous social science disciplines. A popular way to measure attitudes is to use survey questions with rating scales. However, research has shown that especially the design of rating scales can have a profound impact on respondents' answer behavior. While some scale design aspects, such as…
Descriptors: Attitude Measures, Rating Scales, Telephone Surveys, Response Style (Tests)
Courey, Karyssa A.; Lee, Michael D. – AERA Open, 2021
Student evaluations of teaching are widely used to assess instructors and courses. Using a model-based approach and Bayesian methods, we examine how the direction of the scale, labels on scales, and the number of options affect the ratings. We conduct a within-participants experiment in which respondents evaluate instructors and lectures using…
Descriptors: Student Evaluation of Teacher Performance, Rating Scales, Response Style (Tests), College Students
Rocconi, Louis M.; Dumford, Amber D.; Butler, Brenna – Research in Higher Education, 2020
Researchers, assessment professionals, and faculty in higher education increasingly depend on survey data from students to make pivotal curricular and programmatic decisions. The surveys collecting these data often require students to judge frequency (e.g., how often), quantity (e.g., how much), or intensity (e.g., how strongly). The response…
Descriptors: Student Surveys, College Students, Rating Scales, Response Style (Tests)
Lubbe, Dirk; Schuster, Christof – Journal of Educational and Behavioral Statistics, 2020
Extreme response style is the tendency of individuals to prefer the extreme categories of a rating scale irrespective of item content. It has been shown repeatedly that individual response style differences affect the reliability and validity of item responses and should, therefore, be considered carefully. To account for extreme response style…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Models
Nana Kim – ProQuest LLC, 2022
In educational and psychological assessments, attending to item response process can be useful in understanding and improving the validity of measurement. This dissertation consists of three studies each of which proposes and applies item response theory (IRT) methods for modeling and understanding cognitive/psychological response process in…
Descriptors: Psychometrics, Item Response Theory, Test Items, Cognitive Tests
D'Urso, E. Damiano; Tijmstra, Jesper; Vermunt, Jeroen K.; De Roover, Kim – Educational and Psychological Measurement, 2023
Assessing the measurement model (MM) of self-report scales is crucial to obtain valid measurements of individuals' latent psychological constructs. This entails evaluating the number of measured constructs and determining which construct is measured by which item. Exploratory factor analysis (EFA) is the most-used method to evaluate these…
Descriptors: Factor Analysis, Measurement Techniques, Self Evaluation (Individuals), Psychological Patterns
Zebing Wu – ProQuest LLC, 2024
Response style, one common aberrancy in non-cognitive assessments in psychological fields, is problematic in terms of inaccurate estimation of item and person parameters, which leads to serious reliability, validity, and fairness issues (Baumgartner & Steenkamp, 2001; Bolt & Johnson, 2009; Bolt & Newton, 2011). Response style refers to…
Descriptors: Response Style (Tests), Accuracy, Preferences, Psychological Testing
Wang, Rui; Krosnick, Jon A. – International Journal of Social Research Methodology, 2020
Questionnaires routinely measure unipolar and bipolar constructs using rating scales. Such rating scales can offer odd numbers of points, meaning that they have explicit middle alternatives, or they can offer even numbers of points, omitting the middle alternative. By examining four types of questions in six national or regional telephone surveys,…
Descriptors: Validity, Rating Scales, Questionnaires, Telephone Surveys
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
Valencia, Edgar – Assessment & Evaluation in Higher Education, 2020
The validity of student evaluation of teaching (SET) scores depends on minimum effect of extraneous response processes or biases. A bias may increase or decrease scores and change the relationship with other variables. In contrast, SET literature defines bias as an irrelevant variable correlated with SET scores, and among many, a relevant biasing…
Descriptors: Student Evaluation of Teacher Performance, College Faculty, Student Attitudes, Gender Bias
Kim, Nana; Bolt, Daniel M. – Educational and Psychological Measurement, 2021
This paper presents a mixture item response tree (IRTree) model for extreme response style. Unlike traditional applications of single IRTree models, a mixture approach provides a way of representing the mixture of respondents following different underlying response processes (between individuals), as well as the uncertainty present at the…
Descriptors: Item Response Theory, Response Style (Tests), Models, Test Items