Showing 31 to 45 of 1,389 results
Peer reviewed
Saskia van Laar; Jianan Chen; Johan Braeken – Measurement: Interdisciplinary Research and Perspectives, 2024
Questionnaires in educational research assessing students' attitudes and beliefs are low-stakes for the students. As a consequence, students might not always consistently respond to a questionnaire scale but instead provide more random response patterns with no clear link to items' contents. We study inter-individual differences in students'…
Descriptors: Foreign Countries, Response Style (Tests), Grade 8, Secondary School Students
Peer reviewed
Jan Karem Höhne; Achim Goerres – International Journal of Social Research Methodology, 2024
The measurement of political solidarities and related concepts is an important endeavor in numerous scientific disciplines, such as political and social science research. European surveys, such as the Eurobarometer, frequently measure these concepts for people's home country and Europe, raising questions with respect to the order of precedence.…
Descriptors: Surveys, Attitude Measures, Political Attitudes, Foreign Countries
Peer reviewed
Lisa Pilotek; Mohammad N. Karimi; Tobias Richter – Discourse Processes: A Multidisciplinary Journal, 2024
When researching socioscientific topics, particularly on the Internet, readers face multiple texts that they must integrate into a coherent mental model. Previous research in monolingual settings has found that comprehension is biased toward readers' prior beliefs (text-belief consistency effect). Considering that the Internet is multilingual,…
Descriptors: Foreign Countries, Language of Instruction, Information Dissemination, Multilingual Materials
Peer reviewed
Stefanie A. Wind; Beyza Aksu-Dunya – Applied Measurement in Education, 2024
Careless responding is a pervasive concern in research using affective surveys. Although researchers have considered various methods for identifying careless responses, studies are limited that consider the utility of these methods in the context of computer adaptive testing (CAT) for affective scales. Using a simulation study informed by recent…
Descriptors: Response Style (Tests), Computer Assisted Testing, Adaptive Testing, Affective Measures
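The specific careless-responding detection methods compared in this study are not named in the snippet above; purely as a hedged illustration, the sketch below computes the longstring index, one widely used indicator that flags unusually long runs of identical consecutive answers on a Likert-type scale. The function name and example data are invented for illustration.

import numpy as np

def longstring(responses):
    # Longest run of identical consecutive responses in one response vector.
    # Large values are a common (if crude) flag for careless responding.
    responses = np.asarray(responses)
    longest = current = 1
    for prev, cur in zip(responses[:-1], responses[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    return longest

print(longstring([3, 3, 3, 3, 3, 3]))   # 6: straight-lining respondent
print(longstring([1, 4, 2, 5, 3, 2]))   # 1: no repeated consecutive answers

In adaptive testing, respondents see different, tailored item sequences, which is part of why the utility of such indices in CAT is the open question the study addresses.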
Peer reviewed
E. Damiano D'Urso; Jesper Tijmstra; Jeroen K. Vermunt; Kim De Roover – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Measurement invariance (MI) is required for validly comparing latent constructs measured by multiple ordinal self-report items. Non-invariances may occur when disregarding (group differences in) an acquiescence response style (ARS; an agreeing tendency regardless of item content). If non-invariance results solely from neglecting ARS, one should…
Descriptors: Error of Measurement, Structural Equation Models, Construct Validity, Measurement Techniques
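As a hedged sketch of how an acquiescence response style is often modeled alongside the content factor (a random-intercept-style specification that may differ in detail from the authors'), the latent response variable underlying ordinal item j for person i can be decomposed as

x*_{ij} = \nu_j + \lambda_j \eta_i + a_i + \varepsilon_{ij},

where \eta_i is the content trait, a_i is a person-specific agreeing tendency that loads equally on all items regardless of content, and \varepsilon_{ij} is residual error. Ignoring a_i when groups differ in its distribution can then surface as apparent non-invariance in the intercepts \nu_j and loadings \lambda_j.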
Peer reviewed
Schweizer, Karl; Wang, Tengfei; Ren, Xuezhu – Journal of Experimental Education, 2022
The essay reports two studies on confirmatory factor analysis of speeded data with an effect of selective responding. This response strategy leads test takers to choose their own working order instead of completing the items in the given order. Methods for detecting speededness despite such a deviation from the given order are proposed and…
Descriptors: Factor Analysis, Response Style (Tests), Decision Making, Test Items
Peer reviewed
van der Linden, Wim J. – Journal of Educational and Behavioral Statistics, 2022
Two independent statistical tests of item compromise are presented, one based on the test takers' responses and the other on their response times (RTs) on the same items. The tests can be used to monitor an item in real time during online continuous testing but are also applicable as part of post hoc forensic analysis. The two test statistics are…
Descriptors: Test Items, Item Analysis, Item Response Theory, Computer Assisted Testing
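The two test statistics themselves are not given in the snippet above; the sketch below is only a generic, hedged illustration of the response-time side of such forensics. It checks whether log response times on a suspect item become significantly shorter after a hypothetical disclosure date, using simulated data and a simple Welch t-test rather than the paper's statistics.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated log response times on one item (values and sample sizes are illustrative).
log_rt_before = rng.normal(loc=4.0, scale=0.5, size=200)
log_rt_after = rng.normal(loc=3.6, scale=0.5, size=200)   # faster after suspected exposure

t, p = stats.ttest_ind(log_rt_after, log_rt_before, equal_var=False, alternative="less")
print(f"Welch t = {t:.2f}, one-sided p = {p:.4f}")   # small p: RTs dropped, item may be compromised

In real-time monitoring during continuous testing, such a check would be applied sequentially as responses arrive rather than on two fixed batches.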
Peer reviewed
Feinberg, Richard; Jurich, Daniel; Wise, Steven L. – Applied Measurement in Education, 2021
Previous research on rapid responding tends to implicitly consider examinees as either engaging in solution behavior or purely guessing. However, particularly in a high-stakes testing context, examinees perceiving that they are running out of time may consider the remaining items for less time than necessary to provide a fully informed response,…
Descriptors: High Stakes Tests, Reaction Time, Response Style (Tests), Licensing Examinations (Professions)
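As a hedged illustration of how rapid responses are commonly flagged (a simple normative threshold rule, not necessarily the operationalization used in this study), the sketch below marks any response faster than a fixed fraction of the item's median response time; the 10% fraction and the example data are assumptions.

import numpy as np

def rapid_guess_flags(rt, fraction=0.10):
    # rt: (n_examinees, n_items) array of response times in seconds.
    # A response is flagged if it is faster than `fraction` of the item's median time.
    rt = np.asarray(rt, dtype=float)
    thresholds = fraction * np.median(rt, axis=0)   # one threshold per item
    return rt < thresholds

rt = np.array([[45.0, 60.0,  2.0],
               [50.0, 55.0, 40.0],
               [48.0,  3.0, 42.0]])
print(rapid_guess_flags(rt))   # True where a response looks like a rapid guess

The study's point is that behavior near the time limit may fall between these two poles: faster than full solution behavior but not a blind guess, which a single threshold cannot capture.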
Peer reviewed
Lounek, Vítezslav; Ryška, Radim – Research in Comparative and International Education, 2023
Ensuring comparability of Likert-style items across different countries is a widespread challenge for authors of large-scale international surveys. Using data from the EUROGRADUATE Pilot Survey, this study employs a series of latent class analyses to explore which response patterns emerge from self-assessment of acquired and required skills of…
Descriptors: Self Evaluation (Individuals), Surveys, College Graduates, Multivariate Analysis
Peer reviewed
Ulitzsch, Esther; Lüdtke, Oliver; Robitzsch, Alexander – Educational Measurement: Issues and Practice, 2023
Country differences in response styles (RS) may jeopardize cross-country comparability of Likert-type scales. When adjusting for rather than investigating RS is the primary goal, it seems advantageous to impose minimal assumptions on RS structures and leverage information from multiple scales for RS measurement. Using PISA 2015 background…
Descriptors: Response Style (Tests), Comparative Analysis, Achievement Tests, Foreign Countries
Ge, Yuan – ProQuest LLC, 2022
My dissertation research explored responder behaviors (e.g., response styles, carelessness, and misconceptions) that compromise psychometric quality and impact the interpretation and use of assessment results. Identifying these behaviors can help researchers understand and minimize their potentially construct-irrelevant…
Descriptors: Test Wiseness, Response Style (Tests), Item Response Theory, Psychometrics
Peer reviewed
Zou, Tongtong; Bolt, Daniel M. – Measurement: Interdisciplinary Research and Perspectives, 2023
Person misfit and person reliability indices in item response theory (IRT) can play an important role in evaluating the validity of a test or survey instrument at the respondent level. Prior empirical comparisons of these indices have been applied to binary item response data and suggest that the two types of indices return very similar results.…
Descriptors: Item Response Theory, Rating Scales, Response Style (Tests), Measurement
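As a hedged illustration of a person-misfit index for the binary case mentioned above, the sketch below computes the classical standardized log-likelihood statistic l_z; the indices actually compared by the authors, and their rating-scale extensions, may differ.

import numpy as np

def lz_statistic(u, p):
    # u: 0/1 response vector for one person; p: model-implied success
    # probabilities for that person (e.g., from a fitted IRT model).
    u, p = np.asarray(u, float), np.asarray(p, float)
    loglik = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (loglik - expected) / np.sqrt(variance)

# Strongly negative values indicate misfit (many unexpected responses).
print(lz_statistic([1, 0, 1, 1, 0], [0.9, 0.2, 0.8, 0.7, 0.4]))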
Peer reviewed
Colombi, Roberto; Giordano, Sabrina; Tutz, Gerhard – Journal of Educational and Behavioral Statistics, 2021
A mixture of logit models is proposed that discriminates between responses to rating questions that are affected by a tendency to prefer the middle or the extremes of the scale regardless of the content of the item (response styles) and purely content-driven preferences. Explanatory variables are used to characterize the content-driven way of answering as…
Descriptors: Rating Scales, Response Style (Tests), Test Items, Models
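In generic mixture terms (a sketch only, not necessarily the authors' exact parameterization), the probability of observing category k on item j can be written as

P(Y_{ij} = k) = \pi_i \, P^{style}_j(k) + (1 - \pi_i) \, P^{content}_j(k | x_i, \theta_i),

where the first component captures a preference for the middle or the extremes of the scale regardless of content, the second captures content-driven responding, and explanatory variables x_i characterize the content-driven component.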
Peer reviewed
Rios, Joseph A.; Soland, James – Educational and Psychological Measurement, 2021
As low-stakes testing contexts increase, low test-taking effort may serve as a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the effort-moderated item response theory (EM-IRT) model. Although this model has been shown to outperform…
Descriptors: Computation, Accuracy, Item Response Theory, Response Style (Tests)
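As a hedged sketch of the core idea behind the effort-moderated approach described above, responses flagged as noneffortful are treated as missing before parameter estimation; the flag values and array names below are illustrative, and the actual EM-IRT estimation would be carried out with dedicated IRT software.

import numpy as np

responses = np.array([[1, 0, 1],
                      [0, 1, 0],
                      [1, 1, 1]], dtype=float)
# Flags produced by some rapid-guessing rule (illustrative values).
noneffortful = np.array([[False, False, True],
                         [False, False, False],
                         [True,  False, False]])

effortful_only = responses.copy()
effortful_only[noneffortful] = np.nan   # treated as missing during estimation

print(effortful_only)
# Fit the IRT model of choice on `effortful_only`, handling the NaN cells as
# ignorable missing data.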
Peer reviewed
Höhne, Jan Karem; Krebs, Dagmar – International Journal of Social Research Methodology, 2021
Measuring respondents' attitudes is a crucial task in numerous social science disciplines. A popular way to measure attitudes is to use survey questions with rating scales. However, research has shown that especially the design of rating scales can have a profound impact on respondents' answer behavior. While some scale design aspects, such as…
Descriptors: Attitude Measures, Rating Scales, Telephone Surveys, Response Style (Tests)