Publication Date
In 2025 | 1
Since 2024 | 30
Since 2021 (last 5 years) | 100
Since 2016 (last 10 years) | 214
Since 2006 (last 20 years) | 414
Author
Weiss, David J. | 12
Wise, Steven L. | 9
Bolt, Daniel M. | 7
Benson, Jeri | 6
Fiske, Donald W. | 6
Holden, Ronald R. | 6
Jackson, Douglas N. | 6
Adkins, Dorothy C. | 5
Birenbaum, Menucha | 5
Crocker, Linda | 5
Greve, Kevin W. | 5
Audience
Researchers | 58
Practitioners | 17
Teachers | 6
Administrators | 3
Counselors | 2
Students | 1
Location
Germany | 27
Canada | 20
Australia | 17
United States | 12
South Korea | 10
United Kingdom | 10
China | 9
Denmark | 9
France | 9
Italy | 9
Norway | 9
Laws, Policies, & Programs
Elementary and Secondary… | 1
Hsieh, Shu-Hui; Lee, Shen-Ming; Li, Chin-Shang – Sociological Methods & Research, 2022
Surveys of income are complicated by the sensitive nature of the topic. The problem researchers face is how to encourage participants to respond and to provide truthful responses in surveys. To correct biases induced by nonresponse or underreporting, we propose a two-stage multilevel randomized response (MRR) technique to investigate the true…
Descriptors: Income, Surveys, Response Rates (Questionnaires), Response Style (Tests)
Perkins, Beth A.; Pastor, Dena A.; Finney, Sara J. – Applied Measurement in Education, 2021
When tests are low stakes for examinees, meaning there are little to no personal consequences associated with test results, some examinees put little effort into their performance. To understand the causes and consequences of diminished effort, researchers correlate test-taking effort with other variables, such as test-taking emotions and test…
Descriptors: Response Style (Tests), Psychological Patterns, Testing, Differences
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Rebekka Kupffer; Susanne Frick; Eunike Wetzel – Educational and Psychological Measurement, 2024
The multidimensional forced-choice (MFC) format is an alternative to rating scales in which participants rank items according to how well the items describe them. Currently, little is known about how to detect careless responding in MFC data. The aim of this study was to adapt a number of indices used for rating scales to the MFC format and…
Descriptors: Measurement Techniques, Alternative Assessment, Rating Scales, Questionnaires
Kamil Jaros; Aleksandra Gajda – Journal of Psychoeducational Assessment, 2024
Stage fright is a natural and very common phenomenon that affects everyone who must present themselves in public. However, it has a negative impact on the health and voice emission of children and adolescents, which is why it is important to study and measure it. Unfortunately, there are no appropriate tools for examining public presentation…
Descriptors: Anxiety, Fear, Public Speaking, Children
Esther Ulitzsch; Janine Buchholz; Hyo Jeong Shin; Jonas Bertling; Oliver Lüdtke – Large-scale Assessments in Education, 2024
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straight lining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to…
Descriptors: Response Style (Tests), Attention, Achievement Tests, Foreign Countries
Henninger, Mirka – Journal of Educational Measurement, 2021
Item Response Theory models with varying thresholds are essential tools to account for unknown types of response tendencies in rating data. However, in order to separate constructs to be measured and response tendencies, specific constraints have to be imposed on varying thresholds and their interrelations. In this article, a multidimensional…
Descriptors: Response Style (Tests), Item Response Theory, Models, Computation
Wise, Steven L.; Kuhfeld, Megan R. – Journal of Educational Measurement, 2021
There has been a growing research interest in the identification and management of disengaged test taking, which poses a validity threat that is particularly prevalent with low-stakes tests. This study investigated effort-moderated (E-M) scoring, in which item responses classified as rapid guesses are identified and excluded from scoring. Using…
Descriptors: Scoring, Data Use, Response Style (Tests), Guessing (Tests)
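The core mechanic of effort-moderated (E-M) scoring described above is simple to sketch: responses whose times fall below a rapid-guessing threshold are treated as disengaged and dropped before computing the score. The flat threshold and values below are illustrative only; in practice thresholds are set per item, e.g. from response-time distributions:

```python
def effort_moderated_score(responses, times, threshold=3.0):
    """Proportion-correct score over solution-behavior responses only:
    items answered faster than `threshold` seconds are classified as
    rapid guesses and excluded from scoring. Returns None if every
    response was flagged (no valid score can be computed)."""
    kept = [r for r, t in zip(responses, times) if t >= threshold]
    return sum(kept) / len(kept) if kept else None

# 1 = correct, 0 = incorrect; response times in seconds
resp = [1, 0, 1, 1, 0, 1]
rts = [12.4, 1.1, 8.0, 15.2, 2.3, 9.7]
print(effort_moderated_score(resp, rts))  # 1.0: both sub-threshold items excluded
```

Note how the two rapid incorrect responses no longer drag the score down, which is precisely the validity argument for E-M scoring on low-stakes tests.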
Hong, Maxwell; Rebouças, Daniella A.; Cheng, Ying – Journal of Educational Measurement, 2021
Response time has started to play an increasingly important role in educational and psychological testing, prompting many response time models to be proposed in recent years. However, response time modeling can be adversely impacted by aberrant response behavior. For example, test speededness can cause response time to certain items to deviate…
Descriptors: Reaction Time, Models, Computation, Robustness (Statistics)
Embretson, Susan – Large-scale Assessments in Education, 2023
Understanding the cognitive processes, skills and strategies that examinees use in testing is important for construct validity and score interpretability. Although response processes evidence has long been included as an important aspect of validity (i.e., "Standards for Educational and Psychological Tests," 1999), relevant studies are…
Descriptors: Cognitive Processes, Test Validity, Item Response Theory, Test Wiseness
Hsieh, Shu-Hui; Perri, Pier Francesco – Sociological Methods & Research, 2022
We propose some theoretical and empirical advances by supplying the methodology for analyzing the factors that influence two sensitive variables when data are collected by randomized response (RR) survey modes. First, we provide the framework for obtaining the maximum likelihood estimates of logistic regression coefficients under the RR simple and…
Descriptors: Surveys, Models, Response Style (Tests), Marijuana
Shamon, Hawal; Dülmer, Hermann; Giza, Adam – Sociological Methods & Research, 2022
The factorial survey is an experimental design in which the researcher constructs varying descriptions of situations or individual persons (vignettes), which will be judged by respondents with regard to a particular aspect. Some researchers present vignettes in text format as short stories, while others present the central information of vignettes in a…
Descriptors: Vignettes, Surveys, Response Style (Tests), Reaction Time
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
Cui, Zhongmin – Educational and Psychological Measurement, 2020
In test security analyses, answer copying, collusion, and the use of a shared brain dump site can be detected by checking similarity between item response strings. The similarity, however, can be contaminated by aberrant data resulting from careless responding or rapid guessing. For example, some test-takers may answer by repeating a…
Descriptors: Repetition, Cheating, Response Style (Tests), Pattern Recognition
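The similarity check that the entry above builds on can be sketched naively: compare two examinees' response strings against the key, counting overall agreement and, more tellingly, agreement on incorrect answers, since shared wrong answers are far less likely by chance. This is only an illustrative screen, not the article's method; operational indices (e.g., answer-copying statistics) are more principled:

```python
def response_similarity(a, b, key):
    """Crude copying screen between two multiple-choice response
    strings a and b scored against `key`: returns (overall agreement
    rate, rate of agreement on *incorrect* answers)."""
    same = sum(x == y for x, y in zip(a, b))
    same_wrong = sum(x == y != k for x, y, k in zip(a, b, key))
    n = len(key)
    return same / n, same_wrong / n

key = "ABCDABCD"
s1 = "ABCDABCA"   # one wrong answer
s2 = "ABCDABCA"   # identical string, including the shared wrong answer
print(response_similarity(s1, s2, key))  # (1.0, 0.125)
```

The article's point is that such indices can be fooled: a careless test-taker repeating a fixed pattern can match another pattern-repeater without any copying having occurred, so aberrant data should be screened out first.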