Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
Saskia van Laar; Jianan Chen; Johan Braeken – Measurement: Interdisciplinary Research and Perspectives, 2024
Questionnaires in educational research assessing students' attitudes and beliefs are low-stakes for the students. As a consequence, students might not always consistently respond to a questionnaire scale but instead provide more random response patterns with no clear link to items' contents. We study inter-individual differences in students'…
Descriptors: Foreign Countries, Response Style (Tests), Grade 8, Secondary School Students
Hill, Laura G. – International Journal of Behavioral Development, 2020
Retrospective pretests ask respondents to report after an intervention on their aptitudes, knowledge, or beliefs before the intervention. A primary reason to administer a retrospective pretest is that in some situations, program participants may over the course of an intervention revise or recalibrate their prior understanding of program content,…
Descriptors: Pretesting, Response Style (Tests), Bias, Testing Problems
Abbakumov, Dmitry; Desmet, Piet; Van den Noortgate, Wim – Applied Measurement in Education, 2020
Formative assessments are an important component of massive open online courses (MOOCs), online courses with open access and unlimited student participation. Accurate conclusions on students' proficiency via formative assessments, however, face several challenges: (a) students are typically allowed to make several attempts; and (b) student performance might…
Descriptors: Item Response Theory, Formative Evaluation, Online Courses, Response Style (Tests)
Adrian Adams; Lauren Barth-Cohen – CBE - Life Sciences Education, 2024
In undergraduate research settings, students are likely to encounter anomalous data, that is, data that do not meet their expectations. Most of the research that directly or indirectly captures the role of anomalous data in research settings uses post-hoc reflective interviews or surveys. These data collection approaches focus on recall of past…
Descriptors: Undergraduate Students, Physics, Science Instruction, Laboratory Experiments
McKibben, William Bradley; Silvia, Paul J. – Journal of Creative Behavior, 2017
Inattentiveness and social desirability might be particularly problematic for self-report scales in creativity and arts research. Respondents who are inattentive or who present themselves favorably will score highly on scales that yield positively skewed distributions and that assess socially valued constructs, such as scales measuring creativity…
Descriptors: Measurement Techniques, Attention, Social Desirability, Response Style (Tests)
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
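The abstract does not show the model itself; as a minimal sketch of the IRTree idea it describes, each observed item outcome can be recoded into two pseudo-items: a first node for whether the examinee responded at all, and a second node, defined only for given responses, for accuracy. The function name and coding below are illustrative, not the authors' notation.

```python
def irtree_pseudo_items(response):
    """Recode one observed item outcome into the two pseudo-items of a
    simple response/omission IRTree:
      node 1: responded (1) vs. omitted (0)
      node 2: correct (1) vs. incorrect (0), defined only if responded.
    `response` is True (correct), False (incorrect), or None (omitted)."""
    if response is None:          # omitted item: accuracy node is undefined
        return (0, None)
    return (1, 1 if response else 0)

# A short response vector with a correct answer, a wrong answer, and an omission:
print([irtree_pseudo_items(r) for r in [True, False, None]])
# → [(1, 1), (1, 0), (0, None)]
```

Fitting separate IRT models to the two pseudo-item sets then lets the latent traits behind responding and behind accuracy correlate, which is what makes the missingness nonignorable rather than discarded.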
Sinclair, Andrea; Deatz, Richard; Johnston-Fisher, Jessica – Partnership for Assessment of Readiness for College and Careers, 2015
The overall purpose of the research studies described in this report was to investigate the quality of the administration of the Partnership for Assessment of Readiness for College and Careers (PARCC) assessment during its first operational year (2014-2015). Findings from these studies can be used to identify threats to the validity of PARCC test…
Descriptors: Testing, Achievement Tests, Administrators, Readiness
McIntyre, Joe – Society for Research on Educational Effectiveness, 2014
Proper survey design is essential to obtain reliable, replicable data from research subjects. One threat to inferences drawn from surveys is anchoring-and-adjusting. Tversky and Kahnemann (1974) observed that participants' responses to questions depended systematically on irrelevant information they received prior to answering. It is important for…
Descriptors: Surveys, Response Style (Tests), Testing Problems, Online Surveys
Ventouras, Errikos; Triantis, Dimos; Tsiakas, Panagiotis; Stergiopoulos, Charalampos – Computers & Education, 2011
The aim of the present research was to compare the use of multiple-choice questions (MCQs) as an examination method against the oral examination (OE) method. MCQs are widely used and their importance seems likely to grow, due to their inherent suitability for electronic assessment. However, MCQs are influenced by the tendency of examinees to guess…
Descriptors: Grades (Scholastic), Scoring, Multiple Choice Tests, Test Format

Scheers, N. J. – Measurement and Evaluation in Counseling and Development, 1992
Reviews randomized response technique (RRT), survey method developed to reduce or eliminate underreporting of sensitive behaviors. Describes RRT models successfully used in application studies, considers issues important in using various RRT models, and illustrates their use with some realistic data examples. Discusses techniques for reducing…
Descriptors: Research Methodology, Response Style (Tests), Testing Problems
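The abstract names several RRT models without detailing them; as one concrete instance, Warner's original design has each respondent answer either the sensitive question or its negation depending on a private randomizing device with known probability p, so no individual answer reveals the trait, yet prevalence is recoverable from the aggregate. The simulation below is an illustrative sketch (function name and parameters are assumptions, not from the article).

```python
import random

def simulate_warner_rrt(true_prevalence, p_sensitive, n, seed=1):
    """Simulate Warner's randomized response model: with probability
    p_sensitive a respondent truthfully answers "Do you have the trait?",
    otherwise truthfully answers "Do you NOT have the trait?". Only the
    yes/no answer is observed. Returns the estimated prevalence."""
    rng = random.Random(seed)
    yes = 0
    for _ in range(n):
        has_trait = rng.random() < true_prevalence
        asked_sensitive = rng.random() < p_sensitive
        yes += has_trait if asked_sensitive else not has_trait
    lam = yes / n  # observed proportion of "yes" answers
    # lam = p*pi + (1-p)*(1-pi)  =>  pi = (lam - (1-p)) / (2p - 1)
    return (lam - (1 - p_sensitive)) / (2 * p_sensitive - 1)

print(round(simulate_warner_rrt(0.20, 0.7, 200_000), 3))
```

The privacy-versus-precision tradeoff the review discusses is visible in the estimator: as p approaches 0.5 (maximum privacy), the denominator 2p − 1 shrinks and the variance of the estimate blows up.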

Bolter, John F.; And Others – Journal of Consulting and Clinical Psychology, 1984
Contends that the Speech Sounds Perception Test form (Adult and Midrange versions) is structured such that correct responses can be determined rationally. If a patient identifies and responds according to that structure, the validity of the test is compromised. Posttest interview is suggested as a simple solution. (Author/JAC)
Descriptors: Response Style (Tests), Test Format, Test Validity, Testing Problems
DeGracie, James S.; Vicino, Frank L. – Educational Technology, 1977
Discusses categories of questionnaire response sets and the ability to interpret response differences as related to soliciting student attitudes. (DAG)
Descriptors: Questioning Techniques, Questionnaires, Response Style (Tests), Student Attitudes

Frary, Robert B.; And Others – Journal of Educational Statistics, 1977
Indices reflecting the probability that the observed similarity between the multiple-choice test responses of two examinees was due to chance are presented. Applications are presented for apprehending cheaters and for evaluating the prevalence of this form of cheating on an examination. (Author/JKS)
Descriptors: Cheating, Measurement Techniques, Multiple Choice Tests, Response Style (Tests)
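Frary et al.'s indices condition on each examinee's ability and the items' response distributions; a much-simplified illustration of the underlying logic is a binomial tail probability under an assumed constant per-item chance-match probability (the function name and the p_match value below are assumptions for the sketch, not the article's index).

```python
from math import comb

def chance_match_pvalue(n_items, observed_matches, p_match):
    """Probability of at least `observed_matches` identical answers across
    n_items independent items, if each item matches by chance with
    probability p_match (exact binomial upper tail)."""
    return sum(
        comb(n_items, k) * p_match ** k * (1 - p_match) ** (n_items - k)
        for k in range(observed_matches, n_items + 1)
    )

# 40 four-option items answered at random agree with probability ~0.25 each;
# two examinees matching on 25 of 40 is extremely unlikely by chance alone.
print(chance_match_pvalue(40, 25, 0.25))
```

A small tail probability flags a pair for follow-up rather than proving copying, which matches the article's framing of the indices as evidence of similarity beyond chance.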

Dixon, Paul N.; And Others – Educational and Psychological Measurement, 1984
The influence of scale format on results was examined. Two Likert type formats, one with all choice points defined and one with only end-points defined, were administered. Each subject completed half the items in each format. Results indicated little difference between forms, nor did subjects indicate a format preference. (Author/DWH)
Descriptors: Higher Education, Rating Scales, Response Style (Tests), Test Format