Publication Date
In 2025: 1
Since 2024: 30
Since 2021 (last 5 years): 100
Since 2016 (last 10 years): 214
Since 2006 (last 20 years): 414
Author
Weiss, David J.: 12
Wise, Steven L.: 9
Bolt, Daniel M.: 7
Benson, Jeri: 6
Fiske, Donald W.: 6
Holden, Ronald R.: 6
Jackson, Douglas N.: 6
Adkins, Dorothy C.: 5
Birenbaum, Menucha: 5
Crocker, Linda: 5
Greve, Kevin W.: 5
Audience
Researchers: 58
Practitioners: 17
Teachers: 6
Administrators: 3
Counselors: 2
Students: 1
Location
Germany: 27
Canada: 20
Australia: 17
United States: 12
South Korea: 10
United Kingdom: 10
China: 9
Denmark: 9
France: 9
Italy: 9
Norway: 9
Laws, Policies, & Programs
Elementary and Secondary…: 1
Stricker, Lawrence J. – ETS Research Report Series, 2013
This is an account of a portion of the research on cognitive, personality, and social psychology at ETS since the organization's inception. The topics in cognitive psychology are the structure of abilities; in personality psychology, response styles and social and emotional intelligence; and in social psychology, prosocial behavior and stereotype…
Descriptors: Cognitive Psychology, Personality Traits, Social Psychology, Educational Research
Miles, James D.; Proctor, Robert W. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2010
Throughout a lifetime of interaction with the physical environment, people develop a strong bias to respond on the same side as the location of a target object, even when its location is irrelevant to the task at hand. Recent research has shown that this compatibility bias can be overridden with relatively brief but focused training. To better…
Descriptors: Physical Environment, Ecology, Bias, Responses
Allen, Jeff – ACT, Inc., 2012
Detecting unusual similarity in the item responses of a pair of examinees usually conditions on the pair's overall test performance (e.g., raw scores). Doing this, however, often requires assumptions about the invariance of other examinee pair characteristics. In this study, we examined the appropriateness of such assumptions about selected…
Descriptors: College Entrance Examinations, Language Tests, English, Mathematics Tests
Yen, Yung-Chin; Ho, Rong-Guey; Laio, Wen-Wei; Chen, Li-Ju; Kuo, Ching-Chin – Applied Psychological Measurement, 2012
In a selected response test, aberrant responses such as careless errors and lucky guesses might cause error in ability estimation because these responses do not actually reflect the knowledge that examinees possess. In a computerized adaptive test (CAT), these aberrant responses could further cause serious estimation error due to dynamic item…
Descriptors: Computer Assisted Testing, Adaptive Testing, Test Items, Response Style (Tests)
van Hooft, Edwin A. J.; Born, Marise Ph. – Journal of Applied Psychology, 2012
Intentional response distortion or faking among job applicants completing measures such as personality and integrity tests is a concern in personnel selection. The present study aimed to investigate whether eye-tracking technology can improve our understanding of the response process when faking. In an experimental within-participants design, a…
Descriptors: Job Applicants, Semantics, Eye Movements, Response Style (Tests)
Barber, Larissa K.; Bailey, Sarah F.; Bagsby, Patricia G. – Teaching of Psychology, 2015
The undergraduate psychology curriculum often does not address guidelines for acceptable participant behavior. This two-part study tested the efficacy of a recently developed online learning module on ethical perceptions, knowledge, and behavior. In the preliminary quasi-experiment, students who viewed the module did not have higher…
Descriptors: Ethics, Learning Modules, Online Courses, Educational Research
Twiste, Tara L. – ProQuest LLC, 2011
The identification of patterned responding in unmotivated test takers was investigated through the development of a novel method. The proposed method relied on the marginal proportions of answer-choice options as well as the transitional proportions between responses on item pairs. A chi-square analysis was used to determine the degree of significance…
Descriptors: Motivation, Response Style (Tests), Statistical Analysis, Comparative Analysis
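
Where the entry above describes comparing the transitional proportions between responses on item pairs with the marginal proportions of the answer-choice options via a chi-square analysis, a minimal sketch of that general idea might look like the following. This is not Twiste's actual procedure; the toy data and the use of SciPy's chi2_contingency are assumptions.

import numpy as np
from scipy.stats import chi2_contingency

# Toy response matrix: rows are examinees, columns are two adjacent items,
# entries are the chosen answer options (purely illustrative data).
responses = np.array([
    ["A", "A"], ["B", "B"], ["C", "C"], ["D", "D"],
    ["A", "A"], ["B", "B"], ["A", "B"], ["C", "D"],
    ["A", "A"], ["B", "B"], ["C", "C"], ["D", "D"],
])

options = ["A", "B", "C", "D"]
# Observed transition table: option chosen on the first item (rows) by option
# chosen on the second item (columns).
table = np.zeros((len(options), len(options)), dtype=int)
for first, second in responses:
    table[options.index(first), options.index(second)] += 1

# chi2_contingency derives the expected counts from the marginal proportions
# and tests whether the observed transitions depart from them.
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
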
Mundia, Lawrence – Educational Psychology, 2011
The survey investigated the problems of social desirability (SD), non-response bias (NRB), and reliability in the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) self-report inventory administered to Brunei student teachers. Bruneians scored higher on all the validity scales than the normative US sample, thereby threatening the…
Descriptors: Student Teachers, College Students, Social Desirability, Response Style (Tests)
Liu, Qin – Association for Institutional Research, 2012
This discussion constructs a survey data quality strategy for institutional researchers in higher education in light of total survey error theory. It starts by describing the characteristics of institutional research, identifies gaps in the literature regarding survey data quality issues in institutional research, and then introduces the…
Descriptors: Institutional Research, Higher Education, Quality Control, Researchers
Glass, Arnold Lewis; Sinha, Neha – Educational Psychology, 2013
In the context of an upper-level psychology course, even when students were given an opportunity to refer to text containing the answers and change their exam responses in order to improve their exam scores, their performance on these questions improved slightly or not at all. Four experiments evaluated competing explanations for the students'…
Descriptors: Academic Achievement, Item Analysis, Test Norms, Comparative Testing
Rice, Stephen; McCarley, Jason S. – Journal of Experimental Psychology: Applied, 2011
Automated diagnostic aids prone to false alarms often produce poorer human performance in signal detection tasks than equally reliable miss-prone aids. However, it is not yet clear whether this is attributable to differences in the perceptual salience of the automated aids' misses and false alarms or is the result of inherent differences in…
Descriptors: Feedback (Response), Response Style (Tests), Young Adults, Performance Technology
Rohling, Martin L.; Larrabee, Glenn J.; Greiffenstein, Manfred F.; Ben-Porath, Yossef S.; Lees-Haley, Paul; Green, Paul; Greve, Kevin W. – Psychological Bulletin, 2011
In the May 2010 issue of "Psychological Bulletin," R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such…
Descriptors: Neuropsychology, Response Style (Tests), Bias, Test Validity
Meyer, J. Patrick – Applied Psychological Measurement, 2010
An examinee faced with a test item will engage in solution behavior or rapid-guessing behavior. These qualitatively different test-taking behaviors bias parameter estimates for item response models that do not control for such behavior. A mixture Rasch model with item response time components was proposed and evaluated through application to real…
Descriptors: Item Response Theory, Response Style (Tests), Reaction Time, Computation
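
The entry above distinguishes solution behavior from rapid-guessing behavior and models both jointly. The deliberately simplified sketch below only illustrates the narrower idea of splitting response times into a fast and a slow component with a two-component mixture; it is not Meyer's mixture Rasch model, and the simulated data and the scikit-learn GaussianMixture call are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated response times (seconds): a fast "rapid-guessing" cluster and a
# slower "solution behavior" cluster.
rapid = rng.lognormal(mean=0.5, sigma=0.3, size=200)
solution = rng.lognormal(mean=3.0, sigma=0.5, size=800)
log_rt = np.log(np.concatenate([rapid, solution])).reshape(-1, 1)

# Fit a two-component Gaussian mixture to the log response times and count
# how many responses fall into the faster component.
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_rt)
labels = gmm.predict(log_rt)
fast_component = int(np.argmin(gmm.means_.ravel()))
n_rapid = int(np.sum(labels == fast_component))
print(f"{n_rapid} of {log_rt.size} responses assigned to the faster, "
      "rapid-guessing-like component")
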
Lie, Celia; Alsop, Brent – Journal of the Experimental Analysis of Behavior, 2009
Three experiments using human participants varied the distribution of point-gain reinforcers or point-loss punishers in two-alternative signal-detection procedures. Experiment 1 varied the distribution of point-gain reinforcers for correct responses (Group A) and point-loss punishers for errors (Group B) across conditions. Response bias varied…
Descriptors: Positive Reinforcement, Bias, Response Style (Tests), Punishment
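
As a point of reference for the response-bias findings described above, the sketch below shows one common way to quantify bias in a two-alternative detection task, the standard signal-detection criterion. It is a generic illustration rather than the authors' own analysis, and the hit and false-alarm rates are made up.

from scipy.stats import norm

hit_rate = 0.82          # proportion of signal trials answered "signal"
false_alarm_rate = 0.35  # proportion of noise trials answered "signal"

z_hit = norm.ppf(hit_rate)
z_fa = norm.ppf(false_alarm_rate)

d_prime = z_hit - z_fa             # sensitivity
criterion = -0.5 * (z_hit + z_fa)  # bias: 0 = unbiased, > 0 = conservative,
                                   # < 0 = liberal responding
print(f"d' = {d_prime:.2f}, criterion c = {criterion:.2f}")
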
Thomas, Michael L.; Lanyon, Richard I.; Millsap, Roger E. – Psychological Assessment, 2009
The use of criterion group validation is hindered by the difficulty of classifying individuals on latent constructs. Latent class analysis (LCA) is a method that can be used for determining the validity of scales meant to assess latent constructs without such a priori classifications. The authors used this method to examine the ability of the L…
Descriptors: Validity, Measures (Individuals), Statistical Analysis, Classification