Showing all 9 results
Peer reviewed
Cannon, Edmund; Cipriani, Giam Pietro – Assessment & Evaluation in Higher Education, 2022
Student evaluations of teaching may be subject to halo effects, where answers to one question are contaminated by answers to the other questions. Quantifying halo effects is difficult since correlation between answers may be due to underlying correlation of the items being tested. We use a novel identification procedure to test for a halo effect…
Descriptors: Student Evaluation of Teacher Performance, Bias, Response Style (Tests), Foreign Countries
Peer reviewed
Kam, Chester Chun Seng; Zhou, Mingming – Educational and Psychological Measurement, 2015
Previous research has found the effects of acquiescence to be generally consistent across item "aggregates" within a single survey (i.e., essential tau-equivalence), but it is unknown whether this phenomenon is consistent at the "individual item" level. This article evaluated the often assumed but inadequately tested…
Descriptors: Test Items, Surveys, Criteria, Correlation
Peer reviewed
Witt, Jessica K.; Brockmole, James R. – Journal of Experimental Psychology: Human Perception and Performance, 2012
Stereotypes, expectations, and emotions influence an observer's ability to detect and categorize objects as guns. In light of recent work in action-perception interactions, however, there is another unexplored factor that may be critical: The action choices available to the perceiver. In five experiments, participants determined whether another…
Descriptors: Weapons, Identification, Stereotypes, Visual Perception
Peer reviewed
McGrath, Robert E.; Kim, Brian H.; Hough, Leaetta – Psychological Bulletin, 2011
In their comment, M. L. Rohling et al. (2011) accused us of offering a "misleading" review of response bias. In fact, the additional findings they provided on this topic are relevant only to bias assessment in 1 of the domains we discussed, neuropsychological assessment. Furthermore, we contend that, even in that 1 domain, the additional findings…
Descriptors: Response Style (Tests), Bias, Test Validity, Research Methodology
Peer reviewed
Rice, Stephen; McCarley, Jason S. – Journal of Experimental Psychology: Applied, 2011
Automated diagnostic aids prone to false alarms often produce poorer human performance in signal detection tasks than equally reliable miss-prone aids. However, it is not yet clear whether this is attributable to differences in the perceptual salience of the automated aids' misses and false alarms or is the result of inherent differences in…
Descriptors: Feedback (Response), Response Style (Tests), Young Adults, Performance Technology
Peer reviewed
Rohling, Martin L.; Larrabee, Glenn J.; Greiffenstein, Manfred F.; Ben-Porath, Yossef S.; Lees-Haley, Paul; Green, Paul; Greve, Kevin W. – Psychological Bulletin, 2011
In the May 2010 issue of "Psychological Bulletin," R. E. McGrath, M. Mitchell, B. H. Kim, and L. Hough published an article entitled "Evidence for Response Bias as a Source of Error Variance in Applied Assessment" (pp. 450-470). They argued that response bias indicators used in a variety of settings typically have insufficient data to support such…
Descriptors: Neuropsychology, Response Style (Tests), Bias, Test Validity
Peer reviewed
Chajut, Eran; Mama, Yaniv; Levy, Leora; Algom, Daniel – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2010
In the laboratory, people classify the color of emotion-laden words slower than they do that of neutral words, the emotional Stroop effect. Outside the laboratory, people react to features of emotion-laden stimuli or threatening stimuli faster than they do to those of neutral stimuli. A possible resolution to the conundrum implicates the…
Descriptors: Stimuli, Emotional Response, Response Style (Tests), Laboratories
Peer reviewed
Berk, Ronald A. – Journal of Faculty Development, 2010
Most faculty developers have a wide variety of rating scales that fly across their desktops as their incremental program activities unfold during the academic year. The primary issue for this column is: What is the quality of those ratings used for decisions about people and programs? When students, faculty, and administrators rate a program or…
Descriptors: Response Style (Tests), Rating Scales, Faculty Development, Bias
Peer reviewed
Barkhi, Reza; Williams, Paul – Assessment & Evaluation in Higher Education, 2010
With the proliferation of computer networks and the increased use of Internet-based applications, many forms of social interactions now take place in an on-line context through "Computer-Mediated Communication" (CMC). Many universities are now reaping the benefits of using CMC applications to collect data on student evaluations of…
Descriptors: Computer Mediated Communication, Faculty Evaluation, Foreign Countries, Student Evaluation of Teacher Performance