Publication Date
- In 2025: 0
- Since 2024: 1
- Since 2021 (last 5 years): 2
- Since 2016 (last 10 years): 3
- Since 2006 (last 20 years): 4
Descriptor
- Response Style (Tests): 19
- Test Bias: 19
- Test Items: 19
- Item Analysis: 9
- Testing Problems: 6
- Higher Education: 5
- Test Construction: 5
- Latent Trait Theory: 4
- Mathematical Models: 4
- Test Reliability: 4
- Test Validity: 4
Source
- Educational and Psychological…: 3
- Applied Psychological…: 1
- Educational Measurement:…: 1
- Evaluation Review: 1
- Journal of Educational…: 1
- Perceptual and Motor Skills: 1
- Social Psychology: 1
- Studies in Higher Education: 1
Publication Type
- Reports - Research: 16
- Journal Articles: 9
- Speeches/Meeting Papers: 4
- Guides - Non-Classroom: 1
- Reports - Evaluative: 1
Education Level
- Higher Education: 1
- Postsecondary Education: 1
- Secondary Education: 1
Audience
- Researchers: 2
Location
- Netherlands: 1
- United Kingdom: 1
Assessments and Surveys
- ACT Assessment: 1
- Texas Assessment of Basic…: 1
- Texas Educational Assessment…: 1
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
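The ERS behavior this entry describes can be illustrated with a minimal sketch (all respondent data below is invented for illustration): counting how often each respondent endorses the endpoint categories of a 5-point Likert scale, regardless of item content. This is only a simple screening index, not the IRT-based corrections the article compares.

```python
# Minimal ERS screening sketch on a 5-point Likert scale.
# Respondents and responses are hypothetical illustration data.

def extreme_response_rate(responses, low=1, high=5):
    """Proportion of a respondent's answers in the extreme categories."""
    extremes = sum(1 for r in responses if r in (low, high))
    return extremes / len(responses)

respondents = {
    "A": [1, 5, 5, 1, 5, 1],  # endorses only endpoints
    "B": [2, 3, 4, 3, 2, 4],  # avoids endpoints entirely
    "C": [1, 3, 4, 5, 2, 3],  # mixed pattern
}

rates = {name: extreme_response_rate(r) for name, r in respondents.items()}
```

A respondent whose rate is far above the sample average is a candidate for ERS; IRT approaches go further by modeling the style as a separate latent dimension.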
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Vijver, Fons J. R. – Educational Measurement: Issues and Practice, 2018
A conceptual framework of measurement bias in cross-cultural comparisons, distinguishing between construct, method, and item bias (differential item functioning), is used to describe a methodological framework addressing assessment of noncognitive variables in international large-scale studies. It is argued that the treatment of bias, coming from…
Descriptors: Educational Assessment, Achievement Tests, Foreign Countries, International Assessment
Yorke, Mantz; Orr, Susan; Blair, Bernadette – Studies in Higher Education, 2014
There has long been the suspicion amongst staff in Art & Design that the ratings given to their subject disciplines in the UK's National Student Survey are adversely affected by a combination of circumstances--a "perfect storm". The "perfect storm" proposition is tested by comparing ratings for Art & Design with those…
Descriptors: Student Surveys, National Surveys, Art Education, Design

Mentzer, Thomas L. – Educational and Psychological Measurement, 1982
Evidence of bias in the correct answers in multiple-choice test item files was found to include an "all of the above" bias, in which that answer was correct more than 25 percent of the time, and a bias in which the longest answer was correct too frequently. Seven bias types were studied. (Author/CM)
Descriptors: Educational Testing, Higher Education, Multiple Choice Tests, Psychology

Johanson, George A.; And Others – Evaluation Review, 1993
Some respondents tend to omit items more often when the evaluation they would make is less positive, and less often when it is more positive. Five examples illustrate this form of nonresponse bias, and recommendations for overcoming it are offered. (SLD)
Descriptors: Estimation (Mathematics), Evaluation Methods, Questionnaires, Response Style (Tests)

Bardo, John W.; Yeager, Samuel J. – Perceptual and Motor Skills, 1982
Responses to various fixed test-response formats were examined for "reliability" due to systematic error; Cronbach's alphas up to .67 were obtained. Of formats tested, four-point Likert Scales were least affected while forms of lines and faces were most problematic. Possible modification in alpha to account for systematic bias is…
Descriptors: Higher Education, Measures (Individuals), Psychometrics, Response Style (Tests)
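The "reliability due to systematic error" finding above can be made concrete with a small sketch of Cronbach's alpha (the response matrix is invented): because alpha rises with inter-item covariance, a shared response-style bias can push it up even when the items carry little content.

```python
# Cronbach's alpha from a respondents-by-items matrix.
# The data matrix is hypothetical illustration data.

def cronbach_alpha(matrix):
    k = len(matrix[0])                       # number of items
    def var(xs):                             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in matrix]) for j in range(k)]
    total_var = var([sum(row) for row in matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

data = [
    [4, 4, 5],
    [2, 3, 2],
    [5, 4, 5],
    [1, 2, 1],
]
alpha = cronbach_alpha(data)
```

Here the items covary strongly, so alpha is high; the article's point is that such covariance can reflect a shared response style rather than a trait.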

Veale, James R.; Foreman, Dale I. – Journal of Educational Measurement, 1983
Statistical procedures for measuring heterogeneity of test item distractor distributions, or cultural variation, are presented. These procedures are based on the notion that examinees' responses to the incorrect options of a multiple-choice test provide more information concerning cultural bias than their correct responses. (Author/PN)
Descriptors: Ethnic Bias, Item Analysis, Mathematical Models, Multiple Choice Tests
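The core idea above, that distractor (incorrect-option) choices carry the bias signal, can be sketched with a chi-square test of homogeneity on distractor counts across two examinee groups. The counts are invented, and this plain chi-square is only a stand-in for the specific procedures the article develops.

```python
# Chi-square statistic comparing distractor-choice distributions
# across groups, restricted to examinees who missed the item.
# All counts are hypothetical illustration data.

def chi_square(table):
    """table: rows = examinee groups, columns = distractors chosen."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand
            stat += (obs - exp) ** 2 / exp
    return stat

# Counts of wrong-answer examinees choosing distractors A, B, C
counts = [
    [30, 10, 10],   # group 1
    [10, 25, 15],   # group 2
]
stat = chi_square(counts)
```

A large statistic means the two groups are drawn to different wrong options, which is the heterogeneity the procedures are designed to measure.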
Holland, Paul W.; Thayer, Dorothy T. – 1986
The Mantel-Haenszel procedure (MH) is a practical, inexpensive, and powerful way to detect test items that function differently in two groups of examinees. MH is a natural outgrowth of previously suggested chi square methods, and it is also related to methods based on item response theory. The study of items that function differently for two…
Descriptors: Comparative Analysis, Hypothesis Testing, Item Analysis, Latent Trait Theory
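The Mantel-Haenszel procedure named above can be sketched as follows (stratum counts are invented): examinees are stratified by total score, and at each score level a 2x2 table of group membership by right/wrong on the studied item feeds a common odds-ratio estimate. A value near 1.0 suggests the item functions similarly in both groups.

```python
# Mantel-Haenszel common odds ratio across score strata.
# Each stratum is (ref_right, ref_wrong, focal_right, focal_wrong).
# All counts are hypothetical illustration data.

def mh_odds_ratio(strata):
    num = den = 0.0
    for a, b, c, d in strata:   # a, b = reference; c, d = focal
        t = a + b + c + d       # stratum size
        num += a * d / t
        den += b * c / t
    return num / den

strata = [
    (20, 10, 18, 12),   # low-score stratum
    (30, 5, 28, 7),     # mid-score stratum
    (40, 2, 39, 3),     # high-score stratum
]
odds = mh_odds_ratio(strata)
```

In practice the estimate is paired with the MH chi-square test (the "outgrowth of previously suggested chi square methods" the abstract mentions) to judge whether the departure from 1.0 is significant.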
North Dakota Univ., Grand Forks. Center for Teaching and Learning.
Intended for parents, this guide discusses the effects of standardized tests on children, using test items from recent standardized reading tests which have been administered to children aged 7 to 13 and children's responses to these items. For each item, comments are made about the ambiguity of the item; the various interpretations that can be…
Descriptors: Elementary Education, Reading Tests, Response Style (Tests), Standardized Tests
Harnisch, Delwyn L.; Linn, Robert L. – 1980
Indices of appropriateness of a test for an individual are discussed, and two data sets are evaluated. With the first data set, three indices of test appropriateness are obtained for response patterns on achievement tests from an experimental study of the effects of test anxiety and time pressure with 173 3rd and 4th grade students. Relationships…
Descriptors: Curriculum, Elementary Secondary Education, Response Style (Tests), Social Influences
Schuessler, Karl; And Others – Social Psychology, 1978
The feasibility of measuring responding desirably with attitude-opinion items is discussed, and an index based on 16 such items is presented. Estimates of reliability and validity for this index, and examples of its use as a covariate (control) in attitude research are presented. Similarities and differences from related scales are discussed.…
Descriptors: Adults, Attitude Measures, Measurement Techniques, Response Style (Tests)

van Heerden, J.; Hoogstraten, Joh. – Applied Psychological Measurement, 1979
In a replication of an earlier study, a questionnaire with items lacking content and merely containing answer possibilities was administered to a sample of Dutch freshmen psychology students. Subjects showed a preference for positive options over negative options. (Author/JKS)
Descriptors: Content Analysis, Foreign Countries, Higher Education, Item Analysis
McKee, Barbara G.; Hausknecht, Michael A. – 1978
Literature on the development of classroom achievement tests for high school and college level hearing impaired students is reviewed, with emphasis on achievement tests designed to ascertain whether a particular unit of instruction has been understood as it was presented. The paper reviews: the syntactical structure and vocabulary of test items;…
Descriptors: Achievement Tests, Hearing Impairments, Higher Education, Item Analysis
Douglass, James B. – 1981
Relationships between item bias, item difficulty invariance, Rasch tests of item fit, and item position in a speeded 72-item Michigan State University Vocabulary Placement Test were investigated using 143 black males, 289 black females, 2,953 white males and 3,271 white females. Measures of item bias and item difficulty invariance were determined…
Descriptors: Black Students, Computer Programs, Correlation, Difficulty Level