Publication Date
In 2025: 1
Since 2024: 30
Since 2021 (last 5 years): 100
Since 2016 (last 10 years): 214
Since 2006 (last 20 years): 414
Author
Weiss, David J. | 12 |
Wise, Steven L. | 9 |
Bolt, Daniel M. | 7 |
Benson, Jeri | 6 |
Fiske, Donald W. | 6 |
Holden, Ronald R. | 6 |
Jackson, Douglas N. | 6 |
Adkins, Dorothy C. | 5 |
Birenbaum, Menucha | 5 |
Crocker, Linda | 5 |
Greve, Kevin W. | 5 |
Audience
Researchers: 58
Practitioners: 17
Teachers: 6
Administrators: 3
Counselors: 2
Students: 1
Location
Germany: 27
Canada: 20
Australia: 17
United States: 12
South Korea: 10
United Kingdom: 10
China: 9
Denmark: 9
France: 9
Italy: 9
Norway: 9
Laws, Policies, & Programs
Elementary and Secondary…: 1
Akbay, Lokman; Kilinç, Mustafa – International Journal of Assessment Tools in Education, 2018
For accurate measurement, measurement models must faithfully capture examinees' actual response processes. To avoid invalid inferences, the fit of examinees' response data to the model is assessed with "person-fit" statistics. Misfit between the examinee response data and measurement model may be due to invalid…
Descriptors: Reliability, Goodness of Fit, Cognitive Measurement, Models
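The "person-fit" statistics this entry refers to can be illustrated with the standardized log-likelihood statistic l_z (Drasgow, Levine, & Williams, 1985), a common choice, though not necessarily the one studied in the article above. A minimal sketch in Python, assuming dichotomous items and model-implied correct-response probabilities for one examinee:

```python
import math

def lz_person_fit(responses, probs):
    """Standardized log-likelihood (l_z) person-fit statistic for
    dichotomous items. `responses[i]` is 0/1 and `probs[i]` is the
    model-implied probability of a correct response on item i.
    Large negative values flag response patterns that misfit the
    measurement model."""
    # Observed log-likelihood of the response pattern
    l0 = sum(u * math.log(p) + (1 - u) * math.log(1 - p)
             for u, p in zip(responses, probs))
    # Its expectation and variance under the model
    expected = sum(p * math.log(p) + (1 - p) * math.log(1 - p)
                   for p in probs)
    variance = sum(p * (1 - p) * math.log(p / (1 - p)) ** 2
                   for p in probs)
    return (l0 - expected) / math.sqrt(variance)
```

A pattern consistent with the model (correct on easy items, incorrect on hard ones) yields l_z near or above zero, while a reversed pattern yields a strongly negative value.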
Spratto, Elisabeth M.; Leventhal, Brian C.; Bandalos, Deborah L. – Educational and Psychological Measurement, 2021
In this study, we examined the results and interpretations produced from two different IRTree models--one using paths consisting of only dichotomous decisions, and one using paths consisting of both dichotomous and polytomous decisions. We used data from two versions of an impulsivity measure. In the first version, all the response options had…
Descriptors: Comparative Analysis, Item Response Theory, Decision Making, Data Analysis
Zhou, Sherry; Huggins-Manley, Anne Corinne – Educational and Psychological Measurement, 2020
The semi-generalized partial credit model (Semi-GPCM) has been proposed as a unidimensional modeling method for handling not applicable scale responses and neutral scale responses, and it has been suggested that the model may be of use in handling missing data in scale items. The purpose of this study is to evaluate the ability of the…
Descriptors: Models, Statistical Analysis, Response Style (Tests), Test Items
Little, Todd D.; Chang, Rong; Gorrall, Britt K.; Waggenspack, Luke; Fukuda, Eriko; Allen, Patricia J.; Noam, Gil G. – International Journal of Behavioral Development, 2020
We revisit the merits of the retrospective pretest-posttest (RPP) design for repeated-measures research. The underutilized RPP method asks respondents to rate survey items twice during the same posttest measurement occasion from two specific frames of reference: "now" and "then." Individuals first report their current attitudes…
Descriptors: Pretesting, Alternative Assessment, Program Evaluation, Evaluation Methods
Holden, Ronald R.; Marjanovic, Zdravko; Troister, Talia – Journal of Psychoeducational Assessment, 2019
Indiscriminate (i.e., careless, random, insufficient-effort) responses, commonly believed to weaken effect sizes and produce Type II errors, can instead inflate effect sizes and potentially produce Type I errors, in which a supposedly significant result is actually artifactual. We demonstrate how indiscriminate responses can produce spuriously high…
Descriptors: Response Style (Tests), Effect Size, Correlation, Undergraduate Students
Collins, Alyson A.; Lindström, Esther R.; Sandbank, Micheal – Annals of Dyslexia, 2021
This study investigated the dependability of reading comprehension scores across different text genres and response formats for readers with varied language knowledge. Participants included 78 fourth-graders in an urban elementary school. A randomized and counterbalanced 3 × 2 study design investigated three response formats (open-ended,…
Descriptors: Reading Comprehension, Reading Tests, Response Style (Tests), Scores
Tracy Noble; Craig S. Wells; Ann S. Rosebery – Educational Assessment, 2023
This article reports on two quantitative studies of English learners' (ELs) interactions with constructed-response items from a Grade 5 state science test. Study 1 investigated the relationships between the constructed-response item-level variables of English Reading Demand, English Writing Demand, and Background Knowledge Demand and the…
Descriptors: Grade 5, State Standards, Standardized Tests, Science Tests
Huang, Hung-Yu – Educational and Psychological Measurement, 2020
In educational assessments and achievement tests, test developers and administrators commonly assume that test-takers attempt all test items with full effort and leave no blank responses with unplanned missing values. However, aberrant response behavior--such as performance decline, dropping out beyond a certain point, and skipping certain items…
Descriptors: Item Response Theory, Response Style (Tests), Test Items, Statistical Analysis
Bürkner, Paul-Christian; Schulte, Niklas; Holling, Heinz – Educational and Psychological Measurement, 2019
Forced-choice questionnaires have been proposed to avoid common response biases typically associated with rating scale questionnaires. To overcome ipsativity issues of trait scores obtained from classical scoring approaches of forced-choice items, advanced methods from item response theory (IRT) such as the Thurstonian IRT model have been…
Descriptors: Item Response Theory, Measurement Techniques, Questionnaires, Rating Scales
Dibek, Munevver Ilgun; Cikrikci, Rahime Nukhet – International Journal of Progressive Education, 2021
This study aims to first investigate the effect of the extreme response style (ERS) which could lead to an attitude-achievement paradox among the countries participating in the Trends in International Mathematics and Science Study (TIMSS 2015), and then to determine the individual- and country-level relationships between attitude and achievement…
Descriptors: Item Response Theory, Response Style (Tests), Elementary Secondary Education, Achievement Tests
Rios, Joseph A.; Guo, Hongwen; Mao, Liyang; Liu, Ou Lydia – International Journal of Testing, 2017
When examinees' test-taking motivation is questionable, practitioners must determine whether careless responding is of practical concern and if so, decide on the best approach to filter such responses. As there has been insufficient research on these topics, the objectives of this study were to: a) evaluate the degree of underestimation in the…
Descriptors: Response Style (Tests), Scores, Motivation, Computation
Rushkin, Ilia; Chuang, Isaac; Tingley, Dustin – Journal of Learning Analytics, 2019
Each time a learner in a self-paced online course seeks to answer an assessment question, it takes some time for the student to read the question and arrive at an answer to submit. If multiple attempts are allowed, and the first answer is incorrect, it takes some time to provide a second answer. Here we study the distribution of such…
Descriptors: Online Courses, Response Style (Tests), Models, Learner Engagement
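A common baseline for modeling such response-time distributions is the lognormal response-time model (van der Linden, 2006), in which the log of the time T is normally distributed with mean beta - tau (item time intensity minus person speed) and standard deviation 1/alpha. A sketch under that assumption; this is a standard model, not necessarily the one fitted in the study above:

```python
import math
import random

def sample_response_time(beta, tau, alpha, rng=None):
    """Draw one response time T from a lognormal response-time
    model: ln T ~ Normal(beta - tau, (1/alpha)^2). `beta` is the
    item's time intensity, `tau` the examinee's speed, and `alpha`
    controls how tightly log times cluster around their mean."""
    rng = rng or random.Random()
    return math.exp(rng.gauss(beta - tau, 1.0 / alpha))
```

With beta = 4 and tau = 0, the median simulated time is about e^4 ≈ 55 time units; faster examinees (larger tau) shift the whole distribution downward.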
Lang, David – Grantee Submission, 2019
Whether high-stakes exams such as the SAT or College Board AP exams should penalize incorrect answers is a controversial question. In this paper, we document that penalty functions can have differential effects depending on a student's risk tolerance. Moreover, the literature shows that risk aversion tends to vary along other areas of concern, such as…
Descriptors: High Stakes Tests, Risk, Item Response Theory, Test Bias
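The penalty functions at issue are typically variants of classic "formula scoring": +1 for a correct answer and -1/(k-1) for a wrong one on a k-option item, which makes blind guessing break even in expectation, so only risk preferences distinguish guessing from omitting. A minimal sketch of that arithmetic (the function name and defaults are illustrative, not taken from the paper):

```python
def formula_score_expected(p_correct, k=5, penalty=None):
    """Expected item score under formula ('rights minus wrongs')
    scoring: +1 for a correct answer, -penalty for an incorrect
    one, 0 for an omit. The default penalty of 1/(k-1) is the
    standard correction that makes blind guessing (p = 1/k) have
    the same expected score as omitting, namely zero."""
    if penalty is None:
        penalty = 1 / (k - 1)
    return p_correct * 1 + (1 - p_correct) * (-penalty)
```

For a five-option item, blind guessing (p = 0.2) has expected score 0, so any partial knowledge (p > 1/k) makes answering better in expectation than omitting; a risk-averse student may still prefer the certain 0 of an omit to a gamble with the same mean.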
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2018
The purpose of this study is to assess the impact of aberrant responses on estimation accuracy in forced-choice format assessments. To that end, a wide range of aberrant response behaviors (e.g., fake, random, or mechanical responses), affecting upwards of 20%-30% of the responses, was manipulated under the multi-unidimensional pairwise…
Descriptors: Measurement Techniques, Response Style (Tests), Accuracy, Computation
Höhne, Jan Karem; Krebs, Dagmar – International Journal of Social Research Methodology, 2018
The effect of the response scale direction on response behavior is a well-known phenomenon in survey research. While there are several approaches to explaining how such response order effects occur, the literature reports mixed evidence. Furthermore, different question formats seem to vary in their susceptibility to these effects. We therefore…
Descriptors: Test Items, Response Style (Tests), Questioning Techniques, Questionnaires