Showing 1 to 15 of 93 results
Peer reviewed
Direct link
Wise, Steven L.; Kuhfeld, Megan R. – Journal of Educational Measurement, 2021
There has been a growing research interest in the identification and management of disengaged test taking, which poses a validity threat that is particularly prevalent with low-stakes tests. This study investigated effort-moderated (E-M) scoring, in which item responses classified as rapid guesses are identified and excluded from scoring. Using…
Descriptors: Scoring, Data Use, Response Style (Tests), Guessing (Tests)
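To make the effort-moderated (E-M) scoring idea concrete, here is a minimal sketch; it is not the authors' implementation, and the per-item rapid-guess thresholds and data layout are assumed purely for illustration:

```python
# Minimal sketch of effort-moderated (E-M) scoring: responses whose
# response time falls below an item-specific threshold are classified
# as rapid guesses and excluded before computing the score.
# Thresholds and data are illustrative, not from the study.

def effort_moderated_score(responses, times, thresholds):
    """responses: 0/1 correctness; times: seconds per item;
    thresholds: rapid-guess cutoff in seconds per item."""
    kept = [(r, t >= th) for r, t, th in zip(responses, times, thresholds)]
    effortful = [r for r, keep in kept if keep]
    if not effortful:  # every response was a rapid guess
        return None
    return sum(effortful) / len(effortful)  # proportion correct on effortful items

# Example: the third response (1.2 s) is flagged as a rapid guess and dropped.
print(effort_moderated_score([1, 0, 1, 1], [14.0, 22.5, 1.2, 9.8],
                             [3.0, 3.0, 3.0, 3.0]))  # 0.666...
```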
Peer reviewed
Direct link
Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
Peer reviewed
Direct link
Ulitzsch, Esther; Penk, Christiane; von Davier, Matthias; Pohl, Steffi – Educational Assessment, 2021
Identifying and considering test-taking effort is of utmost importance for drawing valid inferences on examinee competency in low-stakes tests. Different approaches exist for doing so. The speed-accuracy+engagement model aims at identifying non-effortful test-taking behavior in terms of nonresponse and rapid guessing based on responses and…
Descriptors: Response Style (Tests), Guessing (Tests), Reaction Time, Measurement Techniques
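The general idea behind models of this kind can be written as a two-component mixture over responses and response times; the expression below is a generic illustration of the rapid-guessing mixture idea, not the exact speed-accuracy+engagement specification from the article:

```latex
% Generic two-class mixture for observed response x_{ij} and response
% time t_{ij}: with probability \pi_{ij}, examinee i is disengaged on
% item j (guessing); otherwise the response follows an IRT model with
% ability \theta_i and the time follows a speed model with speed \tau_i.
P(x_{ij}, t_{ij}) = \pi_{ij}\, P_{\text{guess}}(x_{ij})\, f_{\text{guess}}(t_{ij})
  + (1 - \pi_{ij})\, P_{\text{IRT}}(x_{ij} \mid \theta_i)\, f_{\text{RT}}(t_{ij} \mid \tau_i)
```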
Peer reviewed
Direct link
Yildirim-Erbasli, Seyma Nur; Bulut, Okan – Educational Research and Evaluation, 2020
This study investigated the impact of students' test-taking effort on their growth estimates in reading. The sample consisted of 7,602 students (Grades 1 to 4) in the United States who participated in the fall and spring administrations of a computer-based reading assessment. First, a new response dataset was created by flagging both…
Descriptors: Response Style (Tests), Reading Tests, Guessing (Tests), Reaction Time
Peer reviewed
PDF on ERIC (download full text)
Lang, David – Grantee Submission, 2019
Whether high-stakes exams such as the SAT or College Board AP exams should penalize incorrect answers is a controversial question. In this paper, we document that penalty functions can have differential effects depending on a student's risk tolerance. Moreover, the literature shows that risk aversion tends to vary along other areas of concern such as…
Descriptors: High Stakes Tests, Risk, Item Response Theory, Test Bias
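The penalty at issue is conventional formula scoring, whose correction is calibrated so that blind random guessing has an expected score of zero; a student's risk tolerance then determines whether guessing or omitting feels safer, even though the expectations are equal. As a worked check for a k-option item:

```latex
% Formula score per item: +1 if correct, -1/(k-1) if wrong, 0 if omitted.
% Expected score of a purely random guess on a k-option item:
E[S] = \frac{1}{k}(1) + \frac{k-1}{k}\left(-\frac{1}{k-1}\right)
     = \frac{1}{k} - \frac{1}{k} = 0
```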
Peer reviewed
Direct link
Soland, James; Kuhfeld, Megan – Educational Assessment, 2019
Considerable research has examined the use of rapid guessing measures to identify disengaged item responses. However, little is known about students who rapidly guess over the course of several tests. In this study, we use achievement test data from six administrations over three years to investigate whether rapid guessing is a stable trait-like…
Descriptors: Testing, Guessing (Tests), Reaction Time, Achievement Tests
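One way to frame the trait-like question is to compute each student's rapid-guess rate per administration and then correlate the rates across administrations; a sketch with assumed data (statistics.correlation requires Python 3.10+):

```python
# Sketch of the stability question: a trait-like behavior would show
# sizable positive correlations in rapid-guess rates across test
# administrations. Data values are invented for illustration.
import statistics

# rates[s] = rapid-guess rates of student s across three administrations
rates = {
    "s1": [0.05, 0.08, 0.06],
    "s2": [0.30, 0.25, 0.35],
    "s3": [0.00, 0.02, 0.01],
}
a1 = [r[0] for r in rates.values()]
a2 = [r[1] for r in rates.values()]
print(statistics.correlation(a1, a2))  # Pearson r between administrations 1 and 2
```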
NWEA, 2017
This document describes the following two new student engagement metrics now included on NWEA™ MAP® Growth™ reports, and provides guidance on how to interpret and use these metrics: (1) Percent of Disengaged Responses; and (2) Estimated Impact of Disengagement on RIT. These metrics will inform educators about what percentage of items from a…
Descriptors: Achievement Tests, Achievement Gains, Test Interpretation, Reaction Time
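As a rough illustration of the first metric, a percent-of-disengaged-responses figure can be computed from per-item engagement flags; the flagging rule and data layout below are assumed, and the actual MAP Growth computations (including the RIT impact estimate) are NWEA's own:

```python
# Illustrative computation of a "Percent of Disengaged Responses" metric:
# the share of items on which a student's response was flagged as
# disengaged (e.g., a rapid guess). Flags are invented for the example.

def percent_disengaged(flags):
    """flags: list of booleans, True = response flagged as disengaged."""
    return 100.0 * sum(flags) / len(flags)

flags = [False, False, True, False, True, False, False, False, False, True]
print(f"Percent of Disengaged Responses: {percent_disengaged(flags):.0f}%")  # 30%
```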
Peer reviewed
Direct link
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic inputs, noisy "and" gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess enter the DINA model function that represents responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
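For reference, slip and guess enter the standard DINA item response function as shown below (standard notation; the random-effect extension studied in the article modifies these parameters and is not reproduced here):

```latex
% eta_{ij} = 1 if examinee i has every attribute that item j requires
% (per the Q-matrix entries q_{jk}), else 0:
\eta_{ij} = \prod_{k=1}^{K} \alpha_{ik}^{\,q_{jk}},
\qquad
P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}}
% s_j: slip (a master answers incorrectly); g_j: guess (a non-master
% answers correctly).
```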
Peer reviewed
Direct link
Lee, Yi-Hsuan; Jia, Yue – Large-scale Assessments in Education, 2014
Background: Large-scale survey assessments have been used for decades to monitor what students know and can do. Such assessments aim at providing group-level scores for various populations, with little or no consequence to individual students for their test performance. Students' test-taking behaviors in survey assessments, particularly the level…
Descriptors: Measurement, Test Wiseness, Student Surveys, Response Style (Tests)
Peer reviewed
PDF on ERIC (download full text)
DeMars, Christine E.; Bashkov, Bozhidar M.; Socha, Alan B. – Research & Practice in Assessment, 2013
Examinee effort can impact the validity of scores on higher education assessments. Many studies of examinee effort have briefly noted gender differences, but gender differences in test-taking effort have not been a primary focus of research. This review of the literature brings together gender-related findings regarding three measures of examinee…
Descriptors: Gender Differences, Scores, Student Motivation, Test Wiseness
Peer reviewed
Direct link
DeMars, Christine E. – Educational Assessment, 2007
A series of 8 tests was administered to university students over 4 weeks for program assessment purposes. The stakes of these tests were low for students; they received course points based on test completion, not test performance. Tests were administered in a counterbalanced order across 2 administrations. Response time effort, a measure of the…
Descriptors: Reaction Time, Guessing (Tests), Testing Programs, College Students
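Response time effort (RTE), the measure referenced in the abstract, is conventionally the proportion of a student's responses classified as solution behavior, i.e., those with response times at or above an item's rapid-guess threshold; a minimal sketch with assumed thresholds:

```python
# Minimal sketch of response time effort (RTE): the proportion of a
# student's responses exhibiting solution behavior rather than rapid
# guessing. Thresholds and times are invented for illustration.

def response_time_effort(times, thresholds):
    """times: seconds per item; thresholds: rapid-guess cutoffs."""
    solution = [t >= th for t, th in zip(times, thresholds)]
    return sum(solution) / len(solution)

print(response_time_effort([12.4, 2.1, 18.0, 7.7], [3.0, 3.0, 4.0, 3.0]))  # 0.75
```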
Waller, Michael I. – 1976
A method of estimating the parameters of the Rasch model while removing the effect of random guessing is presented. The procedure is an application of the ARRG (Abilities Removing Random Guessing) model, recently developed for two-parameter latent trait models. Under the Rasch model, ARRG provides for estimation of abilities, removing the effects of…
Descriptors: Ability, Guessing (Tests), Item Analysis, Mathematical Models
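For reference, the Rasch model whose parameters are being estimated is the one-parameter logistic form below; it contains no guessing parameter, which is what motivates corrections such as ARRG (whose details are in the paper and not reproduced here):

```latex
% Rasch (1PL) probability that an examinee of ability \theta answers
% an item of difficulty b correctly. Note the absence of a lower
% asymptote: random guessing is not modeled, hence the ARRG correction.
P(X = 1 \mid \theta, b) = \frac{\exp(\theta - b)}{1 + \exp(\theta - b)}
```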
Peer reviewed
Boykin, A. Wade; Harackiewicz, Judith – British Journal of Psychology, 1981
High school and college students solved problems differing in level of uncertainty; their expressed curiosity about the correct answer was gauged; and their later recognition of correct answers was tested. Both epistemic curiosity and recognition bore monotonically increasing relationships to degree of uncertainty. Systematic intersubject differences in…
Descriptors: Adolescents, Curiosity, Guessing (Tests), Individual Differences
Peer reviewed
Lord, Frederic M. – Journal of Educational Measurement, 1975
The assumption that examinees either know the answer to a test item or else guess at random is usually totally implausible. A different assumption is outlined, under which formula scoring is found to be clearly superior to number right scoring. (Author)
Descriptors: Guessing (Tests), Multiple Choice Tests, Response Style (Tests), Scoring
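Formula scoring awards one point per correct answer and subtracts 1/(k-1) per wrong answer on k-option items, while number-right scoring simply counts correct answers; a small illustration with made-up counts:

```python
# Contrast between number-right and formula scoring on a test of
# k-option multiple-choice items. Counts are invented for illustration.

def formula_score(right, wrong, k):
    """Classic correction for guessing: R - W/(k-1); omitted items score 0."""
    return right - wrong / (k - 1)

# 40 four-option items: 25 right, 10 wrong, 5 omitted
print(25)                          # number-right score: 25
print(formula_score(25, 10, k=4))  # formula score: 25 - 10/3 = 21.67
```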
Ford, Valeria A. – 1973
The purpose of this paper is to acquaint the reader with the topic of test-wiseness. The first section of this paper presents a series of multiple-choice items. The reader is asked to respond to them and is encouraged to read carefully the remainder of this paper for techniques which could improve test-taking performance. The next section defines…
Descriptors: Guessing (Tests), Literature Reviews, Multiple Choice Tests, Response Style (Tests)