Publication Date
In 2025: 1
Since 2024: 30
Since 2021 (last 5 years): 100
Descriptor
Response Style (Tests): 100
Item Response Theory: 38
Foreign Countries: 36
Test Items: 33
Reaction Time: 19
Achievement Tests: 16
International Assessment: 14
Models: 14
Accuracy: 13
Scores: 13
Secondary School Students: 13
Author
Ames, Allison J.: 3
Bolt, Daniel M.: 3
Hsieh, Shu-Hui: 3
Leventhal, Brian C.: 3
Braeken, Johan: 2
Bulut, Hatice Cigdem: 2
Bulut, Okan: 2
Gummer, Tobias: 2
Kim, Nana: 2
Tijmstra, Jesper: 2
Ulitzsch, Esther: 2
Publication Type
Journal Articles: 93
Reports - Research: 88
Dissertations/Theses -…: 5
Reports - Descriptive: 3
Reports - Evaluative: 3
Tests/Questionnaires: 3
Speeches/Meeting Papers: 2
Information Analyses: 1
Audience
Practitioners: 2
Researchers: 2
Location
Germany: 13
Czech Republic: 4
Greece: 4
Taiwan: 4
Australia: 3
China: 3
Italy: 3
Lithuania: 3
New Zealand: 3
Norway: 3
South Korea: 3
Ames, Allison J.; Myers, Aaron J. – Educational and Psychological Measurement, 2021
Contamination of responses due to extreme and midpoint response style can confound the interpretation of scores, threatening the validity of inferences made from survey responses. This study incorporated person-level covariates in the multidimensional item response tree model to explain heterogeneity in response style. We include an empirical…
Descriptors: Response Style (Tests), Item Response Theory, Longitudinal Studies, Adolescents
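For context, an IRTree model works by recoding each ordinal response into a set of dichotomous pseudo-items, one per branching decision in the tree. Below is a minimal sketch, assuming a common midpoint/direction/extremity tree for a 5-point scale; the function name and tree structure are illustrative and not taken from the article.

    # Illustrative only: decompose a 5-point Likert response into the
    # dichotomous pseudo-items of an assumed midpoint/direction/extremity
    # IRTree (not necessarily the parameterization used in the article).
    def irtree_pseudo_items(response: int) -> dict:
        """Map a response in {1..5} to three tree nodes."""
        if response not in {1, 2, 3, 4, 5}:
            raise ValueError("expected a 5-point Likert response")
        if response == 3:
            # Midpoint chosen: the later branches are never reached.
            return {"node_mid": 1, "node_agree": None, "node_extreme": None}
        return {
            "node_mid": 0,                            # midpoint avoided
            "node_agree": int(response > 3),          # direction of response
            "node_extreme": int(response in {1, 5}),  # extreme category used
        }

    print(irtree_pseudo_items(5))  # {'node_mid': 0, 'node_agree': 1, 'node_extreme': 1}

Each pseudo-item is then treated as a binary item in a multidimensional IRT model, which is where person-level covariates can enter to explain response-style heterogeneity.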
Babcock, Ben; Siegel, Zachary D. – Practical Assessment, Research & Evaluation, 2022
Research about repeated testing has revealed that retaking the same exam form generally does not advantage or disadvantage failing candidates in selected-response-style credentialing exams. Feinberg, Raymond, and Haist (2015) found a contributing factor to this phenomenon: people answering items incorrectly on both attempts give the same incorrect…
Descriptors: Multiple Choice Tests, Item Analysis, Test Items, Response Style (Tests)
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Thompson, James J. – Measurement: Interdisciplinary Research and Perspectives, 2022
With the use of computerized testing, ordinary assessments can capture both answer accuracy and answer response time. For the Canadian Programme for the International Assessment of Adult Competencies (PIAAC) numeracy and literacy subtests, person ability, person speed, question difficulty, question time intensity, fluency (rate), person fluency…
Descriptors: Foreign Countries, Adults, Computer Assisted Testing, Network Analysis
Steinmann, Isa; Sánchez, Daniel; van Laar, Saskia; Braeken, Johan – Assessment in Education: Principles, Policy & Practice, 2022
Questionnaire scales that are mixed-worded, i.e. include both positively and negatively worded items, often suffer from issues like low reliability and more complex latent structures than intended. Part of the problem might be that some responders fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of…
Descriptors: Response Style (Tests), Test Items, Achievement Tests, Foreign Countries
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
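A common way to set such thresholds is the normative threshold method: flag any response faster than a fixed fraction of the item's typical response time. The sketch below assumes a 10%-of-median rule purely for illustration; the article compares threshold-setting approaches and may favor a different rule.

    # Illustrative sketch: flag rapid (likely noneffortful) responses using
    # an assumed normative threshold of 10% of each item's median response
    # time. The fraction is a placeholder, not the article's recommendation.
    from statistics import median

    def flag_noneffortful(rt_by_item, fraction=0.10):
        """Return, per item, one flag for each response time below threshold."""
        flags = {}
        for item, times in rt_by_item.items():
            threshold = fraction * median(times)
            flags[item] = [t < threshold for t in times]
        return flags

    rts = {"item1": [12.0, 0.8, 9.5, 11.2]}
    print(flag_noneffortful(rts))  # {'item1': [False, True, False, False]}
    # The item median is 10.35 s, so only the 0.8 s response falls below
    # the 1.035 s threshold and is flagged as noneffortful.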
Adrian Adams; Lauren Barth-Cohen – CBE - Life Sciences Education, 2024
In undergraduate research settings, students are likely to encounter anomalous data, that is, data that do not meet their expectations. Most of the research that directly or indirectly captures the role of anomalous data in research settings uses post-hoc reflective interviews or surveys. These data collection approaches focus on recall of past…
Descriptors: Undergraduate Students, Physics, Science Instruction, Laboratory Experiments
Bulut, Okan; Bulut, Hatice Cigdem; Cormier, Damien C.; Ilgun Dibek, Munevver; Sahin Kursad, Merve – Educational Assessment, 2023
Some statewide testing programs allow students to receive corrective feedback and revise their answers during testing. Despite its pedagogical benefits, the effects of providing revision opportunities remain unknown in the context of alternate assessments. Therefore, this study examined student data from a large-scale alternate assessment that…
Descriptors: Error Correction, Alternative Assessment, Feedback (Response), Multiple Choice Tests
Bezirhan, Ummugul; von Davier, Matthias; Grabovsky, Irina – Educational and Psychological Measurement, 2021
This article presents a new approach to the analysis of how students answer tests and how they allocate resources in terms of time on task and revisiting previously answered questions. Previous research has shown that in high-stakes assessments, most test takers do not end the testing session early, but rather spend all of the time they were…
Descriptors: Response Style (Tests), Accuracy, Reaction Time, Ability
Rebecca F. Berenbon; Jerome V. D'Agostino; Emily M. Rodgers – Journal of Psychoeducational Assessment, 2024
Curriculum-based measures (CBMs) such as Word Identification Fluency (WIF) promote student achievement, but because they are timed and administered frequently, they are prone to variation in student response styles. To study the impact of WIF response styles, we created and validated a novel response style measure and examined the degree…
Descriptors: Elementary School Students, Elementary School Teachers, Grade 1, Special Education Teachers
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
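As background for the item parameter bias at issue here: applications like this commonly rest on a logistic IRT model such as the two-parameter logistic (2PL), sketched below as a standard reference form rather than the article's exact model. A biased difficulty or discrimination estimate distorts this curve and, in turn, the ability estimates drawn from it.

    # Standard 2PL item characteristic curve (shown for background; the
    # article's model and parameter values are not specified here).
    import math

    def p_correct_2pl(theta, a, b):
        """Probability of a correct response given ability theta,
        discrimination a, and difficulty b."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # Example: a mis-estimated difficulty shifts the predicted probability
    # for the same examinee (theta = 0.0), biasing downstream ability scores.
    print(p_correct_2pl(0.0, a=1.2, b=-0.5))  # ~0.646 with one difficulty value
    print(p_correct_2pl(0.0, a=1.2, b=0.5))   # ~0.354 with a shifted difficulty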
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items
Spratto, Elisabeth M.; Leventhal, Brian C.; Bandalos, Deborah L. – Educational and Psychological Measurement, 2021
In this study, we examined the results and interpretations produced from two different IRTree models--one using paths consisting of only dichotomous decisions, and one using paths consisting of both dichotomous and polytomous decisions. We used data from two versions of an impulsivity measure. In the first version, all the response options had…
Descriptors: Comparative Analysis, Item Response Theory, Decision Making, Data Analysis
Collins, Alyson A.; Lindström, Esther R.; Sandbank, Micheal – Annals of Dyslexia, 2021
This study investigated the dependability of reading comprehension scores across different text genres and response formats for readers with varied language knowledge. Participants included 78 fourth-graders in an urban elementary school. A randomized and counterbalanced 3 × 2 study design investigated three response formats (open-ended,…
Descriptors: Reading Comprehension, Reading Tests, Response Style (Tests), Scores
Tracy Noble; Craig S. Wells; Ann S. Rosebery – Educational Assessment, 2023
This article reports on two quantitative studies of English learners' (ELs) interactions with constructed-response items from a Grade 5 state science test. Study 1 investigated the relationships between the constructed-response item-level variables of English Reading Demand, English Writing Demand, and Background Knowledge Demand and the…
Descriptors: Grade 5, State Standards, Standardized Tests, Science Tests