Scanlon, Paul J. – Field Methods, 2019
Web, or online, probing has the potential to supplement existing questionnaire design processes by providing structured cognitive data on a wider sample than typical qualitative-only question evaluation methods can achieve. One of the practical impediments to the further integration of web probing is the concern of survey managers about how the…
Descriptors: Online Surveys, Questionnaires, Response Style (Tests), Test Items
Babcock, Ben; Siegel, Zachary D. – Practical Assessment, Research & Evaluation, 2022
Research about repeated testing has revealed that retaking the same exam form generally does not advantage or disadvantage failing candidates in selected response-style credentialing exams. Feinberg, Raymond, and Haist (2015) found a contributing factor to this phenomenon: people answering items incorrectly on both attempts give the same incorrect…
Descriptors: Multiple Choice Tests, Item Analysis, Test Items, Response Style (Tests)
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Thompson, James J. – Measurement: Interdisciplinary Research and Perspectives, 2022
With the use of computerized testing, ordinary assessments can capture both answer accuracy and answer response time. For the Canadian Programme for the International Assessment of Adult Competencies (PIAAC) numeracy and literacy subtests, person ability, person speed, question difficulty, question time intensity, fluency (rate), person fluency…
Descriptors: Foreign Countries, Adults, Computer Assisted Testing, Network Analysis
Rios, Joseph A.; Guo, Hongwen – Applied Measurement in Education, 2020
The objective of this study was to evaluate whether differential noneffortful responding (identified via response latencies) was present in four countries administered a low-stakes college-level critical thinking assessment. Results indicated significant differences (as large as 0.90 "SD") between nearly all country pairings in the…
Descriptors: Response Style (Tests), Cultural Differences, Critical Thinking, Cognitive Tests
Ivanova, Militsa; Michaelides, Michalis; Eklöf, Hanna – Educational Research and Evaluation, 2020
Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. The number of actions taken by students while interacting with an item is in this context a variable that has been gaining attention. The present study aims to investigate how the number of actions performed on…
Descriptors: Foreign Countries, Secondary School Students, Achievement Tests, International Assessment
Abbakumov, Dmitry; Desmet, Piet; Van den Noortgate, Wim – Applied Measurement in Education, 2020
Formative assessments are an important component of massive open online courses (MOOCs), online courses with open access and unlimited student participation. Accurate conclusions on students' proficiency via formative assessments, however, face several challenges: (a) students are typically allowed to make several attempts; and (b) student performance might…
Descriptors: Item Response Theory, Formative Evaluation, Online Courses, Response Style (Tests)
Steinmann, Isa; Sánchez, Daniel; van Laar, Saskia; Braeken, Johan – Assessment in Education: Principles, Policy & Practice, 2022
Questionnaire scales that are mixed-worded, i.e. include both positively and negatively worded items, often suffer from issues like low reliability and more complex latent structures than intended. Part of the problem might be that some responders fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of…
Descriptors: Response Style (Tests), Test Items, Achievement Tests, Foreign Countries
Park, Minjeong; Wu, Amery D. – Educational and Psychological Measurement, 2019
Item response tree (IRTree) models were recently introduced as an approach to modeling response data from Likert-type rating scales. IRTree models are particularly useful for capturing the variety of behaviors individuals exhibit when responding to items. This study employed IRTree models to investigate response styles, which are individuals' tendencies to…
Descriptors: Item Response Theory, Models, Likert Scales, Response Style (Tests)
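The IRTree approach described in the entry above recodes each polytomous rating into binary pseudo-items, one per node of a hypothesized response-process tree. The sketch below shows one commonly used tree for a 5-point scale (a midpoint node, a direction node, and an extremity node); the function name and this particular tree structure are illustrative assumptions, not the authors' exact specification:

```python
def irtree_recode(response):
    """Map a 5-point Likert response (1-5) to three binary pseudo-items:
    midpoint selection, direction (agree vs. disagree), and extremity.
    Nodes that are unreachable for a given response are coded None
    (missing by design), as is standard in IRTree data expansion."""
    if response not in {1, 2, 3, 4, 5}:
        raise ValueError("expected a response in 1..5")
    midpoint = 1 if response == 3 else 0
    # direction and extremity are only defined when the midpoint was not chosen
    direction = None if midpoint else (1 if response > 3 else 0)
    extreme = None if midpoint else (1 if response in (1, 5) else 0)
    return midpoint, direction, extreme

# "strongly disagree" (1) activates the extreme node; the midpoint (3)
# leaves the other two nodes missing by design
assert irtree_recode(1) == (0, 0, 1)
assert irtree_recode(3) == (1, None, None)
```

Each pseudo-item is then calibrated with an ordinary binary IRT model, which is what lets the tree separate content-driven responding from response styles such as midpoint or extreme preference.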
Soland, James; Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2019
Disengaged responding is a phenomenon that often biases observed scores from achievement tests and surveys in practically and statistically significant ways. This problem has led to the development of methods to detect and correct for disengaged responses on both achievement test and survey scores. One major disadvantage when trying to detect…
Descriptors: Reaction Time, Metadata, Response Style (Tests), Student Surveys
Adrian Adams; Lauren Barth-Cohen – CBE - Life Sciences Education, 2024
In undergraduate research settings, students are likely to encounter anomalous data, that is, data that do not meet their expectations. Most of the research that directly or indirectly captures the role of anomalous data in research settings uses post-hoc reflective interviews or surveys. These data collection approaches focus on recall of past…
Descriptors: Undergraduate Students, Physics, Science Instruction, Laboratory Experiments
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
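Threshold-based flagging of the kind this study evaluates can be sketched in a few lines. The 10%-of-mean-time rule below is one published heuristic (the normative threshold method); the function names and data shapes are illustrative assumptions:

```python
def rt_thresholds(item_mean_times, fraction=0.10):
    """Normative-threshold heuristic: the rapid-guessing cutoff for each
    item is a fixed fraction (here 10%) of that item's mean response time."""
    return {item: fraction * mean for item, mean in item_mean_times.items()}

def flag_noneffortful(response_times, thresholds):
    """Mark a response as noneffortful when its latency falls below
    the cutoff for that item."""
    return {item: rt < thresholds[item] for item, rt in response_times.items()}

# A 1.5 s response on an item averaging 40 s falls under its 4 s cutoff;
# a 6 s response on an item averaging 12 s clears its 1.2 s cutoff.
cutoffs = rt_thresholds({"item1": 40.0, "item2": 12.0})
flags = flag_noneffortful({"item1": 1.5, "item2": 6.0}, cutoffs)
# flags == {"item1": True, "item2": False}
```

The choice of threshold matters: a cutoff that is too low misses rapid guesses, while one that is too high flags effortful but fast examinees, which is exactly the trade-off the study examines.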
Wang, Rui; Krosnick, Jon A. – International Journal of Social Research Methodology, 2020
Questionnaires routinely measure unipolar and bipolar constructs using rating scales. Such rating scales can offer odd numbers of points, meaning that they have explicit middle alternatives, or they can offer even numbers of points, omitting the middle alternative. By examining four types of questions in six national or regional telephone surveys,…
Descriptors: Validity, Rating Scales, Questionnaires, Telephone Surveys
Bulut, Okan; Bulut, Hatice Cigdem; Cormier, Damien C.; Ilgun Dibek, Munevver; Sahin Kursad, Merve – Educational Assessment, 2023
Some statewide testing programs allow students to receive corrective feedback and revise their answers during testing. Despite its pedagogical benefits, the effects of providing revision opportunities remain unknown in the context of alternate assessments. Therefore, this study examined student data from a large-scale alternate assessment that…
Descriptors: Error Correction, Alternative Assessment, Feedback (Response), Multiple Choice Tests
Bezirhan, Ummugul; von Davier, Matthias; Grabovsky, Irina – Educational and Psychological Measurement, 2021
This article presents a new approach to the analysis of how students answer tests and how they allocate resources in terms of time on task and revisiting previously answered questions. Previous research has shown that in high-stakes assessments, most test takers do not end the testing session early, but rather spend all of the time they were…
Descriptors: Response Style (Tests), Accuracy, Reaction Time, Ability