Showing 106 to 120 of 1,389 results
Scott, Marcus W. – ProQuest LLC, 2018
One way that examinees can gain an unfair advantage on a test is by having prior access to the test questions and their answers, known as preknowledge. Determining which examinees had preknowledge can be a difficult task. Sometimes, the compromised test content that examinees use to gain preknowledge contains mistakes in the answer key. Examinees who…
Descriptors: Cheating, Answer Keys, Tests, Identification
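The abstract above is truncated, but it suggests that answer-key errors in compromised content can themselves be diagnostic. A minimal sketch of that idea (my own illustration under that assumption, not the dissertation's actual method): an examinee with preknowledge of a flawed key may reproduce the key's erroneous answers, so counting matches on the miskeyed items gives a simple flagging statistic.

```python
def error_key_matches(responses, compromised_key, correct_key):
    """Count items where an examinee matches the compromised key
    specifically on items where that key is wrong."""
    return sum(
        1
        for resp, comp, corr in zip(responses, compromised_key, correct_key)
        if comp != corr and resp == comp
    )

# Toy data (invented for illustration): the compromised key is
# miskeyed at positions 1 and 3.
correct = ["A", "B", "C", "D", "A"]
compromised = ["A", "C", "C", "A", "A"]
suspect = ["A", "C", "C", "A", "B"]  # matches both key errors
honest = ["A", "B", "C", "D", "A"]   # matches neither

error_key_matches(suspect, compromised, correct)  # -> 2
error_key_matches(honest, compromised, correct)   # -> 0
```

A real analysis would compare such counts against a chance baseline for each examinee's ability level rather than using raw counts.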
Peer reviewed
Park, Minjeong; Wu, Amery D. – Educational and Psychological Measurement, 2019
Item response tree (IRTree) models were recently introduced as an approach to modeling response data from Likert-type rating scales. IRTree models are particularly useful for capturing a variety of individuals' behaviors involved in item responding. This study employed IRTree models to investigate response styles, which are individuals' tendencies to…
Descriptors: Item Response Theory, Models, Likert Scales, Response Style (Tests)
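To make the IRTree idea concrete, a common tree for a 4-point Likert scale decomposes each response into binary pseudo-items, one per tree node. The node structure below (direction, then extremity) is one frequently used decomposition, shown here as a minimal sketch rather than the specific model the study fits:

```python
def irtree_pseudo_items(response):
    """Map a 4-point Likert response (1-4) to two binary pseudo-items.

    direction: 0 = disagree side (1, 2), 1 = agree side (3, 4)
    extremity: 0 = mild category (2, 3), 1 = extreme category (1, 4)
    """
    if response not in (1, 2, 3, 4):
        raise ValueError("expected a response in 1..4")
    direction = 1 if response >= 3 else 0
    extremity = 1 if response in (1, 4) else 0
    return direction, extremity

# "Strongly disagree" (1) -> disagree side, extreme category
irtree_pseudo_items(1)  # -> (0, 1)
```

Each pseudo-item is then modeled with its own IRT parameters, which is how the extremity node can separate an extreme response style from the substantive trait.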
Peer reviewed
Soland, James; Wise, Steven L.; Gao, Lingyun – Applied Measurement in Education, 2019
Disengaged responding is a phenomenon that often biases observed scores from achievement tests and surveys in practically and statistically significant ways. This problem has led to the development of methods to detect and correct for disengaged responses on both achievement test and survey scores. One major disadvantage when trying to detect…
Descriptors: Reaction Time, Metadata, Response Style (Tests), Student Surveys
OECD Publishing, 2019
Log files from computer-based assessment can help better understand respondents' behaviours and cognitive strategies. Analysis of timing information from Programme for the International Assessment of Adult Competencies (PIAAC) reveals large differences in the time participants take to answer assessment items, as well as large country differences…
Descriptors: Adults, Computer Assisted Testing, Test Items, Reaction Time
Peer reviewed
Steinmann, Isa; Sánchez, Daniel; van Laar, Saskia; Braeken, Johan – Assessment in Education: Principles, Policy & Practice, 2022
Questionnaire scales that are mixed-worded, i.e., include both positively and negatively worded items, often suffer from issues such as low reliability and more complex latent structures than intended. Part of the problem might be that some respondents fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of…
Descriptors: Response Style (Tests), Test Items, Achievement Tests, Foreign Countries
Peer reviewed
Soland, James; Kuhfeld, Megan; Rios, Joseph – Large-scale Assessments in Education, 2021
Low examinee effort is a major threat to valid uses of many test scores. Fortunately, several methods have been developed to detect noneffortful item responses, most of which use response times. To accurately identify noneffortful responses, one must set response time thresholds separating those responses from effortful ones. While other studies…
Descriptors: Reaction Time, Measurement, Response Style (Tests), Reading Tests
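The abstract above describes setting response-time thresholds to separate noneffortful from effortful responses. As a hedged illustration (the paper compares threshold-setting methods; this shows only the simplest fixed-threshold rule, with invented timing data):

```python
def flag_noneffortful(response_times, threshold_seconds=3.0):
    """Flag responses faster than a fixed time threshold as likely
    rapid-guessing (noneffortful) responses."""
    return [t < threshold_seconds for t in response_times]

# Invented item-level response times in seconds for one examinee.
times = [1.2, 8.5, 0.9, 15.0, 2.4]
flags = flag_noneffortful(times)
# flags -> [True, False, True, False, True]
```

In practice, thresholds are usually set per item (e.g., from the bimodal shape of each item's response-time distribution) rather than as a single global constant.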
Peer reviewed
Wang, Rui; Krosnick, Jon A. – International Journal of Social Research Methodology, 2020
Questionnaires routinely measure unipolar and bipolar constructs using rating scales. Such rating scales can offer odd numbers of points, meaning that they have explicit middle alternatives, or they can offer even numbers of points, omitting the middle alternative. By examining four types of questions in six national or regional telephone surveys,…
Descriptors: Validity, Rating Scales, Questionnaires, Telephone Surveys
Peer reviewed
Adrian Adams; Lauren Barth-Cohen – CBE - Life Sciences Education, 2024
In undergraduate research settings, students are likely to encounter anomalous data, that is, data that do not meet their expectations. Most of the research that directly or indirectly captures the role of anomalous data in research settings uses post-hoc reflective interviews or surveys. These data collection approaches focus on recall of past…
Descriptors: Undergraduate Students, Physics, Science Instruction, Laboratory Experiments
Peer reviewed
Bulut, Okan; Bulut, Hatice Cigdem; Cormier, Damien C.; Ilgun Dibek, Munevver; Sahin Kursad, Merve – Educational Assessment, 2023
Some statewide testing programs allow students to receive corrective feedback and revise their answers during testing. Despite its pedagogical benefits, the effects of providing revision opportunities remain unknown in the context of alternate assessments. Therefore, this study examined student data from a large-scale alternate assessment that…
Descriptors: Error Correction, Alternative Assessment, Feedback (Response), Multiple Choice Tests
Peer reviewed
Bezirhan, Ummugul; von Davier, Matthias; Grabovsky, Irina – Educational and Psychological Measurement, 2021
This article presents a new approach to the analysis of how students answer tests and how they allocate resources in terms of time on task and revisiting previously answered questions. Previous research has shown that in high-stakes assessments, most test takers do not end the testing session early, but rather spend all of the time they were…
Descriptors: Response Style (Tests), Accuracy, Reaction Time, Ability
Peer reviewed
Silber, Henning; Danner, Daniel; Rammstedt, Beatrice – International Journal of Social Research Methodology, 2019
This study aims to assess whether respondent inattentiveness causes systematic and unsystematic measurement error that influences survey data quality. To determine the impact of (in)attentiveness on the reliability and validity of target measures, we compared respondents from a German online survey (N = 5205) who had passed two attention checks…
Descriptors: Foreign Countries, Test Validity, Test Reliability, Attention
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale, low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
Peer reviewed
Rebecca F. Berenbon; Jerome V. D'Agostino; Emily M. Rodgers – Journal of Psychoeducational Assessment, 2024
Curriculum-based measures (CBMs) such as Word Identification Fluency (WIF) promote student achievement, but because they are timed and administered frequently, they are prone to variation in student response styles. To study the impact of WIF response styles, we created a novel response style measure, examined its validity, and examined the degree…
Descriptors: Elementary School Students, Elementary School Teachers, Grade 1, Special Education Teachers
Xue, Kang; Huggins-Manley, Anne Corinne; Leite, Walter – Educational and Psychological Measurement, 2022
In data collected from virtual learning environments (VLEs), item response theory (IRT) models can be used to guide the ongoing measurement of student ability. However, such applications of IRT rely on unbiased item parameter estimates associated with test items in the VLE. Without formal piloting of the items, one can expect a large amount of…
Descriptors: Virtual Classrooms, Artificial Intelligence, Item Response Theory, Item Analysis
Peer reviewed
Ames, Allison J. – Educational and Psychological Measurement, 2022
Individual response style behaviors, unrelated to the latent trait of interest, may influence responses to ordinal survey items. Response style can introduce bias in the total score with respect to the trait of interest, threatening valid interpretation of scores. Despite claims of response style stability across scales, there has been little…
Descriptors: Response Style (Tests), Individual Differences, Scores, Test Items