Showing 31 to 45 of 154 results
Peer reviewed
Vonkova, Hana; Hrabak, Jan; Kralova, Katerina; Papajoanu, Ondrej – Field Methods, 2021
Self-assessment measures are commonly used in questionnaire surveys. However, one of the problems with self-reports is that they may be prone to differences in scale usage among respondents. The anchoring vignette method addresses this issue. It relies on two assumptions: response consistency and vignette equivalence. Here we aim to develop a…
Descriptors: Vignettes, Interviews, Self Evaluation (Individuals), Reliability
Peer reviewed
Magraw-Mickelson, Zoe; Wang, Harry H.; Gollwitzer, Mario – International Journal of Testing, 2022
Much psychological research depends on participants' diligence in filling out materials such as surveys. However, not all participants are motivated to respond attentively, which leads to unintended issues with data quality, known as careless responding. Our question is: how do different modes of data collection--paper/pencil, computer/web-based,…
Descriptors: Response Style (Tests), Surveys, Data Collection, Test Format
Peer reviewed
Thompson, James J. – Measurement: Interdisciplinary Research and Perspectives, 2022
With the use of computerized testing, ordinary assessments can capture both answer accuracy and answer response time. For the Canadian Programme for the International Assessment of Adult Competencies (PIAAC) numeracy and literacy subtests, person ability, person speed, question difficulty, question time intensity, fluency (rate), person fluency…
Descriptors: Foreign Countries, Adults, Computer Assisted Testing, Network Analysis
Peer reviewed
Rios, Joseph A.; Guo, Hongwen – Applied Measurement in Education, 2020
The objective of this study was to evaluate whether differential noneffortful responding (identified via response latencies) was present in four countries administered a low-stakes college-level critical thinking assessment. Results indicated significant differences (as large as 0.90 "SD") between nearly all country pairings in the…
Descriptors: Response Style (Tests), Cultural Differences, Critical Thinking, Cognitive Tests
Peer reviewed
Ivanova, Militsa; Michaelides, Michalis; Eklöf, Hanna – Educational Research and Evaluation, 2020
Collecting process data in computer-based assessments provides opportunities to describe examinee behaviour during a test-taking session. In this context, the number of actions students take while interacting with an item is a variable that has been gaining attention. The present study aims to investigate how the number of actions performed on…
Descriptors: Foreign Countries, Secondary School Students, Achievement Tests, International Assessment
Peer reviewed
Steinmann, Isa; Sánchez, Daniel; van Laar, Saskia; Braeken, Johan – Assessment in Education: Principles, Policy & Practice, 2022
Questionnaire scales that are mixed-worded, i.e. that include both positively and negatively worded items, often suffer from issues such as low reliability and more complex latent structures than intended. Part of the problem may be that some respondents fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of…
Descriptors: Response Style (Tests), Test Items, Achievement Tests, Foreign Countries
OECD Publishing, 2019
Log files from computer-based assessment can help better understand respondents' behaviours and cognitive strategies. Analysis of timing information from Programme for the International Assessment of Adult Competencies (PIAAC) reveals large differences in the time participants take to answer assessment items, as well as large country differences…
Descriptors: Adults, Computer Assisted Testing, Test Items, Reaction Time
Peer reviewed
Silber, Henning; Danner, Daniel; Rammstedt, Beatrice – International Journal of Social Research Methodology, 2019
This study aims to assess whether respondent inattentiveness causes systematic and unsystematic measurement error that influences survey data quality. To determine the impact of (in)attentiveness on the reliability and validity of target measures, we compared respondents from a German online survey (N = 5205) who had passed two attention checks…
Descriptors: Foreign Countries, Test Validity, Test Reliability, Attention
Peer reviewed
Liu, Yuan; Hau, Kit-Tai – Educational and Psychological Measurement, 2020
In large-scale, low-stakes assessments such as the Programme for International Student Assessment (PISA), students may skip items (missingness) that are within their ability to complete. Detecting and accounting for these noneffortful responses, as a measure of test-taking motivation, is an important issue in modern psychometric models.…
Descriptors: Response Style (Tests), Motivation, Test Items, Statistical Analysis
Peer reviewed
Holden, Ronald R.; Marjanovic, Zdravko; Troister, Talia – Journal of Psychoeducational Assessment, 2019
Indiscriminate (i.e., careless, random, insufficient-effort) responses, commonly believed to weaken effect sizes and produce Type II errors, can instead inflate effect sizes and potentially produce Type I errors, where a supposedly significant result is actually artifactual. We demonstrate how indiscriminate responses can produce spuriously high…
Descriptors: Response Style (Tests), Effect Size, Correlation, Undergraduate Students
Peer reviewed
Huang, Hung-Yu – Educational and Psychological Measurement, 2020
In educational assessments and achievement tests, test developers and administrators commonly assume that test-takers attempt all test items with full effort and leave no blank responses with unplanned missing values. However, aberrant response behavior--such as performance decline, dropping out beyond a certain point, and skipping certain items…
Descriptors: Item Response Theory, Response Style (Tests), Test Items, Statistical Analysis
Peer reviewed
PDF available on ERIC
Dibek, Munevver Ilgun; Cikrikci, Rahime Nukhet – International Journal of Progressive Education, 2021
This study aims to first investigate the effect of the extreme response style (ERS) which could lead to an attitude-achievement paradox among the countries participating in the Trends in International Mathematics and Science Study (TIMSS 2015), and then to determine the individual- and country-level relationships between attitude and achievement…
Descriptors: Item Response Theory, Response Style (Tests), Elementary Secondary Education, Achievement Tests
Peer reviewed
Höhne, Jan Karem; Krebs, Dagmar – International Journal of Social Research Methodology, 2018
The effect of the response scale direction on response behavior is a well-known phenomenon in survey research. While there are several approaches to explaining how such response order effects occur, the literature reports mixed evidence. Furthermore, different question formats seem to vary in their susceptibility to these effects. We therefore…
Descriptors: Test Items, Response Style (Tests), Questioning Techniques, Questionnaires
Peer reviewed
Höhne, Jan Karem; Schlosser, Stephan – International Journal of Social Research Methodology, 2019
Participation in web surveys via smartphones has increased continuously in recent years, driven by growing smartphone ownership and expanding mobile Internet access. However, research has shown that smartphone respondents are frequently distracted and/or multitasking, which might affect completion and response…
Descriptors: Online Surveys, Handheld Devices, Response Rates (Questionnaires), Response Style (Tests)
Peer reviewed
Loosveldt, Geert; Wuyts, Celine; Beullens, Koen – Quality Assurance in Education: An International Perspective, 2018
Purpose: In survey methodology, it is well known that interviewers can affect the registered answers. This paper focuses on one type of interviewer effect that arises from between-interviewer differences in each interviewer's systematic effect on the answers. In the first case, the authors evaluate interviewer effects…
Descriptors: Interviews, Foreign Countries, Differences, Measurement