Publication Date
In 2025: 1
Since 2024: 29
Since 2021 (last 5 years): 93
Since 2016 (last 10 years): 188
Since 2006 (last 20 years): 374
Descriptor
Response Style (Tests): 675
Foreign Countries: 130
Higher Education: 105
Test Items: 103
Test Validity: 93
Item Response Theory: 82
College Students: 81
Questionnaires: 80
Test Format: 67
Models: 65
Test Reliability: 57
Author
Bolt, Daniel M.: 7
Greve, Kevin W.: 5
Soland, James: 5
Wise, Steven L.: 5
Bagby, R. Michael: 4
Ben-Porath, Yossef S.: 4
Cheng, Ying: 4
Holden, Ronald R.: 4
Höhne, Jan Karem: 4
Richardson, John T. E.: 4
Rios, Joseph A.: 4
Audience
Researchers: 19
Practitioners: 9
Teachers: 3
Administrators: 2
Counselors: 1
Location
Germany: 22
Canada: 15
Australia: 13
China: 9
United Kingdom: 9
Taiwan: 8
South Korea: 7
United States: 7
France: 6
Italy: 6
Japan: 6
Rios, Joseph A.; Guo, Hongwen; Mao, Liyang; Liu, Ou Lydia – International Journal of Testing, 2017
When examinees' test-taking motivation is questionable, practitioners must determine whether careless responding is of practical concern and if so, decide on the best approach to filter such responses. As there has been insufficient research on these topics, the objectives of this study were to: a) evaluate the degree of underestimation in the…
Descriptors: Response Style (Tests), Scores, Motivation, Computation
Rushkin, Ilia; Chuang, Isaac; Tingley, Dustin – Journal of Learning Analytics, 2019
Each time a learner in a self-paced online course seeks to answer an assessment question, it takes some time for the student to read the question and arrive at an answer to submit. If multiple attempts are allowed, and the first answer is incorrect, it takes some time to provide a second answer. Here we study the distribution of such…
Descriptors: Online Courses, Response Style (Tests), Models, Learner Engagement
Kim, Sooyeon; Moses, Tim – ETS Research Report Series, 2018
The purpose of this study is to assess the impact of aberrant responses on the estimation accuracy in forced-choice format assessments. To that end, a wide range of aberrant response behaviors (e.g., fake, random, or mechanical responses) affecting upwards of 20%–30% of the responses was manipulated under the multi-unidimensional pairwise…
Descriptors: Measurement Techniques, Response Style (Tests), Accuracy, Computation
Höhne, Jan Karem; Krebs, Dagmar – International Journal of Social Research Methodology, 2018
The effect of the response scale direction on response behavior is a well-known phenomenon in survey research. While there are several approaches to explaining how such response order effects occur, the literature reports mixed evidence. Furthermore, different question formats seem to vary in their susceptibility to these effects. We therefore…
Descriptors: Test Items, Response Style (Tests), Questioning Techniques, Questionnaires
Zehner, Fabian; Eichmann, Beate; Deribo, Tobias; Harrison, Scott; Bengs, Daniel; Andersen, Nico; Hahnel, Carolin – Journal of Educational Data Mining, 2021
The NAEP EDM Competition required participants to predict efficient test-taking behavior based on log data. This paper describes our top-down approach for engineering features by means of psychometric modeling, aiming at machine learning for the predictive classification task. For feature engineering, we employed, among others, the Log-Normal…
Descriptors: National Competency Tests, Engineering Education, Data Collection, Data Analysis
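The Log-Normal response time model referenced in this entry decomposes each log response time into a person speed parameter and an item time-intensity parameter. A minimal, moment-based sketch of that decomposition — not the authors' implementation, and with the toy data and the fixed-discrimination simplification both assumed — looks like this:

```python
import math

# Hypothetical log data: response times in seconds per (person, item).
times = {
    ("p1", "i1"): 12.0, ("p1", "i2"): 30.0, ("p1", "i3"): 8.0,
    ("p2", "i1"): 45.0, ("p2", "i2"): 70.0, ("p2", "i3"): 25.0,
    ("p3", "i1"): 20.0, ("p3", "i2"): 40.0, ("p3", "i3"): 15.0,
}

items = sorted({i for _, i in times})
persons = sorted({p for p, _ in times})

# Item time intensity beta_i: mean log response time on item i.
beta = {i: sum(math.log(t) for (p, j), t in times.items() if j == i)
           / sum(1 for (p, j) in times if j == i)
        for i in items}

# Person speed tau_p: average of (beta_i - log t_pi); higher means faster.
tau = {p: sum(beta[j] - math.log(t) for (q, j), t in times.items() if q == p)
          / sum(1 for (q, j) in times if q == p)
       for p in persons}
```

With complete data the speed estimates center on zero, so residuals like `beta[j] - math.log(t)` can feed directly into feature engineering for a classifier.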
Domínguez, César; López-Cuadrado, Javier; Armendariz, Anaje; Jaime, Arturo; Heras, Jónathan; Pérez, Tomás A. – Computer Assisted Language Learning, 2019
In this work, we explore the differences between proctored and unproctored Internet administration for a Basque language low-stakes test considering demographic factors such as age, gender, and knowledge level in the subject. To this aim, we have developed an ad hoc application that allows us to establish a set of filters and techniques that…
Descriptors: Language Tests, Computer Assisted Testing, Supervision, Internet
Patton, Jeffrey M.; Cheng, Ying; Hong, Maxwell; Diao, Qi – Journal of Educational and Behavioral Statistics, 2019
In psychological and survey research, the prevalence and serious consequences of careless responses from unmotivated participants are well known. In this study, we propose to iteratively detect careless responders and cleanse the data by removing their responses. The careless responders are detected using person-fit statistics. In two simulation…
Descriptors: Test Items, Response Style (Tests), Identification, Computation
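The iterative detect-and-cleanse idea described here can be illustrated with a deliberately simple person-fit statistic — Guttman error counts rather than the statistics used in the study — and invented data and threshold:

```python
def guttman_errors(pattern, item_order):
    """Count item pairs where an easier item is wrong but a harder one is right."""
    resp = [pattern[i] for i in item_order]  # ordered easiest -> hardest
    return sum(1 for a in range(len(resp)) for b in range(a + 1, len(resp))
               if resp[a] == 0 and resp[b] == 1)

def iterative_cleanse(data, threshold):
    """data: {person: [0/1 responses]}. Flag and remove misfitting persons,
    recomputing item difficulties from the cleansed sample each round."""
    kept = dict(data)
    flagged = []
    while True:
        n_items = len(next(iter(kept.values())))
        p_correct = [sum(r[i] for r in kept.values()) / len(kept)
                     for i in range(n_items)]
        order = sorted(range(n_items), key=lambda i: -p_correct[i])
        errors = {p: guttman_errors(r, order) for p, r in kept.items()}
        worst = max(errors, key=errors.get)
        if errors[worst] < threshold or len(kept) <= 2:
            return kept, flagged
        flagged.append(worst)
        del kept[worst]

data = {
    "A": [1, 1, 1, 0, 0],
    "B": [1, 1, 0, 0, 0],
    "C": [0, 0, 0, 1, 1],  # reversed pattern: candidate careless responder
    "D": [1, 1, 1, 1, 0],
    "E": [1, 0, 0, 0, 0],
}
kept, flagged = iterative_cleanse(data, threshold=3)
```

Recomputing difficulties after each removal is the point of the iteration: a careless responder distorts the item statistics that the fit statistic itself depends on.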
Höhne, Jan Karem; Schlosser, Stephan – International Journal of Social Research Methodology, 2019
Participation in web surveys via smartphones has increased continuously in recent years. The reasons for this increase are a growing proportion of smartphone owners and an increase in mobile Internet access. However, research has shown that smartphone respondents are frequently distracted and/or multitasking, which might affect completion and response…
Descriptors: Online Surveys, Handheld Devices, Response Rates (Questionnaires), Response Style (Tests)
Terentev, Evgeniy; Maloshonok, Natalia – International Journal of Social Research Methodology, 2019
This paper aims to explore the response-order effects for rating questions presented in item-by-item and grid formats. It was hypothesized that the primacy effect occurs for both formats of questions, and that this effect is dependent on age, education, and type of device used for responding to questions. Two randomized experiments were conducted…
Descriptors: Questioning Techniques, Test Format, Online Courses, Student Surveys
Loosveldt, Geert; Wuyts, Celine; Beullens, Koen – Quality Assurance in Education: An International Perspective, 2018
Purpose: In survey methodology, it is well known that interviewers can have an impact on the registered answers. This paper focuses on one type of interviewer effect that arises from differences between interviewers in their systematic effects on the answers. In the first case, the authors evaluate interviewer effects…
Descriptors: Interviews, Foreign Countries, Differences, Measurement
Kam, Chester Chun Seng – Sociological Methods & Research, 2018
The item wording (or keying) effect is respondents' differential response style to positively and negatively worded items. Despite decades of research, the nature of the effect is still unclear. This article proposes a potential reason; namely, that the item wording effect is scale-specific, and thus findings are applicable only to a particular…
Descriptors: Response Style (Tests), Test Items, Language Usage, College Students
Faddar, Jerich; Vanhoof, Jan; De Maeyer, Sven – School Effectiveness and School Improvement, 2018
In order to enhance the quality of education, school self-evaluation (SSE) has become a key strategy in many educational systems. During an SSE process, schools describe and evaluate their own functioning, often by administering questionnaires among teachers. However, it is unknown to what extent SSE questionnaire results are distorted by…
Descriptors: Self Evaluation (Groups), Motivation, Social Desirability, Questionnaires
Gorgun, Guher; Bulut, Okan – Educational and Psychological Measurement, 2021
In low-stakes assessments, some students may not reach the end of the test and leave some items unanswered due to various reasons (e.g., lack of test-taking motivation, poor time management, and test speededness). Not-reached items are often treated as incorrect or not-administered in the scoring process. However, when the proportion of…
Descriptors: Scoring, Test Items, Response Style (Tests), Mathematics Tests
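The scoring issue this entry raises — treating not-reached items as incorrect versus as not-administered — can be made concrete with a toy sketch (the scoring function and response vector are illustrative, not the authors' method):

```python
def proportion_score(responses, not_reached="incorrect"):
    """responses: list of 1 (correct), 0 (incorrect), or None (not reached).

    'incorrect':        None counts as 0; denominator is all items.
    'not_administered': None items are excluded from the score entirely.
    """
    if not_reached == "incorrect":
        return sum(r or 0 for r in responses) / len(responses)
    attempted = [r for r in responses if r is not None]
    return sum(attempted) / len(attempted)

# A test taker who reached 4 of 6 items and answered 3 correctly:
resp = [1, 1, 0, 1, None, None]
```

The gap between the two treatments grows with the proportion of not-reached items, which is why the choice matters most for unmotivated or slow test takers.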
Lundgren, Erik; Eklöf, Hanna – Educational Research and Evaluation, 2020
The present study used process data from a computer-based problem-solving task as indications of behavioural level of test-taking effort, and explored how behavioural item-level effort related to overall test performance and self-reported effort. Variables were extracted from raw process data and clustered. Four distinct clusters were obtained and…
Descriptors: Computer Assisted Testing, Problem Solving, Response Style (Tests), Test Items
Ilgun Dibek, Munevver – Eurasian Journal of Educational Research, 2020
Purpose: Cross-cultural comparisons based on ordinal Likert-type rating scales have been threatened by response styles, which are systematic tendencies to respond to items regardless of item content. This study therefore aimed to investigate the effect of extreme response style and acquiescence response style on TIMSS 2015 data. Method: The sample of…
Descriptors: Achievement Tests, Foreign Countries, International Assessment, Mathematics Achievement
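Count-based indices are a common first pass at the two styles named in this abstract; a minimal sketch (with an invented respondent on a 4-point scale) flags endpoint use for extreme response style and content-independent agreement for acquiescence:

```python
def response_style_indices(responses, n_categories=4):
    """Simple count-based indices for a vector of Likert responses (1..k).

    ERS: share of answers in the extreme categories (1 or k).
    ARS: share of answers above the scale midpoint, regardless of content.
    """
    k = n_categories
    ers = sum(1 for r in responses if r in (1, k)) / len(responses)
    ars = sum(1 for r in responses if r > (k + 1) / 2) / len(responses)
    return ers, ars

# A respondent who mostly picks the endpoints of a 4-point scale:
ers, ars = response_style_indices([1, 4, 4, 1, 4, 2], n_categories=4)
```

Indices like these are usually computed over heterogeneous items, so that a high ERS or ARS reflects style rather than trait content.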