Liao, Xiangyi; Bolt, Daniel M. – Journal of Educational and Behavioral Statistics, 2021
Four-parameter models have received increasing psychometric attention in recent years, as a reduced upper asymptote for item characteristic curves can be appealing for measurement applications such as adaptive testing and person-fit assessment. However, applications can be challenging due to the large number of parameters in the model. In this…
Descriptors: Test Items, Models, Mathematics Tests, Item Response Theory
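For context on the model class the Liao and Bolt abstract refers to: a four-parameter logistic (4PL) item characteristic curve adds an upper asymptote below 1 to the familiar 3PL, which is what allows high-ability examinees to "slip" on an item. The sketch below is a generic illustration under standard IRT notation (a = discrimination, b = difficulty, c = lower asymptote, d = upper asymptote), not the authors' estimation code.

```python
import numpy as np

def icc_4pl(theta, a, b, c, d):
    """Four-parameter logistic item characteristic curve.

    a: discrimination, b: difficulty, c: lower asymptote (guessing),
    d: upper asymptote (< 1, allowing slips by high-ability examinees).
    """
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

# Example: even a very able examinee tops out at d = 0.95, not 1.0.
theta = np.array([-3.0, 0.0, 3.0])
print(icc_4pl(theta, a=1.2, b=0.0, c=0.20, d=0.95))
```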
Man, Kaiwen; Harring, Jeffrey R. – Educational and Psychological Measurement, 2019
With the development of technology-enhanced learning platforms, eye-tracking biometric indicators can be recorded simultaneously with students' item responses. In the current study, visual fixation, an essential eye-tracking indicator, is modeled to reflect the degree of test engagement when a test taker solves a set of test questions. Three…
Descriptors: Test Items, Eye Movements, Models, Regression (Statistics)
Soland, James; Kuhfeld, Megan – Educational Assessment, 2019
Considerable research has examined the use of rapid guessing measures to identify disengaged item responses. However, little is known about students who rapidly guess over the course of several tests. In this study, we use achievement test data from six administrations over three years to investigate whether rapid guessing is a stable trait-like…
Descriptors: Testing, Guessing (Tests), Reaction Time, Achievement Tests
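The rapid-guessing measures mentioned in this line of research are typically built from item response times: responses faster than an item-level threshold are flagged as disengaged, and the flags are aggregated into a per-examinee engagement index (often called response-time effort). The sketch below is one common operationalization under an assumed fixed per-item threshold; it is not the specific procedure used by Soland and Kuhfeld.

```python
import numpy as np

def response_time_effort(rt, thresholds):
    """Share of items answered slower than their rapid-guessing threshold.

    rt: (n_examinees, n_items) response times in seconds.
    thresholds: per-item rapid-guessing thresholds (e.g., a fixed 3 s or a
    fraction of the median item time); assumed here, not taken from the paper.
    """
    rapid = rt < thresholds          # True where a response looks like a rapid guess
    return 1.0 - rapid.mean(axis=1)  # 1.0 = fully engaged; lower = more rapid guessing

rt = np.array([[12.0, 2.1, 30.5],
               [1.2,  1.8,  2.4]])
print(response_time_effort(rt, thresholds=np.array([3.0, 3.0, 3.0])))
```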
Dibek, Munevver Ilgun – International Journal of Assessment Tools in Education, 2019
In the literature, response style is one of the factors causing an achievement-attitude paradox and threatening the validity of results obtained from studies. In this regard, the aim of this study is two-fold. Firstly, it attempts to determine which item response tree (IRTree) models based on the generalized linear mixed model (GLMM) approach…
Descriptors: Response Style (Tests), Achievement Tests, Elementary Secondary Education, Foreign Countries
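IRTree models of the kind the Dibek abstract mentions recode each ordinal response into binary pseudo-items corresponding to the nodes of a decision tree; a widely used decomposition for a 5-point Likert item uses midpoint, direction, and extremity nodes. The mapping below shows that generic decomposition only as an illustration; the specific trees compared in the study may differ.

```python
# Midpoint-direction-extremity recoding of a 5-point Likert response
# (1 = strongly disagree ... 5 = strongly agree) into three binary pseudo-items.
# None marks a node that is not reached for that response category.
IRTREE_MAP = {
    1: {"midpoint": 0, "direction": 0, "extreme": 1},
    2: {"midpoint": 0, "direction": 0, "extreme": 0},
    3: {"midpoint": 1, "direction": None, "extreme": None},
    4: {"midpoint": 0, "direction": 1, "extreme": 0},
    5: {"midpoint": 0, "direction": 1, "extreme": 1},
}

def to_pseudo_items(response):
    """Return the binary pseudo-item codes for one Likert response."""
    return IRTREE_MAP[response]

print(to_pseudo_items(5))  # {'midpoint': 0, 'direction': 1, 'extreme': 1}
```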
Fryer, Luke K.; Nakao, Kaori – Frontline Learning Research, 2020
Self-report is a fundamental research tool for the social sciences. Despite quantitative surveys being the workhorses of the self-report stable, few researchers question their format--often blindly using some form of Labelled Categorical Scale (Likert-type). This study presents a brief review of the current literature examining the efficacy of…
Descriptors: Measurement Techniques, Research Methodology, Surveys, Online Surveys
NWEA, 2017
This document describes the following two new student engagement metrics now included on NWEA™ MAP® Growth™ reports, and provides guidance on how to interpret and use these metrics: (1) Percent of Disengaged Responses; and (2) Estimated Impact of Disengagement on RIT. These metrics will inform educators about what percentage of items from a…
Descriptors: Achievement Tests, Achievement Gains, Test Interpretation, Reaction Time
Montagni, Ilaria; Cariou, Tanguy; Tzourio, Christophe; González-Caballero, Juan-Luis – International Journal of Social Research Methodology, 2019
Online surveys are increasingly used to investigate health conditions, especially among young people. However, this methodology presents some limitations including item nonresponse concerning sensitive and knowledge-related questions. This study aimed to suggest latent class analysis as the methodology to statistically deal with item nonresponse…
Descriptors: Online Surveys, Response Style (Tests), Multivariate Analysis, Health
Kiss, Hubert János; Selei, Adrienn – Education Economics, 2018
Success in life is determined to a large extent by school performance, which in turn depends heavily on grades obtained in exams. In this study, we investigate a particular type of exam: multiple-choice tests. More concretely, we study if patterns of correct answers in multiple-choice tests affect performance. We design an experiment to study if…
Descriptors: Multiple Choice Tests, Control Groups, Experimental Groups, Test Format
Keslair, François – OECD Publishing, 2018
This paper explores the impact of test-taking conditions on the quality of the Programme for the International Assessment of Adult Competencies (PIAAC) assessment. Interviewers record information about the room of assessment and interruptions that occurred during each interview. These observations, along with information on interviewer assignment…
Descriptors: Interviews, Testing, Educational Quality, Foreign Countries
Lin, Wei-Fang; Hewitt, Gordon J.; Videras, Julio – New Directions for Institutional Research, 2017
This chapter examines the impact of declining student response rates on surveys administered at small- and medium-sized institutions. The potential for nonresponse bias and its effects are addressed.
Descriptors: National Surveys, Small Colleges, Response Rates (Questionnaires), Response Style (Tests)
Lindner, Marlit A.; Schult, Johannes; Mayer, Richard E. – Journal of Educational Psychology, 2022
This classroom experiment investigates the effects of adding representational pictures to multiple-choice and constructed-response test items to understand the role of the response format for the multimedia effect in testing. Participants were 575 fifth- and sixth-graders who answered 28 science test items--seven items in each of four experimental…
Descriptors: Elementary School Students, Grade 5, Grade 6, Multimedia Materials
Gordon Wolf, Melissa; Nylund-Gibson, Karen; Dowdy, Erin; Furlong, Michael – Grantee Submission, 2019
This paper presents a framework for choosing between 4- and 6-point response options for use with online surveys. Using data that have both 4- and 6-point Likert-type items, we compare correlations, fit of factor analytic models, and several different reliability estimates as a way of identifying if there is empirical support for choosing a…
Descriptors: Likert Scales, Item Response Theory, Test Items, Goodness of Fit
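One of the comparisons the Gordon Wolf et al. abstract lists, reliability of the 4-point versus the 6-point version of a scale, can be illustrated with a plain Cronbach's alpha computation. The function below is a generic sketch applied to hypothetical item matrices, not the authors' analysis, which also involves factor-analytic fit and additional reliability estimates.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical matrices for the same construct measured with
# 4-point and 6-point response options (rows = respondents).
four_point = np.array([[3, 4, 3], [2, 2, 1], [4, 4, 4], [1, 2, 2]])
six_point  = np.array([[5, 6, 5], [2, 3, 2], [6, 6, 5], [1, 2, 2]])
print(cronbach_alpha(four_point), cronbach_alpha(six_point))
```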
Zhang, Dongbo; Koda, Keiko – Asian-Pacific Journal of Second and Foreign Language Education, 2017
Word Associates Format (WAF) tests are often used to measure second language learners' vocabulary depth with a focus on their network knowledge. Yet, there were often many variations in the specific forms of the tests and the ways they were used, which tended to have an impact on learners' response behaviors and, more importantly, the psychometric…
Descriptors: Language Tests, Vocabulary Development, Second Language Learning, Test Construction
Koop, Gregory J.; Criss, Amy H. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
Advances in theories of memory are hampered by insufficient metrics for measuring memory. The goal of this paper is to further the development of model-independent, sensitive empirical measures of the recognition decision process. We evaluate whether metrics from continuous mouse tracking, or response dynamics, uniquely identify response bias and…
Descriptors: Recognition (Psychology), Response Style (Tests), Mnemonics, Familiarity
Soland, James – Educational Measurement: Issues and Practice, 2019
As computer-based tests become more common, there is a growing wealth of metadata related to examinees' response processes, which include solution strategies, concentration, and operating speed. One common type of metadata is item response time. While response times have been used extensively to improve estimates of achievement, little work…
Descriptors: Test Items, Item Response Theory, Metadata, Self Efficacy