Showing 1 to 15 of 1,397 results
Peer reviewed
Direct link
Patrik Havan; Michal Kohút; Peter Halama – International Journal of Testing, 2025
Acquiescence is the tendency of participants to shift their responses toward agreement. Lechner et al. (2019) introduced the following mechanisms of acquiescence: social deference and cognitive processing. We added their interaction into a theoretical framework. The sample consists of 557 participants. We found a significant, medium-strong relationship…
Descriptors: Cognitive Processes, Attention, Difficulty Level, Reflection
Peer reviewed
Direct link
Karly S. Ford; Megan Holland Iantosca; Leandra Cate – Educational Researcher, 2025
In scholarly research, racial categories are typically taken for granted. However, race categories vary over time and geography and reflect the social beliefs of the people who use them. Informed by quantitative critical race theory analysis, we interrogate how race categories align (or not) with 24,000 U.S. higher education students' responses to…
Descriptors: College Students, Self Concept, Racial Identification, Classification
Peer reviewed
Direct link
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Educational and Psychological Measurement, 2023
Social desirability bias (SDB) has been a major concern in educational and psychological assessments when measuring latent variables because it has the potential to introduce measurement error and bias in assessments. Person-fit indices can detect bias in the form of misfitted response vectors. The objective of this study was to compare the…
Descriptors: Social Desirability, Bias, Indexes, Goodness of Fit
Peer reviewed
Direct link
Gregory M. Hurtz; Regi Mucino – Journal of Educational Measurement, 2024
The Lognormal Response Time (LNRT) model measures the speed of test-takers relative to the normative time demands of items on a test. The resulting speed parameters and model residuals are often analyzed for evidence of anomalous test-taking behavior associated with fast and poorly fitting response time patterns. Extending this model, we…
Descriptors: Student Reaction, Reaction Time, Response Style (Tests), Test Items
Peer reviewed
Direct link
Nana Kim; Daniel M. Bolt – Journal of Educational and Behavioral Statistics, 2024
Some previous studies suggest that response times (RTs) on rating scale items can be informative about the content trait, but a more recent study suggests they may also be reflective of response styles. The latter result raises questions about the possible consideration of RTs for content trait estimation, as response styles are generally viewed…
Descriptors: Item Response Theory, Reaction Time, Response Style (Tests), Psychometrics
Peer reviewed
Direct link
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
Peer reviewed
Direct link
Sijia Huang; Seungwon Chung; Carl F. Falk – Journal of Educational Measurement, 2024
In this study, we introduced a cross-classified multidimensional nominal response model (CC-MNRM) to account for various response styles (RS) in the presence of cross-classified data. The proposed model allows slopes to vary across items and can explore impacts of observed covariates on latent constructs. We applied a recently developed variant of…
Descriptors: Response Style (Tests), Classification, Data, Models
Peer reviewed
Download full text (PDF on ERIC)
Yanxuan Qu; Sandip Sinharay – ETS Research Report Series, 2024
The goal of this paper is to find better ways to estimate the internal consistency reliability of scores on tests with a specific type of design that are often encountered in practice: tests with constructed-response items clustered into sections that are not parallel or tau-equivalent, and one of the sections has only one item. To estimate the…
Descriptors: Test Reliability, Essay Tests, Construct Validity, Error of Measurement
Peer reviewed
Direct link
Lovett, Benjamin J.; Schaberg, Theresa; Nazmiyal, Ara; Spenceley, Laura M. – Journal of Psychoeducational Assessment, 2023
Data collected during psychoeducational evaluations can be compromised by response bias: clients not putting forth sufficient effort on tests, not being motivated to do well, or not being fully honest and careful when completing rating scales and contributing similar self-report data. Some of these problems apply to data from third-party…
Descriptors: School Psychologists, Evaluation, Response Style (Tests), Prevention
Peer reviewed
Direct link
Hibben, Kristen Cibelli; Felderer, Barbara; Conrad, Frederick G. – International Journal of Social Research Methodology, 2022
Answering questions completely, accurately and honestly is not always the top priority for survey respondents. How might researchers motivate respondents to be more conscientious? One possibility is to elicit agreement from respondents to work hard and provide complete and accurate information. In their pioneering work in the 1970s and 80s,…
Descriptors: Online Surveys, Occupational Surveys, Response Style (Tests), Foreign Countries
Peer reviewed
Direct link
Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing has led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
Peer reviewed
Direct link
Françoise Guillemot; Florence Lacroix; Isabelle Nocus – Journal of Research in Special Educational Needs, 2025
The attitude of teachers towards inclusive education is a key issue for the success of inclusive education. Many studies have been designed to assess teachers' attitudes, but none have looked at the bias caused by teachers' non-response to questionnaires on their attitudes. Non-response biases are difficult to identify because it is impossible to…
Descriptors: Questionnaires, Teacher Attitudes, Bias, Response Style (Tests)
Peer reviewed
Direct link
Chunyan Liu; Raja Subhiyah; Richard A. Feinberg – Applied Measurement in Education, 2024
Mixed-format tests that include both multiple-choice (MC) and constructed-response (CR) items have become widely used in many large-scale assessments. When an item response theory (IRT) model is used to score a mixed-format test, the unidimensionality assumption may be violated if the CR items measure a different construct from that measured by MC…
Descriptors: Test Format, Response Style (Tests), Multiple Choice Tests, Item Response Theory
Peer reviewed
Direct link
Jean Philippe Décieux – Sociological Methods & Research, 2024
The risk of multitasking is high in online surveys. However, knowledge on the effects of multitasking on answer quality is sparse and based on suboptimal approaches. Research reports inconclusive results concerning the consequences of multitasking on task performance. However, studies suggest that especially sequential-multitasking activities are…
Descriptors: Online Surveys, Time Management, Handheld Devices, Learning Activities
Peer reviewed
Direct link
van der Linden, Wim J.; Belov, Dmitry I. – Journal of Educational Measurement, 2023
A test of item compromise is presented which combines the test takers' responses and response times (RTs) into a statistic defined as the number of correct responses on the item for test takers with RTs flagged as suspicious. The test has null and alternative distributions belonging to the well-known family of compound binomial distributions, is…
Descriptors: Item Response Theory, Reaction Time, Test Items, Item Analysis