Showing 1 to 15 of 1,389 results
Peer reviewed
Nazari, Sanaz; Leite, Walter L.; Huggins-Manley, A. Corinne – Educational and Psychological Measurement, 2023
Social desirability bias (SDB) has been a major concern in educational and psychological assessments when measuring latent variables because it has the potential to introduce measurement error and bias in assessments. Person-fit indices can detect bias in the form of misfitted response vectors. The objective of this study was to compare the…
Descriptors: Social Desirability, Bias, Indexes, Goodness of Fit
Peer reviewed
Gregory M. Hurtz; Regi Mucino – Journal of Educational Measurement, 2024
The Lognormal Response Time (LNRT) model measures the speed of test-takers relative to the normative time demands of items on a test. The resulting speed parameters and model residuals are often analyzed for evidence of anomalous test-taking behavior associated with fast and poorly fitting response time patterns. Extending this model, we…
Descriptors: Student Reaction, Reaction Time, Response Style (Tests), Test Items
Peer reviewed
Nana Kim; Daniel M. Bolt – Journal of Educational and Behavioral Statistics, 2024
Some previous studies suggest that response times (RTs) on rating scale items can be informative about the content trait, but a more recent study suggests they may also be reflective of response styles. The latter result raises questions about the possible consideration of RTs for content trait estimation, as response styles are generally viewed…
Descriptors: Item Response Theory, Reaction Time, Response Style (Tests), Psychometrics
Peer reviewed
Martijn Schoenmakers; Jesper Tijmstra; Jeroen Vermunt; Maria Bolsinova – Educational and Psychological Measurement, 2024
Extreme response style (ERS), the tendency of participants to select extreme item categories regardless of the item content, has frequently been found to decrease the validity of Likert-type questionnaire results. For this reason, various item response theory (IRT) models have been proposed to model ERS and correct for it. Comparisons of these…
Descriptors: Item Response Theory, Response Style (Tests), Models, Likert Scales
Peer reviewed
Sijia Huang; Seungwon Chung; Carl F. Falk – Journal of Educational Measurement, 2024
In this study, we introduced a cross-classified multidimensional nominal response model (CC-MNRM) to account for various response styles (RS) in the presence of cross-classified data. The proposed model allows slopes to vary across items and can explore impacts of observed covariates on latent constructs. We applied a recently developed variant of…
Descriptors: Response Style (Tests), Classification, Data, Models
Peer reviewed
Lovett, Benjamin J.; Schaberg, Theresa; Nazmiyal, Ara; Spenceley, Laura M. – Journal of Psychoeducational Assessment, 2023
Data collected during psychoeducational evaluations can be compromised by response bias: clients not putting forth sufficient effort on tests, not being motivated to do well, or not being fully honest and careful when completing rating scales and contributing similar self-report data. Some of these problems apply to data from third-party…
Descriptors: School Psychologists, Evaluation, Response Style (Tests), Prevention
Peer reviewed
Hibben, Kristen Cibelli; Felderer, Barbara; Conrad, Frederick G. – International Journal of Social Research Methodology, 2022
Answering questions completely, accurately and honestly is not always the top priority for survey respondents. How might researchers motivate respondents to be more conscientious? One possibility is to elicit agreement from respondents to work hard and provide complete and accurate information. In their pioneering work in the 1970s and 80s,…
Descriptors: Online Surveys, Occupational Surveys, Response Style (Tests), Foreign Countries
Peer reviewed
Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing has led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
Peer reviewed
Chunyan Liu; Raja Subhiyah; Richard A. Feinberg – Applied Measurement in Education, 2024
Mixed-format tests that include both multiple-choice (MC) and constructed-response (CR) items have become widely used in many large-scale assessments. When an item response theory (IRT) model is used to score a mixed-format test, the unidimensionality assumption may be violated if the CR items measure a different construct from that measured by MC…
Descriptors: Test Format, Response Style (Tests), Multiple Choice Tests, Item Response Theory
Peer reviewed
Jean Philippe Décieux – Sociological Methods & Research, 2024
The risk of multitasking is high in online surveys. However, knowledge on the effects of multitasking on answer quality is sparse and based on suboptimal approaches. Research reports inconclusive results concerning the consequences of multitasking on task performance. However, studies suggest that especially sequential-multitasking activities are…
Descriptors: Online Surveys, Time Management, Handheld Devices, Learning Activities
Peer reviewed
van der Linden, Wim J.; Belov, Dmitry I. – Journal of Educational Measurement, 2023
A test of item compromise is presented which combines the test takers' responses and response times (RTs) into a statistic defined as the number of correct responses on the item for test takers with RTs flagged as suspicious. The test has null and alternative distributions belonging to the well-known family of compound binomial distributions, is…
Descriptors: Item Response Theory, Reaction Time, Test Items, Item Analysis
Peer reviewed
Schroeders, Ulrich; Schmidt, Christoph; Gnambs, Timo – Educational and Psychological Measurement, 2022
Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses such as probing questions that directly assess test-taking behavior (e.g., bogus…
Descriptors: Response Style (Tests), Surveys, Artificial Intelligence, Identification
Peer reviewed
Gisele Magarotto Machado; Nelson Hauck-Filho; Ana Celi Pallini; João Lucas Dias-Viana; Leilane Henriette Barreto Chiappetta Santana; Cristina Aparecida Nunes Medeiros da Silva; Felipe Valentini – International Journal of Testing, 2024
Our primary objective was to examine the impact of acquiescent responding on empathy measures. We selected the Affective and Cognitive Measure of Empathy (ACME) as the measure for this case study due to its composition--the affective dissonance scale consists solely of items that are semantically reversed relative to the empathy construct, while…
Descriptors: Cognitive Measurement, Empathy, Adults, Foreign Countries
Peer reviewed
Suh Keong Kwon; Guoxing Yu – Language Testing, 2024
In this study, we examined the effect of visual cues in a second language listening test on test takers' viewing behaviours and their test performance. Fifty-seven learners of English in Korea took a video-based listening test, with their eye movements recorded, and 23 of them were interviewed individually after the test. The participants viewed…
Descriptors: Foreign Countries, English (Second Language), Second Language Learning, Eye Movements
Weicong Lyu – ProQuest LLC, 2023
Item response theory (IRT) is currently the dominant methodological paradigm in educational and psychological measurement. IRT models are based on assumptions about the relationship between latent traits and observed responses, so the accuracy of the methodology depends heavily on the reasonableness of these assumptions. This dissertation consists…
Descriptors: Item Response Theory, Educational Assessment, Psychological Testing, Psychometrics