Showing 61 to 75 of 414 results
Peer reviewed
Rocconi, Louis M.; Dumford, Amber D.; Butler, Brenna – Research in Higher Education, 2020
Researchers, assessment professionals, and faculty in higher education increasingly depend on survey data from students to make pivotal curricular and programmatic decisions. The surveys collecting these data often require students to judge frequency (e.g., how often), quantity (e.g., how much), or intensity (e.g., how strongly). The response…
Descriptors: Student Surveys, College Students, Rating Scales, Response Style (Tests)
Peer reviewed
Zuckerbraun, Sara; Allen, Rachael Welsh; Flanigan, Tim – Field Methods, 2020
Paired interviews are used to evaluate whether a questionnaire functions properly for both the target respondent and an alternate respondent (proxy). We developed a new application of this tool to evaluate whether a Patient Experience of Care Survey (PECS) for long-term care hospitals (LTCHs) and inpatient rehabilitation facilities (IRFs)…
Descriptors: Interviews, Patients, Experience, Surveys
Peer reviewed
Lubbe, Dirk; Schuster, Christof – Journal of Educational and Behavioral Statistics, 2020
Extreme response style is the tendency of individuals to prefer the extreme categories of a rating scale irrespective of item content. It has been shown repeatedly that individual response style differences affect the reliability and validity of item responses and should, therefore, be considered carefully. To account for extreme response style…
Descriptors: Response Style (Tests), Rating Scales, Item Response Theory, Models
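As a rough illustration of the construct being modeled (not of the authors' IRT approach), extreme response style can be summarized descriptively as the share of endpoint categories a respondent selects; a minimal Python sketch, using hypothetical ratings:

```python
import numpy as np

def ers_index(responses, n_categories=5):
    """Proportion of endpoint categories (1 or n_categories) chosen per respondent.
    Descriptive screen only; not the model-based correction discussed above."""
    responses = np.asarray(responses)
    extreme = (responses == 1) | (responses == n_categories)
    return extreme.mean(axis=1)

# Hypothetical ratings on six 5-point items
ratings = [[1, 5, 1, 5, 5, 1],   # endpoint-heavy respondent
           [3, 4, 2, 3, 3, 4]]   # midscale respondent
print(ers_index(ratings))        # -> [1. 0.]
```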
Peer reviewed
Leventhal, Brian C.; Zigler, Christina K. – Measurement: Interdisciplinary Research and Perspectives, 2023
Survey score interpretations are often plagued by sources of construct-irrelevant variation, such as response styles. In this study, we propose the use of an IRTree Model to account for response styles by making use of self-report items and anchoring vignettes. Specifically, we investigate how the IRTree approach with anchoring vignettes compares…
Descriptors: Scores, Vignettes, Response Style (Tests), Item Response Theory
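IRTree approaches decompose each ordinal response into a sequence of binary pseudo-items. The sketch below shows one conventional decomposition of a 5-point response (midpoint, direction, extremity); it is a generic illustration under assumed node definitions, not the specific tree or anchoring-vignette extension studied here:

```python
import math

def irtree_pseudoitems(response):
    """Map a 5-point response (1..5) to three binary pseudo-items.
    Node 1: midpoint chosen (1 if response == 3)
    Node 2: direction (1 = agree side, i.e. 4 or 5); undefined at the midpoint
    Node 3: extremity (1 = endpoint, i.e. 1 or 5); undefined at the midpoint
    This particular tree is assumed for illustration only."""
    mid = 1 if response == 3 else 0
    direction = math.nan if mid else int(response >= 4)
    extreme = math.nan if mid else int(response in (1, 5))
    return mid, direction, extreme

for r in range(1, 6):
    print(r, irtree_pseudoitems(r))
```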
Peer reviewed
Zachary J. Roman; Patrick Schmidt; Jason M. Miller; Holger Brandt – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Careless and insufficient effort responding (C/IER) occurs when participants respond to survey instruments without considering the item content. This phenomenon adds noise to the data, leading to erroneous inferences. There are multiple approaches to identifying and accounting for C/IER in survey settings; of these, the best performing…
Descriptors: Structural Equation Models, Bayesian Statistics, Response Style (Tests), Robustness (Statistics)
Peer reviewed
Eirini M. Mitropoulou; Leonidas A. Zampetakis; Ioannis Tsaousis – Evaluation Review, 2024
Unfolding item response theory (IRT) models are important alternatives to dominance IRT models for describing response processes on self-report tests. They are commonly used in personality measures, since they can reveal differences in how test scores should be interpreted. This paper aims to gain better insight into the structure of trait…
Descriptors: Foreign Countries, Adults, Item Response Theory, Personality Traits
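Dominance and unfolding (ideal-point) models imply different item response functions: a dominance model's endorsement probability rises monotonically with the trait, while an unfolding model's probability peaks where the person's trait level is closest to the item location. A hedged numerical sketch of the two generic shapes (not the specific models fitted in the article):

```python
import numpy as np

def dominance_prob(theta, a=1.0, b=0.0):
    """2PL-style dominance model: endorsement probability increases with theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def unfolding_prob(theta, b=0.0, tau=1.0):
    """Simplified ideal-point form: probability peaks when theta is near the
    item location b. Real unfolding models (e.g., the GGUM) are more elaborate."""
    return np.exp(-((theta - b) ** 2) / (2 * tau ** 2))

theta = np.linspace(-3, 3, 7)
print(np.round(dominance_prob(theta), 2))  # rises from ~0.05 to ~0.95
print(np.round(unfolding_prob(theta), 2))  # peaks at theta == b
```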
Peer reviewed
Waters, David P.; Amarie, Dragos; Booth, Rebecca A.; Conover, Christopher; Sayre, Eleanor C. – Physical Review Physics Education Research, 2019
Conceptual inventory surveys are routinely used in education research to identify student learning needs and assess instructional practices. Students might not fully engage with these instruments because of the low stakes attached to them. This paper explores tests that can be used to estimate the percentage of students in a population who might…
Descriptors: Response Style (Tests), Science Tests, Physics, Student Reaction
Peer reviewed
Martin, Silke; Lechner, Clemens; Kleinert, Corinna; Rammstedt, Beatrice – International Journal of Social Research Methodology, 2021
Selective nonresponse can introduce bias in longitudinal surveys. The present study examines the role of cognitive skills (more specifically, literacy skills), as measured in large-scale assessment surveys, in selective nonresponse in longitudinal surveys. We assume that low-skilled respondents perceive the cognitive assessment as a higher burden…
Descriptors: Literacy, Response Style (Tests), Longitudinal Studies, Foreign Countries
Steven R. Hiner – ProQuest LLC, 2023
The purpose of this study was to determine if there were significant statistical differences between scores on constructed response and computer-scorable questions on an accelerated middle school math placement test in a large urban school district in Ohio, and to ensure that all students have an opportunity to take the test. Five questions on a…
Descriptors: Scores, Middle Schools, Mathematics Tests, Placement Tests
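Whether scores on the two question formats differ significantly is, at bottom, a mean-comparison question; a minimal sketch with a paired t-test on hypothetical per-question scores (invented values, not data or analyses from the study):

```python
import numpy as np
from scipy import stats

# Hypothetical percent-correct scores for five matched question pairs;
# the numbers are invented for illustration.
constructed_response = np.array([62.0, 58.5, 71.0, 66.5, 60.0])
computer_scorable    = np.array([68.0, 64.0, 75.5, 70.0, 67.5])

t_stat, p_value = stats.ttest_rel(constructed_response, computer_scorable)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```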
Peer reviewed
Doval, Eduardo; Delicado, Pedro – Journal of Educational and Behavioral Statistics, 2020
We propose new methods for identifying and classifying aberrant response patterns (ARPs) by means of functional data analysis. These methods take the person response function (PRF) of an individual and compare it with the pattern that would correspond to a generic individual of the same ability according to the item-person response surface. ARPs…
Descriptors: Response Style (Tests), Data Analysis, Identification, Classification
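A person response function (PRF) tracks a respondent's success rate as item difficulty increases; aberrant response patterns appear as PRFs that depart from what the respondent's ability would predict. As a rough illustration (not the functional-data-analysis machinery proposed in the article), the sketch below computes an empirical PRF by binning items by difficulty:

```python
import numpy as np

def empirical_prf(scores, difficulties, n_bins=4):
    """Proportion correct per difficulty bin (easy -> hard) for one respondent.
    Under normal responding the proportions should decline as difficulty rises;
    flat or rising segments hint at an aberrant response pattern."""
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(difficulties)
    bins = np.array_split(scores[order], n_bins)
    return [b.mean() for b in bins]

scores       = [1, 1, 1, 0, 1, 0, 1, 1]              # hypothetical 0/1 item scores
difficulties = [-2, -1.5, -1, -0.5, 0.5, 1, 1.5, 2]  # hypothetical difficulties
print(empirical_prf(scores, difficulties))           # success on the hardest items
```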
Peer reviewed
Sengül Avsar, Asiye – Measurement: Interdisciplinary Research and Perspectives, 2020
To obtain valid and reliable test scores, various test theories have been developed; one of them is nonparametric item response theory (NIRT). Mokken models are the most widely known NIRT models and are useful for small samples and short tests. The Mokken package is useful for Mokken scale analysis. An important issue about validity is…
Descriptors: Response Style (Tests), Nonparametric Statistics, Item Response Theory, Test Validity
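Mokken scale analysis rests on scalability coefficients such as Loevinger's H, which compare observed Guttman errors against those expected under item independence; the Mokken package mentioned in the abstract automates this. A simplified, hedged sketch of a scale-level H for dichotomous items:

```python
import numpy as np
from itertools import combinations

def loevinger_H(data):
    """Scale-level Loevinger H for dichotomous (0/1) item scores.
    For each item pair, a Guttman error is passing the harder item (lower mean)
    while failing the easier one; H = 1 - observed errors / expected errors
    under independence. Simplified sketch: the Mokken package also handles
    polytomous items, standard errors, and automated item selection."""
    X = np.asarray(data, dtype=float)
    n, k = X.shape
    obs = exp = 0.0
    for i, j in combinations(range(k), 2):
        easy, hard = (i, j) if X[:, i].mean() >= X[:, j].mean() else (j, i)
        obs += np.sum((X[:, easy] == 0) & (X[:, hard] == 1))
        exp += n * (1 - X[:, easy].mean()) * X[:, hard].mean()
    return 1 - obs / exp

# Simulated unidimensional 0/1 data (invented for illustration)
rng = np.random.default_rng(0)
theta = rng.normal(size=200)
items = (theta[:, None] + rng.normal(scale=1.0, size=(200, 4)) >
         np.array([-1.0, -0.3, 0.3, 1.0])).astype(int)
print(round(loevinger_H(items), 2))  # positive H; values >= 0.3 are usually deemed scalable
```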
Peer reviewed
Iannario, Maria; Manisera, Marica; Piccolo, Domenico; Zuccolotto, Paola – Sociological Methods & Research, 2020
In analyzing data from attitude surveys, it is common to consider the "don't know" responses as missing values. In this article, we present a statistical model commonly used for the analysis of responses/evaluations expressed on Likert scales and extended to take into account the presence of don't know responses. The main objective is to…
Descriptors: Response Style (Tests), Likert Scales, Statistical Analysis, Models
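The abstract does not name the model, but these authors are closely associated with the CUB family, which mixes a shifted-binomial "feeling" component with a uniform "uncertainty" component; the sketch below gives the basic CUB probability mass function as a hedged illustration, without the don't-know extension developed in the article:

```python
from math import comb

def cub_pmf(r, m, pi, xi):
    """Basic CUB model: P(R = r) for a rating r on a 1..m scale.
    Mixture of a shifted binomial 'feeling' component (weight pi) and a
    discrete uniform 'uncertainty' component (weight 1 - pi)."""
    binom_part = comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)
    return pi * binom_part + (1 - pi) / m

# A 7-point scale with mostly positive feeling (small xi shifts mass upward)
probs = [cub_pmf(r, m=7, pi=0.8, xi=0.2) for r in range(1, 8)]
print([round(p, 3) for p in probs], "sum =", round(sum(probs), 3))  # sums to 1
```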
Peer reviewed
Hill, Laura G. – International Journal of Behavioral Development, 2020
Retrospective pretests ask respondents to report after an intervention on their aptitudes, knowledge, or beliefs before the intervention. A primary reason to administer a retrospective pretest is that in some situations, program participants may over the course of an intervention revise or recalibrate their prior understanding of program content,…
Descriptors: Pretesting, Response Style (Tests), Bias, Testing Problems
Peer reviewed
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
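Two widely used preprocessing screens for insufficient effort responding are the longstring index (longest run of identical consecutive answers) and intra-individual response variability (the standard deviation of a respondent's answers). The sketch below implements these generic indices; they illustrate the family of screens discussed, not the specific procedures evaluated in the article:

```python
import numpy as np

def longstring(responses):
    """Length of the longest run of identical consecutive responses."""
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def irv(responses):
    """Intra-individual response variability: SD of one respondent's answers.
    Values near zero suggest straightlining."""
    return float(np.std(responses))

careless = [3, 3, 3, 3, 3, 3, 3, 3]
engaged  = [4, 2, 5, 1, 3, 4, 2, 5]
print(longstring(careless), round(irv(careless), 2))  # 8 0.0
print(longstring(engaged),  round(irv(engaged), 2))   # 1 1.39
```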
Peer reviewed
Liu, Yue; Liu, Hongyun – Journal of Educational and Behavioral Statistics, 2021
The prevalence and serious consequences of noneffortful responses from unmotivated examinees are well-known in educational measurement. In this study, we propose to apply an iterative purification process based on a response time residual method with fixed item parameter estimates to detect noneffortful responses. The proposed method is compared…
Descriptors: Response Style (Tests), Reaction Time, Test Items, Accuracy
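Response-time-based effort screens generally flag answers given much faster than the item would normally require. The sketch below is a greatly simplified, hedged illustration using item-level z-scores of log response times, not the residual-with-fixed-item-parameters purification procedure proposed in the article:

```python
import numpy as np

def flag_noneffortful(log_rt, z_cutoff=-2.0):
    """Flag responses whose log response time falls far below the item mean.
    log_rt: 2D array (respondents x items) of log response times.
    Returns a boolean array; True marks a suspiciously fast response."""
    log_rt = np.asarray(log_rt, dtype=float)
    z = (log_rt - log_rt.mean(axis=0)) / log_rt.std(axis=0)
    return z < z_cutoff

# Simulated log response times (invented for illustration)
rng = np.random.default_rng(1)
log_rt = rng.normal(loc=3.0, scale=0.5, size=(100, 5))
log_rt[0, :] = 0.5                      # one rapid-guessing examinee
print(flag_noneffortful(log_rt)[0])     # -> [ True  True  True  True  True]
```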