Showing 1 to 15 of 171 results
Peer reviewed
Hung-Yu Huang – Educational and Psychological Measurement, 2025
The use of discrete categorical formats to assess psychological traits has a long-standing tradition that is deeply embedded in item response theory models. The increasing prevalence and endorsement of computer- or web-based testing has led to greater focus on continuous response formats, which offer numerous advantages in both respondent…
Descriptors: Response Style (Tests), Psychological Characteristics, Item Response Theory, Test Reliability
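As a concrete illustration of the continuous response formats this abstract discusses, here is a minimal sketch of simulating bounded (0-1 slider) responses under a simple latent-trait model. The model and all names are assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_slider_responses(theta, a, b, noise_sd=0.5):
    """Simulate bounded continuous (0-1 slider) responses as a
    logistic transform of a noisy latent evaluation a*(theta - b)."""
    theta = np.asarray(theta)[:, None]           # persons as rows
    eta = a * (theta - b)                        # persons x items
    eta += rng.normal(0.0, noise_sd, eta.shape)  # response noise
    return 1.0 / (1.0 + np.exp(-eta))            # squash into (0, 1)

theta = rng.normal(size=200)       # latent trait for 200 persons
a = np.array([1.2, 0.8, 1.5])      # item discriminations
b = np.array([-0.5, 0.0, 0.7])     # item locations
Z = simulate_slider_responses(theta, a, b)
print(Z.shape, Z.min().round(3), Z.max().round(3))
```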
Peer reviewed
Kamil Jaros; Aleksandra Gajda – Journal of Psychoeducational Assessment, 2024
Stage fright is a natural and very common phenomenon that affects everyone who must present themselves in public. However, it has a negative impact on the health and voice emission of children and adolescents, which is why it is important to study and measure it. Unfortunately, there are no appropriate tools for examining public presentation…
Descriptors: Anxiety, Fear, Public Speaking, Children
Peer reviewed
Danielle R. Blazek; Jason T. Siegel – International Journal of Social Research Methodology, 2024
Social scientists have long agreed that satisficing behavior increases error and reduces the validity of survey data. There have been numerous reviews on detecting satisficing behavior, but preventing this behavior has received less attention. The current narrative review provides empirically supported guidance on preventing satisficing by…
Descriptors: Response Style (Tests), Responses, Reaction Time, Test Interpretation
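The descriptors here pair satisficing with reaction time, one of the standard detection signals. A minimal sketch of that idea, with the threshold and data layout chosen purely for illustration:

```python
import numpy as np

def flag_speeders(response_times, min_seconds_per_item=2.0):
    """Flag respondents whose median per-item response time is
    implausibly fast (a common, if crude, satisficing screen)."""
    rt = np.asarray(response_times, dtype=float)  # persons x items
    median_rt = np.median(rt, axis=1)
    return median_rt < min_seconds_per_item

rt = np.array([[5.1, 4.2, 6.0],
               [0.9, 1.1, 0.8],    # likely speeding
               [3.5, 2.9, 4.4]])
print(flag_speeders(rt))           # [False  True False]
```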
Peer reviewed
Zou, Tongtong; Bolt, Daniel M. – Measurement: Interdisciplinary Research and Perspectives, 2023
Person misfit and person reliability indices in item response theory (IRT) can play an important role in evaluating the validity of a test or survey instrument at the respondent level. Prior empirical comparisons of these indices have been applied to binary item response data and suggest that the two types of indices return very similar results…
Descriptors: Item Response Theory, Rating Scales, Response Style (Tests), Measurement
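For readers unfamiliar with person-misfit indices for binary data, a minimal sketch of one standard index, the standardized log-likelihood statistic l_z under a 2PL with known item parameters (the paper compares such indices more broadly; this is just one instance):

```python
import numpy as np

def lz_statistic(u, theta, a, b):
    """Standardized log-likelihood person-fit statistic (l_z) for
    binary responses under a 2PL with known item parameters."""
    P = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    l0 = np.sum(u * np.log(P) + (1 - u) * np.log(1 - P))
    e = np.sum(P * np.log(P) + (1 - P) * np.log(1 - P))
    v = np.sum(P * (1 - P) * np.log(P / (1 - P)) ** 2)
    return (l0 - e) / np.sqrt(v)   # large negative values suggest misfit

a = np.array([1.0, 1.3, 0.8, 1.1])
b = np.array([-1.0, 0.0, 0.5, 1.2])
u = np.array([1, 1, 0, 0])         # one person's item responses
print(lz_statistic(u, theta=0.2, a=a, b=b))
```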
Peer reviewed
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
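The sub-decision decomposition this abstract mentions can be made concrete. Below is one common IRTree recoding of a 5-point rating into three binary pseudo-items (midpoint, direction, extremity); the exact tree in the paper may differ.

```python
import numpy as np

# One common IRTree decomposition of a 5-point rating (1..5) into three
# binary pseudo-items: M = midpoint chosen, D = direction (agree side),
# E = extreme category chosen. np.nan marks branches never reached.
IRTREE_MAP = {
    1: (0, 0, 1),
    2: (0, 0, 0),
    3: (1, np.nan, np.nan),
    4: (0, 1, 0),
    5: (0, 1, 1),
}

def expand_to_pseudo_items(responses):
    """Recode raw ratings into the (M, D, E) pseudo-item matrix that
    IRTree models fit with separate traits per node."""
    return np.array([IRTREE_MAP[r] for r in responses])

print(expand_to_pseudo_items([1, 3, 5, 4]))
```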
Peer reviewed
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
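The paper's contribution is the second, between-subtest level of adaptation; the within-subtest step it builds on is standard CAT item selection. A minimal sketch of that step, selecting the unused item with maximum Fisher information under a 2PL:

```python
import numpy as np

def next_item(theta_hat, a, b, administered):
    """Pick the unused item with maximum Fisher information at the
    current theta estimate (2PL): I(theta) = a^2 * P * (1 - P)."""
    P = 1.0 / (1.0 + np.exp(-a * (theta_hat - b)))
    info = a ** 2 * P * (1 - P)
    info[list(administered)] = -np.inf   # mask items already given
    return int(np.argmax(info))

a = np.array([1.0, 1.5, 0.7, 1.2])
b = np.array([0.0, 0.3, -0.8, 1.0])
print(next_item(theta_hat=0.25, a=a, b=b, administered={0}))
```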
Peer reviewed
Faran, Yifat; Zanbar, Lea – International Journal of Social Research Methodology, 2019
The present study is the first to examine empirically whether required fields in online surveys impair reliability and distort response patterns, as participants forced to respond to all items may provide arbitrary answers. Two hundred and thirteen participants completed a survey consisting of six questionnaires testing personal and social issues and…
Descriptors: Online Surveys, Test Reliability, Response Style (Tests), Questionnaires
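The reliability comparison such a study turns on is typically internal consistency. A minimal sketch of Cronbach's alpha, the standard such coefficient (the simulated data here are purely illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances
    over variance of the total score)."""
    X = np.asarray(items, dtype=float)        # persons x items
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(300, 1))
X = true_score + rng.normal(scale=0.8, size=(300, 6))  # 6 parallel items
print(round(cronbach_alpha(X), 3))
```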
Peer reviewed
Zachary J. Roman; Patrick Schmidt; Jason M. Miller; Holger Brandt – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Careless and insufficient effort responding (C/IER) is a situation where participants respond to survey instruments without considering the item content. This phenomenon adds noise to the data, leading to erroneous inferences. There are multiple approaches to identifying and accounting for C/IER in survey settings; of these approaches, the best performing…
Descriptors: Structural Equation Models, Bayesian Statistics, Response Style (Tests), Robustness (Statistics)
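The paper's approach is a full Bayesian structural equation mixture; the core mixture idea can be sketched far more simply. Below, a per-respondent posterior probability of carelessness, assuming careless respondents answer uniformly at random while attentive respondents follow model-implied category probabilities (all numbers hypothetical):

```python
import numpy as np

def careless_posterior(responses, attentive_probs, n_categories=5, prior=0.05):
    """Posterior probability of careless responding in a two-class
    mixture: careless = uniform over categories, attentive = follows
    model-implied category probabilities."""
    loglik_careless = len(responses) * np.log(1.0 / n_categories)
    loglik_attentive = sum(np.log(attentive_probs[i][r])
                           for i, r in enumerate(responses))
    log_num = np.log(prior) + loglik_careless
    log_den = np.logaddexp(log_num, np.log(1 - prior) + loglik_attentive)
    return np.exp(log_num - log_den)

# Hypothetical model-implied probabilities for 3 items, 5 categories each
probs = [np.array([0.05, 0.10, 0.20, 0.40, 0.25])] * 3
print(careless_posterior([3, 3, 4], probs))   # plausible answers -> low
print(careless_posterior([0, 4, 0], probs))   # erratic answers -> higher
```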
Peer reviewed
Eirini M. Mitropoulou; Leonidas A. Zampetakis; Ioannis Tsaousis – Evaluation Review, 2024
Unfolding item response theory (IRT) models are important alternatives to dominance IRT models for describing the response processes on self-report tests. They are widely used in personality measurement, where they can lead to different interpretations of test scores. This paper aims to gain a better insight into the structure of trait…
Descriptors: Foreign Countries, Adults, Item Response Theory, Personality Traits
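The dominance-versus-unfolding contrast is easy to see side by side: dominance models are monotone in the trait, unfolding (ideal-point) models are single-peaked at the item location. A simplified sketch (the peaked kernel below is illustrative, not the full GGUM):

```python
import numpy as np

def dominance_prob(theta, a, b):
    """Dominance (2PL-style) model: endorsement rises monotonically
    with the trait."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def unfolding_prob(theta, delta, tau=1.0):
    """Ideal-point (unfolding) model: endorsement peaks when the
    person's trait level matches the item location delta."""
    return np.exp(-((theta - delta) ** 2) / (2 * tau ** 2))

theta = np.linspace(-3, 3, 7)
print(dominance_prob(theta, a=1.0, b=0.0).round(2))  # monotone increasing
print(unfolding_prob(theta, delta=0.0).round(2))     # peaks at delta
```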
Peer reviewed
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
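One of the preprocessing procedures this literature routinely uses is the longstring index. A minimal sketch:

```python
def longstring(responses):
    """Longstring IER index: length of the longest run of identical
    consecutive responses (high values suggest straightlining)."""
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

print(longstring([3, 3, 3, 3, 2, 4]))   # 4
print(longstring([1, 2, 3, 4, 5, 1]))   # 1
```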
Peer reviewed
Steinmann, Isa; Sánchez, Daniel; van Laar, Saskia; Braeken, Johan – Assessment in Education: Principles, Policy & Practice, 2022
Questionnaire scales that are mixed-worded, i.e., include both positively and negatively worded items, often suffer from issues like low reliability and more complex latent structures than intended. Part of the problem might be that some respondents fail to respond consistently to the mixed-worded items. We investigated the prevalence and impact of…
Descriptors: Response Style (Tests), Test Items, Achievement Tests, Foreign Countries
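One simple way to operationalize the inconsistent responding the abstract describes: reverse-code the negatively worded items and compare the two subscale means per respondent. A sketch under that assumption (cutoffs and data are illustrative):

```python
import numpy as np

def inconsistency_score(X, neg_items, scale_max=5, scale_min=1):
    """For a mixed-worded scale, reverse-code the negatively worded
    items and return |mean(positive) - mean(reversed negative)|;
    large values suggest inconsistent responding."""
    X = np.asarray(X, dtype=float)                 # persons x items
    R = X.copy()
    R[:, neg_items] = scale_max + scale_min - R[:, neg_items]
    pos_items = [j for j in range(X.shape[1]) if j not in neg_items]
    return np.abs(R[:, pos_items].mean(axis=1) - R[:, neg_items].mean(axis=1))

X = [[5, 5, 1, 1],    # consistent (items 2, 3 negatively worded)
     [5, 5, 5, 5]]    # agrees with everything -> inconsistent
print(inconsistency_score(X, neg_items=[2, 3]))    # [0. 4.]
```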
Peer reviewed
Adrian Adams; Lauren Barth-Cohen – CBE - Life Sciences Education, 2024
In undergraduate research settings, students are likely to encounter anomalous data, that is, data that do not meet their expectations. Most of the research that directly or indirectly captures the role of anomalous data in research settings uses post-hoc reflective interviews or surveys. These data collection approaches focus on recall of past…
Descriptors: Undergraduate Students, Physics, Science Instruction, Laboratory Experiments
Peer reviewed
Silber, Henning; Danner, Daniel; Rammstedt, Beatrice – International Journal of Social Research Methodology, 2019
This study aims to assess whether respondent inattentiveness causes systematic and unsystematic measurement error that influences survey data quality. To determine the impact of (in)attentiveness on the reliability and validity of target measures, we compared respondents from a German online survey (N = 5205) who had passed two attention checks…
Descriptors: Foreign Countries, Test Validity, Test Reliability, Attention
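The attention checks referenced here are often instructed-response items. A minimal sketch of filtering on them (item positions and required answers are hypothetical):

```python
import numpy as np

def passed_attention_checks(X, check_items, required_answers):
    """True for respondents who answered every instructed-response
    item (e.g., 'please select "agree"') exactly as directed."""
    X = np.asarray(X)                               # persons x items
    return np.all(X[:, check_items] == required_answers, axis=1)

X = [[4, 2, 4, 5],
     [4, 2, 1, 5]]         # fails the check at item 2
print(passed_attention_checks(X, check_items=[2], required_answers=[4]))
```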
Peer reviewed
Little, Todd D.; Chang, Rong; Gorrall, Britt K.; Waggenspack, Luke; Fukuda, Eriko; Allen, Patricia J.; Noam, Gil G. – International Journal of Behavioral Development, 2020
We revisit the merits of the retrospective pretest-posttest (RPP) design for repeated-measures research. The underutilized RPP method asks respondents to rate survey items twice during the same posttest measurement occasion from two specific frames of reference: "now" and "then." Individuals first report their current attitudes…
Descriptors: Pretesting, Alternative Assessment, Program Evaluation, Evaluation Methods
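The analysis an RPP design feeds is straightforward: per-person change scores from the "then" and "now" ratings collected at the same occasion, followed by a paired test. A sketch with simulated data (all numbers illustrative):

```python
import numpy as np
from scipy.stats import ttest_rel

# Retrospective pretest-posttest: at posttest, each person rates the
# item twice, once for "now" and once retrospectively for "then".
rng = np.random.default_rng(2)
then = rng.normal(3.0, 0.8, size=100)        # retrospective pre ratings
now = then + rng.normal(0.5, 0.5, size=100)  # ratings of current attitude

change = now - then                          # per-person change score
print(change.mean().round(2))
print(ttest_rel(now, then))                  # paired test of the change
```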
Peer reviewed
PDF on ERIC
Lang, David – Grantee Submission, 2019
Whether high-stakes exams such as the SAT or College Board AP exams should penalize incorrect answers is a controversial question. In this paper, we document that penalty functions can have differential effects depending on a student's risk tolerance. Moreover, the literature shows that risk aversion tends to vary with other areas of concern, such as…
Descriptors: High Stakes Tests, Risk, Item Response Theory, Test Bias
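The risk-tolerance argument rests on simple expected-value arithmetic under formula scoring. A sketch, using the 1/(k-1) wrong-answer penalty of the pre-2016 SAT as the worked case:

```python
def expected_guess_score(p_correct, n_options):
    """Expected score of guessing under formula scoring, where a wrong
    answer costs 1/(n_options - 1) points (as on the pre-2016 SAT)."""
    penalty = 1.0 / (n_options - 1)
    return p_correct * 1.0 - (1 - p_correct) * penalty

# A blind guess on a 5-option item is exactly score-neutral...
print(expected_guess_score(0.20, 5))   # 0.0
# ...while eliminating one option already makes guessing favorable,
# yet a risk-averse student may still omit the item.
print(expected_guess_score(0.25, 5))   # 0.0625
```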