Showing 1 to 15 of 213 results
Peer reviewed
Cornelia Eva Neuert – Sociological Methods & Research, 2024
The quality of data in surveys is affected by response burden and questionnaire length. With an increasing number of questions, respondents can become bored, tired, and annoyed and may take shortcuts to reduce the effort needed to complete the survey. In this article, direct evidence is presented on how the position of items within a web…
Descriptors: Online Surveys, Test Items, Test Format, Test Construction
Peer reviewed
Kamil Jaros; Aleksandra Gajda – Journal of Psychoeducational Assessment, 2024
Stage fright is a natural and very common phenomenon that affects everyone who must present themselves in public. However, it has a negative impact on the health and voice emission of children and adolescents, which is why it is important to study and measure it. Unfortunately, there are no appropriate tools for examining public presentation…
Descriptors: Anxiety, Fear, Public Speaking, Children
Peer reviewed
Embretson, Susan – Large-scale Assessments in Education, 2023
Understanding the cognitive processes, skills, and strategies that examinees use in testing is important for construct validity and score interpretability. Although response processes evidence has long been included as an important aspect of validity (i.e., "Standards for Educational and Psychological Testing," 1999), relevant studies are…
Descriptors: Cognitive Processes, Test Validity, Item Response Theory, Test Wiseness
Ge, Yuan – ProQuest LLC, 2022
My dissertation research explored responder behaviors (e.g., response styles, carelessness, and misconceptions) that compromise psychometric quality and impact the interpretation and use of assessment results. Identifying these behaviors can help researchers understand and minimize their potentially construct-irrelevant…
Descriptors: Test Wiseness, Response Style (Tests), Item Response Theory, Psychometrics
Peer reviewed
Zou, Tongtong; Bolt, Daniel M. – Measurement: Interdisciplinary Research and Perspectives, 2023
Person misfit and person reliability indices in item response theory (IRT) can play an important role in evaluating the validity of a test or survey instrument at the respondent level. Prior empirical comparisons of these indices have been applied to binary item response data and suggest that the two types of indices return very similar results.…
Descriptors: Item Response Theory, Rating Scales, Response Style (Tests), Measurement
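The person misfit indices mentioned in the entry above are usually computed per respondent from a fitted IRT model. As a hedged illustration only (not the authors' analysis), here is a minimal Python sketch of one widely used index for binary data, the standardized log-likelihood person-fit statistic l_z under a 2PL model; the item parameters and response pattern are invented for the example.

```python
import numpy as np

def lz_person_fit(responses, a, b, theta):
    """Standardized log-likelihood person-fit statistic (l_z) under a 2PL model.

    responses : 0/1 vector of one person's scored item responses
    a, b      : 2PL discrimination and difficulty parameters
    theta     : the person's ability estimate
    Large negative values flag response patterns that misfit the model.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))          # 2PL success probabilities
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expectation = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expectation) / np.sqrt(variance)

# Hypothetical 6-item example: easy items missed, hard items answered correctly
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9, 1.3])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
u = np.array([0, 0, 1, 0, 1, 1])
print(lz_person_fit(u, a, b, theta=0.0))
```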
Peer reviewed
Viola Merhof; Caroline M. Böhm; Thorsten Meiser – Educational and Psychological Measurement, 2024
Item response tree (IRTree) models are a flexible framework to control self-reported trait measurements for response styles. To this end, IRTree models decompose the responses to rating items into sub-decisions, which are assumed to be made on the basis of either the trait being measured or a response style, whereby the effects of such person…
Descriptors: Item Response Theory, Test Interpretation, Test Reliability, Test Validity
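The decomposition that IRTree models rely on can be made concrete with a small example. The sketch below illustrates only the generic idea (not the specific model in the Merhof, Böhm, and Meiser article): it maps 5-point rating responses onto the common midpoint/direction/extremity pseudo-items, with None marking sub-decisions that are never reached.

```python
def irtree_pseudo_items(response):
    """Decompose a 5-point rating (1-5) into three binary pseudo-items.

    midpoint  : 1 if the middle category (3) was chosen, else 0
    direction : 1 for the agree side (4 or 5), 0 for the disagree side (1 or 2),
                None if the midpoint was chosen (node not reached)
    extremity : 1 for an extreme category (1 or 5), 0 for a moderate one (2 or 4),
                None if the midpoint was chosen
    """
    midpoint = 1 if response == 3 else 0
    direction = None if midpoint else (1 if response > 3 else 0)
    extremity = None if midpoint else (1 if response in (1, 5) else 0)
    return midpoint, direction, extremity

# Each pseudo-item can then be modeled by its own IRT node, e.g., the trait on the
# direction node and response styles on the midpoint and extremity nodes.
print([irtree_pseudo_items(r) for r in (1, 2, 3, 4, 5)])
```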
Peer reviewed
Wim J. van der Linden; Luping Niu; Seung W. Choi – Journal of Educational and Behavioral Statistics, 2024
A test battery with two different levels of adaptation is presented: a within-subtest level for the selection of the items in the subtests and a between-subtest level to move from one subtest to the next. The battery runs on a two-level model consisting of a regular response model for each of the subtests extended with a second level for the joint…
Descriptors: Adaptive Testing, Test Construction, Test Format, Test Reliability
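The full two-level battery model in the entry above is beyond a short snippet, but its within-subtest level comes down to adaptive item selection. As a hypothetical sketch only, the code below implements plain maximum Fisher-information selection under a 2PL model for a single subtest; the between-subtest level is not shown.

```python
import numpy as np

def fisher_information(theta, a, b):
    """Fisher information of 2PL items at ability theta: a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1 - p)

def select_next_item(theta, a, b, administered):
    """Return the index of the unadministered item with maximum information at theta."""
    info = fisher_information(theta, a, b)
    info[list(administered)] = -np.inf        # mask items already given
    return int(np.argmax(info))

# Hypothetical item pool for one subtest
a = np.array([1.1, 0.7, 1.4, 0.9, 1.2])
b = np.array([-0.8, 0.0, 0.3, 1.0, -0.2])
print(select_next_item(theta=0.1, a=a, b=b, administered={2}))
```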
Peer reviewed
Zachary J. Roman; Patrick Schmidt; Jason M. Miller; Holger Brandt – Structural Equation Modeling: A Multidisciplinary Journal, 2024
Careless and insufficient effort responding (C/IER) is a situation where participants respond to survey instruments without considering the item content. This phenomenon adds noise to data, leading to erroneous inference. There are multiple approaches to identifying and accounting for C/IER in survey settings; of these approaches, the best performing…
Descriptors: Structural Equation Models, Bayesian Statistics, Response Style (Tests), Robustness (Statistics)
Peer reviewed
Eirini M. Mitropoulou; Leonidas A. Zampetakis; Ioannis Tsaousis – Evaluation Review, 2024
Unfolding item response theory (IRT) models are important alternatives to dominance IRT models in describing the response processes on self-report tests. Their usage is common in personality measures, since they indicate potential differentiations in test score interpretation. This paper aims to gain a better insight into the structure of trait…
Descriptors: Foreign Countries, Adults, Item Response Theory, Personality Traits
Peer reviewed
Sengül Avsar, Asiye – Measurement: Interdisciplinary Research and Perspectives, 2020
In order to reach valid and reliable test scores, various test theories have been developed, and one of them is nonparametric item response theory (NIRT). Mokken models are the most widely known NIRT models, which are useful for small samples and short tests. The Mokken package is useful for Mokken scale analysis. An important issue about validity is…
Descriptors: Response Style (Tests), Nonparametric Statistics, Item Response Theory, Test Validity
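The scalability coefficient at the heart of a Mokken scale analysis can be written down compactly. The following is a plain-Python sketch (not the Mokken package itself, and only for dichotomous items) of Loevinger's H computed from observed versus expected Guttman errors; the toy data are invented.

```python
import numpy as np

def loevinger_h(data):
    """Scale-level Loevinger's H for a persons-by-items binary matrix.

    H = 1 - (observed Guttman errors) / (errors expected under independence),
    summed over all item pairs. Values close to 1 indicate a strong scale.
    """
    data = np.asarray(data)
    n, n_items = data.shape
    p = data.mean(axis=0)                     # item popularities
    observed = expected = 0.0
    for i in range(n_items):
        for j in range(i + 1, n_items):
            easy, hard = (i, j) if p[i] >= p[j] else (j, i)
            # Guttman error: failing the easier item while passing the harder one
            observed += np.sum((data[:, easy] == 0) & (data[:, hard] == 1))
            expected += n * (1 - p[easy]) * p[hard]
    return 1.0 - observed / expected

# Toy example: 6 persons, 3 items forming a (near-)perfect Guttman pattern
X = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [1, 1, 0], [0, 0, 0], [1, 0, 0]]
print(loevinger_h(X))
```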
Peer reviewed
Hong, Maxwell; Steedle, Jeffrey T.; Cheng, Ying – Educational and Psychological Measurement, 2020
Insufficient effort responding (IER) affects many forms of assessment in both educational and psychological contexts. Much research has examined different types of IER, IER's impact on the psychometric properties of test scores, and preprocessing procedures used to detect IER. However, there is a gap in the literature in terms of practical advice…
Descriptors: Responses, Psychometrics, Test Validity, Test Reliability
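Because the entry above concerns practical advice for detecting insufficient effort responding, a short, hedged sketch of two screening indices that are common in that literature follows: the longest string of identical consecutive answers and intra-individual response variability (IRV). Any cutoffs applied to them would be study-specific; none are hard-coded here.

```python
import numpy as np

def longstring(responses):
    """Length of the longest run of identical consecutive answers."""
    longest = current = 1
    for prev, cur in zip(responses, responses[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    return longest

def irv(responses):
    """Intra-individual response variability: the SD of a person's own answers.
    Very low values can indicate straightlining."""
    return float(np.std(responses))

# Hypothetical 12-item survey: an attentive respondent and a straightliner
attentive = [2, 4, 1, 5, 3, 2, 4, 1, 3, 5, 2, 4]
straightliner = [3] * 12
for person in (attentive, straightliner):
    print(longstring(person), round(irv(person), 2))
```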
Peer reviewed
Silber, Henning; Danner, Daniel; Rammstedt, Beatrice – International Journal of Social Research Methodology, 2019
This study aims to assess whether respondent inattentiveness causes systematic and unsystematic measurement error that influences survey data quality. To determine the impact of (in)attentiveness on the reliability and validity of target measures, we compared respondents from a German online survey (N = 5205) who had passed two attention checks…
Descriptors: Foreign Countries, Test Validity, Test Reliability, Attention
Peer reviewed
Rebecca F. Berenbon; Jerome V. D'Agostino; Emily M. Rodgers – Journal of Psychoeducational Assessment, 2024
Curriculum-based measures (CBMs) such as Word Identification Fluency (WIF) promote student achievement, but because they are timed and administered frequently, they are prone to variation in student response styles. To study the impact of WIF response styles, we created a novel response style measure, examined its validity, and examined the degree…
Descriptors: Elementary School Students, Elementary School Teachers, Grade 1, Special Education Teachers
Peer reviewed
Halpin, Peter F. – Measurement: Interdisciplinary Research and Perspectives, 2017
The target paper, "Rethinking Traditional Methods of Survey Validation" (Andrew Maul), raises some interesting critical ideas, both old and new, about the validation of self-report surveys. As indicated by Dr. Maul, recent policy initiatives in the United States (e.g., ESSA) have led to a demand for assessments of…
Descriptors: Self Evaluation (Individuals), Evaluation Methods, Measurement Techniques, Response Style (Tests)
Steedle, Jeffrey – ACT, Inc., 2018
Self-report inventories are commonly administered to measure social and emotional learning competencies related to college readiness. If students respond inattentively or dishonestly, validity will suffer. This study applies several methods of detecting insufficient effort responding (IER) to data from ACT® Engage®. Different methods indicated…
Descriptors: College Readiness, Response Style (Tests), Test Validity, Measurement Techniques