Sijia Huang; Seungwon Chung; Carl F. Falk – Journal of Educational Measurement, 2024
In this study, we introduced a cross-classified multidimensional nominal response model (CC-MNRM) to account for various response styles (RS) in the presence of cross-classified data. The proposed model allows slopes to vary across items and can explore impacts of observed covariates on latent constructs. We applied a recently developed variant of…
Descriptors: Response Style (Tests), Classification, Data, Models
Ö. Emre C. Alagöz; Thorsten Meiser – Educational and Psychological Measurement, 2024
To improve the validity of self-report measures, researchers should control for response style (RS) effects, which can be achieved with IRTree models. A traditional IRTree model considers a response as a combination of distinct decision-making processes, where the substantive trait affects the decision on response direction, while decisions about…
Descriptors: Item Response Theory, Validity, Self Evaluation (Individuals), Decision Making
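The IRTree decomposition this entry refers to can be illustrated with the recoding step such models start from: each observed Likert response is split into binary pseudo-items for midpoint use, response direction, and extremity. The coding scheme below is a common one for 5-point scales and is only a sketch, not the specific model in the article.

```python
# Recode a 5-point Likert response (1..5) into IRTree pseudo-items:
#   midpoint:  1 if the midpoint (3) was chosen, else 0
#   direction: 1 for agreement (4 or 5), 0 for disagreement (1 or 2),
#              None when the midpoint was chosen (node not reached)
#   extreme:   1 for an extreme category (1 or 5), 0 for a mild one (2 or 4),
#              None when the midpoint was chosen
def irtree_pseudo_items(response):
    if response == 3:
        return {"midpoint": 1, "direction": None, "extreme": None}
    return {
        "midpoint": 0,
        "direction": 1 if response >= 4 else 0,
        "extreme": 1 if response in (1, 5) else 0,
    }

codes = [irtree_pseudo_items(r) for r in (1, 2, 3, 4, 5)]
```

Each pseudo-item then gets its own IRT submodel, which is how the substantive trait (direction node) is separated from response-style processes (midpoint and extremity nodes).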
Huang, Hung-Yu – Educational and Psychological Measurement, 2023
The forced-choice (FC) item formats used for noncognitive tests typically develop a set of response options that measure different traits and instruct respondents to make judgments among these options in terms of their preference to control the response biases that are commonly observed in normative tests. Diagnostic classification models (DCMs)…
Descriptors: Test Items, Classification, Bayesian Statistics, Decision Making
Falk, Carl F.; Cai, Li – Grantee Submission, 2015
In this paper, we present a flexible full-information approach to modeling multiple user-defined response styles across multiple constructs of interest. The model is based on a novel parameterization of the multidimensional nominal response model that separates estimation of overall item slopes from the scoring functions (indicating the order of…
Descriptors: Response Style (Tests), Item Response Theory, Outcome Measures, Models
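The core computation in a multidimensional nominal response model of this kind is a softmax over category-specific linear predictors, with an overall item slope separated from per-category scoring functions, as the abstract indicates. The parameterization below is a simplified illustration with made-up parameter names, not the article's exact model.

```python
import math

def mnrm_probs(theta_sub, theta_ers, a, s_sub, s_ers, c):
    """Category probabilities for one item under a (simplified)
    multidimensional nominal response model: the overall item slope `a`
    is separated from the category scoring functions `s_sub` (substantive
    trait) and `s_ers` (extreme response style). Illustrative only."""
    z = [a * (s_sub[k] * theta_sub + s_ers[k] * theta_ers) + c[k]
         for k in range(len(c))]
    m = max(z)                        # subtract the max to stabilize exp()
    e = [math.exp(v - m) for v in z]
    total = sum(e)
    return [v / total for v in e]

# 5 categories: linear scoring for the trait, U-shaped scoring for ERS
probs = mnrm_probs(theta_sub=0.5, theta_ers=0.0,
                   a=1.2,
                   s_sub=[-2, -1, 0, 1, 2],
                   s_ers=[1, 0, 0, 0, 1],
                   c=[0, 0, 0, 0, 0])
```

With a positive substantive trait and no extreme response style, probability mass shifts toward the higher categories; a positive ERS trait would instead inflate both endpoint categories via the U-shaped scoring function.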
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
Suh, Youngsuk; Cho, Sun-Joo; Wollack, James A. – Journal of Educational Measurement, 2012
In the presence of test speededness, the parameter estimates of item response theory models can be poorly estimated due to conditional dependencies among items, particularly for end-of-test items (i.e., speeded items). This article conducted a systematic comparison of five item calibration procedures--a two-parameter logistic (2PL) model, a…
Descriptors: Response Style (Tests), Timed Tests, Test Items, Item Response Theory
Lu, Yi – ProQuest LLC, 2012
Cross-national comparisons of responses to survey items are often affected by response style, particularly extreme response style (ERS). ERS varies across cultures, and has the potential to bias inferences in cross-national comparisons. For example, in both PISA and TIMSS assessments, it has been documented that when examined within countries,…
Descriptors: Item Response Theory, Attitude Measures, Response Style (Tests), Cultural Differences
Verkuilen, Jay; Smithson, Michael – Journal of Educational and Behavioral Statistics, 2012
Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
Descriptors: Responses, Regression (Statistics), Statistical Analysis, Models
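A standard way to handle the doubly bounded outcomes this entry describes is beta regression: rescale scores into the open unit interval (so the density is defined at observed bounds) and evaluate a beta likelihood in the mean/precision parameterization. The sketch below assumes that common approach; it is not taken from the article itself.

```python
import math

def rescale_to_open_unit(y, lo, hi, n):
    """Map a score on [lo, hi] into the open interval (0, 1) using the
    common compression u' = (u*(n-1) + 0.5)/n, where n is the sample
    size, so boundary observations get a finite beta log-density."""
    u = (y - lo) / (hi - lo)
    return (u * (n - 1) + 0.5) / n

def beta_loglik(u, mu, phi):
    """Beta log-density in the mean/precision parameterization used in
    beta regression: alpha = mu*phi, beta = (1 - mu)*phi."""
    a, b = mu * phi, (1 - mu) * phi
    return (math.lgamma(phi) - math.lgamma(a) - math.lgamma(b)
            + (a - 1) * math.log(u) + (b - 1) * math.log(1 - u))

u = rescale_to_open_unit(100, lo=0, hi=100, n=50)   # a boundary score
ll = beta_loglik(u, mu=0.7, phi=10.0)
```

Normal-theory models would assign positive probability outside the bounds and misrepresent the skewed, heteroscedastic shapes such data often take; the beta family avoids both problems.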
Johnson, Timothy R.; Bolt, Daniel M. – Journal of Educational and Behavioral Statistics, 2010
Multidimensional item response models are usually implemented to model the relationship between item responses and two or more traits of interest. We show how multidimensional multinomial logit item response models can also be used to account for individual differences in response style. This is done by specifying a factor-analytic model for…
Descriptors: Models, Response Style (Tests), Factor Structure, Individual Differences
Emons, Wilco H. M. – Applied Psychological Measurement, 2009
For valid decision making, it is essential both to the person being measured and to the person or organization having that person measured that the observed scores adequately represent the underlying trait. This study deals with person-fit analysis of polytomous item scores to detect unusual patterns of sum scores on subsets of items. This…
Descriptors: Personality Theories, Personality Measures, Scores, Test Items
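The intuition behind subset-based person-fit checks like the one this entry describes can be sketched with a crude heuristic: order items by difficulty, then flag respondents whose sum score on the hard subset exceeds their sum score on the easy subset. This is an illustrative toy check, not the article's person-fit statistic.

```python
def subset_score_check(scores, easy_items, hard_items):
    """Crude person-fit flag on polytomous item scores: a respondent
    scoring markedly higher on a subset of hard items than on a subset
    of easy items shows an unusual (potentially misfitting) pattern.
    Illustrative heuristic only."""
    easy = sum(scores[i] for i in easy_items)
    hard = sum(scores[i] for i in hard_items)
    return {"easy_sum": easy, "hard_sum": hard, "flag": hard > easy}

# Scores 0-4 on six items; items 0-2 are easy, items 3-5 are hard
consistent = subset_score_check([4, 3, 3, 2, 1, 1], [0, 1, 2], [3, 4, 5])
aberrant   = subset_score_check([1, 0, 1, 4, 4, 3], [0, 1, 2], [3, 4, 5])
```

Formal person-fit statistics replace this raw comparison with a test against the score pattern expected under the fitted IRT model, but the underlying signal is the same.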
Vogt, Vera; Broder, Arndt – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2007
Recently, J. J. Starns and J. L. Hicks (2005) have argued that source dimensions are retrieved independently from memory. In their innovative experiment, manipulating the retrievability of 1 source feature did not affect memory for a 2nd feature. Following C. S. Dodson and A. P. Shimamura (2000), the authors argue that the source memory measure…
Descriptors: Response Style (Tests), Memory, Measures (Individuals), Simulation
Kalisch, Stanley James, Jr. – 1974
The four purposes of this study were: (1) To compare two versions of a tailored testing model similar to one suggested by Kalisch (1974); (2) To identify levels of the variables within the two versions which produce efficient tailored testing procedures; (3) To compare, within each version, the results obtained when employing relatively small…
Descriptors: Ability, Adaptive Testing, Branching, Comparative Analysis
Kalisch, Stanley James, Jr. – 1975
Two tailored testing models, specifying procedures by which the correctness of examinees' responses to a fixed number of test items is predicted by presenting as few items as possible to the examinee, were compared for their efficiency. The models differ in that one requires reconsideration of each prediction whenever additional information is…
Descriptors: Ability, Adaptive Testing, Branching, Comparative Analysis