Showing 61 to 75 of 190 results
Peer reviewed
Kam, Chester Chun Seng – Educational and Psychological Measurement, 2016
To measure the response style of acquiescence, researchers recommend the use of at least 15 items with heterogeneous content. Such an approach is consistent with its theoretical definition and is a substantial improvement over traditional methods. Nevertheless, measurement of acquiescence can be enhanced by two additional considerations: first, to…
Descriptors: Test Items, Response Style (Tests), Test Content, Measurement
Peer reviewed
Michaelides, Michalis P. – Applied Measurement in Education, 2019
The Student Background survey administered along with achievement tests in studies of the International Association for the Evaluation of Educational Achievement includes scales of student motivation, competence, and attitudes toward mathematics and science. The scales consist of positively- and negatively-keyed items. The current research…
Descriptors: International Assessment, Achievement Tests, Mathematics Achievement, Mathematics Tests
Peer reviewed
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
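A tree-based (IRTree) framework of the kind this abstract describes typically recodes each raw response into a sequence of binary nodes, where later nodes are missing by design once an earlier node ends the branch. The following is a minimal illustrative sketch of that pseudo-item coding; the function name and the skip/not-reached markers are hypothetical, not taken from the article.

```python
# Hypothetical sketch of IRTree pseudo-item coding for omissions:
# node 1 = item reached?, node 2 = item attempted?, node 3 = answer correct?
# None marks a node that is undefined on the branch taken.

NOT_REACHED = "NR"  # test taker ran out of time before this item
SKIPPED = "SK"      # test taker saw the item but gave no answer

def irtree_code(response):
    """Map one raw item response to (reached, attempted, correct) nodes."""
    if response == NOT_REACHED:
        return (0, None, None)           # only node 1 is observed
    if response == SKIPPED:
        return (1, 0, None)              # reached, but not attempted
    return (1, 1, 1 if response else 0)  # attempted: score correctness

# Example test taker: correct, wrong, skipped, then ran out of time
responses = [True, False, SKIPPED, NOT_REACHED]
print([irtree_code(r) for r in responses])
# -> [(1, 1, 1), (1, 1, 0), (1, 0, None), (0, None, None)]
```

Separate IRT models can then be fitted to each node, which is what lets skipping and not reaching an item carry information about proficiency rather than being discarded.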
Peer reviewed
Jonker, Tanya R. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
When memory is tested, researchers are often interested in the items that were correctly recalled or recognized, while ignoring or factoring out trials where one "recalls" or "recognizes" a nonstudied item. However, intrusions and false alarms are more than nuisance data and can provide key insights into the memory system. The…
Descriptors: Individual Differences, Recall (Psychology), Test Items, Semantics
Peer reviewed
Meyer, Joseph F.; Faust, Kyle A.; Faust, David; Baker, Aaron M.; Cook, Nathan E. – International Journal of Mental Health and Addiction, 2013
Even when relatively infrequent, careless and random responding (C/RR) can have robust effects on individual and group data and thereby distort clinical evaluations and research outcomes. Given such potential adverse impacts and the broad use of self-report measures when appraising addictions and addictive behavior, the detection of C/RR can…
Descriptors: Addictive Behavior, Response Style (Tests), Test Items, Validity
Peer reviewed
Nijlen, Daniel Van; Janssen, Rianne – Applied Measurement in Education, 2015
This study investigates the extent to which contextualized and non-contextualized mathematics test items have a differential impact on examinee effort. Mixture item response theory (IRT) models are applied to two subsets of items from a national assessment on mathematics in the second grade of the pre-vocational track in secondary education in…
Descriptors: Mathematics Tests, Measurement, Item Response Theory, Test Items
Peer reviewed
Kam, Chester Chun Seng; Zhou, Mingming – Educational and Psychological Measurement, 2015
Previous research has found the effects of acquiescence to be generally consistent across item "aggregates" within a single survey (i.e., essential tau-equivalence), but it is unknown whether this phenomenon is consistent at the "individual item" level. This article evaluated the often assumed but inadequately tested…
Descriptors: Test Items, Surveys, Criteria, Correlation
Thacker, Arthur A.; Dickinson, Emily R.; Bynum, Bethany H.; Wen, Yao; Smith, Erin; Sinclair, Andrea L.; Deatz, Richard C.; Wise, Lauress L. – Partnership for Assessment of Readiness for College and Careers, 2015
The Partnership for Assessment of Readiness for College and Careers (PARCC) field tests during the spring of 2014 provided an opportunity to investigate the quality of the items, tasks, and associated stimuli. HumRRO conducted several research studies summarized in this report. Quality of test items is integral to the "Theory of Action"…
Descriptors: Achievement Tests, Test Items, Common Core State Standards, Difficulty Level
Peer reviewed
Jin, Kuan-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
Descriptors: Student Evaluation, Item Response Theory, Models, Simulation
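The "standard item response theory (IRT) models" this abstract says fail under low effort include the two-parameter logistic (2PL) model, which assumes every response reflects full effort. A minimal sketch of the 2PL response function, for reference (parameter names are the conventional ones, not specific to this article):

```python
import math

def p_correct_2pl(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability of a correct
    response given ability theta, item discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty, the 2PL predicts a 50% chance
# of success regardless of the discrimination parameter.
print(round(p_correct_2pl(0.0, a=1.2, b=0.0), 2))  # -> 0.5
```

Because the model ties every response to theta alone, end-of-test items answered with reduced effort look spuriously difficult, which is the miscalibration the article addresses.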
Peer reviewed
Okumura, Taichi – Educational and Psychological Measurement, 2014
This study examined the empirical differences between the tendency to omit items and reading ability by applying tree-based item response (IRTree) models to the Japanese data of the Programme for International Student Assessment (PISA) held in 2009. For this purpose, existing IRTree models were expanded to contain predictors and to handle…
Descriptors: Foreign Countries, Item Response Theory, Test Items, Reading Ability
Çetinavci, Ugur Recep; Öztürk, Ismet – Online Submission, 2017
Pragmatic competence is among the explicitly acknowledged sub-competences that constitute communicative competence in any language (Bachman & Palmer, 1996; Council of Europe, 2001). Within the notion of pragmatic competence itself, "implicature (implied meanings)" comes to the fore as one of the five main areas there (Levinson, 1983).…
Descriptors: Test Construction, Computer Assisted Testing, Communicative Competence (Languages), Second Language Instruction
Peer reviewed
Lee, Yi-Hsuan; Jia, Yue – Large-scale Assessments in Education, 2014
Background: Large-scale survey assessments have been used for decades to monitor what students know and can do. Such assessments aim at providing group-level scores for various populations, with little or no consequence to individual students for their test performance. Students' test-taking behaviors in survey assessments, particularly the level…
Descriptors: Measurement, Test Wiseness, Student Surveys, Response Style (Tests)
Peer reviewed
Suh, Youngsuk; Cho, Sun-Joo; Wollack, James A. – Journal of Educational Measurement, 2012
In the presence of test speededness, the parameter estimates of item response theory models can be poorly estimated due to conditional dependencies among items, particularly for end-of-test items (i.e., speeded items). This article conducted a systematic comparison of five item calibration procedures--a two-parameter logistic (2PL) model, a…
Descriptors: Response Style (Tests), Timed Tests, Test Items, Item Response Theory
Peer reviewed
Smithson, Michael; Merkle, Edgar C.; Verkuilen, Jay – Journal of Educational and Behavioral Statistics, 2011
This paper describes the application of finite-mixture general linear models based on the beta distribution to modeling response styles, polarization, anchoring, and priming effects in probability judgments. These models, in turn, enhance our capacity for explicitly testing models and theories regarding the aforementioned phenomena. The mixture…
Descriptors: Priming, Research Methodology, Probability, Item Response Theory
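A finite mixture of beta components of the kind this abstract mentions models probability judgments on (0, 1) as draws from several latent response styles, e.g. a polarized (U-shaped) component versus a midpoint-anchored one. A small illustrative sketch, with component weights and parameters chosen for illustration only:

```python
import math

def beta_pdf(x, alpha, beta):
    """Density of the beta distribution at x in (0, 1)."""
    coef = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    return coef * x ** (alpha - 1) * (1 - x) ** (beta - 1)

def mixture_pdf(x, weights, params):
    """Finite beta mixture: sum_k w_k * Beta(x; a_k, b_k)."""
    return sum(w * beta_pdf(x, a, b) for w, (a, b) in zip(weights, params))

# Two hypothetical latent styles: polarized (U-shaped, Beta(0.5, 0.5))
# and midpoint-anchored (Beta(5, 5)), mixed 40/60.
weights = [0.4, 0.6]
components = [(0.5, 0.5), (5.0, 5.0)]
print(round(mixture_pdf(0.5, weights, components), 3))  # -> 1.731
```

Estimating the weights and shape parameters from data is what allows the authors' framework to test explicitly for polarization, anchoring, and priming effects rather than treating them as noise.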
Peer reviewed
Yorke, Mantz; Orr, Susan; Blair, Bernadette – Studies in Higher Education, 2014
There has long been the suspicion amongst staff in Art & Design that the ratings given to their subject disciplines in the UK's National Student Survey are adversely affected by a combination of circumstances--a "perfect storm". The "perfect storm" proposition is tested by comparing ratings for Art & Design with those…
Descriptors: Student Surveys, National Surveys, Art Education, Design