Showing 211 to 225 of 1,389 results
Peer reviewed
Dawadi, Saraswati; Shrestha, Prithvi N. – Educational Assessment, 2018
There has been steady interest in investigating the validity of language tests in recent decades. Despite numerous studies on construct validity in language testing, few have examined the construct validity of a reading test. This paper reports on a study that explored the construct validity of the English reading test in…
Descriptors: Foreign Countries, Construct Validity, Reading Tests, English (Second Language)
Peer reviewed
Newberry, Milton G., III; Israel, Glenn D. – Field Methods, 2017
Recent research has shown that mixed-mode surveys are advantageous for organizations to use in collecting data. Previous research explored web/mail mode effects across four contact waves. This study explores the effect of web/mail mixed-mode systems over a series of contacts on the customer satisfaction data from the Florida Cooperative Extension Service…
Descriptors: Mail Surveys, Mixed Methods Research, Comparative Analysis, Extension Education
Peer reviewed
Debeer, Dries; Janssen, Rianne; De Boeck, Paul – Journal of Educational Measurement, 2017
When dealing with missing responses, two types of omissions can be discerned: items can be skipped or not reached by the test taker. When the occurrence of these omissions is related to the proficiency process, the missingness is nonignorable. The purpose of this article is to present a tree-based IRT framework for modeling responses and omissions…
Descriptors: Item Response Theory, Test Items, Responses, Testing Problems
Peer reviewed
Marianti, Sukaesi; Fox, Jean-Paul; Avetisyan, Marianna; Veldkamp, Bernard P.; Tijmstra, Jesper – Journal of Educational and Behavioral Statistics, 2014
Many standardized tests are now administered via computer rather than paper-and-pencil format. In a computer-based testing environment, it is possible to record not only the test taker's response to each question (item) but also the amount of time spent by the test taker in considering and answering each item. Response times (RTs) provide…
Descriptors: Reaction Time, Response Style (Tests), Computer Assisted Testing, Bayesian Statistics
Peer reviewed
Eggers, Kurt; De Nil, Luc F.; Van den Bergh, Bea R. H. – Journal of Fluency Disorders, 2013
Purpose: The purpose of this study was to investigate whether previously reported parental questionnaire-based differences in inhibitory control (IC; Eggers, De Nil, & Van den Bergh, 2010) would be supported by direct measurement of IC using a computer task. Method: Participants were 30 children who stutter (CWS; mean age = 7;05 years) and 30…
Descriptors: Response Style (Tests), Inhibition, Stuttering, Questionnaires
Goff, Peter T.; Kam, Jihye; Kraszewski, Jacek – Wisconsin Center for Education Research, 2015
Survey tools are used in education to direct policy, drive leadership decisions, and inform research. Increasingly, survey measures of school climate and perspectives on leadership are incorporated into measures of school and principal quality. This study examines the role of temporal variations in survey response patterns using the data from the…
Descriptors: Elementary Secondary Education, National Surveys, School Effectiveness, Time
Peer reviewed
Faddar, Jerich; Vanhoof, Jan; De Maeyer, Sven – School Effectiveness and School Improvement, 2017
School self-evaluation (SSE) often makes use of questionnaires in order to sketch a picture of the school. How respondents cognitively process questionnaire items determines the validity of SSE results. Still, one readily assumes that respondents interpret and answer items as intended by the instrument developer (referred to as cognitive…
Descriptors: Self Evaluation (Individuals), Questionnaires, Cognitive Tests, Construct Validity
Peer reviewed
Jonker, Tanya R. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2016
When memory is tested, researchers are often interested in the items that were correctly recalled or recognized, while ignoring or factoring out trials where one "recalls" or "recognizes" a nonstudied item. However, intrusions and false alarms are more than nuisance data and can provide key insights into the memory system. The…
Descriptors: Individual Differences, Recall (Psychology), Test Items, Semantics
Goldhammer, Frank; Martens, Thomas; Christoph, Gabriela; Lüdtke, Oliver – OECD Publishing, 2016
In this study, we investigated how empirical indicators of test-taking engagement can be defined, empirically validated, and used to describe group differences in the context of the Programme for the International Assessment of Adult Competencies (PIAAC). The approach was to distinguish between disengaged and engaged response behavior by means of…
Descriptors: International Assessment, Adults, Response Style (Tests), Reaction Time
Peer reviewed
Meyer, Joseph F.; Faust, Kyle A.; Faust, David; Baker, Aaron M.; Cook, Nathan E. – International Journal of Mental Health and Addiction, 2013
Even when relatively infrequent, careless and random responding (C/RR) can have robust effects on individual and group data and thereby distort clinical evaluations and research outcomes. Given such potential adverse impacts and the broad use of self-report measures when appraising addictions and addictive behavior, the detection of C/RR can…
Descriptors: Addictive Behavior, Response Style (Tests), Test Items, Validity
Peer reviewed
Nijlen, Daniel Van; Janssen, Rianne – Applied Measurement in Education, 2015
This study investigates the extent to which contextualized and non-contextualized mathematics test items have a differential impact on examinee effort. Mixture item response theory (IRT) models are applied to two subsets of items from a national assessment on mathematics in the second grade of the pre-vocational track in secondary education in…
Descriptors: Mathematics Tests, Measurement, Item Response Theory, Test Items
Peer reviewed
Kam, Chester Chun Seng; Zhou, Mingming – Educational and Psychological Measurement, 2015
Previous research has found the effects of acquiescence to be generally consistent across item "aggregates" within a single survey (i.e., essential tau-equivalence), but it is unknown whether this phenomenon is consistent at the "individual item" level. This article evaluated the often assumed but inadequately tested…
Descriptors: Test Items, Surveys, Criteria, Correlation
Peer reviewed
Roohr, Katrina Crotts; Sireci, Stephen G. – Educational Assessment, 2017
Test accommodations for English learners (ELs) are intended to reduce the language barrier and level the playing field, allowing ELs to better demonstrate their true proficiencies. Computer-based accommodations for ELs show promising results for leveling that field while also providing us with additional data to more closely investigate the…
Descriptors: Testing Accommodations, English Language Learners, Second Language Learning, Computer Assisted Testing
Peer reviewed
Huang, Hung-Yu; Wang, Wen-Chung – Journal of Educational Measurement, 2014
The DINA (deterministic input, noisy, and gate) model has been widely used in cognitive diagnosis tests and in the process of test development. Slip and guess parameters are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Descriptors: Models, Guessing (Tests), Probability, Ability
Peer reviewed
Vispoel, Walter P.; Tao, Shuqin – Psychological Assessment, 2013
Our goal in this investigation was to evaluate the reliability of scores from the Balanced Inventory of Desirable Responding (BIDR) more comprehensively than in prior research using a generalizability-theory framework based on both dichotomous and polytomous scoring of items. Generalizability coefficients accounting for specific-factor, transient,…
Descriptors: Reliability, Scores, Measures (Individuals), Generalizability Theory