Showing 1 to 15 of 24 results
Peer reviewed
PDF on ERIC: Download full text
Guo, Hongwen; Rios, Joseph A.; Ling, Guangming; Wang, Zhen; Gu, Lin; Yang, Zhitong; Liu, Lydia O. – ETS Research Report Series, 2022
Different variants of the selected-response (SR) item type have been developed for various reasons (e.g., simulating realistic situations, examining critical-thinking and/or problem-solving skills). Generally, these variants are more complex than traditional multiple-choice (MC) items, which may be more challenging to test…
Descriptors: Test Format, Test Wiseness, Test Items, Item Response Theory
Peer reviewed
Direct link
Saatcioglu, Fatima Munevver; Sen, Sedat – International Journal of Testing, 2023
In this study, we illustrated an application of the confirmatory mixture IRT model to multidimensional tests. We aimed to examine differences in student performance across domains with a confirmatory mixture IRT modeling approach. A three-dimensional and three-class model was analyzed by assuming content domains as dimensions and cognitive…
Descriptors: Item Response Theory, Foreign Countries, Elementary Secondary Education, Achievement Tests
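For readers unfamiliar with the technique, a generic mixture IRT formulation (not necessarily the authors' exact parameterization) writes the marginal probability of a response pattern as a weighted sum over latent classes:

$$
P(\mathbf{x}_j) = \sum_{g=1}^{G} \pi_g \int \prod_{i=1}^{I} P_i\!\left(x_{ij} \mid \boldsymbol{\theta}, g\right) f_g(\boldsymbol{\theta})\, d\boldsymbol{\theta},
$$

where $\pi_g$ is the proportion of examinees in class $g$, and in the confirmatory multidimensional case each content domain contributes one dimension of $\boldsymbol{\theta}$.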
Peer reviewed
PDF on ERIC: Download full text
Whitaker, Douglas; Barss, Joseph; Drew, Bailey – Online Submission, 2022
Challenges to measuring students' attitudes toward statistics remain despite decades of focused research. Measuring the expectancy-value theory (EVT) Cost construct has been especially challenging owing in part to the historical lack of research about it. To measure the EVT Cost construct better, this study asked university students to respond to…
Descriptors: Statistics Education, College Students, Student Attitudes, Likert Scales
Peer reviewed
PDF on ERIC: Download full text
Fukuzawa, Sherry; deBraga, Michael – Journal of Curriculum and Teaching, 2019
The Graded Response Method (GRM) is an alternative to multiple-choice testing in which students rank options according to their relevance to the question. GRM requires students to discriminate and draw inferences among statements, and it is a cost-effective critical-thinking assessment in large courses where open-ended answers are not feasible. This study examined…
Descriptors: Alternative Assessment, Multiple Choice Tests, Test Items, Test Format
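The snippet does not specify how ranked responses are scored; purely as an invented illustration, one could award credit for how closely a student's ranking matches the instructor's reference ranking, for example via Kendall's tau rescaled to [0, 1]:

```python
# Invented scoring rubric for a rank-the-options item (not the article's method):
# credit is the rank agreement between student and reference orderings.
from scipy.stats import kendalltau

reference = [1, 2, 3, 4]   # instructor's relevance ranking of four options
student = [1, 3, 2, 4]     # student's submitted ranking (one adjacent swap)

tau, _ = kendalltau(reference, student)
score = (tau + 1) / 2      # rescale Kendall's tau from [-1, 1] to [0, 1]
print(round(score, 2))     # 0.83: near-perfect agreement earns most of the credit
```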
Peer reviewed
Direct link
Shin, Jinnie; Bulut, Okan; Gierl, Mark J. – Journal of Experimental Education, 2020
The arrangement of response options in multiple-choice (MC) items, especially the location of the most attractive distractor, is considered critical in constructing high-quality MC items. In the current study, a sample of 496 undergraduate students taking an educational assessment course was given three test forms consisting of the same items but…
Descriptors: Foreign Countries, Undergraduate Students, Multiple Choice Tests, Item Response Theory
Peer reviewed
Direct link
Scribner, Emily D.; Harris, Sara E. – Journal of Geoscience Education, 2020
The Mineralogy Concept Inventory (MCI) is a statistically validated 18-question assessment that can be used to measure learning gains in introductory mineralogy courses. Development of the MCI was an iterative process involving expert consultation, student interviews, assessment deployment, and statistical analysis. Experts at the two universities…
Descriptors: Undergraduate Students, Mineralogy, Introductory Courses, Science Tests
Peer reviewed
Direct link
McIntosh, James – Scandinavian Journal of Educational Research, 2019
This article examines whether the way PISA models item outcomes in mathematics affects the validity of its country rankings. As an alternative to the PISA methodology, a two-parameter model is applied to PISA mathematics item data from Canada and Finland for the year 2012. In the estimation procedure, item difficulty and dispersion parameters are…
Descriptors: Foreign Countries, Achievement Tests, Secondary School Students, International Assessment
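For reference, the two-parameter logistic (2PL) model named in the abstract is conventionally written as

$$
P(x_{ij} = 1 \mid \theta_j) = \frac{1}{1 + \exp\!\left[-a_i(\theta_j - b_i)\right]},
$$

where $b_i$ is the item difficulty and $a_i$ the discrimination; the dispersion parameter the abstract mentions is plausibly $1/a_i$, though that parameterization is our assumption, not stated in the snippet.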
Peer reviewed
Direct link
Chalmers, R. Philip; Counsell, Alyssa; Flora, David B. – Educational and Psychological Measurement, 2016
Differential test functioning (DTF) occurs when one or more items in a test demonstrate differential item functioning (DIF) and the aggregate of these effects is observed at the test level. In many applications, DTF can be more important than DIF when the overall effects of DIF at the test level can be quantified. However, optimal statistical…
Descriptors: Test Bias, Sampling, Test Items, Statistical Analysis
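To make the DIF-to-DTF aggregation concrete, here is a minimal sketch (not the authors' implementation; the 2PL item parameters below are invented) that computes signed and unsigned DTF as weighted differences between two groups' test characteristic curves:

```python
# Hedged sketch: signed/unsigned DTF as weighted differences between two
# groups' test characteristic curves (TCCs) under a 2PL model.
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curves, one column per item."""
    return 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

def dtf(theta, weights, a_ref, b_ref, a_foc, b_foc):
    """Aggregate item-level DIF into test-level DTF."""
    diff = (icc_2pl(theta, a_ref, b_ref).sum(axis=1)
            - icc_2pl(theta, a_foc, b_foc).sum(axis=1))  # TCC difference
    signed = np.sum(weights * diff)            # DIF in opposite directions cancels
    unsigned = np.sum(weights * np.abs(diff))  # magnitude regardless of direction
    return signed, unsigned

# Quadrature grid weighted by a standard-normal latent-trait density
theta = np.linspace(-4, 4, 81)
w = np.exp(-0.5 * theta**2)
w /= w.sum()

a_ref = np.array([1.2, 0.8, 1.5]); b_ref = np.array([-0.5, 0.0, 1.0])
a_foc = np.array([1.2, 0.8, 1.5]); b_foc = np.array([-0.2, 0.0, 1.3])  # two items shifted

print(dtf(theta, w, a_ref, b_ref, a_foc, b_foc))
```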
Peer reviewed
Direct link
Goldhammer, Frank; Martens, Thomas; Lüdtke, Oliver – Large-scale Assessments in Education, 2017
Background: A potential problem of low-stakes large-scale assessments such as the Programme for the International Assessment of Adult Competencies (PIAAC) is low test-taking engagement. The present study pursued two goals to better understand conditioning factors of test-taking disengagement: First, a model-based approach was used to…
Descriptors: Student Evaluation, International Assessment, Adults, Competence
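The abstract's model-based approach is truncated here; for comparison, a widely used simpler operationalization flags rapid-guessing responses with item-level response-time thresholds (the data and cutoffs below are invented):

```python
# Hedged sketch: threshold-based disengagement flagging from response times.
# This is NOT the article's model-based approach; data and cutoffs are invented.
import numpy as np

rt = np.array([[2.1, 35.0, 4.8],     # examinee 1: two suspiciously fast responses
               [28.4, 40.2, 31.0]])  # examinee 2: plausibly engaged throughout
cutoffs = np.array([5.0, 5.0, 5.0])  # per-item rapid-guessing thresholds (seconds)

rapid_guess = rt < cutoffs                    # flag likely disengaged responses
engagement = 1.0 - rapid_guess.mean(axis=1)   # per-examinee engagement rate
print(engagement)                             # examinee 1 is flagged on 2 of 3 items
```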
Peer reviewed
Direct link
Bristow, M.; Erkorkmaz, K.; Huissoon, J. P.; Jeon, Soo; Owen, W. S.; Waslander, S. L.; Stubley, G. D. – IEEE Transactions on Education, 2012
Any meaningful initiative to improve teaching and learning in introductory control systems courses needs a clear test of student conceptual understanding to determine the effectiveness of proposed methods and activities. The authors propose a control systems concept inventory. Development of the inventory was collaborative and iterative. The…
Descriptors: Diagnostic Tests, Concept Formation, Undergraduate Students, Engineering Education
Peer reviewed
Direct link
Squires, Jane K.; Waddell, Misti L.; Clifford, Jantina R.; Funk, Kristin; Hoselton, Robert M.; Chen, Ching-I – Topics in Early Childhood Special Education, 2013
Psychometric and utility studies were conducted on the Social Emotional Assessment Measure (SEAM), an innovative tool for assessing and monitoring social-emotional and behavioral development in infants and toddlers with disabilities. The study focused on the Infant and Toddler SEAM intervals, using mixed methods, including item response theory…
Descriptors: Psychometrics, Evaluation Methods, Social Development, Emotional Development
Peer reviewed
Direct link
Cui, Ying; Mousavi, Amin – International Journal of Testing, 2015
The current study applied the person-fit statistic l_z to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l_z, were removed. The…
Descriptors: Measurement, Achievement Tests, Comparative Analysis, Test Items
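For readers who want the mechanics, here is a minimal sketch of the standardized person-fit statistic l_z (Drasgow, Levine, & Williams, 1985) under a 2PL model, with invented item parameters and a made-up response pattern:

```python
# Hedged sketch: the l_z person-fit statistic for dichotomous 2PL items.
# Item parameters and the response vector are invented for illustration.
import numpy as np

def lz_statistic(x, theta, a, b):
    """Standardized log-likelihood of response pattern x at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # 2PL success probabilities
    q = 1.0 - p
    l0 = np.sum(x * np.log(p) + (1 - x) * np.log(q))  # observed log-likelihood
    e_l0 = np.sum(p * np.log(p) + q * np.log(q))      # its expectation
    v_l0 = np.sum(p * q * np.log(p / q) ** 2)         # its variance
    return (l0 - e_l0) / np.sqrt(v_l0)

a = np.array([1.0, 1.3, 0.7, 1.1, 0.9])
b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])
x = np.array([1, 1, 0, 1, 0])   # one examinee's scored responses
print(lz_statistic(x, theta=0.4, a=a, b=b))  # large negative values flag misfit
```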
Peer reviewed
Direct link
Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur – CBE - Life Sciences Education, 2016
We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being…
Descriptors: Foreign Countries, Measures (Individuals), Test Construction, Statistics
Peer reviewed
Direct link
Reckase, Mark D.; Xu, Jing-Ru – Educational and Psychological Measurement, 2015
How to compute and report subscores for a test that was originally designed for reporting scores on a unidimensional scale has been a topic of interest in recent years. In the research reported here, we describe an application of multidimensional item response theory to identify a subscore structure in a test designed for reporting results using a…
Descriptors: English, Language Skills, English Language Learners, Scores
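For context, one common compensatory MIRT item response function (the article's exact model is not specified in this snippet) is

$$
P(x_{ij} = 1 \mid \boldsymbol{\theta}_j) = \frac{1}{1 + \exp\!\left[-\left(\mathbf{a}_i^{\top}\boldsymbol{\theta}_j + d_i\right)\right]},
$$

where the pattern of nonzero loadings in $\mathbf{a}_i$ determines which dimensions, and hence which subscores, each item informs.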
Peer reviewed
Direct link
Gattamorta, Karina A.; Penfield, Randall D.; Myers, Nicholas D. – International Journal of Testing, 2012
Measurement invariance is a common consideration in the evaluation of the validity and fairness of test scores when the tested population contains distinct groups of examinees, such as examinees receiving different forms of a translated test. Measurement invariance in polytomous items has traditionally been evaluated at the item level,…
Descriptors: Foreign Countries, Psychometrics, Test Bias, Test Items