Showing 1 to 15 of 31 results
Custer, Michael; Kim, Jongpil – Online Submission, 2023
This study uses an analysis of diminishing returns to examine the relationship between sample size and item parameter estimation precision when applying Masters' Partial Credit Model to polytomous items. Item data from the standardization of the Battelle Developmental Inventory, 3rd Edition were used. Each item was scored with a…
Descriptors: Sample Size, Item Response Theory, Test Items, Computation
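As a rough illustration of the model named above (not the authors' code; the step difficulties, sample sizes, and replication scheme below are made up), this sketch simulates responses under Masters' Partial Credit Model and shows how sampling error shrinks at a decelerating rate as sample size grows:

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities for one polytomous item under Masters' Partial Credit Model."""
    # Exponent for category k is the cumulative sum of (theta - delta_j) for j = 1..k;
    # category 0 contributes an empty sum of 0.
    exponents = np.concatenate(([0.0], np.cumsum(theta - np.asarray(deltas))))
    exponents -= exponents.max()                      # numerical stability
    expo = np.exp(exponents)
    return expo / expo.sum()

rng = np.random.default_rng(0)
deltas = [-0.5, 0.3, 1.1]                             # hypothetical 4-category item

# Sampling error of the mean item score across 30 replications declines roughly
# with 1/sqrt(N): each increase in sample size buys less additional precision.
for n in (250, 1000, 4000):
    means = []
    for _ in range(30):
        thetas = rng.normal(size=n)
        scores = [rng.choice(4, p=pcm_probs(t, deltas)) for t in thetas]
        means.append(np.mean(scores))
    print(f"N={n:4d}  SE of mean score ≈ {np.std(means):.4f}")
```

Each quadrupling of N roughly halves the standard error, which is the diminishing-returns pattern the study examines for item parameter estimates.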
Plackner, Christie; Kim, Dong-In – Online Submission, 2022
The application of item response theory (IRT) is almost universal in the development, implementation, and maintenance of large-scale assessments. Therefore, establishing the fit of IRT models to data is essential, as the viability of calibration and equating implementations depends on it. In a typical test administration situation, measurement…
Descriptors: COVID-19, Pandemics, Item Response Theory, Goodness of Fit
Peer reviewed
Whitaker, Douglas; Barss, Joseph; Drew, Bailey – Online Submission, 2022
Challenges to measuring students' attitudes toward statistics remain despite decades of focused research. Measuring the expectancy-value theory (EVT) Cost construct has been especially challenging owing in part to the historical lack of research about it. To measure the EVT Cost construct better, this study asked university students to respond to…
Descriptors: Statistics Education, College Students, Student Attitudes, Likert Scales
Kim, Dong-In; Julian, Marc; Hermann, Pam – Online Submission, 2022
In test equating, one critical property is group invariance, which requires that the equating function used to convert performance on each alternate form to the reporting scale be the same across subgroups. To mitigate the impact of disrupted learning on the item parameters during the COVID-19 pandemic, a…
Descriptors: COVID-19, Pandemics, Test Format, Equated Scores
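A minimal sketch of how group invariance might be probed, assuming simple linear (mean-sigma) equating on simulated random-groups data rather than the study's actual design and data:

```python
import numpy as np

def linear_equate(x_scores, y_scores):
    """Linear (mean-sigma) equating of Form X onto the Form Y scale."""
    slope = y_scores.std() / x_scores.std()
    intercept = y_scores.mean() - slope * x_scores.mean()
    return slope, intercept

rng = np.random.default_rng(4)
# Simulated raw scores on Forms X and Y for two subgroups (illustrative only).
groups = {
    "subgroup A": (rng.normal(30, 6, 2000), rng.normal(32, 6, 2000)),
    "subgroup B": (rng.normal(26, 7, 2000), rng.normal(28, 7, 2000)),
}

# Under group invariance, the subgroup conversions should agree closely
# with one another (and with the conversion computed on the pooled data).
for name, (x, y) in groups.items():
    slope, intercept = linear_equate(x, y)
    print(f"{name}: Y ≈ {slope:.2f} * X + {intercept:.2f}")
```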
Perkins, Kyle; Frank, Eva – Online Submission, 2018
This paper presents item-analysis data to illustrate how to identify a set of internally consistent test items that discriminate between examinees who are highly proficient and those who are not proficient on the construct of interest. Suggestions for analyzing the quality of test items are offered, as well as a pedagogical approach to augment the…
Descriptors: Item Analysis, Test Items, Test Reliability, Kinetics
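A minimal sketch of the kind of item analysis described, assuming classical proportion-correct difficulty and corrected point-biserial discrimination on a toy 0/1 response matrix (not the paper's data or procedure):

```python
import numpy as np

def item_analysis(responses):
    """Classical item analysis for a 0/1 scored response matrix.

    responses : (persons x items) array of 0/1 scores.
    Returns per-item difficulty (proportion correct) and the corrected
    point-biserial discrimination (item vs. total score excluding that item).
    """
    responses = np.asarray(responses, dtype=float)
    difficulty = responses.mean(axis=0)
    total = responses.sum(axis=1)
    discrimination = np.array([
        np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
        for i in range(responses.shape[1])
    ])
    return difficulty, discrimination

# Toy data: 6 examinees, 4 items (purely illustrative).
data = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
        [0, 0, 0, 0]]
diff, disc = item_analysis(data)
print("difficulty:    ", diff.round(2))
print("discrimination:", disc.round(2))
```

Items with low or negative corrected point-biserial values fail to discriminate between more and less proficient examinees and are candidates for revision.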
Mao, Xiuzhen; Ozdemir, Burhanettin; Wang, Yating; Xiu, Tao – Online Submission, 2016
Four item selection indices with and without exposure control are evaluated and compared in multidimensional computerized adaptive testing (CAT). The four indices are D-optimality, posterior expected Kullback-Leibler information (KLP), the minimized error variance of the linear combination score with equal weight (V1), and the…
Descriptors: Comparative Analysis, Adaptive Testing, Computer Assisted Testing, Test Items
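Of the four indices, D-optimality is the simplest to sketch: choose the unadministered item whose Fisher information matrix, added to the information accumulated so far, maximizes the determinant. The code below is an illustration under an assumed multidimensional 2PL model with a made-up item bank, not the study's implementation:

```python
import numpy as np

def m2pl_prob(theta, a, d):
    """Probability of a correct response under a multidimensional 2PL item."""
    return 1.0 / (1.0 + np.exp(-(a @ theta + d)))

def d_optimal_pick(theta_hat, a_bank, d_bank, administered, test_info):
    """D-optimality selection: pick the unused item that maximises
    det(accumulated information matrix + candidate item information matrix)."""
    best_j, best_det = None, -np.inf
    for j in range(len(d_bank)):
        if j in administered:
            continue
        p = m2pl_prob(theta_hat, a_bank[j], d_bank[j])
        item_info = p * (1.0 - p) * np.outer(a_bank[j], a_bank[j])
        det = np.linalg.det(test_info + item_info)
        if det > best_det:
            best_j, best_det = j, det
    return best_j

# Tiny two-dimensional bank with made-up parameters.
rng = np.random.default_rng(1)
a_bank = rng.uniform(0.5, 2.0, size=(20, 2))      # discrimination vectors
d_bank = rng.normal(size=20)                      # intercepts
theta_hat = np.zeros(2)                           # current ability estimate
test_info = np.eye(2) * 1e-3                      # small ridge so the determinant is defined
print("next item:", d_optimal_pick(theta_hat, a_bank, d_bank, set(), test_info))
```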
Lang, W. Steve; Moore, LaSonya; Wilkerson, Judy R.; Parfitt, Christopher M.; Greene, Jackie; Kratt, Diane; Martelli, C. Dawn; LaPaglia, Kyle; Johnston, Vickie; Gilbert, Shelby; Zhang, Jason; Fields, Lynette – Online Submission, 2018
A team of researchers at two institutions revised and analyzed a battery of instruments to assess the Critical Dispositions (InTASC, 2013) required in the CAEP (2016a) accreditation standards for teacher education programs. This research presents initial findings for the revised version updating previous results from validity and reliability…
Descriptors: Measures (Individuals), Test Construction, Construct Validity, Teacher Characteristics
Soysal, Sümeyra; Arikan, Çigdem Akin; Inal, Hatice – Online Submission, 2016
This study aims to investigate the effect of methods for handling missing data on item difficulty estimates under different test lengths and sample sizes. To this end, data sets of 10, 20, and 40 items with sample sizes of 100 and 5,000 were prepared. A deletion process was applied at rates of 5%, 10%, and 20% under conditions…
Descriptors: Research Problems, Data Analysis, Item Response Theory, Test Items
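A minimal sketch of how such a comparison can be set up, using proportion-correct difficulty as a classical stand-in for the IRT difficulties the study examines, with a made-up 10% missing-completely-at-random rate:

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items, miss_rate = 5000, 20, 0.10

# Complete 0/1 data with known proportions correct (illustrative, not the study's design).
true_p = np.linspace(0.3, 0.8, n_items)
complete = (rng.random((n_persons, n_items)) < true_p).astype(float)

# Inject missingness completely at random.
data = complete.copy()
data[rng.random(data.shape) < miss_rate] = np.nan

# Treatment 1: available-case analysis (missing cells ignored).
p_available = np.nanmean(data, axis=0)
# Treatment 2: missing responses scored as incorrect.
p_zero = np.nan_to_num(data, nan=0.0).mean(axis=0)

print("mean bias, available-case :", round(float(np.mean(p_available - true_p)), 3))
print("mean bias, missing-as-wrong:", round(float(np.mean(p_zero - true_p)), 3))
```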
Custer, Michael – Online Submission, 2015
This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…
Descriptors: Sample Size, Item Response Theory, Computation, Accuracy
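A minimal sketch of the RMSD-versus-sample-size comparison, assuming simulated Rasch data and a quick PROX-style approximation in place of a full one-parameter calibration:

```python
import numpy as np

rng = np.random.default_rng(3)
true_b = np.linspace(-2, 2, 30)                   # generating item difficulties
EXPANSION = np.sqrt(1 + 1 / 2.89)                 # PROX-style correction for N(0,1) abilities

def simulate(n):
    theta = rng.normal(size=(n, 1))
    p = 1 / (1 + np.exp(-(theta - true_b)))       # Rasch (one-parameter) response probabilities
    return (rng.random((n, len(true_b))) < p).astype(float)

def rough_difficulty(x):
    """Quick difficulty estimate: PROX-expanded logit of proportion incorrect,
    a stand-in for full Rasch calibration, adequate for this illustration."""
    p = x.mean(axis=0).clip(0.01, 0.99)
    b = EXPANSION * np.log((1 - p) / p)
    return b - b.mean()                           # centre on the generating scale

for n in (100, 400, 1600, 6400):
    est = rough_difficulty(simulate(n))
    rmsd = np.sqrt(np.mean((est - true_b) ** 2))
    print(f"N={n:5d}  RMSD={rmsd:.3f}")
```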
Schoen, Robert C.; Anderson, Daniel; Riddell, Claire M.; Bauduin, Charity – Online Submission, 2018
This report provides a description of the development process, field testing, and psychometric properties of the fall 2015 grades 3-5 Elementary Mathematics Student Assessment (EMSA), a student mathematics test designed to be administered in a whole-group setting to students in grades 3, 4, and 5. The test was administered to 2,614 participating…
Descriptors: Elementary School Students, Elementary School Mathematics, Grade 3, Grade 4
Schoen, Robert C.; Anderson, Daniel; Bauduin, Charity – Online Submission, 2017
This report provides a description of the development process, field testing, and psychometric properties of a student mathematics test designed to assess the mathematics abilities of students in grades K, 1, and 2. The test was administered to 4,535 participating grade K, 1, and 2 students in 66 schools located in 9 public school districts in Florida during spring…
Descriptors: Elementary School Students, Elementary School Mathematics, Kindergarten, Grade 1
Shilna, V.; Gafoor, K. Abdul – Online Submission, 2016
Learning chemistry is difficult for many secondary school students, and many consequently struggle to score well in chemistry examinations. Researchers have identified many reasons for these difficulties and suggested numerous ways to overcome them. This paper focuses on whether test item construction plays a role in the response pattern of…
Descriptors: Cognitive Style, Short Term Memory, Multiple Choice Tests, Science Tests
Schoen, Robert C.; Anderson, Daniel; Champagne, Zachary; Bauduin, Charity – Online Submission, 2017
This report provides a description of the development process, field testing, and psychometric properties of a student mathematics test designed to assess the mathematics abilities of students in grades K, 1, and 2. The test was administered to 4,486 participating grade K, 1, and 2 students in 67 schools located in 10 public school districts in Florida during fall 2015.
Descriptors: Elementary School Students, Elementary School Mathematics, Kindergarten, Grade 1
Custer, Michael; Sharairi, Sid; Swift, David – Online Submission, 2012
This paper used the Rasch model and joint maximum likelihood estimation to compare scoring options for omitted and not-reached items. Three scoring treatments were studied. The first treated omitted and not-reached items as "ignorable/blank". The second scored omits as incorrect ("0") and left not-reached items blank…
Descriptors: Scoring, Test Items, Item Response Theory, Maximum Likelihood Statistics
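A minimal sketch of the first two scoring treatments (the third is cut off in the abstract above), assuming NumPy and illustrative integer codes for omitted and not-reached responses:

```python
import numpy as np

OMIT, NOT_REACHED = -1, -2     # illustrative codes; the study's data layout is not given

def rescore(raw, treatment):
    """Rescore a dichotomous response matrix before calibration.
    np.nan marks a response treated as ignorable/blank."""
    x = raw.astype(float)
    if treatment == "both_blank":          # treatment 1: omits and not-reached ignorable
        x[(raw == OMIT) | (raw == NOT_REACHED)] = np.nan
    elif treatment == "omit_incorrect":    # treatment 2: omits scored 0, not-reached blank
        x[raw == OMIT] = 0.0
        x[raw == NOT_REACHED] = np.nan
    else:
        raise ValueError(f"unknown treatment: {treatment}")
    return x

raw = np.array([[1, 0, OMIT, NOT_REACHED],
                [1, OMIT, 1, 0]])
for t in ("both_blank", "omit_incorrect"):
    print(t)
    print(rescore(raw, t))
```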
He, Wei; Li, Feifei; Wolfe, Edward W.; Mao, Xia – Online Submission, 2012
For tests composed solely of testlets, the local item independence assumption tends to be violated. Using empirical data from a large-scale state assessment program, this study investigated the effects of using different models on equating results under the non-equivalent groups anchor test (NEAT) design. Specifically, the…
Descriptors: Test Items, Equated Scores, Models, Item Response Theory