Huynh, Huynh – 1977
The kappamax reliability index of domain-referenced tests is defined as the upper bound of kappa when all possible cutoff scores are considered. Computational procedures for kappamax are described, along with an approximation for long tests based on Kuder-Richardson formula 21. The sampling error of kappamax, and the effects of test length and…
Descriptors: Criterion Referenced Tests, Mathematical Models, Statistical Analysis, Test Reliability
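
As a rough illustration of the kappamax definition (not the paper's own single-administration procedure), the sketch below computes Cohen's kappa for the mastery/nonmastery split at every possible cutoff of a two-administration data set and takes the maximum; the test length and simulated scores are assumptions for the example.

import numpy as np

def kappa_at_cutoff(x, y, c):
    # Cohen's kappa for pass/fail agreement of score vectors x, y at cutoff c.
    a = (x >= c).astype(int)
    b = (y >= c).astype(int)
    p_obs = np.mean(a == b)                        # raw agreement
    p_chance = (np.mean(a) * np.mean(b)
                + (1 - np.mean(a)) * (1 - np.mean(b)))
    if p_chance == 1.0:                            # degenerate cutoff: everyone on one side
        return 0.0
    return (p_obs - p_chance) / (1 - p_chance)

def kappamax(x, y, n_items):
    # Upper bound of kappa over all possible cutoff scores 1..n_items.
    return max(kappa_at_cutoff(x, y, c) for c in range(1, n_items + 1))

rng = np.random.default_rng(0)
theta = rng.beta(5, 2, size=200)                   # latent abilities
x = rng.binomial(10, theta)                        # administration 1
y = rng.binomial(10, theta)                        # administration 2
print(kappamax(x, y, 10))
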
Huynh, Huynh – 1977
Three techniques for estimating Kuder-Richardson reliability (KR20) coefficients for incomplete data are contrasted. The methods are: (1) Henderson's Method 1 (analysis of variance, or ANOVA); (2) Henderson's Method 3 (FITCO); and (3) Koch's method of symmetric sums (SYSUM). A Monte Carlo simulation was used to assess the precision of the three…
Descriptors: Analysis of Variance, Comparative Analysis, Mathematical Models, Monte Carlo Methods
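
For reference, a minimal sketch of the KR20 coefficient itself on a complete (no missing data) item-response matrix is given below; the incomplete-data estimators the paper contrasts (ANOVA, FITCO, SYSUM) build on this quantity but are not reproduced here, and the simulated responses are an assumption.

import numpy as np

def kr20(responses):
    # KR20 for an (examinees x items) matrix of 0/1 item scores.
    k = responses.shape[1]
    p = responses.mean(axis=0)                     # item difficulties
    item_var = np.sum(p * (1 - p))                 # sum of p_j * q_j
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
ability = rng.normal(size=(100, 1))
difficulty = rng.normal(size=(1, 20))
prob = 1 / (1 + np.exp(-(ability - difficulty)))   # Rasch-type response model
data = (rng.random((100, 20)) < prob).astype(int)
print(kr20(data))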

Huynh, Huynh; Mandeville, Garrett K. – 1979
Assuming that the density p of the true ability theta in the binomial test score model is continuous on the closed interval [0, 1], a Bernstein polynomial can be used to approximate p uniformly. Then, via quadratic programming techniques, least-squares estimates may be obtained for the coefficients defining the polynomial. The approximation, in turn…
Descriptors: Cutting Scores, Error of Measurement, Least Squares Statistics, Mastery Tests
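
A minimal sketch of the approximation step follows. The Bernstein basis of degree n is exactly the binomial probability mass function viewed as a function of theta, so fitting a target density on [0, 1] reduces to constrained least squares; scipy's non-negative least squares stands in here for the paper's quadratic programming, and the Beta(5, 2) target density is an assumption.

import numpy as np
from scipy.optimize import nnls
from scipy.stats import beta, binom

degree = 10
t = np.linspace(0.01, 0.99, 99)                    # evaluation grid on (0, 1)
# Column k holds the Bernstein basis C(n,k) t^k (1-t)^(n-k),
# i.e. the binomial pmf at k with success probability t.
A = np.column_stack([binom.pmf(k, degree, t) for k in range(degree + 1)])
target = beta.pdf(t, 5, 2)                         # density to approximate
coef, _ = nnls(A, target)                          # least squares with coef >= 0
print(np.max(np.abs(A @ coef - target)))           # uniform error on the grid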

Huynh, Huynh; Saunders, Joseph C. – Journal of Educational Measurement, 1980
Single-administration (beta-binomial) estimates for the raw agreement index p and the corrected-for-chance kappa index in mastery testing are compared with those based on two test administrations in terms of estimation bias and sampling variability. Bias is about 2.5 percent for p and 10 percent for kappa. (Author/RL)
Descriptors: Comparative Analysis, Error of Measurement, Mastery Tests, Mathematical Models
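
The single-administration idea can be sketched as follows: fit a beta-binomial model to one set of observed scores by the KR21 moment method, then derive the raw agreement index p for a hypothetical parallel retest from the fitted model. The test length, cutoff, and simulated data below are assumptions for illustration, not the paper's design.

import numpy as np
from scipy.stats import beta as beta_dist, binom
from scipy.integrate import quad

n, cutoff = 10, 6
rng = np.random.default_rng(2)
theta = rng.beta(5, 3, size=300)                   # true abilities
x = rng.binomial(n, theta)                         # the single administration

# Moment (KR21) fit of the beta-binomial parameters a, b.
mu, var = x.mean(), x.var(ddof=1)
r21 = (n / (n - 1)) * (1 - mu * (n - mu) / (n * var))
ab = n * (1 - r21) / r21                           # a + b from reliability
a, b = mu * ab / n, ab - mu * ab / n

# Model-based raw agreement p: integrate the probability that two
# conditionally independent administrations land on the same side of
# the cutoff over the fitted ability distribution.
def integrand(t):
    pass_prob = binom.sf(cutoff - 1, n, t)         # P(X >= cutoff | theta = t)
    return (pass_prob**2 + (1 - pass_prob)**2) * beta_dist.pdf(t, a, b)

p_single, _ = quad(integrand, 0, 1)
print(p_single)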