Author: Huynh, Huynh (17); Saunders, Joseph C. (3); Mandeville, Garrett K. (1); Phillips, Gary W. (1); Rose, Janet S. (1)
Publication Type: Reports - Research (13); Journal Articles (8); Speeches/Meeting Papers (3); Collected Works - General (1); Guides - Non-Classroom (1); Numerical/Quantitative Data (1); Reports - Evaluative (1)
Audience: Researchers (2)
Location: South Carolina (1)
Assessments and Surveys: Comprehensive Tests of Basic… (2)
Rose, Janet S.; Huynh, Huynh – 1984
As part of a new teacher evaluation program initiated by the local school board, the Charleston County School District (South Carolina) adopted the Assessments of Performance in Teaching (APT) as a major evaluation tool to assess the teaching performance of annual contract teachers. Since evaluation procedures can ultimately lead to teacher…
Descriptors: Classroom Observation Techniques, Elementary Secondary Education, Evaluation Methods, Interrater Reliability
Huynh, Huynh – 1977
The kappamax reliability index of domain-referenced tests is defined as the upper bound of kappa when all possible cutoff scores are considered. Computational procedures for kappamax are described, as well as its approximation for long tests, based on Kuder-Richardson formula 21. The sampling error of kappamax, and the effects of test length and…
Descriptors: Criterion Referenced Tests, Mathematical Models, Statistical Analysis, Test Reliability
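For orientation, a brief sketch in our own notation (not necessarily the paper's): writing p(c) for the proportion of consistent mastery/nonmastery decisions at cutoff score c and p_c(c) for the agreement expected by chance, the kappa index and its upper bound over cutoffs are

    \kappa(c) = \frac{p(c) - p_c(c)}{1 - p_c(c)}, \qquad \kappa_{\max} = \max_{c}\, \kappa(c),

and the Kuder-Richardson formula 21 underlying the long-test approximation is

    \mathrm{KR}_{21} = \frac{k}{k-1}\left(1 - \frac{\bar{X}\,(k - \bar{X})}{k\, s_X^{2}}\right)

for a k-item test with score mean \bar{X} and variance s_X^{2}.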

Huynh, Huynh – Journal of Educational Statistics, 1982
Two indices for assessing the efficiency of decisions in mastery testing are proposed. The indices are generalizations of the raw agreement index and the kappa index. Empirical examples of these indices are given. (Author/JKS)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mastery Tests, Test Reliability
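As a rough baseline for the indices being generalized (an illustrative sketch, not the article's proposed efficiency indices), the raw agreement index and the chance-corrected kappa for mastery/nonmastery decisions on two parallel administrations can be computed from a 2x2 joint proportion table:

```python
# Sketch: raw agreement p and chance-corrected kappa from a 2x2 table of
# mastery/nonmastery decisions on two parallel administrations.
# Illustrative only; the article generalizes these indices.

def agreement_indices(joint):
    """joint[i][j] = proportion classified i on form 1 and j on form 2
    (0 = nonmaster, 1 = master); the four proportions sum to 1."""
    p = joint[0][0] + joint[1][1]                    # raw agreement
    row = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    col = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    p_chance = row[0] * col[0] + row[1] * col[1]     # agreement expected by chance
    kappa = (p - p_chance) / (1.0 - p_chance)
    return p, kappa

# Hypothetical example: 70% consistent masters, 15% consistent nonmasters.
print(agreement_indices([[0.15, 0.08], [0.07, 0.70]]))
```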

Huynh, Huynh – Journal of Educational Statistics, 1981
Simulated data based on five test score distributions indicate that a slight modification of the asymptotic normal theory for the estimation of the p and kappa indices in mastery testing will provide results which are in close agreement with those based on small samples from the beta-binomial distribution. (Author/BW)
Descriptors: Error of Measurement, Mastery Tests, Mathematical Models, Test Reliability

Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
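A minimal numerical sketch of the beta-binomial machinery behind this single-administration procedure, assuming the Beta prior parameters alpha and beta are already in hand (in Huynh's approach they are estimated from the observed score mean and KR-21; that estimation step is not shown here): the joint distribution of scores on two hypothetical parallel administrations is obtained by integrating two binomial likelihoods against the Beta prior, and p and kappa at a cutoff follow from it.

```python
# Sketch: p and kappa at a cutoff under a beta-binomial true-score model,
# using a single administration. alpha (a) and beta (b) are assumed given.
from math import lgamma, exp, comb

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def joint_prob(x, y, n, a, b):
    """P(X = x, Y = y) for scores on two parallel n-item administrations."""
    return (comb(n, x) * comb(n, y)
            * exp(log_beta(a + x + y, b + 2 * n - x - y) - log_beta(a, b)))

def p_and_kappa(n, cutoff, a, b):
    marg = [comb(n, x) * exp(log_beta(a + x, b + n - x) - log_beta(a, b))
            for x in range(n + 1)]                    # marginal score distribution
    pass_rate = sum(marg[cutoff:])                    # P(score >= cutoff)
    p = sum(joint_prob(x, y, n, a, b)
            for x in range(n + 1) for y in range(n + 1)
            if (x >= cutoff) == (y >= cutoff))        # consistent decisions
    p_chance = pass_rate ** 2 + (1 - pass_rate) ** 2  # chance agreement
    return p, (p - p_chance) / (1 - p_chance)

print(p_and_kappa(n=20, cutoff=15, a=8.0, b=3.0))     # hypothetical values
```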

Huynh, Huynh – Psychometrika, 1978
The use of Cohen's kappa index as a measure of the reliability of multiple classifications is developed. Special cases of the index as well as the effects of test length on the index are also explored. (JKS)
Descriptors: Career Development, Classification, Mastery Tests, Test Length
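With more than two classification categories the same chance-corrected form applies; writing p_{ij} for the joint proportion assigned to category i on one classification and j on the other (our notation, given only for orientation rather than as the paper's development), the index is

    \kappa = \frac{\sum_i p_{ii} - \sum_i p_{i\cdot}\, p_{\cdot i}}{1 - \sum_i p_{i\cdot}\, p_{\cdot i}}.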

Huynh, Huynh – Journal of Educational Statistics, 1979
In mastery testing, the raw agreement index and the kappa index may be estimated via one test administration when the test scores follow beta-binomial distributions. This paper reports formulae, tables, and a computer program which facilitate the computation of the standard errors of the estimates. (Author/CTM)
Descriptors: Computer Programs, Cutting Scores, Decision Making, Mastery Tests

Huynh, Huynh – Psychometrika, 1980
A procedure for estimating the rates of false positive and false negative classification in a mastery testing situation is described. Formulas and tables are provided for computing the standard errors. (Author/JKS)
Descriptors: Cutting Scores, Error of Measurement, Mastery Tests, Screening Tests
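For orientation (our notation, not necessarily the paper's), with true ability \theta, mastery standard \theta_0, and observed cutoff score c, the two rates being estimated are

    P(\text{false positive}) = P(X \ge c,\ \theta < \theta_0), \qquad
    P(\text{false negative}) = P(X < c,\ \theta \ge \theta_0),

evaluated under the underlying true-score model (beta-binomial in this line of work); the paper's contribution is the estimation procedure and the standard errors of these estimates.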
Huynh, Huynh – 1977
Three techniques for estimating Kuder-Richardson reliability (KR20) coefficients for incomplete data are contrasted. The methods are: (1) Henderson's Method 1 (analysis of variance, or ANOVA); (2) Henderson's Method 3 (FITCO); and (3) Koch's method of symmetric sums (SYSUM). A Monte Carlo simulation was used to assess the precision of the three…
Descriptors: Analysis of Variance, Comparative Analysis, Mathematical Models, Monte Carlo Methods
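For reference, the complete-data form of the coefficient being estimated is

    \mathrm{KR}_{20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i\,(1 - p_i)}{s_X^{2}}\right),

where p_i is the proportion of examinees answering item i correctly and s_X^{2} is the total-score variance; the three methods compared differ in how they recover the required quantities when some item responses are missing.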

Huynh, Huynh – Psychometrika, 1980
A nonrandomized minimax solution is presented for passing scores on mastery tests using the binomial error model. The computation does not require prior knowledge regarding an individual examinee or group test data for a population of examinees. A scheme which allows for correction for guessing is also described. (Author/JKS)
Descriptors: Academic Standards, Classification, Criterion Referenced Tests, Cutting Scores
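The paper's solution is closed-form; purely to illustrate the minimax idea under the binomial error model, the sketch below finds a cutoff numerically by minimizing the worst-case misclassification probability over the true ability, with a zero-one loss and an assumed mastery standard pi0 (both choices of this sketch, not details taken from the article):

```python
# Illustration: a minimax passing score under the binomial error model.
# Zero-one loss and the mastery standard pi0 are assumptions of this sketch,
# not necessarily the structure used in the article.
from math import comb

def prob_at_least(c, n, theta):
    """P(X >= c) for X ~ Binomial(n, theta)."""
    return sum(comb(n, x) * theta**x * (1 - theta)**(n - x)
               for x in range(c, n + 1))

def minimax_cutoff(n, pi0, grid=200):
    thetas = [t / grid for t in range(grid + 1)]
    best_c, best_risk = None, float("inf")
    for c in range(n + 2):            # candidate cutoffs 0..n+1 (n+1 fails everyone)
        # Misclassification risk at true ability t: pass a nonmaster (t < pi0)
        # or fail a master (t >= pi0); take the worst case over the ability grid.
        worst = max(prob_at_least(c, n, t) if t < pi0
                    else 1 - prob_at_least(c, n, t)
                    for t in thetas)
        if worst < best_risk:         # strict '<' keeps the smallest such cutoff
            best_c, best_risk = c, worst
    return best_c, best_risk

print(minimax_cutoff(n=10, pi0=0.75))  # hypothetical 10-item test
```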

Huynh, Huynh – 1979
A general framework for making mastery/nonmastery decisions based on multivariate test data is described in this study. Overall, mastery is granted (or denied) if the posterior expected loss associated with such action is smaller than the one incurred by the denial (or grant) of mastery. An explicit form for the cutting contour which separates…
Descriptors: Bayesian Statistics, Cutting Scores, Error of Measurement, Mastery Tests
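In the simplest threshold-loss special case (loss a for granting mastery to a nonmaster, loss b for denying it to a master; the paper's framework covers more general losses and multivariate data x), the rule reduces to

    \text{grant mastery} \iff a\, P(\theta < \theta_0 \mid x) \le b\, P(\theta \ge \theta_0 \mid x)
    \iff P(\theta \ge \theta_0 \mid x) \ge \frac{a}{a + b},

and the cutting contour referred to in the abstract is the set of score vectors x at which the two posterior expected losses are equal.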

Huynh, Huynh; Mandeville, Garrett K. – 1979
Assuming that the density p of the true ability theta in the binomial test score model is continuous in the closed interval [0, 1], a Bernstein polynomial can be used to uniformly approximate p. Then via quadratic programming techniques, least-squares estimates may be obtained for the coefficients defining the polynomial. The approximation, in turn…
Descriptors: Cutting Scores, Error of Measurement, Least Squares Statistics, Mastery Tests
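A minimal sketch of the fitting step, assuming a target density evaluated on a grid of theta values is already available (the paper works from the observed binomial score distribution and imposes its constraints by quadratic programming; nonnegative least squares stands in for that step here):

```python
# Sketch: least-squares fit of Bernstein polynomial coefficients to a target
# density on [0, 1], with nonnegativity imposed via scipy's NNLS routine
# (a stand-in for the quadratic programming step described in the paper).
import numpy as np
from math import comb
from scipy.optimize import nnls

def bernstein_basis(m, thetas):
    """Design matrix with columns C(m, j) * theta^j * (1 - theta)^(m - j)."""
    return np.column_stack([comb(m, j) * thetas**j * (1 - thetas)**(m - j)
                            for j in range(m + 1)])

m = 8                                   # polynomial degree (an assumption here)
thetas = np.linspace(0.0, 1.0, 101)
target = 12 * thetas**2 * (1 - thetas)  # stand-in target density, Beta(3, 2)

A = bernstein_basis(m, thetas)
coef, _ = nnls(A, target)               # nonnegative least-squares coefficients
approx = A @ coef                       # Bernstein approximation to the density
print(np.max(np.abs(approx - target)))  # uniform approximation error on the grid
```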

Huynh, Huynh – Journal of Educational Statistics, 1986
Under the assumptions of classical measurement theory and the condition of normality, a formula is derived for the reliability of composite scores. The formula represents an extension of the Spearman-Brown formula to the case of truncated data. (Author/JAZ)
Descriptors: Computer Simulation, Error of Measurement, Expectancy Tables, Scoring Formulas
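For reference, the classical prophecy formula being extended, giving the reliability of a composite of k parallel parts each with reliability \rho, is

    \rho_k = \frac{k\,\rho}{1 + (k - 1)\,\rho};

the article derives the analogous expression when the composite scores are truncated (restricted in range), under the normality condition stated above.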

Huynh, Huynh; Saunders, Joseph C. – Journal of Educational Measurement, 1980
Single administration (beta-binomial) estimates for the raw agreement index p and the corrected-for-chance kappa index in mastery testing are compared with those based on two test administrations in terms of estimation bias and sampling variability. Bias is about 2.5 percent for p and 10 percent for kappa. (Author/RL)
Descriptors: Comparative Analysis, Error of Measurement, Mastery Tests, Mathematical Models
Saunders, Joseph C.; Huynh, Huynh – 1980
In most reliability studies, the precision of a reliability estimate varies inversely with the number of examinees (sample size). Thus, to achieve a given level of accuracy, some minimum sample size is required. An approximation for this minimum size may be made if some reasonable assumptions regarding the mean and standard deviation of the test…
Descriptors: Cutting Scores, Difficulty Level, Error of Measurement, Mastery Tests
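As a generic illustration of the inverse relation noted above (not the paper's particular approximation): if the standard error of the reliability estimate behaves roughly like \sigma/\sqrt{n}, then keeping the estimate within \pm d of its target at a confidence level with normal deviate z requires approximately

    n \gtrsim \left(\frac{z\,\sigma}{d}\right)^{2}

examinees; the paper's approximation supplies the appropriate ingredients for single-administration mastery-test indices, given assumed values of the test mean and standard deviation.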