Showing 46 to 60 of 378 results
Marshall, J. Laird; Haertel, Edward H. – 1975
For classical, norm-referenced test reliability, Cronbach's alpha has been shown to be equal to the mean of all possible split-half Pearson product-moment correlation coefficients, adjusted by the Spearman-Brown prophecy formula. For criterion-referenced test reliability, in an analogous vein, this paper provides the rationale behind, the analysis…
Descriptors: Criterion Referenced Tests, Statistical Analysis, Test Reliability
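The split-half relationship this abstract describes can be sketched directly. The examinee scores and the odd/even half-test split below are hypothetical illustrations, not data from the paper:

```python
# Sketch: split-half reliability with the Spearman-Brown adjustment.
# Scores and the half-test split are illustrative assumptions.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_brown(half_r):
    """Step a half-test correlation up to full-test length."""
    return 2 * half_r / (1 + half_r)

# Five examinees' scores on the odd and even halves of a test.
odd_half = [3, 5, 2, 4, 5]
even_half = [2, 5, 3, 4, 4]

r_half = pearson_r(odd_half, even_half)
print(round(spearman_brown(r_half), 3))  # → 0.872
```

Averaging such adjusted coefficients over every possible split is what the abstract identifies with Cronbach's alpha in the norm-referenced case.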
Robertson, Gary J. – 1981
Some fundamental concepts of criterion referenced test (CRT) reliability are highlighted. Emphasis is given to the procedures for determining reliability of scores for individual pupils because this is an area requiring increased awareness by classroom teachers and practitioners. Reliability issues encountered in the evaluation of instructional…
Descriptors: Criterion Referenced Tests, Reading Tests, Scores, Test Reliability
Peer reviewed
Swaminathan, Hariharan; And Others – Journal of Educational Measurement, 1974
It is proposed that the reliability of criterion-referenced test scores be defined in terms of the consistency of the decision-making process across repeated administrations of the test. (Author/RC)
Descriptors: Criterion Referenced Tests, Decision Making, Statistical Analysis, Test Reliability
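A minimal sketch of that decision-consistency definition, assuming hypothetical mastery classifications (1 = master, 0 = nonmaster) from two administrations:

```python
# Sketch: decision-consistency reliability across two administrations.
# The classifications below are hypothetical.

def agreement_and_kappa(first, second):
    """Raw agreement p0 and Cohen's kappa for two mastery classifications."""
    n = len(first)
    p0 = sum(a == b for a, b in zip(first, second)) / n
    # Chance agreement from the marginal mastery rates.
    p1, p2 = sum(first) / n, sum(second) / n
    pc = p1 * p2 + (1 - p1) * (1 - p2)
    return p0, (p0 - pc) / (1 - pc)

first = [1, 1, 0, 1, 0, 0, 1, 1]   # decisions, administration 1
second = [1, 1, 0, 0, 0, 1, 1, 1]  # decisions, administration 2

p0, kappa = agreement_and_kappa(first, second)
print(p0, round(kappa, 3))  # → 0.75 0.467
```

Kappa corrects the raw agreement for the agreement expected by chance alone, which is why later entries in this list treat the two as companion indices.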
Huynh, Huynh – 1977
The kappamax reliability index of domain-referenced tests is defined as the upper bound of kappa when all possible cutoff scores are considered. Computational procedures for kappamax are described, as well as its approximation for long tests, based on Kuder-Richardson formula 21. The sampling error of kappamax, and the effects of test length and…
Descriptors: Criterion Referenced Tests, Mathematical Models, Statistical Analysis, Test Reliability
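The kappamax definition itself is straightforward to sketch: compute kappa for mastery decisions at every possible cutoff and take the maximum. The two-administration scores on a 10-item test below are hypothetical; Huynh's contribution is estimating this bound from a single administration, with a KR-21-based approximation for long tests:

```python
# Sketch of the kappamax definition: Cohen's kappa for mastery
# decisions, maximized over every possible cutoff score.
# The two-administration scores are hypothetical.

def kappa_at_cutoff(first, second, cutoff):
    """Cohen's kappa for master/nonmaster decisions at a given cutoff."""
    a = [x >= cutoff for x in first]
    b = [x >= cutoff for x in second]
    n = len(a)
    p0 = sum(x == y for x, y in zip(a, b)) / n
    p1, p2 = sum(a) / n, sum(b) / n
    pc = p1 * p2 + (1 - p1) * (1 - p2)   # chance agreement
    if pc == 1:
        return 0.0  # degenerate cutoff: everyone classified alike
    return (p0 - pc) / (1 - pc)

first = [1, 5, 5, 9, 9, 2, 6, 10]    # administration 1
second = [3, 3, 7, 7, 10, 2, 6, 10]  # administration 2

kappamax = max(kappa_at_cutoff(first, second, c) for c in range(1, 11))
print(kappamax)  # → 0.75
```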
Peer reviewed
Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability
Peer reviewed
Huynh, Huynh – Journal of Educational Statistics, 1982
Two indices for assessing the efficiency of decisions in mastery testing are proposed. The indices are generalizations of the raw agreement index and the kappa index. Empirical examples of these indices are given. (Author/JKS)
Descriptors: Criterion Referenced Tests, Cutting Scores, Mastery Tests, Test Reliability
Coscarelli, William; Shrock, Sharon – Performance Improvement Quarterly, 2002
Discusses problems in using traditional measures of reliability for criterion-referenced tests (CRTs) and describes two approaches to reliability for CRTs: estimates sensitive to all sources of error; and estimates of consistency in test outcome. Compares the two approaches and proposes recommendations for interpretation and use. (Author/LRW)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Measurement Techniques, Test Reliability
Strasler, Gregg M.; Raeth, Peter G. – 1977
The study investigated the feasibility of adapting the coefficient kappa, introduced by Cohen (1960) and elaborated by Swaminathan, Hambleton, and Algina (1974), as an internal consistency estimate for criterion-referenced tests in single test administrations. The authors proposed the use of kappa as an internal consistency estimate by logically dividing…
Descriptors: Computer Programs, Criterion Referenced Tests, Multiple Choice Tests, Test Reliability
Noe, Michael J.; Algina, James – 1977
Single-administration procedures for estimating the coefficient of agreement, a reliability index for criterion referenced tests, were recently developed by Subkoviak. The procedures require a distributional assumption for errors of measurement and an estimate of each examinee's true score. A computer simulation of tests composed of items that…
Descriptors: Computer Programs, Criterion Referenced Tests, Simulation, Test Reliability
Peer reviewed
Huynh, Huynh – Journal of Educational Measurement, 1976
Within the beta-binomial Bayesian framework, procedures are described for the evaluation of the kappa index of reliability on the basis of one administration of a domain-referenced test. Major factors affecting this index include cutoff score, test score variability and test length. Empirical data which substantiate some theoretical trends deduced…
Descriptors: Criterion Referenced Tests, Decision Making, Mathematical Models, Probability
Peer reviewed
Subkoviak, Michael J. – Journal of Educational Measurement, 1976
A number of different reliability coefficients have recently been proposed for tests used to differentiate between groups such as masters and nonmasters. One promising index is the proportion of students in a class that are consistently assigned to the same mastery group across two testings. The present paper proposes a single test administration…
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Probability
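The flavor of such a single-administration estimate can be sketched under a simple binomial test model. The scores, test length, cutoff, and the crude true-score estimate below are all assumptions for illustration, not Subkoviak's exact procedure:

```python
# Sketch: estimate decision consistency from ONE administration by
# modeling each examinee's score as Binomial(n_items, p_i), taking
# P_i = P(score >= cutoff), and averaging P_i^2 + (1 - P_i)^2, the
# probability of the same decision on two parallel testings.
from math import comb

def binomial_tail(n, p, cutoff):
    """P(X >= cutoff) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(cutoff, n + 1))

def consistency(scores, n_items, cutoff):
    """Mean probability of a consistent mastery decision across examinees."""
    total = 0.0
    for x in scores:
        p_hat = x / n_items  # crude true-score estimate (an assumption here)
        p_master = binomial_tail(n_items, p_hat, cutoff)
        total += p_master**2 + (1 - p_master)**2
    return total / len(scores)

scores = [4, 9, 7, 3, 8, 6, 10, 5]  # hypothetical 10-item test scores
print(round(consistency(scores, 10, 7), 3))
```

Examinees whose estimated true scores sit far from the cutoff contribute values near 1, while those near the cutoff pull the index down toward its 0.5 floor.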
Peer reviewed
Lovett, Hubert T. – Educational and Psychological Measurement, 1977
The analysis of variance model for estimating reliability in norm referenced tests is extended to criterion referenced tests. The essential modification is that the criterion or cut-off score is substituted for the population mean. An example and discussion are presented. (JKS)
Descriptors: Analysis of Variance, Criterion Referenced Tests, Cutting Scores, Test Reliability
Haladyna, Thomas M. – 1974
Classical test theory has been rejected for application to criterion-referenced (CR) tests by most psychometricians due to an expected lack of variance in scores and other difficulties. The present study was conceived to resolve the variance problem and explore the possibility that classical test theory is both appropriate and desirable for some…
Descriptors: Criterion Referenced Tests, Error of Measurement, Sampling, Test Construction
Kennedy, Beth T. – 1972
Issues related to the evaluation of instructional programs developed under the auspices of the Southwest Educational Development Laboratory are briefly discussed. The Laboratory develops criterion-referenced tests which form an integral part of each instructional program. The importance of examining the reliability and validity of these tests is…
Descriptors: Criterion Referenced Tests, Evaluation Methods, Instructional Programs, Test Reliability
Peer reviewed
Cella, David F.; And Others – Journal of Clinical Psychology, 1985
Examined relative efficacy of two short forms of Wechsler Adult Intelligence Scale-Revised (WAIS-R) with respect to accurate subtest profile scatter (N=50). Subtest scores of both split-half Satz-Mogel short form and criterion referenced Modified WAIS-R (WAIS-RM) short form were found to differ significantly from full-length WAIS-R subtest scores.…
Descriptors: Adults, Criterion Referenced Tests, Estimation (Mathematics), Intelligence Tests