Brennan, Robert L.; Kane, Michael T. – Journal of Educational Measurement, 1977
An index for the dependability of mastery tests is described. Assumptions necessary for the index and the mathematical development of the index are provided. (Author/JKS)
Descriptors: Criterion Referenced Tests, Mastery Tests, Mathematical Models, Test Reliability

Brennan, Robert L. – 1981
This handbook treats a restricted set of statistical procedures for addressing some of the most prevalent technical issues that arise in domain-referenced testing. The procedures discussed here were chosen because they do not necessitate extensive computations. The five major sections of the paper cover: (1) item analysis procedures for using data…
Descriptors: Classification, Criterion Referenced Tests, Cutting Scores, Group Testing

Brennan, Robert L. – 1974
An attempt is made to explore the use of subjective probabilities in the analysis of item data, especially criterion-referenced item data. Two assumptions are implicit: (1) one wants to obtain a maximum amount of information with respect to an item using a minimum number of subjects; and (2) once the item is validated, it may well be administered…
Descriptors: Confidence Testing, Criterion Referenced Tests, Guessing (Tests), Item Analysis

Brennan, Robert L.; Kane, Michael T. – Psychometrika, 1977
Using the assumption of randomly parallel tests and concepts from generalizability theory, three signal/noise ratios for domain-referenced tests are developed, discussed, and compared. The three ratios have the same noise but different signals depending upon the kind of decision to be made as a result of measurement. (Author/JKS)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Error of Measurement, Mathematical Models

Brennan, Robert L. – Educational and Psychological Measurement, 1972
The ideal item in the criterion-referenced testing situation is the item with a nonsignificant discrimination index and a high difficulty level; items that discriminate negatively are clearly unacceptable; and items that discriminate positively usually indicate a need for revision. (Author)
Descriptors: Comparative Analysis, Criterion Referenced Tests, Cutting Scores, Discriminant Analysis

Kane, Michael T.; Brennan, Robert L. – 1977
A large number of seemingly diverse coefficients have been proposed as indices of dependability, or reliability, for domain-referenced and/or mastery tests. In this paper, it is shown that most of these indices are special cases of two generalized indices of agreement: one that is corrected for chance, and one that is not. The special cases of…
Descriptors: Bayesian Statistics, Correlation, Criterion Referenced Tests, Cutting Scores

Brennan, Robert L. – 1979
Using the basic principles of generalizability theory, a psychometric model for domain-referenced interpretations is proposed, discussed, and illustrated. This approach, assuming an analysis of variance or linear model, is applicable to numerous data collection designs, including the traditional persons-crossed-with-items design, which is treated…
Descriptors: Analysis of Variance, Cost Effectiveness, Criterion Referenced Tests, Cutting Scores

Brennan, Robert L. – 1974
The first four chapters of this report primarily provide an extensive, critical review of the literature with regard to selected aspects of the criterion-referenced and mastery testing fields. Major topics treated include: (a) definitions, distinctions, and background, (b) the relevance of classical test theory, (c) validity and procedures for…
Descriptors: Computer Programs, Confidence Testing, Criterion Referenced Tests, Error of Measurement
Brennan, Robert L.; Stolurow, Lawrence M. – 1971
A replicable process for improving instruction through the consistent use of student data collected before, during, and after instruction is proposed. A rational analysis of different types of error rates (theoretical, base, posttest, instructional) and discrimination indices (base, posttest) leads to a set of rules for identifying test items and…
Descriptors: Computer Assisted Instruction, Criterion Referenced Tests, Decision Making, Discriminant Analysis