
Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
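For concreteness, weighted kappa for two raters on an ordinal scale can be sketched as follows. This is a minimal NumPy implementation with quadratic disagreement weights, one common weighting choice for ordinal data; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def weighted_kappa(rater1, rater2, n_categories):
    """Weighted kappa for two raters on an ordinal scale.

    Uses quadratic disagreement weights w_ij = (i - j)**2, so larger
    ordinal disagreements are penalized more heavily.
    """
    # Observed joint proportions p_ij
    obs = np.zeros((n_categories, n_categories))
    for a, b in zip(rater1, rater2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Chance-expected proportions e_ij from the marginal distributions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Quadratic disagreement weights
    i, j = np.indices((n_categories, n_categories))
    w = (i - j) ** 2
    # kappa_w = 1 - (weighted observed disagreement) / (weighted chance disagreement)
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```

Complete agreement gives kappa = 1, agreement no better than chance gives 0, and systematic disagreement drives the value negative.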

Fleiss, Joseph L.; Cicchetti, Domenic V. – Applied Psychological Measurement, 1978
The accuracy of the large sample standard error of weighted kappa appropriate to the non-null case was studied by computer simulation for the hypothesis that two independently derived estimates of weighted kappa are equal, and for setting confidence limits around a single value of weighted kappa. (Author/CTM)
Descriptors: Correlation, Hypothesis Testing, Nonparametric Statistics, Reliability
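The analytic large-sample standard error studied in this paper is lengthy; as a simpler (and different) route to confidence limits around a single weighted kappa, one can resample subjects with a percentile bootstrap. This is a sketch of that alternative, not the paper's formula; all names are illustrative.

```python
import numpy as np

def weighted_kappa(r1, r2, k):
    """Quadratic-weighted kappa for two raters on a k-category scale."""
    obs = np.zeros((k, k))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    i, j = np.indices((k, k))
    w = (i - j) ** 2
    denom = (w * exp).sum()
    # Degenerate case (all ratings in one category): kappa is undefined
    return np.nan if denom == 0 else 1.0 - (w * obs).sum() / denom

def kappa_bootstrap_ci(r1, r2, k, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence limits for weighted kappa,
    resampling subjects (rating pairs) with replacement."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1), np.asarray(r2)
    n = len(r1)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # resample subject indices
        kap = weighted_kappa(r1[idx], r2[idx], k)
        if not np.isnan(kap):                  # drop degenerate resamples
            stats.append(kap)
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The bootstrap sidesteps the delta-method algebra at the cost of computation; for small samples its coverage, like that of the analytic interval studied here, should be checked by simulation.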

Fleiss, Joseph L.; Cuzick, Jack – Applied Psychological Measurement, 1979
A reliability study is illustrated in which subjects are judged on a dichotomous trait by different sets of judges, possibly unequal in number. A kappa-like measure of reliability is proposed, its correspondence to an intraclass correlation coefficient is pointed out, and a test for its statistical significance is presented. (Author/CTM)
Descriptors: Classification, Correlation, Individual Characteristics, Informal Assessment
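A minimal sketch of a kappa-like statistic for this design (dichotomous trait, possibly different numbers of raters per subject): observed within-subject disagreement is compared with the disagreement expected by chance from the overall positive rate. The form below follows standard texts on agreement for varying numbers of ratings; whether it matches the paper's estimator in every detail is an assumption, and the names are illustrative.

```python
import numpy as np

def kappa_varying_raters(n_raters, n_positive):
    """Kappa-like agreement measure for a dichotomous trait when subject i
    is rated by n_i judges, x_i of whom rate it positive.

    ASSUMPTION: uses the common form
        kappa = 1 - sum_i[x_i(n_i - x_i)/n_i] / [N(n_bar - 1) p_bar q_bar],
    which reduces to the usual dichotomous kappa when all n_i are equal.
    """
    n = np.asarray(n_raters, dtype=float)    # n_i: raters per subject
    x = np.asarray(n_positive, dtype=float)  # x_i: positive ratings
    N = len(n)
    p_bar = x.sum() / n.sum()                # overall proportion positive
    q_bar = 1.0 - p_bar
    n_bar = n.mean()                         # mean number of raters
    # Observed within-subject disagreement
    observed = (x * (n - x) / n).sum()
    # Chance-expected disagreement under independent ratings
    expected = N * (n_bar - 1.0) * p_bar * q_bar
    return 1.0 - observed / expected
```

The paper additionally relates this measure to an intraclass correlation coefficient and supplies a significance test, both omitted from this sketch.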