Showing 31 to 45 of 149 results
Peer reviewed
Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 2000
Developed a statistical test for the hypothesis of the equality of extrapolated coefficients when the original values (Cronbach's alpha) are based on the same sample of persons and, therefore, are statistically dependent. Monte Carlo studies show that the test precisely controlled Type I error, even with small numbers of part-test units or raters.…
Descriptors: Monte Carlo Methods, Reliability
Peer reviewed
Attali, Yigal – Applied Psychological Measurement, 2005
Contrary to common belief, reliability estimates of number-right multiple-choice tests are not inflated by speededness. Because examinees guess on questions when they run out of time, the responses to these questions generally show less consistency with the responses to other questions, and the reliability of the test will be decreased. The…
Descriptors: Reliability, Multiple Choice Tests
Peer reviewed
Yi, Hyun Sook; Kim, Seonghoon; Brennan, Robert L. – Applied Psychological Measurement, 2007
Large-scale testing programs involving classification decisions typically have multiple forms available and conduct equating to ensure cut-score comparability across forms. A test developer might be interested in the extent to which an examinee who happens to take a particular form would have a consistent classification decision if he or she had…
Descriptors: Classification, Reliability, Indexes, Computation
Peer reviewed
Raju, Nambury S.; Lezotte, Daniel V.; Fearing, Benjamin K.; Oshima, T. C. – Applied Psychological Measurement, 2006
This note describes a procedure for estimating the range restriction component used in correcting correlations for unreliability and range restriction when an estimate of the reliability of a predictor is not readily available for the unrestricted sample. This procedure is illustrated with a few examples. (Contains 1 table.)
Descriptors: Correlation, Reliability, Predictor Variables, Error Correction
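The kind of correction the Raju et al. abstract refers to can be illustrated with a minimal sketch. The function names are illustrative, and the range-restriction step uses the standard Thorndike Case II formula rather than the authors' specific estimation procedure for the restriction component:

```python
import math

def disattenuate(r, rxx=1.0, ryy=1.0):
    """Correct a correlation for unreliability in the predictor (rxx)
    and/or criterion (ryy) via the classical disattenuation formula."""
    return r / math.sqrt(rxx * ryy)

def correct_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction, where
    u = SD(unrestricted sample) / SD(restricted sample) for the predictor."""
    return u * r / math.sqrt(1 + (u**2 - 1) * r**2)
```

For example, a restricted correlation of 0.4 with predictor reliability 0.64 disattenuates to 0.4 / 0.8 = 0.5; the problem the note addresses is estimating the restriction component u when the unrestricted reliability is unavailable.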
Peer reviewed
Feldt, Leonard S.; Ankenmann, Robert D. – Applied Psychological Measurement, 1998
Developed a graphical method of determining adequate sample size based on the power of L. Feldt's (1969) test of the difference between two values of Cronbach's alpha coefficient. Discusses assumptions on which this approach is based. (SLD)
Descriptors: Comparative Analysis, Reliability, Sample Size
Peer reviewed
Raykov, Tenko – Applied Psychological Measurement, 2001
Studied the population discrepancy of coefficient alpha from the composite reliability coefficient for fixed congeneric measures with correlated errors and expressed it in terms of parameters of the measures. Recommends structural equation modeling for identifying cases in which the discrepancy can be large. (SLD)
Descriptors: Correlation, Reliability, Structural Equation Models
Peer reviewed
Alsawalmeh, Yousef M.; Feldt, Leonard S. – Applied Psychological Measurement, 1999
Developed an approximate statistical test for the hypothesis of equality between the Spearman-Brown extrapolations of two independent values of Cronbach's alpha reliability coefficient. Monte Carlo simulations demonstrate that the procedure effectively controls Type I error. (SLD)
Descriptors: Monte Carlo Methods, Reliability, Simulation
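The Spearman-Brown extrapolation of Cronbach's alpha tested in the two Alsawalmeh-Feldt entries can be sketched as follows; this is a textbook formulation with illustrative function names, not the authors' code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)     # sample variance per item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def spearman_brown(reliability, factor):
    """Extrapolated reliability of a test lengthened by `factor`."""
    return factor * reliability / (1 + (factor - 1) * reliability)
```

For example, a half-test alpha of 0.5 extrapolates to 2(0.5) / (1 + 0.5) ≈ 0.67 for the full-length test; the hypothesis tested in these papers is whether two such extrapolated coefficients are equal.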
Peer reviewed
van der Linden, Wim J.; Boekkooi-Timminga, Ellen – Applied Psychological Measurement, 1988
Gulliksen's matched random subtests method is a graphical method for splitting a test into parallel halves, allowing maximization of coefficient alpha as a lower bound to the classical test reliability coefficient. This problem is formulated as a zero-one programming problem solvable by existing algorithms. (TJH)
Descriptors: Algorithms, Equations (Mathematics), Programing, Test Reliability
Peer reviewed
Lindell, Michael K. – Applied Psychological Measurement, 2001
Developed an index for assessing interrater agreement with respect to a single target using a multi-item rating scale. The variance of rater mean scale scores is used as the numerator of the agreement index. Studied four variants of a disattenuated agreement index that vary in the random response term used as the denominator. (SLD)
Descriptors: Evaluation Methods, Interrater Reliability, Rating Scales
Peer reviewed
Magnusson, D.; Backteman, G. – Applied Psychological Measurement, 1979
A longitudinal study of approximately 1,000 students aged 10-16 showed high stability of intelligence and creativity. Stability coefficients for intelligence were higher than those for creativity. Results supported the construct validity of creativity. (MH)
Descriptors: Creativity, Creativity Tests, Elementary Secondary Education, Foreign Countries
Peer reviewed
Biswas, Ajoy Kumar – Applied Psychological Measurement, 2006
This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…
Descriptors: True Scores, Test Theory, Test Reliability, Scores
Peer reviewed
Fleiss, Joseph L.; Cicchetti, Domenic V. – Applied Psychological Measurement, 1978
The accuracy of the large sample standard error of weighted kappa appropriate to the non-null case was studied by computer simulation for the hypothesis that two independently derived estimates of weighted kappa are equal, and for setting confidence limits around a single value of weighted kappa. (Author/CTM)
Descriptors: Correlation, Hypothesis Testing, Nonparametric Statistics, Reliability
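Weighted kappa, whose large-sample standard error the Fleiss-Cicchetti study examines, can be computed from a square confusion matrix of counts. This is the standard formulation with linear or quadratic disagreement weights, not the authors' simulation code:

```python
import numpy as np

def weighted_kappa(confusion, weights="quadratic"):
    """Weighted kappa from a square confusion matrix of rating counts.
    Disagreement weights are |i - j| (linear) or (i - j)^2 (quadratic)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.shape[0]
    idx = np.arange(n)
    w = np.abs(idx[:, None] - idx[None, :])       # disagreement weight matrix
    if weights == "quadratic":
        w = w ** 2
    p = confusion / confusion.sum()                # observed joint proportions
    expected = np.outer(p.sum(axis=1), p.sum(axis=0))  # chance proportions
    return 1 - (w * p).sum() / (w * expected).sum()
```

Perfect agreement (all counts on the diagonal) gives kappa = 1, since the observed weighted disagreement is zero.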
Peer reviewed
Raju, Nambury S.; Brand, Paul A. – Applied Psychological Measurement, 2003
Proposed a new asymptotic formula for estimating the sampling variance of a correlation coefficient corrected for unreliability and range restriction. A Monte Carlo simulation study of the new formula results in several positive conclusions about the new approach. (SLD)
Descriptors: Correlation, Monte Carlo Methods, Reliability, Sampling
Peer reviewed
Lindell, Michael K.; Brandt, Christina J.; Whitney, David J. – Applied Psychological Measurement, 1999
Proposes a revised index of interrater agreement for multi-item ratings of a single target. This index is an inverse linear function of the ratio of the average obtained variance to the variance of the uniformly distributed random error. Discusses the importance of sample size for the index. (SLD)
Descriptors: Error of Measurement, Interrater Reliability, Sample Size
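An index of the kind described above, an inverse linear function of the ratio of the average obtained variance to the variance of uniformly distributed random error, can be sketched as follows. The function name is illustrative, and the sketch assumes a discrete rating scale with `n_options` response categories, for which the uniform-error variance is (A² − 1)/12:

```python
import numpy as np

def multi_item_agreement(ratings, n_options):
    """Agreement index for one target rated by several raters on J items:
    1 minus the ratio of the mean observed item variance to the variance
    of a discrete uniform distribution over the scale's A categories."""
    ratings = np.asarray(ratings, dtype=float)      # shape (n_raters, n_items)
    mean_obs_var = ratings.var(axis=0, ddof=1).mean()
    uniform_var = (n_options**2 - 1) / 12.0
    return 1 - mean_obs_var / uniform_var
```

Identical ratings across raters yield an index of 1; disagreement exceeding the uniform-error variance drives the index below 0, which is one reason sample size matters for interpreting it.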
Peer reviewed
Komaroff, Eugene – Applied Psychological Measurement, 1997
Used simulation to evaluate coefficient alpha under violations of two classical test theory assumptions: essential tau-equivalence and uncorrelated errors. Discusses the interactive effects of both violations with true and error scores. Provides empirical evidence for the derivation of M. Novick and C. Lewis (1993). (SLD)
Descriptors: Correlation, Reliability, Simulation, Test Theory