Showing 16 to 30 of 149 results
Peer reviewed
Wise, Steven L.; DeMars, Christine E. – Applied Psychological Measurement, 2009
Attali (2005) recently demonstrated that Cronbach's coefficient [alpha] estimate of reliability for number-right multiple-choice tests will tend to be deflated by speededness, rather than inflated as is commonly believed and taught. Although the methods, findings, and conclusions of Attali (2005) are correct, his article may inadvertently invite a…
Descriptors: Guessing (Tests), Multiple Choice Tests, Test Reliability, Computation
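The coefficient at issue here can be computed directly from an item-score matrix. Below is a minimal sketch of Cronbach's alpha for number-right scored items (the function name and data layout are illustrative, not from the article):

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a 0/1 score matrix
    (rows = examinees, columns = items)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # total-score variance
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

Under speededness, unreached items scored as wrong enter the item variances and covariances, which is how alpha computed this way can be distorted.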
Peer reviewed
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony; Mallinckrodt, Craig H. – Applied Psychological Measurement, 2010
Longitudinal studies are permeating clinical trials in psychiatry. Therefore, it is of utmost importance to study the psychometric properties of rating scales, frequently used in these trials, within a longitudinal framework. However, intrasubject serial correlation and memory effects are problematic issues often encountered in longitudinal data.…
Descriptors: Psychiatry, Rating Scales, Memory, Psychometrics
Peer reviewed
Waller, Niels G. – Applied Psychological Measurement, 2008
Reliability is a property of test scores from individuals who have been sampled from a well-defined population. Reliability indices, such as coefficient [alpha] and related formulas for internal consistency reliability (KR-20, Hoyt's reliability), yield lower bound reliability estimates when (a) subjects have been sampled from a single population and when…
Descriptors: Test Items, Reliability, Scores, Psychometrics
Peer reviewed
Wang, Wen-Chung – Applied Psychological Measurement, 2008
Raju and Oshima (2005) proposed two prophecy formulas based on item response theory in order to predict the reliability of ability estimates for a test after change in its length. The first prophecy formula is equivalent to the classical Spearman-Brown prophecy formula. The second prophecy formula is misleading because of an underlying false…
Descriptors: Test Reliability, Item Response Theory, Computation, Evaluation Methods
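The classical Spearman-Brown prophecy formula referred to above is simple enough to state as code. This is a sketch of the generic classical formula, not of the Raju-Oshima IRT-based versions discussed in the article:

```python
def spearman_brown(rho, k):
    """Classical Spearman-Brown prophecy: predicted reliability after
    changing test length by factor k (k = new length / old length)."""
    return k * rho / (1 + (k - 1) * rho)
```

For example, doubling a test with reliability 0.80 predicts a reliability of about 0.89.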
Peer reviewed
Liu, Mei; Holland, Paul W. – Applied Psychological Measurement, 2008
The simplified version of the Dorans and Holland (2000) measure of population invariance, the root mean square difference (RMSD), is used to explore the degree of dependence of linking functions on the Law School Admission Test (LSAT) subpopulations defined by examinees' gender, ethnic background, geographic region, law school application status,…
Descriptors: Law Schools, Equated Scores, Geographic Regions, Geometric Concepts
Peer reviewed
Wang, Wen-Chung; Wilson, Mark – Applied Psychological Measurement, 2005
The random-effects facet model that deals with local item dependence in many-facet contexts is presented. It can be viewed as a special case of the multidimensional random coefficients multinomial logit model (MRCMLM) so that the estimation procedures for the MRCMLM can be directly applied. Simulations were conducted to examine parameter recovery…
Descriptors: Test Reliability, Item Response Theory, Interrater Reliability, Rating Scales
Peer reviewed
Cicchetti, Domenic V.; Fleiss, Joseph L. – Applied Psychological Measurement, 1977
The weighted kappa coefficient is a measure of interrater agreement when the relative seriousness of each possible disagreement can be quantified. This Monte Carlo study demonstrates the utility of the kappa coefficient for ordinal data. Sample size is also briefly discussed. (Author/JKS)
Descriptors: Mathematical Models, Rating Scales, Reliability, Sampling
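A weighted kappa of the kind described can be computed from two raters' ordinal ratings. The sketch below uses quadratic-distance disagreement weights, one common choice of seriousness weighting; the function name and layout are illustrative, not from the article:

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat):
    """Weighted kappa for two raters' ordinal ratings in 0..n_cat-1,
    with quadratic (squared category distance) disagreement weights."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):        # observed joint rating proportions
        obs[a, b] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
    i, j = np.indices((n_cat, n_cat))
    w = ((i - j) / (n_cat - 1)) ** 2                  # disagreement weights
    return 1 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement yields kappa = 1; agreement no better than chance yields kappa near 0.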
Peer reviewed
Henson, Robert; Roussos, Louis; Douglas, Jeff; He, Xuming – Applied Psychological Measurement, 2008
Cognitive diagnostic models (CDMs) model the probability of correctly answering an item as a function of an examinee's attribute mastery pattern. Because estimation of the mastery pattern involves more than a continuous measure of ability, reliability concepts introduced by classical test theory and item response theory do not apply. The cognitive…
Descriptors: Diagnostic Tests, Classification, Probability, Item Response Theory
Peer reviewed
Raju, Nambury S.; Price, Larry R.; Oshima, T. C.; Nering, Michael L. – Applied Psychological Measurement, 2007
An examinee-level (or conditional) reliability is proposed for use in both classical test theory (CTT) and item response theory (IRT). The well-known group-level reliability is shown to be the average of conditional reliabilities of examinees in a group or a population. This relationship is similar to the known relationship between the square of…
Descriptors: Item Response Theory, Error of Measurement, Reliability, Test Theory
Peer reviewed
Lee, Won-Chan – Applied Psychological Measurement, 2007
This article introduces a multinomial error model, which models an examinee's test scores obtained over repeated measurements of an assessment that consists of polytomously scored items. A compound multinomial error model is also introduced for situations in which items are stratified according to content categories and/or prespecified numbers of…
Descriptors: Simulation, Error of Measurement, Scoring, Test Items
Peer reviewed
Goldberg, Lewis R. – Applied Psychological Measurement, 1978
Three personality measures were administered twice each with an interval of four weeks between administrations, and the response consistency of these tests was analyzed. The evidence is equivocal. The confounding of consistency effects with other sources of variance remains a problem. (Author/CTM)
Descriptors: Higher Education, Personality Measures, Predictor Variables, Reliability
Peer reviewed
Wang, Tianyou – Applied Psychological Measurement, 1998
Derives equations for computing weights that maximize the reliability of a test with multiple parts using a congeneric model. Presents a direct derivation for the three-part case and a two-step derivation for the "n"-part case. Gives examples that show the computations and the usefulness of the equations. (SLD)
Descriptors: Equations (Mathematics), Reliability
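For the congeneric model, the standard maximal-reliability result is that weights proportional to each part's loading divided by its error variance maximize composite reliability. The sketch below states that textbook result; whether it matches Wang's exact derivations should be checked against the article itself (names are illustrative):

```python
import numpy as np

def maximal_reliability(loadings, error_vars):
    """Optimal weights and resulting reliability for a composite of
    congeneric parts with uncorrelated errors."""
    loadings = np.asarray(loadings, dtype=float)
    error_vars = np.asarray(error_vars, dtype=float)
    weights = loadings / error_vars             # optimal (up to scale)
    a = (loadings ** 2 / error_vars).sum()
    return weights, a / (1 + a)
```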
Peer reviewed
Mellenbergh, Gideon J. – Applied Psychological Measurement, 1999
Demonstrates that two aspects of precision, reliability and information, also apply to the simple gain score. Reliability applies to a population of examinees, and information applies to a given examinee. (SLD)
Descriptors: Achievement Gains, Reliability
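The classical formula for the reliability of a simple gain score underlies this discussion. The sketch below is the textbook expression, not a result taken from Mellenbergh's article (names illustrative):

```python
def gain_score_reliability(s1, s2, r1, r2, r12):
    """Classical reliability of the gain D = X2 - X1, given the standard
    deviations (s1, s2), reliabilities (r1, r2), and correlation (r12)
    of the pretest and posttest scores."""
    num = s1**2 * r1 + s2**2 * r2 - 2 * r12 * s1 * s2
    den = s1**2 + s2**2 - 2 * r12 * s1 * s2
    return num / den
```

With equal variances, reliabilities of 0.80, and a pre-post correlation of 0.50, the gain score's reliability drops to 0.60, illustrating the familiar unreliability of difference scores.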
Peer reviewed
Lucke, Joseph F. – Applied Psychological Measurement, 2005
The properties of internal consistency (alpha), classical reliability (rho), and congeneric reliability (omega) for a composite test with correlated item error are analytically investigated. Possible sources of correlated item error are contextual effects, item bundles, and item models that ignore additional attributes or higher-order attributes.…
Descriptors: Reliability, Statistical Analysis
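Under a congeneric model with uncorrelated errors, the omega coefficient mentioned above reduces to a simple function of the loadings and error variances. A minimal sketch under that assumption (the function name is illustrative, not from the article):

```python
import numpy as np

def coefficient_omega(loadings, error_vars):
    """Congeneric (omega) reliability from factor loadings and
    uncorrelated item error variances."""
    loadings = np.asarray(loadings, dtype=float)
    error_vars = np.asarray(error_vars, dtype=float)
    true_var = loadings.sum() ** 2      # variance due to the common factor
    return true_var / (true_var + error_vars.sum())
```

When the loadings are all equal (essential tau-equivalence) and errors are uncorrelated, omega coincides with alpha; correlated item error, the article's focus, breaks this simple picture.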
Peer reviewed
Rae, Gordon – Applied Psychological Measurement, 2006
When errors of measurement are positively correlated, coefficient alpha may overestimate the "true" reliability of a composite. To reduce this inflation bias, Komaroff (1997) has proposed an adjusted alpha coefficient, a[subscript k]. This article shows that a[subscript k] is only guaranteed to be a lower bound to reliability if the latter does not include correlated…
Descriptors: Correlation, Reliability, Error of Measurement, Evaluation Methods