Showing 1 to 15 of 80 results
Peer reviewed
Nugent, William R. – Educational and Psychological Measurement, 2017
Meta-analysis is a significant methodological advance that is increasingly important in research synthesis. Fundamental to meta-analysis is the presumption that effect sizes, such as the standardized mean difference (SMD), based on scores from different measures are comparable. It has been argued that population observed score SMDs based on scores…
Descriptors: Meta Analysis, Effect Size, Comparative Analysis, Scores
Peer reviewed
McGrath, Kathleen V.; Leighton, Elizabeth A.; Ene, Mihaela; DiStefano, Christine; Monrad, Diane M. – Educational and Psychological Measurement, 2020
Survey research frequently involves the collection of data from multiple informants. Results, however, are usually analyzed by informant group, potentially ignoring important relationships across groups. When the same construct(s) are measured, integrative data analysis (IDA) allows pooling of data from multiple sources into one data set to…
Descriptors: Educational Environment, Meta Analysis, Student Attitudes, Teacher Attitudes
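The pooling step of integrative data analysis (IDA) mentioned in the McGrath et al. excerpt can be illustrated with a short sketch: responses on the same construct from different informant groups are stacked into a single data set with an informant indicator. The column names and values below are hypothetical, and this shows only the generic stacking step, not the authors' analysis.

```python
import pandas as pd

# Hypothetical responses on the same construct from two informant groups.
students = pd.DataFrame({"school_climate": [3.2, 4.1, 2.8]}).assign(informant="student")
teachers = pd.DataFrame({"school_climate": [3.9, 4.4, 3.5]}).assign(informant="teacher")

# Stack the groups into one data set, keeping the informant label for later modeling.
pooled = pd.concat([students, teachers], ignore_index=True)
print(pooled.groupby("informant")["school_climate"].mean())
```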
Peer reviewed
Nugent, William Robert; Moore, Matthew; Story, Erin – Educational and Psychological Measurement, 2015
The standardized mean difference (SMD) is perhaps the most important meta-analytic effect size. It is typically used to represent the difference between treatment and control population means in treatment efficacy research. It is also used to represent differences between populations with different characteristics, such as persons who are…
Descriptors: Error of Measurement, Error Correction, Predictor Variables, Monte Carlo Methods
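The SMD discussed in the Nugent, Moore, and Story excerpt is conventionally computed as the difference between group means divided by a pooled standard deviation. A minimal sketch follows; the function name and example data are illustrative, and the article's measurement-error corrections are not reproduced.

```python
import numpy as np

def standardized_mean_difference(treatment, control):
    """Cohen's d: difference in group means divided by the pooled SD."""
    t, c = np.asarray(treatment, float), np.asarray(control, float)
    n_t, n_c = len(t), len(c)
    pooled_var = ((n_t - 1) * t.var(ddof=1) + (n_c - 1) * c.var(ddof=1)) / (n_t + n_c - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

# Hypothetical treatment and control scores.
rng = np.random.default_rng(0)
d = standardized_mean_difference(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40))
print(round(d, 3))
```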
Peer reviewed
Wheeler, Denna L.; Vassar, Matt; Worley, Jody A.; Barnes, Laura L. B. – Educational and Psychological Measurement, 2011
The purpose of this study was to synthesize internal consistency reliability for the subscale scores on the Maslach Burnout Inventory (MBI). The authors addressed three research questions: (a) What is the mean subscale score reliability for the MBI across studies? (b) What factors are associated with observed variance in MBI subscale score…
Descriptors: Burnout, Reliability, Measures (Individuals), Meta Analysis
Peer reviewed
Hardin, Andrew M.; Chang, Jerry Cha-Jan; Fuller, Mark A.; Torkzadeh, Gholamreza – Educational and Psychological Measurement, 2011
The use of causal indicators to formatively measure latent constructs appears to be on the rise, despite what appears to be a troubling lack of consistency in their application. Scholars in any discipline are responsible not only for advancing theoretical knowledge in their domain of study but also for addressing methodological issues that…
Descriptors: Structural Equation Models, Measurement, Statistical Data, Meta Analysis
Peer reviewed
Marin-Martinez, Fulgencio; Sanchez-Meca, Julio – Educational and Psychological Measurement, 2010
Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
Descriptors: Meta Analysis, Sample Size, Effect Size, Monte Carlo Methods
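The Marin-Martinez and Sanchez-Meca excerpt refers to weighting each independent effect size by its inverse variance. The fixed-effect version of that weighted average can be sketched as below; the effect sizes and variances are hypothetical, and in practice the variances are themselves estimates subject to the sampling error the article examines.

```python
import numpy as np

def inverse_variance_average(effects, variances):
    """Fixed-effect pooled estimate: weight each effect size by 1 / variance."""
    w = 1.0 / np.asarray(variances, float)
    effects = np.asarray(effects, float)
    mean = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))  # standard error of the pooled estimate
    return mean, se

# Hypothetical SMDs and their estimated sampling variances from five studies.
d = [0.30, 0.45, 0.10, 0.52, 0.25]
v = [0.040, 0.055, 0.030, 0.080, 0.045]
print(inverse_variance_average(d, v))
```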
Peer reviewed
Romano, Jeanine L.; Kromrey, Jeffrey D. – Educational and Psychological Measurement, 2009
This study was conducted to evaluate alternative analysis strategies for the meta-analysis method of reliability generalization when the reliability estimates are not statistically independent. Five approaches to dealing with the violation of independence were implemented: ignoring the violation and treating each observation as independent,…
Descriptors: Reliability, Generalization, Meta Analysis, Correlation
Peer reviewed
Kim, Eun Sook; Willson, Victor L. – Educational and Psychological Measurement, 2010
This article presents a method to evaluate pretest effects on posttest scores in the absence of an un-pretested control group using published results of pretesting effects due to Willson and Putnam. Confidence intervals around the expected theoretical gain due to pretesting are computed, and observed gains or differential gains are compared with…
Descriptors: Control Groups, Intervals, Educational Research, Educational Psychology
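The Kim and Willson excerpt describes comparing observed gains against a confidence interval for the gain expected from pretesting alone. A generic sketch of that comparison is given below, assuming a published pretest-effect estimate and its standard error; the numbers are hypothetical and are not the Willson and Putnam figures.

```python
from scipy import stats

def pretest_gain_interval(expected_gain, se, confidence=0.95):
    """Confidence interval around the gain expected from pretesting alone."""
    z = stats.norm.ppf(0.5 + confidence / 2.0)
    return expected_gain - z * se, expected_gain + z * se

# Hypothetical: expected pretest gain of 0.17 SD (SE = 0.05); observed gain of 0.40 SD.
low, high = pretest_gain_interval(0.17, 0.05)
print(f"CI = ({low:.2f}, {high:.2f}); an observed gain of 0.40 lies outside it")
```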
Peer reviewed
Kuncel, Nathan R.; Wee, Serena; Serafin, Lauren; Hezlett, Sarah A. – Educational and Psychological Measurement, 2010
Extensive research has examined the effectiveness of admissions tests for use in higher education. What has gone unexamined is the extent to which tests are similarly effective for predicting performance at both the master's and doctoral levels. This study empirically synthesizes previous studies to investigate whether or not the Graduate Record…
Descriptors: Graduate Students, Grade Point Average, Doctoral Programs, Tests
Peer reviewed
Skidmore, Susan Troncoso; Thompson, Bruce – Educational and Psychological Measurement, 2010
The purpose of the present study is to provide a historical account and metasynthesis of which statistical techniques are most frequently used in the fields of education and psychology. Six articles reviewing the "American Educational Research Journal" from 1969 to 1997 and five articles reviewing the psychological literature from 1948 to 2001…
Descriptors: Educational Research, Meta Analysis, Synthesis, Statistical Studies
Peer reviewed
Howell, Ryan T.; Shields, Alan L. – Educational and Psychological Measurement, 2008
Meta-analytic reliability generalizations (RGs) are limited by the scarcity of reliability reporting in primary articles, and currently, RG investigators lack a method to quantify the impact of such nonreporting. This article introduces a stepwise procedure to address this challenge. First, the authors introduce a formula that allows researchers…
Descriptors: Reliability, Meta Analysis, Generalization, Evaluation Methods
Peer reviewed
Van Horn, Pamela S.; Green, Kathy E.; Martinussen, Monica – Educational and Psychological Measurement, 2009
This article reports results of a meta-analysis of survey response rates in published research in counseling and clinical psychology over a 20-year span and describes reported survey administration procedures in those fields. Results of 308 survey administrations showed a weighted average response rate of 49.6%. Among possible moderators, response…
Descriptors: Clinical Psychology, Response Rates (Questionnaires), Counseling Psychology, Meta Analysis
Peer reviewed
Mason, Corinne; Allam, Reynald; Brannick, Michael T. – Educational and Psychological Measurement, 2007
Reliability generalization studies have provided estimates of the mean reliability coefficients and examined factors that explain the variability in the reliability estimates across studies for many different tests and measures. Different authors have used different data analyses to do such meta-analyses, and little research has addressed whether…
Descriptors: Reliability, Monte Carlo Methods, Meta Analysis, Generalization
Peer reviewed
Gilpin, Andrew R. – Educational and Psychological Measurement, 2008
Rosenthal and Rubin introduced a general effect size index, r[subscript equivalent], for use in meta-analyses of two-group experiments; it employs p values from reports of the original studies to determine an equivalent t test and the corresponding point-biserial correlation coefficient. The present investigation used Monte Carlo-simulated…
Descriptors: Effect Size, Correlation, Meta Analysis, Monte Carlo Methods
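The Gilpin excerpt describes the r[subscript equivalent] index, which converts a reported p value into an equivalent t statistic and then into the corresponding point-biserial correlation. A minimal sketch under the usual formulation r = sqrt(t^2 / (t^2 + df)) follows; the p value and degrees of freedom are hypothetical, and the article's Monte Carlo evaluation is not reproduced.

```python
import numpy as np
from scipy import stats

def r_equivalent(p_one_tailed, df):
    """Back out the t value matching a one-tailed p value, then convert it
    to the point-biserial correlation r = sqrt(t^2 / (t^2 + df))."""
    t = stats.t.isf(p_one_tailed, df)  # inverse survival function of the t distribution
    return np.sqrt(t**2 / (t**2 + df))

# Example: p = .03 (one-tailed) from a two-group study with 38 degrees of freedom.
print(round(r_equivalent(0.03, 38), 3))
```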
Peer reviewed
Miller, Christopher S.; Shields, Alan L.; Campfield, Delia; Wallace, Kim A.; Weiss, Roger D. – Educational and Psychological Measurement, 2007
Three drug and alcohol use screening scales are embedded within the Minnesota Multiphasic Personality Inventory--2: the MacAndrew Alcoholism Scale (MAC) and its revised version (MAC-R), the Addiction Acknowledgement Scale (AAS), and the Addiction Potential Scale (APS). The current study evaluated the reliability reporting practices among 210…
Descriptors: Substance Abuse, Drinking, Reliability, Personality