Publication Date
In 2025: 0
Since 2024: 1
Since 2021 (last 5 years): 2
Since 2016 (last 10 years): 4
Since 2006 (last 20 years): 9
Descriptor
Effect Size: 22
Research Design: 22
Meta Analysis: 12
Research Methodology: 9
Literature Reviews: 6
Correlation: 5
Educational Research: 5
Sample Size: 5
Sampling: 5
Statistical Analysis: 5
Statistical Bias: 4
Author
Hedges, Larry V.: 3
Thompson, Bruce: 2
Adair, John G.: 1
Bangert-Drowns, Robert L.: 1
Becker, Betsy Jane: 1
Brewer, James K.: 1
Brunner, Martin: 1
Chang, Lin: 1
Cheung, Alan: 1
Cohen, Peter A.: 1
Cosgrove, Dorothy: 1
Publication Type
Reports - Research: 14
Journal Articles: 10
Speeches/Meeting Papers: 8
Reports - Evaluative: 4
Information Analyses: 3
Reports - Descriptive: 3
Opinion Papers: 2
Guides - Non-Classroom: 1
Audience
Researchers: 22
Policymakers: 2
Practitioners: 1
Assessments and Surveys
Program for International…: 1
Mikkel Helding Vembye; James Eric Pustejovsky; Therese Deocampo Pigott – Research Synthesis Methods, 2024
Sample size and statistical power are important factors to consider when planning a research synthesis. Power analysis methods have been developed for fixed effect or random effects models, but until recently these methods were limited to simple data structures with a single, independent effect per study. Recent work has provided power…
Descriptors: Sample Size, Robustness (Statistics), Effect Size, Social Science Research
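As context for the power methods surveyed above, the sketch below shows the classic power computation for the mean effect in a fixed-effect synthesis of independent effect sizes; it is a minimal illustration with hypothetical numbers, not the dependent-effects procedure the abstract describes.

    # Sketch: power of the two-sided z-test for a fixed-effect mean effect,
    # assuming independent effects with known sampling variances (illustrative only).
    from math import sqrt
    from statistics import NormalDist

    def fixed_effect_power(delta, variances, alpha=0.05):
        """Approximate power to detect a true mean effect `delta`."""
        nd = NormalDist()
        weights = [1.0 / v for v in variances]        # inverse-variance weights
        se = sqrt(1.0 / sum(weights))                 # SE of the weighted mean effect
        z_crit = nd.inv_cdf(1 - alpha / 2)            # two-sided critical value
        lam = delta / se                              # noncentrality on the z scale
        return nd.cdf(lam - z_crit) + nd.cdf(-lam - z_crit)

    # Hypothetical example: 10 independent studies, each with sampling variance 0.04
    print(round(fixed_effect_power(0.20, [0.04] * 10), 3))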
Brunner, Martin; Keller, Lena; Stallasch, Sophie E.; Kretschmann, Julia; Hasl, Andrea; Preckel, Franzis; Lüdtke, Oliver; Hedges, Larry V. – Research Synthesis Methods, 2023
Descriptive analyses of socially important or theoretically interesting phenomena and trends are a vital component of research in the behavioral, social, economic, and health sciences. Such analyses yield reliable results when using representative individual participant data (IPD) from studies with complex survey designs, including educational…
Descriptors: Meta Analysis, Surveys, Research Design, Educational Research
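As a minimal illustration of the kind of design-weighted descriptive estimate such analyses rest on (not the authors' procedure), a sampling-weighted mean with hypothetical scores and weights might look like this; real complex survey designs also require strata and cluster adjustments for standard errors.

    # Sketch: sampling-weighted mean for a descriptive survey estimate
    # (hypothetical data; variance estimation for complex designs is omitted).
    def weighted_mean(y, w):
        """Weighted mean of outcome y under survey weights w."""
        return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

    scores = [480, 510, 495, 530]     # hypothetical achievement scores
    weights = [1.2, 0.8, 1.5, 0.5]    # hypothetical sampling weights
    print(round(weighted_mean(scores, weights), 1))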
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben – Society for Research on Educational Effectiveness, 2017
The purpose of this paper is to present results of recent advances in power analyses to detect moderator effects in Cluster Randomized Trials (CRTs). The paper focuses on demonstrating the software PowerUp!-Moderator and provides a resource for researchers seeking to design CRTs with adequate power to detect the moderator effects of…
Descriptors: Computer Software, Research Design, Randomized Controlled Trials, Statistical Analysis
Shadish, William R.; Hedges, Larry V.; Horner, Robert H.; Odom, Samuel L. – National Center for Education Research, 2015
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary…
Descriptors: Effect Size, Case Studies, Research Design, Observation
Cheung, Alan; Slavin, Robert – Society for Research on Educational Effectiveness, 2016
As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programs. The purpose of this study was to examine how methodological features such as types of publication, sample sizes, and…
Descriptors: Effect Size, Evidence Based Practice, Educational Change, Educational Policy
West, Stephen G.; Thoemmes, Felix – Psychological Methods, 2010
Donald Campbell's approach to causal inference (D. T. Campbell, 1957; W. R. Shadish, T. D. Cook, & D. T. Campbell, 2002) is widely used in psychology and education, whereas Donald Rubin's causal model (P. W. Holland, 1986; D. B. Rubin, 1974, 2005) is widely used in economics, statistics, medicine, and public health. Campbell's approach focuses on…
Descriptors: Causal Models, Research Methodology, Validity, Inferences
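Rubin's model defines causal effects through potential outcomes; the toy simulation below (illustrative only, with made-up data) contrasts the average treatment effect with the naive observed difference when assignment is confounded.

    # Sketch: potential outcomes in a Rubin-style toy simulation (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.normal(size=n)                  # a confounder
    y0 = x + rng.normal(size=n)             # potential outcome without treatment
    y1 = y0 + 0.5                           # potential outcome with treatment (true effect 0.5)
    t = (x + rng.normal(size=n) > 0)        # assignment depends on the confounder
    y = np.where(t, y1, y0)                 # only one potential outcome is observed

    ate = (y1 - y0).mean()                  # average treatment effect (needs both potentials)
    naive = y[t].mean() - y[~t].mean()      # biased under confounded assignment
    print(round(ate, 2), round(naive, 2))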
Ruscio, John – Psychological Methods, 2008
Calculating and reporting appropriate measures of effect size are becoming standard practice in psychological research. One of the most common scenarios encountered involves the comparison of 2 groups, which includes research designs that are experimental (e.g., random assignment to treatment vs. placebo conditions) and nonexperimental (e.g.,…
Descriptors: Psychological Studies, Effect Size, Probability, Correlation
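One probability-based effect size for the two-group case is the probability of superiority, the chance that a randomly drawn score from one group exceeds a randomly drawn score from the other; below is a minimal nonparametric sketch with hypothetical data, not necessarily the paper's exact estimator.

    # Sketch: nonparametric probability-of-superiority effect size for two groups
    # (ties counted as one-half; data are hypothetical).
    def prob_superiority(x, y):
        """Estimated P(X > Y) + 0.5 * P(X == Y) from two samples."""
        gt = sum(1 for a in x for b in y if a > b)
        eq = sum(1 for a in x for b in y if a == b)
        return (gt + 0.5 * eq) / (len(x) * len(y))

    treatment = [12, 15, 14, 18, 20]
    control = [11, 13, 13, 16, 12]
    print(prob_superiority(treatment, control))   # 0.5 would mean no separation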
Hedges, Larry V. – Journal of Educational and Behavioral Statistics, 2007
Multisite research designs involving cluster randomization are becoming increasingly important in educational and behavioral research. Researchers would like to compute effect size indexes based on the standardized mean difference to compare the results of cluster-randomized studies (and corresponding quasi-experiments) with other studies and to…
Descriptors: Journal Articles, Effect Size, Computation, Research Design
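One commonly cited form of the total-variance standardized mean difference for cluster-randomized designs applies a clustering adjustment based on the intraclass correlation; the sketch below assumes equal cluster sizes and hypothetical values, and is not a substitute for the article's estimators or variance formulas.

    # Sketch: standardized mean difference for a cluster-randomized design,
    # standardized on the total SD with a clustering adjustment
    # (equal cluster sizes assumed; an illustration only).
    from math import sqrt

    def d_total(mean_t, mean_c, sd_total, n_per_cluster, n_total, icc):
        """Total-variance SMD with an adjustment for intraclass correlation `icc`."""
        adj = sqrt(1 - (2 * (n_per_cluster - 1) * icc) / (n_total - 2))
        return (mean_t - mean_c) / sd_total * adj

    # Hypothetical values: 20 clusters of 25 students per arm total N = 500, ICC = 0.15
    print(round(d_total(105.0, 100.0, 15.0, 25, 500, 0.15), 3))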
Wang, Zhongmiao; Thompson, Bruce – Journal of Experimental Education, 2007
In this study the authors investigated the use of 5 (i.e., Claudy, Ezekiel, Olkin-Pratt, Pratt, and Smith) R² correction formulas with the Pearson r². The authors estimated adjustment bias and precision under 6 × 3 × 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9; population shapes normal, skewness…
Descriptors: Effect Size, Correlation, Mathematical Formulas, Monte Carlo Methods
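Of the corrections named above, the Ezekiel adjustment is the most familiar; below is a minimal sketch with hypothetical values (the study's other formulas are not reproduced here).

    # Sketch: Ezekiel's adjusted R-squared, one of the correction formulas compared above
    # (illustrative; the Claudy, Olkin-Pratt, Pratt, and Smith formulas are not shown).
    def ezekiel_adjusted_r2(r2, n, k):
        """Shrink a sample R^2 observed with n cases and k predictors."""
        return 1 - (1 - r2) * (n - 1) / (n - k - 1)

    print(round(ezekiel_adjusted_r2(r2=0.30, n=50, k=5), 3))   # hypothetical values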

Matyas, Thomas A.; Greenwood, Kenneth M. – Journal of Applied Behavior Analysis, 1990
Visual analysis is examined as the dominant analytical method for single-case time series, in a study with 37 postgraduate research students that varied serial dependence, the amount of random variability, and effect size. False alarm rates were high, but miss rates were low, indicating that visual analysts are not conservative judges, and serial…
Descriptors: Case Studies, Effect Size, Graduate Students, Higher Education
Tracz, Susan M.; And Others – 1986
The purpose of this paper is to demonstrate how multiple linear regression provides a viable statistical methodology for dealing with meta-analysis in general, and specifically with the issues of nonindependence and design complexity, such as multiple treatments. Since the F-test and t-test are special cases of the general linear model,…
Descriptors: Effect Size, Mathematical Models, Meta Analysis, Multiple Regression Analysis
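As a generic illustration of casting meta-analysis in general-linear-model terms, a fixed-effect meta-regression can be fit by weighted least squares with inverse-variance weights; the data and moderator below are hypothetical, and the sketch sidesteps the nonindependence issues the paper addresses.

    # Sketch: fixed-effect meta-regression by weighted least squares
    # (inverse-variance weights; hypothetical data, effects treated as independent).
    import numpy as np

    effects = np.array([0.10, 0.25, 0.40, 0.55])    # study effect sizes (hypothetical)
    variances = np.array([0.02, 0.03, 0.02, 0.04])  # their sampling variances
    dosage = np.array([1.0, 2.0, 3.0, 4.0])         # a hypothetical study-level moderator

    X = np.column_stack([np.ones_like(dosage), dosage])    # intercept + moderator
    W = np.diag(1.0 / variances)                            # inverse-variance weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)  # WLS estimate
    print(np.round(beta, 3))   # [intercept, slope of effect size on the moderator]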
Bangert-Drowns, Robert L. – 1985
Since meta-analysis was described in 1976 (Glass) as the application of familiar experimental methods to the integration of available research, at least five coherent approaches to meta-analysis have appeared in common use. These approaches can be divided into two broad groups. In the first group (including procedures by Robert Rosenthal, Larry…
Descriptors: Data, Effect Size, Error of Measurement, Literature Reviews
Adair, John G.; And Others – 1987
A meta-analysis was conducted on 44 educational studies that used either a (labelled) Hawthorne control group, a manipulation of Hawthorne effects, or a group designed to control for the Hawthorne effect. The sample included published journal articles, ERIC documents or unpublished papers, and dissertations. The studies were coded on 20 variables,…
Descriptors: Control Groups, Educational Research, Effect Size, Experimental Groups
Strube, Michael J. – 1986
A general model is described which can be used to represent the four common types of meta-analysis: (1) estimation of effect size by combining study outcomes; (2) estimation of effect size by contrasting study outcomes; (3) estimation of statistical significance by combining study outcomes; and (4) estimation of statistical significance by…
Descriptors: Comparative Analysis, Effect Size, Mathematical Models, Meta Analysis
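Two of the four estimation types listed above have simple classical forms: inverse-variance combination of effect sizes and Stouffer's combined z for significance levels. The sketch below shows those generic versions, not Strube's general model.

    # Sketch: two classical combinations referenced above (generic forms only):
    # (1) fixed-effect combination of effect sizes, (2) Stouffer's combined z for p-values.
    from math import sqrt
    from statistics import NormalDist

    nd = NormalDist()

    def combine_effects(effects, variances):
        """Inverse-variance weighted mean effect size and its standard error."""
        w = [1.0 / v for v in variances]
        return sum(wi * ei for wi, ei in zip(w, effects)) / sum(w), sqrt(1.0 / sum(w))

    def stouffer(p_values):
        """Combined one-tailed p-value via Stouffer's summed z."""
        z = sum(nd.inv_cdf(1 - p) for p in p_values) / sqrt(len(p_values))
        return 1 - nd.cdf(z)

    print(combine_effects([0.2, 0.5, 0.3], [0.04, 0.09, 0.05]))   # hypothetical inputs
    print(round(stouffer([0.04, 0.20, 0.10]), 4))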
Olive, Melissa L.; Smith, Benjamin W. – Educational Psychology, 2005
This study compared visual analyses with five alternative methods for assessing the magnitude of effect with single subject designs. Each method was successful in detecting an intervention effect. When rank ordered, each method was consistent in identifying the participants with the largest effect. We recommend the use of the standard mean difference…
Descriptors: Effect Size, Regression (Statistics), Evaluation Methods, Research Design
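One common way to express a standard mean difference for single-subject data is to standardize the contrast between phase means on the baseline standard deviation; the sketch below uses that convention with hypothetical session data (definitions of the denominator vary across this literature).

    # Sketch: a standardized mean difference between single-subject phases,
    # standardizing on the baseline SD (one common convention; definitions vary).
    from statistics import mean, stdev

    def phase_smd(baseline, intervention):
        """(intervention mean - baseline mean) / baseline SD."""
        return (mean(intervention) - mean(baseline)) / stdev(baseline)

    baseline = [4, 5, 3, 4, 5]          # hypothetical baseline-phase observations
    intervention = [8, 9, 7, 10, 9]     # hypothetical intervention-phase observations
    print(round(phase_smd(baseline, intervention), 2))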