Showing 1 to 15 of 322 results
Peer reviewed
James E. Pustejovsky; Man Chen – Journal of Educational and Behavioral Statistics, 2024
Meta-analyses of educational research findings frequently involve statistically dependent effect size estimates. Meta-analysts have often addressed dependence issues using ad hoc approaches that involve modifying the data to conform to the assumptions of models for independent effect size estimates, such as by aggregating estimates to obtain one…
Descriptors: Meta Analysis, Multivariate Analysis, Effect Size, Evaluation Methods
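The ad hoc aggregation strategy this abstract mentions is easy to picture with a short sketch. This is a minimal illustration of that workaround (not the authors' proposed model), assuming equal weighting of the estimates and a guessed common within-study correlation r:

```python
import numpy as np

def aggregate_study(effects, variances, r=0.8):
    """Collapse k dependent effect size estimates from one study into
    a single composite (their simple mean) and its variance, assuming
    a common within-study correlation r among the estimates."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    k = len(effects)
    sds = np.sqrt(variances)
    # Variance of an equally weighted mean of k correlated estimates:
    # (sum of variances + 2r * sum over pairs of sd_i * sd_j) / k^2
    pair_sum = sum(sds[i] * sds[j]
                   for i in range(k) for j in range(i + 1, k))
    var_composite = (variances.sum() + 2 * r * pair_sum) / k**2
    return effects.mean(), var_composite

# Example: three correlated outcome measures from one study
print(aggregate_study([0.30, 0.45, 0.25], [0.04, 0.05, 0.04]))
```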
Peer reviewed
Hooijmans, Carlijn R.; Donders, Rogier; Magnuson, Kristen; Wever, Kimberley E.; Ergün, Mehmet; Rooney, Andrew A.; Walker, Vickie; Langendam, Miranda W. – Research Synthesis Methods, 2022
Since the early 1990s, the number of systematic reviews (SRs) of animal studies has steadily increased. There is, however, little guidance on when and how to conduct a meta-analysis of human-health-related animal studies. To gain insight into the methods currently used, we created an overview of the key characteristics of published…
Descriptors: Animals, Health Education, Educational Research, Meta Analysis
Peer reviewed
Nakagawa, Shinichi; Lagisz, Malgorzata; O'Dea, Rose E.; Rutkowska, Joanna; Yang, Yefeng; Noble, Daniel W. A.; Senior, Alistair M. – Research Synthesis Methods, 2021
"Classic" forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the…
Descriptors: Graphs, Meta Analysis, Ecology, Evolution
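For context, a classic forest plot of the kind this abstract describes can be drawn in a few lines of matplotlib. The data below are made up, and the inverse-variance fixed-effect pooling is only one way to compute the aggregate diamond:

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up effect sizes and standard errors for five studies
labels = ["Study A", "Study B", "Study C", "Study D", "Study E"]
es = np.array([0.20, 0.50, -0.10, 0.35, 0.15])
se = np.array([0.10, 0.15, 0.12, 0.08, 0.20])

w = 1 / se**2                        # inverse-variance weights
pooled = np.sum(w * es) / np.sum(w)  # fixed-effect pooled estimate
pooled_se = np.sqrt(1 / np.sum(w))

y = np.arange(len(labels), 0, -1)
plt.errorbar(es, y, xerr=1.96 * se, fmt="s", color="black")
plt.errorbar([pooled], [0], xerr=[1.96 * pooled_se], fmt="D", color="black")
plt.yticks(list(y) + [0], labels + ["Pooled"])
plt.axvline(0, linestyle="--", color="grey")
plt.xlabel("Effect size (95% CI)")
plt.tight_layout()
plt.show()
```

With 100+ effect sizes, the study axis of such a plot becomes unreadable, which is the limitation the authors survey.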
Peer reviewed
Weissgerber, Sophia C.; Brunmair, Matthias; Rummer, Ralf – Educational Psychology Review, 2021
In the 2018 meta-analysis published in "Educational Psychology Review," entitled "Null effects of perceptual disfluency on learning outcomes in a text-based educational context" by Xie, Zhou, and Liu, we identify some errors and inconsistencies in both the methodological approach and the reported results regarding coding and effect sizes…
Descriptors: Meta Analysis, Research Problems, Research Methodology, Coding
Peer reviewed
Warne, Russell T. – Journal of Advanced Academics, 2022
Recently, Picho-Kiroga (2021) published a meta-analysis on the effect of stereotype threat on females. The conclusion was that the average effect size for stereotype threat studies was d = .28, but that effects are overstated because the majority of studies on stereotype threat in females include methodological characteristics that inflate the…
Descriptors: Sex Stereotypes, Females, Meta Analysis, Effect Size
Peer reviewed
Bissonnette, Steve; Boyer, Christian – Journal of Computer Assisted Learning, 2022
Tingir et al. (2017) concluded from their meta-analysis that subjects taught through mobile devices yielded significantly higher achievement scores (d = 0.48) than those taught with traditional teaching methods. Given the relatively high positive effect of mobile devices on student achievement, we carefully analysed the selected research…
Descriptors: Meta Analysis, Electronic Learning, Handheld Devices, Academic Achievement
Fingerhut, Joelle; Xu, Xunyun; Moeyaert, Mariola – Grantee Submission, 2021
A variety of measures have been developed to quantify intervention effects for single-case experimental design studies. Within the family of non-overlap indices, the Tau-U measure is one of the most popular indices. There are several Tau-U variants, each one calculated differently. The appropriateness of each Tau-U variant depends upon the data…
Descriptors: Case Studies, Research Design, Research Tools, Decision Making
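The basic member of the Tau-U family, phase non-overlap between baseline (A) and intervention (B) data without any trend correction, reduces to a count over all A-B pairs; the variants the abstract refers to add corrections for baseline or intervention trend. A minimal sketch with hypothetical single-case data:

```python
def tau_nonoverlap(baseline, treatment):
    """Basic Tau (A-vs-B non-overlap, no trend correction): the share
    of all baseline/treatment pairs where the treatment observation
    exceeds the baseline one, minus the share where it is lower."""
    pairs = len(baseline) * len(treatment)
    pos = sum(b > a for a in baseline for b in treatment)
    neg = sum(b < a for a in baseline for b in treatment)
    return (pos - neg) / pairs

# Hypothetical data: 5 baseline and 7 intervention observations
print(tau_nonoverlap([2, 3, 2, 4, 3], [4, 5, 4, 6, 5, 7, 6]))
```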
Peer reviewed
Shi, Linyu; Chu, Haitao; Lin, Lifeng – Research Synthesis Methods, 2020
Publication bias threatens meta-analysis validity. It is often assessed via the funnel plot; an asymmetric plot implies small-study effects, and publication bias is one cause of the asymmetry. Egger's regression test is a widely used tool to quantitatively assess such asymmetry. It examines the association between the observed effect sizes and…
Descriptors: Bayesian Statistics, Meta Analysis, Effect Size, Publications
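Egger's test itself is simple to reproduce. The sketch below uses the standardized form, regressing y/se on precision 1/se with made-up data; an intercept far from zero signals funnel plot asymmetry. The Bayesian refinement the article develops is not shown here:

```python
import numpy as np
from scipy import stats

# Made-up effect sizes and standard errors from ten studies
y = np.array([0.31, 0.45, 0.12, 0.60, 0.25, 0.55, 0.40, 0.18, 0.70, 0.35])
se = np.array([0.12, 0.20, 0.08, 0.25, 0.10, 0.22, 0.15, 0.09, 0.30, 0.14])

# Egger's regression: standardized effect (y/se) on precision (1/se).
# The intercept estimates the "bias" term; test it with a t statistic.
res = stats.linregress(1 / se, y / se)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(y) - 2)
print(f"intercept = {res.intercept:.3f}, p = {p:.3f}")
```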
Peer reviewed
Wolf, Rebecca; Morrison, Jennifer; Inns, Amanda; Slavin, Robert; Risman, Kelsey – Journal of Research on Educational Effectiveness, 2020
Rigorous evidence of program effectiveness has become increasingly important with the 2015 passage of the Every Student Succeeds Act (ESSA). One question that has not yet been fully explored is whether program evaluations carried out or commissioned by developers produce larger effect sizes than evaluations conducted by independent third parties.…
Descriptors: Program Evaluation, Program Effectiveness, Effect Size, Sample Size
Peer reviewed
Manolov, Rumen; Guilera, Georgina; Solanas, Antonio – Remedial and Special Education, 2017
The current text comments on three systematic reviews published in the special section "Issues and Advances in the Systematic Review of Single-Case Research: An Update and Exemplars." The commentary addresses the need to combine the assessment of the methodological quality of the studies included in systematic reviews,…
Descriptors: Research Design, Meta Analysis, Research Methodology, Functional Behavioral Assessment
Peer reviewed
Bergeron, Pierre-Jérôme – McGill Journal of Education, 2017
This paper presents a critical analysis, from the point of view of a statistician, of the methodology used by Hattie in "Visible Learning," and explains why it must absolutely be called pseudoscience. We first discuss what appear to be the intentions of Hattie's approach. Then we describe the major mistakes in "Visible…
Descriptors: Scientific Concepts, Misconceptions, Statistics, Teaching Methods
Peer reviewed
Simpson, Adrian – Journal of Education Policy, 2017
Increased attention on "what works" in education has led to an emphasis on developing policy from evidence based on comparing and combining a particular statistical summary of intervention studies: the standardised effect size. It is assumed that this statistical summary provides an estimate of the educational impact of interventions and…
Descriptors: Public Policy, Educational Policy, Effect Size, Evidence Based Practice
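The statistical summary at issue is usually the standardised mean difference (Cohen's d): the difference in group means divided by the pooled standard deviation. A minimal computation with hypothetical test scores:

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardised mean difference: (M_t - M_c) / pooled SD."""
    nt, nc = len(treatment), len(control)
    pooled_var = ((nt - 1) * np.var(treatment, ddof=1) +
                  (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

# Hypothetical scores for a treatment and a comparison group
print(cohens_d([78, 82, 85, 90, 74, 88], [70, 75, 80, 72, 77, 79]))
```

Simpson's point is that this number also reflects study design choices, not educational impact alone.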
Peer reviewed
Debray, Thomas P. A.; Moons, Karel G. M.; Riley, Richard D. – Research Synthesis Methods, 2018
Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect sizes and their standard errors, the total sample size,…
Descriptors: Meta Analysis, Comparative Analysis, Publications, Bias
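One regression-based check of the sort listed, using total sample size as the predictor (a Peters-type test), can be sketched as a weighted least squares fit. The data are made up, and this illustrates one such test rather than the comparison the article performs:

```python
import numpy as np

# Made-up effect sizes and total sample sizes from eight studies
y = np.array([0.42, 0.35, 0.55, 0.20, 0.48, 0.30, 0.60, 0.25])
n = np.array([40, 120, 30, 300, 50, 150, 25, 250])

# Weighted least squares of effect size on 1/n, weighting by n:
# a slope far from zero suggests small-study effects.
X = np.column_stack([np.ones_like(y), 1.0 / n])
W = np.diag(n.astype(float))
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"intercept = {beta[0]:.3f}, slope on 1/n = {beta[1]:.3f}")
```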
Peer reviewed
Higgins, Steve; Katsipataki, Maria – International Journal of Research & Method in Education, 2016
This article reviews some of the strengths and limitations of the comparative use of meta-analysis findings, using examples from the Sutton Trust-Education Endowment Foundation Teaching and Learning "Toolkit" which summarizes a range of educational approaches to improve pupil attainment in schools. This comparative use of quantitative…
Descriptors: Meta Analysis, Educational Research, Comparative Analysis, Evidence Based Practice
Peer reviewed
PDF available on ERIC
Shadish, William R.; Hedges, Larry V.; Horner, Robert H.; Odom, Samuel L. – National Center for Education Research, 2015
The field of education is increasingly committed to adopting evidence-based practices. Although randomized experimental designs provide strong evidence of the causal effects of interventions, they are not always feasible. For example, depending upon the research question, it may be difficult for researchers to find the number of children necessary…
Descriptors: Effect Size, Case Studies, Research Design, Observation