Showing 1 to 15 of 21 results
Peer reviewed
Direct link
Menglin Xu; Jessica A. R. Logan – Educational and Psychological Measurement, 2024
Research designs that include planned missing data are gaining popularity in applied education research. These methods have traditionally relied on introducing missingness into data collections using the missing completely at random (MCAR) mechanism. This study assesses whether planned missingness can also be implemented when data are instead…
Descriptors: Research Design, Research Methodology, Monte Carlo Methods, Statistical Analysis
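The planned-missingness mechanism the abstract describes is easy to prototype. Below is a minimal sketch, assuming a generic item-level dataset (the `item1`…`item6` names and the 30% missingness rate are illustrative, not taken from the study), of imposing MCAR missingness by masking cells independently of all observed and unobserved values:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy dataset standing in for a six-item survey (names are illustrative).
data = pd.DataFrame(rng.normal(size=(200, 6)),
                    columns=[f"item{i}" for i in range(1, 7)])

def impose_mcar(df: pd.DataFrame, p_missing: float, rng) -> pd.DataFrame:
    """Blank out each cell independently with probability p_missing.

    Because the mask ignores both observed and unobserved values,
    the induced missingness is MCAR by construction.
    """
    mask = rng.random(df.shape) < p_missing
    return df.mask(mask)

planned = impose_mcar(data, p_missing=0.3, rng=rng)
print(planned.isna().mean())  # each column should be roughly 30% missing
```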
Peer reviewed
Direct link
Prathiba Natesan Batley; Erica B. McClure; Brandy Brewer; Ateka A. Contractor; Nicholas John Batley; Larry Vernon Hedges; Stephanie Chin – Grantee Submission, 2023
N-of-1 trials, a special case of Single Case Experimental Designs (SCEDs), are prominent in clinical medical research and specifically psychiatry due to the growing significance of precision/personalized medicine. It is imperative that these clinical trials be conducted, and their data analyzed, using the highest standards to guard against threats…
Descriptors: Medical Research, Research Design, Data Analysis, Effect Size
Peer reviewed
Direct link
Boers, Frank; Bryfonski, Lara; Faez, Farahnaz; McKay, Todd – Studies in Second Language Acquisition, 2021
Meta-analytic reviews collect available empirical studies on a specified domain and calculate the average effect of a factor. Educators as well as researchers exploring a new domain of inquiry may rely on the conclusions from meta-analytic reviews rather than reading multiple primary studies. This article calls for caution in this regard because…
Descriptors: Meta Analysis, Literature Reviews, Effect Size, Computation
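The "average effect of a factor" that such reviews report is typically an inverse-variance weighted mean. A minimal sketch with made-up effect sizes and variances (not drawn from any study in the review):

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their variances.
d = np.array([0.31, 0.18, 0.45, 0.02, 0.27])
v = np.array([0.010, 0.022, 0.015, 0.040, 0.018])

# Fixed-effect weights are inverse variances: w_i = 1 / v_i.
w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)   # weighted average effect
se_bar = np.sqrt(1.0 / np.sum(w))   # its standard error

print(f"pooled d = {d_bar:.3f} (SE = {se_bar:.3f})")
```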
Peer reviewed
Direct link
Rickard, Timothy C.; Pan, Steven C.; Gupta, Mohan W. – Journal of Experimental Psychology: Learning, Memory, and Cognition, 2022
We explored the possibility of publication bias in the sleep and explicit motor sequence learning literature by applying precision-effect test (PET) and precision-effect estimate with standard error (PEESE) weighted regression analyses to the 88 effect sizes from a recent comprehensive literature review (Pan & Rickard, 2015). Basic PET analysis…
Descriptors: Publications, Bias, Sleep, Psychomotor Skills
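PET and PEESE are both weighted regressions of effect sizes on a measure of their imprecision: a nonzero slope signals small-study effects, and the intercept serves as a bias-adjusted effect estimate. A sketch with hypothetical values (the eight effect sizes below are invented; the paper analyzed 88 from Pan & Rickard, 2015):

```python
import numpy as np
import statsmodels.api as sm

# Invented effect sizes and standard errors for illustration only.
d = np.array([0.42, 0.15, 0.33, 0.58, 0.07, 0.25, 0.49, 0.12])
se = np.array([0.20, 0.08, 0.15, 0.25, 0.06, 0.12, 0.22, 0.07])
w = 1.0 / se**2  # precision weights

# PET: regress effect size on SE; the intercept is the bias-adjusted effect.
pet = sm.WLS(d, sm.add_constant(se), weights=w).fit()

# PEESE: same idea, but with SE^2 as the predictor.
peese = sm.WLS(d, sm.add_constant(se**2), weights=w).fit()

print("PET intercept  :", round(pet.params[0], 3))
print("PEESE intercept:", round(peese.params[0], 3))
```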
Spybrook, Jessaca; Zhang, Qi; Kelcey, Ben; Dong, Nianbo – Educational Evaluation and Policy Analysis, 2020
Over the past 15 years, we have seen an increase in the use of cluster randomized trials (CRTs) to test the efficacy of educational interventions. These studies are often designed with the goal of determining whether a program works, or answering the what works question. Recently, the goals of these studies expanded to include for whom and under…
Descriptors: Randomized Controlled Trials, Educational Research, Program Effectiveness, Intervention
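Power in a CRT is governed by the number of clusters and the intraclass correlation, not just total sample size. A rough sketch of the standard design-effect approximation (two balanced arms, no covariates, no degrees-of-freedom correction; the school counts and ICC are illustrative):

```python
import math

def crt_design_effect(n_per_cluster: int, icc: float) -> float:
    """Variance inflation from randomizing clusters instead of individuals."""
    return 1 + (n_per_cluster - 1) * icc

def approx_mdes(n_clusters: int, n_per_cluster: int, icc: float,
                alpha_z: float = 1.96, power_z: float = 0.84) -> float:
    """Rough minimum detectable effect size for a balanced two-arm CRT."""
    deff = crt_design_effect(n_per_cluster, icc)
    n_total = n_clusters * n_per_cluster
    return (alpha_z + power_z) * math.sqrt(4 * deff / n_total)

# e.g., 40 schools of 25 students with ICC = 0.15
print(round(approx_mdes(40, 25, 0.15), 3))
```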
Peer reviewed
Direct link
Simpson, Adrian – Educational Research and Evaluation, 2018
Ainsworth et al.'s paper "Sources of Bias in Outcome Assessment in Randomised Controlled Trials: A Case Study" examines alternative accounts for a large difference in effect size between 2 outcomes in the same intervention evaluation. It argues that the probable explanation relates to masking: Only one outcome measure was administered by…
Descriptors: Statistical Bias, Randomized Controlled Trials, Effect Size, Outcome Measures
Peer reviewed
PDF on ERIC Download full text
Deke, John; Wei, Thomas; Kautz, Tim – National Center for Education Evaluation and Regional Assistance, 2017
Evaluators of education interventions are increasingly designing studies to detect impacts much smaller than the 0.20 standard deviations that Cohen (1988) characterized as "small." While the need to detect smaller impacts is based on compelling arguments that such impacts are substantively meaningful, the drive to detect smaller impacts…
Descriptors: Intervention, Educational Research, Research Problems, Statistical Bias
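The cost of chasing smaller impacts is visible in the standard sample-size approximation for a two-arm individually randomized trial: required n grows with the inverse square of the target effect. A quick illustration (80% power, two-sided alpha = .05):

```python
import math

def n_per_arm(delta: float, alpha_z: float = 1.96, power_z: float = 0.84) -> int:
    """Approximate per-arm n to detect a standardized impact of size delta."""
    return math.ceil(2 * (alpha_z + power_z) ** 2 / delta ** 2)

for delta in (0.20, 0.10, 0.05):
    print(delta, n_per_arm(delta))
# Halving the target impact quadruples the required sample.
```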
Steenbergen-Hu, Saiying; Olszewski-Kubilius, Paula – Gifted Child Quarterly, 2016
This methodological brief introduces basic procedures and issues for conducting a high-quality meta-analysis in gifted education. Specifically, we discuss issues such as how to select a topic and formulate research problems, search for and identify qualified studies, code studies and extract data, choose and calculate effect sizes, analyze data,…
Descriptors: Meta Analysis, Academically Gifted, Research Methodology, Research Problems
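One of the steps the brief covers, computing effect sizes, usually means the standardized mean difference with Hedges' small-sample correction. A minimal sketch with invented group statistics:

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference with the small-sample correction
    (Hedges' g) commonly reported in meta-analyses."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)  # correction factor J
    return j * d

print(round(hedges_g(105.0, 100.0, 15.0, 14.0, 30, 28), 3))
```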
Peer reviewed
Direct link
Stevenson, Jim; Buitelaar, Jan; Cortese, Samuele; Ferrin, Maite; Konofal, Eric; Lecendreux, Michel; Simonoff, Emily; Wong, Ian C. K.; Sonuga-Barke, Edmund – Journal of Child Psychology and Psychiatry, 2014
Background: The efficacy of three dietary treatments for ADHD has been repeatedly tested in randomized controlled trials (RCTs). These interventions are restricted elimination diets (RED), artificial food colour elimination (AFCE) and supplementation with free fatty acids (SFFA). There have been three systematic reviews and associated…
Descriptors: Dietetics, Attention Deficit Hyperactivity Disorder, Outcomes of Treatment, Literature Reviews
Peer reviewed
PDF on ERIC Download full text
Citkowicz, Martyna; Polanin, Joshua R. – Society for Research on Educational Effectiveness, 2014
Meta-analyses are syntheses of effect-size estimates obtained from a collection of studies to summarize a particular field or topic (Hedges, 1992; Lipsey & Wilson, 2001). These reviews are used to integrate knowledge that can inform both scientific inquiry and public policy; it is therefore important to ensure that the estimates of the effect…
Descriptors: Meta Analysis, Accountability, Cluster Grouping, Effect Size
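When primary studies randomize or sample intact clusters, the naive variance of an effect size understates its true uncertainty. A simplified design-effect adjustment (a much-reduced version of the fuller corrections derived in Hedges, 2007; the numbers are illustrative):

```python
def cluster_adjusted_variance(v_naive: float, n_per_cluster: int, icc: float) -> float:
    """Inflate the naive effect-size variance by the design effect
    1 + (n - 1) * ICC to account for clustering (a simplification)."""
    return v_naive * (1 + (n_per_cluster - 1) * icc)

# A variance of 0.02 from a study with 20 students per classroom, ICC = 0.10:
print(round(cluster_adjusted_variance(0.02, 20, 0.10), 3))  # 0.058
```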
Peer reviewed
Direct link
Wester, Kelly L.; Borders, L. DiAnne; Boul, Steven; Horton, Evette – Journal of Counseling & Development, 2013
The purpose of this study was to examine the quality of quantitative articles published in the "Journal of Counseling & Development." Quality concerns arose in regard to omissions of psychometric information of instruments, effect sizes, and statistical power. Type VI and II errors were found. Strengths included stated research…
Descriptors: Periodicals, Journal Articles, Counseling, Research
Peer reviewed
Direct link
Byiers, Breanne J.; Reichle, Joe; Symons, Frank J. – American Journal of Speech-Language Pathology, 2012
Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…
Descriptors: Evidence, Research Design, Speech Language Pathology, Intervention
Peer reviewed
Hains, Ann Higgins; Baer, Donald M. – Journal of Applied Behavior Analysis, 1989
Multi-element research designs (alternating treatments or simultaneous treatments) are capable of revealing when sequence effects operate, but even more valuably, they can be used to assess the effects of potential multiple treatment interference and to study contextual interactions other than sequence effects. (JDD)
Descriptors: Effect Size, Elementary Secondary Education, Interaction, Research Design
Palomares, Ronald S. – 1990
Researchers increasingly recognize that significance tests are limited in their ability to inform scientific practice. Common errors in interpreting significance tests and three strategies for augmenting the interpretation of significance test results are illustrated. The first strategy for augmenting the interpretation of significance tests…
Descriptors: Effect Size, Estimation (Mathematics), Evaluation Methods, Research Design
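A common augmentation strategy of the kind this paper illustrates is converting the test statistic itself into an effect size, so readers see magnitude rather than only significance. A sketch for an independent-samples t test (the t value and group sizes are invented):

```python
import math

def d_from_t(t: float, n1: int, n2: int) -> float:
    """Cohen's d recovered from an independent-samples t statistic."""
    return t * math.sqrt(1 / n1 + 1 / n2)

def r_from_t(t: float, df: int) -> float:
    """Point-biserial r from the same t statistic."""
    return t / math.sqrt(t**2 + df)

t, n1, n2 = 2.10, 40, 40
print(round(d_from_t(t, n1, n2), 3), round(r_from_t(t, n1 + n2 - 2), 3))
```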
Peer reviewed
Orwin, Robert G. – Journal of Educational Statistics, 1983
Rosenthal's (1979) concept of fail-safe N has thus far been applied to probability levels exclusively. This note introduces a fail-safe N for effect size. (Author)
Descriptors: Effect Size, Meta Analysis, Research Design, Research Problems
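Orwin's effect-size fail-safe N answers a concrete question: how many unretrieved zero-effect studies would it take to dilute the observed mean effect down to some minimal criterion level? The formula and a worked example (numbers invented):

```python
def orwin_failsafe_n(k: int, mean_d: float, criterion_d: float) -> float:
    """Number of unpublished zero-effect studies needed to drag the
    mean effect size of k studies down to a chosen criterion level:
    N_fs = k * (mean_d - criterion_d) / criterion_d."""
    return k * (mean_d - criterion_d) / criterion_d

# 25 studies averaging d = 0.40; how many null studies reduce it to 0.10?
print(round(orwin_failsafe_n(25, 0.40, 0.10), 1))  # 75.0
```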