Publication Date
  In 2025: 0
  Since 2024: 0
  Since 2021 (last 5 years): 1
  Since 2016 (last 10 years): 8
  Since 2006 (last 20 years): 13
Descriptor
  Effect Size: 13
  Meta Analysis: 13
  Intervention: 6
  Statistical Analysis: 6
  Computation: 5
  Case Studies: 4
  Research Design: 4
  Simulation: 4
  Measurement Techniques: 3
  Regression (Statistics): 3
  Behavior Problems: 2
Source
  Grantee Submission: 3
  Remedial and Special Education: 3
  Journal of Educational and Behavioral Statistics: 2
  Research Synthesis Methods: 2
  Society for Research on Educational Effectiveness: 2
  Online Submission: 1
Publication Type
  Reports - Research: 13
  Journal Articles: 8
  Information Analyses: 2
Education Level
  Early Childhood Education: 1
  Elementary Education: 1
  Grade 1: 1
  Higher Education: 1
  Primary Education: 1
Joshi, Megha; Pustejovsky, James E.; Beretvas, S. Natasha – Research Synthesis Methods, 2022
The most common and well-known meta-regression models work under the assumption that there is only one effect size estimate per study and that the estimates are independent. However, meta-analytic reviews of social science research often include multiple effect size estimates per primary study, leading to dependence in the estimates. Some…
Descriptors: Meta Analysis, Regression (Statistics), Models, Effect Size
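The entry above contrasts the standard meta-regression model, which assumes one independent effect size estimate per study, with the dependent-estimates case. As a point of reference, here is a minimal sketch of that standard model, an inverse-variance weighted least squares fit on hypothetical data; the model-based standard errors it produces are exactly what becomes unreliable once estimates within a study are correlated. This is not the working model the paper itself develops.

    # Minimal sketch of a standard inverse-variance weighted meta-regression.
    # All numbers below are hypothetical.
    import numpy as np

    y = np.array([0.20, 0.35, 0.10, 0.55, 0.42])   # effect size estimates, one per study
    v = np.array([0.02, 0.05, 0.01, 0.04, 0.03])   # sampling variances
    x = np.array([0.0, 1.0, 0.0, 1.0, 1.0])        # a single study-level moderator

    X = np.column_stack([np.ones_like(x), x])       # intercept + moderator
    W = np.diag(1.0 / v)                            # inverse-variance weights

    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    beta = XtWX_inv @ X.T @ W @ y                   # weighted least squares coefficients
    se = np.sqrt(np.diag(XtWX_inv))                 # model-based standard errors

    print("coefficients:", beta)
    print("standard errors:", se)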
Pustejovsky, James E.; Rodgers, Melissa A. – Research Synthesis Methods, 2019
Publication bias and other forms of outcome reporting bias are critical threats to the validity of findings from research syntheses. A variety of methods have been proposed for detecting selective outcome reporting in a collection of effect size estimates, including several methods based on assessment of asymmetry of funnel plots, such as the…
Descriptors: Effect Size, Regression (Statistics), Statistical Analysis, Error of Measurement
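The entry above refers to funnel-plot-asymmetry methods for detecting selective outcome reporting. As one illustration, the following minimal sketch runs an Egger-type regression test on hypothetical effect sizes and standard errors: standardized effects are regressed on precision, and an intercept far from zero signals small-study asymmetry. This is one of the approaches the abstract alludes to, not the method the authors propose.

    # Minimal sketch of an Egger-type regression test for funnel plot asymmetry.
    # Effect sizes and standard errors below are hypothetical.
    import numpy as np
    from scipy import stats

    y = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.15, 0.30])    # effect estimates
    se = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.08, 0.12])   # standard errors

    z = y / se                        # standardized effects
    precision = 1.0 / se              # predictor in the asymmetry regression
    X = np.column_stack([np.ones_like(precision), precision])

    beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    df = len(z) - 2
    sigma2 = resid @ resid / df
    cov = sigma2 * np.linalg.inv(X.T @ X)
    t_intercept = beta[0] / np.sqrt(cov[0, 0])       # asymmetry test statistic
    p_value = 2 * stats.t.sf(abs(t_intercept), df)

    print(f"intercept t = {t_intercept:.2f}, p = {p_value:.3f}")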
Pustejovsky, James E. – Grantee Submission, 2018
A wide variety of effect size indices have been proposed for quantifying the magnitude of treatment effects in single-case designs. Commonly used measures include parametric indices such as the standardized mean difference, as well as non-overlap measures such as the percentage of non-overlapping data, improvement rate difference, and non-overlap…
Descriptors: Effect Size, Measurement Techniques, Monte Carlo Methods, Observation
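The entry above names several single-case effect size indices. The following minimal sketch, on hypothetical phase data, computes two of them: the percentage of non-overlapping data and a within-case standardized mean difference scaled by the baseline standard deviation (one common, though not the only, scaling choice).

    # Minimal sketch of two single-case effect size indices on hypothetical data.
    # An increase in the outcome is treated as improvement.
    import numpy as np

    baseline = np.array([2, 3, 2, 4, 3], dtype=float)       # phase A observations
    treatment = np.array([5, 6, 4, 7, 6, 8], dtype=float)   # phase B observations

    # Percentage of non-overlapping data: share of treatment observations
    # exceeding the highest baseline observation.
    pnd = 100 * np.mean(treatment > baseline.max())

    # Within-case standardized mean difference, scaled by the baseline SD.
    smd = (treatment.mean() - baseline.mean()) / baseline.std(ddof=1)

    print(f"PND = {pnd:.0f}%")
    print(f"SMD = {smd:.2f}")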
Maggin, Daniel M.; Pustejovsky, James E.; Johnson, Austin H. – Remedial and Special Education, 2017
Group contingencies are recognized as a potent intervention for addressing challenging student behavior in the classroom, with research reviews supporting the use of this intervention platform going back more than four decades. Over this time period, the field of education has increasingly emphasized the role of research evidence for informing…
Descriptors: Contingency Management, Intervention, Behavior Problems, Student Behavior
Zimmerman, Kathleen N.; Pustejovsky, James E.; Ledford, Jennifer R.; Barton, Erin E.; Severini, Katherine E.; Lloyd, Blair P. – Grantee Submission, 2018
Varying methods for evaluating the outcomes of single case research designs (SCD) are currently used in reviews and meta-analyses of interventions. Quantitative effect size measures are often presented alongside visual analysis conclusions. Six measures across two classes--overlap measures (percentage non-overlapping data, improvement rate…
Descriptors: Research Design, Evaluation Methods, Synthesis, Intervention
Common, Eric Alan; Lane, Kathleen Lynne; Pustejovsky, James E.; Johnson, Austin H.; Johl, Liane Elizabeth – Remedial and Special Education, 2017
This systematic review investigated one systematic approach to designing, implementing, and evaluating functional assessment-based interventions (FABI) for use in supporting school-age students with or at-risk for high-incidence disabilities. We field tested several recently developed methods for single-case design syntheses. First, we appraised…
Descriptors: Functional Behavioral Assessment, Intervention, Disabilities, At Risk Students
Pustejovsky, James E. – Grantee Submission, 2018
Methods for meta-analyzing single-case designs (SCDs) are needed to inform evidence-based practice in clinical and school settings and to draw broader and more defensible generalizations in areas where SCDs comprise a large part of the research base. The most widely used outcomes in single-case research are measures of behavior collected using…
Descriptors: Meta Analysis, Case Studies, Evidence Based Practice, Behavioral Science Research
Tipton, Elizabeth; Pustejovsky, James E. – Journal of Educational and Behavioral Statistics, 2015
Meta-analyses often include studies that report multiple effect sizes based on a common pool of subjects or that report effect sizes from several samples that were treated with very similar research protocols. The inclusion of such studies introduces dependence among the effect size estimates. When the number of studies is large, robust variance…
Descriptors: Meta Analysis, Effect Size, Computation, Robustness (Statistics)
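The entry above concerns robust variance estimation for meta-analyses with several effect sizes per study. The sketch below implements only the basic, uncorrected cluster-robust sandwich estimator on hypothetical data, treating each study as a cluster; the small-sample corrections that are the focus of the paper are not implemented here.

    # Minimal sketch of a basic cluster-robust (sandwich) variance estimator
    # for an intercept-only meta-regression with dependent effect sizes.
    # All numbers below are hypothetical.
    import numpy as np

    study = np.array([1, 1, 2, 2, 2, 3, 4, 4])                        # study IDs (clusters)
    y = np.array([0.20, 0.30, 0.10, 0.15, 0.20, 0.50, 0.40, 0.35])    # effect estimates
    v = np.array([0.02, 0.02, 0.03, 0.03, 0.03, 0.05, 0.04, 0.04])    # sampling variances
    X = np.column_stack([np.ones_like(y)])                            # intercept only

    W = np.diag(1.0 / v)
    bread = np.linalg.inv(X.T @ W @ X)
    beta = bread @ X.T @ W @ y
    resid = y - X @ beta

    meat = np.zeros((X.shape[1], X.shape[1]))
    for j in np.unique(study):
        idx = study == j
        Xj, Wj, ej = X[idx], np.diag(1.0 / v[idx]), resid[idx]
        uj = Xj.T @ Wj @ ej
        meat += np.outer(uj, uj)

    V_robust = bread @ meat @ bread                                   # sandwich covariance
    print("coefficient:", beta)
    print("cluster-robust SE:", np.sqrt(np.diag(V_robust)))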
Barton, Erin E.; Pustejovsky, James E.; Maggin, Daniel M.; Reichow, Brian – Remedial and Special Education, 2017
The adoption of methods and strategies validated through rigorous, experimentally oriented research is a core professional value of special education. We conducted a systematic review and meta-analysis examining the experimental literature on Technology-Aided Instruction and Intervention (TAII) using research identified as part of the National…
Descriptors: Intervention, Pervasive Developmental Disorders, Autism, Meta Analysis
Tipton, Elizabeth; Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2015
Randomized experiments are commonly used to evaluate the effectiveness of educational interventions. The goal of the present investigation is to develop small-sample corrections for multiple contrast hypothesis tests (i.e., F-tests) such as the omnibus test of meta-regression fit or a test for equality of three or more levels of a categorical…
Descriptors: Randomized Controlled Trials, Sample Size, Effect Size, Hypothesis Testing
Pustejovsky, James E.; Hedges, Larry V.; Shadish, William R. – Journal of Educational and Behavioral Statistics, 2014
In single-case research, the multiple baseline design is a widely used approach for evaluating the effects of interventions on individuals. Multiple baseline designs involve repeated measurement of outcomes over time and the controlled introduction of a treatment at different times for different individuals. This article outlines a general…
Descriptors: Hierarchical Linear Modeling, Effect Size, Maximum Likelihood Statistics, Computation
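The entry above describes multiple baseline designs, in which treatment is introduced at staggered times across individuals, and a hierarchical modeling approach to estimating effect sizes. The following sketch simulates a small multiple baseline data set and fits a random-intercept model with statsmodels to recover the average treatment effect; the data, the true effect of 1.5, and the model specification are illustrative assumptions, not the article's own analysis.

    # Minimal sketch: simulate a multiple baseline design and fit a
    # random-intercept model to estimate the treatment effect.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    start_sessions = [4, 6, 8, 10]                       # staggered treatment start per case
    rows = []
    for case, start in enumerate(start_sessions):
        case_intercept = 2.0 + rng.normal(0, 0.5)        # case-specific baseline level
        for t in range(12):
            treated = int(t >= start)                    # phase indicator
            y = case_intercept + 1.5 * treated + rng.normal(0, 0.4)
            rows.append((case, t, treated, y))

    data = pd.DataFrame(rows, columns=["case", "time", "treated", "y"])

    # Random-intercept model: fixed treatment effect, case-level random intercepts.
    exog = sm.add_constant(data[["treated"]])
    fit = sm.MixedLM(data["y"], exog, groups=data["case"]).fit()
    print(fit.fe_params)                                 # estimated intercept and treatment effect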
Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R. – Online Submission, 2012
Single case designs are a set of research methods for evaluating treatment effects by assigning different treatments to the same individual and measuring outcomes over time and are used across fields such as behavior analysis, clinical psychology, special education, and medicine. Emerging standards for single case designs have focused attention on…
Descriptors: Research Design, Effect Size, Meta Analysis, Computation
Pustejovsky, James E. – Society for Research on Educational Effectiveness, 2013
Single-case designs (SCDs) are a class of research methods for evaluating intervention effects by taking repeated measurements of an outcome over time on a single case, both before and after the deliberate introduction of a treatment. SCDs are used heavily in fields such as special education, school psychology, social work, and applied behavior…
Descriptors: Case Studies, Behavior Change, Meta Analysis, Observation