Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 1
Since 2016 (last 10 years): 2
Since 2006 (last 20 years): 13
Descriptor
Research Design: 14
Statistical Analysis: 11
Computation: 7
Educational Research: 7
Intervention: 7
Regression (Statistics): 5
Sample Size: 5
Scores: 5
Correlation: 3
Elementary School Students: 3
Error of Measurement: 3
Source
Journal of Educational and Behavioral Statistics: 6
National Center for Education Evaluation and Regional Assistance: 4
Evaluation Review: 1
Mathematica Policy Research, Inc.: 1
National Center for Education Research: 1
Society for Research on Educational Effectiveness: 1
Author
Schochet, Peter Z.: 14
Chiang, Hanley S.: 1
Lohr, Sharon: 1
Sanders, Elizabeth: 1
Publication Type
Journal Articles: 7
Reports - Evaluative: 7
Reports - Research: 5
Reports - Descriptive: 2
Guides - Non-Classroom: 1
Education Level
Elementary Education: 5
Early Childhood Education: 1
Elementary Secondary Education: 1
Grade 3: 1
Grade 4: 1
Grade 5: 1
Preschool Education: 1
Audience
Researchers: 1
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2022
This article develops new closed-form variance expressions for power analyses for commonly used difference-in-differences (DID) and comparative interrupted time series (CITS) panel data estimators. The main contribution is to incorporate variation in treatment timing into the analysis. The power formulas also account for other key design features…
Descriptors: Comparative Analysis, Statistical Analysis, Sample Size, Measurement Techniques
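As a rough illustration of how closed-form variance expressions like these feed a power analysis, any analytically derived variance of the impact estimator can be plugged into a standard two-sided power calculation. The sketch below is generic and is not the article's DID/CITS formula; the effect size and variance values are assumptions.

```python
# A minimal, generic sketch: power of a two-sided z-test given a closed-form
# variance of the impact estimator. Not the article's DID/CITS derivation.
from scipy.stats import norm

def power_two_sided(effect_size, var_impact, alpha=0.05):
    """Approximate power for detecting a true effect of size effect_size."""
    se = var_impact ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    return (norm.cdf(effect_size / se - z_crit)
            + norm.cdf(-effect_size / se - z_crit))

# Example with an assumed impact-estimator variance of 0.01
print(round(power_two_sided(0.25, 0.01), 2))  # about 0.71
```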
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2020
This article discusses estimation of average treatment effects for randomized controlled trials (RCTs) using grouped administrative data to help improve data access. The focus is on design-based estimators, derived using the building blocks of experiments, that are conducive to grouped data for a wide range of RCT designs, including clustered and…
Descriptors: Randomized Controlled Trials, Data Analysis, Research Design, Multivariate Analysis
Lohr, Sharon; Schochet, Peter Z.; Sanders, Elizabeth – National Center for Education Research, 2014
Suppose an education researcher wants to test the impact of a high school drop-out prevention intervention in which at-risk students attend classes to receive intensive summer school instruction. The district will allow the researcher to randomly assign students to the treatment classes or to the control group. Half of the students (the treatment…
Descriptors: Educational Research, Research Design, Data Analysis, Intervention
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2015
This report presents the statistical theory underlying the "RCT-YES" software that estimates and reports impacts for RCTs for a wide range of designs used in social policy research. The report discusses a unified, non-parametric design-based approach for impact estimation using the building blocks of the Neyman-Rubin-Holland causal…
Descriptors: Statistics, Computer Software, Inferences, Research Design
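For readers unfamiliar with the design-based approach, a minimal sketch of its building blocks for a simple non-clustered RCT, the difference-in-means impact estimator with the conservative Neyman variance estimator, might look as follows. This is an illustration only, not RCT-YES code, and the simulated data are assumptions.

```python
# Illustrative design-based estimator for a simple RCT: difference in means
# plus the conservative Neyman variance estimate. Not RCT-YES source code.
import numpy as np

def neyman_impact(y_treat, y_control):
    """Return the impact estimate and its (conservative) standard error."""
    y_t, y_c = np.asarray(y_treat, float), np.asarray(y_control, float)
    impact = y_t.mean() - y_c.mean()
    var = y_t.var(ddof=1) / len(y_t) + y_c.var(ddof=1) / len(y_c)
    return impact, var ** 0.5

rng = np.random.default_rng(0)
est, se = neyman_impact(rng.normal(0.2, 1, 500), rng.normal(0.0, 1, 500))
print(round(est, 3), round(se, 3))
```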
Schochet, Peter Z. – Society for Research on Educational Effectiveness, 2012
This article introduces an alternative impact parameter for group-based RCTs with student mobility--the survivor average causal effect ("SACE")--that pertains to the subpopulation of original cohort students who would remain in their baseline study schools in either the treatment or control condition. The "SACE" parameter has a clear…
Descriptors: Statistical Analysis, Student Mobility, Intervention, Outcomes of Treatment
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2013
In school-based randomized control trials (RCTs), a common design is to follow student cohorts over time. For such designs, education researchers usually focus on the place-based (PB) impact parameter, which is estimated using data collected on all students enrolled in the study schools at each data collection point. A potential problem with this…
Descriptors: Student Mobility, Scientific Methodology, Research Design, Intervention
Schochet, Peter Z.; Chiang, Hanley S. – Journal of Educational and Behavioral Statistics, 2011
In randomized control trials (RCTs) in the education field, the complier average causal effect (CACE) parameter is often of policy interest, because it pertains to intervention effects for students who receive a meaningful dose of treatment services. This article uses a causal inference and instrumental variables framework to examine the…
Descriptors: Computation, Identification, Educational Research, Research Design
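In this setting the CACE is commonly estimated with the Bloom/instrumental-variables ratio of the intent-to-treat effect on outcomes to the intent-to-treat effect on service receipt. The sketch below shows that standard estimator with made-up data; it is not code from the article, and the variable names are illustrative.

```python
# Standard Bloom/IV estimator for the complier average causal effect (CACE).
# Illustrative only; data and names are assumptions, not from the article.
import numpy as np

def cace_bloom(y, assigned, received):
    """CACE = ITT effect on the outcome / ITT effect on service receipt."""
    y, z, d = (np.asarray(a, float) for a in (y, assigned, received))
    itt_outcome = y[z == 1].mean() - y[z == 0].mean()
    itt_receipt = d[z == 1].mean() - d[z == 0].mean()  # compliance-rate difference
    return itt_outcome / itt_receipt

z = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # random assignment
d = np.array([1, 1, 1, 0, 0, 0, 0, 0])   # 75% of the treatment group takes up services
y = np.array([3., 4., 5., 2., 2., 3., 2., 1.])
print(cace_bloom(y, z, d))   # ITT of 1.5 scaled by 1/0.75 = 2.0
```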
Schochet, Peter Z. – Evaluation Review, 2009
In social policy evaluations, the multiple testing problem occurs due to the many hypothesis tests that are typically conducted across multiple outcomes and subgroups, which can lead to spurious impact findings. This article discusses a framework for addressing this problem that balances Types I and II errors. The framework involves specifying…
Descriptors: Policy, Evaluation, Testing Problems, Hypothesis Testing
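For context, one widely used multiplicity adjustment is the Benjamini-Hochberg false discovery rate procedure. The sketch below shows only that single correction and should not be read as the article's framework, which involves more than a p-value adjustment; the example p-values are assumptions.

```python
# Benjamini-Hochberg false discovery rate procedure, shown as one common
# multiplicity adjustment. Illustrative only; not the article's framework.
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of hypotheses rejected at false discovery rate q."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= q * rank / m:
            max_k = rank
    return sorted(order[:max_k])

print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30, 0.75]))  # -> [0, 1]
```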
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2009
This article examines theoretical and empirical issues related to the statistical power of impact estimates under clustered regression discontinuity (RD) designs. The theory is grounded in the causal inference and hierarchical linear modeling literature, and the empirical work focuses on common designs used in education research to test…
Descriptors: Statistical Analysis, Regression (Statistics), Educational Research, Evaluation
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2008
Pretest-posttest experimental designs are often used in randomized control trials (RCTs) in the education field to improve the precision of the estimated treatment effects. For logistical reasons, however, pretest data are often collected after random assignment, so that including them in the analysis could bias the posttest impact estimates. Thus,…
Descriptors: Pretests Posttests, Pretesting, Scores, Intervention
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2008
This report examines theoretical and empirical issues related to the statistical power of impact estimates under clustered regression discontinuity (RD) designs. The theory is grounded in the causal inference and hierarchical linear modeling (HLM) literature, and the empirical work focuses on commonly-used designs in education research to test intervention effects on…
Descriptors: Research Methodology, Models, Regression (Statistics), Sample Size
Schochet, Peter Z. – Journal of Educational and Behavioral Statistics, 2008
This article examines theoretical and empirical issues related to the statistical power of impact estimates for experimental evaluations of education programs. The author considers designs where random assignment is conducted at the school, classroom, or student level, and employs a unified analytic framework using statistical methods from the…
Descriptors: Elementary School Students, Research Design, Standardized Tests, Program Evaluation
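A standard closed-form expression in this literature gives the minimum detectable effect size (MDES) for a design that randomizes whole schools. The sketch below is a hedged illustration with assumed parameter values, not necessarily the article's exact notation or covariate adjustments.

```python
# Minimum detectable effect size (MDES) for a two-level clustered RCT in which
# schools are randomized and students are nested within schools. Illustrative
# sketch with assumed parameters; not the article's exact formulation.
from scipy.stats import t

def mdes_cluster_rct(n_schools, students_per_school, icc,
                     prop_treated=0.5, alpha=0.05, power=0.80):
    """MDES in standard-deviation units for a standardized outcome."""
    df = n_schools - 2                      # degrees of freedom for the school-level contrast
    multiplier = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    p = prop_treated
    var = (icc / (p * (1 - p) * n_schools)
           + (1 - icc) / (p * (1 - p) * n_schools * students_per_school))
    return multiplier * var ** 0.5

# Example: 40 schools, 60 students each, intraclass correlation 0.15
print(round(mdes_cluster_rct(40, 60, 0.15), 3))   # about 0.37 SD
```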
Schochet, Peter Z. – National Center for Education Evaluation and Regional Assistance, 2009
This paper examines the estimation of two-stage clustered RCT designs in education research using the Neyman causal inference framework that underlies experiments. The key distinction between the considered causal models is whether potential treatment and control group outcomes are considered to be fixed for the study population (the…
Descriptors: Control Groups, Causal Models, Statistical Significance, Computation
Schochet, Peter Z. – Mathematica Policy Research, Inc., 2005
This paper examines issues related to the statistical power of impact estimates for experimental evaluations of education programs. The focus is on "group-based" experimental designs, because many studies of education programs involve random assignment at the group level (for example, at the school or classroom level) rather than at the student…
Descriptors: Statistical Analysis, Evaluation Methods, Program Evaluation, Research Design