Peer reviewed
ERIC Number: ED663438
Record Type: Non-Journal
Publication Date: 2024-Sep-18
Pages: N/A
Abstractor: As Provided
ISBN: N/A
ISSN: N/A
EISSN: N/A
Framework for Embedding Equity Principles and Practices into Experimental Subgroup Impact Analyses
Laura Peck; Haisheng Yang
Society for Research on Educational Effectiveness
Background/Context: The reckoning with racial injustice and growing inequality that have become hallmarks of the early 2020s in the United States have implications for impact analysis and the evidence it produces for public policy decision-making. Various researchers have highlighted the shortcomings of impact analyses in estimating the effects of policy changes on historically marginalized, disadvantaged, or underrepresented groups. While impact analyses play a central role in policy decisions, quantitative work has tended to lag behind qualitative research in its ability to address the experiences of these groups. Purpose/Objective/Research Question: This paper presents a framework that links equity-focused principles and practices to subgroup impact analyses. The framework identifies three essential points in subgroup analyses that should be part of standard impact evaluations. First, in the evaluation planning phase, researchers should define the groups to be included in the analyses, considering issues of sampling, power, and external validity. Second, during the analysis phase, researchers should understand the extent and nature of baseline differences and choose analytic methods that represent how a subgroup might respond to program treatment and other factors. Third, during the interpretation and reporting phases, researchers should avoid interpreting a subgroup impact--or differential subgroup impact--as being causally related to subgroup identity, and should use language thoughtfully to represent the subgroup impacts accurately while avoiding language that alienates individuals in the subgroup. Setting || Population/Participants/Subjects || Intervention/Program/Practice: As a methodological study, there is no specific, single setting, population, or intervention of focus.
That said, the work focuses on personal-identity-based subgroups and intersectionally defined subgroups--reflecting on the implications of sample size, power, and effect size for analyses and reporting, both from single studies and across studies. Research Design: The paper focuses specifically on experimental evaluation designs that randomize individuals or groups into an intervention group that gains access to a program or into a control group. The insights can also be extended to quasi-experimental impact analyses. Data Collection and Analysis || Findings/Results: Illustrations of the paper's principles in practice come from existing evaluations' public reports in both postsecondary education/adult training and financial education/housing assistance contexts. We use those studies' approaches as examples of how subgroup impact analyses have been treated--and how they could be treated better--as well as reflecting on the studies' results and how they might be better interpreted by bringing a stronger equity lens to the work. Conclusions: Our paper offers the following tentative conclusions, on which we will elaborate in the presentation and discussion: (1) Quantitatively capturing intersectionality is important and feasible. Although any given study may face sample size constraints, we suggest reporting subgroup and intersectional subgroup impacts regardless, for multiple reasons. A subgroup impact estimate from a single study is the best estimate we have, and it describes a real difference between two groups, even if that estimate is somewhat uncertain. Although it might not be strong evidence from a research standpoint, it may be sufficient--even if just suggestive--for policy purposes. Moreover, across multiple studies, the meta-analysis of evaluation results holds hope for shedding light on those subgroup impacts. Deliberate planning for and across multiple studies might make those future meta-analyses stronger still.
(2) Experimental impact analyses--of both overall and subgroup impacts--are always better interpreted alongside companion implementation evaluation research and, potentially, other types of qualitative data and analysis. When quantitative results are limited, qualitative insights can aid in interpretation and caveating, as well as in envisioning future needed work. Indeed, in some of our work, we have conducted qualitative follow-up with study sample members to help explain otherwise inexplicable (and perverse) subgroup impact results, which offers a potentially useful model for future practice. (3) A closing observation: the more impact evaluations can accommodate subgroup and intersectional subgroup analyses, the stronger the generalization from those evaluations. That is, analyzing and reporting subgroup impacts and differential subgroup impacts has implications not only for the internal validity and causal claims of a given study, but also for the generalization of study results to other people, places, and times.
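To make the abstract's core quantities concrete, the following is a minimal sketch (not from the paper; all data are simulated and the effect sizes are invented for illustration) of how an experimental subgroup impact and a differential subgroup impact can be estimated by difference-in-means within each subgroup. The key interpretive point from the abstract is encoded in the final comment: the differential impact describes a difference between groups, not a causal effect of subgroup identity itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

treat = rng.integers(0, 2, n)   # random assignment to intervention (1) or control (0)
sub = rng.integers(0, 2, n)     # subgroup membership indicator (e.g., identity-based)

# Simulated outcome: overall treatment effect of 2.0, plus an extra +1.0
# for treated members of subgroup 1 (values chosen for illustration only).
y = 1.0 + 2.0 * treat + 1.0 * treat * sub + rng.normal(0.0, 1.0, n)

def subgroup_impact(mask):
    """Difference-in-means impact estimate within the subgroup given by mask."""
    treated = y[mask & (treat == 1)]
    control = y[mask & (treat == 0)]
    return treated.mean() - control.mean()

impact_sub0 = subgroup_impact(sub == 0)
impact_sub1 = subgroup_impact(sub == 1)

# Differential subgroup impact: a descriptive contrast between the two
# subgroup impacts -- NOT a causal effect of subgroup identity itself.
differential_impact = impact_sub1 - impact_sub0
```

In practice the same estimates are usually obtained from a regression with a treatment-by-subgroup interaction term, which also yields standard errors; with small subgroup cells those standard errors will be wide, which is the sample-size and power concern the framework raises at the planning stage.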
Society for Research on Educational Effectiveness. 2040 Sheridan Road, Evanston, IL 60208. Tel: 202-495-0920; e-mail: contact@sree.org; Web site: https://www.sree.org/
Publication Type: Reports - Evaluative
Education Level: N/A
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Society for Research on Educational Effectiveness (SREE)
Grant or Contract Numbers: N/A