Showing 1 to 15 of 51 results
Peer reviewed
Matthew J. Mayhew; Christa E. Winkler – Journal of Postsecondary Student Success, 2024
Higher education professionals are often tasked with providing evidence to stakeholders that programs, services, and practices implemented on their campuses contribute to student success. Furthermore, in the absence of a solid base of evidence related to effective practices, higher education researchers and practitioners are left questioning what…
Descriptors: Higher Education, Educational Practices, Evidence Based Practice, Program Evaluation
Peer reviewed
Maxwell, Bronwen; Stevens, Anna; Demack, Sean; Coldwell, Mike; Wolstenholme, Claire; Reaney-Wood, Sarah; Stiell, Bernadette; Lortie-Forgues, Hugues – Education Endowment Foundation, 2021
The Education Endowment Foundation (EEF)'s mission is to break the link between family income and educational achievement. This is achieved through summarising the best available evidence in plain language, generating new evidence of 'what works' to improve teaching and learning, and supporting teachers and school leaders to use research evidence…
Descriptors: Foreign Countries, Disadvantaged Youth, Elementary School Students, Secondary School Students
Peer reviewed
Klerman, Jacob Alex; Olsho, Lauren E. W.; Bartlett, Susan – American Journal of Evaluation, 2015
While regression discontinuity has usually been applied retrospectively to secondary data, it is even more attractive when applied prospectively. In a prospective design, data collection can be focused on cases near the discontinuity, thereby improving internal validity and substantially increasing precision. Furthermore, such prospective…
Descriptors: Regression (Statistics), Evaluation Methods, Evaluation Problems, Probability
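The prospective design Klerman, Olsho, and Bartlett describe concentrates data collection on cases near the assignment cutoff. A minimal sketch of that idea (the function names, cutoff, and bandwidth here are hypothetical illustrations, not the authors' actual procedure):

```python
import random

def rd_assign(score, cutoff=50):
    """Sharp regression-discontinuity assignment: treatment status is
    determined entirely by whether the score clears the cutoff."""
    return score >= cutoff

def prospective_sample(units, cutoff=50, bandwidth=5):
    """Prospective RD data collection: focus on cases close to the
    discontinuity, where treated/untreated comparisons are most credible."""
    return [u for u in units if abs(u["score"] - cutoff) <= bandwidth]

# Illustrative population with a continuous assignment score.
units = [{"id": i, "score": random.uniform(0, 100)} for i in range(1000)]
near = prospective_sample(units, cutoff=50, bandwidth=5)
for u in near:
    u["treated"] = rd_assign(u["score"])
```

Restricting follow-up data collection to the `near` subsample is what yields the precision gain the abstract refers to: the same survey budget covers a larger share of the analytically relevant cases.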
Peer reviewed
Akers, Lauren; Resch, Alexandra; Berk, Jillian – National Center for Education Evaluation and Regional Assistance, 2014
This guide for district and school leaders shows how to recognize opportunities to embed randomized controlled trials (RCTs) into planned policies or programs. Opportunistic RCTs can generate strong evidence for informing education decisions--with minimal added cost and disruption. The guide also outlines the key steps to conduct RCTs and responds…
Descriptors: School Districts, Educational Research, Guides, Program Evaluation
Peer reviewed
Osadebe, P. U. – Education, 2014
The study evaluated the objectives of the Universal Basic Education (UBE) programme in Delta State. It considered the extent to which each objective was achieved. A research question on the extent to which the UBE objectives were achieved guided the study. Two hypotheses were tested. A sample of 300 students was randomly drawn through the use of…
Descriptors: Foreign Countries, Equal Education, Program Implementation, Program Effectiveness
Bolly, Madina; Jonas, Nicolas – UNESCO Institute for Lifelong Learning, 2015
Action Research on Measuring Literacy Programme Participants' Learning Outcomes (RAMAA) aims to develop, implement and collaborate on the creation of a methodological approach to measure acquired learning and study the various factors that influence its development. This report examines how RAMAA I has been implemented over the past four years in…
Descriptors: Action Research, Outcome Measures, Program Implementation, Adult Literacy
Spybrook, Jessaca; Lininger, Monica; Cullen, Anne – Society for Research on Educational Effectiveness, 2011
The purpose of this study is to extend the work of Spybrook and Raudenbush (2009) and examine how the research designs and sample sizes changed from the planning phase to the implementation phase in the first wave of studies funded by IES. The authors examine the impact of the changes in terms of the changes in the precision of the study from the…
Descriptors: Evaluation Criteria, Sampling, Research Design, Planning
Goldring, Ellen; Grissom, Jason A.; Neumerski, Christine M.; Murphy, Joseph; Blissett, Richard; Porter, Andy – Wallace Foundation, 2015
This three-volume report describes the "SAM (School Administration Manager) process," an approach that about 700 schools around the nation are using to direct more of principals' time and effort to improve teaching and learning in classrooms. Research has shown that a principal's instructional leadership is second only to teaching among…
Descriptors: Instructional Leadership, Principals, Administrator Role, Educational Improvement
Peer reviewed
Dubois, Cathy; Long, Lori – International Journal on E-Learning, 2012
E-learning researchers face considerable challenges in creating meaningful and generalizable studies due to the complex nature of this dynamic training medium. Our experience in conducting workplace e-learning research led us to create this guide for planning research on e-learning. We share the unanticipated complications we encountered in our…
Descriptors: Electronic Learning, Course Content, Instructional Design, Program Implementation
Peer reviewed
Hitchcock, John H.; Kurki, Anja; Wilkins, Chuck; Dimino, Joseph; Gersten, Russell – Practical Assessment, Research & Evaluation, 2009
When attempting to determine if an intervention has a causal impact, the "gold standard" of program evaluation is the randomized controlled trial (RCT). In education studies, random assignment is rarely feasible at the student level, making RCTs harder to conduct. School-level assignment is more common, but this often requires considerable resources…
Descriptors: Intervention, Reading Instruction, Program Effectiveness, Reading Programs
Peer reviewed
Silvia, Suyapa; Blitstein, Jonathan; Williams, Jason; Ringwalt, Chris; Dusenbury, Linda; Hansen, William – National Center for Education Evaluation and Regional Assistance, 2010
This is the first of two reports that summarize the findings from an impact evaluation of a violence prevention intervention for middle schools. This report discusses findings after 1 year of implementation. A forthcoming report will discuss the findings after 2 years and 3 years of implementation. In 2004, the U.S. Department of Education (ED)…
Descriptors: Middle Schools, Violence, Prevention, Intervention
Templin, Patricia A. – 1981
This handbook is intended to help educational evaluators use still photography in designing, conducting, and reporting evaluations of educational programs. It describes techniques for using a visual documentary approach to program evaluation that features data collected with a camera. The emphasis is on the aspects of educational evaluation…
Descriptors: Data Collection, Elementary Secondary Education, Evaluation Methods, Photography
Peer reviewed
Champion, Robby – Journal of Staff Development, 2002
Limiting data collection to a sample group is one way to increase effectiveness in dealing with data. The paper describes how to draw a sample group (random sampling, stratified random sampling, purposeful sampling, and convenience or opportunity sampling) and discusses how to determine the size of the sample group. (SM)
Descriptors: Data Analysis, Data Collection, Elementary Secondary Education, Evaluation Methods
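Two of the sampling strategies Champion lists can be sketched in a few lines. This is a generic illustration under assumed data (the `grade` stratification key and sample sizes are hypothetical), not the paper's own procedure:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Draw n units so every unit has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_random_sample(population, key, n_per_stratum, seed=0):
    """Partition the population into strata by `key`, then draw a
    simple random sample of fixed size within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for item in population:
        strata.setdefault(key(item), []).append(item)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
    return sample

# Illustrative roster: 90 students spread evenly across 3 grade bands.
students = [{"id": i, "grade": i % 3} for i in range(90)]
srs = simple_random_sample(students, 10)
strat = stratified_random_sample(students, key=lambda s: s["grade"], n_per_stratum=5)
```

Stratifying guarantees each subgroup is represented, which a simple random draw of the same total size does not.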
Dobson, Douglas; Cook, Thomas J. – Evaluation Quarterly, 1979
A major problem in social science research is that of successfully carrying out the random assignment of persons to experimental and control groups. In this study a computer-based random assignment procedure operated successfully on a weekly basis for 17 consecutive weeks in a program serving over 360 ex-offenders. (CTM)
Descriptors: Computer Programs, Criminals, Data Collection, Field Studies
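A computer-based weekly random assignment like the one Dobson and Cook report can be sketched as follows. This is an assumed, minimal illustration (the function name and per-week seeding scheme are my own), not the study's actual program:

```python
import random

def weekly_random_assignment(participants, week, seed_base=1979):
    """Randomly split one week's intake into experimental and control
    groups. Seeding per week makes each batch's assignment reproducible,
    which helps audit the randomization over a long-running field trial."""
    rng = random.Random(seed_base + week)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# Example: one week's intake of ten participant IDs.
groups = weekly_random_assignment(range(10), week=1)
```

Run once per weekly intake, this keeps assignment out of intake workers' hands, which is the main threat to randomization the abstract alludes to.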
Peer reviewed
Holosko, Michael J. – Canadian Journal of Program Evaluation/La Revue canadienne d'evaluation de programme, 1996
A case study approach taken with three years of data on a hospital's trauma program indicates that, although service user data is important, its political importance far outweighs its evaluation value and utility, especially for hard-to-access samples. Implications of this challenge to traditional assumptions for program evaluation are discussed. (SLD)
Descriptors: Case Studies, Data Collection, Evaluation Methods, Evaluation Utilization