Showing 1 to 15 of 180 results
Peer reviewed
Rosanna Cole – Sociological Methods & Research, 2024
The use of inter-rater reliability (IRR) methods may provide an opportunity to improve the transparency and consistency of qualitative case study data analysis in terms of the rigor of how codes and constructs have been developed from the raw data. Few articles on qualitative research methods in the literature conduct IRR assessments, or they neglect to…
Descriptors: Interrater Reliability, Error of Measurement, Evaluation Methods, Research Methodology
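Not part of the listing itself, but a minimal sketch of the kind of IRR assessment the Cole abstract refers to: Cohen's kappa for two coders, computed from scratch with hypothetical labels (the function name and data are illustrative, not from the article).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    kappa = (p_observed - p_expected) / (1 - p_expected)
    """
    n = len(rater_a)
    # Proportion of items on which the two raters assign the same code
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal code frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to six text segments by two coders
a = ["x", "x", "y", "y", "x", "y"]
b = ["x", "x", "y", "x", "x", "y"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance, which is why it is preferred over raw percent agreement when codes are unevenly distributed.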
Peer reviewed
Manolov, Rumen; Tanious, René; Fernández-Castilla, Belén – Journal of Applied Behavior Analysis, 2022
In science in general and in the context of single-case experimental designs, replication of the effects of the intervention within and/or across participants or experiments is crucial for establishing causality and for assessing the generality of the intervention effect. Specific developments and proposals for assessing whether an effect has been…
Descriptors: Intervention, Behavioral Science Research, Replication (Evaluation), Research Design
Peer reviewed
Solomon, Benjamin G.; Howard, Taylor K.; Stein, Brit'ny L. – Journal of Behavioral Education, 2015
The use of single-case effect sizes (SCESs) has increased in the intervention literature. Meta-analyses based on single-case data have also increased in popularity. However, few researchers who have adopted these metrics have provided an adequate rationale for their selection. We review several important statistical assumptions that should be…
Descriptors: Effect Size, Intervention, Statistical Analysis, Evaluation Methods
Peer reviewed
Heyvaert, Mieke; Wendt, Oliver; Van den Noortgate, Wim; Onghena, Patrick – Journal of Special Education, 2015
Reporting standards and critical appraisal tools serve as beacons for researchers, reviewers, and research consumers. Parallel to existing guidelines for researchers to report and evaluate group-comparison studies, single-case experimental (SCE) researchers are in need of guidelines for reporting and evaluating SCE studies. A systematic search was…
Descriptors: Standards, Research Methodology, Comparative Analysis, Experiments
Peer reviewed
Ryan, Mary – Assessment & Evaluation in Higher Education, 2015
Evaluation in higher education is an evolving social practice; that is, it involves what people, institutions and broader systems do and say, how they do and say it, what they value, the effects of these practices and values, and how meanings are ascribed. The textual products (verbal, written, visual and gestural) that inform and are produced by,…
Descriptors: College Students, Research Methodology, Discovery Processes, Discourse Analysis
Peer reviewed
Hedges, Larry V.; Pustejovsky, James E.; Shadish, William R. – Research Synthesis Methods, 2013
Single-case designs are a class of research methods for evaluating treatment effects by measuring outcomes repeatedly over time while systematically introducing different conditions (e.g., treatment and control) to the same individual. The designs are used across fields such as behavior analysis, clinical psychology, special education, and…
Descriptors: Effect Size, Research Design, Research Methodology, Behavioral Science Research
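As an illustration of the single-case effect sizes discussed in the Hedges, Pustejovsky, and Shadish abstract, here is a naive within-case standardized mean difference between a baseline phase and a treatment phase. The function name and data are hypothetical, and this simple version ignores autocorrelation across repeated measurements, one of the problems their paper addresses.

```python
from statistics import mean, stdev

def phase_effect_size(baseline, treatment):
    """Naive within-case effect size:
    (treatment mean - baseline mean) / pooled sample SD of the two phases.
    """
    na, nb = len(baseline), len(treatment)
    sa, sb = stdev(baseline), stdev(treatment)
    # Pooled standard deviation across the two phases
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (mean(treatment) - mean(baseline)) / pooled

# Hypothetical repeated measurements for one participant
baseline = [3, 4, 3, 5, 4]
treatment = [7, 8, 6, 9, 8]
print(round(phase_effect_size(baseline, treatment), 2))  # 3.8
```

Because successive observations of one individual are typically correlated, this statistic is not directly comparable to a between-groups Cohen's d; that non-comparability motivates the corrected estimators in the single-case literature.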
Peer reviewed
Koehn, Peter H.; Uitto, Juha I. – Higher Education: The International Journal of Higher Education and Educational Planning, 2014
Since the mid-1970s, a series of international declarations that recognize the critical link between environmental sustainability and higher education have been endorsed and signed by universities around the world. While academic initiatives in sustainability are blossoming, higher education lacks a comprehensive evaluation framework that is…
Descriptors: Sustainability, Program Evaluation, Curriculum Evaluation, Educational Research
Mukai, Emi – ProQuest LLC, 2012
The primary concern of this thesis is how we can achieve rigorous testability when we set the properties of the Computational System (hypothesized to be at the center of the language faculty) as our object of inquiry and informant judgments as a tool to construct and/or evaluate our hypotheses concerning the properties of the Computational System.…
Descriptors: Japanese, Form Classes (Languages), Syntax, Heuristics
Peer reviewed
Madeira, Ana C.; Carravilla, Maria Antonia; Oliveira, Jose F.; Costa, Carlos A. V. – Higher Education Policy, 2011
The purpose of this paper is to present a methodology that allows higher education institutions (HEIs) to promote, to evaluate and to report on sustainability. The ultimate goal of the aforementioned methodology is to help HEIs achieve sustainability. First, a model entitled Sustainability in Higher Education Institutions (SusHEI) that generally…
Descriptors: Higher Education, Sustainability, Evaluation Methods, Program Evaluation
Parker, Tiffany, Ed. – Online Submission, 2015
The NEAIR 2015 Conference Proceedings is a compilation of papers presented at the Burlington, VT, conference. Papers in this document include: (1) Strategies to Analyze Course and Teaching Evaluation Data (Kati Li); (2) Using a Mixed Methods Approach to Assess a Leadership Mentoring Program (Betty Harper); (3) Flagship Institutions and the Struggle…
Descriptors: Conference Papers, Evaluation Methods, Research Methodology, Educational Research
Peer reviewed
Burde, Dana – Comparative Education Review, 2012
Randomized trials have experienced a marked surge in endorsement and popularity in education research in the past decade. This surge reignited paradigm debates and spurred qualitative critics to accuse these experimental designs of eclipsing qualitative research. This article reviews a current iteration of this debate and examines two randomized…
Descriptors: Evaluation Methods, Research Methodology, Research Design, Qualitative Research
Peer reviewed
Holden, Meg – Social Indicators Research, 2009
Testing the validity of indicator systems is a task almost always left to the scientific community, in standard practice and in keeping with the quest for objectivity prevalent in politics and in society as a whole. This paper calls for a reinvigorated agenda within indicators research to question this practice and develop alternative…
Descriptors: Citizen Participation, Validity, Foreign Countries, Research Methodology
Peer reviewed
Cotton, Debby R. E.; Stokes, Alison; Cotton, Peter A. – Journal of Geography in Higher Education, 2010
Much pedagogic research undertaken in geography and other disciplines relies on post-hoc methods such as surveys or interviews to investigate the student experience of higher education (often based on self-reports of behaviour). However, observation of students provides a far more direct route to obtain information about their behaviour, and there…
Descriptors: Educational Research, Observation, Student Experience, Evaluation Methods
Peer reviewed
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert; Vangeneugden, Tony – Psychometrika, 2009
Reliability captures the influence of error on a measurement and, in the classical setting, is defined as one minus the ratio of the error variance to the total variance. Laenen, Alonso, and Molenberghs ("Psychometrika" 73:443-448, 2007) proposed an axiomatic definition of reliability and introduced the R_T coefficient, a measure of…
Descriptors: Error of Measurement, Case Studies, Simulation, Reliability
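The classical definition quoted in the Laenen et al. abstract can be stated directly in code. A minimal sketch with hypothetical variance values (this is the textbook classical-test-theory formula, not the R_T coefficient the paper itself proposes):

```python
def reliability(error_variance, total_variance):
    """Classical reliability: 1 - (error variance / total variance).

    Equals 1 when measurements are error-free, and approaches 0 as
    error variance dominates the total variance.
    """
    return 1 - error_variance / total_variance

# Hypothetical example: error variance 2.0, total variance 10.0
print(reliability(2.0, 10.0))  # 0.8
```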
Peer reviewed
PDF on ERIC (full text)
What Works Clearinghouse, 2011
With its critical assessments of scientific evidence on the effectiveness of education programs, policies, and practices (referred to as "interventions"), and a range of products summarizing this evidence, the What Works Clearinghouse (WWC) is an important part of the Institute of Education Sciences' strategy to use rigorous and relevant…
Descriptors: Standards, Access to Information, Information Management, Guides