Showing 1 to 15 of 51 results
Office of Educational Technology, US Department of Education, 2023
The U.S. Department of Education (Department) is committed to supporting the use of technology to improve teaching and learning and to support innovation throughout educational systems. This report addresses the clear need for sharing knowledge and developing policies for "Artificial Intelligence," a rapidly advancing class of…
Descriptors: Artificial Intelligence, Educational Technology, Technology Uses in Education, Educational Policy
Laura K. Allen; Arthur C. Graesser; Danielle S. McNamara – Grantee Submission, 2023
Assessments of natural language can provide vast information about individuals' thoughts and cognitive processes, but they often rely on time-intensive human scoring, deterring researchers from collecting these sources of data. Natural language processing (NLP) gives researchers the opportunity to implement automated textual analyses across a…
Descriptors: Psychological Studies, Natural Language Processing, Automation, Research Methodology
Peer reviewed
Glazer, Nancy; Wolfe, Edward W. – Applied Measurement in Education, 2020
This introductory article describes how constructed response scoring is carried out, particularly the rater monitoring processes, and illustrates three potential designs for conducting rater monitoring in an operational scoring project. The introduction also presents a framework for interpreting research conducted by those who study the constructed…
Descriptors: Scoring, Test Format, Responses, Predictor Variables
Peer reviewed
Seedhouse, Paul; Satar, Müge – Classroom Discourse, 2023
The same L2 speaking performance may be analysed and evaluated in very different ways by different teachers or raters. We present a new, technology-assisted research design which opens up to investigation the trajectories of convergence and divergence between raters. We tracked and recorded what different raters noticed when, whilst grading a…
Descriptors: Language Tests, English (Second Language), Second Language Learning, Oral Language
Peer reviewed
Yu, Guoxing; Zhang, Jing – Language Assessment Quarterly, 2017
In this special issue on high-stakes English language testing in China, the two articles on computer-based testing (Jin & Yan; He & Min) highlight a number of consistent, ongoing challenges and concerns in the development and implementation of the nationwide IB-CET (Internet Based College English Test) and institutional computer-adaptive…
Descriptors: Foreign Countries, Computer Assisted Testing, English (Second Language), Language Tests
ACT, Inc., 2017
This new ACT publication is an annual report offering meaningful research insights for some of the most pressing questions impacting admissions and enrollment practice. In the first release of this report, ACT research sheds light on the following topics: (1) the practice of super-scoring; (2) STEM major choice; (3) factors impacting retention and…
Descriptors: Higher Education, Educational Research, Scoring, STEM Education
Peer reviewed
Schochet, Peter Z.; Puma, Mike; Deke, John – National Center for Education Evaluation and Regional Assistance, 2014
This report summarizes the complex research literature on quantitative methods for assessing how impacts of educational interventions on instructional practices and student learning differ across students, educators, and schools. It also provides technical guidance about the use and interpretation of these methods. The research topics addressed…
Descriptors: Statistical Analysis, Evaluation Methods, Educational Research, Intervention
Educational Testing Service, 2010
This document describes the breadth of the research that the ETS (Educational Testing Service) Research & Development division is conducting in 2010. This portfolio will be updated in early 2011 to reflect changes to existing projects and new projects that were added after this document was completed. The research described in this portfolio falls…
Descriptors: Portfolios (Background Materials), Testing Programs, Educational Testing, Private Agencies
Wood, Jess; Joe, Jilliam N.; Cantrell, Steve; Tocci, Cynthia M.; Holtzman, Steven L.; Archer, Jeff – Bill & Melinda Gates Foundation, 2014
States and districts can use this tool to create their own plans for continual improvement of an observation system, no matter where they are in their implementation. Included are action steps to improve observation rubrics, observer training, observer assessment, and monitoring. A planning process is described to assess current status, determine…
Descriptors: Observation, Classroom Observation Techniques, Trust (Psychology), Training
Educational Testing Service, 2008
This document describes the breadth of the research being conducted in 2008 by the Research and Development Division at Educational Testing Service (ETS). The research described falls into three large categories: (1) Research supported by the ETS research allocation; (2) Research funded by testing programs at ETS; and (3) Research funded by…
Descriptors: Research and Development, Testing Programs, Educational Testing, Educational Research
OECD Publishing, 2014
The "PISA 2012 Technical Report" describes the methodology underlying the PISA 2012 survey, which tested 15-year-olds' competencies in mathematics, reading and science and, in some countries, problem solving and financial literacy. It examines the design and implementation of the project at a level of detail that allows researchers to…
Descriptors: International Assessment, Secondary School Students, Foreign Countries, Achievement Tests
National Assessment Governing Board, 2010
The National Assessment of Educational Progress (NAEP) and its reports are a key measure in informing the nation on how well the goal of scientific literacy for all students is being met. The "Science Framework for the 2011 National Assessment of Educational Progress" sets forth the design of the NAEP Science Assessment. The 2011 NAEP…
Descriptors: Science Achievement, Academic Achievement, Science Tests, National Competency Tests
Peer reviewed
Marley, Scott C. – Journal for Specialists in Group Work, 2010
Recent articles in "The Journal for Specialists in Group Work" have discussed credibility indicators for quantitative and qualitative studies (Asner-Self, 2009; Rubel & Villalba, 2009). This article extends upon these contributions by discussing measurement issues that are relevant to producers and consumers of quantitative group research. This…
Descriptors: Credibility, Psychological Evaluation, Validity, Data Collection
Morris, Allison – OECD Publishing (NJ1), 2011
This report discusses the most relevant issues concerning student standardised testing in which there are no-stakes for students ("standardised testing") through a literature review and a review of the trends in standardised testing in OECD countries. Unlike standardised tests in which there are high-stakes for students, no-stakes implies that…
Descriptors: Standardized Tests, Testing, Educational Trends, Educational Research
Peer reviewed
Schochet, Peter; Burghardt, John – Evaluation Review, 2007
This article discusses the use of propensity scoring in experimental program evaluations to estimate impacts for subgroups defined by program features and participants' program experiences. The authors discuss estimation issues and provide specification tests. They also discuss the use of an overlooked data collection design--obtaining predictions…
Descriptors: Program Effectiveness, Scoring, Experimental Programs, Control Groups