Showing 1 to 15 of 259 results
Peer reviewed
Kelly, Anthony – Assessment & Evaluation in Higher Education, 2023
The Research Excellence Framework is a high-stakes exercise used by the UK government to allocate billions of pounds of quality-related research (QR) funding and used by the media to rank universities and their departments in national league tables. The 2008, 2014 and 2021 assessments were zero-sum games in terms of league table position because…
Descriptors: Foreign Countries, Educational Assessment, Educational Research, Educational Quality
Peer reviewed
Daphna Harel; Dorothy Seaman; Jennifer Hill; Elisabeth King; Dana Burde – International Journal of Social Research Methodology, 2023
Indirect questioning attempts to overcome social desirability bias in survey research. However, to properly analyze the resulting data, it is crucial to understand how it impacts responses. This study analyzes results from a randomized experiment that tests whether direct versus indirect questioning methods lead to different results in a sample of…
Descriptors: Foreign Countries, Youth, Questioning Techniques, Language Usage
Saenz, David Arron – Online Submission, 2023
There is a vast body of literature documenting the positive impacts that rater training and calibration sessions have on inter-rater reliability, with research indicating that several factors, including frequency and timing, play crucial roles in ensuring inter-rater reliability. Additionally, increasing amounts of research indicate possible links in…
Descriptors: Interrater Reliability, Scoring, Training, Scoring Rubrics
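A note on the inter-rater reliability statistics referenced in the Saenz entry above: agreement between raters is commonly summarized with a chance-corrected index such as Cohen's kappa. The sketch below is purely illustrative and is not taken from the study; the rater scores and the function name are hypothetical.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same items.
# The scores below are hypothetical and are not from the cited study.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Scores assigned by two trained raters on a 1-4 rubric (hypothetical data).
print(round(cohens_kappa([3, 2, 4, 4, 1, 2, 3, 3],
                         [3, 2, 4, 3, 1, 2, 3, 4]), 3))  # ~0.652
```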
Peer reviewed
Reagan Mozer; Luke Miratrix – Society for Research on Educational Effectiveness, 2023
Background: For randomized trials that use text as an outcome, traditional approaches for assessing treatment impact require each document first be manually coded for constructs of interest by trained human raters. These hand-coded scores are then used as a measured outcome for an impact analysis, with the average scores of the treatment group…
Descriptors: Artificial Intelligence, Coding, Randomized Controlled Trials, Research Methodology
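The analysis pipeline sketched in the Mozer and Miratrix abstract (hand-code each document, then compare average coded scores across arms) reduces, in its simplest form, to a difference in means. The numbers below are hypothetical and only illustrate that arithmetic, not the authors' method.

```python
# Minimal sketch of the pipeline described above, with hypothetical data:
# hand-coded scores on text outcomes are averaged within each arm and the
# treatment effect is estimated as the difference in means.
treatment_scores = [3, 4, 2, 5, 4]  # coded scores for treated documents
control_scores = [2, 3, 3, 2, 4]    # coded scores for control documents

effect = (sum(treatment_scores) / len(treatment_scores)
          - sum(control_scores) / len(control_scores))
print(f"Estimated treatment effect on the coded outcome: {effect:.2f}")  # 0.80
```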
Peer reviewed
Casabianca, Jodi M.; Donoghue, John R.; Shin, Hyo Jeong; Chao, Szu-Fu; Choi, Ikkyu – Journal of Educational Measurement, 2023
Using item response theory to model rater effects provides an alternative to standard performance metrics for rater monitoring and diagnosis. To fit such models, the ratings data must be sufficiently connected so that rater effects can be estimated. Due to popular rating designs used in large-scale testing scenarios,…
Descriptors: Item Response Theory, Alternative Assessment, Evaluators, Research Problems
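The connectivity requirement mentioned in the Casabianca et al. abstract can be thought of as a graph property: every rater and examinee must be linked, directly or indirectly, through shared ratings. A minimal illustrative check (not the authors' procedure; the identifiers and function name are hypothetical):

```python
# Illustrative sketch: treat ratings as edges of a bipartite rater-examinee
# graph and test whether the graph forms a single connected component.
from collections import defaultdict, deque

def is_connected(ratings):
    """ratings: iterable of (rater_id, examinee_id) pairs."""
    graph = defaultdict(set)
    for rater, examinee in ratings:
        graph[("rater", rater)].add(("examinee", examinee))
        graph[("examinee", examinee)].add(("rater", rater))
    if not graph:
        return True
    start = next(iter(graph))
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return len(seen) == len(graph)

# Hypothetical designs: disconnected scoring blocks vs. an overlapping design.
print(is_connected([(1, "A"), (2, "A"), (3, "B"), (4, "B")]))  # False
print(is_connected([(1, "A"), (2, "A"), (2, "B"), (3, "B")]))  # True
```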
Peer reviewed
Chenchen Ma; Jing Ouyang; Chun Wang; Gongjun Xu – Grantee Submission, 2024
Survey instruments and assessments are frequently used in many domains of social science. When the constructs that these assessments try to measure become multifaceted, multidimensional item response theory (MIRT) provides a unified framework and convenient statistical tool for item analysis, calibration, and scoring. However, the computational…
Descriptors: Algorithms, Item Response Theory, Scoring, Accuracy
Peer reviewed
Curran, Patrick J.; Georgeson, A. R.; Bauer, Daniel J.; Hussong, Andrea M. – International Journal of Behavioral Development, 2021
Conducting valid and reliable empirical research in the prevention sciences is an inherently difficult task that presents many challenges. Chief among these is the need to obtain numerical scores of underlying theoretical constructs for use in subsequent analysis. This challenge is further exacerbated by the increasingly common need to consider multiple…
Descriptors: Psychometrics, Scoring, Prevention, Scores
Peer reviewed
Soland, James – Educational and Psychological Measurement, 2022
Considerable thought is often put into designing randomized control trials (RCTs). From power analyses and complex sampling designs implemented preintervention to nuanced quasi-experimental models used to estimate treatment effects postintervention, RCT design can be quite complicated. Yet when psychological constructs measured using survey scales…
Descriptors: Item Response Theory, Surveys, Scoring, Randomized Controlled Trials
Peer reviewed
Boztunç Öztürk, Nagihan; Sahin, Melek Gülsah; Ilhan, Mustafa – Turkish Journal of Education, 2019
The aim of this research was to analyze and compare analytic rubric and general impression scoring in peer assessment. A total of 66 university students participated in the study, and six of them were chosen as peer raters on a voluntary basis. In the research, students were asked to prepare a sample study within the scope of scientific research…
Descriptors: Foreign Countries, College Students, Student Evaluation, Peer Evaluation
Peer reviewed
Regional Educational Laboratory Mid-Atlantic, 2024
These are the appendixes for the report, "Strengthening the Pennsylvania School Climate Survey to Inform School Decisionmaking." This study analyzed Pennsylvania School Climate Survey data from students and staff in the 2021/22 school year to assess the validity and reliability of the elementary school student version of the survey;…
Descriptors: Educational Environment, Surveys, Decision Making, School Personnel
Peer reviewed
Shiroda, Megan; Uhl, Juli D.; Urban-Lurain, Mark; Haudek, Kevin C. – Journal of Science Education and Technology, 2022
Constructed response (CR) assessments allow students to demonstrate understanding of complex topics and provide teachers with deeper insight into student thinking. Computer scoring models (CSMs) remove the barrier of increased time and effort, making CR more accessible. As CSMs are commonly created using responses from research-intensive colleges…
Descriptors: Responses, Student Evaluation, Scoring, Models
Peer reviewed
Thier, Michael; Mason, Dyana P. – International Journal of Research & Method in Education, 2019
Due to myriad applications of the nominal group technique (NGT), a highly flexible iterative focus group method, researchers know little about its optimal scoring procedures. Exploring benefits and biases that such procedures might present, we aim to clarify how NGT scoring systems can privilege consensus or prioritization. In conducting the first…
Descriptors: Methods, Scoring, Focus Groups, Study Abroad
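To illustrate the contrast Thier and Mason draw between consensus and prioritization, the sketch below aggregates hypothetical NGT votes two ways: by how many participants name an item at all, and by how highly they rank it. The scoring rules, data, and function name are illustrative assumptions, not the systems analyzed in the article.

```python
# Illustrative sketch: two ways to aggregate nominal group technique votes.
from collections import defaultdict

def score_ngt(rankings, top_k=3):
    """rankings: one ordered list of items per participant, best first."""
    consensus = defaultdict(int)  # how many participants name the item (consensus)
    priority = defaultdict(int)   # rank-weighted points (prioritization)
    for ranking in rankings:
        for position, item in enumerate(ranking[:top_k]):
            consensus[item] += 1
            priority[item] += top_k - position
    return dict(consensus), dict(priority)

votes = [["A", "B", "C"], ["B", "A"], ["C", "B", "A"]]
consensus, priority = score_ngt(votes)
print(consensus)  # {'A': 3, 'B': 3, 'C': 2} -- A and B tie on consensus
print(priority)   # {'A': 6, 'B': 7, 'C': 4} -- B wins on prioritization
```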
Peer reviewed
Berlin, Rebekah; Cohen, Julie – ZDM: The International Journal on Mathematics Education, 2018
In this paper, we analyze mathematics lessons using the Classroom Assessment Scoring System (CLASS), a standardized observation protocol that suggests that high-quality lessons are distinguished by the tenor and frequency of classroom interactions. Because the CLASS focuses on interactions, rather than the specifics of content teaching, it can be…
Descriptors: Educational Quality, Instructional Effectiveness, Mathematics Instruction, Classroom Observation Techniques
Mine Özçelik; Zekerya Batur – International Journal of Education and Literacy Studies, 2023
The present study aimed to determine whether using culture-themed documentaries in teaching Turkish to B2-level international students affects the development of guided writing skills. The study had an action-research design, a qualitative research approach. The study group consisted of 12 international students at the B2…
Descriptors: Writing Skills, Writing Instruction, Turkish, Second Language Learning
Peer reviewed
Jin, Hui; van Rijn, Peter; Moore, John C.; Bauer, Malcolm I.; Pressler, Yamina; Yestness, Nissa – International Journal of Science Education, 2019
This article provides a validation framework for research on the development and use of science Learning Progressions (LPs). The framework describes how evidence from various sources can be used to establish an interpretive argument and a validity argument at five stages of LP research--development, scoring, generalisation, extrapolation, and use.…
Descriptors: Sequential Approach, Educational Research, Science Education, Validity