Showing 1 to 15 of 87 results
Öncel, Püren; Flynn, Lauren E.; Sonia, Allison N.; Barker, Kennis E.; Lindsay, Grace C.; McClure, Caleb M.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2021
Automated Writing Evaluation systems have been developed to help students improve their writing skills through the automated delivery of both summative and formative feedback. These systems have demonstrated strong potential in a variety of educational contexts; however, they remain limited in their personalization and scope. The purpose of the…
Descriptors: Computer Assisted Instruction, Writing Evaluation, Formative Evaluation, Summative Evaluation
Arora, Puneet; Fazlul, Ishtiaque; Musaddiq, Tareena; Vats, Abhinav – Grantee Submission, 2022
Empirical evidence on the effectiveness of performance-based rewards for teachers is primarily based on the evaluation of monetary reward schemes. We present results from a randomized evaluation of a teacher and principal incentive programme in India that offered a non-pecuniary recognition reward based on students' test scores on standardized…
Descriptors: Teacher Evaluation, Principals, Administrator Evaluation, Performance Based Assessment
Wesley Morris; Scott Crossley; Langdon Holmes; Chaohua Ou; Danielle McNamara; Mihai Dascalu – Grantee Submission, 2023
As intelligent textbooks become more ubiquitous in classrooms and educational settings, the need arises to automatically provide formative feedback on students' written responses to readings. This study develops models to automatically provide feedback to student summaries written at the end of intelligent textbook sections.…
Descriptors: Textbooks, Electronic Publishing, Feedback (Response), Formative Evaluation
Johnson, Evelyn S.; Crawford, Angela R.; Zheng, Yuzhu; Moylan, Laura A. – Grantee Submission, 2020
In this study, we compared the results of 27 special education teachers' evaluations using two different observation instruments: the Framework for Teaching (FFT) and the Explicit Instruction observation protocol of the Recognizing Effective Special Education Teachers (RESET) observation system. Results indicate differences in the rank-ordering…
Descriptors: Special Education Teachers, Teacher Evaluation, Teacher Effectiveness, Evaluation Methods
De Los Reyes, Andres; Makol, Bridget A. – Grantee Submission, 2021
Clients display considerable variations in functioning across the contexts that encompass their social environments (e.g., home, school/workplace, peer interactions). No single measurement method can fully capture these variations. Yet, assessors must balance the need to accurately capture clients' clinical presentations, and at the same time…
Descriptors: Self Evaluation (Individuals), Mental Health, Scores, Rating Scales
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Peer reviewed
Matthew J. Madison; Stefanie Wind; Lientje Maas; Kazuhiro Yamaguchi; Sergio Haab – Grantee Submission, 2024
Diagnostic classification models (DCMs) are psychometric models designed to classify examinees according to their proficiency or nonproficiency of specified latent characteristics. These models are well suited for providing diagnostic and actionable feedback to support intermediate and formative assessment efforts. Several DCMs have been developed…
Descriptors: Diagnostic Tests, Classification, Models, Psychometrics
Peer reviewed
Sami Baral; Eamon Worden; Wen-Chiang Lim; Zhuang Luo; Christopher Santorelli; Ashish Gurung; Neil Heffernan – Grantee Submission, 2024
The effectiveness of feedback in enhancing learning outcomes is well documented within Educational Data Mining (EDM). Prior research has explored methodologies to enhance the effectiveness of feedback to students in various ways. Recent developments in Large Language Models (LLMs) have extended their utility in enhancing automated…
Descriptors: Automation, Scoring, Computer Assisted Testing, Natural Language Processing
Peer reviewed
Zhang, Haoran; Litman, Diane – Grantee Submission, 2021
Human essay grading is a laborious task that can consume much time and effort. Automated Essay Scoring (AES) has thus been proposed as a fast and effective solution to the problem of grading student writing at scale. However, because AES typically uses supervised machine learning, a human-graded essay corpus is still required to train the AES…
Descriptors: Essays, Grading, Writing Evaluation, Computational Linguistics
Ronfeldt, Matthew; Bardelli, Emanuele; Brockman, Stacey L.; Mullman, Hannah – Grantee Submission, 2019
Growing evidence suggests that preservice candidates receive better coaching and are more instructionally effective when they are mentored by more instructionally effective cooperating teachers (CTs). Yet, teacher education program leaders indicate it can be difficult to recruit instructionally effective teachers to serve as CTs, in part because…
Descriptors: Mentors, Student Teachers, Student Teacher Evaluation, Scores
Olney, Andrew M. – Grantee Submission, 2021
Cloze items are commonly used for both assessing learning and as a learning activity. This paper investigates the selection of sentences for cloze item creation by comparing methods ranging from simple heuristics to deep learning summarization models. An evaluation using human-generated cloze items from three different science texts indicates that…
Descriptors: Sentences, Selection, Cloze Procedure, Heuristics
Xue Zhang; Chun Wang – Grantee Submission, 2022
Item-level fit analysis not only serves as a complementary check to global fit analysis; it is also essential in scale development because the fit results guide item revision and/or deletion (Liu & Maydeu-Olivares, 2014). During data collection, missing responses are likely to occur for various reasons. Chi-square-based item fit…
Descriptors: Goodness of Fit, Item Response Theory, Scores, Test Length
Clark McKown; Nicole Russo-Ponsaran; Ashley Karls – Grantee Submission, 2022
This paper presents evidence of the score reliability, factor structure, criterion-related validity, and measurement equivalence of a web-based assessment of several important social and emotional competencies for children in fourth through sixth grades. The assessment, SELweb LE (Late Elementary), is designed to measure children's understanding…
Descriptors: Social Emotional Learning, Social Development, Emotional Development, Elementary School Students
Teresa M. Ober; Maxwell R. Hong; Matthew F. Carter; Alex S. Brodersen; Daniella Rebouças-Ju; Cheng Liu; Ying Cheng – Grantee Submission, 2021
We examined whether students were accurate in predicting their test performance in two testing contexts (low-stakes and high-stakes). The sample comprised U.S. high school students enrolled in an advanced placement (AP) statistics course during the 2017-2018 academic year (N=209; M[subscript age]=16.6 years). We found that even two months before…
Descriptors: High School Students, Self Evaluation (Individuals), Student Attitudes, High Stakes Tests
Peer reviewed
Herrmann-Abell, Cari F.; Hardcastle, Joseph; DeBoer, George E. – Grantee Submission, 2022
As implementation of the "Next Generation Science Standards" moves forward, there is a need for new assessments that can measure students' integrated three-dimensional science learning. The National Research Council has suggested that these assessments be multicomponent tasks that utilize a combination of item formats including…
Descriptors: Multiple Choice Tests, Conditioning, Test Items, Item Response Theory