Showing 1 to 15 of 21 results
Peer reviewed
Cynthia Puranik; Molly Duncan; Ying Guo – Grantee Submission, 2024
In the present study we examined the contributions of transcription and foundational oral language skills to written composition outcomes in a sample of kindergartners. Two hundred and eighty-two kindergarten students from 49 classrooms participated in this study. Children's writing-related skills were examined using various tasks. Latent…
Descriptors: Oral Language, Language Skills, Writing Skills, Beginning Writing
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2023
Recent advances in automated writing evaluation have enabled educators to use automated writing quality scores to improve assessment feasibility. However, there has been limited investigation of bias for automated writing quality scores with students from diverse racial or ethnic backgrounds. The use of biased scores could contribute to…
Descriptors: Bias, Automation, Writing Evaluation, Scoring
Michael Matta; Milena A. Keller-Margulis; Sterett H. Mercer – Grantee Submission, 2022
Although researchers have investigated technical adequacy and usability of written-expression curriculum-based measures (WE-CBM), the economic implications of different scoring approaches have largely been ignored. The absence of such knowledge can undermine the effective allocation of resources and lead to the adoption of suboptimal measures for…
Descriptors: Cost Effectiveness, Scoring, Automation, Writing Tests
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Grantee Submission, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation as well as written expression curriculum-based measurement (WE-CBM) to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Mercer, Sterett H.; Cannon, Joanna E.; Squires, Bonita; Guo, Yue; Pinco, Ella – Grantee Submission, 2021
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be used to accurately computer-score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1-12 who received 1:1 academic tutoring through a community-based organization completed…
Descriptors: Curriculum Based Assessment, Automation, Scoring, Writing Tests
Debra McKeown; Kay Wijekumar; Julie Owens; Karen Harris; Steve Graham; Puiwa Lei; Erin FitzPatrick – Grantee Submission, 2023
Writing is a critical skill for success in all areas of life, but it is one of the least taught skills in school. Teachers consistently report being unprepared to teach writing. In this study, set in a Southern U.S. boomtown, teachers received two days of practice-based professional development for a ten-week implementation of self-regulated…
Descriptors: Faculty Development, Evidence Based Practice, Writing Strategies, Self Management
Ling, Guangming; Elliot, Norbert; Burstein, Jill C.; McCaffrey, Daniel F.; MacArthur, Charles A.; Holtzman, Steven – Grantee Submission, 2021
This study reports on the validation of a writing motivation survey and its relationship with a variety of indicators of academic performance for 566 undergraduate students drawn from six US postsecondary institutions. A writing motivation survey was used to capture students' writing goals, confidence, beliefs, and affect. Two research questions are…
Descriptors: Writing Attitudes, Student Motivation, Writing Evaluation, Student Surveys
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Grantee Submission, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Gary A. Troia; Frank R. Lawrence; Julie S. Brehmer; Kaitlin Glause; Heather L. Reichmuth – Grantee Submission, 2023
Much of the research that has examined the writing knowledge of school-age students has relied on interviews to ascertain this information, which is problematic because interviews may underestimate breadth and depth of writing knowledge, require lengthy interactions with participants, and do not permit a direct evaluation of a prescribed array of…
Descriptors: Writing Tests, Writing Evaluation, Knowledge Level, Elementary School Students
Wilson, Joshua; Rodrigues, Jessica – Grantee Submission, 2020
The present study leveraged advances in automated essay scoring (AES) technology to explore a proof of concept for a writing screener using the "Project Essay Grade" (PEG) program. First, the study investigated the extent to which an AES-scored multi-prompt writing screener accurately classified students as at risk of failing a Common…
Descriptors: Writing Tests, Screening Tests, Classification, Accuracy
Sterett H. Mercer; Joanna E. Cannon – Grantee Submission, 2022
We evaluated the validity of an automated approach to learning progress assessment (aLPA) for English written expression. Participants (n = 105) were students in Grades 2-12 who had parent-identified learning difficulties and received academic tutoring through a community-based organization. Participants completed narrative writing samples in the…
Descriptors: Elementary School Students, Secondary School Students, Learning Problems, Learning Disabilities
Poch, Apryl L.; McMaster, Kristen L.; Lembke, Erica S. – Grantee Submission, 2020
A small proportion of students do not benefit sufficiently from standard intervention protocols, and require more intensive, individualized instruction. Data-Based Instruction (DBI) has a strong evidence base for addressing students' intensive academic needs, yet it is not widely implemented. In this study, we explored the usability and…
Descriptors: Writing Instruction, Teaching Methods, Evidence Based Practice, Writing Difficulties
Valentine, Katherine A.; Truckenmiller, Adrea J.; Troia, Gary A.; Aldridge, Sydney – Grantee Submission, 2021
Understanding students' progress towards meeting grade-level expectations within an academic year is the goal of many education stakeholders so that they can make decisions to adjust instruction and improve students' learning trajectories. The purpose of this study was to explore 4th and 5th grade students' progress towards meeting expectations in…
Descriptors: Elementary School Students, Grade 4, Grade 5, Writing Skills
Puranik, Cynthia S.; Boss, Emily; Wanless, Shannon – Grantee Submission, 2019
Research has established that self-regulation plays an important role in early academic skills such as math and reading, but has focused less on relations with other early skill domains such as writing. The purpose of the present study was to extend that line of research by assessing the relation between self-regulation and early writing.…
Descriptors: Beginning Writing, Self Control, Preschool Children, Kindergarten
Kim, Young-Suk Grace – Grantee Submission, 2020
I propose an integrative theoretical framework for reading and writing acquisition, called the interactive dynamic literacy model, after reviewing theoretical models of reading and writing, and recent efforts in integrating theoretical models within reading and writing, respectively. The central idea of the interactive dynamic literacy model is…
Descriptors: Guidelines, Literacy, Reading Writing Relationship, Models