Publication Date
  In 2025: 4
  Since 2024: 46
  Since 2021 (last 5 years): 220
  Since 2016 (last 10 years): 626
  Since 2006 (last 20 years): 1344
Author
  Deane, Paul: 12
  Graham, Steve: 12
  Engelhard, George, Jr.: 11
  Lee, Yong-Won: 11
  Attali, Yigal: 9
  Bridgeman, Brent: 9
  Powers, Donald E.: 9
  Kantor, Robert: 8
  McMaster, Kristen L.: 8
  Thurlow, Martha L.: 8
  Wind, Stefanie A.: 8
Audience
  Practitioners: 130
  Teachers: 96
  Policymakers: 49
  Administrators: 22
  Students: 13
  Researchers: 12
  Parents: 4
  Counselors: 2
Location
  Canada: 52
  Iran: 51
  China: 35
  California: 34
  Texas: 31
  Florida: 26
  Georgia: 25
  Australia: 24
  Indonesia: 24
  Saudi Arabia: 22
  Turkey: 22
What Works Clearinghouse Rating
  Meets WWC Standards without Reservations: 1
  Meets WWC Standards with or without Reservations: 2
  Does not meet standards: 3
Yan, Xun; Chuang, Ping-Lin – Language Testing, 2023
This study employed a mixed-methods approach to examine how rater performance develops during a semester-long rater certification program for an English as a Second Language (ESL) writing placement test at a large US university. From 2016 to 2018, we tracked three groups of novice raters (n = 30) across four rounds in the certification program.…
Descriptors: Evaluators, Interrater Reliability, Item Response Theory, Certification
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for learners of English as a second language. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Pearson, William S. – SAGE Open, 2022
Due to pressure to meet goals, some test-takers preparing for the IELTS (International English Language Testing System) Writing test solicit written feedback (WF) from an expert provider on their rehearsal essays, in order to identify and close gaps in performance. The extent to which self-directed candidates are able to utilize written feedback to enhance…
Descriptors: Feedback (Response), Student Evaluation, Learner Engagement, Student Reaction
National Council of Teachers of English, 2022
Writing assessment can be used for a variety of purposes, both inside the classroom and outside: supporting student learning, assigning a grade, placing students in appropriate courses, allowing them to exit a course or sequence of courses, certifying proficiency, and evaluating programs. Given the high-stakes nature of many of these assessment…
Descriptors: Writing Evaluation, Position Papers, Writing Teachers, English Teachers
Karilena S. Yount – ProQuest LLC, 2022
In public school classrooms across the United States, approximately one in ten students is learning English as a second language. These students, often referred to as English language learners (ELLs), comprise one of the fastest growing demographic groups in the United States, with approximately 5 million ELLs enrolled in public schools across the…
Descriptors: English Language Learners, Spanish Speaking, Elementary School Students, Writing Tests
Wind, Stefanie A. – Language Testing, 2019
Differences in rater judgments that are systematically related to construct-irrelevant characteristics threaten the fairness of rater-mediated writing assessments. Accordingly, it is essential that researchers and practitioners examine the degree to which the psychometric quality of rater judgments is comparable across test-taker subgroups.…
Descriptors: Nonparametric Statistics, Interrater Reliability, Differences, Writing Tests
Chengyuan Yu; Wandong Xu – Language Testing in Asia, 2024
Language assessment literacy has emerged as an important area of research within the field of language testing and assessment, garnering increasing scholarly attention. However, the existing literature on language assessment literacy primarily focuses on teachers and administrators, while students, who sit at the heart of any assessment, are…
Descriptors: Foreign Countries, Video Technology, Evaluation Methods, Web Sites
Osama Koraishi – Language Teaching Research Quarterly, 2024
This study conducts a comprehensive quantitative evaluation of OpenAI's language model, ChatGPT 4, for grading Task 2 writing of the IELTS exam. The objective is to assess the alignment between ChatGPT's grading and that of official human raters. The analysis encompassed a multifaceted approach, including a comparison of means and reliability…
Descriptors: Second Language Learning, English (Second Language), Language Tests, Artificial Intelligence
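Alignment between model-assigned and human-assigned band scores in studies like the one above is commonly summarized with an ordinal agreement statistic such as quadratic weighted kappa (QWK). A minimal sketch, using invented scores rather than data from the study:

```python
# Hypothetical illustration: quadratic weighted kappa (QWK) between two sets
# of ordinal ratings, e.g. ChatGPT's and human raters' IELTS Task 2 bands.
# All scores below are made up for demonstration.

def quadratic_weighted_kappa(a, b, categories):
    """QWK between two raters whose scores fall in `categories` (ordered)."""
    idx = {c: i for i, c in enumerate(categories)}
    n = len(categories)
    # Observed joint frequency matrix.
    obs = [[0.0] * n for _ in range(n)]
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1
    total = len(a)
    # Marginals -> expected matrix under rater independence.
    row = [sum(obs[i]) for i in range(n)]
    col = [sum(obs[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic disagreement weight
            num += w * obs[i][j] / total
            den += w * (row[i] * col[j]) / (total * total)
    return 1.0 - num / den

# IELTS writing bands are reported in half-band steps; a small invented sample:
bands = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0]
human = [6.0, 6.5, 5.5, 7.0, 6.0, 5.0, 7.5, 6.5]
model = [6.0, 6.0, 5.5, 7.0, 6.5, 5.0, 7.0, 6.5]
kappa = quadratic_weighted_kappa(human, model, bands)
```

Values near 1 indicate close alignment; 0 means no better than chance agreement.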
Qin, Wenjuan; Zhang, Xizi – Reading and Writing: An Interdisciplinary Journal, 2023
In successful writing development, English as a foreign language (EFL) learners not only need to acquire grammatical complexity (GC) features but also know when and how to use them flexibly across communicative contexts, known as register flexibility. The present study, guided by the sociocultural theory of language learning, examines descriptive…
Descriptors: Second Language Learning, English (Second Language), Writing (Composition), Grammar
Bai, Barry; Wang, Jing – Language Teaching Research, 2023
Self-regulated reading-to-write (R2W) can be portrayed as learners' proactive, strategy-based learning of useful elements (e.g., content, rhetorical features, and conventions) from reading in order to improve their writing competence, an effective mechanism connecting reading and writing. In the present study, six major types of self-regulated…
Descriptors: Metacognition, Reading Writing Relationship, Writing Tests, Foreign Countries
Parkin, Jason R.; Frisby, Craig L.; Wang, Ze – Contemporary School Psychology, 2020
The simple view of writing suggests that written composition results from oral language, transcription (e.g., spelling/handwriting), and self-regulation skills, coordinated within working memory. The model provides a number of implications for the interpretation of psychoeducational achievement batteries. For instance, it hypothesizes that writing…
Descriptors: Writing Skills, Writing Evaluation, Writing Processes, Language Skills
Chen, Michelle Y.; Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
This study introduces a novel differential item functioning (DIF) method based on propensity score matching that tackles two challenges in analyzing performance assessment data, that is, continuous task scores and lack of a reliable internal variable as a proxy for ability or aptitude. The proposed DIF method consists of two main stages. First,…
Descriptors: Probability, Scores, Evaluation Methods, Test Items
Estaji, Masoomeh; Hashemi, Mina – Language Testing in Asia, 2022
This study intended to explore the different types of phraseological units in IELTS academic writing task 2 and probe into IELTS candidates' perceptions of phraseological competence. To this end, a corpus comprising 100 essays (26,423 words) written for IELTS writing task 2 was scrutinized, through which phraseological units were extracted and…
Descriptors: English (Second Language), Second Language Learning, Academic Language, Student Attitudes
Quinn, Margaret F.; Bingham, Gary E.; Gerde, Hope K. – Reading and Writing: An Interdisciplinary Journal, 2021
Conceptual models of early writing suggest multiple component skills support children's early writing development. Although research interest in early writing skills has grown in recent years, the majority of studies focus narrowly on procedural knowledge or transcription skills (i.e., handwriting and spelling) to the relative exclusion of how…
Descriptors: Preschool Children, Writing Skills, Writing (Composition), Emergent Literacy
Romig, John Elwood; Miller, Alexandra A.; Therrien, William J.; Lloyd, John W. – Exceptionality, 2021
Researchers studying curriculum-based measurement of written expression have used a variety of writing prompt types and durations when establishing criterion validity of these tools. The purpose of this study was to determine through meta-analytic procedures whether any prompt type or duration was superior to others in terms of criterion validity.…
Descriptors: Curriculum Based Assessment, Writing Evaluation, Prompting, Meta Analysis
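Criterion-validity coefficients from multiple studies, as in the meta-analysis above, are commonly pooled by Fisher z-transforming each correlation, weighting by sample size, and back-transforming. A minimal sketch with invented study values (the authors' specific pooling model may differ):

```python
# Hypothetical meta-analytic pooling of criterion-validity correlations:
# Fisher r-to-z transform, inverse-variance (n - 3) weights, back-transform.
# The correlations and sample sizes below are invented for illustration.
import math

def pooled_correlation(rs, ns):
    zs = [math.atanh(r) for r in rs]   # Fisher r-to-z
    ws = [n - 3 for n in ns]           # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return math.tanh(z_bar)            # back to the r metric

rs = [0.55, 0.62, 0.48, 0.70]  # per-study validity coefficients (made up)
ns = [60, 120, 45, 200]        # per-study sample sizes (made up)
pooled = pooled_correlation(rs, ns)
```

Larger studies pull the pooled estimate toward their coefficients, since the weight n - 3 is the inverse of the z-score's sampling variance.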