Showing 1 to 15 of 60 results
Peer reviewed
Direct link
Wheeler, Jordan M.; Engelhard, George; Wang, Jue – Measurement: Interdisciplinary Research and Perspectives, 2022
Objectively scoring constructed-response items on educational assessments has long been a challenge due to the use of human raters. Even well-trained raters using a rubric can score essays inaccurately. Unfolding models measure raters' scoring accuracy by capturing the discrepancy between criterion and operational ratings, placing essays on an…
Descriptors: Accuracy, Scoring, Statistical Analysis, Models
Peer reviewed
Direct link
Takanori Sato – Language Testing, 2024
Assessing the content of learners' compositions is a common practice in second language (L2) writing assessment. However, the construct definition of content in L2 writing assessment potentially underrepresents the target competence in content and language integrated learning (CLIL), which aims to foster not only L2 proficiency but also critical…
Descriptors: Language Tests, Content and Language Integrated Learning, Writing Evaluation, Writing Tests
Peer reviewed
Direct link
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Reading and Writing: An Interdisciplinary Journal, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation, as well as written expression curriculum-based measurement (WE-CBM), to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Mercer, Sterett H.; Cannon, Joanna E.; Squires, Bonita; Guo, Yue; Pinco, Ella – Canadian Journal of School Psychology, 2021
We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be accurately used to computer score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1 to 12 who received 1:1 academic tutoring through a community-based organization…
Descriptors: Curriculum Based Assessment, Automation, Scoring, Writing Tests
Keller-Margulis, Milena A.; Mercer, Sterett H.; Matta, Michael – Grantee Submission, 2021
Existing approaches to measuring writing performance are insufficient in terms of both technical adequacy and feasibility for use as a screening measure. This study examined the validity and diagnostic accuracy of several approaches to automated text evaluation, as well as written expression curriculum-based measurement (WE-CBM), to determine…
Descriptors: Writing Evaluation, Validity, Automation, Curriculum Based Assessment
Peer reviewed
Direct link
Tayyebi, Masoumeh; Abbasabady, Mahmoud Moradi; Abbassian, Gholam-Reza – Language Testing in Asia, 2022
Writing assessment literacy (WAL) has received research attention over the past few years. This study aimed to investigate the writing assessment knowledge of Iranian English language teachers, along with their conceptions and practices of writing assessment, based on Crusan et al.'s (Assessing Writing 28:43-56, 2016) study, in order to have a better…
Descriptors: Writing Evaluation, Language Teachers, Pedagogical Content Knowledge, Writing Tests
Peer reviewed
Direct link
Joshua Kloppers – International Journal of Computer-Assisted Language Learning and Teaching, 2023
Automated writing evaluation (AWE) software is an increasingly popular tool for English second language learners. However, research on the accuracy of such software has been both scarce and largely limited in its scope. As such, this article broadens the field of research on AWE accuracy by using a mixed design to holistically evaluate the…
Descriptors: Grammar, Automation, Writing Evaluation, Computer Assisted Instruction
Peer reviewed
Direct link
Michael Matta; Sterett H. Mercer; Milena A. Keller-Margulis – Assessment in Education: Principles, Policy & Practice, 2022
Written expression curriculum-based measurement (WE-CBM) is a formative assessment approach for screening and progress monitoring. To extend evaluation of WE-CBM, we compared hand-calculated and automated scoring approaches in relation to the number of screening samples needed per student for valid scores, the long-term predictive validity and…
Descriptors: Writing Evaluation, Writing Tests, Predictive Validity, Formative Evaluation
Peer reviewed
PDF on ERIC Download full text
Zari Saeedi; Zia Tajeddin; Fereshteh Tadayon – International Journal of Language Testing, 2024
This research paper delved into the critical issue of applying English as a Lingua Franca (ELF) assessment principles in local English language tests used for non-native English speakers in Iranian language institutes. A qualitative content analysis was conducted on 60 local tests, dissecting them into domains, dimensions, and rating rubrics to…
Descriptors: Foreign Countries, Language Tests, English (Second Language), Second Language Instruction
Peer reviewed
PDF on ERIC Download full text
Pimnada Khemkullanat; Somruedee Khongput – rEFLections, 2023
The present study implements a corpus-assisted approach with data-driven learning (DDL) in the EFL classroom to investigate its effectiveness for learning target grammatical collocations (verb-, adjective-, and noun-preposition collocations) among Thai undergraduate students and to examine the extent to which the students incorporate the collocational…
Descriptors: Undergraduate Students, English (Second Language), Second Language Learning, Second Language Instruction
Peer reviewed
Direct link
Wind, Stefanie A. – Language Testing, 2023
Researchers frequently evaluate rater judgments in performance assessments for evidence of differential rater functioning (DRF), which occurs when rater severity is systematically related to construct-irrelevant student characteristics after controlling for student achievement levels. However, researchers have observed that methods for detecting…
Descriptors: Evaluators, Decision Making, Student Characteristics, Performance Based Assessment
Peer reviewed
PDF on ERIC Download full text
Omid S. Kalantar – International Journal of Language Testing, 2024
This study sought to identify the challenges and needs of TOEFL iBT candidates in achieving C1 level scores in the speaking and writing sections of the exam. To this end, the researcher employed a mixed-method approach to collect data from a population of 46 students, both male and female, between the ages of 22 and 30. The participants were…
Descriptors: Language Tests, Scores, Native Language, Grammar
Peer reviewed
Direct link
Nguyen, Long Quoc; Le, Ha Van – Language Testing in Asia, 2022
Achieving a sufficient IELTS band score for academic purposes has been a major goal of many L2 learners around the world, especially those in Asia. However, IELTS writing scores have consistently been reported to be the lowest compared with scores in speaking, reading, and listening. Despite a growing body of research on IELTS writing, little…
Descriptors: English (Second Language), Second Language Learning, Language Tests, Writing Tests
Peer reviewed
Direct link
Hille, Kathryn; Cho, Yeonsuk – Language Testing, 2020
Accurate placement within levels of an ESL program is crucial for optimal teaching and learning. Commercially available tests are commonly used for placement, but their effectiveness has been found to vary. This study uses data from the Ohio Program of Intensive English (OPIE) at Ohio University to examine the value of two commercially available…
Descriptors: Student Placement, Testing, English (Second Language), Language Tests
Peer reviewed
Direct link
Assim S. Alrajhi – International Journal of Computer-Assisted Language Learning and Teaching, 2024
This study examines and compares L2 grammatical accuracy in digital multimodal writing (DMW) and monomodal text-based writing (TBW). Utilizing a mixed-methods design, the research incorporates a dataset comprising 180 written texts, a questionnaire, and text-based interviews. Sixty EFL learners were assigned to two groups (TBW and DMW) and…
Descriptors: Grammar, Accuracy, Writing Strategies, Comparative Analysis