Ray J. T. Liao; Renka Ohta; Kwangmin Lee – Language Testing, 2024
As integrated writing tasks in large-scale and classroom-based writing assessments have risen in popularity, research studies have increasingly concentrated on providing validity evidence. Because most of these studies focus on adult second language learners rather than younger ones, this study examined the relationship between written…
Descriptors: Writing (Composition), Writing Evaluation, English Language Learners, Discourse Analysis
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Grantee Submission, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
Sonia, Allison N.; Magliano, Joseph P.; McCarthy, Kathryn S.; Creer, Sarah D.; McNamara, Danielle S.; Allen, Laura K. – Discourse Processes: A Multidisciplinary Journal, 2022
The constructed responses individuals generate while reading can provide insights into their coherence-building processes. The current study examined how the cohesion of constructed responses relates to performance on an integrated writing task. Participants (N = 95) completed a multiple document reading task wherein they were prompted to think…
Descriptors: Natural Language Processing, Connected Discourse, Reading Processes, Writing Skills
Crossley, Scott A.; Rose, Dani Francuz; Danekes, Cassondra; Rose, Charles Wesley; McNamara, Danielle S. – Reading and Writing: An Interdisciplinary Journal, 2017
This paper examines the effects of attended and unattended demonstratives on text processing, comprehension, and writing quality in two studies. In the first study, participants (n = 45) read 64 mini-stories in a self-paced reading task and identified the main referent in the clauses. The sentences varied in the type of demonstratives (i.e., this,…
Descriptors: Nouns, Phrase Structure, Form Classes (Languages), Connected Discourse
Shi, Bibing; Huang, Liyan; Lu, Xiaofei – Language Testing, 2020
The continuation task, a new form of reading-writing integrated task in which test-takers read an incomplete story and then write the continuation and ending of the story, has been increasingly used in writing assessment, especially in China. However, language-test developers' understanding of the effects of important task-related factors on…
Descriptors: Cues, Writing Tests, Writing Evaluation, English (Second Language)
Solnyshkina, Marina I.; Zamaletdinov, Radif R.; Gorodetskaya, Ludmila A.; Gabitov, Azat I. – Journal of Social Studies Education Research, 2017
The article presents the results of an exploratory study of the use of T.E.R.A., an automated tool measuring text complexity and readability based on the assessment of five text complexity parameters: narrativity, syntactic simplicity, word concreteness, referential cohesion and deep cohesion. Aimed at finding ways to utilize T.E.R.A. for…
Descriptors: Readability Formulas, Readability, Foreign Countries, Computer Assisted Testing
Fernandez, Miguel; Siddiqui, Athar Munir – Language Testing in Asia, 2017
Background: Marking of essays is mainly carried out by human raters who bring in their own subjective and idiosyncratic evaluation criteria, which sometimes lead to discrepancy. This discrepancy may in turn raise issues like reliability and fairness. The current research attempts to explore the evaluation criteria of markers on a national level…
Descriptors: Grading, Evaluators, Evaluation Criteria, High Stakes Tests