Showing all 9 results
Peer reviewed
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater® and the "Criterion"® Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Peer reviewed
McCurry, Doug – Assessing Writing, 2010
This article considers the claim that machine scoring of writing test responses agrees with human readers as much as humans agree with other humans. These claims about the reliability of machine scoring of writing are usually based on specific and constrained writing tasks, and there is reason for asking whether machine scoring of writing requires…
Descriptors: Writing Tests, Scoring, Interrater Reliability, Computer Assisted Testing
Peer reviewed
Evans, Donna – Assessing Writing, 2009
This is the story of a research journey that follows the trail of a novel evaluand--"place." I examine place as mentioned by rising juniors in timed exams. Using a hybridized methodology--the qualitative approach of a hermeneutic dialectic process as described by Guba and Lincoln (1989), and the quantitative evidence of place mention--I query…
Descriptors: Student Motivation, Student Experience, Writing Evaluation, Writing Tests
Peer reviewed
Worden, Dorothy L. – Assessing Writing, 2009
It is widely assumed that the constraints of timed essay exams will make it virtually impossible for students to engage in the major hallmarks of the writing process, especially revision, in testing situations. This paper presents the results of a study conducted at Washington State University in the Spring of 2008. The study examined the…
Descriptors: Timed Tests, Writing Evaluation, Writing Tests, Educational Assessment
Peer reviewed
He, Ling; Shi, Ling – Assessing Writing, 2008
The present study interviewed 16 international students (13 from Mainland China and 3 from Taiwan) in a Canadian university to explore their perceptions and experiences of two standardized English writing tests: the TWE (Test of Written English) and the essay task in the LPI (Language Proficiency Index). In Western Canada, TWE is used as an…
Descriptors: Student Attitudes, Writing Tests, Foreign Countries, English (Second Language)
Peer reviewed
Anthony, Jared Judd – Assessing Writing, 2009
Testing the hypotheses that reflective timed-essay prompts should elicit memories of meaningful experiences in students' undergraduate education, and that computer-mediated classroom experiences should be salient among those memories, a combination of quantitative and qualitative research methods paints a richer, more complex picture than either…
Descriptors: Undergraduate Study, Qualitative Research, Research Methodology, Reflection
Peer reviewed
Petersen, Jerry – Assessing Writing, 2009
Large-scale writing programs can add value to the traditional timed writing assessment by using aspects of the essays to assess the effectiveness of institutional goals, programs, and curriculums. The "six learning goals" prompt in this study represents an attempt to provide an accurate writing assessment that moves beyond scores. This…
Descriptors: Feedback (Response), Writing Evaluation, Student Evaluation, Writing Tests
Peer reviewed
East, Martin – Assessing Writing, 2006
Writing assessment essentially juxtaposes two elements: how "good writing" is to be defined, and how "good measurement" of that writing is to be carried out. The timed test is often used in large-scale L2 writing assessments because it is considered to provide reliable measurement. It is, however, highly inauthentic. One way of enhancing…
Descriptors: Writing Evaluation, Writing Tests, Timed Tests, Dictionaries
Peer reviewed
Cumming, Alister; Kantor, Robert; Baba, Kyoko; Erdosy, Usman; Eouanzoui, Keanre; James, Mark – Assessing Writing, 2005
We assessed whether and how the discourse written for prototype integrated tasks (involving writing in response to print or audio source texts) field tested for Next Generation TOEFL® differs from the discourse written for independent essays (i.e., the TOEFL Essay®). We selected 216 compositions written for six tasks by 36 examinees in a field…
Descriptors: Grammar, Field Tests, English (Second Language), Pragmatics