Showing all 9 results
Peer reviewed
Evans, Donna – Assessing Writing, 2009
This is the story of a research journey that follows the trail of a novel evaluand--"place." I examine place as mentioned by rising juniors in timed exams. Using a hybridized methodology--the qualitative approach of a hermeneutic dialectic process as described by Guba and Lincoln (1989), and the quantitative evidence of place mention--I query…
Descriptors: Student Motivation, Student Experience, Writing Evaluation, Writing Tests
Peer reviewed
Worden, Dorothy L. – Assessing Writing, 2009
It is widely assumed that the constraints of timed essay exams will make it virtually impossible for students to engage in the major hallmarks of the writing process, especially revision, in testing situations. This paper presents the results of a study conducted at Washington State University in the Spring of 2008. The study examined the…
Descriptors: Timed Tests, Writing Evaluation, Writing Tests, Educational Assessment
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Peer reviewed
De La Paz, Susan – Assessment for Effective Intervention, 2009
Rubrics are an integral part of many writing programs, and they represent elements of good writing in essays, stories, poems, as well as other genres and forms of text. Although it is possible to use rubrics to teach students about the processes underlying effective writing, a more common practice is to use rubrics as a means of assessment, after…
Descriptors: Writing Strategies, Learning Disabilities, Essays, Writing Instruction
Peer reviewed
Anthony, Jared Judd – Assessing Writing, 2009
Testing the hypotheses that reflective timed-essay prompts should elicit memories of meaningful experiences in students' undergraduate education, and that computer-mediated classroom experiences should be salient among those memories, a combination of quantitative and qualitative research methods paints a richer, more complex picture than either…
Descriptors: Undergraduate Study, Qualitative Research, Research Methodology, Reflection
Peer reviewed
Petersen, Jerry – Assessing Writing, 2009
Large-scale writing programs can add value to the traditional timed writing assessment by using aspects of the essays to assess the effectiveness of institutional goals, programs, and curriculums. The "six learning goals" prompt in this study represents an attempt to provide an accurate writing assessment that moves beyond scores. This…
Descriptors: Feedback (Response), Writing Evaluation, Student Evaluation, Writing Tests
Wolfe, Edward W.; Kao, Chi-Wen – 1996
This paper reports the results of an analysis of the relationship between scorer behaviors and score variability. Thirty-six essay scorers were interviewed and asked to perform a think-aloud task as they scored 24 essays. Each comment made by a scorer was coded according to its content focus (i.e. appearance, assignment, mechanics, communication,…
Descriptors: Content Analysis, Educational Assessment, Essays, Evaluation Methods
Peer reviewed
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
Peer reviewed
Engelhard, George, Jr. – Applied Measurement in Education, 1992
A Many-Faceted Rasch Model (FACETS) for measurement of writing ability is described, and its use in solving measurement problems in large-scale assessment is illustrated with a random sample of 1,000 students from Georgia's Eighth Grade Writing Test. It is a promising approach to assessment through written compositions. (SLD)
Descriptors: Educational Assessment, Essays, Evaluation Problems, Grade 8