Publication Date
In 2025 | 0 |
Since 2024 | 0 |
Since 2021 (last 5 years) | 1 |
Since 2016 (last 10 years) | 1 |
Since 2006 (last 20 years) | 12 |
Descriptor
Essays | 18 |
Writing Tests | 18 |
Writing Evaluation | 10 |
Computer Assisted Testing | 8 |
English (Second Language) | 7 |
Scoring | 7 |
Second Language Learning | 6 |
Scores | 5 |
Comparative Analysis | 4 |
Language Tests | 4 |
Standardized Tests | 4 |
Author
Alzubi, Omar A. | 1 |
Attali, Yigal | 1 |
Barkaoui, Khaled | 1 |
Beseiso, Majdi | 1 |
Bridgeman, Brent | 1 |
Camara, Wayne J. | 1 |
Chung, Gregory K. W. K. | 1 |
Condon, William | 1 |
Del Principe, Ann | 1 |
Demirtas, Hakan | 1 |
Denny, Harry C. | 1 |
Publication Type
Reports - Evaluative | 18 |
Journal Articles | 15 |
Speeches/Meeting Papers | 1 |
Education Level
Higher Education | 10 |
Postsecondary Education | 6 |
Elementary Secondary Education | 2 |
Grade 6 | 1 |
Grade 7 | 1 |
Grade 8 | 1 |
Junior High Schools | 1 |
Middle Schools | 1 |
Two Year Colleges | 1 |
Assessments and Surveys
Test of English as a Foreign Language | 4 |
Graduate Record Examinations | 3 |
SAT (College Admission Test) | 2 |
California Achievement Tests | 1 |
Beseiso, Majdi; Alzubi, Omar A.; Rashaideh, Hasan – Journal of Computing in Higher Education, 2021
E-learning is gradually gaining prominence in higher education, with universities expanding provision and enrolling more students. Automated essay scoring (AES) therefore holds strong appeal for universities seeking to manage growing demand while reducing the costs associated with human raters. The growth in…
Descriptors: Automation, Scoring, Essays, Writing Tests
Condon, William – Assessing Writing, 2013
Automated Essay Scoring (AES) has garnered a great deal of attention from the rhetoric and composition/writing studies community since the Educational Testing Service began using e-rater[R] and the "Criterion"[R] Online Writing Evaluation Service as products in scoring writing tests, and most of the responses have been negative. While the…
Descriptors: Measurement, Psychometrics, Evaluation Methods, Educational Testing
Lee, Yong-Won; Gentile, Claudia; Kantor, Robert – Applied Linguistics, 2010
The main purpose of the study was to investigate the distinctness and reliability of analytic (or multi-trait) rating dimensions and their relationships to holistic scores and "e-rater"[R] essay feature variables in the context of the TOEFL[R] computer-based test (TOEFL CBT) writing assessment. Data analyzed in the study were holistic…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Essays
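The relationship between an analytic rating dimension and holistic scores, as investigated above, is typically summarized with a simple correlation. The minimal Python sketch below shows the computation only; the scores are invented for illustration and are not the study's data:

```python
# Hypothetical example: Pearson correlation between one analytic rating
# dimension and holistic essay scores. All numbers are invented.
from statistics import correlation  # available in Python 3.10+

analytic_language_use = [3, 4, 2, 5, 4, 3]
holistic_scores = [3, 5, 2, 5, 4, 4]

print(round(correlation(analytic_language_use, holistic_scores), 3))
```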
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
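As a rough illustration of the "single set of features, consistent weights" idea described in this abstract, the following Python sketch combines fixed, prompt-independent feature weights into a holistic score. The feature names, weights, and rescaling are invented placeholders, not e-rater's actual model:

```python
# Minimal sketch of a generic weighted-feature essay score.
# Feature names and weights are illustrative placeholders only.
ESSAY_FEATURE_WEIGHTS = {
    "grammar_errors_per_100_words": -0.8,
    "word_variety": 0.5,
    "avg_sentence_length": 0.3,
    "organization_score": 1.0,
}

def generic_essay_score(features: dict[str, float]) -> float:
    """Combine feature values with fixed, prompt-independent weights."""
    raw = sum(ESSAY_FEATURE_WEIGHTS[name] * value
              for name, value in features.items()
              if name in ESSAY_FEATURE_WEIGHTS)
    # Rescale the weighted sum onto a familiar 1-6 holistic scale (illustrative).
    return max(1.0, min(6.0, 3.5 + raw))

print(generic_essay_score({
    "grammar_errors_per_100_words": 1.2,
    "word_variety": 2.0,
    "avg_sentence_length": 1.5,
    "organization_score": 0.8,
}))
```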
Gebril, Atta – Language Testing, 2009
Generalizability of writing scores has long been a concern in L2 writing assessment. A number of studies have been conducted to investigate this topic during the last two decades. However, with the introduction of new test methods, such as reading-to-write tasks, generalizability studies need to focus on the score accuracy of…
Descriptors: Generalizability Theory, Writing Evaluation, Writing Tests, Scores
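For context, generalizability studies of this kind typically decompose observed score variance into facets such as persons, tasks, and raters. The sketch below uses textbook person-by-task-by-rater notation and the corresponding relative generalizability coefficient; it is generic G-theory notation, not the specific design used in the study:

```latex
% Generic p x t x r variance decomposition (textbook G-theory notation).
\[
\sigma^2(X_{ptr}) = \sigma^2_p + \sigma^2_t + \sigma^2_r
  + \sigma^2_{pt} + \sigma^2_{pr} + \sigma^2_{tr} + \sigma^2_{ptr,e}
\]
% Relative generalizability coefficient for n_t tasks and n_r raters.
\[
E\rho^2 = \frac{\sigma^2_p}
  {\sigma^2_p + \sigma^2_{pt}/n_t + \sigma^2_{pr}/n_r + \sigma^2_{ptr,e}/(n_t n_r)}
\]
```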
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater[R] scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Del Principe, Ann; Graziano-King, Janine – Teaching English in the Two-Year College, 2008
In this study, we compared self-revised essays to timed writing exams written by students in a developmental English course in a community college. Using a multiple-trait rubric, we found that self-revised essays showed greater elaboration than timed writing exams, and that elaboration and focus correlated only for self-revised essays. We argue,…
Descriptors: Timed Tests, Essays, Writing Tests, Community Colleges
Denny, Harry C. – Assessing Writing, 2008
This study details the development and results of a campus-based writing assessment plan that was mandated by a state-wide university system in order to explore the "value-added" from a writing program curriculum to undergraduate students' competence with written expression. Four writing samples (two timed essays and two conventional essays)…
Descriptors: Undergraduate Students, Writing Evaluation, State Universities, Pilot Projects
Schaefer, Edward – Language Testing, 2008
The present study employed multi-faceted Rasch measurement (MFRM) to explore the rater bias patterns of native English-speaker (NES) raters when they rate EFL essays. Forty NES raters rated 40 essays written by female Japanese university students on a single topic adapted from the TOEFL Test of Written English (TWE). The essays were assessed using…
Descriptors: Writing Evaluation, Writing Tests, Program Effectiveness, Essays
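Multi-faceted Rasch measurement, as used in the study above, models the log-odds of an essay receiving one rating category rather than the next lower one as a sum of facet effects. The formulation below is the standard generic model, not the study's exact facet structure:

```latex
% Standard many-facet Rasch model with examinee, criterion, and rater facets.
\[
\log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \delta_i - \alpha_j - \tau_k
\]
% theta_n: examinee ability; delta_i: criterion difficulty;
% alpha_j: rater severity; tau_k: difficulty of rating step k.
```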
Barkaoui, Khaled – Assessing Writing, 2007
Educators often have to choose among different types of rating scales to assess second-language (L2) writing performance. There is little research, however, on how different rating scales affect rater performance. This study employed a mixed-method approach to investigate the effects of two different rating scales on EFL essay scores, rating…
Descriptors: Writing Evaluation, Writing Tests, Rating Scales, Essays
James, Cindy L. – Assessing Writing, 2006
How do scores from writing samples generated by computerized essay scorers compare to those generated by "untrained" human scorers, and what combination of scores, if any, is more accurate at placing students in composition courses? This study endeavored to answer this two-part question by evaluating the correspondence between writing sample…
Descriptors: Writing (Composition), Predictive Validity, Scoring, Validity
Chung, Gregory K. W. K.; O'Neil, Harold F., Jr. – 1997
This report examines the feasibility of scoring essays using computer-based techniques. Essays have been incorporated into many standardized testing programs. Issues of validity and reliability must be addressed before automated scoring approaches can be fully deployed. Two approaches that have been used to classify documents, surface- and word-based…
Descriptors: Automation, Computer Assisted Testing, Essays, Scoring
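To make the "word-based" idea concrete, here is a purely illustrative Python sketch that represents essays as bag-of-words vectors and assigns each new essay the human score of its most similar pre-scored reference essay. It is a hypothetical toy, not the system examined in the report:

```python
# Toy word-based essay scoring: bag-of-words vectors plus nearest-neighbor
# assignment of a human score. Illustrative only.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def word_based_score(essay: str, scored_refs: list[tuple[str, int]]) -> int:
    """Score an essay with the human score of its nearest reference essay."""
    vec = Counter(essay.lower().split())
    best_score, best_sim = scored_refs[0][1], -1.0
    for ref_text, ref_score in scored_refs:
        sim = cosine(vec, Counter(ref_text.lower().split()))
        if sim > best_sim:
            best_sim, best_score = sim, ref_score
    return best_score

refs = [("the essay argues clearly with strong evidence", 6),
        ("short response with little support", 2)]
print(word_based_score("a clear argument supported by strong evidence", refs))
```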
Lee, Young-Ju – Journal of Second Language Writing, 2006
This study examines a process-oriented ESL writing assessment called the Computerized Enhanced ESL Placement Test (CEEPT). The CEEPT at the University of Illinois at Urbana-Champaign, or its non-computerized alternative (EEPT), has since 2000 offered a daylong process-oriented writing assessment in which test takers are given extended time to plan,…
Descriptors: Program Effectiveness, Essays, Writing Evaluation, Writing Tests
Matzen, Richard N., Jr.; Hoyt, Jeff E. – Journal of Developmental Education, 2004
The popularity of timed-essay exams has increased recently: they became part of the Graduate Management Admission Test (GMAT) in the late 1990s and are being incorporated into the College Board's Scholastic Aptitude Test (SAT) in spring 2005 and the ACT (American College Testing) test in fall 2004. This research evaluates the "value…
Descriptors: Minority Groups, Essays, Writing Tests, Multiple Choice Tests
Camara, Wayne J. – College Entrance Examination Board, 2003
Previous research on differences in the reliability, validity, and difficulty of essay tests given under different timing conditions has indicated that giving examinees more time to complete an essay may raise their scores to a certain extent, but does not change the meaning of those scores, or the rank ordering of students. There is no evidence…
Descriptors: Essays, Comparative Analysis, Writing Tests, Timed Tests