Yao, Lili; Haberman, Shelby J.; Zhang, Mo – ETS Research Report Series, 2019
Many assessments of writing proficiency that aid in making high-stakes decisions consist of several essay tasks evaluated by a combination of human holistic scores and computer-generated scores for essay features such as the rate of grammatical errors per word. Under typical conditions, a summary writing score is provided by a linear combination…
Descriptors: Prediction, True Scores, Computer Assisted Testing, Scoring
Almond, Russell G. – International Journal of Testing, 2014
Assessments consisting of only a few extended constructed-response items (essays) are not typically equated using anchor test designs, because each form usually contains too few essay prompts to allow meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common…
Descriptors: Automation, Equated Scores, Writing Tests, Essay Tests
Beigman Klebanov, Beata; Ramineni, Chaitanya; Kaufer, David; Yeoh, Paul; Ishizaki, Suguru – Language Testing, 2019
Essay writing is a common type of constructed-response task used frequently in standardized writing assessments. However, the impromptu, timed nature of essay writing tests has drawn increasing criticism for its lack of authenticity relative to real-world writing in classroom and workplace settings. The goal of this paper is to contribute evidence to a…
Descriptors: Test Validity, Writing Tests, Writing Skills, Persuasive Discourse
Buzick, Heather; Oliveri, Maria Elena; Attali, Yigal; Flor, Michael – Applied Measurement in Education, 2016
Automated essay scoring is a developing technology that can provide efficient scoring of large numbers of written responses. Its use in higher education admissions testing provides an opportunity to collect validity and fairness evidence to support current uses and inform its emergence in other areas such as K-12 large-scale assessment. In this…
Descriptors: Essays, Learning Disabilities, Attention Deficit Hyperactivity Disorder, Scoring
Sun, Yanyan; Franklin, Teresa; Gao, Fei – British Journal of Educational Technology, 2017
This study explored how the GRE Analytical Writing Section Discussion Forum, an informal online language learning community in China, functioned to support its members to improve their English writing proficiency. The Community of Inquiry (CoI) model was used as the theoretical framework to explore the existence of teaching presence, cognitive…
Descriptors: Informal Education, Foreign Countries, College Entrance Examinations, Graduate Study
Ricker-Pedley, Kathryn L. – Educational Testing Service, 2011
A pseudo-experimental study was conducted to examine the link between raters' accuracy on calibration sets and their subsequent accuracy during operational scoring. The study asked 45 raters to score a 75-response calibration set and then a 100-response (operational) set of responses from a retired Graduate Record Examinations® (GRE®) writing…
Descriptors: Scoring, Accuracy, College Entrance Examinations, Writing Tests
Bejar, Isaac I.; VanWinkle, Waverely; Madnani, Nitin; Lewis, William; Steier, Michael – ETS Research Report Series, 2013
The paper applies a natural language computational tool to study a potential construct-irrelevant response strategy, namely the use of "shell language." Although the study is motivated by the impending increase in the volume of scoring of students' responses from assessments to be developed in response to the Race to the Top initiative,…
Descriptors: Responses, Language Usage, Natural Language Processing, Computational Linguistics
Breyer, F. Jay; Attali, Yigal; Williamson, David M.; Ridolfi-McCulla, Laura; Ramineni, Chaitanya; Duchnowski, Matthew; Harris, April – ETS Research Report Series, 2014
In this research, we investigated the feasibility of implementing the "e-rater"® scoring engine as a check score in place of all-human scoring for the "Graduate Record Examinations"® ("GRE"®) revised General Test (rGRE) Analytical Writing measure. This report provides the scientific basis for the use of e-rater as a…
Descriptors: Computer Software, Computer Assisted Testing, Scoring, College Entrance Examinations
Liaghat, Farahnaz; Biria, Reza – International Journal of Instruction, 2018
This study explored the impact of mentor-text modelling on Iranian English as a Foreign Language (EFL) learners' accuracy and fluency in writing tasks of differing cognitive complexity, in comparison with two conventional approaches to teaching writing, namely the process-based and product-based approaches. To this end, 60 Iranian EFL…
Descriptors: Foreign Countries, Comparative Analysis, Teaching Methods, Writing Instruction
Attali, Yigal; Sinharay, Sandip – ETS Research Report Series, 2015
The "e-rater"® automated essay scoring system is used operationally in the scoring of the argument and issue tasks that form the Analytical Writing measure of the "GRE"® General Test. For each of these two tasks, this study explored the value added of reporting four trait scores over the total e-rater score.…
Descriptors: Scores, Computer Assisted Testing, Computer Software, Grammar
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
Quinlan, Thomas; Higgins, Derrick; Wolff, Susanne – Educational Testing Service, 2009
This report evaluates the construct coverage of the e-rater® scoring engine. The matter of construct coverage depends on whether one defines writing skill in terms of process or product. Originally, the e-rater engine consisted of a large set of components with a proven ability to predict human holistic scores. By organizing these capabilities…
Descriptors: Guides, Writing Skills, Factor Analysis, Writing Tests
Hardison, Chaitra M.; Sackett, Paul R. – Applied Measurement in Education, 2008
Despite the growing use of writing assessments in standardized tests, little is known about coaching effects on writing assessments. Therefore, this study tested the effects of short-term coaching on standardized writing tests, and the transfer of those effects to other writing genres. College freshmen were randomly assigned to either training…
Descriptors: Control Groups, Group Membership, College Freshmen, Writing Tests
Briihl, Deborah S.; Wasieleski, David T. – Teaching of Psychology, 2007
The authors surveyed graduate programs to see how they use the Graduate Record Examination Analytic Writing (GRE-AW) Test. Only 35% of the graduate programs that responded use the GRE-AW Test in their admissions policies; of the programs not using it, most do not plan to do so. The programs using the GRE-AW rated it as medium or low in importance in…
Descriptors: Writing Tests, Educational Testing, College Admission, Surveys
O'Neill, Kathleen; Rizavi, Saba – 2002
Beginning in 1999, an analytical writing measure was offered as an optional test in the Graduate Record Examinations (GRE) program. This test will be incorporated into the GRE General Test in fall 2002. The essays in this test focus on critical reasoning and analytical writing skills. This study examined the performance of various examinee groups…
Descriptors: College Students, Ethnic Groups, Graduate Study, Higher Education