Bridgeman, Brent; Trapani, Catherine; Bivens-Tatum, Jennifer – Assessing Writing, 2011
Writing task variants can increase test security in high-stakes essay assessments by substantially increasing the pool of available writing stimuli and by making the specific writing task less predictable. A given prompt (parent) may be used as the basis for one or more different variants. Six variant types based on argument essay prompts from a…
Descriptors: Writing Evaluation, Writing Tests, Tests, Writing Instruction
Ling, Guangming; Bridgeman, Brent – International Journal of Testing, 2013
To explore the potential effect of computer type on the Test of English as a Foreign Language-Internet-Based Test (TOEFL iBT) Writing Test, a sample of 444 international students was used. The students were randomly assigned to either a laptop or a desktop computer to write two TOEFL iBT practice essays in a simulated testing environment, followed…
Descriptors: Laptop Computers, Writing Tests, Essays, Computer Assisted Instruction
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the e-rater® system were built and evaluated for the TOEFL® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
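The evaluation statistics named in this abstract are standard agreement measures between human and automated scores. A minimal sketch of how they might be computed, assuming paired integer essay scores; the score data below is invented for illustration, and the helpers are the real cohen_kappa_score (scikit-learn) and pearsonr (SciPy):

```python
# Sketch of the evaluation statistics mentioned in the abstract:
# weighted kappa, Pearson correlation, and standardized mean-score difference.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired scores on a 1-6 scale (human rater vs. automated scorer).
human = np.array([4, 3, 5, 2, 4, 3, 5, 4, 2, 3])
machine = np.array([4, 3, 4, 2, 5, 3, 5, 4, 3, 3])

# Quadratic-weighted kappa: agreement that penalizes large disagreements more.
qwk = cohen_kappa_score(human, machine, weights="quadratic")

# Pearson correlation between the two sets of scores.
r, _ = pearsonr(human, machine)

# Standardized difference in mean scores (pooled-SD denominator, one common choice).
pooled_sd = np.sqrt((human.std(ddof=1) ** 2 + machine.std(ddof=1) ** 2) / 2)
std_diff = (machine.mean() - human.mean()) / pooled_sd

print(f"weighted kappa = {qwk:.3f}, r = {r:.3f}, std. diff = {std_diff:.3f}")
```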
Attali, Yigal; Bridgeman, Brent; Trapani, Catherine – Journal of Technology, Learning, and Assessment, 2010
A generic approach in automated essay scoring produces scores that have the same meaning across all prompts, existing or new, of a writing assessment. This is accomplished by using a single set of linguistic indicators (or features), a consistent way of combining and weighting these features into essay scores, and a focus on features that are not…
Descriptors: Writing Evaluation, Writing Tests, Scoring, Test Scoring Machines
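The "generic approach" this abstract describes amounts to one fixed set of feature weights applied to every prompt, so a given score means the same thing regardless of which prompt the essay answers. A minimal sketch of that idea, in which the feature names and weights are entirely invented for illustration and stand in for the linguistic indicators the paper discusses:

```python
# Hypothetical sketch of a generic scoring model: a single shared weight
# vector combines standardized feature values into a score for any prompt.
import numpy as np

FEATURES = ["grammar", "usage", "mechanics", "organization", "development"]
WEIGHTS = np.array([0.15, 0.15, 0.10, 0.30, 0.30])  # fixed across all prompts

def generic_score(feature_values: dict[str, float]) -> float:
    """Combine one essay's feature values with the shared weight vector."""
    x = np.array([feature_values[name] for name in FEATURES])
    return float(WEIGHTS @ x)

# The same weights score an essay from any prompt, existing or new.
essay = {"grammar": 0.8, "usage": 0.6, "mechanics": 0.9,
         "organization": 0.5, "development": 0.4}
print(f"generic score: {generic_score(essay):.2f}")
```

Because the weights never change across prompts, scores remain comparable when new prompts are introduced, which is the property the abstract contrasts with prompt-specific models.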