Ginther, April; Elder, Catherine – ETS Research Report Series, 2014
In line with expanded conceptualizations of validity that encompass the interpretations and uses of test scores in particular policy contexts, this report presents results of a comparative analysis of institutional understandings and uses of 3 international English proficiency tests widely used for tertiary selection--the "TOEFL iBT"®…
Descriptors: Comparative Analysis, English (Second Language), Second Language Learning, Questionnaires
Ling, Guangming; Powers, Donald E.; Adler, Rachel M. – ETS Research Report Series, 2014
One fundamental way to determine the validity of standardized English-language test scores is to investigate the extent to which they reflect anticipated learning effects in different English-language programs. In this study, we investigated the extent to which the "TOEFL iBT"® practice test reflects the learning effects of students at…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Computer Assisted Testing
Liu, Ou Lydia – ETS Research Report Series, 2014
This study investigates the relationship between test preparation and test performance on the "TOEFL iBT"® exam. Information on background variables and test preparation strategies was gathered from 14,593 respondents in China through an online survey. A Chinese standardized English test was used as a control for prior English ability.…
Descriptors: Research Reports, Test Preparation, Language Tests, College Entrance Examinations
De Felice, Rachele; Deane, Paul – ETS Research Report Series, 2012
This study proposes an approach to automatically score the "TOEIC"® Writing e-mail task. We focus on one component of the scoring rubric, which notes whether the test-takers have used particular speech acts such as requests, orders, or commitments. We developed a computational model for automated speech act identification and tested it…
Descriptors: Speech Acts, Electronic Mail, Language Tests, Second Language Learning
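The abstract frames automated speech act identification as a classification problem over test-taker e-mails. As a purely illustrative sketch (not the authors' computational model, whose features the report describes), the task can be posed as supervised text classification; the training sentences and labels below are invented examples, not TOEIC Writing data.

```python
# Illustrative sketch: speech act identification treated as supervised text
# classification. The sentences and labels are invented examples only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_sentences = [
    "Could you please send me the updated schedule?",       # request
    "Please forward the invoice to the accounting team.",   # request
    "Submit the report by Friday at noon.",                 # order
    "You must attend the meeting tomorrow morning.",        # order
    "I will call the client this afternoon.",               # commitment
    "We promise to deliver the parts next week.",           # commitment
]
train_labels = ["request", "request", "order", "order", "commitment", "commitment"]

# Simple bag-of-words pipeline; the study's model relied on richer linguistic features.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_sentences, train_labels)

print(clf.predict(["Can you confirm the delivery date?"]))
```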
Ramineni, Chaitanya; Trapani, Catherine S.; Williamson, David M.; Davey, Tim; Bridgeman, Brent – ETS Research Report Series, 2012
Scoring models for the "e-rater"® system were built and evaluated for the "TOEFL"® exam's independent and integrated writing prompts. Prompt-specific and generic scoring models were built, and evaluation statistics, such as weighted kappas, Pearson correlations, standardized differences in mean scores, and correlations with…
Descriptors: Scoring, Prompting, Evaluators, Computer Software
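The evaluation statistics named in this abstract (weighted kappas, Pearson correlations, standardized differences in mean scores) can be illustrated with a short sketch. This is not the study's evaluation code; the score arrays, the 0-5 rubric, and the quadratic weighting choice are assumptions for illustration.

```python
# Illustrative sketch: agreement statistics commonly reported when comparing
# automated essay scores with human ratings. Scores below are hypothetical.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import cohen_kappa_score

human = np.array([3, 4, 2, 5, 3, 4, 1, 3, 4, 2])    # hypothetical human ratings
e_rater = np.array([3, 4, 3, 4, 3, 4, 2, 3, 5, 2])  # hypothetical automated scores

# Quadratically weighted kappa: chance-corrected agreement that penalizes
# large score discrepancies more heavily than off-by-one disagreements.
qwk = cohen_kappa_score(human, e_rater, weights="quadratic")

# Pearson correlation between the two score sets.
r, _ = pearsonr(human, e_rater)

# Standardized difference in mean scores (machine minus human, in pooled SD units).
pooled_sd = np.sqrt((human.var(ddof=1) + e_rater.var(ddof=1)) / 2)
std_diff = (e_rater.mean() - human.mean()) / pooled_sd

print(f"weighted kappa = {qwk:.3f}, r = {r:.3f}, standardized mean diff = {std_diff:.3f}")
```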
Hill, Yao Zhang; Liu, Ou Lydia – ETS Research Report Series, 2012
This study investigated the effect of the interaction between test takers' background knowledge and language proficiency on their performance on the "TOEFL iBT"® reading section. Test takers with the target content background knowledge (the focal groups) and those without (the reference groups) were identified for each of the 5 selected…
Descriptors: Language Tests, Second Language Learning, English (Second Language), Internet
Biber, Douglas; Gray, Bethany – ETS Research Report Series, 2013
One of the major innovations of the "TOEFL iBT"® test is the incorporation of integrated tasks complementing the independent tasks to which examinees respond. In addition, examinees must produce discourse in both modes (speech and writing). The validity argument for the TOEFL iBT includes the claim that examinees vary their discourse in…
Descriptors: Discourse Analysis, English (Second Language), Second Language Learning, Language Tests
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
Winke, Paula; Gass, Susan; Myford, Carol – ETS Research Report Series, 2011
This study investigated whether raters' second language (L2) background and the first language (L1) of test takers taking the TOEFL iBT® Speaking test were related through scoring. After an initial 4-hour training period, a group of 107 raters (mostly learners of Chinese, Korean, and Spanish) listened to a selection of 432 speech samples that…
Descriptors: Second Language Learning, Evaluators, Speech Tests, English (Second Language)
Sawaki, Yasuyo; Sinharay, Sandip – ETS Research Report Series, 2013
This study investigates the value of reporting the reading, listening, speaking, and writing section scores for the "TOEFL iBT"® test, focusing on 4 related aspects of the psychometric quality of the TOEFL iBT section scores: reliability of the section scores, dimensionality of the test, presence of distinct score profiles, and the…
Descriptors: Scores, Computer Assisted Testing, Factor Analysis, Correlation
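One aspect this abstract raises, the reliability of the section scores, can be illustrated with a minimal internal-consistency computation. The item matrix below is invented, and Cronbach's alpha is only one of several reliability estimates such a study might report; this is a sketch, not the authors' psychometric analysis.

```python
# Illustrative sketch: Cronbach's alpha as a simple internal-consistency
# estimate for a test section composed of several items. Data are invented.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = test takers, columns = item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical scores of 6 test takers on 4 reading items (0-3 scale).
reading_items = np.array([
    [2, 3, 2, 3],
    [1, 1, 2, 1],
    [3, 3, 3, 2],
    [0, 1, 1, 0],
    [2, 2, 3, 3],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(reading_items):.3f}")
```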
Jamieson, Joan; Poonpon, Kornwipa – ETS Research Report Series, 2013
Research and development of a new type of scoring rubric for the integrated speaking tasks of "TOEFL iBT"® are described. These "analytic rating guides" could be helpful if tasks modeled after those in TOEFL iBT were used for formative assessment, a purpose which is different from TOEFL iBT's primary use for admission…
Descriptors: Oral Language, Language Proficiency, Scaling, Scores
Young, John W.; Morgan, Rick; Rybinski, Paul; Steinberg, Jonathan; Wang, Yuan – ETS Research Report Series, 2013
The "TOEFL Junior"® Standard Test is an assessment that measures the degree to which middle school-aged students learning English as a second language have attained proficiency in the academic and social English skills representative of English-medium instructional environments. The assessment measures skills in three areas: listening…
Descriptors: Item Response Theory, Test Items, Language Tests, Second Language Learning
Weigle, Sara Cushing – ETS Research Report Series, 2011
Automated scoring has the potential to dramatically reduce the time and costs associated with the assessment of complex skills such as writing, but its use must be validated against a variety of criteria for it to be accepted by test users and stakeholders. This study addresses two validity-related issues regarding the use of e-rater® with the…
Descriptors: Scoring, English (Second Language), Second Language Instruction, Automation
Biber, Douglas; Nekrasova, Tatiana; Horn, Brad – ETS Research Report Series, 2011
This research project undertook a review and synthesis of previous research on the effectiveness of feedback for individual writing development. The work plan was divided into two main phases. First, we surveyed all available studies that have investigated the effectiveness of writing feedback, including both quantitative and qualitative research,…
Descriptors: English (Second Language), Second Language Instruction, Feedback (Response), Writing Skills
Wall, Dianne; Horák, Tania – ETS Research Report Series, 2011
The aim of this report is to present the findings of the 3rd and 4th phases of a longitudinal study into the impact of changes in the TOEFL® exam on teaching in test preparation classrooms. Phase 1 (2003-2004) described the type of teaching taking place in 12 TOEFL preparation classrooms before the introduction of the new TOEFL. Phase 2…
Descriptors: English (Second Language), Language Tests, Second Language Learning, Longitudinal Studies