Huiying Cai; Xun Yan – Language Testing, 2024
Rater comments tend to be qualitatively analyzed to indicate raters' application of rating scales. This study applied natural language processing (NLP) techniques to quantify meaningful, behavioral information from a corpus of rater comments and triangulated that information with a many-facet Rasch measurement (MFRM) analysis of rater scores. The…
Descriptors: Natural Language Processing, Item Response Theory, Rating Scales, Writing Evaluation
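The kind of lexical quantification of rater comments described above can be sketched as follows. This is a hypothetical illustration only, not Cai and Yan's actual pipeline: the feature lexicons (`SCALE_TERMS`, `HEDGES`) and example comments are invented for demonstration.

```python
# Hypothetical sketch: turning free-text rater comments into simple
# behavioral counts (scale references, hedging) that could then be
# triangulated with rater scores. Lexicons and comments are invented.
from collections import Counter

SCALE_TERMS = {"band", "descriptor", "criterion", "rubric"}
HEDGES = {"maybe", "perhaps", "somewhat", "borderline"}

def comment_features(comment: str) -> Counter:
    """Count scale references and hedging words in one rater comment."""
    tokens = [t.strip(".,") for t in comment.lower().split()]
    feats = Counter()
    feats["n_tokens"] = len(tokens)
    feats["scale_refs"] = sum(t in SCALE_TERMS for t in tokens)
    feats["hedges"] = sum(t in HEDGES for t in tokens)
    return feats

comments = [
    "Borderline band 4, maybe band 5 on the coherence criterion.",
    "Clear descriptor match for grammar.",
]
for c in comments:
    print(comment_features(c))
```

Per-rater totals of such features could then be compared against rater severity estimates from an MFRM analysis.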
Knoch, Ute; Deygers, Bart; Khamboonruang, Apichat – Language Testing, 2021
Rating scale development in the field of language assessment is often considered in dichotomous ways: It is assumed to be guided either by expert intuition or by drawing on performance data. Even though quite a few authors have argued that rating scale development is rarely so easily classifiable, this dyadic view has dominated language testing…
Descriptors: Rating Scales, Test Construction, Language Tests, Test Use
Yu-Tzu Chang; Ann Tai Choe; Daniel Holden; Daniel R. Isbell – Language Testing, 2024
In this Brief Report, we describe an evaluation of and revisions to a rubric adapted from the Jacobs et al.'s (1981) ESL COMPOSITION PROFILE, with four rubric categories and 20-point rating scales, in the context of an intensive English program writing placement test. Analysis of 4 years of rating data (2016-2021, including 434 essays) using…
Descriptors: Language Tests, Rating Scales, Second Language Learning, English (Second Language)
Lestari, Santi B.; Brunfaut, Tineke – Language Testing, 2023
Assessing integrated reading-into-writing task performances is known to be challenging, and analytic rating scales have been found to better facilitate the scoring of these performances than other common types of rating scales. However, little is known about how specific operationalizations of the reading-into-writing construct in analytic rating…
Descriptors: Reading Writing Relationship, Writing Tests, Rating Scales, Writing Processes
Maria Treadaway; John Read – Language Testing, 2024
Standard-setting is an essential component of test development, supporting the meaningfulness and appropriate interpretation of test scores. However, in the high-stakes testing environment of aviation, standard-setting studies are underexplored. To address this gap, we document two stages in the standard-setting procedures for the Overseas Flight…
Descriptors: Standard Setting, Diagnostic Tests, High Stakes Tests, English for Special Purposes
Knoch, Ute; Chapelle, Carol A. – Language Testing, 2018
Argument-based validation requires test developers and researchers to specify what is entailed in test interpretation and use. Doing so has been shown to yield advantages (Chapelle, Enright, & Jamieson, 2010), but it also requires an analysis of how the concerns of language testers can be conceptualized in the terms used to construct a…
Descriptors: Test Validity, Language Tests, Evaluation Research, Rating Scales
Isbell, Dan; Winke, Paula – Language Testing, 2019
The American Council on the Teaching of Foreign Languages (ACTFL) oral proficiency interview -- computer (OPIc) testing system represents an ambitious effort in language assessment: Assessing oral proficiency in over a dozen languages, on the same scale, from virtually anywhere at any time. Especially for users in contexts where multiple foreign…
Descriptors: Oral Language, Language Tests, Language Proficiency, Second Language Learning
Kuiken, Folkert; Vedder, Ineke – Language Testing, 2017
The importance of functional adequacy as an essential component of L2 proficiency has been observed by several authors (Pallotti, 2009; De Jong, Steinel, Florijn, Schoonen, & Hulstijn, 2012a, b). The rationale underlying the present study is that the assessment of writing proficiency in L2 is not fully possible without taking into account the…
Descriptors: Second Language Learning, Rating Scales, Computational Linguistics, Persuasive Discourse
Deygers, Bart; Van Gorp, Koen – Language Testing, 2015
Considering scoring validity as encompassing both reliable rating scale use and valid descriptor interpretation, this study reports on the validation of a CEFR-based scale that was co-constructed and used by novice raters. The research questions this paper wishes to answer are (a) whether it is possible to construct a CEFR-based rating scale with…
Descriptors: Rating Scales, Scoring, Validity, Interrater Reliability
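Interrater reliability of the kind examined above is often indexed by chance-corrected agreement. A minimal sketch of Cohen's kappa for two raters assigning CEFR-style levels follows; the ratings are invented example data, not drawn from Deygers and Van Gorp's study.

```python
# Hedged illustration: percent agreement vs. Cohen's kappa for two
# raters on a CEFR-style scale. Example ratings are invented.
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[l] * c2[l] for l in set(r1) | set(r2)) / n**2
    return (observed - expected) / (1 - expected)

rater1 = ["B1", "B2", "B2", "C1", "B1", "B2", "C1", "B1"]
rater2 = ["B1", "B2", "B1", "C1", "B1", "B2", "C1", "B2"]
print(round(cohen_kappa(rater1, rater2), 3))
```

Here raw agreement is 6/8 = 0.75, but kappa is lower (about 0.62) because some agreement is expected by chance given the marginal level distributions.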
Huhta, Ari; Alanen, Riikka; Tarnanen, Mirja; Martin, Maisa; Hirvelä, Tuija – Language Testing, 2014
There is still relatively little research on how well the CEFR and similar holistic scales work when they are used to rate L2 texts. Using both multifaceted Rasch analyses and qualitative data from rater comments and interviews, the ratings obtained by using a CEFR-based writing scale and the Finnish National Core Curriculum scale for L2 writing…
Descriptors: Foreign Countries, Writing Skills, Second Language Learning, Finno Ugric Languages
Schaefer, Edward – Language Testing, 2008
The present study employed multi-faceted Rasch measurement (MFRM) to explore the rater bias patterns of native English-speaker (NES) raters when they rate EFL essays. Forty NES raters rated 40 essays written by female Japanese university students on a single topic adapted from the TOEFL Test of Written English (TWE). The essays were assessed using…
Descriptors: Writing Evaluation, Writing Tests, Program Effectiveness, Essays
Elder, Catherine; Barkhuizen, Gary; Knoch, Ute; von Randow, Janet – Language Testing, 2007
The use of online rater self-training is growing in popularity and has obvious practical benefits, facilitating access to training materials and rating samples and allowing raters to reorient themselves to the rating scale and self monitor their behaviour at their own convenience. However there has thus far been little research into rater…
Descriptors: Writing Evaluation, Writing Tests, Scoring Rubrics, Rating Scales

Schoonen, Rob; And Others – Language Testing, 1997
Reports on three studies conducted in the Netherlands about the rating reliability of lay and expert readers in rating content and language usage of students' writing performances in three kinds of writing assignments. Findings reveal that expert readers are more reliable in rating usage, whereas both lay and expert readers are reliable raters of…
Descriptors: Foreign Countries, Interrater Reliability, Language Usage, Models

Henning, Grant – Language Testing, 1996
Analyzes simulated performance ratings on a six-point scale by two independent raters to account for nonsystematic error in performance ratings. Results suggest that rater agreement or covariance is not always a dependable estimate of score reliability and that the practice of seeking additional raters for adjudication of discrepant ratings is not…
Descriptors: Correlation, Error Patterns, Interrater Reliability, Language Tests
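Henning's caution that rater agreement is not always a dependable estimate of score reliability can be illustrated with a toy simulation: when two raters share a systematic error component (e.g., a halo effect), their scores correlate highly with each other while correlating much less with the true score. All numbers below are simulated for illustration, not taken from the 1996 study.

```python
# Toy simulation: shared rater error inflates inter-rater correlation
# relative to the correlation of either rater with the true score.
import random

def corr(x, y):
    """Pearson correlation, computed from scratch with the stdlib."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(42)
N = 2000
true = [random.gauss(0, 1) for _ in range(N)]
shared = [random.gauss(0, 0.8) for _ in range(N)]  # error both raters share
r1 = [t + s + random.gauss(0, 0.3) for t, s in zip(true, shared)]
r2 = [t + s + random.gauss(0, 0.3) for t, s in zip(true, shared)]

print("inter-rater r:", round(corr(r1, r2), 2))      # high agreement
print("r1 vs true score:", round(corr(r1, true), 2))  # noticeably lower
```

The gap between the two correlations is exactly why adjudicating discrepant ratings with additional raters, who may share the same bias, does not necessarily improve score accuracy.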

Grant, Leslie – Language Testing, 1997
Describes current procedures used for testing bilingual teachers in the United States and focuses on one means of assessment used in Arizona. Examinee questionnaire responses, teacher questionnaire responses and test section analysis all contributed evidence for validity. (33 references) (Author/CK)
Descriptors: Bilingualism, Criterion Referenced Tests, Interrater Reliability, Language Teachers