Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 5
Since 2016 (last 10 years): 10
Since 2006 (last 20 years): 15
Author
Zhang, Mo: 3
Lee, Yong-Won: 2
Sykes, Robert C.: 2
Allspach, Jill R.: 1
Arthur, Ann M.: 1
Bastianello, Tamara: 1
Bennett, Randy: 1
Breyer, F. Jay: 1
Broer, Markus: 1
Brondino, Margherita: 1
Burton, Nancy: 1
Education Level
Higher Education: 9
Postsecondary Education: 8
Secondary Education: 6
High Schools: 5
Elementary Education: 3
Early Childhood Education: 1
Grade 1: 1
Grade 11: 1
Grade 12: 1
Grade 4: 1
Grade 6: 1
Audience
Administrators: 1
Counselors: 1
Teachers: 1
Location
Illinois: 1
Iran (Tehran): 1
Italy: 1
Norway: 1
Assessments and Surveys
SAT (College Admission Test): 5
ACT Assessment: 4
Test of English as a Foreign…: 4
Graduate Management Admission…: 1
Graduate Record Examinations: 1
Peabody Picture Vocabulary…: 1
Van A. Lemmon; Lenin C. Grajo – Journal of Occupational Therapy, Schools & Early Intervention, 2023
The Assessment of Written Expression Adaptivity for Assistive Technology (AWE ADAPT for AT) was developed to examine written expression participation in high school-age clients to aid the identification of assistive technology (AT) tools that can be recommended to optimize written expression. The tool is guided by the Occupational Adaptation (OA)…
Descriptors: High School Students, Writing Evaluation, Writing Tests, Assistive Technology
Steedle, Jeffrey T.; Cho, Young Woo; Wang, Shichao; Arthur, Ann M.; Li, Dongmei – Educational Measurement: Issues and Practice, 2022
As testing programs transition from paper to online testing, they must study mode comparability to support the exchangeability of scores from different testing modes. To that end, a series of three mode comparability studies was conducted during the 2019-2020 academic year with examinees randomly assigned to take the ACT college admissions exam on…
Descriptors: College Entrance Examinations, Computer Assisted Testing, Scores, Test Format
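The Steedle, Cho, Wang, Arthur, and Li entry above concerns mode comparability between paper and online testing with randomly assigned examinees. As a rough, generic illustration of the kind of check such comparability studies involve (not the authors' actual analysis), the sketch below computes a standardized mean difference (Cohen's d) between scores from two hypothetical mode groups; the data and the interpretive threshold in the comment are assumptions.

```python
import numpy as np

def cohens_d(scores_a, scores_b):
    """Standardized mean difference between two independent groups."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical composite scores for randomly assigned paper and online groups.
rng = np.random.default_rng(0)
paper = rng.normal(20.8, 5.4, size=2000)
online = rng.normal(20.6, 5.4, size=2000)

d = cohens_d(paper, online)
print(f"Mode effect (Cohen's d): {d:.3f}")
# A negligible |d| (e.g., below 0.05) would be consistent with scores that
# need no mode-specific statistical adjustment in this toy setup.
```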
Chen, Michelle Y.; Liu, Yan; Zumbo, Bruno D. – Educational and Psychological Measurement, 2020
This study introduces a novel differential item functioning (DIF) method based on propensity score matching that tackles two challenges in analyzing performance assessment data, that is, continuous task scores and lack of a reliable internal variable as a proxy for ability or aptitude. The proposed DIF method consists of two main stages. First,…
Descriptors: Probability, Scores, Evaluation Methods, Test Items
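The Chen, Liu, and Zumbo entry above describes a two-stage DIF method built on propensity score matching for continuous task scores. The truncated abstract does not give the full procedure, so the sketch below only illustrates the generic matching idea it builds on: estimate propensity scores from observed covariates, match focal-group examinees to reference-group examinees on those scores, and compare task scores within the matched sample. All variable names and data here are hypothetical, not from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical data: group membership (1 = focal, 0 = reference),
# matching covariates, and a continuous task score.
n_ref, n_focal = 600, 400
group = np.array([0] * n_ref + [1] * n_focal)
covariates = rng.normal(size=(n_ref + n_focal, 3)) + group[:, None] * 0.3
task_score = covariates.sum(axis=1) + rng.normal(size=n_ref + n_focal)

# Stage 1 (generic): estimate propensity scores P(focal | covariates).
propensity = LogisticRegression().fit(covariates, group).predict_proba(covariates)[:, 1]

# Match each focal examinee to the nearest unused reference examinee on propensity.
focal_idx = np.where(group == 1)[0]
available = set(np.where(group == 0)[0])
pairs = []
for i in focal_idx:
    candidates = np.array(sorted(available))
    j = candidates[np.argmin(np.abs(propensity[candidates] - propensity[i]))]
    pairs.append((i, j))
    available.remove(j)

# Stage 2 (generic): compare mean task scores within the matched sample.
focal_scores = task_score[[i for i, _ in pairs]]
matched_ref_scores = task_score[[j for _, j in pairs]]
print("Matched mean difference (focal - reference):",
      round(focal_scores.mean() - matched_ref_scores.mean(), 3))
# A difference near zero after matching is consistent with no DIF in this toy example.
```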
Gary A. Troia; Frank R. Lawrence; Julie S. Brehmer; Kaitlin Glause; Heather L. Reichmuth – Grantee Submission, 2023
Much of the research that has examined the writing knowledge of school-age students has relied on interviews to ascertain this information, which is problematic because interviews may underestimate breadth and depth of writing knowledge, require lengthy interactions with participants, and do not permit a direct evaluation of a prescribed array of…
Descriptors: Writing Tests, Writing Evaluation, Knowledge Level, Elementary School Students
Bastianello, Tamara; Brondino, Margherita; Persici, Valentina; Majorano, Marinella – Journal of Research in Childhood Education, 2023
The present contribution presents an assessment tool (i.e., the TALK-assessment) built to evaluate the language development and school readiness of Italian preschoolers before they enter primary school, and examines its predictive validity for the children's reading and writing skills at the end of the first year of primary school. The early…
Descriptors: Literacy, Computer Assisted Testing, Italian, Language Acquisition
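The Bastianello, Brondino, Persici, and Majorano entry above evaluates the predictive validity of a preschool screening for later reading and writing outcomes. As a minimal, generic illustration (not the authors' analysis), predictive validity is often summarized as the correlation between the earlier screening score and the later outcome; the data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scores: preschool screening vs. end-of-grade-1 writing outcome.
screening = rng.normal(50, 10, size=200)
grade1_writing = 0.6 * screening + rng.normal(0, 8, size=200)

# Pearson correlation as a simple predictive-validity coefficient.
r = np.corrcoef(screening, grade1_writing)[0, 1]
print(f"Predictive validity (Pearson r): {r:.2f}")
```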
Jahangard, Ali – MEXTESOL Journal, 2022
One of the most interesting studies on the role of L1 and contrastive analysis in vocabulary teaching is by Laufer and Girsai (2008). However, due to some methodological issues, their research findings are open to criticism and controversy. The current study aimed to replicate the research with a more rigorous design to re-investigate the…
Descriptors: Grammar, Vocabulary Development, Second Language Learning, Second Language Instruction
Jølle, Lennart; Skar, Gustaf B. – Scandinavian Journal of Educational Research, 2020
This paper reports findings from a project called "The National Panel of Raters" (NPR) that took place within a writing test programme in Norway (2010-2016). A recent research project found individual differences between the raters in the NPR. This paper reports results from an explorative follow-up study in which 63 NPR members were…
Descriptors: Foreign Countries, Validity, Scoring, Program Descriptions
Steedle, Jeffrey; Pashley, Peter; Cho, YoungWoo – ACT, Inc., 2020
Three mode comparability studies were conducted on the following Saturday national ACT test dates: October 26, 2019, December 14, 2019, and February 8, 2020. The primary goal of these studies was to evaluate whether ACT scores exhibited mode effects between paper and online testing that would necessitate statistical adjustments to the online…
Descriptors: Test Format, Computer Assisted Testing, College Entrance Examinations, Scores
Deane, Paul; Song, Yi; van Rijn, Peter; O'Reilly, Tenaha; Fowles, Mary; Bennett, Randy; Sabatini, John; Zhang, Mo – Reading and Writing: An Interdisciplinary Journal, 2019
This paper presents a theoretical and empirical case for the value of scenario-based assessment (SBA) in the measurement of students' written argumentation skills. First, we frame the problem in terms of creating a reasonably efficient method of evaluating written argumentation skills, including for students at relatively low levels of competency.…
Descriptors: Vignettes, Writing Skills, Persuasive Discourse, Writing Evaluation
Rios, Joseph A.; Sparks, Jesse R.; Zhang, Mo; Liu, Ou Lydia – ETS Research Report Series, 2017
Proficiency with written communication (WC) is critical for success in college and careers. As a result, institutions face a growing challenge to accurately evaluate their students' writing skills to obtain data that can support demands of accreditation, accountability, or curricular improvement. Many current standardized measures, however, lack…
Descriptors: Test Construction, Test Validity, Writing Tests, College Outcomes Assessment
Engelhard, George, Jr.; Kobrin, Jennifer L.; Wind, Stefanie A. – International Journal of Testing, 2014
The purpose of this study is to explore patterns in model-data fit related to subgroups of test takers from a large-scale writing assessment. Using data from the SAT, a calibration group was randomly selected to represent test takers who reported that English was their best language from the total population of test takers (N = 322,011). A…
Descriptors: College Entrance Examinations, Writing Tests, Goodness of Fit, English
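The Engelhard, Kobrin, and Wind entry above explores model-data fit for subgroups of SAT writing test takers. The excerpt does not name the measurement model or fit statistics used, so the sketch below shows one common way item-level fit is quantified, residual-based infit and outfit mean squares under a dichotomous Rasch model; the model choice, data, and the "near 1.0" rule of thumb are assumptions for illustration only.

```python
import numpy as np

def rasch_fit(responses, theta, b):
    """Infit/outfit mean squares per item for a persons x items 0/1 matrix
    under a dichotomous Rasch model with abilities theta and difficulties b."""
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))  # expected scores
    w = p * (1.0 - p)                                          # response variances
    resid_sq = (responses - p) ** 2
    outfit = (resid_sq / w).mean(axis=0)           # unweighted mean square
    infit = resid_sq.sum(axis=0) / w.sum(axis=0)   # information-weighted mean square
    return infit, outfit

rng = np.random.default_rng(7)
theta = rng.normal(0, 1, size=500)       # hypothetical person abilities
b = np.linspace(-2, 2, 10)               # hypothetical item difficulties
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = (rng.random((500, 10)) < prob).astype(float)

infit, outfit = rasch_fit(responses, theta, b)
print("Item infit :", np.round(infit, 2))
print("Item outfit:", np.round(outfit, 2))
# Values near 1.0 indicate adequate model-data fit; comparing such statistics
# across examinee subgroups is one way to explore subgroup-related misfit.
```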
Cho, Yeonsuk; Rijmen, Frank; Novák, Jakub – Language Testing, 2013
This study examined the influence of prompt characteristics on the averages of all scores given to test taker responses on the TOEFL iBT[TM] integrated Read-Listen-Write (RLW) writing tasks for multiple administrations from 2005 to 2009. In the context of TOEFL iBT RLW tasks, the prompt consists of a reading passage and a lecture. To understand…
Descriptors: English (Second Language), Language Tests, Writing Tests, Cues
Zhang, Mo; Breyer, F. Jay; Lorenz, Florian – ETS Research Report Series, 2013
In this research, we investigated the suitability of implementing "e-rater"® automated essay scoring in a high-stakes large-scale English language testing program. We examined the effectiveness of generic scoring and 2 variants of prompt-based scoring approaches. Effectiveness was evaluated on a number of dimensions, including agreement…
Descriptors: Computer Assisted Testing, Computer Software, Scoring, Language Tests
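The Zhang, Breyer, and Lorenz entry above evaluates e-rater scoring partly in terms of human-machine agreement. The truncated abstract does not list the specific statistics reported, so the sketch below implements one agreement index commonly used in automated-scoring research, quadratic weighted kappa, on hypothetical essay scores; the score scale and data are invented.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_score, max_score):
    """Quadratic weighted kappa between two sets of integer ratings."""
    a = np.asarray(rater_a) - min_score
    b = np.asarray(rater_b) - min_score
    k = max_score - min_score + 1

    observed = np.zeros((k, k))
    for i, j in zip(a, b):
        observed[i, j] += 1

    # Expected matrix from the marginal distributions, scaled to the same total.
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()

    # Quadratic disagreement weights.
    idx = np.arange(k)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical human and machine scores on a 1-6 essay scale.
rng = np.random.default_rng(3)
human = rng.integers(1, 7, size=300)
machine = np.clip(human + rng.integers(-1, 2, size=300), 1, 6)

print(f"Quadratic weighted kappa: {quadratic_weighted_kappa(human, machine, 1, 6):.2f}")
```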
Peterson, Lisa S.; Martinez, Andrew; Turner, Terez L. – Journal of Psychoeducational Assessment, 2010
This article presents a review of the "Process Assessment of the Learner-Second Edition" (PAL-II), an individual or group-administered instrument designed to assess the cognitive processes involved in academic tasks in kindergarten through sixth grade. The instrument allows the examiner to identify reasons for underachievement and…
Descriptors: Test Items, Intervention, Learning Disabilities, Mathematics Tests
Lee, Yong-Won; Kantor, Robert; Mollaun, Pam – 2002
This paper reports the results of generalizability theory (G) analyses done for new writing and speaking tasks for the Test of English as a Foreign Language (TOEFL). For writing, a special focus was placed on evaluating the impact on the reliability of the number of raters (or ratings) per essay (one or two) and the number of tasks (one, two, or…
Descriptors: English (Second Language), Generalizability Theory, Reliability, Scores
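The Lee, Kantor, and Mollaun entry above examines how the number of raters per essay and the number of tasks affect score reliability under generalizability theory. The sketch below shows a standard D-study style calculation of the generalizability coefficient for a fully crossed persons x tasks x raters design; the variance components are invented for illustration and are not taken from the TOEFL study.

```python
# Generalizability coefficient for a crossed persons x tasks x raters design:
#   E(rho^2) = var_p / (var_p + var_pt/n_t + var_pr/n_r + var_ptr/(n_t * n_r))

def g_coefficient(var_p, var_pt, var_pr, var_ptr, n_tasks, n_raters):
    relative_error = var_pt / n_tasks + var_pr / n_raters + var_ptr / (n_tasks * n_raters)
    return var_p / (var_p + relative_error)

# Hypothetical variance components (person, person x task, person x rater, residual).
components = dict(var_p=0.50, var_pt=0.20, var_pr=0.05, var_ptr=0.25)

for n_tasks in (1, 2, 3):
    for n_raters in (1, 2):
        g = g_coefficient(n_tasks=n_tasks, n_raters=n_raters, **components)
        print(f"tasks={n_tasks}, raters={n_raters}: E(rho^2) = {g:.2f}")
# With these assumed components, adding tasks raises reliability more than adding
# raters, which mirrors the kind of trade-off the study investigates.
```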