Li, Xu; Ouyang, Fan; Liu, Jianwen; Wei, Chengkun; Chen, Wenzhi – Journal of Educational Computing Research, 2023
The computer-supported writing assessment (CSWA) has been widely used to reduce instructor workload and provide real-time feedback. Interpretability of CSWA draws extensive attention because it can benefit the validity, transparency, and knowledge-aware feedback of academic writing assessments. This study proposes a novel assessment tool,…
Descriptors: Computer Assisted Testing, Writing Evaluation, Feedback (Response), Natural Language Processing
Carney, Michele; Crawford, Angela; Siebert, Carl; Osguthorpe, Rich; Thiede, Keith – Applied Measurement in Education, 2019
The "Standards for Educational and Psychological Testing" recommend an argument-based approach to validation that involves a clear statement of the intended interpretation and use of test scores, the identification of the underlying assumptions and inferences in that statement--termed the interpretation/use argument, and gathering of…
Descriptors: Inquiry, Test Interpretation, Validity, Scores
Lenz, A. Stephen; Ault, Haley; Balkin, Richard S.; Barrio Minton, Casey; Erford, Bradley T.; Hays, Danica G.; Kim, Bryan S. K.; Li, Chi – Measurement and Evaluation in Counseling and Development, 2022
In April 2021, The Association for Assessment and Research in Counseling Executive Council commissioned a time-referenced task group to revise the Responsibilities of Users of Standardized Tests (RUST) Statement (3rd edition) published by the Association for Assessment in Counseling (AAC) in 2003. The task group developed a work plan to implement…
Descriptors: Responsibility, Standardized Tests, Counselor Training, Ethics
Gawliczek, Piotr; Krykun, Viktoriia; Tarasenko, Nataliya; Tyshchenko, Maksym; Shapran, Oleksandr – Advanced Education, 2021
The article deals with an innovative, cutting-edge solution within the language testing realm, namely computer adaptive language testing (CALT) in accordance with the NATO Standardization Agreement 6001 (NATO STANAG 6001) requirements for further implementation in foreign language training of personnel of the Armed Forces of Ukraine (AF of…
Descriptors: Computer Assisted Testing, Adaptive Testing, Language Tests, Second Language Instruction
DeMara, Ronald F.; Bacanli, Salih S.; Bidoki, Neda; Xu, Jun; Nassiff, Edwin; Donnelly, Julie; Turgut, Damla – Journal of Educational Technology Systems, 2020
This research developed an approach to integrate the complementary benefits of digitized assessments and peer learning. Its basic premise and associated hypotheses are that using students' fine-grained assessments of correct and incorrect quiz answers to pair them into remediation peer-learning cohorts is an effective means of…
Descriptors: Undergraduate Students, Engineering Education, Computer Assisted Testing, Pilot Projects
Oliveri, María Elena; von Davier, Alina A. – International Journal of Testing, 2016
In this study, we propose that the unique needs and characteristics of linguistic minorities should be considered throughout the test development process. Unlike most measurement invariance investigations in the assessment of linguistic minorities, which typically are conducted after test administration, we propose strategies that focus on the…
Descriptors: Psychometrics, Linguistics, Test Construction, Testing
Brennan, Robert L. – Journal of Educational Measurement, 2013
Kane's paper "Validating the Interpretations and Uses of Test Scores" is the most complete and clearest discussion yet available of the argument-based approach to validation. At its most basic level, validation as formulated by Kane is fundamentally a simply-stated two-step enterprise: (1) specify the claims inherent in a particular interpretation…
Descriptors: Validity, Test Interpretation, Test Use, Scores
Forbes, Cory; Lange, Kim; Möller, Kornelia; Biggers, Mandy; Laux, Mira; Zangori, Laura – International Journal of Science Education, 2014
To help explain the differences in students' performance on internationally administered science assessments, cross-national, video-based observational studies have been advocated, but none have yet been conducted at the elementary level for science. The USA and Germany are two countries with large formal education systems whose students…
Descriptors: Foreign Countries, Comparative Analysis, Video Technology, Elementary School Science
Lissitz, Robert W.; Hou, Xiaodong; Slater, Sharon Cadman – Journal of Applied Testing Technology, 2012
This article investigates several questions regarding the impact of different item formats on measurement characteristics. Constructed response (CR) items and multiple choice (MC) items obviously differ in their formats and in the resources needed to score them. As such, they have been the subject of considerable discussion regarding the impact of…
Descriptors: Computer Assisted Testing, Scoring, Evaluation Problems, Psychometrics
Raymond, Mark R.; Neustel, Sandra; Anderson, Dan – Educational Measurement: Issues and Practice, 2009
Examinees who take high-stakes assessments are usually given an opportunity to repeat the test if they are unsuccessful on their initial attempt. To prevent examinees from obtaining unfair score increases by memorizing the content of specific test items, testing agencies usually assign a different test form to repeat examinees. The use of multiple…
Descriptors: Test Results, Test Items, Testing, Aptitude Tests
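Scores from alternate test forms are typically placed on a common scale through equating before repeat attempts are compared. As a minimal illustrative sketch (not the authors' method), linear equating under a random-groups design matches standardized deviates across forms:

```python
from statistics import mean, stdev

def linear_equate(x, form_x_scores, form_y_scores):
    """Place a Form X raw score on the Form Y scale by matching
    standardized deviates: y = mu_Y + (sigma_Y / sigma_X) * (x - mu_X)."""
    mu_x, sd_x = mean(form_x_scores), stdev(form_x_scores)
    mu_y, sd_y = mean(form_y_scores), stdev(form_y_scores)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Illustrative toy data: Form Y ran 5 points harder, same spread.
print(linear_equate(7, [0, 10], [5, 15]))  # 12.0
```

The score lists here are hypothetical; operational equating uses full examinee score distributions and often nonlinear (equipercentile or IRT) methods.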
Grenwelge, Cheryl H. – Journal of Psychoeducational Assessment, 2009
The Woodcock-Johnson III Brief Assessment is a "maximum performance test" (Reynolds, Livingston, & Willson, 2006) that is designed to assess the upper levels of knowledge and skills of the test taker, using both power and speed to obtain a large amount of information in a short period of time. The Brief Assessment also provides an adequate…
Descriptors: Test Results, Knowledge Level, Testing, Performance Tests
Lind, Marianne; Moen, Inger; Simonsen, Hanne Gram – Clinical Linguistics & Phonetics, 2007
The article reports on a comparative study of the abilities of aphasic speakers and normal control subjects to comprehend and produce verbs and sentences. The analysis is based on test results obtained as part of the standardization procedure for a test battery originally developed for Dutch and since translated and adapted for English and…
Descriptors: Sentences, Test Results, Form Classes (Languages), Aphasia

Kim, Seock-Ho – Applied Psychological Measurement, 1997
Reviews the most recent version of the BILOG computer program, which estimates item and trait level parameters for the one-, two-, and three-parameter logistic unidimensional Item Response Models for dichotomously scored data. Finds this version useful. (SLD)
Descriptors: Computer Software, Item Analysis, Item Response Theory, Scores
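The logistic models BILOG estimates have a compact closed form. A minimal sketch of the three-parameter logistic (3PL) item response function, of which the 1PL and 2PL are special cases (parameter values below are illustrative only):

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3PL model.

    theta: examinee trait level; a: item discrimination;
    b: item difficulty; c: pseudo-guessing lower asymptote.
    c=0 gives the 2PL; a=1 and c=0 gives the 1PL.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b, the probability is midway between c and 1.
print(irt_prob(0.0, a=1.2, b=0.0, c=0.2))  # 0.6
```

BILOG itself fits a, b, and c (and trait levels) to dichotomously scored response data via marginal maximum likelihood; this sketch only shows the model being fit.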

Wolfe, Edward W.; Gitomer, Drew H. – Applied Measurement in Education, 2001
Attempted to improve the measurement quality of a complex performance assessment through principled assessment design using the example of the National Board for Professional Teaching Standards Early Childhood/Generalist examination. All indexes examined improved after revisions were made. Results show the importance of attention to assessment…
Descriptors: Change, Performance Based Assessment, Psychometrics, Scores

Hills, John R. – Educational Measurement: Issues and Practice, 1984
Normal Curve Equivalents (NCEs), a new score system for standardized tests, are used by school districts in reporting results to federal funding agencies. The author uses a quiz format to answer questions on the use of NCE scores. (EGS)
Descriptors: Scores, Scoring, Standardized Tests, Test Interpretation
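The NCE scale described above is an equal-interval rescaling of percentile ranks with mean 50 and standard deviation 21.06, chosen so that NCEs 1, 50, and 99 coincide with percentile ranks 1, 50, and 99. A minimal sketch of the conversion:

```python
from statistics import NormalDist

def percentile_to_nce(pr):
    """Convert a percentile rank (between 1 and 99) to a
    Normal Curve Equivalent: NCE = 50 + 21.06 * z, where z is
    the normal deviate corresponding to the percentile rank."""
    z = NormalDist().inv_cdf(pr / 100.0)
    return 50.0 + 21.06 * z

print(round(percentile_to_nce(50), 1))  # 50.0
print(round(percentile_to_nce(99), 1))  # 99.0
```

Because the transformation is equal-interval, NCEs can be meaningfully averaged across students, which percentile ranks cannot.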