Otoyo, Lucia; Bush, Martin – Practical Assessment, Research & Evaluation, 2018
This article presents the results of an empirical study of "subset selection" tests, which are a generalisation of traditional multiple-choice tests in which test takers are able to express partial knowledge. Similar previous studies have mostly been supportive of subset selection, but the deduction of marks for incorrect responses has…
Descriptors: Multiple Choice Tests, Grading, Test Reliability, Test Format
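The scoring mechanism at issue in subset-selection testing is easy to state concretely. Below is a minimal Python sketch of one common penalty-based marking rule; the function name and the +1/-1 weights are illustrative assumptions, not the exact scheme studied by Otoyo and Bush:

```python
# Minimal sketch of a penalty-based subset-selection scoring rule.
# Assumptions (not taken from the paper itself): each item has one or
# more correct options, the test taker may select any subset, each
# correct selection earns +1 and each incorrect selection costs -1,
# so blind guessing has an expected payoff near zero.

def score_item(selected: set[str], correct: set[str]) -> int:
    """Score one subset-selection item: +1 per hit, -1 per false alarm."""
    hits = len(selected & correct)
    false_alarms = len(selected - correct)
    return hits - false_alarms

# Example: options A-D with correct answers {A, C}. A test taker with
# partial knowledge who picks {A, B} scores 1 - 1 = 0, while a more
# cautious {A} scores 1, rewarding expressed partial knowledge.
print(score_item({"A", "B"}, {"A", "C"}))  # 0
print(score_item({"A"}, {"A", "C"}))       # 1
```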
Flett, Gordon L.; Nepon, Taryn; Hewitt, Paul L.; Zaki-Azat, Justeena; Rose, Alison L.; Swiderski, Kristina – Journal of Psychoeducational Assessment, 2020
In the current article, we describe the development and validation of the Mistake Rumination Scale as a supplement to existing trait and cognitive measures of perfectionism. The Mistake Rumination Scale is a seven-item inventory that taps the tendency to ruminate about a past personal mistake. Psychometric analyses confirmed that the Mistake…
Descriptors: Personality Traits, Cognitive Processes, Test Construction, Cognitive Tests
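Validation of a short inventory like this typically begins with an internal-consistency estimate. The sketch below computes Cronbach's alpha for a seven-item scale; the responses are fabricated demo data for illustration, not the Mistake Rumination Scale's actual results:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative 5-point ratings on a 7-item scale: each simulated
# respondent has a base tendency plus small item-level noise.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))
noise = rng.integers(-1, 2, size=(100, 7))
scores = np.clip(base + noise, 1, 5)
print(round(cronbach_alpha(scores), 2))
```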
Allen, Jeff M.; Mattern, Krista – ACT, Inc., 2019
States and districts have expressed interest in administering the ACT® to 10th-grade students. Given that the ACT was designed to be administered in the spring of 11th grade or fall of 12th grade, the appropriateness of this use should be evaluated. As such, the focus of this paper is to summarize empirical evidence evaluating the use of the ACT…
Descriptors: Test Validity, College Entrance Examinations, High School Students, Grade 10
Rahn, Rhonda N.; Pruitt, Buster; Goodson, Patricia – Journal of American College Health, 2016
Objective: To analyze the literature in which researchers have utilized the National College Health Assessment (NCHA) I or the NCHA II. Participants and Methods: The authors selected peer-reviewed articles published between 2004 and July 2013 utilizing a single search term: National College Health Assessment. Articles were assessed for instrument…
Descriptors: Literature Reviews, College Students, Health, National Surveys
Romine, William L.; Schaffer, Dane L.; Barrow, Lloyd – International Journal of Science Education, 2015
We describe the development and validation of a three-tiered diagnostic test of the water cycle (DTWC) and use it to evaluate the impact of prior learning experiences on undergraduates' misconceptions. While most approaches to instrument validation take a positivist perspective using singular criteria such as reliability and fit with a measurement…
Descriptors: Undergraduate Students, Diagnostic Tests, Water, Item Response Theory
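For readers unfamiliar with the measurement models used in this kind of validation work, the sketch below implements the Rasch item response function, one of the simplest IRT models. It is illustrative only; the DTWC study's specific model and parameter estimates are not given in the abstract:

```python
import math

def rasch_p(theta: float, b: float) -> float:
    """Rasch model: probability of a correct response given person
    ability theta and item difficulty b, both on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A person whose ability matches the item's difficulty answers
# correctly half the time; easier items push the probability up.
print(rasch_p(theta=0.0, b=0.0))   # 0.5
print(rasch_p(theta=1.0, b=-0.5))  # ~0.82
```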
Sriram, Rishi – NASPA - Student Affairs Administrators in Higher Education, 2014
When student affairs professionals assess their work, they often employ some type of survey. The use of surveys stems from a desire to objectively measure outcomes, a demand from someone else (e.g., supervisor, accreditation committee) for data, or the feeling that numbers can provide an aura of competence. Although surveys are effective tools for…
Descriptors: Surveys, Test Construction, Student Personnel Services, Test Use
Hassan, Nurul Huda; Shih, Chih-Min – Language Assessment Quarterly, 2013
This article describes and reviews the Singapore-Cambridge General Certificate of Education Advanced Level General Paper (GP) examination. As a written test that is administered to preuniversity students, the GP examination is internationally recognised and accepted by universities and employers as proof of English competence. In this article, the…
Descriptors: Foreign Countries, College Entrance Examinations, English (Second Language), Writing Tests
Proctor, Thomas P.; Kim, YoungKoung Rachel – College Board, 2009
Presented at the national conference of the American Educational Research Association (AERA) in April 2009. This study examined the utility of scores on the SAT writing test, specifically the reliability of scores under generalizability and item response theories. The study also provides an overview of current predictive validity…
Descriptors: College Entrance Examinations, Writing Tests, Psychometrics, Predictive Validity
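Generalizability analysis of the kind referenced here decomposes score variance into a person (true-score) component and error components. Below is a minimal sketch of a generalizability coefficient for a persons-by-raters design, using hypothetical variance components rather than the study's estimates:

```python
def g_coefficient(var_person: float, var_residual: float, n_raters: int) -> float:
    """Generalizability coefficient for a persons-x-raters design:
    true-score variance over itself plus relative error, where the
    error shrinks as more raters' scores are averaged."""
    return var_person / (var_person + var_residual / n_raters)

# Hypothetical variance components for essay scores (not from the
# study): adding a second rater raises the coefficient of the average.
print(round(g_coefficient(var_person=0.50, var_residual=0.30, n_raters=1), 2))  # 0.62
print(round(g_coefficient(var_person=0.50, var_residual=0.30, n_raters=2), 2))  # 0.77
```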
Harris, Sandra M.; Larrier, Yvonne I.; Castano-Bishop, Marianne – Online Journal of Distance Learning Administration, 2011
The problem of attrition in online learning has drawn attention from distance education administrators and chief academic officers of higher education institutions. Many studies have addressed factors related to student attrition, persistence and retention in online courses. However, few studies have examined how student expectations influence…
Descriptors: Electronic Learning, Student Attitudes, Distance Education, Academic Persistence
Dikli, Semire – Journal of Technology, Learning, and Assessment, 2006
Automated Essay Scoring (AES) is defined as the computer technology that evaluates and scores written prose (Shermis & Barrera, 2002; Shermis & Burstein, 2003; Shermis, Raymat, & Barrera, 2003). AES systems are mainly used to overcome time, cost, reliability, and generalizability issues in writing assessment (Bereiter, 2003; Burstein,…
Descriptors: Scoring, Writing Evaluation, Writing Tests, Standardized Tests
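As a rough illustration of the AES idea, the toy pipeline below maps bag-of-words features of essays to human-assigned scores with ridge regression. It assumes scikit-learn is available; real AES engines, including those cited above, use far richer linguistic features and much larger training corpora:

```python
# Toy automated-essay-scoring sketch: fit a regression from TF-IDF
# features to human scores. The tiny corpus is fabricated purely to
# make the pipeline runnable end to end.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

essays = [
    "The experiment shows a clear causal link between the variables.",
    "I think stuff happens because of things.",
    "A well organized argument supported by cited evidence.",
    "bad essay no reasons",
]
human_scores = [5.0, 2.0, 5.0, 1.0]

model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(essays, human_scores)
print(model.predict(["An argument supported by evidence and clear reasoning."]))
```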