Publication Date
In 2025: 0
Since 2024: 0
Since 2021 (last 5 years): 0
Since 2016 (last 10 years): 0
Since 2006 (last 20 years): 7
Source
College Board: 7
Author
Hendrickson, Amy: 2
Kobrin, Jennifer L.: 2
Patterson, Brian: 2
Brennan, Robert L.: 1
Camara, Wayne: 1
Ewing, Maureen: 1
Kim, YoungKoung Rachel: 1
Kimmel, Ernest W.: 1
Lee, Eunjung: 1
Lee, Won-Chan: 1
Mattern, Krista: 1
(more authors not shown)
Publication Type
Reports - Research: 4
Non-Print Media: 2
Reference Materials - General: 2
Speeches/Meeting Papers: 2
Numerical/Quantitative Data: 1
Reports - Evaluative: 1
Education Level
High Schools: 4
Higher Education: 4
Postsecondary Education: 4
Secondary Education: 2
Assessments and Surveys
SAT (College Admission Test): 4
Advanced Placement…: 1
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L. – College Board, 2012
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, which contain both multiple-choice and constructed…
Descriptors: Test Construction, Test Interpretation, Test Norms, Test Reliability
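For readers unfamiliar with equating, the sketch below illustrates the basic idea using simple linear equating between two hypothetical forms. It is a generic textbook method applied to invented score data, not the procedure studied in the report above.

```python
# A minimal sketch of linear equating between two test forms.
# Generic illustration only; not the method from Lee, Lee, and Brennan (2012).
import statistics

def linear_equate(x, form_x_scores, form_y_scores):
    """Map a form-X raw score x onto the form-Y scale so that equated
    scores match form Y's mean and standard deviation."""
    mu_x, mu_y = statistics.mean(form_x_scores), statistics.mean(form_y_scores)
    sd_x, sd_y = statistics.stdev(form_x_scores), statistics.stdev(form_y_scores)
    return sd_y / sd_x * (x - mu_x) + mu_y

# Hypothetical score samples from two administrations (invented data).
form_x = [52, 61, 70, 48, 66, 59, 73, 55]
form_y = [50, 58, 67, 45, 64, 57, 71, 53]
print(round(linear_equate(60, form_x, form_y), 2))  # form-Y equivalent of 60
```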
Reshetar, Rosemary; Melican, Gerald J. – College Board, 2010
This paper discusses issues related to the design and psychometric work for mixed-format tests, that is, tests containing both multiple-choice (MC) and constructed-response (CR) items. The issues of validity, fairness, reliability, and score consistency can be addressed, but for mixed-format tests there are many decisions to be made and no examination or…
Descriptors: Psychometrics, Test Construction, Multiple Choice Tests, Test Items
Proctor, Thomas P.; Kim, YoungKoung Rachel – College Board, 2009
Presented at the annual meeting of the American Educational Research Association (AERA) in April 2009. This study examined the utility of scores on the SAT writing test, specifically the reliability of scores under generalizability theory and item response theory. The study also provides an overview of current predictive validity…
Descriptors: College Entrance Examinations, Writing Tests, Psychometrics, Predictive Validity
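As a companion to the study above, the sketch below shows how a generalizability coefficient is computed from variance components in a simple persons-by-items design. The design and all variance components are hypothetical, not values from the Proctor and Kim study.

```python
# A minimal sketch of a generalizability (G) coefficient for a persons x items
# design: rho^2 = var_p / (var_p + var_pi,e / n_items). Hypothetical numbers.

def g_coefficient(var_person, var_residual, n_items):
    """Generalizability coefficient for relative decisions."""
    return var_person / (var_person + var_residual / n_items)

sigma2_p = 0.45    # person (true-score) variance, assumed
sigma2_pie = 0.90  # person-by-item interaction confounded with error, assumed

# D-study question: how does reliability change with more essay prompts?
for n in (1, 2, 4):
    print(n, round(g_coefficient(sigma2_p, sigma2_pie, n), 3))
```

Adding prompts raises the coefficient because the error term is averaged over more observations per person.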
Hendrickson, Amy; Patterson, Brian; Ewing, Maureen – College Board, 2010
The psychometric considerations and challenges associated with including constructed-response items on tests are discussed, along with how these issues affect the form assembly specifications for mixed-format exams. Reliability and validity, security and fairness, pretesting, content and skills coverage, test length and timing, weights, statistical…
Descriptors: Multiple Choice Tests, Test Format, Test Construction, Test Validity
Mattern, Krista; Camara, Wayne; Kobrin, Jennifer L. – College Board, 2007
This report summarizes the research conducted thus far on the new SAT writing section. The evidence provided reveals that the new writing section has satisfactory psychometric quality: its reliability is acceptable; it is significantly related to first-year college GPA and college English grades; it has been…
Descriptors: College Entrance Examinations, Writing Tests, Educational Research, Psychometrics
Kobrin, Jennifer L.; Kimmel, Ernest W. – College Board, 2006
Based on statistics from the first few administrations of the SAT writing section, the test is performing as expected. The reliability of the writing section is very similar to that of other writing assessments. Based on preliminary validity research, the writing section is expected to add modestly to the prediction of college performance when…
Descriptors: Test Construction, Writing Tests, Cognitive Tests, College Entrance Examinations
Hendrickson, Amy; Patterson, Brian; Melican, Gerald – College Board, 2008
Presented at the annual meeting of the National Council on Measurement in Education (NCME) in New York in March 2008. This presentation explores how different item weightings affect the effective weights, validity coefficients, and test reliability of composite scores among test takers.
Descriptors: Multiple Choice Tests, Test Format, Test Validity, Test Reliability
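The sketch below illustrates the nominal-versus-effective-weight distinction that the presentation examines, using simulated section scores. The data, weights, and the two-section composite are invented for illustration.

```python
# A minimal sketch of nominal vs. effective weights for a two-section composite
# (e.g., an MC section and a CR section). All data are simulated, not from the
# NCME presentation. Effective weight of section j: w_j * cov(X_j, C) / var(C).
import numpy as np

rng = np.random.default_rng(0)
mc = rng.normal(50, 10, 1000)            # hypothetical MC section scores
cr = 0.6 * mc + rng.normal(20, 8, 1000)  # CR section, correlated with MC

for w_mc, w_cr in [(1.0, 1.0), (2.0, 1.0)]:  # two nominal weighting schemes
    comp = w_mc * mc + w_cr * cr
    var_c = comp.var(ddof=1)
    eff_mc = w_mc * np.cov(mc, comp)[0, 1] / var_c
    eff_cr = w_cr * np.cov(cr, comp)[0, 1] / var_c
    print(f"nominal ({w_mc}, {w_cr}) -> effective ({eff_mc:.2f}, {eff_cr:.2f})")
```

Because the sections differ in variance and are correlated, the effective weights (which sum to 1) can diverge noticeably from the nominal weights assigned during form assembly.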