Showing 1 to 15 of 16 results
Peer reviewed
PDF on ERIC Download full text
Chen, Michelle Y.; Flasko, Jennifer J. – Canadian Journal of Applied Linguistics / Revue canadienne de linguistique appliquée, 2020
Seeking evidence to support content validity is essential to test validation. This is especially the case in contexts where test scores are interpreted in relation to external proficiency standards and where new test content is constantly being produced to meet test administration and security demands. In this paper, we describe a modified…
Descriptors: Foreign Countries, Reading Tests, Language Tests, English (Second Language)
Peer reviewed
PDF on ERIC Download full text
Zhang, Xinxin; Gierl, Mark – Journal of Educational Issues, 2016
The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…
Descriptors: Test Items, Automation, Content Validity, Test Validity
Peer reviewed
Direct link
Luecht, Richard M. – Journal of Applied Testing Technology, 2013
Assessment engineering is a new way to design and implement scalable, sustainable and ideally lower-cost solutions to the complexities of designing and developing tests. It represents a merger of sorts between cognitive task modeling and engineering design principles--a merger that requires some new thinking about the nature of score scales, item…
Descriptors: Engineering, Test Construction, Test Items, Models
Peer reviewed
Direct link
Li, Xueming; Sireci, Stephen G. – Educational and Psychological Measurement, 2013
Validity evidence based on test content is of essential importance in educational testing. One source for such evidence is an alignment study, which helps evaluate the congruence between tested objectives and those specified in the curriculum. However, the results of an alignment study do not always sufficiently capture the degree to which a test…
Descriptors: Content Validity, Multidimensional Scaling, Data Analysis, Educational Testing
Peer reviewed
Direct link
Crotts, Katrina; Sireci, Stephen G.; Zenisky, April – Journal of Applied Testing Technology, 2012
Validity evidence based on test content is important for educational tests to demonstrate the degree to which they fulfill their purposes. Most content validity studies involve subject matter experts (SMEs) who rate items that comprise a test form. In computerized-adaptive testing, examinees take different sets of items and test "forms"…
Descriptors: Computer Assisted Testing, Adaptive Testing, Content Validity, Test Content
Peer reviewed
PDF on ERIC Download full text
Kalayci, Nurdan; Cimen, Orhan – Educational Sciences: Theory and Practice, 2012
The aim of this study is to examine the questionnaires used to evaluate teaching performance in higher education institutions, called "Instructor and Course Evaluation Questionnaires (ICEQ)," in terms of questionnaire preparation techniques and curriculum components. Obtaining at least one ICEQ belonging to any state and private…
Descriptors: Higher Education, Teaching Methods, Questionnaires, Learning Processes
Kumazawa, Takaaki – ProQuest LLC, 2011
Although classroom assessment is one of the most frequent practices carried out by teachers in all educational programs, limited research has been conducted to investigate the dependability and validity of criterion-referenced tests (CRTs). The main purpose of this study is to develop a criterion-referenced test for first-year Japanese university…
Descriptors: Criterion Referenced Tests, Test Construction, Test Validity, English (Second Language)
Peer reviewed
Turner, Ronna C.; Carlson, Laurie – International Journal of Testing, 2003
Item-objective congruence, as developed by R. Rovinelli and R. Hambleton, is used in test development for evaluating content validity at the item development stage. This article provides a mathematical extension to the Rovinelli and Hambleton index that is applicable to the multidimensional case. (SLD)
Descriptors: Content Validity, Test Construction, Test Content, Test Items
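For context, the Rovinelli-Hambleton index of item-objective congruence mentioned in this entry is commonly stated, in its unidimensional form, as follows (the formula is drawn from the broader measurement literature, not from the abstract above):

I_{ik} = \frac{N}{2N - 2}\left(\mu_{ik} - \bar{\mu}_{i}\right)

where N is the number of objectives, \mu_{ik} is the mean of the judges' ratings (+1, 0, or -1) of item i against objective k, and \bar{\mu}_{i} is the mean rating of item i across all N objectives. The index equals +1 when every judge rates the item as a perfect match to objective k and a mismatch to every other objective, and -1 in the reverse case; the Turner and Carlson extension generalizes it to items intended to measure more than one objective.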
Yamamoto, Kentaro; Kulick, Edward – 1992
Test items are designed to be representative of the subject areas that they measure and to reflect the importance of specific domains or item types within those subject areas. Content validity is achieved through the content specifications and the number of items in each content domain included in the design of the test. However, largely due to the normal…
Descriptors: Content Validity, Elementary Secondary Education, Field Tests, Mathematical Models
Peer reviewed
Barrett, Richard S. – Public Personnel Management, 1992
The Content Validation Form is presented as a means of proving that occupational tests provide a representative work sample or knowledge, skill, or ability necessary for a job. It is best used during test construction by a panel of subject matter experts. (SK)
Descriptors: Content Validity, Item Analysis, Multiple Choice Tests, Occupational Tests
Sireci, Stephen G. – 1995
The purpose of this paper is to clarify the seemingly discrepant views of test theorists and test developers about terminology related to the evaluation of test content. The origin and evolution of the concept of content validity are traced, and the concept is reformulated in a way that emphasizes the notion that content domain definition,…
Descriptors: Construct Validity, Content Validity, Definitions, Item Analysis
Peer reviewed
Direct link
O'Neil, Timothy; Sireci, Stephen G.; Huff, Kristen L. – Educational Assessment, 2004
Educational tests used for accountability purposes must represent the content domains they purport to measure. When such tests are used to monitor progress over time, the consistency of the test content across years is important for ensuring that observed changes in test scores are due to student achievement rather than to changes in what the test…
Descriptors: Test Items, Cognitive Ability, Test Content, Science Teachers
Peer reviewed
Sireci, Stephen G.; Geisinger, Kurt F. – Applied Psychological Measurement, 1995
An expanded version of the method of content evaluation proposed by S. G. Sireci and K. F. Geisinger (1992) was evaluated with respect to a national licensure examination and a nationally standardized social studies achievement test. Two groups of 15 subject-matter experts rated the similarity and content relevance of the items. (SLD)
Descriptors: Achievement Tests, Cluster Analysis, Construct Validity, Content Validity
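To make the procedure sketched in this entry concrete, the short example below applies multidimensional scaling to a matrix of averaged expert similarity ratings. It is a minimal illustration only: the item labels, the rating matrix, and the use of scikit-learn are assumptions for the sketch, not details taken from the Sireci and Geisinger study.

# Minimal sketch: multidimensional scaling of averaged SME similarity ratings.
# Hypothetical data; the actual study's items, rating scale, and software differ.
import numpy as np
from sklearn.manifold import MDS

# Averaged similarity ratings (1 = very dissimilar, 5 = very similar)
# from a panel of subject-matter experts, for four hypothetical items.
similarity = np.array([
    [5.0, 4.2, 1.8, 2.1],
    [4.2, 5.0, 2.0, 2.4],
    [1.8, 2.0, 5.0, 4.5],
    [2.1, 2.4, 4.5, 5.0],
])

# Convert similarities to dissimilarities so that items rated as similar
# are placed close together in the recovered space.
dissimilarity = similarity.max() - similarity

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)

# Items that cluster together in this space can then be compared with the
# test blueprint to check whether the intended content structure is recovered.
for item, (x, y) in zip(["Item 1", "Item 2", "Item 3", "Item 4"], coords):
    print(f"{item}: ({x:.2f}, {y:.2f})")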
PDF pending restoration
Yoon, Bokhee; And Others – 1991
The validity of teachers' reports of students' instructional experiences (content exposure or coverage) and the content validity of a given course were studied by examining the consistency of reported content coverage for teachers across two consecutive years (1988 and 1989). In addition, the sensitivity of the test to instruction was examined by…
Descriptors: Content Analysis, Content Validity, Course Content, Course Evaluation
Hater, John J. – 1992
Work Keys (occupational tests developed by American College Testing) could support an employer's human resource function in a number of ways: (1) communicating to educators the skill requirements for an employer's particular jobs on a national basis; (2) providing students with a realistic preview of skills needed for jobs and an assessment of…
Descriptors: Adult Education, Basic Skills, Construct Validity, Content Validity