Showing all 11 results
Peer reviewed
Briggs, Derek C. – Measurement: Interdisciplinary Research and Perspectives, 2013
In his focus article "How Is Testing Supposed to Improve Schooling?" Ed Haertel distinguishes between seven uses of educational tests as a function of the intended action and what or who will be influenced by the intended action. He then applies Mike Kane's interpretive argument approach (Kane, 2006) as a basis for speculating about the validity…
Descriptors: Educational Testing, Accountability, Educational Improvement, Teacher Evaluation
Peer reviewed
Soper, John C.; Brenneke, Judith Staley – Journal of Economic Education, 1987
Offers practical tips on how teachers can determine whether classroom tests are actually measuring what they are designed to measure. Discusses criterion-related validity, construct validity, and content validity. Demonstrates how to determine the degree of content validity a particular test may have for a particular course or unit. (Author/DH)
Descriptors: Criterion Referenced Tests, Economics Education, Higher Education, Teacher Made Tests
Peer reviewed
Ebel, Robert L. – Educational Measurement: Issues and Practice, 1983
One major reason for the problems of test validation is an overemphasis on the need for empirical validity data and a failure to recognize the primary importance of explicit verbal definitions of what the test is intended to measure, together with rational arguments in support of the means chosen for obtaining the measurement. (Author/LC)
Descriptors: Occupational Tests, Performance Tests, Standardized Tests, Statistical Data
Peer reviewed
Gardner, Eric F. – Educational Measurement: Issues and Practice, 1983
In response to Ebel (TM 508 146) Gardner argues that neither intrinsic rational validity associated with ability tests nor a validity coefficient relating a test to performance as the sole information about validity is sufficient. All relevant data about a test and its functioning are essential in describing the validity of the test. (Author/LC)
Descriptors: Occupational Tests, Performance Tests, Predictive Validity, Standardized Tests
Peer reviewed
Nimmer, Donald N. – Clearing House, 1984
Explains how tables of specifications (used in establishing content validity) and item analysis (a modification of item difficulty) can readily be used by the classroom teacher to design well-balanced tests. (HOD)
Descriptors: Elementary Secondary Education, Item Analysis, Measurement Techniques, Teacher Made Tests
Peer reviewed
Taylor, Catherine S.; Nolen, Susan Bobbitt – Education Policy Analysis Archives, 1996
The usefulness of traditional concepts of validity and reliability, developed for large-scale assessments, for the classroom context is explored. Alternate frameworks that situate these constructs in teachers' work in classrooms are presented, and their use in an assessment course for preservice teachers is described. (SLD)
Descriptors: Educational Assessment, Learning, Models, Preservice Teachers
Peer reviewed
Nimmer, Donald N. – Clearing House, 1983
Outlines the benefits associated with true-false and multiple-choice tests and sets forth rules for writing effective items for such tests. (FL)
Descriptors: Elementary Secondary Education, Evaluation Methods, Multiple Choice Tests, Objective Tests
Milton, Ohmer – 1982
Educators are called upon to improve the quality of classroom tests to enhance the learning of content. Less faculty concern for tests than for other features of instruction, compounded by a lack of knowledge of how to assess different levels of learning with test questions that measure complex processes, appears to generate poor quality classroom…
Descriptors: Educational Testing, Evaluation Methods, Higher Education, Learning Activities
Peer reviewed
Mehrens, William A. – Educational Measurement: Issues and Practice, 1984
The use of national achievement tests in schools can result in varying degrees of curricular match/mismatch with respect to local curricula. This article explores the types of mismatch which can occur, discusses the inferences made from test scores and their importance, and addresses some implications for the educational community. (EGS)
Descriptors: Achievement Tests, Course Content, Course Objectives, Curriculum Problems
Peer reviewed
Kolstad, Rosemarie K.; And Others – Education, 1984
Provides guidelines for teachers writing machine-scored examinations. Explains the use of item analysis (discrimination index) to single out test items that should be improved or eliminated. Discusses validity and reliability of classroom achievement tests in contrast to norm-referenced examinations. (JHZ)
Descriptors: Achievement Tests, Computer Assisted Testing, Criterion Referenced Tests, Item Analysis
Wangerin, Paul T. – 1994
This paper addresses problems confronting law school teachers in grading law school exams and assigning letter grades. Using prototypical dialogue and scenarios, the paper examines mathematical and statistical issues that contribute to grading errors. Discussed in relation to real world data and the bar exam are: differential weighting, combining…
Descriptors: Civil Rights, Court Litigation, Educational Malpractice, Error of Measurement