Dayton, C. Mitchell – 2002
This Digest, intended as an instructional aid for beginning research students and a refresher for researchers in the field, identifies key factors that play a critical role in determining the credibility that should be given to a specific research study. The needs for empirical research, randomization and control, and significance testing are…
Descriptors: Credibility, Data Analysis, Reliability, Research
Childs, Ruth A.; Jaciw, Andrew P. – 2003
Matrix sampling of test items, the division of a set of items into different versions of a test form, is used by several large-scale testing programs. This Digest discusses nine categories of costs associated with matrix sampling. These categories are: (1) development costs; (2) materials costs; (3) administration costs; (4) educational costs; (5)…
Descriptors: Costs, Matrices, Reliability, Sampling
Brualdi, Amy – 1999
Test validity refers to the degree to which the inferences based on test scores are meaningful, useful, and appropriate. Thus, test validity is a characteristic of a test when it is administered to a particular population. This article introduces the modern concepts of validity advanced by S. Messick (1989, 1996). Traditionally, the means of…
Descriptors: Criteria, Data Interpretation, Elementary Secondary Education, Reliability
Coburn, Louisa – 1984
Research on student evaluation of college teachers' performance is briefly summarized. Lawrence M. Aleamoni offers four arguments in favor of student ratings: (1) students are the main source of information about the educational environment; (2) students are the most logical evaluators of student satisfaction and effectiveness of course elements;…
Descriptors: College Faculty, Evaluation Problems, Evaluation Utilization, Higher Education
Lomawaima, K. Tsianina; McCarty, Teresa L. – 2002
The constructs used to evaluate research quality--valid, objective, reliable, generalizable, randomized, accurate, authentic--are not value-free. They all require human judgment, which is affected inevitably by cultural norms and values. In the case of research involving American Indians and Alaska Natives, assessments of research quality must be…
Descriptors: Action Research, American Indian Education, Educational Research, Indigenous Knowledge
Rudner, Lawrence M. – 1992
Several common sources of error in assessments that depend on the use of judges are identified, and ways to reduce the impact of rating errors are examined. Numerous threats to the validity of scores based on ratings exist. These threats include: (1) the halo effect; (2) stereotyping; (3) perception differences; (4) leniency/stringency error; and…
Descriptors: Alternative Assessment, Error of Measurement, Evaluation Methods, Evaluators
Haskell, Robert E. – 1998
Despite a history of conflicting research on its reliability and validity, student evaluation of faculty (SEF) has typically not been viewed as an infringement on academic freedom; it has generally been taken for granted that SEF is appropriate and necessary. However, informal and reasoned analyses of the issue indicate that because SEF is used…
Descriptors: Academic Freedom, Evaluation Problems, Faculty College Relationship, Faculty Evaluation