Frisbie, David A. – 1981
The relative difficulty ratio (RDR) is used as a method of representing test difficulty. The RDR is the ratio of a test mean to the ideal mean, the point midway between the perfect score and the mean chance score for the test. The RDR transformation is a linear scale conversion method but not a linear equating method in the classical sense. The…
Descriptors: Comparative Testing, Difficulty Level, Evaluation Methods, Raw Scores
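The RDR described in the abstract above is fully determined by three quantities: the observed test mean, the perfect score, and the mean chance score. A minimal sketch, with illustrative numbers not taken from the source:

```python
# Sketch of the relative difficulty ratio (RDR) described above.
# Inputs: raw test mean, perfect (maximum) score, and the mean chance
# score (expected score from blind guessing).

def relative_difficulty_ratio(test_mean, perfect_score, mean_chance_score):
    # The "ideal mean" is the point midway between the perfect score
    # and the mean chance score.
    ideal_mean = (perfect_score + mean_chance_score) / 2
    return test_mean / ideal_mean

# Example: a 50-item, four-option multiple-choice test.
# Guessing yields an expected score of 50 * 0.25 = 12.5, so the
# ideal mean is (50 + 12.5) / 2 = 31.25.
rdr = relative_difficulty_ratio(test_mean=35.0, perfect_score=50, mean_chance_score=12.5)
# rdr values above 1 indicate an easier-than-ideal test, below 1 a harder one.
```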
Clauser, Brian E.; And Others – 1991
Item bias has been a major concern for test developers during recent years. The Mantel-Haenszel statistic has been among the preferred methods for identifying biased items. The statistic's performance in identifying uniform bias in simulated data modeled by producing various levels of difference in the (item difficulty) b-parameter for reference…
Descriptors: Comparative Testing, Difficulty Level, Item Bias, Item Response Theory
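The Mantel-Haenszel statistic referred to above aggregates 2x2 tables (group by correct/incorrect) across total-score strata into a common odds ratio. A minimal sketch of that computation, with invented counts for illustration:

```python
# Sketch of the Mantel-Haenszel common odds ratio used to flag
# differentially functioning items. Each stratum is a total-score level
# with counts (reference correct, reference incorrect, focal correct,
# focal incorrect). The counts below are invented for illustration.

def mantel_haenszel_odds_ratio(strata):
    num = 0.0
    den = 0.0
    for ref_right, ref_wrong, foc_right, foc_wrong in strata:
        n = ref_right + ref_wrong + foc_right + foc_wrong
        num += ref_right * foc_wrong / n
        den += ref_wrong * foc_right / n
    return num / den

strata = [
    (30, 20, 25, 25),  # low-score stratum
    (45, 10, 40, 15),  # middle stratum
    (50, 5, 48, 7),    # high-score stratum
]
alpha = mantel_haenszel_odds_ratio(strata)
# alpha near 1 suggests no uniform DIF; values far from 1 flag the item.
```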
Ebel, Robert L. – 1981
An alternate-choice test item is a simple declarative sentence, one portion of which is given with two different wordings. For example, "Foundations like Ford and Carnegie tend to be (1) eager (2) hesitant to support innovative solutions to educational problems." The examinee's task is to choose the alternative that makes the sentence…
Descriptors: Comparative Testing, Difficulty Level, Guessing (Tests), Multiple Choice Tests
Lunz, Mary E.; Stahl, John A. – 1990
Three examinations administered to medical students were analyzed to determine differences among severities of judges' assessments and among grading periods. The examinations included essay, clinical, and oral forms of the tests. Twelve judges graded the three essays for 32 examinees during a 4-day grading session, which was divided into eight…
Descriptors: Clinical Diagnosis, Comparative Testing, Difficulty Level, Essay Tests

Anderson, Paul S.; Hyers, Albert D. – 1991
Three descriptive statistics (difficulty, discrimination, and reliability) of multiple-choice (MC) test items were compared to those of a new (1980s) format of machine-scored questions. The new method, answer-bank multi-digit testing (MDT), uses alphabetized lists of up to 1,000 alternatives and approximates the completion style of assessment…
Descriptors: College Students, Comparative Testing, Computer Assisted Testing, Correlation
Cizek, Gregory J. – 1991
A commonly accepted rule for developing equated examinations using the common-items non-equivalent groups (CINEG) design is that items common to the two examinations being equated should be identical. The CINEG design calls for two groups of examinees to respond to a set of common items that is included in two examinations. In practice, this rule…
Descriptors: Certification, Comparative Testing, Difficulty Level, Higher Education
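The CINEG design described above uses the two groups' performance on the shared common-item set to link the score scales of the two forms. A deliberately simplified mean-equating sketch (operational equating uses more refined methods; all names and numbers are illustrative):

```python
# Hedged sketch of a simple mean-equating adjustment under the
# common-items non-equivalent groups (CINEG) design: the difference in
# the two groups' means on the common-item set is used to place form-Y
# scores on the form-X scale. Real programs use chained, Tucker, or
# IRT-based methods; this is only the core idea.

def mean_equate_to_form_x(y_score, common_mean_group_x, common_mean_group_y):
    # Shift Y scores by the groups' common-item performance difference.
    return y_score + (common_mean_group_x - common_mean_group_y)

# Group X averaged 18.0 on the common items, group Y averaged 16.5,
# so a form-Y score is adjusted upward by 1.5 points.
equated = mean_equate_to_form_x(40.0, common_mean_group_x=18.0, common_mean_group_y=16.5)
```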
Wise, Steven L.; And Others – 1991
According to item response theory (IRT), examinee ability estimation is independent of the particular set of test items administered from a calibrated pool. Although the most popular application of this feature of IRT is computerized adaptive (CA) testing, a recently proposed alternative is self-adapted (SA) testing, in which examinees choose the…
Descriptors: Ability Identification, Adaptive Testing, College Students, Comparative Testing
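The IRT property noted above, that ability estimation does not depend on which calibrated items were administered, can be illustrated with a maximum-likelihood ability estimate under the Rasch model. The item difficulties below are invented for illustration:

```python
# Sketch of Rasch-model ability estimation: the estimate depends only on
# the response pattern and the calibrated item difficulties, so any
# calibrated subset of the pool yields ability estimates on the same
# scale. Difficulties here are invented.
import math

def rasch_ability(responses, difficulties, iterations=50):
    # Maximum-likelihood estimate via Newton-Raphson.
    theta = 0.0
    for _ in range(iterations):
        grad = 0.0
        info = 0.0
        for x, b in zip(responses, difficulties):
            p = 1.0 / (1.0 + math.exp(-(theta - b)))
            grad += x - p          # score residual
            info += p * (1.0 - p)  # Fisher information
        theta += grad / info
    return theta

# A mixed right/wrong response pattern on five calibrated items.
est = rasch_ability([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0])
```

Note that a pattern of all-correct or all-incorrect responses has no finite maximum-likelihood estimate, which is why operational systems fall back on Bayesian estimators in those cases.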
Ravid, Ruth D. – 1986
A study investigated correlations between students' and parents' attitudes toward the Hebrew language, students' attitudes and achievement in Hebrew, attitude differences in boys and girls, and attitude differences of students in the third and fourth years of Hebrew study. Parents and students in four Chicago-area supplementary Hebrew schools were…
Descriptors: Achievement Rating, Comparative Analysis, Comparative Testing, Correlation
Roos, Linda L.; And Others – 1992
Computerized adaptive (CA) testing uses an algorithm to match examinee ability to item difficulty, while self-adapted (SA) testing allows the examinee to choose the difficulty of his or her items. Research comparing SA and CA testing has shown that examinees experience lower anxiety and improved performance with SA testing. All previous research…
Descriptors: Ability Identification, Adaptive Testing, Algebra, Algorithms
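The CA/SA contrast in the abstract above comes down to who picks the next item's difficulty. The algorithmic (CA) half can be sketched as a nearest-difficulty selection rule; the pool values are invented for illustration:

```python
# Sketch of the core selection step in computerized adaptive (CA)
# testing: the algorithm picks the unused item whose difficulty is
# closest to the current ability estimate. In self-adapted (SA) testing
# the examinee, not the algorithm, would choose the difficulty level.

def next_item_ca(current_theta, difficulties, administered):
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - current_theta))

pool = [-2.0, -1.0, -0.3, 0.4, 1.1, 2.0]  # calibrated item difficulties
chosen = next_item_ca(0.5, pool, administered={3})
# With item 3 (difficulty 0.4) already used, the closest remaining
# difficulty to theta = 0.5 is 1.1 (index 4).
```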
Chissom, Brad; Chukabarah, Prince C. O. – 1985
The comparative effects of various sequences of test items were examined for over 900 graduate students enrolled in an educational research course at The University of Alabama, Tuscaloosa. The experiment, which was conducted four times using four separate tests, presented three different arrangements of 50 multiple-choice items: (1)…
Descriptors: Analysis of Variance, Comparative Testing, Difficulty Level, Graduate Students

Wise, Steven L.; And Others – 1993
A new testing strategy that provides protection against the problem of having examinees in adaptive testing choose difficulty levels that are not matched to their proficiency levels was introduced and evaluated. The method, termed restricted self-adapted testing (RSAT), still provides examinees with a degree of control over the difficulty levels…
Descriptors: Achievement Tests, Adaptive Testing, Comparative Testing, Computer Assisted Testing
Vispoel, Walter P.; And Others – 1992
The effects of review options (the opportunity for examinees to review and change answers) on the magnitude, reliability, efficiency, and concurrent validity of scores obtained from three types of computerized vocabulary tests (fixed item, adaptive, and self-adapted) were studied. Subjects were 97 college students at a large midwestern university…
Descriptors: Adaptive Testing, College Students, Comparative Testing, Computer Assisted Testing
Green, Kathy E.; Kluever, Raymond C. – 1991
Item components that might contribute to the difficulty of items on the Raven Colored Progressive Matrices (CPM) and the Standard Progressive Matrices (SPM) were studied. Subjects providing responses to CPM items were 269 children aged 2 years 9 months to 11 years 8 months, most of whom were referred for testing as potentially gifted. A second…
Descriptors: Academically Gifted, Children, Comparative Testing, Difficulty Level

Stern, Elsbeth – Journal of Educational Psychology, 1993
Six experiments with 42 kindergartners, 190 first graders, and 15 second graders in Germany investigated why arithmetic word problems with an unknown reference set are more difficult for children than are problems with an unknown compare set. Lack of access to flexible language use makes these problems so difficult. (SLD)
Descriptors: Arithmetic, Child Development, Cognitive Processes, Comparative Testing
Steele, D. Joyce – 1985
This paper contains a comparison of descriptive information based on analyses of pilot and live administrations of the Alabama High School Graduation Examination (AHSGE). The test is composed of three subject tests: Reading, Mathematics, and Language. The study was intended to validate the test development procedure by comparing difficulty levels…
Descriptors: Achievement Tests, Comparative Testing, Difficulty Level, Graduation Requirements