Showing 76 to 90 of 140 results
Peer reviewed
Jackson, Douglas N.; Helmes, Edward – Applied Psychological Measurement, 1979
A basic structure approach is proposed for obtaining multidimensional scale values for attitude, achievement, or personality items from response data. The technique permits the unconfounding of scale values due to response bias and content and partitions item indices of popularity or difficulty among a number of relevant dimensions. (Author/BH)
Descriptors: Higher Education, Interest Inventories, Item Analysis, Mathematical Models
Peer reviewed
van Heerden, J.; Hoogstraten, Joh. – Applied Psychological Measurement, 1979
In a replication of an earlier study, a questionnaire with items lacking content and merely containing answer possibilities was administered to a sample of Dutch freshmen psychology students. Subjects showed a preference for positive options over negative options. (Author/JKS)
Descriptors: Content Analysis, Foreign Countries, Higher Education, Item Analysis
Peer reviewed
Mackler, Bernard; Holman, Dana – Young Children, 1976
The issues of culture-free and culture-fair tests for elementary school children are explored by examining specific tests and the testing situation. Investigators examined the problem of group intelligence testing vs. individual testing and conclude that tests still reflect White American middle socioeconomic class values and experiences. (HS)
Descriptors: Black Students, Culture Fair Tests, Elementary Education, Group Testing
Green, Kathy E. – 1983
The purpose of this study was to determine whether item difficulty is significantly affected by language difficulty and response set convergence. Language difficulty was varied by increasing sentence (stem) length, increasing syntactic complexity, and substituting uncommon words for more familiar terms in the item stem. Item wording ranged from…
Descriptors: Difficulty Level, Foreign Countries, Higher Education, Item Analysis
Lowry, Stephen R. – 1979
A specially designed answer format was used for three tests in a college level agriculture class of 19 students to record responses to three things about each item: (1) the student's choice of the best answer; (2) the degree of certainty with which the answer was chosen; and (3) all the answer choices which the student was certain were incorrect.…
Descriptors: Achievement Tests, Confidence Testing, Guessing (Tests), Higher Education
Bart, William M.; Airasian, Peter W. – 1976
The question of whether test factor structure is indicative of the test item hierarchy was examined. Data from 1,000 subjects on two sets of five bivalued Law School Admission Test items, which had been analyzed with the latent trait methods of Bock and Lieberman and of Christoffersson in Psychometrika, were analyzed with an ordering-theoretic method to…
Descriptors: Comparative Analysis, Correlation, Factor Analysis, Factor Structure
Waller, Michael I. – 1974
In latent trait models the standard procedure for handling the problem caused by guessing on multiple choice tests is to estimate a parameter which is intended to measure the "guessingness" inherent in an item. Birnbaum's three parameter model, which handles guessing in this manner, ignores individual differences in guessing tendency. This paper…
Descriptors: Goodness of Fit, Guessing (Tests), Individual Differences, Item Analysis
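The three-parameter model Waller discusses can be illustrated with a short sketch (not the author's code; parameter names follow common item response theory usage): the guessing parameter c sets a floor on the probability of a correct response, identical for every examinee, which is exactly the limitation the paper addresses.

```python
import math

def p_3pl(theta, a, b, c):
    """Birnbaum's three-parameter logistic model: probability of a correct
    response given ability theta, discrimination a, difficulty b, and a
    single guessing parameter c shared by all examinees."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# With c = 0.2, even an examinee far below the item's difficulty
# answers correctly with probability close to 0.2.
print(p_3pl(theta=-3.0, a=1.0, b=0.0, c=0.2))
```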
Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
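One commonly used Answer-Until-Correct scoring rule can be sketched as follows (illustrative only; the paper derives an expression for AUC reliability, not this particular rule): the examinee responds until correct, and credit decreases with each additional attempt.

```python
def auc_score(attempts, n_options):
    """Illustrative Answer-Until-Correct scoring: full credit for a
    first-try success, linearly less credit per extra attempt, zero
    credit if every option had to be tried."""
    if not 1 <= attempts <= n_options:
        raise ValueError("attempts must be between 1 and n_options")
    return (n_options - attempts) / (n_options - 1)

print(auc_score(1, 4))  # first try on a 4-option item
```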
Peer reviewed
Lord, Frederic M. – Psychometrika, 1974
Omitted items cannot properly be treated as wrong when estimating ability and item parameters. A convenient method for utilizing the information provided by omissions is presented. Theoretical and empirical justifications are presented for the estimates obtained by the new method. (Author)
Descriptors: Academic Ability, Guessing (Tests), Item Analysis, Latent Trait Theory
Wainer, Howard – 1985
It is important to estimate the number of examinees who reached a test item, because item difficulty is defined as the number who answered correctly divided by the number who reached the item. A new method is presented and compared to the previously used definition of three categories of response to an item: (1) answered; (2) omitted--a…
Descriptors: College Entrance Examinations, Difficulty Level, Estimation (Mathematics), High Schools
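The difficulty definition in the abstract above reduces to a simple proportion, sketched here (an illustrative computation, not Wainer's estimation method; the category labels are hypothetical): an item counts as reached unless the examinee never got to it, and difficulty is the proportion correct among those who reached it.

```python
def item_difficulty(responses):
    """Classical item difficulty (p-value): proportion correct among
    examinees who reached the item.

    responses: list of 'correct', 'wrong', 'omitted', or 'not_reached'.
    """
    reached = [r for r in responses if r != "not_reached"]
    if not reached:
        return None  # no one reached the item; difficulty is undefined
    correct = sum(1 for r in reached if r == "correct")
    return correct / len(reached)

# One correct answer among three examinees who reached the item.
print(item_difficulty(["correct", "wrong", "omitted", "not_reached"]))
```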
Harnisch, Delwyn L.; Linn, Robert L. – 1981
Techniques to identify the degree to which a response pattern is unusual and the pattern's relationship to examinee background are presented. Sato's caution index and modified caution index use clusters of items for comparisons of observed performance outcomes and do not require the use of item response theory. Correlation between the indices was…
Descriptors: Educational Assessment, Educational Diagnosis, Error Patterns, Item Analysis
Masters, Geoff N.; Wright, Benjamin D. – 1982
The analysis of fit of data to a measurement model for graded responses is described. The model is an extension of Rasch's dichotomous model to formats which provide more than two levels of response to items. The model contains one parameter for each person and one parameter for each "step" in an item. A dichotomously-scored item…
Descriptors: Difficulty Level, Goodness of Fit, Item Analysis, Latent Trait Theory
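The graded-response model described above, with one parameter per person and one per item "step," is commonly written as the partial credit model; a minimal sketch (not the authors' code) shows how it reduces to Rasch's dichotomous model when an item has a single step.

```python
import math

def pcm_probs(theta, deltas):
    """Partial credit model: probability of each score category 0..m for
    one person (ability theta) on one item, where deltas[k] is the
    difficulty of step k+1."""
    # Numerator for category x is exp of the cumulative sum of
    # (theta - delta_k) over the first x steps; category 0 uses the
    # empty sum, i.e. exp(0) = 1.
    numerators = [1.0]
    s = 0.0
    for d in deltas:
        s += theta - d
        numerators.append(math.exp(s))
    total = sum(numerators)
    return [n / total for n in numerators]

# One step at difficulty 0 and theta = 0: the dichotomous Rasch case,
# with equal probability of scoring 0 or 1.
print(pcm_probs(0.0, [0.0]))
```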
Muraki, Eiji – 1984
The TESTFACT computer program and full-information factor analysis of test items were used in a computer simulation conducted to correct for the guessing effect. Full-information factor analysis also corrects for omitted items. The present version of TESTFACT handles up to five factors and 150 items. A preliminary smoothing of the tetrachoric…
Descriptors: Comparative Analysis, Computer Simulation, Computer Software, Correlation
McKee, Barbara G.; Hausknecht, Michael A. – 1978
Literature on the development of classroom achievement tests for high school and college level hearing impaired students is reviewed, with emphasis on achievement tests designed to ascertain whether a particular unit of instruction has been understood as it was presented. The paper reviews: the syntactical structure and vocabulary of test items;…
Descriptors: Achievement Tests, Hearing Impairments, Higher Education, Item Analysis
Donlon, Thomas F. – 1977
Detailed item analysis results for a form of the Scholastic Aptitude Test were examined for evidence of sex differences in test speededness. The conclusions were: (1) there was no evidence of appreciable differences in rate-of-work on any section of the Scholastic Aptitude Test; (2) there was some evidence that low-scoring females on the…
Descriptors: Aptitude Tests, College Entrance Examinations, Conceptual Tempo, Females