Showing 106 to 120 of 171 results
Tracy, D. B.; And Others
Responses on both the state and trait scales of the State-Trait Anxiety Inventory (STAI) were examined under two conditions. The first condition presented a simulated real-life situation containing competitive and evaluative cues without directly suggesting faking and asked subjects to complete the STAI. After an intervening task, the STAI was…
Descriptors: Anxiety, College Students, Psychological Patterns, Response Style (Tests)
Hanna, Gerald S. – 1974
Although the "Don't Know" (DK) option has received telling criticism in maximum performance summative tests, its potential use in formative evaluation was considered and judged to be more promising. The pretest of an instructional module was administered with DK options. Examinees were then required to answer each question to which they had…
Descriptors: Formative Evaluation, Guessing (Tests), Multiple Choice Tests, Response Style (Tests)
Bayuk, Robert J. – 1973
An investigation was conducted to determine the effects of response-category weighting and item weighting on reliability and predictive validity. Response-category weighting refers to scoring in which, for each category (including omit and "not read"), a weight is assigned that is proportional to the mean criterion score of examinees selecting…
Descriptors: Aptitude Tests, Correlation, Predictive Validity, Research Reports
Peer reviewed
Menasco, Michael B.; Curry, David J. – Applied Psychological Measurement, 1978
Scores on the Role Construct Repertory Test exhibited significant correlations with other forms of cognitive functioning, including American College Test scores in science and mathematics for a group of 79 college students. The Grid Form of the test was used. Test-retest reliability was low. (Author/CTM)
Descriptors: Achievement Tests, Cognitive Processes, Cognitive Style, Cognitive Tests
Peer reviewed
Cross, Lawrence; Frary, Robert – Journal of Educational Measurement, 1977
Corrected-for-guessing scores on multiple-choice tests depend upon the ability and willingness of examinees to guess when they have some basis for answering, and to avoid guessing when they have no basis. The present study determined the extent to which college students were able and willing to comply with formula-scoring directions. (Author/CTM)
Descriptors: Guessing (Tests), Higher Education, Individual Characteristics, Multiple Choice Tests
Peer reviewed
Direct link
DiLillo, David; Fortier, Michelle A.; Hayes, Sarah A.; Trask, Emily; Perry, Andrea R.; Messman-Moore, Terri; Fauchier, Angele; Nash, Cindy – Assessment, 2006
This study compared retrospective reports of childhood sexual and physical abuse as assessed by two measures: the Childhood Trauma Questionnaire (CTQ), which uses a Likert-type scaling approach, and the Computer Assisted Maltreatment Inventory (CAMI), which employs a behaviorally specific means of assessment. Participants included 1,195…
Descriptors: Undergraduate Students, Factor Analysis, Victims of Crime, Behavior
Reckase, Mark D. – 1974
An application of the two-parameter logistic (Rasch) model to tailored testing is presented. The model is discussed along with the maximum likelihood estimation of the ability parameters given the response pattern and easiness parameter estimates for the items. The technique has been programmed for use with an interactive computer terminal. Use…
Descriptors: Ability, Adaptive Testing, Computer Assisted Instruction, Difficulty Level
PDF pending restoration
Kane, Michael T.; Moloney, James M. – 1976
The Answer-Until-Correct (AUC) procedure has been proposed in order to increase the reliability of multiple-choice items. A model for examinees' behavior when they must respond to each item until they answer it correctly is presented. An expression for the reliability of AUC items, as a function of the characteristics of the item and the scoring…
Descriptors: Guessing (Tests), Item Analysis, Mathematical Models, Multiple Choice Tests
Peer reviewed
Kansup, Wanlop; Hakstian, A. Ralph – Journal of Educational Measurement, 1975
Effects of logically weighting incorrect item options in conventional tests and different scoring functions with confidence tests on reliability and validity were examined. Ninth graders took conventionally administered Verbal and Mathematical Reasoning tests, scored conventionally and by a procedure assigning degree-of-correctness weights to…
Descriptors: Comparative Analysis, Confidence Testing, Junior High School Students, Multiple Choice Tests
Peer reviewed
Hakstian, A. Ralph; Kansup, Wanlop – Journal of Educational Measurement, 1975
A comparison of reliability and validity was made for three testing procedures: 1) responding conventionally to Verbal Ability and Mathematical Reasoning tests; 2) using a confidence weighting response procedure with the same tests; and 3) using the elimination response method. The experimental testing procedures were not psychometrically superior…
Descriptors: Comparative Analysis, Confidence Testing, Guessing (Tests), Junior High School Students
Morse, David T. – Florida Vocational Journal, 1978
Presents guidelines for constructing tests which accurately measure a student's cognitive skills and performance in a particular course. The advantages and disadvantages of two types of test items are listed (selected response and constructed response items). Both poor and good examples are given and general rules for test item writing are…
Descriptors: Cognitive Development, Criterion Referenced Tests, Essay Tests, Multiple Choice Tests
Cummings, Oliver W. – Measurement and Evaluation in Guidance, 1981
Examined the effects of answer changing on the test performance of junior high school students. Results indicated that changing answers neither increases the reliability nor decreases the standard error of measurement of the test. (Author/RC)
Descriptors: Change, Comparative Analysis, Error of Measurement, Junior High Schools
Peer reviewed
Smith, Jack E.; Hakel, Milton D. – Personnel Psychology, 1979
Examined are questions pertinent to the use of the Position Analysis Questionnaire: Who can use the PAQ reliably and validly? Must one rely on trained job analysts? Can people having no direct contact with the job use the PAQ reliably and validly? Do response biases influence PAQ responses? (Author/KC)
Descriptors: Classification, Data Collection, Employee Attitudes, Employer Attitudes
Mendelsohn, Mark; Linden, James – 1971
The development of an objective diagnostic scale to measure atypical behavior is discussed. The Atypical Response Scale (ARS) is a structured projective test consisting of 17 items, each weighted 1, 2, or 3, that were tested for convergence and reliability. ARS may be individually or group administered in 10-15 minutes; hand scoring requires 90…
Descriptors: Antisocial Behavior, Behavior, Behavior Rating Scales, Diagnostic Tests
Sewall, Timothy J. – 1986
This paper addresses the issue of whether four of the learning styles instruments currently available are of sufficient psychometric quality to warrant their continued use either for research or educational purposes. Four instruments, which purport to measure learning styles, were selected for review. Criteria for selection were based in part on…
Descriptors: Adult Education, Adult Learning, Cognitive Style, Personality Measures