Showing 226 to 240 of 305 results
Wiggins, Grant – Executive Educator, 1994
Instead of relying on standardized test scores and interdistrict comparisons, school systems must develop a more powerful, timely, and local approach to accountability that is truly client-centered and focused on results. Accountability requires giving successful teachers the freedom and opportunity to take effective ideas beyond their own…
Descriptors: Accountability, Comparative Testing, Elementary Secondary Education, Feedback
De Ayala, R. J. – 1992
One important and promising application of item response theory (IRT) is computerized adaptive testing (CAT). The implementation of a nominal response model-based CAT (NRCAT) was studied. Item pool characteristics for the NRCAT as well as the comparative performance of the NRCAT and a CAT based on the three-parameter logistic (3PL) model were…
Descriptors: Adaptive Testing, Comparative Testing, Computer Assisted Testing, Computer Simulation
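The core CAT loop this entry studies — picking the next item to maximize information at the current ability estimate — can be illustrated with a minimal sketch. For simplicity the sketch uses the 3PL model rather than the nominal response model the study implements, and the item pool values are hypothetical.

```python
import math

# Hypothetical 3PL item pool: (a, b, c) = discrimination, difficulty, guessing.
POOL = [(1.2, -1.0, 0.2), (0.8, 0.0, 0.2), (1.5, 0.5, 0.25), (1.0, 1.5, 0.2)]

def p3pl(theta, a, b, c):
    """Probability of a correct response under the 3PL model."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p3pl(theta, a, b, c)
    q = 1 - p
    return (1.7 * a) ** 2 * (q / p) * ((p - c) / (1 - c)) ** 2

def next_item(theta, administered):
    """Pick the unadministered item with maximum information at theta."""
    candidates = [i for i in range(len(POOL)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *POOL[i]))
```

At an ability estimate of 0.5, the selection rule favors the highly discriminating item whose difficulty sits nearest that estimate; the estimate is then updated after each response and the rule reapplied.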
Kim, Haeok; Plake, Barbara S. – 1993
A two-stage testing strategy is one method of adapting the difficulty of a test to an individual's ability level in an effort to achieve more precise measurement. A routing test provides an initial estimate of ability level, and a second-stage measurement test then evaluates the examinee further. The measurement accuracy and efficiency of item…
Descriptors: Ability, Adaptive Testing, Comparative Testing, Computer Assisted Testing
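The two-stage strategy summarized above can be sketched directly: a short routing test gives a provisional ability estimate, which assigns the examinee to an easy, medium, or hard second-stage measurement test. The cut points below (40% and 70% correct) are hypothetical illustration values, not from the study.

```python
def route(routing_score, n_routing_items):
    """Assign a second-stage measurement test from the routing-test score.

    Cut points (40% / 70% correct) are hypothetical illustration values.
    """
    proportion = routing_score / n_routing_items
    if proportion < 0.4:
        return "easy"
    elif proportion < 0.7:
        return "medium"
    return "hard"
```

Matching second-stage difficulty to the routed ability range is what yields the more precise measurement the abstract refers to: items far from an examinee's ability level contribute little information.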
Illinois State Board of Education, Springfield. – 1983
This report summarizes the results of a study of the achievement of Illinois high school juniors in 1970 and 1981. The purposes were to provide a comparison of student performance over a period of time and to identify educational, social, and personal conditions that relate to performance on a test of Natural Science, Social Studies, English and…
Descriptors: Academic Achievement, Academic Records, Achievement Tests, Comparative Testing
Southern Regional Education Board, Atlanta, GA. – 1985
Southern Regional Education Board (SREB) states were invited in June 1984 to participate in a project with the National Assessment of Educational Progress (NAEP) to assess the reading achievement of eleventh grade students. Florida, Tennessee and Virginia accepted and worked with SREB and NAEP staff to develop and administer the testing program.…
Descriptors: Academic Achievement, Comparative Testing, Cooperative Programs, Educational Assessment
Peer reviewed
Kim, Seock-Ho; Cohen, Allan S. – Applied Psychological Measurement, 1991
The exact and closed-interval area measures for detecting differential item functioning are compared for actual data from 1,000 African-American and 1,000 white college students taking a vocabulary test with items intentionally constructed to favor 1 set of examinees. No real differences in detection of biased items were found. (SLD)
Descriptors: Black Students, College Students, Comparative Testing, Equations (Mathematics)
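The closed-interval area measure compared in this entry can be approximated numerically: integrate the absolute difference between the two groups' item characteristic curves over a bounded ability range. The item parameters and integration bounds below are hypothetical.

```python
import math

def icc(theta, a, b, c):
    """3PL item characteristic curve."""
    return c + (1 - c) / (1 + math.exp(-1.7 * a * (theta - b)))

def area_measure(params_ref, params_focal, lo=-3.0, hi=3.0, n=600):
    """Unsigned closed-interval area between two ICCs on [lo, hi],
    approximated with the midpoint rule."""
    width = (hi - lo) / n
    total = 0.0
    for k in range(n):
        theta = lo + (k + 0.5) * width
        total += abs(icc(theta, *params_ref) - icc(theta, *params_focal)) * width
    return total
```

Identical parameter sets yield zero area (no differential functioning), while a shift in difficulty between groups produces a positive area flagging the item as potentially biased.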
Peer reviewed
Pedersen, Paula; Farrell, Pat; McPhee, Eric – Journal of Geography, 2005
This article addresses the lack of outcome-based research on the integration of technology into pedagogy at the undergraduate college level. It describes a study performed at a Midwestern university, testing the relative effectiveness of paper and electronic topographic maps for teaching map-reading skills, and considers the relationship between…
Descriptors: Program Descriptions, Map Skills, Geography Instruction, Instructional Effectiveness
Peer reviewed
Stone, Clement A.; Lane, Suzanne – Applied Measurement in Education, 1991
A model-testing approach for evaluating the stability of item response theory item parameter estimates (IPEs) in a pretest-posttest design is illustrated. Nineteen items from the Head Start Measures Battery were used. A moderately high degree of stability in the IPEs for 5,510 children assessed on 2 occasions was found. (TJH)
Descriptors: Comparative Testing, Compensatory Education, Computer Assisted Testing, Early Childhood Education
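One simple way to examine the stability of item parameter estimates across two assessment occasions, as this entry does more formally through model testing, is to correlate the two sets of estimates. The difficulty values below are hypothetical toy data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical difficulty estimates for the same items on two occasions.
time1 = [-1.2, -0.5, 0.0, 0.6, 1.3]
time2 = [-1.1, -0.4, 0.1, 0.5, 1.4]
```

A correlation near 1.0 indicates that the items keep their relative difficulty ordering across occasions, consistent with the "moderately high degree of stability" the abstract reports.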
Peer reviewed
Li, Yuan H.; Lissitz, Robert W. – Journal of Educational Measurement, 2004
The analytically derived asymptotic standard errors (SEs) of maximum likelihood (ML) item estimates can be approximated by a mathematical function without examinees' responses to test items, and the empirically determined SEs of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates can be obtained when the same set of items is…
Descriptors: Test Items, Computation, Item Response Theory, Error of Measurement
Peer reviewed
Zimmerman, Corinne; Raghavan, Kalyani; Sartoris, Mary – International Journal of Science Education, 2003
The Model-Assisted Reasoning in Science (MARS) project seeks to promote model-centered instruction as a means of improving middle-school science education. As part of the evaluation of the sixth-grade curriculum, performance of MARS and non-MARS students was compared on a curriculum-neutral task. Fourteen students participated in structured…
Descriptors: Concept Formation, Theory Practice Relationship, Science Education, Instructional Effectiveness
Peer reviewed
Schmidt, William H.; Prawat, Richard S. – Journal of Curriculum Studies, 2006
Recent studies show that national control of K-12 curriculum yields important payoffs in terms of greater curricular coherence and, as a result, higher test performance on international tests such as those used in the Third International Mathematics and Science Study. This paper examines the connection between national control of curriculum and…
Descriptors: Elementary Secondary Education, Government School Relationship, National Norms, Elementary School Curriculum
Babcock, Judith L.; And Others – 1992
This study used multiple methods to assess basic community needs and attributes of community atmosphere (cohesion, religious involvement, and recreational activities) in two psychometric studies. Part 1 revised self-report community assessment measures, developed multi-item scales for each construct, and tested reliabilities and factor structures…
Descriptors: Community Needs, Community Organizations, Community Programs, Comparative Testing
Carlson, James E.; Jirele, Tom – 1992
Some results are presented relating to the dimensionality of the 1990 National Assessment of Educational Progress (NAEP) mathematics item-response data. Based on theoretical considerations, practical limitations, and previous research, two procedures were selected for study: full information factor analysis as implemented in the TESTFACT computer…
Descriptors: Comparative Testing, Computer Software Evaluation, Factor Analysis, Grade 4
Peer reviewed
Haladyna, Thomas A. – Applied Measurement in Education, 1992
Several multiple-choice item formats are examined in the current climate of test reform. The reform movement is discussed as it affects use of the following formats: (1) complex multiple-choice; (2) alternate choice; (3) true-false; (4) multiple true-false; and (5) the context dependent item set. (SLD)
Descriptors: Cognitive Psychology, Comparative Testing, Context Effect, Educational Change
Peer reviewed
Dorans, Neil J.; And Others – Journal of Educational Measurement, 1992
The standardization approach to comprehensive differential item functioning is described and contrasted with the log-linear approach to differential distractor functioning and the item-response-theory-based approach to differential alternative functioning. Data from an edition of the Scholastic Aptitude Test illustrate application of the approach…
Descriptors: Black Students, College Entrance Examinations, Comparative Testing, Distractors (Tests)
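The standardization approach described in this entry compares, at each matched total-score level, the proportions correct in the focal and reference groups, and averages the differences using the focal group's score distribution as weights. The score levels and proportions below are hypothetical toy data.

```python
def std_p_dif(p_ref, p_focal, focal_weights):
    """Standardized P-DIF: weighted mean of (P_focal - P_ref) across
    matched score levels, with weights from the focal group's distribution."""
    total_w = sum(focal_weights)
    return sum(w * (pf - pr)
               for pr, pf, w in zip(p_ref, p_focal, focal_weights)) / total_w

# Hypothetical proportions correct at five matched score levels.
p_ref = [0.30, 0.45, 0.60, 0.75, 0.90]
p_focal = [0.25, 0.40, 0.55, 0.70, 0.88]
weights = [10, 20, 40, 20, 10]
```

A negative value indicates the item is harder for focal-group examinees than for reference-group examinees of comparable total score, which is the signal the standardization method uses to flag differential item functioning.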