Feldman, Jo – Educational Leadership, 2018
Have teachers become too dependent on points? This article explores educators' dependence on point systems and the ways that points can distract teachers from really analyzing students' capabilities and achievements. Feldman argues that using a more subjective grading system can help illuminate crucial information about students and what…
Descriptors: Grading, Evaluation Methods, Evaluation Criteria, Achievement Rating
Aaron M. Pallas – Educational Leadership, 2016
Teacher evaluation systems can have high stakes for individual teachers, and it's important to ask how new evaluation models--including value-added measures--serve teachers as they strive to improve their practice. The authors interviewed teachers at a high-performing New York City school about their reactions to their value-added scores and…
Descriptors: Teacher Evaluation, Evaluation Methods, Achievement Gains, Achievement Rating
Brookhart, Susan M. – Educational Leadership, 2015
Multiple-choice questions draw criticism because many people perceive they test only recall or atomistic, surface-level objectives and do not require students to think. Although this can be the case, it does not have to be that way. Susan M. Brookhart suggests that multiple-choice questions are a useful part of any teacher's questioning repertoire…
Descriptors: Multiple Choice Tests, Educational Practices, Questioning Techniques, Test Reliability
Salend, Spencer J. – Educational Leadership, 2011
Creating a fair, reliable, teacher-made test is a challenge. Every year poorly designed tests fail to accurately measure many students' learning--and negatively affect their academic futures. Salend, a well-known writer on assessment for at-risk students who consults with schools on assessment procedures, offers guidelines for creating tests that…
Descriptors: At Risk Students, Test Construction, Student Evaluation, Evaluation Methods
Bracey, Gerald – Educational Leadership, 2009
Bracey looks at three well-known assessments--the National Assessment of Educational Progress, the Program for International Student Assessment, and the Trends in International Mathematics and Science Study--and concludes that their "instruction-insensitive" design makes them inappropriate as measures of education quality in schools, districts, states,…
Descriptors: National Competency Tests, Educational Quality, Student Evaluation, Evaluation Methods
Popham, W. James – Educational Leadership, 2006
Government agencies administer exams to appraise educators' effectiveness. However, most teachers and administrators are unfamiliar with how such large-scale tests are put together or polished. A profession's adequacy is being judged on the basis of tools that the profession's members don't understand. As such, educators need to have a dose of…
Descriptors: Teacher Effectiveness, Educational Testing, Evaluation Criteria, Test Validity
Popham, W. James – Educational Leadership, 2006
Assessment for learning involves the frequent, continual use of both formal and informal classroom assessments. It can be as simple as requiring students to respond to a lesson-embedded, one-item quiz as a way of gauging student understanding of what is being taught. Ideally, this innovative approach to classroom assessment is based on a careful…
Descriptors: Evaluation Methods, Student Evaluation, Performance Based Assessment, Accountability
Popham, W. James – Educational Leadership, 2006
In this article, the author explains the key differences among three kinds of instructionally relevant tests that can have a huge impact on what goes on in classrooms: "instructionally insensitive tests," "instructionally sensitive tests," and "instructionally informative tests." If educators understand the advantages and limitations of these…
Descriptors: Student Evaluation, Educational Testing, Test Construction, Test Validity

Haladyna, Tom – Educational Leadership, 1982
Describes two types of criterion-referenced testing that districts can use to measure achievement outcomes of their instructional programs: a random sampling assessment plan and an item-response theory assessment plan. (Author/JM)
Descriptors: Behavioral Objectives, Criterion Referenced Tests, Elementary Secondary Education, Item Sampling

Marzano, Robert J. – Educational Leadership, 1994
Students generally do better on outcome-based performance tasks than on domain-specific tasks. Results on performance tasks must be interpreted in the context of instruction or guidance provided before or during their administration. Reliability is sometimes questionable, since teachers are highly influenced by students' overall academic…
Descriptors: Context Effect, Elementary Secondary Education, Holistic Approach, Performance Based Assessment

Curry, Lynn – Educational Leadership, 1990
Learning styles advocates claim long-term improvements in four aspects of teaching and learning: curriculum design, instructional methods, assessment, and student guidance. The application of learning style theory, however, faces three pervasive problems: confusion in definitions, weaknesses in measurement reliability and validity, and identification of…
Descriptors: Cognitive Style, Definitions, Elementary Secondary Education, Evaluation Problems

Baker, Eva L. – Educational Leadership, 1994
Teachers must learn to distinguish among performance assessments of different quality and appropriateness. Design criteria (cognitive complexity, linguistic appropriateness, content quality and coverage, and meaningfulness) are judged by examining assessment tasks and scoring rubrics. Effects criteria (transfer, generalizability, instructional…
Descriptors: Context Effect, Elementary Secondary Education, Evaluation Criteria, Guidelines

Haney, Walt – Educational Leadership, 1985
Following a critique of so-called educational testing, this article examines three "educationally noteworthy school testing programs"--those of Portland, Oregon; Orange County, Florida; and Pittsburgh, Pennsylvania; and one noteworthy school with no standardized testing: the Prospect School in Bennington, Vermont. Derives some broad…
Descriptors: Adaptive Testing, Alternative Assessment, Computer Assisted Testing, Demonstration Programs