ERIC Number: ED533726
Record Type: Non-Journal
Publication Date: 2012-Jul
Pages: 45
Abstractor: ERIC
ISBN: N/A
ISSN: N/A
EISSN: N/A
Available Date: N/A
Teacher Evaluation in Tennessee: A Report on Year 1 Implementation
Tennessee Department of Education
In the summer of 2011, the Tennessee Department of Education contracted with the National Institute for Excellence in Teaching (NIET) to provide four-day training for all evaluators across the state. NIET intensively trained more than 5,000 evaluators in the state model (districts using alternative instruments delivered their own training). Evaluators were required to pass an inter-rater reliability exam in which they viewed video recordings of teachers delivering lessons and rated them, demonstrating that they could distinguish between differing levels of performance. Implementation of the evaluation system began at the start of the 2011-12 school year. The department made a concentrated effort to solicit and encourage feedback, meeting with teachers and administrators across the state. Educators identified strengths of, and raised concerns about, various facets of the teacher evaluation process and its implementation. Legislators also received feedback from their constituents and shared information with department officials. The department and others heard positive comments from administrators about improvements in the quality of classroom instruction, along with concerns about particular facets of the system. As implementation continued through the first semester of the school year, it became clear that satisfaction with the evaluation system varied considerably from district to district, driven largely by district- and school-level leadership. While administrators continued to tout the system's impact on instruction, the public discussion about teacher evaluation began to detract from the real purpose of the evaluation system: improving student achievement.
In response, Governor Haslam, supported by legislative leadership, tasked the State Collaborative on Reforming Education (SCORE) with conducting an independent review of the system through a statewide listening and feedback process and producing a report to the State Board of Education and the department outlining a range of policy considerations. In addition, the Governor announced his support of House Joint Resolution (HJR) 520, which ultimately was adopted by the General Assembly. This resolution directed the department to follow through on its commitment to seek feedback, conduct an internal review of the evaluation system, and provide a report with recommendations to the House and Senate Education Committees by July 15, 2012. Through this feedback-gathering process, common themes have emerged: (1) Administrators and teachers--including both supporters and opponents of the evaluation model--believe the TEAM rubric effectively represents high-quality instruction and facilitates rich conversations about instruction; (2) Administrators consistently noted that having school-wide value-added scores has led to increased collaboration among teachers and a greater emphasis on academic standards in all subjects; (3) Administrators and teachers alike feel too many teachers have treated the rubric like a checklist rather than viewing it as a holistic representation of an effective lesson, and both groups feel additional training is needed on this point; (4) Teachers in subjects and grades that do not yield an individual value-added score do not believe it is fair to have 35 percent of their evaluation determined by school-wide scores; (5) Implementation of the 15 percent measure has not led to the selection of appropriate measures, with choices too often dictated by teacher and principal perceptions of which measure would generate the highest score rather than by which would most accurately reflect achievement; (6) Administrators consistently noted the large amount of time needed to complete the evaluation process; in particular, they want to spend less time observing their highest-performing teachers and more time observing lower-performing teachers, and they feel the mechanics of the process (e.g., data entry) need to be more streamlined and efficient; (7) Both administrators and teachers consistently felt better about the system as the year progressed, partly because of growing familiarity with the expectations and partly because of changes that allowed for fewer classroom visits during the second semester; and (8) Local capacity to offer high-quality feedback and to facilitate targeted professional development based on evaluation results varies considerably across districts. (Contains 5 footnotes.)
Descriptors: Video Technology, Feedback (Response), Evaluators, Interrater Reliability, Academic Standards, Educational Change, Politics of Education, Evaluation Methods, Leadership, Teacher Evaluation, Program Implementation, Training Methods, Program Effectiveness, Educational Policy, Scoring Rubrics, State Departments of Education, State Legislation, Policy Analysis, School Districts, Teacher Attitudes, Administrator Attitudes, Evaluation Problems, Classroom Observation Techniques, Educational Testing, Educational Indicators, Federal Legislation, Surveys, Program Evaluation, Inservice Teacher Education, Coaching (Performance), Educational Technology, Achievement Tests
Tennessee Department of Education. Andrew Johnson Tower 6th Floor, Nashville, TN 37243-0375. Tel: 615-741-2731; e-mail: Education.Comments@state.tn.us; Web site: http://www.tennessee.gov/education/
Publication Type: Reports - Evaluative
Education Level: Elementary Secondary Education
Audience: N/A
Language: English
Sponsor: N/A
Authoring Institution: Tennessee Department of Education
Identifiers - Location: Tennessee
Identifiers - Laws, Policies, & Programs: No Child Left Behind Act 2001; Race to the Top
Identifiers - Assessments and Surveys: National Assessment of Educational Progress; Stanford Achievement Tests
Grant or Contract Numbers: N/A
IES Cited: ED544797; ED561236; ED544205; ED548539
Author Affiliations: N/A