At the Tennessee Department of Education, we believe that data and research should inform every aspect of our work. Our recently created Office of Research and Policy, led by Nate Schwartz, provides us with the internal capacity to carry out in-depth analysis that looks at the impacts of department policies on students and teachers. Nate’s team also partners with external researchers and organizations, such as the Tennessee Consortium on Research, Evaluation, and Development (TNCRED), to ensure that we are receiving rigorous and independent feedback on our work.
Recently, the department worked with TNCRED to produce the First to the Top survey, an examination of, among other things, the way educators perceive the state’s teacher evaluation system (Tennessee Educator Acceleration Model), now in its third year of implementation. Keep reading below for Nate’s take on what this research means for classrooms across the state.
-The Communications Team
Tennessee Teachers Weigh in on Teacher Evaluations
By Nate Schwartz
For the past two years, the Tennessee Consortium on Research, Evaluation, and Development has surveyed teachers and administrators across Tennessee about their perceptions of the Tennessee Department of Education’s Race to the Top initiatives. The results from the 2012-13 survey have just been released, and the findings offer several important themes and lessons for the department as the state enters the third year of its new statewide teacher evaluation system. In this post, I’ll describe some of the major takeaways for the department and reflect on next steps. Here are several of the main points:
Teachers’ perceptions of the evaluation system have grown far more positive over the past year, although there is still considerable room for improvement.
Teachers and evaluators are increasingly seeing the evaluation process as a tool for improving teaching and learning across the state, with more than half of respondents reporting that teacher evaluation will improve teaching in their schools.
Teachers in districts that chose to adopt district-specific observation models view the evaluation process more positively than teachers in districts that use the state-provided model, although it is hard to know whether this is a cause or an outcome of adopting the alternative system.
More than 90 percent of teacher evaluators felt adequately prepared to carry out all aspects of teacher evaluation in 2013, up from three-quarters of evaluators in 2012.
Teachers who viewed the evaluation process as focused on teaching improvement tended to engage with the system to a far greater extent than teachers who saw the process as one aimed only at judging their performance.
In the space below, I expand further on these points and include supporting data from some of the survey results.
Perceptions Growing More Positive
First, the survey shows that Tennessee teachers are feeling increasingly positive about the teacher evaluation system, although there remains considerable room for improvement. Figure 1 and Figure 2 show change over time on questions about teachers’ overall perceptions of the process. The 20-30 point increases over the past year in the percentage of teachers who agree with each statement attest to teachers’ growing comfort with the evaluation of their work. In particular, it is encouraging that more than two-thirds of teachers now feel that the teacher evaluation process treats them fairly, since one of the primary early objections to the system was the concern that evaluations might be biased (see Figure 2).
At the same time, the positive increases in teacher opinions about evaluation over the past year should not hide the fact that nearly half of Tennessee teachers still feel dissatisfied with the system. If teacher evaluation is truly to become a central element of the professional culture in Tennessee, the system will need to continue to produce far greater buy-in from all teachers within the system. In the coming weeks, Assistant Commissioner Sara Heyburn will write on Classroom Chronicles about how her Teachers and Leaders Division is continuing to examine the data and gather feedback from educators to inform ongoing improvements to the system.
A Tool for Teaching Improvement
A second major theme is that we have growing evidence that the evaluation system is being viewed and used as a tool for teaching improvement. In 2012, only around one-third of teachers believed that the feedback they were receiving from teacher evaluation was more focused on helping them to improve rather than making a judgment about their performance. By 2013, nearly half of teachers agreed with this statement (Figure 3).
Equally important, slightly more than half of survey respondents agreed that the teacher evaluation process as a whole would improve their teaching (Figure 4), and more than 40 percent agreed that the process would improve student achievement (Figure 5). Both rates increased by around 15 percentage points from the previous year. Interestingly, TNCRED finds little difference in these responses by teachers’ final 2012 evaluation rating. In other words, teachers who were rated as most effective and teachers who were rated as least effective were equally likely to believe the process might lead to better teaching.
Yet the survey also demonstrates the amount of work that still remains to be done at both the state and district levels to make teacher evaluation feel central to teachers’ professional development.
Around half of teachers reported never receiving any follow-up from their observers in the area identified as most in need of improvement, and, even among teachers who did receive some follow-up, it was rarely more than a single conversation (Figure 6). This is another area of active work for the department, and specifically the Teachers and Leaders Division, where state personnel are looking to provide evaluators with more effective tools to guide educators toward high-quality, targeted development activities.
Alongside these two major themes from the survey responses, there were several other interesting stories that provide useful feedback for the department.
One is that we see more positive responses from teachers in several of the districts using alternative teacher observation rubrics than from teachers in districts that use the state-developed TEAM rubric. For instance, only 48 percent of teachers using the TEAM rubric agreed that they were satisfied with the evaluation process in their school, whereas 61 percent of teachers using other observation systems felt satisfied (Figure 7). It is not entirely clear how to interpret these differences. Because the alternative-model districts explicitly opted into those systems, they likely benefited from far higher initial buy-in from district staff, school leaders, and teachers. Thus, the responses might mean that the alternate systems actually provide better support for teachers, but they might also reflect differences in teacher buy-in or in other supports that exist within those districts. To better understand the landscape of alternate observation systems within Tennessee, the department has partnered with the RAND Corporation to compare the multiple systems being used and to draw lessons that might inform the next round of TEAM design.
We can also draw useful conclusions from the set of questions directed exclusively at observers. Many of the department’s most recent initiatives – including revamped evaluator training, TEAM coaching, a revised principal evaluation model, and greater support for evaluator norming activities such as co-observations – have been directed at increasing the level of preparedness of teacher evaluators (principals, assistant principals, instructional coaches, and peer teachers) to serve as instructional leaders. In 2012, the rates of evaluators who reported that they felt adequately prepared to conduct various aspects of teacher evaluation ranged between 70 and 85 percent, depending on the particular task. By 2013, more than 90 percent of evaluators in TEAM districts felt adequately prepared for each aspect of teacher evaluation (Figure 8).
A final important lesson is that teachers who perceive the system as focused on teaching improvement, rather than on judgment of their performance, tend to engage with and value teacher evaluation to a far greater extent. The department is actively exploring several ways to make teacher evaluation feel more useful for teachers at all performance levels. Among other initiatives, we have partnered with the Jackson-Madison school district to pilot an innovative design for creating teacher mentoring relationships based on teachers’ strengths and weaknesses on specific teaching practices. Other activities that Sara Heyburn will cover more fully in a future post include using video technology for self-reflection, introducing the expectation of coaching conversations at the beginning of the school year, and using Supplementary Scopes of Work funding to encourage co-observations for the purpose of additional feedback.
In closing, it is important to note several data caveats that might affect our interpretation of the results. This year, the survey was completed by nearly 25,000 teachers and 3,000 administrators, representing 39 percent of teachers and 46 percent of administrators across the state. While this is a fairly strong response rate for such a survey, and TNCRED’s analysis suggests that the respondents for the most part resembled non-respondents across the state, differences we cannot measure might have skewed the results to some extent. Nevertheless, the First to the Top Survey represents a unique and valuable tool for gauging educator opinion across the state, and the department takes the results very seriously as we move into another year of teacher evaluation.
Dr. Nathaniel Schwartz directs the Office of Research and Policy at the Tennessee Department of Education. He earned his Ph.D. at the University of Michigan and has published articles in research journals such as Educational Evaluation and Policy Analysis and the Teachers College Record.