Should we stop the use of test data for educator evaluations?
A guest post by Kimberly Kappler Hewitt
**The Edvocate is pleased to publish guest posts as a way to fuel important conversations surrounding P-20 education in America. The opinions contained within guest posts are those of the authors and do not necessarily reflect the official opinion of The Edvocate or Dr. Matthew Lynch.**
Within the last year, three influential organizations, representing the research, practitioner, and philanthropic sectors, have called for a moratorium on the current use of student test score data for educator evaluations, including the use of value-added models (VAMs).
In April of 2014, the American Statistical Association (ASA) released a position statement that was highly skeptical of the use of VAMs for educator evaluation. ASA declared that “Attaching too much importance to a single item of quantitative information is counterproductive—in fact, it can be detrimental to the goal of improving quality.” To be clear, the ASA stopped short of outright condemning the use of VAM for educator evaluation, and declared that its statement was designed to provide guidance, not prescription. Instead, ASA outlined the possibilities and limitations of VAM and called into question how it is currently being (mis)used for educator evaluation.
In June of 2014, the Gates Foundation, the largest American philanthropic education funder, released “A Letter to Our Partners: Let’s Give Students and Teachers Time.” The letter was written by Vicki Phillips, Director of Education, College Ready, who, on behalf of the Foundation, called for a two-year moratorium on the use of test scores for educator evaluation. She explained that “teachers need time to develop lessons, receive more training, get used to the new tests, and offer their feedback.”
Similarly, the Association for Supervision and Curriculum Development (ASCD), arguably the leading international educator organization, with 125,000 members in more than 130 nations, recently released a policy brief that also calls for a two-year moratorium on high-stakes use of state tests, including their use for educator evaluations. ASCD explicitly acknowledged that “reliance on high-stakes standardized tests to evaluate students, educators, or schools is antithetical to a whole child education. It is also counter to what constitutes good educational practice.”
While the call to halt the current use of test scores for educator evaluation is echoed across all three of these organizations, there are important nuances to their messages. The Gates Foundation, for example, makes it clear that it supports the use of student test data for educator evaluation even as it calls for a two-year moratorium, the purpose of which is to give teachers time to adjust to the new Common Core Standards and related tests:
The Gates Foundation is an ardent supporter of fair teacher feedback and evaluation systems that include measures of student gains. We don’t believe student assessments should ever be the sole measure of teaching performance, but evidence of a teacher’s impact on student learning should be part of a balanced evaluation that helps all teachers learn and improve.
The Gates Foundation cautions, though, against the risk of moving too quickly to tie test scores to teacher evaluation:
Applying assessment scores to evaluations before these pieces are developed would be like measuring the speed of a runner based on her time—without knowing how far she ran, what obstacles were in the way, or whether the stopwatch worked!
I wonder what the stopwatch symbolizes in the simile: Does the Gates Foundation have questions about the measurement mechanism itself (VAM or another student growth measure), or is Gates simply arguing for more time in order for educators to be “ready” for the race they are expected to run?
While the Gates call for a moratorium is aimed at improving the chances that policies tying student test data to educator evaluation fulfill their positive potential, by giving educators more time to prepare for them, the ASA is concerned about the potential negative effects of such policies. In its attempt to provide guidance, the ASA identified problems with the current use of VAMs for educator evaluation and raised important questions about the potential effects of their high-stakes use:
A decision to use VAMs for teacher evaluations might change the way the tests are viewed and lead to changes in the school environment. For example, more classroom time might be spent on test preparation and on specific content from the test at the exclusion of content that may lead to better long-term learning gains or motivation for students. Certain schools may be hard to staff if there is a perception that it is harder for teachers to achieve good VAM scores when working in them. Over-reliance on VAM scores may foster a competitive environment, discouraging collaboration and efforts to improve the educational system as a whole.
Like the ASA, ASCD is concerned about the negative effects of current accountability practices, including “over testing, a narrowing of the curriculum, and a de-emphasis of untested subjects and concepts—the arts, civics, and social and emotional skills, among many others.” While ASCD is clear that it is not calling for a moratorium on testing, it is calling for a moratorium on accountability consequences linked to state tests: “States can and should still administer standardized assessments and communicate the results and what they mean to districts, schools, and families, but without the threat of punitive sanctions that have distorted their importance.” ASCD goes further than the ASA and Gates in calling for a complete revamp of accountability practices, including policies regarding teacher accountability:
We need a pause to replace the current system with a new vision. Policymakers and the public must immediately engage in an open and transparent community decision-making process about the best ways to use test scores and to develop accountability systems that fully support a broader, more accurate definition of college, career, and citizenship readiness that ensures equity and access for all students.
So…are policymakers listening? Are these influential organizations able to amplify the voices of researchers and practitioners across the country who also want a moratorium on misguided teacher accountability practices? Let’s hope so.
This post originally appeared on the blog VAMboozled! and has been republished with permission.