Monday, November 26, 2012

Making Feedback to Students Effective


(Originally titled “Know Thy Impact”)
        “Gathering and assessing feedback are really the only ways teachers can know the impact of their teaching,” says Australian educator John Hattie in this Educational Leadership article. The problem is that not all feedback is effective. Hattie offers these suggestions for making feedback work:
        • Clarify the goal. “The aim of feedback is to reduce the gap between where students are and where they should be,” says Hattie. “With a clear goal in mind, students are more likely to actively seek and listen to feedback.” The teacher might provide scoring rubrics, a completed example, the steps toward a successful product, or progress charts.
        • Make sure students understand the feedback. “When we monitor how much academic feedback students actually receive in a typical class, it’s a small amount indeed,” says Hattie. Teachers need to check with students to see if they’re getting it. This may involve asking them to interpret written comments and articulate next steps.
        • Seek feedback from students. Do they need help? Different strategies? Another explanation? Teachers who listen to students can adapt lessons, clarify work demands, and provide missing information, all of which helps students do better.
        • Tailor feedback to students. Novice students benefit most from task feedback, somewhat more proficient students from process feedback, and highly competent students thrive on feedback aimed at self-regulation or conceptual understanding.
  • Task feedback – How well the student is doing on a particular task and how to improve.
  • Process feedback – This might be suggested strategies to learn from errors, cues to seek information, or ways to relate different ideas.
  • Self-regulation feedback – This helps students monitor, direct, and regulate their own actions as they work toward the learning goal – and helps build a belief that effort, more than raw ability, is what produces successful learning.
To move students from mastery of content to mastery of strategies to mastery of conceptual understanding, teachers need to give feedback that is at or just above their current level.
        • Use effective strategies. One tip is to scope out entering misconceptions and have students think them through. Another is providing students with formative assessment information, giving them specific information on strengths and weaknesses. A third is to start with effective instruction and learning experiences. “Teachers need to listen to the hum of students learning, welcoming quality student talk, structuring classroom discussions, inviting student questions, and openly discussing errors,” says Hattie. “If these reveal that students have misunderstood an important concept or failed to grasp the point of the lesson, sometimes the best approach is simply to reteach the material.”
• Avoid ineffective feedback. Researchers have found that praise and peer feedback are problematic. “Students welcome praise,” says Hattie. “Indeed, we all do. The problem is that when a teacher combines praise with other feedback information, the student typically only hears the praise… The bottom line seems to be this: Give much praise, but do not mix it with other feedback because praise dilutes the power of that information.” As for peer feedback, Graham Nuthall monitored students’ peer interactions through the school day (using microphones) and found that most of the feedback students receive during the day is from other students – and much of it is incorrect. Peer feedback needs clear structure, such as a rubric and a set of guiding questions.
        • Create a climate of trust. Students must understand that errors and misunderstandings are part of learning and not be afraid of negative reactions from peers – or the teacher – if they make mistakes.
 
“Know Thy Impact” by John Hattie in Educational Leadership, September 2012 (Vol. 70, #1, p. 18-23), www.ascd.org; Hattie can be reached at jhattie@unimelb.edu.au
 
Stephen Anderson
Principal,

Student Survey Data As Part of Teacher Evaluation


        In this important Kappan article, Harvard senior lecturer Ronald Ferguson describes a scenario in which a principal peeks into a classroom and likes what she sees (students are busy and well-behaved) and the teacher and principal are pleased with his test-score results (they’re almost always above average). But the students, if asked, would have told a very different story: lessons are uninteresting, assignments emphasize memorization more than understanding, and the teacher seems indifferent to their feelings and opinions. In short, it’s not a happy place and there is no love of learning.
        Universities routinely survey students on how professors are performing, but until recently, K-12 students have not been given the chance to evaluate their teachers. This is because, although students spend hundreds more hours in classrooms than any administrator, people doubt that students can provide valid, reliable, and stable responses about the quality of teaching.
        The Measures of Effective Teaching (MET) project has put those doubts to rest. Comparing value-added analysis of test scores, classroom observations, and student perception surveys (using Ferguson’s Tripod questions), researchers have found that students provide accurate, helpful information on their teachers’ performance. “[S]tudents know good instruction when they experience it as well as when they do not,” says Ferguson. The research design was careful to control for students’ family background and isolate each teacher’s characteristics and impact on learning.
These robust findings notwithstanding, Ferguson offers two caveats about using student survey results to evaluate teachers:
  • Any method of assessing teacher effectiveness is prone to measurement error.
  • Teachers may temporarily alter their behaviors to improve their survey results, especially if students’ opinions have high stakes.
These concerns lead Ferguson to say, “No one survey instrument or observational protocol should have high stakes for teachers if used alone or for only a single deployment.” He supports the idea of student surveys being one of several measures used to evaluate teachers.
        Over the last eleven years, almost a million K-12 students have filled out anonymous Tripod surveys on their teachers, and Ferguson and his colleagues have refined the questions to the point where they pass muster with other researchers. The survey questions are grouped under seven headings, and students respond by rating their agreement or disagreement with each statement on a 5-4-3-2-1 scale:
        • Care. This goes beyond a teacher’s “niceness” to encompass demonstrated concern for students’ happiness and success. A sample question: My teacher really tries to understand how students feel about things.
        • Control. These questions measure management of off-task and disruptive behaviors in the classroom. A sample question: Our class stays busy and doesn’t waste time.
        • Clarify. This addresses the teacher’s skill at promoting understanding, clearing up confusion and misconceptions, differentiating, and helping students persevere. A sample question: My teacher has several good ways to explain each topic that we cover in this class.
        • Challenge. This covers effort and rigor and measures whether the teacher pushes students to work hard and think deeply. Sample questions: In this class, my teacher accepts nothing less than our full effort and My teacher wants us to use our thinking skills, not just memorize things.
        • Captivate. Do teachers make instruction stimulating, relevant, and memorable? Sample questions: My teacher makes lessons interesting and I often feel like this class has nothing to do with real life outside school.
        • Confer. This covers teachers seeking students’ points of view and allowing them to express themselves and exchange ideas with classmates. A sample question: My teacher gives us time to explain our ideas.
        • Consolidate. This measures whether teachers check for understanding and help students see patterns and move learning into long-term memory. A sample question: My teacher takes the time to summarize what we learn each day.
        Ferguson notes that five of these areas measure teachers’ support of students – Care, Clarify, Captivate, Confer, and Consolidate – and two measure “press” – Control and Challenge.
        What have the survey results revealed about teachers? Even lower-elementary students express clear distinctions among teachers, with greater variation within schools than between schools. Overall, the MET study has shown Tripod survey results to be valid and reliable predictors of student learning in math and ELA – in fact, more reliable than administrators’ classroom observations. Students whose teachers scored in the top quarter on Tripod questions learned the equivalent of 4-5 months more per year than students whose teachers scored in the bottom quarter. The differences in ELA were about half as large as in math.
        Not all the Seven C items are equally predictive of student achievement. When Ferguson asks audiences which of the Seven C’s they think are most important to student achievement, most pick Care. But that’s not what the MET data show. Here are the survey questions that correlate most strongly with achievement gains:
  • Students in this class treat the teacher with respect.
  • My classmates behave the way my teacher wants them to.
  • Our class stays busy and doesn’t waste time.
  • In this class, we learn a lot every day.
  • In this class, we learn to correct our mistakes.
  • My teacher explains difficult things clearly.
However, the difference between these and other Tripod items is not large, says Ferguson: “Educators should keep all of them in mind as they seek ways to improve teaching and learning.”
        What about student outcomes beyond test-score gains? “We also want attentiveness and good behavior, happiness, effort, and efficacy,” says Ferguson. The good news is that he and his colleagues have found that “the same teaching behaviors that predict better behavior, greater happiness, more effort, and stronger efficacy also predict greater value-added achievement gains.” It’s not either-or; it’s both, and student survey results, used wisely, can give teachers and administrators valuable data to improve teaching and learning.
 
“Can Student Surveys Measure Teaching Quality?” by Ronald Ferguson in Phi Delta Kappan, November 2012 (Vol. 94, #3, p. 24-28), http://www.kappanmagazine.org; Ferguson can be reached at ronald_ferguson@harvard.edu
 
Stephen Anderson
Principal,

Tuesday, November 6, 2012

Charlotte Danielson on Effective Observation and Follow-Up

(Originally titled “Observing Classroom Practice”)
        “Classroom observations can foster teacher learning – if observation systems include crucial components and observers know what to look for,” says teacher-evaluation guru Charlotte Danielson in this Educational Leadership article. To be fair, “the judgments that are made about a teacher’s practice must accurately reflect the teacher’s true level of performance.” Although some of teachers’ work is “behind the scenes,” Danielson believes the most important parts of teaching can be observed in classrooms. A teacher who is ineffective in front of students can’t be considered competent.
        What should administrators look for in classrooms? Danielson believes that every district needs a research-based instructional framework (hers, for example) that gives everyone a detailed, well-crafted, agreed-upon definition of teaching at different levels of effectiveness. The framework should be validated, meaning that teachers who do well on the rubric produce significant gains in student achievement. Administrators should also be clear on what evidence they must gather to score teachers, and the evaluation process should be supported by training ensuring that different administrators would give essentially the same ratings to the same teacher. In addition, it’s important that teachers have a clear picture (ideally through videotapes) of performance at different levels.
        Danielson believes administrators need to be proficient in four areas to conduct effective classroom observations, and should be certified in these before conducting high-stakes evaluations:
        • Collecting evidence – Administrators should write down what they actually see and hear in classrooms, not their opinions or interpretations. This might include something the teacher says (e.g., “Can anyone think of another idea?”), what students do (e.g., taking 45 seconds to line up), or something else (e.g., backpacks strewn in the middle of the floor). It’s hard for many administrators to refrain from making judgments, says Danielson, but it’s important to separate evidence from conclusions, especially when there’s disagreement about a teacher’s level of performance.
        • Deciding on rubric scores – This is where the administrator takes the evidence gathered in the classroom and finds the rubric language that provides a valid interpretation and judgment. Ideally, different administrators observing the same classroom will identify the same rubric lines and the same 4-3-2-1 levels of performance. This is relatively easy for low-inference items (did the class start on time?) but considerably more difficult for items like a teacher using questioning and discussion to deepen understanding.
        • Conducting professional conversations with teachers – Although there are times when administrators need to tell teachers bluntly that something must change, the focus in most follow-up conferences, Danielson believes, “should be dialogue, with a sharing of views and perspectives. After all, teachers make hundreds of decisions every day. If we accept that teaching is, among other things, cognitive work, then the conversations between teachers and observers must be about cognition.” These conversations “are the best opportunity to engage teachers in thinking through how they could strengthen their practice.” This, of course, has implications for how administrators are trained and supported.
        • Making the teacher an active participant – In most conventional evaluations, says Danielson, teachers are passive recipients and the administrator does almost all the work – not the best strategy for bringing about adult learning. To change this one-sided dynamic, Danielson suggests the following steps. First, both teacher and administrator become conversant with the evaluation rubric. Second, after a classroom observation, the administrator shares his or her low-inference notes with the teacher and accepts additions and edits from the teacher’s perspective. Third, the teacher and administrator independently align the observation notes with the rubric, identifying which cell accurately describes and evaluates what was taking place in the classroom. Finally, they meet and compare their rubric scores and discuss any differences.
 
“Observing Classroom Practice” by Charlotte Danielson in Educational Leadership, November 2012 (Vol. 70, #3, p. 32-37), http://www.ascd.org; Danielson can be reached at info@danielsongroup.org
 
Stephen Anderson