Friday, November 9, 2012

The Test's Answers vs. The Right Answers

As those who are participating as evaluators in RI's new Evaluation System already know, we need to take an assessment based on the Charlotte Danielson model (the vehicle used is Teachscape).  For those unfamiliar with it, the test can take up to 5 1/2 hours and contains video segments that must be observed; then, using the evidence collected, scores are assigned to the segments according to competencies.  There are also multiple-choice questions, select-all-that-apply questions, and select-those-that-don't-belong questions. For more info see http://www.ride.ri.gov/EducatorQuality/EducatorEvaluation/Training.aspx

Many issues have kept swirling in my head since taking (and thankfully passing) the test.  The list below captures a few of those thoughts.  It is in no particular order, with the exception of #1 (I can't get that thought out of my mind).

THE LIST:

  1. How I answered questions- I consciously changed my answers based on what I felt the "test" wanted as an answer, not what I knew I could prove or what I know to be best practice in education.  I can't shake it because it makes me wonder: to what extent do we do this to our students through our existing assessments? (Especially those that are "high stakes.")
  2. Don't Model This...- The way this test/assessment is implemented does not follow best practice for assessments that measure learning: there was no pre-assessment data to determine whether my existing knowledge or the video training led to my "success," and there was very limited feedback about how you scored and/or how to improve.
  3. Who are these "Master Scorers"?- How many times did they get to watch the videos before assigning scores? Did they change their answers to match the assessment?  Do they have superhero vision & hearing (see #4 below)? HOW DID THEY THINK THAT WAS A 4?!? (Upon reflection, this is a great thing to have pop into my mind, because it makes me realize how lucky I am to see level 4 teaching every day.)
  4. Surprising Lack of "Standardization"/The Bias of the Multiple Cameras- For a tool that aims to standardize our evaluation practice, I was very surprised that the videos in the module are not standardized.  Each video had a different camera view: some were panoramic, some followed the teacher, and some used multiple cameras & angles.  Some had good sound, some you could barely hear, and in some the volume was too high when certain people spoke.  This had an enormous impact on what I could observe in each lesson and how.
  5. When Will the Students' Views Become a Factor?- Just because I think a lesson went well, or matched up with what I am trained to see as "good teaching," or because I learned by observing the lesson, does that mean it was effective? Is the lesson personalized to the students? Is the lesson connected to the students' interests, other classes, or past experience? Do the students feel the teacher cares about them? Any tool that does not look to provide a measure of the students' voices as at least a potential indicator of educator effectiveness is missing a key component of an effective tool. (Side note: a useful tool for incorporating this idea is iknowmyclass.com)
I'm sure more will pop into my head this weekend.  For those who have taken the test, I'd love to know what has stuck with you after completing it.


