There are many issues that have kept swirling in my head since taking (and thankfully passing) the test. The list below captures a few of those thoughts. It is in no particular order, with the exception of #1 (I can't get that thought out of my mind).
THE LIST:
- How I Answered Questions- I consciously changed my answers based on what I felt the "test" wanted as an answer, not what I knew I could prove or what I know to be best practice in education. I can't shake it because it makes me wonder, "To what extent do we do this to our students through our existing assessments?" (Especially those that are "high stakes.")
- Don't Model This...How this test/assessment is implemented does not follow best practice for assessments that measure learning- there was no pre-assessment data to determine whether my existing knowledge or the video training led to my "success," and very limited feedback about how you scored and/or how to improve.
- Who Are These "Master Scorers"?- How many times did they get to watch the videos before assigning scores? Did they change their answers to match the assessment? Do they have superhero vision and hearing (see #4 below)? HOW DID THEY THINK THAT IS A 4?!? (Upon reflection, this is a great thing to have pop into my mind, because it makes me realize how lucky I am to see level 4 teaching every day.)
- Surprising Lack of "Standardization"/The Bias of Multiple Cameras- For a tool that aims to standardize our evaluation practice, I was very surprised that the videos in the module are not standardized. Each video had a different camera view: some were panoramic, some followed the teacher, some used multiple cameras and angles; some had good sound, some you could barely hear, and in some the volume was too high when certain people spoke. This had an enormous impact on what I could observe in each lesson and how.
- When Will the Students' Views Become a Factor?- Just because I think a lesson went well, or matched what I am trained to see as "good teaching," or because I learned by observing the lesson, does that mean it was effective? Is the lesson personalized to the students? Is it connected to the students' interests, other classes, or past experiences? Do the students feel the teacher cares about them? Any tool that does not provide a measure of the students' voice as at least a potential indicator of educator effectiveness is missing a key component of an effective tool. (Side note: a useful tool for incorporating this idea is iknowmyclass.com.)