Editor's note: This post provides a deeper look into the ideas of equity of opportunity, one of JumpRope's Core Values.
As teachers, we know that our antennae are always attuned to how well our students are doing. We also know that continuous data collection is not only burdensome, it's also ineffective. In other words, weighing the pig won't make it fatter. Forgive the analogy, but I think it works. We also know, however, that checking the pig's weight, using the right scale at the right time, tells us whether the pig is on pace to attain the weight it needs to win a blue ribbon at the county fair. And of course, once we arrive at the fair, the pig's final weight will be one variable in awarding that ribbon. If we look at classroom assessment as a measurement of learning, and we recognize that we are the facilitators of that learning, we can embrace the need to collect strategic and useful data for, and of, that learning.
When I work with my students at the University of Southern Maine (USM) to help them design assessments and assessment systems, I talk about diagnostic, formative, and summative assessment. I also talk about checks for understanding. I deliberately distinguish these types of assessments from one another so that we share a common lexicon. The names we assign to these assessments ultimately matter less than understanding when and how to assess, and when and why to collect data.
Checks for understanding happen in the moment, while I am teaching. With my USM students, I distinguish between checks for directional understanding and checks for content understanding. Both are important, but our discussion here is focused on content. To check content understanding, I might pause 10 minutes into a full-group lesson to ask some oral questions. Sometimes I listen in as students talk with one another, or watch as they complete cooperative or independent work. I might collect classwork such as note catchers, graphic organizers, problem sets, written responses, or exit tickets. The check can come during the class period, or at the end of it if I know we will continue to build from the content we've addressed that day. The common thread for all of these checks for understanding is that I do not use them for formal data collection. I might offer a brief comment to some students, either in writing or orally, but offering feedback is not the goal. I might jot notes to myself if I hear or see some misunderstandings to address later, or some good thinking I'd like a student to share with peers, but I am not keeping track of this information for the purposes of assessing learning. I am using these checks for understanding to refine my instruction that day and in the coming few days. To carry my clumsy analogy forward, I am simply eyeballing the pig here. No scale required.
I do use some sort of scale, that is, I record data, when I assess diagnostically and formatively. For both types of assessment, I apply the same standards and scoring criteria that I would to a summative assessment, though my diagnostic and formative assessments are never as comprehensive as a summative assessment would be. See below for an example of how this looks in my gradebook.
Formative and summative aligned to the same standards.