Scoring the "Re-Do"
School used to be a place where, among other things, teachers taught lessons, administered assessments, and entered grades into the gradebook. That system was linear, straightforward, clean. Now we live in a culture of students re-doing assignments, revisiting assessments, and generally trying again (and even again and again) in order to learn and to demonstrate evidence of their learning. Apropos of our new paradigm, I recently facilitated a discussion among some very thoughtful teachers and administrators in RSU 38 in Maranacook, Maine, to help them think through their own practices and policies in support of this updated approach to teaching and learning. Like many thoughtful discussions, the intention of this one was to deepen knowledge of what is possible. The RSU 38 team was not determined to “figure it all out” that day, but rather to take some steps toward figuring it out.
In preparation for the discussion in Maranacook, I queried several of our trusted JumpRope partner school districts. The ideas shared back with me were so rich and varied that I immediately knew they were worth writing about here, so a wider group of dedicated professionals could benefit from chewing on them. To be very clear, the original RSU 38 question was, “What are some options for entering ‘redo’ scores, and what are the implications for calculations of proficiency?” After sharing the ideas I collected from other districts, we spent a great deal of time trying different combinations of scores and weights in the JumpRope Calculation Simulator to see the mathematical impact additional or different scores would have on a student’s grade. Looking at the way the numbers play out relative to how we anticipate they might play out (or think they should play out) can be very helpful in establishing or refining practice and policy.
The responses I got from our partners indicate a very wide range in practice and policy; one of the beautiful enigmas that remain even as we think we are all moving in the same direction. A careful review of the responses led me to this question: Is this about collecting data to indicate evidence of learning, or is it about learning and using the data to report on what has been learned? The former suggests attempting to minimize data collection, with fewer retakes and retakes for fewer reasons. The latter suggests continuous learning and continuous assessment. I think we see some of each of these frames in the work of the JumpRope districts represented below.
Becky Brown, South Portland, Maine:
“We use the Power Law to honor trending, which means we do not overwrite scores.
We also encourage teachers to plan to offer no fewer than three opportunities for students to exhibit proficiency, and more if they aren’t getting it. We do not talk of re-dos or re-takes. It’s about learning and exhibiting at the next opportunity that is already planned as part of instruction. All summative opportunities are weighted the same.
We also insist that some sort of instruction has taken place before the next opportunity so it’s not just take a test, 2 days later take it again, etc. We have some targets that we are calling Low Leverage. With LL targets, teachers instruct and assess once BUT are responsible for remediating and re-assessing for students who did not achieve proficiency.”
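For readers unfamiliar with the power law, it is a trend-fitting calculation rather than an average: it fits a power curve through a student’s successive scores and reports where the trend lands, which is why earlier scores never need to be overwritten. Here is a minimal sketch, assuming a Marzano-style least-squares fit in log-log space; the function name is mine, and JumpRope’s actual implementation and parameters may differ:

```python
import math

def power_law_score(scores):
    """Fit y = a * x**b through (attempt number, score) pairs by least
    squares in log-log space, then report the fitted value at the last
    attempt. A rough Marzano-style sketch; assumes all scores are
    positive. JumpRope's actual implementation may differ.
    """
    n = len(scores)
    if n == 1:
        return scores[0]
    xs = [math.log(i) for i in range(1, n + 1)]  # attempts 1..n
    ys = [math.log(s) for s in scores]
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least-squares slope and intercept on the log-log points.
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    ln_a = mean_y - b * mean_x
    return math.exp(ln_a + b * math.log(n))
```

Because the fit follows the trend, a student who climbs from 2.0 to 2.5 to 3.0 ends up with a score near the 3.0 end of the run rather than the simple average of 2.5.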
Eric Waddell, Kittery, Maine:
“At Traip Academy, we overwrite previous scores and note it in the comment field.
Students who exceed on the 2nd or subsequent attempts cannot earn a 4 on the assessment. The highest would be a 3.75. Students who meet proficiency on the 2nd or subsequent attempts can get nothing higher than a 3.0 on the assessment (meeting on the first attempt is noted in JR as a 3.25). The weight of the assessment would not be impacted by the number of attempts.”
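A policy like Traip’s reduces to a simple rule that can be applied before a score is entered. A hypothetical sketch, with names and structure that are mine rather than anything in JumpRope:

```python
def traip_entered_score(raw_score, attempt):
    """Apply Traip Academy's retake caps as described above.

    `raw_score` is the 1-4 rubric score earned; `attempt` is 1 for the
    first try, 2 or more for retakes. Illustrative only.
    """
    if attempt == 1:
        # Meeting proficiency on the first attempt is noted as a 3.25.
        return 3.25 if raw_score == 3.0 else raw_score
    if raw_score > 3.0:
        # Exceeding on a retake is capped at 3.75.
        return min(raw_score, 3.75)
    # Meeting (or falling short) on a retake is capped at 3.0.
    return min(raw_score, 3.0)
```

So a 4.0 earned on a second attempt is entered as 3.75, while the same 4.0 on a first attempt stands as entered.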
Deb Taylor, RSU 12, Maine:
“We treat re-do’s differently depending on whether you’re talking about
(1) multiple attempts at a learning goal at a time when teacher and student believed they were ready to succeed, where the attempts are made on the same assessment instrument - i.e. retaking a test;
(2) multiple attempts at a learning goal at a time when teacher and student believed they were ready to succeed, where the attempts are made on different assessment instruments - i.e. the student was not able to show proficiency on a paper-and-pencil test but was able to show proficiency in a student-led presentation; or
(3) multiple attempts at a long-term learning goal at periodic intervals to assess progress toward the long-term goal - i.e. a student is assessed on letter ID in the fall, winter, and spring, with learning toward the goal occurring all year.
In instance #1, we overwrite the previous score.
In instance #2, we collect each data point and use decaying average (that being said, we would weight the first score at 0 if it were possible to weight an assessment by student instead of for all students with scores, since some may not need a second attempt).
In instance #3, we collect each data point and generally weight the previous attempts at zero.”
Kelli Deveaux, Westbrook HS, Maine:
“We do not overwrite; it created significant issues with teachers in each other’s gradebooks and didn’t give a true picture of the learning journey for a student.
With decaying average we have learned that multiple attempts should be entered, as it actually helps a student if she goes from 2.0, 2.5, 3.0 to include all of them, and not just the 2.0 and then eventually the 3.0 when she gets there.”
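Kelli’s observation is easy to check with arithmetic. Below is a common recursive form of decaying average in which the newest score gets a fixed share of the weight, here assumed to be 65%; JumpRope’s actual weighting is configurable, so treat the numbers as illustrative:

```python
def decaying_average(scores, recent_weight=0.65):
    """Each new score counts `recent_weight`; the running average of
    everything before it counts the remainder. Illustrative only --
    the weight JumpRope applies is configurable per district.
    """
    avg = scores[0]
    for s in scores[1:]:
        avg = recent_weight * s + (1 - recent_weight) * avg
    return avg

# Entering the intermediate 2.5 lifts the result:
with_middle = decaying_average([2.0, 2.5, 3.0])  # ~2.76
without     = decaying_average([2.0, 3.0])       # ~2.65
```

With these assumed weights, recording the intermediate 2.5 raises the final mark from roughly 2.65 to roughly 2.76, which matches Westbrook’s experience that entering every attempt helps a student who is trending upward.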
The whole idea of trying the same thing again in education is kind of revolutionary. It’s a clear and resounding break from the days of using grades to sort students on a bell curve. The carefully considered responses above show that while our business is increasingly becoming one that truly focuses on learning over grading, our approaches to evidencing and reporting that learning are still quite varied.