Brian Hewlett is the Director of Technology at the Sheridan School in Washington, DC, a JumpRope partner school.
How do you know if the instruction happening at your school is meeting the needs of all students? It's a big question, and one that I am sure many schools are interested in answering. As a small independent school, we knew this would be an important measure of success for us. One of the places we kept coming back to in attempting to answer it was our assessment and evaluation data. A report card is one of the most important pieces of feedback a school can offer a parent; it is a marker of progress on the road to mastery. The data that goes into creating a report card shouldn't flow in only one direction, though. Just as that data is useful to parents, it's also critical to evaluating how well your program is meeting student needs overall.
Having data is essential to making informed decisions. At Sheridan, we realized that we didn't have the data we needed to thoughtfully analyze our program, instruction, and student needs. Our previous reporting system was great at generating report cards, but the information contained on those reports only lived on those pages; there was no way to systematically look at students, classes, or the school as a whole. Recognizing our data deficiency, we switched course and started working with JumpRope. Now every piece of assessment data that an instructor enters becomes part of a larger narrative. That information is now readily accessible to all faculty and administrators and can easily be broken out for analysis.
So what data are important? We've found it's best to start with the bigger picture (school and class) before moving to the micro level (individual students): begin with a large data set that you can then dig deeper into. Our gradebook data forms the foundation of that analysis. Our school uses similar standards across grade levels, which lets us track progress from grade to grade, and our work habit standards are identical across the school.
For example, we have math standards based on order of operations across our middle grades, and the work habit of preparedness school-wide. Let's say we're interested in tracking preparedness. Are Sheridan students ready to learn when they show up to school each day? With our data, we can start at the top level and pull scores on the preparedness standard for all courses. We look for outliers: Are there specific courses or grade levels where preparedness scores are lower? Is there a pattern that can be addressed (e.g., can we do a push to remind students to bring their gym uniform, or their textbook to language arts)? Once we've looked at the school as a whole, we can review individual grade levels. Is preparedness an issue for the 8th grade but not the 7th? Is it an issue in all courses or just specific ones? You can imagine, no doubt, the different ways we can tailor support to address whatever needs the data show. Lastly, we have the option of looking at each individual student. For student-level work, I've found it best to download the dataset, since having it in Excel or Google Sheets lets me add filters and conditional formatting. Once again, we're checking for patterns: Is homeroom performance different from departmental work? Are work habits consistent?
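The same top-down pass (average a standard per course, then flag the outliers) can be sketched in a few lines of pandas once the data is downloaded. This is only an illustrative sketch: the column names (`grade_level`, `course`, `score`), the 1–4 mastery scale, and the threshold are all assumptions for the example, not JumpRope's actual export format.

```python
# Hypothetical sketch: flagging courses with low average "preparedness"
# scores in a downloaded gradebook export. Column names and the 1-4
# mastery scale are assumed for illustration only.
import pandas as pd

def flag_low_courses(df: pd.DataFrame, threshold: float = 2.5) -> pd.DataFrame:
    """Average the preparedness score per (grade, course) and keep the
    outliers that fall below the threshold."""
    by_course = df.groupby(["grade_level", "course"], as_index=False)["score"].mean()
    return by_course[by_course["score"] < threshold]

# Toy data standing in for a real export.
data = pd.DataFrame({
    "grade_level": [7, 7, 8, 8],
    "course": ["PE", "Language Arts", "PE", "Language Arts"],
    "score": [3.5, 3.2, 2.1, 3.4],
})

low = flag_low_courses(data)
```

With this toy data, only 8th-grade PE falls below the threshold, which is exactly the kind of grade-by-course outlier the paragraph above describes chasing down.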
This is our first year working with student data in this way, so it's been a learning experience for everyone involved. We've held meetings to help teachers learn how to access this data in our gradebook, and the administrative team has run data review sessions at the end of each term. We've been particularly interested in the scoring variations, or lack thereof, between boys and girls, how work habit scores evolve throughout the year, and comparisons of similar standard strands across grades. Having the ability to sort through student data can build a compelling narrative about what is working for students at your school and help identify areas where students need support.