A Standards Based Grading Deep Dive – Part 1: The Grading Rubric

If you ask 100 classroom teachers what their least favorite part of the job is, I am willing to bet that at least 80 of them will say “grading student work”. Well, that might not be accurate. Almost all of them will say “mandated professional development”, with grading being a close second. Having taught middle school math for two decades, I can safely estimate that I have assessed at least a million math problems that my students have completed on some kind of assessment. Don’t get me wrong, I get a tiny spark of joy each time a student gets a question correct (Yay, they learned the thing!). It’s just very time consuming, and I know that every time I grade something, there will always be a small number of students who are going to have some seriously negative emotions when I hand it back, whether they do horribly or just get one question wrong. Too many emotions tied up in points, grades, and self-worth.

So two years ago the Math Department at my school switched to Standards Based Grading, with the hopes of giving students better feedback on their learning, an improved sense of hope and efficacy, and a focus on the learning rather than the grade. (I wrote about this back in October if you would like to read that first). We developed a whole new grading system based on a multi-point rubric for each Learning Target, offered multiple chances for students to be reassessed, and removed mandatory homework for points in the gradebook. It was a lot of work, but work worth doing. Or was it?

So instead of just going on feelings, I wanted to reflect on how last year went, and look at the data available to me to see if the changes are working as intended. It’s quite the journey, so I plan on looking at this in multiple posts, otherwise this blog would be gigantic. Let’s dive into Part 1!

Part 1 – The Grading Rubric

Two years ago we started off with a very basic 5-point scoring rubric for each Target to ease the transition from a traditional gradebook to an SBG one. Here’s what that looked like:


This gave a simple 20% breakdown for each letter grade, so an A was 80%–100% and meant that more often than not a student had “Mastered” the Targets in the class. Numbers-wise this was easy for parents and students to understand. In application, things got really weird when we tried to grade an assessment. Any teacher who has assessed students for a while knows what “Mastered” and “Beginning” look like. It was the middle area where there was a lot of subjectivity. I personally had many instances where I could not tell the difference between “Proficient” and “Approaching”, as did all of my colleagues. About halfway through the year we realized that this needed to change, since we kept having long discussions about what was Mastered versus Proficient, and Proficient versus Approaching. While grade norming is essential in a PLC, you can’t spend all of your planning time doing only that.
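If you like seeing the arithmetic spelled out, here is a tiny sketch of that 20%-per-letter conversion. The exact cutoffs below (and the D/F split) are just my reading of the “20% breakdown” idea, not a copy of our actual gradebook settings:

```python
# Hypothetical conversion from a 5-point rubric average to a letter grade,
# following the simple 20%-per-letter breakdown described above.
def letter_grade(rubric_avg, max_points=5):
    pct = rubric_avg / max_points * 100
    if pct >= 80:
        return "A"   # 80%-100%, i.e. mostly "Mastered"
    elif pct >= 60:
        return "B"
    elif pct >= 40:
        return "C"
    elif pct >= 20:
        return "D"
    else:
        return "F"
```

So a student averaging 4 out of 5 across their Targets sits right at the A cutoff, which is part of why the math felt clean even when the grading itself didn’t.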

So last year we transitioned to a 4-point rubric, which is most often advocated for when you look into SBG practices. We developed more language to help ourselves and our students know the difference between each level of understanding, and we updated the category language, since “Mastered” felt like a weird and highly subjective descriptor. So here’s what we used last year:

I really liked this rubric more than the previous one. Since there were fewer levels to consider, it was easier to see from the student work where a student was. The only place I ran into trouble was telling the difference between “Thorough (4)” and “Adequate (3)”. Sometimes it was just really hard to tell. More often than not I would assign a student a 3, then meet with them to go over their work and talk about what needed to improve to reach a 4. Since they could retake any assessment, this always felt good. It’s not like they were stuck with that score.

Let’s look at one of the assessments I gave last year, and how I graded it for a few students. Here is the very first assessment I gave in Math 8 for Target 1.1:



One other practice I personally developed to help me determine proficiency levels was to use a spreadsheet I created for each Target assessment. As I examined each question I would grade the response using the same 4-point rubric and enter the score. I had the spreadsheet average out the scores for the entire assessment, then use the number as a general guide as to what level the student was at. Here’s a link to a sample I have for one of my Target assessments for 8th grade.
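For anyone curious about the mechanics, here’s a minimal sketch of that averaging step. The student names and per-question scores below are made up for illustration, not pulled from my actual spreadsheet:

```python
# Hypothetical per-question scores on the 4-point rubric
# (4 = Thorough, 3 = Adequate; lower levels omitted here).
scores = {
    "Student A": [4, 4, 3, 4, 4, 3, 4, 3, 4],
    "Student B": [3, 3, 4, 2, 3, 3, 4, 3, 3],
}

for student, qs in scores.items():
    avg = sum(qs) / len(qs)
    print(f"{student}: average {avg:.1f}")
```

The resulting average (3.7 for the first student, 3.1 for the second) is only a general guide, which is exactly where the next wrinkle comes in.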


One of the tricky things about this method of grading, though, is that not every question is the same level of rigor, so the average score doesn’t really tell you the proficiency level. For instance, question #8 required the students to create their own equation using an “Open Middle” structure, then prove that what they created met all of the criteria needed. This is way different than question #1, which was a basic two-step equation with only whole numbers. This is where the holistic approach comes into play.

For example, let’s look at Student #6 and Student #7. Both students got an average of 3.7 on the assessment, but one of them scored an Adequate (3) and the other a Thorough (4). Why is that? Since Student #6 got questions #5 and #6 wrong, and those were considered less rigorous (they were basic equation solves), I found them to be at the Adequate level for the entire Target, but not Thorough. Student #7 got two questions wrong as well, but there were some factors to consider. For question #7, they made a simple calculation mistake in the final step of the problem. Not a big deal. I don’t really downgrade students’ proficiency level because of a simple calculation mistake. For question #9 they were able to circle the part of the work that had the error in it, but this student was a first year English learner so they did not have the vocabulary needed to do the written explanation correctly. I could tell that they understood the overall concept. That’s an English problem, not a math concept problem. They have Thorough understanding of solving equations, so lowering their score because they have only spoken English for 6 months is not appropriate.
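If I had to write my judgment call down as a rule, it would look roughly like the sketch below. The rigor labels and the cap-at-Adequate rule are my own simplification of what’s really a holistic read of the student work, not a formal policy we ever wrote down:

```python
def holistic_level(question_scores, rigor_labels):
    """Start from the rounded average, then cap the level at Adequate (3)
    if the student missed any of the basic, less-rigorous questions --
    a rough encoding of the judgment call described above."""
    avg = sum(question_scores) / len(question_scores)
    level = round(avg)
    missed_basic = any(
        s < 3 and r == "basic"
        for s, r in zip(question_scores, rigor_labels)
    )
    if missed_basic:
        level = min(level, 3)
    return level

# Hypothetical rigor labels: "basic" questions test the core skill
# directly; "extension" questions (like the Open Middle one) demand
# deeper reasoning. Example data only.
rigor = ["basic"] * 7 + ["extension"] * 2
```

A student with a high average who missed two basic solves gets capped at a 3, while the same average with clean basics rounds up to a 4. The calculation-mistake and English-learner exceptions don’t fit in a formula at all, which is the point: the spreadsheet informs the decision, it doesn’t make it.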

This is why I enjoy Standards Based Grading, but also why it can take so much time to do. When all you do is give points for correct answers and turn the points into a total score of x/100, you lose the big picture. Even though Student #6 got a high average score, they have a few misconceptions in their equation solving that I still needed them to work on. If I give them a Thorough on the Target, they are less likely to work on the misconception. This way, with some coaching and a bit of intervention they are able to re-assess later and earn a 4 on the Target, should they have the desire to.

In Part 2 I will examine the types of assessments we gave in class, and how changing to shorter, more focused assessments has benefited both me and my students.


Author: Eric Z.

A middle school math teacher on the job for almost two decades.
