A Teacher’s Guide To Holiday Gifts

If a student brings you a gift, literally ANY gift, it means something special.

One of my most vibrant memories of elementary school is the day before Winter Break when my teacher gathered the class, seated criss-cross-applesauce on the communal rug, and opened all of the gifts we had brought her for the holidays. My mom was famous for her homemade almond roca candy, and having an older brother meant my teacher had most likely gotten a tin of that renowned roca two years prior and knew what to expect from the Zuercher family. Each gift was opened with care, and to the best of my recollection, my teacher reacted with genuine delight and appreciation for whatever happened to be gifted from each family. I’ve always assumed that she absolutely adored the tin of almond roca we gave her, but I have no way of really knowing that. All I know for sure is that I definitely felt like she did.

As a classroom teacher, I now always try to keep this childhood feeling in mind during the regular gift-giving times of year. If a student brings you a gift, literally ANY gift, it means something special. That student, or the family of that student, made an effort to let you know that you mean something to them. That is a special thing, and it should not be overlooked.

Did they bring you a mug with a cliché teacher quote on it, one you already have three of in your surplus mug cabinet at home? Well, that student noticed that you drink coffee in the morning and wanted you to have a new mug to enjoy it in.

Did you receive a partially used Target gift card with $3.41 on it? This might be literally the only thing the family could afford to give, which means they probably couldn’t afford to give it to you, but they did anyway. What an amazing gesture!

Have you been given a single package of sticky notes that you can get from the front office at pretty much any time during the year? Maybe that student really loved the Quiz-Quiz-Trade activity you did with sticky notes last month to review linear functions and wanted to do it again.

Is a student presenting you with a suspicious looking non-descript baked good in a beat up plastic baggie? Acknowledge the time and effort it took them to make that, and ask them about the special family recipes they love to make during the holidays.

No matter what the gift is, it is special because of the reason it is being given. That student cares about you and is taking the time to let you know. Do your best to show genuine appreciation for any gift a student brings you, whether it is extravagant, simple, used, leaking, or possibly still alive.

Well, maybe not that last one.

A Standards Based Grading Deep Dive – Part 2: How We Assess Our Students

The Back Story

It’s 2019, a Friday afternoon in October, and I’m driving home from school. It’s been a tough day and my brain is absolutely cooked from making 8,000 decisions during my Math 8 classes and giving a cumulative exam in Enhanced Math 1. It seems like giving a test should make for an easy day, since you don’t have to do much, but that’s not the case. Stress levels in students are high. With stress and high expectations come the willingness to compromise morals and the desire to cheat. My attention must be laser-focused to make sure students are working with integrity. It’s…not fun. I know there are other ways to assess students, but the most authentic assessment of each student’s ability is to assess them independently (as far as I have found, anyway).

Then there’s the grading. Before switching to Standards Based Grading, we used a traditional points-based system, assigning point values to each question, then deducting points from a question if work or formatting was incorrect. With about 100 students taking a test that is about 20 questions long, that’s 2,000 test items to examine, most of which have multiple steps of work. I might give 1 or 2 well-crafted multiple choice questions, but 90% of the exam is handwritten work with many steps to inspect. When I stack up all the exams, shove them in my messenger bag, and toss the bag in the car, the ride home feels daunting, knowing I must spend the next 8-10 hours grinding.

Having already gone over the way I am grading student work now using the 4-point rubric, let me just say that it is so much better than itemizing point deductions for each question like I used to. I would drive myself crazy trying to determine if something was minus 1, minus 2, or more. I even got to the point where I was deducting one tenth of a point on certain questions, which in hindsight was absolutely insane. Like, what was I doing???

Target Specific Assessments

One of the best changes we made this past school year was how we assess our students. Before 2020 we would give one large assessment each month, which was always cumulative up to that point. That meant that at the end of February the students would get the “February Test”, which could include any topic they had learned from August until about mid-February. We emphasized more recent material, and the old stuff was relegated to a few questions on the essential Learning Targets. The test was always worth 100 points, and we used a year-long gradebook. By the end of the school year the gradebook had about 1,200 points in it (including the monthly tests, quizzes, and homework).

The rationale was that we wanted students to maintain their skills throughout the year, instead of simply learning something for a short time and then never recalling it again because it would never be assessed again. While I agreed with this premise, the downside was that these tests were very stressful for the students, usually took up an entire block period to administer, and took an extremely long time for me to grade. Each exam would have around 20 questions on it of varying Depths of Knowledge, so grading around 170 of them each month was mentally exhausting.

So instead we switched to more frequent Target-specific assessments, focusing on only 1 to 2 Targets each. The assessments were much shorter, able to be completed in a 51-minute period by most students, and each Target could be covered by a variety of questions at different levels of rigor. We included spicy peppers to indicate to students which questions we considered more challenging, and those were the ones they needed to get correct to demonstrate “Thorough” understanding of the Target. Here is an example of an assessment I gave last year in Math 8:


The Benefits of Target Focused Assessments

In 8th grade we gave 14 different assessments that covered 20 of the Learning Targets for the year. This meant that I graded assessments more frequently, but each one was much quicker to complete. Whenever I assessed Math 8 I was able to grade both class periods in under one hour, usually on the same day I gave the assessment. I could literally never do that before. Many times students would take the assessment on Friday and I could hand it back to them on Monday. Back in the days of grading a cumulative test it might take me a week or more to finish marking everything, so the feedback came later and was less valuable.

One of the best results from the smaller, more Target-focused assessments was that students were not as stressed out or overwhelmed. Since the assessments were shorter and more focused, students were able to finish them in a reasonable time period, and students with IEPs and 504 plans did not need to use their time accommodations as often. Additionally, with the retake policy we adopted, students knew that they always had a second chance to take a different but similar version of the exam, so if they just weren’t feeling it the day of the test, they always had the chance to try again.

Giving these shorter assessments also gave me more flexibility on the day of the test. Since most students would finish with additional time, I was able to give them some more interesting tasks to do once they were finished. I now post Open Middle problems at my thinking stations, Non-Curricular Thinking Tasks, extension problems from previous Targets, or Desmos activities that preview the next Target we are going to learn. Assessment day is now a “show me what you know, then go find something you are interested in” kind of day, rather than a stress-fest of feverishly working until the bell rings.

This isn’t to say that every student was instantly successful the first time, or that my assessment results were amazing across the board. In Part 3 I will look at how students did overall, how they reflected on their own results, and whether the retake system worked for all students. See you next time!

A Standards Based Grading Deep Dive – Part 1: The Grading Rubric

If you ask 100 classroom teachers what their least favorite part of the job is, I am willing to bet that at least 80 of them will say “grading student work”. Well, that might not be accurate. Almost all of them will say “mandated professional development”, with grading being a close second. Having taught middle school math for 2 decades, I can safely estimate that I have assessed at least a million math problems that my students have completed on some kind of assessment. Don’t get me wrong, I get a tiny spark of joy each time a student gets a question correct (Yay, they learned the thing!). It’s just very time consuming, and I know that every time I grade something, there will always be a small number of students who are going to have some seriously negative emotions when I hand it back, whether they did horribly or just got one question wrong. Too many emotions tied up in points, grades, and self-worth.

So two years ago the Math Department at my school switched to Standards Based Grading, with the hopes of giving students better feedback on their learning, an improved sense of hope and efficacy, and a focus on the learning rather than the grade. (I wrote about this back in October if you would like to read that first). We developed a whole new grading system based on a multi-point rubric for each Learning Target, offered multiple chances for students to be reassessed, and removed mandatory homework for points in the gradebook. It was a lot of work, but work worth doing. Or was it?

So instead of just going on feelings, I wanted to reflect on how last year went, and look at the data available to me to see if the changes are working as intended. It’s quite the journey, so I plan on looking at this in multiple posts, otherwise this blog will be gigantic. Let’s dive into Part 1!

Part 1 – The Grading Rubric

Two years ago we started off with a very basic 5-point scoring rubric for each Target to ease the transition from a traditional gradebook to an SBG one. Here’s what that looked like:


This gave a simple 20% breakdown for each letter grade, so an A was 80% – 100% and meant that more often than not a student had “Mastered” the Targets in the class. Numbers-wise this was easy for parents and students to understand. In application, things got really weird when we tried to grade an assessment. Any teacher who has assessed students for a while knows what “Mastered” and “Beginning” look like. It was the middle area where there was a lot of subjectivity. I personally had many instances where I could not tell the difference between “Proficient” and “Approaching”, as did all of my colleagues. About halfway through the year we realized that this needed to change, since we kept having long discussions about what was Mastered versus Proficient, and Proficient versus Approaching. While grade norming is essential in a PLC, you can’t spend all of your planning time doing only that.
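The 20% breakdown itself is simple arithmetic. As a quick sketch of how an average rubric score mapped to a letter grade under that system (the exact handling of band edges like 80% and 60% is my assumption, not school policy):

```python
# Convert an average score on the 5-point rubric to a percentage and letter
# grade using the simple 20% bands: A = 80-100%, B = 60-80%, and so on.
# Treating the lower edge of each band as inclusive is an assumption.

def letter_grade(avg_score, top=5):
    """Return (percent, letter) for an average rubric score out of `top`."""
    percent = round(avg_score / top * 100, 1)
    if percent >= 80:
        return percent, "A"
    elif percent >= 60:
        return percent, "B"
    elif percent >= 40:
        return percent, "C"
    elif percent >= 20:
        return percent, "D"
    return percent, "F"

print(letter_grade(4.2))  # (84.0, 'A')
```

This is exactly why the numbers were easy to explain to families, even though the level descriptors themselves proved hard to apply consistently.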

So last year we transitioned to a 4-point rubric, which is the format most often advocated for when you look into SBG practices. We developed more language to help ourselves and our students know the difference between each level of understanding, and we updated the category language, since “Mastered” felt like a weird and highly subjective descriptor. So here’s what we used last year:

I really liked this rubric more than the previous one. Since there were fewer levels to consider, it was easier to see from the student work where a student was. The only place I ran into trouble was telling the difference between “Thorough (4)” and “Adequate (3)”. Sometimes it was just really hard to tell. More often than not I would assign a student a 3, then meet with them to go over their work and talk about what needed to improve to reach a 4. Since they could retake any assessment, this always felt good. It’s not like they were stuck with that score.

Let’s look at one of the assessments I gave last year, and how I graded it for a few students. Here is the very first assessment I gave in Math 8 for Target 1.1:



One other practice I personally developed to help me determine proficiency levels was to use a spreadsheet I created for each Target assessment. As I examined each question I would grade the response using the same 4-point rubric and enter the score. I had the spreadsheet average out the scores for the entire assessment, then use the number as a general guide as to what level the student was at. Here’s a link to a sample I have for one of my Target assessments for 8th grade.
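The spreadsheet logic is simple enough to sketch in a few lines of Python. The student names and scores below are hypothetical, and rounding to one decimal place is my choice, not part of the method; as described above, the average is only a general guide, not the final proficiency score:

```python
# Average the per-question rubric scores (0-4) a student earned on one
# Target assessment. The result guides, but does not decide, the holistic
# proficiency level assigned for the Target.

def target_average(scores):
    """Return the mean of a student's per-question rubric scores."""
    return round(sum(scores) / len(scores), 1)

# One row per student: hypothetical per-question scores on a single Target.
gradebook = {
    "Student 1": [4, 4, 3, 4, 4, 4],
    "Student 2": [3, 2, 3, 3, 4, 3],
    "Student 3": [2, 1, 2, 3, 2, 2],
}

for student, scores in gradebook.items():
    print(student, target_average(scores))
```

In practice the spreadsheet just saves the per-question marks and computes this mean automatically, leaving the final level as a judgment call.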


One of the tricky things about this method of grading though is that not every question is the same level of rigor, so the average score doesn’t really tell you the proficiency level. For instance, question #8 required the students to create their own equation using an “Open Middle” structure, then prove that what they created met all of the criteria needed. This is way different than question #1, which was a basic two-step equation with only whole numbers. This is where the holistic approach comes into play.

For example, let’s look at Student #6 and Student #7. Both students got an average of 3.7 on the assessment, but one of them scored an Adequate (3) and the other a Thorough (4). Why is that? Since Student #6 got questions #5 and #6 wrong, and those were considered less rigorous (they were basic equation solves), I found them to be at the Adequate level for the entire Target, but not Thorough. Student #7 got two questions wrong as well, but there were some factors to consider. For question #7, they made a simple calculation mistake in the final step of the problem. Not a big deal. I don’t really downgrade students’ proficiency level because of a simple calculation mistake. For question #9 they were able to circle the part of the work that had the error in it, but this student was a first-year English learner, so they did not have the vocabulary needed to do the written explanation correctly. I could tell that they understood the overall concept. That’s an English problem, not a math concept problem. They have Thorough understanding of solving equations, so lowering their score because they have only spoken English for 6 months is not appropriate.

This is why I enjoy Standards Based Grading, but also why it can take so much time to do. When all you do is give points for correct answers and turn the points into a total score of x/100, you lose the big picture. Even though Student #6 got a high average score, they have a few misconceptions in their equation solving that I still needed them to work on. If I give them a Thorough on the Target, they are less likely to work on the misconception. This way, with some coaching and a bit of intervention they are able to re-assess later and earn a 4 on the Target, should they have the desire to.

In Part 2 I will examine the types of assessments we gave in class, and how changing to shorter, more focused assessments has benefitted both me and my students.

Plickers: The Best Formative Assessment Tool You (Probably) Aren’t Using.

One of the biggest traps in teaching any subject to a group of humans is the assumption that silence equals comprehension. You teach a concept, ask if there are any questions, get a room full of silence, and assume everyone gets it. I have fallen into this trap countless times in my career, even though I am fully aware that the trap exists. Obviously, just because nobody asks a question doesn’t really mean they all understand what is going on. One third of the group most likely is thinking about something else, one third is paying attention and understands it, and the rest have some questions, but see that nobody else is saying anything, so they figure they are the only one not getting it, and then all sorts of social pressures start piling up in their heads. It’s… not ideal.

Any effective educator knows that collecting accurate formative assessment data is crucial when teaching any concept. You have to know if the kids are understanding before you move on. There’s no point in plowing through the curriculum if half the class has no clue what’s happening. Sure, you “covered the standards”, but if only a few students mastered the material, did you really cover them?

So how do you check in with all of your students, get accurate data, and do so with as little anxiety as possible?

About 15 years ago my school invested in a student response system in which each student got a handheld device that had 6 buttons on it (yes, no, A, B, C, D). The system cost over a thousand dollars, and we were able to buy 2 class sets. I used them often for about 2 years. I remember lots of batteries, a whole classroom management structure that needed to be implemented, and a system that worked most of the time, but not always. 

As cell phone ownership became pretty much ubiquitous among students, several apps were developed that made the stand-alone handheld devices obsolete. Things like JotForm, VeVox, and Socrative can now be used to get real time data from students using only a smart phone or Chromebook. Kahoot! came along and gamified the whole student response arena, adding points based on speed and accuracy (read this post if you’d like my thoughts on incentivizing speed in mathematics). I’ve tried most of these systems, and must admit that Kahoot! can be a lifesaver on a minimum day right before winter or spring break.

The problem I find with all of these systems is the heavy reliance on technology in the hands of every student. A device for every student brings with it problems, such as having to monitor what site the students are on, whether they have charged it, or if the Wi-Fi is connected or working properly. Relying on 36 students to all have their Chromebooks charged and ready every day is not realistic. And to be completely honest, I want my students having less screen time, not more of it.

So what is the answer? How can you collect quick formative data from every student, do so with little to no anxiety for students, reduce the reliance on each student having a device, and gather that data anonymously?

Plickers.

With Plickers the teacher has all of the tech and the students have a piece of cardstock. 

What Is It?

Plickers is a free platform (with an optional premium account) I can use to create a presentation slide deck with up to 5 questions on it. The questions can be surveys (how are you feeling this morning?), or multiple choice with a correct answer (what is the side length of the square?). I show a slide on my overhead projector, and the students all think about their response. When ready, each student holds up a QR code that has been printed on some cardstock. The direction the student holds the card indicates the answer choice they want to give, either A, B, C, or D. Once they hold up their cards I use the Plickers app on my phone to scan the room with the camera and it instantly records all of the answers. The app will show me in real time on my phone screen who is getting the questions right by displaying either a green or red dot next to their card. If a student is giving an answer that isn’t possible (answer C for a question that only has choices A or B), the dot shows up as gray. Every time I use this it feels like magic. Results are tallied instantly and anonymously, and you can share them, or not, depending on the purpose of the question. That’s it.

I love this system because the students don’t need any kind of device, you get feedback from everyone, you know who has answered and who hasn’t, and the results are instant. It’s the fastest, most reliable whole class formative data I’ve ever found.

How I Use Plickers In My Classroom

I prefer to start a class with a Plickers slide show that asks a general survey question, such as a “Would you rather…?” or “What do you prefer?” It gets class started in a fun way, and can start a fun conversation that engages the students.



The image search feature in the slide deck builder is really easy to use, and finds school appropriate images very quickly. You can edit the photos as well, but I haven’t used that feature very much yet.

Next I will do a few review questions about the current topics we are learning. Depending on the answers, I can pivot my lesson if needed, or just spend a bit more time going over an example from the day before to make sure kids are understanding. As a fun little bonus, the screen gets showered with confetti if every student gets the question right. This rarely happens, but it’s pretty great when it does.


I also like to use a question or two in the middle or end of the lesson to see how students are understanding the new material. It’s so easy to show an example in the middle of the lesson, have students solve it on their whiteboard tables, and then get answers from the whole class. With a well-crafted multiple choice question and a little forward planning you can find the misconceptions students have and deal with them in real time, rather than the next day, or even worse, find out a third of the class doesn’t get it during the actual summative assessment.

Setting up your classes on the Plickers web site is pretty easy. I create a class for each period, and leave the student names as “Student 1, Student 2, Student 3, etc”. Since each card has a small number printed on each corner of the QR code, the students know which number they are each time they use it. This way I don’t have to assign a specific card to each student. I like this approach, since I just want whole class anonymous feedback. If I needed to know how each student answered, and wanted to keep the data, I would put actual student names in the system (first name, last initial), and assign a specific card to each person. I don’t plan on doing this though, since I want Plickers to be quick, enjoyable, and anonymous. The more information I want, the more I have to manage in my classroom. I find it way easier to give each table group a set of 4 random cards rather than have each student get their specific numbered card every time I want to use the system.

Issues I Have Encountered

While I like a lot about Plickers, it’s not perfect. Since the students have physical cards, they tend to get damaged easily. So far I have seen students tear them, roll them into tubes, and use the corners as toothpicks (gross, I know). One class set of cards printed on heavy cardstock might last a whole school year. I could try laminating them, but I’m not sure how well the camera will register the QR codes with the additional glare. I plan on trying it out with one laminated card next year to see how it goes.

The camera system can be pretty sensitive as well. If a student covers up any part of the code with their fingers, it won’t register the answer properly. Also, if the card is tilted at more than a 10 degree angle, it won’t show up. The most annoying thing, though, is when students get their card registered and then bring the card down but still leave it visible. The camera will register the card a second time and change the student’s answer, because the card is now showing a different response. This can be mitigated by training the students to put the card QR-code-side down after they see that their response has been counted on the screen.

As with any tool you use in the classroom you must teach the students how to use it properly. Once I realized those problems were happening, we were able to fix them pretty quickly.

Conclusion

Overall I think this is a great low cost system that teachers of all grade levels can use in their classroom daily. It’s easy to set up, simple to use, and gives great formative data quickly so that you can focus on making sure every student in your class is understanding the lesson. If you haven’t tried this system yet, I recommend checking it out.

Disclaimer: I am not affiliated with Plickers in any way. This is not a paid endorsement. I just really like this tool and I think it can help a lot of teachers help a lot of students.

My 4 Favorite Teaching Podcasts of 2023

After a year of listening, here are my favorite podcasts about teaching.

Anyone who knows me can tell you that I love a good podcast. I listen to them during my morning routine getting ready for work, while I’m exercising, and before I go to sleep. It’s gotten to the point where if I need to walk into another room in my house to get something, I turn on a podcast through my phone speakers during the 5-10 seconds I am traveling to the other room. I’m not sure how much I get out of those few seconds, but it’s a habit of mine now.

Most of the shows I listen to are about politics, financial literacy, science related topics, or actual play RPG shows like Worlds Beyond Number. This past year however I delved into the world of podcasts focused on the teaching profession, mainly because I needed more help and guidance to implement the Building Thinking Classrooms model of instruction into my class. As any podcast fan is probably aware, once you find one good pod, many more tend to find their way into your feed.

After a year of listening, here are my favorite podcasts about teaching.

1. Think Thank Thunk

This podcast’s goal is to help listeners, and the hosts, implement Building Thinking Classrooms the best way possible in their own classrooms. Many of the episodes delve into each chapter of the book, much as I have started to do in my own BTC section of my website, helping break down the most important takeaways and strategies that we could be using. They also interview various experts and classroom teachers to see how it is going for them, and what adjustments they have made along the way. Episode 5 is a highlight, as the hosts interview the author Peter Liljedahl. If you are attempting to teach using this model, I highly recommend listening to every episode.

First Episode: April 18th, 2023

# of Episodes: 28

Average Episode Length: 30-35 minutes

My Favorite Episode: Episode 6 – The Students Have Their Say


2. Making Math Moments That Matter

I’m a math teacher, and I need all the help that I can get. Hosts Kyle Pearce & Jon Orr are passionate educators who truly love teaching math, and want all of their students to succeed and feel the same way that they do about the big world of numbers. Since 2018 they have been interviewing innovators in the math teaching field, as well as experts in brain research and how kids learn math. This was the first teaching podcast that I ever found, and I love when my feed pops up with a new episode because I know for sure that I will learn something new that I can apply with my students almost instantly. If you are a math teacher, you need to listen to this podcast.

First Episode: December 15th, 2018

# of Episodes: 265

Average Episode Length: 1 hour

My Favorite Episode: Episode #260: The Myth of the Math Brain and the Underdiagnosis of Dyscalculia – An interview with Dr. Sandra Elliot


3. The Grading Podcast

Last year the math team at my school switched to using Standards Based Grading over the traditional points/percentage based system we had always used in the past. It was not a very smooth transition, for myself or my students. Over the summer I looked for help in understanding how to better implement the system, and more knowledge about why it was more beneficial, so that I could be better prepared for all of the questions I would get from students and parents alike. This podcast has really helped me understand why what I am doing is better for students, and how to implement it more effectively. I really resonated with the first few episodes, where the hosts Sharona Krinsky and Robert Bosley talked about their own struggles and failures when putting this into place in the high school and college setting. A must-listen for any teacher using Standards Based Grading, or anyone interested in doing so.

First Episode: July 18th, 2023

# of Episodes: 23

Average Episode Length: 1 hour

My Favorite Episode: Getting Started Part 1: The Problems With Traditional Grading


4. Teacher Quit Talk

So, teaching during the pandemic was really hard. Many teachers decided to leave the profession during that time. Not gonna lie, I thought about it as well. Hosts Frazz and Ms. Redacted run a show where former teachers talk about why they quit their jobs, or the reasons that make current teachers want to. While the hosts have TikTok fame, I don’t use that platform, so I only stumbled upon them through an ad on a different podcast. While I never quit teaching (or at least haven’t done so yet), I think my favorite thing about this podcast is the catharsis I feel when listening to other teachers who simply had enough and decided to walk away. If you have taught for more than 5 years, I bet you have had that feeling at least once. Probably the best thing I get out of listening is hearing about how poorly teachers are treated in other parts of the country, which really helps me appreciate how amazing my school and school district actually are. A huge dose of perspective when I’m having a bad day.

First Episode: September 9th, 2022

# of Episodes: 66

Average Episode Length: 45 minutes

My Favorite Episode: Episode 10: Queer Librarian


So that’s my list for 2023. What did I miss? Which teaching podcasts do you find value in that I could start listening to? Please let me know in the comments section below, and let’s find more great resources in 2024.