
It seems like every other day I am asked how I can run a gradeless classroom and still reconcile myself to giving grades at report card time. It’s a complex question, and there’s an equally complex answer.
Let me first make a comment…gradeless is amazing! It’s the single most powerful thing I’ve done since I began my career in 1998. And my assessment journey, from the day I stopped giving zeroes to where I am now as a gradeless educator, has been a series of peaks and valleys.
My decision to go feedback only, standards-based tracking, gradeless began with the question in the title. When a parent wants to know where their child is at, or a student wants to know their grade, I often wonder what assumptions or generalizations they are making with these inquiries. Why is a B the goal? Why is 73% better than 72%? Why doesn’t it matter that a student reached a personal milestone over the term, showing improvement in skills? Why are stakeholders bent on categorizing kids into “good enough” or “you need to be here”?

According to the back of a B.C. report card, a B means “(73-85%) Very Good Performance in relation to learning outcomes.” The language is dated (we don’t use learning outcomes anymore…c’mon people!) and vague (what the heck does “very good” actually mean?). It also assumes, based on the percentage range listed, that a student with a B has displayed “very good performance” overall. I also wonder how a shift of just 1% (85% to 86%) turns an overall label from “very good” into “excellent or outstanding” (their words, not mine). Ask a kid if they would rather be very good or outstanding, and they’ll hustle their butts to their teacher and beg them to increase their score by one percent in order to hoist the title like a crown over their head.
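To make that cliff edge concrete, here is a minimal sketch of the label mapping. Only the B band (73-85%) and the 86% threshold come from the report card quoted above; everything else is a placeholder.

```python
# A minimal sketch of the percentage-to-label mapping discussed above.
# Only the B band (73-85%) and the 86% threshold come from the report
# card quoted in the post; lower bands are omitted as placeholders.

def label(percent: int) -> str:
    """Map an overall percentage to its report-card label."""
    if percent >= 86:
        return "A: Excellent or Outstanding Performance"
    if percent >= 73:
        return "B: Very Good Performance"
    return "(lower bands omitted)"

print(label(85))  # B: Very Good Performance
print(label(86))  # A: Excellent or Outstanding Performance -- one point apart
```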

This, my friends, is the chasing of grades. This is extrinsic motivation.
Teachers rely heavily on their grade books to determine that very good B. Many teachers still use grade book categories to distribute student results; the rationale is to balance the grade book and be fair. I remember how I used to spend hours determining percentage allotments for each category, often tweaking them as the school year wore on because I didn’t like the look of the resulting overall percentage (Shhh…don’t tell on me). That should have signaled a problem with category weighting right there, but that was when I was oblivious to my grading felonies.
Let’s examine the scores of four different students, all with the same overall mark in a course. Before shifting my practice to standards-based tracking, I used weighting very similar to this, and even today I see similar weighting across all subjects, with minor variations in wording depending on the subject in question.
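The original table isn’t reproduced here, so the sketch below invents weights and scores to show the mechanics: four hypothetical students with four very different profiles, all landing on the same overall mark.

```python
# Hypothetical category-weighted grade book. The weights and scores are
# invented for illustration; four different profiles, one identical mark.

WEIGHTS = {"tests": 0.30, "assignments": 0.20, "project": 0.20,
           "homework": 0.10, "final_exam": 0.20}

def overall_percent(scores: dict) -> float:
    """Weighted average of category percentages."""
    return sum(WEIGHTS[cat] * pct for cat, pct in scores.items())

students = {
    1: {"tests": 78, "assignments": 78, "project": 78, "homework": 78,  "final_exam": 78},
    2: {"tests": 90, "assignments": 55, "project": 90, "homework": 100, "final_exam": 60},
    3: {"tests": 60, "assignments": 95, "project": 95, "homework": 100, "final_exam": 60},
    4: {"tests": 88, "assignments": 60, "project": 62, "homework": 100, "final_exam": 86},
}

for n, scores in students.items():
    print(f"Student {n}: {overall_percent(scores):.1f}%")  # every one prints 78.0% -- a "B"
```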

What does the data tell us? Student 1 has shown consistent scores in all categories, while Student 2 excels at tests and project work but flounders somewhat in assignments. Student 3 struggles with tests, but their in-class work appears to be at an exceptional level. Student 4 turned in all their homework but seems to have struggled with assignments and the unit project, making up for it on the test and the final exam.
With grade books like this, students use math to decide what is important and what is not, and how much effort they need to put in to attain, maintain, or risk. Student 3, for example, didn’t need to do well on the final exam to maintain their “B.” Perhaps there was a deeply personal, underlying reason (like a burdensome Pre-calculus final) that demanded more of their studying energy, or did they just figure out that they didn’t need to work hard for it? Are we justified in suggesting that Student 1 is a more well-rounded student because their scores in each category are congruent, while Student 3 isn’t because they aren’t a good test taker? Is Student 4’s assignment score a result of missing assignments, incomplete work, or mishandling of the learning? Hmmm.
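Here is that calculation from the student’s side, using the same hypothetical weights as the sketch above (final exam worth 20%, the lowest B at 73%).

```python
# The "do I even need to study?" math a student can run on a weighted
# grade book. Same hypothetical weights as the sketch above.

def min_final_needed(pre_final_avg: float, target: float = 73.0,
                     final_weight: float = 0.20) -> float:
    """Lowest final-exam score that still reaches the target overall mark."""
    return (target - (1 - final_weight) * pre_final_avg) / final_weight

# Student 3's weighted average going into the final was 82.5%, so the
# lowest "B" (73%) survives even a dismal exam:
print(min_final_needed(82.5))  # 35.0 -- a 35% final still preserves the B
```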
Does this data really tell us what these students know or can do? Did students get the option or opportunity to show their learning in multiple ways, highlighting their strengths? What skills are being highlighted in the tests? Is it rote memorization of content or are there opportunities for evaluating skills like synthesis and analysis? Are tests the only way to evaluate skills like synthesis and analysis? Does this data signal an anxiety or self-esteem issue? Are students actually doing the homework or copying off a friend ten minutes before the bell rings, so they don’t drop a letter grade? Is it ethical to be giving away free marks for work that may or may not be the student’s? Isn’t this formative or practice? Lastly, when examining these categories, would you still suggest they all displayed very good performance?

This data doesn’t tell parents a lot either. That’s why they email us and stand in line at parent-teacher interviews; they want to know what their child can do to improve in certain areas. These scores don’t tell them that, so they have to ask.
Scores on individual assessments also do very little to convey particulars to stakeholders. A 20/24 on a piece of writing or a science lab, for example, could mean a variety of things even if the teacher took the time to generate a sophisticated six-point, four-criteria rubric. Does 20/24 mean that there are minor deficiencies in all four areas, a moderate deficiency across two areas, or a major deficiency across one? (A quick count after this paragraph shows just how many different profiles hide inside that single score.) When 20/24 is presented to a student, they see 83%, a “B.” They likely won’t examine the rubric or any feedback, because students generally just don’t when a score reads as sufficient. A “B” is considered “very good,” so why would they bother?

If the student does a similar, subsequent assignment, their score is not likely to change either. Often I hear teachers suggest, in a positive tone, that a student’s work is “consistent,” and because they are consistently at a B, which is, if you recall, “very good,” then why the need for change? After all, very good is very good, right? In this case, we honour the student who is effectively stuck and has shown no progress since the beginning. But stuck at “very good” is still stuck. Where is the means to entice that student to improve in a variety of skill areas? I suppose I shouldn’t assume teachers haven’t tried. Unfortunately, though, the use of grades and numbers creates roadblocks. Options to accelerate progress fall by the wayside. As Dylan Wiliam says, the kids with good scores don’t feel that they need to bother improving, and the kids with poor scores feel so deflated that they don’t want to.
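About that 20/24: here is the quick count, assuming each of the four criteria is scored out of six (the per-criterion scoring is my assumption; the post only specifies a six-point, four-criteria rubric).

```python
# Count how many distinct score profiles a four-criteria, six-point rubric
# can hide inside a single 20/24. Assumes each criterion is scored 1-6.
from itertools import product

profiles = [combo for combo in product(range(1, 7), repeat=4)
            if sum(combo) == 20]

print(len(profiles))             # 35 distinct profiles all report as 20/24
print((5, 5, 5, 5) in profiles)  # True: minor deficiency in every area
print((6, 6, 6, 2) in profiles)  # True: major deficiency in just one area
```

Thirty-five different stories, one identical score, and the number alone can’t tell a student which story is theirs.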
Gradeless is the way to go. Chuck out all numbers and scores on everything. Use feedback only. Tell students what they have done correctly and what their next steps are. Every student gets feedback that moves their learning forward; no student has the option to shut down or to feel they have done just enough. In this method, students have goals, and comparison/competitive behaviours go poof! because the feedback is authentic and specific to the goals, which are now the power standards*. Give students time to use that feedback. Don’t eliminate this step! View the curriculum through a new lens of “what can the student accomplish” as opposed to “what can I skim over.” This prevents students from being unsuccessful in a course because a lot of crappily done work was touched on over a long period of time.

And a grade book, well, that is now a tracker of progress on power standards → standards-based grading, baby!
Standards-based grading is the opportunity to assess students’ power standards (a power standard is the combination of a curricular competency, or skill, plus embedded content) along a continuum of learning. It gives teachers the opportunity to look at diverse ways for students to show their progress along the continuum instead of tying students to showing their learning in just one way, like a test. It also encourages teachers to plan various opportunities for students to show learning, lapping power standards so that more than one piece of data can be collected. There needs to be evidence to prove proficiency, detective teachers!
Standards-based grading narrows the focus of assessment for and of learning to the power standard. Teachers scaffold instruction towards the power standard using a taxonomy of learning, being careful not to count extraneous evidence from the scaffolded steps as evidence of the standard itself (assessments of the scaffolded instruction leading up the taxonomy are still important). A grade book, then, could and should look like data that justifies an evaluation based on a student’s strengths instead of data added together and divided by the number of opportunities. Teachers can chuck out harvested data that doesn’t highlight strengths and keep data that does. The narrowness of a proficiency scale makes this type of evaluation more accurate (a few gradients of cognitive complexity as opposed to 101!), and departments can determine which power standards they should be assessing and what proficiency looks like. The result is an evaluation that shows whether or not a student knows the content while performing the skill.
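A tracker of this kind can be as simple as evidence logged per power standard along the proficiency scale. This sketch uses BC’s four proficiency labels, with invented standards and entries, and leans on the most recent evidence rather than an average.

```python
# A sketch of a standards-based tracker: evidence is logged per power
# standard along BC's proficiency scale, and the evaluation leans on the
# most recent evidence instead of averaging everything ever collected.
# The standards and entries below are invented for illustration.

SCALE = ["Emerging", "Developing", "Proficient", "Extending"]

tracker = {
    "I can analyze how context shapes a text": ["Developing", "Proficient", "Proficient"],
    "I can write with purposeful structure":   ["Emerging", "Developing", "Proficient"],
}

def evaluate(observations: list) -> str:
    """Most recent evidence wins; earlier data shows the path, not the mark."""
    return observations[-1]

for standard, obs in tracker.items():
    print(f"{standard}: {evaluate(obs)}  (journey: {' -> '.join(obs)})")
```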

Proficiency can be determined by the teacher in the form of an exemplar, and students can stretch their thinking by unpacking the exemplar and generating criteria. Students then know what the expectation is (what is needed to reach the target). Proficiency is the grade-level expectation, which cannot be shifted for a student unless they are on a modified program. Adaptations, of course, can be made based on what is suggested in a student’s Individual Education Plan if they have one. In general, students steer themselves towards proficiency when a variety of learning opportunities are presented to them, providing challenge and harnessing their personal expertise.

Explaining standards-based grading to stakeholders, such as parents, can be difficult, but consider how beautiful it is if you let students help take the wheel! If teachers are clearer about what is being assessed and bring students into the assessment process, the student can be the one to explain to other stakeholders what they learned, what they are working on, what their next steps are, and why this is important.
At report card time, then, it becomes a pretty simple conversation with yourself, really. This is pretty general, I confess, but it truly is not difficult to simply frame a power standard as an “I can” statement and decide if the student “can” do it. Consider the evidence they have generated and you have collected to determine proficiency.

What happens, then, after one has gathered all the data using the proficiency scale? How does one determine a final letter grade and percentage, since the Ministry of Education requires them? These are tough questions that I am still grappling with. I’ve been reading Thomas Guskey’s work (Get Set, Go!: Creating Successful Grading and Reporting Systems), and one of the things he mentions is that what grades mean and how they should be determined ought to be a collaborative effort by a staff. So just as departments should determine what proficiency of a power standard looks like, an overall grade should be determined in the same way. In the meantime, I have played around with some ways one can determine a percentage. This version has been adapted from work I did on this topic last year as well as the work my friend Nina Pak Lui has done.
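A sketch of what such a translation could look like follows; the bands here are invented for illustration (they are not Ministry policy or the adapted version mentioned above), and the output is only a suggested range, never a final mark.

```python
# A sketch of one way proficiency evaluations could be pooled into a
# *suggested* percentage range. The bands are invented for illustration,
# and the output is a starting point for teacher judgment, not a grade.

SUGGESTED_BANDS = {
    "Extending":  (86, 100),
    "Proficient": (73, 85),
    "Developing": (60, 72),
    "Emerging":   (50, 59),
}

def suggested_range(evaluations: dict) -> tuple:
    """Average the band edges across all evaluated power standards."""
    lows, highs = zip(*(SUGGESTED_BANDS[level] for level in evaluations.values()))
    return (round(sum(lows) / len(lows)), round(sum(highs) / len(highs)))

report = {"analyze context": "Proficient",
          "purposeful structure": "Extending",
          "revise with feedback": "Proficient"}
print(suggested_range(report))  # (77, 90) -- the teacher settles the final mark
```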

I don’t recommend that a number be used to represent a proficiency level and let a computer decide the final mark. That disempowers teachers and leaves a grade determination up to a hunk of electronics. Instead, a final grade should be a result of the compassionate and contemplative powers of a professional and can even include students as part of that process. Grades are subjective. How many of us have tweaked a final outcome because we don’t like what the computer recommended (further proving its unreliability) or we felt a student deserved more or less? We’ve all done it.
So, can I be gradeless and reconcile myself to the fact that I still have to give grades at report card time? You bet your boots I can. The way we measure and what we measure directly influences student progress. That’s why I’m gradeless.
#mygrowthmindset
*UPDATED* The term power standard was adopted from Thomas Guskey’s work, https://tguskey.com/standards-based-learning-educators-make-complex/. In BC, we have curricular competencies and content. I like to use backwards design, beginning with a curricular competency (skill) and deciding what content meshes well with it. The result is a power standard. Laps of the power standard can be made by using the same curricular competency with different content.
For more on how I have discussed standards-based grading with parents, see “But what’s his grade?”
Great read. Just what I needed. I love reading your posts as they inspire me and confirm for me that I am making the right decision with regards to my assessment practices. I have decided to take the plunge to gradeless next year but am still struggling with what exactly this looks like. Nobody at the secondary level in my district (that I know of, anyways) is doing this, so I’m feeling a little alone in my challenge. I teach Sciences (Chemistry and Biology), which have traditionally been very content heavy. This will be a big shift for me, but I know that this is what’s best for my students. I am hoping to find some fellow Science teachers that I could chat with to help me in my journey.