Snow students under with lots and lots of instruction.

When you have handed out:

  • a full page overall rubric
  • an additional two pages or so of general tips with every essay
  • a full page of typed comments

Students worry whether they missed something in there. Or they think you are really persnickety and tough, and they’re happy to have escaped with the grade they did.

I’m joking, really. I have no idea whether that’s why I get very few requests for higher grades. But I like to believe it. Best idea from others: require a written justification showing either where the handouts misled them or where the handouts failed to mention a criticism they received.


I just wrote on a student essay draft:

It’s not clear whether the structure of the argument goes:

THESIS because A, because B, because C [pyramid: thesis as capstone connecting events]

or

A led to B, which led to C, which led to THESIS [linear: thesis as the end of a chain of events]

I was talking about the problem of essay exams in large classes recently….

I once was a TA for a history professor who offered a Matching section on the exam: match this quotation to this person, about 6-8 pairs. Students really needed to be able to identify the philosophy expressed and associate it with the right person, and it was a class on Russia, so there were some fine details they needed to recognize (e.g., both Lenin and Trotsky might have said A, but only Trotsky would have said B, and neither A nor B were quotations that had been focused on in class, so they really had to think about what the words meant). In short, it achieved that holy grail of grading: a scantron-able question that demanded real knowledge and analysis from students.

A call for help—anything else you’ve done or seen people do that could act the same way?

FLG wants to know, and kinda sorta asked me. Now I want to know what other people think. Join the discussion over there.

I am really slow about changing incompletes to real grades. Because, you know, to grade an essay that is substantially late, I have to go back, reread some of the course materials, check the essay against other student essays to ensure that I am assessing it by the same standards, etc., etc. And once it’s been a month or so, this doesn’t really get harder, so I just keep putting it off. Who wants to do all that? Especially when maybe I am waiting for a second incomplete from the same class, and I sure don’t want to do it twice!

Anyhow, the easy way:

Since I grade by numbers, I can look at how many points the student has accumulated. Then sort out how many they need to earn an A, how many to earn an A-, how many to earn a B+, and so forth. This took me about 5 minutes in Excel.

Okay, so now the question is not, “what grade does this essay deserve?”, but “if the essay is at least a 70, change the incomplete to a B. If the essay is at least an 89, change the incomplete to a B+.” Well, that I can do off the top of my head—no rereading or comparing necessary.
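The "work backwards from accumulated points" trick is simple arithmetic, so here is a minimal sketch of it. The grade cutoffs, the 20% essay weight, and the sample score are all invented for illustration; the post doesn't give the actual scale.

```python
# Hypothetical cutoffs for the final course total, highest first.
GRADE_CUTOFFS = [
    (93, "A"), (90, "A-"), (87, "B+"), (83, "B"), (80, "B-"), (77, "C+"),
]

def essay_thresholds(points_so_far, essay_weight=20, other_weight=80):
    """For each still-reachable grade, find the minimum essay score (0-100).

    Assumes the course total is a weighted average: the outstanding essay
    counts essay_weight percent, everything already graded counts the rest.
    """
    thresholds = {}
    for cutoff, grade in GRADE_CUTOFFS:
        # cutoff = (points_so_far * other_weight + essay * essay_weight) / 100
        needed = (cutoff * 100 - points_so_far * other_weight) / essay_weight
        if needed <= 100:  # otherwise the grade is out of reach
            thresholds[grade] = max(0, needed)
    return thresholds

# A student sitting on an 88 average with one 20%-weight essay outstanding:
print(essay_thresholds(88))
```

Run this once per lingering incomplete and the grading question collapses to "is this essay above 63 or above 83?", which you can answer on a single read.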

I once made a student wait for ages, not realizing all he needed me to do was change the Incomplete to Pass. Which I barely had to read his essay to do. And yet, I still didn’t realize I was making it much harder than it needed to be.

Clio Bluestocking is challenging the Outcomes Assessment Borg, Historiann’s got her back, and Academic Cog is asking whether applying quantitative data to qualitative issues is in itself a failure of the humanities. I wrote this a while back, so it is not exactly a direct response.

It’s crossed my mind a couple of times that we could land at a compromise between grades and Hampshire College’s “write a letter for every student.” My department could create, say, five characteristics that we value:

1) mastered the content
2) generated original and creative ideas
3) showed real talent in writing
4) made discussion better
5) worked very hard

Professors could give students a rating in each of these, probably on a 1-4 scale plus Not Applicable, or, even simpler, just “strong/adequate/weak.”

That sort of additional information could enhance a transcript, yet feasibly be aggregated.

I was first thinking that it would be standardized within each of the three big Humanities/Social Science/Science divisions of a college, but actually that’s silly. Let the college as a whole write 7-12 checkboxes that define what they think their school is doing. When professors go to enter their grades, they pick the 1-5 checkboxes that best speak to the aims of their course as taught; those then appear as rating fields for each student. The transcript shows how many courses fed into the average for each checkbox. This should minimize the battles at the mid-level over which checkboxes are used, by distributing all the power upwards to the head honchos and downwards to individuals.
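The aggregation itself is trivial: per checkbox, average the ratings and count the contributing courses. A minimal sketch, with invented course names, checkbox labels, and ratings on the 1-4 scale:

```python
from collections import defaultdict

# (course, {checkbox: rating}) — each professor picked a few checkboxes.
course_ratings = [
    ("HIST 101", {"mastered content": 4, "worked hard": 3}),
    ("ENGL 210", {"talent in writing": 4, "original ideas": 3}),
    ("HIST 305", {"mastered content": 3, "made discussion better": 4}),
]

def transcript_summary(course_ratings):
    """Map each checkbox to (average rating, number of courses feeding it)."""
    ratings_by_box = defaultdict(list)
    for _course, ratings in course_ratings:
        for box, rating in ratings.items():
            ratings_by_box[box].append(rating)
    return {box: (round(sum(r) / len(r), 2), len(r))
            for box, r in ratings_by_box.items()}

print(transcript_summary(course_ratings))
```

The per-checkbox course count matters: an employer should read a 4.0 averaged over one course very differently from a 4.0 averaged over twelve.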

If the system were accurate, potential employers would be able to see at a glance who was coasting through on a facile intelligence and who did well because they worked very hard. Eventually, online catalogs might track which checkboxes apply to which classes, and allow students to match their strengths when registering for courses.

I wouldn’t like to talk to the registrar forced to redesign the computer system to track this information and print it on a transcript, though.

Of course, this is basically what a lot of recommendation forms do—ask you to rate students in various categories. But the categories often seem stupid or non-applicable, and there tend to be 10 or 12 of them. That’s too many, and professors are usually doing them retroactively based on memory or the grade. Collecting the same information immediately and aggregating it might even eliminate the need for some recommendation forms, say, for study abroad programs or internal scholarships.

Incidentally, this is how I came up with this idea: if I were faced with Hampshire’s requirement to write a letter for each and every student, that’s pretty much how I would do it, or at least get started—set up some AutoText sentences that reflect performance in the categories I think I can speak to and go down a toolbar checking off the list. (Clearly, that’s my fascist streak again.)

My view on taking attendance has always been that I don’t like forcing people to attend class, but I think those students who do attend and demonstrate that they did the reading ought to receive some benefit in the grade for their effort.

This isn’t applicable this semester, but I’ve been going back and forth on trying out something new: a two-option policy. Students choose which of the two weightings they want applied to them:

Option One:
participation: 20%
essay 1: 20%
essay 2: 20%
essay 3: 20%
final exam: 20%

Option Two:
essay 1: 25%
essay 2: 25%
essay 3: 25%
final exam: 25%
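Choosing between the two weightings is just a pair of weighted averages. A quick sketch, with a hypothetical student's scores (the weights are the ones listed above; everything else is invented):

```python
OPTION_ONE = {"participation": 20, "essay1": 20, "essay2": 20,
              "essay3": 20, "final": 20}
OPTION_TWO = {"essay1": 25, "essay2": 25, "essay3": 25, "final": 25}

def weighted_total(scores, weights):
    """Weighted average of 0-100 component scores; weights sum to 100."""
    return sum(scores[name] * pct for name, pct in weights.items()) / 100

# A strong participator whose essays are a bit weaker:
scores = {"participation": 95, "essay1": 82, "essay2": 78,
          "essay3": 85, "final": 80}

print(weighted_total(scores, OPTION_ONE))  # participation counted
print(weighted_total(scores, OPTION_TWO))  # essays and exam only
```

For this student Option One comes out about three points higher, which is exactly the choice the policy hands over: talkers opt in to participation, quiet strong writers opt out.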

Another alternative would be not to grade participation at all, but to use class time such that people who attend are more likely to earn better grades: e.g., discuss writing tips, practice outlining essays, etc., so that there is extra value in class time. Except I try to do this regardless.

Alternatively, I could not grade participation, but make students responsible for being there by adding an explicit expectation that lecture material be incorporated into essays (which I don’t enforce at the moment). That seems to privilege form over function, though: if you can ace my essays with the class reading alone, why should I care?

Any innovative attendance policies out there I should check out?
