Clio Bluestocking is challenging the Outcomes Assessment Borg, Historiann’s got her back, and Academic Cog is asking whether applying quantitative data to qualitative issues is in itself a failure of the humanities. I wrote this a while back, so it is not exactly a direct response.

It’s crossed my mind a couple of times that we could land at a compromise between grades and Hampshire College’s “write a letter for every student.” I mean, my department could create, say, five characteristics that we value: 1) mastered the content, 2) generated original and creative ideas, 3) showed real talent in writing, 4) made discussion better, and 5) worked very hard. Professors could give students a rating in each of these, probably on a 1-4 scale plus Not Applicable, or, even simpler, just “strong/adequate/weak.”

That sort of additional information could enhance a transcript while still being feasible to aggregate.

I was first thinking that it would be standardized within each of the three big Humanities/Social Science/Science divisions of a college, but actually that’s silly. Let the college as a whole write 7-12 checkboxes that define what they think their school is doing. When professors go to enter their grades, they pick the one to five checkboxes that best speak to the aims of their course as taught, and those become the rating fields for each student. The transcript then shows the average for each checkbox and how many courses fed into it. This should minimize the battles at the mid-level over which checkboxes are used, by distributing all the power upwards to the head honchos and downwards to individual professors.
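For what it’s worth, here is a minimal sketch of how that aggregation might work, assuming a 1-4 scale and made-up checkbox names; the real thing would live in the registrar’s system, not in a few lines of Python.

```python
from collections import defaultdict

# Hypothetical college-wide checkboxes; the actual list would be written by the college.
CHECKBOXES = [
    "mastered the content",
    "generated original ideas",
    "strong writing",
    "improved discussion",
    "worked hard",
]

def transcript_summary(course_ratings):
    """Aggregate per-course checkbox scores into transcript-level averages.

    course_ratings: one dict per course, mapping checkbox -> score (1-4).
    A professor only submits scores for the checkboxes chosen for that course.
    Returns {checkbox: (average score, number of courses that used it)}.
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for ratings in course_ratings:
        for box, score in ratings.items():
            totals[box] += score
            counts[box] += 1
    return {box: (round(totals[box] / counts[box], 2), counts[box]) for box in counts}

# Example: three courses, each reporting only the checkboxes its professor picked.
student_courses = [
    {"mastered the content": 4, "worked hard": 3},
    {"strong writing": 4, "improved discussion": 2, "worked hard": 4},
    {"mastered the content": 3, "strong writing": 3},
]
print(transcript_summary(student_courses))
```

Checkboxes a professor didn’t pick simply don’t count toward the average, which is why the transcript can also show how many courses fed into each number.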

If the system were accurate, potential employers would be able to see at a glance who was coasting through on a facile intelligence and who did well because they worked very hard. And eventually, online catalogs might track which checkboxes apply to which classes, and let students match courses to their strengths when registering.

I wouldn’t like to be the one talking to the registrar who gets forced to redesign the computer system to track this information and print it on a transcript, though.

Of course, this is basically what a lot of recommendation forms do: ask you to rate students in various categories. But the categories often seem stupid or non-applicable, and there tend to be 10 or 12 of them. That’s too many, and professors usually fill them out retroactively, working from memory or from the grade. Collecting the same information immediately and aggregating it might even eliminate the need for some recommendation forms, say, for study abroad programs or internal scholarships.

Incidentally, this is how I came up with this idea: if I were faced with Hampshire’s requirement to write a letter for each and every student, that’s pretty much how I would do it, or at least get started—set up some AutoText sentences that reflect performance in the categories I think I can speak to and go down a toolbar checking off the list. (Clearly, that’s my fascist streak again.)
