Clio Bluestocking is challenging the Outcomes Assessment Borg, Historiann’s got her back, and Academic Cog is asking whether applying quantitative data to qualitative issues is in itself a failure of the humanities. I wrote this a while back, so it is not exactly a direct response.
It’s crossed my mind a couple of times that we could land at a compromise between grades and Hampshire College’s “write a letter for every student.” I mean, my department could create, say, five characteristics that we value—1) mastered the content, 2) generated original and creative ideas, 3) showed real talent in writing, 4) made discussion better, 5) worked very hard—and professors could give students a rating in each of these, probably on a 1-4 scale plus Not Applicable—or, even simpler, just “strong/adequate/weak.”
That sort of additional information could enhance a transcript and still be feasible to aggregate.
I was first thinking that it would be standardized within each of the three big Humanities/Social Science/Science divisions of a college, but actually that’s silly. Let the college as a whole write 7-12 checkboxes that define what they think their school is doing. When professors go to enter their grades, they pick the 1-5 checkboxes that best speak to the aims of their course as taught; those then appear as rating fields for each student. The transcript shows the average for each checkbox, along with how many courses fed into it. This should minimize the battles at the mid-level over which checkboxes are used, by distributing all the power upwards to the head honchos and downwards to individual professors.
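If it helps to see the bookkeeping, here is a minimal sketch of that aggregation. The checkbox names, the three-point strong/adequate/weak scale, and the function are all hypothetical, just enough to show that a transcript line like “worked hard: 2.7 across 3 courses” falls out of very simple arithmetic.

```python
# Rough sketch of the checkbox aggregation; every name here is invented
# for illustration, not a real registrar system.
from collections import defaultdict

# College-wide checkboxes (the 7-12 the institution defines).
CHECKBOXES = ["mastered content", "original ideas", "strong writing",
              "improved discussion", "worked hard"]

# strong/adequate/weak mapped onto a simple 3/2/1 scale for averaging.
SCALE = {"strong": 3, "adequate": 2, "weak": 1}

def transcript_summary(course_ratings):
    """course_ratings: one dict per course, mapping only the checkboxes
    that professor chose for that course to a rating word.
    Returns {checkbox: (average, number_of_courses)} for one student."""
    totals = defaultdict(list)
    for ratings in course_ratings:
        for box, word in ratings.items():
            if word in SCALE:              # skip Not Applicable and the like
                totals[box].append(SCALE[word])
    return {box: (sum(vals) / len(vals), len(vals))
            for box, vals in totals.items()}

# One student, three courses, each course reporting only its chosen boxes.
student = [
    {"worked hard": "strong", "strong writing": "adequate"},
    {"worked hard": "strong", "original ideas": "weak"},
    {"mastered content": "strong", "worked hard": "adequate"},
]
for box, (avg, n) in transcript_summary(student).items():
    print(f"{box}: {avg:.1f} across {n} courses")
```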
If the system were accurate, potential employers would be able to see at a glance who was coasting through on a facile intelligence and who did well because they worked very hard. Eventually, online catalogs might even track which checkboxes apply to which classes and allow students to match courses to their strengths when registering.
I wouldn’t like to talk to the registrar forced to redesign the computer system to track this information and print it on a transcript, though.
Of course, this is basically what a lot of recommendation forms do—ask you to rate students in various categories. But the categories often seem stupid or non-applicable, and there tend to be 10 or 12 of them. That’s too many, and professors usually fill them out retroactively, from memory or from the grade. Collecting the same information immediately and aggregating it might even eliminate the need for some recommendation forms, say, for study abroad programs or internal scholarships.
Incidentally, this is how I came up with this idea: if I were faced with Hampshire’s requirement to write a letter for each and every student, that’s pretty much how I would do it, or at least get started—set up some AutoText sentences that reflect performance in the categories I think I can speak to and go down a toolbar checking off the list. (Clearly, that’s my fascist streak again.)
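For what it’s worth, the AutoText trick amounts to nothing more than canned sentences keyed to category and rating. A toy version, with sentences and names invented purely for illustration, might look like this:

```python
# Toy version of the AutoText approach: boilerplate sentences keyed to
# (category, rating), stitched into a letter paragraph. All text invented
# for illustration; real boilerplate would be less wooden.
SENTENCES = {
    ("worked hard", "strong"): "{name} consistently put in real effort, revising drafts well before deadlines.",
    ("worked hard", "adequate"): "{name} kept up with the work of the course.",
    ("original ideas", "strong"): "{name} regularly brought original and creative ideas to the material.",
    ("strong writing", "weak"): "{name}'s writing has not yet caught up with {name}'s ideas.",
}

def draft_letter(name, ratings):
    """ratings: {category: rating_word} for the categories I checked off."""
    lines = [SENTENCES[(cat, word)].format(name=name)
             for cat, word in ratings.items() if (cat, word) in SENTENCES]
    return " ".join(lines)

print(draft_letter("Jane", {"worked hard": "strong", "original ideas": "strong"}))
```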
29 August 2009 at 7:49 pm
Here via Historiann, and I have to say that your assessment rubric is awesome. I would love to do this for all students, but I think I can manage it just for my senior seminar students — it would be good to give them such a detailed breakdown and feedback as well, so they understand I see their effort even if that effort doesn’t translate into the “A” marks they so desperately desire.
31 August 2009 at 3:54 pm
Janice, you know, I don’t know if it works at the individual level. And if it does, it should be easy to do—by the end of term with 18 students, you should know who showed up in your office with drafts, and who always had ambitious ideas even if the writing (and thus the grade) didn’t live up to them. And if students didn’t stand out as either strong or weak, just put adequate.
31 August 2009 at 3:52 pm
More Links:
Another Damned Medievalist responds:
http://blogenspiel.blogspot.com/2009/08/on-outcomes-and-assessments-borg.html
More discussion at Historiann’s:
http://www.historiann.com/2009/08/29/historiann-has-a-man-date
This idea was originally triggered by the “no one knows what an A means! A’s mean nothing! Every professor has different criteria for an A!” critique, as linked/discussed here:
http://weblogs.swarthmore.edu/burke/2009/02/24/grades-as-information/
31 August 2009 at 4:41 pm
I love ideas like this. Unfortunately, the current system is so widespread that it would be difficult to get people to change. You would have to convince every school of higher education to make the change, or convince employers worldwide to adopt a new standard for employment that isn’t based on test scores and GPA.
31 August 2009 at 5:00 pm
I think that’s too pessimistic, actually. A whole institution would have to adopt it, sure, but they would probably get some press for it. The transcript comes with a letter explaining the system, and the career center sets graduates up to write a cover letter that both makes them stand out and prepares the employer for it, in situations where it would help students overcome a low GPA. In other situations, it wouldn’t matter. When job applications get to the point where an employer is deciding whom to interview, a transcript showing that 25 of 36 professors marked one applicant as a hard worker might be useful in making that decision.
Let’s say a SLAC adopts this. That SLAC is already pushing the idea to employers that “our graduates are XYZ people!” This is just data to back it up.
My pessimism centers on getting an institution to adopt it, and re-designing the registrar’s system to make it all work smoothly.
31 August 2009 at 5:17 pm
PS. Of course, there are laws about getting the same info from all applicants, but I suspect the legal entity there is the *transcript*, rather than the GPA, and that if a school chooses to enhance its transcript, that’s not a failure on the employer’s part.
E.g., adding an FD grade to denote “Failed for cheating”:
http://www.martlet.ca/article/19330-new-grade-exposes-academic-dishonesty