- A grading robot must never seek to evaluate the depth of a student’s thinking on an essay.
- A grading robot must obey orders given it by a teacher except where such orders would conflict with the First Law.
- A grading robot must protect its purpose as long as such purpose does not conflict with the First or Second Law.
Tired of grading endless, mind-numbing, pointless, and grammatically incorrect papers? Then the automated grading robot is the answer to all of your problems!
Everyone, including reporter Stephanie Simon in her recent article “Robo-readers: the new teachers’ helper in the U.S.,” knows that “American high school students are terrible writers,” and we educators need all the help we can get!
Proponents of the software contend that having students write more will help them write more effectively, and that by using this software, teachers can assign more without having to deal with the grading. Is that what we want? Really?
Actually, I’m on the fence about the idea that writing more equals better writing when it comes to high-schoolers. They will churn out the most meaningless drivel in the shortest amount of time, as long as said drivel gets them a grade.
In the average adolescent mindset (not all, certainly), the primary driving force is “Git-r-dun.” And with the increase in assignments, that same driving force is going to go into even higher gear, thus completely missing the original goals. We feed into this mindset by assigning more and more and more of the same, not unlike...robots.
We don’t need students to write more to become better writers; we need them to think better. Sorry, Robo-Grader: while you may be able to determine that a student has properly plunked an idea down on the paper using a semicolon, you’re not ready to ascertain the depth of that idea or the aesthetic choice behind that semicolon.
Consider the plight of one of my recent SAT prep students, who had the structure of an essay down pat. He clearly understood how to take an arguable position or state an assertion; he also had a reasonable handle on grammar and punctuation. What he did not have, however, was the ability to reason cohesively or creatively to validate his position, and those are crucial components of a higher-end score.
Interestingly, he was an A/B student in English, mostly because he diligently completed his classwork and his homework. However, his templated approach to essays wasn't going to get him into his chosen university or program.
Albeit kicking and screaming, he made significant strides in his reasoning, and only because our prep work compelled him to think through that reasoning in a different way, resulting in a much stronger overall discussion.
Was it perfect? No. He had some revising to do, mostly commas, for which we devised several tactics that he might use while in testing mode. But his thinking was where it needed to be to get the desirable score for college entry. (Score one for the "hew-mon".)
We cannot improve the state of student writing if we give in to the premise that we are merely trying to make students better at writing, concerning ourselves with the superficialities that automated software can handle.
Rather, let’s make them better writers, first.
This mission is too important for me to allow you to jeopardize it.