Philosopher Larry Shapiro (Wisconsin) writes in the Washington Post:
Here’s what I plan to do about chatbots in my classes: pretty much nothing. Let me say first that as much as I value the substance of what I teach, realistically my students will not spend more than a semester thinking about it. It’s unlikely that Goldman Sachs or Leakey’s Plumbing or wherever my students end up will expect their employees to have a solid background in philosophy of mind. Far more likely is that the employees will be required to write a letter or an analysis or a white paper, and to do this they will need to know how to write effectively in the first place. This is the skill that I most hope to cultivate in my students, and I spend a lot of time reading their essays and providing them with comments that really do lead to improvements on subsequent assignments. In-class exams — the ChatGPT-induced alternative to writing assignments — are worthless when it comes to learning how to write, because no professor expects to see polished prose in such time-limited contexts....
But what about the cheaters, the students who let a chatbot do their writing for them? I say, who cares? In my normal class of about 28 students, I encounter one every few semesters whom I suspect of plagiarism. Let’s now say that the temptation to use chatbots for nefarious ends increases the number of cheaters to an (unrealistic) 20 percent. It makes no sense to me that I should deprive 22 students who can richly benefit from having to write papers only to prevent the other six from cheating (some of whom might have cheated even without the help of a chatbot).
Professor Shapiro makes an important point. It is, of course, galling to grade “fake papers” produced by AI, but most of our students still need our feedback on their writing. I imagine many readers would benefit from hearing how other faculty are thinking about this.