
Artificial Intelligence is an Assistive Technology


As of fall 2024, most schools and universities have gotten just far enough in their understanding of artificial intelligence (AI) to officially forbid its use. The problem, most professors say, is plagiarism -- the practice of presenting another person's work as your own. This is, on its face, nonsense. Courts have already determined that the products of AI are not copyrightable because there is no human author, and legal experts predict that courts will also find AI is not a "person" and cannot be held liable for its actions. (There's more disagreement about who should be liable for adverse products of AI, though -- the user? The original programmer? The company that makes money from the tool?) The whole point of AI is that it can process reams of existing data, identify patterns, and use those patterns to produce something new. The student using AI to write a term paper is therefore not plagiarizing in the usual sense of the word; instead, they are employing a novel tool to create something that didn't previously exist. The AI companies' legal vulnerability does not rest on plagiarism. It rests instead on intellectual property -- whether they had the right to mine the source data that was used to train their AI model, so that it could do this novel creative work.

One senses that the professor's objection here isn't really about the legal meaning of plagiarism, and it's almost certainly not about whether original content creators are being paid for their work. (Professors themselves are inveterate "borrowers" in their teaching.) Instead, faculty are expressing a concern about which tool is being used to write the term paper: the professor wants students to use the old, sequentially evolved tool that sits inside their skulls -- their brains.

Writing is one of those activities that can clearly be attributed to the Narrative Mind -- the key to decoding words' meanings resides in Wernicke's area in the temporal lobe (specifically, the left temporal lobe for most right-handed people), and the ability to physically produce those words on paper depends in part on nearby Broca's area. (Broca's area is most essential to spoken language, but people with damage in this area also tend to have problems with spelling, grammar, and the shape of letters.) Writing a good essay also requires planning, to lay out ideas in a logical sequence and to make a convincing point. Planning and strategizing are functions of the prefrontal cortex (PFC), the brain's "executive system." The professor probably does care somewhat about Broca's area being in good working order (as evidenced by the perennial student complaint, "Why are you marking me down for grammar? This isn't an English class!"). However, the professor is probably most interested in careful thought and clear expression of ideas. If that's the goal, the PFC is the main tool that students have historically employed.

The problem with AI, from a teaching perspective, is that it is very good at replicating the productions of the PFC. It can see patterns, it can sequence ideas, and it can follow a logical line of reasoning. Of course, it does these things in a completely different way than humans do -- by analyzing massive datasets for unseen statistical patterns, and thereby cranking out a "typical human response." In fact, AI detection software may falsely flag the work of students whose writing is more formulaic and uses a smaller or more specialized vocabulary, such as non-native English speakers and students with autism spectrum disorders. If actual humans doing their very best work can produce outputs that are mistaken for AI writing, is AI writing itself really of poor quality?
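To make that "typical human response" machinery concrete, here is a minimal sketch in Python -- a toy bigram model over a made-up four-sentence corpus, which is my own illustration rather than anything from an actual AI system. Real models use neural networks trained on vastly more data, but the core statistical move is the same: tally which word tends to follow which, then emit the most probable continuation.

    from collections import Counter, defaultdict

    # A toy corpus standing in for the "reams of existing data" a model is trained on.
    corpus = (
        "the student writes the essay . "
        "the student revises the essay . "
        "the professor reads the essay . "
        "the professor grades the essay ."
    ).split()

    # Count how often each word follows each other word (a simple bigram model).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the statistically most typical word to follow `word`."""
        return following[word].most_common(1)[0][0]

    # Generate a "typical human response," one most-likely word at a time.
    word = "the"
    output = [word]
    for _ in range(5):
        word = predict_next(word)
        output.append(word)

    print(" ".join(output))  # prints: the essay . the essay .

Notice what the toy model produces: "the essay . the essay ." -- perfectly typical and perfectly empty. Scaled up by billions of parameters, the same averaging yields fluent prose, but the "wisdom of the crowd" problem I discuss below is already visible here.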

Of course, the AI system has no idea what it is saying, and sometimes professors want their students to consider the implications of their argument. AI is categorically unable to do this -- even if you use clever prompt engineering such as "also give me some potential unintended consequences of this policy," the AI is still just doing its language-prediction trick. It can never really understand those implications. The process of thinking and understanding is what the professor finds lacking in an AI-written essay -- and the professor is right to find it lacking, because developing those thinking skills is the traditional purpose of a liberal-arts education.

Despite my general agreement with the professor in this scenario, I don't think that a blanket prohibition against AI is the correct way to achieve her goals. There may be perfectly valid uses for AI that still allow students to learn what they are supposed to be learning. Here's an example: The student who writes their essay at 1 AM the night before it's due, hopped up on caffeine and Skittles, is probably not doing their best analytic work. Should they have been writing at the last minute? Of course not, and learning to manage one's time and energy is also a valuable life skill. But let's concede the point that students aren't always at their very best, because the rest of us aren't either. And let's also concede that time is unforgiving, so that sometimes we need to produce thoughtful work on a deadline, even when we aren't feeling thoughtful. In a recent blog post I reviewed the effects of fatigue on the brain. Narrative-Mind thinking is particularly impaired when we are tired; the Narrative Mind was the last mental system to evolve, and it's the first to go offline. Other conditions that make the Narrative Mind break down include stress, distraction, and competing priorities; as I have argued elsewhere, one of the key characteristics of the Narrative Mind is its inability to effectively multitask. Or as philosopher Eric Schwitzgebel puts it, "consciousness is a limited resource."

When the Narrative Mind needs a nap, the best results tend to come from people who "trust their gut" -- in other words, people who are able to shift the work to their Intuitive Mind, which has massive parallel processing capabilities and requires little effort. That of course carries other risks: Intuitive-Mind processes are riddled with assumptions and stereotypes, and the typical mistakes of Intuitive-Mind thinking have rightly been highlighted by Daniel Kahneman and others. But focusing on Intuitive-Mind errors misses the point that the Intuitive Mind works fast, it works unconsciously, and it is right perhaps 90% of the time. When the Narrative Mind is out of commission, reliance on the Intuitive Mind is usually the best strategy available to us. For the student who is Intuitively convincing in their written expression, it's possible to skate by with limited effort -- and the professor still doesn't get the demonstration of the Narrative-Mind thinking that she was hoping to develop.

AI provides a new way to cope with limitations of the Narrative Mind. It generates text that sounds like a Narrative-Mind output, that follows the logic of human narratives, and that has a structure acceptable to the Narrative Mind of the professor when she reads it. My argument is that this isn't cheating, as long as it is disclosed. A student's use of generative AI to write a paper recognizes and values the goal of logical reasoning and convincing written expression. It just achieves it in an assisted way. I predict that we will in fact start to see AI included in some types of educational accommodation plans, such as those for people with attention-deficit disorder. It is an assistive device, just like an audio recorder for people who can't remember what was said (or for that matter, like a piece of paper for taking notes). It externalizes what were previously internal capabilities of the human brain.

Does the use of AI undermine the professor's ultimate goal, though -- that students have their own, internal capability to create a convincing argument? Perhaps: Studies have shown that the products of AI are at about the level of the average college entrance essay, which makes AI writing hard to detect in a teaching context and also in a scientific one. And that's not surprising, because the whole method of AI is to statistically predict the average response, the next word in a sentence, the most likely response to a question. AI literally provides "the wisdom of the crowd." But the wisdom of the crowd is often not very wise -- it traffics in banalities, and it has no way to bring in viewpoints that have not been part of the mainstream. It is more likely to sound like the writing of people who are male and privileged. Perhaps most importantly, it doesn't include the personal touches, the rhetorical flourishes, the flashes of brilliance that are part of the best human writing.

In order to gain practice with thinking, students need to learn how to write an essay after an AI has taken a first pass at it. Some specific skills might involve asking whether all of the AI's points do in fact support the central thesis of the paper, asking what viewpoints have been ignored or excluded, and asking whether the AI's arguments fit with one's own experience. The essay can then be improved, with irrelevant points removed and more relevant examples inserted. Some of those activities are better done by the Intuitive Mind anyway -- we're asking whether the essay passes a basic "smell test" for correctness. We might not be able to put its deficiencies into words, or say exactly why our revised version is better, but that's OK -- it will probably be improved anyway, by virtue of its humanity. And if a student has time later to reflect on the essay, when the Narrative Mind is less tired, that reflection might in fact help them develop the skill of analyzing structure and logic in writing, a skill that might be transferable to the next time a free-text writing assignment is attempted.

I contend that AI is a tool like any other, and that we should allow our students to use it. Honesty requires that they disclose when and how they have done so -- the ethical standard many professional and scientific groups seem to be gravitating toward -- but they shouldn't be marked down for using AI or given extra points for abstaining. They should instead be graded on the quality of their work. A truly effective response will likely require engagement from both the student's Narrative Mind and their Intuitive Mind. In one sense this is no different from spellcheck software: at one time, a student who couldn't spell would never do as well as one who could, and the only way to get there was through rote memorization of words. But now spellcheck features are so ubiquitous that students don't really have to worry about spelling anymore. AI is much the same, providing an artificial Narrative Mind to work through the basics of logic and rhetoric.

So why not let AI do what it is good at -- providing logical structure and a jumping-off point based on common sense -- and ask our students to take it from there? The quality of essays might become more consistent as a result, and perhaps could even exceed the average level produced by AI. Put another way, why expend energy training humans up to the same mediocre level that AI can manage practically for free? Instead, let them learn how to use AI effectively as a way of compensating for built-in deficits of the human operating system, and have them practice the things that humans shine at -- i.e., their humanity.
