
Artificial Intelligence is an Assistive Technology


As of fall 2024, most schools and universities have gotten just far enough in their understanding of artificial intelligence (AI) to officially forbid its use. The problem, most professors say, is plagiarism -- the practice of presenting another person's work as your own. This is, on its face, nonsense. Courts have already determined that the products of AI are not copyrightable because there is no human author, and legal experts predict that courts will also find AI is not a "person" and cannot be held liable for its actions. (There's more disagreement about who should be liable for adverse products of AI, though -- the user? the original programmer? the company that makes money from the tool?) The whole point of AI is that it can process reams of existing data, identify patterns, and use those patterns to produce something new. The student using AI to write a term paper is therefore not plagiarizing in the usual sense of the word; instead, they are employing a novel tool to create something that didn't previously exist. The AI companies' legal vulnerability is not about plagiarism; it is about intellectual property -- whether they had the right to mine the source data that was used to train their AI models, so that they could do this novel creative work.

One senses that the professor's objection here isn't really about the legal meaning of plagiarism, and it's almost certainly not about whether original content creators are being paid for their work. (Professors themselves are inveterate "borrowers" in their teaching.) Instead, faculty are expressing a concern about which tool is being used to write the term paper: the professor wants students to use the old, slowly evolved tool that sits inside their skulls -- their brains.

Writing is one of those activities that can clearly be attributed to the Narrative Mind -- the key to decoding words' meanings resides in Wernicke's area in the temporal lobe (specifically, the left temporal lobe for most right-handed people), and the ability to physically produce those words on paper depends in part on Broca's area in the nearby frontal lobe. (Broca's area is most essential to spoken language, but people with damage in this area also tend to have problems with spelling, grammar, and the shape of letters.) Writing a good essay also requires planning, to lay out ideas in a logical sequence and to make a convincing point. Planning and strategizing are functions of the prefrontal cortex (PFC), the brain's "executive system." The professor probably does care somewhat about Broca's area being in good working order (as evidenced by the perennial student complaint, "Why are you marking me down for grammar? This isn't an English class!"). However, the professor is probably most interested in careful thought and clear expression of ideas. If that's the goal, the PFC is the main tool that students have historically employed.

The problem with AI, from a teaching perspective, is that it is very good at replicating the productions of the PFC. It can see patterns, it can sequence ideas, and it can follow a logical line of reasoning. Of course, it does these things in a completely different way than humans do -- by analyzing massive datasets for unseen statistical patterns, and thereby cranking out a "typical human response." In fact, AI detection software may falsely flag the work of students whose writing is more formulaic and uses a smaller or more specialized vocabulary, such as non-native English speakers and students with autism spectrum disorders. If actual humans doing their very best work can produce outputs that are mistaken for AI writing, is AI writing itself really of poor quality?
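
To make the "typical human response" idea concrete, here is a toy sketch of next-word prediction. Real systems use neural networks trained on vast corpora; this little bigram counter, with its made-up three-sentence corpus, only illustrates the underlying move: emit whatever word most often followed the current one in the training text.

```python
# Toy illustration of next-word prediction from corpus statistics.
# (A drastically simplified stand-in for how large language models
# work -- the corpus and all names here are illustrative.)
from collections import Counter, defaultdict

corpus = (
    "the student writes the essay "
    "the professor reads the essay "
    "the student reads the book"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most common successor of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

# Generate a short "typical" continuation, one most-likely word at a time.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # -> "the student writes the student writes"
```

The output is fluent-looking but entirely statistical -- the program "knows" nothing about students or essays, which is the point of the paragraph above.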

Of course, the AI system has no idea what it is saying, and sometimes professors want their students to consider the implications of their argument. AI is categorically unable to do this -- even if you use clever prompt engineering such as "also give me some potential unintended consequences of this policy," the AI is still just doing its language-prediction trick. It can never really understand those implications. The process of thinking and understanding is what the professor finds lacking in an AI-written essay; and in fact the professor is right to do so, because developing those thinking skills is the traditional purpose of a liberal-arts education. 
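
For readers curious what that prompt-engineering move looks like in code, here is a minimal sketch using the OpenAI Python client. The model name and the prompt wording are my own illustrative assumptions, not a recommendation; the point is only that the follow-up request for unintended consequences is answered by the same language-prediction machinery as everything else.

```python
# Minimal sketch of the prompt-engineering move described above,
# using the OpenAI Python client. Model name and prompt wording
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY env var

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is available
    messages=[
        {
            "role": "user",
            "content": (
                "Argue for a four-day school week. "
                "Also give me some potential unintended "
                "consequences of this policy."
            ),
        }
    ],
)

# The "unintended consequences" arrive via the same statistical
# prediction process as the main argument -- the model has no
# understanding of the implications it lists.
print(response.choices[0].message.content)
```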

Despite my general agreement with the professor in this scenario, I don't think that a blanket prohibition against AI is the correct way to achieve her goals. There may be perfectly valid uses for AI that still allow students to learn what they are supposed to be learning. Here's an example: The student who writes their essay at 1 AM the night before it's due, hopped up on caffeine and Skittles, is probably not doing their best analytic work. Should they have been writing at the last minute? Of course not, and learning to manage one's time and energy is also a valuable life skill. But let's concede the point that students aren't always at their very best, because the rest of us aren't either. And let's also concede that time is unforgiving, so that sometimes we need to produce thoughtful work on a deadline, even when we aren't feeling thoughtful. In a recent blog post I reviewed the effects of fatigue on the brain. Narrative-Mind thinking is particularly impaired when we are tired; the Narrative Mind was the last system of the brain to evolve, and it is the first to go offline. Stress, distraction, and competing priorities also make the Narrative Mind break down; as I have argued elsewhere, one of its key characteristics is an inability to effectively multitask. Or as philosopher Eric Schwitzgebel puts it, "consciousness is a limited resource."

When the Narrative Mind needs a nap, the best results tend to come from people who "trust their gut" -- in other words, they are able to shift the work to their Intuitive Mind, which has massive parallel-processing capabilities and operates with little effort. That of course carries other risks: Intuitive-Mind processes are riddled with assumptions and stereotypes, and the typical mistakes of Intuitive-Mind thinking have rightly been highlighted by Daniel Kahneman and others. But focusing on Intuitive-Mind errors misses the point that the Intuitive Mind works fast, it works unconsciously, and it is right perhaps 90% of the time. When the Narrative Mind is out of commission, reliance on the Intuitive Mind is usually the best strategy available to us. For the student who is Intuitively convincing in their written expression, it's possible to skate by with limited effort -- and the professor still doesn't get the demonstration of Narrative-Mind thinking that she was hoping to develop.

AI provides a new way to cope with limitations of the Narrative Mind. It generates text that sounds like a Narrative-Mind output, that follows the logic of human narratives, and that has a structure acceptable to the Narrative Mind of the professor when she reads it. My argument is that this isn't cheating, as long as it is disclosed. A student's use of generative AI to write a paper recognizes and values the goal of logical reasoning and convincing written expression. It just achieves it in an assisted way. I predict that we will in fact start to see AI included in some types of educational accommodation plans, such as those for people with attention-deficit disorder. It is an assistive device, just like an audio recorder for people who can't remember what was said (or for that matter, like a piece of paper for taking notes). It externalizes what were previously internal capabilities of the human brain.

Does the use of AI undermine the professor's ultimate goal, though -- that students have their own, internal capability to create a convincing argument? Perhaps: Studies have shown that the products of AI are at about the level of the average college entrance essay, which makes AI writing hard to detect in a teaching context and also in a scientific one. And that's not surprising, because the whole method of AI is to statistically predict the average response, the next word in a sentence, the most likely response to a question. AI literally provides "the wisdom of the crowd." But the wisdom of the crowd is often not very wise -- it traffics in banalities, and it has no way of bringing in viewpoints that have not been part of the mainstream. It is more likely to sound like the writing of people who are male and privileged. Perhaps most importantly, it doesn't include the personal touches, the rhetorical flourishes, the flashes of brilliance that are part of the best human writing.

In order to gain practice with thinking, students need to learn how to write an essay after an AI has taken a first pass at it. Some specific skills might involve asking whether all of the AI's points do in fact support the central thesis of the paper, asking what viewpoints have been ignored or excluded, and asking whether the AI's arguments do in fact fit with one's own experience. The essay can then be improved, with irrelevant points removed and more relevant examples inserted. Some of those activities are better done by the Intuitive Mind anyway -- we're asking whether the essay passes a basic "smell test" for correctness. We might not be able to put its deficiencies into words, or say exactly why our revised version is better, but that's OK -- it will probably be improved anyway, by virtue of its humanity. And if a student has time later to reflect on the essay, when the Narrative Mind is less tired, that reflection might in fact help him or her to develop the skill of analyzing structure and logic in writing, a skill that might be transferable to the next time a free-text writing assignment is attempted.

I contend that AI is a tool like any other, and that we should allow our students to use it. Honesty requires that they disclose when and how they have done so -- the ethical standard that many professional and scientific groups seem to be gravitating toward -- but they shouldn't be marked down for using AI or given extra points for avoiding it. They should instead be graded on the quality of their work. A truly effective response will likely require engagement from both the student's Narrative Mind and their Intuitive Mind. In one sense this is no different from spellcheck software; at one time, a student who couldn't spell would never do as well as one who could, and the only way to get there was through rote memorization of words. But now spellcheck features are so ubiquitous that students don't really have to worry about spelling anymore. AI is much the same, providing an artificial Narrative Mind to work through the basics of logic and rhetoric.

So why not let AI do what it is good at -- providing logical structure and a jumping-off point based on common sense -- and ask our students to take it from there? The quality of essays might become more consistent as a result, and perhaps could even exceed the average level produced by AI. Put another way, why expend energy training humans up to the same mediocre level that AI can manage practically for free? Instead, let them learn how to use AI effectively as a way of compensating for built-in deficits of the human operating system, and have them practice the things that humans shine at -- i.e., their humanity.
