Bonus Blog: Is AI Harmful to Students?

Readers of my blog may be interested in this recent event from Valparaiso University’s Christ College, where students debated the proposition “on balance, the rise of Artificial Intelligence harms students more than it helps them.” (So the “pro” team argued against students using AI, and the “con” team argued in favor of it.) If you work in education, or are just an interested observer, you might be intrigued by what these bright students thought were the most compelling reasons for and against their own use of emerging artificial intelligence technologies. I should note that both sides were explicitly instructed not to bring up the issue of reliability or AI-generated “hallucinations,” because it was felt that this would distract from the main question. That’s probably fair: hallucinations have gotten a lot of press, yet they are also decreasing in frequency and severity with each new iteration of the technology. My daughter Ruth was one of the speakers for the “pro” side, but I won’t spoil this by telling you who won! https://www.youtube.com/live/iZ0FKG_O4LU?si=TZFmf1lnpubwh_Im
