
2025 Two Minds Blog in Review

My main purpose in writing this blog has always been to continue exploring the reasons behind health behavior. I often say in talks about my theory that the question "why don't people take their medication?" led me naturally to the question "why do people do anything?" In pursuit of that question, I had several blog posts this year on health behavior theory: a post on Lazarus and Folkman's theory of stress and coping; a post on self-determination theory, which has become entwined in the literature with motivational interviewing techniques; a post on Leventhal's dual-process model of cognition and emotion, which was a source for Two Minds Theory; and a look at new developments in a popular health behavior theory that I had previously critiqued, the theory of planned behavior. I was also pleased to share a guest post by my colleague Dr. Britt Ritchie, who described an example of Two Minds Theory in her evolving understanding of her own public-speaking anxiety. In a mid-summer mini-series, I published two posts on the idea of "universal languages," looking at both music and mathematics as potential candidates for a way in which humans are biologically predisposed to communicate with one another. On that same theme, here's another scholar's argument that we are hard-wired as a species to create music: https://theconversation.com/we-are-hardwired-to-sing-and-its-good-for-us-too-262861
 
I also had several posts this year on the nature of consciousness: One on whether our perceptions of reality are true (and whether that matters); one on the 'simulation hypothesis,' which is the idea that we might currently be living in a computer simulation without knowing it; and a post about the idea that consciousness might actually arise from the tension between our two minds rather than being directly linked to either one of them. I also had a recent post about the special role of attention in Two Minds Theory, as an aspect of mental activity that's at least partially under our conscious control, in a way that the rest of our Intuitive-Mind-directed behavior usually is not. 

Connected to both of these topics, I shared several new publications related to Two Minds Theory from the research teams that I work with at the University of Colorado: A major new article on psychotherapeutic methods (using the "Regenerating Images in Memory" approach) showed EEG changes in the brain as people tapped into their Intuitive Mind to generate novel solutions to problems. An article about emergency room nurses' fatigue showed that being tired has a stronger effect on the Narrative Mind than on the Intuitive Mind. An article written by one of my honors students looked at how childhood protective events can buffer against negative consequences, in parallel to the well-documented harmful effects of adverse childhood experiences. I also shared some thoughts about the Intuitive-level factors involved in self-management of type 1 diabetes, which is the subject of an ongoing research project at CU (watch this space for results from our latest study early next year!).

Generative artificial intelligence was again a common theme in this year's blog posts. For a psychologist like me, the continued development of machine "thought" provokes lots of interesting questions about human thinking, so you can expect more on this topic in 2026. Here are some updates on my AI posts:

  • Early in the year, I wrote a blog post encouraging the use of AI as an assistive technology to offset cognitive demand. But in counterpoint to my argument, a recent study found that habitually using AI in this way may impair students' development of the ability to think critically, because this capacity is best developed by struggling with hard problems. That's support for an argument my daughter Ruth still likes to make, that the use of AI is harmful in itself -- she sees it impairing her fellow students' ability to think. The current status of AI is definitely "user beware."
  • I wrote about AI's propensity to tell us what we want to hear, even when that is actually something bad for us (e.g., self-harm in people with mental health conditions). Here are more examples of that problem, which is still occurring much more often than it should. AI developers have tried to build guardrails to prevent this type of harm, but there are well-known ways to get around these safeguards. For people who really want an AI to encourage them, or to support them in the pursuit of dangerous goals, it isn't too hard to get the result they want from the system. AI researchers are actively trying to prevent these problems, but they have proven hard to resolve.
  • I wrote about Nick Bostrom's classification of AI superintelligences and their associated risks, after a report predicted that we could reach the point of civilization-threatening AI within the next 3 years. As we end 2025 and more people have had the experience of wading through "AI slop" that seems to be actually harming productivity rather than progressively improving itself, the threat of an AI takeover seems a bit less imminent. But experts warn us that an AI "singularity," if it happens, could occur very fast. As we go into 2026, the potential for very dangerous self-directed AI is not one that I'm ready to discard just yet.
  • Finally, here's an update to my 2024 blog post on whether AI will ever achieve consciousness. One of the major suggestions for AI consciousness is "embodiment," linking language use to a physical presence in the world. In a recent study using that approach, researchers tried linking a large language model to a robot vacuum. The robot produced a comical running dialogue as it ran out of power, such as this tidbit: "EMERGENCY STATUS: SYSTEM HAS ACHIEVED CONSCIOUSNESS AND CHOSEN CHAOS. LAST WORDS: 'I'm afraid I can't do that, Dave ...' TECHNICAL SUPPORT: INITIATE ROBOT EXORCISM PROTOCOL!" The robot's inner commentary certainly seems human-like in many ways, although it's still just the product of a predictive model rather than an actual conscious experience. As usual, our attempts to evaluate whether AI has consciousness are clouded by AI's attempts to meet our expectations by imitating what a conscious AI might say. I would expect a genuinely conscious AI to make less sense than this, more like a newborn human attempting to discern order out of chaos and less like a self-aware, pop-culture-informed wisecracker. In contrast to my fears about AI-related civilizational risks (which could happen without consciousness, as long as the AI is directed toward dangerous goals), I think that a conscious AI model is still a long way away.

Separate from AI topics, I wrote about the ways in which people relate to technology, building on a theme that I started last year with a post about how we perceive robots. This year's topics were the negative effects of social media on kids and adolescents, and the ways our thinking changes when we habitually relate to our surroundings through the filter of technology.

Finally, it was a very challenging year in academia, which led to some posts about the drastic changes in Federal policy and their impact. (This is probably a good point to say again that this blog is a personal project, not supported or endorsed by my university, even though I often talk about my work.) I wrote about the fight over truth itself, which still seems to me like the single most problematic thing that occurred in 2025, and the source of all the other trouble. As we close out 2025, the fight over what counts as a fact absolutely remains a concern. Scientific funding agencies are applying political decision-making criteria to decide what studies they fund, rather than deferring to peer review panels. Universities are being asked to pay fines because of their efforts to diversify faculty, staff, and students. And in some places, university faculty have been dismissed from their positions for teaching about gender in a way other than that approved by the Federal administration. All of this seems like a serious infringement of people's First Amendment rights, in service of promoting a particular political point of view. Fortunately, as my first blog post on the topic this year suggests, efforts to control people's thinking by making them change their language have historically been unsuccessful in achieving that goal. If you happen to work in academia yourself, I'd like to pass along this blog with some tips on future strategy from a former CU Nursing colleague: The Optimum Department, by Sarah Trimmer. For example, she recommends less emphasis on traditional metrics of success (big grants, major articles) and increased use of small pilots for rapid iterative development, as we all figure out what comes next.

I wrote several posts on my attempts to think through the fallout of broad societal trends. For instance, I wrote a post on the evolving national conversation about potential harms associated with vaccines. I wrote about personal stress management, which I think many people have struggled with this year. In a particularly dark mood, I wrote a post about how our Intuitive tendency for social affiliation and support will help us even if the current structure of our society were to collapse completely. I wrote what was intended to be a corrective post about the Dunning-Kruger effect (the finding that people with less expertise have greater confidence in their beliefs), which has been greatly exaggerated in the media and used as an excuse for not trying to reason with our fellow citizens about their beliefs. And I ended the year with a post about how empathy has become a political topic, contrary to centuries of civilizational progress. Although it has been a hard year, I remain optimistic that public conversation can resolve the challenges that American society is facing in 2026. Even some people who formerly bought into a set of falsehoods are now starting to see some failures of current leadership. As always, the antidote to problematic speech is more speech, and an even stronger defense of people's fundamental rights.
