The pace of technological change continues unabated, and many conversations turn to concerns about how our use of technology might be changing us humans in return. I wrote earlier this year about Jonathan Haidt's analysis of the ways in which two specific technologies -- smartphones and social media apps -- can have negative academic and mental-health consequences for adolescents. At a recent meeting of a journal's editorial board, my fellow scholars and I debated which uses of AI should and shouldn't be allowed during the writing of a scientific article. And I wrote about the risks that can arise when people use AI models as companions or counselors, roles they aren't always good at.
In a recent book titled Jung vs. Borg (in which "Jung" is Carl Jung, the psychiatrist who founded analytical psychology, and "Borg" is the resistance-is-futile cyborg collective from Star Trek), Glen Slater argues that technology can have negative effects on us in four areas:
- Loss of connection to past knowledge - Things that are traditional or historical may be needlessly discarded. Nassim Nicholas Taleb writes about the enduring value of "old technologies," like a printed book or a good pair of shoes -- they have endured for centuries or longer and are not likely to be replaced anytime soon. Even with parallel technologies like e-readers, a physical book is durable and sharable in ways a digital file is not. As the world becomes more complex and technological, there's concern that most individuals lack the skills (and in fact wouldn't dare try) to do things like grow their own food, maintain their own shelter, or make minor repairs to their car. And of course, there's Santayana's aphorism that those who cannot remember the past are condemned to repeat it. Over-reliance on technology encourages us to live only for the future and to ignore what's valuable from the past. At the extreme, Slater suggests that a loss of connection to the past leads to a sort of dissociative state or even "psychopathy," in which people begin to treat other humans as characters in a story told by our Narrative Minds, rather than as ends in themselves whom we can relate to on an Intuitive-Mind level.
- Loss of connection to our surroundings - Slater writes that "as technology dominates civilization, nature becomes more unfamiliar; we then perceive natural things as unpredictable and in more need of regulation and taming; this, in turn, promotes technologies that promote more control." Control-oriented technologies can create negative impacts like ecological damage, including some that we might not see right away, and can increase long-term risks even if they reduce short-term ones. We also lose some of our mindful ability to see nature for what it is (direct experience at the level of the Intuitive Mind), instead tending to see the world through symbols and abstractions (the Narrative Mind). Slater argues that even our popular cognitive-behavioral therapies involve attempting to assert control over negative emotions, rather than experiencing them or learning from them. Star Trek's Lt. Commander Data is Slater's example here, an emotionless android who wishes only to become a real boy. Slater says that we are moving the other way, beginning to act like emotionless robots ourselves because we over-identify with technology. This puts us at risk of losing our emotional intelligence, embodiment, gut feelings that warn us of danger, and all the other resources of the Intuitive Mind.
- Inattention to adverse consequences increases adverse consequences - Slater borrows heavily from Carl Jung's theory that when we avoid or ignore some fundamental aspect of human experience, the "return of the repressed" is inevitable: That aspect will come back to us, often in a distorted, disturbing, and out-of-control way. Besides the ecological example, we may lose a connection to the emotional cues that arise from our Intuitive Minds, perhaps resulting in an absence of spiritual or religious awareness as well. Here, Slater uses the cyborg example of Star Wars's Darth Vader, who abandons his former Anakin-Skywalker self in pursuit of power, losing his empathy and his connections to community along with much of his physical body (until the end of the third original movie, when he repents and is redeemed). In fact, Slater argues that cruelty is almost an inevitable result of dissociating from our Intuitive Minds -- we begin to experience people as means to an end and mistreat them, even though we still have a fundamental need for love and connection. Slater writes that "what is dissociated from the psyche is instead acted out in the streets," with other people paying the cost of our own internal struggles. One wonders if the recent rash of gun violence, the nation's cruelty against immigrants, and even the frequency of road rage are at some level symptoms of our culture's technology-enabled retreat into the Narrative Mind and simultaneous distancing from Intuitive-level experiences.
- Over-commitment to utopian visions - Slater argues that we have become a culture living for the future, always in pursuit of the "new new thing" with no regard for the resources used up in that pursuit. Utopian future visions are sold as panaceas to all of society's current ills: Food insecurity? Technology will solve that. Overpopulation? Just colonize Mars. Political disputes? Surely there's an app for that. Slater identifies two dangers in this sort of techno-futurism: One is an unwillingness to hear the struggles that people have in the here-and-now. The other risk is hubris, in which our pursuit of ultimate power merely sets us up for an inevitable fall. Over-enthusiasm for a promised digital utopia makes us vulnerable to accepting creeping problems in the moment, or even assenting to totalitarianism if it seems able to give us what we want. At an extreme, identifying with our technologies might make us long to achieve immortality by "uploading" our mental selves into computers, an idea that has been called "the singularity" (or sometimes "the rapture of the nerds"). Philosopher David Chalmers warns that even if this were possible, we would likely be creating a non-conscious "digital twin" that was no longer an experiencing human self. To achieve truly "conscious computing," Slater suggests we would need to create an AI that has a body, that has a developmental history, that feels emotions in that body, that has the experience of feeling unsafe or vulnerable in the world, that can develop mastery, that loves and connects to a community, and that struggles with limited capabilities and losses over time. In other words, to have human consciousness, an AI would need to have a human life.
Philosopher Nick Bostrom approaches the question from the other direction, asking what could still make life meaningful in a future where machines can do most things better than we can. He points to several sources of meaning:

- Hedonism might provide some level of meaning in life, especially if AI systems never manage to experience emotions. This is classic utilitarian philosophy (Bentham and Mill), although Bostrom is still enough of a techno-futurist to suggest that the outcome of emotional happiness might be more easily achieved through "wireheading," in which people use hypothetical side-effect-free drugs or nanobots to stimulate happiness directly in the brain. But he also writes about "enchantment," a life that is "enmeshed in a tapestry of rich symbolic significance -- when it is imbued with myths, morals, traditions, ideals, and perhaps even omens, spirits, magic, and occult or esoteric knowledges; and, more generally, when a life transects multilayered realities replete with agencies, intentions, and spiritual phenomena." This is the type of satisfying experience that we often worry would be lost in a wireheading scenario -- think of Huxley's Brave New World, where the highest goods are consumerism and happiness drugs. Bostrom suggests that maybe we don't need to return to a savage mode of living to experience genuine happiness.
- Engagement involves deep attention to an activity, which doesn't need to have high instrumental value (importance for our survival) in order to command our interest. Bostrom gives the example of learning to make furniture: A machine could do the task faster and more consistently, but a human can still find great value in the process. Some of that value might come from the activity of learning a new skill, some from reaching a state of creative flow, and some simply because we humans prize artisanal goods with their unique imperfections. Just paying deep attention to what is going on in one's life (what Bostrom calls "orientation") can also create a sense of fulfilling engagement.
- Richness involves the "texture" of experience, with the idea that having a lot of variety in one's life is better than doing one thing all the time. In fact, when we reflect on our own lives, we often find that our various activities reinforce one another -- I enjoy exercising more, for instance, when I have just read something interesting and can ruminate on it along the way, and a long drive more when I have an interesting topic to discuss with a conversation partner in the car. Events don't have to be uniformly positive in order to contribute to the richness of experience.
- Roles are another source of meaning, arising from our connection to other humans and communities, and the ways in which we interact with them over time. There is joy (and sometimes also frustration) to be found in understanding and fulfilling others' expectations of us. It certainly keeps things interesting. Bostrom presents some examples in which important roles like parenting could be offloaded to machines, but I'm not buying it -- these are roles that also intersect with engagement or a sense of purpose, and people are unlikely to abandon them no matter how skilled the robot caretaker might become.
- Purpose reflects work toward some aim (small and short-term), goal (medium-term), or mission (overarching life narrative). Bostrom suggests that to be meaningful, a purpose should have some inherent trajectory in which things get better over time, and should fit with one's idiosyncratic personality and values in order to make for a compelling and original life-story. The narrative one can tell about one's activities seems closely associated with a sense of purpose.
