
Will We Ever Be Able to Upload Our Consciousness to the Cloud?

It’s a popular sci-fi trope: a human consciousness residing in a computer, trading physical life for machine-based immortality. In The Matrix this works both ways: You can not only upload your consciousness to a vast multiplayer online world, but also instantly download digitized knowledge from the cloud to your brain (“I know kung fu!”). A digitized consciousness might have some acknowledged limitations, sure – you don’t eat anymore, and you can’t smell the flowers. Even those limitations seem surmountable by modern technology standards, though: Couldn’t we design appropriate sensors, or simply simulate those experiences? Indeed, there’s a school of thought that claims we are already living in some type of simulated reality, whether computer-generated or otherwise.

Let’s confine ourselves to currently existing digital technologies, and examine the question of whether it really might be possible to upload our consciousness to the cloud. China is investing heavily in brain-computer interface technology, and U.S. companies like Neuralink are doing the same. I will draw heavily on the work of neuroscientist Miguel Nicolelis, whose groundbreaking Walk Again Project at Duke University showed that computers can be trained to interpret neural signals directly in order to control a prosthetic device in real time.
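To make the decoding idea concrete, here is a minimal sketch, not Nicolelis's actual method, of the general approach used in brain-machine interfaces: treat a movement parameter as a weighted combination of the firing rates of many recorded neurons, and fit the weights from training trials. All numbers below are invented for illustration.

```python
# Illustrative sketch of population decoding (invented data, not the
# Walk Again Project's real pipeline): fit per-neuron weights so that
# sum(w[i] * rate[i]) approximates a movement parameter such as arm velocity.

def fit_decoder(firing_rates, velocities, lr=0.01, epochs=500):
    """Trial-and-error (stochastic gradient) fit of a linear decoder."""
    n = len(firing_rates[0])
    w = [0.0] * n
    for _ in range(epochs):
        for rates, v in zip(firing_rates, velocities):
            err = sum(wi * r for wi, r in zip(w, rates)) - v
            for i in range(n):
                w[i] -= lr * err * rates[i]
    return w

def decode(w, rates):
    return sum(wi * r for wi, r in zip(w, rates))

# Toy training trials: 3 recorded neurons, each contributing to the movement.
trials = [([1.0, 0.0, 1.0], 2.0),
          ([0.0, 1.0, 1.0], 1.5),
          ([1.0, 1.0, 0.0], 1.5),
          ([1.0, 1.0, 1.0], 2.5)]
w = fit_decoder([t[0] for t in trials], [t[1] for t in trials])
print(decode(w, [1.0, 0.0, 1.0]))  # close to the trained value of 2.0
```

Note that the decoder only works because it pools many neurons at once; that dependence on the whole population is exactly what the first principle below describes.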

That seems close to what we’re talking about in terms of a brain-computer upload: Why not just go the next step and transfer what’s currently happening in the brain to the computer instead? Not so fast, says Professor Nicolelis. The brain-machine interface relies on complex associational learning methods, the same type that fuel current large language models’ startling ability to parse and generate sentences. Dr. Nicolelis proposes the following principles for this type of interface (all quotations are from his book The True Creator of Everything, pp. 61-72):

1. “The distributed principle, which holds that all functions and behaviors generated by complex animal brains like ours depend on the coordinated work of vast ensembles of neurons distributed across multiple regions of the central nervous system. In our experimental setup, the distributed principle was clearly demonstrated when monkeys were trained to employ a brain-machine interface to control the movements of a robotic arm … . In these experiments, animals could succeed only when the combined electrical activity of a population of cortical neurons was fed into the interface. Any attempt to use a single or a small sample of neurons as the source of the motor control signals to the interface failed to produce the correct robot arm movements.” The necessary neural input wasn’t even all coming from the same area of the brain.
2. "The neural-mass principle … describes the fact that the contribution of any population of cortical neurons to encoding a behavioral parameter, such as one of the motor outputs generated by our brain-machine interfaces to produce robotic arm movements, grows as a function of the logarithm of the number of neurons added to the population.” In other words, signal strength grows only logarithmically: adding more neurons from an already-activated area quickly hits diminishing returns, while recruiting neurons from different parts of the brain yields a meaningfully stronger signal.
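The logarithmic scaling in the neural-mass principle is easy to see numerically. Here is a sketch with a hypothetical contribution function (the log form is from the principle; the specific numbers are invented):

```python
import math

# If a population's encoding contribution grows as log(N), then each added
# neuron buys less than the one before -- the marginal gain shrinks roughly
# tenfold for every tenfold increase in population size.
def contribution(n_neurons):
    return math.log(n_neurons)

gains = [contribution(n + 1) - contribution(n) for n in (10, 100, 1000)]
print(gains)  # each marginal gain is about a tenth of the previous one
```

This is why recording from one or a handful of neurons, however carefully chosen, was never enough to drive the robotic arm.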

By themselves, principles 1 and 2 aren’t definitive strikes against the idea of digitizing brains. It would in principle be possible to design a very complex system that stores information in multiple areas, and weights cross-area correlations more strongly than within-area ones. One could probably train a modern machine-learning model on this. But wait, there’s more …
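As a sketch of what that weighting scheme might look like, here is a toy scoring rule that favors cross-area correlations over within-area ones. The area labels, correlation values, and weights are all invented for illustration:

```python
# Hypothetical illustration: score correlated pairs of recorded units,
# weighting cross-area pairs more heavily than within-area pairs.
units = {"m1_a": "motor", "m1_b": "motor", "ppc_a": "parietal"}
corr = {("m1_a", "m1_b"): 0.80,   # within-area pair, strongly correlated
        ("m1_a", "ppc_a"): 0.50,  # cross-area pairs, weaker raw correlation
        ("m1_b", "ppc_a"): 0.45}

CROSS_AREA_WEIGHT, WITHIN_AREA_WEIGHT = 2.0, 1.0

def weighted_score(pair, r):
    u, v = pair
    weight = WITHIN_AREA_WEIGHT if units[u] == units[v] else CROSS_AREA_WEIGHT
    return weight * r

scores = {p: weighted_score(p, r) for p, r in corr.items()}
print(scores)  # the weaker cross-area pairs now outrank the within-area one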

3. “The multitasking principle holds that the electrical activity generated by a single neuron can contribute to the operation of multiple neural ensembles simultaneously; that is, individual neurons can participate simultaneously in multiple circuits involved in the encoding and computation of several brain functions or parameters at once, … [for instance] the calculation of the direction of arm movement and the production of the exact amount of hand-gripping force.” This seems like one big nail in the coffin for the idea of digitizing brain information – in a digital signal, each of these parameters would need to be accounted for separately. Even if some correlation between them is allowed for, hand grip and arm movement would be different variables in the model, each with its own governing equation. The idea that the exact same set of neurons governs both parameters suggests that the brain stores information far more holistically than a digital system does.

4. “The neural-degeneracy principle posits that a given behavioral outcome, such as moving your arm to reach for a glass of water, can be produced at different moments in time by different combinations of cortical neurons. … In other words, … there is no fixed neuronal activity pattern responsible for controlling the lifting of your right arm, or any other action you might undertake. In fact, some preliminary evidence obtained in my lab suggests that the same combination of neurons is never repeated to produce the same movement.” This one is even more damaging to the idea of digitizing the brain than multitasking was. Modern AI models, despite hype to the contrary, are fully deterministic: if you know the inputs, and you know the full specifications of the model, you can exactly reproduce the outputs, every time. Here, Dr. Nicolelis seems to be saying that the model linking inputs to outputs in the brain isn’t the same at any two points in time. The great strength of digital models is that they are replicable – you can upload a copy of your software, initialize it on a new piece of hardware, and get a system identical to the first. The brain is a lot more ad hoc in its operations. Dr. Nicolelis posits that “a distributed neural encoding scheme offers great protection against catastrophic failure,” and this likely explains the remarkable plasticity of brain operations. But again this is a significant divergence from digital computers, which are quite vulnerable to failure from a bad spot in their data. Backup copies are the digital solution; flexible encoding is the solution in the analogue brain.
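The replicability of digital systems, the very property the brain lacks under the neural-degeneracy principle, can be shown in a few lines. This toy "model" is invented for illustration:

```python
# Digital replicability in miniature: given the same weights (the "uploaded"
# specification) and the same input, two independent instances produce
# bit-identical outputs -- unlike the brain, where the same movement is
# produced by an ever-changing combination of neurons.
class TinyModel:
    def __init__(self, weights):
        self.weights = list(weights)

    def forward(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs))

weights = [0.25, -1.5, 3.0]   # the full "specification" of the model
copy_a = TinyModel(weights)   # instance on machine A
copy_b = TinyModel(weights)   # instance on machine B
x = [1.0, 2.0, 3.0]
print(copy_a.forward(x) == copy_b.forward(x))  # True, every time
```

A brain obeying the neural-degeneracy principle would be like a model whose weights silently reshuffled between every call while still producing the same behavior; nothing in this replication-based picture accounts for that.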

So, principles 3 and 4 are pretty damaging to the notion that brains could be uploaded to computers, highlighting fundamental differences between the two types of processing. Then come two more principles which are not major barriers in the age of AI:

5. “Next in my list comes the context principle, which holds that at any point in time, the global internal state of the brain determines how it is going to respond to some incoming sensory stimulus. In a sense, the context principle … describes why and how, during different internal brain states (that is, when animals are fully awake, versus sleeping or under the effects of anesthesia), the same neurons can respond to an incoming sensory stimulus – let’s say, a touch on its whiskers, in the case of rats – in a completely distinct way.”

6. “According to the plasticity principle, the internal brain representation of the world, and even our own sense of self, remains in continuous flux throughout our lives. It is because of this principle that we maintain our ability to learn until we die. Plasticity, for example, explains why, in blind patients, neurons in the visual cortex can become responsive to touch” (or alternately to sound, as described in my post on human echolocation).

Both of these principles are components of Bayesian statistical reasoning: Each new state of the system builds on the state before it, which encodes an expectation about the world (the “prior probability”). That expectation is then compared to an actual result observed in the world, and the prior is adjusted according to how wrong the prediction was. In artificial neural networks, the analogous adjustment of the model’s weights is performed by an algorithm called “backpropagation.” AI can do this. But there’s a hidden technical issue: The brain can’t possibly do it in the same way that AI does, because the AI approach requires a massive amount of working memory. Backpropagation has to hold the mathematical weights of millions or even billions of connections in memory at once, along with the intermediate results of each prediction, so that it can compute how a small change to every weight would have improved the outcome. The process iterates over many rounds of predictions until a new optimum set of weights is found, and every round requires storing all of those weights, intermediate results, and adjustments simultaneously. Human memory is nowhere near this good – we are able to hold only a small amount of data in working memory at any given time, famously summarized by psychologist George Miller as “the magic number seven, plus or minus two.” (There is a reason why phone numbers have seven digits.)
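The contrast in memory demands is worth seeing directly. A classic Bayesian update, sketched here with a beta-binomial prior (a standard textbook form, with invented observations), needs only two running counts, which is a working-memory footprint even a brain could manage:

```python
# Minimal Bayesian updating sketch: a beta-binomial prior over "will the
# stimulus appear?" is revised after each observation. The entire state is
# two counts -- contrast this with backpropagation, which must hold every
# weight and intermediate result in memory at once.
alpha, beta = 1.0, 1.0  # uniform prior: no expectation either way

def update(alpha, beta, observed):
    """Shift the prior toward what was actually observed."""
    return (alpha + 1, beta) if observed else (alpha, beta + 1)

for observed in [True, True, False, True]:
    alpha, beta = update(alpha, beta, observed)

posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 4/6 ≈ 0.667: expectation revised toward "appears"
```

Whatever the brain is doing with its priors, it is presumably closer to this kind of lightweight running tally than to a full gradient computation over billions of stored weights.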

Unfortunately, we still aren’t done. Here are two more barriers to the idea of uploading human consciousness to the cloud:

7. “One of the more surprising results of our multielectrode recording experiments in freely behaving rodents and monkeys was the discovery of the conservation of energy principle. As animals learn to perform a variety of different tasks, there is a continuous variation in an individual neuron’s firing rate. Nevertheless, across large cortical circuits the global electrical activity tends to remain constant. … A major implication of this principle is that, since the brain has a fixed energy budget, neural circuits have to maintain a firing rate cap. Thus, if some cortical neurons increase their instantaneous firing rate to signal a particular sensory stimulus or participate in the generation of a movement or other behavior, other neighboring cells will have to reduce their firing rate proportionally.” This means that activity in one brain area reduces activity in other areas, potentially producing complementary changes in the behaviors that those areas control. As with the multitasking principle, this coupling between different brain processes has no counterpart in the design of digital systems, which run their processes independently. Of course, digital systems sidestep the issue by simply drawing more power from the wall outlet when they are doing many things at once: unlike the brain, they are not constrained by a fixed energy budget.
8. The final principle isn’t named by Dr. Nicolelis, but I will call it structure dependency. Here’s how he describes the issue: “The complex mesh of white matter [in the cerebral cortex] plays a crucial role in optimizing the functioning of the cortex. Some of the dense packs of nerve fibers that form the white matter are organized in loops that reciprocally connect pools of gray matter. I call these loops biological solenoids, after the coils of wire used in electromagnets. The largest of these biological coils is the corpus callosum [which connects the brain’s left and right hemispheres].” The coil structure works much like an electromagnet, generating a fluctuating magnetic field around the brain as electricity flows more or less strongly through its different parts. Those magnetic signals might matter as much as or more than the electrical ones, and they are completely unaccounted for in digital models of the brain. Additionally, neurons in different parts of the brain are surrounded by different levels of myelin, an insulating substance that gives white matter its color. “Myelinated nerves need less energy to conduct action potentials. For example, while an unmyelinated C nerve fiber, with a diameter of 0.2-1.5 micrometers, conducts action potentials at roughly 1 meter per second, the same electrical impulse moves at about 120 meters per second, or more than 400 kilometers per hour, in a large myelinated fiber.” Some parts of the brain therefore send electrical impulses faster than others, and their insulation allows for less cross-activation of neighboring neurons as the impulse passes down its track. This lends additional importance to the question of where in the brain a signal arises, and suggests that viewing individual neurons as on/off switches in a digital model is oversimplified. Dr. Nicolelis provides an image in his book of how a digital system might need to be operationalized in order to mimic some of these brain features: it’s far from a box with blinking lights.
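The conservation-of-energy principle can be sketched as a renormalization: hold total firing to a fixed budget, so that boosting one area's rate necessarily pushes the others down. The areas, rates, and budget below are invented for illustration:

```python
# Hypothetical sketch of the conservation-of-energy principle: total firing
# across cortical areas is capped, so when one area ramps up, the others
# are forced down proportionally.
BUDGET = 100.0  # arbitrary units of total firing

def rebalance(rates):
    total = sum(rates.values())
    return {area: r * BUDGET / total for area, r in rates.items()}

rates = rebalance({"motor": 40.0, "visual": 30.0, "auditory": 30.0})
rates["motor"] += 20.0    # motor cortex ramps up to drive a movement...
rates = rebalance(rates)  # ...and the fixed budget pushes the others down
print(rates)              # motor up to 50, visual and auditory down to 25
```

In a digital system each of these "areas" would be an independent process with its own resources; nothing forces one to quiet down because another got busy.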

As in my recent post about free will, we see that neural processes operate very differently from the way computers do, even if they are able to arrive at similar results. Eric Schwitzgebel summarizes the problem succinctly with the following set of properties that brains have but computers don't:

... the activity of neurons depends on intricate biological details. Signal strength depends on axon and dendrite lengths, and small timing differences can have big consequences. Cell membranes host tens of thousands of ion channels with different features, sensitive in different ways to different chemicals. Nitric oxide serves as a diffuse signal, passing freely through the cell membrane and interacting with intracellular structures, not just surface receptors. Blood flow matters -- not just in total amount but in the specific chemicals being transported. Glial cells, which provide support structures, also influence neuronal behavior. Many cell changes accumulate over time without resulting in immediate spiking activity. And so on. (pp. 90-91)

I would therefore be very wary of anyone who claims to be able to upload my consciousness to the cloud. Schwitzgebel's argument suggests that even the more limited strategy of gradually replacing your neurons with silicon chips is doomed to fail, because the only feasible model for "a real neuron probably requires ... another biological neuron" (p. 91). And David Chalmers tells a disquieting story of "gradual replacement," in which a person gradually replaces their biological self with machines, all the while protesting that they don't feel any different, while they have in fact become a nonconscious zombie somewhere along the way.

Despite the unlikelihood of cloud-based consciousness, it's extremely likely that someone will be able to train an AI model to sound like you. Schwitzgebel and colleagues in fact did this by training an LLM to speak like the philosopher Daniel Dennett, with such success that Dennett himself agreed that the model sounded like him; the model even produced some novel arguments that he would in fact endorse. Despite this effective mimicry, and even if your personalized AI protests that it does in fact contain your conscious experience and memories, your consciousness will remain stubbornly embodied in your biological brain. That physical existence is probably inseparable from the experience of being you.

Stay tuned for my next post, in which I'll look at a potentially simpler application of brain-computer interfaces: What if, instead of trying to upload consciousness to the cloud, we just left people's minds where they are, and tried to read their thoughts electronically instead? Is that possible using current technology? Check back here in 2 weeks to find out!
