
Can We Now Read People's Brainwaves?

My most recent blog post suggested that we aren't going to be able to upload human consciousness to the cloud anytime soon, for various reasons that have to do with the technical differences between human and AI information processing, and the physical differences between computers and human brains. So what about a simpler task: reading people's minds? That's another common trope of fantasy and sci-fi literature, and it seems close to recent advances in the mental control of prosthetic devices. The Neuralink company has also reported successes in this area, including some ability for people with communication impairments to generate words or phrases on a computer. That seems like it is getting close to reading people's thoughts.

Let's leave aside the medical challenges of implanting electrodes in the brain, which are considerable: metal needles can damage sensitive brain tissue, so the body treats the electrode as an injury and attempts to build scar tissue around it. That is not only bad for the brain, it also interferes with data transmission. And then there's the risk of infection from having a hole in your skull. But let's assume we can eventually overcome these hurdles -- there are some new approaches already. What about the technical challenge of translating neurons' firing patterns into language, for direct mind-to-mind communication? (Let's also, for now, leave aside questions about sharing images, emotions, and the like -- reading out the verbal content of unspoken sentences seems like the simplest test case.)

I have a bit of experience reading brainwaves, from my recent EEG study of people's brains during a psychotherapeutic imagery activity. Here's a look at the overall pattern of activation in two distinct parts of the brain (frontal lobes on top, temporal lobes on the bottom), each divided into left vs. right hemispheres. Within each of those four areas -- which are large, relatively undifferentiated regions of cortex -- there are five distinct EEG activation patterns, shown by the five colored lines. (In these graphs, DIP, SEE, and DO are three hypothesized phases of the psychotherapeutic process; the x-axis just refers to time since the start of the session.)

These graphs are pretty, and they clearly show increasing activity over time in the temporal lobes (bottom panels with lines moving upward to the right), which was the main finding of our paper. But within each of those panels, the relative frequency of different brainwave types (delta, theta, alpha, beta, gamma) fluctuates quite a bit over time, sometimes in sync with one another and at other times not. These brainwaves represent faster or slower patterns of neural firing (they are measured in hertz, or firing cycles per second), and are aggregates across huge numbers of neurons in that general area of the cortex. Additionally, the graphs above are normalized pictures created by combining results from 30 different people -- when you look at the individual results, the data are a lot messier than this. If even hugely simplified aggregate measures of brain activity are still this messy to look at, what hope do we have for identifying individual neurons whose firings represent a particular concept or word?
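To make the frequency-band idea concrete, here's a toy sketch in Python of how a raw signal gets split into the conventional bands. The signal is synthetic -- a single 10 Hz rhythm buried in noise, not data from our study:

```python
import numpy as np

np.random.seed(0)

# Synthetic 10-second "EEG" trace sampled at 256 Hz:
# a 10 Hz oscillation buried in random noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

# Power spectrum via the FFT
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

# Sum power within the conventional frequency bands (Hz)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}
band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in bands.items()}

# The injected 10 Hz rhythm lands squarely in the alpha band
dominant = max(band_power, key=band_power.get)
print(dominant)  # -> alpha
```

Real EEG is far messier: all five bands wax and wane at once, which is exactly why the individual-level graphs look so noisy.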

As in many aspects of contemporary life, it seems that AI might be able to come to our rescue. The great strength of modern machine learning techniques is that they can condense huge numbers of variables down to a more manageable number for interpretation. Math journalist Anil Ananthaswamy identifies three main strategies that the AI models tend to use: First, they use a trick that's familiar from multivariate statistics, multiplying each variable by its own uniquely derived weight in order to create a linear combination -- a single output in which some predictors count for more than others.
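A linear combination is just a weighted sum. Here's a minimal illustration -- the channel readings and weights below are invented for the example, not learned from real data:

```python
import numpy as np

# Hypothetical readings from five EEG channels (microvolts)
channels = np.array([3.2, -1.5, 0.8, 2.1, -0.4])

# Each channel gets its own weight, so some predictors count more than others
weights = np.array([0.6, 0.1, -0.3, 0.8, 0.05])

# The linear combination collapses five inputs into one output score
score = np.dot(weights, channels)
print(round(score, 3))  # -> 3.19
```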

Second, AI models use a data-reduction strategy that creates "unobserved variables," much like the factor analysis used to identify subscales in a questionnaire. The model then uses these unobserved variables, which are fewer in number than the original variables, to predict the outcome of interest. The number and definition of the unobserved variables can change over time, forming a "hidden layer" in the center of the AI model that translates from observed data to predicted results. None of the circles in the middle row of this diagram correspond to anything in the real world.
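In code, a hidden layer is just a second weighted combination stacked on top of the first. This sketch uses random numbers as stand-ins for learned weights -- nothing here is actually trained:

```python
import numpy as np

np.random.seed(1)

# Toy network: 8 observed inputs -> 3 hidden units -> 1 prediction.
x = np.random.randn(8)        # the observed variables
W1 = np.random.randn(3, 8)    # weights from inputs to the hidden layer
W2 = np.random.randn(1, 3)    # weights from the hidden layer to the output

# The three hidden values are the "unobserved variables" -- fewer in
# number than the eight inputs, and corresponding to nothing real.
hidden = np.tanh(W1 @ x)
prediction = W2 @ hidden

print(hidden.shape, prediction.shape)  # -> (3,) (1,)
```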

Finally, AI models represent the complex inter-relationships between variables as a matrix, which allows for additional simplification. The factor-analytic process in step 2 also relied on matrix algebra, consolidating a set of observed relationships down to a simpler vector that describes only the connections. But a second feature of matrices is that they can be reduced to a smaller size by sliding a small, fixed grid of numbers -- called a "kernel" in AI methodology -- across the matrix, combining each patch of elements into a single value. This is particularly helpful when you have a very large matrix with a lot of empty space in it -- e.g., pairs of variables that are mostly unrelated to one another, but that in certain instances have very strong relationships. Kernel methods let you find the needle in the haystack more quickly.
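Here's a toy version of that kernel idea: sliding a small fixed grid across a mostly-empty matrix shrinks it while preserving the one strong local pattern. (The numbers are invented, and real convolutional models learn their kernel values rather than fixing them.)

```python
import numpy as np

# A 6x6 "data matrix" that is mostly empty, with one strong local pattern
M = np.zeros((6, 6))
M[2:4, 2:4] = [[1, 2], [3, 4]]

# A fixed 2x2 kernel; sliding it with a step of 2 shrinks 6x6 down to 3x3
kernel = np.ones((2, 2))
out = np.array([[np.sum(M[i:i+2, j:j+2] * kernel)
                 for j in range(0, 6, 2)]
                for i in range(0, 6, 2)])

print(out.shape)  # -> (3, 3)
print(out[1, 1])  # -> 10.0, the "needle" standing out from the zeros
```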

What all three of these machine-learning strategies have in common is that they take very complex data and make it simpler for purposes of interpretation. But each of these techniques also loses information along the way. Machine-learning techniques are essentially a toolbox that allows us to find only the small variations that matter most, out of a vast sea of variability.

Using these approaches, AI models can pick up neural firing patterns that tend to happen when a person is thinking about a particular target, like the word "ball" or the motion of the person's left arm. By ignoring a huge amount of co-occurring variability in the brain, the models can pick out the target something like 60%-70% of the time. That's accurate enough to allow for something approximating normal movement, which can be a little inelegant even in people who use their own neural wiring to produce it. It's accurate enough to allow for basic communication in people who have lost that ability -- which, again, can be pretty glitchy and still get your point across. All of this is very exciting in terms of the potential for computer-assisted rehabilitation. 
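As a cartoon of how this kind of decoding works, here's a nearest-centroid sketch with made-up "neural patterns" -- a deliberately simplified stand-in for the far more elaborate models that Neuralink and others actually use:

```python
import numpy as np

np.random.seed(42)

# Made-up "template" firing patterns for two thoughts a user trained on
templates = {"ball": np.array([1.0, 0.0, 0.5, -0.5]),
             "left_arm": np.array([-0.5, 1.0, 0.0, 0.5])}

# Training data: 40 noisy repetitions of each thought
train = {w: m + 0.8 * np.random.randn(40, 4) for w, m in templates.items()}

# "Training" here just means averaging each thought's examples into a centroid
centroids = {w: x.mean(axis=0) for w, x in train.items()}

# Decoding assigns a new pattern to whichever centroid it lands closest to
def decode(pattern):
    return min(centroids, key=lambda w: np.linalg.norm(pattern - centroids[w]))

print(decode(templates["ball"]))  # -> ball
```

The decoder works only because it was trained on this particular "brain" -- patterns generated from a different set of templates would be gibberish to it, which is the point of the next paragraph.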

What it is not, however, is a generalizable approach to telepathy. It allows computers to pick up on specific pre-trained signals, in specific individuals, who have worked to train the computer about the operations of their specific brain. A model trained on your brain will have much more limited success in reading out the activities and intentions of mine. And there will still be errors. Indeed, if Dr. Nicolelis's "neural degeneracy principle" from my last blog post turns out to be true, and different brain patterns are involved each time you produce a particular behavior, then a model that can understand your brain today may not be able to understand it tomorrow unless a constant stream of training data is provided.

As a side note, more advanced brain-scanning techniques like fMRI would make the problem dramatically worse, by providing far more data than in my EEG example. The brain-computer interfaces being piloted by Neuralink and others are much more targeted, collecting data only from hand-selected brain areas likely to be relevant for a particular purpose, based on more wide-ranging previous fMRI research. But that also means the probe-based data collection approaches are useful only for their pre-specified purposes, not for general scanning of the contents of someone's brain.

So, the answer to whether we can read people's brainwaves depends on what you mean by the question. If you mean "can we train a computer to pick out specific needles in the vast haystack of mental activity," the answer is yes -- AI can do that. If you mean "can we scan people's foreheads at a distance and determine their intentions and values," no, and we probably won't ever be able to do that. At best, our efforts to read minds will remain simplistic and surface-level, far different from the deeper-than-verbal-language ideas of mind-to-mind contact popularized in fantasy and science fiction.

----------

Postscript. A related topic concerns "neuroprivacy," the legal right not to share your own thoughts. This would normally seem obvious, but the boundary between public and private breaks down when someone uses an assistive technology that can interpret their brainwaves, as in the case of the Neuralink device. I tend to think this concern is overblown, given the custom-purpose nature of current brain-computer interfaces and the vast oversimplification that goes into the interpretation of brain data. But let's run with the idea for a moment. Could people at some point in the far future be held accountable for every passing thought that flits through their neural interface? Would they have the right to limit a corporation's use of their personal mental data -- e.g., to guess at what products or services they might be interested in buying, or to time advertisements to points in the day when they are more open to suggestion? These are extensions of current ethical debates about the appropriate and allowable uses of other types of data that our personal sensor devices (smartwatches, etc.) collect.

Here's a legal case related to our understanding of consciousness, about whether ICE agents have the ability to compel someone to unlock their phone by touching it with their thumb. In one jurisdiction, a judge ruled that your thumbprint is a simple physical characteristic, and therefore agents can use it as evidence the same way they would use a fingerprint on a surface. But in another jurisdiction, a judge said that unlocking your phone requires a level of intention to access its contents, and that forcing you to unlock it would therefore violate the Fifth Amendment rule against self-incrimination. The idea of intention is an interesting one, and seems to guard against the possibilities of brainwave data being used for purposes we didn't consent to. The question comes down to whether our interactions with technological devices make them really a part of our personal identity -- if so, then the boundaries of my "self" may not include just my body, but the devices that I carry around with me as well. To me, it feels like we all have a vested interest in limiting the use of brainwave data to those purposes we specifically intend, rather than viewing brainwaves as merely a physical measurement with no legal protections.

