Neuro-Technology Already Being Used To Convict Criminals And Manipulate Workers | Nita Farahany, WEF
Nita warns us! She was appointed by President Obama to the Presidential Commission for the Study of Bioethical Issues, where she served for seven years, and she’s part of the Expert Network for the WEF.
The emerging tech in the fields of neuroscience and biotechnology is accelerating as fast as the Globalists...
Our job is to approach the governments with diplomacy, then lawsuits if nothing works. Accountability is the duty of the people affected. Period. We are the ones we are waiting for.
Dear WEF, and other insane “thought leaders”,
Do we think this will lead to Utopia or Dystopia? Where is this headed? Do we want this? What can we do to strengthen the laws and human rights protections as we enter this new era of emerging technology, much of which is risky or intrusive?
Does all your mind reading spy tech violate the public and moral order of things?
Should we act on the smart suggestions made by the Inter-American Juridical Committee on Neuroscience, Neurotechnologies and Human Rights? Probably.
IOJ
Heads Up: Full Interview Is Below To Read At BOTTOM Of This Page!
Originally released December 2023. In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.
Some of the wilder things they cover:
• How close we are to actual mind reading.
• How hacking neural interfaces could cure depression.
• How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
• How close we are to being able to unlock our phones by singing a song in our heads.
• How neurodata has been used for interrogations, and even criminal prosecutions.
• The possibility of linking brains to the point where you could experience exactly the same thing as another person.
• Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
• And plenty more.
“That means we’re down to the point where you could trace specific neuronal firing patterns, and then interrupt and disrupt those patterns. Can we do the same for other kinds of thoughts?”
In this episode:
• Applications of new neurotechnology in security and surveillance [00:04:25]
• Controlling swarms of drones [00:12:34]
• Brain-to-brain communication [00:20:18]
• Identifying targets subconsciously [00:33:08]
• Neuroweapons [00:37:11]
• Neurodata and mental privacy [00:44:53]
• Neurodata in criminal cases [00:58:30]
• Effects in the workplace [01:05:45]
• Rapid advances [01:18:03]
• Regulation and cognitive rights [01:24:04]
• Brain-computer interfaces and cognitive enhancement [01:26:24]
• The risks of getting really deep into someone’s brain [01:41:52]
• Best-case and worst-case scenarios [01:49:00]
• Current work in this space [01:51:03]
• Watching kids grow up [01:57:03]
Articles, books, and other media discussed in the show
Nita’s work:
The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology
Prof Nita Farahany: ‘We need a new human right to cognitive liberty’ — interview with Zoë Corbyn of The Guardian
Statement to the UN Human Rights Council on neurotechnology and human rights (2023)
This is the battle for your brain at work in Fast Company
Neurotech at work in Harvard Business Review
Opinion: Provide a résumé, cover letter and access to your brain? The creepy race to read workers’ minds in the Los Angeles Times
TikTok is part of China’s cognitive warfare campaign in The Guardian
See all of Nita’s work on her website
Criminal applications of current neurotechnology:
The high-tech alibi — use of Fitbit data in criminal cases by Erin Moriarty
Brain fingerprinting: Dubai Police give exclusive glimpse at crime-fighting technology by Salam Al Amir
AI, brain scans and cameras: The spread of police surveillance tech by Paul Mozur and Adam Satariano
How your brain data could be used against you by Jessica Hamzelou
Therapeutic and enhancement applications:
Walking naturally after spinal cord injury using a brain–spine interface by Henri Lorach et al.
Brain activity decoder can reveal stories in people’s minds by Marc Airhart
BrainNet: A multi-person brain-to-brain interface for direct collaboration between brains by Linxing Jiang et al.
Concerns about cognitive warfare:
‘Havana syndrome’ not caused by foreign adversary, US intelligence says by Julia Carrie Wong
An Assessment of Illness in U.S. Government Employees and Their Families at Overseas Embassies — a 2020 report from the National Academy of Sciences on Havana syndrome — followed by the 2023 Updated Assessment of Anomalous Health Incidents from the US National Intelligence Council
Chinese ‘brain control’ warfare work revealed by Bill Gertz
Efforts at regulation and preserving rights:
UNESCO’s International Conference on the Ethics of Neurotechnology (2023)
OECD Recommendation on Responsible Innovation in Neurotechnology (2019)
ICO tech futures: Neurotechnology — 2023 report from the UK Information Commissioner’s Office
Spain’s 2023 León Declaration on European neurotechnology: a human focused and rights’ oriented approach
Chile: Pioneering the protection of neurorights by Lorena Guzmán H.
Hands off my brainwaves: Latin America in race for ‘neurorights’ by Avi Asher-Schapiro and Diana Baptista
Transcript
Cold open [00:00:00]
Nita Farahany: There was a patient who was suffering from really severe depression — to the point where she described herself as being terminally ill — and every different kind of treatment had failed for her. Finally, she agreed with her physicians to have electrodes implanted into her brain, and those electrodes were able to trace the specific neuronal firing patterns in her brain when she was experiencing the most severe symptoms of depression. And then were able to, after tracing those, every time that you would have activation of those signals, basically interrupt those signals. So think of it like a pacemaker but for the brain: when a signal goes wrong, it would override it and put that new signal in. And that meant that she now actually has a typical range of emotions, she has been able to overcome depression, she now lives a life worth living.
That’s a great story. But that means we’re down to the point where you could trace specific neuronal firing patterns, and then interrupt and disrupt those patterns. Can we do the same for other kinds of thoughts?
Luisa’s intro [00:01:07]
Luisa Rodriguez: Hi listeners, this is Luisa Rodriguez, one of the hosts of The 80,000 Hours Podcast.
I was really excited to speak with today’s guest Nita Farahany, because while I was reading her book about cutting-edge neurotechnology I kept thinking, “Wait, how does this crazy technology already exist without me knowing about it? Does anybody know about it?”
And so I wanted to share it with all of you.
Accountability is our only defense. IOJ really needs support this month! Thanks for caring about the defense of our rights and standing up to tyranny using law.
Full Interview Below To Read:
Without further ado: Nita Farahany.
The interview begins [00:02:37]
Luisa Rodriguez: Today I’m speaking with Nita Farahany. Nita is a professor of law and philosophy at Duke Law School and the author of The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology. Her resume is incredibly long and impressive — too long to go into too much detail here — but to note just a few highlights: Nita was appointed by President Obama to the Presidential Commission for the Study of Bioethical Issues, where she served for seven years, and she’s part of the Expert Network for the World Economic Forum. So we’re very, very lucky to have her on as a guest. Thanks so much for coming on the podcast, Nita.
Nita Farahany: Thanks for having me.
Luisa Rodriguez: I hope to talk about where neurotechnology is going, and what that means for our rights to privacy, self-determination, and inequality. But first, my sense is that you’re both excited and worried about the impacts of neurotechnology on society. What’s your basic pitch for exactly what’s at stake here?
Nita Farahany: I think that’s exactly right: I’m both excited and very worried about it. And my basic pitch is that I think that this is transformational technology that could be the most empowering technology that we’ve ever introduced into society or the most oppressive that we’ve ever introduced. And it really depends on what actions we take to direct the technology and its applications, and the ways in which we handle the data that comes off of it — in ways that are either aligned with human interest and human flourishing or misaligned.
Luisa Rodriguez: Cool. I think we’ll get into a bunch of the reasons why you’re excited and worried in more detail. But I want to spend some time going through a few different applications of new neurotechnologies, and talk about where they are now and where they’re going, and then get to some of those implications.
Applications of new neurotechnology in security and surveillance [00:04:25]
Luisa Rodriguez: First, I wanted to ask you about applications of new neurotechnology in security and surveillance, starting with what already exists. What’s one important neurotechnology that exists today with applications to national security and surveillance?
Nita Farahany: The hard thing is that there’s not a lot of information out there about the ways in which neurotechnology is being used for surveillance from a national security perspective. Every military around the world has programmes that have invested in these technologies, for purposes of enhancement — to make supersoldiers, for example. And this starts with soft neurotechnology in the sense of things like drugs: for a very long time, many of the cognitive enhancers that have been developed really were first developed with national security or with military applications. Air Force pilots were some of the earliest test cases for modafinil, the drug for wakefulness.
So the neurotech, from a national security perspective, really has started with investments in enhancements, rather than surveillance. Then you look at the ways in which there are lots of investments in the military to try to see if technology that decodes the brain, or technology that stimulates the brain, could either be used to monitor soldiers’ brains, or to lead to enhancements of soldiers’ brains, or to lead to brain-to-brain communication. So if we break those down, we have the enhancement applications.
There is transcranial magnetic stimulation, or there is transcranial direct-current stimulation. These are different ways of stimulating the brain with outside neurotechnology. And with the transcranial direct-current stimulation, there have been devices that have been used with militaries to try to improve training, for example. So, target practice: having military uses of the technology while they’re trying to do target identification to enhance learning. Or there have been attempts to embed EEG sensors into helmets that people are wearing on the field to try to detect even things like if they sense, automatically, a target — it turns out our brains can pick up information well before we consciously process that — so trying to detect that information and then use that, sent back to command centre.
Or there have been attempts at brain-to-brain communication, where the military has invested a lot of money in trying to figure out if it’s possible to have silent transmission of communication between people on the field. None of this, as far as I’m aware, is at scale in any military across the world. They’re all significant research development programmes.
There’s a lot of military investment in trying to decode specific “evoked response potentials” in the brain. So this is how our brains automatically react to information. And that could be how we automatically react to a picture that’s shown to us to see if we recognise an image or recognise a co-conspirator, for example. Or it’s something like an N400 signal, which looks to see if you show congruence of recognition of different statements.
So DARPA, for example, has a programme underway in the United States to look at whether you can show a veteran two pieces of information to try to see if they’re suicidal or suffering from depression. So you might say two statements — “I” and then “am suicidal” — and do those two things go together? Are they congruent? Or does the brain show incongruence — that those two things don’t go together? And then use that as a way of trying to kind of probe the brain for information.
That’s an early-stage research programme, so it’s not at scale yet.
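To make that congruence test concrete, here is a toy sketch of how an N400-style effect is usually extracted: simulate many noisy trials, average them, and compare the response in a window around 400 ms for congruent versus incongruent word pairs. The sampling rate, effect sizes, and noise levels are all illustrative assumptions, not DARPA’s actual protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz (assumed)
t = np.arange(0, 0.8, 1 / fs)  # 800 ms epoch after the second statement appears

def simulate_epoch(congruent: bool) -> np.ndarray:
    """One noisy EEG epoch; incongruent word pairs get a larger negative
    deflection near 400 ms (the classic N400 effect)."""
    n400 = -1.0 if congruent else -4.0  # assumed effect sizes, in microvolts
    signal = n400 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return signal + rng.normal(0, 5.0, t.size)  # heavy single-trial noise

def n400_window_mean(epochs: np.ndarray) -> float:
    """Average many trials, then measure the 300-500 ms window of the ERP."""
    erp = epochs.mean(axis=0)   # averaging suppresses the random noise
    window = (t >= 0.3) & (t <= 0.5)
    return float(erp[window].mean())

congruent = np.array([simulate_epoch(True) for _ in range(80)])
incongruent = np.array([simulate_epoch(False) for _ in range(80)])
print(f"congruent pairs:   {n400_window_mean(congruent):+.2f} uV")
print(f"incongruent pairs: {n400_window_mean(incongruent):+.2f} uV")
# A reliably more negative incongruent mean is the "these two statements
# don't go together" signal described above.
```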
Luisa Rodriguez: Right, wow. I want to come back to that in a bit, but first, are there any other broader applications to national security?
Nita Farahany: I guess there’s two other categories in the national security side. One is to look and see whether or not it’s possible to develop brain biometrics. So each person’s brain seems to process information a little bit differently. And people are used to, at this point — even if they’re not happy about it — different biometrics being collected about them. For example, a faceprint used to unlock a mobile phone.
And a brain biometric is a functional biometric: it’s how your brain responds to something to unlock a device. So if I were to sing my favourite song in my head and you were to sing the same song in your head, the neural signatures would look different. And so we could use that as a functional biometric to unlock a phone: instead of needing a password, I would sing that little song in my head, and then the specific pattern of neuronal activity could be used to unlock the phone.
And a lot of militaries around the world are interested in these functional biometrics, especially brain-based biometrics, because they may be unique, much harder to replicate, and much harder to therefore hack in the ways that passwords are really easy to hack or things like that. It’s also really easy to change it: if somehow the functional biometrics were unlocked and somebody got access to them, I can just change it to my next favourite song or how I do a math calculation or anything else. And each time it’d be a unique neural signature that is unlikely to be replicated by other people.
One last category is this area that people are worried about which is cognitive warfare: the possibility that militaries are developing brain-based weapons to try to target people’s brains and disable or disorient them.
And this has come to light especially around claims of a “Havana syndrome” that a number of US diplomats have claimed. So it started with US diplomats who were in Havana and who experienced a common set of symptoms — like ringing in their ears and dizziness, and different neurological symptoms. And they had such consistency in their complaints about it that then the military started to look into it. Then the same kinds of claims started to happen with US diplomats in other places around the world. And because of where it first started, it came to be described as Havana syndrome.
The National Academy of Sciences did a big research look into the question, and they came out believing or concluding that they thought that there was likely some kind of microwave weapon, or some kind of weapon with electromagnetic frequency that was being used to actually target and disrupt people’s brains. And the US national intelligence agencies came out in a joint statement last year saying they didn’t think that a foreign adversary was likely behind it, and that there were still a couple dozen cases that they couldn’t explain, so they didn’t have any sort of answer as to what that was.
But at the same time, the Biden administration has sanctioned four Chinese-based companies for the development of purported brain-control weaponry. And you know, if you look at other areas — like the use of TikTok in the United States versus in China — informational warfare seems to be growing as a concept of cognitive warfare. And a lot of different militaries around the world and NATO have started to hold convenings and conferences and conversations around this concept of cognitive warfare and whether this might be a new domain of warfare that’s really underway.
Luisa Rodriguez: Yeah, that’s a bunch of technologies that I think many people just have absolutely no idea exist now — or at least some of them are being tested, some of them may be more at scale than others — but I think there’s a tonne of mind-blowing and just pretty new stuff for many people there. So I do want to go through a couple of those one by one.
Controlling swarms of drones [00:12:34]
Luisa Rodriguez: One thing you mentioned, and a thing that I read about in your book that really blew my mind, is the potential to use brain-computer interfaces to create so-called supersoldiers that can control swarms of drones with their minds, communicate and upload data brain-to-brain, and identify targets subconsciously. You’ve already alluded to a few of those things, but can you just describe what that would look like in a bit more detail, starting with controlling swarms of drones with your mind?
Nita Farahany: Sure. So one thing I think people maybe don’t fully appreciate is the possibility of so much more that we could control with our brains than our bodies. And that’s what a lot of neurotechnology is looking at: could you take signals from the brain and use those in really different ways?
Probably the best way to understand the swarm of drones is, I was at a presentation in 2018 by a company that was later acquired by Meta called CTRL-labs, and the guy started the presentation by saying, “Why are we such clumsy output devices? Our brains are incredible input devices, we have so much information stored in our brains, but we’re limited by our bodies. Wouldn’t it be great if instead of using these sledgehammer-like devices at the end of our arms” — as he was waving his hands — “we could operate octopus-like tentacles instead?”
And the idea was you could really use the output of the brain, if you trained it, to be able to control everything from octopus-like tentacles to an entire swarm of drones. And a swarm of drones responds to directionality: go left, go right. But instead of controlling one drone at a time, it’s organising all of them to act in a swarm-like collective behaviour.
What the military has tested out is the possibility of using one brain that is controlling one drone — so using your mind to think up, down, right, left — and to instead have an entire swarm that is responsive to that activity. So it’s not necessarily that your brain is connected to all of the swarm; there’s a lot of programming that’s happening to connect the drones to each other, that are responding in a swarm-like response to your brain activity that is serving as the interface, the neural interface, for how it actually operates.
And it’s kind of incredible to think about: we’re so used to our brains operating our bodies, and here we’re instead thinking about our brains operating a lot more, in collective, animal-like swarm behaviour rather than the individual human behaviour that we’re more used to.
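As a rough intuition for that architecture (one decoded intent steering a self-organising group), here is a toy 2D flocking sketch in Python. The “decoded command” is just a hardcoded vector standing in for whatever a real neural interface would output, and the flocking weights are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0, 10, (12, 2))  # 12 drones scattered in a 2D field
velocities = np.zeros((12, 2))

def flocking_step(pos, vel, command, dt=0.1):
    """command is the single decoded intent (e.g. a unit vector for 'go left').
    Each drone blends that shared heading with cohesion and separation, so the
    group moves as a swarm rather than as twelve independently piloted craft."""
    cohesion = (pos.mean(axis=0) - pos) * 0.05       # drift toward the centre
    separation = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos[i] - pos
        dist = np.linalg.norm(diff, axis=1)
        close = (dist > 0) & (dist < 1.0)
        if close.any():
            separation[i] = diff[close].sum(axis=0) * 0.2  # push neighbours apart
    vel = 0.8 * vel + command + cohesion + separation
    return pos + vel * dt, vel

command = np.array([-1.0, 0.0])  # stand-in for a decoded "go left" intent
for _ in range(50):
    positions, velocities = flocking_step(positions, velocities, command)
print("swarm centre after 50 steps:", positions.mean(axis=0).round(2))
```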
Luisa Rodriguez: Right. So if you’re thinking the controls or the operations for one individual drone — and it’s something like up, down, right, left — is the thing that you’re gaining the time saved from having to use your finger to click on a keyboard up, down, right, left? Is there more that you gain?
Nita Farahany: Yeah, you don’t even have to think up, down, left, right. Right now, we have become so used to the interfaces we use, like a keyboard or a mouse, that we’re used to not thinking, “I’m going to move my finger left and I’m going to move my finger right.” But really what we’ve done is we’ve added friction between us and operating a device: there’s some intermediary that you have to use brainpower to operate; you have to use your brain to have your finger move left and right. And just think about the time that you lose there too.
But it’s also unnatural. I mean, we’ve learned it, so it’s become more natural. But think about right now: whoever’s listening can’t see me, but I’m moving my hands in the way that I normally do when I’m expressing a thought. I’m not thinking, “move my hand up or down or left or right”; it’s just part of how I express myself. Similarly, the idea of being able to operate a drone is you’re not thinking, “now go left” or “now go right”: if you’re looking at a screen, that is the navigation — you’re just navigating, right? You’re just intentionally navigating. And then the drones are an extension of your body; they’re an extension of your mind that are navigating based on how you are naturally navigating through space.
And that’s the difference between neural interfaces: it’s meant to be a much more natural and seamless way of interacting with the world and with other objects that become extensions of our minds. Rather than the more direct connection that we have right now with our body, it’s forging a connection with external technology without the kinds of intermediaries that we’re used to — which, if you kind of step back and look at them, they’re a little weird. It’s a little weird that you move your hand around to actually try to navigate through a space. Or if you’re in virtual reality, it’s weird that you have to be using a joystick to move, right? You should just be able to think about moving naturally.
Luisa Rodriguez: Totally. Yeah. That really, really helped me. I don’t know if this works, but another analogy I’m thinking of is that I’ve now got muscle memory for my keyboard. I know that the L is on the right and the A is on the left. And not only will it remove the fact that I had to learn to type, but it, in theory, could also remove something like the fact that I’m used to having to translate whatever kinds of thoughts I have that are both verbal and visual into linear sentences created on a Word doc where I edit in a certain way, and I can’t backspace as quickly as I want to, or I have to switch to my mouse. It’s a mix of physical hand-eye coordination and also just something like the way of thinking.
Nita Farahany: Yeah. We’ve learned a way of expressing ourselves through chokeholds, right? But we have become accustomed to those chokeholds, and so it’s as if it’s natural — and in many ways, it is for us, because that’s what we’ve learned. That’s how we’ve wired our brains. Neural interface imagines a new world where, rather than having the chokehold, you are operating more like one with the devices that you’re operating, and you’re operating without the chokeholds in between.
There’s still going to be limitations on being able to have a full-throttled thought expressed through another medium. We have limitations of language right now of how we communicate: you can hear my words, but you can’t also see the visual images in my mind that go with those words. You can’t feel the feelings that I am feeling along with the words that I’m saying. You can pick some of that up from the tenor of my voice or pieces like that, but you’re not getting all of it.
And even when you’re interacting with a swarm of drones, there’s still these limitations. But I think people dream of a world in which brain-to-brain communication might enable sending across to another person a more full-throttled thought than we currently have. I don’t know of any technology that does that yet. I don’t know of anything that actually captures it. And part of it is, I don’t think anybody has figured out how to decode those multiple layers of thought, from cognition to metacognition to the full embodiment of thought. But I think it’s neat to think about that, the possibility of actually getting to that level of communication with one another.
Brain-to-brain communication [00:20:18]
Luisa Rodriguez: Yeah, cool. So the idea of neurotechnology, removing these chokeholds I think is going to be a theme. So in this case, we’re talking about removing that chokehold in interacting with drones. You also just mentioned communicating and uploading data brain-to-brain. Can you say more about what that might look like in the military context?
Nita Farahany: Yeah. One thing people worry about on the field is interception of communication. They worry about enemy combatants overhearing or intercepting or decrypting whatever they’re sending to each other, and also that the speed and the complexity of what you’re sending back and forth between people may be limited by existing technology. Brain-to-brain communication imagines a world in which you could send signals to another person from your brain directly to their brain.
The closest that we’ve really come to some of that brain-to-brain communication has been a neat study that was done at the University of Washington. There were three different people in three different rooms, and they were playing a collaborative game of something like Tetris, where two people had on electroencephalography headsets, and the third person also had on I think an electroencephalography headset, but also something like a neurostimulation device.
Two people were considered Senders; one was a Receiver. The Senders could see the entire game board, so they could see the piece falling from the top of the board, and they could see the bottom of the board, so they knew whether or not you needed to rotate the piece in order to satisfy the row. The Receiver could only see the falling piece; they couldn’t see the bottom of the board, and had to use the brain signals that were being sent from the Senders: “yes, rotate” or “no, don’t rotate.” And so they would think, “yes, rotate” or “no, don’t rotate.” That would be translated into a signal that would be received by the Receiver, and that person would see it as a flash of light in their brain for “yes,” or no flash of light for “no.”
And they played this game with different groups of these three-person teams getting above an 80% accuracy rate of solving the rotation of the piece. So it’s not like a full thought — it’s not like sending words to another person’s brain — but using modes of communication, like a flash of light. So in advance, you would set up some kind of code: yes, fire; no, don’t fire. You’re going to see a flash of light if you’re supposed to fire. You’re not going to see a flash of light if you’re not supposed to fire. And then using that silent brain-to-brain communication mediated through neurotechnology as a way to communicate with another person. Which is pretty mind-blowing.
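A back-of-the-envelope model of that agreed-code signalling: one bit per round, with assumed error rates for the EEG decode on the Sender’s side and the perceived flash on the Receiver’s side. The error rates are invented, chosen only so the result lands near the accuracy reported for the study.

```python
import random

random.seed(0)
DECODE_ERR = 0.05  # assumed chance the Sender's yes/no intent is misread from EEG
STIM_ERR = 0.05    # assumed chance the Receiver misperceives the flash of light

def transmit(rotate: bool) -> bool:
    """Sender thinks 'yes, rotate' or 'no'; by prior agreement, the Receiver
    treats a flash as 'rotate' and no flash as 'don't rotate'."""
    decoded = rotate if random.random() > DECODE_ERR else not rotate
    perceived = decoded if random.random() > STIM_ERR else not decoded
    return perceived

trials = 10_000
truth = [random.choice([True, False]) for _ in range(trials)]
correct = sum(transmit(v) == v for v in truth)
print(f"accuracy over {trials} rounds: {correct / trials:.1%}")
# Expected: 0.95 * 0.95 + 0.05 * 0.05, roughly 90%; in the same ballpark as
# the accuracy reported for the three-person teams.
```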
Luisa Rodriguez: It’s really mind-blowing. As I was reading your book and reading about studies like this, I just had this feeling of, “How does this exist and I didn’t know?” I don’t feel like anybody really knows. I’m sure that’s not true.
Nita Farahany: No, I mean, honestly, I think that’s probably one of the things that I hear most from people: “How is this so advanced and I had no idea about it? Why are people not talking about it?” Even a lot of the neuroscientists have said to me that in reading the book, putting it all together — all the different pieces that I’ve put together in one book, and showing it kind of sector by sector both where the technology is but also the ways it’s being applied in all of these different contexts — I think for a lot of people has been very startling.
That was a really intentional move that I decided to use in the book: there’s not a lot of futurism in the book; it’s mostly describing existing technology — and that was so that people would read it and understand that I was talking about something that is here and now, just not fully at scale across society. And hopefully to help serve as a wakeup call, to say we’re sitting at a moment before technology that will truly transform humanity is about to go to scale across society. And this is what is already happening in this space, what can already be achieved — and you can bet with all of the advances in generative AI, and all of the rapid ways in which the technology is going, that we’re going to be able to do a lot more five and 10 years from now. That doesn’t change that, right now, it’s already here, and we need to do something about it.
Luisa Rodriguez: Totally, yeah. I had so many moments like this, where I find it definitely interesting to think about where the technology might go, but the specific things that are already happening — and again, we’ll get into a bunch of them — truly just blew my mind.
Pulling it back in a bit: this kind of technology that was used in this Tetris game, I just want to understand how it works a bit better.
Nita Farahany: So everything we think, everything we feel, when that’s happening, neurons are firing in our brain. And when you have any particular thought, like relaxation, or you have a particular thought — like “yes, rotate” or “no, don’t rotate” — hundreds of thousands of neurons are firing in your brain at the same time in characteristic patterns that can be picked up. Those are called “brainwaves” and they can be picked up — the kind of patterns together — by electroencephalography. So these are just sensors that are placed on the scalp, it picks up the electrical activity that’s happening in the brain, and then those patterns can be decoded with AI.
So it’s like any other kind of pattern, where it can be translated and trained over time. And it happens with training where lots and lots of prior research has been done, where you’ll say, this is what it looks like when a person’s brain is relaxing, and this is what it looks like when they’re stressed. Or this is what it looks like when they’re saying yes, or this is what it looks like when they’re saying no. Each person’s brain is slightly calibrated to their own brain activity when they put on one of the devices as well. So that’s EEG.
There’s lots of different brain signals that can be picked up, but one of the dominant ones for these more widespread headsets is EEG activity. People may have heard of EEG, and maybe what they’re thinking of right now is a big medical cap that has a bunch of wires coming off of it and a bunch of gel that’s applied, and like 64 or 128 of these weird-looking things. One of the big innovations has been dry sensors, so you don’t have to apply them with gel, and just a few of them — so a few worn across the forehead or some inside of the ear, like worn inside of earbuds or headphones. I have on headphones right now. You have on headphones right now. The soft cups around them can be packed with EEG sensors that can pick up that brain activity.
So in the Tetris example that we were just talking about, they’re not wearing big medical-grade caps. They’re wearing something that could be worn in the form of a baseball cap or a stiff headset or a headband worn across the forehead that has these sensors. So the devices are getting smaller and smaller, and the capability of decoding it is getting higher and higher.
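For readers who want a concrete picture of what “decoded with AI” means here, this is a minimal sketch of the calibrate-then-classify loop, with synthetic band-power features standing in for a real headset and real signal processing. It assumes numpy and scikit-learn, and reflects no particular vendor’s pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def band_power_features(state: str, n: int) -> np.ndarray:
    """Fake (alpha_power, beta_power) pairs: relaxed brains lean toward alpha
    (8-12 Hz), stressed ones toward beta (13-30 Hz). Values are invented."""
    if state == "relaxed":
        alpha, beta = rng.normal(8, 1.5, n), rng.normal(3, 1.0, n)
    else:
        alpha, beta = rng.normal(4, 1.5, n), rng.normal(7, 1.0, n)
    return np.column_stack([alpha, beta])

# Calibration: the device prompts the user ("relax", "now do mental
# arithmetic") and records labelled epochs, then fits a per-user model.
X = np.vstack([band_power_features("relaxed", 100),
               band_power_features("stressed", 100)])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)

# Live use: each new epoch is reduced to the same features and classified.
new_epoch = band_power_features("stressed", 1)
print("predicted state:", ["relaxed", "stressed"][int(clf.predict(new_epoch)[0])])
```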
Luisa Rodriguez: OK, so it’s picking up these brainwaves and it’s smart enough now to decode them reasonably well. Where exactly is the limit on how well we can decode? Can you give some examples of things that we can do and things that we can’t do yet?
Nita Farahany: I think there have been bigger advances made in decoding the brain than in brain-to-brain communication so far. In decoding the brain, there’s lots of different signals, and those different signals have different value. So the best studies — the ones that people may have heard about in the news or something — are oftentimes with something called functional magnetic resonance imaging, like a giant MRI people go into. And the benefit of this is it can look really deeply into the brain. So you get spatial resolution, but it’s a very, very slow signal; it’s picking up what’s called blood oxygenation levels. So as you’re thinking, blood goes into one area, it’s oxygenated, you use up the oxygen, it leaves, it’s deoxygenated. That signal can be picked up on fMRI. And so the really powerful studies that have done things like decoding whole paragraphs that a person is thinking about, or visual images — like when they’re dreaming or where they’re imagining it — those have primarily been through fMRI, not these portable devices.
So EEG that we were just talking about is better at picking up bigger brain states. Think about your head: you’ve got a big, thick skull, right? A lot of brainwaves don’t make it through that skull. And then it’s a very noisy signal, because you’re moving, you’re blinking, muscles twitching — so that’s a noisier signal, so you don’t get as much. You pick up more things like, are you happy? Are you sad? It’s easier to pick up yes, no, left, right — rather than whole paragraphs.
Luisa Rodriguez: Does it get harder when you’re trying to pass it to someone else? I can imagine it being a pretty different technology going from “What is this person thinking?” to “How do you infuse that into someone else’s brain, such that it manifests as a flash of light?” That’s just pretty wild.
Nita Farahany: That’s harder, right? What’s easier to do is brain to text. So I can have something decoded, created as a text, and then send it to you. And then that is brain-to-brain in a way — it’s just not directly into your brain; you have to read your text message to get it.
Luisa Rodriguez: OK, so you’re reading a physical text message, like on a phone.
Nita Farahany: Right. So that is one of the brain-to-brain things that people have talked about, but it’s not really brain-to-brain; it’s brain to brain mediated through a text message or something else.
Luisa Rodriguez: Sure. It’s kind of like voice control, but with your brain. And then someone reads it.
Nita Farahany: Yeah, exactly. With brain-to-brain, there are some signals that people have started to figure out. I was at a conference at the Royal Society recently, and this guy was following me around, and he was like, “I want to give you a demo of my neurotech.” I was like, “I don’t want a demo of your neurotech.” Finally, I was like, “Fine, I’m about to leave. I’ll do a demo of your neurotech.” And he put these headphones on me, and he’s like, “How much time do you have?” And I was like, “Five seconds, because I’m going back to the airport.” And he’s like, “This demo is six seconds, and you can choose this one: it induces a feeling of drunkenness or vertigo.” So he pushes it, and oh my god. I had to hold onto something, because suddenly I experienced vertigo. OK, I’m impressed, right? And I had to leave, and happily, the vertigo went away and I was able to go to the flight.
But think about that, right? Suppose you and I agreed in advance that every time you experience vertigo, that means “yes,” and when you experience nothing, that means “no.” And so you see the piece falling from the top of the screen, and suddenly you have vertigo, and you’re like, “OK: yes.” And then you see a piece falling and you don’t experience vertigo, and you’re like, “OK, no.” I think that’s kind of how to think about brain-to-brain right now: it’s almost like Morse code; you agree in advance on what the signal means.
And so that same idea, which is inducing a flash of light, is stimulating the visual cortex. So there are specific signals and specific stimulation that people have figured out can do things like appear in the visual cortex, or give you nausea, or give you vertigo, or give you a shot of dopamine and pleasure. So it’s kind of hacking into the brain’s basic functions like that, and then agreeing in advance what that means for communication purposes.
Luisa Rodriguez: Cool. So that’s the current state of things. And when you imagine this being applied in the military context, eventually we can imagine it being used by soldiers to communicate brain-to-brain, and then also to upload data. But do you have specific applications in mind?
Nita Farahany: Yeah, I imagine that this is sort of a seamless way of communicating on the battlefield without risk of interception; it’s primarily about secure communication. That may just be because I’m limited in my military thinking — I’m not a national security expert — but think about the ways in which brains are both used for enhancements, but also used to create supersoldiers, and then used to try to have secure ways of communication, or brain biometrics to have a much higher way of being able to access secure information. But I’m sure that there’s a lot more there that I just don’t know about, that is all classified that I don’t get to know about.
Identifying targets subconsciously [00:33:08]
Luisa Rodriguez: Right. Another one you mentioned is identifying targets subconsciously. How would that work?
Nita Farahany: This gets back to the idea that there’s a lot that’s happening subconsciously in our brain before we consciously process information. And target identification turns out to be one of them: the brain may automatically recognise features of a target. If you’re looking at surveillance images, for example, the brain may detect and recognise through one of these evoked response potentials, a target before — and this can be milliseconds to seconds later — you consciously are aware of it. Or maybe it never reaches your conscious awareness, but your unconscious, subconscious processing and visual scanning is able to pick it up.
So software systems are being trained on this, where you have somebody who’s very good at target identification, who maybe can’t articulate what it is about that target that made them identify it. Some of the best people at target identification are not very good at training other people, because they can’t explain and verbalise what it was, like the characteristics. This is sort of the same idea of like I can’t fully convey a full thought to you, but your brain is able to do a lot more than we otherwise think. And so people who are really good at target identification, usually it’s by another person watching them — rather than them explaining to the person how to identify a target — and then kind of repeat processing of learning by watching.
So target identification using EEG, if you can figure out what that signal is and identify it every time they recognise the target, you can both use that to potentially train future people, but also use it as an early detection system. So this person who’s really good at target identification lit up: you could have AI look at it. What is it that makes this a target every time the person is able to identify it? There’s been some really interesting studies that have been done around that kind of automatic recognition of features like targets, and what is it that makes some people so good at it? And can we use that as an early warning system, or use that to send that signal back to command so that they get automatic threat detection much faster than somebody could verbalise it?
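A toy version of that early-warning loop: score each image-locked EEG epoch against a P300-like template and flag anything above a threshold, milliseconds after the brain responds. The template shape, amplitudes, and threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 250                               # sampling rate in Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)          # 600 ms epoch locked to each image
template = np.exp(-((t - 0.3) ** 2) / (2 * 0.04 ** 2))  # P300-ish bump near 300 ms

def epoch(is_target: bool) -> np.ndarray:
    """One image-locked EEG epoch; targets evoke a larger deflection."""
    amplitude = 6.0 if is_target else 0.5    # assumed microvolt amplitudes
    return amplitude * template + rng.normal(0, 3.0, t.size)

def p300_score(x: np.ndarray) -> float:
    """Matched-filter score: project the epoch onto the template."""
    return float(np.dot(x, template) / np.linalg.norm(template))

THRESHOLD = 12.0  # would be tuned on calibration data; assumed here
for i in range(200):
    is_target = rng.random() < 0.1           # ~10% of images contain a target
    if p300_score(epoch(is_target)) > THRESHOLD:
        print(f"image {i}: possible target, flagged for review")
```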
Luisa Rodriguez: Wild. So that’s the kind of way we can imagine so-called supersoldiers going forward. And it sounds like most of this technology is earlyish?
Nita Farahany: I think so. But again, from a national security perspective, I couldn’t really tell you. What I can tell you is that — at least from looking at all of the research studies that are published and from anything I know from conversations with people in the military — it seems like a lot of this stuff is early, with a big question mark around stuff like Havana syndrome, where there are a lot of declassified documents that have come out of China that suggest that they’re investing a lot in purported brain-disrupting technology. And if you could have something like a microwave or something like radio frequencies that you’re able to kind of pinpoint and target, certainly that could disrupt brains.
But we’re still, I’d say, at kind of primitive days of being able to have full, robust brain-to-brain communication between people, or thinking that the most efficient way to operate a swarm of drones is by having somebody wear an EEG headset to do so.
Luisa Rodriguez: Right, sure. And again, it feels both like we don’t have the supersoldiers yet, but also all of the things you’ve just described to me, when I first heard about them, completely shocked me. I had no idea that we were anywhere near there.
Nita Farahany: Yeah, probably I’m normalised to it. I’d say we are so much further ahead than 99.9% of people realise. And yet, from a neuroscientific perspective, there’s still a far gulf to pass until we ever get to full brain-to-brain communication. But there’s still a lot that we can already do.
Neuroweapons [00:37:11]
Luisa Rodriguez: Going back to a category you just mentioned, which is research being done on different kinds of weapons that would use neurotechnology to basically damage brain tissue — I think this includes things like acoustic weapons, laser weapons, and electromagnetic weapons. What do we know about these?
Nita Farahany: Not a lot, to be honest. The National Academy’s report that looked at the possibility of microwave weapons that could be used to disrupt brain activity sort of posited what that looks like scientifically. And there were a whole bunch of people in the scientific community that looked at it and said that’s not possible, that you’d have to have a very large microwave and that would be detected on satellite; it’s not the kind of thing that would just happen.
So I’d say it’s really disputed in the scientific community as to where we are with any of those technologies and how they actually interact with the brain. The best thinking on this — or the best, most public discussions about it — all centre around the scientific discussions of Havana syndrome. There’s also a whole lot of people who believe that they suffer from the effects of these kinds of technologies and kinds of weapons. I don’t think that they’re deployed on any kind of scale that would lead to ordinary people and ordinary civilians experiencing the effects of them.
Luisa Rodriguez: Are these kinds of technologies even accepted under international law? Are there even any laws that would apply to them?
Nita Farahany: Really good question. And it’s unclear: they don’t fall clearly under bioweapons or chemical weapons or other kinds of treaties that we have. I’ve argued that the use of them would clearly violate different provisions of the Universal Declaration of Human Rights. But it’s not as if there’s ever been a case that has been brought where they’ve been interpreted to apply to the destruction of capacities of thinking or experiencing self that it would cause.
So there’s certainly a lot of discussion internationally around neurotechnologies and the regulation of them. Not a lot that’s been happening around what that means for the development, use, and deployment of weapons for cognitive warfare.
Luisa Rodriguez: Do you have a sense of what you’d like to see in an ideal world, in terms of the kinds of international agreements that might regulate these kinds of weapons?
Nita Farahany: Yeah, I go into this in the book, in the chapter “Big Brother is Listening,” where I think that there are provisions of the Universal Declaration of Human Rights that should really guide us to say: the use of these kinds of weapons to disable, disorient, or destroy in any way the human capacity for thinking and for decision making and just operating as a self really should be among the most fundamentally regulated, most fundamentally prohibited kinds of things that are out there. I mean, they get at our capacity for even being — and destroying the capacity for being seems like it would violate the core of all human rights.
In addition to that, I look at different provisions around torture and in particular psychological torture, and think that some of the treaties and some of the regulations that exist there should be applied in this context as well. That many times torture has really focused on physical pain that a person experiences — and even psychological torture has really looked more at physical pain that a person is suffering. Whereas to me, it seems like the basic idea of stripping a person of their dignity and their ability or capacity for thought would also constitute psychological torture, and that we ought to interpret it that way.
Luisa Rodriguez: That makes a tonne of sense to me. I guess, thinking about just like, other risks and things we should be worried about for these kind of military applications of this technology. One risk that comes to mind is the potential for hacking. To the extent that you’d be kind of uploading a bunch of data from your brain — sending it out brain-to-brain, or just uploading it to physical machines — does that make brains in general more vulnerable to some kind of hacking by another state or a non-state actor?
Nita Farahany: Maybe. We’ve talked a little bit about how we’re not quite as there yet in writing to the brain as we are in reading the brain, but we are somewhat there in writing to the brain. I’ll answer this a little bit by an analogy.
There was a patient who was suffering from really severe depression — to the point where she described herself as being terminally ill, like she was at the end of her life — and every different kind of treatment had failed for her. Finally, she agreed with her physicians to have electrodes implanted into her brain, and those electrodes were able to trace the specific neuronal firing patterns in her brain when she was experiencing the most severe symptoms of depression. And then were able to, after tracing those, every time that you would have activation of those signals, basically interrupt those signals. So think of it like a pacemaker but for the brain: when a signal goes wrong, it would override it and put that new signal in. And that meant that she now actually has a typical range of emotions, she has been able to overcome depression, she now lives a life worth living.
That’s a great story, right? That’s a happy story and a happy application of this technology. But that means we’re down to the point where you could trace specific neuronal firing patterns, at least with implanted electrodes, and then interrupt and disrupt those patterns. Can we do the same for other kinds of thoughts? Could it be that one day we get to the point where if you’re wearing EEG headsets that also have the capacity for neurostimulation, that you could pick up specific patterns of thoughts and disrupt those specific patterns of thoughts if they’re hacked? If your device is hacked, for example. Maybe.
I mean, we’re now sort of imagining a science fiction world where this is happening. But that’s how I would imagine it would first happen: that you could have either first very general stimulation — like I experienced at the Royal Society meeting, where suddenly I’m experiencing vertigo — and somebody could hack your device. Like, I’m wearing this headset for meditation, but it’s hacked, and suddenly I’m experiencing vertigo and I’m disabled. You know, devices get hacked. We can imagine devices getting hacked — and especially ones that have neurostimulation capacity, they could be hacked either in really specific patterns or they could be hacked in ways that generally could just take a person out.
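Reduced to its logic, the pacemaker-like loop described above is: monitor a biomarker, detect the pathological pattern, fire an interrupting stimulation. Everything in this sketch (the biomarker readout, the threshold, the simulated symptom episode) is a placeholder; real responsive neurostimulation systems are individually tuned medical devices.

```python
import random

random.seed(5)
THRESHOLD = 0.8  # assumed biomarker level that marks symptom onset

def read_biomarker(t: int) -> float:
    """Stand-in for a decoded symptom signature from implanted electrodes."""
    baseline = 0.3 + 0.1 * random.random()   # normal background activity
    episode = 0.6 if 40 <= t < 60 else 0.0   # a simulated symptom episode
    return baseline + episode

for t in range(100):
    if read_biomarker(t) > THRESHOLD:
        # In a real closed-loop device this would trigger a stimulation
        # pulse that interrupts the pathological firing pattern.
        print(f"t={t}: pattern detected, delivering interrupting stimulation")
```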
Luisa Rodriguez: Well, that is incredibly horrifying.
Nita Farahany: So do I worry about that? Yes, I worry about it. I’ve been talking with a lot of neurotech companies about how there’s not a lot of investment in cybersecurity that’s been happening in this space. When you start to imagine a world in which not only could the information like what you’re thinking and feeling be hacked — so from a privacy concern — but if you’re wearing a neurostimulation device, can the device be hacked to create kinds of stimulation that would be harmful to the person? Maybe so. It seems like a really basic and fundamental requirement for these technologies should be to have really good cybersecurity measures that are implemented.
Luisa Rodriguez: Yes, I completely buy that. That sounds really important to me.
Neurodata and mental privacy [00:44:53]
Luisa Rodriguez: Moving to another technology that you mentioned already, and that seems relevant here: It sounds like different governments are getting excited about the possibility of using brain signatures for identification. Can you explain what that looks like?
Nita Farahany: So neural signatures may be unique across everyone. We don’t know yet, because not everybody has had their neural signature quantified yet or registered yet. So we can start with something called “authentication”: when you have a baseline that you record of something, and then you match it, that’s authentication. “Identification” would be a world in which I can pick you out and identify you uniquely, rather than authenticate you. Brain biometrics right now are primarily being looked at from an authentication perspective, because we don’t know if they’re unique across billions of people.
What that means is if I record myself reading a sentence — whatever that sentence is, like, “Nita had a Kind bar for breakfast this morning” — and you think that same sentence, “Nita had a Kind bar for breakfast this morning,” and we both record that, then our neural signatures, when we’re thinking that, mine will look different than yours, even though it’s exactly the same sentence. And that — whether it’s a little song I sing in my head or a sentence that I think — is something that’s called a “functional biometric” rather than a “static biometric.” A functional biometric is you’re doing something. It’s sort of like the patterns that you unlock a phone with.
Luisa Rodriguez: Sure. The shapes, a star or whatever you do with your finger.
Nita Farahany: Yeah. And I think how you do it is more telling than the numbers or something. It’s a functional biometric. So that’s what brain biometrics are: they’re functional biometrics, rather than just the resting state of your brain. It’s you doing something and then using that, whatever that “doing something” is, to unlock it. So every time I think “Nita had a Kind bar for breakfast,” I can use that. I can record myself thinking, “Nita had a Kind bar for breakfast” — my brainwave activity — and then I can unlock whatever it is. Get into the secure facility that I’m trying to get into by thinking the same thing.
And a lot of governments are investing in research into brain biometrics because they’re looking for secure ways to authenticate people, and this would be a very secure and silent way of authenticating somebody: I don’t have to say my password out loud, you don’t ever see it, it’s different between us. And you can change it really easily: today I think, “Nita had a Kind bar for breakfast,” tomorrow I change it to, “Nita had oatmeal for breakfast,” and just go down the path each day.
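To pin down the authentication-versus-identification distinction in code: enroll one template of a user thinking their pass-thought, then accept or reject later recordings by their similarity to that template. Authentication matches against one known baseline; it never tries to pick a person out of billions. The synthetic signals, the seeded “personal signature,” and the 0.7 threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 256        # length of the recorded feature vector (assumed)
ACCEPT = 0.7   # similarity threshold; would be tuned on real data

def think_passphrase(user_seed: int) -> np.ndarray:
    """Each user's rendition of the same pass-thought has a stable personal
    signature (seeded) plus fresh session-to-session noise."""
    signature = np.random.default_rng(user_seed).normal(0, 1, N)
    return signature + rng.normal(0, 0.4, N)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = think_passphrase(user_seed=1)   # enrollment: record the baseline

genuine = think_passphrase(user_seed=1)    # same user, new session
impostor = think_passphrase(user_seed=2)   # someone else, same pass-thought

print(f"genuine attempt:  similarity {cosine(enrolled, genuine):.2f}, "
      f"accepted={cosine(enrolled, genuine) > ACCEPT}")
print(f"impostor attempt: similarity {cosine(enrolled, impostor):.2f}, "
      f"accepted={cosine(enrolled, impostor) > ACCEPT}")
```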
Luisa Rodriguez: And what do we know about neuroscientifically why they’re different?
Nita Farahany: Well, I have a theory. I’m not a neuroscientist, but I think part of it is our brains and how we think are shaped by the uniqueness of every experience we’ve ever had and the structure of our brains. So when I learned about a Kind bar, the association of Kind bar in my brain may be different than the association of Kind bar in your brain. And so when you first developed the neuronal pattern for Kind bar in your brain, it’s imbued with all kinds of context-specific information, and everything that ever came before it for how your brain processes information, which is going to look a little bit different.
Now, it’s not so different, right? I mean, you and I both have visual cortexes, and we have a sensory cortex. The brain structures are preserved across brains, but the very specific neuronal firing is going to be a little bit different between each one. Which also means that each brain has to be calibrated when they put a device on. It’s going to be like the little differences are important enough that you have to calibrate the device to your brain.
Luisa Rodriguez: Right. And is the way you do that basically like playing different songs to a bunch of people when they first use the thing? Like, “We’re going to play you the same three songs and then look at the very specific individual differences that we see in your reactions to the songs”? Is that the kind of calibration, or is it something else?
Nita Farahany: I think it depends on the context for what it’s being used for. So if I’m using a device that has been calibrated for gaming — where left, right, up, down means something for the device — then you’re going to calibrate it around left, right, up, down. So you’ll do a set of exercises that will be like, push the box to the right and push the box to the left with your mind, and it starts to kind of learn that. So you calibrate it to figure out what that looks like for you. But when you start to get to devices that are around typing, for example, or more complex kinds of decoding, it’s kind of use-case-specific for whatever you have to get the baseline for.
Luisa Rodriguez: Sure. So then, because of all this context and very microscale, individual differences between people, your reaction to a song or the way you think “Nita had a Kind bar for breakfast” is different enough that you can use it to distinguish between people. That alone is just incredible. And incredible that we’re relatively close to this being a technology that governments would actually use to identify people, is my impression. Is that basically true?
Nita Farahany: Yeah, it’s basically true, and it is incredible, for sure. And I used that example in the book because there were a few things I was trying to do in the book. One of them was to help people understand that this is technology that is really here, but it’s also to kind of build the case to explain why it’s going to go to scale across society — like how people are going to end up integrating it into their everyday lives, and how, without even realising it, our last fortress of privacy will fall.
And one of them, on the government side, is on brain biometrics. People have given up their thumbprint and their faceprints really without even thinking about what the implications are in order to unlock their devices. Like, “Oh sure, Face ID. I will give it to every single application on my phone and every single company that’s out there in order to make it easier for me to not have to type in my password.” And the same thing I think is going to happen with the brain, where if brain biometrics become the gateway for you to be able to access other information, you’d be like, “Sure, here’s me singing a little song” — without realising you’re giving away how your brain works, right? And you’re uploading information and raw brainwave activity and handing that over on a silver platter.
So it’s one of the many examples that I put into the book to help people both understand what’s already happening, and why it’s going to end up becoming part of your everyday life.
Luisa Rodriguez: I think when I was reading about the example, I was like, this just sounds pretty good. It sounds like it’ll increase the safety of my device and my stuff, because I was thinking at the time that it might be much harder to recreate my unique snowflake brainwaves than it would be to hack into my password manager.
Nita Farahany: Well, I think that’s right. Let’s give everybody a moment to say that it’s not just like, “Oh, isn’t that creepy? We weren’t even thinking about it” — there may be really big benefits to actually adopting brain biometrics. It will be more secure and it will be easier. And a functional biometric is probably a lot better than a lot of the passwords that are out there, and people suffer from identity theft and hack into systems all the time. So there are really good reasons to invest in functional biometrics, including brain biometrics.
I just don’t want people to stop there with that thought. So you were about to go on. Now go on, and then I want you to be like, “But…”
Luisa Rodriguez: And then I was like, I’m imagining I’m wearing this headband. I’m using it for all of my devices. And then you point out that it’s not totally clear exactly what brain data will be accessible to whoever’s collecting it. Can they sell it? Are they looking specifically at my reaction to that song? Or, kind of like location data on my phone, where I’ve left that on because that has some benefits to me, will there be a feature where I leave my brain data scanning on? And then they not only have how I react when I listen to a song, but they also have, as I move through the world, whatever data they can get from my brainwaves.
Nita Farahany: Let’s animate that, just so people understand what the “whatever” is.
Luisa Rodriguez: Please do.
Nita Farahany: What does it mean to leave your brainwave collection on? It means multifunctional devices, right? So the primary devices that are coming are earbuds, headphones, and watches that pick up brain activity, but also let you take conference calls, listen to music, do a podcast. All of those things. And so, passively, it’s collecting brainwave activity while you use it in every other way. People are used to multifunctional watches, they’re used to rings, they’re used to all of these devices. It is another form of quantification of brain activity.
Then what does it mean? So you do it to unlock your app on your phone. Now you’re interacting with an app on your phone. How did you react to the advertisement that just popped up? Are you engaged? Is your mind wandering? Did you experience pleasure, interest, curiosity? What your actual reaction to everything is. A political message ad pops up on your phone. Did you react in disgust? Did you react in curiosity and interest?
I mean, these are all the kinds of things that can start to be picked up, and it’s your reaction to both explicit content, and also subliminally primed or unconsciously primed content — all of which can be captured, right?
Luisa Rodriguez: Yeah, I find myself drawn to the benefits. But also, I’m not the kind of person who’s super privacy-oriented, and I can easily see myself being like, “Who cares if they know my reaction to a song? I feel fine about that.” But then I can just really easily imagine the slippery slope where the technology keeps getting better and better, and it picks up more complex thoughts. And also, I’m not even correctly thinking about all the ways this data could be used. I’m probably imagining these kind of benign cases, but actually there are probably 100 different uses that I’m not even thinking of, and some of them might actually bother me.
Nita Farahany: Some of them might be totally fine. And some people — and you’re right, which is a lot of people — are not that worried about their privacy in general. So they may react to this and say, “That’s fine. Maybe I’m just going to get much better advertisements.” And that’s OK. If people choose that, if they’re OK with giving up their mental privacy, that’s fine. I’m fine with people making choices that are informed choices, and deciding to do whatever they will do.
I would guess there is a lot more going on in your mind than you think that you want other people to know. I would just ask you: Do you ever tell a little white lie? Do you ever tell a friend that you like their couch when you walk in?
Luisa Rodriguez: Yes.
Nita Farahany: Right. Or if you have a partner, do you ever tell them that their new shirt looks great? Or like, “No, you can’t tell about that giant zit on your forehead. You look terrific.”
Luisa Rodriguez: Sure.
Nita Farahany: There’s a lot of things that are like that. Or your instant reaction to something is disgust, but you have a higher order way of thinking about it. Or, less benignly, you harbour some biases that you’re trying to work on. You realise you grew up with some ingrained societal and structural biases, and you’re working on that. So your instant reaction to somebody with a different colour of skin or a different hairstyle or a different whatever — pick your bias — is one that you’re not proud of and you recognise it, you sense it in yourself, because that’s something you’re working on. And your higher-order cognitive processing kicks in, and you think, “No, that is not me. That is not who I want to be.” But your brain would reveal it, right?
Or you’re figuring out your sexual orientation, you’re figuring out your gender identity when you’re much younger, and your reaction to advertisements or your reaction to stimuli around you gives you away well before you’re ready to share that with the world. There’s a lot of that. Maybe you don’t have it in your life, but you might. You might have some of that in your life.
Luisa Rodriguez: Yeah, I’m sure I do.
Nita Farahany: It’s hard to imagine that world, is just what I would say, because we’re so used to all of the rest of our private information that we in some ways intentionally express. Or like, yeah, I drove there, so you picked it up on my GPS. Or I typed that, but I intentionally typed it. There’s a lot of filtering that you’re doing that you’re just not even fully aware of. And just imagine the filter being gone. Filter is gone: all of it can be picked up and decoded by other people. And we haven’t even gotten to manipulating your brain based on what it is that people learn about you. This is just the passive decoding of information.
Neurodata in criminal cases [00:58:30]
Luisa Rodriguez: Right. Yeah. Maybe putting a pin in that, one example from the book that I actually found compelling, that feels like it fits in here, is you talk about data from a Fitbit being used in a criminal case, where I think there was a man accused of killing his partner, but his Fitbit data actually revealed that his alibi — which is that he was sleeping, checked on a baby, and then went back to sleep — the data seemed to support that.
Nita Farahany: Yeah, there’s been a few of these cases.
Luisa Rodriguez: So, one: that’s pretty crazy to me. But two: then you talk about how, not only is it possible to use neurodata in the same way, but it’s actually happened. One case that really stuck out to me was in the United Arab Emirates. Do you want to talk about what happened there?
Nita Farahany: So the Fitbit cases are passive collection of data, meaning you have your Fitbit on, and it’s tracking your movements and activities, and you’re not consciously creating the information. And then later, the police subpoena that information and use it to confirm or to try to show that you weren’t doing what you said you were doing at the time.
With brain data, it’s a little bit different in the context of the UAE: there, it’s been used as a tool of interrogation. So instead of passive creation of data, a person’s hauled into law enforcement, into the police station, and then they are required to put on a headset, like an EEG headset. Again, these headsets can be like earbuds or headphones, but just imagine a cap that has dry electrodes that are picking up people’s brainwave activity.
Then they’re shown a series of images or read a series of prompts, and law enforcement is looking for what are called evoked response potentials; they’re looking for automatic reactions in the brain. And here what they’re looking for is recognition — you know, you say a terrorist name that the person shouldn’t know, there’s no context in which they should know it, and they recognise it; their brain shows recognition memory. Or you show them crime scene details and their brain shows recognition memory.
And in the UAE, it’s been used apparently to obtain murder convictions by doing this. Similar technology has been used for years in India. And there’s been a really interesting set of legal challenges to the constitutionality of doing that in India, but in countries around the world, this technology apparently has already been used in a number of cases to obtain criminal convictions.
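Editor’s note: the “recognition memory” signal described above is typically probed with a P300-style event-related potential: responses to crime-relevant “probe” items are compared against responses to items the suspect could not know. Below is a minimal sketch of that comparison, assuming pre-segmented EEG epochs. The time window, averaging, and cutoff are illustrative, not the procedure used in any actual case; real protocols rely on many repetitions and statistical tests rather than a fixed margin.

```python
# Illustrative P300-style recognition test on pre-segmented EEG epochs (hypothetical data).
import numpy as np

FS = 256               # assumed sampling rate in Hz
WINDOW = (0.3, 0.6)    # seconds post-stimulus where P300 effects are usually sought

def mean_amplitude(epochs, fs=FS, window=WINDOW):
    """Average amplitude in the post-stimulus window.
    `epochs` has shape (n_epochs, n_channels, n_samples), in microvolts."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return epochs[:, :, lo:hi].mean()

def shows_recognition(probe_epochs, irrelevant_epochs, margin=1.0):
    """Crude comparison: probe items (e.g. crime details) vs. items the subject
    could not know. A markedly larger response to probes is read as recognition;
    `margin` (in microvolts) is an arbitrary cutoff for this sketch."""
    return mean_amplitude(probe_epochs) - mean_amplitude(irrelevant_epochs) > margin
```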
I have not gotten verification of this other case yet, but MIT Tech Review reported on this, and I reached out to the woman who made the comment about it at a conference. Apparently a patient who suffers from epilepsy has implanted electrodes in their brain — and this is not uncommon with some conditions like this — that can either be used to control the epileptic seizures or detect them earlier, something like that.
So this person had implanted electrodes. And I say that just because the data is being captured regularly all the time: if you have implanted electrodes it’s passively always collecting brain data. And the person was accused of a crime, and they sought their brain data from the company — the defendant themselves, rather than the government in this case — to try to show that they were having an epileptic seizure at the time, not that they were violently assaulting somebody. And that would be the first case of its kind, if that turns out to be true.
And really, it’s just like the Fitbit data, where people would say, “Google, provide my Fitbit data, because I want to show I was actually asleep at the time, not moving around, and I couldn’t have killed somebody because I was asleep at the time.” Or “My pattern and alibi fit with what the data shows.” The brain data is going to be a lot more compelling than the Fitbit data in those instances. And just as the person can ask for the data, so too can the government subpoena that data from the third party that actually operates the device.
Luisa Rodriguez: Is that ethical? Should I feel good about that on the one hand? Like, maybe better criminal convictions?
Nita Farahany: No, you shouldn’t feel good about that!
Luisa Rodriguez: OK, convince me.
Nita Farahany: First, before we’re done, I’m going to convince you that there is a need for a right to mental privacy. And mental privacy is not absolute — sometimes it will yield — and the question is, when are we going to say it yields? Are we going to say interrogating a person’s brain to figure out if they know about a bomb that’s about to go off, is that better than the other methods that we’re using? And will that justify it?
But in general, if you are using implanted electrodes to control your seizures, should you be worrying about the risk that the government’s going to subpoena your brain data to learn whatever it wants to learn about you — whether you were having an epileptic seizure at the time, what you were thinking on X date, or at X time? They wiretap and surveil people all the time, right? There are backdoors into phones to listen to what people are doing. Do we really want the government to have a backdoor into our brains to be able to listen to what we’re thinking and feeling at all times? I don’t think so. I mean, that’s like the ultimate Orwellian nightmare. So should we feel good about it because we might be able to solve more crimes by hacking into people’s brains? I’m going to give that one a big no.
Luisa Rodriguez: Fair enough. My next question is whether this will actually take off, but I feel like we’ve just already got some evidence that it will. And to the extent that these technologies are going to be compelling and useful to people, we’ll be giving away more and more of this kind of mental privacy.
Nita Farahany: I write about not just the little neurotech companies, but big tech. And the reason I think this is going to take off is because every big tech company has a huge investment right now in neurotech, and they’re all looking at ways to integrate it into their multifunctional devices. So Meta acquired CTRL-labs, and they have talked openly a lot about their watch that will have EMG — electromyography — which picks up the brain activity that travels from your brain down your arm to your wrist, to pick up your intention to move or to type or to swipe. Apple has a patent on putting EEG sensors into their AirPods, and they have already announced that they’re using eye tracking in their Apple Vision Pro to make inferences about brains and mental states and intentions.
Before I published my book, I had not heard from really any of these companies. And suddenly, since I’ve published my book, Apple, Meta, Microsoft, IBM, Google: all of them have invited me out to give talks and have conversations with me. I don’t think it’s just because they found my book interesting, right? They’re all circling around what’s happening in these spaces, and that’s what’s going to make this go widespread: multifunctional devices that put brain sensors into our everyday technology.
Luisa Rodriguez: Wild.
Effects in the workplace [01:05:45]
Luisa Rodriguez: A related area that you write about in the book is thinking about how some of these neurotechnologies are going to affect the workplace. I was really shocked by some of the ways neurotechnology is already being used in work settings. First, can you talk about how EEGs are being used to track fatigue and focus?
Nita Farahany: Yeah, that chapter has probably startled people the most. One of the entry points into the chapter is a company called SmartCap, which for more than a decade has been selling an EEG headset — electroencephalography headset — that is basically a headband that can be put into a hard hat or a baseball cap or anything wearable on the head, and that tracks fatigue levels of employees. It does this by looking at their brainwaves, scoring them on a scale of 1 to 5, from hyper-alert to asleep, and then giving real-time tracking that can be seen by both the employee and their manager: what their brain metrics show about whether they’re asleep at the wheel or not.
And this has been used in long-haul trucking and in mining and aviation, and there’s more than 5,000 companies worldwide that have already used SmartCap technologies. That alone, I think, surprises a lot of people: this is already something that’s been around for a decade, right?
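Editor’s note: SmartCap’s scoring algorithm is proprietary, so the following is only a generic sketch of how a wearable might map EEG onto a 1-to-5 fatigue scale. Drowsiness is commonly associated with rising slow-wave (theta/alpha) power relative to beta, so the sketch bins a simple power ratio into five levels; the cutoffs are invented for illustration.

```python
# Generic EEG fatigue-score sketch (illustrative only; not SmartCap's actual algorithm).
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Mean spectral power of a 1-D EEG signal in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(eeg.shape[-1], 1 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2
    return psd[..., (freqs >= lo) & (freqs < hi)].mean()

def fatigue_score(eeg, fs=256):
    """Map a (theta+alpha)/beta power ratio onto a 1 (hyper-alert) to 5 (asleep) scale."""
    slow = band_power(eeg, fs, 4, 13)    # theta + alpha
    fast = band_power(eeg, fs, 13, 30)   # beta
    ratio = slow / (fast + 1e-12)
    bins = [0.5, 1.0, 2.0, 4.0]          # hypothetical cutoffs
    return 1 + int(np.digitize(ratio, bins))
```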
Luisa Rodriguez: Yeah, it really did. I had no idea that truckers had this technology already to check on whether they’re too tired to drive, for example.
Nita Farahany: Right. And I give that example both to show that it’s already happening, but maybe that’s an application where, if done right, it might be OK. And I say maybe it’s OK from a mental privacy perspective, because if the only thing you were measuring from a long-haul trucker was whether they were wide awake or they were falling asleep at the wheel — and you weren’t using the brain data to discover anything else about what they were thinking or feeling — is their right to mental privacy really stronger than society’s interest in not having them barrel down the highway while they’re asleep? Probably not, right?
Luisa Rodriguez: Yeah.
Nita Farahany: So then I go from there to talk about productivity scoring, which I think is a little bit harder to swallow. There’s all kinds of productivity-tracking software on people’s computers at this point from their workplace. During the pandemic, employers even started doing things like turning on webcams to see if people were at their desks at home. And more than 80% of companies admit that they’re using some form of surveillance of their employees, whether those are white-collar workers or factory workers, to try to track their productivity. If you hire somebody to go shopping for you, they’re on a clock. And it leads to all kinds of really problematic incentives and really bad workplace conditions.
But then let’s look at brain devices. There are companies selling productivity tracking of employees using these devices, as enterprise solutions: like, we’ll give your employee a multifunctional device, such as a pair of earbuds, that tracks their attention, their focus, whether they’re bored or engaged at work, whether their mind is wandering — and they can take their conference calls and everything else on it, so they forget that their brainwave activity is being tracked. Those products are already being sold.
I presented just this chapter at Davos, and I had a company CEO come up to me afterwards to say, “We would be a great use case for you, because we’ve already partnered with one of these companies. We’ve tried out this technology on more than 1,000 of our employees, and we’ve tracked far more than whether they’re paying attention or their mind is wandering: are they bored? Are they engaged? Do they work better at home or better in the office? We’ve made managerial-level decisions, hiring and firing decisions.” So that kind of blew my mind. So that’s application two.
Luisa Rodriguez: Yeah. I want to ask more questions about that, but I also want to make sure I understand how the technology works, because I’m always just very interested in the science bit. So I guess there’s tracking fatigue, there’s tracking focus, there’s tracking productivity: are all of these kind of doing the same thing? It’s tracking brainwaves? And we basically have done enough analysis about what brainwaves correlate with what brain states that we can say that’s the tired brainwave or whatever?
Nita Farahany: Yeah, that’s a really good question. And the answer is no, it is still somewhat of a mess. If you talk to a lot of neuroscientists, what they’ll tell you is, what are you measuring? You’re measuring muscle twitches or eye blinks. How can you possibly be making decisions based on such crappy data? The data has gotten better, but there’s still a lot of noise, and it is still unclear that people are getting exactly what they think they’re getting when they’re measuring this information and making really serious decisions about a person’s livelihood based on it.
How it worked was, using medical-grade EEG, a lot of these brain states were measured — so here’s what it looks like when you’re bored, or when you’re engaged, or when you’re paying attention, or when your mind is wandering, or when you’re happy or sad, or whatever the brain state is. And then, measuring the same behaviours and brain states with fewer electrodes and using pattern classification, the question was: can you still see it with far fewer electrodes? And can you correlate it to get the different metrics that people are trying to measure?
So that’s the basis for it. But I think there’s still real questions about how good the data is that you’re capturing to begin with. Maybe the software is great, maybe the algorithms are terrific, but if the data quality is terrible, then you basically are taking a bunch of noise and trying to make meaning out of it.
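Editor’s note: a toy version of the transfer Nita describes, using scikit-learn. A brain-state classifier is trained on features from a full 64-channel montage and again on a 4-channel consumer-style subset, so the two accuracies can be compared. The data here is random noise standing in for labelled band-power features, so both scores will hover near chance; with real recordings, the gap between the two runs is exactly the data-quality question raised above.

```python
# Toy check of how brain-state classification degrades with fewer electrodes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_channels = 400, 64               # e.g. a medical-grade 64-channel montage
X = rng.normal(size=(n_trials, n_channels))  # stand-in for per-channel band-power features
y = rng.integers(0, 2, size=n_trials)        # brain-state labels, e.g. focused vs. mind-wandering

consumer_channels = [0, 7, 23, 42]           # pretend these map to earbud/headband sensor sites

for name, cols in [("64-channel", slice(None)), ("4-channel", consumer_channels)]:
    Xtr, Xte, ytr, yte = train_test_split(X[:, cols], y, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: test accuracy {acc:.2f}")
```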
Luisa Rodriguez: Right. OK, that makes sense. So I understand how it works, and then also understand why you’d be worried about it being accurate enough.
Nita Farahany: I worry about it for far more than accuracy, just to be clear. These uses in the workplace — given the power differential between employers and employees, and the broad surveillance that has emerged both within society and within workplaces — make me think that the use of this technology by individuals could be really good, while the use of this technology by companies to surveil their employees would be super creepy and problematic.
Luisa Rodriguez: Yeah, I’d love to make that even more concrete. I want to picture it. What are you imagining when you are imagining companies using the technology for bad?
Nita Farahany: I wrote a couple of scenarios in the book. Most of it is grounded in here’s exactly what’s happening today. But I wanted to help people understand, no matter what their frame of reference is, why it would be problematic. I wanted to try to help people who really strongly believe in freedom of contract in the workplace — so kind of the staunchest libertarian, who thinks, “OK, but the market will take care of itself” — understand why, in a context like this, the market can’t just take care of itself.
The kind of scenario that I painted in the book for that was to imagine this: You’ve got your employee who’s wearing these earbuds to take their conference calls and do everything else, right? And there’s asymmetry in information — that is, the employer can see what the person’s brain is doing at any given time, but of course, the employee can’t see what the employer’s brain is doing at any given time.
So the employer calls the employee up and says, “Hey, I wanted to let you know that you did great last quarter, and so you’re going to get a raise. I’m delighted to let you know that you’re going to get a 2% raise in salary.” And the employee’s brain data shows that they are just thrilled. Like, they’re just so happy: “Hooray, I’m getting a 2% raise!” But they know better than to say, “Hooray!” — they know that would give away their negotiating position right away — so they say, “Thanks so much. I was actually hoping for a bigger raise. I was really hoping for 10%.” And while that’s happening, they’re afraid, right? And you register that in the brainwave activity. And the employer says, “I’m going to think about it and I’ll get back to you.”
And then they go and they look at the brain data, and they see the person was overjoyed when they got the 2%, and super fearful when they countered with the 10%. They have this additional asymmetry of knowledge, which really frustrates freedom of contract. It turns out the employer can easily handle the 10% — they’ve got the funds: their revenue really went up last quarter, they could have easily done it — but they have this information. They come back the next day and they say, “So sorry, we can only afford 2%.” And the person feels relieved, but still content, and the employer walks away having gained a significant advantage from what the brain data revealed.
And that is to just help people understand that in every conversation, your reaction to every piece of information can suddenly be gleaned. It’s not just whether you’re paying attention or your mind is wandering. It is your reaction to company-level policy as it’s flashed up and how you actually feel about it. It is working with other people in the company where your brain starts to synchronise with theirs — because when people are working together, you start to see brainwave synchrony between them — so maybe you’re planning collective action to unionise, and the employer sees a bunch of brainwaves synchronising in ways that they “shouldn’t”, triangulates that with all of the other information they’re surveilling you on, and prevents you from doing so.
So these are some of the dystopian things that my brain goes to.
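Editor’s note: the “brainwave synchrony” mentioned above usually refers to inter-brain coupling measures from hyperscanning studies. A minimal, purely illustrative sketch of one such measure follows: the correlation of two people’s alpha-band power over time. Real studies use richer metrics, such as phase-locking value, and nothing here reflects any employer’s actual tooling.

```python
# Minimal inter-brain "synchrony" sketch: correlate two people's alpha power over time.
import numpy as np

def alpha_power_series(eeg, fs=256, win=256):
    """Alpha-band (8-13 Hz) power in consecutive one-second windows, for one channel."""
    n = len(eeg) // win
    freqs = np.fft.rfftfreq(win, 1 / fs)
    mask = (freqs >= 8) & (freqs < 13)
    return np.array(
        [(np.abs(np.fft.rfft(eeg[i * win:(i + 1) * win])) ** 2)[mask].mean()
         for i in range(n)]
    )

def interbrain_sync(eeg_a, eeg_b):
    """Pearson correlation of the two power time series; higher = more 'in sync'."""
    a, b = alpha_power_series(eeg_a), alpha_power_series(eeg_b)
    return np.corrcoef(a, b)[0, 1]
```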
Luisa Rodriguez: Yeah, my brain is trying to console myself by being like, “But it’s not specific thoughts; it’s emotions.” And then you describe that scenario, and I’m like, wow, emotions give away a lot.
Nita Farahany: They give away a lot. Yeah. And if you artfully probe a person’s emotional response, you can get a lot. And I’m describing brain state, because that’s where we are right now.
But there was a really powerful study that came out just a couple of months after my book was published, using much more sophisticated neurotechnology: functional magnetic resonance imaging, which can look much more deeply into the brain. The researchers had people listen to stories — podcasts, actually; not this one, but other podcasts — and then they had them imagine stories. And they trained their classifier — which is like: here’s the brain image data, and here’s what the person was listening to — and then they took just brain image data and said, “Translate what this person is listening to or what they’re imagining.” And at a really high rate of accuracy — like 80%-plus — they were able to decode whole paragraphs of what a person was imagining or thinking. And that’s mind reading, right? I mean, that’s pretty serious mind reading.
Editor’s note on the 80% figure: The study “compared decoded and actual word sequences for one test story (1,839 words) using several language similarity metrics. Standard metrics such as word error rate (WER), BLEU, and METEOR measure the number of words shared by two sequences. However, because different words can convey the same meaning—for instance, ‘we were busy’ and ‘we had a lot of work’—we also used BERTScore, a newer method that uses machine learning to quantify whether two sequences share a meaning.” The 80% accuracy Nita refers to here refers to the BERTScore (the classifier performs better on the WER metric, and significantly worse on the other metrics).
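Editor’s note, continued: for readers who want to see the metric itself, the open-source `bert-score` Python package computes it directly. A minimal example with two invented sentences that share a meaning but few words, exactly the case where word-overlap metrics like WER score poorly:

```python
# Comparing a decoded word sequence to the actual stimulus with BERTScore.
# pip install bert-score
from bert_score import score

actual  = ["we had a lot of work to get done before the deadline"]
decoded = ["we were busy trying to finish everything before it was due"]

# P/R/F1 are tensors with one value per sentence pair; F1 is the headline number.
P, R, F1 = score(decoded, actual, lang="en", verbose=False)
print(f"BERTScore F1: {F1.item():.2f}")
```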
Luisa Rodriguez: Wow. That is actually mind reading.
Nita Farahany: Yeah. And then they decided, we have synthetic data on functional near infrared spectroscopy, which is a more portable system rather than fMRI: can we make it work for that, too? And they found that they were able to get a really high degree of accuracy with that. They haven’t done it with EEG yet, but they have a bunch of EEG data. It would require building a new classifier — because EEG is electrical activity rather than blood flow in the brain, so it’s a new model — but I think they can do it. And when we start to see that data, one of the things that made that study so powerful was they were using generative AI, so they were using GPT-1. And the leap in what AI can do means a leap in what we can do for decoding the brain.
So I’m describing a scenario in the workplace where the employer is just looking at broad emotional brain states. I would not be surprised if in a few years — or even less, really — what we’re talking about is decoding more complex thought.
Luisa Rodriguez: Yeah, I’m a bit speechless.
Rapid advances [01:18:03]
Nita Farahany: Yeah, that study blew my mind. And as early as last fall, when ChatGPT was released, I reached out to some of the leading researchers in the field — the people who are doing speech synthesis and speech decoding — like, okay, obviously this is going to change a lot; I’m trying to understand exactly how it changes your models. And they were like, “Yeah, it’s going to change a lot. It’s going to rapidly allow the customisation of decoding per person. It’s going to be much easier to do a lot of this work.”
Imagine this: you’re decoding what naturally comes next. If a language model is predicting the next word, decoding becomes much faster. Pairing what you can take from brain activity with generative AI, which works by generating the next token, puts brain decoding on steroids. And they were right: it didn’t take long before it changed everything. We’re less than a year out. And give it time: as people start to use these large language models for their classifiers, more will follow, because brain data is just patterns of activity, and it can be decoded. And the more powerful the AI, the more powerful the decoding.
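Editor’s note: a schematic of the idea Nita is gesturing at, in the spirit of the fMRI study’s decoder: a generative language model proposes likely continuations, and the candidate whose predicted brain response best matches the recorded activity is kept. Both helper functions below are stubs; a real system would query an actual LM and use a trained encoding model that maps text to predicted brain features.

```python
# Schematic of language-model-guided brain decoding (stubs, not a working decoder).
import numpy as np

def lm_continuations(prefix):
    """Stub: a real system would ask a generative LM for likely next words."""
    return ["coffee", "work", "music"]

def predicted_brain_features(text):
    """Stub: a trained encoding model would map candidate text to expected brain activity."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=128)

def decode_step(prefix, observed_features):
    """Keep the continuation whose predicted features best match what was recorded."""
    candidates = lm_continuations(prefix)
    scores = [
        -np.linalg.norm(predicted_brain_features(prefix + " " + c) - observed_features)
        for c in candidates
    ]
    return candidates[int(np.argmax(scores))]

decoded = decode_step("I was thinking about", observed_features=np.zeros(128))
```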
Luisa Rodriguez: I’m aware that we started talking about this in the context of the workplace, but this just is mind reading. And presumably, even though there will be incentives for employers to get these kinds of wearable technologies to their employees maybe early, this will also just permeate into other aspects of life, I’d guess.
Nita Farahany: Yeah. I mean, I think it will just be in every aspect of life.
Luisa Rodriguez: I can imagine it affecting my relationships.
Nita Farahany: I have a harder time imagining that.
Luisa Rodriguez: Really?
Nita Farahany: Yeah, and here’s why I have a slightly harder time imagining that: Maybe in the same way that some people are like, “You have to give me your password to your phone because I want to surveil you in weird and creepy ways. I don’t trust you, and so I need to actually go through all your text messages,” maybe they’re like, “I need to see your brain data. Do you really love me? You claim you love me, but maybe you’re just in lust with me. Or maybe you actually don’t have any of the feelings that you claim that you have.” That’s already a deeply unhealthy relationship. And so to the extent that people are saying, “I need the brain data,” once you’re there, you probably need to reevaluate whether you’re in that relationship to begin with.
Luisa Rodriguez: Yeah. There are already issues.
Nita Farahany: Yeah. But I don’t know. Tell me how you see it changing your relationship.
Luisa Rodriguez: I mean, OK. I guess…
Nita Farahany: Now that I’ve already said that if you’re going to say anything, that makes me deeply question your relationship. But go ahead.
Luisa Rodriguez: No. But, yeah, it’s true that when I imagine this being available to me and my partner, I do not see us using it to be like…
Nita Farahany: Truth serum to question each other.
Luisa Rodriguez: Yeah, exactly. And so maybe there’d be some types of relationships that actually wouldn’t end up drawing on it, because we’ve got such strong social norms against that kind of demanding truth of people. So maybe it’s other contexts where there aren’t as strong norms of giving people the right to some degree of privacy.
Nita Farahany: Although maybe it becomes a way of intimacy, right? We’ve talked about brain-to-brain communication. People sometimes talk about like, “Oh yeah, we’re on the same wavelength.” What if you really wanted to find out if you’re actually on the same wavelength? And start to figure it out through compatibility testing, is there brain compatibility testing? And are we actually truly in sync with each other, or do we think we’re in sync with each other?
Luisa Rodriguez: Yeah. So I don’t really have that much specific to the relationship side. Maybe it is stuff like that. I guess I have this general feeling of, you just told me that mind reading exists and that it’s going to get better and better. And that feels like it’s going to have implications all over, and I’m freaking out a bit.
Nita Farahany: Yeah, I mean, you should freak out, right? What I’m describing is a world of brain transparency that people just don’t even realise is coming or is happening. And I wrote this book as a big wake-up call, for people to understand descriptively what’s happening today, and normatively, what do we need to do about it?
Because it will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you’re interacting on any platform that has also issued you a multifunctional device where they’re looking at your brainwave activity, they are marketing to you, they’re cognitively shaping you. I mean, we need to recognise that this has revealed a new set of responsibilities that we owe to humans, and that set of responsibilities really centres on the right to cognitive liberty.
So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there’s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, “Oh my god, mind reading is here. Now what?”
Luisa Rodriguez: Yeah. Now what? We didn’t prepare. Your employer can read your mind. If you’ve got any amount of imposter syndrome, you will not be able to negotiate a raise in your salary. If you are more distractible than your other coworkers, your employer will now know that. If you sometimes have a bone to pick with your manager, they might have a lot of colour on that.
Nita Farahany: Yeah. If you hate your boss, they’re going to know you hate your boss. There’s no poker face that’s going to fix that for you.
Luisa Rodriguez: Yeah. Going back to the relationship one, maybe a dating site springs up that uses this and crushes the competition.
Regulation and cognitive rights [01:24:04]
Luisa Rodriguez: So it feels like there are loads of implications. Maybe you can talk about how you want to see regulation put in place to make sure we reap those benefits and don’t fall into some of these crazy dystopian futures?
Nita Farahany: I would say it’s not even about regulation; it’s about changing our worldview. For me, regulation is one piece of that, just because I think the only way that you really shift behaviour in society is through a set of sticks and carrots. So I think we need constraints: in order for anybody to truly exercise freedom in the digital era, I think we have to recognise cognitive freedom. And cognitive freedom for me is about cognitive liberty — the right to self-determination over your brain and mental experiences.
In the book, I lay out what the human rights framework for that should be, because I think it’s a global issue, and I say we need to recognise this as the right, as the kind of organising principle. That means we have to update our right to self-determination to be an individual right; our right to privacy to include a right to mental privacy; and our right to freedom of thought to secure us against interference, manipulation, and punishment of our thoughts. But I think that’s one category that’s really important that then will translate down into national laws and context-specific laws — like employment should include a right to mental privacy, et cetera.
But then at the other end, I think we have to really start to realign incentives to enable the incentives for tech companies to align with cognitive liberty. Because if the primary incentive is to maximise attention and engagement — which right now it is, because the primary business models are ad revenue — then those incentives mean that the outcome is going to be to commodify as much brain data as possible: to use people’s brain data to keep them on devices, to keep them addicted, to keep them unhealthy, rather than to enable people to have self-determination.
There’s no liability scheme that is sufficient to actually shift business models for legacy companies, so legacy tech companies need massive investments to shift to align cognitive liberty with what their bottom line is. So for me, it’s like, yes, we need the rights — but we also need the incentives: we need the carrots and sticks to actually start to enable human flourishing in the digital age.
Luisa Rodriguez: Yeah, nice.
Brain-computer interfaces and cognitive enhancement [01:26:24]
Luisa Rodriguez: Let’s move on to another topic. People like Elon Musk are investing a lot in neurotechnology geared at cognitive enhancement — things like Neuralink, which is a kind of brain-computer interface, like we talked about earlier. Can you explain how Neuralink differs from what we’ve already talked about?
Nita Farahany: Sure. So first, I’m not positive that I would characterise Neuralink as being focused on cognitive enhancement, at least not in the short run. I say that because what we’re going to talk about now is implanted brain-computer interface, or implanted neurotechnology. Most of that is really geared at therapy, at least in probably the next decade, and that’s because there’s a lot of risk.
So let me back up: What is implanted neurotechnology? Instead of what we’ve been talking about, which is a baseball cap or earbuds or headphones, it is literally putting a device inside the brain. So drilling a hole in the skull, putting electrodes underneath the skull or deeper into the brain.
And what’s innovative about what Neuralink is doing is threefold. One is that there’s only about 400 surgeons in the entire world who can do that kind of implanted neurotechnology surgery right now, so that’s a big bottleneck to having this go widespread. So they’ve been developing a robot to do the surgery, and that would really enable clinics much more broadly to be able to do this kind of surgery.
The second is the implanted arrays: imagine sort of a disc that has a bunch of electrodes on it, and they’ve been developing something that has these hair-like structures that have little tiny electrodes on the end that could maybe embed themselves better into the brain tissue. And they’re just packed with electrodes, so many more that could pick up more signal from the brain.
And then the third is that it wirelessly communicates with whatever is external. Most of the brain-computer interface devices up until now that have been in clinical studies have required the person to be in the lab, and it has something attached to their head that’s like a big cable or something that then gets the signal out.
So basically, imagine getting it in much more easily; having it be much smaller, with densely packed arrays of electrodes; and having it communicate wirelessly. We can talk about the enhancement side of that, but the benefit of that, from a therapeutic perspective, would be that you can wear that technology everywhere — it allows mobility well beyond the lab. But it’s right now primarily being designed for people who are paraplegics to be able to potentially walk again or to be able to communicate from their brain. So we’ve talked about brain-to-text communication: to be able to operate a cursor on your screen, or be able to type from your mind. That’s the kind of stuff that Neuralink right now is focused on.
Luisa Rodriguez: OK, let’s focus on that for a bit. So the kinds of things it sounds like it can help with is, for whatever reason, a person’s ability to get their brain to communicate with a part of their body or to form speech is impaired in some way. And how exactly does Neuralink or this kind of technology bypass that?
Nita Farahany: We’ve talked a little bit about how noisy EEG signal is. If you’re wearing a brain sensor in an everyday device, it has to go from deep within your brain all the way through your skull, and get to the surface — where it’s interacting with muscle twitches and eye blinks and a bunch of other what we call “noise.” If you have electrodes deep inside the brain, you can pick up signals at much greater resolution without all of that noise. So for somebody whose primary way of communicating is getting signal from their brain to their computer, you want the highest resolution that’s possible.
And when what you’re talking about is much more complex than yes or no or left or right — but you’re literally trying to use the brain to be able to communicate with the rest of the body: to move an arm, or to potentially reconnect with the spinal cord, or to type thoughts at a rate that is much faster than one letter at a time — the depth and the increased number of electrodes, like 1,000 electrodes instead of four electrodes that you might have on the surface, all of that allows much, much greater signal. Much greater signal means that you’re picking up a lot more of what’s happening in the brain and translating it to the rest of the world.
Luisa Rodriguez: Right. Okay, so imagining someone isn’t able to speak for whatever reason. In that case, I think I understand that you’ve got these implants, and that they can communicate — not wirelessly, but wirefully — to a computer that can then write that out as text. How does it work if you’re sending signals to the spine? Do you have to surgically implant something to receive those signals in the spine?
Nita Farahany: You need a receiver. Some people may have had some disruption in the ability of their central nervous system, their brain, to communicate with their peripheral nervous system or other parts of their body. And so you need the signal to be picked up from the brain and then transmitted, but you can do that with a receiver at the other end.
This wasn’t Neuralink, but just recently there was a report of, I think, the first person who has been able to walk again through a brain-computer interface: picking up the signal from the brain and communicating it to the spine, when the brain and the spine were no longer communicating with each other.
Luisa Rodriguez: Wow, that is pretty incredible. And again, not something I had any idea we’d achieved.
Nita Farahany: Yeah. It is remarkable. These kinds of injuries that are so life-altering, and that these kinds of technologies could help people reclaim self-determination and independence in ways that have been lost, I think is really exciting.
Luisa Rodriguez: Yeah, really exciting. I am curious, though: why is it that, at least for now, the focus is on therapeutic applications as opposed to augmentation?
Nita Farahany: You know, Elon Musk has been very vocal about the fact that he sees this as a potential for enhancement, so I’m not going to say he doesn’t have those kinds of intentions. I actually was at a conference just recently where I was talking with the founder and CEO of a different neurotech company that already has brain-computer interfaces inside people’s bodies — that’s Synchron, and the founder is Tom Oxley — and we were having a little bit of a debate about why there’s a tonne of electrodes in the Neuralink device.
And there was a reporter who was writing a story, and she really wanted to have the take that the only reason you would have that many electrodes is for enhancement purposes — in order to kind of merge us with machines and enable us to win in the race against AI — and wanted to know if I agreed with that. And I was like, “I don’t know. I’m not going to go on the record with that.” And I asked Tom about it and he was like, “Yeah, that’s totally why. He’s doing it because the goal is enhancement.”
So I say all that to say: Why is it not enhancement today? Why do I think it’s not enhancement, despite what Tom thinks, despite the huge number of arrays? For now, it’s regulated by regulatory bodies that treat it as a medical device, not as an enhancement device. And if they go to the FDA and say this is really for cognitive enhancement or for enhancement of the body, rather than for therapeutic reasons, they’re never going to get regulatory approval. And they have to get regulatory approval at each step of the clinical studies they need to run.
And from a risk-benefit perspective, think about putting something inside your brain and your brain tissue. The monkeys that they have done this on have not fared that well, right? They’ve had infection, they’ve had other problems. Some of them have done fine, but some of them have had serious side effects. And so when you’re doing a risk-benefit analysis — when the benefit is clear because a person has lost their ability to communicate or walk or something else — then the risk may be worth it. But when what you’re talking about is something that expands beyond human capabilities, most regulatory bodies are just not even equipped to approve drugs that are enhancement drugs, let alone a brain-computer interface device that is an enhancement device.
So his ambitions may be there, and maybe this creates the proof-of-concept in a therapeutic way that enables that kind of enhancement in the future. And maybe that enhancement looks like what we were talking about earlier, which is the possibility of communicating brain-to-brain with each other: picking up a full-resolution thought, the content of how you feel, and the visual images and the metacognition that goes along with the cognition. Maybe one day that’ll be possible, and maybe one day, people who are healthy will decide that the benefits that can be offered through being able to go from brain to other technology or brain to other human brain are worth the risk. But that’s going to be a while from now.
Luisa Rodriguez: That makes sense to me. Do you mind actually painting even more of a picture of what Elon Musk and people like him have in mind of what this could look like?
Nita Farahany: Have you ever seen The Matrix?
Luisa Rodriguez: Sure.
Nita Farahany: I don’t think that’s going to happen. If the thought that just popped into your mind is the brainjack, where suddenly you’re uploading into your brain the ability to do martial arts, and then you’re like, “OK, got that,” and then you can do it, we’re nowhere close to that. And maybe that is the vision that somebody like Elon Musk has: that we can brainjack you. You’ve got all these electrodes in the brain, and we can just fuse a whole bunch of information into your brain. By the way, just on that note, I’ve always wondered: you need a lot of muscle memory to do that kind of stuff, too. It’s not just that you need to know how it works. And so suddenly your body is perfectly fit and has all the muscle memory? Anyway…
Luisa Rodriguez: And strength.
Nita Farahany: And the strength, yeah. I feel like you could put all of that in my brain, and I would still not be able to do martial arts. I would have this big disconnect, which is my brain would know how to do it, and my body would not cooperate, and it would be a huge problem.
Luisa Rodriguez: Yeah. It’d be like if you ever played the piano as a kid, and then you try and as an adult, and you’re like, oh, I used to know how to do this.
Nita Farahany: I’ve been doing that recently. It’s been really bad. My eight-year-old is taking piano lessons, and I sit down to practice with her, and she’s like, “Play this piece that you used to play.” And I sit down and try to play it, and it’s horrible. It’d be like that, but worse — because your body just would not cooperate in any way, shape, or form.
Anyway, what are they trying to do? I think part of the idea is to try to enable capabilities that we don’t have yet, like brain-to-brain communication. Or we talked a while ago about part of what inspired me to write this book, The Battle for Your Brain, which was seeing a presentation where somebody was talking about what if we could operate octopus-like tentacles with our minds, instead of using our hands as the way that we navigate the world. And that’s within the realm of possibility, right? I mean, you can operate a swarm of drones; you could operate octopus-like tentacles.
So when you start to think that the human brain is in some ways limited by our physical bodies and our ability to get the output from our brain to either connect with each other, to work collaboratively with each other, to solve some of the biggest problems in the world in a way that we can’t as efficiently or effectively do right now — because brain-to-words communication with another person is limited versus brain-to-brain — I think those are some of the ways that they’re imagining it. They’re imagining a transhuman future, which is being able to go beyond human limitations and merge humans with technology much more seamlessly, and to, in many ways, use the power of the human brain.
Because I think most people, even looking at the advances in generative AI right now, recognise that human brainpower is much richer and more complex than anywhere that we’re reaching with current iterations of generative AI. But unless we can get all of that out, and have some way of actually being able to realise the full potential of the human brain and how it works, maybe that benefit or that advantage is something that we can’t really fully realise. So for people who are investing in this transhumanist future, they believe that the best hope for humanity is being able to expand our capabilities and our output.
Luisa Rodriguez: Yeah, we actually regularly have guests on the show to talk about the risks and the promise that AI brings. And some of those people are really worried about AI basically becoming more powerful than people. And I guess, yeah, I do have some sense that —
Nita Farahany: I think that’s a real fear. It’s a real fear.
Luisa Rodriguez: Yeah, I agree.
Nita Farahany: And it’s not an unfounded one. I think if AI develops intentionality, we’ve given it the keys to everything in the world. And it doesn’t have to even be intentionality: it can be accidental, or bad actors. There’s different categories that can emerge from this. Then it is figuring out what we are going to do as human beings. And one solution that people have put forward is this possibility of brain-computer interface as a way to augment human thinking and human capacity.
Luisa Rodriguez: So it sounds like you think some of the cognitive enhancement stuff is pretty far away. What do you see as the medium-term ways that things like Neuralink might change people’s experiences in society now?
Nita Farahany: I think it’ll enable people who’ve lost self-determination to regain it: people who are unable to communicate their thoughts, who are unable to move and to act as independently as they would like, whose freedom of action has been constrained in many ways. And I think Neuralink and devices like it can offer that; there are a number of these companies out there that have really promising implanted neurotechnology. It’s just a very small population of people they can reach so far.
Think of deep brain stimulation for people who are suffering from intractable depression or from Parkinson’s disease. There’s a lot of neurological disease and suffering. In fact, neurological disease and suffering worldwide is getting worse, even while overall physical health is otherwise improving. So what I see Neuralink offering is a way to start to reset that balance, to start to try to actually get a handle on the large toll of suffering from unmet needs that people are experiencing worldwide.
Luisa Rodriguez: Cool. Yeah, I just find that really moving.
The risks of getting really deep into someone’s brain [01:41:52]
Luisa Rodriguez: Does that come with risks, besides some of these health risks that obviously come from getting really deep into someone’s brain?
Nita Farahany: Yeah, there are. So the more people who have brain-computer interface technology as implanted neurotechnology, the more that they need to have a better sense of “Where am I and where do I end, and where does the technology begin? And how do I understand the interrelationship between me and the technology?”
I was talking to a researcher, a scientist, recently, who does a lot of work in deep brain stimulation. She was talking with me about her hearing loss and how she has started to wear hearing aids, and that that’s required her to sort of reestablish her sense of self in the world, because her concept of hearing is fundamentally changed. So even just trying to understand what circumstances can she be in, what is she going to hear, how is she going to react — it’s required an updating of self, and the sounds and input that she’s getting are different than ordinary hearing that she had in the past.
And we were talking about that in relationship to deep brain stimulation, where she sees patients who are suffering from intractable depression, and they then have an implanted device, and it takes about a year before they start to develop a sense of, “This is me, and that’s the technology, and here’s where I end, and here’s where the technology begins, and here’s me plus technology” — like this new concept of self. And I think we have to get to this place — whether it’s with implanted neurotechnology, or wearable neurotechnology, or just me and my mobile device — to start to update human thinking about us in relationship to our technology and our concept of self as a relational self.
Luisa Rodriguez: Right. I can imagine it really hitting on questions of identity. I guess the examples you’re giving are of regaining some types of function, or having access to some kinds of emotions.
Nita Farahany: But it changes self, right? We talked earlier about hacking. We could get into the dark side of all of this. But before we even do the risks, it is: How do people understand themselves? And one thing people have worried about a lot with these technologies is a discontinuity of self. There’s you, and then there’s you after the implant. And maybe you after the implant is a fundamentally different person. Or maybe accidentally in the surgery, parts of the empathetic you got damaged, and suddenly you are a violent killer or something like that.
There’s all those kinds of things that might emerge, but I think probably the most fundamental one that people have really grappled with is: how do you get informed consent? How can somebody truly understand, before and after, what it means to be a different person in relation to a technology that is implanted in their brain? And how do you think about that future self and make decisions that are truly informed when you can’t have any idea of what that actually is like?
Luisa Rodriguez: Right. What that future self will experience, what their life will be like? How do you know if you want to become them?
Nita Farahany: But then there’s all kinds of risks of hacking and Manchurian candidates and all kinds of things like that. But I think the more ordinary, everyday challenges are the broader conceptions around self.
Luisa Rodriguez: Yeah. Out of curiosity, can you take me into the dark side? What are some of those less likely, but maybe scarier risks?
Nita Farahany: Yeah, I’m happy to go there. Although I’ll say this: I do a lot on the ethics of neurotechnology, and I am far more concerned from an ethical perspective about widescale, consumer-based neurotechnology than I am about implanted neurotechnology. That’s true both because the risk-benefit calculus is very different for the people who are currently part of the population who would receive implanted neurotechnology, and because implanted neurotech is happening in a really tightly regulated space, as opposed to consumer technology, where there are almost no regulations and it’s just the wild west.
But in the dystopian world — and with all of those caveats, which I think are really important — I think it’s still possible, without really good cybersecurity measures, that there’s a backdoor into the chips. That some bad actor could gain access to implanted electrodes in a person’s brain. And if they’re both read and write devices — not just intruding on a person’s mental privacy, but capable of stimulating the brain and changing how a person behaves — there’s no way we would really even know that’s happening, right? When something is invisibly happening in a person’s brain that changes their behaviour, how do you have any idea whether it’s happening because somebody has hacked into their device, versus coming from their own will or intentionality?
And we have to understand people’s relationship to their technology, and we have to be able to somehow observe that something has happened to this person, which would lead us to investigate whether something has happened to their device: whether somebody has gained access to it or interfered with it or something like that.
You know, we’re dealing with such small, tiny patient populations. It’s not like the president of the United States has implanted neurotechnology, where some foreign actor is going to say it’s worth it to hack into their device and turn them into the Manchurian candidate. But in the imagined sci-fi world of what could go wrong: what could go wrong if this goes to scale, and if Elon Musk really does get a brain-computer interface device into every one of our brains, is that we’d have almost no idea that the person had been hacked, and that their behaviour is not their own.
Luisa Rodriguez: Do you have thoughts on cognitive enhancement neurotechnology that doesn’t relate to things like Neuralink?
Nita Farahany: I mean, you know, it’s interesting. My book was recently reviewed in the New York Review of Books, and the reviewer really took issue with my stance on enhancement, I think — which was, I don’t think it’s cheating. And I think if people want to enhance themselves, that it’s actually part of human nature. And she really went after the science, saying none of them work scientifically. Maybe. That’s sort of beside the point. I mean, the point I was making was it’s not cheating. And saying that the science isn’t there doesn’t answer whether or not, if the science was there, it would be permissible to do so. I kind of take issue with treating cognitive enhancement in school settings and in life as something that we should punish.
I understand and appreciate the arguments about coercion and kind of race to the bottom or race to the top, however you want to think about it. I don’t think the solution to that is by saying you can’t use enhancers, nor do I think life is a zero sum game where me enhancing myself somehow prevents you from being able to do so, or trades off with your opportunities in life.
Best-case and worst-case scenarios [01:49:00]
Luisa Rodriguez: What do you see as the kind of best-case outcome for all this technology? What does the world look like?
Nita Farahany: Best case would be I get to use the device to enhance, to meditate, to improve my focus and attention; to tell when notifications actually cause distraction or are causing stress, and help me to make adjustments because I have user-level controls that make me able to adjust my interaction with other technology to optimise my brain health and wellness. That’s the best-case scenario.
I use it one day, maybe to have brain-to-brain communication with the people I want to have brain-to-brain communication with. I don’t think I want to communicate with everybody inside their brains, but maybe there’s some people where, for me, a new level of intimacy is sharing a full thought, and helping them to truly see how I see something and cultivating empathy in a really brand new way, because somebody can actually get inside my head and I can get inside their head and we can truly understand each other.
Luisa Rodriguez: Yeah, that does sound pretty special.
Nita Farahany: I think it’d be neat. I imagine this world in which it’s like, wow, you can actually feel everything I feel, sort of see everything I see.
Luisa Rodriguez: Yeah. True empathy.
Nita Farahany: Yeah. So I think the best-case scenario is that it’s used by us, how we want to use it, without it being a creepy tool of surveillance, and where we get to choose with whom we share what. That it isn’t used by governments to engage in cognitive warfare or to interrogate our brains. That we don’t have to worry at all times that they’re going to subpoena all of our brain data from companies. That companies allow the data to live on-device and to be overwritten on-device, rather than capturing it, commodifying it, and using it to instrument us. So that’s the best-case scenario.
Luisa Rodriguez: Cool. And I was going to ask about the worst-case scenario, but you kind of slipped some of that in there.
Nita Farahany: Yeah. It’s the opposite of all of that, right?
Current work in this space [01:51:03]
Luisa Rodriguez: Well, that feels incredibly important, and also just much closer in time than some of these other technologies felt to me. I guess it makes me curious: to what extent are people thinking about this, and about how we can shape worldviews to make sure that, as we’re setting incentives and deciding how we want to use this technology, we bring about those best-case outcomes?
Nita Farahany: A lot of people are thinking about this, surprisingly. In a good way. I mean, not enough. But with a lot of emerging technologies, you don’t see the conversations happening before they go to scale across society. And in a really refreshing way, with neurotechnologies, there are a lot of international conversations happening on this.
So UNESCO had a huge meeting this summer, and they’re launching a potentially multiyear effort on it. The OECD issued a set of principles in 2019 directed really at regulators, and is thinking hard about how to translate them from a commercial-actor perspective. Across Europe, you see a lot of activity in this space. Chile has updated its constitution to include specific rights for people around neurotechnologies and mental integrity. Mexico, Spain, and a number of Latin American countries are starting to look into specific rights in this space, thanks to some advocates who are focusing on those areas.
There’s far less conversation here in the United States. But the UK recently issued a big report on this out of the Information Commissioner’s Office. So across the UK, Europe, and Latin America you see a lot of activity — and in ways that I think are thoughtful and grounded, that recognise some of the unique benefits and harms, and that try to enable the technology to progress in a way that makes sense.
I’d say there have been a few concrete approaches, as in countries like Chile. In the rest of the world, so far, it’s primarily at the level of principles: soft law, recommendations, or ethical guidance. For me, the concept of cognitive liberty is a way to help unify those efforts, and to say it’s not just neurotechnology — it’s all of these other technologies that are affecting our brains and mental experiences — and that we need to think about it in a more holistic way rather than a tech-specific way.
Luisa Rodriguez: Yeah, that makes sense to me. We’ve talked about neurotechnologies specifically, but tonnes of other things fall into this bucket, and it’s a bit artificial to separate them out this way.
Nita Farahany: Well, it’s good to talk about the technologies and the unique threats they pose, but then when I put my law professor hat on and think about it from a governance perspective, it’s to say: let’s find the commonalities, so that we govern in a way that’s actually comprehensive.
Luisa Rodriguez: Yeah, right. OK, so Chile is doing well on this; the US is not doing very well on this. Is that just because there hasn’t been much advocacy, and it’s not a priority for whoever should be thinking about this in the US?
Nita Farahany: I’d say the FDA has done a good job when it comes to implanted neurotechnology, acting as a kind of thought leader from a regulatory perspective. Beyond that, it’s not quite clear who would regulate it, or how, in the US, and I think that’s part of the challenge.
Luisa Rodriguez: Yeah. OK, so we’ve talked about a couple of ideas you’d like to become more widespread: in general, this kind of framework that we should consider cognitive liberty a human right. Are there any ideas that you think should be more widespread that we haven’t talked about yet here?
Nita Farahany: I think we have to build an ecosystem around cognitive liberty. And what I mean by that is: if you’re thinking about investing, ideally you’re investing in technologies that enhance cognitive liberty rather than diminish it.
And that’s investing from an educational perspective, and investing from a technological perspective. If you have a portfolio and you’re thinking about what a smart investment that aligns with overall human flourishing looks like in the long run, then I think it’s really about thinking through the impact of technologies on human cognitive liberty. And if they’re contrary to human cognitive liberty, it’s choosing a different company to invest in, right?
I think that’s part of how we start to align incentives in ways that actually maximise human potential: by investing in those technologies that expand it. I think [brain-computer interface] technology can be technology that expands human cognitive liberty, especially the companies that have a commitment to not building their business model around commodifying the brain data.
One company, OpenBCI, has asked me to come on as an advisor, and the reason is that they want to figure out how to align their product around cognitive liberty. To me, that’s an exciting opportunity: to work with a company that totally believes the future of computing needs to be rethought — that it needs to be about enabling the individual, keeping all of their data on-device, not having it commodified and extracted, not instrumenting the person for attention and engagement or selling them advertisements, but trying to liberate them.
So investing in technologies that liberate people’s minds: that’s a good thing. Investing more in technologies that narrow our focus and diminish us: that’s kind of a surefire way to ensure that AI takes over, right? Because the more humans are distracted and kept on their devices — their brains diminished, addicted, acting compulsively and not thinking critically — the worse off we are as a species.
Luisa Rodriguez: Let’s end that topic there. I found that very compelling.
Watching kids grow up [01:57:03]
Luisa Rodriguez: We’ve got time for one final question. We like to end with something positive and maybe more uplifting than some of the dystopian things we’ve talked about so far. What is something that you’re excited about possibly happening over your lifetime? Maybe this is in the space of neurotechnology; maybe it’s something totally unrelated.
Nita Farahany: Honestly, the thing that I’m most excited about is seeing my kids grow up. I have a three-year-old and an eight-year-old. We lost a child in between, so I’d say I probably have an even greater appreciation for our living children and getting to see them grow, and the privilege that it is to see them get bigger and to take on interests and to see what makes them curious.
I think one of the great privileges of being a parent is getting to see the world anew through the innocent and curious eyes of children. So the thing that gets me the most excited is the privilege of watching them grow and seeing the world through their eyes. It’s the things you don’t notice, the things you’ve taken for granted: everything is new to them.
Luisa Rodriguez: Do you have examples?
Nita Farahany: I don’t have a specific one for you off the top of my head, but they catch you by surprise all the time. You’ll be driving down the road, having never noticed a road sign there, and they’ll be like, “Isn’t that interesting? Why does it say that?” And you’ll read it, and it totally changes your perspective on that drive. Just anything. You take most things for granted and have filtered out a lot of your environment. Kids don’t, and it forces you to really think about life and the world differently.
Luisa Rodriguez: That’s really lovely. Thank you for sharing. My guest today has been Nita Farahany. Thank you so much for coming on.
Nita Farahany: Thanks for having me.
I have a hard time believing all this nonsense. They can see our brain activity, but not WHAT we think. They’d like to, but they can’t. Not in those who read books and are not media-programmed. They can take a good guess and exaggerate, and gullible people (who have been conditioned for years with futuristic sci-fi) will gobble it up. When Elon Musk’s monkeys got their brain implants, most died; one started chewing off its fingers, and what happened to the rest I don’t know. This is what they WANT, not what actually happens. We’re dealing with insane people here who will not take ‘NO’ for an answer. I trust my own brain when it comes to judging moral character. The fact that Willy G8s excitedly giggles every time he talks about ‘the next one’ or ‘shooting GMOs into little kids’ arms’ tells me he’s criminally insane and probably has some daddy issues. Same goes for Musk: a snake with weird issues.