Podcasts

Thorsten Zander | Teaching AI empathy using brain signals

About the episode

AIs could get much better at understanding what we truly value if we gave them access to our brain signals. And doing that is becoming easier than ever before.

In this episode, we talk with Thorsten Zander, professor at Brandenburg University of Technology and co-founder of Zander Labs. He coined the concept of passive brain-computer interfaces: devices that read brain signals to decode a user's mental state, non-invasively and without any effort on their part. 

We cover:

  • What non-invasive brain-computer interfaces (BCIs) can actually pick up from brain signals, and why that's very different from reading your thoughts or internal monologue
  • The hardware and software breakthroughs that are finally making passive BCIs wearable and affordable
  • How continuous neural feedback could dramatically improve AI training compared to current methods based on human ratings
  • Why Thorsten believes passive BCIs may offer the most concrete path to solving the AI alignment problem
  • The risk of social networks exploiting unconscious brain reactions to manipulate people, and why regulation alone is unlikely to be enough

Transcript

[00:00:29] Beatrice: I’m very excited to be here today with Thorsten Zander, who is a professor at Brandenburg University of Technology. Correct? And you also just started a company pretty recently, I think — Zander Labs. And we’re recording today at Foresight’s newly opened node in Berlin. So I’m very happy to introduce you as our first guest here.

[00:00:54] Thorsten: Thank you. Yes, it’s great.

[00:00:56] Beatrice: Your main focus, as I understand it, is working on something called non-invasive brain-computer interfaces. For someone who has never heard of that, what is it? And could you compare it maybe to — maybe most people, if they’ve heard of brain-computer interfaces, think of Neuralink, like Elon Musk’s company — and how is what you are working on maybe different from that?

[00:01:22] Thorsten: Yeah, sure. Of course. Brain-computer interfaces provide a link between human brain activity and a technical system, so the technical system can read out and interpret activity that is happening in the moment in a certain context and learn from that. BCIs are mainly used to support people with severe disabilities who can’t communicate on their own anymore, and so they can then give simple cues like “yes” or “no” or “hurt” or “hunger” through that, which is of course pretty useful. I’m personally focusing more on something I call passive BCIs, which are usually non-invasive. And that is more about —

[00:02:10] Beatrice: And non-invasive means, like, you don’t want to put a chip in, or —

[00:02:14] Thorsten: Exactly. So the difference is basically that you put electrodes or sensors on the top of your head, maybe under your hair, or somewhere you can reach the scalp, but you don’t go inside. And invasive measurements go really inside, either under the skull or even in the brain itself. And so we use non-invasive passive BCIs, and the passive BCIs basically track your mental state in context. So when I’m surprised, it would see that I’m surprised — about a cat jumping on this table, for example. I’m surprised, and it could see that before I even become aware that I’m surprised. This quick reading of these mental states can then be automatically transferred to a machine, which then understands my mental state and can relate that to the context. And that is basically the main focus I have — trying to figure out what we can do with that, how we can use these passive, non-invasive BCIs in our daily lives, for anybody.

[00:03:23] Beatrice: Yeah, maybe we start there. What have you come up with? What can we do with them? What do you think would be exciting to do with them?

[00:03:31] Thorsten: Yeah, so for a long time I was focusing on human-computer interaction, where you can see how the detection of certain cues benefits from these mental states. So for example, I write something on my phone and there’s an autocorrect which is not to my satisfaction — it’s just something wrong — and then before I even realize that it’s wrong, it could be seen, and then the autocorrect would be corrected. That’s one thing. But actually way more powerful is the combination with artificial intelligence. So an AI system could learn in the moment how I’m interpreting the situation, what I’m happy about, what I don’t like — and on one hand understand how I interpret the world and learn from me how it should itself interpret the world, but also adapt to my needs and my intentions.

[00:04:33] Beatrice: Yeah, I think we’ll get back to that, because I think it’s something that would be interesting to discuss a bit more. But before we do that, I think it would be interesting to maybe talk a bit about what people get wrong about BCIs — or, when you say you can read brain activity, what does that even mean? What is it that you can read? What is it that you see or monitor?

[00:05:03] Thorsten: So I may start with what we can’t read. And that’s — we can’t read thoughts. We can’t read an internal monologue.

[00:05:08] Beatrice: You don’t see the words.

[00:05:09] Thorsten: Not at all. But also not secrets — not directly secrets you have in your mind or something like that. What we get is basically the first instance of your processing in the brain. So let’s go back to the example: the cat jumping on this table. Because I haven’t seen a cat here and it drops onto the table, there’s a first instant in my brain which says, oh, this could be surprise, this could be the unexpected. And then there’s a cascade of information flow in my brain, over only a couple of hundred milliseconds, until I realize consciously that I’m surprised. And what we get with that system is the first instance — the first response in the brain — which tells us, oh, something’s going on. But of course not only surprise — I’m focusing on that more — but it could also be mental workload, error responses, happiness, whatever — a multitude of different mental states, including emotions and cognitive states, that we can assess in the moment, in real time, with the passive BCI.
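
The sequence Thorsten describes (an event, then a stereotyped early brain response a few hundred milliseconds later that a classifier can pick up) can be sketched with a toy simulation. Everything below, from the signal shape to the sampling rate and noise level, is an illustrative assumption of mine, not Zander Labs’ actual pipeline.

```python
# Toy sketch of single-trial detection of an early "surprise"-like response.
# All parameters (sampling rate, latency, amplitude, noise) are assumptions
# made for illustration, not a real passive-BCI pipeline.
import numpy as np

rng = np.random.default_rng(0)
fs = 100                        # sampling rate in Hz (assumed)
t = np.arange(0, 0.6, 1 / fs)   # one 600 ms epoch after the event

def make_epoch(surprise: bool) -> np.ndarray:
    """One noisy single-channel EEG-like epoch; 'surprise' adds a small
    deflection peaking around 300 ms after the event."""
    noise = rng.normal(0.0, 1.0, t.size)
    erp = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2)) if surprise else 0.0
    return noise + erp

# Learn a template from labelled training epochs: the difference between
# the average "surprise" epoch and the average neutral epoch.
X = np.array([make_epoch(s) for s in [True, False] * 100])
y = np.array([True, False] * 100)
template = X[y].mean(axis=0) - X[~y].mean(axis=0)

# Decision threshold halfway between the two classes' mean projections.
proj = X @ template
threshold = (proj[y].mean() + proj[~y].mean()) / 2

def detect(epoch: np.ndarray) -> bool:
    """True if the epoch looks more like 'surprise' than 'no surprise'."""
    return float(epoch @ template) > threshold

# Accuracy on fresh, unseen epochs.
tests = [(make_epoch(s), s) for s in [True, False] * 100]
accuracy = np.mean([detect(e) == s for e, s in tests])
print(f"single-trial accuracy: {accuracy:.2f}")
```

The matched-filter idea here (project each epoch onto a learned class-difference template) is only the simplest stand-in for the classifiers used in real passive BCIs, which work on multi-channel data with far more careful preprocessing.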

[00:06:14] Beatrice: And so if you think about what future this could lead to — if you think about the exciting applications, what do you see changing in our everyday lives? For example, if we were able to implement this technology across society?

[00:06:30] Thorsten: I mean, there are different stages. I think the first thing we’ll see is that big businesses will utilize that to train their AIs. So currently we do something called reinforcement learning from human feedback. When we look at ChatGPT, for example, the big success, the big improvement came from that mechanism — people were reading half a page of text and then saying “that is a good text” or “a bad text,” or giving a scale from one to five or whatever. And with that, the LLM got a direction to progress towards. And I talked to a developer of ChatGPT 3.5 and he confirmed that that was a big cornerstone of the huge success of LLMs —

[00:07:15] Beatrice: — being able to give human feedback.

[00:07:17] Thorsten: Yeah. So they hire people who do that — I actually did this myself, I gave feedback — and they have thousands of these people. And exactly there we can utilize passive BCIs, or neuroadaptive technology, as you might call it. What could it do? Like I said before, what they’re currently doing — not only for LLMs but also for, for example, self-driving cars — is people annotating or giving feedback on the output of the AI. And with that, the AI learns. But that is very slow — you do that after half a page, or after seeing the car crash or doing something wrong. And in theory, we know from machine learning and AI research that continuous, or high-frequency, feedback — which is not only yes and no, but might even be high-dimensional, like four or five dimensions or more — is way better. But at the moment we can’t do that, because you can’t put a device anywhere while you’re reading something, as that could distract you. But with a passive BCI, we could do that directly, and it is not bothering you because you don’t have to care about it. So you would agree that you want to use a BCI, but then you don’t need to be aware of what information is being extracted, and you don’t need to take care of that. You don’t have to focus on it. You focus on your main task of reading the text and understanding it. And while you do that, your brain is constantly producing output — and not just yes or no, but “I’m surprised,” “I’m overwhelmed,” “I’m happy about it,” or whatever. These are different states and a different kind of evaluation than “I expected that” and “I’m super angry about it.” This is a completely different feedback. And when you use that, it would improve the learning, or the training, of an AI.

But for example in our daily lives, as you asked — it would be that you could benefit from your AI understanding you better. So imagine you have a long conversation with your LLM. Then you see over time that it understands you better — you’ve spoken to it for a couple of months, it remembers things you told it, and it will give you better analysis, explain things in a better way, and give you better advice. And I did that, for example, for the last month on a personal topic, and it was amazing to see how the system adapted to me — memory gets better over time, it’s really super exciting. But I would still say it’s very different from a good friend who knows me well — there’s really a big gap there, even though I [see that progress]. But I’m pretty sure if I had provided my brain responses while interacting with it — where it sees how I interpret the world — it would have understood me way better, way earlier. So when I, for example, coming back to cats — I love cats. I am feeding my cat Gödel. My cat is named Gödel. Then if I put this into my LLM, it would know Thorsten has a cat and the name is Gödel. But if, for example, with the word “cat” it sees that I’m happy, and when I type the name Gödel it sees that I feel love, and when I’m talking about feeding [the cat I feel] a little bit annoyed — then it would know: this person probably likes cats, loves his cat Gödel, and is a little bit annoyed when he has to feed the cat. And that is way more information in one sentence than without the mental states behind it.
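The statistical point behind that claim, that many graded per-item signals pin down someone’s preferences far faster than occasional one-number ratings over whole chunks of content, can be illustrated with a toy estimation problem. The setup below (a linear “preference vector”, Gaussian noise, a batch size of 20) is my own construction for illustration, not how RLHF or any BCI system is actually implemented.

```python
# Toy comparison: recovering a hidden "preference vector" from per-item
# graded feedback (dense, BCI-like) versus one averaged rating per batch
# of items (sparse, rating-form-like). All modelling choices are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
true_pref = rng.normal(size=5)             # the user's hidden preferences
items = rng.normal(size=(200, 5))          # 200 items, 5 features each
reactions = items @ true_pref + rng.normal(0, 0.5, 200)  # noisy per-item signal

# Dense: one graded reaction per item -> ordinary least squares.
dense_est = np.linalg.lstsq(items, reactions, rcond=None)[0]

# Sparse: only one averaged rating per batch of 20 items, so the learner
# cannot attribute the rating to any particular item in the batch.
batch_items = items.reshape(10, 20, 5).mean(axis=1)
batch_ratings = reactions.reshape(10, 20).mean(axis=1)
sparse_est = np.linalg.lstsq(batch_items, batch_ratings, rcond=None)[0]

dense_err = np.linalg.norm(dense_est - true_pref)
sparse_err = np.linalg.norm(sparse_est - true_pref)
print(f"dense error:  {dense_err:.3f}")
print(f"sparse error: {sparse_err:.3f}")
```

Even though averaging shrinks the noise on each rating, it destroys per-item attribution, so with the same 200 items the dense estimate recovers the preference vector much more accurately.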

[00:11:14] Beatrice: Is your cat named after Gödel? Gödel?

[00:11:17] Thorsten: Yes. I’m a mathematician by training, with a focus on mathematical logic, and I’m fascinated by the life of Gödel and his work.

[00:11:26] Beatrice: Very nice. What’s the other one? What’s the cat that is in the box?

[00:11:33] Thorsten: Schrödinger’s cat! Yeah, but that is a different situation — you don’t know if the cat is happy. [laughs]

[00:11:40] Beatrice: Okay, so this is really interesting. It seems like — and I know this is a scenario that you’ve used yourself also — it seems like this can be misused. I think you mentioned that you could see social networks using this to collect people’s unconscious reactions to, like, political content or things like that. Is this a serious risk that you think about, and what do you think we could do to avoid it?

[00:12:14] Thorsten: Yeah. And so taking a step back from the question before I answer it: I think this technology has the potential to really change the world — the way we interact with machines, and the way AI is coming to us in the next years. And just as a general statement: any kind of big technology change has the potential to be abused. And the biggest problem there is that we are, of course, naive — we don’t know what’s going on. I might have better insight into that, but most people don’t. And even I don’t know exactly the full potential and full impact of that. So that leaves some space where people could misuse it. The example I gave is: you have a social network, and the network has an agenda to manipulate you, for example. We don’t know if that’s true, but it could happen. They want you to vote for politician A but not for politician B — but you actually like politician B — and then the social network could discredit him and manipulate you. This is what’s already happening. But [with BCIs] they could see instantly, over a longer time, what your favorite color is, how you react to certain video scenes, and understand what you like and what you don’t like — and then they could present the politician they want you to like in a very good way, so that you really think, “oh, that’s a very positive person.” And with that, people’s opinions can be manipulated easily and much more strongly if they are not aware of it.

So what can we do about that? That’s just one example — we could go in multiple directions where this could be used. What could we do about it? We could do regulations, and we see that now in Europe: we have the EU AI Act, which was well meant but doesn’t do its job, because it’s four years old and the technology has evolved so quickly — and it will keep evolving. So regulation will always lag behind. It might do some good, but it takes a long time. And we cannot just hope that companies will not misuse it — that’s one approach, just fingers crossed and hope for the best, but I wouldn’t rely on that either. So what I would do is communicate this to people out there, to everybody. Because we as people need to decide how to use technology. In the last 20 years we were jumping on basically any new tech — any kind of new tech was super cool, and we liked that. And we see some harm happening now. We see that the internet is not only doing good: we have social bubbles, we have fake news, and whatever. And scientists warned us about that 20 years ago, and we ignored them. So we see the damage now. Now we have new technology and we can do better — we need to try to understand what the consequences of actions are in this domain. And I’m a little bit puzzled, because we agreed as a society that you need to have a driver’s license to drive a car, because we see there could be danger. And this understanding is not there for new technologies — specifically not for AI. And I think we need to educate ourselves, we need to be aware, and with that we can prevent at least some of the damage. But maybe one last sentence on that: I do not focus on the damage, I do not focus on the bad things. I keep those concerns in mind. But I think we all should also see the big benefits, the big potential of doing good in the world that we can achieve with this.

[00:16:00] Beatrice: Yeah, very much agree. I think it’s interesting to keep both things in mind — think about what the best-case scenario is that we’re headed for with it, and think about how it can really uplift us. How close, or how scalable, is this technology right now? Because you mentioned that you do need to have these non-invasive BCIs — is that something that we would all start to have? How do you see it scaling?

[00:16:35] Thorsten: Yeah, that is a very important and big question to answer. When I look at the trajectory of BCI technology, it’s been there already for 20 years to some degree, but it never reached the market. Why is that? Because we use these caps — you have to put a rubber cap on your head and then attach hundreds of electrodes. You have to build a gel connection between the electrode tip and your skin. So you look silly, it takes half an hour to an hour to put on your head, and then you have gel in your hair, and it looks terrible. And the second thing is that the classifiers — these are the detectors that can detect a certain mental state — need to be calibrated every time you use them, and that takes 20 minutes for each mental state. So if you want to have something corrected on your phone, you will not put a cap on your head for an hour and then calibrate the system just to have one mistake corrected.

[00:17:35] Beatrice: So is it that it needs to be calibrated to each individual person? Always?

[00:17:40] Thorsten: That is the state of the art, yes.

[00:17:41] Beatrice: And —

[00:17:41] Thorsten: These were the bottlenecks that stopped this technology from hitting the market. But there is a change happening right now. I started a company, and this company is now developing new hardware and software that solves these problems. We have a new type of electrode system which is adhesively connected to the forehead and behind your ears, and you can do it yourself — it takes three to five minutes and there’s no gel involved. It doesn’t take long, and it doesn’t look awful either — you still see something, but it could become fashionable. Right now we have prototypes, and it’s really doing well — we can get almost as good data with that compared to a full EEG cap, and that was surprising to me, I have to admit. But what we see now is that we really can optimize for that and get a lot of information out of it. Secondly, during my time at university — and I’m still a professor there — I built my research around something I call universal classifiers, which are these detectors that don’t need to be calibrated anymore. We have taken a lot of data from many people and trained [the system] on that. And the good thing is that the variance between our brains is not as large as you might think — in fact, the variance within one person’s brain is almost as large as the variance between individuals. So if the system has seen a lot of people, it can deal with basically 99% of the variance. What does that mean? It means that we actually have a plug-and-play system that doesn’t need to be calibrated, and it works as well as something calibrated for you or trained specifically on your own brain data. So with that, we have a wearable and immediately applicable system. And the price factor is also way better — if you go for a standard EEG, you pay 30,000 to 50,000 euros per system, and we can go below 5,000, maybe below 2,000, because it’s easier to access. So this is not to advertise my company, just to say that there’s a change happening. This technology is suddenly becoming available. All the research we’ve done over the last 20 years can now be translated into applications in the real world.
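The universal-classifier idea, training on pooled data from many people so the detector transfers to a new user with no calibration, can be sketched in a toy form. The simulated “subjects” below share a common response signature plus small individual idiosyncrasies; every number here is an assumption made for illustration, not a claim about real EEG data.

```python
# Toy sketch of a calibration-free ("universal") classifier: train on
# many simulated subjects, then apply it unchanged to an unseen subject.
# The shared-signature-plus-idiosyncrasy model is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
shared = np.array([2.0, 0.0, 1.0, 0.0])   # response signature common to everyone

def subject_trials(n=100):
    """Simulated labelled trials for one subject: the target mental state
    adds the shared signature plus a small subject-specific perturbation."""
    personal = shared + rng.normal(0, 0.3, 4)
    labels = rng.integers(0, 2, n).astype(bool)
    trials = labels[:, None] * personal + rng.normal(0, 1.0, (n, 4))
    return trials, labels

# Pool training data from 9 subjects and fit one mean-difference classifier.
pool_X, pool_y = map(np.concatenate, zip(*[subject_trials() for _ in range(9)]))
w = pool_X[pool_y].mean(axis=0) - pool_X[~pool_y].mean(axis=0)
proj = pool_X @ w
threshold = (proj[pool_y].mean() + proj[~pool_y].mean()) / 2

# Apply it, with no calibration at all, to a brand-new simulated subject.
new_X, new_y = subject_trials()
accuracy = np.mean((new_X @ w > threshold) == new_y)
print(f"plug-and-play accuracy on unseen subject: {accuracy:.2f}")
```

The transfer works here for the same reason Thorsten gives: the between-subject variance (the 0.3 perturbation) is small relative to the shared signal, so a classifier fitted on many people generalizes to someone it has never seen. Real universal classifiers are trained on large multi-subject EEG datasets with much richer models.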

[00:20:08] Beatrice: Yeah. And is it your company that was also funded by the German cybersecurity agency? Because that’s a bit curious, I guess — when you were speaking originally, I think most of the BCI work, as you say, is focused on helping people with neural damage to be able to function normally. So how come a cybersecurity agency wants to fund this research?

[00:20:35] Thorsten: Yeah. A few years ago they were investigating what is happening in Germany, what new technology is evolving. And at that time I was advocating for the passive BCI technology idea. I had basically been communicating about it for 20 years — in 2005 I started working on that, and in 2008 I defined the new research field of passive BCIs. And I did a lot of research. I have to say — not in a bad way — but people were laughing at me at the time, because they said, “Come on, grow up.” I was a young PhD student with a new idea, and they told me it was a great idea but to stay realistic: “You can’t do that, you can’t bring something like that to the world.” But I’m a quite stubborn person and I continued with that. And I managed to get attention worldwide, and now there are 120 to 130 labs working on that, which is quite good. But the cyber agency also got wind [of this], and they asked me, “Hey, can you tell me about this technology?” And I told them, and they realized that this has huge potential for the world, but specifically also for Germany to get into the AI game. This is evolving mainly in China and the US, and Europe is currently looking at how to get into that and not fall behind. This is the main focus for Germany and Europe. And they saw the potential that our societies can be protected from misuse of AI if the AI is aligned with our value systems — and this leads to the so-called alignment problem. And we can solve that with passive BCIs. If you have a multi-dimensional state assessment where the classifiers don’t need to be individually trained, so that you can plug and play, then you can assess [someone’s] interpretation of the situation and transfer that to machines, so they can understand the world the way you see it. And that is one perspective they had: okay, this is something where we can connect our own value system to an AI and train it in a way that it becomes safe. And part of their work is really focused on a safe society.

This is why they were interested in that. They didn’t just give me a cheque just like that. They ran a competition to fund a project, and that was really big and awesome, because usually there are small projects of one to three or four million euros or something. They decided to go for one big project where they put 30 million euros that would go to one entity — one person, one group, one company — and everybody could apply. And we applied, of course, and won in the end, which is great. And this is now, to our understanding, the largest single-entity project ever [funded in this field]. Before this, the second largest was 8 million, and now we have 30 million, which is quite a milestone. The idea is to bring it to a development stage where it can be used in an industrial or real-world setting. The next step needs to be funded by investors; that will not be done by the government, of course. And from that perspective, 30 million is also not enough if you really want to enter a big market — [that’s the next step we’re at now]

[00:24:22] Beatrice: Okay, I have a lot of follow-up questions. I guess one thing — just because you were on the Europe topic right now — I know that you mentioned that you contrast American and European BCI research. What is it that you define differently? And does it matter which approach wins, or are they… yeah?

[00:24:53] Thorsten: It’s not an either/or, it’s not one against the other, it’s just a different way to approach it. In general, in the US there’s a bigger focus on invasive methods. So companies like Neuralink, but also now Merge Labs and others — Merge Labs is funded through OpenAI and Sam Altman and so on — that are investigating at least whether they’ll go invasive. The idea there is of course that the signal quality is better when you go closer to the source: when I go deeper into the brain, I get a clearer signal. To expand on that a bit: what we measure is microvolts — it’s really small. The electrical field around here in this room is way bigger than what we can measure there. So when you are just a few centimeters [away from the source] —

[00:25:37] Beatrice: Tiny electrical signals.

[00:25:38] Thorsten: Exactly, it’s really tiny. Batteries normally have around four volts — so they have volts, but we have microvolts. And those few centimeters from the cortex, where the signal originates, to the scalp are enough to introduce a lot of noise into the data. So the closer you get to the source, the better you can read it. And that’s certainly very useful if you want to restore communication in a tetraplegic person, or for prosthesis control — because you go into a certain brain area and read out very carefully, in great detail, that specific signal, not distracted by anything else. But when you want to capture a full mental state, it’s not just one area of the brain — it goes through the whole brain. And there it becomes problematic for the invasive approach, because right now they drill holes and put an electrode in, going through the membrane that protects the brain. But if you want a full mapping of the brain — a good picture of that — I estimate they would need 42 drill holes, which is too many. So they limit themselves to, at the moment, reading out certain very specific information — and they can do that very well — but that’s why I’m looking at the noisier but more complete data I get with non-invasive measurements. And that is the distinction between Europe and the US. The US is mostly focused on invasive methods, while the Europeans are more into a user-centered approach where we don’t just optimize for data quality, but also make it usable and deployable. And I think that hesitation is understandable. I can see a person who can’t communicate and who could live their life much better if they accepted such an operation — but people who can use their hands and can speak will have their doubts about any kind of operation. And so, with Neuralink and the others aiming for the commercial market in the mid-2030s — so in about 10 years — I’m really curious whether people will accept having something inserted in the brain. Of course there’s a lot of potential to make it easier; there are different ways to insert something that don’t require a major operation, and they’re putting a lot of effort into reducing the complexity of those procedures. But that is the difference, I would say. I think the US is also more focused on medical applications at the moment, and here in Europe, at least the companies that are getting things to market, I’d say, are more on the passive, neuroadaptive side.

[00:28:52] Beatrice: Yeah, oh yeah, that’s interesting. I feel like there are examples in our technological history where we’ve gone for the less invasive [approach] — you know, how everyone [thought] Google Glass was going to be the next big thing, but everyone still prefers to be on their phone, and things like that. Yeah, that’s really interesting. I guess the other question I wanted to ask is: when you say that BCIs can monitor our reactions to things — for example, I interviewed another neuroscientist recently on this podcast, David Eagleman, and we were talking a little bit about how your instinctual reactions are not always the reactions that you want an AI to update on. Your sort of system 1 and system 2 — you don’t always want your [gut-based] reaction to be what the actual action taken is based on. How do you cope with that, or think about it, in relation to the passive BCI?

[00:30:14] Thorsten: Sure. Basically it goes back to the beginning of our conversation, where I asked: what can we actually measure? And it is really the first instance of our instincts.

[00:30:35] Beatrice: You needn’t be afraid of the cat jumping on the table.

[00:30:36] Thorsten: Exactly.

[00:30:37] Beatrice: Yeah.

[00:30:37] Thorsten: Even though there might be an old system, an old memory in my mind that says, “oh, it’s dangerous,” or “oh, it has claws.” So the result of the calculations — the processing of information in my brain — might lead to something different than the initial reaction, and that is something to be aware of. So this also leads to the questions that come up when we leave the lab, when we go out of the academic sphere and into the wild, into more realistic settings, and finally run real-world experiments. How well is the technology working out? What can we see and what can’t we see, and where do we need to be careful? These are questions which keep me up at night — I mean from excitement, not mainly from worry, but there is of course the concern that it doesn’t work out. But I really want to see what we can infer in that situation. And then we need to be, of course, very much aware of what we’re reading out, what kind of mental state we’re reading out. For example, if I see a foreign person, my first reaction in my mind — I don’t know, actually — might be, “oh, I’m afraid,” or whatever. And I’m not racist, right, I’m really quite normal. But because my context is that, we need to be really careful how we deal with that. And that is something new to learn. It’s a new technology, a new kind of information we’re usually not used to. When you think about how we interact with machines — with LLMs — we translate our thoughts into some code or some text that the machine can process, and we have to type it in or translate it actively. And that is how we interact with machines. It’s always a very direct interaction, a very focused interaction. And now we’re opening up — here we can get some understanding of intention, some empathy — it understands something without me actively communicating. We can read between the lines, and that is possible to some degree now, but not to the full extent yet.

[00:32:56] Beatrice: And you mentioned that you could use [passive BCIs] for the alignment problem. Are you working on that right now? Are there any concrete projects like that?

[00:33:06] Thorsten: Yeah. In my company, Zander Labs, this is one of our main goals — to find a good way [to tackle it]. The alignment problem is also a little bit fuzzily defined, it’s not really clear, but maybe [think of it in terms of] empathy: creating an understanding of my mind [in a way that a] machine can understand my mind, my value systems, my interpretation of the situation. I think that pretty much defines alignment, though people would disagree. So we are actively working on the alignment problem. And I think what we’re doing right now is the only promising technology that could really solve the alignment problem. When we look at the effort done so far — OpenAI had the big superalignment team, and the success of that was, I would say, mediocre. They stopped it as well. And the reason is not that people weren’t smart enough — they were super smart — but the problem is so big, and it’s hard to tackle with the technology currently available. So we bring a new player to the table: something new, a new kind of information, which actually brings all the features you need to solve the problem. And this is why Zander Labs focuses on that very much. And we can’t do that alone, of course, but I’m looking forward to working with companies like DeepMind, or whoever out there is facing this problem, to see what we can do. But the first instances we saw in our labs are amazing — even though the signal is noisy and not 100% reliable — and even with one single classifier, where we just see agreement or disagreement, we could train a system to understand your preferences in a certain situation, and that to a very high degree of impact.

[00:35:17] Beatrice: Okay, that’s really interesting. Yeah, excited to see what comes out of that.

[00:35:25] Thorsten: Me too, I can imagine.

[00:35:26] Beatrice: Another theme that you mentioned is that you’ve made this journey — you used to be in academia, and you’re still in academia to some extent, but now you also basically run a startup. What was that transition like? What did you learn from making that transition?

[00:35:49] Thorsten: That was a big step, and it moreor less happened accidentally — not 100% — but I saw, when I got this idea andgot closer to it, that academia is a great place to investigate, because therewere so many unknowns, that I decided to go fully into the academic domain. Butafter having been there for a couple of years, I progressed and realized: Iwant to change the world, I want to bring something to the world — and that wasnot 100% going to happen in the academic domain. And at that time, a goodfriend of mine, an entrepreneur in the Netherlands, said, “You’re right, youneed to build a company, let’s do that.” And I said I was scared — I didn’tknow what forms I needed to fill, what I had to do with all the taxes andeverything. But he said, “Come on, I’ll help you.” And he helped me, and we setup the first instance of Zander Labs in the Netherlands, in Amsterdam. And yeah, that was the beginning of a long learning curve. In the first coupleof years we had projects with larger companies who wanted to investigate thetechnologies, and that was great, but it was not a real company — it was reallyjust two or three people, partly hired, doing some work that we couldn’t do atthe university. In 2020, we got investors, and they brought in the knowledge,the real drive to become bigger, and started the company properly. And fromthen I worked a lot, and they’re still actually running the company — I don’thave the time to be CEO, and I don’t have the skills for it — I’m the techdeveloper. Everything else is done by experts who are also investors and partof the company. But that was, of course, great to see — what expectationsdidn’t work out, what worked out, what things I didn’t see when I started, andthe good and the bad. But right now I think there’s a huge benefit fromcombining these two worlds — being an entrepreneur while still being very muchinvolved in the academic domain. 
And the transition we can do there has worked out so well, and I’m absolutely happy doing that because it can really change the world. And it can just motivate everybody: everybody’s scared about this, I understand that, but it’s worth it. If you have a good idea, go out, start a startup, get help, don’t do it only by yourself. There are some hurdles, yes, but if you have a good idea, you’ll find somebody who believes in you and trusts you — and I think trust is a big thing. You need to trust each other, and then you can achieve a lot.

Another thing I shouldn’t forget to mention is that the university is actively supporting [this]. They have a legal agreement which is solid and which manages everything. And they don’t only talk about it — they support it actively. There’s a good transition of people from the university to the company, but also the other way around — there are people starting their PhDs who just work with our labs, and that’s really nice. And of course, there’s the big support from the [cyber] agency as well, which makes it a bit easier. But in the end it was a very good decision, and I’m really happy about it.

[00:39:50] Beatrice: So it sounds like you would maybe recommend it as a career trajectory for more people — if you want to have more impact, potentially, this could be [a path]?

[00:40:00] Thorsten: Yeah, so of course it doesn’t apply to all kinds of academic groups or leaders. But if somebody has an idea, they should actually think about investigating that. And this is specifically [true] in Germany — it rarely happens here, because there are so many administrative hurdles and a lack of awareness. But people who can maneuver in the academic domain are capable of dealing with at least the first steps in the entrepreneurial domain too. I think they should not be afraid. And I understand this, because I was too — and without my friend Marc I wouldn’t have done it; I’m pretty sure I would not have jumped over the first hurdle. But he helped me a bit, and since then everything has gone better. From a larger perspective, I also think Europe — countries like Germany — would benefit so much if we had a more visionary, entrepreneurial group of people.

[00:41:02] Beatrice: Yeah, I agree. I’ll do a little promo for the Foresight node here in Berlin, where we’re recording this, because — yeah, Foresight was originally founded in San Francisco, which is obviously the home of the entrepreneurial spirit in many ways, and now I guess we’re hoping to bring some of that culture here to Berlin. So it’s a great space if anyone is interested in checking it out for those reasons.

[00:41:27] Beatrice: Okay, so if we think best-case scenario for the technology you’re developing — could you try to paint a bit of a picture of what could be different in the world, and how we’d be using this technology?

[00:41:44] Thorsten: Yeah. So in the short term — let’s say in the next three to five years, and we can get there, we’re really close — like I mentioned before, we will use it in training AI systems. That will be done by companies, and they will deliver, in the end, better-working but also more aligned AI systems that can understand us better and will support us in a better way. The example I’m using right now might be a little further off because it’s about robots. But I’ve often been told: when I grow old, I might not have a person taking care of me — it will be a robot. And I’m not saying that’s bad. I think robots will do that rather well: they will be reliable, they will take care of our needs. But they will not understand [us]. I can be grumpy sometimes, and I might want to try something even though I know I can’t, and then I want the robot to support me and let me be. A human caregiver would have that empathy — they’d understand: okay, today is one kind of day, let him be; another day I’m down and need more support. And if I were to be taken care of by a robot, I want [the robot] to understand that. And this is specifically what we can achieve with this technology already now — it will first manifest in an LLM, which can do the trick, but on a longer horizon that’s something I clearly see the world would be better for.

On a longer horizon, I think we have to make a big step — we have to really understand and try to imagine how the world looks in 20 years if we continue to develop AIs, which we certainly will. I think there is an emergence of a new kind of — I’m lacking the term because I don’t know what to call it. It’s not a life form, it’s not an existence, but a new form of intelligence that can act in the world. It becomes real. It can live in a world of words, and it will be put out into the world and do something. So we had animals, we have humans, and then we have this technology that can act and decide what to do in this world. And I think it’s really hard to tell how [these] will change our societies.

But I think when these new entities are out there, we want them to understand us in a good way and communicate in a good way. And I think, in whatever exact form, this empathic communication — through a passive [BCI], this new adaptive understanding, this kind of empathy, this concept of ethical values — will be helpful to shape this world into a more collaborative world. And maybe adding to that: I don’t believe AIs are necessarily going to develop into something that hates us, but they will come to this world and try to understand it. And either we help the AI to understand it the way we understand the world, or we let it go randomly — and then we don’t know what’s happening. If we raise a child, we can help the child to understand the world in the way we do. And if we don’t do that, we shouldn’t be surprised if the child does something we don’t like or don’t expect. So for me, an AI is like an alien baby coming to the planet, trying to figure out what’s going on. And we [need] to support that child to grow up and be with us and not against us. And this, while abstract, is I think our task for the next 15 to 20 years. And I think with this technology we can make it way better.

[00:45:56] Beatrice: Yeah, yeah. It’s like a great expansion of what we can communicate to each other.

[00:46:03] Beatrice: So if we have a young person maybe listening to this episode and they’re getting curious, maybe [they] want to work on this — what do you recommend? Where do they start now?

[00:46:16] Thorsten: Yeah, that’s a bit difficult. For sure, a good idea is to study AI — that’s a good thing to do anyway. There are not that many good programs, but there are some out there. Try to get into one, and then you’re set. But when you look at the neuroadaptive [approach to] passive [BCIs], there is not much out there at the moment. You can of course come to Cottbus, where my university is — there we have an AI program where you can take a focus track on neuroadaptive technologies. That’s great. I have 380 students in my class every year, so it’s really [growing], but [that’s the only program I’m aware of that is going in this direction]. So that’s a good choice — if Germany is an option for you, go to Cottbus. There might be some other BCI programs, but they won’t focus specifically on passive BCIs — they’ll touch on it, or not at all. And I’d just suggest: go for AI. That’s good anyway. Go for AI, and then [people] can find you and follow your work. And with AI you can learn a lot by yourself — you can read papers, collect some papers, put them into your AI, explore them, and have conversations. That’s already [very valuable]. If you have a good knowledge of AI, get some background from papers others and I have written, look at the work being done out there, and look at the problem — then you can also see your way in. I think doing a PhD is still a good choice if you really want to go a bit beyond [the basics], because it’s not about the title, but about really expanding and digging deep into the topic. There’s still enough time for that.

[00:48:18] Beatrice: Last question: what’s the best advice you ever received?

[00:48:25] Thorsten: Yeah, I mean, that’s a good question. There was a professor I didn’t know too well who told me: take a step back and look at the full picture. And that is something you don’t do when you just study a certain program and you really try to get into the details. But how I interpreted it — I don’t know if he meant it like that — was that I work very much interdisciplinarily. I’ve seen so often, when I work with psychologists, people from medicine, and engineers, that one person will solve the problem another person thinks is unsolvable. And you can almost see that happen when you take a step back and look at the full picture. That is what I’ve done multiple times in my life, consistently, and it has helped me a lot to progress and understand things in a better way.

[00:49:24] Beatrice: That’s great advice. Thank you so much, Thorsten. That’s it.

[00:49:29] Thorsten: Thank you very much.



RECOMMENDED READING

People and organizations

  • Zander Labs: The German-Dutch company Thorsten co-founded, developing non-invasive, neuroadaptive BCI technology.
  • Neuralink: Elon Musk's invasive BCI company, referenced in the conversation as the most prominent public example of brain implant technology.
  • Brandenburg University of Technology (BTU Cottbus): Where Thorsten is a Lichtenberg Professor. Offers one of the few AI programs with a dedicated focus track on neuroadaptive technologies.
  • David Eagleman: Neuroscientist at Stanford University, known for accessible science communication on brain plasticity, perception, and time.
  • Kurt Gödel: Mathematician who showed that any sufficiently powerful mathematical system will contain truths that cannot be proven within that system. Thorsten’s cat is named after him.

To learn more about the technology

  • Brain-computer interfaces explained: Non-technical overview of what BCIs are, how they work, and the difference between invasive and non-invasive approaches. Good starting point for anyone new to the field.
  • What Neuralink did in 2024: A summary of Neuralink's first human trials, useful for understanding the invasive end of BCI research.
  • EEG (electroencephalography) - what it is and how it works: Explainer from Cleveland Clinic on the brain-scanning technology that underlies non-invasive BCIs like the ones Zander Labs develops.
  • Passive brain-computer interfaces: Explainer by Zander Labs on how passive BCIs work, the non-invasive hardware they use, and how they reconstruct mental states from brain signals.
  • System 1 and system 2 thinking: Framework (from Daniel Kahneman's Thinking, Fast and Slow) distinguishing fast, automatic gut reactions (system 1) from slow, deliberate reasoning (system 2).
  • Reinforcement learning from human feedback: The technique where human raters score AI outputs so the model learns what 'good' looks like. AWS explainer.
  • AI alignment: Overview of why ensuring AI systems pursue human-compatible goals is hard, and why it matters more as models become more capable. IBM explainer.
  • EU AI Act explainer: Accessible overview from the European Parliament of what the EU AI Act is, what it covers, and which AI applications it bans or regulates. Thorsten referenced this as an example of well-intentioned but already-lagging regulation.
  • Zander Labs receives €30M from Germany's Cyber Agency: Zander Labs' own account of the NAFAS project and the thinking behind the German Cyber Agency's decision to fund non-invasive BCI research as a matter of national security and AI sovereignty.