What if we could reimagine the future from a place of hope instead of fear?
In this special episode of the Existential Hope Podcast, Allison Duettmann and Beatrice Erkers turn the tables and interview each other instead of a guest, sharing candid insights into their journeys, hopes, and visions for humanity.
Together, they explore big concepts like moral circle expansion, how neurotech could deepen empathy (even with animals!), and why worldbuilding in 2045 can help us envision and create better futures today.
Prepare for the new year by diving into strategies for building a future worth striving for.
Beatrice’s eucatastrophe envisions a future where humanity expands its "moral circle" to include not just people worldwide but also animals and potentially other sentient beings. She imagines technologies like neurotech and AI enabling direct emotional sharing or communication across species, fostering deeper empathy and understanding. For example, if someone could feel an animal’s fear or communicate its needs, it might inspire greater compassion and action. While challenging, this moral shift could drive global cooperation and environmental stewardship, embodying hope for a more inclusive and interconnected future.
Beatrice is the Director of the Existential Hope program. She has a background in the publishing industry and several years of experience working in communications, both at Foresight and at a publishing house. Her special interest in the integration of technology and society led her to the Foresight Institute.
[Episode artwork generated with DALL·E, a generative AI tool from OpenAI.]
[00:00:00] Beatrice Erkers: Welcome to another episode of the Existential Hope podcast, where we explore visionary ideas and how science and technology can help shape a brighter future. Normally we bring you conversations with leading technologists and scientists, but today we're doing something a little different. It's the end of the year, and so we thought we'd do a little bit of a holiday special.
[00:00:16] So today it'll be me, Beatrice, who runs this podcast, along with my co-host Allison Duettmann, and we'll basically be interviewing each other about these topics. We've spent a lot of time thinking about the core questions that we often ask our guests: what a hopeful future looks like, how we can achieve it, and what role we each play in creating it.
[00:00:33] For this episode, we're turning the tables and we're sharing our own thoughts, visions, and ideas for the future. For a full transcript of today's episode, additional resources, and much, much more, please visit existentialhope.com. And don't forget to subscribe to our newsletter to stay updated on new episodes, events, and different ways to get involved with our community. Now, let's jump into this very special conversation.
[00:00:51] This is a very special episode of the Existential Hope podcast. Today, me, Beatrice, I'm going to be interviewing you, Allison, and you're going to be interviewing me, or I guess it's a conversation to some extent more than an interview. But yeah, the idea is that we've been running this podcast now for I think it's at least two years, maybe it's more.
[00:01:08] And yeah, normally we interview all these different, very smart technologists and scientists about their thoughts and visions for the future. But now we wanted to turn the tables a little, since we've actually been thinking about these questions ourselves for quite a while now.
[00:01:23] And so it could be fun for people to hear what we think about these things as well. So maybe I'll start and ask you: could you tell us a bit about yourself? What brought you to work at Foresight and Existential Hope? And the question we always ask: if you had to summarize your journey in three minutes, how did you get here, and why does this matter to you?
[00:01:43] Allison Duettmann: Yeah, I'll see if I can do it in three minutes, but that's a bit tough. I think what inspired me about Foresight and about Existential Hope is similar, because to me Foresight really embodies the spirit of existential hope while being a bit more tech- and action-oriented in how it shows up.
[00:01:59] For as long as I can remember, and I grew up in Hamburg and its suburbs in Germany, I didn't want to die. I thought that life is great, just so fantastic, and that there was so much to explore and to see in the world. With every day that passed, learning about more books to read, friends to meet, places to go, things to learn, experiences to have, there was just no end to the fantastic experiences and growth moments that life had to offer.
[00:02:24] And so I found it deeply saddening that I, on an individual level, would eventually not be part of that system anymore. I had this personal existential angst that accompanied me for most of my childhood, until I found my way into existentialist philosophy.
[00:02:38] Mostly Camus, somewhat Nietzsche, even though you could say he's more of a nihilist philosopher, and Sartre, et cetera. They had this notion of trying to create meaning in the face of death and a finite life. So I got really into that literature and tried to use it as a wrapper for meaning in a pretty finite life.
[00:02:54] It helped to some extent, but I still found it deeply saddening on some level. Just unfortunate that eventually all of these wonderful experiences, and the investments that you make toward your future life, have to crumble and slowly disintegrate and wither away.
[00:03:07] And then after a while, I discovered longevity and the fact that there were at least some people talking about the possibility of extending human life, perhaps not indefinitely or infinitely, but to some extent. Through that, I discovered some of the transhumanist literature, et cetera, which put more of a philosophical framing around it.
[00:03:22] Not all of it was based in rigorous science, but nevertheless, it switched my mindset from taking life's constraints as given and trying to create meaning within them. Instead of accepting as fixed the fact that we will die at roughly 80, sometimes earlier, sometimes later, it took these seemingly artificial constraints and asked: what if we could move them?
[00:03:45] What if we could stretch the constraints we are dealt to fit the expectations we could have of life? So I got really interested in that, while realizing full well that much of the research isn't really on track to get us there.
[00:03:58] But nonetheless, just having people think about it was great. The next thing that hit me was realizing that my own fear of death was almost an adolescent version of a bigger fear: civilizational death. When you're younger, you don't necessarily think that civilization as a whole might come to an end; you think that all you have to do to be around for a really wonderful, flourishing future is to not die yourself. But far from it.
[00:04:21] There are various existential risks, including from AI, bio, nano, et cetera, many of which we address in this podcast too, that may throw a curveball into civilization's continued existence. To me, this secondary awakening was realizing that civilization itself was not safe, and starting to care a lot about civilizational longevity, which I had always considered the default.
[00:04:39] And so that made me switch from my personal, individual lens to this more civilizational lens: what are the big factors that can make civilization live longer, and what are the ones that could lead to a premature death of civilization, to put it in biological terms?
[00:04:52] And I think right now, we are quite prone to thinking that we are racing towards disaster, or that who knows how long the patterns currently upholding civilization are sustainable. There are so many different risks and processes going out of control; how could we possibly fix them?
[00:05:06] I always had the opposite perspective: it's already quite good. Sure, there are problems, but it could be so much better, and it's not that difficult; nothing is physically stopping us from creating a beautiful world. So I was genuinely excited about the future and thought that we could probably build a pretty good world if we just managed not to kill ourselves.
[00:05:25] And so this kind of secondary switch was the switch from civilizational fear of death or civilizational existential angst to this framing of existential hope. Let's think a bit more perhaps about the futures that we can reach if we don't kill ourselves in the next five years. If we don't just take it as a default that we are in deep peril, then what could we build?
[00:05:41] And I think that's so deeply inspiring and so wonderful. It's also not a new idea. Sci-fi authors have been doing this for a long time, and doing it pretty well. Sometimes more in a dystopian way, but sometimes also painting a pretty positive world. It took me a long time to comb the internet and find all of these different resources, both for dealing with individual existential angst and resources about longevity, but also all of these sci-fi stories that lay out positive worlds for civilization, and the resources that think about long-term futures and what we could do as a civilization if we don't destroy ourselves.
[00:06:15] It took me countless hours crawling the internet, I don't even know if it was called LessWrong back then, but crawling blogs and outlets where people were thinking ambitiously about the future. What inspired me about Foresight, when I finally came across them, is that here were a few people in the Bay Area who already thought ambitiously about the future when they founded Foresight, who thought about how nanotechnology, AI, and various other technologies could shape the future positively. And not just talking about it, but actively trying to move these technologies toward those positive paths. Similarly for existential hope: when I finally came across the term much, much later, and when we discovered it in a paper by Toby Ord and others, it was so inspiring that someone had put a name to this notion that rather than just thinking about all the terrible things that can happen to us, what if you think about the opposite world, where we make it through, where something amazing happens, or where we just manage to extend the civilizational progress we've inherited for longer.
[00:07:04] I got really inspired by this notion because it pinpointed what had been lurking in my head for a long time. So I went and created the website, existentialhope.com, mostly to collect a bunch of resources about how different technologies can shape positive futures and collate them in a way that's easily accessible for other laypeople starting to think about these terms, so they can come to one page rather than combing the internet for all of this stuff.
[00:07:26] Just shortcuts to a bunch of things that I found useful for my own meaning-making. Over time, the project of course got much bigger than one person. And so now we have a library of resources that many people have contributed to and collated: various onboarding documents about why we care about neurotech,
[00:07:42] why we care about nanotech, what we can positively do with these technologies, and how to lift our gaze a little and think more about what's possible in the long term. Sorry, that was a very long-winded way of saying it.
[00:07:52] Beatrice Erkers: No, I think it was great. It was great to hear about the whole arc, the story arc.
[00:07:56] So yeah, if someone is entirely new to this podcast: the Existential Hope podcast is part of the Existential Hope program, which is part of the Foresight Institute. And if you're not familiar with the Foresight Institute, I think it would be really interesting to hear you tell us a little bit about it, and also about its focus areas. Why are we focusing on the technologies that we're focusing on?
[00:08:18] Allison Duettmann: Yeah, on the outside or externally, we are a nonprofit that was founded in 1986 and we advance the beneficial development of high-impact technology. And so for us, we have a few areas that we specifically focus on. And so this is AI, bio, nanotech, neurotech, and space. And so these are like the technical tracks that we have.
[00:08:38] In each of these tracks, we have fellowships, prizes, grants, workshops, seminars, you name it, various different ways of helping ecosystems of frontier science and tech development in these areas get started and then helping them prosper. So we help with project incubation, we fund projects, we help projects find a co-founder.
[00:08:56] We do actual field building with these workshops when there is no field yet. We invite people to get together and find collaborators. So we do everything possible to help the frontier of these different scientific disciplines move forward in positive directions, especially where legacy funders just aren't really incentivized to look, because it's either too early, too weird, too niche, too controversial, too interdisciplinary.
[00:09:17] You name it. There's often a variety of reasons, but there's just not as much attention on a specific technological frontier as there could be, and we try to go in and help that frontier along. So it might seem arbitrary why we picked these particular technological tracks, AI, biotech, nanotech, neurotech, and space.
[00:09:33] But I think when you think about them a little longer, they're actually deeply interrelated, and to me they point to a very organic and congruent pattern of the future that emerges when you think about them together. We were founded on the book Engines of Creation in 1986.
[00:09:46] So an easy way to answer the question of why these particular technologies is that they were all in the book Engines of Creation. The book was written by Eric Drexler. Together with Christine Peterson, who coined the term open source and who was a fellow MIT scientist at the time, he founded the Foresight Institute, because so many people were inspired by the principles and ideas laid out in the book.
[00:10:05] And so they founded this organization to get to work and actually realize the futures laid out there. What were those futures? The biggest technology introduced in Engines was really nanotechnology, molecular nanotechnology, closely in tandem with AI.
[00:10:20] Basically, the idea was that nanotechnology has the ability to revolutionize the world of atoms: the way that we make things, the way that we engage with anything in the physical world, really. And the question here is: what could we build if costs, externalities, et cetera, were no issue at all?
[00:10:35] What if we could reconstitute and remake the fundamental building blocks out of which we make things? We'd have an incredibly vast array of possibilities for making entirely new objects, at unprecedented speed, with entirely new properties.
[00:10:49] For space development, for example, that's very important. But also possibly with intelligence embedded in it, ways to compute even on these very small devices, and at a much lower rate of externality production and cost than we usually incur. So nanotechnology, by building from the bottom up, ideally from the atomic level, has this ability to really revolutionize the way that we think about our physical environment.
[00:11:09] And then AI is the counterpart to that. If nanotechnology does that for the world of atoms, AI allows us to rethink and remake the world of bits and information, using bits to build entirely new worlds. AI was deeply part of this book as well, which laid out not only the types of big questions we could seek answers to with improved intelligence, but also the types of new questions we could ask if only we had improved AI.
[00:11:28] And so AI and nanotechnology, revolutionizing the worlds of bits and atoms in tandem, were giving rise to this really ambitious long-term future that affected many areas, such as biotech, space, neurotech, et cetera.
[00:11:46] Over time we built on the foundations laid out in Engines and built out the different tracks to focus on the technologies mentioned there. Rather than just focusing on nanotech and AI, we added the other areas the book touched on briefly as specific technical focus areas at Foresight as well, such as, for example, longevity biotech.
[00:12:02] Yes, you could imagine that longevity is a goal we could eventually reach with nanotechnology, but there are also shortcuts that have emerged since Engines. There's a lot of really fantastic work in biotechnology coming along that might give us shortcuts to actually allow humans to live much longer, healthier lives.
[00:12:18] Possibly indefinite time spans. Why not use biotechnology, or any technology that would get us to the final goal? We're pretty technology-agnostic here. But biotechnology is another big technical track that we've built up at Foresight over the years; arguably it's our biggest one in terms of participants.
[00:12:32] Then another one, which we added perhaps three years ago now, has been our fastest-growing one. Not our biggest yet, but the fastest growing: the neurotech track. With neurotech, we really focus on the mind, or the brain, as the filter through which we perceive everything that happens in the world.
[00:12:49] It really is the foundational layer between us and the rest of the world. So how can we, A, fix any ailments of the brain that we already have, any of the diseases that many folks suffer from today, but also, B, how can we think about improving the substrate itself?
[00:13:01] How can we improve the human brain substrate to have entirely novel experiences, and to relate to the world and to each other in new, wonderful ways, and possibly even to AIs and animals and other sentient creatures that might come around.
[00:13:15] So biotech and neurotech were the two other biggest pillars. And then we also have a pillar on space, around this notion of expanding outward. What if we don't remain on Earth? What if we take this cosmic endowment that was given to us, conscious creatures on a living planet, and spread out into the universe and really seed the universe with life?
[00:13:32] So these are the different technical tracks. I think they all point to similar futures, or they're building blocks of wonderful worlds in various different ways, and they're obviously also deeply interconnected with each other. It doesn't mean we don't think any of the other technologies are important or interesting.
[00:13:45] It just means that we focus on these because we think we can have a counterfactual impact in creating positive worlds within them. Yeah, I guess that's how we got to the tracks that we have, and the programs we have to support each of these tracks.
[00:13:54] Beatrice Erkers: Yeah, thank you. I think that's very thorough, and it's cool that we're still connected to the original Engines of Creation and Eric Drexler.
[00:14:02] It was really fun, actually. I read that book after I'd already worked at Foresight for three years, so much too late, but it was really cool to read it knowing it was written in the eighties, and that so many of these things are actually happening now. So that's a recommended read.
[00:14:15] Shall we move on to the next part? Yeah, absolutely.
[00:14:17] Allison Duettmann: I think I've talked way more than enough already in this first segment. I really want to hear from you as well, Beatrice, because I remember when you came to Foresight, you had already voiced that you were interested in the existential hope mission and vision, and I thought that your perspective was so deeply pragmatic while also being really inspired by the ideas in the field.
[00:14:34] So I'm just really curious what drew you to the notion of existential hope, and even to the program. I'm not making too big an early announcement here, but you will be switching from your role as COO into the role of directing the Existential Hope program next year. So you've obviously come a long way, from first getting involved in the program all the way to really running it.
[00:14:53] And I think there's no better person to do so than you. So I really want to hear from you as well. What inspired you about the Existential Hope mission? And then how do you see it really taking shape perhaps also with your contributions within Foresight?
[00:15:04] Beatrice Erkers: Yeah, I guess you're right that I tend to be quite pragmatic, and so what inspired me was, to some extent, frustration: feeling like, what's the point of not trying? That's largely how I felt, I think.
[00:15:17] I've always been very frustrated when people are too pessimistic, because even if it doesn't work out, there doesn't seem to be any point in not trying, basically. And so the existential hope framing was really useful for me. For example, I came from more of an EA perspective, where there's a lot of focus on existential risk, and the original paper by Toby and Owen was about both existential hope and existential risk, trying to define those concepts.
[00:15:42] And it was just, to me, I think a really useful concept in terms of being able to keep two thoughts in your head at the same time, being able to hold and think about the sort of uncomfortable presence of existential risks, while also like allowing yourself to envision something a bit grander and envisioning the path that we could be trying to head towards.
[00:16:01] Yeah, so that whole affirmative-path type of thing really resonated with me. And also the frustration, not just with "what's the point of not trying", but with the sort of pervasive doominess. Overall, in popular culture and everything you're fed in the media, I understand that news can be useful, but I don't think the way we consume it today is actually useful.
[00:16:30] We consume it very much as entertainment, to a large extent, and I don't think that actually helps us get anything done. So if we want to get to the point of actually getting something done and creating the future, I think we need to put all of that bullshit away and just focus on: okay, what is it that we want to achieve?
[00:16:46] And how can we start thinking about getting there? Also, on that doomy presence: I don't remember who I heard say it, and I'm sure many people have said it, but the quote goes something like, "if you don't have a future, you vote for the past." And I think you see that a lot in politics today.
[00:17:02] All over the world, really, there are a lot of people who don't see a way forward, or don't really see a vision of the future at all. And so you dream instead about what the past looked like, which probably wasn't that nice. Actually, a really good recommendation is the TV show Alone; it's on Netflix now.
[00:17:22] Basically, they send people out into the woods to survive on their own, and it's so horrible. It looks so terrible to have to live alone out in the forest, building your own hut, fishing your own fish. You become so grateful for all the things that we have and have been able to create with technology. Reminding people of how terrible that was is actually a really good way to instill existential hope, because you then think about all the amazing things we've actually managed to achieve to be where we are today, like how comfortable we are.
[00:17:54] And not just comfortable: how much further we can rise, because we don't have the daily struggle of "okay, how do I feed myself?", and can actually reach for higher things to some extent. So yeah, I guess a lot of frustration was what inspired me, or at least what brought me here.
[00:18:10] But having existed within the Foresight bubble for a while, I've been able to think more and more about what actually excites me about the future. And for me, in general, it's about how we can make it better for as many as possible.
[00:18:24] Humans, but also animals, and whatever sentiences there will be in the future.
[00:18:29] Allison Duettmann: Awesome. Do you maybe want to give us a little overview of what types of programs we're launching next year? People might know us from the podcast and possibly the website, but there's so much else happening now, especially with you dedicating so much more time to it.
[00:18:45] So it would be great to hear that too.
[00:18:46] Beatrice Erkers: Yeah, first I want to thank the Future of Life Institute for making a lot of this possible. The exciting thing about next year is that we're just going to do a lot more. We'll have the basics there: the podcast will continue to come out, and the monthly newsletter.
[00:19:00] The library will be there and we'll keep it updated. But there are also a few new projects, most of them worldbuilding related. That's something we started this year, in 2024; maybe a few of the people listening participated. We ran a worldbuilding course that spanned eight weeks, and people built out a future in 2045.
[00:19:21] That was really fun, and the worlds were really quite impressive. So that's something we're going to do more of. Maybe the most exciting one is a very broad, public, free, all-access course for anyone who's interested in starting to explore these things.
[00:19:38] It'll be a Udemy course that you can find online, and hopefully on our website as well. It's for people who feel like they want to start engaging with these things but maybe don't encounter them naturally in their work. I've spoken to a lot of people who are maybe running a nonprofit, or something else not directly related to what we work on, like tech and especially AI,
[00:20:01] who think: oh, I can sense that something really big is happening, but I have no idea how this affects me or our nonprofit, or how we should think about these things. For them, I think this course will be really good, because it'll be a way to start thinking about what it looks like if it goes well: what future do I want with advanced AI?
[00:20:18] That's going to be launching at the end of March, fingers crossed. So that's the biggest one. And then we're also going to continue doing more worldbuilding, more advanced worldbuilding, meaning we're going to interview people who are real experts in their fields, much like we've done on this podcast, but getting more concrete, thinking about what exactly the future in 2045 could look like. It's not forecasting.
[00:20:42] We want them to be really ambitious: if the best-case scenario for the technology you're working on happens, what do we have access to in 2045? And we're also going to put out a toolbox, so if you're curious and want to do more worldbuilding, you can use it. It'll have a lot of GenAI recommendations and the like, so that you can do worldbuilding that applies to our real world, not just worldbuilding for a fantasy novel or something like that.
[00:21:09] I feel like I should also say a few words on why we're doing so much worldbuilding. For me it started with FLI's worldbuilding competition a few years ago; I think that was also set in 2045. Imagine a world where there is AGI and things have gone pretty well:
[00:21:25] what does that look like? I know you were a judge on that, Allison. At first I was like, okay, why? I don't 100 percent understand the point of this. But the more I looked into it and read up on it, the more sense it made, because the really good thing about worldbuilding is, very broadly, what it sounds like.
[00:21:39] You build out a world. It's what Tolkien did for Lord of the Rings, for example. Anyone can do it in that sense, and it doesn't have to be the future you're trying to build or anything like that. But the way that we use it is to try to imagine a more holistic view of the future.
[00:21:54] If you actually build out a world with the different elements that go into a world, then you have a much better holistic vision of how things come together. For example, you can say: oh, this thing is happening in the economy; how does that affect the people over here? You can see the sort of chain reactions that occur in a real world.
[00:22:14] So it's a way of working and thinking about the future that helps. You can imagine these what-if scenarios, like "what if we have AGI and things have gone well", and in the process you inhabit that world. That's what makes it easier to think about: how good is this world, actually?
[00:22:35] Are there flaws in this world? Is there something we should do to make it more robust, for example? But also to then think about backcasting: if this is what we want the future to look like, what do we have to do now to get there? And one important point is that it's not forecasting. I think forecasting is a really great tool,
[00:22:51] and I'm sure you can, to some extent, apply it to worldbuilding in really interesting ways. But if you do forecasting, you're going to end up with what you think is most likely, obviously, whereas here we're actually trying to be a bit more ambitious about what could be. We want to build a level of familiarity with this future world that would be hard to get otherwise, if we didn't do worldbuilding.
[00:23:09] So it really should help us pursue the good scenarios better, basically. And then, for the Existential Hope program, I encourage everyone to sign up for the newsletter, because we'll also be doing events and dinners and these sorts of things. If you want to participate in that way, the newsletter, or probably Twitter, is a good way to stay updated.
[00:23:30] Allison Duettmann: Yeah, I do think the relationship between worldbuilding and forecasting is interesting. In many of the worlds, what we're doing is actually creating the world and then backcasting: how do we get there? Rather than just extrapolating outward from where we are today, we ask, okay, where do we want to go?
[00:23:44] And then let's think about how we get there. Of course, we can't only think about how we would get there theoretically; that's not really making a step forward. But we did have a workshop earlier this year on AI institution building, which was basically figuring out what positive worlds with AI in 2045 would look like, and then the institutions that would have to be in place to get us there. And there we really had a few fantastic, quite grounded winning projects that are now being incorporated as organizations.
[00:24:27] So there's the incubation of new orgs and projects that are actually going to make progress along those lines. This inspirational part is one aspect, and then actually getting from here, where we currently are, to there is the next one. We're hoping to close that gap a little bit more as well, which I think is really exciting.
[00:24:42] If it works, that will be great. All right, wonderful. I guess if people are interested in the other worlds people have built, they can also look them up on the website, existentialhope.com. And they're quite good.
[00:24:52] Beatrice Erkers: Yeah, they're really good and I highly recommend them.
[00:24:56] And I view the Existential Hope program in two buckets. One is education, inspiring future leaders and builders. The other is more concrete: really generating projects, organizations, and initiatives that get stuff done. I think the hackathon is a great example, because there, I don't think we can take all the credit, they probably had the ideas before they came, but the top three projects that came out of the hackathon are actually turning into real organizations right now.
[00:25:25] So that's really exciting, I think. Maybe I'll get back to asking you a few questions. You've spearheaded a lot of initiatives at Foresight, and I think since you became the CEO, Foresight has really taken off. I wasn't here before; I've been here, I think, three and a half, almost four years now.
[00:25:44] And since I came on, even in that short time, it has felt more like being at a startup: things happening, growth, and a feeling of traction in a lot of what we're doing. I think you've been a really great leader through all of this, and you've made a lot of things happen at Foresight, like workshops, grants,
[00:26:01] collaborations. So, this question that you usually ask in our interviews: are there any cultural shifts or anything that you've noticed since starting at Foresight? Because this is your 10-year anniversary at Foresight, I think.
[00:26:13] Allison Duettmann: Oh damn, yeah, pretty much, I think.
[00:26:15] And yes, I'm just really grateful for the team that we've built at Foresight. It's absolutely fantastic to see what you can do when people are dedicated, committed, get along with each other, and deeply share a common mission. I couldn't wish for any other role or job; I'd be doing this in my free time anyway.
[00:26:35] I'm just incredibly grateful for the path that we're currently on. It's awesome; every day feels like a gift. And that definitely wasn't always the case. I'm really grateful that Foresight made the amazing predictions that it did, and it was a lot of work to build it up to where we are currently.
[00:26:50] One thing that really inspired me about Foresight, which I maybe touched on earlier, is that when I found Foresight online, I was still doing my studies at the London School of Economics, on AI and AI philosophy, and I was browsing around the internet to figure out whether anyone back then was imagining positive futures with AI.
[00:27:08] It was early, 10 years ago, but Foresight was really one of the only orgs that had content on that which was technical and somewhat scientific, to the extent possible back then, without just being philosophical or handwavy about what AI actually means for us. And that was quite early on.
[00:27:23] Of course there was MIRI, and there were plenty of other orgs as well, but I just got inspired by the positivity that Foresight had around these notions, and the action-orientation around them. So I cold-emailed them, and that's how I ended up here at Foresight. First I started as an intern, and then gradually, yeah, we grew.
[00:27:36] It's not even that I worked my way up; we just grew. There weren't really any other roles to work yourself into, because it was a very small org when I joined. Building out all of the programs that we currently have was a ton of fun. What was also so unique about Foresight is that it has this community of people who still deeply care about the mission and who were already with us in 1986.
[00:27:58] So when I came to Foresight, there was this welcoming community of people who had strong opinions about the future but are nevertheless still coming to our events; we just finished our Vision Weekend, our annual festival in San Francisco, and they've been coming since 1986.
[00:28:13] I think that's amazing: there's long-term institutional memory and a willingness to collaborate over long periods of time. But in general, when I came to Foresight, it was a smallish org at that point, just because many of the pretty technologically ambitious dreams that people in the Foresight community had in 1986, when we got founded, just took much longer than people had hoped, including the transformative potential of nanotechnology, some stuff in longevity technology such as cryonics, and some developments in AI.
[00:28:43] There were several winters and several summers. And lately, the culture shift that I'm most excited about is this one: when I came to Foresight, I thought I had missed the best of times, because it seemed like the most ambitious thoughts had been had in the early days of Foresight, around 1986, and that now we were just trying to at least stay on the ball in terms of progress.
[00:29:02] Lately, I don't think that anymore. Those were fantastic seeds, but people have suddenly woken up one day and decided that it's possible again to build a fantastic, ambitious, positive, progressive world, and they're fully committed again. And I don't know what changed.
[00:29:17] Of course, it happened gradually over 10 years. But I think one big thing is that crypto happened. Many people who made money in crypto also share ambitious dreams about the future, longevity, and other areas, and suddenly they had money to put towards them. So a lot of new funding sources in hard science and tech, like Astera, like the Emerald Foundation, and many of the other funders out there, were started and supported by people who made their money in crypto.
[00:29:42] And I'm really grateful for that. The second thing is that a few of these fields suddenly started blowing up, for example longevity biotechnology. One of the first conferences I attended at Foresight back in the day was on longevity biotechnology, and it was a tiny community, maybe 30 people at the workshop, held in a gentlemen's club in Palo Alto, next to a squash court.
[00:30:02] The community was tiny. And now look at it: the entire longevity field has exploded. We see breakthrough after breakthrough, not quite getting us there, but nevertheless we're making progress. Many billionaires are pouring funding into various longevity organizations.
[00:30:17] There are so many conferences out there. People are actively excited about this; they have it in their profile names. There's the Don't Die movement popularized by Bryan Johnson. Basically, it's fine to talk about this stuff now, and it wasn't totally fine or welcome across the board even when I came to Foresight.
[00:30:32] And I think just the notion that there are now more of us who care about these things is amazing, because it can get quite lonely. Or you just start doubting yourself after a while, if you think we could have pretty ambitious futures with nano, bio, neurotech, et cetera,
[00:30:47] but no one else seems to care. And now so many people care. Neurotech has just exploded as a sector over the last two years; the progress that we're making now would not have been conceivable for people even two years ago. And AI is probably a whole other story that's just going to revolutionize all the other technologies.
[00:31:03] So this notion that it's go time again, that people are waking up again, is I think the most exciting culture shift, and I really hope we can keep it going and pay it forward. The whole progress movement of the past few years has really put a philosophical narrative around many of the technological changes and breakthroughs that we've seen, with various different memes springing up around it.
[00:31:22] I think it's just, people are, people have woken up, they've decided it's game time, and they're ready to go.
[00:31:28] Beatrice Erkers: Yeah, I think that's really interesting, and I've shared that experience. Maybe to some extent the pendulum just swung: it was too doomy for a while, something else had to happen, people got bored of it.
[00:31:41] But also, when we had our Vision Weekend this past weekend, I was hosting the existential hope breakout discussion, and one interesting point was that the times in history when people have been most excited about the future have maybe been when they've actually felt tangible changes in their own lives.
[00:32:01] And I think a lot of people today have experienced that. I've felt it myself: "oh shit, this wasn't possible three years ago", and that's probably what puts you in go mode. To that extent, I also feel the idea of existential hope has been taking off a bit. When I speak about it now, compared to just two years ago, people resonate with it more, and they see the value of it more.
[00:32:26] They see that it's needed as a framing, maybe especially in a context where there's a large focus on existential risk. Before, people maybe didn't see as much value in reminding everyone why we're doing it: it's because of existential hope, to some extent. It's because we want to create this better future that we need to work on mitigating existential risk.
[00:32:46] And I think that movement is a good example. If you look at who actually started the current existential risk movement, it was to a large extent Anders Sandberg, Nick Bostrom, a bunch of transhumanists, basically, who were getting super excited about different things they thought they might get to experience in the future, and then realized it was actually at risk of not happening. That's how it all started.
[00:33:08] Allison Duettmann: Yeah, I agree. A few years ago, when we tried to get people excited about notions of existential hope, or even to communicate why we might need a mind shift towards futures of existential hope, or why it's even worth considering them, you still needed to make a case for it.
[00:33:23] Now you can talk about this stuff. It's okay to also think positively; it doesn't mean we're being Pollyannaish about the risks, and it doesn't mean we're going to forget about existential risk. Back then it felt harder to make that case, and I think now more people are on board with the fact that we also need to think about positive alternatives to very doomy scenarios.
[00:33:42] In addition to, and at the same time as, really trying to make sure that these worst cases don't happen, of course. But while more people are on board with that, I think we're still lacking many positive visions, actually. We still need to create them. People are now game for spreading the notion that more is possible for civilization if we go for it.
[00:33:59] But now I think we also need to create the narratives that package the technological progress we already see happening, and what could be possible in the long-term future, into inspiring visions for humanity again. Because for the general public, that notion perhaps hasn't arrived yet to the extent that it has for the various subcultures in the Bay Area.
[00:34:16] Beatrice Erkers: Yeah, we totally need new memes for a positive future, which reminds me of one thing I should have mentioned that we're doing next year: launching an existential hope meme prize. Having a list of 10 amazing memes for the future would be great. This was one of the projects that came out of Vision Weekend Europe earlier this year, this idea that we should have a prize for positive-future memes.
[00:34:37] And yeah, that's coming.
[00:34:38] Allison Duettmann: What about you, Beatrice? For many different people, the notion of existential hope resonates because they come to it from different areas, already caring about different things in the present world and in the future world. So I'm really curious: what resonated for you back then?
[00:34:51] What culture shifts have you seen that make you excited that we're on the right path, if we are?
[00:34:56] Beatrice Erkers: Yeah, I totally agree. You said it very well: before, when I mentioned existential hope, I really had to argue my case, whereas now everyone gets it, or gets the point of it. Whatever word you want to use for existential hope and these sorts of things,
[00:35:17] I'm happy with whatever. But one thing that I find really exciting right now, which captures what I was saying about existential hope before, this sort of third path of techno-optimism, is what the d/acc movement is doing. I feel like that's a meme that's really taking off now.
[00:35:33] It's been popping up all over, and people are really excited about it. For those who aren't familiar with it: it started with a post by Vitalik Buterin, the founder of Ethereum, called "My Techno-Optimism." It's a really good post that I highly recommend everyone read, where he speaks about the different paths we could walk in how we relate to technological development.
[00:35:55] We could try to pause it or shut it down, not wanting to take any of the risks that come with pushing it forward. The other view is to just go, and hope that everything is going to work out fine. And then he proposes an alternative path: pushing forward, but also pushing forward the defensive technologies that we may need.
[00:36:14] Defensive technologies, decentralized technologies; there's just a lot there. When I was at ETHGlobal and Devcon, these huge conferences, this was a very present meme. I know you hosted a Foresight event on this concept that was also really popular, and I know Entrepreneur First is hosting an incubator program for people working on defensive technologies.
[00:36:37] So it's really interesting, I think, because it's a meme that's actually being put into action. I'm excited about that. And in general, I really like that about Foresight: it's a melting pot. A lot of events you go to can feel quite homogeneous, I think.
[00:36:53] And at Foresight events, okay, maybe it's a bit homogeneous demographically, often more males, except for the team, but the ideas and ideologies are very heterogeneous. And I really like that, because I think it's really good when people come together in a way that's not filled with animosity about how we should work on things.
[00:37:11] Because I just think that's counterproductive. I'd like to see more of that in general, which is actually also a change I've seen: people being more interested in cross-silo, interdisciplinary work. There's definitely been increased respect for, and excitement about, the types of spaces that offer that.
[00:37:31] And I think that's really good and exciting as well.
[00:37:33] Allison Duettmann: Yeah, it is quite unique. It's very difficult to pin down the Foresight community in terms of the technical interests people have. It ranges from bio, nano, neuro, AI, space, you name it, and everything in between, plus things like charter cities, new innovation ecosystems, new funding mechanisms, all the meta-structures that support these different technological areas.
[00:37:52] So it's already quite broad in terms of topical interests. And then you add the different political perspectives, moral perspectives, philosophies of life, and subcultures that people come from, because people don't just wear one hat, they wear many. And I think so far it hasn't blown up in our faces yet.
[00:38:05] I really hope that we can keep it going, because I think it is one of the few venues where you see people from what you'd think were quite radically opposed communities engaging fruitfully with each other and actually giving each other the benefit of the doubt. That's quite rare these days. So yeah, hopefully we'll
[00:38:22] continue without any too-large setbacks on that front in the next year. Who knows? Yeah, for sure.
[00:38:27] Beatrice Erkers: Yeah, fingers crossed. I think it'll work out fine. Okay, I think there's one last question I want to ask you, Allison, before we get to the outro questions that we always ask our interviewees.
[00:38:42] We've touched on it throughout this conversation, but how do you actually, if someone asks you about this, think about balancing this risk-possibility spectrum of transformative technologies? There's obviously so much potential and possibility, but then there are the risks.
[00:39:00] And so how do you think about that?
[00:39:01] Allison Duettmann: I do sometimes feel scatterbrained, just personally, about the crazy possibilities and the possible downsides, especially when you consider the short timelines and how they keep shortening. It's not that we might see clearer eventually and it will become much easier to determine; no, some of these things are happening very soon, and you might not have very much time to prepare.
[00:39:20] But mentally, I usually try to see them more as a balancing act, to some extent. It's not one side or the other. And I think that's also what characterizes Foresight: we're not really far down one path or the other.
[00:39:33] Philosophically, the existential hope notion that we can't build what we can't imagine stands in some contrast to the notion that, when we're building, we also want to build in terms of de-pessimizing: we're already on a pretty good path of civilizational progress,
[00:39:48] so one way to get to great futures is just not to kill ourselves. Rather than having these grand notions of utopia and trying to get everyone to agree on one large utopian vision, let's just continue the civilizational project of progress that we've had to this day.
[00:40:03] So these two things are always going to be in conversation in my mind, hopefully at least in a discussion rather than a shouting match in my head. Another interesting contrast that arises from this double focus, the balancing act between de-pessimizing and thinking about really ambitious futures, is between "let's just let technologies run amok and accelerate as fast as we can without giving much direction" versus "if you're worried, you need top-down control."
[00:40:31] And I think one path that we're really trying to point toward at Foresight is like, it has to be somewhere in the middle. If we just have technological progress gone wild, there will be risks from automated warfare. There will be risks from bioweapons. There will be risks from later on nanotechnology weapons and possibly much earlier on AI that we will face.
[00:40:46] This is just the kind of notion of technological proliferation leading to like small-kills-all risk. We're like a smaller number of actors in the world have the capacity to destroy civilization at large. And so we can't just totally just let it run amok. On the flip side, I think many people just see the only alternative to that is drastic regulation and drastic centralization in the form of perhaps individual governments to all the way to a world government that has perfect surveillance, enforcement capabilities, etc.
[00:41:11] And I think these are obviously, none of these paths are tenable. Like the centralization path arguably creates more problems than it solves. If anything, it creates this one centralized risk where even if it had perfect control over the rest of the world and didn't allow much technological progress in the rest of the world, which would be terrible, it could still be itself hijacked by bad actors.
[00:41:31] So we try to think deeply about these different chasms we're facing and to navigate them using new technological tools we can build now, such as new tools for decentralized mass coordination. It's hard to hold both futures in your head and chart a good path in between.
[00:41:51] And the last one, at least in terms of how it shows up in our work: at the end of the day, Foresight is an organization that supports scientists and technologists. As worried as we are about risks, only trying to stop bad systems or bad failures from happening, only telling people what they can't build, is just not very stable.
[00:42:13] What is stable in the long run is also directing people's gaze toward the positive structures and technologies they can build instead of, or earlier than, dangerous ones. One example is differential technology development, a notion that has been around in theory for a long time but that people building science and tech should apply much more actively: when you're working in a field, you sometimes have the chance to accelerate safety-enhancing technologies before you accelerate risk-enhancing technologies.
[00:42:53] So rather than going all the way into gain-of-function research in labs that we may not secure well, also think about what biodefense mechanisms you could be working on instead.
[00:43:15] In our AI grants program, we focus very much on strengthening the civilizational infrastructure in which humans and AIs live, by funding computer security and multipolar game theory of humans and AIs interacting, and on strengthening the human in that relationship to AI, by funding neurotechnology and neuroscience research to improve human cognition vis-à-vis AI.
[00:43:34] We also fund research on automating safety research to improve the human position vis-à-vis AI systems in the long run. So this is a defensive way of thinking about technology development that doesn't necessarily mean you have to shut anything down, but might mean you want to focus on building other things instead, or first.
[00:43:51] That's the more or less practical thought I want to leave with people who are actively developing science and tech. I think this notion of differential technology development is going to stay with us over the next few years for sure, not just at Foresight but in general.
[00:44:05] But that's one strand within the Existential Hope project. Another one we focus on is the notion of eucatastrophe. That doesn't come from us, and not even from Toby Ord and Owen Cotton-Barratt, who reintroduced it in their seminal paper on existential hope; it actually comes from Tolkien, and it means the sudden happy turn of a story toward a positive ending.
[00:44:27] In a fairytale there's usually a crisis near the end, and then at some point things shift and you get the happy ending. So what might that moment be for civilization, where rather than the outlook seeming much poorer, the world suddenly looks much brighter based on one event, a few events, or a specific development that has occurred?
[00:44:44] It's a different way of thinking about existential hope than the deeper framing, but one we often use as well; it's a cool mental tool for imagining these types of futures. It's always the hardest question we ask our podcast guests,
[00:44:58] so it's totally fine if you don't have one, but if you do have a eucatastrophe moment to share, or a eucatastrophe development that would make you much more optimistic about the path of civilization, that would be awesome to hear.
[00:45:08] Beatrice Erkers: I feel like I would very much be cheating if I didn't try to answer this question, because I always push others on it.
[00:45:13] For me, the one thing that would make me feel a lot better about the future is moral circle expansion. I think it was Peter Singer who started talking about that: the idea that who we include in our moral circle could and should expand.
[00:45:32] Right now, the people you include in your moral circle might be your family, your friends, maybe the people who live in the same nation as you. But increasingly it's really interesting to think about who else should be included. For me, a big one is people all over the world,
[00:45:47] so it's not bound by nation, and then animals are a really big one. If we started including more beings in our moral circle, that would be great. I guess it sounds fluffy, and I've been thinking about how we actually do that. How do we hack it?
[00:46:01] And I don't know; it's very difficult, obviously. The annoying thing is that it seems to take a lot: we have really expanded our moral circle partly because of the comfortable lives many of us lead today, which make it easier to be generous and care about others when we have food and clothes and all the things we need.
[00:46:20] But I have been thinking, especially since our neurotech track has been taking off, that maybe we could hack ourselves a bit in terms of expanding the moral circle. Just the possibility of communicating much more efficiently with each other, for example, could help. When we hear about that notion, we tend to think, oh shit, everyone's going to find out all the horrible things I'm thinking.
[00:46:41] But if you weren't limited by language, maybe you could communicate much more efficiently, because you could get much more nuance through. Or if you could make someone feel what you're feeling: if you're afraid of something and someone else can feel that, maybe they can understand and empathize with you more.
[00:46:58] Or, with AI and neurotech, there's the interesting possibility of communicating with animals much more efficiently. That would be amazing, and it might make people include them in their moral circle a lot more. So in general, anything we can do to hack moral circle expansion would be my eucatastrophe.
[00:47:16] I know we're over time now, so maybe we just do a rapid-fire round with a few quick questions, and then we say happy holidays to everyone. I'll start by asking you, Allison: if someone asked how to get involved in this space, what would be your top recommendation for where to start?
[00:47:34] Allison Duettmann: It's banal, but we have a library on the website that we've spent many hours curating, so that would be the first place I'd point them to.
[00:47:42] Beatrice Erkers: I'll just echo that. That's exactly what it's meant for.
[00:47:46] Allison Duettmann: All right, my turn. Who would you like to have on next year's podcast, if you could engage in some wishful thinking here?
[00:47:52] Beatrice Erkers: I would really love to have Grimes. I think she's really cool and thinks very much outside the box about what the future could be. So Grimes would be my short answer. And you?
[00:48:02] Allison Duettmann: In a world of wishful thinking, for me it would be Karl Popper. It's not possible anymore, but maybe through AI we'll get to a point in the future where we can talk to him again.
[00:48:12] Beatrice Erkers: That's true. Yeah.
[00:48:13] Allison Duettmann: Any books, movies, or other recommendations that really inspired you this year?
[00:48:17] Beatrice Erkers: Yes, and it connects to someone else I'd love to have on the podcast. I just saw the movie The Wild Robot, which I highly recommend everyone go see. It's so beautiful, and so inspiring to me in many ways.
[00:48:27] Maybe the author would also be really nice to have. It's basically about a robot who ends up living in nature, about how the machine and nature can co-evolve and coexist, and it's very beautiful. What about you? It doesn't have to be from this year; what's your favorite existential hope content?
[00:48:46] Allison Duettmann: If there's one person who really embodies it for me, it's David Deutsch. We've already had David on the podcast, which is why I didn't nominate him in the last question, but his book The Beginning of Infinity is so eye-opening about what we can create in the long-term future, especially the chapter on hope and optimism. It's just awesome.
[00:49:02] So I really recommend it; it's an evergreen for sure. Any advice you've received that you think others could benefit from? Spill your wisdom beans, Beatrice.
[00:49:11] Beatrice Erkers: Wow. When you get asked these questions yourself, you realize how mean we are to ask everyone else this.
[00:49:18] A very concrete, down-to-earth piece of advice that I've given myself, and will continue to give myself, is for when you feel a bit upside down, when things are just, oh, what's going on? My best advice is to do three things.
[00:49:33] It's a simple three-step formula: take a shower, drink a coffee, and go for a walk, and that's a reset. So that's my advice. Do you have any grander advice?
[00:49:43] Allison Duettmann: Who gave it to you?
[00:49:43] Beatrice Erkers: It was something I figured out whenever I was jet-lagged; I realized it was the formula for starting to feel like a human again.
[00:49:50] So I think it's my own advice.
[00:49:53] Allison Duettmann: Wise words, Beatrice Erkers. For me, it was something someone taught me when I first came to the Bay Area, because I didn't quite know how to fit in. At the beginning I had this immediate imposter syndrome: everyone seemed so smart and capable.
[00:50:05] And someone just told me: the emperor has no clothes. Everyone is just pretending to be an adult for parts of the day. That really stuck with me, because as I figured out how to do things myself and eventually got invited to talk about them,
[00:50:19] I thought: I clearly don't know how to do this. I'm just adulting, wearing fancy pretend clothes while actually being naked on stage. So it was a really useful notion right at the beginning. I wish I had internalized it better back then, but now I know exactly what they meant, because none of us are wearing any clothes.
[00:50:33] Beatrice Erkers: That's great advice as well, and very true: we're all just adulting. We should really wrap up because we're way over time, but as always, really nice talking to you, Allison. Happy holidays to everyone; this is our special end-of-the-year episode. I wish everyone a very happy new year, and here's to building better futures.
[00:50:53] See you in the next one.