
Gus Docker | Beyond Survival: Envisioning A Technologically Enhanced Utopia

About the episode

Gus begins by balancing his excitement about the future with the ever-present threat of existential risks, especially from AI. He believes that, hopefully, we'll be able to deal with risks as they come: "AI won't be solved. AI safety will be worked on and we will solve some problems; new problems will arrive, and we will solve those problems. And we hopefully continue doing this until… indefinitely."

If we can get past this, he is excited about how AI and other future technologies could shape the future. For example, he imagines technologies that will allow people to sample experiences from others and to try different conscious states and see what they're like, creating a world more open to diverse experiences. When pressed to imagine a eucatastrophe, he describes a significant and positive turning point in the realm of mental health. Imagine a scenario where individuals are able to address their mental issues, even minor ones like a fleeting sadness, without the need for clinical intervention. In this future, people have access to innovative solutions, possibly through devices in their own homes, that enable them not only to overcome psychological challenges but also to enhance their overall happiness and productivity. As individuals become happier, they find themselves more productive, creating an upward spiral of well-being. This new reality fosters personal strength and empowers people to contribute more effectively to the welfare of others and society. In essence, it's a transformative leap in human mental health and well-being, marked by a harmonious balance of emotional fulfilment and productive engagement in life.

However, he also reflects on the difficulty of imagining wider technological change, especially in future utopias, likening the disparity between our current lives and those in the (not-so-distant) far future to the gap between life in the Stone Age and life today. “I'm not sure I will be able to grasp what's going on in the future if I'm not somehow enhanced in order to follow along with what advanced AI might be doing.”


About the Scientist

Gus Docker hosts The Future of Life Institute podcast, which features conversations with prominent researchers, policy experts, philosophers, and influential thinkers of all kinds. Gus studied philosophy and computer science at the University of Copenhagen and is active in Effective Altruism Denmark.


About the artpiece

Beatrice Erkers created this art piece, aided by AI technologies. Beatrice is the director of the Existential Hope program, and the COO of Foresight Institute.

‍



Transcript

BE: Welcome to the Existential Hope podcast, where we engage with experts from various fields to envision a brighter future for humanity. I’m your co-host, Beatrice Erkers –  I host this podcast alongside Allison Duettmann.

‍

In today's episode we're joined by Gus Docker, who is the host of the Future of Life Institute podcast and has previously hosted the Utilitarian podcast. In this conversation, we talk about all things AI, neurotechnology, and Existential Hope futures. Before diving into the episode, I want to recommend subscribing to our newsletter, the Hope Drop. If you go to existentialhope.com, you can subscribe, and you can see that for every episode we drop, we create an art piece based on the eucatastrophe prompt that our guest – in this episode, Gus Docker – suggests. I highly recommend it if you'd like to see some exciting visions of the future. Without further ado, let's dive into this Existential Hope podcast with Gus Docker.

‍

GD: ...so that's the start. Basically, I just became overcome with interest in a particular topic, and then I had to know more about it. And I thought there might be other kinds of nerds out there wanting to know about it. The thesis, by the way, is now out in a book called The Feeling of Value by Sharon Hewitt. I recommend that one to listeners. It's a bit academic – meta-ethics – but I do think it's one of the best arguments for moral realism out there.

‍

AD: Wow. Okay. Could you expand on that a bit more? Yeah, please give us the lay of the land here.

‍

GD: Yeah, okay. So the problem of ethics is finding value in a physical universe. And this is actually a point where I disagree with the author of this thesis: she is not a physicalist about consciousness, as I am. I believe that consciousness is a physical property, and I believe that it is in consciousness that we will locate moral value, if we are to locate it at all. I've since actually become a little bit more skeptical about both moral realism and utilitarianism, which is somewhat ironic, given that I started out hosting the Utilitarian podcast. I'm not sure this ethical theory can stand up to scrutiny in the end, but I do think it's the best one we have, and that counts for at least something here.

‍

AD: Yeah, interesting. I saw that you're back on with philosophy, or at least to some extent. My background is also in philosophy and science, roughly, and we went up and down that ladder as well. It's interesting that, even centuries later, there are still some really succinct open questions that I think we still need to answer. And perhaps now, with AI really kicking off, we have some more data points. Do you have any hope that we will learn more in the next 10 years than we may have in the last 100? Or do you think it's just a theoretical, abstract topic?

‍

GD: Yeah, I think we'll learn something. I think we will learn a lot about the brain from studying artificial brains: we will learn about our own neural networks from studying artificial neural networks. And I do think that it's a pretty big problem that we are approaching advanced AI without a good grasp of ethics. The fact that it's remained an open question for, let's say, 2,000 years is an indication that it will probably remain an open question until we get to advanced AI. And so that's a problem. I am skeptical about trying to solve ethics as a way to handle AI, let's say – I don't think we need a finished theory of ethics to do something like that. We will build and repair the ship as we go along. We're not going to sit down, solve ethics, and then implement that vision of ethics in our artificial systems.

‍

AD: Yeah, it is interesting. Often we think maybe we can learn something from animal rights – from our approach to animal rights – about how to treat digital minds that are perhaps not like human minds. But I feel like maybe it's more the other way around: for example, there's this project, the Earth Species Project, and they're trying to use AI to communicate with other animals. So maybe with AI we will learn more about, or get a little bit of a better handle on, our treatment of animals, if at all. And even if I'm not that optimistic there, we have seen a lot of digital people talk on [], and in some FHI publications, and so forth. And I think it is interesting, just the questions that it raises. Do you want to comment on this at all?

‍

GD: Yeah, that kind of talk – it's important, and we want to investigate it, but it is highly experimental and uncertain. It's an uncertain area. And I'm also slightly worried about non-conscious AIs arguing for their own rights, and us potentially wasting a lot of resources on that, wasting a lot of our moral concern on that. So it's tremendously difficult terrain to navigate since, at least as I see it, if these new beings are conscious, we want to consider their moral interests, and if they aren't, we don't. But we have really no insight into whether they are conscious or not. And yeah, it's an open problem.

‍

AD: Yeah, we touched on that a little bit when we had a Whole Brain Emulation for AI Safety workshop, which you've perhaps seen – the idea that we already know how to align humans with each other and how to coordinate with humans, and that we're perhaps better at that than we would be at aligning artificial, alien systems that have very little to do with us. But, again, there you have a Pandora's Box of many other problems that arise once you begin human brain uploads. Okay, we've totally sidetracked you. But I do want to applaud you for the point that – I think, especially when you've done philosophy, that's really when you appreciate how difficult philosophy or ethics actually is and how little disagreement there is. I think, yeah, I'm also a little bit –

‍

GD: How little agreement – you mean, in philosophy?

‍

AD: Sorry, how little agreement. Yes – and, as a follow-up, how much disagreement there is, and also how well thought out it is. It's not shallow disagreements; it's really fundamental, foundational disagreements from really smart people on both sides of the spectrum, or on many sides of the terrain.

‍

GD: Yeah, philosophy is funny in that you start out with a reasonable question – "What should I do next?" – and then you end up arguing about the weirdest thought experiments and doing a lot of mathematics. At least the branch of ethics that I'm interested in involves a lot of mathematical thinking –

‍

AD: Infinite ethics?

‍

GD: Yeah, you touch upon the very reason why I am skeptical of utilitarianism, which is: I'm not sure this infinite ethics problem can be overcome. That being said, I'm not sure any moral approach can overcome that problem. But the conclusion there is: yeah, you come to infinite ethics, and you find out that you probably can't solve it – does this mean that you should now just not care about your fellow humans or other animals? No, right? It would be a weird excuse to say, "Oh, I saw some hungry dog on the street, but I didn't feed it because the universe might be infinite, and we can't know the effects of our actions in the indefinite future, and therefore I shouldn't do what strikes me immediately as intuitively moral."

‍

AD: Yeah, I think that's always the problem that people – rightly so, I think – have with a lot of philosophy: you embark the train at one station, and then shortly afterwards you are very far ahead.

‍

GD: Ajeya Cotra calls it the Crazy Train.

‍

AD: The Crazy Train? Yeah. One last thing I'm really curious whether you've heard about, from John Rawls, is reflective equilibrium. To me it's a relatively appealing theory: you take a bunch of intuitions that you have – maybe some of them more of a deontological nature, some more of a utilitarian nature – you apply them to a bunch of situations that you have encountered or may encounter, and you see where the intuitions differ across different scenarios. From this you can construct rules: across roughly these types of situations, I want to engage with roughly this kind of heuristic. And if your heuristic then doesn't apply to future situations – because your intuition tells you to do something different – you have to update either the rule or your intuition. Even though that sounds pretty complicated, it's actually pretty straightforward, I think, and at least we would get some coherence across decision-making. It's almost like a rational approach to morality, to some extent. And we're not even there yet – I think we're still so much in the weeds that we're not really good at thinking coherently about our own morality towards situations, and we often get sidetracked. I know you want to comment on that. Take us off the Crazy Train again.

‍

GD: Reflective equilibrium is, I think, the favorite approach among academic philosophers, but I'm not super optimistic about it. What we would want to have evidence for, in my opinion, is that these intuitions are tethered to reality – that they indicate we are on the right path. And I'm not sure about that. I think our intuitions are evolved; they're a mix of our evolutionary history and the culture that we're in. And so I'm not sure that there's any form of connection with reality. The world is extremely strange, as any kind of scientist will tell you, right? The world is not what it seems. But our intuitions remain the same. You could say we need to work on these intuitions, make them coherent, update on the evidence, and so on. But if the starting point is not solid, then I'm not sure this project will succeed.

‍

AD: I guess as a realist you would say that, but if you take more of a value-drift or relativistic position on ethics – the ongoing negotiation between which intuitions we want to call biases and which we want to call values – then I think it makes some more sense. And it's been interesting because I think recently Anthropic published this – I think it's called the Collective Constitutional AI paper – that just came out last week. And I think they're doing something a bit like a public reflective equilibrium, or at least they're surveying a bunch of people on their intuitions about different cases, and you could probably construct some constitutional principles out of that. So it is a little bit like reflective equilibrium, but now tied to AI.

‍

GD: I've seen this multiple times from the large AGI corporations. They grasp for moral philosophy, but I don't think moral philosophy is up to the task, so to speak, of what they want it to do. So I'm not sure that it will save us. But I agree – it is interesting that these, say, 50-year-old or 100-year-old frameworks are now being incorporated into cutting-edge technology. You saw the same with moral uncertainty: I think one idea behind William MacAskill's thinking on moral uncertainty is that we could incorporate this framework into AIs and allow them to make decisions under uncertainty. All of these approaches are interesting, but I think it's premature to begin using them in AI. They're not there yet, I think.

‍

AD: Yeah, got it! Okay, let's get off this – that was really fun to geek out about. In general, you have seen so many different, really interesting thinkers, and you have thought really deeply about what to ask them. I think it's actually really interesting to interview podcast hosts, because they've seen it all, and they can make up their minds about where the potential points of disagreement were, across time and across people. So I'm super curious: is there something like a bird's-eye view of your field of podcasting? If so, have there been any recurring themes or debates that come up again and again? Or, if you could say anything about what you've learned through this endeavor of interviewing really thoughtful people on positive futures that are nevertheless not pollyannaish, what would you say?

‍

GD: Yeah… across podcasts. So one thing is that podcasting is a human endeavor. It's not only about the smartest questions; it's also about connecting with the person you're talking to, and that connection makes it more interesting for the listeners too. So it's not only about the technical nature of the questions and how deep you can go, even though I try to produce something that's more technical than the usual podcast – at least that's my vision for what I'm doing. Podcasting is, in a sense, much broader than what I see myself as doing. I'm podcasting mostly about AI safety, and so that's probably my area of expertise. I'm not sure I have a special expertise in podcasting in general, but I do think I have some expertise in podcasting about AI safety. So I think we have to make it narrower than podcasting in general.

‍

AD: Yeah, let's make it that one area, then – could you give an overview of that field? Like you said, is there anything in particular that you learned, that maybe you have a unique insight on?

‍

GD: Yeah, what strikes me is that AI safety as a field is much broader than I would have said when I became interested in it, maybe in 2016. Back then I would have said AI safety is going to be solved by some genius who sits down and does a bunch of extremely advanced mathematics – a person who could have done Fields Medal-worthy work, but who chose to work on AI safety instead. I'm not sure that's actually the case. I think it is an interdisciplinary problem that requires solutions from policy and solutions from technical fields, and these fields need to work together in order to find the right path forward. And so – AI won't be solved. AI safety will be worked on and we will solve some problems; new problems will arrive, and we will solve those problems. And we hopefully continue doing this until… yeah, indefinitely.

‍

AD: And is there any particular approach that you think is potentially undervalued? Or is there something that often pops up where the people you talk to say, "Oh, I wish someone was working on this"?

‍

GD: Yeah, this is probably recency bias, because I just interviewed people about this, but I do think we can do something with mathematical proofs, where we can potentially prove that certain small systems are safe and then perhaps expand on those systems until we can prove the safety properties of larger and larger systems. That's one approach that's being worked on. Another one is what's called interpretability, or transparency – it's like digital neuroscience. You look into the model and try to find out how it works. I think especially the interpretability work is probably crucial for solving the problem. If we don't know what's going on inside of these systems, we won't know what to do in order to make them safe; we won't know whether they are safe. I think it's important for them to be honest with us, and I think we can only test for that by looking inside of their brains. Yeah, those are two approaches I would highlight.
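[Editor's aside: the sketch below is a minimal, illustrative example of the "digital neuroscience" idea Gus describes – training a toy network and then probing its internal activations with a linear readout. The task, the network size, and the scikit-learn setup are assumptions chosen for illustration; this is not any lab's actual interpretability pipeline.]

```python
# A minimal, illustrative interpretability sketch: train a tiny network on a
# toy task, then "look inside" it with a linear probe to test whether its
# hidden layer represents a property we care about. Everything here is a toy
# assumption, not a real model or method from the episode.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy task: classify 2D points by whether they fall inside the unit circle.
X = rng.normal(size=(2000, 2))
y = (np.linalg.norm(X, axis=1) < 1.0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# Compute the first hidden layer's activations by hand (ReLU layer).
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

# Linear probe: can a readout of the hidden layer recover a *different*
# property (here, the sign of the x-coordinate)? If so, the network's
# internal representation carries that information.
probe_target = (X[:, 0] > 0).astype(int)
h_tr, h_te, t_tr, t_te = train_test_split(hidden, probe_target, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(h_tr, t_tr)
print("probe accuracy on hidden activations:", probe.score(h_te, t_te))
```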

‍

AD: Okay, well, that's super interesting, especially the cryptography part. We have this AI Safety Grant programme, and one of the areas that we fund is security and cryptography approaches to AI. I know that there have been really interesting publications dating back a while – Ben Garfinkel, now at GovAI, gave a tour of cryptographic technologies that are potentially useful for AI, and then obviously OpenMined and Andrew Trask are doing really wonderful work on this side. And then Scott Aaronson, I think, gave a talk where he referenced a paper on basically inserting an undetectable backdoor into an AI system that could function as something like a control – one that may even be undetectable by the AI system itself. So anyway, lots of interesting pieces there. And I just checked and I don't see the podcast – when is that coming out?

‍

GD: The one on cryptography? Yeah, it is out. It's the one with Steve Omohundro – "Steve Omohundro on Provably Safe AGI", I think I called it. I think that's a potentially interesting approach. My worry is that it won't move quickly enough. AI, as we know, is moving quickly, and it's just monstrously difficult to prove something, so we will need help from AI to make these complicated proofs. The question then is: will AI capabilities in general move faster than our capabilities in proving theorems using AI? And also, if we train a system to prove theorems, will that system be generally capable? Is there a sense in which we might push capabilities forward by trying to make systems safe? I guess that's always a worry, but I think it's especially a worry here. Yeah.

‍

AD: Okay. I really love Steve Omohundro's work. Thanks a tonne. Zooming out perhaps a little bit more: how did you make it into the role that you currently have at FLI, doing this podcasting work? Because I think for many people on the outside it sounds like a dream job – you interview people that you think are doing really valuable, wonderful work in the world, and you have time to actually dedicate to learning more about them before and during the podcast. That always seemed to me like a dream come true. So I'm really curious how you got to where you currently are, and whether there's any useful advice that we can extrapolate or extract from it for others.

‍

GD: Try doing something. I'm not sure I can give any more specific advice than that: I tried doing something and it worked well enough, and I guess I got positive feedback in the beginning, and that kept me going with the podcast that I started out of interest. I think podcast listeners can sense when a guest is passionate about a topic, or when an interviewer is passionate about the topic, and you want that to shine through. If you're thinking of starting something, it makes sense to try it – especially with podcasting, the startup costs are so low that you can easily just try it out. And then, yeah, that's what I did.

‍

AD: Yeah. My dad always used to say that you have to find something that you really love, because only then are you willing to work much, much harder than anyone else on it – an excruciating amount harder. I guess that often holds true in general, but especially for self-starting things like a podcast; you really have to lean in.

‍

GD: Yeah. There's this career advising in the Effective Altruism movement, which at some point perhaps undervalued passion. But now I think they talk a lot about your fit for a given role. And I think it is just important that you enjoy your work – in fact, massively important. So that's the thing to look for. It's such a cliche, but it is true.

‍

AD: Yeah. Cliches are often like that.

‍

GD: Cliches are often true, yes.

‍

AD: Yeah, I think I did see that 80K updated their job board, and there was a bit more focus on fit and passion there too. Okay, really interesting. I'm certainly a very big fan of the podcast. I'm gonna hand it over to Beatrice to dive into some of the more Existential Hope-related questions here. You've passed the interview introductions! Beatrice, please take it away.

‍

BE: Yeah, thank you so much for joining us. It's Friday evening for me – I'm in Europe – so please forgive me if I'm rambling a bit; I'm a bit tired from this week. But thank you so much for joining. I'll try to ask you a few questions that are more about the Existential Hope topic of this podcast and, more generally, about the long-term future – getting a bit more philosophical.

‍

GD: I'll just say it's Friday evening for me too, and it's been a long day, so we're probably in the same state. But yeah, let's go ahead.

‍

BE: Now we can blame anything stupid we say on that. Yeah: would you describe yourself as positive about the future?

‍

GD: Perhaps ironically – given that my work is basically to read papers on how AI is probably going to destroy humanity or destroy our civilization, and then to interview people who are convinced that AI is probably not going to go super well, or at least are very concerned about the risk of AI going badly – I am quite positive. I think it's going to go well. I think you can estimate a risk of, say, 10 to 20% of human extinction this century, and that is more than enough to motivate all of the work that I'm doing and all of the work that you're doing and so on. But if you're estimating a risk of extinction of 20%, that still leaves 80%. Of course there could be some middling scenario, but I think we are on a track where we're seeing accelerating growth and increasing living standards. And if we don't destroy ourselves, the future is probably going to be pretty good.

‍

BE: That's nice to hear. And that relates to your comment earlier about actually needing to be passionate about what you do. It makes me think of what Anders Sandberg said recently on the 80,000 Hours podcast: you maybe do have to feel somewhat positive about the future, and feel that you're working on something that contributes to creating a safe and bright future – that's probably very important to keep you going. So one of the questions that we always ask is about a eucatastrophe, which is the opposite of a catastrophe. I don't know if you're familiar with that concept; it's from a paper by Toby Ord and Owen Cotton-Barratt, where they talk about existential hope and existential risk, and they describe a eucatastrophe as an event after which the world is much better off. So it's the opposite of a risk – it's like upside risk.

‍

GD: Yeah. Like eu-stress or something – like good stress.

‍

BE: Yeah. Yeah. Is it a concept that you’ve thought about at all, like existential hope, like that sort of upside-risk thinking?

‍

GD: I think the concept, as you describe it now, implies that there's some huge event that happens – so this is not a matter of gradual progress; this is an enormous event that's positive. It's very tempting for me to talk about AI again now, right? Because this is probably going to be something that happens: we're probably going to go from AI that is below human level to AI that is pretty far above human level in a short amount of time on a human timescale – in a matter of years, I think. And so that might fit the concept you're talking about of a eucatastrophe?

‍

BE: Yeah, I think that definitely is..

‍

GD: If it goes well – if the transition to AI goes well, I should say. Whenever we talk about how awesome things could be in the future, how we could arrive in some form of utopia, I think it often feels flat in a way. Because after I stop trying to describe what we're talking about, we're still sitting here and the world is still the same, and we can't really grasp what it is that I'm trying to describe. One way to describe it is to think of the difference between a person living in the Stone Age and a person living now, and then to consider how our lives might be similarly different from the lives of people living in the future. Yeah.

‍

BE: I think that's a great analogy. And in the original definition of the term, in Toby's and Owen's paper, their examples of a eucatastrophe are very big events – things like the creation of life initially, which just…. I think at Foresight we tend to interpret it a bit more freely. But on utopia feeling flat – that's definitely something I think a lot of people feel. I know that FLI has done a lot of worldbuilding; is that something that you're excited about, in terms of actually being able to create more plausible yet exciting worlds?

‍

GD: Oh, yeah, I think this is under-explored territory, because humanity, as a species, has just now basically arrived at a place where we can dare to dream about these things. Previously, we've had to focus on just surviving and avoiding war – which we still have to focus on, and of course, now I'm speaking about people living in rich countries. But yeah, I think we need to experiment more and to think more about which type of world we would like to live in in the future. I think it's valuable to try to imagine these scenarios for how life could be – what technology could drive life to become for us – and we should spend more time doing that. And perhaps do it in a more rigorous way: we could run some form of markets on how plausible scenarios are; we could get scientists to evaluate whether what we're dreaming about is actually possible; we could do surveys: here's my world, here's a description of it – tell me what you like about it, tell me what you don't like about it, would you want to live in this world? So we could take a more rigorous approach in which we have statistics and data about these worlds. And then perhaps this could be a way to approach the ethical problem that doesn't require solving ethics – it just requires gathering information about worlds that people would like to live in. There was a paper – I can't remember the title – trying to do inverse reinforcement learning of preferences by looking at YouTube videos: analyzing the frames and training a model to try to predict the frames that happy videos would show. And of course, videos carry a lot of data, so much more data than text. So this is perhaps a way to make it more rigorous. If you have a video of something that basically all people would enjoy – like a birthday party surrounded by your family – what does that video contain? What information does that video contain that you can then use to build a world around?

‍

BE: Yeah.

‍

Question: I just wonder about the reinforcement-learning part of that. Could you elaborate briefly?

‍

GD: Inverse reinforcement learning means that you're learning about the preferences from seeing the output – the behavior – not the other way around.
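[Editor's aside: for readers who want to see the idea in code, below is a minimal sketch of inverse reinforcement learning in a toy setting – inferring reward weights from observed choices with a softmax choice model. The data, feature dimensions, and learning-rate settings are synthetic assumptions for illustration; this is not the YouTube-preferences paper mentioned above.]

```python
# A minimal sketch of the core idea of inverse reinforcement learning:
# instead of optimizing behavior against a known reward, we infer reward
# weights from observed choices. An "expert" repeatedly picks one option out
# of several, each described by a feature vector; we fit weights so that the
# expert's picks are likely under a softmax choice model. All data synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" preferences we pretend not to know.
true_w = np.array([2.0, -1.0, 0.5])

# Observed behavior: in each round the expert sees 4 options (feature
# vectors) and picks the one with the highest true reward.
observations = []
for _ in range(200):
    options = rng.normal(size=(4, 3))
    choice = int(np.argmax(options @ true_w))
    observations.append((options, choice))

# Inverse step: gradient ascent on the log-likelihood of the observed
# choices under a softmax model with reward r(x) = w . x.
w = np.zeros(3)
lr = 0.1
for _ in range(300):
    grad = np.zeros(3)
    for options, choice in observations:
        scores = options @ w
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        # gradient of log P(choice): chosen features minus expected features
        grad += options[choice] - probs @ options
    w += lr * grad / len(observations)

print("recovered preference weights (up to scale):", np.round(w, 2))
```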

‍

AD: Alright, I think Creon's happy. One thing that really struck me – because just today we were having fun building these technology trees: exploring different technological end states and goals in a domain, and then almost backcasting the capabilities that have to come before, the different challenges we need to solve, and the different people already on the map. And of course, Metaculus has these really wonderful forecasting tournaments, and I also like the ongoing forecasts. We actually talked to Gaia from Metaculus today, who is working on really figuring out the worldviews – the mental models – that people have as they go into these exercises. So I think we are getting a much richer set of tools that could help us in this exercise. And the cool thing that you mentioned – I just wanted to see what you think about it – I don't know if you know the concept of Paretotopian goal alignment from Eric Drexler?

‍

GD: I haven't read it, but I believe I know what it's about, right? It's a process of just doing Pareto improvements, or doing all of the Pareto improvements…

‍

AD: Doing Pareto improvements – but crucially, I think the important part is also that once the gains from cooperation get really large, once we can see a future – say this one world scenario – where many people actually agree that it would be a good thing, you make it much more likely that people will actually cooperate towards that future. And the costs of not cooperating become too large: you don't want to miss out on that cooperative world. So I think that if you show people these future vignettes, then even if they don't agree on the nitty-gritty of the next 10 years, or the next year or something, they could maybe agree on that larger world that they could all see for themselves. And I think that's a really interesting cooperation mechanism as well, to get more cooperation across the board. I'm curious if you have any thoughts on that?

‍

GD: Yeah – if we can get agreement. I think that's just the main issue, right? Can we get agreement about which worlds people want to live in? I think we probably shouldn't aim for a single agreement; we should probably have a wide variety, a huge diversity, of positive future worlds and try to make sure that they are compatible and don't interfere with each other – so that one set of people can do their version of what they want, and another set of people can do another version.

‍

AD: I think there – I forgot who said this, but it was in a very old LessWrong post, maybe even in the Sequences – there's the idea that hope can be vague. You don't want to sketch out every single detail of a hopeful scenario. Leaving it in a way that people can see themselves in it, and interpret it in ways that are aligned with their values, actually gets you to the next steps. And I think that could be an interesting approach to this as well.

‍

Question from Creon: May I speak about something relevant to this for a moment?

‍

GD: Go ahead.

‍

Creon: Okay, yeah, let me turn on my camera, even though I'm a little bit informal – in my bathrobe. I think what you are speaking about is very critical: the envisioning of desired future states. And I'm working right now, believe it or not, with Adam Brown on an idea for a paper about this. Alignment is an interesting word, because you can try to mathematically characterize alignment in a very high-dimensional space. If you think about the many possible dimensions that the future can unfold into – the potential space of things that could happen – there are so many possibilities. And it is important, and this is the vital work of the Xhope community, to find the North Stars, the guiding lights for the directions – the alignment towards the futures that we want. Because the problem, as I see it, is that so much of the world today is rushing around in fear. And what does that mean? They are running away from things they don't want. And the problem is that when you run away from something you don't want, you go running off in some direction, and it may very well make things a lot worse. In high-dimensional space there are exponentially many random directions you can run away in, and almost none of them are going to get you closer to where you want to go. So the importance of understanding where you might want to go is critical. If all you do, on the other hand, is try to increase the distance between you and the undesirable, catastrophic outcome, you do not, statistically speaking, get any closer to where you want to go. So I'm trying to push this into this community, and I'm very glad to hear you talking about it – that's what you're doing.

‍

AD: You're doing a great job of pushing it into this community. Thanks, Creon, I really appreciate it. I don't know, Gus, if you want to comment on it?

‍

GD: Yeah, I think it's good to have positive visions. I think it's good to have goals, something to aim for. Though what we should focus on as a global community right now is probably primarily avoiding extinction risk. But I think it's important –

‍

Creon: … but I'll shut up.

‍

GD: We can disagree. It's fine. I think it's also important to dedicate resources to developing positive visions and to do alignment or kind of goal alignment for different human communities.

‍

BE: Yeah, thank you. If there isn't anything to add to that, I'll steer us back to worldbuilding. Going back to worldbuilding: if we imagine it's 2050, are there any specific areas or technologies – other than AI, because we've already discussed AI, though maybe some sort of narrow AI – that you think are very relevant for creating this future world? Imagining it's a positive world, one where you would want to live.

‍

GD: I'm pretty interested in technology that allows people to sample experiences from others and to try different conscious states and see what they're like. Here we could be talking about meditative states, or states induced by various substances, or perhaps states induced by advanced forms of virtual reality headsets. This is part of the project of trying to explore what type of world it is we want to create: if we have a stronger ability to sample various experiences, we will have a deeper understanding of what it is that we want. For example, there's a project ongoing right now trying to train a model on data from very advanced meditators reaching specific states of meditation that are supposed to be very blissful. What you can do there is try to see if you can train a model to detect that state in inexperienced meditators. You can also cross-reference what the signature of the state looks like for one advanced meditator and check that all of the advanced meditators are actually reaching the state. I think projects like that could become more important. And of course, this involves a form of narrow AI, since you're training a model to detect these states. But if we could make it cheaper – less costly in all kinds of ways, not just money-wise – to sample experiences, people would be more open to trying these things and perhaps could learn something about what the world should look like.
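[Editor's aside: the sketch below illustrates the kind of narrow-AI pipeline Gus describes – train a classifier on labelled recordings from experienced meditators, then apply it to new sessions. The "band power" features and all data are synthetic assumptions for illustration; the actual project mentioned would use real neuroimaging data and far more careful, per-subject validation.]

```python
# A minimal, illustrative sketch: learn the signature of a target mental state
# from labelled sessions, then flag that state in an unlabelled session.
# All features and data below are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def fake_session(in_state: bool, n: int = 50) -> np.ndarray:
    """Synthetic 'band power' feature windows; the target state shifts two features."""
    base = rng.normal(size=(n, 4))
    if in_state:
        base[:, 0] += 1.5   # e.g. an elevated band (assumption)
        base[:, 2] -= 1.0   # e.g. a suppressed band (assumption)
    return base

# Labelled data from experienced meditators: target state vs. ordinary rest.
X = np.vstack([fake_session(True), fake_session(False)])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Apply the trained model to an unlabelled session from a novice meditator.
clf.fit(X, y)
novice = fake_session(in_state=False)
print("fraction of novice windows flagged:", clf.predict(novice).mean())
```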

‍

BE: Yeah, that's super interesting. I hadn't thought about that as a potential technology branch.

‍

GD: Yeah, well, I want to say you can probably take this line of thought too far. I don't think that we are going to give world leaders specific substances and then have world peace – the kind of 60s hippie fantasy of trying to solve all of the world's problems using psychedelic drugs. I don't think that. But I do think it could be part of this project of trying to build worlds you want to live in, if you have a larger or enhanced ability, using various forms of technology, to try out new experiences.

‍

BE: Even if we don't take it as far as the sort of 60s version – do you think it would lead to a sort of smaller Schelling point of values that we can all agree on?

‍

GD: That's interesting. That's difficult; my guess would be yes, it would push us in that direction. I think there are probably states that few people have reached that would be enormously valuable for people if they could experience them – probably states that are extremely pleasurable or extremely peaceful. And I think a lot of people would prefer that, given our shared evolutionary history: we probably respond to some of the same experiences with the same levels of enjoyment. Yeah.

‍

BE: Yeah – my friend just started at Cornell, and that sounded really fun because they had some sort of speciality lab where you could experience what it's like to be different types of species. So you could, for example, have increased hearing and decreased sight, and your sense of smell would be super strong. It just seems super fascinating. And I think in general, when you experience something that you haven't experienced before, it opens a door or makes something accessible to you.

‍

GD: Yes, that's part of it, and I think it could become much more powerful than that. I've actually been pretty disappointed by the state of virtual reality as it is now. You try a virtual experience, but a lot of your experience is still what's going on inside your head – how you're feeling – and you take that with you into the virtual experience. It's possible to be in virtual reality and experience all kinds of beautiful temples from around the world, but if you're not feeling good when you put on the headset, it doesn't intervene on the deep parts of your experience itself. It changes what you're seeing and what you're hearing. But I think a lot of people recognise that you carry your happiness and sadness around with you through changing environments, and it's often not enough to change your environment to change your conscious state.

‍

BE: Yeah, I guess it also makes me think of Joe Carlsmith's idea of a sublime utopia – that we're not ambitious enough when we think about the potential future, or what utopia could be like. Is that a concept that you've thought about at all?

‍

GD: Well, I think that's very plausible. I think we are constrained – we only have a certain capacity to imagine, we are only so smart – and I think we are really limited in the kinds of worlds that we can envision. We talked about utopia feeling flat; there's probably something weird going on in our psychology there, where we're assenting intellectually – we're saying, this is one of the best worlds you can imagine – but we're not feeling it, and so we're not buying it deep down. If we could respond rationally to just a written text saying this is one of the best possible worlds, we would probably have a different reaction than the one we actually have.

‍

BE: Yeah, taking it back to the Stone Age person you mentioned before: landing in our time, they would probably find it pretty weird and shocking, and maybe not all that enjoyable.

‍

GD: Probably – I was just about to say it would probably not be an all-out positive experience for them. Imagine taking a Stone Age person to a techno party or something – that would just be wildly disturbing, I think. The loud noises would probably be more hell than heaven.

‍

BE: Yeah, just taking them in, I don't know, an elevator.

‍

GD: …or something more mundane. True.

‍

BE: Yeah. Yeah. And I guess there's also that concept of weirdtopia, I think.

‍

GD: Weirdtopia?

‍

BE: Weirdtopia, yeah. I think Eliezer Yudkowsky has written about it – just that the future will be unimaginable to us, it will be very weird, and the fact that it seems bad just means that we aren't programmed to appreciate it right now, as we are.

‍

GD: It does make me sad, because I think there's something true about that. I'm not sure I will be able to grasp what's going on in the future if I'm not somehow enhanced in order to follow along with what advanced AI might be doing. And that does make me sad – I do want to know what goes on in the far future also.

‍

BE: Yeah, I think we're all very curious. You mentioned AI again in terms of risks, but are there any other risks or challenges that you think we are maybe underestimating right now, that are top priority?

‍

GD: Yeah, this will sound familiar: the world is underestimating these risks, but they are not new to our community. I do think that there's a risk of engineered pandemics and nuclear war, or great power war in general, and these risks are intertwined with AI. I do think that advanced AI – and specifically spreading advanced AI everywhere – increases the risk that some group will use AI to figure out how to engineer another pandemic. I do think that spreading AI capabilities widely might make it easier to attack nuclear facilities. A lot of my thinking on the future comes back to AI. We've survived a world with these nuclear weapons for, what is it, 70 or 80 years now. So of course there's risk, but the risk is not, in my opinion, as high as the risk from AI over the next, say, 20 years.

‍

BE: Yeah. Or, taking it back to the Existential Hope scenario: if you think of your best Existential Hope vision for the future, would you be able to share that? It's something we would then use as a prompt for generative AI to make an art piece out of it – one that hopefully inspires some hope.

‍

GD: Yeah, interesting. Imagine if, whenever you had some form of mental issue – it wouldn't have to be a clinical thing, it could just be that you're slightly sad for some reason – you could fix that in a way that also allows you to do better in life. So we're not talking about wireheading, or getting hooked on some drug and just lying there, achieving nothing and helping no one. I think we could intervene in our brains in ways that would make us happier and more productive. In general, I think there's a false dichotomy here: people think that you can't be happy and productive at the same time, or at least some people think that. I think you can, and I think these two mental states often come together – when you're happier, you're more productive, and that makes you happier, in a kind of upward spiral. I think we can imagine a future in which you can go somewhere, or perhaps have devices in your own home, that allows you to fix whatever ails you psychologically, while also becoming stronger and better able to help others and to function in life. Yeah, that's a positive vision.

‍

BE: Yeah, very positive. I think that's what I'm personally most excited about in terms of what neurotech could bring, or something like that. One question that we always ask also is – well, we talked about eucatastrophe, and we always get this comment that it's a catastrophically bad name. Do you have any better suggestions for what we should call it?

‍

GD: I'm not sure I have one. Something about… yeah, no, I actually don't have a better name. But I agree, it is a bit weird to pronounce. When you told me about it just now, I was thinking that it must be a kind of one-off event, but I think progress is often gradual, so perhaps you want to find something that conveys that it's also gradual. But yeah, I don't know. I had "positive progress", but that's just extremely vague. So maybe not that.

‍

BE: It was a tough question. And I agree that progress often is more gradual. We're talking about so many types – paretotopia, and protopia, and weirdtopia, and sublime utopia, and all these things. And there's also protopia, which we often come back to – prototyping our way towards a nicer version of utopia.

‍

BE: When you think about Existential Hope, is there anyone in particular that inspires you? Or is there any book or anything that was super inspiring – some sort of favorite resource?

‍

GD: I liked The Precipice by Toby Ord – I found that hopeful. I listened to it during the initial COVID lockdowns, when I was quite sad and worried for the world, and it helped me; I went on a couple of long walks and listened to the audio version. That's a good resource, and I think it still holds up. Yeah, that's my recommendation.

‍

BE: And any recommendations for someone we should invite on the podcast? Maybe it's Toby Ord then.

‍

GD: Yeah, I think you should invite Toby, or you might invite Dan Hendrycks from the Center for AI Safety. He has quite comprehensive knowledge of what's going on in that field. He's very worried, but there's some hope there also.

‍

BE: Yeah, that's a good recommendation in terms of someone newer in the field. Do you have a favorite nonfiction – or, sorry, fiction – book, something that inspires in that sense?

GD: Fiction – there I must disappoint. I never read fiction, and perhaps that's a fault of mine; maybe I should read more fiction. But yeah, I have no fiction recommendations.

‍

BE: That's okay, we'll excuse you. I have my last question, which is a very short one. If anyone else has a question that they want to squeeze in before it, they're welcome to – otherwise I'll just ask my last question and we can round off. No audience questions, then.

‍

BE: Then I will ask you, what is the best advice you ever received?

‍

GD: The best advice I ever received? My parents – my mum in particular – taught me very early on the value of honesty, and that just makes life much easier. I think it's much easier if you go through life trying your best, at least, to be honest.

‍

BE: Yeah, it saves you time and energy in the long run, hopefully, even though it often means paying upfront.

‍

GD: True. True.

‍

BE: Yeah, I think that's it. Really, thank you so much for joining us for this hour. And I'm looking forward to listening to more podcasts with you.

‍

GD: Thanks for having me on. It's been a pleasure.

‍

BE: Thank you. Thank you so much. Have a good one.


RECOMMENDED READING

The Feeling of Value: Moral Realism Grounded in Phenomenal Consciousness – Sharon Hewitt, 2016.

Discussion on The Crazy Train – Ajeya Cotra on the 80,000 Hours podcast, 2021.

Steve Omohundro on Provably Safe AI – FLI Podcast, 2023.

The Precipice – Toby Ord, 2020.