In this episode we have the pleasure of interviewing Anders Sandberg, a Swedish philosopher and researcher at the Future of Humanity Institute at Oxford University. Anders has a background in computational neuroscience and has made significant contributions to the field of transhumanism and cognitive enhancement.
During our conversation with Anders, we explore the concept of grand futures and what it means to strive for them as a society. We discuss the potential benefits and risks of emerging technologies such as artificial intelligence, nanotechnology, and biotechnology, and how we can navigate these developments responsibly.
Anders also shares his thoughts on the impact of technology on human nature, the potential for post-human societies, and how we can balance our desire for progress with our ethical and moral responsibilities.
Overall, this episode provides an engaging and thought-provoking exploration of the grand futures we can envision for humanity and the challenges we must navigate to achieve them.
Tune in to gain valuable insights from one of the leading thinkers in the field of transhumanism and existential risk.
Anders Sandberg is a Swedish researcher, science debater, futurist, transhumanist and author. He holds a PhD in computational neuroscience from Stockholm University, and is currently a Senior Research Fellow at the Future of Humanity Institute at the University of Oxford, and a Fellow at Reuben College.
This piece was created by Aron Mill, using Midjourney and DALL-E. Aron is a researcher at the Alliance to Feed the Earth in Disasters (ALLFED). Through his work he aims to contribute to civilizational resilience against Global Catastrophic Risks. He is interested in flourishing future scenarios in which we have traversed the precipice of existential risks and recently began exploring these themes through art and hopeful narratives.
This image shows a fascinating future in which a diverse range of life forms exist. At the highest level are the cyborg-baby-buddhas, who live in a constant state of bliss and float above the worlds they experience. Moving down the image we encounter synthetic or completely artificial life, which may include humans with significant robotic enhancements or AI in humanoid form. Another individual has chosen to embody a Hindu goddess with multiple arms. Finally, at the bottom of the hierarchy, are people who merge with plants and sprout leaves for photosynthesis.
Allison Duettmann: Welcome everyone to Foresight’s Existential Hope podcast. I am really happy to have so many of you here and to be starting a new year. I could not imagine anyone better to do that with than Anders, as he is a beacon of existential hope to us and so many others. It has been wonderful collaborating with you. It has been a while now, maybe 4 or 5 years. The pandemic has certainly shifted time perception somewhat. You joined us at our Vision Weekend in San Francisco and have been influential in our community in the Bay Area even while at Oxford. You have been holding up this existential hope limelight for much of the community here in a wonderful way.
There is a bunch of material out there from us with you where you dig into the specifics of what you are working on. It is quite a broad cluster: anything from neuroscience to neuroethics, to grand futures and futures out in space, to thoughts on AI. You have now packaged all of that into the book Grand Futures. You have spoken about it on various podcasts, including with us. If people want to dig deeper, it is a grand masterpiece, a book of all books. We have discussed it on various occasions, so I wanted to point people to all of these links from Anders that are out there. They cover most of the topics that point to positive futures in a really wonderful way. This also includes what Beatrice mentioned to me, one of your papers, “Blueberry Earth.” It answers the question of what would happen if the entire Earth were instantaneously replaced with an equal volume of closely packed but uncompressed blueberries. You have also done some fun writing.
‍
Anyway, we are really happy to collaborate with you this year, in particular on the Whole Brain Emulation Workshop, which is revamping the Brain Emulation Roadmap that you wrote a long time ago in light of our current understanding. Thank you a lot for joining us on the podcast today. If you can, feel free to share in your own words what you have been doing in your academic career and how it all got started. The Anders Sandberg life story in 3 minutes!
‍
Anders Sandberg: Wow, yeah. Well, thank you, Allison, for that opening. My standard story is that I grew up in Sweden in the 70s, which was very staid and very boring, so I read all of the science fiction in the local branch library. I realized I wanted to make those grandiose futures real. Then I went to the next library, then the municipal library, then the university library, and then I eventually ended up in Oxford. That is one way to look at it. I was very bored by the current reality, and I wanted something interesting. Instead of turning that into escapism, it turned into: maybe we can actually understand and make a future. Along that path, of course, I ran across a lot of people with similar ideas. Indeed, the Foresight Institute and people associated with it, such as Eric Drexler and others, were really instrumental in making me realize there were people making careers out of this. They were writing papers, getting PhDs, starting companies, convincing governments that these technologies mattered, and fighting the ethical battles in academia or newspapers. I started to realize I might want to be a part of that.
‍
There is also this other story that I can tell. Now I am 50 years old, half a century. When I turned 25, I had a sort of 25-year-old crisis about what I had done with my life. I concluded that I had spent the past 25 years learning stuff. At first, it was obvious things, such as walking and reading. Then I thought maybe I should spend the next 25 years taking this knowledge and making use of it in interesting ways, and I did. So, last summer, when I had my birthday party, I took 10 seconds to have my 50-year-old crisis. I thought, okay, what have I been doing? I have a good, established academic position, and I am super well known in some circles, but what am I going to do in the next 25 years? I realized that so far, I have been applying my knowledge, but usually within existing structures. So I thought, maybe it is time to shape a few new structures and actually make a few things. Not just sending out information as a response to receiving information, but also changing the world more. So that might be my current 25-year plan, but we will see what actually happens.
‍
Allison Duettmann: That is very cool. I think you just gave us this tool of the quarter-century life review. A review once every quarter of a century is a good idea. I missed my first quarter, but I am going to get on the second one now, so thank you for that.
‍
Anders Sandberg: Yes, that kind of fractal structure is what you want to have with your life reviews. You want to open each month thinking, “What am I going to do this month?” I found that very useful because I actually check my schedule and realize, “Oh, I said yes to these lectures and these conferences,” which is very good for practical planning. But you also want the bigger review, where you occasionally have a moment of thinking, “Am I the bad guy?” I think it is very important to do that at regular intervals.
‍
Allison Duettmann: How often do you ask yourself that question of whether or not you are a bad guy?
‍
Anders Sandberg: About once a month or so. You can, of course, do it about different things. “Am I barking up the wrong tree? Maybe my life choices and general ideology are bad ones.” But there are also a lot of more practical things. “Am I treating friends and family in the right way? For this particular project, am I helping the field, or am I just adding another paper to pad my CV while messing up the scientific literature?” So I think having this recursive reviewing is very important. You cannot review everything all the time, because you want to work and you want to live, but adding this reflection is useful. Of course, more thorough reflections take more effort, so those are also useful but should be done more rarely.
‍
I even did one many years ago together with my husband-to-be, where we tried to figure out why we disagreed about politics. We had this big sheet of paper where we mapped out different assumptions and ideas. We actually figured out some things, such as core assumptions where we really differed. There are many other variables too, such as upbringing, friends, family, and stuff like that, but a core intellectual disagreement had to do with whether groups are more than their members. Is it good to be part of a group in itself, regardless of whether it is good for the members? I think it is good to be part of a group because it is good for the members, but the group itself has no extra value. My husband disagrees, and from that follow a lot of political choices, which is great for having these long conversations about our agreements and disagreements. So my point is that it is not just about solo reviewing. You may want to do it with other people.
‍
Allison Duettmann: You did a full double crux with your partner to dissect politics. This is wonderful. I once had a pretty long 8-hour discussion with a friend of mine where we went down basically all of the different threads we wanted to discuss about various future topics. Usually, you come to a point where you have to decide whether you are going to go down one path or the other. We would always map the other direction we could have gone, which was also interesting, and then go back to each individual one. That was with someone I think you also know. It was a very long discussion, but we exhausted all of these points, including doing double cruxes. Anyway, I did want to briefly ask: what does your review process look like? Every 25 years, you have this big review, but do you also do an annual and/or monthly review? Is that roughly how it goes?
‍
Anders Sandberg: If I were one of these wonderful, well-organized people you find on the internet, I would have it all lined up. There are many who do it in a proper manner, but I do it more informally, as I am a sloppier person. Rather than sitting down with a big piece of paper for my 25-year review, I was literally walking around thinking about it while trying to get more cake from the kitchen. However, I knew beforehand that I was going to do it. I recognized that the start of the new year, after getting over my hangover from the party and cleaning up the kitchen, would be a good time to sit down with a cup of coffee and plan ahead. You need to set up these habits so they fit you.
‍
I think this is one of the interesting challenges that I have noticed. Many ideas that are very good on their own still need to be mapped onto actual habits that fit your actual personality and your actual psychology. This is really tricky because some people have a psychology that lends itself to setting up habits in the right way. Others drift in and out of habits. Some of us are very bad at keeping a straight organization. Recognizing your strengths and weaknesses, and what you can do to get yourself to do the right thing, is super valuable. That is what I hope kids can learn the basics of in their first 25 years, so they can spend the next 25 years actually functioning a bit better.
‍
There are some things that take much longer to discover. There is also nonstationarity in personality traits: the Big Five personality traits slowly shift across the lifespan. One of the things that delights me about middle age is that my conscientiousness is still steadily going up. I am getting better at washing my dishes and cleaning up. I am getting better at finishing papers, although my coauthors, some of whom are listening in right now, know I am still much too slow at responding to all of my emails.
‍
Allison Duettmann: Okay, wonderful. So maybe the first habit for some of us will be learning how to set up habits. This is my final question on this: what do you think your 75-year review will unveil?
‍
Anders Sandberg: Part of it will be, “Okay, how did those last 25 years go? Did this work out as I intended, or as it should?” There is also something interesting about what dimension needs to be added. In some sense, learning stuff for the first quarter century is a purely epistemic dimension. Using the learned stuff to generate new knowledge is still fairly epistemic, although it involves a social dimension of getting it out there and building a credible network, etc. Actually implementing things is much more about decision-making, and also fairly social.
‍
What else is there? Does this cover the whole possible space of action? Obviously not. However, I do not even know yet which dimensions are going to turn out to be the important ones. That is interesting, because keeping an eye out for the things that will be relevant the next time you do a review is useful. Also, actually noticing what you did not think about and writing it down for the next monthly review is helpful, because many things can fade. You want to get it down into something more solid so you can develop these habits. It is also a bit of a skill. Habits you need to keep going, but skills stay quietly in the back of your mind.
‍
Allison Duettmann: It is kind of cool, because looking at it this way gives a bit of the notion of delayed gratification. It is actually useful to spend some time in the beginning gaining skills and knowledge that you can then apply in a multiplied way moving forward.
‍
Anders Sandberg: Knowledge is not just about things out there. It is knowledge about yourself, and it is also the question of “How does delayed gratification feel? How happy am I if I delay gratification?” It is useful to roughly know how rewarding it is, because sometimes it is not worth delaying gratification. Some people are naturally short-term. I think I should be more long-term, but I am not. However, understanding this, together with knowledge of your values, gets at an important thing. Eventually, and hopefully, you do not just learn your values but also how to act intelligently on them. This is what we normally call wisdom: when we decide, and are motivated, to do the right thing. It takes time, because there is a lot of information to gain in learning these patterns. Maybe I will acquire wisdom over the next 25 years. We will see, but I am a bit skeptical.
‍
Allison Duettmann: That is very cool. Yeah, this weekend I was just discussing Rawls’s reflective equilibrium: how we can look at our intuitions about situations, form principles based on them, and then revise those moral principles. I think that also takes time to build up. Anyway, I think it is an interesting long-term perspective. Hopefully, we can eventually get to the question of what your 150-year review looks like. We are not quite there yet, but I will hit you up.
‍
Anders Sandberg: Perfect! For our future podcast or whatever we have in that future era.
‍
Allison Duettmann: I will schedule it on your Google Calendar. Perhaps you could give a bird’s-eye view, and we already talked a bit about this, of what a young person in your field, which is a conglomeration of different fields, could learn that is relevant and useful. What is the space of possibilities in the field you see yourself operating in, if someone new wanted to enter it now?
‍
Anders Sandberg: So the field I am operating in is some kind of Whig philosophy of future stuff. However, what is actually going on is that I am a generalist. I have a bunch of platforms; basically, certain sets of knowledge, skills, and academic backgrounds act as platforms you can build other stuff on top of. Therefore, I would tell the young person to learn as much math as possible, because it gets you into a bunch of things that are math-oriented. It is also useful for learning how to manipulate formal systems of various kinds, even if you are not going into math. It is generally good training for abstraction, which you also want for other platform fields like philosophy, economics, and even biology. Then you want to fill in a lot of random knowledge and get acquainted with as much as you can.
‍
Essentially, as a generalist, you want to be able to touch on any topic, or know you could learn it if you really wanted to, even if you do not have a reason for it right now. Of course, you also want a couple of areas where you are quite good, because that is a nice way of getting a job to pay the rent. But again, that is why it is very useful to be a generalist: you can fumble around if all else fails, which is very good for your emotional stability. Also, sometimes a particular skill you find yourself very good at is helpful for leveraging stuff. For instance, I am good at writing small pieces of software to test ideas, and I am fairly decent at doing mathematical models of things. Whether it is a philosophy paper or a volcanology paper, I can whip up models relatively easily and critique them.
‍
So that would be my general advice. You want to be a generalist while not being totally random, even though that could be a good thing as well. You want to learn various platforms as well as a few powerful skills. Then you should be open to combining them whenever there is an opportunity, because the final part is that a lot of life is just random luck. As such, you can act as a receptor for good opportunities coming your way and hold on to them. Over time, you may also learn to recognize the pattern when you have done a stupid thing. Once you learn to avoid repeating the same mistake, things get more effective, even though a lot of it is random.
‍
Allison Duettmann: Who said that? “The more prepared I am, the luckier I get.” Roughly something like that.
‍
Anders Sandberg: Yeah, exactly! It is not just true for the individual; I think it is also true for organizations and societies. You can set up an organization to be open to outside input. You can make a society that is good at picking up interesting ideas and possibilities when they show up. A society that has decided it already knows what is right and is not open to anything new, or even one that decides to plan everything ahead of time, is going to miss a lot of opportunities and have a hard time dealing with the occasional bouts of bad luck.
‍
Allison Duettmann: Another quote is about learning to make opportunity, especially in times of crisis: having very concrete plans of action ready, because when a crisis hits, those plans are more likely to flourish than they would otherwise, as long as they are out there.
‍
Anders Sandberg: It reminds me of a piece of advice I got from my father. He was about as pessimistic as I am optimistic. Granted, he was also a manager at a big corporation with important, complex projects. He would tell me, “Anders, always have a Plan B because Plan A will always fail.” He would then add that I also need a Plan C because Plan B is going to fail, and a Plan D because Plan C will fail too. However, he mentioned that once you get to Plan F, or somewhere around there, you do not need to make any more plans, because you will have a better understanding of what you could do. Basically, when shit actually hits the fan, you can improvise, because you have explored the alternatives and you are not bound to a rigid plan. I always thought that was pretty good advice, roughly Eisenhower’s point that plans are worthless, but planning is everything.
‍
I think this is an interesting aspect of intelligence. When I tried to write my first AI programs back in the late 80s on my home computer, I had this problem. They had the classical architecture: observe, make up a plan, implement the plan. When that failed, I did not know what to do. There is a term for it, “plan repair.” It turns out that actually implementing proper plan repair is hard. In many ways, that is the core of being an intelligent agent acting in this world. Modern forms of AI have much more interesting ways of doing plan repair, maybe not even planning in the same sense as the good old-fashioned AI I was reading about in the 80s, but I think there is a lot of truth there.
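To make the classical architecture concrete, here is a minimal toy sketch of a sense-plan-act loop with naive plan repair: replan from the observed state whenever an action’s outcome differs from what the plan expected. This is a hypothetical illustration of the pattern Anders describes, not a reconstruction of his 1980s programs; the number-line world and all names are invented for the example.

```python
import random

def make_plan(state, goal):
    """Plan a path of +1/-1 moves from state to goal on a number line."""
    step = 1 if goal > state else -1
    return [step] * abs(goal - state)

def act(state, step, rng):
    """Execute one step; the world occasionally pushes back unexpectedly."""
    if rng.random() < 0.2:        # an unmodeled disturbance
        return state - step
    return state + step

rng, state, goal = random.Random(0), 0, 5
plan = make_plan(state, goal)
while state != goal:
    if not plan:                  # plan exhausted but goal not reached
        plan = make_plan(state, goal)
    expected = state + plan[0]
    state = act(state, plan.pop(0), rng)
    if state != expected:         # outcome mismatch: repair the plan
        plan = make_plan(state, goal)
print("reached goal at", state)
```

Without the two repair branches, the loop would simply run out of plan the first time the world misbehaved, which is exactly the failure mode described above.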
‍
Allison Duettmann: Yeah, as a conference planner for much of my work, I know Murphy’s Law well: everything that can go wrong does go wrong. But I do not think it captures it all. I think there should be another part to it, because oftentimes you can only plan for the things that you know can go wrong. There are usually other things that go wrong that you did not even account for, in addition to all of the things that you knew could go wrong. Having a version of the law that captures that would be very nice. But yes, I agree with your father there on all counts. I think it is awesome, and it is often evident on a small scale.
‍
Then I think one can apply it to the larger-scale situation. Okay, maybe just to walk people through: if you could, please give a rough understanding, to the extent that it is possible, of the field that you work in and what has really changed since you entered the academic space. Are there things that were true back then that are not true anymore? Are there any predictions you can make for someone entering the field now about how they should adapt, based on what you have experienced in the past? How can someone position themselves a little better to remain relevant over the next 25 years of their own career?
‍
Anders Sandberg: The most obvious case is that I did my own PhD on neural network models. I was a teaching assistant for courses, and I vividly remember sternly telling my students various things about overfitting and how to design a neural network. These pieces of advice are totally wrong today. They were kind of true back in 1999 when I was saying them, but they stopped being true. Why neural networks actually work so much better now that we have more computing and more data is still an active area of investigation. Overparameterization is, to some extent, a mystery to this day. It is fascinating because this was a surprise, not just to me but to many people in the neural network field. We thought we knew what was going on. It was true in 1997, but it stopped being true in 2010.
‍
However, this tells us an interesting story. Just because a field has matured in some sense and understood something does not mean it has totally understood its domain. There might be important aspects that could change, aspects that could actually revolutionize it. In this case, it was not even an aspect of the field that we thought could change it; it did not seem to be the important thing. If someone had asked me in the year 2000, “Where is the next revolution in AI or neural networks coming from?” I probably would have said that we need some new architecture, or some insight into planning or how to set up modules. We did not think, “Oh, you just have to have a ridiculously large amount of data,” and then mysterious processes produce phenomena like grokking. Various other interesting capabilities reignited the field too. Taking a step back, I see this happen occasionally in other domains.
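As a hedged illustration of the overparameterization puzzle Anders mentions (a toy experiment, not the deep-learning case itself): minimum-norm regression on random ReLU features often shows the “double descent” pattern, where test error spikes when the parameter count is near the number of training points and then falls again as the model becomes heavily overparameterized. All settings below are invented for the sketch, and the exact numbers will vary with the random seed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth target plus noise.
n_train = 40
x_tr = rng.uniform(-1, 1, n_train)
y_tr = np.sin(2 * np.pi * x_tr) + 0.1 * rng.standard_normal(n_train)
x_te = rng.uniform(-1, 1, 500)
y_te = np.sin(2 * np.pi * x_te)

def relu_features(x, n_feat, seed=1):
    """Fixed random ReLU features, shared by train and test."""
    r = np.random.default_rng(seed)
    w, b = r.standard_normal(n_feat), r.standard_normal(n_feat)
    return np.maximum(0.0, np.outer(x, w) + b)

for n_feat in [5, 20, 40, 80, 400, 2000]:   # 40 = interpolation threshold
    F_tr, F_te = relu_features(x_tr, n_feat), relu_features(x_te, n_feat)
    # lstsq returns the minimum-norm fit when the system is underdetermined,
    # the kind of implicit bias thought to keep huge models well behaved.
    coef, *_ = np.linalg.lstsq(F_tr, y_tr, rcond=None)
    print(f"{n_feat:5d} features -> test MSE {np.mean((F_te @ coef - y_te)**2):.3f}")
```

The 1990s advice was to stop well below the interpolation threshold; the surprise is that going far past it can help again.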
‍
In the 90s, I started becoming active in the transhumanist movement. We, of course, envisioned a remote future beyond the year 2000. It is very amusing as an aging futurist to look at what we got right and what we got wrong. One of the more obvious things is that, in many cases, I think we were right about what was important, but many other important things came up that we did not notice would be important. So if you think that something is important, that is very likely true; however, other things might turn out to be important too. For instance, quantum computing might be one thing that is obvious in retrospect, but it definitely was not obvious in the 90s until the results started coming out. We kind of recognized that space would be awesome, but we also had this glum assumption that it would be much further away, yet it happened surprisingly fast. The same goes for artificial intelligence. The ordering of technologies and the speed at which they arrive is very hard to predict.
‍
On the one hand, we are making good progress, much better than one could have hoped for. On the other hand, it is still not fast enough for us transhumanists. This leads to an interesting lesson: your ability to predict your own field is not necessarily that great, even if you are intensely interested in its future. You need to hedge against that to a large degree. You need to understand that your field might mutate. In the last 20 years I have been in academia, I moved from neural networks into philosophy. I started out in the ethics of human enhancement. I am still interested, and I want to work more on it, but I realized that maybe the important stuff is slightly elsewhere. AI and whole brain emulation now look like they may overtake some of the enhancements that we spent a lot of ink writing about. So there is a flexibility you want to have. You also want to look at anomalies inside your own field, things that seem to be shifting but that no one is talking about. Those shifts are usually important. Once they are big enough that everybody can notice them, you should definitely act on them. If you can grab onto them beforehand, however, learn a bit about them, and write even half a good paper, then congratulations: you are golden, at least from an academic standpoint.
‍
More importantly, you may be able to latch on and shape something that might become very powerful. That is the final lesson. When something emerges rapidly, it is very hard to predict where it is going, so you may not actually know how to push it very well. This is the Collingridge dilemma. However, you have a chance of getting that influence, even if you may not know where to push. Later on, you will have a better idea of where you want it to go, but at that point it is much harder to push the field, because there are many more people involved, vested interests, standards being set, and so forth, so it might be much more work. Still, you have a chance to be a part of the field.
‍
Allison Duettmann: Now, I think in a different podcast, maybe it was the FLI one, you discussed your view on AI ethics, at least from a transhumanist perspective. Back in the day, many people were really pushing AI forward, and then people got a bit more worried about risks. Now, there is this whole concern about the more short-term issues in AI ethics, such as different types of biases and so forth, and there is also the field of alignment. So it has been quite interesting to hear you discuss those different beliefs and technology shifts in the field. It has been cool to see. I don’t know if you want to correct me on anything.
‍
Anders Sandberg: I think that is an accurate description. Back in the 90s, the transhumanist view was: the more AI, the better. It was seen as something that would amplify human abilities; we could ask the AI to make us smarter and more powerful, and then we would reshape the world in an awesome way. Yay! Then there were some people saying that we need a technological singularity as soon as possible, given the crappy world, animal suffering, and all these bad things, so we really need to speed it up. Then it became important the moment we realized, “Wait, if we get that much power, and it is also fairly autonomous, as we envisioned these AI systems to be, we had better do it right, and it might be harder than it looks.” People were also saying, “Don’t worry, it is totally obvious. Just put in Asimov’s three laws of robotics.” Others would point out that those were fictional.
‍
Actually, there are a good few examples in the stories of how it goes horribly wrong. Some people were not worried, thinking they could fix it by doing it one way, while others pointed out that that way would wreck it. That back and forth led to the early days of thinking about AI alignment, although back then it was known as friendly AI. Gradually, it has developed. For example, one of the interesting things is that the role of an intelligence explosion, which started out as super essential to the whole idea, kind of fell by the wayside. Originally, it seemed obvious that a smart machine would be able to make much smarter machines, which would lead to this rapid explosion of capability. That meant that getting the first machine exactly right was super important.
‍
I think there are many strands of AI alignment that to this day think that is sort of true. However, you can also recognize that many of the fundamental problems stand on their own. The orthogonality thesis, for example: intelligence and goals are kind of independent things. You can have something very intelligent with very stupid goals. Or the problem of convergent instrumental values: if a system tries to solve a problem, it will also have instrumental reasons to grab power and preserve itself. Even though that may not be its original goal, it can be a side effect, which is quite dangerous. There is also the fundamental problem that human values are messy and fragile. None of these things depends solely on your assumptions about the singularity.
‍
Now, the interesting thing about hanging out in fields for a long time is that you start recognizing some genealogy or ideas and also getting annoyed that the kids of today do not know about the past conversations. I get to be the grumpy old guy saying, “Yeah, we were talking about that back in the 90s. Here are some dusty posts from a mailing list that you never even heard about before you were born where we made this point.” You can point out why some peculiar things are around, and sometimes they are for good reasons, and quite often for random bad reasons. An enormous amount of the structures we find in our institutions and social network, are because some people got together and go excited about that idea.Â
‍
Because of their peculiarities, you had a particular set of people latching onto that idea, and later on you might end up with a whole bunch of people attached to it. It could have been totally different, and in nearby parallel worlds it probably would be. So there is a lot of contingency in how ideas spread, which is a bit sad, because I kind of hope that truth should always win in the end: a marketplace of ideas where good ideas always win. But over time, you realize that this is only sometimes true, in some statistical, overall sense. Quite often, you have to work rather hard at it. And of course, some of it is random chance. Sometimes you are lucky, sometimes you are unlucky.
‍
Allison Duettmann: Yes, I think it maps a bit onto Robin Hanson’s concept of value drift as well, to the extent that what we call moral progress may just be values drifting.
‍
Anders Sandberg: The interesting thing is that you can have a combination. When you think about diffusion processes in physics, basically what you have is random drift, where particles diffuse away from each other. But you may also have a drift that is not random, because you might be in a liquid that is moving, or maybe they are magnetic particles in a magnetic field. So yes, we are diffusing around randomly, but we are also attracted to a magnet. The same thing might be true for many of our values. The magnet could stand for certain values: there are some values that are consistent and produce certain outcomes fairly reliably, and I do think that they, in some sense, correspond to moral truths. I do think open societies tend to function better and solve problems better than closed societies. I think letting people control their own lives tends to make them happier than having some outside force trying to optimize their lives.
‍
Additionally, there is also a lot of randomness in the things we are trying to fix. What is regarded as taboo to talk about? What is regarded as the most important thing to optimize? A lot of that is random and might just depend on the right people thinking of the right thing in a certain field and telling influential people at the right time. Understanding that you have both random drift and targeted drift is valuable. Incidentally, this is also a good way to try to live your life: you may move around randomly, but if you bias your random walk in a good direction, you will, over a lifetime, end up further in that direction.
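A tiny simulation makes the drift-versus-diffusion point quantitative (a sketch with made-up numbers): the mean displacement from a constant bias grows linearly with the number of steps, while the random spread grows only with its square root, so even a small consistent bias eventually dominates the noise.

```python
import numpy as np

rng = np.random.default_rng(42)
walkers, steps, bias = 200, 10_000, 0.05

# Pure diffusion: zero-mean steps spread walkers out but go nowhere on average.
pure = rng.standard_normal((walkers, steps)).sum(axis=1)

# The same noise plus a small constant pull (the "magnet").
drifted = (rng.standard_normal((walkers, steps)) + bias).sum(axis=1)

print(f"pure diffusion : mean {pure.mean():7.1f}, spread {pure.std():6.1f}")
print(f"small drift    : mean {drifted.mean():7.1f}, spread {drifted.std():6.1f}")
# Expected: drifted mean ~ bias * steps = 500, spread ~ sqrt(steps) = 100,
# so after enough steps the bias dominates even though each step is mostly noise.
```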
‍
Allison Duettmann: Yeah, of course. Also, perhaps there are a few constraints from things such as evolutionary game theory, at least for biologically embodied creatures. I think what I am really excited about are a few recent developments, such as OpenAI’s debate experiments, where AIs debate each other with a human judge in the middle. There is also the publication by Stuart Armstrong and Rebecca Gorman on ChatGPT, monitoring it for jailbreak prompts; they were able to spot dangerous prompts. There is also other work on Constitutional AI, where different systems hold each other in check. I am really excited about this because it could give us a better picture of how humans can improve, at least on an individual level.
‍
For example, if you had an AI system that was relatively well trained on your past search history and all the data you have available, at first it would get a lot of things wrong, but it would get more things right as it gained access to more data. It would keep getting more things right by your own standards. I think this sort of recursive and iterative updating, enabled and assisted by AIs, will give us a few interesting things to learn from. Overall, I am excited to see how that shapes our own updating in the long run. It will definitely be an imperfect process, but if you have anything to say about this more human side of AI, please feel free to add to it. What do you think?
‍
Anders Sandberg: Yeah, one underlying thing in these examples is that the truth may be a Schelling point. To explain for the listeners who may not know: a Schelling point is basically as follows. Say we decide to meet somewhere in Paris at noon on a given day, but we do not say where. Where would I go and wait? The most likely answer is the Eiffel Tower, at least if we are not from Paris, as I think Parisians might make a different choice. We have this shared knowledge of what other people are most likely to do.
‍
Now, it is interesting because, in some domains, truth is a unique case: it is a natural Schelling point. Similarly, there may be different equilibrium forces that are useful. Not all equilibria are good; there are a lot of equilibria in game theory and economics that are much worse than the best case. It may be hard to get out of such a case, but programming AI systems to actually get to good stable points might be a useful heuristic. In many cases, the stable points are much easier to analyze than the pathway you take to get there. The real question is whether we can engineer things properly to make the system evolve to such a point. Also, can we develop the theory so that the equilibria where we end up tend to be good enough, or actually really good?
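A classic toy example of “not all equilibria are good” is the stag hunt (the payoff numbers here are a textbook-style invention, not from the conversation): both players coordinating on Stag and both playing safe with Hare are each stable equilibria, but one is strictly better, which is why selecting good focal points matters.

```python
import numpy as np

# Symmetric stag hunt. payoff[a, b] = my payoff playing action a against b.
# Action 0 = Stag (cooperate, great if matched), 1 = Hare (safe either way).
payoff = np.array([[4.0, 0.0],
                   [3.0, 3.0]])
actions = ["Stag", "Hare"]

for a in range(2):
    for b in range(2):
        # (a, b) is a pure Nash equilibrium if neither player gains by deviating.
        a_stays = payoff[a, b] >= payoff[1 - a, b]
        b_stays = payoff[b, a] >= payoff[1 - b, a]
        if a_stays and b_stays:
            print(f"equilibrium ({actions[a]}, {actions[b]}): "
                  f"payoffs ({payoff[a, b]}, {payoff[b, a]})")
# Prints both (Stag, Stag) with payoff 4 and (Hare, Hare) with payoff 3:
# two stable points, one clearly worse. Steering toward the better focal
# point is the equilibrium-selection problem described above.
```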
‍
Allison Duettmann: I could talk about this all day, but I am going to hand it over to Beatrice now to dig into an entirely different set of topics, though I am sure they are related.
‍
Anders Sandberg: Everything is related to everything else, because we are in the same universe.
‍
Allison Duettmann: I think on that scale that is true. That is a great segue into a very related discussion with Beatrice, who will be taking you more onto the X-hope part of this podcast. This was a true delight!
‍
Anders Sandberg: Oh, thank you!
‍
Beatrice Erkers: Yes, thank you for coming! As Allison mentioned, you are a big inspiration for us, especially with this existential hope program, so it seems like a perfect fit to have you here with us on this podcast. We are very excited about your book. Is it coming out soon?
‍
Anders Sandberg: The eternal question. The annoying part is that I think I am mostly done with the important stuff, which probably means I am halfway through the entire project. Even when I finish all the relevant stuff, you still have to check the equations, fix the typos, and get the page layout right. Therefore, it is taking its time, but I would rather do it slowly and get it right than rush through it to get it out. I do have the problem of the ghosts of future books haunting me, though. There are books I really want to write, and they are hovering around me on these dark nights. At the rate I am going, it feels like the phenomena those future books are about will become very relevant long before I finish writing the current one.
‍
Beatrice Erkers: Yeah, at least it sounds like you have a very long timeframe for this Grand Futures book, so, even if things go crazy in the next few years, that should still be relevant.
‍
Anders Sandberg: Yeah, I am hoping this book is readable even after the technological singularity.
‍
Beatrice Erkers: Exactly, that is a great goal to aim for. So I wanted to dive into this. Within this existential hope project, we are trying to figure out what makes people excited about the future, and what we think is important for people to be excited about. Usually, when I ask the question, “Are you optimistic about the future?” I am unsure what the guest will answer, but I feel as though you are fairly optimistic about everything. Are you optimistic about the future?
‍
Anders Sandberg: Yeah. I am generally optimistic, so of course. However, me being optimistic about anything in particular does not give you much information, as I am so heavily biased. It is a bit interesting because I spend a fair bit of time working on existential risk and looking into very dark things. Then again, being hopeful is quite helpful: I can still sleep at night after thinking about various horrible stuff. Nevertheless, I think that hope for the future does not necessarily involve much about probability. You can hope for something that is nearly impossible; it is a psychological attitude. But I also think the actual probability of us having a good future is decent. I would give us about an 88% chance of a glorious future. Others would say, “Wait a minute. You are giving us a 12% chance of gloom and doom?” I would say, “Yes, but we may be able to make that smaller.”
‍
If we play our cards right, we might be able to turn that 12% into 10%, 5%, 1%, and so on. There may be pathways here, and being an optimist is a very good way of driving yourself to find them, mainly because the future is worth saving. There is actually a lot of good there, and it would be a shame to miss out on it. We have good reason to think that we can affect the future. Then you get into this interesting discourse of “How do you go about affecting the future in a useful way?” There are some people, like Robin Hanson, who point out that our ability to figure out which actions have long-term effects is very limited. Most things people do to affect the long-range future do not actually affect it, but there are certain things you can do, such as setting up standards, creating schools of ideas, etc. There are certain patterns that tend to get generated.
‍
So I think that we can actually be optimistic. Most importantly, you can learn about this domain. There are things we can figure out from past history, simulations, and philosophical analyses: what works, what does not work, and what areas we can influence. There are parts of the future, of course, that we cannot influence, because those stones will be set later. Once you have a map, you can start focusing on the stuff you can affect.
‍
Beatrice Erkers: Yeah. I think when you spoke to Allison, you mentioned that you believe societies can be set up to be more or less likely to succeed. So what areas or technologies do you think are the most important to hone in on for us to get to the most existential-hope scenario for our society?
‍
Anders Sandberg: I think there is a profound question about the technologies we use for coordination. Normally, we do not even think of them as technologies; we might call them politics, newspapers, or Kickstarters. They are actually technologies for aggregating people’s preferences and ideas and turning them into actions of various kinds. Now, these are important, and I think they are fairly under-researched, or rather, researched by the wrong kind of people. No shade on the philosophy-of-politics people or the people actually doing foreign affairs, etc., as they have been doing certain things. However, they have not recognized that we are dealing with technologies. You could actually take engineers with a hacker mindset and start thinking, “Could we make something much better? What if we bolted on some AI? What if we added this other technology?”
‍
Indeed, we know historically that our ability to coordinate has been transformed by many technologies. Writing allowed the coordination of much larger states. The printing press allowed not just larger states but mass literacy, which had not been possible before; suddenly, you could do various forms of parliamentary democracy that were previously impossible. Broadcasting, again, was obviously important for mass culture, etc. The internet enables so much that we have not had the chance to explore it very well, which is, of course, a real problem. The time it would take to fully explore the space of possibilities is so long that we do not have time to wait; we need to be careful while inventing new stuff and still pursue it. So there is a lot of work to be done.
‍
Now, there are particular things important for existential hope. Getting cleaner and cheaper energy is important. Why? It does not change the human condition super strongly, but it does enable us to get a lot more material and avoid scarcity, and you can recycle much better if you have the energy. Similarly, getting a lot of cheap intelligence and automation from AI is powerful as well. And then we have this interesting issue of technologies for modifying the human condition, which are probably the most profound ones. Of course, they are going to be much more debated, and there is a real problem of how to do them in our current civilization.
‍
One reason I am not doing so much right now on human enhancement is that I realized we could get the coordination technologies going better, which may help us much more. I think there are many moral conundrums that cannot be resolved other than by people trying things. Other people say it will not end well; it produces various results, but we learn from that. Sometimes subcultures turn out to have great ideas, and sometimes it is just embarrassing and very bad. However, quite often we can learn. Additionally, the big issue here, besides coordination, is our joint epistemic system. I think one can treat a civilization as a kind of being that perceives the outside world, does information processing about it, and makes decisions, although all of this is distributed across large numbers of groups and individuals. These epistemic systems can be more or less good.
‍
If a culture holds the view that you cannot actually understand anything about the world, it may be very humble before the things it cannot understand, but it will not take the heuristic approach of “Maybe I can make a theory of gravity or mechanics. I can actually build stuff and do things.” If you look at the philosophical underpinnings of successful theories, quite often they start out as extremely shoddy heuristics. Eventually, philosophers and scientists come up with better models, but taking the view that we can understand the world is a powerful assumption that turns out to be good.
‍
I also have a little side project, which is about civilizational virtues. Essentially, individuals have dispositions that we call virtues, which lead to good stuff. If you are a virtue ethicist, you would say that virtue itself is a good thing. If you are more of a consequentialist, you would say it is a disposition that makes you do good stuff on average, but we can leave that to the philosophers. What is interesting, however, is whether there are group virtues. Could you say that one group is behaving well while another is not? The members may be totally nice, but together they are making the wrong decisions and harming people. I think there is a good case to be made that this makes sense. In that case, we could look at the highest level and ask, “Is our civilization virtuous?”
‍
This comes up when thinking about existential risk, for example. Do we have the foresight to see that we are messing things up, and can we control ourselves enough to step away from the brink and avoid it? A civilization that can do that might appropriately be called virtuous. Victor Hugo said, “Peace is the virtue of civilization, and war is its crime.” So, in this little side project, I think we could make a good list of civilizational virtues and the mechanisms we would need to attain them. For example, I think being truth-seeking is an important civilizational virtue. Right now, we are doing philosophy and science, etc., but we could be so much better, so we might want to find ways of destabilizing our self-delusion. We can then see if we can make our civilization more truth-seeking. I’m sorry for the long rant, but I think it is an interesting topic.
‍
Beatrice Erkers: It is very interesting. There is one question that I really want to ask you, so I am going to ask it now. On this podcast, we take a prompt about a eucatastrophe, an event after which the expected value of the world is much higher, and make an AI-generated art piece out of it. If you can think of a really cool art piece of the future that you would like to see, what would it be? If we try to get specific about envisioning a positive scenario of the future, what is your existential hope scenario? I am guessing you have several, but please feel free to share a favorite one.
‍
Anders Sandberg: Yeah, I think you already submitted it, but I quite often bring up my idea of the post-human coral reef. I want humanity to branch out into many different kinds of species: not just different shapes, colors, or life projects, but actually exploring the space of post-human possibilities and finding weird and wonderful ways of existing and coexisting with each other. Not all forms of existence work well together, so we need to find systems that help us live in the same universe. Basically, I want to see this coral reef of human diversity continuing outwards into the cosmos and finding new things.
‍
The one thing about biology that I think is so amazing is that it is so creative. It finds crazy, wonderful, and sometimes horrifying solutions to the problem of how to make an organism that reproduces and functions. It is not uniform but a vast diversity, to the point where, if you replayed evolution, you might get something very different. There are some patterns that recur, of course.
‍
I think the space of minds is bigger than the space of genotypes. We can improve on evolution, of course, by planning ahead and thinking; we can run things on computers, test out ideas before implementing them, make AI, etc. So my existential hope is that we can make diversity grow endlessly. There are critics here saying, “Wait a minute, you might get other bad things growing. What about suffering? What about inequality?” There are many interesting questions about these side constraints, but the core part, this coral reef stretching out, is worth really working hard for and growing into something magnificent.
‍
Beatrice Erkers: That is very interesting. I also feel like a common critique pushed against technological development is that it will make us more homogeneous. That is also one of the most common dystopian future scenarios: this world where we are all the same, dressed in white and silver or something.
‍
Anders Sandberg: Also, that is because it is easy to write. Imagine a world where everyone is thinking in a radically different way. That would be very hard to write, and when you make a screenplay, for example, someone would say, “Wait a minute, now you need different costumes for every person, and we cannot afford that.” The same goes for computer graphics: different graphics for every character is just too much. So we tend to get stories about the dangers of the dystopia where everything is the same. Overall, I think that is partially because it is so easy to describe.
‍
I think there can be diverse dystopias that are rather scary too, but they are harder to describe. This is why, in many stories, people typically do very evil and stupid things: because that is easy to describe. Tragedies where a lot of people with complicated goals fail in complicated ways are much rarer in stories; generally, they are not common in our culture. I do think seeing that there are many options, and that you do not have to be the same as everybody else, is important. We should not cherish diversity just because we care about individuals, but also because it is instrumentally useful. And there may be another value here, one that does not attach to the individual: simply that the universe has many different kinds of things. A universe with more species, forms of life, experiences, etc., is a better universe than one that has only a few kinds, even if they are very good.
‍
Beatrice Erkers: Yes, I am excited about that. It will be a hard one to create, but let us do it!
‍
Anders Sandberg: Well, that is an artistic challenge. Making pictures of easy things is not easy either, actually. For instance, I spent time last night reading about haiku poetry. I wanted to understand a small haiku about a snail slowly climbing Mount Fuji, because I want to use it in a book. I wanted to make sure I understood how it was framed, and yet that was poetry about something very simple. Writing poetry or making art about something complicated is a much harder challenge. Then again, it may be easier, but I doubt it.
‍
Beatrice Erkers: Yeah, it might be an abstract art piece. We are at the hour, but I wanted to ask one last question relating to what we just spoke about. Do you have any recommendations for those listening of media worth looking at? It can be sci-fi, but it can be other things to read or listen to, and it can also be movies.
‍
Anders Sandberg: So I always bring up Olaf Stapledon’s “Star Maker,” because I think that is what drives my vision of why I want to make sure a grand future happens. It is an amazing novel, and it is also very weird by modern standards: it is a novel with no real protagonist except intelligence in the universe. Another book that influenced me a lot, of course, was Hans Moravec’s Mind Children: The Future of Robot and Human Intelligence, which is interesting because it is now fairly obsolete yet still ahead of its time. Finally, the book that really set me on this course was The Anthropic Cosmological Principle by John D. Barrow and Frank J. Tipler. My own book project is in many ways a kind of spiritual sequel.
‍
Basically, I want to take Barrow and Tipler’s work to the next level. They wanted to have this discussion of humanity’s place in the universe. In the final chapter, there is this famous description of Tipler’s Omega Point theory, before he went off the deep end with it. I think that gave me religion as a kid. When I read it, I thought: it could be that intelligent life can take control of the universe, to the extent that we essentially survive forever and learn everything there is to learn, reaching the most complex state possible. That still sends a chill down my spine. I do not think that particular cosmological theory is the correct one for our universe, but it is a goal, something approximate to strive for.
‍
Beatrice Erkers: We should set the ambitions for civilization very high, then. I feel like we have to have you on again; you will be the first guest to come on twice, because I do not think we have exhausted all of these topics yet. Thank you so much for joining, and it will be a great art piece.
‍
Anders Sandberg: Thank you for having me! I will see you in the future.
‍