
Daron Acemoglu | New Paths For Human-Technology Synergy

About the episode

In this episode we welcome renowned academic and author Daron Acemoglu, Institute Professor and economist at MIT.

Drawing upon his rich experiences and his forthcoming book 'Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,' Acemoglu navigates the complex landscape of technological progress. He sheds light on how technology can either sideline or empower humans, depending on how it's harnessed. With a focus on the potential for technological disparity and the importance of steering advancements towards inclusivity, Acemoglu's insights provoke thoughtful reflection.

Whether you're a technologist, an economist, or simply someone intrigued by the future of our society, this episode with Daron Acemoglu will offer you fresh perspectives on the intersection of technology, economics, and power dynamics.


About the Scientist

Kamer Daron Acemoğlu is a Turkish-born American economist who has taught at the Massachusetts Institute of Technology (MIT) since 1993. He is currently the Elizabeth and James Killian Professor of Economics at MIT. He was named Institute Professor in 2019.

...

About the artwork

This artwork was created by Minh Nguyen (@menhguin), using Midjourney, Photoshop and random ADHD-induced art hobbies. Minh is the cofounder of FridaysForFuture Singapore, worked on a peer support subreddit used by 80% of teenagers in Singapore, and has done policy advocacy since high school. After interning at Nonlinear, he's exploring AI Safety (AIS) advocacy, launching an AIS online university and startup ideas that aid AIS research. Minh is an optimist because "if not, work would be way too depressing".

...

About the Xhope scenario

In this hopeful scenario, technology is a tool that helps us, not replaces us. It supports our diverse skills - from gardening to computer design - and makes them even better, without taking away their value.

Instead of seeing humans as error-prone and needing to be corrected, we view ourselves as resourceful beings with unique talents. Technology is there to help us be more productive and contribute more to society, not to sideline us.

Our society values all contributions, not just those from big tech companies. We use technology to protect our democracy and ensure everyone's voice is heard in important decisions about our future.

This future isn't about replacing people with machines, but about using machines to enhance our capabilities. It's about a fair, inclusive, and prosperous society where technology serves us, not the other way around.

Transcript

Beatrice Erkers: Thank you so much, Daron Acemoglu, for joining us! Daron is an Institute Professor at MIT. I know he is also an elected fellow of many academies: the National Academy of Sciences, the American Philosophical Society, the British Academy, the Turkish Academy of Sciences, and, I think, the European Economic Association.

Daron Acemoglu: Yes, several, but nothing in Sweden. No Sweden societies.

Beatrice Erkers: Unfortunately not. Maybe the Nobel committee will invite you here. He is also the author of several books. The one I know best is Why Nations Fail, and now he has a new book coming out in a few weeks. I have yet to read it, but I am very curious to. It is called Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity. The book looks at how much of our progress depends on the choices we make with technologies. It is a topic that is very dear to us at Foresight Institute; with the Existential Hope project, this is something we think about a lot. I am very excited to talk to you. A very warm welcome.

Daron Acemoglu: Thank you. Thank you, Beatrice. Thank you, everybody, for being here. It is my honor to be present here with the Existential Hope podcast. 

Beatrice Erkers: Well, let’s get started. What are you working on? What got you started on this track? Tell us.

Daron Acemoglu: Well, I will give you a sort of circuitous answer. I was drawn to economics coming of age in Turkey in the 1980s. I was curious why some countries are able to generate prosperity, peace, and employment for their populations while others aren't, and about the relationship between that and what we might call power politics: who holds social and political power in a society, democracy versus dictatorship. Turkey was under a military regime at the time, and that is what drew me to economics. I soon discovered that economics was not, at first, about these questions, but I still thought it was fun, so I pursued it. I then came back to these issues, and this is what led me to write, for example, the book Why Nations Fail with James A. Robinson and the research that underpins it.

However, on the side, I was still very much interested in technology issues, and I had some academic work on those. Recently, those two strands started coming together because, with advances in automation, AI, digital technologies, and social media, it became quite apparent to everybody, I think, and certainly to me, that these technologies were transforming our society, creating huge winners and huge losers, and redistributing power in society in a way that Western nations have not experienced for quite a while. I also thought that, at least in the United States, where I am, the attitude was too much colored by a form of techno-optimism. Not that people were naive and did not see some of the costs that new technologies could create or did create, but they were optimistic that somehow we would work through them and find ways of building better societies with these powerful tools. I thought that history did not teach us that. History teaches that we have been sometimes successful and sometimes unsuccessful. We cannot take success for granted, especially when we are dealing with such different, transformative, and new technologies. The new book with Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, came out of that.

A lot of things that I am doing right now are related. I am working more on the labor market consequences, on how we can create better technologies for workers, looking at how we can make democracy work better in the digital age, and also, more broadly, at controlling AI. How do we do that? I think these are all topics that are very close to the Existential Hope agenda because, the way I would say it in one last sentence, I am very worried about our future. However, I am also hopeful that if we do the right things, there are amazing tools in our hands with which we can build a better future. But a better future will have to start with humans. It has to start with appreciating the diverse skills and the diverse voices that humans have. Humans should not be sidelined. I think that gives me the agenda: we need to redirect technological change in a way that prioritizes humans. That is not the path we are on now, but I believe very much that it is a feasible path.

Beatrice Erkers: Yeah, thank you so much for sharing that. I am very curious to hear more about what you think we can do to make this go well, basically, because it is a challenge. Could you maybe share your perspective on what we can do to ensure as much as possible that technological advancements can lead to more prosperity rather than disparity?

Daron Acemoglu: Well, every new technology creates dangers and challenges. I mean, we are quite familiar with how we have taken some of the most inspiring technologies and turned them into horrible things. You know, Fritz Haber and Carl Bosch worked out a revolutionary technology for turning atmospheric nitrogen into ammonia, which is absolutely critical for our ability to feed 8 billion or 9 billion people today. We would not be able to do one-tenth of that without synthetic fertilizers. But then, immediately, they turned around and created more powerful bombs to kill hundreds of millions of people. So what do we do with our knowledge? We can create bombs. We can create ways of generating more inequality. We can come up with new ways of spreading misinformation or destroying democracy, or we can try to build a better society.

So I am not one of those who is worried about superintelligent AI coming and destroying humanity. Of course, we cannot rule that out, but that is not my main concern. My main concern is more mundane uses of technology in ways that create more inequality, both in the economic and political spheres. So, what can we do to build a better future? I find it useful to think of the answer at two levels: one is the aspirational level, and the other is the level of levers. The aspirations are what we are trying to achieve, and the levers are the specific policies we can use in order to achieve those aspirations. Often, when we are dealing with such difficult problems, the levers are a little shaky. When people first heard about climate change and greenhouse gas emissions, in the 1970s I think it was, it would not have been feasible to come up with specific levers for dealing with them. However, people who were foresighted already understood the aspirations: that we need different ways of generating energy and need to combat global warming.

So let me start with the aspirations, and then we can come to the levers. For aspirations, we want technologies that empower humans with diverse skills and increase their productivity. What do I mean by that? Think of what you can do with new digital technologies and what we have done with them. We have done a lot of automation, for example, but that does not increase human productivity; it sidelines humans. Some amount of automation is useful, and we have to do it; it has been part of our history, but it is not in and of itself empowering humans. We can create digital platforms such as Facebook or TikTok, to which our attention becomes glued and which put us in the hands of somebody who serves us a diet of entertainment rather than enabling us to act as citizens. Again, that is not empowering humans. Empowering humans with diverse backgrounds and diverse voices means that we as citizens become informed about important social choices. What should we do with AI? What should we do about globalization and the tensions between China and the U.S.? I think citizens should be informed about all of it.

However, the current use of digital technologies is not doing that. The current use of digital technologies is not increasing human productivity. So that is the redirection of technological change that I think is feasible. That is my aspiration: we use digital technologies to increase human productivity. Teachers can be much more productive. Do not eliminate teachers but make them more productive. Nurses can become much more capable; they can do diagnoses, write prescriptions, and provide care much more effectively than they do at the moment in every country, especially the United States, but that requires new tools. We can make electricians and manual workers more productive, and we can make citizens better able to understand issues and communicate in a more democratic environment, using a pluralistic structure for participating in democratic discourse. Those are aspirations. That is not where we are right now, but those aspirations are meaningful.

One way you can see that they are meaningful is that at many points in the early history of computers, the internet, and social media, people had exactly those aspirations. Early pioneers of computers in the 1960s and 70s, at MIT, Berkeley, and in San Francisco, thought that computers would empower workers and citizens and would be the end of large centralized corporations such as IBM. That is not what happened. IBM became much more powerful. Microsoft became much more powerful than IBM. We have centralized information, and we have centralized production, but that aspiration, I think, was not crazy. In the early days of social media and the internet, people thought this was going to create a more democratic discourse. Again, that is not what happened. But those ideas were not crazy. So we can go back to them. And that is where the levers come in; I think that is much more open to debate: which levers can be used.

Beatrice Erkers: Do you want to suggest or guide us through any of the current levers?

Daron Acemoglu: Yeah, I mean, I think there is no silver bullet as far as I am concerned. But what we need to do is create better regulations and induce large tech companies to change their business model. What I mean by that is that, although different companies do different things, the business model is centered on automation. You supply tools to large companies so that they can cut labor costs, and it is based on collecting a lot of data and using digital advertising. It is a manipulative system. That is where all the profits are at the moment. That is where the majority of Facebook's and Google's profits come from, and Microsoft is going in that direction with Bing and GPT.

However, that is not the only business model. The question is, how can we encourage these companies to adopt a different business model? How can we encourage them to also prioritize creating tools for workers, not just tools for managers to monitor workers or automate work? Can we create new tasks and new human capabilities out of these amazing tools we have available? I mean, the techno-optimists have one thing right: we are in the midst of the most mind-boggling advances in technology that we have seen for several generations. The question is, what do we do with these tools? So I think that is where we have to use the levers of regulation in order to create better practices.

Beatrice Erkers: That is very interesting. I guess the main question right now that sort of seeps through all of this is AI and its implications. I often find when I talk to people that they are quite hesitant about regulation of any sort when it comes to technology. Do you see a way past that? How do we convince them that this is the way forward?

Daron Acemoglu: Well, I think you are completely right, for several reasons. We have to understand why it is that people are averse to regulation. We also have to articulate, and I believe that is actually the easy part, why regulation is key. So let me start with that last question. Take social media. I think we have seen the really ugly face of what can happen on social media: people becoming addicted, mental health problems, and extremists multiplying, forming very powerful communities that spread misinformation and even go viral. All of this was at first defended by large companies saying that information wants to be free, that we do not want to interfere with people's communication, and that this is the natural next step of technology. Now I think almost every executive from the big tech companies, at least in public pronouncements or in my conversations, says yes, of course, we need regulation.

But it is too late. We have already created a delay and a disadvantage for regulatory tools, and the ecosystem has become much worse than it should have been. What is true for social media is doubly true for AI. Generative AI and large language models can be used to create more centralized information, more control, more monitoring, and more deepfakes and misinformation; they are endlessly powerful ways of doing good things and doing bad things, I think. The same applies when you start using these tools in the production process as well. Is it okay to fire a large number of workers without knowing how generative AI is going to work? Again, there are questions here that we want to debate before we just rush ahead and say, oh, regulation can come later. So why are we still so averse to regulation? I think there are three reasons for it.

One is that tech companies have become very powerful, have spent large amounts of money, and have used their social influence to lobby against regulation. The second is competition with China. There is some truth to it, but it is also a tricky argument to say that you cannot regulate AI because doing so creates a disadvantage for U.S. or European industry and the Chinese will surge ahead. The third is that we just do not know how to regulate: these technologies are complex, and bureaucrats do not understand them. I think all three of these are problematic. The first one is obviously problematic. It is exactly what happened with the tobacco companies opposing the regulation of smoking, or the fossil fuel producers saying no, there is no climate change, and you cannot interfere with energy markets. These are people with billions of dollars on the line, and they do not want government interference.

The China argument, yes, there is some element of truth to it, but we are seeing that China is nowhere to be seen in large language models. So we could have slowed down large language models without creating an advantage for China. In fact, I think better regulation in the West, if it happens, would also push Chinese AI research onto a better path. Then finally, yes, it is true that today we do not have the human capital in the European or American bureaucracies to create smart regulation, but that is the outcome of 30 or 40 years of neglect. The sense of mission has been destroyed in the bureaucracy, the talented people have all been offered three, four, or five times higher salaries in the private sector, and regulation has been maligned. Ronald Reagan, who was a social democrat, really, compared to people on the right today, used to go around saying that the ten scariest words in the English language are "I'm from the government, and I'm here to help." When you have that ideology, it really demoralizes any type of civil service, and you erode your capacity to undertake regulation. I think we need to build that capacity.

Beatrice Erkers: Yeah, I am based in Europe, so I hear a lot about the EU AI Act right now. When I speak to people in the U.S. about it, they seem to think it will only put Europe behind, and Europe is going to lose. In Europe, I think people believe that is probably true, but they also believe it may slow down progress elsewhere, just so that we can have some time to get our ducks in a row a bit more. I want us to talk about other things as well, but the last thing I want to ask about regulation is whether you think we can do this in time for transformative AI. I do not know what your current timeline is, but do we have time to regulate?

Daron Acemoglu: Of course we do. I mean, we can create more time. As long as we are alive as humans, and as resourceful as ever, we can come up with solutions. I was a signatory to the open letter that asked for a six-month pause in the training of large language models precisely because I think we need more time to plan the regulation. I was also impressed by the people who came together at the early stages as signatories: a broad coalition, which is, I think, what we need and what we need to build on for thinking about the next stage of regulation. So yes, time is running out, but we have no option but to act now.

It is a very difficult problem. I am glad, Beatrice, that you mentioned your European experience, because it allows us to demonstrate the good and the bad. GDPR, for example, was a brilliant idea and completely necessary. It addressed a very important problem, and the Europeans deserve full credit; the European Commission deserves full credit for actually being ahead of the curve. But it completely backfired. We are probably worse off because of GDPR. I am not going to blame the European bureaucrats for that, because I do not think anybody could have done better. It is a difficult problem, and they were the first. So you make mistakes, you learn, and you build more human capital. That is how you have to go. What I do blame the Europeans for is still not revising GDPR, despite all of the apparent problems.

Beatrice Erkers: Yeah, with all new technologies, there definitely seems to be a trial-and-error process. There is a quote by Max Tegmark related to such worries that I have always liked: "First, we discovered fire, and our houses burned down, but we learned to put it out and developed fire controls. Then we all got cars, and that was dangerous, and now we have seatbelts." So yeah, there is always this process. It would be interesting to hear if you have any examples from history where you think we have successfully harnessed technology for good. I think you mentioned that, historically, you think we have not been great at it.

Daron Acemoglu: I think historically it has been a struggle. Our past is filled with examples where we have not done great, but there are also many episodes where we stumbled onto something better. Let me give three examples of partial success snatched from the jaws of early defeat and disaster. The first is the Industrial Revolution. Today, we owe everything that we have in terms of health, comfort, and high living standards to the process that started in the Industrial Revolution. However, the first 100 years of that process were horrible. Poverty deepened, real wages stagnated, working conditions worsened, and diseases became much worse. In British cities, life expectancy at birth fell to something like the low 30s because so many people were dying in disease-infested dwellings. Living conditions worsened, and power became much more unequal. Companies, bosses, and the existing elite became richer and socially more powerful. People became real second-class citizens in their own country.

But then the process of political and bureaucratic reform started. That is the stage at which the government started to clean up its act by hiring people who were experts. They worked to clean up the cities. They started to improve the health system and build education systems and sewage systems. Hospitals got built, and technology became redirected: instead of just trying to monitor and sideline labor, you started having new technologies, industries, and occupations that increased worker productivity. Unions started becoming organized and legal, whereas in the early 1800s being a unionist had been punishable by jail. All of these things leveled the playing field in a more democratic, equal, and technologically dynamic country. This did not happen with a roadmap. Nobody said, here is the way we can reshape technology, but there were many movements that contributed to it: for example, the Chartists, who campaigned in the 1840s for universal voting rights, and the many reformers who tried to improve working conditions, living conditions, and health conditions.

The second example is nuclear power. Nuclear power was a very powerful technology, and today we could use a lot more of it in our efforts to reduce climate change, in my opinion. However, as soon as that technology became available, its first application was to build nuclear weapons, and most leading physicists of the era worked on that. It took a while for physicists to realize it was not a good thing for them to do. A code of conduct then developed, and many refused to work on nuclear weapons. They were a conduit for the broader population to become involved in and informed about nuclear power and weapons. The movements against nuclear weapons and for more peaceful uses of nuclear technology are, I think, all thanks to those efforts at the responsible use of scientific knowledge. There is also the example of radio. Prior to its regulation, it was a potent propaganda tool in the hands of Nazis and extremists. Then both the population and regulations adjusted: people started understanding that not everything you hear on the radio is true, and regulators started discussing how airwaves should be allocated, what is allowed, what is libel, and who has access to these radio resources. Again, it was a process. We have developed ways of using technologies more responsibly on a number of occasions, and we can do that again.

Beatrice Erkers: Yeah, I believe that in the book you have a sort of manifesto. Is there anything you can share from that manifesto?

Daron Acemoglu: I wouldn't quite call it a manifesto; I would say it is an aspiration. The aspiration is to redirect technology so as to create more worker-friendly, human-friendly, and democracy-friendly technologies. Then we suggest a number of steps, each of which is partial. For instance, subsidies for more productive uses and development of technology, and changes in the organization of the tech industry, so that there is less domination by a few business models and companies. We also call for a digital advertising tax, because I think some of the worst, most anti-democratic uses of data collection and technology are fueled by individualized, targeted ads, which bring out the worst uses of these technologies. And we call for limits on data: the ability of tech companies to sweep up data without paying for it, asking permission, or being regulated is contributing to the wrong type of ecosystem and the wrong development and use of these technologies.

Beatrice Erkers: Are there any current initiatives or movements that you think align with this vision?

Daron Acemoglu: I think there is nothing yet that I would call a powerful movement, but on each one of them, there are things happening. Another one that we discuss in the book, which I think is important, albeit not a silver bullet, is that we have to think about breaking up the largest tech companies. For instance, the Biden administration's Federal Trade Commission and Department of Justice are considering whether antitrust laws have been violated in some of the mergers. That is one sort of movement where change can happen. There have been a number of people calling for digital taxes. That has not become a reality yet, but I think there will be more calls for it. The computer scientist Jaron Lanier was a pioneer in suggesting that data should be treated as labor and that there should be data unions. I think there is a movement forming around him, which is very encouraging. So yes, there is some grumbling in many different spheres, but let's see where that takes us.

Beatrice Erkers: I believe in the book Why Nations Fail, you speak of inclusive institutions, which lead to more successful nations. Do you also have a thought on how we can create a more inclusive, innovative ecosystem around AI technologies?

Daron Acemoglu: Absolutely. I think that is a great question. I believe we need inclusive institutions, and we need an inclusive ecosystem. An inclusive ecosystem for innovation, to me, means many different diverse voices and different approaches. If you look at the successes of innovation in the past, it has flourished when people have used different approaches and different ideas. When a centralized solution is imposed on innovation, it does not work very well. In the past, the major concern people had about that was governments: governments telling you this technology is good and this technology is bad could be a problem. But why is it any different if you have a very large company telling you this is the technology we are going to go with, and this is the way we are going to structure that technology? That is part of the antitrust drive. I care about antitrust not so much because of prices but because of the innovation process.

I think all of that is going to get worse with the generative AI movement, because the chances are that we are going to have a couple of base models, such as GPT-4, and it is going to be the companies that control the base models that make the most profits. They are going to shape the future of this technology. Applications are going to be layered on top of these base models. In that case, the innovation ecosystem will become even less inclusive. This is not an accident, by the way. If you look at the U.S., and the same is true to a lesser extent in Europe, the large tech companies, and large companies more broadly, have been extremely active over the last 20-25 years in acquiring their competitors. Google, Facebook, and Microsoft have acquired dozens of companies that could have turned into competitors. Some of them were then incorporated into their products, while others were just left aside. That process is a conscious effort to create a less inclusive innovation system.

Beatrice Erkers: In terms of beginning this process of trying to make it more inclusive, what roles should governments, corporations, and individuals play in this democratization?

Daron Acemoglu: That is a great question. Again, I will give a high-level answer because I do not have more detailed theories or blueprints. First of all, we need to change the narrative, and that is where my aspirations come in. I think the narrative in the U.S. is still that there are geniuses who are designing wonderful technologies, and we should respect them and let them do whatever they want because that is our future. However, I think that is the wrong narrative. Whenever you leave technology in the control of a few people, it is disastrous, and it has been disastrous in history. Technologies often create winners and losers. We need to create a bigger debate around which technologies we want and how we are going to redirect them. We need to think about which technologies are socially useful and which ones are not, which technologies create wealth just for a few entrepreneurs, and which ones create benefits for workers and citizens.

Second, we need to create new institutions. This is where technology could help, but it is actually a really political process. What I mean is that we need the right structure for countervailing voices to be heard. We cannot have a world in which, for every technology that Google invents, only Google is responsible for deciding how it should be used because they are its inventors. Regulation is about creating incentives for companies: how they use technology, which technologies they market, which they take in a different direction, and which of the many open paths we actually pursue. That means we need a new regulatory environment, and that cannot just be something the government does from the top; society's input is necessary. That is part of the changing narrative, as a broader conversation needs to be there. Then we need very specific policies, like the digital ad taxes I mentioned. How do we implement that? What is the right level of a digital ad tax? It is a technocratic question, so we need to develop technocratic expertise. I gave the example of GDPR, the General Data Protection Regulation. Why did that go wrong? The experts who designed it did not know enough, and could not have known enough, about how large tech companies would develop and respond to it. So we need to build better expertise in order to be able to do that sort of regulation.

Beatrice Erkers: Yeah. I think the narrative thread is one that I want to pull out a bit more because that fits very much into this existential hope theme. I want to ask you a few questions related to that.

Daron Acemoglu: Of course.

Beatrice Erkers: The first thing I want to ask is whether or not you would say that you are optimistic about the future. If so, why? If not, why?

Daron Acemoglu: Well, that is always a very difficult question for me because my whole belief about technology, institutions, and the future is that there is not one preordained path. Technology is highly malleable, and we can develop it in many, many different ways, both good and bad. With institutions, we can build better institutions, we can build worse institutions, and we can build a better future or a worse future. So, yes, I am hopeful we have the capacity to build a much better future. However, I am very worried that on our current path, we are heading toward a very bad future. 

So the question then becomes, how can we get out of this bad future? And how likely are we to get out? I think it is not very likely, in the sense that it is not an automatic process. We really need to pool our resources as humans and as world citizens in order to tackle these questions. It is not easy, especially in today's polarized environment, when geopolitical risks are high and inequality and polarization have reached alarming proportions in most countries around the world, or in most industrialized countries at the very least. So, we are not starting from a position that makes me think we are going to be able to do this easily. No, it is not easy, but I am still hopeful.

Beatrice Erkers: Yeah, thank you. I mean, in general, it seems hard for people to envision positive futures, as opposed to the more dystopian visions, which tend to be easier to imagine. I think part of that is that we tend to see more ways that things could go wrong. So, I interpret your answer to be that you think we should change this, but do you have any thoughts on how?

Daron Acemoglu: Well, those were the issues that I was trying to touch upon earlier on. We need to change the narrative, we need to articulate aspirations, we need to have a debate about them, and we need to develop an alternative narrative as opposed to just letting a handful of very smart people and very powerful companies shape the future. Then we need to build the institutions to be able to do that and to support that. Then we need to build policies that can create more micro, fine-tuned incentives for achieving those objectives. None of that is easy, but I think all of them are feasible, as similar things have been done before.

Beatrice Erkers: Well, maybe then, in terms of coming up with a narrative, would you be able to share an existential hope vision that you yourself have? 

Daron Acemoglu: Yes, I think an existential hope future is the following. There are two very opposed views of humanity. One, which I think is never admitted but is implicitly held by many people, is that humans are fallible, full of mistakes and imperfections, and the more we can correct or sideline them, the better. That means automating, taking the initiative away from humans, automating decisions, automating production processes, spoon-feeding people because they are going to make mistakes, and letting them have less of a voice about the future of technology and in government, because democracy doesn't work.

The alternative is that humans are amazingly resourceful and have diverse skills, each of which is very valuable: the skills of the gardener, the skills of the manual craftsman, the skills of the construction worker, and the skills of the computer designer. All of those are hugely valuable. My hope for the future is that we build technologies that elevate these skills without devaluing any of them, and that make people more capable of developing and using them. People can be more productive and more capable with technology. We can build a society that values the contributions of these diverse skills, not just whoever can build the largest company, scoop up the most data, and make billions of dollars. That is the scary scenario for me, but I think the alternative is a very hopeful and feasible one.

Beatrice Erkers: It seems that the existential hope vision is one of diversity. One thing that would be interesting, and I do not know if you have thought about this at all, is that we may be heading towards more cooperation between humans and AI agents, and perhaps non-human sentience that we will have to cooperate with. Do you see that as a positive part of the vision?

Daron Acemoglu: No, I don't see that as a positive part, nor am I really that hopeful that we are going to create sentient machines. I do not think the current approach in the field is really capable of generating true sentience. It is also a tangent as far as I am concerned. I wrote an article two or three years ago in the Washington Post saying that the AI we should fear is not in the future; it is here. I still subscribe to that, even though there have been many developments in AI. What we should fear is not superintelligent, sentient AI or humanoid robots, but the bad uses of AI that we are very capable of right now, with our given technologies. So in that sense, sentience is not a critical factor in my thinking.

Beatrice Erkers: One of the things this relates to is the concept of existential risk, with unaligned AGI as an existential risk. At the same time, the workforce is going to change drastically with AI and the automation you discussed. How would you weigh those changes, which I think we know are coming, against the more speculative, potentially existential risks?

Daron Acemoglu: Well, if I believed there was a real existential risk of AGI running out of control and then enslaving or destroying humanity, of course I would be very worried. But I do not think that is a major issue. And because I do not think it is a major issue, emphasizing it too much, I think, walks into the narrative that these AI technologies are amazing and the only thing we fear is that they are too capable. Rather, I think the conversation should be about how we are using our imperfect technologies, which are quite powerful at doing certain things, even if they cannot be sentient.

We start Power and Progress by quoting from H.G. Wells' The Time Machine. Wells understood, I presume, something that we have forgotten completely: that technology is not only a way for humans to control nature. Of course, we use technology to control nature, but technology is often a way for some humans to control other humans. It is that part that I worry about. The AGI discussion says, oh, what about technology controlling humans? Yes, that could happen, perhaps at some future date. But that is much less real and much less present than humans using technology to control other humans.

Beatrice Erkers: Yeah, if we go back to this positive existential hope vision that you shared, is there any particular breakthrough in the next five years that would tell you we are on track to get to this positive scenario?

Daron Acemoglu: Well, I think we first have to rebound from where we are. We have to show, in much of the West especially, that democracy works. If you look at the data, the current young generation has lost confidence and trust in democratic institutions. I think democracy is the only way we can elevate the human dignity, human skills, and human judgment that I have talked about, and the only way we can control the future of technology. Of course, you could have the Chinese Communist Party control the future of technology, but that is worse than, or as bad as, Google and Facebook controlling it. It is as centralized and as biased in what it tries to do with technology.

It is a question of different types of control, not of control versus no control. So we need democracy to work; otherwise, we have no options. The first thing we need to show ourselves and the world is that democracy can be made to work. Look at the situation in the United States. Look at the situation in Europe. We are having a really hard time with democracy, and the young generation is responding to it. They are saying, "Oh, well, I am not so sure democracy is such a great system anymore." That is the first thing we have to do.

Beatrice Erkers: Safeguarding democracy?

Daron Acemoglu: Yeah. Easier said than done, right?

Beatrice Erkers: Yeah, definitely. I want to ask one question very similar to the one I asked before. In this podcast, we always ask for an example of a eucatastrophe, which is the opposite of a catastrophe. It is an event where after it has happened, the expected value of the world is much higher. Do you have an example of such a eucatastrophe that you wish to see other than what you just said? 

Daron Acemoglu: No, I think those would be them. Safeguarding democracy and showing that we can sort of turn these technologies to help workers and citizens. Look, I mean, if you go back in the past, there are amazing events like antibiotics that have completely changed some subset of tasks that we perform, such as saving people who have infections and extending lives. I think there are many more of those that we can create with our new tools, but we first need to redirect those tools for that purpose.

Beatrice Erkers: Yeah. I am going to take some questions from the audience. 

Daron Acemoglu: That would be great. I would love to.

Beatrice Erkers: I have one from M. M, do you want to state your question?

M: Sure. Hello, let me just pull it up again. Yes. One of the things I was wondering about at the beginning is this point: a lot of the technology that people come up with, I would claim, has only a modest social impact. If you go to a hospital, there are all kinds of medical procedures that involve a lot of fancy technology, but in terms of social impact, it might just be an improved treatment for a rare condition. When it comes to technology, what are the things to look out for? What do you think about technologies that really have the potential for this kind of massive social impact?

Daron Acemoglu: It is a great question. In fact, I think there are very few examples of technologies where just one product or one technique is so transformative, even antibiotics. Even without antibiotics, with the thousand other health improvements, we would be okay. The way I think about it is in terms of platforms. We create platforms for generating families of techniques, families of new products, and families of new practices. Together, they really lift us up. So, what are those platforms? Or, what are these general-purpose technologies?

Early industrial growth in the 20th century, for example, was fueled by creating a template, exemplified by the Ford Motor factories. You introduce more machinery, high-powered energy, and centralized electricity. At the same time, you bring in engineers, maintenance workers, new tasks, better record keeping, and better product design. That was the full ecosystem that then spread throughout manufacturing and undergirded a lot of the improvements we had until the 1980s. So, we need to create the equivalent of that with digital technologies.

M: Thanks.

Beatrice Erkers: Thank you. I also have a question from David. Do you want to ask your question, David?

David: Yeah. Thank you so much for speaking. This has been great. You have said a number of times that there are risks you are not concerned about, or you do not think are plausible, such as sentient AI or existential risks from AI. Are you saying that you think those things are fundamentally impossible or that they are unlikely in this decade or century? Is this a time question or a fundamental impossibility question?

Daron Acemoglu: Great question. Of course, we cannot know what the future will hold. But on the current architecture of digital technologies and AI, and I think many AI scientists will disagree with me on this, I do not see anything approaching artificial general intelligence. I do not see the current architecture changing in the next decade or beyond, and consequently, I do not see anything that could approach artificial general intelligence for, I would say, 30-40 years at least.

However, there is another, deeper reason why I am not worried about that per se, and this is how I would express it. Here is a claim, and you may disagree with it: any technology that could be proto-AGI could be misused horribly in the hands of the humans who control it well before we get to AGI. If a sentient superintelligence controls and destroys humanity, that is horrible, but if Mr. X controls a much more rudimentary version of it and he destroys humanity, that is just as bad as far as I am concerned. I am much more worried about the latter, for that reason.

David: Thank you.

Daron Acemoglu: Thank you, David.

Beatrice Erkers: Thank you. I will let Brad ask this last question.

Brad: In the area of speech regulation and the regulation of social media, it is commonly felt that one of the main reasons liberal democracies forbid censorship, in the U.S. most of all perhaps, is not that speech is not harmful or that speech systems cannot cause harm. Instead, it is that we have never found a way to give anyone the keys to control them that did not end up causing more harm. I apologize, but sometimes you are a little bit facile in suggesting that we can just find these good regulations, when that is actually the key challenge. How do you address that?

Daron Acemoglu: I completely agree. I am very uncomfortable giving the keys to anybody. But the way we have dealt with some of those problems in the past is that when there is a broad and well-tested consensus on certain things, we have implemented them as regulations. For example, the German Federal Republic created a broad consensus that Nazism and Nazi speech were not to be allowed on the airwaves. That is the sort of bottom-up societal process that is necessary for regulating speech. It is not an easy process, but I completely agree with you: I would not be comfortable with a president or a political party deciding what is allowed speech and what is not. However, I will add that right now we are essentially delegating that power to large tech companies.

For example, ChatGPT was extensively trained using reinforcement learning before it was launched, because they were afraid it would say certain things that are politically incorrect. A bunch of engineers in a small company decided what was allowed speech and what was not, and by using state-of-the-art reinforcement learning techniques, they discouraged ChatGPT from saying those things. That is not a lack of regulation. That is just backroom regulation that we are not monitoring. We are not getting a conversation, we are not getting a consensus, and we are not seeing democratic means exercised. So I do not see that as better. To say we are not going to regulate, we are going to allow all free speech: that is not what we are doing right now as the default either.

Brad: Do you seriously suggest that one could have applied a regulatory regime to something like ChatGPT before it was released in the way it was? Is there any path aside from OpenAI deciding how they want to release the early version of the product?

Daron Acemoglu: Well, there is another. I think the problem in the case of ChatGPT was created by its architecture, because it is trained on the entire, or close to the entire, corpus of the internet, especially social media. Its training data was full of very low-quality information. Imagine being sentient, with your sentient intelligence coming from Reddit: it is going to contain horrible information. So we have created one problem, and then we try to solve it through another imperfect means that empowers other people to decide what is allowed and what is not. What I am trying to say is that there is no alternative to a democratic debate about these issues. Delegating it to a handful of companies may appear, from one point of view, to be allowing free speech, but it is very far from free speech. Thank you, Brad.

Beatrice Erkers: Thank you, Daron. If you have one more minute, I want to ask you some rapid questions before we end.

Daron Acemoglu: Go for it.

Beatrice Erkers: The first thing is, what would you recommend to someone who is new and wants to work on contributing to a positive future? What should they specialize in?

Daron Acemoglu: Oh, my perspective on diverse human skills is that everybody should specialize in whatever they are passionate about. But I think we should all be well informed. We should all read more broadly about how we can build a better society and what the alternative paths around us are, so that we have a broader perspective. We do not want everybody to be the same. We want everybody to have their own diverse skills, but we want a common understanding of what our sensibilities are.

Beatrice Erkers: That makes sense. What resources would you recommend for people? Would you say there is anything you find particularly interesting that they should read, listen to, or watch?

Daron Acemoglu: There are so many amazing books that question our understanding and our cherished notions. I would recommend, for example, Michael Sandel's The Tyranny of Merit as a corrective to the idea that if we build a meritocracy and it generates a lot of inequality, we have to live with it. I think we have to question those things. We have to question the future of technology. We are all very excited, rightly so, about the new advances in generative AI, but there are many counter-voices that we have to listen to. Generally, there are just a lot of different perspectives, and we have to be open to them.

Beatrice Erkers: Thank you. The final question I will ask you is, what is the best advice that you have ever received?

Daron Acemoglu: Oh, I don’t know. I don't know what the best advice is. It is not particularly advice that I received directly, but the general encouragement that I got from my parents and some of my friends to just pursue my passions was important. I think if you are too strategic about what you want to study and what you want to work on, you will never realize your dreams. You have to really pursue what you are passionate about.

Beatrice Erkers: Thank you so much, Daron. Thank you for coming, and I am excited to read your book Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.

Daron Acemoglu: Thank you, Beatrice. Thank you, everybody, for being here.

