Emilia Javorsky | The Future of AI, Bioengineering, and Human Empathy

About the episode

Emilia Javorsky, MD, MPH, is the Director of the Futures Program at the Future of Life Institute. A physician-scientist and entrepreneur, she specializes in the development of medical technologies and is a mentor at Harvard's Wyss Institute. Recognized as a Forbes 30 Under 30 and a Global Shaper by the World Economic Forum, Javorsky is committed to guiding emerging technologies towards ethical, safe, and beneficial applications. With a strong foundation in AI, biotech, and nuclear risk management, she champions the responsible evolution of transformative tech for humanity's advancement.

Session Summary

Emilia envisions a future where our innate talents join forces with artificial intelligence to tackle global challenges. This isn't merely about the speed of AI advancements, but about how well they harmonise with human goals. Emilia stresses the importance of creating positive narratives, ones that integrate AI with genuine human empathy, pointing towards a world where technology complements, rather than replaces, our connections. To get there, however, she emphasises the need for thoughtful regulation that moves beyond purely theoretical approaches, and promotes an inclusive, multi-stakeholder process. Emilia sees this pathway as a means to harness AI's capabilities fully. Looking ahead, she believes AI can improve human health, pioneer bioengineering, and aid in space exploration, whilst enhancing human connection. For her, it’s more than just mitigating risk – it's about unlocking the vast potential that AI and human collaboration promise.



About the artpiece

Philipp Lenssen from Germany has been exploring the intersection of technology and art all his life. He developed the sandbox universe Manyland and wrote a technology blog for seven years. He is currently working on new daily pictures at Instagram.com/PhilippLenssen.


Transcript

Xhope Special with Emilia Javorsky

Emilia Javorsky's Background and Focus

  • Emilia Javorsky is a physician-scientist and the Director of the Futures Program at the Future of Life Institute.
  • She has a unique technical background, having worked on projects related to energy-based medical devices, skin health, and the nonprofit Scientists Against Inhumane Weapons.
  • She has authored numerous peer-reviewed publications and has a deep understanding of the risks and implications of emerging technologies.
  • Javorsky's focus at the Future of Life Institute includes AI, biotech, nuclear risk, and climate change, with the aim of steering transformative technology towards benefiting life and reducing large-scale risks.
  • She is involved in initiatives like the SLAMR conferences and the AI World Building Contest.
  • Javorsky is passionate about the long-term future and the need for safe development of technologies for the betterment of humanity.

"I've been operating in a highly regulated field that is regulated for very good reasons. When we develop technology, we need to make sure we're developing technology that delivers more benefit than risk."

Entry into the Field and Motivation

  • Javorsky's interest in the safe development of technology stemmed from her background as a physician-scientist and her experience working in highly regulated biomedical research.
  • She noticed how rapidly power was being engineered into disciplines like AI, nuclear weaponry, and synthetic biology, which highlighted the need for thorough risk mitigation and ethical consideration.
  • Her concern about the potential risks of advancing technologies and the lack of sufficient risk mitigation strategies led her to collaborate with the Future of Life Institute (FLI).
  • Javorsky's partnership with FLI began around five years ago, and since then, she has engaged in a wide range of projects addressing autonomous weapons, nuclear security, AI safety, and promoting positive futures.

"I really felt like we're engineering power a little bit faster than wisdom, and that sort of thing kept me up at night. That's what led me to FLI."

Working with the Future of Life Institute

  • Javorsky's work at the Future of Life Institute involves a dynamic and ever-evolving landscape of technological advancements and risks.
  • Her technical training in the sciences and her experience as an entrepreneur have been beneficial in navigating the fast-paced nature of the job.
  • She emphasizes the importance of making early decisions to set up technologies for long-term success while mitigating risks and cultivating benefits.
  • Javorsky also highlights the multifaceted nature of her work, which includes considerations of entrepreneurship, policy-making, and shaping societal narratives.
  • She believes that for individuals interested in this field, taking initiative and being adaptable are essential qualities.

"The job is whatever is needed at the moment, and that is highly dynamic and changes day to day."

The need for talent in addressing global challenges

  • There is a lot of room for talented individuals to contribute to addressing global challenges.
  • The needs are great and consequential for the future of human civilization.
  • Regardless of one's background, there is a way to positively contribute to moving the collective field forward.

"I think what we're seeing is there is so much room and we need so much more talent engaging and working on these issues use because the needs are just so great and also so consequential for where we head as a species that really any background that you come from there is a way that you can positively contribute to moving the sort of collective field forward. It's about taking the initiative and what if you're really excited about the issues then think about the skills you have and how they could be applied and then you know talk to people go find people and like you will find a home someplace because the needs are just so great right now"

Escalation Risk of Conflict

  • One area that needs more work is the potential escalation risk of conflict with the increasing use of AI in conventional arms. This includes the chain of command and control for AI systems.
  • There is a need to consider the overlap between different categories of risk and how new areas of risk may emerge.
  • It is important to address these risks in order to ensure the safe and ethical deployment of AI technologies.

"Those are the areas that I think we're going to need to see a lot more work done on and I think that is especially true with the increasing power of large language models..."

AI's Role in Chemical Weapons Development

  • The paper by computational toxicologist Sean Ekins highlights the potential risks of using AI to generate novel classes of chemical weapons.
  • Ekins demonstrated how even rudimentary AI systems could be used to modify the properties of chemical compounds and create toxic variants.
  • This raises concerns about compliance with chemical weapons treaties and the need to develop methods for verifying and preventing such misuse of AI.

"He basically just changed the valence of the ld50 so instead of trying to make it safe try to make it toxic and give it an input of a very common nerve agent VX..."

Importance of Regulation and Positive Vision

  • Regulation is essential for not only mitigating risks but also enabling the positive potential of new technologies, such as AI.
  • Drawing from the trajectory of biology and nuclear technology, active regulation can ensure that positive futures are realized while avoiding negative consequences.
  • Envisioning positive futures and articulating collective goals is crucial for making informed decisions and shaping the development of these technologies.

"There's this false dichotomy that like regulation is against progress. I think regulation is the way you safely enable progress and actually is the thing that's better for the technology to realize its positive potential in the long run."

Positive Future Vision

  • Imagining a positive future is important to counter the narrative of an inevitable march towards a bad outcome.
  • Positive outcomes with technology are still possible and within reach.
  • Articulating a positive vision provides motivation and a sense of purpose for why we should fight and make positive changes today.

"All of those positive outcomes with technology are still very much on the table and it's what we do today that takes us there."

Hope and Excitement for Building the Future

  • Having a positive future vision gives us a goal to strive for and generates hope and excitement about building that actual future.
  • It provides a sense of direction and purpose for those working towards reducing risks and using technology to create amazing things.

"You need that sort of goal post of where do we want to go with it to give you that hope and excitement about building that actual future vision that you'd want to live in."

Balancing Focus on Risks and Hope

  • While it's important to be aware of risks and mitigate them, there is a need to balance the focus by injecting hope and alternative viewpoints.
  • Overemphasizing risks can create a negative filter bubble and become a self-fulfilling prophecy.
  • Introducing items of hope can provide a different perspective and open up possibilities for positive outcomes.

"Injecting a few kinds of items of Hope here and there could maybe at least provide a bit of an alternative view."

Exciting Technological Developments

  • The rate of technological progress, especially in AI, has collapsed the distinction between short-term and long-term advancements.
  • The field of AI has the potential to greatly advance our understanding of biology, which is currently lacking in first principles.
  • The combination of AI and synthetic biology can unlock fundamental insights and lead to exciting possibilities in understanding and intervening in biological systems.

"I think it could be amazing what's happening in AI for our understanding of biology. Biology is a field that still lacks first principles."

Future of Human Connection and Empathy

  • AI has the potential to improve human connection and empathy, contrary to the narrative that it will isolate us and have negative effects on mental health.
  • Designing AI systems that encourage curiosity, play, and open-mindedness can lead to positive interactions and bridges between people.
  • Personal interactions with AI can be a space for cultivating empathy and exploring the potential for positive human-AI interactions.

"These are tools that just as easily could be designed to build Bridges to encourage changes in thinking to open up people's minds to cultivate curiosity to cultivate play."

Inspiring Narratives for the Future

  • The AI World Building contest showcased a variety of narratives that expanded the sense of empathy and human connection through AI.
  • Reading through the proposals changed the general outlook and provided inspiration for new narratives about the future possibilities.
  • Building what we can imagine requires exploring diverse and creatively thought-out futures that incorporate technology and human elements.

"For me what was interesting, it wasn't just like one individual story that was great, but collectively, reading through every single of these proposals, changed my general outlook a little bit."

Plans for Future Work

  • There is a desire to continue engaging voices in shaping positive futures with AI and other emerging technologies.
  • The Foresight Institute has future plans to conduct similar work that invites broad participation in envisioning and building better futures.
  • The goal is to develop AI safely, responsibly, and for the betterment of humanity.

"Definitely want to do more of this style of work which is like engaging sort of broad-based voices on what Futures we want to live in and with AI that we develop safely and responsibly."

World Building Competition and Broad Base of Voices

"What future do you want? What future should we be building? Showing the diversity of those visions and perspectives that are not often represented in what has been traditionally a pretty narrow techno-utopian narrative."
  • The World Building Competition Edition aimed to get a broad base of voices and ideas involved in thinking about the future
  • The competition included a youth section, allowing young people to share their thoughts on designing positive futures with AI
  • Submissions from all over the world were received, representing diverse perspectives and backgrounds
  • This diversity of visions and perspectives is important to challenge the narrow techno-utopian narrative that often dominates discussions
  • The goal is not only to envision positive futures with AI, but also to appreciate the different ways people imagine the future and how it impacts policy, technologies, and research

Examples of Regulatory Precedents

  • Nuclear technology and bioweapons are examples of technologies that have been regulated through international agreements.
  • Agreements on technologies that are not fully verifiable can still have a stabilizing effect and generate powerful stigmas.
  • The history of international bodies such as the IAEA, and of the Non-Proliferation Treaty, demonstrates the successful mitigation of risks while enabling the peaceful development of technology.
  • There are plenty of examples where agreements have been put in place during uncertain or adversarial geopolitical contexts.

"This idea that there's no examples of regulating technology is false. We've done it in the past and we can do it again."

Policy and Regulation as an Enabler

  • Having clarity on the policy and regulatory landscape enables faster progress and reduces uncertainty.
  • When actors understand the playing field and have common knowledge about regulations, it becomes easier to make informed decisions.
  • Existing AI companies are actively participating in the process of shaping policies and regulations by calling for multi-stakeholder processes.
  • DeepMind, for example, has published a paper emphasizing verification, safety, and multi-stakeholder involvement in AI work.

"Clarity and really knowing where one is standing and this being common knowledge that other actors will also abide by can actually be an enabling force."

Positive AI Technologies

  • There is movement and gradual progress in the development of positive AI technologies.
  • Interesting papers have been published on creating global regulatory markets for AI that are privacy-preserving, allowing different actors to verify models against what has been agreed.

"I think you do see a lot of innovation even just in the past few weeks, that are relatively solution based and so I think it is a pretty fast moving and exciting space"

Encouraging Statements from Leading AI Companies

  • There has been significant progress and positive statements coming out of leading AI companies.
  • However, it is important to go beyond statements and actually ensure concrete policies and agreements are in place to operationalize the goals of AI governance.

"it's also important to put pen to paper and actually get specific on that and gain agreements on that because statements are great but it's actually operationalizing those to make concrete policies and concrete agreements that people sign on to that is the litmus test of like where we want to go"

Importance of a Multi-Stakeholder Process

  • It is essential to have a multi-stakeholder process in AI governance.
  • The involvement of leading manufacturers, developers, independent scientists, civil society groups, and government entities is crucial in order to address requirements, challenges, and conflicts of interest.

"It is very important to have leading manufacturers and developers and our Pharmaceuticals involved in the regulatory discussions because they have a unique understanding of the requirements and the challenges and the opportunities in that space but they alone are insufficient because there are like certain conflicts of interests that come to the table right if you are the developer of those Technologies and so that is where the multi-stakeholder perspective of like also having independent scientists and Civil Society groups and other groups within government doing independent analysis is really important and key to getting the regulation right"

Concrete Policy Proposals and Levers for AI Governance

  • The implementation of concrete policy proposals and using various levers are essential for effective AI governance.
  • This includes safety and effectiveness considerations, licensing regimes, auditing, verification, liability, and exploring different policy options.

"There are a lot of levers that we can we can pull here in order to make sure we put in the appropriate guard rails that mitigate risks but still enable technology to develop and be applied and realize that potential"

Moving from Deep Thinking to Rapid Execution

  • The AI field is transitioning from a phase of deep thinking to rapid execution.
  • Concrete and pragmatic proposals are currently being developed to achieve beneficial progress and outcomes.

"like people haven't thought very deeply about this topic for a long time but just now the gloves come off and it's really just prime time to implement a lot of stuff and with that in mind a lot of these bubbles are not getting super pragmatic"

Near-Term Critical Decisions and Unlocking AI's Potential

  • The near term is a critical point for making important decisions and establishing safety measures for AI.
  • At the same time, there are immense opportunities and unexplored potential in applying AI, especially with large language models and their applications in various fields.

The Upside of AI and Bio

  • The AI for Bio field has the potential to greatly improve human health and extend our reach through space exploration.
  • It could lead to longer and healthier lives, as well as the possibility of becoming a multi-planetary species.
  • Going to Mars poses biological challenges, but advancements in bioengineering could overcome these obstacles.
  • The ability to endure long-distance space travel, microgravity, and radiation would be a significant advancement.

"I also think about what that could unlock for us as a species in terms of living longer, healthier, better lives on this planet – but also extending our reach through the cosmos and becoming a multi-planetary species."

Words of Advice for New Entrants

  • Regardless of background or training, anyone can contribute to the field of AI safety and governance.
  • Diversity of perspectives, disciplines, and global representation is crucial for making informed decisions.
  • The field is fast-moving, which can be both exciting and overwhelming.
  • Some aspects of the field are stable, providing opportunities for long-term engagement.
  • The AI safety community consists of positive-sum thinkers who believe in abundance and lifting everyone up.

"For someone new entering the space, whatever your background or training or perspective or where you come from, there's an opportunity to make a difference in this because the needs are so great."

Excitement and Appreciation

  • Emilia expresses that her interaction with Foresight Institute has been incredibly fun and she is highly excited about upcoming projects.
  • She praises the organization for consistently producing exciting and impactful initiatives.
  • Emilia encourages people to sign up for Foresight Institute's mailing list and engage with their podcasts for a wonderful experience.

"It's always just really exciting stuff tumbling out of your organization all the time, so I would say for people to just maybe sign up to you guys’ mailing list, listen to the podcasts and so forth. So thank you, thanks for doing the work that you do."

Eucatastrophe

Considering the scenario of using AI for bio-focused applications gives me hope that we're on a good path for humanity. Sometimes I do struggle with thoughts of catastrophe, as it is much easier to break things than to build them. When something breaks in a catastrophe, it can happen fairly suddenly, whereas doing things right usually involves a lot of sustained hard work and careful engineering over time. Good things don't happen as easily as entropy taking over and breaking things. That said, AI for Bio is a piece of it.

I also think about what that could unlock for us as a species in terms of living longer, healthier, better lives on this planet, and also extending our reach through the cosmos and becoming a multi-planetary species. The limits to this, if you look at NASA's risk assessments, are mainly biologically based challenges. Our biology is not really meant to endure long-distance space travel and microgravity and radiation.

There's a piece of AI for Bio that is health and disease, a sort of moonshot of alleviating so much suffering. But also, we can go even further, beyond just the elimination of disease. How do we live longer, and how do we take humanity to new frontiers? That's something I get excited about.

Emilia highlights the potential of AI in biology, not only in terms of transforming our healthcare and possibly extending our lifespans, but also its capacity to make us a multi-planetary species.

Emilia also discusses a broader application of bio-focused AI, moving beyond the treatment of disease towards a more ambitious aim: extending human longevity and pushing the boundaries of humanity to new frontiers. It's these far-reaching possibilities that excite her most about the future.

AI-Powered Biological Advancements: Paving the Way for a Healthier Humanity and Multi-planetary Life

Emilia's eucatastrophe, or optimistic vision for the future, revolves around leveraging AI for biological advancements. Her ultimate hope is that these developments would not only lead to improved health and longevity for humanity on Earth, but also enable us to overcome biological limitations and become a multi-planetary species. This vision is broad, encompassing both the remediation of diseases and the push towards new frontiers of human achievement and exploration.

RECOMMENDED READING

  • Dual use of artificial intelligence-powered drug discovery – Urbina et al. How AI technologies for drug discovery could be misused to generate novel classes of biochemical weapons.
  • Worldbuilding Competition – Future of Life Institute. Entries from teams across the globe competed for a prize by designing visions of a plausible, aspirational future that includes strong artificial intelligence.
  • The Summit of the Future 2024 – United Nations. An upcoming summit to enhance cooperation on critical challenges and address gaps in global governance.