In this episode of the Existential Hope Podcast, our guest is Adam Marblestone, CEO of Convergent Research. Adam shares his journey from working on nanotechnology and neuroscience to pioneering a bold new model for scientific work and funding: Focused Research Organizations (FROs). These nonprofit, deep-tech startups are designed to fill critical gaps in science by building the infrastructure needed to accelerate discovery.
Tune in to hear how FROs are unlocking innovation, tackling bottlenecks across fields, and inspiring a new approach to advancing humanity’s understanding of the world.
Imagine a future where we finally understand the human brain’s basic structure—how it drives our values, motivations, and well-being. This isn’t about uploading minds or building superintelligence but about gaining practical insights into how our brains work. With this understanding, we could improve mental health, align technology with human needs, and make life better in countless ways.
At the same time, science becomes more efficient. We remove the red tape and inefficiencies that slow progress, enabling researchers to focus on solving big problems instead of navigating bureaucracy. This shift unlocks faster innovation and encourages bold new ideas.
Together, these changes create a turning point: we overcome key barriers to understanding ourselves and advancing as a society, opening up real possibilities for a better future.
Adam Marblestone is the CEO of Convergent Research. He is working with a large and growing network of collaborators and advisors to develop a strategic roadmap for future FROs. Outside of CR, he serves on the boards of several non-profits pursuing new methods of funding and organizing scientific research including Norn Group and New Science, and as an interviewer for the Hertz Foundation. Previously, he was a Schmidt Futures Innovation Fellow, a Fellow with the Federation of American Scientists (FAS), a research scientist at Google DeepMind, Chief Strategy Officer of the brain-computer interface company Kernel, a research scientist at MIT, a PhD student in biophysics with George Church and colleagues at Harvard, and a theoretical physics student at Yale. He has also previously helped to start companies like BioBright, and advised foundations such as Open Philanthropy.
This art piece was created with the help of DALL·E 3.
Beatrice Erkers [00:00:00] Welcome to another episode of the Existential Hope Podcast. I'm your host, Beatrice Erkers, and I co-host this podcast along with Allison Duettmann. This is a podcast where we explore visionary ideas and talk to the people who are driving change at the very forefront of science and technology.
[00:00:15] Today, I'm very happy to say we're joined by a really incredible guest, namely Adam Marblestone.
[00:00:21] Adam is the CEO of Convergent Research, which is an organization that is behind a lot of other organizations called focused research organizations, FROs, which is a concept that you'll be hearing a lot more about in this interview. He's also on MIT Technology Review's list of 35 innovators under 35.
[00:00:37] So in this episode, we'll talk to Adam about his journey to where he is today, launching all of these different scientific projects, and also about his vision for the future.
[00:00:45] What is it that Adam is really trying to achieve? What future is he trying to unlock with all his work? For a full transcript of today's episode, along with recommended resources and other exclusive content, I would recommend you visit existentialhope.com. And don't forget to subscribe to our newsletter to stay informed about our latest episodes and community updates.
[00:01:06] Now let's welcome Adam Marblestone to the Existential Hope Podcast.
Allison Duettmann [00:01:10] Hi everyone. Welcome to Foresight’s Existential Hope podcast. We are delighted to have Adam Marblestone here today. Adam, I was trying to think back to how I first came across your work, or you. And I think it's because you wrote a report on a positional chemistry workshop that went into a lot of depth on what has happened in nanotechnology, not only recently but also in the past, and what we can do to possibly speed that up.
[00:01:34] And I was very excited about that, given Foresight's obvious history in nanotechnology, and reached out to you. I think that was pretty much the first time we were in touch, but since then, obviously, you've been very much revolutionizing the field of metascience through the work on focused research organizations.
[00:01:49] And really trying to elevate a lot of projects, not only in biotech, but also in neurotech and lots of other fields. So I think it's just really interesting to see how you've managed to have a pretty large effect on a field that we've struggled to make progress in.
[00:02:07] So thanks a lot for that. I'm really happy to have you here now. Maybe you could just start by giving a little bit of an intro to yourself, because I think it's interesting that you started out very much in the nitty-gritty of a scientific discipline and then took it all meta eventually. So, how did you get to where you are?
Adam Marblestone [00:02:22] Thanks a lot for that. And yeah, it's hopefully just the beginning. Yeah.
[00:02:25] So I'm co-founder and CEO of a nonprofit called Convergent Research, and we think of ourselves as a science studio. We're best known for creating this category of projects called focused research organizations, which we'll probably talk about, but we're interested in enabling researchers to work on problems where, essentially, you're building a piece of infrastructure for the scientific process itself. There are many different types of problems that FROs can apply to, but a major category is scientific infrastructure. Think about the Hubble Space Telescope: it is not really a product in the traditional sense, but it is a system that had to be engineered.
[00:03:00] It's also not a scientific discovery. It's not somebody's thesis, right? It's a piece of infrastructure that's used by research itself.
[00:03:07] So we think these focused research organizations are particularly useful for filling bottlenecks that exist in the system around creating scientific infrastructure. And yeah, I got to this through a bunch of much more object-level questions, where myself and some colleagues realized that this was a gap with some common features across many of the different object-level problems I've been interested in working on. When I was a teenager I was reading books like Eric Drexler's book about nanotechnology, and many other science and science fiction books. I had really broad interests across science, but I was particularly interested in this sort of very technologically enabled view, maybe, that is closely related to nanotechnology.
[00:03:48] I started out, I had lots of intellectual questions. I was interested in things like, how does the brain work or something like that. But what was being described technologically would be like, okay, you could have really powerful physical access to the brain, or how would AI work? You could make really powerful computers.
[00:04:05] And so somehow this nexus that nanotechnology represented, of biochemistry, engineering, physics, chemistry, was what I got really interested in. As I progressed, working in the nanotechnology, synthetic biology, and neuroscience fields, we just kept seeing this problem that there were these early-stage parts of research where you need to build something.
[00:04:27] It might be a new type of microscope or something, or it might be a new machine for fabricating nanostructures, whatever it might be. You would need more of an industry-like team, right? It doesn't make sense for an individual person to apprentice themselves in all of the different skills that you would need to put together this piece of infrastructure.
[00:04:44] It made sense to divide the labor and professionalize the labor in a different way. And that's what led us to the focused research organizations, maybe most directly through brain mapping and connectomics. But in retrospect, a lot of the other problems in biomolecular engineering and so on also have that character, some of them.
Allison Duettmann [00:05:02] Okay. Wonderful.
[00:05:03] Maybe we can dig a little bit into what these focused research organizations are. I know that for example, on the Convergent website, you guys do have a pretty good diagram, but maybe you could elucidate a little bit. What are they? Why are they interesting? And especially, I think, how do they differ from traditional organizations that we've known?
[00:05:18] Why do we need this kind of new organization now?
Adam Marblestone [00:05:20] Yeah. So I think of them as a critical mass of the right people multiplied by a critical mass of time and support to build a very well-defined thing that the market would not support you building as a traditional for-profit startup, right? So in some ways you can think of them as sort of like nonprofit deep tech startups.
[00:05:39] I actually think there's a lot of room for experimentation on all sorts of aspects of this. How is it set up legally? How is it set up structurally and financially? Convergent Research has a particular way that we started doing this, where we have a parent nonprofit, and each of the individual FROs is a subsidiary LLC entity that packages people and resources, and could then be repackaged after the FRO in different ways, depending on what happens. But during the FRO itself, they live within the parent nonprofit, which also provides a kind of back-office support structure.
[00:06:10] But there are these concerted pushes. And you can think of them as borrowing certain key features that startups have. So I think some of the features startups have, they tend to be very laser-focused, very strong unity of purpose of an entire team. It's not really each individual researcher pursuing their own individual question.
[00:06:28] That's actually part of why we call it Convergent Research. I think of most academic research as divergent, deliberately divergent. So my thesis has to be different from your thesis, both to get academic credit for that and because that's what you're trying to do: you're trying to come up with a new idea. Convergent Research is like, how do you get 20 or 30 people to all build the same microscope or something for five years?
[00:06:47] It doesn't mean that there isn't major scientific risk and questions that the team figures out how to pursue, or that it doesn't pivot. It doesn't have to be a kind of straight build-out of something. There's still research, but it's startup-inspired by that unity of purpose. There's typically a small founding team that then handpicks the rest of the hires, not only for their sort of technical skills or background, but also for their alignment with that mission.
[00:07:10] So you build the team in a way that's very similar to the way a startup builds a team. And then the projects tend to involve a kind of tight-knit operational coordination. That means they also operate in a way that's pretty similar to how a deep tech startup works on internal technical development.
[00:07:26] But then these are often creating some kind of public good. So it might be an open-source data set. It might be a tool that other researchers will use. In some cases, they're also capturing IP. In some cases, those things will also spin off into companies, but there's often a major open-source public goods component.
Allison Duettmann [00:07:41] And I guess, since I just remember, I think the first time you guys discussed FROs, at least in the Foresight setting, was a few years ago at a seminar where you dug into the concept. And at that point it was still very much in the conceptualization stage, or the birthing stage.
[00:07:57] And now here we are a few years later, and there are a bunch of FROs out there. Looking back since that first discussion, at least in a Foresight setting, what have you learned in these few years, and which FROs have come on the horizon? Is there another sector ripe for an FRO that you want to point out, that other folks listening to this might be able to take on?
Adam Marblestone [00:08:15] There are a couple of things that we've learned that I'm excited about. I think the biggest objection that we got, maybe circa 2020, in that era of thinking about this before we had actually launched... and even having launched, they're still early, right? We still haven't gone through a full five-year cycle or beyond. It's all been happening pretty quickly.
[00:08:33] But the oldest one is about three years old.
[00:08:35] One of the biggest objections or questions that we got was: can you recruit really great people into these? Why would you do this if you can't have the academic tenure track as your main goal? You're not offering equity in a high-growth startup, and you're not offering the stability of a major industrial entity.
[00:08:54] So why would anybody join these nonprofit startups? I think we've been really pleasantly surprised that we've basically been able, in each of the cases, to build really strong teams. Leaders in their respective fields at different career stages have either been founders or management in FROs, or have joined FROs as technical contributors.
[00:09:13] And so we're really psyched about that. We've also experimented a lot with the sort of phenotype or career stage of the leaders, and you have everything from very senior people, where this is probably one of the last things they'll do before they retire, to people who are very recently out of grad school.
[00:09:29] So we've been exploring a whole spectrum and everything in between: different combinations of CSOs and CEOs and COOs, different types of founding teams. I've also been pretty psyched that we knew this from very direct experience: myself and Sam Rodriques, who is the other author of the FRO concept, had both been trying to work on connectomics and neuroscience problems.
[00:09:49] And so we knew that this was really needed in neuroscience. Neuroscience has this property that the brain is just such a huge and complicated object that the skills of people who study the brain, whose job is to ask questions about the brain, aren't sufficient on their own. There are many other skills that you need to bring into the neuroscience endeavor that are not neuroscience per se.
[00:10:09] So maybe if you want to understand the brain, you study theoretical neuroscience or you study behavior of mice or something like that. You basically study things that are more within the neuroscience toolkit. But maybe you also need chip designers, or you also need people that build lenses or huge data systems or so on.
[00:10:25] This is something that's pretty well recognized in neuroscience. We didn't know in some other fields how true this would be.
[00:10:30] So one of the ones I'm excited about is actually the Lean FRO. This is interactive theorem-proving software for math: software used by mathematicians where, basically, instead of writing down their mathematical proof on a piece of paper, they write it in this programming language, which is Lean.
[00:10:46] And then what Lean can also do, in addition to suggesting next steps and resolving parts of the proof, is it can actually just say: do your conclusions actually follow from your assumptions? Yes, you are right; your proof of the thing you claim to be proving is correct. So this is software infrastructure for math. And that software infrastructure is then also going to have a huge impact on AI, particularly AI in math. I actually think there's a chance that AI can, in a certain sense, just end up solving math, because similar to other reinforcement learning settings, Atari or Go or chess, the computer can just say: okay, you won the game.
[00:11:24] Good job, here's your reinforcement learning feedback. You can close the loop without having to have a human involved in judging whether GPT produced a nice passage of text. This can just close the loop and say: yes, you got the proof right. And that can be done automatically if the proofs are written in Lean.
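To make the verification idea concrete, here is a minimal, generic Lean 4 sketch (an illustrative example, not code from the Lean FRO itself) of what a machine-checked proof looks like:

```lean
-- A proof is a program: if it type-checks, the conclusion really does
-- follow from the assumptions, so "is this proof correct?" becomes a
-- mechanical question the compiler answers.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- `rfl` succeeds only if the claimed equality actually holds, which is
-- exactly the automatic yes/no signal described above.
example : 2 + 2 = 4 := rfl
```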
[00:11:40] And so now this is being used in AI as a kind of equivalent of Atari, or of a Go engine, except for mathematical proofs. But anyway, so: software infrastructure for math. And it turned out that the mathematicians didn't really have any way of organizing the software engineering project either.
[00:11:54] And it was a similar issue as in neuroscience, where maybe the neuroscientists are doing neuroscience, but they really need chip designers. Mathematicians are doing their proving, but they really need software engineering. So that turned out to be true in math as well. And math was not a field where we initially expected there would be a focused research organization.
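The verifier-as-reward-signal idea above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: `lean_check` plays the role of the Lean verifier and `generate_proof_candidate` the role of a learned model; neither is a real API.

```python
import random

def lean_check(theorem: str, proof: str) -> bool:
    """Stand-in for a formal verifier: True iff `proof` proves `theorem`.
    In the real setting, Lean plays this role."""
    return proof == f"proof_of:{theorem}"  # toy placeholder check

def generate_proof_candidate(theorem: str) -> str:
    """Stand-in for a learned model proposing a proof attempt."""
    return random.choice([f"proof_of:{theorem}", "garbage_attempt"])

def training_step(theorem: str) -> int:
    # The verifier closes the loop, like a Go engine scoring "you won":
    # no human grader is needed to assign the reward.
    attempt = generate_proof_candidate(theorem)
    return 1 if lean_check(theorem, attempt) else 0

rewards = [training_step("a + b = b + a") for _ in range(100)]
print(f"success rate: {sum(rewards) / len(rewards):.2f}")
```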
[00:12:09] So increasingly my hunch is that there are probably a few of these, we think of them as FRO-shaped bottlenecks or FRO-shaped gaps, per field, or let's say per fundamental field of research. Maybe there are like 20, 25 fundamental fields of research. In neuroscience, I wouldn't distinguish between autism and bipolar research.
[00:12:26] I would just say neuroscience: how does the brain work? But maybe there's on the order of a handful of FROs for each of those. So that's on the order of a hundred FROs that are needed, which is large, but not infinitely large, actually.
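As rough arithmetic, reading "a handful" as four or five:

$$
\underbrace{20\text{--}25}_{\text{fundamental fields}} \times \underbrace{4\text{--}5}_{\text{FROs per field}} \approx 100 \ \text{FROs}
$$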
Allison Duettmann [00:12:41] So what do we get if we solve math?
Adam Marblestone [00:12:42] What do we get if we solve math? Yeah, it's a good question. And there's obviously different definitions of exactly what that would mean. So the most minimal version is you just have an AI assistant helping the mathematicians and that would have implications in other fields, physics and so on. There's another level where you really can just say, here are my axioms.
[00:12:58] I want you to prove P does not equal NP or whatever, some really hard thing, and it can just do that. There's another level where it can creatively suggest what theorem you should be proving in the first place. Not to overhype it, but I think what Lean is so far contributing to is the first two of those.
[00:13:14] It hasn't really led to the AI saying: here's the next creative direction in math. It has just made it more efficient to verify or crystallize the proofs of something, and ultimately to generate its own proofs for things that we already know we want to conjecture.
[00:13:29] What do you get? I think one of the really interesting connections there, and this is also unexpected in terms of the ways that these basic science problems, if you will, end up ramifying through different fields, is the AI safety application and the sort of secure, verifiable software applications.
[00:13:47] Can you create formal specifications of what an AI is or isn't allowed to do in some sort of very complex model? There is a program within ARIA on this, for example, and Davidad has also written about some related ideas. But can you have a sort of guaranteed safe AI?
[00:14:05] At least some versions of that would rely on having an incredibly complex software model of the system in which the AI might act or intervene. Then the question is, can you actually prove that the actions or interventions it suggests cannot lead to certain consequences? Can you prove something about an incredibly complicated software model? I don't know if that's possible, but I think it's an interesting research area. More generally, it raises the question: can you prove properties about really complicated software systems—verifiable software?
Allison Duettmann [00:14:36]: That's interesting. The first part of your answer reminded me of Sam Rodriques's work at Future House. They recently launched a great AI tool that helps with bio and other scientific research.
[00:14:47] It can already predict or propose experiments. Similarly, BrainGPT, from Bradley Love, can digest neuroscience research and suggest experiments. In some cases, it predicts experimental outcomes better than researchers, though that depends on the experiment.
[00:15:05] So, solving math problems could lead to provably safe AI with implications for AI safety. How does that connect with ARIA's and Davidad's open agency architecture? Have you discussed this with Evan at Atlas Computing? I think he's working on secure software challenges.
Adam Marblestone [00:15:19]: They’re not exactly the same challenge, but they’re closely related. Research progress on proving complex mathematical theorems, speeding up proof generation and verification, provides a testbed for AI systems generating software according to provable specifications.
[00:15:47] This has big implications for cybersecurity and AI safety. Testing software against individual cases will always miss some scenarios. If you can prove software properties, that’s a game changer. The infrastructure of AI generating proofs, verifying them, and creating code that meets those proofs is critical.
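A toy Python sketch of that contrast, using an invented `clamp` controller (a real effort would use a model checker or proof assistant rather than brute force):

```python
def clamp(x: int) -> int:
    """A tiny 'controller' that is supposed to keep its output in [0, 100]."""
    return 100 if x > 100 else (0 if x < -1 else x)  # subtle off-by-one bug

# Case-based testing: these hand-picked spot checks all pass.
for case in [0, 50, 100, 500, -500]:
    assert 0 <= clamp(case) <= 100

# Checking the property over the whole (finite) input space finds
# the scenario the tests missed.
violations = [x for x in range(-1000, 1001) if not 0 <= clamp(x) <= 100]
print(violations)  # [-1]
```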
Allison Duettmann [00:16:20]: I see the potential for computer security, but for AI safety, isn’t it hard to formalize everything in a dynamic, constantly changing world? Are there other steps I’m missing?
Adam Marblestone [00:16:30]: Yeah, and I don't have a complete answer to that question. So that's why, when you ask what you get out of it, this is, I think, coming back to the question of scientific infrastructure, right?
[00:16:45] It would be one thing to say we have to be absolutely sold that this is definitely the approach to all real-world AI safety, and that's the only reason why we're doing it. But we also get to help mathematicians along the way. And so the thing is to find these bottleneck points: it turns out that writing down proofs on paper, with no way of actually verifying them, is a bottleneck that affects these different fields.
[00:17:05] But yeah, I think there are many different versions, and some of the versions of the AI safety application, as I'm understanding it, wouldn't necessarily rely as much on actual formal verification or proof. They might rely on a model that is trying to infer how likely it is that various outcomes can occur, with that model being separate from the agent model that's actually proposing the actions. Having these separations, where you have the AI scientist estimating how likely it is that certain conditions are violated, is important. But it all depends: can the AI come up with the specifications itself? With what the safety spec is? Can it determine what you need to prove? How do you prove that?
[00:17:44] So there's a lot there, but it all depends on AI accelerating in some significant way for it to be meaningful.
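One hedged sketch of that separation in Python, with an action-proposing model kept distinct from an independent risk model that estimates how likely the safety condition is to be violated; all names, models, and thresholds below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    magnitude: float

def propose_action(goal: str) -> Action:
    """Stand-in for the agent model that suggests interventions."""
    return Action(description=f"act toward {goal}", magnitude=0.7)

def estimate_violation_probability(action: Action) -> float:
    """Stand-in for a separate world model estimating the chance that
    the action violates the safety specification."""
    return min(1.0, action.magnitude ** 2)  # toy monotone risk model

RISK_THRESHOLD = 0.1  # illustrative spec: reject risky actions outright

def gated_step(goal: str) -> Action | None:
    action = propose_action(goal)
    if estimate_violation_probability(action) > RISK_THRESHOLD:
        return None  # the gate, not the agent, gets the final say
    return action

print(gated_step("move the robot arm"))  # rejected here: prints None
```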
Allison Duettmann [00:17:45]: And I guess, can it do that in a way that we can prove is safe itself?
Adam Marblestone [00:17:48]: Yeah.
Allison Duettmann [00:17:49]: Okay. But no, you don’t have to solve all of AI safety. Anyway—
Adam Marblestone [00:17:52]: It turns out that’s an FRO that sits in math. We are trying to select these as examples of something where there’s a missing piece of scientific infrastructure, but also where there’s this long-tail possibility that if you make that infrastructure, it could have much larger implications.
[00:18:05] It’s great if it helps some math grad students verify their proofs and write their papers. It’s really great if it is a part of AI safety that wasn’t otherwise accessible, and by building the infrastructure, you end up somewhere on that spectrum of outcomes.
Allison Duettmann [00:18:17]: Even if we can help a little bit with computer security, that would be amazing. That would be a very different world and very much worthwhile doing. And for math too, probably, very much so.
Adam Marblestone [00:18:25]: Yeah, and interesting that even though it is so important, these are not infrastructures that are necessarily being built by the private sector or by the sort of government-backed science establishment.
Allison Duettmann [00:18:35]: Yeah, unfortunately, very much not.
Future Directions and Collaborations
Allison Duettmann [00:18:38]: Is there any other field, for example, for someone listening to the podcast afterwards, are there any areas that you want to direct them to, or any specific requests for FRO proposals? Or are you mostly guided by what is being proposed to you? How does that process work?
Adam Marblestone [00:18:54]: Yeah, that's a great question. It's a very multi-sided thing. We interact with people we think have a sense of what some of these bottlenecks are across different fields. We also have an open ability for people to propose to us. We interact with funders to align on what the interests are.
Sometimes we host fellows or workshops to more directly generate the thinking. There are things at many different stages. In some cases, someone already has a roadmap for the FRO or something close to it. For example, we have one on single-cell proteomics that launched at the beginning of 2023. In 2018, that group wrote a paper called Transformative Opportunities for Single-Cell Proteomics, which outlined what needed to be done, but they lacked the organization or funding to do it.
In other cases—and this is something I'm excited about—by sending out the "bat signal" that it's possible to do an FRO, we encourage people to think more about FRO-shaped ideas in the first place, eliciting new concepts. Sometimes we directly say, "We want you to write this white paper because we know you can," or "We want this group to come together and do a workshop to generate that thinking." Sending out the signal also surfaces ideas that might not have been sitting on the shelf.
Allison Duettmann [00:20:16]: I think that’s a really great point. It used to be, after our technical workshops, people would ask, "Could this become a company? Could any of this become a nonprofit?" Now the question is, "Could this become an FRO?" The question has changed.
While other organizational structures are still worthwhile, people are really embracing the FRO concept and taking it into the field. They’re not just pattern-matching projects into that structure; they’re also thinking more ambitiously. Previously, they had only a few organizational structures in mind when considering what they could be doing. Now, with living, breathing FROs out there, it’s no longer just a concept, and people’s thinking is shifting dramatically.
Adam Marblestone [00:21:03]: I think it’s great if that mindset can back-propagate. There’s a more general question about scientific roadmapping: how do you figure out the steps, tools, and infrastructure experiments needed along a certain path? In many fields, particularly where the perception is that the problem is farther out, it’s hard to generate that roadmapping.
Having the concrete question—"What would the FRO be?"—can help, even if it doesn’t literally lead to an FRO. It gets people to spend thought cycles on the idea. People are so busy, using up their energy competing along other dimensions—getting the next grant proposal, pitching VCs, or making experiments work in the lab. Unless there’s an incentive or encouragement, it’s hard for them to dedicate time to thinking about alternative outputs. The FRO shape contributes to that.
That’s not to say it’s the only shape. I like what Speculative Technologies is doing with their Brains accelerator. If it’s just a startup, fine—you’ll find out it’s just a startup. If it’s an academic project, others can handle that. But it could be an FRO, a pitch for an ARPA program, or something entirely new. ARPA programs are coordinated research efforts, but they’re different. These diverse outputs are all valuable.
Allison Duettmann [00:22:37]: Applications for the next Brains cohort are open now, for anyone interested. Speculative Technologies is one option for people who want to think more ambitiously about their career or the problems they want to tackle.
In recent years, other organizations have emerged, like ARIA, Future House, the Arc Institute, Astera, Arcadia, and others. They’re all working on ambitious science and technology problems—funding them, creating fellowships, and more.
Looking at this broader ecosystem, how has it evolved in the past few years? Are there collaborations between these organizations to raise the bar for scientific problem-solving?
Adam Marblestone [00:23:27]: Yes, and there are even other categories, like billionaire-backed startups or highly speculative FRO-like projects structured as startups.
Allison Duettmann [00:23:36]: Oh, yep.
Adam Marblestone [00:23:37]: And I think that's also very valid, and there are some very exciting examples of that, like Cradle. So with all of these, it's not entirely clear what the cause and effect is.
The Role of ARPA in Scientific Collaboration
Adam Marblestone [00:23:47]: Some of it is frustration due to COVID. Some of it is financial dynamics in different fields, like crypto, in terms of funders, and people encouraging each other to do these types of projects.
[00:23:57] So we are trying to nucleate as much collaboration as possible. I think it's great if a sort of FRO creation studio, if you will, like us, could act in a way that helps an ARPA agency, let's say, to create the performers that it needs. This is actually part of the motivation for the FRO model, even in neuroscience and connectomics.
[00:24:18] Some academic groups did have ARPA funding, in this case from IARPA, the intelligence ARPA, to do connectomics research. But there were the challenges of a distributed multi-lab collaboration: the ARPA toolkit was mostly being able to write checks to these individual distributed actors. What if we could get them all under one roof, even more focused than what the ARPA incentivization of a collaboration could achieve? ARPA agencies have also historically, in different fields, relied a lot on project-creation or contracting entities.
[00:24:49] So Eric Gilliam has written some good stuff about this too, about the old ARPA working with BBN and other organizations to actually do the engineering that was needed within the different ARPA programs. And so can we, and can other organizations, be a kind of create-the-performer mechanism for ARPA agencies?
[00:25:06] I think that's one interesting question: in what cases do ARPA PMs actually run into a situation where they don't have anyone to write the check to, to do the thing that they would otherwise want to do? You can have very general-purpose engineering firms. If you want to make a new kind of fighter jet, you have Lockheed Martin and Raytheon and different types of contractors that the government can rely on. But if you are an ARPA PM that's interested in
[00:25:29] digitizing brains or something, then okay, who's your contractor? Maybe we could create an FRO for that. We're also trying to back-propagate this more into the early-stage thinking with the Speculative Technologies Brains program. And we're trying to create a routing network of some kind. So if someone comes to us, and maybe we don't have anyone who would fund that as an FRO, maybe we think somebody would fund it as a billionaire-backed startup, or maybe they should go apply to be an ARPA PM. To the degree that these organizations interact, so that everybody who shows up in one place can end up somewhere good at the end, we definitely want to encourage that.
Allison Duettmann [00:26:01]: That is almost a job title in itself: someone who's just making sure that the inter-org paths are unblocked.
Adam Marblestone [00:26:08]: You all are having a role in that too, which is definitely appreciated.
Allison Duettmann [00:26:12]: It's a small one, for sure. But one bit that I also want to know: thinking ahead, since you already mentioned a little bit in terms of the numbers of FROs that could be out there, how many FROs do you think you guys can churn out in the next five years?
Adam Marblestone [00:26:27]: We've been going at a rate of a few a year. As I was saying, I think there are maybe on the order of 100 or so in a given snapshot of science; obviously, in 10 years the world will have changed a lot and so on. But if you imagine it over a rough decade, it would be great for the world to do on the order of a hundred, I think. I'm not sure exactly how that compares to how many programs a given ARPA office runs, but it's not totally out of the realm. If you also just add this up numerically, it's like a few billion dollars, which is a lot, but there are individual people who could just do that in a more discretionary way. And compared to the government science budget, it's actually not that huge. So maybe you want the world to be able to launch on the order of 10 or more a year, whereas right now we've been doing two or three.
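As a sanity check on those totals (the per-FRO figure is not one Adam states; it is just the order of magnitude implied by spreading a few billion dollars across roughly a hundred FROs):

$$
\sim 100 \ \text{FROs} \times \sim \$30\text{M each} \approx \$3\text{B}
$$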
Allison Duettmann [00:27:13]: I'm hoping that a bunch of other funders get inspired by this and join the efforts.
Adam Marblestone [00:27:17]: Yeah, me too.
Challenges and Opportunities in Nanotechnology
Allison Duettmann [00:27:18]: Maybe my last question on my end, given that we have such a long-standing history in nanotech, and that it's also the topic over which we first met: what do you think that field in particular needs to progress faster? Are there any projects in the broader nanotechnology realm that you're excited about? I think you've really championed this concept of a molecular 3D printer, but is there anything that you're excited about in terms of nanotechnology progress that could be on the horizon soon, or is already emerging, that you want to point out?
Adam Marblestone [00:27:46]: Yeah, I think it still needs a bit of an exploratory approach.
[00:27:50] The broadest answer, I think, is that it needs, and has needed for a few decades, more ARPA-style coordinated research program approaches. And there can be several of those within a given broader field. I think the ARPA mechanism can work really well in the early stages of something, because it also pools the thinking of the researchers. With an FRO, the team can interact a lot with and collaborate with the outside community and drive a lot of change, but it's often more about buckling down and solving a pretty well-defined set of technical problems as a single team, in the same way that a startup would if it were trying to solve a deep tech challenge for a large VC-backed commercial market.
[00:28:19] With nanotechnology, I think there are four or five different ARPA-like programs that might be the best path.
[00:28:36] And I think there is a challenge, which is that people are often asking: okay, what's the commercial application that comes out at the end of those four or five years of work? In nanotechnology, that's actually hard, because a general-purpose fabrication method is basically just bootstrapping toward showing its generality. For most of the things that we want in the world, we have come up with special-purpose fabrication methods, which are not as general and maybe ultimately aren't going to have these self-reinforcing properties where they can become more and more general and more and more cheap and so on, the way nanotechnology in principle would. But we have special-purpose ways of making those things.
[00:29:13] And that includes most of the things that go into the nanosystems we want to be exploring with. So you might be using cells or cell-free systems to manufacture the proteins that go into protein engineering. So we're still far from this kind of exponential manufacturing takeoff, I think, in any particular area.
[00:29:30] I spent a little bit of time during my PhD trying to think about bio-templated microchips, which is not the same problem as the positional chemistry problem. Positional chemistry I often think about in terms of DNA: basically, you have some DNA double helix, and it's about 0.3 nanometers per letter.
[00:29:45] So if you have some sequence of A's, T's, C's, and G's, between one letter and the next it's about 0.3 nanometers, and it's about two nanometers along the diameter of that helix. And that's just a little bit bigger than the kind of covalent bond specification resolution that you need for positional chemistry.
[00:30:02] That's a little bit bigger than that angstrom scale. Protein active sites, and control of reactions in an enzyme active site, are one step smaller than that. And then it's a bit smaller than the relevant feature size on microchips. So I was thinking a lot in terms of DNA nanotechnology, because that's the level at which we have a lot of programmable control right now, through things like DNA origami. We're starting to have some control with protein engineering that's a little bit smaller than that, and scanning probe microscopes can go smaller than that in terms of precision.
[00:30:32] But anyway, we were thinking about: how would we start with that DNA level of precision, single-digit nanometer, programmable over some small number of nanometers, as a kind of canvas with DNA origami? And how would you scale it up to full-size microchip control? Could you position different nanoparticles or other discrete components at different parts of the chip?
[00:30:50] I think there's actually been a ton of progress in that direction of doing something like that. So implosion fabrication is the opposite of expansion microscopy: it means you pattern materials into a hydrogel and then you shrink it, and it can shrink by a factor of more than 10 along each axis.
[00:31:04] And if you shrink something patterned at the wavelength of light down, you'd also get down to these single-digit nanometer features. So we're getting more and more able to make these kinds of bio-templated microchips. But that by no means implies that we suddenly displace Intel or TSMC for chips, because their process has been so optimized for the manufacturing problem that it's solving.
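A back-of-envelope Python sketch of the length scales in this passage; the patterning resolution and shrink factor are illustrative assumptions, not measured figures:

```python
# Rough length scales from the discussion above (approximate values).
nm_per_base = 0.34        # ~0.3 nm from one DNA letter to the next
helix_diameter_nm = 2.0   # diameter of the double helix
angstrom_nm = 0.1         # covalent bonds live at the ~1-3 angstrom scale

print(f"bases per 100 nm of DNA: {100 / nm_per_base:.0f}")

# Implosion fabrication: pattern at roughly the diffraction limit of
# visible light, then shrink by >10x along each axis.
pattern_resolution_nm = 300.0
shrink_factor = 10.0
print(f"feature after one shrink: ~{pattern_resolution_nm / shrink_factor:.0f} nm")
# Larger shrink factors, or patterning below the diffraction limit,
# are what push this toward the single-digit-nanometer regime.
```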
[00:31:25] So even though we actually have these breakthroughs like DNA origami or implosion fabrication, it doesn't mean that we suddenly replace Intel. It also doesn't mean that vaccines or other biologic drugs created in this way are going to replace more traditional vaccines or things like that. Basically, you need ARPA programs where you allow the output of one ARPA program to be another ARPA program.
[00:31:46] Basically, it's helping you spec. So you need a few iterations of a few parallel ARPA programs, and I think you'd actually make enormous progress on 3D printing, positional chemistry, more Lego-like digital protein assembly, and chip-scale bio-templated chips. It's just that everybody has a more near-term problem that they're worried about, which is distracting them from working on nanotechnology at the level that they should.
Allison Duettmann [00:32:07]: Yeah, as is often the case with a general-purpose technology that would unlock many other fields. Thanks, that was a really neat summary of the field. And if anyone in the ARPA world is listening to this, you now know what a great waterfall of ARPA programs for this particular topic could look like.
Adam Marblestone [00:32:19]: Yeah. And that only adds up to a large, but not a totally unreasonable amount of money.
Allison Duettmann [00:32:24]: Great. Not totally unreasonable; I'll take that. And that's my part of the podcast.
Exploring Existential Hope
Allison Duettmann [00:32:28]: I hand it over to Beatrice to dig in a little bit more into the existential hope part.
Beatrice Erkers [00:32:31]: Yeah, I have a bit more. We call the Existential Hope program our North Star: it's the question of why we are working to develop all of these technologies. So that's why I'm very curious to hear your thoughts on this. I'll just start with asking: would you say that you're positive or optimistic about the future? And if you are, have you always been? Was there any particular event that shifted your perspective on this?
Adam Marblestone [00:32:57]: I think, going back to the sort of deep prehistory of reading things as a teenager, I'm probably most aligned with Vitalik Buterin and the d/acc kind of framework. Not to say that there aren't huge problems and risks, but the toolkit of technology is actually so powerful that there is really a notion of creating defensive and effective systems. And so I really like the idea of defensive systems. It doesn't mean it automatically works out this way; I think there are things that can be offense-biased rather than defense-biased. But if we can recognize that and create systems at the right scales, then you can take the immune system as an inspiration. It doesn't always work, but it's a very powerful system. So where are there analogs where we need to build something like an immune system? And if, in some sense, the concern is that we get too good at making technology, then we should also be very good at making these immune systems. If we're really taking that seriously, we should be able to make very powerful immune systems. So I'm sort of d/acc; that's the lever that I see, at least given my background. Yeah.
Beatrice Erkers [00:33:55]: Yeah, defensive accelerationism. I love the concept. It's really gotten some traction lately, both from a more social angle and from people seeing a more deep-tech future. If we move away from technology for a moment, are there any values that you think are really important for us to uphold as we continue to develop these technologies? I guess d/acc partly answers that, but is there anything else that you think we should keep in mind?
Adam Marblestone [00:34:23]: I guess I'm also inspired by the notion of pluralism: that we need to be able to experiment and have different forms of society. Even in service of d/acc, you want to avoid a sort of hegemonic tamping down of change or of experimentation. It's something I think about sometimes from a neuroscience perspective. Human values, right? Part of that is basically encoded by some brainstem and hypothalamus training signals, or what have you, shaping our development in the context of a very particular set of social interaction affordances that we have. And I don't think we want to just fix those brainstem patterns. I would like to have more programmable control ultimately over my own mental states and values. But the danger in that is that you have to maintain some sort of pluralistic system.
Beatrice Erkers [00:35:08]: Yeah. One question that we always ask is this question of a eucatastrophe, which is the opposite of a catastrophe: an event that, once it happens, leaves the world much better off. I'd be really curious to hear if you have any ideas for a particular event like this, which, if we get it, would leave us much better off.
Adam Marblestone [00:35:28]: I have ones at a couple of different levels. I do really think that right now people think a lot about AI and AGI, and some people think about this idea of whether we can digitize or upload the brain. I think there's a bit of neglect, maybe because it's harder to define what success looks like, of just: what if we could have a basic architectural understanding of our own brains? What if neuroscience actually succeeded as a science, and we could have a basic architectural understanding of our brains, in the same way we have a basic architectural understanding of DNA and RNA and the central dogma of biology and how cells work?
Like the things I was just saying about how human values are programmed as certain learning patterns and learning rules, shaped by these brainstem or hypothalamus circuits. A lot of human well-being is basically in there: what we want, how we interact, what we're motivated to do. So what if we just understood what Steven Byrnes calls the steering subsystem of the brain? Even without every detail, not necessarily being able to completely simulate it, but just understanding the basic principles of how it works. What if neuroscience actually succeeded in some sense, and we knew how this thing basically programs brains to go and learn? That would be a big deal in many ways that we couldn't really predict. I think Steven Byrnes is interested in that from an AI safety angle: the one example we have of a large learning system being aligned with human values is, basically, through the steering subsystem of the brain. But also just all the things we could do for human consciousness, all the things we could do for human experience, if some combination of neuroscience and neurotech just didn't completely fail. It doesn't mean you have to completely digitally upload your person, your brain, immediately into the cloud or something, or make a super brain. But even if we just knew what these different subsystems of the brain were doing, so that we could modulate them and understand the basic principles of how they work.
So just neuroscience having a certain level of progress, similar to other sciences, I think could be just enormous, but also very unpredictable in what happens after that, because it depends on what we choose to do with it. So that's at one level. Maybe at the meta level, it's: what if we can remove all these operational bottlenecks to science, right? These sort of sociological, operational bottlenecks.
Again, it's not a law of nature that people struggle to create focused research organizations or programs, or don't have funding for creative ideas; that seems very solvable. So what if we could get science into a state where it was limited more by really hard, actually hard, intellectual problems, which would still exist, versus by not being able to organize ourselves to prioritize doing science?
Beatrice Erkers [00:37:51]: So let's imagine, to be bold, that we've achieved both of your wishes. If you think of a world where that has actually happened, what do you think the main differences are between our world today and that world?
Adam Marblestone [00:38:04]: It's very hard to predict in both cases, because mine are not just concrete things. They're really, if we could really change science, things that would change our understanding in a different way: our understanding of what the possible goals even are. And I think there would just be so many ramifications that it becomes hard to predict. One of the things I really want to make sure of is that we actually get there. I think it would be great if we're flying around on spaceships and stuff, but not if we don't get to this kind of higher level of understanding. I really like certain sci-fi; sometimes I find sci-fi gets it really right. Take Steven Spielberg's AI movie, right? The aliens end up being very enlightened, and they understand a lot about psychology and values and stuff in the far future. So I want to make sure we get to a point where we're not just the same us, flying around on faster spaceships.
Although I think it's also important to greatly reduce scarcity and increase the optionality of what people do and increase economic growth. But I think people sometimes forget that science is going to look totally different and philosophy is going to look totally different in a hundred years.
Beatrice Erkers [00:39:00]: It's a bit like the technological maturity concept. Yeah. It would just...
Adam Marblestone [00:39:06]: ...be the maturity that this will all enable.
Beatrice Erkers [00:39:08]: Yeah. Yeah.
Beatrice Erkers [00:39:09]: And feel free to decline this question, but, because you're answering on a very general level, which is clever: what do you think for yourself? If we had this, what would Adam want to do? What do you think you would value in that future?
Adam Marblestone [00:39:26]: Yeah, just not being distracted by dumb stuff, getting to see that future at all, and not being distracted by really dumb things.
Beatrice Erkers [00:39:31]: Yeah, fair enough. It's very easy to get distracted today by a lot of things, both from ourselves and everything around us.
The Importance of Optimism and Public Perception
Beatrice Erkers [00:39:38]: In terms of this, if we think back to the idea of being positive or optimistic about the future: is that something that you think is important? I'm asking in particular because I'm thinking about the question of the public perception of adopting a lot of these technologies. My sense is that the general public is not necessarily super aware of the possibilities of a lot of these technologies, and even a lot of scientists can be a bit more on the gloomy side, in my experience. But you seem to be very action-oriented.
[00:40:14] You seem to really think that a lot of these things are very possible, and you're obviously doing a lot about it and making a lot happen. What do you think we need to do? Do you think we need to get the public on board more? Is there anything about the public perception of this that we need to change?
[00:40:28] Is it important to have a more optimistic outlook on the future?
Adam Marblestone [00:40:31]: No, I agree, at all these different levels. I think of it as: it's just hard to envision what good outcomes would actually look like. There's just not enough work on envisioning good outcomes in detail, right? You can say, "Oh, this could be a problem," or "this could be a problem," but there's not a lot of work on actually envisioning the shape of good outcomes, even the very general shape. And even things that people think of as good outcomes: in some cases, oh, we avoid existential risk, but we're still having a huge international conflict around this, or something. So, more work on envisioning good outcomes, with as much specificity as possible, because I think that people tend to dismiss that type of thinking as "Oh, that would be nice" unless you provide enormous specificity.
[00:41:12] It is a double standard. People can just say, "Oh, this could go wrong," and even without a lot of specificity, others will agree, "Oh, yeah, that could go wrong." But unless I describe in incredible detail how things could go right, people will still say, "Oh, that's just wishful thinking," or "it won't work out." So: envisioning a positive scenario in extremely great detail. And I actually include that at all these different levels, including, if you're a researcher: if you could have any tool in your field that you needed, right?
[00:41:33] What specifically would that tool be? That's an FRO question. So we're doing that within research: just, what would it be? And then it turns out, okay, actually, yes, that costs millions of dollars, but it's not like a time machine that is totally impossible, right? It's actually something that you could do.
[00:41:47] So I've been inspired also by Tom Kalil, who's our board chair, and his sort of magic laptop thought experiment. Say you got 15 minutes with the president, but you could only ask them to do very specific things. You couldn't just ask them to create magic for you; you had to say: call this person and have them call this other person and say this very specific thing, call the CEO of this and the president of that.
[00:42:05] And what exactly do you ask them to do? If they would do all those things, do you know exactly what you would ask? We're doing the equivalent of that, at a much smaller scale, for science. And we somehow need this in the big picture too, in society. And I think you're doing some great work on that.
[00:42:21] We need a lot more work on that. Yeah.
Beatrice Erkers [00:42:22]: Yeah, I very much agree on the double standard between negative and positive futures: if you say something can go wrong, it's very easily agreed with. And in terms of world building, I guess we're doing a lot of world building work right now to try to do exactly that.
[00:42:38] But the FRO idea, I guess, is nice because it breaks it down even more. We're trying to do that with world building too: we try to say, here is this year in the future, with this level of AI. But being even more specific is probably good. Other than that, I agree very much. Keeping it open-ended is probably the smartest thing we can do, because we don't necessarily know what's best for the future. But to inspire action today, it's probably better to have these concrete things.
Adam Marblestone [00:43:09]: Yeah, you can be approximately correct, but concrete enough things can be really helpful.
[00:43:15] I find them to be really helpful.
Beatrice Erkers [00:43:16]: Yeah. We're down to the last two minutes, so I'll ask you one or two quick questions before we round off. But first I would just love to hear: do you have any favorite resources that you would recommend? It could be a paper or a movie, fiction or nonfiction, anything that has had a profound impact on you that you would recommend others read.
Adam Marblestone [00:43:35] I have a list of some of these on the talks and writings page of my website. I just have this seemingly random list of all these different papers that I've really liked. I'm not sure how helpful that is, but I love anything that will cause you to go down a rabbit hole that you might not otherwise go down.
Beatrice Erkers [00:43:52]: That's perfect. We'll share the list and people can look at what's relevant to them. The last question I'll ask you is just if you could think of the best advice that you ever received?
Adam Marblestone [00:44:01]: I got some good advice about learning and trying to become a scientist from my undergrad research advisor, who said: don't neglect the time dimension in your own development. People are always asking you to produce things: okay, have I created my billion-dollar startup yet by the time I'm 19, or whatever. Actually, you're here to learn. So I would recommend people try to spend some time learning fundamentals. If you need to spend five years learning physics or something, and you're worried that you're not going to be producing a lot of outside value for the world during those five years, probably just don't worry about it. It's not a very long time, and at the end, you understand physics or something.
Beatrice Erkers [00:44:31]: Yeah, that's probably very good advice. And you'll probably have the most impact in your forties or fifties, or later on.
Adam Marblestone [00:44:38]: Think on a long enough timescale that you're not neglecting your own development.
Beatrice Erkers [00:44:42]: That's great advice. Thank you so much, Adam, for joining.
Adam Marblestone [00:44:47]: It's great to be here. Thank you so much. Yeah.
Beatrice Erkers [00:44:49]: Lovely to hear. Thank you so much for listening to this episode of the Existential Hope podcast. Don't forget to subscribe for more episodes like this one. You can also stay connected with us by subscribing to our Substack and visiting existentialhope.com. If you want to learn more about our upcoming projects and events, and explore additional resources on the existential opportunities and risks of the world's most impactful technologies, I would recommend going to our Existential Hope library. Thank you again for listening, and we'll see you next time on the Existential Hope podcast.