Podcasts

Stuart Buck | What is Good Science?

about the episode

This month, we have Stuart Buck with us, executive director of the Good Science Project. Among many other things, he has funded renowned work showing that scientific research is often irreproducible, including the Reproducibility Projects in Psychology and Cancer Biology.

Working in the field of meta-science, Stuart cares deeply about who gets funding and how, the bureaucracy engulfing researchers everywhere, how we can fund more innovative science, ensuring results are reproducible and true, and much more.

For his Existential Hope Scenario, Stuart imagines (in relation to science) a world where “you have two different systems operating with equal amounts of funding. And now you can really see at a grand scale, hopefully, what happens and what are the results. There will be a chance to test out lots of different meta science ideas that people have discussed for years or decades.”

These wouldn’t be small changes either. Stuart wishes for entire institutions run completely differently from current ones, in the hands of entirely different people. For him, we’re currently “so focused on doing whatever gets you approval from the kind of the existing bureaucracy”, that we’ve forgotten how to be innovative and radical.

He imagines changes across the system could breed a number of great breakthroughs, on the scale that we as a society have not seen in many years.


About the Scientist

Stuart Buck is the Executive Director of the Good Science Project, and a Senior Advisor at the Social Science Research Council. Formerly, he was the Vice President of Research at Arnold Ventures. His efforts to improve research transparency and reproducibility have been featured in Wired, the New York Times, The Atlantic, Slate, The Economist, and more. He has advised DARPA, IARPA (the intelligence community's research arm), the Department of Veterans Affairs, and the White House Social and Behavioral Sciences Team on rigorous research processes, as well as publishing in top journals (such as Science and BMJ) on how to make research more accurate.


About the art piece

This art piece was created with the help of DALL·E 3.



Transcript

Beatrice Erkers  

Welcome back to the Existential Hope podcast, where we dive into the insights and aspirations of leading minds that are really shaping our future. I'm one of the hosts of the podcast, Beatrice Erkers, and I co-host alongside Allison Duettmann. In today's episode, we're very honoured to have Stuart Buck with us; he runs the Good Science Project, whose mission is to improve the funding and practice of science. But before we get into the conversation with Stuart, a quick reminder to sign up for our newsletter to stay updated with our latest episodes, insights, and community ideas. It really is your gateway to engaging with existential hope ideas. And for those eager to explore further, also visit existentialhope.com. There you'll find a full transcript of today's discussion, recommended resources for deeper understanding, and an exclusive art piece inspired by our conversation with Stuart. Now, without further ado, let's dive into an enlightening conversation with Stuart Buck.

Allison Duettmann  

Hi, everyone, welcome to Foresight's Existential Hope podcast. We're really delighted to have Stuart Buck here today. We actually met in person at a recent EAG – I'd heard of your research before, and just booked a call with you, or an in-person meeting, there. And we stumbled onto a bunch of really interesting meta-science projects and problems, challenges and possible solutions, and then dove a little into the new emerging landscape of interesting orgs.

Allison Duettmann  

So I'll talk a little bit about that in the introduction – I'd love to hear from you on it. And I guess we were just like ships passing in the night at South by Southwest – too bad! But hopefully we'll connect in person next time.

Allison Duettmann  

Alright, so maybe just to get us started, would you want to share a little bit more about what the Good Science Project is really up to and if you can, also share your journey into the organisation? I think that usually helps people map the genealogy of your work.

Stuart Buck  

Sure. I should start by rewinding about 12 years or so. Around 2012, I went to work for a place called the Laura and John Arnold Foundation, a major billion-dollar-plus philanthropy that focuses a lot on evidence-based policy. And it's grown a lot since I joined – when I joined it was pretty new, there were eight or ten people there; now it's over 100. So the scope and scale of it have really grown. But right from the start, the Arnolds – who are themselves around 50 years old now; they'd retired at 38 and were devoting their wealth to philanthropy – were mostly interested in evidence-based policy across lots of areas like education and criminal justice and health. And one thing we initially started noticing was what you might call the reproducibility crisis, the problem of trying to replicate research. We first noticed it in psychology, but as I started digging into it as director of research there, it turned out to be a problem in a lot of fields, including medicine and cancer biology and economics – pretty much any field you dig into, there are some issues with replication: sometimes outright fraud, or just that the publication process is often biased towards exciting positive results. Which is natural – we all want exciting positive results to come out of the scientific world. But when there's a bias towards that, then people sometimes feel compelled to stretch the truth and push the boundaries of acceptable practices.

Stuart Buck  

The Arnolds decided that if you want to pursue evidence-based policy, it's really difficult to do that if you're not sure how much you can trust the evidence, or if you think the evidence has been biased in a positive direction. Their initial vision of philanthropy had been that they would look at education, for example, and it would be fairly simple to find the best ideas supported by the best evidence and then just write a big check to the best idea. Very simple. It turns out it's much more complicated than that once you really start digging into the evidence of what works. And so I started a grant-making programme there – among the first major funding programmes focused on open science and reproducibility and trying to improve science. We handed out probably 60-plus million dollars over several years. And then in the process of that endeavour, I ran across a guy named Patrick Collison, who runs a company called Stripe. I ran into him several years ago at a conference, and he was very interested in trying to improve science – not just reproducibility, but improving innovation, increasing the pace of innovation, and the freedom the best scientists have to explore the universe and their best ideas without having to cater to what funders most desire.

Stuart Buck  

I introduced him to John Arnold and we continued conversations, and then a couple of years ago – actually, time has flown; it's going on two and a half, almost three years ago – I had some further conversations with Patrick Collison that led to him being the initial funder for what I'm doing on my own now, which is the Good Science Project. It's a small – I guess you could say think tank – focused on trying to improve federal science funding and policy so that we have faster innovation and hopefully more breakthroughs, and clean up reproducibility as well. So that's the fast version of the journey to where I am now.

Allison Duettmann  

I love it – I think it's always really interesting to hear about the individuals involved and some of the serendipity in it. Okay, that's wonderful. Maybe let's dive into a few of the topics that you actually focus on, to give people a little bit of a taste of what you're working on. I know that you've published a lot on Substack and in other outlets too, and you sent me a few really interesting docs – I'll just jump around here if you don't mind.

Allison Duettmann  

One that I thought was really interesting, and has of course become a pretty prominent field recently, is meta-science. It's not necessarily involved with individual scientific fields – for example, at Foresight we support specific researchers working on one technology – but rather looks with a broader lens at what could be improved in the ecosystem. And you've written some really interesting stuff there. So I'd just love to know, for example, how much progress we've made in meta-science, and where we could still speed up progress. If you think about science from this meta lens – you mentioned reproducibility is one – what are a few of the different areas that you think are really holding scientists back right now from producing the research that we would all want from them? And then perhaps a few recommendations that you have here?

Stuart Buck  

Sure, that's really broad, so I'll just pick one issue that I care about a lot, and that a lot of folks focus on: it's really tough to crack down on bureaucracy. Everyone hates bureaucracy in the abstract, but the problem is, the government loves bureaucracy when you point to any particular feature of it. To take a step back: there have been multiple surveys of federally funded scientists over the past couple of decades. The most recent survey covered thousands of federally funded scientists, and they said, on average, that they were spending 44% of their time on bureaucracy – basically filing reports and budgets and proposals, and all the machinery that comes with getting a federal grant. Everyone points at that and says it's a huge problem: scientists spending nearly half their time on bureaucracy. It seems like we're paying people to dig a hole and fill it back in. And that's an average – some scientists are fortunate enough to have great administrative help, so they personally don't have to spend as much time. On the other extreme, I've talked to one scientist at the University of North Carolina who said he probably spends 70% of his time on bureaucracy, because he does animal experiments and, quite frankly, the administrative help in his department wasn't very good, so he had to do all the ethics paperwork himself. The direct quote from him was: "I just don't feel like there's time to do science anymore". And so that seems quite paradoxical.

Stuart Buck  

What are we paying people to do – just to fill out reports about the money that we handed them? That makes no sense. And it's depressing; no one goes into science thinking they're going to spend 70% of their time filling out reports and paperwork, right? They go into science because they love a particular field, and they want to learn more and make discoveries, and so forth. It just drains all the excitement out of science. But here's the problem: every bureaucratic requirement has some justification. There are ethics requirements for animal experiments, and those are there for good reasons, because animals can be abused and treated unethically in experimentation, so we've developed a whole set of procedures to protect animal safety and to protect against unnecessary deaths of animals and so forth. The same goes for experiments involving human beings – and that's, again, thanks to the horrific history of experimentation done in the 20th century on unsuspecting human subjects who were mistreated.

Stuart Buck  

And so there's a whole set of ethical parameters there that nobody wants to get rid of. And it is federal money being spent, so there's going to be some oversight of the budget and how the money is spent, and so forth. Right now there's also a lot of focus on national security: are researchers unwittingly passing top technological secrets to researchers in China? There's probably undue focus on China, and maybe some discrimination, but it's still a fair consideration – how much should we fund research that might unwittingly be used to support a foreign adversary, let's say? So in any event, any bureaucratic requirement you point to, someone somewhere is going to say there's a good reason for that one, we need to keep it. So it's really hard – it's the old death by a thousand cuts. If you point to any one specific bureaucratic requirement, again, there was a good reason for it: there was some scandal, some problem that someone was trying to solve with this rule, with this procedure.

Stuart Buck  

And so that's why there have been many efforts to get rid of, or try to limit, bureaucracy, but they haven't really gone anywhere. What you really need is some person with almost dictatorial authority over an agency like NIH or NSF, to go through the entire bureaucracy and take a red pen and slash through the stuff that isn't necessary or isn't the top priority – with the objective of, let's say, reducing the burden on scientists' time to 20% rather than 44%. You'd have to have someone willing to make some really difficult trade-offs and choices, and prioritise. We have a thousand things that everyone wants researchers to do, and it takes up too much of their time, so you're going to have to slash through some of them, even though individually they might sound like a good idea – you just have to prioritise. So you need someone, or some kind of committee, with the power and the will to actually get rid of some rules and regulations, even ones that might individually be a good idea. And that's politically difficult. But I think it's so worth trying, because otherwise we're on a trajectory where ultimately we'll just be paying for 50% or 60% of scientists' time to go to bureaucracy. It's insanity – they just need more time to focus on their science. That's one of the issues I've written about, but I could dive into many more.

Allison Duettmann  

Just to touch on this a little bit longer: who would be an organisation that could do this? Do you think it would be an individual at each scientific organisation? Could there be something from the GAO? Or would there be something like scientists writing an open letter about the specific things they get particularly hung up on in their research? In terms of thinking about solutions – if it's not as easy as a science god coming in and fixing it – are there any pathways that you think are worthwhile exploring?

Stuart Buck  

Another challenge is that these rules and regulations arise from different places. To dig into the weeds of the American government: there's the Office of Management and Budget (OMB), which has government-wide authority to regulate how federal monies are spent and accounted for, and so they have a lot of budgetary requirements, amongst others. That's one source, and it's under the control of the White House, so the White House could do something about it. Some of the requirements come from agencies like NIH or NSF themselves. Some come directly from Congress – Congress mandates particular practices and says agencies need to look into X, Y, or Z. And some requirements, honestly, I think come from universities that want to behave conservatively, so to speak: they're risk-averse, they want to make sure they cross every T and dot every I, and some universities perhaps over-regulate their own researchers, or over-staff the departments in charge of monitoring and evaluating and submitting budgets and all that. So it comes from many different places – that's one challenge. I do think the White House could, in theory, issue an executive order asking the Office of Management and Budget to review all of its practices and rules with an eye towards slashing the stuff that imposes time requirements on researchers or impacts researchers directly.

Stuart Buck  

The White House could also review its past executive orders, because some of these requirements come from the White House itself. Here's an example: there was an executive order in the 1990s, signed by President Clinton, that says that if you get federal money, you need to certify that you make people wear seatbelts. And again, it's well motivated – nobody is really against seatbelts, it's a good idea and has probably saved some lives. But nearly 30 years later, we have seatbelt laws in various states, and probably anyone who wants to wear a seatbelt does wear one. It's unclear that making people check that box on federal applications really does any good. So you could go back through prior executive orders like that and say: look, here are some executive orders that may have been a nice idea at the time, but we don't need to make everybody at every university certify that they do everything that was ever a good idea in the world. We can prioritise and say that the time has passed when we needed to investigate whether people use seatbelts.

Stuart Buck  

So there are probably any number of requirements like that which, individually – sure, it's fine to make people wear seatbelts, and it's not that much time to check a box. But just in terms of priorities, we should be able to streamline that. The White House could, as I say, go back and look at the requirements it has come up with over decades and see where it could streamline. There are some opportunities the White House could take advantage of if it wanted to.

Allison Duettmann  

I also guess the problem is almost just getting worse over time, because people rarely ever take things out – they just add to the pile. You never really see how large the pile becomes in total when everyone is separately adding to it: your individual thing seems really important right now, without considering the entirety of it. So hopefully it's not getting much worse in the long run. But yeah, that's definitely a really important one, and I couldn't agree more – writing grants is already complicated enough. And if that is a big one, it could be an easy one to cut down on.

Allison Duettmann  

The other thing that you've written about really interestingly is, I think, patents in general. I think the Substack post was actually titled "How we're screwing over researchers" or something like that, and it had at least a big patent component in it. And that's, I guess, targeted at universities to some extent also. So perhaps you could share a little bit more about what you were addressing there, and perhaps even tell the quick story of Katalin Karikó if you feel like it.

Stuart Buck  

Sure. Yeah, I chose a kind of provocative title for that article, which hopefully will get people to read at least a little bit. To rewind: there was this famous act passed in the United States in 1980, the Bayh-Dole Act. Prior to that – it's complicated, but basically – when NIH and NSF, when the federal government, funded research, the government itself would often end up taking control of patents arising from that research. And there was this perception that the government is not the most efficient user of patents: it doesn't know what to do with them, they weren't actually being used very well, or commercialised, or turned into something useful for the market, useful for medical patients and so forth. So the idea was that, instead of having the government take control of patents, let's shift that and have the universities take control, because universities are technically the recipients of all the federal grants that come from NIH and NSF, etc. The money doesn't go straight to a researcher – we talk about a researcher getting a grant, but it's not like the money goes straight into the researcher's pocket, right? It goes to the university. They're the ones who handle the money, and they pay the researcher. ... And you have a lot of indirect costs – that's a whole separate issue. The indirect costs at top universities are often between 60 and 70%. Just to clarify what that means: if you as a researcher get a grant of, let's say, $100 to keep it simple, that $100 is designated for you, the researcher, and then they add 60% on top of that – $60, let's say, in indirect costs for the university – so the total grant would be $160. The 60% is added on top rather than taken out. Anyway, that's just how that works.
And by the way, indirect costs also support a lot of the bureaucracy that universities administer, so the bureaucracy issue is tied in to indirect costs. Anyway, universities started patenting, and the story since then has been: "this is a great success".
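Stuart's indirect-cost arithmetic can be sketched in a couple of lines (a minimal illustration; the 60% rate and the $100 figure are just his round-number example, and real negotiated rates vary by institution):

```python
def total_award(direct_costs, indirect_rate):
    """Indirect costs are added on top of the direct award, not carved out of it."""
    indirect = direct_costs * indirect_rate
    return direct_costs + indirect

# Stuart's example: $100 in direct costs at a 60% indirect rate.
print(total_award(100, 0.60))  # 160.0 total: $100 for the researcher, $60 for the university
```

So a funder wanting $100 of actual research at a 60%-rate university budgets $160; at a 70%-rate university, $170.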

Stuart Buck  

So before 1980, you had all these patents that either didn't exist or went unused, and afterwards you had this huge flourishing of university-based patents. Around 20 years ago, a bunch of European countries also started moving in that direction, trying to give universities more control over patents. But here's the thing: in some of those European countries, the prior rule had not been that the government controlled the patents – the prior rule had been that the professor, the researcher, controlled the patent. In fact, they called it the "professor's privilege" in some of these countries. So they were switching from a different direction: they were switching to the US regime that was perceived as successful, but from a completely different starting place where the professor and the researcher had more control. And there's been some empirical research showing that when European countries moved in that direction, the rate of patenting actually went down. Which doesn't seem too surprising, because universities are often very diffuse – they encompass many departments, and they may not have anyone who's a specialist in what one professor is doing or in the commercialisation of that research – so maybe they put less priority overall on trying to commercialise any one given discovery or possible patent, whereas the researchers themselves have more skin in the game, so to speak. So that's where the empirical evidence seems to lie so far: it would be better – for innovation, for patenting, for commercialisation – to give professors more of a say, and perhaps control over the patents, rather than the universities themselves.

Stuart Buck  

Now, this came to a head with Katalin Karikó, whose story I talk about quite a bit. She was this Hungarian researcher who came over to the US and worked at the University of Pennsylvania, and got demoted repeatedly there, because she couldn't get NIH grants to support the work she was doing on early mRNA research – which at the time was not very popular. Everyone's heard of it now, because it turned out to be the basis of some COVID vaccines, but in the 90s it wasn't popular at all. It was seen as a dead end – something extremely difficult that would never work. So she couldn't get grants to support her work. And that, by the way, opens up a whole other huge topic, which is the role of NIH money and so-called "soft money" in universities – soft money meaning researchers like Karikó are expected to pay for their own salary. They're not given a salary directly by the university, or not 100% of it; they're expected to raise their own salary through their grants. And that makes them very dependent on appealing to whatever NIH wants to fund at a given point in time. So Katalin Karikó was repeatedly demoted and eventually basically pushed out of academia, even after what became the paper that later won her and her co-author the Nobel Prize. Now, of course, no one knew that at the time – the University of Pennsylvania couldn't foresee the future. But the University of Pennsylvania did, as I found from reading some student newspapers there, keep the patents on her work – even after pushing her out of academia. And the student newspaper produced a chart of universities across the country and how much money they were making in royalties from patents.
The University of Pennsylvania was far and away making many times more money than Stanford, or other universities that you might think of as hotbeds of discovery and technological advancement. It said Pennsylvania made something like over a billion dollars in one recent year from the COVID vaccines – apparently from the patents on Karikó's research – even though she's long gone from Penn, and they not only didn't help with her research, they drove her out of academia. So it seems like a glaring unfairness on top of an inefficient process. So yeah, I think that's an area that definitely deserves some reform, or at least some really detailed investigation. I think Congress should fund or require detailed investigations of what is even happening at these university offices that are supposed to be patenting research and commercialising it: how many patents go unlicensed or uncommercialised, how many take too long... There are lots of anecdotal complaints that at certain universities the process takes too long and the university demands too much of a cut. I think it'd be better to have a more systematic investigation to add to the anecdotal stories. But in any event, I do think the empirical evidence from Europe shows that it might be better to move back in the direction of giving the professor, the researcher, more of a say in what happens to their own discovery.

Allison Duettmann  

Yeah, it's crazy when you think about it – it's almost like all the incentives are wrong, or at least a few of them. How do you even come up with that system in the first place? I love that there is – not really A/B testing, but at least some precedent for how it used to possibly work better, and that it was useful to go back and look at that.

Allison Duettmann  

You already mentioned one of the bits on funding, and how researchers have to support their own salaries through the NIH. Another really interesting, super detailed analysis that you've done is specifically on NIH reform – you list a full laundry list there of things that could be improved at the NIH, and I don't think we'll get to all of them; I count almost ten, and we've already touched on some individually. But when you think about the NIH, what are a few crucial areas that you yourself are excited about promoting, that you think could possibly be good reforms applied to the NIH?

Stuart Buck  

Sure, there are any number of ideas, as you say. I think one thing the NIH should consider using more is an approach called, basically, "fund the person, not the project". Now, it's interesting – there is a programme at one of the NIH institutes, the National Institute of General Medical Sciences (NIGMS). If you're wondering what that is, as many folks might be: it's basically the NIH institute that funds basic research, as opposed to the National Cancer Institute, which focuses on cancer, or the National Heart, Lung, and Blood Institute, which focuses on cardiovascular disease, etc. NIGMS is focused on truly basic science that's not necessarily connected to any one specific disease, like some of the other NIH Institutes are.

Stuart Buck  

So NIGMS has this programme called "Maximising Investigators' Research Awards", MIRA – they pronounce it "Mira". This programme is really intended to give researchers more flexibility and freedom: to give them funding for several years where they don't have to spend as much time trying to pre-specify everything they're going to do and then report back on what they said they were going to do three or four years ago. Instead, it's intended to give them more freedom to follow the science – to follow their nose, so to speak, as to what the best ideas are at any given point. And I think there's some evidence that the papers produced under it perform as well or better in terms of citations. That's only one very limited metric; longer term, you'd want to see the rate of breakthrough discoveries, and that's hard to see, because you can't expect a breakthrough every day from everyone – that's impossible, right? And as the Director of NIGMS, Jon Lorsch, has said: if I knew where the next breakthrough was going to come from, I would have already made that discovery myself. So part of his idea of funding is that you want to spread the money widely amongst talented scientists and give them freedom, because who knows where the next breakthrough will come from – especially with basic science. There are any number of stories from basic science where a discovery is made in one decade, and then three decades later it turns out to be amazingly useful or influential.

Stuart Buck  

Yeah, I think that's a good example of an experimental programme at NIH, in the sense that it's a new thing – it's still fairly new, it's been around for a few years. I think that programme could be expanded to other Institutes at NIH, like the National Cancer Institute. There are other Institutes that have a version of this, but it's much smaller – NIGMS funds this type of grant four times as much, when I last reviewed the numbers, as the rest of NIH put together. NIGMS is very much focused on this for basic science, but I think you could try that approach elsewhere at NIH, again with the idea of giving scientists more flexibility and freedom.

Stuart Buck  

And here's another key idea that wasn't in the document I sent you, which is that the NIH should take a more deliberate approach to experimenting and learning from what it does, to take a very meta-science approach. If you just start up new programmes and say, "Alright, everybody is going to do this", then you don't have as much of a chance to learn, to iterate, and to introduce deliberate experimentation. And NIH is big enough that you could do literal randomised experiments in how you hand out money. They fund something like 90,000 grants at any given point in time: there are 90,000 active grants, and each grant often supports multiple researchers. I'm not sure of the exact numbers, but it's hundreds of thousands of researchers that are supported by NIH. That's plenty of opportunity. So you could do a randomised trial that involved 1,000 researchers, and it would be a drop in the bucket compared to what NIH supports. You could randomise 500 to be funded in one way and 500 to be funded in a different way, and then just follow them over time and see what happens: see what the results are from funding that offers more flexibility and freedom versus the more usual NIH way. I think that kind of deliberate experimentation is something that NIH should do. They rarely ever have.

Allison Duettmann  

Yeah, on your last point, it's really interesting to hop back to "fund people, not projects", which is also pretty present right now in the private space. For example, the Liberman brothers are now launching a new effort to fund individuals early on, making bets early on, almost investing in individuals the way you invest in companies. And I think that new approach should hopefully also extrapolate outward through different, perhaps even scientific, funding organisations. I think that would be wonderful.

Stuart Buck  

I like the parallel to venture capital, because there are any number of statements or examples of venture capitalists who say, "Yeah, you're making a bet on the person." There are many examples where the founder of a company ends up pivoting and doing something slightly different, or a lot different. And venture capitalists often are happy with that, because, hey, you tried something, you realised it didn't work, and now you've found something that did work. It would be unthinkable to go to, I don't know, Mark Zuckerberg and say, "Wait a minute, your original proposal said you were going to do X, Y, and Z, and that you were going to spend $5,000 in year three on this line item. Show us that you spent the money that way." You would never want that level of micromanagement of a talented entrepreneur. You want to give them more freedom and flexibility to adapt to the market. I think we should treat a lot more scientists the same way: give them the same respect and autonomy that entrepreneurs routinely have, and the freedom and flexibility to say, "Wait a minute, I said I was going to do X, Y, and Z, but I tried it and it didn't work, and I came up with a better idea the next year. Now I'm going to do the better idea instead."

Allison Duettmann  

I guess otherwise you just assume that people aren't learning while they're actually in the field, and [ ... ] it's a crazy assumption to make in the first place, I think.

Stuart Buck  

Science should be about learning and changing and adapting to new ideas. That's the whole point. If you already knew what you were going to be doing five years from now... I'm paraphrasing one scientist I know of at Pennsylvania, who said, "If I'm doing what I said I was going to do five years ago, I should be fired, because I should have learned and adapted in that time."

Allison Duettmann  

Yeah, I love it. And we only got to a small part of the laundry list that you have for the NIH, including other missed opportunities, but I want to close my part of the podcast with a question, perhaps on a hopeful note, to already lean into the Existential Hope part of the podcast.

Allison Duettmann  

We briefly touched on, when we met, and you've also pointed out again afterwards, that there's actually a really cool new landscape emerging of alternative possible homes for researchers and scientists. I think there's Efa Rose, Future House, Arc Institute, Astera, Spectec; there are a few really interesting new orgs that have been popping up. And we've certainly had a few founders and executive directors of these orgs on the podcast already for individual topics. But could you share a little of how you see that landscape changing and what these new organisations are setting out to do, and perhaps what we can hope from them? We don't have to cover all of them; just the general spirit would be great.

Stuart Buck  

And you could add to that some government agencies, surprisingly enough. There's a health version of DARPA, ARPA-H, which I'm sure listeners are familiar with. It's intended to be a government agency that takes the innovative approach DARPA has taken for decades. And there's a version of that in the UK as well, called ARIA, also trying to imitate DARPA in a way. I think that's all tremendously hopeful.

Stuart Buck  

I do think it's partly born out of discontent with the current system, which, as I say, tends to be very bureaucratic and very top-heavy, a system in which it's hard to get a foothold. The average age at first major NIH grant is so high that we're making people slave away in other people's labs for 15 years before they finally get a foothold as researchers. By then they're long past the age at which a lot of people in previous generations did some of their best scientific work. Some of science's greatest discoveries were made around age 25: Newton, James Watson, there are any number of examples of that kind of thing. There's a lot of discontent with that system. And so there are people within government, and private philanthropists, who say this is an opportunity to diversify the landscape, to come up with new ways, hopefully better ways, of doing science, of funding science, and of organising science. All very important meta-science issues.

Stuart Buck  

And so one thing that I really hope we're able to do, we being the collective meta-science community and policymakers, over the next five to 10 years, is to really deliberately learn from what all these new organisations have produced, and to do it in a systematic way.

Stuart Buck  

I do think one possible risk is that new organisations, just the same as universities, will be tempted to send out press releases about all the amazing things they've done and brag about that. And that's fine, that's to be expected, and often it's even true. But there will be cases in which they fail, and I think it's important to learn from failure, and important to be public about that. So hopefully, over time, we can have a meta-science discussion with a truly honest appraisal of what's working and what has failed. Failure isn't a bad thing; not everything is going to work on the first try. Let's learn from it, and learn how to design organisations more efficiently or more appropriately going forwards. That's where I hope the conversation goes over the next five to 10 years.

Beatrice Erkers  

Thank you for that. So this part is the more philosophical part. We've been talking about meta-science and everything, but now we're diving into the Existential Hope aspects of where science and its progress can take us. You touched on it just now, but I would be really curious to hear: do you feel like things are changing? Are you gaining traction with this?

Stuart Buck  

I do think things are changing. Again, we just touched on that, with all the proliferation of new organisations, both inside government and outside government. I think that's a really hopeful sign. And again, my hope is that, over time, there will be a diversity of approaches and organisations. Something that would probably be bad for science is to say everybody has to fit in the exact same box. That probably isn't good, even if the box makes sense, so to speak.

Stuart Buck  

So we talked about funding the person and not the project. That seems like a good idea, and we should use it sometimes. But I think if literally 100% of science funding was focused on the person, not the project, there would probably be some failure points there as well. There would probably be some things that got missed. There are probably some cases where you should fund an organisation or a team, not the person. So there are lots of different approaches one can use. And in fact, in some areas of science, like the Large Hadron Collider or big astrophysics efforts, you're not funding one person, you're funding a team of 1,000 people, you know what I mean? Any single approach to science funding, I think, would probably be suboptimal.

Stuart Buck  

But I do think, going forward, one thing that could help improve the pace of innovation is having a deliberately diverse approach to how we fund science and the sorts of people who get funded, and so forth. The sorts of people who get funded is also an interesting point to emphasise. There are lots and lots of examples from history where great scientific advances come from places you wouldn't necessarily have expected, or from people who were heretics in their time. Semmelweis and the germ theory of disease: people despised him at the time. There are tonnes of examples like that. That doesn't mean we should go all the way in the other direction; we don't want a science funding system that only funds people who are outcasts and heretics, that would probably be wrong too. But I do think there should be some space within science, like a National Institute of oddball ideas: a deliberate approach to fund some things that are outside the box. Some of them will be crazy and won't work, but we might end up funding some of the greatest breakthroughs ever if we made more space within the scientific funding system for people with truly outside-the-box ideas and approaches that don't get funded currently.

Beatrice Erkers  

Yeah, I remember I was talking to an economist, I think Johan Norberg, and he was talking about how technology often progresses from quite dirty areas, not necessarily the most appreciated ones. I think his example was the technology of online payment solutions coming from porn: people wanting to watch porn online was what drove online payment solutions, and now that's a very useful and established thing. And I'm sure there are more exciting scientific innovations that came from not the most appreciated areas. But it sounds like you're positive. Would you say that you're optimistic about the future?

Stuart Buck  

That's the big question. In general, yes. I think there are lots of possibilities, and there are dangers as well; I think I met you at an Effective Altruism conference. There are certainly areas like nuclear proliferation, biosecurity, or artificial general intelligence, some areas of potential R&D advancement that do possibly carry some risks, and some would say Existential Risks. But the theme of your podcast is Existential Hope, right? The hope is that, with the broad sweep of innovation and improvement, we can address these Existential Risks, and that we can hopefully still keep progressing and making life better for everyone. So I guess that's where my hopes and efforts would lie: trying to figure out what we're doing wrong, all the ways in which we're holding back scientists and science itself, and figuring out ways to hopefully speed up and accelerate the pace of innovation. I think that does offer more hope for the future.

Beatrice Erkers  

I'd be interested to hear, in relation to the increasing pace of progress, maybe in relation to AI specifically: do you see the science landscape shifting due to that?

Stuart Buck  

Due to AI in general: I haven't looked at that much in depth. There are some really amazing advances that have been made, for example AlphaFold and protein folding, and there are other similar tools that offer the potential to speed up at least some components of science. I'm less certain myself about what I've seen out of large language models so far. They sometimes show signs of great sophistication, but then sometimes they just completely hallucinate, at least the ones today. And so I'm a little nervous about that: you could end up with a kind of pollution of the scientific literature, with people using AI models to write papers, thinking this will help them publish more, and then it'll end up online somewhere and might be largely fake, or largely just made up or hallucinated.

Stuart Buck  

Another thing that I worry about, and this is born out of the work that I did while I was at the Arnold Foundation, is that a lot of the published literature just isn't that good. Some of it is outright fraudulent; we hear about more discoveries of academic fraud, it seems, every week. A lot of it isn't reproducible. And even the stuff that is reproducible often isn't described that well in print. One thing that I funded, for example, was the Reproducibility Project in Cancer Biology. Their original idea was to replicate the experiments from 50 top-cited cancer biology papers, and ultimately they could complete fewer than half their intended experiments. The reason was that in every single case, you couldn't just go off the published paper; you had to go back to the original lab and ask them to fill in all the gaps and all the details about what they had actually done. Some of the original labs were not cooperative at all, and in some cases were even hostile; some just said, "We're not sure." And even when they were cooperative, the answer was always that you need twice as much material as you expected, so it's going to cost more and take longer. So I worry about AI models that might be trained on the scientific literature as if what's written down on paper is the complete story. That's often not the case, and I worry that might spiral into even more irreproducible work. So again, I'm cautiously optimistic about some of the specialised tools out there that are based on really rigorous, systematic databases. But then there's a whole broad swathe of millions of scientific articles that I would say probably aren't worth trying to use in an AI model in the first place, or that at least need great scepticism and a lot of corrections before you train a model on them and expect to get useful information out. So it's all over the place.
But yes, that's my answer for now.

Beatrice Erkers  

Yeah, that's very interesting. I hadn't really thought about that, that it would get access to incorrect data, technically. I guess I'd heard about the Centre for Open Science, where they try to pre-register the studies being done so that you can actually check whether they're correct and reproducible, because oftentimes you don't really publish what's maybe not an exciting result. But there you can see what people have published or have been doing research on, even if the results aren't that exciting. Are there any other projects like that which you think are promising, or that seem like they could actually make a difference in this field?

Stuart Buck  

Yeah, there are a lot of directions I can take this. The idea of pre-registration really came out of medicine, from some recommendations that were made, I think, as early as the late 1980s; definitely by the early 90s there were some folks writing about the idea in medicine. They were trying to do meta-research in medicine, to summarise a whole body of literature, and as of the 90s, and probably still somewhat true today, they were very frustrated. I remember an article by a couple of folks at the time where they said it was just really strange to them that you can find much more information about baseball statistics, up-to-the-minute information about every baseball team, every baseball player, everything they've ever done, yet it's impossible to find that wealth of information about clinical trials and medicine, which to them is more important than baseball. So pre-registration was in part a way to get access to information about the trials being conducted on human beings, involving drugs or other types of treatments that might be used in medicine. One motivation for it was to be able to see the whole denominator, so to speak. It's easy to see the published successes and the press releases announcing that a drug will cure something; in the US there's the commercial saying "try this drug".

Stuart Buck  

But it's harder to find the failures, and the idea of pre-registration was in part intended to address that. Subsequently, there was one famous article by Eric Turner and some colleagues, where they reviewed something like 70 or 80 clinical trials that had been done on antidepressants. I'm going from memory here, so I might get a number slightly wrong. But roughly half of the trials showed that the antidepressants worked, and roughly half showed null results: the antidepressant wasn't any better than placebo. And they had access to all this because of pre-registration, and because they went to the FDA. These were FDA-approved drugs, and the FDA demands to see all the evidence, not just what got into the top journal, so they had the whole denominator to look at. What they found was that basically all of the positive trials, I think except one, got published in a top journal. The negative or null trials mostly went unpublished, but I think a few of them were published, and then with a kind of spin or misrepresentation as if they had been positive studies.

Stuart Buck  

And so yeah, and this also ties back to what I was saying about AI: if you looked at just the medical literature on those antidepressants, you would say, wow, they all work, everything's amazing. But if you look at the whole body of evidence that the FDA could look at, you would say sometimes they work, sometimes they don't, or sometimes one medicine is better than another. It's a lot more complicated and messy when you have all the evidence sitting before you. [ ... ] like that in medicine, pre-registration has been growing in other disciplines, psychology and economics in particular, over the past 10 years or so.

Stuart Buck  

The American Economic Association, like the Centre for Open Science, started demanding pre-registration about 10 years ago, and the rate at which that happens has gone up and up on a yearly basis in economics alone. I think those are all hopeful signs of progress, of different fields adopting better practices about being public about what kinds of studies you're doing. But ultimately it's all for naught unless you also publicise the failures and the null results. Again, if you only publish the positive results and don't say anything publicly about the negative results, it's very skewed. It'd be like flipping a coin and only announcing when you flip heads: it would look like you flipped 100% heads, but that wouldn't be true. It would be very misleading.

Beatrice Erkers  

Yeah. I'm going to make a bit of a turn with the next question now. It's a question that we always ask on this podcast, in relation to a eucatastrophe. It's a term that means basically the opposite of a catastrophe: once it's happened, the value of the world would be much higher. Feel free to relate this question to your area of expertise within science, and think ambitiously when I ask you this: if you could envision a specific eucatastrophe for the progress of science, what would that be? What would be an Existential Hope scenario for science? If all the work that you're doing now came true, what would happen?

Stuart Buck  

That's great. Yeah, I first came across the term eucatastrophe in the writings of J.R.R. Tolkien. Did he invent the word? I can't remember.

Beatrice Erkers  

Yeah, I believe he did.

Stuart Buck  

Yeah, that's a super interesting question. The revolutionary side of me says that, in terms of meta-science, what would be really cool is if you could duplicate all federal funding of science, but with an entirely new set of institutions. Imagine we spend $50 billion or so on NIH; what if we added an extra $50 billion for biomedical research, with the condition that it has to be run completely differently from NIH and has to be in the hands of different people. And again, touching on these ideas of [ ... ] proliferating the diversity of the approaches, of the scientists who get funded, and of the ways in which funding is handed out. Because then, thinking very meta here, you have two different systems operating with equal amounts of funding. And now you can really see at a grand scale, hopefully, what happens and what the results are. There would be a chance to test out lots of different meta-science ideas that people have discussed for years or decades. But you really need a kind of grand landscape in which to experiment with that.

Stuart Buck  

So again, it's very meta, but my hope would be that we learn a lot about how to fund science and how to organise scientists into organisations, into labs and universities. Maybe we need entirely different institutions that don't look anything like universities. For example, maybe you should set up a whole National Institute where the rule is you can only fund scientists who are working out of their garage, I don't know. You need some kind of wild ideas out there to try out different approaches, different scientists, different ideas. My hope would be that, at a minimum, we'd learn something from that. But in terms of Existential Hope, I think we might end up breeding a number of great breakthroughs that wouldn't have happened otherwise in today's top-heavy, bureaucratic system, which is so conformist and so focused on doing whatever gets you approval from the existing bureaucracy.

Beatrice Erkers  

Yeah, I really, really like that.

Beatrice Erkers  

If you were to introduce someone who's entirely new to this field. Is there anything you'd recommend that they read or watch? Maybe it's just your own Substack? Or is there anything else that you would recommend for an intro to the field?

Stuart Buck  

Sure. The writings the Good Science Project has produced are a good place to look. You mentioned the Centre for Open Science; they've had a number of publications and projects related to meta-science. There's also more coming out from the Institute for Progress and the Federation of American Scientists; they've had a series of papers on open science, for example. That gives you plenty of reading to start with.

Beatrice Erkers  

That sounds like a great place to start.

Beatrice Erkers  

And how can listeners best stay updated with your work and the work of the Good Science Project?

Stuart Buck  

You mentioned the Substack; it's just goodscience.substack.com. Also, the website is goodscienceproject.org. And yeah, I'd love to hear from folks who have ideas, or who have been reading the writings and want to learn more.

Beatrice Erkers  

Great. Thank you so much, Stuart, for coming on this podcast. We really appreciate it. We'll definitely keep an eye on all the work that you're doing with the Good Science Project in the future. So thank you so much!
