The Biorisk Question

By Ben Gordon

How to think through biosecurity risks.

In June 1972, a graduate student at Stanford University named Janet Mertz made a discovery that expanded the possibilities of molecular biology but also triggered a shift in attitudes toward biosecurity.

While working in Paul Berg's laboratory, Mertz found that a restriction enzyme called EcoRI could cut DNA and, in the process, leave behind “sticky ends.” Mertz used the enzyme to cut two pieces of DNA—each isolated from a distinct organism—and then joined their ends together to create a new strand of genetic material. Thus was born recombinant DNA, the backbone of modern biotechnology.

The prior summer, Mertz had mentioned her intention to do this precise experiment while attending a summer course at Cold Spring Harbor Laboratory, a small research institute on the northern lip of Long Island. Mertz's experimental plans frightened other scientists, who quickly enacted a voluntary moratorium on "the cloning of any DNA that might contain potentially biohazardous materials."

In 1973, two professors—Herb Boyer at UCSF and Stanley Cohen at Stanford University—finished what Mertz started. The duo cut up DNA from E. coli and Staphylococcus and recombined them into a single loop, which they then inserted into E. coli cells. As the cells divided, they propagated the recombinant DNA and passed it down to their progeny.

Berg shared the 1980 Nobel Prize in Chemistry, while Boyer and Cohen became co-inventors on one of the most lucrative biotechnology patents of all time (namely, that of recombinant DNA), which earned $250M in lifetime licensing and royalty fees.

However, when Cohen and Boyer first told people about their chimeric DNA molecules, other scientists grew fearful that the technology could be used to make dangerous biological agents. “Oh my God, you guys can really make some dangerous things,” Berg later recalled them saying. The scientific community recognized that this was a key moment: Prior to EcoRI, potential hazards from intermingling genetic material from different organisms seemed like a distant possibility, because technological barriers made it extremely difficult to recombine arbitrary strands of DNA. Mertz’s discovery removed those barriers.

In response, Berg and colleagues organized a large meeting in 1975 to decide what to do. More than 150 scientists, philosophers, lawyers, and journalists gathered at the Asilomar resort near Monterey Bay to devise rules governing recombinant DNA research. The conference ended with a set of recommendations, including a moratorium on some experiments.


Stanley Cohen's laboratory notebook. Credit: National Museum of American History/Smithsonian

Asilomar’s echoes are felt in every biology lab today. The biosecurity guidelines that emanated from that meeting are a big part of why, even now, molecular biologists use weakened forms of E. coli for experiments and laboratories have six or more different waste bins.

But Asilomar’s most impactful legacy, arguably, is that a coalition recognized a pivotal moment of innovation and gathered to think through its possible risks. This legacy is worth revisiting because new technologies now seem to be announced almost daily: gene-editing tools, organisms with synthetic genomes, computationally designed proteins, large language models, and more. Like EcoRI, each of these brings great benefit, but may also inadvertently enable harm.


Starting Point

Not all innovations create new risks. When scientists and policymakers need to evaluate a new innovation, we’ve found that a good place to start is a single question: “If successful, would this technology overcome protections that currently keep things safe?”

Arriving at an answer requires at least two steps. First, determine what currently keeps us safe, and then evaluate whether an innovation would substantively remove those protections.
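
To make that reasoning concrete, here is a minimal sketch of the two-step assessment expressed as code. This is our own illustration, not a formal tool used by any organization, and the class name and example entries are hypothetical.

```python
# A minimal sketch of the two-step assessment described above. Our own
# illustration of the reasoning, not a formal framework; the entries below
# are hypothetical.
from dataclasses import dataclass, field


@dataclass
class BioriskAssessment:
    technology: str
    current_protections: list[str] = field(default_factory=list)   # step 1: what keeps us safe now?
    protections_removed: list[str] = field(default_factory=list)   # step 2: what would this remove?

    def needs_scrutiny(self) -> bool:
        # Flag the technology if it would substantively remove something
        # that currently keeps things safe.
        return len(self.protections_removed) > 0


assessment = BioriskAssessment(
    technology="on-demand DNA synthesis",
    current_protections=["physical custody of pathogen and toxin samples"],
    protections_removed=["physical custody of pathogen and toxin samples"],
)
print(assessment.needs_scrutiny())  # True: an existing protection would be bypassed
```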

Consider viruses: One of the many phenomena that limit the danger of viruses is the slow rate at which they jump from animals to humans, a process known as zoonotic spillover. 75 percent of new infectious diseases arise this way, according to the CDC. As a result, technologies that can circumvent or accelerate this process are considered high risk by most researchers, and the U.S. government has at times issued moratoriums on funding for “gain of function” research, even though such work can enhance our understanding of disease.

Another example is germline genome editing, or changing DNA such that it’s passed on to an organism’s offspring. This carries risks, too. Prior to the development of gene drives, CRISPR, and other gene-editing tools, germline edits were not a major concern because technological barriers made such experiments impractical. Now all the enabling technologies are within reach (and have been unethically demonstrated in human children). Germline editing has been outlawed in more than 70 countries and by international treaties.

In some situations, safety relies on restricting access to information. A controversial paper published in 2012 showed that just four mutations could make the H5N1 avian influenza virus transmissible through the air between ferrets, a common model for human transmission. Because the technical barriers to performing mutagenesis are relatively low, restricting access to knowledge of these mutations is a key way to mitigate risks. This is why researchers are often encouraged to consider whether their work could create so-called “information hazards.”

Perhaps the most dramatic example of “restricting access” concerns DNA manufacturing itself. Prior to the availability of à la carte DNA synthesis, getting hold of a DNA sequence encoding a dangerous toxin or organism required physical access to existing genetic material or to the original organism—a bad actor had to find a tube and steal it, basically. Physical protections on such samples are typically strong, so the risks were low.

All of this changed with on-demand DNA synthesis. Now, all that is needed to get DNA is a data file with the sequence. DNA can be ordered from at least a dozen different companies or printed using a benchtop synthesis machine. Large swaths of the DNA synthesis industry have responded by screening the sequences that researchers order, and will refuse to fill orders—and even notify law enforcement—when appropriate.
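
To give a flavor of what sequence screening involves, here is a toy sketch. It is our own illustration, not any provider's actual method, and the watchlist entries are hypothetical placeholders; real pipelines compare orders against curated databases of regulated agents and toxins using homology search rather than exact matching.

```python
# A toy illustration of DNA order screening (not any provider's actual method).
# Real screening uses curated databases and fuzzy homology search; this sketch
# only checks for exact 50-nucleotide windows from a hypothetical watchlist.

WINDOW = 50  # screen in 50-nucleotide windows (an arbitrary choice here)

# Hypothetical "sequences of concern" -- placeholders, not real sequences.
WATCHLIST = {
    "example_toxin_fragment": "ATGGCT" * 10,
}


def flag_order(order_seq: str) -> list[str]:
    """Return the watchlist entries that share any 50-nt window with the order."""
    hits = []
    order_seq = order_seq.upper()
    for name, ref in WATCHLIST.items():
        ref = ref.upper()
        for i in range(0, max(len(ref) - WINDOW, 0) + 1):
            if ref[i : i + WINDOW] in order_seq:
                hits.append(name)
                break
    return hits


if __name__ == "__main__":
    benign_order = "ATGC" * 100
    print(flag_order(benign_order))  # [] -- no windows match the watchlist
```

Real screening also has to contend with reverse complements, fragmented or recoded orders, and customer vetting, which is part of why dedicated commercial and non-profit screening tools exist.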

Notably, DNA screening began as a voluntary initiative, self-imposed by the industry, much like the first restrictions on recombinant DNA research that came out of Asilomar. And a recent White House executive order moves screening toward a requirement, starting with providers that serve federally funded research.

Despite these examples, it’s worth emphasizing that research can touch on sensitive areas without creating new risks. One could argue, for example, that publishing a recipe to make methamphetamine on the internet is not an information hazard because the same information already exists elsewhere. Making information easier to access, in other words, doesn’t necessarily increase risks.

But then, of course, there are all those technologies that exist in a nebulous zone of gray. These are the technologies that might overcome existing protections, but have unclear risks. 

Groups from RAND, MIT, and OpenAI have, for example, all sought to evaluate whether large language models like ChatGPT create new bioengineering and biodesign risks. Despite the similarities of their studies, they arrived at very different conclusions regarding the risks posed by these models. Their differences seem to stem from disagreements about what currently keeps things safe in the areas they tested, particularly the importance of information hazards.

And what about AI tools to design proteins? It is often argued that these tools can be used to make proteins that don't exist anywhere in nature—and thus potentially sidestep technological difficulties in making toxins and infectious agents. In the worst-case scenario, biological surveillance systems could also be blind to the products, because they could look very different from any known proteins.

As a result, many scientific leaders think this technology requires special attention. David Baker, George Church, and other protein designers have published plans to regulate the technology—including mandatory DNA screening and the storage of all synthesized DNA sequences in secure databases—but also acknowledged that "an international group…should take the lead" on implementing the policies. The details still need to be ironed out.

 As a final set of examples, let’s think through efforts to engineer microbes to perform chemistry.

A few years ago, our Asimov Labs team (then at MIT) and others engineered microbes to biosynthesize benzylamine, a molecule used to make therapeutics, textiles, paints, and CL-20, a rocket propellant that is among the highest-energy materials known.

Reflecting on the key starting question, we first asked ourselves and a panel of external biosecurity consultants, “What protections currently limit the illicit manufacture of explosives?” The answer, it turned out, was not the scarcity of benzylamine: the same molecule made by our engineered microbes is already cheap and widely available online.

Rather, the main obstacle is converting benzylamine into its explosive form, which requires special facilities and formulation know-how that is not available online. On the flip side, our work made it possible to produce benzylamine in an environmentally responsible way, without the toxic byproducts that plague chemical synthesis.

The second case study is yeast engineered to make opioids.

In 2015, Christina Smolke's group at Stanford University engineered yeast to make two types of opioids, thebaine and hydrocodone, from sugar. The scientists did this by adding 21 enzymes—taken from plants, mammals, bacteria, and other organisms—to the yeast cells. The engineered cells made thebaine at a titer of 6.4 μg/L and hydrocodone at 0.3 μg/L. These strains have the potential to unlock cheaper and more effective medicines, but could also be used to manufacture illicit drugs. There are obviously laws that limit the possession and manufacture of drugs. But when this paper was published, the scientists considered whether the engineered yeast cells would, nonetheless, remove obstacles blocking people from getting their hands on illicit narcotics.

At the low titers reported in the paper, the authors estimated that it would take thousands of liters of yeast cells to make enough hydrocodone for a single dose of Vicodin. In other words, scaling up production—and not information or access—was the main bottleneck limiting risk. For that reason, they didn’t deem the engineered cells to be a major risk at the time. 
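
A quick back-of-the-envelope calculation shows why. This is a sketch: the titer comes from the paper, while the roughly 5 mg of hydrocodone per tablet is our assumption for a standard dose.

```python
# Back-of-envelope check of the scaling argument above. The hydrocodone titer
# (0.3 micrograms per liter) is the reported value; the ~5 mg of hydrocodone
# per tablet is our assumption for a single standard dose.
titer_g_per_L = 0.3e-6   # 0.3 ug/L, as reported in the 2015 paper
dose_g = 5e-3            # ~5 mg hydrocodone per tablet (assumed)

liters_per_dose = dose_g / titer_g_per_L
print(f"{liters_per_dose:,.0f} liters of culture per dose")  # ~16,667 L
```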

Antheia, a startup company co-founded by Smolke, now makes thebaine at commercial scale. Even so, this large manufacturing capacity doesn’t necessarily remove barriers to access; the company’s approach is proprietary, and its facilities have the same physical protections as any other chemical plant.

Throughout all of these examples, we’ve left out an important detail. Namely, whose responsibility is it to answer these questions? Should we all gather again at a resort near Monterey Bay?

Ironically, another Asilomar is unlikely to be as effective, according to Paul Berg. Asilomar worked because a coalition formed “before an entrenched, intransigent, and chronic opposition developed,” Berg said. Technologies today are “qualitatively different: they are often entwined with economic self-interest and increasingly beset by nearly irreconcilable ethical, religious, and legal conflicts, as well as by challenges to deeply held social values.” This makes it difficult to organically build a cross-societal coalition.

There is also now a diverse, distributed community working to safeguard these technologies. A growing cadre of organizations—including the American Society for Microbiology, the Danish Centre for Biosecurity and Biopreparedness, iGEM, the Joint Genome Institute, Science Magazine, and Asimov Labs (previously housed at MIT)—has published detailed frameworks and approaches to help other laboratories and scientists evaluate and navigate complex biorisks. Moreover, dozens of non-profit organizations (e.g., IBBIS, NTI, SecureDNA), commercial entities (e.g., Aclid, Battelle, BBN, Gryphon), IGOs, philanthropies, and academics have established efforts dedicated to evaluating, detecting, and addressing biosecurity challenges.

We believe that biotechnology researchers of all stripes should engage with this community and stay apprised of its work. In fact, when Asimov Labs was at MIT, we established a program to extend biosecurity training and awareness down to the level of the individual researcher: every member of the lab was required to perform a biosecurity self-assessment of their work and to report it to the wider group at lab meetings every few months. These assessments were also reviewed by an external biosecurity advisory committee. We established a framework to structure the self-assessments and to help ensure nothing fell through the cracks.

On the government side, the National Institutes of Health (NIH) established the Recombinant DNA Advisory Committee in direct response to the events surrounding Asilomar; the committee has operated in some form for almost 50 years and was renamed the Novel and Exceptional Technology and Research Advisory Committee (NExTRAC) in 2019. The NIH also convenes the National Science Advisory Board for Biosecurity (NSABB), an expert panel that recommends policies to prevent biotechnology from aiding terrorism. The Office of Science and Technology Policy (OSTP), which reports to the President, recently released specific guidance for DNA screening. These groups and others at NIH, NIST, DOE, and DOD study the gamut of technologies, from those that could harm human health (engineered viruses) to those that pose environmental risks (gene drives).

A thorough treatment of biosecurity is challenging. As in the time of Asilomar, it intermingles science, society and policy, and requires a mix of different perspectives and competencies. What’s more, progress in biotechnology is still swift and the status quo changes over time. Grappling with biosecurity risks isn’t a one-time exercise, but an ongoing process. The single question posed in this essay isn’t a panacea for biosecurity, but we still think it’s a good place to start.

Contributors: Ben Gordon, Niko McCarty, Alec Nielsen & Arturo Casini.
