### The ‘Throwing Eggs at a Wall’ Approach

As a follow-on from my last post, I'm trying to figure out the right questions to ask about biosecurity. I'm still an outsider to the field, so I'm making an effort to log my thinking as a 'warm-up' strategy and get some momentum snowballing. I also think there is value in just writing shit down and putting it out there: it helps me think, I might come up with random useful nuggets by sheer dice rolling, and it's a good signal of my interest.

So here's an essay on some things I've been working on over the past week, mostly through reading, web-link following and talking to people. I am uncertain about a lot of these claims, and have done my best to qualify that uncertainty in this post. But regardless, hopefully it is useful to anyone reading this.

Because my mental model of the field is still unstructured, I didn't think it was worth putting my thoughts in a neat and polished form; they are likely to change anyway. I am biased towards this style of writing and working because of my experience as a quantitative trader, where my views might be valid for a timespan of anything from a few seconds to a few months. I call this splattering of ideas 'thought blobs'. So what I've done here is simply write a stream of logical chains from my research, in order to flesh out and clean up some of my thoughts. Again, ideas are subject to change as I learn more, and I really welcome feedback.

I'll talk about:

- Info-hazards
- Books that I am reading to get historical context
- Lab leaks
- How I am trying to figure out whether big biological risks are actually neglected

### The Prickly Part - Info-hazards

#### Definitions

An [[Information hazard]] is when disseminating information potentially causes harm, like posting instructions for how to make a bomb on the internet.
#### Well-flagged concerns about info-hazards in biosecurity

One of my concerns in trying to do work in biosecurity is info-hazards. My rough model is: when I write about biosecurity, I might unintentionally disseminate information about science, threat models, or scientifically useful pieces of history that could be used by malicious actors to cause harm.

That kind of hazard seems obvious, and there are details in the document I have linked below. But I also think there is another, weaker type of info-hazard that comes from, say, discussing a foreign actor (country or organisation X) and saying something like:

> X might Y! So we should be careful!

This could be harmful if 1) it's untrue, and/or 2) it's spoken in a way that ratchets up the tension between X and the actors invested in what X is doing. This kind of hearsay, up-the-ante tension was evident in the nuclear arms race between the US and the USSR, and a thrilling account is given in the first couple of chapters of David Hoffman's book, [The Dead Hand.](https://en.wikipedia.org/wiki/The_Dead_Hand)

I am new to the field, so I am exercising caution in disseminating anything that could cause harm. Overall, I am invested in learning more, and so I plan to read literature recommended in the article below, including:

- Information hazards in biotechnology, a [paper](https://pmc.ncbi.nlm.nih.gov/articles/PMC6519142/) here
- Biosecurity Dilemmas by Christian Enemark

At this point, I feel like I need to talk to people to get a better intuition and some feedback on whether working on info-hazards would be useful. This is definitely one for the to-do list.
My approach to info-hazards right now is taken pretty much verbatim from Chris Bakerlee and Tessa Alexanian's [info-hazard guidance](https://docs.google.com/document/d/1VSfU3GiZumHDX2hoz3YY1PT2dQHtkbrfO8xLxI9BTGE/edit?tab=t.0), because currently I have no real prior, and the suggestions there seem sensible. The two main things that stuck out to me in the document were:

1) Try to avoid discussing or creating new biorisk threat models.
2) Talking about 'history' is probably safe, since incidents are already likely to be well documented, so one is unlikely to do additional harm.

The document also states that the most important issues in biosecurity don't require specific threat models to be discussed. I broadly agree with this because:

- Just from the epidemiology side in global health, good policy for dealing with pandemics doesn't actually rely on in-depth technical, biochemical knowledge of how pathogens work. Questions along the lines of 'disease X that is airborne and kills people within 2 hours' are probably sufficient.
- A lot of good policy decisions and problems just don't require any technical knowledge at all. Rather, it's also stuff like convincing people that vaccines don't cause autism, which is a whole different kind of prickly problem. I really got this vibe from reading [The Dead Hand](https://en.wikipedia.org/wiki/The_Dead_Hand).

On the flip side, for something like mirror life (which I plan to write about soon), I do think some form of threat model may need to be discussed, at least in a vague fashion, since it's a new technology, and having a better idea of how it might be used for harm could be useful in designing policy. But so far, it's hard to know how much and how in-depth the discussion needs to go. I find this to be quite meta. I plan to review some mirror life materials in my next few essays. I need to be more focused, god-damn!
Some specific topics I am unsure whether to write about, but actually would like to (out of my own curiosity, personal interest, and wanting to add to open discussion), include:

- digging into the biochemistry of mirror life
- the exact mechanisms by which pathogens might be engineered to do other things, whether harmful or not.

Info-hazards make the already prickly problem of biosecurity pricklier, and add a new dimension to the whole thing. Working through info-hazards is only worth the effort if solving the underlying problem is worth the effort, and I currently believe that in the case of biosecurity it is. So the problems are worth trying to get through, and I shouldn't be discouraged. That said:

- One thing that concerns me is that, because of information barriers, there may already be a lot of groups outside the biosecurity / EA philanthropic sphere that know a significant portion of what we are trying to discover. So there could be a lot of redundant effort on that front, if governmental organisations have more edge.
- Info-hazards in general add a lot of redundancy to working in the field, because groups may potentially overlap. I find this often emotionally discouraging.

These are some of the reasons I have decided to tilt slightly towards starting with the history of biosecurity. I've written a section on this below.

### Historical Context on Biosecurity

In light of info-hazards, I've done some super preliminary investigation into some scattered topics in biosecurity. My aim right now is not to do a deep dive, but rather to find some topics that interest me, and to see if there are any areas where I could have some potential impact. So far, I'm reading:

- The Genetic Age by Matthew Cobb
- The Dead Hand by David Hoffman

I've just finished reading [[The Genetic Age, Matthew Cobb]].
I need to do a second pass, but so far it's given me a useful narrative timeline of the history of genetic engineering, genetically modified crops and synthetic biology. The book focuses more on history, ethics and societal implications than on the hard science, which I think makes it appropriate for science researchers. On the science side, it covers:

- the development of recombinant DNA in the 1970s
- genetically modifying crops
- gene drives and ecosystems.

One thing it pays interesting attention to is the history of lab leaks, and incidents that have caused shutdowns in the past. The book also offers a fairly strong view on the potential ethical harms synthetic biology / genetic engineering poses today. It gives a particularly damning account of He Jiankui's CRISPR experiment on two human embryos, marking it as clearly unethical, but leaves some ambiguity about who's calling the shots on genetic engineering ethics in China. On the flip side, there are convincing arguments for properly managed genetic modification in agriculture, where he references vitamin A fortification via golden rice. Overall I got the vibe that Cobb feels it's not worth using these technologies for purposes beyond pure scientific curiosity. This reminds me of his quoting Paul Berg (in the context of the genetic 'language'):

> Paul Berg, argued in 1993: To be fluent in a language, one needs to be able to read, to write, to copy, and to edit in that language

Here's a brief timeline of events the book covers, which I think is a useful reference to keep in my back pocket for later on. I think the key conference here is the Asilomar conference, and the subsequent research moratoriums after that.

- **Early 1970s**
	- _1971_ – Paul Berg cancels *part* of a risky experiment involving SV40 virus – **first research pause** in genetic engineering.
	- _1972_ – Marked a shift to **precise gene manipulation** with recombinant DNA.
	- _Nov 1972_ – Scientists convene in _Honolulu_ to discuss bacterial plasmids.
	- _1973–74_ – Growth of public awareness and concern about recombinant DNA.
- **1974**
	- _July_: **First public research moratorium** announced.
	- _Oct_: 200 researchers meet in _Davos_ to discuss **genetic engineering ethics**.
	- _UK_ and _France_ set up safety and ethics committees.
- **1975** – _Asilomar Conference_ takes place, initiating global debate on genetic research regulation.
	- Scientists call on the **NIH** to assess hazards and develop containment guidelines.
	- **Media attention** intensifies; corporations like _Biolabs_ support research pauses.
- **1976** – _Harvard_ applies to build a P3 biosafety lab; greater attention to containment.
- **1977** – GMAG (_UK regulatory body_) formed; _Chang and Cohen_ advance E. coli gene integration methods.
- **1979** – Public and scientific interest in recombinant DNA safety wanes; debate largely fades.
- **1997** – Ethical concerns emerge globally with the rise of GM crops; public distrust grows.
- **2012** – Third major **pause in genetic research** after concerns around engineered H5N1 bird flu.
- **2015** – Scientists declare it **irresponsible to edit human embryos** using CRISPR.
- **2018** – _He Jiankui_ conducts controversial CRISPR experiment on human embryos in China, causing mutations – marks **entry into heritable genome editing**.
- **Recent Years**
	- **Gene drive** technology developed, capable of altering entire ecosystems (e.g., to eliminate malaria mosquitoes).
	- Experiments to simulate future pandemics by creating **deadlier pathogens** spark biosecurity fears.

In this vein, I wanted to dig a little more into the history of lab leaks. What I've found so far leads me to believe that accidental exposure is still a problem, is tractable, and is actually more prevalent than I thought.
I am also currently reading 'The Dead Hand' by David Hoffman, and the chapters on biorisk give a very detailed narrative of mishaps in the Soviet and US bioweapons programs. Detailed narratives are given on the specifics of several programs, such as:

- The Japanese biological weapons experimentation in Manchuria during World War II
- The beginnings of the Soviet smallpox weaponisation attempts, as well as accounts of the scientists who were pressured to take part
- The US biological weapons programs
- The Soviet drive to push genetic engineering experiments further, given the US's progress in that area.

Then there are also mishaps which are documented in detail. The events I found particularly worrying include:

- [The 1968 Dugway sheep incident](https://en.wikipedia.org/wiki/Dugway_sheep_incident), where around 6,000 sheep died after an aircraft delivery system failed to close during a VX nerve agent test.
- [The 1971 Aralsk incident](https://en.wikipedia.org/wiki/1971_Aral_smallpox_incident), where three people died, and the details only came to light in 2002.

I found these examples worrying because:

- They were pretty dumb mistakes.
- There's nothing really preventing us from making them again.

As part of this, I also stumbled across the paper [[High-risk human-caused pathogen exposure events from 1975-2016 - PMC]], a dataset of around 71 non-natural human exposures to highly pathogenic agents. If the data is solid, that is a lot more frequent than I expected. Given that state actors are actually incentivised to hide lab leaks, this count of 71 is probably a lower bound; the dataset already implies more than one event per year on average, and the true rate is unbounded above.

> The research and development of biological weapons is forbidden by the BWC, so that violations of this convention, and any attendant accidents, would be expected to be kept secret.
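As a sanity check on that intuition, the base rate implied by the dataset is easy to compute. A minimal sketch (the underreporting multipliers are made-up assumptions, purely to show how the lower bound scales):

```python
# Back-of-the-envelope rate implied by the dataset above:
# 71 recorded high-risk exposure events over 1975-2016 inclusive.
events = 71
years = 2016 - 1975 + 1  # 42 years in the sample window

observed_rate = events / years  # recorded events per year (~1.7)
print(f"observed: ~{observed_rate:.1f} events/year")

# Recorded counts are plausibly a lower bound if states hide leaks;
# these underreporting factors are hypothetical, purely illustrative.
for factor in (2, 5):
    print(f"x{factor} underreporting: ~{observed_rate * factor:.1f} events/year")
```

Even at face value, the dataset implies noticeably more than one event per year, which is the sense in which 71 reads as a lower bound.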
To state the obvious though, there seems to be only one study on this, so I'm trying not to put too much weight on it. One concern the paper highlights is that there is some ambiguity about what counts as a qualifying event. I would love to see a replication of this work, to check whether the order of magnitude is about the same, and an up-to-date version as well. Alternatively, maybe it could be a cool project I could do myself, but I still think I need to resolve some uncertainties before dedicating more time to it.

As part of this, I am wondering if it's worth looking at lab safety standards as a potential area to spend more time working in. With my physics background, I wonder if I could contribute in the areas of anti-microbial surfaces or far-UVC, as these seem like they could be first-order useful for lab safety. There are also other techniques, like maintaining a negative pressure differential to make sure airborne microbes can't escape.

Currently, there is a framework called '[biosafety level](https://en.wikipedia.org/wiki/Biosafety_level)', a set of guidelines defined by the Centers for Disease Control and Prevention in the US, as well as by the respective organisations in the EU and Canada. Just from the wikipedia page:

> A **biosafety level** (**BSL**), or **pathogen/protection level**, is a set of [biocontainment](https://en.wikipedia.org/wiki/Biocontainment "Biocontainment") precautions required to isolate dangerous [biological agents](https://en.wikipedia.org/wiki/Biological_agent "Biological agent") in an enclosed laboratory facility. The levels of containment range from the lowest biosafety level 1 (BSL-1) to the highest at level 4 (BSL-4).
> In the United States, the [Centers for Disease Control and Prevention](https://en.wikipedia.org/wiki/Centers_for_Disease_Control_and_Prevention "Centers for Disease Control and Prevention") (CDC) have specified these levels in a publication referred to as Biosafety in Microbiological and Biomedical Laboratories (BMBL).[^2] In the [European Union](https://en.wikipedia.org/wiki/European_Union "European Union") (EU), the same biosafety levels are defined in a [directive](https://en.wikipedia.org/wiki/European_Union_directive "European Union directive").[^3] In Canada the four levels are known as Containment Levels.[^4] Facilities with these designations are also sometimes given as **P1** through **P4** (for pathogen or protection level), as in the term *P3 laboratory*.[^5]

There is also a paper in Nature Communications, which I plan to dig into, that argues for more stringent and standardised biosafety protocols: [Pei, L., Garfinkel, M. & Schmidt, M. Bottlenecks and opportunities for synthetic biology biosafety standards](https://www.nature.com/articles/s41467-022-29889-y).

### Are global catastrophic biological risks actually neglected?

In my previous post I outlined some general budgets of philanthropic organisations working on the life sciences. I still need to do more work on this (it's on the to-do list), but I would like to improve my estimates of the amount of dollars invested. Just a brief reminder of [C.K.'s report](https://forum.effectivealtruism.org/posts/pnincG5vW8Far8Ggg/how-well-funded-is-biosecurity-philanthropy), which still seems to be the most comprehensive I've seen so far. I found the following table from the report especially useful: biosecurity investment by philanthropic foundations. My quick take is that even though the fields are muddily defined due to cross-over, the order-of-magnitude estimates are useful.
| **Area**                | **Weighted $** |
| ----------------------- | -------------- |
| AI-Bio                  | $1,472,633     |
| AMR                     | $114,129,071   |
| COVID-19                | $736,317       |
| Disease Surveillance    | $204,696,010   |
| DURC                    | $2,208,950     |
| Fieldbuilding           | $2,208,950     |
| Pathogenesis Research   | $197,332,845   |
| GCBR Priority Areas     | $37,552,146    |
| Health System Readiness | $11,044,749    |
| Laboratory Preparedness | $8,835,799     |
| PPE                     | $4,417,900     |
| Rapid Vaccines          | $309,252,965   |
| Therapeutics            | $142,109,101   |

Some really subjective thoughts from me on this table:

- As the report agrees, there is a lot of money in rapid vaccines, disease surveillance and pathogenesis research, and the skew towards these areas looks quite big.
- GCBR priority areas still look neglected.
- Laboratory preparedness looks neglected.
- AI-Bio is also neglected.
- PPE investment seems low versus non-philanthropic organisations working on it.
- Therapeutics seems low versus non-philanthropic organisations working on it.

One really rough metric (although probably not that meaningful for existential risks) that could be built from better estimates of philanthropic funding is simply the currently allocated funding divided by the total cost of a harm scenario. Using this means throwing all the clear issues with this kind of metric under the bus, but: the lower the number, the more neglected the issue relative to its potential harm. Building a list of high-impact issues with these scores would help me get a better feel for the neglectedness of each issue versus the other areas.

But since trying to build metrics might be oversimplifying things, one thing I am looking at is getting a better handle on who's working on what. There's a Substack called [[GCBR Organization Updates - April 2025]], by Anemone Franz and Tessa Alexanian, which as of writing seems to publish monthly rundowns on who is doing what. Some organisations I hadn't heard of include:

- Sentinel Bio
- SecureDNA
- Brown Pandemic Center
- and more.
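The funding-over-harm metric described above can be sketched in a few lines. The funding figures below are the weighted totals from C.K.'s table; the harm-cost figures are placeholder numbers I made up purely to illustrate the calculation, not real estimates:

```python
# Rough sketch of the 'allocated funding / cost of harm scenario' metric.
# Funding: weighted totals from the table above (real figures from the report).
funding_usd = {
    "Rapid Vaccines": 309_252_965,
    "Disease Surveillance": 204_696_010,
    "Laboratory Preparedness": 8_835_799,
    "AI-Bio": 1_472_633,
}

# Expected cost of the harm scenario each area guards against.
# These are HYPOTHETICAL orders of magnitude, for illustration only.
harm_cost_usd = {
    "Rapid Vaccines": 1e12,
    "Disease Surveillance": 1e12,
    "Laboratory Preparedness": 1e11,
    "AI-Bio": 1e12,
}

def neglectedness_scores(funding, harm):
    """Areas sorted by funding/harm ratio; lower = more neglected."""
    scores = {area: funding[area] / harm[area] for area in funding}
    return sorted(scores.items(), key=lambda kv: kv[1])

for area, score in neglectedness_scores(funding_usd, harm_cost_usd):
    print(f"{area:25s} {score:.2e}")
```

Even with made-up denominators, the point of the exercise shows through: areas like AI-Bio sink to the bottom of the ranking unless their plausible harm scenario is orders of magnitude smaller than a pandemic's.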
I am not sure if this is a complete list, but there are a lot of helpful nuggets. It also helps me figure out which areas within the biosecurity space might still be neglected. There are also job postings and progress reports, so I suggest anyone who's interested check out that page. Knowing what's going on is helping me build a better model of how organisations outside of government can provide meaningful, outsized edge to the field of biosecurity.

I am still unsure of the best models of impact for me as an individual. In my earlier post, I surmised that I might be helpful in physics-based approaches to preventing pandemics, and I learned from these organisation updates that there is already an organisation working on far-UVC called [Blueprint](https://blueprintbiosecurity.org/u/2025/03/Blueprint-for-Far-UVC-PREPRINTv1.0.pdf).

One thing I'm also particularly curious about is the Chinese and Russian commitments to biosecurity, but obvious problems like language barriers make this hard for me to get into. I really need to learn a bit of introductory Mandarin!

As part of learning where the neglected issues are, I'm also trying to figure out how philanthropic organisations can offer value versus governments. There is a section on Open Philanthropy's website about possible cause areas, but so far I am still pretty uncertain about whether I should dig deeper into this.
From [[Biosecurity, Open Philanthropy]]:

- *Advocating to policymakers to improve biosecurity initiatives*
- *Supporting general research on the magnitude of biosecurity risks and opportunities to reduce them*
- *Improving and connecting disease surveillance systems so that novel threats can be detected and responded to more quickly [22](https://www.openphilanthropy.org/research/cause-reports/biosecurity#footnote22_nzd75zq)*
- *Reducing the risks of dual use research by promoting stronger oversight mechanisms and cultural norms of caution amongst researchers [23](https://www.openphilanthropy.org/research/cause-reports/biosecurity#footnote23_5tllw2j)*
- *Developing novel therapies, such as broad-spectrum flu vaccines [24](https://www.openphilanthropy.org/research/cause-reports/biosecurity#footnote24_2pm9o6c)*
- *Improving the capacity for rapid production of vaccines in response to emerging threats [25](https://www.openphilanthropy.org/research/cause-reports/biosecurity#footnote25_npmulr3)*
- *Creating or growing stockpiles of important medical countermeasures [26](https://www.openphilanthropy.org/research/cause-reports/biosecurity#footnote26_pkh5al1)*
- *Improving preparedness of public health and law enforcement institutions [27](https://www.openphilanthropy.org/research/cause-reports/biosecurity#footnote27_5agzurb)*