Concerns over the biosecurity risks posed by artificial intelligence (AI) models in biology continue to grow. Against this backdrop, Doni Bloomfield and colleagues argue in a Policy Forum for improved governance and pre-release safety evaluations of new models to mitigate potential threats. “We propose that national governments, including the United States, pass legislation and set mandatory rules that will prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics,” write the authors.

Advances in biological AI models hold great promise across many applications, from speeding up drug and vaccine design to improving crop yields and resilience. Alongside these benefits, however, biological AI models also pose serious risks. Because of their general-purpose nature, the same model that designs a harmless viral vector for gene therapy could be used to create a more dangerous, novel viral pathogen that evades vaccines. Although developers of these systems have made voluntary commitments to evaluate their dual-use risks, Bloomfield et al. argue that such measures are insufficient on their own to ensure safety. According to the authors, there is a notable lack of governance to address these risks, including standardized, mandatory safety evaluations for advanced biological AI models. Although some policy measures exist, such as the White House Executive Order on AI and the Bletchley Declaration signed at the UK AI Safety Summit in 2023, there is no unified approach to evaluating the safety of these powerful tools before they are released.

Bloomfield et al. therefore call for policies focused on reducing the biosecurity risks of advanced biological models while preserving scientific freedom to explore their potential benefits. Policies should require pre-release evaluations only for advanced AI models posing high risks. These evaluations can draw on existing frameworks for dual-use research and should include proxy tests to avoid directly synthesizing dangerous pathogens. Oversight should also address the risks of releasing a model’s weights, which could enable third parties to modify the model after release. Moreover, policies must ensure responsible data sharing and restrict access to AI systems with unresolved risks.
Journal
Science
Article Title
AI and biosecurity: The need for governance
Article Publication Date
23-Aug-2024