I had the pleasure of participating in the Clinton Global Initiative’s annual meeting (#CGI2023) this week. The meeting gathered leaders from around the world and worked to secure commitments for action on some of the world’s most pressing challenges, such as climate resilience, health equity, economic recovery and growth, and AI and health. I was very glad to have a fireside chat with Chelsea Clinton on #AI and #biosecurity and then to hear from a series of compelling leaders working to use AI to help solve problems in medicine and public health.
Some highlights of our discussion at #CGI2023:
- Promise of AI for global health: We should work to incorporate AI into forecasting and disease prediction strategies - it could help identify patterns that enable earlier warning and intervention. AI can help drive new medicine and vaccine development, improve manufacturing, and identify new diseases where already licensed medicines can make a difference. It can make diagnostics and imaging faster and put those tools into the hands of frontline health workers, as well as health care providers working outside traditional health care systems. To realize this promise and other AI contributions to global health, we'll need interaction between AI providers and datasets, business models that make it possible, privacy protections, systems to prevent bias, and more.
- Risks of AI: In addition to the risks of bias, intellectual property concerns, and workforce impact, there is a possibility that AI tools – both large language models and bio-design tools – could be used for biological research that results in pathogens more lethal or more transmissible than those that now appear in nature. If that occurs, there is a risk that such work could precipitate high-consequence accidents or deliberate events that initiate outbreaks or even pandemics. AI tools could also be used to create variants of pathogens with characteristics that help them evade current surveillance systems or get around human immune system protections. These areas of risk need special monitoring, attention, and governance.
- Harnessing the promise of AI will require governance and strong management of these risks. To prevent these risks from materializing, AI technologists, biologists, governments, and the policy community need to engage in developing solutions. Technologists and AI companies should design and conduct technical evaluations of AI tools before public release to determine risks and address them. The life science community, working with government, needs governance processes in place that address the risks of research intended to create more lethal or transmissible pathogens, and should instill internal review processes ahead of the implementation of government systems. Both AI and life science research institutions need to be engaged in setting up strong review processes and in educating rising researchers about these issues. Internationally, governments should share best practices that emerge around addressing these risks, and the UN system -- on both the security and health sides -- has key roles to play in providing guidance to member states.
There is critical work to do in the time ahead, and events like CGI are key global platforms for engaging with leaders from around the world to help envision wise approaches and actions ahead.