Last month Tyler Smith and Miyu Niwa went to the World Health Assembly and the 2024 AI for Good Summit.
Not surprisingly, #AI, Digital Public Infrastructure (#DPI), and the lack of progress on the Sustainable Development Goals (#SDGs) were on everyone's mind.
What’s missing from the conversations:
Frank and realistic discussion about infrastructure policies: AI requires substantial hardware, energy, and computational power to work. In the majority of low- and middle-income countries (LMICs), data hosting and protection policy requires on-premise data storage and processing. We need solutions that meet LMIC privacy and data ownership needs while pooling expensive and highly sophisticated technical assets, so that countries can take full advantage of AI.
Useful tools to measure the incremental value of tech: Every country's government has to make trade-offs. In LMICs, trade-offs across sectors often result in underfunded health programs. Within health, these trade-offs often mean deciding who goes without much-needed services, drugs, or basic utilities. Yes, AI for health can be transformative, but adoption will require convincing the funders, the clinic managers, the doctors, and the nurses that a digital intervention is (a) valuable and (b) superior to what they already do. In a place where stockouts of essential drugs are the norm, how would you convince a funder that $100,000 is better spent on an algorithm to reduce drug shortages than on buying those missing meds? We lack standard, useful tools to properly evaluate new digital tech in context.
A focus on reliability and trustworthiness in simple use cases: With big hopes come big expectations. The excitement around AI for health is skewed toward apps that support health providers and patients directly (think AI vision to better read diagnostics and chatbots for medical advice). While we are excited about this, those applications face very real challenges, and there are tangible benefits to deploying AI tools in support of users further upstream in the health system. Large language models may not be able to provide reliable medical advice (don't eat rocks!), but they are very good at parsing large troves of unstructured information to find what you're looking for. They're also good at coding. We see some very clear use cases: helping make sense of complicated clinical guidelines and health policy, parsing mountains of health stats to return that needle in the haystack, and generally meeting non-technical users where they are. Implementing these use cases requires some advances in structure and reliability, which we are working on now at Cooper/Smith.
Read their full piece here:
https://lnkd.in/gy3BxHuS