AI holds the key to understanding risk today
From the plenary session about AI at UR24 in Himeji, Japan

Would you defend the position that “AI, and not humans, holds the key to understanding risk today”?

And would you do it at the plenary of the largest and most important conference on disaster risk, hosted by the World Bank?

That’s what I just did in Himeji, Japan.

It was a debate where the opposing side was Ivan Gayton, from the Humanitarian OpenStreetMap Team, who does tremendously important work empowering communities where it matters most, in the most difficult places. He is someone very much on top of all the AI hype, and part of his job is to extract the real value from it. So he had great arguments that humans should hold the key.

Before the debate the poll was ~75% pro-human, ~25% pro-AI. So I had my work cut out for me.

My main arguments were: 1) AI is the next step on the path we are already on, 2) it works, and 3) AI for Earth is the best domain to lead on AI for impact.

1) The world has already chosen this path.

Imagine a world where we don't use data or computers to understand disasters, say, a hundred years ago. Clearly, that's not the world we live in, nor the one we want. We use advanced data and computing today. AI is the next logical step on the same path. It's just that this step is a huge leap, like moving from basic arithmetic to Fourier transforms.

Imagine now a world where, within seconds of an earthquake, we get warnings that are relevant not only to the region (as Japan currently does), but to my building, my family, my business. In Tokyo or in Haiti. That world is not here yet, but I believe it is closer and more possible than ever thanks to AI, and things like foundation models.

It is worth mentioning that these systems are explicitly designed to process data and extract patterns, to understand what the data conveys. That is still just one part of the process. Humans still make the decisions, supported by the understanding, the processing, of computers, just as when we use any other kind of data science, dashboards, or tools. AI is a tool, not an answer. AI is a skill we humans develop to assist us. Like a copilot on a plane: it holds the key of processing, of driving; we tell it where to go.

AI holds the key to understanding risk.

2) It works.

That world is closer than ever because AI has already proven to work, much better, much faster, and much cheaper across many domains: translation, transcription, text generation, image generation, weather prediction models like Pangu-Weather from Huawei or Aurora from Microsoft, … Google has shown at this conference how weather predictions that used to take hundreds of hours now take one second. From hours to seconds. This is hugely important for cases like early disaster response, where literally every second counts.

Moreover, AI can also pre-compute much of the effort, which not only means we can shift a lot of the computation workload away from critical moments, but also that we can distribute these pre-computed models widely to communities so they can adapt them, fine-tune them, and create versions specific not to the wider training, but to their own regions, applications, or use cases. Compute once, apply many times.

AI holds the key to understanding risk.

3) AI for Earth is particularly promising.

Understanding our Earth might be the most important thing we do. Our world, our nature, biodiversity, climate change, sustainability, … This is something that matters to everyone, everywhere.

Moreover, we do have large amounts, petabytes, of Earth data. And much of it is fully open data, free from the legal issues that plague AI in the text and image domains. Data that literally covers the whole world, in archives spanning decades.

There is no other domain where the technical needs of AI and the upside of its applications are as aligned as in AI for Earth.

AI holds the key to understanding risk.


Few things are as clear to me as the potential of AI for Earth. The What is clear, but we must also recognize the importance of the How and the When.

When?

Now. We are at that sweet spot of technology where we have discovered and tested the tool enough that it is no longer confined to academia, but not so late that it is mainstream, with established providers, formats, standards, and expectations. We have right now an outsized opportunity to shape AI. To become leaders, to apply it and set expectations, the how. This timing is probably the strongest reason to embrace AI.

How?

This matters. For several reasons, AI is confronting a fork in the road. The large compute and data requirements of AI mean that large budgets at large corporations are particularly well suited to be at the front. And indeed, most of the most powerful AI models do come from Google, Microsoft, OpenAI, … and do use large budgets, which lead to very powerful services. They indeed invented many of the critical components of modern AI. This is great, and it should continue.

But for the rest of us, there should be a choice. I do not want the choice of large serviced models or nothing. I want a world where AI models exist within a thriving ecosystem of shared data, shared compute, shared design principles, and community ownership; where we can come together to make large foundation training runs that are then distributed far and wide, to be adopted and adapted by users who get the same benefits without needing compute resources of their own.

A world which combines large serviced closed models with a long tail of open-source and open-data AI is a better world: more resilient, a more mature AI ecosystem of expectations, standards, and use that caters to the needs, resources, and realities of everyone.

In fact, I feel strongly about this. I've worked with both large corporations and small non-profits, in San Francisco and in Bhutan, for the World Bank and for VC investors. And I saw the need for more open AI for Earth, so Dan Hammer and I decided to pitch philanthropists and started Clay, our AI for Earth non-profit: fully open data, open source, and open for business.

So, of course, yes, I do think that AI holds the key to understanding risk, and I'm all in.



After the debate the poll was ~60% pro-human, ~35% pro-AI: a ~10-point shift towards AI. A world where most people approach AI with skepticism, but move towards adoption, sounds about right to me.


Of course, my position on this debate is not exactly as I explained above, but much more aligned with what Ivan defended. He pointed out that humans should be in the driving seat, holding the key, not AI. That the reality is still far from that vision, especially in the communities he works with. That we must be much more careful, participatory, realistic, and involved when designing AI systems, and keep humans holding the key at each step.

In fact, Ivan and I largely share the same position on the real potential of AI, and on how best to make it happen, in a human- (and community-) centric way. I have worked with HOT, his organization, several times, and I very much see his point. When I lived in Bhutan I also confronted these issues directly, first hand, and had to deal with them. It wasn't about AI; it was in fact very human.

We agree on our position in the middle, but we also agreed to make our points more extreme, to help make the session interesting and bring the audience the important points on both sides. :)

This conference was also extremely rewarding personally. The UR Forum, the organizing body, started in 2010 in Washington DC. I was there. At the time I was a NASA/NRL postdoc with huge doubts about what to do. I loved my job as a scientist, and I also wanted to apply my skills to something with more direct, real impact on the people most in need. Being located in Washington DC, I had plenty of opportunities to attend unusual events. One of them was a funnily named event called “Random Hacks of Kindness”. What a great name.

There I met a guy called Rowan Douglas CBE, who then led re-insurance analytics at Willis. I got the courage to just go talk to him, and he was incredibly kind and encouraging. That is also where I first met Edward Anderson, and Kate Chapman, and many others with whom I would develop a friendship over the years. This whole science of impact started to make sense to me. I wanted to do more, like these amazing people I had met.

I got convinced, at that very conference, to jump off the academic ship, and I started my journey of Impact Science: the National Academies, the climate change NGO Gain, Mapbox, the World Bank, Satellogic, Microsoft, and now Clay.

14 years later, I'm back at this conference. As a plenary speaker. Edward was the one who reached out to invite me to speak at this session. Rowan was also here, on another plenary panel, and we just had a long breakfast together. Mapbox has a booth here. Satellites are present on most panels, with mentions of Satellogic. Several people praised the Planetary Computer, and Clay and AI for Earth are seen as the next huge step, where we are seen as leaders.

14 years after I first met him wondering if an astrophysicist had a role in disaster risk programs.


It is extremely rewarding to be here, and a strong motivation to keep working on the things we do. The work matters: the what, the how, and the when. The people you work with matter. Can't wait to see what happens 14 years from now.
