Enabling AI Innovation
'Integrating Frontier AI into our economy and society' Panel Discussion, Imperial College London.

A real pleasure to attend the Enabling AI Innovation panel discussion at Imperial College London this morning. It was great to hear from such a broad range of academic perspectives, in a discussion expertly chaired by Professor Mary Ryan.

I wanted to highlight three of the key themes explored during the discussion:

  • AI in healthcare,
  • how we get the right regulation and
  • the absolute imperative of building trust in AI.

Healthcare:

AI offers vast potential to transform healthcare through, for example, improved diagnosis, personalized treatment and administrative efficiencies, not to mention the possibility of prevention. I have been writing about this for some time.

The potential is incredible, and in the UK we are blessed, through the NHS, with unique access to centralized national data, but we must make sure we put it to good use. The opportunity lies in balancing the use of data for innovation with protecting privacy through transparent processes.

It was fascinating to hear about trials of AI clinicians in four London hospitals, with human-plus-AI systems delivering high-level performance described as “a self-driving car for A&E”. Lessons from these trials will be of great interest, and it is positive that we have a secure data environment in London.

Good quality data that is anonymized, securely held and trusted is absolutely essential if we are to unlock the benefits for citizens. It also opens up the possibility of financing the future of the NHS by sharing lessons with the rest of the world.

Regulation:

Getting the regulatory framework right is a huge part of optimizing outcomes: ensuring safety whilst enabling innovation. This is being grappled with around the world – not least here in the UK, with the Government’s AI Summit next week and a response to the AI governance white paper promised before the end of the year.

Questions were raised about whether the EU's AI Act, which attempts to set out a principles-based approach that categorizes risk, gets the balance right, and there is an even bigger question mark over the impact of Japan’s recent decision not to enforce copyright on data used in AI training.

Clearly, there is a need for a principles-based approach that mitigates risk, promotes competition, protects consumers and encourages innovation. We need strong but flexible regulation. Challenges include the lack of transparency in AI systems, especially around data sources and model processes.

It was interesting to hear about current research that can infer training data from a system's outputs, although this cannot be the only solution: transparency, interpretability, explainability and safety are all key.
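To make that idea a little more concrete, here is a minimal, illustrative sketch of one family of such techniques, a loss-based membership-inference test, which asks whether a model is suspiciously confident about a given record. This is not the specific research discussed on the panel; the model, data and threshold below are hypothetical placeholders.

```python
# Minimal, illustrative sketch of a loss-based membership-inference test:
# flag an example as a likely member of a model's training set when the
# model is unusually confident about it. `predict_proba`, the example data
# and the threshold are hypothetical stand-ins, not any real system.
import numpy as np

def per_example_loss(predict_proba, x, y):
    """Cross-entropy of the model's predicted probability for the true label y."""
    p = predict_proba(x)[y]            # confidence assigned to the correct class
    return -np.log(max(p, 1e-12))      # low loss -> the model "knows" this example well

def likely_training_member(predict_proba, x, y, threshold=0.5):
    """Flag an example as a probable training-set member if its loss is below a threshold.

    In practice the threshold would be calibrated on data known to be outside
    the training set; 0.5 here is an arbitrary placeholder.
    """
    return per_example_loss(predict_proba, x, y) < threshold

if __name__ == "__main__":
    # Dummy two-class "model" that is very confident about class 0.
    dummy_model = lambda x: np.array([0.98, 0.02])
    print(likely_training_member(dummy_model, x=None, y=0))  # True: very low loss
    print(likely_training_member(dummy_model, x=None, y=1))  # False: high loss
```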

Trust:

Without trustworthy models and applications we are, well, nowhere. I was happy to hear calls for greater stakeholder engagement and have myself been calling for greater citizen engagement, including the use of alignment assemblies.

A good point was made about the perverse incentives that can be created when objectives are purely about maximizing profit (BT's announcement of job cuts due to AI was given as an example).

We need to think about how to create economic incentives to augment workers rather than simply replace them. We should be thinking hard about how to upskill and reskill, and about what we do for all the people who won’t be supported.

This is where citizen engagement and work to align community values is so essential.

AI Policy:

In his speech today the PM addressed the risks of AI and announced that the Government will “establish the world’s first AI Safety Institute – right here in the UK”, alongside an intention to “push hard to agree the first ever international statement about the nature of these [AI safety] risks” and plans for an IPCC-like “truly global expert panel” at the summit next week.

With this in mind, I asked the panelists what AI-related policy they would propose if they were PM.

Excellent responses included taking advantage of our multicultural society to consult and co-design with our rich, diverse communities.

Another suggestion was investment in healthcare data, reminding us that there are 10 million patients in London alone. Cleaner, better data, kept safe and secure at source, will unlock vast potential.

There was also a focus on transparency and explainability through a 'designing, developing, deploying, iterating' approach, with the long-term win being citizen acceptance.

Finally, an excellent point was made about the intentionality of developers. Policymakers should be asking developers: what are you building this for? The onus should be on them to think about and explain the purpose, rather than on regulators to prove compliance.

The PM reminded us again that AI promises new advances in human capability and the chance to solve problems that we once thought beyond us. The final question – to the panelists and audience at Imperial this morning – was whether people believed we could enable AI innovation, positively and safely, and I’m happy to report the answer was a universal yes.

Andrew Yakibchuk

React.js/Node.js teams | COO at Crunch.is

11 months

Dr Saira Ghafur, Sounds like an insightful discussion!

Lord Holmes Thank you for your insightful questions, and excellent article summarising it! It is so valuable when academics get to have these conversations with policy makers. Dyson School of Design Engineering

Mike Nash (BA HONS)

Generative AI | AI | business growth finder and advisor. Get the most from AI with minimal risk - AI strategy, AI insights and leading AI advice - Contact me today - CEO - MikeNashTech.com

1 year

Great article Lord Holmes, and some excellent points made. You are entirely correct; trust is an essential aspect of AI. A more trustworthy and secure AI boosts confidence, regardless of its application. But how can we foster trust in AI before any possible AI legislation? It's simple: we should encourage enterprises to establish sound data and AI governance procedures before developing AI. Having a sound data and AI governance strategy in an enterprise increases internal confidence and trust in their own data and technology. Once a solid, iterative, risk-reduction strategy is in place, companies can concentrate on greater returns and expansion. This bottom-up approach to governance will ultimately increase confidence and trust in AI throughout society. As potential legislation (US, EU, or UK) is established, businesses can confidently adapt accordingly without huge costs or investment in re-engineering their AI workflow. Trustful AI governance and innovation are a win-win-win for enterprises, society and economic growth. Thank you for raising this important thinking.

Michael DaCosta B.

Senior Advisor, UK Creative Festival

1 year

GOOD!!! cc Eric Van der Kleij
