Where the AI Rubber hits the AI Road...in Hyperspace

A recent presentation by Eduardo Ustaran attracted a lot of attention at #GPS24.

In the presentation, Eduardo highlighted that the AI Act establishes six different levels of risk depending on the type or purpose of the AI technology in question:

  1. Prohibited AI
  2. High-risk AI Systems
  3. Transparency Risk AI
  4. General-purpose AI models
  5. General-purpose AI models with systemic risk
  6. All other AI not covered by the above

Skewed Risk

Six levels of risk. That’s six quantifiable tiers of ‘probability of failure’ classification.

Understanding ‘what is at risk’ is fundamental: only then can you quantify it and select your level of risk appetite. But it’s not really just the shiny new AI tooling, the application, or the device that houses the AI firmware we need to look at. It’s measuring the impact it could have through the relationships and dependencies it has with existing or new business processes (and existing or new inventory) that it may interface with.

At what is supposed to be the conceptual design stage (though it currently feels more like the 'we need to get AI ASAP' stage), you can take a first stab at the level of risk involved. You can do this based on the perceived outcomes or benefits the AI product is offering, using the “six different levels of risk” from the AI Act as a guide. For some organisations this may be the full extent of their risk analysis and their justification for spending part of the budget on AI.

But this is where the assessment of risk to an organisation can get skewed. We can think of risk as the probability of failure multiplied by the impact should that failure occur. If we don’t understand the impact, by having clear sight of the relationships and dependencies with systems upstream and downstream of any AI integration, we end up looking at the AI in isolation. We focus on the direct benefits, and the risk assessment is skewed because the full impact of failure on the organisation never enters the calculation.
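
To make that concrete, here is a minimal sketch of that calculation in Python. Every number in it is an illustrative assumption, not a real figure:

```python
# A minimal sketch of risk = probability of failure x impact.
# All values below are illustrative assumptions, not real figures.

def risk(probability_of_failure: float, impact: float) -> float:
    """Risk as the probability of failure multiplied by the impact."""
    return probability_of_failure * impact

# The AI tool viewed in isolation: only its direct impact is counted.
p_failure = 0.05                    # assumed probability the AI fails
direct_impact = 100_000             # assumed cost if the tool itself fails
isolated_risk = risk(p_failure, direct_impact)

# The same tool with its downstream dependencies counted in.
downstream_impacts = [250_000, 40_000, 75_000]  # assumed dependent-process costs
full_risk = risk(p_failure, direct_impact + sum(downstream_impacts))

print(f"Risk in isolation:      {isolated_risk:>9,.0f}")  # 5,000
print(f"Risk with dependencies: {full_risk:>9,.0f}")      # 23,250
```

Same probability of failure, a very different risk figure, purely because the impact term now reflects the dependencies.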

And talking of upstream systems: the AI models are fed with data from those very systems. Any predictions, content, recommendations, or decisions rely on accurate and timely data being received by the AI model. This needs a thorough understanding of where the data comes from, what could affect or delay it, and the various tributaries and rivers the data flows through before reaching the AI model’s estuary. Only then can we know those predictions, content, recommendations, or decisions are valid.
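
One way to get that sight is to map the feeds and walk them. Below is a hypothetical sketch; the system names and the ‘feeds’ graph are invented for illustration, not taken from any real estate:

```python
# A hypothetical data-lineage map and a walk of everything upstream
# of the AI model. All system names here are invented examples.

from collections import deque

# Directed edges: each system lists the systems it receives data from.
feeds = {
    "ai_model":      ["feature_store", "crm_extract"],
    "feature_store": ["sales_db", "web_events"],
    "crm_extract":   ["crm"],
    "sales_db":      [],
    "web_events":    [],
    "crm":           [],
}

def upstream_sources(system: str) -> set:
    """Breadth-first walk of everything that ultimately feeds `system`."""
    seen, queue = set(), deque(feeds.get(system, []))
    while queue:
        node = queue.popleft()
        if node not in seen:
            seen.add(node)
            queue.extend(feeds.get(node, []))
    return seen

print(sorted(upstream_sources("ai_model")))
# ['crm', 'crm_extract', 'feature_store', 'sales_db', 'web_events']
```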

GIGO & Bias

We all know the phrase “Garbage In, Garbage Out”; it has been around for as long as computers have been used. Fifty or sixty years ago garbage was relatively easy to spot. But with today’s deep learning algorithms and machine learning tools, which form their internal probability matrices from huge data sets, it is practically impossible to see whether a single piece of garbage sits as an outlier in the model, or whether a particular set of inputs is being acted on, introducing bias into the system.
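
A toy, deliberately synthetic illustration of how little garbage it takes (real deep-learning bias is far subtler than a skewed mean):

```python
# A synthetic GIGO example: one corrupt reading hidden among ten
# thousand plausible ones. The numbers are invented for illustration.

clean = [100.0] * 9_999            # 9,999 plausible sensor readings
polluted = clean + [1_000_000.0]   # one piece of garbage slips in

mean_clean = sum(clean) / len(clean)
mean_polluted = sum(polluted) / len(polluted)

print(f"Mean without the outlier: {mean_clean:.2f}")    # 100.00
print(f"Mean with the outlier:    {mean_polluted:.2f}") # 199.99, doubled
```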

The immediate impact of any bias may be difficult to quantify or assess, and may not even matter to the task at hand. However, without knowing how the AI’s outputs cascade downstream as inputs into a plethora of other systems, the true nature of the impact is easily missed.
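
As a hypothetical sketch of why that matters, assume a 2% skew in the model’s output and three invented downstream stages, each with its own made-up sensitivity to what it consumes:

```python
# A hypothetical sketch of bias cascading downstream. The stage names
# and sensitivity factors are invented assumptions for illustration.

# Each downstream system consumes the previous output with its own
# sensitivity, so a small upstream skew can grow stage by stage.
stages = [
    ("pricing_engine", 1.5),  # pricing reacts strongly to the score
    ("credit_limits",  2.0),  # limits scale on the adjusted price
    ("collections",    1.2),  # collections keys off the limit
]

bias = 0.02  # a 2% skew in the AI model's output
for name, sensitivity in stages:
    bias *= sensitivity
    print(f"{name:<15} effective skew: {bias:.1%}")
# pricing_engine  effective skew: 3.0%
# credit_limits   effective skew: 6.0%
# collections     effective skew: 7.2%
```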

May the Force be with you

Add the over-arching cyber security and data privacy risks into the mix and you have a level of governance required that most industries haven’t encountered before.

Eduardo underlines this point very well:

“If you compare that to the non-personal data v. personal data v. special category personal data categorisation of the #GDPR, then you get a sense of how complex the regulation of AI technology is going to become.”

Or as Han Solo would say…

“Traveling through hyperspace isn't like dusting crops, boy. Without precise calculations we could fly right through a star or bounce too close to a supernova and that'd end your trip real quick, wouldn't it?”


The regulations will be complex because the number of variables, and the connections between those variables, can be astronomical. It is going to take a long time to find acceptable ways for Regulators to consume, work out, and articulate what is expected.

There is a lot of understanding to be had, work to be done and standards to be developed before boiler-plate AI legals become a thing.

Hogan Lovells - Eduardo Ustaran | Leopold von Gerlach | Martin Pflüger | Nicole Saurin | Stefan Schuppert | Dan Whitehead | Jasper Siems | David Bamberg - are ahead of the game here and are well down the road of understanding what this means from an AI solution engineering design and legal standpoint.

You can read Parts 1 & 2 of their EU AI Act impact analysis here:

https://www.engage.hoganlovells.com/knowledgeservices/news/the-eu-ai-act-an-impact-analysis-part-1

https://www.engage.hoganlovells.com/knowledgeservices/news/the-eu-ai-act-an-impact-analysis-part-2
