The AI Act-ually Happening
Let the regulation-driven transformation commence.
The EU has approved ‘The AI Act’, the draft of which was finalised in June this year. (We originally wrote about the draft release here.)
As you probably know, it categorises AI systems according to their level of risk, based on impact.
Much of the regulation comes down to 'Human Centricity': protecting human rights and prioritising an understanding of the impact AI models will have in their deployed contexts.
It’s sensible.
A lot of it also boils down to understanding why AI systems make the decisions they do.
And so on; this is all good stuff.
These are widely considered to be traits of best practice AI, anyway.
An Imperfect First Step
Any first regulatory step was surely going to be imperfect, but it’s a starting point.
In a sentence:
Systems need to be regulated because system builders won’t all act to the benefit of society.
There needs to be an external force to impose public protections.
We’ve seen some commentators bemoan the restriction on innovation, and yes, it will be a friction point for adoption. Some businesses will back away from AI because they can’t create the necessary governance processes.
Organisations will need watertight governance: frameworks for ethics and compliance for their AI models.
We might expect an increase in AI outsourcing. We are certain to see huge demand for consultancy services to help organisations understand the law in relation to their specific use cases and guide their transformation.
Just as we saw with the explosion of companies conceived to cater to GDPR.
Some will err on the side of caution. It’s likely that many organisations will sit tight, waiting for others to move first, waiting to read the headlines about an organisation losing up to 7% of annual global turnover for a false move.
Naturally, this will cause some foot-dragging where innovation is concerned.
But surely that is better than jumping in blindly with both feet, no?
Whether you believe in the existential-level threat of AI or only its more banal, pragmatic concerns, a degree of reservation is probably a good idea.
There’s also a strong argument to be made that clarity will unlock innovation.
Clear regulation, however imperfect, is still preferable to uncertain regulation, which developers in other countries will continue to experience.
Now that the lines are drawn in the sand, businesses operating in the EU know exactly what they should and shouldn’t do.
EU Law, A Global Act
Nevertheless, just because this is an EU act doesn’t mean it will only impact EU businesses.
The bar has been set!
Global corporations tend to prefer uniform processes across their various markets. Often, there’s a global model and minimal tweaks are made to fit the territory.
For example, EU-wide firms will know that employee protection rights in Germany are much higher than those in other EU countries.
We will see the same thing with The AI Act: organisations will build to the highest standard so their AI services can be distributed cleanly across markets.
The surge in hype for ‘Interpretable’, ‘Explainable’ and ‘Auditable’ AI will continue.
An AI By Any Other Name
“What's in a name? That which we call a rose / By any other name would smell as sweet.”
― William Shakespeare, Romeo and Juliet
Now, the difficulty of assessing and monitoring AI use is worth mentioning. The system will, at first, rely on self-declaration.
Which raises the question: how do you detect AI when it isn’t declared?
We are likely to see, shall we say, ‘creative’ definitions of AI models, with organisations no longer talking about AI in the same way.
‘What, oh, this little thing, it’s not AI, it’s statistical regression…’
Skirting definitions: if they can’t talk about using AI, they may simply call it something different!
Enforcement is ultimately driven by trust.
(And by the fact that anybody can submit a formal complaint.)
Three Clear Implications for Businesses: