So how (the heck) do you 'regulate' AI?
When it comes to AI regulation, there are essentially three things we need to consider. First, the use by 'bad actors' - criminals, conmen, and those who just don't have the good of their fellow humans front and centre.
Second, the development of AI itself. For so long, the conceptual question has been posed, 'is there a limit to what AI can eventually do?' But now a better question is emerging, which is... should we make one?
Last, and most important, is the threat of human use in a legitimate (or at least legal) context, especially private industry. These are already large, cumbersome, and for the most part morally agnostic structures... what happens when that profit-driven moral disinterest is propagated with even fewer humans in the mix?
Bad Actors: Can't be Solved, Can be 'Managed'
This is, by far, the easiest to solve for conceptually: bad actors are an issue that needs to be dealt with mostly by existing laws. Just as it's already illegal to scam someone using email, phones, or in fact other humans (which is how criminal boiler rooms operate), using an AI is no different from using a 'better' gun to rob a bank.
Deploying malware, encrypting someone's files and demanding a ransom, finding an exploit and attacking a financial institution virtually - all of these are made easier by AI, even now. But since there's a human/criminal in the loop, go after them.
The thing about criminals, of course, is that they care not for our laws - but when they're using a tool developed ostensibly by 'good guys,' you can regulate the tool's makers instead. It's imperfect - ask ChatGPT for a phishing email creatively enough and you'll get one despite the controls - but perfection is the enemy of 'good enough.' Just as we don't throw the internet out despite its many sins, we can learn to live with the criminal misuse of AI - not least because we have no choice at this stage.
The real challenges are the other two: regulating the use and the development of AI. Here's the issue: we don't want harms to befall us - which includes being left behind or stifling innovation - and yet cats are very difficult to put back into bags once released.
So how do we skirt that line, and get to a place of maximum benefit while still averting AI-rmageddon?
Arrested Development
It's challenging, but not necessarily impossible, and it probably requires regulations that dictate both the development and the use of AI.
For development, I think the elephant in the room is this: we're playing God, and a sentient and essentially hostile AI overlord is no longer just a scary bedtime story. Laws and principles of robotics - hard barriers like 'don't harm anyone' - need to be considered, but the issue is that any law we can phrase, a super-intelligence can tinker with at the edges and 'lawyer' its way around.
See: Terminator, I, Robot, WALL-E, The Matrix, Ultron...
There are issues with physical security, and those are probably the easiest to mitigate. AI driving around in cars, policing our streets, fighting our wars - these aren't a brilliant idea, so maybe the bluntest policy tool in our arsenal is appropriate: blanket bans.
That way, we may avoid or at least delay the outcomes predicted by so many science-fiction and even comedy prophets - see: RoboCop, Silicon Valley, and that terrifying Black Mirror episode with the robo-dobermann (Robermann?), which Boston Dynamics have apparently put on their vision board by connecting ChatGPT to one of their robots... Good grief.
These, so far, are all fever-dream concepts of a physical manifestation of AI in the real world, but an even more pressing and difficult issue lies in granting our new robot-overlords decision-making powers. The issue with that is basically that they aren't... human.
Best Case Use Case
Every science fiction story has a friendly, humanised class of AI: R2-D2's charming beeps, chirps & whirs, WALL-E's trashtacular antics, or Robot's "DANGER, WILL ROBINSON" warnings. But the mistake in all these fictional incarnations lies in assuming that ethics and morals are something that'll transcend us mere fleshlings and be easily adopted by circuits and silicon. What happens, though, when you ask an AI to make the ethical, irrational, human decision, and it comes back with "I'm sorry Dave, I'm afraid I can't do that"?
Even when robots with AI are the bad guys, so often they're the good guys too. As we know, only a good guy with a gun can stop a bad guy with a gun, and so too will we need droid heroes to fight droid armies, and inexplicably better Terminators to fight the next latest, greatest, and yet somehow-still-worse model.
But what if the real terror isn't your door being kicked in by a titanium giant, but your loan application being denied by an AI named Karen who doesn't like the school you went to or your middle name?
This is, in my view, the best bet we have for local regulation - bans and controls on the actual use of AI. Just as I posited that criminals can crim with any tools you give them, corporates gonna corporate just the same. The difference is, we can in fact regulate businesses' obligations while using AI, and even prevent that use where it's not appropriate.
From this perspective, three issues stand out as very clear needs to me for regulating the corporate sector.
First, explainability. This is nothing new in the many realms where AI and machine learning have been hot topics for a while - you cannot abrogate your responsibility with an explanation of "Computer said no [and nobody can understand why]." Whatever positive duties already exist - to, say, treat customers fairly, or not discriminate - should bind you even if you use AI, so buyer beware. Regulations making this explicit are a good idea, both to ensure fair adoption and potentially to take some steam out of the engine of this runaway train.
Second, the need for disclosure. If you cannot know you're impacted by an AI or an algorithm, then you cannot bring its user to account for whatever decision they've let it make that impacts you. This regulation should be principles-based, and able to move with the times, which, as Bob Dylan so eloquently put it, are a-changin'.
Third, we need minimum acceptable standards for what decisions can and cannot be made with AI assistance, which again must be principles-based. If we've learned one thing over the past 10 years, it's that strict and prescriptive regulations invite less compliance and more regulatory arbitrage - a phenomenon which has given birth to concepts such as 'buy now, pay later' and even 'cryptocurrency.'
All of this brings us down to one essential and uncomfortable fact: the problem with AI isn't AI itself, but human nature. Regulation is an intervention, but an imperfect one - especially if done at a national level in silos, rather than internationally and in co-operation.
Emotional Regulation
So, down to regulations... blanket bans only get us so far. This is especially the case since AI can spread without much care for borders, and we're in a global prisoner's dilemma (called Capitalism) where we can't control others' actions. Once the horse has bolted, it'll do us precious little good to claim the moral high ground and say "that issue didn't start in OUR country."
We're not lacking parallels, though the clear one isn't optimistic: the climate crisis. Here, too, is a problem that's difficult to fix but that any country or party can exacerbate, and where some parties stand to benefit greatly in the short term by making matters worse.
So, understandably if you're in a developing nation where the promise of AI-led productivity gains could transform society - or less understandably, if you're an arrogant tech CEO (difficult to imagine, I know) - highfalutin ideals embedded in regulation about what should and shouldn't be developed, and how, might not resonate.
This is one reason why it's probably a flawed concept to lead this nation by nation. As much as setting a tone and leading by example are great, this is the nuclear race of our time - and there are actors with very different incentives. We need international accords, we need them fast, and ideally we need fewer rogue nation-states working on their own agendas against the rest of us.
This is made harder by the fact that the promise of AI is commensurate with the threat, meaning those with more to gain have less to lose. Plus, negative outcomes for humanity might not be what we're expecting after ~4 decades of horror movies - they may be harder to recognise. I'm far more concerned with a creeping, benign, condescending, and ultimately authoritarian robo-regime, where one can't buy a Coke if their BMI is too high (guilty), or where little Timmy is rejected from admission because his parents didn't go to University.
And, while our regulatory responses to tech are important, some broader questions - about our economic and political responses to tech - are yet to be broached, let alone answered. Aside from regulation, are we willing to look at the underlying issues, and rethink our social norms and structures, including our economic system?
After all, it seems that market forces draw us ever onward. It's whether we're being drawn toward collective detriment or an AI-powered panacea that remains to be seen, but it'd be a good idea to at least try to paddle in one direction.