Attempts to Regulate A.I? Good Luck

Today I saw a post by Ed Husic MP about "Safe and Responsible Artificial Intelligence". To his credit (and that of the government in general), they have no choice but to make it look like they care about this.


I also very much respect Ed Husic MP and have been a long-time fan. But I had to reply to his announcement of a paper designed to "guide the development of an Australian framework on AI" to get the most out of the tech while curbing risk.


So here is my response as I posted it to the announcement, which is at: https://www.dhirubhai.net/posts/ed-husic-mp-22253513_were-determined-to-have-modern-laws-to-manage-activity-7069874432332234752-OKm1?utm_source=share&utm_medium=member_desktop



While the 'idea' of regulating A.I seems like a good one, it is going to be practically impossible to achieve.


Regulating weapons, drugs, genetic research (cloning), etc. is possible because it involves material or equipment that is expensive, hard to get, or controlled via long-standing mechanisms.


Regulating code is going to be like regulating art - trying to tell artists what they can paint, or writers what they can write about.


Sure, there are exceptions in the context of offensive/restricted material, such as art/writing about minors - but even there, creating it isn't what's illegal; publishing it is.


The A.I 'industry' or 'movement' has already won. They have not only rushed to commercialise solutions, but to Open Source them. That is what the leaked Google document was about. They don't care about OpenAI/ChatGPT - they care about Open Source. Open Source can have thousands of collaborators globally - which is how Linux/Android/etc. all got so massive.


Regardless of all the ethics and morals surrounding the development of A.I - who does the Australian Government think it is?


We are one country - one place on this PHYSICAL planet that the government has the ability to regulate. But as with Cloud and other issues, the coding will just be done anywhere. I can host a website anywhere on the planet - in any country, or in international waters (something floating with Solar and a Starlink uplink) - where Australia or any other government has zero jurisdiction.


Sites, code, compute, etc. could be hosted on satellites - a capability that could already exist should Elon want to use his for that.


Has Australia got any influence or ability to control the tax havens? Australia can't even fix the tax laws to stop multinationals abusing the havens - how do you think you will stop me hosting some A.I code on a server under a kid's bed in Lagos, Nigeria? Welcome to Hack or Data Havens.


Even attempting to regulate A.I is problematic. Which aspect do you target? Cloud A.I services? Hardware? Chips? Does Australia have the ability to go against companies like NVIDIA, which is worth hundreds of billions, or the likes of Tesla or Amazon, who can literally put what they like in space?


The arrogance of any government on this planet thinking it can even regulate Space is hilarious. Private enterprise is now more powerful than governments, and is getting more so. Will the Australian government even be able to operate without the help of KPMG?


That said, I will assume that every government capable of doing so will have its policy arms, military, police, intelligence, and research bodies all spending a lot of time and money developing A.I capability.

From tracking money flows and criminal activity, to monitoring people and communications, to following the physical flow of goods - both legal and illegal - to analysing trends and statistics to predict whatever they need.

Governments would have to be insane not to be pouring stupid amounts of money into A.I. A.I is the new arms race, and you can be assured that China, Russia, North Korea, and all the others are doing just the same - and without the same ethics and morals.


The warfare that is about to occur due to A.I capability - the scale of automated swarm technologies that can already be weaponised by a 12-year-old with imagination - will bring about technology disasters akin to the floods and bushfires we've recently suffered in our country, but in the technology domain.


Utilities, the finance sector and banking, transport, and everything else relying on technology will be at risk. It is called Critical Infrastructure for a reason. We are barely getting our heads around CyberSecurity - but now imagine A.I-powered CyberSecurity. A.I hackers, thinking and attacking. Think of the old days of viruses - now A.I-powered.


The Ukrainian/Russian war currently has hundreds of thousands of hackers globally involved in the war. Read: https://www.bbc.com/news/technology-65250356 . We just don't hear much about it.


I am not being an alarmist right now. Take major failures like Telstra's a few weeks ago, add a few other failures in communications, the banks, and the like. All of these have already happened. But imagine them happening again - at the same time.


The CyberPandemic will come. Governments are not ready for this. The public doesn't have a full understanding of how the Internet works. I was an Internet Architect involved in global Internet Policy, and I have been doing briefings for years explaining to politicians, judges, policy makers, lawyers, etc. that the Internet Doesn't Exist as they think it does. It is just a concept.


There is no 'public internet'. The Internet is formed from commercial relationships between carriers - such as Telstra and Optus, and their international links. Every part of the Internet is a commercial arrangement. No one gets on for free, not governments, not the Pope or US President. Every government must pay a Telco to be on the 'Internet'.
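
To make that concrete, here is a minimal sketch in Python, with toy data and hypothetical network names (only Telstra and Optus come from the paragraph above): it models the Internet as nothing more than a graph of commercial relationships - paid transit and negotiated peering - and finds the chain of contracts any traffic must cross.

```python
# A toy sketch (hypothetical names, made-up relationships) of the claim above:
# the Internet is just a graph of commercial arrangements. Every edge below is
# either a paid "transit" contract or a negotiated "peering" agreement.
from collections import deque

RELATIONSHIPS = [
    ("HomeISP-AU", "Telstra", "transit"),        # customer pays provider
    ("Telstra", "GlobalCarrier-1", "transit"),
    ("Optus", "GlobalCarrier-1", "peering"),     # settlement-free, still a contract
    ("GlobalCarrier-1", "GlobalCarrier-2", "peering"),
    ("GlobalCarrier-2", "OffshoreHost", "transit"),
]

def contract_chain(src, dst):
    """Breadth-first search for the chain of commercial links between two networks."""
    graph = {}
    for a, b, rel in RELATIONSHIPS:
        graph.setdefault(a, []).append((b, rel))
        graph.setdefault(b, []).append((a, rel))
    queue, seen = deque([(src, src)]), {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for nxt, rel in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, f"{path} --{rel}--> {nxt}"))
    return None  # no chain of contracts means no connectivity at all

print(contract_chain("HomeISP-AU", "OffshoreHost"))
# HomeISP-AU --transit--> Telstra --transit--> GlobalCarrier-1 --peering--> GlobalCarrier-2 --transit--> OffshoreHost
```

Remove any single contract from that toy graph and the path changes or disappears - which is the point: there is no neutral public substrate for a government to defend, only other people's commercial agreements.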


And how much of this infrastructure do you think the government (any government) actually owns in the ground? Or its own satellites? Not very much. It is all run through commercial relationships. So in a full-on Cyberwar, what can the government do to protect the infrastructure? Nothing much at all. Imagine 10,000 company networks being hit at once by an A.I-powered swarm. Even if the government had its own swarm to fight back, how - by waging war over the same commercial networks?


This all sounds alarmist, but ask any technology specialist whether anything I've said above is not absolutely true. This 'risk' of A.I is nothing new. It has been a risk for a long time. A.I just speeds it up.


Welcome to my world, everyone.



...Skeeve Stevens - Future Crime Agency

[who has been warning everyone about risks such as these (and a lot more) for nearly 10 years]


---

Scott Phillips

Founder and CEO at Vaulted Ventures

1 year

I agree that there are difficulties, but I believe that it is possible (and necessary) to regulate AI systems. Even modest success in this is likely to avoid substantial harm. A crucial element, IMHO, is to be able to evidence a common law duty of care. My group is presently working to bring a mechanism to market that can do this robustly. Once that is achieved, a host of common law regulatory elements are able to kick in. Defamation is a classic example of a kind of harm that is very difficult to address in a digital environment of near-blanket anonymity. With cryptographic non-repudiation of content generation, it is a fundamentally different matter, and almost trivially easy for plaintiffs to evidence the existence of a duty of care. Once this is in place, there's suddenly some meat in the regulatory sandwich. Governments can license AI companies and content platforms operating within their jurisdictions if they are in compliance with the requirements to ensure that content is linked to users who are accountable for what they publish. Legal liability insurers of the AI companies are likely to be interested in this as well. Ever heard of "vicarious liability"? Pretty sure it's still a thing.
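
As a rough sketch of what the "cryptographic non-repudiation of content generation" mentioned here could look like in practice - an illustrative assumption of an Ed25519 key pair registered per account, not Vaulted Ventures' actual mechanism - using the third-party Python `cryptography` package:

```python
# Minimal sketch of non-repudiation for generated content. ASSUMPTION: each
# AI/publishing account holds an Ed25519 key pair whose public half is
# registered with a platform or licensing body. Illustrative only; this is
# not any vendor's actual scheme. Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Key generation would happen once, at account registration.
author_key = Ed25519PrivateKey.generate()
author_pub = author_key.public_key()   # public half registered with the platform

content = b"An allegedly defamatory AI-generated paragraph."
signature = author_key.sign(content)   # attached to the content at publication

# Later, a plaintiff or regulator verifies who published what:
try:
    author_pub.verify(signature, content)
    print("Valid: the registered key holder published exactly these bytes.")
except InvalidSignature:
    print("Invalid: content was altered, or did not come from this key holder.")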

Chris Buckridge

Digital policy, Internet governance, regulation | Board member | Views, likes, and laughter are my own!

1 year

Currently reading https://thisishowtheytellmetheworldends.com, and the concerns line up - though what's remarkable is just how outdated a book published in late 2021 can seem in the current technological (and geopolitical) climate!

Tobias Crush

Problem solver, lateral thinker, cyclist

1 year

Love how it was "OK" when only the big firms were using it to identify cancer or incarcerate minorities. It's once it affected those in power, or further exposed the shortcomings of "collective biased opinion", that they wanted to control it. (A la Robodebt!) Really, the community should be pressing for the decision tree to be explainable in human terms, to expose the rationale and bias inherent in the model. Then make informed decisions based on the model as a tool rather than "truth".
