The AI Regulation Debate

On the 21st of July, Joe Biden announced that large tech firms such as Google, Amazon, Meta, and Microsoft, amongst others, had agreed to voluntary commitments to ensure the safety of their AI products. This is hopefully the first in a series of productive dialogues between AI leaders and central government on regulating and governing AI, calls for which have grown louder as people worry about what a future of unchecked AI development could look like.

The regulation and governance of AI has been a hotly debated topic over the last few years, with valid concerns over privacy and ethics leading the conversation. Back in March, over 1,400 tech leaders, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a pause in the development of AI, citing concerns for the general future of humanity. So far, Canada and the EU have made the most headway in deciding upon a cohesive governance framework for AI; other countries, however, are seriously lagging while the pace of AI innovation only increases.

That's why last Friday's news is so important. It marks the first in what will likely be a series of agreements between AI leaders and the US government on how best to regulate and govern this new technology. One of the major commitments to come out of this conversation was the implementation of third-party oversight of next-generation AI. While the specifics around 'who' will be responsible for 'what' are still unclear, this in itself is a massive step forward for regulating AI, as it ensures that development of the technology cannot continue completely unchecked. This oversight will also make it easier to enforce a cohesive regulatory framework for AI once one becomes codified in the US. "Social media has shown us the harm that powerful technology can do without the right safeguards in place," Biden said on Friday. "These commitments are a promising step, but we have a lot more work to do together."

Security testing of AI tech will also be carried out by independent experts, examining risks to biosecurity and cybersecurity, societal harms such as bias and discrimination, and more speculative risks such as self-replication (so no SkyNet). Importantly, the agreements also cover commitments to methods for reporting vulnerabilities within AI systems, as well as to implementing forms of digital watermarking that enable individuals to distinguish between real and AI-generated images. This is an incredibly important addition, as it means we may see an effective, standardised method of identifying deepfakes baked into future AI tech. AI-powered deepfakes are already almost indistinguishable from reality, and the technology is only improving as AI does. These deepfakes can be incredibly harmful to the individual depicted in the audio or video, with a bad actor essentially having the power to make them appear to say or do virtually anything. Having some kind of framework in place to identify them can only be a net positive as we move into an AI-powered future.
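To make the watermarking idea concrete, here is a toy Python sketch (using the Pillow imaging library) of the simplest form of invisible watermarking: least-significant-bit (LSB) embedding. To be clear, this is purely illustrative; the actual schemes the companies have committed to are unpublished, and production watermarks are statistical and designed to survive cropping and compression, which this naive approach does not.

```python
# Toy illustration of invisible image watermarking via least-significant-bit
# (LSB) embedding. Real provenance schemes are robust to edits and keyed with
# secrets; this sketch only shows the basic idea of hiding a detectable
# signature in pixel data. The marker string is a hypothetical example.
from PIL import Image

SIGNATURE = "AI-GENERATED"  # hypothetical marker string

def embed_watermark(img: Image.Image, signature: str = SIGNATURE) -> Image.Image:
    """Hide `signature` in the LSBs of the red channel, one bit per pixel."""
    bits = "".join(f"{byte:08b}" for byte in signature.encode("utf-8"))
    out = img.convert("RGB").copy()
    pixels = out.load()
    width, _ = out.size
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the LSB
    return out

def detect_watermark(img: Image.Image, signature: str = SIGNATURE) -> bool:
    """Read back the same LSB positions and compare against the signature."""
    bits = "".join(f"{byte:08b}" for byte in signature.encode("utf-8"))
    pixels = img.convert("RGB").load()
    width, _ = img.size
    recovered = "".join(
        str(pixels[i % width, i // width][0] & 1) for i in range(len(bits))
    )
    return recovered == bits

if __name__ == "__main__":
    original = Image.new("RGB", (64, 64), color=(120, 130, 140))
    marked = embed_watermark(original)
    print(detect_watermark(marked))    # True
    print(detect_watermark(original))  # False
```

Even this toy version shows the core trade-off: the mark is invisible to a viewer but trivially checkable by software, and just as trivially destroyed by anyone who knows where to look, which is why robust, secret-keyed schemes remain an active research area.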

The fact that these companies, usually in intense competition with each other, came together at the same table to discuss this topic should give you an indication of how important the field of AI regulation and governance has become for many in the IT and software space. The commitment to publicly report flaws and biases within their technology further illustrates this point and indicates that tech giants may be committed to putting public safety above competition and profit when it comes to AI. While only the first step in the process, this fact alone should allow those rightly concerned about the pace of AI development to breathe a small sigh of relief. These initial talks are also being seen as a way to get Congress talking about more structured laws to effectively regulate AI in the future. While it is a good start, some worry that this issue should not be left solely to private companies and governments, as AI is set to affect us all. "We need a much more wide-ranging public deliberation, and that's going to bring up issues that companies almost certainly won't voluntarily commit to because it would lead to substantively different results, ones that may more directly impact their business models," said Amba Kak, Executive Director at the AI Now Institute.

While it remains to be seen how open these firms will actually be to third-party investigators examining their systems and sharing findings with competitors, it's a start. Ripples are already being felt in Congress, where Senate Majority Leader Chuck Schumer has announced plans to introduce AI legislation and to work closely with Biden's administration to build upon Friday's commitments. The White House is also in talks with other countries regarding these commitments, with an open global dialogue on AI regulation hopefully on the horizon. Consideration will also need to be given to preventing future monopolies in the space, as smaller AI developers could be priced out by the cost of ensuring their systems adhere to new regulatory frameworks.

While this initial agreement does cover several of the major risk areas for AI, it leaves out major concerns around the risk to future jobs, the environmental impact of training these AI models, and the copyright concerns which creative communities have been raising for years.

So, big steps in the right direction, but the US is not the only player finally looking into targeted regulation and governance of AI. As part of its overall digital strategy, the EU is currently pursuing a wide-reaching regulatory framework for AI to ensure standardised conditions for its development, an effort which began in April 2021, when the European Commission proposed its first regulatory framework for AI. If managed correctly, AI could enable untold benefits for multiple industries, including healthcare, the sciences, and IT. The EU's proposal looks to analyse and classify AI systems based on the risk they may pose to an end user, with more stringent regulations in place for the highest-risk AI. These regulations are based on the principle that AI should ultimately be overseen by people, not by more automation, if we are to see the true benefits of the technology. Another part of this effort would be establishing a technology-neutral, universal definition of AI to be adopted by all member states to streamline governance.

The EU's proposed risk classification system will establish certain obligations for AI innovators, as well as users, depending on the level of perceived risk (a toy code sketch of the resulting classification logic follows the tier descriptions below). So far, the tiers include:

Unacceptable Risk

These systems are considered a direct threat to people, and as such will be banned. At this stage, this covers:

- Cognitive behavioural manipulation of people or of specific vulnerable groups. This could be a voice-activated toy which announces propaganda or harmful advice when given a certain activation phrase.

- Social scoring. This involves classifying people based on behaviour, or on other characteristics such as socio-economic status.

- Real-time and remote biometric identification systems, such as the AI-powered facial recognition systems in place in some parts of China.

It's important to note that there may very well be exceptions to these rules; AI systems which match these criteria will need to be assessed on a case-by-case basis.

High Risk

Systems deemed to negatively affect safety or fundamental rights will be considered high risk. These will be divided into two main categories and will need to be assessed on a case-by-case basis, as well as through routine checks over their lifespan. The two categories are:

- AI systems used in products which fall under the Union's product safety legislation.

- Systems which fall into eight separate areas and will need to be registered in a centralised database. These areas cover:

  - Biometric identification and categorisation of natural persons

  - Management and operation of critical infrastructure

  - Education and vocational training

  - Employment, worker management and access to self-employment

  - Access to and enjoyment of essential private services and public services and benefits

  - Law enforcement

  - Migration, asylum and border control management

  - Assistance in legal interpretation and application of the law

In addition to these regulations, generative AI will have to comply with its own set of transparency standards. These mainly cover disclosing when content has been generated by AI, designing models so that they cannot generate illegal content, and publishing summaries of the copyrighted data used in training.

Limited Risk

On the lower end of the spectrum, the EU would require AI systems to comply with minimal transparency requirements, allowing users to make informed decisions when using the technology. Important stipulations here include ensuring users know they are interacting with AI and letting them choose whether or not to continue using the system. This tier would mainly cover technology such as deepfakes.
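To see how this tiering might work in practice, here is the minimal Python sketch promised above, encoding the categories as a simple lookup. The tier names and the eight high-risk areas mirror the proposal as summarised here, but the function, field names, and obligation strings are hypothetical illustrations, not anything specified in the draft Act.

```python
# Hypothetical sketch of the EU proposal's risk tiers as a simple lookup.
# The tier names and the eight high-risk areas mirror the draft described
# above; everything else (function names, obligation strings) is illustrative.
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "case-by-case assessment, registration, lifecycle checks"
    LIMITED = "transparency obligations only"
    MINIMAL = "no additional obligations"

# Practices the draft would ban outright (the 'unacceptable' tier).
BANNED_PRACTICES = {
    "cognitive_behavioural_manipulation",
    "social_scoring",
    "real_time_remote_biometric_id",
}

# The eight areas whose systems must be registered in a central database.
HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_vocational_training",
    "employment_worker_management",
    "essential_services_access",
    "law_enforcement",
    "migration_asylum_border_control",
    "legal_interpretation_assistance",
}

def classify(practice: Optional[str], area: Optional[str],
             interacts_with_humans: bool) -> RiskTier:
    """Map a system description onto a tier, checking highest risk first."""
    if practice in BANNED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if interacts_with_humans:  # e.g. chatbots, deepfake generators
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(None, "law_enforcement", True))   # RiskTier.HIGH
print(classify("social_scoring", None, False))   # RiskTier.UNACCEPTABLE
```

The ordering matters: a system is judged against the most severe tier it matches first, which reflects how the proposal layers obligations rather than averaging them.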

The EU’s aim is to reach an agreement on these regulations by the end of this year, so the first codified AI regulatory framework may be just around the corner!

To cap off this edition of tech trends, we'd like to explore a couple of major reasons why AI regulation is so important. Data privacy is the first topic that springs to mind. "Artificial intelligence (AI) technology is becoming increasingly prevalent, from virtual assistants like Siri and Alexa to autonomous vehicles and facial recognition systems. However, using AI technology raises privacy concerns, mainly concerning personal data," explains Bhaskar Ganguli, Director of Marketing and Sales at Mass Software Solutions. At their core, AI systems rely on huge amounts of data for training. This data can include sensitive information such as names and addresses, even medical records! How this data is collected and processed raises valid concerns over who has access to it, and it makes fledgling AI programmes massive targets for data breaches. With the added risk of AI-powered malware, it may only be a matter of time before we see a breach of this kind unfold, potentially causing billions of dollars in damages to industry and individuals.
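One concrete mitigation here is scrubbing obvious personal data from text before it is ever used for training. Below is a minimal, regex-based Python sketch of the idea; real pipelines rely on trained named-entity recognisers and far broader pattern sets, and the patterns shown are illustrative placeholders only.

```python
# Minimal sketch of pre-training PII redaction. Real pipelines use trained
# named-entity recognisers and much broader pattern sets; these regexes are
# illustrative placeholders only.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "UK_POSTCODE": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a typed placeholder token."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact Jane at jane.doe@example.com or +44 20 7946 0958, SW1A 1AA."
print(redact(sample))
# Contact Jane at [EMAIL] or [PHONE], [UK_POSTCODE].
```

Note that the person's name slips straight through, which is exactly why production systems layer statistical entity recognition on top of pattern matching rather than relying on regexes alone.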

A parallel issue is the potential use of AI for surveillance. Facial recognition software has already been deployed by law enforcement to track individuals in public, and adding AI to such systems could accelerate a worrying trend: AI could infer an individual's emotional state from their facial expressions and actively target them with advertising based on their mood. Without invoking George Orwell too much, this could be a very slippery slope towards mass state surveillance and an advertising dystopia.

Another concern is the effect of AI bias in business. Put simply, in a society which holds biases around race, gender, socio-economic grouping, and so on, AI trained on data sets drawn from that society will unintentionally replicate those biases. This may lead powerful business AIs to make decisions which unfairly disadvantage certain social groupings, which is detrimental not only to the individuals affected but to the business as a whole, as key stakeholders may find themselves shunted out by the decisions of a biased AI. Additionally, beyond biased data, an AI is an amoral, purely statistical decision maker: if a data set contains a correlation which may be real but should be ignored for ethical or legal reasons, the AI won't ignore it. This can create a massive number of regulatory and governance issues, incurring legal fees for affected businesses which suddenly find themselves in hot water. Because of this, businesses looking to implement AI have not only an ethical but a business imperative to ensure their AI programmes do not perpetuate any of the above issues, and their risk analyses must include these categories in order to better protect clients and organisations.
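For businesses, a first line of defence is measuring outcome disparities before a model is deployed. The Python sketch below implements a simple demographic-parity check using the classic "four-fifths" rule of thumb from US employment guidance; the data and threshold are made up for illustration, not drawn from any real deployment.

```python
# Minimal sketch of a demographic-parity check: compare a model's selection
# (approval) rate across groups before deployment. The data and the 80%
# threshold (the classic "four-fifths rule") are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def passes_four_fifths(rates, threshold=0.8):
    """Flag disparate impact if any group's rate is < threshold * best rate."""
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Hypothetical model outputs for two applicant groups:
decisions = [("A", True)] * 60 + [("A", False)] * 40 \
          + [("B", True)] * 35 + [("B", False)] * 65

rates = selection_rates(decisions)
print(rates)                      # {'A': 0.6, 'B': 0.35}
print(passes_four_fifths(rates))  # False: 0.35 < 0.8 * 0.6
```

A failing check like this doesn't prove the model is biased, but it is exactly the kind of routine, auditable test that a regulatory framework could require before a system touches real applicants.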

So, exciting news which hopefully points us in the right direction when it comes to AI regulation and governance. We hope this article has demonstrated the importance of the topic as we slowly move into an AI-powered future.

Jing Wei Loh

Master of Data Science Student at UM

Thanks for the great blog post, Piers Webster. It is great that technology powerhouses such as the United States and the EU are taking major steps towards regulating the rapid advancement of AI. As AI is constantly evolving, I think that regulations around AI should keep pace with its development as well. I concur that third-party investigators should be involved in the process of AI development; however, it remains in question whether this is enough. I believe that incentives play a huge role in guiding the path of AI development. For example, governments could provide tax incentives to technology companies which comply with their regulatory requirements when developing AI products. Education on the ethical usage of AI is also crucial; the public, especially the younger generation, should be informed about the proper usage of AI and the detrimental effects on society if this technology is abused. Anyhow, it is good to see that governments are taking action on ethical concerns about the usage of AI to prevent a dystopian future.

Oliver Reade 韋奧利芙

Looking to grow your sales without selling; let me show you how to make sales calls without selling; effectively, confidently & ethically.

Interesting blog, Piers Webster. I can already see some of the so-called "unacceptable risks" being promoted by governments, including social credit scoring; what's the difference when AI does it? Plus, with all the data being uploaded onto the internet, AI may also regurgitate the MSM's narrative on information, i.e. C0V1D etc.

What's more important is that we have a proper discussion around what AI can and cannot do. AI can't make ethical judgements. AI can't make policy decisions. This is great technology that can automate a lot of work by executing extremely complex and sophisticated rulesets, but it can't do our thinking for us; my worry is that lazy companies will try to use it to do just that. Well-thought-out regulation with this in mind could work, but it will be essential that the regulation focuses on companies' USE of AI, not the DEVELOPMENT of AI itself. My concern is that the current debate around regulation is too focused on the idea that the technology itself is the problem, that it is 'dangerous' in and of itself and a problem to be restrained and controlled, rather than on the ways in which we use or apply it. AI is a great tool for extending and enhancing human capabilities, not for replacing them. If any regulation we have is not built with this in mind, then it will only make things worse, not better.

Matthew Farr

UCaaS & CCaaS Specialist | Chief Solutions Expert at Fidelity Group | Empowering Businesses with XaaS, Telecoms, Data Connectivity, Energy, and Payment Solutions

Most people seem to have ethical and social concerns where AI is concerned. I think regulation can only be a good thing in addressing these issues.
