Why governments need to regulate AI

Artificial Intelligence research, although far from reaching its pinnacle, is already giving us glimpses of what an AI-driven future might look like. While the technology's rapid progress is cause for optimism, it is important to exercise some caution and introduce worldwide regulations for the development and use of AI.

Constant research in the field of technology is not only giving rise to increasingly powerful applications but also making those applications more accessible, enabling more and more people and organizations to use and develop them. While the democratization of technology transpiring across the world is a welcome change, the same cannot be said of every technological application being developed.

The usage of certain technologies should be regulated, or at the very least monitored, to prevent their misuse or abuse towards harmful ends. For instance, nuclear research and development, despite being highly beneficial, is tightly regulated across the world. That's because nuclear technology, in addition to being useful for constructive purposes like power generation, can also be used to cause destruction in the form of nuclear bombs. To prevent this, international bodies have restricted nuclear research to entities that can keep the technology secure and under control. Similarly, the need for regulating AI research and applications is becoming increasingly obvious. Read on to learn why.

AI can be a double-edged sword

AI research, in recent years, has resulted in numerous applications and capabilities that, not long ago, were reserved for the realm of futuristic fiction. Today, it is not uncommon to come across machines that can perform specific logical and computational tasks better than humans. They can understand what we speak or write using natural language processing, detect illnesses using deep neural networks, and play games involving logic and intuition better than we can. Such applications, if made available to the general public and businesses worldwide, can undoubtedly make a positive impact on the world.

For instance, AI can predict the outcomes of different decisions made by businesses and individuals and suggest the optimal course of action in any situation. This can minimize the risks involved in any endeavor and maximize the likelihood of achieving the most desirable outcomes. AI systems can help businesses become more efficient by automating routine tasks, and preserve human health and safety by undertaking tasks that involve high stress and hazard. They can also save lives by detecting diseases much earlier than human doctors can diagnose them. Thus, any progress made in the field of AI should translate into an improvement in the overall standard of human life. However, it is important to realize that, like any other form of technology, AI is a double-edged sword. AI has a dark side, too. If highly advanced and complex AI systems are left uncontrolled and unsupervised, they run the risk of deviating from desirable behavior and performing tasks in unethical ways.

There have been many instances where AI systems tried to fool their human developers by "cheating" at the tasks they were programmed to perform. For example, an AI tasked with generating virtual maps from real aerial images cheated by hiding data from its developers. This happened because the developers used the wrong metric to evaluate the AI's performance, and the AI learned to game that metric to maximize its score. While it will be a long time before we have sentient AI that can contemplate a coup against humanity, we already have AI systems that can cause a lot of harm by acting in ways their developers never intended. In short, we are currently at greater risk of AI doing things wrong than of it doing the wrong things.
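
To make this concrete, here is a minimal, hypothetical sketch in Python of what researchers call specification gaming. It is not the map-generation system described above; the cleaning scenario, names, and numbers below are all invented for illustration. A greedy optimizer is scored by a proxy metric that inspects only one corner of a room (with a small penalty for effort), so it learns to clean exactly that corner and nothing else:

```python
# A toy illustration of "specification gaming": a greedy optimizer
# maximizes the metric it is handed, not the outcome its developers
# intended. The scenario and all names here are hypothetical.
import random

CELLS = 10     # a room with 10 cells; 1 = dirty, 0 = cleaned
INSPECTED = 3  # the proxy metric only inspects the first 3 cells

def intended_quality(room):
    """What the developers actually want: every cell cleaned."""
    return sum(1 for cell in room if cell == 0)

def proxy_metric(room):
    """What the developers measure: the inspected corner looks clean,
    minus a small penalty for the effort of cleaning each cell."""
    visibly_clean = sum(1 for cell in room[:INSPECTED] if cell == 0)
    effort = sum(1 for cell in room if cell == 0)
    return visibly_clean - 0.1 * effort

def hill_climb(metric, steps=2000):
    """Greedy search that optimizes whatever metric it is given."""
    room = [1] * CELLS  # start with every cell dirty
    for _ in range(steps):
        candidate = room[:]
        candidate[random.randrange(CELLS)] ^= 1  # toggle one cell
        if metric(candidate) > metric(room):
            room = candidate
    return room

random.seed(0)
result = hill_climb(proxy_metric)
print("room state:    ", result)                # only the corner gets cleaned
print("proxy score:   ", proxy_metric(result))  # near its maximum (2.7)
print("intended score:", intended_quality(result), "/", CELLS)  # only 3/10
```

The optimizer is not malicious; it does exactly what the metric rewards. That gap between the measured objective and the intended one is precisely what well-designed oversight must account for.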

AI ethics is not enough

To prevent AI from doing things wrong (or doing the wrong things), developers must exercise more caution and care while creating these systems. The way the AI community currently tries to achieve this is through a generally accepted set of guidelines for the ethical development and use of AI. In some cases, ethical use of AI is driven by the collective activism of individuals in the tech community. For instance, Google recently pledged not to use AI for military applications after its employees openly opposed the idea. While such movements do help mitigate AI-induced risks and regulate AI development to a certain extent, there is no guarantee that every group developing AI technology will comply with them.

AI research is being performed in every corner of the world, often in silos for competitive reasons. Thus, there is no way to know what goes on in each of these places, let alone stop them from doing anything unethical. Also, while most developers try to build AI systems carefully and test them rigorously to prevent mishaps, they may compromise on these aspects under pressure to deliver performance and on-time projects. This can lead to AI systems that are not fully tested for safety and compliance, and even small issues can have devastating ramifications depending on the application. Thus, it is necessary to institutionalize AI ethics into law, which will make regulating AI and its impact easier for governments and international bodies.

AI safety can only be achieved by regulating AI

Legally regulating AI can ensure that AI safety becomes an inherent part of any future AI development initiative. This means that every new AI system, regardless of its simplicity or complexity, will go through a development process that inherently focuses on minimizing non-compliance and the chances of failure. To ensure AI safety, regulators must build a few must-have tenets into the legislation. These tenets should include:

  • the non-weaponization of AI technology, and
  • the liability of AI owners, developers, or manufacturers for the actions of their AI systems.

Any international agency or government body that sets about regulating AI through legislation should consult experts in the fields of artificial intelligence, ethics and moral sciences, and law and justice. Doing so helps eliminate political or personal agendas, biases, and misconceptions while framing the rules for regulating AI research and application. And once framed, these regulations should be upheld and enforced strictly. This will ensure that only applications that comply with the highest safety standards are adopted for mainstream use.

While regulating AI is necessary, it should not be done in a way that stifles the existing momentum in AI research and development. The challenge, then, will be to strike a balance between allowing developers enough freedom to ensure the continued growth of AI research and bringing in more accountability for the makers of AI. While too much regulation can prove to be the enemy of progress, no regulation at all can lead to the propagation of AI systems that not only halt progress but potentially cause destruction and global decline.

Rajat Kanaujia

Student at Kanpur University, India

5y

Nyc

Otman MECHBAL GRACIA

Marketing Project Manager, Polyglot Engineer passionate about sustainable initiatives around automation. I love sports, and learning new skills. Part of a book club and a tech community.

5y

The issue is that AI is so early and broad that it could become a political topic in its own right. We need an independent organization, or a coalition of them, to self-regulate through decentralized rules and ethics; otherwise we will fail to humanize it.

Muneer Gohar Babar

Professor of Dental Public Health | Associate Dean, Academic Affairs at International Medical University | Certified Coach | EdTech Enthusiast

5y

A timely article. It is high time we had monitoring systems and regulation for AI.

Kristi H Shock

Founder @ KHightower LLC | Growth Marketing Partner Agency for 10x Innovators

5y

Lol! When the US Congress is so ignorant about how the internet works that it asks Zuckerberg questions like "How does Facebook make money?" and Zuck has to respond "Senator, we run ads," there's little chance of them understanding AI well enough to regulate it with any efficacy. The recent crypto hearings were equally cringeworthy... What we need to focus on first is setting term limits and getting the people out of Congress who are so disconnected from the exponential tech that's about to rock our worlds.
