One of the most pressing questions around AI is whether AI development can even be controlled and, if so, how. There is no easy answer, as the question involves complex ethical, technological, and societal considerations.
There are valid reasons to believe AI development should be controlled. First, AI has the potential to be used for harmful purposes, such as developing autonomous weapons or creating deepfakes that spread misinformation. Second, AI is likely, indeed almost certain, to lead to job losses, potentially on a scale sufficient to destabilize large portions of society. Third, AI could have unintended consequences that we cannot yet foresee, especially as capabilities advance toward artificial general intelligence (AGI) and beyond.
However, there are major questions about the feasibility and effectiveness of controlling AI development:
- AI technology is advancing at an exponential pace. Policymaking cannot move at a comparable speed.
- AI systems are complex at best, and often opaque and difficult to understand. As they become more capable, their decisions become less interpretable, and overseeing inscrutable logic is immensely difficult.
- AI is pervasive in the developed world. It plays a key role in many of the systems in use today, in areas as disparate as evaluating job applicants and writing software. Work in progress today will extend that reach into the vast majority of systems in use.
- AI research and development is happening globally. It is extremely difficult to enforce regulations or controls across countries. Does anyone think Russia or North Korea will slow AI development just because the West asks nicely?
- There are tremendous economic incentives to develop powerful AI quickly. Regulating development could negatively impact growth and economic benefits.
- Many AI applications have both beneficial and risky uses. Restricting technology with dual-use potential is hugely complex.
- Much AI research occurs openly in academic settings. Restricting access risks stifling innovation.
- Banning or severely limiting AI development could drive some research underground, making it harder to control.
- Cultural differences shape attitudes to concepts like fairness and safety. Global views on "danger" are unlikely to align.
- There is little expert consensus on defining or predicting the dangers of AI progress. AI absorbs biases and produces emergent effects that are hard to anticipate; even careful regulation may fail to avert unforeseen issues.
Governments can create policies and laws that regulate AI development. President Biden has issued an executive order to ensure that America leads the way in managing the risks of AI. Nonetheless, it seems improbable that regulation is the answer.
Focusing on just the most obvious areas, it is clear that autonomous lethal weapons, and algorithms that control critical infrastructure or make decisions affecting human lives, need controls and safeguards. However, there are credible reports that autonomous lethal weapons have already been used in the Ukraine war. With that genie out of the bottle, it seems unlikely that more subtle threats will be controlled.
Still, there are constructive steps worth taking:
- Industry self-regulation, voluntary safety standards, and public debate on managing risks offer the best paths forward. The focus should be on fostering a culture of responsibility around the immense potential of AI.
- There should be ethical principles, such as fairness, transparency, and accountability, to guide the development and use of AI.
- AI developers need to be trained on the ethical and societal implications of their work, and how to develop AI systems that are safe and responsible.
- AI systems should be continuously monitored to identify and address any potential issues.
- Finally, it is important to raise public awareness of the potential risks and benefits of AI. Companies can make their AI development more transparent by sharing their efforts with the public.
AI development can be controlled to some extent, but doing so requires a concerted effort from governments, businesses, and individuals to ensure that AI continues to develop in a responsible and ethical manner.