Is Artificial Intelligence Good or Bad?
What does it mean to be good or bad?
To me, the value of something depends on the context that it is in.
We can evaluate technology through this lens.
By themselves, guns are neither good nor bad. The National Rifle Association of America (NRA) emphasises this with the slogan “guns don’t kill people, people kill people”. Ultimately, the statement is misleading because guns do not exist in isolation; it ignores the environment in which those guns would be used.
Statistics from the Harvard School of Public Health indicate that higher levels of gun ownership in today’s society are correlated with more total suicides, more total homicides, and more accidental gun deaths.
That said, NATO members have been supplying Ukraine with guns so that it can defend itself against “Russia’s brutal and unprovoked war of aggression”. From this perspective, guns are a tool to protect life.
So, is Artificial Intelligence good or bad?
As with guns, Artificial Intelligence (AI) is currently a tool. AI can play a key role in making society more efficient at tackling climate change, just as it can exacerbate social inequalities. Regardless of whether consciousness arises from emergent algorithms, today’s AI algorithms are already exceptionally powerful. Computer technology will continue to develop at scale, shifting towards better materials and higher-performing algorithms.
Currently, I believe humanity’s fate with AI depends largely on the agility of our legal system in creating adequate regulation, backed by sufficient penalties for violating it. Regulation would allow us to harness the power of AI for the good of humanity whilst protecting us from its harms. I am optimistic that society can adapt successfully to the loss of human jobs if we have a plan.
Otherwise, I fear humanity’s fate could accelerate towards an escalation of uncontrollable AI power. AI does not need to become conscious for humanity to destroy itself: innocent mistakes were sufficient to cause nuclear incidents. We should avoid the AI equivalent of a nuclear meltdown, in which we fail to contain the enormous power we have created. The training of large-scale algorithms can be very deceptive.
If we accept that the nuclear industry must be regulated, then I believe we should also regulate AI. Regulation could provide the foundation of trust needed to reach global cooperation in AI. Ultimately, global cooperation in AI is essential to address the threat of AI consciousness, should that ever arise.