Why do we need safeguards for AI? Cyber security, human manipulation

by David Gardner, Cofounders Capital — August 7, 2023


Editor’s note: Investor and entrepreneur David Gardner is founder of Cofounders Capital in Cary and is a regular contributor to WRAL TechWire. His articles are a regular part of our Startup Monday lineup.


CARY – There are a lot of passionate writers and speakers today railing about the various threats society will face as a consequence of the widespread use of artificial intelligence. To gain perspective, I thought it helpful to categorize these arguments into major and minor concerns. In my last article I discussed what I consider to be the potential minor concerns and why. In this follow-up article, I will discuss what I believe to be the only two long-term major threats created by AI.


AI and Cyber Security

Several CEOs of major AI companies consider cyber security to be the single biggest challenge of this new technology. Cyber attacks have become the norm, and they will certainly be a huge part of all future wars. AI can be a game changer in both offensive and defensive cyber conflict. Unlike current cyber attacks, which tend to come in one of several known forms, an AI-driven attack can constantly morph into many different kinds of attacks at once, learning from each attempt and growing more creative with every onslaught. Indeed, some even argue that no firewall will be adequate in the future.

The US spends trillions to develop and maintain the world’s largest military force, but without secure networks and communications much of that hardware could be rendered useless. Without navigation, coordination and targeting capabilities, the stronger military could succumb to a lesser adversary with a stronger offensive AI capability. Some argue that the next superpower will be whoever controls the strongest AI.

AI is potentially a far more dangerous weapon than nuclear technology, but unlike the nuclear threat, which is somewhat moderated by thousands of laws, treaties and inspectors, there is as yet no national or international governance of the use or proliferation of AI. Some argue that this technology is so dangerous that there should be a total ban on it in the US, but that would obviously leave our country at a distinct disadvantage in a world full of bad actors.


AI and Human Manipulation

As profound a threat as AI represents to national and corporate security, I believe there is an even greater threat to consider.

Democratic governments are particularly vulnerable to the misuse of AI. Democracies function only when they have a well-informed body of voters with access to factual information, so that citizens can vote in their own self-interest. This is how dictators keep democracy from taking hold: they control all media and news services. Just as in cyber warfare, whoever controls the information generally wins the contest.

Even prior to the development of AI, we saw the impact of both foreign and domestic manipulation of our news, and of other misleading information, on our own democracy. The new danger is that in mere seconds tens of millions of voter profiles can be aggregated, psychoanalyzed, micro-targeted and pummeled with AI-generated fake news and other manipulative falsehoods. It is hard to overstate this danger to every democracy and hope-to-be democracy on the planet. Human brains, as reasoning engines, are like machines with emotional hacking points. Expertly pushing the right buttons in the right way with false information can almost guarantee certain behavior. The consequences of doing this on a mass scale are nothing short of horrifying. Without the right countermeasures and laws in place, any unscrupulous AI-empowered bad actor could, in the near future, determine the outcome of every election and ballot item.

I grew up watching memorable sci-fi movies like The Terminator and The Matrix. Like most people, I assumed that for AI to harm us it would have to enter the physical world in some form, usually as a killer robot. Unfortunately, this is not the case at all. The sobering truth is that if a bad actor or self-aware AI wanted to wipe out humanity, it would not need an army of killer robots. It would simply manipulate us into killing each other.

Conclusion

Throughout history, major advancements in technology have always fostered both humanity-changing good outcomes and the potential for misuse and unintended consequences. The industrial revolution significantly improved our standard of living but also created widespread pollution and global warming. Nuclear technology can provide limitless clean energy but also has the potential to destroy every major city on the planet. Likewise, AI has the potential to significantly improve our lives by finding new cures for diseases, cost-effectively educating the masses and solving some of humanity’s most complex problems. However, I am also convinced that without the appropriate safeguards, it has the potential to end democracy as a viable form of government.


David Gardner

Fund Manager, Entrepreneur, Writer

1 year ago

Great comments everyone and I agree that the pros of AI may well be worth the risks. The bottom line is that we don't have a choice but to continue developing this technology because the world is full of bad actors. I recall how the old Soviet Union secretly cheated or openly reneged on every nuclear arms treaty it signed. There is no holding back the pace of technology. We can only hope that we will do a better job of controlling and governing its use than we have greenhouse gases and other global threats.

Ardis Kadiu

Innovator in AI & EdTech | Founder & CEO at Element451 | Educator & Speaker | Developer of AI Courses & Workshops | Host of #GenerationAI Podcast

1 year ago

AI is still in its early stages, David. Yes, there are risks, but there's a lot of work being done to mitigate them. Companies like OpenAI are actively building guardrails, focusing on responsible development and investing 20% of their compute in superalignment. The EU already has legislation regulating AI models. Catastrophizing AI only polarizes the conversation. We've got a long way to go before AI could turn on us, and by that time safeguards will be in place. Let's not overlook the potential of AI to solve complex problems and improve lives today.

Gavin Newman

Leading specialist in virtual & hybrid events, CEO at Ivent and university lecturer in online events

1 year ago

Interesting to see the recent spate of Hollywood blockbusters using AI as the premise for their storyline, such as the new Mission: Impossible. It alerts us to the fact that however much we decent people want to contain and manage it, those with nefarious motives may have other ideas.

Maureen O'Connor

Healthcare Entrepreneur, Optimist and Amateur Photographer

1 year ago

Great article David. I find the nuclear energy/warfare analogy compelling and applicable with regard both to the very good and very bad consequences of AI and the fact that regulation cannot be confined to one country. It will need to cross borders to be effective and to keep the US competitive if we are to maintain democracies around the world.
