Cybersecurity in the World of Artificial Intelligence

Artificial Intelligence (AI) is coming. It could contribute to a more secure and rational world, or it may unravel our trust in technology. AI holds a strong promise of changing our world, extending computing from processing data into a far more dominant role: directly manipulating activities in the physical world. This is a momentous step, one in which we relinquish some level of control over the safety of ourselves, our families, and our prosperity. With the capabilities of AI, machines can be given vastly more responsibility. Driving our vehicles, operating planes, managing financial markets, controlling asset transactions, diagnosing and treating ailments, and running vast electrical, fuel, and transportation networks are all within reach of AI. With such power comes not only responsibility, but also risk from those seeking to abuse that power. Responsibility without trust can be dangerous. Where will cybersecurity fit in a world where learning algorithms profoundly change and control key aspects of our future?

Technology is a Tool 

AI, for all its science fiction undertones, is about finding patterns, learning from mistakes, breaking down problems, and adapting to achieve specific goals. It is a collection of powerful logic tools that enable methodical, step-by-step progress in data processing. It sounds complex, but when distilled to its base components, it becomes simple to grasp. In practice, it could be finding the optimal route to a location, matching biometrics to profiles, interpreting the lines of the road to keep a vehicle in the proper lane, distilling research data to identify markers for disease, or detecting brewing volatility in markets or people. Vast amounts of data are processed in specific ways to distill progressively better answers and models.
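
To make "learning from mistakes" a little more concrete, here is a minimal sketch in Python. The data, parameter, and learning rate are purely illustrative assumptions, not taken from any particular AI product: a toy model repeatedly measures how wrong it is and nudges itself to be a little less wrong, which is the essence of how many learning algorithms distill progressively better models.

    # Toy example: "learning from mistakes" as iterative error reduction.
    # The model y = w * x starts with a bad guess for w, measures its error
    # against observed data, and adjusts w to shrink that error (gradient descent).
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # (input, observed output)

    w = 0.0               # initial guess for the model parameter
    learning_rate = 0.01

    for step in range(1000):
        # How wrong is the current model, and in which direction should w move?
        gradient = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= learning_rate * gradient   # learn from the mistake

    print(f"learned weight: {w:.3f}")   # settles near 2.0 for this toy data

The same loop of predict, measure the error, and adjust scales up, with far more data and parameters, to the route finding, biometric matching, and lane keeping examples above.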

The real importance is not in how AI can find patterns and optimal solutions, but rather in what the world can do with those capabilities. Artificial Intelligence will play a crucial role in a new wave of technology that will revolutionize the world.

Every day we get closer to AI systems that can transport people across town or across the continent in autonomous cars, accelerate research toward cures for diseases, locate hidden reserves of energy and minerals, and predict human conditions such as crime, heart attacks, and social upheaval. AI systems are expected to improve the lives of billions of people and the condition of the planet in many different ways, including detecting leaks in pipes to reduce waste and avoid environmental disasters, optimizing crop yields to reduce starvation, configuring manufacturing lines for efficient automation, and identifying threats to people's health and safety.

Learning machines will allow computers to perform actions more efficiently and accurately than humans. This will foster trust and lead to greater autonomy. It is not just cars. Think medical diagnosis and treatment, financial systems, governmental functions, national defense, and hospitality services. The uses are mind-boggling and nearly limitless. The new ways we use these capabilities will themselves create even more innovative opportunities to use AI. The next generation may focus on the monitoring, management, control, provisioning, improvement, audit, and communication of other, less capable AI systems. Computers watching computers. The cycle will reinforce and feed itself as complexity increases and humans become incapable of keeping pace.

As wondrous as it is, AI is still just a technology. It is a tool, albeit a very powerful and adaptable one. Here is the problem: tools can be wielded for good or for malice. This has always been the case, and we cannot change the ways of the world. As powerful as AI is, the risks that accompany it grow in direct proportion to its benefits. Wherever value is created, attackers are attracted and it becomes a target. It might be a hacker, online criminal, nation state, hacktivist, or any other type of threat agent. Those who can steal, copy, control, or destroy something of value have power. AI will be a very desirable target for those who seek power.

From Data Processing to Control of the Physical World 

Computers are masters of data. They can do calculations, storage, and all manner of processing extraordinarily well. For a very long time, the data and information generated by computers were largely for humans, to keep them better informed and to support their decisions. There are other purposes, of course, such as entertainment and communication, but the point is that there have been specific limits. Computer outputs went mostly to a screen, a printer, or another computer. Controlling things in the physical world takes a sizable amount of careful thought to do right. In many cases, we simply don't trust computers to deal with unexpected and complex situations.

Modern airliners have automatic settings that can fly the plane, but we all feel much more comfortable with a human in the cockpit, even if they don't do much but enjoy the ride. They are our failsafe, one with an irreplaceable stake, just like the passengers, in arriving safely at the destination. Humans, although slow compared to computers, fallible in judgment, and prone to unpredictability in performance, still have a trusted reputation for keeping people safe and rising to adapt when conditions change. They are simply better at critical oversight of incredibly complex, ambiguous, and unpredictable situations, especially when self-interest is involved.

AI may challenge that very concept.   

It will likely be proven on the roads first. Autonomous systems will be designed to reduce traffic congestion, avoid accidents, and deliver passengers by the most efficient route possible. Drivers around the world are notoriously bad. Sure, the vast majority of trips end in expected success, but many do not, and tremendous amounts of time, fuel, and patience are wasted on inefficient driving. Autonomous vehicles, powered by various AI systems, will eventually prove statistically to be significantly better at driving. So much so that it could revolutionize the insurance industry and create a class system on the roadways: autonomous vehicles traveling in tight, high-speed formations, while human drivers are greatly limited in speed and efficiency.

Such cases will open the world to computers which will be allowed, even preferred, to control and manipulate the physical world around and for us.   

It could be as simple as a smart device to mow the lawn. An AI-enhanced autonomous lawnmower could efficiently cut the grass, avoid sprinklers, keep clear of the household pets, detour around newly planted flowers, be respectful of pedestrians, and turn off when children or their toys get too close. Such a device could also monitor its own performance and act on maintenance needs, proactively ordering parts and connecting itself to the power grid when it needs to recharge. It might find the best place to park itself in the tool shed or garage, and return only when it determines the grass again requires the upkeep of its smart technological overseer. The trust that AI brings, by reliably making smart decisions, will allow digital devices to manipulate, interact with, and control aspects of the physical world.

Malice in Utopia

The same sensors, actuators, and embedded intelligence could also be a recipe for disaster. Faulty, damaged, or obscured sensors might feed incorrect data to the AI brain, leading to unintended results. Even worse, malicious actors could alter inputs, manipulate AI outputs, or otherwise tinker with how the device responds to legitimate commands. Such acts could lead to runaway mowers chopping down the neighbor's prized petunias or pursuing pets, with messy consequences. A single act could end badly. But what if a truly malevolent actor went so far as to hijack an entire neighborhood of these devices, like a botnet under the control of an aggressive actor? Still, a "rise of the suburban lawnmowers" scenario seems a bit silly.

What if we aren't talking about lawnmowers? What if, instead, it was autonomous cars, buses, and trains that could be controlled by hackers? Perhaps medical devices in emergency rooms, or those implanted in people, such as defibrillators, insulin pumps, and pacemakers. Smart medical devices could save lives with newfound insight into patient needs, but could also put people at risk if they make a mistake or are controlled maliciously. What if hostile nations were to manipulate AI systems controlling valves on pipelines, electrical switches in substations, pressure regulators in refineries, dam control gates, and emergency vents that manage the safety of power generation facilities, chemical plants, and water treatment centers? In the future, artificial intelligence will greatly benefit the operation, efficiency, and quality of these and other infrastructures critical to daily life. But where there is value, there is always risk.

The Silver Lining 

What can be broken can also be protected with AI. We live in a world full of risks. It does not make sense to eliminate all of them, but rather to manage them to an acceptable level. Technology provides a myriad of benefits and opportunities; it also brings new challenges in managing risk. The goal is to find the right balance of controls, with cost and usability factored in, so that the benefits of the opportunities are realized while the risks are kept at an acceptable level. These goals, and the enormous amounts of data required to understand them, represent perfect conditions for AI to thrive and find optimal solutions.
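
As a back-of-the-envelope illustration of that balancing act, here is a small, hypothetical Python sketch. The risk names, likelihoods, impacts, and costs are invented for illustration, not a prescribed methodology: each risk is scored as likelihood times impact, and a control is worthwhile only when the expected loss it avoids exceeds what it costs.

    # Hypothetical risk balancing: expected loss = likelihood * impact.
    # A mitigation is justified only if the loss it avoids exceeds its cost.
    risks = [
        {"name": "sensor spoofing",    "likelihood": 0.30, "impact": 500_000,   "cost": 40_000,  "reduction": 0.80},
        {"name": "model manipulation", "likelihood": 0.10, "impact": 2_000_000, "cost": 150_000, "reduction": 0.70},
        {"name": "data exposure",      "likelihood": 0.05, "impact": 300_000,   "cost": 60_000,  "reduction": 0.90},
    ]

    for r in risks:
        expected_loss = r["likelihood"] * r["impact"]
        avoided_loss = expected_loss * r["reduction"]        # loss removed by the control
        verdict = "justified" if avoided_loss > r["cost"] else "not justified"
        print(f'{r["name"]}: expected loss ${expected_loss:,.0f}, mitigation {verdict}')

An AI-driven defense could run this same reasoning continuously, across thousands of assets and threats, with estimates that update as conditions change.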

Intelligent systems could deliver new capabilities to detect malicious insiders and disgruntled employees, identify stealthy malware, dynamically reconfigure network traffic to avoid attacks, scrub software to close vulnerabilities before they are exploited, and mitigate large-scale, complex cyberattacks with great precision. AI could be the next great equalizer for defenders, supporting technology growth and innovation.

The Unanswered Questions 

This is not the end of the discussion; rather, it is the beginning of a journey. AI is still in its infancy, as are the technologies and usages that will incorporate it to deliver enhancements to systems and services. There are many roads ahead, some more dangerous than others. We will need a guide to understand how to assess the risks of AI development and evolution, how to protect AI systems, and most importantly, how to secure the downstream use cases they enable.

Some questions we must ponder: 

  • What risks arise when the integrity of AI systems is undermined, results are manipulated, data is exposed, or availability is denied? 
  • Who is liable if AI makes poor decisions and someone gets injured?
  • How will regulations for privacy and IP protection apply to processed and aggregated data? 
  • What level of transparency and oversight should be instituted as best practices? 
  • How should input data, AI engines, and outputs be secured? 
  • Should architectures be designed to resist replay and reverse-engineering attacks, and at what cost? 
  • What fail-over states and capabilities are desirable against denial-of-service attacks? 
  • How do we measure risks and describe attacks against AI systems? 
  • What usages of AI will be targeted first by cyber attackers, organized criminals, hacktivists, and nation states? 
  • How can AI be used to protect itself and interconnected computer systems from malicious and accidental attacks? 
  • Will competition in AI systems drive security to an afterthought or will market leaders choose to proactively invest in making digital protections, physical safety, and personal privacy a priority?

Dawn of a New Day… 

It is time the cybersecurity community began discussing the risks and opportunities that Artificial Intelligence holds for the future. It could bring tremendous benefits and, at the same time, unknowingly become a Pandora's box for malicious attackers. We may see innovators wield AI in incredible new ways to protect people, assets, and new technologies. Governments may be compelled to step in and begin regulating the usage, protections, transparency, and oversight of AI systems. Standards bodies will also likely be involved in setting guidelines and establishing acceptable architectural models. Thought-leading organizations will likely begin to incorporate forward-thinking cybersecurity controls that protect the digital security of systems, the physical safety of users, and the personal privacy of people.

I plan on exploring the intersection of cybersecurity and Artificial Intelligence in upcoming blogs. It is a worthy topic we all should be contemplating and discussing. This topic is virgin territory and will take the collective ideas and collaboration of technologists, users, and security professionals to properly mature. Follow me to keep updated and add your thoughts to the conversation. 

Right now, the future is unclear. Those with insights will have an advantage. Now is the right moment to begin discussing how we want to safely architect, integrate, and extend trust to intelligent technologies, before unexpected outcomes run rampant. It is time for leaders to emerge and establish best practices for a secure cyber world that benefits from AI.    


Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights on what is going on in cybersecurity.


Azahara Benito Carrillo

CMO at Galgus| Founder & CEO of Extravaganza Communication | Inbound Marketing Leader | Brand Strategist | Speaker | Mentor

7 years ago

It's really difficult to have real cybersecurity, don't you think, Matthew? Really interesting article. Here is a link to another one I think could be useful. Regards! https://geographica.gs/en/blog/artificial-intelligence-and-machine-learning/
