To Use AI for Security, We Must First Secure AI
Beena Ammanath
Trustworthy AI book author | Global Deloitte AI Institute leader | Humans For AI founder | AnitaB.org & Centre for Trustworthy Technology board member
Do you remember the moment that artificial intelligence (AI) jumped out of the pages of sci-fi novels or off the big screen and appeared in your everyday life? Was it when you bought your first smartphone? Or when you noticed that the smart speaker on your kitchen counter gets a little smarter about your speech patterns and personal preferences every time you talk to it? Or when you compared notes with a friend and realized that your social media app is serving the two of you very different political ads?
AI is becoming an increasingly common presence in the average person’s life. Its sophisticated algorithms are, for the most part, cutting through massive amounts of information to give people faster and more accurate access to whatever they are uniquely seeking. This is true not only for consumers; enterprises, too, are increasingly using AI to improve everything from fraud prevention, warehouse logistics, and energy cost management to medical care (especially when paired with the data processing power of supercomputers) and a whole lot more. In fact, an analysis of earnings calls shows that mentions of AI by executives soared from about 50 in a single quarter just three years ago to 791 in the third quarter of 2017.
Despite this, AI is not a silver bullet. It has limitations, and one of the biggest is bias. The past few years have brought numerous headline-making stories about AI systems making mistakes – systems that tagged photos of humans as animals, for example, or chatbots that were drawn into agreeing with racist or sexist statements.
This kind of bias can be caused by several factors. One is training data that is not diverse enough. AI and machine learning algorithms process large amounts of data and learn to draw conclusions from it, but the data may lack diversity – say, for example, it contains no pictures of people with albinism. Where there are gaps, the system makes a guess, and often a bad one. Bias also creeps in when other available data leads to an incorrect inference, such as guessing that a nurse in a photo or text is female because other data shows that fewer nurses are men. A third contributor is the human factor: many systems are designed to learn from the people using them, much like the smart speakers that “teach” themselves your speaking patterns, and so they pick up the biases of those users.
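To make the first factor concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn): a toy classifier is trained on data dominated by one group, then evaluated on each group separately. The groups, features, and thresholds are invented purely for illustration, not drawn from any real system.

```python
# Hypothetical sketch: a training-data gap produces biased performance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two toy groups whose feature distributions (and decision rules) differ.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is badly underrepresented.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(20, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Accuracy looks fine on the well-represented group, poor on the gap.
Xa_test, ya_test = make_group(500, shift=0.0)
Xb_test, yb_test = make_group(500, shift=3.0)
print("group A accuracy:", model.score(Xa_test, ya_test))
print("group B accuracy:", model.score(Xb_test, yb_test))
```

Because the model has seen almost nothing from group B, it effectively guesses for those cases – the same pattern behind the photo-tagging failures described above.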
These errors can have serious consequences in business. Consider the whole area of cybersecurity, where AI is already starting to show up in the security operations center. According to research by ESG, 12 percent of enterprise organizations have already deployed AI-based security analytics extensively, and another 27 percent have deployed AI-based security analytics on a limited basis.
AI can play an important role in enhancing both the digital and physical aspects of a company’s perimeter, and in the short term it can give businesses the upper hand. Over the long term, though, consider what happens when both the attacked and the attacker are empowered by AI. For example, one technique used to train AI is the generative adversarial network (GAN). In a GAN, two networks fight each other and improve simultaneously; in other words, as you create the best cop, you are simultaneously creating the best robber.
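To show that cop-versus-robber dynamic in code, here is a minimal, hypothetical GAN training loop in PyTorch on toy one-dimensional data; the network sizes, learning rates, and target distribution are illustrative assumptions, not a production setup.

```python
# Minimal sketch of a GAN: a generator ("robber") and a discriminator ("cop")
# train against each other and improve together.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into fake samples.
G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: tries to tell real samples from the generator's fakes.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0  # "real" distribution

for step in range(2000):
    # 1) Train the cop: label real samples 1, the robber's fakes 0.
    real = real_data(64)
    fake = G(torch.randn(64, 4)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the robber: make fakes the cop scores as real.
    fake = G(torch.randn(64, 4))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# As both sides improve, generated samples drift toward the real distribution.
print("mean of generated samples:", G(torch.randn(1000, 4)).mean().item())
```

The same adversarial pressure that makes the generator better is what sharpens the discriminator – which is exactly why AI that strengthens defenders can, in principle, strengthen attackers too.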
What can a CISO do if AI can be just as empowering for bad actors as for good ones? One way is to fight “bad” analytics with “good” analytics, using technology such as ArcSight Investigate, which combines analytics-driven investigation with real-time correlation at scale. CISOs can also augment AI with the “soft” assets that shape employee behavior: culture and policy compliance. Thinking of employees as the first firewall and actively creating a culture of security is critically important. Tactics can range from user awareness training to making sure that senior leaders understand corporate security norms and behave in ways that visibly align with them. Compliance is an equally important tool and another area that is benefiting from AI. It requires constant effort not only to find and fix privacy and security gaps but also to stay on top of relevant industry-specific issues and regulatory changes, such as GDPR.
Before we use AI for security, AI itself must be secured – as much as possible – from implicit biases. Because AI algorithms are programmed by humans, and humans are inherently biased, this is a difficult challenge; but academics and the companies building AI solutions have taken it up, and progress is being made.
We also need to fortify the power of AI by ensuring that it is democratized, rather than having it concentrated in the hands of a few government organizations or software companies. And finally, we need to work together to create an efficient regulatory framework governing AI research and implementation, the same approach we take to any precious commodity or powerful weapon.
As AI becomes more pervasive, it’s increasingly important that we use it as safely and transparently as we can.