The business of "yes"
Anthony Morris
Co-Founder/Head of Infosecurity | Building a virtual Security Analyst | SOC Operations Expert | BBQ Fanatic | Scout Leader
"When the Paris Exhibition [of 1878] closes, electric light will close with it and no more will be heard of it." -- Oxford professor Erasmus Wilson
"You're crazy if you think this fool contraption you've been wasting your time on will ever displace the horse." -- Banker for Alexander Winton, 1890
"There is no reason for any individual to have a computer in their home." -- Ken Olsen, DEC president, 1977
"A December 2009 survey conducted for Microsoft revealed that more than 60 percent of the consumers and more than 75 percent of senior business leaders questioned cited data safety, security, and privacy as the chief concerns about cloud computing. More than 90 percent of those surveyed expressed reservations about the security and privacy of personal data stored in the cloud." -- Microsoft Survey 2009
"Three-quarters of global businesses are currently implementing or considering bans on ChatGPT and other generative AI applications within the workplace, with risks to data security, privacy, and corporate reputation driving decisions to act. [...] 61% of companies deploying/considering generative AI bans view the steps long term or permanent." -- Blackberry survey, 2023
Electricity, automobiles, computers, the cloud, generative AI... and I could keep going with seemingly endless examples. The world has always had skeptics, naysayers, and those who could not see the potential of what was coming. I would even shamefully place myself in that group for the earlier part of my life.
With embarrassment, I'll admit that back in the late '90s I heard of a company called broadcast.com. I downplayed it and said, "What person would wait so long to watch a video on their computer?" I didn't have the vision of Mark Cuban, who sold it to Yahoo! for more than $5 billion.
Circa 2005, I was shown a product called "Splunk"... to which I foolishly retorted, "Why would anyone ever install that bloatware when they could just use grep or write a regular expression?"
Why do I share these embarrassing, self-incriminating stories? Because I haven't always been quick to see or embrace the potential of new ideas. I had to grow into it. I'm not saying all this from an ivory tower ("Come to me and you shall learn"). Rather, I'm humbly saying it as a guy who has made some short-sighted technology choices in his career but has hopefully learned from them by now.
So that's one angle... put a pin in that topic while I make a second point and then I'll bring them both together.
Information security has long had a reputation for being in the business of "no". We are quick to tell the business, "No, you can't do that." The worst of us even park it there without justification. Those who are only moderately better will put together a risk assessment justifying their position, but the end result is still "no" to the business.
I contend that our role in information security is to enable the business while managing risk. Often, that requires creativity and great skill... but that is the distinction between being a business enabler and a business resister.
So, introductions made: generative AI and large language models have evolved to the point that businesses can now realize significant value from them. Failing to recognize the potential business value and competitive advantage of generative AI places us in the same class as those who resisted electricity, the automobile, the telephone, television, computers, and the cloud.
As security professionals, our job is not to resist artificial intelligence but to manage the technology and its associated risks (yes, they do exist) in a way that allows the business to adopt it and move ahead of the competition faster and more cost-effectively.
So where should security leaders go from here? Here is a plan of attack, a few steps toward success:
1) Educate yourself on the potential business value of generative AI. There are many ways and places to do this, but this 68-page report by McKinsey is one of the best.
2) Understand the security risks of LLMs. A good place to start is the OWASP Top 10 for Large Language Model Applications.
3) Recognize the dangers of inaccurate data. This can range from LLM hallucinations to incorrect output. Educate the business about the danger of over-relying on generative AI technologies, especially this early on.
4) Understand the risks associated with plagiarized and copyrighted material. It is entirely plausible that reliance on generative AI output to produce certain material could violate existing copyrights, depending on the data set the model was trained on.
5) Talk to the business. Understand their goals and what they want to accomplish. What are the business drivers and how does generative AI help them get there faster/cheaper/better?
6) Armed with all this new information, education, and security awareness, develop an assessment sheet to identify where the security risk outweighs the business value (a simple scoring sketch follows this list).
7) Employ mitigating controls (not all of which will be technical) to reduce the risk to a level that is manageable by you and acceptable to the business.
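To make step 6 concrete, here is a minimal, hypothetical sketch in Python of what that assessment sheet might look like if you scripted it: each use case gets a business value score, an inherent risk score, and a list of mitigating controls, and the output is a "yes, if..." or "revisit" verdict. The 1-to-5 scoring scale, the half-point credit per mitigation, and the example use cases are all illustrative assumptions on my part, not a standard framework.

```python
# Hypothetical sketch of a risk-vs-value "assessment sheet" for generative AI use cases.
# Scales, weights, and examples are illustrative placeholders.
from dataclasses import dataclass, field


@dataclass
class UseCase:
    name: str
    business_value: int                 # 1 (low) .. 5 (high), set with business stakeholders
    inherent_risk: int                  # 1 (low) .. 5 (high), e.g. informed by the OWASP Top 10 for LLMs
    mitigations: list[str] = field(default_factory=list)

    def residual_risk(self) -> float:
        # Assumption: each mitigating control (technical or not) knocks roughly
        # half a point off the inherent risk, floored at 1.
        return max(1.0, self.inherent_risk - 0.5 * len(self.mitigations))

    def verdict(self) -> str:
        # "Yes, if..." when value meets or beats residual risk; otherwise revisit.
        if self.business_value >= self.residual_risk():
            return "yes, if the mitigations hold"
        return "revisit: risk outweighs value"


use_cases = [
    UseCase("Marketing copy drafts", business_value=4, inherent_risk=3,
            mitigations=["human review before publishing", "plagiarism/copyright check"]),
    UseCase("Pasting customer data into a public chatbot", business_value=2, inherent_risk=5,
            mitigations=[]),
]

for uc in use_cases:
    print(f"{uc.name}: value={uc.business_value}, "
          f"residual risk={uc.residual_risk():.1f} -> {uc.verdict()}")
```

In practice this would more likely live in a spreadsheet or your GRC tooling than in a script, but the point is the same: make the value-versus-risk trade-off explicit so the default answer can be "yes, if..." rather than a flat "no".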
The world is going to move on whether you resist or not. People are going to use the telephone instead of the telegraph. They are going to watch television instead of plays. They are going to drive cars instead of riding horses. They are going to put data in the cloud, and they WILL use generative AI.
Find a way to say "yes, if..." while keeping the risk at an acceptable level.