The Consumerization of IT roars back, and this time they have AI!
About ten years ago, I spent a lot of time studying the “Consumerization of IT” (also called “Shadow IT” or “FUIT”), writing countless articles about it on BrianMadden.com (along with Jack Madden and Gabe Knuth), and speaking on the topic around the world. For those not familiar, the consumerization of IT is the trend where regular end users buy sophisticated IT services on their own, often to the consternation of their IT departments, who are desperately trying to maintain control.
I never faulted the users for this. They were just trying to do their jobs. Why should they use some lame old VPN with file shares when Dropbox was simple and also let them work from their Mac at home? Does work email block attachments? No problem, "send it to my Gmail." Done!
This trend happened because technology advanced to the point that anyone with a credit card could spend a few dollars a month to buy access to services that would've cost millions of dollars and required years to implement just a decade earlier.
Of course this consumerization of IT was a nightmare for IT pros. Our users didn't understand that we were responsible for data compliance, and privacy, and security, and all the dumb boring things that users hate but that keep our companies out of the news.
It was a mess for a while there.
Luckily, this was such a big deal that these issues have been pretty much solved over the past decade, both because enterprise applications became more consumer-friendly (even I admit that Microsoft Office is awesome now), and because enterprise app management products got better. (Managing and delivering apps to a Mac or iPhone in the enterprise is pretty straightforward today.)
So this topic sort of faded away.
Then ChatGPT dropped
It's hard to believe that the public release of ChatGPT was less than a year ago!
While we, <cough> <ahem>, "IT Professionals" discuss and debate the merits of LLMs, whether they'll impact our businesses (e.g., how many people can we lay off?), and what the risks are, our end users are taking a somewhat different approach:
They're fine with it!
Have you asked your non-IT friends about ChatGPT? Sure, lots of people are still pretty clueless and start talking about Skynet and killer robots, but I'm noticing a definite uptick (based on my casual surveying at summer barbecues and weddings) in non-tech people using ChatGPT on a regular basis to do their jobs.
For those who use ChatGPT for work, I typically follow up by asking whether they're allowed to use it. While I hear the range of expected answers, I can tell you that I don't hear anyone saying, "Oh yeah, I used ChatGPT to cut boring tasks that took 45 minutes down to 5, but then IT said we couldn't use it, so I'm going to take the 5-minute tasks and turn them back into 45 minutes of boredom."
It's the consumerization of IT all over again, except this time they have AI! (BYOAI, anyone?)
Meanwhile more and more companies are implementing policies limiting or restricting ChatGPT. But will these policies even matter? What are they doing to actually stop its use? What can they do?
Block it? People will just use their phones. Or one of the countless other LLMs. Or they'll just quit and work somewhere else.
And besides, what does blocking it really achieve? Many of the smartest people alive say that AI and LLMs will be transformational technologies that fundamentally alter our notion of work. Why would anyone want to work at a company that banned them outright?
And what would a "ban" even do? How would the company know? Heck, even OpenAI's own tools specifically built to detect whether ChatGPT was used are nowhere near accurate enough to be used for real.
So what's a company to do?
Obviously organizations need to embrace AI and LLMs. Blocking them (whether via policy or technology) won't stop the people who want to use them from using them, and will set the rest of the company behind.
Luckily, we have a roadmap for success. Just as the consumerization of IT was once a challenge that seemed insurmountable, consumer-like products and services have grown to be an essential, secure, and reliable part of the modern workplace. Similarly, AI and LLMs are not a passing trend; they will shift how work is done. So even if you don't know exactly how to embrace them yet, the first step is recognizing that you'll need to embrace them at some point.
Employees naturally seek out the most efficient ways to achieve their goals. During the consumerization of IT ten years ago, people used Dropbox not because they wanted to FU IT!, but because they just wanted to get their work done in a way that was more productive for them. AI and LLM usage is no different.
That said, rank-and-file employees are extremely misinformed about the actual risks of AI and LLMs. (Most people still think that everything they type into ChatGPT is being used to train the next version.) So educating employees and helping them understand where and how they can safely use LLMs, and where they should be careful, demonstrates that you respect their productivity desires while helping them keep the company safe.
The AI and LLM markets are evolving stupidly fast. We will never be able to put the smoke back in this bottle. Instead we need to dive in and learn as much as we can, knowing that training our users on how to use AI and LLMs to turbocharge their productivity will best position us and our users for the future.
And besides, if we try to tell them "no", they'll just ask ChatGPT, "What's the best way to get around my company's workplace ban of ChatGPT use?"
Like it or not, it's a different world now.