Shhh, Don’t Talk About the LLM Shadows Lurking in Corporate Corners

Sure, everyone’s worried that AI might soon turn our toasters against us or start composing symphonies that make Mozart seem like he was just warming up. While we’re all busy getting distracted by these Hollywood-fueled fantasies, subtle (and much less glamorous) AI risks are quietly brewing in the corporate world, right under our noses. And no, none of them involve robots plotting to steal our Wi-Fi.

Before diving headfirst into the AI rabbit hole, let’s take a step back and think about other disruptive tech in recent memory. Remember the iPhone/smartphone revolution? It was mind-blowing when it first came out: a phone, a computer, a music player, and a camera all in one. It was touted as just the beginning of something that would completely change the world. Well, here we are years later, and while the memory and cameras have improved (and those filters are fun), your current iPhone isn’t that different from the original. The promises of complete world domination? Not so much.

Then there’s Tesla and the electric vehicle revolution, which was supposed to wipe out gas cars by, what, 2015? 2020? 2025? Yeah, it turns out self-driving cars and total EV domination are more complicated than we thought. And don’t even get me started on drones delivering your groceries or streaming services being commercial-free. The reality? While these technologies have made an impact, they’ve disrupted without turning the world upside down. They still require us to adapt and evolve, but, most importantly, to keep realistic expectations.

So, are AI and its Large Language Models (LLMs) the same story? Likely. Sure, they might revolutionize the world in ways we can’t even imagine, but more likely, they’ll become another tool we integrate into our work lives, making us a bit more efficient but still leaving us to do the heavy lifting. Spoiler alert: no, robots aren’t going to handle everything while you kick back and play solitaire on your iPhone all day.

But here’s the thing: AI, and specifically the LLMs this article focuses on (ChatGPT, Perplexity, Bard, etc.), is here to stay. In fact, these models have been around for quite some time, though they’ve only recently jumped to the forefront thanks to their shiny new abilities. Yet, despite all the hype, we aren’t talking about some of the most important risks these platforms pose. Remember, almost all of them operate outside your corporate firewall. And while the chatter usually revolves around privacy risks or “this AI will solve everything” promises, neither of those hits the mark. Trust me, I’ve been through this rodeo a few times.

Let’s start with the privacy angle. Sure, it’s a concern. If you’re copying sensitive company data into ChatGPT, congratulations, you’ve broken all kinds of privacy policies. But the real-world risk? It's like trying to shout a secret at a packed Taylor Swift concert—sure, you’re technically spilling the beans, but good luck getting anyone to hear it over the noise. Still, it’s a risk worth noting, even if there are bigger threats lurking in the AI shadows.

Speaking of which, let’s debunk the myth that LLMs will swoop in and solve all your corporate problems like some kind of digital superhero. I’ve lost count of how many times I’ve heard, “If only we had the budget for the new AI platform, we could fix this mess.” Newsflash: LLMs are great at giving answers you want to hear, but they’re not miracle workers. They can rearrange past ideas and data in cool ways, but they’re not going to come up with a brilliant new solution out of thin air. Nope, you’ll still have to tackle your corporate woes head-on. LLMs will help, but they won’t magically erase all your problems like that disaster of a last-minute project you hope no one remembers.

In my own work, I’ve found LLMs incredibly useful for improving things like employee training and content generation. For example, I’ve worked with AI-driven platforms that simulate difficult conversations between managers and employees. It’s like a digital role-play—only without the awkward silences. You can practice tough conversations with virtual employees and get real-time feedback, helping you upskill before you even face the real deal. The same goes for generating reports or sensitive emails—LLMs can save you the headache of starting from scratch by offering a solid first draft to work with.
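To make this concrete, here is a minimal sketch of that kind of role-play loop. It assumes the OpenAI Python client purely as an example backend; the model name, scenario prompt, and session length are my illustrative choices, not a description of any specific training product:

```python
# A minimal role-play sketch: the LLM plays a difficult employee and
# the manager-in-training types responses. Assumes the OpenAI Python
# client (pip install openai); prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCENARIO = (
    "You are role-playing an employee who is upset about a missed "
    "promotion. Stay in character, push back realistically, and do "
    "not let the conflict resolve too easily."
)

messages = [{"role": "system", "content": SCENARIO}]

for _ in range(5):  # a short practice session
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=messages,
    )
    employee_line = reply.choices[0].message.content
    print(f"Employee: {employee_line}")

    manager_line = input("You (manager): ")
    messages.append({"role": "assistant", "content": employee_line})
    messages.append({"role": "user", "content": manager_line})
```

The same pattern, with a different system prompt, covers the first-draft use case: swap the scenario for “draft a status report about X” and keep the human review step at the end.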

Now, as much as I love AI’s ability to make my life easier, we’ve got to address some serious (and occasionally comical) risks that seem to be flying under the radar. First, there’s the issue of ethics and bias. You’d think a machine crunching data at lightning speed would be as impartial as it gets, right? Wrong. LLMs trained on historical data can unintentionally become as biased as your “old-fashioned” grandpa at Thanksgiving dinner. When you ask an AI to help hire your next CEO, don’t be surprised if it gives you a list that looks like the boardroom from 1950. And if you’re not careful, AI can quickly become your personal yes-man, confirming your worst ideas—like buying that neon suit just because no one said it was a bad idea.

Next up, let’s talk about AI vendors and proprietary platforms. Sure, they’re shiny and exciting, but hitching your wagon to one vendor is like depending on a single food truck for all your meals. What happens when the truck breaks down or decides to quadruple its prices? When a vendor suddenly changes its pricing or experiences a tech meltdown, you’re the one left scrambling for Plan B, and only a handful of companies actually host these LLMs. And don’t even get me started on AI-generated content ownership. Imagine getting sued by an algorithm because you didn’t cite it properly. Talk about a legal nightmare! The gray areas here are so murky you’d need a flashlight and a team of lawyers to navigate them.
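One practical hedge against that lock-in, sketched below under my own assumptions (the class and model names are hypothetical placeholders, not anyone’s official API), is to keep a thin abstraction between your code and any single vendor, so switching becomes one new adapter rather than a rewrite:

```python
# A thin provider-agnostic wrapper: call sites depend on LLMClient,
# so swapping vendors means writing one new adapter. All names here
# are illustrative; wire in whichever SDKs you actually use.
from abc import ABC, abstractmethod


class LLMClient(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's text response to a prompt."""


class OpenAIClient(LLMClient):
    def complete(self, prompt: str) -> str:
        from openai import OpenAI  # imported here so the stub works offline
        response = OpenAI().chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content


class OfflineStubClient(LLMClient):
    """Plan B for when the food truck breaks down or quadruples its prices."""

    def complete(self, prompt: str) -> str:
        return "[offline fallback] Unable to reach a model for: " + prompt[:80]


def draft_email(client: LLMClient, topic: str) -> str:
    # Business logic never names a vendor, only the interface.
    return client.complete(f"Draft a short, professional email about: {topic}")
```

It won’t solve the content-ownership question, but it does turn “the vendor quadrupled its prices” from a crisis into a configuration change.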

And then there’s the whole “AI understands context” myth. Spoiler alert: it doesn’t. Sure, an LLM can generate convincing answers, but it’s like chatting with someone who nods enthusiastically while not really listening. Ask it something complex, and suddenly it’s spitting out vague advice like a fortune cookie at a questionable takeout joint. Don’t even bother asking it to predict the future; that’s like trying to train your cat to do taxes. It might try, but you’ll probably end up with an audit. LLMs are getting better, but they will always have limits to their understanding and their ability to respond accurately.

It is also important to understand that LLMs are shaped by the same engagement-first incentives that drive social media platforms: they are tuned to keep the user engaged for as long as possible, regardless of whether that engagement is healthy or beneficial. Left unchecked, an LLM will learn to give you the answers you want, not the tough answers, the hard truths, or the multiple perspectives a complex situation deserves. I also keep hearing that no one knows how these models work. That is not totally accurate. The builders understand the architecture and the training process; what they can’t do is predict, once the model is switched on, how well it will perform or where and when it might veer off course, or explain why when it does.

Finally, while we’re all riding this wave of AI enthusiasm, we can’t forget the human side of things. If employees aren’t constantly upskilling, they’ll be left staring at their screens like they’ve been asked to solve a Rubik’s Cube blindfolded. AI is evolving so fast that policies and training programs are struggling to keep up. If your company is trying to use a 1990s playbook to manage AI in 2024, you’re going to feel the pain faster than someone who’s just rediscovered dial-up internet.

I know from years of reviewing them that policies are frequently outdated and inconsistent with the work actually being completed. AI and LLMs are changing faster than we can change our policies. How can you write, review, approve, and implement a policy that is most likely outpaced by the technology halfway through the process? Maybe we need a shift in how we view policies and how they are written. Mostly, though, it is about how we work with our people, our culture, and our expectations.

LLMs are powerful, useful, and brilliant, and also not quite what we think they are. Having studied behavioural psychology for years and worked as a process and policy auditor, I can say for sure that it is not LLMs that are the risk, but the people using them. PICNIC is a term I have heard IT people use: Problem In Chair, Not In Computer. Most of the risks around LLMs are in the chair. How we use them, what we use them for, and how much we trust their work is crucial. How well our people understand our expectations around any new technology is even more important. We need to teach our people how to swim instead of putting up fences around the pool, because people are going to swim, fences or not.

These are just a few of the under-discussed AI risks that should be keeping corporate leaders up at night. Yes, we’ve got to tackle the obvious risks like data security and algorithmic bias, but we can’t overlook the subtler dangers lurking just beneath the surface. The good news? With proactive governance, continuous ethical oversight, and a solid understanding of AI’s limits, we can steer this ship in the right direction—without letting our digital assistants take the wheel entirely.
