Next, AI Is Not Your Employee, Your Co-Worker, or Your Future Workforce
How are you feeling about your "AI co-worker" now?


Through our years-long partnership with UNLEASH, I’ll be hosting the Technology stage at UNLEASH America May 7-8 in Las Vegas, and conducting a pre-conference workshop on May 6. So this is a good time to bring to light a disturbing trend that’s rapidly impacting the current value of human talent — and the future value of HR.

Key Takeaways:

  • Repeat after me: AI is not my employee, my co-worker, or my workforce.
  • Elevating software diminishes humans. Period.
  • Labels matter. These are software programs. Let’s call them that.

I know that I have already lost this battle over the language we use about AI software. But I’ve been a champion for human-empowering technology for decades, and I know that this is a watershed moment when we are eroding what it means to be human. Whether you lead a team, lead an organization, or just work with humans — but especially if you work in HR — you need to be on the human side of the ledger, because AI has the very real potential to take the human out of HR.

In his keynote talk at the Consumer Electronics Show (CES) in Las Vegas in early January, NVIDIA CEO Jensen Huang gave air cover to every tech executive hoping to replace human workers with less expensive (but not free) Artificial Intelligence (AI) software. “In the future, these AI agents are essentially [a] digital workforce that are working alongside your employees, doing things for you on your behalf,” Huang enthused. “And so the way that you would bring these specialized agents… into your company is to onboard them, just like you onboard an employee.”

But AI is not your co-worker, your cobot, or your team member.

Four Reasons to Stop the Cobot Madness

Here is why the AI Personification Trap is so toxic to anyone who works with humans, and especially to HR.

First, remember that there is no such thing as “an AI.” That’s a marketing label for a basket of technologies, one type of which is generative AI/large language models. The same is true for “AI employees”: These are at best incomplete products, and at worst opaque attempts to sell more software.

Second, the “AI employee” construction makes no logical sense. These applications are not in any way like a fully functioning human being. These are Programs. Products. Applications that have specific functions, with arbitrary limitations defined by the coder. A human is an employee. But if “an AI” uses 11 other “AIs,” is that one “AI employee,” or a dozen? Are we at the mercy of any application developer who says their “agent” should be classified as “an employee” — or any software that claims to be sentient?

Third, these technologies are amazing, incredibly flexible, and deeply flawed. A virtually unlimited number of software developers are creating generative AI and related applications that have serious limitations and questionable reliability. Using these technologies can have real-world impacts. For example, an AI agent that was asked to find cheaper eggs actually purchased a dozen (for $31!) without asking the user. How will you manage hundreds or thousands of employees using countless, completely different autonomous agents? Imagine those independent “AI agents” thoughtlessly being given access to sensitive organizational data, or purchasing power, or the ability to send communications on your behalf without your knowledge, or even telling your employees to break the law. How will you “discipline” your “AI employees” when the software puts you or your organization at risk?

Finally — and by far the most important — the personification of software algorithms represents an erosion of what it means to be human, with societal impacts we cannot turn back. I can’t overstate how toxic this narrative is to human workers. Pundits are already predicting the demise of social capital, as organizations begin adopting software that directly replaces workers. Economist Eduardo Porter calls this a different kind of “replacement theory” — substituting software for humans. Even if we were to consider this mindset acceptable, how will we feel when there are dozens, thousands, millions, or even trillions of “AIs”?

We Acquiesced to Social Media: Let’s Not Acquiesce to “Employee AI”

Both of these things are true about social media: These apps allow humans to maintain longtime social connections, and to find new connections around the world. And, social media has dramatically accelerated the creation of echo chambers, and the fragmentation of our societies. You already don’t know whether that user on Twitter/X is a human — or a bot designed to seed confusion and conflict.

Both of these things are true about genAI software: These apps will help many humans who can benefit from the software’s relentless and unquestioning support and insights. And, the same AI software will increasingly be used to replace living people in a range of situations: Vendors are already telling you not to hire humans, but to hire an AI agent instead. The CEO of a recently public company has dominated headlines by claiming to avoid hiring humans entirely. (And that’s even before lots of humanoid robots appear on the scene.)

So here's an example of what will happen.

You're planning a brainstorming meeting with a few colleagues. Someone asks, “Can I bring AIdan? I've trained him to be good at brainstorming.” Someone else says, “Sure. Can I bring mAIry? She helped us with our latest product design. Oh, and gAIry, he's a good conversation facilitator, asks great questions.” Soon there are more pieces of software "attending" than humans. Then someone says, "Hey, I forgot. Corporate says we have to invite jAIden, he's the new manager for our team. He's going to track our commitments and our performance."

And before long, someone may suggest, "Hey, why don't we just let all the AI attend, and tell us what to do after?" That is, if you haven’t seen this scenario already.

Who Needs HR With “an AI Workforce”?

Just in case you missed the shining vision of human replacement, NVIDIA’s Huang went on to say, “In the future, [the IT department will] maintain, nurture, onboard, and improve a whole bunch of digital agents and provision them to the companies to use. And so your IT department is going to become kind of like AI agent HR.” Yes, that’s just what we all want: an army of software “workers” who are “nurtured” by the organization’s technology department. And I hope nobody in IT wants this, either.

This isn’t a slippery slope. It’s a very clear example of empowering a mindset that inevitably diminishes humans, because our brains are far more hackable than we think. We delude ourselves into thinking that, because the functionality of genAI software continues to improve, these programs will gain human-like powers so rapidly that we need to give these applications human-like identities.

Anil Seth, author of “Being You,” makes the distinction between intelligence and consciousness. “Don’t assume that as AI gets smarter,” he said in a recent webinar, “consciousness will just come along for the ride.” Even though AI can simulate human thought and emotions, “In general, simulations don’t have the properties of the things being simulated.” But if our “seductive biases” convince us to treat software as conscious, and “...we sell our minds too easily to our machine creations, we not only overestimate them, we underestimate ourselves.”

So what should you do Next?

  • Do your best to maintain human-centric language. Don’t anthropomorphize the tools you use.
  • Consider renaming your People or HR organization to “Humans.” Trust me, as the language of AI workers takes hold, we run the risk of losing track of what the word “people” means. Let’s stick with Humans.
  • Watch for my new course coming out later this year. You know that I have 10 courses on LinkedIn with 1.6 million learners. I've recently been asked to update four of my courses, one of which is Your Future Workforce in the Age of AI. You can be assured that the course will be 100% about humans, and the ways we can all leverage genAI and related technologies — as a toolset.
  • And while we’re at it, let’s commit to a simple mantra: No human left behind.


Gary Bolles

I’m the author of The Next Rules of Work: The mindset, skillset, and toolset to lead your organization through uncertainty. I'm the Global Fellow for Transformation for Singularity University. I have over 1.6 million learners for my courses on LinkedIn Learning. I'm a partner in the consulting firm Charrette LLC. I'm a senior advisor to aca.so, powering Skill-Building Networks for enterprises. I helped to catalyze Next CoLabs, a global think tank of AI creatives, and I'm a co-founder of eParachute.com. I'm an original founder of SoCap Global, and the former editorial director of 6 tech magazines. Learn more at gbolles.com.


Rich Heckelmann

Effectively Bridging Technology Development, Marketing and Sales as Product Portfolio Leader, Pragmatic Marketing Expert, AI Product Management, Product Owner, Scrum Master, Operations, QA and Marketing AI Strategist.

4 days ago

Gary, an excellent piece - unfortunately so many of the CEOs engaged in AI projects now do not in any way understand this.

Marti Wigder Grimminck

Founder & CEO, Keynote Speaker, Futurist Designer, Impact & Innovation

3 weeks ago

great article!

Dr. Melody Schumann

Certified SCORE Mentor, Bucks County, PA. EdD/MBA-Business Consulting, Education Consultant-Curriculum & Instruction, Author/Academic Editor

1 month ago

Gary, I am really enjoying reading your take on this. It is so important to keep a proper perspective on this trend. This thought came to mind...I was wondering what your Dad would think of this if he were still with us. I had the pleasure of corresponding with him by email many years ago. He was an inspiration and a kindhearted and generous person. If you had to guess, what do you think he would say? Just wondering...thanks for all you do!

The challenge ahead is ensuring that education and workforce systems equip young people with the adaptability to thrive alongside AI, not just compete with it. Curious to hear your thoughts on how we bridge that gap!

Elisa Camahort Page

CORE VALUES: I’m Here to be Helpful VALUE EQUATION: Expertise + Experience + Empathy = Effective Strategies CURRENT: Co-Founder Optionality | Consigliere to Leaders LEGACY: Start-up Exec | Co-Founder, BlogHer Inc.

1 month ago

Thanks for sharing this. I do find that some of my colleagues who give advice on leveraging AI tools recommend talking to your AI chatbot like it was an employee of yours...and the assumption is that the work one does to make one's prompts super clear will also help one in managing people. But I don't recall us giving the same advice in the early days of online search...which also yields better results if one searches with terms that are clear and specific and narrow. I am really compelled by your argument here about not dehumanizing humans by treating software like a human. I already referenced this in an event I went to last night. Thanks for sharing it.
