Technological Change Is as Much About People as It Is About the Technology Itself

I share content about engineering, technology, and leadership for a community of smart, curious people. Join my email newsletter and subscribe to my YouTube channel for more updates and tech insights.


So, everyone’s talking about AI like it’s the next big thing that’s going to change everything overnight, right?

But let’s be real — it’s not happening as fast as we thought.

Matt Asay and Jeremie Brecheisen have some pretty interesting takes on why that’s the case and what companies can actually do to make the most out of AI.

Slow Down, It’s a Human Thing

Matt Asay’s basically saying, “Hold up, AI isn’t taking over the world just yet, and that’s because of us humans.” Despite all the fancy tech, people — with their quirks, fears, and routines — are slowing things down.

Historically, we always overestimate how quickly new tech will catch on because we forget how complicated human behavior is.

Christopher Mims from the Wall Street Journal backs this up, calling out “technological determinism” — the idea that new tech will just magically change everything overnight. But no, new gadgets and systems have to fit into our messy human lives and preferences, which aren’t exactly straightforward or logical.

Look at cloud computing. AWS came out in 2006, and while it’s huge now, a lot of companies still cling to their old on-premises systems.

This just shows that tech change is as much about getting people on board as it is about the technology itself.

The Trust Issue with AI

Jeremie Brecheisen’s research digs into another critical aspect of AI adoption: trust.

He found that leaders don’t really get how their employees use AI or how ready they are for it. This lack of understanding can wreck trust between the bosses and the workers, which makes it harder to roll out AI smoothly.

One jaw-dropping stat comes from three Gallup studies: the CHRO Roundtable Survey of chief HR officers (CHROs) at large companies averaging 80,000 employees, the Gallup Quarterly Workforce Study of nearly 19,000 U.S. employees and leaders, and the Bentley-Gallup Business in Society Report. Almost half of the top HR folks at big companies have no clue how often their employees use AI.

This ignorance leads to a heavy-handed approach to AI rules, stifling creativity and making everyone scared instead of encouraging teamwork and agility.

There’s also a big disconnect in how ready people feel for AI.

Nearly half of the employees say they’re good to go, but only 16% of HR leaders think their teams are ready.

This mismatch means employees might be using AI without proper support, widening the trust gap even more.

Breaking Down the Barriers

Both Asay and Brecheisen agree that companies need to shift their culture to make AI work.

Here are three strategies for leaders to bridge the trust gap and guide their companies through the AI era:

1. Measure and manage AI usage

Leaders need to know how AI is being used in their companies. This means collecting data on:

  • how often it’s used,
  • how effective it is,
  • and what tasks it’s helping with.

By grounding decisions in this data, leaders can make smarter calls about where to put safeguards and where to let employees use AI more freely.

2. Empower managers to build trust

Managers are key to making sure AI strategies work. They know best where AI can boost efficiency and what training employees need. Regular team check-ins can help managers understand their teams’ needs and support effective AI use.

3. Adopt a purpose-driven AI strategy

Ditch the fear-based, rule-heavy approach. Instead, align AI initiatives with the company’s mission. When employees feel connected to their company’s purpose, they’re more engaged and productive.

A purpose-driven AI strategy fosters innovation and collaboration, which leads to better outcomes.

Building Trust in AI

People are pretty skeptical about AI. Only 10% of U.S. adults think AI does more good than harm, and a whopping 79% don’t trust businesses to use AI responsibly.

This trust gap needs addressing.

Leaders should be transparent about how AI is used in their companies and involve employees in the decision-making process. By committing to ethical AI practices and showing how AI can enhance human work instead of replacing it, leaders can build trust and reduce fear.

The Slow But Sure Path Forward

As Asay points out, tech changes slowly because of the human factor. But this isn’t a bad thing. It gives organizations the chance to make thoughtful, deliberate decisions about integrating AI.

By focusing on trust, understanding AI usage, and aligning AI strategies with company goals, leaders can navigate the AI revolution in a way that benefits everyone.

In a nutshell, the AI revolution isn’t a quick flip but a steady transformation.

To tap into AI’s full potential and build a more innovative and efficient future, companies need to address the human factors that slow down tech adoption and foster a culture of trust and collaboration.


This article was originally published on Medium.


Want More?

I write about engineering, technology, and leadership for a community of smart, curious people. Join my email newsletter for more insights and tech updates.
