AI-Powered Laser Sharks
Matīss Treinis
Product engineer, solutions architect, and product developer. Author of Clarity agile work management framework - clarity.pm
When Innovation Jumps the Shark (and Straps Lasers to It)
We’re living in the golden age of buzzwords, and nothing exemplifies it better than the current AI craze. If you’ve seen a washing machine, a phone case, or even a coffee maker labeled as “AI-powered” lately, you’ve probably rolled your eyes so hard they nearly got stuck. Somewhere along the way, “AI” went from being the transformative tech that gave us language models and groundbreaking research to just another marketing gimmick.
This phenomenon is perfectly captured by the Gartner Hype Cycle, a handy graph that tracks the lifecycle of emerging technologies. Spoiler alert: we’re sitting comfortably at the very top of the curve—the “Peak of Inflated Expectations.” At this stage, every company is racing to jam AI terminology into its branding, regardless of whether it makes sense or works. The result? A flood of poorly thought-out products that muddy the waters for legitimate uses of the technology.
And if that wasn’t enough, in September 2024 the Federal Trade Commission (FTC) entered the chat with Operation AI Comply. Yes, the government is stepping in to remind companies that slapping “AI” on a product doesn’t give them carte blanche to lie about what it can do. When you’re at the point where regulators have to explicitly tell you not to scam people, you know the hype has gone off the rails.
One Giant Cake, Zero Nuance
For most people, LLMs (large language models) and chatbots are AI. That’s it. End of story. And why wouldn’t they think that? It quacks like a duck, talks like a duck, and autocompletes your texts like a duck—surely, it must be an AI-powered duck! To the average consumer, it’s all one giant cake—delicious-looking, maybe—but with no differentiation between the real-deal tech and the frosting-covered frauds.
But AI is so much more than just chatbots. It encompasses everything from computer vision systems that analyze medical scans to algorithms that optimize traffic lights. Yet, in the public eye, these distinctions are erased. Like I said, AI is one giant cake, and whether the frosting is a scam or a genuinely useful application, it all tastes the same.
That lack of nuance leads to two dangerous outcomes. First, it enables blatant misrepresentation—companies overselling what their AI can do, leaving consumers frustrated when it doesn’t live up to the hype. Second, it fuels a backlash. When enough people feel let down, they don’t just lose faith in the bad actors—they start dismissing the entire field. AI risks going from being “the future” to “useless,” setting progress back years.
The problem with this all-or-nothing mindset is that it swings wildly between extremes. First, people are dazzled by flashy demos and marketing pitches that promise the moon. But when those promises inevitably fall short (because, shocker, your “AI-powered” coffee maker can’t actually read your mind), the pendulum swings back just as hard. Suddenly, for everyone but enthusiasts, the perception shifts from “this is the future” to “LLMs are useless, why did we even bother?”
Square Peg, Round Hole: AI in All the Wrong Places
The rush to cram AI into every product imaginable has given us a wealth of baffling inventions that feel like they belong in a sci-fi parody rather than reality. Yet, here they are, existing in the world, and reminding us that just because you can doesn’t mean you should.
The tech world has a bad habit of trying to fix problems that don’t exist, and nowhere is this more obvious than in the hilarious misapplications of AI. It’s as if companies believe that adding AI to any product, no matter how trivial, makes it futuristic and indispensable. Spoiler alert: it doesn’t. Some of these creations are so absurd, they seem like satire—but they’re all painfully real.
Let’s talk about the Rabbit R1, a pocket-sized AI companion gadget that was all the buzz (and letdown) a while back. Priced at $199, it promised to be your all-in-one AI companion, capable of performing tasks ranging from web searches to media control, all through voice commands and a 2.88-inch touchscreen. While the concept of a dedicated AI assistant is intriguing, one can’t help but wonder: doesn’t your smartphone already do all of this? Isn’t this a little… redundant?
Then there’s Swarovski, renowned for its luxury crystals, which back in 2024 introduced smart binoculars with built-in AI capable of identifying over 9,000 species of birds, mammals, butterflies, and dragonflies, plus assorted dragons and other mythical (and not-so-mythical) creatures. While the tech is impressive, one can’t help but chuckle at the thought of a nature walk turning into a high-tech quiz show, with your binoculars constantly feeding you trivia about every creature in sight. All at an affordable $4,799. Ugh.
These two examples alone are enough to show what happens when innovation is driven by hype rather than purpose. The problem isn’t that AI can’t be useful—it’s that we’re forcing it into products that don’t need it, creating “solutions” for problems no one has. The result? A flood of devices, apps, tools, clouds, rains, and everything in between: solutions that are as unnecessary as they are unintentionally hilarious.
And yes, these are real, albeit not particularly useful inventions. This time, I’m steering clear of outright fabrications or full-blown scams—looking at you, “AI-enhanced water” and the like. Don’t even ask.
Why Is This Happening?
Two words: money and ignorance.
First, there’s the financial incentive. AI is hot right now, and companies know they can charge a premium for anything that carries the label. Whether or not it actually adds value is secondary; the mere appearance of innovation is often enough to drive sales.
Then there’s the ignorance factor. Most people don’t fully understand what AI is or what it can realistically do. That’s not their fault—AI is a complex topic—but it creates a perfect storm where companies can overpromise without much pushback. Until, of course, reality catches up with them.
And let’s not forget the sheer carelessness of it all. AI can do incredible things when applied thoughtfully, but in the hands of people who don’t understand its limitations, it’s a recipe for disaster.
The Problem with False Promises
The real danger here isn’t just that consumers get duped. It’s that these failures poison the well for everyone. Every time an AI-powered product overpromises and underdelivers, it chips away at the public’s trust in AI as a whole.
We’ve seen this dynamic before in other industries. Remember the dot-com bubble? For every genuinely innovative company like Amazon, there were dozens of Pets.coms—companies that burned bright on hype and then collapsed under the weight of their own nonsense. When the bubble burst, it wasn’t just the bad actors who suffered; the entire tech industry took a hit.
If we’re not careful, AI could go the same way. And that would be a tragedy, because despite all the nonsense, this technology has the potential to change the world in ways we can barely imagine.
Ultimately, consumers might not know much about AI or its technicalities, but they’re not stupid. If enough companies make enough empty promises, people will eventually lose patience—and their trust.
This erosion of trust has ripple effects. If AI-powered gimmicks like washing machines and phone cases dominate the market, it becomes harder for genuinely useful AI applications to gain traction. Imagine being a researcher trying to explain how an AI model can revolutionize cancer treatment when the public is still bitter about that chatbot that lied to them about a flight discount.
The stakes are high because the potential of AI is enormous. But if we let hype-driven nonsense dominate the narrative, we risk squandering that potential before it even has a chance to fully unfold.
Where AI Actually Shines
Now, let’s not throw the baby out with the bathwater. AI, when used appropriately, is an incredible tool. The key is understanding its strengths—and its limits.
At its best, AI is a productivity booster, not a magic wand. Take LLMs, for example. For senior software engineers, these tools are game-changers. They can handle repetitive tasks, fill in gaps, and speed up workflows in ways that free up time for higher-level problem-solving.
For junior developers, though? Not so much. Without the experience to spot mistakes or guide the tool effectively, an LLM is more likely to be a distraction than a help. It might even churn out bad code that a junior developer doesn’t know enough to question.
The same principle applies across industries. AI isn’t a one-size-fits-all solution; it’s a scalpel, not a sledgehammer.
Laser Sharks and the Future of AI
Which brings us back to laser sharks. They’re flashy, they’re memorable, and they’re completely unnecessary. But they’re also a perfect metaphor for what’s happening in the AI space right now. Instead of asking, “What problem does this solve?” companies are asking, “How can we make this sound futuristic?”
The result is a flood of products that prioritize marketing over substance. And while that might work in the short term, it’s not sustainable. You can only sell snake oil for so long before people catch on—and when they do, the backlash is brutal.
AI-powered laser sharks are a lot of fun to think about, but they’re also a cautionary tale. Just because you can strap a laser onto a shark doesn’t mean you should. The real future of AI doesn’t need lasers, phone cases, or chatbots that hallucinate things into being. It needs common sense, clear communication, and a focus on making people’s lives better.
Because if we don’t stop jumping the shark now, we might just end up sinking the whole ship.
Also available on The Chair Theory Substack