Cutting through the AI noise
Hi everyone,
I think it’s safe to say we are in the midst of an AI hype bubble. You can now use an “AI powered” app to help you bake the perfect pizza and an “AI enabled” washing machine to launder your chef’s apron after the pizza party. You know you’ve reached peak technology hype when the tech in question has invaded your grill and laundry room.
Often, at this stage of the hype cycle, you see a lot of stuff that seems magical at first (perfect pizza!), but isn’t actually useful. At TC Labs, we have been meeting with potential manufacturing customers, and the people we are talking to tell us they are getting bombarded by vendors pitching AI stuff. There’s lots of noise in those pitches (hype, demos, even some magic), but very little signal (genuine utility and value). Fortunately, since we have a product that is, in our opinion, all signal, we’ve been able to get past that initial skepticism and are now engaging actual customers to help solve actual problems.
I imagine it can be hard for people on the receiving end of those pitches to separate the magical (but not very useful) from the valuable. What questions should business technologists who are developing AI strategies for their companies be asking of their teams and their partners? I have a pretty good list.
Does the AI program solve a big problem (or problems) I have right now?
In the early 2010s I worked on a Google product launch for a search feature we called Google Now, a precursor to Google Assistant. The demo was magical, but not useful. I showed it to lots of customers and users, most of whom thought it was super cool, but didn’t know how it would help them.
A quality product should help the business solve an important problem, right now. You know you have achieved this when the customer immediately sees themselves in the product. When I show potential customers our TC Labs product, they tend to take over the demo, which is great! Instead of us telling them what problems we can solve, they start telling us.
One time, I was showing our demo to a new potential client. He told us our solution could spare them from paging their experts to troubleshoot a factory problem on a Sunday morning. “No one likes getting paged on Sunday morning” doesn’t show up in any product spec, but it’s a real problem that we can solve for them right now.
Does it require a major up-front resource investment to get up and running?
There are a lot of platforms, toolkits, ecosystems, etc. that set a client up to solve the problem, but don’t get them all the way to a solution. Often, data is the big hurdle that stretches a company’s time to value. Businesses have a lot of data, but pulling it together so that it is usable by an AI solution is a big challenge. Evaluating any AI solution needs to take this into account. Does it actually solve the problem right away, or does it only get you part of the way there?
At TC Labs, we use AI to solve the data integration problem. Our system ingests all sorts of data and uses AI to organize the disparate data sets into a cohesive model that represents the factory’s operations. Clients can be up and running in a couple of days, with no additional resource investment. We’re not a platform or toolkit; we get them all the way to a solution.
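To make the idea concrete, here is a toy sketch of what pulling disparate plant data into a single, asset-centric model can look like. It is purely illustrative, not our actual code; the Asset and FactoryModel names are invented for this example.

```python
# Illustrative sketch only: pulling heterogeneous plant data (sensor readings,
# free-text maintenance notes) into one asset-centric model.
# All names here (Asset, FactoryModel, pump-7) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Asset:
    asset_id: str
    readings: list[tuple[str, float]] = field(default_factory=list)  # (timestamp, value)
    notes: list[str] = field(default_factory=list)                   # maintenance notes


class FactoryModel:
    """Unified view of plant operations, built from whatever sources exist."""

    def __init__(self) -> None:
        self.assets: dict[str, Asset] = {}

    def _get(self, asset_id: str) -> Asset:
        return self.assets.setdefault(asset_id, Asset(asset_id))

    def ingest_sensor_row(self, row: dict) -> None:
        # e.g. {"asset": "pump-7", "ts": "2024-05-01T06:00", "value": 3.2}
        self._get(row["asset"]).readings.append((row["ts"], row["value"]))

    def ingest_maintenance_note(self, asset_id: str, note: str) -> None:
        self._get(asset_id).notes.append(note)


model = FactoryModel()
model.ingest_sensor_row({"asset": "pump-7", "ts": "2024-05-01T06:00", "value": 3.2})
model.ingest_maintenance_note("pump-7", "Bearing replaced; vibration still high at startup.")
print(model.assets["pump-7"])
```

The point is the shape of the solution: every source, whether a sensor feed or a free-text note, lands in one model that the rest of the system can reason over.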
Does it use AI to reason? To recognize patterns and detect problems early, to generate and test ideas, and to present recommendations?
Complex problem solving is the full promise of LLMs and AI. Not all models are created equal, and the AI field is constantly evolving, developing new approaches (multi-agent, multi-model, etc.) to yield the best possible outcomes. What is this AI solution’s strategy for getting to the highest quality results? Does it rely on a single model, or does it employ a more sophisticated design? If so, does the additional design complexity lead to better results?
Our solution uses a multi-agent approach to analyze a factory’s performance, employing analytics and reasoning models to spot problems, suggest recommendations, and test their impact, and “factory block” modules to provide segment-specific context and visualization. As a result of this multi-faceted design, our system is highly robust. It is rarely fazed by unexpected, “edge case” situations.
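If you strip away the models, the control flow of a multi-agent design is easy to picture. The sketch below is a deliberately simplified, hypothetical example, with plain functions standing in for LLM-backed agents and a made-up “distillery block”; it is not our production architecture.

```python
# Hypothetical sketch of a multi-agent pipeline: an analytics agent flags issues,
# a reasoning agent proposes actions, and a segment-specific "factory block"
# adds domain context before anything reaches a user.
def analytics_agent(readings: list[float]) -> list[str]:
    """Flag simple anomalies (here: values far from the mean of the series)."""
    mean = sum(readings) / len(readings)
    return [f"reading {v} deviates from mean {mean:.1f}" for v in readings if abs(v - mean) > 2.0]


def reasoning_agent(findings: list[str]) -> list[str]:
    """Turn findings into candidate recommendations (stand-in for an LLM call)."""
    return [f"Investigate and re-calibrate: {f}" for f in findings]


def distillery_block(recommendation: str) -> str:
    """A 'factory block' adds segment-specific context to each recommendation."""
    return recommendation + " (check against fermentation temperature limits)"


readings = [3.1, 3.2, 3.0, 7.9, 3.1]
for rec in reasoning_agent(analytics_agent(readings)):
    print(distillery_block(rec))
```

The separation of roles is what buys robustness: each stage can be tested, swapped, or specialized without rebuilding the whole system.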
Does it learn and get smarter? Does it filter out hallucinations and not-great ideas?
A quality AI product should be able to experiment and learn from its failed ideas, without requiring expensive model re-training. Furthermore, no model will have the full context of a particular business, so there needs to be a way for it to fill in these gaps in its own knowledge. AI needs to be able to constantly learn, both from external sources such as onsite experts as well as from its own trials and tribulations.
One of the features of our solution is that it constantly updates its internal model of the factory, using plant-specific data (e.g. from sensors and monitors) as well as manual input from plant experts. When one of our recommendations is implemented, our system learns what works and what doesn’t so it’s smarter the next time around.
We use this always up-to-date understanding of the plant to pressure test our reasoning engine’s findings and recommendations before they are presented to users. Our models use physical limits like mass or energy balances to validate results. Checking actual data and physical parameters helps prevent irrelevant recommendations or hallucinations from ever seeing the light of day.
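As a simplified illustration of that kind of check, here is a hypothetical mass-balance gate. The function names, numbers, and tolerance are made up for this example; the idea is simply that a recommendation whose implied flows don’t add up never reaches a user.

```python
# Toy sketch of a physical-limit check: a simple mass balance used to gate
# recommendations before they are surfaced. Names and numbers are illustrative.
def mass_balance_ok(inflows_kg_per_h: list[float],
                    outflows_kg_per_h: list[float],
                    tolerance: float = 0.02) -> bool:
    """Accept only if total outflow is within `tolerance` (fraction) of total inflow."""
    total_in = sum(inflows_kg_per_h)
    total_out = sum(outflows_kg_per_h)
    return abs(total_in - total_out) <= tolerance * total_in


def validate_recommendation(rec: dict) -> bool:
    """Drop recommendations whose implied flows violate the mass balance."""
    return mass_balance_ok(rec["implied_inflows"], rec["implied_outflows"])


candidate = {
    "text": "Increase feed rate on line 2 by 5%",
    "implied_inflows": [1050.0, 200.0],
    "implied_outflows": [1240.0],
}
print("publish" if validate_recommendation(candidate) else "discard")
```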
Does it seamlessly integrate into my team’s daily workflows and ways of working?
One of the promises of AI is that quality solutions can integrate easily into current processes and workflows, without requiring a lot of training or process change. Many AI solutions are now conversational and multi-modal, so users can simply talk with the models (by text or voice), and input and output can be text, audio, or images.
But beyond a conversational user experience, a quality product should improve upon the dashboards and reports already in use. Its outputs should be close enough to existing formats that the team understands them easily, but extended in high-value ways, so the team can quickly grasp what’s going on and then converse with the system to dig in.
Our product accomplishes this by using a large library of pre-built, sector-specific charts, diagrams, and graphs, so its reports are easily understood by the team. Its notification system delivers information to various stakeholders based on their preferences and roles; there is less noise and far more signal for each team member.
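A role-aware routing rule can be as simple as the toy sketch below. The roles, severities, and channels here are invented for illustration and are not our actual configuration.

```python
# Hypothetical sketch of role-based notification routing: each role sets a
# severity threshold and a preferred channel; alerts only go where they matter.
ROUTING = {
    "operator":      {"severity": "low",    "channel": "dashboard"},
    "supervisor":    {"severity": "medium", "channel": "mobile"},
    "plant_manager": {"severity": "high",   "channel": "email"},
}
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}


def recipients(alert_severity: str) -> list[tuple[str, str]]:
    """Return (role, channel) pairs whose threshold is at or below the alert severity."""
    rank = SEVERITY_RANK[alert_severity]
    return [(role, cfg["channel"])
            for role, cfg in ROUTING.items()
            if SEVERITY_RANK[cfg["severity"]] <= rank]


print(recipients("medium"))  # operator and supervisor are notified; plant manager is not paged
```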
Does the team behind the solution have expertise in both my business and AI technology to stay ahead as AI evolves?
A high-quality enterprise product comes from a team that deeply understands both the business sector and the technology. Too many products are “technology in search of a problem”: they are designed around cutting-edge AI and other tech, not around the business problem, because deeply understanding a sector’s problems requires having that expertise on the team, alongside the AI wizards.
TC Labs started with the business problem: making factories more efficient and profitable. We’re AI experts and technologists, but we also have decades of experience in the manufacturing sector, in particular chemicals, refineries, non-ferrous metals, and distilleries. We understood the problem before we started figuring out how AI could solve it. We architected our solution to fit the constraints of the sector: addressing real, everyday problems, with low up-front resource investment and fast time to value.
I hope these questions will help AI business leaders from all verticals peel away the noise and home in on the signal when they are evaluating incoming AI solutions. I also hope they will help technologists and entrepreneurs skip the hype and get straight to shipping high-quality, useful, valuable products. I know how much work that entails. It’s easy (and fun) to create a magical demo. It’s harder to get from there to genuine value.