AI adoption in business: finding the Midas touch or a costly mistake?

In some respects, the recent proliferation of AI feels eerily close to the story of Midas, the ancient king who asked for the power to turn everything he touched into gold.

Today, the story is echoed in how eagerly business leaders want to integrate AI solutions into their companies at every opportunity, hoping it will turn functions like marketing and sales into gold. As Midas discovered, however, even the most brilliant powers come with a downside, and these capabilities need to be implemented carefully lest they cause more trouble than they’re worth.


“Everything has a limit,” says Tomas Skoumal, chairman and co-president of Dyna.ai, which offers AI-powered solutions for businesses across various industries. “If you want to get AI just to say you have AI, you will fail. Nobody should buy a product just because it’s based on a certain technology.”

While he agrees that AI can deliver significant benefits, Skoumal points out that there are several things business leaders need to look out for when implementing the tech.

Beyond the buzz

Skoumal suggests something that seems counterintuitive at first glance.

“Ignore the AI part,” he says. “Your vendor may call it AI, but you should just look at the solution and judge it yourself.”

Tomas Skoumal, chairman and co-president of Dyna.ai

In other words, companies need to figure out whether buying a new solution can actually add value, such as by driving cost savings or efficiency improvements.

For instance, airlines may use AI-powered voice chat solutions to communicate automatically with customers. An airline will typically deal with thousands of calls a day, and automating those calls could lead to huge cost savings.

Conversely, if a firm uses AI for something less customer-facing, like accounting, it might not have as big an impact on the overall business.

This brings Skoumal to his next point: companies need to have the scale to make implementing AI worthwhile. A small business may not need AI for customer communication if it isn’t dealing with that many calls in the first place, as the cost savings would probably be minimal.
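
To make that concrete, here is a rough back-of-the-envelope check in Python. Every figure in it (call volume, per-call costs, the fixed platform fee) is a hypothetical placeholder rather than anything from Dyna.ai; the point is only that the fixed cost of an AI rollout needs a certain call volume to pay off.

```python
# Back-of-the-envelope break-even check for automating customer calls.
# All figures below are hypothetical placeholders, not Dyna.ai pricing.

calls_per_day = 5_000            # an airline-scale contact centre
cost_per_human_call = 2.50       # assumed fully loaded agent cost (USD)
cost_per_ai_call = 0.40          # assumed per-call cost of the AI solution
platform_fee_per_year = 250_000  # assumed fixed licence / integration cost

days_per_year = 365
annual_savings = calls_per_day * days_per_year * (cost_per_human_call - cost_per_ai_call)
net_benefit = annual_savings - platform_fee_per_year

print(f"Gross annual savings: ${annual_savings:,.0f}")
print(f"Net benefit after fixed costs: ${net_benefit:,.0f}")
# At 5,000 calls a day the project clears its fixed cost easily;
# rerun with calls_per_day = 50 and the same licence fee wipes out the savings.
```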

To determine whether an AI solution offers actual value, Skoumal suggests taking a step-by-step approach. Using the example of automating calls, a company can gradually hand over an increasing percentage of calls to the AI over a few months. If it works, let the AI handle even more, until the process is eventually fully automated.
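
One way to operationalise that gradual hand-over is a simple percentage-based ramp with a quality gate. The sketch below is an illustration only: the stage percentages, the 0.85 quality bar, and the simulated success rates are all invented, not part of Skoumal’s prescription.

```python
import random

# Illustrative phased rollout: route a growing share of calls to the AI and
# advance to a larger share only while call quality stays acceptable.
RAMP_STAGES = [0.05, 0.15, 0.30, 0.60, 1.00]  # share of calls handled by the AI
QUALITY_BAR = 0.85                            # invented minimum success rate

def route_call(ai_share: float) -> str:
    """Send this call to the AI with probability ai_share, else to a human."""
    return "ai" if random.random() < ai_share else "human"

stage = 0
# Pretend these are monthly success rates observed for AI-handled calls.
for month, observed_success in enumerate([0.90, 0.88, 0.82, 0.91], start=1):
    ai_share = RAMP_STAGES[stage]
    routed = [route_call(ai_share) for _ in range(1_000)]  # simulate a day of calls
    print(f"Month {month}: target {ai_share:.0%} to AI, "
          f"actual {routed.count('ai') / len(routed):.0%}, "
          f"success {observed_success:.0%}")
    if observed_success >= QUALITY_BAR and stage < len(RAMP_STAGES) - 1:
        stage += 1  # hand over a larger share next month
    # otherwise hold the current share (or roll back) until quality recovers
```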

“You have to do it with the perspective of limiting risk, seeing if it makes sense and that you’re in a reasonable position to succeed,” he explains.

AI’s Goldilocks zone

Companies need to ensure the AI models they want to use are trained accurately while striking a balance in how much data is used to develop the algorithm.

What this means is that while AI solutions need enough quality data to make accurate deductions for business leaders, the algorithms cannot be trained on too many specific data points either, as this could lead to a phenomenon known as overfitting.

In many business use cases, an AI algorithm is used to classify, analyze, and predict data points using scorecards, which are collections of performance metrics. The more data points there are, the more you can train the AI to look at a specific context.

However, if the algorithm is trained to look only at specific contexts, it will only be able to generate accurate insights within those circumstances.

As a result, small changes in the scorecards - and the data being fed to the algorithm - could cause the AI to fail miserably. Think of the AI as a student who has only studied how to answer one specific type of exam question. Present that student with a different type of question - even if the topic being tested is the same - and they’d fail.
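
The effect is easy to reproduce with synthetic data: give a model too much flexibility and it memorises its training points, then stumbles on data drawn from the very same process. This is a generic overfitting demonstration in Python, not Dyna.ai’s scorecard code.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Synthetic illustration of overfitting: fit the same noisy trend with a
# modest model and an overly flexible one, then score both on fresh data.
rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)  # true signal plus noise
    return x, y

x_train, y_train = make_data(20)
x_test, y_test = make_data(20)        # drawn from the same process

for degree in (3, 15):                # reasonable vs. excessive model complexity
    model = Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# The higher-degree fit hugs its own training points (lower train error) but
# typically does far worse on the unseen sample -- the student who only
# studied one kind of exam question.
```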


This, Skoumal says, is an “unstable” combination of algorithm and data points.

“Let’s say I have an algorithm and scorecard that is stable but not as precise. I know that it’s stable and works because when I make calls to customers in Jakarta, there’s a good take-up rate,” he explains. “If I use the same scorecard and algorithm to predict which customers I should contact in Bali, my take-up rate might drop a little, but it’s still acceptable because the general approach used to derive insights in Jakarta can be applied to Bali as well.”

However, if the algorithm is overtrained on too many Jakarta-specific data points, changing one piece of data - the customer’s location, in this case - may cause the AI to go haywire.

“All of a sudden, it could tell you to use different telco operators to call your customers instead of your usual one,” Skoumal says. “Absolute nonsense can happen.”

Firms need to test that the AI algorithm and scorecards work in each context first, ensuring their stability. According to Skoumal, this doesn’t apply to just marketing scenarios - it’s also relevant to business functions like risk management, which are critical for firms in the financial sector.
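
One common way to run that kind of check before reusing a scorecard in a new market is the population stability index (PSI), which compares the score distribution of the original segment with the new one. The sketch below uses invented score samples standing in for Jakarta and Bali customers; PSI is a standard industry technique rather than something Skoumal specifically names.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a new one.
    Rough convention: < 0.1 stable, 0.1-0.25 watch closely, > 0.25 unstable."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf        # catch out-of-range scores
    exp_pct = np.histogram(expected, edges)[0] / len(expected)
    act_pct = np.histogram(actual, edges)[0] / len(actual)
    exp_pct = np.clip(exp_pct, 1e-6, None)       # avoid divide-by-zero
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(1)
jakarta_scores = rng.normal(600, 50, 5_000)      # invented baseline scores
bali_scores = rng.normal(585, 55, 2_000)         # slightly shifted new segment

psi = population_stability_index(jakarta_scores, bali_scores)
print(f"PSI Jakarta -> Bali: {psi:.3f}")         # small shift: likely tolerable
```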

“There’s always the temptation to add more data to be analyzed,” he says. “But having solutions that look at just the right amount of data also makes it easier for human users to judge whether changes in data points should have a significant effect on the algorithms and scorecards.”

Staying in your lane

Lastly, AI solutions should be focused on the respective contexts in which they’re employed, which is especially important when they’re used to interact with the end customer.

For example, ensuring that a chatbot stays on topic is vital to keeping customers engaged with the conversation and ultimately delivering business results.

But limiting what the AI can respond with matters in other ways too. According to Skoumal, if the possible range of responses is too wide, the AI might take too long to come up with an optimal response. Customers may then disengage from a conversation they feel is going nowhere.

"You have to do it with the perspective of limiting risk, seeing if it makes sense and that you’re in a reasonable position to succeed"

To keep the latency between chatbot responses reasonable, Skoumal suggests limiting the AI’s responses to a specific scope.

“Based on current computing power, it should optimally take about two seconds to respond,” he points out. “For shorter questions, it can be faster, but generally speaking it should be around there.”
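
In practice, that scoping often combines a hard allow-list of topics with a response-time budget around the model call. The snippet below is a generic illustration: the two-second budget echoes Skoumal’s figure, while the topic list, the keyword-based classifier, and the placeholder generate_reply function are all assumptions.

```python
import asyncio

ALLOWED_TOPICS = {"booking", "baggage", "refunds", "flight status"}  # illustrative scope
RESPONSE_BUDGET_SECONDS = 2.0  # the rough latency target Skoumal mentions

async def generate_reply(message: str) -> str:
    """Placeholder for the actual model call of whatever chatbot stack is used."""
    await asyncio.sleep(0.5)  # simulate model latency
    return f"Here is what I found about: {message}"

def classify_topic(message: str) -> str | None:
    """Crude keyword routing; a real system would use an intent classifier."""
    return next((t for t in ALLOWED_TOPICS if t.split()[0] in message.lower()), None)

async def handle_message(message: str) -> str:
    if classify_topic(message) is None:
        return "I can help with bookings, baggage, refunds, or flight status."
    try:
        # Keep the customer waiting no longer than the agreed budget.
        return await asyncio.wait_for(generate_reply(message), RESPONSE_BUDGET_SECONDS)
    except asyncio.TimeoutError:
        return "Let me connect you to a colleague who can help faster."

print(asyncio.run(handle_message("Where is my baggage?")))
print(asyncio.run(handle_message("Tell me a joke")))
```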

Guided by business principles

Ultimately, Skoumal advises firms to implement solutions that fit their actual needs and aren’t just for show. It’s important to train the AI model they want to use on the correct data and to ensure the solutions are employed with practical considerations in mind.

In line with that, AI should be deployed at a scale where it saves costs or automates complicated tasks. The same will hold true even as AI advances into areas where it isn’t common at the moment, such as legal and compliance work.

“As AI becomes smarter and cheaper, it will become increasingly worthwhile to implement effectively, but you will always need to keep your business’ needs in mind,” Skoumal says.

***

Dyna.ai offers AI-driven solutions for banks, fintech firms, and businesses across various industries, helping them improve efficiency and effectiveness in areas such as customer engagement and risk management. To learn more about its solutions, reach out to the team through this link.

***

This content was produced by Tech in Asia Studios, which connects brands with Asia's tech community. Learn more about partnering with Tech in Asia Studios.
