How to build LLMs for telco AI applications
AWS, IBM, McKinsey and Nokia on curating an LLM for domain-specific applications; bigger isn’t necessarily better and, at some point, you have to dive in
If you’ve used consumer-facing generative artificial intelligence (gen AI) tools like OpenAI’s ChatGPT, Google’s Gemini or Microsoft’s Copilot, chances are you’ve gotten back some interesting and relevant responses. Chances are you’ve also gotten back confusing nonsense that you struggle to map to the initial query you posed. That’s one of the problems with large language models (LLMs) with tens of billions of parameters. The software is combing through such a huge volume of data that finding the figurative needle in the haystack—the magic that takes your query, puts it into the appropriate context and returns information that you’d describe as intelligent—is hard to do consistently. So what does that mean for industry-specific gen AI solutions, say, some sort of telco AI tool for infrastructure planning, network optimization or any of the other use cases you’ll see touted on conference stages?
Basically it means that LLMs need to become smaller language models: models that start with a high-level view of the world’s information, pare out the noise, then layer in domain-specific data and proprietary, business-specific data. This difficult curation step is how companies can bring gen AI to bear across their operations and realize the productivity and efficiency gains that intelligent assistance can deliver. From there, it’s a matter of more training, better inferencing, developing confidence in the machine and organizational buy-in, and then you’re off to the races.
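To make that curation step concrete, here is a minimal sketch of what "pare out the noise, then layer in domain and proprietary data" could look like in code. It is purely illustrative: the keyword filter, file paths and JSON-lines layout are assumptions for the example, not anyone's production pipeline.

```python
# Illustrative curation sketch: filter a general corpus down to telco-relevant
# text, then layer in domain-specific and proprietary documents for tuning.
# TELCO_KEYWORDS, the file paths and the JSONL format are assumed placeholders.

import json
from pathlib import Path

TELCO_KEYWORDS = {"ran", "5g", "handover", "backhaul", "oss", "bss", "kpi"}

def is_relevant(text: str) -> bool:
    """Crude noise filter: keep general-corpus text that touches telco topics."""
    words = set(text.lower().split())
    return len(words & TELCO_KEYWORDS) > 0

def build_training_mix(general_path: Path, domain_path: Path, private_path: Path) -> list[dict]:
    """Combine a pared-down general corpus with domain and proprietary data."""
    mix = []
    for line in general_path.read_text().splitlines():
        doc = json.loads(line)
        if is_relevant(doc["text"]):                # pare out the noise
            mix.append({"text": doc["text"], "source": "general"})
    for path, tag in [(domain_path, "domain"), (private_path, "proprietary")]:
        for line in path.read_text().splitlines():
            doc = json.loads(line)
            mix.append({"text": doc["text"], "source": tag})   # layer in specifics
    return mix
```

In a real pipeline the relevance filter would be far more sophisticated than a keyword match, but the shape of the step is the same: shrink the general-purpose haystack before adding the needles that matter to the business.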
For telco AI LLMs, “Bigger is not always better”
“The key things to remember here are two things,” Ishwar Parulkar, CTO of Telecom and Edge Cloud at AWS, explained in an interview with RCR Wireless News. “Firstly, one model doesn’t fit all…Secondly, bigger is not always better. There is a tendency to think the more the number of parameters…it’s going to be better for your job. But that’s not really true.” Smaller models, dialed in with tuning—which can include prompt engineering, the use of retrieval-augmented generation (RAG) techniques and entering manual instructions—can give better results, he said.
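As a concrete illustration of the RAG and prompt-engineering techniques Parulkar mentions, here is a minimal Python sketch. The `embed` and `generate` callables are hypothetical stand-ins for whatever embedding model and (smaller) LLM endpoint an operator actually uses; the ranking is a plain cosine similarity over domain documents.

```python
# Minimal RAG sketch: retrieve the most relevant domain documents, then prompt
# a (smaller) model with them. `embed` and `generate` are assumed callables,
# not a specific vendor API.

from typing import Callable

def rag_answer(
    question: str,
    documents: list[str],
    embed: Callable[[str], list[float]],   # assumed embedding function
    generate: Callable[[str], str],        # assumed call to a smaller LLM
    top_k: int = 3,
) -> str:
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n\n".join(ranked[:top_k])

    # Prompt engineering: ground the model in retrieved telco context and add a
    # manual instruction to admit uncertainty rather than guess.
    prompt = (
        "Answer using only the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

The point of the pattern is that the model itself can stay small; the domain knowledge arrives at query time through the retrieved context rather than through billions of extra parameters.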
Parulkar laid out a three-step process for operators to follow, and added the need to consider price/performance, model explainability, language support and quality of that support as well. “Once you have the foundational model in place, you need to pick the right data sets, figure out the level of tuning you need to really serve your use case. It’s a three-step approach: learning the use case well…getting the right foundational model, and then the right set of data to tune it…That is what is really forming the bulk of the use cases which can be productized today. However, we do see an opportunity for building domain-specific foundation models. That’ll come a little bit later.”
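The selection criteria Parulkar lists—price/performance, explainability, language support—lend themselves to a simple scoring pass over candidate foundation models once the use case is understood. The sketch below is only one way of framing that decision; the candidate names, weights and numbers are made up for illustration.

```python
# Illustrative model-selection sketch: score candidate foundation models on
# quality, price/performance, explainability and language support. All values
# and weights are placeholders, not benchmarks.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    quality: float           # task score for the use case, 0-1
    cost_per_1k_tokens: float
    explainability: float    # 0-1, e.g. attributions and documentation available
    languages: set[str]

def score(c: Candidate, required_langs: set[str], budget: float) -> float:
    """Weighted score; models missing a required language or over budget drop out."""
    if not required_langs <= c.languages or c.cost_per_1k_tokens > budget:
        return 0.0
    price_perf = c.quality / c.cost_per_1k_tokens
    return 0.6 * c.quality + 0.3 * min(price_perf / 100, 1.0) + 0.1 * c.explainability

candidates = [
    Candidate("small-telco-tuned-7b", 0.78, 0.002, 0.7, {"en", "de", "ar"}),
    Candidate("general-70b", 0.84, 0.02, 0.4, {"en", "de"}),
]
best = max(candidates, key=lambda c: score(c, {"en", "de"}, budget=0.01))
print(best.name)  # here the smaller, cheaper model wins on price/performance
```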
For IBM, AI and multi-cloud are key strategic priorities; for operators, this is about moving from manual processes to automated processes. IBM General Manager of Global Industries Stephen Rose delineated four broad categories of use cases: customer care, IT and network automation, digital labor and cybersecurity.
In terms of consumer-grade AI versus enterprise-grade AI, specifically telco AI, he said the big issues are around where the data comes from, the security of it, understanding any biases and the general trustworthiness of the system. “If you actually look to enterprise-grade AI,” he said, “it starts foundationally with you know where the data is coming from, and therefore you can trust it and you can be more specific and unique in the way that you apply the AI because you know exactly where the data comes from. I think for [communications service providers] going forward, and for the industry as a whole, I think the main opportunity is two things.”
He continued: “One is finding ways to be willing to share privileged data. So, we talk about a lot of the data was hidden behind firewalls or it was within an organizational constraint let’s say. But now we’re actually seeing as openness as a general concept is becoming sort of pervasive across the industry, the data fabric that you can actually build that underpins AI is becoming more accessible in ways that we’ve never seen before. So I think there’s not only an opportunity within organizational silos within a particular organization, but even within a particular ecosystem. So, I think there’s huge opportunity for us in both domains, but I think if we work to less proprietary but privileged data and then the openness within the privileged data, then you get to do really interesting things with AI.”
So it’s obvious here that data quality informs the quality of AI-enabled outcomes; to put that another way, garbage in, garbage out. But here’s the rub. Operators have a huge volume of highly personalized, highly contextual data on the consumer side. On the operational side, there’s an enormous amount of network telemetry that exists and can be leveraged. The problem is that operators have historically under-utilized the data they have, whether in service of a customer-facing outcome or an internal optimization.
The ‘vicious cycle’ of telco AI data inputs
In talking through the data piece and the data-for-AI piece, McKinsey & Company Senior Partner Tomas Lajous set up the idea that the network is a proxy for the user experience, so an improved network corresponds to an improved customer experience. “Where AI comes in, is that now we can use AI to understand everything that’s happening on the network and understand relative to individual needs whether the experience is there or not. So, for starters, just by having this data, telco is going to improve the product. And of course improving the product is the first step to improving the overall experience for the customers, and to start bringing sources of differentiation in a competitive environment.”
As for the siloed nature of operators’ data: “In the telecom space, we’ve been suffering with a vicious cycle of bad data leading to bad or insufficient AI, leading to less focus on generating data, leading to bad/insufficient data, and so forth…But we are breaking out of it.”
Back to Parulkar’s comment that domain-specific LLMs were in the future—that comment came in an interview conducted in November last year. Fast forward to Mobile World Congress in February this year and Deutsche Telekom, e& Group, Singtel, SK Telecom and SoftBank announced the Global Telco AI Alliance; the companies plan to start a joint venture to develop telco-specific LLMs with an initial focus on digital assistants and chatbots. And, also to Parulkar’s point about language support, the plan is for optimizations for Arabic, English, German, Korean and Japanese with more to come.
“We want our customers to experience the best possible service,” Deutsche Telekom board member Claudia Nemat said in a statement. “AI helps us do that.”
Beyond telco AI for telcos, there’s a sub-theme playing out that corresponds to what we’d traditionally consider telco companies reaching deeper into various enterprises in an effort to expand market share by selling private networking, edge compute and other solutions. Nokia, which has seemingly led the charge into the enterprise, trialed an industrial AI chatbot ahead of Mobile World Congress for its MXIE system, a 5G/edge bundle for industrial applications. The product taps the MX Workmate LLM, which Nokia billed as the “first OT-compliant gen AI solution.” Following this thread, the industrial heavyweights presenting this week at the Hannover Messe industrial fair seem all-in on gen AI for industry.
Discussing MX Workmate, Nokia’s Stephane Daeuble, who looks after solutions marketing for the vendor’s enterprise division, shared a perspective on the introduction of gen AI that, while focused on industrial enablement, is also relevant to telco AI and really to AI in general. “When we had this in our hands, we wondered what to do with it,” he said. “Is it too early?…[But] we now have a solution that’s greater than the sum of its parts. And equally, we always launch early. We were early with private wireless—back in 2011. People were like, ‘What are you doing?’ But we were right. This is the same, and it will take time. But if you don’t start, it never happens.”