The Synergy of Large Language Models and Traditional AI: A Pragmatic Approach
Dr M Maruf Hossain, PhD
Leading organisational transformation with Data, AI, and Automation | Thought Leadership | Innovation | Strategy to Execution | Keynote Speaker | ex-IBM, Infosys, Telstra, Australian Government | INTJ
A Large Language Model (LLM) is a noteworthy AI system, powered by neural networks, designed to comprehend and generate human-like text from vast amounts of training data. These models are like language virtuosos, capable of grasping context, generating coherent text, and answering questions in a way that seems human. They have gained significant attention for their potential in applications such as chatbots, content generation, language translation, and enhanced information retrieval.
Now, why is this technology important? LLMs offer a more natural and advanced way for humans to interact with machines. They can assist in understanding and generating text, which is invaluable in customer support, content creation, and information retrieval. They are also instrumental in breaking down language barriers, enabling efficient translation and promoting global communication and understanding.
While LLMs have undeniably made waves in the AI landscape, it's imperative to maintain a broad perspective and not lose sight of the bigger picture. LLMs belong to the broader category of Generative AI, which, in turn, falls within the vast realm of Artificial Intelligence. The introduction of generative AI and LLMs does not diminish the significance of AI as a whole. It's crucial to recognise that despite the buzz surrounding LLMs, they are not a universal panacea, nor are they always cost-effective. According to a report in The Wall Street Journal from early October 2023, Microsoft was losing, on average, more than $20 per user per month on GitHub Copilot, despite charging a $10 monthly subscription. This situation underscores the need for either price adjustments or the creation of more cost-effective models.
In this context, it is important to approach generative AI pragmatically. Employing generative AI to address every AI-solvable problem may not always be the most efficient approach. It could be likened to using a cannon to eliminate a mosquito. Despite the allure of generative AI, the true value lies in creating a user experience where natural language plays a central role in ensuring customer satisfaction. Overlaying traditional user interface/experience layers on top of LLMs can inadvertently defeat the original purpose of LLM integration. Such approaches may be viewed as ‘lazy’ engineering solutions. Here are several compelling reasons to underscore the continued relevance of traditional AI for sustaining profitable AI initiatives:
Well-Defined Tasks. When a task is well-structured and the problem domain is clearly defined, traditional AI can excel. For example, in manufacturing, traditional AI systems can precisely control and optimise processes without the need for language generation.
Deterministic Processes. In situations where outcomes need to be highly deterministic and predictable, traditional AI rule-based systems can provide the desired level of control. This is crucial in fields like critical infrastructure management and autonomous systems where reliability is paramount.
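To make the contrast concrete, here is a minimal sketch of a deterministic, rule-based controller. The function name, thresholds, and actions are hypothetical; the point is that identical inputs always produce identical, auditable outputs, unlike a generative model's sampled responses.

```python
def pressure_action(pressure_kpa: float) -> str:
    """Return a control action from fixed, inspectable rules.

    Hypothetical thresholds for illustration only.
    """
    if pressure_kpa > 500:
        return "open_relief_valve"   # rule 1: over-pressure safety
    if pressure_kpa < 100:
        return "increase_pump_speed" # rule 2: under-pressure recovery
    return "hold"                    # rule 3: normal operating band

# The same reading always yields the same action -- the predictability
# that safety-critical systems require.
print(pressure_action(650))  # -> open_relief_valve
```

Because every branch is explicit, the system's behaviour can be verified exhaustively before deployment, which is precisely what regulators of critical infrastructure expect.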
Real-time Decision Making. For applications that require immediate, real-time decision-making without the luxury of language generation and complex reasoning, traditional AI can offer low-latency responses. In autonomous vehicles, for instance, quick reactions are crucial.
Structured Data Analysis. When working with structured data, databases, and tabular information, traditional AI techniques like machine learning and statistical analysis are often more efficient. In finance, for example, traditional AI methods are used for risk assessment and fraud detection.
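As a small illustration of classical statistical analysis on tabular data, the sketch below flags anomalous transaction amounts by z-score, a common building block of fraud detection. The data and the 2-sigma threshold are synthetic assumptions, not a production rule.

```python
from statistics import mean, stdev

# Synthetic transaction amounts; one obvious outlier.
amounts = [12.5, 9.9, 11.2, 10.4, 250.0, 10.8, 9.5]

mu, sigma = mean(amounts), stdev(amounts)

# Flag any amount more than 2 standard deviations from the mean.
flagged = [a for a in amounts if abs(a - mu) / sigma > 2]
print(flagged)  # -> [250.0]
```

A technique this simple runs in microseconds on commodity hardware, with no model hosting or inference cost, which is why classical methods remain the default for high-volume structured data.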
Highly Regulated Environments. Industries subject to strict regulations, such as healthcare and finance, may prefer traditional AI for its transparency and ability to provide clear, auditable explanations for its decisions.
Resource Efficiency. In scenarios where computational resources are limited or energy efficiency is a concern, traditional AI may be more practical. Devices like embedded systems or IoT sensors may benefit from less resource-intensive techniques.
Interpretable Models. When interpretability and transparency are crucial, traditional AI methods like decision trees or linear regression offer more understandable models. This is important in fields like healthcare, where patient well-being relies on trust in the decision-making process.
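The value of interpretability can be seen in a toy example: a hand-written decision tree whose reasoning a clinician can read line by line. The triage rules and thresholds here are invented for illustration, not medical guidance.

```python
def triage(temp_c: float, heart_rate: int) -> str:
    """Classify a patient via human-readable rules (hypothetical thresholds)."""
    if temp_c >= 39.0:
        return "urgent"   # rule 1: high fever
    if heart_rate > 120:
        return "urgent"   # rule 2: tachycardia
    if temp_c >= 37.8:
        return "review"   # rule 3: mild fever
    return "routine"

# The decision path is fully transparent and auditable.
print(triage(38.0, 80))  # -> review
```

Every prediction can be traced to a specific rule, giving the clear, auditable explanation that opaque models struggle to provide.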
Cost Constraints. Traditional AI can be a cost-effective solution, especially when the development and maintenance of a Generative AI system may be prohibitively expensive.
Domain-Specific Knowledge. In situations where domain expertise is highly valued and rules can be explicitly defined, traditional AI systems are more straightforward to implement and maintain. This is often the case in expert systems used for diagnosing medical conditions or making legal decisions.
Concluding remarks
In summary, while Generative AI and Large Language Models are powerful and versatile, they may not always be the most suitable choice. Traditional AI approaches are often more pragmatic and efficient in scenarios where well-defined rules, deterministic processes, real-time decision-making, structured data, transparency, cost constraints, and interpretability take precedence over the capabilities of Generative AI. The choice between traditional and Generative AI depends on the specific needs and constraints of the application.
The Wall Street Journal article: https://www.wsj.com/tech/ai/ais-costly-buildup-could-make-early-products-a-hard-sell-bdd29b9f