AI and Its Impact on the Modern Enterprise - Part 2 - GenAI: History, Models, and Techniques
[Cover image: Brain Coral patterns - Sean Nash]


SHORT HISTORY OF GENERATIVE AI

GenAI refers to artificial intelligence systems capable of generating new content. This content can be text, images, audio, or other media, created from existing unstructured data. Unlike Predictive AI, which typically analyzes structured (and labeled) data to find patterns and make predictions, GenAI creates stochastic, non-deterministic outputs from learned patterns, opening up a world of exciting possibilities (and significant risks) for businesses. To fully grasp its potential, it is essential to understand and appreciate the historical context and evolution of AI. Early AI has its roots in the 1940s with artificial neural networks (ANNs). That decade followed a period of intensive research in mathematical biology in the 1930s, which certainly influenced the development of ANNs.


It is sometimes hard to fathom that ANNs were conceived before the invention of the software subroutine!

Nascent AI systems focused on rule-based systems and simple decision-making algorithms, particularly during the 1980s. With the advent of modern machine learning over the past 10-15 years, AI systems began to learn from data, improving their performance over time. Deep learning, a subset of machine learning, further advanced AI capabilities, enabling more complex tasks such as image and speech recognition. GenAI represents the latest leap, capable of creating entirely new content. This capability is not entirely new: stochastic music generation systems based on musical styles existed in the 1970s and early 1980s. However, with the advent of cloud computing resources and the 2017 breakthrough of the transformer architecture for natural language processing (NLP), the benefits (and risks) of GenAI are now available to all organizations and enterprises.

MACHINE LEARNING MODELS AND TECHNIQUES

As with most ML systems, Generative AI uses algorithms to create new data points based on patterns in existing data. These ML models learn the patterns and structures in their training data and use this knowledge to generate new, similar data. For instance, a Generative AI model trained on text can produce coherent paragraphs, while one trained on images can create realistic pictures. Once trained, large language models (LLMs) such as GPT-4 generate text by predicting the next word in a sequence. Similarly, image synthesis models like DALL-E and Midjourney generate images from textual descriptions.
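
To make the next-word-prediction idea concrete, here is a minimal sketch in Python using the open-source Hugging Face transformers library with the small GPT-2 model. The model choice, prompt, and sampling settings are illustrative assumptions for this article, not details of GPT-4 or the other commercial models mentioned above.

```python
# Minimal sketch: an LLM generates text by repeatedly predicting the next token.
# Assumes the "transformers" and "torch" packages are installed; GPT-2 is used
# only because it is small and openly available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Generative AI can help enterprises"
inputs = tokenizer(prompt, return_tensors="pt")

# generate() appends one predicted token at a time; sampling makes the output
# stochastic and non-deterministic, as described above.
output_ids = model.generate(**inputs, max_new_tokens=25, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Running the same prompt twice will typically produce different completions, which is exactly the stochastic behavior that distinguishes GenAI from deterministic, rule-based systems.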

Effective interaction with Generative AI requires understanding how to craft textual prompts that yield the desired outputs, called "completions." This involves using clear, specific language and providing sufficient context. Experimentation and iteration are key to refining prompts for optimal results. It is not surprising that human teachers are excellent at crafting effective prompts.
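
As a hedged illustration of prompt crafting, the sketch below contrasts a vague prompt with a specific, context-rich one and sends the better prompt to a model through the OpenAI Python client. The provider, model name, and prompt wording are assumptions chosen only to make the example concrete.

```python
# Sketch: the same request phrased vaguely vs. with clear, specific context.
# Assumes the "openai" package (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Write about our product."
specific_prompt = (
    "You are a marketing copywriter for a B2B analytics platform. "
    "Write a three-sentence summary for CFOs, emphasizing cost savings, "
    "in a formal tone and without technical jargon."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": specific_prompt}],
)
print(response.choices[0].message.content)  # the "completion"
```

Iterating on the wording of specific_prompt (audience, length, tone, constraints) and comparing the resulting completions is the practical core of prompt refinement.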

Additional techniques, such as Retrieval Augmented Generation (RAG) and fine-tuning, can further refine an LLM's results. RAG combines generative models with retrieval systems to improve the quality and relevance of generated content, and it is a popular mechanism for giving LLMs the context of private enterprise documents. Fine-tuning models on domain-specific data can also enhance their performance, making them more useful for specific business applications. However, fine-tuning can be expensive and requires deep data science expertise, although new tools are making it easier.
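
To show the shape of the RAG pattern without tying it to any particular vendor stack, here is a toy, self-contained sketch: a naive keyword-overlap "retriever" selects the most relevant internal document, and the application builds a grounded prompt from it. Real deployments would use vector embeddings, a vector database, and an actual LLM call; those pieces are deliberately simplified or stubbed out here.

```python
# Toy RAG sketch: retrieve relevant enterprise text, then ground the prompt in it.
# The keyword-overlap retriever is a stand-in for embedding-based vector search.

documents = {
    "travel_policy.txt": "Employees may book economy class for flights under six hours.",
    "expense_policy.txt": "Meal expenses are reimbursed up to 75 dollars per day with receipts.",
    "security_policy.txt": "All laptops must use full-disk encryption and MFA.",
}

def retrieve(query: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    ranked = sorted(
        docs.values(),
        key=lambda text: len(query_terms & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def build_prompt(query: str, context: list) -> str:
    """Combine retrieved context with the user question for the LLM."""
    context_block = "\n".join(context)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"

query = "What is the daily meal reimbursement limit?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)  # this grounded prompt would then be sent to an LLM
```

The design point is that the LLM itself is unchanged; RAG improves relevance by controlling what context the model sees at generation time, which is why it is attractive for private enterprise documents.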

In addition to these features, many of these AI systems offer extended functionality through user-defined functions (tool calling) and autonomous orchestration of models (agents). While these features are quite powerful and show great promise, they are still in the early stages of enterprise adoption.
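
The toy sketch below illustrates the tool-calling and agent idea under simplifying assumptions: get_invoice_status, choose_action, and run_agent are hypothetical stand-ins for a business function, the model's tool-selection step, and an orchestration loop, and are not part of any specific product mentioned in this article.

```python
# Toy sketch of the tools-plus-agent pattern: the model picks a registered
# function, the application executes it, and the result flows back into the answer.

def get_invoice_status(invoice_id: str) -> str:
    """A user-defined business function the model is allowed to call (stubbed)."""
    return f"Invoice {invoice_id} is marked as paid in the ERP system."

TOOLS = {"get_invoice_status": get_invoice_status}

def choose_action(user_request: str) -> dict:
    """Stand-in for the LLM deciding which tool to call and with what arguments."""
    return {"tool": "get_invoice_status", "args": {"invoice_id": "INV-1042"}}

def run_agent(user_request: str) -> str:
    action = choose_action(user_request)              # 1. model selects a tool
    result = TOOLS[action["tool"]](**action["args"])  # 2. application executes it
    return f"Based on the records: {result}"          # 3. result informs the reply

print(run_agent("Has invoice INV-1042 been paid?"))
```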

More recently, sophisticated "multimodal" GenAI models have emerged. Instead of using a collection of different models to handle multiple types of data, a multimodal model can natively understand the patterns of multiple data types, such as text and images, and can generate similar data based on those diverse inputs. OpenAI's GPT-4o, Anthropic's Claude 3, and Google's Gemini 1.5 are examples of powerful multimodal GenAI models. It is fascinating to envision what innovative devices and applications will soon be created using multimodal AI models.
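
As one hedged example of what a multimodal request can look like, the sketch below sends a text question and an image reference in a single prompt using the OpenAI Python client and GPT-4o; the image URL is a placeholder, and the choice of provider and message format are assumptions made only for illustration.

```python
# Sketch: one prompt mixing text and an image, sent to a multimodal model.
# Assumes the "openai" package (v1+) is installed and OPENAI_API_KEY is set;
# the chart URL below is a placeholder, not a real asset.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the key trend shown in this chart."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/q3-revenue-chart.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```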


Part 3 (of 4) - Enterprise Use Cases and Risks

