What’s All the Noise About?: Making sense of AI and generative AI
Mark Wakelin
EVP / Global Professional Services | Chief Success Officer (Platform)
Hello, my friends:
Last November, a company called OpenAI released a free tool on the internet called ChatGPT that quickly became extremely popular, creating a tsunami of interest in Artificial Intelligence (AI). According to Statista, it took 75 years for 100 million people to use telephones, 16 years for mobile phones to achieve the same, and even the internet took seven years to hit this benchmark. ChatGPT, meanwhile, reached 100 million users in just two months.
Since it can be hard not to get swept up in this fast and furious AI wave, I’ve decided to start a series of blogs dedicated to understanding the latest trends, what AI means for humanity, and, in particular, what AI means for members of the professional services industry. With the way the AI space is rapidly evolving, I worry even this piece will be outdated by the time I hit “publish.”
What is AI?
In the simplest terms, artificial intelligence (AI) is a branch of computer science focused on creating machines and software systems capable of performing tasks typically requiring human intelligence. This includes problem-solving, learning, reasoning, perception, natural language understanding, and decision-making. AI systems aim to mimic or simulate human cognitive processes through algorithms and computational models.
What AI Isn’t!
AI is not human, meaning that, without getting too philosophical, AI doesn’t have feelings. AI can’t love and AI can’t hate. Some (including a few famous engineers at Google) have claimed that AI has reached “sentience.” In my opinion, if you anthropomorphise it, you’ll lose the plot. Although generative AI solutions (like GPT-4) might seem sentient, the algorithm is in fact mimicking or reflecting response patterns pulled from the large language models (LLMs) it is trained on. More on that later. The takeaway for now is: the machine isn’t human and doesn’t share or develop emotional responses in the way we do.
What types of AI are there?
There are several main types of AI, or AI models, and they fall into two broad categories: rule-based systems and machine learning.
Rule-based systems use predefined rules and heuristics to make decisions or solve problems. They are typically based on expert knowledge and are most effective in well-defined domains with clear rules. Our early workflow engines or chatbots would claim this type of intelligence.
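To make that concrete, here’s a tiny sketch of what a rule-based system boils down to: every behaviour is a rule a human wrote down ahead of time. (The rules and wording below are invented purely for illustration.)

```python
# A toy rule-based "chatbot": every behaviour is a hand-written rule.
RULES = {
    "reset password": "Go to account settings and choose 'Forgot password'.",
    "opening hours": "We're open 9am to 5pm, Monday to Friday.",
}

def rule_based_reply(message: str) -> str:
    # No learning here: we simply check the message against predefined rules.
    for keyword, answer in RULES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I don't have a rule for that yet."

print(rule_based_reply("How do I reset password?"))
```

If nobody wrote a rule for it, the system simply can’t answer, which is exactly why this approach works best in narrow, well-defined domains.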
Machine learning (ML) focuses on creating models that can learn from data without being explicitly programmed for every condition. These algorithms can find patterns, make predictions, and improve their performance over time as more data becomes available. In other words, they’re trained. There are several types of ML algorithms, including supervised learning (trained on labelled data), unsupervised learning (finding structure in unlabelled data), reinforcement learning (based on rewards for actions within an environment), and, perhaps the most tweeted about, deep learning (a subset of machine learning focused on artificial neural networks).
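If you’re curious what “trained” looks like in practice, here’s a minimal sketch of supervised learning using the widely used scikit-learn library. The data, labels, and scenario are entirely invented; the point is that the model is never given explicit rules and instead infers a pattern from labelled examples.

```python
# Supervised learning in miniature: learn a pattern from labelled examples.
from sklearn.linear_model import LogisticRegression

# Invented data. Each row: [emails_sent_per_day, meetings_attended]
# Label: 1 = "engaged customer", 0 = "not engaged"
X = [[50, 4], [5, 0], [42, 6], [3, 1], [60, 5], [8, 0]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                    # "training": fit the pattern hidden in the labelled data
print(model.predict([[45, 3]]))    # predict a label for an example it has never seen
```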
Machine learning models are capable of learning complex patterns, representations, and abstractions from data. But, with the hype around OpenAI’s GPT-4, the most popular within this category is now the transformer model, named for its ability to transform an input sequence (your prompt) into an output sequence that reads like an almost conversational, human-like response.
The ML model list goes on and on, but this is a field advancing faster than anything I’ve experienced in my 35 years in the technology industry. There are even evolutionary algorithms (biology-inspired) and insect-inspired swarming models (really!), a reminder that insects have been solving hard problems since long before humanity showed up.
What about large language models (LLMs)?
AI based on Natural Language Processing (NLP) needs data, lots of data, and usually this data is text-based (like the internet), hence the name “large” language model. During the so-called pre-training of the AI, the dataset is broken down into tokens, which can be a whole word or a fragment of one. For instance, if I say, “Hey, how are you?,” there are probably three answers that make up 90% of the response occurrences. There are millions of these patterns in a dataset, and the transformer uses these tokens along with a mechanism called “self-attention” to create meaningful responses, or at least what the machine thinks are meaningful. (We’ll talk about hallucination in future blogs.)
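For the technically curious, the “self-attention” step is less mysterious than it sounds. Below is a minimal NumPy sketch of scaled dot-product self-attention, the building block of transformer models. The token vectors and weight matrices here are random placeholders; in a real model, pre-training is what learns them.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv             # queries, keys and values for each token
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # how strongly each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax -> attention weights
    return weights @ V                           # each output mixes information from all tokens

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dimensional embeddings (made up)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): one enriched vector per token
```

Real transformers stack many of these layers (with many attention “heads” in each), but the core idea is exactly this: let every token look at every other token and decide what matters.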
Once the models are pre-trained, they continue their training through fine-tuning, where a supervised process validates input/output pairs. So, if a model pairs “How are you?” with “No, I’m Mark,” it would be trained through supervision to exclude that response. Finally, at generation time, the model samples from the probabilities it has learned, using various trade-off mechanisms (such as a “temperature” setting) to create balanced responses.
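That last trade-off is easiest to see in code. Here is a small sketch of one common mechanism, temperature sampling: the model scores every candidate token, and a “temperature” knob controls whether it plays it safe or gets adventurous. The scores and candidate replies are invented for illustration.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8):
    """Pick the next token from the model's raw scores; temperature tunes the trade-off."""
    scaled = np.asarray(logits, dtype=float) / temperature  # <1 = more predictable, >1 = more varied
    probs = np.exp(scaled - scaled.max())
    probs = probs / probs.sum()                             # softmax -> a probability distribution
    return int(np.random.choice(len(probs), p=probs))

# Invented scores for three candidate replies to "Hey, how are you?"
candidates = ["Good, thanks!", "Fine, and you?", "No, I'm Mark"]
logits = [3.1, 2.8, 0.2]
print(candidates[sample_next_token(logits, temperature=0.7)])
```

Turn the temperature down and the model almost always picks the top-scoring reply; turn it up and the long tail of unlikely (and sometimes wrong) responses starts to appear.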
GPT-4 is an LLM trained on a subset of the internet as of September 2021. You can test this yourself by asking about events that have happened since that date, like, “Who won Best Actor at the 95th Academy Awards?” One might wonder how these models will cope with a more real-time dataset like the ever-changing internet. Microsoft and Google are wondering this, too. There’s a huge opportunity for research on concepts like incremental training strategies or active learning, but that’s a little too deep for my introductory blog.
So why is everyone suddenly so excited about AI now?
Most of the credit for the current excitement must go to OpenAI for releasing ChatGPT and making it free and usable by anyone. So let’s get into the definition of ChatGPT. The “Chat” in this instance refers to conversation in the chatbot sense (aka a question-and-answer system, or more correctly in AI parlance, a prompt and response), and the “GPT” stands for “Generative Pre-trained Transformer.” To break this down a little further, GPT literally means the AI: is capable of generating a response to a prompt, usually a question (Generative); has studied some predefined dataset to inform its answers (Pre-trained); and can turn those interpretations into a response (Transformer) (Vaswani et al., 2017).
Personally, I’m excited about AI because it’s incredible and will have a significant impact on society, though we’re still shaping what that impact will be. But for a lot of people and corporations, it’s about money.
OpenAI’s free tool has big companies like Microsoft, Google and my own employer, Salesforce, seeing new potential in AI. Microsoft and Google (for now) have mainly used the technology to augment their suites of information-access tools. Specifically, Google has released its own conversational AI called Bard, and Microsoft has augmented its Bing search engine with GPT capabilities. Many think ChatGPT could be the new search. I don’t agree. Instead, I think of ChatGPT as a new tool, an advisor, one you can believe most of the time. But we’ll have to see how it turns out. In the meantime, Google’s and Microsoft’s initial interpretations are quite different from each other.
At Salesforce, we’ve had Einstein Artificial Intelligence as a platform capability since 2016. (I know my friend Parker Harris will never forget that Dreamforce or that Einstein wig.) Now, we are enhancing our Customer Relationship Management (CRM) solution (say “hello” to Einstein GPT) using AI/ML to drive productivity and gain insights to help create intelligent, intimate customer engagement across the platform.
So, what IS all the noise about then?
Technology is again driving a shift for humanity. Like the invention of the printing press in the mid-1400s, which allowed mass dissemination of knowledge, or the introduction of steam power, which drove productivity in the early 1700s, AI will likely cause some disruption before creating new opportunities and, hopefully, enriching our lives in the long term.
In the next blog, I’ll cover Generative AI and Professional Services Industries: Potential benefits and drawbacks of AI in professional services. As always, any feedback is appreciated (feel free to leave it in the comments), and I hope you found this introduction useful.
----------------
(Disclaimer: The views expressed in this article are mine alone and are not necessarily those of Salesforce.)
A note of gratitude:
I’d like to thank a couple of folks who’ve inspired me to do this. Thank you to my colleague Patrick Stokes at Salesforce, the General Manager for the Salesforce Platform. Patrick is a lovely guy with a giant brain. I was inspired by his paper on the metaverse (about which he told me he was “just trying to get it straight” in his own head), and I felt compelled to do the same for AI. And thank you to my son, Ben Wakelin. Ben got into this field early as part of his final-year thesis, and his enthusiasm has been contagious. Thank you both.