THOUGHT-PROVOKING THURSDAYS - 51.
UNDERSTANDING THE BUZZ AROUND ARTIFICIAL INTELLIGENCE



In the dynamic realm of contemporary jargon, no term resonates more loudly than the ubiquitous hum of AI transformation. It's not just a buzz; it's a pervasive force, shaping our world in ways that demand our attention. Much like the era of Digital Transformation that dominated discussions two decades ago, AI transformation has become the new focal point. As organizations eagerly embrace this paradigm shift, labelling it an indispensable catalyst for progress, a nuanced perspective is essential.

For those of us who appreciate technology without donning the geek or nerd mantle, the unfolding AI landscape can be both fascinating and disconcerting. On one side, visionaries herald it as the most significant revolution since the internet, predicting an imminent reshaping of our familiar realities. On the other, skeptics dismiss it as mere hype. Recognizing the need for clarity amidst the din, I delved into this discourse for this week's edition, tapping into the insights of @Georg Zoeller—my esteemed ex-colleague and friend from the Facebook days, a voice of sanity in this chaos. Georg stands as one of the most intelligent, astute, and articulate engineers I've encountered in my professional journey (acknowledging a hint of bias!). With a wealth of experience leading tech across various companies, Georg is unquestionably the ideal voice to illuminate this subject.

Our online fireside chat explored the following three questions, which I am sure are uppermost in everyone's minds.

1. IS GENERATIVE AI REAL OR HYPE?

2. IS GENERATIVE AI JUST ABOUT CHATGPT?

3. HOW DO YOU GET TO BE AI LITERATE?


So without further ado, let's jump right in.


1. IS GENERATIVE AI REAL OR HYPE?

Does it have to be either? I wouldn't fault people for approaching it this way; after all, this is the nth technology "breakthrough" coming out of Silicon Valley.

But I think we can be more nuanced, and move beyond the social media instinct of picking sides between A and B.

Let’s have a look at the most confusing aspect of this technology instead:

“We did not build it”.

Almost everything in software over the preceding decades was purpose-built by hand - engineering - but this is a genuine result of scientific exploration and discovery, and we neither understand it completely nor know its full capabilities or limitations yet. We don’t know what we don’t know.

That’s unfamiliar territory, and profoundly unnerving for engineers to start with, because some of the properties the technology exhibits are more akin to human traits (non-determinism, creation) than to traditional computing. And that creates questions we are not accustomed to, like “what’s a bug, what’s a feature?”.

Take “hallucinations” for example: you see a lot of arguments about GenAI that cast hallucinations as some kind of terminal flaw of the technology, but when you remember “we didn’t build it, we discovered it”, this line of reasoning doesn’t make all that much sense. The flaw, in this context, lies in how GenAI is framed and positioned (e.g. as a search engine or a chatbot) and not so much in the technology itself.

When it comes to explaining the technology, we now see a lot of standard responses: extrapolation from what we know (“It’s just a big auto-complete stochastic parrot”), projecting hopes (“AGI will save us from ourselves”), or selling the discovery as a finished, revolutionary product.

All three are flawed. You can’t predict fluid dynamics by looking at individual water molecules. The AGI conversation primarily serves as a distraction; the term itself is ill-defined, and it is unclear to me whether it even matters. The technology is powerful but, with a few exceptions, doesn’t fit neatly into products and business models, so we see a lot of framing that tries to obscure the sharp edges.

My favorite example on this point is the customer service chatbot. It was the logical leap for anyone using ChatGPT a year ago, but you know what? OpenAI’s own support website still uses an old templated chatbot rather than the experience you are used to from ChatGPT, and for good reason.

So when some very large companies are betting their stock price on a new innovation that has fundamentally different properties from anything that came before, you naturally get a very confusing environment that feeds on hype, speculation, fears, and wishful thinking. But that doesn’t say much about the merit of the technology itself, only about its potential for investors' business models.

So to answer your question:

From my perspective, both are true. The technology is real and, I’d argue, likely underestimated, but most product applications are overhyped, for now.

The bigger question is “What does this technology mean for my business?”, and the answer to that requires a fundamental understanding of what is happening with GenAI (“AI literacy”).


2. IS GENERATIVE AI JUST ABOUT CHATGPT?


Most people we talk to (usually decision-makers, C-level executives, and the like) have an inkling that there is more to it, but I think it’s accurate to say that ChatGPT and Midjourney command the mindshare, and for hands-on experience they are the primary data points people have and often make decisions on.

Which, as you can imagine, is a mistake. I’d argue ChatGPT isn’t even a product yet; it’s a consumer-targeted tech demo of a platform with incredibly powerful capabilities hidden behind a deceptively simple interface that obscures much of its power and its limitations. On its own, it’s not enough to understand the implications of the technology.

ChatGPT, or more specifically the capabilities of OpenAI’s developer platform, definitely matters because it’s the industry-leading, state-of-the-art offering in the field of cloud AI. But things are far more complicated today than they were a year ago:

In January 2023, the only credible technology available to companies was provided by cloud vendors, chiefly OpenAI. Fast forward to January 2024 and we no longer have only cloud-based AI. There is now also local/edge AI in the form of Facebook’s Llama models, Mistral, and others, which have exploded into tens of thousands of specialized models while still achieving pretty impressive results without trading away privacy.
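To make the cloud-versus-local distinction concrete, here is a minimal sketch of running an open-weight model on your own hardware with the Hugging Face transformers library. The specific model name and library choice are illustrative assumptions on my part, not something Georg recommended in our chat.

```python
# Minimal sketch of "local/edge AI": an open-weight model running on your own machine,
# so no prompt or response ever leaves it. Model choice is a hypothetical example.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed open-weight model; any Llama/Mistral variant works
)

prompt = "Summarize in two sentences why local inference can matter for data privacy."
result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

In practice you would run a model like this on a GPU or use a quantized build, but the point stands: the weights are downloaded once and inference happens entirely under your control.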

Beyond LLMs, there is an absolutely bonkers array of cutting-edge open-source and proprietary research at various stages of commercialization.

There’s exciting research in 3D, animation, avatars, audio, music, time-series forecasting, video, and computer vision that has been progressing towards real-world usability over the last nine months, but staying up to date with it, and understanding its implications, is tough.

Worse, most progress is constantly at risk of being overshadowed by the next breakthrough in the field, so we see a lot of attempts at brand building (“Copilot”, etc.) and at capturing users by equating AI with the brand. ChatGPT has arguably won this one so far.


3. HOW DO YOU GET TO BE AI LITERATE?


Let’s think about this as something akin to financial literacy, except with a twist: the financial system has been around for ages.

With GenAI, the world is exploring this technology at a pace of more than 2,000 scientific papers a week. Many of them are genuine discoveries; others are … well, let’s just say Silicon Valley figured out nobody reads PR Newswire anymore.

So what are the durable skills? What is crucial fundamental understanding versus the LinkedIn talk of the moment? How important are the tactical details of the technology versus the strategic implications it represents?

Our premise is that companies over-index on technical details and implementation projects and underinvest in understanding the core primitives and implications of the technology.

And honestly, that’s not surprising. As an engineer or technical person, there’s no shortage of material on how to implement an LLM, do fine-tuning, build retrieval-augmented generation (RAG), and the like.
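To illustrate the kind of hands-on material that does abound for engineers, here is a minimal RAG sketch. The library choice (sentence-transformers) and the toy document set are assumptions for illustration only, not a recommended stack and not something from our conversation.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most relevant
# document for a question, then ground the model's answer in that document.
from sentence_transformers import SentenceTransformer, util

documents = [
    "Our refund policy allows returns within 30 days of purchase.",   # toy knowledge base
    "Support is available Monday to Friday, 9am to 6pm.",
    "Enterprise plans include a dedicated account manager.",
]

# 1. Embed the knowledge base once.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, convert_to_tensor=True)

# 2. At question time, embed the query and retrieve the closest document.
question = "Can I get my money back after two weeks?"
query_vector = embedder.encode(question, convert_to_tensor=True)
best = util.cos_sim(query_vector, doc_vectors).argmax().item()

# 3. Answer from the retrieved text rather than the model's memory; this grounding
#    is the core idea that reduces (but does not eliminate) hallucination.
prompt = f"Answer using only this context:\n{documents[best]}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM of your choice
```

Plenty of tutorials walk engineers through exactly this pattern; the gap Georg describes is on the strategic side, not the implementation side.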

But for decision-makers in business? Mostly crickets, and of the little that is there, almost all of it comes with a strong “adopt our technology because we invested a lot in it” undertone. And we don’t think that’s optimal.

So we focus on transferring a deep, strategic understanding of what is happening, helping decision-makers build a mental map of the space and locate their companies' place in it.

Once you have a firm grasp of the “primitives of AI”, or AI literacy as we call it, applying them to a business triggers transformation rather than mere technology adoption, which, as you mentioned in your intro, fails more often than not. From a learning standpoint, the implications are clear: this requires continuous investment and constant upskilling. The AI landscape changes every day, and you cannot assume you know everything because you have attended a particular course; by the time you finish it, the subject may have evolved further. This will pose a huge challenge to providers of learning programs, as they themselves will need to stay deeply invested in order to grow and evolve.


Whew, that was a lot to take in, right? See this as a conversation starter, nothing more. If what you just read piques your curiosity and you want to have a more detailed discussion, please do reach out to us. This is where Georg and I can help you navigate this space with clarity and confidence. The buzz has only started!



Deepak Goel

Global Marketing | Cultural Intelligence

9 months ago

I thoroughly enjoyed reading your article, Sandeep. We know so little of AI's capabilities, yet it has so much more to offer than just ChatGPT. Its ever-evolving landscape makes it important for all of us, and tech professionals especially, to constantly research and update our knowledge of AI. Hopefully one day we can all maximise the potential of AI for good.
