LLM does not equal AI
LLM does not equal AI, and we are a very long way from AGI. Ignore the hype! But there is a strong signal amongst the noise.
LLMs have been very successful at language; they spookily predict words that make sense to us. While this is interesting, the use cases and the business value seem much like a solution looking for a problem, at least at this point.
Like many of you, I have been burning hours trying to find things that are useful to me. I have not found much beyond an excellent assistant for writing code snippets. This has been bothering me enormously; intuitively, I felt that the AI market was missing something, but I could not verbalize it. It was a weird feeling: I knew it was right, but I couldn't find the words to explain it and pass on the meme.
This week we have been working on our new website (check it out at atsign.com). Whenever marketing and brand work come onto my agenda, I take the time to rewatch Simon Sinek's TEDx talk (now 15 years old, but pure genius), and I hope you can see his influence in our content, from our website to every bit of material we produce.
I watched the now-familiar patter and felt refreshed for the meeting, but this time I listened beyond where I normally stop, and eureka!
At 5:45, Simon tells us this is "not psychology but biology." He goes on to say that the human brain has three parts. The outer part, our Homo sapiens brain or neocortex, is responsible for our rational thought and our language. The middle two sections make up our limbic brain, which is responsible for our feelings, like trust, and for all decision making. But, and here comes the kicker, that part of the brain has no capacity for language.
I think you can see why I have not been sleeping well: I felt something was wrong but could not verbalize it. My limbic system has been working overtime trying to figure out the AI hype.
From this aha moment, I feel we have another breakthrough: AI today is the LLM, but the LLM is focused purely on language; it has no capacity to make decisions or to test an environment. This is why LLM content has no feeling to it, and why getting LLMs into workflows is troublesome. The agentic movement, whilst it sounds exciting, is not really here yet. Why? Because we have only built models for language. We need to build models that deal with feeling and decision making: a limbic AI system.
Fortunately, I feel we have a way forward. The recent advancements in constructive AI models using GPUs and networking models will, over time, create a virtual mind. We will never truly know how it made a decision, but at least it will be able to make decisions and not just talk about them!
This is the signal amongst the noise: we need to create models that are good at single things, smaller models networked together, constantly testing and learning.
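To make the idea concrete, here is a minimal, purely illustrative sketch of "smaller models networked together." Everything in it is hypothetical: the specialist functions stand in for small task-specific models, and the router simply dispatches each task to the right specialist.

```python
from typing import Callable, Dict

# Hypothetical stand-ins for small, task-specific models.
# In a real system these would be separate trained models
# (e.g. a language model, a decision/policy model).
def language_model(query: str) -> str:
    return f"drafted text for: {query}"

def decision_model(query: str) -> str:
    return f"decision for: {query}"

# Registry of specialists, each good at a single thing.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "draft": language_model,
    "decide": decision_model,
}

def route(task: str, query: str) -> str:
    """Dispatch the query to the specialist registered for this task."""
    specialist = SPECIALISTS.get(task)
    if specialist is None:
        raise ValueError(f"no specialist for task: {task}")
    return specialist(query)

print(route("decide", "approve the release?"))
```

The point of the sketch is the shape, not the internals: each component does one job, and the network (here, a trivial router) composes them, which is where testing and learning loops could be attached.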
We have a huge amount to learn from biology and from people like Simon who think differently thanks to that biology!
Founder & Principal - B2B Tech Product/Business Strategy & GTM
1 day ago
Part 2: Then there is coding. Today, programming languages and databases have exacting requirements in terms of names, signatures, etc. An LLM that hallucinates the interface is frustrating. An LLM that does not understand the full system and application is frustrating, and more. Then there is the limited ability to have experiences and learn from them, which some environments are trying to compensate for through "memories," etc. Those who see a bright future for robots imagine that physical AI will be different from language AI because robots will be able to experience the physical world. I can see that making a difference, but hey, we will see. But yes, I agree with the fundamental perspective that language tricks are not the same as true cognition and decision making in the way we think about them. There is a journey ahead of us ... BTW, I am not sure I want AI to have emotions. The little bit of pseudo-emotion I have seen so far worries me.
Founder & Principal - B2B Tech Product/Business Strategy & GTM
1 day ago
Part 1: I feel the golden circle idea (nice video) is a powerful one that helps us think about why some efforts are successful, beyond luck. However, the question of why LLMs frustrate us has many facets, I believe. I was in a family-friendly pub last night. A parent could not control what his toddler was doing, so the parent had a mini tantrum himself. If you have not had that experience with an LLM, then you really have not tried to do anything meaningful with them, IMO. In our everyday lives, we find different challenges with different people: a range of personalities, a range of "maturity" levels (where they are in life: baby, toddler, tween, ... cranky old man), a range of interests, people with different intellectual capacities in different areas of intellect, and so on. This helps explain some of the frustration, IMO.
Founder | Philanthropist | Innovator | Chair | LinkedIn Top Voice | Former Chair & CEO IBM Asia Pacific | Committed to Tikkun Olam
1 day ago
Colin Constable Simon Sinek - very thought provoking at this time.