An LLM as a reflection of our innermost desires
Aditya Mohan
AI Expert, Philosopher-Scientist, & VC/PE. Strategy, M&A, & Litigation. AGI, Embodied AI, & Aviation.
Our brains continuously make predictions and form expectations about what we will say next, based on what we have inherited genetically and on day-to-day learning shaped by the rules, behavioral limits, and acceptable conduct of the society we live in.
Law of Humans
The Napoleonic Code is the basis of legal systems in many parts of the world and broadly defines the rules of conduct. The laws of behavior can be loosely framed by Newton's laws of motion together with human physiology and psychology. Real-world physical interactions are important for humans to gain understanding and internalize behavioral knowledge.
The text in public datasets built by scraping the Internet, including social networks, blogs, and websites, was written under such laws. Large Language Models (LLMs) such as the generative pre-trained transformer (GPT) models trained on this text will inherently possess a close approximation of human-level language understanding, even though they never physically interact with a real world governed by the laws of physics, physiology, psychology, or conduct.
LLMs and the human brain
LLMs built using the transformer architecture take text from these public datasets, break it down into processed symbolic strings (tokens) and "embeddings", and then predict the next word from a complex web of interactions and connections among those symbolic strings. The human brain's knowledge learning and retrieval can be approximated by such an architecture; both are probabilistic in nature.
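To make the tokenize-embed-predict loop described above concrete, here is a minimal sketch of next-word prediction, assuming the Hugging Face transformers library and the publicly available "gpt2" checkpoint (my choice of model and prompt, not something specified in this article):

```python
# Minimal sketch: break text into tokens, run it through a pre-trained
# transformer, and inspect the probability distribution over the next word.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Our brain is continuously making predictions about"

# 1. Break the text into symbolic strings (tokens) mapped to integer ids.
inputs = tokenizer(prompt, return_tensors="pt")

# 2. The model turns the token ids into embeddings and passes them through
#    its transformer layers, yielding scores for every token in the vocabulary.
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# 3. The last position holds the distribution for the word after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# 4. The prediction is probabilistic: several continuations compete.
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for p, tok_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(tok_id)):>12}  p={p.item():.3f}")
```

Sampling from this distribution, rather than always taking the top token, is what gives the model its varied, human-like (and occasionally hallucinated) outputs.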
Being human
While humans go through curated knowledge learning over time, LLMs learn from a corpus of public Internet data with minimal curation and few humans in the loop. Internet data from sources such as Reddit, Facebook, Twitter, and other large social media platforms is not representative of the population, since not all demographics contribute to these platforms; on Twitter, for example, there are far more readers than contributors. In addition, humans tend to write and amplify negative sentiment more than positive sentiment, making LLMs trained on an Internet corpus alone considerably more prone to producing negative sentiment.
This makes such LLMs a reflection of our innermost desires in aggregate. Hallucinations are a characteristic of the human mind, and LLMs have inherited that trait. Just like the human mind, LLMs have also been shown to have unexplored and unexpected capabilities. In humans, psychedelic mushrooms containing psilocybin have been known to rewire the brain and amplify such capabilities. LLMs are more than just stochastic parrots.
Also check out my article on teaching LLMs common-sense thinking: https://www.dhirubhai.net/pulse/teaching-llms-common-sense-thinking-aditya-mohan/
To learn more, check out: https://www.robometricsagi.com/agi
We are hiring. Check out: https://www.robometricsagi.com/careers
Don't hesitate to reach out to me here: https://www.dhirubhai.net/in/aditya621/