The Untapped Power of Large Language Models as Qualitative Comprehension Engines: Unexplored Use Cases of ChatGPT and Other LLMs
Miriya Molina
SaaS founder and solutions architect leading strategy, implementation, and low-code solutions in artificial intelligence, machine learning, data science, Web3, and decentralized technology
Abstract
Large Language Models (LLMs) have revolutionized the field of natural language processing and understanding. While they are often celebrated for their ability to generate human-like text, their true potential lies beyond mere text generation. LLMs such as ChatGPT are powerful tools for qualitative comprehension, capable of identifying patterns, reasoning, and detecting gaps in knowledge and communication. This white paper explores the untapped power of LLMs as qualitative comprehension engines, discussing their mechanisms, applications, and their role in complementing human capabilities.
Introduction
Human communication, whether written or verbal, is rich with patterns that reveal insights into comprehension, knowledge, and reasoning. When technology companies began measuring patterns in human communication for the purpose of language generation, they inadvertently captured patterns of comprehension as well. LLMs like ChatGPT are built on these foundations, enabling them to understand and analyze not just words, but the patterns of human thought and understanding.
Mechanisms of LLMs
At the core of LLMs is the conversion of language into mathematical representations. Words, phrases, and sentences are transformed into numerical vectors that represent their meaning and relationships.
WTF?
Vectors are just a way to store a collection of numbers so that a calculator can recognize the pattern in the distances between those numbers, what we call a relationship. The calculator is a processing chip in a server somewhere. The words are converted to numbers that are mathematically symbolic but don't hold much meaning in themselves. How similar is the word "blue" to the word "green"? Now what's the difference between a "shape" and a "color"? Placing the words into vectors as numbers allows the calculator to compute these differences, or, more specifically, to "find" the differences in context and similarity.
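As a toy illustration of that idea: the three-dimensional vectors below are made up for this sketch (real embedding models use hundreds or thousands of dimensions), but cosine similarity shows how the "calculator" can find that "blue" sits closer to "green" than to "shape":

```python
import math

# Toy 3-dimensional vectors, invented for illustration only.
# Real embedding models use hundreds or thousands of dimensions.
vectors = {
    "blue":  [0.9, 0.8, 0.1],
    "green": [0.8, 0.9, 0.2],
    "shape": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Angle-based closeness between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(vectors["blue"], vectors["green"]))  # close to 1
print(cosine_similarity(vectors["blue"], vectors["shape"]))  # much smaller
```

The numbers themselves are arbitrary; the point is that once words live in vectors, "how similar" becomes an ordinary arithmetic question.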
These mathematical representations are then used for various language tasks. The premise is akin to assigning color coding to different books based on their content; it allows algorithms to work with words and patterns in communication as if they were numbers.
All algorithms, including LLMs, share a fundamental structure: they measure patterns in data using two consistent elements.
Every algorithm is based on the simple premise x + y = z.
For example: inches of rain plus hours of sunshine equals pretty flower. The math can and does get more complex, but all algorithms have two essential components:
1. Measure of Sameness: Algorithms identify similarities or patterns in data, finding elements that share common attributes or characteristics.

2. Measure of Sameness of Sameness: Algorithms assess the reliability and consistency of the identified patterns.
For example, consider a shopping mall surrounded by several houses. How can we measure the pattern of the distances between the mall and the houses? 1. The average distance, combined with 2. how similar each house's distance is to that average.
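The mall-and-houses example can be sketched with both measures spelled out; the distances below are invented for illustration:

```python
import statistics

# Distance (in miles) from each house to the mall -- invented numbers.
distances = [1.2, 1.5, 0.9, 1.4, 1.1]

# 1. Measure of sameness: the average distance.
average = statistics.mean(distances)

# 2. Measure of sameness of sameness: how tightly each
#    distance clusters around that average (standard deviation).
spread = statistics.stdev(distances)

print(f"average distance: {average:.2f} miles")
print(f"spread around it:  {spread:.2f} miles")
```

A small spread means the pattern "houses sit about this far from the mall" is reliable; a large spread means the average alone tells you little.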
Zooming back out, the only distinction between algorithms lies in the specific mathematical equations and methods they use to identify patterns in the relationships within data.
LLMs as Comprehension Engines
Can LLMs Replace Humans?
The emergence of LLMs as qualitative comprehension engines does not imply the replacement of humans. Instead, it shifts our attention from routine tasks to higher-order activities. Consider calculators: they didn't replace statisticians but became essential tools for numerical calculations. Similarly, LLMs are calculators for words and patterns in communication, freeing humans from rote tasks.
Overlooked Use Case: Identifying Missing Elements
One overlooked application of LLMs is their ability to perform simple reasoning tasks, such as comparing qualitative lists and pointing out missing elements. This capability marks a significant leap in technology's problem-solving abilities. While previous algorithms could only identify mathematical patterns, LLMs can detect contextual gaps in knowledge, communication, and automated processes.
Imagine having a list of instructions to build something, and the final product doesn't work as expected. LLMs can analyze the tasks completed and help identify what was missed or misunderstood. They can answer the critical questions: "What don't I know?" and "What did I miss?" It would be entirely possible to build a "find out what I don't know" engine using LLMs, with applications in education, business strategy, marketing, and manufacturing. Imagine using an LLM-powered app to teach social mannerisms or pro-social communication to people with Autism Spectrum Disorder.
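The core of that comparison can be sketched deterministically for exact matches; the step names below are hypothetical. The value an LLM adds is extending this to paraphrased or implied steps that plain set logic cannot catch:

```python
# A deterministic sketch of the "what did I miss?" comparison.
# Step names are hypothetical. Sets only handle exact matches;
# an LLM can additionally match paraphrased or implied steps.
required_steps = {
    "attach legs to frame",
    "tighten all bolts",
    "attach tabletop",
    "check stability",
}
completed_steps = {
    "attach legs to frame",
    "attach tabletop",
}

# Set difference: everything required that was never completed.
missing = required_steps - completed_steps
print("Missed steps:", sorted(missing))
```

Here "tighten all bolts" and "check stability" fall out immediately; an LLM performs the same gap-finding even when the completed tasks are described in entirely different words.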
A Funny Experiment: Identifying Categories
What about the possibility of using it to identify narcissistic or sociopathic speech patterns in potential employees, students, or leaders? (Note: for ethical purposes, I am not recommending that ChatGPT be used in hiring decisions or for diagnosing health conditions; this is simply an exercise in curiosity.) I conducted an experiment: I converted interviews and court proceedings from serial killer Ted Bundy into text. I also provided ChatGPT with the diagnostic criteria for all 10 personality disorders from the DSM-5, stripping out any text that used "diagnosis" or personality disorder terminology and substituting benign category labels. I then asked ChatGPT which category was most likely based on the collected text. After the required disclaimer, it consistently returned Antisocial Personality Disorder. If ChatGPT is being used to detect sentiment, why not empathy, or the lack thereof? What about using it as a tool to encourage more empathetic behavior?
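The label-substitution step in that experiment can be sketched as a simple find-and-replace; the term-to-label mappings below are hypothetical examples, not the actual mapping used:

```python
# Hedged sketch of the label-substitution step described above.
# The clinical terms and benign stand-ins are hypothetical examples.
substitutions = {
    "Antisocial Personality Disorder": "Category A",
    "Narcissistic Personality Disorder": "Category B",
    "personality disorder": "category",
    "diagnosis": "classification",
}

def anonymize(text, mapping):
    """Replace clinical terminology with neutral category labels."""
    for clinical_term, benign_label in mapping.items():
        text = text.replace(clinical_term, benign_label)
    return text

criteria = (
    "Antisocial Personality Disorder: a diagnosis requiring "
    "a pervasive pattern of disregard for the rights of others."
)
print(anonymize(criteria, substitutions))
```

Anonymizing the labels before prompting is what makes the model's answer a pattern match on the criteria themselves rather than a lookup of a familiar diagnostic name.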
This opens up a bright future for LLMs in applications that assist users in recognizing their knowledge gaps, missed details, or miscommunications. They become invaluable tools for anomaly detection and recommendation engines in contextually complex domains.
Conclusion
Large Language Models such as ChatGPT represent a profound advancement in the field of natural language processing. While text generation is a notable capability, their potential extends to qualitative comprehension, pattern recognition, and reasoning. By understanding the mechanisms behind LLMs and recognizing their role as qualitative comprehension engines, we can harness their power to enhance human capacity.