The Alignment Problem: How Can Artificial Intelligence Learn Human Values - Episode I - Day Three
Stephen Fahey
Course Creator & Emotional Intelligence Specialist | Guiding Practical Skills for Mental Health Support | Former Educator, Now Building Empowering Learning Experiences
"This series explores how we as researchers, policymakers, and Business Leaders can close the gap regarding the alignment problem. I have chosen the same title as Brian Christians's book where I am reversing and upgrading some of the core themes mentioned in the book." Enjoy The Series
Prologue:
Steve Jobs was a charismatic pioneer of the personal computer era. With Steve Wozniak, Jobs founded Apple Inc. in 1976 and transformed the company into a world leader in consumer electronics. Widely considered a visionary and a genius, he oversaw the launch of such revolutionary products as the iPod and the iPhone.
The story continued: Apple grew into the second most valuable company in the world, grounding new product managers in ever greater innovation long after its founders had left the famously secretive, closed-source giant.
Now Generative AI has emerged, and the world's number one most valuable company, Microsoft, has partnered with its sidekick, OpenAI. An interesting Batman and Robin saga indeed.
I for one am deeply embedded in my research into Micro Language Models, which, by my hypothesis, have the power to save lives and protect the innocent against national and foreign security threats.
In essence, some, such as Elon Musk and his famous musketeers, say that we are moving too fast in the domain of GenAI. I for one back Microsoft and OpenAI in their new heroic adventure to save the world from chaos. Maybe!
Introduction
Let's dive into my journey of pioneering the concept of Micro Language Models. For those new to AI, let's set the stage with some essential background before exploring the origins of this fascinating series.
Micro Language Models stand in contrast to their larger counterparts by their compact size and minimal data requirements for training. While companies like OpenAI have historically fed colossal datasets from the internet into their algorithms to create extensive, albeit imperfect, language systems, Micro Language Models take a different approach.
Consider the concept of word embedding systems, which represent words as lists of numbers (vectors) so that an AI can learn to predict subsequent words in a sequence. Arithmetic on those vectors captures something of meaning, exemplified by: sun + beach = holiday. This foundation aims to mimic human language understanding and prediction, albeit with ongoing challenges in fully replicating the complexity of human thought and communication.
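To make that concrete, here is a minimal sketch in Python of the vector arithmetic behind the sun + beach = holiday example. The three-dimensional vectors and the word list are invented purely for illustration; real embedding models such as word2vec or GloVe learn vectors of hundreds of dimensions from large text corpora.

```python
import numpy as np

# Toy word vectors, invented for illustration only. Real embeddings are
# learned from vast text corpora and have hundreds of dimensions.
embeddings = {
    "sun":     np.array([0.9, 0.1, 0.0]),
    "beach":   np.array([0.1, 0.9, 0.0]),
    "holiday": np.array([0.9, 0.8, 0.1]),
    "office":  np.array([0.0, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """How alike two vectors are: 1.0 means they point the same way."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# "sun + beach" as vector arithmetic, then find the nearest word.
query = embeddings["sun"] + embeddings["beach"]
best = max(embeddings, key=lambda w: cosine_similarity(query, embeddings[w]))
print(best)  # -> "holiday" with these toy numbers
```

The point is simply that, once words become numbers, "closeness in meaning" becomes something a machine can calculate.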
The journey into AI and language models reveals a continuous learning process about neural networks and the human brain. Despite lacking a master's degree in machine learning or psychology, one can appreciate the ongoing discoveries in how AI can address some of humanity's most pressing issues, despite inherent imperfections in current generative AI models.
As OpenAI ventures into utilizing the internet as a primary data source for models like ChatGPT, legal and ethical challenges emerge. Yet, my focus as a researcher and author is to distill current and future AI trends into a digestible format for my readers, combining my passions for reading and writing with my role as a thought leader.
So, what have I done to develop a safer language model? Armed with basic resources—a laptop, a desktop, my intellect, and a wealth of information from AI literature and the internet—I embark on this quest.
Here's my preliminary hypothesis: I aim to invert the concept of Large Language Models, creating smaller, more efficient technologies that defy the conventional need for vast data sets. Instead, I propose leveraging academic research to craft educational avatars. This shift stems from my unique background and classified knowledge gained as a former British Army reservist, which highlights the urgency of this research due to broader security implications.
Embarking on this AI exploration, I've been rapidly absorbing and applying AI knowledge, building upon my history of quick, impactful research. This book aims to take readers on a meaningful journey, simplifying complex concepts for widespread understanding. My lifelong commitment to learning and sharing knowledge underpins this endeavor, aiming to enhance global comprehension of AI through easily accessible content.
Experimenting with Large Language Models in social media has yielded intriguing preliminary results, suggesting that a single Micro Language Model could significantly streamline business operations without extensive data storage needs. Although this marks uncharted territory and presents challenges, especially for someone with limited coding experience, the potential for breakthroughs is exhilarating.
Predictions about the impact of Generative AI, particularly in software development and social media, hint at profound shifts in how technology is created and utilized. The goal is not just to innovate but to empower individuals and communities to harness these tools for their own advancement.
As we venture into this exploration together, stay tuned for insights and lessons on leveraging AI to create a more equitable and empowered world.
Chapter One
In lieu of the traditional method of transcribing my insights on AI directly as they come to mind, I've chosen a novel approach to sharing my knowledge: I'm collaborating with ChatGPT-4 to integrate our discussions directly into the narrative. This strategy serves a dual purpose. First, it demonstrates my interactive process with the technology, offering a unique perspective on AI engagement. Second, it underscores the sophistication involved in crafting inquiries that yield meaningful, story-enhancing responses. This is not merely a creative choice but a reflection of a deeper scientific curiosity about our interaction with Large Language Models, viewed through the lens of an end user. This approach, I believe, not only enriches the content but also introduces an engaging, educational dimension to the discourse on AI.
I wanted to understand a little more, but not too much more, about the idea of neural networks. This is important if I am to at the very least appreciate the work of GenAI systems and identify pitfalls in their human application.
ChatGPT-4 Question:
Give me an easy-reading article on what neural networks are so I can write about it in my AI book.
Alright, Steve, let's dive into the fascinating world of neural networks in a way that's easy to digest. Imagine you're learning about a city's intricate web of streets, lights, and signs for the first time. That's kind of what a neural network does when it's learning from data – navigating through complex patterns and making sense of them.
Neural networks are inspired by the human brain. Just as our brains are made up of neurons connected by synapses, neural networks consist of units (neurons) connected by lines (synapses) that carry data. These units are arranged in layers: an input layer, one or more hidden layers, and an output layer. The input layer receives the data, the hidden layers process it, and the output layer presents the final result.
The real magic happens in the hidden layers. Each connection between the units has a weight, which determines the importance of the input value. When data passes through, the network adjusts these weights based on how close the output is to what's expected. This process is known as learning, and it’s achieved through a method called backpropagation. Imagine giving a child feedback on a drawing; over time, they improve based on your guidance.
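To ground the analogy, here is a minimal sketch of that forward-and-backward process in Python: a toy network with one hidden layer learning the classic XOR problem. The layer sizes, learning rate, and iteration count are arbitrary choices for illustration, nothing like the scale of a real language model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: 2 inputs -> 3 hidden units -> 1 output, learning XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)  # input-to-hidden "synapses"
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)  # hidden-to-output "synapses"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: data flows input layer -> hidden layer -> output layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (backpropagation): compare the output with what was
    # expected and nudge every weight to shrink the error a little (the
    # "feedback on a child's drawing" from the analogy above).
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

print(output.round(2))  # should end up close to [[0], [1], [1], [0]]
```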
Neural networks are like chameleons, adapting to solve a wide range of problems, from recognizing faces in photos to translating languages or even driving cars. They learn patterns from vast amounts of data, which is why they're at the heart of many AI applications today.
The beauty of neural networks lies in their simplicity and versatility. They mimic the learning process of the human brain, yet can surpass human speed and accuracy in specific tasks. As we continue to feed them more data, they get even better, making them a cornerstone of modern AI.
When writing about neural networks in your book, consider using everyday analogies to make these concepts accessible. The goal is to demystify AI and show how it's not just a realm of complex algorithms but a fascinating reflection of our learning processes.
ChatGPT-4 Question:
OK, how did OpenAI code the neural networks? What type of code did they use, and why did it take them so long to build ChatGPT?
Creating something as advanced as ChatGPT by OpenAI involves a complex blend of expertise in machine learning, software engineering, and computational resources. The backbone of ChatGPT, like many AI innovations today, is built on neural networks, specifically a type known as transformer models. These models are designed to understand and generate human-like text by analyzing vast amounts of written language.
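Since ChatGPT rests on transformer models, a small, hedged illustration of the mechanism that makes them special may help: attention, where every token in a sequence scores its relevance to every other token. This sketch uses random toy matrices in Python; a real model learns these projections and stacks dozens of such layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "sentence" of 4 tokens, each an 8-dimensional vector. Real models
# use learned token embeddings with thousands of dimensions.
tokens = rng.normal(size=(4, 8))

# Learned projection matrices in a real transformer; random stand-ins here.
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv  # queries, keys, values

# Scaled dot-product attention: each token scores every other token, and
# the scores decide how much of each token's information to blend in.
scores = Q @ K.T / np.sqrt(K.shape[-1])
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
attended = weights @ V

print(weights.round(2))  # each row sums to 1: one token's "focus" over the rest
print(attended.shape)    # (4, 8): the mixed, context-aware token vectors
```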
The Code
OpenAI primarily uses Python for coding its neural networks, leveraging powerful libraries such as TensorFlow and PyTorch. These libraries provide the tools necessary to design, train, and deploy neural networks efficiently. Python is favoured for its simplicity, readability, and extensive support from the machine-learning community. It allows researchers to prototype ideas quickly, test hypotheses, and scale up to large models and datasets with relative ease.
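As a hedged illustration of that "prototype quickly" point, here is roughly what defining and training a small network looks like in PyTorch. The toy regression task (learn y = 2x + 1) and all the numbers are invented for illustration; it bears no relation to how OpenAI actually trains its models beyond using the same library style.

```python
import torch
import torch.nn as nn

# Toy data: 20 points on the line y = 2x + 1.
x = torch.linspace(-1, 1, 20).unsqueeze(1)
y = 2 * x + 1

# Define a small network, an optimizer, and a loss in a few lines.
model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass and error measurement
    loss.backward()              # backpropagation, handled automatically
    optimizer.step()             # nudge the weights

print(model(torch.tensor([[0.5]])))  # should be close to 2 * 0.5 + 1 = 2.0
```

This brevity, rather than any secret ingredient in the language itself, is a large part of why Python dominates machine-learning research.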
The Process
The development of ChatGPT didn't happen overnight. It is the result of years of cumulative research and development in the field of artificial intelligence, and several factors contribute to the time it takes to build a model like ChatGPT.
Why It Takes Time
Building something as complex and groundbreaking as ChatGPT requires a confluence of cutting-edge research, high-quality data, sophisticated coding, and massive computational power. Each of these elements must be carefully orchestrated and aligned with ethical standards and safety considerations. The iterative process of training, testing, and refining the models ensures they are not only powerful and efficient but also safe and responsible AI agents.
In essence, the journey to building ChatGPT showcases the dedication to advancing AI in a way that's beneficial and aligned with human values, even if it means taking the time to get it right.
Analysis from Stephen Fahey
When I read the responses to the questions I asked Chat, they did not surprise me. I knew it would give me a unique yet interesting account of neural networks and how OpenAI has attempted to mimic the human brain. To put it in a simple, easy-to-understand way, I argue that all it is really doing is running a larger word embedding system.
This is a good start to my core research, but, as I am sure you have already spotted, there is a flaw: the human brain is not perfect, nor is the human body attached to it, and we are still learning about the brain. A further criticism of LLMs is the bias they can sadly create.
Yes, in one sense you could argue that some elements of the responses you get are biased, since the models have been fed media posts from outlets as different as the Wall Street Journal and BBC News, so the right-wing versus left-wing argument applies. My interest in LLMs, however, is in using them to my advantage.
I cannot tell you how much my research has benefited from the fact that I am a trained professional teacher who knows what a good essay or paper looks like, having been trained to mark them.
Let's go back a little to the pre-GenAI days, when you had to write and research your articles independently. You may have read media posts from the likes of BBC News and spotted flaws or clear biases that you felt did not add weight to the evidence in your article. You had to rely solely on your analysis skills and research to judge whether that news post was worth using to support your written work.
And this core method is still the same now that GenAI and ChatGPT have been released. Journalists still write articles based on their skills, sometimes referencing studies to back up the story, although a lot can be said about misleading readers on medical science.
The point I am trying to make here is that Chat is now replacing the need to read information such as news articles: it can act much like Google, if not better, because you can program it to give more specific responses to your main questions and push past the default answer length (around 1,500 characters, I believe, on ChatGPT 3 and 4), which most folks do. I advise you to do this, as it's an LLM, so it needs to get to know your traits. So feed it your CV, basically, and tell it off if it makes a mistake.
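For readers who want to go one step further than the chat window, here is a minimal, hedged sketch of the same idea via the OpenAI Python SDK, where a "system" message plays the role of the standing profile you would otherwise paste in by hand. It assumes the openai package and an OPENAI_API_KEY environment variable; the model name and profile text are placeholders, not recommendations.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message gives the model standing context about you, much like
# feeding it your CV in the chat window. This profile is a placeholder.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder: any chat-capable model name
    messages=[
        {"role": "system",
         "content": "You are assisting a former teacher researching Micro "
                    "Language Models. Keep answers short and practical."},
        {"role": "user",
         "content": "Suggest an angle for today's article on AI bias."},
    ],
)
print(response.choices[0].message.content)
```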