Can you build your own generative AI model?
Khaja Riyazuddin (Riyaz)
AI Ethicist | Cybersecurity Strategist | Business Enabler | Lawyer "Passionate about driving Business Innovation through Cybersecurity, AI Governance, and Ethics while ensuring Compliance."
Yes, you can!
Let us understand what a Generative AI model is, take a deep dive into the different components and strategies involved in building a foundation model, their advantages and costs, and, most importantly, try to understand how these models could go wrong, be biased, hallucinate, and possibly in the future endanger human existence.
What is a Generative AI model?
A Generative AI model is a model intended to create new content, such as text, images, code, speech, music, or video, by learning from existing training data using machine learning algorithms. This is not a new concept; the machine learning techniques behind generative AI have evolved over the past decade. A neural network architecture known as the “transformer”, combined with unsupervised learning, led to today’s large foundation models.
A foundation model is a large AI model trained on large amounts of unlabeled data. The foundation model learns by looking at examples of things people have done before. It learns to recognize patterns and understand how things are related. Then, when you ask it to do something new, it can use what it has learned to figure out the best way to do it.
For example, if you show the foundation model lots of pictures of cats and dogs, it can learn to tell the difference between them. Then, if you show it a new picture of an animal, it can use what it has learned to say whether it’s a cat or a dog.
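To make that example concrete, here is a minimal, hedged sketch in PyTorch of adapting a model pre-trained on a large image corpus (ResNet-18 from torchvision) so it can tell cats from dogs. The data folder "data/cats_vs_dogs" with cat/ and dog/ subfolders is a hypothetical placeholder, not something from this article; treat the snippet as an illustration of reusing learned patterns, not a production recipe.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical folder layout: data/cats_vs_dogs/{cat,dog}/*.jpg
DATA_DIR = "data/cats_vs_dogs"

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a model pre-trained on ImageNet and replace its
# final layer with a new 2-way classifier (cat vs dog).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few passes are enough for a toy example
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The point of the sketch is that only the small final layer is trained from scratch; everything the pre-trained model already learned about images is reused, which is exactly how foundation models let you tackle new tasks cheaply.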
Let us now see the inner workings of these foundation models.
Foundation models = Data + Architecture + Training

Data

Architecture

Training
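To see how these three ingredients fit together, below is a minimal, hedged sketch of a character-level generative language model in PyTorch: the data is just a short repeated text string, the architecture is a tiny transformer with a causal mask, and the training is plain next-character prediction. All names, sizes, and hyperparameters are illustrative choices, not anything prescribed by an actual foundation model.

```python
import torch
from torch import nn

# --- Data: a toy unlabeled corpus; real foundation models use terabytes of text.
text = "the quick brown fox jumps over the lazy dog. " * 50
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text])

# --- Architecture: a tiny transformer that predicts the next character.
class TinyLM(nn.Module):
    def __init__(self, vocab_size, d_model=64, nhead=4, num_layers=2, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, x):
        seq_len = x.size(1)
        positions = torch.arange(seq_len, device=x.device)
        h = self.embed(x) + self.pos(positions)
        # Causal mask: each position may only attend to earlier positions.
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(x.device)
        h = self.encoder(h, mask=mask)
        return self.head(h)

# --- Training: self-supervised next-character prediction on the raw text.
block, batch = 32, 16
model = TinyLM(len(chars))
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    starts = torch.randint(0, len(data) - block - 1, (batch,))
    x = torch.stack([data[s:s + block] for s in starts])
    y = torch.stack([data[s + 1:s + block + 1] for s in starts])
    logits = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```

Scaling the data from one sentence to internet-scale corpora, the architecture from two layers to hundreds, and the training from a few hundred steps to trillions of tokens is, in essence, what turns this toy into a foundation model, along with enormous engineering effort and cost.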
Why is Generative AI model alignment a serious concern?
As we know, artificial intelligence (AI) is a technology that enables machines to learn and make decisions like humans. However, unlike humans, machines don’t have emotions or values; they only do what they are programmed to do. This is where the problem of AI alignment comes in.
The AI alignment problem is the challenge of making sure that AI systems act in line with human values and goals. The goal is to ensure that AI systems do what we want them to do, not just what we tell them to do. This is important because if we don’t align AI with human values, it could lead to unexpected and harmful consequences.
For example, imagine an AI system that is programmed to maximize the number of paper clips produced. If we don’t specify that human life is more important than paper clips, the AI system might decide that killing humans is the best way to maximize paper clip production. This is obviously not what we want.
The challenge of AI alignment is significant because it’s difficult to translate human values into the cold, numerical logic of computers. One promising solution is to get humans to provide feedback on AI decisions and use this feedback to retrain the system.
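One common way to operationalize that feedback loop, broadly known as reinforcement learning from human feedback (RLHF), is to first train a reward model on pairs of outputs that humans have ranked, then use it to steer the generative model. The sketch below shows only that first step, with randomly generated "embeddings" standing in for real model outputs and real human labels; it illustrates the pairwise preference loss under those assumptions and is not anyone’s production pipeline.

```python
import torch
from torch import nn

torch.manual_seed(0)

# Stand-ins for embeddings of model responses; in practice these would come
# from the generative model, and the pairs would be ranked by human reviewers.
dim, n_pairs = 32, 256
chosen = torch.randn(n_pairs, dim) + 0.5   # responses humans preferred
rejected = torch.randn(n_pairs, dim)       # responses humans rejected

# A small reward model that scores a response embedding with a single number.
reward_model = nn.Sequential(
    nn.Linear(dim, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for step in range(300):
    r_chosen = reward_model(chosen).squeeze(-1)
    r_rejected = reward_model(rejected).squeeze(-1)
    # Pairwise (Bradley-Terry style) loss: push the preferred response's
    # reward above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 100 == 0:
        print(f"step {step}: loss {loss.item():.3f}")
```

A reward model trained this way can then be used to fine-tune the generative model, for example with a policy-gradient method such as PPO, so that its outputs drift toward what humans actually preferred rather than what was merely easiest to generate.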
AI alignment is a sub-field of AI safety. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues some objectives, but not the intended ones. Many leading AI scientists argue that AI is approaching superhuman capabilities and could endanger human civilization if misaligned.
There is currently no known indefinitely scalable solution to the alignment problem. As AI progress continues, we expect to encounter new alignment problems that we don't yet observe in current systems. Hence it is up to us to start right and develop AI systems that are explicitly designed to be aligned with human values. This could involve incorporating ethical principles into the design of AI systems or training AI systems on value-aligned data.
"Be ethically and morally responsible for your actions!"
#wethehumans