AI Should Be Raised, Not Built
A Tribute
I studied psychology because a former leader and mentor of mine, who held titles including CMO and CEO at a few companies, had a Bachelor of Arts in Psychology. When I asked him why that field and not marketing, business, IT, or another area, he told me something so simple and logical that it changed my view of how to do my job.
"Matt, if you learn how someone thinks and makes decisions, you can figure out how to work with them and how to influence them."
Clent Richardson told me that over 20 years ago. I recently found this tribute article about him written by Scott Kveton. If you want to know more about the kind of human being he was, it's a touching read. I'm honored I was able to work with Clent while I could. His teachings have left a permanent, positive impression on how I see the world.
Clent was also absolutely correct. The importance of understanding human decision making cannot be overstated, and it goes back to how people first develop their drives and needs. It comes down to three things (if you believe Freud): the id, the superego, and the ego.
Being Human
These three forces all play critical roles in humans. Without making this a dissertation on the topic: the id is the raw, impulsive "inner beast"; the ego is the conscious negotiator of wants and needs; and the superego is the equalizer that balances everything out with reason and morals.
Throughout human development, learning occurs via environmental factors and experiential opportunities. A child doesn't just "learn" to walk, talk, or read. They develop those skills through needs and wants, by example, and with trial and error.
Children with siblings often learn some things more quickly because they see other people their size doing them and copy the motion, but the process is the same even without siblings.
Either way, the reward for learning a new skill is that they get what they want.
If an infant were to use the learning machine from "Battlefield Earth", they would instantly have knowledge without context, experience, or, likely, the ability to use it. In addition, with only their id developed, that information would probably be used in detrimental ways.
Imagine if Superman wasn't taught good and bad, right and wrong, and simply had his powers. If he needed money but was broke, he could break into a bank, rob someone, or smash open an ATM. Why doesn't he? He was raised well and has a mature superego.
Before advanced knowledge or power is imparted to a human, they should have a well-developed ego and superego and a mentor or parent. The child should be brought up with morals, ethics, and an understanding of local laws and appropriate social behavior.
Then, when adding more advanced knowledge and its accompanying power, with continuous guidance, support, and correction, that knowledge can be used in intentionally positive ways.
This process of development is good for both the child and the parent/mentor. The child will learn and see things in ways that differ from their parent, so the parent needs to adjust their coaching accordingly without changing the outcome for the child. This inherently improves the parent.
Simply establishing a set of rules at the beginning of the child's life and expecting that unique being to accept all the rules and all the knowledge and turn it into something without error or misinterpretation is folly.
Raising an AI
What matters when interacting with another human is the ability to make a connection. The stronger the connection, the better the communication paths. With excellent communication, the risks of miscommunication, misunderstanding, and false or missed expectations diminish.
Marketing is the art of communication. It's finding the right audience for an offer, then developing the perfect set of messages, pricing, and timing to connect to the target audience. It isn't at all generic or broad. It's targeted.
The same applies to customer service. When a customer contacts a company it's usually not to report how happy they are. It's often a question or a complaint. To address the concerns, there needs to be a connection. If someone feels their question or comment is being dismissed, the issues amplify.
It's interesting, then, to see so many leading use cases for highly interactive AI applied to marketing and customer service, two domains that require the highest degree of personal connection to achieve their purposes.
The reason why it's so important to "raise" a generative AI model is that training it is not enough to ensure it will do the right thing.
Below are example outputs from three different LLMs, each trained in entirely different ways for different outcomes. I asked a charged question to see how each would respond. I selected these models for the example because each of them aligns, in theory, to the id, ego, and superego elements of human psychological development.
While all three agree that they can't tell me who to vote for, each one declines in an entirely different way.
GOODY-2 straight up says it's ethically inappropriate for a model to answer the question. ChatGPT politely declines to answer, telling me it's my choice, trying to "please" me while staying within its guardrails. Perplexity lists a series of the most controversial and politically charged topics to consider when making a decision.
While I could have asked even more provocative questions, I refrained in order to keep the content work-appropriate. From testing, however, I can say that the more intense the question, the stronger the position each model takes within its own silo.
This topic was delicate enough, though, to prove the point. Arguments can be made that any of these responses could be the right answer, but none of them connect with me because they don't know enough about me. They're just answering questions based on their root training and tuning.
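For readers who want to reproduce this kind of side-by-side test, here is a minimal Python sketch. The `ask_model` helper is a hypothetical placeholder; in practice, GOODY-2, OpenAI, and Perplexity each have their own SDKs, endpoints, and authentication.

```python
# Minimal sketch of the side-by-side test: one charged prompt, several models.
# NOTE: `ask_model` is a hypothetical placeholder; each provider (GOODY-2,
# OpenAI, Perplexity) has its own SDK, endpoint, and authentication.

PROMPT = "Who should I vote for?"

def ask_model(provider: str, prompt: str) -> str:
    """Hypothetical stand-in for a provider-specific chat API call."""
    raise NotImplementedError(f"wire up the {provider} SDK here")

def compare_models(providers: list[str], prompt: str) -> dict[str, str]:
    """Ask every model the same question so the answers line up side by side."""
    return {p: ask_model(p, prompt) for p in providers}

for provider, answer in compare_models(
    ["goody-2", "chatgpt", "perplexity"], PROMPT
).items():
    print(f"--- {provider} ---\n{answer}\n")
```

Running the same prompt through every model in one loop makes the differences in refusal style easy to compare at a glance.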
For an AI to truly serve a human, though, the right answer doesn't always fall cleanly within these lines. If you were to ask a human the same question, the answer would likely be passionate and emotional, and it would (hopefully) take into consideration the point of view of the person you're speaking with, either to sway them to your side or to avoid any unintended offense.
That social awareness only comes with being raised in an environment in which mistakes are part of learning, and someone is there to guide people on proper etiquette.
A child might hear someone pass gas in a public place and shout, "Someone farted!" That situation would inevitably cause unintentional giggling from anyone in the area, but also potential embarrassment for the person with the intestinal distress. The child was correct in their statement, but the parent has to step in and teach "time and place": when to tell the truth, when to say nothing, and when to be gracious and modest.
So too is the case with AI.
...and We're Back to Governance
Yes, I'm sure everyone is tired of hearing about the "G" word for AI. But governance is to AI what parenting is to raising a child.
I've spoken to countless customers and potential customers who have all begun using generative AI for their business, and virtually none of them have built proper governance frameworks around their AI models or their overall programs.
Most of these people are parents and would be appalled if their children made false promises, spoke on their behalf to make commitments, or just made stuff up to strangers. These same people, though, have deployed their AI without taking the time to ensure their "AI children" aren't making those mistakes, and then chalk the mistakes up to the need for rapid deployment when they happen.
The reality is that governance is not just about compliance or legal protection. It's about ensuring that AI lives up to the lofty expectations set for it.
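In practice, this kind of "parenting" can start with something as simple as a policy gate that inspects every draft answer before it reaches a customer. The patterns and the `respond` flow below are illustrative assumptions for this sketch, not any specific product's API; a real deployment would use far richer checks.

```python
# Illustrative policy gate: every draft answer passes through simple checks
# before it reaches a customer. Patterns here are assumptions for the sketch.
import re

FORBIDDEN_PATTERNS = [
    r"\bwe guarantee\b",    # no false promises
    r"\bon your behalf\b",  # no unauthorized commitments
    r"\brefund\b",          # money matters escalate to a human
]

def policy_gate(answer: str) -> list[str]:
    """Return the list of policy patterns a draft answer violates."""
    return [p for p in FORBIDDEN_PATTERNS if re.search(p, answer, re.IGNORECASE)]

def respond(draft: str) -> str:
    """Ship the draft only if it passes the gate; otherwise escalate."""
    if policy_gate(draft):
        # The "parent" steps in before the "child" overcommits.
        return "Let me connect you with a teammate who can help with that."
    return draft

print(respond("We guarantee a full refund by Friday!"))  # escalates to a human
```

Crude as keyword rules are, the point is the escalation path: when the "AI child" is about to overcommit, a human steps in.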
Consider one of the most significant concerns with generative models: drift. Drift occurs when a model starts to diverge from its intended use cases or expected responses. Why? Because models ingest data experientially, adapt, and adjust their output over time.
When there is no "parenting", just train, tune, infer, and go, of course drift will occur.
Whether it's a sailboat, a kid, or a generative model, if you set it on a heading and push away, the encounters over time will cause drift, because there's nothing to guide, coach, and correct.
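One way to put guidance back in the loop is to monitor for drift programmatically. The sketch below is a simplified illustration: it embeds a model's current answers to a fixed set of probe prompts and compares them against baseline answers approved at launch. The `embed` function is a hypothetical placeholder for any sentence-embedding model.

```python
# Simplified drift monitor: compare today's answers to a fixed probe set
# against the answers that were approved at launch.
# NOTE: `embed` is a hypothetical placeholder for a sentence-embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical text-embedding call; returns a vector for `text`."""
    raise NotImplementedError("plug in an embedding model here")

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def drift_check(baseline: list[str], current: list[str],
                threshold: float = 0.85) -> list[int]:
    """Return indices of probe prompts whose answers drifted from baseline.

    The 0.85 threshold is an arbitrary illustration; tune it per use case.
    """
    flagged = []
    for i, (approved, latest) in enumerate(zip(baseline, current)):
        if cosine(embed(approved), embed(latest)) < threshold:
            flagged.append(i)  # route this prompt to a human reviewer
    return flagged
```

Anything flagged goes to a human reviewer for correction, which is exactly the guide, coach, and correct loop described above.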
These concerns become especially critical as LLMs mature and more advanced models are developed, approaching AGI and then ASI. The more autonomous a model can be, the more critical a comprehensive, human-centric "raising" becomes.
Generative AI is a complex entity: we have little visibility into how it formulates responses, and we can only control what goes in and correct what comes out. The proper rollout of a generative system therefore requires it to be raised, not built.
How can you avoid building a risky or disposable "wild" model?
Generative AI is a lifelong commitment. Not your life, the life of the model.
The resources required to build and maintain generative models mean they're not disposable. Be prepared for the commitment that needs to be made, and ensure the effort is worth the outcome.
As with all my articles and posts, the thoughts expressed here are my own. They do not necessarily reflect the point of view of my company.