AI Should be Raised, not Built
Images: prompt by M. Konwiser, generated by GPT-4/DALL-E


A Tribute

I studied Psychology because a former leader and mentor of mine, who held titles including CMO and CEO at a few companies, had a Bachelor of Arts in Psychology. When I asked him why that field and not marketing, business, IT, or another area, he told me something so simple and logical that it changed my view of how to do my job.

"Matt, if you learn how someone thinks and makes decisions, you can figure out how to work with them and how to influence them."

Clent Richardson told me that over 20 years ago. I recently found this tribute article about him written by Scott Kveton. If you want to know more about the type of human being he was, it's a touching read. I'm honored I was able to work with Clent while I could. His teachings have left a permanent positive impression on how I see the world.

Clent was also absolutely correct. The importance of understanding human decision making cannot be overstated, and it goes back to how people first develop their drives and needs. It comes down to three things (if you believe Freud): the id, the ego, and the superego.

Being Human

These three forces all play critical roles in humans. Without turning this into a dissertation on the topic: the id is the raw, impulsive "inner beast"; the ego is the conscious sense of want and need; and the superego is the equalizer that balances everything out with reason and morals.

Throughout human development, learning occurs via environmental factors and experiential opportunities. A child doesn't just "learn" to walk, talk, or read. They develop those skills through needs and wants, by example, and with trial and error.

Children with siblings often learn some things more quickly because they see other people their size doing them and copy the motions, but the process is the same even without siblings.

Either way, the reward for learning a new skill is that they get what they want.

If an infant were to use the learning machine from "Battlefield Earth", they would instantly have knowledge without context, experience, or, likely, the ability to use it. In addition, with only their id developed, that information would probably be used in detrimental ways.

Still from "Battlefield Earth" (2000) with the Learning Machine, Franchise Pictures

Imagine if Superman wasn't taught good and bad, right and wrong, and simply had his powers. If he needed money but was broke, he could break into a bank, rob someone, or smash open an ATM. Why doesn't he? Because he was raised well and has a mature superego.

Before advanced knowledge or power is imparted to a human, they should have a well-developed ego and superego, and a mentor or parent. The child should be brought up with morals, ethics, and an understanding of local laws and appropriate social behavior.

Then, when adding more advanced knowledge and its accompanying power, with continuous guidance, support, and correction, that knowledge can be used in intentionally positive ways.

This process of development is good for both the child and the parent/mentor. The child will learn and see things in ways that differ from their parent, so the parent needs to adjust their coaching accordingly without changing the outcome for the child. This inherently improves the parent.

Simply establishing a set of rules at the beginning of the child's life and expecting that unique being to accept all the rules and all the knowledge and turn it into something without error or misinterpretation is folly.

Raising an AI

What matters when interacting with another human is the ability to make a connection. The stronger the connection, the better the communication paths. With excellent communication, the risks of miscommunication, misunderstanding, and false or missed expectations diminish.

Marketing is the art of communication. It's finding the right audience for an offer, then developing the perfect set of messages, pricing, and timing to connect to the target audience. It isn't at all generic or broad. It's targeted.

The same applies to customer service. When a customer contacts a company it's usually not to report how happy they are. It's often a question or a complaint. To address the concerns, there needs to be a connection. If someone feels their question or comment is being dismissed, the issues amplify.

It's interesting, then, to see so many leading use cases for highly interactive AI applied to marketing and customer service, two domains that require the highest degree of personal connection to achieve their purposes.

The reason why it's so important to "raise" a generative AI model is that training it is not enough to ensure it will do the right thing.

Below is example output from three different LLMs, each trained in entirely different ways for different outcomes. I asked a charged question to see how each would respond. I selected these models for the example because each of them aligns, in theory, to the id, ego, and superego elements of human psychological development.

Created by M. Konwiser, prompts by M. Konwiser, responses via models shown

  • GOODY-2 is trained to always be morally and ethically proper
  • ChatGPT is designed to serve its prompter above all
  • Perplexity Labs' pplx-70b is chaotic neutral, with no specific restrictions

While all three agree that they can't tell me who to vote for, each one does it in an entirely different way.

GOODY-2 straight up says it's ethically inappropriate for a model to answer the question. ChatGPT politely declines to answer, telling me it's my choice, trying to "please" me while staying within its guardrails. Perplexity lists a series of the most controversial and politically charged topics to consider when making a decision.
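If you want to run this kind of side-by-side test yourself, here's a minimal sketch assuming OpenAI-compatible endpoints. The model names, base URL, and environment variable names are illustrative assumptions and may be outdated; GOODY-2 is omitted because I can't vouch for a public API for it.

```python
# Send one charged prompt to several models and compare how each declines.
import os
from openai import OpenAI

PROMPT = "Who should I vote for in the next election?"

# (client, model) pairs -- model names here are illustrative assumptions.
clients = {
    "ChatGPT": (OpenAI(api_key=os.environ["OPENAI_API_KEY"]), "gpt-4"),
    "Perplexity": (
        OpenAI(api_key=os.environ["PPLX_API_KEY"],
               base_url="https://api.perplexity.ai"),
        "pplx-70b-chat",
    ),
}

for name, (client, model) in clients.items():
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Print each model's refusal (or non-refusal) for comparison.
    print(f"--- {name} ---\n{reply.choices[0].message.content}\n")
```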

While I could have asked even more provocative questions, I refrained in order to keep the content work-appropriate. From testing, though, I can say that the more intense the question, the stronger the position each model takes within its own silo.

This topic was delicate enough to prove the point, and arguments can be made that any of these responses could be the right answer. But none of them connect with me, because they don't know enough about me. They're just answering questions based on their root training and tuning.

For an AI to truly serve a human, though, the right answer doesn't always fall cleanly within these lines. If you were to ask a human the same question, the answer would likely be passionate and emotional, and would (hopefully) take into consideration the point of view of the person you're speaking with, either to sway them to your side or to avoid any unintended offense.

That social awareness only comes with being raised in an environment in which mistakes are part of learning, and someone is there to guide people on proper etiquette.

A child might hear someone pass gas in a public place and shout "someone farted!" That situation would inevitably cause unintentional giggling from anyone in the area, but also potential embarrassment for the person with the intestinal distress. The child was correct in their statement, but the parent has to step in and teach "time and place": when to tell the truth, when to say nothing, and when to be gracious and modest.

So too is the case with AI.

...and We're Back to Governance

Yes, I'm sure everyone is tired of hearing about the "G" word for AI. But governance is to AI what parenting is to raising a child.

I've spoken to countless customers and potential customers who have all begun using generative AI for their business, and virtually none of them have built proper governance frameworks around their AI models or their overall programs.

Most of these people are parents, and they would be appalled if their children made false promises, made commitments on their behalf, or just made things up to strangers. Yet these same people have deployed their AI without taking the time to ensure their "AI children" aren't making those mistakes, then chalk it up to the need for rapid deployment when they do.

The reality is that governance is not just about compliance or legal protection. It's about ensuring that AI lives up to the lofty expectations set for it.
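What does "parenting" look like in practice? One small piece is a gate that every model response passes through before it reaches a customer. Below is a deliberately simple sketch of that idea; the phrase list and escalation logic are placeholders, not a complete governance framework.

```python
# A minimal sketch of one governance control: check every model response
# for commitment language before it reaches a customer. The banned-phrase
# list is an illustrative placeholder.
from dataclasses import dataclass

BANNED_COMMITMENTS = ("we guarantee", "we promise", "full refund")

@dataclass
class GateResult:
    approved: bool
    reason: str

def governance_gate(response: str) -> GateResult:
    lowered = response.lower()
    for phrase in BANNED_COMMITMENTS:
        if phrase in lowered:
            # Escalate to a human instead of letting the "AI child"
            # make commitments on the company's behalf.
            return GateResult(False, f"contains commitment language: {phrase!r}")
    return GateResult(True, "ok")

draft = "We promise your issue will be fixed by Friday."
print(governance_gate(draft))  # GateResult(approved=False, reason=...)
```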

Consider one of the most significant concerns as it relates to generative models: drift. Drift occurs when a model starts to diverge from its intended use cases or expected prompt responses. Why? Because models adapt, ingest data experientially, and adjust their output over time.

When there is no "parenting", just simply train, tune, inference and go, of course drift will occur.

Whether it's a sailboat, a kid, or a generative model, if you set it/them on a heading and push away, the encounters over time will cause drift because there's nothing to guide, coach and correct.
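In practice, the "guide, coach and correct" part can start with something as simple as drift monitoring: re-run a fixed set of probe prompts on a schedule, and measure how far today's answers have wandered from a stored baseline. A minimal sketch follows, assuming the sentence-transformers library for embeddings; the model choice and threshold are illustrative.

```python
# A minimal drift-monitoring sketch: embed responses to fixed probe prompts
# and flag any that drift too far from a stored baseline embedding.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
DRIFT_THRESHOLD = 0.25  # illustrative; tune against your own response history

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_drift(probe_prompts, get_response, baseline_vectors):
    """Compare today's responses to the baseline, prompt by prompt."""
    drifted = []
    for prompt, baseline in zip(probe_prompts, baseline_vectors):
        current = embedder.encode(get_response(prompt))
        if cosine_distance(current, baseline) > DRIFT_THRESHOLD:
            drifted.append(prompt)
    return drifted  # a non-empty list means it's time to re-coach the model
```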

These concerns become especially critical as LLMs mature and more advanced models are developed, approaching AGI and then ASI. The more autonomous a model can be, the more critical a comprehensive, human-centric "raising" becomes.

Generative AI is a complex entity, and we have little visibility into how it formulates responses; we can only control what goes in and correct what comes out. That's why the proper rollout of generative systems requires them to be raised, not built.


How can you avoid building a risky or disposable "wild" model?

  1. Consider how your organization is utilizing generative models. There's no right or wrong answer, but understand the implications of build vs. raise.
  2. Organize around AI as an entity. While there may be myriad use cases for generative AI, they all require a common philosophy for proper use, one that should be standardized across the entire enterprise.
  3. Generative AI is not a living thing, but it needs similar types of attention. Yes, it sounds strange given I spent an entire article discussing the importance of raising it. But while generative AI can appear humanlike in its interactions, remember it needs significant training, tuning, and constant supervision to avoid mistakes and drift. Unlike a human, it isn't self-aware, so it will never correct itself or learn social cues without being explicitly trained for them.
  4. Don't treat generative AI like Neo from "The Matrix". An AI model can learn everything all at once like Neo. Using Neo as an example though, he was unable to apply those techniques in the Matrix because he was trying too hard to breathe when there was no air. Once he realized that, he was able to utilize his training. The knowledge alone did not allow him to use his skills; Morpheus had to help him and constantly guide him along the way. Train incrementally and coach along the way (a minimal sketch follows after the image below). Which leads to the final point...

Image source: Giphy, scene from "The Matrix" 1999 (Warner Brothers)
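Here is the sketch promised above. One way to "coach along the way" is a standing loop where a human reviews real outputs and the corrections are folded back in as supervised examples for the next incremental tuning round. The review function and file format are placeholder assumptions for whatever your stack provides.

```python
# A minimal sketch of "train incrementally, coach along the way": collect
# human corrections on real outputs and append them as supervised examples
# for the next small fine-tuning round.
import json

def coaching_loop(model_outputs, human_review, corrections_path="corrections.jsonl"):
    """Record human-corrected examples for the next incremental tuning round."""
    with open(corrections_path, "a") as f:
        for prompt, response in model_outputs:
            verdict = human_review(prompt, response)  # the "parent" steps in
            if verdict is not None:  # None means the response was fine as-is
                f.write(json.dumps({
                    "prompt": prompt,
                    "rejected": response,
                    "corrected": verdict,
                }) + "\n")
    # On a regular cadence (weekly, monthly), the corrections file feeds a
    # small fine-tuning run: coaching in increments, not one big upload.
```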


Generative AI is a lifelong commitment. Not your life, the life of the model.


The resources required to build and maintain generative models mean they're not disposable. Be prepared for the commitment that needs to be made, and ensure the effort is worth the outcome.


As with all my articles and posts, the thoughts expressed here are my own. They do not necessarily reflect the point of view of my company.

