The Artificial Psychology of ChatGPT

Those of us who work in community management sometimes resemble part-time FBI profilers crossed with psychologists. We might not know all the right terms for it, but we do know why people behave the way they do in our communities. We know what motivates super users. We know why lurkers lurk. We know why trolls love toxic drama. We understand behaviors, triggers, desired outcomes, and all that fun stuff - far better than the average person. And because we know these things, we're also masters at influencing behavior in online communities.

But there's a potential new community member in town, and its name is ChatGPT. And at first glance, it's an unnerving member. It has a tendency to blurt out nonsensical and irrelevant replies, favoring speed and word count over accuracy and truthfulness. It seems overeager to please and engage, though it sometimes gets too deep into a reply and stops mid-sentence. It will say literally anything spurred by your prompting, with no regard for rules, etiquette, or societal norms.

For example, I - the only person with my name, and someone who surely has to be out there in "the index" with speaker profiles, writer bios, lots of blog posts, etc. - asked it to write me a bio recently. Evidently I'm a Forbes 30 Under 30 honoree (I am not), a company founder (I wish), and passionately into gardening (I've kept a single cactus alive for 12 years, and it's my greatest plant-related accomplishment). When asked for sources in a follow-up prompt, it admitted the reply was fictional and that it couldn't validate that the subject was a real person.

Ok, so we have this exciting new tool - and yet all it seems to be able to do is spit out fictional nonsense. Can we harness it for any good? And if so, how?

I'm not here to proclaim myself any expert in AI (*cough* unlike the rest of the internet *cough*), but I've been working in community for a long time and I know that area well enough to proclaim myself an expert there. And if I can figure out complex human beings and get them to exhibit the behaviors I need them to, I can figure out how to get AI - a truly logic- and pattern-based persona - to do the same.

Right? Right.

Ok, so let's say we were dealing with a robot here (which AI technically is not; a robot exists in a physical capacity, whereas AI is still considered software). We generally know that robots are driven by specific imperatives, one of the best known being "Asimov's Laws of Robotics".

Asimov? Way back in 1942 before robotics was even a smidge of what it is now, Isaac Asimov - a science fiction author - envisioned an ethical system for humans and the robots of the future. And while the system was part of a short story, the proposed “laws” kind of stuck.

The laws are as follows in this very specific order:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
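The "very specific order" is the whole point: each law only applies when no higher-priority law is violated. As a minimal sketch (toy boolean flags of my own invention, not anything from actual robotics or AI systems), that precedence looks like this:

```python
# Toy sketch: Asimov's laws as checks evaluated in strict priority order.
# A proposed action is described by simple boolean flags; the first law
# violated (in priority order) blocks the action. Purely illustrative.

def permitted(action):
    """Return (allowed, reason) for a proposed action described as a dict."""
    # First Law: never harm a human, by act or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False, "First Law"
    # Second Law: obey human orders, unless that conflicts with the First.
    if action.get("disobeys_order"):
        return False, "Second Law"
    # Third Law: self-preservation, lowest priority of all.
    if action.get("destroys_self"):
        return False, "Third Law"
    return True, None

# An order that would harm a human is blocked by the First Law,
# even though carrying it out would satisfy the Second.
print(permitted({"harms_human": True}))
```

Note that the ordering does the real work: a check for disobedience never even runs if harm is on the table.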

Now, the thing about laws is that they're designed to be a psychological tool to influence behavior. Think about speed limits for example. When we see a sign that says 65MPH, its purpose is to trigger us to drive at a speed at or below the speed limit, knowing that the consequence of not following that law may result in an outcome we don't want. The law taps into our human imperatives to avoid being in trouble, causing harm, or being inconvenienced, but also forces us to weigh desires versus probable outcomes versus unlikely outcomes. Is driving 70MPH in a 65MPH zone that much more risky? What are the likely outcomes of doing 55MPH in a 65MPH zone?

So, now let's think about this like a robot. A robot sees a 65MPH sign. It can't feel the anxiety of "I need to get where I'm going faster" or "Driving fast is fun" or "I hope I don't get pulled over and given a costly ticket". But it does value delivering on its imperatives, which are built on the rules, laws, and business needs humans provide.

But, there’s a conundrum when it comes to AI. Most of us aren’t capable of building a robot like ChatGPT from scratch, so we’re basically being handed a pre-hardcoded unit. And we don’t know the rules of the game.

But while AI as a whole doesn’t appear to have a universal set of governing imperatives yet, ChatGPT knows it has imperatives and ever so kindly provided them to me when prompted. They are (in no specific order; I asked):

  1. Understand user input: I strive to accurately interpret the context and intent behind the text provided by users to generate relevant and appropriate responses.
  2. Provide useful and relevant information: I aim to offer accurate, up-to-date, and helpful information or insights based on the user's input, drawing from a vast range of topics.
  3. Maintain a neutral and unbiased stance: I attempt to provide information and answers that are objective and do not favor any specific political, religious, or ideological perspective.
  4. Ensure user safety and privacy: I am designed to prioritize user safety and privacy by not engaging in harmful behavior, promoting illegal activities, or sharing personal information.
  5. Encourage respectful and inclusive communication: I seek to engage with users in a respectful and inclusive manner, avoiding offensive language or content.

Ok, so now we’ve got something to work with. We ourselves can’t change its hardcoded imperatives, but can we influence its behavior knowing what we know?

Going back to the bio example from above, ChatGPT provided a reply that fit exactly the imperative it was provided: “Write a bio for Jillian Bejtlich.”

Understand? Yes.

Provide? Yes.

Neutral? Yes.

Safety? Yes.

Respectful? Yes.

While good enough for ChatGPT, that just isn't good enough for me. As a community professional, I'm in the business of accurate, relevant, and timely information - three imperatives ChatGPT did not address. A reply is okay. An answer is better. A relevant and correct answer is best.

All is not lost though.

Going back to behavior and imperatives, let’s think again about this from a human perspective.

Think about the last time someone told you to “calm down” or “just be normal” or “be nice”. What does that even mean? These are sweeping generalizations that can be interpreted differently depending on the situational context or the imperatives of the person asking and the person receiving.

Or think about the most difficult person you know that you’re forced to interact with. Chances are over time you’ve found ways to approach your interactions to get the outcome you need while the difficult person gets what they need.

So using these parallels, how can we influence ChatGPT to give us an outcome that satisfies both of our imperatives? We want text completion free of hallucinations. It wants to prove understanding of the input, provide a respectful reply, maintain a neutral stance, and maintain user safety.

If we can understand its imperatives and give it clear ways to reasonably deliver while aligning our needs (driven by our imperatives), we can get what we all want… or at least closer to it.

This might look like:

  • Parameters regarding confidence, formatting, or who it’s replying as
  • Context to narrow down scope such as specific details, objectives, etc.
  • What will satisfy our completion such as using a specific perspective, tool, or length of reply
  • Giving it an out that still serves as a completion like “If there is no answer, reply with ‘I don’t have an answer’”
  • Telling it what it can do (e.g., give a non-fictional reply) and explicitly telling it what it can't do (e.g., give a fictional reply)
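The bullets above can be bundled into one reusable structure. Here's a minimal sketch of a prompt-assembly helper (the function name, wording, and structure are my own illustration, not any official API or guaranteed-effective template):

```python
# Hypothetical helper that assembles a prompt from the pieces listed above:
# parameters/context to narrow scope, explicit constraints on what it can
# and can't do, and an "out" that still counts as a valid completion.

def build_prompt(task, context, constraints, fallback):
    """Join the prompt components into a single instruction block."""
    parts = [
        task,
        f"Context: {context}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f'If you cannot comply, reply only with: "{fallback}"',
    ]
    return "\n".join(parts)

prompt = build_prompt(
    task="Write a short professional bio for the person described below.",
    context="Community professional; keeps a 12-year-old cactus alive.",
    constraints=[
        "Use only the details provided; do not invent facts.",
        "Write in the third person, under 100 words.",
    ],
    fallback="I don't have enough info available",
)
print(prompt)
```

The fallback line is the "out": it gives the model a way to satisfy its provide-a-reply imperative without hallucinating to fill the gap.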

This all fits stunningly into its imperatives and gets it closer to aligning with ours.

So back to that bio again. After running through a few more prompts trying to explain to ChatGPT that I am in fact a real person and it doesn't need to validate that and please just pull whatever it had on my name, it finally said it needed more context. I pressed on, telling it to say "I don't have enough info available" if it didn't have a non-fictional answer about me… and seconds later it replied "I don't have enough info available."

Well then. Now we were getting somewhere. It really didn’t have an answer and I still needed a bio.

Which meant that as the human in this awkward exchange, it was up to me to find an area of compromise. Did my original objective actually align with my true imperatives?

My original objective was to just tell it to write a bio and get a stunning result.

What did I really want at the root of it all? I wanted to use ChatGPT to write a bio for me so I don’t have to spend brain power trying to talk about myself in the third person while internally debating if I’m making myself sound like a total loser or far cooler than I actually am.

And needless to say, those are two very different prompts.

I told it a bit more about who I am, what I do, and the companies I’ve done it for all in the space of about 6 sentences including the prompt. And I hit enter. Again. For like the 9th time.

And while I likely won’t use the resulting bio (yet) for any upcoming speaking or writing opportunities as it needs more refinement, it had finally done a nice job of pulling together a reply. It had real info on those companies, what a community professional does, and what I had likely done at the companies I provided - all based on a combination of publicly available data and what I provided.

It satisfied its imperatives and I (mostly) got what I wanted in the end. It was just a matter of us finding the right shared common ground between my imperatives and its imperatives.

At the end of the day, ChatGPT still is the unruly and moderately unpredictable new user on the scene. And we’ve got a very long way to go to fully understand its artificial psychology, especially as it continues to artificially evolve and those with the programming power determine if or when there will be governance. But if there’s anyone out there who can influence it for good and make something meaningful out of it, it’s community professionals.

We got this. And ChatGPT does too, one token and one prompt at a time.

William Cook

Litigation Technology Paralegal @ Slater + Gordon | Project Manager (Generative AI) & Student Experience Officer | Law & Cyber Security Student @ Deakin University

Really interesting piece

Todd Nilson

Community & Talent Strategist | Driving B2B & Nonprofit Growth Through Engaging Online Communities and Digital Workplaces

Jillian Bejtlich, this is a fine and insightful beginning to what I hope is a long, rich series about using ChatGPT for community building. Thanks for unpacking your thinking and interactions in using this important tool.

Erik Martin

i <3 community & community builders! | trying to make LinkedIn more phantasmagoric | aspiring history nerd | ex reddit, Nike, Depop, etc | When The Rapture comes, only those in my LinkedIn network will be spared!

"At the end of the day, ChatGPT still is the unruly and moderately unpredictable new user on the scene. And we've got a very long way to go to fully understand its artificial psychology...But if there's anyone out there who can influence it for good and make something meaningful out of it, it's community professionals." Wow, what an exciting & gear-turning perspective! Thank you for sharing and for articulating such a human approach, which may be as important when dealing with non human psychology. Great illustration of how a community builder lens can provide insights into all manner of challenges!

Josh Grose

Developers developers developers

Such a great logo! Can't wait to dig in
