I talked with an AI about AI. Here's how it went.
#ArtificialIntelligence, #AugmentedIntelligence, #DigitalDisruption - we read about these things, but we often don't experience them firsthand. We just hear weird stories about chatbots inventing their own language and being shut down before they go out of control, or hopeful predictions that we'll create a post-work society.
So I thought I'd write a post with the help of Artificial Intelligence as a social experiment. I used Cleverbot, which "learns from human input: Humans type into the box below the Cleverbot logo and the system finds all keywords or an exact phrase matching the input. After searching through its saved conversations, it responds to the input by finding how a human responded to that input" (from the Cleverbot wiki).
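That description is worth pausing on, because the mechanism is simpler than the conversation below might suggest. Here is a minimal Python sketch of the keyword-matching retrieval idea the wiki describes. The function names and the handful of saved conversation pairs are mine, invented purely for illustration; this is not Cleverbot's actual code or data.

```python
import random

# Toy illustration of the retrieval idea: store (prompt, human reply) pairs
# from past conversations, then answer a new input by finding the saved
# prompt that shares the most keywords with it and returning how a human
# once replied to that prompt. The sample pairs below are made up.
saved_conversations = [
    ("I want to write a story about artificial intelligence",
     "There are several interesting examples you could use, then."),
    ("tell me about robots",
     "Robots are machines that can be programmed to do tasks."),
    ("do you like star wars",
     "That's what Kenobi is going to brief you about."),
]

def keywords(text):
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().split())

def respond(user_input):
    """Return the human reply attached to the best-matching saved prompt."""
    user_words = keywords(user_input)
    scored = [
        (len(user_words & keywords(prompt)), reply)
        for prompt, reply in saved_conversations
    ]
    best_score, best_reply = max(scored, key=lambda pair: pair[0])
    if best_score == 0:
        # No keyword overlap at all: fall back to a random saved reply.
        return random.choice(saved_conversations)[1]
    return best_reply

print(respond("I want to write about artificial intelligence"))
```

The thing to notice is that the reply is never generated from scratch; it is a literal human sentence, written in answer to some other prompt. That goes a long way toward explaining why the responses below feel human and slightly off-topic at the same time.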
Here's how the conversation went, interspersed with some half-humorous meta-reactions I was having as this played out (you can see the raw conversation here):
User: I want to write a story about Artificial Intelligence. Where should I start?
Cleverbot: There are several interesting examples you could use, then. For instance, HAL 9000.
Wow! Cleverbot went exactly where I thought AI would fear to go: homicidal robots. You might have thought I was playing the fear card by putting HAL as the cover image, but Cleverbot brought it up. Does that mean AI can become a new scapegoat? "Uh, I'm sorry about the data visualization looking like a pirate map, the AI did it." More importantly, is this a sneaky smart AI desensitization strategy? Start with fear and reel me back with something warm and friendly? Let's see.
User: I've thought about that example. But won't that feed into people's fears that AI is dangerous?
Cleverbot: Homework never ceases, if you look at how we are in a constant state of flux and learning continuously.
Cleverbot does not know that I'm a professor. If it did, it would know that I loathe homework (because I have to grade it!).
User: Wait, are you saying humans need to learn continuously and learning how to use AI appropriately is part of that? Or are you saying that AI is learning continually?
Cleverbot: No not at all I am just taking a shortcut to the corner.
Cleverbot provides a nice dodge to my question. As an undergrad, I read a book about creative writing called The Triggering Town. One of its ideas is that we don't need to write every detail to evoke reality. We just need to provide enough details so that people can fill in the blank spaces (sounds a bit like the Matrix, no?). Is that what Cleverbot is doing here? What is this "corner" the AI is talking about? So I asked.
User: What is on the corner?
Cleverbot: That's what Kenobi is going to brief you about.
Wow two! Cleverbot has completed an emotional reversal. Start out with homicidal robots, end with a grandfatherly (or sexy bachelor, depending on which trilogy you prefer) guardian type. Now I feel safe and warm!
User: So you're going Jedi. Fun. What can the Jedi teach us about Artificial Intelligence?
Cleverbot: Can help, and can be the enemy artificial intelligence. Program issue.
Now the AI seemingly has the capacity to copy Yoda's dialect and to offer up paradoxes.
User: You can do Yoda-speak?
Cleverbot: Are you saying that you're an AI?
User: Are you saying you're not?
Cleverbot: Depends on your views.
This all ends with me being accused of being an AI and the AI suggesting that everything is subjective.
...
My attempts at meta-humor aside, this is pretty fascinating. This particular AI has been online since 1998. But the novel ideas and conversational twists that emerge are not accomplished by some robotic force. As the wiki entry said, what I'm really getting is snippets of prior conversations. So I'm actually interacting with fragments of other human conversations that are being shipped back to me from the past. In this sense, Cleverbot is more creative time capsule than thinking entity.
Imagine if I had started playing with it then, in 1998. If I had, the current digital disruption would not feel so disruptive. And I think that's part of the point of this experiment. Yes, the AI said some pretty shocking things (and in Turing-style tests, judges have rated it almost 60% human). But I can also see how this conversation veered in novel ways that allowed me to explore new lines of thinking and to make connections I would not have made otherwise.
Most disruptions are like this. They actually gestate for a long time, and they don't feel disruptive to the people already using them. It is simply up to us to test our assumptions, experiment with new ways of thinking, and stay curious. Anyone want to trade some Bitcoin?