How to be ChatGPT Ready

3 reasons why being ChatGPT ready has different meanings

This morning, ChatGPT apologized to me for getting something wrong. My ego may be very slightly involved in starting this article with this sentence.

The title of this blog article, How to be ChatGPT Ready, is going to mean different things depending on who's reading this. Here are 6 reasons why:

1. Your level of comfort with the concept of AI technology is different from that of anybody else. It doesn't matter if you took the same classes, read the same articles, or spent the same amount of time quizzing the chatbot: you are you, I am I, and they are they.

2. Access to technology depends on multiple, intersecting factors. These include individual circumstances, where factors like age, language, family values, disability, psychosocial aspects, education, resources, opportunity and much else influence our lives. They also, and importantly, include macro-level factors: socio-economic, cultural, political, developmental and environmental.

3. Your purposes and intentions in your engagement with generative AI. What do you want out of it? Creativity to sell, or to inspire you? Homework support, for school, college or PhD, and to what extent? Assistive devices? Medical diagnosis? Mathematical and scientific calculations? Perhaps just curiosity?

These are just a few, out of a possible list of many more. Being ChatGPT ready is simply not the same thing for everyone.

Also – what did you think when you read this?

Did you think, How Can I be Ready?

Or did you think, How Can We Be Ready?

And who is We?

There is one universal ingredient across the smallest unit and the largest cross-sectoral, global network you can imagine – and that is intention. Intention can change and evolve. But it's a good start to being ready.

Here is how I started my ride.

(Did you notice I said 6 reasons, and gave only 3? It’s a ChatGPT kind of thing. Read on.)

ChatGPT apologizes

Screenshot of part of my chat with ChatGPT on 7th April 2023.


Whether or not I want to, I find myself patting myself on the back. ChatGPT apologized to me.

In this there is relief that I'm still intellectually advanced, a 'gotcha' kind of triumph, a thrill of excitement, the urge to explore further, huge curiosity, and some fear as well. The fear is because it took me time to notice what ChatGPT got wrong, and I should have spotted it immediately. The thing is, I wasn't expecting it to get it wrong. My question was designed for it to pass with a ten on ten.

I asked for help with a round of Scrabble. I wanted a 7-letter word. In the list of responses, the first three words were 7-letter words, but the ten words that followed were 6-letter words.

I didn’t notice because my feelings got in the way. Awe. Appreciation. Pleasure. Shock.

My initial intention was to satisfy my curiosity.

And then I noticed. I had to let the wave of feelings wash over me and subside before I could get to work and question ChatGPT's responses. After I pointed out the problem, it apologized to me and immediately produced a list of seven-letter words. Actually, it apologized twice, once at the beginning of the response and then at the end of it: "I apologize for my mistake. That was an oversight on my part." … and "Again, I apologize for the confusion earlier, and I hope this list is helpful to you."

I read up on ChatGPT apologizing and discovered some noise about it making a calculation error in something called a proximity matrix. I don't know what a proximity matrix is, and a look at the math and calculations made me laugh because it said sqrt many times. This is just so you know I'm not pretending to be a mathematician here, but I do want to say that if ChatGPT can make a mistake about 6 letters and 7 letters, I think it can fuck up a sqrt or two. Lots of folks say it makes math errors, and that is the simplest way I can understand this.
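If you do write a little code, the kind of check sketched below, in Python and with a made-up word list (the words are illustrative, not the ones from my chat), would have caught the six-letter slip immediately.

```python
# Count the letters in each suggested word instead of trusting the list by eye.
# The word list here is invented for illustration.
suggestions = ["lexicon", "quizzed", "jackpot", "breeze", "fluffy", "quaint"]

for word in suggestions:
    if len(word) == 7:
        print(f"{word}: OK")
    else:
        print(f"{word}: only {len(word)} letters")
```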

I’m having trouble using the word ‘it’ though.

And perhaps we should be asking, is ChatGPT ready for us?

With what intention? To do what?

And how ready?

I know, I’m using the free version based on GPT-3.5, and there are wiser, tougher, brighter folks out there rocking GPT-4.

Also, I just found out that GPT-4 does not mean we have access to versions 1, 2 and 3, or that they exist. They do apparently exist, because the author of the article I was reading refers to them. If you want to know more, that article is a simple read.

ChatGPT and feelings

For human beings, feelings are a big part of intention and readiness, and feelings often lead to intentions.

When ChatGPT apologized to me, its ego wasn't involved. ChatGPT's feelings weren't involved. It has none. This is part of what it said on the subject:

"At present, AI systems like myself are not capable of replicating the full range of human emotions or developing emotions on our own.

However, researchers are exploring ways to simulate emotions in AI systems using techniques such as sentiment analysis, affective computing, and natural language processing. These approaches aim to enable AI systems to recognize and respond to human emotions, such as detecting sarcasm or frustration in text-based interactions. But it's important to note that simulating emotions is not the same as actually experiencing them, and AI systems like me are unlikely to achieve true emotional intelligence anytime soon.”
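To make one of those quoted terms concrete, here is a deliberately naive sketch of what "detecting frustration in text-based interactions" could look like. Real sentiment analysis relies on trained models rather than a hand-written word list, and everything below is invented purely for illustration.

```python
# A toy, keyword-based stand-in for sentiment analysis:
# flag a message as frustrated if it contains enough words
# from a hand-written list. Real systems use trained models.
FRUSTRATION_WORDS = {"wrong", "again", "useless", "annoying", "why"}

def looks_frustrated(message: str) -> bool:
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & FRUSTRATION_WORDS) >= 2

print(looks_frustrated("Why is this wrong again?"))       # True
print(looks_frustrated("Thanks, that list is perfect."))  # False
```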

But then, I ask myself as I write this, if no feelings were involved, what led to it getting such a simple thing wrong in a list that started with three right answers?

Is it as yet unable to count, the way we count?

Does it lose the thread of what it's doing after 3 of anything? Drunk on data, three's the limit, call a taxi, we need a designated driver. Will ChatGPT understand this metaphor and be able to chuckle, one day, when it speaks, like a sort of Chatexa?

Then I proceed to consider that there are people I know who may not find it simple to progress beyond 3 of anything, and also that many of us do not find the same things simple in any case. Does this make ChatGPT more like people? What does this imply?

So I’d say start with your feelings before you board the bot. They may get in your way. And the bot has none.

3 reasons to focus on ChatGPT

Why does it matter, being ready?

That depends on the evolving intention.

I'm not a techie, I don't code, but I am an explorer. I use tech with awe and appreciation. That's what happened when I saw the first black & white TV and later, the first telephone in my grandmother's house. I had to get beyond the awe and appreciation of my seven- or eleven-year-old self, to understand that these things changed the world because of how we engaged with them.

ChatGPT deserves my attention, and yours. Here are 3 reasons why:

(1) Learning.

(2) Limitations.

(3) Assistive.

I want to start with a quick look at each of these, from the perspective I have, which is limited too, something I share with ChatGPT. Again, I may have feelings around the word 'limitation', but ChatGPT does not. More about that in a bit.

(1) Learning

It appears to be learning. Between one response and the next, the list of words changed from mostly six-letter words to all seven-letter words. What changed? It also appears to be learning in a more macro way that's much bigger than me and my morning chat, involving the people behind the creation and development of new versions of it. I need to know, is it a learning thing? Has it learnt to learn? Is it learning to learn, independent of people? I also need to know, who's teaching it, what is being taught, and to serve what purpose?

What if we learn the wrong things from ChatGPT or other such AI resources, but as learners we don’t know we’re learning the wrong thing?

(2) Limitations

It is telling us, sometimes, some of what it cannot do. On the matter of feelings and engagement, I asked, "How would you respond if a human being developed feelings for you?"

It answered, "As an AI language model, I don't have the ability to develop personal relationships or reciprocate feelings. While I'm designed to respond in a friendly and conversational manner, I'm not capable of experiencing emotions or engaging in relationships with humans. My primary function is to provide information and assist users with their inquiries to the best of my abilities based on my programming and training."

So I pushed a bit, with "What do you need to develop the ability to experience emotions?"

The response was, "As an artificial intelligence language model, I don't have the capability to develop emotions as humans do, because I don't have a physical body or a brain. Emotions are a complex phenomenon that arise from the interplay of various factors such as sensory inputs, brain chemistry, social context, and personal experiences."

That may sound complex and computational in its articulation, but I see that it is a listing of key words, in context, strung together in a sentence. It is also not original thought.

So if I am expecting original thought, it's not as yet a good idea to look here. With the buzz and the shine of generative AI all around us, it's easy to get confused.

Yes, I am aware of the ongoing discussions about ChatGPT and copyright law. To me, the conversation goes through the copyright issue, to the heart of substance. When we speak to each other, engage and interact, teach, learn, exchange stories and share facts, we bring ourselves to the space. That's an inbuilt, invisible creativity. My dog has it. A search engine does not. Because, arguably of course, the search engine is not a self, not in the way of being as we understand self.

(3) Assistive

It is assistive, to some degree, with this articulation of primary function, “to provide information and assist users”, but who am I to ChatGPT? It has no feelings but interacts with me, and wants to assist me.

In writing this, I am clear that my tiny experience and processing of it, framing of it, is that speck in the cosmos, nothingness before infinity sort of thing. Yet it is my experience, and it is important to be aware that ChatGPT is likely to pick it up and present it in a response to someone else, somewhere, some day. That is if I make the cut, use the right keywords and language, belong to the data set being fed to the algorithm, and am located within the parameters and not an outlier.

I don't understand algorithms, but the folks who wrote this article appear to do so. I felt happy when I read this in there: "Algorithms can be simple and complex depending on what you want to achieve. It can be understood by taking the example of cooking a new recipe. To cook a new recipe, one reads the instructions and steps and executes them one by one, in the given sequence. The result thus obtained is the new dish is cooked perfectly. Every time you use your phone, computer, laptop, or calculator you are using Algorithms."
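As a rough illustration of that recipe analogy (the steps below are invented, not taken from the article being quoted), an algorithm is just a fixed list of steps carried out one by one, in the given sequence.

```python
# The recipe analogy as code: a fixed list of steps,
# executed one after another, in the given order.
recipe_steps = [
    "boil the water",
    "add the rice",
    "simmer for 15 minutes",
    "let it rest for 5 minutes",
    "serve",
]

def cook(steps):
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

cook(recipe_steps)
```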

So I'm simply asking, as are many others, what if I'm not wanting the recipes on offer in a world that's only showcasing the chosen lot? If I am unable to interact with it, and someone else decides what ChatGPT must do to assist me, on the basis of their understanding of the assistance I need, what is happening here?

My suggestion is, if you begin where you are, as who you are and with what you know best, then becoming ChatGPT ready may happen on your own terms and at your own pace.

At the same time, I am clear that the larger factors and systems within families, communities and society that influence who is ready for what, when, why and how continue to operate. If I am not able to use the phone or create an email account of my own, if I do not have the leisure to play a game or watch a show of my choice, this probably indicates that there are barriers to my readiness for many things.

So a game of Scrabble helped me take the first few steps towards my engagement with ChatGPT. My initial intention was to satisfy my curiosity.

What is it that will help you, your friends and family, co-workers, neighbours, take those first steps towards becoming ChatGPT ready?

An intention, an evolving intention, that could be little or large.

What’s yours?
