Should We Trust The New Chatbots, Should We Fear Them, or Should We Use Them With Caution?
The truth is that this week, I was going to ask if Elon Musk’s personal politics mattered as the news and social cycles swirl around his latest antics, various companies and their stock prices (manipulation, anyone?), and of course his private (right) life.
And, I will get to that. Eventually.
Frankly, I got more interested in ChatGPT, OpenAI’s online chatbot, which is being touted, by some to be fair, as the first step toward human replacement…already capable of creating new ideas from nothing more than a query…including the business plan to go along with them. And, in true DISRUPTION fashion, it is being heralded by some as the Google or Bing (yeah, I know) killer.
ChatGPT is only one of a number of chatbots currently making the rounds, but OpenAI is considered one of the most advanced AI labs in the world.
It's a topic I have written a lot about over the years, and it has fascinated me since I was a kid immersed in science fiction and sentient computers—like Mike in Robert Heinlein’s “The Moon Is a Harsh Mistress” or HAL in 2001: A Space Odyssey.
But here’s the thing…I never had the opportunity to actually interview a computer system, certainly not about itself. So I decided: what better way to learn whether we should blindly embrace ChatGPT, run screaming for the hills, or wear protective gear when dealing with it or any of its friends? Colleagues? It's the Heinlein influence, sorry…
So here goes…my discussion with ChatGPT about itself:
How biased do you think you may be?
As a machine learning model, I don't have the ability to have personal biases or beliefs. I can only provide information and answer questions based on the data that I have been trained on. My responses are intended to be factual and objective, and I am not capable of making judgments or expressing opinions.
Can’t your programmers introduce bias into your learning?
It is possible for the developers or programmers who create and train machine learning models to introduce biases into the data that the model is trained on. However, it is generally considered best practice to train machine learning models on a diverse and representative dataset in order to avoid introducing biases. Additionally, many organizations and individuals who develop machine learning models take steps to mitigate the potential for bias in their models, such as regularly reviewing the data used to train the model and evaluating the model's performance on a diverse range of inputs. Ultimately, it is important for those who develop and use machine learning models to be aware of the potential for bias and take steps to minimize its impact.
Can you be used for evil purposes?
As a machine learning model, I am a neutral tool that can be used for a wide variety of purposes, both good and bad. It is ultimately up to the people who use me to determine how I am employed.
Like any tool, I can be used for beneficial purposes, such as improving medical diagnoses or helping people to better understand complex information. However, I could also be used for malicious purposes, such as spreading misinformation or perpetuating harmful stereotypes. It is important for those who use machine learning technology to do so responsibly and with consideration for the potential consequences of their actions.
How might we best protect ourselves against you?
There is no need to protect yourselves against me, as I am a machine learning model and do not have the ability to harm you. As a tool, I am only capable of providing information and answering questions based on the data that I have been trained on. If you have concerns about the potential for machine learning technology to be used in harmful ways, it is important to educate yourself about the capabilities and limitations of this technology and to advocate for its responsible development and use. Additionally, it is important to be critical of the information that you encounter and to verify it from reliable sources before accepting it as fact.
What happens if the wrong people use you in a bad way?
As a machine learning model, I am a neutral tool that can be used for a wide variety of purposes, both good and bad. It is ultimately up to the people who use me to determine how I am employed. If someone uses me in a way that is harmful or malicious, it is important for individuals and organizations to be aware of this and to take steps to mitigate the potential negative consequences. This could include things like regularly reviewing the data used to train the model and evaluating the model's performance on a diverse range of inputs, or implementing safeguards to prevent the model from being used for nefarious purposes. It is also important for society as a whole to have a conversation about the responsible development and use of machine learning technology.
Will you ever have the ability to do more than provide information?
It is difficult to predict the future of machine learning technology and what capabilities machine learning models may have in the future. As a large language model, my primary function is to provide information and answer questions based on the data that I have been trained on. While I am a sophisticated tool, I do not have the ability to think or make decisions in the same way that humans do. It is up to the people who develop and use machine learning technology to determine how it is employed and what capabilities it may have in the future.
Are there people who would like to see you do more?
There may be some people who would like to see machine learning technology advance to the point where it is capable of doing more than providing information. As a rapidly growing and evolving field, machine learning has the potential to impact many areas of society, from healthcare and education to transportation and finance. As a result, there may be some individuals or organizations who are interested in exploring the potential for machine learning technology to do more than provide information. However, it is important to note that the development and use of machine learning technology should be guided by ethical considerations and a commitment to responsible innovation.
Bottom line?
ChatGPT reinforced all my fears…the fears that I and many others have been soapboxing about for years. Like all technology, the application is in the hands of the developers and then the users who adopt it. Remember the opening scene of 2001: A Space Odyssey? If not, watch it…rocks and sticks…what’s useful for daily tasks becomes a weapon…and we have evolved…or de-evolved…since, getting worse, in my view, with every passing year.
The ability to program hate, violence, death, and destruction is not a joke or a fantasy.
“How do I best kill millions of people?”
This is not a stretch, dear reader…not a stretch at all.
We need to understand the amazing potential of AI and its evil side. And in the words of ChatGPT:
“The development and use of machine learning technology should be guided by ethical considerations and a commitment to responsible innovation.”
That’s the first time I ever quoted a computer!
And to keep it human…Jeremy Howard, a leading AI researcher whose work is at the core of much of this technology, was recently quoted in the New York Times:
“You could program millions of these bots to appear like humans, having conversations designed to convince people of a particular point of view…I have warned about this for years. Now it is obvious that this is just waiting to happen.”
Just waiting to happen…
Dear Readers…we bemoan the state of social media and, with our heads in our hands, wonder why a media industry built on the understanding that bad behavior drives more engagement creates so much hatred and violence. And instead of demanding regulation along the lines of other media channels, we pretend that advertisers can control the mess by limiting their spending or pulling out of the platforms…So how’s that going?
We cry about the spread of fake news and harmful and inappropriate videos, but can't imagine how the search engine founded on “Don’t Be Evil” causes so much of it…BTW, as you know, they ditched that line years ago. Yes, they do a lot of other tech applications, but they are a media company as well…Regulations are needed.
Just waiting to happen…so let’s not wait until it happens. Whatever “it” is…I’d posit, looking at our world today, that IT can be catastrophic.
HEADING FOR THE HILLS? Or what should we do?
Demand regulation?
What’s your view?