AI Raises Questions About Itself
Jim Stalker
Writer exploring the space between who we're supposed to be and who we really are | Former Corporate Sales | Fitness Instructor | MA Communication
I am a newbie to ChatGPT, but signed up for the 4.0 version. I submitted the following post without any guidance or request. The response follows.
Then I respond to the response with additional notes and questions. That response also follows.
I think the exchange speaks for itself.
My Post
Whoa! Has the content creation and publication game ever become confusing?
AI being added to search and formatting the results into conversational answers in alternative formats is exciting.
Aren't these answers built on the backs of millions of unpaid/uncredited content creators/posters providing virtually endless content to drive the answer (or at least until 2021, with ChatGPT 4.0)?
Won't this elevate protected, professionally created content housed behind a firewall?
Anyone who has experimented with several AI engines has seen some challenges with accuracy and formatting. No surprise.
I suspect the scientists will work out the configuration, but what about the precision?
ChatGPT Reply
Your questions touch on a range of important issues related to the role of AI in content creation and publication. Let's address each one in turn.
Firstly, it's true that AI models like ChatGPT are trained on vast amounts of data from the internet. They learn patterns and structures from these data to generate responses. This raises ethical questions about attribution and compensation. However, it's important to note that these models don't know specifics about which documents were in their training set or who wrote them. They don't access or retrieve documents or specific sources of data; they generate responses based on patterns they've learned.
As for the potential elevation of protected content, there could indeed be a shift in the value proposition. If AI continues to improve in its ability to generate content, it may become increasingly important for creators to offer something unique or specialized. Professional content housed behind paywalls may become more valuable if it offers insights, depth, or quality that AI can't replicate.
Accuracy and precision are indeed ongoing challenges with AI. While AI can generate plausible-sounding responses, it can also make mistakes, including factual inaccuracies, misunderstandings of context, or inappropriate responses. This is a major focus of ongoing research.
Continual learning — the process of updating AI models based on new data — is a promising but challenging area of research. As of my last update in 2021, models like ChatGPT do not learn from the conversations they have after being trained. They generate responses based on their training and do not update their knowledge based on new information. If future models are able to continually learn from new data, this could help improve their accuracy and relevance.
However, this also raises additional questions about privacy and security, as it would mean that the AI is effectively remembering information from its interactions. It's a delicate balance to maintain.
In summary, the use of AI in content creation and publication presents exciting opportunities, but also raises important questions about attribution, value, accuracy, and privacy. It's a rapidly evolving field, and these are issues that researchers, developers, and policymakers will need to continue to address.
My reply to the above with comments (excerpted comments from the ChatGPT answer above in italics; my comments in bold).
This raises ethical questions about attribution and compensation.
I think this is the point and the key question I am asking.
However, it's important to note that these models don't know specifics about which documents were in their training set or who wrote them.
Doesn't this further make my point? One will wonder how accurate/truthful the answer will be. And one must anticipate paid search morphing into ChatGPT-style answers that would be quite cunning, lacking transparency about the fact that paid content absorbed during training helped shape the answer.
Professional content housed behind paywalls may become more valuable if it offers insights, depth, or quality that AI can't replicate.
But how will they know? Won't those paywalled creators use a version of ChatGPT in the creation of their own content? I think this is the worst-case Orwellian dystopia many fear. While I don't think a dystopia is currently in focus, I think identifying who is a reliable source, and understanding what makes them reliable, is important. Further, as we saw during the pandemic, at crisis moments when accuracy is a potentially lifesaving factor, calling out misinformation becomes even more important. How will people know what to believe, when many who use this technology will not have critical-thinking training and skills? Delivering information in a "convincing" or "persuasive" tone is already pretty solid.
While AI can generate plausible-sounding responses, it can also make mistakes, including factual inaccuracies, misunderstandings of context, or inappropriate responses. This is a major focus of ongoing research.
You make my point. How would you characterize this type of research?
questions about privacy and security, as it would mean that the AI is effectively remembering information from its interactions. It's a delicate balance to maintain.
What are the specific questions about privacy and security? How would you characterize the balance?
also raises important questions about attribution, value, accuracy, and privacy.
I think the glow of the shiny new object has early users a bit bedazzled, but I think that without attribution and accuracy there will be a whole lot of stealing going on.
ChatGPT Reply
Your comments bring up very valid concerns, and I'll do my best to address each one in turn: