ChatGPT

Since OpenAI released its blockbuster bot ChatGPT in November, users have casually experimented with the tool, with even Insider reporters trying to simulate news stories or message potential dates.

To older millennials who grew up with IRC chat rooms — a text-based instant messaging system — the personal tone of conversations with the bot can evoke the experience of chatting online. But ChatGPT, the latest in technology known as "large language model tools," doesn't speak with sentience and doesn't "think" the way people do.

That means that even though ChatGPT can explain quantum physics or write a poem on command, a full AI takeover isn't exactly imminent, according to experts.

"There's a saying that an infinite number of monkeys will eventually give you Shakespeare," said Matthew Sag, a law professor at Emory University who studies copyright implications for training and using large language models like ChatGPT.

"There's a large number of monkeys here, giving you things that are impressive — but there is intrinsically a difference between the way that humans produce language, and the way that large language models do it," he said.

Chatbots like ChatGPT are powered by large amounts of data and computing techniques that make predictions about how to string words together in a meaningful way. They not only tap into a vast vocabulary and store of information, but also interpret words in context. This helps them mimic speech patterns while dispensing encyclopedic knowledge.
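To make the prediction idea concrete, here is a minimal sketch of next-word generation using the small, publicly available GPT-2 model via the open-source Hugging Face transformers library; GPT-2 is an illustrative stand-in here, not the model behind ChatGPT, and the prompt is purely hypothetical.

```python
# Minimal sketch: a language model extends a prompt one predicted
# token at a time, picking words that are statistically likely
# given the context it has seen so far.
from transformers import pipeline

# Load the small open-source GPT-2 model (stand-in for far larger models).
generator = pipeline("text-generation", model="gpt2")

prompt = "Quantum physics describes the behavior of"
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

# Prints the prompt plus the model's predicted continuation.
print(result[0]["generated_text"])
```

The output reads like fluent prose because each word is chosen to fit the preceding context, not because the model understands the subject, which is the distinction the experts quoted above are drawing.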

Other tech companies like Google and Meta have developed their own large language model tools, which use programs that take in human prompts and devise sophisticated responses. OpenAI, in a revolutionary move, also created a user interface that lets the general public experiment with it directly.

Some recent efforts to use chat bots for real-world services have proved troubling — with odd results. The mental health company Koko came under fire this month after its founder wrote about how the company used GPT-3 in an experiment to reply to users.

Koko cofounder Rob Morris hastened to clarify on Twitter that users weren't speaking directly to a chat bot, but that AI was used to "help craft" responses.

The founder of the controversial DoNotPay service, which claims its GPT-3-driven chat bot helps users resolve customer service disputes, also said an AI "lawyer" would advise defendants in actual courtroom traffic cases in real time, though he later walked that back over concerns about its risks.
