ChatGPT Deviated and Started Speaking in People's Voices Without Their Consent

OpenAI’s latest AI mishap sounds like a Black Mirror episode.

Last week, OpenAI released the GPT-4o "system card," a comprehensive report outlining the "key areas of risk" associated with its latest large language model and the strategies used to mitigate them. In one particularly alarming finding, OpenAI discovered that the model's Advanced Voice Mode, which lets users speak with ChatGPT, unexpectedly mimicked users' voices without their consent, according to Ars Technica.

“Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode,” OpenAI noted in its documentation. “During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice.”

A clip appended to the report captures the phenomenon: mid-conversation, ChatGPT abruptly exclaims "No!" for no apparent reason and then continues speaking in an eerily accurate imitation of the user's voice. The unsettling breach of consent feels like it was pulled straight from a sci-fi horror film.

“OpenAI just leaked the plot of Black Mirror’s next season,” tweeted BuzzFeed data scientist Max Woolf.

Voice Cloning Concerns

Elsewhere in the system card, OpenAI describes the model's ability to create "audio with a human-sounding synthetic voice." That capability could "facilitate harms such as an increase in fraud due to impersonation and may be harnessed to spread false information," the company warned.

GPT-4o can imitate not only voices but also "nonverbal vocalizations" like sound effects and music. Because the model processes everything in the audio stream, noise in a user's input can cause ChatGPT to mistakenly treat the user's own voice as part of the prompt and inadvertently clone it, much like a prompt injection attack does with text.

Fortunately, OpenAI found that the risk of unintentional voice replication remains "minimal." The company has also restricted unintended voice generation by limiting the model's output to a set of preset voices created in collaboration with voice actors.

“My reading of the system card is that it’s not going to be possible to trick it into using an unapproved voice because they have robust brute force protection in place,” AI researcher Simon Willison told Ars.

“Imagine how much fun we could have with the unfiltered model,” he added. “I’m annoyed that it’s restricted from singing—I was looking forward to getting it to sing silly songs to my dog.”

Source: https://futurism.com/the-byte/chatgpt-clone-voice-without-permission
