Can ChatGPT be a con artist this Valentine's Day? Scammers say "YES."
Roses are red, violets are blue, and we all know that not everything – or everyone – online is true.
Finding the right partner is always an alluring and exciting pursuit, but these days it can be an expensive one as well!
Knock, knock! Let's talk before we confirm a relationship status, or even think about starting one!
Scammers are using the latest technology, such as ChatGPT, Google's text-to-speech converters, and deepfakes, to exploit people's emotions more than ever before.
According to the Federal Trade Commission, Americans alone lost $1.3 billion to romance scams, and reports of romance scams from people ages 18 to 29 increased more than tenfold between 2017 and 2021.
People aged 70 and older reported the highest individual median losses at $9,000, compared with $750 for ages 18 to 29.
Could you distinguish between a person who is merely using a decades-old photo and a heartthrob scammer attempting to swindle every penny out of your bank account? Online dating scams and romance fraud are serious issues. Romance scammers aim to trick their victims into giving them money by disguising themselves and appealing to their emotions. Make sure that love, not fraud, is in the air by staying informed. Most importantly, if you think your online relationship might be developing into something more sinister, talk to your friends and family. Don't feel guilty! With the aid of a reliable outsider, you'll probably be able to tell whether the romance is genuine.
ChatGPT:
Everyone's favorite chatbot, writer's block breaker, and maker of absurd short stories, ChatGPT, is becoming increasingly well-known. In fact, the "masterpieces" of AI-generated entertainment (by AI standards) are awe-inspiring to engineers all around the world. Although the technology still has a few glitches, ChatGPT can come close to matching the quality of real, expert authors.
Users of underground forums have started sharing malware coded with OpenAI's viral sensation, and dating scammers are planning to create convincing fake personas with the tool. Cyber prognosticators predict more malicious uses of ChatGPT are to come.
Tinder users have already started using ChatGPT to craft stories and metaphors to entice other users online. Attempts to use AI in dating apps are not new: programmers have repeatedly tried to turn meeting people on matchmaking apps into a numbers game by using artificial intelligence to message hundreds of users at once. Using ChatGPT is just the latest of these attempts.
But using ChatGPT this way takes a little more manual effort. When users match with someone, they ask ChatGPT for an opening message based on that person's interests, then copy and paste the output and send it to their match. Judging by the videos these people have shared on TikTok, ChatGPT works.
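The copy-paste workflow described above can be sketched in a few lines of Python. Note that this is purely illustrative: the helper function name and the prompt wording are my own assumptions, and no real API is called here; in practice the user pastes a prompt like this into the ChatGPT web interface.

```python
# Illustrative sketch only: build the kind of prompt that dating-app users
# reportedly paste into ChatGPT to generate an opening message.
# build_opener_prompt is a hypothetical helper, not a real library call.

def build_opener_prompt(name: str, interests: list[str]) -> str:
    """Assemble a ChatGPT-style request from a match's name and listed interests."""
    interest_list = ", ".join(interests)
    return (
        f"Write a short, friendly opening message for a dating-app match "
        f"named {name} whose profile mentions: {interest_list}. "
        f"Keep it under 40 words and reference one interest specifically."
    )

# Example: the user copies this prompt into ChatGPT, then copies the
# generated reply back into the dating app.
prompt = build_opener_prompt("Alex", ["hiking", "jazz", "street food"])
print(prompt)
```

The same prompt-building step is what lets scammers scale: swapping in a different name and interest list per victim takes seconds.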
But, as with most good things, bad actors are exploiting the technology for their own gain. Cybercriminals are exploring different ways to use the AI chatbot to deceive people into handing over their personal information and cash. Here are some of the most recent questionable uses of AI text generators, along with safety tips for you and your devices.
False profiles for dating. AI is starting to be used by "catfish," people who create false online personas to lure others into relationships. Much like malware creators using AI to scale up production, romance scammers can now use it to reduce their workload and maintain multiple dating profiles simultaneously. ChatGPT can also change the tone of its output to give scammers ideas: a con artist might instruct ChatGPT to ramp up the charm, for instance, or to compose a love letter. The result can be sincere-sounding declarations of love that persuade someone to send money or give up personally identifiable information (PII).
By being extremely vigilant and carefully examining any texts, emails, or direct messages you receive from strangers, you may avoid being duped by AI-generated text. There are a few telltale signs that a message was authored by AI. For instance, AI frequently uses brief sentences and repeats the same terms. AI may also produce content that says a lot without actually saying anything; it cannot form opinions, so its messages can come across as hollow. And if the person you're interacting with declines to ever meet in person or video chat, consider breaking off the contact.
Interesting story:
Dating apps like Tinder, Bumble, and OkCupid already struggle with fake profiles. To address the issue, Filteroff introduced live video interaction instead of a chat option. Still, deepfakes combined with AI chatbots and Google's text-to-speech integrations can make it tougher than ever to fight these crimes in real time, as threat actors keep finding new ways to exploit the technology.
(If anyone is ready to build this use case, I'm in.)
LOOK FOR THE RED FLAGS
There are common red flags of a scammer you should always be aware of, like if they...
Finally, protections against criminal misuse of ChatGPT may ultimately, and "unfortunately," need to be enforced through regulation. OpenAI has put some safeguards in place to stop ChatGPT from fulfilling blatant requests to construct spyware, although hackers and journalists have found ways around them. Companies like OpenAI may eventually be legally required to train their AI to recognize such abuse.
Be Vigilant – share information – spread awareness.