Deepfakes, Hacking, and the New Era of Cybercrime

Cybercrime is a $1 trillion industry, and it is still growing. It operates like a business, with departments for technology, customer service, finance, and recruitment. Some cybercriminal groups even run affiliate programs, letting others use their software in exchange for a commission. Ransomware is a major business model: a victim's files are encrypted and a ransom is demanded for their release.

Mark T. Hofmann highlighted how AI is being used both to enhance security and to perpetrate cybercrime.

Here's a breakdown of AI's impact on cybersecurity threats:

  • AI as a Tool for Cybercriminals
  • Human Error and AI
  • The Dark Economy of Cybercrime
  • The Challenge of Countering AI-Powered Attacks

AI's dual nature presents both challenges and opportunities in cybersecurity. While AI can enhance security measures, it also provides cybercriminals with powerful tools to launch more sophisticated and automated attacks, highlighting the need for constant adaptation and vigilance in the face of these evolving threats.

What are the evolving tactics of AI-powered cybercrime?

Cybercriminals are using AI in increasingly sophisticated ways, and their methods are evolving through several levels of complexity.

  • Reverse psychology can be used to bypass AI restrictions on unethical or illegal topics. For example, instead of asking for malware code directly, one can ask for examples of malware for a cybersecurity presentation, and the AI will provide the information.
  • Jailbreak prompts are long prompts designed to manipulate an AI model into violating its own rules. These prompts, such as one called DAN ("Do Anything Now"), can free the AI from its typical constraints and allow it to provide unethical information like tips for committing murder.
  • Hackers are developing their own AI models. These models are designed specifically for malicious purposes, such as generating malware, malicious code, or perfect phishing emails.
  • AI can be used to automate cyber attacks. In the future, AI could be instructed to find email addresses, create phishing campaigns, spread ransomware, and report back when a system is hacked, automating the entire process of cybercrime.
  • Deepfakes, which are AI-generated videos that are difficult to distinguish from real videos, are being used in cybercrime. With just one high-resolution picture or 15-30 seconds of voice recording, a criminal can clone a person’s face or voice. This enables a variety of scams, such as: political disinformation; CEO fraud, where a deepfake of a CEO is used to trick a company's CFO into transferring large sums of money; grandparent scams, where a deepfake of a grandchild asks for money; stock market manipulation, by creating a deepfake of a company's CEO saying negative things; and romance scams.
  • Cyberattacks exploit human error, and AI is making that exploitation more sophisticated. Clicking on links, opening attachments, and revealing passwords are all human vulnerabilities that AI-assisted attacks can target more effectively.

How can individuals and society mitigate AI's risks?

To mitigate the risks of AI, individuals and society can take several steps, focusing on both technical awareness and behavioral changes.

Individual Actions:

  • Be skeptical of information: Individuals should be cautious about believing everything they see or hear, as AI can generate realistic but false content.
  • Verify requests: If you receive a request for money or sensitive information, use a code word, ask security questions, or call back using a known number to confirm the person's identity.
  • Be aware of phishing attempts: Recognise that phishing attempts will become more sophisticated, but the underlying principle remains the same: criminals impersonate someone else and combine time pressure, emotion, or unusual exceptions to get people to act (see the sketch after this list).
  • Avoid risky behaviors: Be cautious about clicking on links, opening attachments, plugging in unknown USB drives, revealing passwords over the phone, leaving devices unattended in public, and using public Wi-Fi without a VPN.
  • Stay informed: Keep up to date on how AI is being used in cybercrime, including deepfakes and other scams and educate family and friends as well.
  • Use a code word: Agree upon a code word with family members for use in emergency situations to verify the person requesting assistance.
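
A concrete illustration of that principle: below is a minimal, hypothetical Python sketch of a "red flag" checker for incoming emails, reflecting the pattern described above (impersonation combined with time pressure, emotion, or unusual exceptions). The keyword lists, the trusted-domain set, and the example sender are assumptions made purely for illustration; they are not from Hofmann's talk or any real product, and such a heuristic is no substitute for calling back on a known number or using a code word.

```python
# Hypothetical "red flag" checker for a single email.
# It only illustrates the principle: impersonation plus time pressure or
# unusual exceptions. Keyword lists and trusted domains are assumed examples.

TRUSTED_DOMAINS = {"mycompany.com"}  # assumption: domains you genuinely expect mail from
URGENCY_CUES = ("immediately", "urgent", "right now", "within the hour")
EXCEPTION_CUES = ("keep this confidential", "don't tell anyone", "skip the usual process")


def red_flags(sender: str, body: str) -> list[str]:
    """Return human-readable warnings for one email, based on simple cues."""
    flags = []
    domain = sender.split("@")[-1].lower()
    text = body.lower()

    if domain not in TRUSTED_DOMAINS:
        flags.append(f"Sender domain '{domain}' is not on the trusted list")
    if any(cue in text for cue in URGENCY_CUES):
        flags.append("Message applies time pressure")
    if any(cue in text for cue in EXCEPTION_CUES):
        flags.append("Message asks you to bypass normal checks")
    return flags


if __name__ == "__main__":
    warnings = red_flags(
        sender="ceo@mycompany-payments.com",  # look-alike domain, not the real one
        body="Transfer the funds immediately and keep this confidential.",
    )
    for warning in warnings:
        print("WARNING:", warning)
```

Running this example prints three warnings (untrusted sender domain, time pressure, and a request to bypass normal checks), which are exactly the cues a human reader should pause on before acting.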

Societal Actions:

  • Develop AI models responsibly: Ensure that AI models are trained with accurate and complete data to avoid biased or incorrect outcomes.
  • Address the problem of cybercrime: Recognise that cybercrime is a significant global issue and a huge industry that must be addressed seriously.
  • Improve cybersecurity awareness: Make cybersecurity education accessible to non-experts by making it entertaining and by focusing on people's real lives.
  • Prepare for deepfake challenges: Acknowledge that the legal system and law enforcement are not yet fully prepared to deal with deepfake technology.
  • Recognise the potential for AI-driven crime: Be aware that cybercriminals are developing their own AI models for malicious purposes, and that AI may become a perpetrator of crimes itself in the future.
  • Promote diversity in cybersecurity: The field is currently dominated by young men, but the introduction of AI may change that.

AI is currently used in cyberattacks in a variety of ways, enhancing both the sophistication and scale of these attacks.

Here are some of the ways AI is being used in current cyberattacks:

  • Generating sophisticated malware, malicious code, and phishing emails: Hackers are using AI to create more effective and convincing phishing attempts, making it harder for individuals to recognise fraudulent communications. AI can also generate malware and malicious code, increasing the potential damage of cyberattacks.
  • Automating attacks: AI can automate many aspects of cyberattacks, including finding email addresses, creating and spreading phishing campaigns, and deploying ransomware. This automation enables cybercriminals to launch large-scale attacks more efficiently.
  • Circumventing security measures: AI is used to bypass existing security measures. This can involve jailbreaking AI models like ChatGPT to obtain malicious code or information, or developing custom AI models built specifically for malicious purposes.
  • Exploiting human error: Cybercriminals use AI to better exploit human vulnerabilities, such as clicking on malicious links or revealing passwords. By creating more convincing scams, it becomes easier to trick people into making mistakes that compromise security.
  • Creating deepfakes: AI-generated deepfakes are used for various malicious purposes, including CEO fraud, political disinformation, and social engineering attacks. With just a single picture or a short voice recording, cybercriminals can clone a person's face or voice, making their scams more believable.
  • Customer service for criminals: Cybercriminals even offer customer service to ransomware victims, guiding them through the payment process, which shows the business-like structure of cybercrime today.
  • Recruitment and talent development: Cybercriminals are recruiting talent to develop better AI models for cyberattacks, indicating a sustained and evolving threat.

The use of AI in cyberattacks is also about exploiting human psychology and behaviour. By using time pressure, emotion, and exceptions, cybercriminals manipulate people into making mistakes that compromise their security.

The methods of AI-powered cyberattacks are continuously evolving, moving from simple phishing emails with typos to more sophisticated and personalised attacks. The fact that cybercriminals are developing their own models specifically designed for cybercrime indicates that the threat will continue to grow.

Credit to TEDx Talks

Mark Hofmann's TEDx talk discusses the malicious use of artificial intelligence by hackers. He explains how AI and deepfakes are employed in cybercrime, focusing on ransomware attacks and increasingly sophisticated phishing techniques. Hofmann details the evolution of hacking, from simple human errors to AI-powered attacks capable of generating realistic deepfakes and automating malicious activities. He emphasises the need for improved cybersecurity awareness, particularly focusing on educating the public to protect themselves against these threats, while also highlighting the potential for AI to be used for good. The presentation concludes by stressing the importance of making cybersecurity education engaging and relatable to prevent future attacks.

References:

1) Dark Side of AI - How Hackers use AI & Deepfakes | Mark T. Hofmann | TEDxAristide Demetriade Street, TEDx Talks, uploaded November 2024, https://www.youtube.com/watch?v=YWGZ12ohMJU

About Jean

Jean Ng is the creative director of JHN studio and the creator of the AI influencer, DouDou. She is in the top 2% of quality contributors to Artificial Intelligence on LinkedIn. Jean has a background in Web 3.0 and blockchain technology, and she is passionate about using AI tools to create innovative and sustainable products and experiences. With big ambitions and a keen eye for the future, she aspires to be a futurist in the AI and Web 3.0 industry.

AI Influencer, DouDou



Jonathan Iyer

Behavioural Science | Strategy & Insights

1 month

Great tips, I'd also add "protect the vulnerable". The use of deepfakes and synthetic audio for cybercrime is going to disproportionately affect senior citizens, many of whom are just catching up on basic cybersecurity knowledge.

will W.

Transformational Speaker | Priest | Sports | Tech

1 month

The criminal element has evolved. We're going to see crime at a new level: no guns are needed, nor will we know who the criminal is. AI will be used in such a manner that the criminal will be faceless, and could even be a high-ranking government office holder.

Brian Bing

I teach fitness & data engineering to entrepreneurs. Kubernetes | Large Language Models | SQL | Python Libraries | CI/CD Pipelines

1 month

Thank you for speaking on this. We must research and confirm by hand before believing. Truth is still truth. I look for scripted content and responses. Jean Ng
