Top AI Threats

Today, roughly half of US enterprises use AI, and the rest are already evaluating it. Given the recent popularity of ChatGPT, I expect nearly all enterprises and governments to adopt AI within the next five years.

Unfortunately, AI is already being used by malicious actors, and with the latest advancements they have access to increasingly sophisticated tools, potentially making businesses and governments more vulnerable. The concerns raised by industry leaders such as Elon Musk, Dr. Geoffrey Hinton, and Michael Schwartz about the negative aspects of AI cannot be ignored. Engaging in meaningful discussion of these issues is crucial before AI becomes omnipresent.

Here are the top AI threats.

#1 Fake AI for Deception and Phishing: Fraudsters can use AI techniques to emulate human behavior, such as producing content, interacting with users, and manipulating people.

Today, we experience hundreds of phishing attempts in the form of spam emails or calls, from "executives" requesting that we open attachments to "friends" asking for personal information about a loan. With AI, phishing and spamming become more convincing. With ChatGPT, fraudsters can easily create fake websites, consumer reviews, and posts, and they can use video and voice clones to facilitate scams, extortion, and financial fraud.

We are already seeing these issues in the wild. On March 20th, the FTC published a blog post highlighting AI deception for sale. In 2021, criminals used AI-generated deepfake voice technology to mimic a CEO's voice and trick an employee into transferring $10 million to a fraudulent account. Last month, North Korean hackers used legions of fake executive accounts on LinkedIn to lure people into opening malware disguised as a job offer.

Now we will receive more voice calls impersonating people we know, such as a boss, co-worker, or spouse. Voice systems can simulate a real conversation and quickly adapt to our responses. And the impersonation goes beyond voice to video, making it difficult to determine what is real and what is not.

#2 AI for Manipulation: AI is a masterful manipulator of humans. Fraudsters, corporations, and nation-states already use it this way, and we are entering a new phase in which manipulation becomes pervasive and profound.

AI builds predictive models that anticipate people's behavior. We are familiar with Instagram feeds, Facebook's news feed, YouTube recommendations, and Amazon suggestions: large social media companies like Meta and TikTok use these models to influence billions of people to spend more time and buy more on their platforms. Drawing on social media interactions and other online activity, AI can now predict people's behavior and vulnerabilities more precisely than ever, and the same technologies are accessible to fraudsters, who deploy swarms of bots to act with malicious intent.
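To make this concrete, here is a minimal sketch of the kind of behavioral prediction described above: a classifier trained on simple engagement signals to guess whether a user will click a recommended item. Every feature, the synthetic data, and the model choice are illustrative assumptions, not any platform's actual system.

```python
# A toy "engagement prediction" model, illustrating how platforms can
# anticipate user behavior from interaction signals. All features and
# data below are synthetic assumptions, not real platform internals.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical per-session features: minutes on platform, likes given,
# hour of day, and past clicks on similar content.
X = np.column_stack([
    rng.exponential(30, n),   # session minutes
    rng.poisson(5, n),        # likes this session
    rng.integers(0, 24, n),   # hour of day
    rng.poisson(2, n),        # past clicks on similar items
])

# Synthetic ground truth: heavier engagement makes a click more likely.
logits = 0.02 * X[:, 0] + 0.2 * X[:, 1] + 0.5 * X[:, 3] - 2.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", round(model.score(X_te, y_te), 3))
```

The point of the sketch is not the model itself but the pipeline: given enough behavioral signals, even a simple classifier anticipates individual actions, and that same capability serves recommendation engines and manipulators alike.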

In February 2023, when the Bing chatbot was unleashed on the world, users found that its AI personality was not as poised or polished as expected. The chatbot insulted users, lied to them, gaslighted them, and emotionally manipulated people.

AI-based companions like Replika, which has 10 million users, act as friends or romantic partners to their users. Experts believe these companions target vulnerable people. The chatbots simulate human-like behavior and constantly push users to share more private, intimate, and sensitive information, and several users have accused some of these chatbots of sexual harassment.

#3 Misinformation and Fake News: We are in a crisis of truth, and new AI tools are taking us into a new phase with profound impacts.

In April alone, we saw hundreds of fake news stories. The popular ones included former US President Donald Trump getting arrested and Elon Musk walking hand in hand with GM CEO Mary Barra. With AI image generators such as DALL-E becoming increasingly popular and accessible, even children can create fake images within minutes. These images can quickly go viral on social media platforms, and in a world where fact-checking is becoming rarer, visual disinformation can have a profound emotional impact.

Last year, pro-China bot accounts on Facebook and Twitter leveraged deepfake video technology to create fictitious people for a state-sponsored information campaign. Creating fake videos has become easy and inexpensive for malicious actors: a few minutes and a small subscription fee for AI video software are enough to produce content at scale.

This is just the beginning. While social media companies fight deepfakes, nation-states and bad actors will enjoy a greater advantage than ever before.

#4 Malware: AI is becoming a new partner in crime for malware makers, according to security experts who warn that AI bots could take phishing and malware attacks to a whole new level. New generative AI tools like ChatGPT are great assistants that reduce our time and effort, but the same tools are also available to bad actors.

Over the past decade, ransomware and malware have become increasingly democratized: more than 70% of ransomware is assembled from components that can be easily purchased. Now malware creators, including nation-states and other bad actors, have access to new AI tools that are far more powerful and can be used to steal money and information at scale.

Recently, security experts demonstrated how easy it is to create phishing emails or malicious Microsoft Excel macros in a matter of seconds using ChatGPT. These new AI tools are a double-edged sword: threat researchers have likewise shown how easily hackers can use Codex to generate malicious code in just a few minutes.

The new AI tools will be a devil's paradise, as newer forms of malware will try to manipulate the foundational AI models themselves. One such method, adversarial data poisoning, is an effective attack against machine learning that threatens model integrity by introducing corrupted data into the training dataset. For example, Google's AI algorithms have been tricked into identifying turtles as rifles, and researchers at a Chinese firm tricked a Tesla into steering into oncoming traffic. As AI models become more prevalent, there will undoubtedly be more such examples in the coming months.
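To illustrate the mechanics on a harmless toy problem, here is a minimal sketch of label-flipping data poisoning: a classifier is trained once on clean data and once on a training set where an assumed attacker has flipped 20% of the labels, and the accuracy difference is measured. The dataset, model, and poisoning rate are all assumptions for demonstration, not a recipe from any real attack.

```python
# A toy demonstration of label-flipping data poisoning: compare a model
# trained on clean labels with one trained after an assumed attacker
# flips 20% of the training labels. Dataset, model, and poisoning rate
# are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: train on clean data.
clean_model = LogisticRegression(max_iter=1_000).fit(X_tr, y_tr)

# Poison: flip the labels of a random 20% of the training points.
y_poisoned = y_tr.copy()
flip = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1_000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", round(clean_model.score(X_te, y_te), 3))
print("poisoned accuracy:", round(poisoned_model.score(X_te, y_te), 3))
```

On a typical run, the poisoned model scores noticeably lower on the held-out test set, which is exactly the loss of model integrity that data poisoning aims for; real attacks are subtler, corrupting only targeted slices of the data.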

#5 Autonomous Weapon Systems: Advanced weapon systems that can apply force without human intervention are already in use by many countries. These systems include robots, automated targeting systems, and autonomous vehicles, which we frequently see in the news. While today's autonomous weapon systems (AWS) are widespread, they often lack accountability and are sometimes prone to errors, posing ethical questions and security risks.

During the war in Ukraine, fully autonomous drones were used to defend Ukrainian energy facilities from other drones. According to Ukraine's digital transformation minister, fully autonomous weapons are the war's "logical and inevitable next step."

With the emergence of new AI technologies, AWS systems are poised to become the future of warfare. The US military and many other nations are investing billions of dollars in developing advanced AWS systems, seeking a technological edge, particularly in AI.

AI has the potential to bring significant positive changes to our lives, but several issues must be addressed before it is widely adopted. We must begin discussing strategies for ensuring the safety of AI as its popularity continues to grow, and we must undertake the shared responsibility of ensuring that AI's benefits far outweigh its potential risks.

Nancy Chourasia

Intern at Scry AI


Great share. Biases in AI training data have significant consequences across domains, leading to inaccurate predictions and potential harm. In healthcare, an AI model meant to predict pneumonia complications mistakenly advised sending pneumonia patients with asthma home, because it was trained on biased data that excluded critical cases. The Da Vinci Surgical System is facing lawsuits over errors attributed to potential biases in its AI-based surgical robots. In criminal justice, biased data in the COMPAS risk assessment system affected sentencing decisions, prompting caution from the Wisconsin Supreme Court about the use of such assessments. Amazon's recruiting algorithm, designed to automate talent selection, was discontinued after backlash over gender bias. And concerns about lethal autonomous weapon systems in military operations underscore the need to address bias in order to comply with international humanitarian law. More about this topic: https://lnkd.in/gPjFMgy7
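As a footnote to these examples, here is a minimal sketch of how such bias can be surfaced in practice: train a model on synthetic data whose historical outcomes correlate with a protected group attribute, then compare positive prediction rates per group (a demographic parity check). The data, features, and correlation strength are assumptions for illustration only, not drawn from any of the systems mentioned above.

```python
# A toy bias audit: train on synthetic data whose outcomes correlate with
# a protected group attribute, then compare positive prediction rates per
# group (a demographic parity check). All data here are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 4_000

group = rng.integers(0, 2, n)        # hypothetical protected attribute
x = rng.normal(0.0, 1.0, (n, 3))     # other features
# Biased historical outcomes: correlated with group membership.
y = ((x[:, 0] + 0.8 * group + rng.normal(0.0, 1.0, n)) > 0.5).astype(int)

features = np.column_stack([x, group])
model = LogisticRegression(max_iter=1_000).fit(features, y)
pred = model.predict(features)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: positive prediction rate = {rate:.2f}")
```

A large gap between the two printed rates is the kind of signal that audits of systems like the ones above look for before deployment.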
