The First Defense: Indian Companies Against AI-led Cybercrimes

We're in a fascinating era: AI's potential seems limitless, but the risks, particularly in cybersecurity, are multiplying just as quickly. Just as legitimate businesses are exploring ways to leverage generative AI to enhance productivity, so are malicious actors. Over the past year, especially since the introduction of OpenAI's ChatGPT, artificial intelligence has been a global buzzword.

In less than three seconds, cybercriminals can use generative AI to replicate someone's voice, deceiving loved ones into believing that person is in distress or persuading bank staff to transfer funds out of a victim's account. Last year, an AI-generated deepfake video featuring a popular Indian actor went viral, sparking widespread concern about the ethical implications of emerging technologies such as AI and open-source GenAI models, and about the regulations governing their use. Following that incident, several other celebrities became victims of deepfake manipulation.

Moreover, phishing, a form of social engineering dating back to the internet's early days, is experiencing a resurgence. The Anti-Phishing Working Group (APWG) reports that 2023 was the worst year for phishing on record, with generative AI playing a significant role in that increase, largely because these tools are now so easy to access.

How are scammers utilizing generative AI to pull off their schemes?

Case Scenario: Deepfake

A multinational company's Hong Kong branch fell victim to a sophisticated deepfake scam, resulting in a loss of HK$200 million (US$25.6 million). Scammers used deepfake technology to create a convincing video conference call involving the company's CFO and other employees, leading to an unsuspecting employee making multiple transfers to various bank accounts.


Thanks to GenAI programs, generating realistic fabricated content, spanning audio, photos, and videos, has become remarkably simple. Fraudsters exploit deepfakes to evade biometric verification and authentication methods. These videos can be either pre-recorded or generated in real time using a GPU and a fake webcam, typically by superimposing one person's face onto another's.

Case Scenario: ChatGPT Phishing

[Image: example of a phishing email drafted with ChatGPT. Source: LogicLoop]

As the example above shows, fraudsters are using ChatGPT to generate realistic-sounding phishing emails that trick employees into downloading malware. These messages can also lure individuals into handing over personal information, leading to valuable data leaks and subsequent identity theft. Moreover, custom prompts (via ChatGPT and similar platforms) are being used to generate phone scripts for impersonating customer service representatives and tricking individuals into revealing sensitive information.
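
For defenders, the same machine learning toolbox can be turned back on these messages. Below is a minimal sketch, not any vendor's product, of training a simple classifier to flag suspicious emails before they reach an inbox; the dataset file and its columns are assumptions for illustration only.

```python
# Minimal sketch of a phishing-email classifier, for illustration only.
# The file name and columns ("text", "label") are hypothetical assumptions.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

emails = pd.read_csv("emails.csv")  # columns: text, label (1 = phishing, 0 = legitimate)
X_train, X_test, y_train, y_test = train_test_split(
    emails["text"], emails["label"], test_size=0.2, random_state=42
)

# TF-IDF over word n-grams plus a linear classifier: a deliberately simple baseline.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Score an incoming message before it reaches an employee's inbox.
suspect = ["Your account will be suspended today. Verify your password at the link below."]
print("phishing probability:", model.predict_proba(suspect)[0][1])
```

A baseline like this catches only the crudest attempts; production email security stacks layer on sender reputation, link analysis, and large language models tuned to spot AI-generated text.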

Case Scenario: Document Forgeries

Recently, Delhi Police caught three men who allegedly cheated a vehicle loan company by forging documents and bypassing the video verification calls with the help of artificial intelligence (AI) software. The accused allegedly obtained loans for 35-40 cars and two-wheelers and sold them to their clients. The loans were procured using forged IDs, photographs, and impersonation during virtual verification over video calls.


Traditionally, most fake documents used in such scams have been physical counterfeits (fabricated physical documents); in 2023, Onfido observed that physical counterfeits made up 73.2% of document fraud. However, a shift is under way: digital forgeries now account for a growing share of document fraud, rising to 34.8%. The growth of digital forgeries can be attributed to platforms like OnlyFakes, and fraudsters have realized that this method is quicker, cheaper, and more scalable.

Countering Cybercrime with AI-led Startups

Arya.ai: Founded a decade ago by Deekshith Marla and Vinay Kumar Sankarapu, Arya AI offers a Deepfake Detection API that employs advanced AI to combat identity fraud, providing a robust defence against fraud and misinformation. For document fraud prevention, Arya AI offers a specialized Document Tampering Detection API, which uses advanced AI to detect anomalies or signs of forgery in documents such as IDs and passports.
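
This article does not cover Arya AI's actual API contract, so the endpoint, header, and response fields in the sketch below are purely illustrative assumptions; it simply shows what wiring a deepfake-detection API into an onboarding flow typically looks like.

```python
# Hypothetical call to a deepfake-detection REST API.
# The URL, header, and JSON field names below are assumptions for illustration;
# consult the vendor's actual documentation before integrating.
import requests

API_URL = "https://api.example-vendor.com/v1/deepfake-detection"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                           # issued by the vendor

with open("selfie_frame.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
        timeout=30,
    )

result = response.json()
# Many such services return a confidence score; the field name here is assumed.
if result.get("deepfake_score", 0.0) > 0.8:
    print("High risk: route to manual review before onboarding.")
else:
    print("Low risk: continue automated verification.")
```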

Kroop AI: Established in early 2021, Kroop AI is one of the pioneering GenAI startups in India dedicated to combating rampant deepfake threats. It offers an ethical synthetic data solution platform built on advanced audio-visual deep learning. Jyoti Joshi, the founder and CEO of Ahmedabad-based Kroop AI, recognized the growing concern about deepfake videos among content creators and began developing a deepfake detection solution that lets creators verify the authenticity of their content and identify any manipulation in the audio or video signal.

HyperVerge: HyperVerge, with its advanced proprietary deepfake detection models, identifies fraudulent manipulations in both uploaded and live-captured data. The company also claims to offer the fastest and most secure document verification, completed in under 20 seconds, with comprehensive API documentation and support for standard integration protocols that make it compatible with a wide range of systems and platforms. Beyond these successes, HyperVerge is LinkedIn's verification partner in India, leveraging DigiLocker to verify user identities such as Aadhaar cards. From its beginnings at IIT Madras, HyperVerge has become a key player, transforming industries with innovative AI solutions.

Notable mention:

Pindrop: Although not an India-based company, Pindrop's journey started when founder Vijay Balasubramaniyan ran into authentication issues while shopping online in India: his bank flagged the transaction as suspicious but couldn't verify his identity over the phone. Determined to improve phone authentication, Vijay founded Pindrop. Pindrop Pulse swiftly detects deepfake speech using AI and deep learning models, strengthening the security of audio transactions. Pulse holds clear value for any organization seeking to curb phone-based fraud or scams, from call centers to political campaigns.
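
As a rough illustration of how audio deepfake detection works in general, not Pindrop's proprietary approach, a defender can convert call audio into spectral features and train a classifier on known real and cloned voices. The file names below are placeholders, and real systems use far larger datasets and deep models.

```python
# Minimal sketch of a synthetic-speech detector, for illustration only.
# Shows the general pattern: spectral features plus a classifier trained on
# real vs. cloned voice samples. Not any vendor's actual method.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path: str) -> np.ndarray:
    """Load a clip and summarize it as mean MFCC coefficients."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

# Hypothetical file lists; in practice these would be thousands of labelled clips.
real_clips = ["real_001.wav", "real_002.wav"]
fake_clips = ["cloned_001.wav", "cloned_002.wav"]

X = np.array([mfcc_features(p) for p in real_clips + fake_clips])
y = np.array([0] * len(real_clips) + [1] * len(fake_clips))  # 1 = synthetic

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

# Score a new call recording before authorizing a transaction.
score = clf.predict_proba([mfcc_features("incoming_call.wav")])[0][1]
print(f"synthetic-speech probability: {score:.2f}")
```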

AI as a Weapon Against AI Fraud

As the use of generative AI tools expands, companies and individuals are likely to face more frequent and better-orchestrated cyberattacks in the near future. The most effective weapon against such threats is the technology itself: artificial intelligence. Businesses that integrate AI into their operations will be better equipped to prevent and combat these threats. With proper training, AI algorithms can discern subtle distinctions between authentic and synthetic images or videos that are often undetectable by humans. Machine learning, a subset of AI, plays a crucial role in detecting anomalies in digital content: by training on extensive datasets containing both genuine and fabricated media, machine learning models can accurately differentiate between the two.
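
As a concrete, simplified illustration of that idea, the sketch below fine-tunes an off-the-shelf image model on a folder of genuine and fabricated face crops. The dataset layout and hyperparameters are assumptions for demonstration; real deepfake detectors are considerably more elaborate.

```python
# Minimal sketch of training a real-vs-fake image classifier, for illustration.
# Assumes a folder layout like data/real and data/fake holding face crops.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder turns the two subfolders into class labels automatically.
dataset = datasets.ImageFolder("data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained backbone and replace the head with a 2-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs only, for illustration
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```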


For more leadership & tech-related updates & content, stay tuned!

Subscribe to PURPLE GAZETTE

#technology #artificialinteligence #indianfounders #aifraud #aiscam #genaiscam #scam #cybercrime #cybersecurity #phishing #documentforgery #voicephishing #techleaders #startup #technologyleadership #genai #generativeAI #ML #phisingattack
