Generative AI, AGI and the Future of Scams in 2025
I remember standing on the bustling streets of Mumbai in August last year, amid the city’s mix of heat and humidity, preparing to address the Global Fintech Fest (GFF) on the topic of ‘AI the next frontier’ as part of the Australian Trade and Investment Commission (Austrade) Fintech Delegation to India.
The invitation came at an exciting time: 2024 was shaping up to be the year of radical technological leaps, particularly around Generative AI and the nascent discussions on AGI (Artificial General Intelligence). Even then, I had begun to notice a disturbing trend rippling beneath all the innovation—a darkness lurking behind chatbots, image generators, and automated decision-making systems.
It was the realization that criminals, fraudsters, and unscrupulous opportunists were watching these tools develop with keen interest, eager to exploit them for their own ends. Now, in the wake of ongoing breakthroughs, I find myself traveling the globe in search of clarity and solutions. In many ways, I feel as though I’m chasing a rapidly moving target: the future of scams in an era dominated by generative technologies and social media amplification.
On stage in Mumbai, I sat shoulder to shoulder with other tech leaders, innovators, and policy makers, sharing insights on how artificial intelligence had evolved so quickly.
The conversation among the delegation was surprisingly frank about the ethical dilemmas we all faced—particularly around data privacy, misinformation, and the rapidly growing sophistication of online scams.
In the audience, one could sense a mixture of anticipation and anxiety. Banks, venture capitalists, fintech startups, and regulators were all aware of the unstoppable momentum behind AI.
The conversation centered on how to harness that momentum for good. Yet even as I spoke optimistically about finance and technology’s potential to bring about massive social and economic change, a small voice inside me kept asking: what about the unintended consequences?
A few months earlier, I found myself in a Teams meeting with cybersecurity experts from New Zealand, California, Dallas, and London who were increasingly troubled by the proliferation of deepfake technologies. Their concerns mirrored those raised in a recent Europol report, highlighting how generative AI tools can instantly produce persuasive emails, cloned voices, and fabricated videos that dupe unsuspecting individuals into sharing private data.
Within certain dark corners of the internet, forums and black-market communities had begun trading not just stolen credit card numbers, but sophisticated AI scam kits. Social media platforms, already struggling to control misinformation, suddenly faced an influx of artificially generated but highly credible content. I spoke with these experts late into the night, listening to their stories of heartbreak and deception, from corporate networks penetrated by cleverly disguised emails to retirees tricked into wiring their life savings to smooth-talking AI “representatives” who sounded every bit as real as a bank manager.
Traveling onward, I spent time in New York, a global financial center that has always fascinated me with its rich history of innovation and commerce. Over the course of 2024, international cities began rolling out advanced AI regulations and conducting public awareness campaigns. I saw billboard advertisements warning citizens about “synthetic identities” and “voice phishing.”
Government agencies issued guidelines explaining how to verify if a message was truly from a known contact or a cleverly disguised imposter. These efforts were supported by global think tanks and academic institutions, such as the Oxford Internet Institute and the Stanford Internet Observatory, which published alarming data about how quickly generative AI tools were learning to mimic human behavior.
Even the big social media companies—Facebook (Meta), Twitter (X), Instagram, TikTok—started experimenting with more rigorous content verification systems, but they were playing catch-up against a tidal wave of generative content that seemed impossible to monitor fully. And as of earlier this week, Meta announced it has given up.
The more I spoke with professionals across continents, the more I realized that our battle against AI-powered scams wasn’t just about technology. It was about human trust—an essential currency that binds societies and economies together. Scammers understand this well.
They prey on trust, leveraging sophisticated generative AI to craft personalized social engineering attacks, producing heart-wrenching narratives or urgent pleas that seem genuine.
In the context of social media, where likes and shares fuel viral content, these scams can spread faster than any human moderator can handle.
Algorithms that prioritize engagement unwittingly push deceptive posts to millions, sometimes even billions, of users, thereby amplifying the scale of the damage. In many ways, social media has become the ultimate Trojan horse, carrying within it the seeds of manipulative content designed to fleece unsuspecting users.
During my travels, I spent some time reviewing various white papers and articles that tackled these issues head on. A particularly sobering read was MIT Technology Review’s extensive feature on deepfake technology, describing how advanced machine-learning models can replicate facial expressions down to the tiniest detail. Another compelling resource came from the World Economic Forum, warning that as AI systems become more autonomous and approach what some might call AGI, we will see an exponential rise in the types and complexities of scams.
The WEF argued that as AI takes on more tasks—everything from writing sophisticated code to simulating emotional responses—scammers can scale up their efforts, run multiple campaigns simultaneously, and customize each one with laser-like precision. Think of it as micro-targeted propaganda, but instead of trying to influence election outcomes, the objective is to siphon money or sensitive data from unsuspecting victims.
The discussions about AGI at the Ten13 meetup in late December were even more unsettling. While true AGI—machines that can perform any intellectual task a human can—remains a topic of intense scientific debate, the line between advanced narrow AI and AGI is blurring.
Researchers at OpenAI, DeepMind, and other cutting-edge labs caution that we should be thinking now about the ramifications of systems that can autonomously learn and adapt, even beyond their human-designated parameters.
If current generative AI can manipulate speech and imagery so convincingly, one wonders what a self-improving AGI could accomplish, potentially orchestrating scams at a scale and sophistication that defy human comprehension.
In reflecting on these studies, I found myself returning to a central question: how can we balance the extraordinary benefits of AI with the urgent need to contain its more harmful applications?
In the fintech realm, AI-driven predictive analytics have revolutionized everything from loan approvals to fraud detection. Chatbots are streamlining customer service, and robo-advisors are democratizing investment strategies.
Indeed, at the GFF in Mumbai, the spirit of optimism was tangible as we discussed the potential for AI-driven inclusion in developing markets—helping unbanked populations access financial services in new ways. Yet, at each step forward, there is a potential step backward if these technologies fall into the wrong hands.
The idea of criminals deploying advanced AI to track user behavior, adapt social engineering scripts on the fly, and even infiltrate secure systems remains terrifying.
Much of the promise for combatting AI-driven scams lies in building equally robust defenses. Advanced pattern recognition can detect anomalies in communication patterns, and next-generation antivirus tools can flag suspicious file behavior or hidden generative signatures.
Leading cybersecurity firms are working on AI “vaccine” software, designed to immunize a user’s data from known scam vectors by analyzing typical user behavior and intercepting out-of-character interactions.
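To make that idea concrete, here is a minimal sketch of the kind of behavioral anomaly detection described above, using an off-the-shelf isolation forest. The feature names, synthetic data, and thresholds are illustrative assumptions of mine, not the design of any real security product.

```python
# Minimal sketch: flagging out-of-character account activity with an
# unsupervised anomaly detector. Features and numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features for one user:
# [login_hour, messages_sent, new_recipients, payment_amount]
normal_sessions = np.column_stack([
    rng.normal(9, 1.5, 500),   # usually logs in around 9am
    rng.poisson(12, 500),      # ~12 messages per session
    rng.poisson(1, 500),       # rarely messages new contacts
    rng.normal(80, 25, 500),   # typical payment size
])

# Train on this user's own historical behavior only.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# A suspicious session: 3am login, burst of messages to new
# recipients, and an unusually large transfer.
routine = np.array([[10, 11, 1, 75]])
suspicious = np.array([[3, 40, 15, 9500]])

for label, session in [("routine", routine), ("suspicious", suspicious)]:
    verdict = model.predict(session)[0]  # +1 = inlier, -1 = anomaly
    print(f"{label}: {'flag for review' if verdict == -1 else 'ok'}")
```

The design point is that the model learns one user’s own history, so a 3am login paired with a large transfer stands out for that account even though it might be perfectly normal for someone else.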
However, this is very much an arms race, and adversaries continue to hone their algorithms to circumvent these defenses.
Privacy concerns further complicate matters because the more data you collect to train defensive AI systems, the more you risk creating vulnerabilities that hackers can exploit.
Social media, in particular, poses the most imminent danger. Platforms are commercial enterprises that rely on engagement-driven algorithms to capture attention.
These algorithms do not inherently distinguish between legitimate, insightful content and malicious, AI-generated spam or disinformation. As a result, social media can amplify malicious actors, handing them an exponentially larger audience than they could have ever reached via email or phone scams.
The demon of social media lies in its scale and virality. Once a piece of disinformation or a link to a scam is out there, it can spread beyond anyone’s control.
I’ve watched close friends and colleagues, seemingly tech-savvy individuals, fall victim to elaborate LinkedIn phishing attempts or cloned voices “calling from the bank.”
In one particularly heartbreaking case, a friend lost tens of thousands of dollars after receiving a “routine verification request” from an AI-generated voice that matched his bank manager’s accent and intonation almost perfectly. There was no stammer, no robotic sign-off—just a smooth, confident voice confirming details that my friend readily offered. This is the frightening level of personal targeting we are now up against.
In searching for solutions, I often circle back to education and regulation as two crucial pillars.
At the GFF in Mumbai, I urged government officials and financial executives to invest more heavily in public awareness campaigns. Citizens must be equipped with the digital literacy skills to question suspicious links, to verify unexpected calls, and to recognize the subtle markers of AI-generated content.
Regulatory bodies also have a critical role to play, setting boundaries on how generative AI models can be trained and deployed, particularly in areas involving biometric data or hyper-realistic synthetic media. We can’t rely on voluntary corporate measures alone; well-crafted legislation and international cooperation are essential.
Another potential beacon of hope lies in authentication technologies.
As AI becomes better at mimicking human identities, we need robust verification methods, such as digital watermarks, blockchain-based identity solutions, and cryptographic checks. These can help confirm that a video truly depicts who it claims to depict, or that a message genuinely came from the account owner.
Of course, security features must be balanced with user-friendliness; otherwise, consumers might bypass them out of frustration, ironically enabling scammers to operate freely. The ongoing challenge is to make authentication seamless but also resilient against AI tampering.
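As one illustration of the cryptographic checks mentioned above, the sketch below uses Ed25519 signatures via Python’s `cryptography` package to confirm that a message genuinely came from a known sender. The key handling and message framing are deliberately simplified assumptions; a real deployment would manage key distribution and revocation carefully.

```python
# Minimal sketch: verifying that a message really came from a known
# sender using Ed25519 digital signatures (`cryptography` package).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# The legitimate sender generates a keypair once; the public key is
# shared with the recipient through a trusted channel beforehand.
sender_key = Ed25519PrivateKey.generate()
trusted_public_key = sender_key.public_key()

message = b"Please confirm the account details we discussed."
signature = sender_key.sign(message)

# An impostor can clone a voice or writing style, but cannot produce
# a valid signature over a different message without the private key.
tampered = b"Please wire $25,000 to the account below."

for candidate in (message, tampered):
    try:
        trusted_public_key.verify(signature, candidate)
        print("verified: genuinely from the known sender")
    except InvalidSignature:
        print("rejected: signature does not match this message")
```

This is why signing and watermarking schemes keep surfacing in these discussions: a generative model can imitate how someone sounds, but it cannot forge a valid signature over the exact message without the sender’s private key.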
Even so, every time I think we’ve found a foothold, generative AI leaps forward. Models learn to write in ways that mimic not just human grammar, but also personal style and cultural nuances. They can produce real-time translations that sound entirely native, bridging language barriers and widening the target audience for scams. In some sense, the arms race may never end, but we can certainly do more to hold the front lines.
As I get ready for my 2025 travels and prepare for my next public talk—perhaps back in India, or maybe in another global hub—I carry with me a sense of both wonder and dread at what the future holds.
Technological innovations like Generative AI and even the speculative leap toward AGI promise significant breakthroughs in healthcare, finance, education, and beyond. But there’s no doubt these same breakthroughs will empower a new breed of scams, more cunning and pervasive than ever before. Social media has already proven to be an unwitting ally to malicious actors, amplifying their reach while the rest of us scramble to keep up.
In every talk I give, in every fireside chat, and in every article I write, I encourage leaders to take a holistic approach. Governments, corporations, and consumers each have a critical part to play. Governments must legislate and collaborate internationally; corporations must incorporate robust safeguards and ethical frameworks into their platforms; and consumers must stay vigilant, continually educating themselves about the newest threats. Only a multi-pronged strategy can keep pace with the rapid evolution of AI-driven scams.
It was a privilege to stand before the audience in Mumbai last year as a member of the Austrade Fintech Delegation and witness firsthand the global commitment to harness AI’s potential responsibly. Yet I remain convinced that no single conference or delegation can solve this alone. Through my journeys in 2024, I’ve seen pockets of innovation and awareness that give me hope, but also enough startling examples of AI-driven fraud to be reminded of the high stakes.
The demon of social media—perpetuating scams on an unimaginable scale—is not just a possibility; it is already here, feasting on our collective complacency.
Perhaps the best I can do, as I pack my bags and head to my next destination, is to continue sounding the alarm, to share the cautionary tales I’ve witnessed, and to advocate for solutions that are as creative and determined as the forces we stand against. The future of scams is tied to the future of AI itself, and whether we allow it to become the thief in the night or the beacon that illuminates our world depends on the choices we make today.
After 48 flights last year, and as I ready myself to board yet another plane, I hold onto the belief that even the darkest demons can be confronted when we work together, armed with knowledge, integrity, and the resilience needed to navigate this brave new world.