February 21, 2025
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Repatriation introduces significant network challenges, further amplified by the adoption of disruptive technologies like SDN, SD-WAN and SASE, and by the rapid integration of AI/ML, especially at the edge. While beneficial, these technologies add complexity to network management, particularly in traffic routing, policy enforcement, and handling the unpredictable workloads generated by AI. ... Managing a hybrid environment spanning on-premises and public cloud resources introduces inherent complexity. Network teams must navigate diverse technologies, integrate disparate tools, and maintain visibility across a distributed infrastructure. On-premises networks often lack the dynamic scalability and flexibility of cloud environments, and absorbing repatriated workloads further complicates existing infrastructure, making monitoring and troubleshooting more challenging. ... Repatriated workloads introduce potential security vulnerabilities if they are not seamlessly integrated into existing security frameworks. On-premises security stacks that were not designed for the increased traffic volume previously handled by SASE services can introduce latency and performance bottlenecks, and adjustments to SD-WAN routing and policy enforcement may be necessary to redirect traffic to on-premises security resources.
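As a rough illustration of that last point, the sketch below shows one way such a steering decision could be expressed: route flows destined for repatriated, on-premises workloads through the local security stack only while it has capacity headroom, and fall back to the SASE POP otherwise. The names, capacity figures and policy logic are assumptions for illustration, not any vendor's SD-WAN API.

```python
# Hypothetical traffic-steering sketch: prefer the on-premises security stack for
# repatriated workloads when it has headroom, otherwise keep traffic on the SASE POP.
# Capacity figures and identifiers are illustrative assumptions.

ON_PREM_STACK_CAPACITY_MBPS = 10_000   # assumed capacity of the local security stack
current_on_prem_load_mbps = 7_500      # assumed current utilization

def pick_inspection_path(destination: str, est_mbps: float) -> str:
    """Return which security inspection path a flow should take."""
    headroom = ON_PREM_STACK_CAPACITY_MBPS - current_on_prem_load_mbps
    if destination == "on_prem" and est_mbps <= headroom:
        return "on_prem_security_stack"
    return "sase_pop"   # overflow and cloud-bound traffic stays on the SASE service

print(pick_inspection_path("on_prem", est_mbps=500))    # -> on_prem_security_stack
print(pick_inspection_path("on_prem", est_mbps=4_000))  # -> sase_pop (no headroom)
```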
We can no longer limit user access to one or two devices — we must address the entire ecosystem. Instead of forcing users down a single, constrained path, security teams need to acknowledge that users will inevitably venture into unsafe territory, and focus on strengthening the security of the broader environment. In 2015, we as security practitioners could get by with placing “do not walk on the grass” signs and ushering users down manicured pathways. In 2025, we need to create more resilient grass. ... The risk extends beyond basic access. Forty percent of employees download customer data to personal devices, 33% alter sensitive data, and 31% approve large financial transactions. Most alarming of all, 63% use personal accounts on their work laptops — most commonly Google — to share work files and create documents, effectively bypassing email filtering and data loss prevention (DLP) systems. ... Browser-based access exposes users to risks from malicious plugins, extensions, and post-authentication compromise, while the increasing reliance on SaaS applications creates opportunities for supply chain attacks. Personal accounts serve as particularly vulnerable entry points, allowing threat actors to leverage compromised credentials or stolen authentication tokens to infiltrate corporate networks.
The rapid evolution of generative AI presents a formidable challenge in the arms race between deepfake creators and detection technologies. As AI-driven content generation becomes more sophisticated, traditional detection mechanisms risk quickly becoming obsolete. Deepfake detection relies on training machine learning models on large datasets of genuine and manipulated media, but the scarcity of diverse and high-quality datasets can impede progress. Limited access to comprehensive datasets has made it difficult to develop robust detection systems that generalize across various media formats and manipulation techniques. To address this challenge, DARPA puts a strong emphasis on interdisciplinary collaboration. By partnering with institutions such as SRI International and PAR Technology, DARPA leverages cutting-edge expertise to enhance the capabilities of its deepfake detection ecosystem. These partnerships facilitate the exchange of knowledge and technical resources that accelerate the refinement of forensic tools. DARPA’s open research model also allows diverse perspectives to converge, fostering rapid innovation and adaptation in response to emerging threats. Deepfake detection also faces significant computational challenges. Training deep neural networks to recognize manipulated media requires extensive processing power and large-scale data storage.
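For readers unfamiliar with what "training on genuine and manipulated media" looks like in practice, here is a minimal sketch of such a binary classifier in Python with PyTorch. The dataset layout, model choice and hyperparameters are illustrative assumptions, not DARPA's forensic pipeline, and a toy setup like this would not generalize the way the article says robust detectors must.

```python
# Minimal sketch: a CNN trained to label frames as real vs. manipulated.
# Assumed dataset layout: data/train/real/... and data/train/fake/... image folders.
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("data/train", transform=tfm)   # two class folders
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)   # real vs. manipulated
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                          # toy number of epochs
    for x, y in train_dl:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```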
AI agents are not just an evolution of AI; they are a fundamental shift in IT operations and decision-making. These agents are being increasingly integrated into Predictive AIOps, where they autonomously manage, optimize, and troubleshoot systems without human intervention. Unlike traditional automation, which follows pre-defined scripts, AI agents dynamically predict, adapt, and respond to system conditions in real time. ... AI agents are transforming IT management and operational resilience. Instead of just replacing workflows, they now optimize and predict system health, automatically mitigating risks and reducing downtime. Whether it's self-repairing IT infrastructure, real-time cybersecurity monitoring, or orchestrating distributed cloud environments, AI agents are pushing technology toward self-governing, intelligent automation. ... The future of AI agents is both thrilling and terrifying. Companies are investing in large action models — next-gen AI that doesn’t just generate text but actually does things. We’re talking about AI that can manage entire business processes or run a company’s operations without human intervention. ... AI agents aren’t just another tech buzzword — they represent a fundamental shift in how AI interacts with the world. Sure, we’re still in the early days, and there’s a lot of fluff in the market, but make no mistake: AI agents will change the way we work, live, and do business.
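To make the "predict, adapt, and respond" loop concrete, here is a hedged Python sketch of what such an agent loop might look like: observe telemetry, apply a naive prediction, and trigger a remediation before conditions degrade further. The metric source, thresholds and remediation actions are hypothetical placeholders, not a production AIOps system.

```python
# Illustrative agent loop for predictive AIOps: observe, predict, act.
import random
import time

def read_metrics() -> dict:
    # Stand-in for a real telemetry source (Prometheus, CloudWatch, etc.).
    return {"cpu": random.uniform(0.2, 0.99), "error_rate": random.uniform(0.0, 0.1)}

def predict_saturation(cpu_history: list[float]) -> bool:
    # Naive "prediction": flag a sharp upward trend over the last three samples.
    return len(cpu_history) >= 3 and cpu_history[-1] > cpu_history[-3] + 0.2

def remediate(action: str) -> None:
    print(f"[agent] executing remediation: {action}")

cpu_history: list[float] = []
for _ in range(10):                      # bounded loop for the sketch
    m = read_metrics()
    cpu_history.append(m["cpu"])
    if predict_saturation(cpu_history):
        remediate("scale out web tier")  # act before the outage, not after it
    elif m["error_rate"] > 0.05:
        remediate("restart failing service")
    time.sleep(0.1)
```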
Technical debt is the implied cost of future IT infrastructure rework caused by choosing expedient IT solutions like shortcuts, software patches or deferred IT upgrades over long-term, sustainable designs. It’s easily accrued when under pressure to innovate quickly, but it leads to waste, security gaps, and vulnerabilities that compromise an organization’s integrity, making systems more susceptible to cyber threats. Technical debt can also be costly to eradicate, with companies spending an average of 20-40% of their IT budgets on addressing it. ... Cloud sprawl refers to the uncontrolled proliferation of cloud services, instances, and resources within an organization. It often results from rapid growth, lack of visibility, and decentralized decision-making. At Surveil, we have over 2.5 billion data points to lean on to identify trends, and we know that organizations with unmanaged cloud environments can see up to 30% higher cloud costs due to redundant and idle resources. Unchecked cloud sprawl can lead to increased security vulnerabilities due to unmanaged and unmonitored resources. ... Right-sizing involves aligning IT resources precisely with the demands of applications or workloads to optimize performance and cost. Our data shows that organizations that effectively right-size their IT estate can reduce cloud costs by up to 40%, unlocking business value to invest in other business priorities.
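A simple way to picture right-sizing is as a comparison of observed utilization against provisioned capacity. The sketch below flags idle and over-provisioned resources using made-up utilization data and thresholds; it is not Surveil's methodology, just an illustration of the idea.

```python
# Right-sizing sketch: flag idle and over-provisioned resources.
# Utilization figures and thresholds are illustrative assumptions.

resources = [
    {"name": "vm-app-01",   "vcpus": 16, "avg_cpu_pct": 7,  "days_since_last_use": 0},
    {"name": "vm-batch-02", "vcpus": 8,  "avg_cpu_pct": 0,  "days_since_last_use": 45},
    {"name": "vm-db-03",    "vcpus": 8,  "avg_cpu_pct": 68, "days_since_last_use": 0},
]

IDLE_DAYS = 30        # assumed: unused for a month => candidate for decommissioning
OVERSIZED_CPU = 20    # assumed: sustained CPU under 20% => candidate for downsizing

for r in resources:
    if r["days_since_last_use"] >= IDLE_DAYS:
        print(f"{r['name']}: idle, consider decommissioning")
    elif r["avg_cpu_pct"] < OVERSIZED_CPU:
        print(f"{r['name']}: over-provisioned, consider fewer vCPUs")
    else:
        print(f"{r['name']}: right-sized")
```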
Software bugs and bad code releases are common culprits behind tech outages. These issues can arise from errors in the code, insufficient testing, or unforeseen interactions among software components. Moreover, the complexity of modern software systems exacerbates the risk of outages. As applications become more interconnected, the potential for failures increases. A seemingly minor bug in one component can have far-reaching consequences, potentially bringing down entire systems or services. ... The impact of backup failures can be particularly devastating as they often come to light during already critical situations. For instance, a healthcare provider might lose access to patient records during a primary system failure, only to find that their backup data is incomplete or corrupted. Such scenarios underscore the importance of not just having backup systems, but ensuring they are fully functional, up-to-date, and capable of meeting the organization's recovery needs. ... Human error remains one of the leading causes of tech outages. This can include mistakes made during routine maintenance, misconfigurations, or accidental deletions. In high-pressure environments, even experienced professionals can make errors, especially when dealing with complex systems or tight deadlines.
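Since the point above is that backups must be verified, not merely taken, here is a small Python sketch of one such check: compare a backup file's checksum against a recorded digest and its age against a recovery point objective (RPO). The paths, the RPO value and the idea of a recorded digest are assumptions for illustration.

```python
# Backup verification sketch: check integrity (checksum) and freshness (RPO).
import hashlib
import os
import time

RPO_HOURS = 24  # assumed recovery point objective

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(path: str, expected_sha256: str) -> list[str]:
    problems: list[str] = []
    if not os.path.exists(path):
        return ["backup file missing"]
    if sha256(path) != expected_sha256:
        problems.append("checksum mismatch: backup may be corrupted")
    age_hours = (time.time() - os.path.getmtime(path)) / 3600
    if age_hours > RPO_HOURS:
        problems.append(f"backup is {age_hours:.0f}h old, exceeds {RPO_HOURS}h RPO")
    return problems

# Example (hypothetical path and digest):
# issues = verify_backup("/backups/patients.db.bak", "<recorded digest>")
```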