February 28, 2025
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Shadow testing is especially useful for microservices with frequent deployments, helping services evolve without breaking dependencies. It validates schema and API changes early, reducing risk before consumers are impacted. It also assesses performance under real conditions and ensures compatibility with third-party services. ... Shadow testing doesn’t replace traditional testing but rather complements it by reducing reliance on fragile integration tests. While unit tests remain essential for validating logic and end-to-end tests catch high-level failures, shadow testing fills the gap of real-world validation without disrupting users. Shadow testing follows a common pattern regardless of environment and has been implemented by tools like Diffy from Twitter/X, which introduced automated response comparison to detect discrepancies effectively. ... The environment where shadow testing is performed may vary, and each option offers different benefits; more realistic environments are generally better. Staging shadow testing is easier to set up, avoids compliance and data isolation issues, and can use synthetic or anonymized production traffic to validate changes safely. Production shadow testing provides the most accurate validation using live traffic, but requires safeguards for data handling, compliance, and test workload isolation.
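To make the pattern concrete, here is a minimal sketch, assuming a simple HTTP service: each incoming request is served by the current (primary) deployment as usual, mirrored to the candidate (shadow) deployment off the hot path, and the two responses are compared in the spirit of Diffy-style automated comparison. The service URLs, endpoint, and logging are hypothetical, and in practice the mirroring is usually done at the proxy or service-mesh layer rather than in application code.

```python
# Minimal shadow-testing sketch: serve users from the primary deployment,
# replay the same request against the shadow deployment, and log any
# discrepancies without ever affecting the live response.
# PRIMARY_URL and SHADOW_URL are hypothetical internal endpoints.
import threading
import requests

PRIMARY_URL = "https://orders-v1.internal"   # version serving real users
SHADOW_URL = "https://orders-v2.internal"    # candidate under test

def compare_responses(path, primary_resp, shadow_resp):
    """Record discrepancies for later review instead of failing the request."""
    if primary_resp.status_code != shadow_resp.status_code:
        print(f"[diff] {path}: status {primary_resp.status_code} vs {shadow_resp.status_code}")
    elif primary_resp.text != shadow_resp.text:
        print(f"[diff] {path}: response bodies diverge")

def handle_request(path, params):
    # 1. Serve the caller from the primary deployment as usual.
    primary_resp = requests.get(f"{PRIMARY_URL}{path}", params=params, timeout=2)

    # 2. Replay the request against the shadow deployment off the hot path,
    #    so shadow latency or failures never reach the caller.
    def shadow_call():
        try:
            shadow_resp = requests.get(f"{SHADOW_URL}{path}", params=params, timeout=2)
            compare_responses(path, primary_resp, shadow_resp)
        except requests.RequestException as exc:
            print(f"[diff] {path}: shadow call failed: {exc}")

    threading.Thread(target=shadow_call, daemon=True).start()
    return primary_resp  # only the primary result is returned to the user

# Example: handle_request("/orders", {"id": "42"}) returns the primary response
# while the shadow comparison runs in the background.
```

In a real deployment the diffs would be aggregated and filtered for known-noisy fields such as timestamps and request IDs before anyone reviews them, which is the part tools like Diffy help automate.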
Creating an Office of Responsible AI can play a vital role in a governance model. This office should include representatives from IT, security, legal, compliance, and human resources to ensure that all facets of the organization have input in decision-making regarding AI tools. This collaborative approach can help mitigate the risks associated with shadow AI applications. You want to ensure that employees have secure and sanctioned tools. Don’t forbid AI—teach people how to use it safely. Indeed, the “ban all tools” approach never works; it lowers morale, causes turnover, and may even create legal or HR issues. The call to action is clear: Cloud security administrators must proactively address the shadow AI challenge. This involves auditing current AI usage within the organization and continuously monitoring network traffic and data flows for any signs of unauthorized tool deployment. Yes, we’re creating AI cops. However, don’t think they get to run around pointing fingers at people, or let your cloud providers point fingers at you. This is one of those problems that can only be solved with a proactive education program aimed at making employees more productive and not afraid of getting fired. Shadow AI is yet another buzzword to track, but it’s also undeniably a growing problem for cloud computing security administrators.
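As one illustration of what such an audit could look like, here is a minimal sketch, assuming outbound proxy logs are available as a CSV with user and destination-host columns: it flags traffic to known generative-AI endpoints that are not on a sanctioned list. The domain lists, log format, and file name are assumptions made for illustration, not a standard detection method.

```python
# Sketch of a shadow-AI audit over outbound proxy logs: count requests to
# known AI services that are not on the organization's sanctioned list.
import csv
from collections import Counter

# Illustrative lists; a real inventory would be larger and centrally maintained.
KNOWN_AI_DOMAINS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}
SANCTIONED = {"api.openai.com"}  # tools the organization has approved

def audit_proxy_log(path="proxy_log.csv"):
    """Count requests to unsanctioned AI services per user and destination."""
    hits = Counter()
    with open(path, newline="") as f:
        # Assumed columns: user, dest_host
        for row in csv.DictReader(f):
            host = row["dest_host"].strip().lower()
            if host in KNOWN_AI_DOMAINS and host not in SANCTIONED:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in audit_proxy_log().most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

The output is meant to drive education and the rollout of sanctioned tools rather than punishment, in line with the warning above about not turning the exercise into finger-pointing.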
The debate about truly transformative AI may not be about whether it can think or be conscious like a human, but rather about its ability to perform complex tasks across different domains autonomously and effectively. It is important to recognize that the value and usefulness of machines do not depend on their ability to exactly replicate human thought and cognitive abilities, but rather on their ability to achieve similar or better results through different methods. Although the human brain has inspired much of the development of contemporary AI, it need not be the definitive model for the design of superior AI. Perhaps by freeing the development of AI from strict neural emulation, researchers can explore novel architectures and approaches that optimize for different objectives, constraints, and capabilities, potentially overcoming the limitations of human cognition in certain contexts. ... Some human factors that could be stumbling blocks on the road to transformative AI include: the information overload we face, possible misalignment with human values, a negative perception of AI that we may be acquiring, the view of AI as our competitor, excessive dependence on human experience, the perception that ethics in AI is futile, loss of trust, overregulation, diluted efforts in research and application, the idea of human obsolescence, and the possibility of an “AI-cracy”.
We live in a time when the true ideals of a free and open internet are under attack. The most recent repeal of net neutrality regulations is taking us toward a more centralized, controlled version of the internet. In this scenario, a decentralized, permissionless internet offers a powerful alternative to today’s reality. Decentralized systems can address the threat of censorship by distributing content across a network of nodes, ensuring that no single entity can block or suppress information. Decentralized physical infrastructure networks (DePIN) demonstrate how decentralized storage can keep data accessible even when network parts are disrupted or taken offline. This censorship resistance is crucial in regions where governments or corporations try to limit free expression online. Decentralization can also cultivate economic democracy by eliminating intermediaries like ISPs and related fees. Blockchain-based platforms allow smaller, newer players to compete with incumbent services and content companies on a level playing field. The Helium network, for example, uses a decentralized model to challenge traditional telecom monopolies with community-driven wireless infrastructure. In a decentralized system, developers don’t need approval from ISPs to launch new services.
With massive volumes of data to make sense of, having reliable and scalable modern data architectures that can organise and store data in a structured, secure, and governed manner while ensuring data reliability and integrity is critical. This is especially true in the hybrid, multi-cloud environments in which companies operate today. Furthermore, as we face a new “AI summer”, executives are experiencing increased pressure to respond to the tsunami of hype around AI and its promise to enhance efficiency and competitive differentiation. This means companies will need to rely on high-quality, verifiable data to implement AI-powered technologies such as Generative AI and Large Language Models (LLMs) at enterprise scale. ... Beyond infrastructure, companies in India need to look at ways to create a culture of data. In today’s digital-first organisations, many businesses require real-time analytics to operate efficiently. To enable this, organisations need to create data platforms that are easy to use and equipped with the latest tools and controls, so that employees at every level can get their hands on the right data to unlock productivity and free up valuable time for other strategic priorities. Building a data culture also needs to come from the top; it is imperative to ensure that data is valued and used strategically and consistently to drive decision-making.
What might be a bit surprising, however, is one particular pain point that customers in this vertical bring up repeatedly. What is this mysterious pain point? I’m not sure if it has an official name, but many people I meet with share that they are spending so much time responding to regulatory findings that they hardly have time for anything else. This is troubling, to say the least. It may be an uncomfortable discussion to have, but I’d argue it is long past time that we as a security community had it. ... The threats enterprises face change and evolve quickly – even rapidly, I might say. Regulations often have trouble keeping up with the pace of that change. This means that enterprises are often forced to solve last year’s or even last decade’s problems, rather than the problems that might actually pose a far greater threat to the enterprise. In my opinion, regulatory agencies need to move more quickly to keep pace with the changing threat landscape. ... Regulations are often produced by large, bureaucratic bodies that do not move particularly quickly. This means that if some part of a regulation is ineffective, overly burdensome, impractical, or otherwise needs adjusting, it may take some time before that change happens. In the interim, enterprises have no choice but to comply with something that the regulatory body has already acknowledged needs adjusting.