February 26, 2025
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
The Tool / Retrieval Layer forms the backbone of an intelligent agent’s ability to gather, process, and apply knowledge. It enables the agent to retrieve relevant information from diverse data sources, ensuring it has the necessary context to make informed decisions and execute tasks effectively. By integrating various databases, APIs, and knowledge structures, this layer acts as a bridge between raw data and actionable intelligence, equipping the agent with a robust understanding of its environment. ... The Action / Orchestration Layer is a critical component in an intelligent agent’s architecture, responsible for transforming insights and understanding into concrete, executable actions. It serves as the bridge between perception and execution, ensuring that workflows are effectively managed, tasks are executed efficiently, and system interactions remain seamless. This layer must handle the complexity of decision-making, automation, and resource coordination while maintaining adaptability to dynamic conditions. ... The Reasoning Layer is where the agent’s cognitive processes take place, enabling it to analyse data, understand context, draw inferences, and make informed decisions. This layer bridges raw data retrieval and actionable execution by leveraging advanced AI models and structured reasoning techniques.
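The layered flow described above is easier to see with a small example. The following Python sketch is illustrative only; the class names and wiring are assumptions, not any specific framework's API. It shows a retrieval layer gathering context, a reasoning layer turning it into a plan, and an orchestration layer executing that plan:

from dataclasses import dataclass, field

@dataclass
class RetrievalLayer:
    # Tool / Retrieval Layer: pulls context from registered data sources.
    sources: dict = field(default_factory=dict)  # source name -> callable(query) -> list of snippets

    def retrieve(self, query: str) -> list:
        results = []
        for name, fetch in self.sources.items():
            results.extend(fetch(query))  # e.g. vector store lookup, REST API call, SQL query
        return results

class ReasoningLayer:
    # Reasoning Layer: turns the query plus retrieved context into a plan.
    def decide(self, query: str, context: list) -> list:
        # A real agent would call an LLM or rules engine here; this stub just
        # derives a trivial one-step plan from the available context.
        return ["answer '%s' using %d retrieved snippets" % (query, len(context))]

class OrchestrationLayer:
    # Action / Orchestration Layer: executes the plan step by step.
    def execute(self, plan: list) -> None:
        for step in plan:
            print("executing:", step)  # a real agent would invoke tools or workflows here

# Wiring the three layers together
retrieval = RetrievalLayer(sources={"faq": lambda q: ["FAQ entry about " + q]})
context = retrieval.retrieve("reset a password")
plan = ReasoningLayer().decide("reset a password", context)
OrchestrationLayer().execute(plan)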
Several current AI models use chain-of-thought reasoning, an AI technique that helps large language models solve problems by breaking them down into a series of logical steps. The process aims to improve performance and safety by enabling the AI to verify its outputs. But "reasoning" also exposes a new attack surface, allowing adversaries to manipulate the AI's safety mechanisms. A research team comprising experts from Duke University, Accenture and Taiwan's National Tsing Hua University found a vulnerability in how the models processed and displayed their reasoning. They developed a dataset called Malicious-Educator to test the vulnerability, designing prompts that tricked the models into overriding their built-in safety checks. These adversarial prompts exploited the AI's intermediate reasoning process, which is often displayed in user interfaces. ... The researchers acknowledged that they could be facilitating further jailbreaking attacks by publishing the Malicious-Educator dataset but argued that studying these vulnerabilities openly is necessary to develop stronger AI safety measures. A key distinction in this research is its focus on cloud-based models. AI models running in the cloud often include hidden safety filters that block harmful input prompts and moderate output in real time. Local models lack these automatic safeguards unless users implement them manually.
The CISO requires specific and sustained support from the board to effectively protect the organization from cyber threats. A strong partnership between the CISO and board is essential for establishing and maintaining robust cybersecurity practices. My favourite saying is one that CISO Robert Veres relayed to me: the board should support the “Red” and challenge the “Green.” This support is exactly what the CISO requires as a foundation. The board must help set the overall strategic direction that aligns with the organization’s risk appetite. This high-level guidance provides the framework within which the CISO can develop and implement security programs. While the CISO establishes the cyber risk culture, they need the board to reinforce this by setting the appropriate tone from the top and ensuring cybersecurity compliance is prioritized across all levels of management and business units. This is a difficult task for some boards, as they may lack a solid understanding of the business and of how the technology strategy integrates with it. A critical requirement is for the CISO to have a strong mandate to operate with clear accountability. They need the authority to act and defend the enterprise without excessive interference, allowing them to respond quickly and effectively to emerging threats.
Integrating artificial intelligence into cyberattacks is fundamentally changing the threat landscape, creating challenges for individuals and organizations alike. Traditionally, cyber threats have been largely manual, depending on the ingenuity and persistence of the attacker. The nature of these threats has evolved as AI has become more automated, adaptable, and accessible. AI-based attacks can analyse vast amounts of data to identify vulnerabilities and launch highly targeted phishing campaigns that spread the latest malware with minimal human intervention. The speed and execution of AI-powered attacks mean that threats can emerge more suddenly than ever before. For instance, AI can automate the reconnaissance and surveillance stages, mapping targets quickly and precisely. This rapid vulnerability identification allows attackers to exploit weaknesses before they are patched, giving organizations less time to respond. Additionally, AI can create modified malware that constantly evolves to evade detection by traditional security frameworks, making it more difficult to defend against.
While the concept is compelling, will we see this wave of AI factories that Jensen is promising? Probably not at scale. AI hardware is not only costly to acquire and operate, but it also doesn’t run continuously like a database server. Once a model is trained, it may not need updates for months, leaving this expensive infrastructure sitting idle. For that reason, Alan Howard, senior analyst at Omdia specializing in infrastructure and data centers, believes most AI hardware deployments will occur in multipurpose data centers. ... AI tech advances rapidly, and keeping up with the competition is prohibitively expensive, Palaniappan added. “When you start looking at how much each of these GPUs cost, and it gets outdated pretty quickly, that becomes a bottleneck,” he said. “If you are trying to leverage a data center, you’re always looking for the latest chip in the facility, so many of these data centers are losing money because of these efforts.” ... In addition to the cost of the GPUs, significant investment is required for networking hardware, as all the GPUs need to communicate with each other efficiently. Tom Traugott, senior vice president of strategy at EdgeCore Digital Infrastructure, explains that in a typical eight-GPU Nvidia DGX system, the GPUs communicate via NVLink.
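To make the interconnect point concrete, here is a minimal sketch (an illustration under assumptions, not taken from the article) of the kind of multi-GPU job that exercises that GPU-to-GPU fabric: with PyTorch's NCCL backend, collectives such as all_reduce are carried over NVLink where it is available. The script would be launched with something like torchrun --nproc_per_node=8 allreduce_demo.py; the file name is hypothetical.

import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")   # NCCL routes traffic over NVLink when present
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    # Each GPU holds a tensor filled with its own rank; all_reduce sums them across devices.
    t = torch.full((1024,), float(rank), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)

    if rank == 0:
        # With 8 GPUs the result on every element is 0 + 1 + ... + 7 = 28.
        print("sum across GPUs:", t[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()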
When companies agree to combine, things get complicated, particularly when blending their IT and digital operations. To that end, organizations must carefully outline how they plan to merge their IT departments to overcome associated challenges and avoid expensive disruptions. ... IT is the cornerstone of most multinational corporations. Determining how each merger participant will mesh its systems with the other is a significant undertaking, particularly because 47% of M&A deals fail because of IT problems. IT due diligence is paramount. Not only does the process help identify priorities and risks beforehand, but it also lets the acquiring company properly evaluate the technical capabilities of the firm it intends to purchase. ... Cross-border M&As are subject to data privacy and compliance regulations that vary significantly across jurisdictions. When assessing an international merger, ensure there aren't any non-compliance risks and that the firm being acquired operates legitimately. Be aware of complex international data and privacy laws. Address any irregularities with a strong compliance strategy and retain expert legal counsel before signing the deal. ... In fact, cultural mismatch is one of the top reasons why M&As fail.