August 03, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
Technical debt often stems from the costs of running and maintaining legacy technology services, especially older applications. It typically arises when organizations make short-term sacrifices or use quick fixes to address immediate needs without ever returning to resolve those temporary solutions. For CIOs, balancing technical debt ...
With businesses collecting more data than ever, data analysts can feel more like they are scrounging through bins than panning for gold. “Hiring data scientists is outside the reach of most organizations but that doesn't mean you can’t use the expertise of an AI agent,” Callens says. Once a business has a handle on which metrics really matter, the rest falls into place: organizations can define objectives and then optimize data sources. As the quality of the data improves, decisions are better informed and outcomes can be monitored more effectively. Rather than each decision acting in isolation, a positive feedback loop emerges in which data and decisions are inextricably linked; at that point the organization is truly data-driven. Subramanian explains that changing the culture to become more data-driven requires top-down focus. When making decisions, stakeholders should be asked to provide data justification for their choices, and managers should be asked to track and report on data metrics in their organizations. “Have you established tracking of historical data metrics and some trend analysis?” she says. “Prioritizing data in decision making will help drive a more data-driven culture.”
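The tracking and trend analysis Subramanian describes can start very small. Here is a minimal sketch in Python (using pandas, with a hypothetical metric name and values) of keeping a history of a data metric and deriving a basic trend from it:

```python
# Minimal sketch: track a historical data metric and compute simple trend signals.
# The metric name and figures below are illustrative assumptions, not real data.
import pandas as pd

# Assume a monthly history of a KPI, e.g. conversion rate per month.
history = pd.DataFrame(
    {
        "month": pd.date_range("2024-01-01", periods=6, freq="MS"),
        "conversion_rate": [0.041, 0.043, 0.040, 0.046, 0.049, 0.052],
    }
)

# Month-over-month change and a 3-month rolling mean as basic trend indicators.
history["mom_change"] = history["conversion_rate"].diff()
history["rolling_mean_3m"] = history["conversion_rate"].rolling(3).mean()

print(history)
```

Even a simple report like this gives stakeholders a data baseline to justify decisions against, which is the habit the paragraph above argues for.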
Central to the technology is the concept of foundation models, which are rapidly broadening the functionality of AI. While earlier AI platforms were trained on specific data sets to produce a focused but limited output, the new approach throws the doors wide open. In simple — and somewhat unsettling — terms, a foundation model can learn new tricks from unrelated data. “What makes these new systems foundation models is that they, as the name suggests, can be the foundation for many applications of the AI model,” says IBM. “Using self-supervised learning ...
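To make the "one foundation, many applications" idea concrete, here is a minimal sketch assuming the Hugging Face transformers library, where a single pretrained checkpoint (bert-base-uncased, trained with self-supervised masked-language modelling) serves as the base for two different tasks:

```python
# Minimal sketch: one pretrained foundation model reused for multiple applications.
from transformers import pipeline

base_model = "bert-base-uncased"  # pretrained via self-supervised masked-language modelling

# The same checkpoint can back different downstream uses:
fill_mask = pipeline("fill-mask", model=base_model)          # predict missing words
embedder = pipeline("feature-extraction", model=base_model)  # produce reusable text embeddings

print(fill_mask("Foundation models can be adapted to many [MASK].")[0]["token_str"])
print(len(embedder("The same backbone yields reusable text embeddings.")[0]))
```

In practice each application would fine-tune or prompt the shared backbone further, but the reuse of one model across tasks is exactly the property that makes it a "foundation".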
To secure your organisation, you have to figure out where your APIs are, who’s using them and how they are being accessed. This information is important because API deployment increases your organisation’s attack surface, making it more vulnerable to threats. The more exposed your APIs are, the greater the chance a sneaky attacker will find a vulnerable spot in your system. Once you’ve pinpointed your APIs and have full visibility of potential points of access, you can start to include them in your vulnerability management processes.
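As a concrete starting point for that inventory step, here is a minimal sketch that builds a simple API inventory from an exported OpenAPI specification (the file name openapi.json is an assumption; the field names follow the OpenAPI 3 format), flagging endpoints that declare no authentication requirement:

```python
# Minimal sketch: enumerate API endpoints from an OpenAPI spec so they can be
# fed into vulnerability management. The spec file name is hypothetical.
import json

with open("openapi.json") as f:
    spec = json.load(f)

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options", "trace"}

inventory = []
for path, item in spec.get("paths", {}).items():
    for method, details in item.items():
        if method.lower() not in HTTP_METHODS:
            continue  # skip non-operation keys such as "parameters"
        inventory.append(
            {
                "endpoint": f"{method.upper()} {path}",
                "auth_required": bool(details.get("security") or spec.get("security")),
            }
        )

# Endpoints without any declared auth requirement are prime candidates for review.
for entry in inventory:
    print(entry["endpoint"], "- auth required:", entry["auth_required"])
```

A listing like this is only a first pass at visibility; discovery tooling and traffic analysis are still needed to catch undocumented or shadow APIs.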
As the world’s networking infrastructure has evolved, there is now far more private backbone bandwidth available. Like all cloud solutions, NaaS also benefits from significant ongoing price/performance improvements in commercial hardware. Combined with the growing number of carrier-neutral colocation facilities, NaaS providers simply have many more building blocks to assemble reliable, affordable, any-to-any connectivity for practically any location. The biggest changes derive from the advanced networking and security approaches that today’s NaaS solutions employ. Modern NaaS solutions fully disaggregate control and data planes, hosting control functions in the cloud. As a result, they benefit from practically unlimited (and inexpensive) cloud computing capacity to keep costs low, even as they maintain privacy and guaranteed performance. Even more importantly, the most sophisticated NaaS providers use novel metadata-based routing techniques and maintain end-to-end encryption. These providers have no visibility into enterprise traffic; all encryption/decryption happens only under the business’ direct control.
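The end-to-end encryption point can be illustrated with a small sketch using Python's cryptography package: the symmetric key lives only at the enterprise endpoints, so an intermediary forwarding the ciphertext (the NaaS provider, in this analogy) can route on metadata but never see the payload:

```python
# Minimal sketch: end-to-end encryption where only the enterprise endpoints hold the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generated and held only under the business's control
sender = Fernet(key)
receiver = Fernet(key)

ciphertext = sender.encrypt(b"internal traffic between two sites")
# ...the provider forwards `ciphertext` using routing metadata only; it cannot decrypt it...
plaintext = receiver.decrypt(ciphertext)
print(plaintext.decode())
```

This is a simplification of real transport encryption, but it captures why a provider that never holds the keys has no visibility into enterprise traffic.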
With the advancement of stream processing engines like Apache Flink and Spark, we can aggregate and process data streams in real time, as these engines handle low-latency data ingestion while providing fault tolerance.
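As an illustration, here is a minimal Spark Structured Streaming sketch (assuming PySpark and a local socket source on port 9999, e.g. fed by `nc -lk 9999`) that aggregates a data stream in real time:

```python
# Minimal sketch: real-time aggregation of a text stream with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("stream-aggregation").getOrCreate()

# Read lines from a local socket as an unbounded stream.
lines = (
    spark.readStream.format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()
)

# Continuously maintain word counts as new data arrives.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```

An equivalent job could be written with Flink's DataStream or Table API; the common point is that the engine, not the application, handles ingestion latency, state and fault tolerance.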