May 28, 2024
Kannan Subbiah
FCA | CISA | CGEIT | CCISO | GRC Consulting | Independent Director | Enterprise & Solution Architecture | Former Sr. VP & CTO of MF Utilities | BU Soft Tech | itTrident
By partitioning LLMs, we achieve a scalable architecture in which edge devices handle lightweight, real-time tasks while the heavy lifting is offloaded to the cloud. For example, say we are running medical scanning devices deployed worldwide. AI-driven image processing and analysis is core to the value of those devices; however, if we’re shipping huge images back to some central computing platform for diagnostics, that won’t be optimal. Network latency will delay some of the processing, and if the network is out, as it may well be in rural areas, then you’re out of business. ... The first step involves evaluating the LLM and the AI toolkits and determining which components can be effectively run on the edge. This typically includes lightweight models or specific layers of a larger model that perform inference tasks. Complex training and fine-tuning operations remain in the cloud or other externalized systems. Edge systems can preprocess raw data to reduce its volume and complexity before sending it to the cloud or processing it locally with the on-device model.
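As a rough sketch of that split, consider the flow below. Everything in it is an illustrative assumption rather than anything from the article: the confidence threshold, the cloud endpoint, and the stubbed local model are all invented for demonstration.

import io
import requests
from PIL import Image

EDGE_CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for trusting the edge model
CLOUD_ENDPOINT = "https://example-cloud.invalid/api/diagnose"  # hypothetical

def preprocess(raw_image: bytes, max_side: int = 512) -> bytes:
    # Shrink and re-encode on the device to cut upload volume before any
    # network hop -- the "reduce volume and complexity" step.
    img = Image.open(io.BytesIO(raw_image)).convert("L")
    img.thumbnail((max_side, max_side))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=85)
    return buf.getvalue()

def run_edge_inference(image: bytes) -> tuple[str, float]:
    # Placeholder: a real deployment would invoke a quantized local model
    # or inference-only layers of the larger model.
    return ("indeterminate", 0.0)

def diagnose(raw_image: bytes) -> str:
    compact = preprocess(raw_image)
    label, confidence = run_edge_inference(compact)
    if confidence >= EDGE_CONFIDENCE_THRESHOLD:
        return label  # real-time path: handled entirely on the edge
    try:
        resp = requests.post(CLOUD_ENDPOINT, data=compact, timeout=10)
        resp.raise_for_status()
        return resp.json()["label"]
    except requests.RequestException:
        return label  # network down (e.g. a rural site): degrade gracefully

The point of the try/except fallback is exactly the scenario above: when the network is unavailable, the device still returns its best local answer instead of going out of business.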
As people move up the corporate totem pole their attention to detail gives way to big-picture thinking, and rightly so. You can’t look beyond and yet mind your every step on the way to an uncharted terrain. Yet when it comes to research and development, especially high-risk, high-impact projects, there is hardly any trade-off between thinking big and thinking in detail. You must do both. For instance, in the inaugural session of my last workshop, one of the senior directors was invited and the first thing he noticed was a mistake in the session duration. ... Now imagine this situation in a corporate context. How likely is the boss to call out a rather silly mistake? It was innocuous for all practical purposes. Most won’t point it out, let alone address it immediately. But not at ISRO. ... Here’s the interesting thing. One of the participants was incessantly quizzing me, bordering on a challenge, and everyone was nonchalant about it. In a typical corporate milieu, such people would be shunned or asked to shut up. But not here. We had a volley of arguments, and the people around seemed to enjoy and encourage it. They were not only okay with varied points of view but also protective of them.
“What we’ve done is built a common gateway that talks to all the various large language models on the backend, and currently we support more than 50 different models, whether they’re for images, text or chat, or whatnot. ... “Obviously, this space is accelerating superfast. A year ago, we had zero LLMs and today we have 50 LLMs. That gives you some indication of just how fast this is moving. Different models will have different attributes and that’s something we’ll have to continue to monitor. But by having that mechanism to monitor and control what we send and what we receive, we believe we can better manage that.” ... “In some ways, experiments that aren’t successful are some of the most interesting ones, because you learn what doesn’t work and that forces you to ask follow-up questions about what will work and to look at things differently. As teams saw the results of these experiments and saw the impact on customers, it’s really engaged them to spend more time with the technology and focus on customer outcomes.”
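Structurally, a gateway like that can be as simple as one interface in front of a registry of backend callables, with a single choke point to log and control what is sent and received. The sketch below is my illustration of the pattern, not the company’s actual implementation; the model names and registry shape are invented.

from dataclasses import dataclass
from typing import Callable

@dataclass
class LLMRequest:
    model: str   # e.g. "text-alpha", "image-beta" (hypothetical names)
    prompt: str

ProviderFn = Callable[[LLMRequest], str]

class LLMGateway:
    def __init__(self) -> None:
        self._providers: dict[str, ProviderFn] = {}
        self.audit_log: list[tuple[str, str, str]] = []

    def register(self, model: str, fn: ProviderFn) -> None:
        # Adding a backend is one call -- how a fleet grows from 0 to 50+.
        self._providers[model] = fn

    def complete(self, req: LLMRequest) -> str:
        if req.model not in self._providers:
            raise ValueError(f"unknown model: {req.model}")
        # Central point to inspect, redact, or block traffic in both directions.
        response = self._providers[req.model](req)
        self.audit_log.append((req.model, req.prompt, response))
        return response

# Usage with a stub provider:
gw = LLMGateway()
gw.register("text-alpha", lambda r: f"echo: {r.prompt}")
print(gw.complete(LLMRequest(model="text-alpha", prompt="hello")))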
Alert fatigue is the result of several related factors. First, today’s security tools generate an incredible volume of event data. This makes it difficult for security practitioners to distinguish between background noise and serious threats. Second, many systems are prone to false positives, which are triggered either by harmless activity or by overly sensitive anomaly thresholds. This can desensitize defenders who may end up missing important attack signals. The third factor contributing to alert fatigue is the lack of clear prioritization. The systems generating these alerts often don’t have mechanisms that triage and prioritize the events. This can lead to paralyzing inaction because the practitioners don’t know where to begin. Finally, when alert records or logs do not contain sufficient evidence and response guidance, defenders are unsure of the next actionable steps. This confusion wastes valuable time and contributes to frustration and fatigue. ... The elements of the “SOC visibility triad” I mentioned earlier – NDR, EDR, and SIEM – are among the critical new technologies that can help.
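To make the missing-prioritization point concrete, here is a small sketch of the kind of triage scoring such systems often lack. The fields and weights are invented for illustration and are not a standard or any product’s schema.

from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str             # e.g. "EDR", "NDR", "SIEM"
    severity: int           # 1 (low) .. 5 (critical)
    asset_criticality: int  # 1 .. 5, from an asset inventory
    corroborated: bool      # seen by more than one tool?
    evidence: list[str] = field(default_factory=list)

def triage_score(a: Alert) -> float:
    # Higher score = look at this first.
    score = float(a.severity * a.asset_criticality)
    if a.corroborated:
        score *= 1.5  # cross-tool agreement cuts false-positive noise
    if not a.evidence:
        score *= 0.5  # thin alerts waste analyst time; deprioritize
    return score

alerts = [
    Alert("SIEM", severity=2, asset_criticality=1, corroborated=False),
    Alert("EDR", severity=4, asset_criticality=5, corroborated=True,
          evidence=["process tree", "network capture"]),
]
for a in sorted(alerts, key=triage_score, reverse=True):
    print(f"{a.source}: {triage_score(a):.1f}")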
If willingness and skill are the two main dimensions that influence hesitancy toward AI, employees who question whether taking the time to learn the technology is worth the effort are at the intersection. These employees often believe the AI learning curve is too steep to justify embarking on in the first place, he notes. “People perceive that AI is something complex, probably because of all of these movies. They worry: Will they have time and effort to learn these new skills and to adapt to these new systems?” Jaksic says. This challenge is not unique to AI, he adds. “We all prefer familiar ways of working, and we don’t like to disrupt our established day-to-day activities,” he says. Perhaps the best inroad, then, is to show that learning enough about AI to use it productively does not require a monumental investment. To this end, Jaksic has structured a formal program at KEO for AI education in bite-size segments. The program, known as Summer of Innovation, is organized around lunchtime sessions taught by senior leaders on high-level AI concepts.
Gen AI needs to be accountable and auditable. It needs to be instructed and learn what information it can retrieve. Combining it with IA serves as the linchpin of effective data governance, enhancing the accuracy, security, and accountability of data throughout its lifecycle. Put simply, by wrapping Gen AI with IA, businesses have greater control of data and automated workflows, managing how it is processed, secured against unauthorized changes, and stored. It is this ‘process wrapper’ concept that will allow organizations to deploy Gen AI effectively and responsibly. Adoption and transparency of Gen AI – now – is imperative, as innovation continues to grow at pace. The past 12 months have seen significant innovations in large language models (LLMs) and Gen AI to simplify automations that tackle complex and hard-to-automate processes. ... Before implementing any sort of new automation technology, organizations must establish use cases unique to their business and undertake risk management assessments to avoid potential noncompliance, data breaches and other serious issues.
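A bare-bones sketch of that ‘process wrapper’ idea: every Gen AI call passes through governance checks and leaves an audit trail. The policy rule, function names, and audit format below are assumptions made for illustration, not any vendor’s API.

import datetime
import re

AUDIT_TRAIL: list[dict] = []
BLOCKED_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. SSN-like strings

def governed_generate(prompt: str, generate) -> str:
    # Pre-check: block prompts containing restricted data before the model sees them.
    for pat in BLOCKED_PATTERNS:
        if pat.search(prompt):
            raise PermissionError("prompt contains restricted data")
    response = generate(prompt)
    # Accountable and auditable: every call is recorded with a timestamp.
    AUDIT_TRAIL.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
    })
    return response

# Usage with a stub model standing in for the real LLM call:
print(governed_generate("summarize Q2 results", lambda p: f"draft: {p}"))

The wrapper, not the model, owns the policy and the log, which is what makes the automated workflow controllable regardless of which LLM sits behind it.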