AI Security: Customer Needs And Opportunities
Generated by ChatGPT/DALL-E (prompt: create an illustration depicting a future scenario showing how human - virtual agent collaborations are secured)

AI Security: Customer Needs And Opportunities

Sekhar Sarukkai, Cybersecurity@ UC Berkeley / EAC @ Center for Long-Term Cybersecurity


There are no signs of AI momentum and VC funding subsiding. 75% of founders in the latest Y Combinator cohort are working on AI startups. Almost all tech vendors are executing on revamping their strategy to be AI centric.?

This momentum is not purely vendor driven. In spite of questions on value realization , based on customer usage research by Skyhigh Security, 100% of enterprises are using AI Services with enterprises on average using more than 320 AI services! Organizations of all kinds ranging from insurance, finance, manufacturing to government and everything in between are investing heavily in AI with large teams with significant IT budgets. In a recent survey PWC found that over half the companies have already adopted genAI. Just last quarter, OpenAI also announced its 1 millionth enterprise seat – less than a year since it released the enterprise product.

With all this action there is a proliferation of vendor perspectives on the dangers of AI, and perspectives on the biggest security, safety, privacy, and governance concerns that will keep enterprises from using AI in production. Here, I attempt to capture my perspective on the key security challenges and opportunities that have emerged over the last year.


The Simplified AI stack

One can view the recent waves of AI technology innovations in 3 layers:

The Foundational AI?

This layer comprises everything needed to train models and inference use cases that leverage these models. The pace of innovation has been breathtaking with game-changing releases such as OpenAI's o1 that supports multimodal reasoning , or Anthropic's Computer Use . Big money has pumped in big money at this level, in essence picking the winners in this core AI infrastructure Model layer.

There are two types of infrastructure in this layer:?

1. Foundation models dominated by the likes of OpenAI, Google Gemini, Mistral, Grok, Llama and Anthropic. These models (open-source or close-source) are trained on a humongous corpus of data (public or private). Models can be used for inference as a SaaS (such as ChatGPT), PaaS (such as AWS Bedrock, Azure/Google AI platforms) or deployed in private instances. These foundation model companies alone account for almost 50% of Nvidia’s AI chip business .?

2. Custom models, thousands of which can be found in a marketplace such as Huggingface . These models are trained on specialized data sets (public or private) for targeted use cases or verticals. These models can also be consumed as SaaS or PaaS services, or by deploying them in private instances, or knowledge search engine such as Perplexity.ai . In addition, enterprises can choose to create and deploy their own fine-tuned models deployed in their private data centers or private Cloud. Recent quarterly reports from Microsoft, for example, show that while the general public Cloud growth without considering AI PaaS is only modest, the fastest growing part of the public Cloud business is AI PaaS.

LLM/SLM Use Cases:

A simplified view of customer use cases in each layer can be seen from the perspective of two dimensions: X-axis that identifies if an AI effort/application is targeted at internal employees (buyer typically being IT) or external customers/partners (buyer typically being business owners). The Y-axis identifies if the deployment of the AI application is in a private instance (say on Google/MS AI or AWS Bedrock, or private data center) or if it is directly consumed as a 3rd party SaaS Service (such as ChatGPT) over the Internet. The below matrix helps clarify this view.


The above 2x2 matrix shows the different use cases driving adoption of LLMs in enterprises and the customer deployment priorities. Green cells indicate current investments/POCs. Yellow cells indicate active developments, and red cells indicate potential future investments.?

For the LLM layer, use of foundation models for employee knowledge access and creation of new content is predominant use case. Many large enterprises block SaaS deployed LLMs (such as block of chatGPT) but allow employees to query against private deployments (such as via Azure AI).?

The AI Copilots?

If mindshare is the yardstick of success at the foundational AI layer, revenue is arguably the yardstick in this layer. Just like in the internet boom, the internet infrastructure vendors (think AT&Ts) were critical but not the biggest business beneficiaries of the internet revolution. It was the applications that used this infrastructure to create new and interesting applications such as Google, Uber, and Meta. Similarly, in the AI era, big $$ applications are already emerging in the form of AI-copilots: as Nadela the CEO of Microsoft proclaimed in Ignite 2023 that they were a Copilot company , envisioning a future where everything and everyone will have a Copilot. Studies have shown potential 30+% gain in developer productivity , or 50+% gain in insurance claims processing productivity already. No wonder Nadela recently stated that Copilot is Microsoft’s fastest growing M365 suite product . It has already become a runaway leader in enterprise Copilot deployments. Skyhigh’s recent data confirms this with a 5000%+ increase in use of M365 Copilot in the last 6 months!?

More importantly it is the dominant genAI technology in use at large enterprises even though M365 copilot penetration is only at 1% of the M365 business – leaving huge headroom for growth. The impact on enterprises should not be underestimated - the amount of enterprise data being indexed, as well as the data shared with M365 Copilot is unprecedented. Almost all enterprises block ChatGPT and almost all of them also allow O365 Copilot which in turn uses Azure AI service (primarily OpenAI). The M365 add-on per-seat pricing for the Copilot is shaping up to be the model for all SaaS vendors to roll out their own Copilots, generating a lucrative revenue stream.

Copilot Use Cases:

The above 2x2 matrix shows the different use cases driving adoption of Copilots in enterprises and the customer deployment priorities. Green cells indicate current investments/POCs. Yellow cells indicate active developments, and red cells indicate potential future investments.?

Predominant usage of Copilots is either use of Microsoft O365/Github Copilot, or development of customer facing Copilots (typically RAG based). Employee facing productivity-enhancing Copilots & partner facing process efficiency related Copilots are also being developed organically especially with use of Microsoft Copilot Studio to create low-code/no-code custom Copilots. The actual implementation of RAG based AI apps brings a whole slew of products such as vector databases, data tools for training & inference pipelines, and low-code/no-code tools to help developers implement AI apps.

The AI Auto-pilots

And then there is the AI auto-pilots layer. This promises to be the layer that will break open the SI/consulting business that is now manual, expensive and is by some estimates almost 8-10 times the size of the SaaS/software business. While still early, these Agentic systems promise to excel in task analysis, introspection, task breakdown, and autonomous task execution in response to goals set [or implied] by users. This can potentially eliminate the human in the loop. There is a lot of VC funding currently targeting this space. A lot of the 75% of the recent YC cohort falls in this layer. Just last week, Salesforce announced Agentforce to boost employee productivity and Microsoft introduced Copilot Agents initially targeted at SMBs which can enable automation of business processes, as well as extension of their Copilot Studio to support custom Agents. In fact, the most recent release of Claude from Anthropic is a feature that allows AI to use computers on your behalf, enabling AI to use tools needed to complete a task autonomously – a blessing to users and a challenge to security folks.

There are autonomous agents for a personal assistant (ex. Multion.ai ), support agents (ex. Ada.cx , intercom.com ), developers (ex. Cognition.ai (devin), Cursor.com , Replit.com ), sales team (ex. Apollo.io , accountstory.com ), security operations (ex. AirMDR.com , Prophetsecurity.ai , AgamottoSecurity.com ), UX researcher (ex. Altis.io , Maze.co ), UX designer (ex. DesignPro.ai ), System Integrator (ex. TechStack.management ), project manager (ex. Clickup.com ), finance back office employees (ex. Prajna.ai ), or even an universal AI employee (ex. ema.co ). And for each of the startups above there are more than a dozen alternatives and many more functional and vertical Agent types, so the space is massive and growing rapidly. Some studies indicate that such agentic systems that automate actions could result in 1.5T$ of additional software spending, positioning these new breed of companies to be the next vanguard of Service-as-Software companies that could dominate the IT landscape.

Auto-pilot Use Cases:

The above 2x2 matrix shows the different use cases driving adoption of Agents in enterprises and the customer deployment priorities. Green cells indicate current investments/POCs. Yellow cells indicate active developments, and red cells indicate potential future investments.?

Given the complexity in developing production Agentic applications, enterprises have chosen a more immediate path of trialing Agents developed by SaaS vendors that are purpose built for different functional areas within an organization. The primary enterprise driver being one of automation and improved business processes. Some enterprises have also embarked on projects centered around custom Agentic applications (both of internal and external use cases) that should expect to go into production over the next year or so.


AI Security Needs and Opportunities

Not surprisingly, ever since the unveiling of ChatGPT just 2 years ago, new hitherto unknown security, privacy and governance lingo has become mainstream such as: prompt engineering, jailbreaking, hallucinations, data poisoning, etc. The industry has been quick to get its arms around these new issues and has resulted in some metrics used to measure the proclivity of various LLMs to exhibit these characteristics. OWASP has done good work in identifying the top 10 LLM risks that are comprehensive and are actively updated. In addition, EnkryptAI.com has done a good job of creating and maintaining a comprehensive LLM safety leaderboard that is a handy tool to benchmark the safety scores of leading LLMs across 4 dimensions of bias, toxicity, jailbreak and malware. The same analysis can also be done for custom LLMs via their redteaming.

However, given the full context of the AI stack, taking a narrow view of AI security is at best naive, and at worst lacks clear actionable guidance for enterprise AI project teams on what security risks need to be prioritized.

Each layer requires distinct issues to be considered in order to address the unique challenges imposed by potentially different buyers. For example, red-teaming to identify bias issues with models being deployed by customers is likely to be of interest to the application team whereas data leakage via 3rd party Copilots or data protection of? RAG/fine-tune models could be an information security, CISO, or CDO concern.


Security At Layer 1

Risk 1.1: Prompt engineering

As use of LLM prompts have increased, prompt injection attacks have become more prevalent. New techniques such as Cresecndomation , Skeletonkey , and many-shot have made jailbreaking a cottage industry. There are numerous examples of jailbroken GPT or new prompt techniques to carry out campaigns are available online and in the darkweb, such as WormGPT, FraudGPT, EscapeGPT, BadGPT, and DarkGPT.

Risk 1.2: Data leakage

In addition to the above risk, data leakage and inadvertent information disclosure have became a big concern for customers. An early publicly discussed case at Samsung quickly made customers vary of data leakage. It also illustrated how prompt data cannot be considered to be private by default, and that it could find its way into the model via model fine-tuning. There have been continuing privacy protections at the LLM layer but you will still need to opt-out to ensure that no prompt data finds its way into the model. Additionally, historically exploited ways of data exfiltration could be more easily deployed in the context of a long running conversation with LLMs. Being cognizant of issues like image markdown exploit in Bing chatbot as well as chatGPT will also need to be addressed especially with private deployments of LLMs.

Risk 1.3: Misconfigured Instances

Given the rapid changes in AI deployments, there are many configurations that may not be well understood by customers in a shared responsibility model. Configurations vary by LLM, PaaS and include privacy and security concerns such as whether prompts are used to train models by default, whether external data is integrated into the LLM, the permissions required to access the model, etc. that need to be appropriately configured. In addition to privacy settings , security related settings also should be addressed. For example, ChatGPT allows plugins for integration with 3rd party services. This can be a source of data leak and it will be important to ensure only trusted plugins are connected to the enterprise chatGPT instance.

While red teaming is not a new concept, it is particularly important across all layers of the AI stack since LLMs act as black boxes and their outputs are not predictable. For example, Anthropic's findings showed that LLMs could turn into sleeper agents with attacks hidden in model weights that may surface unexpectedly.

Securing Foundational AI: Best Practices

The need to understand and to guard against the above risks is real and the foundational AI companies themselves have added some built-in guardrails. However, a slew of LLM security startups have appeared each with their unique strengths (ex. Hiddenlayer , EnkryptAI , PrivateAI ) to complement and extend guardrails built into the infrastructure. Even though we are in the early innings, there has already been some M&A activity (ex Robust Intelligence being acquired by Cisco), and existing security vendors have touted significant growth in their AI security business. Expect more growth and consolidation in this space.

For internal-facing public deployments of LLMs, Secure Service Edge (SSE forward proxy) is the logical PEP (policy enforcement point) platform on which to extend controls to SaaS AI applications since enterprises already use it to discover, control access, and protect data to all SaaS applications and services. the decisioning on LLM specific threats and issues such as prompt injection attacks, or data privacy issues can be addressed via LLM guardrail PDPs (policy decision point). On the flip side, for deployments on private infrastructure, an AI firewall, a SSE reverse proxy, or enforcement via LLM APIs are viable options. Independent of the deployment model, continuous discovery, red teaming and configuration audits will also be crucial.


From a use case perspective, the above risks can be addressed by the following technologies. For internal facing public deployments of LLMs, SSE is the logical platform on which to extend controls to AI, while for deployments on private infrastructure, red teaming and LLM firewall are more relevant.


Security at Layer 2

Security risks differ for third-party AI Copilots used by employees (such as Microsoft Copilot) and custom AI Copilots developed by business teams that are customer- or partner-facing. For example, access to and manipulation of the knowledge graph of third-party AI Copilots is limited and so is the control of what data is used to train the model itself. Nevertheless, security concerns addressed at Layer 1 need to be augmented with a focus on three areas: data, context, and permissions to address broader Copilot security needs.

Rapid adoption of a RAG (Retrieval Augmented Generation) architecture for Copilots developed by vendors and enterprises alike leads to security risks over and beyond the risks identified at Layer 1.


Risk 2.1: Data poisoning

Risk 2.1.1: Model poisoning

Models are data. There are more than 250K of open source models (for example in HuggingFace) and even more proprietary ones. Earlier this year, researchers discovered more than 100 models in the ones they sampled in HuggingFace that were malicious. This will get worse as new techniques are being uncovered to inject code into Pickle files, and other ways to infiltrate the Model supply chain. Model poisoning can happen to even large projects. For example, hackers were able to gain access to Meta’s model repositories through exposed API tokens that could have allowed an adversary to silently poison the training data. Such deep supply chain attacks will be hard to trace and harder to fix.

Risk 2.1.2: Context poisoning

RAG applications use enterprise context information (typically stored in vector databases) to leverage LLMs to provide responses that are tailored to that environment, rather than a generic response. Prompts are enriched with this context so that the responses are more precise. Unfortunately, just like prompt engineering attacks, malicious Context engineering can lead to copilot responses that may be harmful. Microsoft Copilot creates a rich semantic index for all enterprise data such as email, documents, chats, meeting summaries, etc. used for pre-processing/grounding of prompts sent to the backend LLM. The ability to tap into this rich context for nefarious activities like spear phishing attacks was recently demonstrated at the Blackhat conference by the prolific security research team at Zenity . .?

Risk 2.1.3: Data supply chain poisoning

Imagine a developer copilot searching for a bug resolution in stackoverflow, finding a seemingly plausible fix and applying it only to find later that the stackoverflow comments were deliberately crafted by bad actors to insert malicious code. This scenario is not as futuristic as you might think. There is already a case in the wild with a malicious PyPi Python package orchestrated by the cybercriminal campaign tagged Cool Package. This kind of issue where Copilots will find seemingly useful information and use that as part of the response will get even more challenging to thwart as bad actors exploit data supply chains for gain via public data poisoning. A related issue in the form of typosquatting has also been observed in the wild.

Another form of data supply chain poisoning in the MS Copilot context can be carried out by simply sharing a malicious file with the target user. This attack will work even if the user does not accept the share and is not even aware of this shared malicious file.

Interestingly, to protect data from the model it is possible to use an understanding of the working of LLMs to deliberately poison sensitive data so that it does not get digested by the model. One approach that gained popularity with creatives (called Nightshade ) could also be extended to enterprise use cases to protect sensitive data from AI.

Risk 2.2: Excessive Permissions?

The above example that makes a user vulnerable to a Remote Copilot Execution attack highlights the tenuous relationship between data permissions and Copilot. The mere fact that the user has access to a file implies the files will be indexed by a Copilot implies a limitation in how traditional security controls could be bypassed.

Risk 2.2.1: Publicly accessible Copilots

Just like public S3 buckets were a thing during the early days of Cloud, publicly visible Copilots due to misconfigurations are a thing now. Researchers at Zenity identified thousands of open custom Copilots created using the MS Copilot studio and have released a tool to search for them. Microsoft has since changed some default configurations but configurations need to be monitored to ensure copilots aren’t inadvertently made public.

Risk 2.2.2: Shadow Copilot usage

Co–pilots abound. Almost all SaaS vendors have Copilots (SFDC Einstein, SAP Joule, Crowdstrike Charlotte, Github Copilot to name a few) and each of them have varying degrees of exposure . From a risk perspective, it will be important to gain visibility into usage of these copilots, configurations of these copilots and the data that is being shared/connected with these.

Risk 2.2.3: Excessive User permissions

Microsoft’s own State of the 2023 Cloud risk report found that >50% of users are super-admins more than half of which was considered high risk. Oversharing at a file level is significantly more rampant since labeling & maintaining permissions is extremely cumbersome, and is a shared responsibility. Due to complex labels and permission inheritance mechanisms (for example Copilot generated content does not inherit labels) and how these permissions interact with direct & group level permissions, Sharepoint permissions, guest/link access policies etc that are relied on by the Microsoft Graph Knowledgebase, it is hard for enterprises to get a good handle on data leakage risk. Also, a study conducted by Cornell University revealed that approximately 40% of programs generated using GitHub’s Copilot contained vulnerabilities , emphasizing the real-world risks of granting AI systems excessive permissions without strict oversight.

Risk 2.3: Connected Copilot Applications

Microsoft Copilot is a great example of the value of a Copilot that has visibility into data across multiple products in the product suite (such as email, Teams, Sharepoint, etc.) but also to extend its value by interacting with other sources of knowledge including 3rd party applications such as Salesforce, ServiceNow, Zendesk, etc. The integrations increasingly are not restricted to consumption of data but with the use of orchestration enabled by the Power Platform, Copilots can take action as well. Consider the warning that Microsoft issued around the Midnight Blizzard campaign . Microsoft called out the issue of compromised accounts that can in turn grant high permissions to OAuth applications (in this case Office 365 Exchange) that enabled bad actors to maintain persistent connection to the Microsoft environment. This threat is compounded with connected applications to Copilot due to the sheer breadth of data and applications that it touches.

Securing AI Copilots: Best Practices

The above risks can be addressed based on the use cases. For use of external third-party AI Copilots (such as Microsoft Copilot or Github Copilot), the first requirement is to get an understanding of shadow AI Copilot/AI usage via discovery. Skyhigh’s extended registry of AI risk attributes can be a handy way to discover third-party AI Copilots used by enterprises. Data protection at this layer is best addressed with a combination of network-based and API-based data security. Inline controls can be either applied via Secure Service Edge (SSE) forward proxies (for example, for file upload to AI Copilots) and application/LLM reverse proxies for home-grown AI Copilots/Chatbots, or browser-based controls for application interactions that use WebSockets.?

Another risk centers around other applications that can connect to AI Copilots. For example, Microsoft Copilots can be configured to connect to Salesforce bidirectionally. The ability to query Microsoft Copilot configurations to gain visibility into such connected applications will be essential to block this vector of leakage.

In addition to the AI Copilot application-level controls, special consideration needs to also be given to the retrieval augmented generation (RAG) knowledge graph and indices that are used by AI Copilots. Third-party AI Copilots are opaque in this regard. Hence organizations will be dependent on API support by AI Copilots to introspect and set policies of the vector/graph stores used. In particular, a challenging problem to address at this layer centers around permission management at the data level as well as privilege escalations that may inadvertently arise for AI Copilot-created content. Many AI Copilots do not yet have robust API support for inline/real-time control further compounding the challenge. Finally, APIs can be used for near real-time protection and on-demand scanning of the knowledge base for data controls as well as scanning for data poisoning attacks.




Security at Layer 3

With increased ability to automate actions (such as in the Microsoft Power platform), Copilots can autonomously take action on the user’s behalf. Unbeknownst to the user, the AI Copilot is transforming a user to the human Copilot. This is further accelerating with Large Action Models and related technologies that make creation of Agents simple, while simultaneously introducing new concerns.

Agentic systems build on large language models (LLMs) and retrieval-augmented generation (RAGs). They add the ability to take action via introspection, task analysis, function calling, and leveraging other agents or humans to complete their tasks. This requires agents to use a framework to identify and validate agent and human identities as well as to ensure that the actions and results are trustworthy. The simple view of an LLM interacting with a human in Layer 1 is replaced by a collection of dynamically formed groups of agents that work together to complete a task, increasing the security concerns multi-fold. In fact, the most recent release of Claude from Anthropic is a feature that allows AI to use computers on your behalf, enabling AI to use tools needed to complete a task autonomously – a blessing to users and a challenge to security folks.


Risk 3.1: Unauthorized Actions

Automatic tool invocation action taken by Copilots and Agents is an issue since in many cases these Agents have elevated privileges to perform these actions. And this is an even bigger issue when agents are autonomous where prompt injection hacking can be used to force nefarious actions without user knowledge. In addition, in multi-agentic systems the confused deputy problem is an issue with actions that can stealthily escalate privilege.?

Risk 3.2: Rogue Agents

Some researchers recently claimed AI has passed the Turing test . Rogue agents built on this AI can be deployed to deceive humans. So, the issue of establishing whether or not a human is talking to an AI system needs to be addressed in order for Agents to collaborate with humans and vice-versa. It has manifested in cases where ChatGPT recruited a human through taskRabbit to pass the Captcha test using deceit. These kinds of autonomous actions will become more common with agentic systems.

Risk 3.3: Agent Identity and Permissions

A fundamental issue with multi-agentic systems is the need to authenticate identity of Agents and authorize client agent requests. This can pose a challenge if agents can masquerade other Agents or if requesting Agent identities cannot be strongly verified. In a future world where escalated privilege Agents communicate with each other and complete tasks, the damage incurred can be instantaneous and hard to detect unless fine grained authorization controls are strictly enforced.

Risk 3.4: Human Legal Identity

In certain cases, delegated secret sharing is essential to complete a task, for example through a wallet (for a personal Agent). In the enterprise context, a Financial Agent may need to validate the legal identity of humans and their relationships – there is no standardized way for Agents to do so today. Without this, Agents will not have a way to collaborate with humans in a world where humans will increasingly be the Copilots.

Risk 3.5: Deep Fakes

On the other end of the spectrum, in a deep fake scam a finance worker in HongKong paid out 25M$ assuming a deepfake version of the CFO in a web meeting was indeed the real CFO. This highlights the increasing risk of Agents that can impersonate humans with impersonating humans becoming easier with the latest multi-modal models, OpenAI recently warned that a 15 second voice sample would be enough to impersonate the voice of a human, deepfake videos are not far behind as illustrated by the HongKong case.?

Securing Autopilots: Best Practices

In addition to the security controls discussed in the previous two layers that include LLM protection for LLMs and data controls for Copilots, the Agentic layer necessitates the introduction of an expanded role for identity and access management (IAM), as well as trusted task execution.



Summary


Even though there are some overlaps, the 3 types of AI security technologies to comprehensively address emerging needs in enterprises are LLM/AI firewalls, SSE, and Agentic security. The LLM/AI firewall with robust guardrails and red teaming form the core the Layer 1 needs. Companies such as HiddenLayer and EnkryptAI are examples of startups positioned to address this requirement. At the other end of the spectrum is Agentic security startups such as Knostic and AISentry that are addressing the complex landscape of permission management ensuring RAG data permissions are enforced and that Agents and Copilots are authorized appropriately for fine grain actions.?

In the middle is data protection that includes managing risk of data loss and protection against malicious direct/indirect data injection/poisoning attacks; leading SSE vendor Skyhigh Security recently announced the extension of the SSE platform to support AI use cases. In addition to SSE vendors, purpose built solutions for data protection and threat analysis of Copilot deployments by startups like Zenity have innovative solutions. In addition, solutions that provide data visibility and control with context and data lineage such as CyberHaven could prove valuable in the genAI era where humans and Agents collaborate on data.?


In addition to security at these 3 layers, given the unpredictable nature of AI application faults there is a need to perform continuous red teaming (Layer 1 red teaming solutions such as EnkryptAI will likely extend to cover this across all layers) and should be comprehensively adopted by enterprises. Over time, consistent monitoring and governance of enterprise AI deployments will also become critical for businesses to be more agile in their AI journey. Since AI deployments are complex and span various technology stacks, solutions like Precize.io will become essential tools for bottoms up AI governance, while other still stealth-mode startups complement it by providing top-down visibility that empowers business analysts to triage, reproduce and fix production issues unique to the new breed of AI applications especially with long-running conversations and autonomous actions.?

?

Rakesh L.

A seasoned executive with 23+ years of global industry experience in Services Delivery

5 天前

Sekhar Sarukkai amazing read on AI and Security! A must read for all professionals to understand security landscape in AI world.

Shanti Corrigan MA, CSPG

Helping connect people, ideas and resources in service to a more equitable, just, secure and sustainable future.

1 周

A great read love it! Flagging also for Jessica Newman !

Ramesh Sarukkai

CPO & CTO @ Rate, Faculty at UCal Berkeley, Founder Prajna.AI

2 周

Great podcast - awesome insights!

Nilom Sen

Vice President, Global Support & Technical Account Management

2 周

Sekhar Sarukkai Great read, Insightful and gives a 360-degree view of AI security, from implementation strategies to risk mitigation.

Muthukrishnan Manoharan

Principal Engineer at Broadcom

3 周

Great read! It really clarified the AI landscape. Thanks for the clear breakdown.

要查看或添加评论,请登录

Sekhar Sarukkai的更多文章

  • Part 2: Angel Investing To Power The New Software Stack

    Part 2: Angel Investing To Power The New Software Stack

    Sekhar Sarukkai Angel investing is high-risk as the majority of the companies will fail to reach full potential, and by…

    4 条评论
  • Part 1: Technologies Driving The Software Economy

    Part 1: Technologies Driving The Software Economy

    Sekhar Sarukkai Over the last decade, we have seen a tectonic shift in the software stack, unlocking many opportunities…

    19 条评论
  • Digital Global Warming

    Digital Global Warming

    Throughout history, creation and consumption of goods have resulted in the problem of waste. Today, our most valued…

    1 条评论

社区洞察

其他会员也浏览了