Generative AI in cybersecurity: 3 rules to make the best of it
"Generative AI" through "MidJourney"


AI-based tools and methods have taken the world by storm, and it would be safe to say that I have rarely seen such excitement. The last time there was so much "buzz in the air" was in the early days of cloud-native services a good few years back. The promise of the public cloud has borne fruit in more ways than we could have ever imagined. Buoyed by that experience, the collective excitement around AI is all the more palpable.

"We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten. Don't let yourself be lulled into inaction." (Bill Gates, The Road Ahead, 1995)

It is becoming difficult to imagine industries and professions that could ever hope to remain untouched by the immense potential of AI. However, there is also a palpable fear that AI-based automation and tools will cause job losses across entire professions, and will continue to reinforce the algorithmic biases that are already so well entrenched in the industry.

I have often had intense discussions with colleagues on how a cybersecurity professional should think about AI and the areas where it can be applied. I have attempted to address that question here, but from a different angle: to me, the secret is to think about the rules that drive the creation of use cases, rather than the use cases themselves.

Rule 1: Think probabilistic outcomes (not deterministic ones)

To get the most value out of any AI-based tool, it is important to think of use cases and scenarios that cannot be addressed by any other mechanism. The best examples are probabilistic scenarios, where the outcome depends on many influencing events, each with varying and unknown probabilities. Examples include:

  • Generating in/out conditions for business continuity and other failure mode testing
  • Translating abstract threat intelligence reports into realistic scenarios, based on models and datasets that reflect realistic conditions within the organisation

As the world moves towards an "assume breach" mindset, there is also a need to take an all-of-enterprise view and constantly simulate realistic scenarios. Historically, that meant long, expensive consulting engagements or a lot of preparation time. With AI, we will be in a position to plan better, move faster and be prepared for a range of eventualities.

Organisations should actively explore building probabilistic scenarios of interest and start proof-of-concepts around them.
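
To make that concrete, here is a minimal sketch of what such a proof-of-concept could look like: a simple Monte Carlo simulation over a handful of failure events. The event names, probabilities and recovery times are invented purely for illustration; in practice they would come from the organisation's own risk register, or from a model trained on its data.

```python
import random

# Hypothetical failure events with assumed daily probabilities and recovery times (hours).
# These values are illustrative only; a real exercise would draw them from the
# organisation's risk register or from an AI model trained on internal data.
FAILURE_EVENTS = {
    "primary_datacentre_outage": {"probability": 0.002, "recovery_hours": 12},
    "identity_provider_outage":  {"probability": 0.004, "recovery_hours": 4},
    "ransomware_on_file_shares": {"probability": 0.001, "recovery_hours": 48},
}

def simulate_year(events: dict) -> float:
    """Simulate one year of operations and return total downtime in hours."""
    downtime = 0.0
    for _ in range(365):
        for event in events.values():
            if random.random() < event["probability"]:
                downtime += event["recovery_hours"]
    return downtime

def run_monte_carlo(events: dict, runs: int = 10_000) -> None:
    """Estimate the distribution of annual downtime across many simulated years."""
    results = sorted(simulate_year(events) for _ in range(runs))
    mean = sum(results) / runs
    p95 = results[int(0.95 * runs)]
    print(f"Mean annual downtime: {mean:.1f} h, 95th percentile: {p95:.1f} h")

if __name__ == "__main__":
    run_monte_carlo(FAILURE_EVENTS)
```

Even a toy simulation like this captures the probabilistic framing: the output is a distribution of outcomes to plan against, not a single deterministic answer.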

Rule 2: Focus on known unknowns

Every organisation has its own unique business realities, with corresponding threat perceptions and residual risks. Usually the operational security teams "just know" what these are. And almost always, they lack the resources to dive in and explore the hidden risks.

This is another area where AI-based tools are already starting to make a difference. AI-based techniques not only enable us to detect and analyse large masses of seemingly unrelated data, but also help trigger cybersecurity responses much faster, and with a higher level of confidence. Some of the scenarios that surfaced during recent discussions include:

  • Detecting anomalies in authorisation data sets and searching for "toxic combinations" (sketched after this list)
  • Detecting abnormal patterns in network traffic and data flows within the company
  • Testing hypotheses on datasets, especially to understand the context between disparate applications and infrastructure elements, to identify "living off the land" compromises
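
As a hedged illustration of the first scenario, the sketch below applies an unsupervised anomaly detector (scikit-learn's IsolationForest) to a flattened table of access events. The file name, column names and contamination rate are assumptions made for the example, not a prescription.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical input: one row per access event, already reduced to numeric features.
# The file name and column names are placeholders for illustration.
events = pd.read_csv("access_events.csv")
features = events[["role_count", "privilege_score", "off_hours_ratio", "distinct_systems"]]

# Unsupervised anomaly detection: flag the rarest combinations of entitlements and behaviour.
model = IsolationForest(contamination=0.01, random_state=42)
events["anomaly"] = model.fit_predict(features)  # -1 marks an outlier

# Outliers become candidates for "toxic combination" review by the security team.
suspects = events[events["anomaly"] == -1]
print(f"{len(suspects)} of {len(events)} access events flagged for review")
print(suspects.head())
```

The specific algorithm matters less than the pattern: an unsupervised model can surface the "known unknowns" that the operational team suspects exist, but never has the time to hunt for manually.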

Rule 3: Simulate newer channels of compromise

The ability to easily (and cheaply) simulate new and emerging areas of compromise is, of course, the holy grail for cybersecurity teams. Use cases here are limited only by the imagination of the blue teams. Some of the interesting use cases from my personal "wishlist" are:

  • Generating realistic anomalous network/data traffic for purple teaming against algorithmic adversaries
  • Generating synthetic but highly context-realistic versions of privacy datasets that can be used for security testing as well as in "honeypot databases" (sketched below)
  • Using AI tools on real IoT and OT data to simulate compromise and attack scenarios
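
As a sketch of the second item on the wishlist, the example below uses the Faker library to generate a context-realistic but entirely synthetic customer table that could seed a honeypot database or a security test environment. The schema and field choices are assumptions made for illustration; a real honeypot would mirror the organisation's actual production schemas.

```python
from faker import Faker
import csv

fake = Faker()     # default locale; swap in the organisation's locale for more realism
Faker.seed(1234)   # reproducible synthetic data

# Hypothetical schema mirroring a customer table; adjust to match real production schemas.
FIELDS = ["customer_id", "full_name", "email", "phone", "street_address", "card_last4"]

def synthetic_customer(customer_id: int) -> dict:
    """Generate one entirely fictional customer record."""
    return {
        "customer_id": customer_id,
        "full_name": fake.name(),
        "email": fake.email(),
        "phone": fake.phone_number(),
        "street_address": fake.street_address(),
        "card_last4": fake.credit_card_number()[-4:],
    }

with open("honeypot_customers.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=FIELDS)
    writer.writeheader()
    for i in range(1, 1001):
        writer.writerow(synthetic_customer(i))
```

Because every record is generated, any appearance of this data outside the honeypot is by definition evidence of compromise, which is exactly the signal a blue team wants.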

Needless to say, this ability cuts both ways and allows attackers to generate realistic attack payloads as well. But organised hacking groups have historically been at the cutting edge of technology, at the forefront of weaponizing every new technological advance. Generative AI now gives blue teams the ability to level the playing field.

