Musings on ChatGPT vs. Identity and Access
Credit to midjourney on Discord, sorry Boston Dynamics!


Chatting football, kids and holidays at dinner the other evening with some old friends, the conversation moved, randomly, to ChatGPT. My friend mentioned the implications of its humanlike responses for coursework; he, an ex-teacher, and my wife, a current English teacher, discussed the possibility that this tool could make coursework defunct, and what mitigations the profession could put in place to identify AI-generated content. At this point I glazed over (apologies to any teachers reading), but the topic has resonance for me elsewhere.

The concept of AI-generated problem solving has been a subject of Xendl’s internal conversations for the past six or seven months. Not so much for advertising copy, and certainly not for writing Macbeth essays; no, our conversations have centred around the challenges we (consultants and developers of software) face, or more accurately, the problems our prospective customers face, and how machine-powered intelligence could either augment human decisions or completely replace them.

Now, we’ve already embarked on this journey “technically” in places, primarily in managing ownership of master data in GRC systems. We’ve built applications and tools that identify business leavers who own specific critical items of master data in the GRC system and, by using some basic intelligence, can help the business reassign that master data without business folk (as I like to call them) ever having to contact a helpdesk or GRC administrator… but that’s just a start. What common problems do organisations face today, across the globe, irrespective of the underlying applications they use to run their business, the tools they use to manage risk and the line of business they operate in?
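To make the leaver-reassignment idea above concrete, here is a minimal, purely illustrative sketch. Every name in it (`User`, `MasterDataItem`, `suggest_new_owner`, the department-matching rule) is a hypothetical stand-in, not Xendl’s actual tooling or any GRC system’s API; real implementations would draw on far richer organisational data.

```python
# Toy model only: flag leavers who still own critical master data and
# suggest a reassignment. All class and function names are hypothetical.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    department: str
    is_active: bool  # False once HR marks the person as a leaver

@dataclass
class MasterDataItem:
    item_id: str
    owner_id: str
    department: str  # business area the item belongs to

def find_orphaned_items(users, items):
    """Return master-data items whose owner has left the business."""
    leavers = {u.user_id for u in users if not u.is_active}
    return [i for i in items if i.owner_id in leavers]

def suggest_new_owner(item, users):
    """Very basic 'intelligence': pick an active user in the same department."""
    candidates = [u for u in users if u.is_active and u.department == item.department]
    return candidates[0].user_id if candidates else None

users = [
    User("jsmith", "finance", is_active=False),  # a leaver
    User("akhan", "finance", is_active=True),
]
items = [MasterDataItem("COSTCTR-100", owner_id="jsmith", department="finance")]

for item in find_orphaned_items(users, items):
    print(item.item_id, "->", suggest_new_owner(item, users))
```

The point of the sketch is the shape of the workflow, not the matching rule: the “basic intelligence” here is a single department lookup, which is exactly the kind of heuristic that richer pattern recognition could replace.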

Now that is the billion-dollar question, and one I think can’t be precisely answered right now. To some extent, with artificial intelligence and technology generally, the problems we can solve, or have the imagination to solve, are limited by the extent to which technology can affect the world around it. By that I mean that, right now, you can’t ask ChatGPT to bake you a cake. The same limitations exist in my world. There are already hugely impressive machine learning tools identifying anomalies in the identity space, ensuring that the right people are accessing the right platforms from the right places, but they aren’t yet (as far as I know) restricting people’s physical access to buildings via a Boston Dynamics-like automaton. Someone confirm they are not doing this yet! It’s a scary thought!

Anyway, back to my musings: there is a place (lowbrow though it may be) where AI and ML are massively underused and which is, to my mind, ready for a smattering, no, let’s be brave, Nick, a deluge of augmentation, pattern recognition or even the hallowed artificial intelligence (insert the AHHHHHH noise from HBO programmes here for dramatic effect). That area, my friends, is intra-system access risk and least privilege. Decisions made in these spaces are both complex and simple at the same time: complex in terms of the operational impact of adding or removing access for one or several users and how that manifests itself in varied authorisation concepts, and simple in terms of the principles that guide us in the pursuit of least privilege; for example, one should not have more access than one needs. But my obsession with the relationship between complexity and simplicity will have to wait for another thought piece!

Why do I think a machine could so effectively manage this relationship between giving and taking away? Here comes the finale.

I’ve managed segregation of duties issues for most of my professional life. There are three ways to deal with such an issue: mitigate, remediate or accept. While these three options are all relatively “simple”, the impact of a poor decision, the wrong choice of the three, on the individuals within an organisation, its teams and its processes can be almost catastrophic. Predicting the outcome of one of these decisions takes almost superhuman foresight: a view of processes, people, behaviours and impacts all at the same time… extremely “complex”.
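The “simple” side of this can be sketched in a few lines. The fragment below is illustrative only: the two conflict rules, the permission names and the function signatures are all invented for the example, and real GRC rule sets (in SAP GRC or any comparable platform) are vastly richer. What it does show is that detecting a conflict and recording one of the three decisions is mechanical; it is judging the *impact* of that decision that is hard.

```python
# Illustrative sketch: detect a segregation-of-duties conflict and record
# one of the three possible decisions. Rules and names are invented.
SOD_RULES = [
    # (conflicting pair of permissions, risk description)
    ({"create_vendor", "post_payment"}, "User could create a vendor and pay it"),
    ({"create_po", "approve_po"}, "User could approve their own purchase order"),
]

DECISIONS = {"mitigate", "remediate", "accept"}

def find_conflicts(user_permissions):
    """Return the SoD rules violated by a user's permission set."""
    perms = set(user_permissions)
    return [(pair, risk) for pair, risk in SOD_RULES if pair <= perms]

def record_decision(conflict, decision):
    """Attach one of the three allowed decisions to a detected conflict."""
    if decision not in DECISIONS:
        raise ValueError(f"decision must be one of {sorted(DECISIONS)}")
    pair, risk = conflict
    return {"conflict": sorted(pair), "risk": risk, "decision": decision}

conflicts = find_conflicts({"create_vendor", "post_payment", "view_reports"})
for c in conflicts:
    print(record_decision(c, "mitigate"))
```

Everything the machine learning would actually have to do, weighing processes, people, behaviours and downstream impacts before choosing between those three options, sits outside this sketch, which is precisely the point.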

What better mechanism to analyse all of the above complexity and make simple decisions, predicting impacts with some accuracy than a machine?

So that’s the vision: use intelligence to manage authorisations or permissions within any number of applications, balancing the need for operational efficiency (I need access to this or that) with making compliant decisions (mitigation, remediation or acceptance), all in the blink of an eye and all, for the most part, without a human being!

Think of the hours saved when administration teams no longer run periodic access reports highlighting SoD issues, and lines of business no longer spend painstaking hours or days reviewing those reports and making individual decisions. Think of the production issues avoided when the wrong access is no longer removed from the wrong person, halting production or leaving a key process unable to complete. Think of it, friends, as you stare into the middle distance…

Is it possible? You bet your bottom dollar it is. Can ChatGPT do it? Not as far as I know; but that’s not to say Xendl won’t…

#ChatGPT #SAPGRC #Xendl #MusingsOnAI #Artificialintelligence #SAPSecurity #UKSOX #SOX #Compliance #GRC
