Segregation of duties in RPA: Yes, but focus on the real risks
And here we go again with one of the most famous security topics: Segregation of Duties (hereinafter, SoD).
The basic concept underlying SoD is that no employee or group should be in a position both to perpetrate and to conceal errors or fraud in the normal course of their duties.
Without any doubt, maintaining appropriate SoD is critical from a security perspective. But enforcing SoD at all levels is somewhat questionable in the context of unattended RPA.
Should the definition of SoD apply to virtual employees performing "batch" operations that require no user intervention? If the security controls, and SoD controls in particular, governing the RPA environment are designed and operating effectively, should you really care about SoD in the back-end systems where bots are actually transacting?
My simple response would be no, you should not. You may be thinking that this is a bold statement.
Reality check
I came across so many articles, research papers and blogs highlighting the importance of performing (and ultimately enforcing) SoD checks when provisioning access to bot accounts in back-end transactional systems.
The message is that granting conflicting activities (i.e. allowing SoD violations) to a bot account could potentially lead to a major security breach.
The recommendation is to treat bot identities (and underlying accounts) the same way you treat human identities. Despite the fact that bot accounts are defined as “dialog” accounts similar to human accounts (to enable interactive system access from the graphical user interface), it is worth reminding ourselves that bot accounts are not used in the same way as human accounts.
Why? Simply because the behavior of a bot is dictated by its code (and, consequently, by written rules). Humans are all capable of behaving profoundly unethically; an RPA bot is not. A bot is not going to magically go rogue and start performing fraudulent activities. It simply follows rules that have been codified by humans.
The principle of least privilege (the practice of limiting users' access rights to the bare minimum permissions) should not apply to bot accounts that run in an unattended manner. Limiting the actions of bots by restricting authorizations at the account level (and enforcing SoD) in the back-end systems is not a suitable approach for 2 main reasons:
- The first reason is operational: it limits the ability to reuse existing bot accounts to execute other automated activities. If you create an account per bot (limited to the activities you are trying to automate in a given process/ subprocess), you might not “sweat the asset”, meaning that the account might be under-utilized. An account should ideally be used 24/7, and this is why bot orchestration/ coordination is essential to ensure full utilization of your back-end bot accounts.
- The second reason is financial: it increases the number of bot accounts created across all your back-end transactional systems. More accounts mean more user licenses, more maintenance and ultimately more costs.
So what are the key risks arising from the lack of SoD in bot accounts?
There are 2 critical risks that, in my view, require maximum attention.
- The first is that an individual could potentially obtain bot credentials (via password-based attacks/ exploits) and use them to transact directly with back-end systems (remember also that someone ultimately needs to know and enter bot credentials in the RPA tool's password vault/ credential manager, and that individual might share the passwords with many more people). Given that bot accounts have been granted wide access with no SoD, the individual would be able to "directly" perpetrate fraudulent activities by misusing bot accounts.
- The second risk is that a developer could alter RPA code/ programs to modify the activities performed by bots (altering existing statements/ routines), promote them into the production control room/ command centre and ultimately schedule bot execution runs. Given that bot accounts have been granted wide access, the individual would be able to "indirectly" perpetrate or conceal fraudulent activities.
These risks can definitely be mitigated by implementing the appropriate controls within your RPA processes and underlying technology.
Controls within RPA: this is where the focus should be
It is imperative to implement the right set of controls within RPA processes/ supporting solutions in order to mitigate the 2 risks highlighted previously.
There are 3 key controls that must be deployed: 1 process-related and 2 security/ access-related.
From an access control perspective, you should ensure that:
- Roles are appropriately segregated within your RPA operating model governing your processes and technology.
- Credentials of bot accounts are stored in an encrypted vault, ideally external to the RPA application, with a password rotation mechanism in order to limit the lifespan of bot account passwords. New random, strong passwords should be automatically generated via an auto-password-change feature (replacing existing passwords across all connected back-end systems) in order to ensure that no individual actually knows the bot account passwords (eliminating the risk of unauthorized usage). Credentials of highly privileged/ sensitive accounts must be rotated frequently, including after each use (one-time passwords).
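The rotation mechanism above can be sketched in a few lines. This is a minimal illustration, not a real vault API: the in-memory `vault` and `backend` dictionaries, and the function names, are hypothetical stand-ins for a credential manager and a connected back-end system.

```python
import secrets
import string

# Character set for generated passwords (illustrative policy).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 24) -> str:
    """Generate a random, strong password that no human ever sees."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate_bot_credentials(vault: dict, backend: dict) -> None:
    """Rotate every bot account password in the vault and push the new
    value to the connected back-end system (hypothetical stores)."""
    for account in vault:
        new_password = generate_password()
        vault[account] = new_password    # update the external vault
        backend[account] = new_password  # propagate to the back-end

# Toy in-memory stores standing in for a real vault and back-end.
vault = {"BOT_SAP_01": "old-secret"}
backend = {"BOT_SAP_01": "old-secret"}
rotate_bot_credentials(vault, backend)
assert vault["BOT_SAP_01"] == backend["BOT_SAP_01"] != "old-secret"
```

The key point is that the new password is generated and distributed entirely by machine, so no individual ever learns it.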
From a process-control perspective, code profiling is by far the most important activity within your change management process and should be at the top of your bot production entry checklist (refer to my post "RPA: 6 checks that you must perform before releasing a bot in production").
You should spend enough time during RPA deployment defining and implementing a code analysis process in order to review compliance with particular security and coding standards, but more importantly to detect unauthorized injection of malicious code. Code review should be done in a very rigorous way, using an automated mechanism.
You can have SoD conflicts within a given RPA code base, but these conflicts should be “sane”, meaning that no malicious statements or routines should alter the legitimate data inputs feeding bot processing. As an example, if the same bot can both maintain vendor bank accounts and process vendor payments, bot promoters (who usually also play the role of code reviewer) should definitely ensure that developers have not injected a statement that adds a fraudulent default bank account during master data maintenance.
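A very simple form of this automated check is a deny-list scan over the bot source before promotion. The patterns and function names below are purely illustrative assumptions, not a real RPA scripting API; a production scanner would be far more sophisticated.

```python
import re

# Hypothetical deny-list: patterns that should never appear in a bot
# script that maintains vendor master data (e.g. a hard-coded bank
# account literal, or dynamic code execution).
SUSPICIOUS_PATTERNS = [
    r"IBAN\s*=\s*['\"]",   # literal bank account embedded in code
    r"\beval\s*\(",        # dynamic code execution
    r"\bexec\s*\(",
]

def scan_bot_code(source: str) -> list:
    """Return the suspicious patterns found in the bot source, if any."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, source)]

# A tampered script that silently sets a default beneficiary account.
code = 'update_vendor(v); IBAN = "DE89370400440532013000"'
findings = scan_bot_code(code)
assert findings  # the promotion gate should block this change
```

A hit on the deny-list would stop the promotion and route the code back to the reviewer.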
Remember that bot promoters are the gatekeepers of your RPA production environment and they should be appropriately trained.
Once the code is in production and new changes need to be pushed/ imported again into the production control room, you should perform an automated code comparison to ensure that the baseline code has been altered in accordance with the new requirements (or bug fixes) previously validated by the RPA business analyst.
As a closing note, I just wanted to re-emphasize the need to focus first on the real, big security/ access risks within your RPA processes and supporting technology. By implementing appropriate controls at that level (such as the ones mentioned above), you will naturally mitigate most of the security/ access risks in your back-end transactional systems/ applications. As simple as that!
I hope you enjoyed the reading, and I would be very interested in hearing your thoughts.
Opinions expressed are solely my own and do not necessarily express the views or opinions of my employer.