Managing customer outcomes using low-code platforms and event-based architectures

I have republished this article under our LakeTree brand, as the K2 platform was acquired by NINTEX. Over the intervening two years, even more organizations have begun investigating or implementing event-based architectures, so I thought a refresh of the original article might help folks understand how to use low-code DPA and RPA platforms in conjunction with event-based architectures to achieve loosely coupled business processes and technical enablers.

Organizations are increasingly relying on their IT and process architectures to provide customers with greater levels of service, satisfaction and value. This has been achieved through the (sometimes painfully rapid) expansion of the IT and process landscape within organizations. Each expansion progressively adds to the coupling of systems, processes and business rules. The value chain is further strained when autonomous departments deliver service to other departments to fulfill customer outcomes. The expanded delivery chain erodes data and rule integrity and increases the complexity of adding the next piece of innovative functionality. Test cycles become extended and unintended side effects crop up across the landscape. This inevitably leads to the need for more time, unpredictable delivery timeboxes, larger budgets and delays to further initiatives. Sometimes the response is to conduct large-scale process re-engineering projects to fix the fragility across the ecosystem.

In this article I cover some actions that customers have taken to gain “control” of their process and execution landscape, so that their business can operate a little less encumbered by the weight of historical processes and IT platforms.

Organizations are constantly seeking to deliver more value to their customers, generally by releasing new products or services through digital channels. These offerings are underpinned by sophisticated technology. Unfortunately, as more offerings are added, the internal complexity and the interdependencies between systems and business rules start to slow the pace at which the business can reliably bring new or enhanced solutions to its customers.

The reasons for this are varied and complex; among them are the way business processes are structured and the stateful way in which each process step or work context step is initiated. In the software development space these concerns were identified long ago and mitigated through stateless asynchronous interactions, modularization and architecture layering (admittedly, not perfectly).

The same IT architecture patterns and thinking could be applied to a value chain. If we could make work context and process steps stateless with respect to each other, we would be able to scale business processes independently, make them more robust and alleviate, to some extent, the tightly coupled business process interactions we have created. I fully acknowledge that a process is by definition a series of progressive, rules- and data-driven steps, and that after some refinement there is no further scope for abstraction and decoupling. This discussion is aimed more at coarse-grained business systems or "work context" steps within a value chain that execute across technology stacks.

Imagine a scenario where, owing to a tactical requirement, a new set of temporary and distinct activities needs to be injected into a value chain. A current example might be COVID-type health assessments or other legislation. This would typically require changes to one or more solutions, workflows and forms, plus changes to service endpoint definitions at a technical level. This is, of course, followed by a regression testing campaign (automated, if the investment was made). Once the temporary requirement is no longer needed, we are left with redundant code that may be too expensive to remove, or where folks are fearful of opening up that "complex" component and regression testing the full stack again.

Enter event-based architectures with stateless business solution fortresses. A solution fortress can be seen as a self-contained universe; SAP, other ERP solutions or independent systems of record are good examples. This means they are internally consistent and contained (data and business logic), and they execute only their allotted outcome in the course of a value chain execution. They do not try to coordinate data or business transactions outside their universe.

Thus, a value chain outcome becomes a series of solutions that, independently of each other, execute their contained function with the aim of fulfilling a goal. What madness is this mythical landscape?

To bring this magic into being, there would need to be some underlying rules of engagement, principles and technical patterns. The most prominent would be adopting an asynchronous pattern for service consumers. The environment would also need a controller to ensure once-and-only-once guaranteed delivery to event subscribers (paired with event re-submitters, for when the operations team needs to reinitiate or complete an event cascade). It may also be helpful to adopt canonical data definitions for event subscribers and publishers, and to expose the endpoints through a virtualization and abstraction layer.
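
To make these patterns concrete, here is a minimal sketch of a canonical event envelope and an idempotent subscriber. All names are illustrative assumptions; in a real environment the once-and-only-once guarantee would come from the messaging controller and a durable store, not an in-memory set.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical canonical envelope: every publisher and subscriber agrees on
# this shape, so fields can be added later without breaking existing consumers.
def make_event(event_type: str, payload: dict) -> dict:
    return {
        "eventId": str(uuid.uuid4()),   # unique id enables once-only handling
        "eventType": event_type,        # e.g. "customer.assessment.required"
        "occurredAt": datetime.now(timezone.utc).isoformat(),
        "payload": payload,             # business data in the agreed canonical form
    }

# An idempotent subscriber: even if the transport delivers a message more than
# once, remembering processed ids yields effectively once-and-only-once behavior.
processed_ids = set()

def handle_event(raw: str) -> None:
    event = json.loads(raw)
    if event["eventId"] in processed_ids:
        return  # duplicate delivery; already handled
    processed_ids.add(event["eventId"])
    # ... execute this solution's self-contained step in the value chain ...
    print(f"Handled {event['eventType']} ({event['eventId']})")
```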

Once we have these elements in place, it becomes relatively simple to add a new system of record and enlist that solution to an existing or, where needed, revised business event. Each solution can be tested independently. The benefit comes when we add information to the canonical message structures: given that it is a non-breaking change, existing applications do not need to be retested; only the new application and its subscribers do. Returning to our temporary scenario, when the tactical response is no longer needed we simply decouple the subscriber and stop the solution, with no impact on the underlying business process and value chain.
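
To illustrate why such a change is non-breaking, consider a sketch in which the canonical payload gains a hypothetical riskScore field. Consumers that read only the fields they already know are unaffected; only the new subscriber, the one component that needs testing, touches the new field.

```python
# Existing subscriber: reads only the fields it has always known. An added
# field such as "riskScore" (hypothetical) is simply ignored, so the message
# can be enriched without forcing a retest of this consumer.
def existing_subscriber(event: dict) -> None:
    customer_id = event["payload"]["customerId"]  # long-established field
    print(f"Fulfilling existing step for customer {customer_id}")

# New subscriber: the only component that reads the newly added field,
# and therefore the only one that needs to be tested.
def new_subscriber(event: dict) -> None:
    risk = event["payload"].get("riskScore")      # tolerant read of the new field
    if risk is not None:
        print(f"Applying new risk handling, score={risk}")
```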

All this decoupling and federation does come with a downside. Given that the systems and business processes are all acting independently, it becomes difficult to determine when the value chain has completed, or where exactly in its business execution context a transaction may find itself.

This is where workflow platforms typically excel. Their raison d'être, if you will, is to manage business flow execution across solutions in a very serial and methodical fashion. Typically (as is the case with NINTEX K2) they visualize the executing process steps very clearly and couple this with rich reporting. By making the workflow platform a subscriber to specific business events, we are able to enlist it into the value chain and provide an observer and value stream execution capability. Once the workflow engine has determined that a value chain has achieved a state of completion for a given transaction across the organizational technology landscape (internal and external), it can enact higher-order behaviors back towards customers, suppliers or knowledge workers. Answering questions like "where in the process (value stream/user journey) is this customer transaction?" thus becomes the domain of the workflow engine. Additionally, workflow platforms in their own right can be subscribers to events and doorways to systems of record, internal rule execution and the handoff of tasks and activities to robotic workers.
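
A hedged sketch of this observer role follows: the workflow engine subscribes to the same business events and records each transaction's progress, enacting higher-order behavior once the full set of steps has been seen. The event type names and the completion rule are assumptions for illustration; a platform like NINTEX K2 would model this as a visual workflow rather than hand-written code.

```python
# Assumed set of event types that together signify a completed value chain.
REQUIRED_STEPS = {"order.captured", "credit.approved", "goods.dispatched"}

# Per-transaction record of the steps observed so far.
progress = {}

def on_business_event(event: dict) -> None:
    txn = event["payload"]["transactionId"]
    steps = progress.setdefault(txn, set())
    steps.add(event["eventType"])
    if REQUIRED_STEPS <= steps:  # all required steps have been observed
        enact_completion(txn)

def enact_completion(txn: str) -> None:
    # Higher-order behavior: notify the customer, trigger the next value
    # stream, release knowledge-worker tasks, and so on.
    print(f"Transaction {txn}: value chain complete")
```

This also answers the "where is my transaction?" question: the progress record is exactly the insight the workflow platform surfaces through its visualizations and reporting.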

The scenario covered in the preceding text was used to formulate a solution that allowed, in this specific case, a government organization to raise a biohazard-level security event (amongst other event types) to which the K2 platform was subscribed. The K2 solution responded to the event by creating a non-linear, case-managed workflow. Based on the type of event, the internal steps required by the case were injected as workflows into the case. Appropriate tasks were also created and assigned to humans and robots across the organization in response. The case worker could inject additional activities into the case. As the case activities completed, we were able to see the case progression and manage it to completion against SLA requirements. The elegance of this solution is that as new event types are defined, the underlying tech stack can independently expand to support them without disrupting the existing capabilities.
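
The case-injection pattern could be sketched roughly as follows. This is an illustration of the idea only, not the K2 API; the event types and activity names are invented.

```python
# Illustrative mapping from event type to the workflow fragments injected
# into a newly created case (names invented, not K2 artifacts).
CASE_TEMPLATES = {
    "biohazard.level2": ["isolate_site", "notify_health_dept", "schedule_cleanup"],
    "biohazard.level3": ["isolate_site", "evacuate_site", "notify_health_dept"],
}

def open_case(event: dict) -> dict:
    # The subscribed platform reacts to the event by opening a case whose
    # initial activities depend on the event type. Unknown types start empty.
    return {
        "caseId": event["eventId"],
        "activities": list(CASE_TEMPLATES.get(event["eventType"], [])),
        "status": "open",
    }

def inject_activity(case: dict, activity: str) -> None:
    # Case workers can add ad-hoc activities at runtime without redeploying.
    case["activities"].append(activity)
```

Defining a new event type then amounts to adding a template entry, leaving existing cases and subscribers untouched.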
