Part 1: Closing the Gap between AI Models and Business Impact

Shifting from model-centric to process-centric.


When effectively applied, Artificial Intelligence (AI) models can significantly enhance decision-making in today’s data-driven business world. Despite the benefits, adoption has been difficult, particularly in sectors with complex processes where AI errors carry a high cost, such as banking, insurance, and quality control.

During adoption, it is common to see AI and tech teams pressure business teams into abruptly changing their processes to fit the implementation. This approach usually results in system abandonment and endless cycles of backtesting and revisions without real benefits for the organization.

A very effective way to boost AI adoption is to separate data, models, and business logic using a decision engine. This approach lets companies integrate AI into their processes without disrupting established workflows, while defining an automation and data-driven strategy for each decision.

Decoupling Data and AI Models from Business Logic

Why does the model-centric mindset complicate adoption?

AI models are sophisticated tools designed to solve specific questions that can create considerable value for organizations. However, adopting them is complex, particularly within existing business processes where expectations about what AI can do are misaligned. The following aspects make model-centric approaches challenging to adopt in practice (check this post about 3 must-know AI inference strategies).

  • AI models are not crystal balls. They solve specific problems by exploiting patterns in their training data. Even the most advanced models today, like LLMs, answer a particular question (next-word prediction given the previous context and a vast language dataset). Given this narrow nature, every model needs guardrails, simplifications, predefined rules, and tech integrations to be successfully adopted (see the guardrail sketch after this list).

  • Modifying complex AI models: Changing an AI model is more challenging than adjusting the business logic, partly because models solve specific rather than general questions. Modifying an existing model to add extra variables or answer an alternative question can take weeks or months and easily double the cost, while adjusting the integration and business rules is a matter of minutes.

  • AI models learn from data. A common challenge for AI adoption is that stakeholders use models to validate their existing predefined rules and biases, which frequently causes powerful AI models to go to waste. The best way to mitigate this is with clear metrics and backtesting; if certain rules or data are truly compulsory, include them in the business logic layer instead.
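
As a rough illustration of the points above, the sketch below wraps a hypothetical fraud-scoring model with a compulsory compliance rule and a manual-review guardrail. The function name, thresholds, and rule are assumptions made up for this example, not a prescribed design.

```python
# Minimal sketch: wrapping a model score with guardrails and predefined rules.
# score_transaction and every threshold below are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Decision:
    outcome: str  # "approve", "reject", or "manual_review"
    reason: str


def score_transaction(features: dict) -> float:
    """Placeholder for the real model call; returns a fraud probability."""
    return 0.42


def decide(features: dict) -> Decision:
    # Compulsory rule lives in the business logic and overrides the model entirely.
    if features.get("country") == "sanctioned_country":
        return Decision("reject", "compliance rule, model not consulted")

    score = score_transaction(features)

    # Guardrail: route low-confidence scores to a human instead of acting on them.
    if 0.4 <= score <= 0.6:
        return Decision("manual_review", f"ambiguous score {score:.2f}")
    return Decision("reject" if score > 0.6 else "approve", f"score {score:.2f}")


print(decide({"country": "nl", "amount": 1200}))
```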


Organize data consumption and avoid a complete data mess.

When starting workflow automation, it is tempting to create data modules consumed directly by various systems; it feels fast at the beginning. However, this leads to a complex mess of APIs, databases, and external services and soon restricts how far your systems can scale. Data should be a central part of your process automation strategy and handled responsibly.


  • Internal Data: To keep your systems and databases decoupled, minimize the number of access points to your data. Ideally, aim for a single access point and organize all your systems behind it. An API Gateway or a DAO (Data Access Object) design pattern is an excellent way to manage this complexity (see the sketch after this list).

  • External Services: Avoid integrating external services directly into the business logic or implementing them per business user or system. A great way to organize external sources is an integration framework that makes it easy to add a new source with a predictable way of exposing the data. It is outside the scope of this article, but design patterns like Adapters, Facades, and Strategies can save your day! Once the data is consistently in your system, consume it through the single access point.

  • AI Outputs as Data Entities: Treat AI outputs as data entities and delegate all the training and inference complexity to the MLOps framework or strategy of your choice. This approach significantly reduces the complexity of integrating models into the business logic and makes communication with stakeholders easier.
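
As a rough sketch of the three points above: a hypothetical credit-bureau adapter normalizes an external source, a single data access object exposes internal and external data to downstream workflows, and a stored model prediction is read like any other data entity. All names, fields, and values are illustrative assumptions, not a specific product API.

```python
# Sketch of a single data access point; sources, fields, and values are placeholders.
from typing import Dict, Protocol


class ExternalSource(Protocol):
    def fetch(self, customer_id: str) -> dict: ...


class BureauAdapter:
    """Adapter: hides the external credit-bureau format behind a common interface."""

    def fetch(self, customer_id: str) -> dict:
        raw = {"scr": 712}                   # placeholder for the real HTTP call
        return {"credit_score": raw["scr"]}  # normalized field name


class DataAccess:
    """Single access point consumed by workflows; systems never query sources directly."""

    def __init__(self, sources: Dict[str, ExternalSource]):
        self.sources = sources

    def customer_profile(self, customer_id: str) -> dict:
        profile = {"customer_id": customer_id}
        for source in self.sources.values():
            profile.update(source.fetch(customer_id))
        # AI outputs are read like any other data entity; inference ran elsewhere (MLOps).
        profile["churn_score"] = self._read_model_output("churn_model", customer_id)
        return profile

    def _read_model_output(self, model_name: str, customer_id: str) -> float:
        return 0.18  # placeholder lookup of a stored prediction


dao = DataAccess({"bureau": BureauAdapter()})
print(dao.customer_profile("c-001"))
```

Keeping the adapter behind the access point means a bureau swap or a new model output changes one class, not every workflow that consumes the data.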


Benefits of decoupling the business logic

Once the responsibilities of data and models are clearly defined, the business logic layer manages all the details of combining them to reach a final decision for a specific problem (e.g., loan approval, credit line definition, next best offer). Introducing this layer simplifies the governance and maintenance of processes for business units and accelerates iteration. Well-designed workflows act as living policy, or executable documentation, enabling data-based decision-making.


  • Full Control over Business Rules: By incorporating business logic into a workflow manager, decision-makers keep complete control over their rules. They can adjust thresholds or add and remove variables, with the changes immediately reflected in the workflow outputs. This removes the need for extensive engineering support and allows changes, such as tightening a fraud detection rule, to land promptly (a minimal sketch follows this list).

  • Simplifying AI Adoption: AI adoption becomes as easy as integrating model outputs into workflows and setting thresholds. Teams can introduce AI gradually, limiting its decision-making to specific users or applying more elaborate deployment strategies. This approach reduces friction between AI teams and business units, ensuring smooth adoption once the model’s performance meets expectations.

  • Data-Informed Process Adjustments: A robust process automation engine should provide detailed insights into the decision-making process, including steps used, output distributions, discriminative steps, and case manager resolution times. This data-centric approach empowers business units to make informed decisions, reducing reliance on intuition and biases.
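
The sketch below illustrates what such a business logic layer can look like: thresholds live in plain configuration, a rollout share gates how often the model decides, and every decision records the step that produced it, which also feeds the insights described in the last bullet. The rule names, thresholds, and the 10% rollout share are assumptions for this example.

```python
# Illustrative sketch of a decoupled business logic layer. Thresholds sit in plain
# configuration so a business user can change them without touching the model.
import random

POLICY = {
    "min_credit_score": 650,
    "max_debt_to_income": 0.45,
    "ai_approval_threshold": 0.80,
    "ai_rollout_share": 0.10,  # start by letting the model decide for 10% of cases
}


def loan_decision(profile: dict, policy: dict = POLICY) -> dict:
    # Hard business rules come first and stay easy to audit.
    if profile["credit_score"] < policy["min_credit_score"]:
        return {"decision": "reject", "step": "credit_score_rule"}
    if profile["debt_to_income"] > policy["max_debt_to_income"]:
        return {"decision": "reject", "step": "dti_rule"}

    # Gradual AI adoption: only a configurable share of traffic uses the model output.
    if random.random() < policy["ai_rollout_share"]:
        if profile["model_score"] >= policy["ai_approval_threshold"]:
            return {"decision": "approve", "step": "ai_model"}
        return {"decision": "manual_review", "step": "ai_model"}
    return {"decision": "manual_review", "step": "default_route"}


print(loan_decision({"credit_score": 700, "debt_to_income": 0.30, "model_score": 0.90}))
```

Logging the `step` field for every case is what later lets the business unit see which rules discriminate, where the model helps, and where manual review piles up.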


Conclusion

By shifting from an AI-model-centric to a workflow-centric approach, companies can simplify the challenges behind AI adoption and open the door to the benefits of automated workflows instead of document-based policies. The following high-level steps can help move an organization through this transition.

Kickstart Business Logic:

  • Identify the first business process to transform. Consider the current steps and the benefits of automating the process, ideally turning it into system features. Keep the scope clear and well defined.
  • Explicitly declare the process’s inputs and outputs without ambiguity. At this point, it is essential to push for standard definitions; otherwise, automation becomes a non-scalable tree of personalizations (see the schema sketch after this list).
  • Map the technical requirements of the decision engine. Based on these requirements, evaluate tools in the market. Make an informed decision and stick to one tool. Use the defined process as a proof of concept with the tool provider and pivot from there.
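
One minimal way to make that contract explicit is to declare the inputs and outputs as typed schemas, as in the sketch below. The field names and types are illustrative assumptions, not a recommendation for a specific loan product.

```python
# Sketch of an explicit input/output contract for a loan approval process.
# Field names and types are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum


class LoanDecision(str, Enum):
    APPROVE = "approve"
    REJECT = "reject"
    MANUAL_REVIEW = "manual_review"


@dataclass(frozen=True)
class LoanApplicationInput:
    customer_id: str
    requested_amount: float  # in the ledger currency
    monthly_income: float
    credit_score: int


@dataclass(frozen=True)
class LoanApplicationOutput:
    decision: LoanDecision
    reason_code: str  # machine-readable reason, reused across channels and reports
```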


Data Mapping and AI Model Integration:

  • While defining the process, explicitly map the data points, including tentative thresholds and their relevance. Keep the mapping simple and record the rationale behind every variable to avoid creating a wish list. Focus on the minimum data points needed for the process as it is today (a mapping sketch follows this list).
  • Provide the data mapping to the engineering team to build the single access point. Reuse existing external sources or start integrating new ones behind it.
  • Discuss the process with the AI team. Identify existing models that can enrich the process and review any weak points detected while conceptualizing the AI applications. More information about moving from ideation to production is available in this post.
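
A data mapping can stay as simple as a small, reviewable structure like the sketch below, where every variable names its source, a tentative threshold, and its rationale. The entries are placeholder assumptions.

```python
# Minimal data-mapping sketch; sources, thresholds, and rationales are placeholders.
DATA_MAPPING = [
    {"field": "credit_score", "source": "bureau_adapter", "threshold": ">= 650",
     "rationale": "current policy minimum for unsecured loans"},
    {"field": "debt_to_income", "source": "core_banking", "threshold": "<= 0.45",
     "rationale": "affordability check required by the risk policy"},
    {"field": "churn_score", "source": "ai_churn_model", "threshold": "informative only",
     "rationale": "candidate model output, not yet used for decisions"},
]
```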


Deploy, Iterate, and Maintain:

  • With the first version of the process ready, train users to consume it and start integrating related systems.
  • Measure as much as possible from the decision engine and use that data to decide on improvements and data changes. Then start running controlled experiments to fine-tune the process; finding the optimal setup can turn it into a competitive advantage.

Stay tuned for Part 2 in a couple of weeks, where we’ll explore Risk Policy Automation in more depth. Share your thoughts and comments below to engage with this topic!

