How API-led Connectivity serves Business Intelligence and Analytics
Data consumers and explorers using an API-led approach to Business Intelligence


BI tools play a major role in helping data professionals understand the huge quantities of data that enterprises accumulate over time. With BI tools, raw data can be turned into actionable information that guides decision making. Almost anyone can try them: many vendors provide time-limited free trials or even so-called "forever free" versions. I have downloaded many BI tools in the past to try them out and explore the data exposed by my organization.

Is your organization's data discoverable?

One major problem I faced in the past when trying to explore the capabilities of BI tools was the discoverability of our data. 80% of my time evaluating BI tools was spent on finding the different sources where data lives and getting access to it. And even when I had access to the data, I wasn't sure whether the data I was accessing was complete. For example, customer data can live in several systems such as Salesforce, SAP, or Microsoft, so when I received access to customer data, it was often access to just one system, such as Salesforce, and therefore not complete.


Even when I managed to get customer data from all sources, things got more complex as soon as I started to look for related data such as recent orders, contracts, or current opportunities on those accounts. While I appreciate that many BI tools provide out-of-the-box connectors for many systems, in an agile environment, where multiple teams have the freedom to use tools of their choice, it becomes difficult to realize the advantage of those out-of-the-box connectors.


In such an interconnected environment, broken connections (due to maintenance or upgrade work) are just a matter of time. Take the earlier example of customer data: if the customer information is restructured or changed in one system, every connected BI tool is impacted directly. Additionally, the connectors introduce the complexity of point-to-point integration, which is very hard to manage.

Making data more discoverable

MuleSoft's API-led connectivity approach suggests organizing APIs into three layers (system, process, and experience APIs) as part of a so-called application network. All APIs are treated as products and can be discovered by internal and external communities.


To serve the need for discoverable data, independent of the system the data lives in, you could utilize an embedded layer within the process layer and compose APIs dedicated to data as a product, as sketched below.
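
As a rough, vendor-neutral illustration, here is a minimal Python sketch of a process-layer customer data product that composes two system APIs; all endpoints and field names are hypothetical and only serve to show the composition idea:

```python
# Minimal sketch of a process-layer "customer" data product.
# All endpoints and field names below are hypothetical; they only
# illustrate composing several system APIs behind one API.
import requests

SALESFORCE_API = "https://api.example.com/system/salesforce"  # hypothetical
SAP_API = "https://api.example.com/system/sap"                # hypothetical

def get_customer(customer_id: str) -> dict:
    """Compose one customer record from several system APIs."""
    crm = requests.get(f"{SALESFORCE_API}/accounts/{customer_id}").json()
    erp = requests.get(f"{SAP_API}/customers/{customer_id}").json()
    # Present one canonical view of the customer, hiding the source systems.
    return {
        "id": customer_id,
        "name": crm.get("name"),
        "opportunities": crm.get("opportunities", []),
        "orders": erp.get("orders", []),
        "contracts": erp.get("contracts", []),
    }
```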


These data products could be exposed, for instance, through OData or GraphQL, which makes them much easier to consume on the BI tool side; users no longer have to worry about accessing the underlying systems directly.
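
From the consumer side, querying such a GraphQL data product is a plain HTTP POST. A minimal sketch, assuming a hypothetical endpoint and schema:

```python
# Querying a hypothetical GraphQL data-product API from the BI side.
# A GraphQL request is a plain HTTP POST with a JSON body.
import requests

query = """
{
  customer(id: "42") {
    name
    orders { id total }
    opportunities { id stage }
  }
}
"""

response = requests.post(
    "https://api.example.com/graph/customers",  # hypothetical endpoint
    json={"query": query},
)
print(response.json())
```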


As part of the self-service discovery portal, users who require access to data can request access to the data product APIs and concentrate on business intelligence rather than on connectivity to multiple systems.


Once the request is approved, users can access the data via dedicated APIs and build the business intelligence and analytics required to derive actionable guidance for decision making.


Using standard connectors for OData, GraphQL, and REST, BI teams can achieve their goals by exploring and consuming the data in a discoverable, understandable, and secure manner.
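
As a comparison to the GraphQL example above, here is a minimal sketch of consuming a hypothetical OData data product. $select, $filter, and $top are standard OData query options, while the service root is made up for illustration:

```python
# Fetching data from a hypothetical OData data-product service.
# $select, $filter, and $top are standard OData query options.
import requests

response = requests.get(
    "https://api.example.com/odata/Customers",  # hypothetical service root
    params={
        "$select": "Id,Name,Country",
        "$filter": "Country eq 'CH'",
        "$top": "10",
    },
)
print(response.json())
```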

Try it out yourself

Getting started with DataGraph


Christoffer Hansson

IT generalist | M.Sc. Business Administration | Integration & Data Specialist

1y

Hello Amir, thank you for presenting how ALC supports the BI function. This is not something I've found anywhere else. However, it's not as easy as this architectural model makes it look: some data has high quality/integrity requirements, while for other data the speed of loading it into your data lake is of greater importance. The following HBR article sets a framework for this by introducing an offensive vs. defensive strategy: https://hbr.org/2017/05/whats-your-data-strategy

Building every dataflow from source to Process API adds cost and maintenance that is sometimes not necessary. Thus, I'd advocate for a framework similar to the offensive vs. defensive matrix, but with more context and detail. As an example, IoT sensors send data with high frequency, but the data model is easy to interpret and there's no need to validate for data quality issues. Any case in between IoT data and important master data will require a decision, and reference data, transaction data, etc. can have different needs.

To your knowledge, is there a common framework, or have you seen such a decision matrix, for deciding what governance you need to add in your data pipelines?

Arpit Das

Helping customers maximize their business outcomes with Digital Transformation

3y

Nice one

Florina Moldovanu

Managing Director @Joline & Flo | EMBA @Swiss School of Business and Management | Mental Health, Work-Harmony Balance, Lifelong Learning Advocate

3y

Very insightful!

Daniel Portmann

Enterprise Account Executive at Databricks

3y

Very nice one, thank you very much for sharing. There is tremendous potential for solutions like this.

Marcel Luginbuehl

Senior Enterprise Account Executive, Insurances, Switzerland

3y


