BFFs could be your new Best Friends Forever!

Confused? Well, I was too when I first heard my manager say, "let's work on BFF this quarter." He kept mentioning the term that day, and being a novice, all I could think of was Best Friends Forever - but how do you implement that in a tech company?! :D Finally I gathered the courage to ask about it in a meeting, and thank god I did, as many of us only had a vague idea of what it really is.

Simpler times when 1 frontend was served by 1 backend

Well, in simpler times there used to be just one frontend served by one backend - nice and easy. But as technology grew, a lot of frontend devices and channels came into the picture, all still being served by the same single backend - making it a monolith!

More frontend channels still served by same backend making it monolith

Over the past few years, many companies have been trying to get rid of this monolithic structure by breaking it down into multiple microservices, but the problem is not easy to solve. These frontend devices or channels have specialised needs, and a lot of data processing and manipulation has to happen at the frontend level to make use of the data that comes back from the backend services. The solution?? BFFs!!

What is a BFF?

Backends for Frontends are a thin layer between your backend and your frontend that acts as a custom backend per user experience. They exist so that all the processing and manipulation that previously happened at the frontend level can be separated out into this thin layer, making your frontend lighter. There is no business logic in this layer.

So where the frontend earlier called the backend directly, it now calls the BFF; the BFF talks to the backend and then returns the response to the frontend.
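To make this concrete, here is a minimal sketch of what such a layer could look like, using Express and TypeScript. The backend URL, endpoint paths and field names are all made up for illustration - they are not from this article's example:

```typescript
// Minimal BFF sketch: the frontend calls the BFF, the BFF calls the backend
// and reshapes the generic response into exactly what this client needs.
// BACKEND_URL, /profile, /users/me and the field names are illustrative assumptions.
import express from "express";

const app = express();
const BACKEND_URL = process.env.BACKEND_URL ?? "http://localhost:8080";

app.get("/profile", async (_req, res) => {
  try {
    const backendRes = await fetch(`${BACKEND_URL}/users/me`);
    const user = await backendRes.json();

    // Reshape the generic backend payload for this specific frontend.
    res.json({
      displayName: `${user.firstName} ${user.lastName}`,
      avatarUrl: user.avatar?.largeUrl ?? null,
    });
  } catch {
    // The BFF can also translate backend failures into friendlier errors.
    res.status(502).json({ message: "Profile is temporarily unavailable" });
  }
});

app.listen(3000, () => console.log("BFF listening on :3000"));
```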

Let's discuss some of the benefits of having a BFF:

  1. Easy to maintain: Given it is a thin layer and independently managed, it is quite easy to maintain.
  2. Frontend team autonomy: Given it is a custom backend for a frontend, it makes the most sense for the frontend team to be responsible for managing their own BFF, as that gives them more control over what goes in.
  3. Resilient to API changes: Imagine the backend team decides to change the way the API returns data, but your frontend is not ready to consume the new shape directly. Having a layer in between helps with graceful degradation while you switch over to the new structure (see the sketch after this list).
  4. Better error handling: Now that you have a custom backend layer in between, you can handle errors better by providing more meaningful context instead of generic errors - improving the overall user experience.
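As an illustration of point 3, here is a rough sketch of how a BFF could absorb a backend contract change. Both payload shapes and all field names are assumptions for illustration, not actual Contoso APIs:

```typescript
// Sketch of benefit 3: the backend switched from a flat `price` number to a
// structured `pricing` object, but the frontend still expects the old shape.
// Both payload shapes and the field names are assumptions.
type NewBackendProduct = {
  id: string;
  name: string;
  pricing?: { amount: number; currency: string };
  price?: number; // still present on older backend instances
};

type FrontendProduct = { id: string; name: string; price: number };

function toFrontendProduct(p: NewBackendProduct): FrontendProduct {
  // Prefer the new structure, but degrade gracefully to the legacy field
  // so the frontend keeps working while it migrates.
  const price = p.pricing?.amount ?? p.price ?? 0;
  return { id: p.id, name: p.name, price };
}
```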

Please Note: If you have a good microservice architecture, you might not even need a BFF. BFFs are helpful in buying you more time on your journey of transitioning from a monolith to a microservice architecture. And hence, it really depends on where and how the BFF is being implemented - it is not always an ideal solution. Consider the following question that often gets asked about BFFs:

What about the latency and code duplication when using BFFs?

BFFs become a boon when we manage to balance data retrieval and processing well. Consider the following example:

I have a frontend app for buying products, so when placing an order there is a lot of to-and-fro between the FE and BE: one call to create a draft order, then to create a payment intent, then to accept the payment, apply the payment, fulfil the product and finally complete the order and return the order-purchase details. With a BFF in between, the FE calls the BFF once, the rest of the calls are made from the BFF to the BE, and finally the order-purchase details flow back from the BE to the BFF to the FE - much simpler and cleaner.
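A rough sketch of what that single BFF endpoint could look like follows. Every backend client function here (createDraftOrder, createPaymentIntent, and so on) is a hypothetical stand-in for a real service call:

```typescript
// Sketch of the checkout flow behind a single BFF endpoint: the frontend
// makes one call, and the BFF handles the to-and-fro with the backend.
// The ./backendClients module and all functions imported from it are
// hypothetical stand-ins for the real backend APIs.
import express from "express";
import {
  createDraftOrder,
  createPaymentIntent,
  acceptPayment,
  applyPayment,
  fulfilOrder,
  completeOrder,
} from "./backendClients";

const app = express();
app.use(express.json());

app.post("/checkout", async (req, res) => {
  try {
    const draft = await createDraftOrder(req.body.items);
    const intent = await createPaymentIntent(draft.id, req.body.paymentMethod);
    await acceptPayment(intent.id);
    await applyPayment(draft.id, intent.id);
    await fulfilOrder(draft.id);
    const purchase = await completeOrder(draft.id);

    // One round trip from the frontend's point of view.
    res.status(201).json(purchase);
  } catch {
    res.status(502).json({ message: "Checkout failed, please try again" });
  }
});
```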

Furthermore, to reduce latency, you can deploy this BFF closer to your BE - maybe in the same availability zone or even on the same machine!

Another way to reduce latency is to cache some data. If the product list is something that doesn't update very frequently, we can cache it in the BFF and serve it from there - saving network calls and making the FE faster and more efficient.
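For instance, a simple in-process cache inside the BFF might look like the sketch below. The Product shape, the getProductsFromBackend function and the 60-second TTL are assumptions (the GCP example later in this article uses Redis instead):

```typescript
// Sketch of caching a slow-changing product list inside the BFF.
// The Product shape, getProductsFromBackend and the 60-second TTL are assumptions.
type Product = { skuId: string; name: string };

let cachedProducts: Product[] | null = null;
let cachedAt = 0;
const TTL_MS = 60_000; // refresh at most once a minute

export async function getProducts(
  getProductsFromBackend: () => Promise<Product[]>
): Promise<Product[]> {
  if (cachedProducts !== null && Date.now() - cachedAt < TTL_MS) {
    return cachedProducts; // cache hit: no network call to the backend
  }
  cachedProducts = await getProductsFromBackend(); // cache miss: refresh
  cachedAt = Date.now();
  return cachedProducts;
}
```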

Given these BFFs are for specific frontends, there isn't as much code duplication as one might think. And even if there is a bit, we still have achieved decoupling of our code, giving us more flexibility and the ability to scale FE and BE independently.

Example Problem

Let's dive into an example problem:

Contoso (a fictional organisation) is a leading global e-commerce platform. In simpler times, it had a simple website served by a single BE.

Example problem

But then it grew! Along with the classic website, mobile apps were launched, the company expanded into B2B sales, and it also introduced another vertical that let other businesses sell Contoso's products on their own websites to their customers while using Contoso's APIs as the backend (basically B2B2C). All these frontends and channels were still being served by the monolith. Over the past few years, Contoso has been trying to move to a microservice-oriented architecture, but the problem remains the same: all these FE channels have varied needs, such as:

  • Classic Website: A simple web interface to buy products online.
  • Mobile Apps: Both Android and iOS apps. They may need geolocation data, different ways of sending notifications (pop-ups/badges), different ways of uploading videos, etc.
  • B2B Portal: Portal for businesses who'd like to buy the products in bulk and then sell them separately or give them to their employees. It allows bulk orders, checking invoices, topping up balances, etc.
  • B2B2C Portal: These are just APIs and are there for business clients who'd like to use them in their applications directly to sell products to their customers. They don't really have a dedicated front end.

Due to these specialised frontend needs, a lot of data manipulation has to happen before whatever comes back from a general-purpose backend can actually be consumed.

And hence, we try to solve this problem using BFFs! We propose the following solution: the classic website is served by a Web BFF, the Android app by an Android BFF, the iOS app by an iOS BFF, and the B2B and B2B2C portals by a Business BFF. And slowly we get rid of the monolith.

Proposed solution with BFFs

But, how many BFFs is the right number of BFFs?

Let's have a look at Android and iOS BFFs.

We realised that there is feature parity between the two apps, the release cycles are similar, versioning is handled consistently, and the teams are aligned and so is the product roadmap - hence it makes sense to have a common Mobile BFF for the two.

Having a common mobile BFF for both Android and iOS apps

On the other hand, let's look at B2B Portal and B2B2C Portal being served from the same Business BFF.

We realised that the B2B Portal has a UI while the B2B2C Portal doesn't, the payment methods are different, two different teams work on them, and the product roadmaps are very different - hence it makes more sense to have a separate BFF for each of these channels. And this gives us Version 2.0 of the proposed solution!

Having separate BFFs for B2B and B2B2C - thus coming to solution version 2.0

BFFs with Micro Frontend Architecture!

Micro Frontends, also called MFEs, have become very popular these days as they allow you to decompose your FE into small MFE apps that work loosely together and can be semi-independently managed, making it easier for developers to build and release. Let's understand how we can embrace BFFs with MFEs. Consider the following example:

At Contoso, B2B Portal’s Frontend is basically made up of 2 MFEs:

  • Accounts Portal: A portal for businesses to sign up, log in, check invoices, place orders, etc.
  • Campaigns Portal: Allows businesses to run promotions, discounts, etc.

These two MFEs are managed by two separate teams and have varied needs. Thus, it makes sense to have two separate BFFs for them.

And what we have now is a slice of the app managed by one single team - we have gone vertical!

Go Vertical with BFF in a MicroFrontend Architecture

How do I become friends with a BFF?

Let's try to create an example BFF using Google Cloud Platform! Here's the GitHub repo with all the code, along with instructions in the README.

We have the following services:

  1. Product Service: A list of SKUs, each item having a SKU id, name, description, theme and image_url. This service is a serverless, HTTP-triggered Google Cloud Function on the Node.js runtime, i.e. you only need to worry about your functional logic and not about servers or infrastructure, and it can be invoked via standard HTTP requests.
  2. Inventory Service: Inventory status for each denomination available per sku_id. This service is also a serverless, HTTP-triggered Google Cloud Function, but on the Python runtime.

For storing the data, you have plenty of options to choose from, such as Google Cloud SQL, Firestore, Bigtable, etc., but for the sake of simplicity we use a JSON file in the example.
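For orientation, the data could look roughly like the sketch below; the exact field names and values in the repo's JSON files may differ:

```typescript
// Rough shape of the example data; check the repo's JSON files for the real thing.
type Product = {
  sku_id: string;
  name: string;
  description: string;
  theme: string;
  image_url: string;
};

type Inventory = {
  sku_id: string;
  denominations: { value: number; stock: number }[];
};

// A sample product entry (values are made up).
const sampleProduct: Product = {
  sku_id: "102",
  name: "Gift Card",
  description: "A sample gift card product",
  theme: "birthday",
  image_url: "https://example.com/images/102.png",
};
```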

Next, we have the following frontend requirements:

  1. Mobile App: The requirement for our mobile frontend app is that when we first open the app, we should see the list of all SKUs. And as we click on any SKU, we should then see the denominations and stock for it.
  2. API: The requirement for our API is that when a client calls it, it returns all the SKUs along with the denominations and the stock in a single call. The data stitched here is different from the mobile case.

Now that we have different requirements defined for each frontend, let's implement the BFFs.

  • Mobile BFF: We implement this as a Node.js, HTTP-triggered Google Cloud Function. When you call the /products endpoint, you get the list of all products. When you call a particular SKU, e.g. /product/102, you get all the details along with the denominations and stock for that particular SKU.

Mobile BFF implementation in Google Cloud Function
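A minimal sketch of how such a function could be structured with the Functions Framework; the environment variable names, upstream paths and response fields are assumptions, so treat the repo as the source of truth:

```typescript
// Sketch of the Mobile BFF as an HTTP-triggered Cloud Function (Node.js runtime).
// PRODUCT_SERVICE_URL / INVENTORY_SERVICE_URL and the upstream paths are assumptions.
import * as functions from "@google-cloud/functions-framework";

const PRODUCT_SERVICE_URL = process.env.PRODUCT_SERVICE_URL ?? "";
const INVENTORY_SERVICE_URL = process.env.INVENTORY_SERVICE_URL ?? "";

functions.http("mobileBff", async (req, res) => {
  // GET /products -> just the list of SKUs (enough for the first screen)
  if (req.path === "/products") {
    const productsRes = await fetch(`${PRODUCT_SERVICE_URL}/products`);
    res.json(await productsRes.json());
    return;
  }

  // GET /product/:skuId -> product details stitched with denominations and stock
  const match = req.path.match(/^\/product\/(\w+)$/);
  if (match) {
    const skuId = match[1];
    const [productRes, inventoryRes] = await Promise.all([
      fetch(`${PRODUCT_SERVICE_URL}/products/${skuId}`),
      fetch(`${INVENTORY_SERVICE_URL}/inventory/${skuId}`),
    ]);
    const product = await productRes.json();
    const inventory = await inventoryRes.json();
    res.json({ ...product, denominations: inventory.denominations });
    return;
  }

  res.status(404).json({ message: "Not found" });
});
```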

  • API BFF: We run this in a serverless container service called Cloud Run; our app is in TypeScript, and we also attach a Redis cache to our Cloud Run service via Cloud Memorystore.

API BFF implementation in Google Cloud Run with cache via Memorystore

Now whenever we call the /products endpoint, the first call usually takes a bit of time and the result is then cached for about 10 seconds, so all the following calls become really quick as they are served from the cache. In the following video you can see the timings of the calls, along with logs from the attached logger showing when the cache was hit and when it was missed.

Effects of cache hit and miss when calling API BFF
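For reference, here is a condensed sketch of how such a Cloud Run service with a short-lived Redis cache might be put together, using the node-redis client. The env var names, upstream URLs and stitching logic are assumptions; the repo has the actual implementation:

```typescript
// Sketch of the API BFF on Cloud Run: stitch products with inventory and
// cache the combined result in Redis (Memorystore) for ~10 seconds.
// REDIS_HOST/REDIS_PORT, the upstream URLs and the inventory shape are assumptions.
import express from "express";
import { createClient } from "redis";

const app = express();
const redis = createClient({
  url: `redis://${process.env.REDIS_HOST ?? "localhost"}:${process.env.REDIS_PORT ?? "6379"}`,
});

const PRODUCT_SERVICE_URL = process.env.PRODUCT_SERVICE_URL ?? "";
const INVENTORY_SERVICE_URL = process.env.INVENTORY_SERVICE_URL ?? "";

app.get("/products", async (_req, res) => {
  const cached = await redis.get("products");
  if (cached) {
    console.log("cache hit"); // visible in the Cloud Run logs
    res.json(JSON.parse(cached));
    return;
  }

  console.log("cache miss");
  const [productsRes, inventoryRes] = await Promise.all([
    fetch(`${PRODUCT_SERVICE_URL}/products`),
    fetch(`${INVENTORY_SERVICE_URL}/inventory`),
  ]);
  const products: any[] = await productsRes.json();
  const inventoryBySku: Record<string, unknown> = await inventoryRes.json();

  // Stitch every SKU with its denominations and stock in a single response.
  const stitched = products.map((p) => ({
    ...p,
    denominations: inventoryBySku[p.sku_id] ?? [],
  }));

  await redis.set("products", JSON.stringify(stitched), { EX: 10 }); // ~10 second TTL
  res.json(stitched);
});

const port = Number(process.env.PORT ?? 8080);
redis.connect().then(() => app.listen(port, () => console.log(`API BFF on :${port}`)));
```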

Don't forget to check out the GitHub repository with all the services and BFFs mentioned above.

Conclusion

You may choose a BFF when your frontends have specialised needs. A BFF can help with performance optimisation and team autonomy, and it can also implement fine-grained security measures to hide any sensitive information. If your method contracts stay the same, you need minimal changes to plug your BFF in.

On the other hand, you may be better off with your monolith if you have a smaller project, as a BFF comes with added infrastructure and maintenance costs and with the risk of your APIs going out of sync. If you feel a BFF is not something that would help, other options to consider are:

  • API Gateways
  • GraphQL
  • Or another microservice...

So, are you ready to befriend a BFF?


