Azure Service Fabric - Our way

Over the last several years I have worked on different microservice-based solutions for large-scale systems. The most recent project was built on the Azure Service Fabric platform, and along the way we found some great features that aren't well described by Microsoft. I'd like to share that knowledge with you.

Service Fabric has two main options: stateless and stateful services. If you think stateless apps are the better choice, you are wrong.

The stateless approach is the more familiar one and looks like a classic three-tier app: a web tier (SPA or ASP.NET MVC), an app-services tier (Web API), and a database tier such as SQL Server. Stateless services share all their state through the database, and that database becomes the bottleneck: every other tier scales easily, so you end up having to introduce data caching.

The other way in Service Fabric is the database-less stateful approach, which is significantly better for large-scale apps. It has no single bottleneck: all state is partitioned and replicated across the service replicas, and every stateful service has one active primary replica and at least two secondaries. By default a stateful service is reachable only through Service Fabric's internal API, so you need additional stateless services for external access, as shown below.
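
To make the stateful model concrete, here is a minimal sketch of a service keeping its state in a reliable dictionary, which Service Fabric partitions and replicates across those replicas. The class, method, and collection names are illustrative, not our project's actual code:

```csharp
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class OrdersService : StatefulService
{
    public OrdersService(StatefulServiceContext context)
        : base(context) { }

    // State lives in a replicated reliable dictionary instead of an external database.
    public async Task AddOrderAsync(string orderId, string payload)
    {
        var orders = await StateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("orders");

        using (var tx = StateManager.CreateTransaction())
        {
            await orders.AddOrUpdateAsync(tx, orderId, payload, (key, existing) => payload);
            // The write is replicated to the secondary replicas before the commit completes.
            await tx.CommitAsync();
        }
    }
}
```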

We started development with the stateful approach and then optimized it. By default, Service Fabric needs stateless gateway services to expose stateful services externally, which means you need extra resources to host at least two processes per microservice. So, to reduce network latency, we added a Web API host directly inside the stateful service, with direct access to its data, and configured the load balancer to route requests straight to the stateful services.
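
A minimal sketch of that change, assuming an Endpoint named "ServiceEndpoint" with a fixed port is declared in ServiceManifest.xml (an assumption for illustration, not our exact configuration). The stateful service opens its own ASP.NET Core (Kestrel) listener, and because listenOnSecondary defaults to false, only the primary replica opens the port:

```csharp
using System.Collections.Generic;
using System.IO;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Services.Communication.AspNetCore;
using Microsoft.ServiceFabric.Services.Communication.Runtime;

// Inside the OrdersService class sketched above.
protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
    return new[]
    {
        // listenOnSecondary defaults to false, so only the primary replica opens this endpoint.
        new ServiceReplicaListener(serviceContext =>
            new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
                new WebHostBuilder()
                    .UseKestrel()
                    .ConfigureServices(services =>
                    {
                        services.AddMvc();
                        // Expose the reliable state manager to the Web API controllers.
                        services.AddSingleton<IReliableStateManager>(StateManager);
                    })
                    .Configure(app => app.UseMvc())
                    .UseContentRoot(Directory.GetCurrentDirectory())
                    // No unique-URL suffix: the load balancer talks to the fixed port directly.
                    .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                    .UseUrls(url)
                    .Build()))
    };
}
```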

You will not find this scenario in any official Service Fabric examples, but it is faster, simpler, and cheaper than the usual stateful setup. In our case, the Web API controllers and service logic access the in-memory reliable data directly, and that data is replicated across the cluster nodes.
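
As an illustration (controller and route names are hypothetical), here is a Web API controller that reads the reliable dictionary through the IReliableStateManager registered in the listener above, with no external database round trip:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;

[Route("api/[controller]")]
public class OrdersController : Controller
{
    private readonly IReliableStateManager _stateManager;

    // Injected from the WebHost container configured in CreateServiceReplicaListeners.
    public OrdersController(IReliableStateManager stateManager)
    {
        _stateManager = stateManager;
    }

    [HttpGet("{orderId}")]
    public async Task<IActionResult> Get(string orderId)
    {
        var orders = await _stateManager
            .GetOrAddAsync<IReliableDictionary<string, string>>("orders");

        using (var tx = _stateManager.CreateTransaction())
        {
            // Reads are served from the replica's local state on the same node.
            var order = await orders.TryGetValueAsync(tx, orderId);
            return order.HasValue ? Ok(order.Value) : (IActionResult)NotFound();
        }
    }
}
```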

To make this work, you just need to add health probes to your cluster resources and port rules to the load balancer. The load balancer should send requests only to nodes hosting active primary replicas, because per partition only the primary replica has an open network listener. The other replicas are secondaries that hold replicated copies of the data but expose no external API!
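
For reference, the load balancer piece might look roughly like the fragment below in the cluster's ARM template. This is only an illustrative sketch; the names, ports, and variable references are placeholders, not our actual configuration. A TCP probe against the service's fixed port succeeds only on the node where the primary replica has its listener open, so the rule effectively routes traffic to primaries only:

```json
"probes": [
  {
    "name": "OrdersServiceProbe",
    "properties": { "protocol": "Tcp", "port": 8080, "intervalInSeconds": 5, "numberOfProbes": 2 }
  }
],
"loadBalancingRules": [
  {
    "name": "OrdersServiceRule",
    "properties": {
      "protocol": "Tcp",
      "frontendPort": 8080,
      "backendPort": 8080,
      "frontendIPConfiguration": { "id": "[variables('lbIPConfig0')]" },
      "backendAddressPool": { "id": "[variables('lbPoolID0')]" },
      "probe": { "id": "[variables('lbProbeID_Orders')]" }
    }
  }
]
```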

For data storage we used reliable collections inside the stateful services, persisting data directly on the compute nodes, which is blazing fast. If a node fails, a secondary replica is promoted to primary, opens its Web API endpoint, and already has all the data replicated; the load balancer simply starts routing requests to the new primary of that service, so there is no downtime.
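
Nothing has to be written for the failover itself: Service Fabric promotes a secondary and then opens its replica listener automatically. If you want to observe the promotion, for example for logging, you can override OnChangeRoleAsync, as in this illustrative fragment of the service class sketched above:

```csharp
using System.Diagnostics;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;

// Inside the OrdersService class sketched above.
protected override Task OnChangeRoleAsync(ReplicaRole newRole, CancellationToken cancellationToken)
{
    // Logged when, for example, a secondary is promoted to Primary after a node failure.
    Trace.TraceInformation($"Replica {Context.ReplicaId} changed role to {newRole}");
    return base.OnChangeRoleAsync(newRole, cancellationToken);
}
```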

Our microservices were written in .NET Core, and the architecture is shown in the diagram below:

In conclusion, this architecture scales fast and provides the lowest possible latency for accessing your data and services, and you pay only for compute power: there is no external database, because all data is stored across the compute nodes in reliable collections.

If you need more information, just ask here.

Anton Bastiuchenko

.NET and Azure Engineer

3 years ago

Hello Serhii Seletskyi, are you still using Azure Service Fabric in 2021, or have you switched to ACI or AKS?

What would you suggest for the event store (on-premise)? Kafka / rabbit = DevOps :(

Rafael Herscovici

Principal engineer at Goss.Media

6 years ago

I am playing around with the same idea, but find it very hard to find examples. Most of the examples I find are outdated and do not show new functionality or other needed things.

Thomas Booysen

C# Full Stack Software Engineer and Scrum Master at BMW Group South Africa

6 years ago

Hi, in terms of your domain model, what does your typical microservice store (i.e. is it one model or multiple models), and what size does your data reach?

Igor Ljaskevic

Great at turning coffee into code.

7 years ago

Very interesting. This is especially useful for micro-services that are actually micro and the data stored in reliable collections can easily fit into memory and local storage. Great article!
