Taking some design principles from Kubernetes

[Cover image: screenshot of the gRPC and eventbus code]

I watched a video a while ago that did a deep dive into the Kubernetes architecture. It covered a few principles, but the one that stuck out to me was this:

Kubernetes APIs are declarative rather than imperative

  • As a user you define a desired state and the system works to drive towards that state. In practice, a user creates an API object that is persisted on the kube API server until deletion, and Kubernetes in turn ensures all relevant components work in parallel to drive towards that state. The benefit is automatic recovery: if anything goes wrong, Kubernetes takes care of moving the application around.
  • If the master were set up to use imperative APIs, it would need to call each component and tell it what to do. This would make the master more complex (it would have to play catch-up and store the state of every single component it was responsible for, and the control plane would grow larger and larger as components were added), leaving it brittle and difficult to extend.
  • All the declarative APIs that are exposed externally are also used internally by the components to interact with each other. The master makes use of the scheduler to check whether a node has the capacity for a workload, the scheduler interacts with the API server, and then all components work independently to drive towards that state. The Kubernetes API server is the centre of everything in the Kubernetes world (and remember there is no single point of failure, as the master nodes are replicated). When the nodes come up they monitor (watch) the kube API, i.e. each node is responsible for its own health and for keeping itself running. If a node crashes and recovers, it goes back to the API server to check its last state. This is called "level triggered rather than edge triggered", and it results in a simpler, more robust system that can easily recover from failure (see the sketch after this list).
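
To make the level-triggered idea concrete, here is a minimal Golang sketch of a reconcile loop. This is illustrative only, not actual Kubernetes code; fetchDesired and fetchActual are hypothetical stand-ins for calls to the API server and to the local runtime.

```go
package main

import (
	"fmt"
	"time"
)

// fetchDesired returns the declared state from the API server and
// fetchActual returns the observed local state; both are hypothetical
// stand-ins for real calls.
func fetchDesired() int { return 3 } // e.g. desired replica count
func fetchActual() int  { return 1 } // e.g. currently running replicas

func main() {
	for {
		desired, actual := fetchDesired(), fetchActual()
		// Level triggered: act on the observed state itself, so a
		// missed event or a crash simply means we catch up on the
		// next iteration instead of losing work.
		if actual != desired {
			fmt.Printf("reconciling: %d -> %d\n", actual, desired)
			// start or stop workloads here to close the gap
		}
		time.Sleep(5 * time.Second) // re-check the level, not the edge
	}
}
```

The key point is that the loop never depends on seeing every individual change: whatever state the component wakes up in, the next comparison against the API server drives it back to the declared state.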

So based on this principle, and a requirement I had to scale (run parallel workloads and monitor them in Tekton pipelines), I decided to build a system that implements this declarative approach with no hidden APIs, using Golang for all the services (I had already built a basic gRPC microservice in Golang). I also wanted to keep it as simple as possible, avoiding a message broker implementation (i.e. Kafka, AMQ etc.) and making use of gRPC remote procedure calls with protobuf.

For the eventbus I forked a repo that uses rpc HTTPPath: a server is started and clients subscribe via the rpc HTTPPath protocol (registering themselves with the server) and set a simple callback function. Users then interact via a simple REST API; once the API method has completed successfully, the rpc server publishes the event to the subscribed clients. The clients receive this event and then query the API server via a gRPC protobuf call, which concludes the level-triggered sequence. If a client fails, the server de-registers it; once the client is back online it registers against the server again (as a subscriber) and can then query its last state from the API server via the gRPC protobuf call. In the Kubernetes deployment I would then create several "server" pod replicas that make use of the Kubernetes service proxy to round robin between them, making a simple HA and scalable solution. A minimal sketch of the server side follows.
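
As an illustration of the eventbus side, here is a minimal sketch using Golang's standard net/rpc package served over its HTTP path. The type and method names, port and payload here are my own illustrative choices, not the forked repo's actual API.

```go
package main

import (
	"log"
	"net/http"
	"net/rpc"
	"sync"
)

// Event is the payload published to subscribers.
type Event struct {
	Topic string
	Data  string
}

// EventBus keeps the RPC addresses of subscribed clients.
type EventBus struct {
	mu   sync.Mutex
	subs map[string]bool // subscriber address -> registered
}

// Subscribe registers a client's own RPC address for callbacks.
func (b *EventBus) Subscribe(addr string, ok *bool) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subs[addr] = true
	*ok = true
	return nil
}

// Publish dials every subscriber and invokes its callback method.
// A failed call de-registers the subscriber; when it comes back
// online it re-subscribes and queries its last state over gRPC.
func (b *EventBus) Publish(ev Event, ok *bool) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	for addr := range b.subs {
		client, err := rpc.DialHTTP("tcp", addr)
		if err != nil {
			delete(b.subs, addr) // de-register the dead client
			continue
		}
		var ack bool
		if err := client.Call("Listener.Callback", ev, &ack); err != nil {
			delete(b.subs, addr)
		}
		client.Close()
	}
	*ok = true
	return nil
}

func main() {
	bus := &EventBus{subs: make(map[string]bool)}
	if err := rpc.Register(bus); err != nil {
		log.Fatal(err)
	}
	rpc.HandleHTTP() // serve the RPC methods on the default HTTP path
	log.Fatal(http.ListenAndServe(":9000", nil))
}
```

On the client side the mirror image applies: each client exposes its own small RPC service (the hypothetical Listener.Callback above), calls EventBus.Subscribe with its address on startup, and on every callback turns around and queries the API server over gRPC for its state.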

[Simple diagram of the system]

Integration and Testing

I created a simple gRPC protobuf framework. There are loads of tutorials online about creating gRPC and protobuf services using Golang. The second step was to include this in the REST API microservice using a simple layout (the Golang open-source project layout). Once I completed this step, I ensured all my unit tests passed and coverage was over 80%. The last step was to integrate the eventbus into the project. This took me longer than expected, as I had to include unit tests for both the HTTPPath client and server services, again ensuring code coverage was over 80%. Once this was done I built and tested the services for both amd64 and arm64 processors; I could then scp the arm64 binaries to my pine64 cluster and test the services remotely. A minimal sketch of the client-side gRPC query is shown below.
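
For completeness, here is a minimal sketch of the client-side gRPC protobuf call that follows an eventbus callback. The generated package, service and message names (pb.NewStateServiceClient, GetState, StateRequest) are hypothetical placeholders for whatever your .proto file actually generates.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/eventbus/gen/state" // hypothetical generated stubs
)

func main() {
	// Dial the gRPC API server (plaintext here for simplicity).
	conn, err := grpc.Dial("api-server:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Query the last declared state; the client then works to drive
	// its actual state towards it (the level-triggered step).
	client := pb.NewStateServiceClient(conn)
	resp, err := client.GetState(ctx, &pb.StateRequest{ClientId: "node-01"})
	if err != nil {
		log.Fatalf("get state: %v", err)
	}
	log.Printf("desired state: %v", resp)
}
```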

Here are the screenshots:

[Screenshot: client REST API call (API call to the local server)]

[Screenshot: server REST API call and publish event]

[Screenshot: client receiving the rpc HTTPPath event via the callback function (on the remote Pi server - I was too lazy to update the time zone from New York to UTC+1)]

[Screenshot: client making a gRPC protobuf call to the gRPC server]

The next steps are to build and push the Linux containers and then deploy them on my local Kubernetes cluster. I have ArgoCD and Tekton deployed, as well as SonarQube and Gitea for my local repo, to build, deploy and test the pipelines. The final stage will be a deployment to an OpenShift Dedicated cluster for load and performance profiling.

The use cases for this type of design are endless. As mentioned, it is easy to extend and it takes all the hard work off the "master": we declare a state via the REST API, the "master" stores this state and informs the clients via a simple callback, and the clients then work in parallel to drive towards the desired state (in my case, executing Tekton pipelines in parallel). The design is highly scalable and fairly simple to implement.

Thanks Kubernetes :)
