Taking some design principles from Kubernetes
A while ago I watched a deep-dive video on Kubernetes architecture. It covered a few principles, but the one that stuck out to me was this:
Kubernetes APIs are declarative rather than imperative
- As a user you define a desired state and the system works to drive towards that state. In practice, a user creates an API object that is persisted on the kube API server until deletion, and Kubernetes ensures all relevant components work in parallel to drive towards that state. The benefit is automatic recovery: if anything goes wrong, the system takes care of moving the application around.
- If the master were set up to use imperative APIs, it would need to call each component and tell it what to do. This would make the master more complex (it would have to play catch-up, store the state of every single component it was responsible for, and the control plane would grow larger and larger as components were added), and the system would become brittle and difficult to extend.
- The declarative APIs that are exposed externally are the same ones the components use internally to interact with each other. The master uses the scheduler to check whether a node has capacity for a workload, the scheduler interacts with the API server, and all components then work independently to drive towards that state. The Kubernetes API server is the centre of everything in the Kubernetes world (and remember there is no single point of failure, as the master nodes are replicated). When nodes come up they watch the kube API, i.e. each node is responsible for its own health and for keeping itself running. If a node crashes and recovers, it goes back to the API server to check its last desired state. This is called "level triggered rather than edge triggered", and it results in a simpler, more robust system that can easily recover from failure (see the sketch below).
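To make "level triggered" concrete, here is a minimal, self-contained sketch in golang. None of this is real Kubernetes code; State, apiServer and node are stand-ins I made up. The point is that the component re-reads the desired state on every pass and converges towards it, so a missed event never matters:

```go
package main

import (
	"fmt"
	"time"
)

// State is a simplified stand-in for an API object's spec.
type State struct {
	Replicas int
}

// apiServer and node are hypothetical stand-ins for the real components.
type apiServer struct{ desired State }

func (a *apiServer) Desired() State { return a.desired }

type node struct{ actual State }

// reconcile is level triggered: it compares desired vs actual on every
// pass and converges, instead of reacting to individual change events.
func (n *node) reconcile(api *apiServer) {
	desired := api.Desired()
	for n.actual.Replicas < desired.Replicas {
		n.actual.Replicas++ // e.g. start a workload
		fmt.Println("started replica, now", n.actual.Replicas)
	}
	for n.actual.Replicas > desired.Replicas {
		n.actual.Replicas-- // e.g. stop a workload
		fmt.Println("stopped replica, now", n.actual.Replicas)
	}
}

func main() {
	api := &apiServer{desired: State{Replicas: 3}}
	n := &node{}
	// Even if the node crashes and restarts, the next pass re-reads the
	// desired state and recovers - no event history is needed.
	for i := 0; i < 3; i++ {
		n.reconcile(api)
		time.Sleep(100 * time.Millisecond)
	}
}
```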
So, based on this principle and a requirement I had to scale (run parallel workloads and monitor them in Tekton pipelines), I decided to build a system that implements this declarative approach with no hidden APIs. I wrote all the services in golang (I had already built a basic gRPC microservice in golang) and wanted to keep it as simple as possible: no message broker (i.e. Kafka, AMQ etc.), just gRPC remote procedure calls with protobuf.

For the eventbus I forked a repo that uses rpc HTTPPath: a server is started and clients subscribe via the rpc HTTPPath protocol (registering themselves with the server) and set a simple callback function. Users interact via a simple REST API; once the API method has completed successfully, the rpc server publishes the event to the subscribed clients. The clients receive this event and then query the API server via a gRPC protobuf call, which concludes the level-triggered sequence. If a client fails, the server de-registers it; once the client is back online it registers with the server again (as a subscriber) and can query its last state from the API server via the gRPC protobuf call. In the Kubernetes deployment I then create several "server" pod replicas, which sit behind the Kubernetes service proxy so requests round robin between them, making a simple HA and scalable solution.
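To show the moving parts, here is a minimal, self-contained sketch of that pattern using golang's standard net/rpc over its HTTP path. The actual project uses the forked eventbus repo, so the Bus and Subscriber types, the method names and the ports here are all illustrative:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"net/rpc"
	"sync"
)

// Event is the payload pushed from the bus to its subscribers.
type Event struct{ Topic string }

// Bus is a hypothetical eventbus server: subscribers register their
// callback address, and publish dials each one back over net/rpc.
type Bus struct {
	mu          sync.Mutex
	subscribers []string
}

// Subscribe is the RPC method a client calls to register itself.
func (b *Bus) Subscribe(addr string, ok *bool) error {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.subscribers = append(b.subscribers, addr)
	*ok = true
	return nil
}

// publish is called locally once a REST API method has completed; it
// pushes the event to every registered subscriber.
func (b *Bus) publish(ev Event) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for _, addr := range b.subscribers {
		c, err := rpc.DialHTTP("tcp", addr)
		if err != nil {
			continue // a real implementation would de-register the client here
		}
		var ack bool
		c.Call("Subscriber.OnEvent", ev, &ack)
		c.Close()
	}
}

// Subscriber is the client-side callback target.
type Subscriber struct{}

// OnEvent is the simple callback; the real client reacts by querying
// the API server over gRPC for its desired state (level triggered).
func (s *Subscriber) OnEvent(ev Event, ack *bool) error {
	log.Println("event received:", ev.Topic)
	*ack = true
	return nil
}

// serve exposes rcvr over net/rpc on the default HTTP path.
func serve(rcvr any, addr string) {
	srv := rpc.NewServer()
	if err := srv.Register(rcvr); err != nil {
		log.Fatal(err)
	}
	mux := http.NewServeMux()
	mux.Handle(rpc.DefaultRPCPath, srv)
	l, err := net.Listen("tcp", addr)
	if err != nil {
		log.Fatal(err)
	}
	go http.Serve(l, mux)
}

func main() {
	bus := &Bus{}
	serve(bus, "127.0.0.1:7000")           // eventbus server
	serve(&Subscriber{}, "127.0.0.1:7001") // client's callback endpoint

	// The client registers itself with the bus.
	c, err := rpc.DialHTTP("tcp", "127.0.0.1:7000")
	if err != nil {
		log.Fatal(err)
	}
	var ok bool
	if err := c.Call("Bus.Subscribe", "127.0.0.1:7001", &ok); err != nil {
		log.Fatal(err)
	}

	// In the real system a successful REST call triggers this publish.
	bus.publish(Event{Topic: "pipeline.created"})
}
```

The important property is the same as in the real system: the event only tells the client *that* something changed; the client still fetches *what* changed from the API server, so a lost event is recovered on the next query.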
Simple diagram of the system.
Integration and Testing
I started by creating a simple gRPC protobuf framework; there are loads of tutorials online about creating gRPC and protobuf services in golang. The second step was to include this in the REST API microservice using a simple layout (the golang open-source project layout). Once I completed this step, I ensured all my unit tests passed and coverage was over 80%. The last step was to integrate the eventbus into the project. This took longer than expected, as I had to add unit tests for both the HTTPPath client and server services, again keeping code coverage over 80%. Once this was done I built and tested the services for both amd64 and arm64 processors, so I could scp the arm64 binaries to my pine64 cluster and test the services remotely.
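The real tests live against the forked eventbus code, but as an illustration, this is the shape of the table-driven tests I mean, written here against the hypothetical Subscriber callback from the sketch above:

```go
package main

import "testing"

// A table-driven test for the Subscriber.OnEvent callback from the
// sketch above; small tests like this keep coverage high cheaply.
func TestOnEvent(t *testing.T) {
	cases := []struct {
		name  string
		topic string
	}{
		{"pipeline event", "pipeline.created"},
		{"empty topic", ""},
	}
	s := &Subscriber{}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			var ack bool
			if err := s.OnEvent(Event{Topic: tc.topic}, &ack); err != nil {
				t.Fatalf("OnEvent returned error: %v", err)
			}
			if !ack {
				t.Fatal("expected event to be acknowledged")
			}
		})
	}
}
```

Cross-compiling the arm64 binaries for the pine64 boards only needs the standard Go toolchain environment variables, e.g. GOOS=linux GOARCH=arm64 go build.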
Here are the screenshots:
Client REST API call (API call to the local server)
Server REST API call and event publish
Client receiving the rpc HTTPPath event via its callback function (on the remote pi server - I was too lazy to update the time zone from New York to UTC+1)
Client making a gRPC protobuf call to the gRPC server
The next steps are to build and push the Linux containers and then deploy them on my local Kubernetes cluster. I have ArgoCD and Tekton deployed, as well as SonarQube and Gitea for my local repo, to build, deploy and test the pipelines. The final stage will be deployed to an OpenShift Dedicated cluster for load and performance profiling.
The use cases for this type of design are endless. As mentioned, it is easy to extend and takes all the hard work off the "master": we declare a state via the REST API, the "master" stores this state and informs the clients via a simple callback, and the clients then work in parallel towards the desired state (in my case, executing Tekton pipelines in parallel). The design is highly scalable and fairly simple to implement.
Thanks, Kubernetes :)