Alternative to Kubernetes: Kontena
Marcel Koert
Innovative Platform Engineer | DevOps Engineer | Site Reliability Engineer | IT Educator | Founder of Melomar-IT
Kontena supports companies that need to run containers at scale. Founded in March 2015, Kontena has developed an open-source platform for managing apps and microservices in containerized environments.
The company offers a user-friendly service for on-premise, cloud, or hybrid infrastructures. Little knowledge of DevOps or Linux is required to use the system, which aims to provide "everything needed to run and scale containers in production."
Kontena Helps Manage Kubernetes Clusters
A free version of Kontena is available for the management of Kubernetes clusters. This free desktop application for container orchestration is provided in parallel with the enterprise version.
The free version of Kontena is available for download for Linux, Windows, and macOS. With the Kontena Management Dashboard, you can granularly understand what is going on in the clusters.
The dashboard provides real-time visualization of the most important metrics, configurations, and log streams. Users get insight into their Kubernetes clusters, including all nodes and current workloads. This can be used, for example, to verify that a cluster is properly set up and configured.
An integrated terminal allows applications to be inspected or corrected without losing context. Access to data, namespaces, and other resources is restricted by role-based access control. For this, Kontena supports common external authentication systems through its user management and integration APIs.
The Free version differs from the Enterprise version in that it does not offer a browser view, the authentication options are restricted, and there is no premium support.
Features
- Advanced Orchestration
- Service Abstraction
- Service Linking
- Automatic Node Discovery
- Mixed Infrastructure Support
- Easy to start
- Fast deployment
- Easy resource management
Kubernetes Distribution Kontena Pharos Available as a Beta
With Pharos, Kontena has announced a certified Kubernetes distribution. The free, open-source solution, licensed under Apache 2, aims to win users over with its robustness and simplicity in both private and commercial environments.
According to the manufacturer, Kontena Pharos offers a foundation for Kubernetes clusters of all sizes. It is based on the latest Kubernetes sources with all the essential components - including tools designed to make it easy to update and maintain the system with security fixes and platform updates.
Pharos is designed to work not only in the cloud but on any infrastructure. In particular, administration, which normally demands considerable resources and specialist knowledge, is to be simplified, underlines Miska Kaipiainen, CEO and founder of Kontena Inc.: "We have made it our mission to relieve developers and companies of the immense complexity of container technology, and of Kubernetes in particular."
Experience gathered with the company's own container platform, available since 2015, flowed into the development of Pharos: "With Kontena Pharos, companies can benefit from container technology immediately, and not only after months or even years," says Kaipiainen.
Version 1.0 of Kontena Pharos was released in May 2018 during KubeCon Europe in Copenhagen, Denmark. The freely available version can be found in the Pharos cluster repository on GitHub. A trial version of Kontena Pharos is available on the manufacturer's website. For companies, Kontena offers commercial subscriptions with support and SLAs, as well as consulting and training packages.
Kontena Pharos 2.4, Kontena Network Load Balancer / Universal Load Balancer, Kontena Lens, and Kontena Storage in Action on Bare Metal Instances at Scaleway
Kontena Pharos 2.4 was announced with new features, the decoupling of the Kontena Lens component, and support for Kubernetes 1.14.3.
As before, we launch three Bare Metal instances of type C2L at Scaleway, with Ubuntu 18.04 LTS (in the Amsterdam region).
To deploy our Kubernetes cluster, we will therefore use this new version of Kontena Pharos. The community version is available on GitHub:
Or you can go for Pro:
Kontena Pharos OSS is the basic version and contains all the essential functionality to take full advantage of Kubernetes at any scale, on any infrastructure. It is 100% open source under the Apache 2 license, free for any use.
Kontena Pharos PRO is based on Kontena Pharos OSS but adds enhanced features and advanced functionality. It is commercial, but you can evaluate it for free for as long as you need!
We start by preparing our cluster.yml configuration file, which enables a number of add-ons supported by the PRO version of Kontena Pharos:
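As an illustration, a minimal cluster.yml could look like the sketch below. The host addresses come from our ZeroTier subnet; the exact add-on keys and options are assumptions here and may differ in your Pharos version:

hosts:
  # Addresses on the ZeroTier private subnet (placeholders)
  - address: <MASTER-ZEROTIER-IP>
    user: root
    role: master
  - address: <WORKER1-ZEROTIER-IP>
    user: root
    role: worker
  - address: <WORKER2-ZEROTIER-IP>
    user: root
    role: worker
addons:
  # PRO add-ons used later in this walkthrough (names assumed)
  kontena-lens:
    enabled: true
  kontena-storage:
    enabled: true
  kontena-network-lb:
    enabled: true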
Beforehand, we applied this cloud-init script to these instances:
#!/bin/sh
apt install sudo iputils-ping -y
echo "root ALL=(ALL) NOPASSWD: ALL" | tee /etc/sudoers.d/root
yes | mkfs.ext4 /dev/sda
mkdir -p /var/lib/docker
fs_uuid=$(blkid -o value -s UUID /dev/sda)
echo "UUID=$fs_uuid /var/lib/docker ext4 defaults 0 0" >> /etc/fstab
mount -a
curl -s https://install.zerotier.com/ | bash
zerotier-cli join <YOUR NETWORK-ID>
These instances indeed have a second 250 GB disk (used in particular by Kontena Storage):
The instances are added to ZeroTier (a P2P VPN) on a private subnet with Ethernet Bridging mode activated (needed for Kontena Network Load Balancer):
And we launch it all:
$ pharos up -c cluster.yml
The Kubernetes cluster is then available:
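A quick way to verify this from the workstation is to fetch the kubeconfig and list the nodes (a sketch, assuming the pharos kubeconfig subcommand is available in your version of the CLI):

$ pharos kubeconfig -c cluster.yml > kubeconfig
$ export KUBECONFIG=$PWD/kubeconfig
$ kubectl get nodes -o wide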
We have access to the Kontena Lens dashboard:
We deployed Kontena Storage here, which brings Rook (Ceph) into the cluster along with an associated dashboard. To make the dashboard accessible, we modify the manifest of its service to switch it to the LoadBalancer type via Kontena Network Load Balancer (which is based on MetalLB):
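For illustration, the change can also be made with a patch of this kind; the rook-ceph-mgr-dashboard service name is an assumption based on Rook's defaults:

$ kubectl -n kontena-storage patch service rook-ceph-mgr-dashboard \
  -p '{"spec": {"type": "LoadBalancer"}}'
$ kubectl -n kontena-storage get svc rook-ceph-mgr-dashboard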
An address from the ZeroTier private network pool is automatically assigned by Kontena Network Load Balancer:
This allows access to the dashboard. The password for the admin user is retrieved as follows:
$ kubectl -n kontena-storage get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
We can also use the command line to check the health of the Ceph cluster:
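One way to do this is through the Rook toolbox pod, assuming it is deployed with its default app=rook-ceph-tools label:

$ TOOLS_POD=$(kubectl -n kontena-storage get pod -l app=rook-ceph-tools \
  -o jsonpath='{.items[0].metadata.name}')
$ kubectl -n kontena-storage exec -it $TOOLS_POD -- ceph status
$ kubectl -n kontena-storage exec -it $TOOLS_POD -- ceph osd status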
Kontena Lens offers access to a catalog of Helm charts for installing applications in the Kubernetes cluster:
We modify the chart parameters for Weave Scope from a Kontena Lens terminal, once again setting the service to the LoadBalancer type:
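If you prefer the command line to the Lens chart form, an equivalent change is a patch of this kind (the weave namespace and service name are assumptions):

$ kubectl -n weave patch service weave-scope -p '{"spec": {"type": "LoadBalancer"}}'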
and launch the deployment:
We have access to the viewer via Weave Scope deployed in the cluster:
Rook, like OpenEBS, also makes it possible to deploy Minio, for example. We deployed Minio here in distributed mode in the cluster:
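As a sketch, a distributed Minio deployment via the Helm chart of that era could look like the following; the storage class name for Kontena Storage is a placeholder:

$ helm install stable/minio --name minio --namespace minio \
  --set mode=distributed \
  --set replicas=4 \
  --set persistence.storageClass=<KONTENA-STORAGE-CLASS>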
We take the sources of our FC chatbot to deploy it as a static site in a Minio bucket:
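Uploading the static build to a bucket can then be done with the mc client, for example (the alias, keys, and paths below are placeholders, and the policy syntax varies between mc versions):

$ mc config host add minio http://<MINIO-LB-IP>:9000 <ACCESS-KEY> <SECRET-KEY>
$ mc mb minio/chatbot
$ mc cp --recursive ./build/ minio/chatbot/
$ mc policy set download minio/chatbot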
We reuse the Cloudflare Argo Tunnel to make this Chatbot publicly accessible:
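With cloudflared installed locally, a quick tunnel can be opened to the bucket endpoint, in a sketch like this:

$ cloudflared tunnel --url http://<MINIO-LB-IP>:9000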
The Chatbot is accessible via the URL returned by Argo Tunnel:
with decent performance:
Next, another test: "GitOps" via Flagger and Istio in this cluster. We start from the sources offered in this GitHub repository, following this workflow:
Istio, Weave Flux, Flagger, Prometheus, and Helm are loaded into the cluster:
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller --wait

git clone https://github.com/<YOUR-USERNAME>/gitops-istio
cd gitops-istio
./scripts/flux-init.sh git@github.com:<YOUR-USERNAME>/gitops-istio
At startup, Weave Flux generates an SSH key and logs the public key. The flux-init.sh command above will print that public key.
To synchronize the state of your cluster with Git, you must copy the public key and create a deploy key with write access on your GitHub repository. On GitHub, select Settings > Deploy keys, click Add deploy key, check Allow write access, paste the Flux public key, and click Add key.
Once Weave Flux has write access to your repository, it will do the following:
- It creates the istio-system and prod namespaces
- It creates the CRDs for Istio
- It installs Flagger with Helm
- It installs Grafana for Flagger
- It creates the load test deployment
- It creates the frontend deployment in canary mode
- It creates the backend deployment in canary mode
- It creates the public Istio gateway
When Weave Flux synchronizes the Git repository with the cluster, it creates the frontend/backend deployments, HPAs, and canary objects. Flagger uses these definitions to create a series of objects: Kubernetes deployments, ClusterIP services, and Istio virtual services:
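For reference, an abridged Canary definition of the kind Flagger consumed at the time (API version flagger.app/v1alpha3) looks roughly like this; names and thresholds are illustrative:

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: frontend
  namespace: prod
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  service:
    port: 80
  canaryAnalysis:
    # Shift traffic in 10% steps up to 50%, checking every minute
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        threshold: 99
        interval: 1m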
Flagger detects that the deployment revision has changed and initiates a new deployment:
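This detection can be exercised, for a quick test outside the Git workflow, by bumping the image tag directly (deployment name and image are placeholders):

$ kubectl -n prod set image deployment/frontend frontend=<IMAGE>:<NEW-TAG>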
All this is monitored with Grafana:
and viewable in Weave Scope:
Or in Weave Cloud where it is possible to initiate an automated deployment in GitOps mode (example here with the FC demonstrator):
We make a change to the deployment manifest in the GitHub repository; the change is automatically detected and a redeployment follows:
accompanied by monitoring:
The FC demonstrator remains accessible (via the IP address provided by Kontena Network Load Balancer).
Finally, we can use Kontena Universal Load Balancer, which builds on Akrobateo (seen previously), by modifying the cluster.yml file at the add-on level:
addons:
  kontena-universal-lb:
    enabled: true
For this, we start from a cluster of Bare Metal instances of type C2M at Scaleway:
Once the deployment is complete, Kontena Universal Load Balancer (Akrobateo) is installed. Akrobateo is a simple Kubernetes operator that exposes the cluster's LoadBalancer services as node hostPorts using DaemonSets:
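The result can be checked by listing the LoadBalancer services and the DaemonSets created by the operator (the akrobateo name used in the filter is an assumption):

$ kubectl get svc --all-namespaces | grep LoadBalancer
$ kubectl get daemonsets --all-namespaces | grep akrobateo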
and, as always, with Kontena Lens:
We hope this complete step-by-step guide has clarified the configuration of Kontena Pharos 2.4, Kontena Network Load Balancer / Universal Load Balancer, Kontena Lens, and Kontena Storage on Bare Metal instances at Scaleway. If you still have questions, you can contact us for further information.