Creating Your Own Local Cluster with Kubernetes

Cost optimization has become a crucial topic as everyone strives to reduce expenses, and cloud bills are one of the biggest targets. Cloud services can accumulate hefty bills, so it is essential to find ways to minimize these costs. One answer is to use fewer managed services for a shorter duration and instead run those services on your own local cluster. But then who takes care of crucial aspects such as automatic scaling, application stability, and availability? This is where Kubernetes comes in. By installing Kubernetes on your local cluster, you can manage all of these aspects and keep your services running smoothly while keeping costs in check.

In this blog post, I provide a step-by-step guide on how to create a local cluster using Kubernetes. To begin, you will need at least two machines with a minimum of 2 GB RAM each, running Ubuntu 20.04 and connected to the same network. Additionally, it is essential to disable swap on each machine, for example as shown below.
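
As a reference, here is a minimal way to disable swap on each node; the sed pattern is only a sketch and may need adjusting to match the layout of your /etc/fstab.

sudo swapoff -a                                # turn swap off for the current boot (kubelet refuses to start with swap enabled)
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab     # comment out swap entries so it stays off after reboot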

  1. Enable IP forwarding and let iptables see bridged traffic (every node).

echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward


cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF


cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF


sudo sysctl --system        
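
As a quick sanity check, you can confirm that the br_netfilter module is loaded and the sysctl values have been applied:

lsmod | grep br_netfilter                                          # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward      # both should print 1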


2. Install kubeadm, kubectl, and kubelet (every node).

sudo apt-get update && sudo apt-get install -y apt-transport-https curl

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -


cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF


sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl
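
To confirm the tools installed correctly, you can check their versions on each node (any recent version works for this guide):

kubeadm version
kubectl version --client
kubelet --version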
        

3. The next step is to install containerd. Containerd is a container runtime that manages the lifecycle of containers on a physical or virtual machine (a host). It is a daemon process that creates, starts, stops, and destroys containers; it can also pull container images from registries, mount storage, and set up networking for a container. Before installing containerd, we need to configure a few prerequisites.

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF


sudo modprobe overlay

sudo modprobe br_netfilter


cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF


sudo sysctl --system

Install and configure containerd

sudo apt-get update && sudo apt-get install -y containerd

sudo mkdir -p /etc/containerd

sudo containerd config default | sudo tee /etc/containerd/config.toml

sudo systemctl restart containerd
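
If your nodes use systemd (the default on Ubuntu 20.04), it is usually recommended to switch containerd to the systemd cgroup driver so it matches the kubelet. This is an optional adjustment on top of the commands above, and the one-liner below is only a sketch; it assumes the generated config contains a SystemdCgroup = false line (containerd 1.5 or newer), otherwise add the setting under the runc options manually.

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl enable containerd     # make sure containerd starts on boot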

4. You have now installed all of the necessary components, so the next step is to initialize your cluster. Before initialization, it is important to specify a pod network. Each pod gets its own unique IP address, and in the example below a small /24 range is used; most real-world clusters will require a larger address space. It is also recommended to run pods on a separate network from your host machines.

Run this on your master node:

sudo kubeadm init --pod-network-cidr=192.168.2.0/24

After running this command, you should see a message similar to the following:

## Your Kubernetes control-plane has initialized successfully
## 
## To start using your cluster, you need to run the following as a regular user:
## 
##   mkdir -p $HOME/.kube
##   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
##   sudo chown $(id -u):$(id -g) $HOME/.kube/config
## 
## You should now deploy a pod network to the cluster.
## Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
##   https://kubernetes.io/docs/concepts/cluster-administration/addons/
## 
## Then you can join any number of worker nodes by running the following on each as root:
## 
## kubeadm join 192.168.0.10:6443 --token boq2jb.qk3gu4v01l5cg2xc \
##     --discovery-token-ca-cert-hash sha256:35ad26fc926cb98e16f10447a1b43bc947d07c2c19b380c148d4c1478c7bf834        

Then run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
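
At this point kubectl can talk to the cluster. The control-plane node will typically show as NotReady until a pod network add-on (Calico, below) is installed:

kubectl get nodes     # the control-plane node appears, usually NotReady until the CNI is running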

Calico

Calico is an open-source networking and network security solution for containers, virtual machines, and native host-based workloads. We use Calico for pod-to-pod communication. You can install Calico by following its official installation guide.
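
As a sketch, one common way to install Calico is to apply its hosted manifest directly; the URL below points to the Calico project's manifest and may change between releases, so check the official documentation for the current version:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl get pods -n kube-system | grep calico     # wait until the calico pods are Running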

5. After successfully installing Calico, you can join your worker nodes.

To join a worker node, you need a join command, which you can generate with the following command.

Run this command on your master node.

kubeadm token create --print-join-command

Once you have the join command, run it on each worker node. Then go back to the master node and check that the worker nodes are ready, as shown below.
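
For example, after the join completes you can watch the new nodes register (node names will be whatever hostnames your machines use):

kubectl get nodes -o wide     # each worker should appear and move to Ready once Calico is running on it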

Your cluster is now ready to deploy.

MetalLB

6. After deploying Kubernetes, it's time to deploy MetalLB so that your services can be reached from outside the cluster. To install MetalLB on your Kubernetes cluster, do the following.

Create the MetalLB namespace and deployment:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml        

Then create a secret to hold the configuration information that MetalLB will use:

kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"        

Create a ConfigMap to hold the MetalLB configuration by saving the following manifest as config.yaml:

  1. Replace the example address range (192.168.0.210-192.168.0.250) with the range of IP addresses you want to allocate for MetalLB to use, on the same network as your nodes.
  2. Verify that the installation was successful (a quick check is shown after the apply step below).

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: my-ip-space
      protocol: layer2
      addresses:
      - 192.168.0.210-192.168.0.250        

Then apply it:

kubectl apply -f config.yaml
        

That's it! Now you can start using MetalLB to allocate IP addresses for your Kubernetes services.
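
To verify, check that the MetalLB pods are running, and then expose a workload with a Service of type LoadBalancer. The nginx deployment below is only a hypothetical test to show an external IP being assigned from your address pool:

kubectl get pods -n metallb-system                               # controller and speaker pods should be Running

kubectl create deployment nginx --image=nginx                    # hypothetical test workload
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get svc nginx                                            # EXTERNAL-IP should come from your MetalLB range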

If you have any doubts or suggestions, please contact me.
