How to install Kubernetes by yourself - with Kubeadm, Ansible and Vagrant
Kubernetes running in Vagrant - with a little help from Ansible and Kubeadm


Kubernetes is a pretty complex system, and it is notoriously hard to install, not least because there are many ways to install it.

As a result, there are tons of "out-of-the-box" solutions. Many cloud providers offer it preinstalled. There are also distributions like CoreOS offering a ready-to-go solution at the operating system level, and all-in-one installers like kops for AWS.

However, I believe it is important to learn how to install it from scratch, step by step. Of course, you do not have to install it yourself when it comes pre-installed (typically in clouds). But if you have to manage it, not understanding its installation is a big limitation. How are you supposed to troubleshoot it if you cannot get it off the ground without an "automatic pilot"?

So here is my tutorial describing how to install Kubernetes from scratch in a development environment, using Ubuntu 16.04 and Vagrant. The deployment is further automated with Ansible. The procedure has been tested on OS X and on Windows 10 with Windows Bash.

The procedure can actually be applied in cloud environments like Amazon Web Services, but here I am focusing on teaching how to install Kubernetes, not how to create a cloud infrastructure. So the infrastructure here is limited to Vagrant, which is simple enough to start with without having to learn complex cloud automation.

Prepare the servers

Let's start by creating the base environment.

At a minimum you need a master node and one worker node, but a single worker is of little use, so I recommend at least 3 workers. The master and the workers must be able to reach each other using stable IPs.

Here is a Vagrantfile able to deploy this setup.

domain   = 'kube'

# use two digits id below, please
nodes = [
  { :hostname => 'master', :ip => '10.0.0.10', :id => '10' },
  { :hostname => 'node1',  :ip => '10.0.0.11', :id => '11' },
  { :hostname => 'node2',  :ip => '10.0.0.12', :id => '12' },
  { :hostname => 'node3',  :ip => '10.0.0.13', :id => '13' },
]

memory = 2000

$script = <<SCRIPT
sudo mv hosts /etc/hosts
chmod 0600 /home/vagrant/.ssh/id_rsa
usermod -a -G vagrant ubuntu
cp -Rvf /home/vagrant/.ssh /home/ubuntu
chown -Rvf ubuntu /home/ubuntu
apt-get -y update
apt-get -y install python-minimal python-apt
SCRIPT

Vagrant.configure("2") do |config|
  config.ssh.insert_key = false
  nodes.each do |node|
    config.vm.define node[:hostname] do |nodeconfig|
      nodeconfig.vm.box = "ubuntu/xenial64"
      nodeconfig.vm.hostname = node[:hostname]
      nodeconfig.vm.network :private_network, ip: node[:ip], virtualbox__intnet: domain
      nodeconfig.vm.provider :virtualbox do |vb|
        vb.name = node[:hostname]+"."+domain
        vb.memory = memory
        vb.cpus = 1
        vb.customize ['modifyvm', :id, '--natdnshostresolver1', 'on']
        vb.customize ['modifyvm', :id, '--natdnsproxy1', 'on']
        vb.customize ['modifyvm', :id, '--macaddress1', "5CA1AB1E00"+node[:id]]
        vb.customize ['modifyvm', :id, '--natnet1', "192.168/16"]
      end
      nodeconfig.vm.provision "file", source: "hosts", destination: "hosts"
      nodeconfig.vm.provision "file", source: "~/.vagrant.d/insecure_private_key", destination: "/home/vagrant/.ssh/id_rsa"
      nodeconfig.vm.provision "shell", inline: $script
    end
  end
end

Note that the Vagrantfile also installs a hosts file (located in the same directory) on every node, mapping each hostname to its private IP.
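
Based on the hostnames and IPs defined in the Vagrantfile, a minimal hosts file would look like the following (this is a reconstruction from the addresses above, not necessarily the exact file from the original repository; adjust it if you change the IPs):

127.0.0.1   localhost
10.0.0.10   master
10.0.0.11   node1
10.0.0.12   node2
10.0.0.13   node3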

Place those files in a folder on a machine with Vagrant and VirtualBox installed and execute vagrant up. You will end up with 4 nodes in your VirtualBox. Note that each node takes 2 GB of memory, so your machine needs at least 8 GB available for the virtual machines.
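
For example, from the folder containing the Vagrantfile and the hosts file:

vagrant up
vagrant status

The second command should list the four machines (master, node1, node2, node3) as running.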

Installing the required software

Now that we have the (empty) nodes, we need to install the software required to run Kubernetes.

The Vagrantfile builds the nodes and installs an SSH key that allows access to them without a password. This is the setup required for using Ansible. The second step is the installation of the packages required by Kubernetes on the nodes. This task is performed by Ansible.

Note that before executing it, you need to set up SSH so you can access the nodes without a password. You can do it easily with vagrant ssh-config >~/.ssh/config and then add this line to your local hosts file:

127.0.0.1 master node1 node2 node3

You should then be able to access the servers with ssh vagrant@master, ssh vagrant@node1, and so on. If that works, you are ready to run Ansible. For clarity I split the Ansible scripts into a series of snippets; you are free to put them together with a main playbook and include_tasks. The whole script is available on GitHub, however.
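
The exact layout of the playbooks is in the GitHub repository, but as a sketch, an inventory consistent with the group and variable names used later in this article (groups['nodes'] and master_hostname) could look like the following; the file and variable names here are illustrative assumptions, not necessarily those of the repository:

[master]
master

[nodes]
node1
node2
node3

[all:vars]
ansible_user=vagrant
master_hostname=master

You would then run the playbook with something like ansible-playbook -i inventory main.yml (again, main.yml is a placeholder name).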

The first snippet is about installing the required software:

- shell: "apt-get -y update && apt-get install -y apt-transport-https"
- apt_repository:
    repo: 'deb https://apt.kubernetes.io/ kubernetes-xenial main'
    state: present
- name: install docker and kubernetes
  apt: name={{ item }} state=present allow_unauthenticated=yes
  with_items:
    - docker.io
    - kubelet
    - kubeadm
    - kubectl
    - ntp
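
Note that the snippet relies on allow_unauthenticated to skip the repository signature check. If you prefer signed packages, you can import the Kubernetes apt key on each node before adding the repository; at the time this setup was current the key was published at packages.cloud.google.com (check the Kubernetes documentation for the up-to-date location):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -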

The key services are Docker and the kubelet. Docker is essential because in this setup Kubernetes orchestrates Docker containers, so everything is built on top of Docker. The kubelet is the main Kubernetes agent on each node, since it manages Docker according to Kubernetes rules.

We also need two command line tools: kubectl, the Kubernetes client, and kubeadm, the installer. Finally we install the ntp time service, because Kubernetes requires the clocks of the nodes to be closely synchronized.
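
Once the playbook has run, you can quickly check the installed tools on any node, for example:

kubeadm version
kubectl version --client
docker --version
timedatectl status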

Once you have installed the packages, there are a few mandatory configurations. This is the Ansible script performing them:

- command: modprobe {{item}}
  with_items:
  - ip_vs
  - ip_vs_rr
  - ip_vs_wrr
  - ip_vs_sh
  - nf_conntrack_ipv4
- lineinfile: path=/etc/modules line='{{item}}' create=yes state=present
  with_items:
  - ip_vs
  - ip_vs_rr
  - ip_vs_wrr
  - ip_vs_sh
  - nf_conntrack_ipv4
- sysctl: name=net.ipv4.ip_forward value=1 state=present reload=yes sysctl_set=yes
- service: name=docker state=started enabled=yes
- service: name=ntp state=started enabled=yes
- service: name=kubelet state=started enabled=yes

Basically, here we make sure a few kernel modules used by Kubernetes (the IPVS modules used by the proxy and connection tracking) are loaded, IP forwarding is enabled, and the services are started and enabled at boot.
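
You can verify the result on any node, for example:

lsmod | grep ip_vs
sysctl net.ipv4.ip_forward
systemctl is-active docker ntp kubelet

Note that at this stage the kubelet may be restarting in a loop until kubeadm configures it; that is expected.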

Configuring master

Once the nodes are ready and configured, with the core services installed and running, we can install the Kubernetes master.

Kubernetes itself consists of a number of services running inside Docker, so once Docker is started, you can complete the installation of the master by deploying the appropriate Docker images.

This operation is performed by the kubeadm tool. All you need to do is run kubeadm init. Note that it will expose a server (the apiserver) to the other services at a given IP. If you have more than one interface, the advertised address may not be the one you expect (as happens in our VirtualBox virtual machines), so we are going to specify it explicitly. Since we know the IPs of the virtual machines in our cluster, we can provide the address statically.

kubeadm will then install all the required services as Docker containers; it runs for a while. It installs:

  • etcd, which stores the cluster configuration
  • the apiserver, which answers kubectl requests
  • the controller manager, which manages the cluster
  • the scheduler, which decides where containers should be placed
  • dns, which implements service discovery
  • the proxy, which routes traffic to services in the cluster

Once kubeadm completes its work, its output includes a join command that must be executed on each of the other nodes to connect them to the master. The Ansible script to deploy the master is:

- lineinfile: dest=/etc/sysctl.conf line='net.bridge.bridge-nf-call-ip6tables = 1' state=present
- lineinfile: dest=/etc/sysctl.conf line='net.bridge.bridge-nf-call-iptables = 1' state=present
- name: initialize kube
  shell: >
       kubeadm reset &&
       sysctl -p &&
       kubeadm init --apiserver-advertise-address=10.0.0.10 --pod-network-cidr=10.244.0.0/16
  args:
    creates: /etc/kubeadm-join.sh
  register: kubeadm_out
- lineinfile:
    path: /etc/kubeadm-join.sh
    line: "{{kubeadm_out.stdout_lines[-1]}}"
    create: yes
  when: kubeadm_out.stdout.find("kubeadm join") != -1
- service: name=kubelet state=started enabled=yes
- file: name=/etc/kubectl state=directory
- name: fix configmap for proxy
  shell: >
    export KUBECONFIG=/etc/kubernetes/admin.conf ;
    kubectl -n kube-system get cm/kube-proxy -o yaml
    | sed -e 's!clusterCIDR: ""!clusterCIDR: "10.0.0.0/24"!' >/etc/kubectl/kube-proxy.map ;
    kubectl -n kube-system replace cm/kube-proxy -f  /etc/kubectl/kube-proxy.map ;
    kubectl -n kube-system delete pods -l k8s-app=kube-proxy
  args:
    creates: /etc/kubectl/kube-proxy.map

Note also a required "fix" here: since with Vagrant we have two network interfaces on different networks, we have to change the kube-proxy configuration to tell it on which network the other cluster members are.
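
To confirm on the master that the fix was applied, you can check the ConfigMap and the restarted proxy pods, for example:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl -n kube-system get cm kube-proxy -o yaml | grep clusterCIDR
kubectl -n kube-system get pods -l k8s-app=kube-proxy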

Installing a Network plugin

We cannot proceed to deploy the nodes until we have installed another component: the network plugin.

Kubernetes uses a networking model that extends Docker's, connecting the containers through a virtual network. It is implemented by many different plugins, each with its own advantages and disadvantages. We picked Weave Net (one of the most widely used), so before moving to the nodes, we complete the installation of the master by installing this networking plugin.

- sysctl: name=net.bridge.bridge-nf-call-ip6tables value=1 state=present reload=yes sysctl_set=yes
- sysctl: name=net.bridge.bridge-nf-call-iptables value=1 state=present reload=yes sysctl_set=yes
- name: install weave net
  shell: >
    export KUBECONFIG=/etc/kubernetes/admin.conf ;
    export kubever=$(kubectl version | base64 | tr -d '\n') ;
    curl --location "https://cloud.weave.works/k8s/net?k8s-version=$kubever" >/etc/kubectl/weave.yml ;
    kubectl apply -f /etc/kubectl/weave.yml
- shell: >
    export KUBECONFIG=/etc/kubernetes/admin.conf ;
    kubectl get pods -n kube-system -l name=weave-net
  register: result
  until: result.stdout.find("Running") != -1
  retries: 100
  delay: 10

Configuring nodes

Once the master is ready and the nodes have the required software installed, we can complete the setup by executing the kubeadm join command on each node.

However, nodes communicate with the master over a protected channel, and not everyone can connect to the master. So to join the cluster you need to provide some secret information. This secret is a token that is generated when you perform kubeadm init on the master.
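
For reference, the join command printed by kubeadm init has roughly this shape (the exact flags depend on the kubeadm version, and the token and hash below are placeholders, not real values):

kubeadm join 10.0.0.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>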

So to automate joining the nodes with Ansible, we collect the join command from the master after the init and distribute it to the nodes, where we execute it. This task is performed by the following Ansible script.

- sysctl: name=net.bridge.bridge-nf-call-ip6tables value=1 state=present reload=yes sysctl_set=yes
- sysctl: name=net.bridge.bridge-nf-call-iptables value=1 state=present reload=yes sysctl_set=yes
- shell: "cat /etc/kubeadm-join.sh"
  register: cat_kubeadm_join
  when: inventory_hostname == master_hostname
- set_fact:
    kubeadm_join: "{{cat_kubeadm_join.stdout}}"
  when: inventory_hostname == master_hostname
- name: join nodes
  shell: >
     systemctl stop kubelet ; kubeadm reset ; 
     echo "{{hostvars[master_hostname].kubeadm_join}}" >/etc/kubeadm-join.sh ; 
     bash /etc/kubeadm-join.sh
  args:
    creates: /etc/kubeadm-join.sh
  when: inventory_hostname != master_hostname
- name: checking all nodes up
  shell: >
      export KUBECONFIG=/etc/kubernetes/admin.conf ;
      kubectl get nodes {{item}}
  register: result
  until: result.stdout.find("Ready") != -1
  retries: 100
  delay: 10
  with_items: "{{ groups['nodes'] }}"
  when: inventory_hostname == master_hostname

The node initialization will also distribute to the nodes two containers that are critical for Kubernetes to work properly: the first is the (already mentioned) networking plugin, and the second is the kube-proxy. At this point the Kubernetes cluster is complete and ready to work.

Now we can control the cluster using the kubectl command on the master, so we will use it to ensure the cluster is up and running and all the nodes are ready. The important point to know is that, to access the cluster, we need to give kubectl a configuration file containing the credentials. On the master this file is located at /etc/kubernetes/admin.conf.

We can either copy this file and distribute it to the users, or use it directly by pointing to it with the KUBECONFIG environment variable (but you need to be root in this case). To test whether the cluster is completely up and running we use the second way, but generally it is better to distribute the configuration to non-root users.
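
For example, to check the cluster as root on the master:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
kubectl get pods -n kube-system

To give a regular user access instead, copy the file into the user's home directory (this is also what kubeadm itself suggests at the end of the init):

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config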

Conclusions

If you followed the tutorial, you now have a Kubernetes cluster up and running in your VirtualBox, and you can play with it by installing components.

Those scripts (possibly in a more updated version) are part of the ongoing open source Mosaico project on GitHub, where we are building a deployment that will also include a number of components aimed at Big Data and Serverless applications.

In the repository there are also instructions on how to run the deployment on Windows. You can find the link on my company website. Enjoy.


David Stanton - MBA-ITM

ManTech Information Technology Network Engineer at ManTech

6 年

kubeadm regardless of switch/command only returning "Illegal Instruction" on a Raspberry PI 3B+ 4xRaspbery Pi Zero W cluster everything else is working but I cannot add my hosts. Any clues? fresh updated installs.

回复
Jun Deng

Senior Technical Manager

6 年

awesome stuff

回复
Andrew Hardie

Dev(Sec)Ops, Platform Engineering, IA & Observability Craftsman & Evangelist

6 年

Good piece!

回复

要查看或添加评论,请登录

社区洞察

其他会员也浏览了