Kubernetes the Hard Way: my journey!
To really learn something, you have to do it the hard way. To truly understand Kubernetes, I decided to learn it the hard way.
This is my journey. It was mostly smooth sailing, since the tutorial is well documented, but I did run into one bug that needed a workaround.
The first step is to download and install the Google Cloud SDK, which includes the gcloud command-line tool.
After installation, run the command below. It asks you to select a project to use; I am not sure what happens if you don't already have one.
% ./google-cloud-sdk/bin/gcloud init
Welcome! This command will take you through the configuration of gcloud.

Your current configuration has been set to: [default]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

You must log in to continue. Would you like to log in (Y/n)? Y

Your browser has been opened to visit:

    https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=32555940559.apps.googleusercontent.com&redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&state=ZjW6cvYDBvm4wmFLkzTCTbRgyzTwp8&access_type=offline&code_challenge=K1S4xXe6P0Om1CUKn7fXN_jXsYrp5Ry68SUpZzmvH3U&code_challenge_method=S256

You are logged in as: [[email protected]].

Pick cloud project to use:
 [1] api-project-1007544013996
 [2] charles-blog
 [3] charles-blogp
 [4] charles-cms
 [5] charles-codelab
 [6] charles-guo-303205
 [7] charles-service
 [8] charles-site
 [9] charles-web2py
 [10] charleschat-e3948
 [11] charlesguoblog
 [12] clouduptime
 [13] iloveyou-ad219
 [14] python-abp
 [15] rapid-fulcrum-754
 [16] rich-karma-142622
 [17] tridioncourse
 [18] Create a new project
Please enter numeric choice or text value (must exactly match list item): 12

Your current project has been set to: [clouduptime].

Not setting default zone/region (this feature makes it easier to use
[gcloud compute] by setting an appropriate default value for the
--zone and --region flag).
See https://cloud.google.com/compute/docs/gcloud-compute section on how to set
default compute region and zone manually.

If you would like [gcloud init] to be able to do this for you the next time
you run it, make sure the Compute Engine API is enabled for your project on
the https://console.developers.google.com/apis page.

Created a default .boto configuration file at [/Users/yinchao/.boto]. See this
file and [https://cloud.google.com/storage/docs/gsutil/commands/config] for
more information about configuring Google Cloud Storage.
Your Google Cloud SDK is configured and ready to use!

* Commands that require authentication will use [email protected] by default
* Commands will reference project `clouduptime` by default
Run `gcloud help config` to learn how to change individual settings

This gcloud configuration is called [default]. You can create additional
configurations if you work with multiple accounts and/or projects.
Run `gcloud topic configurations` to learn more.

Some things to try next:

* Run `gcloud --help` to see the Cloud Platform services you can interact with.
  And run `gcloud help COMMAND` to get help on any gcloud command.
* Run `gcloud topic --help` to learn about advanced features of the SDK like
  arg files and output formatting
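Since I already had projects to pick from, I never exercised the no-project path. If you start from scratch, a sketch (untested by me; the project ID below is hypothetical) would be to create a project before running gcloud init:

```shell
# Hypothetical project ID. GCP project IDs must be 6-30 characters,
# start with a lowercase letter, and use only lowercase letters,
# digits, and hyphens.
PROJECT_ID="kthw-demo"
echo "$PROJECT_ID" | grep -Eq '^[a-z][a-z0-9-]{4,28}[a-z0-9]$' \
  && echo "valid project id"
# Then create it and point gcloud at it (needs a billing account):
#   gcloud projects create "$PROJECT_ID"
#   gcloud config set project "$PROJECT_ID"
```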
This is the version I have:
% gcloud version
Google Cloud SDK 344.0.0
bq 2.0.69
core 2021.06.04
gsutil 4.62
Installing the Client Tools
No surprises here if you follow the instructions. This is what it looks like after installation:
% cfssl version
Version: 1.4.1
Runtime: go1.12.12
% kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"darwin/amd64"}
Creating network and firewall rules
% gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom
API [compute.googleapis.com] not enabled on project [670539060445]. Would you like to enable and retry (this will take a few minutes)? (y/N)? y
Enabling service [compute.googleapis.com] on project [670539060445]...
Operation "operations/acf.p2-670539060445-c1184b4e-3dd9-48af-b7eb-d4d93308a12b" finished successfully.
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/global/networks/kubernetes-the-hard-way].
NAME                     SUBNET_MODE  BGP_ROUTING_MODE  IPV4_RANGE  GATEWAY_IPV4
kubernetes-the-hard-way  CUSTOM       REGIONAL

Instances on this network will not be reachable until firewall rules
are created. As an example, you can allow all internal traffic between
instances as well as SSH, RDP, and ICMP by running:

$ gcloud compute firewall-rules create <FIREWALL_NAME> --network kubernetes-the-hard-way --allow tcp,udp,icmp --source-ranges <IP_RANGE>
$ gcloud compute firewall-rules create <FIREWALL_NAME> --network kubernetes-the-hard-way --allow tcp:22,tcp:3389,icmp

% gcloud compute networks subnets create kubernetes \
  --network kubernetes-the-hard-way \
  --range 10.240.0.0/24
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/regions/us-west1/subnetworks/kubernetes].
NAME        REGION    NETWORK                  RANGE
kubernetes  us-west1  kubernetes-the-hard-way  10.240.0.0/24
% gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
  --allow tcp,udp,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 10.240.0.0/24,10.200.0.0/16
Creating firewall...done.
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/global/firewalls/kubernetes-the-hard-way-allow-internal].
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW         DENY  DISABLED
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp        False

% gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
  --allow tcp:22,tcp:6443,icmp \
  --network kubernetes-the-hard-way \
  --source-ranges 0.0.0.0/0
Creating firewall...done.
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/global/firewalls/kubernetes-the-hard-way-allow-external].
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp        False
% gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
NAME                                    NETWORK                  DIRECTION  PRIORITY  ALLOW                 DENY  DISABLED
kubernetes-the-hard-way-allow-external  kubernetes-the-hard-way  INGRESS    1000      tcp:22,tcp:6443,icmp        False
kubernetes-the-hard-way-allow-internal  kubernetes-the-hard-way  INGRESS    1000      tcp,udp,icmp                False

To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
Create a public IP address:
% gcloud compute addresses create kubernetes-the-hard-way \
  --region $(gcloud config get-value compute/region)
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/regions/us-west1/addresses/kubernetes-the-hard-way].
% gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
NAME                     ADDRESS/RANGE  TYPE      PURPOSE  NETWORK  REGION    SUBNET  STATUS
kubernetes-the-hard-way  34.83.51.143   EXTERNAL           us-west1          RESERVED
Compute Instances
Kubernetes Controllers
% for i in 0 1 2; do
  gcloud compute instances create controller-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-2004-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --private-network-ip 10.240.0.1${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,controller
done
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [controller-0]: https://www.googleapis.com/compute/v1/projects/clouduptime/zones/us-west1-c/operations/operation-1623274106503-5c45bf47dc04a-be843e27-b5727b51
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [controller-1]: https://www.googleapis.com/compute/v1/projects/clouduptime/zones/us-west1-c/operations/operation-1623274113094-5c45bf4e25536-e37770ef-41bcef64
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [controller-2]: https://www.googleapis.com/compute/v1/projects/clouduptime/zones/us-west1-c/operations/operation-1623274115986-5c45bf50e7478-0443a51f-a81d39eb
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
Kubernetes Workers
% for i in 0 1 2; do
  gcloud compute instances create worker-${i} \
    --async \
    --boot-disk-size 200GB \
    --can-ip-forward \
    --image-family ubuntu-2004-lts \
    --image-project ubuntu-os-cloud \
    --machine-type e2-standard-2 \
    --metadata pod-cidr=10.200.${i}.0/24 \
    --private-network-ip 10.240.0.2${i} \
    --scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
    --subnet kubernetes \
    --tags kubernetes-the-hard-way,worker
done
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [worker-0]: https://www.googleapis.com/compute/v1/projects/clouduptime/zones/us-west1-c/operations/operation-1623274191213-5c45bf98a54c8-23f12275-af4fa58a
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [worker-1]: https://www.googleapis.com/compute/v1/projects/clouduptime/zones/us-west1-c/operations/operation-1623274194699-5c45bf9bf8656-06081adf-57529f57
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
NOTE: The users will be charged for public IPs when VMs are created.
Instance creation in progress for [worker-2]: https://www.googleapis.com/compute/v1/projects/clouduptime/zones/us-west1-c/operations/operation-1623274197829-5c45bf9ef48e5-d88901f5-a32be810
Use [gcloud compute operations describe URI] command to check the status of the operation(s).
% gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
NAME          ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
controller-0  us-west1-c  e2-standard-2               10.240.0.10  34.105.4.81     RUNNING
controller-1  us-west1-c  e2-standard-2               10.240.0.11  34.82.19.224    RUNNING
controller-2  us-west1-c  e2-standard-2               10.240.0.12  35.233.210.133  RUNNING
worker-0      us-west1-c  e2-standard-2               10.240.0.20  35.185.235.165  RUNNING
worker-1      us-west1-c  e2-standard-2               10.240.0.21  34.82.223.81    RUNNING
worker-2      us-west1-c  e2-standard-2               10.240.0.22  34.145.111.5    RUNNING
Configuring SSH Access
% gcloud compute ssh controller-0
WARNING: The private SSH key file for gcloud does not exist.
WARNING: The public SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/xxxxxxx/.ssh/google_compute_engine.
Your public key has been saved in /Users/xxxxxxx/.ssh/google_compute_engine.pub.
The key fingerprint is:
SHA256:iovU8NOM+cHmpdbTjwbET8uo3kSxMrzGpbv/ad4x+fw [email protected]
The key's randomart image is:
+---[RSA 3072]----+
|                 |
|                 |
|       ..        |
|     . oo.       |
|    . +.S= .     |
|   + O Oo + .    |
|  . B %ooo +     |
| . . BoBo ++ =   |
| . .oBoo*=.o o.E|
+----[SHA256]-----+
Updating project ssh metadata...done.
Updated [https://www.googleapis.com/compute/v1/projects/clouduptime].
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.6436549211158514836' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04.2 LTS (GNU/Linux 5.4.0-1044-gcp x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Wed Jun  9 21:33:32 UTC 2021

  System load:  0.0               Processes:             122
  Usage of /:   0.8% of 193.66GB  Users logged in:       0
  Memory usage: 3%                IPv4 address for ens4: 10.240.0.10
  Swap usage:   0%

1 update can be applied immediately.
To see these additional updates run: apt list --upgradable
Provisioning a CA and Generating TLS Certificates
Certificate Authority
% {
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
2021/06/09 14:38:13 [INFO] generating a new CA key and certificate from CSR
2021/06/09 14:38:13 [INFO] generate received request
2021/06/09 14:38:13 [INFO] received CSR
2021/06/09 14:38:13 [INFO] generating key: rsa-2048
2021/06/09 14:38:13 [INFO] encoded CSR
2021/06/09 14:38:13 [INFO] signed certificate with serial number 525222325803124307012800048869718245986330760045
Distribute the Client and Server Certificates
% for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
Warning: Permanently added 'compute.7537341684081527840' (ECDSA) to the list of known hosts.
ca.pem            100% 1318  35.2KB/s  00:00
worker-0-key.pem  100% 1679  46.2KB/s  00:00
worker-0.pem      100% 1493  42.1KB/s  00:00
Warning: Permanently added 'compute.3825325201041427516' (ECDSA) to the list of known hosts.
ca.pem            100% 1318  36.1KB/s  00:00
worker-1-key.pem  100% 1675  43.5KB/s  00:00
worker-1.pem      100% 1493  41.2KB/s  00:00
Warning: Permanently added 'compute.7217083990206614585' (ECDSA) to the list of known hosts.
ca.pem            100% 1318  25.7KB/s  00:00
worker-2-key.pem  100% 1679  45.4KB/s  00:00
worker-2.pem      100% 1493  40.5KB/s  00:00
% for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem ${instance}:~/
done
ca.pem                   100% 1318   8.8KB/s  00:00
ca-key.pem               100% 1679  11.4KB/s  00:00
kubernetes-key.pem       100% 1675  10.3KB/s  00:00
kubernetes.pem           100% 1663  46.5KB/s  00:00
service-account-key.pem  100% 1679  46.0KB/s  00:00
service-account.pem      100% 1440  40.6KB/s  00:00
Warning: Permanently added 'compute.7437964077951650926' (ECDSA) to the list of known hosts.
ca.pem                   100% 1318  37.7KB/s  00:00
ca-key.pem               100% 1679  46.3KB/s  00:00
kubernetes-key.pem       100% 1675  47.2KB/s  00:00
kubernetes.pem           100% 1663  47.4KB/s  00:00
service-account-key.pem  100% 1679  48.2KB/s  00:00
service-account.pem      100% 1440  40.0KB/s  00:00
Warning: Permanently added 'compute.6163077399197813867' (ECDSA) to the list of known hosts.
ca.pem                   100% 1318  36.2KB/s  00:00
ca-key.pem               100% 1679  47.9KB/s  00:00
kubernetes-key.pem       100% 1675  46.7KB/s  00:00
kubernetes.pem           100% 1663  46.7KB/s  00:00
service-account-key.pem  100% 1679  45.1KB/s  00:00
service-account.pem
Generating Kubernetes Configuration Files for Authentication
These configuration files enable Kubernetes clients to locate and authenticate to the Kubernetes API servers. Follow the tutorial's steps to generate the files, then distribute them at the end.
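For reference, each generated kubeconfig has roughly this shape (a sketch, not copied from my output; the tutorial builds it with kubectl config set-cluster/set-credentials/set-context, embedding the base64-encoded PEM data):

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 of ca.pem>
    server: https://${KUBERNETES_PUBLIC_ADDRESS}:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: system:node:worker-0
  name: default
current-context: default
users:
- name: system:node:worker-0
  user:
    client-certificate-data: <base64 of worker-0.pem>
    client-key-data: <base64 of worker-0-key.pem>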
% for instance in worker-0 worker-1 worker-2; do
  gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
worker-0.kubeconfig    100% 6386  164.5KB/s  00:00
kube-proxy.kubeconfig  100% 6324  172.3KB/s  00:00
worker-1.kubeconfig    100% 6382  153.2KB/s  00:00
kube-proxy.kubeconfig  100% 6324   96.2KB/s  00:00
worker-2.kubeconfig    100% 6386  175.3KB/s  00:00
kube-proxy.kubeconfig  100% 6324  167.5KB/s  00:00
% for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
admin.kubeconfig                    100% 6265  169.6KB/s  00:00
kube-controller-manager.kubeconfig  100% 6387  175.3KB/s  00:00
kube-scheduler.kubeconfig           100% 6341  161.7KB/s  00:00
admin.kubeconfig                    100% 6265  166.7KB/s  00:00
kube-controller-manager.kubeconfig  100% 6387  173.1KB/s  00:00
kube-scheduler.kubeconfig           100% 6341  162.9KB/s  00:00
admin.kubeconfig                    100% 6265  160.9KB/s  00:00
kube-controller-manager.kubeconfig  100% 6387  159.3KB/s  00:00
kube-scheduler.kubeconfig           100% 6341  164.5KB/s  00:00
Generating the Data Encryption Config and Key
Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.
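The encryption-config.yaml distributed below has roughly this structure (a sketch from the tutorial, not my exact file; the secret is a random 32-byte key, base64-encoded):

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}   # e.g. head -c 32 /dev/urandom | base64
      - identity: {}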
% for instance in controller-0 controller-1 controller-2; do
  gcloud compute scp encryption-config.yaml ${instance}:~/
done
encryption-config.yaml  100%  240  6.2KB/s  00:00
encryption-config.yaml  100%  240  6.8KB/s  00:00
encryption-config.yaml  100%  240  6.4KB/s  00:00
Bootstrapping the etcd Cluster
Kubernetes components are stateless and store cluster state in etcd. In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
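For context, the etcd systemd unit being enabled below looks roughly like this (an abbreviated sketch; the tutorial's version passes additional peer-TLS and client-auth flags):

[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name ${ETCD_NAME} \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --initial-cluster controller-0=https://10.240.0.10:2380,controller-1=https://10.240.0.11:2380,controller-2=https://10.240.0.12:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target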
$ {
>   sudo systemctl daemon-reload
>   sudo systemctl enable etcd
>   sudo systemctl start etcd
> }
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
$ sudo ETCDCTL_API=3 etcdctl member list \
>   --endpoints=https://127.0.0.1:2379 \
>   --cacert=/etc/etcd/ca.pem \
>   --cert=/etc/etcd/kubernetes.pem \
>   --key=/etc/etcd/kubernetes-key.pem
3a57933972cb5131, started, controller-2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
f98dc20bce6225a0, started, controller-0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
ffed16798470cab5, started, controller-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
Bootstrapping the Kubernetes Control Plane
In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
At this stage, I ran into an issue: the metadata API call used to set the REGION environment variable was not working. It looks like I am not alone, as this has been reported by others as well here.
$ REGION=$(curl -s -H "Metadata-Flavor: Google" \
>   https://metadata.google.internal/computeMetadata/v1/project/attributes/google-compute-default-region)
$ echo $REGION
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 404 (Not Found)!!1</title>
<style> [Google's inline error-page CSS omitted] </style>
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>404.</b> <ins>That's an error.</ins>
<p>The requested URL <code>/computeMetadata/v1/project/attributes/google-compute-default-region</code> was not found on this server. <ins>That's all we know.</ins>
I worked around this issue by hard-coding the region value:
REGION=us-west1
Someone in the same bug report suggested an alternative solution:
$ curl -s -H "Metadata-Flavor: Google" https://metadata.google.internal/computeMetadata/v1/instance/zone | cut -d/ -f 4 | sed 's/.\{2\}$//'
us-west1
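That pipeline works because the instance zone endpoint returns a path like projects/&lt;id&gt;/zones/us-west1-c: cut takes the fourth path segment and sed drops the last two characters (the -&lt;letter&gt; zone suffix). The string handling can be checked on its own with a hard-coded sample of the metadata response:

```shell
# Sample of what the metadata server returns (the real value comes
# from curl; the project number here is taken from the output above).
zone_path="projects/670539060445/zones/us-west1-c"
# Fourth path segment, minus the trailing "-c" zone suffix.
region=$(echo "$zone_path" | cut -d/ -f 4 | sed 's/.\{2\}$//')
echo "$region"   # us-west1
```

Note this strips exactly two characters, which holds for all current GCE zone names (they end in a dash plus one letter).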
This is how to start controller services:
$ {
>   sudo systemctl daemon-reload
>   sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
>   sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
> }
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /etc/systemd/system/kube-apiserver.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /etc/systemd/system/kube-controller-manager.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /etc/systemd/system/kube-scheduler.service.
Verification
$ kubectl cluster-info --kubeconfig admin.kubeconfig
Kubernetes control plane is running at https://127.0.0.1:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ curl -H "Host: kubernetes.default.svc.cluster.local" -i https://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Wed, 09 Jun 2021 23:13:37 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: 84c02b60-00be-4447-b687-ba856239a521
X-Kubernetes-Pf-Prioritylevel-Uid: 8c48516c-ebe6-4e89-a756-b2294db1cda9
Bootstrapping the Kubernetes Worker Nodes
In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, containerd, kubelet, and kube-proxy.
$ {
>   sudo systemctl daemon-reload
>   sudo systemctl enable containerd kubelet kube-proxy
>   sudo systemctl start containerd kubelet kube-proxy
> }
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /etc/systemd/system/containerd.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service.
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /etc/systemd/system/kube-proxy.service.
Verification
% gcloud compute ssh controller-0 \
  --command "kubectl get nodes --kubeconfig admin.kubeconfig"
NAME      STATUS  ROLES   AGE  VERSION
worker-0  Ready   <none>  43s  v1.21.0
worker-1  Ready   <none>  39s  v1.21.0
worker-2  Ready   <none>  37s  v1.21.0
Configuring kubectl for Remote Access
In this lab you will generate a kubeconfig file for the kubectl command line utility based on the admin user credentials.
% kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
% kubectl get nodes
NAME      STATUS  ROLES   AGE    VERSION
worker-0  Ready   <none>  3m46s  v1.21.0
worker-1  Ready   <none>  3m42s  v1.21.0
worker-2  Ready   <none>  3m40s  v1.21.0
Provisioning Pod Network Routes
Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point, pods cannot communicate with pods running on other nodes because the network routes are missing.
In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
% for i in 0 1 2; do
  gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
    --network kubernetes-the-hard-way \
    --next-hop-address 10.240.0.2${i} \
    --destination-range 10.200.${i}.0/24
done
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/global/routes/kubernetes-route-10-200-0-0-24].
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP     PRIORITY
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20  1000
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/global/routes/kubernetes-route-10-200-1-0-24].
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP     PRIORITY
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21  1000
Created [https://www.googleapis.com/compute/v1/projects/clouduptime/global/routes/kubernetes-route-10-200-2-0-24].
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP     PRIORITY
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22  1000
% gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME                            NETWORK                  DEST_RANGE     NEXT_HOP                  PRIORITY
default-route-00bf2b130b2a6e3f  kubernetes-the-hard-way  0.0.0.0/0      default-internet-gateway  1000
default-route-f362c48065a3a6d6  kubernetes-the-hard-way  10.240.0.0/24  kubernetes-the-hard-way   0
kubernetes-route-10-200-0-0-24  kubernetes-the-hard-way  10.200.0.0/24  10.240.0.20               1000
kubernetes-route-10-200-1-0-24  kubernetes-the-hard-way  10.200.1.0/24  10.240.0.21               1000
kubernetes-route-10-200-2-0-24  kubernetes-the-hard-way  10.200.2.0/24  10.240.0.22               1000
Deploying the DNS Cluster Add-on
In this lab you will deploy the DNS add-on which provides DNS based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.
% kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
% kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                      READY  STATUS             RESTARTS  AGE
coredns-8494f9c688-95kxz  0/1    ContainerCreating  0         18s
coredns-8494f9c688-zjnzz  1/1    Running            0         18s
% kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
pod/busybox created
% kubectl get pods -l run=busybox
NAME     READY  STATUS   RESTARTS  AGE
busybox  1/1    Running  0         7s
% POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
% kubectl exec -ti $POD_NAME -- nslookup kubernetes
Server:    10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local