How to solve "The connection to the server was refused" in Kubernetes, and the kubeadm reset issue
[root@masternode ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com
[root@masternode ~]# systemctl start docker
[root@masternode ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-05-14 19:09:07 IST; 48s ago
     Docs: https://docs.docker.com
 Main PID: 8780 (dockerd)
    Tasks: 17
   Memory: 135.8M
   CGroup: /system.slice/docker.service
           └─8780 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.136151899+05:30" level=info msg="Removing stal...311e)"
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.139318607+05:30" level=warning msg="Error (Una...g...."
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.412886521+05:30" level=info msg="Removing stal...b102)"
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.415461533+05:30" level=warning msg="Error (Una...g...."
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.493314213+05:30" level=info msg="Default bridg...dress"
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.613446717+05:30" level=info msg="Loading conta...done."
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.650658582+05:30" level=info msg="Docker daemon....10.16
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.651079691+05:30" level=info msg="Daemon has co...ation"
May 14 19:09:07 masternode.example.com systemd[1]: Started Docker Application Container Engine.
May 14 19:09:07 masternode.example.com dockerd[8780]: time="2022-05-14T19:09:07.685866385+05:30" level=info msg="API listen on....sock"
Hint: Some lines were ellipsized, use -l to show in full.
[root@masternode ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@masternode ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: failed (Result: exit-code) since Sat 2022-05-14 19:07:53 IST; 2min 21s ago
     Docs: https://kubernetes.io/docs/
 Main PID: 8723 (code=exited, status=1/FAILURE)
May 14 19:07:53 masternode.example.com kubelet[8723]: I0514 19:07:53.219002    8723 topology_manager.go:133] "Creating topolog...ainer"
May 14 19:07:53 masternode.example.com kubelet[8723]: I0514 19:07:53.219114    8723 container_manager_linux.go:321] "Creating ...d=true
May 14 19:07:53 masternode.example.com kubelet[8723]: I0514 19:07:53.219328    8723 state_mem.go:36] "Initialized new in-memor...store"
May 14 19:07:53 masternode.example.com kubelet[8723]: I0514 19:07:53.219810    8723 kubelet.go:313] "Using dockershim is depre...ation"
May 14 19:07:53 masternode.example.com kubelet[8723]: I0514 19:07:53.219896    8723 client.go:80] "Connecting to docker on the....sock"
May 14 19:07:53 masternode.example.com kubelet[8723]: I0514 19:07:53.219938    8723 client.go:99] "Start docker client with re..."2m0s"
May 14 19:07:53 masternode.example.com kubelet[8723]: E0514 19:07:53.220821    8723 server.go:302] "Failed to run kubelet" err...ning?"
May 14 19:07:53 masternode.example.com systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 14 19:07:53 masternode.example.com systemd[1]: Unit kubelet.service entered failed state.
May 14 19:07:53 masternode.example.com systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
[root@masternode ~]# systemctl restart kubelet
[root@masternode ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sat 2022-05-14 19:10:28 IST; 2s ago
     Docs: https://kubernetes.io/docs/
  Process: 9071 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 9071 (code=exited, status=1/FAILURE)
May 14 19:10:28 masternode.example.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
May 14 19:10:28 masternode.example.com systemd[1]: Unit kubelet.service entered failed state.
May 14 19:10:28 masternode.example.com systemd[1]: kubelet.service failed.
[root@masternode ~]# systemctl restart kubelet
[root@masternode ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sat 2022-05-14 19:19:14 IST; 6s ago
     Docs: https://kubernetes.io/docs/
  Process: 10782 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 10782 (code=exited, status=1/FAILURE)
May 14 19:19:14 masternode.example.com systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
May 14 19:19:14 masternode.example.com systemd[1]: Unit kubelet.service entered failed state.
May 14 19:19:14 masternode.example.com systemd[1]: kubelet.service failed.
[root@masternode ~]# kubeadm init
I0514 19:19:36.478384   10798 version.go:255] remote version is much newer: v1.24.0; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.6
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local masternode.example.com] and IPs [10.96.0.1 192.168.29.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost masternode.example.com] and IPs [192.168.29.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost masternode.example.com] and IPs [192.168.29.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.507138 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node masternode.example.com as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node masternode.example.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: alsb2z.6q9k1l7nmkqld8tf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.29.200:6443 --token alsb2z.6q9k1l7nmkqld8tf \
        --discovery-token-ca-cert-hash sha256:6578f2f5e833027f6a82333c77dc9f1f8faab5275112d97a4b098fbe8c3bf
The following changes fixed the kubelet failure above:

1. Point the kubelet at the same cgroup driver Docker uses (cgroupfs):
vi /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
systemctl status docker   # check the status
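The failure in the kubelet log above comes down to a cgroup-driver mismatch between Docker and the kubelet. A minimal sketch of that check, with a hypothetical `check_drivers` helper (on a real host the two values would come from `docker info --format '{{.CgroupDriver}}'` and the kubelet flags, not be hard-coded):

```shell
# Hypothetical helper: kubelet exits with status=1/FAILURE (as in the
# log above) when these two strings differ.
check_drivers() {
  docker_driver="$1"
  kubelet_driver="$2"
  if [ "$docker_driver" = "$kubelet_driver" ]; then
    echo "match: $docker_driver"
  else
    echo "MISMATCH: docker=$docker_driver kubelet=$kubelet_driver"
  fi
}

# On a real host, read the live values roughly like this:
#   docker info --format '{{.CgroupDriver}}'
#   grep cgroup-driver /etc/default/kubelet
check_drivers cgroupfs cgroupfs
```

Either driver (cgroupfs or systemd) works on its own; the point is only that both daemons must agree, which is what fixes 1 and 2 arrange.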
########################################################################################################################################################
2. If Docker was configured with the systemd cgroup driver, e.g.:
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
change it to use cgroupfs ---->
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
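A malformed /etc/docker/daemon.json prevents dockerd from starting at all, so it is worth syntax-checking the JSON before the restart. A sketch that stages the config in a temp file first (`python3 -m json.tool` is just a convenient stdlib validator; any JSON linter works):

```shell
# Write the candidate config to a temp file and validate it before
# copying it into place.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json OK"
# Then: sudo cp "$tmp" /etc/docker/daemon.json && sudo systemctl restart docker
```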
systemctl daemon-reload
systemctl restart docker
3. Let the kubelet start even with swap enabled:
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
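`--fail-swap-on=false` only works around the preflight check; the cleaner fix is to disable swap itself, which kubeadm expects. A sketch that rehearses the /etc/fstab edit on a copy first (the sample fstab lines are made up for illustration):

```shell
# Rehearse the fstab change on a copy; on the real host you would run
# the same sed against /etc/fstab after `swapoff -a`.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/ swap / s/^#*/#/' "$tmp"   # comment out the swap mount only
grep '^#' "$tmp"                    # prints the now-disabled swap line
# Real host: sudo swapoff -a && sudo sed -i '/ swap / s/^#*/#/' /etc/fstab
```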