Kubernetes Networking
Kubernetes Services
Each Pod is assigned a unique IP address. Every container in a Pod shares the same network namespace on the Linux host, including the IP address and network ports. Because containers within a Pod share an IP address and port space, they can find each other via localhost.
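As a quick illustration, here is a minimal sketch of a two-container Pod (the Pod name, images, and the loop in the sidecar are just placeholders); the sidecar can reach the nginx container on localhost:80 because both containers share the Pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: shared-network-demo
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80        # nginx listens on port 80 inside the shared namespace
    - name: sidecar
      image: busybox
      # Reaches the web container over localhost, since both containers share the Pod's network namespace
      command: ["sh", "-c", "while true; do wget -q -O- http://localhost:80 > /dev/null; sleep 10; done"]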
Although certain Pods can do their work independently of an external stimulus, many applications are meant to respond to external requests. For example, in the case of microservices, Pods will usually respond to HTTP requests coming either from other Pods inside the cluster or from clients outside the cluster.
This leads to a problem: if some set of Pods (call them backends) provides functionality to other Pods (call them frontends) inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
This is where Services come in. Services allow clients to discover and talk to Pods. They act like internal load balancers, distributing traffic to the subset of Pods whose labels satisfy a set of rules.
A Kubernetes Service is a resource you create to make a single, constant point of entry to a group of Pods providing the same service. Each service has an IP address and port that never change while the service exists. Clients can open connections to that IP and port, and those connections are then routed to one of the Pods backing that service. This way, clients of a service don't need to know the location of individual Pods providing the service, allowing those Pods to be moved around the cluster at any time.
Example
Suppose you have a frontend web server and a backend database server. There may be multiple Pods that all act as the frontend, but there may only be a single backend database pod.
You need to solve two problems to make the system function:
1. External clients need to connect to the frontend Pods without caring how many frontend Pods there are or which nodes they run on.
2. The frontend Pods need to find the backend database Pod, even though its IP address can change when the Pod is rescheduled.
Solution
By creating a service for the frontend Pods and configuring it to be accessible from outside the cluster, you expose a single, constant IP address through which external clients can connect to the Pods.
Similarly, by creating a service for the backend pod, you create a stable address for it. The service address doesn't change even if the pod's IP address changes. Additionally, the frontend Pods can easily find the backend service by its name, through either environment variables or DNS. The components of your system are therefore the two services, the two sets of Pods backing those services, and the interdependencies between them.
You now understand the basic idea behind services. Let's dig deeper by introducing the different types of Kubernetes services.
ClusterIP
The primary purpose of a ClusterIP service is to expose a group of Pods to other Pods in the cluster. Choosing this type makes the Service reachable only from within the cluster. This is the default ServiceType.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - port: 80
      targetPort: 80
NodePort
You'll also want to expose some services, such as frontend web servers, to the outside world so that external clients can access them.
The first method of exposing a set of Pods to external clients is to create a service and set its type to NodePort. When you do this, the Kubernetes control plane allocates a port from a configured range (default: 30000-32767), and each node proxies that port (the same port number on every node) into your Service.
An external client can connect to a NodePort service through any of the nodes, for example Node 1 or Node 2.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30123
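Once applied, the service is reachable on port 30123 of every node, regardless of which node the backing Pods actually run on (the node addresses below are placeholders):
curl http://<node-1-ip>:30123    # reaches the service through the first node
curl http://<node-2-ip>:30123    # the same node port works on every other node as well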
LoadBalancer
Kubernetes clusters running on cloud providers usually support the automatic provisioning of a load balancer from the cloud infrastructure. All you need to do is set the service's type to LoadBalancer instead of NodePort.
The load balancer will have its own unique, publicly accessible IP address and will redirect all connections to your service. You can thus access your service through the load balancer's IP address.
An external client connects to a LoadBalancer service through the load balancer's publicly accessible IP address.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 32143
  type: LoadBalancer
If Kubernetes is running in an environment that doesn't support LoadBalancer services, the load balancer will not be provisioned, but the service will still behave like a NodePort service.
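On a supported cloud provider, you can wait for the external address to be provisioned and then use it directly (a sketch; the address is whatever your provider allocates):
kubectl get svc my-service --watch    # wait until EXTERNAL-IP changes from <pending> to a real address
curl http://<external-ip>:8080        # replace <external-ip> with the address from the previous command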
ExternalName
Instead of exposing an external service by manually configuring the service's Endpoints, a simpler method allows you to refer to an external service by its fully qualified domain name (FQDN). ExternalName maps the Service to the contents of the externalName field, by returning a CNAME record with its value. No proxying of any kind is set up. The following Service definition, for example, maps the my-service Service in the prod namespace to my.database.example.com.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
This allows Pods that communicate with my-service to reach the database at my.database.example.com.
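As a quick check (a sketch, assuming CoreDNS is serving cluster DNS), resolving the service name from a temporary Pod should return a CNAME pointing at the external hostname:
kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup my-service.prod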
Example:
In this example, we are going to deploy a pod inside our cluster and expose it via a ClusterIP service, then test whether we can reach the pod through the service internally.
1. Create an nginx pod and expose it with a ClusterIP service in one step:
kubectl run nginx --image=nginx --restart=Never --port=80 --expose
2. Confirm that the ClusterIP service has been created:
kubectl get svc nginx
3. Create a temporary busybox pod and use wget to send traffic to the service:
kubectl run busybox --rm --image=busybox -it --restart=Never -- wget -O- nginx:80
You should receive a response from the nginx pod, which shows that the service successfully bridged the communication between the two Pods:
<title>Welcome to nginx!</title>
In this example, we are going to create a NodePort service and configure it to select the nginx pod, then attempt to send requests from outside the cluster.
First let's create the service.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 32000
  selector:
    run: nginx
EOF
Once the service is successfully created, any request sent to port 32000 on any of the cluster's nodes is forwarded to the nginx pod.
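You can then verify the service and send a request from outside the cluster (a sketch; <node-ip> stands for the address of any of your nodes):
kubectl get svc nginx-nodeport      # confirm the service exposes node port 32000
curl http://<node-ip>:32000         # should return the nginx welcome page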
Ingress
In Kubernetes, an Ingress is an API object that provides a way to expose HTTP and HTTPS routes from outside the cluster to services within the cluster. An Ingress controller is responsible for implementing the rules defined in the Ingress resource and routing incoming traffic to the appropriate service.
The Ingress resource defines a set of rules for how incoming traffic should be handled. These rules typically include a hostname or path prefix that should match the incoming request and the service that should handle the request. In addition to routing traffic to services, Ingress rules can also be used to define TLS termination, load balancing, and other advanced networking features.
The Ingress controller is responsible for implementing the rules defined in the Ingress resource. Kubernetes does not include a default Ingress controller, so users must choose and deploy one that suits their needs. Popular Ingress controllers include Nginx, Traefik, and Istio.
Ingress controllers typically run as pods in the cluster and monitor the Kubernetes API for changes to Ingress resources. When a new Ingress resource is created or updated, the controller updates its configuration to reflect the new rules and starts routing incoming traffic to the appropriate services.
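To make this concrete, here is a minimal sketch of an Ingress resource (the host name, service name, and the assumption that an NGINX Ingress controller is installed are all placeholders/assumptions, not part of the original article): it routes HTTP requests for example.com to a Service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx            # assumes an NGINX Ingress controller is deployed in the cluster
  rules:
    - host: example.com              # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service     # matching requests are routed to this Service
                port:
                  number: 80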
DNS (Domain Name System)
In Kubernetes, DNS (Domain Name System) is used to provide service discovery for applications running in the cluster. Kubernetes DNS allows applications to refer to other services by their DNS names rather than their IP addresses, which makes it easier to manage and scale applications in the cluster.
Kubernetes DNS is implemented as a cluster-level service, which provides DNS resolution for all pods running in the cluster. When a pod sends a DNS query for a service name, the Kubernetes DNS service responds with the IP addresses of the pods that provide the service. The DNS service automatically updates its records as pods are added or removed from the cluster, which ensures that applications can always find the most up-to-date IP addresses for the services they depend on.
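For example (a sketch, assuming the default cluster domain cluster.local and the nginx service created in the earlier example, living in the default namespace), a Pod can reach a Service either by its short name or by its fully qualified name:
# The short name "nginx" works from Pods in the same namespace; the FQDN works from anywhere in the cluster
kubectl run dns-demo --rm -it --image=busybox --restart=Never -- wget -O- http://nginx.default.svc.cluster.local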
Kubernetes DNS is based on the CoreDNS project, which is a flexible and extensible DNS server that can be customized to support various use cases. CoreDNS can be extended with plugins to support features such as caching, load balancing, and service discovery for external resources.
Overall, DNS is a critical component of service discovery in Kubernetes, and it plays an important role in enabling applications to communicate with each other within the cluster. By using Kubernetes DNS, users can simplify application deployment and management and improve the scalability and reliability of their applications.
CNI (Container Network Interface)
In Kubernetes, CNI (Container Network Interface) is a specification that defines how network plugins can interface with the Kubernetes networking model to provide networking services to pods and containers.
The CNI specification defines a set of APIs for network plugins to implement, which allows them to configure the network interfaces of containers and pods, and to manage the routing and connectivity between them. The CNI specification also defines a standard format for the configuration files used by network plugins, which makes it easy to switch between different plugins or use multiple plugins together.
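For reference (a sketch; the exact file names depend on which plugin is installed), the container runtime or kubelet, depending on the setup, looks for CNI plugin configuration in /etc/cni/net.d on each node by default, so you can inspect which plugin a node is using there:
ls /etc/cni/net.d/                 # list the CNI configuration files present on a node
cat /etc/cni/net.d/*.conflist      # show the plugin chain defined for the node (file name varies by plugin)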
Kubernetes uses CNI to provide a flexible and extensible networking model for containers and pods. CNI allows users to choose from a range of network plugins that can provide different networking services, such as overlay networks, service meshes, and software-defined networks.
Overall, CNI is a key component of the Kubernetes networking model, and it allows users to choose from a range of network plugins to meet their specific networking requirements. By using CNI, users can build scalable and reliable networking infrastructure for their Kubernetes applications, and they can easily switch between different plugins or extend them as needed.