GITLAB CI/CD:
Section A:
?
Configuring GitLab Runners:
?
In GitLab CI/CD, Runners run the code defined in .gitlab-ci.yml. A GitLab Runner is a lightweight, highly-scalable agent that picks up a CI job through the coordinator API of GitLab CI/CD, runs the job, and sends the result back to the GitLab instance.
?
Runners are created by an administrator and are visible in the GitLab UI. Runners can be specific to certain projects or available to all projects.
Types of Runners
?
There are three types of Runners:

● Shared Runners, which are available to every project in a GitLab instance.
● Group Runners, which are available to all projects in a group.
● Specific Runners, which are associated with individual projects.

The steps to register these Runners are described in Section B of this document.
?
If you are running self-managed GitLab, you can create your own Runners.
?
If you are using GitLab.com, you can use the shared Runners provided by GitLab or create your own group or specific Runners.
Shared Runners
?
Shared Runners are available to every project in a GitLab instance.
?
Use shared Runners when you have multiple jobs with similar requirements. Rather than having multiple Runners idling for many projects, you can have a few Runners that handle multiple projects.
?
If you are using a self-managed instance of GitLab, your administrator can create shared Runners and configure them to use the executor you want. The administrator can also configure a maximum number of Shared Runner pipeline minutes for each group.
?
If you are using GitLab.com, you can select from a list of shared Runners that GitLab maintains. You can use Shared Runners for a limited number of minutes each month, based on your GitLab.com tier.
How shared Runners pick jobs
?
Shared Runners process jobs by using a fair usage queue. This queue prevents projects from creating hundreds of jobs and using all available shared Runner resources.
?
The fair usage queue algorithm assigns jobs based on the projects that have the fewest number of jobs already running on shared Runners.
?
Example 1
?
If these jobs are in the queue:
?
● Job 1 for Project 1
● Job 2 for Project 1
● Job 3 for Project 1
● Job 4 for Project 2
● Job 5 for Project 2
● Job 6 for Project 3

The fair usage algorithm assigns jobs in this order:

● Job 1 is chosen first, because it has the lowest job number from projects with no running jobs (that is, all projects).
● Job 4 is next, because 4 is now the lowest job number from projects with no running jobs (Project 1 has a job running).
● Job 6 is next, because 6 is now the lowest job number from projects with no running jobs (Projects 1 and 2 have jobs running).
● Job 2 is next, because, of the projects with the lowest number of running jobs (each has one), it has the lowest job number.
● Job 5 is next, because Project 1 now has two jobs running and Job 5 has the lowest remaining job number between Projects 2 and 3.
● Finally, Job 3 is chosen, because it is the only job left.
?
Example 2
?
If these jobs are in the queue:
?
● Job 1 for Project 1
● Job 2 for Project 1
● Job 3 for Project 1
● Job 4 for Project 2
● Job 5 for Project 2
● Job 6 for Project 3

The fair usage algorithm assigns jobs in this order:

● Job 1 is chosen first, because it has the lowest job number from projects with no running jobs (that is, all projects).
● Job 1 finishes.
● Job 2 is next, because, with Job 1 finished, all projects have zero jobs running again, and 2 is the lowest available job number.
● Job 4 is next, because, with Project 1 running a job, 4 is the lowest job number from projects running no jobs (Projects 2 and 3).
● Job 4 finishes.
● Job 5 is next, because, with Job 4 finished, Project 2 has no jobs running again.
● Job 6 is next, because Project 3 is the only project left with no running jobs.
● Finally, Job 3 is chosen, because, again, it is the only job left.
?
Enable a shared Runner
?
Shared Runners are enabled for all projects by default. However, you can enable or disable shared Runners for individual projects.

To enable or disable a shared Runner:

● Go to the project's Settings > CI/CD and expand the Runners section.
● Click Allow shared Runners or Disable shared Runners.
?
Group Runners
?
Use Group Runners when you want all projects in a group to have access to a set of Runners.
?
Group Runners process jobs by using a first in, first out (FIFO) queue.
Create a group Runner
?
You can create a group Runner for your self-managed GitLab instance or for GitLab.com. You must have Owner permissions for the group.
?
To create a group Runner:
?
● Install Runner.
● Go to the group you want to make the Runner work for.
● Go to Settings > CI/CD and expand the Runners section.
● Note the URL and token.
● Register the Runner.
?
Pause or remove a group Runner
?
You can pause or remove a group Runner. You must have Owner permissions for the group.
?
● Go to the group you want to remove or pause the Runner for.
● Go to Settings > CI/CD and expand the Runners section.
● Click Pause or Remove Runner.
● On the confirmation dialog, click OK.
?
Specific Runners
?
Use Specific Runners when you want to use Runners for specific projects. For example, when you have:
● Jobs with specific requirements, like a deploy job that requires credentials.
● Projects with a lot of CI activity that can benefit from being separate from other Runners.
?
You can set up a specific Runner to be used by multiple projects. Specific Runners must be enabled for each project explicitly.
?
Specific Runners process jobs by using a first in, first out (FIFO) queue.
Note: Specific Runners do not get shared with forked projects automatically. A fork does copy the CI/CD settings of the cloned repository.
?
Create a specific Runner
?
You can create a specific Runner for your self-managed GitLab instance or for GitLab.com. You must have Owner permissions for the project.
?
To create a specific Runner:
?
● Install Runner.
● Go to the project's Settings > CI/CD and expand the Runners section.
● Note the URL and token.
● Register the Runner.
?
Enable a specific Runner for a specific project
?
A specific Runner is available in the project it was created for. An administrator can enable a specific Runner to apply to additional projects.
?
● You must have Owner permissions for the project.
● The specific Runner must not be locked.

To enable or disable a specific Runner for a project:

● Go to the project's Settings > CI/CD and expand the Runners section.
● Click Enable for this project or Disable for this project.
?
Prevent a specific Runner from being enabled for other projects
?
You can configure a specific Runner so it is “locked” and cannot be enabled for other projects. This setting can be enabled when you first register a Runner, but can also be changed later.
?
To lock or unlock a Runner:
?
● Go to the project's Settings > CI/CD and expand the Runners section.
● Find the Runner you want to lock or unlock. Make sure it's enabled.
● Click the pencil button.
● Check the Lock to current projects option.
● Click Save changes.
?
Prevent Runners from revealing sensitive information
?
Introduced in GitLab 10.0.
?
You can protect Runners so they don’t reveal sensitive information. When a Runner is protected, the Runner picks jobs created on protected branches or protected tags only, and ignores other jobs.
?
To protect or unprotect a Runner:
?
● Go to the project's Settings > CI/CD and expand the Runners section.
● Find the Runner you want to protect or unprotect. Make sure it's enabled.
● Click the pencil button.
● Check the Protected option.
● Click Save changes.
?
?
Features:
?
Allows:
● Running multiple jobs concurrently.
● Using multiple tokens with multiple servers (even per-project).
● Limiting the number of concurrent jobs per-token.

Jobs can be run:
● Locally.
● Using Docker containers.

Other features:
● Supports Bash and Windows PowerShell.
● Works on GNU/Linux, macOS, and Windows (pretty much anywhere you can run Docker).
● Allows customization of the job running environment.
● Automatic configuration reload without restart.
● Easy-to-use setup with support for Docker, Docker-SSH, Parallels, or SSH running environments.
● Enables caching of Docker containers.
?
?
Section B:
Below are the step details that are followed to register a GitLab Runner.
?
Step 1: Download the certificate of gitlab.synerzip.com.

Step 2: Execute the command below to start registration, pointing --tls-ca-file at the downloaded certificate (the path is a placeholder):
sudo gitlab-runner register --tls-ca-file /path/to/gitlab.synerzip.com.crt
?
Step 3:
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
?
Step 4:
Please enter the gitlab-ci token for this runner:
XXXXXXXXXXXXX (use the shared registration token from an account with admin access to the Synerzip GitLab private repository)
?
Step 5:
Please enter the gitlab-ci description for this runner:
Newrunner
?
Step 6:
Please enter the gitlab-ci tags for this runner (comma separated):
android-tags
Registering runner... succeeded runner=XXXX
?
Step 7:
Please enter the executor: docker, docker-ssh, virtualbox, docker-ssh+machine, kubernetes, parallels, shell, ssh, docker+machine:
docker
?
Step 8:
Please enter the default Docker image (e.g. ruby:2.1):
alpine:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
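The same registration can also be done non-interactively in a single command. This is a sketch, assuming the coordinator URL matches the certificate host (gitlab.synerzip.com), that the downloaded CA certificate is saved at a path of your choosing, and that the token is the same shared token as above:

sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.synerzip.com/ \
  --registration-token XXXXXXXXXXXXX \
  --tls-ca-file /path/to/gitlab.synerzip.com.crt \
  --description "Newrunner" \
  --tag-list "android-tags" \
  --executor docker \
  --docker-image alpine:latest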
?
Setting up the .gitlab-ci.yml file for a CI/CD GitLab repo:
?
Step 1: Add a .gitlab-ci.yml file to your repository’s root directory.
● Ensure your project is configured to use a Runner.
?
?
Step 2: Use variables that are stored in the Settings --> CI/CD --> Variables section, for example:
● $DEMO_STORE_FILE

OR

Create a variables section at the top of the script and use those variables:
variables:
  TEST: "HELLO WORLD"
?
Step 3: Create the different stages of your application:
stages:
  - build
  - deploy
?
Step 4: Select the branch for which you want to run the deployment:
only:
  - master
?
Step 5: Select the Docker image in which you want to execute a stage:
image: google/cloud-sdk:alpine
?
Step 6: Select the Runner tag on which you want to run the job:
tags:
  - bu7sharedrunner
?
Step 7: Write the logic for each stage in its script section:
script:
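Putting Steps 1 to 7 together, a minimal .gitlab-ci.yml could look like the sketch below. The job names and echo commands are placeholders; the image, tag, branch, and variable values are the ones used in the steps above:

variables:
  TEST: "HELLO WORLD"

stages:
  - build
  - deploy

build-job:
  stage: build
  tags:
    - bu7sharedrunner
  script:
    - echo "build logic goes here"   # placeholder

deploy-job:
  stage: deploy
  image: google/cloud-sdk:alpine
  tags:
    - bu7sharedrunner
  only:
    - master
  script:
    - echo "$TEST"
    - echo "deploy logic goes here"   # replace with the real stage logic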
?
?
?
GitLab To GCP:
?
Below are the step details to create a Docker image, push the image to the Container Registry, and deploy it on a Kubernetes cluster.
?
Log in to Docker so that the job can build an image of the repository and push it to the gcr.io Container Registry. Permission to push images to the registry comes from the DEMO_STORE_FILE JSON file, which is stored in the Settings --> CI/CD --> Variables section.
Create a service account key in JSON format and keep the downloaded key safe; it is used when configuring the project for Docker registry integration in the project settings.
  - docker login -u _json_key -p "$(cat $DEMO_STORE_FILE)" https://gcr.io
?
Build a Docker image using the Dockerfile in your repo:
  - docker build -t demo-backend:build-${CI_JOB_ID} --pull -f Dockerfile .
?
Before pushing the image to Google container registry, you must add the registry name and image name as a tag to the image.
us.gcr.io hosts your images in the United States.
eu.gcr.io hosts your images in the European Union.
asia.gcr.io hosts your images in Asia.
Add the tag to your image:
  - docker tag demo-backend:build-${CI_JOB_ID} gcr.io/demo-265009/demo-backend:build-${CI_JOB_ID}
?
By adding the Google Container Registry integration, you will be able to push and pull images effortlessly, without having to worry about authentication.
  - docker push gcr.io/demo-265009/demo-backend:build-${CI_JOB_ID}
?
To allow gcloud (and other tools in Cloud SDK) to use service account credentials to make requests, use this command to import these credentials from a file that contains a private authorization key, and activate them for use in gcloud. gcloud auth activate-service-account serves the same function as gcloud auth login but uses a service account rather than Google user credentials.
  - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
?
To set the project:
  - gcloud config set project demo-265009
?
To replace the build number placeholder in the deployment file with the number generated by the GitLab CI/CD job:
  - sed -i "s/BUILD_NUMBER/${CI_JOB_ID}/" demo-backend.yaml
?
If you run multiple clusters within your Google Cloud project, you need to choose which cluster kubectl talks to. You can set a default cluster for kubectl by setting the current context in Kubernetes' kubeconfig file. Additionally, you can run kubectl commands against a specific cluster using the --cluster flag.
The following sections explain how kubeconfig works, how to set a default cluster for kubectl, and how to run individual kubectl commands against a specific cluster.
  - gcloud container clusters get-credentials demo-dev --zone=us-central1-a
?
kubectl replace --force points to the new YAML file and replaces the previous resources: it first deletes the existing resources, then recreates them from the file you give it. With --force, resources are removed from the API immediately, bypassing graceful deletion. Note that immediate deletion of some resources may result in inconsistency or data loss.
  - kubectl replace --force -f demo-backend.yaml
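As a sketch, the commands above can be collected into a single deploy job in .gitlab-ci.yml. The project ID, cluster name, image name, tag, and YAML file are the ones used above; the job assumes a Runner whose executor has access to a Docker daemon, and it uses the service-account key stored in $DEMO_STORE_FILE for both the registry login and gcloud (which differs from the /tmp/$CI_PIPELINE_ID.json example above):

deploy-backend:
  stage: deploy
  image: google/cloud-sdk:alpine
  tags:
    - bu7sharedrunner
  only:
    - master
  script:
    - docker login -u _json_key -p "$(cat $DEMO_STORE_FILE)" https://gcr.io
    - docker build -t demo-backend:build-${CI_JOB_ID} --pull -f Dockerfile .
    - docker tag demo-backend:build-${CI_JOB_ID} gcr.io/demo-265009/demo-backend:build-${CI_JOB_ID}
    - docker push gcr.io/demo-265009/demo-backend:build-${CI_JOB_ID}
    - gcloud auth activate-service-account --key-file $DEMO_STORE_FILE
    - gcloud config set project demo-265009
    - gcloud container clusters get-credentials demo-dev --zone=us-central1-a
    - sed -i "s/BUILD_NUMBER/${CI_JOB_ID}/" demo-backend.yaml
    - kubectl replace --force -f demo-backend.yaml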
?
?
GCP Usage and Details:
Kubernetes Engine:
Organizations typically use Google Kubernetes Engine to:
?
● Create or resize Docker container clusters
● Create container pods, replication controllers, jobs, services or load balancers
● Resize application controllers
● Update and upgrade container clusters
● Debug container clusters
Users can interact with Google Kubernetes Engine using the gcloud command line interface or the Google Cloud Platform Console.
?
GKE clusters are powered by the Kubernetes open source cluster management system. Kubernetes provides the mechanisms through which you interact with your cluster. You use Kubernetes commands and resources to deploy and manage your applications, perform administration tasks, set policies, and monitor the health of your deployed workloads.
?
Google currently charges a flat fee for Kubernetes Engine services depending on the number of nodes in a cluster. A cluster of five nodes or less is currently free, and a cluster of six nodes or more is currently priced at $0.15 per hour per cluster. However, cloud pricing is extremely competitive and changes frequently.
?
Section C:
?
1. Cluster:
?
Created a cluster of node size 3, with 2 vCPUs and 7.5 GB of memory per node.

To create a cluster, follow the steps below:
?
1. Go to https://console.cloud.google.com and log in.
?
2. Enable the Container Engine API.
?
3. Install and initialize the gcloud command-line tools. These tools send commands to Google Cloud and let you do things like create and delete clusters.
?
Go to the gcloud downloads page to download and install the gcloud SDK.
?
See the gcloud documentation for more information on the gcloud SDK.
?
Install kubectl, which is a tool for controlling Kubernetes. From the terminal, enter:
?
gcloud components install kubectl
?
4. Create a Kubernetes cluster on Google Cloud, by typing in the following command:
?
gcloud container clusters create <YOUR_CLUSTER> \
  --num-nodes=3 \
  --machine-type=n1-standard-2 \
  --zone=us-central1-b
?
?
?
where:
?
--num-nodes specifies how many computers to spin up. The higher the number, the greater the cost.
--machine-type specifies the amount of CPU and RAM in each node. There is a variety of types to choose from. Picking something appropriate here will have a large effect on how much you pay: smaller machines restrict the maximum amount of RAM each user can have access to but allow more fine-grained scaling, reducing cost. The default (n1-standard-2) has 2 vCPUs and 7.5 GB of RAM each, and might not be a good fit for all use cases!
--zone specifies which data center to use. Pick something that is not too far away from your users. You can find a list of zones in the GCP documentation.
?
5. To test if your cluster is initialized, run:
?
kubectl get node
The response should list three running nodes.
?
---------------------------------------------------------OR----------------------------------------------------------------
?
Once you’ve registered, you will be given a Project. This is where you create all of your GCP resources.
?
Now, click on the console menu in the top left, highlight Kubernetes Engine, and click Clusters:
?
Now, click the Create cluster button.
?
At this point, you will see a long list of cluster templates and cluster options to choose from. Leave the Standard cluster option selected and enter a new name for the cluster in the Name field.
?
In the free trial, you’re only allowed to create a Zonal cluster, so leave this option selected.
Choose a Zone that’s located close to you.
The Master version defines which version of Kubernetes will be used by your cluster. There are many options to choose from, and you can leave it set to the default option.
However, if you want to follow the configuration steps in the next section for accepting deployments from outside, then select at least 1.14.
The next section lets you configure the default node pool for your cluster. This is the group of machines that GCP dedicates to running your cluster and its resources. You can leave this section as is, but if you want to do the configurations in the next section, then you should set the number of nodes to 6.
Now, click Create.
2. Workloads:
?
After pushing Docker images to the Container Registry, we need to deploy them on the Kubernetes cluster pods.

To deploy the FE/BE applications, follow the steps below:
?
FE/BE Application: The frontend and backend applications are deployed under the Workloads section.
?
Below are the details of the IPs assigned to FE and BE.
FE: xxx.xxx.xxx.xxx (select the target port as the container port to expose, and the port as the one exposed to the outside world)
BE: xxx.xxx.xxx.xxx (port exposed on 9021)
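The exact IPs and ports can be looked up with kubectl once the Services exist; a sketch, assuming the frontend Service lives in the demo-frontend-dev namespace used elsewhere in this document:

kubectl get services -n demo-frontend-dev
kubectl describe service demo-frontend -n demo-frontend-dev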
?
3. Configuration
?
How to create a secret:
?
kubectl create secret generic postgres --from-literal=postgres_user=admindemo --from-literal=postgres_password=demo1234 --namespace=demo-frontend-dev
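To confirm the secret was created and to read a value back (Kubernetes stores the values base64-encoded), something like the following can be used:

kubectl get secret postgres --namespace=demo-frontend-dev
kubectl get secret postgres --namespace=demo-frontend-dev -o jsonpath='{.data.postgres_user}' | base64 --decode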
4. How to create the Postgres service
Below is a YAML script to create the Postgres database deployment and deploy it to the Kubernetes cluster.
The secret created in the Configuration section is consumed in the env section of the script below.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
  namespace: kiwano-frontend-dev
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: postgres
    spec:
      nodeSelector:
        disktype: node2
      containers:
        - name: postgres
          image: postgres:9.6
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres
                  key: postgres_password
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
              subPath: datadb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
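Assuming the manifest above is saved as postgres-deployment.yaml (the file name is a placeholder), it can be applied and checked like this:

kubectl apply -f postgres-deployment.yaml
kubectl get pods -n kiwano-frontend-dev -l name=postgres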
5. Storage Section
?
A StorageClass provides a way for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, or to backup policies.
?
Create a storage class:
?
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: postgres
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
  type: pd-standard
volumeBindingMode: Immediate
?
Create a PersistentVolumeClaim:
Kubernetes persistent volumes are administrator provisioned volumes. These are created with a particular filesystem, size, and identifying characteristics such as volume IDs and names.
?
A Kubernetes persistent volume has the following attributes:

● It is provisioned either dynamically or by an administrator
● Created with a particular filesystem
● Has a particular size
● Has identifying characteristics such as volume IDs and a name
?
In order for pods to start using these volumes, they need to be claimed (via a persistent volume claim) and the claim referenced in the spec for a pod. A Persistent Volume Claim describes the amount and characteristics of the storage required by the pod, finds any matching persistent volumes and claims these. Storage Classes describe default volume information (filesystem, size, block size, etc.).
Kubernetes persistent volumes remain available outside of the pod lifecycle – this means that the volume will remain even after the pod is deleted. It is available to claim by another pod if required, and the data is retained.
?
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app: postgres
  name: postgres-pv-claim
  namespace: kiwano-frontend-dev
spec:
  storageClassName: postgres
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
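Assuming the claim above is saved as postgres-pvc.yaml (the file name is a placeholder), apply it and confirm that it reaches the Bound state before deploying Postgres:

kubectl apply -f postgres-pvc.yaml
kubectl get pvc postgres-pv-claim -n kiwano-frontend-dev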
?
Assign the created PersistentVolumeClaim to the Postgres database; for details, refer to the volumeMounts and volumes sections of the Postgres Deployment script shown in section 4 above.
?
6. Autoscaling in Kubernetes
The Basics:
Clusters are how Kubernetes groups machines. They are composed of Nodes (individual machines, oftentimes virtual) which run Pods. Pods have containers that request resources such as CPU, Memory, and GPU. The Cluster Autoscaler adds or removes Nodes in a Cluster based on resource requests from Pods.
?
?
What to Scale?
In the context of a Kubernetes cluster, there are typically two things you want to scale as a user:
?
Pods: For a given application, let's say you are running X replicas. If more requests come in than the pool of X pods can handle, it is a good idea to scale to more than X replicas for that application. For this to work seamlessly, your nodes should have enough available resources so that those extra pods can be scheduled and executed successfully. That brings us to the second part of what to scale.
?
Nodes: The capacity of all nodes put together represents your cluster's capacity. If the workload demand goes beyond this capacity, then you would have to add nodes to the cluster and make sure the workload can be scheduled and executed effectively. If the pods keep scaling, at some point the resources available on the nodes will run out, and you will have to add more nodes to increase the overall resources available at the cluster level.
?
?
?
?
When to Scale?
The decision of when to scale has two parts—one is measuring a certain metric continuously and when the metric crosses a threshold value, then acting on it by scaling a certain resource. For example, you might want to measure the average CPU consumption of your pods and then trigger a scale operation if the CPU consumption crosses 80%. But one metric does not fit all use cases and for different kinds of applications, the metric might vary. For example, for a message queue, the number of messages in a waiting state might be the appropriate metric. For memory intensive applications, memory consumption might be that metric.
?
So far we have only considered the scale-up part, but when the workload usage drops, there should be a way to scale down gracefully and without causing interruption to existing requests being processed. We will look at implementation details of these things in later sections.
?
How to Scale?
This is really an implementation detail, but nevertheless an important one. In the case of pods, simply changing the number of replicas in the replication controller is enough. In case of nodes, there should be a way to call the cloud provider's API, create a new instance and make it a part of the cluster—which is a relatively non-trivial operation and may take more time comparatively.
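For the pod case, the manual version of that change is a single command; a sketch, with the deployment name as a placeholder:

kubectl scale deployment <deployment-name> --replicas=5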
?
Benefits of Autoscaling:
To better understand where autoscaling would provide the most value, let’s start with an example. Imagine you have a 24/7 production service with a load that is variable in time, where it is very busy during the day in the US, and relatively low at night. Ideally, we would want the number of nodes in the cluster and the number of pods in deployment to dynamically adjust to the load to meet end user demand. The new Cluster Autoscaling feature together with Horizontal Pod Autoscaler can handle this for you automatically.
?
When Does The Autoscaler Add Capacity?
The autoscaler increases the size of the cluster (adds nodes) when there are pods that cannot be scheduled due to resource shortages. It can be configured not to scale up or down past a certain number of machines.
?
?
Setting Up the HPA:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: testing-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: awesome-app
  minReplicas: 3
  maxReplicas: 5
  targetCPUUtilizationPercentage: 85
?
● What we want to scale is specified by spec.scaleTargetRef.
● spec.scaleTargetRef tells the HPA which scalable controller to scale (Deployment, ReplicaSet, or ReplicationController).
● In our example, we are telling the HPA to scale a Deployment named "awesome-app".
● We then provide the scaling parameters: minReplicas, maxReplicas, and when to scale.
● This simple HPA starts scaling if the CPU utilization goes above 85%, maintaining the pod count between 3 and 5, both inclusive.
?
Cluster-Level Autoscaler:
Cluster autoscaler looks for the pods that cannot be scheduled and checks if adding a new node, similar to the other in the cluster, would help. If yes, then it resizes the cluster to accommodate the waiting pods.
?
Cluster autoscaler also scales down the cluster if it notices that one or more nodes are not needed anymore for an extended period of time (10min but it may change in the future).
?
Cluster autoscaler is configured per instance group (GCE) or node pool (Google Kubernetes Engine).
?
Updating existing cluster:
gcloud container clusters update mytestcluster --enable-autoscaling --min-nodes=1 --max-nodes=5
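Autoscaling can also be enabled when the cluster is first created; a sketch with placeholder values:

gcloud container clusters create mytestcluster \
  --num-nodes=3 \
  --enable-autoscaling --min-nodes=1 --max-nodes=5 \
  --zone=us-central1-b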
?
CPU Based Scaling:
With Horizontal Pod Autoscaling, Kubernetes automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilisation.
?
Execute the command: “kubectl get deployment” to get the existing deployments.
?
Create a Horizontal Pod Autoscaler i.e. hpa for a particular deployment using command:
?
kubectl autoscale deployment <deployment-name> --min=2 --max=5 --cpu-percent=80
?
In our case, we have currently set the minimum number of replicas to 1. What if we need to update the minimum number of replicas to 2? In this scenario, we just need to get the HPA in YAML format, update the YAML file, and apply it again. Here's an example:
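A sketch of that workflow, assuming the HPA was created with the kubectl autoscale command above (so it carries the deployment's name):

kubectl get hpa <deployment-name> -o yaml > hpa.yaml
# edit hpa.yaml and change spec.minReplicas from 1 to 2
kubectl apply -f hpa.yaml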
?
?
7. Implemented Ingress External HTTP(S) Load Balancer
Ingress for External HTTP(S) Load Balancing deploys the Google Cloud External HTTP(S) Load Balancer. This internet-facing load balancer is deployed globally across Google's edge network as a managed and scalable pool of load balancing resources.
?
Google Cloud's external HTTP(S) load balancer is a globally distributed load balancer for exposing applications publicly on the internet. It's deployed across Google Points of Presence (PoPs) globally providing low latency HTTP(S) connections to users. Anycast routing is used for the load balancer IPs, allowing internet routing to determine the lowest cost path to its closest Google Load Balancer.
?
GKE Ingress deploys the external HTTP(S) load balancer to provide global load balancing natively for Pods as backends.
?
Node Port:
Before creating a load balancer, create a NodePort Service in the Services section of Kubernetes Engine.


This creates the following manifest for the NodePort Service:
?
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-04-15T10:26:21Z"
  labels:
    app: demo-frontend
  name: demo-frontend
  namespace: kiwano-frontend-dev
  resourceVersion: "6783344"
  selfLink: /api/v1/namespaces/demo-frontend-dev/services/demo-frontend
  uid: 8d42efc4-7f03-11ea-8880-42010a800089
spec:
  clusterIP: 10.0.10.1
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30785
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: demo-frontend
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
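The same NodePort Service can also be created from the command line instead of the console; a sketch, assuming the frontend Deployment is named demo-frontend in the kiwano-frontend-dev namespace used in the manifest above:

kubectl expose deployment demo-frontend \
  --type=NodePort --port=80 --target-port=80 \
  --namespace=kiwano-frontend-dev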
?
?
?
?
?
Ingress Load Balancer:
?
To create an Ingress load balancer for the FE service:

This creates the following manifest for the Ingress load balancer:
?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-30785--475902e47c60d6e1":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-demo-frontend-dev-frontend-ingress--475902e47c60d6e1
    ingress.kubernetes.io/target-proxy: k8s-tp-demo-frontend-dev-frontend-ingress--475902e47c60d6e1
    ingress.kubernetes.io/url-map: k8s-um-demo-frontend-dev-frontend-ingress--475902e47c60d6e1
    kubernetes.io/ingress.global-static-ip-name: demofrontendloadbalancer
  creationTimestamp: "2020-04-15T15:22:19Z"
  generation: 1
  name: frontend-ingress
  namespace: demo-frontend-dev
  resourceVersion: "8812864"
  selfLink: /apis/extensions/v1beta1/namespaces/demo-frontend-dev/ingresses/frontend-ingress
  uid: e5552c47-7f2c-11ea-8880-42010a800089
spec:
  backend:
    serviceName: demo-frontend
    servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX.XXX
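The kubernetes.io/ingress.global-static-ip-name annotation above refers to a reserved global static IP. It can be reserved ahead of time with gcloud; the address name below matches the annotation value used above:

gcloud compute addresses create demofrontendloadbalancer --global
gcloud compute addresses describe demofrontendloadbalancer --global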