2-Day Workshop on GCP by LinuxWorld
Shubhankar Thapliyal
DevOps Engineer at AAPC || Cloud Enthusiast! || Ex-Engineer @ Mindtree
Recently, I finished a 2-day GCP (Google Cloud Platform) workshop under the guidance of Mr. Vimal Daga.
I am a Mindtree Mind with basic work experience in Cloud, DevOps and programming. My journey with Linux World Informatics started when I joined their Ansible Automation training, which is an ongoing course.
On Day 1 of the workshop, we learned various concepts:
- Google Cloud Platform (GCP) is a public cloud that provides the infrastructure any IT company needs to run its services, without the company having to invest a lump sum in building that infrastructure itself. GCP charges according to the services used in its data centers, which are spread across the globe and provide highly reliable services.
- There are two ways of accessing GCP: through the Web UI and through the CLI. The CLI provides facilities that might not be available in the Web UI.
- Before accessing GCP services, it is mandatory to create a project, which makes it easy to manage all the services grouped under it. Once that is done, link the project to a billing account, enable the required services (APIs) and then start using them.
- GCP provides a CLI console (Cloud Shell) on the website itself so that all the CLI commands can be run from the browser; alternatively, a separate SDK is available for users to run gcloud commands locally.
- We launched an instance using the Google Compute Engine service through both the Web UI and the CLI, and hosted an Apache web server inside the instance. We then created a firewall rule so that the httpd service is accessible to the public (see the CLI sketch after this list).
- VPC plays an important role in networking in GCP. A VPC is like a private room in which various services can be hosted. A VPC can be allocated multiple firewall rules, which the instances and resources inside that particular VPC use.
- Generally, instances in different VPCs have no connectivity to each other. VPC Peering is a GCP service that enables mutual connectivity not only between instances in different VPC networks but also across different projects in GCP.
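For reference, here is a minimal CLI sketch of the Day 1 flow described above, assuming the Google Cloud SDK is installed and authenticated; the project ID, billing account, zone, image and network names are placeholders rather than the exact values used in the workshop:

    # Create a project, link it to a billing account and enable the Compute Engine API
    gcloud projects create my-demo-project
    gcloud beta billing projects link my-demo-project --billing-account=XXXXXX-XXXXXX-XXXXXX
    gcloud config set project my-demo-project
    gcloud services enable compute.googleapis.com

    # Launch a Compute Engine instance and install the Apache (httpd) web server in it
    gcloud compute instances create web-1 --zone=us-central1-a --machine-type=e2-micro \
        --image-family=centos-stream-9 --image-project=centos-cloud --tags=http-server
    gcloud compute ssh web-1 --zone=us-central1-a \
        --command="sudo yum -y install httpd && sudo systemctl enable --now httpd"

    # Firewall rule so that the httpd service is reachable from the public internet
    gcloud compute firewall-rules create allow-http --allow=tcp:80 --target-tags=http-server

    # VPC peering between two existing networks (it must be created in both directions)
    gcloud compute networks peerings create peer-a-to-b --network=vpc-a --peer-network=vpc-b
    gcloud compute networks peerings create peer-b-to-a --network=vpc-b --peer-network=vpc-a

With the firewall rule in place, the Apache test page becomes reachable at http://<instance-external-IP>.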
On Day 2 of the workshop, we learned further advanced concepts:
- The session mainly focused on the Kubernetes (K8s) cluster services provided by GCP. GCP provides Google Kubernetes Engine (GKE), through which the entire K8s cluster can be managed using reliable services provided by Google.
- While configuring a cluster, the only details that need to be provided are the specifications of the node pools. We only need to manage the worker nodes that we specify, whereas the K8s master node is completely managed by GCP. The control-plane services running inside the master node are all managed by Google, hence they are more reliable than a privately managed K8s cluster.
- The cluster can be managed from the local command prompt by updating the K8s config file (kubeconfig), and the K8s services can be integrated with Load Balancers, making them publicly available. The nodes are then reached through the same Load Balancer URL (see the sketch after this list).
- We further integrated these services with the database services in GCP. We worked with the Cloud SQL MySQL service, which provisions an entirely new instance in which a database can be created along with a specified user. This database is exposed only so that the program running on the cluster nodes can connect to it, letting the front-end application retrieve data from the back-end database.
- We discussed the IAM service of GCP, through which teammates can be granted restricted permissions to only the services they need, so that they can do their work without disturbing the other configurations.
- We discussed some core K8s topics such as Pods, Deployments, containers, ReplicaSets, etc.
- We also had a brief overview of Google App Engine (GAE), where the infrastructure required for a task is launched only at the time of use and is automatically terminated when the use case ends. This is useful because no bill is generated during the times when the infrastructure is not in use.
- The GAE service allows developers to deploy their code with just one command. The required infrastructure is created dynamically and the code is hosted on the service. It even maintains the different versions of the code.
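Similarly, here is a rough sketch of the Day 2 flow (GKE, kubectl, Cloud SQL, IAM and App Engine), reusing the placeholder project and zone from above; the image, database, user and role names are illustrative assumptions, not the exact ones from the workshop:

    # Create a GKE cluster; only the worker nodes are specified, the control plane is managed by Google
    gcloud container clusters create demo-cluster --zone=us-central1-a --num-nodes=2 --machine-type=e2-medium

    # Update the local kubeconfig so kubectl can talk to the cluster
    gcloud container clusters get-credentials demo-cluster --zone=us-central1-a

    # Deploy an application and expose it publicly through a Load Balancer service
    kubectl create deployment webapp --image=nginx
    kubectl expose deployment webapp --type=LoadBalancer --port=80
    kubectl get service webapp    # shows the external IP handed out by the Load Balancer

    # Cloud SQL: a managed MySQL instance with a database and a dedicated user
    gcloud sql instances create demo-sql --database-version=MYSQL_8_0 --tier=db-f1-micro --region=us-central1
    gcloud sql databases create appdb --instance=demo-sql
    gcloud sql users create appuser --instance=demo-sql --password=ChangeMe123

    # IAM: grant a teammate access to only the service they need
    gcloud projects add-iam-policy-binding my-demo-project \
        --member=user:teammate@example.com --role=roles/container.developer

    # App Engine: deploy code with a single command (expects an app.yaml in the current directory)
    gcloud app deploy

The external IP printed by kubectl get service is the public entry point for the application, and the pods running on the worker nodes connect to the Cloud SQL database using the created user.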
I'm very happy that Linux World is providing such great training exercises, and I hope they continue to do so in the future. Thanks to the Linux World Informatics team, Mr. Vimal, Mrs. Preeti and everyone else who motivated me and helped me through the process.