Cloud Architecting with GCP: A Learning Journey - Part 3

This is Part 3 of a series intended to help you, as a solutions architect, design reliable solutions on GCP. In Parts 1 and 2, we covered an overview of cloud computing and GCP, key roles in cloud architecture, the importance of GCP in the enterprise landscape, and a deep dive into GCP Compute Engine. In this part we will cover the remaining core compute services: GKE, Anthos, Cloud Run, Cloud Run functions, and App Engine.

Google Kubernetes Engine (GKE)

Google Kubernetes Engine (GKE) is a Google-hosted, managed Kubernetes service that simplifies deploying, scaling, and managing containerized applications. The GKE environment consists of multiple machines, specifically Compute Engine instances, grouped together to form a cluster. You can create a Kubernetes cluster yourself, so how is GKE different from running Kubernetes on your own? From the user’s perspective, it’s a lot simpler: GKE manages all the control plane components for you. It still exposes an IP address to which you send all of your Kubernetes API requests, but GKE takes responsibility for provisioning and managing all the control plane infrastructure behind it, which eliminates the need to operate a separate control plane.

Clusters can be created across a region or in a single zone; a single zone is the default. When you deploy across a region, the nodes are deployed to three separate zones, so the total number of nodes is three times the per-zone node count.

With GKE, Google Cloud persistent disks are automatically provisioned by default when you create Kubernetes persistent volumes to provide storage for stateful applications. GKE also provisions Google Cloud network load balancers when you create Kubernetes Services of type LoadBalancer, and HTTP(S) load balancers when you configure Kubernetes Ingress resources. This auto-provisioning eliminates the need to configure and manage these resources manually.

What are Pods?

A Pod creates the environment where the containers live, and that environment can accommodate one or more containers. If there is more than one container in a Pod, they are tightly coupled and share resources, like networking and storage. Kubernetes assigns each Pod a unique IP address, and every container within a Pod shares the network namespace, including IP address and network ports. Containers within the same Pod can communicate through localhost, 127.0.0.1. A Pod can also specify a set of storage volumes that will be shared among its containers.
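To make the sharing relationships concrete, here is a minimal sketch of a Pod manifest. Kubernetes manifests are normally written in YAML; the same structure is shown below as a Python dict so the shared volume and the localhost relationship are explicit. The two containers and their image names are hypothetical examples, not from any real deployment.

```python
# A sketch of a two-container Pod: an app plus a log-forwarding sidecar.
# Both containers mount the same volume and share the Pod's single network
# namespace, so the sidecar could reach the app at http://127.0.0.1:8080.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-with-sidecar"},
    "spec": {
        "containers": [
            {
                "name": "app",
                "image": "example.com/app:latest",  # hypothetical image
                "ports": [{"containerPort": 8080}],
                "volumeMounts": [
                    {"name": "shared-logs", "mountPath": "/var/log/app"}
                ],
            },
            {
                "name": "log-forwarder",  # sidecar container
                "image": "example.com/forwarder:latest",  # hypothetical image
                "volumeMounts": [
                    {"name": "shared-logs", "mountPath": "/logs"}
                ],
            },
        ],
        # One volume, mounted by both containers: Pod-level shared storage.
        "volumes": [{"name": "shared-logs", "emptyDir": {}}],
    },
}

# Every container in the Pod mounts the same named volume.
shared = {m["name"] for c in pod["spec"]["containers"] for m in c["volumeMounts"]}
```

Serialized to YAML, this dict is exactly what you would `kubectl apply` to a GKE cluster.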

Use Cases:

  • Deploying microservices-based applications.
  • Running workloads that need high scalability and reliability.
  • Hybrid and multi-cloud deployments with Kubernetes portability.

Key Features:

  • Automatic scaling: Scale nodes and pods dynamically based on traffic.
  • Multi-cluster support: Deploy workloads across multiple regions.
  • Automatic upgrades and patching: Reduce operational overhead.
  • Integrated monitoring and logging: Simplifies troubleshooting.
  • Support for GPU and TPU workloads: Ideal for AI/ML applications.

GKE modes

  • Autopilot: the recommended mode. GKE manages the underlying infrastructure, including node configuration, autoscaling, auto-upgrades, and baseline security and networking configuration. With Autopilot, you pay only for what you use. The configuration options in GKE Autopilot are more restrictive than in GKE Standard: features like SSH and privilege escalation are removed, and there are limitations on node affinity and host access. However, all Pods in GKE Autopilot are scheduled with the Guaranteed Quality of Service (QoS) class.
  • Standard: you manage the underlying infrastructure, including configuring the individual nodes. With GKE Standard, you pay for all of the provisioned infrastructure, regardless of how much is actually used.


Anthos

Anthos lets you run Kubernetes clusters both in the cloud and on-premises, provides consistent multi-cluster management, and includes a service mesh based on Istio.

The service mesh deploys a sidecar proxy alongside each workload to implement common microservice features:

  • Authentication & authorization (service accounts)
  • Distributed Tracing, Automatic metrics, logs & dashboards
  • A/B testing, canary rollouts (even track SLIs & error budgets)
  • Cloud Logging & Cloud Monitoring Support


Cloud Run

Cloud Run is a fully managed platform for running containerized applications in a serverless environment. It abstracts infrastructure management while providing portability and scalability.

Cloud Run is a managed compute platform that runs stateless containers invocable via web requests or Pub/Sub events. Because Cloud Run is serverless, it removes all infrastructure management tasks so you can focus on developing applications. It’s built on Knative, an open API and runtime environment built on Kubernetes, so workloads can run fully managed on Google Cloud, on Google Kubernetes Engine, or anywhere Knative runs.

It can automatically scale up and down from zero almost instantaneously, and it charges only for the resources used, calculated to the nearest 100 milliseconds, so you’ll never pay for over-provisioned resources. You pay only for the CPU, memory, and networking consumed during request handling.
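To make the 100-millisecond billing granularity concrete, here is a tiny illustration. The helper name is mine, not a Cloud Run API; it just shows the rounding arithmetic the pricing model implies.

```python
import math

def billable_ms(duration_ms: int) -> int:
    """Round a request's duration up to the nearest 100 ms billing increment."""
    return math.ceil(duration_ms / 100) * 100

# A 30 ms request and a 100 ms request are both billed as 100 ms of usage;
# a 101 ms request is billed as 200 ms.
```

The practical consequence: very short request handlers still incur one 100 ms increment per request, but you never pay for idle capacity between requests.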

Cloud Run developer workflow: The Cloud Run developer workflow is a straightforward three-step process. First, use your favorite programming language to write your application. This application should listen for web requests. Second, build and package your application into a container image. Finally, deploy the container image to Cloud Run. When you deploy your container image, you get a unique HTTP(S) URL. Cloud Run then starts your container on demand to handle requests, and ensures that all incoming requests are handled by dynamically adding and removing containers.
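Step one of the workflow above assumes an application that listens for web requests on the port Cloud Run provides through the PORT environment variable. A minimal sketch in Python, using only the standard library (a real service would usually use a web framework, but any server that answers HTTP on $PORT works):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

# Cloud Run injects the listening port via the PORT env var (8080 by default).
PORT = int(os.environ.get("PORT", "8080"))

class HelloHandler(BaseHTTPRequestHandler):
    """Answers every GET with a plain-text greeting."""

    def do_GET(self):
        body = b"Hello from Cloud Run!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging in this sketch

def create_server(port: int) -> HTTPServer:
    # Binding 0.0.0.0 accepts requests arriving from outside the container.
    return HTTPServer(("0.0.0.0", port), HelloHandler)

# To run locally or as the container entrypoint:
# create_server(PORT).serve_forever()
```

Packaged into a container image (step two) and deployed (step three), Cloud Run would route requests from the generated HTTPS URL to instances of this server.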

Jobs that must be run in response to Pub/Sub or Eventarc events are good candidates for Cloud Run. Cloud Run jobs work differently from HTTP Cloud Run services. A Cloud Run job doesn't listen for and serve HTTP requests. There is no need to listen on a port or start a web server. Instead, the job is executed as a one-off task or as part of a workflow. You can also use Cloud Scheduler to run a job on a regular schedule. When a Cloud Run job finishes, the job exits. Each job can be composed of a single task or multiple independent tasks. Because multiple tasks within a job are independent, the tasks can be run in parallel. In addition, tasks that fail can be automatically retried. Each task within a job runs a single container image. Like Cloud Run services, Cloud Run jobs run on a fully serverless platform.
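The parallel-task model above can be sketched in a few lines. Cloud Run jobs inject the CLOUD_RUN_TASK_INDEX and CLOUD_RUN_TASK_COUNT environment variables so each task can claim its own slice of the work; the work items and function names below are hypothetical placeholders.

```python
import os

def my_slice(items, task_index=None, task_count=None):
    """Return the portion of `items` this task is responsible for."""
    if task_index is None:
        task_index = int(os.environ.get("CLOUD_RUN_TASK_INDEX", "0"))
    if task_count is None:
        task_count = int(os.environ.get("CLOUD_RUN_TASK_COUNT", "1"))
    # Simple striping: task i takes items i, i + count, i + 2*count, ...
    return items[task_index::task_count]

def main():
    items = ["item-%d" % i for i in range(10)]  # placeholder work items
    for item in my_slice(items):
        print("processing", item)
    # Exiting with code 0 marks this task successful; a non-zero exit
    # lets Cloud Run retry the task automatically.

# Container entrypoint for the job: main()
```

Because the tasks are independent, running the job with, say, four parallel tasks covers all the items exactly once with no coordination between containers.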

Cloud Run Workflows:

You can use either the container-based workflow or the source-based workflow. The source-based workflow deploys source code instead of a container image: Cloud Run builds the source and packages the application into a container image using buildpacks, an open source project. Google Cloud’s buildpacks are also used by the internal build systems for Cloud Run functions and App Engine.

Use Cases:

  • Deploying APIs and web services.
  • Running event-driven or background jobs.
  • Stateless applications requiring rapid scaling.
  • Running applications with unpredictable traffic.

Key Features:

  • Auto-scaling: Scales up to thousands of instances instantly.
  • Pay-per-use pricing: You only pay for the actual compute time.
  • Support for any language/runtime: As long as it runs in a container.
  • HTTPS endpoint out-of-the-box: No need for additional setup.
  • Knative-based portability: Run the same containers on Kubernetes.


Cloud Run Functions

Previously known as Cloud Functions. Cloud Run functions is a lightweight, event-based, asynchronous compute solution that allows you to create small, single-purpose functions that respond to cloud events, without the need to manage a server or a runtime environment. These functions can be used to construct application workflows from individual business logic tasks. You’re billed to the nearest 100 milliseconds, but only while your code is running. Events from Cloud Storage and Pub/Sub can trigger Cloud Run functions asynchronously, or you can use HTTP invocation for synchronous execution.

With Cloud Run functions, you can develop an application that is event-driven, serverless, and highly scalable. Each function is a lightweight microservice that allows you to integrate application components and data sources. Cloud Run functions is ideal for microservices that require a small piece of code to quickly process data in response to an event. Cloud Run functions is priced according to how long your function runs, the number of invocations, and the resources that you provision for the function.

You can use Cloud Run functions for lightweight extract-transform-load, or ETL, operations or for processing messages that are published to a Pub/Sub topic. Cloud Run functions can also serve as a target for webhooks, which allow applications or services to make direct HTTP calls to invoke microservices. Any lightweight functionality that is run in response to an event is a candidate for Cloud Run functions. The Cloud Run functions service automatically installs all dependencies before running your code.
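The Pub/Sub-triggered pattern above can be sketched as a plain handler. Pub/Sub delivers the message payload base64-encoded inside the event; the event shape below follows Pub/Sub's message format, but the transform and field names are hypothetical, and the wiring to a real trigger (via the Functions Framework) is omitted to keep the sketch standard-library only.

```python
import base64
import json

def handle_pubsub_event(event: dict) -> dict:
    """Decode a Pub/Sub message and perform a tiny ETL step."""
    # Pub/Sub puts the payload, base64-encoded, at message.data.
    payload = base64.b64decode(event["message"]["data"]).decode("utf-8")
    record = json.loads(payload)
    # Hypothetical transform: normalize the user name and tag the record.
    record["user"] = record["user"].strip().lower()
    record["processed"] = True
    return record
```

In a deployed function, the trigger framework would pass each incoming event to a handler like this; the function runs only while processing and is billed accordingly.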


App Engine

App Engine is a fully managed platform-as-a-service (PaaS) that enables developers to deploy web applications without managing servers. It supports multiple programming languages and offers built-in scalability.

App Engine Environments:

App Engine supports two environments: standard and flexible.

The App Engine standard environment runs your code in a sandbox, and doesn’t require you to build containers. However, your applications must be written with specific versions of supported languages. The standard environment responds well to spikes in traffic by scaling up within seconds and scaling down to zero after 15 minutes of inactivity. You pay nothing when your application scales down to zero. The standard environment is a reasonable choice for non-containerized applications with spikes in traffic.

The App Engine flexible environment requires you to create a container for your application, but you gain flexibility by doing so: your code can be written in any language, with any libraries. The flexible environment is better for applications with sustained traffic, because it scales up and down much more slowly than the standard environment and cannot scale to zero; your application always needs at least one running instance.

Cloud Run provides many of the best features of both App Engine environments. Cloud Run applications are required to run in containers. If you do not need custom containers, buildpacks can automatically create the containers for you. Cloud Run can scale up and down almost immediately in response to traffic spikes. Unlike App Engine, you only pay for Cloud Run when you are processing requests, rounded up to the nearest tenth of a second. App Engine is fully supported, and it works well for creating web applications and web APIs, but Cloud Run is often a better choice for these use cases.

Use Cases:

  • Web and mobile backend applications.
  • Startups and developers who want to focus on code rather than infrastructure.
  • Applications requiring automatic scaling with minimal operational overhead.
  • Multi-language development with support for Python, Java, Node.js, and more.

Key Features:

  • Automatic scaling: Scales up and down based on traffic.
  • Fully managed environment: No need for infrastructure management.
  • Standard and flexible environments: Standard uses sandboxed runtimes, while flexible allows custom runtimes.
  • Built-in security and monitoring: Integrates with Cloud IAM and Cloud Monitoring (formerly Stackdriver).


Comparative Analysis

Choosing the right compute service depends on your specific use case, scalability needs, infrastructure control, and operational preferences. By understanding these options, solutions architects can design efficient and cost-effective systems on GCP. The comparison below can help you choose the right compute service for your use case.

In terms of infrastructure control, Compute Engine provides the most control, GKE and Cloud Run provide less, and Cloud Run functions provides the least. On the managed-versus-serverless spectrum, Compute Engine is fully customer-managed, while GKE, Cloud Run, and Cloud Run functions move progressively toward fully serverless.


Conclusion

Google Cloud provides a variety of compute options suited for different workloads.

  • Use Compute Engine when you need full control over infrastructure.
  • Use GKE for managing containerized applications efficiently.
  • Use Cloud Run when you want serverless containers with automatic scaling.
  • Use App Engine for building web apps with minimal infrastructure concerns.
  • Use Cloud Run functions for lightweight event-driven workloads.


You can read this article on Medium:

https://medium.com/@hamdyahmed1984/cloud-architecting-with-gcp-a-learning-journey-part3-209fed0db9cb

