KCC Direct Controllers, Compact Placement Policies and a New Cloud Region in Mexico

The News

GKE

  • KCC Direct Controllers: KCC (Kubernetes Config Connector) is a tool that lets you provision Google Cloud resources using Kubernetes manifests (aka YAML files). The product announced a new feature called Direct Controllers, which calls the native Google Cloud API to create each resource (instead of generating Terraform code and running it). The team also released a contribution guide so that anyone can add support for their own resources or fields.
  • Specify compact placement policies at the pod level: With node auto-provisioning, GKE automatically creates node pools for your clusters and places your workloads on them. You can now specify a custom compact placement policy at the pod level, which instructs node auto-provisioning to place the node pools, and the workloads on them, physically close to each other within a zone. This helps reduce network latency between your pods.
  • The Querétaro, Mexico region is open for business: The northamerica-south1 region is now available. For more information, see Global Locations.
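To make the first bullet concrete, here is a minimal Config Connector manifest: you describe a Google Cloud resource as a Kubernetes object and KCC reconciles it into a real resource. The bucket name is a hypothetical placeholder; with Direct Controllers enabled for a resource type, the same manifest is reconciled via the native API rather than generated Terraform.

```yaml
# A Cloud Storage bucket declared as a Kubernetes resource via KCC.
# Applying this with kubectl creates (and keeps in sync) the real bucket.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: my-example-bucket   # placeholder; bucket names are globally unique
spec:
  location: US
  uniformBucketLevelAccess: true
```

You apply it like any other Kubernetes object (`kubectl apply -f bucket.yaml`), and deleting the object deletes the bucket, subject to your deletion-policy annotations.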
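For the pod-level compact placement bullet, a sketch of what this looks like in practice: you create a compact placement resource policy, then reference it from the pod spec so node auto-provisioning picks it up. The policy name is a placeholder, and the nodeSelector label key is my assumption of the documented key; confirm it against the GKE compact placement docs.

```yaml
# Pod requesting nodes governed by a custom compact placement policy.
# Node auto-provisioning will create node pools collocated within a zone.
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-workload
spec:
  nodeSelector:
    # Assumed label key; check the GKE docs for the exact selector.
    cloud.google.com/placement-policy-name: my-compact-policy
  containers:
  - name: app
    image: us-docker.pkg.dev/my-project/my-repo/app:latest  # placeholder image
```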

Cloud

  • Google Cloud expanded the CVE program: Starting November 12, Google Cloud began issuing CVEs for critical Google Cloud vulnerabilities, even when no customer action or patching is required. In those cases, the CVE record is tagged “exclusively-hosted-service” to indicate that no action is needed.
  • Enable or disable scanning per repository: Artifact Registry now supports enabling or disabling vulnerability scanning per individual repository. Previously, scanning could only be enabled for an entire project; now you can toggle it repository by repository.
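A sketch of how the per-repository toggle might look from the CLI. The repository name and location are placeholders, and the flag name is an assumption on my part; check `gcloud artifacts repositories update --help` and the release notes for the exact spelling.

```shell
# Turn off vulnerability scanning for one repository only
# (flag name assumed; verify against the gcloud reference).
gcloud artifacts repositories update my-repo \
  --location=us-central1 \
  --disable-vulnerability-scanning
```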

The Technical

GKE

  • Speed up data loading for AI/ML inference on GKE: As AI models grow bigger, so does the time it takes to start them. Container images for inference servers (TGI, vLLM, …) already weigh in at tens of GBs; add the size of the model itself and you are looking at minutes before a pod starts serving traffic. This article explores GKE features that can help reduce the time it takes to load a model.
  • Self-Service Fleets on GKE with ArgoCD: Managing applications across clusters and teams can be cumbersome, and GKE Enterprise fleets aim to streamline this. In this article you’ll learn how combining GKE fleets with ArgoCD and Workload Identity can make fleet management easier and let developers self-service on GKE.
  • Making IAM for GKE simpler: We have done a lot of work to make managing IAM policies for GKE easier. One area of friction has been Workload Identity, which takes many steps to configure. We have now launched a simplified process.
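To illustrate the kind of simplification the last bullet refers to: with Workload Identity Federation for GKE, a Kubernetes service account can be granted IAM roles directly as a principal, without creating and linking an intermediate Google service account. The project, bucket, namespace, and service-account names below are placeholders.

```shell
# Grant a GKE workload (namespace "default", KSA "my-app") read access
# to a bucket directly, with no intermediate Google service account.
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --role=roles/storage.objectViewer \
  --member="principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/default/sa/my-app"
```

Compare this single binding with the older flow: create a Google service account, grant it the role, bind the KSA to it with `roles/iam.workloadIdentityUser`, and annotate the KSA.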

The Editorial

Abdelfettah SGHIOUAR

Senior Cloud Developer Advocate | Podcaster | Speaker | CNCF Ambassador | Kubestronaut | Human
