Kubernetes-101: 1000-Foot Overview
Mattias Fjellström
Cloud Architect | Author | HashiCorp Ambassador | HashiCorp User Group Leader
In this series of articles we have looked at many of the Kubernetes primitives that are needed to run a working application: Pods, Deployments, Ingresses, Services, Volumes, ConfigMaps, Secrets, and more.
Kubernetes is a platform, and there are many extensions and tools in the Kubernetes ecosystem that extend its functionality far beyond what vanilla Kubernetes can offer. This is both a good thing and an overwhelming thing. This is what the Cloud Native Computing Foundation (CNCF) landscape of cloud-native tools looks like; many of these tools have something to do with Kubernetes in some way:
In this article we will go through each and every box in the above image! No, I am just kidding. I will briefly show a few examples of interesting technology that are part of the landscape in the image above.
The common thread in the three tools we will look at, and many of the other tools in the landscape, is that they use custom resource definitions in Kubernetes. This is an advanced topic we have not covered, but the gist is that you can create your own resource types, along with controllers/operators for these types, and install them in any Kubernetes cluster. In the following sections we will look at Argo CD, Knative Functions, and Tekton Pipelines.
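To give a flavour of what a custom resource definition looks like, here is a minimal sketch of a hypothetical CRD that adds a new Widget resource type to a cluster. The group, names, and schema are made up purely for illustration; they do not correspond to any real project:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must be <plural>.<group>
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer
```

After applying a CRD like this you could run `kubectl get widgets` just like for any built-in resource type; the actual behaviour comes from a controller or operator that watches these resources, which is what tools like Argo CD, Knative, and Tekton ship alongside their CRDs.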
This article is also available at my blog mattias.engineer/k8s/1000-foot-overview/
GitOps with Argo CD
Traditionally you deploy code to your Kubernetes cluster using a push model. This usually means you have a CI/CD pipeline that connects to your cluster and applies your Kubernetes manifests to it. A wild idea appeared in someone's head: what if we pull changes into our cluster instead of pushing changes into it? This is what the idea of GitOps is all about. Don't let a CI/CD pipeline have access to your cluster; instead let your cluster have access to your git repository. The basic idea is that you have a git repository with your Kubernetes manifest files, and you install an operator in your cluster that continuously checks your git repository for any changes. If a change is detected the operator applies the changes to the resources in your cluster. One nice feature of this approach is that your external deployment pipeline does not need access to your cluster, improving the security posture of your cluster.
Argo CD is a popular tool for working with GitOps. Argo CD introduces new custom resources into your Kubernetes cluster. To install these custom resources you need to follow the installation instructions for Argo CD.
With Argo CD installed in your cluster you can create an Argo CD AppProject:
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-argo-project
spec:
  sourceRepos:
    - https://my-git-repo.com/repo/repo.git
  destinations:
    - namespace: my-namespace
      server: https://kubernetes.default.svc
  namespaceResourceWhitelist:
    - group: "*"
      kind: "*"
An AppProject is a logical container for applications. In the above manifest I say that I allow any Application connected to this AppProject to deploy code from the repository https://my-git-repo.com/repo/repo.git, and I allow any namespaced resources to be created in the Namespace my-namespace in the cluster where the AppProject is deployed (that is what server: https://kubernetes.default.svc means). After setting up an AppProject I can create one or more Applications:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-argo-application
spec:
  project: my-argo-project
  destination:
    server: https://kubernetes.default.svc
    namespace: my-namespace
  source:
    repoURL: https://my-git-repo.com/repo/repo.git
    targetRevision: main
    path: manifests/
    directory:
      recurse: true
  syncPolicy:
    automated:
      selfHeal: true
In the manifest above we see an Application named my-argo-application. Resources in this Application will be created from manifest files located in the manifests directory of my repository at https://my-git-repo.com/repo/repo.git, using the main branch. All resources will be created in the my-namespace Namespace. I have configured a syncPolicy that says the Application should self-heal. This means that if there are any changes to the running resources in my Kubernetes cluster then Argo CD should reverse those changes for me automatically.
This is essentially everything that is required to get started with GitOps in your cluster. There are many details I have skipped in this overview. For instance, how should your repository be structured? How do you handle secrets in your applications? There are answers to these questions, but for now I will leave the topic of GitOps.
Run functions using Knative Functions
Knative allows you to build serverless and event-driven applications in Kubernetes. Knative has a few different components; in this section I will only discuss Knative Functions.
Knative Functions allow you to easily create event-driven functions in your cluster without requiring deep knowledge of Kubernetes or containers.
I will run through a small demo to show you how a Knative Function works. To follow along you need to install the Knative Functions func CLI tool (instructions), and you must also set up an environment for Knative using this guide.
With all the prerequisites installed I begin by creating a new function named hello-world:
$ func create -l typescript hello-world
Created typescript function in /tmp/hello-world
The func create command generated the following files for me:
$ tree .
.
└── hello-world
    ├── README.md
    ├── func.yaml
    ├── package-lock.json
    ├── package.json
    ├── src
    │   └── index.ts
    ├── test
    │   ├── integration.ts
    │   └── unit.ts
    └── tsconfig.json

3 directories, 8 files
The Knative-specific file is func.yaml, which looks like this:
specVersion: 0.35.0
name: hello-world
runtime: typescript
registry: ""
image: ""
imageDigest: ""
created: 2023-02-17T19:34:47.421329+01:00
build:
  buildpacks: []
  builder: ""
  buildEnvs:
    - name: BP_NODE_RUN_SCRIPTS
      value: build
run:
  volumes: []
  envs: []
deploy:
  namespace: default
  remote: false
  annotations: {}
  options: {}
  labels: []
  healthEndpoints:
    liveness: /health/liveness
    readiness: /health/readiness
Under the deploy section in the configuration file we see some details that are interesting for our application. The function will be deployed to the default Namespace in a local cluster (remote: false).
The actual source code of my function can be found in src/index.ts. I remove some of the example code and end up with this function:
import { Context, StructuredReturn } from "faas-js-runtime"

export const handle = async (context: Context, body: string): Promise<StructuredReturn> => {
  context.log.info("Sample log")
  return {
    body: body,
    headers: {
      "content-type": "application/json"
    }
  }
}
This is a basic function that returns any body that is sent to it. Note that it is purely for illustrative purposes! To deploy my function I run func deploy:
$ func deploy --registry <my registry>
...
Deploying function to the cluster
...
This process takes a while. A Docker image for the function is built and pushed to the registry I specify in the deploy command. Once it is done an instance of the function is started in your cluster. To invoke the function you can run func invoke in your terminal.
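If you want to sanity-check the handler logic without a cluster, you can exercise it locally. The sketch below uses hand-written stand-ins for the faas-js-runtime Context and StructuredReturn types (they are simplified, not the library's full definitions) so the echo behaviour can be observed with plain Node tooling:

```typescript
// Simplified local stand-ins for the faas-js-runtime types,
// just enough to call the handler outside a cluster.
interface Context {
  log: { info: (msg: string) => void };
}

interface StructuredReturn {
  body: unknown;
  headers: Record<string, string>;
}

// The same handler as in src/index.ts, typed against the stand-ins.
const handle = async (context: Context, body: string): Promise<StructuredReturn> => {
  context.log.info("Sample log");
  return {
    body: body,
    headers: {
      "content-type": "application/json",
    },
  };
};

// Invoke the handler with a fake context to see the echo behaviour.
const fakeContext: Context = { log: { info: (msg) => console.log(msg) } };
handle(fakeContext, '{"hello":"world"}').then((result) => {
  console.log(result.body); // the request body is echoed back unchanged
});
```

This is only a local illustration; in the cluster the real runtime constructs the Context and wires the handler to HTTP requests for you.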
This has been a very brief look at Knative Functions. If you are familiar with tools such as Azure Functions or AWS Lambda you will probably like Knative Functions!
Cloud Native CI/CD Pipelines with Tekton
You might be familiar with CI/CD pipelines from Jenkins, GitHub Actions, or similar tools. Tekton is a tool to create CI/CD pipelines as Kubernetes custom resources. Tekton brings CI/CD pipelines into your cluster, and you can create them like you create any Kubernetes resource. To get started with Tekton you should follow the official installation instructions.
With Tekton installed in your cluster you can begin defining Tasks that consist of steps that each perform some command. Two simple Tasks that each run a custom script look like this:
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  params:
    - name: username
      type: string
  steps:
    - name: hello
      image: ubuntu
      script: |
        #!/bin/bash
        echo "Hello $(params.username)!"
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: goodbye-task
spec:
  params:
    - name: username
      type: string
  steps:
    - name: goodbye
      image: ubuntu
      script: |
        #!/bin/bash
        echo "Goodbye $(params.username)!"
Above I have defined two Tasks. The first Task is named hello-task. It takes a username as a parameter, and it runs a custom script that echoes "Hello $(params.username)!" where the value of the parameter will be substituted in the echo result. The second Task is named goodbye-task and it does basically the same thing, except that it says Goodbye instead of Hello. With these two Tasks we can construct our first Pipeline. A Pipeline combines our Tasks into a workflow:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: my-pipeline
spec:
  params:
    - name: username
      type: string
  tasks:
    - name: hello
      taskRef:
        name: hello-task
      params:
        - name: username
          value: $(params.username)
    - name: goodbye
      runAfter:
        - hello
      taskRef:
        name: goodbye-task
      params:
        - name: username
          value: $(params.username)
This Pipeline first runs my hello-task, and if successful it will continue to run the goodbye-task.
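A Pipeline definition on its own does not execute anything; each execution is represented by a PipelineRun resource that references the Pipeline and supplies its parameters. Here is a minimal sketch (the run name and the username value are my own examples):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: my-pipeline-run
spec:
  pipelineRef:
    name: my-pipeline
  params:
    - name: username
      value: Mattias
```

Applying this manifest starts one run of the Pipeline, and you can follow its progress with kubectl or with Tekton's tkn CLI.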
I have not defined any triggers for my Pipeline, but I wanted to keep this overview simple so I will skip that for now.
Summary
In this article we briefly looked at three interesting tools from the larger cloud-native ecosystem for Kubernetes.
The time has now come for me to finish this series! For anyone who has followed me in this journey, thank you!
Other series will follow after this one; we will see what the topic will be!