Into the Multi-Cloud Part 1: The Oracle DB Operator for Kubernetes, Containerized Oracle Databases and Azure
Introduction
As cloud architectures continue to evolve, multi-cloud solutions have become more common. Companies and teams may opt for a multi-cloud approach to avoid complete dependency on a single cloud provider. Going multi-cloud also gives you access to the technologies that each provider does best.
Oracle’s partnership with Microsoft opens a door of possibilities enabling you to take advantage of the Oracle Database on Azure and more!
Oracle delivers the Oracle Database Operator for Kubernetes (a.k.a. OraOperator), which extends the Kubernetes API to help automate lifecycle management of Oracle databases in the Oracle cloud, on-premises, or in containers.
In previous blogs, we have explored deploying containerized Oracle Databases using the Oracle Database Operator:
In this blog, our main focus and goal is to run containerized Oracle Databases on Azure and utilize Azure Kubernetes Service, Azure Files and Azure Disks.
Deploying to Azure Kubernetes Service (AKS)
To begin working with AKS and deploying the OraOperator in an AKS cluster, you need a working AKS cluster and access to it from either the Azure Cloud Shell or your local terminal. Follow the official Azure documentation for provisioning an AKS cluster and use the method that applies best for you.
To gain access to your cluster on Azure, you can find and follow the instructions on the Azure portal when viewing your cluster’s overview, by clicking on the button: Connect.
Accessing your AKS cluster on Azure Cloud Shell
On your Azure Cloud Shell, use the pre-installed Azure CLI and follow the provided terminal commands from the connect instructions, to set the current active subscription for your cluster.
az account set --subscription mySubscription
Finally, to configure access to your AKS cluster, you can run the following command to download cluster credentials. Make sure to replace the strings below with the proper resource group and cluster name.
az aks get-credentials --resource-group myResourceGroup --name myCluster --overwrite-existing
Installing a pre-requisite of the OraOperator: cert-manager
Before you install the OraOperator, it is required to install cert-manager. Cert-manager generates TLS certificates that are required by the operator’s webhooks. To install, run the following command:
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml
The above command creates the cert-manager namespace, among other resources. Make sure all pods in that namespace are Running before installing the operator.
Preparing for OraOperator installation: access-management
The operator currently supports two deployment modes: cluster-scoped and namespace-scoped. A cluster-scoped deployment allows the operator to work in and monitor all namespaces in the cluster, while a namespace-scoped deployment limits the operator to watching only the namespaces of your choosing.
This blog will opt for a namespace-scoped deployment. This configuration requires an update of the operator’s manifest file with all the namespaces the operator should watch, delimited by commas.
- name: WATCH_NAMESPACE
  value: "namespace1,namespace2"
You can download the manifest by using curl or copy-pasting into a local file. Run the following command to create the YAML file myOperatorManifest.yaml locally.
curl -o myOperatorManifest.yaml https://raw.githubusercontent.com/oracle/oracle-database-operator/main/oracle-database-operator.yaml
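The WATCH_NAMESPACE value can be edited by hand or scripted. Below is a minimal sed sketch, shown against a small sample of the manifest's env section rather than the full file (the sample and file path are illustrative; it assumes the default value is an empty string):

```shell
# Illustrative only: scripting the WATCH_NAMESPACE edit with sed,
# run here against a small sample of the manifest's env section.
cat > /tmp/env-sample.yaml <<'EOF'
- name: WATCH_NAMESPACE
  value: ""
EOF
# Replace the default empty value with the namespace(s) to watch.
sed 's/value: ""/value: "databases"/' /tmp/env-sample.yaml
```

The same substitution can be applied to the downloaded myOperatorManifest.yaml once you have verified the default value it ships with.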
For this blog, we set the WATCH_NAMESPACE value to databases, the namespace we want to limit the operator to. You can use the following command to create the namespace databases and set it as the default for any subsequent commands.
kubectl create ns databases && kubectl config set-context --current --namespace=databases
Finally, you need to grant the operator access to the namespace(s) by creating a role binding in each namespace for the operator's service account, oracle-database-operator-system:default.
kubectl create rolebinding oraoperator-rb-databases -n databases \
--clusterrole=oracle-database-operator-manager-role \
--serviceaccount=oracle-database-operator-system:default
Installing the Operator
To install the operator, apply the modified manifest file.
kubectl apply -f myOperatorManifest.yaml
Once the installation is complete, you can check the new API resources and custom resource definitions that the OraOperator adds, as shown in the following GIF.
Now that the OraOperator has been installed, you can start learning more about working with containerized Oracle Databases using the Single Instance Database controller of the operator.
Creating a Single Instance Database (SIDB) on AKS
In this blog, we will use the Single Instance Database (SIDB) controller and the Oracle Database 23ai Free container image. The following YAML creates an Oracle SIDB instance in the cluster:
# create-sidb.yaml
apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: norman-freedb
spec:
  sid: FREE
  edition: free
  ## 1 - Database Image details
  image:
    pullFrom: container-registry.oracle.com/database/free:latest
    prebuiltDB: true
  ## 2 - Secret containing SIDB password mapped to secretKey
  adminPassword:
    secretName: norman-freedb-admin-secret
  persistence:
    ## 3 - Dynamic Database Persistence Configuration
    size: 100Gi
    storageClass: "managed-csi"
    accessMode: "ReadWriteOnce"
    ## 4 - Setup and Startup scripts
    scriptsVolumeName: "naberinstorage-pv"
  ## 5 - Azure Loadbalancer
  loadBalancer: true
  replicas: 1
Let’s break this down.
1 — Configuring Database Image Details
The spec.image field provides the ability to specify which container image to deploy. The boolean field spec.image.prebuiltDB configures the deployment to use an image that includes the database data files inside the image itself. As a result, the startup time of the database container is reduced to a couple of seconds.
## 1 - Database Image details
image:
  pullFrom: container-registry.oracle.com/database/free:latest
  prebuiltDB: true
Note that this blog uses Oracle Database 23ai Free, which does not require a secret to authorize pulling from https://container-registry.oracle.com/, unlike the Oracle Database Enterprise Edition. The Enterprise Edition does not support the prebuiltDB option and requires you to accept the license agreement and provide registry credentials as a secret. If a secret is required, it is referenced in the SIDB YAML file under image.pullSecrets. More information can be found in the documentation.
image:
  pullFrom: container-registry.oracle.com/database/enterprise:latest
  pullSecrets: oracle-container-registry-secret
2 — Creating the Admin Password
Deploying an SIDB requires an admin password regardless of the database edition.
## 2 - Secret containing SIDB password mapped to secretKey
adminPassword:
  secretName: norman-freedb-admin-secret
You can create a Kubernetes secret for the password with the following command:
kubectl create secret generic norman-freedb-admin-secret --from-literal=oracle_pwd=<specify password here>
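Keep in mind that Kubernetes stores secret values base64-encoded, not encrypted. A quick local illustration of what ends up in the secret's data field (the password here is an example only):

```shell
# Secret values are stored base64-encoded (encoding, not encryption).
oracle_pwd='Welcome12345'                       # example password only
encoded=$(printf '%s' "$oracle_pwd" | base64)
echo "$encoded"                                 # V2VsY29tZTEyMzQ1
printf '%s' "$encoded" | base64 -d              # decodes back to Welcome12345
```

Anyone with read access to the secret can decode it, so restrict access with RBAC and consider encryption at rest for etcd.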
3 — Configuring Database Persistence with Azure (optional)
Containerized Oracle Databases, like containerized databases in general, are stateful applications, so persistence is crucial to maintaining your data and the state of the database. This is where persistent volumes come in: they are pieces of storage with a lifecycle independent of the application pods.
The SIDB controller offers two database persistence (storage) configurations for containerized Oracle Databases: dynamic provisioning and static provisioning.
On Azure, AKS supports three container storage interface (CSI) drivers, for the Azure Disks, Azure Files and Azure Blob storage services, with which you can create persistent volumes. You can find out more about each and when to use them in Azure's best practices documentation for AKS storage and backups.
This blog demonstrates dynamic persistence provisioning with Azure Disks. In the following snippet, the field storageClass is set to managed-csi, the storage class that provisions Azure Standard SSD locally redundant storage (LRS), with a size of 100Gi. The accessMode is set to ReadWriteOnce, as an Azure Disk can only be mounted with this access mode.
persistence:
  ## 3 - Dynamic Database Persistence Configuration
  storageClass: "managed-csi"
  size: 100Gi
  accessMode: "ReadWriteOnce"
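For reference, the dynamic configuration above is roughly equivalent to requesting a PersistentVolumeClaim like the following sketch (the claim name is hypothetical; the operator generates its own):

```yaml
# Illustrative PVC equivalent to the dynamic persistence settings above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: norman-freedb-pvc   # hypothetical name; the operator names its own claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 100Gi
```

Inspecting the PVCs the operator actually creates (kubectl get pvc) is a good way to confirm what was provisioned.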
Finally, to enable automatic storage expansion of block volumes, you must run the following command to create the required privileges:
kubectl apply -f https://raw.githubusercontent.com/oracle/oracle-database-operator/main/rbac/storage-class-rbac.yaml
If you would like to retain your volumes, use Static Persistence Provisioning instead. You will need to pre-provision a storage resource on Azure and create a PersistentVolume on your cluster.
The official Azure documentation provides a step-by-step how-to guide on statically provisioning a volume with Azure Disks and others. To use static provisioning with a pre-existing PersistentVolume and avoid dynamic provisioning, you can replace the persistence fields above with the following fields. Note how spec.persistence.datafilesVolumeName is used instead.
persistence:
  ## 3 - Static Database Persistence Configuration
  datafilesVolumeName: "my-static-pv"
  accessMode: "ReadWriteOnce"
  storageClass: ""
4 — Initializing the Oracle Database using Custom Scripts (optional)
The SIDB controller and the underlying containerized Oracle database provide a useful ability to initialize your database by running custom startup and setup scripts. For your deployment to read and access these scripts, you need to mount a PersistentVolume that contains them. Once the PersistentVolume is created, specify its name in the SIDB YAML file under persistence.scriptsVolumeName:
persistence:
  ## 3 ...
  ## 4 - Setup and Startup scripts
  scriptsVolumeName: "naberinstorage-pv"
Note that in order to enable the operator to work with persistent volumes, it is required that you run the following command to create the required privileges:
kubectl apply -f https://raw.githubusercontent.com/oracle/oracle-database-operator/main/rbac/persistent-volume-rbac.yaml
Azure’s step-by-step how-to guide on statically provisioning a File storage volume shows how to create an Azure File storage account using the Azure CLI. You can then create the PersistentVolume with the following modified YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: file.csi.azure.com
  name: "{nameOfPV}" # set to naberinstorage-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: file.csi.azure.com
    volumeHandle: "{resource-group-name}#{account-name}#{file-share-name}" # make sure this volumeHandle is unique for every identical share in the cluster
    volumeAttributes:
      shareName: "{nameOfFileShare}" # set to databasefs
    nodeStageSecretRef:
      name: "{nameOfSecret}" # set to azure-secret, created from the guide
      namespace: default
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=0
    - gid=0
    - mfsymlinks
    - cache=strict
    - nosharesock
    - nobrl # disable byte-range lock requests to the server, for applications that have challenges with POSIX locks
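The volumeHandle must be unique for every identical share mounted in the cluster; it is simply the resource group, storage account, and file share names joined with #. A small sketch with placeholder values:

```shell
# Assemble the CSI volumeHandle from its three components (placeholders).
resource_group='myResourceGroup'
account_name='naberinstorage'
file_share='databasefs'
volume_handle="${resource_group}#${account_name}#${file_share}"
echo "$volume_handle"   # myResourceGroup#naberinstorage#databasefs
```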
With your Azure storage account and an existing file share, you can use the Azure portal to create directories and upload files. You will need to create two directories: setup and startup. Under startup, you can upload the following files as an example:
-- 01_users.sql - creates users and grants privileges
alter session set container=FREEPDB1;
-- todowner
create user TODOOWNER no authentication;
grant create table to TODOOWNER;
grant create procedure, create view, create sequence to TODOOWNER;
alter user TODOOWNER quota unlimited on USERS;
-- todouser
create user TODOUSER identified by "Welcome12345" password expire;
grant create session to TODOUSER;
-- 02_tables.sql - creates tables and grants access to tables
alter session set container=FREEPDB1;
-- create table todoitem
CREATE TABLE TODOOWNER.TODOITEM (
  id NUMBER GENERATED ALWAYS AS IDENTITY,
  description VARCHAR2(4000),
  creation_ts TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
  done NUMBER(1,0) DEFAULT 0,
  PRIMARY KEY (id)
);
insert into TODOOWNER.todoitem (description) values ('My first task!');
-- grant access
grant select, insert, update, delete on TODOOWNER.TODOITEM to TODOUSER;
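The numeric prefixes on the file names are deliberate: scripts are executed in sorted file-name order, so the users in 01_users.sql exist before 02_tables.sql references them. A quick illustration of that ordering:

```shell
# The 01_/02_ prefixes enforce execution order: scripts run in
# sorted file-name order, so 01_users.sql runs before 02_tables.sql.
printf '%s\n' 02_tables.sql 01_users.sql | sort
```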
5 — Enabling the Azure Load Balancer
With containerized Oracle SIDBs, setting the field spec.loadBalancer to true will create a Service of type LoadBalancer, backed by an Azure load balancer.
## 5 - Azure Loadbalancer
loadBalancer: true
Note that when spec.loadBalancer is set to false, the SIDB controller defaults to a NodePort service. This configuration requires watch and list privileges on nodes, which you can grant with the following command:
kubectl apply -f https://raw.githubusercontent.com/oracle/oracle-database-operator/main/rbac/node-rbac.yaml
Deploying your Single Instance Database
To deploy, run apply to create the SIDB resource in your cluster.
kubectl apply -f create-sidb.yaml
Once the database is created, you can try connecting to it. The following GIF demonstrates connecting to the Oracle Database using DataGrip and shows that the users, tables and dummy data from the startup scripts were successfully created.
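You can also connect from the command line. An EZConnect descriptor can be assembled from the load balancer's external IP and the pluggable database service name (the IP below is a placeholder; FREEPDB1 is the default pluggable database in 23ai Free):

```shell
# Build an EZConnect descriptor for the exposed database (placeholder IP).
db_host='20.0.0.1'      # external IP of the LoadBalancer service
db_port=1521
db_service='FREEPDB1'   # default pluggable database in 23ai Free
ezconnect="${db_host}:${db_port}/${db_service}"
echo "sqlplus TODOUSER@${ezconnect}"
```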
There we go!
Final Words
The Oracle Database Operator for Kubernetes (OraOperator) enables you to create and manage Oracle Databases in the Oracle cloud or in a Kubernetes cluster using kubectl. Running the operator on Azure's AKS, you can deploy an Oracle Database and take advantage of it alongside your current Azure workloads.
If you are interested in other cloud providers, or in the certification of the operator, you can find what is supported in the My Oracle Support (MOS) note OCNE: Oracle Cloud Native Environment Certification And Support On Cloud Platforms And Hypervisors (Doc ID 2899157.1).
If you would like to learn more, come join our Oracle Developers channel on Slack to discuss Java, JDBC, the Oracle Database, OCI, ODSA, and other topics!
Resources
You can also find this post on Medium! Thank you for reading!
Architect and Developer Advocate, Oracle. XR, Hologram, Immersive Tech Developer.