OpenStack cloud charms deployed with MaaS and JuJu on Ubuntu 20.04 LTS server edition - Part 3

This article is the last part of the series “OpenStack cloud charms deployed with MaaS and JuJu on Ubuntu 20.04 LTS server edition”. Here we will look at configuring storage and selecting the right network interfaces for OpenStack services and external connectivity, in keeping with our preparatory steps. Thereafter, leveraging JuJu charms, we will deploy OpenStack in an unattended way.

You can find the previous two articles here and here.

Our POC OpenStack deployment spans three machines: one for the control plane/controller and two for compute nodes. So let's start!

OpenStack Controller

The cloud controller provides the central management system for OpenStack deployments. Typically, the cloud controller manages authentication and sends messaging to all the systems through a message queue.

There are a few ways to design and deploy an OpenStack controller. The architecture that fits best depends on your needs. For example, after analyzing your requirements and trade-offs, you can choose any one of the following controller architectures:

  • Standalone server (Single)
  • Active-Passive
  • Multi-node active-active
  • Distributed

If you are familiar with Pacemaker/Corosync for deploying highly available clusters/applications, it will be easier to grasp the concepts behind a highly available, fault-tolerant OpenStack controller architecture, should you opt for one. We, however, will proceed with deploying the OpenStack controller on a single server.

Compute

The OpenStack Compute service allows you to control an Infrastructure-as-a-Service (IaaS) cloud computing platform. It gives you control over instances and networks, and allows you to manage access to the cloud through users and projects.

Our OpenStack POC environment will consist of two compute nodes, with one extra disk (300 GB) added on each machine for Ceph storage. The Ceph storage will be used by the following OpenStack components:

  • Block storage devices created within Cinder for VMs will be provisioned in Ceph.
  • Nova compute will store the virtual disk images of the running VMs in Ceph.
  • Glance will store its images in Ceph.
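
These three integrations appear in the bundle as charm relations. A sketch of the relevant relation entries, using the charm names as they appear in the openstack-base bundle (exact endpoint names may differ between bundle revisions):

```yaml
relations:
  # Cinder block devices backed by Ceph (via the cinder-ceph subordinate)
  - [ cinder-ceph:ceph, ceph-mon:client ]
  # Nova/libvirt stores running VM disks in Ceph
  - [ nova-compute:ceph, ceph-mon:client ]
  # Glance keeps its image store in Ceph
  - [ glance:ceph, ceph-mon:client ]
```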

The first hard drive is used by the node operating system and the LXC containers hosting various OpenStack components. Some common storage, compute and networking components are also installed on the first drive.

Review both the disks you have added on each machine, for the node OS and for Ceph storage respectively.


Interface configuration

For practical reasons, I decided to set up OpenStack on virtual servers instead of the bare-metal servers (BMS) discussed previously, although the procedures/steps in the rest of this article remain the same if you proceed with physical servers (BMS).

Make sure you have already attached the LAN, WAN and IPMI (in case you are using BMS) network adapters to the machines, and that the machines are in the Ready state.


The DHCP service will not be configured on the WAN interface (ens224); this interface will be used as the external bridge. MaaS will manage DHCP for the other two interfaces.


Next, the PXE network (192.168.101.0/24) will be bound to the OpenStack application services using spaces. Create a space and associate it with the PXE network. Note that the PXE network is generally not used to bind application services; it is better to create a VLAN off the LAN interface (fabric-0), create a space from it, and bind that space to the OpenStack application services (East-West traffic).


Traditionally, the first network interface (ens192) is used for communication between the various cloud services, also known as East-West traffic, and the second network interface (ens224) is used for communication between resources inside the cloud and the external network, known as North-South traffic. Here the external network means the networks outside the OpenStack installation.
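
In the bundle, this division of roles surfaces in the ovn-chassis charm options. A hedged sketch (the option names come from the ovn-chassis charm; the interface and bridge names are the ones used in this POC):

```yaml
ovn-chassis:
  options:
    # North-South: external traffic leaves via the br-ex bridge on ens224
    bridge-interface-mappings: br-ex:ens224
    # physnet1 is the provider network label referenced later when
    # creating ext_net with --provider-physical-network physnet1
    ovn-bridge-mappings: physnet1:br-ex
```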

Install OpenStack

We will make use of the stable JuJu bundles that deploy OpenStack with MaaS as a backing cloud. Download the bundle and unzip it to a suitable location.

It is also possible to set up OpenStack without using JuJu bundles, installing each component one by one as described here.

A bundle is an encapsulation of a working deployment, containing all the configuration, resources and references that allow us to create or recreate an OpenStack deployment seamlessly with a single command.

$ wget https://github.com/openstack-charmers/openstack-bundles/archive/refs/heads/master.zip
$ unzip master.zip
$ cd ~/openstack-bundles-master/stable/openstack-base        

Review the bundle and adjust it as per your requirements. First, edit the data-port value to the interface (ens224) that you earmarked for the external bridge, and add an entry for the second disk drive under osd-devices. Remember to tag the machines in MaaS.

Finally, review the relations and applications sections in the deployment file (bundle.yaml) and adjust them as per your requirements.

...
...
series: focal
variables:
  openstack-origin: &openstack-origin cloud:focal-xena
  data-port: &data-port ens224
  worker-multiplier: &worker-multiplier 0.25
  osd-devices: &osd-devices /dev/sdb
  expected-osd-count: &expected-osd-count 2
  expected-mon-count: &expected-mon-count 1
machines:
  '0':
    constraints: tags=Controller-1
  '1':
    constraints: tags=Compute-1
  '2':
    constraints: tags=Compute-2
...
...        

Bind all the OpenStack application services with the space (pxe-network) we created previously.

...
...
  ceph-osd:
    annotations:
      gui-x: '1065'
      gui-y: '1540'
    charm: cs:ceph-osd-315
    num_units: 3
    options:
      osd-devices: *osd-devices
      source: *openstack-origin
    to:
    - '1'
    - '2'
    bindings:
      "": pxe-network
      ...
      ...        

Deploy OpenStack using the configuration, resources and references defined within bundle.yaml.

You can download the complete modified bundle from here.

$ cd ~/openstack-bundles-master/stable/openstack-base
$ juju add-model -c maas-controller --config default-series=focal poc
please enter password for admin on maas-controller: 
Added 'poc' model on maas-cloud/default with credential 'maas-cloud-creds' for user 'admin'
$ juju switch maas-controller:poc
maas-controller:admin/poc (no change)
$ juju deploy ./bundle.yaml
Located charm "ceph-mon" in charm-store, revision 61
Located charm "ceph-osd" in charm-store, revision 315
Located charm "ceph-radosgw" in charm-store, revision 300
Located charm "cinder" in charm-store, revision 317
...
...
- add unit placement/0 to 0/lxd/14
- add unit rabbitmq-server/0 to 0/lxd/15
- add unit vault/0 to 0/lxd/16
Deploy of bundle completed.        

Check the status of deployment.

$ juju status        

If you are using machines (virtual servers) with the manual power type, switch on the machines (Ready state); if you are using BMS with IPMI, the machines will be started automatically.

Once the machines are deployed, JuJu will start installing the OpenStack units/components. If the message section for ceph-osd says “Non-pristine devices detected, consult list-disks, zap-disk and blacklist-* actions.”, run the following JuJu commands. This happens when an OSD device is already partitioned (e.g. with an ext4 file system) and the drive contains data.

$ juju run-action --wait ceph-osd/1 zap-disk devices=/dev/sdb i-really-mean-it=yes
$ juju run-action  ceph-osd/1 add-disk osd-devices=/dev/sdb        

After a while all the application units will be active except vault, which shows the message “Vault needs to be initialized”.


Initialize Vault

Install Vault on the MaaS machine:

$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
OK 
$ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
$ sudo apt update
$ sudo apt install vault -y        

Configure the VAULT_ADDR environment variable and then initialize the vault deployment:

$ export VAULT_ADDR="https://192.168.101.31:8200"
$ vault operator init -key-shares=5 -key-threshold=3        

Each vault unit must be individually unsealed; if there are multiple vault units, repeat the unseal process below for each unit, changing the VAULT_ADDR environment variable each time to point at the individual unit. Since we initialized with -key-threshold=3, any three of the five unseal keys are enough to unseal a unit.

$ vault operator unseal Unseal_Key_1
$ vault operator unseal Unseal_Key_2
$ vault operator unseal Unseal_Key_3        
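
With multiple vault units, the per-unit repetition can be scripted. A minimal sketch that only prints the commands to run; the unit addresses and key values below are hypothetical placeholders, not taken from this deployment:

```shell
# Hypothetical vault unit addresses and unseal keys; in a real run you
# would execute the printed commands (or replace echo with eval).
units="192.168.101.31 192.168.101.32"
keys="KEY1 KEY2 KEY3"   # any 3 of the 5 keys (threshold=3)
for addr in $units; do
  for key in $keys; do
    echo "VAULT_ADDR=https://$addr:8200 vault operator unseal $key"
  done
done
```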

Authorize the vault charm:

$ export VAULT_TOKEN="Initial Root Token"
$ vault token create -ttl=10m        

You should get a response like:

Key                  Value
---                  -----
token                hvs.ehkjBUubIZTj85ZvJYY65PTB
token_accessor       akDRjYi6QDefkllCk2KSeZ2e
token_duration       10m
token_renewable      true
token_policies       ["root"]
identity_policies    []
policies             ["root"]        

This token can then be used to set up the charm's access to Vault:

$ juju run-action --wait vault/leader authorize-charm token=hvs.ehkjBUubIZTj85ZvJYY65PTB
$ juju run-action --wait vault/leader generate-root-ca        

At this point the vault unit will be ready, and if the installation goes smoothly the workload of all units will be active.


Post-installation tasks

Horizon dashboard

Install OpenStack client and set the required environment variables for the OpenStack command-line clients.

$ sudo apt install python3-openstackclient
$ source ~/openstack-bundles-master/stable/openstack-base/openrc        

Log in to the Horizon dashboard. But first fetch the IP address and password for the dashboard:

$ juju status --format=yaml openstack-dashboard | grep public-address | awk '{print $2}' | head -1
192.168.101.39
$ juju run --unit keystone/0 leader-get admin_passwd
ieZ0geizee4HaM3O        
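
If you want to sanity-check the extraction pipeline itself, you can run it against a snippet of `juju status --format=yaml` output; the sample below is an assumed, simplified shape of that output, not a real capture:

```shell
# Assumed, simplified sample of `juju status --format=yaml` output
sample='applications:
  openstack-dashboard:
    units:
      openstack-dashboard/0:
        public-address: 192.168.101.39'
# Same pipeline as above: grab the first public-address value
echo "$sample" | grep public-address | awk '{print $2}' | head -1
```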

Import cloud image

Download and import a cloud image in glance:

$ mkdir ~/cloud-images
$ cd ~/cloud-images
$ curl https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img --output ~/cloud-images/focal-amd64.img
$ openstack image create --public --container-format bare --disk-format qcow2 --file ~/cloud-images/focal-amd64.img focal-amd-64
$ wget https://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ qemu-img convert -O qcow2 cirros-0.3.4-x86_64-disk.img cirros-0.3.4-x86_64-disk.qcow2
$ openstack image create --public --container-format bare --disk-format qcow2 --file ~/cloud-images/cirros-0.3.4-x86_64-disk.qcow2 cirros-0.3.4        

Create network/subnet/provider router

$ juju config ovn-chassis bridge-interface-mappings=br-ex:ens224
$ openstack network create --external    --provider-network-type flat --provider-physical-network physnet1    ext_net
$ openstack subnet create --network ext_net --no-dhcp    --gateway 172.106.161.193 --subnet-range 172.106.161.192/26    --allocation-pool start=172.106.161.212,end=172.106.161.219    ext_subnet
$ openstack network create int_net
$ openstack subnet create --network int_net --dns-nameserver 8.8.8.8    --gateway 172.16.0.1 --subnet-range 172.16.0.0/24    --allocation-pool start=172.16.0.10,end=172.16.0.200    int_subnet
$ openstack router create provider-router
$ openstack router set --external-gateway ext_net provider-router
$ openstack router add subnet provider-router int_subnet        

Cloud Key

Create cloud keys:

$ mkdir -p ~/cloud-keys
$ ssh-keygen -q -N '' -f ~/cloud-keys/id_mykey
$ openstack keypair create --public-key ~/cloud-keys/id_mykey.pub mykey        

Create security groups

Create security groups:

for i in $(openstack security group list | awk '/default/{ print $2 }'); do
   openstack security group rule create $i --protocol icmp --remote-ip 0.0.0.0/0;
   openstack security group rule create $i --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 22;
done        

Set up protocol for console access

Set the protocol for console access in horizon dashboard:

$ juju config nova-cloud-controller console-access-protocol=novnc        

Create flavor

Create a flavor from the Horizon dashboard by navigating through Admin -> Compute -> Flavors.


Create a container

Create a container in Object storage through Project -> Object Store -> Containers


Create a Tenant

First, define a few environment variables for the tenant.

$ export TENANT=user-1
$ export PASSWORD=12345
$ export TENANT_DESC="user-1"
$ export TENANT_EMAIL="[email protected]"        

Create a project

$ openstack project create $TENANT --description $TENANT_DESC
$ TENANT_ID=$(openstack project list | awk "/\ $TENANT\ / { print \$2 }")
$ echo $TENANT_ID        
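
The awk filter used above can be checked locally against a table shaped like `openstack project list` output; the table and the project ID below are made up for illustration:

```shell
# Hypothetical `openstack project list` table (the ID is made up)
sample='+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 3f2b9a1c4d5e6f708192a3b4c5d6e7f8 | user-1 |
+----------------------------------+--------+'
TENANT=user-1
# Same awk filter as above: match the " user-1 " row, print field 2 (the ID)
echo "$sample" | awk "/\ $TENANT\ / { print \$2 }"
```

A simpler alternative, if your client supports output formatters, is `openstack project show -f value -c id $TENANT`, which prints the ID directly without any text parsing.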

Create user and group

$ openstack user create --project $TENANT --password $PASSWORD --email $TENANT_EMAIL $TENANT
$ openstack group create --description "$TENANT users" $TENANT-users        

As an admin user in Horizon, change the domain context to Default.

Click Identity -> Domains -> Default -> Set Domain Context.

Now click Identity -> Domains -> Default -> Manage Members, add user-1 as a member, choose Admin and Member as the roles, and save.


Select Projects -> user-1 -> Manage Members and add user-1 to the project members:


Log in as the tenant


At this point, if you want to add a few more units or edit the existing OpenStack deployment, add entries for the new units (or edit the bundle file) and run the bundle again. JuJu is smart enough to work out which units need to be added or removed.

It is also possible to add new components without using the bundle file, by running a set of JuJu commands in the terminal. Let's say you want to add the Ceph dashboard to the existing OpenStack deployment; the following few JuJu commands deploy the component very quickly.

Ceph dashboard

$ juju deploy ceph-dashboard
$ juju add-relation ceph-dashboard:dashboard ceph-mon:dashboard
$ juju add-relation ceph-dashboard:certificates vault:certificates
$ juju run-action --wait ceph-dashboard/0 add-user username=admin role=administrator
unit-ceph-dashboard-0:
  UnitId: ceph-dashboard/0
  id: "36"
  results:
    password: wBqKZPTV1TtQ
  status: completed
  timing:
    completed: 2023-05-31 04:09:09 +0000 UTC
    enqueued: 2023-05-31 04:09:07 +0000 UTC
    started: 2023-05-31 04:09:07 +0000 UTC        

You can now export the bundle to a yaml file.

$ juju export-bundle --filename openstack_poc.yaml        

Create instance

Create an instance by navigating through Project -> Compute -> Instances

Attach a floating IP to the instance.


Test connectivity

Project -> Compute -> Instances -> Test-1 -> Console


SSH to the instance


Building OpenStack component by component from scratch is a herculean task. Luckily, with JuJu one can design and deploy cloud services quickly and efficiently; deploying workloads to various IaaS targets has never been easier. Service modelling is yet another powerful JuJu feature that you may like to check out using its GUI.
