KVM Live Migration with IBM Spectrum Scale (GPFS)

KVM (Kernel-based Virtual Machine) is a popular open-source virtualization technology for Linux. GPFS (General Parallel File System), now known as IBM Spectrum Scale, is a high-performance clustered file system often used in environments where high availability and scalability are essential.

Performing live migration of KVM virtual machines with GPFS involves moving a running VM from one physical host to another without causing downtime.


High-Level Design (HLD)


Install KVM on x86 Red Hat Enterprise Linux 8/9

Environment

Operating System: Red Hat Enterprise Linux 8/9

KVM Hosts:

kvm1 (192.168.100.100)

kvm2 (192.168.100.101)

Storage:

A 100 GB shared storage device mounted at /vms on both hosts through IBM Spectrum Scale

Check for CPU virtualization support:

grep -E '(vmx|svm)' /proc/cpuinfo         
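The raw grep above prints every matching cpuinfo line, which is noisy on multi-core hosts. A minimal pass/fail sketch of the same check (vmx is Intel VT-x, svm is AMD-V):

```shell
# Report whether the CPU exposes hardware virtualization flags.
if grep -qE '(vmx|svm)' /proc/cpuinfo; then
    echo "virtualization: supported"
else
    echo "virtualization: not supported"
fi
```

If the result is "not supported", enable virtualization in the BIOS/UEFI before proceeding.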

Installation Steps

1. Prerequisites:

  1. Ensure active Red Hat subscriptions on both hosts.
  2. Verify available disk space using lsblk.

2. Install KVM packages:

dnf -y install qemu-kvm qemu-img libvirt libvirt-client virt-install libguestfs-tools virt-top virt-manager cockpit cockpit-machines        

3. Enable and Start Services:

systemctl enable --now libvirtd cockpit.socket        

Enable IP forwarding and adjust reverse-path filtering:

echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf        
echo "net.ipv4.conf.all.rp_filter = 2"|sudo tee -a  /etc/sysctl.conf        
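The appended settings only take effect after a reload. A quick sketch to apply and verify them, assuming the keys were added to /etc/sysctl.conf as shown:

```shell
# Reload sysctl settings (root required) and read back the forwarding flag.
sudo sysctl -p
sysctl -n net.ipv4.ip_forward    # prints 1 once the setting is active
```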

4. GPFS Shared Storage Installation and Configuration:

Unzip the installation file:

unzip Storage_Scale_Developer-x.x.x.x-x86_64-Linux.zip        
chmod +x Storage_Scale_*        

Run the self-extracting installer:

./Storage_Scale_Developer-x.x.x.x-x86_64-Linux-install        

Make sure Secure Boot is disabled before GPFS installation:

mokutil --sb-state        

Install prerequisite RPMs:

yum -y install kernel-devel gcc-c++ cpp gcc binutils elfutils-libelf-devel ksh         

Install the necessary GPFS RPM packages:

cd /usr/lpp/mmfs/5.1.9.1/gpfs_rpms/        
rpm -ivh gpfs.base*.rpm gpfs.gpl*rpm gpfs.license*rpm gpfs.gskit*rpm gpfs.msg*rpm gpfs.compression*rpm gpfs.adv*rpm gpfs.crypto*rpm        

Add hostnames to /etc/hosts:

192.168.100.100 kvm1
192.168.100.101 kvm2        

Generate an SSH key on both KVM hosts:

ssh-keygen        

Copy the key to each host to enable passwordless SSH:

ssh-copy-id root@kvm1
ssh-copy-id root@kvm2         

Add the GPFS command path to /root/.bashrc:

export PATH=$PATH:/usr/lpp/mmfs/bin        

Create the cluster with the specified nodes and parameters:

mmcrcluster -N kvm1:quorum,kvm2:quorum -p kvm1 -s kvm2 -r /usr/bin/ssh -R /usr/bin/scp -C kvmvms        

Check the GPFS cluster:

mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         kvm.kvm1
  GPFS cluster id:           14744800261344692204
  GPFS UID domain:           kvm.kvm1
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
  Repository type:           CCR

 Node  Daemon node name  IP address       Admin node name  Designation
-----------------------------------------------------------------------
   1   kvm1              192.168.100.100  kvm1             quorum
   2   kvm2              192.168.100.101  kvm2             quorum        

Accept License:

mmchlicense server --accept -N kvm1,kvm2         

Start GPFS Cluster:

mmstartup -a         

Check node status:

 mmgetstate -a

 Node number  Node name  GPFS state
-------------------------------------
           1  kvm1       active
           2  kvm2       active        

Identify the shared disk devices (sdc and sdd here):

lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   50G  0 disk
├─sda1          8:1    0  600M  0 part /boot/efi
├─sda2          8:2    0    1G  0 part /boot
└─sda3          8:3    0 48.4G  0 part
  ├─rhel-root 253:0    0 43.4G  0 lvm  /
  └─rhel-swap 253:1    0    5G  0 lvm  [SWAP]
sdb             8:16   0  100G  0 disk
└─vmsvg-vmslv 253:2    0  100G  0 lvm  /var/kvm
sdc             8:32   0  100G  0 disk
sdd             8:48   0    2G  0 disk
sr0            11:0    1 11.7G  0 rom        

Network Shared Disk (NSD) Setup:

kvm.nsd
%nsd:
       device=/dev/sdc
       nsd=kvmvms1
       servers=kvm1,kvm2
       usage=dataAndMetadata
       failureGroup=4001
       pool=system        
tie.nsd
%nsd:
       device=/dev/sdd
       nsd=kvmvms2
       servers=kvm1,kvm2
       usage=dataAndMetadata
       failureGroup=4001
       pool=system        
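The stanza files above can be generated directly from the shell; for example, kvm.nsd (field values taken from the listing above):

```shell
# Write the kvm.nsd stanza file consumed by mmcrnsd in the next step.
cat > kvm.nsd <<'EOF'
%nsd:
       device=/dev/sdc
       nsd=kvmvms1
       servers=kvm1,kvm2
       usage=dataAndMetadata
       failureGroup=4001
       pool=system
EOF
grep -c '^%nsd:' kvm.nsd    # prints 1 when the file contains one stanza
```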

Create NSDs:

mmcrnsd -F kvm.nsd
mmcrnsd -F tie.nsd

mmcrnsd: Processing disk sdc
mmcrnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.
mmcrnsd: Processing disk sdd
mmcrnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.        

Check NSDs:

 mmlsnsd

 File system   Disk name       NSD servers
--------------------------------------------------------------------------
 (free disk)   kvmvms1         kvm1,kvm2
 (free disk)   kvmvms2         kvm1,kvm2        


Check GPFS Configuration:

mmlsconfig
Configuration data for cluster kvm.kvm1:
----------------------------------------
clusterName kvm.kvm1
clusterId 14744800261344692204
autoload no
dmapiFileHandleSize 32
minReleaseLevel 5.1.9.0
tscCmdAllowRemoteConnections no
ccrEnabled yes
cipherList AUTHONLY
sdrNotifyAuthEnabled yes
adminMode central


Create the GPFS file system:

mmcrfs gpfs001 -F kvm.nsd -Q no -A yes -T /vms -B 256k -j cluster -n 32 -m 1 -M 2 -R 2 -r 1 -v no        

Check NSDs:

  mmlsnsd

 File system   Disk name       NSD servers
--------------------------------------------------------------------------
 gpfs001       kvmvms1         kvm1,kvm2
 (free disk)   kvmvms2         kvm1,kvm2        

Configure a tiebreaker disk (required for clusters with fewer than three quorum nodes):

mmchconfig tiebreakerDisks="kvmvms2"
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

mmlsconfig
Configuration data for cluster kvm.kvm1:
----------------------------------------
clusterName kvm.kvm1
clusterId 14744800261344692204
autoload no
dmapiFileHandleSize 32
minReleaseLevel 5.1.9.0
tscCmdAllowRemoteConnections no
ccrEnabled yes
cipherList AUTHONLY
sdrNotifyAuthEnabled yes
tiebreakerDisks kvmvms2
adminMode central

File systems in cluster kvm.kvm1:
---------------------------------
/dev/gpfs001        


Mount the GPFS file system on all cluster nodes:

mmmount /vms -a        

Verify that the GPFS file system is mounted:

mmlsmount all
File system gpfs001 is mounted on 2 nodes.
[root@kvm2 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 3.8G     0  3.8G   0% /dev
tmpfs                    3.8G     0  3.8G   0% /dev/shm
tmpfs                    3.8G   18M  3.8G   1% /run
tmpfs                    3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/rhel-root     44G   12G   32G  28% /
/dev/sda2               1014M  300M  715M  30% /boot
/dev/sda1                599M  5.8M  594M   1% /boot/efi
/dev/mapper/vmsvg-vmslv  100G   11G   90G  11% /var/kvm
tmpfs                    769M  4.0K  769M   1% /run/user/0
gpfs001                  100G  623M  100G   1% /vms        

5. Configure Networking for KVM:

nmcli con add type bridge ifname bridge0 con-name bridge0        
nmcli con modify ens192 master bridge0  # Replace ens192 with the actual NIC name        
nmcli con up bridge0        
systemctl restart NetworkManager        
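To confirm the bridge was created and the NIC is attached (bridge0 and ens192 match the commands above):

```shell
# List bridge devices; bridge0 should appear once the connection is up.
ip -o link show type bridge
# List interfaces enslaved to bridge0; the NIC (e.g. ens192) should appear here.
ip -o link show master bridge0
```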

6. Verify Network Configuration:

   ip addr show        

7. Manage Virtual Machines:

* Use virt-manager or virsh commands to create, manage, and migrate VMs.

virsh list        

8. Live Migration:

Note: Create a VM to use for the live migration test.
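A test VM can be created with virt-install, with its disk placed on the shared /vms file system so both hosts can reach it. This is a sketch only: the ISO path, disk size, memory, and os-variant are assumptions to adapt to your environment.

```shell
# Create a CentOS Stream 8 guest whose disk lives on the shared GPFS mount.
virt-install \
  --name centos-st8 \
  --memory 2048 --vcpus 2 \
  --disk path=/vms/centos-st8.qcow2,size=20 \
  --cdrom /vms/iso/CentOS-Stream-8.iso \
  --network bridge=bridge0 \
  --os-variant centos-stream8 \
  --graphics vnc
```

Because the qcow2 image sits on GPFS, no storage copy is needed during migration; only the VM's memory state moves between hosts.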

virsh list

 Id   Name         State
----------------------------
 1    centos-st8   running        

virsh migrate --live centos-st8 qemu+ssh://kvm2/system        
virsh list

 Id   Name   State
--------------------

The empty list on kvm1 confirms that the VM has been live-migrated to the other KVM host.
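The result can also be confirmed from kvm1 by querying the destination host's libvirt daemon over SSH, reusing the same connection URI as the migration command:

```shell
# The guest should now be listed as running on kvm2.
virsh --connect qemu+ssh://kvm2/system list
```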

