Discover the Power of IBM Power - PowerVM Virtualization
Michal Wiktorek
Unix Systems Administrator | AIX/Linux | IBM Power | Santander Bank Polska
Introduction
When I first encountered Power servers in 2011, one of the things that particularly intrigued me was the virtualization technology that allowed dozens of production operating systems to run concurrently on a single physical server. I was aware that virtualization solutions such as VMware vSphere existed for the x86 architecture, and I myself used VirtualBox on my home computer. What surprised me, however, was that IBM Power servers did not require the installation of an operating system to serve as a Hypervisor. In this technology, the PowerVM Hypervisor is not installed as software on a hard disk; instead, it is part of the server's firmware and is closely integrated with the hardware. PowerVM operates at a lower level than the operating system, which allows it to control direct access to and allocation of physical resources such as processors, memory, and I/O devices.
I was also impressed that, on this platform, it was completely normal to add and remove CPU and RAM resources without restarting the operating system (even on production systems), and that running systems could be moved between physical servers without interruption (Live Partition Mobility).
Since then, many years have passed, and competing virtualization technologies on x86 have evolved significantly. Containerization has also gained great popularity, and widespread migrations from on-premise infrastructure to the cloud have made the concept of virtualization somewhat less visible. For systems running in the cloud, clients often do not know, and do not need to know, which technology operates "underneath", at a level below the operating system.
How does PowerVM virtualization relate to the cloud and container technologies today, and what benefits does it bring? I would like to discuss this among other topics in this article.
In this text, I have tried to present PowerVM in a simple way, so that it is also understandable to those not familiar with IBM technologies - so I ask the experts to forgive the oversimplifications :)
Any fool can explain complicated things in a complicated way :)
Virtualization
Although the concept of virtualization became particularly fashionable in IT after 2000, when companies massively moved systems from bare-metal servers to virtual machines (gaining immense management flexibility and freeing up a lot of space in server rooms), the idea of running multiple operating systems on the same CPU, as well as the term "Hypervisor", actually appeared with mainframe computers, specifically the IBM System/360, at the turn of the 1960s and 1970s.
In the context of server virtualization, a Hypervisor is software that allows multiple instances of operating systems to run on a single physical server, sharing its resources. These operating system instances are commonly referred to as "Virtual Machines" (VMs), although they may have different corresponding names depending on the virtualization technology used. For example, the equivalent of a Virtual Machine for servers based on SPARC architecture is LDOM (Logical Domain), and for HP servers based on PA-RISC or Integrity, it was vPAR (Virtual Partition).
In the case of IBM Power servers, the "heart" of PowerVM virtualization is the Power Hypervisor (PHYP), which allows the physical resources of the server to be shared among Logical Partitions (LPARs). Currently, in the Power world, the terms LPAR and VM are used interchangeably. In this context, these terms can be considered synonymous.
Some Power server models, instead of using PowerVM virtualization, support PowerKVM, which is the equivalent of KVM (Kernel-based Virtual Machine) virtualization known from the x86 architecture, but this solution is dedicated only to Linux and is not as advanced as PowerVM.
The resources that PHYP allocates to an LPAR are the processor, RAM, and I/O devices (that is, access to disk space and the network).
For managing LPARs as well as Power servers, the Hardware Management Console (HMC) is used, which can function as an external physical device or as a Virtual Machine (vHMC). Administrative access to the console is available through a CLI interface (via ssh), GUI, or API. Most administrative operations are performed using the functionality of Dynamic Logical Partitioning (DLPAR), which allows for the dynamic allocation and removal of resources such as processors, memory, and I/O components, without the need to stop the system.
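To give a feel for what a DLPAR operation looks like in practice, below is a minimal sketch that runs an HMC CLI command over ssh to add memory to a running LPAR. The HMC address, user, managed system, and LPAR names are placeholders for illustration; the chhwres/lshwres commands are the standard HMC CLI for this, but verify the exact syntax against your HMC level before using anything like this.

```python
# Hedged sketch: add 1024 MB of RAM to a running LPAR via the HMC CLI (DLPAR).
# Host, user, managed-system and LPAR names are placeholders for illustration.
import subprocess

HMC_HOST = "hmc01.example.com"      # hypothetical HMC address
HMC_USER = "hscroot"
MANAGED_SYSTEM = "Power10-Srv1"     # hypothetical managed system name
LPAR_NAME = "prod-db01"             # hypothetical LPAR name

def dlpar_add_memory(megabytes: int) -> None:
    """Add memory to a running LPAR without stopping it (DLPAR)."""
    # chhwres: -r mem (resource type), -o a (add), -q quantity in MB
    cmd = (
        f"chhwres -r mem -m {MANAGED_SYSTEM} -o a "
        f"-p {LPAR_NAME} -q {megabytes}"
    )
    subprocess.run(["ssh", f"{HMC_USER}@{HMC_HOST}", cmd], check=True)

def show_lpar_memory() -> None:
    """List the current memory settings of all LPARs on the managed system."""
    cmd = f"lshwres -r mem -m {MANAGED_SYSTEM} --level lpar"
    subprocess.run(["ssh", f"{HMC_USER}@{HMC_HOST}", cmd], check=True)

if __name__ == "__main__":
    show_lpar_memory()
    dlpar_add_memory(1024)   # succeeds only within the profile's min/max limits
    show_lpar_memory()
```

The same kind of change can of course be made interactively from the HMC GUI; the CLI is simply convenient for automation.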
A very useful feature worth mentioning is Live Partition Mobility (LPM), which allows the uninterrupted movement of LPARs between physical servers (this feature can be considered equivalent to the vMotion function known from the VMware vSphere platform).
Support for Multiple Operating Systems
In the case of PowerVM virtualization, using LPARs allows different operating systems to run independently side by side, such as:
- AIX
- IBM i
- Linux (e.g. Red Hat Enterprise Linux, SUSE Linux Enterprise Server)
It is important to note that AIX and IBM i are developed exclusively for Power servers - they cannot run on any other processor architecture in a manner officially supported by IBM.
Processor Virtualization
The IBM Power processor is one of the main differences compared to servers based on the x86 architecture and is simultaneously one of its greatest advantages.
It is often stated that Power processors are twice as efficient as Intel's, but I think it is important to note that this depends heavily on how the specific software was compiled, how well it exploits multithreading, the type of workload, and many other factors, which can make the performance difference larger or smaller.
Unfortunately, performance comparisons available online are often conducted to support the thesis of a particular vendor. I recommend paying special attention to whether the comparison is core vs. core rather than core vs. socket, as results are easy to manipulate otherwise - on the Power platform, the term "processor" refers to a core, not a socket.
In my opinion, if possible, it is advisable to run comparative tests for your specific case as part of a Proof of Concept, for instance on hardware borrowed from a supplier or partner, and to compare performance on a realistic workload such as a real database load. I also recommend checking how the virtualization platform behaves under heavy load. I remember situations where a Power6 server operated at 90-100% load with no performance drop noticeable to the business, while an x86 server at 55-60% load required migrating virtual machines off the physical server because performance was so poor it resulted in service unavailability. Of course, this is merely anecdotal evidence based on my own observations and relates to older generations of processors. Unfortunately, I have not found any reliable comparative tests on the internet that I could refer to (if you have found such tests, I encourage you to share them in the comments to the article).
It is important to note that both dedicated processors (mapping whole physical processor cores) and Micro-Partitioning (shared processor partitions, where processor time is divided into 10 ms "time slices" - or, to put it simply, the method of mapping virtual processors to physical ones in PowerVM) are recognized by many software vendors. This recognition allows licensing costs to be based on the number of processors assigned only to a specific LPAR or processor pool. In contrast, on x86 servers it is often necessary to pay for licenses for all the CPUs of the physical server.
The aforementioned processor pools (Shared Processor Pools - SPP) are a feature not found on other virtualization platforms. They allow selected LPARs to be enclosed within a pool of shared processors, which can significantly reduce software licensing costs. For example, if 4 LPARs each have 3 vCPUs but are enclosed in a processor pool that limits utilization to 5 vCPUs, you pay for licenses for 5 vCPUs, not for the 12 vCPUs assigned to the LPARs in total. The specific licensing costs, however, also depend on how the software vendor calculates licenses.
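To make the arithmetic concrete, here is a small worked calculation. It is a simplified model only; real licensing terms depend on the vendor's metric (e.g. sub-capacity rules):

```python
# Simplified illustration of the licensing example above: licensable cores are
# capped by the Shared Processor Pool limit, not by the sum of LPAR vCPUs.
# Real vendor metrics (e.g. sub-capacity licensing) may differ.
lpar_vcpus = [3, 3, 3, 3]     # four LPARs, 3 virtual processors each
pool_limit = 5                # maximum processing capacity of the shared pool

without_pool = sum(lpar_vcpus)                 # 12 cores to license
with_pool = min(sum(lpar_vcpus), pool_limit)   # capped at 5 cores

print(f"Licensable cores without a pool cap: {without_pool}")
print(f"Licensable cores with a 5-core pool: {with_pool}")
```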
In the world of IBM Power servers, the term "processor" refers to a processor core. There are two CPU-related modes in which an LPAR can operate:
- Dedicated (assignment of a number of dedicated processors)
- Shared (assignment of Processing Units and the number of Virtual Processors)
CPUs, both in shared and dedicated modes, can be dynamically added to and removed from LPARs without the need to stop the system (as long as the settings of the LPAR profile’s minimum and maximum allow it).
Dedicated processors can only be assigned as whole numbers. There is an option to enable Donating mode, which donates unused processor cycles to the pool while the LPAR is active, and a separate option covering the situation when the LPAR is powered off. Dedicated processors are usually used where exceptionally high performance is required.
Shared processors offer much greater flexibility than dedicated ones. In this case, two values are set for the LPAR:
- Processing Units - the guaranteed (entitled) share of physical processor capacity, which can be assigned in fractions of a core
- Virtual Processors - the number of processors presented to the operating system, across which the entitled capacity is spread
If an LPAR utilizes all the available processor cycles within its Processing Unit and operates in uncapped mode, it has the ability to utilize additional processor cycles if they are available in the pool. This is particularly useful for handling usage peaks, for example, during times when the system is subjected to unusually high traffic. In cases where multiple LPARs are loaded beyond their Processing Unit values, they compete for CPU resources, and processor cycles are distributed taking into account the set weight of the LPARs.
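The sketch below is a deliberately simplified model of how spare pool capacity can be thought of as being shared among uncapped LPARs in proportion to their weights; the real Power Hypervisor dispatcher works in 10 ms dispatch windows and is considerably more sophisticated. The LPAR names, demands, and weights are made up for illustration.

```python
# Simplified model of uncapped capacity distribution: spare processing units in
# the pool are shared among uncapped LPARs in proportion to their weights.
# The real Power Hypervisor dispatcher (10 ms dispatch windows) is more complex.

def distribute_spare_capacity(spare_pu: float, demands: dict, weights: dict) -> dict:
    """Split spare processing units among LPARs that still want CPU."""
    total_weight = sum(weights[name] for name in demands)
    extra = {}
    for name, wanted in demands.items():
        share = spare_pu * weights[name] / total_weight
        extra[name] = min(share, wanted)   # an LPAR never gets more than it asks for
    return extra

# Two LPARs have already consumed their entitlement and still want more CPU.
demands = {"lparA": 1.0, "lparB": 1.0}    # additional processing units wanted
weights = {"lparA": 128, "lparB": 64}     # uncapped weights set on the HMC
print(distribute_spare_capacity(1.5, demands, weights))
# -> lparA receives 1.0 and lparB 0.5 of the spare capacity in this toy model
```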
By default, all physical processors that are not dedicated to LPARs contribute to the shared processor pool. Unused cycles from the unutilized Processing Units assigned to LPARs are also donated to the pool. This provides significant load balancing capabilities and allows for much more efficient "packing" of the physical server with LPARs, which without virtualization would require much more hardware.
The advantages of sharing processors can be greater the more the LPARs differ in their CPU utilization characteristics.
For example, if one system is primarily busy from 10:00 AM to 2:00 PM and another system on the same physical server experiences peak loads from 9:00 PM to 11:00 PM, they can utilize the same CPU resources. Without virtualization, significantly more resources would be required to handle the load of both systems.
Personally, I once conducted a comparative test between a server with PowerVM and a server using virtualization from a competing vendor. It turned out that to handle the workload of a Power server with 20 CPUs under its then-current load from the active environments, three of the competitor's servers would have been necessary, even though each compared server had a significantly larger number of cores (32). This was because the competitor's virtualization did not support processor sharing, and each virtual machine would have had to have processors assigned exclusively.
In the IT world, many systems have been over-scaled, and in reality their needs are significantly lower than anticipated at the start of the implementation project. There are also systems that are heavily utilized, for example, only once a month, and get bored the rest of the time. Virtualization allows these unused resources to be reclaimed and used where they are really needed.
The screenshot below shows an example of CPU utilization for an LPAR that, thanks to operating in uncapped mode, could utilize additional processor cycles from the pool during a sudden increase in load:
Simultaneous Multithreading with SMT-8
This is not actually a feature of PowerVM itself but of the Power processor, yet I thought it was worth mentioning.
What particularly distinguishes Power servers is their multithreading capability and the ability to run the system in the following modes:
- ST (single thread)
- SMT2 (2 threads per core)
- SMT4 (4 threads per core)
- SMT8 (8 threads per core)
x86 processors support two threads per core (Hyper-Threading in the case of Intel), while Power processors, starting from the Power8 generation introduced 10 years ago, can handle up to 8 threads per core (8 logical processors per core).
For example, a Linux distribution such as Red Hat Enterprise Linux running on a Power server will have the capability to operate with 8 threads per core. The same system, in the same version but compiled for x86, will only allow for operation with 2 threads.
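As a quick illustration of what this means for the operating system, the number of logical processors the OS sees is simply cores multiplied by the SMT mode (on Linux on Power the mode can be checked or changed with ppc64_cpu --smt, on AIX with smtctl); the core count below is just an example value:

```python
# Logical processors visible to the OS = activated cores * SMT threads per core.
# On Linux on Power the current mode can be inspected with `ppc64_cpu --smt`,
# on AIX with `smtctl`. The core count is an arbitrary example.
cores = 12
for smt in (1, 2, 4, 8):
    print(f"{cores} cores with SMT-{smt}: {cores * smt} logical processors")
```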
The benefits of using multithreading potential largely depend on how the software is written. However, in cases where the multithreading capability of a system is crucial, the Power platform has the most to offer.
The graphic below illustrates the evolution of the Power processor over the years. Work is currently underway on the Power11 processor.
Memory
RAM, like other resources, can be dynamically added to and removed from an LPAR - interestingly, also without interruption and in a manner that is safe for the running system (as long as the memory being removed has not been allocated by application processes at the operating system level).
It's worth noting that in Power servers, you can use features such as:
AMM (Active Memory Mirroring) - This allows memory to be mirrored to protect continuous system operation in the event of a failure of a physical DIMM memory module.
AME (Active Memory Expansion) - This is an on-the-fly memory compression mechanism that can be used for AIX systems. More information at the link: https://www.ibm.com/docs/en/aix/7.3?topic=management-active-memory-expansion-ame
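A quick back-of-the-envelope illustration of the AME expansion factor follows; the figures are made up, and the compression actually achievable depends on the workload (on AIX the amepat tool can be used to estimate a realistic factor):

```python
# Back-of-the-envelope view of AME: with an expansion factor configured for the
# LPAR, AIX sees more memory than is physically allocated; the difference is
# made up by in-memory compression. Achievable ratios are workload-dependent;
# the amepat tool on AIX helps estimate a realistic factor.
physical_gb = 40
for factor in (1.0, 1.25, 1.5):
    effective_gb = physical_gb * factor
    print(f"factor {factor}: {physical_gb} GB physical -> {effective_gb:.0f} GB seen by AIX")
```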
Formerly, Power servers allowed the use of AMS (Active Memory Sharing) technology, which enabled the sharing and deduplication of RAM between LPARs. This feature is no longer available on Power10 servers, and I suspect it will not be developed further due to its relatively low popularity among customers. Personally, I have used it in the past; while the technology offered interesting possibilities, the problem lay on the side of applications and databases, which are generally unwilling to release RAM they have allocated at the operating system level.
I/O Virtualization
Every physical server has a limited number of I/O slots that can be used for Fibre-Channel or Ethernet cards. Physical cards or adapters can be dedicated to specific LPARs, and while allocating resources exclusively offers benefits in the form of higher performance, the flexibility of such a solution is low compared to virtual resources.
How can I/O be virtualized in the case of PowerVM?
The answer is the Virtual I/O Server (VIOS) - a special-purpose partition that owns the physical adapters and presents virtual ones to client LPARs. VIOS allows both storage and network adapters to be virtualized, significantly reducing the number of physical cards needed in the server.
In Power servers, a dual-VIOS configuration is commonly used, meaning two VIOS instances that are redundant to each other. It is also possible to use just one VIOS, or more than two, but a configuration with two is, in my opinion, a good compromise between simplicity and a decent level of redundancy.
The ability to use two servers for I/O virtualization is something that distinguishes PowerVM from x86 virtualization, where a single operating system is responsible for sharing I/O devices and its failure affects all systems running on the physical server. I experienced this painfully when a routine scan for new disks on an x86 server hit a bug that crashed the entire server and made production databases unavailable.
In a typical Dual VIOS configuration, VIOS restarts and upgrades can be performed with minimal impact on client LPARs (in the vast majority of cases, the impact on applications and databases is unnoticeable).
Storage
In addition to assigning dedicated Fibre-Channel cards to LPARs, access to disk storage can be provided through virtualization, typically using:
- vSCSI (virtual SCSI) - the VIOS owns the physical adapters and backing devices and exports them to client LPARs as virtual SCSI devices
- NPIV (N_Port ID Virtualization) - the VIOS shares its physical Fibre-Channel ports, while each client LPAR receives its own virtual Fibre-Channel adapter with its own WWPNs, visible directly to the SAN
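As a small practical sketch (the host names are placeholders), the script below lists the vSCSI and NPIV mappings on both VIOS of a dual-VIOS pair, which is a quick way to check that a client LPAR really has redundant paths. lsmap is the standard command in the VIOS restricted shell, but verify the options against your VIOS level.

```python
# Hedged sketch: list vSCSI and NPIV mappings on both VIOS of a dual-VIOS pair
# to confirm that client LPARs have redundant I/O paths.
# Host names are placeholders; lsmap runs in the padmin restricted shell.
import subprocess

VIOS_SERVERS = ["vios1.example.com", "vios2.example.com"]   # hypothetical names

def show_mappings(vios_host: str) -> None:
    """Print vSCSI and NPIV mappings reported by one VIOS."""
    for args in ("lsmap -all", "lsmap -all -npiv"):
        print(f"--- {vios_host}: {args} ---")
        subprocess.run(["ssh", f"padmin@{vios_host}", args], check=False)

if __name__ == "__main__":
    for vios in VIOS_SERVERS:
        show_mappings(vios)
```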
Network
Virtualization of access to physical network adapters is typically implemented using the following technologies:
- Virtual Ethernet adapters combined with a Shared Ethernet Adapter (SEA) on the VIOS, which bridges the Hypervisor's internal virtual switch to the physical network
- SR-IOV, which allows a single physical adapter to be shared directly by multiple LPARs (optionally through vNIC adapters, which preserve the ability to use Live Partition Mobility)
Virtual Machines vs Containers
It’s common to encounter comparisons between virtual machines and containers, and even discussions about which is better.
In my opinion, both solutions have their place in IT and offer different benefits, so trying to prove the superiority of one over the other is largely pointless. It’s important to use the right tools in the areas where they perform best.
Virtual machines and containers do not have to be mutually exclusive - they can coexist perfectly well. There are cases where it makes sense to use containers directly on bare-metal, but there are also many benefits to running containers within a virtual machine, thus combining the advantages of both solutions.
Containers share the host operating system’s kernel, which makes isolation and security separation a significant challenge, whereas a virtual machine provides full isolation of the operating system. On the other hand, virtual machines lack the lightness and startup speed of containers.
In the case of AIX systems, long before the containerization trend emerged, IBM introduced WPARs (Workload Partitions), which can be seen as an equivalent of containers, although they are not used very often by customers (from my own experience, I can say they have considerable potential when it comes to testing code within DevOps).
In the Linux world, it's important to note that the Red Hat OpenShift Container Platform can also be installed on LPARs within PowerVM, thereby benefiting from the high performance of the Power architecture among many other advantages.
PowerVM vs Cloud
I may disappoint someone with this statement, but the cloud is also made up of physical servers :)
With migrations from on-premise infrastructure models to the cloud, many people have stopped worrying about what is "underneath", but just because something is not visible does not mean it does not exist.
The previously mentioned capabilities of PowerVM related to flexibility, scalability, high availability, and resource sharing are an excellent foundation for building cloud infrastructure.
PowerVM supports the operation of private clouds in on-premise infrastructure, especially when using the IBM PowerVC for Private Cloud product. As for the public cloud, PowerVM is the basis for IBM PowerVS and services from other well-known providers offering AIX, IBM i or Linux systems in the ppc64le architecture.
Summary
I hope the text was interesting and introduced you to the topic of PowerVM virtualization.
During my professional career, I have had experience with various virtualization platforms, but it was the PowerVM technology that has earned my greatest respect.
I realize that the opportunities to work with this type of virtualization are practically limited to large companies that use IBM Power servers, and this is also why PowerVM technology is not as well-known as virtualization platforms for x86.
However, despite its niche market, PowerVM still offers a very high level of virtualization capability, and further innovations and integrations with other technologies can continue to enhance its attractiveness.
It is worth observing how the evolution of business needs might influence the future of PowerVM and similar specialized virtualization solutions.