What is virtualization?
Fancy Wang
Helping Global Enterprises Optimize Network Performance | Ethernet Card & Switch Solutions
"Virtualization" is not a new technology, as early as the 1960s IBM mainframe system proposed this concept. At that time, the meaning of "virtualization" was limited to logically dividing the resources of the mainframe into different applications.
Through multitasking, multiple applications and processes can be run on the mainframe at the same time. Over time, the connotation of the term "virtualization" has expanded to the abstraction, definition, and reintegration of resources on hardware platforms, operating systems, storage devices, and computer network resources.
There is no strict standard for defining virtualization. Some representative definitions are given below:
● "Virtualization usually refers to a separation mechanism, that is, the separation of service requests from the services provided by the physical layer."-VMware
● "Virtualization is a logical representation of resources, it is not restricted by physical limitations."-IBM Corporation
● "Virtualization is the abstraction of physical resources and locations. IT resources such as servers, applications, desktops, storage, and networks are no longer tightly coupled with physical facilities, but are presented as logical resources. The mapping relationship between resources is created and managed."-EMC Corporation
In summary, the core of virtualization technology is the abstraction of physical resources. In terms of implementation, it hides the differences among physical-layer resources by exposing them through a common set of operations, much like a uniform interface.
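To make this idea concrete, here is a minimal sketch in Python using only the standard library. The names (BlockDevice, LocalDiskBackend, InMemoryBackend) are illustrative assumptions, not any vendor's actual API: a "guest" workload is written once against a common interface, while the physical resource behind that interface can be swapped freely.

from abc import ABC, abstractmethod


class BlockDevice(ABC):
    """Common interface: the guest sees only read/write on a logical device."""

    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...

    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...


class LocalDiskBackend(BlockDevice):
    """Backs the logical device with a file on a physical disk."""

    def __init__(self, path: str):
        self._file = open(path, "r+b")  # assumes the backing file already exists

    def read(self, offset: int, length: int) -> bytes:
        self._file.seek(offset)
        return self._file.read(length)

    def write(self, offset: int, data: bytes) -> None:
        self._file.seek(offset)
        self._file.write(data)


class InMemoryBackend(BlockDevice):
    """Backs the same logical device with RAM, e.g. for testing."""

    def __init__(self, size: int):
        self._buf = bytearray(size)

    def read(self, offset: int, length: int) -> bytes:
        return bytes(self._buf[offset:offset + length])

    def write(self, offset: int, data: bytes) -> None:
        self._buf[offset:offset + len(data)] = data


def guest_workload(disk: BlockDevice) -> bytes:
    # The guest never knows which physical resource backs the device.
    disk.write(0, b"hello, virtual world")
    return disk.read(0, 20)


print(guest_workload(InMemoryBackend(1024)))  # b'hello, virtual world'

This decoupling of the logical view from the physical backing is the same principle that lets a hypervisor present identical virtual hardware to guests regardless of what actually sits underneath.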
The emergence and development of virtualization technology are closely tied to the evolution of mainframes and to changes in server hardware costs. The 1950s through the 1970s were the golden age of the mainframe, but mainframes were expensive, and both users and manufacturers explored ways to improve hardware utilization and reduce costs. In this context, IBM's CP-67 software was the first to allow multiple applications to run on a mainframe at the same time through partitioning technology.
Although this had a significant impact on the mainframe market, this early virtualization technology could not influence the industry the way virtualization does today. As distributed computing and multi-user operating systems gradually gained popularity and hardware costs fell rapidly, low-cost servers began to emerge and mainframe virtualization lost ground. By the 1980s, most manufacturers had essentially abandoned virtualization technology, and the computer architectures developed during that period naturally included no support for it.
Entering the 1990s, the rapid development of the Windows and Linux operating systems drove continuous performance improvements in x86 processors, gradually establishing the platform as an industry standard. However, as x86-based server and desktop deployments grew, it became apparent that although the scale of server hardware kept expanding, most servers ran only a single application.
According to IDC statistics, peak CPU utilization on a typical x86 server was only 10% to 15%. Alongside this low resource utilization came rising operation and maintenance costs: power, cooling, and complex maintenance and management overhead. Because system complexity increases exponentially with system scale, IT maintenance gradually became a burden for enterprises, especially for business models that must run 24×7 without interruption.
Clearly, the problem mainframes faced in the 1960s had reappeared on x86 servers: physical server resources were underutilized, and changing business models made the situation even more complicated.
Against this backdrop, VMware brought virtualization technology to the x86 platform. In 1999, VMware released the first version of VMware Workstation, which virtualized the 32-bit x86 platform. Soon afterward, VMware released its ESX product line, establishing the virtualization landscape on x86. Xen, another virtualization platform, also developed rapidly during the same period. Xen began as an internal research project at the University of Cambridge Computer Laboratory in the late 1990s and, after receiving support from the Linux Foundation, quickly became a successful example of an open-source virtualization system.
Today, companies such as IBM, Intel, and Red Hat are all members of the Xen.org open-source community. Microsoft, the traditional desktop operating system giant, also joined the virtualization camp in 2008, launching Hyper-V alongside Windows Server 2008. Hardware vendors such as Intel and AMD have contributed hardware-assisted virtualization support (Intel VT-x and AMD-V), further maturing the technology.
Asterfusion Data Technologies is committed to providing enterprise users with services ranging from technical consultation to one-stop deployment, helping them enjoy the technological dividends of the Internet, open source, and open networking.