Virtualization Made Clear for Recruiters. Part 1: Hypervisors

Hello everyone,

We're excited to share another piece of content with you. In one of our previous articles, we covered the topic of DevOps for recruiters. Today, we're shifting our focus to another important subject related to it — Virtualization. We'll explain this concept in easy-to-understand terms and give a brief overview of how it has evolved over time. Our aim is to ensure that this information is easily accessible, while keeping in mind that this article is primarily intended for technical recruiters.

First, we'll start with Hardware Virtualization, and then we'll explore how this technology has evolved to tackle more complex tasks. These related technologies include Clouds and Cloud Providers, Containerization and Microservices, and Serverless Computing.

It's important to note that these technologies are often used together by software companies. Some tasks can be solved easily with just one of them, while others require the help of multiple inventions. They don't replace each other, but rather complement each other.

So, let's break everything down step by step:

First, let's take a trip back in time. Virtualization technologies first emerged in the late 1960s and early 1970s and were initially developed for mainframe computers. In recent years, as digitalization has become more prevalent in the B2B and B2C sectors, virtualization has gained popularity and found its way into modern applications, which increasingly run as software delivered over the internet rather than as standalone programs.

What caused this technology to develop so rapidly and become widely adopted?

It all started with servers. In the past, each application and database needed its own server, which meant that companies had to procure their own servers or even entire data centers to ensure proper protection and resilience. The underlying concept is simple: your program must run on a computer with an operating system, and installing multiple programs on the same system increases the risk that a crash in one of them disrupts the others. Not to mention, if someone gains unauthorized access to such a system, they can reach the data and every other program running on it.

You may be thinking, "Why not just buy/build a separate computer with an operating system for each individual program or product?" Although this approach was once used, it isn't practical for businesses as it entails:

- High costs – servers are expensive hardware, and purchasing a new one for each new service adds up quickly;

- Increased energy consumption and inefficient use of available computing resources;

- Complex network infrastructure, which requires expensive maintenance;

- Demanding and pricey backup procedures;

- More equipment means higher failure rates.

Let's imagine a comparable situation from everyday life, using something more relatable than the usual technology and computer analogies:

Picture this: you have five people who need their own apartments to live comfortably. But buying separate apartments is too expensive and inefficient. Instead, you could get a bigger apartment and assign each tenant to a room. However, this solution isn't very reliable and can significantly lower the quality of life for everyone involved. It could even compromise the safety of the apartment and its residents if someone brings in uninvited guests and hosts a party.
In an ideal world, we could use magic to divide a large apartment into several isolated spaces and ensure that each tenant believes they're the only one living in their assigned area. They wouldn't cross paths with anyone else, nor compete for shared resources like the entrance door, bathroom, or kitchen. Alas, we don't have a wizard to do this for us.

But in the world of technology, there's a solution that works similarly to magic. It's called a Hypervisor.

Essentially, a hypervisor is software (sometimes assisted by dedicated hardware features) that allows multiple operating systems to run on the same host computer at the same time. Each isolated, emulated environment is called a Virtual Machine (VM).

Thanks to virtualization technologies, different operating systems can coexist within the same server. All the necessary hardware is emulated within each virtual machine, so the guest operating system behaves as if it were running on its own physical server.
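For the more technically curious: even though the guest operating system thinks it sits on real hardware, the emulated hardware usually carries the hypervisor vendor's fingerprint. The short Python sketch below is purely illustrative and assumes a Linux guest that exposes standard DMI information under /sys/class/dmi/id; it simply prints the vendor strings an engineer might check to tell a VM from a physical server.

```python
# Minimal sketch: guess whether a Linux machine is a virtual machine
# by reading the hardware vendor strings the (possibly emulated) firmware reports.
# Assumes a Linux system that exposes DMI data under /sys/class/dmi/id/.
from pathlib import Path

def read_dmi(field: str) -> str:
    path = Path("/sys/class/dmi/id") / field
    try:
        return path.read_text().strip()
    except OSError:
        return "unknown"

vendor = read_dmi("sys_vendor")
product = read_dmi("product_name")

# Typical values seen inside VMs: "QEMU", "VMware, Inc.",
# "innotek GmbH" (VirtualBox), "Microsoft Corporation" (Hyper-V).
print(f"Vendor:  {vendor}")
print(f"Product: {product}")
```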

By adopting this approach, we can reap a range of benefits, including:

- Optimal use of computing resources;

- Cost savings;

- Quick application deployment;

- Simplified system administration.

From an engineering standpoint, it's crucial to understand that virtualization technology is quite complex. There are two distinct types of hypervisors, plus hybrid designs that combine elements of both. Yet, to avoid adding complexity to our story, we'll focus on the two main types:

Type 1 (native, bare-metal). This type is commonly used in modern high-load applications and cloud provider solutions that need to run 24/7 all year round.

With this type of hypervisor, there's no intermediary software layer between the hardware, typically a powerful server, and the virtual machines running the applications. The hypervisor itself acts as a minimal host operating system, usually based on Linux and stripped of a graphical user interface to conserve resources. Benefits of this approach include high VM density, the ability to migrate VMs between servers, and the availability of free software. However, it can be more expensive and requires a high level of engineering expertise.

Many maintenance engineers informally refer to this type and all related activities as Server Virtualization. Therefore, when engineers mention their experience with server virtualization, they are usually referring to the type described above.

Examples of Type 1 hypervisors include VMware ESXi, Microsoft Hyper-V, Xen, KVM, oVirt, Red Hat Virtualization, and Proxmox.
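For readers who like to see how engineers actually interact with such a hypervisor: on a Type 1 setup built around KVM, the hypervisor is often managed through the libvirt API. The sketch below is a minimal illustration, assuming the libvirt-python bindings are installed and a local KVM/QEMU hypervisor is reachable at qemu:///system; it just lists the virtual machines defined on the host and whether they are running.

```python
# Illustrative sketch: list virtual machines on a KVM/QEMU host via libvirt.
# Assumes the libvirt-python package is installed and the local
# libvirt daemon is reachable at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for domain in conn.listAllDomains():  # every VM ("domain") defined on this host
        state = "running" if domain.isActive() else "stopped"
        print(f"{domain.name():30s} {state}")
finally:
    conn.close()
```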

Type 2 (hosted). These hypervisors are typically installed on top of an existing operating system, making them hosted hypervisors that rely on the host machine's OS to carry out certain operations, such as managing calls to the CPU, network resources, memory, and storage.

This makes it an excellent option for end-user productivity. It's great for those who need to quickly test specific software on various operating systems or learn how to set up and manage virtual machines.

This type of hypervisor is commonly used by everyday users who need to run particular applications on their machines. So, you may be familiar with it if you've ever run Windows in a virtual machine on macOS to use Windows-only software.

Examples of Type 2 hypervisors include Oracle VirtualBox, VMware Workstation, Parallels, and QEMU.
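And here's what the Type 2 experience looks like from the host's point of view. This is a rough sketch, assuming Oracle VirtualBox is installed and its VBoxManage command-line tool is on the PATH; it asks VirtualBox which VMs are registered and which are currently running.

```python
# Rough illustration: query a Type 2 hypervisor (Oracle VirtualBox)
# from the host OS by shelling out to its VBoxManage CLI.
# Assumes VirtualBox is installed and VBoxManage is on the PATH.
import subprocess

def vbox_list(what: str) -> str:
    result = subprocess.run(
        ["VBoxManage", "list", what],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print("All registered VMs:")
print(vbox_list("vms"))

print("\nCurrently running VMs:")
print(vbox_list("runningvms"))
```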

Alright, we've covered the main point of today's topic. We've seen that although the first hypervisors made great strides in optimizing computing resources, they still demand skilled and pricey engineers capable of designing such solutions. Monitoring and maintaining these hypervisors is also very costly. Therefore, not all companies are willing to maintain entire infrastructure teams solely to use these technologies, despite their desire to do so.

And we've reached our first milestone!

The dedication of engineering-minded individuals to making these technologies more accessible has brought us to an exciting point where cloud providers are emerging as game-changers. But let's save that topic for our next article, where we'll delve into the crucial role of cloud providers in enabling businesses of all sizes to access virtualization.

Stay tuned, because this is just the beginning!

