By Steve Baca VCP, VCI, VCAP, Global Knowledge Instructor
Virtualization is an umbrella term that continues to evolve, covering many different technologies used in many different ways in production environments. Originally, virtualization was accomplished by writing software and firmware for physical equipment so that a single machine could run multiple jobs at once. With the success of VMware and its virtualization of x86 hardware, the term has grown to include not just servers but whole new areas of IT. This article looks at the origins of virtualization and how its historical development spurred on today's virtualization. In addition, we will discuss the different types of virtualization being used in the marketplace today and list some of the leading vendors.
In general, the idea behind virtualization is to make many from one. As an example, with virtualization software, one physical server can run multiple virtual machines, each behaving as if it were a separate physical box. Before virtualization, an operating system and one or more applications ran on their own dedicated physical server in the data center. Since each of those physical servers needed floor or rack space, data centers kept growing in size and number. Using virtualization to consolidate physical servers reversed this sprawl, and companies began to see cost savings.
From the system administrator's point of view, another reason to virtualize is the ability to quickly add more virtual machines as needed, without having to purchase new physical servers. The delay in obtaining new servers varies widely with each company and, in some environments, could be quite lengthy. With virtualization, the length of this process can be greatly reduced because the physical server is already up and running in production. The system administrator can quickly create a brand new virtual machine by adding the virtual machine to an existing physical host. Thus, you can run many virtual machines on one physical server.
A third reason to virtualize is better resource utilization. Before virtualization, it was not unusual to see a physical server using less than five or ten percent of its CPU and memory. As an example, consider a physical server purchased to run an application that only runs during the evening. When the application is not processing, such as in the morning or afternoon, the box sits idle, which is a tremendous waste of resources. If that nightly application is virtualized, its virtual machine can run on the same server as other virtual machines whose applications use resources during the morning or afternoon. The virtual machines balance each other's resource usage: one application runs during the day, the other processes at night, and the physical server makes better use of its resources. With server-side virtualization from vendors such as VMware, a server's CPU and memory can be safely utilized by multiple virtual machines at 75 to 80 percent on a continuous basis. The result is far more efficient use of resources than running each application on its own physical server.
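The day/night consolidation argument above can be sketched numerically. The hourly figures below are hypothetical, not measurements; they simply show that two workloads with complementary peaks interleave on one host instead of stacking:

```python
# Toy model (not a real capacity planner): hourly CPU demand, in percent of
# one physical server, for a daytime app and a nightly batch job.
daytime_app = [5] * 8 + [60] * 10 + [5] * 6      # busy roughly 08:00-18:00
nightly_batch = [70] * 6 + [5] * 14 + [70] * 4   # busy overnight

# On separate servers, each box idles most of the day.
separate_peak = max(max(daytime_app), max(nightly_batch))

# Consolidated on one host, the profiles interleave instead of stacking.
combined = [d + n for d, n in zip(daytime_app, nightly_batch)]
print(max(combined))        # peak stays inside the 75-80 percent band
print(sum(combined) / 24)   # average utilization of the shared host
```

With these sample profiles the consolidated host peaks at 75 percent and averages 60 percent, versus two boxes that each sit nearly idle two-thirds of the day.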
A fourth reason to use virtualization is that it offers features that create a more reliable environment. As an example, VMware offers a feature called High Availability (HA), which responds when a physical server fails. After HA determines that the physical server is down, it can restart the virtual machines on surviving servers. An application therefore experiences less downtime, with HA providing an automated response to physical server failure. Other vendors build different forms of reliability into their own products as well.
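The restart-on-surviving-hosts idea can be illustrated with a minimal sketch. This is hypothetical pseudologic, not VMware's HA algorithm; the host and VM names are invented:

```python
# Minimal HA sketch: when a host is marked dead, restart its VMs on the
# least-loaded surviving host. Hypothetical inventory, not a real API.
hosts = {
    "esx01": {"alive": False, "vms": ["web01", "db01"]},
    "esx02": {"alive": True, "vms": ["app01"]},
    "esx03": {"alive": True, "vms": []},
}

def failover(hosts):
    """Move VMs from dead hosts onto the least-loaded surviving host."""
    survivors = [n for n, h in hosts.items() if h["alive"]]
    for name, host in hosts.items():
        if host["alive"]:
            continue
        for vm in host["vms"]:
            target = min(survivors, key=lambda n: len(hosts[n]["vms"]))
            hosts[target]["vms"].append(vm)   # "restart" the VM elsewhere
        host["vms"] = []
    return hosts

failover(hosts)
print(hosts["esx03"]["vms"])  # ['web01']
```

Real HA implementations add heartbeat-based failure detection, admission control, and restart priorities; the sketch only captures the placement step.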
These are a few of the reasons to virtualize, and there are definitely more reasons. Now, let's turn to the beginning of virtualization.
Origins of Virtualization
The origins of virtualization began with a paper on time-shared computers presented by C. Strachey at the June 1959 UNESCO Information Processing Conference. Time-sharing was a new idea, and Professor Strachey was the first to publish on the topic that would lead to virtualization. After this conference, new research was done, and several more research papers on time-sharing began to appear. These papers energized a small group of programmers at the Massachusetts Institute of Technology (MIT) to begin developing the Compatible Time-Sharing System (CTSS). From these first attempts at time-sharing, virtualization was pioneered in the early 1960s by IBM, General Electric, and other companies attempting to solve several problems.
The main problem that IBM wanted to solve was that each new system that they introduced was incompatible with previous systems. IBM's president, T.J. Watson, Jr., had given an IBM 704 for use by MIT and other New England schools in the 1950s. Then, each time IBM built a newer, bigger processor, the system had to be upgraded, and customers were continuously being retrained whenever a new system was introduced. To solve this problem, IBM designed its new S/360 mainframe system to be backwards-compatible, but it was still a single-user system running batch jobs.
At this time, MIT and Bell Labs were requesting time-sharing systems to solve their problem of many programmers and very few systems on which to run their programs. Thus, IBM developed the System/360-40 (the CP-40 mainframe) for its lab to test time-sharing. The CP-40 eventually evolved into the System/360-67 (the CP-67 mainframe), released in 1968 as the first commercial mainframe to support virtualization. The CP-67 contained a 32-bit CPU with virtual memory hardware, and its operating system was named Control Program/Console Monitor System (CP/CMS). This early hypervisor gave each mainframe user a console monitor system (CMS), essentially a single-user operating system that did not have to be complex because it supported only one user. The hypervisor provided the resources, while CMS supported the time-sharing capabilities, allocation, and protection. CP-67 enabled memory sharing across virtual machines while giving each user their own virtual memory. Thus, the CP operating system's approach provided each user with an operating system at the machine instruction level.
Virtualization continues to be used on mainframe systems even today, but it took nearly two decades before it would become heavily used outside of the mainframe world. Although IBM had provided a blueprint for virtualization, the client-server model that took over from the mainframe relied on hardware that was inexpensive but not powerful enough to run multiple operating systems. As a result, these new systems could not support virtualization, and the idea faded for many years. Eventually, hardware performance increased to the point where significant savings could be realized by virtualizing x86. The concepts of virtualization developed on the mainframe were ported to x86 servers by VMware in 1998, and a new era of virtualization began.
Types and Major Players in Virtualization
Although some form of virtualization has been around since the mid-1960s, it has evolved over time while remaining close to its roots. Much of that evolution has occurred in just the last few years, with new types being developed and commercialized. With so many different types released and no true standard definition, it can be difficult to restrict virtualization to just a few areas. Therefore, for the purposes of this article, the definition of virtualization is limited to "make many from one" and to the most popular types used in business today: Desktop Virtualization, Application Virtualization, Server Virtualization, Storage Virtualization, and Network Virtualization.
Desktop virtualization, sometimes referred to as Virtual Desktop Infrastructure (VDI), is where a desktop operating system, such as Windows 7, runs as a virtual machine on a physical server alongside other virtual desktops. The processing of multiple virtual desktops occurs on one or a few physical servers, typically in a centralized data center. The copy of the OS and applications that each end user utilizes is typically cached in memory as one image on the physical server.
If you go back to the IBM mainframe era, the mainframe did the centralized processing for each user's terminal session, so the user's environment consisted of just a monitor and a keyboard, with all of the processing happening back on the centralized mainframe. The monitor was monochrome, which meant programs that used color graphics were not available on a terminal connected to a mainframe. In the 1990s, however, IT started to migrate to inexpensive desktop systems where each user had a physical computer. The PC consisted of a color monitor, keyboard, and mouse, with much of the processing and the operating system running locally on the desktop's own central processing unit (CPU) and physical random access memory (RAM) instead of on the centralized mainframe.
In today's VDI marketplace, two dominant vendors, VMware Horizon View and Citrix XenDesktop, are vying to become the leader in desktop virtualization. Both products can deliver graphical displays from the data center with rapid response, and both support a mouse, making the end user's experience of the remote desktop feel local. Thus, the performance of the remote desktop and the way the end user accesses applications should be no different than on a physical desktop. VMware Horizon View and Citrix XenDesktop each have a strong footprint and are the most-utilized choices for desktop virtualization in business today.
Application virtualization uses software to package an application into a single executable that can run anywhere. The application is separated from the operating system and runs in what is referred to as a "sandbox." Virtualizing the application allows things like registry and configuration changes to appear to occur in the underlying operating system, although they really happen in the sandbox. There are two types of application virtualization: remote applications and streamed applications. A remote application runs on a server, and a remote display protocol communicates the display back to the client machine. Since many system administrators and users have experience running applications remotely, remote display for applications can be fairly easy to set up. With a streamed application, one copy of the application runs on the server, and many client desktops access it and run it locally. Streaming also eases the upgrade process: you set up another streamed application with the new version and have the end users point to it.
Some of the application virtualization products in the marketplace are Citrix XenApp, Novell ZENworks Application Virtualization, and VMware ThinApp.
Server virtualization allows many virtual machines to run on one physical server. The virtual servers share the resources of the physical server (CPU, memory, storage, and networking), which leads to better utilization of those resources. All of these resources are provided to the virtual machines through the hypervisor, the operating system and software that run on the physical box. Each virtual machine runs independently of the other virtual machines on the same box; the virtual machines can run different operating systems and are isolated from each other. Server virtualization offers a way to consolidate applications that used to run on individual physical servers so that, with hypervisor software, they now run as virtual machines on the same physical server. Server virtualization is what most people think of when they think of virtualization, due to VMware's vSphere, which holds a large percentage of the marketplace. Other vendors include Citrix XenServer, Microsoft's Hyper-V, and Red Hat's Enterprise Virtualization.
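One core job of the hypervisor described above is handing out the host's finite resources to virtual machines. The sketch below is an illustrative model only (invented class and VM names, not any vendor's admission logic): a VM powers on only if enough CPU and memory remain on the host.

```python
# Toy hypervisor admission check: track free host capacity and refuse to
# power on a VM that would oversubscribe it. Hypothetical sizes.
class Host:
    def __init__(self, cpu_ghz, ram_gb):
        self.cpu_free, self.ram_free = cpu_ghz, ram_gb
        self.vms = []

    def power_on(self, name, cpu_ghz, ram_gb):
        if cpu_ghz > self.cpu_free or ram_gb > self.ram_free:
            return False                      # not enough capacity left
        self.cpu_free -= cpu_ghz
        self.ram_free -= ram_gb
        self.vms.append(name)
        return True

host = Host(cpu_ghz=16, ram_gb=64)
host.power_on("web01", 4, 16)          # succeeds
host.power_on("db01", 8, 32)           # succeeds
print(host.power_on("big01", 8, 32))   # False: only 4 GHz / 16 GB free
```

Real hypervisors go much further (scheduling, memory overcommit, isolation), but the model shows why several modest VMs fit where one oversized one does not.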
Storage virtualization is the process of using software to group physical storage so that it appears as a single storage device in a virtual format. Correlations can be made between storage virtualization and traditional virtual machines, since both abstract access to physical hardware and resources. There is a difference, however: a virtual machine is a set of files, while virtual storage is created in software and typically runs in memory on the storage controller.
A form of storage virtualization has been incorporated into storage features for many years. Features such as Snapshots and RAID take physical disks and present them in a virtual format. These features can provide a format to help with performance or add redundancy to the storage that is presented to the host as a volume. The host sees the volume as a big disk, which fits the description of storage virtualization.
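The "host sees one big disk" idea above can be made concrete with a small address-mapping sketch. This is a hypothetical RAID-0-style striping layout for illustration, not any array's actual firmware logic:

```python
# Toy illustration: several physical disks presented as one striped volume.
# A logical block address (LBA) on the virtual volume maps to a
# (disk index, block on disk) pair, RAID-0 style.
def stripe_map(lba, num_disks, blocks_per_disk):
    """Map a logical block of the virtual volume to a physical location."""
    if lba >= num_disks * blocks_per_disk:
        raise ValueError("LBA beyond volume size")
    return lba % num_disks, lba // num_disks

# Three 1000-block disks appear to the host as one 3000-block "big disk".
print(stripe_map(0, 3, 1000))   # (0, 0)
print(stripe_map(1, 3, 1000))   # (1, 0) -- adjacent blocks hit different disks
print(stripe_map(7, 3, 1000))   # (1, 2)
```

Because adjacent logical blocks land on different physical disks, sequential I/O is spread across spindles, which is the performance benefit the text mentions; redundancy schemes such as mirroring add a second mapping for each block.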
The storage array vendors have implemented storage virtualization within the operating system of their respective arrays. This type of storage virtualization is called internal storage virtualization. In addition, there is external storage virtualization that is implemented by Veritas and many other storage vendors.
Network virtualization uses software to perform network functions by decoupling the virtual networks from the underlying network hardware. Once network virtualization is in use, the physical network is only used for packet forwarding, and all of the management is done in the virtual, software-based switches. As VMware's ESX server grew in popularity, it included a virtual switch that allowed network management and data transfer to happen inside the ESX host. This paradigm shift caught the eye of Cisco, so when VMware was building vSphere 4.0, Cisco helped write the code for VMware's new Distributed Switch. This helped Cisco learn how to design for network virtualization, and an internal movement started to make Cisco switches manageable as software-based entities.
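At the heart of the virtual switches described above is the same learn-and-forward behavior as a physical Ethernet switch, implemented in software. The sketch below is a simplified model (invented port and MAC names, not VMware's or Cisco's code): learn which port each source MAC address sits behind, forward to a known port, and flood when the destination is unknown.

```python
# Simplified learning virtual switch: MAC table plus forward-or-flood.
class VSwitch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # MAC address -> port name

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # known: one output port
        return [p for p in self.ports if p != in_port]  # unknown: flood

vs = VSwitch(["vm1", "vm2", "uplink"])
print(vs.receive("vm1", "aa:aa", "bb:bb"))  # flood: ['vm2', 'uplink']
print(vs.receive("vm2", "bb:bb", "aa:aa"))  # learned: ['vm1']
```

Traffic between two VMs on the same host never has to leave for the physical network, which is why the physical fabric can be reduced to simple packet forwarding.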
The network virtualization marketplace is really in its infancy with many startups and options to choose from at this time. Cisco and many startup companies are vying for control in this area of virtualization, which has huge potential.
The first vendors in network virtualization are the hypervisor makers themselves, each supplying an internal virtual switch. In addition, third-party vendors such as Cisco and IBM have developed virtual switches that can be used with hypervisors such as ESXi.
The reasons to virtualize might have begun with saving money, but there are other good reasons, such as better resource utilization and the ability to quickly add new virtual machines. Fortunately, the ability to save money makes it easier to get approval for virtualization. The reasons for virtualizing have multiplied since IBM first incorporated the concept into mainframe systems in the 1960s. Once the client-server age began, there was a period when virtualization was not utilized outside of mainframe systems. Eventually, the need for virtualization made it a viable solution again. VMware started the rise in popularity of virtualization by virtualizing the server, and as server virtualization grew, other IT areas, such as the desktop, came to be seen as virtualization possibilities as well.