Red Hat Virtualization, Part 1: Understanding How We Got to Hardware Virtualization
You may ask yourself: Why would I need a virtual machine - or not need one? What would I do with a virtual machine? Part 1 looks at the background and development of virtualization and addresses memory management and the role of hardware in a virtual machine. This series of white papers assumes you know very little about the inner workings of a typical computer system but want to know, at a high level, what virtualization of hardware means. This is Part 1 of a 3-part white paper on Red Hat Virtualization.
The idea of building a "virtual machine" is probably foreign to most people. How does it relate to a computer? Why would I need a virtual machine - or not need one? What would one do with a virtual machine?
In some movies and games, people, animals, and landscapes, and their destruction, are generated on a display for the audience. These are commonly referred to as virtual animations. They exist only in this displayed environment.
This white paper assumes the audience knows very little about the inner workings of a typical computer system but wants to know, at a high level, what virtualization of hardware means.
First we will delve back into the history of computing and the evolution of computing internals, including current trends in processor chips, at a very high level, in order to understand how we arrived at the latest development: virtualized hardware.
Virtualization of hardware requires that a special set of software be used on the physical host. Within Red Hat Enterprise Linux (RHEL), the virtualization software pieces provided include Xen, QEMU, and KVM. These additional software components allow multiple virtualized hardware instances to exist in memory, each running its own independent operating system (OS) with one or more user-accessible applications.
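On a Linux host, you can check whether the CPU exposes the hardware virtualization extensions that KVM relies on: Intel VT-x appears as the `vmx` flag in `/proc/cpuinfo`, and AMD-V as `svm`. The following is a minimal sketch of such a check (tools like `lscpu` report the same information):

```python
# Check whether the host CPU advertises hardware virtualization
# extensions (Intel VT-x -> "vmx", AMD-V -> "svm") by scanning the
# "flags" lines of /proc/cpuinfo.

def has_virt_extensions(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists vmx or svm."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            supported = has_virt_extensions(f.read())
        print("hardware virtualization:",
              "available" if supported else "not detected")
    except FileNotFoundError:
        print("/proc/cpuinfo not found (not a Linux host?)")
```

Note that an absent flag does not always mean the CPU lacks the feature; the extensions can also be disabled in the system firmware.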
Virtualization - Everything Is In Memory
The terminology around virtualization relates to almost everything in memory: process virtual memory, shared virtual memory, virtualized hardware, virtual machine. All of these terms will be discussed in relevant parts of this white paper.
Defining Virtualization from the Beginning
Virtualization is a term coined to deal with how programs see their world. An application that edits simple text files has to be put into memory to run, has to have memory for the content being worked on, and has to be able to save and/or retrieve the content for the user. Sounds simple, but the underlying mechanisms used have become quite complex in an attempt to make application development less complex. Operating systems have had to add all sorts of security for users and processes that was not needed in the early days, when computers were not interconnected the way they are today.
The term motherboard refers to the physical circuit board that interconnects a computer's components. One important component deserves special mention: the Central Processing Unit, or CPU.
In the Beginning
Early computers were simple in design: a Central Processing Unit (CPU), memory (RAM), and peripherals for storage and display. Simple and concise, but each machine was unique. Early programs had to include specific software code, written directly for the local peripheral hardware, to save or read data. There was just one application running on the computer, and it did everything: display user information, save data, and retrieve and change data on whatever device was being used.
The piece of software code that saved data to a file would be called a routine. Within a program, it would be referred to as a subroutine of the main application. There might be many subroutines in an application.
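To make the idea concrete, here is a toy sketch (in Python, for readability; early routines would have been machine code or assembly) of a main application that delegates its file handling to save and retrieve subroutines:

```python
import os
import tempfile

# Toy illustration of "subroutines": the main application delegates
# its file handling to small, reusable routines.

def save_data(path, text):
    """Subroutine: write the user's content to storage."""
    with open(path, "w") as f:
        f.write(text)

def retrieve_data(path):
    """Subroutine: read the user's content back from storage."""
    with open(path) as f:
        return f.read()

if __name__ == "__main__":
    # The "main application": save some content, then read it back.
    path = os.path.join(tempfile.gettempdir(), "demo.txt")
    save_data(path, "hello, world")
    print(retrieve_data(path))  # -> hello, world
```

In an early machine, these routines would have been written against the specific storage hardware attached to that computer; today the operating system hides those details behind a common file interface.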
Software and Hardware Evolved
With so many people wanting a computer, so many uses for them, and so many minds working on them, there were bound to be many ideas. Some of these ideas continue to this day as stalwarts of design, while others have evolved into newer versions that continue to change as new ideas are implemented.
Evolution of Programming Languages
Early programs were written in machine language. This was very tedious and required that every program be developed independently. A word-processing program had to display to a screen, and save to and retrieve from whatever storage device was local. Many early machines were dedicated to just one specialized application, such as word processing. Those days are long past - but wait - they are back with devices like book readers, MP3 players, routers, and switches. Others, like cellular phones and other Personal Digital Assistants (PDAs), often have dedicated OSs and applications.
In order to speed up application development, general programming languages were developed that either ran a more English-like program directly (an interpreter) or translated the English-like program into machine code instructions (a compiler). Compiled languages developed over the past few decades, and still being enhanced, include the C programming language, Fortran, Pascal, and many others. Interpreted languages include Bourne shell variants, Perl, PHP, Java, and a host of others.
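Python itself offers a small-scale analogue of this translation step: its built-in compile() turns source text into a bytecode object, which the interpreter's virtual machine then executes, much as a compiler translates a program into machine instructions for the CPU:

```python
# A small-scale analogue of compilation: Python's built-in compile()
# translates source text into a code object (bytecode), which the
# interpreter then executes. The standard-library dis module can
# display the bytecode instructions the source was translated into.

source = "result = 6 * 7"
code = compile(source, "<demo>", "exec")  # translate source -> bytecode

namespace = {}
exec(code, namespace)          # execute the translated program
print(namespace["result"])     # -> 42
```

The difference from a true compiler is that these bytecode instructions still run inside the interpreter rather than directly on the hardware.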