This vSphere Essentials white paper gives you a basic understanding of the concerns and planning points to consider as you prepare to deploy vSphere in your organization. It is a glimpse into some of the essentials of implementing vSphere, focusing on the basics that trip up vSphere administrators who install the latest features without realizing they still have older versions deployed.
In every VMware class I teach, whether it's the basic ICM (Install, Configure, Manage) course or the more involved FastTrack, many students run into basic confusion over planning or the lingo. A few brave students ask questions, and they should, since they are in the classroom. But some don't ask, thinking it's the basics and that by some miracle they should already know it. Consequently, I decided to cover these topics in this white paper. I hope it helps the target audience: readers looking for vSphere Essentials to get started, who will later take classes on more advanced, in-depth topics.
Background on Physical Machines
The terminology seems to be the first cause of confusion. Remember, before we moved to virtualization, we used to buy expensive servers from IBM, HP, Dell, or other hardware vendors and then install our operating systems (OSs). The operating system was either something from Microsoft or some flavor of Linux. Then, on top of that OS, we installed our application: for example, installing Windows Server 2008 on your Dell server and then putting something like Microsoft Exchange or SQL Server on top of that.
We are so used to this scenario that it's sometimes hard to move away from that mindset, especially for folks who are responsible only for server administration and not vSphere administration. After virtualization, we instead install VMware's ESXi on that Dell server. Then, from some other machine, we use the vSphere Client (or the web version of the client) to manage the ESXi host and create virtual machines (VMs). Those VMs, in turn, run the Microsoft or Linux OS, which then has products like Microsoft Exchange or SQL Server installed on it.
Administrators who are using Remote Desktop Protocol (RDP) or something similar to connect to the "Exchange Server" are still doing the same thing. They don't even need to know that it used to be installed on a physical Dell server and has now been virtualized and runs on top of ESXi as a VM. To the Exchange administrator, the process is unchanged: use RDP to connect to the VM running Exchange Server. Even though I say they don't need to know the server has been virtualized, students sometimes come to class asking whether a vendor will support their product if it has been virtualized rather than left on a physical system. You need to check support policies with the vendor of each product you are virtualizing, since they vary.
The terminology is now different. After virtualization, the Dell server hardware on which we installed our VMware ESXi product is called the Host. Windows Server 2008, installed inside a VM, is the Guest Operating System.
Advantages to Implementing Virtual Machines
The advantage of doing this is that previously we used perhaps 2 to 20 percent of our physical resources, since CPUs and RAM kept getting faster and comparatively cheaper. Now, after virtualizing, we can put multiple VMs on that same physical chassis from Dell (in our example), and each VM is isolated from the others. This isolation is important because if one of my VMs crashes, it has no effect on the other VMs. Through consolidation, I have reduced the number of physical chassis I own. You will also see in the ICM and FastTrack classes that, with the many fault tolerance and clustering methods available, we can protect the entire ESXi host, along with all its VMs, if it were to fail.
Another advantage we gain from consolidation is that, depending on the consolidation ratio you implement, you have reduced the number of machines you purchase. This is not only a financial savings but also a space savings. Think about where you were putting those servers: they all sat in your data center, taking up a whole lot of space. Less rack space used means more savings. More machines would also have generated more heat; less heat means less air conditioning, providing even more savings.
Have you ever run into this dilemma: you want to purchase a physical server that doesn't need to be that powerful, but vendors don't usually sell hardware in lower configurations? Now, with virtualization, you can put those application servers in VMs, each with different CPU and RAM allocations, set up the way you want them. The ESXi host shares resources among the VMs based on those allocations. You do the carving of CPU and RAM per VM. This is called resource sharing, and it is thoroughly covered in various VMware classes.
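To make the idea concrete, here is a minimal sketch of proportional resource sharing. This is not the actual ESXi scheduler, just an illustration with hypothetical VM names and share values, showing how a host's CPU capacity could be divided among VMs in proportion to the shares each one is allocated.

```python
def allocate_cpu(host_mhz, vm_shares):
    """Divide a host's CPU capacity (in MHz) among VMs in proportion
    to each VM's share allocation. Simplified illustration only."""
    total_shares = sum(vm_shares.values())
    return {name: host_mhz * shares / total_shares
            for name, shares in vm_shares.items()}

# Example: a 10,000 MHz host with three VMs; "db" has twice the shares
# of the others, so it receives twice the CPU under contention.
allocation = allocate_cpu(10_000, {"web": 1000, "db": 2000, "test": 1000})
print(allocation)  # web: 2500.0, db: 5000.0, test: 2500.0
```

In the real product, shares only matter when the host is under contention; each VM can also be given reservations and limits, which the classes cover in depth.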