In this post, I’ll build on the previous discussion of virtualizing data centers with specific guidance for virtualizing desktops (sometimes referred to as VDI – Virtual Desktop Infrastructure). I’ll focus on implementation and cover desktop virtualization in general, not the specifics of one platform or another. Let’s take a look at five basics that should be considered in any desktop virtualization project:
Understanding What You Consume Today
First and foremost, you need to know what your desktops actually consume today. By this I mean what is actually used, not simply installed. It is also important to establish what “normal” or average usage looks like and when peak periods occur, with the goal of not having all of the desktop VMs peak at the same time on the same server. One of the primary drivers for this phase is gathering the knowledge necessary to figure out how many desktop VMs can be placed on each server – the more that can be placed on a single server, the greater the consolidation ratio and the lower the TCO.
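To make this concrete, here is a minimal sketch of the consolidation math. The function name and all of the figures (host size, per-VM peak RAM, average CPU share, target utilization) are illustrative assumptions, not measurements from any real environment:

```python
# Hypothetical sketch: estimate how many desktop VMs fit on one host
# from measured (not installed) usage. All figures are assumptions.

def vms_per_host(host_ram_gb, host_cpu_cores,
                 vm_peak_ram_gb, vm_avg_cpu_fraction,
                 target_cpu_util=0.70):
    """Return the consolidation limit imposed by RAM and by CPU."""
    by_ram = host_ram_gb // vm_peak_ram_gb
    # Each core can absorb target_cpu_util worth of averaged VM load.
    by_cpu = int((host_cpu_cores * target_cpu_util) / vm_avg_cpu_fraction)
    return min(by_ram, by_cpu)

# Example: a 256 GB, 32-core host; each VM peaks at 4 GB of RAM and
# averages 5% of one core.
print(vms_per_host(256, 32, 4, 0.05))  # → 64 (RAM is the limit here)
```

The point of the sketch is that whichever resource runs out first sets the consolidation ratio, which is why the measurement phase matters.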
Balancing the Various Hardware Components
The goal is to keep all of the equipment roughly evenly loaded to minimize cost and maximize utilization (within reason). For example, you don’t want a low-end storage solution or slow disks that will cause the virtual desktops to perform slowly, nor do you need a top-of-the-line storage system in most cases. The key is to find one that will provide the requisite performance.
Ideally, you want enough RAM to run all of the virtual desktops while keeping the CPUs fairly busy (averaging 60% to 75% is fairly normal). If there is a lot more RAM than the CPUs can effectively use (for example, with CPU-intensive tasks that require only modest amounts of memory), the extra RAM is wasted. On the other hand, if many machines that are not CPU-intensive all run on the same host, you may exhaust the available RAM, causing heavy swapping to disk and drastically reducing performance.
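That trade-off can be expressed as a quick sanity check. This is a hypothetical sketch (the thresholds of 0.5 and 2.0 and all inputs are assumptions I chose for illustration), comparing how many VMs the RAM supports against how many the CPUs support:

```python
# Hypothetical sketch: flag hosts whose RAM and CPU capacities diverge,
# meaning one resource will be exhausted while the other sits idle.

def balance_report(host_ram_gb, host_cores, vm_ram_gb, vm_cpu_fraction,
                   target_cpu_util=0.70):
    by_ram = host_ram_gb / vm_ram_gb                       # VMs the RAM supports
    by_cpu = (host_cores * target_cpu_util) / vm_cpu_fraction  # VMs the CPUs support
    ratio = by_ram / by_cpu
    if ratio < 0.5:
        return "RAM-bound: CPUs will sit idle; consider adding RAM"
    if ratio > 2.0:
        return "CPU-bound: extra RAM is wasted; consider more cores"
    return "roughly balanced"

# The same 256 GB / 32-core host with light 4 GB desktops:
print(balance_report(256, 32, 4, 0.05))
```

A report like this is only as good as the measurements feeding it, which again points back to knowing what your desktops actually consume.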
Sizing the Storage Properly
Adding a server or some RAM is relatively easy and inexpensive, but a major upgrade to storage can be very time-consuming, not to mention expensive. When sizing storage, most people simply count the TB of space required, calculate the requisite number of drives to provide that space (with a few extra for parity, hot spares, etc.) and consider the project complete. That is not adequate, however, as different disks have widely varying performance capabilities.
Most vendors today offer various mechanisms to optimize storage performance, and in most environments these need to be carefully considered and implemented to get adequate performance from the consolidated environment. For years, users have had PCs with their own storage devices – even a slow SATA drive provides 80 or so IOPS per user. In a virtual environment, it is not reasonable to give every two or three users their own 15K SAS or Fibre Channel drive; they are simply too expensive.
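The capacity-only sizing mistake described above is easy to see in a short calculation. The per-drive figures below are rough rules of thumb I am assuming for illustration (a 600 GB 15K SAS drive at roughly 180 IOPS), not any vendor’s specification:

```python
import math

# Hypothetical sketch: size a disk group by BOTH capacity and IOPS and
# take whichever requires more drives. All figures are assumptions.

def drives_needed(total_tb, total_iops, drive_tb, drive_iops):
    by_capacity = math.ceil(total_tb / drive_tb)
    by_iops = math.ceil(total_iops / drive_iops)
    return max(by_capacity, by_iops)

# 500 desktops x 20 GB = 10 TB, 500 x 15 IOPS = 7,500 IOPS.
# A 600 GB 15K SAS drive (~180 IOPS) ends up sized by performance:
print(drives_needed(10, 7500, 0.6, 180))  # → 42, not the 17 capacity needs
```

Counting only terabytes would have bought 17 drives; the workload actually needs 42 to deliver the IOPS.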
If performance and availability in the event of a drive failure are both important (and let’s face it, that is the great majority of the time), RAID 10 (or 0+1, depending on the vendor’s offerings) provides the best balance among the options, especially when spinning disks are used. This is less of an issue with SSDs due to their high I/O performance and relatively small capacities, so many administrators choose RAID 5 in that scenario.
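The reason RAID level matters so much for VDI is the write penalty: the commonly cited figures are 2 back-end writes per front-end write for RAID 10 and 4 for RAID 5. A quick sketch, assuming a write-heavy desktop mix of 30% reads and 70% writes (an assumption, not a measured workload):

```python
# Hypothetical sketch: effective front-end IOPS after the RAID write
# penalty (commonly cited as 2 for RAID 10, 4 for RAID 5).

def effective_iops(raw_iops, read_fraction, write_penalty):
    write_fraction = 1.0 - read_fraction
    return raw_iops / (read_fraction + write_fraction * write_penalty)

raw = 24 * 180  # 24 spindles at roughly 180 IOPS each = 4,320 raw IOPS
print(round(effective_iops(raw, 0.3, 2)))  # RAID 10 → 2541
print(round(effective_iops(raw, 0.3, 4)))  # RAID 5  → 1394
```

With the same spindles, RAID 10 delivers nearly twice the usable IOPS of RAID 5 under a write-heavy desktop workload, which is the trade-off being paid for with extra capacity.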
Minimizing the Number of Base Images Deployed
One of the advantages of virtualizing your desktops is that the possible devices and variations of software and hardware that need to be supported can be minimized. You no longer need to support dozens of models of Dell, HP, Lenovo, and ASUS notebooks and desktops. You have a single standard hardware platform.
Some basic guiding principles in creating various standard VM sizes include:
- Use 1 CPU whenever possible
- Use enough RAM so that you don’t need to swap to disk, but no more: provisioning extra RAM simply decreases the consolidation ratio and/or forces the purchase of more RAM than would otherwise be necessary
- Use a 32-bit OS instead of a 64-bit OS when possible
- Minimize the number of variations to make it simpler to patch and maintain the virtual desktops
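The principles above can be captured as a small catalog of standard VM sizes. Everything here is illustrative: the template names, sizes, and OS choices are assumptions I invented to show the idea of a short, fixed menu rather than per-user customization:

```python
# Hypothetical sketch: a short menu of standard desktop VM sizes
# following the principles above (1 vCPU where possible, just-enough
# RAM, 32-bit OS by default, few variations). Names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class DesktopTemplate:
    name: str
    vcpus: int
    ram_gb: int
    os: str

TEMPLATES = [
    DesktopTemplate("task-worker", 1, 2, "Windows 7 32-bit"),
    DesktopTemplate("knowledge-worker", 1, 4, "Windows 7 32-bit"),
    DesktopTemplate("power-user", 2, 8, "Windows 7 64-bit"),
]

def pick_template(needs_ram_gb):
    """Choose the smallest standard template that satisfies the need."""
    for t in sorted(TEMPLATES, key=lambda t: t.ram_gb):
        if t.ram_gb >= needs_ram_gb:
            return t
    raise ValueError("no standard template is large enough")

print(pick_template(3).name)  # → knowledge-worker
```

Forcing every request through a picker like this, rather than sizing each VM by hand, is what keeps the number of variations (and the patching burden) down.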
In most cases, an enterprise requires far more software combinations than hardware ones. There are two solutions to this problem, namely:
- Create separate VMs for each unique combination of software and “hardware” configuration required. This tends to have the net result of many possible base images, all of which need to be patched and updated separately, greatly increasing the load on the IT staff and raising the TCO.
- Use application virtualization techniques, such as VMware’s ThinApp or Microsoft’s App-V. This is the next logical step in virtualization and separates the application from the OS in much the same way that server or desktop virtualization separates the OS from the physical hardware. This technique allows the same application to be packaged and run on multiple operating systems without any modification.
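The arithmetic behind the two approaches is simple but worth seeing. The OS and application-set names below are made up for illustration; the point is only the combinatorial difference between the two options:

```python
# Hypothetical arithmetic: base images per OS/application combination
# versus one image per OS with applications as virtualized packages.
from itertools import product

oses = ["Win7-32", "Win7-64"]
app_sets = ["office", "office+cad", "office+dev", "callcenter"]

# Option 1: a separate base image for every OS/application combination,
# each of which must be patched and updated on its own.
per_combo_images = len(list(product(oses, app_sets)))

# Option 2: one patched base image per OS; applications ship as
# separate virtualized packages (ThinApp / App-V style).
app_virt_images = len(oses)

print(per_combo_images, app_virt_images)  # prints: 8 2
```

Even this small example cuts the patching surface from eight images to two, and the gap widens quickly as application sets multiply.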
Accepting a Wide Variety of End User Devices
One of the big changes going on in IT at the moment is that users want access to their data from many different devices, and they want that access at all times. When planning a VDI deployment, consider the methods that your users will be able to use to access their desktop. Options include:
- Laptops with a locally cached copy of the virtual desktop (or at least a copy that can be cached as needed) for users, such as travelers and salespeople, who may not always have Internet access to reach their desktop.
- Existing PCs (desktops or laptops) running Windows, Linux, and/or Mac OS variants. This does not initially offer many of the savings that VDI promises, but it avoids up-front hardware spending, with these devices being replaced by other options as they wear out or come up for replacement.
- Thin clients or zero clients, which are basically dumb terminals that have enough intelligence to connect to the VDI infrastructure to access the virtual desktops.
- Tablets, such as Android tablets or iPads, which are becoming more and more popular. They provide simple, easy, convenient access from anywhere with a network connection, and many have 3G or 4G capabilities built in, allowing access at virtually any time.
- Smart phones, for those times that none of the above are handy and access is required.
The five “secrets” shared here will greatly increase the chance of a successful VDI implementation. Plan carefully: balance the hardware environment to handle not just average but peak load; size storage for performance as well as capacity; minimize the number of base images so that the cost of maintaining each can also be minimized; and accept, even embrace, the fact that people will connect with a wide variety of devices, with a plan to accommodate as many of those devices as is feasible at the lowest cost.