Abstract
The technologies examined here reduce operational expenses (OpEx) rather than the capital expenses (CapEx) that have traditionally been the focus of virtualization. Many companies implemented virtualization with the goal of buying fewer servers, with the side benefits of a smaller server footprint and lower power and cooling requirements. Most of those savings were capital savings, but do not expect the same from many of the technologies listed here; some may even require additional capital expenditure, at least for software, in order to save on the day-to-day operation of IT. The bigger cost of running an IT department is in the OpEx category anyway, so savings there recur year after year.
Introduction
At VMworld in August 2013, VMware described three basic phases of virtualization:
- Basic Virtual Machines (VMs), where test, development, and non-production-critical VMs are virtualized. Most companies today have at least gotten to this stage (most reliable accounts put this in the 60 percent to 80 percent range today).
- Mission-critical VMs, where the critical servers that the company runs (such as SQL, Exchange, Oracle, and SAP) are virtualized. According to VMware's research, about 54 percent of companies are in this stage today. VMware also noted that half of all Oracle and SAP servers and three-quarters of Microsoft servers in this category are virtualized today.
- IT-as-a-Service (IaaS), the phase where we use cloud computing (private, public, or a hybrid of the two). IT provides the resources and controls, but the VMs may be anywhere. Again, according to VMware, 21 percent of companies are in this stage today.
It is interesting to note how the number of VMs per administrator has changed over the roughly 10 years that VMware has been pushing server virtualization. In the first phase, where workloads are typically spread across a larger number of smaller physical servers, the number of VMs per administrator averages around 120 (across companies of all sizes); in phase two it is 170, and in the IaaS phase the current average is 363. To manage that many VMs well, IT needs better tools to control what is deployed and where, the governance to control data security and access, and the ability to manage and monitor such a diverse set of assets.
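To get a feel for what that kind of monitoring looks like in practice, the short Python sketch below uses the open-source pyvmomi SDK to count the powered-on VMs on each host managed by a vCenter Server, the raw number behind a VM-to-administrator ratio. It is only an illustrative sketch, not part of any VMware product: the vCenter hostname and credentials are placeholders, and certificate checking is disabled purely for lab convenience.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- replace with your own vCenter Server.
    context = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=context)
    try:
        content = si.RetrieveContent()
        # Walk every host under the root folder and count its powered-on VMs.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        total = 0
        for host in view.view:
            powered_on = [vm for vm in host.vm
                          if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn]
            total += len(powered_on)
            print("{}: {} powered-on VMs".format(host.name, len(powered_on)))
        print("Total powered-on VMs:", total)
        view.Destroy()
    finally:
        Disconnect(si)

Dividing that total by the number of administrators gives the ratio discussed above; the same inventory walk is the starting point for the more capable management tooling covered later in this paper.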
So what do you do next? Which technology or technologies should you look at once you've virtualized all or most of your environment? What can you look at to drive down costs and increase efficiency? This white paper discusses several possible "next moves" and how they can make your business more agile and, at the same time, reduce costs.
The technologies to consider next that are discussed in this white paper include the following:
- The Software Defined Data Center (SDDC), via Software Defined Networking [SDN] with NSX and Software Defined Storage [SDS] with VSAN
- Desktop virtualization (VDI with the Horizon Suite)
- Cloud computing (vCloud Hybrid Service [vCHS], vCloud Director [vCD], and vCloud Automation Center [vCAC])
- Operations Management (vCenter Operations Manager [vC Ops], vCenter Log Insight, and the IT Business Management [ITBM] Suite)
You can choose to implement any or all of the technologies listed to optimize your environment. VMware's offerings are designed to integrate with each other, making it easier to deploy any combination that works in your environment. That type of integration will only deepen with upcoming product releases.
The Software Defined Data Center
If you have looked at VMware's marketing campaign and website, you have probably noticed lots of references to the Software Defined Data Center (SDDC). How is that different from what you have now? Most organizations have a Hardware Defined Data Center (HDDC) today, where networking is implemented with dedicated switches and routers, and storage with either Storage Area Networks (SANs) or Network Attached Storage (NAS).
In both cases, the networking and storage are often locked into a specific vendor, with experts on staff in that vendor's products. IT's expertise sits in many brittle, vertical silos, where the storage team only understands storage and the network team only understands networking, which leads to lots of finger pointing when an issue arises between these teams, the application teams, and the virtualization teams. These solutions are often very expensive to purchase in the first place (i.e., they require a large capital expenditure, or CapEx) and cost a lot to maintain and monitor through the expertise of the specialists on each team (i.e., they require a large, ongoing operational expenditure, or OpEx). Because of these investments, the organization is often locked into a particular vendor, and the cost of migrating to a new solution is prohibitive. What can be done to alleviate these issues?
Two big areas are emerging in this regard: Software Defined Networking (SDN) and Software Defined Storage (SDS). They can be implemented independently or together. The basic idea is to do for networking and storage what virtualization did for computing. To some people, this is a radical idea, but then again, so was virtualization when VMware introduced it. While you may not be ready to implement either at the moment, you should start considering whether they make sense and start planning for their implementation. Note: At this point, larger, more complex organizations will probably benefit most from SDN, while SDS can be utilized cost effectively by organizations of all sizes and complexities.
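As a small illustration of the "storage defined in software" idea, the hedged Python sketch below uses pyvmomi to turn on Virtual SAN for an existing cluster through the vCenter API rather than through array-specific tooling. It is a sketch under stated assumptions, not an official procedure: the cluster name and connection details are placeholders, and it assumes the cluster's hosts already meet VSAN's disk and networking prerequisites.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",          # placeholder vCenter
                      user="administrator@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())  # lab use only
    try:
        content = si.RetrieveContent()
        # Locate the target cluster by name (placeholder name "Cluster01").
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "Cluster01")
        view.Destroy()

        # Reconfigure the cluster: enable VSAN and let it claim local disks automatically.
        vsan_config = vim.vsan.cluster.ConfigInfo(
            enabled=True,
            defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
                autoClaimStorage=True))
        spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_config)
        task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
        print("Submitted VSAN enable task:", task.info.key)
    finally:
        Disconnect(si)

The point of the example is less the specific calls than the model: the shared datastore is created, grown, and policed entirely through software and APIs, which is exactly the shift SDN makes on the networking side.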