One of the most significant new features in Microsoft Windows Server 2012 is the Hyper-V Replica (HVR) capability. Whether you are considering this for your own organization or just prepping for your Windows Server 2012 MCSA, this white paper presents the essentials of deploying this disaster recovery feature.
Discover how the enhanced performance and reliability of Amazon Aurora will help AWS customers reduce performance bottlenecks in their applications. The relatively low cost of Aurora will tempt many customers to migrate workloads to this implementation of RDS.
Amazon Redshift opens up enterprise data warehouse (EDW) capabilities to even the smallest of businesses, yet its cost, security, and flexibility also make it appealing to the largest of enterprises. It allows companies to easily and conveniently scale their EDW needs both up and down, and as a managed service, it lets your team offload all of the "undifferentiated heavy lifting" of building and maintaining an EDW. Its raw storage costs are roughly one-tenth to one-fifth those of a traditional in-house EDW, yet AWS has taken great care to keep its performance competitive with those in-house solutions. Before deciding to use Amazon Redshift, however, it's important to understand what it is and is not.
This white paper explores the native AWS storage solutions so that you can deliver applications in the cloud in the most efficient, cost-effective, and secure manner. It's important to understand the characteristics of each AWS storage option so that you can implement one or more AWS storage services to meet your needs. Often, you'll find that combining multiple storage options gives you the best outcomes.
The National Institute of Standards and Technology (NIST) created a cloud definition that has been well accepted across the IT industry. NIST was mandated to assist government agencies in adopting cloud computing for their IT operations. As part of that mandate, NIST created multiple working groups to define cloud computing, its architecture, and its requirements. In this paper we explore the core of NIST's cloud definition.
Many people believe that cloud computing requires server (or desktop) virtualization. But does it? We will look at using virtualization without cloud computing, cloud computing without virtualization, and then the two together. In each case, we'll consider where the deployment might be most useful, some use cases for it, and its limitations.
After a review of Software-Defined Networking (SDN) and its close cousin Network Functions Virtualization (NFV), this white paper addresses three main deployment scenarios: SDN without deploying cloud computing, cloud computing without deploying SDN, and deploying cloud computing in conjunction with SDN. We'll look at use cases, when the approach makes sense, and any applicable limitations.
In 2013, VMware announced VMware Virtual SAN (VSAN), VMware's native version of Software-Defined Storage (SDS). It is simple, easy to set up, and managed by user-defined policies. This paper explains VSAN, its basic requirements, and how it works.
The technologies examined here reduce operational expenses (OpEx) rather than the capital expenses (CapEx) that have traditionally been the focus of virtualization. Many companies implemented virtualization with the goal of buying fewer servers, with the side benefits of a smaller server footprint and lower power and cooling requirements. Most of those savings were capital, but do not expect the same from many of the technologies listed here; some may even require additional capital expenditures, at least for software, in order to save on the day-to-day operations of IT. The bigger cost of running an IT department is in the OpEx category anyway, so savings there recur year after year.
AWS has introduced Auto Scaling so that you can take advantage of cloud computing without having to incur the costs of adding more personnel or building your own software. You can use Auto Scaling to scale for high availability, to meet increasing system demand, or to control costs by eliminating unneeded capacity. You can also use Auto Scaling to quickly deploy software for massive systems, using testable, scriptable processes to minimize risk and cost of deployment.
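As a minimal sketch of what such a scriptable process might look like, the snippet below builds the parameters for a target-tracking scaling policy in Python, in the shape accepted by boto3's `put_scaling_policy` call. The group name and the CPU target are illustrative assumptions, not values from the white paper.

```python
# Hypothetical sketch: constructing target-tracking scaling policy
# parameters for an Auto Scaling group. The group name and the 50%
# CPU target are assumptions for illustration only.

def build_cpu_target_tracking_policy(group_name, target_cpu_percent):
    """Return put_scaling_policy kwargs that keep average CPU near a target.

    Auto Scaling adds instances when average CPU rises above the target
    and removes them when it falls below, which matches the paper's
    "meet increasing demand" and "eliminate unneeded capacity" goals.
    """
    return {
        "AutoScalingGroupName": group_name,
        "PolicyName": f"keep-cpu-at-{target_cpu_percent}",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": float(target_cpu_percent),
        },
    }

policy = build_cpu_target_tracking_policy("web-tier-asg", 50)
# With real AWS credentials you would then apply it, e.g.:
#   boto3.client("autoscaling").put_scaling_policy(**policy)
```

Because the policy is plain data built by a function, it can be unit-tested and version-controlled before it ever touches an AWS account, which is what makes the deployment process testable and repeatable.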