
Software Problems and How Docker Addresses Them

Date: Jan. 12, 2016
Author: Jon Gallagher

Abstract

Organizations are leveraging Docker to become more agile, responsive, and lean as they compete in a challenging software environment. Docker lets organizations create easily deployable software systems that run on individual or clustered computers across a wide variety of platforms. This white paper explains how Docker makes it easy to update, test, and debug software, and provides foundational knowledge about Dockerfiles, Docker images, and containers.

Sample

Docker is a new approach to old but increasingly troublesome problems in the software industry, namely:

  • How can we deploy ever more powerful and complex software systems that are used by tens, hundreds, or thousands of users concurrently?
  • How can we create, update, and maintain this software, while giving developers the platforms to run the software for testing and debugging?
  • How can we facilitate testing and create automated systems that detect bugs and performance problems?
  • How can we deploy these systems, performing the system-administration equivalent of changing tires on a moving car, so that the users who depend on our software always find it available?
  • How can we use the lessons we learned in creating more powerful and flexible hardware to help us solve our software problems?

That last question is an important one, because hardware went through a similar evolution to address similar issues, and Docker was partly inspired by that evolution. As hardware became more powerful, the IT industry used the extra power to solve problems inherent in running complex systems.

The hardware was "chopped up" into virtual machines (VMs): software that is, for all intents and purposes, a separate machine, indistinguishable from one running on traditional hardware (a bare-metal computer). VMs made computers more efficient and cost-effective. You can buy one or more large boxes, then divide their capacity into multiple smaller VMs to run your systems. As a system grows and changes, you simply adjust the storage and processing power allocated to its VM.

To be as portable as possible, a VM definition specifies the operating system to be used, the number of CPUs to allocate, the amount of memory to assign, and any local storage to reserve. When these resources become available, the VM boots the operating system, starts any other necessary programs, and is then ready to run anything a bare-metal computer might run.

On the software side, new systems are also becoming more complex, and as such they increasingly depend on other software (for example, an application that depends on an image-rendering library). These dependencies must be managed carefully, because a misconfigured system may not run, may run incorrectly, or may have security vulnerabilities. One way to manage dependencies is the VM approach: package the desired software, along with everything it depends on, into a VM image. Then, when the system boots, everything is in the correct place, at the correct version level, with the correct configuration. The trouble with using VMs to deploy software is that a full server must be built around each package. The packages end up getting bigger and more complex to justify the resources dedicated to starting and running the VM. And because all the resources for the VM itself must be allocated up front, it is difficult to run multiple VMs on, say, a developer's laptop.
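As a minimal sketch (the application name, base image, and start command here are hypothetical, not taken from the paper), a Dockerfile captures an application together with its dependencies, so the image-rendering library from the example above is baked into the image at build time rather than resolved at deploy time:

    # Hypothetical Dockerfile for an app that depends on an image-rendering library.
    FROM ubuntu:14.04

    # The dependency is installed at build time, so its version is fixed in the image.
    RUN apt-get update && apt-get install -y imagemagick \
        && rm -rf /var/lib/apt/lists/*

    # The application code ships in the same image as its dependencies.
    COPY ./render-service /opt/render-service

    # The same command runs in development, testing, and production.
    CMD ["/opt/render-service/bin/start"]

Unlike a VM image, a container built from this image shares the host's kernel rather than booting its own operating system, which is why a developer can run several such containers side by side on a laptop.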

Meanwhile, with the rise of minimum viable products (MVPs) and agile approaches to development, project stakeholders and end users are demanding faster cycle times and more responsive software teams. Rather than waiting months or years for products, users expect releases in weeks, days, maybe even hours or minutes. Think of how often companies like Google, Twitter, Amazon, and Facebook change their software. In these companies, there is no notion of freezing a version, testing it, and then releasing it; the software is continuously changing. And because there is no "frozen" version, each new iteration of the software must be packaged so it can be deployed quickly.
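As shown in the hypothetical commands below (the registry, image name, and tag are illustrative), packaging an iteration with Docker reduces each release to a build, a push, and a run, which is what makes deploying by the hour practical:

    # Build an image for this iteration of the software.
    docker build -t registry.example.com/myapp:1.4.2 .

    # Publish it so any host in the fleet can pull it.
    docker push registry.example.com/myapp:1.4.2

    # Start the new version on a target host.
    docker run -d --name myapp registry.example.com/myapp:1.4.2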

At the same time, building a software system on a single programming language, a single set of libraries, and a single operating system is no longer standard practice. Now each group working on its own modules within a software system can choose the tools and environments that best meet those modules' needs. This new paradigm means that all of a module's dependencies, its libraries and run-time environments as well as the new code itself, must be part of the release.
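To make this concrete, here is a hypothetical sibling module from the same system, built on a different stack; nothing in it comes from the paper, it simply illustrates that each module's release carries its own run-time environment and libraries:

    # A second module in the same system chooses its own language and run time.
    FROM python:2.7

    # Library dependencies are declared and installed at build time.
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt

    # The module's code and its entire environment ship as one release artifact.
    COPY . /app
    CMD ["python", "/app/worker.py"]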

Download
Format: PDF
Total Pages: 9