This is the final part of a three-part overview discussing Architecting on Google Cloud Platform. Parts one and two addressed infrastructure services and augmented infrastructure. Today we’ll discuss Platform-as-a-Service (PaaS) solutions.
PaaS solutions were some of the first services offered by cloud providers. They arrived early because they deliver tremendous value: a developer can deploy code without requesting or provisioning infrastructure, a process that can otherwise take days or weeks. GCP offers a number of PaaS solutions that let you deploy your application without the complexity of provisioning and managing infrastructure. These solutions fall into two families: serverless applications and Linux containers.
Maybe you are a developer who has some code you just want to run, without worrying about infrastructure at all. Google App Engine (GAE) and Cloud Functions allow you to upload your code to Google and have it run when needed.
GAE deploys your application for you and, if needed, scales it nearly without limit and without any manual intervention. This is ideal for web applications or mobile backends, where you may not know the scaling demands ahead of time, such as traffic spikes from Black Friday shopping or a video on your site going viral. GAE also supports rolling updates of your application and splitting traffic across two versions for A/B testing. Consider GAE when deploying lightweight applications that are agnostic to the underlying infrastructure.
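To give a sense of how little configuration a GAE deployment needs, here is a minimal sketch of an app.yaml; the runtime, handler, and scaling values are illustrative assumptions, not taken from the article:

```yaml
# app.yaml -- minimal App Engine standard configuration (illustrative sketch)
runtime: python27          # pick the runtime matching your code
api_version: 1

handlers:
- url: /.*
  script: main.app         # WSGI application object defined in main.py

automatic_scaling:
  min_idle_instances: 1    # keep a warm instance ready for traffic spikes
  max_idle_instances: 10
```

From here, `gcloud app deploy` pushes a new version, and `gcloud app services set-traffic` can split traffic between two versions for the A/B testing described above.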
Cloud Functions, on the other hand, takes the classic cloud promise of "pay-as-you-go" to the next level. Your code runs only when needed, and you are billed in 100-millisecond increments of compute time. This means you pay only for the compute you use, not for idle time waiting for a server to start or for the next request to arrive. There are many use cases for running a simple function on demand. Mobile applications often communicate with a backend server to perform actions such as updating user information, saving a game, or sending an email. All of these cases can be handled by triggering the corresponding Cloud Function when a user performs the action in the mobile app.
Linux containers are the current buzzword in the tech industry. Their appeal is that they are much lighter weight than VMs while still isolating a running process. Docker images define how your application and its dependencies are bundled together, and Docker has the benefit of being an industry standard, so the same image definition works inside and outside of GCP. As of the time of publication, there are at least three ways to run standard Docker containers on GCP: Google App Engine Flex, VMs on Google Compute Engine (GCE), and Google Container Engine (GKE), Google's hosted Kubernetes.
Google App Engine (GAE) Flex
This is the simplest way to deploy a Docker container on GCP. Just like the standard GAE environment described above, your service scales and deploys automatically. The only difference is that you must provide a Dockerfile describing how to build the container image from your application. From there, GAE does the rest.
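The Dockerfile can be as short as the sketch below; the Node.js base image and start command are illustrative assumptions, not requirements of GAE Flex:

```dockerfile
# Illustrative Dockerfile for a Node.js service on GAE Flex.
FROM node:8
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
# GAE Flex routes incoming traffic to port 8080 by default.
EXPOSE 8080
CMD ["npm", "start"]
```

Paired with an app.yaml declaring `runtime: custom` and `env: flex`, a single `gcloud app deploy` builds the image and handles the deployment.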
Deploying VMs with Docker Images (Alpha Feature)
This mechanism works much like GAE Flex does under the covers. A standard Docker image can be specified when creating a VM; the VM is then provisioned with Docker installed and the specified container running. This lets you deploy containers while keeping all the manageability of GCE, such as load balancers, networks, and cost-saving strategies.
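At the time of writing this was an alpha feature, and the exact command and flag names shifted between releases, so treat the instance name, zone, and image path below as assumptions; the workflow looked roughly like this:

```shell
# Create a VM that boots with the specified container image running
# (alpha-era sketch; flag names varied across gcloud releases).
gcloud alpha compute instances create-with-container my-app-vm \
    --zone us-central1-a \
    --container-image gcr.io/my-project/my-app:latest
```

The resulting instance behaves like any other GCE VM, so it can sit behind the same load balancers and networks as your non-containerized workloads.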
Google Container Engine (GKE)
Kubernetes is an orchestration engine that deploys containers across many hosts, letting you scale and distribute your applications. Standing up a Kubernetes cluster typically requires provisioning multiple nodes, installing Kubernetes, and configuring the cluster. With GKE, you simply specify how many nodes you would like, and Google handles the rest.
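A minimal cluster of the kind described might be created as sketched below; the cluster name, zone, node count, and image path are all illustrative:

```shell
# Provision a three-node GKE cluster; Google installs and configures
# Kubernetes on the nodes for you.
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 3

# Once the cluster is up, standard Kubernetes tooling works against it.
kubectl run my-app --image=gcr.io/my-project/my-app:latest --replicas=3
```

Because GKE exposes a standard Kubernetes API, the same kubectl commands and manifests you use elsewhere work unchanged here.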
If you already have a Kubernetes cluster on-premises, you can burst to the cloud by setting up federation between your on-premises cluster and a cluster in GKE. This lets you distribute your applications across multiple cloud providers or reach a scale your on-premises infrastructure cannot support.
In each part of this series, we discussed working at a different level of the technology stack. Part one, Understanding GCP Infrastructure Services, focused on deploying the infrastructure on which to build a system architecture. Part two, Understanding GCP Augmented Infrastructure, built on that foundation, covering tools that improve the management of large-scale systems and further automate operations. In this final part, we moved past infrastructure and discussed tools that let you deploy applications directly.
Beyond the products covered in this series, GCP also provides tools for Identity and Access Management, Machine Learning, Big Data, Developer Tools, and more. Many of these products are covered in depth in the GCP courses offered by Global Knowledge, such as Google Cloud Platform Fundamentals: Core Infrastructure for an overview of all the products and services, Data Engineering on Google Cloud Platform for all things Big Data and Machine Learning, or GCP Fundamentals for AWS Professionals, which introduces the seasoned AWS pro to Google Cloud.