Understanding Google Cloud Platform Infrastructure Services

Date: Sep. 26, 2017
Author: Alex Meade

This is part one of a three-part overview discussing Architecting on Google Cloud Platform.

Google Cloud Platform (GCP) is Google’s public cloud offering comparable to Amazon Web Services and Microsoft Azure. The difference is that GCP is built upon Google's massive, cutting-edge infrastructure that handles the traffic and workload of all Google users. As such, GCP has courted numerous customers that need to run at an enormous global scale, such as Coca-Cola and Niantic (creators of Pokémon Go). A detailed explanation of how Google helped Pokémon Go scale to 50x their expected traffic in just a few days after the game launched can be found here.

GCP offers a wide range of services, from Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) to completely managed Software-as-a-Service (SaaS). In the first part of this series, we will discuss the available infrastructure components and how they provide a powerful and flexible foundation on which to build your applications.

Google Compute Engine (Virtual Machines)

Hosted virtual machines (VMs) are usually the first thing people think of when moving to the cloud. While GCP has many other ways to run your applications, such as App Engine, Container Engine, or Cloud Functions, VMs are still the easiest transition to the cloud for architects and developers who are used to having complete control over their operating system.

Google Compute Engine (GCE) supports running both Linux and Windows virtual machines. You can provision VMs from Google-maintained machine images or import images from your existing infrastructure. One common practice is to provision a VM from a base OS image, install all the software dependencies your applications need, then create a new image from that VM. This gives you a “pre-baked” image that you can deploy quickly, without waiting for software to install before the VM can begin doing valuable work. Another strategy is to install a common set of tools into an image, such as a company-wide compliance package, then create a “golden” image that you share with development teams to use for their applications.
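
As a rough illustration of the “pre-baked” image workflow, the sketch below uses the google-cloud-compute Python client library (the gcloud CLI or Terraform would work just as well). The project ID, zone, and resource names are placeholders, and error handling around the long-running operation is simplified.

```python
# Minimal sketch: turn the boot disk of an already-configured VM into
# a reusable "pre-baked" image. Names and IDs are placeholders.
from google.cloud import compute_v1

PROJECT = "my-project"   # hypothetical project ID
ZONE = "us-central1-a"

# Assume "builder-vm" has been provisioned, had all software
# dependencies installed, and has been stopped before imaging.
image = compute_v1.Image(
    name="webapp-prebaked-v1",
    source_disk=f"projects/{PROJECT}/zones/{ZONE}/disks/builder-vm",
)

operation = compute_v1.ImagesClient().insert(
    project=PROJECT, image_resource=image
)
operation.result()  # wait for the image to be created (no error handling here)
print("webapp-prebaked-v1 is ready to back new instances")
```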

There are a number of cost-saving strategies that can save a lot of money when deploying applications and infrastructure on the cloud. For example, preemptible instances are a unique feature that can cut the cost of running a VM by 80 percent! A preemptible instance is a normal VM except that Google can decide it needs the capacity back and delete the VM without asking. This works well if you design your applications to be fault tolerant and able to lose a node without disruption. A common use case is having a fleet of preemptible VMs all working on similar tasks; if one is deleted mid-task, the work is shifted to another VM.
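
To see how small the difference is, here is a hedged sketch of creating a preemptible VM with the google-cloud-compute Python client. Everything except the scheduling block is an ordinary instance definition; the project ID, zone, machine type, and image are placeholders.

```python
# Sketch: a preemptible worker VM. The only change from a normal VM is
# scheduling.preemptible=True, which lets Google reclaim the instance
# at any time in exchange for a much lower price.
from google.cloud import compute_v1

PROJECT = "my-project"   # hypothetical project ID
ZONE = "us-central1-a"

instance = compute_v1.Instance(
    name="batch-worker-1",
    machine_type=f"zones/{ZONE}/machineTypes/n1-standard-1",
    scheduling=compute_v1.Scheduling(preemptible=True),
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                source_image="projects/debian-cloud/global/images/family/debian-11",
            ),
        )
    ],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

compute_v1.InstancesClient().insert(
    project=PROJECT, zone=ZONE, instance_resource=instance
)
```

If such a worker is deleted mid-task, the orchestration layer (a queue, a managed instance group, or similar) simply hands the task to another VM in the fleet.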

Other cost-saving features in GCP include per-minute billing, committed use discounts, sustained use discounts, and a recommendations engine.

Hint: Cloud architects will want to be aware of these features to optimize costs within GCP for their organization.

Networking

Google has completely redesigned its network infrastructure from the ground up in order to accommodate its unparalleled scale. It operates a global, private, high-speed fiber network with custom routing protocols, hardware, and even topology. GCP sits on this networking stack but relieves you of managing the complexity of a physical network entirely. This changes how architects and developers need to think about networking and greatly simplifies management of firewall rules and routing, for example.

Network objects in GCP are global. This means you could have a VM in Hong Kong and a VM in Belgium on the same private network that are still able to communicate. You could even toss a global load-balancer in front of them and have your customers near both locations refer to your services by the same IP. These networks are also not tied to a single IP space, meaning you can have completely unrelated subnets (such as 10.1.1.0/24 and 192.133.71.0/24) in the same private network able to communicate. This communication between subnets can be easily filtered via firewall rules.

Firewall rules can be implemented in two different ways within GCP. Traditional “iptables-style” rules specify which IP ranges can communicate with which other IP ranges over certain protocols, with priority values determining which rule applies in a given scenario. This can be complex, since you need to know which servers sit on which IP ranges, and if those ranges change, you must update the rules. GCP also lets you configure firewall rules that are just as powerful using network tags, so a network architect can control traffic across the network simply by tagging resources.

Example rules:

Name               | Targets  | Source Filters        | Protocols / Ports | Action | Priority | Network
database-traffic   | backend  | Tags: frontend        | tcp:3306          | Allow  | 1000     | default
public-web-traffic | frontend | IP ranges: 0.0.0.0/0  | tcp:80            | Allow  | 1000     | default

These two rules allow traffic from the internet to reach only VMs tagged frontend (on port 80), and allow the frontend VMs to reach the backend VMs on the MySQL port (3306).
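
As a sketch, the database-traffic rule from the table could be created with the google-cloud-compute Python client roughly as follows. The project ID is a placeholder, and field names follow the library's generated protos (hence the unusual I_p_protocol spelling), so check them against the version you install.

```python
# Sketch: allow VMs tagged "frontend" to reach VMs tagged "backend"
# on the MySQL port only. No IP ranges are hard-coded anywhere.
from google.cloud import compute_v1

PROJECT = "my-project"   # hypothetical project ID

firewall = compute_v1.Firewall(
    name="database-traffic",
    network="global/networks/default",
    direction="INGRESS",
    priority=1000,
    source_tags=["frontend"],   # who may send the traffic
    target_tags=["backend"],    # who may receive it
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["3306"])],
)

compute_v1.FirewallsClient().insert(project=PROJECT, firewall_resource=firewall)
```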

Storage

Applications are typically not very useful unless they have access to your data. There are numerous storage options hosted on GCP, including persistent disks, Cloud Storage, and database solutions.

Persistent Disks

This is block storage for your VMs. Standard HDD and SSD options are available to attach to your VMs. These disks live independently of the VM and can be attached to multiple VMs concurrently (in read-only mode). Google automatically handles data redundancy and performance scaling behind the scenes. Local SSDs, which are physically co-located on the host of the VM, are available for high-performance applications.
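
A hedged sketch of that independence with the google-cloud-compute Python client: create a disk on its own, then attach it read-only to an existing VM. Project, zone, and resource names are placeholders.

```python
# Sketch: a persistent disk has its own lifecycle and can be attached
# to instances after the fact (read-only if shared by several VMs).
from google.cloud import compute_v1

PROJECT = "my-project"   # hypothetical project ID
ZONE = "us-central1-a"

# Create a standalone 200 GB disk (standard HDD by default; an SSD
# type can be chosen via the disk's type field).
compute_v1.DisksClient().insert(
    project=PROJECT,
    zone=ZONE,
    disk_resource=compute_v1.Disk(name="shared-data", size_gb=200),
).result()

# Attach it to an already-running VM without touching that VM's boot disk.
compute_v1.InstancesClient().attach_disk(
    project=PROJECT,
    zone=ZONE,
    instance="existing-vm",
    attached_disk_resource=compute_v1.AttachedDisk(
        source=f"projects/{PROJECT}/zones/{ZONE}/disks/shared-data",
        mode="READ_ONLY",
    ),
)
```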

Cloud Storage

Cloud Storage is Google’s object store. Arbitrary blobs of data are uploaded into a “bucket” and can then be versioned, widely replicated, and shared. Cloud Storage has multiple storage classes: regional, multi-regional, nearline, and coldline. Multi-regional buckets are automatically geo-redundant, which means all data is replicated across multiple data centers, so your data stays safe even if an entire data center goes offline. Nearline and coldline buckets are just as performant as the other storage classes for retrieving data, but they let you balance the cost of retrieval against the cost of storage. For example, coldline is the cheapest storage option but has the highest cost of retrieving data, which makes it ideal for backing up large amounts of data that may never need to be accessed.
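
For example, a coldline bucket for backups could be created and written to with the google-cloud-storage Python client roughly like this; the bucket and file names are placeholders.

```python
# Sketch: cheap long-term storage for backups that may never be read.
from google.cloud import storage

client = storage.Client()

bucket = client.bucket("my-backup-bucket")    # hypothetical bucket name
bucket.storage_class = "COLDLINE"             # cheapest to store, priciest to retrieve
client.create_bucket(bucket, location="us")   # "us" multi-region for geo-redundancy

blob = bucket.blob("backups/2017-09-26/db-dump.sql.gz")
blob.upload_from_filename("db-dump.sql.gz")   # local file to upload
```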

Database Solutions

Google has two NoSQL solutions: Datastore and Bigtable. For relational data, Google offers managed Cloud SQL instances (MySQL or PostgreSQL) and Cloud Spanner. Cloud Spanner is the world’s first relational database that offers the ability to scale to thousands of servers while maintaining high performance and strong consistency.
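
As a small taste of the NoSQL side, here is a sketch of writing and reading an entity with the Cloud Datastore Python client; the kind and property names are invented for illustration.

```python
# Sketch: Datastore is schemaless - build an entity under a key and save it.
from google.cloud import datastore

client = datastore.Client()

key = client.key("Player", "trainer-1")        # kind + name, both made up
entity = datastore.Entity(key=key)
entity.update({"level": 42, "team": "valor"})
client.put(entity)

print(client.get(key))  # -> <Entity('Player', 'trainer-1') {'level': 42, 'team': 'valor'}>
```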

In part two of this series, we will discuss how these core infrastructure pieces can be augmented, so we can seamlessly scale to massive proportions with no intervention by a systems administrator—all while maintaining the lowest costs possible given the workload.

To learn more about architecting on Google Cloud Platform (in preparation for the Google Cloud Architect Certification), check out Global Knowledge’s GCP training course offerings.

Related Posts

Part 2: Understanding Google Cloud Platform Augmented Infrastructure
Part 3: Understanding Google Cloud Platform Application Services

Related Courses

Google Cloud Fundamentals: Core Infrastructure
Architecting with Google Cloud Platform: Infrastructure
Architecting with Google Cloud Platform: Design and Process
