Google Kubernetes Engine vs AWS Elastic Kubernetes Service for Lay People

Partner with Monsieur Popp
10 min read · Aug 26, 2020

In the first part of this series, we took a quick look at the rise of microservices as an increasingly recommended architectural pattern for flexible, scalable, and cost-efficient applications. In this post, I will attempt to convey to a business person what a developer experiences when using the two most popular choices for running Kubernetes as a managed service: AWS and GCP. To do so, I'll compare and contrast the experience of setting up a cluster and deploying an application on it, before synthesizing the major differences between the two platforms. That will bring us to another trend taking root in the world of cloud computing: the automation of infrastructure provisioning, or, said otherwise, the configuration of infrastructure through code.

Setting up Elastic Kubernetes Service

AWS offers two options for setting up an EKS cluster: the AWS Command Line Interface (CLI) and the AWS Management Console.

To leverage the CLI, one must (a rough sketch of these commands follows below):

  1. First install the AWS CLI on your computer.
  2. Set up your credentials, whereby you are prompted for your AWS account access key, secret access key, the AWS Region in which you'd like to deploy your cluster, and the output format.
  3. Then, install eksctl. At a high level, eksctl is a CLI tool for creating clusters on EKS. It is written in Go, uses CloudFormation, and was created by a third party called Weaveworks. You can do pretty much everything with eksctl, including managing the nodes of your cluster once they've been created.
  4. Install kubectl, the command-line tool that lets you talk to the Kubernetes API server on the control plane (and, through it, reach the scheduler, controller manager, and etcd).
  5. From there, you can choose the compute option for your containers. AWS lets you select AWS Fargate (serverless compute, meaning you never manage the underlying EC2 instances yourself), Linux applications on EC2 nodes, or Windows applications on EC2 nodes.
  6. Following that, the cluster should take 15 minutes or so to get created.
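To make that more concrete, here is a minimal sketch of what the CLI path can look like from a terminal, assuming the AWS CLI, eksctl, and kubectl are already installed; the cluster name and region are placeholders, not prescriptions:

    # Step 2: configure credentials (prompts for access key, secret key, region, output format)
    aws configure

    # Steps 5-6: create a cluster; --fargate requests serverless compute,
    # or swap it for --nodes=2 to get EC2-backed worker nodes
    eksctl create cluster --name demo-cluster --region us-east-1 --fargate

    # Sanity check: confirm kubectl can reach the new cluster's API
    kubectl get nodes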

To leverage the AWS Management Console, the steps are similar, save for the end:

  1. Install the AWS CLI tool
  2. Configure AWS CLI credentials
  3. Install and configure kubectl
  4. Create the EKS cluster IAM role
  5. Create the EKS cluster's Virtual Private Cloud (VPC)
  6. Once those steps are completed, you can go directly into the console and configure your cluster from there, which is done by toggling to the 'create cluster' tab and filling out:

a) Name (a unique name for your cluster)
b) Kubernetes version
c) Cluster service role (the IAM role you created in step 4)
d) Secrets encryption (optional, but you can choose to enable envelope encryption of Kubernetes secrets using the AWS Key Management Service)
e) VPC
f) Public/private endpoint creation for the Kubernetes cluster API
g) Create a kubeconfig file. This is needed in order to configure access to the cluster when using the kubectl command-line tool (or other clients). (See the example after this list.)
h) Create compute for the cluster (similar to what was done with the CLI)
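For item g), a hedged example of how generating that kubeconfig typically looks from a terminal once the cluster exists; the region and cluster name are placeholders:

    # Write an entry for the new cluster into ~/.kube/config
    aws eks update-kubeconfig --region us-east-1 --name demo-cluster

    # Verify access: this should return the default Kubernetes service
    kubectl get svc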

In both cases, remember that this new EKS cluster is created without a worker node group, though it does create a pod execution role. AWS treats add-ons like autoscaling as required, but ironically they do not come standard out of the box; they need to be set up. As of the summer of 2020, managed nodes are handled by AWS, but their configuration is still handled by the developer. By that, I mean that first the correct Identity and Access Management (IAM) policy needs to be attached so worker nodes can autoscale. Once that is done, the Cluster Autoscaler needs to be deployed using the command kubectl apply -f (see the sketch below).
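As an illustration, a hedged sketch of that last step, assuming the autoscaling IAM policy is already attached to the worker-node role and that you have downloaded the example manifest published by the Kubernetes Cluster Autoscaler project (the file name here is a placeholder):

    # Deploy the Cluster Autoscaler into the kube-system namespace
    kubectl apply -f cluster-autoscaler-autodiscover.yaml

    # Confirm the autoscaler pod is running
    kubectl get pods -n kube-system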

If that sounds like a lot of mumbo jumbo, let me translate. Managed services by definition simplify and abstract whatever it is that you want to configure. The more components are abstracted away, the less work the developer does, which is exactly what they hope for. Ask any developer how much they enjoy setting up infrastructure, running unit tests, or doing health checks, and you might get a blank stare. What you just saw here is that AWS offers a managed service that requires a hefty time investment up front, and continued maintenance thereafter.

Setting up Google Kubernetes Engine

Google Cloud, similar to AWS, provides two ways to set up Google Kubernetes Engine: Cloud Shell (a terminal in the browser) or the gcloud command line installed locally.

Before we begin, we're assuming that you've already set up a GCP account. Once you've done that, visit the Kubernetes Engine page in the Google Cloud console, create or select a project, and wait for the API and related services to be enabled. This can take several minutes. At that point, you can use either Cloud Shell or your local shell to run the commands that will ultimately set up your GKE cluster.

To do so, one must:

  1. Use the gcloud tool to configure two default settings: the default project and the compute zone. Your project will have a unique identifier, better known as 'the project ID'.
  2. To set the former, use the command gcloud config set project <project-id> (and gcloud config set compute/zone <zone> for the latter).
  3. Then, on to the cluster itself. That is done with the command gcloud container clusters create <cluster-name> --num-nodes=1

That´s it.
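For the curious, here is what those steps can look like end to end, a minimal sketch assuming the gcloud SDK is installed and you are authenticated; the project, zone, and cluster names are placeholders:

    # Point gcloud at your project and preferred compute zone
    gcloud config set project my-project-id
    gcloud config set compute/zone us-central1-a

    # Create a one-node cluster, then wire up kubectl to talk to it
    gcloud container clusters create my-cluster --num-nodes=1
    gcloud container clusters get-credentials my-cluster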

Deploying an Application on your cluster with EKS and GKE

We've set up infrastructure, so let's dive into a real use case: deploying a web application.

For EKS, the process requires publishing a Docker image to the Elastic Container Registry (ECR) before its ultimate deployment on EKS (we'll assume you've dockerized your application first):

  1. Push your image to an AWS ECR repository
  2. Create a VPC (for the sake of simplicity we won't dive into this matter in this article)
  3. Create an EKS cluster with worker nodes, as described above. The easiest way to do this is to leverage the eksctl create cluster command, or you can describe the cluster in a YAML file (which can potentially be reused down the line if your organization goes multi-cloud and seeks to automate configuration, but that's a separate subject for another time)
  4. Either way, you will specify the number of worker nodes and the underlying infrastructure (i.e., which AWS instance types to spin up), whether on the command line or in that YAML file (see the sketch below)
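As an illustration of the YAML route, here is a hedged sketch of an eksctl cluster config; the names, region, and instance type are placeholders, not recommendations:

    # Write a minimal eksctl ClusterConfig and create the cluster from it
    cat <<'EOF' > cluster.yaml
    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: demo-cluster
      region: us-east-1
    nodeGroups:
      - name: workers
        instanceType: t3.medium
        desiredCapacity: 2
    EOF

    eksctl create cluster -f cluster.yaml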

Now, remember that eksctl sets things up so that kubectl, the tool that talks to the Kubernetes control plane, can reach your new cluster. But to create, modify, and delete Kubernetes resources (such as pods, deployments, and services), you need to apply deployment manifests to the cluster. So…

  1. To that end, create a deployment manifest, which (logically) deploys the app on pods.
  2. Then, create a service manifest to enable communication between pods, and with the external internet if need be (a combined sketch of both follows below).
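To give a feel for what those manifests contain, here is a minimal, hypothetical pair applied in one go; the names, image URI, and ports are placeholders rather than anything prescribed by AWS:

    # Apply a bare-bones Deployment and Service to the cluster
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
            - name: web-app
              # Placeholder ECR image URI
              image: <aws_account_id>.dkr.ecr.<region>.amazonaws.com/web-app:latest
              ports:
                - containerPort: 80
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web-app
    spec:
      type: LoadBalancer      # exposes the pods to the internet via a load balancer
      selector:
        app: web-app
      ports:
        - port: 80
          targetPort: 80
    EOF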

At that point you’ve deployed an application on EKS!

For GKE, the process is similar if not even more straightforward:

  1. First, build the container image. The goal here is to pull down the source code and Dockerfile locally and set a project ID variable equal to your Google Cloud project ID. That way, when you push the Docker image to Google's Container Registry, it will be tagged with the project ID that your GKE cluster will reference.
  2. From there, deploy your image to GKE. To do so, run kubectl create deployment <app-name> --image=gcr.io/<project-id>/<image> in Cloud Shell.
  3. You then specify the baseline number of pod replicas and add a HorizontalPodAutoscaler resource so your deployment flexes horizontally on its own. Both can be done in two commands.
  4. You can then expose the deployment to the internet if need be (see the sketch after this list).
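Put together, here is a hedged sketch of the whole GKE deployment flow; the app name, image tag, and ports are placeholders:

    # Build the image locally and push it to Google Container Registry
    export PROJECT_ID=my-project-id
    gcloud auth configure-docker
    docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
    docker push gcr.io/${PROJECT_ID}/hello-app:v1

    # Deploy, set a replica baseline, and attach a HorizontalPodAutoscaler
    kubectl create deployment hello-app --image=gcr.io/${PROJECT_ID}/hello-app:v1
    kubectl scale deployment hello-app --replicas=3
    kubectl autoscale deployment hello-app --cpu-percent=80 --min=1 --max=5

    # Expose the deployment to the internet through a load balancer
    kubectl expose deployment hello-app --type=LoadBalancer --port=80 --target-port=8080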

Contrasting and Comparing GKE and EKS

I'm not the first to compare and contrast AWS and GKE; metaphorically speaking, I am standing on the shoulders of giant developers.

That being said, in the hope of translating these complex topics into bite-size chunks that can be understood by less technically minded professionals, who are nonetheless keen to bridge the gap with the stakeholders they interact with on a regular basis, I will do my best to highlight what I see as the major differences:

Developer Experience

The AWS CLI tools, and add-ons like eksctl, that need to be set up in order to communicate with clusters are not the easiest path to navigate. Granted, once you get past the first few rounds of setting up and updating those clusters, you will become fluent in a language that is predominant in the cloud computing world. But if you have not yet started your journey to a microservices-based architecture, Google Kubernetes Engine is a solid option to explore because it is simpler to use.

I will grant credit where credit is due: AWS is investing in EKS, most recently enabling managed node groups (which means manually adding worker nodes is a thing of the past), and the gap between the EKS and GKE feature sets is closing. That still does not chip away at the argument around developer experience, but at this point I'm splitting hairs. At the end of the day, EKS takes 20 minutes and requires 11 complicated steps, while GKE can be done in 3 steps taking about 2 minutes.

Pardon the propaganda, but the code does not lie!

Automation is more mature with GKE

The concept of DevOps holds that the bridge between development and operations teams becomes smaller over time as the two functions coalesce in a beautiful symbiotic relationship. If that sounds utopian, you are not wrong. In fact, as teams are asked to become more effective in light of reduced budgets and headcount, developers often have to take on ops jobs, something they don't particularly enjoy. Automating as much as possible of the underlying processes for setting up, configuring, and maintaining infrastructure is therefore top of mind (we'll conclude on this matter in a bit). To that end, upgrading the control plane and the nodes, and monitoring their health, should be automated. That ability is more mature on GKE than on EKS. To expand: AWS requires developers to bolt on optional services that become essential as time passes, such as autoscaling, VPC integration, DNS, and so on. Node pools and master nodes need to be manually upgraded on EKS, while GKE handles that automatically. All in all, managing their life cycles is more time consuming on AWS than on GCP.
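As a small illustration of that maturity gap, GKE exposes auto-repair and auto-upgrade as simple flags at cluster creation time; a hedged sketch, with placeholder names:

    # Ask GKE to keep nodes patched and healthy on its own
    gcloud container clusters create my-cluster \
      --num-nodes=1 \
      --enable-autoupgrade \
      --enable-autorepair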

Cost

Again, I'm not the first one to touch on this: these two writers have already done so. I talked to the CEO of a security technology company, and he insisted that there was no technical differentiation between cloud providers since compute is essentially a commodity. That statement is neither totally wrong nor totally correct, and the GKE vs EKS comparison highlights why. At the end of the day, GKE and EKS run on compute, on virtual machines. Differentiating VMs is difficult, unless you can spin them up faster, run jobs more effectively, tear them down quickly, and scale them out and up. For instance, if you customize their sizes and charge for ephemeral machines available on demand, you start to drive a wedge of differentiation. But are those innovations creating moats that would make Berkshire Hathaway's mouth water when evaluating insurmountable competitive advantages? In the short term, perhaps; in the long term, no. In any case, GKE in the short term is able to reduce costs because its underlying infrastructure costs less: GKE is more effective at running jobs and at setting up and tearing down VMs, and the product is developer friendly, which reduces the complexity and brunt of developers' jobs, thus reducing the total cost of ownership. Holistically, then, GKE is more cost friendly than EKS. But in the long term, nothing stops AWS from catching up, so the CEO's argument that cloud is a commodity will eventually ring true.

The future is automation

To play devil's advocate, I could touch on other differentiators, such as the manner in which GKE and EKS approach cloud security and vulnerability scanning of their containers; I could talk about GCP's history of running more containers, since it initiated the Borg project that begat Kubernetes, and can thus guarantee higher levels of uptime; and so on and so forth. The purpose of this article is not to engage in propaganda (though I admit it may come out here and there, since GCP does sign my checks), but rather to educate lay people on the two most popular choices for managing containers at scale. Actually, choosing one or the other might be a moot point, since we are seeing a generational shift in IT begetting a world of multi (rather than mono) cloud. That world offers more possibilities, as companies can mitigate risk by not being locked in, all while reducing costs, since they can arbitrage the cost of compute, storage, analytics, and so on. For developers looking to simplify the process of automating infrastructure, the need to abstract such journeys becomes top of mind, and thus CLI tools and consoles lose their appeal. Some vendors like HashiCorp have been addressing this need for a while (and garnering eye-watering valuations, deservedly so), as they improve developer productivity while ensuring consistent ways to provision, secure, and maintain cloud landing zones (more on that in a later article).

So perhaps I've outgrown the remit of the article by highlighting a convenient truth: as multi-cloud becomes standard practice and containers become the pattern of choice, business stakeholders who want to empower their development teams should ask themselves: how can I make their lives easier once I have them over the hump of actually re-architecting (or building for the first time) a microservices-based application? By doing what managed services are already attempting to do: automating the time-consuming, manual components of any type of orchestration. That means being able to set up infrastructure through templates that can be recycled and reused for deployments on multiple clouds. Indeed, automation is the natural child of managed services, and the world of IT and DevOps will continue to trend in that direction.
