What Do Amazon's Elastic Container Service for Kubernetes and Fargate Mean for You?
As many of us in this space have long anticipated, Amazon has finally unveiled its answer to Google's and Microsoft's managed Kubernetes services at its annual re:Invent conference in Las Vegas. Dubbed Elastic Container Service for Kubernetes, or EKS, Amazon's offering simplifies the work of creating and managing your own Kubernetes cluster: users provision and manage their Kubernetes worker nodes while Amazon runs the control plane. Amazon also released AWS Fargate, a complement to its Elastic Container Service (ECS) that removes the infrastructure management required to run ECS clusters entirely and makes containers a first-class resource.
While this announcement is great news for those just getting started down the path to highly scalable, cloud-native containerized platforms, it might leave those currently using ECS or running their own Kubernetes clusters asking: “What does this mean for me?”
If you asked yourself that question, I hope this short article helps clear up some of the confusion.
What is Amazon EKS?
Kubernetes is a clustered container orchestration system that automates the creation, replication, scaling and management of Docker containers. While Kubernetes is an extremely powerful orchestrator, provisioning, scaling, updating and monitoring its clusters manually is time-consuming and difficult.
Amazon's Elastic Container Service for Kubernetes, or EKS, solves this by offering a managed service that greatly simplifies the process of creating and using Kubernetes clusters in AWS. As with Amazon's ‘classic’ Elastic Container Service offering, this is accomplished by allowing users to create Kubernetes workers, managed by three centralized and highly available Kubernetes masters, through the AWS Console or Command Line Interface (CLI). Kubernetes is automatically updated on these masters, and access credentials are managed by IAM instead of users having to create them on their own. Lastly, EKS can run in a VPC and fully supports AWS PrivateLink to keep all traffic between your EKS cluster and your other AWS resources entirely within AWS.
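Since EKS is still in preview, its final API has not been published. As a rough sketch only, creating a cluster might involve a payload like the one below; every parameter name here is an assumption modeled on existing AWS APIs, not the actual interface:

```python
# Hypothetical sketch: EKS is still in preview, so all field names below
# are assumptions modeled on other AWS APIs, not the published interface.
def build_create_cluster_request(name, role_arn, subnet_ids, security_group_ids):
    """Assemble the payload for a hypothetical EKS create-cluster call."""
    return {
        "name": name,
        "roleArn": role_arn,                  # IAM role assumed by the masters
        "resourcesVpcConfig": {               # the cluster runs inside your VPC
            "subnetIds": subnet_ids,
            "securityGroupIds": security_group_ids,
        },
    }

request = build_create_cluster_request(
    name="demo-cluster",
    role_arn="arn:aws:iam::123456789012:role/eks-service-role",
    subnet_ids=["subnet-aaaa", "subnet-bbbb"],
    security_group_ids=["sg-cccc"],
)
```

Once the service is generally available, a payload like this would be handed to the CLI or SDK, with IAM handling authentication rather than hand-rolled credentials.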
You can get more information on EKS from Jeff Barr's blog post here.
We’re currently using Amazon’s ECS: what does EKS mean for me?
Amazon's Elastic Container Service, or ECS, is a cloud-native container management service that runs entirely on Docker and EC2. Like Kubernetes, Swarm and other container orchestrators, ECS lets users write configuration files describing the containers they need and deploy them onto regular EC2 instances with a special agent installed and IAM permissions applied. This approach makes scheduling containers onto clusters across availability zones and regions easy and fast.
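As a concrete illustration (the image name and values here are made up), an ECS task definition is just a JSON document like the one below, which you would register with `aws ecs register-task-definition`:

```python
import json

# Illustrative ECS task definition: one nginx container with a dynamic
# host port, expressed as the JSON payload passed to
# `aws ecs register-task-definition`.
task_definition = {
    "family": "web-frontend",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:1.13",
            "cpu": 256,         # CPU units: 1024 units = 1 vCPU
            "memory": 512,      # hard memory limit in MiB
            "essential": True,  # stopping this container stops the Task
            "portMappings": [
                {"containerPort": 80, "hostPort": 0},  # 0 = dynamic host port
            ],
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

The ECS agent on each registered EC2 instance then pulls the image and runs the container wherever the scheduler places the Task.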
There are a few things to consider if you're currently using ECS and are weighing a move to EKS. Fundamentally, the biggest difference between the two services is that ECS is a simpler, more basic container orchestration tool than EKS, which brings both advantages and disadvantages.
On the one hand, ECS’s more limited feature set and seamless integration with other AWS services such as Elastic Load Balancing, Route 53 and CloudWatch makes it easy for businesses to quickly begin reducing their infrastructure management overhead and focus on shipping product faster and more safely.
On the other, there are costs to the simplicity of ECS that make EKS an attractive alternative for a number of reasons:
Firstly, Kubernetes has a significantly more mature syntax for application definition. ‘Pods’, ‘Deployments’, ‘ReplicaSets’ and ‘Services’ allow engineers to define the containers within an application in ways that would be cumbersome to replicate on ECS at scale. Kubernetes label selectors also make it much easier to describe how containers consume networking and storage resources. This lets engineers and developers spend less time making ECS work for them and more time deploying highly complex applications.
Secondly, load balancing, storage and networking are more flexible with Kubernetes. While ECS depends on Route 53 and Elastic Load Balancing (ELB) to distribute inbound traffic, Kubernetes lets you define Services with custom load-balancing rules. Kubernetes also allows you to use other load balancers, such as F5s and HAProxy. This can be advantageous for organizations with on-premises load balancers that cannot easily be migrated into AWS.
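To make the label-selector point concrete, here is a minimal sketch of a Deployment and the Service that fronts it. Kubernetes manifests are normally YAML; they are written as Python dicts here, and all names and images are illustrative:

```python
# A Deployment stamps out labeled Pods; apps/v1beta1 is the Deployment
# API version current at the time of writing.
deployment = {
    "apiVersion": "apps/v1beta1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "template": {
            "metadata": {"labels": {"app": "web"}},  # every Pod gets this label
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx:1.13",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}

# The Service finds its backends purely by label selector, not by name.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "selector": {"app": "web"},   # route to any Pod carrying this label
        "ports": [{"port": 80, "targetPort": 80}],
        "type": "LoadBalancer",       # on AWS this provisions an ELB
    },
}

assert service["spec"]["selector"] == deployment["spec"]["template"]["metadata"]["labels"]
```

Because the Service and the Pods are coupled only through labels, you can swap out the Deployment behind a Service without touching the Service definition at all.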
Lastly, monitoring, logging and health-checking are much more mature within Kubernetes than within ECS. While ECS uses CloudWatch and ELB health checks to gauge container health and CloudTrail for API auditing, health checks in Kubernetes (‘liveness probes’) are defined alongside your ‘Pods’ and allow for significantly more expressive policies, such as “run this custom command every 30 seconds.” Kubernetes clusters also commonly use Fluentd for log shipping and management, so containers can send logs to any system you want, including CloudWatch and CloudTrail (which are automatically enabled with EKS). This is convenient for organizations with on-premises centralized systems of record, as it reduces the work required to ingest that data.
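For example, a liveness probe that runs a custom command every 30 seconds sits right inside the container spec. The image and health endpoint below are illustrative:

```python
# A container spec with a liveness probe: run a custom command every
# 30 seconds and restart the container after three consecutive failures.
container = {
    "name": "app",
    "image": "example/app:1.0",  # illustrative image
    "livenessProbe": {
        "exec": {
            "command": ["/bin/sh", "-c", "curl -f http://localhost:8080/health"],
        },
        "initialDelaySeconds": 15,  # grace period before the first probe
        "periodSeconds": 30,        # probe every 30 seconds
        "failureThreshold": 3,      # failures tolerated before a restart
    },
}
```

ECS health checks, by contrast, live outside the task definition in the ELB configuration, one step removed from the application itself.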
Should I Use EKS?
Whether you should use EKS depends on your current level of maturity. For those just starting out, ECS remains a quick, capable and simple option for scheduling containers.
But as your platform grows in complexity, teams can hit roadblocks during releases due to the ECS limitations described above. Here, EKS makes for a flexible and powerful alternative that can help you accelerate towards a highly scalable, cloud-native container platform.
You can find a more comprehensive summary of the differences between ECS and Kubernetes here.
We are currently managing our own Kubernetes infrastructure: what does EKS mean for me?
At first glance, EKS is not too different from the offerings of other cloud platforms: they all essentially provide ‘Kubernetes in a box’. Your teams have likely either (a) already considered (and rejected) a managed service in favor of having more control over your infrastructure, or (b) found ECS or other orchestrators too limiting for your use case.
But this situation is liable to change, and there are reasons to consider managed Kubernetes when it does. At Contino, we believe that a big part of creating a truly DevOps culture is removing as many barriers as possible between the product and its customers. While defining application deployments with Kubernetes is already fairly ‘close to the code’ (application teams can keep their Deployment specification right alongside their app and register it with their Kubernetes cluster at the end of the deployment pipeline), at the end of the day, that Kubernetes cluster needs to be maintained by someone. If managing that cluster takes time away from your teams that could be better spent focusing on your customers, then moving to EKS makes a lot of sense.
Should I Use EKS?
If you want to make the leap (particularly to AWS) to an ecosystem where as much time as possible is diverted away from undifferentiated heavy lifting, then EKS makes sense. However, if the costs of maintaining your Kubernetes infrastructure are manageable, or if there are strategic benefits to running it yourself, then it might be wise to continue running your own clusters. As with most of Amazon's managed offerings, such as RDS and Aurora, the ability to create infrastructure on their PaaS comes at the expense of control over the underlying compute, networking and storage that makes that PaaS work. EKS is no exception. If you need to maintain custom configuration on your Kubernetes masters or have special Federation requirements, EKS will likely not work for you.
Tired of managing container infrastructure? You need AWS Fargate
AWS Fargate, also announced yesterday at re:Invent, takes the idea of ‘classic’ ECS even further by removing servers altogether and making the container a compute primitive. What does this mean? Simply put: forget VMs and setting up clusters. If you want containers on ECS, you simply deploy your task and give it the name of the cluster into which the containers will be placed, along with your usual networking, storage and resource preferences. That's literally it!
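In practice, a Fargate launch boils down to a single run-task request like the sketch below (passed to `aws ecs run-task`; all names and IDs are illustrative):

```python
# The essence of a Fargate launch: one request naming the cluster, the
# task definition, and the networking to attach. No instances anywhere.
run_task_request = {
    "cluster": "my-fargate-cluster",     # a logical grouping, not EC2 hosts
    "taskDefinition": "web-frontend:1",  # family:revision to launch
    "launchType": "FARGATE",             # nothing to provision or patch
    "count": 1,
    "networkConfiguration": {
        "awsvpcConfiguration": {         # each task gets its own interface
            "subnets": ["subnet-aaaa"],
            "securityGroups": ["sg-bbbb"],
        }
    },
}
```

Compare this with classic ECS, where the same request would fail unless you had already built and scaled an autoscaling group of container instances behind the cluster.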
What does Fargate mean for me?
Fargate represents a monumental shift in the evolution of containers for one simple reason: it puts the container on the same level as the instance. Let me explain.
Since Docker popularized containers for application delivery in 2013, engineers managing containerized applications have always had to worry about the underlying compute running Docker: systems had to be patched, upgraded and maintained, even when immutable deployment strategies were used to create them. Despite advances in orchestration tooling and managed services over the years (Swarm, Mesosphere, ECS, Nomad, Kubernetes, etc.), compute has always been an underlying concern in some way, shape or form.
Managing compute is a non-differentiating, time- and resource-intensive task that takes developers away from their main aim: getting valuable code to the customer. The most common solution to this problem is to make compute management more efficient by increasing collaboration and accelerating feedback loops. But IT organizations often silo the engineers doing this work into dedicated ‘DevOps’ teams that, although well-intentioned, can, over time, retreat into their own backlogs and feature development, becoming a silo themselves. The tight collaboration with development teams loosens, and requests between teams get thrown over walls in JIRA instead of being addressed collaboratively.
The result? The problem of managing compute still remains. Does this sound familiar?
But Fargate changes this by skipping the problem entirely. The only choices your teams need to make for application deployments are: “How many resources does my application need?”, “What kind of orchestrator do I want to use?” and “How much do I want to pay?” Worrying about having enough compute, or whether your clusters are up to date, is a thing of the past. Your teams can spend less time on operations overhead and more time building better products, serving your customers' needs faster and better, and, most importantly, avoiding the very silos your organization is trying to break down.
Fargate Is Awesome...But (Potentially) Costly
However, like ECS, Fargate's simplicity comes at a cost: literally. At the time of writing, Fargate-managed tasks cost $0.00001406/second per vCPU and $0.00000353/second per GB of memory. These numbers seem low, but they can be deceiving. Let me illustrate what I mean.
Let's say that your web application is composed of two ECS Services: a web frontend and an application service backing it. Each Service is configured to run at least one ECS Task and, at most, five Tasks at any given time with a 200% overrun (i.e. each Service can run up to ten tasks at maximum load: five that are starting up, and five that are terminating). Furthermore, each Task requires 0.75 vCPU cores, or 768 CPU units, and up to 1GB memory. At a maximum scale of 20 Tasks (10 per Service), your application will consume 15 vCPU cores and 20GB memory.
To support this load, you create an EC2 autoscaling group of t2.medium instances (2 vCPU cores and 4 GB of memory per instance) with a minimum size of one instance and a maximum size of eight instances. (Many will probably choose a non-burstable instance size to avoid worrying about CPU credits.)
This infrastructure will cost between $0.046 and $0.368/hour to operate ($0.046/hour for a t2.medium Linux instance, times up to 8 instances), network egress, monitoring and storage costs notwithstanding. Not bad.
Let's see how we fare with Fargate. Fargate only allows certain combinations of vCPU and memory. Since the 0.75 vCPU/1GB combination isn't supported, we'll move up to the next available one: 1 vCPU/1GB. Recall that the current pricing for Fargate is $0.00001406/second per vCPU and $0.00000353/second per GB of memory. Given this, each Task costs about $0.0633/hour, so we will pay at least $0.127/hour (two Tasks, one per Service) and, at most, $1.266/hour (20 Tasks). That's a 244% premium at the top end, and that's before CloudWatch, data and storage costs!
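The arithmetic above can be checked with a few lines of Python, using the per-second launch prices quoted in the text (these will change over time):

```python
# Reproducing the cost comparison, using the Fargate launch prices
# quoted above. Prices are USD and will change over time.
VCPU_PER_SECOND = 0.00001406   # per vCPU-second
GB_PER_SECOND   = 0.00000353   # per GB of memory per second
T2_MEDIUM_HOURLY = 0.046       # on-demand Linux t2.medium

def fargate_hourly(vcpus, memory_gb, tasks):
    """Hourly Fargate cost for `tasks` identically sized tasks."""
    per_task = (vcpus * VCPU_PER_SECOND + memory_gb * GB_PER_SECOND) * 3600
    return per_task * tasks

per_task    = fargate_hourly(1, 1, 1)    # one 1 vCPU / 1 GB task
min_fargate = fargate_hourly(1, 1, 2)    # two tasks: one per Service
max_fargate = fargate_hourly(1, 1, 20)   # twenty tasks at peak load
max_ec2     = T2_MEDIUM_HOURLY * 8       # eight t2.medium instances

print(round(per_task, 4))                   # 0.0633
print(round(min_fargate, 3))                # 0.127
print(round(max_fargate, 3))                # 1.266
print(round(max_fargate / max_ec2 - 1, 2))  # 2.44 -> a 244% premium
```

The same function makes it easy to re-run the comparison for your own task sizes and fleet shapes before committing to either launch type.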
Now, there are many optimizations that can be made to reduce that top-end cost: reducing your resource requirements, breaking up the application layer further, etc. Also, recall that this top end cost will only be accrued when your Service is scaling to its "desired count." Given that EC2 resources and Fargate bill per-second, not per-hour, real-world costs will likely be lower. However, my point still stands: the total cost of using Fargate is directly proportional to the number of vCPUs and GBs of memory your application consumes. If cost is the biggest factor for your platform, and you are willing to accept the risks involved with managing your own infrastructure, then Fargate might not be the best choice.
Despite this warning, I still believe that Fargate is, truly, the next level in the evolution of digital platforms. It provides the best of both worlds: a simple application deployment experience without the hassle of worrying about how to scale it.
Amazon's Elastic Container Service for Kubernetes and Fargate are exciting announcements with positive and (potentially) negative ramifications for many people. EKS removes the hassle of setting up and maintaining a Kubernetes cluster in AWS. Fargate takes this a step further and removes the hassle of maintaining container infrastructure entirely. This lets developers simply deploy containers and move on with their lives.
I think that Fargate will be attractive for many enterprises that are just getting started in their journey to a modern digital platform. It can help them skip any intermediate stages and jump straight to deploying containers with as little hassle as possible. EKS is a great next step for organizations that love the power of Kubernetes but don't love the steps required for getting it up and running.
As EKS is still in private beta, technical specifications, developer guides and deep-dive FAQs for it are unavailable. We intend to provide a webinar on how to use the service, accompanied by a more technical blog post and a comparison against existing services. In the meantime, sign up for the beta and cross your fingers for access! Fargate, however, is already available in us-east-1 for you to play with. Give it a try today!