Getting Started with Amazon EKS

Amazon’s Elastic Kubernetes Service, or more commonly, EKS, is a managed Kubernetes cluster offering from the makers of S3, EC2, and Route 53. With a managed Kubernetes cluster, you are responsible for providing (and paying for) worker machines that do all the heavy lifting in Kubernetes: run pods, manage networking, etc. With EKS, Amazon provides the rest – an API server, an etcd cluster, core controller managers, etc.

How refreshingly freeing!

That means you don’t have to dive deep into Kubernetes’ control plane, which can charitably be described as “a bit complicated.”

(Image: “Formal Proof of Correctness: Kubernetes’ Control Plane”)

What’s Involved in an EKS “Deployment”?

With the control plane safely identified as “somebody else’s problem”, all we need to worry about is:

  1. An Amazon Key ID and Secret Access Key.
  2. Some IAM Roles.
  3. An EC2 SSH Public/Private Key Pair.
  4. An Amazon VPC for our nodes to inhabit.
  5. Some region-specific Subnets.
  6. Networking and routing configurations.
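Several of these pieces can be produced by hand with the aws CLI, if you're curious what that looks like. For example, the SSH key pair in item 3 (the key name here is just an illustrative choice):

```shell
# Create an EC2 key pair and save the private key material locally.
$ aws ec2 create-key-pair \
    --key-name eks-workers \
    --query 'KeyMaterial' \
    --output text > eks-workers.pem

# SSH clients refuse world-readable private keys, so lock it down.
$ chmod 400 eks-workers.pem
```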

Meet My New Friend, eksctl

While we could do all this by hand, or by scripting the excellent (and official) aws CLI, there’s a tool from Weaveworks, called eksctl, that does almost all of this for us.

If you’re on macOS, the easiest way to install eksctl is via Homebrew:

$ brew tap weaveworks/tap
$ brew install weaveworks/tap/eksctl

More in-depth installation instructions, including those for other platforms like Linux and Windows, can be found in the official eksctl documentation.
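On Linux, for instance, the eksctl project suggests downloading a release tarball directly (the release URL below is the one the project documents; verify it before running):

```shell
# Download the latest eksctl release for this platform and unpack it.
$ curl --silent --location \
    "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" \
    | tar xz -C /tmp

# Move the binary somewhere on the PATH, then confirm it runs.
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
```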

Spinning up an EKS Cluster

Before you can use eksctl, you’ll have to provide credentials for your AWS account. Luckily, eksctl can read and understand the same credentials store that the official aws client uses.

$ aws configure
AWS Access Key ID [None]: YOUR-ACCESS-KEY-ID
AWS Secret Access Key [None]: YOUR-SECRET-ACCESS-KEY
Default region name [None]: us-west-2
Default output format [None]:

Be careful: that prompt for the Secret Access Key will echo characters to the screen as you type, so someone might be able to shoulder-surf. Do this step in the privacy of your own home and/or office!
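Under the hood, aws configure just writes plain INI files under ~/.aws/, and that's what eksctl reads. The result looks roughly like this:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR-ACCESS-KEY-ID
aws_secret_access_key = YOUR-SECRET-ACCESS-KEY

# ~/.aws/config
[default]
region = us-west-2
```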

Now, we can create a cluster, using the aptly named create cluster command:

$ eksctl create cluster

This… will take a while. Here’s what’s happening behind the scenes:

Subnets for three randomly-chosen availability zones are being created, in RFC-1918 private address space. You can see that in the first bit of output:

[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2c us-west-2b us-west-2d]
[ℹ]  subnets for us-west-2c - public: private:
[ℹ]  subnets for us-west-2b - public: private:
[ℹ]  subnets for us-west-2d - public: private:

Then, eksctl determines which AMI to use for the worker nodes. In this case, we’re using version 1.14 of Kubernetes:

[ℹ]  nodegroup "ng-305e968b" will use "ami-05d586e6f773f6abf" [AmazonLinux2/1.14]
[ℹ]  using Kubernetes version 1.14
[ℹ]  creating EKS cluster "unique-sheepdog-1577372989" in "us-west-2" region with un-managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup

Most of the magic of eksctl is done by Amazon CloudFormation stacks.

The first CloudFormation stack is for the sundry bits of infrastructure we need to provide for the EKS control plane to exist: our subnets, VPC, IAM roles, etc. The second stack defines the EC2 instances that comprise our node group.

In EKS, you can attach multiple groups of VMs to the same control plane, to provide different capabilities to your cluster. For example, you may have workloads that need to run on ARM processors, rather than x86-64 CPUs. You could provision ARM-based EC2 instances in an “arm” node group, and attach them to your existing cluster. In our case, we’ll only have the one node group.
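Attaching such a second node group is also an eksctl one-liner. A sketch, reusing the generated cluster name from the log output above, and assuming an ARM-based instance type and AMI are available in your region:

```shell
# Attach an additional, ARM-based node group to the existing cluster.
# (a1.large is an ARM instance type; check region/AMI support first.)
$ eksctl create nodegroup \
    --cluster unique-sheepdog-1577372989 \
    --name arm \
    --node-type a1.large \
    --nodes 2
```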

You’ll also notice that eksctl chose a name for my new cluster, since I forgot to provide the --name flag. There’s a whole bunch of flags you can pass to create cluster; I highly recommend you peruse the eksctl create cluster --help output to get a feel for the power and flexibility available to you.
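For instance, a more explicit invocation might look like the following (the cluster name, region, node count, and instance type here are purely illustrative):

```shell
# All of these flags are optional; eksctl picks sensible defaults otherwise.
$ eksctl create cluster \
    --name welcome-cluster \
    --region us-west-2 \
    --nodes 2 \
    --node-type m5.large
```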

Now, eksctl is going to spend a fair amount of time deploying EC2 instances via CloudFormation, so it’s high time you took a walk, finished Tolstoy’s War & Peace, or picked up a caffeine habit.

Why not try a nice cup of coffee or tea while you wait for Kubernetes? Go on, you deserve it!

All Deployed?


When eksctl is all done, it should have modified your kubectl configuration to allow you to access the new API server in all of its container-y goodness. Give it a whirl!

$ kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
                    Ready    <none>   3m50s   v1.14.7-eks-1861c5
                    Ready    <none>   3m52s   v1.14.7-eks-1861c5

(my coffee break ran a bit long, which is why my nodes are almost 4 minutes old.)
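If kubectl doesn’t see the new cluster (say, you’re working from a different machine), you can regenerate the kubeconfig entry yourself. A sketch, using the generated cluster name from the log output earlier:

```shell
# Write (or refresh) the kubeconfig entry for this cluster, then
# confirm that kubectl is pointed at it.
$ aws eks update-kubeconfig --region us-west-2 --name unique-sheepdog-1577372989
$ kubectl config current-context
```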

Making Ourselves At Home

Now that we have a working 2-node cluster, let’s try to deploy something to it. We’ve gone ahead and built a small cluster-warming gift for you, in the form of a YAML spec you can apply:

$ kubectl apply -f
namespace/welcome created
service/welcome created
deployment.apps/welcome created
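We won’t spoil the gift’s exact contents, but a minimal spec that creates those same three objects might look something like this (the nginx image and port are stand-ins, not what the gift actually ships):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: welcome
---
apiVersion: v1
kind: Service
metadata:
  name: welcome
  namespace: welcome
spec:
  type: LoadBalancer      # asks EKS for a cloud load balancer
  selector:
    app: welcome
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: welcome
  namespace: welcome
spec:
  replicas: 1
  selector:
    matchLabels:
      app: welcome
  template:
    metadata:
      labels:
        app: welcome
    spec:
      containers:
        - name: welcome
          image: nginx     # stand-in image for illustration
          ports:
            - containerPort: 80
```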

One of the neat things about the control plane that EKS gives you is that it is already wired into the full suite of Amazon’s Web Services. That means when you ask for a Load Balancer to front your brand new service, you get an honest-to-goodness Elastic Load Balancer, complete with its own DNS name!

$ kubectl -n welcome get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/welcome-5fc9bdcdd7-g96hn   1/1     Running   0          87s
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
service/welcome   LoadBalancer   80:32573/TCP   87s
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/welcome   1/1     1            1           87s
NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/welcome-5fc9bdcdd7   1         1         1       87s

The EXTERNAL-IP column of our LoadBalancer Service, service/welcome, gives the DNS name of our load balancer, which we can access with our web browser over HTTP. Sadly, we do not have TLS configured yet (we’ll most likely handle that on an Ingress Controller later), so just remember to access the service over http://.
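You can also pull the hostname straight out of the Service with kubectl’s jsonpath output and test it from the terminal:

```shell
# The load balancer's DNS name lives under the Service's status field.
$ LB=$(kubectl -n welcome get svc welcome \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# Fetch the welcome page over plain HTTP.
$ curl "http://$LB/"
```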

Go You!
