Setting up an EKS Kubernetes cluster for learning Nov 12 2022

The fastest way to learn something is through practice. Most of my work is on AWS, and I want to test how Kubernetes integrates with other AWS services, so running a local Kubernetes cluster is not the best option. To run experiments, I create a cluster using eksctl, run my tests, and then destroy it.

In this post, I will assume that you already have an AWS account and have the AWS CLI, eksctl, and kubectl installed on your machine.

This post will be short. It will only serve as a blueprint to get you started. I'll be using the us-east-1 region, but you can use any region you want.

If you don't know which region to use, you can check https://www.cloudping.info/ to measure the latency between your machine and each AWS region (it also works for other providers). There are many other factors to consider, such as cost differences between regions, but for a quick test like this, latency is a reasonable way to choose.
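If you prefer the terminal, here is a rough sketch that compares the TCP connect time to a few regional EC2 endpoints with curl. The region list is just an example, and connect time is only an approximation of latency:

# Compare TCP connect times to a few regional EC2 endpoints (sample regions only)
for region in us-east-1 us-west-2 eu-west-1; do
  printf '%s: ' "$region"
  curl -s -o /dev/null -w '%{time_connect}s\n' "https://ec2.$region.amazonaws.com/"
done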

SSH keys

We need SSH keys to access the cluster nodes. We will use the AWS CLI to create a key pair and register it with Amazon EC2 (EC2 stores the public key and hands us back the private key).

aws ec2 create-key-pair --key-name temp-key --query 'KeyMaterial' --output text > temp-key.pem

Reference: AWS Documentation - create key pair

If you are unfamiliar with SSH keys, you may want to read the first two sections of my article Understanding SSH Keys and using Keychain to manage passphrase on macOS.

The previous command will generate the private key and store it in temp-key.pem. To get the public key, you can run the following:

ssh-keygen -y -f temp-key.pem > temp-key.pub
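If you plan to SSH into the nodes later, it is also worth tightening the permissions on the private key, since ssh refuses to use a key that other users can read:

chmod 400 temp-key.pem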

Create the cluster

I prefer to keep my infrastructure as code. Even though we will be using eksctl imperatively, it is nice to have a place to refer back to for the initial settings. For that, we will use a configuration file. Create a new file, which I'll name cluster.yml, with the following content:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: temp-cluster
  region: us-east-1

nodeGroups:
  - name: ng-1
    # a t3.medium supports 3 ENIs, which limits how many pods you can run on each node
    instanceType: t3.medium
    desiredCapacity: 2
    ssh:
      publicKeyPath: temp-key.pub
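About the comment in the node group: with the default VPC CNI, the pod limit per node is roughly ENIs * (IPv4 addresses per ENI - 1) + 2, which for a t3.medium works out to 3 * (6 - 1) + 2 = 17 pods. If you want to look up those two numbers yourself, something like the following should work (assuming your credentials can describe instance types):

# Max ENIs and IPv4 addresses per ENI for t3.medium
aws ec2 describe-instance-types --instance-types t3.medium \
  --query 'InstanceTypes[0].NetworkInfo.[MaximumNetworkInterfaces,Ipv4AddressesPerInterface]' \
  --output text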

Now we can apply the configuration:

eksctl create cluster -f cluster.yml

The creation process takes a while, around 20 minutes.

Connect to the cluster

Once the cluster is created, we want to set up kubectl to manage the cluster. We can do that by running the following:

aws eks --region us-east-1 update-kubeconfig --name temp-cluster
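You can confirm that kubectl is now pointing at the new cluster:

kubectl config current-context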

And finally, we can start working on the newly created cluster. Let's check the services:

kubectl get svc

You should see something like the following:

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   10m

Perfect! We have a cluster up and running.
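It is also worth checking that both worker nodes registered and are Ready:

kubectl get nodes -o wide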

Now you can run your tests and destroy the cluster when you are done.
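As a quick smoke test before your real experiments, you could deploy something small and check that pods get scheduled across the nodes; hello-nginx is just a placeholder name:

# Deploy a throwaway nginx and see where its pods land
kubectl create deployment hello-nginx --image=nginx --replicas=2
kubectl get pods -o wide

# Remove it when you are done
kubectl delete deployment hello-nginx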

Modifying the Node Group

Keep in mind that eksctl treats node groups as largely immutable: settings such as the instance type cannot be changed in place. If you need a different instance type, you define a new node group in cluster.yml, create it, and delete the old one (see the sketch below). What you can adjust directly is the number of nodes in an existing group:

eksctl scale nodegroup --cluster=temp-cluster --name=ng-1 --nodes=3
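A hedged sketch of the replacement workflow, assuming you add a second node group to cluster.yml with the new instance type (I'm calling it ng-2 here, a made-up name) and that your eksctl version supports the --include filter:

# Create only the new node group defined in cluster.yml
eksctl create nodegroup --config-file=cluster.yml --include=ng-2

# Once the new nodes are Ready and your workloads have moved over, remove the old group
eksctl delete nodegroup --cluster=temp-cluster --name=ng-1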

Destroy the cluster

To clean up the cluster, we can run the following:

eksctl delete cluster -f cluster.yml
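Since we also created a key pair at the beginning, remember to delete it and the local key files once you no longer need them:

aws ec2 delete-key-pair --key-name temp-key
rm temp-key.pem temp-key.pub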

And that's it! Happy hacking!

Final thoughts

This was a quick and simple post showing how to create your own Kubernetes cluster so you can start experimenting. The instructions here are straightforward; the interesting part comes when you start modifying your cluster and node group configuration.

Anyways, I hope it was helpful.

** If you want to check what else I'm currently doing, be sure to follow me on twitter @rderik or subscribe to the newsletter. If you want to send me a direct message, you can send it to derik@rderik.com.