
This post is DEPRECATED

Please check out the most recent article instead.

What we are building

In this tutorial we will cover setting up an HA, privately networked Kubernetes cluster in AWS with Kubernetes kops.

  • Fully private VPC, housing utility and private subnets, with hybrid cloud capabilities over VPN
  • HA (Highly Available) masters spread across availability zones with private subnetting
  • Nodes spread across availability zones with private subnetting
  • Routing between subnets with NAT gateways
  • Elastic Load Balancers in front of the resources for public access
  • Bastion server for backend SSH access to the instances

Installing kops

Kubernetes kops is an open source tool from the Kubernetes project for deploying Kubernetes clusters against different cloud providers. We will be using it to do the heavy lifting in this tutorial.

Start by installing the most recent version of kops from the master branch.


brew update && brew install --HEAD kops

For non-OS X users, more information on installing kops can be found here.
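
For Linux, a minimal sketch of grabbing a release binary instead (the version and asset name here are illustrative, so check the kops releases page for the current one):

wget https://github.com/kubernetes/kops/releases/download/v1.4.1/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops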

Installing kubectl

We will also need a tool called kubectl. Think of this as a thin CLI client for the Kubernetes API, similar to the aws CLI tool we will be installing next.

You can download the tarball from the Kubernetes latest release page on GitHub, or follow the official install guide here.

wget https://github.com/kubernetes/kubernetes/releases/download/v1.4.6/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
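
To confirm the binary is on your PATH and working:

kubectl version --client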

Setting up your AWS environment

Setting up a kops IAM user

In this example we will be using a dedicated IAM user for kops. This user will need basic API security credentials in order for kops to function. Create the user and credentials using the AWS console. More information.
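
If you prefer to script this with the aws CLI (installed in the next step), a minimal sketch looks like the following. The user name kops and the AdministratorAccess policy are assumptions here; scope the permissions down to suit your environment.

aws iam create-user --user-name kops
aws iam attach-user-policy --user-name kops --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name kops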

Kubernetes kops uses the official AWS Go SDK, so all we need to do here is register security credentials using one of the officially supported AWS methods, defined here. Here is an example using the aws command line tool to set up your security credentials.

brew update && brew install awscli
aws configure
aws iam list-users

We should now be able to pull a list of IAM users from the API, verifying that our credentials are working as expected.
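
Alternatively, since kops uses the AWS Go SDK, the credentials can simply be exported as environment variables:

export AWS_ACCESS_KEY_ID=<access key id>
export AWS_SECRET_ACCESS_KEY=<secret access key>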

Setting up DNS for your cluster

We will need a publicly resolvable domain name for our cluster, so we need to make sure we have a hosted zone set up in Route 53. In this example we will be using nivenly.com as our hosted zone.

 ID=$(uuidgen) && aws route53 create-hosted-zone --name nivenly.com --caller-reference $ID 
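
DNS problems are a common cause of failed cluster bring-ups, so it is worth verifying that the NS records for the zone resolve publicly before continuing:

dig ns nivenly.com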

Setting up a state store for your cluster

In this example we will be creating a dedicated S3 bucket for kops to use. This is where kops will store the state and representation of your cluster, and it serves as the source of truth for our cluster configuration throughout the process. We will call this bucket nivenly-com-state-store. I recommend keeping bucket creation confined to us-east-1; otherwise more configuration will be needed here.

 aws s3api create-bucket --bucket nivenly-com-state-store --region us-east-1 
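
Optionally (but recommended), enable versioning on the bucket so previous versions of the cluster state can be recovered if something goes wrong:

aws s3api put-bucket-versioning --bucket nivenly-com-state-store --versioning-configuration Status=Enabled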

Creating your first cluster

Setup your environment for kops

Okay! We are ready to start creating our first cluster. Let's first set up a few environment variables to make this process as clean as possible.

export NAME=myfirstcluster.nivenly.com
export KOPS_STATE_STORE=s3://nivenly-com-state-store

Note: You don’t have to use environment variables here. You can always define the values using the --name and --state flags later.

Form your create cluster command

We will need to note which availability zones are available to us. In this example we will be deploying our cluster to the us-west-2 region.

aws ec2 describe-availability-zones --region us-west-2

Let's form our create cluster command. Here we want to define a few things.

  • --node-count 3
    • We want 3 Kubernetes nodes
  • --zones us-west-2a,us-west-2b,us-west-2c
    • We want to run our nodes spread across the 3 availability zones available to our account
    • This is a comma-separated list, pulled from the API in the previous request
  • --master-zones us-west-2a,us-west-2b,us-west-2c
    • This tells kops that we want 3 masters, running in HA across these 3 availability zones
  • --dns-zone nivenly.com
    • We define the DNS hosted zone we created earlier
  • --node-size t2.medium
    • We set our nodes to a defined instance size
  • --master-size t2.medium
    • We set our masters to a defined instance size as well
  • --topology private
    • We define that we want to use a private network topology with kops
  • --networking weave
    • We tell kops to use Weave for our overlay network
    • Many thanks to our friends at Weave for helping us make this a staple part of our clusters!
  • --image 293135079892/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-11-16
    • This is required as a temporary workaround until kops 1.4.2 is released (estimated Dec 17, 2016)

Kops will default to ~/.ssh/id_rsa.pub for backend access. You can override this with --ssh-public-key /path/to/key.pub.
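
If you do not have a key pair yet, a quick sketch of generating one:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa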

kops create cluster \
    --node-count 3 \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --master-zones us-west-2a,us-west-2b,us-west-2c \
    --dns-zone nivenly.com \
    --node-size t2.medium \
    --master-size t2.medium \
    --topology private \
    --networking weave \
    --image 293135079892/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-11-16 \
    ${NAME}

kops will deploy these instances using AWS auto scaling groups, so each instance should be ephemeral and will rebuild itself if taken offline for any reason.
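
The node count and instance sizes are managed through kops instance groups, which map onto these auto scaling groups. To inspect or resize them later:

kops get instancegroups --name ${NAME}
kops edit ig nodes --name ${NAME}

Changes made this way take effect when you run the kops update cluster command shown below.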

Cluster Configuration

We have now created the underlying cluster configuration; let's take a look at every aspect that will define our cluster.

 kops edit cluster ${NAME} 

This will open up the cluster config (which is actually stored in the S3 bucket we created earlier!) in your favorite text editor. This is where we can really tweak our cluster for our use case, if we want to. In this tutorial, we will leave the defaults in place.
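
Depending on your kops version, you can also dump the same configuration non-interactively, which is handy for reviewing or versioning it:

kops get cluster ${NAME} -o yaml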

For more information on these directives and the kops API, please check out the official kops documentation.

Apply the changes

Okay, we are ready to create the cluster in AWS. We do so by running the following command.

 kops update cluster ${NAME} --yes 
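
Note that running the same command without --yes performs a dry run, printing the changes kops would make without applying them, which is a good habit before committing:

kops update cluster ${NAME}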

Start using the cluster

The resources are deployed asynchronously, so even though kops has finished, that does not mean our cluster is built yet. A great way to check whether the cluster is online and the API is working is to use kubectl.

 kubectl get nodes 

After we verify the API is responding, we can now use the Kubernetes cluster.
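
Beyond the nodes registering, it is worth confirming that the system components came up cleanly:

kubectl get pods --namespace=kube-system
kubectl cluster-info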

Backend SSH access

We should now also have a bastion server behind an elastic load balancer in AWS that will give us access to the cluster over SSH. Grab the bastion ELB A record and the private IP of the instance you want to access from the AWS console, then SSH into the bastion as follows.
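
The -A flag below forwards your SSH agent through the bastion, so load your key into the agent first:

ssh-add ~/.ssh/id_rsa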

 
ssh -A admin@<bastion_elb_a_record>
ssh admin@<instance_private_ip>

What do you think?

I always love comments and suggestions on how to be better. Let me know your thoughts, if you have any good ones.

I wrote a lot of the code for the features in this article, so feel free to hit me up on GitHub if you want to follow along!
