Setting up an HA Kubernetes Cluster in AWS with private topology with kops 1.5.1

Today we announce the release of Kubernetes kops 1.5.1 LTS!

I figured there was no better way to announce the release than with an updated blog post on setting up an HA cluster with private topology using the new version!

What we are building

In this tutorial we will cover setting up an HA, privately networked Kubernetes cluster in AWS with Kubernetes kops 1.5.1.

  • Fully managed VPC in AWS, with automatically generated private and public subnets.
  • Outbound traffic from each private subnet managed through a NAT gateway with an Elastic IP.
  • Classic ELB fronting the Kubernetes API on TCP 443 (no firewall holes into the cluster).
  • Classic ELB fronting a bastion ASG for resilient SSH access for admins.
  • HA (Highly Available) Kubernetes masters spread across multiple availability zones in an ASG.
  • Kubernetes nodes spread across multiple availability zones in an ASG.
  • Public DNS alias for the Kubernetes API.

Installing kops 1.5.1

Kubernetes kops is an open source tool from the Kubernetes project for deploying and managing Kubernetes clusters in AWS, and it is the tool we will be using throughout this tutorial.


curl -sSL https://github.com/kubernetes/kops/releases/download/1.5.1/kops-darwin-amd64 -O
chmod +x kops-darwin-amd64
sudo mv kops-darwin-amd64 /usr/local/bin
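
To confirm the binary is installed and on your PATH, you can check the version kops reports:

kops version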

More information on installing kops, including instructions for non OS X users, can be found in the kops documentation.

Installing kubectl

We will also need a tool called kubectl. Think of it as a thin CLI client for the Kubernetes API, similar to the aws CLI tool we will be installing next.

curl -O https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
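
You can likewise verify kubectl; the --client flag skips contacting a cluster, since we have not built one yet:

kubectl version --client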

Setting up your AWS environment

Get your AWS credentials from the console

Use the AWS console to generate an AWS AccessKeyId and AWS SecretAccessKey (the official AWS documentation covers this). After you have your credentials, install the CLI tool and configure it with your new user. You can also use any other credential mechanism the AWS CLI supports.

brew update && brew install awscli
aws configure
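
If you want to double check which identity the CLI is now using, STS can report the account and user ARN behind the configured credentials:

aws sts get-caller-identity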

We strongly recommend using a single user with a select few IAM permissions to run kops. Thankfully kops provides a handy IAM creation script that will create a new user with the correct permissions. Be sure to note your new AccessKeyId and SecretAccessKey for the next step.

curl -O https://raw.githubusercontent.com/kubernetes/kops/master/hack/new-iam-user.sh
sh new-iam-user.sh <group> <user>
aws iam list-users
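
One simple way to switch the CLI (and kops) over to the new user is to export its credentials as environment variables; the values below are placeholders for the keys the script printed:

export AWS_ACCESS_KEY_ID=<your new AccessKeyId>
export AWS_SECRET_ACCESS_KEY=<your new SecretAccessKey>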

Setting up DNS for your cluster

We will need a publicly resolvable domain name for our cluster, so we need to make sure we have a hosted zone set up in Route53. In this example we will be using nivenly.com as our hosted zone.

 ID=$(uuidgen) && aws route53 create-hosted-zone --name nivenly.com --caller-reference $ID 
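
If the domain is registered elsewhere, remember to delegate it to the name servers Route53 assigns (they are listed in the create-hosted-zone output and in the Route53 console), or the cluster's DNS records will not resolve publicly. You can confirm the zone exists with:

 aws route53 list-hosted-zones-by-name --dns-name nivenly.com 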

More information on more advanced DNS setups can be found in the kops DNS documentation.

Setting up a state store for your cluster

Kops will store a representation of your Kubernetes cluster in AWS S3. This is called the kops state store. It is important to note that kops DOES NOT store any record of which resources are deployed; that would create two sources of truth (the AWS API and the state store). Rather, kops merely stores a definition of the Kubernetes cluster, which is then applied to AWS via kops.

We will call our state store in this example nivenly-com-state-store.

 aws s3api create-bucket --bucket nivenly-com-state-store --region us-east-1 
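
It is also worth enabling versioning on the state store bucket, so an accidental change to the cluster definition can be rolled back:

 aws s3api put-bucket-versioning --bucket nivenly-com-state-store --versioning-configuration Status=Enabled 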

Creating your first cluster

Getting ready

Okay! We are ready to start creating our first cluster. Let's first set up a few environment variables to make this process as clean as possible.

export NAME=myfirstcluster.nivenly.com
export KOPS_STATE_STORE=s3://nivenly-com-state-store

Form your create cluster command

We will need to note which availability zones are available to us. These are different for every AWS account. In this example we will be deploying our cluster to the us-west-2 region.

 aws ec2 describe-availability-zones --region us-west-2 
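
If you only want the zone names, a --query filter trims the output down to just the values we need for the next command:

 aws ec2 describe-availability-zones --region us-west-2 --query 'AvailabilityZones[].ZoneName' --output text 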

Let's form our create cluster command. Here we want to define a few things:

  • --node-count 3
    • We want 3 Kubernetes nodes
  • --zones us-west-2a,us-west-2b,us-west-2c
    • We want to run our nodes spread across the 3 availability zones available to our account
    • This is a CSV list, pulled from the API in the previous request
  • --master-zones us-west-2a,us-west-2b,us-west-2c
    • This will tell kops to spread masters across those availability zones
    • Because there is more than 1, the masters will automatically run in HA
  • --dns-zone nivenly.com
    • We define the DNS hosted zone we created earlier
  • --node-size t2.large
    • We set our nodes to a defined instance size
  • --master-size t2.large
    • We set our masters to a defined instance size
  • --topology private
    • We define that we want to use a private network topology with kops
    • This is what tells kops to build the private topology described above
  • --networking calico
    • We tell kops to use Calico for our overlay network
    • Overlay networks are required for this configuration
    • Many thanks to our friends at Calico for helping us get this into kops!
  • --bastion
    • Add this flag to tell kops to create a bastion server so you can SSH into the cluster

Kops will default to ~/.ssh/id_rsa.pub for SSH access to the instances it creates. You can override this with --ssh-public-key /path/to/key.pub
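
If you do not have a key pair at that path yet, a minimal way to generate one (adjust the path if you plan to override --ssh-public-key) is:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa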

kops create cluster \
    --node-count 3 \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --master-zones us-west-2a,us-west-2b,us-west-2c \
    --dns-zone nivenly.com \
    --node-size t2.large \
    --master-size t2.large \
    --topology private \
    --networking calico \
    --bastion \
    ${NAME}

kops will deploy these instances using AWS auto scaling groups, so each instance should be ephemeral and will rebuild itself if taken offline for any reason.
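
Note that at this point nothing has actually been created in AWS; the cluster definition only exists in the state store. You can list what kops has recorded so far with:

kops get cluster
kops get instancegroups --name ${NAME}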

Cluster Configuration

We have now created the underlying cluster configuration. Let's take a look at every aspect that will define our cluster.

 kops edit cluster ${NAME} 

This will open up the cluster config (which is actually stored in the S3 bucket we created earlier!) in your favorite text editor. This is where we can optionally tweak the cluster for our use case. In this tutorial, we will leave the defaults as they are.

For more information on these directives and the kops API, please check out the official kops documentation.
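
If you would like to see what kops is going to build before committing to anything, running the update command without --yes prints a dry-run preview of the AWS resources it would create:

 kops update cluster ${NAME} 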

Apply the changes

Okay, we are ready to create the cluster in AWS. We do so by running the following command.

 kops update cluster ${NAME} --yes 

Start using the cluster

The AWS resources are deployed asynchronously, so even though the kops command has finished, that does not mean our cluster is built yet. A great way to check whether the cluster is online and the API is working is to use kubectl.

 kubectl get nodes 
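
DNS records for the API ELB can take a few minutes to propagate, so do not worry if this fails at first. Newer kops releases also ship a validate command that checks the registered masters and nodes against the instance groups; availability may depend on your kops version:

 kops validate cluster ${NAME} 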

After we verify the API is responding, we can now use the Kubernetes cluster.

Backend SSH access

We should also now have a bastion server behind an elastic load balancer in AWS that will give us access to the cluster over SSH. Grab the bastion ELB A record and the private IP of the instance you want to access from the AWS console, then SSH into the bastion as follows.

 
ssh-add ~/.ssh/id_rsa
ssh -A admin@bastion.myfirstcluster.nivenly.com
ssh admin@<master_private_ip>
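
If you would rather not hunt through the console for the bastion ELB, its DNS name can also be pulled with the aws CLI; the load balancer name kops generates can vary, so the query below simply lists the DNS names of all classic ELBs in the region for you to pick from:

aws elb describe-load-balancers --query 'LoadBalancerDescriptions[].DNSName' --output text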

Comments
  1. Jim
    February 21, 2017 at 22:10

    To enable the bastion server, the --bastion flag is now required.

  2. Bob Henkel
    April 6, 2017 at 15:56

    Thanks so much! This is a great write up and it helped a ton!

  3. Filipe Grillo
    April 6, 2017 at 18:17

    Hi Kris!
    First of all thanks for the great tutorial! I’m much more comfortable with Kubernetes and Kops now 🙂

    But I am having trouble spinning up Pods on this new cluster I’ve created with these steps. Every pod I try gets the same error:

    Error syncing pod, skipping: failed to “SetupNetwork” for “nginx-171435654-qs05b_default” with SetupNetworkError: “Failed to setup network for pod \”nginx-171435654-qs05b_default(6ac7a72a-1aef-11e7-8d2f-0e662640c304)\” using network plugins \”cni\”: Get https:///api/v1/namespaces/default/pods/nginx-171435654-qs05b: http: no Host in request URL; Skipping pod”

    And then it gets stuck in the “ContainerCreating” state, have you had any issues like this?
    I’m reading about Calico being the issue, but so far I haven’t been able to fix it.

    • Filipe Grillo
      April 6, 2017 at 22:17

      Got it working after changing the kubernetesVersion to 1.6.1 and rollingUpdating the cluster 🙂

      • Prataksha Gurung
        May 8, 2017 at 12:53

        How did you change the KubernetesVersion to 1.6.1? I am having the same issue

        • habibi
          August 1, 2017 at 15:27

          I am not sure if this is the only way, but you can use kops edit cluster ${NAME}, then modify the version. After that, run kops rolling-update.

  4. Neverfox
    April 19, 2017 at 13:52

    Is there no longer a way to stand up a cluster without the whole domain name process at the beginning? This was never required with the old kube-up method. I would simply stand up the cluster and kubectl just worked (because of keys etc). If I needed external services, I used the public DNS of an ELB. Why the extra hoop to jump through now, and is there a way to get started without a domain name?

    • Steve Coffman
      July 15, 2017 at 15:22

      Actually, if you check out the experimental gossip-based cluster support in kops, you can avoid DNS. I’m not sure how the classic ELB works with it though.

  5. Mike
    June 29, 2017 at 12:13

    Just to preface this, I am a bit new to k8s. Could this post discuss (or maybe another post) how one would go about deploying a front-end/back-end web application on this topology?

  6. Raja
    July 27, 2017 at 09:04

    Hi Nova, Many thanks for this article.

    I have a quick question though. Can I set the value to “flannel” for the --networking option? In other words, does this work for flannel instead of Calico?