Category: K8s

Today we announce the release of Kubernetes kops 1.5.1 LTS!

I figured what better way to announce the release than with an updated blog post on setting up an HA cluster on private topology!

What we are building

In this tutorial we will cover setting up an HA, privately networked Kubernetes cluster in AWS with Kubernetes kops 1.5.1.

  • Fully managed VPC in AWS, with automatically generated private and public subnets.
  • Outbound traffic managed through a NAT gateway and elastic IP in each private subnet.
  • Classic ELB fronting the Kubernetes API on TCP 443 (no firewall holes opened into the cluster).
  • Classic ELB fronting a bastion ASG for resilient SSH access for admins.
  • HA (Highly Available) Kubernetes masters spread across multiple availability zones in an ASG.
  • Kubernetes nodes spread across multiple availability zones in an ASG.
  • Public DNS alias for the Kubernetes API.

Installing kops 1.5.1

Kubernetes kops is an open source tool from the Kubernetes project for deploying Kubernetes clusters in AWS. We will be using it throughout this tutorial.


curl -sSL https://github.com/kubernetes/kops/releases/download/1.5.1/kops-darwin-amd64 -O
chmod +x kops-darwin-amd64
sudo mv kops-darwin-amd64 /usr/local/bin/kops
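
You can verify the binary is on your PATH with:

kops version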

More information on installing kops can be found here for our non-OS X users.

Installing kubectl

We will also be needing a tool called kubectl. Think of this as a thin CLI client for the Kubernetes API, similar to the aws CLI tool we will be installing next.

curl -O https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin

Setting up your AWS environment

Get your AWS credentials from the console

Use the AWS console to get an AWS AccessKeyId and AWS SecretAccessKey (see the official documentation). After you have your credentials, download the CLI tool and configure it with your new user. You can also use any of the methods defined here.

brew update && brew install awscli
aws configure
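
If you would rather not write credentials to disk, the AWS tooling also reads the standard environment variables. A minimal sketch, with placeholder values you would substitute:

export AWS_ACCESS_KEY_ID=<access_key_id>
export AWS_SECRET_ACCESS_KEY=<secret_access_key>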

We strongly recommend using a single user with a select few IAM permissions to run kops. Thankfully, kops provides a handy IAM creation script that will create a new user with the correct permissions. Be sure to note the new user's AccessKeyId and SecretAccessKey for the next step.

curl -O https://raw.githubusercontent.com/kubernetes/kops/master/hack/new-iam-user.sh
sh new-iam-user.sh <group> <user>
aws iam list-users

Setting up DNS for your cluster

We will need a publicly resolvable domain name for our cluster, so we need to make sure we have a hosted zone set up in Route53. In this example we will be using nivenly.com as our hosted zone.

 ID=$(uuidgen) && aws route53 create-hosted-zone --name nivenly.com --caller-reference $ID 
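
The domain must actually resolve through the NS records Route53 assigns to the zone, or the cluster's API records will never be reachable. A quick way to sanity-check the delegation (using our nivenly.com example):

dig ns nivenly.com +short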

More information on advanced DNS setups can be found in the kops documentation.

Setting up a state store for your cluster

Kops will store a representation of your Kubernetes cluster in AWS S3. This is called the kops state store. It is important to note that kops DOES NOT store any record of which resources are actually deployed; that would create two sources of truth (the AWS API and the state store). Rather, kops merely stores a definition of the Kubernetes cluster, which is then applied to AWS by kops.

We will call our state store in this example nivenly-com-state-store.

 aws s3api create-bucket --bucket nivenly-com-state-store --region us-east-1 
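
The kops documentation also recommends enabling versioning on the bucket, so you can recover earlier versions of your cluster state if something goes wrong:

aws s3api put-bucket-versioning --bucket nivenly-com-state-store --versioning-configuration Status=Enabled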

Creating your first cluster

Getting ready

Okay! We are ready to start creating our first cluster. Let's first set up a few environment variables to make this process as clean as possible.

export NAME=myfirstcluster.nivenly.com
export KOPS_STATE_STORE=s3://nivenly-com-state-store
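
Note: You don't have to use environment variables here. You can always define the values using the --name and --state flags later.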

Form your create cluster command

We will need to note which availability zones are available to us. These are different for every AWS account. In this example we will be deploying our cluster to the us-west-2 region.

 aws ec2 describe-availability-zones --region us-west-2 

Let's form our create cluster command. Here we want to define a few things.

  • --node-count 3
    • We want 3 Kubernetes nodes
  • --zones us-west-2a,us-west-2b,us-west-2c
    • We want to run our nodes spread across the 3 availability zones available to our account
    • This is a CSV list, pulled from the API in the previous request
  • --master-zones us-west-2a,us-west-2b,us-west-2c
    • This will tell kops to spread the masters across those availability zones
    • Because there is more than one, the masters will automatically run in HA
  • --dns-zone nivenly.com
    • We define the DNS hosted zone we created earlier
  • --node-size t2.large
    • We set our nodes to a defined instance size
  • --master-size t2.large
    • We set our masters to a defined instance size
  • --topology private
    • We define that we want to use a private network topology with kops
    • This is what tells kops to build the private topology described above
  • --networking calico
    • We tell kops to use Calico for our overlay network
    • Overlay networks are required for this configuration
    • Many thanks to our friends at Calico for helping us get this into kops!
  • --bastion
    • Add this flag to tell kops to create a bastion server so you can SSH into the cluster

Kops will default to ~/.ssh/id_rsa.pub for backend access. You can override this with --ssh-public-key /path/to/key.pub
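
If you don't have a key pair yet, you can generate one first:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa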

kops create cluster \
    --node-count 3 \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --master-zones us-west-2a,us-west-2b,us-west-2c \
    --dns-zone nivenly.com \
    --node-size t2.large \
    --master-size t2.large \
    --topology private \
    --networking calico \
    --bastion \
    ${NAME}

kops will deploy these instances using AWS auto scaling groups, so each instance is ephemeral and will be rebuilt automatically if taken offline for any reason.
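
kops models each of these auto scaling groups as an instance group. You can list them (this relies on the NAME and KOPS_STATE_STORE variables we exported earlier):

kops get instancegroups --name ${NAME}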

Cluster Configuration

We have now created the underlying cluster configuration. Let's take a look at every aspect that will define our cluster.

 kops edit cluster ${NAME} 

This will open up the cluster config (which is actually stored in the S3 bucket we created earlier!) in your favorite text editor. This is where we can optionally really tweak our cluster for our use case. In this tutorial, we leave it at the defaults for now.
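
If you just want to inspect the config without opening an editor, kops can also print it (depending on your kops version):

kops get cluster ${NAME} -o yaml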

For more information on these directives and the kops API, please check out the official kops documentation.

Apply the changes

Okay, we are ready to create the cluster in AWS. We do so by running the following command.

 kops update cluster ${NAME} --yes 
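
Tip: running the same command without --yes performs a dry run, printing the changes kops would make without applying anything:

kops update cluster ${NAME}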

Start using the cluster

The resources are deployed asynchronously, so even though kops has finished, that does not mean our cluster is built. A great way to check whether the cluster is online and the API is working is to use kubectl.

 kubectl get nodes 

After we verify the API is responding, we can now use the Kubernetes cluster.
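
As a quick smoke test, we can launch a throwaway workload (this assumes the public nginx image is pullable from your nodes):

kubectl run nginx --image=nginx
kubectl get pods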

Backend SSH access

We should also now have a bastion server behind an elastic load balancer in AWS that gives us SSH access to the cluster. Grab the bastion ELB's DNS record and the private IP of the instance you want to access from the AWS console, then SSH into the bastion as follows.

 
ssh-add ~/.ssh/id_rsa
ssh -A admin@bastion.myfirstcluster.nivenly.com
ssh admin@<master_private_ip>
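
The -A flag enables SSH agent forwarding, which is what lets us hop from the bastion to a master's private IP without ever copying our private key onto the bastion.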
