Okay – wow – it’s been a long time since I have blogged here…
❤️ Hi everyone! I missed you! ❤️
So due to some unforeseen circumstances in the past ~year or so… I was unable to continue contributing to my pride and joy Kubicorn! It's been a real bummer, but I am glad to say that those days are officially behind me!
The good news? I am back! And we are working hard on making kubicorn even more amazing!
So without further delay, let's start looking into what I wanted to update everyone about kubicorn.
What is kubicorn?
Kubicorn is a Kubernetes infrastructure management tool!
kubicorn attempts to solve the problem of managing Kubernetes-based infrastructure in a cloud native way. The project is supported by the Cloud Native Computing Foundation (CNCF) and attempts to offer a few things unique to cloud native infrastructure:
- Modular components
- Vendorable library in Go
- A guarantee-less framework that is flexible and scalable
- A solid way to bootstrap and manage your infrastructure
- Software to manage infrastructure
These patterns are fundamental to understanding cloud native technologies, and (in my opinion) are the future of infrastructure management!
How do I know this? I literally wrote the book on it…
TL;DR: it's a badass command line tool that goes from 0 to Kubernetes in just a few moments.
What's new in kubicorn?
We have officially adopted the Kubernetes cluster API
The cluster API is an official working group in the Kubernetes community.
The pattern dictates that while bootstrapping a cluster, kubicorn will first create the Kubernetes control plane.
Then kubicorn will define the control plane itself, as well as the machines that will serve as the nodes (workhorses) of the Kubernetes cluster.
Finally, kubicorn will install a controller that will realize the machines defined in the previous step.
This allows for arbitrary clients to adjust the machine definition, without having to care how kubicorn will autoscale their cluster.
Autoscaling infrastructure, the kubernetes way!
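To make the pattern a little more concrete, here is a rough, illustrative sketch of what a machine definition might look like. The field names below are my assumptions based on early working group drafts, not a finalized schema:
# Illustrative only: these field names are assumptions, not a finalized cluster API schema.
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: my-cluster-node-1
spec:
  providerConfig:        # provider-specific details (region, size, image) would live here
    value:
      region: sfo2
      size: 2gb
  versions:
    kubelet: 1.7.3
The controller's job is then to notice definitions like this and make the corresponding infrastructure exist.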
So this is an official working group of Kubernetes, and more information can be found here!
We are currently working on building out this framework, and if you think it is a good idea (like we do) feel free to get involved.
We have a dope website
So many thanks to Ellen for her hard work in building out our fabulous site. If you would like to contribute please let us know!
We have moved to our own GitHub organization
Despite my attempts to keep the project as one of my pets, it's just gotten too big. It has now moved to its own organization. We would love for YOU to get involved. Just open up an issue or contact Kris Nova if you would like to join.
We are just now starting on the controller
Writing the infrastructure controller is probably the most exciting thing to happen to the project yet.
We literally JUST started the repository. Get in now, while the getting is good!
We need help with the Dockerfile, with the idea behind the controller, and even with the code to make it so. If you want to get involved, now is your chance!
We want your help
Seriously.
Our lead maintainer Marko started out as someone on the internet just like you. He is now a super admin of the project, and is a rock star at keeping up with the day-to-day work of kubicorn.
We would love to help you become a cloud native badass if you would like to contribute. Please join the slack channel and start talking to us.
And of course as always…
Let me know what you think in the comments below!
Hey everyone!
So a huge thanks to HashiConf for letting me come out and talk about this stuff in person! But for those of you who missed it, or want more information, there is this blog post on the matter as well.
So this is just a quick technical follow-up on the tool terraformctl that I used in my session to get Terraform up and running inside of Kubernetes as a controller!
What is terraformctl?
A command line tool and gRPC server that is pronounced Terraform Cuddle.
The GitHub repo can be found here!
It's a philosophical example of how infrastructure engineers might start looking at running cloud native applications to manage infrastructure. The idea behind the tool is to introduce this new way of thinking, and not necessarily to be the concrete implementation you are looking for. This idea is new, and therefore a lot of tooling is still being crafted. This is just a quick and dirty example of what it might look like.
Terraformctl follows a simple client/server pattern.
We use gRPC to define the protocol in which the client will communicate with the server.
The server is a program written in Golang that will handle incoming gRPC requests concurrently while running a control loop.
The incoming requests are cached to a mutex-controlled shared point in memory.
The control loop reads from the shared memory.
Voila. Concurrent microservices in Go!
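Here is a minimal Go sketch of that shape, assuming a toy in-memory cache: writes land behind a mutex, and a control loop reads snapshots of it. This is not the terraformctl code itself, just the concurrency model; in the real tool the writes come in from the gRPC handlers.
package main

import (
	"log"
	"sync"
	"time"
)

// cache is a mutex-guarded shared point in memory.
// In terraformctl the gRPC handlers would write here.
type cache struct {
	mu      sync.Mutex
	desired map[string]string // hypothetical: name -> desired config
}

func (c *cache) Set(name, config string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.desired[name] = config
}

// Snapshot copies the shared state so the control loop can
// iterate without holding the lock.
func (c *cache) Snapshot() map[string]string {
	c.mu.Lock()
	defer c.mu.Unlock()
	out := make(map[string]string, len(c.desired))
	for k, v := range c.desired {
		out[k] = v
	}
	return out
}

// controlLoop reads from the shared memory and reconciles.
func controlLoop(c *cache) {
	for {
		for name, cfg := range c.Snapshot() {
			// The real server would drive Terraform here instead of logging.
			log.Printf("reconciling %q with config %q", name, cfg)
		}
		time.Sleep(5 * time.Second)
	}
}

func main() {
	c := &cache{desired: map[string]string{}}
	go controlLoop(c)

	// Stand-in for an incoming gRPC request.
	c.Set("vpc-example", `resource "aws_vpc" "main" {}`)

	time.Sleep(12 * time.Second) // let the loop run a couple of times
}
In the real server the reconcile step drives Terraform instead of logging, but the mutex-plus-control-loop skeleton is the interesting part.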
What is cloud native infrastructure?
Well it's this crazy idea that we should start looking at managing cloud native infrastructure in the same way we manage cloud native applications.
If we treat infrastructure as software then we have no reason to run the software in legacy or traditional ways when we can truly conquer our software by running it in a cloud native way. I love this idea so much that I helped author a book on the subject! Feel free to check it out here!
The bottom line is that the new way of looking at the stack is to start thinking of the layers that were traditionally managed in other ways as layers that are now managed by discrete and happy applications. These applications can be run in containers, and orchestrated in the same ways that all other applications can. So why not do that? YOLO.
What Terraformctl is not..
Terraformctl is not (and will never be) production ready.
It's a demo tool, and it's hacky. If you really want to expand on my work feel free to ping me, or just outright fork it. I don't have time to maintain yet another open source project unfortunately.
Terraformctl is not designed to replace any enterprise solutions, it’s just a thought experiment. Solving these problems is extremely hard, so I just want more people to understand what is really going into these tools.
Furthermore, there are a number of features not yet implemented that the code base was structured for. Who knows, maybe one day I will get around to coding them. We will see.
If you really, really, really want to talk more about this project, please email me at kris@nivenly.com.
Follow @kris__nova (I am fucking awesome)
What are we creating?
- Kubernetes v1.7.3
- Private Networking in Digital Ocean
- Encrypted VPN mesh for droplets
- Ubuntu Droplets
So at Gophercon I released my latest project kubicorn.
As I go along I want to publish a set of use cases as examples. This helps me exercise kubicorn and understand my assumptions. It would be really cool if others could step in and use these cases to improve the system.

Creating your cluster
So the deployment process is pretty straightforward. The first thing you need to do is grab a copy of `kubicorn v0.0.003`.
$ go get github.com/kris-nova/kubicorn
Verify kubicorn is working, and you are running the right version.
$ kubicorn --fab
Also you will need a Digital Ocean access key. You can use this guide to help you create one. Then just export the key as an environment variable.
$ export DIGITALOCEAN_ACCESS_TOKEN=*****************************************
The project offers a starting point for a Digital Ocean cluster, called a profile. Go ahead and create one on your local filesystem.
$ kubicorn create dofuckyeah --profile do
Feel free to take a look at the newly created representation of the cluster and tweak it to your liking. Here is what mine looks like.
For my cluster all I did was change the maxCount from 3 to 7 for my node serverPool.
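For reference, the relevant chunk of my cluster config looked roughly like the snippet below. The surrounding fields are approximations from memory, so treat them as such; the important bit is the maxCount bump on the node serverPool.
serverPools:
- name: dofuckyeah.node    # approximate; your profile will generate the real name
  image: ubuntu-16-04-x64
  maxCount: 7              # changed from 3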
When you are happy with your config, go ahead and apply the changes!
$ kubicorn apply dofuckyeah -v 4
Then check out your new cluster and wait for your nodes to come to life!
$ kubectl get no

What we created
We created 8 droplets, all running Ubuntu 16.04
The master droplet uses a fantastic tool called meshbird to create an encrypted private VPN service mesh on Digital Ocean private networking.
Each of the droplets gets a new virtual NIC called tun0 that allows each of the droplets to route on a private VPN.
The nodes register against the master via the newly created virtual NIC.
The master API is advertised on the public IP of the master droplet.
You can check out the bootstrap script for the master here, and for the nodes here.
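If you want to convince yourself the mesh is actually up, a couple of quick checks (assuming you can SSH into a droplet) look something like this:
$ ip addr show tun0
$ kubectl get nodes -o wide
The first command (run on a droplet) shows the VPN interface that meshbird created; the second shows each node registered with the address it advertises.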
And thanks to kubeadm…
Poof. Kubernetes.
Want to learn more?
Check out the kubicorn project on GitHub!
Follow @kubicornk8s on Twitter to get up to the second updates!
Join us in #kubicorn in the official Gophers Slack!
Follow @kris__nova (I am fucking awesome)
Just keep reading.. I promise this is worth it..
Okay so I made a new Kubernetes infrastructure tool (sorry not sorry). Introducing my latest pile of rubbish… kubicorn!
Check it out on GitHub here.
Why I made the tool
I made the tool for a lot of reasons. The main one is so that I could have some architectural freedom. Here are some other reasons:
- I want to start using (or abusing) kubeadm
- I believe in standardizing a Kubernetes cluster API for the community
- I believe in pulling configuration out of the engine, so users can define their own
- I believe in creating the tool as a consumable library, so others can start to use it to build infrastructure operators
- I wanted to enforce the concept of reconciliation, and pull it all the way up to the top of the library (more on this just below the list)
- I want to support multiple clouds (really)
- I want it to be EASY to build a cloud implementation
- I want it to be EASY to understand the code base
- I want it to be EASY to contribute to the project
- I want it to be as idiomatic Go as possible
I am sure there are more, but you get the idea.
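As an aside on the reconciliation point above, the idea is that the very top of the library exposes "what you want" vs "what you have" and a way to converge the two. A hypothetical sketch of that shape (this is the concept, not kubicorn's actual API) might look like:
package reconcile

// Cluster is a stand-in for a rendered cluster definition.
// This is a hypothetical type, not kubicorn's actual API.
type Cluster struct {
	Name        string
	ServerPools []string
}

// Reconciler is the concept pulled to the top of the library:
// read what you want, read what you have, make them match.
type Reconciler interface {
	Expected() (*Cluster, error) // what the user asked for
	Actual() (*Cluster, error)   // what the cloud says exists
	Reconcile(expected, actual *Cluster) (*Cluster, error)
}

// Apply drives any implementation toward the expected state.
func Apply(r Reconciler) (*Cluster, error) {
	expected, err := r.Expected()
	if err != nil {
		return nil, err
	}
	actual, err := r.Actual()
	if err != nil {
		return nil, err
	}
	return r.Reconcile(expected, actual)
}
Any cloud implementation then just has to answer those three questions, which is what keeps adding a new cloud easy.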
What it does
It empowers the user (that’s you) to manage infrastructure.
It lets the user (still you) define things like what the infrastructure should look like, and how the cluster should bootstrap.
It offers really great starting points (called profiles) that are designed to work out of the box. Tweak away!
It (eventually) will take snapshots of clusters to create an image. The image is both the infrastructure layer as well as the application layer bundled up into a lovely tarball. The tarball can be moved around, replicated, saved, and backed up. The tarball is a record of your entire cluster state (including infrastructure!).
What is next?
Please help me.
I need contributors and volunteers for the project. I want to share as much knowledge as possible with the user (you!) so that everyone can begin contributing.
What clouds do we support?
Today? AWS
Tomorrow? Digital Ocean (literally tomorrow.. check out the PR)
Next? You tell me. The whole point here is that the implementation is easy, so anyone can do it.
Kubicorn vs Kops
| Feature | Kops | Kubicorn |
|---|---|---|
| HA Clusters | ✔ | ✖ |
| Easy to use library | ✖ | ✔ |
| Kubeadm | ✖ | ✔ |
| Bring your own bootstrap | ✖ | ✔ |
| Awesome as shit | ✔ | ✔ |
| API in Go | ✔ | ✔ |
| Digital Ocean Support | ✖ | ✔ |
| Kubernetes Official | ✔ | ✖ |
| Multiple Operating Systems (Ubuntu, CentOS, etc) | ✖ | ✔ |
| Requires DNS | ✔ | ✖ |
Setting up Kubernetes 1.7.0 in AWS with Kubicorn
This is not ready for production! I started coding this a few weeks ago in my free time, and it’s very new!
Also check out the official walkthrough here!
Install kubicorn
go get github.com/kris-nova/kubicorn
Create a cluster API
kubicorn create knova --profile aws
Authenticate
You should probably create a new IAM user for this, with the following permissions:
- AmazonEC2FullAccess
- AutoScalingFullAccess
- AmazonVPCFullAccess
Then export your auth information
export AWS_ACCESS_KEY_ID="omgsecret"
export AWS_SECRET_ACCESS_KEY="evenmoresecret"
Apply
Then you can apply your changes!
kubicorn apply knova

Access
Then you can access your cluster
kubectl get nodes
Delete
Delete your cluster
kubicorn delete knova

Your 2nd day with Kubernetes on AWS
Okay, so you have a cluster up and running on AWS. Now what? Seriously, managing a Kubernetes cluster is hard. Especially if you are even thinking about keeping up with the pace of the community. The good news is that kops makes this easy. Here are a few commonly used stories on how to manage a cluster after everything is already up and running. If there is something you don't see that you would like, please let me know!
This tutorial assumes you were able to successfully get a cluster up and running in AWS, and you are now ready to see what else it can do.
In this tutorial we are covering 2nd day concerns for managing a Kubernetes cluster on AWS. The idea of this tutorial is to exercise some useful bits of kops functionality that you won't see during a cluster deployment. Here we really open up kops to see what she can do (yes, kops is a girl).
In this tutorial we will be running kops 1.5.1, which can be downloaded here.
We will also be making the assumption that you have an environment setup similar to the following.
export KOPS_NAME=nextday.nivenly.com
export KOPS_STATE_STORE=s3://nivenly-com-state-store
Upgrading Kubernetes with kops
Suppose you are running an older version of Kubernetes, and want to run the latest and greatest..
Here we will start off with a Kubernetes v1.4.8 cluster. We are picking an older cluster here to demonstrate the workflow in which you could upgrade your Kubernetes cluster. The project evolves quickly, and you want to be able to iterate on your clusters just as quickly. To deploy a Kubernetes v1.4.8 cluster:
kops create cluster --zones us-west-2a --kubernetes-version 1.4.8 $KOPS_NAME --yes
As the cluster is deploying, notice how kops will conveniently remind us that the version of Kubernetes that we are deploying is outdated. This is by design. We really want users to know when they are running old code.
..snip
A new kubernetes version is available: 1.5.2
Upgrading is recommended (try kops upgrade cluster)
..snip
So now we have an older version of Kubernetes running. We know this by running the following command and looking for Server Version: version.Info
kubectl version
Now, we can use the following command to see what kops suggests we should do:
kops upgrade cluster $KOPS_NAME
We can safely append --yes to the end of our command to apply the upgrade to our configuration. But what is really happening here?
When we run the upgrade command as in
kops upgrade cluster $KOPS_NAME --yes
all we are really doing is appending some values to the cluster spec (remember, this is the state store that is stored in S3 in YAML), which of course can always be accessed and edited using:
kops edit cluster $KOPS_NAME
In this case you will notice how the kops upgrade cluster command conveniently changed the following line in the configuration file for us.
kubernetesVersion: 1.5.2
We can now run a kops update cluster command as we always would, to apply the change.
kops update cluster $KOPS_NAME --yes
We can now safely roll each of our nodes to finish the upgrade. Let's use kops rolling-update cluster to redeploy each of our nodes; a kops rolling update will cycle each of the instances in the Auto Scaling group with the new configuration.
kops rolling-update cluster $KOPS_NAME --yes
We can now check the version of Kubernetes, and validate that we are in fact using the latest version.
kubectl version
Note: If a specific version of Kubernetes is desired, you can always use the --channel flag and specify a valid channel. An example channel can be found here.
Scaling your cluster
Suppose you would like to scale your cluster to process more work..
In this example we will start off with a very basic cluster, and turn the node count up using kops instance groups.
kops create cluster --zones us-west-2a --node-count 3 $KOPS_NAME --yes
After the cluster is deployed we can validate that we are using 3 nodes by running
kubectl get nodes
Say we want to scale our nodes from 3 to 30. We can easily do that with kops by editing the nodes instance group using:
We can then bump our node count up to 30:
spec:
  image: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21
  machineType: t2.medium
  maxSize: 30
  minSize: 30
  role: Node
  subnets:
  - us-west-2a
We then of course need to update our newly edited configuration
kops update cluster $KOPS_NAME --yes
Kops will update the AWS ASG automatically, and poof we have a 30 node cluster.
I do actually try all of this before recommending it to anyone. So yes, I was able to actually deploy a 30 node cluster in AWS with kops.
The cluster was deployed successfully, and the primary component of lag was waiting on Amazon to deploy the instances after detecting a change in the Autoscaling group.
A quick delete command from kops, and all is well.
kops delete cluster $KOPS_NAME --yes
Audit your clusters
Suppose you need to know what is going on in the cloud.. and audit your infrastructure..
By design kops will never store information about the cloud resources, and will always look them up at runtime. So gaining a glimpse into what you have running currently can be a bit of a concern. There are 2 kops commands that are very useful for auditing your environment, and also auditing a single cluster.
In order to see what clusters we have running in a state store we first use the following command:
kops get clusters
Notice how we no longer have to use `$KOPS_NAME`. This is because we already have a cluster deployed, and thus should already have a working `~/.kube/config` file in place. We can infer a lot of information from the file. Now that we have a cluster name (or more!) in mind, we can use the following command:
kops toolbox dump
Which will output all the wonderful information we could want about a cluster in a format that is easy to query. It is important to note that the resources defined here are discovered using the same cluster lookup methods `kops` uses for all other cluster commands. This is a raw and unique output of your cluster at runtime!
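For example, if your build of kops supports JSON output for the dump (check kops toolbox dump --help; the flag and the field names below are assumptions on my part), you can slice it up with jq:
kops toolbox dump -o json | jq '.resources[].type' | sort | uniq -c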
Thanks!
Thank you for reading my article. As always, I appreciate any feedback from users. So let me know how we could be better. Also feel free to check me out on GitHub for more kops updates.

Today we announce the release of Kubernetes kops 1.5.1 LTS!
I figured what better way to announce the release, than with an updated blog post on setting up an HA cluster on private topology with the new release!
What we are building
In this tutorial we will cover setting up a HA privately networked Kubernetes cluster in AWS with Kubernetes kops 1.5.1.
- Fully managed VPC in AWS, with automatically generated private, and public subnets.
- Outbound traffic managed through a NAT gateway and elastic IP in each private subnet.
- Classic ELB fronting the Kubernetes API on TCP 443 (No firewall holes for the cluster).
- Classic ELB fronting a bastion ASG for resilient SSH access for admins.
- HA (Highly Available) Kubernetes masters spread across multiple availability zones in an ASG.
- Kubernetes nodes spread across multiple availability zones in an ASG.
- Public DNS alias for the Kubernetes API.
Installing kops 1.5.1
Kubernetes kops is an open source tool offered by the Kubernetes project for deploying Kubernetes clusters in AWS. We will be using it throughout this tutorial.
curl -sSL https://github.com/kubernetes/kops/releases/download/1.5.1/kops-darwin-amd64 -O
chmod +x kops-darwin-amd64
sudo mv kops-darwin-amd64 /usr/local/bin/kops
More information on installing kops can be found here for our non OS X users.
Installing kubectl
We will also be needing a tool called kubectl. Think of this as a thin CLI client for the Kubernetes API, similar to the aws CLI tool we will be installing next.
curl -O https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
Setting up your AWS environment
Get your AWS credentials from the console
Use the AWS console to get an AWS AccessKeyId and AWS SecretAccessKey (official documentation). After you have your credentials, download the CLI tool and configure it with your new user. You can also use any method defined here.
brew update && brew install awscli
aws configure
We strongly recommend using a single user with a select few IAM permissions to run kops. Thankfully kops provides a handy IAM creation script that will create a new user with the correct permissions. Be sure to note your new AccessKeyId and SecretAccessKey for the next step.
curl -O https://raw.githubusercontent.com/kubernetes/kops/master/hack/new-iam-user.sh
sh new-iam-user.sh <group> <user>
aws iam list-users
Setting up DNS for your cluster
We will need a publicly resolvable domain name for our cluster. So we need to make sure we have a hosted zone setup in Route53. In this example we will be using nivenly.com for our example hosted zone.
ID=$(uuidgen) && aws route53 create-hosted-zone --name nivenly.com --caller-reference $ID
More information on more advanced DNS setup.
Setting up a state store for your cluster
Kops will store a representation of your Kubernetes cluster in AWS S3. This is called the kops state store. It is important to note that kops DOES NOT store any concept of what resources are deployed. That would create two sources of truth (The AWS API, and the state store). Rather, kops will merely store a definition of the Kubernetes cluster, that will then be applied to AWS via kops.
We will call our state store in this example nivenly-com-state-store.
aws s3api create-bucket --bucket nivenly-com-state-store --region us-east-1
Creating your first cluster
Getting ready
Okay! We are ready to start creating our first cluster. Let's first set up a few environment variables to make this process as clean as possible.
export NAME=myfirstcluster.nivenly.com
export KOPS_STATE_STORE=s3://nivenly-com-state-store
Form your create cluster command
We will need to note which availability zones are available to us. These are different for every AWS account. In this example we will be deploying our cluster to the us-west-2 region.
aws ec2 describe-availability-zones --region us-west-2
Let's form our create cluster command. Here we want to define a few things.
- --node-count 3
  - We want 3 Kubernetes nodes
- --zones us-west-2a,us-west-2b,us-west-2c
  - We want to run our nodes spread across the 3 availability zones available to our account
  - This is a CSV list, pulled from the API in the previous request
- --master-zones us-west-2a,us-west-2b,us-west-2c
  - This will tell kops to spread masters across those availability zones.
  - Because there is more than 1, this will automatically be run in HA.
- --dns-zone nivenly.com
  - We define the DNS hosted zone we created earlier
- --node-size t2.large
  - We set our nodes to a defined instance size
- --master-size t2.large
  - We set our masters to a defined instance size
- --topology private
  - We define that we want to use a private network topology with kops.
  - This is what tells kops to build the diagram above.
- --networking calico
  - We tell kops to use Calico for our overlay network
  - Overlay networks are required for this configuration.
  - Many thanks to our friends at Calico for helping us get this into kops!
- --bastion
  - Add this flag to tell kops to create a bastion server so you can SSH into the cluster
Kops will default to ~/.ssh/id_rsa.pub for backend access. You can override this with --ssh-public-key /path/to/key.pub
kops create cluster \
    --node-count 3 \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --master-zones us-west-2a,us-west-2b,us-west-2c \
    --dns-zone nivenly.com \
    --node-size t2.large \
    --master-size t2.large \
    --topology private \
    --networking calico \
    --bastion \
    ${NAME}
kops will deploy these instances using AWS auto scaling groups, so each instance should be ephemeral and will rebuild itself if taken offline for any reason.
Cluster Configuration
Now that we have created the underlying cluster configuration, let's take a look at every aspect that will define our cluster.
kops edit cluster ${NAME}
This will open up the cluster config (that is actually stored in the S3 bucket we created earlier!) in your favorite text editor. Here is where we can optionally really tweak our cluster for our use case. In this tutorial, we leave it default for now.
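If you are curious where that configuration actually lives, you can list the state store bucket we created earlier and see the objects kops wrote for the cluster:
aws s3 ls s3://nivenly-com-state-store/ --recursive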
For more information on these directives, and the kops API, please check out the official kops documentation.
Apply the changes
Okay, we are ready to create the cluster in AWS. We do so by running the following command.
kops update cluster ${NAME} --yes
Start using the cluster
The resources will be deployed asynchronously here. So even though kops has finished, that does not mean our cluster is built. A great way to check that the cluster is online and the API is working is to use kubectl:
kubectl get nodes
After we verify the API is responding, we can now use the Kubernetes cluster.
Backend SSH access
We should also now have a bastion server behind an elastic load balancer in AWS that will give us access to the cluster over SSH. Grab the bastion ELB A record and the private IP of the instance you want to access from the AWS console, then SSH into the bastion as follows.
ssh-add ~/.ssh/id_rsa
ssh -A admin@bastion.myfirstcluster.nivenly.com
ssh admin@<master_private_ip>

This post is DEPRECATED
Please check out the most recent article instead.
What we are building
In this tutorial we will cover setting up a HA privately networked Kubernetes cluster in AWS with Kubernetes kops.
- Fully private VPC, housing utility and private subnets, with hybrid cloud capabilities over VPN
- HA (Highly Available) masters spread across availability zones with private subnetting
- Nodes spread across availability zones with private subnetting
- Routing between subnets with NAT gateways
- Elastic Load Balancers in front of the resources for public access
- Bastion server for backend SSH access to the instances
Installing kops
Kubernetes kops is an open source tool offered by the Kubernetes project for deploying Kubernetes clusters against different cloud providers. We will be using the tool to help us with the heavy lifting in this tutorial.
Start by installing the most recent version of kops from the master branch.
brew update && brew install --HEAD kops
More information on installing kops can be found here for our non OS X users.
Installing kubectl
We will also be needing a tool called kubectl. Think of this as a thin CLI client for the Kubernetes API, similar to the aws CLI tool we will be installing next.
You can download the tarball from the Kubernetes latest release page on GitHub, or follow the official install guide here.
wget https://github.com/kubernetes/kubernetes/releases/download/v1.4.6/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
sudo cp kubernetes/platforms/darwin/amd64/kubectl /usr/local/bin/kubectl
Setting up your AWS environment
Setting up a kops IAM user
In this example we will be using a dedicated IAM user to use with kops. This user will need basic API security credentials in order to use kops. Create the user and credentials using the AWS console. More information.
Kubernetes kops uses the official AWS Go SDK, so all we need to do here is set up your system to use the official AWS supported methods of registering security credentials defined here. Here is an example using the aws command line tool to set up your security credentials.
brew update && brew install awscli
aws configure
aws iam list-users
We should now be able to pull a list of IAM users from the API, verifying that our credentials are working as expected.
Setting up DNS for your cluster
We will need a publicly resolvable domain name for our cluster. So we need to make sure we have a hosted zone setup in Route53. In this example we will be using nivenly.com for our example hosted zone.
ID=$(uuidgen) && aws route53 create-hosted-zone --name nivenly.com --caller-reference $ID
Setting up a state store for your cluster
In this example we will be creating a dedicated S3 bucket for kops to use. This is where kops will store the state of your cluster and the representation of your cluster, and serves as the source of truth for our cluster configuration throughout the process. We will call this nivenly-com-state-store. I recommend keeping the creation confined to us-east-1, otherwise more input will be needed here.
aws s3api create-bucket --bucket nivenly-com-state-store --region us-east-1
Creating your first cluster
Setup your environment for kops
Okay! We are ready to start creating our first cluster. Let's first set up a few environment variables to make this process as clean as possible.
export NAME=myfirstcluster.nivenly.com
export KOPS_STATE_STORE=s3://nivenly-com-state-store
Note: You don't have to use environment variables here. You can always define the values using the --name and --state flags later.
Form your create cluster command
We will need to note which availability zones are available to us. In this example we will be deploying our cluster to the us-west-2 region.
aws ec2 describe-availability-zones --region us-west-2
Let's form our create cluster command. Here we want to define a few things.
- --node-count 3
  - We want 3 Kubernetes nodes
- --zones us-west-2a,us-west-2b,us-west-2c
  - We want to run our nodes spread across the 3 availability zones available to our account
  - This is a CSV list, pulled from the API in the previous request
- --master-zones us-west-2a,us-west-2b,us-west-2c
  - This will tell kops that we want 3 masters, running in HA in these 3 availability zones
- --dns-zone nivenly.com
  - We define the DNS hosted zone we created earlier
- --node-size t2.medium
  - We set our nodes to a defined instance size
- --master-size t2.medium
  - We set our masters to a defined instance size
- --topology private
  - We define that we want to use a private network topology with kops
- --networking weave
  - We tell kops to use Weave for our overlay network
  - Many thanks to our friends at Weave for helping us make this a staple part of our clusters!
- --image 293135079892/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-11-16
  - This is required as a temporary workaround until kops 1.4.2 is released (Estimated Dec 17, 2016)
Kops will default to ~/.ssh/id_rsa.pub for backend access. You can override this with --ssh-public-key /path/to/key.pub
kops create cluster \
    --node-count 3 \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --master-zones us-west-2a,us-west-2b,us-west-2c \
    --dns-zone nivenly.com \
    --node-size t2.medium \
    --master-size t2.medium \
    --topology private \
    --networking weave \
    --image 293135079892/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-11-16 \
    ${NAME}
kops will deploy these instances using AWS auto scaling groups, so each instance should be ephemeral and will rebuild itself if taken offline for any reason.
Cluster Configuration
Now that we have created the underlying cluster configuration, let's take a look at every aspect that will define our cluster.
kops edit cluster ${NAME}
This will open up the cluster config (that is actually stored in the S3 bucket we created earlier!) in your favorite text editor. Here is where we can optionally really tweak our cluster for our use case. In this tutorial, we leave it default for now.
For more information on these directives, and the kops API, please check out the official kops documentation.
Apply the changes
Okay, we are ready to create the cluster in AWS. We do so by running the following command.
kops update cluster ${NAME} --yes
Start using the cluster
The resources will be deployed asynchronously here. So even though kops has finished, that does not mean our cluster is built. A great way to check that the cluster is online and the API is working is to use kubectl:
kubectl get nodes
After we verify the API is responding, we can now use the Kubernetes cluster.
Backend SSH access
We should also now have a bastion server behind an elastic load balancer in AWS that will give us access to the cluster over SSH. Grab the bastion ELB A record and the private IP of the instance you want to access from the AWS console, then SSH into the bastion as follows.
ssh -A admin@<bastion_elb_a_record>
ssh admin@<instance_private_ip>
What do you think?
I always love comments and suggestions on how to be better. Let me know your thoughts, if you have any good ones.
I wrote a lot of the code for the features in this article, feel free to hit me up on GitHub if you want to follow along!