Author: Kris Nova

So as I continue to find workarounds and fixes for running Archlinux on my Microsoft Surface Book, I will post them here.

Here is a great quick and dirty fix for the wifi issue.


After closing your Surface Book, or sending your computer into hibernation or suspension, the WiFi quits working.


Found a handy dandy script that totally fixes the problem.

sudo wget -P /usr/lib/systemd/system-sleep



So I joined a new team at Microsoft, and we use Microsoft Teams as our primary form of communication.

So here is a quick walk through of getting it up and running on Archlinux.


Download the pacman package from GitHub

cd ~


Install with Pacman

sudo pacman -U teams-for-linux-0.0.4.pacman

Alias teams to run in the background

echo "alias teams='teams &>/dev/null &'" >> ~/.bashrc

source ~/.bashrc


Then you can run teams from the command line to launch the program. Happy Microsofting. J


Follow @kris-nova

What are we creating?

  • Kubernetes v1.7.3
  • Private Networking in Digital Ocean
  • Encrypted VPN mesh for droplets
  • Ubuntu Droplets

So at Gophercon I released my latest project kubicorn.

As I go along I want to publish a set of use cases as examples. This helps me exercise kubicorn and understand my assumptions. It would be really cool if others could step in and use these cases to improve the system.

7 Node Cluster in Digital Ocean

Creating your cluster

So the deployment process is pretty straightforward. The first thing you need to do is grab a copy of `kubicorn v0.0.003`.

$ go get

Verify kubicorn is working, and you are running the right version.

$ kubicorn --fab

Also you will need a Digital Ocean access key. You can use this guide to help you create one. Then just export the key as an environment variable.

$ export DIGITALOCEAN_ACCESS_TOKEN=***************************************** 

The project offers a starting point for a Digital Ocean cluster called a profile. Go ahead and create one on your local filesystem.

$ kubicorn create dofuckyeah --profile do

Feel free to take a look at the newly created representation of the cluster and tweak it to your liking. Here is what mine looks like.

For my cluster all I did was change the maxCount from 3 to 7 for my node serverPool.

When you are happy with your config, go ahead and apply the changes!

$ kubicorn apply dofuckyeah -v 4

Then check out your new cluster and wait for your nodes to come to life!

$ kubectl get nodes

What we created

We created 8 droplets, all running Ubuntu 16.04.

The master droplet uses a fantastic tool called meshbird to create an encrypted private VPN service mesh on Digital Ocean private networking.

Each of the droplets gets a new virtual NIC called tun0 that allows each of the droplets to route on a private VPN.

The nodes register against the master via the newly created virtual NIC.

The master API is advertised on the public IP of the master droplet.

You can checkout the bootstrap script for the master here, and for the nodes here.

And thanks to kubeadm…

Poof. Kubernetes.

Want to learn more?

Check out the kubicorn project on GitHub!

Follow @kubicornk8s on Twitter to get up to the second updates!

Join us in #kubicorn in the official Gopher’s slack!


Follow @kris-nova


Just keep reading… I promise this is worth it.

Okay so I made a new Kubernetes infrastructure tool (sorry not sorry). Introducing my latest pile of rubbish… kubicorn!

Check it out on GitHub here.

Why I made the tool

I made the tool for a lot of reasons. The main one is so that I could have some architectural freedom. Here are some other reasons:

  • I want to start using (or abusing) kubeadm
  • I believe in standardizing a Kubernetes cluster API for the community
  • I believe in pulling configuration out of the engine, so users can define their own
  • I believe in creating the tool as a consumable library, so others can start to use it to build infrastructure operators
  • I wanted to enforce the concept of reconciliation, and pull it all the way up to the top of the library
  • I want to support multiple clouds (really)
  • I want it to be EASY to build a cloud implementation
  • I want it to be EASY to understand the code base
  • I want it to be EASY to contribute to the project
  • I want it to be as idiomatic Go as possible

I am sure there are more, but you get the idea.

What it does

It empowers the user (that’s you) to manage infrastructure.

It lets the user (still you) define things like what the infrastructure should look like, and how the cluster should bootstrap.

It offers really great starting points (called profiles) that are designed to work out of the box. Tweak away!

It (eventually) will take snapshots of clusters to create an image. The image is both the infrastructure layer as well as the application layer bundled up into a lovely tarball. The tarball can be moved around, replicated, saved, and backed up. The tarball is a record of your entire cluster state (including infrastructure!).

What is next?

Please help me.

I need contributors and volunteers for the project. I want to share as much knowledge as possible with the user (you!) so that everyone can begin contributing.

What clouds do we support?

Today? AWS

Tomorrow? Digital Ocean (literally tomorrow… check out the PR)

Next? You tell me. The whole point here is that the implementation is easy, so anyone can do it.


Kubicorn vs Kops


Features compared between kops and kubicorn:

  • HA Clusters
  • Easy to use library
  • Bring your own bootstrap
  • Awesome as shit
  • API in Go
  • Digital Ocean Support
  • Kubernetes Official
  • Multiple Operating Systems (Ubuntu, CentOS, etc.)
  • Requires DNS


Setting up Kubernetes 1.7.0 in AWS with Kubicorn

This is not ready for production! I started coding this a few weeks ago in my free time, and it’s very new!

Also check out the official walkthrough here!

Install kubicorn

go get

Create a cluster API

kubicorn create knova --profile aws


You should probably create a new IAM user for this, with the following permissions:

  • AmazonEC2FullAccess
  • AutoScalingFullAccess
  • AmazonVPCFullAccess

Then export your auth information

export AWS_ACCESS_KEY_ID="omgsecret"
export AWS_SECRET_ACCESS_KEY="evenmoresecret"


Then you can apply your changes!

kubicorn apply knova


Example Output



Then you can access your cluster

kubectl get nodes


Delete your cluster

kubicorn delete knova

Follow @kris-nova

So I was sitting and having a cup of coffee this morning with Kelsey Hightower, and he shared a beautiful piece of advice that I just had to share!

So let’s keep it sweet and simple:

If you are struggling to get a WiFi authentication page to open on your Mac…

You can go straight to the default auth page in your browser.

This just changed my life. I hope it helps you.

So a good friend of mine recently posted something on Twitter…

So I decided to crank out a quick write up on the matter. It’s something that also bothered me for the longest time, and a few years ago when I was hired into a job that enforced using Outlook clients I finally got to the bottom of it!

Also, I work at Microsoft, so I had the best testing ground in the world. I just sent an email to a non-Outlook email address from my work account. Let's take a look and see what happened.

From Office 365

So that actually worked out as expected!

It looks like the new Office 365 web client is on point and doing great. Good job everyone!

Heck, even the smiley face is the proper emoticon code 😊 (😁)

For clarity I went ahead and pulled the raw message, and ran a quick base64 decode on it to get the following output.



    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <style type="text/css" style="display:none;">
        <!-- P {
            margin-top: 0;
            margin-bottom: 0;

<body dir="ltr">
    <div id="divtagdefaultwrapper" style="font-size:12pt;color:#000000;font-family:Calibri,Helvetica,sans-serif;" dir="ltr">
        <p>Hi everyone!</p>
        <p>Just sending a quick testing email <span>😊</span></p>


But I know there are sinister emails still lurking around the internet somewhere. Let's dive deeper…

From Outlook

I found an email in my archives from when I tested this theory years ago. My only computer that runs the Outlook client is actually at the office, and I am lazy.

Regardless, this sample is really what we are looking for!

Here we can see that the HTML in the email does something fairly concerning. We didn't get valid HTML. The email used a pseudo-HTML markup that was designed for word-processing style sheets, and does not adhere to the ISO/IEC 15445 standard for HTML!

The markup actually references some font logic that is defined as an HTML comment, as well as referencing a proprietary font!

That is why the font definitions aren't being interpreted, and the user is left with an unfriendly-looking email to read. Below is the decoded base64 content.


<html xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="" xmlns="">

    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <meta name="Title" content="">
    <meta name="Keywords" content="">
    <meta name="Generator" content="Microsoft Word 15 (filtered medium)">
        /* Font Definitions */
        @font-face {
            font-family: Wingdings;
            panose-1: 5 0 0 0 0 0 0 0 0 0;
        @font-face {
            font-family: "Cambria Math";
            panose-1: 2 4 5 3 5 4 6 3 2 4;
        @font-face {
            font-family: Calibri;
            panose-1: 2 15 5 2 2 2 4 3 2 4;
        /* Style Definitions */
        div.MsoNormal {
            margin: 0in;
            margin-bottom: .0001pt;
            font-size: 12.0pt;
            font-family: Calibri;
        span.MsoHyperlink {
            mso-style-priority: 99;
            color: #0563C1;
            text-decoration: underline;
        span.MsoHyperlinkFollowed {
            mso-style-priority: 99;
            color: #954F72;
            text-decoration: underline;
        span.EmailStyle17 {
            mso-style-type: personal-compose;
            font-family: Calibri;
            color: windowtext;
        span.msoIns {
            mso-style-type: export-only;
            mso-style-name: "";
            text-decoration: underline;
            color: teal;
        .MsoChpDefault {
            mso-style-type: export-only;
            font-family: Calibri;
        @page WordSection1 {
            size: 8.5in 11.0in;
            margin: 1.0in 1.0in 1.0in 1.0in;
        div.WordSection1 {
            page: WordSection1;

<body bgcolor="white" lang="EN-US" link="#0563C1" vlink="#954F72">
    <div class="WordSection1">
        <p class="MsoNormal"><span style="font-size:11.0pt">Hey this is a test! </span><span style="font-size:11.0pt;font-family:Wingdings">J</span><span style="font-size:11.0pt">
        <p class="MsoNormal"><span style="font-size:11.0pt"><o:p>&nbsp;</o:p></span></p>


The infamous capital j

If you have ever wondered why you see erroneous j's floating around in emails, the decoded sample above can answer that as well.

When a user (in the older Outlook tools) types :) the program will actually replace the smiley face with J instead of 😊.

The source clearly shows that the smiley face being sent is encoded in the Wingdings font!

<span style="font-size:11.0pt;font-family:Wingdings">J</span>

Wingdings is available for purchase if you want to run it on your server to decode the J's properly.


Older Outlook clients used to do some wonky things with MIME encoding and pseudo-HTML formatting that conflicted with ISO/IEC 15445.

Most clients probably don't support the non-standard encoding (well). So it's another case of not adhering to a standard in order to do things a proprietary way.

So in theory, Outlook has been using emoticons since before emoticons were cool. NO BIG DEAL!

Glad to know it’s working as expected now, and that we should see the problem disappear over time. So much ❤ for the engineers who helped fix this! jjj

A well written sorting algorithm is hard to replace, and typically the ones that have been battle tested will stand the test of time and stick around for a while. The Go programming language offers a clean abstraction on top of a commonly used sorting algorithm. With the sort package in the standard library a user can sort arbitrary types in arbitrary ways using a time-proven sorting algorithm.

The way Go has set this up for us is by defining an interface called sort.Interface. The rule is that as long as a type implements this interface, it can be sorted.


The sort.Interface interface

type Interface interface {
       // Len is the number of elements in the collection.
       Len() int
       // Less reports whether the element with
       // index i should sort before the element with index j.
       Less(i, j int) bool
       // Swap swaps the elements with indexes i and j.
       Swap(i, j int)
}

The interface definition can be found here. The interface defines 3 methods, all of which are critical to the success of the sort.


Len()

The sort works by repeatedly dividing the length of a collection, so we need a way to determine the length (or size) of a collection. Think of this like the len() function in Go, and usually the built-in len() function is the appropriate thing to return here.


Less()

This is a function that will receive 2 integer indices into the collection. The algorithm assumes the implementation will perform some comparison logic here and return an assertion. The algorithm will not care if a user actually checks whether one value is less than another, as that assertion is completely arbitrary. This is analogous to the comparison function used in C's qsort().


Swap()

Go takes the sort abstraction one step further, and also makes the user define a Swap() function. This is called whenever the algorithm determines that two elements need to be exchanged. In Go, however, you can make Swap() do whatever you want in this case.

Sorting integers with sort.Sort()

A very basic example of defining a sort.Interface implementation is to create an integer alias in Go, and implement the required functions.

type Integers []int

func (s Integers) Len() int {
	return len(s)
}

func (s Integers) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}

func (s Integers) Less(i, j int) bool {
	return s[i] < s[j]
}

This implementation can now be passed to sort.Sort() and sorted on. A full working example of an integer sort with sort.Sort() can be found here in the Go playground, or my raw source here on GitHub.

Sorting structs with sort.Sort()

Below is an example of sorting on a collection of custom types, and using the utf8 package to sort on the int32 value of a rune. This is important because it demonstrates how a user might want to sort on a “calculated” value instead of a literal value.

A user can implement any logic they wish in Less() or in Swap(), giving the user a powerful opportunity to build quick and efficient sorting programs on any type they can dream of. This is useful in situations where the logic for what a user wants to sort on might be something other than a simple numerical or alphabetical sort.

type Type struct {
	Value string
}

type Types []Type

func (s Types) Len() int {
	return len(s)
}

func (s Types) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}

func (s Types) Less(i, j int) bool {
	iRune, _ := utf8.DecodeRuneInString(s[i].Value)
	jRune, _ := utf8.DecodeRuneInString(s[j].Value)
	return int32(iRune) > int32(jRune)
}

As always you can run the code in Go Playground, and also see the raw source in GitHub.


The Go programming language offers a familiar and clean abstraction for sorting on arbitrary types with arbitrary logic. The user can implement any logic they wish with regard to what the sort will use to compare elements.

By offering the underlying algorithm and defining sort.Interface, the language helps us build powerful sorting programs without enforcing an opinion on what we sort by, only on how the sorting is carried out.


Thank you for reading my article. As always, I appreciate any feedback from users. So let me know how we could be better.

Follow @kris-nova

So today I decided to take on a fairly complex design pattern in the hopes of demonstrating some more advanced features of concurrency in Go, or at least ruffling a few feathers about how to approach digraph processing.

Wikipedia offers a great explanation of graph theory in case you need to brush up!

The goal of this post is to offer a design approach to processing graphs concurrently, where each vertex in the graph represents a processable unit of work.

The graph will be processed concurrently via a `Crawl()` operation, where there are N concurrent processors. Each processor will process a stream of vertices procedurally by referencing a function pointer to a `process()` function.

The Graph

The graph is composed of edges and vertices. We start the example off by building a complex graph with an arbitrary child count. Each of the children may or may not have their own children. A child is represented by a vertex.

The Vertices

A vertex represents an intersection of the graph. Where 2 edges intersect, there will be a vertex. The only way to get from one vertex to another is to traverse its corresponding edge.

The Edges

Edges are what connect vertices. Every edge has a `to` and a `from` pointer to the two vertices it connects. Because the edge insinuates direction, and only connects in a single direction, the graph is a directional graph, or digraph.

The Processors

Processors process vertices. In this example we simulate some arbitrary work by injecting a random sleep. In a real implementation a processor would actually accomplish some amount of work that a vertex requires. We have N processors so they can operate on vertices concurrently. The more processors, the more vertices we can concurrently operate on, or process.

The Crawler

The crawler will traverse edges to find vertices. As the crawler finds a vertex it will concurrently process each vertex by passing it to a processor. Processors are called cyclically and in order.

For example if we had 10 vertices and 3 processors the call pattern would look like this.

V -- P
1 -- 1
2 -- 2
3 -- 3
4 -- 1
5 -- 2
6 -- 3
7 -- 1
8 -- 2
9 -- 3
10 -- 1

The vertices are processed in unique goroutines, but on shared channels. The channels form a queue if the flow of vertices overflows the processors' ability to keep up.

The win

The huge win here is that the graph stays constructed in its original form. The crawler can iterate through its many (and complex) layers quickly because of the concurrent processing design. A processor could be replaced with any implementation capable of running a workload. This allows the user to structure complex data, while operating on it without any overhead of understanding the data. The processor gets a vertex, and that's it. The processors have no concept of order, and they don't need to.

Notice how the program is able to calculate the operations by counting the graph, and the graph is actually processed quickly with the same number of operations. Good. Clean. Concurrent processing.

Furthermore a user can turn up the number of iterations and specify how many times to visit each vertex. This is useful in situations where a process is idempotent but could potentially fail: sending the same request N times increases the probability of a success. (The root is only dispatched once, while every other vertex is visited once per iteration, hence the expected operation count of (TotalVertices * Iterations) - Iterations + 1.)

package main

import (
	"fmt"
	"math/rand"
	"time"
)

const (
	NumberOfConcurrentProcessors = 32
	NumberOfCrawlIterations      = 4
)

// DiGraph
// This is a directional graph where edges can point in any direction between vertices.
// The graph has 1 root vertex, which is where the Crawl() starts from.
type DiGraph struct {
	RootVertex       *Vertex      // The root vertex of the graph
	Processors       []*Processor // List of concurrent processors
	ProcessorIndex   int          // The current index of the next processor to use
	Edges            []*Edge      // All directional edges that make up the graph
	Iterations       int          // The total number of times to iterate over the graph
	TotalVertices    int          // Count of the total number of vertices that make up the graph
	ProcessedChannel chan int     // Channel to track processed vertices
	ProcessedCount   int          // Total number of processed vertices
	TotalOperations  int          // Total number of expected operations | [(TotalVertices * Iterations) - Iterations] + 1
}

// Vertex
// A single unit that composes the graph. Each vertex has relationships with other vertices,
// and should represent a single entity or unit of work.
type Vertex struct {
	Name   string // Unique name of this Vertex
	Edges  []*Edge
	Status int
}

// Edge
// Edges connect vertices together. Edges have a concept of how many times they have been processed,
// and a To and From direction.
type Edge struct {
	To             *Vertex
	From           *Vertex
	ProcessedCount int
}

// Processor
// This represents a single concurrent process that will operate on N number of vertices
type Processor struct {
	Function func(*Vertex) int
	Channel  chan *Vertex
}

// Init the graph with a literal definition
var TheGraph *DiGraph = NewGraph()

func main() {
	TheGraph.Init(NumberOfConcurrentProcessors, NumberOfCrawlIterations)
	TheGraph.Crawl()
}

func (d *DiGraph) Init(n, i int) {
	noProcs := n
	d.TotalVertices = d.RootVertex.recursiveCount()
	d.Iterations = i
	for ; n > 0; n-- {
		p := Processor{Channel: make(chan *Vertex)}
		d.Processors = append(d.Processors, &p)
		p.Function = Process
		go p.Exec()
	}
	d.TotalOperations = (d.TotalVertices * d.Iterations) - d.Iterations + 1 // Math is hard
	fmt.Printf("Total Vertices              : %d\n", d.TotalVertices)
	fmt.Printf("Total Iterations            : %d\n", d.Iterations)
	fmt.Printf("Total Concurrent Processors : %d\n", noProcs)
	fmt.Printf("Total Assumed Operations    : %d\n", d.TotalOperations)
}

func (d *DiGraph) Crawl() {
	d.ProcessedChannel = make(chan int)
	go d.RootVertex.recursiveProcess(d.getProcessor().Channel)
	for d.ProcessedCount < d.TotalOperations {
		d.ProcessedCount += <-d.ProcessedChannel
		//o(fmt.Sprintf("%d ", d.ProcessedCount))
	}
	fmt.Printf("Total Completed Operations  : %d\n", d.ProcessedCount)
}

func (d *DiGraph) getProcessor() *Processor {
	maxIndex := len(d.Processors) - 1
	if d.ProcessorIndex == maxIndex {
		d.ProcessorIndex = 0
	} else {
		d.ProcessorIndex += 1
	}
	return d.Processors[d.ProcessorIndex]
}

func Process(v *Vertex) int {
	// Simulate some work with a random sleep
	sleep := rand.Intn(100) + 100
	time.Sleep(time.Millisecond * time.Duration(sleep))
	o(fmt.Sprintf("Processing: %s", v.Name))
	// Return a status code
	return 1
}

func (v *Vertex) recursiveProcess(ch chan *Vertex) {
	ch <- v
	for _, e := range v.Edges {
		if e.ProcessedCount < TheGraph.Iterations {
			e.ProcessedCount += 1
			go e.To.recursiveProcess(TheGraph.getProcessor().Channel)
		}
	}
}

func (v *Vertex) recursiveCount() int {
	i := 1
	for _, e := range v.Edges {
		if e.ProcessedCount != 0 {
			e.ProcessedCount = 0
			i += e.To.recursiveCount()
		}
	}
	return i
}

func (v *Vertex) AddVertex(name string) *Vertex {
	newVertex := &Vertex{Name: name}
	newEdge := &Edge{To: newVertex, From: v, ProcessedCount: -1}
	newVertex.Edges = append(newVertex.Edges, newEdge)
	v.Edges = append(v.Edges, newEdge)
	return newVertex
}

func (p *Processor) Exec() {
	for {
		v := <-p.Channel
		v.Status = p.Function(v)
		TheGraph.ProcessedChannel <- 1
	}
}

func NewGraph() *DiGraph {
	rootVertex := &Vertex{Name: "0"}
	v1 := rootVertex.AddVertex("1")
	v1_3 := v1.AddVertex("1-3")
	v1_3.AddVertex("1-3-3")
	graph := &DiGraph{}
	graph.RootVertex = rootVertex
	return graph
}

// o prints a line of output
func o(str string) {
	fmt.Println(str)
}

Try it out

You can run the code yourself in the Go playground.


Thank you for reading my article. As always, I appreciate any feedback from users. So let me know how we could be better.

Follow @kris-nova


Given 2 arbitrary integers X and N, construct a tree such that the root has X child nodes, and each of the root's children has N child nodes. Walk the tree touching each node only once, tracking the distance of each node from the root. For every node that has children, the parent node MUST BE visited first.

The total number of node visitations should match the following formula:

T = (X * N) + X + 1
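As a quick sanity check of the formula with the constants used in the code below (X = 3, N = 10):

```go
package main

import "fmt"

func main() {
	x, n := 3, 10
	// N grandchildren under each of X children, plus the X children, plus the root
	t := (x * n) + x + 1
	fmt.Println(t) // 34
}
```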


package main

import "fmt"

// main will build and walk the tree
func main() {
	fmt.Println("Building Tree")
	root := buildTree()
	fmt.Println("Walking Tree")
	root.walk()
}

// Total is a fun way to total how many nodes we have visited
var total = 1

// How many children for the root to have
const rootsChildren = 3

// How many children for the root's children to have
const childrenChildren = 10

// node is a super simple node struct that will form the tree
type node struct {
	parent   *node
	children []*node
	depth    int
}

// buildTree will construct the tree for walking.
func buildTree() *node {
	root := &node{}
	root.addChildren(rootsChildren)
	for _, child := range root.children {
		child.addChildren(childrenChildren)
	}
	return root
}

// addChildren is a convenience to add an arbitrary number of children
func (n *node) addChildren(count int) {
	for i := 0; i < count; i++ {
		newChild := &node{
			parent: n,
			depth:  n.depth + 1,
		}
		n.children = append(n.children, newChild)
	}
}

// walk is a recursive function that calls itself for every child
func (n *node) walk() {
	n.visit()
	for _, child := range n.children {
		child.walk()
	}
}

// visit will get called on every node in the tree.
func (n *node) visit() {
	d := "└"
	for i := 0; i <= n.depth; i++ {
		d = d + "───"
	}
	fmt.Printf("%s Visiting node with address %p and parent %p Total (%d)\n", d, n, n.parent, total)
	total = total + 1
}


Execution Output

Building Tree
Walking Tree
└─── Visiting node with address 0x104401a0 and parent 0x0 Total (1)
└────── Visiting node with address 0x104401c0 and parent 0x104401a0 Total (2)
└───────── Visiting node with address 0x10440220 and parent 0x104401c0 Total (3)
└───────── Visiting node with address 0x10440240 and parent 0x104401c0 Total (4)
└───────── Visiting node with address 0x10440260 and parent 0x104401c0 Total (5)
└───────── Visiting node with address 0x10440280 and parent 0x104401c0 Total (6)
└───────── Visiting node with address 0x104402a0 and parent 0x104401c0 Total (7)
└───────── Visiting node with address 0x104402e0 and parent 0x104401c0 Total (8)
└───────── Visiting node with address 0x10440300 and parent 0x104401c0 Total (9)
└───────── Visiting node with address 0x10440320 and parent 0x104401c0 Total (10)
└───────── Visiting node with address 0x10440340 and parent 0x104401c0 Total (11)
└───────── Visiting node with address 0x10440360 and parent 0x104401c0 Total (12)
└────── Visiting node with address 0x104401e0 and parent 0x104401a0 Total (13)
└───────── Visiting node with address 0x10440380 and parent 0x104401e0 Total (14)
└───────── Visiting node with address 0x104403a0 and parent 0x104401e0 Total (15)
└───────── Visiting node with address 0x104403c0 and parent 0x104401e0 Total (16)
└───────── Visiting node with address 0x104403e0 and parent 0x104401e0 Total (17)
└───────── Visiting node with address 0x10440400 and parent 0x104401e0 Total (18)
└───────── Visiting node with address 0x10440440 and parent 0x104401e0 Total (19)
└───────── Visiting node with address 0x10440460 and parent 0x104401e0 Total (20)
└───────── Visiting node with address 0x10440480 and parent 0x104401e0 Total (21)
└───────── Visiting node with address 0x104404a0 and parent 0x104401e0 Total (22)
└───────── Visiting node with address 0x104404c0 and parent 0x104401e0 Total (23)
└────── Visiting node with address 0x10440200 and parent 0x104401a0 Total (24)
└───────── Visiting node with address 0x104404e0 and parent 0x10440200 Total (25)
└───────── Visiting node with address 0x10440500 and parent 0x10440200 Total (26)
└───────── Visiting node with address 0x10440520 and parent 0x10440200 Total (27)
└───────── Visiting node with address 0x10440540 and parent 0x10440200 Total (28)
└───────── Visiting node with address 0x10440560 and parent 0x10440200 Total (29)
└───────── Visiting node with address 0x104405a0 and parent 0x10440200 Total (30)
└───────── Visiting node with address 0x104405c0 and parent 0x10440200 Total (31)
└───────── Visiting node with address 0x104405e0 and parent 0x10440200 Total (32)
└───────── Visiting node with address 0x10440600 and parent 0x10440200 Total (33)
└───────── Visiting node with address 0x10440620 and parent 0x10440200 Total (34)

Try it out

You can run the code yourself in the Go playground.


Thank you for reading my article. As always, I appreciate any feedback from users. So let me know how we could be better.

Follow @kris-nova

Your 2nd day with Kubernetes on AWS

Okay, so you have a cluster up and running on AWS. Now what? Seriously, managing a Kubernetes cluster is hard. Especially if you are even thinking about keeping up with the pace of the community. The good news is that kops makes this easy. Here are a few commonly used stories on how to manage a cluster after everything is already up and running. If there is something you don't see that you would like, please let me know!

This tutorial assumes you were able to successfully get a cluster up and running in AWS, and you are now ready to see what else it can do.

In this tutorial we are covering 2nd day concerns for managing a Kubernetes cluster on AWS. The idea of this tutorial is to exercise some useful bits of kops functionality that you won't see during a cluster deployment. Here we really open up kops to see what she can do (yes, kops is a girl).

In this tutorial we will be running kops 1.5.1, which can be downloaded here.

We will also be making the assumption that you have an environment setup similar to the following.

export KOPS_STATE_STORE=s3://nivenly-com-state-store

Upgrading Kubernetes with kops

Suppose you are running an older version of Kubernetes, and want to run the latest and greatest…

Here we will start off with a Kubernetes v1.4.8 cluster. We are picking an older cluster here to demonstrate the workflow in which you could upgrade your Kubernetes cluster. The project evolves quickly, and you want to be able to iterate on your clusters just as quickly. To deploy a Kubernetes v1.4.8 cluster:

kops create cluster --zones us-west-2a --kubernetes-version 1.4.8 $KOPS_NAME --yes

As the cluster is deploying, notice how kops will conveniently remind us that the version of Kubernetes that we are deploying is outdated. This is by design. We really want users to know when they are running old code.

A new kubernetes version is available: 1.5.2
Upgrading is recommended (try kops upgrade cluster)

So now we have an older version of Kubernetes running. We know this by running the following command and looking for Server Version: version.Info

kubectl version

Now, we can use the following command to see what kops suggests we should do:

kops upgrade cluster $KOPS_NAME

We can safely append --yes to the end of our command to apply the upgrade to our configuration. But what is really happening here?

When we run the upgrade command as in

kops upgrade cluster $KOPS_NAME --yes

all we are really doing is appending some values to the cluster spec. (Remember, this is the state store that is stored in S3 in YAML). Which of course can always be accessed and edited using:

kops edit cluster $KOPS_NAME

In this case you will notice how the kops upgrade cluster command conveniently changed the following line in the configuration file for us.

  kubernetesVersion: 1.5.2

We can now run a kops update cluster command as we always would, to apply the change.

kops update cluster $KOPS_NAME --yes

We can now safely roll each of our nodes to finish the upgrade. Let's use kops rolling-update cluster to redeploy each of our nodes. This is necessary to finish the upgrade. A kops rolling update will cycle each of the instances in the autoscaling group with the new configuration.

kops rolling-update cluster $KOPS_NAME --yes

We can now check the version of Kubernetes, and validate that we are in fact using the latest version.

 kubectl version 

Note: If a specific version of Kubernetes is desired, you can always use the --channel flag and specify a valid channel. An example channel can be found here.

Scaling your cluster

Suppose you would like to scale your cluster to process more work…

In this example we will start off with a very basic cluster, and turn the node count up using kops instance groups.

kops create cluster --zones us-west-2a --node-count 3 $KOPS_NAME --yes

After the cluster is deployed we can validate that we are using 3 nodes by running

kubectl get nodes

Say we want to scale our nodes from 3 to 30. We can easily do that with kops by editing the nodes instance group using:

kops edit instancegroup nodes

We can then bump our node counts up to 30

  machineType: t2.medium
  maxSize: 30
  minSize: 30
  role: Node
  zones:
  - us-west-2a

We then of course need to update our newly edited configuration

kops update cluster $KOPS_NAME --yes


Kops will update the AWS ASG automatically, and poof, we have a 30-node cluster.

I do actually try all of this before recommending it to anyone. So yes, I was able to actually deploy a 30 node cluster in AWS with kops.

The cluster was deployed successfully, and the primary component of lag was waiting on Amazon to deploy the instances after detecting a change in the Autoscaling group.


A quick delete command from kops, and all is well.

 kops delete cluster $KOPS_NAME --yes 

Audit your clusters

Suppose you need to know what is going on in the cloud… and audit your infrastructure…

By design kops will never store information about the cloud resources, and will always look them up at runtime. So gaining a glimpse into what you have running currently can be a bit of a concern. There are 2 kops commands that are very useful for auditing your environment, and also auditing a single cluster.

In order to see what clusters we have running in a state store we first use the following command:

kops get clusters

Notice how we no longer have to use `$KOPS_NAME`. This is because we already have a cluster deployed, and thus should already have a working `~/.kube/config` file in place. We can infer a lot of information from the file. Now that we have a cluster name (or more!) in mind, we can use the following command:

kops toolbox dump

Which will output all the wonderful information we could want about a cluster in a format that is easy to query. It is important to note that the resources defined here are discovered using the same cluster lookup methods `kops` uses for all other cluster commands. This is a raw and unique output of your cluster at runtime!


Thank you for reading my article. As always, I appreciate any feedback from users. So let me know how we could be better. Also feel free to check me out on GitHub for more kops updates.

Follow @kris-nova