
Sup nerds.


So if you want to see me live demo this, check out my live stream Friday at 11am Pacific at twitch.tv/setns. I will post the recording here after it’s done.


So if you have been following along on Twitter you have probably seen me talking about the new server cabinet I have been working on, as well as a few tweets about running Falco on ARM.

So recently I joined Sysdig, Inc. as chief OSS, and I have been hacking on the kernel and our open source security tools for the past few months.

If you have ever used Wireshark or sysdig or Falco then yeah – we are THOSE folks. Falco is written in C++ and uses either a kernel module or a BPF probe to trace system call events in the Linux kernel. It’s fairly complicated to get all the pieces installed and working well on a Linux system, let alone a Kubernetes system.
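
For the curious, here is roughly what launching Falco looks like on a box where the driver is already built and installed (the module name and the FALCO_BPF_PROBE trick match my setup – your mileage may vary):

# Load the kernel module (the classic route), then start the daemon
sudo modprobe falco && sudo falco

# Or skip the module and use the eBPF probe. Setting FALCO_BPF_PROBE (even
# to an empty string) tells Falco to load the compiled BPF object, which it
# looks for under ~/.falco/ by default.
sudo FALCO_BPF_PROBE="" falco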

So I figured everyone could use a fun project during the apocalypse, myself included. So today I made a few branches of Linux, Falco, and Kubernetes and got everything dialed in nicely and compiling on ARMv7.

So if you are interested in Kubernetes and Linux security, if you enjoy free and open source software, and if you have a Raspberry Pi or another ARM board lying around, please follow along and try out the distro I slapped together today.

So let’s understand what we have going on here.

NOVIX

github.com/kris-nova/novix

So I put together an image that should make this as easy as pi (pun intended) to set up. It’s my operating system so if you don’t like it I don’t care. Get off my lawn.

What is inside NOVIX?

Component              Version
Architecture           armv7
Kernel                 Linux novix 4.19.118-1-ARCH armv7l GNU/Linux
Operating System Base  Arch Linux
Operating System       Novix
Falco                  0.22.0
Kubernetes             1.18
Kubeadm                1.18
Tested on Chips        Raspberry Pi 3/4 (armv7), Raspberry Pi 1 B (armv6)

Where do I get NOVIX?

See the latest RELEASE on GitHub

Image        Download                  Arch   Size
Novix 1.0.1  novix-1.0.1-armv7.img.gz  armv7  4.3gb
Novix 1.0.0  novix-1.0.0-armv7.img.gz  armv7  8.5gb

Included in the image:

  • Kernel headers
  • Falco objects
  • Kubernetes binaries
  • Docker
  • CRI
  • Emacs
  • grpc
  • jq

Setting up NOVIX on a Raspberry Pi 3/4

I am assuming you are running Linux; if you aren’t, you should probably start. Otherwise you can duck duck go how to do this on Windows or a Mac – I am sure there are a lot of resources out there.

Download NOVIX and flash to your SD card

mkdir ~/novix && cd ~/novix
fdisk -l # Use this command to find your SD card (mine is usually /dev/sdc)
umount /dev/sdc*
fdisk /dev/sdc # (Use the device that matches your SD card from above)

Thanks, Arch Linux ARM community.

At the fdisk prompt, delete old partitions and create a new one:

  1. Type o. This will clear out any partitions on the drive.
  2. Type p to list partitions. There should be no partitions left.
  3. Type n, then p for primary, 1 for the first partition on the drive, press ENTER to accept the default first sector, then type +110M for the last sector.
  4. Type t, then c to set the first partition to type W95 FAT32 (LBA).
  5. Type n, then p for primary, 2 for the second partition on the drive, and then press ENTER twice to accept the default first and last sector.
  6. Write the partition table and exit by typing w.

Now format the boot partition

mkfs.vfat /dev/sdc1

And now the root partition

mkfs.ext4 /dev/sdc2

And now let’s set up our SD card

wget https://nivenly.com/novix/novix-1.0.1-armv7.img.gz
gunzip --stdout novix-1.0.1-armv7.img.gz | sudo dd bs=4M of=/dev/sdc
sync

If you get stuck, check out the official Arch Linux ARM installation guide and just use my image instead of the one they suggest.

Throw the SD card into the back of your Raspberry Pi, hook it up to your network, and give it some power. You should see a solid light and a blinky light on the card (not the network) indicating that your Pi is online.

SSH into your NOVIX instance

Now we are assuming you have a lovely DHCP server online somewhere, and your Pi should now be on your network. Find its IP address by pulling client lists from your networking gear, arping, guessing, nmap, whatever. I just went into my UniFi dashboard and there it was!
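
If you would rather scan for it, a quick nmap ping sweep works too (assuming your network is 10.0.0.0/24 – adjust for your subnet):

nmap -sn 10.0.0.0/24 # ping scan only, lists every host that answers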

Default NOVIX username: novix
Default NOVIX password: charlie
ssh novix@10.0.0.36
cat README

Notice that if you type novix and hit tab to complete, there are a handful of handy commands.
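
It should look something like this (these four are the ones used in this post – there may be more baked into the image):

novix@novix ~$ novix.<TAB>
novix.falco-logs   novix.hostname   novix.k8s-join   novix.k8s-master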


 

Running Falco

Falco should come precompiled. The kernel module should be loaded and the daemon should already be running.

novix.falco-logs
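
If you want to double check that yourself (this assumes the module is literally named falco and the daemon runs under systemd – poke around if yours differs):

lsmod | grep falco     # is the kernel module loaded?
systemctl status falco # is the daemon running?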

Running Kubernetes

Kubernetes 1.18 should also be baked into the image and all dependencies should already be installed and configured.

The Kubernetes Master

Start by setting up a master. Pick a hostname you want to use for your master (NOTE: you should also probably put this in /etc/hosts on all the machines in your cluster – see the example below).

In this example we will use novix-master for our hostname. Set it using the following command

novix.hostname novix-master
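
While you are at it, here is the kind of /etc/hosts entry I mean, on every machine in the cluster (the IPs here are just examples – use your own):

cat <<'EOF' | sudo tee -a /etc/hosts
10.0.0.43 novix-master
10.0.0.44 novix-001
EOF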

Now start your master server

novix.k8s-master

You should see the output of kubeadm giving you a “join command” that should look something like

kubeadm join 10.0.0.43:443 --token uvjdta.h41bhz0aw5scnvka \
--discovery-token-ca-cert-hash sha256:0d0c32d30ab1dd2a5f3ca6f1d83b61aba9204bf6f8aa8f76e6c50ee37becb6ba

Note the following:

Key     Value
Server  10.0.0.43
Token   uvjdta.h41bhz0aw5scnvka
Hash    sha256:0d0c32d30ab1dd2a5f3ca6f1d83b61aba9204bf6f8aa8f76e6c50ee37becb6ba

 

Now install Calico CNI on your cluster.

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
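
Give Calico a minute to come up. You can watch the pods go Running with:

kubectl get pods -n kube-system -w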

The Kubernetes Node(s)

Set up a new novix machine as one of your nodes. Set a new hostname.

novix.hostname novix-001

Now either run the pasted kubeadm join command above, or you can try

novix.k8s-join 10.0.0.43 uvjdta.h41bhz0aw5scnvka sha256:0d0c32d30ab1dd2a5f3ca6f1d83b61aba9204bf6f8aa8f76e6c50ee37becb6ba

As long as everything can route, you should now have a working Kubernetes cluster with Falco.
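
You can sanity check the whole thing from the master:

kubectl get nodes -o wide # novix-master and every joined node should report Ready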

One of the questions I am asked most in my field is what the secret is to getting talks accepted at technical events.

Since 2016 I have worked at Datapipe, Deis, Microsoft, Heptio, VMware and now Sysdig. I have spoken at events (like Kubecon) at every one of these jobs, and even when I was unemployed.

Below are the things I always focus on while writing a proposal. I hope these can help you with your own goals of writing outstanding conference proposals, and crafting the talk itself.

Problems are good

Seriously, the majority of your effort in writing a proposal should be focused on something bad.

I know it sounds counter-intuitive but stop for a moment and think about things that are entertaining and exciting for you. Movies, Television, Books, Stories… Everything you probably love tells a story and has a well-defined enemy. Without the presence of an enemy, the heroism simply cannot exist.

Think of your favorite movie; mine happens to be Jurassic Park. There is a fundamental problem that the story focuses on, which is that humans have no business creating fucking dinosaurs. Also, sub-plot: if you do decide to create dinosaurs, you had better at least run their cages on Linux.

The worse the problem happens to be, the more exciting solving the problem is to the audience.


Here are things I am always on the lookout for in my technical career.

  • Moments when I am frustrated or passionate about something (this is a sign there is a very nasty enemy lurking nearby)
  • Moments when I say things like “It’s not that simple” or “It doesn’t work like that” or “I wish it was that easy” (again, something bad is just around the corner)
  • Moments when I start asking myself if something is worth it (this implies there would be a lot of work to solve the problem)

These types of moments are invaluable, and they are where every single talk idea of mine comes from. In fact, if you study theater or writing, you learn very quickly the importance of antagonism. Furthermore, you learn about anti-heroes, martyrs, etc. Even if you look at the horror genre, it’s an entire category that celebrates the evil a story can contain.

People love drama.

Check out my clusterfuck talk I did at FOSDEM last year. It highlights a problem well.

 


The clusterfuck hidden in the Kubernetes code base – Kris Nóva – FOSDEM 2019


 

So let’s start by coming up with a fictional talk that presents a problem.

Designing a remote cloud data center with 99.999% uptime

We know we have to have a data center to host data, and we know we won’t have access to it. This is a pretty good start as we know that we have a problem that must be solved. How do we manage and design this data center?

 


Constraints are better

 

Okay, so now that you have learned the importance of having a problem, and are giving the negativity the attention it deserves, we can start to look at interesting twists in our story.

By adding constraints or limitations you can create an even more exciting environment.

Let’s look at a movie called “Deadly Snakes”. So there is at least a clear problem here, which is that snakes can fucking kill you, and that is terrifying. But we can do better; let’s add some constraints to our story.

 

Snakes on a plane


 

Now that sounds like a great story! Not only do we have a clear antagonist, but now we have a set of constraints that our hero has to abide by in order to defeat their foe. Which is: you can’t get off a plane while it’s flying over the ocean, and now there are deadly snakes trying to kill you.

By creating an antagonist, and having a fascinating set of constraints we are starting to create a really powerful hero. But again — you have to have the danger and the constraints in order for the hero to be this triumphant.

So let’s look at tech again.

Remember our talk “Designing a remote cloud data center with 99.999% uptime“. So this is good because we present our problem of needing high uptime. This seems pretty cool, I might go check this talk out.

Now let’s add an outstanding constraint to our talk and see what happens.

Designing a remote cloud data center with 99.999% uptime… on Mars

Holy shit this talk sounds amazing. We have a clear problem and a fascinating constraint. I would hands down cancel plans to go see this talk. Why are you doing this? Why on mars? How much does this cost? Do computers even work there? How will the hardware get there? How will you connect to it? Why the uptime? WHAT ARE YOU DOING?

New ideas are the best

Okay so now you have a problem and a crazy constraint that makes what would otherwise be a fairly simple problem to solve exponentially more complex. Can we get any better from here?

Yes. These are the talks that go down in history, and you can write one (if you are lucky).

Every once in a while (maybe once in a lifetime) you might hit what I call “The Trifecta”, which is a magical moment when 3 very rare things all happen at the same time.

  1. You have a really nasty problem
  2. You have a really interesting constraint
  3. You happen to either invent or use technology that isn’t mainstream yet or has never been done

This is like the indie band that makes it big – but for tech.

How do you know if you have found the next exciting piece of tech for your problem? This question is hard to answer, and I usually ask the following questions to gauge if I am on to something.

  • Is it easy to use?
    • If the answer is no, you are on the right track
  • Can I google it?
    • If the answer is no, you are on the right track
  • Does it solve my problem, and does it adhere to my constraint?
    • If the answer is yes, you are on the right track
  • Do I wish the CFP I am writing was already on YouTube so I didn’t have to do this?
    • If the answer is yes, you are on the right track (and you probably aren’t the only one thinking this)

While a talk like “Running a remote cloud data center on Mars” would be outstanding, it could still be better. What if in order to build our data center we had to do something never done before? This could change the entire tone of the talk.

“The first Martian: designing a robot to build a remote data center on Mars“.

Now that is an outstanding talk.

Not only do we have a clear problem, a fascinating constraint, but we solved the problem in a way that has never been done before. We built a fucking robot that will build our data center for us so we don’t have to.

If you are lucky, you might be in this type of situation in your lifetime. These are the types of talks that make history, and you can find them in your everyday life if you look hard enough.

The CFP

 

There are 3 things I always focus on when writing a CFP:


  1. The Title (problems, constraints, innovation)
  2. The Pitch (How do you explain the problem, the constraints, and your innovation in less than 10 seconds)
  3. The Lesson (You should always explain what the audience will learn, or allude to something they might enjoy)

For the title, I always have a bad guy. I always have a problem. I always have an enemy.

For the pitch, it is quick and easy to understand. You have 3 sentences (max) to hook someone. Make it juicy. People love drama.

For the lesson, I try to create wonder and empathy. Which movie would I rather watch?

  • College students go on a road trip and are murdered by a hitchhiker
  • 3 students on spring break go on the road trip that will change their lives forever when they pick up an innocent-looking hitchhiker

 

I try to give folks a reason to find out more. How does the story end? What happens next?

Bottom Line


Unfortunately, nobody wants to see a talk called “$project is really great”.

However, everyone wants to see a talk called “$project solves this really crazy terrible annoying fucking problem in a way nobody else has ever done before”.

Find a problem, and study it. Obsess over it. Cherish it.

You will get your talk accepted every time.

If you don’t have a problem, you can’t resolve anything. Which means you don’t have a story, which means you don’t have a talk.


NEVER submit a talk to “get on stage” or “raise awareness” or sell anything.

Submit a talk because you were lucky enough to discover a problem, then you solved it, and it was a pain in the ass.

Now you want to share your journey with others. 


Extra credit if the problem you are solving exists for a large group of people – solving it is an opportunity to create a hero and create empathy for others.


Earlier this year I found myself taking a much-needed break from employment.

I began taking a meticulous approach to looking for a new gig. I was astonished and overwhelmed at the interest I had from amazing organizations in the Cloud Native ecosystem. I met with large tech companies, startups, and entrepreneurs yet to start their journey. I noticed everyone was very excited about Kubernetes. Every meeting shared a common theme. It seemed like the entire industry was obsessed with solving their unique problems with our new distributed kernel: Kubernetes.

 

There was only one problem:  Kubernetes is effectively “complete”

 

Don’t get me wrong, there is still a lot of work to do and Kubernetes will always have room for improvement.

My point here is that the core functionality of Kubernetes has not only been completed, but iterated on several times over. The scope of the project is relatively finite, and we have reached a point of maturity where we have functional components to satisfy this scope. The API is several versions deep, and the goal of the project was to provide a platform for the ecosystem to build on. We have achieved that. With that being said, taking another “Kubernetes Centric” role didn’t seem very attractive to me. I love Kubernetes and have been working with the core upstream community for a long time, but I think it’s time to pivot slightly.

 

I am an innovator.

One could argue the majority of innovation within Kubernetes has long since been crafted, whereas innovation with Kubernetes is just beginning.

 

Regardless, I am convinced Kubernetes is here to stay.

As the new Cloud Native “kernel” for distributed systems management, Kubernetes surely won’t be disappearing from my career any time soon. Nor will the CNCF. As a Kubernetes expert, I naturally began my investigation into discovering technology that solved real problems in unique and powerful ways. I really wanted to find technology that I believed in, that I still hadn’t mastered, and that would get me excited. I wanted to go deeper into the little black boxes that still seemed like magic to me. I wanted more. I found myself speaking with many companies and looked at a lot of open source tools: service meshes, networking tools, deployment tools, etc.

 

 

There was one company that stood out…

I see the potential in gaining visibility into our systems at the kernel level, while operating a containerized system on top.

Period.

 

Furthermore there are a lot of other reasons I made the choice I did. To be completely honest, I simply fell in love with the way everyone worked. They were a low-level shop of hackers and genuine people. Everyone I met was empowered and knew precisely what they were talking about. It was a refreshing interviewing experience. Good friends from open source like the infamous POP truly helped make me feel welcomed and respected. During my interview we whiteboarded the Linux kernel not to pass a frivolous test, but because we genuinely needed the diagram for our discussion on eBPF. I immediately knew that this job was going to push me forward technically. I loved it.

As the saga goes, the company offered me a position to take on ownership of our open source tools and I was thrilled. I am quite excited for our team, and an opportunity to do what I love doing: inspiring engineers and solving concrete problems with them!

So let’s talk Falco.

 

As part of my new role at Sysdig I will be managing our OSS team, while simultaneously working on our tools with them. I am way too excited about new friends in Kubernetes SIG-Security. Growing Falco and driving adoption will be a primary goal for me at Sysdig. We want to flesh Falco out as the de facto way of pulling metrics from the kernel for runtime security and intrusion detection.

Once upon a time a simple network packet had every bit of information you needed for total forensics analysis. Today, with Kubernetes abstractions sharing network, disk, and memory across containerized processes, the network is no longer a viable avenue for learning about our complete system’s behavior. This is where Falco and eBPF come into the picture.

Falco uses two critical components to gain visibility into our systems. The first is a comprehensive list of syscall information that we access via eBPF. The second is our context from Kubernetes. By joining these two otherwise completely separate bits of data together, we now have everything we need to secure a Kubernetes cluster.

Falco allows users to dynamically build rules against this rich data set, and it can take action if and when a rule is broken. This noninvasive and elegant approach to security is built on battle-tested concepts, applied in exciting new ways. To be honest, I can’t wait to start hacking on the kernel with the rest of the folks on the team. I am excited to go deeper into kernel security with eBPF and Falco.
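
To give you a taste, here is a hand-rolled rule in the spirit of the ones that ship with Falco (the condition uses real Falco filter fields; the local rules path is just where I would drop it – check your install):

cat <<'EOF' | sudo tee -a /etc/falco/falco_rules.local.yaml
# Fire whenever bash is spawned inside a container
- rule: Shell spawned in a container
  desc: A shell was spawned inside a container
  condition: evt.type = execve and container.id != host and proc.name = bash
  output: "Shell in container (user=%user.name container=%container.name cmd=%proc.cmdline)"
  priority: WARNING
EOF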

So let’s get started!

Today I am pleased to announce full time employment with Sysdig!

 

I will be spending some time getting to know our current state of affairs and exploring our other open source tools moving forward. Stay tuned for updates as we start to dial up our presence in the open source ecosystem with tools like Falco.

 

Please reach out if you are interested in collaborating or would like to find out more about our new projects. My email is nova@sysdig.com and I would love to hear from you.

Hey Everyone!

So I haven’t blogged on nivenly.com in ages, so I figured I was overdue for a technical post. As I am sure some of you have noticed, I am recently unemployed and have been spending the last week or so on back-to-back phone calls with a lot of big names in tech. I have been asked the following question probably 50 times in the last 10 days:

What do you want to do next in your career?

I have been thinking a lot about this recently, and some ideas I have been playing with for a while finally came together in my head. So I want to present the crazy notion of a probably insane idea I have.

Embedded Advocacy

So let’s first cover my previous experience so we can understand how I ended up here.

DevOps / SRE / Infrastructure Bro (literally)

So once upon a time I wore a pager and had a beard.

This shouldn’t be a shock to anyone.

I learned a very valuable lesson here: my job was far more fulfilling and I was far more productive when I would focus on 1 thing and do it well. At this gig I was part of a dispersed “DevOps” team that was charged with performing a lot of bullshit for the many engineering teams we had. I felt most productive and successful when I was able to pair with a team for an extended period of time and really dig in and solve some problems. The one-off interrupt-driven nature of working with other teams left me frustrated and made the quality of my work degrade, as I was context switching all day every day.

DevOps / SRE / Infrastructure witch

I went through a gender transition NBD.

I then started a new job doing the same style of back end support. This time in fishnets and I was not working with multiple teams.

I was embedded to focus on infrastructure for a SINGLE purpose.

This was one of the best jobs I ever worked at (until we were acquired by a large tech company).

I left work every day feeling productive, successful, and proud of what I did. I made the team more productive, and was able to dig in and solve a lot of the really tough problems that nobody else could solve. To this day, I don’t think I have ever been more productive at my day job than I was here. In fact, the majority of my book Cloud Native Infrastructure was inspired from the work I did while working as an embedded DevOps engineer.

Fucking Kubernetes

So then Kubernetes happened.

People discovered I had mad backend skills, and could also perform really well under pressure (thanks pager) and next thing I know I am a developer advocate ranting about Kubernetes for a few years.

I never lost my passion for my backend work (and to be honest can’t wait to get back into it).

Working as a developer advocate has been nice, but it’s not what everyone thinks it is.

Travelling sucks, hotels are annoying, airlines lose your bag, you are constantly working on the go, it’s really challenging work.

Then the 30th of the month happens and you get a notification to do expense reports that you don’t have time for.

Google maps directed you to the wrong building for your talk in a country that doesn’t speak English.

Oh by the way, it’s time to check in for your next flight.

After spending some time talking with folks about what I want next in my career I realized I was trying to describe a role that doesn’t exist, and desperately needs to.

Embedded Advocacy

Product first.

Working as a DA I was a jill of all trades and a master of none. I was detached from our engineers, but was expected to represent them. There was no way for me to keep up with the many efforts that were concurrently happening so ultimately I lost my ability to be a deep and authentic knowledge expert about a single thing. This is when I noticed my passion started to slide.

By working as a DA for many different things it left me uninformed and overworked. The engineers were frustrated with me, and I was with them. Not to mention the customers…

Developer Advocacy has 0 business interfacing with an existing customer. That isn’t advocacy.

I started to imagine a world where I could focus on a single tool. A single product. Do one thing and do it well. Contribute to engineering conversations, help influence engineering, and still advocate for a single product if and when it was needed. There are few engineers who can do this, and I am amazed we haven’t figured out that the majority of DAs would be super stars here.

Imagine going alpha with an open source tool, then having one of the core engineers go out into the wild to teach folks about it and gather feedback. Then imagine them coming back to the whiteboard and sharing their findings in a digestible way with the engineering team who built the product. This. is. developer. advocacy.

We need to start embedding our DAs into our engineering orgs and letting them do what they are good at. Give a DA a single concrete problem, and light a fire underneath them.

We need to stop expecting our DAs to be our one stop shop for every single effort we have going on. This isn’t a sponsorship. Let DAs influence tech. Give them a seat in the engineering discussions to do so.

Embed your DAs to a single team, in the same way you would embed infrastructure engineers to a single team.

Let a single product define the scope of a DA’s involvement with the community. Why send a DA on the road if the product doesn’t need it? Let’s be concrete, people.

Let’s do one thing, and do it well.

 

</endrant>

 

Hey Everyone,

 

Me again.

 

Okay so I am on the plane and heading to Kubecon. I haven’t updated this blog in a while so I figured it was time for a good post. Also I have been thinking about engineering a solution to a problem I have had at Kubecon in the past.

 

Problem Statement

I am beyond busy at Kubecon, and rarely get enough time to spend with the folks I would like to.

Proposed Solution

I was thinking about setting up an office hours situation at Kubecon where folks could come and get some time to talk about whatever they want.

I have had folks come out to me as transgender or homosexual at tech conferences. I have had folks want to talk about family, their career, advice on writing a book. Help with installing Kubernetes. Advice on CI/CD systems. Questions about their company. The list goes on and on!

To be clear I very much enjoyed all of these personal encounters. I think about them all the time, and still routinely talk to most of these people.

I was sad that I didn’t get more time with some folks as I was running around like a mad woman. So I wanted to be a little more structured this time.

So I am going to try to use Calendly to help me stay organized at the conference and to book some time with people.

 

Here is how it works:

  1. Book time with me at the conference
  2. I find a small confidential corner of the conference for us to hang out at.
  3. I let you know via email where to meet me.
  4. We meet up and can talk about literally anything. This is your time ❤️
  5. I will stay as long as I can – so long as nobody else is waiting.
  6. That’s it.

You can book time with me here

FAQ:

 

Does this mean this is all I get?

Nope.

This is just a way to keep things structured; normal conference chaos rules still apply – I just give the folks who book time priority.

 

Can we really talk about anything?

100% yes. We can take selfies, bitch about tech, whatever.

 

How will I find you?

I will send you an email (you give me your email address when you sign up) that tells you what to do.

 

What if I can’t find you?

Don’t worry if something happens or you can’t find me – we will re-schedule and make it work.

 

Can I pick the place?

Yep – just let me know in the email! We can get food, booze, coffee, ice cream – whatever.

 

You can book time with me here

 

Okay – wow – it’s been a long time since I have blogged here…

 

❤️ Hi everyone! I missed you! ❤️

 

 

So due to some unforeseen circumstances in the past ~year or so… I was unable to continue contributing to my pride and joy Kubicorn! It’s been a real bummer, but I am glad to say that those days are officially behind me!

 

The good news? I am back! And we are working hard on making kubicorn even more amazing!

 

So without further delay, let’s dig into what I wanted to update everyone on about kubicorn.

 

What is kubicorn?

Kubicorn is a Kubernetes infrastructure management tool!

kubicorn attempts to solve managing Kubernetes-based infrastructure in a cloud native way. The project is supported by the Cloud Native Computing Foundation (CNCF) and attempts to offer a few things unique to cloud native infrastructure:

  1. Modular components
  2. Vendorable library in Go
  3. A guarantee-less framework that is flexible and scalable
  4. A solid way to bootstrap and manage your infrastructure
  5. Software to manage infrastructure

 

These patterns are fundamental to understanding cloud native technologies, and (in my opinion) are the future of infrastructure management!

How do I know this? I literally wrote the book on it

TL;DR: It’s a badass command line tool that goes from 0 to Kubernetes in just a few moments.

 

What’s new in kubicorn?

We have officially adopted the Kubernetes cluster API

The cluster API is an official working group in the Kubernetes community.

The pattern dictates that while bootstrapping a cluster, kubicorn will first create the Kubernetes control plane.

Then kubicorn will define the control plane itself, as well as the machines that will serve as the nodes (workhorses) of the Kubernetes cluster.

Finally, kubicorn will install a controller that will realize the machines defined in the previous step.

This allows for arbitrary clients to adjust the machine definition, without having to care how kubicorn will autoscale their cluster.

 

Autoscaling infrastructure, the Kubernetes way!

So this is an official working group of Kubernetes, and more information can be found here!

We are currently working on building out this framework, and if you think it is a good idea (like we do) feel free to get involved.

We have a dope website

kubicorn.io

So many thanks to Ellen for her hard work in building out our fabulous site. If you would like to contribute please let us know!

We have moved to our own GitHub organization

Despite my attempts to keep the project as one of my pets, it’s just gotten too big. It has now moved to its own organization. We would love for YOU to get involved. Just open up an issue or contact Kris Nova if you would like to join.

We are just now starting on the controller

Writing the infrastructure controller is probably the most exciting thing to happen to the project yet.

We literally JUST started the repository. Get in now, while the getting is good!

We need help with the Dockerfile, with the idea behind the controller, and even with the code to make it so. If you want to get involved, now is your chance!

We want your help

Seriously.

Our lead maintainer Marko started out as someone on the internet just like you. He is now a super admin of the project, and is a rock star at keeping up with the day-to-day work of kubicorn.

We would love to help you become a cloud native badass if you would like to contribute. Please join the slack channel and start talking to us.

And of course as always…

 

Let me know what you think in the comments below!

Hey everyone!

So a huge thanks to HashiConf for letting me come out and talk about this stuff in person! But for those of you who missed it, or want more information, there is also this blog on the matter as well.

So this is just a quick technical follow-up on the tool terraformctl that I used in my session to get Terraform up and running inside of Kubernetes as a controller!

What is terraformctl?

A command line tool and gRPC server that is pronounced Terraform Cuddle.

 

The GitHub repo can be found here!

 

It’s a philosophical example of how infrastructure engineers might start looking at running cloud native applications to manage infrastructure. The idea behind the tool is to introduce this new way of thinking, and not necessarily to be the concrete implementation you are looking for. This idea is new, and therefore a lot of tooling is still being crafted. This is just a quick and dirty example of what it might look like.

Terraformctl follows a simple client/server pattern.

We use gRPC to define the protocol in which the client will communicate with the server.

The server is a program written in Golang that will handle incoming gRPC requests concurrently while running a control loop.

The incoming requests are cached to a mutex-controlled shared point in memory.

The control loop reads from the shared memory.

Voila. Concurrent microservices in Go!

What is cloud native infrastructure?

Well it’s this crazy idea that we should start looking at managing cloud native infrastructure in the same way we manage traditional cloud native applications.

If we treat infrastructure as software, then we have no reason to run that software in legacy or traditional ways when we can truly conquer it by running it in a cloud native way. I love this idea so much that I helped author a book on the subject! Feel free to check it out here!

The bottom line is that the new way of looking at the stack is to start thinking of the layers that were traditionally managed in other ways as layers that are now managed by discrete and happy applications. These applications can be run in containers, and orchestrated in the same ways that all other applications can. So why not do that? YOLO.

What Terraformctl is not..

Terraformctl is not (and will never be) production ready.

It’s a demo tool, and it’s hacky. If you really want to expand on my work feel free to ping me, or just outright fork it. I don’t have time to maintain yet another open source project, unfortunately.

Terraformctl is not designed to replace any enterprise solutions, it’s just a thought experiment. Solving these problems is extremely hard, so I just want more people to understand what is really going into these tools.

Furthermore there are a number of features not yet implemented in the code base that the code base was structured for. Who knows, maybe one day I will get around to coding them. We will see.

If you really, really, really want to talk more about this project. Please email me at kris@nivenly.com.

 

 


What are we creating?

  • Kubernetes v1.7.3
  • Private Networking in Digital Ocean
  • Encrypted VPN mesh for droplets
  • Ubuntu Droplets

So at Gophercon I released my latest project kubicorn.

As I go along I want to publish a set of use cases as examples. This helps me exercise kubicorn and understand my assumptions. It would be really cool if others could step in and use these cases to improve the system.

7 Node Cluster in Digital Ocean

Creating your cluster

So the deployment process is pretty straightforward. The first thing you need to do is grab a copy of `kubicorn v0.0.003`.


$ go get github.com/kris-nova/kubicorn

Verify kubicorn is working, and you are running the right version.

$ kubicorn --fab

Also you will need a Digital Ocean access key. You can use this guide to help you create one. Then just export the key as an environment variable.

 
$ export DIGITALOCEAN_ACCESS_TOKEN=***************************************** 

The project offers a starting point for a Digital Ocean cluster called a profile. Go ahead and create one on your local filesystem.

$ kubicorn create dofuckyeah --profile do

Feel free to take a look at the newly created representation of the cluster and tweak it to your liking. Here is what mine looks like

For my cluster all I did was change the maxCount from 3 to 7 for my node serverPool.

When you are happy with your config, go ahead and apply the changes!

$ kubicorn apply dofuckyeah -v 4

Then check out your new cluster and wait for your nodes to come to life!

$ kubectl get no

What we created

We created 8 droplets, all running Ubuntu 16.04.

The master droplet uses a fantastic tool called meshbird to create an encrypted private VPN service mesh on Digital Ocean private networking.

Each droplet gets a new virtual NIC called tun0 that allows it to route on the private VPN.
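
You can see the mesh for yourself by SSHing into any droplet (tun0 is the interface name from my deployment – verify on yours):

ip addr show tun0 # shows the private VPN address meshbird assigned to this droplet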

The nodes register against the master via the newly created virtual NIC.

The master API is advertised on the public IP of the master droplet.

You can check out the bootstrap script for the master here, and for the nodes here.

And thanks to kubeadm

Poof. Kubernetes.

Want to learn more?

Check out the kubicorn project on GitHub!

Follow @kubicornk8s on Twitter to get up to the second updates!

Join us in #kubicorn in the official Gopher’s slack!

 


Just keep reading.. I promise this is worth it..

Okay so I made a new Kubernetes infrastructure tool (sorry not sorry). Introducing my latest pile of rubbish… kubicorn!

Check it out on github here.

Why I made the tool

I made the tool for a lot of reasons. The main one is so that I could have some architectural freedom. Here are some other reasons:

  • I want to start using (or abusing) kubeadm
  • I believe in standardizing a Kubernetes cluster API for the community
  • I believe in pulling configuration out of the engine, so users can define their own
  • I believe in creating the tool as a consumable library, so others can start to use it to build infrastructure operators
  • I wanted to enforce the concept of reconciliation, and pull it all the way up to the top of the library
  • I want to support multiple clouds (really)
  • I want it to be EASY to build a cloud implementation
  • I want it to be EASY to understand the code base
  • I want it to be EASY to contribute to the project
  • I want it to be as idiomatic Go as possible

I am sure there are more, but you get the idea.

What it does

It empowers the user (that’s you) to manage infrastructure.

It lets the user (still you) define things like what the infrastructure should look like, and how the cluster should bootstrap.

It offers really great starting points (called profiles) that are designed to work out of the box. Tweak away!

It (eventually) will take snapshots of clusters to create an image. The image is both the infrastructure layer as well as the application layer bundled up into a lovely tarball. The tarball can be moved around, replicated, saved, and backed up. The tarball is a record of your entire cluster state (including infrastructure!).

What is next?

Please help me.

I need contributors and volunteers for the project. I want to share as much knowledge as possible with the user (you!) so that everyone can begin contributing.

What clouds do we support?

Today? AWS

Tomorrow? Digital Ocean (literally tomorrow… check out the PR)

Next? You tell me. The whole point here is that the implementation is easy, so anyone can do it.

 

Kubicorn vs Kops

 

Feature                                            Kops  Kubicorn
HA Clusters                                        Yes   No
Easy to use library                                No    Yes
Kubeadm                                            No    Yes
Bring your own bootstrap                           No    Yes
Awesome as shit                                    No    Yes
API in Go                                          No    Yes
Digital Ocean Support                              No    Yes
Kubernetes Official                                Yes   No
Multiple Operating Systems (Ubuntu, CentOS, etc)   No    Yes
Requires DNS                                       Yes   No

 

Setting up Kubernetes 1.7.0 in AWS with Kubicorn

This is not ready for production! I started coding this a few weeks ago in my free time, and it’s very new!

Also check out the official walkthrough here!

Install kubicorn

go get github.com/kris-nova/kubicorn

Create a cluster API

kubicorn create knova --profile aws

Authenticate

You should probably create a new IAM user for this, with the following permissions:

  • AmazonEC2FullAccess
  • AutoScalingFullAccess
  • AmazonVPCFullAccess

Then export your auth information

export AWS_ACCESS_KEY_ID="omgsecret"
export AWS_SECRET_ACCESS_KEY="evenmoresecret"

Apply

Then you can apply your changes!

kubicorn apply knova


Access

Then you can access your cluster

kubectl get nodes

Delete

Delete your cluster

kubicorn delete knova


Your 2nd day with Kubernetes on AWS

Okay, so you have a cluster up and running on AWS. Now what? Seriously, managing a Kubernetes cluster is hard. Especially if you are even thinking about keeping up with the pace of the community. The good news is that kops makes this easy. Here are a few commonly used stories on how to manage a cluster after everything is already up and running. If there is something you don’t see that you would like, please let me know!

This tutorial assumes you were able to successfully get a cluster up and running in AWS, and you are now ready to see what else it can do.

In this tutorial we are covering 2nd day concerns for managing a Kubernetes cluster on AWS. The idea of this tutorial is to exercise some useful bits of kops functionality that you won’t see during a cluster deployment. Here we really open up kops to see what she can do (yes, kops is a girl)

In this tutorial we will be running kops 1.5.1, which can be downloaded here.

We will also be making the assumption that you have an environment setup similar to the following.

export KOPS_NAME=nextday.nivenly.com
export KOPS_STATE_STORE=s3://nivenly-com-state-store

Upgrading Kubernetes with kops

Suppose you are running an older version of Kubernetes, and want to run the latest and greatest..

Here we will start off with a Kubernetes v1.4.8 cluster. We are picking an older cluster here to demonstrate the workflow in which you could upgrade your Kubernetes cluster. The project evolves quickly, and you want to be able to iterate on your clusters just as quickly. To deploy a Kubernetes v1.4.8 cluster:

kops create cluster --zones us-west-2a --kubernetes-version 1.4.8 $KOPS_NAME --yes

As the cluster is deploying, notice how kops will conveniently remind us that the version of Kubernetes that we are deploying is outdated. This is by design. We really want users to know when they are running old code.

..snip
A new kubernetes version is available: 1.5.2
Upgrading is recommended (try kops upgrade cluster)
..snip

So now we have an older version of Kubernetes running. We know this by running the following command and looking for Server Version: version.Info

kubectl version

Now, we can use the following command to see what kops suggests we should do:

kops upgrade cluster $KOPS_NAME

We can safely append --yes to the end of our command to apply the upgrade to our configuration. But what is really happening here?

When we run the upgrade command as in

kops upgrade cluster $KOPS_NAME --yes

all we are really doing is appending some values to the cluster spec. (Remember, this is the state store that is stored in S3 in YAML). Which of course can always be accessed and edited using:

kops edit cluster $KOPS_NAME

In this case you will notice how the kops upgrade cluster command conveniently changed the following line in the configuration file for us.

  kubernetesVersion: 1.5.2

We can now run a kops update cluster command as we always would, to apply the change.

kops update cluster $KOPS_NAME --yes

We can now safely roll each of our nodes to finish the upgrade. Let’s use kops rolling-update cluster to redeploy each of our nodes. This is necessary to finish the upgrade. A kops rolling update will cycle each of the instances in the autoscaling group with the new configuration.

kops rolling-update cluster $KOPS_NAME --yes

We can now check the version of Kubernetes, and validate that we are in fact using the latest version.

 kubectl version 

Note: If a specific version of Kubernetes is desired, you can always use the --channel flag and specify a valid channel. An example channel can be found here.

Scaling your cluster

Suppose you would like to scale your cluster to process more work..

In this example we will start off with a very basic cluster, and turn the node count up using kops instance groups.

kops create cluster --zones us-west-2a --node-count 3 $KOPS_NAME --yes

After the cluster is deployed we can validate that we are using 3 nodes by running

kubectl get nodes

Say we want to scale our nodes from 3 to 30. We can easily do that with kops by editing the nodes instance group using:

kops edit instancegroup nodes

We can then bump our node counts up to 30

spec:
  image: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21
  machineType: t2.medium
  maxSize: 30
  minSize: 30
  role: Node
  subnets:
  - us-west-2a

We then of course need to update our newly edited configuration

kops update cluster $KOPS_NAME --yes


Kops will update the AWS ASG automatically, and poof we have a 30 node cluster.

I do actually try all of this before recommending it to anyone. So yes, I was able to actually deploy a 30 node cluster in AWS with kops.

The cluster was deployed successfully, and the primary component of lag was waiting on Amazon to deploy the instances after detecting a change in the Autoscaling group.

 

A quick delete command from kops, and all is well.

 kops delete cluster $KOPS_NAME --yes 

Audit your clusters

Suppose you need to know what is going on in the cloud.. and audit your infrastructure..

By design kops will never store information about the cloud resources, and will always look them up at runtime. So gaining a glimpse into what you have running currently can be a bit of a concern. There are 2 kops commands that are very useful for auditing your environment, and also auditing a single cluster.

In order to see what clusters we have running in a state store we first use the following command:

kops get clusters

Notice how we no longer have to use `$KOPS_NAME`. This is because we already have a cluster deployed, and thus should already have a working `~/.kube/config` file in place. We can infer a lot of information from the file. Now that we have a cluster name (or more!) in mind, we can use the following command:

kops toolbox dump

Which will output all the wonderful information we could want about a cluster in a format that is easy to query. It is important to note that the resources defined here are discovered using the same cluster lookup methods `kops` uses for all other cluster commands. This is a raw and unique output of your cluster at runtime!
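
Since the dump is designed to be queried, it pipes nicely into jq. The field names below are illustrative – shape your query around whatever your dump actually contains:

kops toolbox dump | jq '.resources[].id' # list every dumped resource id (hypothetical field names)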

Thanks!

Thank you for reading my article. As always, I appreciate any feedback from users. So let me know how we could be better. Also feel free to check me out on GitHub for more kops updates.
