Category: Go

Okay – wow – it’s been a long time since I have blogged here…

 

❤️ Hi everyone! I missed you! ❤️

 

 

So due to some unforeseen circumstances in the past ~year or so… I was unable to continue contributing to my pride and joy, Kubicorn! It’s been a real bummer, but I am glad to say that those days are officially behind me!

 

The good news? I am back! And we are working hard on making kubicorn even more amazing!

 

So without further delay, let’s dive into the kubicorn updates I wanted to share with everyone.

 


What is kubicorn?

Kubicorn is a Kubernetes infrastructure management tool!

kubicorn attempts to solve managing Kubernetes-based infrastructure in a cloud native way. The project is supported by the Cloud Native Computing Foundation (CNCF) and attempts to offer a few things unique to cloud native infrastructure:

  1. Modular components
  2. Vendorable library in Go
  3. A guarantee-less framework that is flexible and scalable
  4. A solid way to bootstrap and manage your infrastructure
  5. Software to manage infrastructure

 

These patterns are fundamental to understanding cloud native technologies, and (in my opinion) are the future of infrastructure management!

How do I know this? I literally wrote the book on it.

TL;DR: It’s a badass command line tool that goes from 0 to Kubernetes in just a few moments.

 

What’s new in kubicorn?

We have officially adopted the Kubernetes cluster API

The cluster API is an official working group in the Kubernetes community.

The pattern dictates that while bootstrapping a cluster, kubicorn will first create the Kubernetes control plane.

Then kubicorn will define the control plane itself, as well as the machines that will serve as the nodes (workhorses) of the Kubernetes cluster.

Finally, kubicorn will install a controller that will realize the machines defined in the previous step.

This allows for arbitrary clients to adjust the machine definition, without having to care how kubicorn will autoscale their cluster.

 

Autoscaling infrastructure, the kubernetes way!

So this is an official working group of Kubernetes, and more information can be found here!

We are currently working on building out this framework, and if you think it is a good idea (like we do) feel free to get involved.

We have a dope website

kubicorn.io

So many thanks to Ellen for her hard work in building out our fabulous site. If you would like to contribute please let us know!

We have moved to our own GitHub organization

Despite my attempts to keep the project as one of my pets, it’s just gotten too big. It has now moved to its own organization. We would love for YOU to get involved. Just open up an issue or contact Kris Nova if you would like to join.

We are just now starting on the controller

Writing the infrastructure controller is probably the most exciting thing to happen to the project yet.

We literally JUST started the repository. Get in now, while the getting is good!

We need help with the Dockerfile, with the idea behind the controller, and even with the code to make it so. If you want to get involved, now is your chance!

We want your help

Seriously.

Our lead maintainer Marko started out as someone on the internet just like you. He is now a super admin of the project, and is a rock star at keeping up with the day-to-day work of kubicorn.

We would love to help you become a cloud native badass if you would like to contribute. Please join the slack channel and start talking to us.


And of course as always…

 

Let me know what you think in the comments below!

Site Link
Twitter @krisnova
LinkedIn  kris-nova
GitHub  kris-nova

My contact information. This will make more sense later.

 

Like all good blog posts. This too started out as a tweet.


The conversation was regarding me changing my Twitter handle, and Justin brought up a great point: my Twitter handle is hard-coded all over the internet. My engineer brain instantly started to draft up solutions to this problem, as I realized this was the perfect use case for pointer pointers!

Pointers in Go

Now if you are new to pointers in Go, don’t worry. Pointers are easy.

Just think of pointers as URLs!

Seriously, a URL is just a pointer to a web page. In this case the URL itself (say https://nivenly.com) is the address, and whatever the web page returns is its value.

See? Pretty easy, right? I told you pointers were no big deal.
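To make the analogy concrete, here is a minimal sketch: the variable is the web page, and the pointer is its URL.

```go
package main

import "fmt"

func main() {
	// A string value, like a web page's content
	page := "Hello from nivenly.com"

	// A pointer to that value, like the URL that addresses it
	url := &page

	// Dereferencing follows the "URL" to the current content
	fmt.Println(*url)

	// Changing the underlying value changes what the pointer resolves to
	page = "Updated content"
	fmt.Println(*url)
}
```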

Pointer Pointers

So in the case of changing my contact information, let’s take a look at how things are today.

Notice how in this model, if I need to revise my contact information I would need to go and update every page on the internet that has my contact information linked to it. If we had a pointer page, we could safely hard code all the pages on the internet to one page that I control. Say a page on my blog, for instance. Then if we needed to update anything, all we need to do is update the pointer page.

 

In this model, we no longer need to make a change to the web page on the internet as it will also be pointing to the pointer page.

An example in Go

 

Below is a concrete example of pointer pointers in Go, using the use case from the tweet: needing to change your Twitter URL without changing the higher level implementation.

 


package main

import (
	"fmt"
)

type ContactInfo struct {
	Twitter  string
	LinkedIn string
	GitHub   string
}

func main() {
	
	// -----------------------------------------------------
	// First version of my contact information
	kris := &ContactInfo{
		Twitter: "https://twitter.com/kris__nova",
		LinkedIn: "https://linkedin.com/in/kris-nova/",
		GitHub: "https://github.com/kris-nova",
	}


	// -----------------------------------------------------
	// Assign my contact information as a pointer pointer
	myContactInfo := &kris
	fmt.Println(**myContactInfo)
	
	
	// -----------------------------------------------------
	// Second version of my contact information
	kris = &ContactInfo{
		Twitter: "https://twitter.com/krisnova",
		LinkedIn: "https://linkedin.com/in/kris-nova/",
		GitHub: "https://github.com/kris-nova",
	}
	
	
	
	// -----------------------------------------------------
	// Feel free to hard code myContactInfo anywhere and I 
	// can change it right from underneath you!
	fmt.Println(**myContactInfo)
}

Run it on the Go playground!

 

 

Cheers, thanks!


So hanging out at GothamGo this year has been inspirational! I have been able to rub elbows with the best of the best Go engineers in the world.

Last night I was introduced to what I think is..

..finally the solution to generics in Go!

The G Package

The Generics Package

So there is an Apache 2 open source licensed package that can be found on GitHub here.

The package is clean, and elegant. So let’s take the package for a spin!

First things first, we need to install the G package. Luckily it uses the idiomatic Go installation method, go get.


go get github.com/StabbyCutyou/generics

Now we can import the package into our Go program.


import . "github.com/StabbyCutyou/generics"

The Implementation

We can now take ugly and non-idiomatic (but flexible) Go code such as the following and implement a much more elegant solution for Generics.


func UglyUnIdiomaticQuoteGenericApproachUnquote(poorexcuse ...interface{}) []interface{}

with the G package now becomes the following


func Excellence(things ...G) []G
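I won’t reproduce the package source here, but assuming G boils down to an alias for the empty interface (an assumption on my part, not a quote from the repository), a self-contained sketch of the pattern might look like this:

```go
package main

import "fmt"

// G is assumed here to be an empty-interface alias, roughly
// what a package like this would export.
type G interface{}

// Excellence accepts any values at all, "generically".
func Excellence(things ...G) []G {
	return things
}

func main() {
	mixed := Excellence(1, "two", 3.0)
	fmt.Println(len(mixed)) // 3
}
```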

Backwards Compatibility

From the G package repository we can read a glorious compatibility statement:

G meets the standard of golang by matching its stance on backwards compatibility. Until a 2.0 release of generics, which may never happen, G will always be 100% Backwards compatible with it’s initial 1.0 release.

Behind The Scenes

The source code for G is simple and elegant, and I encourage all users to take a peek at what is going on behind the scenes. In my eyes it is a clean solution to Generics in Go, and I couldn’t be more pleased with the project.

I hope this helps.

Cheers.

So I am working on a Go speech today, and I got to a slide where I wanted to mention the C# programming language. Or more importantly, I wanted to mention how some internal teams at Microsoft are switching over from C# to Go!

The only problem with this slide is that I am supposed to be somewhat credible in what I say..

and I have never written a line of C# in my life.

Furthermore I run Linux as my primary operating system. So of course I decided it would be a good idea to try to get a C# development environment up and running on Arch Linux. After a total of 10 seconds of searching Google I couldn’t find the step-by-step tutorial I wanted, so naturally I am creating one.

So here goes…

 

Install VS Code

C# isn’t scary at all!

Download page for Linux

Tarball for Linux

Okay, so you don’t need VS Code, but the C# plugin is really legit. It feels like any other programming language!

But feel free to use any text editor you like. We are just going to be banging out a quick and dirty hello world.

If you plan on writing copious amounts of C# I strongly suggest you get VS Code. It’s free and works fantastically on Linux.

Install Mono

(Seriously this is all you do)


sudo pacman -S mono

Write your hello world program

Create a new file called HelloWorld.cs anywhere on your file system.

HelloWorld.cs

// A Hello World! program in C#.
using System;
namespace HelloWorld
{
    class Hello 
    {
        static void Main() 
        {
            Console.WriteLine("Hello World!");

            // Keep the console window open in debug mode.
            Console.WriteLine("Press any key to exit.");
            Console.ReadKey();
        }
    }
}

Thanks to the official Microsoft docs for the code snippet!

Compile

The mono mcs compiler works very similarly to gcc, and accepts an -out: flag to specify the name of the executable.


mcs -out:hello.exe HelloWorld.cs

Run

Then you can run your program!


mono hello.exe

What’s next?

I am going to do some benchmarking with C# and Go and explore some concurrency patterns between the two. Stay tuned for my findings!

Hey everyone!

So a huge thanks to HashiConf for letting me come out and talk about this stuff in person! But for those of you who missed it, or want more information, there is also this blog on the matter as well.

So this is just a quick technical follow up of the tool terraformctl that I used in my session to get Terraform up and running inside of Kubernetes as a controller!

What is terraformctl?

A command line tool and gRPC server that is pronounced Terraform Cuddle.

 

The GitHub repo can be found here!

 

It’s a philosophical example of how infrastructure engineers might start looking at running cloud native applications to manage infrastructure. The idea behind the tool is to introduce this new way of thinking, and not necessarily to be the concrete implementation you are looking for. This idea is new, and therefore a lot of tooling is still being crafted. This is just a quick and dirty example of what it might look like.

Terraformctl follows a simple client/server pattern.

We use gRPC to define the protocol in which the client will communicate with the server.

The server is a program written in Golang that will handle incoming gRPC requests concurrently while running a control loop.

The incoming requests are cached to a mutex controlled shared point in memory.

The control loop reads from the shared memory.

Voila. Concurrent microservices in Go!
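terraformctl’s actual code isn’t reproduced here, but the mutex-guarded shared memory plus control loop pattern can be sketched roughly like this (all names are hypothetical, not the real API):

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a hypothetical mutex-guarded shared memory region,
// standing in for the point in memory the gRPC handlers write to.
type Cache struct {
	mu    sync.Mutex
	items map[string]string
}

// Apply simulates an incoming gRPC request caching desired state.
func (c *Cache) Apply(name, state string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[name] = state
}

// Snapshot is what the control loop reads on each pass.
func (c *Cache) Snapshot() map[string]string {
	c.mu.Lock()
	defer c.mu.Unlock()
	out := make(map[string]string, len(c.items))
	for k, v := range c.items {
		out[k] = v
	}
	return out
}

func main() {
	cache := &Cache{items: map[string]string{}}

	// Concurrent "requests" write to the shared memory.
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			cache.Apply(fmt.Sprintf("resource-%d", i), "desired")
		}(i)
	}
	wg.Wait()

	// The control loop reads the snapshot and reconciles each entry.
	fmt.Println(len(cache.Snapshot()), "resources to reconcile")
}
```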

What is cloud native infrastructure?

Well it’s this crazy idea that we should start looking at managing infrastructure in the same way we manage cloud native applications.

If we treat infrastructure as software, then we have no reason to run that software in legacy or traditional ways when we can truly conquer it by running it in a cloud native way. I love this idea so much that I helped author a book on the subject! Feel free to check it out here!

The bottom line is that the new way of looking at the stack is to start thinking of the layers that were traditionally managed in other ways as layers that are now managed by discrete and happy applications. These applications can be run in containers, and orchestrated in the same ways that all other applications can. So why not do that? YOLO.

What Terraformctl is not..

Terraformctl is not (and will never be) production ready.

It’s a demo tool, and it’s hacky. If you really want to expand on my work feel free to ping me, or just out right fork it. I don’t have time to maintain yet another open source project unfortunately.

Terraformctl is not designed to replace any enterprise solutions; it’s just a thought experiment. Solving these problems is extremely hard, so I just want more people to understand what really goes into these tools.

Furthermore there are a number of features that the code base was structured for but that are not yet implemented. Who knows, maybe one day I will get around to coding them. We will see.

If you really, really, really want to talk more about this project. Please email me at kris@nivenly.com.

 

 

A well written sorting algorithm is hard to replace, and typically the ones that have been battle tested will stand the test of time and stick around for a while. The Go programming language offers a clean abstraction on top of a commonly used sorting algorithm. With the sort package in the standard library, a user can sort arbitrary types in arbitrary ways using a time-proven sorting algorithm.

The way Go has set this up for us is by defining an interface called sort.Interface. The rule is that as long as a type implements this interface, it can be sorted.

 

The sort.Interface interface


type Interface interface {
       // Len is the number of elements in the collection.
       Len() int
       // Less reports whether the element with
       // index i should sort before the element with index j.
       Less(i, j int) bool
       // Swap swaps the elements with indexes i and j.
       Swap(i, j int)
}

The interface definition can be found here. The interface defines 3 functions, all of which are critical to the success of the sort.

Len()

The sort needs to know the size of the collection it is working on, so we need a way to report the length of a collection. Think of this like the len() function in Go; in fact, len() is usually the appropriate implementation here.

Less()

This is a function that will receive 2 integer indices from the collection. The algorithm assumes the implementation will perform some logic here and make an assertion. The algorithm does not care whether a user actually checks that one value is less than another; that assertion is completely arbitrary. This is analogous to the comparison function used in C’s qsort().

Swap()

Go takes the sort abstraction one step further, and also makes the user define a Swap() function. This is called whenever the algorithm decides two elements are out of order and need to trade places. In Go, however, you can make Swap() do whatever you want in this case.

Sorting integers with sort.Sort()

A very basic example of defining a sort.Interface implementation is to declare a named slice type over int in Go, and implement the required functions.

type Integers []int

func (s Integers) Len() int {
	return len(s)
}
func (s Integers) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}
func (s Integers) Less(i, j int) bool {
	return s[i] < s[j]
}

This implementation can now be passed to sort.Sort() and sorted on. A full working example of an integer sort with sort.Sort() can be found here in the Go playground, or my raw source here on GitHub.

Sorting structs with sort.Sort()

Below is an example of sorting on a collection of custom types, and using the utf8 package to sort on the int32 value of a rune. This is important because it demonstrates how a user might want to sort on a “calculated” value instead of a literal value.

A user can implement any logic they wish in Less() or in Swap(), giving the user a powerful opportunity to build quick and efficient sorting programs on any type they can dream of. This is useful in situations where the logic for what a user wants to sort on might be something other than a simple numerical or alphabetical sort.

type Type struct {
	Value string
}

type Types []Type

func (s Types) Len() int {
	return len(s)
}
func (s Types) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}
func (s Types) Less(i, j int) bool {
	iRune, _ := utf8.DecodeRuneInString(s[i].Value)
	jRune, _ := utf8.DecodeRuneInString(s[j].Value)
	return int32(iRune) > int32(jRune)
}

As always you can run the code in Go Playground, and also see the raw source in GitHub.

Conclusion

The Go programming language offers a familiar and clean abstraction for sorting on arbitrary types and with arbitrary logic. The user can implement any logic they wish in regards to what the sort will use to sort on.

By offering the underlying algorithm and defining sort.Interface, the language helps us build powerful sorting programs without enforcing a single opinion on what the sort should order by, only on how the ordering is carried out.

Thanks!

Thank you for reading my article. As always, I appreciate any feedback from users. So let me know how we could be better.

Follow @kris-nova

So today I decided to take on a fairly complex design pattern in the hopes of demonstrating some more advanced concurrency features in Go, or at least ruffling a few feathers about how to approach digraph processing.

Wikipedia offers a great explanation of graph theory in case you need to brush up!

The goal of this post is to offer a design approach to processing graphs concurrently, where each vertex in the graph represents a processable unit of work.

The graph will be processed concurrently via a `Crawl()` operation, where there are N concurrent processors. Each processor will process a stream of vertices procedurally by referencing a function pointer to a `process()` function.

The Graph

The graph is composed of edges and vertices. We start the example off by building a complex graph with an arbitrary child count. Each of the children may or may not have their own children. A child is represented by a vertex.

The Vertices

A vertex represents an intersection of the graph. Where 2 edges intersect, there will be a vertex. The only way to get from one vertex to another is to traverse its corresponding edge.

The Edges

Edges are what connect vertices. Every edge has a `to` and a `from` pointer to the two vertices it connects. Because an edge implies direction, and only connects in a single direction, the graph is a directed graph, or a digraph.

The Processors

Processors process vertices. In this example we simulate some arbitrary work by injecting a random sleep. In a real implementation a processor would actually accomplish some amount of work that a vertex would require. We have N processors so they can operate on vertices concurrently. The more processors, the more vertices we can concurrently operate on, or process.

The Crawler

The crawler will traverse edges to find vertices. As the crawler finds a vertex it will concurrently process each vertex by passing it to a processor. Processors are called cyclically and in order.

For example if we had 10 vertices and 3 processors the call pattern would look like this.

V -- P
------
1 -- 1
2 -- 2
3 -- 3
4 -- 1
5 -- 2
6 -- 3
7 -- 1
8 -- 2
9 -- 3
0 -- 1

The vertices are processed in unique goroutines, but on shared channels. The channel sends will block and form a queue if the flow of vertices overflows the processors’ ability to keep up.

The win

The huge win here is that the graph stays constructed in its original form. The crawler can iterate through its many (and complex) layers quickly because of the concurrent processing design. A processor could be replaced with any implementation capable of running a workload. This allows the user to structure complex data while operating on it without any overhead of understanding the data. The processor gets a vertex, and that’s it. The processors have no concept of order, and they don’t need to.

Notice how the program is able to calculate the operations by counting the graph, and the graph is actually processed quickly with the same number of operations. Good. Clean. Concurrent processing.

Furthermore a user can turn up the number of iterations and specify how many times to visit each vertex. This is useful in situations where a process is idempotent but could potentially fail. Sending the same request N times makes sense and increases the probability of success.

package main



import (
	"math/rand"
	"time"
	"fmt"
)

const (
	NumberOfConcurrentProcessors = 32
	NumberOfCrawlIterations = 4
)

// DiGraph
//
// This is a directional graph where edges can point in any direction between vertices.
// The graph has 1 root vertex, which is where the Crawl() starts from.
type DiGraph struct {
	RootVertex       *Vertex       // The root vertex of the graph
	Processors       []*Processor  // List of concurrent processors
	ProcessorIndex   int           // The current index of the next processor to use
	Edges            []*Edge       // All directional edges that make up the graph
	Iterations       int           // The total number of times to iterate over the graph
	TotalVertices    int           // Count of the total number of vertices that make up the graph
	ProcessedChannel chan int      // Channel to track processed vertices
	ProcessedCount   int           // Total number of processed vertices
	TotalOperations  int           // Total number of expected operations | [(TotalVertices * Iterations) - Iterations] + 1
}

// Vertex
//
// A single unit that composes the graph. Each vertex has relationships with other vertices,
// and should represent a single entity or unit of work.
type Vertex struct {
	Name   string // Unique name of this Vertex
	Edges  []*Edge
	Status int
}

// Edge
//
// Edges connect vertices together. Edges have a concept of how many times they have been processed
// And a To and From direction
type Edge struct {
	To             *Vertex
	From           *Vertex
	ProcessedCount int
}

// Processor
//
// This represents a single concurrent process that will operate on N number of vertices
type Processor struct {
	Function func(*Vertex) int
	Channel  chan *Vertex
}

// Init the graph with a literal definition
var TheGraph *DiGraph = NewGraph()

func main() {
	TheGraph.Init(NumberOfConcurrentProcessors, NumberOfCrawlIterations)
	TheGraph.Crawl()
}

func (d *DiGraph) Init(n, i int) {
	noProcs := n
	d.TotalVertices = d.RootVertex.recursiveCount()
	d.Iterations = i
	for ; n > 0; n-- {
		p := Processor{Channel: make(chan *Vertex)}
		d.Processors = append(d.Processors, &p)
		p.Function = Process
		go p.Exec()
	}
	d.TotalOperations = (d.TotalVertices * d.Iterations) - d.Iterations + 1 //Math is hard
	fmt.Printf("Total Vertices              : %d\n", d.TotalVertices)
	fmt.Printf("Total Iterations            : %d\n", d.Iterations)
	fmt.Printf("Total Concurrent Processors : %d\n", noProcs)
	fmt.Printf("Total Assumed Operations    : %d\n", d.TotalOperations)
}

func (d *DiGraph) Crawl() {
	d.ProcessedChannel = make(chan int)
	go d.RootVertex.recursiveProcess(d.getProcessor().Channel)
	fmt.Printf("---\n")
	for d.ProcessedCount < d.TotalOperations {
		d.ProcessedCount += <-d.ProcessedChannel
		//o(fmt.Sprintf("%d ", d.ProcessedCount))
	}
	fmt.Printf("\n---\n")
	fmt.Printf("Total Completed Operations  : %d\n", d.ProcessedCount)
}

func (d *DiGraph) getProcessor() *Processor {
	maxIndex := len(d.Processors) - 1
	if d.ProcessorIndex == maxIndex {
		d.ProcessorIndex = 0
	} else {
		d.ProcessorIndex += 1
	}
	return d.Processors[d.ProcessorIndex]
}

func Process(v *Vertex) int {
	// Simulate some work with a random sleep
	rand.Seed(time.Now().Unix())
	sleep := rand.Intn(100) + 100
	time.Sleep(time.Millisecond * time.Duration(sleep))

	o(fmt.Sprintf("Processing: %s", v.Name))
	// Return a status code
	return 1
}

func (v *Vertex) recursiveProcess(ch chan *Vertex) {
	ch <- v
	for _, e := range v.Edges {
		if e.ProcessedCount < TheGraph.Iterations {
			e.ProcessedCount += 1
			go e.To.recursiveProcess(TheGraph.getProcessor().Channel)
		}
	}
}

func (v *Vertex) recursiveCount() int {
	i := 1
	for _, e := range v.Edges {
		if e.ProcessedCount != 0 {
			e.ProcessedCount = 0
			i += e.To.recursiveCount()
		}
	}
	return i
}

func (v *Vertex) AddVertex(name string) *Vertex {
	newVertex := &Vertex{Name: name}
	newEdge := &Edge{To: newVertex, From: v, ProcessedCount: -1}
	newVertex.Edges = append(newVertex.Edges, newEdge)
	v.Edges = append(v.Edges, newEdge)
	return newVertex
}

func (p *Processor) Exec() {
	for {
		v := <-p.Channel
		v.Status = p.Function(v)
		TheGraph.ProcessedChannel <- 1
	}
}

func NewGraph() *DiGraph {
	rootVertex := &Vertex{Name: "0"}
	v1 := rootVertex.AddVertex("1")
	rootVertex.AddVertex("2")
	rootVertex.AddVertex("3")
	v1.AddVertex("1-1")
	v1.AddVertex("1-2")
	v1_3 := v1.AddVertex("1-3")
	v1_3.AddVertex("1-3-1")
	v1_3.AddVertex("1-3-2")
	v1_3_3 := v1_3.AddVertex("1-3-3")
	v1_3_3.AddVertex("1-3-3-1")
	v1_3_3.AddVertex("1-3-3-2")
	v1_3_3.AddVertex("1-3-3-3")
	v1_3_3.AddVertex("1-3-3-4")
	v1_3_3.AddVertex("1-3-3-5")
	v1_3_3.AddVertex("1-3-3-6")
	v1_3_3.AddVertex("1-3-3-7")
	graph := &DiGraph{}
	graph.RootVertex = rootVertex
	return graph
}

func o(str string) {
	fmt.Println(str)
}

Try it out

You can run the code yourself in the Go playground.


Problem

Given 2 arbitrary integers X and N construct a tree such that the root has X child nodes, and each of the root’s children has N child nodes. Walk the graph touching each node only once and tracking the distance of each node from the root of the tree. For every node that has children, the parent node MUST BE visited first.

The total number of node visitations should match the following formula:


T = (X * N) + X + 1
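As a quick sanity check, plugging in the values used in the solution (X=3 root children, N=10 children each) predicts 34 visits, which matches the final total in the execution output:

```go
package main

import "fmt"

func main() {
	x, n := 3, 10 // rootsChildren and childrenChildren from the solution
	t := (x * n) + x + 1
	fmt.Println(t) // 34
}
```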

Solution

package main

import (
	"fmt"
)

// main will build and walk the tree
func main() {
	fmt.Println("Building Tree")
	root := buildTree()
	fmt.Println("Walking Tree")
	root.walk()

}

// Total is a fun way to total how many nodes we have visited
var total = 1

// How many children for the root to have
const rootsChildren = 3

// How many children for the root's children to have
const childrenChildren = 10

// node is a super simple node struct that will form the tree
type node struct {
	parent   *node
	children []*node
	depth    int
}

// buildTree will construct the tree for walking.
func buildTree() *node {
	root := &node{}
	root.addChildren(rootsChildren)
	for _, child := range root.children {
		child.addChildren(childrenChildren)
	}
	return root
}

// addChildren is a convenience to add an arbitrary number of children
func (n *node) addChildren(count int) {
	for i := 0; i < count; i++ {
		newChild := &node{
			parent: n,
			depth:  n.depth + 1,
		}
		n.children = append(n.children, newChild)
	}
}

// walk is a recursive function that calls itself for every child
func (n *node) walk() {
	n.visit()
	for _, child := range n.children {
		child.walk()
	}
}

// visit will get called on every node in the tree.
func (n *node) visit() {
	d := "└"
	for i := 0; i <= n.depth; i++ {
		d = d + "───"
	}
	fmt.Printf("%s Visiting node with address %p and parent %p Total (%d)\n", d, n, n.parent, total)
	total = total + 1
}

 

Execution Output


Building Tree
Walking Tree
└─── Visiting node with address 0x104401a0 and parent 0x0 Total (1)
└────── Visiting node with address 0x104401c0 and parent 0x104401a0 Total (2)
└───────── Visiting node with address 0x10440220 and parent 0x104401c0 Total (3)
└───────── Visiting node with address 0x10440240 and parent 0x104401c0 Total (4)
└───────── Visiting node with address 0x10440260 and parent 0x104401c0 Total (5)
└───────── Visiting node with address 0x10440280 and parent 0x104401c0 Total (6)
└───────── Visiting node with address 0x104402a0 and parent 0x104401c0 Total (7)
└───────── Visiting node with address 0x104402e0 and parent 0x104401c0 Total (8)
└───────── Visiting node with address 0x10440300 and parent 0x104401c0 Total (9)
└───────── Visiting node with address 0x10440320 and parent 0x104401c0 Total (10)
└───────── Visiting node with address 0x10440340 and parent 0x104401c0 Total (11)
└───────── Visiting node with address 0x10440360 and parent 0x104401c0 Total (12)
└────── Visiting node with address 0x104401e0 and parent 0x104401a0 Total (13)
└───────── Visiting node with address 0x10440380 and parent 0x104401e0 Total (14)
└───────── Visiting node with address 0x104403a0 and parent 0x104401e0 Total (15)
└───────── Visiting node with address 0x104403c0 and parent 0x104401e0 Total (16)
└───────── Visiting node with address 0x104403e0 and parent 0x104401e0 Total (17)
└───────── Visiting node with address 0x10440400 and parent 0x104401e0 Total (18)
└───────── Visiting node with address 0x10440440 and parent 0x104401e0 Total (19)
└───────── Visiting node with address 0x10440460 and parent 0x104401e0 Total (20)
└───────── Visiting node with address 0x10440480 and parent 0x104401e0 Total (21)
└───────── Visiting node with address 0x104404a0 and parent 0x104401e0 Total (22)
└───────── Visiting node with address 0x104404c0 and parent 0x104401e0 Total (23)
└────── Visiting node with address 0x10440200 and parent 0x104401a0 Total (24)
└───────── Visiting node with address 0x104404e0 and parent 0x10440200 Total (25)
└───────── Visiting node with address 0x10440500 and parent 0x10440200 Total (26)
└───────── Visiting node with address 0x10440520 and parent 0x10440200 Total (27)
└───────── Visiting node with address 0x10440540 and parent 0x10440200 Total (28)
└───────── Visiting node with address 0x10440560 and parent 0x10440200 Total (29)
└───────── Visiting node with address 0x104405a0 and parent 0x10440200 Total (30)
└───────── Visiting node with address 0x104405c0 and parent 0x10440200 Total (31)
└───────── Visiting node with address 0x104405e0 and parent 0x10440200 Total (32)
└───────── Visiting node with address 0x10440600 and parent 0x10440200 Total (33)
└───────── Visiting node with address 0x10440620 and parent 0x10440200 Total (34)

Try it out

You can run the code yourself in the Go playground.


I wrote a writeup on the C implementation of Go plugins in Go 1.8 with a proof of concept code, and examples on how to demonstrate the functionality.

The original project can be found here.

#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>

int main(int argc, char **argv) {
    void *handle;
    void (*run)();
    char *error;

    handle = dlopen ("../plugins/plugin1.so", RTLD_LAZY);
    if (!handle) {
        fputs (dlerror(), stderr);
        printf("\n");
        exit(1);
    }

    run = dlsym(handle, "plugin/unnamed-4dc81edc69e27be0c67b8f6c72a541e65358fd88.init");
    if ((error = dlerror()) != NULL)  {
        fputs(error, stderr);
        printf("\n");
        exit(1);
    }

    (*run)();
    dlclose(handle);
}

Suppose you want to execute a function, and you expect it to complete in a predefined amount of time..

Maybe you just don’t care about the result if you can’t get it quickly.

Timeout patterns in golang are very useful if you want to fail quickly. Particularly in regard to web programming or socket programming.

The idea behind a timeout is handy, but a pain to code over and over. Here is a clever example of a concurrent timeout implementation, as well as an example on channel factories.

Timeout

Timeouts are the idea that the code should move forward after an arbitrarily defined amount of time if another task has not completed.

Concurrent Factories

Concurrent factories are ways of generating channels in Go that look and feel the same, but can have different implementations.

In the case of the code below, the getTimeoutChannel function behaves as a concurrent factory.


package main

import (
	"time"
	"fmt"
)

// Will call the getTimeoutChannel factory function, passing in different sleep times for each channel
func main() {
	ch_a := getTimeoutChannel(1) // 1 sec
	ch_b := getTimeoutChannel(2) // 2 sec
	ch_c := getTimeoutChannel(5) // 5 sec
	select {
	case <-ch_a:
		fmt.Println("Channel A")
	case <-ch_b:
		fmt.Println("Channel B")
	case <-ch_c:
		fmt.Println("Channel C")
	}
}

// Will generate a new channel, and concurrently run a sleep based on the input
// Will return true after the sleep is over
func getTimeoutChannel(N int) chan bool {
	ch := make(chan bool)
	go func() {
		time.Sleep(time.Second * time.Duration(N))
		ch <- true
	}()
	return ch
}


What happened?

We started 3 concurrent timeout factories, each with a unique channel. Each of the channels will time out and return a value after the defined sleep. In this example ch_a will obviously time out first, as it only sleeps for 1 second.

The program will block until one of the 3 channels returns a value. As soon as a value arrives on a channel, the logic for the corresponding case is performed. This allows us to easily pick and choose which avenue the code should progress with further.

When is this useful?

Imagine if ch_a and ch_b were not timeout channels but rather actual logic in your program. Imagine if ch_a was actually a read from a cache, and ch_b was actually a read from a database.

Let’s say the 2 second timeout was actually a 2 second cache read. The program shouldn’t really care if the cache read is successful or not. If it is taking 2 seconds, then it is hardly doing its job as a quick cache anyway. So the program should be smart enough to use whatever value is returned first, and not whatever value should be returned first. In this case, the database read.

Now we are in a situation where we implemented a cache, and for whatever reason the cache doesn’t seem to want to return a value. Perhaps updating the cache would be in order?

We can take our example one step further and keep the 5 second timeout on ch_c. For the sake of our experimental program, 5 seconds should be more than enough time for any of the supported avenues to return a meaningful value. If 5 seconds have elapsed and the first two channels haven’t reported any valuable data, we should consider the system in a state of catastrophic failure, and report back accordingly. Simply add the failure path to the program, and rest assured that the program will handle even the most unexpected of edge cases quickly, and meaningfully.

Now, doesn’t that seem like a great way to structure a program?