<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Michael Champagne&#39;s blog</title>
    <link>https://blog.csnet.me/</link>
    <description>Recent content on Michael Champagne&#39;s blog</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 11 Apr 2018 00:00:00 +0000</lastBuildDate>
    
	<atom:link href="https://blog.csnet.me/index.xml" rel="self" type="application/rss+xml" />
    
    
    <item>
      <title>On-Prem k8s | Part 1</title>
      <link>https://blog.csnet.me/k8s-thw/part1/</link>
      <pubDate>Wed, 11 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part1/</guid>
      <description>Machines This tutorial assumes you already have basic infrastructure blocks such as DHCP and DNS up and running. We will set up an HA Kubernetes cluster with 3 control plane nodes and 3 worker nodes.
We will also need a load balancer in front of the Kubernetes API server. We will use HAProxy.
The OS will be Ubuntu 16.04 on all hosts.
The following table lists the hosts to be provisioned.</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 2</title>
      <link>https://blog.csnet.me/k8s-thw/part2/</link>
      <pubDate>Wed, 11 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part2/</guid>
      <description>PKI Infrastructure We will use the cfssl and cfssljson utilities from CloudFlare’s open source PKI toolkit.
If you are interested in building a complete PKI infrastructure, I invite you to read this interesting post on CloudFlare’s blog.
Tools Installation The easiest way to install the two utilities is to download the prebuilt binaries.
curl -o cfssl https://pkg.cssl.org/R1.2/cfssl_linux-amd64
curl -o cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 3</title>
      <link>https://blog.csnet.me/k8s-thw/part3/</link>
      <pubDate>Wed, 11 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part3/</guid>
      <description>Kubeconfig files Kubeconfig files, or Kubernetes configuration files, enable Kubernetes clients to locate and authenticate to Kubernetes API servers.
We will use kubectl to generate config files for the kubelet and kube-proxy clients.
The Kubernetes scheduler and controller manager also access the Kubernetes API server, but through a non-secured port exposed on the localhost interface; therefore they do not require a kubeconfig file. kubectl First we will install the kubectl utility.
curl -LO https://storage.</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 4</title>
      <link>https://blog.csnet.me/k8s-thw/part4/</link>
      <pubDate>Wed, 11 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part4/</guid>
      <description>Secret data encryption Kubernetes supports encryption at rest to securely store data in the etcd k/v database.
In this section, we will create a Kubernetes encryption config manifest specifying the resources we want encrypted, along with the encryption mechanism and key.
Later, the kube-apiserver will be started with the --experimental-encryption-provider-config flag in order to enable data encryption at rest.
Encryption key First, we generate a random, base64-encoded key:</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 5</title>
      <link>https://blog.csnet.me/k8s-thw/part5/</link>
      <pubDate>Wed, 11 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part5/</guid>
      <description>Kubernetes stores cluster state in an etcd k/v store. We will now set up a three-node etcd cluster for high availability.
All actions described in this section need to be performed on each controller node. Install binaries wget -q --show-progress --https-only --timestamping \ "https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz" Extract and copy the etcd and etcdctl binaries to your PATH.
tar -xvf etcd-v3.2.18-linux-amd64.tar.gz
sudo mv etcd-v3.2.18-linux-amd64/etcd* /usr/local/bin
Setup etcd Create the etcd directories and copy the TLS certs.</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 6</title>
      <link>https://blog.csnet.me/k8s-thw/part6/</link>
      <pubDate>Wed, 11 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part6/</guid>
      <description>We will now bootstrap the Kubernetes control plane on the three controller nodes. We will also set up the HAProxy host haprx1, which will load balance the API server traffic across the three controllers.
Load Balancer Install HAProxy HAProxy will be set up on the load balancer node haprx1. The latest stable version is not available in the default repos.
sudo add-apt-repository ppa:vbernat/haproxy-1.8
apt-cache policy haproxy
sudo apt-get install haproxy=1.</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 7</title>
      <link>https://blog.csnet.me/k8s-thw/part7/</link>
      <pubDate>Sun, 15 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part7/</guid>
      <description>In this section, we will set up the worker nodes. The following components will be installed on each node:
cni plugins, containerd, kubelet, kube-proxy
All steps in this section need to be run on each worker node. Host preparation A few points need to be addressed at the host level in order to make everything work smoothly:
socat and conntrack should be installed
sudo apt install socat conntrack
IPv4 packet forwarding should be enabled</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 8</title>
      <link>https://blog.csnet.me/k8s-thw/part8/</link>
      <pubDate>Mon, 16 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part8/</guid>
      <description>In this section we will generate a kubeconfig file for the kubectl utility based on the admin user credentials.
Run the commands from the same directory used to generate the admin client certificates in Part 2. kubeconfig file The kubeconfig file contains the information needed to connect remotely to a Kubernetes cluster. It stores the following information:
cluster (name, API server URL, CA data), user (name, client cert data), context (name, cluster name, user name). The context links the cluster information to the user information.</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 9</title>
      <link>https://blog.csnet.me/k8s-thw/part9/</link>
      <pubDate>Wed, 18 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part9/</guid>
      <description>Weave We will use Weave version 2.3.0 as our network overlay.
It is really easy to install as a Kubernetes addon and does not require additional configuration.
Here is the basic command from the Weave documentation:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" We need to customize the default configuration to reflect our custom POD_CIDR network (10.16.0.0/16). We can customize the yaml manifest by passing the IPALLOC_RANGE option in the HTTP GET:</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 10</title>
      <link>https://blog.csnet.me/k8s-thw/part10/</link>
      <pubDate>Wed, 18 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part10/</guid>
      <description>CoreDNS CoreDNS is a DNS server written in Go. It is a Cloud Native Computing Foundation incubating project and an eventual replacement for kube-dns. It was promoted to beta with Kubernetes 1.10.
CoreDNS is configured through a Corefile; in Kubernetes, we will use a ConfigMap.
We will simply deploy it using the following manifest file:
kubectl apply -f https://raw.githubusercontent.com/mch1307/k8s-thw/master/coredns.yaml
serviceaccount "coredns" created
clusterrole.rbac.authorization.k8s.io "system:coredns" created
clusterrolebinding.rbac.authorization.k8s.io "system:coredns" created
configmap "coredns" created
deployment.</description>
    </item>
    
    <item>
      <title>On-Prem k8s | Part 11</title>
      <link>https://blog.csnet.me/k8s-thw/part11/</link>
      <pubDate>Fri, 11 May 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/k8s-thw/part11/</guid>
      <description>After setting up the Kubernetes cluster as described in this guide, I tried to validate it against the “CNCF K8s Conformance Tests”. The easiest and most standard tool for that is Heptio’s Sonobuoy. Head to the “Sonobuoy Scanner tool” web site, click on “Scan your cluster”, copy the generated kubectl command and run it on your cluster.
Unfortunately, the result was not the one I was expecting 😞
The first runs were timing out, so I added more resources to my VMs and finally changed the timeout parameter in the downloaded sonobuoy yaml file.</description>
    </item>
    
    <item>
      <title>vaultlib: a Go Vault client library for reading secrets</title>
      <link>https://blog.csnet.me/2019/02/vaultlib/</link>
      <pubDate>Sat, 02 Feb 2019 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2019/02/vaultlib/</guid>
      <description>Moving applications to containers can require considerable development effort. One of the important transitions is the application’s configuration. In traditional deployments, configuration is usually performed “individually” on the target server. Some organizations use deployment or automation tools; others might even do it manually.
As part of their configuration, most applications will require various credentials for accessing the services they consume (database, APIs, ...). HashiCorp’s Vault is a great tool to manage such data.</description>
    </item>
    
    <item>
      <title>On-Premises Kubernetes… The Hard Way</title>
      <link>https://blog.csnet.me/2018/04/on-prem-k8s-thw/</link>
      <pubDate>Thu, 26 Apr 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2018/04/on-prem-k8s-thw/</guid>
      <description>While preparing for the CKA exam, I had been using minikube, kubeadm and Rancher’s rke to bootstrap Kubernetes clusters. Those tools are very nice, but I wanted to understand all the details of a full setup. The best resource for this is the excellent “Kubernetes The Hard Way” tutorial by Kelsey Hightower.
I wanted to do the setup on-premises, meaning no cloud provider, so I had to “adapt” the tutorial accordingly.</description>
    </item>
    
    <item>
      <title>From WP.com to Hugo on Netlify</title>
      <link>https://blog.csnet.me/2018/03/18/migrate-to-hugo/</link>
      <pubDate>Sun, 18 Mar 2018 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2018/03/18/migrate-to-hugo/</guid>
      <description>When I decided to start blogging and have my own “site”, I chose Wordpress.com, as it was for me the easiest and most comfortable solution. I have been using it for a few months and, to be fair, I think the product and service are OK.
The main pain points for me are:
authoring and preview; having to move to a more expensive plan in order to install plugins. So I decided to have a look at Hugo, a fast static web site generator.</description>
    </item>
    
    <item>
      <title>Building a Go Api: gRPC, Rest and OpenApi (swagger)</title>
      <link>https://blog.csnet.me/blog/building-a-go-api-grpc-rest-and-openapi-swagger.1/</link>
      <pubDate>Wed, 13 Dec 2017 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/blog/building-a-go-api-grpc-rest-and-openapi-swagger.1/</guid>
      <description>gRPC is an open source RPC framework offering high performance and pluggable support for authentication, tracing, health checks and load balancing. It offers libraries in the most widely used languages (Java, Node.js, C++, Python, Go, ...).
In this post, we will create a pseudo “Home control” server that will expose some APIs using gRPC. We will then add a REST API using grpc-gateway and generate OpenAPI documentation.
Prerequisites The following components should be installed:</description>
    </item>
    
    <item>
      <title>gomotics 0.3.0 released</title>
      <link>https://blog.csnet.me/2017/11/03/gomotics-0-3-0-released/</link>
      <pubDate>Fri, 03 Nov 2017 22:28:39 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2017/11/03/gomotics-0-3-0-released/</guid>
      <description>gomotics is a Go API for Niko Home Control.
The 0.3.0 release introduces integration with Jeedom. It can now act as a gateway between Jeedom and your Niko Home Control installation.
Source code is available on GitHub, documentation here.</description>
    </item>
    
    <item>
      <title>First steps with Rancher 2.0</title>
      <link>https://blog.csnet.me/2017/10/25/first-steps-with-rancher-2-0/</link>
      <pubDate>Wed, 25 Oct 2017 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2017/10/25/first-steps-with-rancher-2-0/</guid>
      <description>Rancher Labs released Rancher 2.0 Tech Preview on the 26th of September. The 2.0 release is a significant one, as it brings many changes compared to the 1.x versions. Based on feedback from current Rancher users, market trends (almost all major infrastructure providers offer “Kubernetes-as-a-Service”), and some vision (“Kubernetes everywhere”), they have re-engineered Rancher 2.0 to be fully based on Kubernetes.
To me, one of the key strengths of Rancher so far was making it easy to deploy and manage a container orchestrator.</description>
    </item>
    
    <item>
      <title>Setting up CI/CD pipeline for Golang using Travis-CI, Coveralls, goreleaser and Docker</title>
      <link>https://blog.csnet.me/2017/09/11/setting-up-cicd-pipeline-for-golang-using-travis-ci-coveralls-goreleaser-and-docker/</link>
      <pubDate>Mon, 11 Sep 2017 21:11:16 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2017/09/11/setting-up-cicd-pipeline-for-golang-using-travis-ci-coveralls-goreleaser-and-docker/</guid>
      <description>Overview In the previous post, I introduced my personal project “gomotics”, a domotics API for Niko Home Control, written in Go. In this post I will detail how I set up the “build – test – release” pipeline for this project.
Objectives The objectives for this “pipeline” are:
build the code for different platforms; run the unit tests (some are really more integration tests); measure the coverage; in case of a release tag, release the built binaries; build a docker image and release it on Docker Hub. To achieve those objectives we will use GitHub for sources, Travis-CI for building, Coveralls for coverage, goreleaser to automate the releases to GitHub pages, and Docker Hub for our container image release.</description>
    </item>
    
    <item>
      <title>gomotics, a go rest API for Niko Home Control</title>
      <link>https://blog.csnet.me/2017/09/06/gomotics-a-go-rest-api-for-niko-home-control/</link>
      <pubDate>Wed, 06 Sep 2017 10:53:02 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2017/09/06/gomotics-a-go-rest-api-for-niko-home-control/</guid>
      <description>I am developing a small domotics back-end in Go, mainly as a learning exercise. I had already done this in NodeJS, and as I wanted to learn Golang, I decided to re-develop the same kind of tool. This time, I am doing it more “properly” in terms of test coverage. And as Go is a compiled language, I have set up an automated build, test and release process using Travis CI, Coveralls and goreleaser.</description>
    </item>
    
    <item>
      <title>gomotics</title>
      <link>https://blog.csnet.me/gomotics/</link>
      <pubDate>Sun, 27 Aug 2017 20:36:33 +0000</pubDate>
      
      <guid>https://blog.csnet.me/gomotics/</guid>
      <description>Overview gomotics is a small program written in Go that aims to offer easy-to-consume REST API endpoints for Niko Home Control and to act as an interface between Niko Home Control and Jeedom. It can be used solely to link NHC to Jeedom, or to build a UI to perform simple, day-to-day operations.
Features NHC zero conf (automatically discovers the Niko Home Control installation); NHC switches; NHC dimmers; interface with Jeedom (matches NHC/Jeedom on name and location); automatically creates NHC rooms, switches and dimmers in Jeedom. Demo Installation Navigate to the gomotics releases page.</description>
    </item>
    
    <item>
      <title>Traefik/Consul demo app: my first Go “dev”</title>
      <link>https://blog.csnet.me/2017/08/20/traefikconsul-demo-app-my-first-go-dev/</link>
      <pubDate>Sun, 20 Aug 2017 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/2017/08/20/traefikconsul-demo-app-my-first-go-dev/</guid>
      <description>If you don’t mind skipping my interesting story, scroll down to the “quick demo overview” 😉
In the last weeks, I have been playing with HashiCorp’s Consul for service registration and key/value store.
After playtime, I had to introduce the product to internal teams, so I prepared a few slides on the architecture and main features of Consul. To complete my presentation, I prepared a brief demo, which at first used consul-template to watch the Consul service catalog and restart the simple, static HAProxy I had set up in front of my “demo” web app.</description>
    </item>
    
    <item>
      <title>Traefik as a Dynamically-Configured Proxy and Load-Balancer</title>
      <link>https://blog.csnet.me/blog/2017-07-11-rancher-traefik/</link>
      <pubDate>Wed, 12 Jul 2017 00:00:00 +0000</pubDate>
      
      <guid>https://blog.csnet.me/blog/2017-07-11-rancher-traefik/</guid>
      <description>This post was also published on Rancher.com on 12-07-2017.
When deploying applications in the container world, one of the less obvious points is how to make the application available to the external world, outside of the container cluster. One option is to use the host port, which basically maps a port on the host to the container port where the application is exposed. While this option is fine for local development, it is not viable in a real cluster with many applications deployed.</description>
    </item>
    
    
  </channel>
</rss>