
Some time after writing my first article, in which I cleverly combined Jsonnet and GitLab, I realized that pipelines are certainly good, but unnecessarily difficult and inconvenient.

In most cases the task is a typical one: "generate YAML and put it in Kubernetes". This is exactly what Argo CD does really well.

Argo CD allows you to connect a Git repository and sync its state to Kubernetes. Several application types are supported out of the box: Kustomize, Helm charts, Ksonnet, raw Jsonnet, or plain directories with YAML/JSON manifests.

Most users will be happy with just this tool set, but not all. To cover the remaining cases, Argo CD supports custom tooling. …
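As a sketch of the basic workflow, a minimal Application manifest pointing Argo CD at a Git repository might look like this (the repository URL, paths and names here are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/manifests.git   # hypothetical repository
    targetRevision: HEAD
    path: apps/example
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  syncPolicy:
    automated: {}   # keep the cluster in sync with Git automatically
```

Once applied, Argo CD continuously renders the manifests in that path and reconciles the cluster toward them.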


Hi, recently I came across an interesting task: setting up a storage server for backing up a large number of block devices.

Every week we back up all the virtual machines in our cloud, so we need to be able to handle thousands of backups and do it as quickly and efficiently as possible.

Unfortunately, the standard RAID5 and RAID6 levels are not suitable, because the rebuild process on disks as large as ours would be painfully long and would most likely never finish successfully.

Let’s consider the alternatives:

Erasure coding: an analogue of RAID5/RAID6, but with a configurable parity level. In addition, fault tolerance is applied not to whole block devices but to each object separately. The easiest way to try erasure coding is to deploy MinIO. …
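For a quick taste, a single MinIO server switches to erasure-coded mode simply by being given several drives (the mount paths below are illustrative):

```shell
# with 4 or more drives MinIO enables erasure coding automatically,
# striping each object into data and parity blocks across the drives
minio server /mnt/disk{1...4}
```

Losing one drive then costs you no objects, only redundancy, and parity can be tuned per deployment.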



Not so long ago I faced a rather unusual task: configuring routing for MetalLB. That would be no problem, since MetalLB usually requires no additional configuration from the user, but in our case we have a fairly large cluster with a quite simple network configuration.

In this article I will show you how to configure source-based and policy-based routing for the external network on your cluster.

I will not dwell on installing and configuring MetalLB in detail, as I assume you already have some experience with it. Let’s get straight to the point and configure the routing. …
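To give an idea of what is involved, source-based routing on a node boils down to a separate routing table plus a policy rule that selects it by source address (the subnet and gateway below are made up for illustration):

```shell
# register an extra routing table for the external network
echo "100 external" >> /etc/iproute2/rt_tables

# default route for traffic that belongs to the external network
ip route add default via 192.168.100.1 table external

# policy rule: packets sourced from the MetalLB address range use that table
ip rule add from 192.168.100.0/24 lookup external
```

This way replies to clients of a MetalLB service leave through the external gateway instead of the node's default route.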



GitLab CI has a nice feature: it generates a Docker registry token for each job. However, this works only for its own integrated registry and not for external ones such as Harbor, Nexus or Quay.

It is possible to set up an external Docker registry for GitLab; this is well described in the documentation: Use an external container registry with GitLab as an auth endpoint.

The documentation proposes configuring a brand-new Docker registry with token-based authentication. Harbor also uses the standard Docker registry as its backend, so in theory we could configure it this way, but the problem is that both GitLab and Harbor require their own settings for it, and those settings conflict. …
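For reference, the GitLab side of the documented setup is a handful of settings in /etc/gitlab/gitlab.rb (the hostnames and paths are placeholders):

```ruby
# /etc/gitlab/gitlab.rb — point GitLab at an external registry
registry['enable'] = false                      # disable the bundled registry
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "registry.example.com"
gitlab_rails['registry_api_url'] = "https://registry.example.com"
gitlab_rails['registry_key_path'] = "/etc/gitlab/ssl/registry-auth.key"
gitlab_rails['registry_issuer'] = "omnibus-gitlab-issuer"
```

The conflict arises because Harbor expects to be the token issuer for the very same registry these keys configure.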


Photo by Christopher Gower on Unsplash

Hi!
Recently, many cool automation tools have been released, both for building Docker images and for deploying to Kubernetes. In this regard, I decided to play with GitLab a little, study its capabilities and, of course, set up a pipeline.

The source of inspiration for this work was the site kubernetes.io, which is automatically generated from source code.
For each new pull request, a bot automatically generates a preview version with your changes and provides a link for review.

I tried to build a similar process from scratch, but entirely on GitLab CI and the free tools that I habitually use to deploy applications to Kubernetes. …
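The core of such a pipeline is a review-app job in .gitlab-ci.yml; a minimal sketch might look like the following (the image, deploy script and domain are placeholders — substitute your own tooling):

```yaml
deploy_review:
  stage: deploy
  image: alpine:latest                      # placeholder; use an image with your deploy tooling
  script:
    - ./deploy.sh "$CI_ENVIRONMENT_SLUG"    # hypothetical deploy script
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.example.com
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```

GitLab then attaches the environment URL to the merge request, giving reviewers a one-click preview link.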



Let me tell you how you can safely store SSH keys on a local machine, without fearing that some application could steal or decrypt them.
This article will be especially useful to those who never found an elegant solution after the paranoia of 2018 and continue to store keys in $HOME/.ssh.

To solve this problem, I suggest using KeePassXC, one of the best password managers: it uses strong encryption algorithms and also has an integrated SSH agent.

This allows you to safely store all your keys directly in the password database and automatically add them to the system whenever it is unlocked. …
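If you want to try this, all you need is a key pair to import into the database; generating a fresh one takes a single command (the file name and comment are arbitrary):

```shell
# generate an ed25519 key pair; both files can then be imported
# into the "SSH Agent" section of a KeePassXC entry
ssh-keygen -t ed25519 -f ./id_ed25519 -N "" -C "example-key"
```

Once the entry is configured and the database is unlocked, `ssh-add -l` should list the key.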



I needed to show a dashboard with monitoring information on several screens in the office. We had several old Raspberry Pi Model B+ boards and a hypervisor with a virtually unlimited amount of resources.

Unfortunately, the Raspberry Pi Model B+ does not have enough power to keep a browser running constantly and render a large amount of graphics in it, which is why the page is glitchy and often crashes.

I found a fairly simple and elegant solution, which I want to share with you.

As you know, all Raspberries have a quite powerful video processor, which is great for hardware video decoding. This led to the idea of running the browser with the dashboard somewhere else and delivering a stream of the rendered image to the Raspberry Pi. …
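On the Raspberry side, this amounts to playing a network stream through the hardware decoder; with omxplayer, for instance, it could be as simple as (the stream URL is hypothetical):

```shell
# play an RTMP stream fullscreen using the Pi's hardware video decoder;
# --live trims buffering to keep the dashboard close to real time
omxplayer --live rtmp://streaming-host/live/dashboard
```

The heavy lifting — running the browser and encoding the video — happens on the hypervisor, while the Pi only decodes.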



A short guide on how to set up Keycloak to connect Kubernetes with your LDAP server and import users and groups. This will allow you to configure RBAC and use an auth proxy to secure the Kubernetes Dashboard and other applications that have no authentication of their own.

Keycloak Installation

Let’s assume that you already have an LDAP server. It can be Active Directory, FreeIPA, OpenLDAP or something else. If you have no LDAP server, you can simply create users directly in the Keycloak interface, or connect other public OIDC providers (Google, GitHub, GitLab); the result will be the same.

First you need to install Keycloak. You can run it separately or inside your Kubernetes cluster. If you have many Kubernetes clusters, it usually makes sense to run it separately. Otherwise, you can just use the ready-made Helm chart for Keycloak and install it into Kubernetes. …
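The in-cluster variant is a couple of commands; for example, with the codecentric chart (the release name and namespace below are arbitrary choices, not requirements):

```shell
helm repo add codecentric https://codecentric.github.io/helm-charts
helm install keycloak codecentric/keycloak \
  --namespace keycloak --create-namespace
```

After that, expose the Keycloak service through your ingress and point the LDAP federation settings at your directory server.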


This guide is an updated version of my previous article, Creating High Available Baremetal Kubernetes cluster with Kubeadm and Keepalived (Simple Guide).
Since v1.13, deployment has become much easier and more logical. Note that this article is my personal interpretation of the official Creating Highly Available Clusters with kubeadm guide for stacked control plane nodes, plus a few more steps for Keepalived.

If you have any questions, or something is not clear, please refer to the official documentation or ask Google. All steps are described here in a short and simple form.

Input data

We have a 3-node cluster:

  • node1 (10.9.8.11)
  • node2 (10.9.8.12)
  • node3 (10.9.8.13)

We will create one fault-tolerant cluster IP for…


This guide is a free interpretation of the official Creating Highly Available Clusters with kubeadm guide for stacked control plane nodes. I don’t like the complicated form used there, so I wrote this article.

If you have any questions, or something is not clear, please refer to the official documentation or ask Google. All steps are described here in a short and simple form.

Input data

We have a 3-node cluster:

  • node1 (10.9.8.11)
  • node2 (10.9.8.12)
  • node3 (10.9.8.13)

We will create one fault-tolerant cluster IP for them:

  • 10.9.8.10

Then we will install etcd and a Kubernetes cluster on them.

LoadBalancer setup

First, we need to install keepalived on all three nodes:

apt-get -y install…
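After installation, each node needs a small /etc/keepalived/keepalived.conf describing the virtual IP. A minimal sketch for the 10.9.8.10 address above (the interface name and priority are assumptions — adjust them per node):

```
vrrp_instance VI_1 {
    state BACKUP            # let VRRP elect the master itself
    interface eth0          # assumed interface name
    virtual_router_id 51
    priority 100            # give each node a different priority
    advert_int 1
    virtual_ipaddress {
        10.9.8.10
    }
}
```

Whichever node wins the VRRP election holds 10.9.8.10; if it dies, the address moves to the next node, keeping the API endpoint reachable.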

About

Andrei Kvapil

This mess is mine!
