Andrei Kvapil
3 min read · Sep 12, 2020


Thanks for the benchmarks, Alexis!

Most plugins ship with non-optimized defaults that work in the most common situations regardless of network topology, OS, and kernel version.

I have some experience tuning a few of them, and I'd like to share it with you briefly:

flannel has at least three different backend modes: vxlan, ipip, and host-gw. host-gw is the most performant one, but vxlan is used by default.

There is a rather old but good article, Comparison of Networking Solutions for Kubernetes, which shows the superiority of host-gw mode over vxlan:

Latency percentiles at 250,000 RPS (≈50% of maximum RPS), in ms (source: Comparison of Networking Solutions for Kubernetes)

To change the backend mode for flannel, just update the net-conf.json section of the CNI config in the Kubernetes manifest:

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "host-gw"
    }
  }

Read more about supported backends in the official documentation.
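For example, with the standard flannel manifest (which, as far as I remember, names the ConfigMap kube-flannel-cfg and the DaemonSet kube-flannel-ds; adjust these to your deployment), the change can be applied roughly like this:

# edit net-conf.json and set "Type": "host-gw"
kubectl -n kube-system edit configmap kube-flannel-cfg

# flannel reads the config on startup, so restart the pods to pick it up
kubectl -n kube-system rollout restart daemonset kube-flannel-ds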

kube-router has the ipip overlay mode enabled by default, which is also chosen for better universality but noticeably sacrifices performance. You can disable it by passing the --enable-overlay=false command-line flag in the Kubernetes manifest.

--enable-overlay                   When enable-overlay is set to true, IP-in-IP tunneling is used for pod-to-pod networking across nodes in different subnets. When set to false, no tunneling is used and the routing infrastructure is expected to route traffic for pod-to-pod networking across nodes in different subnets (default true)

Read more about command-line options in the official documentation.
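As a sketch (assuming kube-router runs as a DaemonSet named kube-router in kube-system, as in the generic install manifests), the flag can be added like this:

# add --enable-overlay=false to the kube-router container args
kubectl -n kube-system edit daemonset kube-router

# watch the DaemonSet roll the change out node by node
kubectl -n kube-system rollout status daemonset kube-router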

Also, kube-router wouldn't be so good if it didn't offer a kube-proxy replacement. In this mode you can fully replace kube-proxy with kube-router's own service-proxy implementation based on IPVS, with various scheduling algorithms and optional DSR.

To enable this mode, just follow the guide to install kube-router providing service proxy, firewall, and pod networking.
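A rough sketch of that flow, assuming kubeadm (the manifest URL below is the all-features example from the kube-router repository; verify it against the current guide):

# bootstrap the cluster without the kube-proxy addon
kubeadm init --pod-network-cidr=10.244.0.0/16 --skip-phases=addon/kube-proxy

# deploy kube-router with service proxy, firewall and pod networking enabled
kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter-all-features.yaml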

cilium also has a few overlay modes available: VXLAN and Geneve. vxlan is used by default, but it can simply be disabled by passing the --tunnel=disabled flag to the cilium agent.

-t, --tunnel string                                 Tunnel mode {vxlan, geneve, disabled} (default "vxlan" for the "veth" datapath mode)

Read more about overlay network modes in the official documentation.
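With the same Helm chart as used below, the equivalent would look roughly like this (global.autoDirectNodeRoutes is my assumption for nodes sharing an L2 segment; without a tunnel, the underlying network must be able to route the pod CIDRs):

helm install cilium cilium/cilium \
  --namespace kube-system \
  --set global.tunnel=disabled \
  --set global.autoDirectNodeRoutes=true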

Also, cilium can run in kube-proxy-free mode, replacing kube-proxy with its own implementation offering many cool options such as DSR, socket-based load-balancing, and so on.

Highly efficient: By performing the load-balancing at the socket level by translating the address inside the connect(2) system call, the cost of load-balancing is paid upfront when setting up the connection and no additional translation is needed for the duration of the connection afterwards. The performance is identical as if the application talks directly to the backend.

To enable this mode, install Kubernetes without kube-proxy:

kubeadm init --pod-network-cidr=10.112.0.0/12 --skip-phases=addon/kube-proxy

and install cilium using the Helm chart with the following options:

helm install cilium cilium/cilium \
  --namespace kube-system \
  --set global.kubeProxyReplacement=strict \
  --set global.k8sServiceHost=API_SERVER_IP \
  --set global.k8sServicePort=API_SERVER_PORT

Read more about kube-proxy-free mode in the official documentation.
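Once the agent is running, a quick way to verify the mode (a sketch; the standard chart names the DaemonSet cilium):

# the status output should report KubeProxyReplacement: Strict
kubectl -n kube-system exec ds/cilium -- cilium status | grep KubeProxyReplacement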

I hope you will take these notes into account when preparing your next benchmark.

Thanks for your attention and your amazing work; we are all looking forward to your new articles 🙂
