Kubernetes (K8S) has emerged as the de facto container orchestration platform since it was open sourced in 2014. Kubernetes builds on more than 15 years of Google's experience running production workloads, so the underlying design is well tested. One of the issues with Kubernetes when it initially became available was installation, which involved running several commands and making several configurations. Several managed Kubernetes services emerged, mostly from cloud service providers. These managed services did simplify the task of spinning up a Kubernetes cluster, but the complexities of running a cluster still remain.
K3S was first made available in early 2019 as a lightweight option for Kubernetes.
What was the need for K3S? With the widespread use of mobile phones, smart devices, and other connected hardware, computing infrastructure became highly distributed. Aggregating data from these distributed devices into central data centers incurs network bandwidth overhead, not to mention latency. Edge computing emerged as a distributed computing paradigm that brings computation and storage closer to where data is generated: distributed mobile devices, smart devices, and other infrastructure. Containerization was an obvious choice for running applications in an Edge environment, but the question arose of how to make container orchestration more manageable, given that full Kubernetes was not easy to install and operate. Rancher Labs introduced K3S as a lightweight Kubernetes distribution designed for "resource-constrained" environments. K3S offers several benefits.
Lightweight
K3S binaries are only about 40 MB in size, compared to roughly 325 MB for Kubernetes (K8S) binaries. A single-node Kubernetes (K8S) cluster uses over 1 GB of memory, compared to only about 260 MB for a K3S cluster.
Easy to Install
Kubernetes (K8S) was known for its daunting installation procedure when it first became available; simplifications followed, including managed service offerings. K3S bundles all the Kubernetes components (kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy) into a single binary that runs in a server-agent model as combined processes. All that is needed is a modern Linux kernel; a single command, completing in about half a minute, installs K3S.
$ curl -sfL https://get.k3s.io | sh -
[INFO] Finding release for channel stable
[INFO] Using v1.19.3+k3s3 as release
[INFO] Downloading hash
[INFO] Downloading binary
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
The Kubernetes master node is running after the single preceding command.
$ sudo k3s kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
ip-10-0-0-83   Ready    master   29s   v1.19.3+k3s3
Additional nodes may be added with another single command.
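As a sketch, a worker node can be joined by running the same install script in agent mode, pointing it at the server; the `<server-ip>` and `<node-token>` values below are placeholders you must substitute with your server's address and its actual join token.

```shell
# On the server node, read the cluster join token
# (this path is the K3S default token location):
sudo cat /var/lib/rancher/k3s/server/node-token

# On the new node, run the installer in agent mode.
# K3S_URL tells the installer to join an existing server instead of
# starting a new one; K3S_TOKEN authenticates the node to the cluster.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server-ip>:6443 \
  K3S_TOKEN=<node-token> \
  sh -
```

Once the agent starts, the new node appears in the output of `sudo k3s kubectl get node` on the server.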
K3S is fully certified by the CNCF (Cloud Native Computing Foundation), which hosts Kubernetes and several other open source projects. K3S is highly available, designed for production workloads, and suitable for unattended, remote locations.
Edge and IoT
Though initially touted as built for the Edge, K3S is suitable for several modern computing environments that involve connected devices and distributed resources, such as the Internet of Things (IoT) and Continuous Integration/Continuous Delivery (CI/CD) pipelines.