Kubernetes as a Service with Rancher


Containers are a form of operating-system virtualisation that has revolutionised IT and Cloud Computing. Over the years, at CloudFire, we have built up considerable expertise and knowledge about them.

This article traces the steps taken, in the industry at large and at CloudFire in particular, from the early days of containers to the approach and tools we currently use.

One step back

By definition, a container is an executable unit of software in which application code is packaged, together with its libraries and dependencies, in a standard way so that it can be launched anywhere. Hence, it can be used to run anything: from microservices or software processes to a larger application.

Over the years, the way containers are used and deployed has changed. Their widespread adoption has certainly been driven by a paradigm shift in how applications are delivered, one that now makes deployment efficient and fast.

The advent of virtualisation made it possible to divide a physical server into several environments, each able to run a different operating system. This approach already brought an innovative benefit: the ability to host multiple applications on a single physical server. The limitation that remains, however, is that starting a new application instance also means booting a complete operating system, and consequently wasting time and resources.

With containerisation, on the other hand, it becomes possible to run multiple application instances on the same operating system, optimising both resource utilisation and start-up time. By nature, a container is an immutable, self-contained image that can run in any environment without special dependencies on the host operating system.
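
To make this concrete, here is a minimal sketch of running two isolated instances of the same image side by side on one host; the nginx:alpine image and the port numbers are purely illustrative choices:

    # Pull a small public image once...
    docker pull nginx:alpine

    # ...then start two isolated instances that share the host kernel,
    # each mapped to a different host port
    docker run -d --name web1 -p 8081:80 nginx:alpine
    docker run -d --name web2 -p 8082:80 nginx:alpine

    # Both are up within seconds; no guest operating system had to boot
    docker ps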

Differences between VMs and Containers

Container technology is often confused with virtual machine, or server virtualisation, technology. Despite some core similarities, the two lead to very different results.

  • Less impact on occupied resources:
    Each virtual machine run by the hypervisor must include its own operating system, resources, libraries and application files. This consumes a large amount of system resources and, with multiple VMs running on the same server, the overhead adds up. Each container, by contrast, shares the host operating system kernel and is first of all far smaller: it usually occupies only a few megabytes and boots in a few seconds, compared with the gigabytes and minutes typical of a VM.
  • Increased portability and ease of deployment:
    Another benefit of containers, particularly appreciated by developers, is that the application works properly regardless of the environment hosting it. Writing and structuring an application against multiple sources, libraries and repositories raises the risk of incompatibilities across environments; with containers, this problem does not exist at all.
  • Greater application resilience and support:
    Through containers, DevOps teams know that applications will always run the same way, regardless of the environment in which they are deployed. They can therefore create, test and deploy across multiple environments whilst maintaining continuity.
  • Increased efficiency and speed in deployment, patching or scaling processes:
    Unlike the traditional VM approach, where prerequisites such as libraries and other supporting software had to be installed first, once the container image is published by the developer, the operator can run it in a few simple steps. Patching is even simpler, since installing a new version only involves downloading the new image. And scaling the application means running more instances of the same container, making it easy to scale both up and down, as sketched below.
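
A minimal sketch of what patching and scaling look like in practice; the registry, image tag and deployment name below are hypothetical:

    # Patching: installing a new version just means pulling the new image
    docker pull registry.example.com/myapp:2.0
    docker stop myapp && docker rm myapp
    docker run -d --name myapp registry.example.com/myapp:2.0

    # Scaling (here under Kubernetes): run more replicas of the same image
    kubectl scale deployment myapp --replicas=5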

Where's the challenge? From Docker, through Kubernetes, to Rancher

Every technological innovation, especially one as revolutionary as containers, brings new challenges to be faced.

In the past, application containerisation was confined to a niche audience because the level of skills required was quite high. Docker was born for this reason: an open-source tool that makes running containers easy and efficient, and that enables the deployment of containers and applications by automating repetitive, trivial processes behind simple commands.

Initially, many companies, including CloudFire, adopted Docker heavily, hosting containers in various environments such as VMs or physical servers. However, this led to a proliferation of applications, making it difficult to optimise resources, to control what was running within containers and to monitor the interactions with other applications and containers. With time and experience, we drew some conclusions and adopted the following guidelines:

  • Structure communication between containers properly;
  • Keep track of how containers are run, defining the necessary configuration in files (see the sketch after this list);
  • Monitor the status of each container and have automatic repair mechanisms in case of problems.
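
As an example of the second guideline, here is a minimal sketch of keeping a container's configuration on file using Docker Compose v2 syntax; the service name, image and port are illustrative:

    # Write the desired configuration to a file instead of ad-hoc commands
    cat > compose.yaml <<'EOF'
    services:
      web:
        image: nginx:alpine
        ports:
          - "8080:80"
        restart: unless-stopped   # automatic restart if the container fails
    EOF

    # Reproduce exactly the same setup anywhere with one command
    docker compose up -d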

Meeting all these needs required the adoption of a container orchestrator: Kubernetes.

Kubernetes is the open-source software that orchestrates the deployment and management of the entire container lifecycle, ensuring automated scalability and resilience for each container and, consequently, for the application it hosts. Given its undeniable functionality and potential, Kubernetes became, and still remains, the de facto standard.
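
To illustrate the idea, here is a minimal sketch of a Kubernetes Deployment: Kubernetes keeps three replicas running and replaces any that fail. The names and image are illustrative, not a prescribed setup:

    # Declare the desired state; Kubernetes enforces it continuously
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web              # hypothetical name
    spec:
      replicas: 3            # Kubernetes keeps exactly 3 instances alive
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:alpine
    EOF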

As with containers, using Kubernetes requires in-depth knowledge and expertise on the subject, and it remains difficult to implement and maintain even for experienced Cloud Architects.

This is why managed solutions such as Rancher are adopted: a complete software stack that simplifies the deployment and creation of new clusters. Rancher itself is an application that runs inside containers, and installing it is as simple as running a container.
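
For reference, Rancher's documented single-node quick start boils down to one command; check the official documentation for the flags current at the time of reading (recent Rancher 2.x releases require --privileged):

    # Start Rancher as a container; its UI is then served on ports 80/443
    docker run -d --restart=unless-stopped \
      -p 80:80 -p 443:443 \
      --privileged \
      rancher/rancher:latest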

Once the command has been issued and the Rancher container is running, simply browse to the IP address of the machine hosting the container and follow the instructions shown.


Once logged into Rancher, it is possible to create a new cluster: define the cluster name, select the host machines, and finally choose the CNI (Container Network Interface).

Rancher-less: never again

In conclusion, at CloudFire we've chosen and currently use Rancher because:

  • It allows us to act in multiple ways 🎨, from the creation and deployment of K8s 🛞 clusters, for both internal testing and production, to (above all) their management. This way all setups are managed centrally, drastically reducing configuration errors by offering a single dashboard to the various operators, whilst maintaining the customisation of each cluster;
  • 🛡️ From a security point of view, Rancher facilitates the configuration of Role-Based Access Control (RBAC) and enables targeted reporting on its use (a minimal sketch follows this list);
  • As for Monitoring ⚠️ & Alerting 🚨, with Rancher you can quickly stand up a monitoring and alerting stack based on Prometheus and Grafana, without any specific expertise;
  • Finally, for those who have entrusted us with the management of Kubernetes clusters previously installed with other systems, we have implemented interconnection projects 🖇 with Rancher's interface, with no application migration needed.
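
As an illustration of the RBAC point above: Rancher's access controls build on standard Kubernetes primitives, so a minimal sketch looks like the following (the role, binding and user name are all hypothetical):

    # Grant a user read-only access to pods in a single namespace
    cat <<'EOF' | kubectl apply -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: pod-reader
      namespace: default
    rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: default
    subjects:
    - kind: User
      name: jane             # hypothetical user
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io
    EOF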
