August 13, 2018

At CKH we have been using Kubernetes for orchestrating our Docker containers. Kubernetes is an open source container management platform designed to run enterprise-class, cloud-enabled, and web-scalable IT workloads. It is built upon the foundation laid by Google’s 15 years of experience in running containerized applications. Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure the state of the cluster continually matches the user’s intentions. It is a great tool and I encourage you to check it out.
One of the challenges when using Kubernetes, however, arises when you have a Deployment that is set to scale your app to multiple Pods. Let’s pause for a second and explain what a Deployment and a Pod are.
Understanding Deployments
A Deployment controller provides declarative updates for Pods and ReplicaSets.
You describe the desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
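As a concrete sketch, a Deployment manifest that asks Kubernetes to keep six replicas running might look something like this (the names and image below are hypothetical, chosen only for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                 # hypothetical name; pods will be named api-<hash>-<id>
spec:
  replicas: 6               # Kubernetes keeps six identical pods running
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: example/api:latest   # hypothetical image
```

The Deployment controller will then create a ReplicaSet that maintains six Pods matching this template, replacing any that fail.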
Understanding Pods
A Pod is the basic building block of Kubernetes–the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster.
A Pod encapsulates an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A Pod represents a unit of deployment: a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources.
Docker is the most common container runtime used in a Kubernetes Pod, but Pods support other container runtimes as well.
Pods in a Kubernetes cluster can be used in two main ways:
- Pods that run a single container. The “one-container-per-Pod” model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages the Pods rather than the containers directly.
- Pods that run multiple containers that need to work together. A Pod might encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers might form a single cohesive unit of service–one container serving files from a shared volume to the public, while a separate “sidecar” container refreshes or updates those files. The Pod wraps these containers and storage resources together as a single manageable entity.
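The second pattern above can be sketched as a Pod manifest with two containers sharing a volume (the names and the refresher image here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar          # hypothetical name
spec:
  volumes:
  - name: shared-files
    emptyDir: {}                  # scratch volume shared by both containers
  containers:
  - name: web                     # serves files from the shared volume
    image: nginx
    volumeMounts:
    - name: shared-files
      mountPath: /usr/share/nginx/html
  - name: refresher               # hypothetical sidecar that updates those files
    image: example/content-refresher:latest
    volumeMounts:
    - name: shared-files
      mountPath: /data
```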
So back to the problem at hand. When we have a Deployment that is set to run multiple replicas of a Pod, there is not an easy way to get the logs for all the running pods at once. Out of the box, you can get the logs for a single pod with the kubectl logs command.
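For one pod that looks something like this (the pod name here is hypothetical):

```shell
# Print the logs of one specific pod (pod name is hypothetical)
kubectl logs api-5c9f7d8b4-abcde

# Add -f to stream new log lines as they arrive, like tail -f:
# kubectl logs -f api-5c9f7d8b4-abcde
```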
But imagine you have 6 pods running. You would have to run the command on all 6 pods.
And then just keep repeating…
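You could script around it with a shell loop (the “api” name filter below is an assumption for illustration), but the output still arrives one pod at a time rather than as a single live, interleaved stream:

```shell
# Dump the logs of every pod whose name contains "api", one pod at a time -
# exactly the repetition we want to avoid
for pod in $(kubectl get pods -o name | grep api); do
  echo "=== $pod ==="
  kubectl logs "$pod"
done
```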
This can be tedious, and as developers we should excel at finding solutions to tedious tasks that a computer can do for us. So along comes Kubetail.
Kubetail was created by developer Johan Haleby (thanks!). He blogged about the tool here:
http://code.haleby.se/2015/11/13/tail-logs-from-multiple-pods-simultaneously-in-kubernetes/
Kubetail allows you to monitor logs from multiple pods with ease. You can follow/tail logs based on the pod names. This lets you easily watch not only all pods in a deployment, but also all pods of, say, a certain app (because you followed good naming conventions when creating your deployments). For example, I can watch all pods named “api” with this command:
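Assuming kubetail is installed and your kubectl context points at the cluster, that is simply:

```shell
# Follow the logs of every pod whose name contains "api"
kubetail api
```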
Or all pods named “data-lake-api” with this command:
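Again assuming kubetail is on your PATH:

```shell
# Follow the logs of every pod whose name contains "data-lake-api"
kubetail data-lake-api
```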
As you can see, it offers a nice color scheme, which is even customizable to your liking. It is just pleasing to me to watch all my logs fly down the screen. You can even pipe the output through grep if you are looking for a specific log entry, for example:
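A sketch of that, assuming your app writes the word ERROR on its error lines:

```shell
# Follow all "api" pods but show only the lines mentioning ERROR
kubetail api | grep ERROR
```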
I have found this tool super helpful in diagnosing any issues with my deployments and I highly recommend you add it to your toolbox today.