If you like this article, come work with me at DoiT International, where I am a cloud architect. Write me, or send your resume here. Also, see my blog post about our unusual way of working.

The Yiddish original is here.

Kubernetes: The simple way to run complex server applications

Kubernetes addresses the problem of running complex server applications with many parts.

First, the parts: Over the decades, computing has moved to higher and higher levels of abstraction. From hardware computers, we moved to virtual machines. But since 2015, a new abstraction has emerged, the container: A portable, lightweight bundle of functionality--i.e., part of an application--that can run on a virtual machine.

A container is not a virtual machine: it is simply a well-protected directory with the files of an application, which can be downloaded to a virtual machine and used without worrying about other files in the virtual machine.
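To make this concrete, here is a minimal sketch of running a single container locally. It assumes Docker is installed and uses the public nginx web-server image purely as an illustration; nothing in the tutorial below depends on it.

      # Download a packaged web server and run it as an isolated container.
      # The container sees only its own files; the host just sees one more process.
      docker run --rm -p 8080:80 nginx

      # The same bundle runs unchanged on a laptop or on a cloud virtual machine.
      curl http://localhost:8080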

Second, the orchestration: As cloud platforms allow software systems to grow in complexity and size, managing them automatically becomes all the more important.

Kubernetes was invented for this: It orchestrates software containers. That is, it launches containers and ensures that they remain running; it defines communication between them; it adds more containers to serve an application in accordance with the load; and so on.
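As a sketch of what that orchestration looks like in practice, here are two commands you could run against an existing Deployment; the name my-app is hypothetical.

      # Keep three copies of a pod running; if one crashes, Kubernetes replaces it.
      kubectl scale deployment my-app --replicas=3

      # Or let Kubernetes add and remove copies according to CPU load.
      kubectl autoscale deployment my-app --min=1 --max=5 --cpu-percent=80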

Is Kubernetes really necessary? No--software ran fine for years before Kubernetes. However, if your cloud-based server is more than moderately complicated, you’ll find that Kubernetes makes your life easier. It is also the standard for cloud orchestration, and so will let you take advantage of the large variety of third-party software that works with it.

Kubernetes has a number of concepts that help organize the orchestration, like “Pod,” “Deployment,” and “Service.” The details of these definitions are not important at this point, but as a short overview: a Pod is a container running in the cloud, sometimes together with some auxiliary containers; a Deployment is a group of copies of a Pod that together serve an application; and a Service exposes an application for access at a fixed address.
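To give a feel for these concepts, here is a rough, illustrative YAML sketch of a Deployment and a Service. It is not the contents of the file used in the tutorial below; the names and the nginx image are placeholders.

      # A Deployment: run two copies of a pod built from one container image.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello-deployment
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: hello
        template:
          metadata:
            labels:
              app: hello
          spec:
            containers:
            - name: hello
              image: nginx   # placeholder image
              ports:
              - containerPort: 80
      ---
      # A Service: expose those pods to the world at one stable address.
      apiVersion: v1
      kind: Service
      metadata:
        name: hello-service
      spec:
        type: LoadBalancer
        selector:
          app: hello
        ports:
        - port: 80
          targetPort: 80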

Now I will lead you through a little “Hello World” example--according to time-honoured custom--to show how Kubernetes is used to run a simple server.

  1. First, create a cluster, a collection of virtual machines in which your containers will run.

  • Register for Google Cloud. There is a free option.

  • Go to Google Kubernetes Engine in the Cloud Console, click “CREATE”, and then click “CONFIGURE” next to “Standard”. Then choose “My first cluster” and click “CREATE NOW”.

  2. Second, enter the Cloud Shell, a virtual machine in the cloud dedicated to you, where you can give commands just as in a shell on any computer. (In fact, you can also do all of this from your own laptop.)

  • Click the name of the cluster.

  • Click “CONNECT” and “RUN IN CLOUD SHELL” to go to the Cloud Shell. You will see a command line like

  •   gcloud container clusters get-credentials my-first-cluster-1 --zone us-central1-c --project my-project

  • Press Enter to run this command and connect to the cluster.

  • Click “AUTHORIZE” if this appears.

  3. Third, launch everything based on definitions in a YAML file:

  • You can open the URL in the command below to see the definitions of the Deployment and the Service.

         kubectl apply -f https://joshuafox.com/content/sholem.yaml

  • Type that command.

  4. Fourth, after about three minutes, type kubectl get service,deployment.

  • Next to the service sholem-service, you will see an address under “EXTERNAL-IP”; and next to the deployment, you will see “1/1”, indicating that the one pod is running. If you don’t see these, wait another three minutes and try again.

  • Open that IP address (34.69.48.156 in the example) in a browser; you will see a web page from the server that says “Hello, world”. (You can also check it from the command line, as sketched after this list.)

  • We have reached our goal! We have a server running in the cloud, created from a predefined bundle of functionality, on infrastructure conveniently managed by Kubernetes.
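If you prefer the command line to a browser, the same check can be made from Cloud Shell, substituting the external IP that your own service reports; and when you are done, the same manifest can be used to remove what it created.

      # Fetch the page served by the new Service (the IP below is only the example above).
      curl http://34.69.48.156

      # Optional clean-up: delete the Deployment and Service defined in the manifest.
      kubectl delete -f https://joshuafox.com/content/sholem.yaml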