The microservice architecture is an approach to developing and deploying enterprise cloud-native software applications in which the core business capabilities of the application are separated into decoupled components. Each business capability represents some functionality that the application provides as a service to the end user. Microservices stand in contrast to the monolithic architecture, in which all of the application's capabilities are built and deployed together as a single unit. See an illustration in Figure 45-1.

Figure 45-1. Microservice applications (right) vs. monolithic applications (left)

Microservices interact with each other using representational state transfer (REST) communications for stateless interoperability. By stateless, we mean that “the server does not store state about the client session.” Communication can take place over HTTP request/response APIs or through an asynchronous messaging queue. This flexibility allows a microservice to scale easily and to keep responding to requests even when another microservice fails.

Advantages of Microservices

  • Loosely coupled components make the application fault tolerant.

  • The ability to scale out makes each component highly available.

  • The modularity of components makes it easier to extend existing capabilities.

Challenges with Microservices

  • The software architecture increases in complexity.

  • Overhead in the management and orchestration of microservices. We will, however, see in the next sections how Docker and Kubernetes work to mitigate this challenge.

Docker

Docker is a containerization platform that uses operating-system-level virtualization to isolate applications into environments known as containers. The idea behind a container is to provide a unified platform that includes the software tools and dependencies for developing and deploying an application.

In the traditional approach to application development, an application is designed and hosted on a single server. This is illustrated in Figure 45-2. This setup is prone to several problems, including the famous “it works on my machine but not on yours.” In this architecture, applications are also difficult to scale and to migrate, resulting in high costs and slow deployment.

Figure 45-2. Application running on a single server

Virtual Machines vs. Containers

Virtual machines (VMs), illustrated in Figure 45-3, emulate the capabilities of a physical machine, making it possible to install and run entire guest operating systems via a hypervisor. The hypervisor is a piece of software on the physical machine (the host) that carries out the virtualization, allowing multiple guest machines to be managed by the host machine.

Figure 45-3. Virtual machines

Containers, on the other hand, isolate the environment for hosting an application with its own libraries and software dependencies; however, as opposed to a VM, all containers on a machine share the same operating system kernel. Docker is an example of a container platform. This is illustrated in Figure 45-4.

Figure 45-4. Containers
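A quick way to observe this kernel sharing in practice is to compare the kernel version reported by the host with the one reported inside a container (a minimal sketch; it assumes Docker is installed and pulls the alpine image on first use):

# kernel version on the host
uname -r

# kernel version inside an Alpine container: identical, because
# the container shares the host's operating system kernel
docker run --rm alpine uname -r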

Working with Docker

Google Cloud Shell comes pre-configured with Docker.
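You can confirm that Docker is available, and inspect its version, with the following commands:

# check the installed Docker version
docker --version
# show system-wide information about the Docker installation
docker info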

Key concepts to note are

  • Dockerfile: A Dockerfile is a text file that specifies how an image will be created.

  • Docker images: Images are created by building a Dockerfile.

  • Docker containers: Docker containers are the running instance of an image.

The diagram in Figure 45-5 highlights the process to build an image and run a Docker container.

Figure 45-5. Steps to deploying a Docker container

Table 45-1 shows key commands when creating a Dockerfile.

Table 45-1 Commands for Creating Dockerfiles
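Commonly used Dockerfile instructions include FROM, LABEL, ENV, WORKDIR, COPY, RUN, EXPOSE, and CMD. The short sketch below illustrates how they fit together; the base image, paths, and values are illustrative only:

# base image to build on
FROM alpine:3.9
# attach metadata to the image
LABEL maintainer="someone@example.com"
# set an environment variable inside the image
ENV APP_HOME=/app
# set the working directory for subsequent instructions
WORKDIR $APP_HOME
# copy files from the build context into the image
COPY . .
# run a command at build time
RUN apk add --no-cache bash
# document the port the application listens on
EXPOSE 8080
# default command executed when the container starts
CMD ["bash", "run.sh"]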

Build and Run a Simple Docker Container

To run this example, clone the book repository in Cloud Shell; the chapter folder contains a bash script titled date-script.sh. The script assigns the current date to a variable and then prints the date to the console. The Dockerfile copies the script from the local machine to the Docker container's file system and executes the shell script when the container runs. The Dockerfile to build the container is stored in docker-intro/hello-world.

# navigate to the folder containing the Dockerfile
cd docker-intro/hello-world

Let’s view the bash script.

cat date-script.sh

#! /bin/sh
DATE="$(date)"
echo "Todays date is $DATE"

Let’s view the Dockerfile.

# view the Dockerfile
cat Dockerfile

# base image for building container
FROM docker.io/alpine
# add maintainer label
LABEL maintainer="dvdbisong@gmail.com"
# copy script from local machine to container file system
COPY date-script.sh /date-script.sh
# execute script
CMD sh date-script.sh

The Docker image will be built off the Alpine Linux base image (see https://hub.docker.com/_/alpine). The CMD instruction executes the script when the container runs.

Build the Image

Run the following command to build the Docker image.

# build the image
docker build -t ekababisong.org/first_image .

Build output

Sending build context to Docker daemon  2.048kB
Step 1/4 : FROM docker.io/alpine
latest: Pulling from library/alpine
6c40cc604d8e: Pull complete
Digest: sha256:b3dbf31b77fd99d9c08f780ce6f5282aba076d70a513a8be859d8d3a4d0c92b8
Status: Downloaded newer image for alpine:latest
 ---> caf27325b298
Step 2/4 : LABEL maintainer="dvdbisong@gmail.com"
 ---> Running in 306600656ab4
Removing intermediate container 306600656ab4
 ---> 33beb1ebcb3c
Step 3/4 : COPY date-script.sh /date-script.sh
 ---> Running in 688dc55c502a
Removing intermediate container 688dc55c502a
 ---> dfd6517a0635
Step 4/4 : CMD sh date-script.sh
 ---> Running in eb80136161fe
Removing intermediate container eb80136161fe
 ---> e97c75dcc5ba
Successfully built e97c75dcc5ba
Successfully tagged ekababisong.org/first_image:latest

Run the Container

Execute the following command to run the Docker container.

# show the images on the machine
docker images

# run the docker container from the image
docker run ekababisong.org/first_image
Todays date is Sun Feb 24 04:45:08 UTC 2019
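Because CMD only provides a default command, anything supplied after the image name overrides it at run time:

# override the default CMD and run another command instead
docker run ekababisong.org/first_image date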

Important Docker Commands

In this section, let’s review some important Docker commands.

Commands for Managing Images

Table 45-2 contains commands for managing Docker images.

Table 45-2 Docker Commands for Managing Images
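A representative sample of commonly used image-management commands follows (the image name my-image is illustrative):

# list images on the machine
docker images
# download an image from a registry
docker pull alpine
# build an image from a Dockerfile in the current directory
docker build -t my-image .
# remove an image
docker rmi my-image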

Commands for Managing Containers

Table 45-3 contains commands for managing Docker containers.

Table 45-3 Docker Commands for Managing Containers
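A representative sample of commonly used container-management commands follows (the container name my-container is illustrative):

# list running containers
docker ps
# list all containers, including stopped ones
docker ps -a
# view the logs of a container
docker logs my-container
# stop a running container
docker stop my-container
# remove a stopped container
docker rm my-container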

Running a Docker Container

Let’s break down the following command for running a Docker container:

docker run -d -it --rm --name [CONTAINER_NAME] -p 8081:80 [IMAGE_NAME]

where

  • -d runs the container in detached mode. This mode runs the container in the background.

  • -it runs in interactive mode, with a terminal session attached.

  • --rm removes the container when it exits.

  • --name specifies a name for the container.

  • -p forwards a port from the host to the container (i.e., host:container).
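As a concrete illustration (the container name is a placeholder, and nginx is a common web server image that listens on port 80), the following starts a container and maps port 8081 on the host to port 80 in the container:

# run an nginx container in the background, removing it on exit
docker run -d --rm --name my-web-server -p 8081:80 nginx
# verify that the container is running
docker ps
# the server is now reachable on the host at port 8081
curl http://localhost:8081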

Kubernetes

When a microservice application is deployed in production, it usually has many running containers that need to be allocated the right amount of resources in response to user demands. Also, there is a need to ensure that the containers are up and running and are communicating with one another. The need to efficiently manage and coordinate clusters of containerized applications gave rise to Kubernetes.

Kubernetes is a software system that addresses the concerns of deploying, scaling, and monitoring containers. Hence, it is called a container orchestrator. Examples of other container orchestrators in the wild are Docker Swarm, Mesos Marathon, and HashiCorp Nomad.

Kubernetes was built and released by Google as open source software and is now managed by the Cloud Native Computing Foundation (CNCF). Google Cloud Platform offers a managed Kubernetes service called Google Kubernetes Engine (GKE). Amazon Web Services also provides a managed Kubernetes service through Amazon Elastic Container Service for Kubernetes (EKS).

Features of Kubernetes

The following are some features of Kubernetes:

  • Horizontal auto-scaling: Dynamically scales containers based on resource demands

  • Self-healing: Re-provisions failed nodes in response to health checks

  • Load balancing: Efficiently distributes requests between containers in a pod

  • Rollbacks and updates: Easily update or revert to a previous container deployment without causing application downtime

  • DNS service discovery: Uses Domain Name System (DNS) to manage container groups as a Kubernetes service

Components of Kubernetes

The main components of the Kubernetes engine are

  • Master node(s): Manages the Kubernetes cluster. There may be more than one master node in high-availability mode for fault-tolerance purposes. In this case, only one acts as the leader, and the others follow.

  • Worker node(s): Machine(s) that run the containerized applications, which are scheduled as pod(s).

The illustration in Figure 45-6 provides an overview of the Kubernetes architecture.

Figure 45-6. High-level overview of Kubernetes components

Master Node(s)

The master node consists of

  • etcd (distributed key-value store): It stores the Kubernetes cluster state. This distributed key-value store can be a part of the master node or external to it. Nevertheless, all master nodes connect to it.

  • api server: It manages all administrative tasks. The api server receives commands from the user (the kubectl CLI, REST, or GUI); it executes these commands and stores the new cluster state in the distributed key-value store.

  • scheduler: It schedules work to worker nodes by allocating pods. It is responsible for resource allocation.

  • controller: It ensures that the desired state of the Kubernetes cluster is maintained. The desired state is what is described in a JSON or YAML deployment file.

Worker Node(s)

The worker node(s) consists of

  • kubelet: The kubelet agent runs on each worker node. It connects the worker node to the api server on the master node and receives instructions from it. It ensures the pods on the node are healthy.

  • kube-proxy: It is the Kubernetes network proxy that runs on each worker node. It listens to the api server and forwards requests to the appropriate pod. It is important for load balancing.

  • pod(s): It consists of one or more containers that share network and storage resources as well as container runtime instructions. Pods are the smallest deployable unit in Kubernetes.

Writing a Kubernetes Deployment File

The Kubernetes deployment file defines the desired state for the various Kubernetes objects. Examples of Kubernetes objects are

  • Pods: It is a collection of one or more containers.

  • ReplicaSets: It is part of the controller in the master node. It specifies the number of replicas of a pod that should be running at any given time. It ensures that the specified number of pods is maintained in the cluster.

  • Deployments: It automatically creates ReplicaSets. It is also part of the controller in the master node. It ensures that the cluster’s current state matches the desired state.

  • Namespaces: It partitions the cluster into sub-clusters to organize users into groups.

  • Service: It is a logical group of pods with a policy to access them.

    • ServiceTypes: It specifies the type of service, for example, ClusterIP, NodePort, LoadBalancer, and ExternalName. As an example, LoadBalancer exposes the service externally using a cloud provider’s load balancer.

Other important fields used in writing a Kubernetes deployment file are

  • spec: It describes the desired state of the cluster

  • metadata: It contains information of the object

  • labels: It is used to specify attributes of objects as key-value pairs

  • selector: It is used to select a subset of objects based on their label values

The deployment file is specified as a YAML file.
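Below is a minimal sketch of such a file, tying together the objects above: a Deployment that maintains three replicas of a pod, and a Service of type LoadBalancer that exposes them. The names, image, and ports are illustrative only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
  labels:
    app: my-app
spec:
  replicas: 3                 # desired number of pod replicas
  selector:
    matchLabels:
      app: my-app             # manage pods carrying this label
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: nginx:1.15     # illustrative container image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer          # expose externally via the cloud load balancer
  selector:
    app: my-app               # route traffic to pods with this label
  ports:
  - port: 80
    targetPort: 80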

Deploying Kubernetes on Google Kubernetes Engine

Google Kubernetes Engine (GKE) provides a managed environment for deploying application containers. To create and deploy resources on GCP from your local shell, the Google Cloud command-line SDK, gcloud, must be installed and configured. If this is not the case on your machine, follow the instructions at https://cloud.google.com/sdk/gcloud/. Otherwise, a simpler option is to use Google Cloud Shell, which already has gcloud and kubectl (the Kubernetes command-line interface) installed.

Creating a GKE Cluster

Run the following command to create a Kubernetes cluster on GKE. Replace my-gke-cluster-name with a name of your choosing.

# create a GKE cluster
gcloud container clusters create my-gke-cluster-name

A Kubernetes cluster is created on GCP with three nodes by default. The GKE dashboard on GCP is shown in Figure 45-7. In the output below, the cluster was named ekaba-gke-cluster.

Creating cluster ekaba-gke-cluster in us-central1-a...
Cluster is being deployed...done.
Created [https://container.googleapis.com/v1/projects/oceanic-sky-230504/zones/us-central1-a/clusters/ekaba-gke-cluster].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-central1-a/ekaba-gke-cluster?project=oceanic-sky-230504
kubeconfig entry generated for ekaba-gke-cluster.
NAME               LOCATION       MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
ekaba-gke-cluster  us-central1-a  1.11.7-gke.4    35.226.72.40  n1-standard-1  1.11.7-gke.4  3          RUNNING

Figure 45-7. Google Kubernetes Engine dashboard

To learn more about creating clusters with Google Kubernetes Engine, visit https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster .

Run the following command to display the nodes of the provisioned cluster on GKE.

# get the nodes of the kubernetes cluster on GKE
kubectl get nodes

NAME                                               STATUS    ROLES     AGE       VERSION
gke-ekaba-gke-cluster-default-pool-e28c64e0-8fk1   Ready     <none>    45m       v1.11.7-gke.4
gke-ekaba-gke-cluster-default-pool-e28c64e0-fmck   Ready     <none>    45m       v1.11.7-gke.4
gke-ekaba-gke-cluster-default-pool-e28c64e0-zzz1   Ready     <none>    45m       v1.11.7-gke.4
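With the cluster running, a deployment file such as the one sketched earlier can be applied to it with kubectl (the file name is illustrative):

# create the objects defined in the deployment file
kubectl apply -f my-deployment.yaml
# check the status of the deployment, service, and pods
kubectl get deployments
kubectl get services
kubectl get pods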

Delete the Kubernetes Cluster on GKE

Run the following command to delete a cluster on GKE.

# delete the kubernetes cluster
gcloud container clusters delete my-gke-cluster-name

Note

Always remember to clean up cloud resources when they are no longer needed.

This chapter introduced the concepts of a microservice architecture and provided an overview of working with Docker containers for building applications in isolated environments/sandboxes. In the event that many such containers are deployed in production, the chapter introduced Kubernetes as a container orchestrator that manages the concerns of deploying, scaling, and monitoring containers.

The next chapter discusses Kubeflow and Kubeflow Pipelines for deploying machine learning components into production on Kubernetes.