The two main objectives of a microservices architecture are speed to production and the capability of the application to evolve. Unlike a monolithic application, a microservices deployment includes many individual (and independent) deployments. Instead of a single deployment, we now have hundreds of deployments. Unless we have an automated build system, managing such a deployment is a nightmare. An automated build system helps streamline the deployment process, but it does not solve all the issues in a large-scale deployment. We also need to make each microservice portable, along with all its dependencies, so that the environment the developer tests in is no different from the test and production environments. This helps identify issues very early in the development cycle, so the chances of issues surfacing in production are minimal. In this chapter we talk about different microservices deployment patterns, containers, container orchestration, container-native microservices frameworks, and finally continuous delivery.

Containers and Microservices

The primary goal of a container is to provide a containerized environment for the applications it runs. A containerized environment is an isolated environment. One or more containers can run on the same physical host machine, but one container does not know about the processes running in other containers. For example, if you run your microservice in a container, it has its own view of the filesystem, network interfaces, processes, and hostname. Say a foo microservice running in the foo container can refer to any other service running in the same container with the hostname localhost, and a bar microservice running in the bar container can do the same, even though both the foo and bar containers run on the same host machineFootnote 1.

The concept of containers was made popular a few years back by Docker, but it has a long history. In 1979, with the chroot system call in UNIX V7, users were able to change the root directory of a running process, so that it could not access any part of the filesystem beyond that root. chroot was added to BSD in 1982, and even today it is considered a best practice among sysadmins when running any process that's not in a containerized environment. A couple of decades later, in 2000, FreeBSD introduced a new concept called FreeBSD Jails. With Jails, the same host environment can be partitioned into multiple isolated environments, where each environment can have its own IP address. In 2001, Linux VServer introduced a concept similar to FreeBSD Jails; with Linux VServer, the host environment can be partitioned by filesystem, network, and memory. Solaris came up with Solaris Containers in 2004, which introduced another variation of process isolation. Google launched Process Containers in 2006, which was designed to provide isolation over CPU, disk I/O, memory, and network. A year later this was renamed to control groups and merged into Linux kernel 2.6.24.

Linux cgroups and namespaces are the foundation of the containers we see today. LXC (Linux Containers), in 2008, implemented a Linux container manager using cgroups and namespaces. CloudFoundry, in 2011, came up with a container implementation based on LXC, called Warden, but later changed it to its own implementation. Docker was born in 2013, and just like Warden, it too was built on top of LXC. Docker made container technologies much more usable, and in the next sections, we'll talk about Docker in detail.

Introduction to Docker

As we discussed, the containerization provided by Docker is built on top of Linux control groups and namespaces. Linux namespaces build isolation in such a way that each process sees its own view of the system: filesystem, processes, network interfaces, hostname, and more. Control groups, also known as cgroups, limit the amount of resources each process can consume. The combination of the two builds an isolated environment on the same host machine, sharing the same CPU, memory, network, and filesystem, with no impact on other processes.
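Even without Docker, you can see Linux namespaces at work. The following is a minimal sketch using the unshare utility from util-linux (available on most Linux distributions); it starts a shell in new PID and mount namespaces, so a process listing inside that shell shows only the processes created within the namespace, while the host's processes stay invisible.

:\> sudo unshare --fork --pid --mount-proc bash
:\> ps aux

This is the same kernel mechanism Docker builds on; Docker adds image packaging, distribution, and tooling on top of it.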

The level of isolation that Docker introduces is quite different from what we see in virtual machines. Before Docker became popular, virtual machines were used to replicate operating environments, with all their dependencies. If you have done it, you probably know the pain involved! A virtual machine image is quite bulky; it packs everything you need, from the operating system to application-level binary dependencies. Portability is a major concern. Figure 8-1 shows how a virtual machine creates isolation.

Figure 8-1

High-level virtual machine architecture with a type-2 hypervisor

A virtual machine runs on a hypervisor. There are two types of hypervisors: a type-1 hypervisor does not need a host operating system, while a type-2 hypervisor runs on a host operating system. Each virtual machine carries its own operating system. To run multiple virtual machines on the same host operating system, you need a powerful computer. Using one virtual machine per microservice to build an isolated environment is overkill and a waste of resources. If you are familiar with virtual machines, you can think of a container as a lightweight virtual machine. Each container provides an isolated environment, yet all containers share the same operating system kernel with the host machine. Figure 8-2 shows how a container creates isolation.

Figure 8-2

Multiple containers running on the same host operating system

Unlike a virtual machine, all the containers deployed on the same host machine share the same operating system kernel. A container is in fact an isolated process. To boot up a container, there is no overhead of booting up an operating system; it's just the application running in the container that starts. Also, since containers do not pack an operating system, you can run many containers on the same host machine. These are the key driving forces behind containers becoming the most popular way of deploying and distributing microservices.

Docker builds process isolation on top of the control groups and namespaces that come with the Linux kernel. Even without Docker, we could do the same with the Linux kernel alone. So, why has Docker become the most popular container technology? Docker adds several features, apart from process isolation, that make it more attractive to the developer community: making containers portable, building an ecosystem around the Docker Hub, exposing an API for container management and building tooling around it, and making container images reusable. We discuss each of them in detail in this chapter.

Installing Docker

Docker follows a client-server architecture, where you have a Docker client and a Docker host (also known as the Docker engine). When you install Docker locally on your machine, you get both the client and the host installed. Instructions on how to install Docker on your platform are available from the Docker websiteFootnote 2. At the time of writing, Docker is available on macOS, Windows, CentOS, Debian, Fedora, and Ubuntu. The following command helps test the Docker installation. If the installation is successful, it returns the related system information.

:\> docker info

The following command will also return the versions related to the Docker client and the engine.

:\> docker version
Client:
 Version:       18.03.1-ce
 API version:   1.37
 Go version:    go1.9.5
 Git commit:    9ee9f40
 Built:         Thu Apr 26 07:13:02 2018
 OS/Arch:       darwin/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.1-ce
  API version:  1.37 (minimum version 1.08)
  Go version:   go1.9.5
  Git commit:   9ee9f40
  Built:        Thu Apr 26 07:22:38 2018
  OS/Arch:      linux/amd64
  Experimental: true

Docker Architecture

Figure 8-3 shows the high-level Docker architecture, with the Docker client, the Docker host (or engine), and the registry. The Docker client is the command-line tool that we interact with. The Docker daemon running on the Docker host listens to the requests coming from the client and acts accordingly. The client communicates with the daemon using a REST API, over UNIX sockets or a network interface. In the following sections, we explain how exactly the communication happens and the sequence of events.

Figure 8-3

High-level Docker architecture

Docker Images

A Docker image is a package that includes your microservice (or your application) along with all its dependencies. Your application will see only the dependencies that you pack in your Docker image. You define the filesystem for your image, and it will not have access to the host filesystem. A Docker image is created using a Dockerfile. The Dockerfile defines all the dependencies to build a Docker image.

Note

For readers who are keen on exploring more on Docker concepts, we recommend following the Docker documentationFootnote 3.

The following code shows the contents of a sample Dockerfile. The first line says to start building this new image using the base image called openjdk:8-jdk-alpine. When we build the Docker image from this Dockerfile with the tooling provided by Docker, it first loads the openjdk:8-jdk-alpine image. The Docker engine first checks whether the image is in the local registry of images, and if not, it pulls it from a remote registry (for example, the Docker Hub). Also, if the openjdk:8-jdk-alpine image has dependencies on other Docker images, those will also be pulled and stored in the local registry.

FROM openjdk:8-jdk-alpine
ADD target/sample01-1.0.0.jar /sample01-1.0.0.jar
ENTRYPOINT ["java", "-jar", "sample01-1.0.0.jar"]

The second line says to copy sample01-1.0.0.jar from the target directory under the current location of the filesystem (this is in fact our host filesystem, which we use to create a Docker image) to the root of the filesystem of the image we want to create. The third line defines the entry point or the command to execute when someone runs this Docker image.
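A Dockerfile can carry more instructions than the three used here. The following sketch is not part of the book's sample; the WORKDIR, EXPOSE, and ENV lines are illustrative additions that show a few commonly used directives with the same microservice.

FROM openjdk:8-jdk-alpine
# Directory inside the image where subsequent instructions and the process run
WORKDIR /app
# Copy the fat JAR built by Maven into the image
ADD target/sample01-1.0.0.jar /app/sample01-1.0.0.jar
# Document the port the microservice listens on (this does not publish the port)
EXPOSE 9000
# Allow JVM options to be passed in at runtime via an environment variable
ENV JAVA_OPTS=""
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar /app/sample01-1.0.0.jar"]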

To build a Docker image from a Dockerfile, we use the following command (we go through this later in the chapter, so you do not need to try it out now). This command produces a Docker image and, by default, stores it in the local image registry. The command is in fact executed by the Docker client; once we execute it, the Docker daemon running on the Docker engine makes sure all the dependent images are loaded locally and the new image is created.

:\> docker build -t sample01 .

Docker Registry

If you are familiar with Maven, you may already know how Maven repositories work. The Docker registries follow a similar concept. A Docker registry is a repository of Docker images. They operate at different levels. The Docker Hub is a public Docker registry where anyone can push and pull images. Later in the chapter, we explain how to publish and retrieve images to and from the Docker Hub. Having a centralized registry of all the images helps you share and reuse them. It is also becoming a popular way of distributing software. If your organization needs a restricted Docker registry just for its employees, that can be done too.

The following command will list all the Docker images stored in your local machine. This will include all the images you built locally as well as pulled from a Docker registry.

:\> docker image ls

The following command will list all the containers available in the host machine and their status. In the next section, we discuss the difference between images and containers.

:\> docker ps

Containers

A container is a running instance of a Docker image. In fact, images become containers when they run on a Docker engine. There can be many images on your local machine, but not all of them are running all the time, unless you explicitly make them run. A container is in fact a regular Linux container defined by namespaces and control groups. We use the following command to start a Docker container from a Docker image; here, sample01 is the image name. This command first checks whether the sample01 image and all its base images are in the local Docker registry, and if not, pulls any missing images from a remote registry. Finally, once all the images are there, the container starts and executes the program or command set as the ENTRYPOINT in the corresponding Dockerfile.

:\> docker run sample01

Deploying Microservices with Docker

In this section, we see how to create a Docker image with a microservice, run it, and then finally publish it to the Docker Hub. First, we need to have our microservice running locally. Once you download all the examples from the Git repository of the book, you can find the source code related to this sample available in the ch08/sample01 directory. Let’s build the sample with the following command, run from the ch08/sample01 directory.

:\> mvn clean install

This will result in the target/sample01-1.0.0.jar file. This is our microservice, and now we need to create a Docker image with it.

Note

To run the examples in this chapter, you need Java 8 or later, Maven 3.2 or later, and a Git client. Once you have successfully installed those tools, you need to clone the Git repo: https://github.com/microservices-for-enterprise/samples.git. The chapter samples are in the ch08 directory.

Creating a Docker Image with a Microservice

To create a Docker image, first we need to create a Dockerfile. As we discussed in previous sections, this file defines all the dependent images, other binary dependencies, and the command to start our microservice. The following code lists the contents of the Dockerfile, created in the ch08/sample01 directory. Since we use Spring Boot to build the microservice, we need to have Java in our image. There are two options: one is to get the Java binary and install it into our image from scratch, and the other is to find an existing Docker image with a Java environment and reuse it. The second approach is the recommended one, and we use it here. We use openjdk:8-jdk-alpine as the base image, which will be pulled from the Docker Hub at the time we build our image.

FROM openjdk:8-jdk-alpine
ADD target/sample01-1.0.0.jar /sample01-1.0.0.jar
ENTRYPOINT ["java", "-jar", "sample01-1.0.0.jar"]

Let's use the following command from the ch08/sample01 directory to build a Docker image with the name (-t in the command) sample01. The output you see may be slightly different from what is shown here if you do not have the openjdk:8-jdk-alpine image already loaded into your Docker engine.

:\> docker build -t sample01 .

Sending build context to Docker daemon 17.45MB
Step 1/3: FROM openjdk:8-jdk-alpine
 ---> 83621aae5e20
Step 2/3: ADD target/sample01-1.0.0.jar /sample01-1.0.0.jar
 ---> f3448272e3a9
Step 3/3: ENTRYPOINT ["java", "-jar", "sample01-1.0.0.jar"]
 ---> Running in ec9a9f91c950
Removing intermediate container ec9a9f91c950
 ---> 35188a2bfb00
Successfully built 35188a2bfb00
Successfully tagged sample01:latest

There is one important thing to learn from this output: the Docker engine performs the operation in three steps, one step corresponding to each line in the Dockerfile. Docker builds images as layers, and each step in this operation creates a layer. These layers are read-only and reusable between multiple containers. We revisit this concept later in this chapter, once we better understand the behavior of containers.

To list all the images in the Docker engine, we can use the following command.

:\> docker image ls

Running a Microservice as a Docker Container

Once we have the Docker image available for our microservice, we can use the following command to spin up a Docker container with it.

:\> docker run -p 9000:9000 sample01

This command instructs the Docker engine to spin up a new Docker container from the sample01 image and map port 9000 of the host machine to port 9000 of the Docker container. We picked 9000 here, because our microservice in the container starts on port 9000. Unless we map the container port to a host machine port, we won’t be able to communicate with our microservice running in a container.
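The host port and the container port do not have to match. As a quick sketch (host port 8080 is an arbitrary choice, not part of the book's sample), you can publish the container's port 9000 on host port 8080 and then call the microservice through that port:

:\> docker run -p 8080:9000 sample01
:\> curl http://localhost:8080/order/11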

When you run this command, you can see that the output from the microservice running in the container is printed on the console of the host machine. Also, if you press Ctrl+C, it kills the container. That is because, in the way we executed this command, the container is attached to the host machine's terminal. With the -d option in the docker run command, we can detach the container from the host machine's terminal. This will return the container ID corresponding to the container we just started.

:\> docker run -d -p 9000:9000 sample01

9a5cd90b714fc5a27281f94622c1a0d8f1dd1a344f4f4fcc6609413db39de000

To test our microservice running in the container, use the following command from the host machine.

:\> curl http://localhost:9000/order/11

{"customer_id":"101021","order_id":"11","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}

Hint

If we want to see all the containers running on our Docker engine, we can use the docker ps command. To stop a running container, we use the docker stop <container id> command. Also, if you want to remove a container, use the docker rm <container id> command. Once you remove the container, you can delete the corresponding Docker image from your local registry with the docker rmi <image name> command.
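Putting those commands together, a typical clean-up sequence looks like the following sketch. The container ID shown (e1039667db1a) is just an example value; replace it with the ID reported by docker ps on your machine.

:\> docker ps
:\> docker stop e1039667db1a
:\> docker rm e1039667db1a
:\> docker rmi sample01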

Publishing a Docker Image to the Docker Hub

Docker Hub is a public Docker registry where anyone can push and pull Docker images. First, we need to create a Docker ID at https://hub.docker.com/ . For example, we use the Docker ID prabath in the following example. Once you create a Docker ID with a password, use the following command to register your Docker ID with the Docker client running locally.

:\> docker login --username=prabath

This command will prompt you to enter the password, and the Docker client will store your credentials in the keychain. Next, use the following command to find the image ID of the Docker image we created before and copy the value of IMAGE ID field corresponding to the sample01 image; for example, 35188a2bfb00.

:\> docker images

Now we need to tag our image with the Docker ID from the Docker Hub, as shown in the following command. Tagging helps create a more meaningful name for an image, and since we plan to publish this image to the Docker Hub, make sure the tag follows the convention of starting with your Docker Hub account name. You need to replace prabath with your own Docker Hub account name.

:\> docker tag 35188a2bfb00 prabath/sample01

Finally, use the following command to push the image to the Docker Hub, and it should appear under your Docker Hub account once published.

:\> docker push prabath/sample01

Now, anyone can pull this image from anywhere, using the following command.

:\> docker pull prabath/sample01

Docker Compose

In practice, a microservices deployment has more than one service, where each service has its own container. For example, in Figure 8-4 the Order Processing microservice talks to the Inventory microservice. There can also be cases where a microservice depends on other services, such as a database. The database will be another container, but still part of the same application. Docker Compose helps define and manage such multi-container environments. It's another tool that we have to install, apart from the Docker client and the Docker engine (host).

Note

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configurationFootnote 4. You can refer to the Docker documentationFootnote 5 for more details. It’s quite straightforward.

Figure 8-4

Multi-container microservices deployment

Docker Compose uses a YAML file to define the configuration of an application. The following is an example YAML file named docker-compose.yml, and it brings up the Order Processing (sample01) and Inventory (sample02) microservices. You can find the complete file in the ch08/sample02 directory.

version: '3.3'
services:
  inventory:
    image: sample02
    ports:
      - "9090:9090"
    restart: always
  orderprocessing:
    image: sample01
    restart: always
    ports:
      - "9000:9000"
    depends_on:
      - inventory

Here we define two services, inventory and orderprocessing. The ports tag under each service defines the port-forwarding rules (which we discussed). The image tag points to the Docker image. The depends_on tag defined under orderprocessing states that it depends on the inventory service. You can define multiple services under the depends_on tag. Finally, you may notice that the value of the restart tag is set to always under both services, which instructs the Docker engine to always restart the services in case they go down.
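As mentioned earlier, a dependency such as a database is just another service in the same file. The following fragment is a sketch, not part of the book's sample; it adds a hypothetical MySQL container under the services section that the Inventory microservice could use, with an illustrative image tag and environment variables.

  inventorydb:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: inventory
    ports:
      - "3306:3306"
    restart: always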

Building a Docker Image for Inventory Microservice

We already have a Docker image for the Order Processing microservice. Let's create another one for the Inventory microservice. To create a Docker image, first we need to create a Dockerfile. As we discussed in previous sections, this file defines all the dependent images, other binary dependencies, and the command to start our microservice. The following lists the contents of the Dockerfile created in the ch08/sample02 directory.

FROM openjdk:8-jdk-alpine
ADD target/sample02-1.0.0.jar /sample02-1.0.0.jar
ENTRYPOINT ["java", "-jar", "sample02-1.0.0.jar"]

Let’s use the following command from the ch08/sample02 directory to build the Docker image with the name sample02. Before running this command, make sure you have built the sample with Maven.

:\> docker build -t sample02 .

Now, let’s push this image to the Docker Hub. This may not be useful for this example, but later in the book, we’ll refer directly to the image from the Docker Hub. Use the following command to find the image ID of the Docker image we created before and copy the value of IMAGE ID field corresponding to the sample02 image; for example, 35199a2bfb00.

:\> docker images

Now we need to tag our image with our Docker Hub ID and push it, as shown in the following commands. Make sure you are logged into the Docker Hub with the docker login command prior to executing docker push. Once published, the image should appear under your Docker Hub account.

:\> docker tag 35199a2bfb00 prabath/sample02
:\> docker push prabath/sample02

If you are already running the Order Processing microservice, stop the running container. To find all the containers running on the Docker host, use the following command; it returns the container IDs of all the containers. Find the container ID (for example, e1039667db1a) related to the sample01 image.

:\> docker ps

To remove a running container, we have to stop it first, and then remove it.

:\> docker container stop e1039667db1a
:\> docker container rm e1039667db1a

Launching the Application with Docker Compose

We are all set to launch our application with Docker Compose. Run the following command from the ch08/sample02 directory, which is where we have the docker-compose.yml file we created before.

:\> docker-compose up

Since we are launching docker-compose here in attached mode (with no -d), you will see the output from both containers on your terminal. You will also notice that each log line from the two microservices is tagged with the corresponding service name defined in the docker-compose.yml file.
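If you prefer not to keep the terminal attached, you can start the application in the background and tear it down when you are done. A quick sketch of the usual commands:

:\> docker-compose up -d
:\> docker-compose logs -f orderprocessing
:\> docker-compose down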

Testing the Application with cURL

Now we have all our microservices running with Docker Compose. Let’s first try to test the Order Processing microservice with the following cURL command. This call only hits the Order Processing microservice and returns the result.

:\> curl http://localhost:9000/order/11

{"customer_id":"101021","order_id":"11","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}

Let's try another request to the Order Processing microservice, one that also invokes the Inventory microservice. The request goes to the Order Processing microservice, which in turn invokes the Inventory microservice. Here we are posting a JSON request to the Order Processing microservice.

:\> curl -v -H "Content-Type: application/json" -d '{"customer_id":"101021","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}' http://localhost:9000/order

Now if you look at the terminal attached to both containers, you will find the following output, which confirms that the request has hit the Inventory microservice.

inventory_1 | 101
inventory_1 | 103

How Does the Communication Happen Between Containers?

To see how exactly the two microservices are wired in our previous example, we need to look at the source code of the Order Processing microservice, which is available at ch08/sample01/src/main/java/com/apress/ch08/sample01/service/OrderProcessing.java. There you can see that, instead of an IP address or a hostname, we use the service name (as defined in docker-compose.yml) of the Inventory microservice: http://inventory:9090/inventory.

@RequestMapping(method = RequestMethod.POST)
public ResponseEntity<?> createOrder(@RequestBody Order order) {
    if (order != null) {
        RestTemplate restTemplate = new RestTemplate();
        URI uri = URI.create("http://inventory:9090/inventory");
        restTemplate.put(uri, order.getItems());
        order.setOrderId(UUID.randomUUID().toString());
        URI location = ServletUriComponentsBuilder
                .fromCurrentRequest().path("/{id}")
                .buildAndExpand(order.getOrderId())
                .toUri();
        return ResponseEntity.created(location).build();
    }
    return ResponseEntity.status(HttpStatus.BAD_REQUEST).build();
}

Note

By default, Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, as well as discoverable by them at a hostname identical to the container nameFootnote 6.

Container Orchestration

Containers and Docker took most of the pain out of working on microservices. If not for Docker, microservices wouldn’t be popular today. Then again, containers only solve one part of the problem in a large-scale microservices deployment. How about managing the lifecycle of a container from the point it is born to the point it is terminated? How about scheduling containers to run on different physical machines in a network, tracking their running status, and load balancing between multiple containers of a given cluster? How about autoscaling containers to meet varying requests or the load on the containers? These are the issues addressed in a container orchestration framework. Kubernetes and Apache Mesos are the most popular container orchestration frameworks. In this chapter, we only focus on Kubernetes.

Introduction to Kubernetes

In short, Docker abstracts the machine (or the computer), while Kubernetes abstracts the network. Google introduced Kubernetes to the public in 2014 as an open source project. Before Kubernetes, for many years, Google worked on a project called Borg, to help their internal developers and system administrators manage thousands of applications/services deployed over large datacenters. Kubernetes is the next phase of Borg.

Kubernetes lets you deploy and scale an application of any type, running on a container, to thousands of nodes effortlessly. For example, in a deployment descriptor (which is understood by Kubernetes), you can specify how many instances of the Order Processing microservice you need to have.

Kubernetes Architecture

Just like in Docker, Kubernetes too follows a client-server based architecture. There is one Kubernetes master node and a set of worker nodes connected to it. The master node is also known as the Kubernetes control plane, which controls the complete Kubernetes cluster. There are four main components in a Kubernetes control plane: the API server, scheduler, controller manager, and etcd. The API server exposes a set of APIs to all the worker nodes and also to the other components in the control plane itself. The scheduler assigns a worker node to each deployment unit of your application. The applications are actually running on a worker node.

Figure 8-5

High-level Kubernetes architecture (a Kubernetes cluster)

The controller manager is responsible for managing and tracking all the nodes in the Kubernetes deployment. It makes sure components are replicated properly and autoscaled, and that failures are handled gracefully, along with many other tasks. etcd is a highly available, consistent datastore that stores the Kubernetes cluster configuration.

A worker node consists of three components: a kubelet, a container runtime, and a kube-proxy. The container runtime can be Docker or rkt. Even though Kubernetes container runtime support was initially tied to Docker and rkt, it can now be extended to any Open Container Initiative (OCI)-based container runtime via the Container Runtime Interface (CRI). The responsibility of the kubelet is to manage the node by communicating with the API server running on the master node; it runs on each worker node and acts as a node agent for the master node. The kube-proxy, or Kubernetes network proxy, does simple TCP and UDP stream forwarding or round-robin TCP and UDP forwarding across a set of backends.

Installing Kubectl

The Kubernetes master node can run anywhere. To interact with the master node, we need to have kubectl installed locally in our machine. The kubectl installation instructions are available at https://kubernetes.io/docs/tasks/tools/install-kubectl/ . Make sure that the version of kubectl is within one minor version difference of your cluster.

Installing Minikube

Minikube is the best way to get started with Kubernetes on your local machine. Unlike a production Kubernetes cluster, Minikube only supports a one-node Kubernetes cluster. In this chapter, we use Minikube to set up a Kubernetes cluster. Minikube installation details are available at https://kubernetes.io/docs/tasks/tools/install-minikube/ . You will never use Minikube in a production environment, but the Kubernetes concepts you learn with Minikube are valid across any Kubernetes distribution.

Test the Kubernetes Setup

Once you have installed both kubectl and Minikube on your local machine, start the Minikube server with the following command.

:\> minikube start

minikube config set WantUpdateNotification false
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Once Minikube starts, run the following command to verify the communication between kubectl and the Kubernetes cluster. This prints the versions of both the kubectl client and the Kubernetes cluster.

:\> kubectl version

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T22:30:22Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"", Minor:"", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2018-01-26T19:04:38Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}

Also, with the following command, we can determine where the Kubernetes cluster is running.

:\> kubectl cluster-info

Kubernetes master is running at https://192.168.99.100:8443

Now we can use the following command to see all the nodes in our Kubernetes cluster. Since we are running Minikube, you will only find one node.

:\> kubectl get nodes

NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     <none>    24d       v1.9.0

To find more details about a given node, use the following command. This will return a whole lot of metadata related to the node.

:\> kubectl describe nodes minikube

Kubernetes Core Concepts

In this section, we discuss fundamental concepts associated with Kubernetes. Before we get there, let’s look at how Kubernetes works with containers.

The first thing we need to do prior to a Kubernetes deployment is to identify and package our applications into container images, where, at runtime, each application has its own isolated container. Then we need to identify how we are going to group these containers. For example, there can be microservices that always run together, where only one microservice exposes functionality to the outside and all other communication between the microservices is internal. Another example is a database we have as a container, which is used by only one microservice. In that case, we can group the database container and the corresponding microservice together. Of course, someone can argue this is an anti-pattern, and we do not disagree. Let's revisit this example after defining the term pod, within the context of Kubernetes.

Pod

In Kubernetes, we call a group of containers a pod. A pod is the smallest deployment unit in Kubernetes; you cannot deploy containers on their own. First we need to group one or more containers into a pod. Since the pod is the smallest deployment unit, Kubernetes can scale only pods, not individual containers. In other words, all the containers grouped into a pod must have the same scalability requirements. Revisiting our previous example, you can group a set of microservices together in a pod if all of them have the same scalability requirements. But grouping a microservice with a database in a pod is not an ideal use case, because a database usually has different scalability requirements than a microservice.

One common use case for a pod is to have a sidecar proxy with the microservice itself (see Figure 8-6). Here, all the inbound and outbound requests to the microservice flow through the proxy. This model is discussed in detail in Chapter 9, “Service Mesh”.

Figure 8-6

Grouping a set of services into a pod

Even though a pod can have multiple containers, all of them share the same network interface and storage. Containers within a pod can communicate with each other using localhost as the hostname, and none of the services are exposed outside the pod by default. Also, since all the containers in the same pod share the same network interface, no two containers can spin up their services on the same port. When scheduling pods on worker nodes, Kubernetes makes sure that all the containers in the same pod are scheduled to run on the same physical machine (see Figure 8-7).

Figure 8-7

Containers in the same pod are scheduled to run on the same physical machine

Creating a Pod

Let's look at the following deployment descriptor, which is used to create a pod with the Order Processing microservice and the Inventory microservice. Here we first set the value of the kind attribute to Pod, and later, under the spec attribute, we define the set of images that need to be grouped into this pod.

Important

Here we made an assumption that both the Order Processing and Inventory microservices fit into a pod and both of them have the same scalability requirements. This is a mere assumption we made here to make the examples straightforward and explain the concepts.

apiVersion: v1
kind: Pod
metadata:
  name: ecomm-pod
  labels:
    app: ecommapp
spec:
  containers:
    - name: orderprocessing
      image: prabath/sample04
      ports:
        - containerPort: 9000
    - name: inventory
      image: prabath/sample02
      ports:
        - containerPort: 9090

Once the application developer comes up with the deployment descriptor, he or she feeds it into the Kubernetes control plane using kubectl, via the API server. Then the scheduler schedules the pod on a worker node, and the corresponding container runtime pulls the container images from the Docker registry.

Note

In the Kubernetes deployment descriptor, we point to a slightly modified version of the Order Processing microservice (from what we discussed earlier) called sample04, instead of sample01.

ReplicaSet

When you deploy a pod in a Kubernetes environment, you can specify how many instances of a given pod you want to keep running at all times. There can be many other scaling requirements as well, which we discuss later in the chapter with examples. It is the responsibility of the ReplicaSetFootnote 7 to monitor the number of running pods and maintain the expected count. If a pod goes down for some reason, the ReplicaSet makes sure a new one is spun up. Later in the chapter, we explain how ReplicaSets work.

Service

In a way, a serviceFootnote 8 in Kubernetes is a grouping of pods that provide the same functionality; these pods are in fact different instances of the same pod definition. For example, you may run five instances of the Order Processing microservice as five pods to cater to high traffic. These pods are exposed to the outside world via a service. A service has its own IP address and port, which never change during its lifetime, and it knows how to route traffic to the attached pods. No client application of a microservice needs to worry about talking directly to a pod; it talks to a service instead. At the same time, based on scalability requirements and other reasons, pods may come and go, and each time a pod comes up it may carry a different IP address, which makes it hard for a client application to maintain a connection to a pod. The service in Kubernetes solves this problem. There is an example later in the chapter that demonstrates how to create a service in Kubernetes.

Deployment

DeploymentFootnote 9 is a higher-level construct that can be used to deploy and manage applications in a Kubernetes environment. It uses the ReplicaSet (which is a low-level construct) to create pods. We discuss how to use the deployment construct later in this chapter.

Figure 8-8 illustrates how Pods, Services, ReplicaSets, and Deployments are related.

Figure 8-8

Pods, Services, ReplicaSets, and Deployments

Deploying Microservices in a Kubernetes Environment

In this section, we see how to create a pod with two microservices and invoke one microservice from the host machine using cURL while the two microservices communicate with each other, in the same pod.

Creating a Pod with a YAML File

The first thing we need to do is come up with a deployment descriptor for our pod. It's the same YAML file we discussed in the previous section. There we create a pod called ecomm-pod, with two container images called prabath/sample04 and prabath/sample02. Previously in this chapter, we created those two images and pushed them to the Docker Hub (sample04 is a slightly modified version of sample01). Each container in the YAML configuration defines the port where its microservice is running; for example, the Order Processing microservice runs on HTTP port 9000, while the Inventory microservice runs on HTTP port 9090. You can find the complete YAML file (ecomm-pod.yml) in the ch08/sample03 directory. To create a pod with this YAML configuration, run the following command from the ch08/sample03 directory.

:\> kubectl create -f ecomm-pod.yml

pod/ecomm-pod created

If the pod is successfully created, the following command should return the status of the pod.

:\> kubectl get pods

NAME        READY     STATUS    RESTARTS   AGE
ecomm-pod   2/2       Running   0          1m

The first time, it may take a while for the status to update to Running, as the container runtime of the worker node has to pull the container images from the Docker Hub. The value 2/2 under the READY column states that both containers in this pod are ready to accept requests. To find out more details about this pod, we can use the following command. This will once again return a whole lot of useful information about the given pod. Figure 8-9 shows the part of the output that lists all the events generated while booting up the pod.

:\> kubectl describe pod ecomm-pod

Figure 8-9

Events in starting up the ecomm-pod

If you want to delete a pod, use the following command.

:\> kubectl delete -f ecomm-pod.yml

Creating a Service with a YAML File

Even though we have a running pod, none of the microservices running there is accessible outside the pod. To expose a microservice outside of a pod, we need to create a service. Once again, to define a service, we need a service descriptor, a YAML file, as shown here. The complete YAML file (ecomm-service.yml) is available in the ch08/sample03 directory.

apiVersion: v1
kind: Service
metadata:
  name: ecomm-service
spec:
  selector:
    app: ecommapp
  ports:
    - port: 80
      targetPort: 9000
  type: NodePort

Here you can see the value of the kind attribute is set to Service, and under the selector/app attribute, the value is set to the label of the ecomm-pod we created before. This service is exposed to the outside world via HTTP port 80, and traffic coming to it is routed to port 9000 (targetPort), which is the port of the Order Processing microservice. Finally, another important attribute we cannot miss is type, where the value is set to NodePort. When the value of the service type is set to NodePort, it exposes the service on each node's IP at a static port.
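When you do not specify a node port, Kubernetes picks one from its node-port range (30000–32767 by default). If you want a predictable port, you can pin it with a nodePort field, as in the following sketch; port 32000 is an arbitrary choice and not part of the book's sample.

  ports:
    - port: 80
      targetPort: 9000
      nodePort: 32000
  type: NodePort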

Let’s run the following command from the ch08/sample03 directory to create the service.

:\> kubectl create -f ecomm-service.yml

service/ecomm-service created

If the service is successfully created, the following command should return the status of the service. There you can see that port 80 of the service is mapped to port 32179 of the Kubernetes master node.

:\> kubectl get svc

NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
ecomm-service   NodePort    10.97.200.207   <none>        80:32179/TCP
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP

If you want to delete a service, use the following command. This will only remove the service, not the corresponding pod.

:\> kubectl delete -f ecomm-service.yml

Testing the Pod with cURL

Now let's invoke the Order Processing service from the host machine. Here we use the IP address of the Kubernetes master node (which you can obtain from the kubectl cluster-info command).

:\> curl http://192.168.99.100:32179/order/11

{"customer_id":"101021","order_id":"11","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}

This request only hits the Order Processing microservice and does not validate the communication between the two microservices. Let’s use the following request to create an order via the Order Processing microservice, which will also then update the Inventory microservice.

curl -v -H "Content-Type: application/json" -d '{"customer_id":"101021","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}'  http://192.168.99.100:32179/order

We can confirm whether our request hits the Inventory microservice by looking at its logs. The following command helps to get the logs from a container running in a pod. Here, ecomm-pod is the name of the pod, and inventory is the container name (as defined in the pod deployment descriptor YAML file). The logs from the Inventory microservice print the item codes from our order request.

:\> kubectl logs ecomm-pod -c inventory

101
103

How Does Communication Happen Between Containers in the Same Pod?

Within a pod, communication between microservices can happen in different ways. In this example, the Order Processing microservice uses localhost as the hostname to talk to the Inventory microservice, as per the following code, available at ch08/sample04/src/main/java/com/apress/ch08/sample04/service/OrderProcessing.java. There you can see that, instead of an IP address or a hostname, we use localhost (http://localhost:9090/inventory).

@RequestMapping(method = RequestMethod.POST)
public ResponseEntity<?> createOrder(@RequestBody Order order) {
    if (order != null) {
        RestTemplate restTemplate = new RestTemplate();
        URI uri = URI.create("http://localhost:9090/inventory");
        restTemplate.put(uri, order.getItems());
        order.setOrderId(UUID.randomUUID().toString());
        URI location = ServletUriComponentsBuilder
                .fromCurrentRequest().path("/{id}")
                .buildAndExpand(order.getOrderId())
                .toUri();
        return ResponseEntity.created(location).build();
    }
    return ResponseEntity.status(HttpStatus.BAD_REQUEST).build();
}

In addition to using HTTP over localhost, there are two other popular options for inter-container communication within a single pod: using shared volumes and using inter-process communication (IPC). We recommend interested readers to refer to Kubernetes documentationFootnote 10 on those topics.

How Does Communication Happen Between Containers in Different Pods?

The containers in different pods communicate with each other using the pod IP address and the corresponding port. In a Kubernetes cluster, each pod has its own distinct IP address.
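You can see the IP address assigned to each pod with the -o wide flag, which adds IP and NODE columns to the pod listing. In practice, though, you rarely hard-code a pod IP address; containers in other pods are usually reached through a Kubernetes service, whose name (for example, http://ecomm-service) is resolvable inside the cluster via the cluster DNS.

:\> kubectl get pods -o wide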

Deployments and Replica Sets

Even though we discussed creating a pod using a deployment descriptor, in practice you will rarely do that; you'll use a deployment instead. A deployment is an object type in Kubernetes (just like a pod) that helps us manage pods using ReplicaSets. A ReplicaSet is an essential component of a Kubernetes cluster, through which you specify how a given pod scales up and down. Before we create a deployment, let's do some clean-up by deleting the pod we created and the corresponding service.

The following commands first delete the service and then the pod. You need to run these commands from the ch08/sample03 directory.

:\> kubectl delete -f ecomm-service.yml
:\> kubectl delete -f ecomm-pod.yml

Now, to create a deployment, we need to define our requirements in a deployment descriptor, which is a YAML file. The complete YAML file (ecomm-deployment.yml) is available in the ch08/sample03 directory. Here you can see that the value of the kind attribute is set to Deployment. The value of spec/replicas is set to 3, which means this deployment creates three instances of the pod defined under template. Just like in the deployment descriptor for the pod (which we used before), here too we define the container images that should be part of the pod created under this deployment.

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: ecomm-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: ecomm-deployment
    spec:
      containers:
        - name: orderprocessing
          image: prabath/sample04
          ports:
            - containerPort: 9000
        - name: inventory
          image: prabath/sample02
          ports:
            - containerPort: 9090

Let’s use the following command from ch08/sample03 to create a deployment.

:\> kubectl create -f ecomm-deployment.yml

deployment.apps/ecomm-deployment created

If the deployment is successfully created, the following command should return the status of it. The output says that three replicas of the pod defined under the corresponding deployment have been created and are ready to use.

:\> kubectl get deployments

NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ecomm-deployment   3         3         3            3           23m

The following command shows all the pods created by this deployment.

:\> kubectl get pods

NAME                                READY  STATUS    RESTARTS AGE
ecomm-deployment-546f8c4d6b-67ttd   2/2    Running   1        24m
ecomm-deployment-546f8c4d6b-hldnf   2/2    Running   0        24m
ecomm-deployment-546f8c4d6b-m9vmt   2/2    Running   0        24m

Now we need to create a service pointing to this deployment, so the microservices running in the pods are accessible from outside. Here we use ecomm-dep-service.yml, which is available in the ch08/sample03 directory, to create the service. The only difference from the previous case is the value of spec/selector/app, which now points to the labels/app of the deployment.

apiVersion: v1
kind: Service
metadata:
  name: ecomm-service
spec:
  selector:
    app: ecomm-deployment
  ports:
    - port: 80
      targetPort: 9000
  type: NodePort

Let’s use the following command from ch08/sample03 to create the service.

:\> kubectl create -f ecomm-dep-service.yml

service/ecomm-service created

If the service is successfully created, the following command should return the status of the service. There you can see that port 80 of the service is mapped to port 31763 of the Kubernetes master node.

:\> kubectl get svc

NAME           TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)
ecomm-service  NodePort   10.111.102.84  <none>       80:31763/TCP
kubernetes     ClusterIP  10.96.0.1      <none>       443/TCP

Now let’s invoke the Order Processing service from the host machine.

:\> curl http://192.168.99.100:31763/order/11

{"customer_id":"101021","order_id":"11","payment_method":{"card_type":"VISA","expiration":"01/22","name":"John Doe","billing_address":"201, 1st Street, San Jose, CA"},"items":[{"code":"101","qty":1},{"code":"103","qty":5}],"shipping_address":"201, 1st Street, San Jose, CA"}

Scaling a Deployment

In the previous section, we started our deployment with three replicas, or three instances of the pod defined in the deployment descriptor. The following command shows how to scale the replicas to five. Here, deployment.extensions/ecomm-deployment is the name of our deployment.

:\> kubectl scale --replicas=5 deployment.extensions/ecomm-deployment

deployment.extensions/ecomm-deployment scaled

Immediately after this command, if you run the following, you will find that two new pods are being created.

:\> kubectl get pods

NAME                               READY   STATUS             RESTARTS
ecomm-deployment-546f8c4d6b-67ttd  2/2     Running            1
ecomm-deployment-546f8c4d6b-hldnf  2/2     Running            0
ecomm-deployment-546f8c4d6b-hsqnp  0/2     ContainerCreating  0
ecomm-deployment-546f8c4d6b-m9vmt  2/2     Running            0
ecomm-deployment-546f8c4d6b-vt624  0/2     ContainerCreating  0

If you run the same command again after a few seconds, you will find that all the pods have been created successfully and are running.

NAME                               READY   STATUS             RESTARTS
ecomm-deployment-546f8c4d6b-67ttd  2/2     Running            1
ecomm-deployment-546f8c4d6b-hldnf  2/2     Running            0
ecomm-deployment-546f8c4d6b-hsqnp  2/2     Running            0
ecomm-deployment-546f8c4d6b-m9vmt  2/2     Running            0
ecomm-deployment-546f8c4d6b-vt624  2/2     Running            0

Autoscaling a Deployment

Kubernetes also lets you autoscale a deployment based on certain parameters. For example, the following command sets the ecomm-deployment to autoscale based on average CPU utilization. If the average CPU utilization across the pods goes above 50%, the autoscaler scales the deployment up, to a maximum of 10 replicas; if it stays below 50%, the autoscaler scales it down, to a minimum of one replica.

:\> kubectl autoscale deployment ecomm-deployment --cpu-percent=50 --min=1 --max=10

horizontalpodautoscaler.autoscaling/ecomm-deployment autoscaled
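The same policy can be expressed declaratively as a HorizontalPodAutoscaler object, which is roughly what the kubectl autoscale command creates behind the scenes. The following is a minimal sketch of the equivalent YAML, written against the autoscaling/v1 API.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ecomm-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: ecomm-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50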

Helm: Package Management with Kubernetes

HelmFootnote 11 is a package manager for Kubernetes, and it makes it possible to organize Kubernetes objects into a packaged application that anyone can download, install in one click, or customize. Such packages are known as charts in Helm. Using Helm charts, you can define, install, and upgrade even the most complex Kubernetes applications.

A chart is a collection of files that describes a related set of Kubernetes resources. A single chart might be used to deploy something simple, like a memcached pod, or something complex, like a full web app stack with HTTP servers, databases, caches, and so on. Installing a chart is quite similar to installing a package using a package management tool such as Apt or Yum. Once you have Helm up and running, installing a package can be as simple as running helm install stable/mysql in the command line.
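Internally, a chart is just a directory of templated YAML files plus metadata. As a rough sketch (the chart name and layout below are illustrative, not from the book's samples), a chart for our e-commerce application might look like the following, where the templates are parameterized versions of the deployment and service descriptors we wrote earlier and values.yaml holds the defaults (image names, replica counts, ports) that users can override at install time.

ecomm-chart/
  Chart.yaml          # chart name, version, and description
  values.yaml         # default configuration values
  templates/
    deployment.yaml   # templated version of ecomm-deployment.yml
    service.yaml      # templated version of ecomm-dep-service.yml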

Helm charts describe even the most complex apps, provide repeatable application installation, and serve as a single point of authority. Also, you can build lifecycle management for your Kubernetes-based microservices as charts, which are easy to version, share, and host on public or private servers. You can also roll back to any specific version if required.

Microservices Deployment Patterns

Based on business requirements, there are multiple deployment patterns for microservices. Microservices were around for a while, even before containers became mainstream, and these deployment patterns have evolved over time. In the following sections, we weigh the pros and cons of each of these deployment patterns.

Multiple Services per Host

This model is useful when you have a few microservices and do not expect each microservice to be isolated from the others. Here the host can be a physical machine or a virtual machine. This pattern does not scale when you have many microservices, and it also does not help you achieve the benefits of a microservices architecture.

Service per Host

With this model, a physical host machine isolates each microservice. This pattern does not scale when you have many microservices, even for something around 10. It is a waste of resources and a maintenance nightmare. Also, it becomes harder and more time consuming to replicate the same operating environment across development, test, staging, and production setups.

Service per Virtual Machine

With this model, a virtual machine isolates each microservice. This pattern is better than the previous service-per-host model, but it still does not scale when you have many microservices. A virtual machine carries a lot of overhead, and we need powerful hardware to run multiple virtual machines on a single physical host. Also, due to their increased size, virtual machine images are less portable.

Service per Container

This is the most common and the recommended deployment model. Each microservice is deployed in its own container. This makes microservices more portable and scalable.

Container-Native Microservice Frameworks

Most of the microservices frameworks and programming languages that we use to build microservices are not designed to work with container management and orchestration technologies by default. Therefore, developers or DevOps teams have to put extra effort into creating the artifacts and configurations required to deploy the applications as containers. There are certain technologies, such as Metaparticle.io, that try to build a uniform set of plugins to incorporate such container-native capabilities into the code of the application or microservice you develop, as annotations. We discuss several of them next.

Metaparticle

MetaparticleFootnote 12 is getting some traction when it comes to microservice development as it provides some interesting features to harness your applications with containers and Kubernetes.

Metaparticle is a standard library for cloud-native applications on Kubernetes. The objective of Metaparticle is to democratize the development of distributed systems by providing simple, but powerful building blocks, built on top of containers and Kubernetes.

Metaparticle provides access to these primitives via familiar programming language interfaces. Developers no longer need to master multiple tools and file formats to harness the power of containers and Kubernetes. Metaparticle allows us to do the following:

  • Containerize your applications.

  • Deploy your applications to Kubernetes.

  • Quickly develop replicated, load-balanced services.

  • Handle synchronization like locking and master election between distributed replicas.

  • Easily develop cloud-native patterns like sharded systems.

For example, let’s consider the following code snippet, which contains a simple Java code for a hello world HTTP service. You can find the complete code example in the ch08/sample05 directory.

public class Main {
    private static final int port = 8080;

    public static void main(String[] args) {
        // Code of a simple HTTP service
    }
}

Here we use Metaparticle to containerize our Java microservices. You need to have the following Maven dependency to build a program with Metaparticle.

<dependency>
    <groupId>io.metaparticle</groupId>
    <artifactId>metaparticle-package</artifactId>
    <version>0.1-SNAPSHOT</version>
</dependency>

To harness your code with Docker and Kubernetes, you need to wrap the code of the HTTP service with Metaparticle constructs.

@Package(repository = "docker.io/your_docker_id",
         jarFile = "target/metaparticle-hello-1.0-SNAPSHOT.jar")
public static void main(String[] args) {
    Containerize(() -> {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            ...

Here, we use the @Package annotation to describe how to package the application and specify the Docker Hub username. Also, we need to wrap the main function in the Containerize function, which triggers the Metaparticle code when we build our microservice application.

Now we can build the application with mvn compile and, once the build is successful, run it with mvn exec:java -Dexec.mainClass=io.metaparticle.tutorial.Main.

This will start the HTTP microservice as a container. To access this service externally, we need to expose the ports of our microservice application. To do this, we need to add a @Runtime annotation to supply the ports to expose.

...
    @Runtime(ports={port})
...

As a final step, consider the task of exposing a replicated service on the Internet. To do this, we need to expand our usage of the @Runtime and @Package annotations. We add the publish field to the @Package annotation and set it to true; this is necessary in order to push the built image to the Docker repository. Then, we add the executor field to the @Runtime annotation and set our execution environment to metaparticle, which launches the service into the currently configured Kubernetes environment. Finally, we add a replicas field to the @Runtime annotation, which specifies the number of replicas to schedule.

...
    @Runtime(ports={port},
             replicas = 4,
             executor = "metaparticle")
    @Package(repository = "kasunindrasiri",
             jarFile = "target/metaparticle-package-tutorial-0.1-SNAPSHOT-jar-with-dependencies.jar",
             publish = true,
             verbose = true)
...

After we compile and run this, we can see that there are four pod replicas running behind a Kubernetes ClusterIP service.
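As a quick sanity check, we can verify this with the standard Kubernetes tooling; the exact pod and service names depend on your application and Docker repository.

# List the pods created for the service; four replicas should be in the Running state
kubectl get pods

# List the services; the application is exposed through a ClusterIP service
kubectl get services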

Containerizing a Spring Boot Service

We can use Metaparticle to containerize an existing Spring Boot application by using the Metaparticle dependency and annotations. First we need to add the Metaparticle dependency to our project's pom file and execute the SpringApplication.run call inside the Containerize function.

import static io.metaparticle.Metaparticle.Containerize;

@SpringBootApplication
public class BootDemoApplication {

    @Runtime(ports = {8080},
             replicas = 4,
             executor = "metaparticle")
    @Package(repository = "your-docker-user-goes-here",
             jarFile = "target/boot-demo-0.0.1-SNAPSHOT.jar",
             publish = true,
             verbose = true)
    public static void main(String[] args) {
        Containerize(() -> SpringApplication.run(BootDemoApplication.class, args));
    }
}

@RestController
class HelloController {

    @GetMapping("/")
    public String hello(HttpServletRequest request) {
        System.out.printf("[%s]%n", request.getRequestURL());
        return String.format("Hello containers [%s] from %s",
                request.getRequestURL(), System.getenv("HOSTNAME"));
    }
}

Beyond seamless containerization, Metaparticle also offers other features such as distributed synchronization and sharding. In addition to Metaparticle, languages such as Ballerina.io support similar capabilities via annotations. You can find the complete example in the ch08/sample06 directory.

Spring Boot and Docker Integration

Spring Boot is meant to be deployed and run on containers. It offers plugins that allow you to create Docker images from the Spring Boot services that you develop. You can create the Docker image by writing the required Dockerfile and adding the docker-file Maven plugin to the project build phase. For example, the Dockerfile will look as follows:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
COPY ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

You can add the docker-file pluginFootnote 13 to the pom file of your project. The complete code example is in the ch08/sample07 directory.
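As a rough sketch of what that plugin configuration might look like, the following snippet assumes the Spotify dockerfile-maven-plugin; the repository name, version, and tag are placeholders that you would adjust for your own project.

<plugin>
    <groupId>com.spotify</groupId>
    <artifactId>dockerfile-maven-plugin</artifactId>
    <!-- Placeholder version; use the latest released version of the plugin -->
    <version>1.4.9</version>
    <configuration>
        <!-- Placeholder image repository; replace with your own Docker Hub user/repo -->
        <repository>your-docker-id/sample07</repository>
        <tag>${project.version}</tag>
        <buildArgs>
            <!-- Passed into the Dockerfile as the JAR_FILE build argument -->
            <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
        </buildArgs>
    </configuration>
</plugin>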

At runtime, Spring Boot uses an embedded web server such as Tomcat to deploy and boot up your services, so you can easily start a microservice with its embedded Tomcat runtime. Although the startup time (several seconds) and memory footprint are relatively high, we can still consider Spring Boot a container-native technology.

Ballerina: Docker and Kubernetes Integration

We introduced Ballerina in Chapter 7, “Integrating Microservices,” and it also offers container-native capabilities as part of its language extensions. Developers can generate deployment artifacts for their Ballerina code simply by annotating it with the annotations of the deployment method of their choice. For example, in the following code snippet, we annotate the code with Kubernetes annotations to generate the deployment artifacts.

import ballerina/http;
import ballerinax/kubernetes;

@kubernetes:Deployment {
    enableLiveness: true,
    singleYAML: true
}
@kubernetes:Ingress {
    hostname: "abc.com"
}
@kubernetes:Service {name: "hello"}
endpoint http:Listener helloEP {
    port: 9090
};

@http:ServiceConfig {
    basePath: "/helloWorld"
}
service<http:Service> helloWorld bind helloEP {
    sayHello(endpoint outboundEP, http:Request request) {
        http:Response response = new;
        response.setTextPayload("Hello, World from service helloWorld ! \n");
        _ = outboundEP->respond(response);
    }
}

During the Ballerina build phase, the compiler generates the Docker images and the corresponding Kubernetes deployment artifacts. The complete code example is in the ch08/sample08 directory. Ballerina's deployment choices are diverse: you can deploy it on conventional virtual machines (VMs) or bare-metal servers, on Docker, on Kubernetes, or on a service mesh such as Istio.
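For example, assuming the service above is saved in a file named hello_world.bal (the file name is only illustrative), a build along the following lines would emit the Docker image and the Kubernetes YAML files, which can then be applied to the cluster; the exact output directory can differ between Ballerina versions.

# Compile the service; the Kubernetes annotations trigger Docker image and YAML generation
ballerina build hello_world.bal

# Deploy the generated Kubernetes artifacts (output directory may vary by Ballerina version)
kubectl apply -f ./kubernetes/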

Continuous Integration, Delivery, and Deployment

One of the key rationales behind microservices architecture is shorter time to production and shorter feedback cycles. One cannot meet such goals without automation. A good microservices architecture would only look good on paper (or on a whiteboard) if not for the timely advancements in DevOps and automation tooling. Microservices succeeded as an idea because the tooling support was already there by the time they started to become mainstream, in the form of Docker, Ansible, Puppet, Chef, and many more. Tooling around automation can be divided into two broad categories: continuous integration tools and continuous deployment tools.

Note

This article is a great source of information, and it includes ideas from different personalities on continuous integration, continuous deployment, and continuous delivery: https://bit.ly/2wyBLNW .

Continuous Integration

Continuous integration enables software development teams to work collaboratively, without stepping on each other's toes, by automating builds and source code integration to maintain source code integrity. It also integrates with DevOps tools to create an automated code delivery pipeline. Continuous integration helps development teams avoid integration hell where the software works on individual developers’ machines, but it fails when all developers integrate their code. Forrester, one of the top analyst firms, in its latest reportFootnote 14 on continuous integration tools, identified the top ten tools in the domain: Atlassian Bamboo, AWS CodeBuild, CircleCI, CloudBees Jenkins, Codeship, GitLab CI, IBM UrbanCode Build, JetBrains TeamCity, Microsoft VSTS, and Travis CI.

Continuous Delivery

Continuous delivery tools bundle applications, infrastructure, middleware, and their supporting installation processes and dependencies into release packages that transition across the lifecycle. The objective is to keep the code in a deployable state all the time. The latest Forrester report on continuous delivery and release automation highlighted the 15 most significant vendors in the domain: Atlassian, CA Technologies, Chef Software, Clarive, CloudBees, Electric Cloud, Flexagon, Hewlett Packard Enterprise (HPE), IBM, Micro Focus, Microsoft, Puppet, Red Hat, VMware, and XebiaLabs.

However, if we look at the continuous delivery tools that primarily support Kubernetes, tools such as Weave CloudFootnote 15, SpinnakerFootnote 16 (by Netflix), CodefreshFootnote 17, HarnessFootnote 18, and GoCDFootnote 19 are quite popular.

Continuous Deployment

Continuous deployment is a process that takes the artifacts produced by the continuous delivery process and deploys them into a production setup, ideally every time a developer updates the code! Organizations that follow continuous deployment practices deploy code into production more than a hundred times a day. Blue-green deployments, canary releases, and A/B testing are the three main approaches or practices people follow in continuous deployment.

Blue-Green Deployment

Blue-green deployment is a proven strategy for introducing changes into a running system. It's been around for almost a decade now, and it's successfully used by many large enterprises. Under the blue-green strategy, we maintain two close-to-identical deployments; one is called blue and the other green. At any given time, either blue or green takes the live traffic; let's say it is blue. That leaves the green environment free for us to deploy and test our changes. If all works fine, we redirect the live traffic from blue to green at the load balancer. Then green becomes the environment that takes live traffic, while blue becomes available to deploy new changes. If there is an issue, we can quite easily switch the environments back, and therefore roll back the new changes.
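In a Kubernetes environment, for instance, one minimal way to sketch this switch is to run blue and green as two separate deployments and point the service's selector at whichever one should take live traffic; the service name and labels below are only placeholders.

# Route live traffic to the green deployment by updating the service selector
kubectl patch service my-service \
  -p '{"spec":{"selector":{"app":"my-app","track":"green"}}}'

# If something goes wrong, point the selector back at the blue deployment
kubectl patch service my-service \
  -p '{"spec":{"selector":{"app":"my-app","track":"blue"}}}'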

Canary Releases

The concept behind the canary release comes from a tactic used by coal miners. They used to bring canaries into the mines with them to monitor the level of carbon monoxide in the air. If a canary died, it meant the level of carbon monoxide in the air was too high, so they would leave the coal mine. With canary releases, a build is first made available to a selected subset of the audience (maybe 5% to 10% of the entire live traffic), and if it works well (or doesn't die), it is then made available to everyone. This minimizes the risk of introducing new changes, as the production rollout happens gradually, in smaller chunks.

A/B Testing

A/B testing is about evaluating the impact of a new change against the old system, or evaluating the impact of two different changes simultaneously. It is mostly used to track user behavior in response to changes introduced to a website. For example, you may have one version of your website that offers social login as a signup option, and another without it. Another example would be using different colors or placements for important messages on a website and seeing which one is clicked more. A/B testing measures competing functionalities of an application with live traffic. After some time, only the winner stays and the other competing features are rolled back.

Note

For readers who are interested in more details about continuous deployment, we recommend going through the “Canary Release” articleFootnote 20 by Danilo Sato and the “Blue Green Deployment” articleFootnote 21 by Martin Fowler.

Summary

In this chapter, we discussed how to run microservices in a production deployment with containers. Containers and microservices are a match made in heaven; if not for containers, microservices wouldn't be mainstream. We discussed deploying microservices with Docker and then looked at how container orchestration works with Kubernetes. We also looked at Metaparticle, one of the prominent cloud-native microservices frameworks, which helps us incorporate container-native capabilities into the application or microservice code that we develop, in the form of annotations. Finally, we discussed continuous integration, continuous delivery, and continuous deployment.

In the next chapter, we discuss one of the trending topics in microservices architecture and also a key ingredient in any microservices deployment: the Service Mesh.