
Docker: The Key to Consistent and Reliable App Deployment

Docker is quickly becoming one of the most popular tools for app deployment, and for good reason. This powerful platform allows developers to create, deploy, and run applications in a consistent and reliable environment, regardless of the underlying infrastructure. Whether you're working on a small, single-page app or a large, complex enterprise system, Docker has the tools and features you need to ensure your app runs smoothly and reliably, every time.

By chandrawijaya · Published 1/4/2023 · Updated 7/11/2024

Before reading on, it is recommended that you understand what containers and containerization actually are, as well as how they differ from VMs and virtualization, including how both compare to bare-metal computing. Head to the Containers vs VMs blog post to read about it.

Docker Logo

Source: https://www.docker.com/

Docker is a platform that allows developers and system administrators to create, deploy, and run distributed applications. It packages and isolates portable software and their dependencies using a process known as containerization. This makes it possible to deploy consistently and dependably across many environments, from local development to production.


Containers share the host's kernel and contain only the libraries and resources required for the app to execute, in contrast to virtual machines (VMs), which run a complete copy of an operating system on virtualized hardware. As a result, containers are lighter and more efficient than VMs, since they don't need as much memory, storage, or computing power.


Docker has grown in popularity because it makes it simple and reliable for developers and system administrators to bundle and deploy their programs. It also lets users build separate environments for testing and development, so numerous apps can run on the same server without interfering with each other.

🤜 It works on my machine ¯\_(ツ)_/¯

Docker solves the "it works on my machine" problem, a common issue faced by developers when deploying an application across different environments. The app may work perfectly on the developer's local machine, but when deployed elsewhere it may hit bugs or compatibility issues. This problem is particularly prevalent when different developers work on the same codebase, or when the app is deployed to multiple environments such as production, staging, or testing.


Docker addresses this problem by providing a consistent and isolated environment for the app to run in. When using Docker, developers can package their app and its dependencies in a container. Because the container contains everything the app needs to run, it eliminates the need for developers to worry about compatibility issues or missing dependencies. This means that if the app runs in the container, it will run in any environment.

Docker for Application Development

Source: https://youtu.be/3c-iBn73dDE?t=297

"Write once, run anywhere" (WORA) might sound familiar to you. Yes, it is Java's tagline, which helped make the language widely used even to this day. WORA is a principle that aims to make it easier to write code that can run on different platforms and environments without modification.


In this sense, Docker can be considered as a technology that enables WORA for applications. With Docker, developers can package their applications into a single container and run it on any machine that has the Docker Engine installed, regardless of the host's operating system.
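A tiny illustration of this, assuming Docker Engine is installed and using the public alpine image: the command below prints a Linux kernel string no matter whether the host is Windows, macOS, or Linux, because the container always runs against the Linux environment Docker provides.

sh

docker run --rm alpine uname -a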

Docker for Application Deployment

Source: https://youtu.be/3c-iBn73dDE?t=487

To install Docker, you will first need to go to the Docker website and download the Docker Community Edition (CE) for your specific operating system.

Once the installer has been downloaded, run it and follow the prompts to install Docker.

After the installation is complete, open a terminal and run the command docker -v to verify that Docker has been installed correctly and to check the version number. You should see something like this.

sh

╭─ powershell
╰─❯ docker -v
Docker version 20.10.16, build aa7e414
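
To further check that the engine can actually run a container end to end, a common smoke test is Docker's tiny hello-world image (it is pulled from Docker Hub on first run):

sh

docker run hello-world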

To run a container, we first need to pull an image from a public registry, or build our own image. Described more concretely, the most basic steps to start a container are shown below.

In terms of pulling images, the official public registry available is Docker Hub. By its own definition, it is the world's largest repository of container images, with content from community developers, open source projects, and independent software vendors (ISVs) building and distributing their code in containers. Users get free public repositories for storing and sharing images, or can choose a subscription plan for private repos.

Developers push their built images to the Hub to share them with others, and other developers pull those images from the Hub to run them as containers on their own machines. This makes sharing Docker images easier and faster.

However, sharing images on Docker Hub's free public repositories allows them to be consumed by anyone. This means that if your codebase is confidential, you don't want to publish your image there. There are several ways to mitigate this, such as running a private registry.
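
For reference, pulling from and pushing to Docker Hub look like this (a minimal sketch; myuser is a hypothetical Docker Hub username, and pushing requires a docker login first):

sh

# Pull a public image from Docker Hub
docker pull nginx:latest

# Tag our local image under a Docker Hub namespace, then push it
docker tag demo-project:1.0.0 myuser/demo-project:1.0.0
docker push myuser/demo-project:1.0.0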

We are going to learn how to create our own image, so instead of pulling from Docker Hub we will write our own Dockerfile. First, let's pick a very easy codebase to run. Clone this repository, which I created for the purpose of this post. It is a very simple Spring Boot application with a Hello World API endpoint. However, it uses Spring Boot version 3.0.2, the latest at the time of writing, and Java 17, which is pretty new at the moment.


Now, with these version requirements in mind, we will see the portability and isolation of Docker that I mentioned earlier. Even if we don't have Java 17 installed, we will still be able to run our image in an isolated environment.


Now, to create our image, we need to write a Dockerfile. A Dockerfile is like a blueprint of our image. It contains every detail needed to build the image so that it will run smoothly. All properties, commands that need to be run, configurations, platform variants, dependencies, you name it, are written here.


A Dockerfile is commonly named Dockerfile, though the name.extension style you might be more familiar with also works. The only difference is that with a custom name you need to pass an extra parameter when building the image later. To keep things simple, we will use the conventional name. Of the many ways to create a file, I'm going to run touch Dockerfile.

sh

╭─ powershell ~\Downloads\demo
╰─❯ touch Dockerfile


    Directory: ~\Downloads\demo


Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        1/26/2023   4:07 PM              0 Dockerfile

Now we need to write several things in our Dockerfile. First, the base image. Technically, an image is made up of layers, and the base image is the bottom one. The base image is usually a Linux image, since those are relatively small. This is also why, on Windows, you are required to enable the Windows Subsystem for Linux (WSL).

Personally, this also highlights one of Linux's advantages: its compatibility across multiple platforms such as Windows or macOS, which gives me one more reason to recommend that every developer learn it.
What is a Container?

Source: https://youtu.be/3c-iBn73dDE?t=664

I'm not going to explain how to write a Dockerfile in detail, since you can find better guides out there, but here are some references to learn from.

Dockerfile reference

Best practices for writing Dockerfiles

All Linux containers (not only Docker's) need a Linux base image, and the base images in use vary. The most common choice is the Alpine distribution, because of its small size: it contains only what is necessary.


Since our application is a Maven project, there are several ways to build an image of it. Two come to mind. First, we build the project locally with Maven, use the COPY command in the Dockerfile to copy the built jar file into a directory in the base image, and then run it with the java command.


Or the second method, which I'm going to use: RUN the Maven build inside the image, so the jar is produced there, and then run it. Pretty similar, but notice that the Maven compile step moves into the Dockerfile itself. To keep it short, here are the full Dockerfile contents.

FROM maven:3.8.7-openjdk-18-slim
ENV HOME=/app
RUN mkdir -p $HOME
WORKDIR $HOME

COPY pom.xml $HOME
COPY src ./src

RUN ["mvn", "package", "-DskipTests"]

ENTRYPOINT ["java", "-jar", "/app/target/demo-1.0.0.jar"]

Don't be overwhelmed 😵‍💫 let's break it down.

  • FROM maven:3.8.7-openjdk-18-slim : The base image we use is a Linux image with JDK 18 and Maven installed on it. This way, we don't need a separate RUN command to install Maven or Java. Thanks to the many variants available, we can pick one according to our needs.
  • ENV HOME=/app : We create an environment variable called HOME. We will reference it as $HOME.
  • RUN mkdir -p $HOME : We ask Docker to create the /app folder using the mkdir command, relative to wherever the current directory is. We can inspect this later; the default directory depends on the base image we are using.
  • WORKDIR $HOME : We set the base directory of the build process to $HOME, i.e. /app.
  • COPY pom.xml $HOME : Copy the pom.xml file to our /app directory.
  • COPY src ./src : Copy our src folder, which contains our main source code, to ./src. Because we set our WORKDIR in advance, the expected structure is /app/src/*.
  • RUN ["mvn","package", "-DskipTests"] : Run the Maven package command. It is the equivalent of what we usually run in the terminal when compiling a Maven project, such as mvn package or mvn install.
  • ENTRYPOINT ["java", "-jar", "/app/target/demo-1.0.0.jar"] : Like any other Maven project, the outcome of the build is a /target folder containing the built files, including the .jar. We can run it directly with the java command, the usual way to run a Java app. The ENTRYPOINT instruction tells the container where and how to run the image.

Here is the folder structure that will be generated in our Docker image.

./
└── app
    ├── src
    │   ├── main
    │   │   ├── java
    │   │   └── resources
    │   │       ├── static
    │   │       ├── templates
    │   │       └── application.properties
    │   └── test
    ├── target
    │   ├── ...
    │   └── demo-1.0.0.jar
    └── pom.xml

Now our Dockerfile is ready. Time to build our image! There are several essential Docker commands that we must know.

  • docker build : build an image
  • docker run : run an image as a container
  • docker stop / docker restart / docker start : stop / restart / start a container
  • docker exec : execute commands inside a container, like a shell
  • docker ps : list running containers. I usually add the -a parameter to show all my containers
  • docker logs : get the logs of a running container
  • docker images : manage your available images
  • docker container : manage your available containers

Please note that these commands are just the basics; I haven't even included the options or additional parameters of each command. You can learn more here for deeper study. A minimal sketch of how they fit together is shown below.
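
Here is that sketch, using the image and container names from this post (demo-project and docker-demo); everything else is the standard Docker CLI:

sh

# Build an image from the Dockerfile in the current directory
docker build -t demo-project:1.0.0 .

# Run it detached, publishing container port 8080 on host port 8080
docker run -d -p 8080:8080 --name docker-demo demo-project:1.0.0

# Inspect containers and logs
docker ps -a
docker logs docker-demo

# Stop and remove the container when done
docker stop docker-demo
docker rm docker-demo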


So to build our image, let's run:

sh

docker build -t demo-project:1.0.0 .

Notice that I add the -t parameter, which means tag. It is recommended to always tag our images because it makes versioning them easier. The second argument is the path, which I stated as . (a dot).

This tells Docker to build an image from the current folder. Because I named our Dockerfile Dockerfile, it is automatically picked up as the blueprint. However, as I mentioned earlier, if we had named it Demo.dockerfile, we would need to specify the file with the -f parameter. For example:

sh

docker build -f Demo.dockerfile -t demo-project:1.0.0 .

After hitting Enter, the build process will begin; don't worry if your terminal goes a bit crazy with all the output written there. With the Dockerfile above, I made sure the build will succeed. However, if you write your own Dockerfile, it can take some trial and error until you get a working image.


And after a while, we will have our Docker image. You may be expecting a file to be generated by this process, and if you start looking for it in your folder, it's not there. The image we built is stored in the local Docker repository on our machine. You can find it using the docker images command:

sh

╭─ powershell ~
╰─❯ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED         SIZE
demo-project                       1.0.0    ffe99a54f2fd   1 hour ago      404MB
docker-sample-app                  latest   ffe99a54f2fd   7 months ago    404MB
yugabytedb/yugabyte                latest   417f5a000cfe   9 months ago    1.8GB
confluentinc/cp-kafkacat           latest   4fa7fa9bfbac   9 months ago    713MB
debezium/connect                   1.4      c856cfe4edbf   21 months ago   745MB
confluentinc/cp-schema-registry    5.5.3    da954c8c8fbb   2 years ago     1.28GB
confluentinc/cp-enterprise-kafka   5.5.3    378f9494767c   2 years ago     757MB
confluentinc/cp-zookeeper          5.5.3    76a5bccdb7a7   2 years ago     667MB
edenhill/kafkacat                  1.5.0    d3dc4f492999   3 years ago     22.4MB

You may only have one image if this is your first time building one; here I have several images I used before.

Now that we have our image, we can run it as a container. The command we use is docker run. While there are many parameters available for this command, I usually add:

sh

docker run -d -p <host-port>:<container-port> --name <container-name> <image>

The -d parameter is for detached mode, and -p publishes a container port on a port of our machine; then we add the image name (include the tag if you have several images with the same name).

To run our image, I will expose it on port 8080. Since we know the default Tomcat port in our Spring Boot application is also 8080, we can write it as 8080:8080.

sh

docker run -d -p 8080:8080 --name docker-demo demo-project:1.0.0

In detached mode, the terminal only prints a container ID. To get more information about our container, run the docker ps command and our container will be listed.

Our container is now live and running. It is a Spring Boot application with a REST API endpoint at /hello. Let's try to call it.

sh

curl http://localhost:8080/hello

Nice! Our endpoint works. But there is nothing special about this so far; we haven't seen how Docker's portability and isolation benefit us. Say no more!

The easiest way to transport our image is through Docker Hub. But as I mentioned earlier, we are not going to upload our image there, even though it's free. We are going the "conventional" way instead. First, we export our image to a .tar file using the docker save command and send it to our teammates. The .tar file then needs to be loaded into their local Docker repository using the docker load command. Finally, they run the loaded image as before.

Export Docker Images to Local File


To export images, let's run:

sh

docker save -o ~/demo-project.tar demo-project:1.0.0

The -o parameter means output; after it we give the path and filename we want for our exported image, and lastly the image we want to export. And voila! We have our image as a .tar file!
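
As a side note, these .tar exports can get large. A common trick, sketched below, is to compress on the way out; docker load understands gzipped archives directly:

sh

docker save demo-project:1.0.0 | gzip > demo-project.tar.gz
docker load -i demo-project.tar.gz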


Now we can transfer this file to our teammates to let them run it. But before they are able to run it, they need to load it.

sh

docker load -i ./demo-project.tar

This command loads the tar file into their local Docker repository, and from here you know how to run it, don't you?
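
For completeness, the same run command from earlier works on their machine once the image is loaded:

sh

docker run -d -p 8080:8080 --name docker-demo demo-project:1.0.0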


After running the image, each of our teammates can call the /hello API on their own machine, and I guarantee it will work. This shows the portability and isolation of Docker. Portability refers to the ability of a Docker container to run consistently across different environments, regardless of the underlying infrastructure.


In this project we see that our application is portable across our teammates' different machines, and I specifically want to mention the Java version we use. Regardless of which Java version they have installed on their own machines, be it 8, 11, or even 19, our application still runs on Java 17 (see pom.xml).

And the isolation shows in that the app uses Java 17 inside its container: even if the host has Java 8 or 19 installed, there will be no conflict with the Java versions installed on it.

Imagine our app performs an important task and runs for a long time, but when a problem occurs we have no facility to see what's going on, or at least to check the logs we wrote in the code.


Check out the code in the DockerDemoApplication.java class, in the helloWorldController method, which serves as our sample controller. As you can see, I added a log statement as an example of the monitoring aspect. But how can we make use of this?


Now is the time to use the docker logs command. To use it, simply add the container name and hit Enter.

sh

docker logs -f docker-demo

By running this command you get the full output of the container, including your app's logs. The -f parameter I added is for follow mode. However, it is recommended to use an external logging system such as the ELK stack. This is because our container is volatile and non-persistent, in the sense that its file system is ephemeral: changes made to the container's file system do not outlive the container itself.


When a container is run, it starts with a clean state, with the file system initialized from the image specified in the docker run command. Any changes made to the container's file system while it is running, such as creating or modifying files, will be lost once the container is removed; a fresh container from the same image starts clean again.
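
A quick way to see this for yourself, in a small sketch using the public alpine image (Docker pulls it automatically if it is missing):

sh

# Write a file inside a container, then remove the container
docker run --name scratchpad alpine sh -c "echo hello > /tmp/note.txt"
docker rm scratchpad

# A fresh container from the same image starts clean, so this cat fails
docker run --rm alpine cat /tmp/note.txt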


This volatility is actually one of the benefits of containers, as it allows for easy and consistent deployment. Containers can be started and stopped quickly and easily, without the need to worry about preserving the state of the file system. In addition, since containers are lightweight and have a small footprint, they can be easily scaled up and down as needed, without the need for additional resources.

However, if you need to persist data or the state of your application, you have options such as:

  1. Using a volume to mount a host directory into a container (see the sketch after this list)
  2. Using a data container to store data separately from the app container
  3. Using an external storage service such as Amazon S3, Google Cloud Storage, etc.
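
Here is what option 1 can look like for our demo. This is a hedged sketch: the /app/data path is hypothetical, so adjust it to wherever your app actually writes its data:

sh

# Mount the host folder ~/demo-data into the container at /app/data;
# files written there survive container removal
docker run -d -p 8080:8080 \
  --name docker-demo \
  -v ~/demo-data:/app/data \
  demo-project:1.0.0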

A Docker container does make use of the host operating system's kernel, but it does not use the host's libraries, system tools, or settings. Instead, a container runs in its own isolated environment created by the container engine, which uses kernel features called namespaces to carve out that virtualized environment.

These namespaces provide the container with its own file system, network stack, and process tree, all isolated from the host system. This means the container can have its own libraries, system tools, and settings, different from the host's.
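
You can peek into that isolated environment yourself, assuming the docker-demo container from earlier is still running (its slim base image should be Debian-based, so /etc/os-release exists):

sh

# Print the OS the container sees, which need not match the host's
docker exec docker-demo cat /etc/os-release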


The base image defined in the Dockerfile is used to provide the container with its own OS environment. A base image is a pre-built image that contains the necessary libraries, system tools, and settings to run a specific type of application. For example, there are base images for different versions of Linux, such as Ubuntu, Debian, or Alpine, and for different versions of Windows.

Docker images are built to run on a specific OS architecture, such as Linux on x86_64. If the host machine's architecture is different from the image's architecture, the image will not run properly.

When a container is created, the container engine uses the base image to create the container's file system, network stack, and process tree. The base image provides the container with a consistent and isolated environment that is needed to run the application.


For example, if you have a Linux x86_64 image and try to run it on a machine with an ARM processor, the image will not run natively, because the host's architecture is different from the image's architecture.
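
You can check which platform an image targets with docker image inspect; a quick sketch using our demo image:

sh

# Prints something like "linux/amd64"
docker image inspect demo-project:1.0.0 --format '{{.Os}}/{{.Architecture}}'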

Additionally, Windows and macOS use a different kernel from Linux, so they need an additional virtualization layer, such as WSL, to run Linux containers. This means some extra configuration may be required when running a Linux image on Windows or Mac systems. Thankfully, Linux adapts very well to both Windows and Mac, so I dare say we can now run Docker images almost everywhere 😁

As you become more familiar with Docker, you may want to explore other features and tools such as Docker Volumes, Docker Networks, and Docker Swarm. These tools can help to manage data and networks in the containers and can provide additional functionality for deploying and scaling our applications.

Lastly, it is important to mention that Docker is just one tool in the larger ecosystem of containerization and container orchestration. There are other containerization and orchestration tools, such as Kubernetes (k8s), Mesos, and OpenShift (OCP), that you may want to explore as your needs grow and evolve.


In my opinion, Docker is the most understandable and comprehensive introduction before diving into the cloud computing paradigm.

Docker Tutorial for Beginners


More articles

If you enjoyed this article, why not check my other posts?