Docker: The Key to Consistent and Reliable App Deployment


Published
January 4, 2023
Author
Chandra Wijaya
Tags
Docker

Prerequisite

It is recommended that you understand what containers and containerization actually are, as well as how they differ from VMs and virtualization, including what came before them in bare-metal computing. Head to my Containers vs VMs blog post to read about it.

What is Docker?

Docker is a platform that allows developers and system administrators to create, deploy, and run distributed applications. It packages and isolates portable software and their dependencies using a process known as containerization. This makes it possible to deploy consistently and dependably across many environments, from local development to production.
Containers share the host's kernel and only contain the libraries and resources required for the app to execute, in contrast to virtual machines (VMs), which build a complete duplicate of the host operating system and hardware. As a result, containers are lighter and more effective than virtual machines (VMs) since they don't need as much memory, storage, or computing power.
Because it makes it simple and reliable for developers and system administrators to bundle and deploy their programs, Docker has grown in popularity. Additionally, it enables users to run numerous apps on the same server without interfering by allowing them to build separate environments for testing and development.

Portability

🤜 It works on my machine ¯\_(ツ)_/¯
Docker solves the "it works on my machine" problem, a common issue faced by developers when deploying an application across different environments. The app may work perfectly on the developer's local machine, but when deployed to a different environment, it may encounter bugs or compatibility issues. This problem is particularly prevalent when different developers work on the same codebase, or when the app is deployed to multiple environments such as production, staging, or testing.
Docker addresses this problem by providing a consistent and isolated environment for the app to run in. When using Docker, developers can package their app and its dependencies in a container. Because the container contains everything the app needs to run, it eliminates the need for developers to worry about compatibility issues or missing dependencies. This means that if the app runs in the container, it will run in any environment.
Docker for Application Development
WORA (Write Once Run Anywhere)
This term might sound familiar to you. Yes, it is Java's tagline which made it widely used even until today. WORA is a principle that aims to make it easier to write code that can be run on different platforms and environments without modification.
In this sense, Docker can be considered as a technology that enables WORA for applications. With Docker, developers can package their applications into a single container and run it on any machine that has the Docker Engine installed, regardless of the host's operating system.
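As a minimal illustration of this (the image tag is just an example; any JDK image from a public registry would do), the same command reports the same Java version on any host with the Docker Engine installed, whatever Java the host itself has:

```shell
# Run a throwaway container from a public JDK image and ask for its Java version.
# --rm removes the container when it exits. The result is identical on Windows,
# macOS, or Linux hosts, because the JDK ships inside the image, not on the host.
docker run --rm eclipse-temurin:17-jre java -version
```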
Docker for Application Deployment

Getting Started

Installing Docker
To install Docker, you will first need to go to the Docker website and download the Docker Community Edition (CE) for your specific operating system.
Once the installer has been downloaded, run it and follow the prompts to install Docker.
After the installation is complete, open a terminal and run the command docker -v to verify that Docker has been installed correctly and to check the version number. You should see your version like this.
╭─ powershell
╰─❯ docker -v
Docker version 20.10.16, build aa7e414
Running a Container
To run a container, we first need to pull an image from a public registry, or build our own image. In a more descriptive way, the most basic steps to start a container are shown below.
Basic Docker Steps
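Those basic steps look like this in a terminal session (a sketch; the nginx image and port numbers are just examples):

```shell
# 1. Pull an image from a registry (Docker Hub by default)
docker pull nginx:alpine

# 2. Run it as a container, mapping host port 8080 to container port 80
docker run -d -p 8080:80 --name web nginx:alpine

# 3. Verify the container is up and running
docker ps
```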
Pulling Images from Hub
In terms of pulling images, the official public registry is Docker Hub. By its own definition, it is the world's largest repository of container images, with content from community developers, open source projects, and independent software vendors (ISVs) building and distributing their code in containers. Users get free public repositories for storing and sharing images, or can choose a subscription plan for private repos.
Docker Hub
To the Hub, developers push their built images to share them with others. And from the Hub, other developers pull those images to run them as containers on their own machines. This makes sharing Docker images easier and faster.
However, sharing an image on Docker Hub makes it publicly consumable. This means that if your code base is confidential, you don't want to publish your image there. There are several ways to mitigate this, such as running a private registry.
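For completeness, here is a rough sketch of the private-registry approach, using Docker's own registry:2 image (assumptions: port 5000 is free locally, and the image names follow the examples in this post):

```shell
# Start a private registry on localhost:5000
docker run -d -p 5000:5000 --name registry registry:2

# Re-tag a local image so its name points at the private registry...
docker tag demo-project:1.0.0 localhost:5000/demo-project:1.0.0

# ...and push it there instead of Docker Hub
docker push localhost:5000/demo-project:1.0.0
```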
Creating Our Own Image
We are going to learn how to create our own image, so instead of pulling from Docker Hub we will write our own Dockerfile. First, let's pick a very simple code base to run. Clone this repository, which I created for the purpose of this post. It is a very simple Spring Boot application with a Hello World API endpoint. Note that it uses Spring Boot 3.0.2, the latest version at the time this post was written, and Java 17, which is also pretty new at this moment.
Now with these versions in mind, we will see the portability and isolation I mentioned earlier. Even if we don't have Java 17 installed, we will still be able to run our image in an isolated environment.
Now to create our image, we're gonna need a Dockerfile. A Dockerfile is similar to a blueprint of our image. It contains every detail we need to build the image so that it will run smoothly: all properties, commands that need to be run, configurations, platform variants, dependencies, you name it.
A Dockerfile is commonly named just Dockerfile, while you might be more familiar with the name.extension style; both ways work. The only difference is that the latter requires an extra parameter when building the image later. To keep things simple, we will use the first way. Of the many ways to create a file, I'm gonna run touch Dockerfile.
╭─ powershell ~\Downloads\demo
╰─❯ touch Dockerfile

    Directory: ~\Downloads\demo

Mode     LastWriteTime        Length Name
----     -------------        ------ ----
-a----   1/26/2023 4:07 PM         0 Dockerfile
Now we need to write several things in our Dockerfile. First, the base image. Technically, a container consists of image layers, and the base image is usually a Linux base image since it is relatively small. This is why on Windows you are required to enable the Windows Subsystem for Linux (WSL).
Personally, this also showcases Linux's advantage of working across multiple platforms such as Windows or macOS, which gives me one more reason to recommend that every developer learn it.
What is a Container?
I'm not gonna explain how to write a Dockerfile in detail, as you can find better guides out there, but here are some references to learn from.
Linux containers (not only Docker's) need a Linux base image, and there are many base images to choose from. The most common is the Alpine distribution because of its small size; it contains only what is necessary.
Since our application is a Maven project, there are several ways to build an image of it. Two come to my mind. First, we build the project with Maven on our machine, then use the COPY command in the Dockerfile to copy the built jar file into a directory in the base image, and run it there with the java command.
The second method, which I'm gonna use, is to COPY the sources into the base image, RUN the Maven command to build the project there, and then run the built jar. Pretty similar, but notice that we move the Maven compile process into the Dockerfile itself. To keep it short, here are the full Dockerfile contents.
FROM maven:3.8.7-openjdk-18-slim
ENV HOME=/app
RUN mkdir -p $HOME
WORKDIR $HOME
COPY pom.xml $HOME
COPY src ./src
RUN ["mvn", "package", "-DskipTests"]
ENTRYPOINT ["java", "-jar", "/app/target/demo-1.0.0.jar"]
Don't be overwhelmed 😵‍💫 let's break it down.
  • FROM maven:3.8.7-openjdk-18-slim : The base image we use is a Linux image with JDK 18 and Maven already installed. This way, we don't need another RUN command to install Maven or Java. Thanks to the many variants available, we can pick one according to our needs.
  • ENV HOME=/app : We create an environment variable called HOME. We will reference it using $HOME.
  • RUN mkdir -p $HOME : We ask Docker to create our /app folder using the mkdir command, relative to wherever the current position is. We can inspect this later, but the default directory depends on the base image we are using.
  • WORKDIR $HOME : means we set our building process base directory to $HOME or /app.
  • COPY pom.xml $HOME : Copy the pom.xml file to our /app directory.
  • COPY src ./src : Copy our src folder, which contains our main source code, to ./src. Because we set WORKDIR in advance, the expected outcome structure is /app/src/*.
  • RUN ["mvn","package", "-DskipTests"] : Run the Maven package command. It is equivalent to what we usually run in a terminal when compiling a Maven project, such as mvn package or mvn install.
  • ENTRYPOINT ["java", "-jar", "/app/target/demo-1.0.0.jar"] : Like any other Maven project, the outcome of the build process is a /target folder containing our built files, including the .jar. We can run it directly with the java command, just like we usually run a Java app. The ENTRYPOINT instruction tells Docker what command the container should run when it starts.
Here is the folder structure that will be generated in our Docker image.
./
└── app
    ├── src
    │   ├── main
    │   │   ├── java
    │   │   └── resources
    │   │       ├── static
    │   │       ├── templates
    │   │       └── application.properties
    │   └── test
    ├── target
    │   ├── ...
    │   └── demo-1.0.0.jar
    └── pom.xml
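As a side note, a common refinement of the Dockerfile above (not used in this post, just a sketch under the same assumptions) is a multi-stage build: build with the Maven image, then copy only the jar into a slim runtime image, so the final image doesn't carry Maven and the sources. The JRE image tag here is an example:

```dockerfile
# Stage 1: build with Maven (same base image as above)
FROM maven:3.8.7-openjdk-18-slim AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# Stage 2: runtime only; keeps the final image much smaller
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/demo-1.0.0.jar /app/demo.jar
ENTRYPOINT ["java", "-jar", "/app/demo.jar"]
```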
Now our Dockerfile is ready. Time to build our image! There are several essential Docker commands that we must know.
  • docker build : to build an image
  • docker run : to run an image as a container
  • docker stop / docker restart / docker start : to stop / restart / start a container
  • docker exec : to execute commands inside a running container, such as a shell
  • docker ps : to list running containers. I usually add the -a parameter to show all my containers
  • docker logs : to get the logs of a running container
  • docker images : to manage your available images
  • docker container : to manage your available containers
Please note that these commands are just the basics; I haven't even included the options or additional parameters of each command. You can learn it here for deeper learning.
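To make the less-featured ones concrete, here is a quick sketch of the commands we won't use step by step below (assuming a container named docker-demo, as created later in this post):

```shell
docker exec -it docker-demo sh   # open an interactive shell inside the container
docker ps -a                     # list all containers, including stopped ones
docker stop docker-demo          # stop the container
docker start docker-demo         # start it again with the same configuration
```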
So to build our image, let's run:
docker build -t demo-project:1.0.0 .
Notice that I add the -t parameter; it means tag. It is recommended to always tag our images because it makes them easier to version. The second argument is the path, which I stated as . .
This means: build a Docker image from the current folder. Because I named our Dockerfile Dockerfile, it will automatically be picked as the blueprint. However, as I mentioned earlier, if we had named our Dockerfile Demo.dockerfile, we would need to specify it with the -f parameter. For example:
docker build -f Demo.dockerfile -t demo-project:1.0.0 .
After hitting Enter, the build process will begin; don't worry if your terminal goes a bit crazy with all those words. With the Dockerfile above, I made sure that the build process will be successful. However, if you create your own Dockerfile, it can take some trial and error until you get a working image.
And after a while, we will get our Docker image. You may be expecting a file generated by this process and start to look for it in your folder, but it's not there. The image we built is stored in the local Docker repository on our machine. You can find it using the docker images command:
╭─ powershell ~\
╰─❯ docker images
REPOSITORY                         TAG      IMAGE ID       CREATED         SIZE
demo-project                       1.0.0    ffe99a54f2fd   1 hour ago      404MB
docker-sample-app                  latest   ffe99a54f2fd   7 months ago    404MB
yugabytedb/yugabyte                latest   417f5a000cfe   9 months ago    1.8GB
confluentinc/cp-kafkacat           latest   4fa7fa9bfbac   9 months ago    713MB
debezium/connect                   1.4      c856cfe4edbf   21 months ago   745MB
confluentinc/cp-schema-registry    5.5.3    da954c8c8fbb   2 years ago     1.28GB
confluentinc/cp-enterprise-kafka   5.5.3    378f9494767c   2 years ago     757MB
confluentinc/cp-zookeeper          5.5.3    76a5bccdb7a7   2 years ago     667MB
edenhill/kafkacat                  1.5.0    d3dc4f492999   3 years ago     22.4MB
You may only have one image if this is your first time building one; here I have several images I used before.
Run An Image
Now that we have our image, we can run it as a container. The command we use is docker run. While there are many parameters available for this command, I usually add:
docker run -d -p <port:port> --name <container-name> <image>
The -d parameter is for detached mode, and -p maps a container port to a port on our machine; then we add the image name (include the tag if you have several images with the same name).
To run our image, I will expose port 8080, and since we know the default Tomcat port our Spring Boot application runs on is also 8080, we can write it as 8080:8080.
docker run -d -p 8080:8080 --name docker-demo demo-project:1.0.0
In detached mode, the terminal will only print a container ID. To get more information about our container, run the docker ps command and our container will be listed.
Testing It Out
Our container is now live and running. It is a Spring Boot application with a REST API endpoint at /hello. Let's try to call it.
curl http://localhost:8080/hello
Nice! Our endpoint works. But there is nothing special about this so far; we haven't seen how Docker's portability and isolation benefit us. Say no more!
Delivering Images
The easiest way to transport our image is through Docker Hub. But as I mentioned earlier, we are not going to upload our image there even though it's free. We will go the "conventional" way instead. First, we export our image to a .tar file using the docker save command and send it to our teammates. The .tar file then needs to be loaded into their local Docker repository using the docker load command. And finally, they run the loaded image as before.
Export Docker Images to Local File
To export images, let's run:
docker save -o ~/demo-project.tar demo-project:1.0.0
The -o parameter means output; after it we add the path and filename we want for our exported image, and lastly the image we want to export. And voila! We have our image in a .tar file!
Now we can transfer this image to our teammates to let them run it. But before they are able to run it, they need to load it.
docker load -i ./demo-project.tar
This command will load the tar file into their local Docker repository, and from here you know how to run it, don't you?
After running the image, each of our teammates can call the /hello API on their own machine, and I guarantee it will work. This shows us the portability and isolation of Docker. Portability refers to the ability of a Docker container to run consistently across different environments, regardless of the underlying infrastructure.
In this project we see that our application is portable across our teammates' different machines, and I want to specifically mention the Java version we use. Regardless of what version of Java they have installed on their own machines, be it 8, 11, or even 19, our application still runs on Java 17 (see pom.xml).
And the isolation shows in that the app uses Java 17 inside its container, so even if the host has Java 8 or 19 installed, it will not conflict with the other Java versions on the machine.
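A quick way to see this isolation for yourself (a sketch; it assumes the docker-demo container from above is still running) is to compare the Java version inside the container with the one on the host:

```shell
# Java inside the container: always 17, baked into the image
docker exec docker-demo java -version

# Java on the host (if any): whatever happens to be installed there, e.g. 8 or 19
java -version
```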
Monitoring
Imagine our app performing an important task in a long-running deployment, but when a problem occurs we have no facility to find out what's going on, or at least to check the logs we wrote in the code.
Check out the code in the DockerDemoApplication.java class, in the helloWorldController method which serves as our sample controller. As you can see, I added a log statement as an example of the monitoring aspect. But how can we make use of it?
Now is the time to use the docker logs command. To use it, simply add the container name and hit Enter.
docker logs -f docker-demo
By running this command you get the full output of the container, including your app logs. The -f parameter I added is for follow mode. However, it is recommended to use an external logging system such as the ELK stack. This is because our container is considerably volatile and non-persistent, in the sense that its file system is ephemeral: any changes made to the container's file system are not persisted after the container is stopped or removed.
When a container is run, it starts with a clean state, with the file system initialized from the image specified in the docker run command. Any changes made to the container's file system while it is running, such as creating or modifying files, will be lost when the container is stopped or removed.
This volatility is actually one of the benefits of containers, as it allows for easy and consistent deployment. Containers can be started and stopped quickly and easily, without the need to worry about preserving the state of the file system. In addition, since containers are lightweight and have a small footprint, they can be easily scaled up and down as needed, without the need for additional resources.
However, if you need to persist data or the state of your application, you have options such as:
  • Using a volume to mount a host directory into a container
  • Using a data container to store data separately from the container
  • Using an external storage service such as Amazon S3, Google Cloud Storage, etc.
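The first option is the most common; here is a minimal sketch (volume names and paths are examples, and /app/data is a hypothetical directory our demo app might write to):

```shell
# Named volume managed by Docker; survives container removal
docker volume create demo-data
docker run -d -p 8080:8080 --name docker-demo \
  -v demo-data:/app/data demo-project:1.0.0

# Alternatively, bind-mount a host directory into the container instead
docker run -d -p 8080:8080 --name docker-demo \
  -v "$(pwd)/data:/app/data" demo-project:1.0.0
```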

✋Docker utilizes Host's OS not its own

A Docker container does make use of the host's operating system (OS) kernel, but it does not use the host's libraries, system tools, or settings. Instead, a container runs in its own isolated environment, which is created by the container engine. The container engine uses the host's kernel to create a virtualized environment for the container, which is called a namespace.
The namespace provides the container with its own file system, network stack, and process tree, which are isolated from the host system. This means that the container can have its own libraries, system tools, and settings, which are different from the host's.
The base image defined in the Dockerfile is used to provide the container with its own OS environment. A base image is a pre-built image that contains the necessary libraries, system tools, and settings to run a specific type of application. For example, there are base images for different versions of Linux, such as Ubuntu, Debian, or Alpine, and for different versions of Windows.
Docker images are built to run on a specific OS architecture, such as Linux on x86_64. If the host machine's architecture is different from the image's architecture, the image will not run properly.
When a container is created, the container engine uses the base image to create the container's file system, network stack, and process tree. The base image provides the container with a consistent and isolated environment that is needed to run the application.
For example, if you have a Linux x86_64 image and try to run it directly on a Windows or Mac machine, it will not work, because the host's kernel and architecture are different from the image's.
Additionally, Windows and macOS use a different kernel from Linux and need an extra virtualization layer to run Linux containers: WSL on Windows, and a lightweight Linux VM on macOS. This means some additional configuration may be required when running a Linux image on those systems. Thankfully, these layers have become seamless enough that I dare say we can now run Docker images everywhere 😁
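If you do need an image for a different architecture, say building on an Apple Silicon Mac for an x86_64 server, docker buildx can cross-build. A sketch, assuming the buildx plugin that ships with current Docker Desktop:

```shell
# Inspect which OS/architecture a local image was built for
docker image inspect demo-project:1.0.0 --format '{{.Os}}/{{.Architecture}}'

# Build explicitly for linux/amd64, whatever the host architecture is
docker buildx build --platform linux/amd64 -t demo-project:1.0.0 .
```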

What's Next?

As you become more familiar with Docker, you may want to explore other features and tools such as Docker Volumes, Docker Networks, and Docker Swarm. These tools can help to manage data and networks in the containers and can provide additional functionality for deploying and scaling our applications.
At last, it is important to mention that Docker is just one tool in the larger ecosystem of containerization and container orchestration. There are other containerization and orchestration tools such as Kubernetes (k8s), Mesos, and OpenShift (OCP) that you may want to explore as your needs grow and evolve.
In my opinion, Docker is the most approachable and comprehensive introduction before diving into the cloud computing paradigm.

References

Docker Tutorial for Beginners [FULL COURSE in 3 Hours] — TechWorld with Nana, YouTube