Docker for Java Developers: Introduction
This article is part of our Academy Course titled Docker Tutorial for Java Developers.
In this course, we provide a series of tutorials so that you can develop your own Docker-based applications. We cover a wide range of topics, from the Docker command line to development, testing, deployment, and continuous integration. With our straightforward tutorials, you will be able to get your own projects up and running in minimum time. Check it out here!
1. Introduction
If you have not heard about Docker, then you have probably spent the last few years on some other planet of the Solar System. Docker stormed into our industry and in no time dramatically changed many well-established software development and operational practices and patterns. These days pretty much every organization is using Docker (or an equivalent), the brave ones even in production, and its adoption is growing at a fantastic pace.
In this tutorial we are going to talk about how Docker can help us, Java developers, in accomplishing our day-to-day tasks. The tutorial consists of several parts, where we are going to touch upon different aspects of Docker and its applicability to Java application development.
We will start off by learning the basics:
- Why we should invest our time in learning Docker
- Getting to know the Docker command-line tooling
- Using the REST façade to talk to Docker
Then we will move on to topics related specifically to Docker in the context of Java application development:
- Building
- Developing
- Testing
- Deploying
- Continuous Integration / Delivery
The material we will be going through assumes that you have some basic familiarity with Docker and have at least version 17.06.1-ce already installed on the machine (it does not really matter if you are on Linux, Windows or Mac per se).
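If you are not sure which version you are running, a quick way to check from the command line is shown below; it should print something like 17.06.1-ce:
docker version --format '{{.Server.Version}}'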
2. Linux Containers: The Big Bang
The story which made Docker and friends possible begins back in 2006, when a couple of awesome engineers at Google started work on a feature under the name “process containers”. It was later rebranded to “control groups” (or cgroups, as we know them today) and merged into the Linux kernel starting with version 2.6.24, released in January 2008.
Essentially, cgroups is a Linux kernel feature that limits, accounts for, prioritizes and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of processes. Most importantly, the Linux kernel supports all of that without starting any virtual machines or hypervisors. Along with namespaces, another very powerful feature of the Linux kernel, cgroups serve as a fundamental building block for containers: operating system-level virtualization.
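To make this more tangible, here is a minimal sketch of driving cgroups by hand, assuming a cgroup v1 hierarchy mounted under /sys/fs/cgroup (the layout used by kernels of that generation); the group name demo is arbitrary:
# create a new memory cgroup and cap it at 64 MB
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((64 * 1024 * 1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# move the current shell into the group; every child process inherits the limit
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs
Container engines like Docker essentially automate this kind of bookkeeping for you.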
Container-based virtualization is exceptionally lightweight (compared to traditional virtual machines): containers impose little to no overhead, share the operating system kernel, and do not require special hardware support to perform efficiently. In other words, containers have become a new model for wrapping applications so they can run in isolation on a shared operating system. Although not without limitations, containers are now the mainstream in the virtualization space.
To be fair, not all Linux/Unix distributions use the same mechanisms for operating system-level virtualization. To mention a couple of examples, FreeBSD has jails for such purposes while Solaris has the concept of zones.
So, how do you get started with containers? Well, you may have heard abbreviations like LXC or LXD, which are essentially the entry points for container management on most Linux/Unix distributions. The thing is, those are somewhat low-level and not easy to start with. Luckily, we have Docker and rkt, the application-centric container management engines, which right from their inception became the de facto choices for application developers across the globe.
3. Docker: Containers for the Masses
So what is Docker essentially? It started off as a powerful and easy-to-use container engine, but these days it would be fair to call it a full-fledged container management platform. It is written in Go and takes advantage of the Linux kernel features (mostly namespaces and cgroups) to do the job. The community edition is downloadable free of charge, whereas the enterprise edition is available through subscription offerings. To set the stage, throughout this tutorial we are going to use the features of the community edition only.
3.1. Architecture
From the architectural perspective, Docker consists of three main parts. At the heart of Docker sits the daemon process, dockerd. In turn, dockerd relies on another daemon, containerd, as the abstraction layer to interface with the Linux kernel namespaces and cgroups. The last piece of the puzzle is a set of command line tools (such as docker and docker-compose), known as the Docker CLI, which are able to talk to the dockerd daemon through the Docker Engine API it exposes.
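To see that relationship in action, you can bypass the CLI entirely and talk to dockerd over its Engine API; a minimal sketch, assuming the default Unix socket at /var/run/docker.sock and API version v1.30 (the one shipped with Docker 17.06):
# query the Engine API directly ...
curl --unix-socket /var/run/docker.sock http://localhost/v1.30/version
# ... which is roughly what the Docker CLI does under the hood
docker version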
Each of the Docker components mentioned above deserves its own tutorial, so many are the interesting features and capabilities they provide; our focus, though, will be primarily centered on the Docker Engine API and the Docker CLI family (docker and docker-compose).
One of the strongest arguments in favor of choosing Docker is that it runs natively on the majority of Linux distributions, but it does not stop there: the macOS and Windows operating systems are also supported pretty well, with a few caveats to be aware of.
In order to understand how Docker works, we have to unveil its internal model a bit. At any time, if you feel the subject is not covered in enough detail, please do not hesitate to consult the official documentation.
3.2. Images
In Docker, everything you do revolves around managing specific objects. Images and containers are arguably the most important ones, but there are others, like volumes, networks and plugins, to name a few. We are going to see all of them in action in different sections of the tutorial, starting with images and containers right away.
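Conveniently, each of these object types has its own family of management commands in the Docker CLI, for example:
docker image ls
docker container ls --all
docker volume ls
docker network ls
docker plugin ls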
An image can be treated as a set of instructions on how to create a container. In Docker, one image can be based on (or inherit from) another image, adding its own instructions on top of the base ones. Each image consists of multiple layers, which are effectively immutable. Under the hood these layers are backed by dedicated file systems (by default UnionFS, but others can be plugged in as well), making them very lightweight and fast.
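You can peek at the layers of any locally available image; for example, assuming the alpine:3.6 image has already been pulled, the following prints its layer digests:
docker image inspect --format '{{json .RootFS.Layers}}' alpine:3.6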
So … how could you create such images for your own needs? It is actually pretty simple: to build your own image in Docker, you create a Dockerfile, which is just a text document that defines the set of steps (or instructions) required to assemble the image (and run it later). Along the way, you may decide to create completely customized images yourself or, in most cases, base them on images created by others and published in a registry. To give you a sneak peek at what a Dockerfile may look like, here is a quick example:
FROM alpine:3.6
CMD ["uname", "-a"]
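Assuming this Dockerfile is saved in the current directory, you could build and run it like so (the my-uname tag is an arbitrary name chosen for this example):
docker build -t my-uname .
docker run --rm my-uname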
Each instruction in a Dockerfile results in a new layer being added to the image.
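To see those layers, along with the instruction that produced each of them, you can use docker history (reusing the hypothetical my-uname image built above):
docker history my-uname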