#wrapITup is a series on CloudwithCaleb where we explore what containers are, why they are all the rage in IT, and how the cloud is the perfect place for them to live.
In the last post, we took a general look at how we get from a Dockerfile to a working application in a Docker container. We saw that in order to have a running container, we first needed to build a Docker image.
Today we are going to take a closer look at a Dockerfile to help us understand what is happening.
The above image shows a very simple Dockerfile. These are the steps, or instructions, that you pass to Docker in order to build the final Docker image. Think of the Dockerfile as a recipe you hand to Docker, the chef, who goes off and precisely follows your instructions. He then comes back with the finished, steaming meal, which is your Docker image!
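In case the image doesn't load, a minimal Dockerfile along the lines described in this post might look like the sketch below. The app directory, build script, and binary name are illustrative assumptions, not taken from the original example:

```dockerfile
# Start from the extremely lightweight Alpine Linux base image
FROM alpine:3.13.0

# Copy the application files from our computer into the image
# (the my-app/ directory is a hypothetical example)
COPY my-app/ /app/

# Build the application inside the image
# (the exact build command depends on your app; build.sh is illustrative)
RUN cd /app && ./build.sh

# The command executed when a container starts from this image
CMD ["/app/my-app"]
```

Each instruction adds a layer on top of the one before it, which is exactly the step-by-step flow the rest of this post walks through.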
So let’s break down these steps.
Recall that we mentioned that Docker images include both the application and the environment it needs to run? Well, that explains why the Dockerfile starts with a FROM instruction.
Sidenote: It’s important to understand that the operating system we build on is itself an image. So we build our own application image on top of the Alpine OS image.
In the above sample Dockerfile, the FROM instruction pulls version 3.13.0 of Alpine Linux, an extremely lightweight distribution. This will be the first layer of our image, upon which Docker builds our application.
The next instruction, the COPY command, does exactly what its name suggests. It tells Docker which files from the build machine need to be copied into the new image, such as the files the application needs to run.
The RUN instruction adds the next layer by building the application from the files copied in the previous step. It creates the application and puts all the files and folders where they are needed, so that the application is ready to start the moment it’s needed.
The final step is the CMD instruction. This is where you pass the command that the application needs to run. When the image is loaded into a container, this command will be executed to start the app.
At this point, all that is left is to tell Docker to build our image from this Dockerfile. Docker will go through these commands line by line. When it’s done, we will have a single image containing the Alpine 3.13.0 operating system, the ‘My App’ files placed where they need to be, and the final command ready to be executed to launch the app when the image is placed in a container.
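The build step itself is a single command, run from the directory containing the Dockerfile. Note that these commands need a working Docker installation, and the image name my-app is an assumption for illustration:

```shell
# Build an image from the Dockerfile in the current directory,
# tagging it so we can refer to it by name later
docker build -t my-app .

# List local images to confirm the build succeeded
docker images my-app
```

The trailing dot tells Docker where to find the Dockerfile and the files the COPY instruction refers to.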
Now that we have a Docker image of ‘My App’, we need to store it somewhere so that everyone can download it later, place it into a Docker container and have it up and running within seconds. We can store images in cloud-based repositories like Dockerhub. These repositories, or repos, organize and store all the versions of the Docker images that are uploaded.
We can have public repos that are available to everyone, or private repos that can only be accessed with the correct authentication. In our example above, the Alpine image for the lightweight operating system is stored in a public Dockerhub repo.
We can, for example, tag our newly created version of the ‘My App’ image as version 2.2. We can then push (upload) this ‘My App:2.2’ image to our company’s private Dockerhub repo, where all of the teams that need our image can pull (download) it.
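Tagging, pushing, and pulling might look like the commands below. The my-company account name is a hypothetical placeholder, and pushing to a private repo requires a docker login first:

```shell
# Tag the local image with the repo it belongs to and a version
docker tag my-app my-company/my-app:2.2

# Upload (push) the tagged image to the Dockerhub repo
docker push my-company/my-app:2.2

# Teammates can later download (pull) the same image
docker pull my-company/my-app:2.2
```

Anyone who pulls that tag gets exactly the same image, layer for layer, which is what makes sharing through a repo so reliable.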
The image below brings together the whole flow: from the Dockerfile and source code on our computer, to a server running our ‘My App’ image in a Docker container.
Now that we have a simple overview of containerization, we can take next week to look at all the benefits that using containers can provide. Until then, check out Dockerhub if you’d like to take a look at all the freely available images you can download!