Content from Introduction


Last updated on 2024-10-14

Estimated time: 32 minutes

Overview

Questions

  • What are virtual machines and containers?
  • How are they used in open science?

Objectives

  • Explain the main components of a virtual machine and a container and list the major differences between them
  • Explain at a conceptual level how these tools can be used in research workflows to enable open and reproducible research

Introduction


  • This lesson is basically a lecture. Make sure to set that expectation
  • Remind students that they’re not expected to remember everything. They’ll have a chance to do some hands-on later that might help clear up some questions.
  • Don’t forget to mention the glossary as reference material

You might have heard of containers and virtual machines in various contexts. For example, you might have heard of 'containerizing' an application, running an application in 'Docker', or 'spinning up' a virtual machine in order to run a certain program.

This lesson is intended to provide a hands-on primer to both virtual machines and containers. We will begin with a conceptual overview, followed by hands-on explorations of two tools: VirtualBox and Docker.

If you forget what a particular term means, refer to the Glossary.

Prerequisite

This lesson assumes no prior experience with containers or virtual machines. However, it does assume basic knowledge of computer and networking concepts. These include:

  • The ability to install software (and to obtain elevated/administrative rights to do so),
  • Basic knowledge of the components of a computer and what they do (CPU, network, storage)
  • Knowledge of how to navigate your computer’s directory structure (either graphically or via the command line).

Prior exposure to using command line tools is useful but not required.

What are virtual machines and what do they do?


Lead off script: We're all familiar with computers like Macs and PCs. They run an operating system like Windows or macOS, and we can run programs on that computer like web browsers and word processors. The operating system controls all of the physical resources.

Explain the diagram

  • Physical resources, and how the OS controls access to them
  • The VMM is a program that gets a slice of those resources and presents them as if they were physical resources to virtual machines
  • Define the relationship between the host and guest OS.
  • Use the concept of a computer-within-a-computer
diagram showing boxes with hardware resources, applications, and the operating system
A conceptual representation of an ordinary computer system. The operating system oversees the physical hardware resources and is responsible for executing individual applications and allocating those resources to them.

Normally, computers run a single operating system with a single set of applications. Sometimes (for reasons we’ll discuss soon), people might want to run a totally separate operating system with a different set of applications. One way to do that is to split up the physical resources like CPU, RAM, etc. and present them to that second operating system for its exclusive use. The concept of splitting up these resources (i.e., virtualizing them) so that only this second operating system can access them is the idea behind a virtual machine.

diagram showing boxes with hardware resources, applications, the operating system, and how virtualization shares resources
In a virtual machine, physical hardware resources are divided between the host operating system and any virtual machines (guest operating systems). The virtual machine manager (VMM) takes care of managing the virtualized resources. Each virtual machine only sees the resources allocated to it.

At its core, a virtual machine (referred to as a VM from now on) is a self-contained set of operating system, applications, and any other needed files that run on a host machine. The VM files are usually encapsulated inside a single file called a VM image. The file you downloaded during the lesson setup is an example of an image.

Callout

One way to think of the VM concept is a computer that runs inside your computer. All the programs that run inside this mini computer can’t “see” anything running outside of it, either in the host or other VMs running on the same physical computer.

Why would we want to run a mini computer inside of our main computer? VMs are commonly used to

  • More easily manage and deploy complex applications.
  • Run multiple operating systems and their applications on the same physical hardware.

Examples:

  • Your bank’s internal systems need to be robust against risks like hardware failure, hacking, weather events, etc. One way to achieve this is to run several identical servers spread out geographically. While one could install all the software on each server, using a VM reduces complexity by allowing the same software and associated configurations to be quickly deployed and managed across varying hardware.
  • Many cloud computing providers like Amazon allow users to purchase resources on their systems. To maximize their investment on physical hardware, these companies will set up a virtual server for each customer. Functionally, these virtual servers behave as a standalone machine but in reality, there may be dozens of other virtual servers running on the same physical system.

VMs are also commonly used in academic research, as they can help with the problem of research reproducibility by packaging all data and code together so that others can easily re-run the same analysis without having to install and configure the environment in the same way as the original researcher. They also help optimize the usage of the computing resources owned by the institution. You might have interacted with VMs at your institution if you've ever logged into a "remote desktop" or "virtual computing environment" that many institutions use to provide access to licensed software.

Callout

Benefits of VMs

  • Help with distributing and managing applications by including all needed dependencies and configurations.
  • Increase security by isolating applications from each other.
  • Maximize the use of physical hardware resources by running multiple isolated operating systems at the same time.

What are containers and what do they do?


diagram showing boxes with hardware resources, applications, containers, and the operating system
Containers are environments that encapsulate applications and their dependencies. Applications running inside a container are functionally isolated from the host. The container manager runs containers and determines what each container can “see” on the host. The level of isolation is not as high as VMs (represented by the dashed lines).

Containers are conceptually similar to VMs in that they also encapsulate applications and their dependencies into packages that can be easily distributed and run on different physical machines. A notable difference is that when using containers, hardware is not virtualized and containerized applications must be compatible with the host OS and its hardware. In more technical terms, applications running in a container share the host’s kernel and therefore must be compatible with the host’s architecture. In practical terms, this means that containers:

  • Are generally less resource-intensive than comparable VMs, at the cost of portability.
  • Are generally not able to run applications written for one operating system on another.

Two core concepts

Ephemerality

Containers should be considered ephemeral: they can be destroyed and recreated at any time, and the application within simply resumes as if nothing had happened. Therefore, containerized applications must be designed so that the container manager can keep all user data and configurations outside of the container. This separation is what enables some of the use cases below.
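As a hedged illustration using Docker commands (which we meet later in this lesson), here is a sketch of a web server container whose content lives on the host through a volume mount, so the container itself can be thrown away at any time. The image is the official nginx image; the container name and the host path are just examples.

# Serve files from a host directory; the content stays outside the container.
docker run -d --name webserver -v /srv/site:/usr/share/nginx/html:ro -p 8080:80 nginx

# The container can be destroyed and recreated at any time; the content in
# /srv/site on the host is untouched.
docker rm -f webserver
docker run -d --name webserver -v /srv/site:/usr/share/nginx/html:ro -p 8080:80 nginx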

Modularization

A popular paradigm is modularizing a complex application into smaller, loosely connected components called "microservices". Each microservice runs in its own container and communicates with other microservices via an isolated, private network that is set up by the container management platform. This approach helps with maintenance, scalability, and robustness since a microservice can be stopped, updated, and/or swapped without affecting the other microservices.
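A rough sketch of this idea with plain Docker commands (a container management platform such as Docker Compose or Kubernetes would normally set this up for you; the container names and images here are just examples):

# Create a private network and attach two services to it.
docker network create app-net
docker run -d --name db  --network app-net -e POSTGRES_PASSWORD=example postgres:16
docker run -d --name web --network app-net -p 8080:80 nginx
# "web" can reach "db" at the hostname "db" on the private network; only port
# 8080 is exposed to the outside, and either container can be updated or
# replaced independently of the other.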

Examples

  • Web applications. E.g., a web front-end container that talks to a database backend running in a different container. In this case, the ephemerality and microservice concepts allow for easily updating the software, while being sure that the data won’t be affected. For example, the database can be updated or replaced without needing to touch the front-end software at all (thereby allowing error or maintenance messages to function).
  • Data science, data management, and other research uses. In these applications, ephemerality and modularization via microservices enable reusable, reproducible, and cloud-native workflows. Due to their lighter weight and the ability to define and create containers via plain-text blueprints (e.g., Dockerfiles – more on that later), containers have become more popular than virtual machines in research environments.

Might also want to mention these additional characteristics:

  • Containers can contain applications from various OSs like Linux or Windows, but containers based on Unix-like OSs (e.g., Linux) are the most common.
  • Containers are generally console based. If they have a graphical interface, the main way containers present it is via a web browser.
  • Software to create and manage containers is varied. Docker is the most popular one.

Callout

Benefits of containers

  • A lighter footprint (mainly around lower CPU and memory requirements) compared to VMs.
  • Quickly and easily maintain a complex application without affecting any user data or causing issues with conflicting dependencies in the host OS.
  • Quickly and easily scale applications. For example, when there is a need to dynamically run multiple instances of an application across a cluster of servers to handle increased demand.
  • Robustness of an application stack. If an application is made up of smaller applications that talk to each other via standard mechanisms (e.g., web APIs), it is easier to pinpoint and recover from problems.

Comparing virtual machines and containers


                                                                    Virtual Machines   Containers
Contains all the dependencies needed to run an application          Yes                Yes
Isolates an application from the host OS                            Yes                Yes
Ease of distribution                                                 Very easy          Easy/hard (depending on complexity and hardware compatibility)
Disk space, CPU, and memory requirements                             Larger             Smaller
Presents virtual versions of real hardware like CPUs, disks, etc.    Yes                No
Scaling based on computing needs                                     More difficult     Easier
Able to run applications from one operating system on another        Yes                Sometimes*
Able to run applications from one CPU architecture on another        No**               No

* It is possible in some cases. For example, Docker on Windows can run Linux containers because it secretly runs them inside a Linux VM.
** It is possible with some VM software and with some architectures. In the background, the software uses emulation, which is different on a technical level from virtualization. Examples of architectures are Intel x86 (32 bit or 64 bit), ARM, RISC, and more.

Challenge 1:

If you are running a web browser inside a VM, would the host OS be able to determine what internet addresses you are connecting to?

What about an application making web requests from inside a container? Can the host see the IP addresses your containerized application is connecting to?

In both cases, the host can (in principle) see what sites or IP addresses the guest OS or container is connecting to. In the VM case, even though the network hardware is virtualized, the actual data still has to pass through the real hardware at some point. For containers, which use the real network hardware directly, the effect is the same. If virtual private network (VPN) software is used within the container or VM, then the only thing the host can see is the address of the VPN server.

Key Points

  • Conceptually, a virtual machine is a separate computer that runs with its own resources, operating system, and applications inside of a host operating system.
  • Containers are like lightweight virtual machines with some subtle but consequential differences.
  • Containers and virtual machines can address many of the same use cases.
  • Both virtual machines and containers are commonly used in academic research but containers are more popular.

Content from Virtual machines using VirtualBox


Last updated on 2024-10-14

Estimated time: 12 minutes

Overview

Questions

  • How do you import and launch a VM using VirtualBox?
  • How do you accomplish common tasks?
  • How and why do you change settings for a VM?

Objectives

  • Explain how to navigate the VirtualBox interface
  • Demonstrate how to run a VM
  • Show how to manage resources
  • Show how to take advantage of snapshots
  • Explore changing resource allocations

Introduction


Let's now turn to exploring how to use virtual machines (VMs). There are many choices for running virtual machines, each with their own strengths and weaknesses. The ones you may encounter most often are the VMware family of products, Hyper-V (included with Windows), Parallels (a product for macOS), and VirtualBox, which is owned by Oracle Corporation and is cross-platform and open source.

As part of the setup, you should already have VirtualBox installed and running on your system before continuing.

Running the example VM


Users may be prompted to use the Basic or Expert interface. Make sure students know that what they see may not exactly match what is on your screen.

Exploring the UI:

  • Run VirtualBox
  • Import the VM
  • Run it
  • See a desktop
  • The proper way to stop the VM

Common tasks?


  • Suspend, resume
  • Snapshots
  • Mapping folders and hardware resources (?)

Challenge 1:

Take a snapshot of the VM. Start the VM and suspend it. Now delete the parent snapshot. What will be the result if you boot up the VM again?

Todo

Managing VMs


Challenge 2:

Increase the RAM available to the VM to 2 GB (2048 MB). Verify it by running this command inside a terminal window

cat /proc/meminfo | grep MemTotal

What number do you see? What should be the effect on the VM’s performance?

You should see a value slightly less than 2097152 kB (2048 MB), for example 2014504 kB, because the kernel reserves some memory for its own use. Performance should improve, especially when applications load a lot of data into memory. Web browsers are especially heavy memory users.

Key Points

  • VirtualBox can import VM images and run them on Windows, macOS, and Linux.
  • Snapshots let you save the state of a VM and return to it later.
  • A VM's resource allocations (such as RAM) can be changed in its settings.

Content from Basics of Containers with Docker


Last updated on 2024-11-21

Estimated time: 20 minutes

Overview

Questions

  • What is a Docker image?
  • What is a Docker container?
  • How do you start and stop a container?
  • How do you retrieve output from a container to a local machine?

Objectives

  • Explain the difference between a Docker image and a Docker container
  • Retrieve a Docker image from the cloud
  • Start a Docker container running on a local machine
  • Use the command line to check the status of the container
  • Clean the environment by stopping the container

Introduction


Containers, like virtual machines, allow us to effectively simulate running another computer within our own machine. Why would we want to go through this process of running one computer within another? A few situations where containers are especially useful are:

  1. You want to use software that is incompatible with the operating system on your machine.
  2. You want to use a program that has lots of dependencies, which you do not want to manage.
  3. You want to run analyses on a new set of data with identical settings as a prior study.

Instructors should feel free to add their own examples in the introduction, to help your learners appreciate the utility of containers. Providing your own use case of containers helps lend authenticity to the lesson.

Images versus containers

There are two big pieces of the container world: images and containers. They are related to one another, but they are not synonymous. Briefly, images provide the plans for making a container, and a container is similar to a virtual machine in that it is effectively another computer running on your computer. To use an analogy from architecture, images are the blueprints and containers are the actual building.

Callout

If you are a fan of philosophy, images are for Platonists and containers are for nominalists.

Considering the differences between images and containers…

Images are

  1. Read-only
  2. Built from instructions (in a file called a "Dockerfile" - we talk about Dockerfiles later in the lesson)
  3. Not able to actually "do" anything

Containers are

  1. Modifiable (while running)
  2. Able to include files and programs (like your computer!)
  3. Able to run analyses or web applications (and more)

TODO: Anything the instructor should be aware of. Maybe here’s a point for an image of some sorts.

Challenge 1: Images versus containers

Your instructor introduced one analogy for explaining the difference between a Docker image and a Docker container. What is another way to explain images and containers?

Several analogies exist, and here are a few:

  • An image is a recipe, say, for your favorite curry, while the container is the actual curry dish you can eat.
  • “Think of a container as a shipping container for software - it holds important content like files and programs so that an application can be delivered efficiently from producer to consumer. An image is more like a read-only manifest or schematic of what will be inside the container.” (from Jacob Schmitt)
  • If you are familiar with object-oriented programming, you can think of an image as a class, and a container an object of that class.

Working with containers

One thing to note right away is that a lot of the work of running containers happens through the command line interface. That is, we do not have a graphical user interface (GUI) with menus to work with. Instead, we type commands into a terminal for starting and stopping containers.

For the purposes of this lesson, we are going to use a relatively lightweight workflow of using a container. Briefly, the steps of using a container are:

  1. Retrieve the image we would like to use from an online repository.
  2. Start the container running (like turning on a computer).
  3. Interact with the container, if the container has such functionality (some containers are just programmed to run without additional interaction from users).
  4. Check the status of the container.
  5. Upon completion of whatever task we are using the container for, stop the container (like turning off the computer).

Steps 1, 2, 4, and 5 are each associated with a specific docker command (put together in the sketch after this list):

  1. Retrieve image: docker pull
  2. Start container: docker run
  3. Check status: docker ps
  4. Stop container: docker stop
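Put together, a typical session looks roughly like the sketch below (using the OpenRefine image we pull later in this episode; <CONTAINER ID> stands for the identifier that docker ps reports):

docker pull felixlohmeier/openrefine               # step 1: retrieve the image
docker run -p 3333:3333 felixlohmeier/openrefine   # step 2: start a container (keeps running in this terminal)
docker ps                                          # step 4: check running containers (from a second terminal)
docker stop <CONTAINER ID>                         # step 5: stop the container when finished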

The instructions included in the two episodes on containers assume that learners are using the virtual machines described in prior episodes. However, the following Docker instructions can all be run on any computer that has an internet connection and has Docker installed. You can find more information about installing Docker at the Carpentries’ Containers lesson.

Retrieving images

The first step of using containers is to download a copy of the image you would like to use. For Docker images, there are multiple sites on the internet that serve as sources for Docker images. Two common repositories are DockerHub and GitHub’s Container Registry; for this lesson, we will be downloading from DockerHub. The nice thing is that we do not have to open a web browser and manually download a file - instead we can use the Docker commands to do this for us. For downloading images, the syntax is:

docker pull <image creator>/<image name>

Where we replace <image creator> with the username of the person or organization responsible for the image and <image name> with the name of the image. For this lesson, we are going to use an image that includes the OpenRefine software. OpenRefine is a powerful data-wrangling tool that runs in a web browser.

Callout

Want to learn more about OpenRefine? Check out the Library Carpentry Lesson on Open Refine.

To run this command (and all subsequent Docker commands), we will be using the command-line interface (CLI) in our virtual machines. Open the terminal window by clicking the computer screen icon in the lower-left corner of the virtual machine window.

Callout

If you are not using a virtual machine, or if you are using a different virtual machine than the one introduced in previous episodes, you may need to open a command line terminal a different way. Searching for an application called “terminal” on most systems will tell you what the name of the program is to run a command line terminal.

screenshot showing command line terminal icon location
The command line terminal can be opened by clicking the computer screen icon

Once the command line terminal is open, type the command to retrieve the OpenRefine image:

docker pull felixlohmeier/openrefine

After typing in the command, press “Enter” and Docker will download the image from DockerHub. You should see output that tracks the progress of the download.

terminal window showing downloading progress
The progress display when downloading the OpenRefine image

By default, the command docker pull will only look for images on DockerHub; if you want to download images from another source, such as the GitHub Container Registry (GHCR), you need to indicate this in the docker pull command. Specifically, we add the source information immediately before the namespace argument. So if we wanted to download the Docker image from the official OpenRefine project on GitHub, we would run

docker pull ghcr.io/openrefine/containers

where ghcr.io indicates the source of the image is the GHCR.

Starting a container from an image

You now have a Docker image on your machine. This means, thinking back to our architecture analogy, you have the blueprints and now it is time to actually make the thing ("the thing" in this case being the running container on your machine). To start a container running from an image, we use the docker run command, passing the name of the image and any additional information. In our case, we will need to provide information on how we can interact with the container by setting the ports (we will see later how we use this information). In the command-line interface, run:

docker run -p 3333:3333 felixlohmeier/openrefine

Breaking down this command, there are three key parts:

  1. docker run: tells docker to start running a new container
  2. -p 3333:3333: tells Docker to map port 3333 on our machine to port 3333 inside the container, which is how we will communicate with the running container
  3. felixlohmeier/openrefine: is the name of the image from which to build the container

There is a good chance you will see a variety of messages, including some warnings. However, these are not going to interfere with our lesson, so we will ignore them for now.

The three warning messages you are likely to see are:

log4j:WARN No appenders could be found for logger (org.eclipse.jetty.util.log).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

These indicate that a logging system in the image is not configured the right way. We are not going to be looking at logs of the container, so we do not need to worry about these messages. If you end up building your own images, logging is likely to be an important part of your development and debugging process.

Status check

At this point, our container is running. Or at least it should be. How can we check? In order to see which containers are running, we will use the docker ps command. Because the container we just started is running in the terminal window where we issued the docker run command, we will need to open a new terminal tab. We can do this in the terminal File menu, selecting the New Tab… option (File > New Tab…).

screenshot showing new tab option in terminal File menu
A new tab can be opened through the File menu

In this new terminal window, type the following and press “Enter”:

docker ps

You should see a table print out in the terminal window. Note that if your windows are narrow, the output will wrap around the screen and be a little difficult (although not impossible) to read. If you find this is the case, you can make your terminal (and possibly your virtual machine) windows wider, then run the docker ps command again. The output should look something like:

$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                                       NAMES
e1e174015296   felixlohmeier/openrefine   "/app/refine -i 0.0.…"   9 seconds ago   Up 8 seconds   0.0.0.0:3333->3333/tcp, :::3333->3333/tcp   epic_nobel
$

The values in the first and last columns (CONTAINER ID and NAMES, respectively) will likely be different for everyone. The important columns to note are:

  • CONTAINER ID: A unique identifier for this container. You can have multiple containers based on the same image running simultaneously, and they will all have different values for CONTAINER ID.
  • IMAGE: The name of the image this container is based on.
  • STATUS: This will indicate if a container is running (it will say something like Up 5 minutes, which means it started running 5 minutes ago) or if it has stopped running (the message will be something like Exited (143) 7 seconds ago).

So now we see that our container is running and we are ready to actually use the OpenRefine program.

Using the container

The first thing we need to do is download the sample data we are going to work with in OpenRefine. In the web browser on the virtual machine, enter the URL https://bit.ly/lc-article-data. This should either download a CSV file or present you with a webpage of the CSV data. If the latter (you see a webpage of the data), download the data as a CSV file. Because you are working in the Virtual Machine, this download should happen within the VM. These data are 1,001 records of Open Access published articles. Note the following instructions for using OpenRefine are adapted from the Library Carpentry lesson on OpenRefine.

Callout

Note that if you are not using a virtual machine, the CSV file will be downloaded to your local machine.

We now need to open OpenRefine, so open a new tab in the web browser that is running on your Virtual Machine, and enter the following in the URL bar: localhost:3333. You should now see the OpenRefine program in your web browser.

TODO Screenshot of OpenRefine in VM?

Start by loading the file we downloaded into OpenRefine.

  1. Click Create Project from the left hand menu and select “Get data from This Computer” (these options may already be selected).
  2. Click Choose Files (or ‘Browse’, depending on your setup) and locate the file which you have downloaded called doaj-article-sample.csv.
  3. Click Next >> where the next screen gives you options to ensure the data is imported into OpenRefine correctly.
  4. Click in the Character encoding box and set it to UTF-8, if it is not already set to UTF-8.
  5. Leave all other settings to their default values.
  6. Click the Create project >> button at the top right of the screen. This will create the project and open it for you.

Next we will clean up one part of the data.

  1. Click the dropdown triangle on the Publisher column.
  2. Select the Facet > menu item.
  3. Select Text facet in the submenu.
OpenRefine menus showing facet options
  4. Note that in the values there are two that look almost identical - why do these two values appear separately rather than as a single value?
  5. On the Publisher column use the dropdown menu to select Edit cells > Common transforms > Collapse consecutive whitespace.
OpenRefine menus showing cell transformation options
  6. Look at the publisher facet now - has it changed? (if it hasn’t changed try clicking the Refresh option to make sure it updates).

Finally, we can export this cleaned version of the data to a new file.

  1. In the top-right corner of OpenRefine, click the Export dropdown menu.
  2. Select Comma-separated value.
  3. Note an updated version of the file, called doaj-article-sample-csv.csv has been saved on the Virtual Machine.

From this point, the easiest way to move the file somewhere else (like onto your computer), is to move the file to the cloud (e.g. Google Drive, Box, etc.) and download it from there.

Stopping the container

Now that we are finished working with OpenRefine and our file is on our local machine, we can stop the container. Stopping the container is equivalent to turning off a computer, and we will use the command docker stop to shut the container down. Before we do, though, we need to find the ID of the container that is running OpenRefine. This is because the docker stop command requires that ID so it knows which container to stop. To find the container ID, we again use docker ps to provide us with a table of all the running containers. When you run docker ps, you should see a familiar table, with most information identical to what we saw before, but with the time information updated in the CREATED and STATUS columns:

$ docker ps
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS         PORTS                                       NAMES
e1e174015296   felixlohmeier/openrefine   "/app/refine -i 0.0.…"   9 minutes ago   Up 9 minutes   0.0.0.0:3333->3333/tcp, :::3333->3333/tcp   epic_nobel
$

The first column, CONTAINER ID has the information we need in order to stop the container from running.

What if the table is empty? That is, what if after running docker ps, you see

$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
$

This means you have no containers currently running. You will not need to use the docker stop command because the container has already been shut down.

The syntax for docker stop is:

docker stop <CONTAINER ID>

where we replace <CONTAINER ID> with the actual string of letters and numbers that identify the container. So on my machine, to stop the container, I will run

docker stop e1e174015296

The container ID on your machine will almost certainly be different from the one on my machine. If they are the same, I suggest you go buy a lottery ticket now.

Challenge 2: Checking the status of containers

We saw before that we could check the status of running containers by using the command docker ps. What happens when you run the same command now? What about when you run the same command with the -a flag?

  • docker ps will show the status of all running containers. If you have no containers running, and you probably do not at this point of the lesson, you should see an empty table, like:
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
$ 
  • docker ps -a will show all containers that are running or have been run on the machine. This includes the container that we stopped earlier.
$ docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS                       PORTS     NAMES
e1e174015296   felixlohmeier/openrefine   "/app/refine -i 0.0.…"   20 minutes ago      Exited (143) 2 minutes ago                determined_torvalds
$

Note the date information (in the CREATED and STATUS fields) and the container name (the NAMES field) will likely be different on your machine.

Challenge 3: Order of operations

Rearrange the following commands so that, in order, they (1) start the OpenRefine container, (2) find the container ID of the running OpenRefine container, and (3) terminate the OpenRefine container.

docker stop <container ID>
docker run -p 3333:3333 felixlohmeier/openrefine
docker ps
docker run -p 3333:3333 felixlohmeier/openrefine
docker ps
docker stop <container ID>

Callout

TODO Add any notes that may be relevant, but not necessary for lesson?

Key Points

  • Containers are a way to provide a consistent environment for reproducible work.
  • Use docker pull to copy an image to your machine
  • Use docker run to start running a container
  • Use docker ps to check the status of running containers
  • Use docker stop to stop running a container

Content from Creating Containers with Docker


Last updated on 2024-11-26

Estimated time: 12 minutes

Overview

Questions

  • How do you create new Docker images?

Objectives

  • Explain how a Dockerfile is used to create Docker images
  • Create a Dockerfile to run a command
  • Use docker build to create a new image
  • Update a Dockerfile to run a Python script

TODO Anything instructors should be aware of for this episode?

Introduction


In the previous episode, we used a Docker image to run a Docker container. We briefly covered how images are used to make a container, but where does the Docker image come from? In this episode, we will create and use Dockerfiles to make a new image, and use that image to start a Docker container.

TODO Might be a good spot for a visual.

Dockerfiles


A Dockerfile is a plain text file that includes instructions for making a Docker image. Dockerfiles can be very short or very long, depending on how complicated the Docker image is going to be. A minimal Dockerfile would have two pieces of information:

  1. The base image to start from, and
  2. What to do when the container starts running

The first point (the base image) may seem a little odd, as it appears we are actually building an image from…another image? And that is exactly what we are doing!

Callout

It is possible to build your own base image from scratch using an operating system's files, but there are a lot of Docker images already available for whichever operating system you want to use. But let us not make even more work for ourselves. It's turtles all the way down and that's OK.

Dockerfile gross anatomy

The general structure of a Dockerfile is a series of keywords followed by some information. For the minimal Dockerfile we are going to build, we need to include commands to accomplish the two points we mentioned above (identify the base image and do something when the container starts):

  1. The FROM command is followed by the name of the image to start from and the version number of that image. When we use the Dockerfile to build the Docker image, our computer will start by downloading a copy of that base image, and then add to it based on whatever subsequent commands we provide in the Dockerfile.
  2. The CMD command tells the container what to do when it starts running. In some cases, it will start an application, like we saw with the OpenRefine image in the previous episode. The CMD command can also do things like run analyses based on data passed to the container.

For this episode, we will start by creating a Dockerfile with only these two commands:

FROM python:3.9
CMD ["python", "--version"]

TODO: Where do we want to be making this file? Is the home directory OK?

Creating Dockerfiles

As mentioned above, Dockerfiles are plain text files, so we need to use a text editor to create them. In the virtual machine where we are working, the nano text editor will work for us. To create the Dockerfile, start by opening up the command line terminal again (or opening a new tab if the terminal is already running) and running the following:

nano Dockerfile

This command will do two things: first, it creates an empty text file called "Dockerfile" and, second, it opens the file in the nano text editor. We will add those two lines, as well as a comment for each line. Comments are useful for anyone who will need to look at the Dockerfile in the future (this usually means you!). The comment character, #, at the start of a line tells Docker to ignore that line, as it is for human eyes only; in a Dockerfile, comments need to go on their own lines rather than at the end of an instruction. In the nano text editor, add the following:

# use the python base image, version 3.9
FROM python:3.9
# on container start, print the version of python
CMD ["python", "--version"]

Which should look something like this in your terminal:

terminal window showing contents of Dockerfile in nano editor
The nano text editor for your first Dockerfile

Once you have that typed in, you can save and close the file. To do this, first hold down the Control key and press the letter "O" key (this is often written as ^O) and press "Enter" to confirm the save to the file called "Dockerfile". Second, to exit the nano editor, hold down the Control key and press the letter "X" key (^X). If you are curious about what was saved to the Dockerfile, you can run the command cat Dockerfile on the terminal command line and it will print the contents of the file to the screen.
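For example, cat Dockerfile should print back exactly what you typed:

$ cat Dockerfile
# use the python base image, version 3.9
FROM python:3.9
# on container start, print the version of python
CMD ["python", "--version"]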

Creating images from Dockerfiles

We have now created the Dockerfile, and there is one last step necessary to make the Docker image from that Dockerfile. To make the Docker image, we use the docker build command and provide it with the name of the image. In the terminal command line, run the following command:

docker build -t vboxuser/python-test .

Breaking down the command into the component parts:

  • docker build is the base command telling our computer to build a Docker image
  • -t indicates we are going to “tag” this image with a label
  • vboxuser/python-test provides the name of the repository (vboxuser) and the name of the image (python-test). This convention, using the repository name and the image name, is like using a person’s full name (family name and given name), rather than just someone’s given name. Just using an image name alone, without a repository name, would be like referring to “Mohamed” or “Maria” and expecting other people to know exactly who you are referring to.
  • . is just a dot telling your computer where the Dockerfile is located. In the command line interface, the dot is the directory where you are running the command. TODO: Might be a good spot to link the LC Shell lesson here. If the Dockerfile was somewhere other than the folder your command line terminal is currently running in, you would replace the dot with that location. For example, if the Dockerfile was located on the Desktop, we would update our command to docker build -t vboxuser/python-test ~/Desktop (where the dot . is replaced with ~/Desktop).

TODO Explain commands. Note how no new file is created in our directory, it gets created…somewhere, though?

docker build -t ...

docker image ls
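Note that docker build does not create a new file in your working directory; the image is stored in Docker's local image store. You can list the images Docker knows about with docker image ls. The output below is only illustrative; the image IDs, timestamps, and sizes on your machine will differ:

$ docker image ls
REPOSITORY             TAG       IMAGE ID       CREATED          SIZE
vboxuser/python-test   latest    0a1b2c3d4e5f   10 seconds ago   1.01GB
python                 3.9       b2c3d4e5f6a7   3 weeks ago      1.01GB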

Challenge 1: Updating our analogy

In the previous episode, a couple of different analogies were introduced to explain the relationship between a Docker image and a Docker container. Pick one of those analogies and update it to also include the Dockerfile.

TODO: Update these analogies with Dockerfiles.

  • To use an analogy from architecture, images are the blueprints and containers are the actual building.
  • An image is a recipe, say, for your favorite curry, while the container is the actual curry dish you can eat.
  • “Think of a container as a shipping container for software - it holds important content like files and programs so that an application can be delivered efficiently from producer to consumer. An image is more like a read-only manifest or schematic of what will be inside the container.” (from Jacob Schmitt)
  • If you are familiar with object-oriented programming, you can think of an image as a class, and a container an object of that class.

Starting containers

TODO This is review.

docker run ...

Confirm it ran and quit

docker ps -a

Challenge 2: Update base image

  • Update the Dockerfile to have a base image that includes Python version 3.12 (instead of Python version 3.9)
  • Build the image
  • Start the container to confirm it is using Python version 3.12

To change the base image, update the information passed to the FROM command. That is, open the Dockerfile and change this line:

FROM python:3.9

to

FROM python:3.12

Build the image

In the terminal, use docker build to create a new version of the image.


docker build -t <username>/python-container .

Building with the same tag points that tag at the new image, effectively replacing the previous version (the old image remains on disk, untagged, until you remove it).

Verify image was updated

In the terminal, use docker run to start a container based on the updated image.


docker run <username>/python-container
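The container should print the Python version from the new base image and then exit. The exact patch release depends on the python:3.12 image you pulled, so treat this output as an example:

$ docker run <username>/python-container
Python 3.12.7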

Copying files into the image

TODO Add flavor text about why we might do this.

COPY ...

See https://stackoverflow.com/questions/32727594/how-to-pass-arguments-to-shell-script-through-docker-run and https://www.tutorialspoint.com/how-to-pass-command-line-arguments-to-a-python-docker-container

for example of passing arguments to a script. Passing arguments might be too much.

Challenge 3: Copy a script to run in the container

There is a Python script, hello.py, that simply runs print("Hello World!"). Update your Dockerfile so that the script is copied into the image and runs when the container starts.

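A minimal solution sketch (file and image names here are just examples). First, create a script called hello.py, containing only print("Hello World!"), in the same directory as the Dockerfile. Then update the Dockerfile to copy the script into the image and run it when the container starts:

# use the python base image, version 3.12
FROM python:3.12
# copy hello.py from the build context into the image
COPY hello.py /app/hello.py
# on container start, run the script
CMD ["python", "/app/hello.py"]

Rebuild the image and start a container from it; it should print Hello World! and exit:

docker build -t vboxuser/python-hello .
docker run vboxuser/python-hello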

Key Points

  • Dockerfiles include instructions for creating a Docker image
  • The FROM command in a Dockerfile indicates the base image to build on
  • The CMD command in a Dockerfile includes commands to execute when a container starts running
  • The COPY command in a Dockerfile copies files from your local machine to the Docker image so they are available for use when the container is running