How to Update Docker Containers on File Change
I used to rebuild my images and restart my containers on every file change. It was the closest I could get to a live reload at the time.
Poor Solution: Rebuilding Containers
Specifically, I would run this bash script every time I made a change to a file:
```bash
#!/bin/bash

# Set default names for the image and container
imageName=my-image
containerName=my-container

# Build an image with the same image name
echo "Building new image..."
docker build -t $imageName -f Dockerfile .

# Remove the existing container running the old image
echo "Deleting old container..."
docker rm -f $containerName

# Delete old images that:
# 1. No longer have a reference
# 2. Are not currently used by a container
echo "Deleting old images..."
docker image prune -f

# Run a new container
echo "Running new container..."
docker run -d -p 5000:5000 --name $containerName $imageName
```
This script ran a `docker build` and a `docker run` while cleaning up old images and containers.
This made day-to-day development painfully slow: every small change meant a full image rebuild and container restart.
Better Solution: Using Bind Mounts
In order to prevent creating a new image on each file change, we can use bind mounts.
Suppose my project is located at `/server` on the host machine (the computer I'm developing on) and I mount this directory into the container. The mounted directory is referenced by its path on the host machine, and when the container starts it shadows the corresponding directory baked into the image, so the container sees our local files directly.
We only need to build the image once: small code changes flow through the mount, and a rebuild is only necessary when the installed dependencies change.
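Before reaching for `docker-compose`, the same idea works with the `-v` flag on `docker run` alone. A minimal sketch, assuming the image has already been built and tagged `my-image` (a hypothetical name for this example):

```shell
# Run the container with the host's ./server directory bind-mounted over /app.
# Edits to files under ./server become visible inside the container immediately.
docker run -d \
  -p 5000:5000 \
  -v "$(pwd)/server:/app" \
  --name my-container \
  my-image
```

The source path of a bind mount must be absolute, which is why `$(pwd)` is used to expand the relative path.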
We can use `docker-compose` for this. We specify our container configuration in a `docker-compose.yml` file.
Let’s say this is my project structure for a Flask application I’ve been developing.
```
📂 project
 ┣ 📜 Dockerfile
 ┣ 📜 docker-compose.yml
 ┗ 📂 server
    ┣ 📜 main.py
    ┣ 📜 requirements.txt
    ┗ 📜 every other file in my project...
```
For the sake of being thorough, this is what my `Dockerfile` looked like.
```dockerfile
FROM ubuntu:16.04

RUN apt-get update -y && apt-get install -y python-pip python-dev build-essential
RUN pip install --upgrade pip

COPY server/requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt

COPY server /app

ENTRYPOINT ["python"]
CMD ["main.py"]
```
In `docker-compose.yml`, we specify where the `Dockerfile` resides with the `build: context` key. Mine is in `.`, the current directory (the same directory as `docker-compose.yml`). The `volumes` key is where we specify the bind mount: we're mounting our `./server` folder into the container at `/app`, the working directory our dockerized process runs from.
```yaml
version: '3'
services:
  web:
    build:
      context: .
    ports:
      - "5000:5000"
    volumes:
      - ./server:/app
```
Now we can change our bash script, `build.sh`, to run the following commands:
```bash
# Start up the container
docker-compose up -d

# Check that the container is up and running
docker-compose ps
```
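When `requirements.txt` (or anything else baked into the image) does change, the image still has to be rebuilt once. `docker-compose` can fold that into the same step, a sketch:

```shell
# Rebuild the image and recreate the container in one command;
# only needed when the image contents (e.g. dependencies) change.
docker-compose up -d --build
```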
Changes to the mounted code files are picked up by the container immediately and reflected in our dockerized process. No need to restart anything!
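One caveat: the container sees the new files, but the process inside must also reload them. Flask's development server only does that when its reloader is enabled, i.e. when the app runs in debug mode. One way to switch it on, assuming `main.py` starts the server with `app.run()` and a Flask version that honors the variable (both assumptions about your app, not something the setup above guarantees), is an `environment` entry in the compose file:

```yaml
services:
  web:
    environment:
      - FLASK_DEBUG=1   # assumption: enables Flask's debug mode and auto-reloader
```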