This article will cover how to set up all the containers you need to start developing a full-stack web app.
Although I will use specific technologies here, the stack running in each container could be replaced by your preferred one with minor changes.
Note, however, that this article only covers the development setup: none of the Dockerfiles included here are production-ready.
Now, let's have a look at our infrastructure, starting with the file structure of our project:
```
app
├── docker-compose.yaml
├── api
│   ├── Dockerfile.dev
│   ├── go.mod
│   └── main.go
├── client
│   ├── Dockerfile.dev
│   └── create-react-app files and directories
└── nginx
    ├── default.conf
    └── Dockerfile.dev
```
I will also add two other services, a database and a cache: our docker-compose file will include a Postgres container as well as a Redis one, to show how these services would be integrated into our application.
Finally, I've decided to use an NGINX server. Strictly speaking, it is not necessary for our development environment, since we could use our local machine as a development server. But that local server disappears in production, so including NGINX in our docker-compose build from the start will make it easier to transition from this development build to a production-ready one.
First, we will create the Dockerfile for our React client. Navigate to the root directory of your project (in this tutorial, the
app/ directory will play that role) and run the following command:
npx create-react-app client
Once the command has finished, we will have our basic React client ready. The only caveat is that, by default, create-react-app initializes a Git repository in the
client/ directory, so make sure to delete it.
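Assuming a Unix-like shell, removing the generated repository is a single command run from the app/ directory:

```shell
# Delete the Git repository that create-react-app initialized inside client/
rm -rf client/.git
```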
Apart from that, we are ready to create our first Dockerfile, so we will add the following lines to a file called
Dockerfile.dev, which will be located in the client/ directory.
```dockerfile
FROM node:alpine
WORKDIR /app
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
```
Although this is a basic Dockerfile, I will emphasize one best practice that is worth noting: we copy
package.json separately from the rest of the files, so that Docker's layer cache for the dependency-installation step is not invalidated every time we update a source file.
This way, unless package.json itself changes, Docker can skip re-running the
npm install command in each build, which is one of the most time-consuming steps of building a Docker image.
It is also good practice to specify the image variant we want in our build, as it makes our containers more consistent.
This time, I've chosen
node:alpine, which is a minimal image, i.e., it contains the bare minimum needed to run Node and npm.
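To pin the build down even further, you could name an explicit version tag instead of the floating alpine tag; the tag below is only an example, not a requirement of this setup:

```dockerfile
# An explicit version tag makes builds reproducible across machines and time
FROM node:13-alpine
```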
Moving to the back-end side, let's first create a basic server with Go's Gin web framework. Navigate to the
app/api directory and create a file named main.go:
Basic Gin Server
We will also need a
go.mod file, which is the equivalent of
package.json in npm.
```
module github.com/<yourUsername>/<nameOfYourProject>

go 1.13

require github.com/gin-gonic/gin v1.5.0
```
Once we have both files ready, we can start writing our
Dockerfile.dev file in the API directory:
```dockerfile
FROM golang:latest
ENV GO111MODULE=on
WORKDIR /app
COPY ./go.mod .
RUN go mod download
RUN go get github.com/pilu/fresh
COPY . .
CMD ["fresh"]
```
As you may have already noticed, in this Dockerfile, we follow the same best practices as in the other one.
We have split the
go.mod copy from the other files, as this will prevent the execution of the
go mod download command when no new dependencies have been added.
You may also have noticed that the default command this container runs is
fresh instead of
go run main.go.
However, if you have ever used Gin, you already know that hot reload is not a default feature in this framework. As this is a desired characteristic in a development environment, we will use fresh, which is a command-line tool that builds and restarts a web application every time a file is updated.
Finally, the only step left before getting into our docker-compose file is setting up the NGINX server. First of all, create an
nginx/ directory with a file named default.conf:
Default configuration of our NGINX server
In short, this file will tell NGINX to create a server listening on port 80, and then redirect the requests to one of its upstream servers.
If the request’s endpoint starts with
/api/, NGINX will proxy pass the request to our API server.
If the endpoint is anything else, the client server will be the one getting the request. The remaining rule is related to WebSockets (SockJS), but initially, it wouldn't be a big deal if you decided to skip it.
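The original embedded configuration isn't reproduced here, but the rules above could be sketched as follows. The upstream names (client, api) are assumed to match the docker-compose service names, and the ports (3000 for the create-react-app dev server, 5000 for the API) and the /api/ prefix-stripping rewrite are my assumptions:

```nginx
upstream client {
    server client:3000;
}

upstream api {
    server api:5000;
}

server {
    listen 80;

    # Default: everything goes to the React dev server
    location / {
        proxy_pass http://client;
    }

    # SockJS endpoint used by create-react-app for hot reload (optional rule)
    location /sockjs-node {
        proxy_pass http://client;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    # Requests starting with /api/ are forwarded to the Go API,
    # with the /api prefix stripped before proxying
    location /api {
        rewrite /api/(.*) /$1 break;
        proxy_pass http://api;
    }
}
```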
The Dockerfile.dev for the NGINX server is the simplest one:
```dockerfile
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
```
Now that we are done with all our Dockerfiles, we can move on to the last step: the Docker Compose configuration.
Docker Compose is a tool for defining and running multi-container Docker applications.
In this file, we define the Dockerfiles to build our containers from, the ports they expose, the volumes they mount, etc.
Although this file may seem a little intimidating at first, it's really just a compilation of concepts you are probably already familiar with:
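The original embedded file isn't reproduced here, so below is a sketch of what a docker-compose.yaml matching the description could look like. The service names, host ports, volume names, and environment variables are my own illustrative choices:

```yaml
version: "3"
services:
  nginx:
    build:
      context: ./nginx
      dockerfile: Dockerfile.dev
    restart: always          # the single point of entry must stay up
    ports:
      - "8080:80"            # host port is an arbitrary choice

  client:
    build:
      context: ./client
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules    # keep container's node_modules out of the bind mount
      - ./client:/app        # bind mount enables hot reloading

  api:
    build:
      context: ./api
      dockerfile: Dockerfile.dev
    restart: on-failure
    volumes:
      - ./api:/app           # bind mount so fresh can detect file changes

  postgres:
    image: postgres
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}  # read from a .env file
    volumes:
      - pgdata:/var/lib/postgresql/data         # data persistence

  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data                          # data persistence

volumes:
  pgdata:
  redisdata:
```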
The first thing that I would like to address is the different restart policies we can assign to the Docker containers.
There are four policies: no, always, on-failure, and unless-stopped. Note that I've assigned always to the NGINX server and on-failure to the API server, but unless you have a reason to restart another service, the only one that really matters in our application is the NGINX server's.
As our NGINX server is the one redirecting traffic to the other containers, there’s no way any other service can work while it is down.
Furthermore, I’ve added port mapping to all the containers. Although the ports I’ve chosen are mostly the standard ones, they may be changed to any other port you prefer.
The volumes added here satisfy one of two needs: they either enable hot reloading (API and client containers) or allow data persistence (Redis and Postgres containers).
Finally, passing environment variables to the containers using Docker Compose is fairly easy.
As you can see, it is possible to add a variable directly in the docker-compose file or, even better, to create a .env file where you store the variables, especially the ones that should not be shared (passwords, secret keys…).
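Both styles could look like the fragment below; the variable names are illustrative, borrowed from the official postgres image:

```yaml
# Fragment of docker-compose.yaml showing both styles
services:
  postgres:
    environment:
      - POSTGRES_USER=postgres                   # set inline in the compose file
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}   # substituted from the .env file
```

The matching .env file sits next to docker-compose.yaml and contains a line such as `POSTGRES_PASSWORD=<your-password>`; Docker Compose loads it automatically from the project directory.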
Once you are done with all the required files, it’s time to run your project and test that everything is working properly. To do that, just run the following command in the root directory:
docker-compose up --build
You can check that the containers have been successfully created by running:

docker ps
When you are done developing, you can stop all the containers at once by running:

docker-compose down