How to build a development environment using Docker

We have all run into this problem in development: a feature works fine locally, but once it is deployed to a server or pulled down by a teammate, it no longer works.

Most of these failures are dependency differences caused by different systems. To eliminate them, we can build a unified development environment on top of Docker.

For Docker basics, please refer to the Docker tutorial.

1. Benefits of using Docker

  • Easy deployment
    Setting up an environment often takes a long time. In team collaboration, every new member has to repeat this avoidable work, and environment setup frequently goes wrong, leaving the project code unable to run. With Docker, only the first person has to write the development container; everyone else simply pulls it to get a working project environment, which avoids a pointless waste of time.
  • Isolation
    We often deploy multiple project environments on one machine. Installed directly, they can interfere with each other: one project may require Node.js 14 while another requires Node.js 12, and the two can never coexist on the host. With Docker, each project runs in its own container, so this problem disappears. Docker also ensures that each application uses only the resources allocated to it (CPU, memory, and disk space), so one greedy process cannot exhaust the machine and degrade or completely stall the other applications.
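
As a sketch of the resource limits mentioned above (the image name and command are only examples), the `--cpus` and `--memory` flags of `docker run` cap what a container may consume:

```shell
# Cap the container at 1.5 CPU cores and 512 MB of RAM.
# "node:14" is just an example image; use whatever image your project needs.
docker run --rm --cpus=1.5 --memory=512m node:14 node -e "console.log('hello')"

# Inspect the memory limit of a running container (example container name "myapp"):
docker inspect --format '{{.HostConfig.Memory}}' myapp
```

With these flags, a runaway build inside one container cannot starve the other projects running on the same machine.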

2. Install Docker  

1) Installing Docker for Linux

Taking an Arch-based distribution such as Manjaro as an example; other distributions are similar, just substitute their package manager.

# Optional: select fast domestic mirrors for speed-up in China (pacman-mirrors is a Manjaro tool)
$ sudo pacman-mirrors -i -c China -m rank

# Installing Docker using Pacman
$ sudo pacman -S docker

# Establish docker user group. By default, the docker command uses Unix socket to communicate with the docker engine. Only root users and docker group users can access the Unix socket of docker engine. For security reasons, the root user is not directly used on Linux systems. Therefore, it is better to add users who need to use docker to the docker user group.
$ sudo groupadd docker

# Add the current user to the docker group, exit the current terminal and log in again
$ sudo usermod -aG docker $USER

# Test for successful installation
$ docker run --rm hello-world

2) Windows 10

Installing Docker on Windows 10 is easy. There are several ways:

Manual download and installation:

Download Docker Desktop for Windows from the official website.

After downloading, double-click Docker Desktop Installer.exe to start installation.

Install using winget:

$ winget install Docker.DockerDesktop


Run Docker:

Enter Docker in the Windows search bar and click Docker Desktop to start running.

After Docker starts, a whale icon will appear in the Windows taskbar.


Wait a moment; when the whale icon stops animating, Docker has started successfully. You can then open PowerShell/CMD/Windows Terminal and use Docker.

3) macOS

Install using Homebrew:

Homebrew's Cask already supports Docker Desktop for Mac, so you can easily use Homebrew Cask for installation:

$ brew install --cask docker

Manual download and installation:

If you prefer to download manually, get Docker Desktop for Mac from the official website.

Please note that you must download the build matching your chip: the Apple Silicon (M1) and Intel versions are not interchangeable.

Like other macOS software, installation is simple: double-click the downloaded .dmg file, then drag the whale icon named Moby into the Applications folder (you will need to enter your user password).

Run Docker:

Find the Docker icon in the application and click Run.


After running, you will see an additional whale icon in the menu bar in the upper right corner, which indicates the running status of Docker.

Once Docker is installed and started, we can check the installed version from the terminal:

$ docker --version

3. Docker source change

Docker's default registry is hosted overseas, so access from China is relatively slow. Replacing it with a domestic mirror improves the speed of image pulls.

1) Linux source swap

Under Linux it is simple: create the file /etc/docker/daemon.json and write the configuration:

$ sudo vi /etc/docker/daemon.json

# Enter the mirror source. The URL below is a placeholder; substitute the mirror you want to use.
# If you only need one mirror, a single string inside the array is enough.
{
    "registry-mirrors": [
        "https://<your-mirror-host>"
    ]
}

# Save with :wq, exit, and restart docker
$ sudo systemctl restart docker
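
To confirm the mirror configuration took effect, `docker info` lists the registered mirrors (a quick check, assuming the daemon restarted cleanly):

```shell
# "Registry Mirrors" should list the mirror(s) you configured
docker info | grep -A 2 "Registry Mirrors"
```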

2) Changing the source on Windows and macOS

Both Windows and macOS use Docker Desktop, so you can configure it directly in the GUI.

Open the Docker Desktop settings and click Docker Engine.

In the editor pane on the right, add the mirror source to the JSON configuration (the URL below is a placeholder; use your preferred mirror), then apply and restart:

{
    "registry-mirrors": [
        "https://<your-mirror-host>"
    ]
}

4. Write Dockerfile

After installing Docker, we can write our own project development environment. This article uses a front-end development environment as the example Dockerfile.

Include environment:

  • node.js 14.17
  • npm 6.14
  • yarn 1.22

# In front-end development we often need shell commands, so a fairly complete base environment matters; Ubuntu is therefore chosen as the base. If you care about image size, pick a slimmer base image instead.
FROM ubuntu
LABEL org.opencontainers.image.authors=""

# Setting environment variables 
ENV DEBIAN_FRONTEND noninteractive

# Set time zone
ARG TZ=Asia/Shanghai

RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

# Operate with root user
USER root

# Switch the apt source to the Aliyun mirror to speed things up in China
RUN sed -i "s/archive.ubuntu.com/mirrors.aliyun.com/g" /etc/apt/sources.list && \
    sed -i "s/security.ubuntu.com/mirrors.aliyun.com/g" /etc/apt/sources.list
RUN apt-get clean

# Update the source and install the corresponding tools
RUN apt-get update && apt-get install -y \
    zsh \
    vim \
    git \
    wget \
    curl \
    python

# Installing oh-my-zsh makes the shell easier to use when entering the container later
RUN git clone https://github.com/ohmyzsh/ohmyzsh.git ~/.oh-my-zsh && \
    cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc && \
    git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions && \
    git clone https://github.com/zsh-users/zsh-syntax-highlighting ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting && \
    sed -i 's/^plugins=(/plugins=(zsh-autosuggestions zsh-syntax-highlighting z /' ~/.zshrc && \
    chsh -s /bin/zsh

# Create me user
RUN useradd --create-home --no-log-init --shell /bin/zsh -G sudo me && \
    adduser me sudo && \
    echo 'me:password' | chpasswd

# Install oh-my-zsh for the me user
USER me
RUN git clone https://github.com/ohmyzsh/ohmyzsh.git ~/.oh-my-zsh && \
    cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc && \
    git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions && \
    git clone https://github.com/zsh-users/zsh-syntax-highlighting ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-syntax-highlighting && \
    sed -i 's/^plugins=(/plugins=(zsh-autosuggestions zsh-syntax-highlighting z /' ~/.zshrc

# Install nvm and node
ENV NVM_DIR=/home/me/.nvm
ENV NODE_VERSION=14.17.6

RUN mkdir -p $NVM_DIR && \
    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash && \
    . $NVM_DIR/nvm.sh && \
    nvm install ${NODE_VERSION} && \
    nvm use ${NODE_VERSION} && \
    nvm alias default ${NODE_VERSION} && \
    ln -s `npm bin --global` /home/me/.node-bin && \
    npm install --global nrm && \
    nrm use taobao && \
    echo '' >> ~/.zshrc && \
    echo 'export NVM_DIR="$HOME/.nvm"' >> ~/.zshrc && \
    echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # This loads nvm' >> ~/.zshrc

# Install yarn
RUN curl -o- -L https://yarnpkg.com/install.sh | bash; \
    echo '' >> ~/.zshrc && \
    echo 'export PATH="$HOME/.yarn/bin:$PATH"' >> ~/.zshrc

# Add NVM binaries to root's .bashrc
USER root

RUN echo '' >> ~/.zshrc && \
    echo 'export NVM_DIR="/home/me/.nvm"' >> ~/.zshrc && \
    echo '[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"  # This loads nvm' >> ~/.zshrc && \
    echo '' >> ~/.zshrc && \
    echo 'export YARN_DIR="/home/me/.yarn"' >> ~/.zshrc && \
    echo 'export PATH="$YARN_DIR/bin:$PATH"' >> ~/.zshrc

# Add PATH for node & YARN
ENV PATH $PATH:/home/me/.node-bin:/home/me/.yarn/bin

# Remove the apt lists to reduce the final image size
RUN rm -rf /var/lib/apt/lists/*

WORKDIR /var/www

After the Dockerfile is finished, build the image:

$ docker build -t frontend/react:v1 .

Once the build completes, you can run the image directly:

# Run as me, recommended method
docker run --user=me -it frontend/react:v1 /bin/zsh

# Run as root
docker run -it frontend/react:v1 /bin/zsh

5. Prepare docker-compose.yml

During development we usually need several containers working together, for example the app container plus MySQL. A docker-compose.yml organizes them better:

version: '2'

services:
  react:
    build:
      context: .
      dockerfile: react/Dockerfile
    tty: true
    ports:
      - 30000:3000
    volumes:
      - ./react/www:/var/www
    networks:
      - frontend

  mysql:
    image: mysql:5.7
    ports:
      - 33060:3306
    volumes:
      - ./mysql/data:/var/lib/mysql
      - ./mysql/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
    environment:
      - MYSQL_ROOT_PASSWORD=password
    networks:
      - frontend

# Containers on the same network can reach each other directly by container name
networks:
  frontend:
    driver: bridge


6. Start the container

After writing the above Dockerfile and docker-compose.yml, you can start development happily!

# Enter the directory containing docker-compose.yml
$ cd frontend

# Start the services defined in docker-compose.yml in the background; images that have not been built yet are built first
$ docker-compose up -d

# Enter the react container for an interactive shell
$ docker-compose exec --user=me react /bin/zsh
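
A few other docker-compose commands are handy in day-to-day work (sketched here; they operate on the services defined in the compose file above):

```shell
# Follow the logs of all services (Ctrl+C to stop following)
docker-compose logs -f

# Show the status of the services
docker-compose ps

# Stop and remove the containers and the network (named volumes are kept)
docker-compose down
```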


To test whether the containers can reach each other, you can use the following file (it relies on the mysql npm package; the test database and users table need to be created by yourself first):

// index.js
const mysql = require('mysql')

const connection = mysql.createConnection({
    host: 'mysql',   // the service name from docker-compose.yml
    user: 'root',
    password: 'password',
    database: 'test',
})

connection.query(`SELECT * FROM users`, function (error, results, fields) {
    if (error) throw error;
    console.log(results)
})

connection.end()

After running it, you can see the results:

$ node index.js
[ RowDataPacket { id: 1, name: 'Caster' } ]


7. Summary

It is very convenient to use Docker to build a development environment. Once built, it can be used many times on many machines. Even if you want to reinstall the system, you don't have to configure it repeatedly.

If you prefer not to write a Dockerfile, you can also start a container, configure it interactively inside, and then use docker save/export to export it.
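
A sketch of that manual route (container and image names here are only examples): commit the configured container to an image, then save it to a tarball that teammates can load:

```shell
# Snapshot a configured container (example name "devbox") into an image
docker commit devbox frontend/devbox:v1

# Export the image to a tar archive...
docker save -o devbox.tar frontend/devbox:v1

# ...which anyone can load on their own machine
docker load -i devbox.tar
```

A Dockerfile is still the more reproducible option, since the build steps stay documented.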

Reference material:

1. Docker tutorial

2. Docker build development environment


Posted on Fri, 03 Dec 2021 12:35:34 -0500 by ClaytonBellmor