Table of contents
- Introduction
- Installing Docker and Setting Up the Machine
- Setting up the Frontend (React Application)
- Setting Up the Backend (Node.js + Express)
- Networking: Connecting the Containers
- Creating a MongoDB Container with Environment Variables
- Run the Frontend and Backend Containers
- Handling CORS Issues
- Bringing It All Together with Docker Compose
- Improvements and Optimizations
- Conclusion
- Resources
Introduction
In my previous blog about containerization basics, I explained why containerization is important in modern software architecture. If you haven't read it yet, be sure to check it out: Exploring the Basics of Virtualization and Containers. Now, let's take a practical approach and learn how to Dockerize a MERN (MongoDB, Express, React, Node.js) stack application.
Brief introduction to the three-tier architecture
Before we dive into Dockerization, it's important to understand the three-tier architecture:
- Presentation Layer (Frontend, client-facing): The React application
- Application Layer (Backend, server-facing): The Node.js and Express server
- Data Layer (Database): MongoDB for data storage
MERN essentially stands for MongoDB-Express-React-Node.js. Docker helps in containerizing these components separately, ensuring portability, scalability, and ease of deployment.
Understanding the MERN Stack
The MERN stack is a popular JavaScript-based technology stack used for building full-stack web applications. It consists of:
- MongoDB: A NoSQL database that stores data in flexible, JSON-like documents.
- Express.js: A lightweight framework for building backend APIs with Node.js.
- React.js: A powerful JavaScript library for creating interactive user interfaces.
- Node.js: A runtime environment that allows JavaScript to run on the server side.
MERN stack applications are known for their speed, scalability, and ability to use JavaScript across the entire development stack, making them an excellent choice for modern web applications.
Project Overview
This project is forked from MongoDB's demo project - https://github.com/mongodb-developer/mern-stack-example and structured to follow a three-tier architecture. Below is an overview of the directory structure to help understand the project setup:
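A simplified sketch of the layout we are aiming for (the actual repo contains more files):

mern-stack-example/
├── client/               # React frontend (Presentation layer)
│   └── Dockerfile
├── server/               # Node.js + Express backend (Application layer)
│   └── Dockerfile
└── docker-compose.yaml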
We will create separate Dockerfiles for the client and server parts of the code, with the docker-compose.yaml file placed outside the individual directories. This structured approach ensures a clear separation of concerns, making it easier to manage and scale each component individually.
Installing Docker and Setting Up the Machine
Install Docker on the machine and add the current user to the docker group. Running newgrp docker changes the real and effective group ID to the Docker group, letting us use Docker commands without logging out and back in.
sudo apt-get update
curl -fsSL https://get.docker.com/ | sh
sudo usermod -aG docker $USER
newgrp docker
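To confirm the installation works, a quick smoke test (hello-world is Docker's official test image):

docker --version
docker run hello-world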
Setting up the Frontend (React Application)
A multi-stage build in Docker helps create optimized and smaller final images. The frontend Dockerfile uses a two-stage process:
Stage 1: Build
- Base Image: Uses node:lts-alpine, a lightweight version of Node.js.
- Working Directory: Sets up /usr/src/app to store application files.
- Dependency Management: Copies package.json and package-lock.json first so this layer is cached and dependencies are reinstalled only when they change, then runs npm install to install dependencies.
- Application Code: Copies all project files and runs npm run build to generate production-ready static files.
Stage 2: Run
- Base Image: Uses nginx:alpine for serving static files efficiently.
- Copying Build Artifacts: Copies the dist folder from the build stage to Nginx's serving directory (/usr/share/nginx/html).
- Exposing Port 80: Ensures the container listens on port 80.
- Start Command: Uses CMD ["nginx", "-g", "daemon off;"] to keep Nginx running in the foreground.
### STAGE 1: Build ###
FROM node:lts-alpine AS build
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "./"]
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
🔹 Why use Multi-Stage Builds?
- Reduces final image size by excluding unnecessary development dependencies.
- Separates the build environment from the production environment for better security and efficiency.
🔹 Why use Nginx?
- Can act as a reverse proxy, handling requests efficiently.
- Helps with domain hosting and improves security via SSL/TLS certificates.
- Enhances performance by serving static files efficiently.
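One optional extra, not part of the Dockerfile above: if the React app uses client-side routing, Nginx's default config will return 404 when a non-root route is refreshed. A minimal default.conf sketch that falls back to index.html (you would copy it into the image with COPY default.conf /etc/nginx/conf.d/default.conf):

server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        # Serve the requested file if it exists, otherwise hand the
        # route to the SPA's entry point
        try_files $uri $uri/ /index.html;
    }
}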
Build the Docker Image
docker build -t frontend-app:0.0.1 client/
We build the Docker image of the frontend-app using the build context (the Dockerfile and code files) located in the client directory.
Setting Up the Backend (Node.js + Express)
The Node.js backend runs in a lightweight containerized environment using Alpine Linux for efficiency. The backend is responsible for handling API requests and communicating with MongoDB.
Changes in backend code
Since the backend connects to MongoDB, it needs the correct database URI. Change the URI in the connection.js file:
const URI = "mongodb://user:pass@mongodb:27017/merndb?authSource=admin";
The URI follows the pattern mongodb://<USERNAME>:<PASSWORD>@<DB_HOST>:<DB_PORT>/<DB_NAME>?authSource=admin. Note these values down, as we will use them to create and set up our MongoDB container.
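For reference, a minimal sketch of what connection.js might look like after the change (the real file in the forked repo differs in its details; this assumes the server is an ES module using the official mongodb driver):

import { MongoClient } from "mongodb";

// mongodb://<USERNAME>:<PASSWORD>@<DB_HOST>:<DB_PORT>/<DB_NAME>?authSource=admin
const URI = "mongodb://user:pass@mongodb:27017/merndb?authSource=admin";
const client = new MongoClient(URI);

let conn;
try {
  // Top-level await works here because the project is an ES module
  conn = await client.connect();
} catch (e) {
  console.error(e);
}

const db = conn.db("merndb");
export default db;

Note that the host is mongodb: that is the container name, which Docker's internal DNS will resolve once both containers join the same network (covered in the networking section below).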
Breakdown of the Backend Dockerfile
- Base Image: Uses node:lts-alpine, a lightweight version of Node.js.
- Working Directory: Sets up /usr/src/app as the working directory.
- Dependency Management: Copies package.json and package-lock.json first, then runs npm install to fetch dependencies.
- Application Code: Copies the rest of the source code into the container.
- Port Exposure: Opens port 5050 to allow external traffic to reach the backend.
- Startup Command: Uses CMD ["npm", "start"] to launch the application.
# Use the official Node.js image as the base image
FROM node:lts-alpine
# Set the working directory
WORKDIR /usr/src/app
# Copy package.json and package-lock.json
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose the port the app runs on
EXPOSE 5050
# Command to run the application
CMD ["npm", "start"]
🔹 Why use Alpine?
- Alpine reduces the image size significantly, improving performance and security.
- It ensures a minimal attack surface by stripping unnecessary OS components.
Build the Docker Image
docker build -t backend-app:0.0.1 server/
We build the Docker image of the backend-app by pointing the build context at the server directory.
Networking: Connecting the Containers
To enable seamless communication between the frontend, backend, and database, we create a Docker network. This ensures:
- Containers can communicate using their service names (e.g., mongodb, backend).
- Internal DNS resolution handles connectivity within the network.
- Containers on different networks remain isolated from each other (each network is a separate virtual bridge), improving security.
docker network create mern-nw
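To verify the network was created, and later to see which containers are attached to it:

docker network inspect mern-nw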
Creating a MongoDB Container with Environment Variables
Let us now create a MongoDB container that exposes the default port 27017 and name it mongodb. We will also provide the environment variables needed to initialize the database.
Note: It is important to attach all the containers to the mern-nw network so that they can communicate with one another using their container names.
docker run -d -p 27017:27017 \
--name mongodb \
--network mern-nw \
-e MONGO_INITDB_ROOT_USERNAME=user \
-e MONGO_INITDB_ROOT_PASSWORD=pass \
mongo:6.0.20
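To check that MongoDB started correctly, inspect its logs or open a shell with mongosh, which ships with the official mongo image:

docker logs mongodb
docker exec -it mongodb mongosh -u user -p pass --authenticationDatabase admin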
The values of these environment variables come from the connection URI we configured in the backend's connection.js.
Run the Frontend and Backend Containers
It’s time to get the frontend and backend running. We can now create the containers from the images we built earlier.
docker run -d -p 5050:5050 \
--name backend-app \
--network mern-nw \
backend-app:0.0.1

docker run -d -p 80:80 \
--name frontend-app \
--network mern-nw \
frontend-app:0.0.1
The application should now be accessible at http://<IP of the server>.
Note: Port 80 should be open as an inbound rule in the security group (when hosting on a cloud VM).
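A quick sanity check that both containers are up and the backend answers (the /record route comes from the forked demo app; substitute whichever route your backend serves):

docker ps
curl http://localhost:5050/record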
Handling CORS Issues
When running the frontend, requests to the backend may fail with network errors due to CORS (Cross-Origin Resource Sharing) restrictions. The issue arises because the frontend code uses localhost to connect to the backend (a leftover from the dev environment), and localhost resolves to the client machine's own environment rather than the server.
🔹 How to fix this?
- Instead of localhost, use the server's IP address in API requests.
- Ensure the backend explicitly allows requests from the frontend's origin (see https://mufazmi.medium.com/solving-cors-issues-in-your-node-js-application-836506e63871 and the sketch below).
- Whitelist the port as well when using the server's IP.
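For the second point, here is a hedged sketch of allowing the frontend's origin with the cors middleware in Express (run npm install cors first; the origin value is a placeholder to replace with your frontend's actual address):

import express from "express";
import cors from "cors";

const app = express();

// Allow requests only from the frontend's origin.
// cors() with no options allows all origins: simpler, but less strict.
app.use(cors({ origin: "http://YOUR_SERVER_IP" }));

app.listen(5050, () => console.log("Server listening on port 5050"));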
Note: Exposing the backend port is a serious risk because it increases the attack surface for potential attackers. This could lead to compromising our application. In an upcoming blog in this series, we will learn how to protect the port from being exposed to the internet by using a reverse proxy.
The application should now be up and running, and we should be able to use it. However, setting up the network and running the containers separately was quite tedious, wasn't it? What if I told you there's a way to automate this process with just a single command? Let's find out!
Bringing It All Together with Docker Compose
Now that all the services are defined, we use Docker Compose to bring them up with a single command. The docker-compose.yaml
file:
- Defines frontend, backend, and database as services.
- Binds them together via a custom bridge network.
- Mounts a persistent volume for MongoDB to store data.
version: "3.8"
services:
frontend-app:
build:
context: ./client/
dockerfile: Dockerfile
container_name: frontend-app
ports:
- "80:80"
networks:
- mern-nw
backend:
build:
context: ./server/
dockerfile: Dockerfile
container_name: backend-app
ports:
- "5050:5050"
networks:
- mern-nw
depends_on:
- mongodb
mongodb:
image: mongo:6.0.20
container_name: mongodb
ports:
- "27017:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: user
MONGO_INITDB_ROOT_PASSWORD: pass
networks:
- mern-nw
volumes:
- mongodb-data:/data/db
networks:
mern-nw:
driver: bridge
volumes:
mongodb-data:
Doesn't this file look neat and well-documented? This is the power of Infrastructure as Code, allowing us to version control our infrastructure like code.
Take note of the volumes section in the docker-compose.yaml file. It stores our MongoDB data persistently on disk as a Docker volume (by default under /var/lib/docker/volumes) by mapping the container's data directory to the named volume. Since containers are ephemeral by nature and can stop due to resource constraints or other issues, volumes ensure our data remains safe.
Running the Services
Make sure the containers created earlier are stopped and removed so that their ports are free to be mapped again.
docker stop $(docker ps -q)
docker rm $(docker ps -qa)
To start all services, run:
docker compose up --build -d
The application should now be up and running.
To stop and remove the containers:
docker compose down
Note: This does not delete the volumes; only the containers are stopped and removed. If we run docker compose up again, the retained data will still be there.
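You can confirm the volume survived with the commands below. Compose prefixes the volume name with the project name (typically the directory name), so the exact name may differ:

docker volume ls
docker volume inspect <project-name>_mongodb-data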
To remove volumes:
docker compose down -v
Improvements and Optimizations
Once everything works, we can refine the setup:
- Use the npm cache and Docker layer caching to speed up builds.
- Leverage Nginx to reverse proxy backend requests on port 5050 (to be covered in an upcoming blog). This will enable us to hide the backend port from the internet.
- Run containers as non-root users for security (see the sketch after this list).
- Add maintainer labels in Dockerfiles for better metadata.
- Push images to Docker Hub (to be covered in an upcoming CI/CD-focused blog).
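As a taste of the non-root and label points, here is a hedged variant of the backend Dockerfile; the maintainer value is a placeholder, and the unprivileged node user ships with the official Node images:

FROM node:lts-alpine
LABEL maintainer="Your Name <you@example.com>"
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# --chown makes the files readable by the unprivileged user
COPY --chown=node:node . .
# Drop root privileges before starting the app
USER node
EXPOSE 5050
CMD ["npm", "start"]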
🔹 Important: Always get the DevOps process working first, then optimize and refactor.
Conclusion
Dockerizing a MERN stack application makes deployment seamless, scalable, and efficient. By breaking the application into separate containers for the frontend, backend, and database, we ensure modularity and easy management.
In the next blogs, I’ll cover how to dockerize more applications along with setting up CI/CD pipelines and optimizing Nginx as a backend reverse proxy. Stay tuned! 🚀
Resources
- GitHub - GitHub repository of this project
- Docker Documentation - Comprehensive guide on Docker commands, best practices, and tutorials.
- Docker Compose Documentation - Learn how to orchestrate multi-container applications.
- Dockerizing a Node.js Application - Official Node.js guide to running an app in Docker.
- Best Practices for Writing Dockerfiles - Learn how to optimize Docker images.