Table of contents
- Understanding the Three-Tier Architecture
- Writing the Dockerfile for Spring Boot Backend
- Writing the Dockerfile for React Frontend
- Modifying nginx.conf for Reverse Proxy
- Updating application.yaml for Database Connectivity
- Writing docker-compose.yaml
- Automating Deployment with GitLab CI/CD
- Deploying the Application
- Improvements and Optimizations
- Conclusion
In my previous blog, I discussed Dockerizing a MERN stack application, covering container networking, multi-stage builds, and Docker Compose. If you're new to containerization, I recommend checking out that post to understand how to build and run Docker containers.
This time, we’ll Dockerize a Spring Boot backend with a React frontend, while ensuring that:
- The backend and database ports are not exposed to the outside world.
- Nginx acts as a reverse proxy, forwarding API requests to Spring Boot.
- We use Docker Compose for managing all services.
- CI/CD is automated using GitLab CI/CD with a self-hosted runner on EC2.
Understanding the Three-Tier Architecture
The three-tier architecture separates this application into three layers:
- Frontend (Presentation Layer) – React
  - Handles the UI and user interactions.
  - Communicates with the backend through API calls.
- Backend (Application Layer) – Spring Boot
  - Processes business logic and API requests.
  - Interacts with the database for CRUD operations.
- Database Layer – PostgreSQL
  - Stores and manages application data.
This separation ensures scalability, maintainability, and security by decoupling components.
In the architecture above, Nginx acts as a reverse proxy for our three-tier infrastructure. It runs as a container listening on port 80 and serves the React app, built with `npm run build`. Nginx also forwards any request to `/api` to the Spring Boot backend container running on port 8080. The Spring Boot application uses PostgreSQL, which also runs as a container. We keep the backend application and database hidden from the outside world by using Nginx's reverse proxy.
Project Structure
This project is a clone of an existing sample project. The directory structure is shown in the image above: we have two directories, one hosting the Spring Boot application and the other containing the React application code.
Now, let's move on to containerizing each tier of the application. 🚀
Writing the Dockerfile for Spring Boot Backend
Let's take a look at the `pom.xml` file to see which Java version was used to write the code. This is important because the code might not compile or run correctly on a different version of the runtime environment. We see that the Java version used is 21, so we need to compile and run the code with a matching Java version.
We use a multi-stage build approach:
- The first stage builds the JAR file using Maven.
- The second stage runs the JAR using OpenJDK.

🔹 Key points:
- The final stage uses only a Java runtime image without the build dependencies, keeping it smaller in size.
- No ports are exposed externally (only within the internal Docker network).
```dockerfile
### Build Stage ###
# Use an official Maven image for the build
FROM maven:3.9.8-eclipse-temurin-21 AS build

# Set the working directory in the container
WORKDIR /app

# Copy src directory
COPY src /app/src

# Copy pom.xml file
COPY pom.xml /app

# Install dependencies and build using mvn
RUN mvn clean install -DskipTests

### Run Stage ###
# Use an official Java image as runtime
FROM openjdk:21

# Copy the jar artifact to the working directory
COPY --from=build /app/target/*.jar /app/app.jar

# Set the working directory
WORKDIR /app

# Expose the port 8080 for backend
EXPOSE 8080

# Run the jar file
CMD ["java", "-jar", "app.jar"]
```
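One subtlety in the run stage: `COPY --from=build /app/target/*.jar /app/app.jar` assumes Maven leaves exactly one JAR in `target/`. If a plugin ever produces a second one, the wildcard copy becomes ambiguous. A quick, hedged shell sketch of that single-JAR assumption (paths are illustrative):

```shell
# Simulate the Maven output directory (illustrative path)
mkdir -p /tmp/jar-demo/target
touch /tmp/jar-demo/target/app-1.0.jar

# The Dockerfile's wildcard COPY is only well-defined when
# exactly one file matches target/*.jar
count=$(ls /tmp/jar-demo/target/*.jar | wc -l)
echo "jars found: $count"
```

If this count ever exceeds one, it is safer to copy the JAR by its exact name instead of a wildcard.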
Writing the Dockerfile for React Frontend
The frontend follows a two-stage build:
- Node.js builds the React app into static files.
- Nginx serves the static files and acts as a reverse proxy for the backend.

🔹 Key points:
- The React application is built separately, keeping the final image lightweight.
- Nginx handles frontend requests and proxies backend API calls.
```dockerfile
# Use an official Node.js image for the build
FROM node:16 AS build

# Set the working directory in the container
WORKDIR /app

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy the rest of the React application
COPY . .

# Build the React app for production
RUN npm run build

# Use an Nginx image to serve the static files
FROM nginx:alpine

# Copy the React build folder to Nginx's html directory
COPY --from=build /app/build /usr/share/nginx/html

# Copy the custom nginx.conf to the container
COPY nginx.conf /etc/nginx/conf.d/default.conf

# Expose port 80 for the frontend
EXPOSE 80

# Start Nginx
CMD ["nginx", "-g", "daemon off;"]
```
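A small addition worth making alongside this Dockerfile is a `.dockerignore` file, so `node_modules` and previous build output never get sent to the Docker daemon as build context. A minimal sketch (the entries are typical examples, adjust to the project):

```
node_modules
build
.git
*.md
```

Besides shrinking the build context, this prevents `COPY . .` from overwriting the freshly installed `node_modules` inside the image with the host's copy.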
Modifying nginx.conf for Reverse Proxy
To prevent exposing the backend's port externally, we configure Nginx to forward API requests (/api) to the backend within the Docker network.
🔹 Key Configurations in `nginx.conf`:
- Static files are served from `/usr/share/nginx/html`.
- Requests to `/api/` are proxied to `fullstack-backend:8080` inside the Docker network.
- CORS headers are set to allow frontend-backend communication.
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name localhost;

    #access_log /var/log/nginx/host.access.log main;

    root /usr/share/nginx/html;
    index index.html index.htm;

    # Serve React app (SPA)
    location / {
        try_files $uri $uri/ /index.html; # Try to serve static files, otherwise return index.html
    }

    #error_page 404 /404.html;

    location /api/ {
        proxy_pass http://fullstack-backend:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # CORS headers
        add_header 'Access-Control-Allow-Origin' 'http://localhost:80' always;
        add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS' always;
        add_header 'Access-Control-Allow-Headers' 'Origin, Content-Type, X-Auth-Token, Authorization' always;
        add_header 'Access-Control-Allow-Credentials' 'true' always;

        # Handle OPTIONS requests for pre-flight checks
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' 'http://localhost:80';
            add_header 'Access-Control-Allow-Methods' 'GET, POST, PUT, DELETE, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'Origin, Content-Type, X-Auth-Token, Authorization';
            add_header 'Access-Control-Allow-Credentials' 'true';
            add_header 'Content-Length' 0;
            return 204;
        }
    }

    # Redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
```
Since we are using a reverse proxy at `/api` for our backend application, we can access the API at `localhost/api` instead of `localhost:8080/api`. Nginx takes care of the rest and automatically routes the request to our backend container. Therefore, we need to update our frontend code to use `localhost` instead of `localhost:8080`.

Note: We use the `try_files` directive in Nginx to attempt to resolve the URI; if that fails, Nginx falls back to `index.html`, and from there React Router takes over and manages the routing.

Also note that the `fullstack-backend:8080` target used in the reverse proxy must match the name configured for the backend container.
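Since that name has to stay in sync in two places, a small shell check can catch drift between `nginx.conf` and `docker-compose.yaml`. The sketch below runs against sample snippets in `/tmp`; in the real project you would point it at the actual files:

```shell
# Sample snippets standing in for the real config files
cat > /tmp/nginx.conf <<'EOF'
location /api/ {
    proxy_pass http://fullstack-backend:8080;
}
EOF
cat > /tmp/docker-compose.yaml <<'EOF'
services:
  fullstack-backend:
    container_name: fullstack-backend
EOF

# Pull out the proxy target host and the declared container name
proxy_host=$(grep -o 'proxy_pass http://[^:;]*' /tmp/nginx.conf | sed 's|proxy_pass http://||')
container=$(grep 'container_name:' /tmp/docker-compose.yaml | awk '{print $2}')

# If these differ, Nginx cannot resolve the backend by name
echo "proxy=$proxy_host container=$container"
```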
Updating application.yaml for Database Connectivity
In a Spring Boot application, the `application.yaml` file is used for centralized configuration management. It lets developers set up configurations for different environments, including database connections, server properties, logging levels, security settings, and more. Before we start containerizing the backend application, we need to make a few changes to the `application.yaml` file.

Spring Boot needs to connect to the PostgreSQL container inside Docker. Instead of using `localhost`, we reference the database service name (`db`) in `application.yaml`:
```yaml
spring:
  datasource:
    url: jdbc:postgresql://db:5432/mydatabase
    username: postgres
    password: mysecretpassword
```
🔹 Why?
- The backend can now reach PostgreSQL inside the internal Docker network.
- No need to expose port 5432 externally!
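These same properties can instead be injected as environment variables, as the compose file in the next section does. Spring Boot's relaxed binding maps an upper-case, underscore-separated variable name onto the dotted property key; for simple keys the transformation is just case-folding plus `_` → `.`, which a one-liner can illustrate:

```shell
# Spring Boot relaxed binding (simple keys):
# SPRING_DATASOURCE_URL -> spring.datasource.url
env_name="SPRING_DATASOURCE_URL"
prop_key=$(echo "$env_name" | tr 'A-Z_' 'a-z.')
echo "$prop_key"
```

This is why the `SPRING_DATASOURCE_*` variables in `docker-compose.yaml` override the values in `application.yaml` at container start.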
Writing docker-compose.yaml
Now that we have Dockerfiles for the frontend and backend, let's define the services in `docker-compose.yaml`.
🔹 Key Configurations:
- No external ports are exposed for the backend and database.
- The frontend communicates with the backend using the Nginx reverse proxy.
- A custom bridge network (`fullstack-net`) enables communication between services.
- A PostgreSQL volume ensures database persistence.
```yaml
services:
  # Spring Boot backend
  fullstack-backend:
    container_name: fullstack-backend
    build:
      context: ./fullstack-backend
    # ports:
    #   - "8080:8080" # Expose port 8080 for backend
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/mydatabase
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=mysecretpassword
    networks:
      - fullstack-net # Bind to the custom network
    depends_on:
      - db

  # React frontend
  fullstack-front:
    container_name: fullstack-front
    build:
      context: ./fullstack-front
    ports:
      - "80:80" # Expose port 80 for frontend (Nginx)
    networks:
      - fullstack-net # Bind to the custom network
    depends_on:
      - fullstack-backend

  # PostgreSQL database
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - postgres-data:/var/lib/postgresql/data
    # ports:
    #   - "5432:5432" # Expose PostgreSQL port
    networks:
      - fullstack-net # Bind to the custom network

# Define a named volume for PostgreSQL data
volumes:
  postgres-data:

# Define a custom network for the services to communicate
networks:
  fullstack-net:
    driver: bridge
```
Note: This setup works well in a local environment because the frontend code connects to the backend using localhost. However, in a live environment, we should use our server's IP address to route requests to the Nginx port on that server.
```shell
export SERVER_IP=$(curl http://checkip.amazonaws.com)
echo $SERVER_IP
sed -i "s/localhost/$SERVER_IP/g" fullstack-front/nginx.conf
sed -i "s/localhost/$SERVER_IP/g" fullstack-front/src/helpers/axios_helper.js
```
The code above fetches the public IP of the AWS EC2 server and replaces the text "localhost" with the fetched IP in the frontend application code. Now, the code is ready to be served.
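Before pointing `sed` at the real files, the substitution is easy to sanity-check against a throwaway file. A minimal sketch, using a documentation-range IP in place of the real server address:

```shell
# Throwaway config containing the placeholder host
echo "proxy_pass http://localhost:8080;" > /tmp/sample.conf

# Same substitution the deploy step performs, with a stand-in IP
SERVER_IP="203.0.113.10"  # documentation-range address, not a real server
sed -i "s/localhost/$SERVER_IP/g" /tmp/sample.conf
cat /tmp/sample.conf
```

Note that GNU `sed` accepts `-i` with no argument; on macOS/BSD `sed` you would write `sed -i ''` instead.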
To start all containers, run:
```shell
docker compose up --build -d
```
Note: The initial build takes some time because the images are generated from scratch and the dependencies are installed. Subsequent builds are much quicker thanks to Docker's BuildKit, which caches image layers as well as the build context.
The project should now be accessible at the EC2 server IP. We didn't need to expose any backend ports; everything works by just exposing the HTTP (80) port.
To stop and remove the containers:

```shell
docker compose down
```

The volumes still persist; to remove them as well:

```shell
docker compose down -v
```
Automating Deployment with GitLab CI/CD
Once the application is containerized, we automate:
- Building the frontend and backend images.
- Pushing them to Docker Hub.
- Deploying the containers on an EC2 instance using Docker Compose.
Import the Project from GitHub
Let's import the project into GitLab to enable the CI/CD workflow. Click the New Project button at the top of the page, then choose Import project.

We'll need to authorize GitLab to access GitHub.

Once that's done, select the name of the repository in GitHub and import it to GitLab. Before creating our pipeline, let's configure a few settings in GitLab CI/CD.

Head to the Variables section (under Settings > CI/CD) and add the Docker Hub username and password as DOCKER_HUB_USERNAME and DOCKER_HUB_PASSWORD, since the pipeline references them by those names.
Setting Up a Self-Hosted GitLab Runner on EC2
A GitLab Runner is an application that executes CI/CD jobs in a GitLab pipeline. It picks up tasks defined in `.gitlab-ci.yml` and runs them in an isolated environment (Docker, shell, Kubernetes, etc.). We set up a self-hosted GitLab Runner on an EC2 instance to execute our CI/CD jobs.

🔹 Steps to set up a GitLab Runner on EC2:

Install Docker on the EC2 instance and let the runner's user access it (this assumes the gitlab-runner package itself is already installed, which creates the gitlab-runner user):

```shell
sudo apt update
sudo apt install docker.io -y
sudo usermod -aG docker gitlab-runner
```

Register the GitLab Runner: click on "New Project Runner" in GitLab and use the information provided to set up the runner on the EC2 instance.

Note: Select Docker as the executor when configuring the runner.

Finally, start the runner:

```shell
gitlab-runner run
```
👉 Once set up, GitLab CI/CD will use this runner to deploy the application!
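After registration, the runner's settings end up in `/etc/gitlab-runner/config.toml`. A rough sketch of what a Docker-executor entry looks like is below; the `name`, `token`, and image values are placeholders, not the real project's values:

```toml
concurrent = 1

[[runners]]
  name = "ec2-runner"
  url = "https://gitlab.com"
  token = "REDACTED"           # placeholder; issued during registration
  executor = "docker"
  [runners.docker]
    image = "docker:20.10.7"   # default image for jobs
    privileged = true          # required for docker-in-docker services
```

The `privileged = true` flag is what allows the `docker:…-dind` service used in the pipeline to start; without it the push and deploy jobs cannot reach a Docker daemon.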
Writing the GitLab CI/CD Pipeline (`.gitlab-ci.yml`)
```yaml
stages:
  - build
  - push
  - deploy

# Stage 1: Modify files using `sed` and build Docker images
build_and_modify_files:
  stage: build
  tags:
    - ec2-runner # Specify the self-hosted runner tag here
  script:
    - git clone https://gitlab.com/anantvaid/springboot-app-demo/
    - cd springboot-app-demo/
    - export SERVER_IP=$(curl http://checkip.amazonaws.com)
    - echo $SERVER_IP
    - sed -i "s/localhost/$SERVER_IP/g" fullstack-front/nginx.conf
    - sed -i "s/localhost/$SERVER_IP/g" fullstack-front/src/helpers/axios_helper.js
    # Build Docker images for frontend and backend
    - cd fullstack-front
    - docker build -t $DOCKER_HUB_USERNAME/spr-react-frontend-app:$CI_COMMIT_SHA .
    - cd ..
    - cd fullstack-backend
    - docker build -t $DOCKER_HUB_USERNAME/spr-react-backend-app:$CI_COMMIT_SHA .
    - cd ..

# Stage 2: Push Docker images to Docker Hub
push_to_dockerhub:
  stage: push
  tags:
    - ec2-runner # Specify the self-hosted runner tag here
  image: docker:20.10.7
  services:
    - docker:19.03.12-dind
  script:
    - echo "$DOCKER_HUB_PASSWORD" | docker login -u "$DOCKER_HUB_USERNAME" --password-stdin
    - docker tag $DOCKER_HUB_USERNAME/spr-react-frontend-app:$CI_COMMIT_SHA $DOCKER_HUB_USERNAME/spr-react-frontend-app:latest
    - docker push $DOCKER_HUB_USERNAME/spr-react-frontend-app:$CI_COMMIT_SHA
    - docker push $DOCKER_HUB_USERNAME/spr-react-frontend-app:latest
    - docker tag $DOCKER_HUB_USERNAME/spr-react-backend-app:$CI_COMMIT_SHA $DOCKER_HUB_USERNAME/spr-react-backend-app:latest
    - docker push $DOCKER_HUB_USERNAME/spr-react-backend-app:$CI_COMMIT_SHA
    - docker push $DOCKER_HUB_USERNAME/spr-react-backend-app:latest

# Stage 3: Deploy to EC2 using Docker Compose
deploy_to_ec2:
  stage: deploy
  tags:
    - ec2-runner # Specify the self-hosted runner tag here
  image: docker:20.10.7
  services:
    - docker:19.03.12-dind
  script:
    - docker pull $DOCKER_HUB_USERNAME/spr-react-frontend-app:latest
    - docker pull $DOCKER_HUB_USERNAME/spr-react-backend-app:latest
    # Ensure old containers are stopped and removed
    - docker compose down
    # Start the application with the new images
    - docker compose up -d --build
  only:
    - main # Deploy only when pushing to the `main` branch
```
Stages Overview
The pipeline consists of three stages:
- Build (`build_and_modify_files`) – Updates config files dynamically and builds Docker images.
  - Clones the GitLab repository.
  - Fetches the server’s public IP using `curl http://checkip.amazonaws.com`.
  - Uses `sed` to replace `localhost` in `nginx.conf` and `axios_helper.js` with the server's IP.
  - Builds Docker images for the frontend and backend.
- Push (`push_to_dockerhub`) – Pushes the built images to Docker Hub.
  - Logs into Docker Hub using stored credentials.
  - Tags Docker images with the latest commit SHA and `latest`.
  - Pushes the images to Docker Hub.
- Deploy (`deploy_to_ec2`) – Pulls the latest images and deploys them using Docker Compose on EC2.
  - Pulls the latest images from Docker Hub.
  - Stops and removes existing containers.
  - Starts the application using `docker compose up -d --build`.
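The dual-tag scheme in the push stage is worth a second look: the commit-SHA tag is immutable (handy for rolling back to an exact build), while `latest` is a moving pointer that the deploy stage pulls. A tiny sketch with made-up stand-ins for the CI variables:

```shell
# Stand-in values for variables GitLab injects at job time
DOCKER_HUB_USERNAME="example-user"
CI_COMMIT_SHA="a1b2c3d4e5"

# Immutable tag for this exact commit, plus the moving 'latest' tag
sha_tag="$DOCKER_HUB_USERNAME/spr-react-frontend-app:$CI_COMMIT_SHA"
latest_tag="$DOCKER_HUB_USERNAME/spr-react-frontend-app:latest"
echo "$sha_tag"
echo "$latest_tag"
```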
Update the docker-compose.yaml file
Since we are pushing the images to Docker Hub, we can deploy the application directly from those images instead of building them from the local build context.
```yaml
services:
  # Spring Boot backend
  fullstack-backend:
    container_name: fullstack-backend
    image: anantvaid4/spr-react-backend-app:latest
    # ports:
    #   - "8080:8080" # Expose port 8080 for backend
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - SPRING_DATASOURCE_URL=jdbc:postgresql://db:5432/mydatabase
      - SPRING_DATASOURCE_USERNAME=postgres
      - SPRING_DATASOURCE_PASSWORD=mysecretpassword
    networks:
      - fullstack-net # Bind to the custom network
    depends_on:
      - db

  # React frontend
  fullstack-front:
    container_name: fullstack-front
    image: anantvaid4/spr-react-frontend-app:latest
    ports:
      - "80:80" # Expose port 80 for frontend (Nginx)
    networks:
      - fullstack-net # Bind to the custom network
    depends_on:
      - fullstack-backend

  # PostgreSQL database
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: mydatabase
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - postgres-data:/var/lib/postgresql/data
    # ports:
    #   - "5432:5432" # Expose PostgreSQL port
    networks:
      - fullstack-net # Bind to the custom network

# Define a named volume for PostgreSQL data
volumes:
  postgres-data:

# Define a custom network for the services to communicate
networks:
  fullstack-net:
    driver: bridge
```
Note: The build and context sections are now replaced with image references that pull from the Docker Hub repository.
Deploying the Application
Once a commit is pushed to `main`, the GitLab pipeline automatically:
- Builds the images
- Pushes them to Docker Hub
- Deploys the latest version on EC2 using Docker Compose

To deploy manually:

```shell
docker compose up -d --build
```
Improvements and Optimizations
Once everything works, we can refine the setup:
- Use the npm cache to speed up frontend builds.
- Run containers as non-root users for security.
- Add maintainer labels in Dockerfiles for better metadata.
- Package the Spring Boot application using `./gradlew bootBuildImage` (for Gradle projects; the Maven equivalent is `./mvnw spring-boot:build-image`).
🔹 Important: Always get the DevOps process working first, then optimize and refactor.
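To make the first two points concrete for the backend, here is one possible shape of an optimized Dockerfile: a BuildKit cache mount keeps the Maven repository between builds (the same idea applies to the npm cache on the frontend), and the run stage drops to a non-root user. The base images and user name are assumptions, not tested recommendations:

```dockerfile
# syntax=docker/dockerfile:1
FROM maven:3.9.8-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
# Cache ~/.m2 across builds so dependencies aren't re-downloaded
RUN --mount=type=cache,target=/root/.m2 mvn clean install -DskipTests

# A JRE-only base keeps the runtime image smaller than a full JDK
FROM eclipse-temurin:21-jre
LABEL maintainer="you@example.com"
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
# Run as an unprivileged user instead of root
RUN useradd --system --no-create-home appuser
USER appuser
CMD ["java", "-jar", "app.jar"]
```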
Conclusion
By containerizing our Spring Boot + React application, we achieved:
✅ A fully containerized setup
✅ Backend & database security by keeping them inside the Docker network
✅ Reverse proxying with Nginx for API requests
✅ CI/CD automation for seamless deployments
In my next blog, I’ll cover deploying a Django application with CI/CD using GitHub Actions! Stay tuned. 🚀