Pathfinder: A Dockerized Node.js Backend with Kubernetes Deployment and Monitoring

Introduction

Pathfinder is a mobile application whose Node.js backend serves as the core server for an event management and campus navigation platform. The backend is built with Node.js and the Express framework and connects to a cloud-hosted MongoDB Atlas database to handle event, user, and location data.

This project documents the complete DevOps lifecycle of the application — from containerization to deployment and monitoring.

  • Part 1 focused on containerization, taking the Node.js application and packaging it into a portable Docker image.
  • Part 2 focused on orchestration, deploying this containerized application onto a local Kubernetes cluster using Minikube, and implementing auto-scaling and monitoring.
  • Part 3 consolidates everything — from code to a fully deployed and monitored application — and serves as a comprehensive documentation of the entire journey.

Objectives - Containerization

  • To containerize the 'Pathfinder' Node.js backend using Docker.
  • To create an efficient and reproducible build process using a Dockerfile.
  • To manage sensitive data, such as database credentials, using Docker's environment variables.
  • To successfully connect the containerized application to the external, cloud-hosted MongoDB Atlas database.

Objectives - Orchestration and Monitoring

  • To deploy the Dockerized 'Pathfinder' application on a local Kubernetes cluster managed by Minikube.
  • To create Kubernetes Deployment and Service manifests (.yaml files) to manage and expose the application pods.
  • To implement resource requests and limits to ensure efficient resource consumption.
  • To install and configure a monitoring stack using Prometheus and Grafana via Helm charts.
  • To visualize key performance metrics (CPU, memory) on a Grafana dashboard.

Objectives - Final Documentation

  • To create a comprehensive post documenting the complete procedures and outcomes of all parts.
  • To design and present a clear overall architecture diagram of the entire system.
  • To consolidate all project files, configurations, and screenshots into a single, accessible report.

Containers and Images Used

  1. node:18-alpine (Base Image): The official Node.js Docker image, using the lightweight Alpine Linux variant.
  2. pathfinder-backend:latest (Custom Image): My application image, based on node:18-alpine.
  3. prometheus-community/prometheus (Monitoring): The Prometheus Helm chart for collecting metrics.
  4. grafana/grafana (Monitoring): The Grafana Helm chart for visualizing metrics.

Software and Tools Used

  • Docker Desktop: To build Docker images and run containers locally for testing.
  • Minikube: To run a local, single-node Kubernetes cluster on my machine.
  • kubectl: The command-line tool for interacting with the Kubernetes cluster.
  • Helm: The package manager for Kubernetes, used to simplify the installation of Prometheus and Grafana.
  • VS Code: My code editor for writing the Node.js application, Dockerfile, and Kubernetes manifests.
  • MongoDB Atlas: The cloud-hosted, fully-managed MongoDB database service.

Overall Architecture

Architecture Diagram 1 (Part 1: Containerization)

Architecture Diagram 2 (Part 2: Orchestration and Monitoring)


Architecture Description:

The first architecture diagram illustrates a modern DevOps workflow. The process begins on the developer's PC, which contains the 'Pathfinder' Node.js source code and the Dockerfile. The developer runs the docker build command, which uses the Dockerfile to create a self-contained Docker Image bundling the application code, dependencies, and runtime. This image can either be tested locally or pushed to Docker Hub for deployment.

The second architecture diagram shows how this image is deployed in a Minikube cluster. Using kubectl apply, the Kubernetes manifests (deployment.yaml, service.yaml, hpa.yaml) are applied to the cluster, creating the Deployment, Service, and auto-scaling components. The pods pull the image from Docker Hub, connect to MongoDB Atlas, and expose the application via the Service. Monitoring is handled by Prometheus, which collects cluster and pod metrics, and Grafana, which visualizes them.


Procedure - Part 1 - Steps involved in the process

  1. Project Setup: The Node.js backend was created with its server.js, package.json, and routes.
    Pathfinder/
    ├── back_end/
    │   ├── jobs/
    │   ├── k8s/
    │   │   ├── deployment.yaml
    │   │   ├── grafana-deployment.yaml
    │   │   ├── prometheus.yaml
    │   │   └── service.yaml
    │   ├── middleware/
    │   ├── models/
    │   ├── routes/
    │   ├── scripts/
    │   ├── .dockerignore
    │   ├── .env
    │   ├── app.js
    │   ├── Dockerfile
    │   └── package.json
    │
    ├── front_end/
    ├── .gitignore
    └── README.md
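The .dockerignore listed in the tree keeps the build context small and stops local artifacts from being copied into the image by the Dockerfile's COPY . . step. Its actual contents are not shown in this post; a typical sketch for this layout might be:

```
# Dependencies are reinstalled inside the image by `npm install`
node_modules
npm-debug.log

# Keep secrets out of the image; MONGO_URI is injected at run time instead
.env

# Not needed inside the container
.git
k8s/
```

Excluding .env is the important line here, since the whole point of passing MONGO_URI as an environment variable is that credentials never get baked into the image.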
    
  2. Dockerfile Creation: A Dockerfile was created in the back_end directory (alongside package.json, as shown in the tree above) to define the container image.
    # Use an official Node.js runtime as a parent image
    FROM node:18-alpine
    
    # Set the working directory in the container
    WORKDIR /app
    
    # Copy package.json and package-lock.json
    COPY package*.json ./
    
    # Install project dependencies
    RUN npm install
    
    # Bundle app source
    COPY . .
    
    # Document the port the app listens on (published via -p at run time)
    EXPOSE 3000
    
    # Define the command to run your app
    CMD [ "node", "server.js" ]
    
  3. Building the Docker Image: The docker build command was run from the back_end directory, which contains the Dockerfile and serves as the build context.
    docker build -t pathfinder-backend:latest .
    

    [*** Screenshot of the successful `docker build` command output in your terminal ***]
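Because the Kubernetes Deployment in Part 2 pulls the image from Docker Hub, the locally built image also needs to be tagged and pushed. A sketch of those commands, using the same [Your-DockerHub-Username] placeholder as the deployment manifest:

```shell
# Log in to Docker Hub (prompts for credentials)
docker login

# Tag the local image with the Docker Hub repository name
docker tag pathfinder-backend:latest [Your-DockerHub-Username]/pathfinder-backend:latest

# Push it so the cluster can pull it later
docker push [Your-DockerHub-Username]/pathfinder-backend:latest
```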

  4. Running the Container: The image was run as a container, passing the MONGO_URI as an environment variable (-e).

    docker run -d -p 3000:3000 -e MONGO_URI="mongodb+srv://..." --name pathfinder pathfinder-backend:latest

  5. Verification: I verified the container was running using docker ps and checked its logs with docker logs pathfinder to confirm the successful connection to MongoDB Atlas.







Procedure - Part 2 - Steps involved in the process

  1. Starting Minikube: First, the local Kubernetes cluster was started.
    minikube start
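    Before applying any manifests, it helps to confirm the cluster and kubectl context are healthy; a quick sketch of the checks:

```shell
# Confirm the Minikube VM/container and its components are running
minikube status

# Confirm kubectl is pointed at the Minikube cluster
kubectl cluster-info
kubectl get nodes
```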
    



  2. Creating Deployment Manifest (deployment.yaml): This file defines how to run the application, including the image to use, the number of replicas, and resource limits.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pathfinder-deployment
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: pathfinder
      template:
        metadata:
          labels:
            app: pathfinder
        spec:
          containers:
          - name: pathfinder-backend
            image: [Your-DockerHub-Username]/pathfinder-backend:latest
            ports:
            - containerPort: 3000
            env:
            - name: MONGO_URI
              value: "mongodb+srv://..."
            resources:
              requests:
                cpu: "100m" # Request 0.1 CPU
              limits:
                cpu: "200m" # Limit to 0.2 CPU
    
  3. Creating Service Manifest (service.yaml): This file exposes the application pods inside the cluster and, because it is a NodePort service, on the Minikube node as well.
    apiVersion: v1
    kind: Service
    metadata:
      name: pathfinder-service
    spec:
      selector:
        app: pathfinder
      ports:
      - protocol: TCP
        port: 80
        targetPort: 3000
      type: NodePort 
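
    The architecture description also references an hpa.yaml for auto-scaling, which is not reproduced above. A minimal HorizontalPodAutoscaler targeting the Deployment might look like the sketch below; the 50% CPU target and replica bounds are assumptions, and it relies on the metrics-server addon (minikube addons enable metrics-server):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pathfinder-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pathfinder-deployment
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU exceeds 50% of the request
```

    The CPU request in deployment.yaml matters here: utilization is computed against the 100m request, so the limits and the HPA target work together.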
    
  4. Applying Manifests: The files were applied to the cluster.
    kubectl apply -f deployment.yaml
    kubectl apply -f service.yaml
  5. Verifying Deployment: I checked the status of the deployment and pods.
    kubectl get deployments
    kubectl get pods
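
    Once the pods are Running, the NodePort service can be exercised from the host. With Minikube, a sketch:

```shell
# Print the externally reachable URL for the NodePort service
minikube service pathfinder-service --url

# Hit the backend through that URL (the endpoint path is illustrative)
curl "$(minikube service pathfinder-service --url)/"
```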
    


  6. Installing Prometheus & Grafana: I used Helm to install the monitoring stack.

    # Add the Helm repositories
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo add grafana https://grafana.github.io/helm-charts
    helm repo update
    # Install Prometheus
    helm install prometheus prometheus-community/prometheus
    # Install Grafana
    helm install grafana grafana/grafana

  7. Visualizing Metrics: I accessed the Grafana dashboard by port-forwarding and imported a dashboard for cluster metrics.
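    The port-forwarding mentioned above can be sketched as follows. The service and secret names assume the default release names from the helm install commands, and the password lookup follows the Grafana chart's standard pattern:

```shell
# Retrieve the auto-generated Grafana admin password
kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo

# Forward Grafana to http://localhost:3001 (3000 is used by the app locally)
kubectl port-forward svc/grafana 3001:80

# In another terminal, forward Prometheus for use as a Grafana data source
kubectl port-forward svc/prometheus-server 9090:80
```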

    [*** Screenshot of your Grafana dashboard showing CPU and Memory usage of the pods ***]


Procedure - Part 3 - Steps involved in the process

  1. Documentation: I created a detailed technical write-up on blogger.com to document the entire workflow — from building the Node.js application to deploying and monitoring it on Kubernetes.


  2. Project Demonstration: A short walkthrough video was recorded and uploaded to YouTube, demonstrating the running setup, container build process, Kubernetes deployment, and Grafana monitoring dashboard. (Watch here)

What are the outcomes of my project?

  • A fully containerized and portable Node.js backend application ("Pathfinder").
  • A scalable, self-healing deployment of the application on a Kubernetes cluster.
  • A real-time monitoring and visualization stack (Prometheus & Grafana) to observe application performance.
  • This comprehensive blog post, which serves as complete documentation for the project.
  • All project artifacts (code, manifests) are version-controlled on GitHub and the container image is publicly available on Docker Hub.

Conclusion

This project provided a practical, hands-on experience in the complete DevOps lifecycle — from containerizing a Node.js backend to deploying and monitoring it using Kubernetes. The process began with understanding how to package an application into a portable Docker image and progressed into orchestration, showcasing the power of Kubernetes to manage, scale, and monitor workloads effectively.

Setting up deployments, services, and a monitoring stack with Prometheus and Grafana was both challenging and rewarding. The experience not only strengthened my understanding of containerization and cloud-native tools but also highlighted the importance of automation, scalability, and observability in modern application development.

Overall, this project bridged the gap between development ("Dev") and operations ("Ops"), giving me a strong foundation in real-world DevOps practices and cloud-native architecture.

