TJ WEBDEV
Story of nginx - using it as a Reverse Proxy, API Gateway & Rate Limiter

December 11, 2024
Node.js · Nginx · Reverse Proxy · API Gateway · Rate Limiter

Like many Node.js developers, I started by running my app directly with node app.js. Then I learned about production deployments, and someone mentioned "You should put NGINX in front of your Node app." šŸ˜• But why? My app was working fine!šŸ”„

The Evolution

First Step: Reverse Proxy

Started with the basics - using NGINX as a reverse proxy:

# Basic reverse proxy setup
server {
    listen 80;
    server_name myapp.com;
    
    # Forward to Node.js
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Level Up: API Gateway

Then came the exciting part: routing to different backend servers through path-based routing. I had two clients (Client A and Client B), and for each I created a separate Node.js backend, but both run on the same server.

Let's suppose the backend server's domain is api.backend.com.

I wanted to set it up like this:

The frontend makes requests to:

- api.backend.com/api/clientA/... → Client A's backend (port 3000)
- api.backend.com/api/clientB/... → Client B's backend (port 4000)

So, I set up NGINX like this:

server {
    listen 80;
    server_name myapp.com;

    # Backend API for Client A
    location /api/clientA {
        # Remove the /api/clientA prefix before proxying
        rewrite ^/api/clientA/(.*) /api/$1 break;
        
        proxy_pass http://localhost:3000;  # Client A's backend
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }

    # Backend API for Client B
    location /api/clientB {
        # Remove the /api/clientB prefix before proxying
        rewrite ^/api/clientB/(.*) /api/$1 break;
        
        proxy_pass http://localhost:4000;  # Client B's backend
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_cache_bypass $http_upgrade;
    }

    # Optional: default fallback for unmatched requests
    location / {
        default_type application/json;
        return 404 '{"error": "Not found"}';
    }
}
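Conceptually, the gateway is doing a small routing computation: match the prefix, pick an upstream, strip the prefix. The same logic expressed as a plain JavaScript function (a sketch for intuition only - NGINX obviously doesn't run this):

```javascript
// What the NGINX config above does, expressed as a pure function:
// map a public path to an upstream and a rewritten internal path.
function route(publicPath) {
  const match = publicPath.match(/^\/api\/(clientA|clientB)\/(.*)$/);
  if (!match) return { status: 404, body: { error: 'Not found' } };

  const upstream =
    match[1] === 'clientA'
      ? 'http://localhost:3000' // Client A's backend
      : 'http://localhost:4000'; // Client B's backend

  // Equivalent of: rewrite ^/api/clientX/(.*) /api/$1 break;
  return { upstream, path: `/api/${match[2]}` };
}
```

So a request to /api/clientA/users (an illustrative path) would be proxied to http://localhost:3000/api/users, and anything outside the two prefixes falls through to the 404 fallback.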

Requests and Redirection Explanation

Frontend request to Client A's API: a request to api.backend.com/api/clientA/<path> matches the /api/clientA location, the rewrite strips the prefix so the path becomes /api/<path>, and the request is proxied to Client A's backend on localhost:3000.

Frontend request to Client B's API: likewise, api.backend.com/api/clientB/<path> is rewritten to /api/<path> and proxied to Client B's backend on localhost:4000.

Unmatched request: anything that doesn't start with /api/clientA or /api/clientB falls through to the default location and gets a 404 JSON response.

Note: The frontend does not need to know about the internal backend ports (3000, 4000). NGINX abstracts this, so the frontend communicates with a single domain (api.backend.com) and NGINX routes the traffic to the appropriate backend.

A Security Move: Rate Limiting

As traffic grew, I needed to protect my APIs from abuse, so that nobody could spam them.

# Define a rate limit zone for the API
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=15r/s;

server {
    listen 80;
    server_name myapp.com;

    # Apply rate limiting to all API traffic
    location /api {
        # Rate limit: 15 requests per second with burst of 10, no delay
        limit_req zone=api_limit burst=10 nodelay;
        # limit_req rejects with 503 by default; use 429 instead
        limit_req_status 429;

        # Serve a JSON body for rate-limited requests
        error_page 429 = @too_many_requests;

        proxy_pass http://localhost:3000;  # Single backend server
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location @too_many_requests {
        default_type application/json;
        return 429 '{"error": "Too many requests. Please try again later."}';
    }
}

Explanation:

- limit_req_zone $binary_remote_addr keys the limit on the client's IP address, so each IP gets its own counter.
- zone=api_limit:10m allocates 10 MB of shared memory for those counters (roughly 160,000 IP states).
- rate=15r/s allows a steady 15 requests per second per IP.
- burst=10 lets a client briefly exceed that rate by up to 10 extra requests, and nodelay serves those burst requests immediately instead of throttling them.

If you have multiple backend servers, you can set up rate limiting for each backend server.
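To build intuition for what limit_req is doing, here is a rough JavaScript sketch of the same "leaky bucket" accounting, keyed per client the way $binary_remote_addr keys the zone. This is a simplification for illustration - NGINX's real implementation lives in shared memory and also handles request delaying, state expiry, and so on:

```javascript
// Simplified model of limit_req: each client key (IP) gets an "excess"
// counter that leaks at `rate` per second and rejects above `burst`.
class RateLimiter {
  constructor(rate, burst) {
    this.rate = rate;       // allowed requests per second (rate=15r/s)
    this.burst = burst;     // extra requests tolerated in a spike (burst=10)
    this.state = new Map(); // key ($binary_remote_addr) -> { excess, last }
  }

  allow(key, nowMs) {
    const entry = this.state.get(key) || { excess: 0, last: nowMs };
    const elapsedSec = (nowMs - entry.last) / 1000;

    // The bucket "leaks": credit back rate * elapsed requests, then
    // charge 1 for the current request.
    entry.excess = Math.max(0, entry.excess - elapsedSec * this.rate) + 1;
    entry.last = nowMs;

    if (entry.excess > this.burst + 1) {
      entry.excess -= 1; // a rejected request doesn't consume the bucket
      this.state.set(key, entry);
      return false; // NGINX would answer 429 here
    }
    this.state.set(key, entry);
    return true; // with `nodelay`, allowed requests are served immediately
  }
}
```

Because the state is keyed per IP, one abusive client hitting the limit doesn't affect anyone else.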

Multiple Servers with Individual Rate Limits

This configuration is suitable for scenarios where different backend APIs (e.g., Client A and Client B) have separate rate limits.

# Define separate rate limit zones for each client's API
limit_req_zone $binary_remote_addr zone=clientA_limit:10m rate=15r/s;
limit_req_zone $binary_remote_addr zone=clientB_limit:10m rate=10r/s;

server {
    listen 80;
    server_name myapp.com;

    # Rate limiting for Client A
    location /api/clientA {
        # 15 requests/second, burst of 10
        limit_req zone=clientA_limit burst=10 nodelay;
        limit_req_status 429;
        error_page 429 = @clientA_too_many;

        rewrite ^/api/clientA/(.*) /api/$1 break;
        proxy_pass http://localhost:3000;  # Client A backend
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Rate limiting for Client B
    location /api/clientB {
        # 10 requests/second, burst of 5
        limit_req zone=clientB_limit burst=5 nodelay;
        limit_req_status 429;
        error_page 429 = @clientB_too_many;

        rewrite ^/api/clientB/(.*) /api/$1 break;
        proxy_pass http://localhost:4000;  # Client B backend
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Client-specific JSON bodies for rate-limited requests
    location @clientA_too_many {
        default_type application/json;
        return 429 '{"error": "Too many requests for Client A. Please try again later."}';
    }

    location @clientB_too_many {
        default_type application/json;
        return 429 '{"error": "Too many requests for Client B. Please try again later."}';
    }
}

Explanation:

- Each client gets its own zone (clientA_limit, clientB_limit), so heavy traffic from Client A's users cannot exhaust Client B's quota.
- The rates and bursts differ per client: Client A allows 15 r/s with a burst of 10, while Client B allows 10 r/s with a burst of 5.
- Each location returns its own client-specific 429 error message.

Comparison

| Feature          | Single Server Setup                    | Multiple Servers Setup                                         |
|------------------|----------------------------------------|----------------------------------------------------------------|
| Rate Limit Zone  | api_limit shared for all traffic       | Separate zones (clientA_limit, clientB_limit) for each backend |
| Burst Limit      | Same for all traffic                   | Different burst limits per backend                             |
| Error Response   | Generic for all requests               | Custom per backend                                             |
| Backend Servers  | Single backend (http://localhost:3000) | Separate backends for Client A and Client B                    |
| Complexity       | Simpler to configure and manage        | More granular control, slightly more complex                   |

Important Lesson