Why We Built a CDN with Nginx, and How to Build One with Active DDoS Protection

In today's fast-paced digital world, content delivery speed is critical. A slow website leads to high bounce rates and poor user experience. The solution? A high-performance Content Delivery Network (CDN) using Nginx and Docker Compose.

A CDN reduces load times by caching content closer to users, lowering latency, and improving availability. This guide walks through deploying a production-ready Nginx-based CDN for faster, more secure content delivery.

The Architecture

This CDN consists of two Nginx instances:

Component       Role
nginx-origin    Serves the original content
nginx-proxy     Acts as a cache to reduce origin load

Caching Strategy:

  • Cache 200 OK responses for 10 minutes.
  • Cache 404 responses for 5 minutes to reduce unnecessary origin requests.
  • Serve stale cache during origin failures.
  • Gzip compression enabled for better performance.

Security Enhancements:

  • Hotlink protection prevents unauthorized content embedding.
  • Restrict HTTP methods (allow only GET and HEAD).
  • Hide server tokens to prevent information disclosure.


Deployment Steps


This project presents a scalable, high-performance Content Delivery Network (CDN) built using Nginx and Docker Compose. The solution enhances content distribution speed, reduces server load, and improves security. This CDN optimizes web content delivery while ensuring high availability by implementing load balancing, caching, rate limiting, and SSL/TLS encryption.

Key Features:

  • Load Balancing – Nginx distributes traffic across multiple origin servers for optimized performance (see the upstream sketch below).
  • High Availability – The system supports multiple instances, preventing single points of failure.
  • Caching Mechanism – Reduces redundant requests and improves response times for static assets.
  • Security Hardening – Implements DDoS protection, rate limiting, and hotlink prevention.
  • SSL/TLS Support – Enables HTTPS with Let's Encrypt integration for secure data transmission.
  • Logging & Monitoring – Uses Prometheus & Grafana for real-time traffic analysis and system health tracking.

This CDN architecture is lightweight, portable, and deployable using Docker. It can be scaled further with Kubernetes and integrated with Cloudflare or AWS CloudFront for global edge caching.
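
The two-container deployment below keeps things simple with a single origin. As a minimal sketch of the load-balancing feature, the proxy could route through an upstream group instead; the origin-1/origin-2/origin-3 hostnames here are placeholders, not services defined in the Compose file:

# Hypothetical upstream group for spreading requests across several origins.
# The origin-1 / origin-2 / origin-3 hostnames are placeholders.
upstream origins {
  least_conn;          # pick the origin with the fewest active connections
  server origin-1:80;
  server origin-2:80;
  server origin-3:80;
}

server {
  listen 80;
  location / {
    proxy_pass http://origins;
  }
}

The rate-limiting and HTTPS snippets later in this article refer to this same origins group.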

Prepare the working directories, make the log and cache folders writable by the containers, and create a sample origin page:

mkdir -p logs cache origin
chmod 777 logs cache
echo "<h1>CDN is Live</h1>" > origin/index.html

1. docker-compose.yml

version: '3.8'

services:
  nginx-origin:
    image: nginx:stable-alpine
    container_name: nginx-origin
    volumes:
      - ./origin:/usr/share/nginx/html:ro
      - ./nginx_origin.conf:/etc/nginx/nginx.conf:ro
      - ./logs:/var/log/nginx
    networks:
      - cdn-network
    restart: unless-stopped

  nginx-proxy:
    image: nginx:stable-alpine
    container_name: nginx-proxy
    depends_on:
      - nginx-origin
    volumes:
      - ./nginx_proxy.conf:/etc/nginx/nginx.conf:ro
      - cache:/var/cache/nginx
      - ./logs:/var/log/nginx
    ports:
      - "80:80"
    networks:
      - cdn-network
    restart: unless-stopped

volumes:
  cache:

networks:
  cdn-network:
    driver: bridge        
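
With docker-compose.yml in place (plus the two Nginx configuration files from the next steps), the stack can be started and inspected; this assumes the Docker Compose CLI is installed:

docker compose up -d                  # older installs: docker-compose up -d
docker compose ps                     # both containers should show as running
docker compose logs -f nginx-proxy    # follow the proxy's logs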


2. nginx_origin.conf

worker_processes auto;

events {}

http {
  server {
    listen 80;
    server_name _;

    root /usr/share/nginx/html;
    index index.html;
    autoindex off;

    # Security Headers
    server_tokens off;
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";

    # Allow only GET and HEAD requests
    location / {
      limit_except GET HEAD { deny all; }
    }

    access_log /var/log/nginx/origin_access.log;
    error_log /var/log/nginx/origin_error.log warn;
  }
}        
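
The origin container is only reachable on the internal cdn-network, so one way to sanity-check it (assuming the stack is already running) is to request the page from inside the proxy container, which sits on the same network:

docker compose exec nginx-proxy wget -qO- http://nginx-origin/
# Expected output: <h1>CDN is Live</h1>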

3. nginx_proxy.conf

worker_processes auto;

events {
  worker_connections 1024;
}

http {
  proxy_cache_path /var/cache/nginx
    levels=1:2
    keys_zone=STATIC:100m
    max_size=1g
    inactive=120m
    use_temp_path=off;

  proxy_cache_key "$scheme$request_method$host$request_uri";

  server {
    listen 80;
    server_name _;

    location / {
      proxy_pass http://nginx-origin:80;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

      # Enable Caching
      proxy_cache STATIC;
      proxy_cache_valid 200 10m;
      proxy_cache_valid 404 5m;
      proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
      proxy_cache_lock on;

      # Serve Stale Content on Errors
      proxy_cache_background_update on;
      proxy_cache_revalidate on;

      # Debugging: Show Cache Status
      add_header X-Cache-Status $upstream_cache_status;

      # Compression for Performance
      gzip on;
      gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
      gzip_vary on;

      # Hotlink Protection (only the last valid_referers directive in a block
      # takes effect, so all referers go in a single directive)
      valid_referers none blocked server_names example.com *.example.com;
      if ($invalid_referer) {
          return 403;
      }

      # Set Cache-Control headers (a single directive avoids the duplicate
      # Cache-Control header that combining "expires" and "add_header" creates)
      add_header Cache-Control "public, max-age=2592000";
    }

    # Logging
    access_log /var/log/nginx/proxy_access.log;
    error_log /var/log/nginx/proxy_error.log warn;
  }
}        
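
With all three files in place and the stack running, the X-Cache-Status header added above makes caching easy to verify from the host: the first request should report a MISS, and a repeat within the 10-minute window a HIT.

curl -s -D - -o /dev/null http://localhost/ | grep -i x-cache-status   # X-Cache-Status: MISS
curl -s -D - -o /dev/null http://localhost/ | grep -i x-cache-status   # X-Cache-Status: HIT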


Architecture Flowchart

           +------------------+
           |   User Request   |
           +---------+--------+
                     |
                     v
      +-----------------------------+
      |  nginx-load-balancer (CDN)  |
      |  - Distributes traffic      |
      |  - Implements caching       |
      |  - Provides security        |
      +--------------+--------------+
                     |
      +--------------+--------------+
      |              |              |
      v              v              v
+-----------+  +-----------+  +-----------+
| Origin-1  |  | Origin-2  |  | Origin-n  |
| (content) |  | (content) |  | (content) |
+-----------+  +-----------+  +-----------+

Hardening the CDN: DDoS Protection and HTTPS

1. DDoS Protection & Rate Limiting

To prevent excessive requests from a single IP, we add rate limiting.

# Place in the http {} context of nginx_proxy.conf: track clients by IP
# in a 10 MB zone and allow each one 1 request per second.
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

server {
  listen 80;
  location / {
    limit_req zone=one burst=5 nodelay;
    # "origins" is the upstream group sketched earlier; a single-origin
    # setup can point at http://nginx-origin:80 instead.
    proxy_pass http://origins;
  }
}
        

  • Limits each client to 1 request per second.
  • Allows bursts of up to 5 extra requests; anything beyond that is rejected (HTTP 503 by default).
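
A quick way to watch the limiter in action (a sketch that assumes the limiter has been added to the proxy's server block and the stack restarted on localhost) is to fire a burst of requests and look for rejections once the burst allowance is used up:

# Send 10 rapid requests; with rate=1r/s and burst=5 (nodelay),
# roughly the first 6 return 200 and the rest are rejected with 503.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost/
done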


2. HTTPS (SSL/TLS) with Let’s Encrypt

To enable HTTPS, we use Let’s Encrypt.

Step 1: Generate SSL Certificate

docker run --rm -it -p 80:80 -v certbot-etc:/etc/letsencrypt certbot/certbot certonly --standalone -d yourdomain.com

The --standalone challenge binds port 80, so publish it with -p 80:80 and stop nginx-proxy briefly while the certificate is issued.
Step 2: Configure the Proxy for HTTPS

Mount the certbot-etc volume into the nginx-proxy container so the certificates are available at /etc/letsencrypt, then add an HTTPS server block:

server {
  listen 443 ssl;
  server_name yourdomain.com;
  ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
  location / {
    # "origins" is the upstream sketched earlier; a single-origin setup
    # can use http://nginx-origin:80 instead.
    proxy_pass http://origins;
  }
}
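
To steer plain-HTTP visitors to the secure listener, a minimal redirect block (a sketch, assuming the same yourdomain.com name) can sit alongside the HTTPS server above:

server {
  listen 80;
  server_name yourdomain.com;
  # Redirect every plain-HTTP request to the HTTPS server above.
  return 301 https://$host$request_uri;
}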
        
