Fading Coder

One Final Commit for the Last Sprint


Getting Started with Nginx: Installation, Configuration, and Reverse Proxy Setup


Prerequisites and Installation

First, install the necessary libraries for building Nginx from source:

yum install -y pcre pcre-devel zlib zlib-devel openssl openssl-devel

Download the source package (e.g., version 1.14.0), extract it, and compile:

cd /opt/downloads
tar -zxvf nginx-1.14.0.tar.gz
cd nginx-1.14.0
./configure --prefix=/usr/local/nginx
make && make install

Start the service and configure the firewall to allow traffic:

cd /usr/local/nginx/sbin
./nginx

# Allow traffic on port 90 (example)
firewall-cmd --zone=public --add-port=90/tcp --permanent
firewall-cmd --reload

Assuming the listen directive in nginx.conf is set to 90 to match, you can now access the server via http://192.168.142.128:90.

Basic Operational Commands

Run these commands within the /usr/local/nginx/sbin directory:

  • Start: ./nginx
  • Stop: ./nginx -s stop
  • Reload Config: ./nginx -s reload (run ./nginx -t first to validate the file)
  • Check Process: ps aux | grep nginx

Configuration Structure

The main configuration file is located at /usr/local/nginx/conf/nginx.conf. It consists of three main logical blocks:

1. Main Global Block

Sets process-level directives.

worker_processes  1;  # Defines the number of worker processes. Adjust based on CPU cores.
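
Rather than a fixed count, Nginx (since 1.3.8, so including the 1.14.0 build used here) can size the worker pool to the machine automatically; a minimal sketch replacing the directive above:

```nginx
worker_processes  auto;  # Spawn one worker per available CPU core.
```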

2. Events Block

Defines how Nginx handles connections.

events {
    worker_connections  1024;  # Maximum simultaneous connections per worker.
}
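
As a rough capacity guide (a sketch, not a tuning recommendation): the total number of concurrent connections is bounded by worker_processes multiplied by worker_connections, and a reverse proxy consumes two connections per client request, one to the client and one to the upstream.

```nginx
events {
    # Upper bound on clients ≈ worker_processes × worker_connections;
    # halve the estimate when proxying, since each request also opens
    # a connection to the backend.
    worker_connections  1024;
}
```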

3. HTTP Block

Contains directives for handling HTTP traffic, including proxying and load balancing.

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  192.168.142.130; # Replace with your server IP

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Setting Up a Reverse Proxy

Scenario 1: Single Backend (Tomcat)

Goal: Access a Tomcat application running on port 8080 via www.123.com on port 80.

  1. Ensure Tomcat is running on port 8080.
  2. Add a host entry mapping www.123.com to your server IP (e.g., 192.168.142.130).
  3. Modify the nginx.conf server block to forward requests:
server {
    listen       80;
    server_name  www.123.com; # Or the specific IP

    location / {
        # Forward all requests to the local Tomcat instance
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Reload Nginx (./nginx -s reload) and visit www.123.com.
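
Step 2 above amounts to a single line in the client machine's hosts file (/etc/hosts on Linux; C:\Windows\System32\drivers\etc\hosts on Windows), using the example IP from this article:

```text
192.168.142.130  www.123.com
```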

Scenario 2: Path-Based Routing

Goal: Route requests based on the URL path to different Tomcat instances (e.g., port 8080 for /test8080/, port 8081 for /test8081/).

  1. Set up two Tomcat instances on ports 8080 and 8081. Ensure ports are open in the firewall.
  2. Create test pages in their respective webapps folders.
  3. Configure a new server block in Nginx:
server {
    listen       9001;
    server_name  192.168.142.130;

    # Regex match: paths containing /test8080/
    location ~ /test8080/ {
        proxy_pass http://localhost:8080;
    }

    # Regex match: paths containing /test8081/
    location ~ /test8081/ {
        proxy_pass http://localhost:8081;
    }
}

Access http://www.123.com:9001/test8080/b.html or .../test8081/a.html.
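
Note that the ~ modifier makes these regex locations, which match the pattern anywhere in the path. A plain prefix location does the same job here, and because proxy_pass carries no URI part, the original path (including /test8080/) is forwarded to Tomcat unchanged, so the test page must live in a webapp named test8080. A sketch of the prefix form:

```nginx
# Prefix match: handles any path beginning with /test8080/
location /test8080/ {
    proxy_pass http://localhost:8080;
}
```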

Load Balancing Configuration

Goal: Distribute traffic between two servers (8080 and 8081) to handle http://www.123.com/edu/edu.html.

  1. Prepare two Tomcat instances with an edu/edu.html page in their webapps.
  2. Define an upstream group (backend pool) and reference it in the location block:
http {
    # Define the backend server pool
    upstream backend_pool {
        server 192.168.142.130:8080;
        server 192.168.142.130:8081;
    }

    server {
        listen       80;
        server_name  www.123.com;

        location / {
            # Distribute requests to the backend_pool
            proxy_pass http://backend_pool;
        }
    }
}

Refreshing the browser will alternate responses between the two servers (Round Robin is the default).

Load Balancing Methods

  • Round Robin (Default): Requests are distributed sequentially.
  • Weight: Assigns a ratio to specific servers for uneven distribution.
upstream backend_pool {
    server 192.168.142.130:8080 weight=2; # Gets 2x traffic
    server 192.168.142.130:8081 weight=1;
}
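
Beyond weights, stock Nginx also ships an ip_hash method, which pins each client IP to the same backend; this is useful when sessions are stored locally on the Tomcat side. A minimal sketch:

```nginx
upstream backend_pool {
    ip_hash;  # Requests from the same client IP always hit the same server.
    server 192.168.142.130:8080;
    server 192.168.142.130:8081;
}
```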
