Analyzing Nginx Logs with ELK Stack and Visualizing Data

This article demonstrates how to analyze Nginx logs using the ELK stack and visualize the resulting data.

The environment assumes Elasticsearch is already installed and running, as described in a previous guide.

Installing Nginx

For testing purposes, a minimal installation suffices.

[root@server ~]# cd /usr/local/src/
[root@server src]# wget http://nginx.org/download/nginx-1.13.5.tar.gz
[root@server src]# tar -zxf nginx-1.13.5.tar.gz
[root@server src]# cd nginx-1.13.5
[root@server nginx-1.13.5]# ./configure --prefix=/usr/local/nginx

If ./configure reports missing dependencies (for example, pcre-devel and zlib-devel), install them via yum.

Then compile and install with make && make install.

A sample log entry from Nginx:

192.168.1.26 - - [14/Sep/2017:16:48:42 +0800] "GET /ui/favicons/favicon-16x16.png HTTP/1.1" 304 0 "http://192.168.1.124/app/kibana" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36" "-"

The default log format for Nginx is defined as:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

Configuring Logstash

Creating the Configuration File

Navigate to /opt/elk/logstash-5.2.2/config.d and create a new configuration file named nginx_access.conf with the following content:

input {
    file {
        path => [ "/usr/local/nginx/logs/access.log" ]
        start_position => "beginning"
        ignore_older => 0
        type => "nginx-access"
    }
}

filter {
    if [type] == "nginx-access" {
        grok {
            match => [
                "message", "%{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}"
            ]
        }

        urldecode {
            all_fields => true
        }

        date {
            locale => "en"
            match => ["timestamp", "dd/MMM/YYYY:HH:mm:ss Z"]
        }

        geoip {
            source => "clientip"
            target => "geoip"
            database => "/opt/elk/logstash-5.2.2/data/GeoLite2-City.mmdb"
            add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
            add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }

        mutate {
            convert => [ "[geoip][coordinates]", "float" ]
            convert => [ "response", "integer" ]
            convert => [ "bytes", "integer" ]
            replace => { "type" => "nginx_access" }
            remove_field => "message"
        }
    }
}

output {
    elasticsearch {
        hosts => ["192.168.1.124:9200"]
        index => "logstash-nginx-access-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
}

Understanding the Configuration

  • Input: Uses the file plugin to read from Nginx's access log.
  • Filter: Processes data using grok for parsing, geoip for location data, date for timestamp handling, and mutate for field conversions.
  • Output: Sends processed data to Elasticsearch and prints to standard output for debugging.
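The date filter's pattern "dd/MMM/YYYY:HH:mm:ss Z" follows Joda-Time conventions and matches Nginx's $time_local. As an illustrative aside, it corresponds to the Python strptime format in this small sketch:

```python
from datetime import datetime, timezone

# The Logstash date pattern "dd/MMM/YYYY:HH:mm:ss Z" corresponds to this
# strptime format: day / abbreviated month / year : time, then UTC offset.
ts = datetime.strptime('14/Sep/2017:16:48:42 +0800', '%d/%b/%Y:%H:%M:%S %z')

print(ts.isoformat())  # 2017-09-14T16:48:42+08:00
# Elasticsearch stores @timestamp normalized to UTC:
print(ts.astimezone(timezone.utc).isoformat())  # 2017-09-14T08:48:42+00:00
```

Without the date filter, Logstash would set @timestamp to the time of ingestion rather than the time of the request, which skews any time-based Kibana visualization.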

Testing Grok Patterns

Verify matching accuracy at http://grokdebug.herokuapp.com/ using the pattern:

%{IPORHOST:clientip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent} %{NOTSPACE:http_x_forwarded_for}
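If the online debugger is unreachable, the check can be approximated offline. The sketch below hand-expands the grok macros (IPORHOST, HTTPDATE, QS, and so on) into rough plain-regex equivalents; these are approximations for illustration, not the exact patterns shipped with Logstash:

```python
import re

# Rough regex expansions of the grok macros used in the filter above.
GROK = re.compile(
    r'(?P<clientip>\S+) (?P<ident>\S+) (?P<auth>\S+) '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\w+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<response>\d+) (?P<bytes>\d+|-) '
    r'(?P<referrer>"[^"]*") (?P<agent>"[^"]*") (?P<http_x_forwarded_for>\S+)'
)

# Sample entry (user agent shortened for brevity).
line = ('192.168.1.26 - - [14/Sep/2017:16:48:42 +0800] '
        '"GET /ui/favicons/favicon-16x16.png HTTP/1.1" 304 0 '
        '"http://192.168.1.124/app/kibana" "Mozilla/5.0" "-"')

m = GROK.match(line)
print(m.group('clientip'), m.group('verb'), m.group('response'))
# 192.168.1.26 GET 304

# The (?:%{NUMBER:bytes}|-) alternative also accepts "-" for zero-byte
# responses; the same alternation appears here as (\d+|-).
line2 = line.replace(' 304 0 ', ' 304 - ')
print(GROK.match(line2).group('bytes'))  # -
```

A line that fails to match here would produce a _grokparsefailure tag in Logstash, so this kind of offline check is a quick way to catch pattern mistakes before restarting the pipeline.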

Downloading GeoIP Database

Download the GeoLite2 City database from MaxMind:

[root@server data]# wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz
[root@server data]# tar -xzf GeoLite2-City.tar.gz

Place the extracted database file into the configured directory.

Validating Configuration

Test the configuration syntax using:

[root@server data]# /opt/elk/logstash-5.2.2/bin/logstash -t -f /opt/elk/logstash-5.2.2/config.d/nginx_access.conf
Configuration OK

Start Logstash with:

[root@server logstash-5.2.2]# nohup /opt/elk/logstash-5.2.2/bin/logstash -f /opt/elk/logstash-5.2.2/config.d/nginx_access.conf &

Monitor startup with tail -f nohup.out. Once successful, Logstash will begin sending data to Elasticsearch.
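The index name logstash-nginx-access-%{+YYYY.MM.dd} in the output section interpolates each event's @timestamp, so one index is created per day. A quick sketch of the resulting names (the date shown is just an example):

```python
from datetime import date

# %{+YYYY.MM.dd} in the Logstash output formats the event's @timestamp,
# yielding one Elasticsearch index per day.
def index_for(d):
    return d.strftime('logstash-nginx-access-%Y.%m.%d')

print(index_for(date(2017, 9, 14)))  # logstash-nginx-access-2017.09.14
```

Daily indices make it easy to expire old log data by deleting whole indices, and they match the logstash-* index pattern Kibana expects when you build visualizations.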

Tags: elk, nginx
