Fading Coder

One Final Commit for the Last Sprint


Implementing ELK Stack with Kafka for Centralized Log Management

Tech · May 13

Elasticsearch Cluster Setup

Configure two nodes with IPs 192.168.1.105 and 192.168.1.106. Ensure proper host resolution in /etc/hosts on both servers.
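For example, assuming the hostnames node-1 and node-2 (any names work, as long as both machines use the same ones), /etc/hosts on each server would contain:

```
192.168.1.105   node-1
192.168.1.106   node-2
```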

# Install EPEL repository
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

# Install Java and Elasticsearch
yum install jdk-8u171-linux-x64.rpm elasticsearch-5.4.0.rpm

Node 1 Configuration (/etc/elasticsearch/elasticsearch.yml):

cluster.name: elk-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.1.105
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.105", "192.168.1.106"]

Create data directories and set permissions on both nodes:

mkdir -p /var/lib/elasticsearch /var/log/elasticsearch
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch /var/log/elasticsearch

Memory Configuration (/etc/elasticsearch/jvm.options). Keep the minimum and maximum heap equal so the heap never resizes, and no larger than half of physical RAM:

-Xms2g
-Xmx2g

Configure systemd limits:

# Edit /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity

systemctl daemon-reload
systemctl restart elasticsearch

Node 2 Configuration (/etc/elasticsearch/elasticsearch.yml):

cluster.name: elk-cluster
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.1.106
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.105", "192.168.1.106"]
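Once both nodes are started, a quick check confirms that the cluster formed and that memory locking took effect. The endpoint comes from the configuration above; the -m timeout just bounds the wait if a node is down:

```shell
ES=http://192.168.1.105:9200

# Expect "number_of_nodes": 2 once both nodes have joined the cluster
curl -s -m 5 "$ES/_cluster/health?pretty"

# Expect "mlockall": true on every node (effect of bootstrap.memory_lock)
curl -s -m 5 "$ES/_nodes?filter_path=**.mlockall&pretty"
```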

Kibana Installation and Configuration

Kibana is version-sensitive: a Kibana build newer than the cluster (here 5.6.5 against Elasticsearch 5.4.0) may refuse to start, so align the two versions before installing.

yum install kibana-5.6.5-x86_64.rpm

# Configure /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.1.105"
elasticsearch.url: "http://192.168.1.105:9200"

systemctl enable kibana
systemctl start kibana
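Before putting Nginx in front of Kibana, a probe of its status API (URL from the configuration above) confirms it is up; HTTP 200 means Kibana started and can reach Elasticsearch:

```shell
KIBANA=http://192.168.1.105:5601

# curl prints 000 if the port is not reachable yet
code=$(curl -s -m 5 -o /dev/null -w '%{http_code}' "$KIBANA/api/status")
echo "Kibana status endpoint returned HTTP $code"
```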

Nginx Reverse Proxy with Authentication

# Install Nginx
tar -xvf nginx-1.14.0.tar.gz
cd nginx-1.14.0
./configure --prefix=/usr/local/nginx --with-http_ssl_module
make && make install

# Create authentication (htpasswd is provided by the httpd-tools package)
htpasswd -c /usr/local/nginx/conf/.htpasswd kibana_user

Nginx configuration for Kibana:

server {
    listen 80;
    server_name kibana.example.com;
    auth_basic "Kibana Authentication";
    auth_basic_user_file /usr/local/nginx/conf/.htpasswd;
    
    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
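After reloading Nginx, the basic-auth gate can be exercised from the command line. kibana.example.com and the password below are placeholders matching the configuration above:

```shell
URL=http://kibana.example.com/

# Without credentials the proxy should answer 401 Unauthorized
no_auth=$(curl -s -m 5 -o /dev/null -w '%{http_code}' "$URL")

# With the htpasswd user the request should reach Kibana (200)
with_auth=$(curl -s -m 5 -o /dev/null -w '%{http_code}' -u kibana_user:yourpassword "$URL")

echo "no auth: $no_auth, with auth: $with_auth"
```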

Logstash Configuration for Multiple Sources

Nginx Access Logs (/etc/logstash/conf.d/nginx.conf). The json codec assumes access.log is written with a JSON log_format; with the stock combined format, drop the codec or parse with a grok filter instead:

input {
  file {
    path => "/var/log/nginx/access.log"
    codec => json
    type => "nginx_access"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.1.105:9200"]
    index => "nginx-access-%{+YYYY.MM.dd}"
  }
}

System Logs (/etc/logstash/conf.d/syslog.conf):

input {
  syslog {
    port => 5140
    type => "system"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.1.105:9200"]
    index => "system-logs-%{+YYYY.MM.dd}"
  }
}
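Logstash 5.x can syntax-check these pipelines before a restart. The paths below are the RPM-install defaults, so adjust them if Logstash lives elsewhere:

```shell
CONF_DIR=/etc/logstash/conf.d/

# Validate every pipeline file without starting Logstash
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  --config.test_and_exit -f "$CONF_DIR"
```

For the syslog input to receive anything, a client must forward to it, e.g. a line like `*.* @@192.168.1.105:5140` in /etc/rsyslog.conf (`@@` selects TCP).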

Zookeeper and Kafka Cluster Setup

Install on three nodes: 192.168.1.105, 192.168.1.106, 192.168.1.107

# Install Zookeeper
tar -xvf zookeeper-3.4.14.tar.gz
ln -s /opt/zookeeper-3.4.14 /opt/zookeeper

# Configure zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=192.168.1.105:2888:3888
server.2=192.168.1.106:2888:3888
server.3=192.168.1.107:2888:3888

# Create myid files
echo "1" > /opt/zookeeper/data/myid  # On node 1
echo "2" > /opt/zookeeper/data/myid  # On node 2
echo "3" > /opt/zookeeper/data/myid  # On node 3
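Since each myid must line up with the matching server.N entry in zoo.cfg, here is a small sketch that derives it from the node's IP (IPs from this article; the hostname -I line is one way to obtain the local address):

```shell
ip="192.168.1.106"            # on a real node: ip=$(hostname -I | awk '{print $1}')

# Map the IP to the matching server.N line in zoo.cfg
case "$ip" in
  192.168.1.105) myid=1 ;;
  192.168.1.106) myid=2 ;;
  192.168.1.107) myid=3 ;;
esac

echo "$myid"                  # this value is written to /opt/zookeeper/data/myid
```

After starting all three nodes, `zkServer.sh status` should report one leader and two followers.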

# Start Zookeeper
/opt/zookeeper/bin/zkServer.sh start
# Install Kafka
tar -xvf kafka_2.12-2.3.0.tgz
ln -s /opt/kafka_2.12-2.3.0 /opt/kafka

# Node 1 configuration (/opt/kafka/config/server.properties)
broker.id=1
listeners=PLAINTEXT://192.168.1.105:9092
zookeeper.connect=192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181

# Node 2 configuration
broker.id=2
listeners=PLAINTEXT://192.168.1.106:9092
zookeeper.connect=192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181

# Node 3 configuration
broker.id=3
listeners=PLAINTEXT://192.168.1.107:9092
zookeeper.connect=192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181

# Start Kafka
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
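Once all three brokers are started, their registration can be confirmed through Zookeeper (paths as installed above):

```shell
ZK=192.168.1.105:2181

# A healthy cluster lists all three broker.ids: [1, 2, 3]
/opt/zookeeper/bin/zkCli.sh -server "$ZK" ls /brokers/ids
```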

Integrating Kafka with Logstash

Create Kafka topic:

/opt/kafka/bin/kafka-topics.sh --create \
  --zookeeper 192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181 \
  --partitions 3 \
  --replication-factor 2 \
  --topic logs
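kafka-topics.sh --describe then shows how the 3 partitions and their replicas landed across the brokers; with replication-factor 2, losing any single broker still leaves every partition with a live copy:

```shell
TOPIC=logs

# Prints leader, replica, and in-sync-replica assignments per partition
/opt/kafka/bin/kafka-topics.sh --describe \
  --zookeeper 192.168.1.105:2181 \
  --topic "$TOPIC"
```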

Logstash input configuration:

input {
  kafka {
    bootstrap_servers => "192.168.1.105:9092,192.168.1.106:9092"
    topics => ["logs"]
  }
}

output {
  elasticsearch {
    hosts => ["192.168.1.105:9200"]
    index => "kafka-logs-%{+YYYY.MM.dd}"
  }
}
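A quick end-to-end smoke test: publish one line with the console producer, then count documents in today's index, building the index name the same way Logstash expands %{+YYYY.MM.dd}:

```shell
idx="kafka-logs-$(date +%Y.%m.%d)"

# Publish a test message to the topic Logstash consumes
echo "hello elk" | /opt/kafka/bin/kafka-console-producer.sh \
  --broker-list 192.168.1.105:9092 --topic logs

# After a few seconds the count should be greater than 0
curl -s -m 5 "http://192.168.1.105:9200/$idx/_count?pretty"
```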

Troubleshooting Common Issues

If Elasticsearch fails to start with a "system call filters failed to install" error, the kernel lacks seccomp support (common on older kernels such as CentOS 6); disabling the filter lets the node start, at the cost of that sandboxing protection:

# Add to elasticsearch.yml
bootstrap.system_call_filter: false

If Logstash cannot read source log files or write to its own log directory, adjust permissions:

chmod 644 /var/log/messages
chown -R logstash:logstash /var/log/logstash
