Implementing ELK Stack with Kafka for Centralized Log Management
Elasticsearch Cluster Setup
Configure two nodes with IPs 192.168.1.105 and 192.168.1.106, and ensure proper host resolution in /etc/hosts on both servers. With only two master-eligible nodes, also set discovery.zen.minimum_master_nodes: 2 in elasticsearch.yml to guard against split-brain; a third node is preferable in production, since a two-node quorum cannot survive losing either node.
# Install EPEL repository
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Install Java and Elasticsearch
yum install jdk-8u171-linux-x64.rpm elasticsearch-5.4.0.rpm

Node 1 Configuration (/etc/elasticsearch/elasticsearch.yml):
cluster.name: elk-cluster
node.name: node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.1.105
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.105", "192.168.1.106"]

Create data directories and set permissions on both nodes:
mkdir -p /var/lib/elasticsearch /var/log/elasticsearch
chown -R elasticsearch:elasticsearch /var/lib/elasticsearch /var/log/elasticsearch

Memory Configuration (/etc/elasticsearch/jvm.options). Set both values to the same size, and keep the heap at or below half of the machine's RAM:
-Xms2g
-Xmx2g

Configure systemd limits:
# Edit /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity
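Editing the packaged unit file works, but a package upgrade will overwrite it; an equivalent systemd drop-in survives upgrades (a sketch, using the standard override location):
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitMEMLOCK=infinity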
systemctl daemon-reload
systemctl restart elasticsearch

Node 2 Configuration (/etc/elasticsearch/elasticsearch.yml):
cluster.name: elk-cluster
node.name: node-2
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.1.106
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.105", "192.168.1.106"]
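After restarting both nodes, confirm they formed a single cluster; number_of_nodes should be 2 and status green:
curl 'http://192.168.1.105:9200/_cluster/health?pretty'

Kibana Installation and Configuration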
# Kibana's version must match Elasticsearch's; pair Elasticsearch 5.4.0 with a 5.4.0 Kibana (5.6.5 would fail its startup version check)
yum install kibana-5.4.0-x86_64.rpm
# Configure /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.1.105"
elasticsearch.url: "http://192.168.1.105:9200"
systemctl enable kibana
systemctl start kibana

Nginx Reverse Proxy with Authentication
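Building Nginx from source needs a compiler plus the PCRE, zlib, and OpenSSL development headers; on CentOS 7 the following should cover the configure flags used below:
# Build dependencies
yum install -y gcc make pcre-devel zlib-devel openssl-devel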
# Install Nginx
tar -xvf nginx-1.14.0.tar.gz
cd nginx-1.14.0
./configure --prefix=/usr/local/nginx --with-http_ssl_module
make && make install
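The htpasswd utility comes from the httpd-tools package rather than from the Nginx sources (assuming CentOS 7):
yum install -y httpd-tools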
# Create authentication
htpasswd -c /usr/local/nginx/conf/.htpasswd kibana_user

Nginx configuration for Kibana (add a server block to /usr/local/nginx/conf/nginx.conf):
server {
    listen 80;
    server_name kibana.example.com;

    auth_basic "Kibana Authentication";
    auth_basic_user_file /usr/local/nginx/conf/.htpasswd;

    location / {
        # Kibana binds to 192.168.1.105 (server.host above), so proxy to that address, not 127.0.0.1
        proxy_pass http://192.168.1.105:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

Logstash Configuration for Multiple Sources
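Note that Logstash merges every file under /etc/logstash/conf.d into one pipeline, so events from every input reach every output. The two configurations in this section therefore wrap each elasticsearch output in a conditional on type; without that guard, each event would be indexed into both indices.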
Nginx Access Logs (/etc/logstash/conf.d/nginx.conf):
input {
  file {
    path => "/var/log/nginx/access.log"
    codec => json    # requires Nginx to emit JSON lines; see the log_format sketch below
    type => "nginx_access"
  }
}
output {
  if [type] == "nginx_access" {
    elasticsearch {
      hosts => ["192.168.1.105:9200"]
      index => "nginx-access-%{+YYYY.MM.dd}"
    }
  }
}
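The json codec above assumes Nginx writes access entries as JSON. A log_format along these lines makes that hold (field names here are illustrative, not from the original setup); define it in the http block and point access_log at it:
log_format logstash_json '{"@timestamp":"$time_iso8601","remote_addr":"$remote_addr","request":"$request","status":"$status","body_bytes_sent":"$body_bytes_sent"}';
access_log /var/log/nginx/access.log logstash_json;

System Logs (/etc/logstash/conf.d/syslog.conf):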
input {
  syslog {
    port => 5140
    type => "system"
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.1.105:9200"]
      index => "system-logs-%{+YYYY.MM.dd}"
    }
  }
}
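Events only arrive if clients forward their syslog streams to this listener. With rsyslog, a one-line drop-in on each client is enough (@@ selects TCP; a single @ would use UDP):
# /etc/rsyslog.d/forward-logstash.conf
*.* @@192.168.1.105:5140
# Then: systemctl restart rsyslog

Zookeeper and Kafka Cluster Setup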
Install on three nodes: 192.168.1.105, 192.168.1.106, and 192.168.1.107. Both Zookeeper and Kafka require Java, so install the JDK on 192.168.1.107 as well.
# Install Zookeeper
tar -xvf zookeeper-3.4.14.tar.gz -C /opt
ln -s /opt/zookeeper-3.4.14 /opt/zookeeper
# Configure /opt/zookeeper/conf/zoo.cfg (start from the bundled zoo_sample.cfg)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
clientPort=2181
server.1=192.168.1.105:2888:3888
server.2=192.168.1.106:2888:3888
server.3=192.168.1.107:2888:3888
# Create the data directory and per-node myid file
mkdir -p /opt/zookeeper/data
echo "1" > /opt/zookeeper/data/myid # On node 1
echo "2" > /opt/zookeeper/data/myid # On node 2
echo "3" > /opt/zookeeper/data/myid # On node 3
# Start Zookeeper (run on all three nodes)
/opt/zookeeper/bin/zkServer.sh start
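Once started on all three nodes, each should report its role in the quorum:
/opt/zookeeper/bin/zkServer.sh status  # one node reports Mode: leader, the others Mode: follower

# Install Kafka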
tar -xvf kafka_2.12-2.3.0.tgz -C /opt
ln -s /opt/kafka_2.12-2.3.0 /opt/kafka
# Node 1 configuration (/opt/kafka/config/server.properties)
broker.id=1
listeners=PLAINTEXT://192.168.1.105:9092
zookeeper.connect=192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181
# Node 2 configuration
broker.id=2
listeners=PLAINTEXT://192.168.1.106:9092
zookeeper.connect=192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181
# Node 3 configuration
broker.id=3
listeners=PLAINTEXT://192.168.1.107:9092
zookeeper.connect=192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181
# Start Kafka (run on all three nodes)
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
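With all three brokers running, verify they registered in Zookeeper:
/opt/kafka/bin/zookeeper-shell.sh 192.168.1.105:2181 ls /brokers/ids
# Expected: the last line of output is [1, 2, 3]

Integrating Kafka with Logstash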
Create Kafka topic:
/opt/kafka/bin/kafka-topics.sh --create \
--zookeeper 192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181 \
--partitions 3 \
--replication-factor 2 \
--topic logs
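Describe the topic to confirm partition leaders are spread across the brokers:
/opt/kafka/bin/kafka-topics.sh --describe \
  --zookeeper 192.168.1.105:2181,192.168.1.106:2181,192.168.1.107:2181 \
  --topic logs

Logstash input configuration: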
input {
  kafka {
    bootstrap_servers => "192.168.1.105:9092,192.168.1.106:9092,192.168.1.107:9092"
    topics => ["logs"]
    codec => json    # assumes producers publish JSON-encoded events (see the shipper sketch below)
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.105:9200"]
    index => "kafka-logs-%{+YYYY.MM.dd}"
  }
}
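Nothing in this setup publishes to the logs topic yet; any Kafka producer will do. As one sketch, a Logstash shipper on an edge node could publish JSON events with the kafka output plugin (matching the json codec of the input above):
output {
  kafka {
    bootstrap_servers => "192.168.1.105:9092,192.168.1.106:9092,192.168.1.107:9092"
    topic_id => "logs"
    codec => json
  }
}

Troubleshooting Common Issues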
If Elasticsearch fails to start with a "system call filters failed to install" error (seen on older kernels without seccomp support):
# Add to elasticsearch.yml
bootstrap.system_call_filter: false

For permission issues with logs:
chmod 644 /var/log/messages    # note: log rotation may recreate this file with restrictive permissions
chown -R logstash:logstash /var/log/logstash