Configuring SASL Authentication in an Apache Kafka Cluster
This article describes how to configure authentication in an Apache Kafka cluster using SASL, focusing on the SCRAM mechanism.
Overview of Supported Authentication Methods
Apache Kafka allows authentication via SSL and SASL. The following SASL mechanisms are available:
- SASL/GSSAPI: Utilizes Kerberos authentication for secure connections.
- SASL/PLAIN: Simple username and password mechanism.
- SASL/SCRAM: Salted Challenge Response Authentication Mechanism; Kafka stores SCRAM credentials in ZooKeeper, which allows them to be managed dynamically.
- SASL/OAUTHBEARER: Integrates with OAuth 2.0 frameworks.
Configuration Steps
ZooKeeper Configuration
- Modify zoo.cfg to enable SASL authentication:
# Configuration parameters
dataDir=/var/zookeeper
clientPort=2181
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
- Create a JAAS configuration file zookeeper_jaas.conf for ZooKeeper authentication:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="super_pass"
user_kafka="kafka_pass";
};
Kafka Configuration
- Update the Kafka configuration in server.properties:
broker.id=1
listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
sasl.enabled.mechanisms=SCRAM-SHA-512
zookeeper.connect=127.0.0.1:2181
# An authorizer must be configured for the ACL commands below
# (use kafka.security.auth.SimpleAclAuthorizer on releases before Kafka 2.4)
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
super.users=User:admin
- Create a JAAS configuration file kafka_server_jaas.conf:
KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin_pass";
};
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="kafka"
password="kafka_pass";
};
Add Users and ACLs
Use kafka-configs.sh to add user credentials. Create the admin user before starting the brokers, since it is used for inter-broker authentication:
./bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-512=[password=admin_pass]' --entity-type users --entity-name admin
./bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-512=[password=kafka_pass]' --entity-type users --entity-name kafka
Grant topic-level permissions with kafka-acls.sh:
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:kafka --operation Read --topic my-topic
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:kafka --operation Write --topic my-topic
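To confirm the grants took effect, the same tool can list the ACLs recorded for the topic (this assumes the running cluster configured above):

```shell
# List all ACLs attached to my-topic; requires ZooKeeper reachable at localhost:2181
./bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic my-topic
```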
Initialization and Startup
Ensure ZooKeeper and Kafka start with their respective environment variable settings. For ZooKeeper (when using Kafka's bundled script, which also reads KAFKA_OPTS):
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/zookeeper_jaas.conf"
nohup ./bin/zookeeper-server-start.sh ./config/zookeeper.properties &
For Kafka:
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
nohup ./bin/kafka-server-start.sh ./config/server.properties &
Verification
Test the producer and consumer operations with proper credentials to validate authentication.
Producer example:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "SCRAM-SHA-512");
// JAAS values must use double quotes, escaped inside the Java string literal
props.put("sasl.jaas.config", "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"kafka\" password=\"kafka_pass\";");
Consumer example:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "SCRAM-SHA-512");
props.put("sasl.jaas.config", "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"kafka\" password=\"kafka_pass\";");
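Since the producer and consumer share the same security settings, it can be convenient to assemble them once in a helper. A minimal, self-contained sketch (the class name, helper method, and broker address are illustrative assumptions, not part of the Kafka API):

```java
import java.util.Properties;

// Sketch: build the SASL/SCRAM client properties shared by producer and consumer.
public class SaslClientConfig {
    public static Properties saslProps(String user, String password) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        // JAAS values require double quotes, escaped inside the Java string literal
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                + "username=\"" + user + "\" password=\"" + password + "\";");
        return props;
    }
}
```

The returned Properties object can then be passed directly to the KafkaProducer or KafkaConsumer constructor, with serializer/deserializer settings added as needed.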
Notes
- Ensure configuration file paths are accurately set.
- Synchronize changes across all Kafka and ZooKeeper nodes for consistency.