Securing Apache Kafka with SASL-PLAIN Authentication and ACL Authorization
Core Security Concepts
- SASL (Simple Authentication and Security Layer): A framework for identity verification during client-to-server connections. How securely credentials are transmitted depends on the chosen mechanism.
- SSL/TLS: Encrypts the data transmitted over the network. Relying on SASL alone leaves the payload unencrypted after authentication.
- ACL (Access Control List): Defines granular rules dictating which authenticated identities can perform specific operations on particular resources.
Kafka Security Architecture
Apache Kafka provides a multi-layered security model (introduced in version 0.9 and expanded in later releases):
- Secure communication between brokers and ZooKeeper.
- SASL-based Authentication: Validates identities for inter-broker, client-to-broker, and administrative connections.
- ACL-based Authorization: Restricts client access to specific topics and consumer groups.
- SSL-based Encryption: Protects data in transit.
Kafka supports multiple SASL mechanisms: GSSAPI (Kerberos), PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512.
Configuring Broker SASL/PLAIN Authentication
To enable SASL, modify the listeners parameter in server.properties to accept SASL_PLAINTEXT or SASL_SSL connections:
listeners=SASL_PLAINTEXT://kafka-broker.local:9092

If inter-broker communication must also use SASL, specify the protocol:
security.inter.broker.protocol=SASL_PLAINTEXT

Implementing SASL/PLAIN
SASL/PLAIN utilizes straightforward username/password credentials. It is highly recommended to pair this mechanism with TLS to prevent credential interception.
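Under the hood, the PLAIN mechanism sends the credentials almost verbatim. A minimal sketch of the client's initial response as defined by RFC 4616 (this illustrates the wire format only, not Kafka's actual implementation classes) makes clear why TLS underneath is essential:

```java
import java.nio.charset.StandardCharsets;

// Sketch of the SASL/PLAIN client-first message per RFC 4616:
// authzid NUL authcid NUL password -- nothing is hashed or encrypted.
class PlainInitialResponse {
    static byte[] build(String authzid, String username, String password) {
        String message = authzid + '\0' + username + '\0' + password;
        return message.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Credentials from the examples in this section.
        byte[] response = build("", "producer_user", "usrP@ss");
        // The password travels verbatim inside this byte array.
        System.out.println(response.length);
    }
}
```

Because the password appears in cleartext in this message, anyone who can capture the unencrypted TCP stream can recover it, which is why SASL_SSL is the recommended pairing in production.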
1. Broker-Side Setup
Create a JAAS configuration file (e.g., broker_jaas.conf) in the Kafka config directory:
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="sys_admin"
password="s3cr3tK3y"
user_sys_admin="s3cr3tK3y"
user_producer_user="usrP@ss";
};

The username and password fields define the credentials used for inter-broker communication. The user_<principal> entries define the passwords for connecting clients.
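Conceptually, the broker's PlainLoginModule treats the user_<principal> options as a username-to-password lookup table. A simplified stdlib-only sketch of that check (the map below mirrors the example JAAS file above; the real broker reads these values from the login module options and should compare passwords in constant time):

```java
import java.util.Map;

// Simplified model of how the broker validates a SASL/PLAIN client.
class PlainCredentialCheck {
    // Mirrors the user_<principal> entries in broker_jaas.conf.
    static final Map<String, String> USERS = Map.of(
            "sys_admin", "s3cr3tK3y",
            "producer_user", "usrP@ss");

    static boolean authenticate(String username, String password) {
        // Unknown principals are rejected; known ones must match the stored password.
        String expected = USERS.get(username);
        return expected != null && expected.equals(password);
    }

    public static void main(String[] args) {
        System.out.println(authenticate("producer_user", "usrP@ss")); // valid credentials
        System.out.println(authenticate("producer_user", "wrong"));   // bad password
        System.out.println(authenticate("intruder", "usrP@ss"));      // unknown user
    }
}
```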
Pass the JAAS file path to the broker JVM via the KAFKA_OPTS environment variable:
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/config/broker_jaas.conf"

Update server.properties with the required SASL parameters:
listeners=SASL_PLAINTEXT://kafka-broker.local:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

Note that security.inter.broker.protocol must match the security protocol of one of the configured listeners; in production, both should use SASL_SSL.

Launch the broker with the updated configuration:
bin/kafka-server-start.sh config/server-sasl.properties

2. Client-Side Setup
For Java clients, inject the JAAS configuration directly into client.properties:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="producer_user" \
password="usrP@ss";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN

For command-line tools, define a separate client_jaas.conf file:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="producer_user"
password="usrP@ss";
};

Export the environment variable and execute the console producer:
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/config/client_jaas.conf"
bin/kafka-console-producer.sh --broker-list kafka-broker.local:9092 \
--topic secure-events \
--producer-property security.protocol=SASL_PLAINTEXT \
--producer-property sasl.mechanism=PLAIN

Production Considerations
Always combine SASL/PLAIN with SSL/TLS. To avoid storing plaintext passwords on disk, utilize custom callback handlers introduced in Kafka 2.0 (sasl.server.callback.handler.class and sasl.client.callback.handler.class) to fetch credentials from external secure vaults or authentication servers.
Configuring Broker ACL Authorization
Kafka provides a pluggable authorizer. The default implementation stores ACLs in ZooKeeper. Enable it in server.properties:
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

(In Kafka 2.4 and later, the ZooKeeper-backed implementation is kafka.security.authorizer.AclAuthorizer.) By default, if no ACLs match a resource, access is denied to everyone except super users. To allow all access when no ACL exists, set:
allow.everyone.if.no.acl.found=true

Define super users (semicolon-separated) to bypass ACL checks:
super.users=User:sys_admin;User:root

Principal formatting can be customized via principal.builder.class for SSL clients or sasl.kerberos.principal.to.local.rules for Kerberos principals.
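Taken together, the settings above amount to a decision procedure roughly like the following. This is a deliberately simplified model (it ignores deny rules, operations, hosts, and resource-pattern matching); the principal and topic names echo the examples in this section:

```java
import java.util.Map;
import java.util.Set;

// Simplified model of the authorizer's per-request decision.
class AclDecisionSketch {
    static final Set<String> SUPER_USERS = Set.of("User:sys_admin", "User:root");
    static final boolean ALLOW_IF_NO_ACL = false; // allow.everyone.if.no.acl.found

    // topic -> principals with an allow ACL (operations and deny rules omitted).
    static final Map<String, Set<String>> TOPIC_ACLS = Map.of(
            "secure-events", Set.of("User:producer_user"));

    static boolean authorize(String principal, String topic) {
        if (SUPER_USERS.contains(principal)) return true; // super users bypass ACLs
        Set<String> allowed = TOPIC_ACLS.get(topic);
        if (allowed == null) return ALLOW_IF_NO_ACL;      // no ACL on the resource
        return allowed.contains(principal);
    }

    public static void main(String[] args) {
        System.out.println(authorize("User:sys_admin", "anything"));          // super user
        System.out.println(authorize("User:producer_user", "secure-events")); // allow ACL
        System.out.println(authorize("User:consumer_a", "secure-events"));    // no matching ACL
        System.out.println(authorize("User:consumer_a", "other-topic"));      // no ACL, default deny
    }
}
```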
Managing ACLs via CLI
Granting Producer Access:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk.local:2181 \
--add --allow-principal User:producer_user \
--producer --topic secure-events

Granting IP and User Specific Access:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk.local:2181 \
--add --allow-principal User:consumer_a \
--allow-host 10.20.30.40 \
--operation Read --operation Write \
--topic secure-events

Applying Permissions via Prefix Matching:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk.local:2181 \
--add --allow-principal User:producer_user \
--producer --topic secure- \
--resource-pattern-type prefixed

Querying ACLs:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk.local:2181 \
--list --topic secure-events

Removing ACLs:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk.local:2181 \
--remove --allow-principal User:consumer_a \
--allow-host 10.20.30.40 \
--operation Read --operation Write \
--topic secure-events

Managing ACLs via AdminClient API:
bin/kafka-acls.sh --bootstrap-server kafka-broker.local:9092 \
--command-config /tmp/admin-props.conf \
--add --allow-principal User:consumer_a \
--consumer --topic secure-events --group analytics-group