Known Issues in Apache Kafka
Learn about the known issues in Kafka, their impact or changes to functionality, and any available workarounds.
- CDPD-60862: Rolling restart fails during ZDU when DDL operations are in progress
 - During a Zero Downtime Upgrade (ZDU), the rolling restart of services that support Data Definition Language (DDL) statements might fail if DDL operations are in progress during the upgrade. As a result, ensure that you do not run DDL statements during ZDU.
The following services support DDL statements:
  - Impala
  - Hive – using HiveQL
  - Spark – using SparkSQL
  - HBase
  - Phoenix
  - Kafka

Data Manipulation Language (DML) statements are not impacted and can be used during ZDU. For example, a CREATE TABLE statement in HiveQL is DDL and should be avoided during the upgrade, while an INSERT statement is DML and remains safe to run. Following a successful upgrade, you can resume running DDL statements.
 - CDPD-60489: Jackson-dataformat-yaml 2.12.7 and Snakeyaml 2.0 are not compatible.
 - You must not use Jackson-dataformat-yaml through the platform for YAML parsing.
- OPSAPS-59553: SMM's bootstrap server config should be updated based on Kafka's listeners
 - SMM does not show any metrics for Kafka or Kafka Connect when multiple listeners are set in Kafka.
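   For reference, the following is a minimal, hypothetical broker configuration of the kind that triggers this behavior; the listener names, ports, and protocols are illustrative assumptions, not values taken from this documentation:

       # Hypothetical example: a broker with more than one listener defined.
       # With a setup like this, SMM may fail to display Kafka and Kafka Connect metrics.
       listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
       listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SASL_SSL
       inter.broker.listener.name=INTERNAL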
 
- The offsets.topic.replication.factor property must be less than or equal to the number of live brokers
 - The offsets.topic.replication.factor broker configuration is now enforced upon auto topic creation. Internal auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this replication factor requirement.
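   As an illustrative sketch only, assuming a single-broker test cluster: Kafka's default for this property is 3, so on a cluster with fewer than three live brokers the internal offsets topic cannot be auto-created until the setting is lowered in the broker configuration:

       # Assumption: a single-broker test cluster. Lowering the internal offsets
       # topic replication factor lets auto topic creation succeed.
       offsets.topic.replication.factor=1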
- Requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true
 - The first few produce requests fail when sending to a nonexistent topic with auto.create.topics.enable set to true.
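   The following is a minimal client-side sketch, not a documented workaround: it assumes standard Apache Kafka producer properties and relies on producer retries so that sends issued while the topic is still being auto-created are attempted again. The broker address and topic name are placeholders:

       import java.util.Properties;
       import org.apache.kafka.clients.producer.KafkaProducer;
       import org.apache.kafka.clients.producer.ProducerRecord;

       public class AutoCreateRetryExample {
           public static void main(String[] args) {
               Properties props = new Properties();
               props.put("bootstrap.servers", "broker-1:9092"); // placeholder broker address
               props.put("key.serializer",
                       "org.apache.kafka.common.serialization.StringSerializer");
               props.put("value.serializer",
                       "org.apache.kafka.common.serialization.StringSerializer");
               // Allow the first sends to be retried while the broker finishes
               // creating the topic.
               props.put("retries", "10");
               props.put("retry.backoff.ms", "500");

               try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                   producer.send(new ProducerRecord<>("example-topic", "key", "value"));
               }
           }
       }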
- Performance degradation when SSL is enabled
 - In some configuration scenarios, significant performance degradation can occur when SSL is enabled. The impact varies depending on your CPU, JVM version, Kafka configuration, and message size. Consumers are typically more affected than producers.
 
- OPSAPS-43236: Kafka garbage collection logs are written to the process directory
 - By default, Kafka garbage collection logs are written to the agent process directory. Changing the default path for these log files is currently unsupported.
 
- RANGER-3809: Idempotent Kafka producer fails to initialize due to an authorization failure
 - Kafka producers that have idempotence enabled require the Idempotent Write permission to be set on the cluster resource in Ranger. If the permission is not given, the client fails to initialize and an error similar to the following is thrown:

       org.apache.kafka.common.KafkaException: Cannot execute transactional method because we are in an error state
           at org.apache.kafka.clients.producer.internals.TransactionManager.maybeFailWithError(TransactionManager.java:1125)
           at org.apache.kafka.clients.producer.internals.TransactionManager.maybeAddPartition(TransactionManager.java:442)
           at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:1000)
           at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:914)
           at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:800)
           . . .
       Caused by: org.apache.kafka.common.errors.ClusterAuthorizationException: Cluster authorization failed.

   Idempotence is enabled by default for clients in Kafka 3.0.1, 3.1.1, and any version after 3.1.1. This means that any client updated to 3.0.1, 3.1.1, or a later version is affected by this issue.
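   As an assumption-labeled sketch rather than guidance from this documentation: if the Idempotent Write permission cannot be granted immediately, idempotence can be switched off on the client side with the standard producer property below, at the cost of losing idempotent delivery guarantees:

       # Assumption: disabling idempotence avoids the Idempotent Write
       # permission check, but removes idempotent delivery guarantees.
       enable.idempotence=false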
- CDPD-49304: AvroConverter does not support composite default values
 - AvroConverter cannot handle schemas containing a STRUCT type default value.
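   For illustration, a hypothetical Avro schema of the kind that triggers this issue; the record and field names are invented. The default value of the address field is itself a record (a STRUCT), which AvroConverter cannot handle:

       {
         "type": "record",
         "name": "Example",
         "fields": [
           {
             "name": "address",
             "type": {
               "type": "record",
               "name": "Address",
               "fields": [ { "name": "city", "type": "string" } ]
             },
             "default": { "city": "Unknown" }
           }
         ]
       }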
 - DBZ-4990: The Debezium Db2 Source connector does not support schema evolution
 - The Debezium Db2 Source connector does not support the evolution (updates) of schemas. In addition, schema change events are not emitted to the schema change topic if there is a change in the schema of a table that is in capture mode. For more information, see DBZ-4990.
 - This issue only affects Stateless NiFi Source and Sink connectors if the connector is running a dataflow that uses a processor that uses Hadoop libraries and is configured to use Snappy compression. The HDFS Stateless Sink connector is only affected if the Compression Codec or Compression Codec for Parquet properties are set to SNAPPY. If you are affected by this issue, errors similar to the following will be present in the logs:

       Failed to write to HDFS due to java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()

       Failed to write to HDFS due to java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.
- OPSAPS-69481: Some Kafka Connect metrics missing from Cloudera Manager due to conflicting definitions
 - The metric definitions for kafka_connect_connector_task_metrics_batch_size_avg and kafka_connect_connector_task_metrics_batch_size_max in recent Kafka CSDs conflict with previous definitions in other CSDs. This prevents Cloudera Manager from registering these metrics. It also results in SMM returning an error. The metrics also cannot be monitored in Cloudera Manager chart builder or queried using the Cloudera Manager API.
- OPSAPS-71258: Zstandard and Snappy compression do not support /tmp mounted as noexec
 - Kafka cannot process messages compressed with Zstandard or Snappy if /tmp is mounted as noexec.
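   A hedged mitigation sketch, based on general JVM and compression-library behavior rather than on this documentation: the native Zstandard and Snappy libraries are extracted to the JVM temporary directory before being loaded, so pointing java.io.tmpdir at a path that is not mounted noexec can avoid the failure. The path below is a placeholder:

       # Assumption: pass an additional Java option to the Kafka broker JVM so
       # native compression libraries are extracted to an executable location.
       -Djava.io.tmpdir=/var/tmp/kafka-tmp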
Unsupported Features
- The following Kafka features are not supported in Cloudera Data Platform:
- Only Java and .Net based clients are supported. Clients developed with C, C++, Python, and other languages are currently not supported.
 - The Kafka default authorizer is not supported. This includes setting ACLs and all related APIs, broker functionality, and command-line tools.
 - SASL/SCRAM is only supported for delegation token based authentication. It is not supported as a standalone authentication mechanism.
 - Kafka KRaft in this release of Cloudera Runtime is in technical preview and does not support the following:
- Deployments with multiple log directories. This includes deployments that use JBOD for storage.
 - Delegation token based authentication.
 - Migrating an already running Kafka service from ZooKeeper to KRaft.
 - Atlas Integration.
Limitations
- Collection of Partition Level Metrics May Cause Cloudera Manager’s Performance to Degrade
 - If the Kafka service operates with a large number of partitions, collection of partition level metrics may cause Cloudera Manager's performance to degrade.
If you are observing performance degradation and your cluster is operating with a high number of partitions, you can choose to disable the collection of partition level metrics.
Complete the following steps to turn off the collection of partition level metrics:
- Obtain the Kafka service name:
 - In Cloudera Manager, select the Kafka service.
 - Select any available chart, and select Open in Chart Builder from the configuration icon drop-down.
 - Find $SERVICENAME= near the top of the display. The Kafka service name is the value of $SERVICENAME.
- Turn off the collection of partition level metrics:
 - Go to .
 - Find and configure the Cloudera Manager Agent Monitoring Advanced Configuration Snippet (Safety Valve) configuration property. Enter the following to turn off the collection of partition level metrics:

       [KAFKA_SERVICE_NAME]_feature_send_broker_topic_partition_entity_update_enabled=false

   Replace [KAFKA_SERVICE_NAME] with the service name of Kafka obtained in step 1. The service name should always be in lower case.
 - Click Save Changes.
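   For example, assuming the Kafka service name obtained in step 1 is kafka, the entry would read:

       kafka_feature_send_broker_topic_partition_entity_update_enabled=false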
