You must be aware of the known issues and limitations, the areas of impact, and
the workarounds in Cloudera Manager 7.13.1 and its cumulative hotfixes.
Known issues identified in Cloudera Manager 7.13.1 CHF4
(7.13.1.400)
There are no new known issues identified in this release.
Known issues identified in Cloudera Manager 7.13.1 CHF3
(7.13.1.300)
There are no new known issues identified in this release.
Known issues identified in Cloudera Manager 7.13.1 CHF2
(7.13.1.200)
There are no new known issues identified in this release.
Known issues identified in Cloudera Manager 7.13.1 CHF1
(7.13.1.100)
This section lists the known issues that are identified in this release:
CDPD-79725: Hive fails to start after Datahub restart due to
high memory usage
After restarting the Cloudera Data Hub, the services appear to be down in the
Cloudera Manager UI. The Cloudera Management Console reports a node failure error
for the master node.
The issue is caused by high memory usage due to the G1 garbage collector on Java
17, leading to insufficient memory issues and thereby moving the Cloudera clusters
to an error state.
Starting with Cloudera 7.3.1.0, Java 17 is the default runtime instead of Java 8,
and its memory management increases memory usage, potentially affecting system
performance. Clusters might report error states, and logs might show insufficient
memory exceptions.
To mitigate this issue and prevent startup failures after a Datahub restart, you
can perform either of the following actions, or both:
Reduce the Java heap size for affected services to prevent nodes from exceeding
the available memory.
Increase physical memory for the cloud or on-premises instances running the
affected services.
Known issues in Cloudera Manager 7.13.1
OPSAPS-74341: NodeManagers
might fail to start during the cluster restart after the Cloudera Manager 7.13.1.x upgrade
Cgroup v2 support is enabled in CDP 7.1.9 SP1 CHF5 and higher versions. However, if
you upgrade from Cloudera Manager 7.11.3.x to Cloudera Manager 7.13.1.x and the environment is using cgroup v2,
the NodeManagers might fail to start during the cluster restart
after the Cloudera Manager 7.13.1.x upgrade.
To resolve this issue temporarily, you must perform the following steps:
Go to the YARN service page on the Cloudera Manager UI.
Navigate to the Configuration tab.
Search for NodeManager Advanced Configuration Snippet (Safety Valve)
for yarn-site.xml.
Due to a missing dependency caused by an incomplete build and packaging in certain
OS releases, the HMS (Hive Metastore) Canary health test fails, logging a
ClassNotFoundException in the Service Monitor log.
This problem affects all deployments using Runtime cluster version 7.1.x or 7.2.x
when the Cloudera Manager version is 7.13.1.x and the OS is not RHEL 8.
If your OS is RHEL 9, SLES 15, Ubuntu 20.04, or Ubuntu 22.04 and you install
Cloudera Manager 7.13.1.x, create a symbolic link using root user privileges
on the node that hosts the Service Monitor service (cloudera-scm-firehose) at
/opt/cloudera/cm/lib/cdh71/cdh71-hive-client-7.13.1-shaded.jar,
pointing to
/opt/cloudera/cm/lib/cdh7/cdh7-hive-client-7.13.1-shaded.jar.
Restart the Service Monitor service after the change. This allows the Service
Monitor to perform Canary testing correctly on the HMS (Hive Metastore) service.
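For example, on the Service Monitor host the link can be created and verified as follows (a sketch based on the paths above; run as root and assuming the cdh71 directory already exists):
# Create the symbolic link: the cdh71 path becomes a link pointing to the cdh7 jar
ln -s /opt/cloudera/cm/lib/cdh7/cdh7-hive-client-7.13.1-shaded.jar /opt/cloudera/cm/lib/cdh71/cdh71-hive-client-7.13.1-shaded.jar
# Verify the link
ls -l /opt/cloudera/cm/lib/cdh71/cdh71-hive-client-7.13.1-shaded.jar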
OPSAPS-72706, OPSAPS-73188: Hive queries fail after upgrading
Cloudera Manager from 7.11.2 to 7.11.3 or
later
Upgrading Cloudera Manager from version
7.11.2 or earlier to 7.11.3 or later causes Hive queries to fail due to JDK17
restrictions. Some JDK8 options are deprecated, leading to inaccessible classes and
exceptions:
java.lang.reflect.InaccessibleObjectException: Unable to make field private volatile java.lang.String java.net.URI.string accessible
To resolve this issue:
In Cloudera Manager, go to Tez > Configuration.
Append the following values to both tez.am.launch.cmd-opts and
tez.task.launch.cmd-opts:
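The values to append are not reproduced in this section. As an illustration only (an assumption, not the confirmed list of options for this issue), JDK 17 access errors such as the InaccessibleObjectException above are typically addressed with --add-opens JVM options, for example:
--add-opens java.base/java.net=ALL-UNNAMED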
Charts for HMS event APIs (get_next_notification,
get_current_notificationEventId, and fire_listener_event) are missing in Cloudera Manager > Hive > Hive Metastore Instance > Charts Library > API
Monitor HMS event activity using Hive Metastore
logs.
OPSAPS-72270: Start ECS
command fails on uncordon nodes step
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200
Fixed in: 7.13.1.300
In an ECS HA cluster, the server node restarts during startup. This might cause
the uncordon step to fail.
To resolve this issue temporarily, you must perform the following steps:
Run the following command on the restarted server node to verify whether the
kube-apiserver is ready:
kubectl get pods -n kube-system | grep kube-apiserver
Resume the command from the Cloudera Manager UI.
OPSAPS-73225: Cloudera Manager Agent
reporting inactive/failed processes in Heartbeat request
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200
Fixed in: 7.13.1.300
As part of introducing Cloudera Manager 7.13.x, changes were made to the
Cloudera Manager logging that cause the Cloudera Manager Agent to report inactive or stale processes in the
heartbeat request.
As a result, the Cloudera Manager server logs fill up rapidly
with these notifications, even though they have no impact on the service.
In addition, adding support for the Cloudera Observability
feature introduced additional messages in the server logging. If you have not
purchased the Cloudera Observability feature, or telemetry monitoring is not being
used, these messages (which appear as "TELEMETRY_ALTUS_ACCOUNT is not configured for
Otelcol") fill the server logs and prevent proper follow-up on the server activities.
This will be fixed in a later release by moving these log notifications to the DEBUG
level so that they do not appear in the Cloudera Manager server logs.
Until that fix is available, perform the following workaround to filter out these messages.
On each of the Cloudera Manager servers, edit the
/etc/cloudera-scm-server/log4j.properties file with root
credentials and add the following lines at the end of the
file:
# === Custom Appender with Filters ===
log4j.appender.filteredlog=org.apache.log4j.ConsoleAppender
log4j.appender.filteredlog.layout=org.apache.log4j.PatternLayout
log4j.appender.filteredlog.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# === Filter #1: Drop warning ===
log4j.appender.filteredlog.filter.1=org.apache.log4j.varia.StringMatchFilter
log4j.appender.filteredlog.filter.1.StringToMatch=Received Process Heartbeat for unknown (or duplicate) process.
log4j.appender.filteredlog.filter.1.AcceptOnMatch=false
# === Filter #2: Drop telemetry config warning ===
log4j.appender.filteredlog.filter.2=org.apache.log4j.varia.StringMatchFilter
log4j.appender.filteredlog.filter.2.StringToMatch=TELEMETRY_ALTUS_ACCOUNT is not configured for Otelcol
log4j.appender.filteredlog.filter.2.AcceptOnMatch=false
# === Accept all other messages ===
log4j.appender.filteredlog.filter.3=org.apache.log4j.varia.AcceptAllFilter
# === Specific logger for AgentProtocolImpl ===
log4j.logger.com.cloudera.server.cmf.AgentProtocolImpl=WARN, filteredlog
log4j.additivity.com.cloudera.server.cmf.AgentProtocolImpl=false
# === Specific logger for BaseMonitorConfigsEvaluator ===
log4j.logger.com.cloudera.cmf.service.config.BaseMonitorConfigsEvaluator=WARN, filteredlog
log4j.additivity.com.cloudera.cmf.service.config.BaseMonitorConfigsEvaluator=false
Once done, restart the Cloudera Manager server(s) for the updated
configuration to be picked up.
OPSAPS-73211: Cloudera Manager 7.13.1 does
not clean up Python Path impacting Hue to start
When you upgrade from Cloudera Manager 7.7.1 or lower versions to
Cloudera Manager 7.13.1 or higher versions with CDP Private
Cloud Base 7.1.7.x, Hue does not start because Cloudera Manager
forces Hue to start with Python 3.8, while Hue needs Python 2.7.
This issue occurs because Cloudera Manager does not clean up the Python path at
any time, so when Hue tries to start, the Python path points to Python 3.8, which
Hue does not support on CDP Private Cloud Base 7.1.7.x.
To resolve this issue temporarily, you must perform the following steps:
Locate the hue.sh file in
/opt/cloudera/cm-agent/service/hue/.
Add the following line after export
HADOOP_CONF_DIR=$CONF_DIR/hadoop-conf:
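The exact line is not reproduced in this section. As a hypothetical sketch only (an assumption, not the documented fix), clearing the Python path that Hue inherits from Cloudera Manager would look like this in hue.sh:
export HADOOP_CONF_DIR=$CONF_DIR/hadoop-conf
# Hypothetical: clear the inherited Python path so Hue does not pick up Python 3.8
unset PYTHONPATH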
OPSAPS-73011: Wrong parameter in the
/etc/default/cloudera-scm-server file
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200, 7.13.1.300
Fixed in: 7.13.1.400
If Cloudera Manager needs to be installed in
High Availability mode (two nodes or more), the parameter
CMF_SERVER_ARGS in the
/etc/default/cloudera-scm-server file is missing the word
"export" before it (the file contains only
CMF_SERVER_ARGS= instead of export CMF_SERVER_ARGS=),
so the parameter cannot be used correctly.
Edit the
/etc/default/cloudera-scm-server file with root credentials and
add the word "export" before the parameter
CMF_SERVER_ARGS=.
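For example, the change in /etc/default/cloudera-scm-server looks like this (any existing value after the equals sign is kept as is):
# Before
CMF_SERVER_ARGS=
# After
export CMF_SERVER_ARGS=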
OPSAPS-60346: Upgrading Cloudera Manager
Agent triggers cert rotation in Auto-TLS use case 1
Upgrading Cloudera Manager Agent nodes from the Cloudera Manager UI wizard as part of a Cloudera Manager upgrade causes the hosts to get new certificates,
which is disruptive.
The issue happens with Auto-TLS use case 1, where certificates are stored in the
Cloudera Manager database, because Cloudera Manager always regenerates the host certificate as part
of the host install or host upgrade step. With use case 3, Cloudera Manager does not regenerate the certificate because it comes from the user.
Currently, there are three possible workarounds:
Rotate all CMCA certs again using the generateCmca API
command, using the "location" argument to specify a directory on disk (see the
sketch after this list). This reverts to the old behavior of storing the certs on
disk instead of the DB.
Switch to Auto-TLS Use Case 3 (Customer CA-signed Certificates).
Manually upgrade the Cloudera Manager Agents instead of
upgrading them from the Cloudera Manager GUI.
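A minimal sketch of the first workaround, assuming the Cloudera Manager API command endpoint /cm/commands/generateCmca and a JSON body carrying the "location" argument mentioned above. A real invocation requires additional arguments (for example, SSH credentials for the hosts); the host name, port, API version, credentials, and directory below are placeholders, so consult the Cloudera Manager API documentation before running it:
curl -u admin:admin -X POST -H "Content-Type: application/json" \
  -d '{"location": "/opt/cloudera/CMCA"}' \
  "https://cm-host.example.com:7183/api/v54/cm/commands/generateCmca"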
OPSAPS-72447, CDPD-76705: Ozone incremental replication fails
to copy renamed directory
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200, 7.13.1.300
Fixed in: 7.13.1.400
Ozone incremental replication using Ozone replication
policies succeeds but might fail to sync nested renames for FSO buckets.
When a directory and its contents are renamed between
replication runs, the outer-level rename is synced, but the contents under the
previous name are not.
None
OPSAPS-72756: The runOzoneCommand API endpoint fails during the
Ozone replication policy run
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200, 7.13.1.300
Fixed in: 7.13.1.400
The /clusters/{clusterName}/runOzoneCommand Cloudera Manager API endpoint fails when the API is called with the
getOzoneBucketInfo command. In this scenario, the Ozone
replication policy runs also fail if the following conditions are true:
The source Cloudera Manager version is 7.11.3 CHF11
or 7.11.3 CHF12.
The target Cloudera Manager is version 7.11.3
through 7.11.3 CHF10 or 7.13.0.0 or later where the feature flag
API_OZONE_REPLICATION_USING_PROXY_USER is disabled.
Choose one of the following methods as a workaround:
Upgrade the target Cloudera Manager before you
upgrade the source Cloudera Manager (applies to the 7.11.3 CHF12 version
only).
Pause all replication policies, upgrade source Cloudera Manager, upgrade destination Cloudera Manager, and unpause the replication policies.
Upgrade source Cloudera Manager, upgrade target Cloudera Manager, and rerun the failed Ozone replication policies
between the source and target clusters.
OPSAPS-65377: Cloudera Manager - Host
Inspector not finding Psycopg2 on Ubuntu 20 or Redhat 8.x when Psycopg2 version 2.9.3
is installed.
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200, 7.13.1.300
Fixed in: 7.13.1.400
Host Inspector fails with a Psycopg2 version error while upgrading to Cloudera Manager 7.13.1.x versions. When you run the Host Inspector,
you get a Not finding Psycopg2 error, even though Psycopg2
is installed on all hosts.
None
OPSAPS-68340: Zeppelin paragraph execution fails with the
User not allowed to impersonate error.
Starting from Cloudera Manager 7.11.3, Cloudera Manager auto-configures the
livy_admin_users configuration when Livy is run for the first
time. If you add Zeppelin or Knox services later to the existing cluster and do not
manually update the service user, the User not allowed to
impersonate error is displayed.
If you add Zeppelin or Knox services later to the existing cluster, you must
manually add the respective service user to the livy_admin_users
configuration in the Livy configuration page.
OPSAPS-69847: Replication policies might fail if source and
target use different Kerberos encryption types
Replication policies might fail if the source and target Cloudera Manager instances use different encryption types in
Kerberos because of different Java versions. For example, the Java 11 and higher
versions might use the aes256-cts encryption type, and the versions
lower than Java 11 might use the rc4-hmac encryption type.
Ensure that both instances use the same Java version. If it is not possible to
have the same Java version on both instances, ensure that they use the same
encryption type for Kerberos. To check the encryption type in Cloudera Manager, search for krb_enc_types on
the Cloudera Manager > Administration > Settings page.
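One way to see which encryption types are actually in use on a host is to inspect the Kerberos tickets, for example:
# Lists the tickets in the credential cache together with their encryption types
klist -e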
OPSAPS-72804: For recurring replication policies, the interval
is overwritten to 1 after the replication policy is edited
Affected versions: 7.13.1
Fixed in: 7.13.1.100
When you edit an Atlas, Iceberg, Ozone, or a Ranger
replication policy that has a recurring schedule on the Replication Manager UI, the
Edit Replication Policy modal window appears as expected. However, the frequency of
the policy is reset to run at “1” unit where the unit depends on what you have set in
the replication policy. For example, if you have set the replication policy to run
every four hours, it is reset to one hour when you edit the replication policy.
After you edit the replication policy as required, ensure
that you manually set the frequency back to the original schedule,
and then save the replication policy.
OPSAPS-69342: Access issues identified in MariaDB 10.6 were
causing discrepancies in High Availability (HA) mode
MariaDB 10.6, by default, includes the property
require_secure_transport=ON in the configuration file
(/etc/my.cnf), which is absent in MariaDB 10.4. This setting
prohibits non-TLS connections, leading to access issues. This problem is observed in
High Availability (HA) mode, where certain operations may not be using the same
connection.
To resolve the issue temporarily, you can either comment out or disable the line
require_secure_transport in the configuration file located at
/etc/my.cnf.
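For example, the relevant line in /etc/my.cnf can be commented out as follows (assuming the setting lives in the standard [mysqld] section), after which MariaDB must be restarted for the file change to take effect:
[mysqld]
# require_secure_transport=ON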
OPSAPS-70771: Running Ozone replication policy does not show
performance reports
During an Ozone replication policy run, the "A server error has occurred. See
Cloudera Manager server log for details" error message appears when you click:
Performance Reports > OZONE Performance Summary or Performance Reports > OZONE Performance Full on the Replication Policies page.
Download CSV on the Replication History page to download any report.
None
CDPD-53160: Incorrect job run status appears for subsequent
Hive ACID replication policy runs after the replication policy fails
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200
Fixed in: 7.13.1.300
When a Hive ACID replication policy run fails with the
FAILED_ADMIN status, the subsequent Hive ACID replication
policy runs show the SKIPPED status instead of
FAILED_ADMIN on the Cloudera Manager > Replication Manager > Replication Policies > Actions > Show History page, which is incorrect. It is recommended that you check the Hive ACID
replication policy runs if multiple subsequent policy runs show the
SKIPPED status.
None
CDPQE-36126: Iceberg replication fails when source and target
clusters use different nameservice names
When you run an Iceberg replication policy between
clusters where the source and target clusters use different nameservice names, the
replication policy fails.
Perform the following steps to mitigate the issue. In the following steps, the
source nameservice is assumed to be ns1 and the target
cluster nameservice is assumed to be ns2:
Go to the Cloudera Manager > Replication > Replication Policies page.
Click Actions > Edit for the required Iceberg replication policy.
Go to the Advanced tab on the Edit
Iceberg Replication Policy modal window.
Enter the
mapreduce.job.hdfs-servers.token-renewal.exclude =
ns1, ns2 key-value pair in both the Advanced
Configuration Snippet (Safety Valve) for source hdfs-site.xml and
Advanced Configuration Snippet (Safety Valve) for destination
hdfs-site.xml fields.
Save the changes.
Click Actions > Run Now to run the replication policy.
CDPD-53185: Clear REPL_TXN_MAP table on target cluster when
deleting a Hive ACID replication policy
Affected versions: 7.13.1, 7.13.1.100, 7.13.1.200, 7.13.1.300
Fixed in: 7.13.1.400
The entry in REPL_TXN_MAP table on the target cluster is
retained when the following conditions are true:
A Hive ACID replication policy is replicating a transaction that requires
multiple replication cycles to complete.
The replication policy and databases used in it get deleted on the source and
target cluster even before the transaction is completely replicated.
In this scenario, if you create a database using the same name as the deleted
database on the source cluster, and then use the same name for the new Hive ACID
replication policy to replicate the database, the replicated database on the target
cluster is tagged as ‘database incompatible’. This happens after the housekeeper
thread process (that runs every 11 days for an entry) deletes the retained
entry.
Create another Hive ACID replication policy with a
different name for the new database.
DMX-3973: Ozone replication policy with linked bucket as
destination fails intermittently
When you create an Ozone replication policy using a
linked or non-linked source cluster bucket and a linked target bucket, the replication
policy fails during the "Trigger an OZONE replication job on one of the available OZONE
roles" step.
None
OPSAPS-68143: Ozone replication policy fails for empty source
OBS bucket
An Ozone incremental replication policy for an OBS
bucket fails during the “Run File Listing on Peer cluster” step when the source bucket
is empty.
None
OPSAPS-71592: Replication Manager does not read the default
value of “ozone_replication_core_site_safety_valve” during Ozone replication policy
run
Affected versions: 7.13.1
Fixed in: 7.13.1.100
During the Ozone replication policy run, Replication
Manager does not read the value in the
ozone_replication_core_site_safety_valve advanced configuration
snippet if it is configured with the default value.
To mitigate this issue, you can use one of the following
methods:
Remove some or all the properties in
ozone_replication_core_site_safety_valve, and move them to
ozone-conf/ozone-site.xml_service_safety_valve.
Add a dummy property with no value in
ozone_replication_core_site_safety_valve. For example, add
<property><name>dummy_property</name><value></value></property>,
save the changes, and run the Ozone replication policy.
OPSAPS-71897: Finalize Upgrade command fails after upgrading
the cluster with Custom Kerberos setup, causing INTERNAL_ERROR with EC
writes.
The hive.compactor.initiator.on
checkbox in Cloudera Manager UI for Hive Metastore (HMS) does not
reflect the actual configuration value in cloud deployments. The default value is
false, causing the compactor to not run.
To update the
hive.compactor.initiator.on value:
In Cloudera Manager, go to Hive > Configuration.
Set hive.compactor.initiator.on to
true in the "Hive Service Advanced Configuration Snippet
(Safety Valve) for hive-site.xml" (see the example after these steps).
Save the changes and restart the service.
Once applied, the compaction process runs as expected.
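If the safety valve field is populated as an XML snippet (the same format used for other safety valves in this section), the entry would look like this:
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>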
OPSAPS-70702: Ranger replication policies fail if the clusters
do not use AutoTLS
Ranger replication policies fail during the
Exporting services, policies and roles from Ranger remote
step.
Log in to the Ranger Admin host(s) on the source cluster.
Identify the Cloudera Manager agent PEM file using
the # cat /etc/cloudera-scm-agent/config.ini | grep -i
client_cert_file command. For example, the output might show
client_cert_file=/myTLSpath/cm_server-cert.pem.
Create the path for the new PEM file using the # mkdir -p
/var/lib/cloudera-scm-agent/agent-cert/ command.
Copy the client_cert_file from
config.ini as
cm-auto-global_cacerts.pem file using the # cp
/myTLSpath/cm_server-cert.pem
/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem
command.
Change the permissions to 644 using the
# chmod 644
/var/lib/cloudera-scm-agent/agent-cert/cm-auto-global_cacerts.pem
command.
Resume the Ranger replication policy in Replication Manager.
OPSAPS-71424: The configuration sanity check step ignores
the replication advanced configuration snippet values during the Ozone
replication policy job run
Affected versions: 7.13.1
Fixed in: 7.13.1.100
The OBS-to-OBS Ozone replication policy jobs fail if
the S3 property values for fs.s3a.endpoint,
fs.s3a.secret.key, and fs.s3a.access.key are empty
in Ozone Service Advanced Configuration Snippet (Safety Valve) for
ozone-conf/ozone-site.xml even though you defined the properties in
Ozone Replication Advanced Configuration Snippet (Safety Valve) for
core-site.xml.
Ensure that the S3 property values for
fs.s3a.endpoint, fs.s3a.secret.key, and
fs.s3a.access.key contain at least a dummy value in Ozone
Service Advanced Configuration Snippet (Safety Valve) for
ozone-conf/ozone-site.xml.
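A sketch of such dummy entries, assuming the safety valve is populated as XML property elements:
<property><name>fs.s3a.endpoint</name><value>dummy</value></property>
<property><name>fs.s3a.access.key</name><value>dummy</value></property>
<property><name>fs.s3a.secret.key</name><value>dummy</value></property>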
Additionally, ensure that you do not update the property values in Ozone
Replication Advanced Configuration Snippet (Safety Valve) for core-site.xml
for Ozone replication jobs. This is because the values in this advanced
configuration snippet override the property values in core-site.xml and not
the ozone-site.xml file.
Different property values in
Ozone Service Advanced Configuration Snippet (Safety Valve) for
ozone-conf/ozone-site.xml and Ozone Replication Advanced Configuration
Snippet (Safety Valve) for core-site.xml result in nondeterministic behavior
where the replication job picks up either value during the job run, which leads to
incorrect results or replication job failure.
OPSAPS-71403: Ozone replication policy creation wizard shows
"Listing Type" field in source Cloudera Private Cloud Base versions
lower than 7.1.9
When the source Cloudera Private Cloud Base cluster version is lower than 7.1.9 and the
Cloudera Manager version is 7.11.3, the Ozone replication policy
creation wizard shows Listing Type and its options. These
options are not available in Cloudera Private Cloud Base 7.1.8.x
versions.
OPSAPS-71659: Ranger replication policy fails because of
incorrect source to destination service name mapping
Affected versions: 7.13.1
Fixed in: 7.13.1.100
Ranger replication policy fails because of incorrect
source to destination service name mapping format during the transform step.
If the service names are different in the source and
target, then you can perform the following steps to resolve the issue:
SSH to the host on which the Ranger Admin role is running.
Find the ranger-replication.sh file.
Create a backup copy of the file.
Locate substituteEnv
SOURCE_DESTINATION_RANGER_SERVICE_NAME_MAPPING
${RANGER_REPL_SERVICE_NAME_MAPPING} in the file.
Modify it to substituteEnv
SOURCE_DESTINATION_RANGER_SERVICE_NAME_MAPPING
"'${RANGER_REPL_SERVICE_NAME_MAPPING//\"}'"
Save the file.
Rerun the Ranger replication policy.
OPSAPS-69782: HBase COD-COD replication from 7.3.1 to 7.2.18
fails during the "create adhoc snapshot" step
Affected versions: 7.13.1
Fixed in: 7.13.1.100
An HBase replication policy replicating from a 7.3.1 COD
cluster to a 7.2.18 COD cluster that has "Perform Initial Snapshot" enabled fails during the
snapshot creation step in Cloudera Replication Manager.
OPSAPS-71414: Permission denied for Ozone replication policy
jobs if the source and target bucket names are identical
The OBS-to-OBS Ozone replication policy job fails with
the com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden or
Permission denied error when the bucket names on the source and
target clusters are identical and the job uses S3 delegation tokens. Note that the
Ozone replication jobs use the delegation tokens when the S3 connector service is
enabled in the cluster.
You can use one of the following workarounds to mitigate
the issue:
Use different bucket names on the source and target clusters.
Set the fs.s3a.delegation.token.binding property to an empty
value in ozone_replication_core_site_safety_valve to disable the
delegation tokens for Ozone replication policy jobs.
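For the second option, a sketch of the entry in ozone_replication_core_site_safety_valve, assuming the field accepts XML property elements as in the OPSAPS-71592 workaround above:
<property><name>fs.s3a.delegation.token.binding</name><value></value></property>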
OPSAPS-71256: The “Create Ranger replication policy” action
shows 'TypeError' if no peer exists
Affected versions: 7.13.1
Fixed in: 7.13.1.100
When you click target Cloudera Manager > Replication Manager > Replication Policies > Create Replication Policy > Ranger replication policy, the TypeError: Cannot read properties of undefined
error appears.
OPSAPS-71067: Wrong interval sent from the Replication Manager
UI after Ozone replication policy submit or edit process.
SMM does not show any metrics for Kafka or Kafka Connect when
multiple listeners are set in Kafka.
Workaround: SMM cannot identify multiple listeners and
still points to the bootstrap server using the default broker port (9093 for
SASL_SSL). You need to override the bootstrap server URL by
performing the following steps:
In Cloudera Manager, go to SMM > Configuration > Streams Messaging Manager Rest Admin Server Advanced Configuration
Snippet (Safety Valve)
Override the bootstrap server URL (hostname:port, as set in the
broker listeners) for
streams-messaging-manager.yaml.
Save your changes.
Restart SMM.
OPSAPS-69317: Kafka Connect Rolling Restart Check fails if SSL
Client authentication is required
The rolling restart action does not work in Kafka Connect when
the ssl.client.auth option is set to required. The health check fails with a timeout
which blocks restarting the subsequent Kafka Connect instances.
You can set ssl.client.auth to
requested instead of required and initiate a
rolling restart again. Alternatively, you can perform the rolling restart manually by
restarting the Kafka Connect instances one-by-one and checking periodically whether
the service endpoint is available before starting the next one.
OPSAPS-70971: Schema Registry does not have permissions to use
Atlas after an upgrade
Following an upgrade, Schema Registry might not have the
required permissions in Ranger to access Atlas. As a result, Schema Registry's
integration with Atlas might not function in secure clusters where Ranger
authorization is enabled.
Access the Ranger Console (Ranger Admin web UI).
Click the cm_atlas resource-based service.
Add the schemaregistry user to the all - *
policies.
Click Manage Service > Edit Service.
Add the schemaregistry user to the
default.policy.users property.
OPSAPS-59597: SMM UI logs are not supported by Cloudera Manager
Cloudera Manager does not display a
Log Files menu for SMM UI role (and SMM UI logs cannot be
displayed in the Cloudera Manager UI) because the logging type used
by SMM UI is not supported by Cloudera Manager.
View the SMM UI logs on the host.
OPSAPS-72298: Impala metadata replication is mandatory and UDF
function parameters are not mapped to the destination
Impala metadata replication is enabled by default, but the
legacy Impala C/C++ UDFs (user-defined functions) are not replicated as expected
during the Hive external table replication policy run.
Edit the location of the UDF functions after the
replication run is complete. To accomplish this task, you can edit the “path of the
UDF function” to map it to the new cluster address, or you can use a script.
OPSAPS-70713: Error appears when running Atlas replication
policy if source or target clusters use Dell EMC Isilon storage
You cannot create an Atlas replication policy between
clusters if one or both the clusters use Dell EMC Isilon storage.
None
OPSAPS-72468: Subsequent Ozone OBS-to-OBS replication policy
runs do not skip replicated files during replication
Affected versions: 7.13.1
Fixed in: 7.13.1.100
The first Ozone replication policy run is a bootstrap
run. Sometimes, the subsequent runs might also be bootstrap jobs if the incremental
replication fails and the job runs fall back to bootstrap replication. In this
scenario, the bootstrap replication jobs might replicate the files that were already
replicated because the modification time is different for a file on the source and the
target cluster.
None
OPSAPS-72470: Hive ACID replication policies fail when target
cluster uses Dell EMC Isilon storage and supports JDK17
Hive ACID replication policies fail if the target
cluster is deployed with Dell EMC Isilon storage and also supports JDK17.
None
OPSAPS-73138, OPSAPS-72435: Ozone OBS-to-OBS replication
policies create directories in the target cluster even when no such directories exist
on the source cluster
Ozone OBS-to-OBS replication uses the Hadoop S3A connector
to access data on the OBS buckets. Depending on the runtime version and settings in
the clusters:
directory marker keys (associated with the parent directories) appear in the
destination bucket even when they are not present in the source bucket.
delete requests for non-existing keys are submitted to the destination storage,
which results in `Key delete failed` messages appearing in the Ozone Manager
log.
The OBS buckets are flat namespaces with independent keys, and the character
'/' has no special significance in the key names. In FSO buckets, by contrast, each
bucket is a hierarchical namespace with filesystem-like semantics, where the '/'
separated components become the path in the hierarchy. The S3A connector provides
filesystem-like semantics over object stores by mimicking the
directory behaviour, that is, it creates and optionally deletes the "empty directory
markers". These markers get created when the S3A connector creates an empty
directory. Depending on the runtime (S3A connector) version and settings, these
markers might or might not be deleted when a descendant path is created.
Empty directory marker creation is inherent to the S3A
connector. Empty directory marker deletion behavior can be adjusted using the
fs.s3a.directory.marker.retention = keep
or delete key-value pair. For information about configuring the
key-value pair, see Controlling the S3A Directory Marker
Behavior.
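For example, to keep the directory markers, the key-value pair can be expressed as follows (a sketch, assuming an XML-style configuration snippet):
<property>
  <name>fs.s3a.directory.marker.retention</name>
  <value>keep</value>
</property>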
OBS-7407: After upgrading to Cloudera Manager 7.13.1.x, the error message "Otelcol self telemetry configuration appears to
be incorrect." is logged in the cloudera-scm-agent.log file.
This issue arises due to a missing key
(otelcol_telemetry) in the host configuration, caused by the upgrade to Cloudera Manager 7.13.1. The error is logged hourly on all agent nodes
managed by Cloudera Manager.
You can safely ignore this error message, as it does not
impact any Cloudera components.
OPSAPS-73655: Cloud replication fails after the delegation
token is issued
HDFS and Hive external table replication policies from
an on-premises cluster to cloud fail when the following conditions are true:
You choose the Advanced Options > Delete Policy > Delete Permanently option during the replication policy creation process.
Incremental replication is in progress, that is, the source paths of
the replication are snapshottable directories and the bootstrap replication run is
complete.