Known issues

Learn about the known issues in Cloudera Data Catalog, the impact or changes to the functionality, and the workaround.

CDPDSS-2488: In the MGMT service of Data Hub Cloudera Manager, the telemetry command fails, bringing the master node down

When a Cloudera Data Hub cluster is created, the management service of Cloudera Manager becomes unavailable because the telemetry command fails on Hive. As a result, the master node of Cloudera Data Hub goes down, causing a Node Failure error on the cluster.

In Cloudera Data Catalog, due to the failure of the Cloudera Data Hub master node, the Profiler and On-Demand Profiler pages in the Asset Details page do not load.

To avoid this issue, it is recommended to use profilers in a Compute Cluster enabled environment.
CDPDSS-3403: Altered ICEBERG table is not available in Compute Cluster enabled environment
Iceberg tables modified or renamed with the ALTER operation appear under both the old and the new name; however, neither entry can be accessed. Apache Atlas synchronizes the altered table under its fully qualified name instead of the name shown in the Search results.
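For illustration, the following is a minimal sketch of the kind of statement that triggers the issue, assuming a hypothetical Iceberg table named sales:

    -- Hypothetical rename of an Iceberg table in Hive.
    -- After this statement, both 'sales' and 'sales_2024' can appear in
    -- Search results, and neither entry can be opened.
    ALTER TABLE sales RENAME TO sales_2024;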
CDPDSS-3412: Profiling does not work on migrated ICEBERG tables
Hive tables migrated to Iceberg with the ALTER TABLE statement cannot be profiled in Compute Cluster enabled environments. Cluster Sensitivity Profilers and Hive Column Profilers will not be able to apply tags to the values.
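For reference, the following is a minimal sketch of the kind of in-place migration the issue refers to; the table name is hypothetical:

    -- Hypothetical in-place migration of a Hive table to Iceberg.
    -- Tables migrated this way cannot be profiled in Compute Cluster
    -- enabled environments.
    ALTER TABLE customers
    SET TBLPROPERTIES ('storage_handler'='org.apache.iceberg.mr.hive.HiveIcebergStorageHandler');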
CDPDSS-3387: Profiling of struct or composite data types is not supported in Cloudera Data Catalog
Profiling of struct or composite data types is not supported in Cloudera Data Catalog. When profiling tables with such data, the profiling job gets stuck in the Undetermined status.
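For example, profiling a table defined with a struct column like the following (hypothetical table and column names) results in a stuck job:

    -- Hypothetical Hive table with a struct (composite) column.
    -- Profiling jobs on this table remain in the Undetermined status.
    CREATE TABLE contacts (
      id INT,
      address STRUCT<street:STRING, city:STRING, zip:STRING>
    );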
CDPDSS-3561: The number of columns is displayed incorrectly when the DB or table name is too long
In Asset Details, the number of columns (# of Columns) is displayed as zero when the database or table name is too long.
CDPDSS-3852: Number of Columns count is incremented when creating new tables
In Search > Asset Details, the # of Columns value may not match the actual number of columns in the table.
There is no workaround for this issue.
CDPDSS-4051: Node Group setup API responds with HTTP code 202 even for quota issues (400 Bad Request)
If the resource quota request fails after a profiler setup attempt, the cloud provider may return a 400 Bad Request response. Although the profiler launch appears successful at first, the following error message is returned and the profiler is unavailable: The auto-scaling node group is unavailable at the moment but profilers seem to have been launched. This has caused an inconsistency on Cloudera Data Catalog. Please delete all profilers and relaunch them to continue profiling your assets.
There is no workaround for this issue.
CDPDSS-4042: Even though the profiler is launched, audit charts and on-demand profilers are missing
If the distroXCrn is not received in the API response after a profiler is launched, on-demand profilers and audit logs are not visible.
Reload the Profilers page. If reloading does not help, clear your browser's cache and try using Incognito mode.
CDPDSS-4058: The basic cron expression returns Not a Number for leap day dates
If you set the Basic Schedule of a profiler to a leap day in a leap year, the schedule cannot be saved.
Use a day other than the leap day in your schedule. Alternatively, use a cron expression, which supports leap days, as shown in the example below.
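For example, assuming the scheduler accepts standard five-field cron syntax, the following expression fires at midnight on 29 February and therefore runs only in leap years:

    # minute hour day-of-month month day-of-week
    0 0 29 2 *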
CDPDSS-4133: Column count is incorrect in the pagination footer
The pagination counter in Asset Details > Schema can show a larger total row count than the number of rows actually available.
CDPDSS-4134: When a user rejects a tag, there should be a dialog box to reconfirm the rejection
There is no confirmation dialog in Asset Details > Classifications when rejecting a recommended classification.
CDPDSS-4151: Edit Tag Rule screen lacks Save button in intermediate step
While editing a tag rule for the Data Compliance Profiler, there is no Save button available in the intermediate steps. As a result, users are forced to proceed to the final step just to save their changes.
CDPDSS-4266: Stuck profiler pods should be deleted
In Compute Cluster enabled environments, profilers remain stuck if the Compute Cluster cannot provision the necessary resources. As a result, subsequently scheduled jobs of the same kind are not scheduled until the resources are provisioned.
Go to the Compute Cluster and delete the stuck profiler pod, as shown in the example below. Then update the profiler configuration to compensate for the resource shortage so that the profilers are scheduled properly next time.
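As a sketch, assuming kubectl access to the Compute Cluster, the stuck pod can be removed as follows; the namespace and pod names are placeholders:

    # List the profiler pods to identify the stuck one.
    kubectl get pods -n <profiler-namespace>
    # Delete the stuck profiler pod so that subsequent jobs can be scheduled.
    kubectl delete pod <stuck-profiler-pod> -n <profiler-namespace>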