Known issues
Learn about the known issues in Cloudera Data Catalog, their impact on functionality, and the available workarounds.
- CDPDSS-2488: In the MGMT service of DH CM, the telemetry command fails, bringing the master node down
- When a Cloudera Data Hub cluster is created, the management service of Cloudera Manager becomes unavailable because the telemetry command fails on Hive. Because of this, the master node of Cloudera Data Hub goes down, causing a Node Failure error on the cluster. In Cloudera Data Catalog, due to the failure of the master node, the Profiler and the On-Demand Profiler page within the Asset Details page do not load.
- CDPDSS-3403: Altered ICEBERG table is not available in Compute Cluster enabled environment
- Iceberg tables modified or renamed with the ALTER operation appear under both the old and the new name; however, neither entry can be accessed. Apache Atlas synchronizes the renamed table by its fully qualified name instead of the name shown in the Search results.
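A minimal sketch of the kind of rename that leads to the duplicate, inaccessible entries (table names are illustrative only):

```sql
-- Illustrative names; any Iceberg table renamed this way is affected.
ALTER TABLE sales_db.orders RENAME TO sales_db.orders_archive;
```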
- CDPDSS-3412: Profiling is not working on migrated ICEBERG table
- Hive tables migrated to Iceberg with the ALTER TABLE statement cannot be profiled in Compute Cluster enabled environments. Cluster Sensitivity Profilers and Hive Column Profilers cannot apply tags to the values.
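A sketch of the affected migration path, assuming an in-place conversion of an existing Hive table (the table name is illustrative and the exact syntax can vary by Hive version):

```sql
-- Illustrative in-place migration of a Hive table to Iceberg.
ALTER TABLE sales_db.orders
  SET TBLPROPERTIES ('storage_handler'='org.apache.iceberg.mr.hive.HiveIcebergStorageHandler');
```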
- CDPDSS-3387: Profiling of struct or composite data types is not supported in Cloudera Data Catalog
- Profiling of struct or composite data types is not supported in Cloudera Data Catalog. When profiling tables that contain such data, the profiling job remains stuck in the Undetermined status.
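As an illustration (hypothetical table and column names), a table defined with a composite column such as the following cannot be profiled:

```sql
-- Hypothetical table with a struct (composite) column that profilers cannot process.
CREATE TABLE customer_profiles (
  id INT,
  address STRUCT<street:STRING, city:STRING, zip:STRING>
);
```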
- CDPDSS-3561: The number of columns is displayed incorrectly when the DB or table name is too long
- In Asset Details, the number of columns (# of Columns) is displayed as zero when the database or table name is too long.
- CDPDSS-3852: Number of Columns count increments when creating new tables
- The # of Columns value may not match the actual number of columns in the table.
- CDPDSS-4051: Node Group setup API responds with HTTP code 202 even for quota issues (400 Bad Request)
- If the resource quota request fails after a profiler setup attempt, the cloud provider may return a 400 Bad Request response. Although the profiler launch appears successful at first, the following error message is returned and the profiler is unavailable: The auto-scaling node group is unavailable at the moment but profilers seem to have been launched. This has caused an inconsistency on Cloudera Data Catalog. Please delete all profilers and relaunch them to continue profiling your assets.
- CDPDSS-4042: Even though the profiler is launched, audit charts and on-demand profilers are missing
- If the distroXCrn is not received in the API response after a profiler is launched, on-demand profilers and audit logs are not visible.
- CDPDSS-4058: The basic cron expression returns a Not a Number error for leap year dates
- If you set the Basic Schedule of a profiler to a leap day in a leap year, the schedule cannot be saved.
- CDPDSS-4133: Column count is incorrect in the pagination footer
- The pagination counter can show a larger number of total rows than are actually available.
- CDPDSS-4134: When a user rejects a tag, there should be a dialog box to reconfirm the rejection
- There is no confirmation dialog when rejecting recommended classifications.
- CDPDSS-4151: Edit Tag Rule screen lacks Save button in intermediate step
- While editing a tag rule for the Data Compliance Profiler, there is no Save button available in the intermediate steps. As a result, users are forced to proceed to the final step just to save their changes.
- CDPDSS-4266: Stuck profiler pods should be deleted
- In Compute Cluster enabled environments, profilers remain stuck if the Compute Cluster cannot provision the necessary resources. As a result, subsequent jobs of the same kind are not scheduled until the resources are provisioned.