Known issues
Learn about the known issues in Cloudera Data Catalog, the impact or changes to the functionality, and the workaround.
- CDPDSS-2488: In the MGMT service of the Data Hub Cloudera Manager, the telemetry command fails, bringing the master node down
- When a Cloudera Data Hub cluster is created, the management service of Cloudera Manager becomes unavailable because the telemetry command fails on Hive. As a result, the master node of the Cloudera Data Hub cluster goes down, causing a Node Failure error on the cluster. Because the master node is down, the Profiler and the On-Demand Profiler sections of the Asset Details page in Cloudera Data Catalog do not load.
- CDPDSS-3364: Not able to delete the profilers from the DSS App when the underlying compute cluster is deleted
- In Compute Cluster enabled environments, profilers cannot be deleted after the underlying compute cluster has been deleted. The following error is displayed: Failed to delete profilers with error 401.
- CDPDSS-3403: Altered ICEBERG table is not available in Compute Cluster enabled environments
- Iceberg tables modified or renamed with the ALTER operation appear with both the old and the new name, but neither entry can be accessed. Apache Atlas synchronizes the altered table under its fully qualified name instead of the name shown in the Search results.
- CDPDSS-3412: Profiling is not working on migrated ICEBERG table
- Hive tables migrated to Iceberg with the ALTER TABLE statement cannot be profiled in Compute Cluster enabled environments. The Cluster Sensitivity Profiler and the Hive Column Profiler cannot apply tags to the values.
- CDPDSS-3401: Profiler is shown as running in DSS App on deleting default compute cluster
- In a Compute Cluster enabled environment, after the default Compute Cluster is deleted, profilers are displayed as still running in the Dashboard. However, the profiler jobs fail even if the default Compute Cluster is recreated.
- CDPDSS-3387: Profiling of struct or composite data types is not supported in Cloudera Data Catalog
- Profiling of struct or composite data types is not supported in Cloudera Data Catalog. When profiling tables with such data, the profiling job gets stuck in the Undetermined status.
- CDPDSS-3561: The number of columns is displayed incorrectly when the DB or table name is too long
- In Asset Details, the number of columns (# of Columns) is displayed as zero when the database or table name is too long.
- CDPDSS-3852: Number of Columns count is getting incremented on creating new tables
- In Asset Details, the # of Columns value may not match the actual number of columns in the table.
- CDPDSS-3815: Cron expression scheduled for the next years can lead to incorrect schedule
- If you set a cron expression whose next date match is only possible in the next 2-3 years (for example, a specific day of the month has to fall on a specific weekday), the expression is evaluated as something more general. Some details of the expression can be skipped in the evaluation, leading to an incorrect next run date (see the sketch below).
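A minimal illustration (not the product's scheduler logic): an expression that pins a job to 29 February, such as 0 10 29 2 * in standard cron syntax, can have its next valid match several years away, which is the situation where the evaluation may drift. The Python sketch below, using only the standard library, shows how far off that next match can be.

```python
from datetime import date, timedelta

def next_feb_29(start: date) -> date:
    """Return the first 29 February strictly after `start`."""
    d = start
    while True:
        d += timedelta(days=1)
        if d.month == 2 and d.day == 29:
            return d

# From mid-2024 the next match is almost four years away.
print(next_feb_29(date(2024, 6, 1)))  # 2028-02-29
```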
- CDPDSS-4002: Pagination support for the list of profiled assets per job
- Pagination is not yet supported for the list of profiled assets of a particular profiling job. If the list is too long, the user interface can become slow or unresponsive.
- CDPDSS-4051: Node Group setup API responds with HTTP code 202 even for quota Issues - (400 Bad Request)
- If the resource quota request fails after a profiler setup attempt, the cloud provider may return a bad request response with a 400 status code. Although the profiler launch at first seems successful, the following error message is returned and the profiler is unavailable: The auto-scaling node group is unavailable at the moment but profilers seem to have been launched. This has caused an inconsistency on Cloudera Data Catalog. Please delete all profilers and relaunch them to continue profiling your assets.
- CDPDSS-4042: Even though the profiler is launched, audit charts and on-demand profilers are missing
- If the distroXCrn is not received in the API response after a profiler is launched, on-demand profilers and audit logs are not visible.
- CDPDSS-4058: The basic cron expression returns a Not a Number for leap year dates
- If you set the Basic Schedule of a profiler to a leap day in a leap year, the schedule cannot be saved.
- CDPDSS-4057: All info ('?') buttons incorrectly functioning as Save buttons
- In Profiler Configuration, the ? icons also serve as Save buttons.
- CDPDSS-4063: Specific end of month dates are classified as invalid
- The following cron expressions are not supported in the profiler scheduler: 0 10 29 * *, 0 10 30 * *, and 0 10 31 * *. Cloudera Data Catalog returns the Please recheck the cron expression. Day of month must be a valid number message for the 29th, 30th, and 31st of any month.
- CDPDSS-4064: Cron job runs in UTC time-zone as multi-cron expressions are not supported
- When a Basic Cron schedule cannot be described as a single CRON expression, the job runs in the UTC timezone because multi-cron expressions are not supported. For example, * * 1 6 * means "every minute on the 1st of June". In the Asia/Calcutta timezone (GMT+5:30), scheduling this window according to IST (Indian Standard Time) would require two separate CRON expressions, because the window spans two days in UTC. The current profiler logic does not allow this setup, so such a job runs in UTC instead (see the sketch below).
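A minimal sketch, assuming Python's standard zoneinfo module and the canonical Asia/Kolkata name for the Asia/Calcutta zone, showing why the 1 June window in IST spans two UTC calendar days and therefore cannot be covered by a single UTC cron expression:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# "* * 1 6 *" = every minute on 1 June, here interpreted in IST (UTC+5:30).
ist = ZoneInfo("Asia/Kolkata")  # canonical name for Asia/Calcutta
utc = ZoneInfo("UTC")

window_start = datetime(2025, 6, 1, 0, 0, tzinfo=ist)
window_end = datetime(2025, 6, 1, 23, 59, tzinfo=ist)

print(window_start.astimezone(utc))  # 2025-05-31 18:30:00+00:00
print(window_end.astimezone(utc))    # 2025-06-01 18:29:00+00:00

# The IST window maps to 31 May 18:30 UTC through 1 June 18:29 UTC,
# two different UTC days, so one UTC cron expression cannot represent it.
```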