Fixed issues in 1.5.4 SP2 CHF1
Review the fixed issues in the Cloudera AI 1.5.4 SP2 Cumulative hotfix 1 release.
- DSE-40325: Frequently received 504 timeout error for VFS stat calls
- Concurrent project forks caused 504 timeout errors when navigating the Projects page. This issue has been resolved.
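The fix itself is internal to the service, but the general technique for keeping a burst of filesystem stat calls from starving other requests is to bound their concurrency. A minimal Go sketch, assuming nothing about the actual VFS service internals:

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// statAll stats a batch of paths with bounded concurrency. Capping the
// number of in-flight stat calls keeps a burst of work (such as several
// concurrent project forks) from monopolizing the backend.
func statAll(paths []string, limit int) {
	sem := make(chan struct{}, limit) // at most `limit` stats in flight
	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot before starting work
		go func(p string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot when done
			if info, err := os.Stat(p); err == nil {
				fmt.Println(p, info.Size())
			}
		}(p)
	}
	wg.Wait()
}

func main() {
	statAll([]string{"/tmp", "/etc"}, 4)
}
```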
- DSE-41027: Upgrade path from 1.5.0 -> 1.5.2 -> 1.5.4 -> 1.5.4 CHF3 fails with missing key
- Due to newly introduced strict validation, certain upgrade paths failed because a required key was missing. Users upgrading to version 1.5.4 might have encountered this issue if their original Cloudera Data Services on premises version at the time of initial installation was 1.5.0 or 1.5.1.
This issue has been resolved.
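As a sketch of the lenient handling this implies, a validator can backfill the key for installations that predate it instead of rejecting the upgrade. The key name and default value below are assumptions for illustration only:

```go
package main

import "fmt"

// validateUpgradeConfig sketches a tolerant check: installations that
// began on 1.5.0 or 1.5.1 never wrote the key, so its absence is
// backfilled with a default rather than treated as a hard failure.
// "installTimeVersionKey" and its default are hypothetical names.
func validateUpgradeConfig(cfg map[string]string) error {
	if _, ok := cfg["installTimeVersionKey"]; !ok {
		cfg["installTimeVersionKey"] = "unknown-pre-1.5.2"
		return nil // tolerate the missing key instead of failing the upgrade
	}
	return nil
}

func main() {
	cfg := map[string]string{} // config as written by a 1.5.0-era installer
	if err := validateUpgradeConfig(cfg); err != nil {
		fmt.Println("upgrade blocked:", err)
		return
	}
	fmt.Println("upgrade proceeds with:", cfg["installTimeVersionKey"])
}
```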
- DSE-42379: Model registry page navigation and model deletion result in errors on the Cloudera AI Registry page
- While navigating the Cloudera AI Registry pages, the following error was displayed: Could not interpret page token: provided token did not match parameters.
Additionally, when attempting to delete a model, the model disappeared from the page and an error message appeared: Failed to delete model from model registry &{} (*models.Error) is not supported by the TextConsumer, can be resolved by supporting TextUnmarshaler interface.
These issues occurred due to failures in page token validation on the Cloudera AI Registry pages.
These issues have been resolved.
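The deletion error comes from the go-openapi runtime, which refuses to decode a text/plain response body into a model that does not implement the encoding.TextUnmarshaler interface, exactly as the message suggests. A minimal sketch of that interface on a model type (the real fields of models.Error are assumptions here):

```go
package models

// Error stands in for the generated models.Error type; its fields are
// assumptions for illustration.
type Error struct {
	Message string `json:"message"`
}

// UnmarshalText satisfies encoding.TextUnmarshaler, which allows the
// go-openapi TextConsumer to decode a text/plain error body into the
// model instead of failing with "is not supported by the TextConsumer".
func (e *Error) UnmarshalText(text []byte) error {
	e.Message = string(text)
	return nil
}
```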
- DSE-43808: 2.0.44: Cloudera AI Registry workbench pagination issues
- While navigating to the next page in the Cloudera AI Registry workbench UI, an error was displayed. Additionally, selecting the option to display 25-51 results per page often resulted in showing only 10 items.
These issues have been resolved.
- DSE-42480: List of Cloudera AI Registry calls in workbench is invoked with page_size=10
- When navigating the Cloudera AI Registry pages to view models from the workbench, the list was always displayed with page_size=10, even if you had previously selected another value. This issue has been resolved.
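A minimal Go sketch of the client-side behavior the fix implies: the user's selected page size is carried into the request instead of a hard-coded 10. The endpoint path and parameter names below are assumptions for illustration:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strconv"
)

// listModels builds a hypothetical Cloudera AI Registry list request.
// The point is that page_size comes from the caller's selection rather
// than a fixed default of 10.
func listModels(base string, pageSize int, pageToken string) (*http.Request, error) {
	q := url.Values{}
	q.Set("page_size", strconv.Itoa(pageSize))
	if pageToken != "" {
		q.Set("page_token", pageToken)
	}
	return http.NewRequest(http.MethodGet, base+"/api/v2/models?"+q.Encode(), nil)
}

func main() {
	req, err := listModels("https://registry.example.com", 25, "")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.URL) // page_size=25, honoring the user's selection
}
```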
- DSE-43104: Timezones in Cloudera on premises cause pods to be killed with exit code 34
- In an on premises environment, the Kubernetes cluster could be configured to use any timezone, leading to timezone discrepancies, particularly for engine pods. This issue affected engine timestamp fields, such as scheduling_at, starting_at, running_at, and finished_at, which were reported in varying timezones across the infrastructure. As a result, when the base cluster and the nodes were in different timezones, engine pods were terminated.
This issue has been resolved.
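A common way to make such lifecycle timestamps timezone-independent is to normalize them to UTC before they are stored or compared. A minimal Go sketch, with a struct that is only an assumption mirroring the fields named above:

```go
package main

import (
	"fmt"
	"time"
)

// EngineTimestamps mirrors the lifecycle fields named above; the struct
// itself is an assumption for illustration.
type EngineTimestamps struct {
	SchedulingAt, StartingAt, RunningAt, FinishedAt time.Time
}

// toUTC normalizes a timestamp so comparisons behave the same no matter
// which timezone the recording node's clock is set to.
func toUTC(t time.Time) time.Time { return t.UTC() }

func main() {
	// A node in a non-UTC timezone records a local timestamp...
	ist := time.FixedZone("IST", 5*3600+30*60)
	local := time.Date(2024, 5, 1, 10, 0, 0, 0, ist)

	// ...which compares safely against other values once normalized.
	fmt.Println(toUTC(local)) // 2024-05-01 04:30:00 +0000 UTC
}
```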