Feature #44414
closed
bubble up errors during 'apply' phase to 'cluster warnings'
Added by Joshua Schmid about 4 years ago.
Updated over 2 years ago.
Description
Since we moved to a fully declarative approach that handles most of the deployment in the background (k8s-like), it became harder to detect failures without looking at the logs.
I'd suggest using `set_health_warnings` to inform the user about any failed deployment attempts (and more?).
Lines like:
Failed to apply mds.test_that spec ServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'mds', 'service_id': 'test_that'}): too few hosts: want 1, have set()
are currently buried deep in the logs.
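As a rough illustration of the idea, the ceph-mgr module API already exposes `MgrModule.set_health_checks()`, which takes a dict of health-check payloads. A minimal sketch of aggregating collected apply failures into that payload shape (the helper name, the health-check code `CEPHADM_APPLY_SPEC_FAIL`, and the error strings are illustrative assumptions, not cephadm's actual implementation):

```python
# Sketch: turn failed 'apply' attempts into the dict format that the
# ceph-mgr MgrModule.set_health_checks() API expects. The helper name
# and the check code below are hypothetical.

def build_health_checks(apply_errors):
    """apply_errors: list of human-readable failure strings collected
    during the apply/serve loop, e.g.
    "Failed to apply mds.test_that spec ...: too few hosts: want 1, have set()"
    """
    if not apply_errors:
        return {}  # no failures: returning an empty dict clears the warning
    return {
        'CEPHADM_APPLY_SPEC_FAIL': {          # hypothetical health-check code
            'severity': 'warning',            # surfaces as a cluster WARN
            'summary': '%d service spec(s) failed to apply' % len(apply_errors),
            'count': len(apply_errors),
            'detail': apply_errors,           # full messages, otherwise log-only
        }
    }
```

A module would then call `self.set_health_checks(build_health_checks(errors))` after each apply/serve pass, so the failures show up in `ceph status` instead of only in the logs.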
- Description updated (diff)
- Related to Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are created added
- Related to Feature #45905: cephadm: errors in serve() should create a HEALTH warning added
- Priority changed from Normal to High
- Assignee set to Joshua Schmid
- Assignee changed from Joshua Schmid to Daniel Pivonka
- Related to Bug #48939: Orchestrator removes mon daemon from wrong host when removing host from cluster added
- Assignee deleted (Daniel Pivonka)
- Priority changed from High to Normal
- Assignee set to Melissa Li
- Status changed from New to Fix Under Review
- Pull request ID set to 42565
- Status changed from Fix Under Review to Pending Backport
- Backport set to pacific
- Pull request ID changed from 42565 to 43376
- Status changed from Pending Backport to Resolved