Bug #49056
Faulty behaviour running ceph orch apply mds with missing fsname
Status:
Resolved
Priority:
Normal
Assignee:
-
Category:
orchestrator
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
pacific
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
On 15.2.8, I accidentally ran `ceph orch apply mds label:mds`, i.e. without the <fsname>.
This did not give an error message, but instead created these daemons:
# ceph orch ps
NAME                           HOST                STATUS  REFRESHED  AGE  VERSION    IMAGE NAME                   IMAGE ID   CONTAINER ID
...
mds.label:mds.mds2801.gfahpt   mds2801.banette.os  error   4m ago     9m   <unknown>  docker.io/ceph/ceph:v15.2.8  <unknown>  <unknown>
mds.label:mds.mds2801.wrmlmb   mds2801.banette.os  error   4m ago     9m   <unknown>  docker.io/ceph/ceph:v15.2.8  <unknown>  <unknown>
mds.label:mds.mds2801.xpmusg   mds2801.banette.os  error   4m ago     9m   <unknown>  docker.io/ceph/ceph:v15.2.8  <unknown>  <unknown>
...
I tried removing them (ceph orch daemon rm mds.label:mds.mds2803.aeutth), but that just starts new ones, and this also doesn't work:
[root@mds2803 ~]# ceph orch apply mds mds.label:mds 0
Error EINVAL: num/count must be > 1
Any other way to get rid of these?
Thank you!
Kenneth
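For context, the intended invocation with the filesystem name supplied presumably looks something like the following (the fs name myfs is illustrative, not from the report):

# hypothetical corrected command: give the filesystem name and pass the placement explicitly
ceph orch apply mds myfs --placement="label:mds"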
Updated by Kenneth Waegeman about 3 years ago
Removing them worked by running 'ceph orch rm label:mds'.
Updated by Sebastian Wagner about 3 years ago
always use yaml files!
service_type: mds
service_id: mycephfs
placement:
label: mds
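A minimal sketch of applying that spec, assuming it is saved to a file named mds.yaml (the filename is illustrative):

# save the spec above to a file, then hand it to the orchestrator
ceph orch apply -i mds.yaml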
Updated by Sebastian Wagner about 3 years ago
- Status changed from New to Resolved
Updated by Sebastian Wagner about 3 years ago
- Tracker changed from Bug to Support
Updated by Ken Dreyer about 3 years ago
- Tracker changed from Support to Bug
- Status changed from Resolved to Pending Backport
- Backport set to pacific
- Regression set to No
- Severity set to 3 - minor
Updated by Ken Dreyer about 3 years ago
- Status changed from Pending Backport to Resolved
Merged to pacific as a part of https://github.com/ceph/ceph/pull/39623