Bug #39027
rbd snap remove error produces logs about radosgw pools
Status:
Closed
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
I run RBD on a Ceph cluster:
{
    "mon": {
        "ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)": 3
    },
    "mgr": {
        "ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)": 3
    },
    "osd": {
        "ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)": 112
    },
    "mds": {},
    "overall": {
        "ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)": 118
    }
}
When I delete a snapshot of a volume, the following appears in the log:
2019-03-29 15:03:21.391 7f1cf37fe700 -1 librbd::SnapshotUnprotectRequest: cannot get children for pool '.rgw.root'
2019-03-29 15:03:21.391 7f1cf37fe700 -1 librbd::SnapshotUnprotectRequest: cannot get children for pool 'default.rgw.meta'
2019-03-29 15:03:21.403 7f1cf37fe700 -1 librbd::SnapshotUnprotectRequest: encountered error: (1) Operation not permitted
2019-03-29 15:03:21.403 7f1cf37fe700 -1 librbd::SnapshotUnprotectRequest: 0x55fec6d23dc0 should_complete_error: ret_val=-1
rbd: unprotecting snap failed: 2019-03-29 15:03:21.403 7f1cf37fe700 -1 librbd::SnapshotUnprotectRequest: 0x55fec6d23dc0 should_complete_error: ret_val=-1
(1) Operation not permitted
I did not create the .rgw.root or default.rgw.meta pools, and I have not deployed a new radosgw.
Why does my cluster create rgw pools automatically? How can I prevent this issue?
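For context on the log above: before a protected snapshot can be removed, librbd unprotects it, and the unprotect step scans every pool it can see for clone children of that snapshot. If the client keyring lacks read caps on some pool (here the rgw pools), that scan fails with EPERM. A minimal sketch of checking and widening the caps, assuming a hypothetical client name `client.cinder` and a volume `volumes/volume-1234@snap-5678` (adjust to your setup):

```shell
# Inspect the caps of the client doing the snapshot removal
# (client.cinder is a placeholder for your actual client name):
ceph auth get client.cinder

# One option: grant blanket rbd profile caps on the OSDs so the
# unprotect child-scan can read all pools, then retry the removal:
ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd'
rbd snap unprotect volumes/volume-1234@snap-5678
rbd snap rm volumes/volume-1234@snap-5678
```

This is only a sketch; restricting caps per pool (e.g. `osd 'profile rbd pool=volumes'`) is tighter but will keep producing "cannot get children" messages for pools the client cannot read.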
Thanks.