Bug #56620

open

Deploy a Ceph cluster with cephadm: an OSD created with the ceph-volume lvm create command is not managed by cephadm

Added by xiaoliang yang almost 2 years ago. Updated 15 days ago.

Status:
New
Priority:
Normal
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Deploy a Ceph cluster with cephadm, then create an OSD with "ceph-volume lvm create --bluestore --data /dev/vdc". Running "ceph health detail" shows:
[root@node1 ~]# ceph health detail
HEALTH_WARN 1 stray daemon(s) not managed by cephadm
[WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
stray daemon osd.3 on host node1 not managed by cephadm
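As context for the warning (not a confirmed fix from this ticket): cephadm only tracks daemons it deploys itself, so an OSD created by running ceph-volume directly on the host shows up as a stray. A minimal sketch of the orchestrator-managed alternative, reusing the host name node1 and device /dev/vdc from the report:

```shell
# Create the OSD through the orchestrator so cephadm records and manages it,
# instead of invoking ceph-volume lvm create on the host directly.
ceph orch daemon add osd node1:/dev/vdc

# For an OSD that was already created with ceph-volume, cephadm can try to
# activate and adopt existing OSDs on the host (assumes a Pacific or later
# release where this subcommand is available):
ceph cephadm osd activate node1
```

Whether "ceph cephadm osd activate" clears the CEPHADM_STRAY_DAEMON warning for an OSD created this way is exactly what the reporter's scenario exercises.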

Actions #1

Updated by Ilya Dryomov almost 2 years ago

  • Target version changed from v16.2.10 to v16.2.11
Actions #2

Updated by Sergii Kuzko about 1 year ago

Hi,
Can you update the bug status, or move it to the group for the current version, 16.2.12?
Ceph version 16.2.11 has already been released, but the problem remains.

Actions #3

Updated by Ilya Dryomov about 1 year ago

  • Target version deleted (v16.2.11)
Actions #4

Updated by Laura Flores 15 days ago

This looks like a case of the same issue:
/a/yuriw-2024-04-09_14:58:25-rados-wip-yuri4-testing-2024-04-08-1432-distro-default-smithi/7649104

2024-04-10T05:50:46.964 DEBUG:teuthology.orchestra.run.smithi145:> sudo /home/ubuntu/cephtest/cephadm --image quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci/ceph:3a866975a60f6dc859246f6f19fd8ddb4db2cc2e shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 516476c6-f6fd-11ee-b64a-cb9ed24678a4 -- ceph-volume lvm zap /dev/vg_nvme/lv_4
...
2024-04-10T06:00:28.179 INFO:teuthology.orchestra.run.smithi040.stdout:2024-04-10T05:50:00.000240+0000 mon.a (mon.0) 456 : cluster 3 [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
