Bug #46776: Difference between ceph-volume and /var/lib/ceph/osd/ceph-x

Added by Seena Fallah over 3 years ago. Updated over 3 years ago.

Status: New
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: nautilus, octopus
Regression: No
Severity: 2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I'm seeing a difference between the output of `ceph-volume lvm list /dev/sdb` and the db path linked in `/var/lib/ceph/osd/*`.

ceph-volume output:

[block]       /dev/ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f/osd-block-0c2f092a-4d92-4b0a-85e6-b65221cff791
                  db device                 /dev/nvme0n1p1

in /var/lib/ceph/osd/ceph-x:

lrwxrwxrwx  1 ceph ceph   93 Jun 16 03:19 block -> /dev/ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f/osd-block-0c2f092a-4d92-4b0a-85e6-b65221cff791
lrwxrwxrwx  1 ceph ceph   14 Jun 16 03:19 block.db -> /dev/nvme1n1p1

Ceph volume debug logs:

[2020-07-30 16:44:54,950][ceph_volume.main][INFO  ] Running command: ceph-volume  lvm list /dev/sdb
[2020-07-30 16:44:54,951][ceph_volume.process][INFO  ] Running command: /sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sdb
[2020-07-30 16:44:57,103][ceph_volume.process][INFO  ] stdout ceph.block_device=/dev/ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f/osd-block-0c2f092a-4d92-4b0a-85e6-b65221cff791,ceph.block_uuid=sBvTc3-XQZb-pnhX-d1Oo-eY1d-4LoD-Aygc0d,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=5efb7dd0-e5b5-4301-b870-da5b415710a1,ceph.cluster_name=ceph,ceph.crush_device_class=None,ceph.db_device=/dev/nvme0n1p1,ceph.db_uuid=a79b77e5-d882-6843-bc07-4c27a82cd48c,ceph.encrypted=0,ceph.osd_fsid=0c2f092a-4d92-4b0a-85e6-b65221cff791,ceph.osd_id=58,ceph.type=block,ceph.vdo=0";"/dev/ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f/osd-block-0c2f092a-4d92-4b0a-85e6-b65221cff791";"osd-block-0c2f092a-4d92-4b0a-85e6-b65221cff791";"ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f";"sBvTc3-XQZb-pnhX-d1Oo-eY1d-4LoD-Aygc0d";"<9.10t
[2020-07-30 16:44:57,105][ceph_volume.process][INFO  ] Running command: /sbin/pvs --no-heading --readonly --separator=";" -S lv_uuid=sBvTc3-XQZb-pnhX-d1Oo-eY1d-4LoD-Aygc0d -o pv_name,pv_tags,pv_uuid,vg_name,lv_uuid
[2020-07-30 16:44:59,486][ceph_volume.process][INFO  ] stdout /dev/sdb";"";"XbH9Bl-Yk1L-3Bkm-zkkI-Wsq2-9X5s-qpon5u";"ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f";"sBvTc3-XQZb-pnhX-d1Oo-eY1d-4LoD-Aygc0d

Using nautilus 14.2.9
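
The two sources can be cross-checked directly; a quick sketch (the VG/LV names and the OSD id 58 are taken from the tags in the log above, adjust for other OSDs):

# db device as recorded in the LVM tags on the block LV
lvs --noheadings -o lv_tags ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f/osd-block-0c2f092a-4d92-4b0a-85e6-b65221cff791 | tr ',' '\n' | grep ceph.db_device

# db device the OSD directory actually points at
readlink -f /var/lib/ceph/osd/ceph-58/block.db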

#1

Updated by Jan Fajerski over 3 years ago

Can we get the full output of "ceph-volume lvm list"?
I suppose the link in /var/lib/ceph is actually correct and lvm list just prints things wrongly?
Do you happen to remember how the OSDs were created (manually via ceph-volume prepare/create/batch, ceph-ansible, ...)?

It looks like the lvm tag on the block LV is to blame; I'm just trying to understand how it got jumbled up.
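
If the tag on the block LV really is stale, it could in principle be corrected with lvchange; this is only a sketch, assuming /dev/nvme1n1p1 is the correct db partition as the symlink suggests, and the ceph.db_uuid tag may need the same treatment:

# replace the stale ceph.db_device tag on the block LV (sketch, verify devices first)
lvchange --deltag ceph.db_device=/dev/nvme0n1p1 --addtag ceph.db_device=/dev/nvme1n1p1 ceph-bcce3074-3095-481f-bf89-5bc746bb5b8f/osd-block-0c2f092a-4d92-4b0a-85e6-b65221cff791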

#2

Updated by Seena Fallah over 3 years ago

I have run this command myself and the data was the same as in ceph-volume's output:

/sbin/pvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size /dev/sdb

I think it was deployed with ceph-deploy at first and then taken over with ceph-ansible.
Are the lvm tags useless, then? I mean, does any process read these tags to perform any action?
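
ceph-volume's activation path does read these tags to locate the block and db devices when it (re)creates the symlinks under /var/lib/ceph/osd, so a wrong ceph.db_device tag is not harmless. A minimal way to dump the tags exactly as ceph-volume sees them, assuming the OSD id 58 from the log above:

ceph-volume lvm list --format json   # check the "tags" section for osd id 58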
