Bug #38175

ceph-volume inventory command is slow

Added by Sébastien Han about 5 years ago. Updated over 3 years ago.

Status:
Closed
Priority:
Normal
Assignee:
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Running on an idle machine with the following specs:

[root@e23-h03-740xd ~]# lsblk
NAME                          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                             8:0    0  1.7T  0 disk
|-sda1                          8:1    0    1G  0 part /boot
`-sda2                          8:2    0  1.7T  0 part
  |-rhel_e23--h03--740xd-root 253:0    0   50G  0 lvm  /
  |-rhel_e23--h03--740xd-swap 253:1    0    4G  0 lvm  [SWAP]
  `-rhel_e23--h03--740xd-home 253:2    0  1.6T  0 lvm  /home
sdb                             8:16   0  1.7T  0 disk
sdc                             8:32   0  1.7T  0 disk
sdd                             8:48   0  1.7T  0 disk
sde                             8:64   0  1.7T  0 disk
sdf                             8:80   0  1.7T  0 disk
sdg                             8:96   0  1.7T  0 disk
sdh                             8:112  0  1.7T  0 disk
sdi                             8:128  0  1.7T  0 disk
sdj                             8:144  0  1.7T  0 disk
sdk                             8:160  0  1.7T  0 disk
sdl                             8:176  0  1.7T  0 disk

[root@e23-h03-740xd ~]# free -mh
              total        used        free      shared  buff/cache   available
Mem:           187G        2.1G        183G         10M        1.6G        184G
Swap:          4.0G          0B        4.0G

[root@e23-h03-740xd ~]# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                64
On-line CPU(s) list:   0-63
Thread(s) per core:    2
Core(s) per socket:    16
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 85
Model name:            Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
Stepping:              4
CPU MHz:               2100.000
BogoMIPS:              4200.00
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
L3 cache:              22528K
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke spec_ctrl intel_stibp flush_l1d

The inventory command needs 40 seconds to list only 12 drives:

[root@e23-h03-740xd ~]# time docker run -v /dev:/dev -v /sys:/sys --net=host --privileged=true -v /var/run/udev/:/var/run/udev/ -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm  --entrypoint=ceph-volume ceph/daemon:latest-master inventory

Device Path               Size         rotates available Model name
/dev/sdb                  1.64 TB      True    True      PERC H740P Adp
/dev/sdc                  1.64 TB      True    True      PERC H740P Adp
/dev/sdd                  1.64 TB      True    True      PERC H740P Adp
/dev/sde                  1.64 TB      True    True      PERC H740P Adp
/dev/sdf                  1.64 TB      True    True      PERC H740P Adp
/dev/sdg                  1.64 TB      True    True      PERC H740P Adp
/dev/sdh                  1.64 TB      True    True      PERC H740P Adp
/dev/sdi                  1.64 TB      True    True      PERC H740P Adp
/dev/sdj                  1.64 TB      True    True      PERC H740P Adp
/dev/sdk                  1.64 TB      True    True      PERC H740P Adp
/dev/sdl                  1.64 TB      True    True      PERC H740P Adp
/dev/sda                  1.64 TB      True    False     PERC H740P Adp

real    0m39.766s
user    0m0.014s
sys     0m0.014s

On another machine:

[root@tarox~] time docker run -v /dev:/dev -v /sys:/sys --net=host --privileged=true -v /var/run/udev/:/var/run/udev/ -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm  --entrypoint=ceph-volume ceph/daemon:latest-master inventory

Device Path               Size         rotates available Model name
/dev/sda                  476.94 GB    False   False     SanDisk SD8SN8U5

real    0m7.832s
user    0m0.009s
sys     0m0.008s

History

#1 Updated by Rishabh Dave over 4 years ago

  • Status changed from New to In Progress
  • Assignee set to Rishabh Dave

Looks very similar to tracker #37490: the LVs list is being created multiple times. I'll start with this one.
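
The pattern described above can be sketched as follows. This is a hypothetical illustration, not the actual ceph-volume code: `run_lvs`, `lvs_per_device_slow`, and `lvs_per_device_fast` are made-up names, and the `fetch` parameter exists only to make the call count visible.

```python
import subprocess

def run_lvs():
    """Shell out to `lvs` once; each call costs a full LVM scan."""
    out = subprocess.check_output(
        ["lvs", "--noheadings", "-o", "lv_name,vg_name,lv_path",
         "--separator", ";"], text=True)
    return [line.strip().split(";") for line in out.splitlines() if line.strip()]

def lvs_per_device_slow(devices, fetch=run_lvs):
    # Anti-pattern resembling the report: the LV list is rebuilt for
    # every device, so N devices mean N expensive LVM scans.
    return {dev: [lv for lv in fetch() if lv[2].startswith(dev)]
            for dev in devices}

def lvs_per_device_fast(devices, fetch=run_lvs):
    # One scan, reused for every device queried.
    all_lvs = fetch()
    return {dev: [lv for lv in all_lvs if lv[2].startswith(dev)]
            for dev in devices}
```

With 12 drives this is the difference between 12 LVM scans and 1, which matches the scaling seen in the timings above.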

#2 Updated by Rishabh Dave over 3 years ago

  • Pull request ID set to 30102

The PR was closed because ceph-volume development moved in a different direction: instead of improving the filter methods in the container classes (namely Volumes, PVolumes, and VolumeGroups), we got rid of them in favour of the filters provided by the LVM commands themselves. See https://github.com/ceph/ceph/pull/32493 and https://github.com/ceph/ceph/pull/35937. Closing this ticket too.
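
The replacement approach can be sketched as follows. This is not the actual code from the linked PRs; `build_lvs_cmd` and `get_lvs` are hypothetical helpers, but `-S`/`--select` and `--reportformat json` are real `lvs` options that let LVM do the filtering.

```python
import json
import subprocess

def build_lvs_cmd(select=None):
    # Build an `lvs` invocation that asks LVM itself to filter the
    # report, instead of fetching every LV and filtering in Python.
    cmd = ["lvs", "--reportformat", "json",
           "-o", "lv_name,vg_name,lv_path"]
    if select:
        cmd += ["-S", select]  # e.g. "vg_name=vg0"
    return cmd

def get_lvs(select=None):
    # Single subprocess call; LVM returns only the matching LVs.
    out = subprocess.check_output(build_lvs_cmd(select), text=True)
    return json.loads(out)["report"][0]["lv"]
```

Pushing the selection into the `lvs` command removes the need for the Python-side container classes and their per-call scans.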

#3 Updated by Rishabh Dave over 3 years ago

  • Status changed from In Progress to Closed
