Bug #23581 (closed)

ceph-volume should use mount -o discard when underlying device is using VDO

Added by David Galloway about 6 years ago. Updated almost 6 years ago.

Status:
Resolved
Priority:
Immediate
Assignee:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

[root@reesi004 ~]# lsblk /dev/mapper/vdo_sda 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vdo_sda         253:12   0  3.6T  0 vdo  
└─vg_sda-lv_sda 253:13   0  3.6T  0 lvm  

[root@reesi004 ~]# mount | grep osd
tmpfs on /var/lib/ceph/osd/ceph-55 type tmpfs (rw,relatime,seclabel)

# ceph-volume lvm list

====== osd.55 ======

  [  db]    /dev/journals/sda

      type                      db
      osd id                    55
      cluster fsid              dd5d3a03-ba02-48a5-97ec-869b5443fb7b
      cluster name              ceph
      osd fsid                  19e83348-bc84-4516-9a78-fbac863ca3f7
      db device                 /dev/journals/sda
      encrypted                 0
      db uuid                   Y7JcqI-YQ2A-CetU-rqwA-Fp48-yT7h-mWjG07
      cephx lockbox secret      
      block uuid                EutNis-Gb1K-D281-dAMa-tHtD-6nYW-qiOCXv
      block device              /dev/vg_sda/lv_sda
      crush device class        None

  [block]    /dev/vg_sda/lv_sda

      type                      block
      osd id                    55
      cluster fsid              dd5d3a03-ba02-48a5-97ec-869b5443fb7b
      cluster name              ceph
      osd fsid                  19e83348-bc84-4516-9a78-fbac863ca3f7
      db device                 /dev/journals/sda
      encrypted                 0
      db uuid                   Y7JcqI-YQ2A-CetU-rqwA-Fp48-yT7h-mWjG07
      cephx lockbox secret      
      block uuid                EutNis-Gb1K-D281-dAMa-tHtD-6nYW-qiOCXv
      block device              /dev/vg_sda/lv_sda
      crush device class        None

https://bugzilla.redhat.com/show_bug.cgi?id=1563794#c2
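
For reference, a minimal sketch of the check and the mount option this ticket asks ceph-volume to apply. Device names follow the lsblk output above; the mount point is illustrative, and the sysfs path assumes the kvdo kernel module:

# Walk the device stack bottom-up; a TYPE of "vdo" means VDO is underneath:
lsblk -s -o NAME,TYPE /dev/vg_sda/lv_sda

# The kvdo kernel module also exposes a sysfs tree when loaded:
ls /sys/kvdo

# On a filestore OSD, the data partition would then be mounted with
# discard so VDO can reclaim freed blocks (mount point illustrative):
mount -o discard /dev/vg_sda/lv_sda /var/lib/ceph/osd/ceph-0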

#1

Updated by David Galloway about 6 years ago

  • Description updated (diff)
#2

Updated by Alfredo Deza about 6 years ago

  • Priority changed from Normal to Immediate
#3

Updated by Alfredo Deza about 6 years ago

This only affects filestore; bluestore does not need any of the special mounts since it uses tmpfs.
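
For context, the mount table for each backend looks roughly like this (a sketch; the filestore line is illustrative, the bluestore line is taken from the description above):

/dev/vg_sda/lv_sda on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,discard)   # filestore: a real mount, so discard applies
tmpfs on /var/lib/ceph/osd/ceph-55 type tmpfs (rw,relatime,seclabel)           # bluestore: tmpfs only; the block device is opened directly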

#5

Updated by Alfredo Deza about 6 years ago

  • Status changed from New to Fix Under Review
#6

Updated by Alfredo Deza about 6 years ago

  • Status changed from Fix Under Review to Resolved

merged commit 06c0933 into master

#7

Updated by Sergey Ponomarev almost 6 years ago

Alfredo Deza wrote:

This only affects filestore; bluestore does not need any of the special mounts since it uses tmpfs.

Please answer me: how can I enable block discarding on VDO if the OSD on top of it is Bluestore (not Filestore)?
I see the "block map discard required" counter never decreases and keeps growing.

#8

Updated by Sergey Ponomarev almost 6 years ago

Sergey Ponomarev wrote:

Alfredo Deza wrote:

This only affects filestore; bluestore does not need any of the special mounts since it uses tmpfs.

Please answer me: how can I enable block discarding on VDO if the OSD on top of it is Bluestore (not Filestore)?
I see the "block map discard required" counter never decreases and keeps growing.

I don't understand: with Ceph 12.2.5 and a VDO+Bluestore stack, is free space not discarded and reclaimed?
Does this work only for VDO+Filestore?

#9

Updated by Alfredo Deza almost 6 years ago

With bluestore we don't mount devices, so no discard flags can be enabled at mount time. Bluestore does its own discard internally; more details on how that is done are in this PR: https://github.com/ceph/ceph/pull/14727
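
For anyone hitting the same question: with bluestore the knob is a config option, not a mount flag. A sketch, assuming the bdev_enable_discard option that came out of that discard work; verify the option name against your release before relying on it:

# ceph.conf:
[osd]
bdev_enable_discard = true

# On releases with the centralized config database, it can also be set at runtime:
ceph config set osd bdev_enable_discard true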
