Bug #25207

ceph-volume lvm create gives segmentation fault

Added by Pavan Kumar Linga over 2 years ago. Updated about 2 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

Hi,

I am trying to install an OSD on an NVMe drive. It is not linked to any volume group.
When I ran the command "ceph-volume lvm create --data /dev/nvme1n1" on the OSD node, I got the segmentation fault below. For testing, on another server I created a volume group and a logical volume and used "ceph-volume lvm create --data ceph_vg_ce2/osd_A"; even this gives the segmentation fault.
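The two `--data` argument forms above behave differently: a minimal sketch (my assumption about how ceph-volume distinguishes them, not taken from its source) is that an absolute `/dev` path is treated as a raw device whose VG/LV get auto-created, while a `vg/lv` pair names a pre-existing logical volume:

```shell
# Sketch only: classify the --data argument the way the two invocations
# in this report differ (raw device vs. pre-created vg/lv pair).
data=/dev/nvme1n1
case "$data" in
    /dev/*) kind="raw device"
            echo "raw device: ceph-volume creates the VG/LV itself" ;;
    */*)    kind="existing LV"
            echo "existing LV: vg=${data%/*} lv=${data#*/}" ;;
esac
```

With `data=ceph_vg_ce2/osd_A` the second branch would split it into `vg=ceph_vg_ce2` and `lv=osd_A`.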

Debug messages:

[root@ce3-ap03 ~]# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0 465.8G  0 disk
+-sda1            8:1    0     1G  0 part /boot
+-sda2            8:2    0 464.8G  0 part
  +-centos-root 253:0    0    50G  0 lvm  /
  +-centos-swap 253:1    0     4G  0 lvm  [SWAP]
  +-centos-home 253:2    0 410.8G  0 lvm  /home
sdb               8:16   0 465.8G  0 disk
sdc               8:32   0 372.6G  0 disk
sr0              11:0    1  1024M  0 rom
nvme0n1         259:0    0 260.9G  0 disk
+-nvme0n1p1     259:1    0 255.8G  0 part
+-nvme0n1p2     259:2    0     5G  0 part
nvme1n1         259:3    0 745.2G  0 disk
[root@ce3-ap03 ~]#
[root@ce3-ap03 ~]#
[root@ce3-ap03 ~]#
[root@ce3-ap03 ~]#
[root@ce3-ap03 ~]# ceph-volume lvm create --data /dev/nvme1n1
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new bab85a09-83cf-430d-8e0d-aff77c69cb01
Running command: vgcreate --force --yes ceph-f47d0cd6-511e-4c01-946e-359a3e5fca24 /dev/nvme1n1
 stdout: Wiping gpt signature on /dev/nvme1n1.
 stdout: Wiping gpt signature on /dev/nvme1n1.
 stdout: Wiping PMBR signature on /dev/nvme1n1.
 stdout: Physical volume "/dev/nvme1n1" successfully created.
 stdout: Volume group "ceph-f47d0cd6-511e-4c01-946e-359a3e5fca24" successfully created
Running command: lvcreate --yes -l 100%FREE -n osd-block-bab85a09-83cf-430d-8e0d-aff77c69cb01 ceph-f47d0cd6-511e-4c01-946e-359a3e5fca24
 stdout: Logical volume "osd-block-bab85a09-83cf-430d-8e0d-aff77c69cb01" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: chown -h ceph:ceph /dev/ceph-f47d0cd6-511e-4c01-946e-359a3e5fca24/osd-block-bab85a09-83cf-430d-8e0d-aff77c69cb01
Running command: chown -R ceph:ceph /dev/dm-3
Running command: ln -s /dev/ceph-f47d0cd6-511e-4c01-946e-359a3e5fca24/osd-block-bab85a09-83cf-430d-8e0d-aff77c69cb01 /var/lib/ceph/osd/ceph-0/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQCG9mBbjW7CEhAAdY//MVEu21Tk+4UDl1zIgw==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
 stdout: added entity osd.0 auth auth(auid = 18446744073709551615 key=AQCG9mBbjW7CEhAAdY//MVEu21Tk+4UDl1zIgw== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid bab85a09-83cf-430d-8e0d-aff77c69cb01 --setuser ceph --setgroup ceph
 stderr: 2018-07-31 16:53:44.940544 7f0ebfd35580 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: *** Caught signal (Segmentation fault) **
 stderr: in thread 7f0ebfd35580 thread_name:ceph-osd
 stderr: ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
 stderr: 1: (()+0xa48ec1) [0x55645e268ec1]
 stderr: 2: (()+0xf6d0) [0x7f0ebcf816d0]
 stderr: 3: (malloc_usable_size()+0x83) [0x7f0ebf6bdf73]
 stderr: 4: (rocksdb::BlockBasedTable::NewIndexIterator(rocksdb::ReadOptions const&, rocksdb::BlockIter*, rocksdb::BlockBasedTable::CachableEntry<rocksdb::BlockBasedTable::IndexReader>*)+0x466) [0x55645e5e46b6]
 stderr: 5: (rocksdb::BlockBasedTable::Open(rocksdb::ImmutableCFOptions const&, rocksdb::EnvOptions const&, rocksdb::BlockBasedTableOptions const&, rocksdb::InternalKeyComparator const&, std::unique_ptr<rocksdb::RandomAccessFileReader, std::default_delete<rocksdb::RandomAccessFileReader> >&&, unsigned long, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool, bool, int)+0xe3c) [0x55645e5ea07c]
 stderr: 6: (rocksdb::BlockBasedTableFactory::NewTableReader(rocksdb::TableReaderOptions const&, std::unique_ptr<rocksdb::RandomAccessFileReader, std::default_delete<rocksdb::RandomAccessFileReader> >&&, unsigned long, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool) const+0x51) [0x55645e5de181]
 stderr: 7: (rocksdb::TableCache::GetTableReader(rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, bool, unsigned long, bool, rocksdb::HistogramImpl*, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool, int, bool)+0x215) [0x55645e6a4155]
 stderr: 8: (rocksdb::TableCache::FindTable(rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, rocksdb::Cache::Handle**, bool, bool, rocksdb::HistogramImpl*, bool, int, bool)+0x2b0) [0x55645e6a4760]
 stderr: 9: (rocksdb::TableCache::NewIterator(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, rocksdb::RangeDelAggregator*, rocksdb::TableReader**, rocksdb::HistogramImpl*, bool, rocksdb::Arena*, bool, int)+0x57e) [0x55645e6a4e7e]
 stderr: 10: (rocksdb::BuildTable(std::string const&, rocksdb::Env*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::EnvOptions const&, rocksdb::TableCache*, rocksdb::InternalIterator*, std::unique_ptr<rocksdb::InternalIterator, std::default_delete<rocksdb::InternalIterator> >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::string const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::CompressionType, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int)+0x113e) [0x55645e62ae8e]
 stderr: 11: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x90c) [0x55645e57721c]
 stderr: 12: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool)+0x1b4b) [0x55645e5795ab]
 stderr: 13: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x7e6) [0x55645e57a2e6]
 stderr: 14: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::string const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0xed3) [0x55645e57b5a3]
 stderr: 15: (rocksdb::DB::Open(rocksdb::Options const&, std::string const&, rocksdb::DB**)+0x186) [0x55645e57c7e6]
 stderr: 16: (RocksDBStore::do_open(std::ostream&, bool)+0x8e8) [0x55645e1c3368]
 stderr: 17: (BlueStore::_open_db(bool)+0xdb3) [0x55645e14e313]
 stderr: 18: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55645e180827]
 stderr: 19: (BlueStore::mkfs()+0xe40) [0x55645e157150]
 stderr: 20: (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d, int)+0x29b) [0x55645dd22f1b]
 stderr: 21: (main()+0x11a2) [0x55645dc38de2]
 stderr: 22: (__libc_start_main()+0xf5) [0x7f0ebbf8e445]
 stderr: 23: (()+0x4b9003) [0x55645dcd9003]
 stderr: 2018-07-31 16:53:45.466266 7f0ebfd35580 -1 *** Caught signal (Segmentation fault) **
 stderr: in thread 7f0ebfd35580 thread_name:ceph-osd
 stderr: ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
 stderr: 1: (()+0xa48ec1) [0x55645e268ec1]
 stderr: 2: (()+0xf6d0) [0x7f0ebcf816d0]
 stderr: 3: (malloc_usable_size()+0x83) [0x7f0ebf6bdf73]
 stderr: 4: (rocksdb::BlockBasedTable::NewIndexIterator(rocksdb::ReadOptions const&, rocksdb::BlockIter*, rocksdb::BlockBasedTable::CachableEntry<rocksdb::BlockBasedTable::IndexReader>*)+0x466) [0x55645e5e46b6]
 stderr: 5: (rocksdb::BlockBasedTable::Open(rocksdb::ImmutableCFOptions const&, rocksdb::EnvOptions const&, rocksdb::BlockBasedTableOptions const&, rocksdb::InternalKeyComparator const&, std::unique_ptr<rocksdb::RandomAccessFileReader, std::default_delete<rocksdb::RandomAccessFileReader> >&&, unsigned long, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool, bool, int)+0xe3c) [0x55645e5ea07c]
 stderr: 6: (rocksdb::BlockBasedTableFactory::NewTableReader(rocksdb::TableReaderOptions const&, std::unique_ptr<rocksdb::RandomAccessFileReader, std::default_delete<rocksdb::RandomAccessFileReader> >&&, unsigned long, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool) const+0x51) [0x55645e5de181]
 stderr: 7: (rocksdb::TableCache::GetTableReader(rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, bool, unsigned long, bool, rocksdb::HistogramImpl*, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool, int, bool)+0x215) [0x55645e6a4155]
 stderr: 8: (rocksdb::TableCache::FindTable(rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, rocksdb::Cache::Handle**, bool, bool, rocksdb::HistogramImpl*, bool, int, bool)+0x2b0) [0x55645e6a4760]
 stderr: 9: (rocksdb::TableCache::NewIterator(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, rocksdb::RangeDelAggregator*, rocksdb::TableReader**, rocksdb::HistogramImpl*, bool, rocksdb::Arena*, bool, int)+0x57e) [0x55645e6a4e7e]
 stderr: 10: (rocksdb::BuildTable(std::string const&, rocksdb::Env*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::EnvOptions const&, rocksdb::TableCache*, rocksdb::InternalIterator*, std::unique_ptr<rocksdb::InternalIterator, std::default_delete<rocksdb::InternalIterator> >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::string const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::CompressionType, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int)+0x113e) [0x55645e62ae8e]
 stderr: 11: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x90c) [0x55645e57721c]
 stderr: 12: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool)+0x1b4b) [0x55645e5795ab]
 stderr: 13: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x7e6) [0x55645e57a2e6]
 stderr: 14: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::string const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0xed3) [0x55645e57b5a3]
 stderr: 15: (rocksdb::DB::Open(rocksdb::Options const&, std::string const&, rocksdb::DB**)+0x186) [0x55645e57c7e6]
 stderr: 16: (RocksDBStore::do_open(std::ostream&, bool)+0x8e8) [0x55645e1c3368]
 stderr: 17: (BlueStore::_open_db(bool)+0xdb3) [0x55645e14e313]
 stderr: 18: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55645e180827]
 stderr: 19: (BlueStore::mkfs()+0xe40) [0x55645e157150]
 stderr: 20: (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d, int)+0x29b) [0x55645dd22f1b]
 stderr: 21: (main()+0x11a2) [0x55645dc38de2]
 stderr: 22: (__libc_start_main()+0xf5) [0x7f0ebbf8e445]
 stderr: 23: (()+0x4b9003) [0x55645dcd9003]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: -340> 2018-07-31 16:53:44.940544 7f0ebfd35580 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: 0> 2018-07-31 16:53:45.466266 7f0ebfd35580 -1 *** Caught signal (Segmentation fault) **
 stderr: in thread 7f0ebfd35580 thread_name:ceph-osd
 stderr: ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
 stderr: 1: (()+0xa48ec1) [0x55645e268ec1]
 stderr: 2: (()+0xf6d0) [0x7f0ebcf816d0]
 stderr: 3: (malloc_usable_size()+0x83) [0x7f0ebf6bdf73]
 stderr: 4: (rocksdb::BlockBasedTable::NewIndexIterator(rocksdb::ReadOptions const&, rocksdb::BlockIter*, rocksdb::BlockBasedTable::CachableEntry<rocksdb::BlockBasedTable::IndexReader>*)+0x466) [0x55645e5e46b6]
 stderr: 5: (rocksdb::BlockBasedTable::Open(rocksdb::ImmutableCFOptions const&, rocksdb::EnvOptions const&, rocksdb::BlockBasedTableOptions const&, rocksdb::InternalKeyComparator const&, std::unique_ptr<rocksdb::RandomAccessFileReader, std::default_delete<rocksdb::RandomAccessFileReader> >&&, unsigned long, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool, bool, int)+0xe3c) [0x55645e5ea07c]
 stderr: 6: (rocksdb::BlockBasedTableFactory::NewTableReader(rocksdb::TableReaderOptions const&, std::unique_ptr<rocksdb::RandomAccessFileReader, std::default_delete<rocksdb::RandomAccessFileReader> >&&, unsigned long, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool) const+0x51) [0x55645e5de181]
 stderr: 7: (rocksdb::TableCache::GetTableReader(rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, bool, unsigned long, bool, rocksdb::HistogramImpl*, std::unique_ptr<rocksdb::TableReader, std::default_delete<rocksdb::TableReader> >*, bool, int, bool)+0x215) [0x55645e6a4155]
 stderr: 8: (rocksdb::TableCache::FindTable(rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, rocksdb::Cache::Handle**, bool, bool, rocksdb::HistogramImpl*, bool, int, bool)+0x2b0) [0x55645e6a4760]
 stderr: 9: (rocksdb::TableCache::NewIterator(rocksdb::ReadOptions const&, rocksdb::EnvOptions const&, rocksdb::InternalKeyComparator const&, rocksdb::FileDescriptor const&, rocksdb::RangeDelAggregator*, rocksdb::TableReader**, rocksdb::HistogramImpl*, bool, rocksdb::Arena*, bool, int)+0x57e) [0x55645e6a4e7e]
 stderr: 10: (rocksdb::BuildTable(std::string const&, rocksdb::Env*, rocksdb::ImmutableCFOptions const&, rocksdb::MutableCFOptions const&, rocksdb::EnvOptions const&, rocksdb::TableCache*, rocksdb::InternalIterator*, std::unique_ptr<rocksdb::InternalIterator, std::default_delete<rocksdb::InternalIterator> >, rocksdb::FileMetaData*, rocksdb::InternalKeyComparator const&, std::vector<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> >, std::allocator<std::unique_ptr<rocksdb::IntTblPropCollectorFactory, std::default_delete<rocksdb::IntTblPropCollectorFactory> > > > const*, unsigned int, std::string const&, std::vector<unsigned long, std::allocator<unsigned long> >, unsigned long, rocksdb::CompressionType, rocksdb::CompressionOptions const&, bool, rocksdb::InternalStats*, rocksdb::TableFileCreationReason, rocksdb::EventLogger*, int, rocksdb::Env::IOPriority, rocksdb::TableProperties*, int)+0x113e) [0x55645e62ae8e]
 stderr: 11: (rocksdb::DBImpl::WriteLevel0TableForRecovery(int, rocksdb::ColumnFamilyData*, rocksdb::MemTable*, rocksdb::VersionEdit*)+0x90c) [0x55645e57721c]
 stderr: 12: (rocksdb::DBImpl::RecoverLogFiles(std::vector<unsigned long, std::allocator<unsigned long> > const&, unsigned long*, bool)+0x1b4b) [0x55645e5795ab]
 stderr: 13: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x7e6) [0x55645e57a2e6]
 stderr: 14: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::string const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0xed3) [0x55645e57b5a3]
 stderr: 15: (rocksdb::DB::Open(rocksdb::Options const&, std::string const&, rocksdb::DB**)+0x186) [0x55645e57c7e6]
 stderr: 16: (RocksDBStore::do_open(std::ostream&, bool)+0x8e8) [0x55645e1c3368]
 stderr: 17: (BlueStore::_open_db(bool)+0xdb3) [0x55645e14e313]
 stderr: 18: (BlueStore::_fsck(bool, bool)+0x3c7) [0x55645e180827]
 stderr: 19: (BlueStore::mkfs()+0xe40) [0x55645e157150]
 stderr: 20: (OSD::mkfs(CephContext*, ObjectStore*, std::string const&, uuid_d, int)+0x29b) [0x55645dd22f1b]
 stderr: 21: (main()+0x11a2) [0x55645dc38de2]
 stderr: 22: (__libc_start_main()+0xf5) [0x7f0ebbf8e445]
 stderr: 23: (()+0x4b9003) [0x55645dcd9003]
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--> Was unable to complete a new OSD, will rollback changes
--> OSD will be fully purged from the cluster, because the ID was generated
Running command: ceph osd purge osd.0 --yes-i-really-mean-it
 stderr: purged osd.0
-->  RuntimeError: Command failed with exit code -11: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid bab85a09-83cf-430d-8e0d-aff77c69cb01 --setuser ceph --setgroup ceph
[root@ce3-ap03 ~]#
[root@ce3-ap03 ~]#
[root@ce3-ap03 ~]# lsblk
NAME                                                                                                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                                                    8:0    0 465.8G  0 disk
+-sda1                                                                                                 8:1    0     1G  0 part /boot
+-sda2                                                                                                 8:2    0 464.8G  0 part
  +-centos-root                                                                                      253:0    0    50G  0 lvm  /
  +-centos-swap                                                                                      253:1    0     4G  0 lvm  [SWAP]
  +-centos-home                                                                                      253:2    0 410.8G  0 lvm  /home
sdb                                                                                                    8:16   0 465.8G  0 disk
sdc                                                                                                    8:32   0 372.6G  0 disk
sr0                                                                                                   11:0    1  1024M  0 rom
nvme0n1                                                                                              259:0    0 260.9G  0 disk
+-nvme0n1p1                                                                                          259:1    0 255.8G  0 part
+-nvme0n1p2                                                                                          259:2    0     5G  0 part
nvme1n1                                                                                              259:3    0 745.2G  0 disk
+-ceph--f47d0cd6--511e--4c01--946e--359a3e5fca24-osd--block--bab85a09--83cf--430d--8e0d--aff77c69cb01
                                                                                                     253:3    0 745.2G  0 lvm
[root@ce3-ap03 ~]#

Both segmentation faults report "_read_fsid unparsable uuid" as the reason. Can you please suggest a solution to this problem?

History

#1 Updated by Alfredo Deza over 2 years ago

  • Description updated (diff)

#2 Updated by Alfredo Deza over 2 years ago

  • Project changed from ceph-volume to bluestore
bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid

The above is considered normal for the first time the OSD is created.

Not sure I see anything wrong with ceph-volume here, moving to the bluestore tracker.
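To illustrate why that message is expected on a fresh OSD: a brand-new OSD data directory starts with an empty `fsid` file, so the first read cannot parse a UUID from it (a hypothetical sketch using a temp directory, not the real `/var/lib/ceph/osd/ceph-0/` path):

```shell
# Simulate a freshly created OSD data dir: the fsid file exists but is
# empty, so no 36-character UUID can be parsed from it on first read.
osd_dir=$(mktemp -d)
: > "$osd_dir/fsid"
if ! grep -qE '^[0-9a-f-]{36}$' "$osd_dir/fsid"; then
    echo "unparsable uuid (expected on first mkfs)"
fi
```

Only after mkfs succeeds does BlueStore write the generated fsid into that file, so the warning disappears on subsequent starts.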

#3 Updated by Pavan Kumar Linga over 2 years ago

Alfredo Deza wrote:

[...]

The above is considered normal for the first time the OSD is created.

Not sure I see anything wrong with ceph-volume here, moving to the bluestore tracker.

I even tried a manual installation from the source code. Every time, the OSDs end up down and out.
I have tried all the possible installation methods. The Ceph cluster used to work earlier, but now I get these issues.
Am I missing something on the drive-partitioning side that is preventing the OSDs from running in the manual installation, or causing the segmentation fault with ceph-volume? I followed your suggestion in Bug 1509175: creating a PARTUUID using fdisk, and I also tried with parted.

I really appreciate your time and help. Thank you.

#4 Updated by Sage Weil over 2 years ago

This looks a bit like the error we see when jemalloc is enabled in /etc/{default,sysconfig}/ceph. Can you check whether it is enabled there and, if so, disable it?

See http://tracker.ceph.com/issues/20557
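A sketch of that check (the config file contents below are an assumed example of a jemalloc preload entry, and a temp file stands in for the real /etc/sysconfig/ceph; adapt the path to your distribution):

```shell
# Hypothetical example: find a jemalloc preload line in the ceph
# sysconfig file and comment it out, then restart the OSDs.
cfg=$(mktemp)                                   # stand-in for /etc/sysconfig/ceph
echo 'LD_PRELOAD=/usr/lib64/libjemalloc.so.1' > "$cfg"   # assumed entry
if grep -qi jemalloc "$cfg"; then
    sed -i 's/^[^#].*jemalloc.*/#&/' "$cfg"     # comment the line out
fi
cat "$cfg"
```

After disabling the preload, rerun `ceph-volume lvm create` to see whether the ceph-osd mkfs still segfaults in `malloc_usable_size()`.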

#5 Updated by Sage Weil about 2 years ago

  • Status changed from New to Can't reproduce
