Bug #47208

ceph-osd Failed to create bluestore

Added by fengsheng wang 20 days ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: -
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: 08/31/2020
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature: -

Description

--- begin dump of recent events ---
-47> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command assert hook 0xaaabb95b6c20
-46> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command abort hook 0xaaabb95b6c20
-45> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command perfcounters_dump hook 0xaaabb95b6c20
-44> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command 1 hook 0xaaabb95b6c20
-43> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command perf dump hook 0xaaabb95b6c20
-42> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command perfcounters_schema hook 0xaaabb95b6c20
-41> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command perf histogram dump hook 0xaaabb95b6c20
-40> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command 2 hook 0xaaabb95b6c20
-39> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command perf schema hook 0xaaabb95b6c20
-38> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command perf histogram schema hook 0xaaabb95b6c20
-37> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command perf reset hook 0xaaabb95b6c20
-36> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command config show hook 0xaaabb95b6c20
-35> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command config help hook 0xaaabb95b6c20
-34> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command config set hook 0xaaabb95b6c20
-33> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command config unset hook 0xaaabb95b6c20
-32> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command config get hook 0xaaabb95b6c20
-31> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command config diff hook 0xaaabb95b6c20
-30> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command config diff get hook 0xaaabb95b6c20
-29> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command log flush hook 0xaaabb95b6c20
-28> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command log dump hook 0xaaabb95b6c20
-27> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command log reopen hook 0xaaabb95b6c20
-26> 2020-08-31 11:41:02.120 fffebf2f2c50 5 asok(0xaaabb9604800) register_command dump_mempools hook 0xaaabba342548
-25> 2020-08-31 11:41:02.140 fffebf2f2c50 0 ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable), process ceph-osd, pid 1894893
-24> 2020-08-31 11:41:02.140 fffebf2f2c50 0 pidfile_write: ignore empty --pid-file
-23> 2020-08-31 11:41:02.190 fffebf2f2c50 5 asok(0xaaabb9604800) init /var/run/ceph/ceph-osd.0.asok
-22> 2020-08-31 11:41:02.190 fffebf2f2c50 5 asok(0xaaabb9604800) bind_and_listen /var/run/ceph/ceph-osd.0.asok
-21> 2020-08-31 11:41:02.190 fffebf2f2c50 5 asok(0xaaabb9604800) register_command 0 hook 0xaaabba328858
-20> 2020-08-31 11:41:02.190 fffebf2f2c50 5 asok(0xaaabb9604800) register_command version hook 0xaaabba328858
-19> 2020-08-31 11:41:02.190 fffebf2f2c50 5 asok(0xaaabb9604800) register_command git_version hook 0xaaabba328858
-18> 2020-08-31 11:41:02.190 fffebf2f2c50 5 asok(0xaaabb9604800) register_command help hook 0xaaabb95b6bd0
-17> 2020-08-31 11:41:02.190 fffebf2f2c50 5 asok(0xaaabb9604800) register_command get_command_descriptions hook 0xaaabb95b6bc0
-16> 2020-08-31 11:41:02.190 fffeb9f9b490 5 asok(0xaaabb9604800) entry start
-15> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 auth: error reading file: /ceph/data/keyring: can't open /ceph/data/keyring: (2) No such file or directory
-14> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 created new key in keyring /ceph/data/keyring
-13> 2020-08-31 11:41:02.190 fffebf2f2c50 1 bluestore(/ceph/data) mkfs path /ceph/data
-12> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 bluestore(/ceph/data/block) _read_bdev_label failed to open /ceph/data/block: (2) No such file or directory
-11> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 bluestore(/ceph/data/block) _read_bdev_label failed to open /ceph/data/block: (2) No such file or directory
-10> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 bluestore(/ceph/data/block) _read_bdev_label failed to open /ceph/data/block: (2) No such file or directory
-9> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 bluestore(/ceph/data) _read_fsid unparsable uuid
-8> 2020-08-31 11:41:02.190 fffebf2f2c50 1 bluestore(/ceph/data) mkfs using provided fsid 5a261d9b-63f0-4e16-bfc5-e12484a8fae2
-7> 2020-08-31 11:41:02.190 fffebf2f2c50 1 bdev create path /ceph/data/block type kernel
-6> 2020-08-31 11:41:02.190 fffebf2f2c50 1 bdev(0xaaabba202000 /ceph/data/block) open path /ceph/data/block
-5> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 bdev(0xaaabba202000 /ceph/data/block) _lock flock failed on /ceph/data/block
-4> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 bdev(0xaaabba202000 /ceph/data/block) open failed to lock /ceph/data/block: (11) Resource temporarily unavailable
-3> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 bluestore(/ceph/data) mkfs failed, (11) Resource temporarily unavailable
-2> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 OSD::mkfs: ObjectStore::mkfs failed with error (11) Resource temporarily unavailable
-1> 2020-08-31 11:41:02.190 fffebf2f2c50 -1 ** ERROR: error creating empty object store in /ceph/data: (11) Resource temporarily unavailable
0> 2020-08-31 11:41:02.200 fffebf2f2c50 -1 *** Caught signal (Segmentation fault) **
in thread fffebf2f2c50 thread_name:ceph-osd

ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20) nautilus (stable)
1: [0xfffec91507c0]
2: (tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned int, int)+0x88) [0xfffec8e075d4]
3: (tcmalloc::ThreadCache::Scavenge()+0x60) [0xfffec8e079fc]
4: (std::vector<Option, std::allocator<Option> >::~vector()+0x340) [0xfffebf7591a8]
5: (__cxa_finalize()+0xc8) [0xfffec82f8c98]
6: (()+0x25a798) [0xfffebf63a798]
7: (()+0xeaa8) [0xfffec916eaa8]
8: (()+0x388ec) [0xfffec82f88ec]
9: (on_exit()+0) [0xfffec82f8914]
10: (Preforker::exit(int)+0x40) [0xaaaba9e10328]
11: (main()+0x1c10) [0xaaaba9ddc578]
12: (__libc_start_main()+0xf0) [0xfffec82e1724]
13: (()+0x4bea34) [0xaaaba9e0ea34]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
*** Caught signal (Segmentation fault) **
in thread fffebf2f2c50 thread_name:ceph-osd
/ceph/bin/target/osd: line 268: 1894893 Segmentation fault (core dumped) ceph-osd -i "${CEPH_META_OSD_ID}" --no-mon-config -c "/etc/ceph/ceph.conf" --mkkey --osd_uuid="${CEPH_META_OSD_UUID}" --osd_objectstore=bluestore --bluestore_block_path "${UUID_DEV}/${CEPH_META_OSD_UUID}-main" ${DB_DEV:+--bluestore_block_db_path "${UUID_DEV}/${CEPH_META_OSD_UUID}-db"} ${WAL_DEV:+--bluestore_block_wal_path "${UUID_DEV}/${CEPH_META_OSD_UUID}-wal"} --mkfs --log_to_stderr=true --err_to_stderr=true --log_file /dev/null --osd-data ${GLOBAL_DATA_DIR}
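The root failure in the log is the `_lock flock failed ... (11) Resource temporarily unavailable` line: BlueStore takes an exclusive non-blocking `flock(2)` on its block file during mkfs, and EAGAIN means some other open file description (typically another ceph-osd process, or a second mkfs attempt racing the first) already holds that lock. A minimal sketch of that behavior, using a hypothetical temp file in place of `/ceph/data/block`:

```python
import errno
import fcntl
import os
import tempfile

# Stand-in for the OSD block device (hypothetical temp file, not a real bdev).
path = os.path.join(tempfile.mkdtemp(), "block")

# First opener takes the exclusive non-blocking lock, as BlueStore mkfs does.
holder = os.open(path, os.O_CREAT | os.O_RDWR)
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A second open file description contending for the same lock fails with
# EAGAIN -- the "(11) Resource temporarily unavailable" seen in the log.
second = os.open(path, os.O_RDWR)
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    got = 0
except BlockingIOError as e:
    got = e.errno

print(got == errno.EAGAIN)  # True; EAGAIN is 11 on Linux
```

Checking for a stray ceph-osd process holding the block path (e.g. with `fuser` or `lsof`) before retrying mkfs would rule this cause in or out; the subsequent segfault happens only on the error-exit path, after mkfs has already failed.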
