Bug #50992
src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
% Done:
0%
Source:
Community (user)
Tags:
backport_processed
Backport:
quincy, reef
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
When I use ceph-volume to add a new OSD, the following error appears:
root@yujiang-performance-dev:~/github/ceph/build# ceph-volume lvm create --bluestore --data /dev/nvme0n1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 88631385-1de2-4168-9c4e-8b3b3e292355
Running command: /sbin/vgcreate --force --yes ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f /dev/nvme0n1
 stdout: Physical volume "/dev/nvme0n1" successfully created.
 stdout: Volume group "ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f" successfully created
Running command: /sbin/lvcreate --yes -l 476932 -n osd-block-88631385-1de2-4168-9c4e-8b3b3e292355 ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f
 stdout: Logical volume "osd-block-88631385-1de2-4168-9c4e-8b3b3e292355" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
Running command: /bin/chown -h ceph:ceph /dev/ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f/osd-block-88631385-1de2-4168-9c4e-8b3b3e292355
Running command: /bin/chown -R ceph:ceph /dev/dm-0
Running command: /bin/ln -s /dev/ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f/osd-block-88631385-1de2-4168-9c4e-8b3b3e292355 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
 stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAPVq9gdL/3MxAALNvWGRgndR/7H0i/cc6b+g==
 stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth(key=AQAPVq9gdL/3MxAALNvWGRgndR/7H0i/cc6b+g==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 88631385-1de2-4168-9c4e-8b3b3e292355 --setuser ceph --setgroup ceph
 stderr: 2021-05-27T08:19:29.280+0000 7f6f2beb3f00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7f6f0a079700 time 2021-05-27T08:19:30.129967+0000
 stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
 stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14e) [0x5650082ff179]
 stderr: 2: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
 stderr: 3: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
 stderr: 4: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
 stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
 stderr: 6: clone()
 stderr: *** Caught signal (Aborted) **
 stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
 stderr: 2021-05-27T08:19:30.136+0000 7f6f0a079700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7f6f0a079700 time 2021-05-27T08:19:30.129967+0000
 stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
 stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14e) [0x5650082ff179]
 stderr: 2: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
 stderr: 3: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
 stderr: 4: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
 stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
 stderr: 6: clone()
 stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
 stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
 stderr: 2: gsignal()
 stderr: 3: abort()
 stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
 stderr: 5: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
 stderr: 6: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
 stderr: 7: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
 stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
 stderr: 9: clone()
 stderr: 2021-05-27T08:19:30.140+0000 7f6f0a079700 -1 *** Caught signal (Aborted) **
 stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
 stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
 stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
 stderr: 2: gsignal()
 stderr: 3: abort()
 stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
 stderr: 5: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
 stderr: 6: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
 stderr: 7: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
 stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
 stderr: 9: clone()
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: -2827> 2021-05-27T08:19:29.280+0000 7f6f2beb3f00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
 stderr: -1> 2021-05-27T08:19:30.136+0000 7f6f0a079700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7f6f0a079700 time 2021-05-27T08:19:30.129967+0000
 stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
 stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14e) [0x5650082ff179]
 stderr: 2: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
 stderr: 3: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
 stderr: 4: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
 stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
 stderr: 6: clone()
 stderr: 0> 2021-05-27T08:19:30.140+0000 7f6f0a079700 -1 *** Caught signal (Aborted) **
 stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
 stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
 stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
 stderr: 2: gsignal()
 stderr: 3: abort()
 stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
 stderr: 5: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
 stderr: 6: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
 stderr: 7: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
 stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
 stderr: 9: clone()
 stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
 stderr: *** Caught signal (Segmentation fault) **
 stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
 stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
 stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
 stderr: 2: pthread_getname_np()
 stderr: 3: (ceph::logging::Log::dump_recent()+0x510) [0x565008cc5920]
 stderr: 4: /usr/bin/ceph-osd(+0x127bc40) [0x565008ab7c40]
 stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
 stderr: 6: gsignal()
 stderr: 7: abort()
 stderr: 8: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
 stderr: 9: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
 stderr: 10: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
 stderr: 11: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
 stderr: 12: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
 stderr: 13: clone()
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
 stderr: purged osd.0
--> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 88631385-1de2-4168-9c4e-8b3b3e292355 --setuser ceph --setgroup ceph