Ceph : Issues
https://tracker.ceph.com/
2024-03-14T13:32:10Z
Ceph
Redmine
bluestore - Backport #64928 (In Progress): reef: KeyValueDB/KVTest.RocksDB_estimate_size tests fa...
https://tracker.ceph.com/issues/64928
2024-03-14T13:32:10Z
Backport Bot
ceph-volume - Bug #64260 (Pending Backport): ceph-volume lvm migrate could assert on AttributeErr...
https://tracker.ceph.com/issues/64260
2024-01-31T03:29:03Z
Igor Fedotov
igor.fedotov@croit.io
<p>The full backtrace is:</p>
<pre>
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/migrate.py", line 542, in main
    self.migrate_osd()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/migrate.py", line 448, in migrate_osd
    target_lv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/migrate.py", line 384, in migrate_to_existing
    tag_tracker.remove_lvs(source_devices, target_type)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/migrate.py", line 178, in remove_lvs
    remaining_devices.remove(device)
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/device.py", line 170, in __eq__
    return self.path == other.path
AttributeError: 'NoneType' object has no attribute 'path'
</pre>
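The root cause is in `Device.__eq__`, which unconditionally reads `other.path`; when the other operand is `None` (a device that failed to resolve), the comparison itself raises. A minimal sketch of a defensive comparison (hypothetical stand-in class, not the actual `ceph_volume.util.device.Device`):

```python
class Device:
    """Minimal stand-in for ceph_volume.util.device.Device (illustration only)."""
    def __init__(self, path):
        self.path = path

    def __eq__(self, other):
        # Guard against None and unrelated types instead of unconditionally
        # reading other.path (the AttributeError in the backtrace above).
        if not isinstance(other, Device):
            return NotImplemented
        return self.path == other.path

a = Device('/dev/sda')
assert (a == None) is False          # NotImplemented falls back to identity
assert a == Device('/dev/sda')
assert a != Device('/dev/sdb')
```

Returning `NotImplemented` lets Python fall back to identity comparison for foreign types, so `remaining_devices.remove(device)` degrades to a clean "not found" instead of crashing.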
bluestore - Backport #64071 (In Progress): reef: src/common/PriorityCache.cc: FAILED ceph_assert(...
https://tracker.ceph.com/issues/64071
2024-01-18T02:05:09Z
Backport Bot
bluestore - Backport #63813 (In Progress): quincy: ObjectStore/StoreTestSpecificAUSize.SpilloverF...
https://tracker.ceph.com/issues/63813
2023-12-13T15:51:56Z
Backport Bot
bluestore - Backport #63812 (In Progress): reef: ObjectStore/StoreTestSpecificAUSize.SpilloverFix...
https://tracker.ceph.com/issues/63812
2023-12-13T15:51:49Z
Backport Bot
RADOS - Backport #63400 (In Progress): reef: pybind: ioctx.get_omap_keys asserts if start_after p...
https://tracker.ceph.com/issues/63400
2023-11-02T12:53:34Z
Backport Bot
RADOS - Backport #63399 (In Progress): quincy: pybind: ioctx.get_omap_keys asserts if start_after...
https://tracker.ceph.com/issues/63399
2023-11-02T12:53:26Z
Backport Bot
bluestore - Backport #62781 (New): reef: btree allocator doesn't pass alloctor's UTs
https://tracker.ceph.com/issues/62781
2023-09-09T03:04:40Z
Backport Bot
bluestore - Backport #62780 (New): quincy: btree allocator doesn't pass alloctor's UTs
https://tracker.ceph.com/issues/62780
2023-09-09T03:04:33Z
Backport Bot
bluestore - Bug #62401 (Pending Backport): ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest...
https://tracker.ceph.com/issues/62401
2023-08-10T20:49:05Z
Laura Flores
<p>/a/yuriw-2023-07-25_21:35:32-rados-wip-yuri8-testing-2023-07-24-0819-quincy-distro-default-smithi/7352449</p>
<pre>
2023-07-26T21:14:11.542 INFO:teuthology.orchestra.run.smithi006.stdout:[ RUN ] ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2
2023-07-26T21:14:15.964 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 0
2023-07-26T21:14:15.966 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 10
2023-07-26T21:14:16.037 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 20
2023-07-26T21:14:16.038 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 30
2023-07-26T21:14:16.040 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 40
2023-07-26T21:14:16.041 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 50
2023-07-26T21:14:16.042 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 60
2023-07-26T21:14:16.043 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 70
2023-07-26T21:14:16.044 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 80
2023-07-26T21:14:16.045 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 90
2023-07-26T21:14:16.046 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 100
2023-07-26T21:14:16.085 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 110
2023-07-26T21:14:16.086 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 120
2023-07-26T21:14:16.088 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 130
2023-07-26T21:14:16.089 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 140
2023-07-26T21:14:16.090 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 150
2023-07-26T21:14:16.092 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 160
2023-07-26T21:14:16.093 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 170
2023-07-26T21:14:16.094 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 180
2023-07-26T21:14:16.095 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 190
2023-07-26T21:14:16.097 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 200
2023-07-26T21:14:16.145 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 210
2023-07-26T21:14:16.146 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 220
2023-07-26T21:14:16.148 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 230
2023-07-26T21:14:16.149 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 240
2023-07-26T21:14:16.151 INFO:teuthology.orchestra.run.smithi006.stderr:seeding object 250
2023-07-26T21:14:16.152 INFO:teuthology.orchestra.run.smithi006.stderr:Op 0
2023-07-26T21:14:16.152 INFO:teuthology.orchestra.run.smithi006.stderr:available_objects: 252 in_flight_objects: 4 total objects: 256 in_flight 4
2023-07-26T21:14:23.941 INFO:teuthology.orchestra.run.smithi006.stderr:Op 100
2023-07-26T21:14:23.941 INFO:teuthology.orchestra.run.smithi006.stderr:available_objects: 241 in_flight_objects: 15 total objects: 256 in_flight 15
2023-07-26T21:14:31.945 INFO:teuthology.orchestra.run.smithi006.stderr:Op 200
2023-07-26T21:14:31.945 INFO:teuthology.orchestra.run.smithi006.stderr:available_objects: 251 in_flight_objects: 5 total objects: 256 in_flight 5
2023-07-26T21:14:36.664 INFO:teuthology.orchestra.run.smithi006.stdout:done
2023-07-26T21:14:54.253 INFO:teuthology.orchestra.run.smithi006.stdout:/build/ceph-17.2.6-823-gf97a6ef3/src/test/objectstore/store_test.cc:10447: Failure
2023-07-26T21:14:54.254 INFO:teuthology.orchestra.run.smithi006.stdout:Expected equality of these values:
2023-07-26T21:14:54.254 INFO:teuthology.orchestra.run.smithi006.stdout: 0
2023-07-26T21:14:54.254 INFO:teuthology.orchestra.run.smithi006.stdout: logger->get(l_bluefs_slow_used_bytes)
2023-07-26T21:14:54.254 INFO:teuthology.orchestra.run.smithi006.stdout: Which is: 221577216
2023-07-26T21:14:54.254 INFO:teuthology.orchestra.run.smithi006.stdout:1 : device size 0xbfffe000 : using 0x3a900000(937 MiB)
2023-07-26T21:14:54.254 INFO:teuthology.orchestra.run.smithi006.stdout:2 : device size 0x2625a0000 : using 0xd360000(211 MiB)
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:RocksDBBlueFSVolumeSelector Usage Matrix:
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:DEV/LEV WAL DB SLOW * * REAL FILES
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:LOG 0 B 4 MiB 0 B 0 B 0 B 992 KiB 1
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:WAL 0 B 0 B 0 B 0 B 0 B 0 B 1
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:DB 0 B 933 MiB 0 B 0 B 0 B 834 MiB 21
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:SLOW 0 B 0 B 211 MiB 0 B 0 B 208 MiB 3
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:TOTAL 0 B 937 MiB 211 MiB 0 B 0 B 0 B 26
2023-07-26T21:14:54.255 INFO:teuthology.orchestra.run.smithi006.stdout:MAXIMUMS:
2023-07-26T21:14:54.256 INFO:teuthology.orchestra.run.smithi006.stdout:LOG 0 B 4 MiB 0 B 0 B 0 B 992 KiB
2023-07-26T21:14:54.256 INFO:teuthology.orchestra.run.smithi006.stdout:WAL 0 B 2.2 GiB 282 MiB 0 B 0 B 1.0 GiB
2023-07-26T21:14:54.256 INFO:teuthology.orchestra.run.smithi006.stdout:DB 0 B 1.3 GiB 0 B 0 B 0 B 1.3 GiB
2023-07-26T21:14:54.256 INFO:teuthology.orchestra.run.smithi006.stdout:SLOW 0 B 0 B 211 MiB 0 B 0 B 208 MiB
2023-07-26T21:14:54.256 INFO:teuthology.orchestra.run.smithi006.stdout:TOTAL 0 B 2.9 GiB 493 MiB 0 B 0 B 0 B
2023-07-26T21:14:54.256 INFO:teuthology.orchestra.run.smithi006.stdout:>> SIZE << 0 B 2.8 GiB 9.1 GiB
2023-07-26T21:14:54.256 INFO:teuthology.orchestra.run.smithi006.stdout:
2023-07-26T21:14:58.258 INFO:teuthology.orchestra.run.smithi006.stdout:==> rm -r bluestore.test_temp_dir
2023-07-26T21:14:58.521 INFO:teuthology.orchestra.run.smithi006.stdout:[ FAILED ] ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2, where GetParam() = "bluestore" (46980 ms)
</pre>
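The test expects the `l_bluefs_slow_used_bytes` perf counter to read zero after the workload, i.e. BlueFS kept all RocksDB data on the fast DB device; here it reads 221577216 bytes (~211 MiB, matching the SLOW row of the usage matrix). A rough sketch of the invariant being checked (hypothetical counter snapshot; this is not the BlueFS API):

```python
# Hypothetical perf-counter snapshot mirroring the failing run above
# (names and values taken from the log; illustration only).
counters = {
    "l_bluefs_db_used_bytes": 933 * 1024 * 1024,   # DB row of the usage matrix
    "l_bluefs_slow_used_bytes": 221577216,         # ~211 MiB spilled to SLOW
}

def spillover_free(logger):
    # SpilloverFixedTest's expectation: no BlueFS data lands on the slow
    # (main) device, so the slow-device usage counter must be zero.
    return logger["l_bluefs_slow_used_bytes"] == 0

assert not spillover_free(counters)  # the EXPECT_EQ(0, ...) failure in the log
```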
bluestore - Bug #61949 (Pending Backport): btree allocator doesn't pass alloctor's UTs
https://tracker.ceph.com/issues/61949
2023-07-10T21:20:44Z
Igor Fedotov
igor.fedotov@croit.io
<pre>
[ RUN ] Allocator/AllocTest.test_alloc_non_aligned_len/3
Creating alloc type btree
/home/if/ceph.3/src/os/bluestore/BtreeAllocator.cc: In function 'void BtreeAllocator::_remove_from_tree(uint64_t, uint64_t)' thread 7f805aee6100 time 2023-07-11T00:20:11.256261+0300
/home/if/ceph.3/src/os/bluestore/BtreeAllocator.cc: 172: FAILED ceph_assert(rs != range_tree.end())
 ceph version Development (no_version) reef (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x152) [0x7f805c8ebf97]
 2: /home/if/ceph.3/build/lib/libceph-common.so.2(+0x29e1a4) [0x7f805c8ec1a4]
 3: (BtreeAllocator::_remove_from_tree(unsigned long, unsigned long)+0x106) [0x5559935ee196]
 4: (BtreeAllocator::_allocate(unsigned long, unsigned long, unsigned long*, unsigned long*)+0x2fe) [0x5559935f085e]
 5: (BtreeAllocator::_allocate(unsigned long, unsigned long, unsigned long, long, std::vector<bluestore_pextent_t, mempool::pool_allocator<(mempool::pool_index_t)5, bluestore_pextent_t> >*)+0xa2) [0x5559935f0d72]
 6: (BtreeAllocator::allocate(unsigned long, unsigned long, unsigned long, long, std::vector<bluestore_pextent_t, mempool::pool_allocator<(mempool::pool_index_t)5, bluestore_pextent_t> >*)+0xa7) [0x5559935f10a7]
 7: (AllocTest_test_alloc_non_aligned_len_Test::TestBody()+0xb8) [0x55599357eb08]
 8: (void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*)+0x4d) [0x5559935cb1bd]
 9: (testing::Test::Run()+0xce) [0x5559935be8be]
 10: (testing::TestInfo::Run()+0x135) [0x5559935bea15]
 11: (testing::TestSuite::Run()+0xc9) [0x5559935beb09]
 12: (testing::internal::UnitTestImpl::RunAllTests()+0x49c) [0x5559935bf08c]
 13: (testing::UnitTest::Run()+0x82) [0x5559935bf342]
 14: main()
 15: /lib64/libc.so.6(+0x275b0) [0x7f805bbb95b0]
 16: __libc_start_main()
 17: _start()
*** Caught signal (Aborted) **
</pre>
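The assert at BtreeAllocator.cc:172 fires when `_remove_from_tree` cannot find a free range covering the span it was just asked to carve out, i.e. the tree lookup returns `end()`. A toy Python model of that invariant (illustrative only; the real allocator is an intrusive B-tree in C++):

```python
class RangeTree:
    """Toy free-extent map (offset -> length); a sketch, not BtreeAllocator."""
    def __init__(self):
        self.ranges = {}

    def add(self, offset, length):
        self.ranges[offset] = length

    def remove(self, offset, length):
        # Mirror of the failing invariant: the span being allocated must lie
        # inside some tracked free range, otherwise the lookup hits "end()".
        for start, ln in list(self.ranges.items()):
            if start <= offset and offset + length <= start + ln:
                del self.ranges[start]
                if start < offset:                    # keep the left remainder
                    self.ranges[start] = offset - start
                if offset + length < start + ln:      # keep the right remainder
                    self.ranges[offset + length] = (start + ln) - (offset + length)
                return
        raise AssertionError("FAILED ceph_assert(rs != range_tree.end())")

t = RangeTree()
t.add(0, 4096)
t.remove(0, 1536)            # non-aligned length, still inside the free range
assert t.ranges == {1536: 2560}
```

Removing a span the tree does not track raises, which is the toy analogue of the `ceph_assert` above; the bug report suggests non-aligned lengths can put the C++ allocator in exactly that state.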
RADOS - Bug #58304 (Pending Backport): pybind: ioctx.get_omap_keys asserts if start_after paramet...
https://tracker.ceph.com/issues/58304
2022-12-16T16:29:11Z
Igor Fedotov
igor.fedotov@croit.io
bluestore - Bug #54555 (Fix Under Review): ceph-bluestore-tool doesn't handle 'free-fragmentation...
https://tracker.ceph.com/issues/54555
2022-03-14T18:16:43Z
Igor Fedotov
igor.fedotov@croit.io
bluestore - Bug #50992 (Pending Backport): src/common/PriorityCache.cc: FAILED ceph_assert(mem_av...
https://tracker.ceph.com/issues/50992
2021-05-27T08:29:33Z
Jiang Yu
lnsyyj@hotmail.com
<p>When I use ceph-volume to add a new OSD, the following error appears:</p>
<pre>
root@yujiang-performance-dev:~/github/ceph/build# ceph-volume lvm create --bluestore --data /dev/nvme0n1
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 88631385-1de2-4168-9c4e-8b3b3e292355
Running command: /sbin/vgcreate --force --yes ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f /dev/nvme0n1
stdout: Physical volume "/dev/nvme0n1" successfully created.
stdout: Volume group "ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f" successfully created
Running command: /sbin/lvcreate --yes -l 476932 -n osd-block-88631385-1de2-4168-9c4e-8b3b3e292355 ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f
stdout: Logical volume "osd-block-88631385-1de2-4168-9c4e-8b3b3e292355" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
--> Executable selinuxenabled not in PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
Running command: /bin/chown -h ceph:ceph /dev/ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f/osd-block-88631385-1de2-4168-9c4e-8b3b3e292355
Running command: /bin/chown -R ceph:ceph /dev/dm-0
Running command: /bin/ln -s /dev/ceph-f32a7f97-3199-4cce-80f1-31923e35ca1f/osd-block-88631385-1de2-4168-9c4e-8b3b3e292355 /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQAPVq9gdL/3MxAALNvWGRgndR/7H0i/cc6b+g==
stdout: creating /var/lib/ceph/osd/ceph-0/keyring
added entity osd.0 auth(key=AQAPVq9gdL/3MxAALNvWGRgndR/7H0i/cc6b+g==)
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 88631385-1de2-4168-9c4e-8b3b3e292355 --setuser ceph --setgroup ceph
stderr: 2021-05-27T08:19:29.280+0000 7f6f2beb3f00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7f6f0a079700 time 2021-05-27T08:19:30.129967+0000
stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14e) [0x5650082ff179]
stderr: 2: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
stderr: 3: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
stderr: 4: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
stderr: 6: clone()
stderr: *** Caught signal (Aborted) **
stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
stderr: 2021-05-27T08:19:30.136+0000 7f6f0a079700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7f6f0a079700 time 2021-05-27T08:19:30.129967+0000
stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14e) [0x5650082ff179]
stderr: 2: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
stderr: 3: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
stderr: 4: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
stderr: 6: clone()
stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
stderr: 2: gsignal()
stderr: 3: abort()
stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
stderr: 5: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
stderr: 6: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
stderr: 7: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
stderr: 9: clone()
stderr: 2021-05-27T08:19:30.140+0000 7f6f0a079700 -1 *** Caught signal (Aborted) **
stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
stderr: 2: gsignal()
stderr: 3: abort()
stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
stderr: 5: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
stderr: 6: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
stderr: 7: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
stderr: 9: clone()
stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
stderr: -2827> 2021-05-27T08:19:29.280+0000 7f6f2beb3f00 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
stderr: -1> 2021-05-27T08:19:30.136+0000 7f6f0a079700 -1 /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: In function 'void PriorityCache::Manager::balance()' thread 7f6f0a079700 time 2021-05-27T08:19:30.129967+0000
stderr: /tmp/release/Ubuntu/WORKDIR/ceph-16.2.4-1-g574376ae42/src/common/PriorityCache.cc: 301: FAILED ceph_assert(mem_avail >= 0)
stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x14e) [0x5650082ff179]
stderr: 2: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
stderr: 3: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
stderr: 4: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
stderr: 6: clone()
stderr: 0> 2021-05-27T08:19:30.140+0000 7f6f0a079700 -1 *** Caught signal (Aborted) **
stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
stderr: 2: gsignal()
stderr: 3: abort()
stderr: 4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
stderr: 5: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
stderr: 6: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
stderr: 7: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
stderr: 8: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
stderr: 9: clone()
stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
stderr: *** Caught signal (Segmentation fault) **
stderr: in thread 7f6f0a079700 thread_name:bstore_mempool
stderr: ceph version 16.2.4-1-g574376ae42 (574376ae42a35e132c77048bed3f4e36fc7cf68c) pacific (stable)
stderr: 1: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
stderr: 2: pthread_getname_np()
stderr: 3: (ceph::logging::Log::dump_recent()+0x510) [0x565008cc5920]
stderr: 4: /usr/bin/ceph-osd(+0x127bc40) [0x565008ab7c40]
stderr: 5: /lib/x86_64-linux-gnu/libpthread.so.0(+0x12980) [0x7f6f2a01d980]
stderr: 6: gsignal()
stderr: 7: abort()
stderr: 8: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x5650082ff1d4]
stderr: 9: /usr/bin/ceph-osd(+0xac3363) [0x5650082ff363]
stderr: 10: (PriorityCache::Manager::balance()+0x444) [0x565008f43164]
stderr: 11: (BlueStore::MempoolThread::entry()+0x831) [0x565008992df1]
stderr: 12: /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db) [0x7f6f2a0126db]
stderr: 13: clone()
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
stderr: purged osd.0
--> RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 88631385-1de2-4168-9c4e-8b3b3e292355 --setuser ceph --setgroup ceph
</pre>
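`PriorityCache::Manager::balance()` asserts that available memory never goes negative: the bytes already committed to the caches must not exceed the OSD's memory target. A simplified model of the invariant (hypothetical names and an even split; the real code weights allocations by cache priority):

```python
def balance(target_bytes, committed):
    """Distribute leftover memory across caches, mirroring the failing assert."""
    mem_avail = target_bytes - sum(committed.values())
    # PriorityCache.cc:301 analogue: committed totals must not exceed the
    # target, otherwise the memory accounting is broken.
    assert mem_avail >= 0, "FAILED ceph_assert(mem_avail >= 0)"
    # Split what's left evenly (the real manager rounds by priority levels).
    share = mem_avail // max(len(committed), 1)
    return {name: used + share for name, used in committed.items()}

out = balance(1024, {"kv": 256, "meta": 256})
assert sum(out.values()) <= 1024
```

In the log above the assert trips during `--mkfs`, suggesting the caches were handed more memory than the computed target before the first balance pass.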
bluestore - Bug #20236 (New): bluestore: ObjectStore/StoreTestSpecificAUSize.Many4KWritesNoCSumTe...
https://tracker.ceph.com/issues/20236
2017-06-09T15:36:29Z
Sage Weil
sage@newdream.net
<pre>
2017-06-09T03:37:28.314 INFO:teuthology.orchestra.run.smithi068.stderr:seeding object 0
2017-06-09T03:37:28.314 INFO:teuthology.orchestra.run.smithi068.stderr:Op 0
2017-06-09T03:37:28.314 INFO:teuthology.orchestra.run.smithi068.stderr:available_objects: 0 in_flight_objects: 1 total objects: 1 in_flight 1
2017-06-09T03:37:41.290 INFO:teuthology.orchestra.run.smithi068.stderr:Op 200
2017-06-09T03:37:41.291 INFO:teuthology.orchestra.run.smithi068.stderr:available_objects: 0 in_flight_objects: 1 total objects: 1 in_flight 1
2017-06-09T03:37:55.988 INFO:teuthology.orchestra.run.smithi068.stderr:Op 400
2017-06-09T03:37:55.989 INFO:teuthology.orchestra.run.smithi068.stderr:available_objects: 0 in_flight_objects: 1 total objects: 1 in_flight 1
2017-06-09T03:38:12.470 INFO:teuthology.orchestra.run.smithi068.stderr:Op 600
2017-06-09T03:38:12.470 INFO:teuthology.orchestra.run.smithi068.stderr:available_objects: 0 in_flight_objects: 1 total objects: 1 in_flight 1
2017-06-09T03:38:31.486 INFO:teuthology.orchestra.run.smithi068.stderr:Op 800
2017-06-09T03:38:31.486 INFO:teuthology.orchestra.run.smithi068.stderr:available_objects: 0 in_flight_objects: 1 total objects: 1 in_flight 1
2017-06-09T03:38:52.763 INFO:teuthology.orchestra.run.smithi068.stdout:/build/ceph-12.0.2-2533-g080b00b/src/test/objectstore/store_test.cc:5729: Failure
2017-06-09T03:38:52.764 INFO:teuthology.orchestra.run.smithi068.stdout: Expected: res_stat.allocated
2017-06-09T03:38:52.764 INFO:teuthology.orchestra.run.smithi068.stdout: Which is: 4259840
2017-06-09T03:38:52.764 INFO:teuthology.orchestra.run.smithi068.stdout:To be equal to: max_object
2017-06-09T03:38:52.764 INFO:teuthology.orchestra.run.smithi068.stdout: Which is: 4194304
</pre>
<p>/a/yuriw-2017-06-08_19:40:51-rados-wip-yuri-testing2_2017_7_10-distro-basic-smithi/1271584</p>