https://tracker.ceph.com/
2021-01-15T22:44:37Z
Ceph
CephFS - Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
https://tracker.ceph.com/issues/48680?journal_id=182750
2021-01-15T22:44:37Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Target version</strong> changed from <i>v16.0.0</i> to <i>v17.0.0</i></li><li><strong>Backport</strong> set to <i>pacific,octopus,nautilus</i></li></ul>
CephFS - Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
https://tracker.ceph.com/issues/48680?journal_id=190214
2021-04-12T14:45:09Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Related to</strong> <i><a class="issue tracker-1 status-2 priority-4 priority-default" href="/issues/48773">Bug #48773</a>: qa: scrub does not complete</i> added</li></ul>
CephFS - Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
https://tracker.ceph.com/issues/48680?journal_id=213766
2022-03-31T07:40:50Z
Milind Changire
<p>Scrape logs point to a crash in an OSD:<br /><pre>
2020-12-18T15:32:48.646 INFO:scrape:Crash: Command failed on smithi193 with status 1: 'sudo rm -rf -- /home/ubuntu/cephtest/mnt.0/client.0/tmp'
2020-12-18T15:32:48.646 INFO:scrape:ceph version 16.0.0-8467-gd02b6b21 (d02b6b2187aba6b98f4df50520d865a75a745267) pacific (dev)
1: /lib64/libpthread.so.0(+0x12b20) [0x7fa2fc29cb20]
2: gsignal()
3: abort()
4: (ceph::__ceph_abort(char const*, int, char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x1b6) [0x560c391a61ad]
5: (KernelDevice::_aio_thread()+0x1254) [0x560c39ccf464]
6: (KernelDevice::AioCompletionThread::entry()+0x11) [0x560c39cda761]
7: /lib64/libpthread.so.0(+0x814a) [0x7fa2fc29214a]
8: clone()
2020-12-18T15:32:48.647 INFO:scrape:1 jobs: ['5715913']
2020-12-18T15:32:48.647 INFO:scrape:suites: ['clusters/1a2s-mds-1c-client-3node', 'conf/{client', 'distro/{rhel_8}', 'fs/workload/{begin', 'mds', 'mon', 'mount/kclient/{mount', 'ms-die-on-skipped}}', 'objectstore-ec/bluestore-ec-root', 'omap_limit/10000', 'osd-asserts', 'osd}', 'overrides/{distro/stock/{k-stock', 'overrides/{frag_enable', 'rhel_8}', 'scrub/yes', 'session_timeout', 'tasks/workunit/suites/iogen}', 'whitelist_health', 'whitelist_wrongly_marked_down}']
</pre></p>
CephFS - Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
https://tracker.ceph.com/issues/48680?journal_id=214340
2022-04-11T05:15:56Z
Venky Shankar
vshankar@redhat.com
<ul><li><strong>Backport</strong> changed from <i>pacific,octopus,nautilus</i> to <i>quincy, pacific</i></li></ul><p>Possibly related (but with no backtrace in the OSDs): <a class="external" href="https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testing-55110-20220408-203242-testing-default-smithi/6783864/">https://pulpito.ceph.com/vshankar-2022-04-09_12:55:41-fs-wip-vshankar-testing-55110-20220408-203242-testing-default-smithi/6783864/</a></p>
CephFS - Bug #48680: mds: scrubbing stuck "scrub active (0 inodes in the stack)"
https://tracker.ceph.com/issues/48680?journal_id=220135
2022-07-12T13:05:19Z
Patrick Donnelly
pdonnell@redhat.com
<ul><li><strong>Target version</strong> deleted (<del><i>v17.0.0</i></del>)</li></ul>