Activity
From 04/01/2022 to 04/30/2022
04/30/2022
- 10:47 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- also in http://pulpito.front.sepia.ceph.com/yuriw-2022-04-30_04:14:21-upgrade:pacific-p2p-pacific-16.2.8_RC1-distro-d...
04/29/2022
- 10:27 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- https://github.com/ceph/ceph/pull/46092 merged
- 05:21 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- ... however upgrading pacific-without-pr-45963 OSDs to master does NOT trigger it.
Could this be related to the sw...
- 04:43 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Neha Ojha wrote:
> Can you check if the same test also fails in master, which includes https://github.com/ceph/ceph/...
- 04:35 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- > Half of the OSDs were upgraded
Just noting that the "half" part is irrelevant -- in my simpler run linked in htt...
- 04:18 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- In /a/yuriw-2022-04-27_14:24:25-upgrade:octopus-x-pacific-distro-default-smithi/6808913, we are using an octopus vers...
- 03:53 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Neha Ojha wrote:
> Ilya Dryomov wrote:
> > It is triggered only with upgraded OSDs.
>
> Ah, that makes sense.
...
- 03:51 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Ilya Dryomov wrote:
> It is triggered only with upgraded OSDs.
Ah, that makes sense.
- 03:48 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- It is triggered only with upgraded OSDs.
- 03:41 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Ilya Dryomov wrote:
> Yup, omap is definitely involved. LibRadosAio.OmapPP passes against fresh pacific OSDs and fa...
- 03:10 PM Bug #55444 (Fix Under Review): test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- 02:30 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Can you check if the same test also fails in master, which includes https://github.com/ceph/ceph/pull/45904? I'm goin...
- 07:06 AM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Ilya Dryomov wrote:
> This may sound crazy but the only explanation that I'm able to come up with for these failures...
- 06:45 AM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- All green:
https://pulpito.ceph.com/dis-2022-04-29_05:11:42-upgrade:octopus-x-wip-55444-pacific-distro-default-smi...
04/28/2022
- 10:06 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Ilya, can you run https://github.com/ceph/ceph-ci/commits/wip-55444-pacific through your reproducer?
- 10:02 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Yup, omap is definitely involved. LibRadosAio.OmapPP passes against fresh pacific OSDs and fails against upgraded OS...
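(For reference, a minimal librados sketch of the kind of omap round-trip that LibRadosAio.OmapPP exercises; the pool name "test-pool", object name "obj", and the synchronous operate() calls are illustrative assumptions, not the actual test code.)

    // Hedged sketch: write omap keys to an object and read them back.
    // Assumes a reachable cluster, default ceph.conf, and a pool named
    // "test-pool"; not the actual LibRadosAio.OmapPP test.
    #include <rados/librados.hpp>
    #include <cassert>
    #include <iostream>
    #include <map>
    #include <string>

    int main() {
      librados::Rados cluster;
      assert(cluster.init("admin") == 0);           // connect as client.admin
      assert(cluster.conf_read_file(nullptr) == 0); // read default ceph.conf
      assert(cluster.connect() == 0);

      librados::IoCtx ioctx;
      assert(cluster.ioctx_create("test-pool", ioctx) == 0);

      // Set two omap keys on an object.
      std::map<std::string, librados::bufferlist> in;
      in["key1"].append("val1");
      in["key2"].append("val2");
      librados::ObjectWriteOperation wop;
      wop.omap_set(in);
      assert(ioctx.operate("obj", &wop) == 0);

      // Read them back; a regression like the one discussed here shows
      // up as missing or wrong omap values against upgraded OSDs.
      std::map<std::string, librados::bufferlist> out;
      int rval = 0;
      librados::ObjectReadOperation rop;
      rop.omap_get_vals2("", 1024, &out, nullptr, &rval);
      assert(ioctx.operate("obj", &rop, nullptr) == 0);
      std::cout << "got " << out.size() << " omap keys" << std::endl;
      return 0;
    }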
- 09:42 PM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- This may sound crazy but the only explanation that I'm able to come up with for these failures is that the octopus ->...
- 12:42 AM Bug #55444: test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- This is for the 16.2.8 release
Seeing in https://pulpito.ceph.com/yuriw-2022-04-27_14:24:25-upgrade:octopus-x-pacific-...
- 02:22 PM Bug #55187 (Need More Info): ceph_abort_msg("bluefs enospc")
04/27/2022
- 08:47 PM Feature #55474: BLUESTORE_FRAGMENTATION health check doesn't work
- Tobias Urdin wrote:
> What do you even do in that scenario, redeploy the OSD?
No other choice than this right now...
- 08:46 PM Feature #55474: BLUESTORE_FRAGMENTATION health check doesn't work
- Igor Fedotov wrote:
> IMO this is a feature request.
Shouldn't we then remove it from the documentation since it ...
- 08:36 PM Feature #55474: BLUESTORE_FRAGMENTATION health check doesn't work
- What do you even do in that scenario, redeploy the OSD?
- 08:00 PM Feature #55474: BLUESTORE_FRAGMENTATION health check doesn't work
- IMO this is a feature request.
- 05:43 PM Feature #55474 (New): BLUESTORE_FRAGMENTATION health check doesn't work
- It seems like there is documentation around `BLUESTORE_FRAGMENTATION` in the docs:
https://docs.ceph.com/en/pacifi...
- 01:09 AM Documentation #55462 (New): clarify the meaning of the code in do_replay_recovery_read
- I spent some time trying to understand the function do_replay_recovery_read in BlueFS.cc. Most parts of the code make sense ...
04/26/2022
- 02:49 PM Backport #55442 (Resolved): pacific: rocksdb omap iterators become extremely slow in the presence...
- 02:48 PM Bug #55324: rocksdb omap iterators become extremely slow in the presence of large delete range to...
- https://github.com/ceph/ceph/pull/45963 merged
- 01:14 AM Bug #55444 (Pending Backport): test_cls_rbd.sh: multiple TestClsRbd failures during upgrade test
- Description: rados/upgrade/nautilus-x-singleton/{0-cluster/{openstack start} 1-install/nautilus 2-partial-upgrade/fir...
04/25/2022
- 11:06 PM Backport #55442 (In Progress): pacific: rocksdb omap iterators become extremely slow in the prese...
- 11:05 PM Backport #55442 (Resolved): pacific: rocksdb omap iterators become extremely slow in the presence...
- https://github.com/ceph/ceph/pull/46096
- 11:05 PM Backport #55441 (Resolved): quincy: rocksdb omap iterators become extremely slow in the presence ...
- https://github.com/ceph/ceph/pull/46175
- 11:00 PM Bug #55324 (Pending Backport): rocksdb omap iterators become extremely slow in the presence of la...
- 10:37 PM Bug #55324 (Fix Under Review): rocksdb omap iterators become extremely slow in the presence of la...
04/18/2022
- 11:07 PM Backport #55361 (Resolved): quincy: os/bluestore: Always update the cursor position in AVL near-f...
- 10:55 PM Backport #55361 (Resolved): quincy: os/bluestore: Always update the cursor position in AVL near-f...
- https://github.com/ceph/ceph/pull/45885
- 10:55 PM Backport #55360 (Resolved): octopus: os/bluestore: Always update the cursor position in AVL near-...
- 10:55 PM Backport #55359 (Resolved): pacific: os/bluestore: Always update the cursor position in AVL near-...
- 10:53 PM Bug #55358 (Resolved): os/bluestore: Always update the cursor position in AVL near-fit search
- To backport https://github.com/ceph/ceph/pull/45884.
Quincy backport https://github.com/ceph/ceph/pull/45885 has bee...
04/14/2022
- 02:28 AM Bug #55328: OSD crashed due to checksum error
- We ran the _dd_ command against the disk area. The result is as follows....
- 02:24 AM Bug #55328 (Closed): OSD crashed due to checksum error
- OSD.14 crashed and produced the following logs....
04/13/2022
- 09:51 PM Bug #55324 (Resolved): rocksdb omap iterators become extremely slow in the presence of large dele...
- The high-level problem is a severe performance degradation of RGW bucket listings. The underlying issue is RocksDB ra...
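(A rough RocksDB sketch of the pattern implicated by the title, with key names, counts, and the database path assumed for illustration; Ceph drives RocksDB through its own KeyValueDB layer, so this is not the actual code path.)

    // Hedged sketch: a large DeleteRange leaves a range tombstone, and
    // until compaction removes the covered keys, every iterator step
    // must crawl past them, which is the kind of slowdown described.
    #include <rocksdb/db.h>
    #include <cassert>
    #include <memory>
    #include <string>

    int main() {
      rocksdb::DB* raw = nullptr;
      rocksdb::Options opts;
      opts.create_if_missing = true;
      assert(rocksdb::DB::Open(opts, "/tmp/testdb", &raw).ok());
      std::unique_ptr<rocksdb::DB> db(raw);

      // Populate many keys, then range-delete almost all of them.
      for (int i = 0; i < 100000; i++) {
        assert(db->Put(rocksdb::WriteOptions(),
                       "key" + std::to_string(i), "value").ok());
      }
      assert(db->DeleteRange(rocksdb::WriteOptions(),
                             db->DefaultColumnFamily(),
                             "key0", "key99999").ok());

      // Iteration now crawls: each Next() may step over a long run of
      // range-deleted entries before reaching a live key.
      std::unique_ptr<rocksdb::Iterator> it(
          db->NewIterator(rocksdb::ReadOptions()));
      for (it->SeekToFirst(); it->Valid(); it->Next()) {
        // live keys only; the cost is in skipping the tombstoned span
      }
      return 0;
    }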
04/12/2022
- 10:59 PM Bug #55307 (Fix Under Review): bluefs fsync doesn't respect file truncate
- 10:51 PM Bug #55307 (Resolved): bluefs fsync doesn't respect file truncate
- A truncate + fsync sequence doesn't result in the file's metadata being updated in the BlueFS log.
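(A hedged POSIX analogue of the failing sequence; the actual bug is in BlueFS's internal metadata log, not in a kernel filesystem, so the file name and sizes here are illustrative only.)

    // Hedged sketch: shrink a file, then fsync, and expect the new size
    // to be durable. In the BlueFS bug the equivalent expectation was
    // violated: the truncated size was not recorded in the BlueFS log,
    // so replay saw stale metadata.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cassert>

    int main() {
      int fd = open("testfile", O_CREAT | O_RDWR, 0644);
      assert(fd >= 0);

      char buf[8192] = {};
      assert(write(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)); // 8 KiB file
      assert(ftruncate(fd, 4096) == 0); // shrink to 4 KiB
      assert(fsync(fd) == 0);           // 4096 must now survive a restart
      close(fd);
      return 0;
    }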
- 06:10 PM Backport #55302 (Rejected): octopus: Hybrid allocator might return duplicate extents when perform...
- 06:10 PM Backport #55301 (Resolved): quincy: Hybrid allocator might return duplicate extents when performi...
- 06:10 PM Backport #55300 (Resolved): pacific: Hybrid allocator might return duplicate extents when perform...
- 06:06 PM Bug #54973 (Pending Backport): Hybrid allocator might return duplicate extents when performing on...
04/05/2022
- 05:43 PM Bug #55187: ceph_abort_msg("bluefs enospc")
- Igor, this issue was seen in the gibba cluster, while upgrading to 4e244311a4a30b157b41694d9cd9c9a9ecef285f. Currentl...
- 05:35 PM Bug #55187: ceph_abort_msg("bluefs enospc")
- And this ticket (along with related ones) might be relevant:
https://tracker.ceph.com/issues/53466
- 05:30 PM Bug #55187: ceph_abort_msg("bluefs enospc")
- @Aishwarya could you please share the link to the relevant teuthology job where the issue occurred?
- 04:10 PM Bug #55187 (Need More Info): ceph_abort_msg("bluefs enospc")
- from osd crash info in gibba cluster: ...
- 04:20 AM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- > We will try it next week.
We tried it and attached the logs.
> But it still looks like there is some gap betwee...
04/04/2022
- 06:41 PM Bug #55145 (Resolved): Bogus assert in SimpleBitmap
- 06:39 PM Bug #55145 (Pending Backport): Bogus assert in SimpleBitmap
- 06:38 PM Bug #55145: Bogus assert in SimpleBitmap
- Neha Ojha wrote:
> Quincy backport: https://github.com/ceph/ceph/pull/45738 to expedite merge
merged
- 06:41 PM Backport #55180 (Resolved): quincy: Bogus assert in SimpleBitmap
- 06:40 PM Backport #55180 (Resolved): quincy: Bogus assert in SimpleBitmap
- https://github.com/ceph/ceph/pull/45738
04/01/2022
- 02:36 AM Bug #53184: failed to start new osd due to SIGSEGV in BlueStore::read()
- Thank you for your investigation. That hypothesis seems to be correct. I understand that more detailed logs are neede...