Profile

相洋 于

  • Registered on: 09/13/2016
  • Last connection: 10/18/2019

Activity

10/18/2019

06:04 AM RADOS Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
https://tracker.ceph.com/issues/22570
@Greg, my problem is related to this tracker.
The problem can be resolved and...

10/10/2019

03:01 AM RADOS Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
Greg Farnum wrote:
> Okay, so the issue here is that osd.1 managed to reconnect to osd.5 and osd.9 without triggerin...
01:10 AM RADOS Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
Our cluster is running on Luminous 12.2.12.
I do not think PR https://github.com/ceph/ceph/pull/25343 can solve...

10/09/2019

07:13 AM RADOS Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
See PR https://github.com/ceph/ceph/pull/25343, which also avoids triggering RESETSESSION.
06:52 AM RADOS Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
@Greg
Assume pg 1.1a maps to osds [1,5,9], and osd.1 is the primary OSD.
Time 1: osd.1, osd.5, and osd.9 were online and could...

09/26/2019

08:53 AM RADOS Bug #42058 (Duplicate): OSD reconnected across map epochs, inconsistent pg logs created
Take the lossless cluster connection between osd.2 and osd.47 as an example.
When osd.47 is restarted and at the same...

08/12/2019

06:59 AM Messengers Bug #41195: [msg/simple] in_seq_ack is not reset to zero when pipe session is reset, as a result,...
PR:
https://github.com/ceph/ceph/pull/29592
06:45 AM Messengers Bug #41195 (Pending Backport): [msg/simple] in_seq_ack is not reset to zero when pipe session is ...
Ceph version: 10.2.10
In our production environment, we hit monitor memory leaks a few times.
We opened the jemal...
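
The bug title amounts to: the received-sequence bookkeeping must be cleared whenever a pipe session is reset. A minimal sketch of that idea, using simplified stand-in types rather than the actual SimpleMessenger Pipe code (field and type names here are illustrative):

    #include <cstdint>

    // Simplified stand-in for the per-connection state a messenger pipe keeps.
    struct PipeSession {
      uint64_t in_seq = 0;        // highest message sequence received from the peer
      uint64_t in_seq_acked = 0;  // highest sequence we have acknowledged back

      // On session reset both counters must start over; if the acked value
      // survives the reset (the bug described above), the two sides disagree
      // about which messages still need to be acknowledged or replayed.
      void reset_session() {
        in_seq = 0;
        in_seq_acked = 0;
      }
    };
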
05:57 AM rgw Bug #39950: [rgw] prepend object name with bucket marker not bucket id
It has been resolved in Luminous. Please close the ticket, thanks.

07/24/2019

01:46 AM Ceph Revision 91f6f8fd (ceph): os/bluestore: it's better to erase spanning blob only once
When several extents use the same spanning blob, we reclaim the
blob if all pextents are to be released, but we erase ...
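
The commit message describes a de-duplication pattern: when several extents reference the same spanning blob, remember the blob once and erase it after the loop rather than erasing it per extent. A rough sketch of that pattern with made-up types (not the actual BlueStore Extent/Blob structures):

    #include <cstdint>
    #include <map>
    #include <set>
    #include <vector>

    // Made-up stand-ins; the real BlueStore types are far more involved.
    struct Blob { bool all_pextents_released = false; };
    struct Extent { uint32_t blob_id; };

    void release_extents(const std::vector<Extent>& extents,
                         std::map<uint32_t, Blob>& spanning_blobs) {
      std::set<uint32_t> blobs_to_erase;  // de-duplicates blobs shared by extents
      for (const auto& e : extents) {
        auto it = spanning_blobs.find(e.blob_id);
        if (it != spanning_blobs.end() && it->second.all_pextents_released)
          blobs_to_erase.insert(e.blob_id);  // remember it, do not erase yet
      }
      // Erase each spanning blob exactly once, even if several extents used it.
      for (uint32_t id : blobs_to_erase)
        spanning_blobs.erase(id);
    }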
