yp dai
- Login: ypdai
- Registered on: 11/01/2018
- Last sign in: 09/21/2020
Issues
| | Open | Closed | Total |
|---|---|---|---|
| Assigned issues | 0 | 0 | 0 |
| Reported issues | 2 | 0 | 2 |
Activity
09/25/2020
- 01:09 AM rgw Bug #47529: In multi-site and versioned scenarios, operations with null versionId cannot be deleted synchronously
- I tried to submit a PR for this problem:
https://github.com/ceph/ceph/pull/37276
09/22/2020
- 08:44 AM rgw Bug #47554: librgw: load compressor failed.
  - [root@ceph01 ceph]# rpm -qf /usr/lib64/ceph/compressor/libceph_zlib.so
    ceph-base-14.2.9-0.el7.x86_64
- 08:41 AM rgw Bug #47554: librgw: load compressor failed.
- Hi, with the same Ceph version (v14.2.9) I did not find this problem.
Please check that there is no error in the cluster ...
09/18/2020
- 08:19 AM rgw Bug #47529: In multi-site and versioned scenarios, operations with null versionId cannot be deleted synchronously
- I have tested this: deletes where the versionId is not 'null' are synchronized to the secondary zone.
- 08:07 AM rgw Bug #47529: In multi-site and versioned scenarios, operations with null versionId cannot be deleted synchronously
- lei cao wrote:
> could you provide rgw log in secondary zone, there must be error log where rgw try delete the null...
- 06:48 AM rgw Bug #47529 (Fix Under Review): In multi-site and versioned scenarios, operations with null versionId cannot be deleted synchronously
- 1. First, complete the multisite configuration: rgw1 in the master zone, rgw2 in the secondary zone.
2. Create a bucket named b...
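The reproduction steps above can be sketched with the AWS S3 CLI pointed at the RGW endpoints. This is only a sketch under assumptions: the endpoint URLs (http://rgw1:8000, http://rgw2:8000), the profile names (rgw1, rgw2), and the object key (obj) are hypothetical, not taken from the bug report.

```shell
#!/bin/sh
# Sketch of the #47529 reproduction; endpoints, profiles, and key
# names are hypothetical. Requires credentials configured for each
# RGW zone under the given AWS CLI profiles.

# On the master zone: create the bucket, upload an object while the
# bucket is still unversioned, then enable versioning. An object
# written before versioning is enabled carries the versionId "null".
aws --profile rgw1 --endpoint-url http://rgw1:8000 \
    s3api create-bucket --bucket b
aws --profile rgw1 --endpoint-url http://rgw1:8000 \
    s3api put-object --bucket b --key obj --body /etc/hostname
aws --profile rgw1 --endpoint-url http://rgw1:8000 \
    s3api put-bucket-versioning --bucket b \
    --versioning-configuration Status=Enabled

# Delete the null-versionId object on the master zone.
aws --profile rgw1 --endpoint-url http://rgw1:8000 \
    s3api delete-object --bucket b --key obj --version-id null

# On the secondary zone: check whether the delete was replicated.
# Per the bug, the null-versionId version is still listed here.
aws --profile rgw2 --endpoint-url http://rgw2:8000 \
    s3api list-object-versions --bucket b --prefix obj
```

The key detail is `--version-id null`: in S3 semantics, "null" is a literal version ID assigned to objects written while versioning is off, which is the case the bug says fails to sync.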
09/03/2020
- 02:51 AM Ceph Revision 0bfca9fc (ceph): doc: Update multisite-sync-policy.rst
- Signed-off-by: ypdai <self19900924@gmail.com>
08/27/2019
- 01:22 PM Ceph Revision 3b8fe9aa (ceph): doc: modify the wrong word "defails" to "details".
- Signed-off-by: ypdai <self19900924@gmail.com>
(cherry picked from commit 8cefe3de7835ce136826faf595122cc210bf90af)
07/11/2019
- 09:42 AM Ceph Revision 8cefe3de (ceph): doc: modify the wrong word "defails" to "details".
- Signed-off-by: ypdai <self19900924@gmail.com>
11/01/2018
- 06:14 AM RADOS Bug #36667 (New): OSD object_map sync returned error
- I deployed CephFS and used the vdbench tool to write data to the CephFS mount point; after a while, an OSD went down.
...