Activity
From 11/20/2018 to 12/19/2018
12/19/2018
- 12:17 AM Bug #37543 (Fix Under Review): mds: purge queue recovery hangs during boot if PQ journal is damaged
12/18/2018
- 11:11 AM Backport #37700 (Resolved): luminous: fuse client can't read file due to can't acquire Fr
- https://github.com/ceph/ceph/pull/25677
- 11:11 AM Backport #37699 (Resolved): mimic: fuse client can't read file due to can't acquire Fr
- https://github.com/ceph/ceph/pull/25676
- 11:10 AM Backport #37696 (Rejected): luminous: client: fix failure in quota size limitation when using samba
- 11:10 AM Backport #37695 (Resolved): mimic: client: fix failure in quota size limitation when using samba
- https://github.com/ceph/ceph/pull/25678
- 04:18 AM Bug #37547 (Pending Backport): client: fix failure in quota size limitation when using samba
- 04:17 AM Bug #37333 (Pending Backport): fuse client can't read file due to can't acquire Fr
- 04:10 AM Bug #37681 (Resolved): qa: power off still resulted in client sending session close
- ...
12/17/2018
- 10:34 PM Feature #37678 (Fix Under Review): mds: log new client sessions with various metadata
- 10:27 PM Feature #37678 (Resolved): mds: log new client sessions with various metadata
- Including time to create/journal the new session, any throttling on the new session message, mount point, and client ...
- 04:35 PM Cleanup #37674 (Fix Under Review): mds: create separate config for heartbeat timeout
- 04:30 PM Cleanup #37674 (Resolved): mds: create separate config for heartbeat timeout
- Currently the MDS uses the mds_beacon_grace for the heartbeat timeout. If we need to increase the beacon grace becaus...
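To make the cleanup concrete, a minimal sketch of the idea follows (the option name mds_heartbeat_grace and the types are illustrative, not necessarily what the actual fix uses): the internal heartbeat timeout reads its own option instead of reusing mds_beacon_grace, so the two can be tuned independently.

    // Illustrative sketch: decouple the MDS internal heartbeat timeout from
    // mds_beacon_grace so raising the beacon grace (e.g. for laggy monitors)
    // no longer loosens the heartbeat watchdog as a side effect.
    #include <chrono>

    struct MDSConfig {
      double mds_beacon_grace = 15.0;     // grace for beacons sent to the monitors
      double mds_heartbeat_grace = 15.0;  // assumed new option for the internal heartbeat
    };

    inline std::chrono::duration<double> heartbeat_timeout(const MDSConfig& conf) {
      // Before the cleanup this would effectively have returned mds_beacon_grace.
      return std::chrono::duration<double>(conf.mds_heartbeat_grace);
    }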
- 02:52 PM Bug #37670 (Fix Under Review): standby-replay MDS spews message to log every second
- 02:40 PM Bug #37670 (Resolved): standby-replay MDS spews message to log every second
- I used the mgr volumes module on my rook cluster to create a new cephfs. The orchestrator started up 2 MDS':...
- 02:37 PM Bug #37547 (Fix Under Review): client: fix failure in quota size limitation when using samba
- 04:13 AM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > btw, cannot see the `Pull request ID` section to update the PR i...
12/14/2018
- 07:32 PM Backport #24929 (In Progress): luminous: qa: test_recovery_pool tries asok on wrong node
- 07:12 PM Bug #37617: CephFS did not recover re-plugging network cable
- Patrick Donnelly wrote:
> We do not have a tracker yet. Work is planned in the near future on this and we'll create ...
- 05:26 PM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- Venky Shankar wrote:
> btw, cannot see the `Pull request ID` section to update the PR id...
Backports still follo...
- 04:53 PM Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will han...
- btw, cannot see the `Pull request ID` section to update the PR id...
- 10:36 AM Backport #37608 (Need More Info): luminous: MDS admin socket command `dump cache` with a very lar...
- 10:21 AM Backport #37608 (In Progress): luminous: MDS admin socket command `dump cache` with a very large ...
- 10:37 AM Backport #37609 (Need More Info): mimic: MDS admin socket command `dump cache` with a very large ...
- 10:18 AM Backport #37609 (In Progress): mimic: MDS admin socket command `dump cache` with a very large cac...
- 10:26 AM Backport #37629 (In Progress): luminous: mds: do not call Journaler::_trim twice
- 10:25 AM Backport #37628 (In Progress): mimic: mds: do not call Journaler::_trim twice
- 10:24 AM Backport #37627 (In Progress): luminous: mds: fix incorrect l_pq_executing_ops statistics when me...
- 10:24 AM Backport #37626 (In Progress): mimic: mds: fix incorrect l_pq_executing_ops statistics when meet ...
- 10:23 AM Backport #37610 (In Progress): luminous: qa: pjd test appears to require more than 3h timeout for...
- 10:22 AM Backport #37611 (In Progress): mimic: qa: pjd test appears to require more than 3h timeout for so...
12/13/2018
- 11:31 PM Bug #37617: CephFS did not recover re-plugging network cable
- Niklas Hambuechen wrote:
> Hey Patrick,
>
> > Currently, it is necessary to restart the client when this happens....
- 11:22 PM Bug #37617: CephFS did not recover re-plugging network cable
- Hey Patrick,
> Currently, it is necessary to restart the client when this happens.
Is there already a feature r...
- 05:26 PM Bug #37617 (Rejected): CephFS did not recover re-plugging network cable
- > I would expect Ceph to recover automatically from this short 11-minute network interruption.
Ceph will recover b...
- 11:29 PM Feature #9755 (Resolved): Fence late clients during reconnect timeout
- This has been corrected but this issue was never closed.
- 01:35 PM Bug #37644 (Fix Under Review): extend reconnect period when mds is busy
- 01:25 PM Bug #37644 (Resolved): extend reconnect period when mds is busy
- 03:59 AM Bug #21754 (Rejected): mds: src/osdc/Journaler.cc: 402: FAILED assert(!r)
- Seems this no longer happens.
- 01:17 AM Backport #37481: luminous: mds: MDCache.cc: 11673: abort()
- this one too please
- 01:16 AM Backport #37480: mimic: mds: MDCache.cc: 11673: abort()
- Zheng, please handle this.
- 01:01 AM Bug #37639 (Fix Under Review): mds: output client IP of blacklisted/evicted clients to cluster log
12/12/2018
- 11:23 PM Bug #37639 (Resolved): mds: output client IP of blacklisted/evicted clients to cluster log
- If the MDS evicts a misbehaving client, send a cluster INFO notice so the operator can take action or look for patter...
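As a rough illustration of the intent (the types below are stand-ins, not the real MDS classes): when a session is evicted, an INFO-level cluster log line carries the client id and address so operators can act on it or look for repeat offenders.

    // Illustrative only: emit an operator-visible cluster log notice on eviction.
    #include <iostream>
    #include <sstream>
    #include <string>

    struct Session {
      long client_id;
      std::string client_addr;  // e.g. "192.168.1.23:0/123456"
    };

    struct ClusterLog {
      void info(const std::string& msg) { std::cout << "cluster [INF] " << msg << "\n"; }
    };

    void evict_session(ClusterLog& clog, const Session& s, const std::string& reason) {
      std::ostringstream oss;
      oss << "Evicting client session " << s.client_id
          << " (" << s.client_addr << "): " << reason;
      clog.info(oss.str());  // searchable by operators in the cluster log
      // ... blacklisting and session teardown would follow here ...
    }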
- 10:34 PM Backport #37604 (In Progress): luminous: mds: PurgeQueue write error handler does not handle EBLA...
- 10:33 PM Backport #37605 (In Progress): mimic: mds: PurgeQueue write error handler does not handle EBLACKL...
- 09:52 PM Bug #36669 (Rejected): client: displayed as the capacity of all OSDs when there are multiple data...
- See PR discussion.
- 09:46 PM Backport #37637 (Rejected): luminous: client: support getfattr ceph.dir.pin extended attribute
- 09:46 PM Backport #37636 (Rejected): mimic: client: support getfattr ceph.dir.pin extended attribute
- 09:45 PM Backport #37635 (Resolved): luminous: race of updating wanted caps
- https://github.com/ceph/ceph/pull/25762
- 09:45 PM Backport #37634 (Resolved): mimic: race of updating wanted caps
- https://github.com/ceph/ceph/pull/25680
- 09:45 PM Backport #37633 (Resolved): luminous: mds: remove duplicated l_mdc_num_strays perfcounter set
- https://github.com/ceph/ceph/pull/25682
- 09:45 PM Backport #37632 (Resolved): mimic: mds: remove duplicated l_mdc_num_strays perfcounter set
- https://github.com/ceph/ceph/pull/25681
- 09:44 PM Backport #37631 (Resolved): luminous: client: do not move f->pos untill success write
- https://github.com/ceph/ceph/pull/25684
- 09:44 PM Backport #37630 (Resolved): mimic: client: do not move f->pos untill success write
- https://github.com/ceph/ceph/pull/25683
- 09:44 PM Backport #37629 (Resolved): luminous: mds: do not call Journaler::_trim twice
- https://github.com/ceph/ceph/pull/25562
- 09:44 PM Backport #37628 (Resolved): mimic: mds: do not call Journaler::_trim twice
- https://github.com/ceph/ceph/pull/25561
- 09:44 PM Backport #37627 (Resolved): luminous: mds: fix incorrect l_pq_executing_ops statistics when meet ...
- https://github.com/ceph/ceph/pull/25560
- 09:44 PM Backport #37626 (Resolved): mimic: mds: fix incorrect l_pq_executing_ops statistics when meet an ...
- https://github.com/ceph/ceph/pull/25559
- 09:41 PM Backport #37606 (In Progress): luminous: mds: directories pinned keep being replicated back and f...
- 09:40 PM Backport #37607 (In Progress): mimic: mds: directories pinned keep being replicated back and fort...
- 09:25 PM Backport #37602 (In Progress): luminous: mds: severe internal fragment when decoding xattr_map fr...
- 09:24 PM Backport #37603 (In Progress): mimic: mds: severe internal fragment when decoding xattr_map from ...
- 09:21 PM Backport #37481 (Need More Info): luminous: mds: MDCache.cc: 11673: abort()
- Patrick, this looks to me like a non-trivial backport because of extensive changes to src/mds/MDCache.cc that were ma...
- 09:20 PM Backport #37480 (Need More Info): mimic: mds: MDCache.cc: 11673: abort()
- Patrick, this looks to me like a non-trivial backport because of extensive changes to src/mds/MDCache.cc that were ma...
- 09:07 PM Backport #37423 (In Progress): luminous: qa: wrong setting for msgr failures
- 09:06 PM Backport #37424 (In Progress): mimic: qa: wrong setting for msgr failures
- 08:48 PM Bug #37566 (Pending Backport): mds: do not call Journaler::_trim twice
- 08:47 PM Bug #37567 (Pending Backport): mds: fix incorrect l_pq_executing_ops statistics when meet an inva...
- 08:46 PM Backport #37623 (In Progress): luminous: qa: client socket inaccessible without sudo
- 08:46 PM Backport #37623 (Resolved): luminous: qa: client socket inaccessible without sudo
- https://github.com/ceph/ceph/pull/25516
- 08:46 PM Bug #24872 (Pending Backport): qa: client socket inaccessible without sudo
- 08:45 PM Bug #37546 (Pending Backport): client: do not move f->pos untill success write
- 08:44 PM Bug #37368: mds: directories pinned keep being replicated back and forth between exporting mds an...
- Not sure why I marked this Pending Backport but I've now merged the PR into master.
- 08:44 PM Backport #36577 (In Progress): luminous: qa: teuthology may hang on diagnostic commands for fuse ...
- 08:42 PM Bug #37464 (Pending Backport): race of updating wanted caps
- 08:41 PM Bug #37516 (Pending Backport): mds: remove duplicated l_mdc_num_strays perfcounter set
- 08:38 PM Backport #36578 (In Progress): mimic: qa: teuthology may hang on diagnostic commands for fuse mount
- 08:32 PM Bug #26969: kclient: mount unexpectedly gets osdmap updates causing test to fail
- Another: /ceph/teuthology-archive/pdonnell-2018-12-10_21:45:37-kcephfs-wip-pdonnell-testing-20181210.180934-distro-ba...
- 12:52 AM Bug #37617: CephFS did not recover re-plugging network cable
- The OSD logs around time `Dec 11 03:25:49` (the "I was blacklisted" time):...
- 12:44 AM Bug #37617 (Resolved): CephFS did not recover re-plugging network cable
- Today my hosting provider had planned switch maintenance, during which the switch that one of my 3 ceph nodes was connecte...
12/11/2018
- 04:01 PM Backport #37611 (Resolved): mimic: qa: pjd test appears to require more than 3h timeout for some ...
- https://github.com/ceph/ceph/pull/25557
- 04:01 PM Backport #37610 (Resolved): luminous: qa: pjd test appears to require more than 3h timeout for so...
- https://github.com/ceph/ceph/pull/25558
- 04:00 PM Backport #37609 (Resolved): mimic: MDS admin socket command `dump cache` with a very large cache ...
- https://github.com/ceph/ceph/pull/25642
- 04:00 PM Backport #37608 (Resolved): luminous: MDS admin socket command `dump cache` with a very large cac...
- https://github.com/ceph/ceph/pull/25567
- 04:00 PM Backport #37607 (Resolved): mimic: mds: directories pinned keep being replicated back and forth b...
- https://github.com/ceph/ceph/pull/25521
- 04:00 PM Backport #37606 (Resolved): luminous: mds: directories pinned keep being replicated back and fort...
- https://github.com/ceph/ceph/pull/25522
- 04:00 PM Backport #37605 (Resolved): mimic: mds: PurgeQueue write error handler does not handle EBLACKLISTED
- https://github.com/ceph/ceph/pull/25523
- 04:00 PM Backport #37604 (Resolved): luminous: mds: PurgeQueue write error handler does not handle EBLACKL...
- https://github.com/ceph/ceph/pull/25524
- 03:59 PM Backport #37603 (Resolved): mimic: mds: severe internal fragment when decoding xattr_map from log...
- https://github.com/ceph/ceph/pull/25519
- 03:59 PM Backport #37602 (Resolved): luminous: mds: severe internal fragment when decoding xattr_map from ...
- https://github.com/ceph/ceph/pull/25520
- 12:20 PM Bug #37594 (Resolved): mds: mds state change race
- In a multi-MDS cluster, a recovering MDS may receive an mdsmap that changes
its state after the other MDSs do. Furthermore, the rec...
- 01:56 AM Bug #36189 (Fix Under Review): ceph-fuse client can't read or write due to backward cap_gen
12/10/2018
- 10:27 PM Backport #37426 (In Progress): mimic: ceph-volume-client: cannot set mode for cephfs volumes as r...
- 06:06 PM Bug #37546 (Fix Under Review): client: do not move f->pos untill success write
- 06:04 PM Bug #37566 (Fix Under Review): mds: do not call Journaler::_trim twice
- 06:02 PM Bug #37567 (Fix Under Review): mds: fix incorrect l_pq_executing_ops statistics when meet an inva...
- 04:43 PM Bug #37355 (Duplicate): tasks.cephfs.test_volume_client fails with "ImportError: No module named ...
- #24919 needs to be backported.
- 02:48 PM Feature #37523: NFS-Ganesha: Add support to set quota from nfs mount point
- I assume you're looking to just expose the xattrs to the client via xattr support detailed in RFC8276?
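For background, CephFS quotas are controlled through virtual xattrs on directories (ceph.quota.max_bytes and ceph.quota.max_files), so exposing xattrs over NFSv4.2 as in RFC 8276 would let an NFS client set them. On a native CephFS mount the client-side operation is simply a setxattr; the path and value below are illustrative.

    // Illustrative: set a 10 GiB quota on a CephFS directory via its virtual xattr.
    #include <sys/xattr.h>
    #include <cstdio>
    #include <cstring>

    int main() {
      const char* dir = "/mnt/cephfs/volumes/group/vol1";  // hypothetical mount path
      const char* value = "10737418240";                    // 10 GiB, as a decimal string
      if (setxattr(dir, "ceph.quota.max_bytes", value, std::strlen(value), 0) != 0) {
        std::perror("setxattr ceph.quota.max_bytes");
        return 1;
      }
      return 0;
    }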
- 02:41 PM Feature #37523: NFS-Ganesha: Add support to set quota from nfs mount point
- Supriti, is this something you wanted to work on?
- 02:28 PM Support #37407 (Rejected): ceph-fuse setfattr fail
- This type of question is more appropriate for the ceph-users mailing list. Please repost your question there.
12/07/2018
- 07:00 PM Bug #37368 (Pending Backport): mds: directories pinned keep being replicated back and forth betwe...
- 06:58 PM Bug #37399 (Pending Backport): mds: severe internal fragment when decoding xattr_map from log event
- 06:56 PM Bug #37394 (Pending Backport): mds: PurgeQueue write error handler does not handle EBLACKLISTED
- 06:55 PM Bug #36703 (Pending Backport): MDS admin socket command `dump cache` with a very large cache will...
- 06:53 PM Feature #36707 (Pending Backport): client: support getfattr ceph.dir.pin extended attribute
- 06:49 PM Bug #36594 (Pending Backport): qa: pjd test appears to require more than 3h timeout for some conf...
- 05:33 PM Bug #37544 (Duplicate): mds: reconnect of client during thrashing fails
- 11:22 AM Bug #37544: mds: reconnect of client during thrashing fails
- ...
- 06:46 AM Bug #37544 (Duplicate): mds: reconnect of client during thrashing fails
- ...
- 09:14 AM Bug #37567 (Resolved): mds: fix incorrect l_pq_executing_ops statistics when meet an invalid item...
- l_pq_executing_ops should subtract the ops we added previously
when we meet an invalid item in the purge q...
- 09:09 AM Bug #37566 (Resolved): mds: do not call Journaler::_trim twice
- Journaler::_trim is called in the routine:
PurgeQueue::_execute_item
==>Journaler::write_head
==>Journaler::_finis...
- 08:41 AM Bug #37547 (Resolved): client: fix failure in quota size limitation when using samba
- In Samba, Client::_write may be called with a negative offset,
in which case f->pos is used as the real write offse...
- 08:36 AM Bug #37546 (Resolved): client: do not move f->pos untill success write
- Writes may fail in Client::_write, so f->pos should only be moved when the
write succeeds (see the sketch below).
- 06:16 AM Bug #37543 (Resolved): mds: purge queue recovery hangs during boot if PQ journal is damaged
- ...
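Regarding Bug #37546 and Bug #37547 above, the gist is that the effective offset should be computed first (a negative offset meaning "write at the current position", as Samba does) and f->pos should only advance by the bytes actually written. A schematic sketch, not the real Client::_write:

    // Schematic of the intended write-path behaviour (simplified, stand-in types).
    #include <cstdint>

    struct Fh { int64_t pos = 0; };

    // Stub standing in for the real low-level write; pretends every byte lands.
    static int64_t do_write(Fh*, int64_t /*offset*/, const char*, uint64_t len) {
      return static_cast<int64_t>(len);
    }

    int64_t write_at(Fh* f, int64_t offset, const char* buf, uint64_t len) {
      const int64_t effective = (offset < 0) ? f->pos : offset;  // quota checks should use this too
      int64_t r = do_write(f, effective, buf, len);
      if (r < 0)
        return r;                // failed write: do NOT move f->pos
      if (offset < 0)
        f->pos = effective + r;  // advance only after a successful write
      return r;
    }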
12/06/2018
- 09:18 PM Bug #37540 (Fix Under Review): luminous: MDSMap session timeout cannot be modified
- 08:49 PM Bug #37540 (Resolved): luminous: MDSMap session timeout cannot be modified
- This was fixed in https://github.com/ceph/ceph/pull/19440/commits/67ca6cd229a595d54ccea18b5452f2574ede9657
The fix...
12/05/2018
- 11:25 AM Backport #37425 (In Progress): luminous: ceph-volume-client: cannot set mode for cephfs volumes a...
12/04/2018
- 03:21 PM Feature #37523 (New): NFS-Ganesha: Add support to set quota from nfs mount point
- Currently NFS-Ganesha enforces quotas set from other CephFS clients, but there is no way to set one from an NFS mount point...
- 06:51 AM Bug #37516 (Resolved): mds: remove duplicated l_mdc_num_strays perfcounter set
- The l_mdc_num_strays perfcounter had been encapsulated into StrayManager notify_stray_created() and notify_stray_remo...
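The cleanup in #37516 follows a common pattern: the gauge is updated only inside the notify helpers, so call sites cannot also set it and double-count. A generic sketch with stand-in types, not the actual StrayManager code:

    // Generic sketch: encapsulate the stray count so only the notify_* helpers
    // touch it, which is what makes a second "set" at the call sites redundant.
    #include <cstdint>

    class StrayTracker {
      uint64_t num_strays = 0;  // stands in for the l_mdc_num_strays perf counter
    public:
      void notify_stray_created() { ++num_strays; }
      void notify_stray_removed() { --num_strays; }
      uint64_t strays() const { return num_strays; }
    };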
12/03/2018
- 12:20 PM Bug #37378: truncate_seq ordering issues with object creation
- Greg Farnum wrote:
> And a PR is probably the best way to develop new code rather than diagnose the issue, yeah!
...
- 11:37 AM Bug #37378: truncate_seq ordering issues with object creation
- Greg Farnum wrote:
> Luis Henriques wrote:
> > However, I'm attaching here a quick hack that, from the tests I've d...
11/30/2018
- 08:52 PM Bug #37378: truncate_seq ordering issues with object creation
- And a PR is probably the best way to develop new code rather than diagnose the issue, yeah!
- 08:52 PM Bug #37378: truncate_seq ordering issues with object creation
- Luis Henriques wrote:
> However, I'm attaching here a quick hack that, from the tests I've done, fixes this issue on...
- 07:45 AM Bug #37464 (Fix Under Review): race of updating wanted caps
- 07:21 AM Backport #37481 (Resolved): luminous: mds: MDCache.cc: 11673: abort()
- https://github.com/ceph/ceph/pull/25990
- 07:21 AM Backport #37480 (Resolved): mimic: mds: MDCache.cc: 11673: abort()
- https://github.com/ceph/ceph/pull/26252
- 03:14 AM Backport #36503 (In Progress): mimic: qa: infinite timeout on asok command causes job to die
- https://github.com/ceph/ceph/pull/25332
11/29/2018
- 06:54 PM Bug #37378: truncate_seq ordering issues with object creation
- One thing I forgot to mention in my last comment is that we would also need a new CEPH_FEATURE_<something> to make su...
- 02:22 PM Bug #37378: truncate_seq ordering issues with object creation
- Zheng Yan wrote:
> yes, it's what I mean. besides, we should do the check after getting RW caps of src/dest inode
...
- 01:43 AM Bug #37378: truncate_seq ordering issues with object creation
- yes, it's what I mean. besides, we should do the check after getting RW caps of src/dest inode
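To make the suggestion concrete: the client would take the object copy-from fast path only when the source and destination inodes agree on truncate_seq (checked after the RW caps on both are held), and otherwise fall back to an ordinary read/write copy. A schematic sketch with stand-in types and stub functions:

    // Schematic client-side guard around the copy-from fast path.
    #include <cstdint>

    struct InodeInfo { uint32_t truncate_seq = 0; };

    // Stubs standing in for the real paths: object-to-object copy vs. plain read+write.
    static bool osd_copy_from(const InodeInfo&, const InodeInfo&) { return true; }
    static int64_t plain_copy(const InodeInfo&, const InodeInfo&) { return 0; }

    int64_t copy_range(const InodeInfo& src, const InodeInfo& dst) {
      // assume RW caps on src and dst are already held at this point
      if (src.truncate_seq == dst.truncate_seq && osd_copy_from(src, dst))
        return 0;                   // fast path applicable and succeeded
      return plain_copy(src, dst);  // truncate_seq mismatch (or copy-from failed)
    }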
- 07:39 AM Bug #37464 (Resolved): race of updating wanted caps
- 04:08 AM Bug #37399 (Fix Under Review): mds: severe internal fragment when decoding xattr_map from log event
- 03:55 AM Bug #36035 (Pending Backport): mds: MDCache.cc: 11673: abort()
11/28/2018
- 11:01 PM Bug #37378: truncate_seq ordering issues with object creation
- Zheng Yan wrote:
> For now, I think we can add a check to kernel client, make sure src inode and dest inode's trunca...
- 02:41 PM Bug #37378: truncate_seq ordering issues with object creation
- For now, I think we can add a check to kernel client, make sure src inode and dest inode's truncate seq are the same....
- 02:04 PM Bug #37378: truncate_seq ordering issues with object creation
- Would it be acceptable to do something like adding an extra CEPH_OSD_COPY_FROM_FLAG_IGNORE_TRUNCATE_SEQ flag? The ke...
- 12:24 PM Bug #37378: truncate_seq ordering issues with object creation
- It seems that CEPH_OSD_OP_COPY_FROM is designed for cache tiering and is not suited for general use. The problem happens if src ...
- 11:40 AM Bug #37378: truncate_seq ordering issues with object creation
- Greg Farnum wrote:
> Oh hrm. That does make it more interesting.
>
> ...oh my. I bet that when you do a copy-from...
11/27/2018
- 10:40 PM Backport #36200 (In Progress): luminous: mds: fix mds damaged due to unexpected journal length
- 10:27 PM Backport #37426 (Resolved): mimic: ceph-volume-client: cannot set mode for cephfs volumes as requ...
- https://github.com/ceph/ceph/pull/25413
- 10:27 PM Backport #37425 (Resolved): luminous: ceph-volume-client: cannot set mode for cephfs volumes as r...
- https://github.com/ceph/ceph/pull/25407
- 10:27 PM Backport #37424 (Resolved): mimic: qa: wrong setting for msgr failures
- https://github.com/ceph/ceph/pull/25517
- 10:27 PM Backport #37423 (Resolved): luminous: qa: wrong setting for msgr failures
- https://github.com/ceph/ceph/pull/25518
- 10:16 PM Bug #36676 (Pending Backport): qa: wrong setting for msgr failures
- 09:55 PM Bug #36651 (Pending Backport): ceph-volume-client: cannot set mode for cephfs volumes as required...
- 08:55 PM Documentation #36180 (Resolved): doc: Typo error on cephfs/fuse/
- 08:55 PM Backport #36309 (Resolved): luminous: doc: Typo error on cephfs/fuse/
- 08:32 PM Documentation #36286 (Resolved): doc: fix broken fstab url in cephfs/fuse
- 08:32 PM Backport #36312 (Resolved): luminous: doc: fix broken fstab url in cephfs/fuse
- 07:02 PM Bug #37378: truncate_seq ordering issues with object creation
- Oh hrm. That does make it more interesting.
...oh my. I bet that when you do a copy-from op on the OSD side, the t...
- 03:39 PM Bug #37378: truncate_seq ordering issues with object creation
- Greg Farnum wrote:
> From what you're showing here, it looks like that patch is effectively just disabling the OSD's... - 03:18 PM Bug #37378: truncate_seq ordering issues with object creation
> From what you're showing here, it looks like that patch is effectively just disabling the OSD's...
- 03:18 PM Bug #37378: truncate_seq ordering issues with object creation
- 10:18 AM Bug #37378: truncate_seq ordering issues with object creation
- Patrick Donnelly wrote:
> Moving to RADOS since the problem appears to be there.
Thanks, Patrick. In the meantim...
- 02:31 PM Support #37407 (Rejected): ceph-fuse setfattr fail
- I want to use the CephFS quota feature, and I mounted the filesystem with ceph-fuse as described in the docs. However, when I exec "set...
- 08:30 AM Bug #37399: mds: severe internal fragment when decoding xattr_map from log event
- Either https://github.com/ceph/ceph/pull/25264 or https://github.com/ceph/ceph/pull/25275 can resolve this issue. Aft...
- 08:23 AM Bug #37399 (Resolved): mds: severe internal fragment when decoding xattr_map from log event
- 12:19 AM Bug #37394 (Fix Under Review): mds: PurgeQueue write error handler does not handle EBLACKLISTED
11/26/2018
- 11:28 PM Bug #37378: truncate_seq ordering issues with object creation
- Moving to RADOS since the problem appears to be there.
- 04:52 PM Bug #37378: truncate_seq ordering issues with object creation
- Could anyone more knowledgeable with the OSD code please confirm whether the following PrimaryLogPG::do_read() patch makes sens...
- 10:54 PM Bug #37368 (Fix Under Review): mds: directories pinned keep being replicated back and forth betwe...
- 10:10 PM Bug #37394 (Resolved): mds: PurgeQueue write error handler does not handle EBLACKLISTED
- It should have similar logic to MDSRank::handle_write_error.
As a result, if the PQ sees a write error due to bein...
- 05:38 PM Bug #37333 (Fix Under Review): fuse client can't read file due to can't acquire Fr
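Regarding Bug #37394 above: the idea is for the PurgeQueue's write error handler to mirror MDSRank::handle_write_error and treat a blacklisted MDS differently from other write errors. A schematic sketch with stand-in callbacks (the EBLACKLISTED value here is only a placeholder):

    // Schematic error handler: distinguish "we were blacklisted" from other errors,
    // roughly mirroring what MDSRank::handle_write_error does.
    #include <functional>
    #include <iostream>

    constexpr int EBLACKLISTED = 108;  // placeholder; Ceph aliases this to an existing errno

    void handle_purge_queue_write_error(int err,
                                        const std::function<void()>& respawn,
                                        const std::function<void()>& mark_damaged) {
      if (err == -EBLACKLISTED) {
        std::cerr << "purge queue write error: blacklisted, respawning\n";
        respawn();       // fenced by the monitors; restarting/failing over is the safe response
      } else {
        std::cerr << "purge queue write error " << err << ", marking damaged\n";
        mark_damaged();  // unexpected error: flag damage instead of silently continuing
      }
    }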
- 04:25 PM Bug #37355 (In Progress): tasks.cephfs.test_volume_client fails with "ImportError: No module name...
- The 'ceph_argparse' module for py2.7 is in the 'ceph-common' package, and for py3 the module is in the 'python3-ceph-argparse' pa...
11/23/2018
- 08:16 PM Bug #36189 (Need More Info): ceph-fuse client can't read or write due to backward cap_gen
- Zheng writes: "If cap is invalid during reconnect, mds should consider issued caps is empty (just CEPH_CAP_PIN)"
a...
- 08:14 PM Backport #36462 (Need More Info): luminous: ceph-fuse client can't read or write due to backward ...
- First attempted backport, https://github.com/ceph/ceph/pull/25089, was closed because the master PR might have an iss...
- 08:14 PM Backport #36463 (Need More Info): mimic: ceph-fuse client can't read or write due to backward cap...
- The first backport was https://github.com/ceph/ceph/pull/25091. The original master fix might have an issue, though, ...
- 05:36 PM Bug #37378: truncate_seq ordering issues with object creation
- I don't fully understand the following code, but I suspect the issue could be related to truncate_seq in this OSD fun...
- 11:33 AM Bug #37378: truncate_seq ordering issues with object creation
- I forgot to mention that using the 'rados' command I'm able to see that the objects in the data pool actually seem to...
- 10:17 AM Bug #37378 (Resolved): truncate_seq ordering issues with object creation
- I'm seeing a bug with copy_file_range in recent clients. Here's a simple way to reproduce it:...
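The reporter's exact reproduction steps are truncated above. Purely as an illustration of the kind of sequence involved (hypothetical paths on a CephFS mount; assumes a glibc that provides the copy_file_range() wrapper), a reproducer would be along these lines:

    // Illustration only, not the original reproducer: write a source file, copy a
    // range into a destination with copy_file_range(), read it back and compare.
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
      const char* src = "/mnt/cephfs/src.dat";  // hypothetical CephFS paths
      const char* dst = "/mnt/cephfs/dst.dat";
      char buf[4096];
      std::memset(buf, 'A', sizeof(buf));

      int in = open(src, O_CREAT | O_RDWR | O_TRUNC, 0644);
      int out = open(dst, O_CREAT | O_RDWR | O_TRUNC, 0644);
      if (in < 0 || out < 0) { std::perror("open"); return 1; }
      if (write(in, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) { std::perror("write"); return 1; }

      loff_t off_in = 0, off_out = 0;
      if (copy_file_range(in, &off_in, out, &off_out, sizeof(buf), 0) < 0) {
        std::perror("copy_file_range");
        return 1;
      }

      char check[4096] = {0};
      if (pread(out, check, sizeof(check), 0) != (ssize_t)sizeof(check)) { std::perror("pread"); return 1; }
      std::printf("%s\n", std::memcmp(buf, check, sizeof(buf)) == 0 ? "match" : "MISMATCH");
      close(in); close(out);
      return 0;
    }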
11/22/2018
- 05:16 PM Backport #36690 (Resolved): mimic: client: request next osdmap for blacklisted client
- 04:40 PM Backport #36690: mimic: client: request next osdmap for blacklisted client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24987
merged
- 05:15 PM Backport #36218 (Resolved): mimic: Some cephfs tool commands silently operate on only rank 0, eve...
- 04:39 PM Backport #36218: mimic: Some cephfs tool commands silently operate on only rank 0, even if multip...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25036
merged
- 05:15 PM Backport #36461 (Resolved): mimic: mds: rctime not set on system inode (root) at startup
- 04:38 PM Backport #36461: mimic: mds: rctime not set on system inode (root) at startup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25042
merged
- 04:53 PM Backport #36463 (Resolved): mimic: ceph-fuse client can't read or write due to backward cap_gen
- 04:41 PM Backport #36463: mimic: ceph-fuse client can't read or write due to backward cap_gen
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25091
merged
- 04:53 PM Backport #36457 (Resolved): mimic: client: explicitly show blacklisted state via asok status command
- 04:39 PM Backport #36457: mimic: client: explicitly show blacklisted state via asok status command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24993
merged
- 04:53 PM Backport #37093 (Resolved): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" duri...
- 04:37 PM Backport #37093: mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25095
merged
- 09:42 AM Bug #37368 (Resolved): mds: directories pinned keep being replicated back and forth between expor...
- Recently, when developing the rstat propagation function, we found that when pinning some directory to a specific ran...
11/21/2018
- 06:26 PM Bug #37355: tasks.cephfs.test_volume_client fails with "ImportError: No module named 'ceph_argpar...
- I believe this problem also exists in Luminous?
- 01:12 PM Bug #37355 (Duplicate): tasks.cephfs.test_volume_client fails with "ImportError: No module named ...
- seen here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-11-20_16:46:36-fs-wip-yuri3-testing-2018-11-16-1727-mimic-d...
- 06:13 PM Backport #36694 (Resolved): mimic: mds: cache drop command requires timeout argument when it is s...
- 06:13 PM Backport #36282 (Resolved): mimic: mds: add drop_cache command
- 12:52 PM Bug #24517: "Loading libcephfs-jni: Failure!" in fs suite
- seen again here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-11-20_16:46:36-fs-wip-yuri3-testing-2018-11-16-1727-m...
11/20/2018
- 04:24 AM Bug #37333 (Resolved): fuse client can't read file due to can't acquire Fr
- ceph version: jewel:10.2.2
logs:
client.log...