Activity
From 08/03/2019 to 09/01/2019
09/01/2019
- 02:32 PM Bug #40297 (Resolved): cephfs-shell: Produces TypeError on passing '*' pattern to ls, rm or rmdir
- Issue fixed by this PR: https://github.com/ceph/ceph/pull/29552
- 02:18 PM Backport #41269 (In Progress): nautilus: cephfs-shell: Convert files path type from string to bytes
- I have fixed the conflicts. Please review the PR: https://github.com/ceph/ceph/pull/30057
08/30/2019
- 02:46 PM Backport #41488 (In Progress): nautilus: client: client should return EIO when it's unsafe reqs h...
- 02:46 PM Backport #41488 (New): nautilus: client: client should return EIO when it's unsafe reqs have been...
- 02:45 PM Backport #41488 (In Progress): nautilus: client: client should return EIO when it's unsafe reqs h...
- 02:33 PM Backport #41477 (In Progress): nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second...
- 12:47 PM Backport #41468 (Need More Info): mimic: mds: recall capabilities more regularly when under cache...
- non-trivial
- 12:45 PM Bug #41140 (Resolved): mds: trim cache more regularly
- Since #41141 is fixed by the same PR, we'll handle the backports there.
- 12:36 PM Backport #41467 (In Progress): nautilus: mds: recall capabilities more regularly when under cache...
- 12:30 PM Backport #41465 (In Progress): nautilus: mount.ceph: doesn't accept "strictatime"
- 12:28 PM Backport #41276 (In Progress): nautilus: qa: malformed job
- 09:40 AM Backport #41283 (In Progress): nautilus: cephfs-shell: No error message is printed on ls of inval...
- 09:31 AM Backport #41113 (In Progress): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 09:22 AM Backport #40900 (In Progress): nautilus: mds: only evict an unresponsive client when another clie...
- 09:01 AM Backport #40897 (In Progress): nautilus: ceph_volume_client: fs_name must be converted to string ...
- 09:00 AM Backport #40495 (In Progress): nautilus: test_volume_client: declare only one default for python ...
- 08:59 AM Backport #40857 (In Progress): nautilus: ceph_volume_client: python program embedded in test_volu...
- 08:58 AM Backport #40854 (In Progress): nautilus: test_volume_client: test_put_object_versioned is unreliable
- 08:57 AM Backport #40887 (In Progress): nautilus: ceph_volume_client: to_bytes converts NoneType object str
- 08:55 AM Bug #39510: test_volume_client: test_put_object_versioned is unreliable
- 27718 was replaced by https://github.com/ceph/ceph/pull/28692
- 08:50 AM Bug #39405: ceph_volume_client: python program embedded in test_volume_client.py use python2.7
- 27718 was replaced by https://github.com/ceph/ceph/pull/28692
- 08:45 AM Backport #41112 (In Progress): nautilus: cephfs-shell: cd with no args has no effect
- 08:38 AM Backport #41269 (Need More Info): nautilus: cephfs-shell: Convert files path type from string to ...
- conflicts
- 08:37 AM Backport #41268 (In Progress): nautilus: cephfs-shell: onecmd throws TypeError
- 08:34 AM Backport #41118 (In Progress): nautilus: cephfs-shell: add CI testing with flake8
- 08:12 AM Bug #41585 (Resolved): mds: client evicted twice in one tick
- 2019-08-09 14:41:39.292140 7fd33eba7700 0 log_channel(cluster) log [WRN] : client id 2646901 has not responded to ca...
- 08:12 AM Backport #41105 (In Progress): nautilus: cephfs-shell: flake8 blank line and indentation error
- 08:11 AM Backport #40898 (In Progress): nautilus: cephfs-shell: Error messages are printed to stdout
- 08:09 AM Backport #40895 (In Progress): nautilus: pybind: Add standard error message and fix print of path...
- 08:06 AM Backport #40131 (In Progress): nautilus: Document behaviour of fsync-after-close
- 07:47 AM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- It's Bluestore on spinning disks. I don't really have an overview of the data distribution, it's very uneven. Perhaps...
- 05:40 AM Bug #41581 (In Progress): pybind/mgr: Fix subvolume options
- > $ ./bin/ceph fs subvolume create
> Invalid command: missing required parameter vol_name(<string>)
> fs subvolume ...
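For reference, a minimal sketch of the invocation this ticket is about (volume, subvolume, group and pool names below are placeholders, and the option spellings follow the nautilus mgr/volumes interface rather than anything stated in this entry):
$ ceph fs subvolume create <vol_name> <sub_name>                                      # both positional arguments are required
$ ceph fs subvolume create cephfs sub0 --group_name grp0 --pool_layout cephfs_data    # optional group and data-pool layout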
08/29/2019
- 11:38 AM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- Janek Bevendorff wrote:
> Little status update: our data pool now uses up 186TiB while only storing 53TiB of actual ...
- 09:30 AM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- Little status update: our data pool now uses up 186TiB while only storing 53TiB of actual data with a replication fac...
- 05:42 AM Backport #41128 (In Progress): nautilus: qa: power off still resulted in client sending session c...
- https://github.com/ceph/ceph/pull/29983
08/28/2019
- 10:22 PM Feature #41566 (In Progress): mds: support rolling upgrades
- The MDS currently does not support rolling upgrades. Normally we recommend upgrading all MDS at the same time for thi...
- 10:13 PM Bug #41565 (Resolved): mds: detect MDS<->MDS messages that are not versioned
- Inter-MDS messages are now versioned. We should add a check that confirms that no current or new messages sent betwee...
- 10:07 PM Bug #14807 (Can't reproduce): MDS crashes repeatedly after upgrade to Infernalis from Hammer
- 10:07 PM Feature #15506 (Resolved): qa: run at least one upgrade test in the FS suite
- We've been doing testing of this since Mimic in fs:upgrade.
- 10:03 PM Feature #12107 (Resolved): mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
- 02:16 AM Backport #41108 (In Progress): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
- https://github.com/ceph/ceph/pull/29940
- 12:50 AM Backport #41107 (In Progress): nautilus: mds: disallow setting ceph.dir.pin value exceeding max r...
- https://github.com/ceph/ceph/pull/29938
08/27/2019
- 10:38 PM Bug #41541 (Resolved): mgr/volumes: ephemerally pin volumes
- Apply export_ephemeral_distributed to volumes by default. Provide the option to change this to default balancer.
- 04:42 PM Bug #41538 (Resolved): mds: wrong compat can cause MDS to be added daemon registry on mgr but not...
- See Kefu's excellent synopsis of the problem: https://tracker.ceph.com/issues/41525#note-3
- 01:12 PM Backport #40343: luminous: mds: fix corner case of replaying open sessions
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28536
m...
- 01:12 PM Backport #40041: luminous: avoid trimming too many log segments after mds failover
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28543
m...
- 01:12 PM Backport #40221: luminous: mds: reset heartbeat during long-running loops in recovery
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28544
m...
- 10:58 AM Backport #38686: luminous: kcephfs TestClientLimits.test_client_pin fails with "client caps fell ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27040
m...
- 10:58 AM Backport #38445: luminous: mds: drop cache does not timeout as expected
- backport PR https://github.com/ceph/ceph/pull/27342
merge commit 5154062f2c4a1499ce74a518eb7bb54e9560aad5 (v12.2.12-...
- 10:58 AM Backport #38340: luminous: mds: may leak gather during cache drop
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27342
m...
- 10:58 AM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27679
m...
- 10:57 AM Backport #39191: luminous: mds: crash during mds restart
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27737
m...
- 10:57 AM Backport #39198: luminous: mds: we encountered "No space left on device" when moving huge number ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27801
m...
- 10:56 AM Backport #39208: luminous: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server:...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27840
m...
- 10:55 AM Backport #39468: luminous: There is no punctuation mark or blank between tid and client_id in th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27848
m...
- 10:55 AM Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28432
m...
- 10:55 AM Backport #39231: luminous: kclient: nofail option not supported
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28436
m...
- 10:55 AM Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28437
m...
- 10:54 AM Backport #39213: luminous: mds: there is an assertion when calling Beacon::shutdown()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28438
m...
- 09:36 AM Feature #41182 (In Progress): mgr/volumes: add `fs subvolume extend/shrink` commands
- Look at OpenStack manila's cephfs driver extend_share and shrink_share method implementation,
https://github.com/o...
- 09:09 AM Backport #41444 (In Progress): nautilus: mgr/volumes: handle incorrect pool_layout setting during...
- https://github.com/ceph/ceph/pull/29926
- 09:09 AM Backport #41437 (In Progress): nautilus: mgr/volumes: subvolume and subvolume group path exists e...
- https://github.com/ceph/ceph/pull/29926
- 08:51 AM Bug #23975 (Resolved): qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:51 AM Bug #24133 (Resolved): mds: broadcast quota to relevant clients when quota is explicitly set
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:22 AM Backport #40222: mimic: mds: reset heartbeat during long-running loops in recovery
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28918
m...
- 07:22 AM Backport #40875: mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29187
m...
- 07:22 AM Backport #39685: mimic: ceph-fuse: client hang because its bad session PipeConnection to mds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29200
m...
- 07:21 AM Backport #38099: mimic: mds: remove cache drop admin socket command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29210
m...
- 07:21 AM Backport #38687: mimic: kcephfs TestClientLimits.test_client_pin fails with "client caps fell bel...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29211
m...
- 03:21 AM Backport #41100 (In Progress): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29915
- 03:19 AM Backport #41106 (In Progress): nautilus: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/29914
08/26/2019
- 08:26 PM Backport #39233: mimic: kclient: nofail option not supported
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28090
m...
- 08:26 PM Backport #39472: mimic: mds: fail to resolve snapshot name contains '_'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28186
m...
- 08:25 PM Backport #39669: mimic: mds: output lock state in format dump
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28274
m...
- 08:25 PM Backport #39679: mimic: pybind: add the lseek() function to pybind of cephfs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28337
m...
- 08:25 PM Backport #39689: mimic: mds: error "No space left on device" when create a large number of dirs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28381
m...
- 08:25 PM Backport #40168: mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nano...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28501
m...
- 08:24 PM Backport #40342: mimic: mds: fix corner case of replaying open sessions
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28579
m...
- 08:24 PM Backport #40042: mimic: avoid trimming too many log segments after mds failover
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28650
m...
- 03:08 PM Backport #41508 (Resolved): nautilus: add information about active scrubs to "ceph -s" (and elsew...
- https://github.com/ceph/ceph/pull/30704
- 02:56 PM Bug #40489 (Resolved): cephfs-shell: name 'files' is not defined error in do_rm()
- 02:55 PM Bug #40679 (Resolved): cephfs-shell: TypeError in poutput
- 02:55 PM Backport #41495 (Resolved): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
- https://github.com/ceph/ceph/pull/31040
- 02:51 PM Bug #40775 (Resolved): /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 02:50 PM Backport #41489 (Resolved): luminous: client: client should return EIO when it's unsafe reqs have...
- https://github.com/ceph/ceph/pull/30242
- 02:50 PM Backport #41488 (Resolved): nautilus: client: client should return EIO when it's unsafe reqs have...
- https://github.com/ceph/ceph/pull/30043
- 02:50 PM Backport #41487 (Resolved): mimic: client: client should return EIO when it's unsafe reqs have be...
- https://github.com/ceph/ceph/pull/30241
- 02:49 PM Backport #41477 (Resolved): nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second >=...
- https://github.com/ceph/ceph/pull/30041
- 02:49 PM Backport #41476 (Rejected): mimic: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= be...
- 02:46 PM Documentation #41472 (Resolved): doc: add multiple active MDSs and Subtree Management in CephFS
- Give a technical description of how subtrees are handled by MDSs. Also do the same for multiple active MDSs.
- 02:45 PM Documentation #41470 (Resolved): Document requirements for using cephfs
- Communicate high-level requirements (e.g. need 1-2 MDS; at least 2 pools; key auth and distribution)
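A rough sketch of what that minimal setup looks like in practice (names are placeholders, not part of the ticket):
$ ceph osd pool create cephfs_data 64          # data pool
$ ceph osd pool create cephfs_metadata 64      # metadata pool -- at least two pools total
$ ceph fs new cephfs cephfs_metadata cephfs_data
$ ceph fs authorize cephfs client.foo / rw     # cephx key creation and distribution for clients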
- 02:44 PM Bug #41434 (Fix Under Review): mds: infinite loop in Locker::file_update_finish()
- 12:53 PM Bug #41434 (Resolved): mds: infinite loop in Locker::file_update_finish()
- ...
- 02:43 PM Backport #41468 (Rejected): mimic: mds: recall capabilities more regularly when under cache pressure
- 02:43 PM Backport #41467 (Resolved): nautilus: mds: recall capabilities more regularly when under cache pr...
- https://github.com/ceph/ceph/pull/30040
- 02:43 PM Backport #41466 (Resolved): mimic: mount.ceph: doesn't accept "strictatime"
- https://github.com/ceph/ceph/pull/30240
- 02:43 PM Backport #41465 (Resolved): nautilus: mount.ceph: doesn't accept "strictatime"
- https://github.com/ceph/ceph/pull/30039
- 02:30 PM Documentation #41451 (Resolved): Document distributed metadata cache
- Explain distributed metadata cache maintained by MDS/clients. This should touch on capabilities, cache management, an...
- 02:22 PM Backport #41444 (Resolved): nautilus: mgr/volumes: handle incorrect pool_layout setting during `f...
- 02:21 PM Backport #41437 (Resolved): nautilus: mgr/volumes: subvolume and subvolume group path exists even...
- 01:50 PM Bug #41419: mds: missing dirfrag damaged check before CDir::fetch
- 0> 2019-08-23 15:51:03.871241 7f990ee3e700 -1 /build/ceph-12.2.8/src/include/elist.h: In function 'elist<T>::~elist()...
- 09:04 AM Cleanup #41430 (Fix Under Review): mds: reorg JournalPointer header
- 09:00 AM Cleanup #41430 (Resolved): mds: reorg JournalPointer header
- 08:58 AM Backport #41002: nautilus: client: failed to drop dn and release caps causing mds stray stacking.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29478
m...
- 07:36 AM Cleanup #41428 (Fix Under Review): mds: reorg InoTable header
- 07:30 AM Cleanup #41428 (Resolved): mds: reorg InoTable header
- 03:50 AM Backport #41099 (In Progress): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29879
- 03:46 AM Backport #41096 (In Progress): nautilus: mds: map client_caps been inserted by mistake
- https://github.com/ceph/ceph/pull/29878
08/25/2019
- 09:50 PM Bug #41426 (Can't reproduce): mds: wrongly signals directory is empty when dentry is damaged?
- In this test:
/ceph/teuthology-archive/pdonnell-2019-08-24_04:19:23-fs-wip-pdonnell-testing-20190824.014616-distro...
- 04:36 AM Bug #41140 (Pending Backport): mds: trim cache more regularly
- 04:36 AM Bug #41141 (Pending Backport): mds: recall capabilities more regularly when under cache pressure
- 04:33 AM Bug #41337 (Pending Backport): mgr/volumes: handle incorrect pool_layout setting during `fs subvo...
- 04:32 AM Bug #41371 (Pending Backport): mgr/volumes: subvolume and subvolume group path exists even when c...
08/24/2019
- 11:49 AM Bug #41419: mds: missing dirfrag damaged check before CDir::fetch
- another option is to make sure all types of callback contexts (passed to CDir::fetch) handle error codes
- 11:43 AM Bug #41419 (New): mds: missing dirfrag damaged check before CDir::fetch
- we don't have a damaged check before every CDir::fetch. It can cause a request leak.
A user encountered the following cra...
08/23/2019
- 11:17 PM Bug #36370 (Pending Backport): add information about active scrubs to "ceph -s" (and elsewhere)
- 11:11 PM Bug #40877 (Pending Backport): client: client should return EIO when it's unsafe reqs have been d...
- 11:08 PM Cleanup #41181 (Resolved): mds: reorg FSMap header
- 11:07 PM Support #40906: Full CephFS causes hang when accessing inode.
- The MDS crashed while I was working in the damaged directories at `2019-08-23 15:51:03.871241`. The standby took over...
- 10:43 PM Bug #41415 (Can't reproduce): mgr/volumes: AssertionError: '33' != 'new_pool'
- ...
- 08:35 PM Feature #41311 (Fix Under Review): deprecate CephFS inline_data support
- 05:09 PM Bug #40773 (Pending Backport): qa: 'ceph osd require-osd-release nautilus' fails
- 03:56 AM Backport #41097 (In Progress): mimic: mds: map client_caps been inserted by mistake
- https://github.com/ceph/ceph/pull/29833
- 03:54 AM Backport #41095 (In Progress): nautilus: qa: race in test_standby_replay_singleton_fail
- https://github.com/ceph/ceph/pull/29832
- 01:40 AM Backport #41000 (In Progress): luminous: client: failed to drop dn and release caps causing mds s...
08/22/2019
- 10:06 PM Backport #39691 (In Progress): luminous: mds: error "No space left on device" when create a larg...
- 04:02 PM Bug #41398 (Fix Under Review): qa: KeyError: 'cluster' in ceph.stop
- 03:59 PM Bug #41398 (Resolved): qa: KeyError: 'cluster' in ceph.stop
- ...
- 02:31 PM Support #40906: Full CephFS causes hang when accessing inode.
- Did the logs provide the information that you needed, or do you need more/different information?
- 02:04 PM Bug #40821 (Fix Under Review): osdc: objecter ops output does not have useful time information
- 01:28 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- Updated PR here:
https://github.com/ceph/ceph/pull/29817
This not only allows the helper to find cephx secr...
- 10:20 AM Bug #24403: mon failed to return metadata for mds
- this ticket is now rather old. do you mind, if I just close it?
- 04:15 AM Bug #40773 (Fix Under Review): qa: 'ceph osd require-osd-release nautilus' fails
- 03:39 AM Backport #41093 (In Progress): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery...
- https://github.com/ceph/ceph/pull/29811
- 03:37 AM Backport #41094 (In Progress): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
- https://github.com/ceph/ceph/pull/29812
- 01:24 AM Bug #41133 (Fix Under Review): qa/tasks: update thrasher design
08/21/2019
- 10:12 PM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
- NOTE: do not merge to mimic until #41354 merges and is ready for backport since this introduces a failure.
- 07:37 PM Bug #41371 (Fix Under Review): mgr/volumes: subvolume and subvolume group path exists even when c...
- 09:09 AM Bug #41371 (Resolved): mgr/volumes: subvolume and subvolume group path exists even when creation ...
- (testenv) [rraja@bzn build]$ ./bin/ceph fs subvolume create a subvol00 --pool_layout invalid_pool
Error EINVAL: Trac...
- 05:59 PM Bug #41133 (Resolved): qa/tasks: update thrasher design
- 05:59 PM Feature #10369 (Resolved): qa-suite: detect unexpected MDS failovers and daemon crashes
- 02:46 PM Backport #41071: nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr...
- backport PR https://github.com/ceph/ceph/pull/29490
merge commit f05a301b92f574edb17e8dff73fb65f3d6b032d0 (v14.2.2-3...
- 02:46 PM Backport #41070: nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29490
m...
- 02:43 PM Backport #40326 (Resolved): nautilus: mds: evict stale client when one of its write caps are stolen
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28583
m...
- 02:42 PM Backport #40324 (Resolved): nautilus: ceph_volume_client: d_name needs to be converted to string ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28609
m...
- 02:42 PM Backport #40839 (Resolved): nautilus: cephfs-shell: TypeError in poutput
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29156
m...
- 02:42 PM Backport #40842 (Resolved): nautilus: ceph-fuse: mount does not support the fallocate()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29157
m...
- 02:42 PM Backport #40843 (Resolved): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29158
m...
- 02:41 PM Backport #40874 (Resolved): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29186
m...
- 02:41 PM Backport #40438 (Resolved): nautilus: getattr on snap inode stuck
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29231
m...
- 02:41 PM Backport #40440 (Resolved): nautilus: mds: cannot switch mds state from standby-replay to active
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29233
m...
- 02:40 PM Backport #40443 (Resolved): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when o...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29343
m...
- 02:40 PM Backport #40445 (Resolved): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_sn...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29344
m...
- 02:22 PM Backport #40845 (Resolved): nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29159
m...
- 09:01 AM Backport #40796: nautilus: mgr / volumes: support asynchronous subvolume deletes
- merge commit a7a380a
v14.2.2-16-ga7a380a44f
08/20/2019
- 09:19 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- Okay, the problem is not what I first thought.
The branch where we look at the data pool unconditionally is only t...
- 08:52 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- Okay so we definitely need to figure out the PGStats fix, and try to dig up the appropriate FS bug as a separate tick...
- 01:31 PM Backport #40325 (In Progress): mimic: ceph_volume_client: d_name needs to be converted to string ...
- 01:22 PM Backport #40130 (In Progress): mimic: Document behaviour of fsync-after-close
- 01:12 PM Backport #41089 (In Progress): nautilus: cephfs-shell: Multiple flake8 errors
- 11:01 AM Bug #41337 (Fix Under Review): mgr/volumes: handle incorrect pool_layout setting during `fs subvo...
- 06:10 AM Backport #38339: mimic: mds: may leak gather during cache drop
- This needs https://github.com/ceph/ceph/pull/28452 (tracker https://tracker.ceph.com/issues/38131) to be merged for th...
- 02:44 AM Bug #41346 (Fix Under Review): mds: MDSIOContextBase instance leak
- 02:32 AM Bug #41346 (Resolved): mds: MDSIOContextBase instance leak
- From time to time, we see mds crashes when shutting down:...
- 02:21 AM Backport #41088 (In Progress): mimic: qa: AssertionError: u'open' != 'stale'
- https://github.com/ceph/ceph/pull/29751
- 02:19 AM Backport #41087 (In Progress): nautilus: qa: AssertionError: u'open' != 'stale'
- https://github.com/ceph/ceph/pull/29750
08/19/2019
- 08:27 PM Backport #41002 (Resolved): nautilus: client: failed to drop dn and release caps causing mds stray...
- 04:44 PM Bug #41329 (Fix Under Review): mds: reject sessionless messages
- 01:37 AM Bug #41329 (Resolved): mds: reject sessionless messages
- src/mds/Server.cc:
Server::handle_client_session, mds should reject sessionless messages.
- 04:37 PM Bug #41144 (Pending Backport): mount.ceph: doesn't accept "strictatime"
- 04:35 PM Bug #41006 (Pending Backport): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before...
- 02:19 PM Bug #41337 (Resolved): mgr/volumes: handle incorrect pool_layout setting during `fs subvolume/sub...
- Instead of a traceback, raise a clear error and log error message when FS subvolume and subvolume group is created wi...
- 01:40 PM Bug #41219 (Fix Under Review): mgr/volumes: send purge thread (and other) health warnings to `cep...
- 01:40 PM Bug #41327 (Fix Under Review): mds: dirty rstat lost during scatter-gather process
- 01:40 PM Bug #41218 (Fix Under Review): mgr/volumes: retry spawning purge threads on failure
- 09:43 AM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
- (nautilus) http://qa-proxy.ceph.com/teuthology/yuriw-2019-08-15_23:26:08-multimds-wip-yuri5-testing-2019-08-15-2024-n...
- 08:27 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Patrick Donnelly wrote:
> Sidharth, I've discussed this with Doug and we'll be assigning this to you.
>
> Sidhart...
08/18/2019
- 01:55 PM Bug #41327 (Fix Under Review): mds: dirty rstat lost during scatter-gather process
- In the following scenario, the current lock's dirty state could be lost:
# 1. current lock's state is LOCK_LOCK;
...
08/16/2019
- 09:57 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
- https://github.com/ceph/ceph/pull/29715
- 07:37 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
- http://pulpito.ceph.com/pdonnell-2019-08-15_13:19:31-fs-wip-pdonnell-testing-20190814.222632-distro-basic-smithi/
...
- 07:16 PM Documentation #41316 (Fix Under Review): doc: update documentation for LazyIO
- 03:44 PM Documentation #41316 (Resolved): doc: update documentation for LazyIO
- Update the documentation with usage info about the LazyIO methods: lazyio_propagate() and lazyio_synchronize(). Also ...
- 06:52 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth, I've discussed this with Doug and we'll be assigning this to you.
Sidharth Anupkrishnan wrote:
> Nice!...
- 06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Nice!
I have a doubt regarding how we could use consistent hashing for the 2nd case: "export_ephemeral_random" pinni...
- 04:04 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Here are some scripts shared by Dan from CERN that can be used to manually test random subtree pinning: https://githu...
- 05:38 PM Support #40906: Full CephFS causes hang when accessing inode.
- I've sent an e-mail to Zheng with a link to download the logs due to sensitive info. The client requested the file 0....
- 05:33 PM Bug #41319 (Can't reproduce): ceph.in: pool creation fails with "AttributeError: 'str' object has...
- ...
- 04:42 PM Bug #41192 (In Progress): mds: atime not being updated persistently
- I've dropped the PR for now, as Zheng pointed out that atime is not actually tied to Fr caps after all, but rather to...
- 12:52 PM Bug #40298 (Resolved): cephfs-shell: 'rmdir *' does not remove all directories
- The issue is resolved in this PR https://github.com/ceph/ceph/commit/4c968b1f30faab9f9013dee95043ccf5f38f5d20.
- 11:41 AM Feature #41311 (Resolved): deprecate CephFS inline_data support
- I sent a proposal to the various ceph mailing lists to deprecate inline_data support for Octopus. At this point, we m...
- 10:51 AM Bug #41310 (Resolved): client: lazyio synchronize does not get file size
- LazyIO synchronize fails to do the task of making the propagated writes by other clients/fds visible to the current f...
- 08:08 AM Backport #37761 (In Progress): mimic: mds: deadlock when setting config value via admin socket
08/15/2019
- 06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
- It's worth noting that the only difference between the two options is that export_ephemeral_distributed is not hierar...
- 06:15 PM Feature #41302 (Resolved): mds: add ephemeral random and distributed export pins
- Background: export pins [1] are an effective way to distribute metadata load for large workloads without the metadata...
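For comparison, the existing manual pin is set via an xattr; the ephemeral variants proposed here were expected to follow a similar xattr-style interface, but the exact names were still under discussion at the time, so treat the second line as an assumption:
$ setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/project            # manual export pin: pin this subtree to rank 2 (-1 clears it)
$ setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/scratch  # hypothetical ephemeral random pin, per this proposal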
- 04:22 PM Support #40906: Full CephFS causes hang when accessing inode.
- It seems that after some time (hour, days, weeks) things get fixed, but it sure would be nice to know how to get it i...
- 01:46 AM Support #40906: Full CephFS causes hang when accessing inode.
- please provide logs of both ceph-fuse and mds during accessing the bad file.
- 01:55 PM Bug #41242 (Fix Under Review): mds: re-introduce mds_log_max_expiring to control expiring concurr...
- 09:50 AM Bug #40939: mds: map client_caps been inserted by mistake
- keep the script happy (alternatively, we could delete #41098)
- 09:09 AM Backport #41283 (Rejected): nautilus: cephfs-shell: No error message is printed on ls of invalid ...
- 09:08 AM Backport #41276 (Resolved): nautilus: qa: malformed job
- https://github.com/ceph/ceph/pull/30038
- 09:07 AM Backport #41269 (Resolved): nautilus: cephfs-shell: Convert files path type from string to bytes
- https://github.com/ceph/ceph/pull/30057
- 09:07 AM Backport #41268 (Rejected): nautilus: cephfs-shell: onecmd throws TypeError
- 07:31 AM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- I think the first time I used the standard mimic workflow of @mds fail@ and once all MDSs are stopped, @fs remove@. T...
08/14/2019
- 10:23 PM Bug #41031 (Pending Backport): qa: malformed job
- 10:06 PM Bug #40430 (Pending Backport): cephfs-shell: No error message is printed on ls of invalid directo...
- 10:04 PM Bug #41164 (Pending Backport): cephfs-shell: onecmd throws TypeError
- 10:03 PM Bug #41163 (Pending Backport): cephfs-shell: Convert files path type from string to bytes
- 09:24 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- > [14:11:31] <@gregsfortytwo> batrick: looking at tracker.ceph.com/issues/41228 and it's got a lot going on but part...
- 09:10 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
- Can you include exactly what commands you ran? Did you still have clients mounted while deleting the FS?
- 04:22 PM Bug #41192 (Fix Under Review): mds: atime not being updated persistently
- 04:18 PM Feature #41220: mgr/volumes: add test case for blacklisted clients
- Related: test ceph-mgr getting blacklisted. It should recover somehow (probably close the libcephfs handle and get a ...
- 04:17 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- PR here: https://github.com/ceph/ceph/pull/29642
It occurs to me though that now that we have a way to get to the ...
- 04:14 PM Feature #16656 (Fix Under Review): mount.ceph: enable consumption of ceph keyring files
- 01:26 PM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
- https://github.com/ceph/ceph/pull/29664
- 12:08 PM Feature #41209: mds: create a configurable snapshot limit
- Zheng Yan wrote:
> Milind Changire wrote:
> > Thanks for the hint Zheng.
> >
> > How could the MDS return the st...
- 11:09 AM Feature #41209: mds: create a configurable snapshot limit
- Milind Changire wrote:
> Thanks for the hint Zheng.
>
> How could the MDS return the status "too many snaps" to t...
- 10:43 AM Feature #41209: mds: create a configurable snapshot limit
- Thanks for the hint Zheng.
How could the MDS return the status "too many snaps" to the caller ?
There's no error ...
- 08:07 AM Bug #41242 (Closed): mds: re-introduce mds_log_max_expiring to control expiring concurrency manually
- In some cases, a huge number of mds segments could be expired concurrently, which might bring very heavy loads to OSDs and we c...
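If the old option is restored under the same name, tuning it would presumably look like the following (illustrative only; the option is absent from releases that have not picked up this change):
$ ceph config set mds mds_log_max_expiring 20    # cap how many log segments may be expired in parallel
$ ceph config get mds.a mds_log_max_expiring     # confirm the running value for one daemon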
- 04:22 AM Backport #40944 (In Progress): nautilus: mgr: failover during in qa testing causes unresponsive c...
- https://github.com/ceph/ceph/pull/29649
- 02:12 AM Support #40906: Full CephFS causes hang when accessing inode.
- Okay, we had another data corruption incident, so I took some time to try looking deeper into the problem. I did some...
08/13/2019
- 08:01 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- I tried again, this time with a replicated pool and just one MDS. I think it's too early to draw definitive conclusio...
- 02:54 PM Bug #41140: mds: trim cache more regularly
- I believe this problem may be particularly severe when the main data pool is an EC pool. I am trying the same thing w...
- 01:17 PM Bug #41228 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
- Disclaimer: I am not entirely sure if this is strictly related to CephFS or a general problem when deleting pools wit...
- 11:14 AM Bug #41133 (In Progress): qa/tasks: update thrasher design
- 06:46 AM Feature #41220 (New): mgr/volumes: add test case for blacklisted clients
- see qa/tasks/cephfs/test_volumes.py
- 06:44 AM Bug #41218: mgr/volumes: retry spawning purge threads on failure
- thanks -- those were cached in my browser :P
- 05:16 AM Bug #41218 (Resolved): mgr/volumes: retry spawning purge threads on failure
- seen here: http://qa-proxy.ceph.com/teuthology/pdonnell-2019-08-07_15:57:31-fs-wip-pdonnell-testing-20190807.132723-d...
- 05:18 AM Bug #41219 (Resolved): mgr/volumes: send purge thread (and other) health warnings to `ceph status`
- so as to easily identify issues rather than scanning log files. Also, log any critical error(s) in cluster log.
- 01:34 AM Feature #41209: mds: create a configurable snapshot limit
- mds.0 has snaptable, which contains information about all snapshots. Each mds has a snapclient, which also caches sna...
08/12/2019
- 09:00 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- Jeff Layton wrote:
> That's certainly another possibility. I'm not sure it's any easier though.
>
> We'd have to ...
- 06:18 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- That's certainly another possibility. I'm not sure it's any easier though.
We'd have to scrape and parse the outpu...
- 06:05 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- Jeff Layton wrote:
> My current thinking is to link in libceph-common, create a context and fetch the keys using the...
- 05:37 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
- My current thinking is to link in libceph-common, create a context and fetch the keys using the same C++ routines tha...
- 03:49 PM Feature #16656 (In Progress): mount.ceph: enable consumption of ceph keyring files
- 03:46 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
- Sorry Jan, snapshots are not stable in Luminous and we don't spend time looking at snapshot related failures for that...
- 03:42 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- Each file will have an object in the default data pool (the data pool used at file system creation time) with an exte...
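As an illustration of that layout, the per-file objects and the backtrace xattr on a file's first object can be inspected directly (pool and object names here are placeholders):
$ rados -p cephfs_data ls | head                                        # objects are named <hex inode>.<hex block>
$ rados -p cephfs_data getxattr 10000000001.00000000 parent > bt.bin    # backtrace xattr lives on the first object
$ ceph-dencoder type inode_backtrace_t import bt.bin decode dump_json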
- 01:35 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- I had the same issue before our MDS and Mons died.
Journal was producing 2 files a few TB big and the metadatapool ...
- 01:34 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
- referenced: https://tracker.ceph.com/issues/41026
- 01:28 PM Bug #41204 (New): CephFS pool usage 3x above expected value and sparse journal dumps
- I am in the process of copying about 230 million small and medium-sized files to a CephFS and I have three active MDS...
- 02:03 PM Backport #41098: luminous: mds: map client_caps been inserted by mistake
- (snapshots are not supported/stable in luminous)
- 01:43 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
- 01:59 PM Backport #40162: mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Backport PR here:
https://github.com/ceph/ceph/pull/29609
- 12:59 PM Backport #40162 (In Progress): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
- 01:56 PM Feature #41209 (Resolved): mds: create a configurable snapshot limit
- Add a new config option that imposes a limit on the number of snapshots in a directory. Zheng has found in the past t...
- 01:31 PM Bug #40821 (In Progress): osdc: objecter ops output does not have useful time information
- 12:29 PM Bug #41192: mds: atime not being updated persistently
- Now that I've done a bit more investigation, I think there are actually two parts to this bug:
1) the MDS only upd...
- 06:54 AM Backport #40894 (In Progress): nautilus: mds: cleanup truncating inodes when standby replay mds t...
- https://github.com/ceph/ceph/pull/29591
- 05:32 AM Cleanup #41185 (Fix Under Review): mds: reorg FSMapUser header
08/09/2019
- 06:03 PM Bug #41192: mds: atime not being updated persistently
- Tracepoints from adding and removing caps for the inode:...
- 05:26 PM Bug #41192: mds: atime not being updated persistently
- ...maybe Fw too?
- 05:16 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
- xfstest generic/192 fails with kcephfs. It basically:
mounts the fs
creates a file and records the atime
waits a...
- 02:18 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
- We saw this on a cluster where two out of three mds servers needed to be rebooted. After the reboot both mds dumped c...
- 11:03 AM Cleanup #41185 (Resolved): mds: reorg FSMapUser header
- 10:16 AM Feature #41182 (Resolved): mgr/volumes: add `fs subvolume extend/shrink` commands
- To extend and shrink a subvolume, set the desired quota (see the sketch after this entry).
And during shrink, maybe error out if the desired shrunk size i...
- 10:02 AM Cleanup #41181 (Fix Under Review): mds: reorg FSMap header
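The sketch referenced above: today a subvolume's size is effectively its byte quota, so extend/shrink amounts to adjusting that quota (path and size are illustrative; the proposed commands would wrap this):
$ setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/cephfs/volumes/_nogroup/sub0   # "extend" to 10 GiB
$ getfattr -n ceph.quota.max_bytes /mnt/cephfs/volumes/_nogroup/sub0                  # verify the new limit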
- 09:55 AM Cleanup #41181 (Resolved): mds: reorg FSMap header
- 09:12 AM Bug #41069 (Closed): nautilus: test_subvolume_group_create_with_desired_mode fails with "Assertio...
- The test passed in a more recent nautilus test run that included the mgr/volumes backport PR https://github.com/ceph...
- 08:04 AM Cleanup #41178 (Fix Under Review): mds: reorg DamageTable header
- 07:54 AM Cleanup #41178 (Resolved): mds: reorg DamageTable header
- 06:12 AM Bug #39395 (Resolved): ceph: ceph fs auth fails
08/08/2019
- 01:49 PM Bug #41164 (Fix Under Review): cephfs-shell: onecmd throws TypeError
- 01:41 PM Bug #41164 (Resolved): cephfs-shell: onecmd throws TypeError
- cmd2 (0.9.15)
Python 3.7.4
Fedora 30
Due to changes in recent cmd module, on any command the following error is ...
- 01:17 PM Bug #41163 (Fix Under Review): cephfs-shell: Convert files path type from string to bytes
- 01:09 PM Bug #41163 (Resolved): cephfs-shell: Convert files path type from string to bytes
- 09:36 AM Bug #41140: mds: trim cache more regularly
- I have the following settings now, which seem to work okay-ish:...
- 04:56 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
- I forgot to say: the tool functions work, the warning/error doesn't affect its functionality but it's confusing if you...
- 04:55 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
- the cephfs-journal-tool but I also see this with the cephfs-data-scan
but I made a mistake, it's Ubuntu 16.04, on the...
08/07/2019
- 09:48 PM Bug #41141 (Fix Under Review): mds: recall capabilities more regularly when under cache pressure
- 09:47 PM Bug #41140 (Fix Under Review): mds: trim cache more regularly
- 04:53 PM Bug #41140: mds: trim cache more regularly
- We did much the same thing in the OSD. Previously we trimmed in a single thread at regular intervals, but now we tri...
- 09:39 AM Bug #41140: mds: trim cache more regularly
- Janek Bevendorff wrote:
> This may be obvious, but to put the whole thing into context: this cache trimming issue ca...
- 09:29 AM Bug #41140: mds: trim cache more regularly
- This may be obvious, but to put the whole thing into context: this cache trimming issue can make a CephFS permanently...
- 09:07 PM Bug #41034 (Need More Info): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- Can you please elaborate on this, which journal tool are you talking about?
If possible, could you provide steps to...
- 08:03 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
- 08:02 PM Feature #40617 (Resolved): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 08:02 PM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
- 08:02 PM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 02:42 PM Bug #41144: mount.ceph: doesn't accept "strictatime"
- Changing subject. "nostrictatime" seems to be intercepted by /bin/mount, so the mount helper doesn't need to handle it.
- 07:37 AM Bug #41148 (Fix Under Review): client: _readdir_cache_cb() may use the readdir_cache already clear
- 07:31 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- segment fault:inode=0x0
- 07:25 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- huanwen ren wrote:
> Zheng Yan wrote:
> > I think getattr does not affect parent directory inode's completeness
> ...
- 07:22 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- Zheng Yan wrote:
> I think getattr does not affect parent directory inode's completeness
From the log, there is a...
- 07:00 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
- I think getattr does not affect parent directory inode's completeness
- 06:56 AM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
- Calling function A means to get dir information from the cache, but in the while loop,
the contents of readdir_cach...
- 04:55 AM Bug #41147: mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn->first)
- this is temporarily fixed by wiping session table
- 04:34 AM Bug #41147 (Duplicate): mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn...
- After creating a new FS and running it for 2 days, my MDS is in a crash loop. I didn't try anything yet so far as to...
08/06/2019
- 10:36 PM Bug #41140: mds: trim cache more regularly
- Dan van der Ster wrote:
> FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is eno...
- 05:01 PM Bug #41140: mds: trim cache more regularly
- FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is enough, and increasing the thr...
- 04:42 PM Bug #41140 (Resolved): mds: trim cache more regularly
- Under -create- workloads that result in the acquisition of a lot of capabilities, the MDS can't trim the cache fast e...
- 07:56 PM Bug #41144 (Resolved): mount.ceph: doesn't accept "strictatime"
- The cephfs mount helper doesn't support either strictatime or nostrictatime. It should intercept those options and se...
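A sketch of the mounts this fix is meant to allow (monitor address and credentials are placeholders):
$ mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,strictatime     # helper should pass strictatime through to the kernel
$ mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,nostrictatime   # nostrictatime appears to be consumed by /bin/mount itself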
- 04:46 PM Bug #41141 (Resolved): mds: recall capabilities more regularly when under cache pressure
- If a client is doing a large parallel create workload, the MDS may not recall capabilities fast enough and the client...
- 02:46 AM Bug #41133 (Closed): qa/tasks: update thrasher design
- * Make the Thrasher class abstract by adding _do_thrash abstract function.
* Change OSDThrasher, RBDMirrorThrasher, ...
- 02:45 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
08/05/2019
- 11:09 PM Backport #41129 (Resolved): mimic: qa: power off still resulted in client sending session close
- https://github.com/ceph/ceph/pull/30233
- 11:09 PM Backport #41128 (Resolved): nautilus: qa: power off still resulted in client sending session close
- https://github.com/ceph/ceph/pull/29983
- 11:07 PM Backport #41118 (Rejected): nautilus: cephfs-shell: add CI testing with flake8
- 11:07 PM Backport #41114 (Resolved): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- https://github.com/ceph/ceph/pull/31283
- 11:07 PM Backport #41113 (Resolved): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- https://github.com/ceph/ceph/pull/30032
- 11:06 PM Backport #41112 (Rejected): nautilus: cephfs-shell: cd with no args has no effect
- 11:06 PM Backport #41108 (Resolved): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
- https://github.com/ceph/ceph/pull/29940
- 11:06 PM Backport #41107 (Resolved): nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
- https://github.com/ceph/ceph/pull/29938
- 11:06 PM Backport #41106 (Resolved): nautilus: mds: add command that modify session metadata
- https://github.com/ceph/ceph/pull/32245
- 11:06 PM Backport #41105 (Rejected): nautilus: cephfs-shell: flake8 blank line and indentation error
- 11:05 PM Backport #41101 (Rejected): luminous: tools/cephfs: memory leak in cephfs/Resetter.cc
- 11:05 PM Backport #41100 (Resolved): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29915
- 11:05 PM Backport #41099 (Resolved): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
- https://github.com/ceph/ceph/pull/29879
- 11:05 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
- 11:05 PM Backport #41097 (Resolved): mimic: mds: map client_caps been inserted by mistake
- https://github.com/ceph/ceph/pull/29833
- 11:04 PM Backport #41096 (Resolved): nautilus: mds: map client_caps been inserted by mistake
- https://github.com/ceph/ceph/pull/29878
- 11:04 PM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
- https://github.com/ceph/ceph/pull/29832
- 11:04 PM Backport #41094 (Resolved): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_...
- https://github.com/ceph/ceph/pull/29812
- 11:04 PM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
- https://github.com/ceph/ceph/pull/29811
- 11:04 PM Backport #41089 (Rejected): nautilus: cephfs-shell: Multiple flake8 errors
- 11:04 PM Backport #41088 (Resolved): mimic: qa: AssertionError: u'open' != 'stale'
- https://github.com/ceph/ceph/pull/29751
- 11:03 PM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
- https://github.com/ceph/ceph/pull/29750
- 04:05 PM Backport #40326: nautilus: mds: evict stale client when one of its write caps are stolen
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28583
merged
- 04:04 PM Backport #40324: nautilus: ceph_volume_client: d_name needs to be converted to string before using
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28609
merged
- 04:03 PM Backport #40839: nautilus: cephfs-shell: TypeError in poutput
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29156
merged
- 04:03 PM Backport #40842: nautilus: ceph-fuse: mount does not support the fallocate()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29157
merged
- 04:02 PM Backport #40843: nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29158
merged
- 04:02 PM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29186
merged
- 04:01 PM Backport #40438: nautilus: getattr on snap inode stuck
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29231
merged
- 04:00 PM Backport #40440: nautilus: mds: cannot switch mds state from standby-replay to active
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29233
merged
- 03:59 PM Backport #40443: nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29343
merged
- 03:58 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29344
merged
- 02:14 PM Bug #41072 (In Progress): scheduled cephfs snapshots (via ceph manager)
- 12:54 PM Bug #41072: scheduled cephfs snapshots (via ceph manager)
- - also, interface for fetching snap metadata (`flushed` state, etc...)...
- 12:23 PM Bug #41072 (Resolved): scheduled cephfs snapshots (via ceph manager)
- outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
Specify a snapshot schedule on any (sub)direct...
- 01:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
- 01:46 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos base on tokenbucket algorighm
- 01:45 PM Backport #41000 (New): luminous: client: failed to drop dn and release caps causing mds stary sta...
- Zheng, please take this one. The backport is non-trivial.
- 01:14 PM Feature #41074 (Resolved): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
mirror scheduled (and temporary) snapshots a r...
- 01:13 PM Feature #41073 (Rejected): cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- Introduce an rsync-like tool (cephfs-sync) for mirroring scheduled and temp cephfs snapshots to sync targets. sync tar...
- 01:02 PM Backport #41070 (In Progress): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 11:56 AM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- https://github.com/ceph/ceph/pull/29490
- 01:00 PM Backport #41071 (In Progress): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes...
- 11:58 AM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
- https://github.com/ceph/ceph/pull/29490
- 10:50 AM Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fails with "...
- Seen here in a nautilus run: http://qa-proxy.ceph.com/teuthology/yuriw-2019-07-30_20:57:10-fs-wip-yuri-testing-2019-07-...
- 05:23 AM Bug #41066: mds: skip trim mds cache if mdcache is not opened
- https://github.com/ceph/ceph/pull/29481
- 05:18 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
- ```
2019-07-24 14:51:28.028198 7f6dc2543700 1 mds.0.940446 active_start
2019-07-24 14:51:39.452890 7f6dc2543700 1...
- 12:29 AM Backport #41001 (In Progress): mimic: client: failed to drop dn and release caps causing mds star...
- 12:20 AM Backport #41002 (In Progress): nautilus:client: failed to drop dn and release caps causing mds st...