Activity

From 07/29/2019 to 08/27/2019

08/27/2019

10:38 PM Bug #41541 (Resolved): mgr/volumes: ephemerally pin volumes
Apply export_ephemeral_distributed to volumes by default. Provide the option to change this back to the default balancer. Patrick Donnelly
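
A minimal sketch of what enabling distributed ephemeral pinning on a volume root could look like from a client mount. The vxattr name (ceph.dir.pin.distributed) and the mount path are assumptions based on the proposal in #41302, not an interface confirmed by this ticket:

```python
# Sketch only: toggle distributed ephemeral pinning on a volume's root
# directory via a vxattr on a mounted client. Vxattr name and path are
# illustrative assumptions, not the merged interface.
import os

def pin_volume_distributed(volume_root="/mnt/cephfs/volumes/mygroup/myvol"):
    # "1" enables distributed ephemeral pinning for the directory's children
    os.setxattr(volume_root, "ceph.dir.pin.distributed", b"1")

def unpin_volume(volume_root="/mnt/cephfs/volumes/mygroup/myvol"):
    # "0" reverts to the default balancer behaviour
    os.setxattr(volume_root, "ceph.dir.pin.distributed", b"0")
```
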
04:42 PM Bug #41538 (Resolved): mds: wrong compat can cause MDS to be added daemon registry on mgr but not...
See Kefu's excellent synopsis of the problem: https://tracker.ceph.com/issues/41525#note-3 Patrick Donnelly
01:12 PM Backport #40343: luminous: mds: fix corner case of replaying open sessions
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28536
m...
Nathan Cutler
01:12 PM Backport #40041: luminous: avoid trimming too many log segments after mds failover
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28543
m...
Nathan Cutler
01:12 PM Backport #40221: luminous: mds: reset heartbeat during long-running loops in recovery
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28544
m...
Nathan Cutler
10:58 AM Backport #38686: luminous: kcephfs TestClientLimits.test_client_pin fails with "client caps fell ...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27040
m...
Nathan Cutler
10:58 AM Backport #38445: luminous: mds: drop cache does not timeout as expected
backport PR https://github.com/ceph/ceph/pull/27342
merge commit 5154062f2c4a1499ce74a518eb7bb54e9560aad5 (v12.2.12-...
Nathan Cutler
10:58 AM Backport #38340: luminous: mds: may leak gather during cache drop
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27342
m...
Nathan Cutler
10:58 AM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27679
m...
Nathan Cutler
10:57 AM Backport #39191: luminous: mds: crash during mds restart
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27737
m...
Nathan Cutler
10:57 AM Backport #39198: luminous: mds: we encountered "No space left on device" when moving huge number ...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27801
m...
Nathan Cutler
10:56 AM Backport #39208: luminous: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server:...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27840
m...
Nathan Cutler
10:55 AM Backport #39468: luminous: There is no punctuation mark or blank between tid and client_id in th...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/27848
m...
Nathan Cutler
10:55 AM Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28432
m...
Nathan Cutler
10:55 AM Backport #39231: luminous: kclient: nofail option not supported
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28436
m...
Nathan Cutler
10:55 AM Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28437
m...
Nathan Cutler
10:54 AM Backport #39213: luminous: mds: there is an assertion when calling Beacon::shutdown()
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28438
m...
Nathan Cutler
09:36 AM Feature #41182 (In Progress): mgr/volumes: add `fs subvolume extend/shrink` commands
Look at OpenStack manila's cephfs driver extend_share and shrink_share method implementation,
https://github.com/o...
Ramana Raja
09:09 AM Backport #41444 (In Progress): nautilus: mgr/volumes: handle incorrect pool_layout setting during...
https://github.com/ceph/ceph/pull/29926 Ramana Raja
09:09 AM Backport #41437 (In Progress): nautilus: mgr/volumes: subvolume and subvolume group path exists e...
https://github.com/ceph/ceph/pull/29926 Ramana Raja
08:51 AM Bug #23975 (Resolved): qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
08:51 AM Bug #24133 (Resolved): mds: broadcast quota to relevant clients when quota is explicitly set
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
07:22 AM Backport #40222: mimic: mds: reset heartbeat during long-running loops in recovery
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28918
m...
Nathan Cutler
07:22 AM Backport #40875: mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29187
m...
Nathan Cutler
07:22 AM Backport #39685: mimic: ceph-fuse: client hang because its bad session PipeConnection to mds
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29200
m...
Nathan Cutler
07:21 AM Backport #38099: mimic: mds: remove cache drop admin socket command
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29210
m...
Nathan Cutler
07:21 AM Backport #38687: mimic: kcephfs TestClientLimits.test_client_pin fails with "client caps fell bel...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29211
m...
Nathan Cutler
03:21 AM Backport #41100 (In Progress): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29915 Prashant D
03:19 AM Backport #41106 (In Progress): nautilus: mds: add command that modify session metadata
https://github.com/ceph/ceph/pull/29914 Prashant D

08/26/2019

08:26 PM Backport #39233: mimic: kclient: nofail option not supported
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28090
m...
Nathan Cutler
08:26 PM Backport #39472: mimic: mds: fail to resolve snapshot name contains '_'
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28186
m...
Nathan Cutler
08:25 PM Backport #39669: mimic: mds: output lock state in format dump
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28274
m...
Nathan Cutler
08:25 PM Backport #39679: mimic: pybind: add the lseek() function to pybind of cephfs
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28337
m...
Nathan Cutler
08:25 PM Backport #39689: mimic: mds: error "No space left on device" when create a large number of dirs
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28381
m...
Nathan Cutler
08:25 PM Backport #40168: mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nano...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28501
m...
Nathan Cutler
08:24 PM Backport #40342: mimic: mds: fix corner case of replaying open sessions
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28579
m...
Nathan Cutler
08:24 PM Backport #40042: mimic: avoid trimming too many log segments after mds failover
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28650
m...
Nathan Cutler
03:08 PM Backport #41508 (Resolved): nautilus: add information about active scrubs to "ceph -s" (and elsew...
https://github.com/ceph/ceph/pull/30704 Nathan Cutler
02:56 PM Bug #40489 (Resolved): cephfs-shell: name 'files' is not defined error in do_rm()
Nathan Cutler
02:55 PM Bug #40679 (Resolved): cephfs-shell: TypeError in poutput
Nathan Cutler
02:55 PM Backport #41495 (Resolved): nautilus: qa: 'ceph osd require-osd-release nautilus' fails
https://github.com/ceph/ceph/pull/31040 Nathan Cutler
02:51 PM Bug #40775 (Resolved): /src/include/xlist.h: 77: FAILED assert(_size == 0)
Nathan Cutler
02:50 PM Backport #41489 (Resolved): luminous: client: client should return EIO when it's unsafe reqs have...
https://github.com/ceph/ceph/pull/30242 Nathan Cutler
02:50 PM Backport #41488 (Resolved): nautilus: client: client should return EIO when it's unsafe reqs have...
https://github.com/ceph/ceph/pull/30043 Nathan Cutler
02:50 PM Backport #41487 (Resolved): mimic: client: client should return EIO when it's unsafe reqs have be...
https://github.com/ceph/ceph/pull/30241 Nathan Cutler
02:49 PM Backport #41477 (Resolved): nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second >=...
https://github.com/ceph/ceph/pull/30041 Nathan Cutler
02:49 PM Backport #41476 (Rejected): mimic: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= be...
Nathan Cutler
02:46 PM Documentation #41472 (Resolved): doc: add multiple active MDSs and Subtree Management in CephFS
Give a technical description of how subtrees are handled by MDSs. Also do the same for multiple active MDSs. Sidharth Anupkrishnan
02:45 PM Documentation #41470 (Resolved): Document requirements for using cephfs
Communicate high-level requirements (e.g. need 1-2 MDS; at least 2 pools; key auth and distribution) Varsha Rao
02:44 PM Bug #41434 (Fix Under Review): mds: infinite loop in Locker::file_update_finish()
Zheng Yan
12:53 PM Bug #41434 (Resolved): mds: infinite loop in Locker::file_update_finish()
... Zheng Yan
02:43 PM Backport #41468 (Rejected): mimic: mds: recall capabilities more regularly when under cache pressure
Nathan Cutler
02:43 PM Backport #41467 (Resolved): nautilus: mds: recall capabilities more regularly when under cache pr...
https://github.com/ceph/ceph/pull/30040 Nathan Cutler
02:43 PM Backport #41466 (Resolved): mimic: mount.ceph: doesn't accept "strictatime"
https://github.com/ceph/ceph/pull/30240 Nathan Cutler
02:43 PM Backport #41465 (Resolved): nautilus: mount.ceph: doesn't accept "strictatime"
https://github.com/ceph/ceph/pull/30039 Nathan Cutler
02:30 PM Documentation #41451 (Resolved): Document distributed metadata cache
Explain distributed metadata cache maintained by MDS/clients. This should touch on capabilities, cache management, an... Jeff Layton
02:22 PM Backport #41444 (Resolved): nautilus: mgr/volumes: handle incorrect pool_layout setting during `f...
Nathan Cutler
02:21 PM Backport #41437 (Resolved): nautilus: mgr/volumes: subvolume and subvolume group path exists even...
Nathan Cutler
01:50 PM Bug #41419: mds: missing dirfrag damaged check before CDir::fetch
0> 2019-08-23 15:51:03.871241 7f990ee3e700 -1 /build/ceph-12.2.8/src/include/elist.h: In function 'elist<T>::~elist()... Zheng Yan
09:04 AM Cleanup #41430 (Fix Under Review): mds: reorg JournalPointer header
Varsha Rao
09:00 AM Cleanup #41430 (Resolved): mds: reorg JournalPointer header
Varsha Rao
08:58 AM Backport #41002: nautilus:client: failed to drop dn and release caps causing mds stary stacking.
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29478
m...
Nathan Cutler
07:36 AM Cleanup #41428 (Fix Under Review): mds: reorg InoTable header
Varsha Rao
07:30 AM Cleanup #41428 (Resolved): mds: reorg InoTable header
Varsha Rao
03:50 AM Backport #41099 (In Progress): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29879 Prashant D
03:46 AM Backport #41096 (In Progress): nautilus: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29878 Prashant D

08/25/2019

09:50 PM Bug #41426 (Can't reproduce): mds: wrongly signals directory is empty when dentry is damaged?
In this test:
/ceph/teuthology-archive/pdonnell-2019-08-24_04:19:23-fs-wip-pdonnell-testing-20190824.014616-distro...
Patrick Donnelly
04:36 AM Bug #41140 (Pending Backport): mds: trim cache more regularly
Patrick Donnelly
04:36 AM Bug #41141 (Pending Backport): mds: recall capabilities more regularly when under cache pressure
Patrick Donnelly
04:33 AM Bug #41337 (Pending Backport): mgr/volumes: handle incorrect pool_layout setting during `fs subvo...
Patrick Donnelly
04:32 AM Bug #41371 (Pending Backport): mgr/volumes: subvolume and subvolume group path exists even when c...
Patrick Donnelly

08/24/2019

11:49 AM Bug #41419: mds: missing dirfrag damaged check before CDir::fetch
another option is make sure all type of callback contexts (passing to CDir::fetch) handle error code Zheng Yan
11:43 AM Bug #41419 (New): mds: missing dirfrag damaged check before CDir::fetch
we don't have a damage check before every CDir::fetch. It can cause a request leak.
A user encountered the following cra...
Zheng Yan

08/23/2019

11:17 PM Bug #36370 (Pending Backport): add information about active scrubs to "ceph -s" (and elsewhere)
Patrick Donnelly
11:11 PM Bug #40877 (Pending Backport): client: client should return EIO when it's unsafe reqs have been d...
Patrick Donnelly
11:08 PM Cleanup #41181 (Resolved): mds: reorg FSMap header
Patrick Donnelly
11:07 PM Support #40906: Full CephFS causes hang when accessing inode.
The MDS crashed while I was working in the damaged directories at `2019-08-23 15:51:03.871241`. The standby took over... Robert LeBlanc
10:43 PM Bug #41415 (Can't reproduce): mgr/volumes: AssertionError: '33' != 'new_pool'
... Patrick Donnelly
08:35 PM Feature #41311 (Fix Under Review): deprecate CephFS inline_data support
Patrick Donnelly
05:09 PM Bug #40773 (Pending Backport): qa: 'ceph osd require-osd-release nautilus' fails
Patrick Donnelly
03:56 AM Backport #41097 (In Progress): mimic: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29833 Prashant D
03:54 AM Backport #41095 (In Progress): nautilus: qa: race in test_standby_replay_singleton_fail
https://github.com/ceph/ceph/pull/29832 Prashant D
01:40 AM Backport #41000 (In Progress): luminous: client: failed to drop dn and release caps causing mds s...
Zheng Yan

08/22/2019

10:06 PM Backport #39691 (In Progress): luminous: mds: error "No space left on device" when create a larg...
Patrick Donnelly
04:02 PM Bug #41398 (Fix Under Review): qa: KeyError: 'cluster' in ceph.stop
Patrick Donnelly
03:59 PM Bug #41398 (Resolved): qa: KeyError: 'cluster' in ceph.stop
... Patrick Donnelly
02:31 PM Support #40906: Full CephFS causes hang when accessing inode.
Did the logs provide the information that you needed, or do you need more/different information? Robert LeBlanc
02:04 PM Bug #40821 (Fix Under Review): osdc: objecter ops output does not have useful time information
Varsha Rao
01:28 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
Updated PR here:
https://github.com/ceph/ceph/pull/29817
This not only allows the helper to find cephx secr...
Jeff Layton
10:20 AM Bug #24403: mon failed to return metadata for mds
this ticket is now rather old. do you mind, if I just close it? Sebastian Wagner
04:15 AM Bug #40773 (Fix Under Review): qa: 'ceph osd require-osd-release nautilus' fails
Patrick Donnelly
03:39 AM Backport #41093 (In Progress): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery...
https://github.com/ceph/ceph/pull/29811 Prashant D
03:37 AM Backport #41094 (In Progress): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
https://github.com/ceph/ceph/pull/29812 Prashant D
01:24 AM Bug #41133 (Fix Under Review): qa/tasks: update thrasher design
Jos Collin

08/21/2019

10:12 PM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
NOTE: do not merge to mimic until #41354 merges and is ready for backport since this introduces a failure. Jason Dillaman
07:37 PM Bug #41371 (Fix Under Review): mgr/volumes: subvolume and subvolume group path exists even when c...
Ramana Raja
09:09 AM Bug #41371 (Resolved): mgr/volumes: subvolume and subvolume group path exists even when creation ...
(testenv) [rraja@bzn build]$ ./bin/ceph fs subvolume create a subvol00 --pool_layout invalid_pool
Error EINVAL: Trac...
Ramana Raja
05:59 PM Bug #41133 (Resolved): qa/tasks: update thrasher design
Patrick Donnelly
05:59 PM Feature #10369 (Resolved): qa-suite: detect unexpected MDS failovers and daemon crashes
Patrick Donnelly
02:46 PM Backport #41071: nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr...
backport PR https://github.com/ceph/ceph/pull/29490
merge commit f05a301b92f574edb17e8dff73fb65f3d6b032d0 (v14.2.2-3...
Nathan Cutler
02:46 PM Backport #41070: nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29490
m...
Nathan Cutler
02:43 PM Backport #40326 (Resolved): nautilus: mds: evict stale client when one of its write caps are stolen
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28583
m...
Nathan Cutler
02:42 PM Backport #40324 (Resolved): nautilus: ceph_volume_client: d_name needs to be converted to string ...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/28609
m...
Nathan Cutler
02:42 PM Backport #40839 (Resolved): nautilus: cephfs-shell: TypeError in poutput
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29156
m...
Nathan Cutler
02:42 PM Backport #40842 (Resolved): nautilus: ceph-fuse: mount does not support the fallocate()
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29157
m...
Nathan Cutler
02:42 PM Backport #40843 (Resolved): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29158
m...
Nathan Cutler
02:41 PM Backport #40874 (Resolved): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29186
m...
Nathan Cutler
02:41 PM Backport #40438 (Resolved): nautilus: getattr on snap inode stuck
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29231
m...
Nathan Cutler
02:41 PM Backport #40440 (Resolved): nautilus: mds: cannot switch mds state from standby-replay to active
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29233
m...
Nathan Cutler
02:40 PM Backport #40443 (Resolved): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when o...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29343
m...
Nathan Cutler
02:40 PM Backport #40445 (Resolved): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_sn...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29344
m...
Nathan Cutler
02:22 PM Backport #40845 (Resolved): nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29159
m...
Nathan Cutler
09:01 AM Backport #40796: nautilus: mgr / volumes: support asynchronous subvolume deletes
merge commit a7a380a
v14.2.2-16-ga7a380a44f
Nathan Cutler

08/20/2019

09:19 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
Okay, the problem is not what I first thought.
The branch where we look at the data pool unconditionally is only t...
Greg Farnum
08:52 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
Okay so we definitely need to figure out the PGStats fix, and try to dig up the appropriate FS bug as a separate tick... Greg Farnum
01:31 PM Backport #40325 (In Progress): mimic: ceph_volume_client: d_name needs to be converted to string ...
Nathan Cutler
01:22 PM Backport #40130 (In Progress): mimic: Document behaviour of fsync-after-close
Nathan Cutler
01:12 PM Backport #41089 (In Progress): nautilus: cephfs-shell: Multiple flake8 errors
Nathan Cutler
11:01 AM Bug #41337 (Fix Under Review): mgr/volumes: handle incorrect pool_layout setting during `fs subvo...
Ramana Raja
06:10 AM Backport #38339: mimic: mds: may leak gather during cache drop
This needs https://github.com/ceph/ceph/pull/28452 (tracker https://tracker.ceph.com/issues/38131) to be merged for th... Venky Shankar
02:44 AM Bug #41346 (Fix Under Review): mds: MDSIOContextBase instance leak
Patrick Donnelly
02:32 AM Bug #41346 (Resolved): mds: MDSIOContextBase instance leak
From time to time, we see mds crushes when shutting down:... Xuehan Xu
02:21 AM Backport #41088 (In Progress): mimic: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29751 Prashant D
02:19 AM Backport #41087 (In Progress): nautilus: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29750 Prashant D

08/19/2019

08:27 PM Backport #41002 (Resolved): nautilus:client: failed to drop dn and release caps causing mds stary...
Patrick Donnelly
04:44 PM Bug #41329 (Fix Under Review): mds: reject sessionless messages
Patrick Donnelly
01:37 AM Bug #41329 (Resolved): mds: reject sessionless messages
src/mds/Server.cc:
In Server::handle_client_session, the MDS should reject sessionless messages.
guodong xiao
04:37 PM Bug #41144 (Pending Backport): mount.ceph: doesn't accept "strictatime"
Patrick Donnelly
04:35 PM Bug #41006 (Pending Backport): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before...
Patrick Donnelly
02:19 PM Bug #41337 (Resolved): mgr/volumes: handle incorrect pool_layout setting during `fs subvolume/sub...
Instead of a traceback, raise a clear error and log an error message when an FS subvolume or subvolume group is created wi... Ramana Raja
01:40 PM Bug #41219 (Fix Under Review): mgr/volumes: send purge thread (and other) health warnings to `cep...
Venky Shankar
01:40 PM Bug #41327 (Fix Under Review): mds: dirty rstat lost during scatter-gather process
Patrick Donnelly
01:40 PM Bug #41218 (Fix Under Review): mgr/volumes: retry spawning purge threads on failure
Venky Shankar
09:43 AM Bug #36348: luminous(?): blogbench I/O with two kernel clients; one stalls
(nautilus) http://qa-proxy.ceph.com/teuthology/yuriw-2019-08-15_23:26:08-multimds-wip-yuri5-testing-2019-08-15-2024-n... Venky Shankar
08:27 AM Feature #41302: mds: add ephemeral random and distributed export pins
Patrick Donnelly wrote:
> Sidharth, I've discussed this with Doug and we'll be assigning this to you.
>
> Sidhart...
Sidharth Anupkrishnan

08/18/2019

01:55 PM Bug #41327 (Fix Under Review): mds: dirty rstat lost during scatter-gather process
In the following scenario, the current lock's dirty state could be lost:
# 1. current lock's state is LOCK_LOCK;
...
Xuehan Xu

08/16/2019

09:57 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
https://github.com/ceph/ceph/pull/29715 Patrick Donnelly
07:37 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
http://pulpito.ceph.com/pdonnell-2019-08-15_13:19:31-fs-wip-pdonnell-testing-20190814.222632-distro-basic-smithi/
...
Patrick Donnelly
07:16 PM Documentation #41316 (Fix Under Review): doc: update documentation for LazyIO
Patrick Donnelly
03:44 PM Documentation #41316 (Resolved): doc: update documentation for LazyIO
Update the documentation with usage info about the LazyIO methods: lazyio_propagate() and lazyio_synchronize(). Also ... Sidharth Anupkrishnan
06:52 PM Feature #41302: mds: add ephemeral random and distributed export pins
Sidharth, I've discussed this with Doug and we'll be assigning this to you.
Sidharth Anupkrishnan wrote:
> Nice!...
Patrick Donnelly
06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
Nice!
I have a doubt regarding how we could use consistent hashing for the 2nd case: "export_ephemeral_random" pinni...
Sidharth Anupkrishnan
04:04 PM Feature #41302: mds: add ephemeral random and distributed export pins
Here are some scripts shared by Dan from CERN that can be used to manually test random subtree pinning: https://githu... Patrick Donnelly
05:38 PM Support #40906: Full CephFS causes hang when accessing inode.
I've sent an e-mail to Zheng with a link to download the logs due to sensitive info. The client requested the file 0.... Robert LeBlanc
05:33 PM Bug #41319 (Can't reproduce): ceph.in: pool creation fails with "AttributeError: 'str' object has...
... Patrick Donnelly
04:42 PM Bug #41192 (In Progress): mds: atime not being updated persistently
I've dropped the PR for now, as Zheng pointed out that atime is not actually tied to Fr caps after all, but rather to... Jeff Layton
12:52 PM Bug #40298 (Resolved): cephfs-shell: 'rmdir *' does not remove all directories
The issue is resolved by this commit: https://github.com/ceph/ceph/commit/4c968b1f30faab9f9013dee95043ccf5f38f5d20. Varsha Rao
11:41 AM Feature #41311 (Resolved): deprecate CephFS inline_data support
I sent a proposal to the various ceph mailing lists to deprecate inline_data support for Octopus. At this point, we m... Jeff Layton
10:51 AM Bug #41310 (Resolved): client: lazyio synchronize does not get file size
LazyIO synchronize fails to make the writes propagated by other clients/fds visible to the current f... Sidharth Anupkrishnan
08:08 AM Backport #37761 (In Progress): mimic: mds: deadlock when setting config value via admin socket
Nathan Cutler

08/15/2019

06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
It's worth noting that the only difference between the two options is that export_ephemeral_distributed is not hierar... Patrick Donnelly
06:15 PM Feature #41302 (Resolved): mds: add ephemeral random and distributed export pins
Background: export pins [1] are an effective way to distribute metadata load for large workloads without the metadata... Patrick Donnelly
04:22 PM Support #40906: Full CephFS causes hang when accessing inode.
It seems that after some time (hours, days, weeks) things get fixed, but it sure would be nice to know how to get it i... Robert LeBlanc
01:46 AM Support #40906: Full CephFS causes hang when accessing inode.
please provide logs of both ceph-fuse and mds during accessing the bad file. Zheng Yan
01:55 PM Bug #41242 (Fix Under Review): mds: re-introudce mds_log_max_expiring to control expiring concurr...
Patrick Donnelly
09:50 AM Bug #40939: mds: map client_caps been inserted by mistake
keep the script happy (alternatively, we could delete #41098) Nathan Cutler
09:09 AM Backport #41283 (Rejected): nautilus: cephfs-shell: No error message is printed on ls of invalid ...
Nathan Cutler
09:08 AM Backport #41276 (Resolved): nautilus: qa: malformed job
https://github.com/ceph/ceph/pull/30038 Nathan Cutler
09:07 AM Backport #41269 (Resolved): nautilus: cephfs-shell: Convert files path type from string to bytes
https://github.com/ceph/ceph/pull/30057 Nathan Cutler
09:07 AM Backport #41268 (Rejected): nautilus: cephfs-shell: onecmd throws TypeError
Nathan Cutler
07:31 AM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
I think the first time I used the standard mimic workflow of @mds fail@ and once all MDSs are stopped, @fs remove@. T... Janek Bevendorff

08/14/2019

10:23 PM Bug #41031 (Pending Backport): qa: malformed job
Patrick Donnelly
10:06 PM Bug #40430 (Pending Backport): cephfs-shell: No error message is printed on ls of invalid directo...
Patrick Donnelly
10:04 PM Bug #41164 (Pending Backport): cephfs-shell: onecmd throws TypeError
Patrick Donnelly
10:03 PM Bug #41163 (Pending Backport): cephfs-shell: Convert files path type from string to bytes
Patrick Donnelly
09:24 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
> [14:11:31] <@gregsfortytwo> batrick: looking at tracker.ceph.com/issues/41228 and it's got a lot going on but part... Greg Farnum
09:10 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
Can you include exactly what commands you ran? Did you still have clients mounted while deleting the FS? Greg Farnum
04:22 PM Bug #41192 (Fix Under Review): mds: atime not being updated persistently
Patrick Donnelly
04:18 PM Feature #41220: mgr/volumes: add test case for blacklisted clients
Related: test ceph-mgr getting blacklisted. It should recover somehow (probably close the libcephfs handle and get a ... Patrick Donnelly
04:17 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
PR here: https://github.com/ceph/ceph/pull/29642
It occurs to me though that now that we have a way to get to the ...
Jeff Layton
04:14 PM Feature #16656 (Fix Under Review): mount.ceph: enable consumption of ceph keyring files
Patrick Donnelly
01:26 PM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
https://github.com/ceph/ceph/pull/29664 Venky Shankar
12:08 PM Feature #41209: mds: create a configurable snapshot limit
Zheng Yan wrote:
> Milind Changire wrote:
> > Thanks for the hint Zheng.
> >
> > How could the MDS return the st...
Milind Changire
11:09 AM Feature #41209: mds: create a configurable snapshot limit
Milind Changire wrote:
> Thanks for the hint Zheng.
>
> How could the MDS return the status "too many snaps" to t...
Zheng Yan
10:43 AM Feature #41209: mds: create a configurable snapshot limit
Thanks for the hint Zheng.
How could the MDS return the status "too many snaps" to the caller ?
There's no error ...
Milind Changire
08:07 AM Bug #41242 (Closed): mds: re-introudce mds_log_max_expiring to control expiring concurrency manually
In some cases, a huge number of MDS segments could be expired concurrently, which might bring very heavy load to OSDs and we c... Zhi Zhang
04:22 AM Backport #40944 (In Progress): nautilus: mgr: failover during in qa testing causes unresponsive c...
https://github.com/ceph/ceph/pull/29649 Prashant D
02:12 AM Support #40906: Full CephFS causes hang when accessing inode.
Okay, we had another data corruption incident, so I took some time to try looking deeper into the problem. I did some... Robert LeBlanc

08/13/2019

08:01 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
I tried again, this time with a replicated pool and just one MDS. I think it's too early to draw definitive conclusio... Janek Bevendorff
02:54 PM Bug #41140: mds: trim cache more regularly
I believe this problem may be particularly severe when the main data pool is an EC pool. I am trying the same thing w... Janek Bevendorff
01:17 PM Bug #41228 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
Disclaimer: I am not entirely sure if this is strictly related to CephFS or a general problem when deleting pools wit... Janek Bevendorff
11:14 AM Bug #41133 (In Progress): qa/tasks: update thrasher design
Jos Collin
06:46 AM Feature #41220 (New): mgr/volumes: add test case for blacklisted clients
see qa/tasks/cephfs/test_volumes.py Venky Shankar
06:44 AM Bug #41218: mgr/volumes: retry spawning purge threads on failure
thanks -- those were cached in my browser :P Venky Shankar
05:16 AM Bug #41218 (Resolved): mgr/volumes: retry spawning purge threads on failure
seen here: http://qa-proxy.ceph.com/teuthology/pdonnell-2019-08-07_15:57:31-fs-wip-pdonnell-testing-20190807.132723-d... Venky Shankar
05:18 AM Bug #41219 (Resolved): mgr/volumes: send purge thread (and other) health warnings to `ceph status`
so as to easily identify issues rather than scanning log files. Also, log any critical error(s) in cluster log. Venky Shankar
01:34 AM Feature #41209: mds: create a configurable snapshot limit
mds.0 has snaptable, which contains information about all snapshots. Each mds has a snapclient, which also caches sna... Zheng Yan

08/12/2019

09:00 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
Jeff Layton wrote:
> That's certainly another possibility. I'm not sure it's any easier though.
>
> We'd have to ...
Patrick Donnelly
06:18 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
That's certainly another possibility. I'm not sure it's any easier though.
We'd have to scrape and parse the outpu...
Jeff Layton
06:05 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
Jeff Layton wrote:
> My current thinking is to link in libceph-common, create a context and fetch the keys using the...
Patrick Donnelly
05:37 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
My current thinking is to link in libceph-common, create a context and fetch the keys using the same C++ routines tha... Jeff Layton
03:49 PM Feature #16656 (In Progress): mount.ceph: enable consumption of ceph keyring files
Patrick Donnelly
03:46 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
Sorry Jan, snapshots are not stable in Luminous and we don't spend time looking at snapshot related failures for that... Patrick Donnelly
03:42 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
Each file will have an object in the default data pool (the data pool used at file system creation time) with an exte... Patrick Donnelly
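
To make the layout mentioned above concrete, a hedged sketch of inspecting the per-file head object: every file keeps an object named <inode-hex>.00000000 in the default data pool, and its backtrace lives in that object's "parent" xattr. The pool name and inode number here are illustrative only:

```python
# Sketch: read the backtrace xattr of a file's head object in the default
# data pool using the rados Python binding. Pool name and inode are
# placeholder assumptions.
import rados

def read_backtrace(pool="cephfs_data", ino=0x10000000000,
                   conffile="/etc/ceph/ceph.conf"):
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(pool)
        try:
            # returns the raw, encoded backtrace blob
            return ioctx.get_xattr("%x.00000000" % ino, "parent")
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```
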
01:35 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
I had the same issue before our MDS and MONs died.
The journal was producing 2 files a few TB in size and the metadata pool ...
Anonymous
01:34 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
referenced: https://tracker.ceph.com/issues/41026 Anonymous
01:28 PM Bug #41204 (New): CephFS pool usage 3x above expected value and sparse journal dumps
I am in the process of copying about 230 million small and medium-sized files to a CephFS and I have three active MDS... Janek Bevendorff
02:03 PM Backport #41098: luminous: mds: map client_caps been inserted by mistake
(snapshots are not supported/stable in luminous) Patrick Donnelly
01:43 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
Patrick Donnelly
01:59 PM Backport #40162: mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
Backport PR here:
https://github.com/ceph/ceph/pull/29609
Jeff Layton
12:59 PM Backport #40162 (In Progress): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
Jeff Layton
01:56 PM Feature #41209 (Resolved): mds: create a configurable snapshot limit
Add a new config option that imposes a limit on the number of snapshots in a directory. Zheng has found in the past t... Patrick Donnelly
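
As a concrete illustration of the intended behavior, a qa-style sketch that creates snapshots up to a configured per-directory limit and expects the next one to fail. The paths, the limit handling, and the choice of EMLINK as the returned error are assumptions for illustration; the actual errno was still being discussed in this ticket:

```python
# Illustrative only, not the merged implementation: exercise a hypothetical
# per-directory snapshot limit from a client mount.
import errno
import os

def exercise_snapshot_limit(dirpath, limit):
    snapdir = os.path.join(dirpath, ".snap")
    for i in range(limit):
        os.mkdir(os.path.join(snapdir, "snap-%d" % i))
    try:
        os.mkdir(os.path.join(snapdir, "snap-overflow"))
    except OSError as e:
        # EMLINK is an assumed choice of error code for "too many snapshots"
        assert e.errno == errno.EMLINK, "unexpected errno %d" % e.errno
    else:
        raise AssertionError("snapshot limit was not enforced")
```
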
01:31 PM Bug #40821 (In Progress): osdc: objecter ops output does not have useful time information
Varsha Rao
12:29 PM Bug #41192: mds: atime not being updated persistently
Now that I've done a bit more investigation, I think there are actually two parts to this bug:
1) the MDS only upd...
Jeff Layton
06:54 AM Backport #40894 (In Progress): nautilus: mds: cleanup truncating inodes when standby replay mds t...
https://github.com/ceph/ceph/pull/29591 Prashant D
05:32 AM Cleanup #41185 (Fix Under Review): mds: reorg FSMapUser header
Varsha Rao

08/09/2019

06:03 PM Bug #41192: mds: atime not being updated persistently
Tracepoints from adding and removing caps for the inode:... Jeff Layton
05:26 PM Bug #41192: mds: atime not being updated persistently
...maybe Fw too? Jeff Layton
05:16 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
xfstest generic/192 fails with kcephfs. It basically:
mounts the fs
creates a file and records the atime
waits a...
Jeff Layton
02:18 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
We saw this on a cluster where two out of three mds servers needed to be rebooted. After the reboot both mds dumped c... Jan Fajerski
11:03 AM Cleanup #41185 (Resolved): mds: reorg FSMapUser header
Varsha Rao
10:16 AM Feature #41182 (Resolved): mgr/volumes: add `fs subvolume extend/shrink` commands
To extend or shrink a subvolume, set the desired quota.
And during shrink, maybe error out if the desired shrunk size i...
Ramana Raja
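
A rough sketch of the quota-based approach described above, using the libcephfs Python binding. The helper name and the shrink guard are hypothetical; the actual `fs subvolume extend/shrink` interface had not been designed yet:

```python
# Hypothetical helper, not the merged mgr/volumes code: grow or shrink a
# subvolume by updating ceph.quota.max_bytes on its directory. fs_handle is
# an open cephfs.LibCephFS connection.
def resize_subvolume(fs_handle, subvol_path, new_size, allow_shrink=True):
    if not allow_shrink:
        try:
            current = int(fs_handle.getxattr(subvol_path, "ceph.quota.max_bytes"))
        except Exception:
            current = 0  # no quota set yet
        if 0 < new_size < current:
            raise ValueError("refusing to shrink subvolume below current quota")
    fs_handle.setxattr(subvol_path, "ceph.quota.max_bytes",
                       str(new_size).encode("utf-8"), 0)
```
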
10:02 AM Cleanup #41181 (Fix Under Review): mds: reorg FSMap header
Varsha Rao
09:55 AM Cleanup #41181 (Resolved): mds: reorg FSMap header
Varsha Rao
09:12 AM Bug #41069 (Closed): nautilus: test_subvolume_group_create_with_desired_mode fails with "Assertio...
The test passed in a more recent nautilus test run that included the mgr/volumes backport PR https://github.com/ceph... Ramana Raja
08:04 AM Cleanup #41178 (Fix Under Review): mds: reorg DamageTable header
Varsha Rao
07:54 AM Cleanup #41178 (Resolved): mds: reorg DamageTable header
Varsha Rao
06:12 AM Bug #39395 (Resolved): ceph: ceph fs auth fails
Varsha Rao

08/08/2019

01:49 PM Bug #41164 (Fix Under Review): cephfs-shell: onecmd throws TypeError
Varsha Rao
01:41 PM Bug #41164 (Resolved): cephfs-shell: onecmd throws TypeError
cmd2 (0.9.15)
Python 3.7.4
Fedora 30
Due to changes in the recent cmd2 module, on any command the following error is ...
Varsha Rao
01:17 PM Bug #41163 (Fix Under Review): cephfs-shell: Convert files path type from string to bytes
Varsha Rao
01:09 PM Bug #41163 (Resolved): cephfs-shell: Convert files path type from string to bytes
Varsha Rao
09:36 AM Bug #41140: mds: trim cache more regularly
I have the following settings now, which seem to work okay-ish:... Janek Bevendorff
04:56 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
I forgot to say: the tool functions work; the warning/error doesn't affect its functionality, but it's confusing if you... Anonymous
04:55 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
the cephfs-journal-tool, but I also see this with cephfs-data-scan.
But I made a mistake: it's Ubuntu 16.04, on the...
Anonymous

08/07/2019

09:48 PM Bug #41141 (Fix Under Review): mds: recall capabilities more regularly when under cache pressure
Patrick Donnelly
09:47 PM Bug #41140 (Fix Under Review): mds: trim cache more regularly
Patrick Donnelly
04:53 PM Bug #41140: mds: trim cache more regularly
We did much the same thing in the OSD. Previously we trimmed in a single thread at regular intervals, but now we tri... Mark Nelson
09:39 AM Bug #41140: mds: trim cache more regularly
Janek Bevendorff wrote:
> This may be obvious, but to put the whole thing into context: this cache trimming issue ca...
Dan van der Ster
09:29 AM Bug #41140: mds: trim cache more regularly
This may be obvious, but to put the whole thing into context: this cache trimming issue can make a CephFS permanently... Janek Bevendorff
09:07 PM Bug #41034 (Need More Info): cephfs-journal-tool: NetHandler create_socket couldn't create socket
Can you please elaborate on this? Which journal tool are you talking about?
If possible, could you provide steps to...
Neha Ojha
08:03 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
Patrick Donnelly
08:02 PM Feature #40617 (Resolved): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
08:02 PM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
Patrick Donnelly
08:02 PM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
02:42 PM Bug #41144: mount.ceph: doesn't accept "strictatime"
Changing subject. "nostrictatime" seems to be intercepted by /bin/mount, so the mount helper doesn't need to handle it. Jeff Layton
07:37 AM Bug #41148 (Fix Under Review): client: _readdir_cache_cb() may use the readdir_cache already clear
Zheng Yan
07:31 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
segmentation fault: inode=0x0
huanwen ren
07:25 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
huanwen ren wrote:
> Zheng Yan wrote:
> > I think getattr does not affect parent directory inode's completeness
> ...
huanwen ren
07:22 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
Zheng Yan wrote:
> I think getattr does not affect parent directory inode's completeness
From the log, there is a...
huanwen ren
07:00 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
I think getattr does not affect parent directory inode's completeness Zheng Yan
06:56 AM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
Calling function A is meant to get dir information from the cache, but in the while loop,
the contents of readdir_cach...
huanwen ren
04:55 AM Bug #41147: mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn->first)
this is temporarily fixed by wiping the session table Anonymous
04:34 AM Bug #41147 (Duplicate): mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn...
After creating a new FS and running it for 2 days, my MDS is in a crash loop. I didn't try anything yet so far as to... Anonymous

08/06/2019

10:36 PM Bug #41140: mds: trim cache more regularly
Dan van der Ster wrote:
> FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is eno...
Patrick Donnelly
05:01 PM Bug #41140: mds: trim cache more regularly
FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is enough, and increasing the thr... Dan van der Ster
04:42 PM Bug #41140 (Resolved): mds: trim cache more regularly
Under -create- workloads that result in the acquisition of a lot of capabilities, the MDS can't trim the cache fast e... Patrick Donnelly
07:56 PM Bug #41144 (Resolved): mount.ceph: doesn't accept "strictatime"
The cephfs mount helper doesn't support either strictatime or nostrictatime. It should intercept those options and se... Jeff Layton
04:46 PM Bug #41141 (Resolved): mds: recall capabilities more regularly when under cache pressure
If a client is doing a large parallel create workload, the MDS may not recall capabilities fast enough and the client... Patrick Donnelly
02:46 AM Bug #41133 (Closed): qa/tasks: update thrasher design
* Make the Thrasher class abstract by adding a _do_thrash abstract function.
* Change OSDThrasher, RBDMirrorThrasher, ...
Jos Collin
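
A minimal sketch of the class layout this ticket describes, assuming only the _do_thrash name from the description above; the subclass body is a placeholder, not the merged qa/tasks code:

```python
# Sketch of the proposed hierarchy: Thrasher becomes an abstract base class
# and every concrete thrasher supplies its own _do_thrash.
from abc import ABC, abstractmethod

class Thrasher(ABC):
    @abstractmethod
    def _do_thrash(self):
        """One pass of fault injection; implemented by each concrete thrasher."""

class MDSThrasher(Thrasher):
    def _do_thrash(self):
        # e.g. fail a random active MDS, wait for a standby to take over,
        # then revive the failed daemon (placeholder body)
        pass
```
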
02:45 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
Zhi Zhang

08/05/2019

11:09 PM Backport #41129 (Resolved): mimic: qa: power off still resulted in client sending session close
https://github.com/ceph/ceph/pull/30233 Patrick Donnelly
11:09 PM Backport #41128 (Resolved): nautilus: qa: power off still resulted in client sending session close
https://github.com/ceph/ceph/pull/29983 Patrick Donnelly
11:07 PM Backport #41118 (Rejected): nautilus: cephfs-shell: add CI testing with flake8
Patrick Donnelly
11:07 PM Backport #41114 (Resolved): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
https://github.com/ceph/ceph/pull/31283 Patrick Donnelly
11:07 PM Backport #41113 (Resolved): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
https://github.com/ceph/ceph/pull/30032 Patrick Donnelly
11:06 PM Backport #41112 (Rejected): nautilus: cephfs-shell: cd with no args has no effect
Patrick Donnelly
11:06 PM Backport #41108 (Resolved): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
https://github.com/ceph/ceph/pull/29940 Patrick Donnelly
11:06 PM Backport #41107 (Resolved): nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
https://github.com/ceph/ceph/pull/29938 Patrick Donnelly
11:06 PM Backport #41106 (Resolved): nautilus: mds: add command that modify session metadata
https://github.com/ceph/ceph/pull/32245 Patrick Donnelly
11:06 PM Backport #41105 (Rejected): nautilus: cephfs-shell: flake8 blank line and indentation error
Patrick Donnelly
11:05 PM Backport #41101 (Rejected): luminous: tools/cephfs: memory leak in cephfs/Resetter.cc
Patrick Donnelly
11:05 PM Backport #41100 (Resolved): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29915 Patrick Donnelly
11:05 PM Backport #41099 (Resolved): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29879 Patrick Donnelly
11:05 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
Patrick Donnelly
11:05 PM Backport #41097 (Resolved): mimic: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29833 Patrick Donnelly
11:04 PM Backport #41096 (Resolved): nautilus: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29878 Patrick Donnelly
11:04 PM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
https://github.com/ceph/ceph/pull/29832 Patrick Donnelly
11:04 PM Backport #41094 (Resolved): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_...
https://github.com/ceph/ceph/pull/29812 Patrick Donnelly
11:04 PM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
https://github.com/ceph/ceph/pull/29811 Patrick Donnelly
11:04 PM Backport #41089 (Rejected): nautilus: cephfs-shell: Multiple flake8 errors
Patrick Donnelly
11:04 PM Backport #41088 (Resolved): mimic: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29751 Patrick Donnelly
11:03 PM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29750 Patrick Donnelly
04:05 PM Backport #40326: nautilus: mds: evict stale client when one of its write caps are stolen
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28583
merged
Yuri Weinstein
04:04 PM Backport #40324: nautilus: ceph_volume_client: d_name needs to be converted to string before using
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28609
merged
Yuri Weinstein
04:03 PM Backport #40839: nautilus: cephfs-shell: TypeError in poutput
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29156
merged
Yuri Weinstein
04:03 PM Backport #40842: nautilus: ceph-fuse: mount does not support the fallocate()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29157
merged
Yuri Weinstein
04:02 PM Backport #40843: nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29158
merged
Yuri Weinstein
04:02 PM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29186
merged
Yuri Weinstein
04:01 PM Backport #40438: nautilus: getattr on snap inode stuck
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29231
merged
Yuri Weinstein
04:00 PM Backport #40440: nautilus: mds: cannot switch mds state from standby-replay to active
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29233
merged
Yuri Weinstein
03:59 PM Backport #40443: nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29343
merged
Yuri Weinstein
03:58 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29344
merged
Yuri Weinstein
02:14 PM Bug #41072 (In Progress): scheduled cephfs snapshots (via ceph manager)
Patrick Donnelly
12:54 PM Bug #41072: scheduled cephfs snapshots (via ceph manager)
- also, interface for fetching snap metadata (`flushed` state, etc...)... Venky Shankar
12:23 PM Bug #41072 (Resolved): scheduled cephfs snapshots (via ceph manager)
outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
Specify a snapshot schedule on any (sub)direct...
Venky Shankar
01:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
Patrick Donnelly
01:46 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos base on tokenbucket algorighm
Patrick Donnelly
01:45 PM Backport #41000 (New): luminous: client: failed to drop dn and release caps causing mds stary sta...
Zheng, please take this one. The backport is non-trivial. Patrick Donnelly
01:14 PM Feature #41074 (Resolved): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
mirror scheduled (and temporary) snapshots to a r...
Venky Shankar
01:13 PM Feature #41073 (Rejected): cephfs-sync: tool for synchronizing cephfs snapshots to remote target
Introduce an rsync-like tool (cephfs-sync) for mirroring scheduled and temporary cephfs snapshots to sync targets. Sync tar... Venky Shankar
01:02 PM Backport #41070 (In Progress): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Ramana Raja
11:56 AM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
https://github.com/ceph/ceph/pull/29490 Ramana Raja
01:00 PM Backport #41071 (In Progress): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes...
Ramana Raja
11:58 AM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
https://github.com/ceph/ceph/pull/29490 Ramana Raja
10:50 AM Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fails with "...
Seen here is nautilus run: http://qa-proxy.ceph.com/teuthology/yuriw-2019-07-30_20:57:10-fs-wip-yuri-testing-2019-07-... Venky Shankar
05:23 AM Bug #41066: mds: skip trim mds cache if mdcache is not opened
https://github.com/ceph/ceph/pull/29481 Zhi Zhang
05:18 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
2019-07-24 14:51:28.028198 7f6dc2543700 1 mds.0.940446 active_start
2019-07-24 14:51:39.452890 7f6dc2543700 1...
Zhi Zhang
12:29 AM Backport #41001 (In Progress): mimic: client: failed to drop dn and release caps causing mds star...
Xiaoxi Chen
12:20 AM Backport #41002 (In Progress): nautilus:client: failed to drop dn and release caps causing mds st...
Xiaoxi Chen

08/01/2019

05:46 PM Support #40906: Full CephFS causes hang when accessing inode.
Please confirm that I understand the process so that I can give it a try.
Thanks!
Robert LeBlanc
05:45 PM Bug #41049: adding ceph secret key to kernel failed: Invalid argument.
reason: accidentally double base64 encoded
Missing the old warning: "secret is not valid base64: Invalid argument."
Anonymous
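
A small heuristic sketch for the failure mode noted above. It is purely illustrative and assumes a correctly encoded cephx secret decodes from base64 exactly once:

```python
# Heuristic check for an accidentally double-encoded secret: if the decoded
# payload is itself valid (strict) base64, the key was probably encoded twice.
import base64
import binascii

def looks_double_encoded(secret):
    try:
        inner = base64.b64decode(secret, validate=True)
    except binascii.Error:
        return False  # not valid base64 at all
    try:
        base64.b64decode(inner, validate=True)
        return True   # decoded payload parses as base64 again
    except (binascii.Error, ValueError):
        return False
```
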
05:32 PM Bug #41049 (New): adding ceph secret key to kernel failed: Invalid argument.
Fresh Nautilus Cluster.
Fresh cephfs.
This is not a common base64 error
mount -t ceph 10.3.2.1:6789:/ /mnt/ -o n...
Anonymous

07/31/2019

06:40 PM Feature #40811 (Pending Backport): mds: add command that modify session metadata
Patrick Donnelly
06:34 PM Bug #40927 (Pending Backport): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
Patrick Donnelly
05:37 PM Bug #41034 (Resolved): cephfs-journal-tool: NetHandler create_socket couldn't create socket
Ubuntu 18.04 with ceph Nautilus repo
Journal tool is broken:
2019-07-31 19:36:56.879 7f57bf308700 -1 NetHandler cre...
Anonymous
05:33 PM Bug #40999 (Pending Backport): qa: AssertionError: u'open' != 'stale'
Patrick Donnelly
05:10 PM Bug #41031 (Resolved): qa: malformed job
/ceph/teuthology-archive/pdonnell-2019-07-31_00:35:45-fs-wip-pdonnell-testing-20190730.205527-distro-basic-smithi/416... Patrick Donnelly
04:59 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
Please seek help on ceph-users. Provide more information about your cluster and how the error came about. Patrick Donnelly
04:00 PM Bug #41026: MDS process crashes on 14.2.2
... Anonymous
01:15 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
MDS processes on Ubuntu 18.04 with Nautilus 14.2.2 are crashing, unable to recover
-7> 2019-07-31 13:29:46.888 7fb36a...
Anonymous
03:32 PM Bug #39395: ceph: ceph fs auth fails
merged https://github.com/ceph/ceph/pull/28666 Yuri Weinstein
09:08 AM Bug #40960: client: failed to drop dn and release caps causing mds stary stacking.
some more background of this issue is under
https://tracker.ceph.com/issues/38679#note-9
Xiaoxi Chen
02:49 AM Bug #41006 (Fix Under Review): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before...
Zheng Yan

07/30/2019

10:13 PM Backport #40845: nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29159
merged
Yuri Weinstein
06:35 PM Bug #39947 (Pending Backport): cephfs-shell: add CI testing with flake8
Patrick Donnelly
05:07 PM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
NAB Patrick Donnelly
12:09 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Bingo - mds_log_max_segments.
In Luminous, the description for this option is empty:...
Konstantin Shalygin
01:25 PM Bug #41006: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
looks like discontiguous free inode number can trigger the crash Zheng Yan
08:56 AM Bug #41006 (Resolved): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
Running cephfs-data-scan scan_links on a test 14.2.2 cluster I get this assertion:... Dan van der Ster
05:51 AM Feature #5520 (In Progress): osdc: should handle namespaces
Jos Collin
01:42 AM Backport #41002 (Resolved): nautilus:client: failed to drop dn and release caps causing mds stary...
https://github.com/ceph/ceph/pull/29478 Xiaoxi Chen
01:41 AM Backport #41001 (Resolved): mimic: client: failed to drop dn and release caps causing mds stary s...
https://github.com/ceph/ceph/pull/29479 Xiaoxi Chen
01:38 AM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
https://github.com/ceph/ceph/pull/29830 Xiaoxi Chen

07/29/2019

09:53 PM Bug #40603 (Pending Backport): mds: disallow setting ceph.dir.pin value exceeding max rank id
Patrick Donnelly
09:49 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
Patrick Donnelly
09:47 PM Bug #40939 (Pending Backport): mds: map client_caps been inserted by mistake
Patrick Donnelly
09:46 PM Bug #40960 (Pending Backport): client: failed to drop dn and release caps causing mds stary stack...
Patrick Donnelly
09:10 PM Bug #37681: qa: power off still resulted in client sending session close
backport note: also need fix for https://tracker.ceph.com/issues/40999 Patrick Donnelly
08:09 PM Bug #37681 (Pending Backport): qa: power off still resulted in client sending session close
Patrick Donnelly
09:09 PM Bug #40999 (Fix Under Review): qa: AssertionError: u'open' != 'stale'
Patrick Donnelly
09:06 PM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
... Patrick Donnelly
08:10 PM Bug #40968 (Pending Backport): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
Patrick Donnelly
06:17 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Konstantin Shalygin wrote:
> ??The MDS tracks opened files and capabilities in the MDS journal. That would explain t...
Patrick Donnelly
04:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
??The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata... Konstantin Shalygin
05:38 PM Cleanup #40992 (Pending Backport): cephfs-shell: Multiple flake8 errors
Patrick Donnelly
11:54 AM Cleanup #40992: cephfs-shell: Multiple flake8 errors
Not ignoring E501; instead, limiting line length to 100....
06:59 AM Cleanup #40992 (Fix Under Review): cephfs-shell: Multiple flake8 errors
Varsha Rao
06:48 AM Cleanup #40992 (Resolved): cephfs-shell: Multiple flake8 errors
After ignoring E501 and W503 flake8 errors, the following needs to be fixed:... Varsha Rao
 
