Activity
From 11/20/2020 to 12/19/2020
12/19/2020
- 09:26 PM Feature #7320 (Fix Under Review): qa: thrash directory fragmentation
- 06:36 PM Fix #48683 (Resolved): mds/MDSMap: print each flag value in MDSMap::dump
- Don't require operators to do bitwise arithmetic on the "flags" field. Print each flag.
https://github.com/ceph/ce...
- 06:35 PM Feature #48682 (Resolved): MDSMonitor: add command to print fs flags
- From this list:
https://github.com/ceph/ceph/blob/master/src/include/ceph_fs.h#L275-L285
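To make the change concrete, here is a minimal Python sketch of expanding the "flags" bitmask into named values, in the spirit of the MDSMap::dump change; the bit positions follow the ceph_fs.h list linked above, but treat the exact values as assumptions to verify against your Ceph version:
```python
# Sketch: expand the MDSMap "flags" bitmask into named booleans so
# operators need not do the bitwise arithmetic by hand.
# Bit positions assumed from src/include/ceph_fs.h; verify them.
MDSMAP_FLAGS = {
    1 << 0: "not_joinable",          # CEPH_MDSMAP_NOT_JOINABLE
    1 << 1: "allow_snaps",           # CEPH_MDSMAP_ALLOW_SNAPS
    1 << 4: "allow_multimds_snaps",  # CEPH_MDSMAP_ALLOW_MULTIMDS_SNAPS
    1 << 5: "allow_standby_replay",  # CEPH_MDSMAP_ALLOW_STANDBY_REPLAY
}

def dump_flags(flags: int) -> dict:
    """Return {flag_name: bool} for each known bit."""
    return {name: bool(flags & bit) for bit, name in MDSMAP_FLAGS.items()}

# 0x12 sets bits 1 and 4: allow_snaps and allow_multimds_snaps are True.
print(dump_flags(0x12))
```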
12/18/2020
- 09:27 PM Bug #48517 (Resolved): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- 09:25 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
- 09:15 PM Bug #48680 (New): mds: scrubbing stuck "scrub active (0 inodes in the stack)"
- ...
- 09:10 PM Bug #48679 (Resolved): client: items pinned in cache preventing unmount
- ...
- 09:06 PM Bug #48678 (In Progress): client: spins on tick interval
- ...
- 02:41 PM Bug #48501 (Fix Under Review): pybind/mgr/volumes: inherited snapshots should be filtered out of ...
- 08:21 AM Bug #48673 (Pending Backport): High memory usage on standby replay MDS
- Hi.
We have recently installed a Ceph cluster with about 27M objects. The filesystem seems to have 15M files.
...
- 04:33 AM Feature #44931 (Fix Under Review): mgr/volumes: get the list of auth IDs that have been granted a...
12/17/2020
- 11:39 PM Bug #21539: man: missing man page for mount.fuse.ceph
- Adding this to the packaging in https://github.com/ceph/ceph/pull/38642
- 07:29 PM Bug #48661 (Fix Under Review): mds: reserved can be set on feature set
- 07:28 PM Bug #48661 (Resolved): mds: reserved can be set on feature set
- ...
- 05:40 PM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 05:39 PM Backport #48638 (In Progress): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids ...
- 12:06 AM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 12:05 AM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- ...
- 04:15 AM Backport #48644 (Resolved): octopus: client: ceph.dir.entries does not acquire necessary caps
- https://github.com/ceph/ceph/pull/38949
- 04:15 AM Backport #48643 (Resolved): nautilus: client: ceph.dir.entries does not acquire necessary caps
- https://github.com/ceph/ceph/pull/38950
- 04:12 AM Bug #48313 (Pending Backport): client: ceph.dir.entries does not acquire necessary caps
- 04:11 AM Feature #17856 (Resolved): qa: background cephfs forward scrub teuthology task
- 04:10 AM Backport #48642 (Resolved): octopus: Client: the directory's capacity will not be updated after w...
- https://github.com/ceph/ceph/pull/38947
- 04:10 AM Backport #48641 (Resolved): nautilus: Client: the directory's capacity will not be updated after ...
- https://github.com/ceph/ceph/pull/38948
- 04:09 AM Bug #48318 (Pending Backport): Client: the directory's capacity will not be updated after write d...
- 02:19 AM Backport #48639 (Resolved): luminous: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 12:10 AM Backport #48639 (Resolved): luminous: pybind/ceph_volume_client: allows authorize on auth_ids not...
- ...
- 12:06 AM Backport #48637 (Resolved): octopus: pybind/ceph_volume_client: allows authorize on auth_ids not ...
- 12:05 AM Backport #48637 (Resolved): octopus: pybind/ceph_volume_client: allows authorize on auth_ids not ...
- ...
- 12:05 AM Bug #48555: pybind/ceph_volume_client: allows authorize on auth_ids not created through ceph_volu...
- ...
- 12:03 AM Bug #48555 (Pending Backport): pybind/ceph_volume_client: allows authorize on auth_ids not create...
- 12:03 AM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
- Backports done manually.
12/16/2020
- 10:19 PM Backport #48634 (In Progress): nautilus: qa: tox failures
- 10:15 PM Backport #48634 (Resolved): nautilus: qa: tox failures
- https://github.com/ceph/ceph/pull/38627
- 10:18 PM Backport #48635 (In Progress): octopus: qa: tox failures
- 10:15 PM Backport #48635 (Resolved): octopus: qa: tox failures
- https://github.com/ceph/ceph/pull/38626
- 10:14 PM Bug #48633 (Pending Backport): qa: tox failures
- 08:44 PM Bug #48633 (Fix Under Review): qa: tox failures
- 08:43 PM Bug #48633 (Resolved): qa: tox failures
- ...
- 02:52 PM Backport #47158 (In Progress): octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxat...
- 01:50 PM Feature #48622 (Fix Under Review): mgr/nfs: Add tests for readonly exports
- 08:23 AM Feature #48622 (Resolved): mgr/nfs: Add tests for readonly exports
- 09:50 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- I can reproduce it with SIGTERM...
- 06:52 AM Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
- Xiubo suggested that the client also send min/max and stddev.
- 05:17 AM Feature #48619 (Pending Backport): client: track (and forward to MDS) average read/write/metadata...
- Client already tracks cumulative read/write/metadata latencies. However, average latencies are much more useful to th...
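A minimal sketch of the proposed aggregation, assuming Welford's online algorithm for the stddev; class and field names are illustrative, not the actual client code:
```python
import math

class LatencyStats:
    """Running avg/min/max/stddev via Welford's online algorithm.
    Illustrative only; not the actual client metric code."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0           # sum of squared deviations from the mean
        self.min = math.inf
        self.max = -math.inf

    def record(self, latency: float) -> None:
        self.n += 1
        delta = latency - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (latency - self.mean)
        self.min = min(self.min, latency)
        self.max = max(self.max, latency)

    @property
    def stddev(self) -> float:
        return math.sqrt(self.m2 / self.n) if self.n else 0.0

stats = LatencyStats()
for lat_ms in (0.8, 1.2, 1.0):      # sample read latencies, milliseconds
    stats.record(lat_ms)
print(stats.mean, stats.min, stats.max, round(stats.stddev, 3))
```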
- 06:01 AM Tasks #48620 (In Progress): mds: break the mds_lock or get rid of the mds_lock for some code
- 05:48 AM Tasks #48620 (In Progress): mds: break the mds_lock or get rid of the mds_lock for some code
- 05:54 AM Bug #48559 (In Progress): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 03:18 AM Bug #48517 (Fix Under Review): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
12/15/2020
- 01:02 PM Feature #48602 (Resolved): `cephfs-top` frontend utility
- The plumbing work for tracking (client) metrics in the MDS is already done and mgr/stats module provides an interface...
- 12:41 PM Documentation #48585 (Fix Under Review): mds_cache_trim_decay_rate misnamed?
- No other places; just being more explicit would be helpful, I think.
12/14/2020
- 10:21 PM Bug #44113 (Resolved): cephfs-shell: set proper return value for the tool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:20 PM Bug #47182 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:20 PM Bug #47734 (Resolved): client: hang after statfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:28 PM Bug #48403 (Fix Under Review): mds: fix recall defaults based on feedback from production clusters
- 09:21 PM Bug #48555 (Fix Under Review): pybind/ceph_volume_client: allows authorize on auth_ids not create...
- https://github.com/ceph/ceph/security/advisories/GHSA-32wm-mjvr-8w9f
https://github.com/ceph/ceph-ghsa-32wm-mjvr-8...
- 03:52 PM Documentation #48585: mds_cache_trim_decay_rate misnamed?
- Jan Fajerski wrote:
> Patrick Donnelly wrote:
> > I think I just got that option name from the code (DecayRate) but...
- 03:36 PM Documentation #48585: mds_cache_trim_decay_rate misnamed?
- Patrick Donnelly wrote:
> I think I just got that option name from the code (DecayRate) but, yes, the name is unfort...
- 03:25 PM Documentation #48585: mds_cache_trim_decay_rate misnamed?
- Jan Fajerski wrote:
> I'm unsure about all this, so input is appreciated.
>
> I recently played around with this ...
- 10:42 AM Documentation #48585 (Resolved): mds_cache_trim_decay_rate misnamed?
- I'm unsure about all this, so input is appreciated.
I recently played around with this and essentially broke a clu...
- 02:39 PM Feature #48509 (Fix Under Review): mds: dmClock based subvolume QoS scheduler
- 02:39 PM Bug #48559 (Triaged): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 02:38 PM Bug #48562 (Triaged): qa: scrub - object missing on disk; some files may be lost
- 11:26 AM Bug #48517: mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- The MDS is doing the merge:...
- 03:23 AM Bug #48517: mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- As we discussed last week, I will take this and figure out why the same CDir was fetched twice.
Thanks.
12/12/2020
- 03:56 AM Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Patrick Donnelly wrote:
> Please link the source file for the paste you put in the description.
http://qa-proxy.c...
- 03:10 AM Feature #48577 (In Progress): pybind/mgr/volumes: support snapshots on subvolumegroups
- We removed this recently but I think it needs to come back based on new developments in kubernetes with VolumeGroups....
- 02:52 AM Backport #47095 (In Progress): octopus: mds: provide alternatives to increase the total cephfs su...
- 12:17 AM Backport #48374 (In Progress): nautilus: client: dump which fs is used by client for multiple-fs
- 12:15 AM Backport #48372 (In Progress): octopus: client: dump which fs is used by client for multiple-fs
- 12:11 AM Backport #46094 (Rejected): octopus: cephfs-shell: set proper return value for the tool
- Skipping backports for cephfs-shell for now (especially non-trivial ones).
12/11/2020
- 11:54 PM Bug #48524 (Fix Under Review): octopus: run_shell() got an unexpected keyword argument 'timeout'
- 11:44 PM Backport #47248 (Rejected): nautilus: mon: deleting a CephFS and its pools causes MONs to crash
- https://tracker.ceph.com/issues/47941#note-3
- 11:44 PM Backport #47941 (Rejected): nautilus: octopus: client: hang after statfs
- There are too many conflicts for this. I'm closing as it's mostly to fix a few QA test failures that are not signific...
- 06:44 PM Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Please link the source file for the paste you put in the description.
- 08:45 AM Bug #48559 (Resolved): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Found this in the `fs` suite run....
- 06:00 PM Backport #48568 (Resolved): octopus: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- https://github.com/ceph/ceph/pull/39004
- 05:52 PM Bug #48491 (Pending Backport): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 10:40 AM Bug #48562 (New): qa: scrub - object missing on disk; some files may be lost
- 2020-12-10T05:14:53.213 INFO:tasks.ceph.mds.b.smithi165.stderr:2020-12-10T05:14:53.212+0000 7f27f1562700 -1 log_chann...
- 08:05 AM Bug #47977 (Resolved): fs: "./bin/ceph daemon client.admin.133423 config show" do not work
- 03:50 AM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
12/10/2020
- 05:21 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Patrick Donnelly wrote:
> >
> > I want to see fscache tested regularly in teuthology. So,...
- 01:18 PM Feature #6373 (Need More Info): kcephfs: qa: test fscache
- Patrick Donnelly wrote:
>
> I want to see fscache tested regularly in teuthology. So, the yaml fragments to tur...
- 12:34 AM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Patrick Donnelly wrote:
> > Jeff Layton wrote:
> > > I've already done that then. I guess we ...
- 12:20 AM Feature #6373 (In Progress): kcephfs: qa: test fscache
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > I've already done that then. I guess we can close this. To test fs...
- 12:04 AM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> I've already done that then. I guess we can close this. To test fscache, you just need to kick ...
- 09:30 AM Documentation #48531: doc/cephfs: "ceph fs new" command is, ironically, old. The new (correct as ...
- https://docs.ceph.com/en/latest/cephfs/createfs/#creating-a-file-system
This section, "Creating a File System", mi...
- 09:17 AM Documentation #48531: doc/cephfs: "ceph fs new" command is, ironically, old. The new (correct as ...
10:29 < IcePic> https://docs.ceph.com/en/latest/cephfs/createfs/
10:30 < jeeva> yeah i made a new pool "manila...
- 09:15 AM Documentation #48531 (Resolved): doc/cephfs: "ceph fs new" command is, ironically, old. The new (...
- $subject
https://docs.ceph.com/en/latest/cephfs/createfs/
- 12:32 AM Bug #48524 (Resolved): octopus: run_shell() got an unexpected keyword argument 'timeout'
- ...
12/09/2020
- 09:28 PM Feature #6373 (Resolved): kcephfs: qa: test fscache
- 09:28 PM Feature #6373: kcephfs: qa: test fscache
- I've already done that then. I guess we can close this. To test fscache, you just need to kick off the run with the f...
- 09:08 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Not by itself, no. That said, the goal of this ticket is a bit unclear. What exactly should we ...
- 08:45 PM Feature #6373: kcephfs: qa: test fscache
- Not by itself, no. That said, the goal of this ticket is a bit unclear. What exactly should we be aiming to do with t...
- 07:30 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Patch to add arbitrary mount options to kclient:
>
> https://github.com/ceph/ceph/pull/38407...
- 08:57 PM Bug #48517 (In Progress): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- 07:15 PM Bug #48517 (Resolved): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- ...
- 07:45 PM Backport #48521 (Resolved): octopus: client: add ceph.cluster_fsid/ceph.client_id vxattr support ...
- https://github.com/ceph/ceph/pull/39000
- 07:45 PM Backport #48520 (Resolved): nautilus: client: add ceph.cluster_fsid/ceph.client_id vxattr support...
- https://github.com/ceph/ceph/pull/39001
- 07:28 PM Feature #48337 (Pending Backport): client: add ceph.cluster_fsid/ceph.client_id vxattr support in...
- 05:16 PM Bug #48491 (Fix Under Review): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 09:53 AM Bug #48491 (In Progress): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 12:41 AM Bug #48491: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- /a/teuthology-2020-12-06_07:01:02-rados-master-distro-basic-smithi/5685113
- 05:09 PM Bug #48514 (Fix Under Review): mgr/nfs: Don't prefix 'ganesha-' to cluster id
- 04:48 PM Bug #48514 (Resolved): mgr/nfs: Don't prefix 'ganesha-' to cluster id
- Service name in orchestrator is '<service_type>.<service_id>'
https://github.com/ceph/ceph/blob/master/src/python-...
- 12:40 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- In the latest test run, I cannot reproduce it. Sometimes ganesha takes time to be restarted completely. This can be t...
- 12:47 AM Bug #48502 (Triaged): ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- 12:38 AM Bug #48502 (Triaged): ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- ...
- 08:28 AM Feature #48509 (Fix Under Review): mds: dmClock based subvolume QoS scheduler
- This is a ticket for subvolume QoS Scheduler.
Our idea has previously been discussed with maintainers and develope...
- 02:58 AM Feature #22477 (Fix Under Review): multifs: remove multifs experimental warnings
- 02:25 AM Tasks #22479 (Closed): multifs: review testing coverage
12/08/2020
- 07:30 PM Bug #48501 (Resolved): pybind/mgr/volumes: inherited snapshots should be filtered out of snapshot...
- If a snapshot is created on a parent directory of a subvolume, it shows up in the snapshot listing:...
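A hedged sketch of the proposed filtering; it assumes CephFS's convention of exposing an ancestor's snapshot in a descendant's .snap directory under a leading-underscore name ("_<snapname>_<inode>"), which should be verified for your version:
```python
def filter_inherited(snapshot_names: list[str]) -> list[str]:
    """Keep only snapshots taken on the subvolume itself, dropping ones
    inherited from an ancestor directory (assumed to carry a leading
    underscore in the .snap listing)."""
    return [s for s in snapshot_names if not s.startswith("_")]

print(filter_inherited(["snap1", "_parentsnap_1099511627776"]))  # ['snap1']
```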
- 07:21 PM Bug #38832: mds: fail to resolve snapshot name contains '_'
- Patrick Donnelly wrote:
> Zheng, what was the motivation for the change to append this information to the snap name ...
- 03:44 PM Bug #48491 (Triaged): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 11:17 AM Bug #48491: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
https://pulpito.ceph.com/swagner-2020-12-07_12:36:07-rados:cephadm-wip-swagner2-testing-2020-12-07-1137-distro-basic-...
- 11:06 AM Bug #48491 (Resolved): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 11:06 AM Bug #48491 (Resolved): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- ...
- 03:35 PM Bug #48447 (Resolved): vstart_runner: fails to print final result line
12/07/2020
- 10:26 PM Bug #48318 (Fix Under Review): Client: the directory's capacity will not be updated after write d...
- 07:53 PM Bug #48365: qa: ffsb build failure on CentOS 8.2
- Xiubo Li wrote:
> @Patrick,
>
> I checked the teuthology log and didn't see any suspicious error or warning that could cau...
- 03:41 AM Bug #48365: qa: ffsb build failure on CentOS 8.2
- @Patrick,
I checked the teuthology log and didn't see any suspicious error or warning that could cause this, and the "confi...
- 02:46 AM Bug #48365: qa: ffsb build failure on CentOS 8.2
- I freshly installed a new CentOS 8.2 VM and tried it; I didn't hit any issue compiling except some wa...
- 01:15 AM Bug #48365 (In Progress): qa: ffsb build failure on CentOS 8.2
- 03:08 PM Bug #48411 (Triaged): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all faile...
- 08:46 AM Backport #48457 (In Progress): nautilus: client: fix crash when doing remount in none fuse case
- 08:45 AM Backport #48458 (In Progress): octopus: client: fix crash when doing remount in none fuse case
- 08:14 AM Bug #48473 (Resolved): fs perf stats command crashes
- From [1], trying the mistaken commands below crashes it: the 'mds_ranks' must be integers and non-empty, otherwise the command crashes.
...
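A hedged sketch of the kind of argument validation that would avoid the crash; the function and option names are illustrative, not the actual mgr/stats code:
```python
def parse_mds_ranks(spec: str) -> list[int]:
    """Validate a comma-separated rank filter before use; hypothetical
    helper, not the actual mgr/stats code."""
    parts = [p.strip() for p in spec.split(",") if p.strip()]
    if not parts or not all(p.isdigit() for p in parts):
        raise ValueError(f"mds_ranks {spec!r}: need non-empty integers")
    return [int(p) for p in parts]

print(parse_mds_ranks("0,1"))  # [0, 1]
# parse_mds_ranks("") and parse_mds_ranks("a,b") raise ValueError
# instead of letting the command crash later.
```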
12/04/2020
- 11:07 AM Backport #48458 (Resolved): octopus: client: fix crash when doing remount in none fuse case
- https://github.com/ceph/ceph/pull/38466
- 11:07 AM Backport #48457 (Resolved): nautilus: client: fix crash when doing remount in none fuse case
- https://github.com/ceph/ceph/pull/38467
- 05:32 AM Feature #48394 (Fix Under Review): mds: defer storing the OpenFileTable journal
12/03/2020
- 12:19 PM Backport #48375 (In Progress): octopus: libcephfs allows calling ftruncate on a file open read-only
- 12:18 PM Backport #48374 (Need More Info): nautilus: client: dump which fs is used by client for multiple-fs
- feature backport, presumed non-trivial
- 12:18 PM Backport #48372 (Need More Info): octopus: client: dump which fs is used by client for multiple-fs
- feature backport, presumed non-trivial
- 11:43 AM Backport #48285 (In Progress): octopus: rados/upgrade/nautilus-x-singleton fails due to cluster [...
- 11:34 AM Backport #47095 (Need More Info): octopus: mds: provide alternatives to increase the total cephfs...
- feature backport, presumed non-trivial
- 10:49 AM Bug #48447 (Resolved): vstart_runner: fails to print final result line
- Not printing the final result makes it a little less convenient and also less informative.
12/02/2020
- 08:18 PM Bug #48422 (Fix Under Review): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(md...
- 08:08 AM Bug #48422 (Resolved): mds: MDCache.cc:5319 FAILED ceph_assert(rejoin_ack_gather.count(mds->get_n...
- ...
- 08:12 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- Jeff Layton wrote:
> Answering my own question, looks like: 4.18.0-240.1.1.el8_3.x86_64. I'd be interested to see if...
- 08:08 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- Answering my own question, looks like: 4.18.0-240.1.1.el8_3.x86_64. I'd be interested to see if this is still a probl...
- 08:01 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- I wonder if this is the same problem as https://tracker.ceph.com/issues/47563? What kernel was the client running?
- 08:00 PM Bug #48439: fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) client sess...
- relevant ECONNRESET:...
- 07:56 PM Bug #48439 (Resolved): fsstress failure with mds thrashing: "mds.0.6 Evicting (and blocklisting) ...
- ...
- 08:09 PM Bug #48203 (Resolved): qa: quota failure
- 08:08 PM Bug #48206 (Pending Backport): client: fix crash when doing remount in none fuse case
- 08:07 PM Fix #15134 (Resolved): multifs: test case exercising mds_thrash for multiple filesystems
- 07:48 PM Feature #6373: kcephfs: qa: test fscache
- Patch to add arbitrary mount options to kclient:
https://github.com/ceph/ceph/pull/38407
- 11:09 AM Bug #44415 (Resolved): cephfs.pyx: passing empty string is fine but passing None is not to arg co...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:07 AM Bug #47565 (Resolved): qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:07 AM Bug #47806 (Resolved): mon/MDSMonitor: divide mds identifier and mds real name with dot
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:07 AM Bug #47833 (Resolved): mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hit_se...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:06 AM Bug #47881 (Resolved): mon/MDSMonitor: stop all MDS processes in the cluster at the same time. So...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:06 AM Bug #47918 (Resolved): cephfs client and nfs-ganesha have inconsistent reference count after rele...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
12/01/2020
- 09:31 PM Feature #6373: kcephfs: qa: test fscache
- https://github.com/ceph/ceph-cm-ansible/pull/592
https://github.com/ceph/ceph-cm-ansible/pull/593
- 01:48 PM Feature #6373: kcephfs: qa: test fscache
- One other catch. If we want to do testing with fscache, then it would be ideal if we could provision the clients with...
- 07:44 PM Backport #47990 (Resolved): nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37840
m...
- 07:43 PM Backport #47988 (Resolved): nautilus: cephfs client and nfs-ganesha have inconsistent reference c...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37838
m...
- 07:43 PM Backport #47957 (Resolved): nautilus: mon/MDSMonitor: stop all MDS processes in the cluster at th...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37822
m...
- 07:43 PM Backport #47939 (Resolved): nautilus: mon/MDSMonitor: divide mds identifier and mds real name wit...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37821
m...
- 07:43 PM Backport #47935 (Resolved): nautilus: mds FAILED ceph_assert(sessions != 0) in function 'void Ses...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37820
m...
- 07:42 PM Backport #46611 (Resolved): nautilus: cephfs.pyx: passing empty string is fine but passing None i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37725
m...
- 02:47 PM Bug #48411 (Resolved): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all fail...
- I got this failure when doing some testing with the draft fscache rework. It looks unrelated to the kernel changes, a...
- 04:05 AM Bug #48242 (Resolved): qa: add debug information for client address for kclient
- 03:46 AM Bug #47786 (Resolved): mds: log [ERR] : failed to commit dir 0x100000005f1.1010* object, errno -2
- 03:44 AM Bug #46769: qa: Refactor cephfs creation/removal code.
- Actually, here's a test failure where we get:...
- 03:38 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- /ceph/teuthology-archive/pdonnell-2020-11-24_19:01:27-fs-wip-pdonnell-testing-20201123.213848-distro-basic-smithi/565...
11/30/2020
- 10:31 PM Backport #47990: nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37840
merged
- 10:31 PM Backport #47988: nautilus: cephfs client and nfs-ganesha have inconsistent reference count after ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37838
merged
- 10:29 PM Backport #47957: nautilus: mon/MDSMonitor: stop all MDS processes in the cluster at the same time...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37822
merged
- 10:29 PM Backport #47939: nautilus: mon/MDSMonitor: divide mds identifier and mds real name with dot
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37821
merged
- 10:28 PM Backport #47935: nautilus: mds FAILED ceph_assert(sessions != 0) in function 'void SessionMap::hi...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37820
merged
- 10:27 PM Backport #46611: nautilus: cephfs.pyx: passing empty string is fine but passing None is not to ar...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37725
merged
- 10:11 PM Bug #48313 (Fix Under Review): client: ceph.dir.entries does not acquire necessary caps
- 07:03 PM Feature #48404: client: add a ceph.caps vxattr
- So (e.g.):...
- 06:51 PM Feature #48404 (Resolved): client: add a ceph.caps vxattr
- We recently added a new vxattr to the kernel client, to help support some testing and to generally improve visibility...
- 06:34 PM Bug #48403 (Resolved): mds: fix recall defaults based on feedback from production clusters
- They are too low and often cause the MDS to OOM.
- 03:59 PM Fix #47931 (Fix Under Review): Directory quota optimization
- 02:18 PM Backport #48370 (In Progress): octopus: mds: dir->mark_new should together with dir->mark_dirty
- 01:04 PM Backport #48196 (Need More Info): octopus: mgr/volumes: allow/deny r/rw access of auth IDs to sub...
- conflict
- 12:34 PM Backport #48129 (In Progress): octopus: some clients may return failure in the scenario where mul...
- 06:36 AM Feature #48394 (In Progress): mds: defer storing the OpenFileTable journal
- 06:36 AM Feature #48394 (Fix Under Review): mds: defer storing the OpenFileTable journal
- Flushing the OpenFileTable journal to the OSDs may take a bit
longer; if we hold the mds_lock or other locks, i...
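A hedged Python sketch of the deferral pattern described above; the real MDS is C++ and would use its own journaling/Finisher machinery, so all names here are illustrative:
```python
import queue
import threading

def slow_write_to_osds(payload: bytes) -> None:
    pass  # stand-in for the expensive journal flush to the OSDs

work: "queue.Queue[bytes]" = queue.Queue()

def journal_writer() -> None:
    while True:
        payload = work.get()          # blocks until there is work
        slow_write_to_osds(payload)   # slow I/O runs with no locks held
        work.task_done()

threading.Thread(target=journal_writer, daemon=True).start()

mds_lock = threading.Lock()

def update_open_file_table(payload: bytes) -> None:
    with mds_lock:
        pass                          # mutate in-memory state under the lock
    work.put(payload)                 # defer the store; don't block the lock

update_open_file_table(b"journal-entry")
work.join()                           # wait for the deferred write to finish
```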
11/26/2020
- 11:16 AM Backport #48376 (Resolved): nautilus: libcephfs allows calling ftruncate on a file open read-only
- https://github.com/ceph/ceph/pull/39129
- 11:16 AM Backport #48375 (Resolved): octopus: libcephfs allows calling ftruncate on a file open read-only
- https://github.com/ceph/ceph/pull/38424
- 11:15 AM Backport #48374 (Resolved): nautilus: client: dump which fs is used by client for multiple-fs
- https://github.com/ceph/ceph/pull/38552
- 11:15 AM Backport #48372 (Resolved): octopus: client: dump which fs is used by client for multiple-fs
- https://github.com/ceph/ceph/pull/38551
- 11:15 AM Backport #48371 (Resolved): nautilus: mds: dir->mark_new should together with dir->mark_dirty
- https://github.com/ceph/ceph/pull/39128
- 11:15 AM Backport #48370 (Resolved): octopus: mds: dir->mark_new should together with dir->mark_dirty
- https://github.com/ceph/ceph/pull/38352
- 03:53 AM Feature #46866 (Fix Under Review): kceph: add metric for number of pinned capabilities
- 03:53 AM Feature #46866: kceph: add metric for number of pinned capabilities
- The patchwork link: https://patchwork.kernel.org/project/ceph-devel/patch/20201126034743.1151342-1-xiubli@redhat.com/...
11/25/2020
- 09:30 PM Cleanup #48235 (Resolved): client: do not unset the client_debug_inject_tick_delay in libcephfs
- 09:29 PM Bug #48249 (Pending Backport): mds: dir->mark_new should together with dir->mark_dirty
- 09:28 PM Bug #48202 (Pending Backport): libcephfs allows calling ftruncate on a file open read-only
- 09:27 PM Feature #48246 (Pending Backport): client: dump which fs is used by client for multiple-fs
- 09:24 PM Bug #48365 (Resolved): qa: ffsb build failure on CentOS 8.2
- ...
11/24/2020
- 09:22 AM Feature #48337 (Fix Under Review): client: add ceph.cluster_fsid/ceph.client_id vxattr support in...
- 07:09 AM Feature #48337 (In Progress): client: add ceph.cluster_fsid/ceph.client_id vxattr support in libc...
- 07:08 AM Feature #48337 (Resolved): client: add ceph.cluster_fsid/ceph.client_id vxattr support in libcephfs
- For more detail, see: https://tracker.ceph.com/issues/44340
11/23/2020
- 02:40 PM Bug #48318 (Won't Fix): Client: the directory's capacity will not be updated after write data int...
- rstats are propagated lazily. Try doing an fsync.
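A hedged illustration of the suggested workaround; the mount path is hypothetical, and ceph.dir.rbytes is used as an example rstat-backed vxattr:
```python
import os

path = "/mnt/cephfs/dir/file"         # hypothetical CephFS mount
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
try:
    os.write(fd, b"x" * 4096)
    os.fsync(fd)                      # flush dirty data/caps so the MDS
finally:                              # can fold the write into rstats
    os.close(fd)
# ceph.dir.rbytes is an rstat-backed vxattr; it may still lag briefly.
print(os.getxattr("/mnt/cephfs/dir", "ceph.dir.rbytes"))
```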
11/21/2020
- 03:01 AM Bug #48318 (Resolved): Client: the directory's capacity will not be updated after write data into...
- The reproduction steps are as follows:...
11/20/2020
- 11:28 PM Bug #48313 (In Progress): client: ceph.dir.entries does not acquire necessary caps
- My mistake -- fix isn't quite ready yet. We might want to roll in a fix that gets the same caps when we look for the ...
- 11:23 PM Bug #48313: client: ceph.dir.entries does not acquire necessary caps
- @Jeff, you marked this as "Fix under review" but where is the PR?
- 03:38 PM Bug #48313 (Resolved): client: ceph.dir.entries does not acquire necessary caps
- Cloned from Linux kernel client tracker #48104. The userland client needs the same change to take Fs caps for dirstat...
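For context, a hedged example of reading the vxattr in question from a hypothetical CephFS mount; the bug is that the userland client served it without first acquiring Fs caps, so the value could be stale:
```python
import os

# ceph.dir.entries reports the number of entries in a CephFS directory;
# the mount path is hypothetical.
entries = int(os.getxattr("/mnt/cephfs/somedir", "ceph.dir.entries"))
print(entries)
```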
- 09:08 PM Feature #6373: kcephfs: qa: test fscache
- I started testing fscache in my home environment about a year ago and found that it was pretty horribly broken. David...
- 03:48 AM Backport #48111 (In Progress): octopus: doc: document MDS recall configurations