Activity
From 12/08/2020 to 01/06/2021
01/06/2021
- 11:20 PM Backport #48457: nautilus: client: fix crash when doing remount in none fuse case
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38467
merged
- 11:20 PM Backport #48110: nautilus: client: ::_read fails to advance pos at EOF checking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37991
merged
- 11:19 PM Backport #48097: nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37988
merged
- 11:19 PM Backport #48095: nautilus: mds: fix file recovery crash after replaying delayed requests
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37986
merged
- 09:35 PM Documentation #48585 (Resolved): mds_cache_trim_decay_rate misnamed?
- 08:31 PM Bug #48773 (In Progress): qa: scrub does not complete
- ...
- 08:29 PM Bug #48772 (Need More Info): qa: pjd: not ok 9, 44, 80
- ...
- 08:25 PM Bug #48771 (New): qa: iogen: workload fails to cause balancing
- Not really a bug but it causes a test failure and is worthy of investigation:...
- 07:54 PM Bug #48765 (Fix Under Review): have mount helper pick appropriate mon sockets for ms_mode value
- 02:13 PM Bug #48765 (Resolved): have mount helper pick appropriate mon sockets for ms_mode value
- Ilya recently added msgr2 support to the kclient, but the mount helper still ignores any v2 addresses when mounting. ...
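The fix for #48765 amounts to filtering the monitor address list by the requested ms_mode instead of ignoring v2 addresses; a minimal sketch in Python (the function name and the (address, version) representation are illustrative assumptions, not the actual mount.ceph helper code):

```python
# Sketch: choose monitor addresses compatible with the requested ms_mode.
# With ms_mode unset/legacy the kernel speaks msgr1 (v1 addrs); any other
# mode (crc, secure, prefer-crc, ...) implies msgr2 (v2 addrs).
# Names and data shapes here are illustrative, not mount.ceph internals.
def pick_mon_addrs(addrs, ms_mode):
    """addrs: list of (address, "v1"|"v2") pairs from the monmap."""
    want = "v1" if ms_mode in (None, "legacy") else "v2"
    chosen = [a for a, ver in addrs if ver == want]
    return chosen or [a for a, _ in addrs]  # fall back rather than fail

mons = [("10.0.0.1:6789", "v1"), ("10.0.0.1:3300", "v2")]
print(pick_mon_addrs(mons, "crc"))     # ['10.0.0.1:3300']
print(pick_mon_addrs(mons, "legacy"))  # ['10.0.0.1:6789']
```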
- 07:12 PM Bug #48770 (Fix Under Review): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClust...
- 06:56 PM Bug #48770 (Resolved): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClusterResize)"
- ...
- 05:50 PM Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
- master failure: /ceph/teuthology-archive/teuthology-2020-12-27_03:15:03-fs-master-distro-basic-smithi/5738903/teuthol...
- 02:23 PM Bug #48766 (Duplicate): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.Test...
- test_evict_client fails in [1] and [2].
[1] http://qa-proxy.ceph.com/teuthology/jcollin-2021-01-05_16:12:23-fs-wip...
- 03:02 PM Bug #48760: qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- failure in master:
/ceph/teuthology-archive/teuthology-2021-01-05_03:15:02-fs-master-distro-basic-smithi/5754681/t...
- 04:37 AM Bug #48760 (Can't reproduce): qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- Job 5756405 [1] fails with the error below.
[1] http://qa-proxy.ceph.com/teuthology/jcollin-2021-01-05_16:12:23-fs-wip...
- 02:51 PM Documentation #48531 (Resolved): doc/cephfs: "ceph fs new" command is, ironically, old. The new (...
- 02:20 PM Feature #44928 (Fix Under Review): mgr/volumes: evict clients based on auth ID and subvolume mounted
- 01:51 PM Bug #45344: doc: Table Of Contents doesn't work
- I spoke to Patrick (the creator and owner of the CephFS documentation) about this, and for the time being, the rst fi...
- 01:43 PM Bug #48763: mds memory leak
- ...
- 09:47 AM Bug #48763 (Need More Info): mds memory leak
- I have a possible memory leak in the 14.2.10 MDS. The MDS suddenly uses 107 GB of RAM, where before it used around 70.
Once mds starts its eat...
- 12:57 PM Bug #44565 (In Progress): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || st...
01/05/2021
- 09:45 PM Bug #48756 (Fix Under Review): qa: kclient does not synchronously write with O_DIRECT
- 08:15 PM Bug #48756: qa: kclient does not synchronously write with O_DIRECT
- Trying to reproduce on master: https://pulpito.ceph.com/pdonnell-2021-01-05_20:15:07-fs:workload-master-distro-basic-...
- 08:09 PM Bug #48756 (Resolved): qa: kclient does not synchronously write with O_DIRECT
- ...
- 08:35 PM Bug #48757 (Fix Under Review): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon md...
- 08:30 PM Bug #48757 (Resolved): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.f"
- /ceph/teuthology-archive/pdonnell-2020-12-24_22:49:03-fs:workload-wip-pdonnell-testing-20201224.195406-distro-basic-s...
- 05:07 PM Bug #48753 (Fix Under Review): mds: spurious wakeups in cache upkeep
- 05:06 PM Bug #48753 (Resolved): mds: spurious wakeups in cache upkeep
- ...
- 04:39 PM Documentation #48731 (Resolved): mgr/nfs: Add info related to rook, clarify pseudo path and dashb...
- 03:02 PM Feature #46074 (Resolved): mds: provide alternatives to increase the total cephfs subvolume snaps...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:00 PM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:08 PM Documentation #48531 (Fix Under Review): doc/cephfs: "ceph fs new" command is, ironically, old. T...
01/04/2021
- 02:44 PM Bug #48673 (Need More Info): High memory usage on standby replay MDS
- 02:44 PM Bug #48711 (Triaged): mds: standby-replay mds abort when replay metablob
- 11:46 AM Feature #45746 (In Progress): mgr/nfs: Add interface to update export
- 10:09 AM Feature #48736 (Fix Under Review): qa: enable debug loglevel kclient test suites
- 04:24 AM Feature #48736 (In Progress): qa: enable debug loglevel kclient test suites
- 03:43 AM Feature #48736 (Resolved): qa: enable debug loglevel kclient test suites
- This is helpful when debugging and resolving bugs.
12/31/2020
- 12:25 PM Documentation #48731 (In Progress): mgr/nfs: Add info related to rook, clarify pseudo path and da...
- 12:19 PM Documentation #48731 (Resolved): mgr/nfs: Add info related to rook, clarify pseudo path and dashb...
12/30/2020
- 01:46 AM Bug #48679 (Fix Under Review): client: items pinned in cache preventing unmount
- 01:35 AM Bug #48679: client: items pinned in cache preventing unmount
- Xiubo Li wrote:
> For example for the inode 0x10000000e51:
>
> [...]
>
> Because it has the Fb cap, so the flu...
12/28/2020
- 09:27 AM Backport #47085 (Resolved): octopus: common: validate type CephBool cause 'invalid command json'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37362
m...
- 09:26 AM Backport #47095 (Resolved): octopus: mds: provide alternatives to increase the total cephfs subvo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38553
m...
- 09:25 AM Backport #48372 (Resolved): octopus: client: dump which fs is used by client for multiple-fs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38551
m...
12/24/2020
- 11:27 AM Bug #48679: client: items pinned in cache preventing unmount
- For example for the inode 0x10000000e51:...
- 04:35 AM Bug #47662 (Resolved): mds: try to replicate hot dir to restarted MDS
- 04:33 AM Fix #48053 (Resolved): qa: update test_readahead to work with the kernel
- 04:32 AM Bug #48701 (Resolved): pybind/cephfs: MCommand message is constructed with command separated into...
- 04:20 AM Bug #47294 (Resolved): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- 02:55 AM Bug #48711 (Closed): mds: standby-replay mds abort when replay metablob
- Ceph Version 14.2.15
OS: CentOS 7.6.1810
We created a fs that has three active mds, three standby-replay mds, three...
- 12:24 AM Feature #44192 (Fix Under Review): mds: stable multimds scrub
12/23/2020
- 08:25 AM Bug #48707 (Fix Under Review): client: unmount() doesn't dump the cache
- 08:23 AM Bug #48707 (Resolved): client: unmount() doesn't dump the cache
- delay_put_inodes() is called by tick() once per second, and when _unmount() is waiting for ...
- 07:53 AM Bug #48706: mgr/nfs: Does not detect exports created by dashboard
- Dashboard can detect volume/nfs exports
- 07:50 AM Bug #48706 (New): mgr/nfs: Does not detect exports created by dashboard
- ...
- 06:28 AM Bug #48679 (In Progress): client: items pinned in cache preventing unmount
- 05:54 AM Bug #48559 (Fix Under Review): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 02:40 AM Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XL...
- We have just hit this in 14.2.11, with 3 active mds.
mds log is at ceph-post-file: b1a56b74-6fbe-41bb-adcd-183695c39...
12/22/2020
- 09:14 PM Feature #48704 (New): mds: recall caps proportional to the number issued
- mds_recall_max_caps may wipe out the client cache for small clients. It may also not be large enough for very aggress...
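The trade-off #48704 describes (a fixed mds_recall_max_caps is too large for small clients and too small for very aggressive ones) suggests recalling a fraction of each client's issued caps instead; a hedged sketch, where the function name, percentage, and floor are illustrative assumptions rather than actual MDS configuration:

```python
# Sketch of proportional cap recall: recall a percentage of what each
# client holds rather than a fixed mds_recall_max_caps. Small clients
# below a floor are left alone; large clients get a proportional recall.
def caps_to_recall(issued: int, percent: int = 10, floor: int = 100) -> int:
    if issued <= floor:
        return 0  # don't wipe out small clients' caches
    return max(issued * percent // 100, 1)

for n in (50, 1_000, 1_000_000):
    print(n, "->", caps_to_recall(n))  # 50 -> 0, 1000 -> 100, 1000000 -> 100000
```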
- 06:30 PM Backport #48703 (Rejected): octopus: mgr/nfs: Add tests for readonly exports
- 06:26 PM Feature #48622 (Pending Backport): mgr/nfs: Add tests for readonly exports
- 06:01 PM Bug #48702 (Fix Under Review): qa: fwd_scrub should only scrub rank 0
- 05:42 PM Bug #48702 (Resolved): qa: fwd_scrub should only scrub rank 0
- ...
- 05:45 PM Backport #47085: octopus: common: validate type CephBool cause 'invalid command json'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37362
merged
- 05:30 PM Bug #48701 (Fix Under Review): pybind/cephfs: MCommand message is constructed with command separa...
- 05:28 PM Bug #48701 (Resolved): pybind/cephfs: MCommand message is constructed with command separated into...
- ...
- 04:47 PM Backport #48111 (Resolved): octopus: doc: document MDS recall configurations
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38202
m...
- 04:44 PM Backport #48111: octopus: doc: document MDS recall configurations
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38202
merged
- 04:47 PM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38553
merged
- 04:47 PM Backport #48191 (Resolved): octopus: mds: throttle workloads which acquire caps faster than the c...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38095
m...
- 04:44 PM Backport #48191: octopus: mds: throttle workloads which acquire caps faster than the client can r...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38095
merged
- 04:47 PM Backport #48109 (Resolved): octopus: client: ::_read fails to advance pos at EOF checking
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37989
m...
- 04:43 PM Backport #48109: octopus: client: ::_read fails to advance pos at EOF checking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37989
merged
- 04:47 PM Backport #48098 (Resolved): octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37987
m...
- 04:43 PM Backport #48098: octopus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37987
merged
- 04:46 PM Backport #48372: octopus: client: dump which fs is used by client for multiple-fs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38551
merged
- 04:46 PM Backport #48096 (Resolved): octopus: mds: fix file recovery crash after replaying delayed requests
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37985
m...
- 04:42 PM Backport #48096: octopus: mds: fix file recovery crash after replaying delayed requests
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37985
merged
- 04:45 PM Bug #48524: octopus: run_shell() got an unexpected keyword argument 'timeout'
- https://github.com/ceph/ceph/pull/38550 merged
- 02:33 PM Feature #44928 (In Progress): mgr/volumes: evict clients based on auth ID and subvolume mounted
- 01:46 PM Bug #48700 (Closed): client: Client::rmdir() may fail to remove a snapshot
- Call to Client::may_delete() from Client::rmdir() is done here https://github.com/ceph/ceph/blob/master/src/client/Cl...
12/21/2020
- 04:26 PM Bug #48673: High memory usage on standby replay MDS
- Thanks for the information. There were a few fixes in v15.2.8 relating to memory consumption for the MDS which may be...
- 04:17 PM Bug #48673: High memory usage on standby replay MDS
- Patrick Donnelly wrote:
> Please share `ceph versions` and `ceph fs dump`.
>
> I believe we've recently fixed som...
- 02:53 PM Bug #48673: High memory usage on standby replay MDS
- Daniel Persson wrote:
> Hi.
>
> We have recently installed a Ceph cluster and with about 27M objects. The filesys...
- 04:12 PM Fix #48121 (Fix Under Review): qa: merge fs/multimds suites
- 02:52 PM Bug #48679: client: items pinned in cache preventing unmount
- Patrick, this one seems similar to the one I fixed before; I will take it.
Thanks.
- 02:41 PM Bug #48679: client: items pinned in cache preventing unmount
- ...
- 02:46 PM Feature #48619 (In Progress): client: track (and forward to MDS) average read/write/metadata latency
- 06:38 AM Bug #47294 (Fix Under Review): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Fix in https://github.com/ceph/ceph/pull/38668
- 03:46 AM Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Commit 03908aa04344 removed the code that was dumping the scrub detail result. Currently only the foll...
12/19/2020
- 09:26 PM Feature #7320 (Fix Under Review): qa: thrash directory fragmentation
- 06:36 PM Fix #48683 (Resolved): mds/MDSMap: print each flag value in MDSMap::dump
- Don't require operators to do bitwise arithmetic on the "flags" field. Print each flag.
https://github.com/ceph/ce...
- 06:35 PM Feature #48682 (Resolved): MDSMonitor: add command to print fs flags
- From this list:
https://github.com/ceph/ceph/blob/master/src/include/ceph_fs.h#L275-L285
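The two entries above (#48683 and #48682) both aim to spare operators from decoding a raw "flags" bitmask by hand; generically, that means dumping each known bit as a named boolean. A minimal sketch, where the flag names and bit positions are illustrative placeholders, not the real CEPH_MDSMAP_* constants:

```python
# Decode a flags bitmask into named booleans so nobody has to do bitwise
# arithmetic by hand. Flag names/bits below are illustrative placeholders.
FLAGS = {
    "enable_snaps": 1 << 0,
    "allow_multimds_snaps": 1 << 1,
    "allow_standby_replay": 1 << 2,
}

def dump_flags(flags: int) -> dict:
    # One named boolean per known bit, in a fixed order.
    return {name: bool(flags & bit) for name, bit in FLAGS.items()}

print(dump_flags(0b101))
# {'enable_snaps': True, 'allow_multimds_snaps': False, 'allow_standby_replay': True}
```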
12/18/2020
- 09:27 PM Bug #48517 (Resolved): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- 09:25 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
- 09:15 PM Bug #48680 (New): mds: scrubbing stuck "scrub active (0 inodes in the stack)"
- ...
- 09:10 PM Bug #48679 (Resolved): client: items pinned in cache preventing unmount
- ...
- 09:06 PM Bug #48678 (In Progress): client: spins on tick interval
- ...
- 02:41 PM Bug #48501 (Fix Under Review): pybind/mgr/volumes: inherited snapshots should be filtered out of ...
- 08:21 AM Bug #48673 (Pending Backport): High memory usage on standby replay MDS
- Hi.
We have recently installed a Ceph cluster with about 27M objects. The filesystem seems to have 15M files.
...
- 04:33 AM Feature #44931 (Fix Under Review): mgr/volumes: get the list of auth IDs that have been granted a...
12/17/2020
- 11:39 PM Bug #21539: man: missing man page for mount.fuse.ceph
- Adding this to the packaging in https://github.com/ceph/ceph/pull/38642
- 07:29 PM Bug #48661 (Fix Under Review): mds: reserved can be set on feature set
- 07:28 PM Bug #48661 (Resolved): mds: reserved can be set on feature set
- ...
- 05:40 PM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 05:39 PM Backport #48638 (In Progress): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids ...
- 12:06 AM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 12:05 AM Backport #48638 (Resolved): nautilus: pybind/ceph_volume_client: allows authorize on auth_ids not...
- ...
- 04:15 AM Backport #48644 (Resolved): octopus: client: ceph.dir.entries does not acquire necessary caps
- https://github.com/ceph/ceph/pull/38949
- 04:15 AM Backport #48643 (Resolved): nautilus: client: ceph.dir.entries does not acquire necessary caps
- https://github.com/ceph/ceph/pull/38950
- 04:12 AM Bug #48313 (Pending Backport): client: ceph.dir.entries does not acquire necessary caps
- 04:11 AM Feature #17856 (Resolved): qa: background cephfs forward scrub teuthology task
- 04:10 AM Backport #48642 (Resolved): octopus: Client: the directory's capacity will not be updated after w...
- https://github.com/ceph/ceph/pull/38947
- 04:10 AM Backport #48641 (Resolved): nautilus: Client: the directory's capacity will not be updated after ...
- https://github.com/ceph/ceph/pull/38948
- 04:09 AM Bug #48318 (Pending Backport): Client: the directory's capacity will not be updated after write d...
- 02:19 AM Backport #48639 (Resolved): luminous: pybind/ceph_volume_client: allows authorize on auth_ids not...
- 12:10 AM Backport #48639 (Resolved): luminous: pybind/ceph_volume_client: allows authorize on auth_ids not...
- ...
- 12:06 AM Backport #48637 (Resolved): octopus: pybind/ceph_volume_client: allows authorize on auth_ids not ...
- 12:05 AM Backport #48637 (Resolved): octopus: pybind/ceph_volume_client: allows authorize on auth_ids not ...
- ...
- 12:05 AM Bug #48555: pybind/ceph_volume_client: allows authorize on auth_ids not created through ceph_volu...
- ...
- 12:03 AM Bug #48555 (Pending Backport): pybind/ceph_volume_client: allows authorize on auth_ids not create...
- 12:03 AM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
- Backports done manually.
12/16/2020
- 10:19 PM Backport #48634 (In Progress): nautilus: qa: tox failures
- 10:15 PM Backport #48634 (Resolved): nautilus: qa: tox failures
- https://github.com/ceph/ceph/pull/38627
- 10:18 PM Backport #48635 (In Progress): octopus: qa: tox failures
- 10:15 PM Backport #48635 (Resolved): octopus: qa: tox failures
- https://github.com/ceph/ceph/pull/38626
- 10:14 PM Bug #48633 (Pending Backport): qa: tox failures
- 08:44 PM Bug #48633 (Fix Under Review): qa: tox failures
- 08:43 PM Bug #48633 (Resolved): qa: tox failures
- ...
- 02:52 PM Backport #47158 (In Progress): octopus: mgr/volumes: Mark subvolumes with ceph.dir.subvolume vxat...
- 01:50 PM Feature #48622 (Fix Under Review): mgr/nfs: Add tests for readonly exports
- 08:23 AM Feature #48622 (Resolved): mgr/nfs: Add tests for readonly exports
- 09:50 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- I can reproduce it with SIGTERM...
- 06:52 AM Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
- Xiubo suggested that the client also sends min/max and stddev.
- 05:17 AM Feature #48619 (Pending Backport): client: track (and forward to MDS) average read/write/metadata...
- Client already tracks cumulative read/write/metadata latencies. However, average latencies are much more useful to th...
- 06:01 AM Tasks #48620 (In Progress): mds: break the mds_lock or get rid of the mds_lock for some code
- 05:48 AM Tasks #48620 (In Progress): mds: break the mds_lock or get rid of the mds_lock for some code
- 05:54 AM Bug #48559 (In Progress): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 03:18 AM Bug #48517 (Fix Under Review): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
12/15/2020
- 01:02 PM Feature #48602 (Resolved): `cephfs-top` frontend utility
- The plumbing work for tracking (client) metrics in the MDS is already done and mgr/stats module provides an interface...
- 12:41 PM Documentation #48585 (Fix Under Review): mds_cache_trim_decay_rate misnamed?
- No other places; just being more explicit would be helpful, I think.
12/14/2020
- 10:21 PM Bug #44113 (Resolved): cephfs-shell: set proper return value for the tool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:20 PM Bug #47182 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:20 PM Bug #47734 (Resolved): client: hang after statfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:28 PM Bug #48403 (Fix Under Review): mds: fix recall defaults based on feedback from production clusters
- 09:21 PM Bug #48555 (Fix Under Review): pybind/ceph_volume_client: allows authorize on auth_ids not create...
- https://github.com/ceph/ceph/security/advisories/GHSA-32wm-mjvr-8w9f
https://github.com/ceph/ceph-ghsa-32wm-mjvr-8...
- 03:52 PM Documentation #48585: mds_cache_trim_decay_rate misnamed?
- Jan Fajerski wrote:
> Patrick Donnelly wrote:
> > I think I just got that option name from the code (DecayRate) but...
- 03:36 PM Documentation #48585: mds_cache_trim_decay_rate misnamed?
- Patrick Donnelly wrote:
> I think I just got that option name from the code (DecayRate) but, yes, the name is unfort...
- 03:25 PM Documentation #48585: mds_cache_trim_decay_rate misnamed?
- Jan Fajerski wrote:
> I'm unsure about all this, so input is appreciated.
>
> I recently played around with this ...
- 10:42 AM Documentation #48585 (Resolved): mds_cache_trim_decay_rate misnamed?
- I'm unsure about all this, so input is appreciated.
I recently played around with this and essentially broke a clu...
- 02:39 PM Feature #48509 (Fix Under Review): mds: dmClock based subvolume QoS scheduler
- 02:39 PM Bug #48559 (Triaged): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 02:38 PM Bug #48562 (Triaged): qa: scrub - object missing on disk; some files may be lost
- 11:26 AM Bug #48517: mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- The MDS is doing the merge:...
- 03:23 AM Bug #48517: mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- As we discussed last week, I will take this and figure out why the same CDir was fetched twice.
Thanks.
12/12/2020
- 03:56 AM Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Patrick Donnelly wrote:
> Please link the source file for the paste you put in the description.
http://qa-proxy.c...
- 03:10 AM Feature #48577 (In Progress): pybind/mgr/volumes: support snapshots on subvolumegroups
- We removed this recently but I think it needs to come back based on new developments in kubernetes with VolumeGroups....
- 02:52 AM Backport #47095 (In Progress): octopus: mds: provide alternatives to increase the total cephfs su...
- 12:17 AM Backport #48374 (In Progress): nautilus: client: dump which fs is used by client for multiple-fs
- 12:15 AM Backport #48372 (In Progress): octopus: client: dump which fs is used by client for multiple-fs
- 12:11 AM Backport #46094 (Rejected): octopus: cephfs-shell: set proper return value for the tool
- Skipping cephfs-shell backports for now (especially non-trivial ones).
12/11/2020
- 11:54 PM Bug #48524 (Fix Under Review): octopus: run_shell() got an unexpected keyword argument 'timeout'
- 11:44 PM Backport #47248 (Rejected): nautilus: mon: deleting a CephFS and its pools causes MONs to crash
- https://tracker.ceph.com/issues/47941#note-3
- 11:44 PM Backport #47941 (Rejected): nautilus: octopus: client: hang after statfs
- There are too many conflicts for this. I'm closing as it's mostly to fix a few QA test failures that are not signific...
- 06:44 PM Bug #48559: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Please link the source file for the paste you put in the description.
- 08:45 AM Bug #48559 (Resolved): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Found this in the `fs` suite run....
- 06:00 PM Backport #48568 (Resolved): octopus: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- https://github.com/ceph/ceph/pull/39004
- 05:52 PM Bug #48491 (Pending Backport): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 10:40 AM Bug #48562 (New): qa: scrub - object missing on disk; some files may be lost
- 2020-12-10T05:14:53.213 INFO:tasks.ceph.mds.b.smithi165.stderr:2020-12-10T05:14:53.212+0000 7f27f1562700 -1 log_chann...
- 08:05 AM Bug #47977 (Resolved): fs: "./bin/ceph daemon client.admin.133423 config show" do not work
- 03:50 AM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
12/10/2020
- 05:21 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Patrick Donnelly wrote:
> >
> > I want to see fscache tested regularly in teuthology. So,...
- 01:18 PM Feature #6373 (Need More Info): kcephfs: qa: test fscache
- Patrick Donnelly wrote:
>
> I want to see fscache tested regularly in teuthology. So, the yaml fragments to tur...
- 12:34 AM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Patrick Donnelly wrote:
> > Jeff Layton wrote:
> > > I've already done that then. I guess we ...
- 12:20 AM Feature #6373 (In Progress): kcephfs: qa: test fscache
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > I've already done that then. I guess we can close this. To test fs...
- 12:04 AM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> I've already done that then. I guess we can close this. To test fscache, you just need to kick ...
- 09:30 AM Documentation #48531: doc/cephfs: "ceph fs new" command is, ironically, old. The new (correct as ...
- https://docs.ceph.com/en/latest/cephfs/createfs/#creating-a-file-system
This section, "Creating a File System", mi...
- 09:17 AM Documentation #48531: doc/cephfs: "ceph fs new" command is, ironically, old. The new (correct as ...
10:29 < IcePic> https://docs.ceph.com/en/latest/cephfs/createfs/
10:30 < jeeva> yeah i made a new pool "manila...
- 09:15 AM Documentation #48531 (Resolved): doc/cephfs: "ceph fs new" command is, ironically, old. The new (...
- $subject
https://docs.ceph.com/en/latest/cephfs/createfs/
- 12:32 AM Bug #48524 (Resolved): octopus: run_shell() got an unexpected keyword argument 'timeout'
- ...
12/09/2020
- 09:28 PM Feature #6373 (Resolved): kcephfs: qa: test fscache
- 09:28 PM Feature #6373: kcephfs: qa: test fscache
- I've already done that then. I guess we can close this. To test fscache, you just need to kick off the run with the f...
- 09:08 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Not by itself, no. That said, the goal of this ticket is a bit unclear. What exactly should we ...
- 08:45 PM Feature #6373: kcephfs: qa: test fscache
- Not by itself, no. That said, the goal of this ticket is a bit unclear. What exactly should we be aiming to do with t...
- 07:30 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> Patch to add arbitrary mount options to kclient:
>
> https://github.com/ceph/ceph/pull/38407...
- 08:57 PM Bug #48517 (In Progress): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- 07:15 PM Bug #48517 (Resolved): mds: "CDir.cc: 1530: FAILED ceph_assert(!is_complete())"
- ...
- 07:45 PM Backport #48521 (Resolved): octopus: client: add ceph.cluster_fsid/ceph.client_id vxattr support ...
- https://github.com/ceph/ceph/pull/39000
- 07:45 PM Backport #48520 (Resolved): nautilus: client: add ceph.cluster_fsid/ceph.client_id vxattr support...
- https://github.com/ceph/ceph/pull/39001
- 07:28 PM Feature #48337 (Pending Backport): client: add ceph.cluster_fsid/ceph.client_id vxattr support in...
- 05:16 PM Bug #48491 (Fix Under Review): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 09:53 AM Bug #48491 (In Progress): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 12:41 AM Bug #48491: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- /a/teuthology-2020-12-06_07:01:02-rados-master-distro-basic-smithi/5685113
- 05:09 PM Bug #48514 (Fix Under Review): mgr/nfs: Don't prefix 'ganesha-' to cluster id
- 04:48 PM Bug #48514 (Resolved): mgr/nfs: Don't prefix 'ganesha-' to cluster id
- Service name in orchestrator is '<service_type>.<service_id>'
https://github.com/ceph/ceph/blob/master/src/python-...
- 12:40 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- In the latest test run, I cannot reproduce it. Sometimes ganesha takes time to be restarted completely. This can be t...
- 12:47 AM Bug #48502 (Triaged): ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- 12:38 AM Bug #48502 (Triaged): ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- ...
- 08:28 AM Feature #48509 (Fix Under Review): mds: dmClock based subvolume QoS scheduler
- This is a ticket for subvolume QoS Scheduler.
Our idea has previously been discussed with maintainers and develope...
- 02:58 AM Feature #22477 (Fix Under Review): multifs: remove multifs experimental warnings
- 02:25 AM Tasks #22479 (Closed): multifs: review testing coverage
12/08/2020
- 07:30 PM Bug #48501 (Resolved): pybind/mgr/volumes: inherited snapshots should be filtered out of snapshot...
- If a snapshot is created on a parent directory of a subvolume, it shows up in the snapshot listing:...
- 07:21 PM Bug #38832: mds: fail to resolve snapshot name contains '_'
- Patrick Donnelly wrote:
> Zheng, what was the motivation for the change to append this information to the snap name ... - 03:44 PM Bug #48491 (Triaged): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 11:17 AM Bug #48491: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- https://pulpito.ceph.com/swagner-2020-12-07_12:36:07-rados:cephadm-wip-swagner2-testing-2020-12-07-1137-distro-basic-...
- 11:06 AM Bug #48491 (Resolved): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- ...
- 03:35 PM Bug #48447 (Resolved): vstart_runner: fails to print final result line