Activity
From 11/19/2019 to 12/18/2019
12/18/2019
- 09:12 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> > > The big problem is that all of the creates are not necessarily processed in a strict order ...
- 09:07 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > The problem there is that the second set would grow without bound....
- 08:43 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> The problem there is that the second set would grow without bound. It's not a lot of info per i...
- 08:26 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- The problem there is that the second set would grow without bound. It's not a lot of info per inode, but it's enough ...
- 08:04 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> I have patches for this for the MDS, and the kernel, but I keep hitting a race where the client...
- 07:33 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- I have patches for this for the MDS, and the kernel, but I keep hitting a race where the client adds an already-used ...
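The race discussed in the #39129 comments above is what the delegation design avoids: if each client allocates only from an MDS-granted, disjoint range, two clients can never pick the same inode number. A toy sketch of that idea (all names here are hypothetical, not the Ceph wire protocol):

```python
class ToyMDS:
    """Hypothetical model of inode-range delegation (not the Ceph protocol).
    The server hands out disjoint, never-reused ranges of inode numbers."""
    def __init__(self, first_ino=1024):
        self.next_ino = first_ino

    def delegate(self, count):
        lo = self.next_ino
        self.next_ino += count
        return (lo, lo + count - 1)  # inclusive range for one client


class ToyClient:
    """Allocates inode numbers locally from its delegated range."""
    def __init__(self, delegated_range):
        self.lo, self.hi = delegated_range
        self.cursor = self.lo

    def alloc_ino(self):
        if self.cursor > self.hi:
            raise RuntimeError("range exhausted; ask the MDS for a new delegation")
        ino, self.cursor = self.cursor, self.cursor + 1
        return ino
```

Because delegated ranges never overlap, concurrent creates on different clients cannot collide; the hard part, per the quoted comments, is presumably keeping the delegations consistent when replies arrive out of order.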
- 08:53 PM Bug #41329: mds: reject sessionless messages
- Follow-up: https://github.com/ceph/ceph/pull/32318
I've asked Zheng to make another tracker ticket.
- 03:39 PM Cleanup #43369 (Fix Under Review): mds: reorg SnapClient header
- 02:28 PM Cleanup #43369 (Resolved): mds: reorg SnapClient header
- 02:39 PM Bug #43362 (Fix Under Review): client: disallow changing fuse_default_permissions option at runtime
- 04:46 AM Bug #43362 (Resolved): client: disallow changing fuse_default_permissions option at runtime
- If fuse_default_permissions is false when initializing fuse, then ceph-fuse will use its own permission check. If cha...
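The intended fix for #43362 can be modeled as a startup-only option guard (a minimal sketch with invented names, not the actual ceph-fuse code):

```python
class ToyOptions:
    """Hypothetical config store where some options are fixed at startup."""
    STARTUP_ONLY = {"fuse_default_permissions"}

    def __init__(self):
        self._values = {}
        self._running = False

    def set(self, name, value):
        # Once the client is running, reject changes to startup-only options
        # instead of silently accepting a value that can no longer take effect.
        if self._running and name in self.STARTUP_ONLY:
            raise ValueError(f"cannot change {name} at runtime")
        self._values[name] = value

    def start(self):
        self._running = True  # after init, startup-only options are frozen
```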
- 01:33 PM Cleanup #43367 (Fix Under Review): mds: reorg SimpleLock header
- 01:00 PM Cleanup #43367 (Resolved): mds: reorg SimpleLock header
- 12:40 PM Cleanup #43366 (Fix Under Review): mds: reorg SessionMap header
- 12:34 PM Cleanup #43366 (Resolved): mds: reorg SessionMap header
- 09:05 AM Bug #43336 (Fix Under Review): qa: test_unmount_for_evicted_client hangs
- 06:56 AM Bug #43336: qa: test_unmount_for_evicted_client hangs
- I think it was caused by
[ 150.326253] ceph: mdsc_handle_session corrupt message mds0 len 75^M
- 05:15 AM Cleanup #42465 (Resolved): mds: reorg MDSRank header
- 04:30 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Patrick Donnelly wrote:
> The baseline performance is surprising I think. That's with the same MDS patches? The usua...
- 12:23 AM Documentation #43162 (Resolved): doc: "adding an MDS" in deployment is out-of-date
12/17/2019
- 06:58 PM Bug #42088 (Fix Under Review): 'ceph -s' does not show standbys if there are no filesystems
- 06:58 PM Bug #42088 (Pending Backport): 'ceph -s' does not show standbys if there are no filesystems
- 05:18 PM Bug #43039 (Need More Info): client: shutdown race fails with status 141
- Jeff Layton wrote:
> (Handing back to Patrick for now)
>
> Is this problem still occurring in teuthology?
Havn...
- 03:36 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Ok, I think I've figured out what's going on with the inode number reuse.
I changed the code to not remove ino_t e...
- 12:08 PM Backport #42713: nautilus: mgr: daemon state for mds not available
- I'll send the backport tomorrow.
- 10:36 AM Documentation #43162 (In Progress): doc: "adding an MDS" in deployment is out-of-date
- 09:27 AM Feature #43349 (Fix Under Review): mgr/volumes: provision subvolumes with config metadata storage...
- 09:25 AM Feature #43349 (Resolved): mgr/volumes: provision subvolumes with config metadata storage in cephfs
- Patrick had this idea a while back, but this never got implemented. Currently, there is no storage area when a subvol...
- 09:20 AM Backport #43348 (Resolved): nautilus: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- https://github.com/ceph/ceph/pull/32756
- 09:20 AM Backport #43347 (Resolved): mimic: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- https://github.com/ceph/ceph/pull/32757
- 09:18 AM Backport #43345 (Resolved): nautilus: mds: metadata changes may be lost when MDS is restarted
- https://github.com/ceph/ceph/pull/30843
- 09:18 AM Backport #43344 (Rejected): mimic: mds: metadata changes may be lost when MDS is restarted
- 09:16 AM Backport #43343 (Resolved): nautilus: mds: client does not respond to cap revoke after session s...
- https://github.com/ceph/ceph/pull/32909
- 09:16 AM Backport #43342 (Rejected): mimic: mds: client does not respond to cap revoke after session stal...
- 05:06 AM Backport #43338 (In Progress): nautilus: qa/tasks: add remaining tests for fs volume
- 04:28 AM Backport #43338 (Resolved): nautilus: qa/tasks: add remaining tests for fs volume
- https://github.com/ceph/ceph/pull/33122/
- 03:30 AM Bug #43326: mds: batch getattr/lookup bug
- One symptom is that client lookup requests hang.
- 01:27 AM Feature #43337 (New): fs: support relatime correctly for CephFS
- As of now, CephFS does not seem to handle atime.
The relatime mount option (since Linux 2.6.30) is meant to not o...
- 12:31 AM Bug #42872 (Pending Backport): qa/tasks: add remaining tests for fs volume
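The relatime rule that Feature #43337 asks CephFS to honor can be stated as a small predicate (this mirrors the Linux semantics as I understand them; the one-day threshold is the kernel default):

```python
ONE_DAY = 24 * 60 * 60

def relatime_should_update_atime(atime, mtime, ctime, now, threshold=ONE_DAY):
    """Linux relatime: update atime on read only if it is not newer than
    mtime or ctime, or if the existing atime is at least a day old."""
    return atime <= mtime or atime <= ctime or (now - atime) >= threshold
```

This is why relatime is cheap: a file read repeatedly, but never modified, updates its atime at most once a day.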
- 12:08 AM Bug #40784 (Pending Backport): mds: metadata changes may be lost when MDS is restarted
- 12:06 AM Bug #42826 (Pending Backport): mds: client does not respond to cap revoke after session stale->r...
- 12:04 AM Cleanup #42464 (Resolved): mds: reorg MDSMap header
- 12:00 AM Bug #36094 (Pending Backport): mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
12/16/2019
- 11:58 PM Feature #43182 (Resolved): mds: increase default cache size to 4GB
- 11:34 PM Bug #43336 (Resolved): qa: test_unmount_for_evicted_client hangs
- ...
- 04:59 PM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- ceph-post-file: 123801df-99cc-4c0a-a76c-9b6c8a614394
- 03:51 PM Bug #42365: client: FAILED assert(dir->readdir_cache[dirp->cache_index] == dn)
- Hit this 3 more times in 13.2.5; I caught a coredump.
- 02:35 PM Bug #43329 (Resolved): cephfs-shell: AttributeError when reading an undefined conf opt
- conf_get() from pybind/cephfs/cephfs.pyx returns None when the passed argument is not present as a config file option, which...
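The straightforward defense for #43329 is to check for None before applying string operations (a sketch; `conf_get` is the real pybind call, but the wrapper name is invented):

```python
def get_conf_or_default(fs, option, default=""):
    """conf_get() (pybind/cephfs) returns None when the option is not set
    in the configuration; fall back to a default instead of calling
    string methods on None and raising AttributeError."""
    value = fs.conf_get(option)
    return default if value is None else value
```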
- 01:27 PM Bug #43248 (In Progress): cephfs-shell: do not drop into shell after running command-line command
- 08:20 AM Bug #43326 (Fix Under Review): mds: batch getattr/lookup bug
- 08:13 AM Bug #43326 (Resolved): mds: batch getattr/lookup bug
12/15/2019
- 08:09 AM Backport #42462 (Resolved): nautilus: doc: MDS and metadata pool hardware requirements/recommenda...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31116
m...
12/14/2019
12/13/2019
- 07:51 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Doing some testing today with xfstests, during generic/531 test, I saw some of these pop up in the kernel ring buffer...
- 07:29 PM Documentation #43222 (Resolved): doc: mention multimds in dev guide's list of integration test su...
- 07:29 PM Documentation #43220 (Resolved): doc: clarify difference fs and kcephfs suite in dev guide
- 07:07 PM Bug #43133 (Resolved): vstop.sh: Mounts are not cleaned up
- 06:21 AM Bug #43251 (Fix Under Review): mds: track client provided metric flags in session
- 01:19 AM Bug #43149: kclient: umount sometimes gets stuck for around 1 minute
- The fixing commit: https://github.com/ceph/ceph-client/commit/992dd028db77657b5eb164d0825a991d5c14ec78
- 01:17 AM Bug #43295 (Fix Under Review): kclient: keep the session state until it is released
- The fixing commit: https://github.com/ceph/ceph-client/commit/38d173ab657c9b77ad3ab0f8c9b83245959cdb63
- 01:16 AM Bug #43295 (In Progress): kclient: keep the session state until it is released
Let's keep the session state until its memory is released.
- 01:16 AM Bug #43295 (Resolved): kclient: keep the session state until it is released
- When the session reconnect is denied by the MDS because the client was blacklisted or something else, kclie...
- 01:11 AM Feature #43294 (Fix Under Review): mount.ceph: give a hint message when no mds is up or cluster i...
- 01:11 AM Feature #43294 (In Progress): mount.ceph: give a hint message when no mds is up or cluster is lag...
- 01:11 AM Feature #43294: mount.ceph: give a hint message when no mds is up or cluster is laggy
- The relating PR: https://github.com/ceph/ceph/pull/32164
- 01:10 AM Feature #43294 (Resolved): mount.ceph: give a hint message when no mds is up or cluster is laggy
- The kclient will return EHOSTUNREACH when no MDS is up or the cluster is laggy.
Check it and give a hint.
- 01:00 AM Feature #4386 (Fix Under Review): kclient: Mount error message when no MDS present
- 01:00 AM Feature #4386: kclient: Mount error message when no MDS present
- Return -EHOSTUNREACH instead if no MDS is up or the cluster is laggy.
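On the mount.ceph side, the hint described in #43294 could come from a simple errno check (a sketch; the message wording is invented):

```python
import errno
import os

def mount_failure_hint(err):
    """Map the errno from a failed mount attempt to a user-facing hint."""
    if err == errno.EHOSTUNREACH:
        # EHOSTUNREACH is what the kclient returns when no MDS is up
        # or the cluster is laggy, per the tracker discussion.
        return "mount error: no MDS is up or the cluster is laggy"
    return os.strerror(err)
```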
- 12:55 AM Bug #43293 (Resolved): kclient: trigger the reclaim work once there are enough pending caps
- This will fix it: https://github.com/ceph/ceph-client/commit/bba1560bd4a46aa0d16bb7d81abd9d0eb47dea36.
- 12:54 AM Bug #43293 (Resolved): kclient: trigger the reclaim work once there are enough pending caps
- In a corner case, the reclaim work won't be fired in time as expected even when we have a large number of pending caps.
12/12/2019
- 09:39 PM Bug #42923 (Pending Backport): pybind / cephfs: remove static typing in LibCephFS.chown
- 02:28 PM Bug #43249 (Fix Under Review): cephfs-shell: exit failure when non-interactive command fails
- 06:29 AM Documentation #37746: doc: how to mount a subdir with ceph-fuse/kclient
- Марк Коренберг wrote:
> Well,
> 1. I don't consider using options in the device part. I just found a working examp...
- 05:40 AM Backport #43271 (In Progress): nautilus: qa/tasks: Fix raises that doesn't re-raise in test_volum...
- 05:30 AM Backport #43271 (Resolved): nautilus: qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- https://github.com/ceph/ceph/pull/33122
- 12:09 AM Bug #43270 (Fix Under Review): kclient: retry the same mds later after the new session is opened
- This is the fixing patch: https://github.com/ceph/ceph-client/commit/5be1d0c54652ae3ba0a452bb3b12950e20597d0e
- 12:08 AM Bug #43270 (Resolved): kclient: retry the same mds later after the new session is opened
- With max_mds > 1, for a request that chooses a random MDS rank, if the related session is not opened ye...
12/11/2019
- 09:15 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- The baseline performance is surprising I think. That's with the same MDS patches? The usual full tilt create/second r...
- 06:19 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Changed my test rig around a bit so I could give bluestore a LV backed by an SSD, and rebuilt the kernel w/o KASAN.
...
- 04:57 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I've pushed the current patch stack to https://github.com/ceph/ceph-client/tree/wip-async-dirops .
It's still very...
- 04:45 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- The patchset currently has a module option to enable this that defaults to "off". So I can do some apples to apples t...
- 04:36 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I finally have a working patchset for this, and so far, the results are somewhat lackluster. I'm seeing about the sam...
- 05:51 PM Cleanup #41951 (Resolved): mds: obsolete mds_cache_size
- 05:43 PM Cleanup #42866 (Resolved): mds: reorg ScrubStack header
- 05:41 PM Cleanup #42865 (Resolved): mds: reorg ScrubHeader header
- 05:40 PM Cleanup #42864 (Resolved): mds: reorg ScatterLock header
- 05:39 PM Cleanup #42813 (Resolved): mds: reorg RecoveryQueue header
- 05:39 PM Cleanup #42792 (Resolved): mds: reorg OpenFileTable header
- 05:37 PM Bug #41694 (Pending Backport): qa/tasks: Fix raises that doesn't re-raise in test_volumes.py
- 03:30 PM Bug #43247 (Resolved): qa: test_cephfs_shell.TestSnapshots.test_snap FAIL
- 01:50 PM Documentation #37746: doc: how to mount a subdir with ceph-fuse/kclient
- Well,
1. I don't consider using options in the device part. I just found a working example and wondered that there ...
- 05:28 AM Documentation #37746: doc: how to mount a subdir with ceph-fuse/kclient
- Марк Коренберг wrote:
> In order not to loose linked info from https://forum.proxmox.com/threads/mount-cephfs-using-...
- 11:54 AM Bug #43249 (In Progress): cephfs-shell: exit failure when non-interactive command fails
- 09:59 AM Bug #43251 (Resolved): mds: track client provided metric flags in session
- With PR https://github.com/ceph/ceph/pull/26004, MDS will start tracking client provided metrics. However, the set of...
- 07:10 AM Bug #43250 (Fix Under Review): qa/test_cephfs_shell: TestDu.test_du_works_for_hardlinks fails
- 06:57 AM Bug #43250 (In Progress): qa/test_cephfs_shell: TestDu.test_du_works_for_hardlinks fails
- 06:42 AM Bug #43250 (Resolved): qa/test_cephfs_shell: TestDu.test_du_works_for_hardlinks fails
- Got this yesterday locally -...
12/10/2019
- 11:37 PM Bug #43247 (Fix Under Review): qa: test_cephfs_shell.TestSnapshots.test_snap FAIL
- 10:58 PM Bug #43247: qa: test_cephfs_shell.TestSnapshots.test_snap FAIL
- master: http://pulpito.ceph.com/pdonnell-2019-12-10_20:51:09-fs-master-distro-basic-smithi/
- 08:52 PM Bug #43247 (Resolved): qa: test_cephfs_shell.TestSnapshots.test_snap FAIL
- ...
- 11:22 PM Bug #43249 (Resolved): cephfs-shell: exit failure when non-interactive command fails
- If a one-shot command fails, the cephfs-shell should exit with a non-zero status:...
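The behavior #43249 asks for can be modeled like this (a stand-in with invented names; cephfs-shell itself is built on cmd2):

```python
class ToyShell:
    """Stand-in for cephfs-shell: commands record failure in exit_code."""
    KNOWN = ("ls", "cd", "put", "get")

    def __init__(self):
        self.exit_code = 0

    def onecmd(self, line):
        if line.split()[0] not in self.KNOWN:
            self.exit_code = 1  # command failed


def main(argv):
    shell = ToyShell()
    if argv:                    # one-shot (non-interactive) mode
        shell.onecmd(" ".join(argv))
        return shell.exit_code  # propagate failure, don't swallow it
    return 0
```

The point is simply that the one-shot path returns the recorded failure status to the caller instead of unconditionally exiting 0.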
- 11:20 PM Bug #43248 (Resolved): cephfs-shell: do not drop into shell after running command-line command
- e.g....
- 09:34 PM Cleanup #42468 (Resolved): mds: reorg MDSTable header
- 09:33 PM Cleanup #42564 (Resolved): mds: reorg Migrator header
- 09:31 PM Cleanup #42793 (Resolved): mds: reorg PurgeQueue header
- 09:03 AM Documentation #43222 (Resolved): doc: mention multimds in dev guide's list of integration test su...
- 07:16 AM Documentation #43220 (In Progress): doc: clarify difference fs and kcephfs suite in dev guide
- 07:12 AM Documentation #43220 (Resolved): doc: clarify difference fs and kcephfs suite in dev guide
- 05:57 AM Backport #43219 (In Progress): nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_u...
- 05:49 AM Backport #43219 (Resolved): nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_...
- https://github.com/ceph/ceph/pull/31741
- 05:44 AM Bug #43038 (Pending Backport): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (ta...
- 05:42 AM Bug #43218 (Rejected): kclient: looking up the snap dirs sometimes hits WARN_ON
- Hit this twice in 30 minutes; the following is the warning:
76 <7>[ 3254.346712] ceph: readdir fetching 100...
- 05:22 AM Feature #4386: kclient: Mount error message when no MDS present
- And maybe we could return -ESTALE or some other specific errno to userland, to mount.ceph, and then the mou...
- 05:20 AM Feature #4386: kclient: Mount error message when no MDS present
- Checked the new mount API; we still need the fix for when the mount request times out because no MDS is up or ...
- 03:52 AM Documentation #22204 (Resolved): doc: scrub_path is missing in the docs
- 12:37 AM Documentation #42016 (Resolved): doc: layout rest of intro page
12/09/2019
- 11:59 PM Bug #43216 (Resolved): MDSMonitor: removes MDS coming out of quorum election
- Event sequence:
- 2019-12-07T12:26:26.854 mon_thrash kills mon.a(leader)
- 2019-12-07T12:27:07.843 mon_thrash rev...
- 10:12 PM Bug #43133 (Fix Under Review): vstop.sh: Mounts are not cleaned up
- 06:55 PM Feature #26996 (Fix Under Review): cephfs: get capability cache hits by clients to provide intros...
- 06:29 PM Bug #43191 (Fix Under Review): test_cephfs_shell: set `colors` to Never for cephfs-shell
- 06:17 AM Bug #43191 (Resolved): test_cephfs_shell: set `colors` to Never for cephfs-shell
- Originally, the plan was to use setUpClass and tearDownClass for tests[1] but I missed pushing that modification befo...
- 03:13 PM Documentation #43210 (In Progress): doc: MDS config reference improvements
- https://docs.ceph.com/docs/master/cephfs/mds-config-ref/
Add details on how to apply a configuration option, fetch...
- 03:06 PM Backport #43085 (In Progress): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- 02:56 PM Backport #43085 (New): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- Reopening. I will update this PR with this fix: https://github.com/ceph/ceph/pull/31741
- 02:57 PM Feature #40929 (In Progress): pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configur...
- 02:56 PM Fix #41782 (Fix Under Review): mds: allow stray directories to fragment and switch from 10 stray ...
- Update:
Stray dirs are not being dropped from 10 to 1. Zheng recommended having more stray dirs.
Only fragmentation...
- 02:44 PM Bug #43208 (Fix Under Review): mds: unsafe req may result in data remaining in the datapool
- 12:56 PM Bug #43208 (Resolved): mds: unsafe req may result in data remaining in the datapool
- When a client creates a file, if early_reply is set to true, the metadata has not been written to the journal and the file data is succe...
- 02:42 PM Bug #43039: client: shutdown race fails with status 141
- (Handing back to Patrick for now)
Is this problem still occurring in teuthology?
- 11:55 AM Feature #36253 (Fix Under Review): cephfs: clients should send usage metadata to MDSs for adminis...
- 11:54 AM Feature #24285 (Fix Under Review): mgr: add module which displays current usage of file system (`...
12/08/2019
12/07/2019
- 12:14 AM Feature #43182 (Resolved): mds: increase default cache size to 4GB
- 1GB is too low as a default and usually results in cache size warnings at that size; the MDS will struggle to maintai...
12/06/2019
- 10:50 PM Backport #42440: mimic: mds: create a configurable snapshot limit
- Nathan Cutler wrote:
> feature backport - does it need a release note?
Yes.
- 12:58 PM Backport #42440 (Need More Info): mimic: mds: create a configurable snapshot limit
- feature backport - does it need a release note?
- 10:50 PM Backport #42441: nautilus: mds: create a configurable snapshot limit
- Nathan Cutler wrote:
> feature backport - does it need a release note?
Yes.
- 12:58 PM Backport #42441 (Need More Info): nautilus: mds: create a configurable snapshot limit
- feature backport - does it need a release note?
- 01:32 PM Backport #43143 (In Progress): nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- 01:32 PM Backport #43141 (In Progress): nautilus: tools/cephfs: linkages injected by cephfs-data-scan have...
- 01:31 PM Backport #43138 (In Progress): nautilus: mds: reports unrecognized message for mgrclient messages
- 01:29 PM Backport #43137 (In Progress): nautilus: pybind/mgr/volumes: idle connection drop is not working
- 01:27 PM Bug #42923 (Resolved): pybind / cephfs: remove static typing in LibCephFS.chown
- 01:26 PM Backport #43085 (Rejected): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- The code being fixed does not exist in nautilus.
- 01:25 PM Backport #43001 (In Progress): nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) k...
- 01:21 PM Backport #42951 (In Progress): nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.t...
- 01:21 PM Backport #42949 (In Progress): nautilus: mds: inode lock stuck at unstable state after evicting c...
- 01:20 PM Backport #43170 (In Progress): nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot ...
- 01:20 PM Backport #43170 (Resolved): nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot tes...
- https://github.com/ceph/ceph/pull/32072
- 01:19 PM Bug #42922 (Pending Backport): nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- 01:18 PM Backport #42738 (Need More Info): nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- 01:18 PM Backport #42713 (Need More Info): nautilus: mgr: daemon state for mds not available
- 01:17 PM Backport #42650 (In Progress): nautilus: mds: no assert on frozen dir when scrub path
- 12:58 PM Backport #42631 (In Progress): nautilus: client: FAILED assert(cap == in->auth_cap)
- 10:11 AM Bug #39947 (Resolved): cephfs-shell: add CI testing with flake8
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:11 AM Bug #40202 (Resolved): cephfs-shell: Error messages are printed to stdout
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Bug #40430 (Resolved): cephfs-shell: No error message is printed on ls of invalid directories
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Bug #40476 (Resolved): cephfs-shell: cd with no args has no effect
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Bug #40836 (Resolved): cephfs-shell: flake8 blank line and indentation error
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:10 AM Cleanup #40992 (Resolved): cephfs-shell: Multiple flake8 errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:09 AM Bug #41164 (Resolved): cephfs-shell: onecmd throws TypeError
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
12/05/2019
- 11:58 PM Bug #42643 (Resolved): vstart.sh: highlight presence of stray conf file
- 10:01 PM Backport #41283 (Rejected): nautilus: cephfs-shell: No error message is printed on ls of invalid ...
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41268 (Rejected): nautilus: cephfs-shell: onecmd throws TypeError
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41118 (Rejected): nautilus: cephfs-shell: add CI testing with flake8
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41112 (Rejected): nautilus: cephfs-shell: cd with no args has no effect
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41105 (Rejected): nautilus: cephfs-shell: flake8 blank line and indentation error
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:01 PM Backport #41089 (Rejected): nautilus: cephfs-shell: Multiple flake8 errors
- We're not backporting cephfs-shell fixes to Nautilus anymore.
- 10:00 PM Backport #40898 (Rejected): nautilus: cephfs-shell: Error messages are printed to stdout
- 09:45 PM Bug #42894 (Fix Under Review): kclient: if there is at least one MDS still not laggy the mount w...
- 09:45 PM Bug #42760 (Fix Under Review): kclient: get random mds not work as expected
- 09:45 PM Bug #42515 (Fix Under Review): fs: OpenFileTable object shards have too many k/v pairs
- 09:37 PM Bug #42088 (New): 'ceph -s' does not show standbys if there are no filesystems
- 09:36 PM Bug #26901 (New): mds: no throttlers set on incoming messages
- 09:36 PM Bug #21507 (New): mds: debug logs near respawn are not flushed
- 09:36 PM Bug #21058 (New): mds: remove UNIX file permissions binary dependency
- 09:35 PM Bug #20938 (New): CephFS: concurrent access to file from multiple nodes blocks for seconds
- 09:35 PM Bug #19812 (New): client: not swapping directory caps efficiently leads to very slow create chains
- 09:35 PM Bug #18883 (New): qa: failures in samba suite
- 09:35 PM Bug #17847 (New): "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- 09:35 PM Bug #17594 (New): cephfs: permission checking not working (MDS should enforce POSIX permissions)
- 09:35 PM Bug #16881 (New): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- 09:35 PM Bug #16920 (New): mds.inodes* perf counters sound like the number of inodes but they aren't
- 09:35 PM Bug #16556 (New): LibCephFS.InterProcessLocking failing on master and jewel
- 09:35 PM Bug #9105 (New): ~ObjectCacher behaves poorly on EBLACKLISTED
- 09:35 PM Bug #9101 (New): multimds: unlinked file is not pruned from replica mds caches
- 09:34 PM Bug #4023 (New): kclient: d_revalidate is abusing d_parent
- 09:34 PM Bug #2277 (New): qa: flock test broken
- 09:23 PM Bug #42252 (Rejected): mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Fixed by https://github.com/ceph/ceph/pull/31017
- 06:56 PM Documentation #43162 (Resolved): doc: "adding an MDS" in deployment is out-of-date
- https://docs.ceph.com/docs/master/cephfs/add-remove-mds/#adding-an-mds
See: https://github.com/ceph/ceph/pull/32...
- 06:13 PM Documentation #42016 (Fix Under Review): doc: layout rest of intro page
- 06:11 PM Documentation #43155 (Closed): CephFS Documentation Sprint 4
- 05:20 PM Documentation #43154 (Resolved): doc: migrate best practice recommendations to relevant docs
- Best practices doc:
https://docs.ceph.com/docs/master/cephfs/best-practices/
Should just put these recommendati...
- 03:01 PM Bug #43149 (In Progress): kclient: umount sometimes gets stuck for around 1 minute
- During umount, if the last request's reply is only a safe one without an unsafe one, the umount won't have any chance to...
- 02:55 PM Bug #43149 (Resolved): kclient: umount sometimes gets stuck for around 1 minute
- While running some tests, in one terminal a script creates/deletes/lists a large number of directories wit...
- 02:54 PM Feature #38851 (Rejected): mount.ceph.fuse: support secretfile option
- 02:53 PM Bug #43061: ceph fs add_data_pool doesn't set pool metadata properly
- Ramana Raja wrote:
> [...]
> `add_data_pool` sets the pool's meta data properly if the pool's application metadata ...
- 11:10 AM Backport #43144 (Rejected): mimic: mds: tolerate no snaprealm encoded in on-disk root inode
- 11:10 AM Backport #43143 (Resolved): nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- https://github.com/ceph/ceph/pull/32079
- 11:10 AM Backport #43142 (Rejected): mimic: tools/cephfs: linkages injected by cephfs-data-scan have first...
- 11:07 AM Backport #43141 (Resolved): nautilus: tools/cephfs: linkages injected by cephfs-data-scan have fi...
- https://github.com/ceph/ceph/pull/32078
- 11:07 AM Backport #43138 (Resolved): nautilus: mds: reports unrecognized message for mgrclient messages
- https://github.com/ceph/ceph/pull/32077
- 11:07 AM Backport #43137 (Resolved): nautilus: pybind/mgr/volumes: idle connection drop is not working
- https://github.com/ceph/ceph/pull/33116
- 08:58 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- For the sake of completeness, here the crash logging with the extra debug output:...
- 02:20 AM Bug #36094 (Fix Under Review): mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- 06:45 AM Bug #43133 (Resolved): vstop.sh: Mounts are not cleaned up
- When vstop.sh is run while CephFS is mounted, mount processes are retained and can't be killed.
Also, the mount ...
12/04/2019
- 10:59 PM Bug #43129 (New): qa: `fs dump` fails during snaptests
- ...
- 10:54 PM Bug #42636 (Resolved): qa: AttributeError: can't set attribute
- 10:53 PM Bug #42636 (Pending Backport): qa: AttributeError: can't set attribute
- 10:50 PM Bug #43036 (Pending Backport): mds: reports unrecognized message for mgrclient messages
- 10:18 PM Bug #42675 (Pending Backport): mds: tolerate no snaprealm encoded in on-disk root inode
- 10:03 PM Bug #43125: qa: ceph_volume_client not available "ModuleNotFoundError: No module named 'ceph_volu...
- Can't seem to reproduce on master:
http://pulpito.ceph.com/pdonnell-2019-12-04_20:54:30-fs-master-distro-basic-smi...
- 08:46 PM Bug #43125 (Can't reproduce): qa: ceph_volume_client not available "ModuleNotFoundError: No modul...
- ...
- 09:58 PM Bug #42829 (Pending Backport): tools/cephfs: linkages injected by cephfs-data-scan have first == ...
- 09:57 PM Bug #38452 (Resolved): mds: assert crash loop while unlinking file
- 09:45 PM Bug #43113 (Pending Backport): pybind/mgr/volumes: idle connection drop is not working
- 03:57 PM Documentation #16300 (Resolved): doc: fuse_disable_pagecache
12/03/2019
- 10:35 PM Bug #43113 (Fix Under Review): pybind/mgr/volumes: idle connection drop is not working
- 09:24 PM Bug #43113 (Resolved): pybind/mgr/volumes: idle connection drop is not working
- after creating a subvolume:...
- 02:51 PM Feature #38851: mount.ceph.fuse: support secretfile option
- Yes. I think this is intentional.
The secretfile thing was really for the kernel client, which had a very primitiv...
- 12:27 PM Bug #43061 (In Progress): ceph fs add_data_pool doesn't set pool metadata properly
- ...
- 10:45 AM Bug #43038 (Fix Under Review): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (ta...
- 12:48 AM Bug #42894: kclient: if there is at least one MDS still not laggy the mount will fail
- 12:46 AM Feature #7333 (In Progress): client: evaluate multiple O_APPEND writers
- 12:45 AM Feature #4386: kclient: Mount error message when no MDS present
- An extra patch has been posted; it is based on the current old mount API.
There is a new mount API for cephfs, and ...
12/02/2019
- 09:31 PM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- I'm not sure what this tracker is really asking for, tbqh.
Hmm...now that I look, I do see this:
> commit 6c99f...
- 07:53 AM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- If I have understood it correctly, based on the current code and my testing we have already implemented this.
There has...
- 04:43 PM Feature #16468 (Resolved): kclient: Exclude ceph.* xattr namespace in listxattr
- Thanks for verifying Xiubo!
- 02:12 PM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- 02:11 PM Feature #16468: kclient: Exclude ceph.* xattr namespace in listxattr
- The ceph.* xattr has been removed, so this has been fixed:
>
> commit e09580b343aa117fd07c1bb7f7dfc5bc630a2953
...
- 02:46 PM Bug #42986 (Triaged): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.Test...
- 02:45 PM Bug #43038 (In Progress): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks.c...
- 02:44 PM Bug #43061 (Triaged): ceph fs add_data_pool doesn't set pool metadata properly
- 02:41 PM Bug #43090 (Fix Under Review): mds:check if oldin is null before accessing its member
- 02:41 PM Bug #43090 (Need More Info): mds:check if oldin is null before accessing its member
- Can you share your cluster version, logs, and backtrace?
- 01:58 PM Bug #43090 (Closed): mds:check if oldin is null before accessing its member
- In mds/server, handle_client_rename():
CInode *oldin = 0;
if destdnl->is_null() holds,
then oldin will still be 0;
...
- 12:41 PM Backport #43085 (Resolved): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- https://github.com/ceph/ceph/pull/31741
11/28/2019
- 11:26 PM Bug #43061 (Resolved): ceph fs add_data_pool doesn't set pool metadata properly
- maybe related to https://tracker.ceph.com/issues/36028...
- 01:38 AM Feature #15066 (Rejected): multifs: Allow filesystems to be assigned RADOS namespace as well as p...
- Now that pg_autoscaler exists with pg merging, this feature is not compelling. Using separate pools for multifs has a...
- 01:38 AM Feature #5520 (Rejected): osdc: should handle namespaces
- Now that pg_autoscaler exists with pg merging, this feature is not compelling. Using separate pools for multifs has a...
- 01:36 AM Feature #15070: mon: client: multifs: auth caps on client->mon connections to limit their access ...
- Giving this to Rishabh as discussed.
- 01:35 AM Feature #22478 (Rejected): multifs: support snapshots for shared data pool
- Now that pg_autoscaler exists with pg merging, this feature is not compelling. Using separate pools for multifs has a...
11/27/2019
- 10:15 PM Feature #15070: mon: client: multifs: auth caps on client->mon connections to limit their access ...
- also see branch wip-djf-15070-rebase on https://github.com/fullerdj/ceph/
- 05:46 PM Bug #43041 (Rejected): ceph-fuse client reported "No space left on device" when from cluster copy...
- Sorry, we don't consider bugs on clusters this old. Please upgrade!
- 05:35 AM Bug #43041 (Rejected): ceph-fuse client reported "No space left on device" when from cluster copy...
- cluster version: 0.94.9
client version: 0.94.9
ceph-fuse client err info:
2019-11-27 11:04:06.800947 7fddb0dfa7...
- 10:33 AM Bug #43039: client: shutdown race fails with status 141
- I think that's probably indicative of a SIGPIPE error, which probably means some task was writing to a pipe that did ...
- 12:24 AM Bug #43039 (Resolved): client: shutdown race fails with status 141
- ...
- 09:10 AM Cleanup #41951 (Fix Under Review): mds: obsolete mds_cache_size
11/26/2019
- 11:56 PM Bug #42923 (Pending Backport): pybind / cephfs: remove static typing in LibCephFS.chown
- 11:55 PM Bug #43038 (Resolved): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks.ceph...
- ...
- 08:20 PM Bug #43036 (Fix Under Review): mds: reports unrecognized message for mgrclient messages
- 07:22 PM Bug #43036 (Resolved): mds: reports unrecognized message for mgrclient messages
- ...
- 08:17 PM Bug #43035 (Rejected): qa: Test failure: test_ceph_config_show (tasks.cephfs.test_admin.TestConfi...
- Closed in favor of #43035.
- 07:12 PM Bug #43035 (Rejected): qa: Test failure: test_ceph_config_show (tasks.cephfs.test_admin.TestConfi...
- http://pulpito.ceph.com/pdonnell-2019-11-26_04:58:35-fs-wip-pdonnell-testing-20191126.005014-distro-basic-smithi/4543...
- 07:00 PM Documentation #43034 (New): doc: document large omap warning for directory fragmentation
- https://docs.ceph.com/docs/master/cephfs/health-messages/
and
https://docs.ceph.com/docs/master/cephfs/dirfrags...
- 06:56 PM Documentation #43033 (In Progress): doc: directory fragmentation section on config options
- https://docs.ceph.com/docs/master/cephfs/dirfrags/
Add section on advanced (not dev) config options for the MDS.
- 06:55 PM Documentation #43032 (New): doc: directory fragmentation omap cost/benefits
- https://docs.ceph.com/docs/master/cephfs/dirfrags/
* Discussion of rationale for directory fragmentation: object o...
- 06:52 PM Documentation #23897 (In Progress): doc: create snapshot user doc
- 06:52 PM Documentation #37746 (In Progress): doc: how to mount a subdir with ceph-fuse/kclient
- 06:52 PM Documentation #16300 (In Progress): doc: fuse_disable_pagecache
- 06:52 PM Documentation #22204 (In Progress): doc: scrub_path is missing in the docs
- 06:43 PM Documentation #42407 (In Progress): doc: add a doc for libcephfs
- 06:43 PM Documentation #41688 (In Progress): doc: client config reference improvements
- 06:41 PM Documentation #24642 (In Progress): doc: visibility semantics to other clients
- 06:37 PM Documentation #41999 (Resolved): CephFS Documentation Sprint 2
- 06:37 PM Documentation #42016 (In Progress): doc: layout rest of intro page
- 06:37 PM Documentation #43031 (Closed): CephFS Documentation Sprint 3
- 05:57 PM Bug #38681 (Resolved): cephfs-shell: add commands to manipulate snapshots
- 03:45 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Zoltan Arnold Nagy wrote:
> What info can I provide?
I think it'd be best to open a new tracker ticket for the...
- 03:21 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- I do see this on a new mds setup, with 14.2.4, having the right ganesha setup:
@root@c10n5:~# cat /etc/ganesha/gan...
- 02:52 PM Documentation #43028 (Resolved): doc: cephfs-shell options
- Like what's in https://docs.ceph.com/docs/master/cephfs/client-config-ref/
- 02:49 PM Feature #42447 (Fix Under Review): add basic client setup page
- 02:36 PM Bug #42872 (Fix Under Review): qa/tasks: add remaining tests for fs volume
- 10:12 AM Bug #42872 (In Progress): qa/tasks: add remaining tests for fs volume
- 02:33 PM Documentation #41825 (Resolved): CephFS Documentation Sprint 1
- 12:50 PM Bug #36348 (Resolved): luminous(?): blogbench I/O with two kernel clients; one stalls
- The patches were merged into -rc7 kernel, so this should be resolved now.
11/25/2019
- 10:10 PM Bug #42940 (Fix Under Review): client: trim_cache not invalidate kernel cache
- 06:29 PM Bug #42872 (New): qa/tasks: add remaining tests for fs volume
- Jos Collin wrote:
> We cannot test this with accuracy.
>
> Because:
>
> `ceph fs volume ls` would list the al...
- 10:01 AM Bug #42872 (Closed): qa/tasks: add remaining tests for fs volume
- We cannot test this with accuracy.
Because:
`ceph fs volume ls` would list the already existing volumes and th...
- 01:22 PM Feature #118 (In Progress): kclient: clean pages when throwing out dirty metadata on session tear...
- 09:47 AM Backport #43002 (Rejected): mimic: qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel ...
- 09:47 AM Backport #43001 (Resolved): nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) kern...
- https://github.com/ceph/ceph/pull/32075
- 09:47 AM Backport #43000 (Rejected): luminous: qa: ignore "ceph.dir.pin: No such attribute" for (old) kern...
- 03:00 AM Bug #42986 (Resolved): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_misc.Tes...
- ...
11/23/2019
- 06:06 AM Fix #38801 (Pending Backport): qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
- Whoops, forgot to move this.
11/22/2019
- 11:57 PM Bug #42894 (Fix Under Review): kclient: if there has at least one MDS still not laggy the mount w...
- 08:30 AM Backport #42951 (Resolved): nautilus: Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test...
- https://github.com/ceph/ceph/pull/33122
- 08:30 AM Backport #42950 (Rejected): mimic: mds: inode lock stuck at unstable state after evicting client
- 08:30 AM Backport #42949 (Resolved): nautilus: mds: inode lock stuck at unstable state after evicting client
- https://github.com/ceph/ceph/pull/32073
- 04:09 AM Backport #42943 (In Progress): nautilus: mds: free heap memory may grow too large for some workloads
- 04:03 AM Backport #42943 (Resolved): nautilus: mds: free heap memory may grow too large for some workloads
- https://github.com/ceph/ceph/pull/31802
- 04:02 AM Backport #42942 (Rejected): mimic: mds: free heap memory may grow too large for some workloads
- 04:02 AM Bug #42938 (Pending Backport): mds: free heap memory may grow too large for some workloads
- 03:51 AM Bug #42887: tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file or directory
Log: [[http://qa-proxy.ceph.com/teuthology/yuriw-2019-11-09_19:10:09-fs-wip-yuri-mimic_13.2.7_RC2-distro-basic-smithi...
- 02:51 AM Bug #42941 (Fix Under Review): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- 02:51 AM Bug #42941 (Fix Under Review): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- 02:42 AM Bug #42941 (In Progress): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- I see the issue.
- 02:37 AM Bug #42941 (Rejected): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- ...
- 01:22 AM Bug #42940 (Fix Under Review): client: trim_cache not invalidate kernel cache
11/21/2019
- 09:26 PM Bug #42707 (Resolved): Kernel 5.0 CephFS client hang
- Looks like the updates have trickled out to ubuntu repos. Let's call this resolved. Please reopen if you see it again...
- 09:24 PM Bug #42842 (Resolved): CephFS linux kernel hang, v4.15
- Glad to hear it. We'll call this one resolved.
- 07:43 PM Bug #42842: CephFS linux kernel hang, v4.15
- I am no longer seeing the problem on -70.79. Had a number of kernel versions installed and must have gotten confused.
- 03:00 PM Bug #42842: CephFS linux kernel hang, v4.15
- -66.75 is definitely bad, but -70.79 should be ok. Can you validate that you still see the problem on that kernel?
- 07:15 PM Bug #42835: qa: test_scrub_abort fails during check_task_status("idle")
- Nautilus backport will be tracked by #42738.
- 07:12 PM Backport #42738: nautilus: mgr/volumes: cleanup libcephfs handles on mgr shutdown
- and this: #42835
- 06:04 PM Bug #42938 (Resolved): mds: free heap memory may grow too large for some workloads
- MDS should periodically release heap free space to the kernel as part of cache trimming.
- 05:36 PM Backport #41283 (New): nautilus: cephfs-shell: No error message is printed on ls of invalid direc...
- 05:36 PM Backport #41268 (New): nautilus: cephfs-shell: onecmd throws TypeError
- 05:35 PM Backport #41118 (New): nautilus: cephfs-shell: add CI testing with flake8
- 05:35 PM Backport #41112 (New): nautilus: cephfs-shell: cd with no args has no effect
- 05:35 PM Backport #41105 (New): nautilus: cephfs-shell: flake8 blank line and indentation error
- 05:34 PM Backport #41089 (New): nautilus: cephfs-shell: Multiple flake8 errors
- 05:33 PM Backport #40898 (New): nautilus: cephfs-shell: Error messages are printed to stdout
- 02:55 PM Feature #42831 (Fix Under Review): mds: add config to deny all client reconnects
- 02:54 PM Bug #42917 (Duplicate): ceph: task status not available
- 02:52 PM Bug #42872 (Need More Info): qa/tasks: add remaining tests for fs volume
- 02:51 PM Bug #42887 (Need More Info): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such...
- 04:22 AM Bug #42923 (Fix Under Review): pybind / cephfs: remove static typing in LibCephFS.chown
- 04:21 AM Bug #42923 (In Progress): pybind / cephfs: remove static typing in LibCephFS.chown
- 04:21 AM Bug #42923 (Resolved): pybind / cephfs: remove static typing in LibCephFS.chown
- ...
- 02:06 AM Bug #42894: kclient: if there has at least one MDS still not laggy the mount will fail
- The following commits should fix it.
https://github.com/ceph/ceph-client/commit/2f35ef362bc14f25dac6738472180d9a4a...
- 01:59 AM Backport #41890: nautilus: mount.ceph: enable consumption of ceph keyring files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30521
m...
- 01:43 AM Backport #41890 (Resolved): nautilus: mount.ceph: enable consumption of ceph keyring files
- 01:43 AM Feature #16656 (Resolved): mount.ceph: enable consumption of ceph keyring files
- 01:31 AM Bug #42922 (Resolved): nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- https://github.com/ceph/ceph/pull/29911
needs backport.
11/20/2019
- 11:33 PM Bug #42646 (Pending Backport): Test failure: test_subvolume_snapshot_ls (tasks.cephfs.test_volume...
- 11:32 PM Bug #42020 (Resolved): qa: fuse_mount should check if mounted in umount_wait
- DaemonWatchdog is not in mimic/nautilus.
- 11:31 PM Bug #42020 (Pending Backport): qa: fuse_mount should check if mounted in umount_wait
- 11:26 PM Bug #42746 (Resolved): mds crashed in MDCache::request_forward
- 11:25 PM Bug #42759 (Pending Backport): mds: inode lock stuck at unstable state after evicting client
- 10:37 PM Bug #42920 (New): mds: removed from map due to dropped (?) beacons
- ...
- 10:30 PM Bug #42919 (New): mds: heartbeat timeout during large scale git-clone/rm workload
- ...
- 10:03 PM Bug #42917 (Duplicate): ceph: task status not available
- ...
- 10:21 AM Bug #24679 (Resolved): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:14 AM Bug #41031 (Resolved): qa: malformed job
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:56 AM Bug #42894 (Resolved): kclient: if there has at least one MDS still not laggy the mount will fail
- In case:
# ceph fs dump
[...]
max_mds 3
in 0,1,2
up {0=5139,1=4837,2=4985}
failed
damaged
stoppe...
- 12:31 AM Bug #42827 (Fix Under Review): mds: when mounting the extra slash(es) at the end of server path w...
11/19/2019
- 08:41 PM Bug #42887: tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file or directory
- Please link the source teuthology log. Add html "pre" markup around the log so it's readable.
- 04:39 PM Bug #42887 (Won't Fix): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file...
- ...
- 07:18 PM Backport #42678 (Resolved): luminous: qa: malformed job
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31449
m...
- 04:33 PM Backport #42678: luminous: qa: malformed job
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31449
merged
- 07:18 PM Backport #42672 (Resolved): luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31450
m...
- 04:32 PM Backport #42672: luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31450
merged
- 07:18 PM Backport #42774 (Resolved): luminous: mds: add command that modify session metadata
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31573
m...
- 04:31 PM Backport #42774: luminous: mds: add command that modify session metadata
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/31573
merged
https://trello.com/c/YlSLupiJ
- 04:02 PM Backport #42886 (In Progress): nautilus: mgr/volumes: allow setting uid, gid of subvolume and sub...
- 03:54 PM Backport #42886 (Resolved): nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvol...
- ... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` commands.
https://...
- 10:43 AM Bug #42252: mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Rishabh Dave wrote:
> Couldn't reproduce this issue locally; test_21501 passed for me.
with python3? also, I thin...
- 10:42 AM Bug #42252: mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- Couldn't reproduce this issue locally; test_21501 passed for me.
- 10:25 AM Feature #40959 (Pending Backport): mgr/volumes: allow setting uid, gid of subvolume and subvolume...
- 09:03 AM Bug #40877 (Resolved): client: client should return EIO when it's unsafe reqs have been dropped w...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:03 AM Bug #40967 (Resolved): qa: race in test_standby_replay_singleton_fail
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:03 AM Bug #40968 (Resolved): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:02 AM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:01 AM Bug #41585 (Resolved): mds: client evicted twice in one tick
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:38 AM Backport #41886 (Resolved): nautilus: mds: client evicted twice in one tick
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30951
m...
- 08:36 AM Backport #41488 (Resolved): nautilus: client: client should return EIO when it's unsafe reqs have...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30043
m...
- 08:34 AM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29832
m...
- 08:33 AM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29811
m...
- 08:33 AM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29750
m...
- 06:02 AM Feature #42875 (New): mgr/volumes: user credentials for ListVolumes, GetCapacity and ValidateVolu...
- Validate user credentials for the following API/Commands:
ValidateVolumeCapabilities
GetCapacity
ListVolumes
- 05:57 AM Feature #42874 (New): mgr/volumes: add ValidateVolumeCapabilities API/command for `fs volume`
- add ValidateVolumeCapabilities API/command for `fs volume` as mentioned in [1]
[1] https://github.com/container-st...
- 05:55 AM Feature #42873 (New): mgr/volumes: add GetCapacity API/command for `fs volume`
- add `fs volume getcapacity` command as suggested in [1].
[1] https://github.com/container-storage-interface/spec/i...
- 05:06 AM Bug #42872 (Resolved): qa/tasks: add remaining tests for fs volume
- There are missing tests for `fs volume` in test_volumes.py. Only test_volume_rm is available. Where are the tests for...
- 01:32 AM Bug #42827: mds: when mounting the extra slash(es) at the end of server path will be wrongly pars...
- This should fix it: https://github.com/ceph/ceph/pull/31713