Activity
From 01/08/2020 to 02/06/2020
02/06/2020
- 10:48 PM Backport #43137 (In Progress): nautilus: pybind/mgr/volumes: idle connection drop is not working
- 10:26 PM Bug #44023 (New): MDS continuously crashing on v14.2.7
- I have max mds set to 2, though I have tried fiddling with the values since hitting the crash. Ceph status indicates ...
- 10:07 PM Backport #43790 (In Progress): nautilus: RuntimeError: Files in flight high water is unexpectedly...
- 08:42 PM Backport #38350 (Rejected): luminous: mds: decoded LogEvent may leak during shutdown
- Leaks during MDS shutdown; not essential.
- 08:42 PM Backport #38349 (Rejected): mimic: mds: decoded LogEvent may leak during shutdown
- Leaks only during MDS shutdown; not essential.
- 08:41 PM Backport #37637 (Rejected): luminous: client: support getfattr ceph.dir.pin extended attribute
- Cancelling this backport; it's not essential.
- 08:40 PM Backport #37636 (Rejected): mimic: client: support getfattr ceph.dir.pin extended attribute
- Cancelling this backport. It's not essential.
- 07:01 PM Bug #43061: ceph fs add_data_pool doesn't set pool metadata properly
- Status on this, Ramana?
- 06:59 PM Bug #43750 (Triaged): mds: add perf counters for openfiletable
- 04:53 PM Bug #44021 (Resolved): client: bad error handling in Client::_lseek
- The SEEK_HOLE and SEEK_DATA error handling looks broken in the userland client:...
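For context, lseek(2) requires SEEK_DATA/SEEK_HOLE to fail with ENXIO when the offset is at or past EOF. A minimal Python sketch of those semantics, for a non-sparse file, is below; `checked_seek` is a hypothetical helper for illustration, not the actual Client::_lseek code.

```python
import errno
import os


def checked_seek(file_size, offset, whence):
    """Sketch of SEEK_DATA/SEEK_HOLE semantics per lseek(2): an offset at or
    past EOF must raise ENXIO instead of silently succeeding. Models a
    non-sparse file, where all bytes are data and the only hole is at EOF."""
    if whence in (os.SEEK_DATA, os.SEEK_HOLE):
        if offset >= file_size:
            raise OSError(errno.ENXIO, os.strerror(errno.ENXIO))
        # Non-sparse: data starts at the given offset, the hole is at EOF.
        return offset if whence == os.SEEK_DATA else file_size
    raise ValueError("unsupported whence")
```

The point of the ticket is the error path: a correct implementation must propagate ENXIO rather than return a bogus offset.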
- 04:47 PM Backport #44020 (Resolved): pybind/mgr/volumes: restore from snapshot
- https://github.com/ceph/ceph/pull/33122/
- 04:46 PM Feature #24880 (Pending Backport): pybind/mgr/volumes: restore from snapshot
- 03:06 PM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Greg Farnum wrote:
> Okay, I dove into this a bit today. No final conclusions but reminding myself about how some of...
- 12:11 AM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- Okay, I dove into this a bit today. No final conclusions but reminding myself about how some of this works and severa...
- 09:18 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Most unexpected cap revokes were because of a FUSE API limitation: lookup and getattr always want CEPH_STAT_CAP_INODE_...
- 09:16 AM Feature #7333: client: evaluate multiple O_APPEND writers
- This is one fix for the O_APPEND & O_DIRECT case:
In O_APPEND & O_DIRECT mode, the data from different writers will
...
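The POSIX guarantee the entry above touches on can be demonstrated on a local filesystem: every write() through an O_APPEND descriptor lands at the current EOF, even when two independent descriptors interleave. This is an illustration of the single-node behavior only; the tracker entry is about preserving it across CephFS writers when O_APPEND is combined with O_DIRECT.

```python
import os
import tempfile

# Create an empty scratch file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
path = tmp.name

# Two independent O_APPEND descriptors on the same file.
fd1 = os.open(path, os.O_WRONLY | os.O_APPEND)
fd2 = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd1, b"a")
os.write(fd2, b"b")  # appended after "a", not written at offset 0
os.write(fd1, b"c")  # appended after "b"
os.close(fd1)
os.close(fd2)

with open(path, "rb") as f:
    data = f.read()
os.unlink(path)
print(data)  # b'abc'
```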
02/05/2020
- 05:09 AM Bug #43968 (Fix Under Review): qa: multimds suite using centos7
- 05:04 AM Bug #43968 (Resolved): qa: multimds suite using centos7
- http://pulpito.ceph.com/teuthology-2020-02-01_04:15:02-multimds-master-testing-basic-smithi/4723526/
- 04:40 AM Bug #43796 (Fix Under Review): qa: test_version_splitting
- 03:58 AM Bug #43965 (Resolved): mgr/volumes: synchronize ownership (for symlinks) and inode timestamps for...
- `lchown()` and `[l]utimes()` python binding calls need to be implemented. The async cloner module in mgr/volumes would s...
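Until such bindings exist, plain `os` calls illustrate the behavior the cloner needs for symlinks: copying ownership and timestamps without following the link. A hedged sketch follows; the function name is illustrative only, not the mgr/volumes API.

```python
import os


def copy_symlink_attrs(src, dst):
    """Copy ownership and timestamps from src to dst WITHOUT following
    symlinks, i.e. the lchown()/lutimes() behavior the ticket says the
    python bindings must expose. Illustrative sketch only."""
    st = os.stat(src, follow_symlinks=False)                    # lstat()
    os.chown(dst, st.st_uid, st.st_gid, follow_symlinks=False)  # lchown()
    os.utime(dst, ns=(st.st_atime_ns, st.st_mtime_ns),
             follow_symlinks=False)    # utimensat(..., AT_SYMLINK_NOFOLLOW)
```

With follow_symlinks=True (the default), these calls would touch the link target instead of the link itself, which is exactly the bug the snapshot-clone path has to avoid.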
- 12:53 AM Bug #43964 (Resolved): qa: Test failure: test_acls
- This shows up in all runs:...
02/04/2020
- 04:04 PM Bug #43960: MDS: incorrectly issues Fc for new opens when there is an existing writer
- From the original ticket:
Jeff Layton wrote:
> Greg pointed out some things in a face-to-face discussion the other ...
- 03:33 PM Bug #43960 (Triaged): MDS: incorrectly issues Fc for new opens when there is an existing writer
- Cloned from #43748, to cover the MDS-side issue. (Note that I have changed much of the text below to correct a few de...
- 02:57 PM Bug #43909: mds: SIGSEGV in Migrator::export_sessions_flushed
- ...
- 02:51 PM Bug #43901: qa: fsx: fatal error: libaio.h: No such file or directory
- What distro is this? For RHEL/Centos you probably just need to ensure that libaio-devel is installed. For ubuntu, lib...
- 02:11 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > client: 172.21.15.131:0/4191323679 (cephfs instance), registers ...
- 01:00 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Greg pointed out some things in a face-to-face discussion the other day that lead me to question whether this ought t...
02/03/2020
- 08:08 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- Venky Shankar wrote:
> client: 172.21.15.131:0/4191323679 (cephfs instance), registers its addrs with ceph-mgr:
>
...
- 04:36 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- client: 172.21.15.131:0/4191323679 (cephfs instance), registers its addrs with ceph-mgr:...
02/02/2020
- 02:57 PM Feature #42530 (Resolved): cephfs-shell: add setxattr and getxattr
- 02:52 PM Bug #40861 (Resolved): cephfs-shell: -p doesn't work for rmdir
- 02:52 PM Bug #40863 (Resolved): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
02/01/2020
- 12:56 PM Bug #40867 (Resolved): mgr: failover during in qa testing causes unresponsive client warnings
- Moving this back to resolved. Opened #43943
- 12:56 PM Bug #43943 (Resolved): qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 ...
- /a/sage-2020-01-28_03:52:05-rados-wip-sage2-testing-2020-01-27-1839-distro-basic-smithi/4713589
description: rados/m...
01/31/2020
- 01:13 PM Bug #43905: qa: test_rebuild_inotable infinite loop
- It's a bug revealed by 'mds: cleanup '* -> excl' check in Locker::file_eval()'
- 09:47 AM Bug #43905: qa: test_rebuild_inotable infinite loop
- (2 << 40) is correct because inode numbers for rank 1 start at (2 << 40)
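A small sketch of that arithmetic. The general rule assumed here (rank r's inode range starts at (r + 1) << 40) is inferred from the comment above, not taken from the Ceph source.

```python
# Each MDS rank owns a disjoint inode-number range; per the comment,
# rank 1's range begins at (2 << 40). Generalizing (assumption, not
# verified against mdstypes.h): rank r starts at (r + 1) << 40.
def rank_ino_start(rank):
    return (rank + 1) << 40


print(hex(rank_ino_start(1)))  # 0x20000000000, i.e. 2 << 40 == 2**41
```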
- 08:26 AM Bug #43908 (Fix Under Review): mds: FAILED ceph_assert(!p.is_remote_wrlock())
- Nothing to do with the async dirops PR
- 03:55 AM Bug #40867 (In Progress): mgr: failover during in qa testing causes unresponsive client warnings
- Another one:
/a/sage-2020-01-30_22:27:29-rados-wip-sage-testing-2020-01-30-1230-distro-basic-smithi/4719492
01/30/2020
- 03:21 PM Bug #43208 (Resolved): mds: unsafe req may result in data remaining in the datapool
- 03:17 PM Bug #43909 (Resolved): mds: SIGSEGV in Migrator::export_sessions_flushed
- ...
- 03:15 PM Bug #43908 (Resolved): mds: FAILED ceph_assert(!p.is_remote_wrlock())
- ...
- 03:06 PM Cleanup #43408 (Resolved): mds: reorg StrayManager header
- 03:04 PM Bug #43905 (Closed): qa: test_rebuild_inotable infinite loop
- ...
- 01:55 PM Bug #43763 (Resolved): cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
- 01:48 PM Bug #43902 (Triaged): qa: mon_thrash: timeout "ceph quorum_status"
- ...
- 01:45 PM Bug #43901 (Resolved): qa: fsx: fatal error: libaio.h: No such file or directory
- ...
- 10:18 AM Bug #43761 (Triaged): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not gi...
- Ramana, I'm assigning this to you. The bug is arguably in ceph-ansible because it's enabling the application but not ...
- 09:53 AM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- We just updated our 2nd cluster to nautilus and saw the exact same mds respawn at the moment we enabled msgr2:
<pr...
- 08:58 AM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
- Sage Weil wrote:
> another instance of this on master,
> [...]
> /a/sage-2020-01-28_03:52:05-rados-wip-sage2-testi...
01/29/2020
- 02:52 PM Bug #41759 (Can't reproduce): mgr/volumes: test_async_subvolume_rm fails since purge threads did ...
- Patrick Donnelly wrote:
> Venky, is this still a problem?
haven't seen it lately. moving to "can't reproduce"
...
- 01:57 PM Bug #43761: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not give the nec...
- Hello,
From the mailing list: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/23FDDSYBCDVMYGCUTAL...
01/28/2020
- 08:05 PM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
- another instance of this on master,...
- 02:32 PM Bug #43762: pybind/mgr/volumes: create fails with TypeError
- Adding more context to this
This happened after creating a second volume. I had to enable creation of multi filesy...
- 11:53 AM Backport #43568: nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- Nathan Cutler wrote:
> @Ramana - please feel free to take any backport issue that is in state "New" or "Need More In...
01/27/2020
- 10:23 PM Bug #43827 (Duplicate): decode fail in SessionMapStore::decode_legacy on upgrade
- 04:20 PM Bug #43827: decode fail in SessionMapStore::decode_legacy on upgrade
- I think it's a RADOS bug. The omap header/keys got lost after upgrade.
- 06:01 AM Bug #43827: decode fail in SessionMapStore::decode_legacy on upgrade
- mimic does not use the legacy session format. Looks like that MDS got a zero-length omap header, so it tried loading sess...
- 07:44 PM Backport #43790 (New): nautilus: RuntimeError: Files in flight high water is unexpectedly low (0 ...
- 05:21 PM Backport #43790 (Need More Info): nautilus: RuntimeError: Files in flight high water is unexpecte...
- Seems to be complicated by the lack of https://github.com/ceph/ceph/pull/31596 in nautilus?
- 05:16 PM Backport #43784 (In Progress): nautilus: fs: OpenFileTable object shards have too many k/v pairs
- 04:54 PM Backport #43780 (In Progress): nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.ce...
- 04:53 PM Backport #43777 (In Progress): nautilus: qa: test_full racy check: AssertionError: 29 not greater...
- 04:51 PM Backport #43733 (In Progress): nautilus: qa: ffsb suite causes SLOW_OPS warnings
- 04:48 PM Backport #43729 (In Progress): nautilus: client: chdir does not raise error if a file is passed
- 04:47 PM Backport #43628 (In Progress): nautilus: client: disallow changing fuse_default_permissions optio...
- 04:46 PM Backport #43724 (Need More Info): nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- leaving mgr/volumes backports to the developers
- 04:45 PM Backport #43624 (In Progress): nautilus: mds: note features client has when rejecting client due ...
- 04:44 PM Backport #43573 (In Progress): nautilus: cephfs-journal-tool: will crash without any extra argument
- 04:37 PM Backport #43568 (In Progress): nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- @Ramana - please feel free to take any backport issue that is in state "New" or "Need More Info".
If you can possi...
- 04:37 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Changing this to just a client bug. The kernel driver issue will be tracked via: https://bugzilla.redhat.com/show_bug...
- 04:33 PM Backport #43509 (In Progress): nautilus: 'ceph -s' does not show standbys if there are no filesys...
- 04:32 PM Backport #43502 (In Progress): mimic: mount.ceph: give a hint message when no mds is up or cluste...
- 04:30 PM Backport #43503 (In Progress): nautilus: mount.ceph: give a hint message when no mds is up or clu...
- 04:21 PM Backport #43343 (In Progress): nautilus: mds: client does not response to cap revoke After sessio...
- 04:08 PM Backport #43629 (Need More Info): nautilus: mgr/volumes: provision subvolumes with config metadat...
- 04:07 PM Backport #43338 (Need More Info): nautilus: qa/tasks: add remaining tests for fs volume
- 04:06 PM Backport #43137 (Need More Info): nautilus: pybind/mgr/volumes: idle connection drop is not working
- 11:58 AM Feature #40811 (Resolved): mds: add command that modify session metadata
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
01/26/2020
- 05:22 PM Bug #43827 (Duplicate): decode fail in SessionMapStore::decode_legacy on upgrade
- /a/sage-2020-01-26_15:00:33-upgrade:cephfs-wip-sage2-testing-2020-01-24-1408-distro-basic-smithi/4709313 (and the who...
- 11:07 AM Backport #43143 (Resolved): nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32079
m... - 11:06 AM Backport #41106 (Resolved): nautilus: mds: add command that modify session metadata
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32245
m...
01/25/2020
- 11:40 AM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- 11:38 AM Cleanup #40694 (Pending Backport): mds: move MDSDaemon conf change handling to MDSRank finisher
- 11:31 AM Bug #43336 (Resolved): qa: test_unmount_for_evicted_client hangs
- 11:26 AM Bug #43336 (Pending Backport): qa: test_unmount_for_evicted_client hangs
- 11:25 AM Backport #43345 (In Progress): nautilus: mds: metadata changes may be lost when MDS is restarted
01/24/2020
- 11:54 PM Bug #43090 (Closed): mds:check if oldin is null before accessing its member
- 11:52 PM Bug #42827 (Won't Fix): mds: when mounting the extra slash(es) at the end of server path will be ...
- 11:51 PM Bug #41242 (Closed): mds: re-introudce mds_log_max_expiring to control expiring concurrency manually
- 11:33 PM Bug #43817 (Resolved): mds: update cephfs octopus feature bit
- 2020-02-06 After discussion at the CDM, we will stop naming releases for the CephFS min_compat_client bits. The oper...
- 11:19 PM Feature #43423 (In Progress): mds: collect and show the dentry lease metric
- 11:18 PM Feature #39098 (Resolved): mds: lock caching for asynchronous unlink
- 11:17 PM Bug #42770 (Closed): Regulary trim inode in memory
- 11:17 PM Bug #41651 (Closed): dbench: command not found
- 11:13 PM Cleanup #37931 (New): MDSMonitor: rename `mds repaired` to `fs repaired`
- 11:11 PM Feature #12274 (New): mds: start forward scrubs from all subtree roots, skip non-auth metadata
- 11:06 PM Bug #26863 (Can't reproduce): qa: test_full_different_file "dd: error writing 'large_file': No sp...
- 11:05 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- Venky, is this still a problem?
- 10:05 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- I see a couple of other potential fixes:
1) we could not ask for those caps on an OPEN/CREATE and just rely on the...
- 10:02 PM Bug #43800 (Duplicate): FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MD...
- super xor wrote:
> seems to exist already, my bad: https://tracker.ceph.com/issues/43348
no worries! Thanks for t...
- 08:46 AM Bug #43800: FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MDS failure
- seems to exist already, my bad: https://tracker.ceph.com/issues/43348
- 06:33 AM Bug #43800 (Duplicate): FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MD...
- We had a complete cephfs failure tonight caused by crashes of all active and standby MDS....
- 09:10 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- We'll look at merging this at the beginning of Pacific release cycle.
- 04:39 PM Backport #43143: nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32079
merged
- 04:07 PM Backport #41106: nautilus: mds: add command that modify session metadata
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/32245
merged
- 12:51 AM Bug #43762 (Need More Info): pybind/mgr/volumes: create fails with TypeError
- This needs more information, as Victoria is checking if this happens because of their configuration/python version.
01/23/2020
- 11:53 PM Bug #43459 (Resolved): qa: FATAL ERROR: libtool does not seem to be installed.
- 11:29 PM Cleanup #43387 (Resolved): mds: reorg SnapServer header
- 11:28 PM Bug #43660 (Resolved): mds: null pointer dereference in Server::handle_client_link
- 11:12 PM Bug #43796 (Resolved): qa: test_version_splitting
- ...
- 08:43 PM Bug #43762 (Triaged): pybind/mgr/volumes: create fails with TypeError
- 11:15 AM Bug #43762 (Closed): pybind/mgr/volumes: create fails with TypeError
- ...
- 05:02 PM Backport #43791 (Rejected): mimic: RuntimeError: Files in flight high water is unexpectedly low (...
- 05:02 PM Backport #43790 (Resolved): nautilus: RuntimeError: Files in flight high water is unexpectedly lo...
- https://github.com/ceph/ceph/pull/33115
- 04:57 PM Backport #43785 (Rejected): mimic: fs: OpenFileTable object shards have too many k/v pairs
- 04:57 PM Backport #43784 (Resolved): nautilus: fs: OpenFileTable object shards have too many k/v pairs
- https://github.com/ceph/ceph/pull/32921
- 04:56 PM Backport #43780 (Resolved): nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephf...
- https://github.com/ceph/ceph/pull/32919
- 04:55 PM Backport #43778 (Rejected): mimic: qa: test_full racy check: AssertionError: 29 not greater than ...
- 04:55 PM Backport #43777 (Resolved): nautilus: qa: test_full racy check: AssertionError: 29 not greater th...
- https://github.com/ceph/ceph/pull/32918
- 04:46 PM Backport #43770 (In Progress): nautilus: mount.ceph fails with ERANGE if name= option is longer t...
- 04:40 PM Backport #43770 (Resolved): nautilus: mount.ceph fails with ERANGE if name= option is longer than...
- https://github.com/ceph/ceph/pull/32807
- 04:38 PM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- Patrick Donnelly wrote:
> Nathan, are you still planning to work on this?
Yes. Sorry for the latency!
- 01:21 AM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- Nathan, are you still planning to work on this?
- 01:06 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- I am seeing similar issues on our cluster. I had the Ganesha node running on the same node as the MONs just for conve...
- 01:05 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- What I have is mostly working now, but I'm occasionally seeing an async create come back with -EEXIST when running xf...
- 12:07 PM Bug #43763 (Resolved): cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
- cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
stat fails with No such file or directory
- 10:22 AM Bug #43761 (Resolved): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not g...
- Hello,
I noticed a regression in the "ceph fs authorize" command: it is no longer enough to give the right access to be ...
- 01:30 AM Bug #43644 (Fix Under Review): mds: Empty directory check is done on the importer side (at import...
- 01:27 AM Bug #36078 (Can't reproduce): mds: 9 active MDS cluster stuck during fsstress
- 01:24 AM Bug #43600 (Triaged): qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- 01:23 AM Feature #17852 (Resolved): mds: when starting forward scrub, return handle or stamp/version which...
- 01:17 AM Bug #43517 (Triaged): qa: random subvolumegroup collision
- 01:16 AM Feature #41302 (Fix Under Review): mds: add ephemeral random and distributed export pins
- 01:15 AM Bug #38203 (Can't reproduce): ceph-mds segfault during migrator nicely exporting
01/22/2020
- 10:41 PM Bug #43513 (Resolved): qa: filelock_interrupt.py hang
- 04:23 PM Bug #42986 (Pending Backport): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_...
- 04:01 AM Cleanup #42867 (Resolved): mds: reorg Server header
- 04:01 AM Bug #42515 (Pending Backport): fs: OpenFileTable object shards have too many k/v pairs
- 03:58 AM Feature #39129 (Resolved): create mechanism to delegate ranges of inode numbers to client
- 03:55 AM Cleanup #43369 (Resolved): mds: reorg SnapClient header
- 03:55 AM Bug #43649 (Pending Backport): mount.ceph fails with ERANGE if name= option is longer than 37 cha...
- 03:40 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Zheng Yan wrote:
> > Now, send SIGKILL to the standby ceph-fuse client. This will cause I/O to halt for the first cl...
- 03:14 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
> Now, send SIGKILL to the standby ceph-fuse client. This will cause I/O to halt for the first client until the MDS...
01/21/2020
- 11:15 PM Documentation #43154 (Resolved): doc: migrate best practice recommendations to relevant docs
- 11:08 PM Bug #43719 (Resolved): qa: "error New address family defined, please update secclass_map."
- 09:05 PM Bug #43719 (Fix Under Review): qa: "error New address family defined, please update secclass_map."
- 05:13 PM Bug #43719 (In Progress): qa: "error New address family defined, please update secclass_map."
- 09:26 PM Bug #43750 (Resolved): mds: add perf counters for openfiletable
- So we can do some accounting and monitoring. Have counters for omap updates, number of objects, files opened in total...
- 06:45 PM Bug #43748 (Fix Under Review): client: improve wanted handling so we don't request unused caps (a...
- In an active/standby configuration of two clients managed by file locks, the standby client causes unbuffered I/O on ...
- 05:11 PM Backport #43347 (In Progress): mimic: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- 04:49 PM Backport #43347: mimic: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- It would be great if this backport could make it into an upcoming patch release. It's quite trivial and would save us fr...
- 05:09 PM Backport #43348 (In Progress): nautilus: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- 05:06 PM Bug #42467 (Can't reproduce): mds: daemon crashes while updating blacklist
- 03:04 PM Bug #42467: mds: daemon crashes while updating blacklist
- No idea how this happened. When a request is removed from active_requests, the request should also be removed from session-...
- 05:03 PM Bug #43742 (Duplicate): fsx.sh failed to build xfstests
- 01:04 PM Bug #43742 (Duplicate): fsx.sh failed to build xfstests
- /a/pdonnell-2020-01-21_04:55:38-fs-wip-pdonnell-testing-20200121.015336-distro-basic-smithi/4689982/teuthology.log
...
- 05:02 PM Bug #43741 (Duplicate): kernel_untar_build.sh failed
- 12:54 PM Bug #43741 (Duplicate): kernel_untar_build.sh failed
- http://pulpito.ceph.com/pdonnell-2020-01-21_04:55:38-fs-wip-pdonnell-testing-20200121.015336-distro-basic-smithi/4690...
- 04:56 PM Documentation #43743 (Resolved): doc: fix mount.ceph
- 02:11 PM Documentation #43743 (Resolved): doc: fix mount.ceph
- The second command doesn't render, probably due to some mistake in the underlying RST file.
- 01:36 PM Backport #43568: nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- Nathan, can I go ahead and create a backport PR for this?
- 01:32 PM Backport #43733: nautilus: qa: ffsb suite causes SLOW_OPS warnings
- Nathan, can I go ahead and create a backport PR for this?
- 01:04 PM Backport #43137: nautilus: pybind/mgr/volumes: idle connection drop is not working
- To be backported together with #43139
- 01:03 PM Backport #43137 (New): nautilus: pybind/mgr/volumes: idle connection drop is not working
- First backport attempt - https://github.com/ceph/ceph/pull/32076 - was closed.
- 02:52 AM Bug #43513 (Fix Under Review): qa: filelock_interrupt.py hang
- 01:31 AM Bug #16881 (Pending Backport): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- 01:30 AM Bug #43554 (Pending Backport): qa: test_full racy check: AssertionError: 29 not greater than or e...
01/20/2020
- 10:22 PM Feature #40959 (Resolved): mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:20 PM Backport #43734 (Rejected): mimic: qa: ffsb suite causes SLOW_OPS warnings
- 10:20 PM Backport #43733 (Resolved): nautilus: qa: ffsb suite causes SLOW_OPS warnings
- https://github.com/ceph/ceph/pull/32917
- 10:19 PM Bug #42922 (Resolved): nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #42923 (Resolved): pybind / cephfs: remove static typing in LibCephFS.chown
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:18 PM Bug #43036 (Resolved): mds: reports unrecognized message for mgrclient messages
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:18 PM Bug #43038 (Resolved): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks.ceph...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:17 PM Backport #43730 (Rejected): mimic: client: chdir does not raise error if a file is passed
- 10:17 PM Backport #43729 (Resolved): nautilus: client: chdir does not raise error if a file is passed
- https://github.com/ceph/ceph/pull/32916
- 10:17 PM Backport #43724 (Resolved): nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- https://github.com/ceph/ceph/pull/33122/
- 10:16 PM Backport #43141 (Resolved): nautilus: tools/cephfs: linkages injected by cephfs-data-scan have fi...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32078
m...
- 09:18 PM Backport #43141: nautilus: tools/cephfs: linkages injected by cephfs-data-scan have first == head
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32078
merged - 10:16 PM Backport #43138 (Resolved): nautilus: mds: reports unrecognized message for mgrclient messages
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32077
m...
- 07:57 PM Backport #43138: nautilus: mds: reports unrecognized message for mgrclient messages
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32077
merged - 10:15 PM Backport #43001 (Resolved): nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) kern...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32075
m...
- 07:57 PM Backport #43001: nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32075
merged - 10:15 PM Backport #42949 (Resolved): nautilus: mds: inode lock stuck at unstable state after evicting client
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32073
m...
- 07:56 PM Backport #42949: nautilus: mds: inode lock stuck at unstable state after evicting client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32073
merged - 10:14 PM Backport #43170 (Resolved): nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot tes...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32072
m...
- 07:56 PM Backport #43170: nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32072
merged - 10:13 PM Backport #43219 (Resolved): nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31741
m...
- 10:13 PM Backport #43085 (Resolved): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31741
m...
- 10:12 PM Backport #42886 (Resolved): nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvol...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31741
m...
- 10:12 PM Backport #42650 (Resolved): nautilus: mds: no assert on frozen dir when scrub path
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32071
m...
- 08:26 PM Bug #43719: qa: "error New address family defined, please update secclass_map."
- Yep, I think so:
https://lkml.org/lkml/2019/4/23/517
Basically, building old kernels on really new userspace gl...
- 06:38 PM Bug #43719 (Resolved): qa: "error New address family defined, please update secclass_map."
- ...
- 07:15 PM Bug #43522 (Resolved): qa: update xfstests_dev to install python2 instead of python on ubuntu 19
- 07:14 PM Bug #43486 (Resolved): qa: test_acls: cannot find packages on centos 8
- 07:14 PM Bug #43496 (Resolved): qa: xfstest_dev.py crashes while calling teuthology.misc.get_system_type
- 07:12 PM Bug #43440 (Pending Backport): client: chdir does not raise error if a file is passed
- 07:10 PM Bug #43601 (Resolved): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- 07:07 PM Bug #43645 (Pending Backport): mgr/volumes: subvolumes with snapshots can be deleted
- 06:41 PM Bug #43599 (Resolved): kclient: corrupt message failure on RHEL8 distribution kernel
- 05:15 PM Bug #42637 (Pending Backport): qa: ffsb suite causes SLOW_OPS warnings
- 02:49 PM Bug #43596 (New): mds: crash when enable msgr v2 due to lost contact
- 02:47 PM Bug #43637 (Triaged): nautilus: qa: Health check failed: Reduced data availability: 16 pgs inacti...
- 02:45 PM Bug #43640 (Need More Info): nautilus: qa: test_async_subvolume_rm failure
- Waiting for this to be reproduced again.
- 02:29 PM Bug #43664 (Duplicate): mds: metric_spec is encoded into version 1 MClientSession
- 06:42 AM Bug #43664 (Fix Under Review): mds: metric_spec is encoded into version 1 MClientSession
- 06:39 AM Bug #43664 (Duplicate): mds: metric_spec is encoded into version 1 MClientSession
01/19/2020
- 07:51 AM Bug #43660 (Fix Under Review): mds: null pointer dereference in Server::handle_client_link
- 07:48 AM Bug #43660 (Resolved): mds: null pointer dereference in Server::handle_client_link
01/18/2020
- 04:27 PM Backport #43219: nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks....
- Jos Collin wrote:
> https://github.com/ceph/ceph/pull/31741
merged
- 04:27 PM Backport #43085: nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31741
merged
- 04:27 PM Backport #42886: nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- Jos Collin wrote:
> ... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` ...
- 04:26 PM Backport #42650: nautilus: mds: no assert on frozen dir when scrub path
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32071
merged
- 11:17 AM Feature #41182 (Resolved): mgr/volumes: add `fs subvolume extend/shrink` commands
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:15 AM Documentation #42044 (Resolved): doc/ceph-fuse: -k missing in man page
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:14 AM Feature #42479 (Resolved): mgr/volumes: add `fs subvolume resize infinite` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:35 AM Backport #42943 (Resolved): nautilus: mds: free heap memory may grow too large for some workloads
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31802
m...
- 10:34 AM Backport #42631 (Resolved): nautilus: client: FAILED assert(cap == in->auth_cap)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32065
m...
- 10:23 AM Backport #42279 (Resolved): nautilus: qa: logrotate should tolerate connection resets
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31082
m...
- 10:23 AM Backport #42129 (Resolved): nautilus: doc/ceph-fuse: -k missing in man page
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30765
m...
- 09:37 AM Backport #42790 (Resolved): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31332
m...
- 09:36 AM Backport #42615 (Resolved): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31332
m...
- 09:35 AM Backport #42142 (Resolved): nautilus: mds:split the dir if the op makes it oversized, because som...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31302
m...
- 09:35 AM Backport #42424 (Resolved): nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2) No...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31084
m...
- 09:35 AM Backport #42422 (Resolved): nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in re...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31083
m...
- 09:34 AM Backport #42158 (Resolved): nautilus: osdc: objecter ops output does not have useful time informa...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31081
m...
01/17/2020
- 10:51 PM Tasks #4492 (New): mds: Define kill points involved in clustered migration and recovery
- 10:50 PM Feature #39129 (Fix Under Review): create mechanism to delegate ranges of inode numbers to client
- 09:49 PM Backport #42943: nautilus: mds: free heap memory may grow too large for some workloads
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31802
merged
- 09:48 PM Backport #42631: nautilus: client: FAILED assert(cap == in->auth_cap)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32065
merged
- 07:33 PM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- That's indeed very odd. I looked through the code but didn't find a good reason why this would happen. It is interest...
- 01:13 PM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- Yes the MDS was upgraded to 14.2.6 also.
Below is the mon log from when I changed its addr. (Full file that day is at ceph...
- 06:49 PM Bug #43644 (Triaged): mds: Empty directory check is done on the importer side (at import finish) ...
- 06:48 PM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- Zheng Yan wrote:
> you are right. we can do the check at export_dir and export_frozen. If directory is empty, abort. ...
- 12:13 PM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- you are right. we can do the check at export_dir and export_frozen. If directory is empty, abort. But we still need to...
- 09:16 AM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- Sidharth Anupkrishnan wrote:
> In the current MDS code, the migration of empty directories is prohibited but it is ...
- 09:13 AM Bug #43644 (Rejected): mds: Empty directory check is done on the importer side (at import finish)...
- In the current MDS code, the migration of empty directories is prohibited but it is actually exported during the migr...
- 03:59 PM Bug #43649: mount.ceph fails with ERANGE if name= option is longer than 37 characters
- It turns out that name= options can pretty much be arbitrarily long, so I reworked the code to remove the need for an...
- 03:57 PM Bug #43649 (In Progress): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- 03:13 PM Bug #43649 (Resolved): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- Aaron reported on the cephfs mailing list that some mount attempts were failing with ERANGE. For example:...
- 10:01 AM Bug #43645 (Fix Under Review): mgr/volumes: subvolumes with snapshots can be deleted
- 09:28 AM Bug #43645 (Resolved): mgr/volumes: subvolumes with snapshots can be deleted
- ...
- 07:50 AM Feature #24880 (Fix Under Review): pybind/mgr/volumes: restore from snapshot
- clone from a snap: https://github.com/ceph/ceph/pull/32030
Most of this work will be required for restoring a subv...
- 05:46 AM Bug #42835 (Fix Under Review): qa: test_scrub_abort fails during check_task_status("idle")
01/16/2020
- 11:49 PM Bug #43640: nautilus: qa: test_async_subvolume_rm failure
- Just the lines from the teuthology log for the mgr connection:...
- 09:00 PM Bug #43640 (Need More Info): nautilus: qa: test_async_subvolume_rm failure
- ...
- 08:17 PM Bug #43638 (Duplicate): nautilus qa: tasks/cfuse_workunit_suites_ffsb.yaml failure
- 08:05 PM Bug #43638 (Duplicate): nautilus qa: tasks/cfuse_workunit_suites_ffsb.yaml failure
- ...
- 05:45 PM Bug #43637 (Triaged): nautilus: qa: Health check failed: Reduced data availability: 16 pgs inacti...
- ...
- 02:46 PM Backport #43629 (Resolved): nautilus: mgr/volumes: provision subvolumes with config metadata stor...
- https://github.com/ceph/ceph/pull/33122/
- 02:46 PM Backport #43628 (Resolved): nautilus: client: disallow changing fuse_default_permissions option a...
- https://github.com/ceph/ceph/pull/32915
- 02:46 PM Backport #43627 (Rejected): mimic: client: disallow changing fuse_default_permissions option at r...
- 02:45 PM Backport #43624 (Resolved): nautilus: mds: note features client has when rejecting client due to ...
- https://github.com/ceph/ceph/pull/32914
- 09:50 AM Bug #43601 (Fix Under Review): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- 12:25 AM Bug #43601 (Triaged): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- Looks like it's just that the MDS is responding to a getattr request on the root inode with EROFS:...
- 12:26 AM Bug #43125 (Can't reproduce): qa: ceph_volume_client not available "ModuleNotFoundError: No modul...
- 12:13 AM Documentation #43155 (Closed): CephFS Documentation Sprint 4
- 12:12 AM Bug #42637 (Fix Under Review): qa: ffsb suite causes SLOW_OPS warnings
- 12:00 AM Bug #16881 (Fix Under Review): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
01/15/2020
- 08:49 PM Bug #43599 (Fix Under Review): kclient: corrupt message failure on RHEL8 distribution kernel
- 08:25 PM Bug #43599 (In Progress): kclient: corrupt message failure on RHEL8 distribution kernel
- 08:20 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- This is just one of those places where the kernel client did not ever expect to see a struct be extended. I suspect t...
- 06:43 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- Jeff Layton wrote:
> What kernel is this?...
- 05:50 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- What kernel is this?
- 08:48 PM Bug #43600: qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- Unfortunately, CentOS 8 / RHEL 8 don't have this package. We'll need to filter out these distributions somehow.
Mo...
- 07:41 PM Bug #36507 (Duplicate): client: connection failure during reconnect causes client to hang
- Thanks huanwen!
- 07:39 PM Bug #42467: mds: daemon crashes while updating blacklist
- Zheng, I think you may have inadvertently fixed this in...
- 07:28 PM Bug #43216 (New): MDSMonitor: removes MDS coming out of quorum election
- 07:22 PM Bug #40608 (Duplicate): mds: assert after `delete gather` in C_Drop_Cache::recall_client_state
- Fixed by: https://tracker.ceph.com/issues/38445
- 07:16 PM Bug #42941 (Rejected): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- Cause was reverted.
- 05:19 AM Feature #43349: mgr/volumes: provision subvolumes with config metadata storage in cephfs
- backport note: additionally, include https://github.com/ceph/ceph/pull/32645
- 04:33 AM Feature #43349 (Pending Backport): mgr/volumes: provision subvolumes with config metadata storage...
- 04:33 AM Feature #43349 (Resolved): mgr/volumes: provision subvolumes with config metadata storage in cephfs
- 02:16 AM Bug #43362 (Pending Backport): client: disallow changing fuse_default_permissions option at runtime
- 02:15 AM Cleanup #43367 (Resolved): mds: reorg SimpleLock header
- 02:14 AM Cleanup #43386 (Resolved): mds: reorg SnapRealm header
- 02:13 AM Cleanup #43418 (Resolved): mds: reorg flock header
- 02:13 AM Cleanup #43424 (Resolved): mds: reorg inode_backtrace header
- 01:55 AM Bug #42986 (Fix Under Review): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_...
- 12:32 AM Bug #43513: qa: filelock_interrupt.py hang
- Zheng Yan wrote:
> Looks like flock syscall was restarted after handling signal alarm. The script does not work with...
- 12:14 AM Bug #43554 (Fix Under Review): qa: test_full racy check: AssertionError: 29 not greater than or e...
01/14/2020
- 11:52 PM Bug #43601 (Resolved): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- ...
- 11:15 PM Bug #43600 (Resolved): qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- ...
- 10:37 PM Bug #43542 (Resolved): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- 10:32 PM Bug #43484 (Pending Backport): mds: note features client has when rejecting client due to feature...
- 10:10 PM Bug #43599 (Resolved): kclient: corrupt message failure on RHEL8 distribution kernel
- ...
- 08:44 PM Bug #43541 (Resolved): qa/cephfs: don't test client on latest RHEL
- 08:44 PM Bug #43539 (Resolved): qa/cephfs: don't test kclient RHEL 7
- 08:34 PM Backport #42279: nautilus: qa: logrotate should tolerate connection resets
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31082
merged
- 08:33 PM Backport #42129: nautilus: doc/ceph-fuse: -k missing in man page
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30765
merged
- 05:39 PM Bug #43598 (Resolved): mds: PurgeQueue does not handle objecter errors
- Here: https://github.com/ceph/ceph/blob/6ea89e01971462432e0bc8b128b950acec4d85fe/src/mds/PurgeQueue.cc#L555
The fi...
- 05:30 PM Bug #43596 (Need More Info): mds: crash when enable msgr v2 due to lost contact
- > It seems to be stable now after enabling v2 on all mons and restarting all mds's....
- 12:41 PM Bug #43596 (New): mds: crash when enable msgr v2 due to lost contact
- We just upgraded from mimic v13.2.7 to v14.2.6 and when we enable msgr v2 on the mon which an MDS is connected to, th...
- 04:15 PM Bug #43440 (Fix Under Review): client: chdir does not raise error if a file is passed
01/13/2020
- 09:44 PM Bug #43493 (Fix Under Review): osdc: fix null pointer caused program crash
- 04:29 PM Backport #42790: nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- Jos Collin wrote:
> https://github.com/ceph/ceph/pull/31332
merged
- 04:28 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31332
merged
- 04:28 PM Backport #42142: nautilus: mds:split the dir if the op makes it oversized, because some ops maybe...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31302
merged
- 04:27 PM Backport #42424: nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31084
merged
- 04:27 PM Backport #42422: nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in reject state ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31083
merged
- 04:26 PM Backport #42158: nautilus: osdc: objecter ops output does not have useful time information
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31081
merged
- 02:37 PM Bug #43567: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_directory
- ...
- 12:12 PM Bug #43567 (Fix Under Review): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_t...
- 11:41 AM Bug #43567 (Resolved): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_di...
- decode() is run on a type @str@ -...
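The failure mode described above can be sketched in a few lines. This is a hypothetical illustration, not the test's actual code: calling decode() on data that is already text is the classic bytes/str confusion, and a small isinstance guard (the helper name as_text is made up here) avoids it:

```python
# Hypothetical sketch of the bytes/str confusion behind the bug report:
# decode() belongs on bytes; running a decode on already-decoded text
# is the mistake the tracker entry points at.
def as_text(buf):
    """Return str whether buf arrived as raw bytes or as decoded text."""
    if isinstance(buf, bytes):
        return buf.decode("utf-8")
    return buf  # already str; decoding again would be a bug

print(as_text(b"cephfs-shell"))  # bytes path: decoded once
print(as_text("cephfs-shell"))   # str path: returned unchanged
```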
- 12:31 PM Feature #22446 (Resolved): mds: ask idle client to trim more caps
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:28 PM Bug #40283 (Resolved): qa: add testing for lazyio
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:28 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:27 PM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:27 PM Bug #41310 (Resolved): client: lazyio synchronize does not get file size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:26 PM Bug #41835 (Resolved): mds: cache drop command does not drive cap recall
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:26 PM Bug #41837 (Resolved): client: lseek function does not return the correct value.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:22 PM Backport #42161 (Resolved): nautilus: qa: add testing for lazyio
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30769
m...
- 12:22 PM Backport #41888 (Resolved): nautilus: client: lazyio synchronize does not get file size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30769
m...
- 12:22 PM Backport #42147 (Resolved): nautilus: mds: mds returns -5 error when the deleted file does not exist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30767
m...
- 12:22 PM Backport #42145 (Resolved): nautilus: client: return error when someone passes bad whence value t...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30766
m...
- 12:21 PM Backport #42121 (Resolved): nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30764
m...
- 12:21 PM Backport #42040 (Resolved): nautilus: client: _readdir_cache_cb() may use the readdir_cache alrea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30763
m...
- 12:21 PM Backport #42035 (Resolved): nautilus: client: lseek function does not return the correct value.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30762
m...
- 12:21 PM Backport #42339 (Resolved): nautilus: mds: move MDSDaemon conf change handling to MDSRank finisher
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m...
- 12:21 PM Backport #41899 (Resolved): nautilus: mds: cache drop command does not drive cap recall
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m...
- 12:21 PM Backport #41865 (Resolved): nautilus: mds: ask idle client to trim more caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m...
- 11:48 AM Backport #43573 (Resolved): nautilus: cephfs-journal-tool: will crash without any extra argument
- https://github.com/ceph/ceph/pull/32913
- 11:48 AM Backport #43572 (Rejected): mimic: cephfs-journal-tool: will crash without any extra argument
- 11:48 AM Backport #43568 (Resolved): nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- https://github.com/ceph/ceph/pull/32912
- 03:03 AM Bug #43218 (Rejected): kclient: when looking up the snap dirs sometime will hit WARN_ON
- This is not a bug, so I will close it....
- 01:43 AM Feature #9477 (Closed): Handle kclient shutdown with dead network more gracefully
- This can be handled by 'umount -f'.
- 01:43 AM Feature #8368 (Resolved): kernel: Notify users of mds disconnect and allow them to react to it
- 01:30 AM Feature #8368: kernel: Notify users of mds disconnect and allow them to react to it
- resolved by https://tracker.ceph.com/issues/39967
01/12/2020
- 12:52 AM Bug #36635: mds: purge queue corruption from wrong backport
- I think we can just add an upgrade note to Octopus to not upgrade from 13.2.2.
01/11/2020
- 01:49 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- OK, I moved the wait to the client side. See commit "client: wait for async creating before sending request or cap message...
- 01:13 AM Bug #43543 (Triaged): mds: scrub on directory with recently created files may fail to load backtr...
- 12:25 AM Backport #43558 (In Progress): nautilus: mds: reject forward scrubs when cluster has multiple act...
- 12:19 AM Backport #43558 (Resolved): nautilus: mds: reject forward scrubs when cluster has multiple active...
- https://github.com/ceph/ceph/pull/32602
- 12:19 AM Backport #43559 (Rejected): mimic: mds: reject forward scrubs when cluster has multiple active MD...
- 12:18 AM Bug #43483 (Pending Backport): mds: reject forward scrubs when cluster has multiple active MDS (m...
- 12:16 AM Bug #43249 (Resolved): cephfs-shell: exit failure when non-interactive command fails
01/10/2020
- 10:24 PM Bug #43251 (Resolved): mds: track client provided metric flags in session
- 10:22 PM Cleanup #43366 (Resolved): mds: reorg SessionMap header
- 10:15 PM Bug #43554 (Resolved): qa: test_full racy check: AssertionError: 29 not greater than or equal to 30
- ...
- 09:24 PM Backport #43506 (In Progress): nautilus: MDSMonitor: warn if a new file system is being created w...
- 08:19 PM Backport #42161: nautilus: qa: add testing for lazyio
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30769
merged - 08:19 PM Backport #41888: nautilus: client: lazyio synchronize does not get file size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30769
merged - 08:18 PM Backport #42147: nautilus: mds: mds returns -5 error when the deleted file does not exist
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30767
merged - 08:18 PM Backport #42145: nautilus: client: return error when someone passes bad whence value to llseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30766
merged - 08:17 PM Backport #42121: nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30764
merged - 08:17 PM Backport #42040: nautilus: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30763
merged - 08:16 PM Backport #42035: nautilus: client: lseek function does not return the correct value.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30762
merged - 08:16 PM Backport #42339: nautilus: mds: move MDSDaemon conf change handling to MDSRank finisher
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged - 08:15 PM Backport #41899: nautilus: mds: cache drop command does not drive cap recall
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged - 08:15 PM Backport #41865: nautilus: mds: ask idle client to trim more caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged - 07:42 PM Bug #43540 (Duplicate): qa: test_export_pin (tasks.cephfs.test_exports.TestExports) failure
- Real error is here:...
- 12:57 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I still don't understand what value this flag adds. Why not just always have requests involving an inode wait on the ...
- 03:09 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> Great! I'll still plan to add in a sanity check for this in the client too.
Patrick is right...
- 03:01 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> Zheng Yan wrote:
> > mainly for wait_for_create_inode() function in MDS. Also make mds print e...
- 11:02 AM Bug #43440: client: chdir does not raise error if a file is passed
- Not a cephfs shell bug. The error should be raised by ceph_chdir().
- 07:44 AM Bug #43543: mds: scrub on directory with recently created files may fail to load backtraces and r...
- This issue has existed since scrub was first implemented. It should be easy to fix: just ignore checking backtrace if dirty_pa...
01/09/2020
- 11:14 PM Bug #43543: mds: scrub on directory with recently created files may fail to load backtraces and r...
- If you flush the journal:...
- 11:08 PM Bug #43543 (Resolved): mds: scrub on directory with recently created files may fail to load backt...
- On a vstart cluster, copy a directory tree into CephFS and do a recursive scrub concurrently:...
- 08:25 PM Bug #43514 (Pending Backport): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- 07:57 PM Bug #43542 (Fix Under Review): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- 07:48 PM Bug #43542 (Resolved): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- ...
- 07:54 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Great! I'll still plan to add in a sanity check for this in the client too.
- 07:25 PM Feature #24461 (In Progress): cephfs: improve file create performance buffering file unlink/creat...
- Jeff Layton wrote:
> There's a potential problem I spotted today with copying the layouts from the first synchronous... - 07:21 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- There's a potential problem I spotted today with copying the layouts from the first synchronous create.
Suppose we...
- 07:10 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> mainly for wait_for_create_inode() function in MDS. Also make mds print error if it failed to han...
- 02:05 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- mainly for wait_for_create_inode() function in MDS. Also make mds print error if it failed to handle async request.
- 11:42 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> please add a flag that tell if a request is async.
> https://github.com/ukernel/ceph/commit/54f...
- 07:13 PM Bug #43515 (Resolved): qa: SyntaxError: invalid token
- 07:12 PM Bug #43487 (Resolved): qa: test_acls does not detect rhel8
- 06:22 PM Feature #118 (Rejected): kclient: clean pages when throwing out dirty metadata on session teardown
- Excellent. In that case, let's go ahead and close this out.
- 06:43 AM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- Case1:
In the case when unmounting, the vfs will do this for us.
Case2:
In the case when the session is reconne...
- 06:09 PM Bug #43541 (Fix Under Review): qa/cephfs: don't test client on latest RHEL
- 06:08 PM Bug #43541 (Resolved): qa/cephfs: don't test client on latest RHEL
- 06:04 PM Bug #43540 (Duplicate): qa: test_export_pin (tasks.cephfs.test_exports.TestExports) failure
- ...
- 06:02 PM Bug #43539 (Fix Under Review): qa/cephfs: don't test kclient RHEL 7
- 05:42 PM Bug #43539 (Resolved): qa/cephfs: don't test kclient RHEL 7
- Just fix the symlink qa/cephfs/mount/kclient/overrides/distro/rhel/rhel_7.yaml.
- 11:34 AM Feature #42530 (Fix Under Review): cephfs-shell: add setxattr and getxattr
- 06:48 AM Feature #43435 (Fix Under Review): kclient:send client provided metric flags in client metadata
- Under review in V2's "[Patch v2 8/8] ceph: send client provided metric flags in client metadata"
https://patchwork...
- 06:45 AM Feature #43423 (Fix Under Review): mds: collect and show the dentry lease metric
- 06:44 AM Bug #37617: CephFS did not recover re-plugging network cable
- I also encounter the same problem. When I use ls or other operations on the mountpoint, it fails. Even if the net...
- 05:50 AM Feature #4386 (Resolved): kclient: Mount error message when no MDS present
- 05:49 AM Feature #4386: kclient: Mount error message when no MDS present
- Fixed in:
https://github.com/ceph/ceph/pull/32164
https://patchwork.kernel.org/patch/11283665
01/08/2020
- 03:27 PM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- 02:38 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- please add a flag that tell if a request is async.
https://github.com/ukernel/ceph/commit/54f6bbdc85505ddea21583e9c... - 01:54 PM Bug #43513: qa: filelock_interrupt.py hang
- Looks like flock syscall was restarted after handling signal alarm. The script does not work with python3, but work w...
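The python2-vs-python3 difference noted in the comment above is, as far as I can tell, PEP 475: since Python 3.5, a system call interrupted by a signal is retried automatically unless the signal handler raises, so an alarm no longer breaks a blocking flock out of its wait. A minimal sketch of that behaviour (time.sleep stands in for the blocking fcntl.flock call; the Alarm class is invented for illustration):

```python
import signal
import time

class Alarm(Exception):
    pass

def on_alarm(signum, frame):
    # Raising here is what stops Python 3 (PEP 475) from transparently
    # restarting the interrupted call; with a non-raising handler the
    # blocking call would simply resume after the signal.
    raise Alarm()

signal.signal(signal.SIGALRM, on_alarm)
signal.setitimer(signal.ITIMER_REAL, 0.2)  # deliver SIGALRM in 0.2s
try:
    time.sleep(5)  # stand-in for a blocking fcntl.flock()
    interrupted = False
except Alarm:
    interrupted = True
print(interrupted)
```

A handler written this way restores the Python 2-style interruption that a timeout-driven test like filelock_interrupt.py relies on.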
- 08:58 AM Bug #43522 (Fix Under Review): qa: update xfstests_dev to install python2 instead of python on ub...
- 08:47 AM Bug #43522 (Resolved): qa: update xfstests_dev to install python2 instead of python on ubuntu 19
- 08:42 AM Bug #43393 (In Progress): qa: add support/qa for cephfs-shell on CentOS 9 / RHEL9