Activity
From 12/26/2019 to 01/24/2020
01/24/2020
- 11:54 PM Bug #43090 (Closed): mds:check if oldin is null before accessing its member
- 11:52 PM Bug #42827 (Won't Fix): mds: when mounting the extra slash(es) at the end of server path will be ...
- 11:51 PM Bug #41242 (Closed): mds: re-introduce mds_log_max_expiring to control expiring concurrency manually
- 11:33 PM Bug #43817 (Resolved): mds: update cephfs octopus feature bit
- 2020-02-06 After discussion at the CDM, we will stop naming releases for the CephFS min_compat_client bits. The oper...
- 11:19 PM Feature #43423 (In Progress): mds: collect and show the dentry lease metric
- 11:18 PM Feature #39098 (Resolved): mds: lock caching for asynchronous unlink
- 11:17 PM Bug #42770 (Closed): Regularly trim inode in memory
- 11:17 PM Bug #41651 (Closed): dbench: command not found
- 11:13 PM Cleanup #37931 (New): MDSMonitor: rename `mds repaired` to `fs repaired`
- 11:11 PM Feature #12274 (New): mds: start forward scrubs from all subtree roots, skip non-auth metadata
- 11:06 PM Bug #26863 (Can't reproduce): qa: test_full_different_file "dd: error writing 'large_file': No sp...
- 11:05 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- Venky, is this still a problem?
- 10:05 PM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- I see a couple of other potential fixes:
1) we could not ask for those caps on an OPEN/CREATE and just rely on the...
- 10:02 PM Bug #43800 (Duplicate): FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MD...
- super xor wrote:
> seems to exist already, my bad: https://tracker.ceph.com/issues/43348
no worries! Thanks for t...
- 08:46 AM Bug #43800: FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MDS failure
- seems to exist already, my bad: https://tracker.ceph.com/issues/43348
- 06:33 AM Bug #43800 (Duplicate): FAILED ceph_assert(omap_num_objs <= MAX_OBJECTS) - primary and standby MD...
- We had a complete cephfs failure tonight caused by crashes of all active and standby MDS....
- 09:10 PM Fix #41782: mds: allow stray directories to fragment and switch from 10 stray directories to 1
- We'll look at merging this at the beginning of Pacific release cycle.
- 04:39 PM Backport #43143: nautilus: mds: tolerate no snaprealm encoded in on-disk root inode
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32079
merged
- 04:07 PM Backport #41106: nautilus: mds: add command that modify session metadata
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/32245
merged
- 12:51 AM Bug #43762 (Need More Info): pybind/mgr/volumes: create fails with TypeError
- This needs more information, as Victoria is checking if this happens because of their configuration/python version.
01/23/2020
- 11:53 PM Bug #43459 (Resolved): qa: FATAL ERROR: libtool does not seem to be installed.
- 11:29 PM Cleanup #43387 (Resolved): mds: reorg SnapServer header
- 11:28 PM Bug #43660 (Resolved): mds: null pointer dereference in Server::handle_client_link
- 11:12 PM Bug #43796 (Resolved): qa: test_version_splitting
- ...
- 08:43 PM Bug #43762 (Triaged): pybind/mgr/volumes: create fails with TypeError
- 11:15 AM Bug #43762 (Closed): pybind/mgr/volumes: create fails with TypeError
- ...
- 05:02 PM Backport #43791 (Rejected): mimic: RuntimeError: Files in flight high water is unexpectedly low (...
- 05:02 PM Backport #43790 (Resolved): nautilus: RuntimeError: Files in flight high water is unexpectedly lo...
- https://github.com/ceph/ceph/pull/33115
- 04:57 PM Backport #43785 (Rejected): mimic: fs: OpenFileTable object shards have too many k/v pairs
- 04:57 PM Backport #43784 (Resolved): nautilus: fs: OpenFileTable object shards have too many k/v pairs
- https://github.com/ceph/ceph/pull/32921
- 04:56 PM Backport #43780 (Resolved): nautilus: qa: Test failure: test_drop_cache_command_dead (tasks.cephf...
- https://github.com/ceph/ceph/pull/32919
- 04:55 PM Backport #43778 (Rejected): mimic: qa: test_full racy check: AssertionError: 29 not greater than ...
- 04:55 PM Backport #43777 (Resolved): nautilus: qa: test_full racy check: AssertionError: 29 not greater th...
- https://github.com/ceph/ceph/pull/32918
- 04:46 PM Backport #43770 (In Progress): nautilus: mount.ceph fails with ERANGE if name= option is longer t...
- 04:40 PM Backport #43770 (Resolved): nautilus: mount.ceph fails with ERANGE if name= option is longer than...
- https://github.com/ceph/ceph/pull/32807
- 04:38 PM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- Patrick Donnelly wrote:
> Nathan, are you still planning to work on this?
Yes. Sorry for the latency!
- 01:21 AM Feature #3244: qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha CephFS...
- Nathan, are you still planning to work on this?
- 01:06 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- I am seeing similar issues on our cluster. I had the Ganesha node running on the same node as the MONs just for conve...
- 01:05 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- What I have is mostly working now, but I'm occasionally seeing an async create come back with -EEXIST when running xf...
- 12:07 PM Bug #43763 (Resolved): cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
- cephfs-shell: ls long listing (ls -l) fails when executed outside root (/)
stat fails with No such file or directory
- 10:22 AM Bug #43761 (Resolved): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not g...
- Hello,
I noticed a regression in the "ceph fs authorize" command: it is no longer enough to give right access to be ...
- 01:30 AM Bug #43644 (Fix Under Review): mds: Empty directory check is done on the importer side (at import...
- 01:27 AM Bug #36078 (Can't reproduce): mds: 9 active MDS cluster stuck during fsstress
- 01:24 AM Bug #43600 (Triaged): qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- 01:23 AM Feature #17852 (Resolved): mds: when starting forward scrub, return handle or stamp/version which...
- 01:17 AM Bug #43517 (Triaged): qa: random subvolumegroup collision
- 01:16 AM Feature #41302 (Fix Under Review): mds: add ephemeral random and distributed export pins
- 01:15 AM Bug #38203 (Can't reproduce): ceph-mds segfault during migrator nicely exporting
01/22/2020
- 10:41 PM Bug #43513 (Resolved): qa: filelock_interrupt.py hang
- 04:23 PM Bug #42986 (Pending Backport): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_...
- 04:01 AM Cleanup #42867 (Resolved): mds: reorg Server header
- 04:01 AM Bug #42515 (Pending Backport): fs: OpenFileTable object shards have too many k/v pairs
- 03:58 AM Feature #39129 (Resolved): create mechanism to delegate ranges of inode numbers to client
- 03:55 AM Cleanup #43369 (Resolved): mds: reorg SnapClient header
- 03:55 AM Bug #43649 (Pending Backport): mount.ceph fails with ERANGE if name= option is longer than 37 cha...
- 03:40 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
- Zheng Yan wrote:
> > Now, send SIGKILL to the standby ceph-fuse client. This will cause I/O to halt for the first cl...
- 03:14 AM Bug #43748: client: improve wanted handling so we don't request unused caps (active-standby exclu...
> Now, send SIGKILL to the standby ceph-fuse client. This will cause I/O to halt for the first client until the MDS...
01/21/2020
- 11:15 PM Documentation #43154 (Resolved): doc: migrate best practice recommendations to relevant docs
- 11:08 PM Bug #43719 (Resolved): qa: "error New address family defined, please update secclass_map."
- 09:05 PM Bug #43719 (Fix Under Review): qa: "error New address family defined, please update secclass_map."
- 05:13 PM Bug #43719 (In Progress): qa: "error New address family defined, please update secclass_map."
- 09:26 PM Bug #43750 (Resolved): mds: add perf counters for openfiletable
- So we can do some accounting and monitoring. Have counters for omap updates, number of objects, files opened in total...
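The kind of accounting described could be sketched as follows; the counter names below are invented for illustration and are not the ones that landed in the MDS:

```python
# Hypothetical sketch of OpenFileTable accounting: counters for omap
# updates, number of backing objects, and total files opened. Names
# are made up for illustration, not the actual Ceph perf counters.
from collections import Counter

class OpenFileTablePerf:
    """Tracks omap updates, object counts, and total files opened."""
    def __init__(self) -> None:
        self.counters = Counter()

    def inc(self, name: str, amount: int = 1) -> None:
        self.counters[name] += amount

perf = OpenFileTablePerf()
perf.inc("oft.omap_total_updates")      # one omap write committed
perf.inc("oft.omap_total_objs")         # one backing object created
perf.inc("oft.total_files_opened", 3)   # three files opened
print(perf.counters["oft.total_files_opened"])  # prints 3
```

In the real MDS such counters would be registered with the daemon's PerfCounters infrastructure and show up in `perf dump` output.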
- 06:45 PM Bug #43748 (Fix Under Review): client: improve wanted handling so we don't request unused caps (a...
- In an active/standby configuration of two clients managed by file locks, the standby client causes unbuffered I/O on ...
- 05:11 PM Backport #43347 (In Progress): mimic: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- 04:49 PM Backport #43347: mimic: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- It would be great if this backport could make it into an upcoming patch release. It's quite trivial and would save us fr...
- 05:09 PM Backport #43348 (In Progress): nautilus: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- 05:06 PM Bug #42467 (Can't reproduce): mds: daemon crashes while updating blacklist
- 03:04 PM Bug #42467: mds: daemon crashes while updating blacklist
- No idea how this happens. When a request is removed from active_requests, the request should also be removed from session-...
- 05:03 PM Bug #43742 (Duplicate): fsx.sh failed to build xfstests
- 01:04 PM Bug #43742 (Duplicate): fsx.sh failed to build xfstests
- /a/pdonnell-2020-01-21_04:55:38-fs-wip-pdonnell-testing-20200121.015336-distro-basic-smithi/4689982/teuthology.log
...
- 05:02 PM Bug #43741 (Duplicate): kernel_untar_build.sh failed
- 12:54 PM Bug #43741 (Duplicate): kernel_untar_build.sh failed
- http://pulpito.ceph.com/pdonnell-2020-01-21_04:55:38-fs-wip-pdonnell-testing-20200121.015336-distro-basic-smithi/4690...
- 04:56 PM Documentation #43743 (Resolved): doc: fix mount.ceph
- 02:11 PM Documentation #43743 (Resolved): doc: fix mount.ceph
- The second command doesn't render, probably due to some mistake in the underlying RST file.
- 01:36 PM Backport #43568: nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- Nathan, can I go ahead and create a backport PR for this?
- 01:32 PM Backport #43733: nautilus: qa: ffsb suite causes SLOW_OPS warnings
- Nathan, can I go ahead and create a backport PR for this?
- 01:04 PM Backport #43137: nautilus: pybind/mgr/volumes: idle connection drop is not working
- To be backported together with #43139
- 01:03 PM Backport #43137 (New): nautilus: pybind/mgr/volumes: idle connection drop is not working
- First backport attempt - https://github.com/ceph/ceph/pull/32076 - was closed.
- 02:52 AM Bug #43513 (Fix Under Review): qa: filelock_interrupt.py hang
- 01:31 AM Bug #16881 (Pending Backport): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- 01:30 AM Bug #43554 (Pending Backport): qa: test_full racy check: AssertionError: 29 not greater than or e...
01/20/2020
- 10:22 PM Feature #40959 (Resolved): mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:20 PM Backport #43734 (Rejected): mimic: qa: ffsb suite causes SLOW_OPS warnings
- 10:20 PM Backport #43733 (Resolved): nautilus: qa: ffsb suite causes SLOW_OPS warnings
- https://github.com/ceph/ceph/pull/32917
- 10:19 PM Bug #42922 (Resolved): nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #42923 (Resolved): pybind / cephfs: remove static typing in LibCephFS.chown
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:18 PM Bug #43036 (Resolved): mds: reports unrecognized message for mgrclient messages
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:18 PM Bug #43038 (Resolved): mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks.ceph...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:17 PM Backport #43730 (Rejected): mimic: client: chdir does not raise error if a file is passed
- 10:17 PM Backport #43729 (Resolved): nautilus: client: chdir does not raise error if a file is passed
- https://github.com/ceph/ceph/pull/32916
- 10:17 PM Backport #43724 (Resolved): nautilus: mgr/volumes: subvolumes with snapshots can be deleted
- https://github.com/ceph/ceph/pull/33122/
- 10:16 PM Backport #43141 (Resolved): nautilus: tools/cephfs: linkages injected by cephfs-data-scan have fi...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32078
m...
- 09:18 PM Backport #43141: nautilus: tools/cephfs: linkages injected by cephfs-data-scan have first == head
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32078
merged
- 10:16 PM Backport #43138 (Resolved): nautilus: mds: reports unrecognized message for mgrclient messages
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32077
m...
- 07:57 PM Backport #43138: nautilus: mds: reports unrecognized message for mgrclient messages
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32077
merged
- 10:15 PM Backport #43001 (Resolved): nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) kern...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32075
m...
- 07:57 PM Backport #43001: nautilus: qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32075
merged
- 10:15 PM Backport #42949 (Resolved): nautilus: mds: inode lock stuck at unstable state after evicting client
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32073
m...
- 07:56 PM Backport #42949: nautilus: mds: inode lock stuck at unstable state after evicting client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32073
merged
- 10:14 PM Backport #43170 (Resolved): nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot tes...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32072
m...
- 07:56 PM Backport #43170: nautilus: nautilus: qa: ignore RECENT_CRASH for multimds snapshot testing
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32072
merged
- 10:13 PM Backport #43219 (Resolved): nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31741
m...
- 10:13 PM Backport #43085 (Resolved): nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31741
m...
- 10:12 PM Backport #42886 (Resolved): nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvol...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31741
m...
- 10:12 PM Backport #42650 (Resolved): nautilus: mds: no assert on frozen dir when scrub path
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32071
m...
- 08:26 PM Bug #43719: qa: "error New address family defined, please update secclass_map."
- Yep, I think so:
https://lkml.org/lkml/2019/4/23/517
Basically, building old kernels on really new userspace gl...
- 06:38 PM Bug #43719 (Resolved): qa: "error New address family defined, please update secclass_map."
- ...
- 07:15 PM Bug #43522 (Resolved): qa: update xfstests_dev to install python2 instead of python on ubuntu 19
- 07:14 PM Bug #43486 (Resolved): qa: test_acls: cannot find packages on centos 8
- 07:14 PM Bug #43496 (Resolved): qa: xfstest_dev.py crashes while calling teuthology.misc.get_system_type
- 07:12 PM Bug #43440 (Pending Backport): client: chdir does not raise error if a file is passed
- 07:10 PM Bug #43601 (Resolved): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- 07:07 PM Bug #43645 (Pending Backport): mgr/volumes: subvolumes with snapshots can be deleted
- 06:41 PM Bug #43599 (Resolved): kclient: corrupt message failure on RHEL8 distribution kernel
- 05:15 PM Bug #42637 (Pending Backport): qa: ffsb suite causes SLOW_OPS warnings
- 02:49 PM Bug #43596 (New): mds: crash when enable msgr v2 due to lost contact
- 02:47 PM Bug #43637 (Triaged): nautilus: qa: Health check failed: Reduced data availability: 16 pgs inacti...
- 02:45 PM Bug #43640 (Need More Info): nautilus: qa: test_async_subvolume_rm failure
- Waiting for this to be reproduced again.
- 02:29 PM Bug #43664 (Duplicate): mds: metric_spec is encoded into version 1 MClientSession
- 06:42 AM Bug #43664 (Fix Under Review): mds: metric_spec is encoded into version 1 MClientSession
- 06:39 AM Bug #43664 (Duplicate): mds: metric_spec is encoded into version 1 MClientSession
01/19/2020
- 07:51 AM Bug #43660 (Fix Under Review): mds: null pointer dereference in Server::handle_client_link
- 07:48 AM Bug #43660 (Resolved): mds: null pointer dereference in Server::handle_client_link
01/18/2020
- 04:27 PM Backport #43219: nautilus: mgr/volumes: ERROR: test_subvolume_create_with_desired_uid_gid (tasks....
- Jos Collin wrote:
> https://github.com/ceph/ceph/pull/31741
merged
- 04:27 PM Backport #43085: nautilus: pybind / cephfs: remove static typing in LibCephFS.chown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31741
merged
- 04:27 PM Backport #42886: nautilus: mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- Jos Collin wrote:
> ... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` ...
- 04:26 PM Backport #42650: nautilus: mds: no assert on frozen dir when scrub path
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32071
merged
- 11:17 AM Feature #41182 (Resolved): mgr/volumes: add `fs subvolume extend/shrink` commands
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:15 AM Documentation #42044 (Resolved): doc/ceph-fuse: -k missing in man page
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:14 AM Feature #42479 (Resolved): mgr/volumes: add `fs subvolume resize infinite` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:35 AM Backport #42943 (Resolved): nautilus: mds: free heap memory may grow too large for some workloads
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31802
m...
- 10:34 AM Backport #42631 (Resolved): nautilus: client: FAILED assert(cap == in->auth_cap)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32065
m...
- 10:23 AM Backport #42279 (Resolved): nautilus: qa: logrotate should tolerate connection resets
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31082
m...
- 10:23 AM Backport #42129 (Resolved): nautilus: doc/ceph-fuse: -k missing in man page
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30765
m...
- 09:37 AM Backport #42790 (Resolved): nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31332
m...
- 09:36 AM Backport #42615 (Resolved): nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31332
m...
- 09:35 AM Backport #42142 (Resolved): nautilus: mds:split the dir if the op makes it oversized, because som...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31302
m...
- 09:35 AM Backport #42424 (Resolved): nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2) No...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31084
m...
- 09:35 AM Backport #42422 (Resolved): nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in re...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31083
m...
- 09:34 AM Backport #42158 (Resolved): nautilus: osdc: objecter ops output does not have useful time informa...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31081
m...
01/17/2020
- 10:51 PM Tasks #4492 (New): mds: Define kill points involved in clustered migration and recovery
- 10:50 PM Feature #39129 (Fix Under Review): create mechanism to delegate ranges of inode numbers to client
- 09:49 PM Backport #42943: nautilus: mds: free heap memory may grow too large for some workloads
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31802
merged
- 09:48 PM Backport #42631: nautilus: client: FAILED assert(cap == in->auth_cap)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32065
merged
- 07:33 PM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- That's indeed very odd. I looked through the code but didn't find a good reason why this would happen. It is interest...
- 01:13 PM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- Yes the MDS was upgraded to 14.2.6 also.
Below is the mon log from when I changed its addr. (Full file that day is at ceph...
- 06:49 PM Bug #43644 (Triaged): mds: Empty directory check is done on the importer side (at import finish) ...
- 06:48 PM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- Zheng Yan wrote:
> you are right. we can do the check a export_dir and export_frozen. If directory is empty, abort. ...
- 12:13 PM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- you are right. we can do the check a export_dir and export_frozen. If directory is empty, abort. But we still need to...
- 09:16 AM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- Sidharth Anupkrishnan wrote:
> In the current MDS code, the migration of empty directories is prohibited but it is ... - 09:13 AM Bug #43644 (Rejected): mds: Empty directory check is done on the importer side (at import finish)...
- In the current MDS code, the migration of empty directories is prohibited but it is actually exported during the migr...
- 03:59 PM Bug #43649: mount.ceph fails with ERANGE if name= option is longer than 37 characters
- It turns out that name= options can pretty much be arbitrarily long, so I reworked the code to remove the need for an...
- 03:57 PM Bug #43649 (In Progress): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- 03:13 PM Bug #43649 (Resolved): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- Aaron reported on the cephfs mailing list that some mount attempts were failing with ERANGE. For example:...
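Aaron's original example is truncated above; the sketch below is an invented illustration of the failure mode rather than the actual mount.ceph source. A fixed-capacity buffer for the "name=" option value means any client entity name longer than the assumed limit surfaces as ERANGE:

```python
# Illustrative model (not the real mount.ceph code): a fixed-size
# buffer for the "name=" mount option value. Names longer than the
# assumed 37-character capacity cannot be copied in and the helper
# returns -ERANGE, matching the symptom in this report.
import errno

NAME_CAPACITY = 37  # assumed fixed buffer size for the name= value

def copy_name(name: str, capacity: int = NAME_CAPACITY) -> int:
    """Return 0 on success, -ERANGE if the name exceeds the buffer."""
    if len(name) > capacity:
        return -errno.ERANGE
    return 0

assert copy_name("admin") == 0
# A 40-character client name overflows the 37-byte buffer:
assert copy_name("client-with-a-really-long-entity-name-xx") == -errno.ERANGE
```

The fix described in the 03:59 PM comment removes this class of failure by not bounding the name length with a fixed buffer at all.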
- 10:01 AM Bug #43645 (Fix Under Review): mgr/volumes: subvolumes with snapshots can be deleted
- 09:28 AM Bug #43645 (Resolved): mgr/volumes: subvolumes with snapshots can be deleted
- ...
- 07:50 AM Feature #24880 (Fix Under Review): pybind/mgr/volumes: restore from snapshot
- clone from a snap: https://github.com/ceph/ceph/pull/32030
Most of this work will be required for restoring a subv...
- 05:46 AM Bug #42835 (Fix Under Review): qa: test_scrub_abort fails during check_task_status("idle")
01/16/2020
- 11:49 PM Bug #43640: nautilus: qa: test_async_subvolume_rm failure
- Just the lines from the teuthology log for the mgr connection:...
- 09:00 PM Bug #43640 (Need More Info): nautilus: qa: test_async_subvolume_rm failure
- ...
- 08:17 PM Bug #43638 (Duplicate): nautilus qa: tasks/cfuse_workunit_suites_ffsb.yaml failure
- 08:05 PM Bug #43638 (Duplicate): nautilus qa: tasks/cfuse_workunit_suites_ffsb.yaml failure
- ...
- 05:45 PM Bug #43637 (Triaged): nautilus: qa: Health check failed: Reduced data availability: 16 pgs inacti...
- ...
- 02:46 PM Backport #43629 (Resolved): nautilus: mgr/volumes: provision subvolumes with config metadata stor...
- https://github.com/ceph/ceph/pull/33122/
- 02:46 PM Backport #43628 (Resolved): nautilus: client: disallow changing fuse_default_permissions option a...
- https://github.com/ceph/ceph/pull/32915
- 02:46 PM Backport #43627 (Rejected): mimic: client: disallow changing fuse_default_permissions option at r...
- 02:45 PM Backport #43624 (Resolved): nautilus: mds: note features client has when rejecting client due to ...
- https://github.com/ceph/ceph/pull/32914
- 09:50 AM Bug #43601 (Fix Under Review): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- 12:25 AM Bug #43601 (Triaged): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- Looks like it's just that the MDS is responding to a getattr request on the root inode with EROFS:...
- 12:26 AM Bug #43125 (Can't reproduce): qa: ceph_volume_client not available "ModuleNotFoundError: No modul...
- 12:13 AM Documentation #43155 (Closed): CephFS Documentation Sprint 4
- 12:12 AM Bug #42637 (Fix Under Review): qa: ffsb suite causes SLOW_OPS warnings
- 12:00 AM Bug #16881 (Fix Under Review): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
01/15/2020
- 08:49 PM Bug #43599 (Fix Under Review): kclient: corrupt message failure on RHEL8 distribution kernel
- 08:25 PM Bug #43599 (In Progress): kclient: corrupt message failure on RHEL8 distribution kernel
- 08:20 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- This is just one of those places where the kernel client did not ever expect to see a struct be extended. I suspect t...
- 06:43 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- Jeff Layton wrote:
> What kernel is this?...
- 05:50 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- What kernel is this?
- 08:48 PM Bug #43600: qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- Unfortunately, CentOS 8 / RHEL 8 don't have this package. We'll need to filter out these distributions somehow.
Mo...
- 07:41 PM Bug #36507 (Duplicate): client: connection failure during reconnect causes client to hang
- Thanks huanwen!
- 07:39 PM Bug #42467: mds: daemon crashes while updating blacklist
- Zheng, I think you may have inadvertently fixed this in...
- 07:28 PM Bug #43216 (New): MDSMonitor: removes MDS coming out of quorum election
- 07:22 PM Bug #40608 (Duplicate): mds: assert after `delete gather` in C_Drop_Cache::recall_client_state
- Fixed by: https://tracker.ceph.com/issues/38445
- 07:16 PM Bug #42941 (Rejected): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- Cause was reverted.
- 05:19 AM Feature #43349: mgr/volumes: provision subvolumes with config metadata storage in cephfs
- backport note: additionally, include https://github.com/ceph/ceph/pull/32645
- 04:33 AM Feature #43349 (Pending Backport): mgr/volumes: provision subvolumes with config metadata storage...
- 04:33 AM Feature #43349 (Resolved): mgr/volumes: provision subvolumes with config metadata storage in cephfs
- 02:16 AM Bug #43362 (Pending Backport): client: disallow changing fuse_default_permissions option at runtime
- 02:15 AM Cleanup #43367 (Resolved): mds: reorg SimpleLock header
- 02:14 AM Cleanup #43386 (Resolved): mds: reorg SnapRealm header
- 02:13 AM Cleanup #43418 (Resolved): mds: reorg flock header
- 02:13 AM Cleanup #43424 (Resolved): mds: reorg inode_backtrace header
- 01:55 AM Bug #42986 (Fix Under Review): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_...
- 12:32 AM Bug #43513: qa: filelock_interrupt.py hang
- Zheng Yan wrote:
> Looks like flock syscall was restarted after handling signal alarm. The script does not work with...
- 12:14 AM Bug #43554 (Fix Under Review): qa: test_full racy check: AssertionError: 29 not greater than or e...
01/14/2020
- 11:52 PM Bug #43601 (Resolved): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- ...
- 11:15 PM Bug #43600 (Resolved): qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- ...
- 10:37 PM Bug #43542 (Resolved): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- 10:32 PM Bug #43484 (Pending Backport): mds: note features client has when rejecting client due to feature...
- 10:10 PM Bug #43599 (Resolved): kclient: corrupt message failure on RHEL8 distribution kernel
- ...
- 08:44 PM Bug #43541 (Resolved): qa/cephfs: don't test client on latest RHEL
- 08:44 PM Bug #43539 (Resolved): qa/cephfs: don't test kclient RHEL 7
- 08:34 PM Backport #42279: nautilus: qa: logrotate should tolerate connection resets
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31082
merged
- 08:33 PM Backport #42129: nautilus: doc/ceph-fuse: -k missing in man page
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30765
merged
- 05:39 PM Bug #43598 (Resolved): mds: PurgeQueue does not handle objecter errors
- Here: https://github.com/ceph/ceph/blob/6ea89e01971462432e0bc8b128b950acec4d85fe/src/mds/PurgeQueue.cc#L555
The fi...
- 05:30 PM Bug #43596 (Need More Info): mds: crash when enable msgr v2 due to lost contact
- > It seems to be stable now after enabling v2 on all mons and restarting all mds's.
- 12:41 PM Bug #43596 (New): mds: crash when enable msgr v2 due to lost contact
- We just upgraded from mimic v13.2.7 to v14.2.6 and when we enable msgr v2 on the mon which an MDS is connected to, th...
- 04:15 PM Bug #43440 (Fix Under Review): client: chdir does not raise error if a file is passed
01/13/2020
- 09:44 PM Bug #43493 (Fix Under Review): osdc: fix null pointer caused program crash
- 04:29 PM Backport #42790: nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- Jos Collin wrote:
> https://github.com/ceph/ceph/pull/31332
merged
- 04:28 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31332
merged
- 04:28 PM Backport #42142: nautilus: mds:split the dir if the op makes it oversized, because some ops maybe...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31302
merged
- 04:27 PM Backport #42424: nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31084
merged
- 04:27 PM Backport #42422: nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in reject state ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31083
merged
- 04:26 PM Backport #42158: nautilus: osdc: objecter ops output does not have useful time information
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31081
merged
- 02:37 PM Bug #43567: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_directory
- ...
- 12:12 PM Bug #43567 (Fix Under Review): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_t...
- 11:41 AM Bug #43567 (Resolved): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_di...
- decode() is run on a type @str@ -...
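This failure class can be demonstrated with a short sketch; the `ensure_text` helper is hypothetical, shown only to illustrate the safe pattern. Under Python 2, calling decode() on a str holding non-ASCII bytes raises UnicodeDecodeError; under Python 3, str has no decode() method at all:

```python
# Running decode() on a value that is already str (the pattern this
# report describes) fails: AttributeError on Python 3, and potentially
# UnicodeDecodeError on Python 2. Safe pattern: decode only bytes.
def ensure_text(data, encoding="utf-8"):
    """Hypothetical helper: decode bytes, pass str through unchanged."""
    if isinstance(data, bytes):
        return data.decode(encoding)
    return data

try:
    "already text".decode("utf-8")   # the failing pattern on Python 3
except AttributeError:
    print("str has no decode() in Python 3")

assert ensure_text(b"caf\xc3\xa9") == "caf\u00e9"   # bytes are decoded
assert ensure_text("plain text") == "plain text"    # str passes through
```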
- 12:31 PM Feature #22446 (Resolved): mds: ask idle client to trim more caps
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:28 PM Bug #40283 (Resolved): qa: add testing for lazyio
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:28 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:27 PM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:27 PM Bug #41310 (Resolved): client: lazyio synchronize does not get file size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:26 PM Bug #41835 (Resolved): mds: cache drop command does not drive cap recall
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:26 PM Bug #41837 (Resolved): client: lseek function does not return the correct value.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:22 PM Backport #42161 (Resolved): nautilus: qa: add testing for lazyio
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30769
m... - 12:22 PM Backport #41888 (Resolved): nautilus: client: lazyio synchronize does not get file size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30769
m... - 12:22 PM Backport #42147 (Resolved): nautilus: mds: mds returns -5 error when the deleted file does not exist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30767
m... - 12:22 PM Backport #42145 (Resolved): nautilus: client: return error when someone passes bad whence value t...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30766
m... - 12:21 PM Backport #42121 (Resolved): nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30764
m... - 12:21 PM Backport #42040 (Resolved): nautilus: client: _readdir_cache_cb() may use the readdir_cache alrea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30763
m... - 12:21 PM Backport #42035 (Resolved): nautilus: client: lseek function does not return the correct value.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30762
m... - 12:21 PM Backport #42339 (Resolved): nautilus: mds: move MDSDaemon conf change handling to MDSRank finisher
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m... - 12:21 PM Backport #41899 (Resolved): nautilus: mds: cache drop command does not drive cap recall
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m... - 12:21 PM Backport #41865 (Resolved): nautilus: mds: ask idle client to trim more caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m... - 11:48 AM Backport #43573 (Resolved): nautilus: cephfs-journal-tool: will crash without any extra argument
- https://github.com/ceph/ceph/pull/32913
- 11:48 AM Backport #43572 (Rejected): mimic: cephfs-journal-tool: will crash without any extra argument
- 11:48 AM Backport #43568 (Resolved): nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- https://github.com/ceph/ceph/pull/32912
- 03:03 AM Bug #43218 (Rejected): kclient: when looking up the snap dirs sometime will hit WARN_ON
- This is not a bug, so I will close it....
- 01:43 AM Feature #9477 (Closed): Handle kclient shutdown with dead network more gracefully
- This can be handled by 'umount -f'.
- 01:43 AM Feature #8368 (Resolved): kernel: Notify users of mds disconnect and allow them to react to it
- 01:30 AM Feature #8368: kernel: Notify users of mds disconnect and allow them to react to it
- resolved by https://tracker.ceph.com/issues/39967
01/12/2020
- 12:52 AM Bug #36635: mds: purge queue corruption from wrong backport
- I think we can just add an upgrade note to Octopus to not upgrade from 13.2.2.
01/11/2020
- 01:49 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- OK, I moved the wait to the client side. See commit "client: wait for async creating before sending request or cap message...
- 01:13 AM Bug #43543 (Triaged): mds: scrub on directory with recently created files may fail to load backtr...
- 12:25 AM Backport #43558 (In Progress): nautilus: mds: reject forward scrubs when cluster has multiple act...
- 12:19 AM Backport #43558 (Resolved): nautilus: mds: reject forward scrubs when cluster has multiple active...
- https://github.com/ceph/ceph/pull/32602
- 12:19 AM Backport #43559 (Rejected): mimic: mds: reject forward scrubs when cluster has multiple active MD...
- 12:18 AM Bug #43483 (Pending Backport): mds: reject forward scrubs when cluster has multiple active MDS (m...
- 12:16 AM Bug #43249 (Resolved): cephfs-shell: exit failure when non-interactive command fails
01/10/2020
- 10:24 PM Bug #43251 (Resolved): mds: track client provided metric flags in session
- 10:22 PM Cleanup #43366 (Resolved): mds: reorg SessionMap header
- 10:15 PM Bug #43554 (Resolved): qa: test_full racy check: AssertionError: 29 not greater than or equal to 30
- ...
- 09:24 PM Backport #43506 (In Progress): nautilus: MDSMonitor: warn if a new file system is being created w...
- 08:19 PM Backport #42161: nautilus: qa: add testing for lazyio
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30769
merged - 08:19 PM Backport #41888: nautilus: client: lazyio synchronize does not get file size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30769
merged - 08:18 PM Backport #42147: nautilus: mds: mds returns -5 error when the deleted file does not exist
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30767
merged - 08:18 PM Backport #42145: nautilus: client: return error when someone passes bad whence value to llseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30766
merged - 08:17 PM Backport #42121: nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30764
merged - 08:17 PM Backport #42040: nautilus: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30763
merged - 08:16 PM Backport #42035: nautilus: client: lseek function does not return the correct value.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30762
merged - 08:16 PM Backport #42339: nautilus: mds: move MDSDaemon conf change handling to MDSRank finisher
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged - 08:15 PM Backport #41899: nautilus: mds: cache drop command does not drive cap recall
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged - 08:15 PM Backport #41865: nautilus: mds: ask idle client to trim more caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged - 07:42 PM Bug #43540 (Duplicate): qa: test_export_pin (tasks.cephfs.test_exports.TestExports) failure
- Real error is here:...
- 12:57 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I still don't understand what value this flag adds. Why not just always have requests involving an inode wait on the ...
- 03:09 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> Great! I'll still plan to add in a sanity check for this in the client too.
Patrick is right... - 03:01 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> Zheng Yan wrote:
> > mainly for wait_for_create_inode() function in MDS. Also make mds print e... - 11:02 AM Bug #43440: client: chdir does not raise error if a file is passed
- Not a cephfs shell bug. The error should be raised by ceph_chdir().
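For comparison, POSIX chdir(2) on a regular file fails with ENOTDIR, which is the behavior ceph_chdir() should mirror. A minimal sketch using the local filesystem (not libcephfs):

```python
import errno
import os
import tempfile

# POSIX chdir(2) on a regular file fails with ENOTDIR; ceph_chdir()
# should surface the same error instead of silently succeeding.
fd, path = tempfile.mkstemp()
os.close(fd)

err = None
try:
    os.chdir(path)                     # path is a regular file, not a dir
except NotADirectoryError as e:
    err = e.errno                      # ENOTDIR
finally:
    os.unlink(path)

print("chdir on a file failed with errno:", err)
```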
- 07:44 AM Bug #43543: mds: scrub on directory with recently created files may fail to load backtraces and r...
- This issue has existed since scrub was first implemented. Should be easy to fix: just skip checking the backtrace if dirty_pa...
01/09/2020
- 11:14 PM Bug #43543: mds: scrub on directory with recently created files may fail to load backtraces and r...
- If you flush the journal:...
- 11:08 PM Bug #43543 (Resolved): mds: scrub on directory with recently created files may fail to load backt...
- On a vstart cluster, copy a directory tree into CephFS and do a recursive scrub concurrently:...
- 08:25 PM Bug #43514 (Pending Backport): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- 07:57 PM Bug #43542 (Fix Under Review): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- 07:48 PM Bug #43542 (Resolved): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- ...
- 07:54 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Great! I'll still plan to add in a sanity check for this in the client too.
- 07:25 PM Feature #24461 (In Progress): cephfs: improve file create performance buffering file unlink/creat...
- Jeff Layton wrote:
> There's a potential problem I spotted today with copying the layouts from the first synchronous... - 07:21 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- There's a potential problem I spotted today with copying the layouts from the first synchronous create.
Suppose we... - 07:10 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> mainly for wait_for_create_inode() function in MDS. Also make mds print error if it failed to han... - 02:05 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Mainly for the wait_for_create_inode() function in the MDS. Also make the MDS print an error if it fails to handle an async request.
- 11:42 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> please add a flag that tell if a request is async.
> https://github.com/ukernel/ceph/commit/54f... - 07:13 PM Bug #43515 (Resolved): qa: SyntaxError: invalid token
- 07:12 PM Bug #43487 (Resolved): qa: test_acls does not detect rhel8
- 06:22 PM Feature #118 (Rejected): kclient: clean pages when throwing out dirty metadata on session teardown
- Excellent. In that case, let's go ahead and close this out.
- 06:43 AM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- Case 1:
When unmounting, the VFS will do this for us.
Case 2:
When the session is reconne... - 06:09 PM Bug #43541 (Fix Under Review): qa/cephfs: don't test client on latest RHEL
- 06:08 PM Bug #43541 (Resolved): qa/cephfs: don't test client on latest RHEL
- 06:04 PM Bug #43540 (Duplicate): qa: test_export_pin (tasks.cephfs.test_exports.TestExports) failure
- ...
- 06:02 PM Bug #43539 (Fix Under Review): qa/cephfs: don't test kclient RHEL 7
- 05:42 PM Bug #43539 (Resolved): qa/cephfs: don't test kclient RHEL 7
- Just fix the symlink qa/cephfs/mount/kclient/overrides/distro/rhel/rhel_7.yaml.
- 11:34 AM Feature #42530 (Fix Under Review): cephfs-shell: add setxattr and getxattr
- 06:48 AM Feature #43435 (Fix Under Review): kclient:send client provided metric flags in client metadata
- Under review in V2's "[Patch v2 8/8] ceph: send client provided metric flags in client metadata"
https://patchwork... - 06:45 AM Feature #43423 (Fix Under Review): mds: collect and show the dentry lease metric
- 06:44 AM Bug #37617: CephFS did not recover re-plugging network cable
- I also encountered the same problem. When I run ls or other operations on the mountpoint, it fails. Even if the net...
- 05:50 AM Feature #4386 (Resolved): kclient: Mount error message when no MDS present
- 05:49 AM Feature #4386: kclient: Mount error message when no MDS present
- Fixed in:
https://github.com/ceph/ceph/pull/32164
https://patchwork.kernel.org/patch/11283665
01/08/2020
- 03:27 PM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- 02:38 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Please add a flag that tells whether a request is async.
https://github.com/ukernel/ceph/commit/54f6bbdc85505ddea21583e9c... - 01:54 PM Bug #43513: qa: filelock_interrupt.py hang
- Looks like the flock syscall was restarted after handling the alarm signal. The script does not work with python3, but works w...
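This matches Python 3's PEP 475 behavior: a blocked syscall such as flock() is transparently retried after a signal handler returns, so the handler must raise to actually break the wait. A minimal sketch demonstrating the interaction (not the qa script itself):

```python
import fcntl
import os
import signal
import tempfile

# PEP 475 (Python 3.5+): a blocked flock() is transparently retried after a
# signal handler returns, so the handler must raise to interrupt the wait.
def on_alarm(signum, frame):
    raise TimeoutError("flock interrupted by SIGALRM")

old = signal.signal(signal.SIGALRM, on_alarm)
fd1, path = tempfile.mkstemp()
fd2 = os.open(path, os.O_RDWR)         # second open file description
fcntl.flock(fd1, fcntl.LOCK_EX)        # fd1 now holds the exclusive lock

interrupted = False
signal.alarm(1)
try:
    fcntl.flock(fd2, fcntl.LOCK_EX)    # blocks against fd1's lock
except TimeoutError:
    interrupted = True
finally:
    signal.alarm(0)
    signal.signal(signal.SIGALRM, old)
    os.close(fd1)
    os.close(fd2)
    os.unlink(path)

print("interrupted:", interrupted)
```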
- 08:58 AM Bug #43522 (Fix Under Review): qa: update xfstests_dev to install python2 instead of python on ub...
- 08:47 AM Bug #43522 (Resolved): qa: update xfstests_dev to install python2 instead of python on ubuntu 19
- 08:42 AM Bug #43393 (In Progress): qa: add support/qa for cephfs-shell on CentOS 9 / RHEL9
01/07/2020
- 10:11 PM Bug #42238 (Resolved): cephfs-shell: setxattr() is passed extra length argument
- 10:07 PM Bug #43517 (Resolved): qa: random subvolumegroup collision
- ...
- 09:59 PM Bug #43336 (Resolved): qa: test_unmount_for_evicted_client hangs
- 09:58 PM Cleanup #42563 (Resolved): mds: reorg MDSTableServer header
- 09:57 PM Cleanup #42690 (Resolved): mds: reorg Mutation header
- 09:56 PM Bug #43438 (Pending Backport): cephfs-journal-tool: will crash without any extra argument
- 09:21 PM Bug #43516 (Fix Under Review): qa: verify sub-suite does not define os_version
- 09:14 PM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- ...
- 09:12 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Yes, setting a zero-length i_xattrs buffer on the new inode seems to have corrected the problem. I believe what was h...
- 05:24 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- At this point, I'm 90% sure the problem is in xattrs. Basically, after creating the file async we're leaving the i_xa...
- 04:00 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I threw in a hack to do this:...
- 02:44 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- also see https://github.com/ceph/ceph/pull/30969
- 02:42 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- For the current async create code, ceph_mds_reply_inode::max_size is 0. The client can't write to the new file until it gets ...
- 02:00 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- dynamic debugging from the client, with async dirops disabled. This is during the write calls:...
- 01:32 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I can use strace to get timing statistics on individual calls though. With async dirops disabled:...
- 01:18 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Now that I look closer, I don't think strace -c is measuring what we need. It's looking at CPU time in each syscall. ...
- 01:16 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Thanks for checking. I'll have to play around with this more myself.
- 09:07 PM Bug #43515 (Fix Under Review): qa: SyntaxError: invalid token
- 09:06 PM Bug #43515 (Resolved): qa: SyntaxError: invalid token
- ...
- 08:53 PM Bug #43514 (Fix Under Review): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- 08:48 PM Bug #43514 (Resolved): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- ...
- 08:33 PM Bug #43513 (Resolved): qa: filelock_interrupt.py hang
- ...
- 02:45 PM Backport #43509 (Resolved): nautilus: 'ceph -s' does not show standbys if there are no filesystems
- https://github.com/ceph/ceph/pull/32912
- 02:44 PM Backport #43506 (Resolved): nautilus: MDSMonitor: warn if a new file system is being created with...
- https://github.com/ceph/ceph/pull/32600
- 02:44 PM Backport #43505 (Rejected): mimic: MDSMonitor: warn if a new file system is being created with an...
- 02:42 PM Backport #43503 (Resolved): nautilus: mount.ceph: give a hint message when no mds is up or cluste...
- https://github.com/ceph/ceph/pull/32910
- 02:42 PM Backport #43502 (Resolved): mimic: mount.ceph: give a hint message when no mds is up or cluster i...
- https://github.com/ceph/ceph/pull/32911
- 12:43 PM Documentation #43154 (In Progress): doc: migrate best practice recommendations to relevant docs
- 12:34 PM Bug #43496: qa: xfstest_dev.py crashes while calling teuthology.misc.get_system_type
- Oh, BTW, the crash happened locally, not on teuthology.
- 12:33 PM Bug #43496 (Fix Under Review): qa: xfstest_dev.py crashes while calling teuthology.misc.get_syste...
- 12:30 PM Bug #43496 (Resolved): qa: xfstest_dev.py crashes while calling teuthology.misc.get_system_type
- teuthology.misc.get_system_type calls teuthology.misc.sh. Fix: add a wrapper method of teuthology.misc.sh to vstart_r...
- 12:33 PM Bug #43486 (Fix Under Review): qa: test_acls: cannot find packages on centos 8
- 08:00 AM Bug #43486: qa: test_acls: cannot find packages on centos 8
- I am checking if xfstests-dev runs fine without btrfs-prog-devel. The reason for its absence on CentOS 8 is (AFAIS) ...
- 07:14 AM Bug #43486 (In Progress): qa: test_acls: cannot find packages on centos 8
- 11:07 AM Bug #43483 (In Progress): mds: reject forward scrubs when cluster has multiple active MDS (more t...
- 08:42 AM Backport #43338 (New): nautilus: qa/tasks: add remaining tests for fs volume
- 03:07 AM Bug #43493 (Can't reproduce): osdc: fix null pointer caused program crash
- PurgeRange.oncommit NULL error
01/06/2020
- 09:40 PM Bug #43329 (Resolved): cephfs-shell: AttributeError when undefined an conf opt is attemptted to read
- 08:42 PM Documentation #37746 (Resolved): doc: how to mount a subdir with ceph-fuse/kclient
- 08:35 PM Bug #43460 (Resolved): qa: loff_t type missing for fsync-tester
- 08:31 PM Fix #42450 (Pending Backport): MDSMonitor: warn if a new file system is being created with an EC ...
- 08:28 PM Bug #43326 (Resolved): mds: batch getattr/lookup bug
- 08:27 PM Bug #42088 (Pending Backport): 'ceph -s' does not show standbys if there are no filesystems
- 08:20 PM Feature #43294 (Pending Backport): mount.ceph: give a hint message when no mds is up or cluster i...
- 07:48 PM Bug #43487 (Fix Under Review): qa: test_acls does not detect rhel8
- 07:46 PM Bug #43487 (Resolved): qa: test_acls does not detect rhel8
- ...
- 07:44 PM Bug #43486 (Resolved): qa: test_acls: cannot find packages on centos 8
- ...
- 06:24 PM Bug #43484 (Fix Under Review): mds: note features client has when rejecting client due to feature...
- 06:15 PM Bug #43484 (Resolved): mds: note features client has when rejecting client due to feature incompat
- Currently we get a message like:...
- 03:17 PM Bug #43407: mds crash after update to v14.2.5
- The first ESubtreeMap in the journal was wrong. It should also contain dir 0x1...
- 02:52 PM Bug #43407 (Triaged): mds crash after update to v14.2.5
- 03:07 PM Bug #43483 (Resolved): mds: reject forward scrubs when cluster has multiple active MDS (more than...
- Forward scrub may cause the MDS to hit various assertions if there is more than one rank. Have the MDS check if there...
- 02:40 PM Bug #43440 (Triaged): client: chdir does not raise error if a file is passed
01/03/2020
- 11:49 PM Bug #43460 (Fix Under Review): qa: loff_t type missing for fsync-tester
- 11:44 PM Bug #43460 (Resolved): qa: loff_t type missing for fsync-tester
- ...
- 11:35 PM Bug #43459 (Fix Under Review): qa: FATAL ERROR: libtool does not seem to be installed.
- 11:28 PM Bug #43459 (In Progress): qa: FATAL ERROR: libtool does not seem to be installed.
- 11:24 PM Bug #43459 (Resolved): qa: FATAL ERROR: libtool does not seem to be installed.
- ...
- 07:34 PM Bug #43407: mds crash after update to v14.2.5
- Status update:
I have tried
cephfs-journal-tool event recover_dentries summary
followed with
cephfs-journal-tool... - 03:29 PM Bug #43407: mds crash after update to v14.2.5
- > 2. recover journal events:
> cephfs-journal-tool journal export backup.bin
Do you mean
_cephfs-journal-tool ev... - 02:23 PM Bug #43407: mds crash after update to v14.2.5
- The MDS shows there are some ENoOp log events. This means some region of the MDS log was erased by cephfs-journal-tool. Why ...
- 11:18 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I built a tree based on 1e2fe722c41d4cc34094afb157b3eb06b4a50972, which is the commit just before the merge of Zheng'...
- 03:14 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Patrick Donnelly wrote:
> Zheng Yan wrote:
> > Patrick Donnelly wrote:
> > > The baseline performance is surprisin... - 01:51 AM Feature #43423: mds: collect and show the dentry lease metric
- Patches are ready and waiting for the dependent PR [1] to be merged.
[1] https://github.com/ceph/ceph/pull/26004
01/02/2020
- 09:19 PM Bug #43407: mds crash after update to v14.2.5
- Yes I had 3 filesystems (namespaces), one for every mds daemon, and the setup was working up to the update to v14.2.5...
- 07:00 PM Bug #43407: mds crash after update to v14.2.5
- Were you using multiple MDS before?
Can you increase MDS debugging:
ceph config set mds debug_mds 10
and res... - 08:10 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Looking at my home-grown testcase, the results look pretty good, but untarring a random kernel tarball is consider...
01/01/2020
- 09:38 AM Documentation #43154: doc: migrate best practice recommendations to relevant docs
- https://docs.ceph.com/docs/master/cephfs/fuse/ - This is the location of the FUSE docs.
12/31/2019
- 12:46 PM Bug #43440 (Resolved): client: chdir does not raise error if a file is passed
- ...
- 06:14 AM Feature #41566 (In Progress): mds: support rolling upgrades
- 04:13 AM Feature #43435: kclient:send client provided metric flags in client metadata
- Patch is ready and the test output is:...
- 04:11 AM Bug #43438 (Fix Under Review): cephfs-journal-tool: will crash without any extra argument
- 04:10 AM Bug #43438: cephfs-journal-tool: will crash without any extra argument
- The fixing PR: https://github.com/ceph/ceph/pull/32452
- 04:01 AM Bug #43438 (In Progress): cephfs-journal-tool: will crash without any extra argument
- 04:00 AM Bug #43438 (Resolved): cephfs-journal-tool: will crash without any extra argument
- ...
12/30/2019
- 04:16 PM Bug #41565 (Fix Under Review): mds: detect MDS<->MDS messages that are not versioned
- 05:42 AM Feature #43435 (In Progress): kclient:send client provided metric flags in client metadata
- 05:42 AM Feature #43435 (Resolved): kclient:send client provided metric flags in client metadata
- This will send the kclient-provided metric flags to the MDS server.
12/27/2019
12/26/2019
- 05:26 PM Cleanup #43425 (Fix Under Review): mds: reorg snap header
- 03:01 PM Cleanup #43425 (Resolved): mds: reorg snap header
- 03:03 PM Cleanup #43426 (Resolved): mds: reorg mdstypes header
- 03:02 PM Cleanup #43424 (Fix Under Review): mds: reorg inode_backtrace header
- 01:58 PM Cleanup #43424 (Resolved): mds: reorg inode_backtrace header
- 06:15 AM Feature #43423: mds: collect and show the dentry lease metric
- https://tracker.ceph.com/issues/24285
- 06:12 AM Feature #43423: mds: collect and show the dentry lease metric
- Locally the patch is ready, but it depends on https://github.com/ceph/ceph/pull/26004, which hasn't been merged yet.
<... - 06:10 AM Feature #43423 (Resolved): mds: collect and show the dentry lease metric
- The kclient will collect the dentry lease metric and send it to the MDS; currently this isn't shown in the perf stats.
Also available in: Atom