Activity
From 04/19/2020 to 05/18/2020
05/18/2020
- 10:14 PM Bug #45552 (Resolved): qa/task/vstart_runner.py: admin_socket: exception getting command descript...
- 10:12 PM Bug #45090 (Pending Backport): mds: inode's xattr_map may reference a large memory.
- 10:11 PM Bug #45090 (Resolved): mds: inode's xattr_map may reference a large memory.
- 10:10 PM Bug #43598 (Pending Backport): mds: PurgeQueue does not handle objecter errors
- 10:08 PM Bug #45114 (Pending Backport): client: make cache shrinking callbacks available via libcephfs
- 10:01 PM Bug #45373 (Resolved): cephfs-shell: OSError type exceptions throw object has no attribute 'get_e...
- 09:58 PM Bug #45430 (Resolved): qa/cephfs: cleanup() and cleanup_netns() needs to be run even FS was not m...
- 09:29 PM Bug #45593 (Rejected): qa: removing network bridge appears to cause dropped packets
- ...
- 07:59 PM Bug #45590 (Fix Under Review): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- 07:54 PM Bug #45590 (Resolved): qa: TypeError: unsupported operand type(s) for +: 'range' and 'range'
- ...
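A minimal Python sketch of the incompatibility behind this error (hypothetical values, not the failing qa code): under Python 2, range() returned a list that supported +, while under Python 3 it returns a lazy range object.
<pre>
from itertools import chain

low = range(0, 5)
high = range(100, 105)

try:
    combined = low + high                  # worked on Python 2, TypeError on Python 3
except TypeError:
    combined = list(low) + list(high)      # materialize both ranges...
    combined_lazy = chain(low, high)       # ...or concatenate them lazily

print(combined)   # [0, 1, 2, 3, 4, 100, 101, 102, 103, 104]
</pre>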
- 05:59 PM Bug #45521: mds: layout parser does not handle [-.] in pool names
- Zheng Yan wrote:
>
> is this behavior related to this issue?
>
Not at all -- I put this in the wrong tracker....
- 01:48 PM Bug #45532 (Triaged): cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- 01:46 PM Bug #45553 (Duplicate): mds: rstats on snapshot are updated by changes to HEAD
- 01:12 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay, I will work on writing up a new ticket for the slow requests problem and at the moment not do anything to trou...
- 12:58 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Unfortunately, ganesha doesn't have great instrumentation in this area. There is a ganeshactl program that ships with...
- 12:22 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Ganesha patches are merged and have been for over a week. The libcephfs bits are also still ready, but testing is tak...
- 04:15 AM Bug #45575 (Resolved): cephfs-journal-tool: incorrect read_offset after finding missing objects
- in JournalScanner::scan_events(), read_offset is not increased when finding missing objects, which will lead to a wrong r...
05/16/2020
- 07:43 PM Documentation #45573 (New): doc: client: client_reconnect_stale=1
- The existing documentation is out of date: https://docs.ceph.com/docs/mimic/cephfs/eviction/#advanced-un-blacklisting...
- 02:21 AM Cleanup #45525 (Resolved): qa/task/cephfs/mount.py: skip saving/restoring the previous value for ...
05/15/2020
- 03:18 PM Bug #45398 (Fix Under Review): mgr/volumes: Not able to resize cephfs subvolume with ceph fs subv...
- 08:41 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay I have left the nfs and old ceph cluster connected and haven't seen a cache pressure message. Added a client mou...
- 03:36 AM Bug #45530 (Fix Under Review): qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: [...
- 03:35 AM Bug #45531 (Fix Under Review): qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD fai...
- This PR is fixing it: https://github.com/ceph/ceph-build/pull/1569
- 12:42 AM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Jeff Layton wrote:
> Got it, so basically you just need to go through and vet all of those and convert the ones that...
- 03:15 AM Cleanup #45525 (Fix Under Review): qa/task/cephfs/mount.py: skip saving/restoring the previous va...
- taking this -- it's particularly disruptive for qa testing and the fix is simple.
05/14/2020
- 08:43 PM Bug #45538 (Triaged): qa: Fix string/byte comparison mismatch in test_exports
- 03:33 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Got it, so basically you just need to go through and vet all of those and convert the ones that were configured as "m...
- 02:44 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Hi Jeff,
You are right. Checked it again: ...
- 02:37 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- Xiubo also asked on IRC:...
- 02:34 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- I dropped my PR as Ilya (rightly) pointed out that updating the configs from a distro kernel would pull in a bunch of...
- 12:13 AM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- We are missing some kconfig contents:...
- 11:40 AM Bug #45553 (Duplicate): mds: rstats on snapshot are updated by changes to HEAD
- The "ceph.dir.rbytes" xattr on .snap/<snap_name> directory is getting updated on already taken snapshots.
Check the ...
- 10:18 AM Bug #45532: cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Much simpler test case:...
- 09:26 AM Bug #45552 (Fix Under Review): qa/task/vstart_runner.py: admin_socket: exception getting command ...
- 08:53 AM Bug #45552 (In Progress): qa/task/vstart_runner.py: admin_socket: exception getting command descr...
- 08:53 AM Bug #45552 (Resolved): qa/task/vstart_runner.py: admin_socket: exception getting command descript...
- ...
- 05:19 AM Bug #45304 (Fix Under Review): qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
05/13/2020
- 10:49 PM Bug #45100: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- Octopus too: /ceph/teuthology-archive/yuriw-2020-05-09_21:30:44-kcephfs-wip-yuri-octopus_15.2.2_RC0-distro-basic-smit...
- 08:50 PM Bug #43515 (Resolved): qa: SyntaxError: invalid token
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Feature #44277 (Resolved): pybind/mgr/volumes: add command to return metadata regarding a subvolume
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:49 PM Bug #44801 (Resolved): client: write stuck at waiting for larger max_size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:59 PM Bug #45538 (Triaged): qa: Fix string/byte comparison mismatch in test_exports
- mount.getfattr() returns string rather than bytes after https://github.com/ceph/ceph/pull/34941. This produces assert...
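A minimal sketch of the resulting failure and the fix, assuming a test that previously compared the helper's output against a byte literal (names hypothetical, not the actual test_exports code):
<pre>
import unittest

class ExampleTest(unittest.TestCase):
    def test_mode(self):
        mode = "755"                        # getfattr-style helper now returns str
        self.assertNotEqual(b"755", mode)   # old expectation fails: b'755' != '755'
        self.assertEqual("755", mode)       # fix: compare str against str

if __name__ == "__main__":
    unittest.main()
</pre>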
- 04:47 PM Bug #45531: qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Operation not ...
- We are missing some kconfig contents:
CONFIG_NF_TABLES/CONFIG_NF_TABLES_IPV4/CONFIG_NF_TABLES_ARP/CONFIG_NF_TABLE...
- 09:25 AM Bug #45531 (Resolved): qa/task/cephfs: stderr:iptables v1.8.2 (nf_tables): CHAIN_ADD failed (Ope...
- ...
- 01:46 PM Bug #45521: mds: layout parser does not handle [-.] in pool names
- Jeff Layton wrote:
> The kernel client only copies off the layout when given Fw or Fr caps.
>
is this behavi...
- 01:04 PM Bug #45521 (In Progress): mds: layout parser does not handle [-.] in pool names
- 12:47 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- We are officially moved off of the old cluster so now I can mess around with the old one without any worries (still u...
- 11:15 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Cephfs for our cluster has grown significantly in the number of clients due to joining a large HTC grid. I will not e...
- 12:27 PM Bug #45532 (Resolved): cephfs kernel client rmdir degradation/regression in kernels 5.1 and higher
- Hi,
a simple test script (testdirs.sh), demonstrating the problem described below, creating 10k files in ~1000 dir...
- 08:54 AM Bug #45530 (Resolved): qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd', '|...
- ...
- 04:03 AM Cleanup #45525: qa/task/cephfs/mount.py: skip saving/restoring the previous value for ip_forward
- ...
- 03:31 AM Cleanup #45525 (In Progress): qa/task/cephfs/mount.py: skip saving/restoring the previous value f...
- 03:30 AM Cleanup #45525 (Resolved): qa/task/cephfs/mount.py: skip saving/restoring the previous value for ...
- Skip saving/restoring the previous value for ip_forward and just hardcode it to 1.
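A minimal sketch of what hardcoding it to 1 could look like (hypothetical helper, not the actual qa/tasks/cephfs/mount.py code):
<pre>
import subprocess

def enable_ip_forward():
    # Enable forwarding unconditionally instead of saving and restoring the old value.
    subprocess.run(["sudo", "sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)
</pre>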
- 03:38 AM Bug #45524 (Fix Under Review): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- 03:25 AM Bug #45524 (In Progress): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- 03:24 AM Bug #45524 (Resolved): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- ...
05/12/2020
- 07:15 PM Bug #45521: mds: layout parser does not handle [-.] in pool names
- The kernel client only copies off the layout when given Fw or Fr caps.
We could change the MDS to gratuitously...
- 05:47 PM Bug #45521 (Resolved): mds: layout parser does not handle [-.] in pool names
- ...
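A hypothetical reproduction sketch, assuming a CephFS mount at /mnt/cephfs and a data pool whose name contains '-' and '.' (path and pool name are made up; the layout string is what the parser in question has to handle):
<pre>
import os

path = "/mnt/cephfs/testfile"
open(path, "w").close()
# Ask for a layout pointing at a pool with '-' and '.' in its name.
os.setxattr(path, "ceph.file.layout", b"pool=cephfs.data-pool.ec")
</pre>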
- 07:03 PM Bug #45459 (Resolved): qa/task/cephfs/mount.py: Error: Connection activation failed: Activation f...
- 02:07 PM Feature #20196: mds: early reintegration of strays on hardlink deletion
- we can reintegrate more files in strays now, https://github.com/ceph/ceph/pull/33479
- 02:35 AM Bug #45446 (Resolved): vstart_runner.py: using python3 leads to TypeError: unhashable type: 'Raw'
05/11/2020
- 07:56 PM Bug #45430 (Fix Under Review): qa/cephfs: cleanup() and cleanup_netns() needs to be run even FS w...
- 07:45 PM Feature #45371 (In Progress): mgr/volumes: `protect` and `clone` operation in a single transaction
- 07:32 PM Bug #45332 (Resolved): qa: TestExports is failure under new Python3 runtime
- 07:15 PM Bug #45283 (Triaged): Kernel log flood "ceph: Failed to find inode for 1"
- 06:05 PM Backport #45212 (Resolved): nautilus: client: write stuck at waiting for larger max_size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34767
m...
- 02:55 PM Backport #45212: nautilus: client: write stuck at waiting for larger max_size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34767
merged
- 06:05 PM Backport #45181 (Resolved): nautilus: pybind/mgr/volumes: add command to return metadata regardin...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34679
m...
- 06:03 PM Backport #44655: nautilus: qa: SyntaxError: invalid token
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34470
m...
- 04:16 PM Backport #44655 (Resolved): nautilus: qa: SyntaxError: invalid token
- 02:53 PM Backport #44655: nautilus: qa: SyntaxError: invalid token
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34470
merged
- 04:17 PM Backport #45496 (In Progress): nautilus: client: fuse mount will print call trace with incorrect ...
- 02:26 PM Backport #45496 (Resolved): nautilus: client: fuse mount will print call trace with incorrect opt...
- https://github.com/ceph/ceph/pull/35000
- 04:13 PM Backport #45495 (In Progress): octopus: client: fuse mount will print call trace with incorrect o...
- 02:26 PM Backport #45495 (Resolved): octopus: client: fuse mount will print call trace with incorrect options
- https://github.com/ceph/ceph/pull/34999
- 04:11 PM Bug #44645 (Resolved): cephfs-shell: Fix flake8 errors (E302, E502, E128, F821, W605, E128 and E122)
- being backported via https://tracker.ceph.com/issues/45476
- 04:10 PM Bug #44657 (Resolved): cephfs-shell: Fix flake8 errors (F841, E302, E502, E128, E305 and E222)
- will be backported via https://tracker.ceph.com/issues/45476
- 04:06 PM Backport #45476 (In Progress): octopus: cephfs-shell: CI testing does not detect flake8 errors
- 02:21 PM Backport #45476 (Resolved): octopus: cephfs-shell: CI testing does not detect flake8 errors
- https://github.com/ceph/ceph/pull/34998
- 04:05 PM Backport #45477 (In Progress): octopus: fix MClientCaps::FLAG_SYNC in check_caps
- 02:22 PM Backport #45477 (Resolved): octopus: fix MClientCaps::FLAG_SYNC in check_caps
- https://github.com/ceph/ceph/pull/34997
- 04:05 PM Backport #45473 (In Progress): octopus: some obsolete "ceph mds" sub commands are suggested by ba...
- 02:21 PM Backport #45473 (Resolved): octopus: some obsolete "ceph mds" sub commands are suggested by bash ...
- https://github.com/ceph/ceph/pull/34996
- 02:35 PM Bug #16881 (Resolved): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:35 PM Bug #24823 (Resolved): mds: deadlock when setting config value via admin socket
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:35 PM Bug #36189 (Resolved): ceph-fuse client can't read or write due to backward cap_gen
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Feature #37678 (Resolved): mds: log new client sessions with various metadata
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38020 (Resolved): mds: remove cache drop admin socket command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38137 (Resolved): mds: may leak gather during cache drop
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38348 (Resolved): mds: drop cache does not timeout as expected
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38677 (Resolved): qa: kclient unmount hangs after file system goes down
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:34 PM Bug #38704 (Resolved): qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (MON_DOWN) in cl...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Fix #38801 (Resolved): qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel client
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39305 (Resolved): ceph-fuse: client hang because its bad session PipeConnection to mds
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39405 (Resolved): ceph_volume_client: python program embedded in test_volume_client.py use p...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39406 (Resolved): ceph_volume_client: d_name needs to be converted to string before using
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Bug #39510 (Resolved): test_volume_client: test_put_object_versioned is unreliable
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:33 PM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:32 PM Bug #40460 (Resolved): test_volume_client: declare only one default for python version
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:32 PM Bug #40800 (Resolved): ceph_volume_client: to_bytes converts NoneType object str
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:32 PM Bug #40936 (Resolved): tools/cephfs: memory leak in cephfs/Resetter.cc
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41031 (Resolved): qa: malformed job
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41141 (Resolved): mds: recall capabilities more regularly when under cache pressure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Feature #41209 (Resolved): mds: create a configurable snapshot limit
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41799 (Resolved): client: FAILED assert(cap == in->auth_cap)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:31 PM Bug #41800 (Resolved): qa: logrotate should tolerate connection resets
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #41836 (Resolved): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #42213 (Resolved): test_reconnect_eviction fails with "RuntimeError: MDS in reject state up:a...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #42251 (Resolved): mds: no assert on frozen dir when scrub path
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Fix #42450 (Resolved): MDSMonitor: warn if a new file system is being created with an EC default ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:30 PM Bug #42515 (Resolved): fs: OpenFileTable object shards have too many k/v pairs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42637 (Resolved): qa: ffsb suite causes SLOW_OPS warnings
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42675 (Resolved): mds: tolerate no snaprealm encoded in on-disk root inode
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42759 (Resolved): mds: inode lock stuck at unstable state after evicting client
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42826 (Resolved): mds: client does not response to cap revoke After session stale->resume ci...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42829 (Resolved): tools/cephfs: linkages injected by cephfs-data-scan have first == head
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #42938 (Resolved): mds: free heap memory may grow too large for some workloads
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:29 PM Bug #43061 (Resolved): ceph fs add_data_pool doesn't set pool metadata properly
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43362 (Resolved): client: disallow changing fuse_default_permissions option at runtime
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43438 (Resolved): cephfs-journal-tool: will crash without any extra argument
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43440 (Resolved): client: chdir does not raise error if a file is passed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43483 (Resolved): mds: reject forward scrubs when cluster has multiple active MDS (more than...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43554 (Resolved): qa: test_full racy check: AssertionError: 29 not greater than or equal to 30
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #43567 (Resolved): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_di...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #43761 (Resolved): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does not g...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #43909 (Resolved): mds: SIGSEGV in Migrator::export_sessions_flushed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #44021 (Resolved): client: bad error handling in Client::_lseek
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:26 PM Bug #44295 (Resolved): mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:26 PM Backport #45497 (Resolved): nautilus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat...
- https://github.com/ceph/ceph/pull/35185
- 02:26 PM Bug #44408 (Resolved): qa: after the cephfs qa test case quit the mountpoints still exist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:25 PM Bug #44437 (Resolved): qa:test_config_session_timeout failed with incorrect options
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:25 PM Bug #44525 (Resolved): LibCephFS::RecalledGetattr test failed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:24 PM Bug #44677 (Resolved): stale scrub status entry from a failed mds shows up in `ceph status`
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:23 PM Bug #44771 (Resolved): ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Client::_d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:23 PM Bug #44885 (Resolved): enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:22 PM Backport #45478 (Resolved): nautilus: fix MClientCaps::FLAG_SYNC in check_caps
- https://github.com/ceph/ceph/pull/35118
- 02:21 PM Backport #45474 (Resolved): nautilus: some obsolete "ceph mds" sub commands are suggested by bash...
- https://github.com/ceph/ceph/pull/35117
- 02:20 PM Bug #45387 (Resolved): qa: install task runs twice with double unwind causing fatal errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:23 AM Bug #45459: qa/task/cephfs/mount.py: Error: Connection activation failed: Activation failed becau...
- Fall back to using brctl instead of nmcli to set up the bridge for now on all Ubuntu releases.
- 07:23 AM Bug #45459 (Fix Under Review): qa/task/cephfs/mount.py: Error: Connection activation failed: Acti...
- 07:12 AM Bug #45459 (Resolved): qa/task/cephfs/mount.py: Error: Connection activation failed: Activation f...
- ...
05/10/2020
- 04:12 AM Bug #40001 (Rejected): mds cache oversize after restart
- This ticket has become stale. Closing.
- 04:11 AM Bug #45141 (Pending Backport): some obsolete "ceph mds" sub commands are suggested by bash comple...
- 04:02 AM Bug #45114 (Triaged): client: make cache shrinking callbacks available via libcephfs
- 04:01 AM Bug #45104 (Closed): NFS deployed using orchestrator watch_url not working and mkdirs permission ...
05/09/2020
- 04:54 PM Documentation #44788 (Resolved): cephfs-shell: Missing documentation of quota, df and du
- 07:11 AM Bug #45299 (Resolved): qa/cephfs: AssertionError: b'755' != '755' when running test_subvolume_gro...
- 04:28 AM Bug #45425 (Resolved): qa/cephfs: mount.py must use StringIO instead of BytesIO
05/08/2020
- 03:07 PM Backport #45389 (Resolved): octopus: qa: install task runs twice with double unwind causing fatal...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34912
m...
- 03:07 PM Backport #45214: octopus: client: write stuck at waiting for larger max_size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34766
m...
- 01:18 AM Backport #45214 (Resolved): octopus: client: write stuck at waiting for larger max_size
- 03:07 PM Backport #44800: octopus: mds: 'if there is lock cache on dir' check is buggy
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34273
m...
- 01:13 AM Backport #44800 (Resolved): octopus: mds: 'if there is lock cache on dir' check is buggy
- 03:06 PM Backport #45211 (Resolved): octopus: enable 'big_writes' fuse option if ceph-fuse is linked to li...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34769
m...
- 03:06 PM Backport #45046 (Resolved): octopus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34769
m...
- 11:46 AM Bug #45446 (Fix Under Review): vstart_runner.py: using python3 leads to TypeError: unhashable typ...
- 11:44 AM Bug #45446 (Resolved): vstart_runner.py: using python3 leads to TypeError: unhashable type: 'Raw'
- ...
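A generic Python sketch of why this class of error shows up on Python 3 (not the actual teuthology Raw class): a class that defines __eq__ without __hash__ gets __hash__ set to None, so its instances can no longer be placed in sets or used as dict keys.
<pre>
class Raw:
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, Raw) and self.value == other.value

    # Without this, hashing an instance raises "TypeError: unhashable type: 'Raw'".
    def __hash__(self):
        return hash(self.value)

print({Raw("&&"), Raw("&&")})   # works once __hash__ is defined alongside __eq__
</pre>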
- 01:14 AM Bug #44448 (Resolved): mds: 'if there is lock cache on dir' check is buggy
05/07/2020
- 11:32 PM Backport #45389: octopus: qa: install task runs twice with double unwind causing fatal errors
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/34912
merged
- 11:30 PM Backport #45214: octopus: client: write stuck at waiting for larger max_size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34766
merged
- 11:29 PM Backport #44800: octopus: mds: 'if there is lock cache on dir' check is buggy
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/34273
merged
- 06:27 PM Backport #45211: octopus: enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34769
merged
- 06:27 PM Backport #45046: octopus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Client:...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34769
merged
- 04:41 PM Bug #44546: cleanup: Can't lookup inode 1
- Stefan Kooman wrote:
> @Luis Henriques
>
> "As Greg mentioned in the first comment, this seems to be a cephx conf... - 02:57 PM Bug #44546: cleanup: Can't lookup inode 1
- @Luis Henriques
"As Greg mentioned in the first comment, this seems to be a cephx configuration issue. '-13' means... - 01:41 PM Bug #44546: cleanup: Can't lookup inode 1
- Ok, in that case, let's leave it as a debug message then. I think we should probably mark your patch for stable too. ...
- 10:59 AM Bug #44546: cleanup: Can't lookup inode 1
- Jeff Layton wrote:
> I'm fine with demoting this log message. If we're concerned about admins not getting a hint as ...
- 09:52 AM Bug #44546: cleanup: Can't lookup inode 1
- I'm fine with demoting this log message. If we're concerned about admins not getting a hint as to the problem, then w...
- 04:15 PM Backport #45230 (Resolved): octopus: ceph fs add_data_pool doesn't set pool metadata properly
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34775
m...
- 03:27 PM Backport #45230: octopus: ceph fs add_data_pool doesn't set pool metadata properly
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34775
merged
- 04:15 PM Backport #45226 (Resolved): octopus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34775
m...
- 03:27 PM Backport #45226: octopus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does no...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34775
merged
- 04:15 PM Backport #45220 (Resolved): octopus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34803
m...
- 04:13 PM Backport #45219 (Resolved): octopus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34802
m...
- 04:13 PM Backport #45216 (Resolved): octopus: qa: after the cephfs qa test case quit the mountpoints still...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34801
m...
- 04:13 PM Backport #45049 (Resolved): octopus: stale scrub status entry from a failed mds shows up in `ceph...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34800
m...
- 04:12 PM Backport #44844 (Resolved): octopus: qa:test_config_session_timeout failed with incorrect options
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34799
m...
- 04:12 PM Backport #44843 (Resolved): octopus: LibCephFS::RecalledGetattr test failed
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34798
m...
- 04:06 PM Backport #45222 (Resolved): octopus: cephfs-journal-tool: cannot set --dry_run arg
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34804
m...
- 04:06 PM Backport #45227 (Resolved): octopus: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_wit...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34805
m...
- 04:05 PM Backport #45180 (Resolved): octopus: pybind/mgr/volumes: add command to return metadata regarding...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34681
m...
- 03:13 PM Bug #45434 (Resolved): qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- http://pulpito.ceph.com/yuriw-2020-05-05_20:55:43-kcephfs-wip-yuri-testing-2020-05-05-1439-distro-basic-smithi/502617...
- 01:16 PM Bug #45430 (Resolved): qa/cephfs: cleanup() and cleanup_netns() needs to be run even FS was not m...
- Mountpoint directory and network namespaces are created whether or not the mount command itself passes. Therefore, bo...
- 12:02 PM Documentation #44788 (In Progress): cephfs-shell: Missing documentation of quota, df and du
- 11:46 AM Bug #41034 (Fix Under Review): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- 11:07 AM Backport #41865: nautilus: mds: ask idle client to trim more caps
- > Nathan, is this a bug in your script?
When a hotfix release like 14.2.7 is done, a bunch of backport trackers th...
- 10:56 AM Backport #45287 (Resolved): nautilus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABR...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34771
m...
- 10:56 AM Backport #45210 (Resolved): nautilus: enable 'big_writes' fuse option if ceph-fuse is linked to l...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34771
m...
- 10:55 AM Backport #45050: nautilus: stale scrub status entry from a failed mds shows up in `ceph status`
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34563
m...
- 09:39 AM Bug #44645 (Pending Backport): cephfs-shell: Fix flake8 errors (E302, E502, E128, F821, W605, E12...
- 09:38 AM Bug #44657 (Pending Backport): cephfs-shell: Fix flake8 errors (F841, E302, E502, E128, E305 and ...
- 08:59 AM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- Feature Requirements:
* tool should be implemented as a mgr plugin
* tool should mirror User Defined as well as Sch...
- 06:46 AM Bug #45425 (Fix Under Review): qa/cephfs: mount.py must use StringIO instead of BytesIO
- 06:43 AM Bug #45425 (Resolved): qa/cephfs: mount.py must use StringIO instead of BytesIO
- ...
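A minimal sketch of the mismatch, assuming the remote-command wrappers now hand back decoded text (helper name hypothetical, not the actual mount.py code):
<pre>
from io import BytesIO, StringIO

def capture_output(buf):
    buf.write("drwxr-xr-x\n")    # command output already decoded to str
    return buf.getvalue()

print(capture_output(StringIO()))    # fine
try:
    capture_output(BytesIO())        # TypeError: a bytes-like object is required, not 'str'
except TypeError as err:
    print("BytesIO no longer works here:", err)
</pre>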
- 03:04 AM Bug #45342 (Fix Under Review): qa/tasks/vstart_runner.py: RuntimeError: Fuse mount failed to popu...
- 03:03 AM Feature #45267 (Fix Under Review): ceph-fuse: Reduce memory copy in ceph-fuse during data IO
05/06/2020
- 07:58 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I don't see any evidence for a refcount leak in ganesha. What ganesha _does_ do is maintain a cache of filehandles, a...
- 06:53 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I haven't read the entire thread but my theory is:
(a) NFS-Ganesha has a reference leak that is preventing it from...
- 06:37 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- (reformatted)
- 06:49 PM Backport #41865: nautilus: mds: ask idle client to trim more caps
- This is actually in v14.2.8:...
- 04:04 PM Backport #45050 (Resolved): nautilus: stale scrub status entry from a failed mds shows up in `cep...
- 03:26 PM Backport #44476 (In Progress): luminous: mds: assert(p != active_requests.end())
- Sidharth, please do this backport.
- 03:21 PM Backport #43000 (Rejected): luminous: qa: ignore "ceph.dir.pin: No such attribute" for (old) kern...
- Rejected as Luminous is EOL.
- 03:21 PM Backport #41101 (Rejected): luminous: tools/cephfs: memory leak in cephfs/Resetter.cc
- Rejected as Luminous is EOL.
- 03:20 PM Backport #40858 (Rejected): luminous: ceph_volume_client: python program embedded in test_volume_...
- Rejected as Luminous is EOL.
- 03:20 PM Backport #40855 (Rejected): luminous: test_volume_client: test_put_object_versioned is unreliable
- Rejected as Luminous is EOL.
- 03:20 PM Backport #40493 (Rejected): luminous: test_volume_client: declare only one default for python ver...
- Rejected as Luminous is EOL.
- 03:20 PM Backport #40323 (Rejected): luminous: ceph_volume_client: d_name needs to be converted to string ...
- Rejected as Luminous is EOL.
- 03:19 PM Backport #39687 (Rejected): luminous: ceph-fuse: client hang because its bad session PipeConnecti...
- Rejected as Luminous is EOL.
- 03:19 PM Backport #38737 (Rejected): luminous: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (...
- Rejected as Luminous is EOL.
- 03:18 PM Backport #38710 (Rejected): luminous: qa: kclient unmount hangs after file system goes down
- Rejected as Luminous is EOL.
- 03:18 PM Backport #38100 (Rejected): luminous: mds: remove cache drop admin socket command
- Rejected as Luminous is EOL.
- 03:18 PM Backport #36462 (Rejected): luminous: ceph-fuse client can't read or write due to backward cap_gen
- Rejected as Luminous is EOL.
- 03:14 PM Backport #45218 (Rejected): luminous: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterF...
- Rejected as luminous is EOL.
- 11:27 AM Bug #45398 (Resolved): mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume cr...
- in cephcsi we are using ceph fs subvolume create command to resize the PVC, which is not working in the cluster which...
- 08:56 AM Bug #45396 (Fix Under Review): ceph-fuse: building the source code failed with libfuse3.5 or high...
- 08:56 AM Bug #45396: ceph-fuse: building the source code failed with libfuse3.5 or higher versions
- The 'int cmd' parameter type in fuse ioctl changed to 'unsigned int' since libfuse3.5.0.
- 08:40 AM Bug #45396 (In Progress): ceph-fuse: building the source code failed with libfuse3.5 or higher ve...
- 08:40 AM Bug #45396 (Resolved): ceph-fuse: building the source code failed with libfuse3.5 or higher versions
- 08:13 AM Feature #45237 (Fix Under Review): pybind/mgr/volumes: add command to return metadata regarding a...
05/05/2020
- 09:19 PM Bug #45071 (Pending Backport): cephfs-shell: CI testing does not detect flake8 errors
- 09:15 PM Bug #45024 (Fix Under Review): mds: wrong link count under certain circumstance
- 09:10 PM Bug #44988 (Triaged): client: track dirty inodes in a per-session list for effective cap flushing
- 09:09 PM Bug #44963 (Pending Backport): fix MClientCaps::FLAG_SYNC in check_caps
- 09:07 PM Bug #44947 (Need More Info): Hung ops for evicted CephFS clients do not get cleaned up fully
- 09:00 PM Bug #44904 (Resolved): CephFSMount::run_shell does not run command with sudo
- 08:56 PM Bug #44902 (Rejected): ceph-fuse read the file cached data when we recover from the blacklist
- 08:21 PM Bug #43543 (Fix Under Review): mds: scrub on directory with recently created files may fail to lo...
- 06:03 PM Backport #45389 (In Progress): octopus: qa: install task runs twice with double unwind causing fa...
- 06:00 PM Backport #45389 (Resolved): octopus: qa: install task runs twice with double unwind causing fatal...
- https://github.com/ceph/ceph/pull/34912
- 05:58 PM Bug #45387 (Pending Backport): qa: install task runs twice with double unwind causing fatal errors
- 04:28 PM Bug #45387 (Fix Under Review): qa: install task runs twice with double unwind causing fatal errors
- 04:18 PM Bug #45387: qa: install task runs twice with double unwind causing fatal errors
- master failure: /ceph/teuthology-archive/teuthology-2020-04-17_03:15:03-fs-master-distro-basic-smithi/4959584/teuthol...
- 04:16 PM Bug #45387 (Resolved): qa: install task runs twice with double unwind causing fatal errors
- ...
- 05:09 PM Backport #45285 (Rejected): mimic: client: write stuck at waiting for larger max_size
- Closing as Mimic is EOL.
- 05:09 PM Backport #44290 (Rejected): mimic: mds: SIGSEGV in Migrator::export_sessions_flushed
- Closing as Mimic is EOL.
- 05:09 PM Backport #44479 (Rejected): mimic: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- Closing as Mimic is EOL.
- 05:09 PM Backport #42440 (Rejected): mimic: mds: create a configurable snapshot limit
- Closing as Mimic is EOL.
- 05:09 PM Backport #40886 (Rejected): mimic: ceph_volume_client: to_bytes converts NoneType object str
- Closing as Mimic is EOL.
- 05:09 PM Backport #41468 (Rejected): mimic: mds: recall capabilities more regularly when under cache pressure
- Closing as Mimic is EOL.
- 05:09 PM Backport #40325 (Rejected): mimic: ceph_volume_client: d_name needs to be converted to string bef...
- Closing as Mimic is EOL.
- 05:09 PM Backport #40856 (Rejected): mimic: ceph_volume_client: python program embedded in test_volume_cli...
- Closing as Mimic is EOL.
- 05:09 PM Backport #38339 (Rejected): mimic: mds: may leak gather during cache drop
- Closing as Mimic is EOL.
- 05:09 PM Backport #38444 (Rejected): mimic: mds: drop cache does not timeout as expected
- Closing as Mimic is EOL.
- 05:09 PM Backport #37761 (Rejected): mimic: mds: deadlock when setting config value via admin socket
- Closing as Mimic is EOL.
- 05:09 PM Backport #38085 (Rejected): mimic: mds: log new client sessions with various metadata
- Closing as Mimic is EOL.
- 05:09 PM Backport #36463 (Rejected): mimic: ceph-fuse client can't read or write due to backward cap_gen
- Closing as Mimic is EOL.
- 05:07 PM Backport #45026 (Rejected): mimic: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Closing as Mimic is EOL.
- 05:07 PM Backport #44329 (Rejected): mimic: client: bad error handling in Client::_lseek
- Closing as Mimic is EOL.
- 05:07 PM Backport #44477 (Rejected): mimic: mds: assert(p != active_requests.end())
- Closing as Mimic is EOL.
- 05:07 PM Backport #44488 (Rejected): mimic: qa: malformed job
- Closing as Mimic is EOL.
- 05:07 PM Backport #43785 (Rejected): mimic: fs: OpenFileTable object shards have too many k/v pairs
- Closing as Mimic is EOL.
- 05:07 PM Backport #43791 (Rejected): mimic: RuntimeError: Files in flight high water is unexpectedly low (...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43734 (Rejected): mimic: qa: ffsb suite causes SLOW_OPS warnings
- Closing as Mimic is EOL.
- 05:07 PM Backport #43778 (Rejected): mimic: qa: test_full racy check: AssertionError: 29 not greater than ...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43627 (Rejected): mimic: client: disallow changing fuse_default_permissions option at r...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43730 (Rejected): mimic: client: chdir does not raise error if a file is passed
- Closing as Mimic is EOL.
- 05:07 PM Backport #43559 (Rejected): mimic: mds: reject forward scrubs when cluster has multiple active MD...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43572 (Rejected): mimic: cephfs-journal-tool: will crash without any extra argument
- Closing as Mimic is EOL.
- 05:07 PM Backport #43342 (Rejected): mimic: mds: client does not response to cap revoke After session stal...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43505 (Rejected): mimic: MDSMonitor: warn if a new file system is being created with an...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43002 (Rejected): mimic: qa: ignore "ceph.dir.pin: No such attribute" for (old) kernel ...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43142 (Rejected): mimic: tools/cephfs: linkages injected by cephfs-data-scan have first...
- Closing as Mimic is EOL.
- 05:07 PM Backport #43144 (Rejected): mimic: mds: tolerate no snaprealm encoded in on-disk root inode
- Closing as Mimic is EOL.
- 05:07 PM Backport #42942 (Rejected): mimic: mds: free heap memory may grow too large for some workloads
- Closing as Mimic is EOL.
- 05:07 PM Backport #42950 (Rejected): mimic: mds: inode lock stuck at unstable state after evicting client
- Closing as Mimic is EOL.
- 05:07 PM Backport #42632 (Rejected): mimic: client: FAILED assert(cap == in->auth_cap)
- Closing as Mimic is EOL.
- 05:07 PM Backport #42649 (Rejected): mimic: mds: no assert on frozen dir when scrub path
- Closing as Mimic is EOL.
- 05:07 PM Backport #42423 (Rejected): mimic: qa: "cluster [ERR] Error recovering journal 0x200: (2) No su...
- Closing as Mimic is EOL.
- 05:07 PM Backport #42463 (Rejected): mimic: doc: MDS and metadata pool hardware requirements/recommendations
- Closing as Mimic is EOL.
- 05:07 PM Backport #42278 (Rejected): mimic: qa: logrotate should tolerate connection resets
- Closing as Mimic is EOL.
- 05:07 PM Backport #42421 (Rejected): mimic: test_reconnect_eviction fails with "RuntimeError: MDS in rejec...
- Closing as Mimic is EOL.
- 04:56 PM Bug #44790 (Duplicate): qa: adjust-ulimits invoked after being removed
- dup of #45387
- 04:33 PM Bug #21058: mds: remove UNIX file permissions binary dependency
- https://github.com/ceph/ceph/pull/32705#discussion_r420244862
- 04:29 PM Backport #45229 (Resolved): nautilus: ceph fs add_data_pool doesn't set pool metadata properly
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34774
m... - 04:29 PM Backport #45225 (Resolved): nautilus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34774
m... - 04:14 PM Bug #43901 (Resolved): qa: fsx: fatal error: libaio.h: No such file or directory
- 11:35 AM Bug #44546: cleanup: Can't lookup inode 1
- As Greg mentioned in the first comment, this seems to be a cephx configuration issue. '-13' means -EACCES (Permissio...
- 04:37 AM Bug #45344: doc: Table Of Contents doesn't work
- Greg Farnum wrote:
> I think this is a project-wide thing that Zach (docs guy) already has a ticket for?
No, I se...
05/04/2020
- 07:54 PM Feature #20: client: recover from a killed session (w/ blacklist)
- this merged after octopus
- 07:41 PM Bug #44779: mimic: Command failed (workunit test suites/ffsb.sh) on smithi063 with status 124
- Milind, please use the pre html tags for code/log paste.
- 04:51 PM Bug #44579 (Resolved): qa: commit 9f6c764f10f break qa code in several places
- 04:50 PM Bug #44389 (Pending Backport): client: fuse mount will print call trace with incorrect options
- 04:48 PM Bug #44565 (Need More Info): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK ||...
- 03:56 PM Bug #44546: cleanup: Can't lookup inode 1
- just saw this on a 5.3.13 kernel cephfs client
- 08:27 AM Bug #44546: cleanup: Can't lookup inode 1
- Hi, following an upgrade to Ubuntu 20.04, I have the same issue.
* I indeed mounted a cephfs subdirectory with the ...
- 03:54 PM Bug #45333: LARGE_OMAP_OBJECTS in pool metadata
- Kenneth Waegeman wrote:
> Thanks, I tried this:
>
> [root@mds01 ~]# ceph daemon mds.mds01 dirfrag ls /backups/xxx...
- 03:12 PM Bug #45333: LARGE_OMAP_OBJECTS in pool metadata
- Thanks, I tried this:
[root@mds01 ~]# ceph daemon mds.mds01 dirfrag ls /backups/xxx/yyy/zzz/exp5_0/results_filtere...
- 02:46 PM Bug #45333: LARGE_OMAP_OBJECTS in pool metadata
- Kenneth Waegeman wrote:
> Hmm that's a very good question, is there a simple way to check this?
>
> At the very v...
- 02:38 PM Bug #45333: LARGE_OMAP_OBJECTS in pool metadata
- Hmm that's a very good question, is there a simple way to check this?
At the very very least Luminous, but quite s...
- 02:08 PM Bug #45333: LARGE_OMAP_OBJECTS in pool metadata
- Kenneth Waegeman wrote:
> yes, we are running 14.2.6
>
> [root@mds01 ~]# ceph -v
> ceph version 14.2.6 (f0aa067a...
- 02:06 PM Bug #45333: LARGE_OMAP_OBJECTS in pool metadata
- yes, we are running 14.2.6
[root@mds01 ~]# ceph -v
ceph version 14.2.6 (f0aa067ac7a02ee46ea48aa26c6e298b5ea272e9)...
- 01:42 PM Bug #45333 (Need More Info): LARGE_OMAP_OBJECTS in pool metadata
- Just to confirm, this is with a Nautilus cluster?
- 03:25 PM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Michael Robertson wrote:
> Hmm, they are requesting details on how to reproduce the problem. I don't think they woul...
- 01:16 PM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Hmm, they are requesting details on how to reproduce the problem. I don't think they would appreciate me directing th...
- 01:42 PM Bug #45344: doc: Table Of Contents doesn't work
- I think this is a project-wide thing that Zach (docs guy) already has a ticket for?
- 12:19 PM Bug #45373 (Fix Under Review): cephfs-shell: OSError type exceptions throw object has no attribut...
- 12:08 PM Bug #45373 (Resolved): cephfs-shell: OSError type exceptions throw object has no attribute 'get_e...
- ...
- 11:20 AM Feature #45237: pybind/mgr/volumes: add command to return metadata regarding a subvolume snapshot
- @Kotresh can you also add a tag called `ceph-csi` if possible to this issue/report?
- 11:20 AM Feature #45237: pybind/mgr/volumes: add command to return metadata regarding a subvolume snapshot
- Yeah, snapshot status as mentioned in previous comment, and may be additional details like below could also help:
...
- 11:16 AM Feature #45371 (Resolved): mgr/volumes: `protect` and `clone` operation in a single transaction
- At the moment if we want to make a clone of a snapshot, the caller ( example , ceph CSI) has to make 2 calls to achie...
- 09:10 AM Bug #45320 (Fix Under Review): client: Other UID don't write permission when the file is marked w...
04/30/2020
- 01:23 PM Bug #45349 (Fix Under Review): mds: send scrub status to ceph-mgr only when scrub is running (or ...
- 09:20 AM Bug #45349 (Resolved): mds: send scrub status to ceph-mgr only when scrub is running (or paused, ...
- Right now task status in ceph status shows idle scrub status for each active MDS:...
- 10:44 AM Feature #45267: ceph-fuse: Reduce memory copy in ceph-fuse during data IO
- Currently the splice write makes no sense here, because there is no file descriptor in in_buf and out_buf, and th...
- 03:32 AM Bug #45344 (Resolved): doc: Table Of Contents doesn't work
- 1. Click here: https://docs.ceph.com/docs/master/cephfs/
2. Click 'Getting Started with CephFS'. It works.
3. B...
- 12:04 AM Bug #45342 (In Progress): qa/tasks/vstart_runner.py: RuntimeError: Fuse mount failed to populate ...
- If the -l option is specified in :...
- 12:02 AM Bug #45342 (Resolved): qa/tasks/vstart_runner.py: RuntimeError: Fuse mount failed to populate /sy...
- ...
04/29/2020
- 06:53 PM Bug #45338: find leads to recursive output with nfs mount
- reducing to high until we upgrade lrc to nautilus and check if this disappears.
- 06:47 PM Bug #45338: find leads to recursive output with nfs mount
- Yeah, I was just looking at it. I guess that comes from RHCS3? 2.7 series is also EOL upstream, so the same advice st...
- 06:30 PM Bug #45338: find leads to recursive output with nfs mount
- The box running nfs-ganesha is running nfs-ganesha-2.7.4-10.el7cp.x86_64
- 06:30 PM Bug #45338: find leads to recursive output with nfs mount
- Just to update: nfs-ganesha on reesi004 is 2.7.4
- 06:17 PM Bug #45338: find leads to recursive output with nfs mount
- You'd need to update it on the box that's running nfs-ganesha. It will probably also entail updating libcephfs and li...
- 06:10 PM Bug #45338: find leads to recursive output with nfs mount
- Jeff,
this is on the client side, do we need updated nfs-ganesha here? or are you talking about the ceph server side?
- 05:11 PM Bug #45338: find leads to recursive output with nfs mount
- I will update and try, but I guess it might not be an issue.
- 05:06 PM Bug #45338: find leads to recursive output with nfs mount
- Oh my. That nfs-ganesha package is quite old, and I think there have been some readdir-related fixes that went in sin...
- 05:02 PM Bug #45338 (Closed): find leads to recursive output with nfs mount
- $ sudo find ./ -maxdepth 1 -type d -mtime +270
./dahorak-test-pr
./shmohan
./shmohan
./shmohan
./shmohan
./shm...
- 06:41 PM Bug #45339 (Fix Under Review): qa/cephfs: run nsenter commands with superuser privileges
- 06:19 PM Bug #45339 (Resolved): qa/cephfs: run nsenter commands with superuser privileges
- ...
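A minimal sketch of the fix the title describes, assuming a helper that builds the nsenter invocation used to enter a mount's network namespace (namespace and command are made up):
<pre>
def nsenter_args(netns, cmd):
    # nsenter needs root to join another namespace, so run it via sudo.
    return ["sudo", "nsenter", "--net=/var/run/netns/{}".format(netns)] + cmd

print(nsenter_args("ceph-ns-mnt.cephfs", ["ls", "/mnt/cephfs"]))
</pre>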
- 02:22 PM Bug #45333 (Need More Info): LARGE_OMAP_OBJECTS in pool metadata
- LARGE_OMAP_OBJECTS 1 large omap objects
1 large objects found in pool 'metadata'
Search the cluster log for...
- 01:29 PM Bug #45332 (Resolved): qa: TestExports is failure under new Python3 runtime
- In http://qa-proxy.ceph.com/teuthology/sidharthanup-2020-04-28_19:54:15-multimds-master-distro-default-smithi/4995153...
- 01:02 PM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Thanks Luis, I appreciate the help.
https://bugs.launchpad.net/ubuntu/+source/linux-meta-azure/+bug/1875884
I t...
- 09:54 AM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Michael Robertson wrote:
> Cool, thanks guys.
> I reviewed Ubuntu bug reporting guidelines, and created Launchpad a...
- 08:45 AM Bug #45320: client: Other UID don't write permission when the file is marked with SUID or SGID
- may_setattr 0x7f1902360b00 = -1
- 08:43 AM Bug #45320: client: Other UID don't write permission when the file is marked with SUID or SGID
- Client log:...
- 08:39 AM Bug #45320 (Fix Under Review): client: Other UID don't write permission when the file is marked w...
- The problem appears in the ceph-fuse mount; there is no problem with the kernel mount
Example:...
- 04:35 AM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- /a/teuthology-2020-04-26_02:30:03-rados-octopus-distro-basic-smithi/4984936
- 04:07 AM Bug #45257 (Duplicate): Removing filesystem results in task status scrub status old mdses in idle...
- duplicate of https://tracker.ceph.com/issues/44677
- 03:28 AM Bug #45257: Removing filesystem results in task status scrub status old mdses in idle state
- Greg Farnum wrote:
> Venky, you just fixed this, right?
Correct (https://tracker.ceph.com/issues/44677) -- backpo...
04/28/2020
- 11:58 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- /a/teuthology-2020-04-26_07:01:02-rados-master-distro-basic-smithi/4986046
- 07:25 PM Bug #45257: Removing filesystem results in task status scrub status old mdses in idle state
- Venky, you just fixed this, right?
- 05:45 PM Backport #45227 (In Progress): octopus: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_...
- 05:43 PM Backport #45222 (In Progress): octopus: cephfs-journal-tool: cannot set --dry_run arg
- 05:38 PM Backport #45220 (In Progress): octopus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rst...
- 05:32 PM Backport #45219 (In Progress): octopus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestCluste...
- 05:31 PM Backport #45216 (In Progress): octopus: qa: after the cephfs qa test case quit the mountpoints st...
- 05:30 PM Backport #45049 (In Progress): octopus: stale scrub status entry from a failed mds shows up in `c...
- 05:26 PM Backport #44844 (In Progress): octopus: qa:test_config_session_timeout failed with incorrect options
- 05:24 PM Backport #44843 (In Progress): octopus: LibCephFS::RecalledGetattr test failed
- 04:01 PM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Cool, thanks guys.
I reviewed Ubuntu bug reporting guidelines, and created Launchpad account.
The bug report requir...
- 01:19 PM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Thanks for taking a look, Luis. Assigning this to you for now. Feel free to close as you see fit.
- 10:40 AM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Michael Robertson wrote:
> Sounds good. Anything I can do to support the process?
I believe that the best thing t...
- 08:31 AM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- Sounds good. Anything I can do to support the process?
- 01:23 PM Feature #45267: ceph-fuse: Reduce memory copy in ceph-fuse during data IO
- Zheng Yan wrote:
> please check if we can use fuse's zero copy feature (splice read/write)
Yeah, sure. Will do that.
- 09:07 AM Feature #45267: ceph-fuse: Reduce memory copy in ceph-fuse during data IO
- please check if we can use fuse's zero copy feature (splice read/write)
- 01:22 PM Bug #45104: NFS deployed using orchestrator watch_url not working and mkdirs permission denied da...
- Thanks! The relevant patch should make V2.8.4 and V3.3.
- 10:05 AM Bug #45304 (In Progress): qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- 09:59 AM Bug #45304 (Resolved): qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- Following are the specific log entries showing that -...
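A minimal sketch of the kind of guard the qa code needs, assuming it enumerates FUSE connections from sysfs (hypothetical helper, not the actual fuse_mount.py code); the directory is absent until the fuse module has been loaded:
<pre>
import os

FUSE_CONN_DIR = "/sys/fs/fuse/connections"

def list_fuse_connections():
    if not os.path.isdir(FUSE_CONN_DIR):
        return []    # fuse module not loaded yet; nothing to enumerate
    return os.listdir(FUSE_CONN_DIR)

print(list_fuse_connections())
</pre>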
- 06:53 AM Backport #45221 (In Progress): nautilus: cephfs-journal-tool: cannot set --dry_run arg
- 06:07 AM Backport #45217 (In Progress): nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClust...
- 05:29 AM Bug #45300 (Fix Under Review): qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected ke...
- 05:24 AM Bug #45300 (Resolved): qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected keyword ar...
- 2020-04-28 01:23:05,521.521 INFO:__main__:test_subvolume_group_create_with_desired_mode (tasks.cephfs.test_volumes.Te...
- 05:15 AM Bug #45299 (Fix Under Review): qa/cephfs: AssertionError: b'755' != '755' when running test_subvo...
- 05:12 AM Bug #45299 (In Progress): qa/cephfs: AssertionError: b'755' != '755' when running test_subvolume_...
- 05:12 AM Bug #45299 (Resolved): qa/cephfs: AssertionError: b'755' != '755' when running test_subvolume_gro...
- ...
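A minimal, self-contained Python illustration (not the actual test code) of why assertions like this fail under Python 3: raw command output is bytes, so it has to be decoded before being compared with a str:

import subprocess
import unittest


class ModeComparisonExample(unittest.TestCase):
    def test_mode_comparison(self):
        # 'printf 755' stands in for a shell command that reports a mode string.
        raw = subprocess.check_output(["printf", "755"])   # returns b'755' under py3
        self.assertNotEqual(raw, "755")                    # bytes never equal str
        self.assertEqual(raw.decode("utf-8"), "755")       # decode first, then compare


if __name__ == "__main__":
    unittest.main()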
04/27/2020
- 11:06 PM Bug #45080: mds slow journal ops then cephfs damaged after restarting an osd host
- Hmm, so the MDS shutdown assert is a bug we've been dancing around, where we were trying to clean up Context callbac...
- 06:42 PM Feature #45289 (Fix Under Review): mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- 11:24 AM Feature #45289 (Resolved): mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- Add an option to the `fs subvolume create` command to allow subvolumes to be created in unique RADOS namespaces.
The c...
- 06:02 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Let me test it some more. I think the cache pressure messages were showing up without any load. I am going to see if ...
- 04:38 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- mitchell walls wrote:
> I would say that it would be related to load or open files but I saw the message on both the...
- 05:01 PM Backport #45230 (In Progress): octopus: ceph fs add_data_pool doesn't set pool metadata properly
- 05:01 PM Backport #45226 (In Progress): octopus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /te...
- 03:45 PM Backport #45225 (In Progress): nautilus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /t...
- 03:45 PM Backport #45229 (In Progress): nautilus: ceph fs add_data_pool doesn't set pool metadata properly
- 02:38 PM Bug #45283: Kernel log flood "ceph: Failed to find inode for 1"
- My memory of this code is getting somewhat blurry, but looking at the git log (in mainline) the following 4 patches see...
- 10:02 AM Bug #45283 (Closed): Kernel log flood "ceph: Failed to find inode for 1"
- Rook v1.2.7 (official chart) and ceph v14.2.9 in an AKS cluster with VMSS.
OS provided by AKS is currently Ubuntu 16...
- 01:38 PM Backport #45213 (Rejected): luminous: client: write stuck at waiting for larger max_size
- 10:36 AM Backport #45213 (Need More Info): luminous: client: write stuck at waiting for larger max_size
- luminous backport is non-trivial due to post-Luminous refactoring
@Zheng, given that Luminous is supposed to be EO...
- 10:54 AM Backport #45287 (In Progress): nautilus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIG...
- 10:53 AM Backport #45287 (Resolved): nautilus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABR...
- https://github.com/ceph/ceph/pull/34771
- 10:52 AM Backport #45210 (In Progress): nautilus: enable 'big_writes' fuse option if ceph-fuse is linked t...
- 10:46 AM Backport #45046 (In Progress): octopus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGA...
- 10:44 AM Backport #45211 (In Progress): octopus: enable 'big_writes' fuse option if ceph-fuse is linked to...
- 10:31 AM Backport #45285 (In Progress): mimic: client: write stuck at waiting for larger max_size
- 10:30 AM Backport #45285 (Rejected): mimic: client: write stuck at waiting for larger max_size
- https://github.com/ceph/ceph/pull/34768
- 10:30 AM Backport #45212 (In Progress): nautilus: client: write stuck at waiting for larger max_size
- 10:29 AM Backport #45214 (In Progress): octopus: client: write stuck at waiting for larger max_size
- 07:05 AM Bug #45261 (Fix Under Review): mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- 05:12 AM Feature #45237: pybind/mgr/volumes: add command to return metadata regarding a subvolume snapshot
- @Kotresh, is it possible to add snapshot status, i.e. whether it is ready for consumption?
04/26/2020
- 07:15 AM Feature #45267: ceph-fuse: Reduce memory copy in ceph-fuse during data IO
- I have talked with Yan, Zheng and I will work on this.
Thanks
- 07:14 AM Feature #45267 (In Progress): ceph-fuse: Reduce memory copy in ceph-fuse during data IO
- 07:13 AM Feature #45267 (Resolved): ceph-fuse: Reduce memory copy in ceph-fuse during data IO
- We can reduce memory copies in ceph-fuse's IO path. For example, fuse_ll_read() has two copies: libcephfs to a temp bu...
04/25/2020
- 10:59 AM Bug #45103 (Resolved): TestVolumeClient is failing under new py3 runtime
- 09:32 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I forgot to mention that the message has been going for a few hours continually....
- 09:15 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I would say that it would be related to load or open files but I saw the message on both the new and old ceph cluster...
04/24/2020
- 10:23 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Yeah, I think it's basically doing what it's supposed to do but there may be some delay between libcephfs getting the...
- 07:42 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- It seems the way it is doing it might've changed; it used to just sit there. On Monday, I plan to test the multi-mds p...
- 07:15 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Unfortunately it seems to be back. It was also sitting relatively dormant during this time again. Let me know if you ...
- 05:59 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Many thanks. Let me know how it goes!
- 05:50 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay, I have it set up. The userspace version for ganesha is showing
ceph version 16.0.0-977-gd8f2dbd (d8f2dbd338fb52...
- 04:03 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Sure, in a nutshell...
Check out the correct ganesha branch in git first. Clone my tree and cd into it, and then:
...
- 03:10 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Just double checking, I need to update ceph on the nfs server to the rpms from the ci, then build nfs-ganesha? Could ...
- 01:34 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I kicked off a ceph-ci build here:
https://shaman.ceph.com/builds/ceph/wip-jlayton-45114/d8f2dbd338fb5237bed8c34dd...
- 01:18 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Don't worry about the container, I can create it if needed. If not I could also create the packages if you would like...
- 01:11 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I can install the package within the container as well if needed. What are the needed ceph patches? I could attemp...
- 12:50 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Probably easy for some people, but it may take me some time. I'll see what I can do.
- 12:35 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- The Ganesha server is now running in a podman container version 15.2.1. So it should be easy to create an image with ...
- 12:13 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Ok, ceph patches are pretty much done. Just waiting on review so I can merge them. There are also some ganesha patche...
- 03:53 PM Bug #45261: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- ...
- 03:13 PM Bug #45261: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- I got in touch with the user who triggered this. It seems they (accidentally) had two identical jobs running on two d...
- 01:42 PM Bug #45261 (Resolved): mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- Hi,
We got two identical crashes a few minutes apart on two different active MDS's:...
- 12:09 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Ok, ceph patches are pretty much done. Just waiting on review so I can merge them. There are also some ganesha patche...
- 11:38 AM Bug #45257 (Duplicate): Removing filesystem results in task status scrub status old mdses in idle...
- This is more of an annoyance and I can't figure out how to clear it. I have deleted my filesystem a couple of times now wh...
- 11:00 AM Feature #44891 (Resolved): link ceph-fuse to libfuse3 if possible
- 08:09 AM Backport #45251 (In Progress): octopus: "ceph fs status" command outputs to stderr instead of std...
- https://github.com/ceph/ceph/pull/34727
- 07:51 AM Backport #45251 (Resolved): octopus: "ceph fs status" command outputs to stderr instead of stdout...
- https://github.com/ceph/ceph/pull/34727
- 07:42 AM Bug #44456 (Resolved): qa/vstart_runner.py: AttributeError: 'LocalRemote' object has no attribute...
04/23/2020
- 06:56 PM Feature #45237: pybind/mgr/volumes: add command to return metadata regarding a subvolume snapshot
- Hi Humble,
Please let us know if you require anything else.
- 06:48 PM Feature #45237 (Resolved): pybind/mgr/volumes: add command to return metadata regarding a subvolu...
- Along the lines of https://tracker.ceph.com/issues/44277, Ceph CSI requires metadata of subvolume snapshot as well.
...
- 01:41 PM Bug #36635 (Resolved): mds: purge queue corruption from wrong backport
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:37 PM Backport #45230 (Resolved): octopus: ceph fs add_data_pool doesn't set pool metadata properly
- https://github.com/ceph/ceph/pull/34775
- 01:37 PM Backport #45229 (Resolved): nautilus: ceph fs add_data_pool doesn't set pool metadata properly
- https://github.com/ceph/ceph/pull/34774
- 01:37 PM Backport #45227 (Resolved): octopus: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_wit...
- https://github.com/ceph/ceph/pull/34805
- 01:37 PM Backport #45226 (Resolved): octopus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test ...
- https://github.com/ceph/ceph/pull/34775
- 01:36 PM Backport #45225 (Resolved): nautilus: mon/MDSMonitor: "ceph fs authorize cephfs client.test /test...
- https://github.com/ceph/ceph/pull/34774
- 01:35 PM Backport #45222 (Resolved): octopus: cephfs-journal-tool: cannot set --dry_run arg
- https://github.com/ceph/ceph/pull/34804
- 01:35 PM Backport #45221 (Resolved): nautilus: cephfs-journal-tool: cannot set --dry_run arg
- https://github.com/ceph/ceph/pull/34784
- 01:35 PM Backport #45220 (Resolved): octopus: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat ...
- https://github.com/ceph/ceph/pull/34803
- 01:35 PM Backport #45219 (Resolved): octopus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- https://github.com/ceph/ceph/pull/34802
- 01:35 PM Backport #45218 (Rejected): luminous: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterF...
- 01:35 PM Backport #45217 (Resolved): nautilus: qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterF...
- https://github.com/ceph/ceph/pull/34783
- 01:34 PM Backport #45216 (Resolved): octopus: qa: after the cephfs qa test case quit the mountpoints still...
- https://github.com/ceph/ceph/pull/34801
- 01:34 PM Backport #45214 (Resolved): octopus: client: write stuck at waiting for larger max_size
- https://github.com/ceph/ceph/pull/34766
- 01:34 PM Backport #45213 (Rejected): luminous: client: write stuck at waiting for larger max_size
- 01:33 PM Backport #45212 (Resolved): nautilus: client: write stuck at waiting for larger max_size
- https://github.com/ceph/ceph/pull/34767
- 01:33 PM Backport #45211 (Resolved): octopus: enable 'big_writes' fuse option if ceph-fuse is linked to li...
- https://github.com/ceph/ceph/pull/34769
- 01:33 PM Backport #45210 (Resolved): nautilus: enable 'big_writes' fuse option if ceph-fuse is linked to l...
- https://github.com/ceph/ceph/pull/34771
04/22/2020
- 09:32 PM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Ok, I have a first stab at the ceph piece of this mostly done now. The ganesha piece still needs some work as it's no...
- 02:56 PM Bug #44962 (Pending Backport): "ceph fs status" command outputs to stderr instead of stdout when ...
- 01:56 PM Bug #44962: "ceph fs status" command outputs to stderr instead of stdout when json formatting is ...
- Greg Farnum wrote:
> Kotresh, this merged to master but I'm not sure how far we need to backport it, please check an...
- 05:18 AM Bug #44962 (In Progress): "ceph fs status" command outputs to stderr instead of stdout when json ...
- Kotresh, this merged to master but I'm not sure how far we need to backport it, please check and update the ticket.
- 01:38 PM Documentation #44310 (Resolved): doc: add blog post for recover_session in kclient
- Posted:
https://ceph.io/community/automatic-cephfs-recovery-after-blacklisting/
- 12:17 PM Bug #40001: mds cache oversize after restart
- Yunzhi,
What is the value of the config option 'mds_cache_memory_limit' on the system?
Are you referring to this o...
- 11:30 AM Backport #45180 (In Progress): octopus: pybind/mgr/volumes: add command to return metadata regard...
- 10:29 AM Backport #45180 (Resolved): octopus: pybind/mgr/volumes: add command to return metadata regarding...
- https://github.com/ceph/ceph/pull/34681
- 11:09 AM Backport #45181 (In Progress): nautilus: pybind/mgr/volumes: add command to return metadata regar...
- 10:34 AM Backport #45181 (Resolved): nautilus: pybind/mgr/volumes: add command to return metadata regardin...
- https://github.com/ceph/ceph/pull/34679
- 05:17 AM Bug #44645 (Resolved): cephfs-shell: Fix flake8 errors (E302, E502, E128, F821, W605, E128 and E122)
- 05:15 AM Bug #43061 (Pending Backport): ceph fs add_data_pool doesn't set pool metadata properly
- 05:15 AM Bug #43761 (Pending Backport): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" do...
- 05:14 AM Bug #44382 (Pending Backport): qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
- 05:13 AM Feature #44044 (Resolved): qa: add network namespaces to kernel/ceph-fuse mounts for partition te...
- I'm marking this Resolved for now, but it wouldn't surprise me if we develop some tests on it we want to take back to...
- 05:11 AM Bug #44657 (Resolved): cephfs-shell: Fix flake8 errors (F841, E302, E502, E128, E305 and E222)
- 05:08 AM Bug #44885 (Pending Backport): enable 'big_writes' fuse option if ceph-fuse is linked to libfuse ...
- 05:07 AM Bug #44801 (Pending Backport): client: write stuck at waiting for larger max_size
04/21/2020
- 06:39 PM Backport #45027: nautilus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34343
m...
- 06:38 PM Backport #44480: nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34343
m...
- 06:38 PM Backport #44328: nautilus: client: bad error handling in Client::_lseek
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34308
m...
- 06:38 PM Backport #44337: nautilus: mds: purge queue corruption from wrong backport
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34307
m...
- 11:58 AM Bug #44384: qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)
- Sorry for the long delay on this. I haven't heard of this happening since the original occurrence, but I decided to d...
- 09:38 AM Bug #45103: TestVolumeClient is failing under new py3 runtime
- Removed the py2 support; all the qa tests are dropping py2. Only py3 will be supported.
04/20/2020
- 01:56 PM Bug #45104: NFS deployed using orchestrator watch_url not working and mkdirs permission denied da...
- It is the docker image so NFS-Ganesha Release = V3.2.
- 01:46 PM Bug #45104: NFS deployed using orchestrator watch_url not working and mkdirs permission denied da...
- The watch_url problem is a known regression in ganesha. Fixed by:
https://github.com/nfs-ganesha/nfs-ganesha/commi... - 12:06 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- mitchell walls wrote:
> I am going to move to kernel/fuse mount cephfs this week.
That's certainly worth testin...
- 11:11 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I removed a million files (the application/homedirectory share) and now the cache pressure messages are much less oft...
- 05:56 AM Bug #45145 (Fix Under Review): qa/test_full: failed to open 'large_file_a': No space left on device
- 05:19 AM Bug #45145 (Duplicate): qa/test_full: failed to open 'large_file_a': No space left on device
- ...
- 03:55 AM Backport #44337 (Resolved): nautilus: mds: purge queue corruption from wrong backport
- 03:55 AM Backport #44328 (Resolved): nautilus: client: bad error handling in Client::_lseek
- 03:54 AM Backport #45027 (Resolved): nautilus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 03:54 AM Backport #44480 (Resolved): nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)