Activity
From 05/02/2021 to 05/31/2021
05/31/2021
- 05:05 PM Backport #50897: nautilus: mds: monclient: wait_auth_rotating timed out after 30
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41448
merged
- 05:05 PM Backport #50128: nautilus: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41394
merged
- 05:03 PM Backport #50625: nautilus: qa: "ls: cannot access 'lost+found': No such file or directory"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40769
merged
- 05:03 PM Backport #50290: nautilus: MDS stuck at stopping when reducing max_mds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40769
merged
- 05:03 PM Backport #49514: nautilus: client: allow looking up snapped inodes by inode number+snapid tuple
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40769
merged - 08:47 AM Bug #50954: mgr/pybind/snap_schedule: commands only support positional arguments?
- Can you use proper positional arguments here? ...
- 08:41 AM Backport #50872 (In Progress): pacific: qa: testing kernel patch for client metrics causes mds abort
- 08:08 AM Backport #47020 (In Progress): nautilus: client: shutdown race fails with status 141
- 06:27 AM Bug #50530 (In Progress): pacific: client: abort after MDS blocklist
- 02:16 AM Bug #51023 (Resolved): mds: tcmalloc::allocate_full_cpp_throw_oom(unsigned long)+0xf3)
From https://pulpito.ceph.com/yuriw-2021-05-27_19:31:33-kcephfs-wip-yuri3-testing-2021-05-27-0818-nautilus-distro-...
- 01:30 AM Backport #49519 (Resolved): nautilus: client: wake up the front pos waiter
- Thanks.
05/29/2021
- 03:17 PM Backport #50628: nautilus: client: access(path, X_OK) on non-executable file as root always succeeds
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41297
merged
- 03:16 PM Backport #49519: nautilus: client: wake up the front pos waiter
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40865
merged
- 03:16 PM Backport #50634: nautilus: mds: failure replaying journal (EMetaBlob)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/41144
merged
05/28/2021
- 04:40 AM Backport #50993 (In Progress): pacific: cephfs-mirror: incrementally transfer snapshots whenever ...
05/27/2021
- 01:18 PM Bug #50984 (Fix Under Review): qa: test_full multiple the mon_osd_full_ratio twice
- 03:20 AM Bug #50984 (Resolved): qa: test_full multiple the mon_osd_full_ratio twice
- The cluster has already multiplied by the full ratio before returning the "max_avail".
- 08:35 AM Backport #50994 (Resolved): pacific: cephfs-mirror: be resilient to recreated snapshot during syn...
- https://github.com/ceph/ceph/pull/41947
- 08:35 AM Backport #50993 (Resolved): pacific: cephfs-mirror: incrementally transfer snapshots whenever pos...
- https://github.com/ceph/ceph/pull/41475
- 08:33 AM Bug #50561 (Pending Backport): cephfs-mirror: incrementally transfer snapshots whenever possible
- 08:32 AM Bug #49939 (Pending Backport): cephfs-mirror: be resilient to recreated snapshot during synchroni...
- 07:25 AM Backport #50991 (Resolved): pacific: mgr/nfs: skipping conf file or passing empty file throws tra...
- https://github.com/ceph/ceph/pull/42096
- 07:24 AM Bug #50858 (Pending Backport): mgr/nfs: skipping conf file or passing empty file throws traceback
- 01:42 AM Bug #50976 (Fix Under Review): mds: scrub error on inode 0x1
05/26/2021
- 01:58 PM Bug #45997: nautilus: ceph_volume_client.py: UnicodeEncodeError exception while removing volume w...
- https://github.com/ceph/ceph/pull/36679 merged
- 09:28 AM Bug #50976: mds: scrub error on inode 0x1
- In this case, the backtrace check for inode 0x1 has failed.
Root Inode backtrace needs to be saved as soon as the in...
- 09:25 AM Bug #50976 (Resolved): mds: scrub error on inode 0x1
- ...
05/24/2021
- 09:47 PM Bug #48753 (Resolved): mds: spurious wakeups in cache upkeep
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:47 PM Bug #48877 (Resolved): qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:46 PM Bug #49309 (Resolved): nautilus: qa: "Assertion `cb_done' failed."
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:45 PM Bug #50048 (Resolved): mds: standby-replay only trims cache when it reaches the end of the replay...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:31 PM Backport #49472 (Resolved): octopus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40767
m...
- 09:31 PM Backport #50633 (Resolved): octopus: mds: failure replaying journal (EMetaBlob)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40743
m...
- 09:31 PM Backport #50256 (Resolved): octopus: mds: standby-replay only trims cache when it reaches the end...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40743
m...
- 09:31 PM Backport #48813 (Resolved): octopus: mds: spurious wakeups in cache upkeep
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40743
m...
- 09:31 PM Backport #49475 (Resolved): octopus: nautilus: qa: "Assertion `cb_done' failed."
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40708
m...
- 09:21 PM Bug #50258 (Resolved): pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- 09:18 PM Backport #50632 (Resolved): pacific: mds: failure replaying journal (EMetaBlob)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40855
m...
- 09:18 PM Backport #50254 (Resolved): pacific: mds: standby-replay only trims cache when it reaches the end...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40855
m...
- 09:18 PM Backport #50183 (Resolved): pacific: client: openned inodes counter is inconsistent
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40685
m...
- 01:08 PM Bug #50954 (Resolved): mgr/pybind/snap_schedule: commands only support positional arguments?
- It looks like the module does not support passing optional ceph arguments.
See:...
- 05:01 AM Bug #50946 (Duplicate): mgr/stats: exception ValueError in perf stats
- The 'ceph fs perf stats' command raises an exception when strings are mistakenly given in the rank list....
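Not the actual mgr/stats code; a minimal Python sketch of the kind of fix implied by the entry above, validating a user-supplied rank list up front so bad input yields a clean error message instead of an unhandled ValueError (the helper name `parse_rank_list` is hypothetical):

```python
def parse_rank_list(ranks_str):
    """Parse a comma-separated MDS rank list such as "0,1,2".

    Returns (ranks, error): on success error is None; on bad input
    ranks is None and error is a human-readable message.
    """
    ranks = []
    for tok in ranks_str.split(','):
        tok = tok.strip()
        # Reject anything that is not a non-negative integer rather
        # than letting int() raise deep inside the command handler.
        if not tok.isdigit():
            return None, "invalid rank %r: expected a non-negative integer" % tok
        ranks.append(int(tok))
    return ranks, None
```

With validation done at the edge, the command handler can return the error string to the user directly.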
05/22/2021
- 02:34 PM Bug #49845: qa: failed umount in test_volumes
- Also seen in the run http://pulpito.front.sepia.ceph.com/khiremat-2021-05-21_16:22:47-fs:volumes-wip-khiremat-41403-d...
- 10:44 AM Bug #50719: xattr returning from the dead (sic!)
- Jeff Layton wrote:
> Ok. RHEL7's kcephfs client is quite old. It's possible that this is something fixed in a more r...
05/21/2021
- 09:51 PM Backport #50872 (Need More Info): pacific: qa: testing kernel patch for client metrics causes mds...
- Xiubo please take this one too.
- 08:06 PM Bug #50870 (Closed): qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
- This issue was caused by a bug in the aforementioned PR. No need to work on this Xiubo.
- 08:26 AM Bug #50870: qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
- Patrick Donnelly wrote:
> Xiubo, please take a look at this one. It might be something to do with the caps but that'...
- 06:05 PM Backport #50284 (Rejected): nautilus: MDS slow request lookupino #0x100 on rank 1 block forever o...
- Will drop this as nautilus is EOL and how the original bug was induced is not known.
- 06:04 PM Backport #47020 (Need More Info): nautilus: client: shutdown race fails with status 141
- Xiubo, please do this backport.
- 04:18 PM Feature #48404 (Fix Under Review): client: add a ceph.caps vxattr
- 10:35 AM Backport #50876 (In Progress): pacific: cephfs-mirror: allow mirror daemon to connect to local/pr...
- 10:34 AM Backport #50917 (In Progress): pacific: Mirroring path "remove" don't not seem to work
- 10:34 AM Backport #50537 (In Progress): pacific: "ceph fs snapshot mirror daemon status" should not use js...
- 03:10 AM Backport #50537: pacific: "ceph fs snapshot mirror daemon status" should not use json keys as value
- Please take this one Venky.
- 10:33 AM Backport #50877 (In Progress): pacific: qa: test_mirroring_init_failure_with_recovery failure
- 10:32 AM Backport #50871 (In Progress): pacific: cephfs-mirror: use sensible mount/shutdown timeouts
- 10:29 AM Backport #50629 (In Progress): pacific: cephfs-mirror: ignore snapshots on parent directories whe...
- 06:59 AM Backport #50629: pacific: cephfs-mirror: ignore snapshots on parent directories when synchronizin...
- Patrick Donnelly wrote:
> Venky please take this one.
ack
- 03:05 AM Backport #50629: pacific: cephfs-mirror: ignore snapshots on parent directories when synchronizin...
- Venky please take this one.
- 08:23 AM Bug #50824 (In Progress): qa: snaptest-git-ceph bus error
- 08:21 AM Bug #50824: qa: snaptest-git-ceph bus error
- The distro is rhel 8.
- 08:18 AM Bug #50824: qa: snaptest-git-ceph bus error
- For this one, I think it should be a bug of `git` tool:...
- 08:22 AM Bug #50825: qa: snaptest-git-ceph hang during mon thrashing v2
- I am afraid this is also caused by the `git` tool's bug, but there is no remote/ directory for this test.
- 05:19 AM Bug #50825: qa: snaptest-git-ceph hang during mon thrashing v2
- ...
- 03:08 AM Backport #50541 (In Progress): pacific: libcephfs: support file descriptor based *at() APIs
- 03:03 AM Backport #50538 (In Progress): pacific: mgr/pybind/snap_schedule: do not fail when no fs snapshot...
- 02:53 AM Backport #50873 (In Progress): pacific: mon,doc: deprecate min_compat_client
05/20/2021
- 07:45 PM Backport #50917 (Resolved): pacific: Mirroring path "remove" don't not seem to work
- https://github.com/ceph/ceph/pull/41475
- 07:45 PM Backport #50914 (Resolved): octopus: MDS heartbeat timed out between during executing MDCache::st...
- https://github.com/ceph/ceph/pull/45157
- 07:45 PM Backport #50913 (Resolved): pacific: MDS heartbeat timed out between during executing MDCache::st...
- https://github.com/ceph/ceph/pull/42061
- 07:43 PM Bug #50834 (Pending Backport): MDS heartbeat timed out between during executing MDCache::start_fi...
- 07:43 PM Bug #50523 (Pending Backport): Mirroring path "remove" don't not seem to work
- 07:36 PM Bug #49845: qa: failed umount in test_volumes
- Back: /ceph/teuthology-archive/pdonnell-2021-05-20_14:09:54-fs-wip-pdonnell-testing-20210518.214114-distro-basic-smit...
- 04:49 PM Bug #47979 (Can't reproduce): qa: test_ephemeral_pin_distribution failure
Haven't seen this again.
- 04:43 PM Feature #48577 (In Progress): pybind/mgr/volumes: support snapshots on subvolumegroups
- 02:14 PM Bug #50825 (In Progress): qa: snaptest-git-ceph hang during mon thrashing v2
- 01:40 PM Bug #50867 (Fix Under Review): qa: fs:mirror: reduced data availability
- 01:26 PM Bug #50867 (In Progress): qa: fs:mirror: reduced data availability
- 12:35 PM Documentation #50904 (In Progress): mgr/nfs: add nfs-ganesha config hierarchy
- 12:29 PM Documentation #50904 (Resolved): mgr/nfs: add nfs-ganesha config hierarchy
- 07:44 AM Backport #50899 (In Progress): pacific: mds: monclient: wait_auth_rotating timed out after 30
- 06:25 AM Backport #50899 (Resolved): pacific: mds: monclient: wait_auth_rotating timed out after 30
- https://github.com/ceph/ceph/pull/41450
- 07:44 AM Backport #50898 (In Progress): octopus: mds: monclient: wait_auth_rotating timed out after 30
- 06:25 AM Backport #50898 (Resolved): octopus: mds: monclient: wait_auth_rotating timed out after 30
- https://github.com/ceph/ceph/pull/41449
- 07:44 AM Backport #50897 (In Progress): nautilus: mds: monclient: wait_auth_rotating timed out after 30
- 06:25 AM Backport #50897 (Resolved): nautilus: mds: monclient: wait_auth_rotating timed out after 30
- https://github.com/ceph/ceph/pull/41448
- 06:24 AM Bug #50390 (Pending Backport): mds: monclient: wait_auth_rotating timed out after 30
- 01:27 AM Bug #50840 (Fix Under Review): mds: CephFS kclient gets stuck when getattr() on a certain file
- 01:13 AM Bug #50840: mds: CephFS kclient gets stuck when getattr() on a certain file
- From the logs, we can see that the inode 0x100000003ed was trying to recover the size at least 2 minutes ago, the log...
05/19/2021
- 08:14 PM Bug #50852 (Fix Under Review): mds: remove fs_name stored in MDSRank
- 07:54 PM Bug #50622 (Fix Under Review): msg: active_connections regression
- 07:27 PM Backport #50632: pacific: mds: failure replaying journal (EMetaBlob)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40855
merged
- 07:27 PM Backport #50254: pacific: mds: standby-replay only trims cache when it reaches the end of the rep...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40855
merged
- 07:27 PM Backport #50183: pacific: client: openned inodes counter is inconsistent
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40685
merged
- 01:44 PM Bug #50532 (Fix Under Review): mgr/volumes: hang when removing subvolume when pools are full
- 01:43 PM Bug #49308 (Duplicate): nautilus: qa: "AssertionError: expected removing source snapshot of a clo...
- Duplicate of https://tracker.ceph.com/issues/48231
- 01:42 PM Bug #49469 (Duplicate): qa: "AssertionError: expected removing source snapshot of a clone to fail"
- Duplicate of https://tracker.ceph.com/issues/48231
- 01:41 PM Bug #48231 (Fix Under Review): qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- 01:41 PM Bug #48231 (In Progress): qa: test_subvolume_clone_in_progress_snapshot_rm is racy
- 03:34 AM Backport #50128 (In Progress): nautilus: pybind/mgr/volumes: deadlock on async job hangs finisher...
- 12:47 AM Documentation #50865 (Resolved): doc: move mds state diagram .dot into rst
05/18/2021
- 09:31 PM Bug #50870 (Need More Info): qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
- Nevermind Xiubo.
- 08:34 PM Bug #50870: qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
- Probably caused by: https://github.com/ceph/ceph/pull/39910#pullrequestreview-662546315
- 08:16 PM Bug #50870 (Triaged): qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
- Xiubo, please take a look at this one. It might be something to do with the caps but that'd be weird.
- 08:16 PM Bug #50870 (Closed): qa: test_full: "rm: cannot remove 'large_file_a': Permission denied"
- ...
- 08:45 PM Backport #50877 (Resolved): pacific: qa: test_mirroring_init_failure_with_recovery failure
- https://github.com/ceph/ceph/pull/41475
- 08:45 PM Backport #50876 (Resolved): pacific: cephfs-mirror: allow mirror daemon to connect to local/prima...
- https://github.com/ceph/ceph/pull/41475
- 08:45 PM Backport #50875 (Resolved): pacific: mds: MDSLog::journaler pointer maybe crash with use-after-free
- https://github.com/ceph/ceph/pull/42060
- 08:45 PM Backport #50874 (Resolved): octopus: mds: MDSLog::journaler pointer maybe crash with use-after-free
- https://github.com/ceph/ceph/pull/41626
- 08:45 PM Backport #50873 (Resolved): pacific: mon,doc: deprecate min_compat_client
- https://github.com/ceph/ceph/pull/41468
- 08:45 PM Backport #50872 (Resolved): pacific: qa: testing kernel patch for client metrics causes mds abort
- https://github.com/ceph/ceph/pull/41596
- 08:43 PM Bug #50822 (Pending Backport): qa: testing kernel patch for client metrics causes mds abort
- 08:43 PM Bug #50819 (Pending Backport): mon,doc: deprecate min_compat_client
- 08:42 PM Bug #50807 (Pending Backport): mds: MDSLog::journaler pointer maybe crash with use-after-free
- 08:41 PM Bug #50224 (Pending Backport): qa: test_mirroring_init_failure_with_recovery failure
- 08:41 PM Feature #50581 (Pending Backport): cephfs-mirror: allow mirror daemon to connect to local/primary...
- 08:40 PM Backport #50871 (Resolved): pacific: cephfs-mirror: use sensible mount/shutdown timeouts
- https://github.com/ceph/ceph/pull/41475
- 08:39 PM Bug #50035 (Pending Backport): cephfs-mirror: use sensible mount/shutdown timeouts
- 08:20 PM Bug #42516 (Resolved): mds: some mutations have initiated (TrackedOp) set to 0
- 07:59 PM Bug #50868 (New): qa: "kern.log.gz already exists; not overwritten"
- ...
- 07:50 PM Bug #50867 (Resolved): qa: fs:mirror: reduced data availability
- ...
- 06:14 PM Documentation #50865 (Fix Under Review): doc: move mds state diagram .dot into rst
- 04:00 PM Documentation #50865 (Resolved): doc: move mds state diagram .dot into rst
- Apparently you can embed the .dot diagram, like in:
https://github.com/ceph/ceph/pull/41382/files
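For reference, a minimal sketch of what such an embedding looks like in reStructuredText, assuming the Sphinx graphviz extension is enabled; the state names below are illustrative, not the real MDS diagram:

```rst
.. graphviz::

   digraph mds_states {
       "up:standby" -> "up:replay";
       "up:replay" -> "up:active";
   }
```

Keeping the .dot source inline in the .rst means the diagram is rendered at doc build time instead of shipping a stale image.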
- 04:49 PM Backport #50488 (In Progress): pacific: mgr/nfs: move nfs code out of volumes plugin
- 04:48 PM Backport #50843 (In Progress): pacific: mgr/nfs: cli is broken as cluster id and binding argument...
- https://github.com/ceph/ceph/pull/41389
- 04:48 PM Backport #50597 (In Progress): pacific: mgr/nfs: Add troubleshooting section
- https://github.com/ceph/ceph/pull/41389
- 04:42 PM Bug #50858 (Fix Under Review): mgr/nfs: skipping conf file or passing empty file throws traceback
- 09:31 AM Bug #50858 (Resolved): mgr/nfs: skipping conf file or passing empty file throws traceback
- It should print a helpful error message instead of throwing a traceback...
- 01:00 PM Bug #50811: pacific: qa: paramiko.buffered_pipe.PipeTimeout
- ...
- 06:35 AM Bug #50854 (New): qa: ERROR: test_lifecycle (tasks.cephfs.test_volume_client.TestVolumeClient)
- The test failed in a pacific teuthology run as below.
2021-05-07T12:18:32.264 INFO:tasks.cephfs_test_runner:========...
- 04:20 AM Bug #50852 (Resolved): mds: remove fs_name stored in MDSRank
- MDSRank doesn't need to store the fs_name fetched from the MMDSMap message's map_fs_name. fs_name can be obtained by ...
- 02:53 AM Cleanup #50149 (Resolved): client: always register callbacks before mount()
- 02:50 AM Bug #48365 (Resolved): qa: ffsb build failure on CentOS 8.2
- 02:45 AM Backport #50849 (Rejected): octopus: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
- 02:45 AM Backport #50848 (Resolved): pacific: mds: "cluster [ERR] Error recovering journal 0x203: (2) No...
- https://github.com/ceph/ceph/pull/42059
- 02:42 AM Bug #50389 (Pending Backport): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such ...
- 02:40 AM Backport #50847 (Resolved): octopus: mds: journal recovery thread is possibly asserting with mds_...
- https://github.com/ceph/ceph/pull/45156
- 02:40 AM Backport #50846 (Resolved): pacific: mds: journal recovery thread is possibly asserting with mds_...
- https://github.com/ceph/ceph/pull/42058
- 02:39 AM Bug #50744 (Pending Backport): mds: journal recovery thread is possibly asserting with mds_lock n...
- 01:52 AM Feature #1276: client: expose mds partition via virtual xattrs
- Jeff Layton wrote:
> This ticket is quite old and it's not very clear what it's asking for. Sage or Patrick, can you...
- 01:51 AM Bug #50826: kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
- Jeff Layton wrote:
> The bad patch involved in #50281 was never merged into RHEL, so I doubt this is related.
>
>...
05/17/2021
- 11:50 PM Bug #42516 (Fix Under Review): mds: some mutations have initiated (TrackedOp) set to 0
- 09:13 PM Bug #47276 (Fix Under Review): MDSMonitor: add command to rename file systems
- 04:14 PM Bug #50834 (Fix Under Review): MDS heartbeat timed out between during executing MDCache::start_fi...
- 05:10 AM Bug #50834 (Resolved): MDS heartbeat timed out between during executing MDCache::start_files_to_r...
- This issue happens with v14.2.19 (also v14.2.16). We have also discussed it in the mailing list https://lists.ceph.io...
- 03:40 PM Backport #50843 (Resolved): pacific: mgr/nfs: cli is broken as cluster id and binding arguments a...
- https://github.com/ceph/ceph/pull/41389
- 03:39 PM Bug #50783 (Pending Backport): mgr/nfs: cli is broken as cluster id and binding arguments are opt...
- 03:36 PM Bug #50823: qa: RuntimeError: timeout waiting for cluster to stabilize
- The MDSThrasher timed out for some reason, setting the thrasher exception, which caused the daemonwatchdog to bark.
- 01:53 PM Bug #50840 (Resolved): mds: CephFS kclient gets stuck when getattr() on a certain file
Copied from the mailing list:...
- 01:48 PM Bug #50696 (Won't Fix): nautilus: qa: multimds/thrash tasks/cfuse_workunit_suites_fsstress failure
- 12:50 PM Feature #1276: client: expose mds partition via virtual xattrs
- This ticket is quite old and it's not very clear what it's asking for. Sage or Patrick, can you elaborate?
- 11:36 AM Bug #50826: kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
- The bad patch involved in #50281 was never merged into RHEL, so I doubt this is related.
The hung task warning in ... - 06:08 AM Bug #48812 (In Progress): qa: test_scrub_pause_and_resume_with_abort failure
- 06:08 AM Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
- Patrick Donnelly wrote:
> /ceph/teuthology-archive/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/611574...
- 03:33 AM Bug #50822 (Fix Under Review): qa: testing kernel patch for client metrics causes mds abort
- Since we already tolerate unknown metric types in the MDS, we should fix this in the MDS code and not assert when receiving u...
05/15/2021
- 04:15 PM Backport #49472: octopus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40767
merged
- 04:07 PM Backport #50633: octopus: mds: failure replaying journal (EMetaBlob)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40743
merged
- 04:07 PM Backport #50256: octopus: mds: standby-replay only trims cache when it reaches the end of the rep...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40743
merged
- 04:07 PM Backport #48813: octopus: mds: spurious wakeups in cache upkeep
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40743
merged
- 04:06 PM Backport #49475: octopus: nautilus: qa: "Assertion `cb_done' failed."
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40708
merged
- 03:44 AM Bug #50826: kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
- Might be related to #50281 but that was with the testing kernel.
- 03:44 AM Bug #50826 (Closed): kceph: stock RHEL kernel hangs on snaptests with mon|osd thrashers
- /ceph/teuthology-archive/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/6115757/teuthology.log
and
... - 03:30 AM Bug #50825 (Need More Info): qa: snaptest-git-ceph hang during mon thrashing v2
- ...
- 03:22 AM Bug #50824 (Won't Fix): qa: snaptest-git-ceph bus error
- ...
- 03:19 AM Bug #50823 (New): qa: RuntimeError: timeout waiting for cluster to stabilize
- ...
- 03:13 AM Bug #50822 (Resolved): qa: testing kernel patch for client metrics causes mds abort
- ...
- 03:11 AM Bug #50821: qa: untar_snap_rm failure during mds thrashing
- I don't think this is related to #50281, but it may be.
- 03:11 AM Bug #50821 (New): qa: untar_snap_rm failure during mds thrashing
- ...
- 03:03 AM Bug #48812 (New): qa: test_scrub_pause_and_resume_with_abort failure
- /ceph/teuthology-archive/pdonnell-2021-05-14_21:45:42-fs-master-distro-basic-smithi/6115747/teuthology.log
and
...
05/14/2021
- 07:23 PM Bug #50819 (Fix Under Review): mon,doc: deprecate min_compat_client
- 07:21 PM Bug #50819 (Resolved): mon,doc: deprecate min_compat_client
- We effectively did this already in Pacific but didn't update the docs or add a warning to the min_compat_client fs se...
- 04:44 PM Bug #41034 (Resolved): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:43 PM Bug #45100 (Resolved): qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:42 PM Bug #45835 (Resolved): mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of-memory
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:42 PM Documentation #48017 (Resolved): snap-schedule doc
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:42 PM Bug #48403 (Resolved): mds: fix recall defaults based on feedback from production clusters
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:41 PM Bug #48679 (Resolved): client: items pinned in cache preventing unmount
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:41 PM Bug #48765 (Resolved): have mount helper pick appropriate mon sockets for ms_mode value
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:41 PM Documentation #48914 (Resolved): mgr/nfs: Update about user config
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:41 PM Bug #49318 (Resolved): qa: racy session evicted check
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:40 PM Bug #49459 (Resolved): pybind/cephfs: DT_REG and DT_LNK values are wrong
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:40 PM Bug #49510 (Resolved): qa: file system deletion not complete because starter fs already destroyed
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:40 PM Bug #49559 (Resolved): libcephfs: test termination "what(): Too many open files"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:40 PM Bug #49617 (Resolved): mds: race of fetching large dirfrag
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:40 PM Bug #49882 (Resolved): mgr/volumes: setuid and setgid file bits are not retained after a subvolum...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:39 PM Documentation #49921 (Resolved): mgr/nfs: Update about cephadm single nfs-ganesha daemon per host...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:39 PM Bug #50090 (Resolved): client: only check pool permissions for regular files
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:39 PM Bug #50215 (Resolved): qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:40 PM Backport #50286 (Resolved): octopus: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40783
m...
- 03:39 PM Backport #50181 (Resolved): octopus: client: only check pool permissions for regular files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40779
m...
- 03:39 PM Backport #50027 (Resolved): octopus: client: items pinned in cache preventing unmount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40778
m...
- 03:39 PM Backport #49950 (Resolved): octopus: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40777
m...
- 03:39 PM Backport #49934 (Resolved): octopus: libcephfs: test termination "what(): Too many open files"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40776
m...
- 03:39 PM Backport #49752 (Resolved): octopus: snap-schedule doc
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40775
m...
- 03:39 PM Cleanup #50816 (Fix Under Review): mgr/nfs: add nfs to mypy
- Annotate all the functions in this source file and add a section in src/mypy.ini to ensure that this file is annotat...
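As an illustration only (the module path below is hypothetical, not the actual src/mypy.ini entry), a per-module section that enforces annotations typically looks like:

```ini
; Enforce that every function in this module is type-annotated.
[mypy-nfs.*]
disallow_untyped_defs = True
```

With such a section in place, mypy fails the check if any function in the matched modules is left unannotated.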
- 03:39 PM Backport #49851 (Resolved): octopus: mds: race of fetching large dirfrag
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40774
m...
- 03:39 PM Backport #49611 (Resolved): octopus: qa: racy session evicted check
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40773
m...
- 03:38 PM Backport #49560 (Resolved): octopus: qa: file system deletion not complete because starter fs alr...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40772
m...
- 03:38 PM Backport #49518 (Resolved): octopus: client: wake up the front pos waiter
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40771
m...
- 03:38 PM Backport #49515 (Resolved): octopus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40770
m...
- 03:38 PM Backport #49347 (Resolved): octopus: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_dam...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40765
m...
- 03:38 PM Backport #48878 (Resolved): octopus: mds: fix recall defaults based on feedback from production c...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40764
m...
- 03:38 PM Backport #48836 (Resolved): octopus: have mount helper pick appropriate mon sockets for ms_mode v...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40763
m...
- 03:38 PM Backport #45853 (Resolved): octopus: cephfs-journal-tool: NetHandler create_socket couldn't creat...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40762
m...
- 03:37 PM Backport #49904 (Resolved): octopus: mgr/volumes: setuid and setgid file bits are not retained af...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40268
m...
- 01:07 PM Bug #50801 (Duplicate): cephfs-top should show average instead of cumulative latency
- Duplicate of #48619
- 11:05 AM Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- recurrence seen in Pacific QA:
# https://pulpito.ceph.com/yuriw-2021-05-06_19:28:46-fs-wip-yuri8-testing-2021-05-06-...
- 10:58 AM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- recurrence seen in Pacific QA:
# https://pulpito.ceph.com/yuriw-2021-05-06_19:28:46-fs-wip-yuri8-testing-2021-05-06-...
- 09:19 AM Feature #47277 (Fix Under Review): implement new mount "device" syntax for kcephfs
- 08:23 AM Bug #50808 (Fix Under Review): qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Item...
- 05:10 AM Bug #50808 (Resolved): qa: test_data_scan.TestDataScan.test_pg_files AssertionError: Items in the...
- Run the qa test locally:...
- 07:24 AM Bug #50811 (New): pacific: qa: paramiko.buffered_pipe.PipeTimeout
- "Teuthology run":https://pulpito.ceph.com/yuriw-2021-05-06_19:28:46-fs-wip-yuri8-testing-2021-05-06-0832-pacific-dist...
- 07:11 AM Bug #50279: qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- A fresh "run for the Pacific branch":https://pulpito.ceph.com/yuriw-2021-05-06_19:28:46-fs-wip-yuri8-testing-2021-05-...
- 02:47 AM Bug #50807 (Fix Under Review): mds: MDSLog::journaler pointer maybe crash with use-after-free
- 02:24 AM Bug #50807 (Resolved): mds: MDSLog::journaler pointer maybe crash with use-after-free
- When the _recovery_thread is reformatting the journal, it deletes the old journal pointer and assigns a...
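The race described here follows a familiar shape: one thread deletes a shared journaler object and installs a replacement while other threads may still reach the old one. A minimal sketch of the lock-protected swap, using hypothetical Python stand-ins (`MDLog`, `Journaler` are simplifications for illustration, not the actual C++ classes):

```python
import threading

class Journaler:
    """Hypothetical stand-in for the C++ journaler object."""
    def __init__(self, tag):
        self.tag = tag
        self.valid = True

    def close(self):
        self.valid = False  # stands in for deleting the C++ object

class MDLog:
    """Guards its journaler reference with a lock so a reformat can
    swap in a new journaler without readers reaching a freed object."""
    def __init__(self):
        self._lock = threading.Lock()
        self._journaler = Journaler("old")

    def reformat_journal(self):
        # Build the replacement first, swap under the lock, and only
        # tear down the old object once MDLog no longer points at it.
        new = Journaler("new")
        with self._lock:
            old, self._journaler = self._journaler, new
        old.close()

    def current_journal(self):
        with self._lock:
            return self._journaler

log = MDLog()
log.reformat_journal()
print(log.current_journal().tag)  # new
```

Readers that take the lock observe either the old or the new journaler, never a half-torn-down one.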
05/13/2021
- 01:48 PM Bug #50801 (Duplicate): cephfs-top should show average instead of cumulative latency
- I was playing with cephfs-top today and noticed that the read/write latency fields are in seconds, but the numbers we...
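For what it's worth, a per-interval average can be derived from cumulative counters by sampling twice and dividing the deltas; a rough sketch (field names are hypothetical, not the actual cephfs-top metric names):

```python
def avg_latency(prev, curr):
    """Per-op average latency (seconds) between two cumulative samples.

    Each sample carries a cumulative 'latency' total and an 'ops' count.
    """
    dops = curr["ops"] - prev["ops"]
    if dops <= 0:
        return 0.0  # no new ops in the interval
    return (curr["latency"] - prev["latency"]) / dops

prev = {"latency": 10.0, "ops": 100}
curr = {"latency": 16.0, "ops": 130}
print(avg_latency(prev, curr))  # 0.2 seconds per op over the interval
```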
05/12/2021
- 05:46 PM Bug #50783 (Fix Under Review): mgr/nfs: cli is broken as cluster id and binding arguments are opt...
- 04:54 PM Bug #50783 (Resolved): mgr/nfs: cli is broken as cluster id and binding arguments are optional
- In the following commands the clusterid requirement is made optional, which breaks the cli....
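The general point can be illustrated with argparse (names here are hypothetical, not the actual mgr/nfs code): a cluster id declared as a bare positional is mandatory, which is what the fix restores.

```python
import argparse

# A bare positional with no default is mandatory: parsing fails fast
# when the id is omitted, instead of accepting a misconfigured command.
parser = argparse.ArgumentParser(prog="nfs-cluster-create")
parser.add_argument("clusterid", help="identifier for the new NFS cluster")

print(parser.parse_args(["mycluster"]).clusterid)  # mycluster

try:
    parser.parse_args([])  # missing clusterid
except SystemExit:
    print("clusterid is required")
```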
- 03:18 PM Backport #50286: octopus: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40783
merged - 03:18 PM Backport #50181: octopus: client: only check pool permissions for regular files
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40779
merged - 03:18 PM Backport #50027: octopus: client: items pinned in cache preventing unmount
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40778
merged - 03:17 PM Backport #49950: octopus: mgr/nfs: Update about cephadm single nfs-ganesha daemon per host limita...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40777
merged - 03:17 PM Backport #49934: octopus: libcephfs: test termination "what(): Too many open files"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40776
merged - 03:16 PM Backport #49752: octopus: snap-schedule doc
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40775
merged - 03:16 PM Backport #49851: octopus: mds: race of fetching large dirfrag
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40774
merged - 03:15 PM Backport #49611: octopus: qa: racy session evicted check
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40773
merged - 03:15 PM Backport #49560: octopus: qa: file system deletion not complete because starter fs already destroyed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40772
merged - 03:14 PM Backport #49518: octopus: client: wake up the front pos waiter
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40771
merged - 03:13 PM Backport #49515: octopus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40770
merged - 03:13 PM Backport #49347: octopus: qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDam...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40765
merged - 03:12 PM Backport #48878: octopus: mds: fix recall defaults based on feedback from production clusters
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40764
merged - 03:12 PM Backport #48836: octopus: have mount helper pick appropriate mon sockets for ms_mode value
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40763
merged - 03:11 PM Backport #45853: octopus: cephfs-journal-tool: NetHandler create_socket couldn't create socket
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/40762
merged - 03:11 PM Backport #49904: octopus: mgr/volumes: setuid and setgid file bits are not retained after a subvo...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40268
merged - 02:26 PM Bug #50390: mds: monclient: wait_auth_rotating timed out after 30
- error message:
2021-05-04T05:51:54.719+0800 7f105b2737c0 -1 mds.c unable to obtain rotating service keys; retrying...
- 08:00 AM Bug #50390: mds: monclient: wait_auth_rotating timed out after 30
- It seems that when paxos is not active, the mon leader does not push to the other mons:
2021-05-12T15:07:43.654+0800 7f5f761f9700 ...
- 02:47 AM Bug #50390: mds: monclient: wait_auth_rotating timed out after 30
- Did pull request 40880 fix this? I only see the parameter "rotating keys bootstrap timeout: 15" added. It doesn't s...
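For context, the timeout in this message is a bounded wait that retries a few times before the daemon gives up. A simplified analogue of such a wait loop (a sketch, not the actual monclient implementation):

```python
import threading

def wait_for_keys(keys_ready, timeout=30.0, retries=3):
    """Wait up to `timeout` seconds per attempt for rotating keys,
    retrying before giving up (matching the log's 'retrying...' lines)."""
    for attempt in range(1, retries + 1):
        if keys_ready.wait(timeout):
            return attempt
        print("unable to obtain rotating service keys; retrying...")
    raise TimeoutError("wait_auth_rotating timed out")

keys_ready = threading.Event()
# Simulate the monitor delivering keys shortly after the wait starts.
threading.Timer(0.05, keys_ready.set).start()
print(wait_for_keys(keys_ready, timeout=1.0))  # 1 (succeeds on attempt 1)
```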
- 10:50 AM Bug #50719: xattr returning from the dead (sic!)
- Ralph Böhme wrote:
> > What kernel version are you running this on?
>
> # uname -r
> 3.10.0-1062.18.1.el7.x86_...
- 07:22 AM Bug #50719: xattr returning from the dead (sic!)
- Hi Jeff,
thanks for looking into this!
Jeff Layton wrote:
> What kernel version are you running this on?
# ...
- 09:23 AM Backport #50628 (In Progress): nautilus: client: access(path, X_OK) on non-executable file as roo...
- 09:14 AM Backport #50626 (In Progress): octopus: client: access(path, X_OK) on non-executable file as root...
- 09:12 AM Backport #50627 (In Progress): pacific: client: access(path, X_OK) on non-executable file as root...
- 09:06 AM Backport #50625 (In Progress): nautilus: qa: "ls: cannot access 'lost+found': No such file or dir...
- 08:59 AM Backport #50623 (In Progress): octopus: qa: "ls: cannot access 'lost+found': No such file or dire...
- 04:38 AM Backport #50186 (In Progress): pacific: qa: daemonwatchdog fails if mounts not defined
- 02:38 AM Bug #42516: mds: some mutations have initiated (TrackedOp) set to 0
- Ramana Raja wrote:
> I checked Migrator.cc for creation of MutationImpl object and setting of its TrackedOp initiate...
05/11/2021
- 07:56 PM Backport #47609 (Rejected): nautilus: mds: OpenFileTable::prefetch_inodes during rejoin can cause...
- 07:51 PM Backport #49413 (Resolved): octopus: mgr/nfs: Update about user config
- 05:50 PM Bug #50755 (Duplicate): mds restart but unable to obtain rotating service keys
- 08:31 AM Bug #50755 (Duplicate): mds restart but unable to obtain rotating service keys
- version-15.2.0
error message:
2021-05-04T05:51:54.719+0800 7f105b2737c0 -1 mds.c unable to obtain rotating ser...
- 09:00 AM Bug #45349 (Resolved): mds: send scrub status to ceph-mgr only when scrub is running (or paused, ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:55 AM Backport #49471 (Resolved): nautilus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40713
m...
- 08:53 AM Backport #46480 (Resolved): nautilus: mds: send scrub status to ceph-mgr only when scrub is runni...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36183
m...
- 08:14 AM Bug #50390: mds: monclient: wait_auth_rotating timed out after 30
- I had the same problem; the mds restarts repeatedly.
- 08:06 AM Bug #50390: mds: monclient: wait_auth_rotating timed out after 30
- I have the same issue, on version ceph-v15.2.0:
2021-05-04T05:49:24.717+0800 7f105b2737c0 0 monclient: wait_auth_...
- 05:25 AM Bug #50744 (Fix Under Review): mds: journal recovery thread is possibly asserting with mds_lock n...
- 03:40 AM Bug #50744 (Resolved): mds: journal recovery thread is possibly asserting with mds_lock not locked
- MDLog::_recovery_thread runs without holding the mds_lock, but it calls mds->damaged(), which will ...
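The bug pattern, a helper that asserts the caller holds mds_lock but is invoked from a thread that never acquired it, can be sketched with Python threading (hypothetical stand-ins for MDSRank::damaged() and the recovery thread; not Ceph code):

```python
import threading

mds_lock = threading.Lock()

def damaged():
    # Analogue of the MDS-side ceph_assert that mds_lock is held
    # (the real assertion checks the lock is held by *this* thread).
    assert mds_lock.locked(), "damaged() requires mds_lock"
    return "rank marked damaged"

def recovery_thread_step(journal_ok):
    # The fix amounts to acquiring mds_lock before calling into code
    # that asserts it; calling damaged() bare here trips the assertion.
    if not journal_ok:
        with mds_lock:
            return damaged()
    return "journal recovered"

print(recovery_thread_step(journal_ok=False))  # rank marked damaged
print(recovery_thread_step(journal_ok=True))   # journal recovered
```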
05/10/2021
- 05:02 PM Bug #50719: xattr returning from the dead (sic!)
- What kernel version are you running this on? Is this something easily reproducible, or does it take a while?
There...
- 01:40 PM Bug #50719 (Triaged): xattr returning from the dead (sic!)
- 05:31 AM Bug #50719 (Need More Info): xattr returning from the dead (sic!)
- Hi Ceph folks,
slow from the Samba team here. :)
I'm investigating a problem at a customer site where xattr dat...
- 04:46 PM Support #49116: written io continuous high occupancy
- Suggest turning up debugging to see what the MDS is doing.
- 02:45 PM Backport #49471: nautilus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40713
merged
- 02:30 PM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
- More detail:...
- 09:35 AM Bug #50389 (Fix Under Review): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such ...
- There is one rare case: when the mds daemon receives a new mdsmap
and during decoding it, the metadata_pool will be ...
- 01:48 PM Bug #50622 (Triaged): msg: active_connections regression
- 01:45 PM Bug #50695 (Need More Info): nautilus: qa: Test failure: test_kill_mdstable (tasks.cephfs.test_sn...
- 01:43 PM Bug #50696: nautilus: qa: multimds/thrash tasks/cfuse_workunit_suites_fsstress failure
- This was probably fixed recently for Octopus/Pacific. This one doesn't look to be worth investigating further as Naut...
05/09/2021
05/08/2021
- 07:53 PM Backport #46480: nautilus: mds: send scrub status to ceph-mgr only when scrub is running (or paus...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36183
merged
- 02:01 PM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
- Checked all the possible logs in osd/mon/mds and the related code, and have compared the normal logs, the sequence ar...
05/07/2021
- 10:09 PM Bug #50696 (Won't Fix): nautilus: qa: multimds/thrash tasks/cfuse_workunit_suites_fsstress failure
- See, https://pulpito.ceph.com/yuriw-2021-05-04_15:32:03-multimds-wip-yuri3-testing-2021-04-29-1036-nautilus-distro-ba...
- 09:27 PM Bug #50695 (Need More Info): nautilus: qa: Test failure: test_kill_mdstable (tasks.cephfs.test_sn...
- See this here,
https://pulpito.ceph.com/yuriw-2021-05-04_15:32:03-multimds-wip-yuri3-testing-2021-04-29-1036-nautilu...
- 07:36 PM Bug #50546: nautilus: qa: 'The following counters failed to be set on mds daemons: {''mds.importe...
- See again here, https://pulpito.ceph.com/yuriw-2021-05-04_15:32:03-multimds-wip-yuri3-testing-2021-04-29-1036-nautilu...
- 04:01 AM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
The cephfs_metadata pool was created since osdmap v22:...
- 02:05 AM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
- Checked the mds log:...
- 02:19 AM Bug #47041 (Resolved): MDS recall configuration options not documented yet
- https://docs.ceph.com/en/latest/cephfs/cache-configuration/#mds-recall
05/06/2021
- 05:16 AM Bug #48673: High memory usage on standby replay MDS
- Hi Patrick.
I've tried to run the cluster with both settings for 24 hours each. It became slightly worse, but that...
- 01:07 AM Bug #42516: mds: some mutations have initiated (TrackedOp) set to 0
- I checked Migrator.cc for creation of MutationImpl object and setting of its TrackedOp initiated_at attribute mention...
05/05/2021
- 10:31 PM Bug #42516 (In Progress): mds: some mutations have initiated (TrackedOp) set to 0
- 03:29 PM Bug #49672 (Resolved): nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- 12:43 PM Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
- Requires PR https://github.com/ceph/ceph/pull/40885 to fully fix the failed test.
- 12:41 PM Bug #50224 (Fix Under Review): qa: test_mirroring_init_failure_with_recovery failure
05/04/2021
- 02:51 PM Backport #50632 (In Progress): pacific: mds: failure replaying journal (EMetaBlob)
- 12:50 AM Backport #50632 (Resolved): pacific: mds: failure replaying journal (EMetaBlob)
- https://github.com/ceph/ceph/pull/40855
- 02:00 PM Backport #50634 (In Progress): nautilus: mds: failure replaying journal (EMetaBlob)
- 12:50 AM Backport #50634 (Resolved): nautilus: mds: failure replaying journal (EMetaBlob)
- https://github.com/ceph/ceph/pull/41144
- 01:57 PM Backport #50633 (In Progress): octopus: mds: failure replaying journal (EMetaBlob)
- 12:50 AM Backport #50633 (Resolved): octopus: mds: failure replaying journal (EMetaBlob)
- https://github.com/ceph/ceph/pull/40743
- 08:45 AM Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
- Tested with https://github.com/ceph/ceph/pull/40885 and the failures due to blocked updated thread have gone away: ht...
- 12:55 AM Backport #50636 (Resolved): pacific: session dump includes completed_requests twice, once as an i...
- https://github.com/ceph/ceph/pull/42057
- 12:55 AM Backport #50635 (Resolved): octopus: session dump includes completed_requests twice, once as an i...
- https://github.com/ceph/ceph/pull/41625
- 12:53 AM Bug #50559 (Pending Backport): session dump includes completed_requests twice, once as an integer...
- 12:49 AM Bug #50246 (Pending Backport): mds: failure replaying journal (EMetaBlob)
- 12:45 AM Backport #50631 (Resolved): octopus: mds: Error ENOSYS: mds.a started profiler
- https://github.com/ceph/ceph/pull/45155
- 12:45 AM Backport #50630 (Resolved): pacific: mds: Error ENOSYS: mds.a started profiler
- https://github.com/ceph/ceph/pull/42056
- 12:45 AM Backport #50629 (Resolved): pacific: cephfs-mirror: ignore snapshots on parent directories when s...
- https://github.com/ceph/ceph/pull/41475
- 12:44 AM Bug #50442 (Pending Backport): cephfs-mirror: ignore snapshots on parent directories when synchro...
- 12:40 AM Backport #50628 (Resolved): nautilus: client: access(path, X_OK) on non-executable file as root a...
- https://github.com/ceph/ceph/pull/41297
- 12:40 AM Backport #50627 (Resolved): pacific: client: access(path, X_OK) on non-executable file as root al...
- https://github.com/ceph/ceph/pull/41294
- 12:40 AM Backport #50626 (Resolved): octopus: client: access(path, X_OK) on non-executable file as root al...
- https://github.com/ceph/ceph/pull/41295
- 12:40 AM Backport #50625 (Resolved): nautilus: qa: "ls: cannot access 'lost+found': No such file or direct...
- https://github.com/ceph/ceph/pull/40769
- 12:40 AM Bug #50433 (Pending Backport): mds: Error ENOSYS: mds.a started profiler
- 12:40 AM Backport #50624 (Resolved): pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- https://github.com/ceph/ceph/pull/40856
- 12:40 AM Backport #50623 (Resolved): octopus: qa: "ls: cannot access 'lost+found': No such file or directory"
- https://github.com/ceph/ceph/pull/40768
- 12:38 AM Bug #50216 (Pending Backport): qa: "ls: cannot access 'lost+found': No such file or directory"
- 12:35 AM Bug #50060 (Pending Backport): client: access(path, X_OK) on non-executable file as root always s...
- 12:12 AM Bug #50221: qa: snaptest-git-ceph failure in git diff
- These resulted in hangs:
/ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.04...
05/03/2021
- 11:55 PM Bug #50221: qa: snaptest-git-ceph failure in git diff
- This also looks related, with stock kernel: /ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-tes...
- 11:53 PM Bug #50221: qa: snaptest-git-ceph failure in git diff
- Slightly different failure also with the stock kernel but 3 MDS ranks: /ceph/teuthology-archive/pdonnell-2021-05-01_0...
- 11:44 PM Bug #50622 (Resolved): msg: active_connections regression
- ...
- 10:48 PM Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~...
- /ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/608...
- 10:28 PM Bug #48773: qa: scrub does not complete
- Another: /ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-s...
- 08:50 PM Backport #50255 (Resolved): nautilus: mds: standby-replay only trims cache when it reaches the en...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40744
m...
- 08:50 PM Backport #50179 (Resolved): nautilus: client: only check pool permissions for regular files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40730
m...
- 08:50 PM Backport #50026 (Resolved): nautilus: client: items pinned in cache preventing unmount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40722
m...
- 08:50 PM Backport #49853 (Resolved): nautilus: mds: race of fetching large dirfrag
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40720
m...
- 08:46 PM Backport #49562 (Resolved): nautilus: qa: file system deletion not complete because starter fs al...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40709
m...
- 08:46 PM Backport #49516 (Resolved): nautilus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40704
m...
- 08:46 PM Backport #49473 (Resolved): nautilus: nautilus: qa: "Assertion `cb_done' failed."
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40701
m...
- 08:46 PM Backport #49613 (Resolved): nautilus: qa: racy session evicted check
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40714
m...
- 04:16 PM Bug #48411 (Resolved): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all fail...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:15 PM Bug #49662 (Resolved): ceph-dokan improvements for additional mounts
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:14 PM Bug #49972 (Resolved): mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:14 PM Bug #50020 (Resolved): qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirr...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:14 PM Fix #50045 (Resolved): qa: test standby_replay in workloads
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:13 PM Bug #50305 (Resolved): MDS doesn't set fscrypt flag on new inodes with crypto context in xattr bu...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:03 PM Backport #50285 (Resolved): pacific: qa: test standby_replay in workloads
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40853
m...
- 04:03 PM Backport #50287 (Resolved): pacific: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40852
m...
- 04:03 PM Backport #50253: pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/b...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40825
m...
- 03:59 PM Backport #50086 (Resolved): pacific: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError:...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40688
m...
- 03:59 PM Backport #50180 (Resolved): pacific: client: only check pool permissions for regular files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40686
m...
- 03:59 PM Backport #50185 (Resolved): pacific: qa: "RADOS object not found (Failed to operate read op for o...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40684
m...
- 03:58 PM Backport #50190 (Resolved): pacific: qa: "Assertion `cb_done' failed."
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40683
m...
- 03:58 PM Backport #50225: pacific: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40682
m...
- 03:58 PM Backport #50127 (Resolved): pacific: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40630
m...
- 03:58 PM Backport #50187 (Resolved): pacific: ceph-dokan improvements for additional mounts
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40627
m...
- 03:55 PM Bug #48673: High memory usage on standby replay MDS
- Daniel Persson wrote:
> Patrick Donnelly wrote:
> > Thanks for the information. There were a few fixes in v15.2.8 r...
- 11:06 AM Bug #48673: High memory usage on standby replay MDS
- Hi,
we are experiencing the same behavior, but with ceph 14.2.18. Memory usage of the standby-replay MDS keeps growi...
- 01:41 PM Bug #50546 (Triaged): nautilus: qa: 'The following counters failed to be set on mds daemons: {''m...
- 01:40 PM Bug #50569 (Won't Fix): nautilus: qa: tasks/cfuse_workunit_suites_fsstress validater/valgrind fai...
- Won't fix since this is probably caused by only using 2 machines for these tests. New QA suite uses 3 nodes. Nautilu...
- 01:38 PM Bug #50570 (Triaged): nautilus: qa: tasks/trim-i22073 cluster [WRN] Health check failed: 1 client...
- 05:04 AM Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
- Hit this again recently: https://pulpito.ceph.com/vshankar-2021-04-30_17:19:54-fs-wip-cephfs-mirror-incremental-sync-...