Activity
From 01/07/2021 to 02/05/2021
02/05/2021
- 06:17 PM Bug #49192 (Fix Under Review): qa::ERROR: test_recover_auth_metadata_during_authorize
- 05:56 PM Bug #49192: qa::ERROR: test_recover_auth_metadata_during_authorize
- This applies to the following tests as well:
1. test_recover_auth_metadata_during_deauthorize
2. test_subvolume_autho...
- 05:53 PM Bug #49192: qa::ERROR: test_recover_auth_metadata_during_authorize
- The order might not be retained during recovering dirty auth_metadata file. So it should not be compared as strings.
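The comment above argues for comparing the recovered auth_metadata structurally rather than as serialized strings. A minimal sketch of why (the metadata content here is made up for illustration):

```python
import json

# Two dumps of the same metadata that differ only in key order:
# string comparison reports a spurious mismatch, while comparing
# the parsed structures does not.
a = '{"version": 5, "subvolumes": {"sv1": {}, "sv2": {}}}'
b = '{"subvolumes": {"sv2": {}, "sv1": {}}, "version": 5}'

assert a != b                          # fragile: order-sensitive
assert json.loads(a) == json.loads(b)  # robust: order-insensitive
```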
- 05:50 PM Bug #49192 (Resolved): qa::ERROR: test_recover_auth_metadata_during_authorize
- 2021-02-05T05:23:23.676 INFO:tasks.cephfs_test_runner:===============================================================...
- 04:44 PM Backport #48285: octopus: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38422
merged
02/04/2021
- 06:42 PM Feature #46892 (Resolved): pybind/mgr/volumes: Make number of cloner threads configurable
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:42 PM Bug #47307 (Resolved): mds: throttle workloads which acquire caps faster than the client can release
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:40 PM Bug #48633 (Resolved): qa: tox failures
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:39 PM Backport #48375 (Resolved): octopus: libcephfs allows calling ftruncate on a file open read-only
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38424
m...
- 06:22 PM Backport #48375: octopus: libcephfs allows calling ftruncate on a file open read-only
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38424
merged
- 06:38 PM Backport #48370 (Resolved): octopus: mds: dir->mark_new should together with dir->mark_dirty
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38352
m...
- 06:21 PM Backport #48370: octopus: mds: dir->mark_new should together with dir->mark_dirty
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38352
merged
- 06:38 PM Backport #48129 (Resolved): octopus: some clients may return failure in the scenario where multip...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38349
m...
- 06:21 PM Backport #48129: octopus: some clients may return failure in the scenario where multiple clients ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38349
merged
- 03:36 PM Feature #48953 (Need More Info): cephfs-mirror: suppport snapshot mirror of subdirectories and/or...
- pending on "group consistency snapshot replication" requirements
need further info for the above feature in the futu...
- 12:11 PM Bug #48763: mds memory leak
- A similar mds memory leak has been found in our CephFS cluster with ceph version 12.2.12.
Cache status and dump_memp...
- 11:02 AM Backport #49160 (In Progress): nautilus: qa: :ERROR: test_idempotency
- 10:58 AM Backport #49160 (Fix Under Review): nautilus: qa: :ERROR: test_idempotency
- 10:41 AM Backport #49160 (Resolved): nautilus: qa: :ERROR: test_idempotency
- https://github.com/ceph/ceph/pull/39292
- 11:01 AM Backport #49028 (In Progress): nautilus: mgr/volumes: evict clients based on auth ID and subvolum...
- 11:01 AM Backport #48901 (In Progress): nautilus: mgr/volumes: get the list of auth IDs that have been gra...
- 11:00 AM Backport #48859 (In Progress): nautilus: pybind/mgr/volumes: inherited snapshots should be filter...
- 11:00 AM Backport #48195 (In Progress): nautilus: mgr/volumes: allow/deny r/rw access of auth IDs to subvo...
- 04:38 AM Bug #49157 (Resolved): mon/MDSMonitor.cc: fix join fscid not applied with pending fsmap at boot
- At boot stage, mds new_info.join_fscid is set after new_info is added to pending fsmap. Hence join_fscid will be upda...
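The ordering bug described above (a field set only after the struct was copied into the pending map, so the stored copy never sees it) can be sketched in Python with hypothetical stand-ins for MDSMonitor's types:

```python
from copy import deepcopy

# Hypothetical stand-in for the per-MDS info record.
class Info:
    def __init__(self):
        self.join_fscid = -1  # "not set"

pending = {}  # stand-in for the pending fsmap
info = Info()
# Bug pattern: the record is copied into the pending map first...
pending["mds.a"] = deepcopy(info)
# ...and the field is set afterwards, so only the local object sees it.
info.join_fscid = 1
assert pending["mds.a"].join_fscid == -1  # pending fsmap missed the update
```

Moving the `join_fscid` assignment before the insertion is the shape of the fix.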
02/03/2021
- 11:48 PM Bug #49035: pybind/cephfs: evict operation in ceph_volume_client fails with TypeError
- Closed my pull request, let's get https://github.com/ceph/ceph/pull/39038 in. Thanks!
- 07:31 PM Backport #48634 (Resolved): nautilus: qa: tox failures
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38627
m...
- 06:41 PM Backport #48634: nautilus: qa: tox failures
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/38627
merged
- 07:31 PM Backport #48192 (Resolved): nautilus: mds: throttle workloads which acquire caps faster than the ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38101
m...
- 06:41 PM Backport #48192: nautilus: mds: throttle workloads which acquire caps faster than the client can ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38101
merged
- 02:48 PM Bug #49133 (Resolved): mgr/nfs: Rook does not support restart of services, handle the NotImplemen...
- 02:00 PM Bug #49132 (Resolved): mds crashed "assert_condition": "state == LOCK_XLOCK || state == LOCK_XLO...
- ...
- 11:41 AM Feature #49125 (New): mgr/volumes: Provide subvolume and subvolumegroup interfaces to manage Ceph...
- CephFS mirroring currently works when the CLI explicitly calls out the required path within the FS to mirror. For exa...
- 09:21 AM Bug #49121 (Fix Under Review): vstart: volumes/nfs interface complaints cluster does not exists
- 08:52 AM Bug #49121 (Resolved): vstart: volumes/nfs interface complaints cluster does not exists
- ...
- 09:20 AM Bug #49122 (Fix Under Review): vstart: Rados url error
- 09:01 AM Bug #49122 (Resolved): vstart: Rados url error
- ...
- 02:36 AM Support #49116 (New): written io continuous high occupancy
- Ceph's status is healthy. No other processes are running on the server; only Ceph is running. But Ceph's IO occupies a ...
- 01:44 AM Bug #48148: mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
- The client randomly reads, writes, setsattr, and rmdir to all the directories, but it is not sure what operations hav...
02/02/2021
- 01:29 PM Bug #48148: mds: Server.cc:6764 FAILED assert(in->filelock.can_read(mdr->get_client()))
Thanks for bringing this up!
Was it a delete only workload? Or was the directory in question subjected to any o...
02/01/2021
- 06:11 PM Feature #48991: client: allow looking up snapped inodes by inode number+snapid tuple
- I started looking at this today, and it's a little trickier than I thought. The current client code that sends the LO...
- 02:44 PM Bug #49035: pybind/cephfs: evict operation in ceph_volume_client fails with TypeError
- There is already a PR pushed for this [1] to the pacific branch.
Both the fixes are exactly the same. I am fine taking any...
- 06:05 AM Bug #49074 (Fix Under Review): mds: don't start purging inodes in the middle of recovery
- 06:04 AM Bug #49074 (Resolved): mds: don't start purging inodes in the middle of recovery
- If mds kills client session in the middle of recovery, it will purge preallocated inos in the killed session twice. o...
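The double-purge hazard described above can be sketched with a hypothetical idempotent guard (names are illustrative, not Ceph's actual code):

```python
# Track already-purged preallocated inos so that purging the same killed
# session's inos a second time becomes a no-op rather than a double purge.
purged = set()

def purge_inos(inos):
    todo = [i for i in inos if i not in purged]
    purged.update(todo)
    return todo

session_inos = [0x100, 0x101]
assert purge_inos(session_inos) == [0x100, 0x101]  # first purge does the work
assert purge_inos(session_inos) == []              # replay during recovery: no-op
```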
01/29/2021
- 09:32 AM Bug #48673: High memory usage on standby replay MDS
- Hello,
We have noticed the same behavior in ceph v15.2.3 and v15.2.8
Note, this is not the case with all filesy...
01/28/2021
- 05:44 PM Feature #48953: cephfs-mirror: suppport snapshot mirror of subdirectories and/or ancestors of a m...
- Caveat:
Subdirs and ancestor dirs replication cannot be done. In my opinion, these are mutually exclusive items. We ...
- 05:10 PM Backport #48643 (Resolved): nautilus: client: ceph.dir.entries does not acquire necessary caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38950
m...
- 05:10 PM Backport #48641 (Resolved): nautilus: Client: the directory's capacity will not be updated after ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38948
m...
- 05:10 PM Backport #47823 (Resolved): nautilus: pybind/mgr/volumes: Make number of cloner threads configurable
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37936
m...
- 02:34 PM Feature #49040 (Fix Under Review): cephfs-mirror: test mirror daemon with valgrind
- 01:46 PM Feature #49040 (In Progress): cephfs-mirror: test mirror daemon with valgrind
- 01:46 PM Feature #49040 (Resolved): cephfs-mirror: test mirror daemon with valgrind
- Add test YAMLS to run mirror daemon with valgrind in teuthology and fix any valgrind reported errors (leaks, etc..)
- 01:05 PM Backport #48879 (In Progress): nautilus: mds: fix recall defaults based on feedback from producti...
- 12:54 PM Backport #48837 (In Progress): nautilus: have mount helper pick appropriate mon sockets for ms_mo...
- 11:27 AM Backport #48814 (In Progress): nautilus: mds: spurious wakeups in cache upkeep
- 11:24 AM Backport #48376 (In Progress): nautilus: libcephfs allows calling ftruncate on a file open read-only
- 11:23 AM Backport #48371 (In Progress): nautilus: mds: dir->mark_new should together with dir->mark_dirty
- 11:23 AM Backport #48286 (Need More Info): nautilus: rados/upgrade/nautilus-x-singleton fails due to clust...
- non-trivial due to post-nautilus refactoring
- 11:18 AM Backport #48130 (In Progress): nautilus: some clients may return failure in the scenario where mu...
01/27/2021
- 08:44 PM Bug #49035: pybind/cephfs: evict operation in ceph_volume_client fails with TypeError
- Pull request: https://github.com/ceph/ceph/pull/39111
- 08:27 PM Bug #49035 (Duplicate): pybind/cephfs: evict operation in ceph_volume_client fails with TypeError
- A recently introduced change in cephfs.pyx affected the ceph_volume_client evict command. The mds_command() operation...
- 07:20 PM Backport #48643: nautilus: client: ceph.dir.entries does not acquire necessary caps
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/38950
merged
- 07:19 PM Backport #48641: nautilus: Client: the directory's capacity will not be updated after write data ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/38948
merged
- 07:18 PM Backport #47823: nautilus: pybind/mgr/volumes: Make number of cloner threads configurable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37936
merged
- 06:12 PM Backport #49027 (In Progress): pacific: mgr/volumes: evict clients based on auth ID and subvolume...
- 02:15 PM Backport #49027 (Resolved): pacific: mgr/volumes: evict clients based on auth ID and subvolume mo...
- https://github.com/ceph/ceph/pull/39109
- 02:16 PM Backport #49029 (Resolved): octopus: mgr/volumes: evict clients based on auth ID and subvolume mo...
- https://github.com/ceph/ceph/pull/39390
- 02:16 PM Backport #49028 (Resolved): nautilus: mgr/volumes: evict clients based on auth ID and subvolume m...
- https://github.com/ceph/ceph/pull/39292
- 02:14 PM Feature #44928 (Pending Backport): mgr/volumes: evict clients based on auth ID and subvolume mounted
- 12:56 PM Bug #48812 (Can't reproduce): qa: test_scrub_pause_and_resume_with_abort failure
- 12:55 PM Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
- Closing this tracker for now. Will reopen if we see this again.
- 12:55 PM Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
- Xiubo Li wrote:
> Jos has hit the same issue too, please see https://github.com/ceph/ceph/pull/38684#discussion_r555...
01/26/2021
- 09:56 AM Bug #47307: mds: throttle workloads which acquire caps faster than the client can release
- Is it related to MDS cache overconsumption?
01/25/2021
- 08:10 PM Feature #48991 (Resolved): client: allow looking up snapped inodes by inode number+snapid tuple
- Currently, we have ceph_ll_lookup_inode(), but that only takes an inode number and can't deal with a snapped inode. A...
- 02:26 PM Bug #48830 (Fix Under Review): pacific: qa: :ERROR: test_idempotency
- 12:38 PM Bug #48766 (In Progress): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.Te...
- 12:38 PM Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
- The issue is no longer applicable to master as the test is removed as part of removing ceph_volume_client [1]
...
- 12:31 PM Bug #48773 (In Progress): qa: scrub does not complete
- 10:25 AM Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
- Jos has hit the same issue too, please see https://github.com/ceph/ceph/pull/38684#discussion_r555022883.
- 08:51 AM Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
- This possibly could be a timing issue. Here is the run with master: https://pulpito.ceph.com/vshankar-2021-01-25_06:3...
- 05:07 AM Bug #48812 (In Progress): qa: test_scrub_pause_and_resume_with_abort failure
01/24/2021
01/23/2021
- 04:49 PM Bug #48830 (In Progress): pacific: qa: :ERROR: test_idempotency
- The issue is no longer applicable to master as the test is removed as part of removing ceph_volume_client [1]
[1...
01/22/2021
- 09:36 AM Bug #48923: pacific: pybind: revert removal of ceph_volume_client library
- This is not a backport. This revert is targeted for pacific. Please see, https://github.com/ceph/ceph/pull/38960#issu...
- 09:33 AM Bug #48923 (In Progress): pacific: pybind: revert removal of ceph_volume_client library
01/21/2021
- 04:16 PM Backport #48568 (In Progress): octopus: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 02:40 PM Feature #48953 (Need More Info): cephfs-mirror: suppport snapshot mirror of subdirectories and/or...
- mgr/mirroring assigns directory paths to `cephfs-mirror` daemon instances. Right now, only a single mirror daemon is ...
- 02:37 PM Backport #48520 (In Progress): nautilus: client: add ceph.cluster_fsid/ceph.client_id vxattr supp...
- 02:35 PM Backport #48521 (In Progress): octopus: client: add ceph.cluster_fsid/ceph.client_id vxattr suppo...
- 05:49 AM Feature #48944 (New): pybind/mirroring: add subvolume/subvolumegroup interfaces for snapshot mirr...
- Rather than the operator adding subvolume/subvolumegroup paths via "fs snapshot mirror add/remove" interface, introdu...
- 05:37 AM Feature #48943 (Resolved): cephfs-mirror: display cephfs mirror instances in `ceph status` command
- CephFS mirror daemons should register with ceph-mgr to get included in service map. This would allow mirror daemon in...
01/20/2021
- 07:05 AM Bug #48778: Setting quota triggered ops storms on meta pool
- After restarting both MDS the situation went back to normal.
Combing through our logs to find out which action trig...
01/19/2021
- 01:17 PM Feature #45729 (Resolved): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes in...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:17 PM Bug #46163 (Resolved): mgr/volumes: Clone operation uses source subvolume root directory mode and...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:08 AM Bug #48778: Setting quota triggered ops storms on meta pool
- After the issue did not re-appear for 6 days, I tried enabling quotas again on 71 directories. Shortly after doing so...
- 06:29 AM Bug #48923 (Resolved): pacific: pybind: revert removal of ceph_volume_client library
- The primary consumers of the ceph_volume_client_library are OpenStack manila's CephFS drivers. The drivers have not y...
01/18/2021
- 04:29 PM Documentation #48914 (In Progress): mgr/nfs: Update about user config
- 04:23 PM Documentation #48914 (Resolved): mgr/nfs: Update about user config
- ...
- 02:38 PM Backport #48643 (In Progress): nautilus: client: ceph.dir.entries does not acquire necessary caps
- 02:28 PM Backport #48635 (Resolved): octopus: qa: tox failures
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38626
m...
- 02:28 PM Backport #47059 (Resolved): octopus: mgr/volumes: Clone operation uses source subvolume root dire...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36803
m...
- 02:28 PM Backport #46820 (Resolved): octopus: pybind/mgr/volumes: Add the ability to keep snapshots of sub...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36803
m...
- 02:15 PM Backport #48644 (In Progress): octopus: client: ceph.dir.entries does not acquire necessary caps
- 02:02 PM Backport #48641 (In Progress): nautilus: Client: the directory's capacity will not be updated aft...
- 02:01 PM Backport #48642 (In Progress): octopus: Client: the directory's capacity will not be updated afte...
- 01:41 PM Bug #48912 (Resolved): ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors out...
- ls -l in cephfs-shell tries to chase symlink targets when stat'ing. For example, from the kclient:...
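The fix direction implied above is to stat the link itself rather than chase its target. The contrast between `os.lstat` and `os.stat` (a small standalone demonstration, not cephfs-shell's code) shows why a dangling symlink breaks a target-chasing `ls -l`:

```python
import os
import stat
import tempfile

# Create a dangling symlink in a scratch directory.
d = tempfile.mkdtemp()
link = os.path.join(d, "dangling")
os.symlink("/no/such/target", link)

# lstat() examines the link itself, so the listing still works...
assert stat.S_ISLNK(os.lstat(link).st_mode)

# ...while stat() chases the missing target and errors out,
# which is the ls -l breakage described in the report.
try:
    os.stat(link)
    raise AssertionError("expected FileNotFoundError")
except FileNotFoundError:
    pass
```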
- 01:35 PM Feature #48911 (Resolved): cephfs-shell needs "ln" command equivalent
- It's not currently possible to create symlinks or hardlinks with cephfs-shell (no ln command or equivalent). Add that...
- 12:03 PM Bug #48873 (Triaged): test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster dep...
- It looks like ganesha daemon did not come up. I cannot reproduce this issue in latest test run:
http://qa-proxy.ceph...
- 05:05 AM Bug #48763: mds memory leak
- A similar situation has been found in our CephFS cluster configured with Nautilus 14.2.11. The RAM usage suddenly reac...
01/16/2021
- 09:11 PM Feature #47162 (Resolved): mds: handle encrypted filenames in the MDS for fscrypt
- 04:19 PM Bug #48673: High memory usage on standby replay MDS
- Patrick Donnelly wrote:
> Thanks for the information. There were a few fixes in v15.2.8 relating to memory consumpti...
- 05:14 AM Bug #48700: client: Client::rmdir() may fail to remove a snapshot
- Venky Shankar wrote:
> This is not really a bug and was related to sticky bit on the root directory in a teuthology ...
- 05:13 AM Bug #48700 (Closed): client: Client::rmdir() may fail to remove a snapshot
- This is not really a bug and was related to sticky bit on the root directory in a teuthology test. The fix has been m...
01/15/2021
- 07:55 PM Backport #48901 (Resolved): nautilus: mgr/volumes: get the list of auth IDs that have been grante...
- https://github.com/ceph/ceph/pull/39292
- 07:55 PM Backport #48900 (Resolved): octopus: mgr/volumes: get the list of auth IDs that have been granted...
- https://github.com/ceph/ceph/pull/39390
- 07:53 PM Feature #44931 (Pending Backport): mgr/volumes: get the list of auth IDs that have been granted a...
- 06:54 PM Backport #48635: octopus: qa: tox failures
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/38626
merged
- 06:54 PM Backport #47059: octopus: mgr/volumes: Clone operation uses source subvolume root directory mode ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36803
merged
- 06:54 PM Backport #46820: octopus: pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes ind...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36803
merged
- 04:24 AM Bug #48886 (New): mds: version MMDSCacheRejoin
- This was missed in the MDS metadata/messages that was recently versioned.
01/14/2021
- 10:02 PM Bug #48877 (In Progress): qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 04:39 PM Bug #48877: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Patrick Donnelly wrote:
> [...]
>
> From: /ceph/teuthology-archive/pdonnell-2021-01-13_23:30:53-fs-wip-pdonnell-t...
- 03:49 PM Bug #48877 (Resolved): qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- ...
- 10:01 PM Feature #45746 (Fix Under Review): mgr/nfs: Add interface to update export
- 05:22 PM Feature #44192 (Resolved): mds: stable multimds scrub
- 04:11 PM Bug #48365 (Can't reproduce): qa: ffsb build failure on CentOS 8.2
- Haven't seen this anymore.
- 04:10 PM Backport #48879 (Resolved): nautilus: mds: fix recall defaults based on feedback from production ...
- https://github.com/ceph/ceph/pull/39134
- 04:10 PM Backport #48878 (Resolved): octopus: mds: fix recall defaults based on feedback from production c...
- https://github.com/ceph/ceph/pull/40764
- 04:08 PM Bug #46434 (Resolved): osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:08 PM Bug #48403 (Pending Backport): mds: fix recall defaults based on feedback from production clusters
- 04:08 PM Bug #46906 (Resolved): mds: fix file recovery crash after replaying delayed requests
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:07 PM Bug #48076 (Resolved): client: ::_read fails to advance pos at EOF checking
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:40 PM Bug #47294 (Resolved): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- 11:09 AM Bug #48873 (Triaged): test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster dep...
- https://pulpito.ceph.com/swagner-2021-01-13_11:19:08-rados:cephadm-wip-swagner3-testing-2021-01-12-1316-distro-basic-...
01/13/2021
- 06:45 PM Bug #48863: cephfs-shell should allow changing all mode bits
- The issue is that you can't currently set S_ISUID, S_ISGID or S_ISVTX. We should allow that within cephfs-shell.
I...
- 06:44 PM Bug #48863 (Resolved): cephfs-shell should allow changing all mode bits
- Currently, cephfs-shell says:...
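The special bits named above (setuid, setgid, sticky) sit above the rwx bits in the octal mode. A hypothetical parser (illustrative only, not cephfs-shell's actual code) that accepts them as well as the permission bits:

```python
import stat

# Hypothetical helper: parse a chmod-style octal string, allowing the
# special bits (S_ISUID/S_ISGID/S_ISVTX) in addition to the rwx bits.
def parse_mode(text: str) -> int:
    mode = int(text, 8)
    allowed = stat.S_ISUID | stat.S_ISGID | stat.S_ISVTX | 0o777
    if mode & ~allowed:
        raise ValueError(f"invalid mode: {text!r}")
    return mode

assert parse_mode("4755") == stat.S_ISUID | 0o755  # setuid + rwxr-xr-x
assert parse_mode("1777") == stat.S_ISVTX | 0o777  # sticky, world-writable
```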
- 04:24 PM Bug #48808 (Resolved): mon/MDSMonitor: `fs rm` is not idempotent
- 04:23 PM Bug #48834 (Resolved): qa: MDS_SLOW_METADATA_IO with osd thrasher
- 03:45 PM Backport #48859 (Resolved): nautilus: pybind/mgr/volumes: inherited snapshots should be filtered ...
- https://github.com/ceph/ceph/pull/39292
- 03:45 PM Backport #48858 (Resolved): octopus: pybind/mgr/volumes: inherited snapshots should be filtered o...
- https://github.com/ceph/ceph/pull/39390
- 03:44 PM Bug #48501 (Pending Backport): pybind/mgr/volumes: inherited snapshots should be filtered out of ...
- 01:02 PM Backport #47095: octopus: mds: provide altrenatives to increase the total cephfs subvolume snapsh...
- Andras Sali wrote:
> Really looking forward to getting this fix with 15.2.9 - is there a planned release date?
@A...
- 03:42 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Today I was able to simultaneously shut down all clients, which I did. The write operations were still happening, eve...
01/12/2021
- 04:59 PM Documentation #48838 (Resolved): document ms_mode options in mount.ceph manpage
- 04:19 PM Backport #47095: octopus: mds: provide altrenatives to increase the total cephfs subvolume snapsh...
- Really looking forward to getting this fix with 15.2.9 - is there a planned release date?
- 03:29 PM Bug #48770 (Resolved): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClusterResize)"
- 01:46 PM Bug #47294 (Fix Under Review): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Fix at: https://github.com/ceph/ceph/pull/38858
Thanks for spotting that.
- 12:05 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- That is probably the underlying cause. I'll push a patch fixing that tomorrow and then a bunch of tests and see if th...
- 08:40 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Small question, why do we see client_request(mds.0:61937457 setxattr #0x10006a64656 ceph.quota caller_uid=0, caller_g...
- 05:54 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Thank you for your reply, Patrick.
I already suspected the requests might come from the clients, but my initial de...
01/11/2021
- 11:48 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- By the way, something odd I noticed today was:
https://github.com/ceph/ceph/blob/d20916964984242e513a645bd275fad89...
- 11:42 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Adam Emerson wrote:
> > This doesn't look quite the same? Before we were having the opera...
- 07:28 PM Bug #48839 (Resolved): qa: Error: Unable to find a match: cephfs-top
- My fault with bad scheduling.
- 06:58 PM Bug #48839 (Resolved): qa: Error: Unable to find a match: cephfs-top
- ...
- 06:40 PM Documentation #48838 (In Progress): document ms_mode options in mount.ceph manpage
- 06:37 PM Documentation #48838 (Resolved): document ms_mode options in mount.ceph manpage
- We recently merged a patch to support the ms_mode= option in mount.ceph. Document the new option in the manpage.
- 06:20 PM Backport #48837 (Resolved): nautilus: have mount helper pick appropriate mon sockets for ms_mode ...
- https://github.com/ceph/ceph/pull/39133
- 06:20 PM Backport #48836 (Resolved): octopus: have mount helper pick appropriate mon sockets for ms_mode v...
- https://github.com/ceph/ceph/pull/40763
- 06:18 PM Bug #48835 (New): qa: add ms_mode random choice to kclient tests
- Building on https://github.com/ceph/ceph/pull/38788 (#48765): modify kernel_client.py to conditionally set ms_mode fo...
- 06:15 PM Bug #48765 (Pending Backport): have mount helper pick appropriate mon sockets for ms_mode value
- 06:13 PM Bug #48661 (Resolved): mds: reserved can be set on feature set
- 06:11 PM Bug #48834 (Fix Under Review): qa: MDS_SLOW_METADATA_IO with osd thrasher
- 06:09 PM Bug #48834 (Resolved): qa: MDS_SLOW_METADATA_IO with osd thrasher
- ...
- 06:02 PM Bug #48833 (New): snap_rm hang during osd thrashing
- ...
- 05:47 PM Bug #48832: qa: fsstress w/ valgrind causes MDS to be blocklisted
- Similar one: /ceph/teuthology-archive/pdonnell-2021-01-10_18:07:36-fs-wip-pdonnell-testing-20210110.050947-distro-bas...
- 05:46 PM Bug #48832 (New): qa: fsstress w/ valgrind causes MDS to be blocklisted
- ...
- 05:42 PM Bug #48831 (New): qa: ERROR: test_snapclient_cache
- ...
- 05:40 PM Bug #48830 (Resolved): pacific: qa: :ERROR: test_idempotency
- ...
- 05:37 PM Bug #48772: qa: pjd: not ok 9, 44, 80
- Also k-stock:...
- 02:50 PM Bug #48772 (Triaged): qa: pjd: not ok 9, 44, 80
- Both tests are with the kernel's testing branch.
- 05:11 PM Bug #44100: cephfs rsync kworker high load.
- I also encountered this on Ubuntu 20.04 @Linux 5.4.0-60-generic #67-Ubuntu SMP Tue Jan 5 18:31:36 UTC 2021@
The clus...
- 04:39 PM Feature #48602 (Resolved): `cephfs-top` frontend utility
- 04:30 PM Bug #42887 (Won't Fix): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file...
- 04:30 PM Bug #42061 (Won't Fix): volume_client: AssertionError: 237 != 8
- 04:28 PM Bug #42724 (Won't Fix): pybind/mgr/volumes: confirm backwards-compatibility of ceph_volume_client.py
- 02:58 PM Bug #48760 (Triaged): qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 02:56 PM Bug #48763 (Need More Info): mds memory leak
- Can you set `debug mds = 10` during the event so we can get an idea what the MDS is doing during this time, assuming ...
- 02:54 PM Bug #48766 (Triaged): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVo...
- 02:47 PM Bug #48773 (Triaged): qa: scrub does not complete
- 02:45 PM Bug #48778 (Need More Info): Setting quota triggered ops storms on meta pool
- This doesn't sound like a bug we'd expect. I think you may have some clients executing these setxattr requests but I'...
- 08:53 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- Tim Serong wrote:
> How was your test cluster deployed (vstart/cstart/something else)? I'd like to see if I'm able ...
- 06:55 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- I tried reproducing this (admittedly with a downstream SUSE build), and TERM seemed to have no effect, while KILL *di...
- 08:03 AM Feature #48404 (In Progress): client: add a ceph.caps vxattr
- 08:02 AM Backport #48195: nautilus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvo...
- Patrick/Ramana,
I am planning to backport this along with [1] which is under review. The fix [1] persists the auth...
- 08:02 AM Backport #48196: octopus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvol...
- Patrick/Ramana,
I am planning to backport this along with [1] which is under review. The fix [1] persists the auth...
01/10/2021
- 05:04 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Adam Emerson wrote:
> This doesn't look quite the same? Before we were having the operation file with a definite tim... - 04:47 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- This doesn't look quite the same? Before we were having the operation file with a definite timeout and I can't find t...
01/09/2021
- 03:10 AM Bug #48811 (Closed): qa: fs/snaps/snaptest-realm-split.sh hang
- https://github.com/ceph/ceph/pull/38732
- 03:07 AM Bug #48757 (Resolved): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.f"
01/08/2021
- 11:20 PM Backport #48814 (Resolved): nautilus: mds: spurious wakeups in cache upkeep
- https://github.com/ceph/ceph/pull/39130
- 11:20 PM Backport #48813 (Resolved): octopus: mds: spurious wakeups in cache upkeep
- https://github.com/ceph/ceph/pull/40743
- 11:16 PM Bug #48753 (Pending Backport): mds: spurious wakeups in cache upkeep
- 11:14 PM Bug #48707 (Resolved): client: unmount() doesn't dump the cache
- 11:07 PM Bug #48812 (Resolved): qa: test_scrub_pause_and_resume_with_abort failure
- ...
- 11:06 PM Bug #48365: qa: ffsb build failure on CentOS 8.2
- Haven't seen this failure recently but did get this:...
- 11:00 PM Bug #48811: qa: fs/snaps/snaptest-realm-split.sh hang
- Probably related group of failures:...
- 10:57 PM Bug #48811 (Closed): qa: fs/snaps/snaptest-realm-split.sh hang
- ...
- 08:12 PM Bug #48808 (Fix Under Review): mon/MDSMonitor: `fs rm` is not idempotent
- 08:04 PM Bug #48808 (Resolved): mon/MDSMonitor: `fs rm` is not idempotent
- ...
- 07:27 PM Bug #48756 (Resolved): qa: kclient does not synchronously write with O_DIRECT
- 07:26 PM Fix #48121 (Resolved): qa: merge fs/multimds suites
- 06:47 PM Bug #48702 (Resolved): qa: fwd_scrub should only scrub rank 0
- 06:39 PM Bug #48514 (Resolved): mgr/nfs: Don't prefix 'ganesha-' to cluster id
- 06:37 PM Bug #47294 (Need More Info): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Looks like this is still here, maybe more racy than before:...
- 06:26 PM Bug #48805 (Resolved): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blog...
- ...
- 06:19 PM Feature #18513 (Resolved): MDS: scrub: forward scrub reports missing backtraces on new files as d...
- This is fixed in the current code.
- 04:38 PM Fix #48802 (Resolved): mds: define CephFS errors that replace standard errno values
- CephFS protocol depends on the errno numbers which may vary by operating system. Copy the Linux values we use into in...
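A minimal sketch of the idea (hypothetical names; the real change lives in Ceph's C++ tree): pin the on-the-wire values to Linux's errno numbers and translate to the host's errno at the protocol boundary.

```python
import errno

# Wire errno values pinned to Linux numbering, independent of the host OS.
CEPHFS_EAGAIN = 11
CEPHFS_ERANGE = 34

_WIRE_TO_HOST = {
    CEPHFS_EAGAIN: errno.EAGAIN,
    CEPHFS_ERANGE: errno.ERANGE,
}

def wire_to_host(code: int) -> int:
    """Translate a wire errno (Linux numbering) to this host's errno."""
    return _WIRE_TO_HOST.get(code, code)

# On Linux the translation is the identity; on e.g. FreeBSD, where EAGAIN
# is 35, this mapping is what keeps the protocol portable.
assert wire_to_host(CEPHFS_EAGAIN) == errno.EAGAIN
```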
01/07/2021
- 09:58 PM Feature #48791: mds: support file block size
- Technically, I think the blocksize can be anything >= 8 bytes or so. Too small or large a block will be cumbersome to...
- 09:53 PM Feature #48791 (Rejected): mds: support file block size
- The new fscrypt feature in the kernel client that is under development needs to be able to prevent the MDS from trunc...
- 05:57 PM Bug #48203 (Resolved): qa: quota failure
- Luis Henriques wrote:
> Patrick Donnelly wrote:
> > Hey Luis, I think this is still broken; the revert didn't work:...
- 04:42 PM Bug #48203: qa: quota failure
- Patrick Donnelly wrote:
> Hey Luis, I think this is still broken; the revert didn't work: https://pulpito.ceph.com/t...
- 03:50 PM Bug #48203 (Need More Info): qa: quota failure
- Hey Luis, I think this is still broken; the revert didn't work: https://pulpito.ceph.com/teuthology-2021-01-03_03:15:...
- 12:22 PM Backport #48457 (Resolved): nautilus: client: fix crash when doing remount in none fuse case
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38467
m...
- 12:22 PM Backport #48110 (Resolved): nautilus: client: ::_read fails to advance pos at EOF checking
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37991
m...
- 12:21 PM Backport #48097 (Resolved): nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37988
m...
- 12:21 PM Backport #48095 (Resolved): nautilus: mds: fix file recovery crash after replaying delayed requests
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37986
m...
- 05:28 AM Bug #48778 (Need More Info): Setting quota triggered ops storms on meta pool
- A few days ago, I toyed with setting a quota on a handful (less than 60) directories on my CephFS filesystem. At the ...