Activity
From 12/27/2020 to 01/25/2021
01/25/2021
- 08:10 PM Feature #48991 (Resolved): client: allow looking up snapped inodes by inode number+snapid tuple
- Currently, we have ceph_ll_lookup_inode(), but that only takes an inode number and can't deal with a snapped inode. A...
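The core of the request can be shown with a toy cache (plain Python, not libcephfs code; names here are illustrative): keying by inode number alone cannot distinguish a head inode from its snapshots, while a (ino, snapid) tuple can.

```python
# Toy illustration (not libcephfs): why an inode-number key alone is
# insufficient for snapped inodes, and why a (ino, snapid) tuple works.
CEPH_NOSNAP = 0xFFFFFFFFFFFFFFFE  # "head" (live) version sentinel, as in Ceph

class InodeCache:
    def __init__(self):
        self._by_vino = {}  # (ino, snapid) -> inode record

    def add(self, ino, snapid, record):
        self._by_vino[(ino, snapid)] = record

    def lookup(self, ino, snapid=CEPH_NOSNAP):
        # A lookup keyed only by ino would have to guess which snapshot
        # is meant; keying by the tuple makes it unambiguous.
        return self._by_vino.get((ino, snapid))

cache = InodeCache()
cache.add(0x10000000001, CEPH_NOSNAP, "head version")
cache.add(0x10000000001, 2, "snapshot 2 version")

assert cache.lookup(0x10000000001) == "head version"
assert cache.lookup(0x10000000001, snapid=2) == "snapshot 2 version"
```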
- 02:26 PM Bug #48830 (Fix Under Review): pacific: qa: :ERROR: test_idempotency
- 12:38 PM Bug #48766 (In Progress): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.Te...
- 12:38 PM Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
- The issue is no longer applicable to master as the test was removed as part of removing ceph_volume_client [1]
...
- 12:31 PM Bug #48773 (In Progress): qa: scrub does not complete
- 10:25 AM Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
- Jos has hit the same issue too, please see https://github.com/ceph/ceph/pull/38684#discussion_r555022883.
- 08:51 AM Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
- This possibly could be a timing issue. Here is the run with master: https://pulpito.ceph.com/vshankar-2021-01-25_06:3...
- 05:07 AM Bug #48812 (In Progress): qa: test_scrub_pause_and_resume_with_abort failure
01/24/2021
01/23/2021
- 04:49 PM Bug #48830 (In Progress): pacific: qa: :ERROR: test_idempotency
- The issue is no longer applicable to master as the test was removed as part of removing ceph_volume_client [1]
[1...
01/22/2021
- 09:36 AM Bug #48923: pacific: pybind: revert removal of ceph_volume_client library
- This is not a backport. This revert is targeted for pacific. Please see, https://github.com/ceph/ceph/pull/38960#issu...
- 09:33 AM Bug #48923 (In Progress): pacific: pybind: revert removal of ceph_volume_client library
01/21/2021
- 04:16 PM Backport #48568 (In Progress): octopus: tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
- 02:40 PM Feature #48953 (Need More Info): cephfs-mirror: support snapshot mirror of subdirectories and/or...
- mgr/mirroring assigns directory paths to `cephfs-mirror` daemon instances. Right now, only a single mirror daemon is ...
- 02:37 PM Backport #48520 (In Progress): nautilus: client: add ceph.cluster_fsid/ceph.client_id vxattr supp...
- 02:35 PM Backport #48521 (In Progress): octopus: client: add ceph.cluster_fsid/ceph.client_id vxattr suppo...
- 05:49 AM Feature #48944 (New): pybind/mirroring: add subvolume/subvolumegroup interfaces for snapshot mirr...
- Rather than the operator adding subvolume/subvolumegroup paths via "fs snapshot mirror add/remove" interface, introdu...
- 05:37 AM Feature #48943 (Resolved): cephfs-mirror: display cephfs mirror instances in `ceph status` command
- CephFS mirror daemons should register with ceph-mgr to get included in service map. This would allow mirror daemon in...
01/20/2021
- 07:05 AM Bug #48778: Setting quota triggered ops storms on meta pool
- After restarting both MDSs, the situation went back to normal.
Combing through our logs to find out which action trig...
01/19/2021
- 01:17 PM Feature #45729 (Resolved): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes in...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:17 PM Bug #46163 (Resolved): mgr/volumes: Clone operation uses source subvolume root directory mode and...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:08 AM Bug #48778: Setting quota triggered ops storms on meta pool
- After the issue did not re-appear for 6 days, I tried enabling quotas again on 71 directories. Shortly after doing so...
- 06:29 AM Bug #48923 (Resolved): pacific: pybind: revert removal of ceph_volume_client library
- The primary consumers of the ceph_volume_client library are OpenStack Manila's CephFS drivers. The drivers have not y...
01/18/2021
- 04:29 PM Documentation #48914 (In Progress): mgr/nfs: Update about user config
- 04:23 PM Documentation #48914 (Resolved): mgr/nfs: Update about user config
- ...
- 02:38 PM Backport #48643 (In Progress): nautilus: client: ceph.dir.entries does not acquire necessary caps
- 02:28 PM Backport #48635 (Resolved): octopus: qa: tox failures
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38626
m...
- 02:28 PM Backport #47059 (Resolved): octopus: mgr/volumes: Clone operation uses source subvolume root dire...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36803
m...
- 02:28 PM Backport #46820 (Resolved): octopus: pybind/mgr/volumes: Add the ability to keep snapshots of sub...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36803
m...
- 02:15 PM Backport #48644 (In Progress): octopus: client: ceph.dir.entries does not acquire necessary caps
- 02:02 PM Backport #48641 (In Progress): nautilus: Client: the directory's capacity will not be updated aft...
- 02:01 PM Backport #48642 (In Progress): octopus: Client: the directory's capacity will not be updated afte...
- 01:41 PM Bug #48912 (Resolved): ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors out...
- ls -l in cephfs-shell tries to chase symlink targets when stat'ing. For example, from the kclient:...
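The distinction at issue can be shown with generic Python (this is standard library usage, not the cephfs-shell code): stat() follows the link and fails on a dangling symlink, while lstat() stats the link itself, which is what a long listing should use.

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, "dangling")
os.symlink("/no/such/target", link)  # symlink to a nonexistent target

# os.stat() chases the target and raises for a dangling link...
try:
    os.stat(link)
    followed_ok = True
except FileNotFoundError:
    followed_ok = False

# ...while os.lstat() stats the link itself, which is what `ls -l` needs.
st = os.lstat(link)
assert not followed_ok
assert stat.S_ISLNK(st.st_mode)
```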
- 01:35 PM Feature #48911 (Resolved): cephfs-shell needs "ln" command equivalent
- It's not currently possible to create symlinks or hardlinks with cephfs-shell (no ln command or equivalent). Add that...
- 12:03 PM Bug #48873 (Triaged): test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster dep...
- It looks like the ganesha daemon did not come up. I cannot reproduce this issue in the latest test run:
http://qa-proxy.ceph...
- 05:05 AM Bug #48763: mds memory leak
- A similar situation has been found in our CephFS cluster configured with Nautilus 14.2.11. The RAM usage suddenly reac...
01/16/2021
- 09:11 PM Feature #47162 (Resolved): mds: handle encrypted filenames in the MDS for fscrypt
- 04:19 PM Bug #48673: High memory usage on standby replay MDS
- Patrick Donnelly wrote:
> Thanks for the information. There were a few fixes in v15.2.8 relating to memory consumpti...
- 05:14 AM Bug #48700: client: Client::rmdir() may fail to remove a snapshot
- Venky Shankar wrote:
> This is not really a bug and was related to sticky bit on the root directory in a teuthology ...
- 05:13 AM Bug #48700 (Closed): client: Client::rmdir() may fail to remove a snapshot
- This is not really a bug and was related to sticky bit on the root directory in a teuthology test. The fix has been m...
01/15/2021
- 07:55 PM Backport #48901 (Resolved): nautilus: mgr/volumes: get the list of auth IDs that have been grante...
- https://github.com/ceph/ceph/pull/39292
- 07:55 PM Backport #48900 (Resolved): octopus: mgr/volumes: get the list of auth IDs that have been granted...
- https://github.com/ceph/ceph/pull/39390
- 07:53 PM Feature #44931 (Pending Backport): mgr/volumes: get the list of auth IDs that have been granted a...
- 06:54 PM Backport #48635: octopus: qa: tox failures
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/38626
merged
- 06:54 PM Backport #47059: octopus: mgr/volumes: Clone operation uses source subvolume root directory mode ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36803
merged
- 06:54 PM Backport #46820: octopus: pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes ind...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36803
merged
- 04:24 AM Bug #48886 (New): mds: version MMDSCacheRejoin
- This was missed in the MDS metadata/messages that were recently versioned.
01/14/2021
- 10:02 PM Bug #48877 (In Progress): qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 04:39 PM Bug #48877: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Patrick Donnelly wrote:
> [...]
>
> From: /ceph/teuthology-archive/pdonnell-2021-01-13_23:30:53-fs-wip-pdonnell-t...
- 03:49 PM Bug #48877 (Resolved): qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- ...
- 10:01 PM Feature #45746 (Fix Under Review): mgr/nfs: Add interface to update export
- 05:22 PM Feature #44192 (Resolved): mds: stable multimds scrub
- 04:11 PM Bug #48365 (Can't reproduce): qa: ffsb build failure on CentOS 8.2
- Haven't seen this anymore.
- 04:10 PM Backport #48879 (Resolved): nautilus: mds: fix recall defaults based on feedback from production ...
- https://github.com/ceph/ceph/pull/39134
- 04:10 PM Backport #48878 (Resolved): octopus: mds: fix recall defaults based on feedback from production c...
- https://github.com/ceph/ceph/pull/40764
- 04:08 PM Bug #48403 (Pending Backport): mds: fix recall defaults based on feedback from production clusters
- 04:08 PM Bug #46434 (Resolved): osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:08 PM Bug #46906 (Resolved): mds: fix file recovery crash after replaying delayed requests
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:07 PM Bug #48076 (Resolved): client: ::_read fails to advance pos at EOF checking
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:40 PM Bug #47294 (Resolved): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- 11:09 AM Bug #48873 (Triaged): test_cluster_set_reset_user_config: AssertionError: NFS Ganesha cluster dep...
- https://pulpito.ceph.com/swagner-2021-01-13_11:19:08-rados:cephadm-wip-swagner3-testing-2021-01-12-1316-distro-basic-...
01/13/2021
- 06:45 PM Bug #48863: cephfs-shell should allow changing all mode bits
- The issue is that you can't currently set S_ISUID, S_ISGID or S_ISVTX. We should allow that within cephfs-shell.
I...
- 06:44 PM Bug #48863 (Resolved): cephfs-shell should allow changing all mode bits
- Currently, cephfs-shell says:...
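The bits in question can be illustrated with a plain-Python mode parser (standard `stat` module usage, not the cephfs-shell implementation): a chmod-style 4-digit octal string carries setuid, setgid, and sticky on top of the usual permission bits.

```python
import stat

def parse_mode(octal_str):
    """Parse a chmod-style octal string, keeping the special bits."""
    mode = int(octal_str, 8)
    return {
        "setuid": bool(mode & stat.S_ISUID),  # 0o4000
        "setgid": bool(mode & stat.S_ISGID),  # 0o2000
        "sticky": bool(mode & stat.S_ISVTX),  # 0o1000
        "perms": mode & 0o777,
    }

m = parse_mode("4755")
assert m["setuid"] and not m["setgid"] and not m["sticky"]
assert m["perms"] == 0o755
assert parse_mode("1777")["sticky"]  # e.g. a /tmp-style sticky directory
```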
- 04:24 PM Bug #48808 (Resolved): mon/MDSMonitor: `fs rm` is not idempotent
- 04:23 PM Bug #48834 (Resolved): qa: MDS_SLOW_METADATA_IO with osd thrasher
- 03:45 PM Backport #48859 (Resolved): nautilus: pybind/mgr/volumes: inherited snapshots should be filtered ...
- https://github.com/ceph/ceph/pull/39292
- 03:45 PM Backport #48858 (Resolved): octopus: pybind/mgr/volumes: inherited snapshots should be filtered o...
- https://github.com/ceph/ceph/pull/39390
- 03:44 PM Bug #48501 (Pending Backport): pybind/mgr/volumes: inherited snapshots should be filtered out of ...
- 01:02 PM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- Andras Sali wrote:
> Really looking forward to getting this fix with 15.2.9 - is there a planned release date?
@A...
- 03:42 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Today I was able to simultaneously shut down all clients, which I did. The write operations were still happening, eve...
01/12/2021
- 04:59 PM Documentation #48838 (Resolved): document ms_mode options in mount.ceph manpage
- 04:19 PM Backport #47095: octopus: mds: provide alternatives to increase the total cephfs subvolume snapsh...
- Really looking forward to getting this fix with 15.2.9 - is there a planned release date?
- 03:29 PM Bug #48770 (Resolved): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClusterResize)"
- 01:46 PM Bug #47294 (Fix Under Review): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Fix at: https://github.com/ceph/ceph/pull/38858
Thanks for spotting that.
- 12:05 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- That is probably the underlying cause. I'll push a patch fixing that tomorrow and then a bunch of tests and see if th...
- 08:40 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Small question, why do we see client_request(mds.0:61937457 setxattr #0x10006a64656 ceph.quota caller_uid=0, caller_g...
- 05:54 AM Bug #48778: Setting quota triggered ops storms on meta pool
- Thank you for your reply, Patrick.
I already suspected the requests might come from the clients, but my initial de...
01/11/2021
- 11:48 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- By the way, something odd I noticed today was:
https://github.com/ceph/ceph/blob/d20916964984242e513a645bd275fad89...
- 11:42 PM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Patrick Donnelly wrote:
> Adam Emerson wrote:
> > This doesn't look quite the same? Before we were having the opera...
- 07:28 PM Bug #48839 (Resolved): qa: Error: Unable to find a match: cephfs-top
- My fault with bad scheduling.
- 06:58 PM Bug #48839 (Resolved): qa: Error: Unable to find a match: cephfs-top
- ...
- 06:40 PM Documentation #48838 (In Progress): document ms_mode options in mount.ceph manpage
- 06:37 PM Documentation #48838 (Resolved): document ms_mode options in mount.ceph manpage
- We recently merged a patch to support the ms_mode= option in mount.ceph. Document the new option in the manpage.
- 06:20 PM Backport #48837 (Resolved): nautilus: have mount helper pick appropriate mon sockets for ms_mode ...
- https://github.com/ceph/ceph/pull/39133
- 06:20 PM Backport #48836 (Resolved): octopus: have mount helper pick appropriate mon sockets for ms_mode v...
- https://github.com/ceph/ceph/pull/40763
- 06:18 PM Bug #48835 (New): qa: add ms_mode random choice to kclient tests
- Building on https://github.com/ceph/ceph/pull/38788 (#48765): modify kernel_client.py to conditionally set ms_mode fo...
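A minimal sketch of the idea (hypothetical helper, not the actual kernel_client.py change): pick one ms_mode value per mount so test runs exercise different messenger modes. The value list assumes the modes the kernel mount option accepts.

```python
import random

# Hypothetical sketch of conditionally choosing an ms_mode per kclient
# mount; the list of modes is an assumption about what the kernel
# mount option accepts.
MS_MODES = ["legacy", "crc", "secure", "prefer-crc", "prefer-secure"]

def pick_ms_mode(rng=random):
    return rng.choice(MS_MODES)

opt = "ms_mode={}".format(pick_ms_mode())
assert opt.split("=", 1)[1] in MS_MODES
```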
- 06:15 PM Bug #48765 (Pending Backport): have mount helper pick appropriate mon sockets for ms_mode value
- 06:13 PM Bug #48661 (Resolved): mds: reserved can be set on feature set
- 06:11 PM Bug #48834 (Fix Under Review): qa: MDS_SLOW_METADATA_IO with osd thrasher
- 06:09 PM Bug #48834 (Resolved): qa: MDS_SLOW_METADATA_IO with osd thrasher
- ...
- 06:02 PM Bug #48833 (New): snap_rm hang during osd thrashing
- ...
- 05:47 PM Bug #48832: qa: fsstress w/ valgrind causes MDS to be blocklisted
- Similar one: /ceph/teuthology-archive/pdonnell-2021-01-10_18:07:36-fs-wip-pdonnell-testing-20210110.050947-distro-bas...
- 05:46 PM Bug #48832 (New): qa: fsstress w/ valgrind causes MDS to be blocklisted
- ...
- 05:42 PM Bug #48831 (New): qa: ERROR: test_snapclient_cache
- ...
- 05:40 PM Bug #48830 (Resolved): pacific: qa: :ERROR: test_idempotency
- ...
- 05:37 PM Bug #48772: qa: pjd: not ok 9, 44, 80
- Also k-stock:...
- 02:50 PM Bug #48772 (Triaged): qa: pjd: not ok 9, 44, 80
- Both tests are with the kernel's testing branch.
- 05:11 PM Bug #44100: cephfs rsync kworker high load.
- I also encountered this on Ubuntu 20.04 @Linux 5.4.0-60-generic #67-Ubuntu SMP Tue Jan 5 18:31:36 UTC 2021@
The clus... - 04:39 PM Feature #48602 (Resolved): `cephfs-top` frontend utility
- 04:30 PM Bug #42887 (Won't Fix): tasks.cephfs.test_volume_client.TestVolumeClient: test_21501 No such file...
- 04:30 PM Bug #42061 (Won't Fix): volume_client: AssertionError: 237 != 8
- 04:28 PM Bug #42724 (Won't Fix): pybind/mgr/volumes: confirm backwards-compatibility of ceph_volume_client.py
- 02:58 PM Bug #48760 (Triaged): qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- 02:56 PM Bug #48763 (Need More Info): mds memory leak
- Can you set `debug mds = 10` during the event so we can get an idea what the MDS is doing during this time, assuming ...
- 02:54 PM Bug #48766 (Triaged): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVo...
- 02:47 PM Bug #48773 (Triaged): qa: scrub does not complete
- 02:45 PM Bug #48778 (Need More Info): Setting quota triggered ops storms on meta pool
- This doesn't sound like a bug we'd expect. I think you may have some clients executing these setxattr requests but I'...
- 08:53 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- Tim Serong wrote:
> How was your test cluster deployed (vstart/cstart/something else)? I'd like to see if I'm able ...
- 06:55 AM Bug #47787: mgr/nfs: exercise host-level HA of NFS-Ganesha by killing the process
- I tried reproducing this (admittedly with a downstream SUSE build), and TERM seemed to have no effect, while KILL *di...
- 08:03 AM Feature #48404 (In Progress): client: add a ceph.caps vxattr
- 08:02 AM Backport #48195: nautilus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvo...
- Patrick/Ramana,
I am planning to backport this along with [1] which is under review. The fix [1] persists the auth...
- 08:02 AM Backport #48196: octopus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvol...
- Patrick/Ramana,
I am planning to backport this along with [1] which is under review. The fix [1] persists the auth...
01/10/2021
- 05:04 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Adam Emerson wrote:
> This doesn't look quite the same? Before we were having the operation file with a definite tim...
- 04:47 AM Bug #47294: client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- This doesn't look quite the same? Before we were having the operation file with a definite timeout and I can't find t...
01/09/2021
- 03:10 AM Bug #48811 (Closed): qa: fs/snaps/snaptest-realm-split.sh hang
- https://github.com/ceph/ceph/pull/38732
- 03:07 AM Bug #48757 (Resolved): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.f"
01/08/2021
- 11:20 PM Backport #48814 (Resolved): nautilus: mds: spurious wakeups in cache upkeep
- https://github.com/ceph/ceph/pull/39130
- 11:20 PM Backport #48813 (Resolved): octopus: mds: spurious wakeups in cache upkeep
- https://github.com/ceph/ceph/pull/40743
- 11:16 PM Bug #48753 (Pending Backport): mds: spurious wakeups in cache upkeep
- 11:14 PM Bug #48707 (Resolved): client: unmount() doesn't dump the cache
- 11:07 PM Bug #48812 (Resolved): qa: test_scrub_pause_and_resume_with_abort failure
- ...
- 11:06 PM Bug #48365: qa: ffsb build failure on CentOS 8.2
- Haven't seen this failure recently but did get this:...
- 11:00 PM Bug #48811: qa: fs/snaps/snaptest-realm-split.sh hang
- Probably related group of failures:...
- 10:57 PM Bug #48811 (Closed): qa: fs/snaps/snaptest-realm-split.sh hang
- ...
- 08:12 PM Bug #48808 (Fix Under Review): mon/MDSMonitor: `fs rm` is not idempotent
- 08:04 PM Bug #48808 (Resolved): mon/MDSMonitor: `fs rm` is not idempotent
- ...
- 07:27 PM Bug #48756 (Resolved): qa: kclient does not synchronously write with O_DIRECT
- 07:26 PM Fix #48121 (Resolved): qa: merge fs/multimds suites
- 06:47 PM Bug #48702 (Resolved): qa: fwd_scrub should only scrub rank 0
- 06:39 PM Bug #48514 (Resolved): mgr/nfs: Don't prefix 'ganesha-' to cluster id
- 06:37 PM Bug #47294 (Need More Info): client: thread hang in Client::_setxattr_maybe_wait_for_osdmap
- Looks like this is still here, maybe more racy than before:...
- 06:26 PM Bug #48805 (Resolved): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blog...
- ...
- 06:19 PM Feature #18513 (Resolved): MDS: scrub: forward scrub reports missing backtraces on new files as d...
- This is fixed in the current code.
- 04:38 PM Fix #48802 (Resolved): mds: define CephFS errors that replace standard errno values
- CephFS protocol depends on the errno numbers which may vary by operating system. Copy the Linux values we use into in...
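The motivation: errno numbers differ across operating systems (for instance, ENAMETOOLONG is 36 on Linux but 63 on FreeBSD), so the on-wire error codes must be pinned to one set of values. A hedged sketch of the idea; the constant names are illustrative, not the ones introduced by the fix.

```python
# Sketch: pin on-wire error codes to Linux errno values so they no
# longer depend on the host OS. Names are hypothetical.
CEPHFS_EPERM = 1
CEPHFS_ENOENT = 2
CEPHFS_EIO = 5
CEPHFS_EINVAL = 22
CEPHFS_ENAMETOOLONG = 36   # Linux value; FreeBSD uses 63

_HOST_TO_WIRE = {
    # On a non-Linux build, host errno values would be translated to the
    # fixed wire values before going over the protocol.
    63: CEPHFS_ENAMETOOLONG,  # e.g. FreeBSD ENAMETOOLONG -> wire value
}

def host_errno_to_wire(err):
    return _HOST_TO_WIRE.get(err, err)

assert host_errno_to_wire(63) == 36
assert host_errno_to_wire(2) == CEPHFS_ENOENT
```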
01/07/2021
- 09:58 PM Feature #48791: mds: support file block size
- Technically, I think the blocksize can be anything >= 8 bytes or so. Too small or large a block will be cumbersome to...
- 09:53 PM Feature #48791 (Rejected): mds: support file block size
- The new fscrypt feature in the kernel client that is under development needs to be able to prevent the MDS from trunc...
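Since fscrypt encrypts file contents in fixed-size blocks, a truncate must land on a block boundary; a small sketch of the rounding involved (illustrative only, assuming a power-of-two block size, not the actual MDS/client code):

```python
# Illustration of the constraint: with fscrypt, contents are encrypted
# in fixed-size blocks, so the MDS must not truncate to an arbitrary
# byte offset; the client would round up to the enclosing block
# boundary and handle the tail itself.
def round_up_to_block(size, blocksize):
    assert blocksize > 0 and (blocksize & (blocksize - 1)) == 0, \
        "power-of-two block size assumed for this sketch"
    return (size + blocksize - 1) & ~(blocksize - 1)

assert round_up_to_block(0, 4096) == 0
assert round_up_to_block(1, 4096) == 4096
assert round_up_to_block(4096, 4096) == 4096
assert round_up_to_block(5000, 4096) == 8192
```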
- 05:57 PM Bug #48203 (Resolved): qa: quota failure
- Luis Henriques wrote:
> Patrick Donnelly wrote:
> > Hey Luis, I think this is still broken; the revert didn't work:...
- 04:42 PM Bug #48203: qa: quota failure
- Patrick Donnelly wrote:
> Hey Luis, I think this is still broken; the revert didn't work: https://pulpito.ceph.com/t...
- 03:50 PM Bug #48203 (Need More Info): qa: quota failure
- Hey Luis, I think this is still broken; the revert didn't work: https://pulpito.ceph.com/teuthology-2021-01-03_03:15:...
- 12:22 PM Backport #48457 (Resolved): nautilus: client: fix crash when doing remount in none fuse case
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38467
m...
- 12:22 PM Backport #48110 (Resolved): nautilus: client: ::_read fails to advance pos at EOF checking
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37991
m...
- 12:21 PM Backport #48097 (Resolved): nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37988
m...
- 12:21 PM Backport #48095 (Resolved): nautilus: mds: fix file recovery crash after replaying delayed requests
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37986
m...
- 05:28 AM Bug #48778 (Need More Info): Setting quota triggered ops storms on meta pool
- A few days ago, I toyed with setting a quota on a handful (less than 60) directories on my CephFS filesystem. At the ...
01/06/2021
- 11:20 PM Backport #48457: nautilus: client: fix crash when doing remount in none fuse case
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38467
merged
- 11:20 PM Backport #48110: nautilus: client: ::_read fails to advance pos at EOF checking
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37991
merged
- 11:19 PM Backport #48097: nautilus: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37988
merged
- 11:19 PM Backport #48095: nautilus: mds: fix file recovery crash after replaying delayed requests
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37986
merged
- 09:35 PM Documentation #48585 (Resolved): mds_cache_trim_decay_rate misnamed?
- 08:31 PM Bug #48773 (In Progress): qa: scrub does not complete
- ...
- 08:29 PM Bug #48772 (Need More Info): qa: pjd: not ok 9, 44, 80
- ...
- 08:25 PM Bug #48771 (New): qa: iogen: workload fails to cause balancing
- Not really a bug but it causes a test failure and is worthy of investigation:...
- 07:54 PM Bug #48765 (Fix Under Review): have mount helper pick appropriate mon sockets for ms_mode value
- 02:13 PM Bug #48765 (Resolved): have mount helper pick appropriate mon sockets for ms_mode value
- Ilya recently added msgr2 support to the kclient, but the mount helper still ignores any v2 addresses when mounting. ...
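The selection logic being asked for can be sketched in plain Python (hypothetical helper, not the actual mount.ceph code): given monitor addresses tagged with their protocol version, keep v1 sockets for the legacy messenger and v2 sockets otherwise.

```python
# Hypothetical sketch of the mount-helper selection logic: pick monitor
# sockets matching the requested ms_mode. The v1/v2 tagging and the
# "legacy means v1" rule are assumptions for illustration.
def pick_mon_addrs(addrs, ms_mode):
    want = "v1" if ms_mode == "legacy" else "v2"
    return [hostport for ver, hostport in addrs if ver == want]

mons = [("v2", "192.168.0.1:3300"), ("v1", "192.168.0.1:6789"),
        ("v2", "192.168.0.2:3300"), ("v1", "192.168.0.2:6789")]

assert pick_mon_addrs(mons, "legacy") == ["192.168.0.1:6789", "192.168.0.2:6789"]
assert pick_mon_addrs(mons, "crc") == ["192.168.0.1:3300", "192.168.0.2:3300"]
```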
- 07:12 PM Bug #48770 (Fix Under Review): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClust...
- 06:56 PM Bug #48770 (Resolved): qa: "Test failure: test_hole (tasks.cephfs.test_failover.TestClusterResize)"
- ...
- 05:50 PM Bug #48766: qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.TestVolumeClient)
- master failure: /ceph/teuthology-archive/teuthology-2020-12-27_03:15:03-fs-master-distro-basic-smithi/5738903/teuthol...
- 02:23 PM Bug #48766 (Duplicate): qa: Test failure: test_evict_client (tasks.cephfs.test_volume_client.Test...
- test_evict_client fails in [1] and [2].
[1] http://qa-proxy.ceph.com/teuthology/jcollin-2021-01-05_16:12:23-fs-wip...
- 03:02 PM Bug #48760: qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- failure in master:
/ceph/teuthology-archive/teuthology-2021-01-05_03:15:02-fs-master-distro-basic-smithi/5754681/t...
- 04:37 AM Bug #48760 (Can't reproduce): qa: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- Job 5756405 [1] fails with below error.
[1] http://qa-proxy.ceph.com/teuthology/jcollin-2021-01-05_16:12:23-fs-wip...
- 02:51 PM Documentation #48531 (Resolved): doc/cephfs: "ceph fs new" command is, ironically, old. The new (...
- 02:20 PM Feature #44928 (Fix Under Review): mgr/volumes: evict clients based on auth ID and subvolume mounted
- 01:51 PM Bug #45344: doc: Table Of Contents doesn't work
- I spoke to Patrick (the creator and owner of the CephFS documentation) about this, and for the time being, the rst fi...
- 01:43 PM Bug #48763: mds memory leak
- ...
- 09:47 AM Bug #48763 (Need More Info): mds memory leak
- I have a possible memory leak in the 14.2.10 MDS. The MDS suddenly uses 107 GB of RAM, up from around 70 GB before.
Once mds starts its eat...
- 12:57 PM Bug #44565 (In Progress): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || st...
01/05/2021
- 09:45 PM Bug #48756 (Fix Under Review): qa: kclient does not synchronously write with O_DIRECT
- 08:15 PM Bug #48756: qa: kclient does not synchronously write with O_DIRECT
- Trying to reproduce on master: https://pulpito.ceph.com/pdonnell-2021-01-05_20:15:07-fs:workload-master-distro-basic-...
- 08:09 PM Bug #48756 (Resolved): qa: kclient does not synchronously write with O_DIRECT
- ...
- 08:35 PM Bug #48757 (Fix Under Review): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon md...
- 08:30 PM Bug #48757 (Resolved): qa: "[WRN] Replacing daemon mds.d as rank 0 with standby daemon mds.f"
- /ceph/teuthology-archive/pdonnell-2020-12-24_22:49:03-fs:workload-wip-pdonnell-testing-20201224.195406-distro-basic-s...
- 05:07 PM Bug #48753 (Fix Under Review): mds: spurious wakeups in cache upkeep
- 05:06 PM Bug #48753 (Resolved): mds: spurious wakeups in cache upkeep
- ...
- 04:39 PM Documentation #48731 (Resolved): mgr/nfs: Add info related to rook, clarify pseudo path and dashb...
- 03:02 PM Feature #46074 (Resolved): mds: provide alternatives to increase the total cephfs subvolume snaps...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:00 PM Bug #48555 (Resolved): pybind/ceph_volume_client: allows authorize on auth_ids not created throug...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:08 PM Documentation #48531 (Fix Under Review): doc/cephfs: "ceph fs new" command is, ironically, old. T...
01/04/2021
- 02:44 PM Bug #48673 (Need More Info): High memory usage on standby replay MDS
- 02:44 PM Bug #48711 (Triaged): mds: standby-replay mds abort when replay metablob
- 11:46 AM Feature #45746 (In Progress): mgr/nfs: Add interface to update export
- 10:09 AM Feature #48736 (Fix Under Review): qa: enable debug loglevel kclient test suites
- 04:24 AM Feature #48736 (In Progress): qa: enable debug loglevel kclient test suites
- 03:43 AM Feature #48736 (Resolved): qa: enable debug loglevel kclient test suites
- This is helpful when debugging and resolving bugs.
12/31/2020
- 12:25 PM Documentation #48731 (In Progress): mgr/nfs: Add info related to rook, clarify pseudo path and da...
- 12:19 PM Documentation #48731 (Resolved): mgr/nfs: Add info related to rook, clarify pseudo path and dashb...
12/30/2020
- 01:46 AM Bug #48679 (Fix Under Review): client: items pinned in cache preventing unmount
- 01:35 AM Bug #48679: client: items pinned in cache preventing unmount
- Xiubo Li wrote:
> For example for the inode 0x10000000e51:
>
> [...]
>
> Because it has the Fb cap, so the flu...
12/28/2020
- 09:27 AM Backport #47085 (Resolved): octopus: common: validate type CephBool cause 'invalid command json'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37362
m...
- 09:26 AM Backport #47095 (Resolved): octopus: mds: provide alternatives to increase the total cephfs subvo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38553
m...
- 09:25 AM Backport #48372 (Resolved): octopus: client: dump which fs is used by client for multiple-fs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38551
m...