Activity
From 04/11/2021 to 05/10/2021
05/10/2021
- 05:02 PM Bug #50719: xattr returning from the dead (sic!)
- What kernel version are you running this on? Is this something easily reproducible, or does it take a while?
There...
- 01:40 PM Bug #50719 (Triaged): xattr returning from the dead (sic!)
- 05:31 AM Bug #50719 (Need More Info): xattr returning from the dead (sic!)
- Hi Ceph folks,
slow from the Samba team here. :)
I'm investigating a problem at a customer site where xattr dat...
- 04:46 PM Support #49116: written io continuous high occupancy
- Suggest turning up debugging to see what the MDS is doing.
- 02:45 PM Backport #49471: nautilus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40713
merged - 02:30 PM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
- More detail:...
- 09:35 AM Bug #50389 (Fix Under Review): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such ...
There is one rare case where, when the mds daemon receives a new mdsmap
and during decoding it, the metadata_pool will be ...
- 01:48 PM Bug #50622 (Triaged): msg: active_connections regression
- 01:45 PM Bug #50695 (Need More Info): nautilus: qa: Test failure: test_kill_mdstable (tasks.cephfs.test_sn...
- 01:43 PM Bug #50696: nautilus: qa: multimds/thrash tasks/cfuse_workunit_suites_fsstress failure
- This was probably fixed recently for Octopus/Pacific. This one doesn't look to be worth investigating further as Naut...
05/09/2021
05/08/2021
- 07:53 PM Backport #46480: nautilus: mds: send scrub status to ceph-mgr only when scrub is running (or paus...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36183
merged
- 02:01 PM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
- Checked all the possible logs in osd/mon/mds and the related code, and have compared the normal logs, the sequence ar...
05/07/2021
- 10:09 PM Bug #50696 (Won't Fix): nautilus: qa: multimds/thrash tasks/cfuse_workunit_suites_fsstress failure
- See, https://pulpito.ceph.com/yuriw-2021-05-04_15:32:03-multimds-wip-yuri3-testing-2021-04-29-1036-nautilus-distro-ba...
- 09:27 PM Bug #50695 (Need More Info): nautilus: qa: Test failure: test_kill_mdstable (tasks.cephfs.test_sn...
- See this here,
https://pulpito.ceph.com/yuriw-2021-05-04_15:32:03-multimds-wip-yuri3-testing-2021-04-29-1036-nautilu...
- 07:36 PM Bug #50546: nautilus: qa: 'The following counters failed to be set on mds daemons: {''mds.importe...
- See again here, https://pulpito.ceph.com/yuriw-2021-05-04_15:32:03-multimds-wip-yuri3-testing-2021-04-29-1036-nautilu...
- 04:01 AM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
The cephfs_metadata pool was created since osdmap v22:...
- 02:05 AM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
- Checked the mds log:...
- 02:19 AM Bug #47041 (Resolved): MDS recall configuration options not documented yet
- https://docs.ceph.com/en/latest/cephfs/cache-configuration/#mds-recall
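For reference, a minimal sketch of inspecting and tuning one of the now-documented recall options with the ceph config CLI (the option name mds_recall_max_caps and the value shown are illustrative assumptions; see the linked page for the authoritative list):

    # show the current recall throttle that applies to MDS daemons (assumed option name)
    ceph config get mds mds_recall_max_caps
    # raise it cluster-wide; 30000 is only an example value
    ceph config set mds mds_recall_max_caps 30000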
05/06/2021
- 05:16 AM Bug #48673: High memory usage on standby replay MDS
- Hi Patrick.
I've tried to run the cluster with both settings for 24 hours each. It became slightly worse, but that...
- 01:07 AM Bug #42516: mds: some mutations have initiated (TrackedOp) set to 0
- I checked Migrator.cc for creation of MutationImpl object and setting of its TrackedOp initiated_at attribute mention...
05/05/2021
- 10:31 PM Bug #42516 (In Progress): mds: some mutations have initiated (TrackedOp) set to 0
- 03:29 PM Bug #49672 (Resolved): nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- 12:43 PM Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
- Requires PR https://github.com/ceph/ceph/pull/40885 for fully fixing the failed test.
- 12:41 PM Bug #50224 (Fix Under Review): qa: test_mirroring_init_failure_with_recovery failure
05/04/2021
- 02:51 PM Backport #50632 (In Progress): pacific: mds: failure replaying journal (EMetaBlob)
- 12:50 AM Backport #50632 (Resolved): pacific: mds: failure replaying journal (EMetaBlob)
- https://github.com/ceph/ceph/pull/40855
- 02:00 PM Backport #50634 (In Progress): nautilus: mds: failure replaying journal (EMetaBlob)
- 12:50 AM Backport #50634 (Resolved): nautilus: mds: failure replaying journal (EMetaBlob)
- https://github.com/ceph/ceph/pull/41144
- 01:57 PM Backport #50633 (In Progress): octopus: mds: failure replaying journal (EMetaBlob)
- 12:50 AM Backport #50633 (Resolved): octopus: mds: failure replaying journal (EMetaBlob)
- https://github.com/ceph/ceph/pull/40743
- 08:45 AM Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
- Tested with https://github.com/ceph/ceph/pull/40885 and the failures due to blocked updated thread have gone away: ht...
- 12:55 AM Backport #50636 (Resolved): pacific: session dump includes completed_requests twice, once as an i...
- https://github.com/ceph/ceph/pull/42057
- 12:55 AM Backport #50635 (Resolved): octopus: session dump includes completed_requests twice, once as an i...
- https://github.com/ceph/ceph/pull/41625
- 12:53 AM Bug #50559 (Pending Backport): session dump includes completed_requests twice, once as an integer...
- 12:49 AM Bug #50246 (Pending Backport): mds: failure replaying journal (EMetaBlob)
- 12:45 AM Backport #50631 (Resolved): octopus: mds: Error ENOSYS: mds.a started profiler
- https://github.com/ceph/ceph/pull/45155
- 12:45 AM Backport #50630 (Resolved): pacific: mds: Error ENOSYS: mds.a started profiler
- https://github.com/ceph/ceph/pull/42056
- 12:45 AM Backport #50629 (Resolved): pacific: cephfs-mirror: ignore snapshots on parent directories when s...
- https://github.com/ceph/ceph/pull/41475
- 12:44 AM Bug #50442 (Pending Backport): cephfs-mirror: ignore snapshots on parent directories when synchro...
- 12:40 AM Backport #50628 (Resolved): nautilus: client: access(path, X_OK) on non-executable file as root a...
- https://github.com/ceph/ceph/pull/41297
- 12:40 AM Backport #50627 (Resolved): pacific: client: access(path, X_OK) on non-executable file as root al...
- https://github.com/ceph/ceph/pull/41294
- 12:40 AM Backport #50626 (Resolved): octopus: client: access(path, X_OK) on non-executable file as root al...
- https://github.com/ceph/ceph/pull/41295
- 12:40 AM Backport #50625 (Resolved): nautilus: qa: "ls: cannot access 'lost+found': No such file or direct...
- https://github.com/ceph/ceph/pull/40769
- 12:40 AM Bug #50433 (Pending Backport): mds: Error ENOSYS: mds.a started profiler
- 12:40 AM Backport #50624 (Resolved): pacific: qa: "ls: cannot access 'lost+found': No such file or directory"
- https://github.com/ceph/ceph/pull/40856
- 12:40 AM Backport #50623 (Resolved): octopus: qa: "ls: cannot access 'lost+found': No such file or directory"
- https://github.com/ceph/ceph/pull/40768
- 12:38 AM Bug #50216 (Pending Backport): qa: "ls: cannot access 'lost+found': No such file or directory"
- 12:35 AM Bug #50060 (Pending Backport): client: access(path, X_OK) on non-executable file as root always s...
- 12:12 AM Bug #50221: qa: snaptest-git-ceph failure in git diff
- These resulted in hangs:
/ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.04...
05/03/2021
- 11:55 PM Bug #50221: qa: snaptest-git-ceph failure in git diff
- This also looks related, with stock kernel: /ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-tes...
- 11:53 PM Bug #50221: qa: snaptest-git-ceph failure in git diff
- Slightly different failure also with the stock kernel but 3 MDS ranks: /ceph/teuthology-archive/pdonnell-2021-05-01_0...
- 11:44 PM Bug #50622 (Resolved): msg: active_connections regression
- ...
- 10:48 PM Bug #50250: mds: "log [WRN] : Scrub error on inode 0x10000004506 (/client.0/tmp/clients/client3/~...
- /ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-smithi/608...
- 10:28 PM Bug #48773: qa: scrub does not complete
- Another: /ceph/teuthology-archive/pdonnell-2021-05-01_09:07:09-fs-wip-pdonnell-testing-20210501.040415-distro-basic-s...
- 08:50 PM Backport #50255 (Resolved): nautilus: mds: standby-replay only trims cache when it reaches the en...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40744
m...
- 08:50 PM Backport #50179 (Resolved): nautilus: client: only check pool permissions for regular files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40730
m...
- 08:50 PM Backport #50026 (Resolved): nautilus: client: items pinned in cache preventing unmount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40722
m...
- 08:50 PM Backport #49853 (Resolved): nautilus: mds: race of fetching large dirfrag
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40720
m...
- 08:46 PM Backport #49562 (Resolved): nautilus: qa: file system deletion not complete because starter fs al...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40709
m...
- 08:46 PM Backport #49516 (Resolved): nautilus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40704
m...
- 08:46 PM Backport #49473 (Resolved): nautilus: nautilus: qa: "Assertion `cb_done' failed."
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40701
m...
- 08:46 PM Backport #49613 (Resolved): nautilus: qa: racy session evicted check
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40714
m...
- 04:16 PM Bug #48411 (Resolved): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all fail...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:15 PM Bug #49662 (Resolved): ceph-dokan improvements for additional mounts
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:14 PM Bug #49972 (Resolved): mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:14 PM Bug #50020 (Resolved): qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirr...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:14 PM Fix #50045 (Resolved): qa: test standby_replay in workloads
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:13 PM Bug #50305 (Resolved): MDS doesn't set fscrypt flag on new inodes with crypto context in xattr bu...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:03 PM Backport #50285 (Resolved): pacific: qa: test standby_replay in workloads
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40853
m...
- 04:03 PM Backport #50287 (Resolved): pacific: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40852
m...
- 04:03 PM Backport #50253: pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/b...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40825
m...
- 03:59 PM Backport #50086 (Resolved): pacific: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError:...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40688
m...
- 03:59 PM Backport #50180 (Resolved): pacific: client: only check pool permissions for regular files
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40686
m...
- 03:59 PM Backport #50185 (Resolved): pacific: qa: "RADOS object not found (Failed to operate read op for o...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40684
m...
- 03:58 PM Backport #50190 (Resolved): pacific: qa: "Assertion `cb_done' failed."
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40683
m...
- 03:58 PM Backport #50225: pacific: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40682
m...
- 03:58 PM Backport #50127 (Resolved): pacific: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40630
m...
- 03:58 PM Backport #50187 (Resolved): pacific: ceph-dokan improvements for additional mounts
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40627
m...
- 03:55 PM Bug #48673: High memory usage on standby replay MDS
- Daniel Persson wrote:
> Patrick Donnelly wrote:
> > Thanks for the information. There were a few fixes in v15.2.8 r...
- 11:06 AM Bug #48673: High memory usage on standby replay MDS
- Hi,
we are experiencing the same behavior, but with ceph 14.2.18. Memory usage of the standby-replay MDS keeps growi...
- 01:41 PM Bug #50546 (Triaged): nautilus: qa: 'The following counters failed to be set on mds daemons: {''m...
- 01:40 PM Bug #50569 (Won't Fix): nautilus: qa: tasks/cfuse_workunit_suites_fsstress validater/valgrind fai...
- Won't fix since this is probably caused by only using 2 machines for these tests. New QA suite uses 3 nodes. Nautilu...
- 01:38 PM Bug #50570 (Triaged): nautilus: qa: tasks/trim-i22073 cluster [WRN] Health check failed: 1 client...
- 05:04 AM Bug #50224: qa: test_mirroring_init_failure_with_recovery failure
- Hit this again recently: https://pulpito.ceph.com/vshankar-2021-04-30_17:19:54-fs-wip-cephfs-mirror-incremental-sync-...
05/01/2021
- 12:20 AM Backport #50597 (Resolved): pacific: mgr/nfs: Add troubleshooting section
- https://github.com/ceph/ceph/pull/41389
- 12:20 AM Backport #50596 (Rejected): octopus: mgr/nfs: Add troubleshooting section
- 12:16 AM Documentation #50008 (Pending Backport): mgr/nfs: Add troubleshooting section
04/30/2021
- 12:35 PM Backport #50391 (Rejected): pacific: MDS doesn't set fscrypt flag on new inodes with crypto conte...
- Yeah. This is some basic infrastructure that we merged for fscrypt support, but it's going to require more changes be...
- 05:28 AM Feature #50581 (Fix Under Review): cephfs-mirror: allow mirror daemon to connect to local/primary...
04/29/2021
- 06:15 PM Documentation #50008 (In Progress): mgr/nfs: Add troubleshooting section
- 04:50 PM Backport #50255: nautilus: mds: standby-replay only trims cache when it reaches the end of the re...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40744
merged
- 01:43 PM Feature #50581 (Resolved): cephfs-mirror: allow mirror daemon to connect to local/primary cluster...
- This enables Rook to easily deploy and manage cephfs-mirror daemons and mimics how other daemons in Ceph work.
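As a rough illustration, the mirror daemon is started like other Ceph daemons, so an orchestrator such as Rook only needs to pass the usual cluster/id/keyring arguments (the "mirror" id and keyring path below are assumptions, not from the tracker entry):

    # minimal sketch; assumes a client.mirror key with the cephfs-mirror caps already exists
    cephfs-mirror --cluster ceph --id mirror --keyring /etc/ceph/ceph.client.mirror.keyring -f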
- 01:28 PM Bug #50523 (Fix Under Review): Mirroring path "remove" don't not seem to work
- 03:20 AM Feature #47264 (In Progress): "fs authorize" subcommand should work for multiple FSs too
04/28/2021
- 09:36 PM Bug #46883: kclient: ghost kernel mount
- Xiubo Li wrote:
> Ramana Raja wrote:
> > I saw the following failure multiple times in Yuri's nautilus runs,
> > h...
- 01:47 AM Bug #46883: kclient: ghost kernel mount
- Ramana Raja wrote:
> I saw the following failure multiple times in Yuri's nautilus runs,
> https://pulpito.ceph.com... - 01:29 AM Bug #46883: kclient: ghost kernel mount
- I saw the following failure multiple times in Yuri's nautilus runs,
https://pulpito.ceph.com/yuriw-2021-04-21_16:19:...
- 09:25 PM Backport #50179: nautilus: client: only check pool permissions for regular files
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40730
merged
- 09:17 PM Bug #50570 (Won't Fix - EOL): nautilus: qa: tasks/trim-i22073 cluster [WRN] Health check failed: ...
- https://pulpito.ceph.com/yuriw-2021-04-20_21:38:51-fs-wip-yuri8-testing-2021-04-20-0734-nautilus-distro-basic-smithi/...
- 08:53 PM Bug #50569: nautilus: qa: tasks/cfuse_workunit_suites_fsstress validater/valgrind failures
- Maybe this is the same as https://tracker.ceph.com/issues/36685 [paramiko timeout not working for hung process]?
- 08:48 PM Bug #50569 (Won't Fix): nautilus: qa: tasks/cfuse_workunit_suites_fsstress validater/valgrind fai...
- Description: fs/verify/{begin centos_latest clusters/fixed-2-ucephfs conf/{client mds mon osd} mount/fuse objectstore...
- 07:09 PM Backport #50026: nautilus: client: items pinned in cache preventing unmount
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40722
merged
- 07:08 PM Backport #49853: nautilus: mds: race of fetching large dirfrag
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40720
merged
- 05:03 PM Feature #50235: allow cephfs-shell to mount named filesystems
- I would like to work on this ticket. If that's OK, can someone please assign this ticket to me?
- 03:09 PM Bug #50561 (Fix Under Review): cephfs-mirror: incrementally transfer snapshots whenever possible
- 03:08 PM Bug #50561 (Resolved): cephfs-mirror: incrementally transfer snapshots whenever possible
- Currently, each snapshot is synchronized by purging data on remote filesystem under the directory followed by bulk co...
- 12:31 PM Bug #50559 (Fix Under Review): session dump includes completed_requests twice, once as an integer...
- 12:20 PM Bug #50559 (Resolved): session dump includes completed_requests twice, once as an integer and onc...
- ...
- 06:34 AM Bug #15783 (New): client: enable acls by default
- 06:32 AM Bug #49644 (New): vstart_runner: run_ceph_w() doesn't work with shell=True
- 05:33 AM Bug #50010 (Fix Under Review): qa/cephfs: get_key_from_keyfile() return None when key is not foun...
- 03:47 AM Backport #50445 (In Progress): pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- 03:47 AM Backport #50392 (In Progress): pacific: cephfs-top: exception: addwstr() returned ERR
- 01:14 AM Bug #50389 (In Progress): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file ...
04/27/2021
- 11:25 PM Bug #50546 (Won't Fix): nautilus: qa: 'The following counters failed to be set on mds daemons: {'...
- tasks/cephfs_test_exports failed in multi mds suite in nautilus testing as follows,
https://pulpito.ceph.com/yuriw-2... - 02:45 PM Backport #50541 (Resolved): pacific: libcephfs: support file descriptor based *at() APIs
- https://github.com/ceph/ceph/pull/41475
- 02:41 PM Bug #50298 (Pending Backport): libcephfs: support file descriptor based *at() APIs
- 01:25 PM Backport #50539 (Rejected): octopus: mgr/pybind/snap_schedule: do not fail when no fs snapshots a...
- 01:25 PM Backport #50538 (Resolved): pacific: mgr/pybind/snap_schedule: do not fail when no fs snapshots a...
- https://github.com/ceph/ceph/pull/41044
- 01:25 PM Backport #50537 (Resolved): pacific: "ceph fs snapshot mirror daemon status" should not use json ...
- https://github.com/ceph/ceph/pull/41475
- 01:22 PM Bug #50266 (Pending Backport): "ceph fs snapshot mirror daemon status" should not use json keys a...
- 01:22 PM Bug #49837 (Pending Backport): mgr/pybind/snap_schedule: do not fail when no fs snapshots are ava...
- 04:46 AM Bug #50523: Mirroring path "remove" don't not seem to work
- I can reproduce this when the mirror daemon is not running (guessing that's your case too):...
- 03:54 AM Bug #50523 (In Progress): Mirroring path "remove" don't not seem to work
- 02:18 AM Backport #50253 (Resolved): pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cli...
- 02:08 AM Bug #50532 (Resolved): mgr/volumes: hang when removing subvolume when pools are full
- 01:58 AM Backport #50225 (Resolved): pacific: mds: failed to decode message of type 29 v1: void CapInfoPay...
- 12:15 AM Feature #50531 (New): cephfs-top : show `delayed_ranks` in cephfs-top output
- cephfs-top: show `delayed_ranks`, the set of active MDS ranks that are reporting stale metrics, in the cephfs-top output
04/26/2021
- 11:55 PM Backport #49562: nautilus: qa: file system deletion not complete because starter fs already destr...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40709
merged
- 11:54 PM Backport #49516: nautilus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40704
merged
- 11:54 PM Backport #49473: nautilus: nautilus: qa: "Assertion `cb_done' failed."
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40701
merged
- 10:32 PM Backport #49613: nautilus: qa: racy session evicted check
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40714
merged
- 05:18 PM Backport #50285: pacific: qa: test standby_replay in workloads
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40853
merged
- 05:18 PM Backport #50287: pacific: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40852
merged
- 05:17 PM Backport #50253: pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/b...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40825
merged
- 04:58 PM Bug #50530 (Resolved): pacific: client: abort after MDS blocklist
- ...
- 04:36 PM Bug #50528 (New): pacific: qa: fs:thrash: pjd suite not ok 20
- ...
- 04:32 PM Bug #50527 (New): pacific: qa: untar_snap_rm timeout during osd thrashing (ceph-fuse)
- ...
- 03:13 PM Bug #40284: kclient: evaluate/fix/add lazio support in the kernel
- Discussing for Quincy: Jeff is against keeping this in the kernel -- we are not really using it and it's probably inc...
- 02:40 PM Feature #16745: mon: prevent allocating snapids allocated for CephFS
- Milind, this one looks related to what you're investigating.
- 02:31 PM Tasks #38386 (Closed): qa: write kernel fscache tests
- Dup of #6373
- 01:40 PM Bug #50523 (Triaged): Mirroring path "remove" don't not seem to work
- 01:05 PM Bug #50523 (Resolved): Mirroring path "remove" don't not seem to work
- Consider the following:...
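The issue body is truncated above; as a hedged sketch, the path add/remove commands involved look like the following (the filesystem name and path are placeholders):

    ceph fs snapshot mirror add cephfs /path/to/dir      # start mirroring the directory
    ceph fs snapshot mirror remove cephfs /path/to/dir   # removal reportedly does not take effect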
- 10:30 AM Feature #47277: implement new mount "device" syntax for kcephfs
- I did some work on this a while back but never got to posting a patch (or changing the tracker status). I'm reviving t...
- 10:28 AM Feature #47277 (In Progress): implement new mount "device" syntax for kcephfs
- 06:38 AM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- osdstat of osd.6:...
04/24/2021
- 01:48 AM Bug #45434: qa: test_full_fsync (tasks.cephfs.test_full.TestClusterFull) failed
- Ramana Raja wrote:
> see this in octopus testing, https://pulpito.ceph.com/yuriw-2021-02-09_00:31:50-kcephfs-wip-yur...
04/23/2021
- 05:43 PM Bug #50224 (In Progress): qa: test_mirroring_init_failure_with_recovery failure
- Patrick,
PR #41000 is not a fix for this tracker. I've started to look into this today.
PR #41000 is an additional...
- 03:31 PM Bug #50224 (Fix Under Review): qa: test_mirroring_init_failure_with_recovery failure
- 12:30 PM Bug #50224 (In Progress): qa: test_mirroring_init_failure_with_recovery failure
- 02:49 PM Backport #50086: pacific: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all f...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40688
merged - 02:48 PM Backport #50180: pacific: client: only check pool permissions for regular files
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40686
merged - 02:48 PM Backport #50185: pacific: qa: "RADOS object not found (Failed to operate read op for oid cephfs_m...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40684
merged - 02:47 PM Backport #50190: pacific: qa: "Assertion `cb_done' failed."
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40683
merged - 02:46 PM Backport #50225: pacific: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40682
merged - 02:43 PM Backport #50127: pacific: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40630
merged - 02:42 PM Backport #50187: pacific: ceph-dokan improvements for additional mounts
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40627
merged - 05:27 AM Bug #50447 (Fix Under Review): cephfs-mirror: disallow adding a active peered file system back to...
- 03:26 AM Bug #50258 (Fix Under Review): pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- 03:02 AM Bug #50495 (New): libcephfs: shutdown race fails with status 141
- Back from the dead: #43039.
/ceph/teuthology-archive/yuriw-2021-04-21_19:25:00-fs-wip-yuri3-testing-2021-04-21-093...
- 02:48 AM Bug #45145 (Duplicate): qa/test_full: failed to open 'large_file_a': No space left on device
04/22/2021
- 04:25 PM Backport #50488 (Resolved): pacific: mgr/nfs: move nfs code out of volumes plugin
- https://github.com/ceph/ceph/pull/41389
- 04:21 PM Cleanup #50080 (Pending Backport): mgr/nfs: move nfs code out of volumes plugin
- 01:28 PM Backport #50445: pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- Patrick Donnelly wrote:
> Xiubo, please try backporting this.
Sure.
- 01:28 PM Backport #50392: pacific: cephfs-top: exception: addwstr() returned ERR
- Patrick Donnelly wrote:
> Backport this one too please.
Sure.
04/21/2021
- 06:30 PM Feature #50470 (Resolved): cephfs-top: multiple file system support
- Currently only shows one file system (or combines, not sure).
- 01:15 PM Bug #50246 (Fix Under Review): mds: failure replaying journal (EMetaBlob)
In standby_replay, if some dentries were just added/linked but did not get a
chance to replay the EOpen journals followed, if...
- 07:13 AM Bug #50246: mds: failure replaying journal (EMetaBlob)
- ...
- 05:04 AM Bug #50246: mds: failure replaying journal (EMetaBlob)
- The [dir 0x10000000a35 /client.0/tmp/t/linux-5.4/Documentation/devicetree/bindings/hwmon/] was trimmed from the subtr...
- 12:47 AM Bug #50246: mds: failure replaying journal (EMetaBlob)
- Patrick Donnelly wrote:
> Xiubo Li wrote:
> > Before replaying the journal, the inode 0x10000000a35 was removed fro...
- 12:03 PM Bug #50442 (Fix Under Review): cephfs-mirror: ignore snapshots on parent directories when synchro...
- 09:06 AM Bug #50442 (In Progress): cephfs-mirror: ignore snapshots on parent directories when synchronizin...
- 02:25 AM Bug #50442: cephfs-mirror: ignore snapshots on parent directories when synchronizing snapshots
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Patrick Donnelly wrote:
> > > Venky Shankar wrote:
> > > > Ign...
- 02:20 AM Bug #50442: cephfs-mirror: ignore snapshots on parent directories when synchronizing snapshots
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Venky Shankar wrote:
> > > Ignore syncing snapshots starting wi...
- 02:09 AM Bug #50442: cephfs-mirror: ignore snapshots on parent directories when synchronizing snapshots
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Ignore syncing snapshots starting with an underscore since they ...
- 02:05 AM Bug #50442: cephfs-mirror: ignore snapshots on parent directories when synchronizing snapshots
- Venky Shankar wrote:
> Ignore syncing snapshots starting with an underscore since they are internally used by CephFS...
- 09:10 AM Bug #46985 (Resolved): common: validate type CephBool cause 'invalid command json'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:10 AM Bug #46988 (Resolved): mds: 'forward loop' when forward_all_requests_to_auth is set
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:09 AM Documentation #48010 (Resolved): doc: document MDS recall configurations
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:08 AM Bug #49511 (Resolved): qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:07 AM Bug #49833 (Resolved): MDS should return -ENODATA when asked to remove xattr that doesn't exist
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:05 AM Backport #49903 (Resolved): nautilus: mgr/volumes: setuid and setgid file bits are not retained a...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40270
m...
- 07:09 AM Cleanup #50450 (New): mgr/nfs: Simplify the parsing of Ganesha Conf using existing pseudo-parsers
- More discussion here https://github.com/ceph/ceph/pull/40526#discussion_r607171126
- 06:58 AM Feature #50449 (Fix Under Review): mgr/nfs: Add unit tests for conf parser and others
- 06:26 AM Backport #50173: pacific: mgr/nfs: validation error on creating custom export
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40687
m...
- 12:03 AM Backport #50173 (Resolved): pacific: mgr/nfs: validation error on creating custom export
- 06:21 AM Backport #50015 (Resolved): pacific: qa: "AttributeError: 'NoneType' object has no attribute 'mon...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/40645
m...
- 05:49 AM Feature #50448 (New): cephfs-mirror: easy repeering
- Right now re-adding a file system as a peer to another file system requires that the cluster admin/operator cleanup s...
- 04:18 AM Bug #50447 (Resolved): cephfs-mirror: disallow adding a active peered file system back to its source
- cephfs-mirror is unidirectional atm. So, if (cluster2, fs) is an active peer for (cluster1, fs), disallow adding (clu...
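In other words, once a peer relationship exists in one direction, configuring the reverse direction should be rejected; a hedged sketch with placeholder cluster and filesystem names:

    # on cluster1: mirror "cephfs" to cluster2 (allowed)
    ceph fs snapshot mirror peer_add cephfs client.mirror_remote@cluster2 cephfs
    # on cluster2: re-adding cluster1 as a peer for the same filesystem should now be refused
    ceph fs snapshot mirror peer_add cephfs client.mirror_remote@cluster1 cephfs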
- 12:42 AM Backport #50392 (Need More Info): pacific: cephfs-top: exception: addwstr() returned ERR
- Backport this one too please.
- 12:42 AM Backport #50445 (Need More Info): pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- Xiubo, please try backporting this.
04/20/2021
- 09:06 PM Bug #50433 (Fix Under Review): mds: Error ENOSYS: mds.a started profiler
- 05:53 AM Bug #50433 (In Progress): mds: Error ENOSYS: mds.a started profiler
- 05:53 AM Bug #50433: mds: Error ENOSYS: mds.a started profiler
- The "./bin/ceph tell mds.a heap XXX" commands succeeded, but still returning -ENOSYS errno.
- 05:51 AM Bug #50433 (Resolved): mds: Error ENOSYS: mds.a started profiler
- ...
- 09:05 PM Backport #50445 (Resolved): pacific: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- https://github.com/ceph/ceph/pull/41052
- 09:02 PM Bug #49536 (Pending Backport): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- We can try backporting this to pacific but I don't think it's worth the trouble farther than that.
- 09:00 PM Bug #50387: client: fs/snaps failure
- /ceph/teuthology-archive/pdonnell-2021-04-20_02:26:05-fs:thrash-master-distro-basic-smithi/6060039/teuthology.log
...
- 06:12 PM Backport #49903: nautilus: mgr/volumes: setuid and setgid file bits are not retained after a subv...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40270
merged
- 05:56 PM Bug #44384 (Can't reproduce): qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.Test...
- Jeff Layton wrote:
> Lowering priority to Normal. Patrick have there been any more occurrences of this?
I have no...
- 12:59 PM Bug #44384: qa: FAIL: test_evicted_caps (tasks.cephfs.test_client_recovery.TestClientRecovery)
- Lowering priority to Normal. Patrick have there been any more occurrences of this?
- 05:43 PM Bug #50442 (Resolved): cephfs-mirror: ignore snapshots on parent directories when synchronizing s...
- Ignore syncing snapshots starting with an underscore since they are internally used by CephFS.
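To illustrate (a hedged sketch with placeholder mount point and names): a snapshot taken on an ancestor directory shows up in a descendant's .snap directory with a leading underscore, and those are the entries the mirror daemon should skip:

    mkdir /mnt/cephfs/parent/.snap/snap1     # user snapshot on the parent directory
    ls /mnt/cephfs/parent/child/.snap        # lists an internal entry like _snap1_<inode>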
- 04:54 PM Bug #50246: mds: failure replaying journal (EMetaBlob)
- Xiubo Li wrote:
> Before replaying the journal, the inode 0x10000000a35 was removed from the inode_map in upkeep thr... - 08:15 AM Bug #50246: mds: failure replaying journal (EMetaBlob)
Before replaying the journal, the inode 0x10000000a35 was removed from the inode_map in upkeep thread:...
- 02:06 PM Bug #50266 (Fix Under Review): "ceph fs snapshot mirror daemon status" should not use json keys a...
- 07:17 AM Bug #50266: "ceph fs snapshot mirror daemon status" should not use json keys as value
- BTW, this is how the json would look for multiple active mirror daemon instances (yeh, we only support running one in...
- 07:41 AM Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
- Jeff Layton wrote:
> How do you reproduce this on the kclient? The MDS would have to give this inode number as the r...
- 05:25 AM Bug #47276 (In Progress): MDSMonitor: add command to rename file systems
04/19/2021
- 10:44 PM Bug #50281 (Resolved): qa: untar_snap_rm timeout
- Considering this resolved -- thanks Jeff.
- 08:55 PM Bug #50281: qa: untar_snap_rm timeout
- test run: https://pulpito.ceph.com/pdonnell-2021-04-19_20:54:09-fs:snaps-master-distro-basic-smithi/
- 07:44 PM Bug #50281: qa: untar_snap_rm timeout
- Ok, I think I see the issue. We had a patch that had done a conversion from atomic_t to refcount_t for some objects, ...
- 05:25 PM Bug #50281: qa: untar_snap_rm timeout
- Same problem there:...
- 05:20 PM Bug #50281: qa: untar_snap_rm timeout
- /ceph/teuthology-archive/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/604...
- 04:38 PM Bug #50281: qa: untar_snap_rm timeout
- Looks like the client kernel hit a GPF. The address appears to be a poisoned list value, so this may be a use-after-f...
- 01:47 PM Bug #50281 (Triaged): qa: untar_snap_rm timeout
- 04:25 PM Backport #50391 (Need More Info): pacific: MDS doesn't set fscrypt flag on new inodes with crypto...
- No longer certain we want to backport this -- Jeff is considering another approach.
- 01:53 PM Bug #50407 (Need More Info): mds_session state is stale after restart all mds
- Hi Wei, can you add more to the description please? Are you working on a PR for this? I couldn't find it on Github.
- 01:51 PM Bug #50389: mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or directory" ...
- I will take it. Thanks.
- 01:45 PM Bug #50387 (Triaged): client: fs/snaps failure
- 11:53 AM Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
- How do you reproduce this on the kclient? The MDS would have to give this inode number as the result of a lookup, I'd...
- 02:40 AM Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
- Fixed it in the kernel client too: https://patchwork.kernel.org/project/ceph-devel/list/?series=469331
Maybe Jeff ... - 02:38 AM Bug #50216 (Fix Under Review): qa: "ls: cannot access 'lost+found': No such file or directory"
- 02:13 AM Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
- For our previous Private-Inodes fixes, we have missed the "lost+found" dir, whose ino is 0x40.head....
- 10:15 AM Bug #50035 (Fix Under Review): cephfs-mirror: use sensible mount/shutdown timeouts
- 04:54 AM Bug #50246 (In Progress): mds: failure replaying journal (EMetaBlob)
04/17/2021
- 01:24 AM Bug #50408 (New): mds_session state is stale after restart all mds daemon
- After restarting all mds daemons, the client-mds session turns into the stale state and the nfs service does not recover although...
- 01:14 AM Bug #50407 (Need More Info): mds_session state is stale after restart all mds
04/16/2021
- 01:38 PM Bug #50237: cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "NetHandler create_...
we ran the following from our mon.0 (10.223.14.4):
"cephfs-data-scan pg_files / 23.4" at 2021-04-16T12:16:58.280...
- 09:31 AM Bug #50266 (In Progress): "ceph fs snapshot mirror daemon status" should not use json keys as value
- 05:54 AM Backport #50186 (New): pacific: qa: daemonwatchdog fails if mounts not defined
- 04:23 AM Bug #50060 (Fix Under Review): client: access(path, X_OK) on non-executable file as root always s...
- 04:10 AM Backport #50392 (Resolved): pacific: cephfs-top: exception: addwstr() returned ERR
- https://github.com/ceph/ceph/pull/41053
- 04:10 AM Backport #50391 (Rejected): pacific: MDS doesn't set fscrypt flag on new inodes with crypto conte...
- 04:07 AM Bug #50305 (Pending Backport): MDS doesn't set fscrypt flag on new inodes with crypto context in ...
- 04:06 AM Bug #50091 (Pending Backport): cephfs-top: exception: addwstr() returned ERR
- 04:04 AM Fix #48683 (Resolved): mds/MDSMap: print each flag value in MDSMap::dump
- 04:04 AM Feature #48682 (Resolved): MDSMonitor: add command to print fs flags
- 03:57 AM Bug #50390 (Fix Under Review): mds: monclient: wait_auth_rotating timed out after 30
- 03:54 AM Bug #50390 (Resolved): mds: monclient: wait_auth_rotating timed out after 30
- Symptom:...
- 03:36 AM Bug #50389 (Resolved): mds: "cluster [ERR] Error recovering journal 0x203: (2) No such file or ...
- Symptom:...
- 03:03 AM Bug #50387 (Duplicate): client: fs/snaps failure
- ...
- 02:53 AM Bug #50220: qa: dbench workload timeout
- /ceph/teuthology-archive/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/604...
- 02:52 AM Bug #50281: qa: untar_snap_rm timeout
- /ceph/teuthology-archive/pdonnell-2021-04-15_01:35:57-fs-wip-pdonnell-testing-20210414.230315-distro-basic-smithi/604...
04/15/2021
- 09:13 AM Bug #50216: qa: "ls: cannot access 'lost+found': No such file or directory"
- After https://github.com/ceph/ceph/pull/40868, I can reproduce it locally:...
- 08:14 AM Bug #50373 (Fix Under Review): qa: AttributeError: 'LocalCephManager' object has no attribute 'ctx'
- 04:45 AM Bug #50373 (Fix Under Review): qa: AttributeError: 'LocalCephManager' object has no attribute 'ctx'
- When running the test by using the vstart_runner.py:...
- 07:35 AM Backport #49519 (In Progress): nautilus: client: wake up the front pos waiter
- 02:09 AM Backport #49519: nautilus: client: wake up the front pos waiter
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40707
The new one https://github.com/ceph/ceph/pull/40865
- 01:25 AM Backport #49519: nautilus: client: wake up the front pos waiter
- In nautilus we should switch the notify_one() to SignalOne() instead. For more detail, please see src/common/Cond.h.
- 06:04 AM Bug #50035 (In Progress): cephfs-mirror: use sensible mount/shutdown timeouts
- 04:34 AM Feature #49942: cephfs-mirror: enable running in HA
- FYI, this is HA active/active.
- 04:33 AM Feature #50372 (Resolved): test: Implement cephfs-mirror trasher test for HA active/active
04/14/2021
- 08:41 PM Backport #47084 (Rejected): nautilus: mds: 'forward loop' when forward_all_requests_to_auth is set
- can be avoided by not setting the config, which is off by default
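For completeness, a hedged sketch of checking and clearing the option instead of backporting the fix (mds_forward_all_requests_to_auth is the assumed full option name):

    ceph config get mds mds_forward_all_requests_to_auth   # off (false) by default
    ceph config rm mds mds_forward_all_requests_to_auth    # drop any override that was set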
- 08:40 PM Backport #47086 (Rejected): nautilus: common: validate type CephBool cause 'invalid command json'
- not that important
- 08:34 PM Backport #50182 (Rejected): nautilus: client: openned inodes counter is inconsistent
- Nathan Cutler wrote:
> This ticket is for tracking the nautilus backport of a follow-on fix for #46865 which was bac...
- 08:34 PM Backport #50023 (Rejected): nautilus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_...
- Nathan Cutler wrote:
> AFAICT the code being fixed does not exist in nautilus
Indeed!
- 08:32 PM Backport #50251 (Rejected): nautilus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/cl...
- This resolves warnings that typically show up with multimds scrub which isn't supported in nautilus.
- 08:32 PM Backport #50189 (Rejected): nautilus: qa: "Assertion `cb_done' failed."
- This one is not worth the effort.
- 08:31 PM Backport #48112 (Rejected): nautilus: doc: document MDS recall configurations
- 08:31 PM Backport #49519: nautilus: client: wake up the front pos waiter
- Xiubo, please take this one.
- 07:28 PM Backport #49933 (Rejected): nautilus: MDS should return -ENODATA when asked to remove xattr that ...
- https://tracker.ceph.com/issues/49833#note-12
- 07:28 PM Backport #49931 (Rejected): octopus: MDS should return -ENODATA when asked to remove xattr that d...
- https://tracker.ceph.com/issues/49833#note-12
- 05:40 PM Documentation #41725 (New): Document on-disk format of inodes
- 05:39 PM Bug #44097 (Can't reproduce): nautilus: "cluster [WRN] Health check failed: 1 clients failing to ...
- 05:35 PM Feature #10679 (In Progress): Add support for the chattr +i command (immutable file)
- 03:26 PM Backport #50282 (In Progress): pacific: MDS slow request lookupino #0x100 on rank 1 block forever...
- 03:24 PM Backport #50254 (In Progress): pacific: mds: standby-replay only trims cache when it reaches the ...
- 03:23 PM Backport #50289 (In Progress): pacific: MDS stuck at stopping when reducing max_mds
- 03:21 PM Backport #50285 (In Progress): pacific: qa: test standby_replay in workloads
- *
- 03:19 PM Backport #50287 (In Progress): pacific: qa: "log [ERR] : error reading sessionmap 'mds2_sessionmap'"
- 02:15 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
Hi, sorry for the delay.
We were fairly panicked about this so have moved our kit onto a different kernel that does...
- 01:10 PM Backport #50354 (Rejected): octopus: mgr/nfs: validation error on creating custom export
- 01:07 PM Documentation #50161: mgr/nfs: validation error on creating custom export
- Backport to Octopus too. Because this related PR is backported https://github.com/ceph/ceph/pull/40766
- 10:53 AM Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- I don't think this should be backported to octopus and nautilus since the change that introduced the incorrect handli...
- 04:25 AM Bug #50216 (In Progress): qa: "ls: cannot access 'lost+found': No such file or directory"
- 03:53 AM Bug #48365 (Fix Under Review): qa: ffsb build failure on CentOS 8.2
- 03:37 AM Bug #48365: qa: ffsb build failure on CentOS 8.2
- From https://www.gnu.org/software/automake/manual/automake.html, it seems the ffsb is using the old form of `AM_INIT_...
- 01:43 AM Bug #48365: qa: ffsb build failure on CentOS 8.2
- ...
- 01:31 AM Bug #48365: qa: ffsb build failure on CentOS 8.2
- Patrick Donnelly wrote:
> Back from the dead: /ceph/teuthology-archive/pdonnell-2021-04-08_22:42:24-fs-wip-pdonnell-...
04/13/2021
- 02:24 PM Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Mostly setting this as a config option (in ceph.conf) would suff...
- 02:04 PM Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
- Venky Shankar wrote:
> Mostly setting this as a config option (in ceph.conf) would suffice. However, cephfs-mirror c...
- 01:07 PM Bug #50035: cephfs-mirror: use sensible mount/shutdown timeouts
- Mostly setting this as a config option (in ceph.conf) would suffice. However, cephfs-mirror can connect to the remote...
- 05:52 AM Bug #49939 (Fix Under Review): cephfs-mirror: be resilient to recreated snapshot during synchroni...
04/12/2021
- 09:19 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- David, ping? Are you able to answer the questions above?
- 07:20 PM Bug #49873 (Duplicate): ceph_lremovexattr does not return error on file in ceph pacific
- Thanks for checking that Sidharth.
- 03:15 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- That sounds sensible to me, thanks! I did attempt to build ceph a couple of weeks ago, but unfortunately I was unabl...
- 02:54 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- I tested this incorrectly last week using libcephfs due to an incorrect reproduction of the steps. Upon retesting cor...
- 07:19 PM Bug #50305 (Resolved): MDS doesn't set fscrypt flag on new inodes with crypto context in xattr bu...
- The new fscrypt context handling code will set the "fscrypt" flag when the encryption.ctx xattr is set explicitly, bu...
- 06:03 PM Backport #50253 (In Progress): pacific: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/...
- 04:46 PM Bug #50271 (Won't Fix): vmware esxi NFS client cannot create thin provisioned vmdk files
- Closing this as WONTFIX since we can't really do it without harming performance for important workloads.
- 03:48 PM Bug #50271: vmware esxi NFS client cannot create thin provisioned vmdk files
- I responded in the original email thread. The problem here is that ceph doesn't report sparse file usage correctly. W...
- 01:44 PM Bug #50271 (Triaged): vmware esxi NFS client cannot create thin provisioned vmdk files
- 02:26 PM Bug #50237: cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "NetHandler create_...
- thanks for the reply!
i'll collect relevant mon/mds debug logs, and the strace outputs this week
if there is anythi...
- 01:49 PM Bug #50237 (Need More Info): cephfs-journal-tool/cephfs-data-scan: Stuck in infinite loop with "N...
- 01:46 PM Bug #50258 (Triaged): pacific: qa: "run() got an unexpected keyword argument 'stdin_data'"
- 01:44 PM Bug #50279 (Triaged): qa: "Replacing daemon mds.b as rank 0 with standby daemon mds.c"
- 11:16 AM Bug #50298 (Fix Under Review): libcephfs: support file descriptor based *at() APIs
- 11:14 AM Bug #50298 (Resolved): libcephfs: support file descriptor based *at() APIs
04/11/2021
- 10:00 AM Backport #50252 (Need More Info): octopus: mds: "cluster [WRN] Scrub error on inode 0x1000000039d...
- cherry-pick applies cleanly, but the resulting code does not compile
see https://github.com/ceph/ceph/pull/40781 f...