Activity
From 03/04/2021 to 04/02/2021
04/02/2021
- 05:44 PM Bug #50112 (Triaged): MDS stuck at stopping when reducing max_mds
- 01:42 PM Bug #50112: MDS stuck at stopping when reducing max_mds
- I've figured out "7 mds.1.cache still have replicated objects" may be the reason that this MDS cannot complete its sh...
- 12:13 PM Bug #50112 (Resolved): MDS stuck at stopping when reducing max_mds
- We are trying to upgrade to v16 today. Cephadm is trying to reduce max_mds to 1 automatically. However, MDS.1 is stuc...
- 07:10 AM Feature #46865 (Resolved): client: add metric for number of pinned capabilities
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:09 AM Bug #48559 (Resolved): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:09 AM Fix #48802 (Resolved): mds: define CephFS errors that replace standard errno values
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:08 AM Bug #48912 (Resolved): ls -l in cephfs-shell tries to chase symlinks when stat'ing and errors out...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:08 AM Bug #49074 (Resolved): mds: don't start purging inodes in the middle of recovery
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:08 AM Bug #49121 (Resolved): vstart: volumes/nfs interface complaints cluster does not exists
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49391 (Resolved): qa: run fs:verify with tcmalloc
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49464 (Resolved): qa: rank_freeze prevents failover on some tests
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49507 (Resolved): qa: mds removed because trimming for too long with valgrind
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Bug #49607 (Resolved): qa: slow metadata ops during scrubbing
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Feature #49619 (Resolved): cephfs-mirror: add mirror peers via bootstrapping
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:07 AM Feature #49623 (Resolved): Windows CephFS support - ceph-dokan
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49711 (Resolved): cephfs-mirror: symbolic links do not get synchronized at times
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49719 (Resolved): mon/MDSMonitor: standby-replay daemons should be removed when the flag is ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49725 (Resolved): client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:06 AM Bug #49822 (Resolved): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirr...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:55 AM Bug #50108 (New): access to a file with the wrong permission when changing the parent directory's...
- Recently, we tried to manage the permissions of files and directories in Ceph with ACLs.
Basically, we planned to set...
04/01/2021
- 06:17 PM Backport #49935 (Resolved): pacific: libcephfs: test termination "what(): Too many open files"
- 06:17 PM Backport #49932 (Resolved): pacific: MDS should return -ENODATA when asked to remove xattr that d...
- 06:17 PM Backport #49929 (Resolved): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.test_...
- 06:16 PM Backport #49905 (Resolved): pacific: mgr/volumes: setuid and setgid file bits are not retained af...
- 06:16 PM Backport #49854 (Resolved): pacific: client: crashed in cct->_conf.get_val() in Client::start_tic...
- 06:16 PM Backport #49852 (Resolved): pacific: mds: race of fetching large dirfrag
- 06:16 PM Backport #49765 (Resolved): pacific: cephfs-mirror: symbolic links do not get synchronized at times
- 06:16 PM Backport #49753 (Resolved): pacific: cephfs-mirror: add mirror peers via bootstrapping
- 06:15 PM Backport #49751 (Resolved): pacific: snap-schedule doc
- 06:14 PM Backport #49713 (Resolved): pacific: mgr/nfs: Add interface to update export
- 06:14 PM Backport #49687 (Resolved): pacific: client: add metric for number of pinned capabilities
- 06:14 PM Backport #49685 (Resolved): pacific: ls -l in cephfs-shell tries to chase symlinks when stat'ing ...
- 06:14 PM Backport #49634 (Resolved): pacific: Windows CephFS support - ceph-dokan
- 06:13 PM Backport #49631 (Resolved): pacific: mds: don't start purging inodes in the middle of recovery
- 06:13 PM Backport #49630 (Resolved): pacific: qa: slow metadata ops during scrubbing
- 06:13 PM Backport #49612 (Resolved): pacific: qa: racy session evicted check
- 06:13 PM Backport #49610 (Resolved): pacific: qa: mds removed because trimming for too long with valgrind
- 06:12 PM Backport #49609 (Resolved): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 06:12 PM Backport #49608 (Resolved): pacific: mds: define CephFS errors that replace standard errno values
- 06:12 PM Backport #49569 (Resolved): pacific: qa: rank_freeze prevents failover on some tests
- 06:12 PM Backport #49563 (Resolved): pacific: qa: run fs:verify with tcmalloc
- 06:11 PM Backport #49561 (Resolved): pacific: qa: file system deletion not complete because starter fs alr...
- 06:11 PM Backport #49520 (Resolved): pacific: client: wake up the front pos waiter
- 06:11 PM Backport #49517 (Resolved): pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- 06:11 PM Backport #49512 (Resolved): pacific: client: allow looking up snapped inodes by inode number+snap...
- 06:10 PM Backport #49474 (Resolved): pacific: nautilus: qa: "Assertion `cb_done' failed."
- 06:10 PM Backport #49470 (Resolved): pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 06:10 PM Backport #49346 (Resolved): pacific: vstart: volumes/nfs interface complaints cluster does not ex...
- 03:45 PM Bug #49662 (Fix Under Review): ceph-dokan improvements for additional mounts
- 02:58 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > Xiubo Li wrote:
> >
> > [...]
> >
> > > @Patrick,
> > >
> > > ...
- 12:27 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- More questions:
1) how large are these files (generally)?
2) at what point does the corruption start?
3) How far...
- 12:05 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- Ok, so it is reproducible in your environment. The problem is that I'm unclear on what sort of I/O is being done here...
- 08:32 AM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- We do have a reliable repro currently yes. A file that we write to several times an hour shows clear corruption corre...
- 09:18 AM Bug #50091 (Fix Under Review): cephfs-top: exception: addwstr() returned ERR
- 04:24 AM Bug #50091 (Resolved): cephfs-top: exception: addwstr() returned ERR
- When the terminal is not wide enough, this can be reproduced 100% of the time.
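The failure above is a standard curses pitfall: drawing a string that runs past the last column makes addstr()/addwstr() return ERR, which Python's curses binding raises as an exception. A minimal sketch of the usual guard, clipping to the current window width; `clip_for_width` and `draw_line` are illustrative helpers, not cephfs-top code:

```python
import curses

def clip_for_width(text: str, x: int, width: int) -> str:
    """Return the part of `text` that fits when drawn starting at column x.

    The last cell is left untouched because writing into it can still make
    curses report ERR on some terminals.
    """
    avail = max(0, width - x - 1)
    return text[:avail]

def draw_line(win: "curses.window", y: int, x: int, text: str) -> None:
    # Clip to the window width instead of letting addstr() fail when the
    # terminal is too narrow.
    _, maxx = win.getmaxyx()
    try:
        win.addstr(y, x, clip_for_width(text, x, maxx))
    except curses.error:
        pass  # resize races can still surface ERR; ignore rather than crash
```

The same effect can be had by catching `curses.error` alone, but clipping first avoids drawing half-off-screen text on most terminals.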
- 01:32 AM Bug #50090 (Resolved): client: only check pool permissions for regular files
- There is no need to do a check_pool_perm() on anything that isn't
a regular file, as the MDS is what handles talking...
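The rationale behind this fix can be sketched in a few lines: only regular files carry data objects in a RADOS data pool, so nothing else needs a pool-permission check. This is illustrative only; `needs_pool_perm_check` is a hypothetical helper, not the actual client code:

```python
import stat

def needs_pool_perm_check(mode: int) -> bool:
    # Only regular files store file data in a RADOS data pool; directories,
    # symlinks, and special files are handled entirely by the MDS, so there
    # is nothing to ask the OSDs about.
    return stat.S_ISREG(mode)

print(needs_pool_perm_check(stat.S_IFREG))  # regular file: check pool caps
print(needs_pool_perm_check(stat.S_IFDIR))  # directory: skip the check
print(needs_pool_perm_check(stat.S_IFLNK))  # symlink: skip the check
```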
03/31/2021
- 08:46 PM Bug #50083: CephFS file access issues using kernel driver: file overwritten with null bytes
- This is the first I've heard of this as well. You mentioned seeing this on v5.11 kernels. Have you also seen it on ea...
- 07:05 PM Bug #50083 (Triaged): CephFS file access issues using kernel driver: file overwritten with null b...
- This has never been heard of before. The most likely cause is something in your setup, e.g. a rogue process (rsync) m...
- 01:57 PM Bug #50083 (Resolved): CephFS file access issues using kernel driver: file overwritten with null ...
- Ceph cluster is running 14.2.9 (nautilus), a 3 node containerised cluster. 1 active MDS, 2 standby
Using ceph kernel...
- 06:09 PM Feature #41073 (Rejected): cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- 04:24 AM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- I think this can be closed. Right Milind?
(We dropped relying on rsync)
- 06:08 PM Feature #46432 (Resolved): cephfs-mirror: manager module interface to add/remove directory snapshots
- 04:22 AM Feature #46432 (Closed): cephfs-mirror: manager module interface to add/remove directory snapshots
- Feature available in Pacific.
- 06:08 PM Feature #44191 (Resolved): cephfs: geo-replication
- 04:22 AM Feature #44191 (Closed): cephfs: geo-replication
- Feature available in Pacific.
- 06:08 PM Feature #41074 (Resolved): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- 04:21 AM Feature #41074 (Closed): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- Patrick Donnelly wrote:
> Close this out?
Definitely.
- 03:30 PM Backport #50086 (Resolved): pacific: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError:...
- https://github.com/ceph/ceph/pull/40688
- 03:26 PM Bug #48411 (Pending Backport): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank ...
- 02:31 PM Backport #50030 (In Progress): pacific: qa: fs:cephadm mount does not wait for mds to be created
- 02:11 PM Feature #49811 (Fix Under Review): mds: collect I/O sizes from client for cephfs-top
- 01:06 PM Bug #49939: cephfs-mirror: be resilient to recreated snapshot during synchronization
- So, I am experimenting with how MDS handles path traversals when just an inode number rather than inode number+dname ...
- 12:48 PM Cleanup #50080 (In Progress): mgr/nfs: move nfs code out of volumes plugin
- 12:45 PM Cleanup #50080 (Resolved): mgr/nfs: move nfs code out of volumes plugin
- 09:54 AM Bug #48805 (Fix Under Review): mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/...
03/30/2021
- 10:30 PM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- ping
- 10:30 PM Feature #46432: cephfs-mirror: manager module interface to add/remove directory snapshots
- Close this out?
- 10:30 PM Feature #44191: cephfs: geo-replication
- Close this out?
- 10:29 PM Feature #41074: pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
- Close this out?
- 10:28 PM Backport #49930 (Resolved): pacific: mon/MDSMonitor: standby-replay daemons should be removed whe...
- 10:27 PM Bug #49720 (Fix Under Review): mon/MDSMonitor: do not pointlessly kill standbys that are incompat...
- 08:52 PM Bug #48411 (In Progress): tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all f...
- 06:58 PM Bug #50060 (Triaged): client: access(path, X_OK) on non-executable file as root always succeeds
- 06:58 PM Bug #50060 (Resolved): client: access(path, X_OK) on non-executable file as root always succeeds
- See "[ceph-users] ceph-fuse false passed X_OK check".
Check works for non-root users.
- 04:00 PM Bug #50057 (Fix Under Review): client: openned inodes counter is inconsistent
- 03:12 PM Bug #50057 (Resolved): client: openned inodes counter is inconsistent
- ...
- 11:08 AM Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJ...
- Thanks Rishabh for your analysis.
> The best fix is to add a method to teuthology.orchestra.remote.Remote. It woul...
- 03:48 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Patrick Donnelly wrote:
> Xiubo Li wrote:
>
> [...]
>
> > @Patrick,
> >
> > Maybe we could save a ceph repo s...
- 03:20 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Xiubo Li wrote:
> Okay, I was in the wrong direction yesterday.
>
> I think it was the `git clone ceph...` command's ...
- 03:16 AM Bug #50048 (Fix Under Review): mds: standby-replay only trims cache when it reaches the end of th...
- 03:03 AM Bug #50048 (Resolved): mds: standby-replay only trims cache when it reaches the end of the replay...
- This could take a significant amount of time under load. Trim regularly like the active MDS.
03/29/2021
- 11:55 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Okay, I was in the wrong direction yesterday.
I think it was the `git clone ceph...` command's problem, it took too lo...
- 01:51 PM Bug #50021 (In Progress): qa: snaptest-git-ceph failure during mon thrashing
- 01:05 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- The exception occurred just before the snap test at:...
- 12:58 PM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Checked all the mds/osd/mon/client/kernel/misc related logs, didn't find any error during that exception around 2021-...
- 08:19 AM Bug #50021: qa: snaptest-git-ceph failure during mon thrashing
- Checked the client logs in `smithi016/log/ceph-client.0.25180.log.gz`, everything works well till now, I didn't see a...
- 10:13 PM Fix #50045 (Fix Under Review): qa: test standby_replay in workloads
- 10:12 PM Fix #50045 (Resolved): qa: test standby_replay in workloads
- To improve our test coverage of this frequently enabled feature (both in cephadm and Rook).
- 01:47 PM Bug #49939 (In Progress): cephfs-mirror: be resilient to recreated snapshot during synchronization
- 01:43 PM Bug #50033 (Triaged): mgr/stats: be resilient to offline MDS rank-0
- 06:41 AM Bug #50033 (Resolved): mgr/stats: be resilient to offline MDS rank-0
- mgr/stats can repeatedly report stale perf stats when MDS rank-0 becomes offline. Even after a standby daemon transit...
- 09:15 AM Bug #50035 (Resolved): cephfs-mirror: use sensible mount/shutdown timeouts
- The mirror daemon just relies on the defaults which are pretty high:...
- 07:21 AM Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
- test case fix: prio -> low
- 07:21 AM Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
- The real failure was the "failed" state not reflecting in mirror status. From the daemon logs, the mirror daemon rest...
- 07:19 AM Bug #50020 (Fix Under Review): qa: "RADOS object not found (Failed to operate read op for oid cep...
03/28/2021
- 12:05 PM Backport #50030 (Resolved): pacific: qa: fs:cephadm mount does not wait for mds to be created
- https://github.com/ceph/ceph/pull/40528
- 12:04 PM Bug #49684 (Pending Backport): qa: fs:cephadm mount does not wait for mds to be created
03/27/2021
- 06:38 PM Bug #49301 (Resolved): mon/MonCap: `fs authorize` generates unparseable cap for file system name ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:37 PM Bug #49736 (Resolved): cephfs-top: missing keys in the client_metadata
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:37 PM Feature #49953 (Resolved): cephfs-top : allow configurable stats refresh interval
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:37 PM Bug #49974 (Resolved): cephfs-top: fails with exception "OPENED_FILES"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 06:36 PM Bug #50005 (Resolved): cephfs-top: flake8 E501 line too long error
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:39 AM Bug #50020: qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirror)"
- The index object (cephfs_mirror) is missing in the rados pool. This is created when mirroring is enabled (via mgr/mir...
- 12:53 AM Feature #48682 (Fix Under Review): MDSMonitor: add command to print fs flags
- 12:53 AM Fix #48683 (Fix Under Review): mds/MDSMap: print each flag value in MDSMap::dump
03/26/2021
- 10:16 PM Backport #50027 (Resolved): octopus: client: items pinned in cache preventing unmount
- https://github.com/ceph/ceph/pull/40778
- 10:15 PM Backport #50026 (Resolved): nautilus: client: items pinned in cache preventing unmount
- https://github.com/ceph/ceph/pull/40722
- 10:15 PM Backport #50025 (Resolved): pacific: client: items pinned in cache preventing unmount
- https://github.com/ceph/ceph/pull/40629
- 10:15 PM Backport #50024 (Rejected): octopus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_n...
- 10:15 PM Backport #50023 (Rejected): nautilus: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_...
- 10:15 PM Backport #50022 (Resolved): pacific: ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_n...
- https://github.com/ceph/ceph/pull/40628
- 10:13 PM Bug #48679 (Pending Backport): client: items pinned in cache preventing unmount
- 10:11 PM Bug #49936 (Pending Backport): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- 10:08 PM Bug #50016: qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- /ceph/teuthology-archive/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/599...
- 03:25 PM Bug #50016 (New): qa: test_damage: "RuntimeError: 2 mutations had unexpected outcomes"
- ...
- 10:07 PM Bug #48771: qa: iogen: workload fails to cause balancing
- /ceph/teuthology-archive/pdonnell-2021-03-24_23:26:35-fs-wip-pdonnell-testing-20210324.190252-distro-basic-smithi/599...
- 10:05 PM Bug #50021 (Resolved): qa: snaptest-git-ceph failure during mon thrashing
- ...
- 09:57 PM Bug #50020 (Resolved): qa: "RADOS object not found (Failed to operate read op for oid cephfs_mirr...
- ...
- 09:56 PM Bug #50019 (Closed): qa: mount failure with cephadm "probably no MDS server is up?"
- ...
- 07:29 PM Backport #49564 (Resolved): pacific: mon/MonCap: `fs authorize` generates unparseable cap for fil...
- 05:48 PM Backport #49935: pacific: libcephfs: test termination "what(): Too many open files"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40372
merged
- 05:31 PM Backport #50011 (Resolved): pacific: cephfs-top: flake8 E501 line too long error
- 12:17 PM Backport #50011 (In Progress): pacific: cephfs-top: flake8 E501 line too long error
- 12:00 PM Backport #50011 (Resolved): pacific: cephfs-top: flake8 E501 line too long error
- https://github.com/ceph/ceph/pull/40422
- 05:31 PM Backport #49994 (Resolved): pacific: cephfs-top: fails with exception "OPENED_FILES"
- 12:17 PM Backport #49994 (In Progress): pacific: cephfs-top: fails with exception "OPENED_FILES"
- 06:25 AM Backport #49994 (Resolved): pacific: cephfs-top: fails with exception "OPENED_FILES"
- https://github.com/ceph/ceph/pull/40422
- 05:24 PM Backport #49986 (Resolved): pacific: cephfs-top : allow configurable stats refresh interval
- 05:24 PM Backport #49973 (Resolved): pacific: cephfs-top: missing keys in the client_metadata
- 03:34 PM Backport #49932: pacific: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40371
merged
- 03:33 PM Backport #49685: pacific: ls -l in cephfs-shell tries to chase symlinks when stat'ing ...
- Varsha Rao wrote:
> https://github.com/ceph/ceph/pull/40308
merged
- 03:33 PM Backport #49713: pacific: mgr/nfs: Add interface to update export
- Varsha Rao wrote:
> https://github.com/ceph/ceph/pull/40307
merged
- 03:32 PM Backport #49852: pacific: mds: race of fetching large dirfrag
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40252
merged
- 03:32 PM Backport #49854: pacific: client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40251
merged
- 03:31 PM Bug #49379: client: wake up the front pos waiter
- https://github.com/ceph/ceph/pull/40109 merged
- 03:30 PM Backport #49609: pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40108
merged
- 03:25 PM Backport #50015 (Resolved): pacific: qa: "AttributeError: 'NoneType' object has no attribute 'mon...
- https://github.com/ceph/ceph/pull/40645
- 03:21 PM Bug #49511 (Pending Backport): qa: "AttributeError: 'NoneType' object has no attribute 'mon_manag...
- Also seen in pacific: https://pulpito.ceph.com/yuriw-2021-03-25_21:03:23-fs-wip-yuri-testing-2021-03-25-1105-pacific-...
- 12:20 PM Documentation #49921: mgr/nfs: Update about cephadm single nfs-ganesha daemon per host limitation
- pacific backport merged in #40355
- 11:56 AM Bug #50005 (Pending Backport): cephfs-top: flake8 E501 line too long error
- 09:43 AM Bug #50005 (Fix Under Review): cephfs-top: flake8 E501 line too long error
- 09:42 AM Bug #50005 (Resolved): cephfs-top: flake8 E501 line too long error
- ...
- 11:39 AM Bug #50010 (Resolved): qa/cephfs: get_key_from_keyfile() return None when key is not found in key...
- Absence of key in a keyring file is an odd and exceptional situation. Therefore, @CephFSMount.get_key_from_keyfile()@...
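The design point above (a missing key is exceptional, so raise rather than return None) can be sketched as follows; the keyring text and the `get_key_from_keyring_text` helper are illustrative stand-ins for the qa code, and the key value is a placeholder:

```python
def get_key_from_keyring_text(keyring: str, entity: str) -> str:
    """Return the key for `entity` from a keyring's text.

    A missing entity is exceptional, so raise instead of returning None;
    callers then fail loudly at the lookup site rather than passing None
    around.
    """
    section = None
    for line in keyring.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
        elif section == entity and line.startswith("key ="):
            return line.split("=", 1)[1].strip()
    raise RuntimeError(f"no key found for {entity!r} in keyring")

# Placeholder keyring; the key is an invented example value.
KEYRING = """\
[client.foo]
key = AQD3yM5gAAAAABAAqq==
"""
print(get_key_from_keyring_text(KEYRING, "client.foo"))
```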
- 11:03 AM Documentation #49372 (Resolved): doc: broken links multimds and kcephfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:02 AM Documentation #49763 (Resolved): doc: Document mds cap acquisition readdir throttle
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:52 AM Documentation #50008 (Resolved): mgr/nfs: Add troubleshooting section
- 08:59 AM Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJ...
- Cause of the bug: Write was attempted on @/tmp@ file with the @root@ user. Files in @/tmp@ can't be written by any us...
- 06:26 AM Bug #49466 (In Progress): qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tm...
- 08:18 AM Bug #49972 (Fix Under Review): mds: failed to decode message of type 29 v1: void CapInfoPayload::...
- 07:26 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Venky Shankar wrote:
> Xiubo Li wrote:
> > @Venkey, with your backport patch I can reproduce it locally.
>
> Whi...
- 06:20 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Venky Shankar wrote:
> Xiubo Li wrote:
> > @Venkey, with your backport patch I can reproduce it locally.
>
> Whi...
- 06:13 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Xiubo Li wrote:
> @Venkey, with your backport patch I can reproduce it locally.
Which backport? cephfs-mirror ser...
- 06:04 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- @Venkey, with your backport patch I can reproduce it locally.
- 04:58 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Patrick Donnelly wrote:
> I think this might just be because the pacific branch was missing: https://tracker.ceph.co...
- 04:37 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Locally I have built the origin/pacific ceph and with the latest origin/testing kclient, it works well:...
- 03:06 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- ...
- 06:22 AM Bug #49974 (Pending Backport): cephfs-top: fails with exception "OPENED_FILES"
03/25/2021
- 08:37 PM Bug #49500 (Fix Under Review): qa: "Assertion `cb_done' failed."
- 06:05 PM Backport #49986 (In Progress): pacific: cephfs-top : allow configurable stats refresh interval
- 05:15 PM Backport #49986 (Resolved): pacific: cephfs-top : allow configurable stats refresh interval
- https://github.com/ceph/ceph/pull/40417
- 05:41 PM Bug #49974: cephfs-top: fails with exception "OPENED_FILES"
- PR https://github.com/ceph/ceph/pull/39972 is merged in pacific. Backport should be straightforward.
- 11:18 AM Bug #49974 (Fix Under Review): cephfs-top: fails with exception "OPENED_FILES"
- 11:12 AM Bug #49974 (Resolved): cephfs-top: fails with exception "OPENED_FILES"
- Commit 89cc2cda4aa4 introduces additional metrics but did not add those metrics to cephfs-top.
Also, include a che...
- 05:34 PM Backport #49905: pacific: mgr/volumes: setuid and setgid file bits are not retained after a subvo...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40267
merged
- 05:33 PM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Patrick Donnelly wrote:
> I think this might just be because the pacific branch was missing: https://tracker.ceph.co...
- 05:33 PM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- I think this might just be because the pacific branch was missing: https://tracker.ceph.com/issues/46865
- 01:40 PM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- Patrick mentioned that this could be related to the testing kernel (as Jeff merged some of Xiubo's patches that adds ...
- 08:52 AM Bug #49972: mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- another instance (same branch): https://pulpito.ceph.com/vshankar-2021-03-25_05:53:38-fs-wip-cephfs-mirror-pacific-ba...
- 08:51 AM Bug #49972 (Resolved): mds: failed to decode message of type 29 v1: void CapInfoPayload::decode
- This was seen in pacific backport branch: https://pulpito.ceph.com/vshankar-2021-03-25_05:53:38-fs-wip-cephfs-mirror-...
- 05:33 PM Backport #49563: pacific: qa: run fs:verify with tcmalloc
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40091
merged
- 05:33 PM Backport #49610: pacific: qa: mds removed because trimming for too long with valgrind
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40091
merged
- 05:32 PM Backport #49634: pacific: Windows CephFS support - ceph-dokan
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40069
merged
- 05:32 PM Backport #49346: pacific: vstart: volumes/nfs interface complaints cluster does not exists
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/39974
merged
- 05:31 PM Backport #49687: pacific: client: add metric for number of pinned capabilities
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/39972
merged
- 12:16 PM Bug #49843 (Resolved): qa: fs/snaps/snaptest-upchildrealms.sh failure
- 10:46 AM Backport #49973 (In Progress): pacific: cephfs-top: missing keys in the client_metadata
- 09:50 AM Backport #49973 (Resolved): pacific: cephfs-top: missing keys in the client_metadata
- https://github.com/ceph/ceph/pull/40402
- 09:46 AM Feature #49953 (Pending Backport): cephfs-top : allow configurable stats refresh interval
- 09:46 AM Bug #49736 (Pending Backport): cephfs-top: missing keys in the client_metadata
- 09:39 AM Bug #49736 (Fix Under Review): cephfs-top: missing keys in the client_metadata
- 03:55 AM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Are you exporting your ceph mount via knfsd?
No, I don't have anything related to NFS deploy...
03/24/2021
- 11:24 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Yeah, that repeated call makes it look like the client is repeatedly calling in to the MDS for ...
- 10:10 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Yeah, that repeated call makes it look like the client is repeatedly calling in to the MDS for that inode number. It'...
- 09:53 PM Bug #49922 (Fix Under Review): MDS slow request lookupino #0x100 on rank 1 block forever on dispa...
- 09:01 PM Bug #49922 (In Progress): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- 08:45 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > Maybe we could lower mds_max_caps_per_client for this test? It def...
- 08:33 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Jeff Layton wrote:
> Maybe we could lower mds_max_caps_per_client for this test? It defaults to 1M now, but we could...
- 06:52 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > So the issue is that AsyncJobs.get_job() is called with AsyncJob...
- 04:01 PM Backport #49935 (In Progress): pacific: libcephfs: test termination "what(): Too many open files"
- 03:59 PM Backport #49520 (In Progress): pacific: client: wake up the front pos waiter
- 03:58 PM Backport #49609 (In Progress): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validat...
- 03:58 PM Backport #49930 (In Progress): pacific: mon/MDSMonitor: standby-replay daemons should be removed ...
- 03:58 PM Backport #49932 (In Progress): pacific: MDS should return -ENODATA when asked to remove xattr tha...
- 03:53 PM Backport #49423 (Resolved): pacific: doc: broken links multimds and kcephfs
- 03:49 PM Backport #49877 (Resolved): pacific: doc: Document mds cap acquisition readdir throttle
- 03:47 PM Backport #49414 (Resolved): pacific: mgr/nfs: Update about user config
- 03:45 PM Backport #49951 (Resolved): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- 10:27 AM Backport #49951 (In Progress): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon p...
- 01:50 AM Backport #49951 (Resolved): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- https://github.com/ceph/ceph/pull/40362
- 05:43 AM Feature #49953 (Fix Under Review): cephfs-top : allow configurable stats refresh interval
- 05:42 AM Feature #49953 (In Progress): cephfs-top : allow configurable stats refresh interval
- 05:39 AM Feature #49953 (Resolved): cephfs-top : allow configurable stats refresh interval
- 03:11 AM Bug #49928 (Duplicate): client: items pinned in cache preventing unmount x2
- 03:10 AM Bug #49928: client: items pinned in cache preventing unmount x2
- For the inode `0x10000001949`, since it has Fb cap and the flush cap snap was delayed, but never did it after that:
...
- 12:43 AM Bug #49928 (In Progress): client: items pinned in cache preventing unmount x2
- 01:50 AM Backport #49950 (Resolved): octopus: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- https://github.com/ceph/ceph/pull/40777
- 01:47 AM Bug #49936 (Fix Under Review): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- 01:46 AM Documentation #49921 (Pending Backport): mgr/nfs: Update about cephadm single nfs-ganesha daemon ...
03/23/2021
- 03:00 PM Backport #49564: pacific: mon/MonCap: `fs authorize` generates unparseable cap for file system na...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40086
merged
- 03:00 PM Backport #49569: pacific: qa: rank_freeze prevents failover on some tests
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40082
merged
- 02:56 PM Backport #49474: pacific: nautilus: qa: "Assertion `cb_done' failed."
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40093
merged - 02:55 PM Backport #49512: pacific: client: allow looking up snapped inodes by inode number+snapid tuple
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40092
merged - 02:55 PM Backport #49751: pacific: snap-schedule doc
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40090
merged - 02:53 PM Backport #49561: pacific: qa: file system deletion not complete because starter fs already destroyed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40089
merged - 02:53 PM Backport #49470: pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40087
merged - 02:51 PM Backport #49517: pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40085
merged - 02:50 PM Backport #49608: pacific: mds: define CephFS errors that replace standard errno values
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40083
merged - 02:49 PM Backport #49612: pacific: qa: racy session evicted check
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40081
merged - 02:48 PM Backport #49630: pacific: qa: slow metadata ops during scrubbing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40080
merged - 02:47 PM Backport #49631: pacific: mds: don't start purging inodes in the middle of recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40079
merged - 01:24 PM Feature #49942 (Resolved): cephfs-mirror: enable running in HA
- cephfs-mirror and mgr/mirroring have the machinery to run/support HA but we do not have any test coverage for such a s...
- 09:13 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- I suspect there are two tasks doing the rename:
For task1, if it just does _lookup(_INPROGRESS) and ...
- 03:29 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- @Xiaoxi
Is this reproducible for you? If so, how often? Locally I was trying in a loop by renaming two files for...
- 09:03 AM Bug #49939 (Resolved): cephfs-mirror: be resilient to recreated snapshot during synchronization
- The mirror daemon works with snapshot paths. It relies on snap-ids to infer deleted and renamed snapshots, but onc...
- 07:04 AM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Patrick Donnelly wrote:
> So the issue is that AsyncJobs.get_job() is called with AsyncJobs.lock locked. Then gettin...
- 05:43 AM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Logs from mds.0. Also repeating at the same frequency....
- 04:21 AM Backport #49929 (In Progress): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.te...
- 03:05 AM Backport #49929 (Resolved): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.test_...
- https://github.com/ceph/ceph/pull/40206
- 03:19 AM Bug #49936 (Pending Backport): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- ...
- 03:10 AM Backport #49935 (Resolved): pacific: libcephfs: test termination "what(): Too many open files"
- https://github.com/ceph/ceph/pull/40372
- 03:10 AM Backport #49934 (Resolved): octopus: libcephfs: test termination "what(): Too many open files"
- https://github.com/ceph/ceph/pull/40776
- 03:10 AM Backport #49933 (Rejected): nautilus: MDS should return -ENODATA when asked to remove xattr that ...
- 03:10 AM Backport #49932 (Resolved): pacific: MDS should return -ENODATA when asked to remove xattr that d...
- https://github.com/ceph/ceph/pull/40371
- 03:10 AM Backport #49931 (Rejected): octopus: MDS should return -ENODATA when asked to remove xattr that d...
- 03:06 AM Bug #49559 (Pending Backport): libcephfs: test termination "what(): Too many open files"
- 03:05 AM Bug #49621 (Resolved): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestData...
- 03:05 AM Backport #49930 (Resolved): pacific: mon/MDSMonitor: standby-replay daemons should be removed whe...
- https://github.com/ceph/ceph/pull/40325
- 03:05 AM Bug #49833 (Pending Backport): MDS should return -ENODATA when asked to remove xattr that doesn't...
- 03:04 AM Bug #49822 (Pending Backport): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin....
- 03:03 AM Bug #49719 (Pending Backport): mon/MDSMonitor: standby-replay daemons should be removed when the ...
- 02:53 AM Bug #49928 (Duplicate): client: items pinned in cache preventing unmount x2
- ...
03/22/2021
- 05:04 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Was there anything useful in the logs from mds 1 about the op and what state it's in?
I set ...
- 03:31 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- I'm unfamiliar with the MDS code so some notes as I peruse it:
Ok, so the TrackedOp entries get put on the list wh...
- 12:05 PM Bug #49922 (Resolved): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- We have two MDSs deployed by cephadm.
Several hours ago, we got a health warning:...
- 02:02 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- I think that after the mv, the directory should no longer be considered ORDERED. We probably _can_ consider it comple...
- 01:41 PM Bug #49912 (Triaged): client: dir->dentries inconsistent, both newname and oldname points to same...
- 01:51 PM Bug #49845 (Resolved): qa: failed umount in test_volumes
- The client kernel in this test had a bad patch in it that has since been fixed. See:
https://tracker.ceph.com/...
- 12:44 PM Backport #49685 (In Progress): pacific: ls -l in cephfs-shell tries to chase symlinks when stat'i...
- https://github.com/ceph/ceph/pull/40308
- 12:36 PM Backport #49713 (In Progress): pacific: mgr/nfs: Add interface to update export
- https://github.com/ceph/ceph/pull/40307
- 12:23 PM Backport #49414 (In Progress): pacific: mgr/nfs: Update about user config
- 11:58 AM Documentation #49921 (In Progress): mgr/nfs: Update about cephadm single nfs-ganesha daemon per h...
- 11:38 AM Documentation #49921 (Resolved): mgr/nfs: Update about cephadm single nfs-ganesha daemon per host...
- 03:45 AM Feature #49811: mds: collect I/O sizes from client for cephfs-top
- @Patrick, @Jeff
Comparing the iotop/iostat:
We may also need to collect the average IO READ/WRITE speed per-seco...
- 03:23 AM Feature #49811 (In Progress): mds: collect I/O sizes from client for cephfs-top
- 02:33 AM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- So the issue is that AsyncJobs.get_job() is called with AsyncJobs.lock locked. Then getting the next job involves ope...
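The deadlock pattern described in the comment above can be sketched as follows. This is an illustrative Python model, not the real mgr/volumes code; the class and method names mirror the report but the bodies are invented. The key point is that get_job() must not hold the shared lock across a blocking call that another thread (here, the finisher) needs the same lock to complete.

```python
import threading

# Hedged sketch of the fix pattern: snapshot state under the lock, then
# do the blocking work (e.g. opening a volume) only after releasing it.
class AsyncJobs:
    def __init__(self):
        self.lock = threading.Lock()
        self.queue = ["job-1"]

    def _next_unlocked(self):
        # caller must hold self.lock
        return self.queue.pop(0) if self.queue else None

    def get_job(self):
        with self.lock:
            job = self._next_unlocked()
        # lock is released here, so a finisher thread that also needs
        # self.lock can make progress while we block
        return self.blocking_open(job)

    def blocking_open(self, job):
        with self.lock:          # safe to re-take: not nested
            return f"opened:{job}"
```

The buggy variant would call blocking_open() from inside the `with self.lock:` block; any callback that tries to take the same lock then waits forever.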
- 02:30 AM Feature #46866: kceph: add metric for number of pinned capabilities
- Pushing the kclient patchwork.
03/21/2021
- 05:50 PM Bug #49605 (In Progress): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- ...
- 04:51 PM Bug #49912 (Resolved): client: dir->dentries inconsistent, both newname and oldname points to sam...
- we have an application that uses the FS as a lock --- an empty file named .dw_gem2_cmn_sd_{INPROGRESS/COMPLETE} , applic...
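The rename-as-lock pattern this report relies on can be sketched as follows. This is a minimal illustration assuming typical usage, not the application's actual code; only the sentinel file names come from the report. It works because rename() is atomic, so exactly one contender can move the COMPLETE sentinel to INPROGRESS.

```python
import os
import tempfile

def try_acquire(dirpath):
    """Take the lock by atomically renaming the sentinel file."""
    src = os.path.join(dirpath, ".dw_gem2_cmn_sd_COMPLETE")
    dst = os.path.join(dirpath, ".dw_gem2_cmn_sd_INPROGRESS")
    try:
        os.rename(src, dst)   # atomic: succeeds for exactly one caller
        return True
    except FileNotFoundError:
        return False          # someone else already holds the lock

def release(dirpath):
    os.rename(os.path.join(dirpath, ".dw_gem2_cmn_sd_INPROGRESS"),
              os.path.join(dirpath, ".dw_gem2_cmn_sd_COMPLETE"))

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    open(os.path.join(d, ".dw_gem2_cmn_sd_COMPLETE"), "w").close()
    assert try_acquire(d)       # first caller wins
    assert not try_acquire(d)   # second caller sees the lock held
    release(d)
    assert try_acquire(d)
```

The bug report is about the client's dentry cache getting confused by these renames, not about the locking scheme itself.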
03/20/2021
- 04:19 AM Backport #49903 (In Progress): nautilus: mgr/volumes: setuid and setgid file bits are not retaine...
- 03:15 AM Backport #49903 (Resolved): nautilus: mgr/volumes: setuid and setgid file bits are not retained a...
- https://github.com/ceph/ceph/pull/40270
- 04:01 AM Backport #49904 (In Progress): octopus: mgr/volumes: setuid and setgid file bits are not retained...
- 03:15 AM Backport #49904 (Resolved): octopus: mgr/volumes: setuid and setgid file bits are not retained af...
- https://github.com/ceph/ceph/pull/40268
- 03:29 AM Backport #49905 (In Progress): pacific: mgr/volumes: setuid and setgid file bits are not retained...
- 03:15 AM Backport #49905 (Resolved): pacific: mgr/volumes: setuid and setgid file bits are not retained af...
- https://github.com/ceph/ceph/pull/40267
- 03:12 AM Bug #49882 (Pending Backport): mgr/volumes: setuid and setgid file bits are not retained after a ...
03/19/2021
- 09:46 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- debugging pr https://github.com/ceph/ceph/pull/40264
- 09:20 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- /ceph/teuthology-archive/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/596...
- 06:13 PM Backport #49852 (In Progress): pacific: mds: race of fetching large dirfrag
- 06:12 PM Backport #49854 (In Progress): pacific: client: crashed in cct->_conf.get_val() in Client::start_...
- 06:12 PM Backport #49877 (In Progress): pacific: doc: Document mds cap acquisition readdir throttle
- 02:46 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Maybe we could lower mds_max_caps_per_client for this test? It defaults to 1M now, but we could take that down to 500...
- 02:39 PM Bug #49500: qa: "Assertion `cb_done' failed."
- I'm not sure that setting is enough to explain this. AFAICT, that setting is only consulted in notify_health(), so I ...
- 12:57 PM Backport #49753 (In Progress): pacific: cephfs-mirror: add mirror peers via bootstrapping
- 12:57 PM Backport #49765 (In Progress): pacific: cephfs-mirror: symbolic links do not get synchronized at ...
- 10:07 AM Feature #48943 (Resolved): cephfs-mirror: display cephfs mirror instances in `ceph status` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:06 AM Bug #49419 (Resolved): cephfs-mirror: dangling pointer in PeerReplayer
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:12 AM Bug #49882 (Fix Under Review): mgr/volumes: setuid and setgid file bits are not retained after a ...
03/18/2021
- 03:07 PM Bug #49882 (In Progress): mgr/volumes: setuid and setgid file bits are not retained after a subvo...
- 02:23 PM Bug #49882 (Resolved): mgr/volumes: setuid and setgid file bits are not retained after a subvolum...
- setuid and setgid file bits are not retained after a subvolume snapshot restore
Reproducer on vstart cluster:
#...
- 01:53 PM Backport #49686: pacific: cephfs-mirror: display cephfs mirror instances in `ceph status` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39973
m...
- 01:53 PM Backport #49432: pacific: cephfs-mirror: dangling pointer in PeerReplayer
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39810
m...
- 01:26 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- Jeff Layton wrote:
> John, I fixed a similar sounding bug in the MDS yesterday:
>
> https://tracker.ceph.com/...
- 01:01 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- John, I fixed a similar sounding bug in the MDS yesterday:
https://tracker.ceph.com/issues/49833
Are you ab...
- 09:29 AM Bug #49736: cephfs-top: missing keys in the client_metadata
- https://github.com/ceph/ceph/pull/40210
- 04:50 AM Bug #44100: cephfs rsync kworker high load.
- We have also experienced a similar issue, where kernel mount performance degraded severely while doing rsync (running...
- 02:45 AM Backport #49877 (Resolved): pacific: doc: Document mds cap acquisition readdir throttle
- https://github.com/ceph/ceph/pull/40250
- 02:41 AM Documentation #49763 (Pending Backport): doc: Document mds cap acquisition readdir throttle
03/17/2021
- 09:47 PM Feature #48791 (Need More Info): mds: support file block size
- 09:45 PM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- Milind, what's the status of this ticket?
- 07:03 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- I'll also note that I did find the following issue
https://tracker.ceph.com/issues/49833
But forgot to reference ...
- 07:00 PM Bug #49873 (Triaged): ceph_lremovexattr does not return error on file in ceph pacific
- John Mulligan wrote:
> To try and clarify:
>
> The xattr is set on the link. There should be no xattr of that nam...
- 06:52 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- To try and clarify:
The xattr is set on the link. There should be no xattr of that name on the file the link point...
- 06:46 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- John Mulligan wrote:
> While running our go-ceph CI against pacific for the first time our CI failed in the xattr te...
- 06:24 PM Bug #49873 (Duplicate): ceph_lremovexattr does not return error on file in ceph pacific
- While running our go-ceph CI against pacific for the first time our CI failed in the xattr tests.
It expected a call...
- 04:02 PM Bug #49859 (Triaged): Snapshot schedules are not deleted after enabling/disabling snap module
- 10:15 AM Bug #49859 (Triaged): Snapshot schedules are not deleted after enabling/disabling snap module
- Assuming the following:...
- 03:41 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Xiubo Li wrote:
> It seems the tests will fire many event works, which will open many fds, the last issue about this...
- 01:48 PM Backport #49686 (Resolved): pacific: cephfs-mirror: display cephfs mirror instances in `ceph stat...
- 01:46 PM Backport #49432 (Resolved): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- 10:02 AM Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
- There is another error log ahead of the above call trace:...
- 09:59 AM Bug #49621 (Fix Under Review): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan....
- 04:28 AM Bug #49621 (In Progress): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestD...
- 04:28 AM Feature #49811: mds: collect I/O sizes from client for cephfs-top
- Sure, will work on it. Thanks.
- 03:30 AM Backport #49854 (Resolved): pacific: client: crashed in cct->_conf.get_val() in Client::start_tic...
- https://github.com/ceph/ceph/pull/40251
- 03:25 AM Backport #49853 (Resolved): nautilus: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40720
- 03:25 AM Backport #49852 (Resolved): pacific: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40252
- 03:25 AM Bug #49725 (Pending Backport): client: crashed in cct->_conf.get_val() in Client::start_tick_thre...
- 03:25 AM Backport #49851 (Resolved): octopus: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40774
- 03:23 AM Bug #49617 (Pending Backport): mds: race of fetching large dirfrag
03/16/2021
- 08:42 PM Bug #49843 (Fix Under Review): qa: fs/snaps/snaptest-upchildrealms.sh failure
- Bad error handling in this patch:
https://lore.kernel.org/ceph-devel/20210315180717.266155-3-jlayton@kernel.or...
- 08:12 PM Bug #49843: qa: fs/snaps/snaptest-upchildrealms.sh failure
- This may be fallout from the recent snapdir handling fixes. I'll take a look.
- 07:53 PM Bug #49843 (Resolved): qa: fs/snaps/snaptest-upchildrealms.sh failure
- ...
- 08:01 PM Bug #49845 (Resolved): qa: failed umount in test_volumes
- ...
- 07:21 PM Bug #49837 (Fix Under Review): mgr/pybind/snap_schedule: do not fail when no fs snapshots are ava...
- 05:16 PM Bug #49837 (Resolved): mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
- When the json output is requested, we should not return an error but just an empty dict:...
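The requested behavior can be sketched as follows. This is a hypothetical illustration of the principle, not the actual mgr/snap_schedule code; the function name and mgr-style (retval, stdout, stderr) return shape are assumptions.

```python
import json

def list_schedules_json(schedules):
    """Serialize snapshot schedules; an empty list yields an empty JSON
    object rather than raising or returning a nonzero retval."""
    if not schedules:
        return 0, json.dumps({}), ""
    return 0, json.dumps({s["path"]: s for s in schedules}), ""

if __name__ == "__main__":
    print(list_schedules_json([]))  # empty dict, no error
```

With no schedules present, the caller simply gets `{}` on stdout and retval 0, so `ceph fs snap-schedule status` style queries stay scriptable.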
- 05:35 PM Bug #49833 (Fix Under Review): MDS should return -ENODATA when asked to remove xattr that doesn't...
- 04:36 PM Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- I'll take this one since I have a patch (and testcase).
- 04:22 PM Bug #49833 (Triaged): MDS should return -ENODATA when asked to remove xattr that doesn't exist
- 04:16 PM Bug #49833 (Resolved): MDS should return -ENODATA when asked to remove xattr that doesn't exist
- This patch adds a small gtest that shows that the handling of removexattr is wrong:...
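The expected semantics can be modeled with a toy server-side xattr map. This is purely illustrative (not the MDS code or the gtest from the patch): removing an xattr that does not exist must fail with -ENODATA rather than silently succeed.

```python
import errno

def removexattr(xattrs, name):
    """Toy model of server-side removexattr handling."""
    if name not in xattrs:
        return -errno.ENODATA   # attribute absent: report it, don't succeed
    del xattrs[name]
    return 0

if __name__ == "__main__":
    attrs = {"user.foo": b"bar"}
    assert removexattr(attrs, "user.foo") == 0
    assert removexattr(attrs, "user.foo") == -errno.ENODATA
```

This matches the Linux removexattr(2) contract, which the MDS was violating by returning success for a nonexistent attribute.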
- 04:50 PM Bug #49834 (Won't Fix - EOL): octopus: qa: test_statfs_on_deleted_fs failure
- https://pulpito.ceph.com/yuriw-2021-03-13_22:13:22-fs-wip-yuriw-octopus-15.2.10-distro-basic-smithi/5962994/
Test ...
- 04:34 PM Bug #49826: Multiple nfs-ganesha instances and strays objects in CephFS
- The strays behavior makes some sense, since we don't really do anything client-side to notify the application when th...
- 04:27 PM Bug #49826: Multiple nfs-ganesha instances and strays objects in CephFS
- Aleksandr Rudenko wrote:
> Usual stray objects are purged after 10-20 secs. But not in this case. In this case stray...
- 07:15 AM Bug #49826 (New): Multiple nfs-ganesha instances and strays objects in CephFS
- Hi!
We have one CephFS and two standalone ganesha instances on different hosts which export the same directory.
W...
- 12:15 PM Bug #49736: cephfs-top: missing keys in the client_metadata
- Venky Shankar wrote:
MDSRank::dump_sessions() has this filter:
>
> [...]
>
> ... which might be the reason tha...
- 05:34 AM Bug #49822 (Fix Under Review): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin....
- 04:09 AM Bug #49822 (Resolved): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirr...
- With https://github.com/ceph/ceph/pull/39845/commits/a04010e9490aa726d219c41139c27417dac836e2 peer_add monitor interf...
- 02:46 AM Bug #49719 (Fix Under Review): mon/MDSMonitor: standby-replay daemons should be removed when the ...
03/15/2021
- 06:25 PM Feature #49811 (Resolved): mds: collect I/O sizes from client for cephfs-top
- An average is a start but a histogram would be better for this kind of data.
- 05:44 AM Backport #49520: pacific: client: wake up the front pos waiter
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
The backport PR: https://github.com/ceph/ceph/pull/40109
- 05:38 AM Backport #49609: pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
The backport PR: https://github.com/ceph/ceph/pull/40108
03/12/2021
- 09:10 PM Backport #49634 (In Progress): pacific: Windows CephFS support - ceph-dokan
- 02:42 PM Backport #49634: pacific: Windows CephFS support - ceph-dokan
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/40069
ceph-backport.sh versi...
- 09:08 PM Backport #49432 (In Progress): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- 08:59 PM Backport #49432 (Need More Info): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- 09:06 PM Backport #49685 (Need More Info): pacific: ls -l in cephfs-shell tries to chase symlinks when sta...
- 09:05 PM Backport #49474 (In Progress): pacific: nautilus: qa: "Assertion `cb_done' failed."
- 09:03 PM Backport #49512 (In Progress): pacific: client: allow looking up snapped inodes by inode number+s...
- 09:01 PM Backport #49610 (In Progress): pacific: qa: mds removed because trimming for too long with valgrind
- 08:59 PM Backport #49765 (Need More Info): pacific: cephfs-mirror: symbolic links do not get synchronized ...
- 12:55 PM Backport #49765 (Resolved): pacific: cephfs-mirror: symbolic links do not get synchronized at times
- https://github.com/ceph/ceph/pull/40206
- 08:59 PM Backport #49753 (Need More Info): pacific: cephfs-mirror: add mirror peers via bootstrapping
- 08:59 PM Backport #49713 (Need More Info): pacific: mgr/nfs: Add interface to update export
- Varsha, please do this backport.
- 08:58 PM Backport #49609 (Need More Info): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_vali...
- Xiubo, please do this backport.
- 08:58 PM Backport #49751 (In Progress): pacific: snap-schedule doc
- 08:56 PM Backport #49561 (In Progress): pacific: qa: file system deletion not complete because starter fs ...
- 08:55 PM Backport #49414 (Need More Info): pacific: mgr/nfs: Update about user config
- Varsha, please do this one.
- 08:55 PM Backport #49470 (In Progress): pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 08:53 PM Backport #49564 (In Progress): pacific: mon/MonCap: `fs authorize` generates unparseable cap for ...
- 08:51 PM Backport #49517 (In Progress): pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- 08:49 PM Backport #49520 (Need More Info): pacific: client: wake up the front pos waiter
- Xiubo, please do this backport.
- 08:49 PM Backport #49563 (In Progress): pacific: qa: run fs:verify with tcmalloc
- 08:48 PM Backport #49608 (In Progress): pacific: mds: define CephFS errors that replace standard errno values
- 08:46 PM Backport #49569 (In Progress): pacific: qa: rank_freeze prevents failover on some tests
- 08:44 PM Backport #49612 (In Progress): pacific: qa: racy session evicted check
- 08:43 PM Backport #49630 (In Progress): pacific: qa: slow metadata ops during scrubbing
- 08:41 PM Backport #49631 (In Progress): pacific: mds: don't start purging inodes in the middle of recovery
- 04:47 PM Documentation #49763 (Fix Under Review): doc: Document mds cap acquisition readdir throttle
- 11:19 AM Documentation #49763 (In Progress): doc: Document mds cap acquisition readdir throttle
- 08:13 AM Documentation #49763 (Resolved): doc: Document mds cap acquisition readdir throttle
- Documentation for the mds cap acquisition readdir throttle, introduced with the PR [1], is missing. This needs to...
- 12:50 PM Bug #49711 (Pending Backport): cephfs-mirror: symbolic links do not get synchronized at times
- 07:27 AM Bug #49736: cephfs-top: missing keys in the client_metadata
- Patrick Donnelly wrote:
> Either cephfs-top should handle the missing metadata entries or the mgr/stats should fil...
- 02:20 AM Bug #49559: libcephfs: test termination "what(): Too many open files"
- It seems the tests will fire many event works, which will open many fds; the last issue about this was caused by the e...
- 12:46 AM Bug #49725: client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- With the upstream code, I can reproduce it around 10 times by running it for 8 hours at night.
03/11/2021
- 08:05 PM Feature #49304 (Fix Under Review): nfs-ganesha: plumb xattr support into FSAL_CEPH
- I've proposed some patches for ganesha to update its xattr implementation (which was based on an earlier draft of the...
- 08:03 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- 08:03 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- We still have the coredump from this test failure, but the x86_64 binaries have been reaped so we can't analyze it. I...
- 06:30 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Logging the RLIMIT_NOFILE we set should be no problem.
It may be tough to get a file descriptor in the same proces...
- 05:36 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Jeff Layton wrote:
> Xiubo Li wrote:
> > IMO, for this we can lower down the concurrent threads 128 --> 32 and can t...
- 02:49 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Xiubo Li wrote:
> IMO, for this we can lower down the concurrent threads 128 --> 32 and can try it several times. Fr...
- 05:40 PM Backport #49753 (Resolved): pacific: cephfs-mirror: add mirror peers via bootstrapping
- https://github.com/ceph/ceph/pull/40206
- 05:39 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Jeff Layton wrote:
> Yeah, looking at the MDS logs from the above run. I don't see any occurrences of the word "reca...
- 03:17 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Yeah, looking at the MDS logs from the above run. I don't see any occurrences of the word "recall" in there and at le...
- 03:01 PM Bug #49500: qa: "Assertion `cb_done' failed."
- With the most recent change to make that variable atomic, I doubt we're hitting cache-coherency problems. It seems mo...
- 05:35 PM Feature #49619 (Pending Backport): cephfs-mirror: add mirror peers via bootstrapping
- 05:32 PM Bug #49736 (Triaged): cephfs-top: missing keys in the client_metadata
- > Either cephfs-top should handle the missing metadata entries or the mgr/stats should fill in defaults until it can ...
- 01:04 PM Bug #49736 (Resolved): cephfs-top: missing keys in the client_metadata
- There are missing keys in the mgr/stats client_metadata for some clients, which causes the exception mentioned in the...
- 05:25 PM Backport #49752 (Resolved): octopus: snap-schedule doc
- https://github.com/ceph/ceph/pull/40775
- 05:25 PM Backport #49751 (Resolved): pacific: snap-schedule doc
- https://github.com/ceph/ceph/pull/40090
- 05:23 PM Documentation #48017 (Pending Backport): snap-schedule doc
- 05:04 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> The yaml frags that let you test fscache are machine-specific since the clients need to be prov... - 04:59 PM Feature #6373: kcephfs: qa: test fscache
- The yaml frags that let you test fscache are machine-specific since the clients need to be provisioned with an extra ...
- 04:51 PM Feature #6373 (In Progress): kcephfs: qa: test fscache
- 04:51 PM Feature #6373 (In Progress): kcephfs: qa: test fscache
- 02:14 PM Bug #49725 (Fix Under Review): client: crashed in cct->_conf.get_val() in Client::start_tick_thre...
- 01:11 AM Bug #49725 (Resolved): client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- The call trace:...
03/10/2021
- 05:59 PM Bug #49720 (Resolved): mon/MDSMonitor: do not pointlessly kill standbys that are incompatible wit...
- During a rolling upgrade, standbys may suicide once the CompatSet for the FSMap is updated. This needlessly complicat...
- 05:42 PM Bug #49719 (Resolved): mon/MDSMonitor: standby-replay daemons should be removed when the flag is ...
- 03:05 PM Backport #49713 (Resolved): pacific: mgr/nfs: Add interface to update export
- 03:05 PM Backport #49712 (Rejected): octopus: mgr/nfs: Add interface to update export
- 03:01 PM Bug #49133 (Resolved): mgr/nfs: Rook does not support restart of services, handle the NotImplemen...
- Backported with #45746.
- 03:00 PM Feature #45746 (Pending Backport): mgr/nfs: Add interface to update export
- 02:57 PM Bug #49122: vstart: Rados url error
- @singuliere none please do not delete the links between the parent ticket and the backport ticket. Just close the bac...
- 06:33 AM Bug #49122 (Resolved): vstart: Rados url error
- 06:32 AM Bug #49122: vstart: Rados url error
- Removing the "pacific" backport because the PR including the fix is already backported via https://tracker.ceph.com/i...
- 02:27 PM Bug #49711 (Fix Under Review): cephfs-mirror: symbolic links do not get synchronized at times
- 02:21 PM Bug #49711 (Resolved): cephfs-mirror: symbolic links do not get synchronized at times
- Due to this problematic code in src/tools/cephfs_mirror/PeerReplayer.cc:...
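The general class of bug (whatever the specific defect in PeerReplayer.cc turns out to be, which is truncated above) is easy to illustrate: a synchronizer must recreate a symlink from its target string rather than opening it, or it either copies the pointed-to file or fails on dangling links. A minimal Python sketch, not the actual cephfs-mirror code:

```python
import os
import tempfile

def sync_entry(src, dst):
    """Replicate one directory entry, preserving symlinks as symlinks."""
    if os.path.islink(src):
        # copy the link itself: read the target string, recreate it
        os.symlink(os.readlink(src), dst)
    else:
        with open(src, "rb") as f, open(dst, "wb") as g:
            g.write(f.read())

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    open(os.path.join(d, "file"), "w").close()
    os.symlink("file", os.path.join(d, "link"))
    sync_entry(os.path.join(d, "link"), os.path.join(d, "link.copy"))
    assert os.path.islink(os.path.join(d, "link.copy"))
```

Note the islink() check must come before any open()-based path, since open() follows symlinks.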
- 06:52 AM Backport #49423 (In Progress): pacific: doc: broken links multimds and kcephfs
- 05:00 AM Backport #49423 (New): pacific: doc: broken links multimds and kcephfs
- The file name and path are different in pacific. See https://github.com/ceph/ceph/blob/pacific/doc/dev/developer_guid...
- 06:29 AM Backport #49412 (Rejected): pacific: vstart: Rados url error
- 04:58 AM Documentation #49372: doc: broken links multimds and kcephfs
- The file name and path are different in pacific. See https://github.com/ceph/ceph/blob/pacific/doc/dev/developer_guid...
03/09/2021
- 11:47 PM Backport #49423 (Rejected): pacific: doc: broken links multimds and kcephfs
- 11:45 PM Documentation #49372: doc: broken links multimds and kcephfs
- The "documentation was not backported to pacific":https://github.com/ceph/ceph/pull/37949 nor is it associated with a...
- 11:28 PM Backport #49412 (In Progress): pacific: vstart: Rados url error
- 11:27 PM Backport #49346 (In Progress): pacific: vstart: volumes/nfs interface complaints cluster does not...
- 11:25 PM Backport #49686 (In Progress): pacific: cephfs-mirror: display cephfs mirror instances in `ceph s...
- 09:45 PM Backport #49686 (Resolved): pacific: cephfs-mirror: display cephfs mirror instances in `ceph stat...
- https://github.com/ceph/ceph/pull/39973
- 11:23 PM Backport #49687 (In Progress): pacific: client: add metric for number of pinned capabilities
- 09:45 PM Backport #49687 (Resolved): pacific: client: add metric for number of pinned capabilities
- https://github.com/ceph/ceph/pull/39972
- 10:23 PM Bug #49672: nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- https://github.com/ceph/ceph/pull/39960 merged
- 09:00 PM Bug #49672 (Fix Under Review): nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-ba...
- First occurrence https://sentry.ceph.com/organizations/ceph/issues/4718/events/3eb2f218e5b44406a9f1fd54ef90c5b4/?proj...
- 07:51 PM Bug #49672: nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- https://pulpito.ceph.com/teuthology-2021-03-06_04:20:17-upgrade:luminous-x-nautilus-distro-basic-smithi/
- 04:22 PM Bug #49672 (Resolved): nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- This is for the 14.2.17 release
Run: https://pulpito.ceph.com/yuriw-2021-03-08_16:51:42-fs-nautilus-distro-basic-smith...
- 09:55 PM Bug #49684 (Fix Under Review): qa: fs:cephadm mount does not wait for mds to be created
- 09:48 PM Bug #49684 (In Progress): qa: fs:cephadm mount does not wait for mds to be created
- 09:29 PM Bug #49684 (Resolved): qa: fs:cephadm mount does not wait for mds to be created
- ...
- 09:41 PM Feature #46865 (Pending Backport): client: add metric for number of pinned capabilities
- 09:40 PM Feature #48943 (Pending Backport): cephfs-mirror: display cephfs mirror instances in `ceph status...
- 09:40 PM Backport #49685 (Resolved): pacific: ls -l in cephfs-shell tries to chase symlinks when stat'ing ...
- 09:38 PM Bug #48912 (Pending Backport): ls -l in cephfs-shell tries to chase symlinks when stat'ing and er...
- 09:37 PM Bug #49511 (Resolved): qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
- 08:35 AM Feature #40401 (Resolved): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvo...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:34 AM Feature #44928 (Resolved): mgr/volumes: evict clients based on auth ID and subvolume mounted
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:34 AM Feature #44931 (Resolved): mgr/volumes: get the list of auth IDs that have been granted access to...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:33 AM Bug #48501 (Resolved): pybind/mgr/volumes: inherited snapshots should be filtered out of snapshot...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:32 AM Bug #48830 (Resolved): pacific: qa: :ERROR: test_idempotency
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:31 AM Bug #49192 (Resolved): qa::ERROR: test_recover_auth_metadata_during_authorize
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:31 AM Bug #49294 (Resolved): pacific: pybind/ceph_volume_client: volume authorize/deauthorize crashes w...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:31 AM Bug #49374 (Resolved): mgr/volumes: Bump up the AuthMetadataManager's version to 6
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:45 AM Bug #49662 (Resolved): ceph-dokan improvements for additional mounts
- This PR [1] adds a few ceph-dokan improvements, mostly targeting additional fs mounts:
* an "unmap" command
* avoi...
03/08/2021
- 10:20 PM Bug #45344 (Resolved): doc: Table Of Contents doesn't work
- An update to the UI made by Kefu Chai in March 2021 fixes this issue.
- 09:22 PM Documentation #48017 (Fix Under Review): snap-schedule doc
- 11:55 AM Documentation #48017: snap-schedule doc
- The module was only backported to octopus, so we can probably skip the doc backport to nautilus.
- 07:40 PM Backport #49431 (Resolved): octopus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- 12:20 PM Backport #49431 (In Progress): octopus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- 07:40 PM Backport #49508 (Resolved): octopus: pybind/ceph_volume_client: volume authorize/deauthorize cras...
- 12:19 PM Backport #49508 (In Progress): octopus: pybind/ceph_volume_client: volume authorize/deauthorize c...
- 07:30 PM Backport #49266 (Resolved): octopus: qa::ERROR: test_recover_auth_metadata_during_authorize
- 07:30 PM Backport #49230 (Resolved): octopus: qa: :ERROR: test_idempotency
- 07:30 PM Backport #49029 (Resolved): octopus: mgr/volumes: evict clients based on auth ID and subvolume mo...
- 07:30 PM Backport #48900 (Resolved): octopus: mgr/volumes: get the list of auth IDs that have been granted...
- 07:29 PM Backport #48858 (Resolved): octopus: pybind/mgr/volumes: inherited snapshots should be filtered o...
- 07:29 PM Backport #48196 (Resolved): octopus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume...
- 02:48 PM Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
- This only failed in the local test; I will work on it tomorrow.
- 02:45 PM Bug #49559 (Triaged): libcephfs: test termination "what(): Too many open files"
- 02:44 PM Bug #49644: vstart_runner: run_ceph_w() doesn't work with shell=True
- This PR exposes this issue and adds a workaround for it - https://github.com/ceph/ceph/pull/38443.
- 02:43 PM Bug #49644 (In Progress): vstart_runner: run_ceph_w() doesn't work with shell=True
- 09:05 AM Bug #49644 (New): vstart_runner: run_ceph_w() doesn't work with shell=True
- Setting @shell@ to @True@ leads to a crash when @tasks.mgr.test_module_selftest.TestModuleSelftest.test_selftest_clus...
- 12:51 PM Bug #48805: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/s...
- I'm unable to comment on the exact teuthology run mentioned in the description.
However, with the testing so far, th...
- 08:17 AM Backport #49429 (Resolved): pacific: mgr/volumes: Bump up the AuthMetadataManager's version to 6
03/07/2021
- 03:51 PM Bug #49495 (Resolved): qa/ceph_manager: raw_cluster_cmd passes both args and kwargs
- 03:51 PM Bug #49486 (Resolved): qa: raw_cluster_cmd and raw_cluster_cmd_result loses command arguments passed
03/05/2021
- 10:35 PM Backport #49634 (Resolved): pacific: Windows CephFS support - ceph-dokan
- https://github.com/ceph/ceph/pull/40069
- 10:30 PM Feature #49623 (Pending Backport): Windows CephFS support - ceph-dokan
- 01:48 PM Feature #49623 (Resolved): Windows CephFS support - ceph-dokan
- This issue tracks the Windows CephFS support, introduced by this PR [1]
[1] https://github.com/ceph/ceph/pull/38819
- 07:35 PM Backport #49631 (Resolved): pacific: mds: don't start purging inodes in the middle of recovery
- https://github.com/ceph/ceph/pull/40079
- 07:35 PM Backport #49630 (Resolved): pacific: qa: slow metadata ops during scrubbing
- https://github.com/ceph/ceph/pull/40080
- 07:34 PM Bug #49607 (Pending Backport): qa: slow metadata ops during scrubbing
- 07:33 PM Bug #49074 (Pending Backport): mds: don't start purging inodes in the middle of recovery
- 05:41 PM Bug #49628 (New): mgr/nfs: Support cluster info command for rook
- Fetch cluster info, i.e. IP and port ($ ceph nfs cluster info [<clusterid>])
- 09:00 AM Bug #49621 (Resolved): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestData...
- When running the teuthology test locally, the tasks.cephfs.test_data_scan.TestDataScan test failed:...
- 06:43 AM Bug #49617 (Fix Under Review): mds: race of fetching large dirfrag
- 03:55 AM Bug #49617 (Triaged): mds: race of fetching large dirfrag
- 03:50 AM Bug #49617 (Resolved): mds: race of fetching large dirfrag
- When a dirfrag contains more than 'mds_dir_keys_per_op' items, MDS needs to send multiple 'omap-get-vals' requests to...
- 05:51 AM Feature #49619 (Fix Under Review): cephfs-mirror: add mirror peers via bootstrapping
- 05:43 AM Feature #49619 (In Progress): cephfs-mirror: add mirror peers via bootstrapping
- 05:42 AM Feature #49619 (Resolved): cephfs-mirror: add mirror peers via bootstrapping
- Right now, adding a peer requires peer cluster ceph configuration and user keyring to be available in the primary clu...
03/04/2021
- 09:35 PM Backport #49613 (Resolved): nautilus: qa: racy session evicted check
- https://github.com/ceph/ceph/pull/40714
- 09:35 PM Backport #49612 (Resolved): pacific: qa: racy session evicted check
- https://github.com/ceph/ceph/pull/40081
- 09:35 PM Backport #49611 (Resolved): octopus: qa: racy session evicted check
- https://github.com/ceph/ceph/pull/40773
- 09:35 PM Backport #49610 (Resolved): pacific: qa: mds removed because trimming for too long with valgrind
- https://github.com/ceph/ceph/pull/40091
- 09:33 PM Bug #49458 (Resolved): qa: switch fs:upgrade from nautilus to octopus
- 09:32 PM Bug #49507 (Pending Backport): qa: mds removed because trimming for too long with valgrind
- 09:30 PM Backport #49609 (Resolved): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- https://github.com/ceph/ceph/pull/40108
- 09:30 PM Bug #49318 (Pending Backport): qa: racy session evicted check
- 09:30 PM Backport #49608 (Resolved): pacific: mds: define CephFS errors that replace standard errno values
- https://github.com/ceph/ceph/pull/40083
- 09:29 PM Fix #48802 (Pending Backport): mds: define CephFS errors that replace standard errno values
- 09:28 PM Bug #48559 (Pending Backport): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 09:23 PM Bug #49607 (Fix Under Review): qa: slow metadata ops during scrubbing
- 09:19 PM Bug #49607 (Resolved): qa: slow metadata ops during scrubbing
- ...
- 06:56 PM Bug #49605 (Resolved): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- ...
- 10:34 AM Bug #49597 (New): mds: mds goes to 'replay' state after setting 'osd_failsafe_ratio' to less than...
- Steps to reproduce on vstart cluster:
1. Set the following in ../src/vstart.sh
1. Disable client_check_pool_...