Activity
From 02/23/2021 to 03/24/2021
03/24/2021
- 11:24 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Yeah, that repeated call makes it look like the client is repeatedly calling in to the MDS for ...
- 10:10 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Yeah, that repeated call makes it look like the client is repeatedly calling in to the MDS for that inode number. It'...
- 09:53 PM Bug #49922 (Fix Under Review): MDS slow request lookupino #0x100 on rank 1 block forever on dispa...
- 09:01 PM Bug #49922 (In Progress): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- 08:45 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Patrick Donnelly wrote:
> Jeff Layton wrote:
> > Maybe we could lower mds_max_caps_per_client for this test? It def...
- 08:33 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Jeff Layton wrote:
> Maybe we could lower mds_max_caps_per_client for this test? It defaults to 1M now, but we could...
- 06:52 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > So the issue is that AsyncJobs.get_job() is called with AsyncJob...
- 04:01 PM Backport #49935 (In Progress): pacific: libcephfs: test termination "what(): Too many open files"
- 03:59 PM Backport #49520 (In Progress): pacific: client: wake up the front pos waiter
- 03:58 PM Backport #49609 (In Progress): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validat...
- 03:58 PM Backport #49930 (In Progress): pacific: mon/MDSMonitor: standby-replay daemons should be removed ...
- 03:58 PM Backport #49932 (In Progress): pacific: MDS should return -ENODATA when asked to remove xattr tha...
- 03:53 PM Backport #49423 (Resolved): pacific: doc: broken links multimds and kcephfs
- 03:49 PM Backport #49877 (Resolved): pacific: doc: Document mds cap acquisition readdir throttle
- 03:47 PM Backport #49414 (Resolved): pacific: mgr/nfs: Update about user config
- 03:45 PM Backport #49951 (Resolved): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- 10:27 AM Backport #49951 (In Progress): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon p...
- 01:50 AM Backport #49951 (Resolved): pacific: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- https://github.com/ceph/ceph/pull/40362
- 05:43 AM Feature #49953 (Fix Under Review): cephfs-top: allow configurable stats refresh interval
- 05:42 AM Feature #49953 (In Progress): cephfs-top: allow configurable stats refresh interval
- 05:39 AM Feature #49953 (Resolved): cephfs-top: allow configurable stats refresh interval
- 03:11 AM Bug #49928 (Duplicate): client: items pinned in cache preventing unmount x2
- 03:10 AM Bug #49928: client: items pinned in cache preventing unmount x2
- For the inode `0x10000001949`: it has the Fb cap, and the cap snap flush was delayed but never happened after that:
...
- 12:43 AM Bug #49928 (In Progress): client: items pinned in cache preventing unmount x2
- 01:50 AM Backport #49950 (Resolved): octopus: mgr/nfs: Update about cephadm single nfs-ganesha daemon per ...
- https://github.com/ceph/ceph/pull/40777
- 01:47 AM Bug #49936 (Fix Under Review): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- 01:46 AM Documentation #49921 (Pending Backport): mgr/nfs: Update about cephadm single nfs-ganesha daemon ...
03/23/2021
- 03:00 PM Backport #49564: pacific: mon/MonCap: `fs authorize` generates unparseable cap for file system na...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40086
merged
- 03:00 PM Backport #49569: pacific: qa: rank_freeze prevents failover on some tests
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40082
merged
- 02:56 PM Backport #49474: pacific: nautilus: qa: "Assertion `cb_done' failed."
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40093
merged
- 02:55 PM Backport #49512: pacific: client: allow looking up snapped inodes by inode number+snapid tuple
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40092
merged
- 02:55 PM Backport #49751: pacific: snap-schedule doc
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40090
merged
- 02:53 PM Backport #49561: pacific: qa: file system deletion not complete because starter fs already destroyed
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40089
merged
- 02:53 PM Backport #49470: pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40087
merged
- 02:51 PM Backport #49517: pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40085
merged
- 02:50 PM Backport #49608: pacific: mds: define CephFS errors that replace standard errno values
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40083
merged
- 02:49 PM Backport #49612: pacific: qa: racy session evicted check
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40081
merged
- 02:48 PM Backport #49630: pacific: qa: slow metadata ops during scrubbing
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40080
merged
- 02:47 PM Backport #49631: pacific: mds: don't start purging inodes in the middle of recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/40079
merged - 01:24 PM Feature #49942 (Resolved): cephfs-mirror: enable running in HA
- cephfs-mirror and mgr/mirroring has the machinery to run/support HA but we do not have any test coverage for such a s...
- 09:13 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- I suspect that two tasks are doing the rename:
For task1, if it just does _lookup(_INPROGRESS) and ...
- 03:29 AM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- @Xiaoxi
Is this reproducible for you? If so, how often? Locally I was trying in a loop by renaming two files for...
- 09:03 AM Bug #49939 (Resolved): cephfs-mirror: be resilient to recreated snapshot during synchronization
- The mirror daemon works with snapshot paths. It relies on the snap-id to infer deleted and renamed snapshots, but onc...
- 07:04 AM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Patrick Donnelly wrote:
> So the issue is that AsyncJobs.get_job() is called with AsyncJobs.lock locked. Then gettin...
- 05:43 AM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Logs from mds.0. Also repeating at the same frequency....
- 04:21 AM Backport #49929 (In Progress): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.te...
- 03:05 AM Backport #49929 (Resolved): pacific: test: test_mirroring_command_idempotency (tasks.cephfs.test_...
- https://github.com/ceph/ceph/pull/40206
- 03:19 AM Bug #49936 (Pending Backport): ceph-fuse: src/include/buffer.h: 1187: FAILED ceph_assert(_num <= ...
- ...
- 03:10 AM Backport #49935 (Resolved): pacific: libcephfs: test termination "what(): Too many open files"
- https://github.com/ceph/ceph/pull/40372
- 03:10 AM Backport #49934 (Resolved): octopus: libcephfs: test termination "what(): Too many open files"
- https://github.com/ceph/ceph/pull/40776
- 03:10 AM Backport #49933 (Rejected): nautilus: MDS should return -ENODATA when asked to remove xattr that ...
- 03:10 AM Backport #49932 (Resolved): pacific: MDS should return -ENODATA when asked to remove xattr that d...
- https://github.com/ceph/ceph/pull/40371
- 03:10 AM Backport #49931 (Rejected): octopus: MDS should return -ENODATA when asked to remove xattr that d...
- 03:06 AM Bug #49559 (Pending Backport): libcephfs: test termination "what(): Too many open files"
- 03:05 AM Bug #49621 (Resolved): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestData...
- 03:05 AM Backport #49930 (Resolved): pacific: mon/MDSMonitor: standby-replay daemons should be removed whe...
- https://github.com/ceph/ceph/pull/40325
- 03:05 AM Bug #49833 (Pending Backport): MDS should return -ENODATA when asked to remove xattr that doesn't...
- 03:04 AM Bug #49822 (Pending Backport): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin....
- 03:03 AM Bug #49719 (Pending Backport): mon/MDSMonitor: standby-replay daemons should be removed when the ...
- 02:53 AM Bug #49928 (Duplicate): client: items pinned in cache preventing unmount x2
- ...
03/22/2021
- 05:04 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- Jeff Layton wrote:
> Was there anything useful in the logs from mds 1 about the op and what state it's in?
I set ...
- 03:31 PM Bug #49922: MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- I'm unfamiliar with the MDS code, so here are some notes as I peruse it:
Ok, so the TrackedOp entries get put on the list wh...
- 12:05 PM Bug #49922 (Resolved): MDS slow request lookupino #0x100 on rank 1 block forever on dispatched
- We have two MDSs deployed by cephadm.
Several hours ago, we got a health warning:... - 02:02 PM Bug #49912: client: dir->dentries inconsistent, both newname and oldname points to same inode, m...
- I think that after the mv, the directory should no longer be considered ORDERED. We probably _can_ consider it comple...
- 01:41 PM Bug #49912 (Triaged): client: dir->dentries inconsistent, both newname and oldname points to same...
- 01:51 PM Bug #49845 (Resolved): qa: failed umount in test_volumes
- The client kernel in this test had a bad patch in it that has since been fixed. See:
https://tracker.ceph.com/...
- 12:44 PM Backport #49685 (In Progress): pacific: ls -l in cephfs-shell tries to chase symlinks when stat'i...
- https://github.com/ceph/ceph/pull/40308
- 12:36 PM Backport #49713 (In Progress): pacific: mgr/nfs: Add interface to update export
- https://github.com/ceph/ceph/pull/40307
- 12:23 PM Backport #49414 (In Progress): pacific: mgr/nfs: Update about user config
- 11:58 AM Documentation #49921 (In Progress): mgr/nfs: Update about cephadm single nfs-ganesha daemon per h...
- 11:38 AM Documentation #49921 (Resolved): mgr/nfs: Update about cephadm single nfs-ganesha daemon per host...
- 03:45 AM Feature #49811: mds: collect I/O sizes from client for cephfs-top
- @Patrick, @Jeff
Comparing the iotop/iostat:
We may also need to collect the average IO READ/WRITE speed per-seco...
- 03:23 AM Feature #49811 (In Progress): mds: collect I/O sizes from client for cephfs-top
- 02:33 AM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- So the issue is that AsyncJobs.get_job() is called with AsyncJobs.lock locked. Then getting the next job involves ope...
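A minimal Python sketch of the hang pattern described above (illustrative names only, not the actual mgr/volumes code): a blocking wait is performed while holding the very lock the finisher needs to deliver the completion, so the wait can never be satisfied.
<pre>
import threading

lock = threading.Lock()
done = threading.Event()

def finisher():
    # The finisher must take the lock before it can deliver the completion.
    with lock:
        done.set()

def get_job():
    # get_job() is entered with the lock already held...
    with lock:
        threading.Thread(target=finisher).start()
        # ...and then blocks on work that only the finisher can complete.
        # Without the timeout, this wait would hang forever.
        ok = done.wait(timeout=2)
        print("completion delivered while lock held:", ok)  # always False

get_job()
</pre>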
- 02:30 AM Feature #46866: kceph: add metric for number of pinned capabilities
- Pushing the kclient patchwork.
03/21/2021
- 05:50 PM Bug #49605 (In Progress): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- ...
- 04:51 PM Bug #49912 (Resolved): client: dir->dentries inconsistent, both newname and oldname points to sam...
- We have applications that use the FS as a lock --- an empty file named .dw_gem2_cmn_sd_{INPROGRESS/COMPLETE}, applic...
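For context, a minimal Python sketch of the locking pattern described (marker file names are hypothetical): an empty file is flipped between an in-progress and a complete name with rename, and other processes look up whichever name exists, so a lookup of the old name must fail after the rename.
<pre>
import os

IN_PROGRESS = ".dw_lock_INPROGRESS"  # hypothetical marker names
COMPLETE = ".dw_lock_COMPLETE"

open(IN_PROGRESS, "w").close()       # take the "lock"
# ... protected work happens here ...
os.rename(IN_PROGRESS, COMPLETE)     # release: the old name should now be gone
print(os.path.exists(IN_PROGRESS), os.path.exists(COMPLETE))  # expect: False True
</pre>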
03/20/2021
- 04:19 AM Backport #49903 (In Progress): nautilus: mgr/volumes: setuid and setgid file bits are not retaine...
- 03:15 AM Backport #49903 (Resolved): nautilus: mgr/volumes: setuid and setgid file bits are not retained a...
- https://github.com/ceph/ceph/pull/40270
- 04:01 AM Backport #49904 (In Progress): octopus: mgr/volumes: setuid and setgid file bits are not retained...
- 03:15 AM Backport #49904 (Resolved): octopus: mgr/volumes: setuid and setgid file bits are not retained af...
- https://github.com/ceph/ceph/pull/40268
- 03:29 AM Backport #49905 (In Progress): pacific: mgr/volumes: setuid and setgid file bits are not retained...
- 03:15 AM Backport #49905 (Resolved): pacific: mgr/volumes: setuid and setgid file bits are not retained af...
- https://github.com/ceph/ceph/pull/40267
- 03:12 AM Bug #49882 (Pending Backport): mgr/volumes: setuid and setgid file bits are not retained after a ...
03/19/2021
- 09:46 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
- Debugging PR: https://github.com/ceph/ceph/pull/40264
- 09:20 PM Bug #49605: pybind/mgr/volumes: deadlock on async job hangs finisher thread
/ceph/teuthology-archive/pdonnell-2021-03-15_22:16:56-fs-wip-pdonnell-testing-20210315.182203-distro-basic-smithi/596...
- 06:13 PM Backport #49852 (In Progress): pacific: mds: race of fetching large dirfrag
- 06:13 PM Backport #49852 (In Progress): pacific: mds: race of fetching large dirfrag
- 06:12 PM Backport #49854 (In Progress): pacific: client: crashed in cct->_conf.get_val() in Client::start_...
- 06:12 PM Backport #49877 (In Progress): pacific: doc: Document mds cap acquisition readdir throttle
- 02:46 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Maybe we could lower mds_max_caps_per_client for this test? It defaults to 1M now, but we could take that down to 500...
- 02:39 PM Bug #49500: qa: "Assertion `cb_done' failed."
- 02:39 PM Bug #49500: qa: "Assertion `cb_done' failed."
- I'm not sure that setting is enough to explain this. AFAICT, that setting is only consulted in notify_health(), so I ...
- 12:57 PM Backport #49753 (In Progress): pacific: cephfs-mirror: add mirror peers via bootstrapping
- 12:57 PM Backport #49765 (In Progress): pacific: cephfs-mirror: symbolic links do not get synchronized at ...
- 10:07 AM Feature #48943 (Resolved): cephfs-mirror: display cephfs mirror instances in `ceph status` command
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:06 AM Bug #49419 (Resolved): cephfs-mirror: dangling pointer in PeerReplayer
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:12 AM Bug #49882 (Fix Under Review): mgr/volumes: setuid and setgid file bits are not retained after a ...
03/18/2021
- 03:07 PM Bug #49882 (In Progress): mgr/volumes: setuid and setgid file bits are not retained after a subvo...
- 02:23 PM Bug #49882 (Resolved): mgr/volumes: setuid and setgid file bits are not retained after a subvolum...
- setuid and setgid file bits are not retained after a subvolume snapshot restore
Reproducer on vstart cluster:
#...
- 01:53 PM Backport #49686: pacific: cephfs-mirror: display cephfs mirror instances in `ceph status` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39973
m...
- 01:53 PM Backport #49432: pacific: cephfs-mirror: dangling pointer in PeerReplayer
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39810
m...
- 01:26 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- Jeff Layton wrote:
> John, I fixed a similar sounding bug in the MDS yesterday:
>
> https://tracker.ceph.com/...
- 01:01 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- John, I fixed a similar sounding bug in the MDS yesterday:
https://tracker.ceph.com/issues/49833
Are you ab...
- 09:29 AM Bug #49736: cephfs-top: missing keys in the client_metadata
- https://github.com/ceph/ceph/pull/40210
- 04:50 AM Bug #44100: cephfs rsync kworker high load.
- We have also experienced a similar issue, where kernel mount performance degraded severely while doing rsync (running...
- 02:45 AM Backport #49877 (Resolved): pacific: doc: Document mds cap acquisition readdir throttle
- https://github.com/ceph/ceph/pull/40250
- 02:41 AM Documentation #49763 (Pending Backport): doc: Document mds cap acquisition readdir throttle
03/17/2021
- 09:47 PM Feature #48791 (Need More Info): mds: support file block size
- 09:45 PM Feature #41073: cephfs-sync: tool for synchronizing cephfs snapshots to remote target
- Milind, what's the status of this ticket?
- 07:03 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- I'll also note that I did find the following issue
https://tracker.ceph.com/issues/49833
But forgot to reference ...
- 07:00 PM Bug #49873 (Triaged): ceph_lremovexattr does not return error on file in ceph pacific
- John Mulligan wrote:
> To try and clarify:
>
> The xattr is set on the link. There should be no xattr of that nam...
- 06:52 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- To try and clarify:
The xattr is set on the link. There should be no xattr of that name on the file the link point...
- 06:46 PM Bug #49873: ceph_lremovexattr does not return error on file in ceph pacific
- John Mulligan wrote:
> While running our go-ceph CI against pacific for the first time our CI failed in the xattr te...
- 06:24 PM Bug #49873 (Duplicate): ceph_lremovexattr does not return error on file in ceph pacific
- While running our go-ceph CI against pacific for the first time our CI failed in the xattr tests.
It expected a call...
- 04:02 PM Bug #49859 (Triaged): Snapshot schedules are not deleted after enabling/disabling snap module
- 10:15 AM Bug #49859 (Triaged): Snapshot schedules are not deleted after enabling/disabling snap module
- Assuming the following:...
- 03:41 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Xiubo Li wrote:
> It seems the tests will fire many event works, which will open many fds, the last issue about this...
- 01:48 PM Backport #49686 (Resolved): pacific: cephfs-mirror: display cephfs mirror instances in `ceph stat...
- 01:46 PM Backport #49432 (Resolved): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- 10:02 AM Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
- There is another error log ahead of the above call trace:...
- 09:59 AM Bug #49621 (Fix Under Review): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan....
- 04:28 AM Bug #49621 (In Progress): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestD...
- 04:28 AM Feature #49811: mds: collect I/O sizes from client for cephfs-top
- Sure, will work on it. Thanks.
- 03:30 AM Backport #49854 (Resolved): pacific: client: crashed in cct->_conf.get_val() in Client::start_tic...
- https://github.com/ceph/ceph/pull/40251
- 03:25 AM Backport #49853 (Resolved): nautilus: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40720
- 03:25 AM Backport #49852 (Resolved): pacific: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40252
- 03:25 AM Bug #49725 (Pending Backport): client: crashed in cct->_conf.get_val() in Client::start_tick_thre...
- 03:25 AM Backport #49851 (Resolved): octopus: mds: race of fetching large dirfrag
- https://github.com/ceph/ceph/pull/40774
- 03:23 AM Bug #49617 (Pending Backport): mds: race of fetching large dirfrag
03/16/2021
- 08:42 PM Bug #49843 (Fix Under Review): qa: fs/snaps/snaptest-upchildrealms.sh failure
- Bad error handling in this patch:
https://lore.kernel.org/ceph-devel/20210315180717.266155-3-jlayton@kernel.or...
- 08:12 PM Bug #49843: qa: fs/snaps/snaptest-upchildrealms.sh failure
- This may be fallout from the recent snapdir handling fixes. I'll take a look.
- 07:53 PM Bug #49843 (Resolved): qa: fs/snaps/snaptest-upchildrealms.sh failure
- ...
- 08:01 PM Bug #49845 (Resolved): qa: failed umount in test_volumes
- ...
- 07:21 PM Bug #49837 (Fix Under Review): mgr/pybind/snap_schedule: do not fail when no fs snapshots are ava...
- 05:16 PM Bug #49837 (Resolved): mgr/pybind/snap_schedule: do not fail when no fs snapshots are available
- When requesting the JSON output, we should not return an error but just an empty dict:...
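A rough sketch of the requested behavior (illustrative, not the snap_schedule module itself): an empty schedule listing should serialize to an empty JSON document instead of raising.
<pre>
import json

def list_schedules_json(schedules):
    # With nothing scheduled, emit "{}" rather than an error.
    return json.dumps(schedules or {})

print(list_schedules_json({}))  # -> {}
</pre>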
- 05:35 PM Bug #49833 (Fix Under Review): MDS should return -ENODATA when asked to remove xattr that doesn't...
- 04:36 PM Bug #49833: MDS should return -ENODATA when asked to remove xattr that doesn't exist
- I'll take this one since I have a patch (and testcase).
- 04:22 PM Bug #49833 (Triaged): MDS should return -ENODATA when asked to remove xattr that doesn't exist
- 04:16 PM Bug #49833 (Resolved): MDS should return -ENODATA when asked to remove xattr that doesn't exist
- This patch adds a small gtest that shows that the handling of removexattr is wrong:...
- 04:50 PM Bug #49834 (Won't Fix - EOL): octopus: qa: test_statfs_on_deleted_fs failure
- https://pulpito.ceph.com/yuriw-2021-03-13_22:13:22-fs-wip-yuriw-octopus-15.2.10-distro-basic-smithi/5962994/
Test ...
- 04:34 PM Bug #49826: Multiple nfs-ganesha instances and strays objects in CephFS
- The strays behavior makes some sense, since we don't really do anything client-side to notify the application when th...
- 04:27 PM Bug #49826: Multiple nfs-ganesha instances and strays objects in CephFS
- Aleksandr Rudenko wrote:
> Usual stray objects are purged after 10-20 secs. But not in this case. In this case stray...
- 07:15 AM Bug #49826 (New): Multiple nfs-ganesha instances and strays objects in CephFS
- Hi!
We have one CephFS and two standalone ganesha instances on different hosts which export the same directory.
W...
- 12:15 PM Bug #49736: cephfs-top: missing keys in the client_metadata
- Venky Shankar wrote:
> MDSRank::dump_sessions() has this filter:
>
> [...]
>
> ... which might be the reason tha...
- 05:34 AM Bug #49822 (Fix Under Review): test: test_mirroring_command_idempotency (tasks.cephfs.te...
- 04:09 AM Bug #49822 (Resolved): test: test_mirroring_command_idempotency (tasks.cephfs.test_admin.TestMirr...
- With https://github.com/ceph/ceph/pull/39845/commits/a04010e9490aa726d219c41139c27417dac836e2 peer_add monitor interf...
- 02:46 AM Bug #49719 (Fix Under Review): mon/MDSMonitor: standby-replay daemons should be removed when the ...
03/15/2021
- 06:25 PM Feature #49811 (Resolved): mds: collect I/O sizes from client for cephfs-top
- An average is a start but a histogram would be better for this kind of data.
- 05:44 AM Backport #49520: pacific: client: wake up the front pos waiter
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
The backport PR: https://github.com/ceph/ceph/pull/40109
- 05:38 AM Backport #49609: pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- Patrick Donnelly wrote:
> Xiubo, please do this backport.
The backport PR: https://github.com/ceph/ceph/pull/40108
03/12/2021
- 09:10 PM Backport #49634 (In Progress): pacific: Windows CephFS support - ceph-dokan
- 02:42 PM Backport #49634: pacific: Windows CephFS support - ceph-dokan
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/40069
ceph-backport.sh versi...
- 09:08 PM Backport #49432 (In Progress): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- 08:59 PM Backport #49432 (Need More Info): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- 09:06 PM Backport #49685 (Need More Info): pacific: ls -l in cephfs-shell tries to chase symlinks when sta...
- 09:05 PM Backport #49474 (In Progress): pacific: nautilus: qa: "Assertion `cb_done' failed."
- 09:03 PM Backport #49512 (In Progress): pacific: client: allow looking up snapped inodes by inode number+s...
- 09:01 PM Backport #49610 (In Progress): pacific: qa: mds removed because trimming for too long with valgrind
- 08:59 PM Backport #49765 (Need More Info): pacific: cephfs-mirror: symbolic links do not get synchronized ...
- 12:55 PM Backport #49765 (Resolved): pacific: cephfs-mirror: symbolic links do not get synchronized at times
- https://github.com/ceph/ceph/pull/40206
- 08:59 PM Backport #49753 (Need More Info): pacific: cephfs-mirror: add mirror peers via bootstrapping
- 08:59 PM Backport #49713 (Need More Info): pacific: mgr/nfs: Add interface to update export
- Varsha, please do this backport.
- 08:58 PM Backport #49609 (Need More Info): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_vali...
- Xiubo, please do this backport.
- 08:58 PM Backport #49751 (In Progress): pacific: snap-schedule doc
- 08:56 PM Backport #49561 (In Progress): pacific: qa: file system deletion not complete because starter fs ...
- 08:55 PM Backport #49414 (Need More Info): pacific: mgr/nfs: Update about user config
- Varsha, please do this one.
- 08:55 PM Backport #49470 (In Progress): pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 08:53 PM Backport #49564 (In Progress): pacific: mon/MonCap: `fs authorize` generates unparseable cap for ...
- 08:51 PM Backport #49517 (In Progress): pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- 08:49 PM Backport #49520 (Need More Info): pacific: client: wake up the front pos waiter
- Xiubo, please do this backport.
- 08:49 PM Backport #49563 (In Progress): pacific: qa: run fs:verify with tcmalloc
- 08:48 PM Backport #49608 (In Progress): pacific: mds: define CephFS errors that replace standard errno values
- 08:46 PM Backport #49569 (In Progress): pacific: qa: rank_freeze prevents failover on some tests
- 08:44 PM Backport #49612 (In Progress): pacific: qa: racy session evicted check
- 08:43 PM Backport #49630 (In Progress): pacific: qa: slow metadata ops during scrubbing
- 08:41 PM Backport #49631 (In Progress): pacific: mds: don't start purging inodes in the middle of recovery
- 04:47 PM Documentation #49763 (Fix Under Review): doc: Document mds cap acquisition readdir throttle
- 11:19 AM Documentation #49763 (In Progress): doc: Document mds cap acquisition readdir throttle
- 08:13 AM Documentation #49763 (Resolved): doc: Document mds cap acquisition readdir throttle
- Documentation for the mds cap acquisition readdir throttle, introduced with the PR [1], is missing. This needs to...
- 12:50 PM Bug #49711 (Pending Backport): cephfs-mirror: symbolic links do not get synchronized at times
- 07:27 AM Bug #49736: cephfs-top: missing keys in the client_metadata
- Patrick Donnelly wrote:
> Either cephfs-top should handle the missing metadata entries or the mgr/stats should fil...
- 02:20 AM Bug #49559: libcephfs: test termination "what(): Too many open files"
- It seems the tests will fire many event works, which will open many fds; the last issue about this was caused by the e...
- 12:46 AM Bug #49725: client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- With the upstream code, I can reproduce it around 10 times by running it for 8 hours at night.
03/11/2021
- 08:05 PM Feature #49304 (Fix Under Review): nfs-ganesha: plumb xattr support into FSAL_CEPH
- I've proposed some patches for ganesha to update its xattr implementation (which was based on an earlier draft of the...
- 08:03 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- We still have the coredump from this test failure, but the x86_64 binaries have been reaped so we can't analyze it. I...
- 06:30 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Logging the RLIMIT_NOFILE we set should be no problem.
It may be tough to get a file descriptor in the same proces...
- 05:36 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Jeff Layton wrote:
> Xiubo Li wrote:
> > IMO, for this we can lower down the concurrent threads 128 --> 32 and can t...
- 02:49 PM Bug #49559: libcephfs: test termination "what(): Too many open files"
- Xiubo Li wrote:
> IMO, for this we can lower down the concurrent threads 128 --> 32 and can try it several times. Fr...
- 05:40 PM Backport #49753 (Resolved): pacific: cephfs-mirror: add mirror peers via bootstrapping
- https://github.com/ceph/ceph/pull/40206
- 05:39 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Jeff Layton wrote:
> Yeah, looking at the MDS logs from the above run. I don't see any occurrences of the word "reca...
- 03:17 PM Bug #49500: qa: "Assertion `cb_done' failed."
- Yeah, looking at the MDS logs from the above run. I don't see any occurrences of the word "recall" in there and at le...
- 03:01 PM Bug #49500: qa: "Assertion `cb_done' failed."
- With the most recent change to make that variable atomic, I doubt we're hitting cache-coherency problems. It seems mo...
- 05:35 PM Feature #49619 (Pending Backport): cephfs-mirror: add mirror peers via bootstrapping
- 05:32 PM Bug #49736 (Triaged): cephfs-top: missing keys in the client_metadata
- > Either cephfs-top should handle the missing metadata entries or the mgr/stats should fill in defaults until it can ...
- 01:04 PM Bug #49736 (Resolved): cephfs-top: missing keys in the client_metadata
- 01:04 PM Bug #49736 (Resolved): cephfs-top: missing keys in the client_metadata
- There are missing keys in the mgr/stats client_metadata for some clients, which causes the exception mentioned in the...
- 05:25 PM Backport #49752 (Resolved): octopus: snap-schedule doc
- https://github.com/ceph/ceph/pull/40775
- 05:25 PM Backport #49751 (Resolved): pacific: snap-schedule doc
- https://github.com/ceph/ceph/pull/40090
- 05:23 PM Documentation #48017 (Pending Backport): snap-schedule doc
- 05:04 PM Feature #6373: kcephfs: qa: test fscache
- Jeff Layton wrote:
> The yaml frags that let you test fscache are machine-specific since the clients need to be prov...
- 04:59 PM Feature #6373: kcephfs: qa: test fscache
- The yaml frags that let you test fscache are machine-specific since the clients need to be provisioned with an extra ...
- 04:51 PM Feature #6373 (In Progress): kcephfs: qa: test fscache
- 02:14 PM Bug #49725 (Fix Under Review): client: crashed in cct->_conf.get_val() in Client::start_tick_thre...
- 01:11 AM Bug #49725 (Resolved): client: crashed in cct->_conf.get_val() in Client::start_tick_thread()
- The call trace:...
03/10/2021
- 05:59 PM Bug #49720 (Resolved): mon/MDSMonitor: do not pointlessly kill standbys that are incompatible wit...
- During a rolling upgrade, standbys may suicide once the CompatSet for the FSMap is updated. This needlessly complicat...
- 05:42 PM Bug #49719 (Resolved): mon/MDSMonitor: standby-replay daemons should be removed when the flag is ...
- 03:05 PM Backport #49713 (Resolved): pacific: mgr/nfs: Add interface to update export
- 03:05 PM Backport #49712 (Rejected): octopus: mgr/nfs: Add interface to update export
- 03:01 PM Bug #49133 (Resolved): mgr/nfs: Rook does not support restart of services, handle the NotImplemen...
- Backported with #45746.
- 03:00 PM Feature #45746 (Pending Backport): mgr/nfs: Add interface to update export
- 02:57 PM Bug #49122: vstart: Rados url error
- @singuliere none please do not delete the links between the parent ticket and the backport ticket. Just close the bac...
- 06:33 AM Bug #49122 (Resolved): vstart: Rados url error
- 06:32 AM Bug #49122: vstart: Rados url error
- Removing the "pacific" backport because the PR including the fix is already backported via https://tracker.ceph.com/i...
- 02:27 PM Bug #49711 (Fix Under Review): cephfs-mirror: symbolic links do not get synchronized at times
- 02:21 PM Bug #49711 (Resolved): cephfs-mirror: symbolic links do not get synchronized at times
- Due to this problematic code in src/tools/cephfs_mirror/PeerReplayer.cc:...
- 06:52 AM Backport #49423 (In Progress): pacific: doc: broken links multimds and kcephfs
- 05:00 AM Backport #49423 (New): pacific: doc: broken links multimds and kcephfs
- The file name and path are different in pacific. See https://github.com/ceph/ceph/blob/pacific/doc/dev/developer_guid...
- 06:29 AM Backport #49412 (Rejected): pacific: vstart: Rados url error
- 04:58 AM Documentation #49372: doc: broken links multimds and kcephfs
- The file name and path are different in pacific. See https://github.com/ceph/ceph/blob/pacific/doc/dev/developer_guid...
03/09/2021
- 11:47 PM Backport #49423 (Rejected): pacific: doc: broken links multimds and kcephfs
- 11:45 PM Documentation #49372: doc: broken links multimds and kcephfs
- The "documentation was not backported to pacific":https://github.com/ceph/ceph/pull/37949 nor is it associated with a...
- 11:28 PM Backport #49412 (In Progress): pacific: vstart: Rados url error
- 11:27 PM Backport #49346 (In Progress): pacific: vstart: volumes/nfs interface complaints cluster does not...
- 11:25 PM Backport #49686 (In Progress): pacific: cephfs-mirror: display cephfs mirror instances in `ceph s...
- 09:45 PM Backport #49686 (Resolved): pacific: cephfs-mirror: display cephfs mirror instances in `ceph stat...
- https://github.com/ceph/ceph/pull/39973
- 11:23 PM Backport #49687 (In Progress): pacific: client: add metric for number of pinned capabilities
- 09:45 PM Backport #49687 (Resolved): pacific: client: add metric for number of pinned capabilities
- https://github.com/ceph/ceph/pull/39972
- 10:23 PM Bug #49672: nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- https://github.com/ceph/ceph/pull/39960 merged
- 09:00 PM Bug #49672 (Fix Under Review): nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-ba...
- First occurrence https://sentry.ceph.com/organizations/ceph/issues/4718/events/3eb2f218e5b44406a9f1fd54ef90c5b4/?proj...
- 07:51 PM Bug #49672: nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- https://pulpito.ceph.com/teuthology-2021-03-06_04:20:17-upgrade:luminous-x-nautilus-distro-basic-smithi/
- 04:22 PM Bug #49672 (Resolved): nautilus: "Error EINVAL: invalid command" in fs-nautilus-distro-basic-smithi
- This is for the 14.2.17 release
Run: https://pulpito.ceph.com/yuriw-2021-03-08_16:51:42-fs-nautilus-distro-basic-smith...
- 09:55 PM Bug #49684 (Fix Under Review): qa: fs:cephadm mount does not wait for mds to be created
- 09:48 PM Bug #49684 (In Progress): qa: fs:cephadm mount does not wait for mds to be created
- 09:29 PM Bug #49684 (Resolved): qa: fs:cephadm mount does not wait for mds to be created
- ...
- 09:41 PM Feature #46865 (Pending Backport): client: add metric for number of pinned capabilities
- 09:40 PM Feature #48943 (Pending Backport): cephfs-mirror: display cephfs mirror instances in `ceph status...
- 09:40 PM Backport #49685 (Resolved): pacific: ls -l in cephfs-shell tries to chase symlinks when stat'ing ...
- 09:38 PM Bug #48912 (Pending Backport): ls -l in cephfs-shell tries to chase symlinks when stat'ing and er...
- 09:37 PM Bug #49511 (Resolved): qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
- 08:35 AM Feature #40401 (Resolved): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvo...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:34 AM Feature #44928 (Resolved): mgr/volumes: evict clients based on auth ID and subvolume mounted
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:34 AM Feature #44931 (Resolved): mgr/volumes: get the list of auth IDs that have been granted access to...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:33 AM Bug #48501 (Resolved): pybind/mgr/volumes: inherited snapshots should be filtered out of snapshot...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:32 AM Bug #48830 (Resolved): pacific: qa: :ERROR: test_idempotency
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:31 AM Bug #49192 (Resolved): qa::ERROR: test_recover_auth_metadata_during_authorize
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:31 AM Bug #49294 (Resolved): pacific: pybind/ceph_volume_client: volume authorize/deauthorize crashes w...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:31 AM Bug #49374 (Resolved): mgr/volumes: Bump up the AuthMetadataManager's version to 6
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:45 AM Bug #49662 (Resolved): ceph-dokan improvements for additional mounts
- This PR [1] adds a few ceph-dokan improvements, mostly targeting additional fs mounts:
* a "unmap" command
* avoi...
03/08/2021
- 10:20 PM Bug #45344 (Resolved): doc: Table Of Contents doesn't work
- An update to the UI made by Kefu Chai in March 2021 fixes this issue.
- 09:22 PM Documentation #48017 (Fix Under Review): snap-schedule doc
- 11:55 AM Documentation #48017: snap-schedule doc
- The module was only backported to octopus, so we can probably skip the doc backport to nautilus.
- 07:40 PM Backport #49431 (Resolved): octopus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- 12:20 PM Backport #49431 (In Progress): octopus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- 07:40 PM Backport #49508 (Resolved): octopus: pybind/ceph_volume_client: volume authorize/deauthorize cras...
- 12:19 PM Backport #49508 (In Progress): octopus: pybind/ceph_volume_client: volume authorize/deauthorize c...
- 07:30 PM Backport #49266 (Resolved): octopus: qa::ERROR: test_recover_auth_metadata_during_authorize
- 07:30 PM Backport #49230 (Resolved): octopus: qa: :ERROR: test_idempotency
- 07:30 PM Backport #49029 (Resolved): octopus: mgr/volumes: evict clients based on auth ID and subvolume mo...
- 07:30 PM Backport #48900 (Resolved): octopus: mgr/volumes: get the list of auth IDs that have been granted...
- 07:29 PM Backport #48858 (Resolved): octopus: pybind/mgr/volumes: inherited snapshots should be filtered o...
- 07:29 PM Backport #48196 (Resolved): octopus: mgr/volumes: allow/deny r/rw access of auth IDs to subvolume...
- 02:48 PM Bug #49621: qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
- This only failed in the local test, will work on it tomorrow.
- 02:45 PM Bug #49559 (Triaged): libcephfs: test termination "what(): Too many open files"
- 02:44 PM Bug #49644: vstart_runner: run_ceph_w() doesn't work with shell=True
- This PR exposes this issue and adds a workaround for it - https://github.com/ceph/ceph/pull/38443.
- 02:43 PM Bug #49644 (In Progress): vstart_runner: run_ceph_w() doesn't work with shell=True
- 09:05 AM Bug #49644 (New): vstart_runner: run_ceph_w() doesn't work with shell=True
- Setting @shell@ to @True@ leads to a crash when @tasks.mgr.test_module_selftest.TestModuleSelftest.test_selftest_clus...
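One plausible way shell=True breaks a list-style invocation (a generic subprocess sketch; this may or may not be the exact failure mode in vstart_runner): with shell=True, only the first list element becomes the command, so the argument list has to be joined into a single string first.
<pre>
import subprocess

args = ["echo", "hello", "world"]
# List form with shell=True runs `sh -c echo` and drops the other arguments.
r = subprocess.run(args, shell=True, capture_output=True, text=True)
print(repr(r.stdout))  # '\n' -- the arguments were lost
# Joining into one string restores the intended command line.
r = subprocess.run(" ".join(args), shell=True, capture_output=True, text=True)
print(repr(r.stdout))  # 'hello world\n'
</pre>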
- 12:51 PM Bug #48805: mds: "cluster [WRN] Scrub error on inode 0x1000000039d (/client.0/tmp/blogbench-1.0/s...
- I'm unable to comment on the exact teuthology run mentioned in the description.
However, with the testing so far, th...
- 08:17 AM Backport #49429 (Resolved): pacific: mgr/volumes: Bump up the AuthMetadataManager's version to 6
03/07/2021
- 03:51 PM Bug #49495 (Resolved): qa/ceph_manager: raw_cluster_cmd passes both args and kwargs
- 03:51 PM Bug #49486 (Resolved): qa: raw_cluster_cmd and raw_cluster_cmd_result loses command arguments passed
03/05/2021
- 10:35 PM Backport #49634 (Resolved): pacific: Windows CephFS support - ceph-dokan
- https://github.com/ceph/ceph/pull/40069
- 10:30 PM Feature #49623 (Pending Backport): Windows CephFS support - ceph-dokan
- 01:48 PM Feature #49623 (Resolved): Windows CephFS support - ceph-dokan
- This issue tracks the Windows CephFS support, introduced by this PR[1]
[1] https://github.com/ceph/ceph/pull/38819 - 07:35 PM Backport #49631 (Resolved): pacific: mds: don't start purging inodes in the middle of recovery
- https://github.com/ceph/ceph/pull/40079
- 07:35 PM Backport #49630 (Resolved): pacific: qa: slow metadata ops during scrubbing
- https://github.com/ceph/ceph/pull/40080
- 07:34 PM Bug #49607 (Pending Backport): qa: slow metadata ops during scrubbing
- 07:33 PM Bug #49074 (Pending Backport): mds: don't start purging inodes in the middle of recovery
- 05:41 PM Bug #49628 (New): mgr/nfs: Support cluster info command for rook
- Fetch cluster info, i.e. the IP and port ($ ceph nfs cluster info [<clusterid>])
- 09:00 AM Bug #49621 (Resolved): qa: ERROR: test_fragmented_injection (tasks.cephfs.test_data_scan.TestData...
- When running the teuthology test locally, the tasks.cephfs.test_data_scan.TestDataScan test failed:...
- 06:43 AM Bug #49617 (Fix Under Review): mds: race of fetching large dirfrag
- 03:55 AM Bug #49617 (Triaged): mds: race of fetching large dirfrag
- 03:50 AM Bug #49617 (Resolved): mds: race of fetching large dirfrag
- When a dirfrag contains more than 'mds_dir_keys_per_op' items, MDS needs to send multiple 'omap-get-vals' requests to...
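To illustrate the class of race (a generic Python sketch, not the MDS code): paging through a key space in several round trips, as repeated 'omap-get-vals' requests do, can yield an inconsistent snapshot if the map mutates between pages.
<pre>
store = {"key%04d" % i: i for i in range(6)}

def fetch_page(after, limit=2):
    # One "round trip": up to `limit` keys greater than `after`.
    keys = sorted(k for k in store if k > after)[:limit]
    return {k: store[k] for k in keys}

snapshot, cursor, first = {}, "", True
while True:
    page = fetch_page(cursor)
    if not page:
        break
    snapshot.update(page)
    cursor = max(page)
    if first:
        store.pop("key0000")  # concurrent removal between round trips
        first = False
print(sorted(snapshot))  # still lists key0000, which no longer exists
</pre>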
- 05:51 AM Feature #49619 (Fix Under Review): cephfs-mirror: add mirror peers via bootstrapping
- 05:43 AM Feature #49619 (In Progress): cephfs-mirror: add mirror peers via bootstrapping
- 05:42 AM Feature #49619 (Resolved): cephfs-mirror: add mirror peers via bootstrapping
- Right now, adding a peer requires peer cluster ceph configuration and user keyring to be available in the primary clu...
03/04/2021
- 09:35 PM Backport #49613 (Resolved): nautilus: qa: racy session evicted check
- https://github.com/ceph/ceph/pull/40714
- 09:35 PM Backport #49612 (Resolved): pacific: qa: racy session evicted check
- https://github.com/ceph/ceph/pull/40081
- 09:35 PM Backport #49611 (Resolved): octopus: qa: racy session evicted check
- https://github.com/ceph/ceph/pull/40773
- 09:35 PM Backport #49610 (Resolved): pacific: qa: mds removed because trimming for too long with valgrind
- https://github.com/ceph/ceph/pull/40091
- 09:33 PM Bug #49458 (Resolved): qa: switch fs:upgrade from nautilus to octopus
- 09:32 PM Bug #49507 (Pending Backport): qa: mds removed because trimming for too long with valgrind
- 09:30 PM Backport #49609 (Resolved): pacific: qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- https://github.com/ceph/ceph/pull/40108
- 09:30 PM Bug #49318 (Pending Backport): qa: racy session evicted check
- 09:30 PM Backport #49608 (Resolved): pacific: mds: define CephFS errors that replace standard errno values
- https://github.com/ceph/ceph/pull/40083
- 09:29 PM Fix #48802 (Pending Backport): mds: define CephFS errors that replace standard errno values
- 09:28 PM Bug #48559 (Pending Backport): qa: ERROR: test_damaged_dentry, KeyError: 'passed_validation'
- 09:23 PM Bug #49607 (Fix Under Review): qa: slow metadata ops during scrubbing
- 09:19 PM Bug #49607 (Resolved): qa: slow metadata ops during scrubbing
- ...
- 06:56 PM Bug #49605 (Resolved): pybind/mgr/volumes: deadlock on async job hangs finisher thread
- ...
- 10:34 AM Bug #49597 (New): mds: mds goes to 'replay' state after setting 'osd_failsafe_ratio' to less than...
- Steps to reproduce on vstart cluster:
1. Set the following in ../src/vstart.sh
1. Disable client_check_pool_...
03/03/2021
- 09:50 PM Bug #49511 (Fix Under Review): qa: "AttributeError: 'NoneType' object has no attribute 'mon_manag...
- 09:44 PM Backport #49564 (Need More Info): pacific: mon/MonCap: `fs authorize` generates unparseable cap f...
- Ramana, please do this backport.
- 05:56 PM Bug #49371 (Triaged): Misleading alarm if all MDS daemons have failed
- Thanks for the report. That is indeed confusing. I think we will change it so laggy/dead daemons are still removed by...
- 12:11 PM Bug #47689 (Resolved): rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting unr...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:10 PM Bug #48202 (Resolved): libcephfs allows calling ftruncate on a file open read-only
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:10 PM Feature #48337 (Resolved): client: add ceph.cluster_fsid/ceph.client_id vxattr support in libcephfs
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:08 PM Feature #49040 (Resolved): cephfs-mirror: test mirror daemon with valgrind
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:44 AM Backport #49425 (Resolved): pacific: cephfs-mirror: test mirror daemon with valgrind
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39635
m...
- 11:36 AM Backport #48286 (Resolved): nautilus: rados/upgrade/nautilus-x-singleton fails due to cluster [WR...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39706
m...
- 02:25 AM Tasks #46890 (Closed): client: add request lock support
- Folded this feature into https://tracker.ceph.com/issues/46688; to simplify the inode lock patches I have removed th...
- 02:23 AM Tasks #46688 (Fix Under Review): client: add inode lock support
03/02/2021
- 07:32 PM Feature #49040: cephfs-mirror: test mirror daemon with valgrind
- https://github.com/ceph/ceph/pull/39635 merged
- 02:45 PM Backport #49569 (Resolved): pacific: qa: rank_freeze prevents failover on some tests
- https://github.com/ceph/ceph/pull/40082
- 02:41 PM Bug #49464 (Pending Backport): qa: rank_freeze prevents failover on some tests
- 10:37 AM Feature #48682 (In Progress): MDSMonitor: add command to print fs flags
- 08:45 AM Bug #49458: qa: switch fs:upgrade from nautilus to octopus
- Hey Sidharth, are you still working on this? If you don't have enough bandwidth for this issue at this moment, probab...
- 01:12 AM Bug #49559: libcephfs: test termination "what(): Too many open files"
- IMO, for this we can lower down the concurrent threads 128 --> 32 and can try it several times. From my test without ...
03/01/2021
- 08:10 PM Backport #49564 (Resolved): pacific: mon/MonCap: `fs authorize` generates unparseable cap for fil...
- https://github.com/ceph/ceph/pull/40086
- 08:10 PM Backport #49563 (Resolved): pacific: qa: run fs:verify with tcmalloc
- https://github.com/ceph/ceph/pull/40091
- 08:09 PM Bug #49301 (Pending Backport): mon/MonCap: `fs authorize` generates unparseable cap for file syst...
- 08:05 PM Bug #49391 (Pending Backport): qa: run fs:verify with tcmalloc
- 07:40 PM Backport #49562 (Resolved): nautilus: qa: file system deletion not complete because starter fs al...
- https://github.com/ceph/ceph/pull/40709
- 07:40 PM Backport #49561 (Resolved): pacific: qa: file system deletion not complete because starter fs alr...
- https://github.com/ceph/ceph/pull/40089
- 07:40 PM Backport #49560 (Resolved): octopus: qa: file system deletion not complete because starter fs alr...
- https://github.com/ceph/ceph/pull/40772
- 07:36 PM Bug #49510 (Pending Backport): qa: file system deletion not complete because starter fs already d...
- 06:59 PM Bug #49559 (Resolved): libcephfs: test termination "what(): Too many open files"
- ...
- 05:57 PM Backport #48286: nautilus: rados/upgrade/nautilus-x-singleton fails due to cluster [WRN] evicting...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/39706
merged
- 02:38 PM Bug #49465 (Triaged): qa: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_...
- 02:37 PM Bug #49536 (Fix Under Review): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- 02:23 PM Bug #49536 (In Progress): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
02/28/2021
- 05:41 AM Bug #49536 (Fix Under Review): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- 05:16 AM Bug #49536: client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- Switched _ref to an atomic type.
- 04:51 AM Bug #49536 (Resolved): client: Inode.cc: 405: FAILED ceph_assert(_ref >= 0)
- When running the ceph mount/unmount serially around 100000 times, I see:
/data/ceph/src/client/Inode.cc: In fun...
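The fix above switches _ref to an atomic type; a generic Python sketch of why a plain-integer refcount breaks under concurrent get/put (illustrative only, not the Client code):
<pre>
import sys
import threading

sys.setswitchinterval(1e-6)  # force frequent thread switches to surface the race

class Ref:
    def __init__(self):
        self._ref = 0
    def get(self):
        self._ref += 1  # read-modify-write: not atomic
    def put(self):
        self._ref -= 1

r = Ref()

def churn():
    for _ in range(100000):
        r.get(); r.put()

threads = [threading.Thread(target=churn) for _ in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(r._ref)  # often nonzero or negative -- the state that trips _ref >= 0
</pre>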
02/26/2021
- 05:35 PM Backport #49520 (Resolved): pacific: client: wake up the front pos waiter
- https://github.com/ceph/ceph/pull/40109
- 05:35 PM Backport #49519 (Resolved): nautilus: client: wake up the front pos waiter
- https://github.com/ceph/ceph/pull/40865
- 05:35 PM Backport #49518 (Resolved): octopus: client: wake up the front pos waiter
- https://github.com/ceph/ceph/pull/40771
- 05:35 PM Backport #49517 (Resolved): pacific: pybind/cephfs: DT_REG and DT_LNK values are wrong
- https://github.com/ceph/ceph/pull/40085
- 05:35 PM Backport #49516 (Resolved): nautilus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- https://github.com/ceph/ceph/pull/40704
- 05:35 PM Backport #49515 (Resolved): octopus: pybind/cephfs: DT_REG and DT_LNK values are wrong
- https://github.com/ceph/ceph/pull/40770
- 05:34 PM Bug #49498 (Resolved): qa: "TypeError: update_attrs() got an unexpected keyword argument 'createfs'"
- 05:32 PM Bug #49459 (Pending Backport): pybind/cephfs: DT_REG and DT_LNK values are wrong
- 05:31 PM Bug #49379 (Pending Backport): client: wake up the front pos waiter
- 05:30 PM Backport #49514 (Resolved): nautilus: client: allow looking up snapped inodes by inode number+sna...
- https://github.com/ceph/ceph/pull/40769
- 05:30 PM Backport #49513 (Resolved): octopus: client: allow looking up snapped inodes by inode number+snap...
- https://github.com/ceph/ceph/pull/40768
- 05:30 PM Backport #49512 (Resolved): pacific: client: allow looking up snapped inodes by inode number+snap...
- https://github.com/ceph/ceph/pull/40092
- 05:29 PM Feature #48991 (Pending Backport): client: allow looking up snapped inodes by inode number+snapid...
- 05:24 PM Bug #49511 (Resolved): qa: "AttributeError: 'NoneType' object has no attribute 'mon_manager'"
- ...
- 05:14 PM Bug #49510 (Fix Under Review): qa: file system deletion not complete because starter fs already d...
- 05:12 PM Bug #49510 (Resolved): qa: file system deletion not complete because starter fs already destroyed
- During cleanup of ceph, the file systems are not cleaned up which causes unnecessary MDS failover messages:...
- 05:06 PM Backport #49508 (Need More Info): octopus: pybind/ceph_volume_client: volume authorize/deauthoriz...
- 04:59 PM Backport #49508 (Resolved): octopus: pybind/ceph_volume_client: volume authorize/deauthorize cras...
- https://github.com/ceph/ceph/pull/39906
- 05:06 PM Backport #49431 (Need More Info): octopus: mgr/volumes: Bump up the AuthMetadataManager's version...
- 04:56 PM Bug #49294 (Pending Backport): pacific: pybind/ceph_volume_client: volume authorize/deauthorize c...
- 04:53 PM Bug #49507 (Fix Under Review): qa: mds removed because trimming for too long with valgrind
- 04:51 PM Bug #49507 (Resolved): qa: mds removed because trimming for too long with valgrind
- ...
- 04:50 PM Backport #48376 (Resolved): nautilus: libcephfs allows calling ftruncate on a file open read-only
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39129
m...
- 01:11 AM Backport #48376: nautilus: libcephfs allows calling ftruncate on a file open read-only
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/39129
merged
- 04:49 PM Backport #48520 (Resolved): nautilus: client: add ceph.cluster_fsid/ceph.client_id vxattr support...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39001
m...
- 04:48 PM Backport #49430 (Resolved): nautilus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39658
m...
- 04:48 PM Backport #49447 (Resolved): nautilus: pybind/ceph_volume_client: volume authorize/deauthorize cra...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39658
m...
- 09:16 AM Bug #49503 (New): standby-replay mds assert failed when replay
- An error occurred during standby-replay MDS replay....
- 06:19 AM Bug #49225 (Won't Fix): client: hold the client_lock in C_Readahead::finish
02/25/2021
- 10:18 PM Bug #49309: nautilus: qa: "Assertion `cb_done' failed."
- This is still alive: #49500
Cloned this ticket so that the good fix, which didn't help, will still get backported ...
- 03:11 AM Bug #49309 (Pending Backport): nautilus: qa: "Assertion `cb_done' failed."
- Hoping that simple fix is all that's needed...
- 10:17 PM Bug #49500 (Resolved): qa: "Assertion `cb_done' failed."
- Clone of #49309. The (good) fix we thought might help did not.
https://pulpito.ceph.com/pdonnell-2021-02-25_21:22:...
- 09:45 PM Feature #46074: mds: provide alternatives to increase the total cephfs subvolume snapshot counts ...
- Andras Sali wrote:
> When upgrading an existing cluster with subvolumes (csi volumes), does the old limit of 400 sna...
- 08:16 PM Feature #46074: mds: provide alternatives to increase the total cephfs subvolume snapshot counts ...
- When upgrading an existing cluster with subvolumes (csi volumes), does the old limit of 400 snapshots still apply to ...
- 09:28 PM Bug #49391 (Fix Under Review): qa: run fs:verify with tcmalloc
- 08:19 PM Backport #48286 (In Progress): nautilus: rados/upgrade/nautilus-x-singleton fails due to cluster ...
- 07:40 PM Backport #48520: nautilus: client: add ceph.cluster_fsid/ceph.client_id vxattr support in libcephfs
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/39001
merged
- 05:16 PM Backport #49265: pacific: qa::ERROR: test_recover_auth_metadata_during_authorize
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39437
m...
- 05:12 PM Backport #49496 (Resolved): pacific: pacific: qa: :ERROR: test_idempotency
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39038
m...
- 05:11 PM Backport #49496 (In Progress): pacific: pacific: qa: :ERROR: test_idempotency
- 01:35 PM Backport #49496 (Resolved): pacific: pacific: qa: :ERROR: test_idempotency
- https://github.com/ceph/ceph/pull/39038
- 03:37 PM Bug #49498 (Fix Under Review): qa: "TypeError: update_attrs() got an unexpected keyword argument ...
- 03:35 PM Bug #49498 (Resolved): qa: "TypeError: update_attrs() got an unexpected keyword argument 'createfs'"
- From: /ceph/teuthology-archive/pdonnell-2021-02-25_05:37:21-fs-wip-pdonnell-testing-20210225.033441-distro-basic-smit...
- 03:13 PM Bug #48502: ERROR: test_exports_on_mgr_restart (tasks.cephfs.test_nfs.TestNFS)
- ...
- 01:29 PM Backport #49027 (Resolved): pacific: mgr/volumes: evict clients based on auth ID and subvolume mo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/39109
m...
- 12:15 PM Bug #49495 (Resolved): qa/ceph_manager: raw_cluster_cmd passes both args and kwargs
- See: https://github.com/ceph/ceph/commit/c0907f99e87898f7e0afd47f4ed143b20a477cb7
Following is a dummy program dem...
- 11:27 AM Bug #48912 (Fix Under Review): ls -l in cephfs-shell tries to chase symlinks when stat'ing and er...
- Actually cephfs-shell does not support symlinks. For now I have just modified the 'ls' command to support them.
- 09:56 AM Bug #45834: cephadm: "fs volume create cephfs" overwrites existing placement specification
- Jan,
Could you attach the fs dump from just after creating 'cephfs'?
e.g.
$ ceph fs dump --format=json-pretty
I suspec...
- 08:19 AM Bug #49486 (In Progress): qa: raw_cluster_cmd and raw_cluster_cmd_result loses command arguments ...
- 08:17 AM Bug #49486 (Resolved): qa: raw_cluster_cmd and raw_cluster_cmd_result loses command arguments passed
- The value of @kwargs['args']@ is overridden by the value of @args@ even when @args@ is an empty list/tuple. This happens for ...
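A minimal Python reproduction of that clobbering pattern (hypothetical function, not the actual qa code):
<pre>
def raw_cluster_cmd(*args, **kwargs):
    # Unconditionally overwrites any 'args' the caller passed via kwargs,
    # even when the positional tuple is empty.
    kwargs['args'] = args
    return kwargs

# Caller passes the command via kwargs only:
print(raw_cluster_cmd(args=['fs', 'status']))  # -> {'args': ()} -- command lost
</pre>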
- 06:41 AM Bug #49466: qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/tmp.ZEeZBasJ...
- Just for the record: the same issue hit me last night on a vstart cluster too.
- 12:47 AM Bug #49466 (Resolved): qa: "Command failed on gibba030 with status 1: 'set -ex\nsudo dd of=/tmp/t...
- ...
- 03:15 AM Backport #49475 (Resolved): octopus: nautilus: qa: "Assertion `cb_done' failed."
- https://github.com/ceph/ceph/pull/40708
- 03:15 AM Backport #49474 (Resolved): pacific: nautilus: qa: "Assertion `cb_done' failed."
- https://github.com/ceph/ceph/pull/40093
- 03:15 AM Backport #49473 (Resolved): nautilus: nautilus: qa: "Assertion `cb_done' failed."
- https://github.com/ceph/ceph/pull/40701
- 03:05 AM Backport #49472 (Resolved): octopus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- https://github.com/ceph/ceph/pull/40767
- 03:05 AM Backport #49471 (Resolved): nautilus: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- https://github.com/ceph/ceph/pull/40713
- 03:05 AM Backport #49470 (Resolved): pacific: qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- https://github.com/ceph/ceph/pull/40087
- 03:02 AM Bug #48877 (Pending Backport): qa: ffsb workload: PG_AVAILABILITY|PG_DEGRADED warnings
- 02:51 AM Bug #49469: qa: "AssertionError: expected removing source snapshot of a clone to fail"
- Google says the last time this test failed was last year: https://pulpito.ceph.com/yuriw-2020-06-19_18:38:18-fs-nauti...
- 02:50 AM Bug #49469 (Duplicate): qa: "AssertionError: expected removing source snapshot of a clone to fail"
- ...
- 12:41 AM Bug #49465 (Triaged): qa: Command crashed: 'sudo TESTDIR=/home/ubuntu/cephtest bash -c ceph_test_...
- ...
- 12:32 AM Bug #49464 (Fix Under Review): qa: rank_freeze prevents failover on some tests
- 12:30 AM Bug #49464 (Resolved): qa: rank_freeze prevents failover on some tests
- ...
02/24/2021
- 11:02 PM Bug #45834 (Triaged): cephadm: "fs volume create cephfs" overwrites existing placement specifica...
- 04:20 PM Bug #49459 (Fix Under Review): pybind/cephfs: DT_REG and DT_LNK values are wrong
- 04:20 PM Bug #49459 (Resolved): pybind/cephfs: DT_REG and DT_LNK values are wrong
- 02:55 PM Feature #49457: add copy offload into libcephfs
- Should have read:
"It would probably also be helpful to have some sort of copy_file_range command in cephfs-shell ... - 02:42 PM Feature #49457 (New): add copy offload into libcephfs
- The kclient has a copy_file_range implementation that uses a COPY2 OSD command to tell the OSD to copy data to a diff...
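From the caller's side, copy offload has the shape of copy_file_range; a Linux/Python 3.8+ sketch (whether the bytes actually bypass the client depends on the filesystem's offload support):
<pre>
import os

with open("src.bin", "wb") as f:
    f.write(b"x" * 4096)

src = os.open("src.bin", os.O_RDONLY)
dst = os.open("dst.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
# Ask the kernel to move the bytes; it may copy fewer than requested,
# so real code loops until done.
copied = os.copy_file_range(src, dst, 4096)
os.close(src)
os.close(dst)
print(copied)
</pre>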
- 02:50 PM Bug #49450 (Duplicate): qa: upgrade/cephfs/featureful_client upgrade from nautilus to quincy fails
- 11:58 AM Bug #49450: qa: upgrade/cephfs/featureful_client upgrade from nautilus to quincy fails
- /a/sage-2021-02-19_15:05:48-upgrade-wip-sage2-testing-2021-02-19-0836-distro-basic-smithi/5894465/
- 11:52 AM Bug #49450 (Duplicate): qa: upgrade/cephfs/featureful_client upgrade from nautilus to quincy fails
- ...
- 02:47 PM Bug #49458 (Resolved): qa: switch fs:upgrade from nautilus to octopus
- Ceph won't upgrade from Nautilus -> Quincy (N-3 releases) but does from Octopus -> Quincy (N-2). See also #39020 for ...
- 01:30 PM Backport #49430 (In Progress): nautilus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- 01:30 PM Backport #49447 (In Progress): nautilus: pybind/ceph_volume_client: volume authorize/deauthorize ...
- 08:17 AM Backport #49447 (Resolved): nautilus: pybind/ceph_volume_client: volume authorize/deauthorize cra...
- https://github.com/ceph/ceph/pull/39658
- 12:47 PM Fix #48683 (In Progress): mds/MDSMap: print each flag value in MDSMap::dump
02/23/2021
- 11:38 PM Bug #48411: tasks.cephfs.test_volumes.TestSubvolumeGroups: RuntimeError: rank all failed to reach...
- Looks like the dirfrag was empty and closed:...
- 09:35 PM Bug #49379 (Fix Under Review): client: wake up the front pos waiter
- 12:21 PM Bug #49434 (Duplicate): `client isn't responding to mclientcaps(revoke)` for hours
- One of our clients does not seem to respond to `mclientcaps(revoke)`, to a request where the issued and pending caps ...
- 07:05 AM Backport #49432 (Resolved): pacific: cephfs-mirror: dangling pointer in PeerReplayer
- https://github.com/ceph/ceph/pull/39810
- 07:05 AM Bug #49419 (Pending Backport): cephfs-mirror: dangling pointer in PeerReplayer
- 05:01 AM Bug #49301 (In Progress): mon/MonCap: `fs authorize` generates unparseable cap for file system na...
- Making the following changes fixed the issue:...
- 04:50 AM Backport #49431 (Resolved): octopus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- https://github.com/ceph/ceph/pull/39906
- 04:50 AM Backport #49430 (Resolved): nautilus: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- https://github.com/ceph/ceph/pull/39658
- 04:50 AM Backport #49429 (Resolved): pacific: mgr/volumes: Bump up the AuthMetadataManager's version to 6
- https://github.com/ceph/ceph/pull/39480
- 04:49 AM Bug #49374 (Pending Backport): mgr/volumes: Bump up the AuthMetadataManager's version to 6