Activity
From 03/12/2018 to 04/10/2018
04/10/2018
- 11:22 PM Feature #13688 (Resolved): mds: performance: journal inodes with capabilities to limit rejoin tim...
- Fixed by Zheng's openfile table: https://github.com/ceph/ceph/pull/20132
- 11:20 PM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
- I think the right direction is to allow setting a flag on the MDSMap to prevent older clients from connecting to the ...
- 11:18 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
- This is expected. The MDS is treated specially by the OSDs to allow some writes when the pool is full.
- 11:13 PM Feature #15507: MDS: support "watching" an inode/dentry
- https://bugzilla.redhat.com/show_bug.cgi?id=1561326
- 10:36 PM Bug #23615 (Rejected): qa: test for "snapid allocation/deletion mismatch with monitor"
- From Zheng:
> mds deletes old snapshots by MRemoveSnaps message. Monitor does not do snap_seq auto-increment wh...
- 06:45 PM Bug #23643 (Resolved): qa: osd_mon_report_interval typo in test_full.py
- https://github.com/ceph/ceph/blob/577737d007c05bc7a3972158be8c520ab73a1517/qa/tasks/cephfs/test_full.py#L137
- 06:33 PM Bug #23624 (Resolved): cephfs-foo-tool crashes immediately after it starts
- 08:27 AM Bug #23624 (Fix Under Review): cephfs-foo-tool crashes immediately after it starts
- https://github.com/ceph/ceph/pull/21321
- 08:14 AM Bug #23624 (Resolved): cephfs-foo-tool crashes immediately after it starts
- http://pulpito.ceph.com/pdonnell-2018-04-06_22:48:23-kcephfs-master-testing-basic-smithi/
- 05:54 PM Backport #23642 (Rejected): luminous: mds: the number of inode showed by "mds perf dump" not corr...
- 05:54 PM Backport #23641 (Resolved): luminous: auth|doc: fs authorize error for existing credentials confu...
- https://github.com/ceph/ceph/pull/22963
- 05:53 PM Backport #23638 (Resolved): luminous: ceph-fuse: getgroups failure causes exception
- https://github.com/ceph/ceph/pull/21687
- 05:53 PM Backport #23637 (Resolved): luminous: mds: assertion in MDSRank::validate_sessions
- https://github.com/ceph/ceph/pull/21372
- 05:53 PM Backport #23636 (Resolved): luminous: mds: kicked out by monitor during rejoin
- https://github.com/ceph/ceph/pull/21366
- 05:53 PM Backport #23635 (Resolved): luminous: client: fix request send_to_auth was never really used
- https://github.com/ceph/ceph/pull/21354
- 05:53 PM Backport #23634 (Resolved): luminous: doc: outline the steps for upgrading an MDS cluster
- https://github.com/ceph/ceph/pull/21352
- 05:53 PM Backport #23632 (Resolved): luminous: mds: handle client requests when mds is stopping
- https://github.com/ceph/ceph/pull/21346
- 02:24 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- The two issues are not the same, but they share the same root cause: the MDS takes too much time to handle MDSMap messa...
- 09:24 AM Bug #23625 (Fix Under Review): mds: sessions opened by journal replay do not get dirtied properly
- https://github.com/ceph/ceph/pull/21323
- 09:12 AM Bug #23625 (Resolved): mds: sessions opened by journal replay do not get dirtied properly
- http://pulpito.ceph.com/pdonnell-2018-04-06_01:22:39-multimds-wip-pdonnell-testing-20180405.233852-testing-basic-smit...
- 07:07 AM Backport #23158 (Resolved): jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
- 04:46 AM Bug #23380 (Pending Backport): mds: ceph.dir.rctime follows dir ctime not inode ctime
- 04:45 AM Bug #23452 (Pending Backport): mds: assertion in MDSRank::validate_sessions
- 04:45 AM Bug #23446 (Pending Backport): ceph-fuse: getgroups failure causes exception
- 04:44 AM Bug #23530 (Pending Backport): mds: kicked out by monitor during rejoin
- 04:43 AM Bug #23602 (Pending Backport): mds: handle client requests when mds is stopping
- 04:43 AM Bug #23541 (Pending Backport): client: fix request send_to_auth was never really used
- 04:42 AM Feature #23623 (Resolved): mds: mark allow_snaps true by default
- 04:41 AM Bug #23491 (Resolved): fs: quota backward compatibility
- 01:22 AM Bug #21745 (Fix Under Review): mds: MDBalancer using total (all time) request count in load stati...
- https://github.com/ceph/ceph/pull/19220/commits/e9689c1ff7e75394298c0e86aa9ed4e703391c3e
04/09/2018
- 11:29 PM Bug #22824 (Resolved): Journaler::flush() may flush less data than expected, which causes flush w...
- 11:29 PM Backport #22967 (Resolved): luminous: Journaler::flush() may flush less data than expected, which...
- 11:17 PM Backport #22967: luminous: Journaler::flush() may flush less data than expected, which causes flu...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20431
merged
- 11:29 PM Bug #22221 (Resolved): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
- 11:29 PM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
- 11:17 PM Backport #22383: luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21173
merged
- 11:17 PM Backport #23154 (Resolved): luminous: mds: FAILED assert (p != active_requests.end()) in MDReques...
- 11:16 PM Backport #23154: luminous: mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCach...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21176
merged
- 10:59 PM Bug #23571 (Resolved): mds: make sure that MDBalancer uses heartbeat info from the same epoch
- 10:58 PM Backport #23572 (Resolved): luminous: mds: make sure that MDBalancer uses heartbeat info from the...
- 10:58 PM Bug #23569 (Resolved): mds: counter decay incorrect
- 10:58 PM Backport #23570 (Resolved): luminous: mds: counter decay incorrect
- 10:58 PM Bug #23560 (Resolved): mds: mds gets significantly behind on trimming while creating millions of ...
- 10:57 PM Backport #23561 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
- 10:34 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- dongdong tao wrote:
> Patrick Donnelly wrote:
> > Dongdong, I think fast dispatch may not be the answer here. We're...
- 08:12 PM Bug #23519 (In Progress): mds: mds got laggy because of MDSBeacon stuck in mqueue
- 02:27 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Patrick Donnelly wrote:
> Dongdong, I think fast dispatch may not be the answer here. We're not yet sure on the caus...
- 01:39 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Dongdong, I think fast dispatch may not be the answer here. We're not yet sure on the cause. Do you have ideas?
- 09:07 PM Bug #10423 (Closed): update hadoop gitbuilders
- stale
- 09:03 PM Bug #20593 (Pending Backport): mds: the number of inode showed by "mds perf dump" not correct aft...
- 09:01 PM Feature #21156 (Resolved): mds: speed up recovery with many open inodes
- 09:00 PM Bug #21765 (Pending Backport): auth|doc: fs authorize error for existing credentials confusing/un...
- Please backport: https://github.com/ceph/ceph/pull/17678/commits/447b3d4852acd2db656c973cc224fb77d3fff590
- 08:56 PM Feature #22545 (Duplicate): add dump inode command to mds
- 08:52 PM Bug #6613 (Closed): samba is crashing in teuthology
- Closing as stale.
- 08:52 PM Feature #358: mds: efficient revert to snapshot
- Also consider cloning snapshots.
- 08:50 PM Documentation #21172 (Duplicate): doc: Export over NFS
- 08:48 PM Bug #21745: mds: MDBalancer using total (all time) request count in load statistics
- https://github.com/ceph/ceph/pull/19220/commits/fb8d07772ffd3b061d2752c6b3375f6cb187be4b
Zheng, please amend the a...
- 08:43 PM Bug #19101 (Closed): "samba3error [Unknown error/failure. Missing torture_fail() or torture_asser...
- Not looking at samba right now.
- 08:39 PM Bug #23234 (Won't Fix): mds: damage detected while opening remote dentry
- Sorry, we won't look at bugs for multiple actives pre-Luminous.
- 08:37 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- Zheng, is this issue resolved with the snapshot changes for Mimic?
- 08:36 PM Bug #20494 (Closed): cephfs_data_scan: try_remove_dentries_for_stray assertion failure
- Closing due to inactivity.
- 08:31 PM Bug #19255 (Can't reproduce): qa: test_full_fclose failure
- 08:27 PM Bug #22788 (Won't Fix): ceph-fuse performance issues with rsync
- 08:26 PM Feature #12274 (In Progress): mds: start forward scrubs from all subtree roots, skip non-auth met...
- 08:21 PM Bug #23615 (Rejected): qa: test for "snapid allocation/deletion mismatch with monitor"
- See email thread.
- 08:11 PM Bug #23556 (Closed): segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- 07:45 PM Feature #21888 (Fix Under Review): Adding [--repair] option for cephfs-journal-tool so it can r...
- 07:29 PM Feature #23362 (In Progress): mds: add drop_cache command
- 06:39 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- The service map is a librados resource consumed by ceph-mgr. It periodically gets perfcounters, for example. When l...
- 06:23 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- I'm not aware of any bugs open on this. Is there any background on the rgw-nfs map at all? I've not looked at the ser...
- 05:22 PM Documentation #23611 (Need More Info): doc: add description of new fs-client auth profile
- On that page: http://docs.ceph.com/docs/master/rados/operations/user-management/#authorization-capabilities
https:...
- 04:00 PM Documentation #23568 (Pending Backport): doc: outline the steps for upgrading an MDS cluster
- 01:37 PM Bug #23518: mds: crash when failover
- Are you still hitting the issue, or has it gone away? If so, `debug mds = 20` logs would be helpful.
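The debug logs requested above can be enabled at runtime; a minimal sketch (the daemon id `mds.a` is a placeholder, and `debug_ms 1` is a commonly paired addition not taken from the source):

```shell
# Raise MDS debug verbosity on a live daemon without a restart.
# "mds.a" is a placeholder id; settings revert on restart unless
# also added to ceph.conf under the [mds] section.
ceph tell mds.a injectargs '--debug_mds 20 --debug_ms 1'
```

To make the setting persistent instead, `debug mds = 20` can be placed under `[mds]` in ceph.conf before restarting the daemon.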
- 01:32 PM Bug #23393 (Fix Under Review): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal...
- https://github.com/ceph/ceph-ansible/pull/2503
- 01:26 PM Bug #23538 (Fix Under Review): mds: fix occasional dir rstat inconsistency between multi-MDSes
- 01:25 PM Bug #23530 (Fix Under Review): mds: kicked out by monitor during rejoin
- 12:59 PM Bug #23602 (Resolved): mds: handle client requests when mds is stopping
- https://github.com/ceph/ceph/pull/21167
04/08/2018
- 04:35 PM Bug #23211 (Resolved): client: prevent fallback to remount when dentry_invalidate_cb is true but ...
- 04:35 PM Backport #23356 (Resolved): jewel: client: prevent fallback to remount when dentry_invalidate_cb ...
- 02:54 PM Bug #23332: kclient: with fstab entry is not coming up reboot
- Zheng Yan wrote:
> In messages_ceph-sshreeka-run379-node5-client
> [...]
>
> looks like fstab didn't include c...
- 01:12 AM Bug #23332: kclient: with fstab entry is not coming up reboot
- In messages_ceph-sshreeka-run379-node5-client ...
- 06:36 AM Bug #23503: mds: crash during pressure test
- Patrick Donnelly wrote:
> wei jin wrote:
> > Hi, Patrick, I have a question: after pinning base dir, will subdirs s...
04/06/2018
- 10:28 PM Bug #23332 (New): kclient: with fstab entry is not coming up reboot
- 10:25 PM Documentation #23583 (Resolved): doc: update snapshot doc to account for recent changes
- http://docs.ceph.com/docs/master/dev/cephfs-snapshots/
- 10:23 PM Bug #22741 (Resolved): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-mas...
- 10:21 PM Backport #22696 (In Progress): luminous: client: dirty caps may never get the chance to flush
- https://github.com/ceph/ceph/pull/21278
- 10:11 PM Backport #22696: luminous: client: dirty caps may never get the chance to flush
- I'd prefer not to, I'll try to resolve the conflicts.
- 10:05 PM Bug #23582 (Fix Under Review): MDSMonitor: mds health warnings printed in bad format
- https://github.com/ceph/ceph/pull/21276
- 08:29 PM Bug #23582 (Resolved): MDSMonitor: mds health warnings printed in bad format
- Example:...
- 05:32 PM Bug #23033 (Resolved): qa: ignore more warnings during mds-full test
- 05:32 PM Backport #23060 (Resolved): luminous: qa: ignore more warnings during mds-full test
- 05:28 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
- 05:27 PM Bug #21402 (Resolved): mds: move remaining containers in CDentry/CDir/CInode to mempool
- 05:27 PM Backport #22972 (Resolved): luminous: mds: move remaining containers in CDentry/CDir/CInode to me...
- 05:26 PM Bug #22288 (Resolved): mds: assert when inode moves during scrub
- 05:26 PM Backport #23016 (Resolved): luminous: mds: assert when inode moves during scrub
- 03:20 AM Backport #23572 (In Progress): luminous: mds: make sure that MDBalancer uses heartbeat info from ...
- 03:18 AM Backport #23572 (Resolved): luminous: mds: make sure that MDBalancer uses heartbeat info from the...
- https://github.com/ceph/ceph/pull/21267
- 03:17 AM Bug #23571 (Resolved): mds: make sure that MDBalancer uses heartbeat info from the same epoch
- Already fixed in: https://github.com/ceph/ceph/pull/18941/
- 03:15 AM Backport #23570 (In Progress): luminous: mds: counter decay incorrect
- 03:12 AM Backport #23570 (Resolved): luminous: mds: counter decay incorrect
- https://github.com/ceph/ceph/pull/21266
- 03:12 AM Bug #23569 (Resolved): mds: counter decay incorrect
- Fixed by https://github.com/ceph/ceph/pull/18776
Issue for backport.
- 03:07 AM Documentation #23427 (Fix Under Review): doc: create doc outlining steps to bring down cluster
- https://github.com/ceph/ceph/pull/21265
- 01:13 AM Bug #21584 (Resolved): FAILED assert(get_version() < pv) in CDir::mark_dirty
- 01:13 AM Backport #22031 (Resolved): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
- 12:02 AM Bug #23532 (Resolved): doc: create PendingReleaseNotes and add dev doc for openfile table purpose...
04/05/2018
- 10:52 PM Documentation #23568 (Fix Under Review): doc: outline the steps for upgrading an MDS cluster
- https://github.com/ceph/ceph/pull/21263
- 10:37 PM Documentation #23568 (Resolved): doc: outline the steps for upgrading an MDS cluster
- Until we have versioned MDS-MDS messages and feature flags obeyed by MDSs during upgrades (e.g. require_mimic_mds), t...
- 09:22 PM Bug #23567 (Resolved): MDSMonitor: successive changes to max_mds can allow hole in ranks
- With 3 MDS, approximately this sequence:...
- 07:39 PM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
- 07:09 PM Bug #23503: mds: crash during pressure test
- wei jin wrote:
> Hi, Patrick, I have a question: after pinning base dir, will subdirs still be migrated to other act...
- 06:39 PM Bug #22263 (Resolved): client reconnect gather race
- 06:39 PM Backport #22380 (Resolved): jewel: client reconnect gather race
- 06:38 PM Bug #22631 (Resolved): mds: crashes because of old pool id in journal header
- 06:38 PM Backport #22764 (Resolved): jewel: mds: crashes because of old pool id in journal header
- 06:37 PM Bug #21383 (Resolved): qa: failures from pjd fstest
- 06:37 PM Backport #21489 (Resolved): jewel: qa: failures from pjd fstest
- 04:26 PM Bug #22821 (Resolved): mds: session reference leak
- 04:26 PM Backport #22970 (Resolved): jewel: mds: session reference leak
- 01:21 PM Bug #23529 (Resolved): TmapMigratePP.DataScan asserts in jewel
- 04:19 AM Backport #23561 (In Progress): luminous: mds: mds gets significantly behind on trimming while cre...
- https://github.com/ceph/ceph/pull/21256
- 04:14 AM Backport #23561 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
- https://github.com/ceph/ceph/pull/21256
- 04:14 AM Bug #23560 (Pending Backport): mds: mds gets significantly behind on trimming while creating mill...
- 03:43 AM Bug #23491 (Fix Under Review): fs: quota backward compatibility
- https://github.com/ceph/ceph/pull/21255
04/04/2018
- 11:49 PM Bug #23560 (Fix Under Review): mds: mds gets significantly behind on trimming while creating mill...
- https://github.com/ceph/ceph/pull/21254
- 11:44 PM Bug #23560 (Resolved): mds: mds gets significantly behind on trimming while creating millions of ...
- Under create-heavy workloads, the MDS sometimes reaches ~60 untrimmed segments for brief periods. I suggest we bump mds_l...
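The truncated option above is presumably `mds_log_max_segments`, the standard knob for the untrimmed-segment threshold; that reading, and the value shown, are assumptions rather than the number proposed in the (cut-off) suggestion. A sketch of bumping it at runtime:

```shell
# Raise the journal trimming threshold on a live MDS.
# "mds.a" is a placeholder id; 128 is an illustrative value only,
# chosen to sit comfortably above the ~60 segments observed above.
ceph tell mds.a injectargs '--mds_log_max_segments 128'
```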
- 09:26 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Luminous test revert PR: https://github.com/ceph/ceph/pull/21251
- 09:22 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Jeff Layton wrote:
> To be clear, I think we may want to leave off the patch that adds the new testcase from this se...
- 09:13 PM Bug #20988 (Resolved): client: dual client segfault with racing ceph_shutdown
- 09:13 PM Backport #21526 (Closed): jewel: client: dual client segfault with racing ceph_shutdown
- Dropping this due to the age of jewel, dubious value, and lack of dual-client use-case for jewel.
- 08:48 PM Bug #23556: segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- Tentatively agreed to drop the PR, because "jewel is near EOL and we don't have a use-case with dual clients for jewel"
- 08:31 PM Bug #23556: segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- Looks to be coming from this commit, because it's the one that adds the ShutdownRace test case:
https://github.com...
- 08:23 PM Bug #23556 (Closed): segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- The test runs libcephfs/test.sh workunit, which in turn runs the ceph_test_libcephfs binary from the ceph-test packag...
- 05:51 PM Bug #23210 (Resolved): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- Sorry, this should not be backported. ceph-fuse "support" for NFS is only for Mimic.
- 06:05 AM Bug #23210 (Pending Backport): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- @Patrick, please confirm that this should be backported to luminous and which master PR.
- 07:50 AM Bug #16807 (Resolved): Crash in handle_slave_rename_prep
- http://tracker.ceph.com/issues/16768 already fixed
- 07:47 AM Bug #22353 (Resolved): kclient: ceph_getattr() return zero st_dev for normal inode
- 07:44 AM Feature #4501 (Resolved): Identify fields in CDir which aren't permanently necessary
- 07:43 AM Tasks #4499 (Resolved): Identify fields in CInode which aren't permanently necessary
- 07:43 AM Feature #14427 (Resolved): qa: run snapshot tests under thrashing
- 07:41 AM Feature #21877 (Resolved): quota and snaprealm integation
- 07:38 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
- https://github.com/ceph/ceph/pull/18424
- 07:36 AM Bug #3254 (Resolved): mds: Replica inode's parent snaprealms are not open
- 07:36 AM Bug #3254: mds: Replica inode's parent snaprealms are not open
- opening snaprealm parents is no longer required with the new snaprealm format
https://github.com/ceph/ceph/pull/16779
- 07:34 AM Bug #1938 (Resolved): mds: snaptest-2 doesn't pass with 3 MDS system
- https://github.com/ceph/ceph/pull/16779
- 07:34 AM Bug #925 (Resolved): mds: update replica snaprealm on rename
- https://github.com/ceph/ceph/pull/16779
04/03/2018
- 08:07 PM Documentation #23271 (Resolved): doc: create install/setup guide for NFS-Ganesha w/ CephFS
- 06:44 PM Bug #23436 (Resolved): Client::_read() always return 0 when reading from inline data
- https://github.com/ceph/ceph/pull/21221
Wrote/tested that before I saw your PR, sorry Zheng.
- 02:46 AM Bug #23436 (Fix Under Review): Client::_read() always return 0 when reading from inline data
- The previous PR is buggy.
https://github.com/ceph/ceph/pull/21186
- 06:41 PM Bug #23210 (Resolved): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- 01:38 PM Bug #23509: ceph-fuse: broken directory permission checking
- Pull request here:
https://github.com/ceph/ceph/pull/21181
- 12:04 PM Bug #23250 (Need More Info): mds: crash during replay: interval_set.h: 396: FAILED assert(p->firs...
- 11:54 AM Bug #21070 (Resolved): MDS: MDS is laggy or crashed when deleting a large number of files
- 10:08 AM Bug #23529 (Fix Under Review): TmapMigratePP.DataScan asserts in jewel
- https://github.com/ceph/ceph/pull/21208
- 09:30 AM Backport #21450 (Closed): jewel: MDS: MDS is laggy or crashed when deleting a large number of files
- jewel does not have this bug
- 09:14 AM Bug #23541 (Fix Under Review): client: fix request send_to_auth was never really used
- 03:20 AM Bug #23541: client: fix request send_to_auth was never really used
- https://github.com/ceph/ceph/pull/21191
- 03:20 AM Bug #23541 (Resolved): client: fix request send_to_auth was never really used
- Client request's send_to_auth was never really used in choose_target_mds, although it would be set to true when getti...
- 09:09 AM Bug #23532 (Fix Under Review): doc: create PendingReleaseNotes and add dev doc for openfile table...
- https://github.com/ceph/ceph/pull/21204
- 07:39 AM Bug #23491: fs: quota backward compatibility
- Will there be a feature bit for Mimic? If so, it's easy to add an MDS version check to the client.
- 06:59 AM Bug #23491: fs: quota backward compatibility
- An old userspace client talking to a new MDS is OK; a new client talking to an old MDS may have problems.
- 02:59 AM Backport #23356 (In Progress): jewel: client: prevent fallback to remount when dentry_invalidate_...
- 02:46 AM Backport #23157 (In Progress): luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
- 02:44 AM Backport #23158 (In Progress): jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
04/02/2018
- 05:42 PM Bug #23448 (Fix Under Review): nfs-ganesha: fails to parse rados URLs with '.' in object name
- https://review.gerrithub.io/#/c/406085/
- 04:37 PM Bug #23448: nfs-ganesha: fails to parse rados URLs with '.' in object name
- Opened github bug here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/283
- 01:47 PM Bug #23518: mds: crash when failover
- No. I did nothing.
During pressure test, I ran into two crashes, another one is #23503.
- 01:43 PM Bug #23518 (Need More Info): mds: crash when failover
- Did you evict the client session during this time?
- 01:42 PM Bug #23491: fs: quota backward compatibility
- There definitely needs to be something to keep this feature turned off until all the MDSes have upgraded. That may ju...
- 11:52 AM Bug #23446: ceph-fuse: getgroups failure causes exception
- New PR here:
https://github.com/ceph/ceph/pull/21132
- 10:41 AM Backport #23154 (In Progress): luminous: mds: FAILED assert (p != active_requests.end()) in MDReq...
- 10:40 AM Backport #23155 (Need More Info): jewel: mds: FAILED assert (p != active_requests.end()) in MDReq...
- @Patrick: This jewel backport appears to depend on https://github.com/ceph/ceph/commit/57c001335e180868088b329b575f0b...
- 10:29 AM Backport #22970 (In Progress): jewel: mds: session reference leak
- 10:20 AM Backport #22696 (Need More Info): luminous: client: dirty caps may never get the chance to flush
- @Patrick: Appears to depend on https://github.com/ceph/ceph/commit/8859ccf2824adbf836f32760b7e4f81c92cb47c4 - do you ...
- 10:19 AM Backport #22697 (Need More Info): jewel: client: dirty caps may never get the chance to flush
- @Patrick: Appears to depend on https://github.com/ceph/ceph/commit/8859ccf2824adbf836f32760b7e4f81c92cb47c4 - do you ...
- 10:15 AM Backport #22504 (Need More Info): luminous: client may fail to trim as many caps as MDS asked for
- non-trivial backport (the luminous and jewel conflicts are exactly the same, so once the luminous backport is done, t...
- 10:05 AM Backport #22505 (Need More Info): jewel: client may fail to trim as many caps as MDS asked for
- non-trivial backport (the luminous and jewel conflicts are exactly the same, so once the luminous backport is done, t...
- 09:56 AM Backport #22383 (In Progress): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0...
- 09:55 AM Backport #22384 (In Progress): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
- 07:10 AM Bug #23538: mds: fix occasional dir rstat inconsistency between multi-MDSes
- https://github.com/ceph/ceph/pull/21166
- 06:58 AM Bug #23538: mds: fix occasional dir rstat inconsistency between multi-MDSes
- I looked through the code and found in MDCache::predirty_journal_parents, the parent's rstat of a file will be update...
- 06:57 AM Bug #23538 (Resolved): mds: fix occasional dir rstat inconsistency between multi-MDSes
- Recently we found dir rstat inconsistency between multi-MDSes on ceph version Luminous.
For example, on client A,...
04/01/2018
- 08:52 PM Backport #22380 (In Progress): jewel: client reconnect gather race
- 08:50 PM Backport #22378 (In Progress): jewel: ceph-fuse: failure to remount in startup test does not hand...
03/31/2018
- 12:01 PM Backport #22031 (In Progress): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
- 12:00 PM Backport #22031 (Need More Info): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
- @Zheng, this backport appears to depend on 7596f4b76a31d16b01ed733be9c52ecfffa7b21a
Please advise.
- 05:28 AM Bug #23532 (Resolved): doc: create PendingReleaseNotes and add dev doc for openfile table purpose...
- https://github.com/ceph/ceph/pull/20132
- 05:19 AM Backport #21526 (In Progress): jewel: client: dual client segfault with racing ceph_shutdown
- 05:12 AM Backport #21489 (In Progress): jewel: qa: failures from pjd fstest
- 05:07 AM Backport #21450 (Need More Info): jewel: MDS: MDS is laggy or crashed when deleting a large numbe...
- Zheng, this backport appears to depend on 2b396cab22c9faaa7496000e77ec5f2d7e7d553d which is not in jewel.
03/30/2018
- 03:45 PM Bug #23530: mds: kicked out by monitor during rejoin
- https://github.com/ceph/ceph/pull/21144
The above PR fixes this issue, and it has been verified in our cluster.
- 03:38 PM Bug #23530 (Resolved): mds: kicked out by monitor during rejoin
- The function process_imported_caps might hold mds_lock too long;
the MDS tick thread will be starved, which leads to ...
- 03:36 PM Bug #23529 (Resolved): TmapMigratePP.DataScan asserts in jewel
- I'm not sure why this test is in the rados suite, when it tests a CephFS-specific tool(?), but here we are. After som...
- 03:35 AM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Currently, no incoming MDS messages can be fast dispatched.
So, since the MDSBeacon's handler handle_mds_...
- 03:15 AM Bug #23519 (Resolved): mds: mds got laggy because of MDSBeacon stuck in mqueue
- The MDSBeacon message from the monitor may be stuck for a long time in mqueue,
because DispatchQueue is currently dispatchi...
- 03:06 AM Bug #23503: mds: crash during pressure test
- Patrick Donnelly wrote:
> wei jin wrote:
> > After crash, the standby mds took it over, however, we observed anothe...
- 03:01 AM Bug #23518 (Resolved): mds: crash when failover
- 2018-03-29 10:25:04.719502 7f5ae5ad2700 -1 /build/ceph-12.2.4/src/mds/MDCache.cc: In function 'void MDCache::handle_c...
03/29/2018
- 09:04 PM Documentation #23334 (Resolved): doc: note client eviction results in a client instance blacklist...
- 07:44 PM Bug #23503: mds: crash during pressure test
- wei jin wrote:
> After crash, the standby mds took it over, however, we observed another crash:
This smells like ...
- 07:43 PM Bug #23503 (Duplicate): mds: crash during pressure test
- 09:08 AM Bug #23503: mds: crash during pressure test
- After crash, the standby mds took it over, however, we observed another crash:
2018-03-29 10:25:04.719502 7f5ae5ad...
- 09:05 AM Bug #23503 (Duplicate): mds: crash during pressure test
- ceph version: 12.2.4
10 mds, 9 active + 1 standby
disabled dir fragmentation
We created 9 directories, and pnine...
- 06:39 PM Bug #22754 (Resolved): mon: removing tier from an EC base pool is forbidden, even if allow_ec_ove...
- 06:39 PM Backport #22971 (Resolved): luminous: mon: removing tier from an EC base pool is forbidden, even ...
- 01:20 PM Backport #22971: luminous: mon: removing tier from an EC base pool is forbidden, even if allow_ec...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20433
merged
- 06:27 PM Bug #10915 (Fix Under Review): client: hangs on umount if it had an MDS session evicted
- https://github.com/ceph/ceph/pull/21065
- 02:28 PM Bug #23509: ceph-fuse: broken directory permission checking
- Kernel chdir syscall does:...
- 02:23 PM Bug #23509 (Resolved): ceph-fuse: broken directory permission checking
- Description of problem:
We have encountered cephfs-fuse mounted directory different behavior than base Linux or kern...
- 04:27 AM Backport #23474 (In Progress): luminous: client: allow caller to request that setattr request be ...
- https://github.com/ceph/ceph/pull/21109
- 12:21 AM Bug #23491 (Resolved): fs: quota backward compatibility
- Since...
03/28/2018
- 08:49 PM Backport #22968 (Resolved): jewel: Journaler::flush() may flush less data than expected, which ca...
- 08:48 PM Bug #22730 (Resolved): mds: scrub crash
- 08:48 PM Backport #22865 (Resolved): jewel: mds: scrub crash
- 08:47 PM Bug #22734 (Resolved): cephfs-journal-tool: may get assertion failure due to not shutting down
- 08:47 PM Backport #22863 (Resolved): jewel: cephfs-journal-tool: may get assertion failure due to not shut...
- 08:47 PM Backport #22861 (Resolved): jewel: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-...
- 08:46 PM Bug #22536 (Resolved): client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- 08:46 PM Backport #22700 (Resolved): jewel: client:_rmdir() uses a deleted memory structure(Dentry) leadin...
- 08:45 PM Bug #22562 (Resolved): mds: fix dump last_sent
- 08:45 PM Backport #22695 (Resolved): jewel: mds: fix dump last_sent
- 08:44 PM Bug #22008 (Resolved): Processes stuck waiting for write with ceph-fuse
- 08:44 PM Backport #22241 (Resolved): jewel: Processes stuck waiting for write with ceph-fuse
- 03:00 PM Bug #20592 (Resolved): client::mkdirs not handle well when two clients send mkdir request for a s...
- 03:00 PM Backport #20823 (Resolved): jewel: client::mkdirs not handle well when two clients send mkdir req...
- 06:01 AM Backport #22862 (Resolved): luminous: cephfs-journal-tool: may get assertion failure due to not s...
- 06:01 AM Bug #22627 (Resolved): qa: kcephfs lacks many configurations in the fs/multimds suites
- 06:01 AM Backport #22891 (Resolved): luminous: qa: kcephfs lacks many configurations in the fs/multimds su...
- 06:00 AM Bug #22652 (Resolved): client: fails to release to revoking Fc
- 06:00 AM Backport #22688 (Resolved): luminous: client: fails to release to revoking Fc
- 06:00 AM Bug #22910 (Resolved): client: setattr should drop "Fs" rather than "As" for mtime and size
- 05:59 AM Backport #22935 (Resolved): luminous: client: setattr should drop "Fs" rather than "As" for mtime...
- 05:59 AM Bug #22909 (Resolved): client: readdir bug
- 05:59 AM Backport #22936 (Resolved): luminous: client: readdir bug
- 05:59 AM Bug #22886 (Resolved): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClu...
- 05:58 AM Backport #22966 (Resolved): luminous: kclient: Test failure: test_full_same_file (tasks.cephfs.te...
- 05:58 AM Backport #22969 (Resolved): luminous: mds: session reference leak
- 05:56 AM Backport #23013 (Resolved): luminous: mds: LOCK_SYNC_MIX state makes "getattr" operations extreme...
- 05:55 AM Bug #22993 (Resolved): qa: kcephfs thrash sub-suite does not ignore MON_DOWN
- 05:55 AM Backport #23061 (Resolved): luminous: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
- 05:55 AM Bug #22990 (Resolved): qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set (OS...
- 05:55 AM Backport #23062 (Resolved): luminous: qa: mds-full: ignore "Health check failed: pauserd,pausewr ...
- 05:54 AM Bug #23094 (Resolved): mds: add uptime to status asok command
- 05:54 AM Backport #23150 (Resolved): luminous: mds: add uptime to status asok command
- 05:53 AM Bug #23041 (Resolved): ceph-fuse: clarify -i is not a valid option
- 05:53 AM Backport #23156 (Resolved): luminous: ceph-fuse: clarify -i is not a valid option
- 05:53 AM Bug #23028 (Resolved): client: allow client to use caps that are revoked but not yet returned
- 05:52 AM Backport #23314 (Resolved): luminous: client: allow client to use caps that are revoked but not y...
- 05:52 AM Backport #23355 (Resolved): luminous: client: prevent fallback to remount when dentry_invalidate_...
- 05:51 AM Backport #23475 (Resolved): luminous: ceph-fuse: trim ceph-fuse -V output
- https://github.com/ceph/ceph/pull/21600
- 05:51 AM Backport #23474 (Resolved): luminous: client: allow caller to request that setattr request be syn...
- https://github.com/ceph/ceph/pull/21109
03/27/2018
- 10:50 PM Backport #22862: luminous: cephfs-journal-tool: may got assertion failure due to not shutdown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20251
merged
- 10:49 PM Backport #22891: luminous: qa: kcephfs lacks many configurations in the fs/multimds suites
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20302
merged
- 10:49 PM Backport #22688: luminous: client: fails to release to revoking Fc
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20342
merged
- 10:48 PM Backport #22935: luminous: client: setattr should drop "Fs" rather than "As" for mtime and size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20354
merged
- 10:48 PM Backport #22936: luminous: client: readdir bug
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20356
merged
- 10:47 PM Backport #22966: luminous: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.Tes...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20417
merged
- 10:46 PM Backport #22969: luminous: mds: session reference leak
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20432
merged
- 10:45 PM Backport #23013: luminous: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow whe...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20455
merged
- 10:41 PM Backport #23061: luminous: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20523
merged
- 10:41 PM Backport #23062: luminous: qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20525
merged
- 10:38 PM Backport #23150: luminous: mds: add uptime to status asok command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20626
merged
- 10:37 PM Backport #23156: luminous: ceph-fuse: clarify -i is not a valid option
- Prashant D wrote:
> https://github.com/ceph/ceph/pull/20654
merged
- 10:37 PM Backport #23314: luminous: client: allow client to use caps that are revoked but not yet returned
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20904
merged
- 10:36 PM Backport #23355: luminous: client: prevent fallback to remount when dentry_invalidate_cb is true ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20960
merged
- 09:59 PM Bug #23248 (Pending Backport): ceph-fuse: trim ceph-fuse -V output
- 09:59 PM Bug #23293 (Resolved): client: Client::_read returns buffer length on success instead of bytes read
- 09:58 PM Bug #23288 (Resolved): ceph-fuse: Segmentation fault --localize-reads
- 09:58 PM Bug #23291 (Pending Backport): client: add way to sync setattr operations to MDS
- 09:57 PM Bug #23436 (Resolved): Client::_read() always return 0 when reading from inline data
- 01:52 PM Bug #23172 (Resolved): mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous ...
- 01:52 PM Backport #23414 (Resolved): luminous: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition b...
- 01:36 PM Bug #23421 (Need More Info): ceph-fuse: stop ceph-fuse if no root permissions?
- Jos, can you get more detailed debug logs when this happens? It is probably not related to sudo permissions.
03/26/2018
- 09:52 AM Bug #23452 (Fix Under Review): mds: assertion in MDSRank::validate_sessions
- https://github.com/ceph/ceph/pull/21040
- 09:51 AM Bug #23380 (Fix Under Review): mds: ceph.dir.rctime follows dir ctime not inode ctime
- https://github.com/ceph/ceph/pull/21039
- 08:43 AM Backport #23014 (In Progress): jewel: mds: LOCK_SYNC_MIX state makes "getattr" operations extreme...
- https://github.com/ceph/ceph/pull/21038
03/23/2018
- 09:08 PM Bug #23452 (Resolved): mds: assertion in MDSRank::validate_sessions
- This function is meant to make the MDS more resilient by killing any client sessions that have prealloc_inos that a...
- 01:39 PM Bug #23446 (In Progress): ceph-fuse: getgroups failure causes exception
- Pull request here:
https://github.com/ceph/ceph/pull/21025
- 11:58 AM Bug #23446: ceph-fuse: getgroups failure causes exception
- Ok, draft patch is building in shaman now. It should make ceph-fuse send the error back to the kernel when this occur...
- 10:10 AM Bug #23446: ceph-fuse: getgroups failure causes exception
- Looks like bad error handling. Here's getgroups:...
- 08:24 AM Bug #23446 (Resolved): ceph-fuse: getgroups failure causes exception
- Problem described here:
https://github.com/ceph/ceph-csi/pull/30#issuecomment-375331907...
- 12:28 PM Bug #23448 (Resolved): nfs-ganesha: fails to parse rados URLs with '.' in object name
- I added a ganesha config file to nfs-ganesha RADOS pool with the object name "ganesha.conf". I then added this to the...
03/22/2018
- 07:46 AM Bug #23394: nfs-ganesha: check cache configuration when exporting FSAL_CEPH
- The default should be the safe/efficient configuration and we should make an attempt to either fix it automatically o...
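- For context, the cache settings commonly recommended when exporting CephFS through FSAL_CEPH look roughly like the fragment below. This is a hedged sketch only: block and option names (MDCACHE vs CACHEINODE, Dir_Chunk, etc.) vary between NFS-Ganesha releases, so verify against the documentation for your version.

```
# Hypothetical ganesha.conf fragment: minimize Ganesha's own metadata
# caching so the libcephfs client (which holds CephFS capabilities)
# stays authoritative. Names are assumptions -- check your Ganesha version.
MDCACHE {
    Dir_Chunk = 0;      # disable Ganesha's directory chunk cache
}

EXPORT {
    Export_ID = 1;
    Path = /;           # CephFS path to export
    Pseudo = /cephfs;   # NFSv4 pseudo-root path
    FSAL {
        Name = CEPH;
    }
}
```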
- 07:38 AM Bug #23210 (Fix Under Review): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- https://github.com/ceph/ceph/pull/21003
- 06:13 AM Backport #23414 (In Progress): luminous: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definitio...
- https://github.com/ceph/ceph/pull/21001
- 03:50 AM Bug #23436 (Fix Under Review): Client::_read() always return 0 when reading from inline data
- https://github.com/ceph/ceph/pull/20997
- 03:00 AM Bug #23436 (Resolved): Client::_read() always return 0 when reading from inline data
- /a/pdonnell-2018-03-19_23:30:16-fs-wip-pdonnell-testing-20180317.202121-testing-basic-smithi/2306753/
03/21/2018
- 02:29 PM Bug #23429: File corrupt after writing to cephfs
- When did you create the filesystem? Was the filesystem created before luminous?
Do you know how these files were ...
- 09:38 AM Bug #23429: File corrupt after writing to cephfs
- Patrick Donnelly wrote:
> Can you paste `ceph fs dump` and `ceph osd dump`.
- 09:36 AM Bug #23429: File corrupt after writing to cephfs
- Patrick Donnelly wrote:
> Can you paste `ceph fs dump` and `ceph osd dump`.
dumped fsmap epoch 83
e83
enable_mu...
- 09:36 AM Bug #23429: File corrupt after writing to cephfs
- Zheng Yan wrote:
> Is the file size larger than it should be? Do you store cephfs metadata on ec pools?
metadata in r...
- 08:53 AM Bug #23429 (Need More Info): File corrupt after writing to cephfs
- Can you paste `ceph fs dump` and `ceph osd dump`.
- 07:50 AM Bug #23429: File corrupt after writing to cephfs
- Is the file size larger than it should be? Do you store cephfs metadata on ec pools?
- 07:08 AM Bug #23429 (Can't reproduce): File corrupt after writing to cephfs
- We found millions of corrupted files on our CephFS+ec+overwrite cluster. I selected some of them and used diff tools to c...
- 06:53 AM Bug #23425: Kclient: failed to umount
- Zheng Yan wrote:
> this is purely a kernel issue. which kernel did you use?
3.10.0-693.el7.x86_64
- 01:01 AM Bug #23425: Kclient: failed to umount
- this is purely a kernel issue. Which kernel did you use?
- 03:59 AM Feature #4504 (Resolved): mds: trim based on total memory usage
- 01:03 AM Documentation #23427 (Resolved): doc: create doc outlining steps to bring down cluster
03/20/2018
- 05:48 PM Bug #23425 (Closed): Kclient: failed to umount
- my fault, we will file a bz for this.
- 05:46 PM Bug #23425 (Closed): Kclient: failed to umount
- I was trying to mount on the kernel client, run IOs, and umount (automated script). While umounting, the umount command: sudo um...
- 12:49 PM Bug #23210: ceph-fuse: exported nfs get "stale file handle" when mds migrating
- Try modifying Client::ll_lookup to ignore the may_lookup check for "lookup ." and "lookup ..".
- 12:31 PM Bug #23210: ceph-fuse: exported nfs get "stale file handle" when mds migrating
- but with the above patch, "lookup ." on a non-directory inode should work
- 11:56 AM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Possibly related tracker:
http://tracker.ceph.com/issues/18537
- 11:44 AM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- This is worth looking into, but I wonder how big of a problem this is once you disable a lot of the ganesha mdcache b...
- 11:35 AM Bug #23291 (In Progress): client: add way to sync setattr operations to MDS
- PR is here:
https://github.com/ceph/ceph/pull/20913
- 11:34 AM Bug #23394: nfs-ganesha: check cache configuration when exporting FSAL_CEPH
- I don't think we should do this. We have settings that we recommend, but not using those doesn't mean that anything i...
- 04:56 AM Bug #23421 (Closed): ceph-fuse: stop ceph-fuse if no root permissions?
- I think it would be a good idea to prevent ceph-fuse from proceeding if there are no appropriate permissions.
I requ...
03/19/2018
- 04:42 PM Backport #23414 (Resolved): luminous: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition b...
- https://github.com/ceph/ceph/pull/21001
- 01:30 PM Bug #22051 (Can't reproduce): tests: Health check failed: Reduced data availability: 5 pgs peerin...
- Did not see further runs with this, closing for now
- 11:53 AM Backport #23355 (In Progress): luminous: client: prevent fallback to remount when dentry_invalida...
- https://github.com/ceph/ceph/pull/20960
- 01:51 AM Bug #23210: ceph-fuse: exported nfs get "stale file handle" when mds migrating
- I added some debug info in the call relation above, so I got logs from both the kernel and ceph-fuse when the error happened.
...
03/17/2018
- 06:31 PM Bug #23172 (Pending Backport): mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks l...
- 11:48 AM Bug #23288: ceph-fuse: Segmentation fault --localize-reads
- @Patrick:
This doesn't exist in luminous and jewel. This is the after effect of a cleanup. The logic is changed al...
03/16/2018
- 08:40 PM Bug #23394 (Rejected): nfs-ganesha: check cache configuration when exporting FSAL_CEPH
- NFS Ganesha should check that caching is disabled when exporting FSAL_CEPH. If misconfigured, it should raise an erro...
- 08:36 PM Bug #23393 (Resolved): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
- This config file template:
https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-nfs/templates/ganesha.conf....
- 05:29 PM Bug #23332: kclient: with fstab entry is not coming up reboot
- Can someone please check the logs? We are hitting this issue 50% of the time in sanity runs.
03/15/2018
- 07:06 PM Documentation #23334: doc: note client eviction results in a client instance blacklisted, not an ...
- > I don't really like "entity address" either as it's also not clear. Let's use "client instance".
Ok. PR - https:...
- 06:41 PM Documentation #23334: doc: note client eviction results in a client instance blacklisted, not an ...
- Rishabh Dave wrote:
> What attribute other than address can be used to remove a client from blacklist? I tried clien...
- 06:39 PM Documentation #23334: doc: note client eviction results in a client instance blacklisted, not an ...
- Rishabh Dave wrote:
> Or... do you want the docs to be clearer that by "address" it means the "entity address" (like h...
- 07:35 AM Documentation #23334: doc: note client eviction results in a client instance blacklisted, not an ...
- Or... do you want the docs to be clearer that by "address" it means the "entity address" (like help text below) and not...
- 07:24 AM Documentation #23334: doc: note client eviction results in a client instance blacklisted, not an ...
- Patrick Donnelly wrote:
> can be confusing as it lists an "address" as what's blacklisted. This could be confusing a...
- 01:30 PM Documentation #23271: doc: create install/setup guide for NFS-Ganesha w/ CephFS
- https://github.com/ceph/ceph/pull/20915
- 01:02 PM Bug #23380 (Resolved): mds: ceph.dir.rctime follows dir ctime not inode ctime
- The original intention of the rctime was to reflect the latest inode ctime in the children: https://lkml.org/lkml/2008/7...
- 09:34 AM Feature #23376 (Rejected): nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- Not sure if there is already a tracker issue for this feature. Feel free to close if that is the case.
- 04:35 AM Bug #23288 (Fix Under Review): ceph-fuse: Segmentation fault --localize-reads
- https://github.com/ceph/ceph/pull/20908
- 03:32 AM Backport #23314 (In Progress): luminous: client: allow client to use caps that are revoked but no...
- https://github.com/ceph/ceph/pull/20904
03/14/2018
- 10:32 PM Bug #23172 (Fix Under Review): mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks l...
- https://github.com/ceph/ceph/pull/20903
- 01:29 PM Bug #23172: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous upgrades
- Jan Fajerski wrote:
> Would the following work:
> It seems like the pre-12.2.4 MDS's suicide because the feature fl... - 01:06 PM Bug #23172: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous upgrades
- Would the following work:
It seems like the pre-12.2.4 MDS's suicide because the feature flag 8 changed values (file...
- 10:12 PM Bug #21985: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- Yes, let's not backport to jewel.
- 12:55 PM Bug #21985: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- @Yan, @Patrick - I checked and this affects jewel as well. The fix is problematic, though, because it causes MDSes to...
- 10:08 PM Bug #23247 (Won't Fix): doc: distinguish versions in ceph-fuse
- 10:08 PM Bug #23247 (Won't Fix): doc: distinguish versions in ceph-fuse
- Okay, let's close this then.
- 09:04 AM Bug #23247: doc: distinguish versions in ceph-fuse
- Patrick Donnelly wrote:
> I understand. Is it possible to allow FUSE to process --version as well so that it can pri...
- 07:11 PM Bug #23291: client: add way to sync setattr operations to MDS
- Ok, I have a draft patch for this now. What I don't have is a great way to test it.
Hmm...now that I look, we have...
- 07:04 PM Feature #23362 (Resolved): mds: add drop_cache command
- This should try to trim the cache as much as possible, optionally ask clients to release all caps, and optionally flu...
- 03:04 PM Bug #23250: mds: crash during replay: interval_set.h: 396: FAILED assert(p->first > start+len)
- No, PG scrub has nothing to do with metadata scrub. No idea what caused the corruption.
- 10:31 AM Bug #23289: mds: xfstest generic/089 hangs in rename syscall in luminous
- By the way, have you actually seen the test run to completion? I've waited a few hours and the test kept running.
- 09:00 AM Bug #23210: ceph-fuse: exported nfs get "stale file handle" when mds migrating
- After debugging kernel(version 4.14), I found something in call relation "exportfs_decode_fh()->fuse_fh_to_dentry()->...
- 08:50 AM Backport #23356 (Resolved): jewel: client: prevent fallback to remount when dentry_invalidate_cb ...
- https://github.com/ceph/ceph/pull/21189
- 08:50 AM Backport #23355 (Resolved): luminous: client: prevent fallback to remount when dentry_invalidate_...
- https://github.com/ceph/ceph/pull/20960
- 05:49 AM Bug #23332: kclient: with fstab entry is not coming up reboot
- Patrick Donnelly wrote:
> You probably have the wrong mons in the mount configuration.
I have only one mon.
- 03:44 AM Bug #23332: kclient: with fstab entry is not coming up reboot
- You need to add the _netdev mount option.
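An illustrative fstab entry for a CephFS kernel mount with _netdev (the monitor address, mount point, and secret file path below are placeholders, not taken from this report):

```
# /etc/fstab -- CephFS kernel-client mount. _netdev tells the init system
# to defer the mount until the network is up, which avoids failures on boot.
192.168.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0 0
```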
- 03:43 AM Bug #23280: mds: restarted mds may show wrong num_strays stats
- Patrick Donnelly wrote:
> 鹏 张 wrote:
> > 1.on ceph filesystem:mkdir test1 test2
> > 2.touch ./test1/1 ./test1/2
>...
- 03:35 AM Bug #23211 (Pending Backport): client: prevent fallback to remount when dentry_invalidate_cb is t...
- 03:35 AM Bug #23350: mds: deadlock during unlink and export
- ...
- 03:34 AM Bug #22802 (Resolved): libcephfs: allow setting default perms
- 03:26 AM Bug #23327: qa: pjd test sees wrong ctime after unlink
- Yes, it looks like that 'sleep 1' slept less than 1 second, probably caused by changes in the RC kernel ...
03/13/2018
- 10:19 PM Bug #23350 (New): mds: deadlock during unlink and export
- For: http://pulpito.ceph.com/pdonnell-2018-03-11_22:42:18-multimds-wip-pdonnell-testing-20180311.180352-testing-basic...
- 08:16 PM Bug #23327: qa: pjd test sees wrong ctime after unlink
- It may be that the sleep was cut short but I'm not sure we can trust the timestamps from teuthology because the outpu...
- 12:44 AM Bug #23327 (New): qa: pjd test sees wrong ctime after unlink
- From: http://pulpito.ceph.com/yuriw-2018-03-09_16:22:21-fs-wip-yuriw-master-3.8.18-distro-basic-smithi/2271692/
Th...
- 06:50 PM Bug #10915 (In Progress): client: hangs on umount if it had an MDS session evicted
- 02:44 PM Bug #10915: client: hangs on umount if it had an MDS session evicted
- The issue is also reproducible with the kernel client.
Patrick, can you assign this issue to me?
- 06:47 PM Bug #23332 (Need More Info): kclient: with fstab entry is not coming up reboot
- You probably have the wrong mons in the mount configuration.
- 01:44 PM Bug #23332 (Closed): kclient: with fstab entry is not coming up reboot
- With 4 clients (2 fuse and 2 kernel clients), I was running an automated script for making an fstab entry before reboot of the cli...
- 02:18 PM Documentation #23334 (Resolved): doc: note client eviction results in a client instance blacklist...
- The docs here:
http://docs.ceph.com/docs/master/cephfs/eviction/
can be confusing as it lists an "address" as w...
- 11:34 AM Bug #23250: mds: crash during replay: interval_set.h: 396: FAILED assert(p->first > start+len)
- I haven't run any MDS scrub; I never found how to do that properly. I did a PG scrub of all metadata PGs though.
- 04:04 AM Bug #23250: mds: crash during replay: interval_set.h: 396: FAILED assert(p->first > start+len)
- looks like InoTable::repair is buggy (it shouldn't increase inotable version without submitting a log event). did you...
- 09:50 AM Bug #23289: mds: xfstest generic/089 hangs in rename syscall in luminous
- I can confirm that the test still progresses (I see changes in the /sys/kernel/debug/ceph/xxx/mdsc file). But this m...
- 02:58 AM Bug #23289: mds: xfstest generic/089 hangs in rename syscall in luminous
- could you check whether the test is hung or just slow (check /sys/kernel/debug/ceph/xxx/mdsc). my local test shows the test is s...
- 09:50 AM Bug #23280: mds: restarted mds may show wrong num_strays stats
- Patrick Donnelly wrote:
> 鹏 张 wrote:
> > 1.on ceph filesystem:mkdir test1 test2
> > 2.touch ./test1/1 ./test1/2
>... - 09:29 AM Bug #23280: mds: restarted mds may show wrong num_strays stats
- Patrick Donnelly wrote:
> 鹏 张 wrote:
> > 1.on ceph filesystem:mkdir test1 test2
> > 2.touch ./test1/1 ./test1/2
>...
03/12/2018
- 10:05 PM Bug #16881: RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
- Latest instance: http://pulpito.ceph.com/yuriw-2018-03-09_16:22:21-fs-wip-yuriw-master-3.8.18-distro-basic-smithi/227...
- 10:00 PM Bug #23280 (Need More Info): mds: restarted mds may show wrong num_strays stats
- 鹏 张 wrote:
> 1.on ceph filesystem:mkdir test1 test2
> 2.touch ./test1/1 ./test1/2
> 3.ln ./test1/1 ./test2/1
> ...
- 09:07 PM Bug #23291: client: add way to sync setattr operations to MDS
- Sounds good to me.
- 08:20 PM Bug #23291: client: add way to sync setattr operations to MDS
- ceph_ll_fsync would give us almost the semantics we need. The main problem is that we may not have the file open and ...
- 07:53 PM Bug #23291: client: add way to sync setattr operations to MDS
- Jeff Layton wrote:
> Zheng pointed out that we could end up with a setattr request being cached in the libcephfs cli... - 02:34 PM Bug #23291: client: add way to sync setattr operations to MDS
- The big question here is how to achieve this:
The kernel has an AT_STATX_FORCE_SYNC flag, and we could repurpose i... - 02:11 PM Bug #23291: client: add way to sync setattr operations to MDS
- No. I don't think that bug is related. That one is dealing with setxattr (setting extended attributes) and yes, the c...
- 02:35 PM Bug #16739: Client::setxattr always sends setxattr request to MDS
- 02:35 PM Bug #16739: Client::setxattr always sends setxattr request to MDS
- ...I think you mean CEPH_CAP_XATTR_EXCL (Xx) here? Though you might need Ax too if someone is updating the ACLs using...
- 09:15 AM Backport #23314 (Resolved): luminous: client: allow client to use caps that are revoked but not y...
- https://github.com/ceph/ceph/pull/20904
- 09:14 AM Backport #23308 (Resolved): luminous: doc: Fix -d option in ceph-fuse doc
- https://github.com/ceph/ceph/pull/21616