Project

General

Profile

Activity

From 05/02/2018 to 05/31/2018

05/31/2018

01:47 PM Bug #24241: NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
If you have time, it's probably worthwhile to roll a new testcase for ceph_ll_get_stripe_osd for this sort of thing. ... Jeff Layton
11:57 AM Backport #24345 (Resolved): mimic: mds: root inode's snaprealm doesn't get journalled correctly
Nathan Cutler

05/30/2018

12:04 PM Backport #24345 (In Progress): mimic: mds: root inode's snaprealm doesn't get journalled correctly
https://github.com/ceph/ceph/pull/22322 Zheng Yan
11:52 AM Backport #24345 (Resolved): mimic: mds: root inode's snaprealm doesn't get journalled correctly
Zheng Yan
12:04 PM Bug #24343 (Resolved): mds: root inode's snaprealm doesn't get journalled correctly
https://github.com/ceph/ceph/pull/22320 Zheng Yan
11:25 AM Bug #24343 (Resolved): mds: root inode's snaprealm doesn't get journalled correctly
Zheng Yan
03:47 AM Backport #24341 (In Progress): luminous: mds memory leak
https://github.com/ceph/ceph/pull/22310 Zheng Yan
03:43 AM Backport #24341 (Resolved): luminous: mds memory leak
https://github.com/ceph/ceph/pull/22310 Zheng Yan
03:42 AM Backport #24340 (In Progress): mimic: mds memory leak
https://github.com/ceph/ceph/pull/22309 Zheng Yan
03:38 AM Backport #24340 (Resolved): mimic: mds memory leak
https://github.com/ceph/ceph/pull/22309 Zheng Yan
03:36 AM Bug #24289 (Pending Backport): mds memory leak
Zheng Yan

05/29/2018

09:08 PM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
We should actually discuss what kind of interface admins want. Dan van der Ster certainly has thoughts; others might ... Greg Farnum
05:33 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
A few questions:
- What is the sha1 of? The object's content? That isn't necessarily known (e.g. 4 MB object whe...
Sage Weil
09:58 AM Bug #22269 (Resolved): ceph-fuse: failure to remount in startup test does not handle client_die_o...
Nathan Cutler
09:58 AM Backport #22378 (Resolved): jewel: ceph-fuse: failure to remount in startup test does not handle ...
Nathan Cutler
09:57 AM Backport #23932 (Resolved): jewel: client: avoid second lock on client_lock
Nathan Cutler
09:45 AM Backport #24189: luminous: qa: kernel_mount.py umount must handle timeout arg
Prashant D wrote:
> This tracker should be closed as duplicate tracker for #24188.
Here's what I see happening he...
Nathan Cutler
09:40 AM Backport #24331 (Resolved): luminous: mon: mds health metrics sent to cluster log independently
https://github.com/ceph/ceph/pull/22558 Nathan Cutler
09:40 AM Backport #24330 (Resolved): mimic: mon: mds health metrics sent to cluster log independently
https://github.com/ceph/ceph/pull/22265 Nathan Cutler

05/28/2018

04:39 PM Feature #24233 (Closed): Add new command ceph mds status
Vikhyat Umrao
04:38 PM Feature #24233: Add new command ceph mds status
Patrick Donnelly wrote:
>
> Why can't this information be from `ceph fs status --format=json`? I'm not really se...
Vikhyat Umrao
03:47 AM Backport #24205 (In Progress): luminous: mds: broadcast quota to relevant clients when quota is e...
https://github.com/ceph/ceph/pull/22271 Prashant D
12:49 AM Bug #24269 (Fix Under Review): multimds pjd open test fails
https://github.com/ceph/ceph/pull/22266 Zheng Yan

05/27/2018

10:20 PM Bug #24308 (Pending Backport): mon: mds health metrics sent to cluster log independently
mimic backport: https://github.com/ceph/ceph/pull/22265 Sage Weil

05/25/2018

08:39 PM Backport #24311 (Resolved): luminous: pjd: cd: too many arguments
https://github.com/ceph/ceph/pull/22883 Nathan Cutler
08:39 PM Backport #24310 (Resolved): mimic: pjd: cd: too many arguments
https://github.com/ceph/ceph/pull/22882 Nathan Cutler
07:03 PM Bug #24307 (Pending Backport): pjd: cd: too many arguments
Josh Durgin
04:35 PM Bug #24307: pjd: cd: too many arguments
https://github.com/ceph/ceph/pull/22233 Neha Ojha
04:21 PM Bug #24307 (Fix Under Review): pjd: cd: too many arguments
-https://github.com/ceph/ceph/pull/22251- Sage Weil
04:20 PM Bug #24307 (Resolved): pjd: cd: too many arguments
... Sage Weil
04:44 PM Bug #24308 (Fix Under Review): mon: mds health metrics sent to cluster log independently
Sage Weil
04:44 PM Bug #24308: mon: mds health metrics sent to cluster log independently
https://github.com/ceph/ceph/pull/22252 Sage Weil
04:42 PM Bug #24308 (Resolved): mon: mds health metrics sent to cluster log independently
We generate a health warning, which has its own logging infrastructure. But MDSMonitor is *also* sending them to wrn... Sage Weil
03:23 PM Bug #24306 (Resolved): mds: use intrusive_ptr to manage Message life-time
We're regularly getting bugs relating to messages not getting released. Latest one is #24289.
Use a boost::intrusi...
Patrick Donnelly
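The intrusive_ptr proposal in #24306 can be sketched outside of Ceph. Below is a minimal, hypothetical stand-in for the Message type plus a hand-rolled intrusive smart pointer in the spirit of boost::intrusive_ptr; none of these names are Ceph's actual types. The point is that get()/put() pairing becomes automatic, so a forgotten put() (the class of leak behind #24289) is impossible by construction.

```cpp
#include <atomic>
#include <cassert>

static std::atomic<int> live_messages{0};  // tracks leaks in this demo only

// Hypothetical stand-in for Ceph's Message; names are illustrative.
struct Message {
  std::atomic<int> nref{1};                // the creator owns the first ref
  Message() { live_messages.fetch_add(1); }
  ~Message() { live_messages.fetch_sub(1); }
  void get() { nref.fetch_add(1); }
  void put() { if (nref.fetch_sub(1) == 1) delete this; }
};

// Minimal intrusive smart pointer: ref counting happens in the pointee,
// and get()/put() are called automatically on copy and destruction.
template <typename T>
class intrusive_ptr {
  T* p;
 public:
  explicit intrusive_ptr(T* q, bool add_ref = true) : p(q) {
    if (p && add_ref) p->get();
  }
  intrusive_ptr(const intrusive_ptr& o) : p(o.p) { if (p) p->get(); }
  intrusive_ptr& operator=(const intrusive_ptr&) = delete;
  ~intrusive_ptr() { if (p) p->put(); }
  T* operator->() const { return p; }
};
```

Passing `add_ref = false` adopts the creator's initial reference, mirroring the common pattern of wrapping a freshly allocated message without bumping the count.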
03:10 PM Feature #24233: Add new command ceph mds status
Vikhyat Umrao wrote:
> Thanks John and Patrick for the feedback. I think rename is not needed let us get a new comma...
Patrick Donnelly
02:51 PM Feature #24305 (Resolved): client/mds: allow renaming across quota boundaries
Issue here: https://github.com/ceph/ceph/blob/77b35faa36f83d837a5fe2685efcd4b9be59406a/src/client/Client.cc#L12214-L1... Patrick Donnelly
11:03 AM Backport #24296 (Resolved): mimic: repeated eviction of idle client until some IO happens
https://github.com/ceph/ceph/pull/22550 Nathan Cutler
11:03 AM Backport #24295 (Resolved): luminous: repeated eviction of idle client until some IO happens
https://github.com/ceph/ceph/pull/22780 Nathan Cutler
10:10 AM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
Patrick Donnelly wrote:
> John Spray wrote:
> > I'm a fan. Questions that spring to mind:
> >
> > - Do we apply...
John Spray
08:16 AM Bug #24289 (Fix Under Review): mds memory leak
https://github.com/ceph/ceph/pull/22240 Zheng Yan
08:09 AM Bug #24289 (Resolved): mds memory leak
forgot to call message->put() in some cases Zheng Yan
04:05 AM Bug #24052 (Pending Backport): repeated eviction of idle client until some IO happens
Patrick Donnelly
03:07 AM Feature #24286 (Resolved): tools: create CephFS shell
> The Ceph file system (CephFS) provides for kernel driver and FUSE client access. In testing and trivial system admi... Patrick Donnelly
02:54 AM Bug #24240 (Fix Under Review): qa: 1 mutations had unexpected outcomes
https://github.com/ceph/ceph/pull/22234 Zheng Yan
02:31 AM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
Zheng Yan wrote:
> change default of mds_snap_max_uid to 0
Okay, but we should enforce that as a file system opti...
Patrick Donnelly
02:25 AM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
change default of mds_snap_max_uid to 0 Zheng Yan
02:28 AM Feature #24285 (Resolved): mgr: add module which displays current usage of file system (`fs top`)
It would ideally provide a list of sessions doing I/O, what kind of I/O, bandwidth of reads/writes, etc. Also the sam... Patrick Donnelly
02:24 AM Documentation #23775 (Resolved): PendingReleaseNotes: add notes for major Mimic features
Patrick Donnelly
02:15 AM Feature #9659 (Duplicate): MDS: support cache eviction
Patrick Donnelly
01:46 AM Bug #23715 (Closed): "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-dist...
Problem seems to have gone away. Closing. Patrick Donnelly

05/24/2018

10:43 PM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
We're moving this to target 13.2.1. Patrick Donnelly
10:40 PM Documentation #23775: PendingReleaseNotes: add notes for major Mimic features
https://github.com/ceph/ceph/pull/22232 Patrick Donnelly
10:27 PM Bug #24284 (Resolved): cephfs: allow prohibiting user snapshots in CephFS
Since snapshots can be used to circumvent (accidentally or not) the quotas as snapshot file data that has since been ... Patrick Donnelly
09:03 PM Feature #22370 (Resolved): cephfs: add kernel client quota support
Patrick Donnelly
08:28 PM Backport #22378: jewel: ceph-fuse: failure to remount in startup test does not handle client_die_...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21162
merged
Yuri Weinstein
08:28 PM Backport #23932: jewel: client: avoid second lock on client_lock
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21734
merged
Yuri Weinstein
07:21 PM Backport #24209 (Resolved): mimic: client: deleted inode's Bufferhead which was in STATE::Tx woul...
Patrick Donnelly
07:21 PM Bug #24111 (Resolved): mds didn't update file's max_size
Patrick Donnelly
07:21 PM Backport #24187 (Resolved): mimic: mds didn't update file's max_size
Patrick Donnelly
07:20 PM Backport #24254 (Resolved): mimic: kceph: umount on evicted client blocks forever
Patrick Donnelly
07:20 PM Backport #24255 (Resolved): mimic: qa: kernel_mount.py umount must handle timeout arg
Patrick Donnelly
07:17 PM Backport #24186 (Resolved): mimic: client: segfault in trim_caps
Patrick Donnelly
07:15 PM Backport #24202 (Resolved): mimic: client: fails to respond cap revoke from non-auth mds
Patrick Donnelly
07:14 PM Backport #24206 (Resolved): mimic: mds: broadcast quota to relevant clients when quota is explici...
Patrick Donnelly
07:14 PM Bug #24118 (Resolved): mds: crash when using `config set` on tracked configs
Patrick Donnelly
07:13 PM Backport #24157 (Resolved): mimic: mds: crash when using `config set` on tracked configs
Patrick Donnelly
07:13 PM Backport #24191 (Resolved): mimic: fs: reduce number of helper debug messages at level 5 for client
Patrick Donnelly
05:05 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
John Spray wrote:
> I'm a fan. Questions that spring to mind:
>
> - Do we apply this to all files, or only large...
Patrick Donnelly
09:39 AM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
I'm a fan. Questions that spring to mind:
- Do we apply this to all files, or only large ones based on some heuri...
John Spray
02:11 PM Backport #24201 (In Progress): luminous: client: fails to respond cap revoke from non-auth mds
https://github.com/ceph/ceph/pull/22221 Prashant D
01:47 PM Bug #24240: qa: 1 mutations had unexpected outcomes
The test case corrupted open file table’s omap header. One field in omap header is ‘num_objects’. The corrupted heade... Zheng Yan
01:36 PM Bug #24241: NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
Patrick Donnelly wrote:
> What version of Ceph are you using?
I run vstart cluster from master (last commit in on...
supriti singh
05:01 AM Bug #24241 (Need More Info): NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
What version of Ceph are you using? Patrick Donnelly
01:36 PM Feature #24233: Add new command ceph mds status
Thanks John and Patrick for the feedback. I think a rename is not needed; let us get a new command which can give status... Vikhyat Umrao
09:57 AM Bug #23084 (Resolved): doc: update ceph-fuse with FUSE options
Kefu Chai
09:57 AM Backport #23151 (Resolved): luminous: doc: update ceph-fuse with FUSE options
Kefu Chai
09:49 AM Backport #24189 (In Progress): luminous: qa: kernel_mount.py umount must handle timeout arg
This tracker should be closed as duplicate tracker for #24188. Prashant D
09:42 AM Backport #24188 (In Progress): luminous: kceph: umount on evicted client blocks forever
https://github.com/ceph/ceph/pull/22208 Prashant D
07:48 AM Bug #24269 (Resolved): multimds pjd open test fails
http://qa-proxy.ceph.com/teuthology/pdonnell-2018-05-23_14:53:33-multimds-wip-pdonnell-testing-20180522.181319-mimic-... Zheng Yan
04:53 AM Backport #24185 (In Progress): luminous: client: segfault in trim_caps
Patrick Donnelly
02:21 AM Bug #23972: Ceph MDS Crash from client mounting aufs over cephfs
The crash was at "mdr->tracedn = mdr->dn[0].back()", because mdr->dn[0] is empty. The request that triggered the crash ... Zheng Yan

05/23/2018

08:55 PM Feature #24263 (New): client/mds: create a merkle tree of objects to allow efficient generation o...
Idea is that the collection of objects representing a file would be arranged as a merkle tree. Any write to an object... Patrick Donnelly
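A rough sketch of the idea (hypothetical code, not Ceph's): leaves are per-object digests, parents combine their children, and diffing two versions of a file only needs to descend into subtrees whose digests differ. `std::hash` stands in for a real digest such as SHA-1.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

using Digest = std::size_t;  // placeholder for a real digest type

static Digest hash_str(const std::string& s) {
  return std::hash<std::string>{}(s);
}
static Digest combine(Digest a, Digest b) { return a * 1000003u ^ b; }

// Build levels bottom-up; level 0 holds the per-object digests.
static std::vector<std::vector<Digest>>
build_tree(const std::vector<std::string>& objects) {
  std::vector<std::vector<Digest>> levels;
  std::vector<Digest> cur;
  for (const auto& o : objects) cur.push_back(hash_str(o));
  levels.push_back(cur);
  while (levels.back().size() > 1) {
    const auto& prev = levels.back();
    std::vector<Digest> next;
    for (std::size_t i = 0; i < prev.size(); i += 2)
      next.push_back(i + 1 < prev.size() ? combine(prev[i], prev[i + 1])
                                         : prev[i]);
    levels.push_back(next);
  }
  return levels;
}

// Diff two versions of a file: equal roots mean no changes at all; a real
// implementation would recurse and skip subtrees whose digests match.
static std::vector<std::size_t>
changed_objects(const std::vector<std::string>& a,
                const std::vector<std::string>& b) {
  auto ta = build_tree(a), tb = build_tree(b);
  std::vector<std::size_t> out;
  if (ta.back()[0] == tb.back()[0]) return out;  // identical roots
  for (std::size_t i = 0; i < a.size(); ++i)
    if (ta[0][i] != tb[0][i]) out.push_back(i);
  return out;
}
```

This also shows why the questions raised in the thread matter: whatever the leaf digest is "of" (object content, version, or something cheaper) determines how the tree is maintained on partial writes.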
07:09 PM Backport #24255 (In Progress): mimic: qa: kernel_mount.py umount must handle timeout arg
Patrick Donnelly
06:31 PM Backport #24255 (Resolved): mimic: qa: kernel_mount.py umount must handle timeout arg
https://github.com/ceph/ceph/pull/22138 Nathan Cutler
07:08 PM Backport #24254 (In Progress): mimic: kceph: umount on evicted client blocks forever
Patrick Donnelly
06:31 PM Backport #24254 (Resolved): mimic: kceph: umount on evicted client blocks forever
https://github.com/ceph/ceph/pull/22138 Nathan Cutler
12:44 PM Backport #24107 (In Progress): luminous: PurgeQueue::_consume() could return true when there were...
https://github.com/ceph/ceph/pull/22176 Prashant D
10:50 AM Feature #24233: Add new command ceph mds status
So I guess Vikhyat is suggesting an "MDS" command to match those for other daemons, but that wouldn't just be a renam... John Spray
08:11 AM Bug #23826 (Duplicate): mds: assert after daemon restart
Checked again, it's likely fixed by https://github.com/ceph/ceph/pull/21883/commits/0a38a499b86c0ee13aa0e783a8359bcce... Zheng Yan
08:08 AM Backport #24108 (In Progress): luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
Zheng Yan
08:07 AM Backport #24108: luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
https://github.com/ceph/ceph/pull/22171 Zheng Yan
07:19 AM Backport #23946: luminous: mds: crash when failover
@Nathan @Patrick I have cherry-picked pr21769 as well. Please review pr21900. Prashant D
07:17 AM Bug #24241 (New): NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
When calling ceph_ll_get_stripe_osd from nfs-ganesha fsal ceph in file mds.c, assertion failure causes segmentation f... supriti singh
03:46 AM Backport #24207 (In Progress): luminous: client: deleted inode's Bufferhead which was in STATE::T...
https://github.com/ceph/ceph/pull/22168 Zheng Yan
03:02 AM Bug #24239 (Fix Under Review): cephfs-journal-tool: Importing a zero-length purge_queue journal b...
https://github.com/ceph/ceph/pull/22144 Patrick Donnelly
02:33 AM Bug #24239 (Resolved): cephfs-journal-tool: Importing a zero-length purge_queue journal breaks it...
When we were importing a zero-length purge_queue journal exported previously, the last object and
the following one ...
yupeng chen
02:57 AM Bug #24236 (Fix Under Review): cephfs-journal-tool: journal inspect reports DAMAGED for purge que...
https://github.com/ceph/ceph/pull/22146 Patrick Donnelly
02:19 AM Bug #24236: cephfs-journal-tool: journal inspect reports DAMAGED for purge queue when it's empty
https://github.com/ceph/ceph/pull/22146
before fix, in a new created cluster fs, run:
$cephfs-journal-tool --jour...
cory gu
02:18 AM Bug #24236 (Fix Under Review): cephfs-journal-tool: journal inspect reports DAMAGED for purge que...
When the purge queue is empty, journal inspect still reports DAMAGED journal integrity.
cory gu
02:46 AM Bug #24240 (Resolved): qa: 1 mutations had unexpected outcomes
... Patrick Donnelly
02:46 AM Bug #24238 (Fix Under Review): test gets ENOSPC from bluestore block device
https://github.com/ceph/ceph/pull/22165 Sage Weil
02:43 AM Bug #24238: test gets ENOSPC from bluestore block device
The underlying block device was thinly provisioned and ran out of space. Asserting on ENOSPC from a block IO is expe... Sage Weil
02:42 AM Bug #24238: test gets ENOSPC from bluestore block device
Sage Weil
02:32 AM Bug #24238 (Resolved): test gets ENOSPC from bluestore block device
... Patrick Donnelly
01:36 AM Feature #22372 (Resolved): kclient: implement quota handling using new QuotaRealm
Zheng Yan

05/22/2018

10:18 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Zheng, do you think this is also resolved by the fix to #23826? Patrick Donnelly
10:12 PM Bug #24129 (In Progress): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap...
Patrick Donnelly
10:11 PM Documentation #24093 (In Progress): doc: Update *remove a metadata server*
Patrick Donnelly
10:08 PM Bug #24101 (Closed): mds: deadlock during fsstress workunit with 9 actives
Apparently resolved by the revert. Patrick Donnelly
10:03 PM Feature #23689: qa: test major/minor version upgrades
Partially addressed by the QA suite fs:upgrade:snaps. Patrick Donnelly
09:48 PM Feature #24233: Add new command ceph mds status
Why does the command need to be renamed? Patrick Donnelly
08:49 PM Feature #24233 (Closed): Add new command ceph mds status
Add new command ceph mds status
For more information please check - https://tracker.ceph.com/issues/24217
Changin...
Vikhyat Umrao
09:42 PM Feature #20598: mds: revisit LAZY_IO
See also: https://github.com/ceph/ceph/pull/21067 Patrick Donnelly
08:12 PM Bug #23972: Ceph MDS Crash from client mounting aufs over cephfs
John Spray wrote:
> Any chance you can reproduce this with debuginfo packages installed, so that we can get meaningf...
Sean Sullivan
01:54 PM Backport #24191 (In Progress): mimic: fs: reduce number of helper debug messages at level 5 for c...
Patrick Donnelly
01:53 PM Bug #24177: qa: fsstress workunit does not execute in parallel on same host without clobbering files
I suspect the problem is in unpacking and building ltp. The fsstress commands already use a pid-specific directory. H... Jeff Layton
01:52 PM Backport #24157 (In Progress): mimic: mds: crash when using `config set` on tracked configs
Patrick Donnelly
01:48 PM Backport #24209 (In Progress): mimic: client: deleted inode's Bufferhead which was in STATE::Tx w...
https://github.com/ceph/ceph/pull/22136 Patrick Donnelly
01:48 PM Backport #24187 (In Progress): mimic: mds didn't update file's max_size
https://github.com/ceph/ceph/pull/22137 Patrick Donnelly
01:46 PM Backport #24186 (In Progress): mimic: client: segfault in trim_caps
https://github.com/ceph/ceph/pull/22139 Patrick Donnelly
01:45 PM Backport #24202 (In Progress): mimic: client: fails to respond cap revoke from non-auth mds
Patrick Donnelly
01:44 PM Backport #24206 (In Progress): mimic: mds: broadcast quota to relevant clients when quota is expl...
Patrick Donnelly

05/21/2018

01:28 PM Backport #24049 (In Progress): luminous: ceph-fuse: missing dentries in readdir result
https://github.com/ceph/ceph/pull/22119 Prashant D
01:21 PM Backport #24050 (In Progress): luminous: mds: MClientCaps should carry inode's dirstat
https://github.com/ceph/ceph/pull/22118 Prashant D
08:49 AM Backport #24209 (Resolved): mimic: client: deleted inode's Bufferhead which was in STATE::Tx woul...
https://github.com/ceph/ceph/pull/22136 Nathan Cutler
08:49 AM Backport #24208 (Rejected): jewel: client: deleted inode's Bufferhead which was in STATE::Tx woul...
Nathan Cutler
08:49 AM Backport #24207 (Resolved): luminous: client: deleted inode's Bufferhead which was in STATE::Tx w...
https://github.com/ceph/ceph/pull/22168 Nathan Cutler
08:48 AM Backport #24206 (Resolved): mimic: mds: broadcast quota to relevant clients when quota is explici...
https://github.com/ceph/ceph/pull/22141 Nathan Cutler
08:48 AM Backport #24205 (Resolved): luminous: mds: broadcast quota to relevant clients when quota is expl...
https://github.com/ceph/ceph/pull/22271 Nathan Cutler
08:48 AM Backport #24202 (Resolved): mimic: client: fails to respond cap revoke from non-auth mds
https://github.com/ceph/ceph/pull/22140 Nathan Cutler
08:48 AM Backport #24201 (Resolved): luminous: client: fails to respond cap revoke from non-auth mds
https://github.com/ceph/ceph/pull/22221 Nathan Cutler

05/20/2018

11:56 PM Bug #24133 (Pending Backport): mds: broadcast quota to relevant clients when quota is explicitly set
Patrick Donnelly
11:55 PM Bug #23837 (Pending Backport): client: deleted inode's Bufferhead which was in STATE::Tx would le...
Patrick Donnelly
11:55 PM Bug #24172 (Pending Backport): client: fails to respond cap revoke from non-auth mds
Patrick Donnelly

05/19/2018

10:05 AM Backport #24191 (Resolved): mimic: fs: reduce number of helper debug messages at level 5 for client
https://github.com/ceph/ceph/pull/22154 Nathan Cutler
10:05 AM Backport #24190 (Resolved): luminous: fs: reduce number of helper debug messages at level 5 for c...
https://github.com/ceph/ceph/pull/23014 Nathan Cutler
10:04 AM Backport #24189 (Resolved): luminous: qa: kernel_mount.py umount must handle timeout arg
https://github.com/ceph/ceph/pull/22208 Nathan Cutler
10:04 AM Backport #24188 (Resolved): luminous: kceph: umount on evicted client blocks forever
https://github.com/ceph/ceph/pull/22208 Nathan Cutler
10:04 AM Backport #24187 (Resolved): mimic: mds didn't update file's max_size
https://github.com/ceph/ceph/pull/22137 Nathan Cutler
10:04 AM Backport #24186 (Resolved): mimic: client: segfault in trim_caps
https://github.com/ceph/ceph/pull/22139 Nathan Cutler
09:57 AM Bug #24054 (Pending Backport): kceph: umount on evicted client blocks forever
Zheng Yan
09:56 AM Bug #24053 (Pending Backport): qa: kernel_mount.py umount must handle timeout arg
Zheng Yan
04:45 AM Backport #24185 (Resolved): luminous: client: segfault in trim_caps
https://github.com/ceph/ceph/pull/22201 Patrick Donnelly
04:38 AM Bug #24137 (Pending Backport): client: segfault in trim_caps
Patrick Donnelly

05/18/2018

09:33 PM Bug #24111 (Pending Backport): mds didn't update file's max_size
Patrick Donnelly
09:32 PM Bug #21014 (Pending Backport): fs: reduce number of helper debug messages at level 5 for client
Patrick Donnelly
07:38 PM Bug #24177 (Resolved): qa: fsstress workunit does not execute in parallel on same host without cl...
... Patrick Donnelly
05:10 PM Bug #24172: client: fails to respond cap revoke from non-auth mds
adding mimic backport because the PR targets master Nathan Cutler
12:17 PM Bug #24172: client: fails to respond cap revoke from non-auth mds
this can also explain http://tracker.ceph.com/issues/23350 Zheng Yan
12:10 PM Bug #24172 (Fix Under Review): client: fails to respond cap revoke from non-auth mds
Zheng Yan
12:10 PM Bug #24172: client: fails to respond cap revoke from non-auth mds
https://github.com/ceph/ceph/pull/22080 Zheng Yan
12:06 PM Bug #24172 (Resolved): client: fails to respond cap revoke from non-auth mds
Zheng Yan
02:33 PM Bug #24173: ceph_volume_client: allow atomic update of RADOS objects

Greg Farnum's suggestions to do atomic RADOS object updates,
"If you've already got code that does all these t...
Ramana Raja
02:13 PM Bug #24173 (Resolved): ceph_volume_client: allow atomic update of RADOS objects
The manila driver needs the ceph_volume_client to atomically update contents
of RADOS objects used to store ganesha'...
Ramana Raja
02:09 PM Bug #23393 (Resolved): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
Ramana Raja
12:14 AM Bug #24137 (In Progress): client: segfault in trim_caps
https://github.com/ceph/ceph/pull/22073 Patrick Donnelly

05/17/2018

08:45 AM Backport #23946 (In Progress): luminous: mds: crash when failover
Nathan Cutler
08:44 AM Bug #24137: client: segfault in trim_caps
Compile test_trim_caps.cc with the newest libcephfs. Set mds_min_caps_per_client to 1, set mds_max_ratio_caps_per_cli... Zheng Yan
04:44 AM Bug #24137: client: segfault in trim_caps
Zheng Yan wrote:
> The problem is that anchor only pins current inode. Client::unlink() still may drop reference of ...
Patrick Donnelly
12:44 AM Bug #24137: client: segfault in trim_caps
The problem is that anchor only pins current inode. Client::unlink() still may drop reference of its parent inode. Zheng Yan
08:41 AM Backport #24157 (Resolved): mimic: mds: crash when using `config set` on tracked configs
https://github.com/ceph/ceph/pull/22153 Nathan Cutler
04:14 AM Documentation #24093 (Fix Under Review): doc: Update *remove a metadata server*
https://github.com/ceph/ceph/pull/22035 Jos Collin
12:55 AM Bug #24052: repeated eviction of idle client until some IO happens
https://github.com/ceph/ceph/pull/22026 Zheng Yan
12:52 AM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
Ivan Guan wrote:
> Zheng Yan wrote:
> > caused by https://github.com/ceph/ceph/pull/21615
>
> Sorry,i don't unde...
Zheng Yan

05/16/2018

09:04 PM Bug #24118 (Pending Backport): mds: crash when using `config set` on tracked configs
Sage Weil
07:59 PM Bug #24138: qa: support picking a random distro using new teuthology $
@Warren - wonder if it's easily doable to add `yaml` configuration so if suites ^ run on `rhel` then `-k testing` is used... Yuri Weinstein
05:43 PM Bug #24138: qa: support picking a random distro using new teuthology $
FYI
merged PRs related to this:
https://tracker.ceph.com/issues/24138
https://github.com/ceph/ceph/pull/21932
h...
Yuri Weinstein
05:34 PM Bug #24138: qa: support picking a random distro using new teuthology $

That's it I guess. Should also find a way to make `-k testing` the default unless distro == RHEL.
Patrick Donnelly
05:33 PM Bug #24138: qa: support picking a random distro using new teuthology $
@batrick I assume suites are: `fs`, `kcephfs`, `multimds`? More? Yuri Weinstein
06:16 PM Bug #24137: client: segfault in trim_caps
Zheng Yan wrote:
> [...]
>
> I think above commit isn't quite right. how about patch below
>
> [...]
I'm no...
Patrick Donnelly
11:10 AM Bug #24137: client: segfault in trim_caps
... Zheng Yan
12:47 PM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
Zheng Yan wrote:
> caused by https://github.com/ceph/ceph/pull/21615
Sorry, I don't understand why this PR can cau...
Ivan Guan
03:39 AM Bug #24052 (Fix Under Review): repeated eviction of idle client until some IO happens
Zheng Yan

05/15/2018

11:35 PM Bug #21014 (Fix Under Review): fs: reduce number of helper debug messages at level 5 for client
https://github.com/ceph/ceph/pull/21972 Patrick Donnelly
10:35 PM Backport #23991 (In Progress): luminous: client: hangs on umount if it had an MDS session evicted
Patrick Donnelly
09:37 PM Bug #24028: CephFS flock() on a directory is broken
Марк Коренберг wrote:
> Patrick Donnelly, why did you set version to 14? Will this change be merged to Luminous?
Be...
Patrick Donnelly
08:25 PM Bug #24028: CephFS flock() on a directory is broken
Patrick Donnelly, why did you set version to 14? Will this change be merged to Luminous? Марк Коренберг
08:23 PM Bug #24028: CephFS flock() on a directory is broken
https://github.com/ceph/ceph/blob/master/src/client/fuse_ll.cc#L1037 ? Марк Коренберг
07:48 PM Bug #24028: CephFS flock() on a directory is broken
Does ceph-fuse not have this problem? Patrick Donnelly
03:38 AM Bug #24028: CephFS flock() on a directory is broken
https://github.com/ceph/ceph-client/commit/ae2a8539ab7bb72f37306a544a555e9fc9ce8221 Zheng Yan
08:04 PM Bug #23837: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1576908 Patrick Donnelly
08:01 PM Bug #23837: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
Fixed formatting. Patrick Donnelly
11:37 AM Bug #23837 (Fix Under Review): client: deleted inode's Bufferhead which was in STATE::Tx would le...
https://github.com/ceph/ceph/pull/22001 Zheng Yan
08:04 PM Bug #24087 (Duplicate): client: assert during shutdown after blacklisted
Missed that. Thanks Zheng! Patrick Donnelly
09:58 AM Bug #24087: client: assert during shutdown after blacklisted
dup of http://tracker.ceph.com/issues/23837 Zheng Yan
07:55 PM Bug #24133 (Fix Under Review): mds: broadcast quota to relevant clients when quota is explicitly set
Patrick Donnelly
08:18 AM Bug #24133: mds: broadcast quota to relevant clients when quota is explicitly set
https://github.com/ceph/ceph/pull/21997 Zhi Zhang
08:13 AM Bug #24133 (Resolved): mds: broadcast quota to relevant clients when quota is explicitly set
We found the client won't get the quota updated for a long time in the following case. We found this issue on Luminous, but it... Zhi Zhang
07:41 PM Bug #24138 (Resolved): qa: support picking a random distro using new teuthology $
Similar to https://github.com/ceph/ceph/pull/22008/files Patrick Donnelly
07:38 PM Bug #24137: client: segfault in trim_caps
Reasonable assumption about this crash is either the inode was deleted (in which case the Cap should have been delete... Patrick Donnelly
07:18 PM Bug #24137 (Resolved): client: segfault in trim_caps
... Patrick Donnelly
03:28 PM Backport #24136 (Resolved): luminous: MDSMonitor: uncommitted state exposed to clients/mdss
https://github.com/ceph/ceph/pull/23013 Nathan Cutler
01:45 PM Bug #23768 (Pending Backport): MDSMonitor: uncommitted state exposed to clients/mdss
Mimic PR: https://github.com/ceph/ceph/pull/22005 Patrick Donnelly
03:53 AM Bug #24074: Read ahead in fuse client is broken with large buffer size
try passing '--client_readahead_max_bytes=4194304' option to ceph-fuse Zheng Yan

05/14/2018

10:21 PM Bug #24129 (Fix Under Review): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessi...
https://github.com/ceph/ceph/pull/21992 Patrick Donnelly
08:13 PM Bug #24129 (Resolved): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap) t...
... Patrick Donnelly
10:10 PM Bug #24074 (Need More Info): Read ahead in fuse client is broken with large buffer size
Chuan Qiu wrote:
> If the read is larger than 128K (e.g. 4M as our object size), the fuse client will receive read reques...
Patrick Donnelly
08:54 PM Backport #23935 (In Progress): luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
https://github.com/ceph/ceph/pull/21990 Patrick Donnelly
08:36 PM Backport #24130 (In Progress): luminous: mds: race with new session from connection and imported ...
https://github.com/ceph/ceph/pull/21989 Patrick Donnelly
08:33 PM Backport #24130 (Resolved): luminous: mds: race with new session from connection and imported ses...
Patrick Donnelly
08:32 PM Bug #24072 (Pending Backport): mds: race with new session from connection and imported session
Mimic PR: https://github.com/ceph/ceph/pull/21988 Patrick Donnelly
04:29 AM Bug #24072: mds: race with new session from connection and imported session
WIP: https://github.com/ceph/ceph/pull/21966 Patrick Donnelly
06:53 PM Documentation #24093: doc: Update *remove a metadata server*
It should be sufficient to say that the operator can just turn the MDS off, however that is done for their environ... Patrick Donnelly
05:57 PM Bug #24118 (Fix Under Review): mds: crash when using `config set` on tracked configs
https://github.com/ceph/ceph/pull/21984 Sage Weil
05:46 PM Bug #24118 (Resolved): mds: crash when using `config set` on tracked configs
These configs: https://github.com/ceph/ceph/blob/7dbba9e54282e0a4c3000eb0c1a66e346c7eab98/src/mds/MDSDaemon.cc#L362-L... Patrick Donnelly
04:03 PM Bug #24052: repeated eviction of idle client until some IO happens
The log for that client at 128.142.160.86 is here: ceph-post-file: dd10811e-2790-43e4-b0a9-135725f70209
Thanks for...
Dan van der Ster
01:47 PM Bug #24052: repeated eviction of idle client until some IO happens
It's not expected. Could you upload a client log with debug_ms=1? Zheng Yan
02:41 PM Bug #23972: Ceph MDS Crash from client mounting aufs over cephfs
Any chance you can reproduce this with debuginfo packages installed, so that we can get meaningful backtraces? John Spray
01:57 PM Bug #24054 (Fix Under Review): kceph: umount on evicted client blocks forever
https://github.com/ceph/ceph/pull/21941 Patrick Donnelly
01:48 PM Bug #23837 (In Progress): client: deleted inode's Bufferhead which was in STATE::Tx would lead a ...
Kicking this back to In Progress. Please see comments in original PR. It has been reverted by https://github.com/ceph... Patrick Donnelly
01:45 PM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
Revert: https://github.com/ceph/ceph/pull/21975 Patrick Donnelly
01:39 PM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
Zheng Yan
11:53 AM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
caused by https://github.com/ceph/ceph/pull/21615 Zheng Yan
01:29 PM Bug #24030 (Fix Under Review): ceph-fuse: double dash meaning
Patrick Donnelly
07:45 AM Bug #24053 (Fix Under Review): qa: kernel_mount.py umount must handle timeout arg
Zheng Yan
07:44 AM Bug #24053: qa: kernel_mount.py umount must handle timeout arg
https://github.com/ceph/ceph/pull/21941 Zheng Yan
04:05 AM Bug #24111 (Fix Under Review): mds didn't update file's max_size
https://github.com/ceph/ceph/pull/21963 Zheng Yan
03:37 AM Bug #24111 (Resolved): mds didn't update file's max_size
http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/2474517/ Zheng Yan
03:59 AM Bug #24039 (Closed): MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
Created a new ticket for the fsstress hang: http://tracker.ceph.com/issues/24111
Closing this one.
Zheng Yan

05/13/2018

03:01 PM Backport #24108 (Resolved): luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
https://github.com/ceph/ceph/pull/22171 Nathan Cutler
03:01 PM Backport #24107 (Resolved): luminous: PurgeQueue::_consume() could return true when there were no...
https://github.com/ceph/ceph/pull/22176 Nathan Cutler

05/12/2018

04:19 AM Bug #24072 (In Progress): mds: race with new session from connection and imported session
Patrick Donnelly

05/11/2018

10:14 PM Bug #23837 (Pending Backport): client: deleted inode's Bufferhead which was in STATE::Tx would le...
Mimic PR: https://github.com/ceph/ceph/pull/21954 Patrick Donnelly
10:09 PM Backport #23946: luminous: mds: crash when failover
Prashant, pr21769 is merged. Patrick Donnelly
10:06 PM Bug #24047 (Pending Backport): MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
Mimic PR: https://github.com/ceph/ceph/pull/21952 Patrick Donnelly
10:01 PM Bug #24073 (Pending Backport): PurgeQueue::_consume() could return true when there were no purge ...
Mimic PR: https://github.com/ceph/ceph/pull/21951 Patrick Donnelly
03:50 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
dongdong tao wrote:
> Yeah, that's what I wanted to recommend to you; it can work as you expect.
Thank you:-) Tha...
Xuehan Xu
03:46 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
Yeah, that's what I wanted to recommend to you; it can work as you expect. dongdong tao
03:04 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
dongdong tao wrote:
> Hi Xuehan,
> I'm just curious: how did you repair your purge queue journal?
By t...
Xuehan Xu
03:01 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
dongdong tao wrote:
> Hi Xuehan,
> I'm just curious: how did you repair your purge queue journal?
Actu...
Xuehan Xu
02:34 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
Hi Xuehan,
I'm just curious: how did you repair your purge queue journal?
dongdong tao
06:06 PM Bug #24101 (Closed): mds: deadlock during fsstress workunit with 9 actives
http://pulpito.ceph.com/pdonnell-2018-05-11_00:47:01-multimds-wip-pdonnell-testing-20180510.225359-testing-basic-smit... Patrick Donnelly
05:52 PM Feature #17230 (Fix Under Review): ceph_volume_client: py3 compatible
https://github.com/ceph/ceph/pull/21948 Rishabh Dave
04:26 AM Documentation #24093 (Resolved): doc: Update *remove a metadata server*
Update: http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mds/#remove-a-metadata-server
See:
http://d...
Jos Collin

05/10/2018

10:09 PM Bug #24090 (Resolved): mds: fragmentation in QA is slowing down ops enough for WRNs
http://pulpito.ceph.com/pdonnell-2018-05-08_18:15:09-fs-mimic-testing-basic-smithi/
http://pulpito.ceph.com/pdonnell...
Patrick Donnelly
08:48 PM Bug #24089 (Rejected): mds: print slow requests to debug log when sending health WRN to monitors ...
Nevermind, it is actually printed earlier in the log. Sorry for the noise. Patrick Donnelly
08:46 PM Bug #24089 (Rejected): mds: print slow requests to debug log when sending health WRN to monitors ...
... Patrick Donnelly
08:22 PM Bug #24088 (Duplicate): mon: slow remove_snaps op reported in cluster health log
... Patrick Donnelly
04:57 PM Bug #24087 (Duplicate): client: assert during shutdown after blacklisted
... Patrick Donnelly
11:48 AM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
the pjd: http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/2475062/... Zheng Yan
11:25 AM Feature #22446: mds: ask idle client to trim more caps
Glad to see this :)
- Backport set to mimic,luminous
Thanks.
Webert Lima
09:44 AM Bug #23332: kclient: with fstab entry is not coming up reboot
I still don't think this is a kernel issue. Please patch the kernel with the change below and try again.... Zheng Yan
06:44 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
Xuehan Xu wrote:
> In our online clusters, we encountered the bug #19593. Although we cherry-pick the fixing commits...
Xuehan Xu
04:38 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
https://github.com/ceph/ceph/pull/21923 Xuehan Xu
04:38 AM Bug #24073 (Resolved): PurgeQueue::_consume() could return true when there were no purge queue it...
In our online clusters, we encountered the bug #19593. Although we cherry-pick the fixing commits, the purge queue's ... Xuehan Xu
06:39 AM Bug #24074 (Need More Info): Read ahead in fuse client is broken with large buffer size
If the read is larger than 128K (e.g. 4M as our object size), the fuse client will receive the read as multiple ll_re... Chuan Qiu
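The splitting described above can be sketched as follows; this is a minimal illustration (not the actual client code), assuming the 128K per-request limit mentioned in the report:

```python
def split_read(offset, length, max_read=128 * 1024):
    """Yield (offset, length) pairs as a single large application read
    would be delivered to the client as multiple smaller read calls."""
    end = offset + length
    while offset < end:
        n = min(max_read, end - offset)
        yield (offset, n)
        offset += n

# A 4M read (the object size from the report) arrives as 32 x 128K requests.
chunks = list(split_read(0, 4 * 1024 * 1024))
```

Each chunk would then be handled independently, which is why read-ahead that only looks at one small request at a time loses the larger sequential pattern.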
04:10 AM Backport #23984 (In Progress): luminous: mds: scrub on fresh file system fails
https://github.com/ceph/ceph/pull/21922 Prashant D
04:07 AM Backport #23982 (In Progress): luminous: qa: TestVolumeClient.test_lifecycle needs updated for ne...
https://github.com/ceph/ceph/pull/21921 Prashant D
04:05 AM Bug #23826: mds: assert after daemon restart
The finish context of MDCache::open_undef_inodes_dirfrags() calls rejoin_gather_finish() without checking rejoin_gather. I t... Zheng Yan

05/09/2018

09:29 PM Bug #24072 (Resolved): mds: race with new session from connection and imported session
... Patrick Donnelly
09:14 PM Feature #22446 (New): mds: ask idle client to trim more caps
Patrick Donnelly
09:11 PM Documentation #23611: doc: add description of new fs-client auth profile
Blocked by resolution to #23751. Patrick Donnelly
09:08 PM Feature #22370 (In Progress): cephfs: add kernel client quota support
Patrick Donnelly
09:08 PM Feature #22372: kclient: implement quota handling using new QuotaRealm
Zheng, what's the status on those patches? Patrick Donnelly
09:05 PM Bug #23332 (Need More Info): kclient: with fstab entry is not coming up reboot
Zheng Yan wrote:
> kexec in dmesgs looks suspicious. client mounted cephfs, then used kexec to load kernel image aga...
Patrick Donnelly
09:02 PM Bug #23350: mds: deadlock during unlink and export
Well this is aggravating. I think it's time we plan evictions for clients that do not respond to cap release. Patrick Donnelly
08:56 PM Bug #23394 (Rejected): nfs-ganesha: check cache configuration when exporting FSAL_CEPH
Patrick Donnelly
08:52 PM Feature #14456 (Fix Under Review): mon: prevent older/incompatible clients from mounting the file...
https://github.com/ceph/ceph/pull/21885 Patrick Donnelly
07:06 PM Bug #23855: mds: MClientCaps should carry inode's dirstat
Testing; will revert Nathan Cutler
06:57 PM Bug #23291 (Resolved): client: add way to sync setattr operations to MDS
Nathan Cutler
06:57 PM Backport #23474 (Resolved): luminous: client: allow caller to request that setattr request be syn...
Nathan Cutler
02:55 PM Backport #23474: luminous: client: allow caller to request that setattr request be synchronous
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21109
merged
Yuri Weinstein
06:56 PM Bug #23602 (Resolved): mds: handle client requests when mds is stopping
Nathan Cutler
06:56 PM Backport #23632 (Resolved): luminous: mds: handle client requests when mds is stopping
Nathan Cutler
02:54 PM Backport #23632: luminous: mds: handle client requests when mds is stopping
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21346
merged
Yuri Weinstein
06:56 PM Bug #23541 (Resolved): client: fix request send_to_auth was never really used
Nathan Cutler
06:56 PM Backport #23635 (Resolved): luminous: client: fix request send_to_auth was never really used
Nathan Cutler
02:54 PM Backport #23635: luminous: client: fix request send_to_auth was never really used
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21354
merged
Yuri Weinstein
06:13 PM Bug #24040 (Need More Info): mds: assert in CDir::_committed
Patrick Donnelly
02:14 PM Bug #24040: mds: assert in CDir::_committed
Thanks for the report - it looks like you're using an 11.x ("kraken") version, which is no longer receiving bug fixes.... John Spray
01:56 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
the fsstress failure: http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/24745... Zheng Yan
01:12 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
The fsstress failure looks like a new bug.
The pjd failure is similar to http://tracker.ceph.com/issues/23327
two dead tasks...
Zheng Yan
11:55 AM Bug #23327: qa: pjd test sees wrong ctime after unlink
http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/2475062/ Zheng Yan
06:08 AM Backport #23951 (In Progress): luminous: mds: stuck during up:stopping
https://github.com/ceph/ceph/pull/21901 Prashant D
03:36 AM Backport #23946: luminous: mds: crash when failover
Opened backport PR#21900 (https://github.com/ceph/ceph/pull/21900). We need to cherry pick PR#21769 once it gets merg... Prashant D
03:29 AM Backport #23950 (In Progress): luminous: mds: stopping rank 0 cannot shutdown until log is trimmed
https://github.com/ceph/ceph/pull/21899 Prashant D

05/08/2018

10:49 PM Backport #24055 (In Progress): luminous: VolumeClient: allow ceph_volume_client to create 'volume...
Patrick Donnelly
10:45 PM Backport #24055 (Resolved): luminous: VolumeClient: allow ceph_volume_client to create 'volumes' ...
https://github.com/ceph/ceph/pull/21897 Patrick Donnelly
10:43 PM Feature #23695 (Pending Backport): VolumeClient: allow ceph_volume_client to create 'volumes' wit...
Mimic PR: https://github.com/ceph/ceph/pull/21896 Patrick Donnelly
10:37 PM Bug #24054 (Resolved): kceph: umount on evicted client blocks forever
Failed test:
/ceph/teuthology-archive/pdonnell-2018-05-08_01:06:46-kcephfs-mimic-testing-basic-smithi/2494030/teut...
Patrick Donnelly
10:33 PM Bug #24053 (Resolved): qa: kernel_mount.py umount must handle timeout arg
... Patrick Donnelly
10:27 PM Bug #24052 (Resolved): repeated eviction of idle client until some IO happens
We see repeated eviction of idle client sessions. We have client_reconnect_stale on the ceph-fuse clients, and these ... Dan van der Ster
08:56 PM Backport #24050 (Resolved): luminous: mds: MClientCaps should carry inode's dirstat
https://github.com/ceph/ceph/pull/22118 Nathan Cutler
08:56 PM Backport #24049 (Resolved): luminous: ceph-fuse: missing dentries in readdir result
https://github.com/ceph/ceph/pull/22119 Nathan Cutler
08:49 PM Bug #23530 (Resolved): mds: kicked out by monitor during rejoin
Nathan Cutler
08:49 PM Backport #23636 (Resolved): luminous: mds: kicked out by monitor during rejoin
Nathan Cutler
07:47 PM Backport #23636: luminous: mds: kicked out by monitor during rejoin
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21366
merged
Yuri Weinstein
08:49 PM Bug #23452 (Resolved): mds: assertion in MDSRank::validate_sessions
Nathan Cutler
08:48 PM Backport #23637 (Resolved): luminous: mds: assertion in MDSRank::validate_sessions
Nathan Cutler
07:46 PM Backport #23637: luminous: mds: assertion in MDSRank::validate_sessions
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21372
merged
Yuri Weinstein
08:48 PM Bug #23625 (Resolved): mds: sessions opened by journal replay do not get dirtied properly
Nathan Cutler
08:48 PM Backport #23702 (Resolved): luminous: mds: sessions opened by journal replay do not get dirtied p...
Nathan Cutler
07:46 PM Backport #23702: luminous: mds: sessions opened by journal replay do not get dirtied properly
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21441
merged
Yuri Weinstein
08:47 PM Bug #23582 (Resolved): MDSMonitor: mds health warnings printed in bad format
Nathan Cutler
08:47 PM Backport #23703 (Resolved): luminous: MDSMonitor: mds health warnings printed in bad format
Nathan Cutler
07:46 PM Backport #23703: luminous: MDSMonitor: mds health warnings printed in bad format
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21447
merged
Yuri Weinstein
08:47 PM Bug #23380 (Resolved): mds: ceph.dir.rctime follows dir ctime not inode ctime
Nathan Cutler
08:47 PM Backport #23750 (Resolved): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
Nathan Cutler
07:45 PM Backport #23750: luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21448
merged
Yuri Weinstein
08:46 PM Bug #23764 (Resolved): MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
Nathan Cutler
08:46 PM Backport #23791 (Resolved): luminous: MDSMonitor: new file systems are not initialized with the p...
Nathan Cutler
07:44 PM Backport #23791: luminous: MDSMonitor: new file systems are not initialized with the pending_fsma...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21512
merged
Yuri Weinstein
08:46 PM Bug #23714 (Resolved): slow ceph_ll_sync_inode calls after setattr
Nathan Cutler
08:45 PM Backport #23802 (Resolved): luminous: slow ceph_ll_sync_inode calls after setattr
Nathan Cutler
07:44 PM Backport #23802: luminous: slow ceph_ll_sync_inode calls after setattr
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21542
merged
Yuri Weinstein
08:45 PM Bug #23652 (Resolved): client: fix gid_count check in UserPerm->deep_copy_from()
Nathan Cutler
08:44 PM Backport #23771 (Resolved): luminous: client: fix gid_count check in UserPerm->deep_copy_from()
Nathan Cutler
07:43 PM Backport #23771: luminous: client: fix gid_count check in UserPerm->deep_copy_from()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21596
merged
Yuri Weinstein
08:44 PM Bug #23762 (Resolved): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Nathan Cutler
08:43 PM Backport #23792 (Resolved): luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not p...
Nathan Cutler
07:43 PM Backport #23792: luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21732
merged
Yuri Weinstein
08:43 PM Bug #23873 (Resolved): cephfs does not count st_nlink for directories correctly?
Nathan Cutler
08:43 PM Bug #19706: Laggy mon daemons causing MDS failover (symptom: failed to set counters on mds daemon...
I don't have reason to believe use of utime_t caused this issue but it's possible this could fix it: https://github.c... Patrick Donnelly
08:43 PM Backport #23987 (Resolved): luminous: cephfs does not count st_nlink for directories correctly?
Nathan Cutler
07:42 PM Backport #23987: luminous: cephfs does not count st_nlink for directories correctly?
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21796
merged
Yuri Weinstein
08:42 PM Bug #23880 (Resolved): mds: scrub code stuck at trimming log segments
Nathan Cutler
08:42 PM Backport #23930 (Resolved): luminous: mds: scrub code stuck at trimming log segments
Nathan Cutler
07:41 PM Backport #23930: luminous: mds: scrub code stuck at trimming log segments
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21840
merged
Yuri Weinstein
08:41 PM Bug #23813 (Resolved): client: "remove_session_caps still has dirty|flushing caps" when thrashing...
Nathan Cutler
08:41 PM Backport #23934 (Resolved): luminous: client: "remove_session_caps still has dirty|flushing caps"...
Nathan Cutler
07:41 PM Backport #23934: luminous: client: "remove_session_caps still has dirty|flushing caps" when thras...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21844
merged
Yuri Weinstein
06:25 PM Bug #21777 (New): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Patrick Donnelly
01:36 PM Bug #21777 (Fix Under Review): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Zheng Yan
12:40 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
-https://github.com/ceph/ceph/pull/21883- Zheng Yan
03:57 AM Bug #21777 (In Progress): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Zheng Yan
06:23 PM Bug #24047 (Fix Under Review): MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
https://github.com/ceph/ceph/pull/21883 Patrick Donnelly
06:23 PM Bug #24047 (Resolved): MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
... Patrick Donnelly
04:58 PM Bug #24030: ceph-fuse: double dash meaning
https://github.com/ceph/ceph/pull/21889 Jos Collin
04:07 PM Bug #23885 (Resolved): MDSMonitor: overzealous MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health warni...
Mimic PR: https://github.com/ceph/ceph/pull/21888 Patrick Donnelly
02:39 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
Right, there's something else wrong with the test. Patrick Donnelly
01:35 PM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
These are intentional crashes in the table transaction test. Zheng Yan
07:10 AM Backport #23936 (In Progress): luminous: cephfs-journal-tool: segfault during journal reset
https://github.com/ceph/ceph/pull/21874 Prashant D

05/07/2018

11:40 PM Bug #24040 (Need More Info): mds: assert in CDir::_committed
... zs 吴
10:56 PM Bug #23894 (Pending Backport): ceph-fuse: missing dentries in readdir result
Mimic: https://github.com/ceph/ceph/pull/21867 Patrick Donnelly
10:47 PM Bug #23855 (Pending Backport): mds: MClientCaps should carry inode's dirstat
Mimic PR: https://github.com/ceph/ceph/pull/21866 Patrick Donnelly
10:40 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
<deleted/> Patrick Donnelly
08:54 PM Bug #21777 (New): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Deleted: see #24047. Patrick Donnelly
10:00 PM Bug #24039 (Closed): MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
... Patrick Donnelly
02:41 PM Bug #24002 (Resolved): qa: check snap upgrade on multimds cluster
Patrick Donnelly
01:39 PM Bug #24030: ceph-fuse: double dash meaning
Jos, please take a crack at fixing this. Thanks! Patrick Donnelly
04:38 AM Bug #24030 (Closed): ceph-fuse: double dash meaning
... Jos Collin
01:37 PM Bug #23994 (Need More Info): mds: OSD space is not reclaimed until MDS is restarted
Patrick Donnelly
02:47 AM Bug #23994: mds: OSD space is not reclaimed until MDS is restarted
Please try again and dump the MDS's cache (ceph daemon mds.xxx dump cache /tmp/cachedump.x). Zheng Yan
05:41 AM Backport #23934 (In Progress): luminous: client: "remove_session_caps still has dirty|flushing ca...
https://github.com/ceph/ceph/pull/21844 Prashant D
04:36 AM Bug #23768 (Fix Under Review): MDSMonitor: uncommitted state exposed to clients/mdss
https://github.com/ceph/ceph/pull/21842 Patrick Donnelly
04:02 AM Backport #23931 (In Progress): luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops...
https://github.com/ceph/ceph/pull/21841 Prashant D
03:59 AM Backport #23930 (In Progress): luminous: mds: scrub code stuck at trimming log segments
https://github.com/ceph/ceph/pull/21840 Prashant D

05/06/2018

08:44 PM Bug #24028: CephFS flock() on a directory is broken
I tested flock() logic on different hosts.
On one host:
flock my_dir sleep 1000
On the second:
flock my_dir e...
Марк Коренберг
08:42 PM Bug #24028 (Resolved): CephFS flock() on a directory is broken
According to the man page, flock() semantics must also work on a directory. It actually works with, say, ext4. It does not ... Марк Коренберг
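The expected semantics the reporter describes can be checked locally; a minimal Python sketch, using a temporary directory on a local filesystem (ext4/tmpfs) to stand in for a CephFS directory:

```python
import fcntl
import os
import tempfile

# On a local filesystem, flock() on a directory file descriptor succeeds,
# matching the flock(2) man page; the report is that the same call
# misbehaves on a CephFS-backed directory.
d = tempfile.mkdtemp()
fd = os.open(d, os.O_RDONLY)  # directories can be opened read-only

# Non-blocking exclusive lock; raises OSError if the lock cannot be taken.
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
print("exclusive lock acquired on directory:", d)

fcntl.flock(fd, fcntl.LOCK_UN)
os.close(fd)
os.rmdir(d)
```

Running the equivalent against a CephFS mount point (instead of the local temp directory) is what exposes the reported difference in behavior.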

05/04/2018

02:12 PM Bug #23994: mds: OSD space is not reclaimed until MDS is restarted
This was on the kernel client. I tried Ubuntu's 4.13.0-39-generic and 4.15.0-15-generic kernels.
With the fuse cli...
Niklas Hambuechen
01:44 PM Bug #23994: mds: OSD space is not reclaimed until MDS is restarted
What client (kernel or fuse), and what version of the client? John Spray
05:09 AM Bug #23885 (Fix Under Review): MDSMonitor: overzealous MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX heal...
https://github.com/ceph/ceph/pull/21810 Patrick Donnelly
02:07 AM Bug #24002 (Pending Backport): qa: check snap upgrade on multimds cluster
Patrick Donnelly

05/03/2018

10:33 PM Feature #23695: VolumeClient: allow ceph_volume_client to create 'volumes' without namespace isol...
https://github.com/ceph/ceph/pull/21808 Ramana Raja
09:27 PM Bug #24004 (Resolved): mds: curate priority of perf counters sent to mgr
Make sure we have the most interesting statistics available for Prometheus for dashboard use. Additionally, see if we... Patrick Donnelly
08:36 PM Bug #24002 (Fix Under Review): qa: check snap upgrade on multimds cluster
https://github.com/ceph/ceph/pull/21805 Patrick Donnelly
08:35 PM Bug #24002 (Resolved): qa: check snap upgrade on multimds cluster
To get an idea how the snap format upgrade works on a previously multimds cluster. (No need to exercise the two MDS s... Patrick Donnelly
07:48 PM Cleanup #24001 (Resolved): MDSMonitor: remove vestiges of `mds deactivate`
For Nautilus. Patrick Donnelly
06:02 PM Backport #23946: luminous: mds: crash when failover
Will also need: https://github.com/ceph/ceph/pull/21769 Patrick Donnelly
05:33 PM Feature #23623 (Resolved): mds: mark allow_snaps true by default
Patrick Donnelly
05:33 PM Documentation #23583 (Resolved): doc: update snapshot doc to account for recent changes
Patrick Donnelly
01:41 PM Backport #23987 (In Progress): luminous: cephfs does not count st_nlink for directories correctly?
Patrick Donnelly
10:28 AM Backport #23987 (Resolved): luminous: cephfs does not count st_nlink for directories correctly?
https://github.com/ceph/ceph/pull/21796 Nathan Cutler
01:27 PM Bug #23393 (Fix Under Review): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal...
Ramana Raja
01:26 PM Bug #23393: ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
https://github.com/ceph/ceph-ansible/pull/2556 Ramana Raja
01:02 PM Bug #23994 (Need More Info): mds: OSD space is not reclaimed until MDS is restarted
With my Luminous test cluster on Ubuntu I ran into a situation where I filled up an OSD by putting files on CephFS, a... Niklas Hambuechen
10:29 AM Backport #23991 (Resolved): luminous: client: hangs on umount if it had an MDS session evicted
https://github.com/ceph/ceph/pull/22018 Nathan Cutler
10:29 AM Backport #23990 (Rejected): jewel: client: hangs on umount if it had an MDS session evicted
Nathan Cutler
10:28 AM Backport #23989 (Resolved): luminous: mds: don't report slow request for blocked filelock request
https://github.com/ceph/ceph/pull/22782
follow-on fix: https://github.com/ceph/ceph/pull/26048 went into 12.2.11
Nathan Cutler
10:27 AM Backport #23984 (Resolved): luminous: mds: scrub on fresh file system fails
https://github.com/ceph/ceph/pull/21922 Nathan Cutler
10:27 AM Backport #23982 (Resolved): luminous: qa: TestVolumeClient.test_lifecycle needs updated for new e...
https://github.com/ceph/ceph/pull/21921 Nathan Cutler
12:00 AM Bug #23958: mds: scrub doesn't always return JSON results
Zheng Yan wrote:
> Recursive scrub is async; it does not return anything.
Good point, thanks. Even so, we should r...
Patrick Donnelly

05/02/2018

11:56 PM Bug #16842 (Can't reproduce): mds: replacement MDS crashes on InoTable release
Patrick Donnelly
10:57 PM Bug #23975 (Pending Backport): qa: TestVolumeClient.test_lifecycle needs updated for new eviction...
Patrick Donnelly
07:53 PM Bug #23975 (Fix Under Review): qa: TestVolumeClient.test_lifecycle needs updated for new eviction...
https://github.com/ceph/ceph/pull/21789 Patrick Donnelly
06:59 PM Bug #23975 (Resolved): qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior
... Patrick Donnelly
08:50 PM Bug #23768 (New): MDSMonitor: uncommitted state exposed to clients/mdss
Moving this back to fs. This is a different bug Josh. Patrick Donnelly
08:44 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
backport is tracked in the fs bug Josh Durgin
06:06 PM Bug #23972 (New): Ceph MDS Crash from client mounting aufs over cephfs

Here is a rough outline of my topology
https://pastebin.com/HQqbMxyj
---
I can reliably crash all (in my case...
Sean Sullivan
05:02 PM Feature #17230 (In Progress): ceph_volume_client: py3 compatible
Patrick Donnelly
04:08 PM Bug #10915 (Pending Backport): client: hangs on umount if it had an MDS session evicted
Patrick Donnelly
02:21 PM Bug #23960 (Pending Backport): mds: scrub on fresh file system fails
Patrick Donnelly
02:20 PM Bug #23873 (Pending Backport): cephfs does not count st_nlink for directories correctly?
Patrick Donnelly
02:20 PM Bug #22428 (Pending Backport): mds: don't report slow request for blocked filelock request
Patrick Donnelly
03:10 AM Bug #23958: mds: scrub doesn't always return JSON results
Recursive scrub is async; it does not return anything. Zheng Yan
 

Also available in: Atom