Activity

From 11/21/2017 to 12/20/2017

12/20/2017

06:41 PM Backport #22493 (In Progress): luminous: mds: crash during exiting
Nathan Cutler
02:47 AM Backport #22493 (Resolved): luminous: mds: crash during exiting
https://github.com/ceph/ceph/pull/19610 Zheng Yan
11:56 AM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
Nathan Cutler
11:54 AM Backport #22505 (Rejected): jewel: client may fail to trim as many caps as MDS asked for
Nathan Cutler
11:54 AM Backport #22504 (Resolved): luminous: client may fail to trim as many caps as MDS asked for
https://github.com/ceph/ceph/pull/24119 Nathan Cutler
11:54 AM Backport #22503 (Resolved): luminous: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19646 Nathan Cutler
11:54 AM Backport #22501 (Resolved): luminous: qa: CommandFailedError: Command failed on smithi135 with st...
https://github.com/ceph/ceph/pull/19628 Nathan Cutler
11:54 AM Backport #22500 (Resolved): luminous: cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19627
Nathan Cutler
11:54 AM Backport #22499 (Resolved): luminous: cephfs-journal-tool: tool would miss to report some invalid...
https://github.com/ceph/ceph/pull/19626 Nathan Cutler
11:39 AM Backport #21947 (In Progress): luminous: mds: preserve order of requests during recovery of multi...
Nathan Cutler
03:41 AM Backport #22494 (Fix Under Review): jewel: unsigned integer overflow in file_layout_t::get_period
https://github.com/ceph/ceph/pull/19611 Zheng Yan
03:35 AM Backport #22494 (Resolved): jewel: unsigned integer overflow in file_layout_t::get_period
A customer encountered this
https://bugzilla.redhat.com/show_bug.cgi?id=1527548
Zheng Yan
02:09 AM Bug #22492 (Fix Under Review): Locker::calc_new_max_size does not take layout.stripe_count into a...
https://github.com/ceph/ceph/pull/19609 Zheng Yan
02:07 AM Bug #22492 (Resolved): Locker::calc_new_max_size does not take layout.stripe_count into account
If layout.stripe_count is N, size_increment is actually 'N * mds_client_writeable_range_max_inc_objs' objects. Zheng Yan
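The arithmetic in the report above can be illustrated with a small sketch (a hypothetical helper, not the actual Locker::calc_new_max_size code; the 4 MiB object size and the other values are assumptions for illustration):

```python
# Hypothetical illustration of the reported bug: the client writeable range
# is meant to grow by a fixed number of objects, but with striping each
# object-increment actually covers stripe_count RADOS objects of file data.
def size_increment(object_size, stripe_count, max_inc_objs):
    intended = object_size * max_inc_objs               # growth that was meant
    actual = object_size * stripe_count * max_inc_objs  # growth that happens
    return intended, actual

intended, actual = size_increment(
    object_size=4 * 1024 * 1024,  # 4 MiB object size (assumed default)
    stripe_count=4,               # layout.stripe_count = N
    max_inc_objs=1024,            # mds_client_writeable_range_max_inc_objs
)
assert actual == 4 * intended     # increment is N times larger than intended
```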
01:09 AM Bug #22360 (Pending Backport): mds: crash during exiting
Patrick Donnelly
12:35 AM Backport #22490 (In Progress): luminous: mds: handle client session messages when mds is stopping
Patrick Donnelly
12:35 AM Backport #22490 (Resolved): luminous: mds: handle client session messages when mds is stopping
https://github.com/ceph/ceph/pull/19585 Patrick Donnelly

12/19/2017

11:45 PM Feature #22446: mds: ask idle client to trim more caps
Zheng Yan wrote:
> An idle client holding so many caps is wasteful, because it increases the chance that the mds trims other re...
Patrick Donnelly
09:55 PM Bug #22488 (New): mds: unlink blocks on large file when metadata pool is full
With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
Patrick Donnelly
09:53 PM Bug #22487 (Rejected): mds: setattr blocked when metadata pool is full
With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
Patrick Donnelly
07:25 PM Bug #22483 (In Progress): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obso...
Patrick Donnelly
07:12 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
https://github.com/ceph/ceph/blob/06b7707cee87a54517630def0ad274340325a677/src/mds/Server.cc#L1742
since: b4ca5ae4...
Patrick Donnelly
06:56 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
... Patrick Donnelly
05:00 PM Fix #15064 (Closed): multifs: tweak text on "flag set enable multiple"
Okay, thanks for the info John. I'll just close this then. Patrick Donnelly
03:36 PM Fix #15064: multifs: tweak text on "flag set enable multiple"
I think there was an idea that the message ought to be scarier, but I'm not sure we need that at this point. John Spray
03:05 PM Fix #15064 (Need More Info): multifs: tweak text on "flag set enable multiple"
Not sure what the problem is here. Patrick Donnelly
03:13 PM Tasks #22479 (Closed): multifs: review testing coverage
Patrick Donnelly
03:11 PM Feature #22478 (Rejected): multifs: support snapshots for shared data pool
If two file systems use the same data pool with different RADOS namespaces, it is necessary for them to cooperate on ... Patrick Donnelly
03:08 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
Once all sub-tasks are complete. Patrick Donnelly
09:26 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
Flagging for luminous backport because b4ca5ae (which presumably introduced the test failure this is fixing (?)) was ... Nathan Cutler
05:51 AM Bug #22475 (Fix Under Review): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
Patrick Donnelly
05:51 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
https://github.com/ceph/ceph/pull/19588 Patrick Donnelly
05:25 AM Bug #22475 (Resolved): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull...
... Patrick Donnelly
05:53 AM Bug #22436 (Pending Backport): qa: CommandFailedError: Command failed on smithi135 with status 22...
https://github.com/ceph/ceph/pull/19534 Patrick Donnelly
03:09 AM Bug #22460: mds: handle client session messages when mds is stopping
luminous PR: https://github.com/ceph/ceph/pull/19585 Zheng Yan
02:01 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
https://github.com/ceph/ceph-client/commit/a2a44b35146e6ccf099e4326bc1a7e2cdaf02f65 Zheng Yan

12/18/2017

02:53 PM Bug #22460 (Pending Backport): mds: handle client session messages when mds is stopping
Patrick Donnelly
02:49 PM Bug #22428: mds: don't report slow request for blocked filelock request
Perhaps reclassify the slow requests blocked by locks as "clients not releasing file locks" or similar to be differen... Patrick Donnelly
02:47 PM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
This is also fixed in master already by one of Zheng's commits. We need to link to the commit in master where this is... Patrick Donnelly
02:41 PM Bug #22353 (In Progress): kclient: ceph_getattr() return zero st_dev for normal inode
Patrick Donnelly
09:26 AM Feature #19578 (Fix Under Review): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
https://github.com/ceph/ceph/pull/19574 Zheng Yan
12:10 AM Bug #22459 (Pending Backport): cephfs-journal-tool: tool would miss to report some invalid range
Patrick Donnelly
12:09 AM Bug #22458 (Pending Backport): cephfs: potential adjust failure in lru_expire
Patrick Donnelly

12/16/2017

12:01 AM Feature #22446: mds: ask idle client to trim more caps
No, it's not about recovery time; clients already trim their caches aggressively when the mds recovers.
Idle client holds so many...
Zheng Yan

12/15/2017

10:24 PM Bug #21853 (Fix Under Review): mds: mdsload debug too high
https://github.com/ceph/ceph/pull/19556 Patrick Donnelly
08:26 PM Feature #22446: mds: ask idle client to trim more caps
Ah, the problem is recovery of the MDS takes too long. (from follow-up posts to "[ceph-users] cephfs mds millions of ... Patrick Donnelly
08:16 PM Feature #22446: mds: ask idle client to trim more caps
What's the goal? Prevent the situation where the client has ~1M caps for an indefinite period like what we saw on the... Patrick Donnelly
01:42 AM Feature #22446 (Resolved): mds: ask idle client to trim more caps
We can add a decay counter to the client session, tracking the rate at which we add new caps to the client. Zheng Yan
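The decay-counter idea above can be sketched as a standalone model (Ceph's real DecayCounter in src/common differs in detail; this is a minimal illustration of the rate-tracking mechanism):

```python
import math

class DecayCounter:
    """Exponentially decaying counter: old hits lose weight over time, so
    the value approximates the recent rate of new-cap grants to a session."""
    def __init__(self, half_life, now=0.0):
        self.k = math.log(0.5) / half_life  # decay constant (negative)
        self.value = 0.0
        self.last = now

    def hit(self, now, amount=1.0):
        """Record new caps granted at time `now`."""
        self._decay(now)
        self.value += amount

    def get(self, now):
        """Current decayed value; an idle session's value drifts to zero."""
        self._decay(now)
        return self.value

    def _decay(self, now):
        self.value *= math.exp(self.k * (now - self.last))
        self.last = now

c = DecayCounter(half_life=10.0)
c.hit(now=0.0, amount=100.0)
# After one half-life with no new grants, the counter halves; a session
# whose counter stays near zero is idle and can be asked to trim caps.
```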
07:30 PM Bug #21393 (Pending Backport): MDSMonitor: inconsistent role/who usage in command help
Patrick Donnelly
07:30 PM Bug #22293 (Pending Backport): client may fail to trim as many caps as MDS asked for
Patrick Donnelly
07:30 PM Bug #22357 (Pending Backport): mds: read hang in multiple mds setup
Patrick Donnelly
07:29 PM Bug #21764 (Resolved): common/options.cc: Update descriptions and visibility levels for MDS/clien...
Patrick Donnelly
07:02 PM Bug #22460 (Resolved): mds: handle client session messages when mds is stopping
https://github.com/ceph/ceph/pull/19234 Patrick Donnelly
06:59 PM Bug #22459 (Resolved): cephfs-journal-tool: tool would miss to report some invalid range
https://github.com/ceph/ceph/pull/19421 Patrick Donnelly
06:58 PM Bug #22458 (Resolved): cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19277 Patrick Donnelly

12/13/2017

11:09 PM Backport #22392: luminous: mds: tell session ls returns vanila EINVAL when MDS is not active
https://github.com/ceph/ceph/pull/19505 Shinobu Kinjo
09:28 PM Bug #22436 (Resolved): qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo ...
Key bits:... Patrick Donnelly
04:41 PM Feature #22417 (Fix Under Review): support purge queue with cephfs-journal-tool
Patrick Donnelly
09:34 AM Feature #22417: support purge queue with cephfs-journal-tool
I have already submitted a pull request:
https://github.com/ceph/ceph/pull/19471
dongdong tao
09:33 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
Currently, luminous has introduced a new purge queue journal whose inode number is 0x500,
but cephfs-journal-tool does ...
dongdong tao
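For background, a CephFS journal is striped across RADOS objects named after its inode number. A hedged sketch of the naming (the format string is an assumption based on how MDS log objects appear in the metadata pool, e.g. 200.00000000 for rank 0's log):

```python
def journal_object_name(ino, object_no):
    """RADOS object name for one chunk of a CephFS journal: the journal
    inode in hex, a dot, then the object number as 8 hex digits."""
    return "{:x}.{:08x}".format(ino, object_no)

PURGE_QUEUE_INO = 0x500  # per the report; the rank-0 MDS log uses 0x200
first_object = journal_object_name(PURGE_QUEUE_INO, 0)  # "500.00000000"
```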
03:12 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
Zheng Yan
12:58 PM Backport #22407 (In Progress): luminous: client: implement delegation support in userland cephfs
Nathan Cutler

12/12/2017

01:31 PM Backport #22385 (In Progress): luminous: mds: mds should ignore export_pin for deleted directory
Nathan Cutler
08:43 AM Backport #22385 (Resolved): luminous: mds: mds should ignore export_pin for deleted directory
https://github.com/ceph/ceph/pull/19360 Nathan Cutler
11:25 AM Backport #22398 (In Progress): luminous: man: missing man page for mount.fuse.ceph
Nathan Cutler
08:44 AM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
https://github.com/ceph/ceph/pull/19449 Nathan Cutler
11:10 AM Bug #22374 (Fix Under Review): luminous: mds: SimpleLock::num_rdlock overloaded
John Spray
06:16 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
I just submitted a PR for this: https://github.com/ceph/ceph/pull/19442 Xuehan Xu
04:44 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
... Xuehan Xu
04:43 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
@ 6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile sync) on [inode 0x10004cec485... Xuehan Xu
04:42 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
Sorry, I misedited the log:
@ -6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile...
Xuehan Xu
04:41 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
Recently, when doing massive directory delete test, both active mds and standby mds aborted and can't be started agai... Xuehan Xu
08:45 AM Backport #22407 (Resolved): luminous: client: implement delegation support in userland cephfs
https://github.com/ceph/ceph/pull/19480 Nathan Cutler
08:43 AM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
Nathan Cutler
08:43 AM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
https://github.com/ceph/ceph/pull/21172 Nathan Cutler
08:42 AM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
https://github.com/ceph/ceph/pull/21173 Nathan Cutler
08:42 AM Backport #22382 (Rejected): jewel: client: give more descriptive error message for remount failures
Nathan Cutler
08:42 AM Backport #22381 (Rejected): luminous: client: give more descriptive error message for remount fai...
Nathan Cutler
08:42 AM Backport #22380 (Resolved): jewel: client reconnect gather race
https://github.com/ceph/ceph/pull/21163 Nathan Cutler
08:42 AM Backport #22379 (Resolved): luminous: client reconnect gather race
https://github.com/ceph/ceph/pull/19326 Nathan Cutler
08:42 AM Backport #22378 (Resolved): jewel: ceph-fuse: failure to remount in startup test does not handle ...
https://github.com/ceph/ceph/pull/21162 Nathan Cutler
02:49 AM Feature #22372 (Resolved): kclient: implement quota handling using new QuotaRealm
Patrick Donnelly
02:46 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
https://github.com/ceph/ceph/pull/18424 Patrick Donnelly
02:45 AM Feature #22370 (Resolved): cephfs: add kernel client quota support
Patrick Donnelly

12/11/2017

01:13 AM Bug #22360 (Fix Under Review): mds: crash during exiting
https://github.com/ceph/ceph/pull/19424 Zheng Yan
12:43 AM Bug #22360 (Resolved): mds: crash during exiting
2017-12-07 21:30:39.323495 7fe6b8830700 0 -- 192.168.100.49:6800/227508323 >> 192.168.100.59:0/2802955003 pipe(0x560... Zheng Yan

12/10/2017

07:16 PM Cleanup #22359 (New): mds: change MDSMap::in to a mds_rank_t which is the current size of the clu...
Now that we no longer allow deactivating arbitrary ranks, it makes sense to change the `std::set<mds_rank_t> MDSMap::... Patrick Donnelly
09:00 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Robert Sander wrote:
> Robert Sander wrote:
> > Zheng Yan wrote:
> > > I can't reproduce it on Fedora 26. please p...
Zheng Yan

12/09/2017

02:12 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Robert Sander wrote:
> Zheng Yan wrote:
> > I can't reproduce it on Fedora 26. please provide versions of kernel an...
Robert Sander
09:33 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Zheng Yan wrote:
> I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on t...
Robert Sander
08:39 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
The missing '+' sign is caused by the ls code... Zheng Yan
05:17 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
If the fuse-libs version is < 2.8, ceph-fuse can't get the supplementary groups of a user. Group ACLs only apply to users who p... Zheng Yan
02:03 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on the machine that ran ... Zheng Yan
07:06 AM Bug #22357 (Fix Under Review): mds: read hang in multiple mds setup
http://tracker.ceph.com/issues/22357 Zheng Yan
07:04 AM Bug #22357 (Resolved): mds: read hang in multiple mds setup
Zheng Yan

12/08/2017

07:33 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
The kernel client in Ubuntu 17.10 (4.13.0-17-generic) does not have this issue, but it does not show if ACLs are set ... Robert Sander
04:40 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Now with better formatting:
Running Ceph 12.2.2

Create Filesystem fresh on this version.
FUSE-mounted files...
Robert Sander
04:37 PM Bug #22353 (Resolved): kclient: ceph_getattr() return zero st_dev for normal inode
Running Ceph 12.2.2
Create Filesystem fresh on this version.
FUSE-mounted filesystem with client_acl_type=posix...
Robert Sander
07:20 AM Bug #22249: Need to restart MDS to release cephfs space
junming rao wrote:
> Zheng Yan wrote:
> > please try remounting all cephfs with ceph-fuse option --client_try_dentr...
Zheng Yan
03:18 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> please try remounting all cephfs with ceph-fuse option --client_try_dentry_invalidate=false.
>
...
junming rao
02:45 AM Bug #22249: Need to restart MDS to release cephfs space
Please try remounting all cephfs mounts with the ceph-fuse option --client_try_dentry_invalidate=false.
Besides, please creat...
Zheng Yan
02:26 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> It seems you have multiple clients mount cephfs. do you use kernel client or ceph-fuse? try execu...
junming rao

12/07/2017

03:09 AM Backport #22339: luminous: ceph-fuse: failure to remount in startup test does not handle client_d...
https://github.com/ceph/ceph/pull/19370 Shinobu Kinjo
03:01 AM Backport #22339 (Resolved): luminous: ceph-fuse: failure to remount in startup test does not hand...
https://github.com/ceph/ceph/pull/19370 Shinobu Kinjo
03:08 AM Bug #22338: mds: ceph mds stat json should use array output for info section
Ji You wrote:
> When use `ceph mds stat -f json-pretty` would get output as below:
>
> [...]
>
> The proper ou...
Ji You
02:58 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
When using `ceph mds stat -f json-pretty`, you would get output as below:... Ji You

12/06/2017

09:12 PM Feature #20610: MDSMonitor: add new command to shrink the cluster in an automated way
Hi Douglas, is this something that you're still planning on working on? If not, I'm willing to have a look at it. Jesse Williamson
02:51 PM Bug #22334 (New): client: throttle osd requests created by page-write
If we create lots of small files in cephfs, page writeback may create hundreds of thousands of OSD requests. These many... Zheng Yan
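One way to throttle is a simple cap on in-flight requests; a minimal sketch (illustrative only — the real fix would sit in the client's writeback path, and the names here are hypothetical):

```python
import threading

class WritebackThrottle:
    """Cap concurrent OSD write requests issued by page writeback.
    submit() blocks once max_in_flight requests are outstanding; the
    caller invokes complete() from each request's finish callback."""
    def __init__(self, max_in_flight):
        self._slots = threading.BoundedSemaphore(max_in_flight)

    def submit(self):
        self._slots.acquire()   # blocks when too many requests are in flight

    def complete(self):
        self._slots.release()   # a finished request frees a slot

throttle = WritebackThrottle(max_in_flight=128)
```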
08:51 AM Bug #22219: mds: mds should ignore export_pin for deleted directory
https://github.com/ceph/ceph/pull/19360 Zheng Yan
06:37 AM Bug #22219 (Pending Backport): mds: mds should ignore export_pin for deleted directory
Patrick Donnelly
06:37 AM Feature #22097 (Resolved): mds: change mds perf counters can statistics filesystem operations num...
Patrick Donnelly
06:36 AM Bug #22269 (Pending Backport): ceph-fuse: failure to remount in startup test does not handle clie...
Patrick Donnelly

12/05/2017

03:00 AM Bug #22263: client reconnect gather race
https://github.com/ceph/ceph/pull/19326 Zheng Yan
02:54 AM Bug #22249: Need to restart MDS to release cephfs space
It seems you have multiple clients mounting cephfs. Do you use the kernel client or ceph-fuse? Try executing "echo 3 >/proc/... Zheng Yan
12:33 AM Bug #22051: tests: Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY...
Please make sure this isn't a misconfigured run or a missing log whitelist; you can kick it to RADOS if not. :) Greg Farnum

12/04/2017

06:54 PM Feature #18490 (Pending Backport): client: implement delegation support in userland cephfs
Thanks for remembering to update this ticket Jeff. We need to backport this for Luminous as this is needed for 3.0.
...
Patrick Donnelly
02:52 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
Greg also brought up some good points that we should also mark the directory as damaged (especially in a persistent w... Patrick Donnelly
02:45 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
Consensus during scrub is that this can be resolved by adding an appropriate warning to scrub output that the directo... Patrick Donnelly

12/03/2017

10:55 AM Feature #18490 (Resolved): client: implement delegation support in userland cephfs
Patches merged into both ceph and ganesha for this. Jeff Layton

12/01/2017

10:03 PM Bug #22256 (Resolved): nfs-ganesha: crashes in free_delegrecall_context
This was fixed by commit f332c172a2884c04a0d4e743c8858ff3e7f957a1 in ganesha (and the associated ntirpc changes). Jeff Layton
09:39 PM Bug #22256 (In Progress): nfs-ganesha: crashes in free_delegrecall_context
Patrick Donnelly
09:50 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
Reducing priority since we can't seem to get this reproduced. Patrick Donnelly
02:49 AM Bug #22293: client may fail to trim as many caps as MDS asked for
kernel patch https://github.com/ceph/ceph-client/commit/4f9b2bc31681f41fe73ddbabc6e9b9fd047af126 Zheng Yan
02:44 AM Bug #22293 (Fix Under Review): client may fail to trim as many caps as MDS asked for
https://github.com/ceph/ceph/pull/19271 Zheng Yan
02:22 AM Bug #22293 (Resolved): client may fail to trim as many caps as MDS asked for
Client::trim_caps() can't trim an inode if it has null child dentries. If the config option client_cache_size is large, Clie... Zheng Yan
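A toy model of the reported behavior (all names are hypothetical; the real logic is in Client::trim_caps): inodes whose null child dentries are never cleaned up stay pinned, so the client trims fewer caps than the MDS asked for.

```python
# Toy model: an inode can only be trimmed once all of its child dentries,
# including "null" dentries (negative lookups), are gone. If null dentries
# are never trimmed, the parent inode's cap stays pinned.
def trim_caps(inodes, goal):
    trimmed = 0
    for ino in inodes:
        if trimmed >= goal:
            break
        if ino["child_dentries"]:  # null dentries still count as children
            continue               # cap stays pinned; skip this inode
        trimmed += 1
    return trimmed

inodes = [
    {"ino": 1, "child_dentries": ["null:a"]},  # pinned by a null dentry
    {"ino": 2, "child_dentries": []},
]
assert trim_caps(inodes, goal=2) == 1  # MDS asked for 2, client trims only 1
```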

11/30/2017

11:18 PM Bug #22292 (New): mds: scrub may mark repaired directory with lost dentries and not flush backtrace
Simple reproducer (with selected output):... Patrick Donnelly
07:15 PM Tasks #22291 (New): add metadata thrasher to qa suite
Add a tool to qa suite, possibly based on smallfile (https://github.com/bengland2/smallfile) to run filesystem operat... Douglas Fuller
06:56 PM Bug #22288 (Fix Under Review): mds: assert when inode moves during scrub
https://github.com/ceph/ceph/pull/19263 Douglas Fuller
04:03 PM Bug #22288 (In Progress): mds: assert when inode moves during scrub
Douglas Fuller
03:40 PM Bug #22288 (Resolved): mds: assert when inode moves during scrub
If an inode moves while on the scrub stack, it can be enqueued a second time and hit:
mds/CInode.cc: 4153: FAILED ...
Douglas Fuller
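One hedged sketch of avoiding the double-enqueue described above: track which inodes are already queued and skip re-adds instead of asserting (names are illustrative, not the actual CInode/ScrubStack API):

```python
class ScrubStack:
    """Toy scrub stack that refuses to enqueue an inode twice."""
    def __init__(self):
        self._stack = []
        self._queued = set()

    def enqueue(self, ino):
        if ino in self._queued:  # already on the stack (e.g. the inode moved)
            return False         # skip the re-add rather than hit an assert
        self._queued.add(ino)
        self._stack.append(ino)
        return True

    def pop(self):
        ino = self._stack.pop()
        self._queued.discard(ino)  # once scrubbed, it may be enqueued again
        return ino
```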
06:51 PM Bug #22221 (Pending Backport): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual:...
Patrick Donnelly
06:50 PM Bug #22254 (Pending Backport): client: give more descriptive error message for remount failures
Patrick Donnelly
06:49 PM Bug #22263 (Pending Backport): client reconnect gather race
Patrick Donnelly
09:58 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> still no clue in the log. Do you still have this issue after restarting mds
Hi Zheng yan:
...
junming rao
09:01 AM Bug #21539 (Pending Backport): man: missing man page for mount.fuse.ceph
Jos Collin
09:01 AM Bug #21991 (Pending Backport): mds: tell session ls returns vanila EINVAL when MDS is not active
Jos Collin

11/29/2017

02:25 PM Bug #22249: Need to restart MDS to release cephfs space
Still no clue in the log. Do you still have this issue after restarting the mds? Zheng Yan
01:59 PM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> can't find any clue from log. Next time it happens, please set debug_mds=10 and capture some log ...
junming rao

11/28/2017

11:05 PM Bug #22269 (Fix Under Review): ceph-fuse: failure to remount in startup test does not handle clie...
https://github.com/ceph/ceph/pull/19218 Patrick Donnelly
08:41 PM Bug #22269 (Resolved): ceph-fuse: failure to remount in startup test does not handle client_die_o...
https://github.com/ceph/ceph/blob/38f051c22af1def4a06427876ee2e5000046fd03/src/client/Client.cc#L10063-L10066
The ...
Patrick Donnelly
01:42 PM Bug #22249: Need to restart MDS to release cephfs space
Can't find any clue in the log. Next time it happens, please set debug_mds=10 and capture some logs before the mds restarts. Zheng Yan
03:08 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> It seems the log was generated by pre-luminous mds. which version of ceph do you use.
OS Ver...
junming rao
09:09 AM Bug #22263 (Fix Under Review): client reconnect gather race
https://github.com/ceph/ceph/pull/19207 Zheng Yan
09:06 AM Bug #22263 (Resolved): client reconnect gather race
#0 raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x0000555555ae8bde in reraise_fatal (signum...
Zheng Yan

11/27/2017

10:04 PM Bug #22249 (Need More Info): Need to restart MDS to release cephfs space
Patrick Donnelly
02:45 PM Bug #22249: Need to restart MDS to release cephfs space
It seems the log was generated by a pre-luminous mds. Which version of ceph do you use? Zheng Yan
07:52 AM Bug #22249 (Can't reproduce): Need to restart MDS to release cephfs space
I used 'ceph df' to show that the usage of the cluster was 238TB (2 copies); however, the result of using 'du -sh' into th... junming rao
09:21 PM Bug #22003 (Need More Info): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
Please retry with Ganesha -next. Patrick Donnelly
07:37 PM Bug #22256: nfs-ganesha: crashes in free_delegrecall_context
Here's my ganesha.conf as well. I bisected the change down to 46a5e8535f978b1e12dcb15cbdcbf6d5e757d24e (nfs_rpc_call)... Jeff Layton
07:34 PM Bug #22256 (Resolved): nfs-ganesha: crashes in free_delegrecall_context
I've been working on delegation support in cephfs for ganesha. The ceph pieces were recently merged, so I rebased my ... Jeff Layton
06:49 PM Bug #22254 (Fix Under Review): client: give more descriptive error message for remount failures
https://github.com/ceph/ceph/pull/19181 Patrick Donnelly
05:53 PM Bug #22254 (Resolved): client: give more descriptive error message for remount failures
During remount failures:
https://github.com/ceph/ceph/blob/54e51fd3c39a38e72ed989f862e6e21515f41d3b/src/client/Cli...
Patrick Donnelly
01:11 PM Bug #21539 (Fix Under Review): man: missing man page for mount.fuse.ceph
https://github.com/ceph/ceph/pull/19172 Jos Collin
02:25 AM Backport #22237 (In Progress): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for...
https://github.com/ceph/ceph/pull/19157 Prashant D

11/25/2017

12:32 AM Backport #22241: jewel: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/19141 Shinobu Kinjo
12:21 AM Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/19137 Shinobu Kinjo
12:16 AM Backport #22242: luminous: mds: limit size of subtree migration
https://github.com/ceph/ceph/pull/19136 Shinobu Kinjo

11/24/2017

09:57 PM Backport #22242 (Resolved): luminous: mds: limit size of subtree migration
https://github.com/ceph/ceph/pull/20339 Nathan Cutler
09:57 PM Backport #22241 (Resolved): jewel: Processes stuck waiting for write with ceph-fuse
Nathan Cutler
09:57 PM Backport #22240 (Resolved): luminous: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/20340 Nathan Cutler
09:56 PM Backport #22239 (Rejected): luminous: provide a way to look up snapshotted inodes by vinodeno_t
Nathan Cutler
09:56 PM Backport #22237 (Resolved): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_re...
https://github.com/ceph/ceph/pull/19157 Nathan Cutler
01:31 PM Bug #21539 (In Progress): man: missing man page for mount.fuse.ceph
Jos Collin

11/22/2017

09:52 PM Backport #22228: luminous: client: trim_caps may remove cap iterator points to
https://github.com/ceph/ceph/pull/19105 Patrick Donnelly
09:49 PM Backport #22228 (In Progress): luminous: client: trim_caps may remove cap iterator points to
Patrick Donnelly
09:48 PM Backport #22228 (Resolved): luminous: client: trim_caps may remove cap iterator points to
https://github.com/ceph/ceph/pull/19105 Patrick Donnelly
09:47 PM Bug #22157 (Pending Backport): client: trim_caps may remove cap iterator points to
Patrick Donnelly
09:47 PM Bug #22163 (Pending Backport): request that is "!mdr->is_replay() && mdr->is_queued_for_replay()"...
Patrick Donnelly
09:46 PM Bug #22058 (Resolved): mds: admin socket wait for scrub completion is racy
Patrick Donnelly
09:45 PM Feature #22105 (Pending Backport): provide a way to look up snapshotted inodes by vinodeno_t
Patrick Donnelly
09:44 PM Bug #22008 (Pending Backport): Processes stuck waiting for write with ceph-fuse
Patrick Donnelly
09:43 PM Bug #21892 (Pending Backport): limit size of subtree migration
Patrick Donnelly
05:36 AM Bug #22221 (Fix Under Review): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual:...
https://github.com/ceph/ceph/pull/19095 Patrick Donnelly
05:32 AM Bug #22221 (Resolved): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
... Patrick Donnelly
04:45 AM Bug #22219 (Fix Under Review): mds: mds should ignore export_pin for deleted directory
https://github.com/ceph/ceph/pull/19092 Zheng Yan
03:53 AM Bug #22219 (Resolved): mds: mds should ignore export_pin for deleted directory
Otherwise, a subtree dirfrag may prevent the stray inode from getting purged. Zheng Yan

11/21/2017

04:25 PM Bug #21991: mds: tell session ls returns vanila EINVAL when MDS is not active
Based on the latest findings, a new PR is created: https://github.com/ceph/ceph/pull/19078 Jos Collin
05:04 AM Bug #22157 (Fix Under Review): client: trim_caps may remove cap iterator points to
https://github.com/ceph/ceph/pull/19060 Patrick Donnelly
 
