Activity

From 12/04/2017 to 01/02/2018

01/02/2018

03:40 PM Bug #22547: active mds session miss for client
Zheng Yan wrote:
> dongdong tao wrote:
> > Zheng, if a client has been evicted by the MDS, the client should still thin...
dongdong tao
01:47 AM Bug #22547: active mds session miss for client
dongdong tao wrote:
> Zheng, if a client has been evicted by the MDS, the client should still think the connection is av...
Zheng Yan
03:17 PM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
Sage Weil
11:34 AM Backport #22552 (Fix Under Review): luminous: doc: epoch barrier mechanism not found
Jos Collin
10:57 AM Backport #22552: luminous: doc: epoch barrier mechanism not found
https://github.com/ceph/ceph/pull/19741 Jos Collin
10:43 AM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
Jos Collin
11:14 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt, later event jump into previous position.
Jos Collin wrote:
> I don't see anything in the URLs provided. Additionally, this looks like a Support Case.
can ...
Yong Wang
11:10 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt, later event jump into previous position.
wangyong wang wrote:
> Hi all.
> ==============================
> version: jewel 10.2.10 (professional rpms)
> no...
Yong Wang

01/01/2018

11:56 AM Bug #22547 (Need More Info): active mds session miss for client
Jos Collin
06:47 AM Bug #22542 (Pending Backport): doc: epoch barrier mechanism not found
Jos Collin

12/29/2017

04:17 PM Feature #22545: add dump inode command to mds
I just noticed it's almost the same as issue 11172. dongdong tao
03:35 PM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
Sage Weil
01:40 AM Bug #22551 (Need More Info): client: should flush dirty caps in background
Dirty data already has a background thread to do the flush, so we may need to flush dirty caps in the background too. dongdong tao
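A minimal sketch of the idea (hypothetical, not Ceph's actual Client code; the DirtyCapFlusher name and flush hook are assumptions): a dedicated thread wakes periodically and flushes dirty caps, mirroring the background thread that already flushes dirty data.

// Hypothetical illustration of the suggestion above: not Ceph's actual
// Client code. A background thread periodically flushes dirty caps, the
// same way dirty page data already gets flushed in the background.
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

class DirtyCapFlusher {
public:
  explicit DirtyCapFlusher(std::chrono::seconds interval)
      : interval_(interval), worker_([this] { run(); }) {}

  ~DirtyCapFlusher() {
    {
      std::lock_guard<std::mutex> l(lock_);
      stopping_ = true;
    }
    cond_.notify_one();
    worker_.join();
  }

private:
  void run() {
    std::unique_lock<std::mutex> l(lock_);
    while (!stopping_) {
      // Wake up every interval_, or immediately when stopping.
      cond_.wait_for(l, interval_, [this] { return stopping_; });
      if (stopping_)
        break;
      flush_dirty_caps();
    }
  }

  void flush_dirty_caps() {
    // Placeholder: the real client would walk inodes with dirty caps here
    // and send cap-flush messages to the MDS.
  }

  std::chrono::seconds interval_;
  std::mutex lock_;
  std::condition_variable cond_;
  bool stopping_ = false;
  std::thread worker_;
};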

12/28/2017

03:29 PM Bug #22550 (New): mds: FAILED assert(probe->known_size[p->oid] <= shouldbe) when mds start

I stopped the MDS while copying files to the cluster; when I tried to start the MDS later, I encountered a failed assertion.
...
jianxiong shao
02:05 PM Bug #22548: mds: crash during recovery
Just once.
It took a rather long time during recovery and then crashed. There are about 10M files in the file syst...
wei jin
01:46 PM Bug #22548: mds: crash during recovery
This probably can be fixed by ... How many times have you encountered this issue... Zheng Yan
07:15 AM Bug #22548: mds: crash during recovery
Zheng Yan wrote:
> Which line triggers the assertion?
Hi, Yan
this line:
0> 2017-12-27 23:27:05.892112 7f0...
wei jin
07:04 AM Bug #22548: mds: crash during recovery
Which line triggers the assertion? Zheng Yan
04:42 AM Bug #22548 (Need More Info): mds: crash during recovery
2017-12-27 23:27:05.919710 7f08483d0700 -1 *** Caught signal (Aborted) **
in thread 7f08483d0700 thread_name:ms_dis...
wei jin
12:53 PM Bug #22547: active mds session miss for client
By saying evicted, I mean due to the auto_close_timeout. dongdong tao
12:50 PM Bug #22547: active mds session miss for client
Zheng, if a client has been evicted by the MDS, the client should still think the connection is available,
and when that...
dongdong tao
10:25 AM Bug #22547: active mds session miss for client
wei jin wrote:
> Ok. I will do it soon.
>
I cannot reproduce it after enabling the log, and it will have an impact ...
wei jin
07:21 AM Bug #22547: active mds session miss for client
Ok. I will do it soon.
This happened after I restarted the mds daemon last night. And there is also another crash (bug ...
wei jin
07:10 AM Bug #22547: active mds session miss for client
Please set debug_mds=10 and check why the MDS evicted the client. It's likely that the docker host went to sleep or there was... Zheng Yan
04:34 AM Bug #22547 (Need More Info): active mds session miss for client
Our use case: k8s docker mounts cephfs using the cephfs kernel client.
If we do not use the 'mounted dir', after a wh...
wei jin
06:58 AM Feature #21156: mds: speed up recovery with many open inodes
Thanks, that can explain the scenario we have met;
sometimes my standby-replay mds spends too much time in the rejoin stat...
dongdong tao
06:56 AM Feature #21156: mds: speed up recovery with many open inodes
Besides, when there are lots of open inodes, it's not efficient to journal all of them in each log segment. Zheng Yan
02:46 AM Feature #21156: mds: speed up recovery with many open inodes
The MDS needs to open all inodes with client caps during recovery. Some of these inodes may not be in the journal. Zheng Yan
02:00 AM Feature #21156: mds: speed up recovery with many open inodes
Hi Zheng,
I'm not sure if I understand this correctly. Do you mean the MDS cannot recover the opening inode jus...
dongdong tao

12/27/2017

04:32 PM Bug #22546: client: dirty caps may never get the chance to flush
https://github.com/ceph/ceph/pull/19703
dongdong tao
04:05 PM Bug #22546 (Resolved): client: dirty caps may never get the chance to flush
Currently, we flush the caps in the function Client::flush_caps_sync,
but there is a bug in this function,
because the ...
dongdong tao
03:54 PM Feature #22545: add dump inode command to mds
pull request:
https://github.com/ceph/ceph/pull/19677
dongdong tao
03:53 PM Feature #22545 (Duplicate): add dump inode command to mds
1. When the MDS cache is really big, it's hard to dump all the cache.
2. Most of the time, we only want to know a speci...
dongdong tao
10:58 AM Bug #22542 (Fix Under Review): doc: epoch barrier mechanism not found
Jos Collin
10:21 AM Bug #22542: doc: epoch barrier mechanism not found
https://github.com/ceph/ceph/pull/19701 Jos Collin

12/26/2017

10:17 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
[[http://docs.ceph.com/docs/master/cephfs/full/]] says "For more on the epoch barrier mechanism, see Ceph filesystem ... Jos Collin

12/25/2017

03:57 AM Bug #22536: client: _rmdir() uses a deleted memory structure (Dentry), leading to a core
fixed by https://github.com/ceph/ceph/pull/19672 Ivan Guan
03:45 AM Bug #22536 (Resolved): client: _rmdir() uses a deleted memory structure (Dentry), leading to a core
Version: ceph-10.2.2
Bug description:
"::rmdir()" acquires the Dentry structure "by get_or_create(dir, name, &de...
Ivan Guan

12/22/2017

11:47 AM Bug #22523 (Need More Info): Jewel10.2.10 cephfs journal corrupt, later event jump into previous ...
I don't see anything in the URLs provided. Additionally, this looks like a Support Case. Jos Collin
09:16 AM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
We don't need to backport this fix to luminous. The commit that introduced
this bug, https://github.com/ceph/ceph/co...
Ramana Raja
04:53 AM Bug #22524 (Pending Backport): NameError: global name 'get_mds_map' is not defined
Patrick Donnelly
04:55 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
Patrick Donnelly
04:55 AM Bug #21853 (Pending Backport): mds: mdsload debug too high
Patrick Donnelly
04:55 AM Feature #19578 (Pending Backport): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
Patrick Donnelly
04:54 AM Bug #22492 (Pending Backport): Locker::calc_new_max_size does not take layout.stripe_count into a...
Patrick Donnelly
04:49 AM Backport #22503 (In Progress): luminous: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19646 Prashant D
02:37 AM Backport #22503: luminous: mds: read hang in multiple mds setup
I'm on it. Prashant D
12:28 AM Bug #22357: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19414 Patrick Donnelly

12/21/2017

11:45 PM Bug #22487: mds: setattr blocked when metadata pool is full
Right, the full test should have no problem. Zheng Yan
10:27 PM Bug #22487: mds: setattr blocked when metadata pool is full
Presumably that would be because with the vstart config the MDS writes cannot actually be written whereas with the te... Patrick Donnelly
02:35 PM Bug #22487: mds: setattr blocked when metadata pool is full
I reproduced this locally.
It was caused by a stuck log flush ...
Zheng Yan
10:38 PM Bug #22526 (Pending Backport): AttributeError: 'LocalFilesystem' object has no attribute 'ec_prof...
Fixed by: https://github.com/ceph/ceph/pull/19533 Patrick Donnelly
02:19 PM Bug #22526 (Resolved): AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
Hit an error while running a ceph_volume_client test on a vstart Ceph cluster using the command
LD_LIBRARY_PATH=`pwd...
Ramana Raja
04:13 PM Bug #22357: mds: read hang in multiple mds setup
I don't see any merged PR. Shinobu Kinjo
01:04 PM Bug #22524 (Fix Under Review): NameError: global name 'get_mds_map' is not defined
https://github.com/ceph/ceph/pull/19633 Ramana Raja
12:48 PM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
Hit an error while running test_volume_client on a dev vstart cluster (Ceph master) using
# LD_LIBRARY_PATH=`pwd`/lib...
Ramana Raja
12:05 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
type: fs
version: 10.2.10
Yong Wang
11:59 AM Bug #22523 (Closed): Jewel10.2.10 cephfs journal corrupt, later event jump into previous position.
Hi all.
==============================
version: jewel 10.2.10 (professional rpms)
nodes : 3 centos7.3
cephfs : k...
Yong Wang
10:08 AM Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 's...
https://github.com/ceph/ceph/pull/19628 Shinobu Kinjo
10:05 AM Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19627 Shinobu Kinjo
10:03 AM Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
https://github.com/ceph/ceph/pull/19626 Shinobu Kinjo
08:36 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
Nathan Cutler

12/20/2017

06:41 PM Backport #22493 (In Progress): luminous: mds: crash during exiting
Nathan Cutler
02:47 AM Backport #22493 (Resolved): luminous: mds: crash during exiting
https://github.com/ceph/ceph/pull/19610 Zheng Yan
11:56 AM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
Nathan Cutler
11:54 AM Backport #22505 (Rejected): jewel: client may fail to trim as many caps as MDS asked for
Nathan Cutler
11:54 AM Backport #22504 (Resolved): luminous: client may fail to trim as many caps as MDS asked for
https://github.com/ceph/ceph/pull/24119 Nathan Cutler
11:54 AM Backport #22503 (Resolved): luminous: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19646 Nathan Cutler
11:54 AM Backport #22501 (Resolved): luminous: qa: CommandFailedError: Command failed on smithi135 with st...
https://github.com/ceph/ceph/pull/19628 Nathan Cutler
11:54 AM Backport #22500 (Resolved): luminous: cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19627
Nathan Cutler
11:54 AM Backport #22499 (Resolved): luminous: cephfs-journal-tool: tool would miss to report some invalid...
https://github.com/ceph/ceph/pull/19626 Nathan Cutler
11:39 AM Backport #21947 (In Progress): luminous: mds: preserve order of requests during recovery of multi...
Nathan Cutler
03:41 AM Backport #22494 (Fix Under Review): jewel: unsigned integer overflow in file_layout_t::get_period
https://github.com/ceph/ceph/pull/19611 Zheng Yan
03:35 AM Backport #22494 (Resolved): jewel: unsigned integer overflow in file_layout_t::get_period
A customer encountered this
https://bugzilla.redhat.com/show_bug.cgi?id=1527548
Zheng Yan
02:09 AM Bug #22492 (Fix Under Review): Locker::calc_new_max_size does not take layout.stripe_count into a...
https://github.com/ceph/ceph/pull/19609 Zheng Yan
02:07 AM Bug #22492 (Resolved): Locker::calc_new_max_size does not take layout.stripe_count into account
If layout.stripe_count is N, size_increment is actually 'N * mds_client_writeable_range_max_inc_objs' objects. Zheng Yan
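To make the over-grant concrete, a worked example (the 4 MiB object size, stripe count of 4, and 1024-object increment are illustrative assumptions, not values from this report):

// Hypothetical arithmetic illustrating the report above.
// The increment is meant to be max_inc_objs objects, but each stripe unit
// lands on a different object, so stripe_count N multiplies the range.
#include <cstdint>
#include <cstdio>

int main() {
  const uint64_t object_size  = 4ull << 20;  // layout.object_size: 4 MiB
  const uint64_t stripe_count = 4;           // layout.stripe_count: N
  const uint64_t max_inc_objs = 1024;        // mds_client_writeable_range_max_inc_objs

  uint64_t intended = max_inc_objs * object_size;                // 4 GiB
  uint64_t actual   = stripe_count * max_inc_objs * object_size; // 16 GiB, N times too large

  std::printf("intended: %llu MiB, actual: %llu MiB\n",
              (unsigned long long)(intended >> 20),
              (unsigned long long)(actual >> 20));
  return 0;
}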
01:09 AM Bug #22360 (Pending Backport): mds: crash during exiting
Patrick Donnelly
12:35 AM Backport #22490 (In Progress): luminous: mds: handle client session messages when mds is stopping
Patrick Donnelly
12:35 AM Backport #22490 (Resolved): luminous: mds: handle client session messages when mds is stopping
https://github.com/ceph/ceph/pull/19585 Patrick Donnelly

12/19/2017

11:45 PM Feature #22446: mds: ask idle client to trim more caps
Zheng Yan wrote:
> Idle client holds so many caps is wasteful, because it increase the chance that mds trim other re...
Patrick Donnelly
09:55 PM Bug #22488 (New): mds: unlink blocks on large file when metadata pool is full
With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
Patrick Donnelly
09:53 PM Bug #22487 (Rejected): mds: setattr blocked when metadata pool is full
With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
Patrick Donnelly
07:25 PM Bug #22483 (In Progress): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obso...
Patrick Donnelly
07:12 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
https://github.com/ceph/ceph/blob/06b7707cee87a54517630def0ad274340325a677/src/mds/Server.cc#L1742
since: b4ca5ae4...
Patrick Donnelly
06:56 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
... Patrick Donnelly
05:00 PM Fix #15064 (Closed): multifs: tweak text on "flag set enable multiple"
Okay, thanks for the info John. I'll just close this then. Patrick Donnelly
03:36 PM Fix #15064: multifs: tweak text on "flag set enable multiple"
I think there was an idea that the message ought to be scarier, but I'm not sure we need that at this point. John Spray
03:05 PM Fix #15064 (Need More Info): multifs: tweak text on "flag set enable multiple"
Not sure what the problem is here. Patrick Donnelly
03:13 PM Tasks #22479 (Closed): multifs: review testing coverage
Patrick Donnelly
03:11 PM Feature #22478 (Rejected): multifs: support snapshots for shared data pool
If two file systems use the same data pool with different RADOS namespaces, it is necessary for them to cooperate on ... Patrick Donnelly
03:08 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
Once all sub-tasks are complete. Patrick Donnelly
09:26 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
Flagging for luminous backport because b4ca5ae (which presumably introduced the test failure this is fixing (?)) was ... Nathan Cutler
05:51 AM Bug #22475 (Fix Under Review): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
Patrick Donnelly
05:51 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
https://github.com/ceph/ceph/pull/19588 Patrick Donnelly
05:25 AM Bug #22475 (Resolved): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull...
... Patrick Donnelly
05:53 AM Bug #22436 (Pending Backport): qa: CommandFailedError: Command failed on smithi135 with status 22...
https://github.com/ceph/ceph/pull/19534 Patrick Donnelly
03:09 AM Bug #22460: mds: handle client session messages when mds is stopping
luminous PR: https://github.com/ceph/ceph/pull/19585 Zheng Yan
02:01 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
https://github.com/ceph/ceph-client/commit/a2a44b35146e6ccf099e4326bc1a7e2cdaf02f65 Zheng Yan

12/18/2017

02:53 PM Bug #22460 (Pending Backport): mds: handle client session messages when mds is stopping
Patrick Donnelly
02:49 PM Bug #22428: mds: don't report slow request for blocked filelock request
Perhaps reclassify the slow requests blocked by locks as "clients not releasing file locks" or similar to be differen... Patrick Donnelly
02:47 PM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
This is also fixed in master already by one of Zheng's commits. We need to link to the commit in master where this is... Patrick Donnelly
02:41 PM Bug #22353 (In Progress): kclient: ceph_getattr() return zero st_dev for normal inode
Patrick Donnelly
09:26 AM Feature #19578 (Fix Under Review): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
https://github.com/ceph/ceph/pull/19574 Zheng Yan
12:10 AM Bug #22459 (Pending Backport): cephfs-journal-tool: tool would miss to report some invalid range
Patrick Donnelly
12:09 AM Bug #22458 (Pending Backport): cephfs: potential adjust failure in lru_expire
Patrick Donnelly

12/16/2017

12:01 AM Feature #22446: mds: ask idle client to trim more caps
No, it's not about recovery time; clients already trim their cache aggressively when the MDS recovers.
Idle client holds so many...
Zheng Yan

12/15/2017

10:24 PM Bug #21853 (Fix Under Review): mds: mdsload debug too high
https://github.com/ceph/ceph/pull/19556 Patrick Donnelly
08:26 PM Feature #22446: mds: ask idle client to trim more caps
Ah, the problem is recovery of the MDS takes too long. (from follow-up posts to "[ceph-users] cephfs mds millions of ... Patrick Donnelly
08:16 PM Feature #22446: mds: ask idle client to trim more caps
What's the goal? Prevent the situation where the client has ~1M caps for an indefinite period like what we saw on the... Patrick Donnelly
01:42 AM Feature #22446 (Resolved): mds: ask idle client to trim more caps
We can add a decay counter to the client session, tracking the rate at which we add new caps to the client. Zheng Yan
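A minimal sketch of such a decay counter (illustrative only; Ceph has its own DecayCounter class, and the half-life parameter here is an assumption):

// Illustrative exponential-decay counter for tracking the rate at which
// new caps are issued to a session. A sketch, not Ceph's DecayCounter.
#include <chrono>
#include <cmath>

class CapIssueRate {
public:
  explicit CapIssueRate(double half_life_secs)
      : k_(std::log(2.0) / half_life_secs), last_(Clock::now()) {}

  // Record n newly issued caps.
  void hit(double n = 1.0) {
    decay();
    value_ += n;
  }

  // Current decayed count: near zero for an idle session, so the MDS
  // could ask such a client to trim more caps.
  double get() {
    decay();
    return value_;
  }

private:
  using Clock = std::chrono::steady_clock;

  void decay() {
    auto now = Clock::now();
    double dt = std::chrono::duration<double>(now - last_).count();
    value_ *= std::exp(-k_ * dt);
    last_ = now;
  }

  double k_;
  double value_ = 0.0;
  Clock::time_point last_;
};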
07:30 PM Bug #21393 (Pending Backport): MDSMonitor: inconsistent role/who usage in command help
Patrick Donnelly
07:30 PM Bug #22293 (Pending Backport): client may fail to trim as many caps as MDS asked for
Patrick Donnelly
07:30 PM Bug #22357 (Pending Backport): mds: read hang in multiple mds setup
Patrick Donnelly
07:29 PM Bug #21764 (Resolved): common/options.cc: Update descriptions and visibility levels for MDS/clien...
Patrick Donnelly
07:02 PM Bug #22460 (Resolved): mds: handle client session messages when mds is stopping
https://github.com/ceph/ceph/pull/19234 Patrick Donnelly
06:59 PM Bug #22459 (Resolved): cephfs-journal-tool: tool would miss to report some invalid range
https://github.com/ceph/ceph/pull/19421 Patrick Donnelly
06:58 PM Bug #22458 (Resolved): cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19277 Patrick Donnelly

12/13/2017

11:09 PM Backport #22392: luminous: mds: tell session ls returns vanila EINVAL when MDS is not active
https://github.com/ceph/ceph/pull/19505 Shinobu Kinjo
09:28 PM Bug #22436 (Resolved): qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo ...
Key bits:... Patrick Donnelly
04:41 PM Feature #22417 (Fix Under Review): support purge queue with cephfs-journal-tool
Patrick Donnelly
09:34 AM Feature #22417: support purge queue with cephfs-journal-tool
I have already submitted a pull request:
https://github.com/ceph/ceph/pull/19471
dongdong tao
09:33 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
Luminous has introduced a new purge queue journal whose inode number is 0x500,
but cephfs-journal-tool does ...
dongdong tao
03:12 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
Zheng Yan
12:58 PM Backport #22407 (In Progress): luminous: client: implement delegation support in userland cephfs
Nathan Cutler

12/12/2017

01:31 PM Backport #22385 (In Progress): luminous: mds: mds should ignore export_pin for deleted directory
Nathan Cutler
08:43 AM Backport #22385 (Resolved): luminous: mds: mds should ignore export_pin for deleted directory
https://github.com/ceph/ceph/pull/19360 Nathan Cutler
11:25 AM Backport #22398 (In Progress): luminous: man: missing man page for mount.fuse.ceph
Nathan Cutler
08:44 AM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
https://github.com/ceph/ceph/pull/19449 Nathan Cutler
11:10 AM Bug #22374 (Fix Under Review): luminous: mds: SimpleLock::num_rdlock overloaded
John Spray
06:16 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
I just submitted a PR for this: https://github.com/ceph/ceph/pull/19442 Xuehan Xu
04:44 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
... Xuehan Xu
04:43 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
@ 6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile sync) on [inode 0x10004cec485... Xuehan Xu
04:42 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
Sorry, I misedited the log:
@ -6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile...
Xuehan Xu
04:41 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
Recently, when doing a massive directory delete test, both the active mds and the standby mds aborted and couldn't be started agai... Xuehan Xu
08:45 AM Backport #22407 (Resolved): luminous: client: implement delegation support in userland cephfs
https://github.com/ceph/ceph/pull/19480 Nathan Cutler
08:43 AM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
Nathan Cutler
08:43 AM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
https://github.com/ceph/ceph/pull/21172 Nathan Cutler
08:42 AM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
https://github.com/ceph/ceph/pull/21173 Nathan Cutler
08:42 AM Backport #22382 (Rejected): jewel: client: give more descriptive error message for remount failures
Nathan Cutler
08:42 AM Backport #22381 (Rejected): luminous: client: give more descriptive error message for remount fai...
Nathan Cutler
08:42 AM Backport #22380 (Resolved): jewel: client reconnect gather race
https://github.com/ceph/ceph/pull/21163 Nathan Cutler
08:42 AM Backport #22379 (Resolved): luminous: client reconnect gather race
https://github.com/ceph/ceph/pull/19326 Nathan Cutler
08:42 AM Backport #22378 (Resolved): jewel: ceph-fuse: failure to remount in startup test does not handle ...
https://github.com/ceph/ceph/pull/21162 Nathan Cutler
02:49 AM Feature #22372 (Resolved): kclient: implement quota handling using new QuotaRealm
Patrick Donnelly
02:46 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
https://github.com/ceph/ceph/pull/18424 Patrick Donnelly
02:45 AM Feature #22370 (Resolved): cephfs: add kernel client quota support
Patrick Donnelly

12/11/2017

01:13 AM Bug #22360 (Fix Under Review): mds: crash during exiting
https://github.com/ceph/ceph/pull/19424 Zheng Yan
12:43 AM Bug #22360 (Resolved): mds: crash during exiting
2017-12-07 21:30:39.323495 7fe6b8830700 0 -- 192.168.100.49:6800/227508323 >> 192.168.100.59:0/2802955003 pipe(0x560... Zheng Yan

12/10/2017

07:16 PM Cleanup #22359 (New): mds: change MDSMap::in to a mds_rank_t which is the current size of the clu...
Now that we no longer allow deactivating arbitrary ranks, it makes sense to change the `std::set<mds_rank_t> MDSMap::... Patrick Donnelly
09:00 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Robert Sander wrote:
> Robert Sander wrote:
> > Zheng Yan wrote:
> > > I can't reproduce it on Fedora 26. please p...
Zheng Yan

12/09/2017

02:12 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Robert Sander wrote:
> Zheng Yan wrote:
> > I can't reproduce it on Fedora 26. please provide versions of kernel an...
Robert Sander
09:33 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Zheng Yan wrote:
> I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on t...
Robert Sander
08:39 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
The missing '+' sign is caused by the ls code... Zheng Yan
05:17 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
If the fuse-libs version is < 2.8, ceph-fuse can't get the supplementary groups of a user. Group ACLs only apply to users who p... Zheng Yan
02:03 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on the machine that ran ... Zheng Yan
07:06 AM Bug #22357 (Fix Under Review): mds: read hang in multiple mds setup
http://tracker.ceph.com/issues/22357 Zheng Yan
07:04 AM Bug #22357 (Resolved): mds: read hang in multiple mds setup
Zheng Yan

12/08/2017

07:33 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
The kernel client in Ubuntu 17.10 (4.13.0-17-generic) does not have this issue, but it does not show if ACLs are set ... Robert Sander
04:40 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
Now with better formatting:
Running Ceph 12.2.2

Create Filesystem fresh on this version.
FUSE-mounted files...
Robert Sander
04:37 PM Bug #22353 (Resolved): kclient: ceph_getattr() return zero st_dev for normal inode
Running Ceph 12.2.2
Create Filesystem fresh on this version.
FUSE-mounted filesystem with client_acl_type=posix...
Robert Sander
07:20 AM Bug #22249: Need to restart MDS to release cephfs space
junming rao wrote:
> Zheng Yan wrote:
> > Please try remounting all cephfs mounts with the ceph-fuse option --client_try_dentr...
Zheng Yan
03:18 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> Please try remounting all cephfs mounts with the ceph-fuse option --client_try_dentry_invalidate=false.
>
...
junming rao
02:45 AM Bug #22249: Need to restart MDS to release cephfs space
Please try remounting all cephfs mounts with the ceph-fuse option --client_try_dentry_invalidate=false.
Besides, please creat...
Zheng Yan
02:26 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> It seems you have multiple clients mounting cephfs. Do you use the kernel client or ceph-fuse? Try execu...
junming rao

12/07/2017

03:09 AM Backport #22339: luminous: ceph-fuse: failure to remount in startup test does not handle client_d...
https://github.com/ceph/ceph/pull/19370 Shinobu Kinjo
03:01 AM Backport #22339 (Resolved): luminous: ceph-fuse: failure to remount in startup test does not hand...
https://github.com/ceph/ceph/pull/19370 Shinobu Kinjo
03:08 AM Bug #22338: mds: ceph mds stat json should use array output for info section
Ji You wrote:
> When using `ceph mds stat -f json-pretty`, you get output as below:
>
> [...]
>
> The proper ou...
Ji You
02:58 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
When using `ceph mds stat -f json-pretty`, you get output as below:... Ji You

12/06/2017

09:12 PM Feature #20610: MDSMonitor: add new command to shrink the cluster in an automated way
Hi Douglas, is this something that you're still planning on working on? If not, I'm willing to have a look at it. Jesse Williamson
02:51 PM Bug #22334 (New): client: throttle osd requests created by page-write
If we create lots of small files in cephfs, page writeback may create hundreds of thousands of OSD requests. These many... Zheng Yan
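The kind of throttle being suggested, as a minimal sketch (assumed semantics, not the actual client code): writeback must take a slot before issuing an OSD request, so at most a fixed number of page-writeback requests are in flight at once.

// Sketch of the suggested throttle; hypothetical, not the actual client
// code. Writeback acquires a slot before issuing an OSD request and
// releases it on completion, bounding in-flight requests.
#include <condition_variable>
#include <mutex>

class WritebackThrottle {
public:
  explicit WritebackThrottle(unsigned max_inflight)
      : available_(max_inflight) {}

  // Block until an in-flight slot is free, then take it.
  void get() {
    std::unique_lock<std::mutex> l(lock_);
    cond_.wait(l, [this] { return available_ > 0; });
    --available_;
  }

  // Release a slot when the OSD request completes.
  void put() {
    {
      std::lock_guard<std::mutex> l(lock_);
      ++available_;
    }
    cond_.notify_one();
  }

private:
  std::mutex lock_;
  std::condition_variable cond_;
  unsigned available_;
};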
08:51 AM Bug #22219: mds: mds should ignore export_pin for deleted directory
https://github.com/ceph/ceph/pull/19360 Zheng Yan
06:37 AM Bug #22219 (Pending Backport): mds: mds should ignore export_pin for deleted directory
Patrick Donnelly
06:37 AM Feature #22097 (Resolved): mds: change mds perf counters can statistics filesystem operations num...
Patrick Donnelly
06:36 AM Bug #22269 (Pending Backport): ceph-fuse: failure to remount in startup test does not handle clie...
Patrick Donnelly

12/05/2017

03:00 AM Bug #22263: client reconnect gather race
https://github.com/ceph/ceph/pull/19326 Zheng Yan
02:54 AM Bug #22249: Need to restart MDS to release cephfs space
It seems you have multiple clients mounting cephfs. Do you use the kernel client or ceph-fuse? Try executing "echo 3 >/proc/... Zheng Yan
12:33 AM Bug #22051: tests: Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY...
Please make sure this isn't a misconfigured run or a missing log whitelist; you can kick it to RADOS if not. :) Greg Farnum

12/04/2017

06:54 PM Feature #18490 (Pending Backport): client: implement delegation support in userland cephfs
Thanks for remembering to update this ticket, Jeff. We need to backport this for Luminous as this is needed for 3.0.
...
Patrick Donnelly
02:52 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
Greg also brought up some good points that we should also mark the directory as damaged (especially in a persistent w... Patrick Donnelly
02:45 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
Consensus during scrub is that this can be resolved by adding an appropriate warning to scrub output that the directo... Patrick Donnelly