Activity

From 12/12/2017 to 01/10/2018

01/10/2018

11:24 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
https://github.com/ceph/ceph/pull/19907 Prashant D
10:44 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
I'm on it. Prashant D
04:41 PM Bug #22631 (Fix Under Review): mds: crashes because of old pool id in journal header
Jos Collin
03:41 PM Backport #22076 (In Progress): luminous: 'ceph tell mds' commands result in 'File exists' errors ...
Nathan Cutler
03:17 PM Backport #22076 (Fix Under Review): luminous: 'ceph tell mds' commands result in 'File exists' er...
Jos Collin
02:45 PM Bug #22652: client: fails to release to revoking Fc
Sage Weil
01:29 PM Bug #22652: client: fails to release to revoking Fc
I reproduced it locally. It seems like a kernel issue. The issue happens only when fuse_use_invalidate_cb is true. Zheng Yan
11:02 AM Bug #22652 (Resolved): client: fails to release to revoking Fc
http://pulpito.ceph.com/pdonnell-2018-01-09_21:14:38-multimds-wip-pdonnell-testing-20180109.193634-testing-basic-smit... Zheng Yan
05:54 AM Bug #22647 (Fix Under Review): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19891 Zheng Yan
02:34 AM Bug #22647 (Resolved): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
... Zheng Yan
01:08 AM Bug #22629 (Fix Under Review): client: avoid recursive lock in ll_get_vino
Patrick Donnelly
01:05 AM Bug #22562 (Pending Backport): mds: fix dump last_sent
Patrick Donnelly
01:05 AM Bug #22546 (Pending Backport): client: dirty caps may never get the chance to flush
Patrick Donnelly
01:04 AM Bug #22536 (Pending Backport): client: _rmdir() uses a deleted memory structure (Dentry), leading to a ...
Patrick Donnelly
12:44 AM Bug #22646: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
https://github.com/ceph/ceph/pull/19885 Patrick Donnelly
12:40 AM Bug #22646 (Resolved): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
This causes startup to fail for EC pool configurations.
(This was included in my fix for #22627 but I'm breaking i...
Patrick Donnelly

01/09/2018

04:03 PM Bug #22631: mds: crashes because of old pool id in journal header
https://github.com/ceph/ceph/pull/19860 dongdong tao
08:38 AM Bug #22631: mds: crashes because of old pool id in journal header
Going through the code, we found it is because of the old pool id in the journal header.
My solution is to
add "set pool_id"...
dongdong tao
08:35 AM Bug #22631 (Resolved): mds: crashes because of old pool id in journal header
We used the rados cppool command to copy the cephfs metadata pool,
but after the copy was done, the mds would keep crashing when ...
dongdong tao
02:53 PM Backport #21948 (In Progress): luminous: MDSMonitor: mons should reject misconfigured mds_blackli...
Nathan Cutler
02:43 PM Backport #21874 (In Progress): luminous: qa: libcephfs_interface_tests: shutdown race failures
Nathan Cutler
02:43 PM Backport #21870 (In Progress): luminous: Assertion in EImportStart::replay should be a damaged()
Nathan Cutler
01:02 PM Feature #22545 (Fix Under Review): add dump inode command to mds
Nathan Cutler
09:18 AM Backport #22630 (Resolved): doc: misc fixes for CephFS best practices
This is a backport of: https://github.com/ceph/ceph/pull/19791 Jos Collin
08:13 AM Backport #22630 (Resolved): doc: misc fixes for CephFS best practices
https://github.com/ceph/ceph/pull/19858 Jos Collin
07:47 AM Bug #22629: client: avoid recursive lock in ll_get_vino
https://github.com/ceph/ceph/pull/19837 dongdong tao
07:47 AM Bug #22629 (Resolved): client: avoid recursive lock in ll_get_vino
ll_get_vino locks the client_lock,
so the caller must not already hold it.
dongdong tao
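
A minimal sketch of the locking pattern behind this fix, assuming a non-recursive mutex like Client::client_lock (the class and helper names here are illustrative, not the actual Client code): the public entry point takes the lock itself, so a caller already holding client_lock would deadlock, which is why a lock-free internal variant is needed.

#include <cstdint>
#include <mutex>

struct vinodeno_t { uint64_t ino; uint64_t snapid; };

class ClientLike {
  std::mutex client_lock;  // non-recursive, like Client::client_lock

  // Internal variant: assumes client_lock is already held by the caller.
  vinodeno_t get_vino_unlocked(uint64_t ino, uint64_t snapid) {
    return {ino, snapid};  // the real code reads Inode fields here
  }

public:
  // Public entry point: takes the lock itself, so the caller must NOT
  // already hold client_lock, or this deadlocks.
  vinodeno_t ll_get_vino(uint64_t ino, uint64_t snapid) {
    std::lock_guard<std::mutex> l(client_lock);
    return get_vino_unlocked(ino, snapid);
  }
};
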
04:54 AM Bug #21991 (Resolved): mds: tell session ls returns vanila EINVAL when MDS is not active
Jos Collin
04:19 AM Bug #22627 (Fix Under Review): qa: kcephfs lacks many configurations in the fs/multimds suites
https://github.com/ceph/ceph/pull/19856 Patrick Donnelly
04:17 AM Bug #22627 (Resolved): qa: kcephfs lacks many configurations in the fs/multimds suites
In particular:
o Not using the common overrides/
o Not using 8 OSDs for EC configurations
o Not using openstack ...
Patrick Donnelly
03:48 AM Bug #22626 (Rejected): mds: sessionmap version mismatch when replay esessions
Zhang, we are not accepting bugs for multimds clusters on jewel. You can still seek help/advice on ceph-users if you ... Patrick Donnelly
03:21 AM Bug #22626 (Rejected): mds: sessionmap version mismatch when replay esessions
We used ceph 10.2.10 and backported this PR: https://github.com/ceph/ceph/commit/a49726e10ef23be124d92872470fd258a193... Zhi Zhang
03:46 AM Bug #22551: client: should flush dirty caps in background
That's what I'm concerned about: maybe it's not being flushed periodically. It should be easy to verify; I will do it. dongdong tao
03:43 AM Bug #22551: client: should flush dirty caps in background
Dirty metadata should be flushed when the cap is released. It may also happen periodically (I'm not certain). Patrick Donnelly
01:59 AM Bug #22551: client: should flush dirty caps in background
I will write a test case to verify it. dongdong tao
01:49 AM Bug #22551: client: should flush dirty caps in background
I'm not sure if I'm right: if there is only one client, and it opened a file, wrote some data, and did not close it. I know... dongdong tao
01:09 AM Bug #22607 (Rejected): client: should delete cap in remove_cap
The cap is deleted via "in->caps.erase(mds)". The session xlist entry is deleted in the Cap destructor. Patrick Donnelly
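
A toy illustration of the ownership pattern Patrick describes, using standard-library stand-ins rather than the Ceph xlist API: the Cap's destructor unlinks it from the session list, so erasing the inode's map entry is sufficient and no separate delete is needed.

#include <list>
#include <map>
#include <memory>

struct Cap;

struct Session {
  std::list<Cap*> caps;  // stand-in for session->caps (an xlist in Ceph)
};

struct Cap {
  Session* session;
  std::list<Cap*>::iterator pos;  // back-link, like xlist<Cap*>::item

  explicit Cap(Session* s) : session(s) {
    pos = session->caps.insert(session->caps.end(), this);
  }
  ~Cap() { session->caps.erase(pos); }  // destructor unlinks from the session
};

struct Inode {
  std::map<int, std::unique_ptr<Cap>> caps;  // stand-in for in->caps
};

// Erasing the map entry destroys the Cap, whose destructor removes it
// from session->caps -- no extra delete in remove_cap() is needed.
void remove_cap(Inode& in, int mds) { in.caps.erase(mds); }
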

01/08/2018

10:28 PM Documentation #22599 (Resolved): doc: mds memory tracking of cache is imprecise by a constant factor
Patrick Donnelly
05:23 PM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
Jos Collin
02:45 PM Bug #22610 (In Progress): MDS: assert failure when the inode for the cap_export from other MDS ha...
Patrick Donnelly
08:04 AM Bug #22610: MDS: assert failure when the inode for the cap_export from other MDS happened not in ...
Filed a pull request: https://github.com/ceph/ceph/pull/19836 Jianyu Li
07:57 AM Bug #22610 (Resolved): MDS: assert failure when the inode for the cap_export from other MDS happe...
We use two active MDSs in our online environment. Recently mds.1 restarted, and during its rejoin phase, mds.0 met asse... Jianyu Li
02:43 PM Bug #22551 (Need More Info): client: should flush dirty caps in background
Dongdong, can you explain more what the problem is? Do you have an issue you've observed? Patrick Donnelly
02:40 PM Bug #21419: client: is ceph_caps_for_mode correct for r/o opens?
No, I've not had time to look at it. For now, I'll just mark this as low priority until I can revisit it. Jeff Layton
01:27 PM Backport #22569: jewel: doc: clarify path restriction instructions
Added follow-on cherry-pick https://github.com/ceph/ceph/pull/19840 Nathan Cutler
11:59 AM Backport #22569: jewel: doc: clarify path restriction instructions
Commit 85ac1cd, which was a cherry-pick of d1277f1 fixing tracker issue http://tracker.ceph.com/issues/16906, introduce... Jos Collin
11:16 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
Nathan Cutler
05:16 AM Backport #22569 (In Progress): jewel: doc: clarify path restriction instructions
Jos Collin
04:24 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
Jos Collin
11:16 AM Documentation #16906 (Resolved): doc: clarify path restriction instructions
Nathan Cutler
04:31 AM Backport #22587 (In Progress): luminous: mds: mdsload debug too high
Prashant D
03:32 AM Backport #22587 (Need More Info): luminous: mds: mdsload debug too high
https://github.com/ceph/ceph/pull/19827 Prashant D
04:12 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
https://github.com/ceph/ceph/pull/19830 Shinobu Kinjo
04:12 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
Shinobu Kinjo wrote:
> fix already in luminous
Shinobu Kinjo
03:58 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
fix already in luminous Shinobu Kinjo
03:37 AM Backport #22573 (In Progress): luminous: AttributeError: 'LocalFilesystem' object has no attribut...
https://github.com/ceph/ceph/pull/19829 Prashant D

01/07/2018

04:48 AM Bug #22607: client: should delete cap in remove_cap
https://github.com/ceph/ceph/pull/19782 dongdong tao
04:48 AM Bug #22607 (Rejected): client: should delete cap in remove_cap
I think the cap should be deleted,
so that the cap can be removed from session->caps.
dongdong tao

01/05/2018

09:40 PM Bug #22051 (Need More Info): tests: Health check failed: Reduced data availability: 5 pgs peering...
Patrick Donnelly
09:37 PM Bug #21575 (Resolved): mds: client caps can go below hard-coded default (100)
Patrick Donnelly
09:34 PM Feature #20752 (Resolved): cap message flag which indicates if client still has pending capsnap
Patrick Donnelly
09:32 PM Bug #21419 (Need More Info): client: is ceph_caps_for_mode correct for r/o opens?
Jeff, any update on this? Patrick Donnelly
09:30 PM Documentation #21172: doc: Export over NFS
Ramana, any update on this? Patrick Donnelly
07:25 PM Documentation #22599 (Fix Under Review): doc: mds memory tracking of cache is imprecise by a cons...
https://github.com/ceph/ceph/pull/19807 Patrick Donnelly
07:19 PM Documentation #22599 (In Progress): doc: mds memory tracking of cache is imprecise by a constant ...
Patrick Donnelly
07:19 PM Documentation #22599 (Resolved): doc: mds memory tracking of cache is imprecise by a constant factor
The MDS can currently use much more memory than its mds_cache_memory_limit. This is more noticeable in deployments of a... Patrick Donnelly
06:44 PM Bug #22548 (Need More Info): mds: crash during recovery
Patrick Donnelly
05:09 PM Bug #21539 (Resolved): man: missing man page for mount.fuse.ceph
Jos Collin
02:51 PM Bug #21539: man: missing man page for mount.fuse.ceph
follow-on fix: https://github.com/ceph/ceph/pull/19792 Nathan Cutler
05:09 PM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
Jos Collin
04:08 PM Documentation #2206 (Resolved): Need a control command to gracefully shutdown an active MDS prior...
Sage Weil
03:02 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
Nathan Cutler
02:56 PM Bug #22595 (Closed): doc: mount.fuse.ceph is missing in index.rst
Luminous backport handled via #21539 Nathan Cutler
01:57 PM Bug #22595 (Fix Under Review): doc: mount.fuse.ceph is missing in index.rst
Jos Collin
01:57 PM Bug #22595: doc: mount.fuse.ceph is missing in index.rst
https://github.com/ceph/ceph/pull/19792 Jos Collin
01:56 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
mount.fuse.ceph is missing in http://docs.ceph.com/docs/master/cephfs/ Jos Collin
12:19 PM Backport #22590 (Resolved): jewel: ceph.in: tell mds does not understand --cluster
https://github.com/ceph/ceph/pull/19907 Nathan Cutler
12:18 PM Backport #22587 (Resolved): luminous: mds: mdsload debug too high
https://github.com/ceph/ceph/pull/19827 Nathan Cutler
12:17 PM Backport #22563 (In Progress): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed(...
Nathan Cutler
12:17 PM Backport #22564 (In Progress): luminous: Locker::calc_new_max_size does not take layout.stripe_co...
Nathan Cutler
12:16 PM Backport #22580 (Resolved): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full....
https://github.com/ceph/ceph/pull/19962 Nathan Cutler
12:16 PM Backport #22579 (Resolved): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full ...
https://github.com/ceph/ceph/pull/19830 Nathan Cutler
12:16 PM Backport #22573 (Resolved): luminous: AttributeError: 'LocalFilesystem' object has no attribute '...
https://github.com/ceph/ceph/pull/19829 Nathan Cutler
10:10 AM Backport #22569 (Fix Under Review): jewel: doc: clarify path restriction instructions
https://github.com/ceph/ceph/pull/19795 Jos Collin
09:39 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
https://github.com/ceph/ceph/pull/19795 and https://github.com/ceph/ceph/pull/19840 Jos Collin
09:39 AM Documentation #16906 (Pending Backport): doc: clarify path restriction instructions
Jos Collin
12:42 AM Bug #22483 (Pending Backport): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is...
https://github.com/ceph/ceph/pull/19602 Patrick Donnelly
12:40 AM Bug #22475 (Pending Backport): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
Patrick Donnelly

01/04/2018

07:32 PM Bug #22562 (Fix Under Review): mds: fix dump last_sent
Patrick Donnelly
03:57 AM Bug #22562: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19762 dongdong tao
03:57 AM Bug #22562 (Resolved): mds: fix dump last_sent
last_sent in the capability is an integer. dongdong tao
07:15 AM Backport #22564 (Resolved): luminous: Locker::calc_new_max_size does not take layout.stripe_count...
https://github.com/ceph/ceph/pull/19776 Zheng Yan
07:10 AM Backport #22563 (Resolved): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() f...
https://github.com/ceph/ceph/pull/19775 Zheng Yan
03:46 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
Jos Collin
03:26 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
I can't find any 'osd_op ... write' in the mds logs, so I have no clue how the corruption happened. Zheng Yan
01:48 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
Zheng Yan wrote:
> I can't find any log for "2017-12-16". Next time you run the experiment, please set debug_ms=1 for the mds.
Dear...
鹏 张

01/03/2018

06:11 PM Bug #22536 (Fix Under Review): client: _rmdir() uses a deleted memory structure (Dentry), leading to a ...
Patrick Donnelly
05:40 PM Bug #22546 (Fix Under Review): client: dirty caps may never get the chance to flush
Patrick Donnelly
02:42 PM Feature #16775 (Fix Under Review): MDS command for listing open files
https://github.com/ceph/ceph/pull/19760 John Spray
01:04 PM Feature #16775: MDS command for listing open files
Could you please have a look at this PR:
https://github.com/ceph/ceph/pull/19760
dongdong tao
02:04 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
I can't find any log for "2017-12-16". Next time you run the experiment, please set debug_ms=1 for the mds. Zheng Yan
10:10 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
Zheng Yan wrote:
> Please upload the ceph cluster log so I can check the timestamps of mds failovers.
Dear Zheng:
I ha...
鹏 张
03:45 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
Please upload the ceph cluster log so I can check the timestamps of mds failovers. Zheng Yan
04:00 AM Bug #22547: active mds session miss for client
Zheng Yan wrote:
> Sorry, the whole process is:
>
> mds close client connection
> client's remote_reset call...
dongdong tao
02:51 AM Bug #22547: active mds session miss for client
Sorry, the whole process is:
mds closes the client connection
client's remote_reset callback gets called
client s...
Zheng Yan

01/02/2018

03:40 PM Bug #22547: active mds session miss for client
Zheng Yan wrote:
> dongdong tao wrote:
> > Zheng, if a client has been evicted by the mds, the client should still thin...
dongdong tao
01:47 AM Bug #22547: active mds session miss for client
dongdong tao wrote:
> Zheng, if a client has been evicted by the mds, the client should still think the connection is av...
Zheng Yan
03:17 PM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
Sage Weil
11:34 AM Backport #22552 (Fix Under Review): luminous: doc: epoch barrier mechanism not found
Jos Collin
10:57 AM Backport #22552: luminous: doc: epoch barrier mechanism not found
https://github.com/ceph/ceph/pull/19741 Jos Collin
10:43 AM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
Jos Collin
11:14 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
Jos Collin wrote:
> I don't see anything in the URLs provided. Additionally, this looks like a Support Case.
can ...
Yong Wang
11:10 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
wangyong wang wrote:
> Hi all.
> ==============================
> version: jewel 10.2.10 (professional rpms)
> no...
Yong Wang

01/01/2018

11:56 AM Bug #22547 (Need More Info): active mds session miss for client
Jos Collin
06:47 AM Bug #22542 (Pending Backport): doc: epoch barrier mechanism not found
Jos Collin

12/29/2017

04:17 PM Feature #22545: add dump inode command to mds
I just noticed it's almost the same as #11172. dongdong tao
03:35 PM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
Sage Weil
01:40 AM Bug #22551 (Need More Info): client: should flush dirty caps in background
Dirty data has a background thread to do the flush, so we may need to flush dirty caps in the background too. dongdong tao
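
A rough sketch of the kind of background flusher being suggested here; all names are hypothetical (this is not the actual Client code), and a real client would hook flush_dirty_caps() into its cap-tracking state:

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

class CapFlusher {
  std::mutex m;
  std::condition_variable cv;
  std::thread t;
  bool stopping = false;

  void loop() {
    std::unique_lock<std::mutex> l(m);
    while (!stopping) {
      cv.wait_for(l, std::chrono::seconds(5));  // wake up periodically
      if (!stopping)
        flush_dirty_caps();  // hypothetical hook: walk sessions, flush dirty caps
    }
  }

  void flush_dirty_caps() { /* send cap flush messages to the MDS */ }

public:
  void start() { t = std::thread([this] { loop(); }); }
  void stop() {
    { std::lock_guard<std::mutex> g(m); stopping = true; }
    cv.notify_one();
    t.join();
  }
};
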

12/28/2017

03:29 PM Bug #22550 (New): mds: FAILED assert(probe->known_size[p->oid] <= shouldbe) when mds start

I stopped the mds while copying files to the cluster; when I tried to start the mds later, I encountered a failed assertion.
...
jianxiong shao
02:05 PM Bug #22548: mds: crash during recovery
Just once.
It took quite a long time during recovery and then crashed. There are about 10M files in the file syst...
wei jin
01:46 PM Bug #22548: mds: crash during recovery
This probably can be fixed by ... How many times have you encountered this issue... Zheng Yan
07:15 AM Bug #22548: mds: crash during recovery
Zheng Yan wrote:
> Which line triggers the assertion?
Hi, Yan,
this line:
0> 2017-12-27 23:27:05.892112 7f0...
wei jin
07:04 AM Bug #22548: mds: crash during recovery
Which line triggers the assertion? Zheng Yan
04:42 AM Bug #22548 (Need More Info): mds: crash during recovery
2017-12-27 23:27:05.919710 7f08483d0700 -1 *** Caught signal (Aborted) **
in thread 7f08483d0700 thread_name:ms_dis...
wei jin
12:53 PM Bug #22547: active mds session miss for client
By saying evicted, I mean due to the auto_close_timeout. dongdong tao
12:50 PM Bug #22547: active mds session miss for client
Zheng, if a client has been evicted by the mds, the client should still think the connection is available,
and when that...
dongdong tao
10:25 AM Bug #22547: active mds session miss for client
wei jin wrote:
> Ok. I will do it soon.
>
I cannot reproduce it after enabling the log, and it will have an impact ...
wei jin
07:21 AM Bug #22547: active mds session miss for client
Ok. I will do it soon.
This happened after I restarted the mds daemon last night. And there is also another crash (bug ...
wei jin
07:10 AM Bug #22547: active mds session miss for client
Please set debug_mds=10 and check why the mds evicted the client. It's likely that the docker host went to sleep or there was... Zheng Yan
04:34 AM Bug #22547 (Need More Info): active mds session miss for client
Our use case: k8s docker mounts cephfs using the cephfs kernel client.
If we do not use the 'mounted dir', after a wh...
wei jin
06:58 AM Feature #21156: mds: speed up recovery with many open inodes
Thanks, that explains the scenario we have met;
sometimes my standby-replay mds spends too much time in the rejoin stat...
dongdong tao
06:56 AM Feature #21156: mds: speed up recovery with many open inodes
Besides, when there are lots of open inodes, it's not efficient to journal all of them in each log segment. Zheng Yan
02:46 AM Feature #21156: mds: speed up recovery with many open inodes
The mds needs to open all inodes with client caps during recovery; some of these inodes may not be in the journal. Zheng Yan
02:00 AM Feature #21156: mds: speed up recovery with many open inodes
Hi Zheng,
I'm not sure if I understand this correctly. Do you mean the mds cannot recover the opening inode jus...
dongdong tao

12/27/2017

04:32 PM Bug #22546: client: dirty caps may never get the chance to flush
https://github.com/ceph/ceph/pull/19703
dongdong tao
04:05 PM Bug #22546 (Resolved): client: dirty caps may never get the chance to flush
Currently, we flush the caps in the function Client::flush_caps_sync,
but there is a bug in this function,
because the ...
dongdong tao
03:54 PM Feature #22545: add dump inode command to mds
pull request:
https://github.com/ceph/ceph/pull/19677
dongdong tao
03:53 PM Feature #22545 (Duplicate): add dump inode command to mds
1. When the mds cache is really big, it's hard to dump all the cache.
2. Most of the time, we only want to know a speci...
dongdong tao
10:58 AM Bug #22542 (Fix Under Review): doc: epoch barrier mechanism not found
Jos Collin
10:21 AM Bug #22542: doc: epoch barrier mechanism not found
https://github.com/ceph/ceph/pull/19701 Jos Collin

12/26/2017

10:17 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
http://docs.ceph.com/docs/master/cephfs/full/ says "For more on the epoch barrier mechanism, see Ceph filesystem ... Jos Collin

12/25/2017

03:57 AM Bug #22536: client: _rmdir() uses a deleted memory structure (Dentry), leading to a core
Fixed by https://github.com/ceph/ceph/pull/19672 Ivan Guan
03:45 AM Bug #22536 (Resolved): client: _rmdir() uses a deleted memory structure (Dentry), leading to a core
Version: ceph-10.2.2
Bug description:
"::rmdir()" acquires the Dentry structure "by get_or_create(dir, name, &de...
Ivan Guan

12/22/2017

11:47 AM Bug #22523 (Need More Info): Jewel10.2.10 cephfs journal corrupt,later event jump into previous ...
I don't see anything in the URLs provided. Additionally, this looks like a Support Case. Jos Collin
09:16 AM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
We don't need to backport this fix to luminous. The commit that introduced
this bug, https://github.com/ceph/ceph/co...
Ramana Raja
04:53 AM Bug #22524 (Pending Backport): NameError: global name 'get_mds_map' is not defined
Patrick Donnelly
04:55 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
Patrick Donnelly
04:55 AM Bug #21853 (Pending Backport): mds: mdsload debug too high
Patrick Donnelly
04:55 AM Feature #19578 (Pending Backport): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
Patrick Donnelly
04:54 AM Bug #22492 (Pending Backport): Locker::calc_new_max_size does not take layout.stripe_count into a...
Patrick Donnelly
04:49 AM Backport #22503 (In Progress): luminous: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19646 Prashant D
02:37 AM Backport #22503: luminous: mds: read hang in multiple mds setup
I'm on it. Prashant D
12:28 AM Bug #22357: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19414 Patrick Donnelly

12/21/2017

11:45 PM Bug #22487: mds: setattr blocked when metadata pool is full
Right, the full test should have no problem. Zheng Yan
10:27 PM Bug #22487: mds: setattr blocked when metadata pool is full
Presumably that would be because with the vstart config the MDS writes cannot actually be written whereas with the te... Patrick Donnelly
02:35 PM Bug #22487: mds: setattr blocked when metadata pool is full
I reproduced this locally.
It was caused by stuck log flush ...
Zheng Yan
10:38 PM Bug #22526 (Pending Backport): AttributeError: 'LocalFilesystem' object has no attribute 'ec_prof...
Fixed by: https://github.com/ceph/ceph/pull/19533 Patrick Donnelly
02:19 PM Bug #22526 (Resolved): AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
Hit an error while running a ceph_volume_client test on a vstart Ceph cluster using the command
LD_LIBRARY_PATH=`pwd...
Ramana Raja
04:13 PM Bug #22357: mds: read hang in multiple mds setup
I don't see any merged PR. Shinobu Kinjo
01:04 PM Bug #22524 (Fix Under Review): NameError: global name 'get_mds_map' is not defined
https://github.com/ceph/ceph/pull/19633 Ramana Raja
12:48 PM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
Hit an error while running test_volume_client on a dev vstart cluster (Ceph master) using
# LD_LIBRARY_PATH=`pwd`/lib...
Ramana Raja
12:05 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
type: fs
version: 10.2.10
Yong Wang
11:59 AM Bug #22523 (Closed): Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
Hi all.
==============================
version: jewel 10.2.10 (professional rpms)
nodes : 3 centos7.3
cephfs : k...
Yong Wang
10:08 AM Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 's...
https://github.com/ceph/ceph/pull/19628 Shinobu Kinjo
10:05 AM Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19627 Shinobu Kinjo
10:03 AM Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
https://github.com/ceph/ceph/pull/19626 Shinobu Kinjo
08:36 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
Nathan Cutler

12/20/2017

06:41 PM Backport #22493 (In Progress): luminous: mds: crash during exiting
Nathan Cutler
02:47 AM Backport #22493 (Resolved): luminous: mds: crash during exiting
https://github.com/ceph/ceph/pull/19610 Zheng Yan
11:56 AM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
Nathan Cutler
11:54 AM Backport #22505 (Rejected): jewel: client may fail to trim as many caps as MDS asked for
Nathan Cutler
11:54 AM Backport #22504 (Resolved): luminous: client may fail to trim as many caps as MDS asked for
https://github.com/ceph/ceph/pull/24119 Nathan Cutler
11:54 AM Backport #22503 (Resolved): luminous: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19646 Nathan Cutler
11:54 AM Backport #22501 (Resolved): luminous: qa: CommandFailedError: Command failed on smithi135 with st...
https://github.com/ceph/ceph/pull/19628 Nathan Cutler
11:54 AM Backport #22500 (Resolved): luminous: cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19627
Nathan Cutler
11:54 AM Backport #22499 (Resolved): luminous: cephfs-journal-tool: tool would miss to report some invalid...
https://github.com/ceph/ceph/pull/19626 Nathan Cutler
11:39 AM Backport #21947 (In Progress): luminous: mds: preserve order of requests during recovery of multi...
Nathan Cutler
03:41 AM Backport #22494 (Fix Under Review): jewel: unsigned integer overflow in file_layout_t::get_period
https://github.com/ceph/ceph/pull/19611 Zheng Yan
03:35 AM Backport #22494 (Resolved): jewel: unsigned integer overflow in file_layout_t::get_period
A customer encountered this
https://bugzilla.redhat.com/show_bug.cgi?id=1527548
Zheng Yan
02:09 AM Bug #22492 (Fix Under Review): Locker::calc_new_max_size does not take layout.stripe_count into a...
https://github.com/ceph/ceph/pull/19609 Zheng Yan
02:07 AM Bug #22492 (Resolved): Locker::calc_new_max_size does not take layout.stripe_count into account
If layout.stripe_count is N, the size increment is actually 'N * mds_client_writeable_range_max_inc_objs' objects. Zheng Yan
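
A worked example of the arithmetic in Zheng's comment, with assumed values (the object size and counts are made up for illustration; the computation is a simplified sketch, not the Locker code):

#include <cstdint>
#include <iostream>

int main() {
  const uint64_t object_size  = 4 * 1024 * 1024;  // assume 4 MiB objects
  const uint64_t stripe_count = 4;                // layout.stripe_count = N
  const uint64_t max_inc_objs = 128;              // mds_client_writeable_range_max_inc_objs

  // Intended growth of max_size: max_inc_objs objects' worth of bytes.
  const uint64_t intended = max_inc_objs * object_size;

  // If the increment is computed per object without accounting for the
  // stripe count, the effective growth is N times larger:
  const uint64_t actual = stripe_count * max_inc_objs * object_size;

  std::cout << "intended: " << intended << " bytes, actual: " << actual
            << " bytes (" << stripe_count << "x too large)\n";
}
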
01:09 AM Bug #22360 (Pending Backport): mds: crash during exiting
Patrick Donnelly
12:35 AM Backport #22490 (In Progress): luminous: mds: handle client session messages when mds is stopping
Patrick Donnelly
12:35 AM Backport #22490 (Resolved): luminous: mds: handle client session messages when mds is stopping
https://github.com/ceph/ceph/pull/19585 Patrick Donnelly

12/19/2017

11:45 PM Feature #22446: mds: ask idle client to trim more caps
Zheng Yan wrote:
> An idle client holding so many caps is wasteful, because it increases the chance that the mds trims other re...
Patrick Donnelly
09:55 PM Bug #22488 (New): mds: unlink blocks on large file when metadata pool is full
With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
Patrick Donnelly
09:53 PM Bug #22487 (Rejected): mds: setattr blocked when metadata pool is full
With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
Patrick Donnelly
07:25 PM Bug #22483 (In Progress): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obso...
Patrick Donnelly
07:12 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
https://github.com/ceph/ceph/blob/06b7707cee87a54517630def0ad274340325a677/src/mds/Server.cc#L1742
since: b4ca5ae4...
Patrick Donnelly
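
A hedged sketch of the semantic shift this ticket describes, using simplified stand-in types rather than the real OSDMap/pg_pool_t API: the cluster-wide FULL flag is no longer maintained, so fullness has to be checked per pool.

#include <cstdint>
#include <map>

struct PoolInfo {
  bool full = false;  // per-pool full flag, which is what now matters
};

struct OsdMapLike {
  bool cluster_full = false;  // obsolete cluster-wide flag
  std::map<int64_t, PoolInfo> pools;

  // Old check: the global flag is obsolete, so this is now wrong.
  bool is_full_old() const { return cluster_full; }

  // New check: consult the specific pool the MDS is writing to.
  bool pool_is_full(int64_t pool_id) const {
    auto it = pools.find(pool_id);
    return it != pools.end() && it->second.full;
  }
};
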
06:56 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
... Patrick Donnelly
05:00 PM Fix #15064 (Closed): multifs: tweak text on "flag set enable multiple"
Okay, thanks for the info, John. I'll just close this then. Patrick Donnelly
03:36 PM Fix #15064: multifs: tweak text on "flag set enable multiple"
I think there was an idea that the message ought to be scarier, but I'm not sure we need that at this point. John Spray
03:05 PM Fix #15064 (Need More Info): multifs: tweak text on "flag set enable multiple"
Not sure what the problem is here. Patrick Donnelly
03:13 PM Tasks #22479 (Closed): multifs: review testing coverage
Patrick Donnelly
03:11 PM Feature #22478 (Rejected): multifs: support snapshots for shared data pool
If two file systems use the same data pool with different RADOS namespaces, it is necessary for them to cooperate on ... Patrick Donnelly
03:08 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
Once all sub-tasks are complete. Patrick Donnelly
09:26 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
Flagging for luminous backport because b4ca5ae (which presumably introduced the test failure this is fixing (?)) was ... Nathan Cutler
05:51 AM Bug #22475 (Fix Under Review): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
Patrick Donnelly
05:51 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
https://github.com/ceph/ceph/pull/19588 Patrick Donnelly
05:25 AM Bug #22475 (Resolved): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull...
... Patrick Donnelly
05:53 AM Bug #22436 (Pending Backport): qa: CommandFailedError: Command failed on smithi135 with status 22...
https://github.com/ceph/ceph/pull/19534 Patrick Donnelly
03:09 AM Bug #22460: mds: handle client session messages when mds is stopping
luminous PR: https://github.com/ceph/ceph/pull/19585 Zheng Yan
02:01 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
https://github.com/ceph/ceph-client/commit/a2a44b35146e6ccf099e4326bc1a7e2cdaf02f65 Zheng Yan

12/18/2017

02:53 PM Bug #22460 (Pending Backport): mds: handle client session messages when mds is stopping
Patrick Donnelly
02:49 PM Bug #22428: mds: don't report slow request for blocked filelock request
Perhaps reclassify the slow requests blocked by locks as "clients not releasing file locks" or similar to be differen... Patrick Donnelly
02:47 PM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
This is also fixed in master already by one of Zheng's commits. We need to link to the commit in master where this is... Patrick Donnelly
02:41 PM Bug #22353 (In Progress): kclient: ceph_getattr() return zero st_dev for normal inode
Patrick Donnelly
09:26 AM Feature #19578 (Fix Under Review): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
https://github.com/ceph/ceph/pull/19574 Zheng Yan
12:10 AM Bug #22459 (Pending Backport): cephfs-journal-tool: tool would miss to report some invalid range
Patrick Donnelly
12:09 AM Bug #22458 (Pending Backport): cephfs: potential adjust failure in lru_expire
Patrick Donnelly

12/16/2017

12:01 AM Feature #22446: mds: ask idle client to trim more caps
No, it's not about recovery time; clients already trim their cache aggressively when the mds recovers.
An idle client holding so many...
Zheng Yan

12/15/2017

10:24 PM Bug #21853 (Fix Under Review): mds: mdsload debug too high
https://github.com/ceph/ceph/pull/19556 Patrick Donnelly
08:26 PM Feature #22446: mds: ask idle client to trim more caps
Ah, the problem is recovery of the MDS takes too long. (from follow-up posts to "[ceph-users] cephfs mds millions of ... Patrick Donnelly
08:16 PM Feature #22446: mds: ask idle client to trim more caps
What's the goal? Prevent the situation where the client has ~1M caps for an indefinite period like what we saw on the... Patrick Donnelly
01:42 AM Feature #22446 (Resolved): mds: ask idle client to trim more caps
We can add a decay counter to the client session, tracking the rate at which we add new caps to the client. Zheng Yan
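
A minimal sketch of the decay-counter idea (Ceph has its own DecayCounter class; this is an independent illustration, not its API): the value decays exponentially with a configurable half-life, so calling hit() on every cap issued yields a smoothed recent rate per session.

#include <chrono>
#include <cmath>

class SimpleDecayCounter {
  using clock = std::chrono::steady_clock;
  double value = 0.0;
  double half_life;  // seconds
  clock::time_point last = clock::now();

  void decay() {
    const auto now = clock::now();
    const double dt = std::chrono::duration<double>(now - last).count();
    value *= std::exp2(-dt / half_life);  // halves every half_life seconds
    last = now;
  }

public:
  explicit SimpleDecayCounter(double hl) : half_life(hl) {}

  void hit(double n = 1.0) { decay(); value += n; }  // call when a cap is issued
  double get() { decay(); return value; }  // smoothed recent issue count
};
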
07:30 PM Bug #21393 (Pending Backport): MDSMonitor: inconsistent role/who usage in command help
Patrick Donnelly
07:30 PM Bug #22293 (Pending Backport): client may fail to trim as many caps as MDS asked for
Patrick Donnelly
07:30 PM Bug #22357 (Pending Backport): mds: read hang in multiple mds setup
Patrick Donnelly
07:29 PM Bug #21764 (Resolved): common/options.cc: Update descriptions and visibility levels for MDS/clien...
Patrick Donnelly
07:02 PM Bug #22460 (Resolved): mds: handle client session messages when mds is stopping
https://github.com/ceph/ceph/pull/19234 Patrick Donnelly
06:59 PM Bug #22459 (Resolved): cephfs-journal-tool: tool would miss to report some invalid range
https://github.com/ceph/ceph/pull/19421 Patrick Donnelly
06:58 PM Bug #22458 (Resolved): cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19277 Patrick Donnelly

12/13/2017

11:09 PM Backport #22392: luminous: mds: tell session ls returns vanila EINVAL when MDS is not active
https://github.com/ceph/ceph/pull/19505 Shinobu Kinjo
09:28 PM Bug #22436 (Resolved): qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo ...
Key bits:... Patrick Donnelly
04:41 PM Feature #22417 (Fix Under Review): support purge queue with cephfs-journal-tool
Patrick Donnelly
09:34 AM Feature #22417: support purge queue with cephfs-journal-tool
I have already opened a pull request:
https://github.com/ceph/ceph/pull/19471
dongdong tao
09:33 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
Luminous has introduced a new purge queue journal whose inode number is 0x500,
but cephfs-journal-tool does ...
dongdong tao
03:12 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
Zheng Yan
12:58 PM Backport #22407 (In Progress): luminous: client: implement delegation support in userland cephfs
Nathan Cutler

12/12/2017

01:31 PM Backport #22385 (In Progress): luminous: mds: mds should ignore export_pin for deleted directory
Nathan Cutler
08:43 AM Backport #22385 (Resolved): luminous: mds: mds should ignore export_pin for deleted directory
https://github.com/ceph/ceph/pull/19360 Nathan Cutler
11:25 AM Backport #22398 (In Progress): luminous: man: missing man page for mount.fuse.ceph
Nathan Cutler
08:44 AM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
https://github.com/ceph/ceph/pull/19449 Nathan Cutler
11:10 AM Bug #22374 (Fix Under Review): luminous: mds: SimpleLock::num_rdlock overloaded
John Spray
06:16 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
I just submitted a PR for this: https://github.com/ceph/ceph/pull/19442 Xuehan Xu
04:44 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
... Xuehan Xu
04:43 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile sync) on [inode 0x10004cec485... Xuehan Xu
04:42 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
Sorry, I misedited the log:
-6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile...
Xuehan Xu
04:41 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
Recently, when doing a massive directory delete test, both the active mds and the standby mds aborted and couldn't be started agai... Xuehan Xu
08:45 AM Backport #22407 (Resolved): luminous: client: implement delegation support in userland cephfs
https://github.com/ceph/ceph/pull/19480 Nathan Cutler
08:43 AM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
Nathan Cutler
08:43 AM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
https://github.com/ceph/ceph/pull/21172 Nathan Cutler
08:42 AM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
https://github.com/ceph/ceph/pull/21173 Nathan Cutler
08:42 AM Backport #22382 (Rejected): jewel: client: give more descriptive error message for remount failures
Nathan Cutler
08:42 AM Backport #22381 (Rejected): luminous: client: give more descriptive error message for remount fai...
Nathan Cutler
08:42 AM Backport #22380 (Resolved): jewel: client reconnect gather race
https://github.com/ceph/ceph/pull/21163 Nathan Cutler
08:42 AM Backport #22379 (Resolved): luminous: client reconnect gather race
https://github.com/ceph/ceph/pull/19326 Nathan Cutler
08:42 AM Backport #22378 (Resolved): jewel: ceph-fuse: failure to remount in startup test does not handle ...
https://github.com/ceph/ceph/pull/21162 Nathan Cutler
02:49 AM Feature #22372 (Resolved): kclient: implement quota handling using new QuotaRealm
Patrick Donnelly
02:46 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
https://github.com/ceph/ceph/pull/18424 Patrick Donnelly
02:45 AM Feature #22370 (Resolved): cephfs: add kernel client quota support
Patrick Donnelly
 
