Activity

From 12/21/2017 to 01/19/2018

01/19/2018

04:31 AM Bug #22734: cephfs-journal-tool: may get assertion failure due to not shutting down
https://github.com/ceph/ceph/pull/19991 dongdong tao
04:22 AM Bug #22734 (Resolved): cephfs-journal-tool: may get assertion failure due to not shutting down
2018-01-14T19:36:56.381 INFO:teuthology.orchestra.run.smithi139.stderr:Error loading journal: (2) No such file o...
dongdong tao

01/18/2018

11:02 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
Thanks for the report anyway! Patrick Donnelly
10:09 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
We had several ceph-fuse crashes with errors like... Andras Pataki
08:02 PM Bug #22730 (Fix Under Review): mds: scrub crash
https://github.com/ceph/ceph/pull/20012 Patrick Donnelly
05:38 PM Bug #22730: mds: scrub crash
Doug, please take a look at this one. Patrick Donnelly
04:17 PM Bug #22730 (Resolved): mds: scrub crash
this crash can be reproduced in 2 steps:
1. ceph daemon mds.a scrub_path <dir> recursive
2. ceph daemon mds.a scrub_...
dongdong tao
12:43 AM Backport #22700 (In Progress): jewel: client:_rmdir() uses a deleted memory structure(Dentry) lea...
https://github.com/ceph/ceph/pull/19993 Prashant D
12:27 AM Backport #22700: jewel: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
I'm on it. Prashant D

01/17/2018

10:07 PM Bug #22683 (Fix Under Review): client: coredump when nfs-ganesha use ceph_ll_get_inode()
Patrick Donnelly
03:34 PM Feature #4208: Add more replication pool tests for Hadoop / Ceph bindings
Bulk move of hadoop category into FS project. John Spray
03:34 PM Feature #4361: Setup another gitbuilder VM for building external Hadoop git repo(s)
Bulk move of hadoop category into FS project. John Spray
03:34 PM Bug #3544: ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also needs to ch...
Bulk move of hadoop category into FS project. John Spray
03:34 PM Bug #1661: Hadoop: expected system directories not present
Bulk move of hadoop category into FS project. John Spray
03:34 PM Bug #1663: Hadoop: file ownership/permission not available in hadoop
Bulk move of hadoop category into FS project. John Spray
03:26 PM Bug #21748 (Can't reproduce): client assertions tripped during some workloads
No response in several months, and I've never seen this trip in my own testing. Closing for now. Please reopen if you... Jeff Layton
03:24 PM Bug #22003 (Resolved): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
No response in two months. Closing bug.
Please reopen or comment if you've been able to test with that patch and i...
Jeff Layton
03:23 PM Bug #21419 (Rejected): client: is ceph_caps_for_mode correct for r/o opens?
Ok, I think you're right. may_open happens at a higher level and we will simply request the caps at that point. False... Jeff Layton
10:50 AM Bug #21734: mount client shows total capacity of cluster but not of a pool
(Just moving this closed ticket because I'm deleting the bogus "cephfs" category in the toplevel Ceph project) John Spray
07:05 AM Backport #22719 (In Progress): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19982 Zheng Yan
06:57 AM Backport #22719 (Resolved): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19982 Zheng Yan
05:43 AM Backport #22590 (In Progress): jewel: ceph.in: tell mds does not understand --cluster
Prashant D
04:12 AM Bug #22629 (Pending Backport): client: avoid recursive lock in ll_get_vino
Patrick Donnelly
04:12 AM Bug #22631 (Pending Backport): mds: crashes because of old pool id in journal header
Patrick Donnelly
04:11 AM Backport #22690 (In Progress): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insuffic...
https://github.com/ceph/ceph/pull/19976 Prashant D
04:10 AM Bug #22647 (Pending Backport): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
Patrick Donnelly
02:38 AM Backport #22689 (In Progress): jewel: client: fails to release to revoking Fc
Prashant D
02:38 AM Backport #22689: jewel: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19975 Prashant D

01/16/2018

07:28 PM Bug #22428: mds: don't report slow request for blocked filelock request
Here's a recent example from someone in #ceph:... John Spray
02:13 PM Backport #22688 (In Progress): luminous: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19970 Zheng Yan
08:16 AM Backport #22688 (Resolved): luminous: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/20342 Nathan Cutler
01:57 PM Backport #22699 (In Progress): luminous: client:_rmdir() uses a deleted memory structure(Dentry) ...
Zheng Yan
01:57 PM Backport #22699 (Fix Under Review): luminous: client:_rmdir() uses a deleted memory structure(Den...
https://github.com/ceph/ceph/pull/19968 Zheng Yan
08:17 AM Backport #22699 (Resolved): luminous: client:_rmdir() uses a deleted memory structure(Dentry) lea...
https://github.com/ceph/ceph/pull/19968 Nathan Cutler
08:34 AM Backport #22579 (In Progress): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster fu...
Nathan Cutler
08:31 AM Backport #22580 (In Progress): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_fu...
Nathan Cutler
08:23 AM Backport #22695 (In Progress): jewel: mds: fix dump last_sent
Nathan Cutler
08:17 AM Backport #22695 (Resolved): jewel: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19961 Nathan Cutler
08:22 AM Backport #22694 (In Progress): luminous: mds: fix dump last_sent
Nathan Cutler
08:17 AM Backport #22694 (Resolved): luminous: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19959 Nathan Cutler
08:17 AM Backport #22700 (Resolved): jewel: client:_rmdir() uses a deleted memory structure(Dentry) leadin...
https://github.com/ceph/ceph/pull/19993 Nathan Cutler
08:17 AM Backport #22697 (Rejected): jewel: client: dirty caps may never get the chance to flush
Nathan Cutler
08:17 AM Backport #22696 (Resolved): luminous: client: dirty caps may never get the chance to flush
https://github.com/ceph/ceph/pull/21278 Nathan Cutler
08:16 AM Backport #22690 (Resolved): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficien...
https://github.com/ceph/ceph/pull/19976 Nathan Cutler
08:16 AM Backport #22689 (Resolved): jewel: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19975 Nathan Cutler
06:38 AM Bug #22683: client: coredump when nfs-ganesha use ceph_ll_get_inode()
https://github.com/ceph/ceph/pull/19957 huanwen ren
02:47 AM Bug #22683 (Resolved): client: coredump when nfs-ganesha use ceph_ll_get_inode()
Environment:
nfs: nfs-ganesha 2.5.4 + https://github.com/nfs-ganesha/nfs-ganesha/commit/476c2068bd4a3fd22f0d...
huanwen ren

01/15/2018

02:36 PM Bug #22610 (Fix Under Review): MDS: assert failure when the inode for the cap_export from other M...
Zheng Yan

01/13/2018

01:43 AM Bug #21402 (In Progress): mds: move remaining containers in CDentry/CDir/CInode to mempool
Patrick Donnelly

01/12/2018

10:42 PM Bug #22652 (Pending Backport): client: fails to release to revoking Fc
Patrick Donnelly
10:39 PM Bug #22646 (Pending Backport): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
Patrick Donnelly
03:49 PM Feature #21995 (In Progress): ceph-fuse: support nfs export
Jos Collin
11:07 AM Feature #21156 (In Progress): mds: speed up recovery with many open inodes
Zheng Yan

01/11/2018

10:50 PM Backport #22508: luminous: MDSMonitor: inconsistent role/who usage in command help
See also: https://github.com/ceph/ceph/pull/19926 Patrick Donnelly
10:29 PM Bug #21393: MDSMonitor: inconsistent role/who usage in command help
The fix for this causes upgrade tests to fail: http://tracker.ceph.com/issues/22527#note-9
We will probably need t...
Patrick Donnelly
08:39 AM Bug #22652 (Fix Under Review): client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19920 Zheng Yan
08:37 AM Bug #22652: client: fails to release to revoking Fc
The hang in fuse_reverse_inval_inode() was caused by hung page writeback. Zheng Yan

01/10/2018

11:24 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
https://github.com/ceph/ceph/pull/19907 Prashant D
10:44 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
I'm on it. Prashant D
04:41 PM Bug #22631 (Fix Under Review): mds: crashes because of old pool id in journal header
Jos Collin
03:41 PM Backport #22076 (In Progress): luminous: 'ceph tell mds' commands result in 'File exists' errors ...
Nathan Cutler
03:17 PM Backport #22076 (Fix Under Review): luminous: 'ceph tell mds' commands result in 'File exists' er...
Jos Collin
02:45 PM Bug #22652: client: fails to release to revoking Fc
Sage Weil
01:29 PM Bug #22652: client: fails to release to revoking Fc
I reproduced it locally. It seems like a kernel issue. The issue happens only when fuse_use_invalidate_cb is true. Zheng Yan
11:02 AM Bug #22652 (Resolved): client: fails to release to revoking Fc
http://pulpito.ceph.com/pdonnell-2018-01-09_21:14:38-multimds-wip-pdonnell-testing-20180109.193634-testing-basic-smit... Zheng Yan
05:54 AM Bug #22647 (Fix Under Review): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19891 Zheng Yan
02:34 AM Bug #22647 (Resolved): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
... Zheng Yan
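As a general illustration of this bug class (a code path that takes auth_pin but skips the matching auth_unpin), here is a minimal hedged sketch using an RAII guard; the guard and all names are invented for illustration and are not the actual RecoveryQueue fix:

```cpp
#include <cassert>

// Hypothetical pin-counted object standing in for an MDS inode; the real
// code uses explicit auth_pin()/auth_unpin() calls on its own classes.
struct Pinnable {
    int auth_pins = 0;
    void auth_pin()   { ++auth_pins; }
    void auth_unpin() { assert(auth_pins > 0); --auth_pins; }
};

// RAII guard: the destructor always runs, so every path that takes the
// pin also releases it, even on early return.
class AuthPinGuard {
    Pinnable& obj_;
public:
    explicit AuthPinGuard(Pinnable& obj) : obj_(obj) { obj_.auth_pin(); }
    ~AuthPinGuard() { obj_.auth_unpin(); }
    AuthPinGuard(const AuthPinGuard&) = delete;
    AuthPinGuard& operator=(const AuthPinGuard&) = delete;
};

// Invented stand-in for queueing a file-recovery item.
bool try_recover(Pinnable& in, bool can_start) {
    AuthPinGuard guard(in);   // pinned for the whole scope
    if (!can_start)
        return false;         // early return: guard still unpins
    return true;              // ... recovery work would be queued here
}

int main() {
    Pinnable in;
    try_recover(in, false);
    try_recover(in, true);
    assert(in.auth_pins == 0);   // balanced on every path
}
```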
01:08 AM Bug #22629 (Fix Under Review): client: avoid recursive lock in ll_get_vino
Patrick Donnelly
01:05 AM Bug #22562 (Pending Backport): mds: fix dump last_sent
Patrick Donnelly
01:05 AM Bug #22546 (Pending Backport): client: dirty caps may never get the chance to flush
Patrick Donnelly
01:04 AM Bug #22536 (Pending Backport): client:_rmdir() uses a deleted memory structure(Dentry) leading a ...
Patrick Donnelly
12:44 AM Bug #22646: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
https://github.com/ceph/ceph/pull/19885 Patrick Donnelly
12:40 AM Bug #22646 (Resolved): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
This causes startup to fail for ec pool configurations.
(This was included in my fix for #22627 but I'm breaking i...
Patrick Donnelly

01/09/2018

04:03 PM Bug #22631: mds: crashes because of old pool id in journal header
https://github.com/ceph/ceph/pull/19860 dongdong tao
08:38 AM Bug #22631: mds: crashes because of old pool id in journal header
Going through the code, we found it is because of the old pool id in the journal header.
My solution is to
add "set pool_id"...
dongdong tao
08:35 AM Bug #22631 (Resolved): mds: crashes because of old pool id in journal header
We used the rados cppool command to copy the cephfs metadata pool,
but after the copy was done, the mds kept crashing when ...
dongdong tao
02:53 PM Backport #21948 (In Progress): luminous: MDSMonitor: mons should reject misconfigured mds_blackli...
Nathan Cutler
02:43 PM Backport #21874 (In Progress): luminous: qa: libcephfs_interface_tests: shutdown race failures
Nathan Cutler
02:43 PM Backport #21870 (In Progress): luminous: Assertion in EImportStart::replay should be a damaged()
Nathan Cutler
01:02 PM Feature #22545 (Fix Under Review): add dump inode command to mds
Nathan Cutler
09:18 AM Backport #22630 (Resolved): doc: misc fixes for CephFS best practices
This is a backport of: https://github.com/ceph/ceph/pull/19791 Jos Collin
08:13 AM Backport #22630 (Resolved): doc: misc fixes for CephFS best practices
https://github.com/ceph/ceph/pull/19858 Jos Collin
07:47 AM Bug #22629: client: avoid recursive lock in ll_get_vino
https://github.com/ceph/ceph/pull/19837 dongdong tao
07:47 AM Bug #22629 (Resolved): client: avoid recursive lock in ll_get_vino
ll_get_vino would lock the client_lock;
the caller must not already hold it.
dongdong tao
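A minimal sketch of the recursive-lock hazard described above, assuming a non-recursive client_lock; the types and helper names below are illustrative stand-ins, not the actual libcephfs code:

```cpp
#include <cstdint>
#include <mutex>

// Sketch of the hazard: with a non-recursive client_lock, a public entry
// point that takes the lock must never be called from code that already
// holds it.
std::mutex client_lock;

struct vinodeno_t { uint64_t ino; uint64_t snapid; };
struct Inode { uint64_t ino = 1; uint64_t snapid = 0; };

// Internal variant: caller must already hold client_lock.
static vinodeno_t _get_vino(Inode* in) {
    return {in->ino, in->snapid};
}

// Public variant: takes the lock itself. Calling it while holding
// client_lock would self-deadlock on a plain std::mutex.
vinodeno_t ll_get_vino(Inode* in) {
    std::lock_guard<std::mutex> l(client_lock);
    return _get_vino(in);
}

void some_locked_path(Inode* in) {
    std::lock_guard<std::mutex> l(client_lock);
    // ll_get_vino(in) here would deadlock; use the unlocked helper.
    vinodeno_t v = _get_vino(in);
    (void)v;
}

int main() {
    Inode in;
    ll_get_vino(&in);     // fine: lock not held
    some_locked_path(&in);
}
```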
04:54 AM Bug #21991 (Resolved): mds: tell session ls returns vanila EINVAL when MDS is not active
Jos Collin
04:19 AM Bug #22627 (Fix Under Review): qa: kcephfs lacks many configurations in the fs/multimds suites
https://github.com/ceph/ceph/pull/19856 Patrick Donnelly
04:17 AM Bug #22627 (Resolved): qa: kcephfs lacks many configurations in the fs/multimds suites
In particular:
o Not using the common overrides/
o Not using 8 OSDs for EC configurations
o Not using openstack ...
Patrick Donnelly
03:48 AM Bug #22626 (Rejected): mds: sessionmap version mismatch when replay esessions
Zhang, we are not accepting bugs for multimds clusters on jewel. You can still seek help/advice on ceph-users if you ... Patrick Donnelly
03:21 AM Bug #22626 (Rejected): mds: sessionmap version mismatch when replay esessions
We used ceph 10.2.10 and backported this PR: https://github.com/ceph/ceph/commit/a49726e10ef23be124d92872470fd258a193... Zhi Zhang
03:46 AM Bug #22551: client: should flush dirty caps in background
That's what I'm concerned about: maybe it's not being flushed periodically. It should be easy to verify; I will do it. dongdong tao
03:43 AM Bug #22551: client: should flush dirty caps in background
Dirty metadata should be flushed when the cap is released. It may also happen periodically (I'm not certain). Patrick Donnelly
01:59 AM Bug #22551: client: should flush dirty caps in background
I will write a test case to verify it. dongdong tao
01:49 AM Bug #22551: client: should flush dirty caps in background
I'm not sure if I'm right: if there is only one client, and it opened a file, wrote some data, and did not close it, I know... dongdong tao
01:09 AM Bug #22607 (Rejected): client: should delete cap in remove_cap
The cap is deleted via "in->caps.erase(mds)". The session xlist entry is deleted in the Cap destructor. Patrick Donnelly
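The pattern behind this rejection, an intrusive list item that unlinks itself when its owner is destroyed, can be sketched as below; this is a simplified stand-in for Ceph's xlist, with invented names:

```cpp
#include <cassert>

// Simplified stand-in for Ceph's xlist: the item embedded in a Cap
// unlinks itself in its destructor, so destroying the Cap (e.g. via
// in->caps.erase(mds)) also removes it from the session's list.
struct Item {
    Item* prev = nullptr;
    Item* next = nullptr;
    void remove_myself() {
        if (prev) prev->next = next;
        if (next) next->prev = prev;
        prev = next = nullptr;
    }
    ~Item() { remove_myself(); }   // unlinking is automatic
};

struct List {                      // circular list with a sentinel node
    Item head;
    List() { head.prev = head.next = &head; }
    void push_back(Item* i) {
        i->prev = head.prev; i->next = &head;
        head.prev->next = i; head.prev = i;
    }
    bool empty() const { return head.next == &head; }
};

struct Cap {
    Item cap_item;                 // like xlist<Cap*>::item inside Cap
};

int main() {
    List session_caps;
    {
        Cap cap;                   // stands in for the map-owned Cap
        session_caps.push_back(&cap.cap_item);
        assert(!session_caps.empty());
    }                              // Cap destroyed -> item unlinks itself
    assert(session_caps.empty());
}
```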

01/08/2018

10:28 PM Documentation #22599 (Resolved): doc: mds memory tracking of cache is imprecise by a constant factor
Patrick Donnelly
05:23 PM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
Jos Collin
02:45 PM Bug #22610 (In Progress): MDS: assert failure when the inode for the cap_export from other MDS ha...
Patrick Donnelly
08:04 AM Bug #22610: MDS: assert failure when the inode for the cap_export from other MDS happened not in ...
Filed a pull request: https://github.com/ceph/ceph/pull/19836 Jianyu Li
07:57 AM Bug #22610 (Resolved): MDS: assert failure when the inode for the cap_export from other MDS happe...
We use two active MDSes in our online environment. Recently mds.1 restarted, and during its rejoin phase mds.0 met asse... Jianyu Li
02:43 PM Bug #22551 (Need More Info): client: should flush dirty caps in background
Dongdong, can you explain in more detail what the problem is? Do you have an issue you've observed? Patrick Donnelly
02:40 PM Bug #21419: client: is ceph_caps_for_mode correct for r/o opens?
No, I've not had time to look at it. For now, I'll just mark this as low priority until I can revisit it. Jeff Layton
01:27 PM Backport #22569: jewel: doc: clarify path restriction instructions
Added follow-on cherry-pick https://github.com/ceph/ceph/pull/19840 Nathan Cutler
11:59 AM Backport #22569: jewel: doc: clarify path restriction instructions
Commit 85ac1cd which was a cherry-pick of d1277f1 fixing tracker issue http://tracker.ceph.com/issues/16906 introduce... Jos Collin
11:16 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
Nathan Cutler
05:16 AM Backport #22569 (In Progress): jewel: doc: clarify path restriction instructions
Jos Collin
04:24 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
Jos Collin
11:16 AM Documentation #16906 (Resolved): doc: clarify path restriction instructions
Nathan Cutler
04:31 AM Backport #22587 (In Progress): luminous: mds: mdsload debug too high
Prashant D
03:32 AM Backport #22587 (Need More Info): luminous: mds: mdsload debug too high
https://github.com/ceph/ceph/pull/19827 Prashant D
04:12 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
https://github.com/ceph/ceph/pull/19830 Shinobu Kinjo
04:12 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
Shinobu Kinjo wrote:
> -fix already in luminous-
Shinobu Kinjo
03:58 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
fix already in luminous Shinobu Kinjo
03:37 AM Backport #22573 (In Progress): luminous: AttributeError: 'LocalFilesystem' object has no attribut...
https://github.com/ceph/ceph/pull/19829 Prashant D

01/07/2018

04:48 AM Bug #22607: client: should delete cap in remove_cap
https://github.com/ceph/ceph/pull/19782 dongdong tao
04:48 AM Bug #22607 (Rejected): client: should delete cap in remove_cap
I think the cap should be deleted,
so that it can be removed from session->caps.
dongdong tao

01/05/2018

09:40 PM Bug #22051 (Need More Info): tests: Health check failed: Reduced data availability: 5 pgs peering...
Patrick Donnelly
09:37 PM Bug #21575 (Resolved): mds: client caps can go below hard-coded default (100)
Patrick Donnelly
09:34 PM Feature #20752 (Resolved): cap message flag which indicates if client still has pending capsnap
Patrick Donnelly
09:32 PM Bug #21419 (Need More Info): client: is ceph_caps_for_mode correct for r/o opens?
Jeff, any update on this? Patrick Donnelly
09:30 PM Documentation #21172: doc: Export over NFS
Ramana, any update on this? Patrick Donnelly
07:25 PM Documentation #22599 (Fix Under Review): doc: mds memory tracking of cache is imprecise by a cons...
https://github.com/ceph/ceph/pull/19807 Patrick Donnelly
07:19 PM Documentation #22599 (In Progress): doc: mds memory tracking of cache is imprecise by a constant ...
Patrick Donnelly
07:19 PM Documentation #22599 (Resolved): doc: mds memory tracking of cache is imprecise by a constant factor
MDS currently can use up much more memory than its mds_cache_memory_limit. This is more noticeable in deployments of a... Patrick Donnelly
06:44 PM Bug #22548 (Need More Info): mds: crash during recovery
Patrick Donnelly
05:09 PM Bug #21539 (Resolved): man: missing man page for mount.fuse.ceph
Jos Collin
02:51 PM Bug #21539: man: missing man page for mount.fuse.ceph
follow-on fix: https://github.com/ceph/ceph/pull/19792 Nathan Cutler
05:09 PM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
Jos Collin
04:08 PM Documentation #2206 (Resolved): Need a control command to gracefully shutdown an active MDS prior...
Sage Weil
03:02 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
Nathan Cutler
02:56 PM Bug #22595 (Closed): doc: mount.fuse.ceph is missing in index.rst
Luminous backport handled via #21539 Nathan Cutler
01:57 PM Bug #22595 (Fix Under Review): doc: mount.fuse.ceph is missing in index.rst
Jos Collin
01:57 PM Bug #22595: doc: mount.fuse.ceph is missing in index.rst
https://github.com/ceph/ceph/pull/19792 Jos Collin
01:56 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
mount.fuse.ceph is missing in http://docs.ceph.com/docs/master/cephfs/ Jos Collin
12:19 PM Backport #22590 (Resolved): jewel: ceph.in: tell mds does not understand --cluster
https://github.com/ceph/ceph/pull/19907 Nathan Cutler
12:18 PM Backport #22587 (Resolved): luminous: mds: mdsload debug too high
https://github.com/ceph/ceph/pull/19827 Nathan Cutler
12:17 PM Backport #22563 (In Progress): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed(...
Nathan Cutler
12:17 PM Backport #22564 (In Progress): luminous: Locker::calc_new_max_size does not take layout.stripe_co...
Nathan Cutler
12:16 PM Backport #22580 (Resolved): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full....
https://github.com/ceph/ceph/pull/19962 Nathan Cutler
12:16 PM Backport #22579 (Resolved): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full ...
https://github.com/ceph/ceph/pull/19830 Nathan Cutler
12:16 PM Backport #22573 (Resolved): luminous: AttributeError: 'LocalFilesystem' object has no attribute '...
https://github.com/ceph/ceph/pull/19829 Nathan Cutler
10:10 AM Backport #22569 (Fix Under Review): jewel: doc: clarify path restriction instructions
https://github.com/ceph/ceph/pull/19795 Jos Collin
09:39 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
https://github.com/ceph/ceph/pull/19795 and https://github.com/ceph/ceph/pull/19840 Jos Collin
09:39 AM Documentation #16906 (Pending Backport): doc: clarify path restriction instructions
Jos Collin
12:42 AM Bug #22483 (Pending Backport): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is...
https://github.com/ceph/ceph/pull/19602 Patrick Donnelly
12:40 AM Bug #22475 (Pending Backport): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
Patrick Donnelly

01/04/2018

07:32 PM Bug #22562 (Fix Under Review): mds: fix dump last_sent
Patrick Donnelly
03:57 AM Bug #22562: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19762 dongdong tao
03:57 AM Bug #22562 (Resolved): mds: fix dump last_sent
last_sent in capability is an integer dongdong tao
07:15 AM Backport #22564 (Resolved): luminous: Locker::calc_new_max_size does not take layout.stripe_count...
https://github.com/ceph/ceph/pull/19776 Zheng Yan
07:10 AM Backport #22563 (Resolved): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() f...
https://github.com/ceph/ceph/pull/19775 Zheng Yan
03:46 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
Jos Collin
03:26 AM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
I can't find any 'osd_op ... write' in the mds logs, so I have no clue how the corruption happened. Zheng Yan
01:48 AM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
Zheng Yan wrote:
> Can't find any log for "2017-12-16". Next time you do the experiment, please set debug_ms=1 for the mds.
Dear...
鹏 张

01/03/2018

06:11 PM Bug #22536 (Fix Under Review): client:_rmdir() uses a deleted memory structure(Dentry) leading a ...
Patrick Donnelly
05:40 PM Bug #22546 (Fix Under Review): client: dirty caps may never get the chance to flush
Patrick Donnelly
02:42 PM Feature #16775 (Fix Under Review): MDS command for listing open files
https://github.com/ceph/ceph/pull/19760 John Spray
01:04 PM Feature #16775: MDS command for listing open files
Could you please have a look at this PR:
https://github.com/ceph/ceph/pull/19760
dongdong tao
02:04 PM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
Can't find any log for "2017-12-16". Next time you do the experiment, please set debug_ms=1 for the mds. Zheng Yan
10:10 AM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
Zheng Yan wrote:
> please upload the ceph cluster log so I can check the timestamps of the mds failovers
Dear Zheng:
I ha...
鹏 张
03:45 AM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
Please upload the ceph cluster log so I can check the timestamps of the mds failovers. Zheng Yan
04:00 AM Bug #22547: active mds session miss for client
Zheng Yan wrote:
> Sorry. The whole process is:
>
> mds closes the client connection
> client's remote_reset call...
dongdong tao
02:51 AM Bug #22547: active mds session miss for client
Sorry. The whole process is:
mds closes the client connection
client's remote_reset callback gets called
client s...
Zheng Yan

01/02/2018

03:40 PM Bug #22547: active mds session miss for client
Zheng Yan wrote:
> dongdong tao wrote:
> > Zheng, if a client has been evicted by the mds, the client should still thin...
dongdong tao
01:47 AM Bug #22547: active mds session miss for client
dongdong tao wrote:
> Zheng, if a client has been evicted by the mds, the client should still think the connection is av...
Zheng Yan
03:17 PM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
Sage Weil
11:34 AM Backport #22552 (Fix Under Review): luminous: doc: epoch barrier mechanism not found
Jos Collin
10:57 AM Backport #22552: luminous: doc: epoch barrier mechanism not found
https://github.com/ceph/ceph/pull/19741 Jos Collin
10:43 AM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
Jos Collin
11:14 AM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
Jos Collin wrote:
> I don't see anything in the URLs provided. Additionally, this looks like a Support Case.
can ...
Yong Wang
11:10 AM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
wangyong wang wrote:
> Hi all.
> ==============================
> version: jewel 10.2.10 (professional rpms)
> no...
Yong Wang

01/01/2018

11:56 AM Bug #22547 (Need More Info): active mds session miss for client
Jos Collin
06:47 AM Bug #22542 (Pending Backport): doc: epoch barrier mechanism not found
Jos Collin

12/29/2017

04:17 PM Feature #22545: add dump inode command to mds
I just noticed it's almost the same as #11172. dongdong tao
03:35 PM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
Sage Weil
01:40 AM Bug #22551 (Need More Info): client: should flush dirty caps in background
Dirty data already has a background thread to do the flush, so we may need to flush dirty caps in the background too. dongdong tao
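A hedged sketch of the background-flush pattern being proposed, assuming a dedicated thread woken on a fixed interval; the class name, 5-second interval, and flush hook are assumptions for illustration, not the client's actual code:

```cpp
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

// Illustrative background flusher: a thread that wakes on an interval
// and flushes dirty caps, mirroring the background flush that dirty
// file data already gets.
class CapFlusher {
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread thread_;   // declared last so members above exist first

    void run() {
        std::unique_lock<std::mutex> l(m_);
        while (!stop_) {
            cv_.wait_for(l, std::chrono::seconds(5));
            if (!stop_)
                flush_dirty_caps();   // would walk the dirty-caps list
        }
    }
    void flush_dirty_caps() {
        std::cout << "flushing dirty caps in the background\n";
    }

public:
    CapFlusher() : thread_([this] { run(); }) {}
    ~CapFlusher() {   // stop and join on shutdown
        { std::lock_guard<std::mutex> l(m_); stop_ = true; }
        cv_.notify_one();
        thread_.join();
    }
};

int main() {
    CapFlusher flusher;
    std::this_thread::sleep_for(std::chrono::seconds(6));
}   // destructor stops the flusher
```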

12/28/2017

03:29 PM Bug #22550 (New): mds: FAILED assert(probe->known_size[p->oid] <= shouldbe) when mds start

I stopped the mds while copying files to the cluster; when I tried to start the mds later, I encountered a failed assertion.
...
jianxiong shao
02:05 PM Bug #22548: mds: crash during recovery
Just once.
It took quite a long time during recovery and then crashed. There are about 10M files in the file syst...
wei jin
01:46 PM Bug #22548: mds: crash during recovery
This probably can be fixed by ... How many times have you encountered this issue... Zheng Yan
07:15 AM Bug #22548: mds: crash during recovery
Zheng Yan wrote:
> which line triggered the assertion?
Hi, Yan,
this line:
0> 2017-12-27 23:27:05.892112 7f0...
wei jin
07:04 AM Bug #22548: mds: crash during recovery
Which line triggered the assertion? Zheng Yan
04:42 AM Bug #22548 (Need More Info): mds: crash during recovery
2017-12-27 23:27:05.919710 7f08483d0700 -1 *** Caught signal (Aborted) **
in thread 7f08483d0700 thread_name:ms_dis...
wei jin
12:53 PM Bug #22547: active mds session miss for client
By saying evicted, I mean due to the auto_close_timeout. dongdong tao
12:50 PM Bug #22547: active mds session miss for client
Zheng, if a client has been evicted by the mds, the client should still think the connection is available,
and when that...
dongdong tao
10:25 AM Bug #22547: active mds session miss for client
wei jin wrote:
> Ok. I will do it soon.
>
I cannot reproduce it after turning on the log, and it will have an impact ...
wei jin
07:21 AM Bug #22547: active mds session miss for client
Ok. I will do it soon.
This happened after I restarted the mds daemon last night. And there is also another crash (bug ...
wei jin
07:10 AM Bug #22547: active mds session miss for client
Please set debug_mds=10 and check why the mds evicted the client. It's likely that the docker host went to sleep or there was... Zheng Yan
04:34 AM Bug #22547 (Need More Info): active mds session miss for client
Our use case: k8s docker mounts cephfs using the cephfs kernel client.
If we do not use the 'mounted dir', after a wh...
wei jin
06:58 AM Feature #21156: mds: speed up recovery with many open inodes
Thanks, that can explain the scenario we have met;
sometimes my standby-replay mds spends too much time in the rejoin stat...
dongdong tao
06:56 AM Feature #21156: mds: speed up recovery with many open inodes
Besides, when there are lots of open inodes, it's not efficient to journal all of them in each log segment. Zheng Yan
02:46 AM Feature #21156: mds: speed up recovery with many open inodes
The mds needs to open all inodes with client caps during recovery; some of these inodes may not be in the journal. Zheng Yan
02:00 AM Feature #21156: mds: speed up recovery with many open inodes
Hi Zheng,
I'm not sure if I understand this correctly. Do you mean the mds cannot recover the open inode jus...
dongdong tao

12/27/2017

04:32 PM Bug #22546: client: dirty caps may never get the chance to flush
https://github.com/ceph/ceph/pull/19703
dongdong tao
04:05 PM Bug #22546 (Resolved): client: dirty caps may never get the chance to flush
currently, we flush the caps in the function Client::flush_caps_sync,
but there is a bug in this function.
because the ...
dongdong tao
03:54 PM Feature #22545: add dump inode command to mds
pull request:
https://github.com/ceph/ceph/pull/19677
dongdong tao
03:53 PM Feature #22545 (Duplicate): add dump inode command to mds
1. When the mds cache is really big, it's hard to dump all the cache.
2. Most of the time, we only want to know a speci...
dongdong tao
10:58 AM Bug #22542 (Fix Under Review): doc: epoch barrier mechanism not found
Jos Collin
10:21 AM Bug #22542: doc: epoch barrier mechanism not found
https://github.com/ceph/ceph/pull/19701 Jos Collin

12/26/2017

10:17 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
http://docs.ceph.com/docs/master/cephfs/full/ says "For more on the epoch barrier mechanism, see Ceph filesystem ... Jos Collin

12/25/2017

03:57 AM Bug #22536: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
fixed by https://github.com/ceph/ceph/pull/19672 Ivan Guan
03:45 AM Bug #22536 (Resolved): client:_rmdir() uses a deleted memory structure(Dentry) leading a core
Version: ceph-10.2.2
Bug description:
"::rmdir()" acquires the Dentry structure "by get_or_create(dir, name, &de...
Ivan Guan
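The use-after-free class of bug described here can be sketched as follows; the refcounting helpers and request function are invented stand-ins, assuming the fix is to hold an extra reference across the call that may drop the dentry:

```cpp
#include <cassert>

// Sketch of this use-after-free class of bug: code keeps a raw Dentry*
// across an operation that may drop the last reference, then touches
// freed memory. Holding an explicit reference across the call is the
// usual fix. All names here are invented, not the actual client code.
struct Dentry {
    int ref = 1;
    bool unlinked = false;
};

void get(Dentry* dn) { ++dn->ref; }
void put(Dentry*& dn) {
    if (--dn->ref == 0) { delete dn; dn = nullptr; }
}

// Stands in for the MDS round-trip inside _rmdir(): as a side effect it
// drops the directory's reference on the dentry.
void make_request_unlink(Dentry*& dir_ref) {
    dir_ref->unlinked = true;
    put(dir_ref);                 // may free the Dentry
}

int main() {
    Dentry* dn = new Dentry;      // reference held by the directory
    Dentry* dir_ref = dn;

    get(dn);                      // fix: pin the dentry ourselves
    make_request_unlink(dir_ref); // may drop the last other reference
    assert(dn->unlinked);         // safe only because we hold a ref
    put(dn);                      // now actually freed
}
```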

12/22/2017

11:47 AM Bug #22523 (Need More Info): Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous ...
I don't see anything in the URLs provided. Additionally, this looks like a Support Case. Jos Collin
09:16 AM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
We don't need to backport this fix to luminous. The commit that introduced
this bug, https://github.com/ceph/ceph/co...
Ramana Raja
04:53 AM Bug #22524 (Pending Backport): NameError: global name 'get_mds_map' is not defined
Patrick Donnelly
04:55 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
Patrick Donnelly
04:55 AM Bug #21853 (Pending Backport): mds: mdsload debug too high
Patrick Donnelly
04:55 AM Feature #19578 (Pending Backport): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
Patrick Donnelly
04:54 AM Bug #22492 (Pending Backport): Locker::calc_new_max_size does not take layout.stripe_count into a...
Patrick Donnelly
04:49 AM Backport #22503 (In Progress): luminous: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19646 Prashant D
02:37 AM Backport #22503: luminous: mds: read hang in multiple mds setup
I'm on it. Prashant D
12:28 AM Bug #22357: mds: read hang in multiple mds setup
https://github.com/ceph/ceph/pull/19414 Patrick Donnelly

12/21/2017

11:45 PM Bug #22487: mds: setattr blocked when metadata pool is full
Right. The full test should have no problem. Zheng Yan
10:27 PM Bug #22487: mds: setattr blocked when metadata pool is full
Presumably that would be because with the vstart config the MDS writes cannot actually be written whereas with the te... Patrick Donnelly
02:35 PM Bug #22487: mds: setattr blocked when metadata pool is full
I reproduced this locally.
It was caused by stuck log flush ...
Zheng Yan
10:38 PM Bug #22526 (Pending Backport): AttributeError: 'LocalFilesystem' object has no attribute 'ec_prof...
Fixed by: https://github.com/ceph/ceph/pull/19533 Patrick Donnelly
02:19 PM Bug #22526 (Resolved): AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
Hit an error while running a ceph_volume_client test on a vstart Ceph cluster using the command
LD_LIBRARY_PATH=`pwd...
Ramana Raja
04:13 PM Bug #22357: mds: read hang in multiple mds setup
I don't see any merged PR. Shinobu Kinjo
01:04 PM Bug #22524 (Fix Under Review): NameError: global name 'get_mds_map' is not defined
https://github.com/ceph/ceph/pull/19633 Ramana Raja
12:48 PM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
Hit an error while running test_volume_client on a dev vstart cluster (Ceph master) using
# LD_LIBRARY_PATH=`pwd`/lib...
Ramana Raja
12:05 PM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
type: fs
version: 10.2.10
Yong Wang
11:59 AM Bug #22523 (Closed): Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
Hi all.
==============================
version: jewel 10.2.10 (professional rpms)
nodes : 3 centos7.3
cephfs : k...
Yong Wang
10:08 AM Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 's...
https://github.com/ceph/ceph/pull/19628 Shinobu Kinjo
10:05 AM Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
https://github.com/ceph/ceph/pull/19627 Shinobu Kinjo
10:03 AM Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
https://github.com/ceph/ceph/pull/19626 Shinobu Kinjo
08:36 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
Nathan Cutler
 
