Activity
From 12/25/2017 to 01/23/2018
01/23/2018
- 06:06 PM Bug #21393 (Resolved): MDSMonitor: inconsistent role/who usage in command help
- 06:05 PM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
- Yes, let's forgo the luminous backport. Thanks for pointing that out, Nathan!
- 12:41 PM Bug #22776: mds: session count,dns and inos from cli "fs status" is always 0
- *PR*: https://github.com/ceph/ceph/pull/20079
- 12:07 PM Bug #22776 (Resolved): mds: session count,dns and inos from cli "fs status" is always 0
- ...
- 09:48 AM Backport #22762 (In Progress): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- 09:40 AM Backport #22762 (Resolved): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/20067
- 09:40 AM Backport #22765 (Resolved): luminous: client: avoid recursive lock in ll_get_vino
- https://github.com/ceph/ceph/pull/20086
- 09:40 AM Backport #22764 (Resolved): jewel: mds: crashes because of old pool id in journal header
- https://github.com/ceph/ceph/pull/20111
- 09:40 AM Backport #22763 (Resolved): luminous: mds: crashes because of old pool id in journal header
- https://github.com/ceph/ceph/pull/20085
01/22/2018
- 10:11 PM Bug #22754 (Resolved): mon: removing tier from an EC base pool is forbidden, even if allow_ec_ove...
- OSDMonitor::_check_remove_tier needs to be made aware that this should be permitted if the base tier is suitable for ...
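For context, a rough command sequence sketching the scenario described above (the pool names are placeholders, not from this ticket); the last step is the one OSDMonitor::_check_remove_tier currently refuses even when overwrites are enabled on the erasure-coded base pool:
```
ceph osd pool set ecpool allow_ec_overwrites true   # base pool can now take overwrites directly
ceph osd tier remove-overlay ecpool                 # stop routing client I/O through the cache tier
ceph osd tier remove ecpool cachepool               # currently rejected because the base pool is erasure-coded
```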
- 08:06 PM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
- Core: /ceph/teuthology-archive/yuriw-2018-01-19_18:23:03-powercycle-wip-yuri-master-1.19.18-distro-basic-smithi/20909...
- 02:14 PM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
- Assigned to CephFS because it's crashing in the ceph-fuse process (in the absence of a better home for ObjectCacher i...
- 03:03 PM Feature #12107 (In Progress): mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
- 07:28 AM Feature #12107: mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
- I'm working on this, please assign this to me
- 12:05 PM Backport #22508 (Need More Info): luminous: MDSMonitor: inconsistent role/who usage in command help
- Non-trivial backport - since it's essentially a documentation fix, I'm not sure if it's worth the risk.
- 11:18 AM Backport #22078 (In Progress): luminous: ceph.in: tell mds does not understand --cluster
01/20/2018
- 05:33 PM Bug #22741 (Resolved): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-mas...
- Run: http://pulpito.ceph.com/yuriw-2018-01-19_18:23:03-powercycle-wip-yuri-master-1.19.18-distro-basic-smithi/
Jobs:...
01/19/2018
- 04:31 AM Bug #22734: cephfs-journal-tool: may got assertion failure due to not shutdown
- https://github.com/ceph/ceph/pull/19991
- 04:22 AM Bug #22734 (Resolved): cephfs-journal-tool: may got assertion failure due to not shutdown
- ```
2018-01-14T19:36:56.381 INFO:teuthology.orchestra.run.smithi139.stderr:Error loading journal: (2) No such file o...
```
01/18/2018
- 11:02 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
- Thanks for the report anyway!
- 10:09 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
- We had several ceph-fuse crashes with errors like...
- 08:02 PM Bug #22730 (Fix Under Review): mds: scrub crash
- https://github.com/ceph/ceph/pull/20012
- 05:38 PM Bug #22730: mds: scrub crash
- Doug, please take a look at this one.
- 04:17 PM Bug #22730 (Resolved): mds: scrub crash
- This crash can be reproduced in 2 steps:
1. ceph daemon mds.a scrub_path <dir> recursive
2. ceph daemon mds.a scrub_...
- 12:43 AM Backport #22700 (In Progress): jewel: client:_rmdir() uses a deleted memory structure(Dentry) lea...
- https://github.com/ceph/ceph/pull/19993
- 12:27 AM Backport #22700: jewel: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- I'm on it.
01/17/2018
- 10:07 PM Bug #22683 (Fix Under Review): client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 03:34 PM Feature #4208: Add more replication pool tests for Hadoop / Ceph bindings
- Bulk move of hadoop category into FS project.
- 03:34 PM Feature #4361: Setup another gitbuilder VM for building external Hadoop git repo(s)
- Bulk move of hadoop category into FS project.
- 03:34 PM Bug #3544: ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also needs to ch...
- Bulk move of hadoop category into FS project.
- 03:34 PM Bug #1661: Hadoop: expected system directories not present
- Bulk move of hadoop category into FS project.
- 03:34 PM Bug #1663: Hadoop: file ownership/permission not available in hadoop
- Bulk move of hadoop category into FS project.
- 03:26 PM Bug #21748 (Can't reproduce): client assertions tripped during some workloads
- No response in several months, and I've never seen this trip in my own testing. Closing for now. Please reopen if you...
- 03:24 PM Bug #22003 (Resolved): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
- No response in two months. Closing bug.
Please reopen or comment if you've been able to test with that patch and i...
- 03:23 PM Bug #21419 (Rejected): client: is ceph_caps_for_mode correct for r/o opens?
- Ok, I think you're right. may_open happens at a higher level and we will simply request the caps at that point. False...
- 10:50 AM Bug #21734: mount client shows total capacity of cluster but not of a pool
- (Just moving this closed ticket because I'm deleting the bogus "cephfs" category in the toplevel Ceph project)
- 07:05 AM Backport #22719 (In Progress): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/19982
- 06:57 AM Backport #22719 (Resolved): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/19982
- 05:43 AM Backport #22590 (In Progress): jewel: ceph.in: tell mds does not understand --cluster
- 04:12 AM Bug #22629 (Pending Backport): client: avoid recursive lock in ll_get_vino
- 04:12 AM Bug #22631 (Pending Backport): mds: crashes because of old pool id in journal header
- 04:11 AM Backport #22690 (In Progress): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insuffic...
- https://github.com/ceph/ceph/pull/19976
- 04:10 AM Bug #22647 (Pending Backport): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- 02:38 AM Backport #22689 (In Progress): jewel: client: fails to release to revoking Fc
- 02:38 AM Backport #22689: jewel: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19975
01/16/2018
- 07:28 PM Bug #22428: mds: don't report slow request for blocked filelock request
- Here's a recent example from someone in #ceph:...
- 02:13 PM Backport #22688 (In Progress): luminous: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19970
- 08:16 AM Backport #22688 (Resolved): luminous: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/20342
- 01:57 PM Backport #22699 (In Progress): luminous: client:_rmdir() uses a deleted memory structure(Dentry) ...
- 01:57 PM Backport #22699 (Fix Under Review): luminous: client:_rmdir() uses a deleted memory structure(Den...
- https://github.com/ceph/ceph/pull/19968
- 08:17 AM Backport #22699 (Resolved): luminous: client:_rmdir() uses a deleted memory structure(Dentry) lea...
- https://github.com/ceph/ceph/pull/19968
- 08:34 AM Backport #22579 (In Progress): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster fu...
- 08:31 AM Backport #22580 (In Progress): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_fu...
- 08:23 AM Backport #22695 (In Progress): jewel: mds: fix dump last_sent
- 08:17 AM Backport #22695 (Resolved): jewel: mds: fix dump last_sent
- https://github.com/ceph/ceph/pull/19961
- 08:22 AM Backport #22694 (In Progress): luminous: mds: fix dump last_sent
- 08:17 AM Backport #22694 (Resolved): luminous: mds: fix dump last_sent
- https://github.com/ceph/ceph/pull/19959
- 08:17 AM Backport #22700 (Resolved): jewel: client:_rmdir() uses a deleted memory structure(Dentry) leadin...
- https://github.com/ceph/ceph/pull/19993
- 08:17 AM Backport #22697 (Rejected): jewel: client: dirty caps may never get the chance to flush
- 08:17 AM Backport #22696 (Resolved): luminous: client: dirty caps may never get the chance to flush
- https://github.com/ceph/ceph/pull/21278
- 08:16 AM Backport #22690 (Resolved): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficien...
- https://github.com/ceph/ceph/pull/19976
- 08:16 AM Backport #22689 (Resolved): jewel: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19975
- 06:38 AM Bug #22683: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- https://github.com/ceph/ceph/pull/19957
- 02:47 AM Bug #22683 (Resolved): client: coredump when nfs-ganesha use ceph_ll_get_inode()
- Environment:
nfs : nfs-ganesha 2.5.4 + https://github.com/nfs-ganesha/nfs-ganesha/commit/476c2068bd4a3fd22f0d...
01/15/2018
- 02:36 PM Bug #22610 (Fix Under Review): MDS: assert failure when the inode for the cap_export from other M...
01/13/2018
01/12/2018
- 10:42 PM Bug #22652 (Pending Backport): client: fails to release to revoking Fc
- 10:39 PM Bug #22646 (Pending Backport): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- 03:49 PM Feature #21995 (In Progress): ceph-fuse: support nfs export
- 11:07 AM Feature #21156 (In Progress): mds: speed up recovery with many open inodes
01/11/2018
- 10:50 PM Backport #22508: luminous: MDSMonitor: inconsistent role/who usage in command help
- See also: https://github.com/ceph/ceph/pull/19926
- 10:29 PM Bug #21393: MDSMonitor: inconsistent role/who usage in command help
- The fix for this causes upgrade tests to fail: http://tracker.ceph.com/issues/22527#note-9
We will probably need t...
- 08:39 AM Bug #22652 (Fix Under Review): client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19920
- 08:37 AM Bug #22652: client: fails to release to revoking Fc
- The hanging fuse_reverse_inval_inode() was caused by hung page writeback.
01/10/2018
- 11:24 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
- https://github.com/ceph/ceph/pull/19907
- 10:44 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
- I'm on it.
- 04:41 PM Bug #22631 (Fix Under Review): mds: crashes because of old pool id in journal header
- 03:41 PM Backport #22076 (In Progress): luminous: 'ceph tell mds' commands result in 'File exists' errors ...
- 03:17 PM Backport #22076 (Fix Under Review): luminous: 'ceph tell mds' commands result in 'File exists' er...
- 02:45 PM Bug #22652: client: fails to release to revoking Fc
- 01:29 PM Bug #22652: client: fails to release to revoking Fc
- I reproduced it locally. It seems like a kernel issue. The issue happens only when fuse_use_invalidate_cb is true.
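For reference, a minimal ceph.conf sketch of the option named above, in case someone wants to check whether their client's behaviour depends on it (whether toggling it is a safe workaround is not claimed here):
```
[client]
    # controls whether ceph-fuse asks the kernel to invalidate cached inode data and dentries
    fuse_use_invalidate_cb = true
```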
- 11:02 AM Bug #22652 (Resolved): client: fails to release to revoking Fc
- http://pulpito.ceph.com/pdonnell-2018-01-09_21:14:38-multimds-wip-pdonnell-testing-20180109.193634-testing-basic-smit...
- 05:54 AM Bug #22647 (Fix Under Review): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/19891
- 02:34 AM Bug #22647 (Resolved): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- ...
- 01:08 AM Bug #22629 (Fix Under Review): client: avoid recursive lock in ll_get_vino
- 01:05 AM Bug #22562 (Pending Backport): mds: fix dump last_sent
- 01:05 AM Bug #22546 (Pending Backport): client: dirty caps may never get the chance to flush
- 01:04 AM Bug #22536 (Pending Backport): client:_rmdir() uses a deleted memory structure(Dentry) leading a ...
- 12:44 AM Bug #22646: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- https://github.com/ceph/ceph/pull/19885
- 12:40 AM Bug #22646 (Resolved): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- This causes startup to fail for ec pool configurations.
(This was included in my fix for #22627 but I'm breaking i...
01/09/2018
- 04:03 PM Bug #22631: mds: crashes because of old pool id in journal header
- https://github.com/ceph/ceph/pull/19860
- 08:38 AM Bug #22631: mds: crashes because of old pool id in journal header
- Going through the code, we found it is because of the old pool id in the journal header.
My solution is to add "set pool_id"...
- 08:35 AM Bug #22631 (Resolved): mds: crashes because of old pool id in journal header
- We used the rados cppool command to copy the cephfs metadata pool,
but after the copy was done, the mds would keep crashing when ...
- 02:53 PM Backport #21948 (In Progress): luminous: MDSMonitor: mons should reject misconfigured mds_blackli...
- 02:43 PM Backport #21874 (In Progress): luminous: qa: libcephfs_interface_tests: shutdown race failures
- 02:43 PM Backport #21870 (In Progress): luminous: Assertion in EImportStart::replay should be a damaged()
- 01:02 PM Feature #22545 (Fix Under Review): add dump inode command to mds
- 09:18 AM Backport #22630 (Resolved): doc: misc fixes for CephFS best practices
- This is a backport of: https://github.com/ceph/ceph/pull/19791
- 08:13 AM Backport #22630 (Resolved): doc: misc fixes for CephFS best practices
- https://github.com/ceph/ceph/pull/19858
- 07:47 AM Bug #22629: client: avoid recursive lock in ll_get_vino
- https://github.com/ceph/ceph/pull/19837
- 07:47 AM Bug #22629 (Resolved): client: avoid recursive lock in ll_get_vino
- ll_get_vino locks the client_lock;
the caller must not already hold it.
- 04:54 AM Bug #21991 (Resolved): mds: tell session ls returns vanila EINVAL when MDS is not active
- 04:19 AM Bug #22627 (Fix Under Review): qa: kcephfs lacks many configurations in the fs/multimds suites
- https://github.com/ceph/ceph/pull/19856
- 04:17 AM Bug #22627 (Resolved): qa: kcephfs lacks many configurations in the fs/multimds suites
- In particular:
o Not using the common overrides/
o Not using 8 OSDs for EC configurations
o Not using openstack ...
- 03:48 AM Bug #22626 (Rejected): mds: sessionmap version mismatch when replay esessions
- Zhang, we are not accepting bugs for multimds clusters on jewel. You can still seek help/advice on ceph-users if you ...
- 03:21 AM Bug #22626 (Rejected): mds: sessionmap version mismatch when replay esessions
- We used ceph 10.2.10 and backported this PR: https://github.com/ceph/ceph/commit/a49726e10ef23be124d92872470fd258a193...
- 03:46 AM Bug #22551: client: should flush dirty caps on backgroud
- That's what I'm concerned about: maybe it's not being flushed periodically. It should be easy to verify; I will do it.
- 03:43 AM Bug #22551: client: should flush dirty caps on backgroud
- Dirty metadata should be flushed when the cap is released. It may also happen periodically (I'm not certain).
- 01:59 AM Bug #22551: client: should flush dirty caps on backgroud
- I will write a test case to verify it.
- 01:49 AM Bug #22551: client: should flush dirty caps on backgroud
- I'm not sure if I'm right: suppose there is only one client, which opened a file, wrote some data, and did not close it. I know...
- 01:09 AM Bug #22607 (Rejected): client: should delete cap in remove_cap
- The cap is deleted via "in->caps.erase(mds)". The session xlist entry is deleted in the Cap destructor.
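To illustrate that reasoning with a stand-alone sketch (simplified stand-in types, not the actual Ceph Client/Cap/xlist classes): erasing the map entry destroys the cap, and the destructor is what unlinks it from the session's list, so no extra delete is needed in remove_cap().
```cpp
#include <cstdio>
#include <list>
#include <map>

struct Session;

struct Cap {
    Session* session = nullptr;
    std::list<Cap*>::iterator pos;   // stand-in for the xlist item linking the cap into the session
    ~Cap();                          // destructor unlinks the cap from the session
};

struct Session {
    std::list<Cap*> caps;            // stand-in for session->caps
};

Cap::~Cap() {
    if (session)
        session->caps.erase(pos);    // the "session xlist entry" removal happens here
}

int main() {
    Session s;
    std::map<int, Cap> caps;         // stand-in for in->caps, keyed by mds rank

    Cap& c = caps[0];                // create a cap for mds.0
    c.session = &s;
    c.pos = s.caps.insert(s.caps.end(), &c);

    caps.erase(0);                   // the "in->caps.erase(mds)" step: runs ~Cap(), which unlinks it
    std::printf("caps left in session: %zu\n", s.caps.size());   // prints 0
    return 0;
}
```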
01/08/2018
- 10:28 PM Documentation #22599 (Resolved): doc: mds memory tracking of cache is imprecise by a constant factor
- 05:23 PM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
- 02:45 PM Bug #22610 (In Progress): MDS: assert failure when the inode for the cap_export from other MDS ha...
- 08:04 AM Bug #22610: MDS: assert failure when the inode for the cap_export from other MDS happened not in ...
- Fire a pull request: https://github.com/ceph/ceph/pull/19836
- 07:57 AM Bug #22610 (Resolved): MDS: assert failure when the inode for the cap_export from other MDS happe...
- We use two active MDS in our online environment, recently mds.1 restarted and during its rejoin phase, mds.0 met asse...
- 02:43 PM Bug #22551 (Need More Info): client: should flush dirty caps on backgroud
- Dongdong, can you explain more what the problem is? Do you have an issue you've observed?
- 02:40 PM Bug #21419: client: is ceph_caps_for_mode correct for r/o opens?
- No, I've not had time to look at it. For now, I'll just mark this as low priority until I can revisit it.
- 01:27 PM Backport #22569: jewel: doc: clarify path restriction instructions
- Added follow-on cherry-pick https://github.com/ceph/ceph/pull/19840
- 11:59 AM Backport #22569: jewel: doc: clarify path restriction instructions
- Commit 85ac1cd which was a cherry-pick of d1277f1 fixing tracker issue http://tracker.ceph.com/issues/16906 introduce...
- 11:16 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
- 05:16 AM Backport #22569 (In Progress): jewel: doc: clarify path restriction instructions
- 04:24 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
- 11:16 AM Documentation #16906 (Resolved): doc: clarify path restriction instructions
- 04:31 AM Backport #22587 (In Progress): luminous: mds: mdsload debug too high
- 03:32 AM Backport #22587 (Need More Info): luminous: mds: mdsload debug too high
- https://github.com/ceph/ceph/pull/19827
- 04:12 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
- https://github.com/ceph/ceph/pull/19830
- 04:12 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
- Shinobu Kinjo wrote:
> fix already in luminous
- 03:58 AM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
- fix already in luminous
- 03:37 AM Backport #22573 (In Progress): luminous: AttributeError: 'LocalFilesystem' object has no attribut...
- https://github.com/ceph/ceph/pull/19829
01/07/2018
- 04:48 AM Bug #22607: client: should delete cap in remove_cap
- https://github.com/ceph/ceph/pull/19782
- 04:48 AM Bug #22607 (Rejected): client: should delete cap in remove_cap
- I think the cap should be deleted, so that it can be removed from session->caps.
01/05/2018
- 09:40 PM Bug #22051 (Need More Info): tests: Health check failed: Reduced data availability: 5 pgs peering...
- 09:37 PM Bug #21575 (Resolved): mds: client caps can go below hard-coded default (100)
- 09:34 PM Feature #20752 (Resolved): cap message flag which indicates if client still has pending capsnap
- 09:32 PM Bug #21419 (Need More Info): client: is ceph_caps_for_mode correct for r/o opens?
- Jeff, any update on this?
- 09:30 PM Documentation #21172: doc: Export over NFS
- Ramana, any update on this?
- 07:25 PM Documentation #22599 (Fix Under Review): doc: mds memory tracking of cache is imprecise by a cons...
- https://github.com/ceph/ceph/pull/19807
- 07:19 PM Documentation #22599 (In Progress): doc: mds memory tracking of cache is imprecise by a constant ...
- 07:19 PM Documentation #22599 (Resolved): doc: mds memory tracking of cache is imprecise by a constant factor
- The MDS can currently use much more memory than its mds_cache_memory_limit. This is more noticeable in deployments of a...
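As a purely illustrative ceph.conf sketch (the value and the idea of leaving headroom are assumptions for the example, not figures from this ticket), the limit is set in bytes under [mds]:
```
[mds]
    # actual MDS memory use can exceed this by a constant factor,
    # so keep it comfortably below the memory budgeted for the daemon
    mds cache memory limit = 4294967296   # 4 GiB
```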
- 06:44 PM Bug #22548 (Need More Info): mds: crash during recovery
- 05:09 PM Bug #21539 (Resolved): man: missing man page for mount.fuse.ceph
- 02:51 PM Bug #21539: man: missing man page for mount.fuse.ceph
- follow-on fix: https://github.com/ceph/ceph/pull/19792
- 05:09 PM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
- 04:08 PM Documentation #2206 (Resolved): Need a control command to gracefully shutdown an active MDS prior...
- 03:02 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
- 02:56 PM Bug #22595 (Closed): doc: mount.fuse.ceph is missing in index.rst
- Luminous backport handled via #21539
- 01:57 PM Bug #22595 (Fix Under Review): doc: mount.fuse.ceph is missing in index.rst
- 01:57 PM Bug #22595: doc: mount.fuse.ceph is missing in index.rst
- https://github.com/ceph/ceph/pull/19792
- 01:56 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
- mount.fuse.ceph is missing in http://docs.ceph.com/docs/master/cephfs/
- 12:19 PM Backport #22590 (Resolved): jewel: ceph.in: tell mds does not understand --cluster
- https://github.com/ceph/ceph/pull/19907
- 12:18 PM Backport #22587 (Resolved): luminous: mds: mdsload debug too high
- https://github.com/ceph/ceph/pull/19827
- 12:17 PM Backport #22563 (In Progress): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed(...
- 12:17 PM Backport #22564 (In Progress): luminous: Locker::calc_new_max_size does not take layout.stripe_co...
- 12:16 PM Backport #22580 (Resolved): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full....
- https://github.com/ceph/ceph/pull/19962
- 12:16 PM Backport #22579 (Resolved): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full ...
- https://github.com/ceph/ceph/pull/19830
- 12:16 PM Backport #22573 (Resolved): luminous: AttributeError: 'LocalFilesystem' object has no attribute '...
- https://github.com/ceph/ceph/pull/19829
- 10:10 AM Backport #22569 (Fix Under Review): jewel: doc: clarify path restriction instructions
- https://github.com/ceph/ceph/pull/19795
- 09:39 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
- https://github.com/ceph/ceph/pull/19795 and https://github.com/ceph/ceph/pull/19840
- 09:39 AM Documentation #16906 (Pending Backport): doc: clarify path restriction instructions
- 12:42 AM Bug #22483 (Pending Backport): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is...
- https://github.com/ceph/ceph/pull/19602
- 12:40 AM Bug #22475 (Pending Backport): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
01/04/2018
- 07:32 PM Bug #22562 (Fix Under Review): mds: fix dump last_sent
- 03:57 AM Bug #22562: mds: fix dump last_sent
- https://github.com/ceph/ceph/pull/19762
- 03:57 AM Bug #22562 (Resolved): mds: fix dump last_sent
- last_sent in capability is an integer
- 07:15 AM Backport #22564 (Resolved): luminous: Locker::calc_new_max_size does not take layout.stripe_count...
- https://github.com/ceph/ceph/pull/19776
- 07:10 AM Backport #22563 (Resolved): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() f...
- https://github.com/ceph/ceph/pull/19775
- 03:46 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
- 03:26 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- I can't find any 'osd_op ... write' in the mds logs, so I can't find any clue as to how the corruption happened.
- 01:48 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Zheng Yan wrote:
> can't any log for "2017-12-16". next time you do experiment,please set debug_ms=1 for mds
Dear...
01/03/2018
- 06:11 PM Bug #22536 (Fix Under Review): client:_rmdir() uses a deleted memory structure(Dentry) leading a ...
- 05:40 PM Bug #22546 (Fix Under Review): client: dirty caps may never get the chance to flush
- 02:42 PM Feature #16775 (Fix Under Review): MDS command for listing open files
- https://github.com/ceph/ceph/pull/19760
- 01:04 PM Feature #16775: MDS command for listing open files
- Could you please have a look at this PR:
https://github.com/ceph/ceph/pull/19760
- 02:04 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Can't find any log for "2017-12-16". Next time you do the experiment, please set debug_ms=1 for the mds.
- 10:10 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Zheng Yan wrote:
> please upload ceph cluster log. So I can check timestamp of mds failovers
Dear Zheng:
I ha...
- 03:45 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- please upload ceph cluster log. So I can check timestamp of mds failovers
- 04:00 AM Bug #22547: active mds session miss for client
- Zheng Yan wrote:
> Sorry. the while the process is:
>
> mds close client connection
> client's remote_reset call...
- 02:51 AM Bug #22547: active mds session miss for client
- Sorry. The whole process is:
mds close client connection
client's remote_reset callback gets called
client s...
01/02/2018
- 03:40 PM Bug #22547: active mds session miss for client
- Zheng Yan wrote:
> dongdong tao wrote:
> > zheng, if a client has been evicted by mds, the client should still thin...
- 01:47 AM Bug #22547: active mds session miss for client
- dongdong tao wrote:
> zheng, if a client has been evicted by mds, the client should still think the connection is av...
- 03:17 PM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
- 11:34 AM Backport #22552 (Fix Under Review): luminous: doc: epoch barrier mechanism not found
- 10:57 AM Backport #22552: luminous: doc: epoch barrier mechanism not found
- https://github.com/ceph/ceph/pull/19741
- 10:43 AM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
- 11:14 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Jos Collin wrote:
> I don't see anything in the URLs provided. Additionally, this looks like a Support Case.
can ...
- 11:10 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- wangyong wang wrote:
> Hi all.
> ==============================
> version: jewel 10.2.10 (professional rpms)
> no...
01/01/2018
- 11:56 AM Bug #22547 (Need More Info): active mds session miss for client
- 06:47 AM Bug #22542 (Pending Backport): doc: epoch barrier mechanism not found
12/29/2017
- 04:17 PM Feature #22545: add dump inode command to mds
- I just noticed it's almost the same as #11172.
- 03:35 PM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
- 01:40 AM Bug #22551 (Need More Info): client: should flush dirty caps on backgroud
- The dirty data already has a background thread to do the flush, so we may need to flush dirty caps in the background too.
12/28/2017
- 03:29 PM Bug #22550 (New): mds: FAILED assert(probe->known_size[p->oid] <= shouldbe) when mds start
- I stopped the mds while copying files to the cluster; when I tried to start the mds later, I encountered a failed assertion.
...
- 02:05 PM Bug #22548: mds: crash during recovery
- Just once.
It took quite a long time during recovery and then crashed. There are about 10M files in the file syst...
- 01:46 PM Bug #22548: mds: crash during recovery
- This probably can be fixed by. How many times have you encountered this issue...
- 07:15 AM Bug #22548: mds: crash during recovery
- Zheng Yan wrote:
> which line trigger the assertion
Hi, yan
this line:
0> 2017-12-27 23:27:05.892112 7f0...
- 07:04 AM Bug #22548: mds: crash during recovery
- which line trigger the assertion
- 04:42 AM Bug #22548 (Need More Info): mds: crash during recovery
- 2017-12-27 23:27:05.919710 7f08483d0700 -1 *** Caught signal (Aborted) **
in thread 7f08483d0700 thread_name:ms_dis...
- 12:53 PM Bug #22547: active mds session miss for client
- By saying evicted, I mean due to the auto_close_timeout.
- 12:50 PM Bug #22547: active mds session miss for client
- Zheng, if a client has been evicted by the mds, the client should still think the connection is available,
and when that...
- 10:25 AM Bug #22547: active mds session miss for client
- wei jin wrote:
> Ok. I will do it soon.
>
I cannot reproduce it after enabling the log, and it will have an impact ...
- 07:21 AM Bug #22547: active mds session miss for client
- Ok. I will do it soon.
This happened after I restarted the mds daemon last night. There is also another crash (bug ...
- 07:10 AM Bug #22547: active mds session miss for client
- Please set debug_mds=10 and check why the mds evicted the client. It's likely that the docker host went to sleep or there was...
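One way to apply that suggestion (mds.a is a placeholder daemon name) is to raise the debug level at runtime through the admin socket:
```
ceph daemon mds.a config set debug_mds 10
# (equivalently, "debug mds = 10" under [mds] in ceph.conf, then restart the daemon)
```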
- 04:34 AM Bug #22547 (Need More Info): active mds session miss for client
- Our use case: k8s docker mounts cephfs using the cephfs kernel client.
If we do not use the 'mounted dir', after a wh... - 06:58 AM Feature #21156: mds: speed up recovery with many open inodes
- Thanks, that can explain the scenario we have met;
sometimes my standby-replay mds spends too much time in the rejoin stat...
- besides, when there are lots of open inodes, it's not efficient to journal all of them in each log segment.
- 02:46 AM Feature #21156: mds: speed up recovery with many open inodes
- The mds needs to open all inodes with client caps during recovery. Some of these inodes may not be in the journal.
- 02:00 AM Feature #21156: mds: speed up recovery with many open inodes
- Hi Zheng,
I'm not sure if I understand this correctly. Do you mean the mds cannot recover the open inode jus...
12/27/2017
- 04:32 PM Bug #22546: client: dirty caps may never get the chance to flush
- https://github.com/ceph/ceph/pull/19703
- 04:05 PM Bug #22546 (Resolved): client: dirty caps may never get the chance to flush
- Currently, we flush the caps in the function Client::flush_caps_sync,
but there is a bug in this function,
because the ...
- 03:54 PM Feature #22545: add dump inode command to mds
- pull request:
https://github.com/ceph/ceph/pull/19677
- 03:53 PM Feature #22545 (Duplicate): add dump inode command to mds
- 1. When the mds cache is really big, it's hard to dump all the cache.
2. Most of the time, we only want to know a speci...
- 10:58 AM Bug #22542 (Fix Under Review): doc: epoch barrier mechanism not found
- 10:21 AM Bug #22542: doc: epoch barrier mechanism not found
- https://github.com/ceph/ceph/pull/19701
12/26/2017
- 10:17 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
- [[http://docs.ceph.com/docs/master/cephfs/full/]] says "For more on the epoch barrier mechanism, see Ceph filesystem ...
12/25/2017
- 03:57 AM Bug #22536: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- fixed by https://github.com/ceph/ceph/pull/19672
- 03:45 AM Bug #22536 (Resolved): client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- Version: ceph-10.2.2
Bug description:
"::rmdir()" acquires the Dentry structure "by get_or_create(dir, name, &de...