Activity

From 01/10/2018 to 02/08/2018

02/08/2018

09:35 PM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
https://github.com/ceph/ceph/pull/20310 Patrick Donnelly
09:35 PM Bug #22886 (Pending Backport): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full...
Patrick Donnelly
09:34 PM Bug #22821 (Pending Backport): mds: session reference leak
Patrick Donnelly
09:34 PM Bug #22824 (Pending Backport): Journaler::flush() may flush less data than expected, which causes...
Patrick Donnelly
09:33 PM Bug #22754 (Pending Backport): mon: removing tier from an EC base pool is forbidden, even if allo...
Patrick Donnelly
09:31 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
https://github.com/ceph/ceph/pull/20200 Patrick Donnelly
09:31 PM Bug #22801 (Resolved): client: Client::flush_snaps still uses obsolete Client::user_id/group_id
Patrick Donnelly
06:06 PM Backport #22864 (Resolved): luminous: mds: scrub crash
Nathan Cutler
06:05 PM Backport #22860 (Resolved): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
Nathan Cutler
06:05 PM Bug #22776 (Resolved): mds: session count,dns and inos from cli "fs status" is always 0
Nathan Cutler
06:05 PM Backport #22859 (Resolved): luminous: mds: session count,dns and inos from cli "fs status" is alw...
Nathan Cutler
06:04 PM Bug #22610 (Resolved): MDS: assert failure when the inode for the cap_export from other MDS happe...
Nathan Cutler
06:04 PM Backport #22867 (Resolved): luminous: MDS: assert failure when the inode for the cap_export from ...
Nathan Cutler
06:04 PM Bug #21892 (Resolved): limit size of subtree migration
Nathan Cutler
06:03 PM Backport #22242 (Resolved): luminous: mds: limit size of subtree migration
Nathan Cutler
06:03 PM Backport #22240 (Resolved): luminous: Processes stuck waiting for write with ceph-fuse
Nathan Cutler
05:55 PM Bug #21568 (Resolved): MDSMonitor commands crashing on cluster upgraded from Hammer (nonexistent ...
Patrick Donnelly
05:55 PM Backport #21953 (Resolved): luminous: MDSMonitor commands crashing on cluster upgraded from Hamme...
Patrick Donnelly
05:52 PM Bug #22058 (Resolved): mds: admin socket wait for scrub completion is racy
Nathan Cutler
05:52 PM Backport #22907 (Resolved): luminous: mds: admin socket wait for scrub completion is racy
Nathan Cutler
05:51 PM Bug #18743 (Resolved): Scrub considers dirty backtraces to be damaged, puts in damage table even ...
Nathan Cutler
05:51 PM Backport #22089 (Resolved): luminous: Scrub considers dirty backtraces to be damaged, puts in dam...
Nathan Cutler
12:25 PM Feature #22929: libcephfs.pyx: add chown and chmod functions
Patrick Donnelly wrote:
> Thanks for the report. Jan, would you like to work on this? We appreciate PRs :)
Hi Pat...
Jan Vondra
10:24 AM Bug #22925: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when there are lot...
please try
https://github.com/ukernel/ceph/commit/7db1563416b5559310dbbc834795b83a4ccdaab4
Zheng Yan
07:07 AM Bug #22925: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when there are lot...
Hi, Patrick, as discussed yesterday, in our case, the whole procedure of a single run of "getattr" ops processing is ... Xuehan Xu
01:16 AM Backport #22936 (In Progress): luminous: client: readdir bug
https://github.com/ceph/ceph/pull/20356 Prashant D
12:02 AM Backport #22935 (In Progress): luminous: client: setattr should drop "Fs" rather than "As" for mt...
https://github.com/ceph/ceph/pull/20354 Prashant D

02/07/2018

10:45 PM Backport #22864: luminous: mds: scrub crash
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20249
merged
Yuri Weinstein
10:45 PM Backport #22860: luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20256
merged
Yuri Weinstein
10:44 PM Backport #22859: luminous: mds: session count,dns and inos from cli "fs status" is always 0
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20299
merged
Yuri Weinstein
10:44 PM Backport #22867: luminous: MDS: assert failure when the inode for the cap_export from other MDS h...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20300
merged
Yuri Weinstein
10:41 PM Backport #22242: luminous: mds: limit size of subtree migration
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20339
merged
Yuri Weinstein
10:41 PM Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20340
merged
Yuri Weinstein
10:40 PM Backport #22907: luminous: mds: admin socket wait for scrub completion is racy
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20341
merged
Yuri Weinstein
10:40 PM Backport #22089: luminous: Scrub considers dirty backtraces to be damaged, puts in damage table e...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20341
merged
Yuri Weinstein
05:37 PM Backport #22382 (Rejected): jewel: client: give more descriptive error message for remount failures
Rejected because not needed with http://tracker.ceph.com/issues/22378 Patrick Donnelly
05:37 PM Bug #22254 (Resolved): client: give more descriptive error message for remount failures
No, I guess not because of http://tracker.ceph.com/issues/22378 Patrick Donnelly
07:35 AM Bug #22254: client: give more descriptive error message for remount failures
@Patrick: You rejected the luminous backport, but the jewel backport should still go forward - correct? Nathan Cutler
03:25 PM Bug #22948 (Resolved): client: wire up ceph_ll_readv and ceph_ll_writev
These two functions are stubbed out in the client libraries and always just return -1. Wire them into the backend inf... Jeff Layton
05:01 AM Backport #22936 (Resolved): luminous: client: readdir bug
https://github.com/ceph/ceph/pull/20356 Nathan Cutler
05:01 AM Backport #22935 (Resolved): luminous: client: setattr should drop "Fs" rather than "As" for mtime...
https://github.com/ceph/ceph/pull/20354 Nathan Cutler
01:51 AM Bug #22869: compiling Client.cc generate warnings
Yes, gcc 8.0.1 gives plenty of errors and warnings everywhere in the unmodified code. Yesterday, I have managed t... Jos Collin

02/06/2018

11:17 PM Bug #21406 (Resolved): ceph.in: tell mds does not understand --cluster
Nathan Cutler
11:16 PM Backport #22590 (Resolved): jewel: ceph.in: tell mds does not understand --cluster
Nathan Cutler
09:39 PM Feature #22929: libcephfs.pyx: add chown and chmod functions
Thanks for the report. Jan, would you like to work on this? We appreciate PRs :) Patrick Donnelly
12:31 PM Feature #22929 (New): libcephfs.pyx: add chown and chmod functions
Chown and chmod functions are included in libcephfs.h but there are no equivalents in python binding. The only workar... Jan Vondra
09:09 PM Bug #22933 (Resolved): client: add option descriptions and review levels (e.g. LEVEL_DEV)
Patrick Donnelly
08:50 PM Bug #22869: compiling Client.cc generate warnings
Jos Collin wrote:
> Patrick,
>
> I'm using: gcc (GCC) 8.0.1 20180131 (Red Hat 8.0.1-0.9).
>
> At first look, ...
Patrick Donnelly
07:05 PM Bug #22163 (Resolved): request that is "!mdr->is_replay() && mdr->is_queued_for_replay()" may han...
Nathan Cutler
07:04 PM Backport #22237 (Resolved): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_re...
Nathan Cutler
05:40 PM Backport #22237: luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_replay()" may...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19157
merged
Yuri Weinstein
06:03 PM Backport #22688: luminous: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/20342 Patrick Donnelly
05:54 PM Backport #22907 (In Progress): luminous: mds: admin socket wait for scrub completion is racy
https://github.com/ceph/ceph/pull/20341 Patrick Donnelly
05:53 PM Backport #22089 (In Progress): luminous: Scrub considers dirty backtraces to be damaged, puts in ...
https://github.com/ceph/ceph/pull/20341 Patrick Donnelly
05:46 PM Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
https://github.com/ceph/ceph/pull/20340 Patrick Donnelly
05:46 PM Backport #22240 (In Progress): luminous: Processes stuck waiting for write with ceph-fuse
http://tracker.ceph.com/issues/22240 Patrick Donnelly
05:41 PM Backport #22381 (Rejected): luminous: client: give more descriptive error message for remount fai...
This one is no longer necessary after: http://tracker.ceph.com/issues/22339 Patrick Donnelly
05:35 PM Backport #22242: luminous: mds: limit size of subtree migration
https://github.com/ceph/ceph/pull/20339 Patrick Donnelly
05:30 PM Backport #22242 (In Progress): luminous: mds: limit size of subtree migration
Patrick Donnelly
02:39 PM Bug #22647 (Resolved): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
Nathan Cutler
02:39 PM Backport #22762 (Resolved): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
Nathan Cutler
01:48 PM Backport #22689 (Resolved): jewel: client: fails to release to revoking Fc
Nathan Cutler
01:46 PM Bug #21501 (Resolved): ceph_volume_client: sets invalid caps for existing IDs with no caps
Nathan Cutler
01:45 PM Backport #21626 (Resolved): jewel: ceph_volume_client: sets invalid caps for existing IDs with no...
Nathan Cutler
01:43 PM Bug #21423 (Resolved): qa: test_client_pin times out waiting for dentry release from kernel
Nathan Cutler
01:42 PM Backport #21519 (Resolved): jewel: qa: test_client_pin times out waiting for dentry release from ...
Nathan Cutler
01:41 PM Backport #21067 (Resolved): jewel: MDS integer overflow fix
Nathan Cutler
08:49 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> The logs show that the client held caps on stray inodes, which is the root cause of the issue.
>
>...
junming rao
08:48 AM Backport #22865 (In Progress): jewel: mds: scrub crash
https://github.com/ceph/ceph/pull/20335 Prashant D
06:18 AM Backport #22863 (In Progress): jewel: cephfs-journal-tool: may got assertion failure due to not s...
https://github.com/ceph/ceph/pull/20333 Prashant D
04:19 AM Bug #22925 (Resolved): mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when th...

Recently, in our online Luminous Cephfs clusters, we found that there are occasionally lots of slow requests:
20...
Xuehan Xu

02/05/2018

07:52 PM Feature #9312 (Resolved): kclient: support signatures in kernel code
cephx signatures are supported (and required by default) since 3.19:
https://git.kernel.org/pub/scm/linux/kernel/g...
Ilya Dryomov
07:44 PM Feature #9312: kclient: support signatures in kernel code
Zheng, this is resolved right? Which commit? Patrick Donnelly
07:42 PM Documentation #8918 (Resolved): kclient: known working kernels
http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version Patrick Donnelly
07:17 PM Bug #21861: osdc: truncate Object and remove the bh which have someone wait for read on it occur ...
Zheng, is this what your recent master PR for ObjectCacher fixed? Greg Farnum
02:50 PM Bug #22869: compiling Client.cc generate warnings
Patrick,
I'm using: gcc (GCC) 8.0.1 20180131 (Red Hat 8.0.1-0.9).
At first look, I thought that this was a pro...
Jos Collin
02:40 PM Bug #22869 (Need More Info): compiling Client.cc generate warnings
Patrick Donnelly
02:38 PM Bug #22869: compiling Client.cc generate warnings
Jos, what options are you using to generate these warnings? Patrick Donnelly
02:42 PM Bug #22910 (Pending Backport): client: setattr should drop "Fs" rather than "As" for mtime and size
Patrick Donnelly
02:41 PM Bug #22909 (Pending Backport): client: readdir bug
Patrick Donnelly
02:40 PM Bug #22885 (Need More Info): MDS trimming Not ending
Patrick Donnelly
10:47 AM Backport #22861 (In Progress): jewel: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
https://github.com/ceph/ceph/pull/20312 Prashant D
03:13 AM Backport #22891 (In Progress): luminous: qa: kcephfs lacks many configurations in the fs/multimds...
https://github.com/ceph/ceph/pull/20302 Prashant D
01:12 AM Backport #22867 (In Progress): luminous: MDS: assert failure when the inode for the cap_export fr...
https://github.com/ceph/ceph/pull/20300 Prashant D
12:06 AM Backport #22859 (In Progress): luminous: mds: session count,dns and inos from cli "fs status" is ...
https://github.com/ceph/ceph/pull/20299 Prashant D

02/03/2018

08:31 PM Bug #22839 (Rejected): MDSAuthCaps (unlike others) still require "allow" at start
As discussed in the PR, closing this because we don't have a profile analog in CephFS. Patrick Donnelly
07:33 PM Bug #21759 (Resolved): Assertion in EImportStart::replay should be a damaged()
Patrick Donnelly
07:33 PM Backport #21870 (Resolved): luminous: Assertion in EImportStart::replay should be a damaged()
Patrick Donnelly
06:34 PM Bug #22646 (Resolved): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
Nathan Cutler
06:34 PM Backport #22690 (Resolved): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficien...
Nathan Cutler
05:42 PM Bug #22360 (Resolved): mds: crash during exiting
Nathan Cutler
05:41 PM Backport #22493 (Resolved): luminous: mds: crash during exiting
Nathan Cutler
01:02 PM Bug #22910: client: setattr should drop "Fs" rather than "As" for mtime and size
https://github.com/ceph/ceph/pull/18786
this one should also be backported
dongdong tao
01:01 PM Bug #22910 (Resolved): client: setattr should drop "Fs" rather than "As" for mtime and size
dongdong tao
12:58 PM Bug #22909: client: readdir bug
I think we should backport this one dongdong tao
12:54 PM Bug #22909: client: readdir bug
https://github.com/ceph/ceph/pull/18784 dongdong tao
12:53 PM Bug #22909 (Resolved): client: readdir bug
Fix: "Client::readdir_r_cb" was meant to read its parent dir, but reads itself instead. dongdong tao
07:34 AM Bug #22610: MDS: assert failure when the inode for the cap_export from other MDS happened not in ...
Re-adding rejected jewel backport to appease backport tooling. Nathan Cutler
07:18 AM Backport #22907 (Resolved): luminous: mds: admin socket wait for scrub completion is racy
https://github.com/ceph/ceph/pull/20341 Nathan Cutler
02:30 AM Bug #22885: MDS trimming Not ending
MDS encountered an error and went into readonly mode. I think it was caused by the client that didn't advance its oldest clien... Zheng Yan
02:02 AM Bug #22058 (Pending Backport): mds: admin socket wait for scrub completion is racy
Needs backport as the bug will be introduced by:
https://github.com/ceph/ceph/pull/18858
Patrick Donnelly
01:18 AM Bug #20452 (Resolved): Adding pool with id smaller then existing data pool ids breaks MDSMap::is_...
Patrick Donnelly
01:18 AM Backport #20714 (Rejected): jewel: Adding pool with id smaller then existing data pool ids breaks...
Ah, this should not be necessary for jewel since it uses std::set, a sorted container. Patrick Donnelly

02/02/2018

11:08 PM Backport #22493: luminous: mds: crash during exiting
Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19610
merged
Yuri Weinstein
10:33 PM Backport #22690: luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
Prashant D wrote:
> https://github.com/ceph/ceph/pull/19976
merged
Yuri Weinstein
10:24 PM Backport #20823 (In Progress): jewel: client::mkdirs not handle well when two clients send mkdir ...
Nathan Cutler
10:20 PM Backport #20714 (Need More Info): jewel: Adding pool with id smaller then existing data pool ids ...
Depends on de0ce386ee59dbf70a010696d4aa91d46ed73b20 which is not going to be backported (?) Nathan Cutler
08:06 PM Bug #21402: mds: move remaining containers in CDentry/CDir/CInode to mempool
It occurred to me I wasn't comparing apples to apples when doing the memory reduction comparisons. I looked at the sa... Patrick Donnelly
02:57 PM Bug #22886 (In Progress): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.Test...
Yes, that error has been happening for the mds-full tests now with and without kclient. I'll look into that today. Th... Patrick Donnelly
02:39 PM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
this patch https://github.com/ceph/ceph-ci/commit/2fff0eb4c491f04803debec7c0f5de66e3825ee7 seems to make full tests p... Zheng Yan
09:47 AM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
patch https://github.com/ceph/ceph-client/commit/b9e5d03b6e64972164bff45ae3adb64a23e7568a fixes this issue.
but ot...
Zheng Yan
06:34 AM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
it seems to be caused by delayed dirty metadata writeback Zheng Yan
08:02 AM Backport #22339 (Resolved): luminous: ceph-fuse: failure to remount in startup test does not hand...
Nathan Cutler
08:01 AM Bug #22460 (Resolved): mds: handle client session messages when mds is stopping
Nathan Cutler
08:01 AM Backport #22490 (Resolved): luminous: mds: handle client session messages when mds is stopping
Nathan Cutler
08:00 AM Bug #22459 (Resolved): cephfs-journal-tool: tool would miss to report some invalid range
Nathan Cutler
08:00 AM Backport #22499 (Resolved): luminous: cephfs-journal-tool: tool would miss to report some invalid...
Nathan Cutler
07:59 AM Bug #22458 (Resolved): cephfs: potential adjust failure in lru_expire
Nathan Cutler
07:59 AM Backport #22500 (Resolved): luminous: cephfs: potential adjust failure in lru_expire
Nathan Cutler
07:58 AM Bug #22492 (Resolved): Locker::calc_new_max_size does not take layout.stripe_count into account
Nathan Cutler
07:58 AM Backport #22564 (Resolved): luminous: Locker::calc_new_max_size does not take layout.stripe_count...
Nathan Cutler
07:57 AM Bug #22526 (Resolved): AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
Nathan Cutler
07:57 AM Backport #22573 (Resolved): luminous: AttributeError: 'LocalFilesystem' object has no attribute '...
Nathan Cutler
07:56 AM Backport #22579 (Resolved): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full ...
Nathan Cutler
07:55 AM Backport #22699 (Resolved): luminous: client:_rmdir() uses a deleted memory structure(Dentry) lea...
Nathan Cutler
07:55 AM Backport #22719 (Resolved): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
Nathan Cutler
07:02 AM Backport #22860 (In Progress): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in power...
https://github.com/ceph/ceph/pull/20256 Prashant D
05:49 AM Backport #22798 (Resolved): luminous: mds: add success return
Nathan Cutler
05:29 AM Backport #22868 (Rejected): jewel: MDS: assert failure when the inode for the cap_export from oth...
Nathan Cutler
01:01 AM Backport #22868 (Closed): jewel: MDS: assert failure when the inode for the cap_export from other...
Zheng is right. Patrick Donnelly
05:16 AM Backport #21519 (In Progress): jewel: qa: test_client_pin times out waiting for dentry release fr...
Nathan Cutler
05:14 AM Backport #22494 (In Progress): jewel: unsigned integer overflow in file_layout_t::get_period
Nathan Cutler
04:44 AM Backport #22862 (In Progress): luminous: cephfs-journal-tool: may got assertion failure due to no...
https://github.com/ceph/ceph/pull/20251 Prashant D
03:57 AM Backport #22864 (In Progress): luminous: mds: scrub crash
https://github.com/ceph/ceph/pull/20249 Prashant D
12:56 AM Bug #22835: client: the total size of fs is equal to the cluster size when using multiple data pools
Thanks Patrick Donnelly. shangzhong zhu

02/01/2018

11:48 PM Bug #22839 (Fix Under Review): MDSAuthCaps (unlike others) still require "allow" at start
https://github.com/ceph/ceph/pull/20248 Patrick Donnelly
11:43 PM Backport #22891 (Resolved): luminous: qa: kcephfs lacks many configurations in the fs/multimds su...
https://github.com/ceph/ceph/pull/20302 Nathan Cutler
11:37 PM Bug #22835 (Won't Fix): client: the total size of fs is equal to the cluster size when using mult...
This is intended. To avoid double-counting available space, the client simply returns the total raw space in the clus... Patrick Donnelly
11:32 PM Bug #22436 (Resolved): qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo ...
Nathan Cutler
11:31 PM Backport #22501 (Resolved): luminous: qa: CommandFailedError: Command failed on smithi135 with st...
Nathan Cutler
11:07 PM Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 's...
Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19628
merged
Yuri Weinstein
11:01 PM Bug #21821 (Resolved): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
Nathan Cutler
11:01 PM Backport #21948 (Resolved): luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_...
Nathan Cutler
09:12 PM Backport #21948: luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19871
merged
Yuri Weinstein
11:00 PM Backport #22694 (Resolved): luminous: mds: fix dump last_sent
Nathan Cutler
09:11 PM Backport #22694: luminous: mds: fix dump last_sent
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19959
merged
Yuri Weinstein
11:00 PM Bug #22475 (Resolved): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull...
Nathan Cutler
11:00 PM Backport #22580 (Resolved): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full....
Nathan Cutler
09:11 PM Backport #22580: luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestCluster...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19962
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
Yuri Weinstein
10:50 PM Bug #22627 (Pending Backport): qa: kcephfs lacks many configurations in the fs/multimds suites
Patrick Donnelly
10:39 PM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
These may be related:... Patrick Donnelly
10:34 PM Bug #22886 (Resolved): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClu...
From: http://pulpito.ceph.com/pdonnell-2018-01-30_23:38:56-kcephfs-wip-pdonnell-i22627-testing-basic-smithi/2129601/
...
Patrick Donnelly
09:16 PM Bug #22885 (Need More Info): MDS trimming Not ending
HEALTH_WARN 1 clients failing to advance oldest client/flush tid; insufficient standby MDS daemons available; 1 MDSs ... hailong jiang
09:15 PM Backport #22339: luminous: ceph-fuse: failure to remount in startup test does not handle client_d...
Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19370
merged
Yuri Weinstein
09:14 PM Backport #22490: luminous: mds: handle client session messages when mds is stopping
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/19585
merged
Yuri Weinstein
09:14 PM Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19626
merged
Yuri Weinstein
09:13 PM Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19627
merged
Yuri Weinstein
09:13 PM Backport #22564: luminous: Locker::calc_new_max_size does not take layout.stripe_count into account
Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19776
merged
Yuri Weinstein
09:12 PM Backport #22573: luminous: AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19829
merged
Yuri Weinstein
09:12 PM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19830
merged
Yuri Weinstein
09:10 PM Backport #22699: luminous: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19968
merged
Yuri Weinstein
09:09 PM Backport #22719: luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19982
merged
Yuri Weinstein
02:43 PM Backport #22868: jewel: MDS: assert failure when the inode for the cap_export from other MDS happ...
it's a multimds bug; I don't think we need to backport it to jewel Zheng Yan
10:50 AM Backport #22868 (Rejected): jewel: MDS: assert failure when the inode for the cap_export from oth...
Nathan Cutler
12:02 PM Bug #22869 (Closed): compiling Client.cc generate warnings
[ 2%] Building CXX object src/client/CMakeFiles/client.dir/Client.cc.o
In file included from /home/jcollin/workspac...
Jos Collin
10:50 AM Backport #22867 (Resolved): luminous: MDS: assert failure when the inode for the cap_export from ...
https://github.com/ceph/ceph/pull/20300 Nathan Cutler
10:49 AM Backport #22865 (Resolved): jewel: mds: scrub crash
https://github.com/ceph/ceph/pull/20335 Nathan Cutler
10:49 AM Backport #22864 (Resolved): luminous: mds: scrub crash
https://github.com/ceph/ceph/pull/20249 Nathan Cutler
10:49 AM Backport #22863 (Resolved): jewel: cephfs-journal-tool: may got assertion failure due to not shut...
https://github.com/ceph/ceph/pull/20333 Nathan Cutler
10:49 AM Backport #22862 (Resolved): luminous: cephfs-journal-tool: may got assertion failure due to not s...
https://github.com/ceph/ceph/pull/20251 Nathan Cutler
10:49 AM Backport #22861 (Resolved): jewel: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-...
https://github.com/ceph/ceph/pull/20312 Nathan Cutler
10:49 AM Backport #22860 (Resolved): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
https://github.com/ceph/ceph/pull/20256 Nathan Cutler
10:49 AM Backport #22859 (Resolved): luminous: mds: session count,dns and inos from cli "fs status" is alw...
https://github.com/ceph/ceph/pull/20299 Nathan Cutler
05:23 AM Bug #21402: mds: move remaining containers in CDentry/CDir/CInode to mempool
64GB cache size limit experiment attached.
The master branch was tested with 64 kernel clients each building the k...
Patrick Donnelly

01/31/2018

10:49 PM Bug #21025 (Resolved): racy is_mounted() checks in libcephfs
Nathan Cutler
10:48 PM Backport #21359 (Resolved): luminous: racy is_mounted() checks in libcephfs
Nathan Cutler
10:10 PM Backport #21359: luminous: racy is_mounted() checks in libcephfs
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/17875
merged
Yuri Weinstein
10:08 PM Backport #21359: luminous: racy is_mounted() checks in libcephfs
Nevermind, PR was good as-is. Patrick Donnelly
09:22 PM Backport #22798: luminous: mds: add success return
Patrick Donnelly wrote:
> No upstream tracker issue for this. It was fixed upstream in https://github.com/ceph/ceph/...
Yuri Weinstein
01:20 PM Bug #22839 (Rejected): MDSAuthCaps (unlike others) still require "allow" at start
This was changed for the OSD and mon caps, but the MDS caps were missed:
https://github.com/ceph/ceph/pull/15991/com...
John Spray
02:30 AM Bug #22835 (Won't Fix): client: the total size of fs is equal to the cluster size when using mult...
*Ceph Cluster*... shangzhong zhu
12:49 AM Bug #22523 (Closed): Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
Zheng Yan

01/30/2018

11:31 PM Feature #16775 (Resolved): MDS command for listing open files
Patrick Donnelly
11:30 PM Bug #22610 (Pending Backport): MDS: assert failure when the inode for the cap_export from other M...
Patrick Donnelly
11:29 PM Bug #22734 (Pending Backport): cephfs-journal-tool: may got assertion failure due to not shutdown
Patrick Donnelly
11:27 PM Bug #22730 (Pending Backport): mds: scrub crash
Patrick Donnelly
11:26 PM Bug #22776 (Pending Backport): mds: session count,dns and inos from cli "fs status" is always 0
Patrick Donnelly
11:26 PM Bug #22741 (Pending Backport): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-...
Patrick Donnelly
10:45 PM Bug #22754 (Fix Under Review): mon: removing tier from an EC base pool is forbidden, even if allo...
https://github.com/ceph/ceph/pull/20190 Patrick Donnelly
10:36 PM Bug #22754 (In Progress): mon: removing tier from an EC base pool is forbidden, even if allow_ec_...
Patrick Donnelly
05:33 PM Bug #21402 (Fix Under Review): mds: move remaining containers in CDentry/CDir/CInode to mempool
https://github.com/ceph/ceph/pull/19954
Also ran two 64-client kernel build tests (one patched, one master) with a...
Patrick Donnelly
03:42 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
mds_blacklist_interval = 1440
We found that that argument is too small for the HA testing; it should be adjusted l...
Yong Wang
02:01 PM Feature #12107: mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
Have added support for MDentryLink and MDentryUnlink;
next step is CInode::encode_lock_state/CInode::decode_lock...
dongdong tao
03:32 AM Bug #22788: ceph-fuse performance issues with rsync
with the -H option, rsync does 1k writes; without it, rsync does 4k writes. ceph-fuse does not enable kernel writeba... Zheng Yan
02:26 AM Bug #22829 (Resolved): ceph-fuse: uses up all snap tags
Got the following crash during snap tests... Zheng Yan

01/29/2018

03:23 PM Bug #22788: ceph-fuse performance issues with rsync
It seems that the -H option causes low performance when the destination is in cephfs. I still haven't figured out why Zheng Yan
03:20 PM Bug #22754: mon: removing tier from an EC base pool is forbidden, even if allow_ec_overwrites is set
As far as I'm aware, nobody has worked on it, so that would be a no. John Spray
03:13 PM Bug #22754: mon: removing tier from an EC base pool is forbidden, even if allow_ec_overwrites is set
Is this going to make it into 12.2.3? David Turner
02:53 PM Feature #21995 (Fix Under Review): ceph-fuse: support nfs export
https://github.com/ceph/ceph/pull/20168 Jos Collin
02:41 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
CapSnap is for flushing snapshotted metadata (the metadata that was dirty at the time of mksnap), nothing to do with set... Zheng Yan
02:27 PM Bug #22776 (Fix Under Review): mds: session count,dns and inos from cli "fs status" is always 0
Patrick Donnelly
02:26 PM Bug #22734 (Fix Under Review): cephfs-journal-tool: may got assertion failure due to not shutdown
Patrick Donnelly
12:59 PM Bug #22802: libcephfs: allow setting default perms
Part of the problem here is that there are really two sets of default permissions in this code. There is one in the C... Jeff Layton
03:48 AM Bug #22824 (Fix Under Review): Journaler::flush() may flush less data than expected, which causes...
https://github.com/ceph/ceph/pull/20155 Zheng Yan
03:02 AM Bug #22824 (Resolved): Journaler::flush() may flush less data than expected, which causes flush w...
Zheng Yan

01/28/2018

11:27 AM Bug #22821 (Fix Under Review): mds: session reference leak
https://github.com/ceph/ceph/pull/20148 Zheng Yan
08:15 AM Bug #22821 (Resolved): mds: session reference leak
there are several places that get the session by:
"Session *session = static_cast<Session *>(m->get_connection()->get_priv(...
Zheng Yan

01/26/2018

08:16 PM Bug #22802: libcephfs: allow setting default perms
What I think I'm going to do is just add a ceph_mount_perms_set() function to the API that will reset it to a UserPer... Jeff Layton
05:49 PM Bug #22802: libcephfs: allow setting default perms
The current default is to set it to -1, so that's probably what we'll do here.
Further down the rabbit hole, we ha...
Jeff Layton
05:36 PM Bug #22802: libcephfs: allow setting default perms
Jeff Layton wrote:
> Serious question: does anyone actually use the SyntheticClient? It's only linked into the ceph-...
Patrick Donnelly
04:52 PM Bug #22802: libcephfs: allow setting default perms
Serious question: does anyone actually use the SyntheticClient? It's only linked into the ceph-syn binary, and I don'... Jeff Layton
06:16 PM Bug #21091 (Resolved): StrayManager::truncate is broken
Nathan Cutler
06:16 PM Backport #21657 (Resolved): luminous: StrayManager::truncate is broken
Nathan Cutler
02:35 PM Feature #21156 (Fix Under Review): mds: speed up recovery with many open inodes
https://github.com/ceph/ceph/pull/20132 Zheng Yan
01:48 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
Hmm...cap_dirtier_uid is only set to anything non-default in _do_setattr(). That function does this very early:
<p...
Jeff Layton
01:43 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
I think we should put cap_dirtier_uid/gid into CapSnap Zheng Yan
01:06 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
I don't have the best grasp of how snapshots work in cephfs, so I'm a little confused as to what should be done here:... Jeff Layton

01/25/2018

10:47 PM Bug #22802 (Resolved): libcephfs: allow setting default perms
These options no longer work as advertised and only affect the SyntheticClient (with the exception of #22801). Best ... Patrick Donnelly
10:45 PM Bug #22801 (Resolved): client: Client::flush_snaps still uses obsolete Client::user_id/group_id
This appears to be a hold-out from the UserPerm work last year and the last remaining user of Client::user_id|user_gi... Patrick Donnelly
08:25 PM Bug #22219 (Resolved): mds: mds should ignore export_pin for deleted directory
Patrick Donnelly
08:25 PM Backport #22385 (Resolved): luminous: mds: mds should ignore export_pin for deleted directory
Patrick Donnelly
08:02 PM Backport #22385: luminous: mds: mds should ignore export_pin for deleted directory
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19360
merged
Yuri Weinstein
08:25 PM Bug #22357 (Resolved): mds: read hang in multiple mds setup
Patrick Donnelly
08:24 PM Backport #22503 (Resolved): luminous: mds: read hang in multiple mds setup
Patrick Donnelly
08:02 PM Backport #22503: luminous: mds: read hang in multiple mds setup
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19646
merged
Yuri Weinstein
08:24 PM Bug #21985 (Resolved): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
Patrick Donnelly
08:24 PM Backport #22067 (Resolved): luminous: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is w...
Patrick Donnelly
07:11 PM Backport #22067: luminous: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/18782
merged
Yuri Weinstein
08:23 PM Bug #21975 (Resolved): MDS: mds gets significantly behind on trimming while creating millions of ...
Patrick Donnelly
08:23 PM Backport #22068 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
Patrick Donnelly
07:09 PM Backport #22068: luminous: mds: mds gets significantly behind on trimming while creating millions...
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/18783
merged
Yuri Weinstein
08:22 PM Bug #22009 (Resolved): don't check gid when none specified in auth caps
Patrick Donnelly
08:22 PM Backport #22074 (Resolved): luminous: don't check gid when none specified in auth caps
Patrick Donnelly
07:07 PM Backport #22074: luminous: don't check gid when none specified in auth caps
Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18835
merged
Yuri Weinstein
08:21 PM Bug #21722 (Resolved): mds: no assertion on inode being purging in find_ino_peers()
Patrick Donnelly
08:21 PM Backport #21952 (Resolved): luminous: mds: no assertion on inode being purging in find_ino_peers()
Patrick Donnelly
07:07 PM Backport #21952: luminous: mds: no assertion on inode being purging in find_ino_peers()
Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18869
merged
Yuri Weinstein
08:21 PM Bug #21843 (Resolved): mds: preserve order of requests during recovery of multimds cluster
Patrick Donnelly
08:20 PM Backport #21947 (Resolved): luminous: mds: preserve order of requests during recovery of multimds...
Patrick Donnelly
07:06 PM Backport #21947: luminous: mds: preserve order of requests during recovery of multimds cluster
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18871
merged
Yuri Weinstein
08:20 PM Bug #21928 (Resolved): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size(...
Patrick Donnelly
08:19 PM Backport #22077 (Resolved): luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == ...
Patrick Donnelly
07:05 PM Backport #22077: luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.s...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18912
merged
Yuri Weinstein
08:19 PM Bug #21959 (Resolved): MDSMonitor: monitor gives constant "is now active in filesystem cephfs as ...
Patrick Donnelly
08:19 PM Backport #22192 (Resolved): luminous: MDSMonitor: monitor gives constant "is now active in filesy...
Patrick Donnelly
07:05 PM Backport #22192: luminous: MDSMonitor: monitor gives constant "is now active in filesystem cephfs...
Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19055
merged
Yuri Weinstein
08:18 PM Backport #22379 (Resolved): luminous: client reconnect gather race
Patrick Donnelly
08:16 PM Feature #19578 (Resolved): mds: optimize CDir::_omap_commit() and CDir::_committed() for large di...
Patrick Donnelly
08:16 PM Backport #22563 (Resolved): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() f...
Patrick Donnelly
06:58 PM Backport #22563: luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() for large di...
Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19775
merged
Yuri Weinstein
08:16 PM Bug #21853 (Resolved): mds: mdsload debug too high
Patrick Donnelly
08:15 PM Backport #22587 (Resolved): luminous: mds: mdsload debug too high
Patrick Donnelly
06:58 PM Backport #22587: luminous: mds: mdsload debug too high
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19827
merged
Yuri Weinstein
08:15 PM Backport #22763 (Resolved): luminous: mds: crashes because of old pool id in journal header
Patrick Donnelly
06:56 PM Backport #22763: luminous: mds: crashes because of old pool id in journal header
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20085
merged
Yuri Weinstein
08:15 PM Bug #22629 (Resolved): client: avoid recursive lock in ll_get_vino
Patrick Donnelly
08:14 PM Backport #22765 (Resolved): luminous: client: avoid recursive lock in ll_get_vino
Patrick Donnelly
06:55 PM Backport #22765: luminous: client: avoid recursive lock in ll_get_vino
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20086
merged
Yuri Weinstein
08:13 PM Bug #22157 (Resolved): client: trim_caps may remove the cap the iterator points to
Patrick Donnelly
08:04 PM Bug #22157: client: trim_caps may remove the cap the iterator points to
merged https://github.com/ceph/ceph/pull/19105 Yuri Weinstein
08:13 PM Backport #22228 (Resolved): luminous: client: trim_caps may remove the cap the iterator points to
Patrick Donnelly
07:46 PM Backport #22004 (Resolved): luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
Nathan Cutler
07:12 PM Backport #22004: luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/18008
merged
Yuri Weinstein
07:45 PM Backport #22078 (Resolved): luminous: ceph.in: tell mds does not understand --cluster
Nathan Cutler
07:08 PM Backport #22078: luminous: ceph.in: tell mds does not understand --cluster
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18831
merged
Yuri Weinstein
07:45 PM Bug #21967 (Resolved): 'ceph tell mds' commands result in 'File exists' errors on client admin so...
Nathan Cutler
07:45 PM Backport #22076 (Resolved): luminous: 'ceph tell mds' commands result in 'File exists' errors on ...
Nathan Cutler
07:08 PM Backport #22076: luminous: 'ceph tell mds' commands result in 'File exists' errors on client admi...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18831
merged
Yuri Weinstein
07:11 PM Feature #18490 (Resolved): client: implement delegation support in userland cephfs
Nathan Cutler
07:11 PM Backport #22407 (Resolved): luminous: client: implement delegation support in userland cephfs
Nathan Cutler
07:00 PM Backport #22407: luminous: client: implement delegation support in userland cephfs
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19480
merged
Yuri Weinstein
07:11 PM Bug #21512 (Resolved): qa: libcephfs_interface_tests: shutdown race failures
Nathan Cutler
07:11 PM Backport #21874 (Resolved): luminous: qa: libcephfs_interface_tests: shutdown race failures
Nathan Cutler
06:57 PM Backport #21874: luminous: qa: libcephfs_interface_tests: shutdown race failures
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20082
merged
Yuri Weinstein
07:10 PM Backport #21525 (Resolved): luminous: client: dual client segfault with racing ceph_shutdown
Nathan Cutler
06:57 PM Backport #21525: luminous: client: dual client segfault with racing ceph_shutdown
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20082
merged
Yuri Weinstein
07:03 PM Bug #22263: client reconnect gather race
Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19326
merged
Yuri Weinstein
05:35 PM Backport #22798 (Resolved): luminous: mds: add success return
No upstream tracker issue for this. It was fixed upstream in https://github.com/ceph/ceph/pull/16778/commits/f519fca9... Patrick Donnelly
05:11 PM Backport #21657: luminous: StrayManager::truncate is broken
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18019
merged
Yuri Weinstein
05:44 AM Bug #22741 (Fix Under Review): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-...
https://github.com/ceph/ceph/pull/20113 Zheng Yan
04:45 AM Backport #22764 (In Progress): jewel: mds: crashes because of old pool id in journal header
https://github.com/ceph/ceph/pull/20111 Prashant D

01/24/2018

04:13 PM Bug #22788 (Won't Fix): ceph-fuse performance issues with rsync
Hi,
I have a performance issue when running rsync on a FUSE-mounted CephFS.
dd runs on "line speed" on my test ...
Robert Sander
02:02 PM Bug #22249: Need to restart MDS to release cephfs space
The logs show that the client held caps on stray inodes, which is the root cause of the issue.
Did you try -client_try...
Zheng Yan
09:08 AM Bug #22249: Need to restart MDS to release cephfs space
Zheng Yan wrote:
> junming rao wrote:
> > Zheng Yan wrote:
> > > please try remounting all cephfs with ceph-fuse o...
junming rao
09:31 AM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
... Zheng Yan
02:44 AM Backport #22765 (In Progress): luminous: client: avoid recursive lock in ll_get_vino
https://github.com/ceph/ceph/pull/20086 Prashant D
01:48 AM Backport #22763 (In Progress): luminous: mds: crashes because of old pool id in journal header
https://github.com/ceph/ceph/pull/20085 Prashant D

01/23/2018

06:06 PM Bug #21393 (Resolved): MDSMonitor: inconsistent role/who usage in command help
Patrick Donnelly
06:05 PM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
Yes, let's forgo the luminous backport. Thanks for pointing that out Nathan! Patrick Donnelly
12:41 PM Bug #22776: mds: session count,dns and inos from cli "fs status" is always 0
*PR*: https://github.com/ceph/ceph/pull/20079 shangzhong zhu
12:07 PM Bug #22776 (Resolved): mds: session count,dns and inos from cli "fs status" is always 0
... shangzhong zhu
09:48 AM Backport #22762 (In Progress): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
Nathan Cutler
09:40 AM Backport #22762 (Resolved): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/20067 Nathan Cutler
09:40 AM Backport #22765 (Resolved): luminous: client: avoid recursive lock in ll_get_vino
https://github.com/ceph/ceph/pull/20086 Nathan Cutler
09:40 AM Backport #22764 (Resolved): jewel: mds: crashes because of old pool id in journal header
https://github.com/ceph/ceph/pull/20111 Nathan Cutler
09:40 AM Backport #22763 (Resolved): luminous: mds: crashes because of old pool id in journal header
https://github.com/ceph/ceph/pull/20085 Nathan Cutler

01/22/2018

10:11 PM Bug #22754 (Resolved): mon: removing tier from an EC base pool is forbidden, even if allow_ec_ove...
OSDMonitor::_check_remove_tier needs to be made aware that this should be permitted if the base tier is suitable for ... John Spray
08:06 PM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
Core: /ceph/teuthology-archive/yuriw-2018-01-19_18:23:03-powercycle-wip-yuri-master-1.19.18-distro-basic-smithi/20909... Patrick Donnelly
02:14 PM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
Assigned to CephFS because it's crashing in the ceph-fuse process (in the absence of a better home for ObjectCacher i... John Spray
03:03 PM Feature #12107 (In Progress): mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
Patrick Donnelly
07:28 AM Feature #12107: mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
I'm working on this, please assign this to me dongdong tao
12:05 PM Backport #22508 (Need More Info): luminous: MDSMonitor: inconsistent role/who usage in command help
Non-trivial backport - since it's essentially a documentation fix, I'm not sure if it's worth the risk. Nathan Cutler
11:18 AM Backport #22078 (In Progress): luminous: ceph.in: tell mds does not understand --cluster
Nathan Cutler

01/20/2018

05:33 PM Bug #22741 (Resolved): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-mas...
Run: http://pulpito.ceph.com/yuriw-2018-01-19_18:23:03-powercycle-wip-yuri-master-1.19.18-distro-basic-smithi/
Jobs:...
Yuri Weinstein

01/19/2018

04:31 AM Bug #22734: cephfs-journal-tool: may get assertion failure due to not shutting down
https://github.com/ceph/ceph/pull/19991 dongdong tao
04:22 AM Bug #22734 (Resolved): cephfs-journal-tool: may get assertion failure due to not shutting down
```
2018-01-14T19:36:56.381 INFO:teuthology.orchestra.run.smithi139.stderr:Error loading journal: (2) No such file o...
dongdong tao

01/18/2018

11:02 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
Thanks for the report anyway! Patrick Donnelly
10:09 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
We had several ceph-fuse crashes with errors like... Andras Pataki
08:02 PM Bug #22730 (Fix Under Review): mds: scrub crash
https://github.com/ceph/ceph/pull/20012 Patrick Donnelly
05:38 PM Bug #22730: mds: scrub crash
Doug, please take a look at this one. Patrick Donnelly
04:17 PM Bug #22730 (Resolved): mds: scrub crash
This crash can be reproduced in 2 steps:
1. ceph daemon mds.a scrub_path <dir> recursive
2. ceph daemon mds.a scrub_...
dongdong tao
12:43 AM Backport #22700 (In Progress): jewel: client:_rmdir() uses a deleted memory structure(Dentry) lea...
https://github.com/ceph/ceph/pull/19993 Prashant D
12:27 AM Backport #22700: jewel: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
I'm on it. Prashant D

01/17/2018

10:07 PM Bug #22683 (Fix Under Review): client: coredump when nfs-ganesha use ceph_ll_get_inode()
Patrick Donnelly
03:34 PM Feature #4208: Add more replication pool tests for Hadoop / Ceph bindings
Bulk move of hadoop category into FS project. John Spray
03:34 PM Feature #4361: Setup another gitbuilder VM for building external Hadoop git repo(s)
Bulk move of hadoop category into FS project. John Spray
03:34 PM Bug #3544: ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also needs to ch...
Bulk move of hadoop category into FS project. John Spray
03:34 PM Bug #1661: Hadoop: expected system directories not present
Bulk move of hadoop category into FS project. John Spray
03:34 PM Bug #1663: Hadoop: file ownership/permission not available in hadoop
Bulk move of hadoop category into FS project. John Spray
03:26 PM Bug #21748 (Can't reproduce): client assertions tripped during some workloads
No response in several months, and I've never seen this trip in my own testing. Closing for now. Please reopen if you... Jeff Layton
03:24 PM Bug #22003 (Resolved): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
No response in two months. Closing bug.
Please reopen or comment if you've been able to test with that patch and i...
Jeff Layton
03:23 PM Bug #21419 (Rejected): client: is ceph_caps_for_mode correct for r/o opens?
Ok, I think you're right. may_open happens at a higher level and we will simply request the caps at that point. False... Jeff Layton
10:50 AM Bug #21734: mount client shows total capacity of cluster but not of a pool
(Just moving this closed ticket because I'm deleting the bogus "cephfs" category in the toplevel Ceph project) John Spray
07:05 AM Backport #22719 (In Progress): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19982 Zheng Yan
06:57 AM Backport #22719 (Resolved): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19982 Zheng Yan
05:43 AM Backport #22590 (In Progress): jewel: ceph.in: tell mds does not understand --cluster
Prashant D
04:12 AM Bug #22629 (Pending Backport): client: avoid recursive lock in ll_get_vino
Patrick Donnelly
04:12 AM Bug #22631 (Pending Backport): mds: crashes because of old pool id in journal header
Patrick Donnelly
04:11 AM Backport #22690 (In Progress): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insuffic...
https://github.com/ceph/ceph/pull/19976 Prashant D
04:10 AM Bug #22647 (Pending Backport): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
Patrick Donnelly
02:38 AM Backport #22689 (In Progress): jewel: client: fails to release to revoking Fc
Prashant D
02:38 AM Backport #22689: jewel: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19975 Prashant D

01/16/2018

07:28 PM Bug #22428: mds: don't report slow request for blocked filelock request
Here's a recent example from someone in #ceph:... John Spray
02:13 PM Backport #22688 (In Progress): luminous: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19970 Zheng Yan
08:16 AM Backport #22688 (Resolved): luminous: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/20342 Nathan Cutler
01:57 PM Backport #22699 (In Progress): luminous: client:_rmdir() uses a deleted memory structure(Dentry) ...
Zheng Yan
01:57 PM Backport #22699 (Fix Under Review): luminous: client:_rmdir() uses a deleted memory structure(Den...
https://github.com/ceph/ceph/pull/19968 Zheng Yan
08:17 AM Backport #22699 (Resolved): luminous: client:_rmdir() uses a deleted memory structure(Dentry) lea...
https://github.com/ceph/ceph/pull/19968 Nathan Cutler
08:34 AM Backport #22579 (In Progress): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster fu...
Nathan Cutler
08:31 AM Backport #22580 (In Progress): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_fu...
Nathan Cutler
08:23 AM Backport #22695 (In Progress): jewel: mds: fix dump last_sent
Nathan Cutler
08:17 AM Backport #22695 (Resolved): jewel: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19961 Nathan Cutler
08:22 AM Backport #22694 (In Progress): luminous: mds: fix dump last_sent
Nathan Cutler
08:17 AM Backport #22694 (Resolved): luminous: mds: fix dump last_sent
https://github.com/ceph/ceph/pull/19959 Nathan Cutler
08:17 AM Backport #22700 (Resolved): jewel: client:_rmdir() uses a deleted memory structure(Dentry) leadin...
https://github.com/ceph/ceph/pull/19993 Nathan Cutler
08:17 AM Backport #22697 (Rejected): jewel: client: dirty caps may never get the chance to flush
Nathan Cutler
08:17 AM Backport #22696 (Resolved): luminous: client: dirty caps may never get the chance to flush
https://github.com/ceph/ceph/pull/21278 Nathan Cutler
08:16 AM Backport #22690 (Resolved): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficien...
https://github.com/ceph/ceph/pull/19976 Nathan Cutler
08:16 AM Backport #22689 (Resolved): jewel: client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19975 Nathan Cutler
06:38 AM Bug #22683: client: coredump when nfs-ganesha use ceph_ll_get_inode()
https://github.com/ceph/ceph/pull/19957 huanwen ren
02:47 AM Bug #22683 (Resolved): client: coredump when nfs-ganesha use ceph_ll_get_inode()
Environment:
nfs : nfs-ganehsa2.5.4 + https://github.com/nfs-ganesha/nfs-ganesha/commit/476c2068bd4a3fd22f0d...
huanwen ren

01/15/2018

02:36 PM Bug #22610 (Fix Under Review): MDS: assert failure when the inode for the cap_export from other M...
Zheng Yan

01/13/2018

01:43 AM Bug #21402 (In Progress): mds: move remaining containers in CDentry/CDir/CInode to mempool
Patrick Donnelly

01/12/2018

10:42 PM Bug #22652 (Pending Backport): client: fails to release to revoking Fc
Patrick Donnelly
10:39 PM Bug #22646 (Pending Backport): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
Patrick Donnelly
03:49 PM Feature #21995 (In Progress): ceph-fuse: support nfs export
Jos Collin
11:07 AM Feature #21156 (In Progress): mds: speed up recovery with many open inodes
Zheng Yan

01/11/2018

10:50 PM Backport #22508: luminous: MDSMonitor: inconsistent role/who usage in command help
See also: https://github.com/ceph/ceph/pull/19926 Patrick Donnelly
10:29 PM Bug #21393: MDSMonitor: inconsistent role/who usage in command help
The fix for this causes upgrade tests to fail: http://tracker.ceph.com/issues/22527#note-9
We will probably need t...
Patrick Donnelly
08:39 AM Bug #22652 (Fix Under Review): client: fails to release to revoking Fc
https://github.com/ceph/ceph/pull/19920 Zheng Yan
08:37 AM Bug #22652: client: fails to release to revoking Fc
The hang in fuse_reverse_inval_inode() was caused by hung page writeback. Zheng Yan

01/10/2018

11:24 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
https://github.com/ceph/ceph/pull/19907 Prashant D
10:44 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
I'm on it. Prashant D
04:41 PM Bug #22631 (Fix Under Review): mds: crashes because of old pool id in journal header
Jos Collin
03:41 PM Backport #22076 (In Progress): luminous: 'ceph tell mds' commands result in 'File exists' errors ...
Nathan Cutler
03:17 PM Backport #22076 (Fix Under Review): luminous: 'ceph tell mds' commands result in 'File exists' er...
Jos Collin
02:45 PM Bug #22652: client: fails to release to revoking Fc
Sage Weil
01:29 PM Bug #22652: client: fails to release to revoking Fc
I reproduced it locally. It seems like a kernel issue. The issue happens only when fuse_use_invalidate_cb is true. Zheng Yan
11:02 AM Bug #22652 (Resolved): client: fails to release to revoking Fc
http://pulpito.ceph.com/pdonnell-2018-01-09_21:14:38-multimds-wip-pdonnell-testing-20180109.193634-testing-basic-smit... Zheng Yan
05:54 AM Bug #22647 (Fix Under Review): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
https://github.com/ceph/ceph/pull/19891 Zheng Yan
02:34 AM Bug #22647 (Resolved): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
... Zheng Yan
01:08 AM Bug #22629 (Fix Under Review): client: avoid recursive lock in ll_get_vino
Patrick Donnelly
01:05 AM Bug #22562 (Pending Backport): mds: fix dump last_sent
Patrick Donnelly
01:05 AM Bug #22546 (Pending Backport): client: dirty caps may never get the chance to flush
Patrick Donnelly
01:04 AM Bug #22536 (Pending Backport): client:_rmdir() uses a deleted memory structure(Dentry) leading a ...
Patrick Donnelly
12:44 AM Bug #22646: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
https://github.com/ceph/ceph/pull/19885 Patrick Donnelly
12:40 AM Bug #22646 (Resolved): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
This causes startup to fail for ec pool configurations.
(This was included in my fix for #22627 but I'm breaking i...
Patrick Donnelly
 
