Activity
From 01/10/2018 to 02/08/2018
02/08/2018
- 09:35 PM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
- https://github.com/ceph/ceph/pull/20310
- 09:35 PM Bug #22886 (Pending Backport): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full...
- 09:34 PM Bug #22821 (Pending Backport): mds: session reference leak
- 09:34 PM Bug #22824 (Pending Backport): Journaler::flush() may flush less data than expected, which causes...
- 09:33 PM Bug #22754 (Pending Backport): mon: removing tier from an EC base pool is forbidden, even if allo...
- 09:31 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
- https://github.com/ceph/ceph/pull/20200
- 09:31 PM Bug #22801 (Resolved): client: Client::flush_snaps still uses obsolete Client::user_id/group_id
- 06:06 PM Backport #22864 (Resolved): luminous: mds: scrub crash
- 06:05 PM Backport #22860 (Resolved): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
- 06:05 PM Bug #22776 (Resolved): mds: session count,dns and inos from cli "fs status" is always 0
- 06:05 PM Backport #22859 (Resolved): luminous: mds: session count,dns and inos from cli "fs status" is alw...
- 06:04 PM Bug #22610 (Resolved): MDS: assert failure when the inode for the cap_export from other MDS happe...
- 06:04 PM Backport #22867 (Resolved): luminous: MDS: assert failure when the inode for the cap_export from ...
- 06:04 PM Bug #21892 (Resolved): limit size of subtree migration
- 06:03 PM Backport #22242 (Resolved): luminous: mds: limit size of subtree migration
- 06:03 PM Backport #22240 (Resolved): luminous: Processes stuck waiting for write with ceph-fuse
- 05:55 PM Bug #21568 (Resolved): MDSMonitor commands crashing on cluster upgraded from Hammer (nonexistent ...
- 05:55 PM Backport #21953 (Resolved): luminous: MDSMonitor commands crashing on cluster upgraded from Hamme...
- 05:52 PM Bug #22058 (Resolved): mds: admin socket wait for scrub completion is racy
- 05:52 PM Backport #22907 (Resolved): luminous: mds: admin socket wait for scrub completion is racy
- 05:51 PM Bug #18743 (Resolved): Scrub considers dirty backtraces to be damaged, puts in damage table even ...
- 05:51 PM Backport #22089 (Resolved): luminous: Scrub considers dirty backtraces to be damaged, puts in dam...
- 12:25 PM Feature #22929: libcephfs.pyx: add chown and chmod functions
- Patrick Donnelly wrote:
> Thanks for the report. Jan, would you like to work on this? We appreciate PRs :)
Hi Pat...
- 10:24 AM Bug #22925: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when there are lot...
- please try
https://github.com/ukernel/ceph/commit/7db1563416b5559310dbbc834795b83a4ccdaab4
- 07:07 AM Bug #22925: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when there are lot...
- Hi, Patrick, as discussed yesterday, in our case, the whole procedure of a single run of "getattr" ops processing is ...
- 01:16 AM Backport #22936 (In Progress): luminous: client: readdir bug
- https://github.com/ceph/ceph/pull/20356
- 12:02 AM Backport #22935 (In Progress): luminous: client: setattr should drop "Fs" rather than "As" for mt...
- https://github.com/ceph/ceph/pull/20354
02/07/2018
- 10:45 PM Backport #22864: luminous: mds: scrub crash
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20249
merged
- 10:45 PM Backport #22860: luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20256
merged
- 10:44 PM Backport #22859: luminous: mds: session count,dns and inos from cli "fs status" is always 0
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20299
merged
- 10:44 PM Backport #22867: luminous: MDS: assert failure when the inode for the cap_export from other MDS h...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20300
merged
- 10:41 PM Backport #22242: luminous: mds: limit size of subtree migration
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20339
merged
- 10:41 PM Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20340
merged
- 10:40 PM Backport #22907: luminous: mds: admin socket wait for scrub completion is racy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20341
merged
- 10:40 PM Backport #22089: luminous: Scrub considers dirty backtraces to be damaged, puts in damage table e...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20341
merged
- 05:37 PM Backport #22382 (Rejected): jewel: client: give more descriptive error message for remount failures
- Rejected because not needed with http://tracker.ceph.com/issues/22378
- 05:37 PM Bug #22254 (Resolved): client: give more descriptive error message for remount failures
- No, I guess not because of http://tracker.ceph.com/issues/22378
- 07:35 AM Bug #22254: client: give more descriptive error message for remount failures
- @Patrick: You rejected the luminous backport, but the jewel backport should still go forward - correct?
- 03:25 PM Bug #22948 (Resolved): client: wire up ceph_ll_readv and ceph_ll_writev
- These two functions are stubbed out in the client libraries and always just return -1. Wire them into the backend inf...
- 05:01 AM Backport #22936 (Resolved): luminous: client: readdir bug
- https://github.com/ceph/ceph/pull/20356
- 05:01 AM Backport #22935 (Resolved): luminous: client: setattr should drop "Fs" rather than "As" for mtime...
- https://github.com/ceph/ceph/pull/20354
- 01:51 AM Bug #22869: compiling Client.cc generate warnings
- Yes, gcc 8.0.1 gives plenty of errors and warnings everywhere in the unmodified code. Yesterday, I managed t...
02/06/2018
- 11:17 PM Bug #21406 (Resolved): ceph.in: tell mds does not understand --cluster
- 11:16 PM Backport #22590 (Resolved): jewel: ceph.in: tell mds does not understand --cluster
- 09:39 PM Feature #22929: libcephfs.pyx: add chown and chmod functions
- Thanks for the report. Jan, would you like to work on this? We appreciate PRs :)
- 12:31 PM Feature #22929 (New): libcephfs.pyx: add chown and chmod functions
- Chown and chmod functions are included in libcephfs.h, but there are no equivalents in the Python binding. The only workar...
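For context, the chown/chmod entry points that such a binding would wrap already exist in the libcephfs C API. The following is only an illustrative sketch of those C calls (the path and uid/gid values are made up, and error handling is minimal):

```cpp
#include <cephfs/libcephfs.h>
#include <stdio.h>

int main(void)
{
  struct ceph_mount_info *cmount;

  /* create a client handle and read the default ceph.conf */
  if (ceph_create(&cmount, NULL) < 0)
    return 1;
  ceph_conf_read_file(cmount, NULL);

  /* mount the root of the default filesystem */
  if (ceph_mount(cmount, "/") < 0) {
    ceph_shutdown(cmount);
    return 1;
  }

  /* the calls missing from libcephfs.pyx: ceph_chmod() and ceph_chown() */
  if (ceph_chmod(cmount, "/some/dir", 0755) < 0)
    fprintf(stderr, "chmod failed\n");
  if (ceph_chown(cmount, "/some/dir", 1000, 1000) < 0)   /* uid, gid */
    fprintf(stderr, "chown failed\n");

  ceph_unmount(cmount);
  ceph_shutdown(cmount);
  return 0;
}
```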
- 09:09 PM Bug #22933 (Resolved): client: add option descriptions and review levels (e.g. LEVEL_DEV)
- 08:50 PM Bug #22869: compiling Client.cc generate warnings
- Jos Collin wrote:
> Patrick,
>
> I'm using: gcc (GCC) 8.0.1 20180131 (Red Hat 8.0.1-0.9).
>
> At first look, ...
- 07:05 PM Bug #22163 (Resolved): request that is "!mdr->is_replay() && mdr->is_queued_for_replay()" may han...
- 07:04 PM Backport #22237 (Resolved): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_re...
- 05:40 PM Backport #22237: luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_replay()" may...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19157
merged
- 06:03 PM Backport #22688: luminous: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/20342
- 05:54 PM Backport #22907 (In Progress): luminous: mds: admin socket wait for scrub completion is racy
- https://github.com/ceph/ceph/pull/20341
- 05:53 PM Backport #22089 (In Progress): luminous: Scrub considers dirty backtraces to be damaged, puts in ...
- https://github.com/ceph/ceph/pull/20341
- 05:46 PM Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/20340
- 05:46 PM Backport #22240 (In Progress): luminous: Processes stuck waiting for write with ceph-fuse
- http://tracker.ceph.com/issues/22240
- 05:41 PM Backport #22381 (Rejected): luminous: client: give more descriptive error message for remount fai...
- This one is no longer necessary after: http://tracker.ceph.com/issues/22339
- 05:35 PM Backport #22242: luminous: mds: limit size of subtree migration
- https://github.com/ceph/ceph/pull/20339
- 05:30 PM Backport #22242 (In Progress): luminous: mds: limit size of subtree migration
- 02:39 PM Bug #22647 (Resolved): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- 02:39 PM Backport #22762 (Resolved): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- 01:48 PM Backport #22689 (Resolved): jewel: client: fails to release to revoking Fc
- 01:46 PM Bug #21501 (Resolved): ceph_volume_client: sets invalid caps for existing IDs with no caps
- 01:45 PM Backport #21626 (Resolved): jewel: ceph_volume_client: sets invalid caps for existing IDs with no...
- 01:43 PM Bug #21423 (Resolved): qa: test_client_pin times out waiting for dentry release from kernel
- 01:42 PM Backport #21519 (Resolved): jewel: qa: test_client_pin times out waiting for dentry release from ...
- 01:41 PM Backport #21067 (Resolved): jewel: MDS integer overflow fix
- 08:49 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> The logs show that the client held caps on stray inodes. which is root cause of the issue.
>
>...
- 08:48 AM Backport #22865 (In Progress): jewel: mds: scrub crash
- https://github.com/ceph/ceph/pull/20335
- 06:18 AM Backport #22863 (In Progress): jewel: cephfs-journal-tool: may got assertion failure due to not s...
- https://github.com/ceph/ceph/pull/20333
- 04:19 AM Bug #22925 (Resolved): mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when th...
Recently, in our online Luminous Cephfs clusters, we found that there are occasionally lots of slow requests:
20...
02/05/2018
- 07:52 PM Feature #9312 (Resolved): kclient: support signatures in kernel code
- cephx signatures are supported (and required by default) since 3.19:
https://git.kernel.org/pub/scm/linux/kernel/g...
- 07:44 PM Feature #9312: kclient: support signatures in kernel code
- Zheng, this is resolved right? Which commit?
- 07:42 PM Documentation #8918 (Resolved): kclient: known working kernels
- http://docs.ceph.com/docs/master/cephfs/best-practices/#which-kernel-version
- 07:17 PM Bug #21861: osdc: truncate Object and remove the bh which have someone wait for read on it occur ...
- Zheng, is this what your recent master PR for ObjectCacher fixed?
- 02:50 PM Bug #22869: compiling Client.cc generate warnings
- Patrick,
I'm using: gcc (GCC) 8.0.1 20180131 (Red Hat 8.0.1-0.9).
At first look, I thought that this was a pro...
- 02:40 PM Bug #22869 (Need More Info): compiling Client.cc generate warnings
- 02:38 PM Bug #22869: compiling Client.cc generate warnings
- Jos, what options are you using to generate these warnings?
- 02:42 PM Bug #22910 (Pending Backport): client: setattr should drop "Fs" rather than "As" for mtime and size
- 02:41 PM Bug #22909 (Pending Backport): client: readdir bug
- 02:40 PM Bug #22885 (Need More Info): MDS trimming Not ending
- 10:47 AM Backport #22861 (In Progress): jewel: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
- https://github.com/ceph/ceph/pull/20312
- 03:13 AM Backport #22891 (In Progress): luminous: qa: kcephfs lacks many configurations in the fs/multimds...
- https://github.com/ceph/ceph/pull/20302
- 01:12 AM Backport #22867 (In Progress): luminous: MDS: assert failure when the inode for the cap_export fr...
- https://github.com/ceph/ceph/pull/20300
- 12:06 AM Backport #22859 (In Progress): luminous: mds: session count,dns and inos from cli "fs status" is ...
- https://github.com/ceph/ceph/pull/20299
02/03/2018
- 08:31 PM Bug #22839 (Rejected): MDSAuthCaps (unlike others) still require "allow" at start
- As discussed in the PR, closing this because we don't have a profile analog in CephFS.
- 07:33 PM Bug #21759 (Resolved): Assertion in EImportStart::replay should be a damaged()
- 07:33 PM Backport #21870 (Resolved): luminous: Assertion in EImportStart::replay should be a damaged()
- 06:34 PM Bug #22646 (Resolved): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- 06:34 PM Backport #22690 (Resolved): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficien...
- 05:42 PM Bug #22360 (Resolved): mds: crash during exiting
- 05:41 PM Backport #22493 (Resolved): luminous: mds: crash during exiting
- 01:02 PM Bug #22910: client: setattr should drop "Fs" rather than "As" for mtime and size
- https://github.com/ceph/ceph/pull/18786
this one should also be backported
- 01:01 PM Bug #22910 (Resolved): client: setattr should drop "Fs" rather than "As" for mtime and size
- 12:58 PM Bug #22909: client: readdir bug
- I think we should backport this one
- 12:54 PM Bug #22909: client: readdir bug
- https://github.com/ceph/ceph/pull/18784
- 12:53 PM Bug #22909 (Resolved): client: readdir bug
- Fix: "Client::readdir_r_cb" tried to read its parent dir, but it reads itself.
- 07:34 AM Bug #22610: MDS: assert failure when the inode for the cap_export from other MDS happened not in ...
- Re-adding rejected jewel backport to appease backport tooling.
- 07:18 AM Backport #22907 (Resolved): luminous: mds: admin socket wait for scrub completion is racy
- https://github.com/ceph/ceph/pull/20341
- 02:30 AM Bug #22885: MDS trimming Not ending
- The MDS encountered an error and went into read-only mode. I think it was caused by the client that didn't advance oldest clien...
- 02:02 AM Bug #22058 (Pending Backport): mds: admin socket wait for scrub completion is racy
- Needs backport as the bug will be introduced by:
https://github.com/ceph/ceph/pull/18858
- 01:18 AM Bug #20452 (Resolved): Adding pool with id smaller then existing data pool ids breaks MDSMap::is_...
- 01:18 AM Backport #20714 (Rejected): jewel: Adding pool with id smaller then existing data pool ids breaks...
- Ah, this should not be necessary for jewel since it uses std::set, a sorted container.
02/02/2018
- 11:08 PM Backport #22493: luminous: mds: crash during exiting
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19610
merged
- 10:33 PM Backport #22690: luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- Prashant D wrote:
> https://github.com/ceph/ceph/pull/19976
merged
- 10:24 PM Backport #20823 (In Progress): jewel: client::mkdirs not handle well when two clients send mkdir ...
- 10:20 PM Backport #20714 (Need More Info): jewel: Adding pool with id smaller then existing data pool ids ...
- Depends on de0ce386ee59dbf70a010696d4aa91d46ed73b20, which is not going to be backported (?)
- 08:06 PM Bug #21402: mds: move remaining containers in CDentry/CDir/CInode to mempool
- It occurred to me I wasn't comparing apples to apples when doing the memory reduction comparisons. I looked at the sa...
- 02:57 PM Bug #22886 (In Progress): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.Test...
- Yes, that error has been happening for the mds-full tests now with and without kclient. I'll look into that today. Th...
- 02:39 PM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
- this patch https://github.com/ceph/ceph-ci/commit/2fff0eb4c491f04803debec7c0f5de66e3825ee7 seems to make full tests p...
- 09:47 AM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
- patch https://github.com/ceph/ceph-client/commit/b9e5d03b6e64972164bff45ae3adb64a23e7568a fixes this issue.
but ot...
- 06:34 AM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
- It seems to be caused by delayed dirty metadata writeback
- 08:02 AM Backport #22339 (Resolved): luminous: ceph-fuse: failure to remount in startup test does not hand...
- 08:01 AM Bug #22460 (Resolved): mds: handle client session messages when mds is stopping
- 08:01 AM Backport #22490 (Resolved): luminous: mds: handle client session messages when mds is stopping
- 08:00 AM Bug #22459 (Resolved): cephfs-journal-tool: tool would miss to report some invalid range
- 08:00 AM Backport #22499 (Resolved): luminous: cephfs-journal-tool: tool would miss to report some invalid...
- 07:59 AM Bug #22458 (Resolved): cephfs: potential adjust failure in lru_expire
- 07:59 AM Backport #22500 (Resolved): luminous: cephfs: potential adjust failure in lru_expire
- 07:58 AM Bug #22492 (Resolved): Locker::calc_new_max_size does not take layout.stripe_count into account
- 07:58 AM Backport #22564 (Resolved): luminous: Locker::calc_new_max_size does not take layout.stripe_count...
- 07:57 AM Bug #22526 (Resolved): AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
- 07:57 AM Backport #22573 (Resolved): luminous: AttributeError: 'LocalFilesystem' object has no attribute '...
- 07:56 AM Backport #22579 (Resolved): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full ...
- 07:55 AM Backport #22699 (Resolved): luminous: client:_rmdir() uses a deleted memory structure(Dentry) lea...
- 07:55 AM Backport #22719 (Resolved): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- 07:02 AM Backport #22860 (In Progress): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in power...
- https://github.com/ceph/ceph/pull/20256
- 05:49 AM Backport #22798 (Resolved): luminous: mds: add success return
- 05:29 AM Backport #22868 (Rejected): jewel: MDS: assert failure when the inode for the cap_export from oth...
- 01:01 AM Backport #22868 (Closed): jewel: MDS: assert failure when the inode for the cap_export from other...
- Zheng is right.
- 05:16 AM Backport #21519 (In Progress): jewel: qa: test_client_pin times out waiting for dentry release fr...
- 05:14 AM Backport #22494 (In Progress): jewel: unsigned integer overflow in file_layout_t::get_period
- 04:44 AM Backport #22862 (In Progress): luminous: cephfs-journal-tool: may got assertion failure due to no...
- https://github.com/ceph/ceph/pull/20251
- 03:57 AM Backport #22864 (In Progress): luminous: mds: scrub crash
- https://github.com/ceph/ceph/pull/20249
- 12:56 AM Bug #22835: client: the total size of fs is equal to the cluster size when using multiple data pools
- Thanks Patrick Donnelly.
02/01/2018
- 11:48 PM Bug #22839 (Fix Under Review): MDSAuthCaps (unlike others) still require "allow" at start
- https://github.com/ceph/ceph/pull/20248
- 11:43 PM Backport #22891 (Resolved): luminous: qa: kcephfs lacks many configurations in the fs/multimds su...
- https://github.com/ceph/ceph/pull/20302
- 11:37 PM Bug #22835 (Won't Fix): client: the total size of fs is equal to the cluster size when using mult...
- This is intended. To avoid double-counting available space, the client simply returns the total raw space in the clus...
- 11:32 PM Bug #22436 (Resolved): qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo ...
- 11:31 PM Backport #22501 (Resolved): luminous: qa: CommandFailedError: Command failed on smithi135 with st...
- 11:07 PM Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 's...
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19628
merged
- 11:01 PM Bug #21821 (Resolved): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- 11:01 PM Backport #21948 (Resolved): luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_...
- 09:12 PM Backport #21948: luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19871
merged
- 11:00 PM Backport #22694 (Resolved): luminous: mds: fix dump last_sent
- 09:11 PM Backport #22694: luminous: mds: fix dump last_sent
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19959
merged
- 11:00 PM Bug #22475 (Resolved): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull...
- 11:00 PM Backport #22580 (Resolved): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full....
- 09:11 PM Backport #22580: luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestCluster...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19962
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
- 10:50 PM Bug #22627 (Pending Backport): qa: kcephfs lacks many configurations in the fs/multimds suites
- 10:39 PM Bug #22886: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
- These may be related:...
- 10:34 PM Bug #22886 (Resolved): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClu...
- From: http://pulpito.ceph.com/pdonnell-2018-01-30_23:38:56-kcephfs-wip-pdonnell-i22627-testing-basic-smithi/2129601/
...
- 09:16 PM Bug #22885 (Need More Info): MDS trimming Not ending
- HEALTH_WARN 1 clients failing to advance oldest client/flush tid; insufficient standby MDS daemons available; 1 MDSs ...
- 09:15 PM Backport #22339: luminous: ceph-fuse: failure to remount in startup test does not handle client_d...
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19370
merged
- 09:14 PM Backport #22490: luminous: mds: handle client session messages when mds is stopping
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/19585
merged
- 09:14 PM Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19626
merged
- 09:13 PM Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19627
merged
- 09:13 PM Backport #22564: luminous: Locker::calc_new_max_size does not take layout.stripe_count into account
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19776
merged
- 09:12 PM Backport #22573: luminous: AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19829
merged
- 09:12 PM Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obs...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19830
merged
- 09:10 PM Backport #22699: luminous: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19968
merged
- 09:09 PM Backport #22719: luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19982
merged
- 02:43 PM Backport #22868: jewel: MDS: assert failure when the inode for the cap_export from other MDS happ...
- It's a multimds bug; I don't think we need to backport it to jewel
- 10:50 AM Backport #22868 (Rejected): jewel: MDS: assert failure when the inode for the cap_export from oth...
- 12:02 PM Bug #22869 (Closed): compiling Client.cc generate warnings
- [ 2%] Building CXX object src/client/CMakeFiles/client.dir/Client.cc.o
In file included from /home/jcollin/workspac... - 10:50 AM Backport #22867 (Resolved): luminous: MDS: assert failure when the inode for the cap_export from ...
- https://github.com/ceph/ceph/pull/20300
- 10:49 AM Backport #22865 (Resolved): jewel: mds: scrub crash
- https://github.com/ceph/ceph/pull/20335
- 10:49 AM Backport #22864 (Resolved): luminous: mds: scrub crash
- https://github.com/ceph/ceph/pull/20249
- 10:49 AM Backport #22863 (Resolved): jewel: cephfs-journal-tool: may got assertion failure due to not shut...
- https://github.com/ceph/ceph/pull/20333
- 10:49 AM Backport #22862 (Resolved): luminous: cephfs-journal-tool: may got assertion failure due to not s...
- https://github.com/ceph/ceph/pull/20251
- 10:49 AM Backport #22861 (Resolved): jewel: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-...
- https://github.com/ceph/ceph/pull/20312
- 10:49 AM Backport #22860 (Resolved): luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercyc...
- https://github.com/ceph/ceph/pull/20256
- 10:49 AM Backport #22859 (Resolved): luminous: mds: session count,dns and inos from cli "fs status" is alw...
- https://github.com/ceph/ceph/pull/20299
- 05:23 AM Bug #21402: mds: move remaining containers in CDentry/CDir/CInode to mempool
- 64GB cache size limit experiment attached.
The master branch was tested with 64 kernel clients each building the k...
01/31/2018
- 10:49 PM Bug #21025 (Resolved): racy is_mounted() checks in libcephfs
- 10:48 PM Backport #21359 (Resolved): luminous: racy is_mounted() checks in libcephfs
- 10:10 PM Backport #21359: luminous: racy is_mounted() checks in libcephfs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/17875
merged
- 10:08 PM Backport #21359: luminous: racy is_mounted() checks in libcephfs
- Nevermind, PR was good as-is.
- 09:22 PM Backport #22798: luminous: mds: add success return
- Patrick Donnelly wrote:
> No upstream tracker issue for this. It was fixed upstream in https://github.com/ceph/ceph/...
- 01:20 PM Bug #22839 (Rejected): MDSAuthCaps (unlike others) still require "allow" at start
- This was changed for the OSD and mon caps, but the MDS caps were missed:
https://github.com/ceph/ceph/pull/15991/com...
- 02:30 AM Bug #22835 (Won't Fix): client: the total size of fs is equal to the cluster size when using mult...
- *Ceph Cluster*...
- 12:49 AM Bug #22523 (Closed): Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
01/30/2018
- 11:31 PM Feature #16775 (Resolved): MDS command for listing open files
- 11:30 PM Bug #22610 (Pending Backport): MDS: assert failure when the inode for the cap_export from other M...
- 11:29 PM Bug #22734 (Pending Backport): cephfs-journal-tool: may got assertion failure due to not shutdown
- 11:27 PM Bug #22730 (Pending Backport): mds: scrub crash
- 11:26 PM Bug #22776 (Pending Backport): mds: session count,dns and inos from cli "fs status" is always 0
- 11:26 PM Bug #22741 (Pending Backport): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-...
- 10:45 PM Bug #22754 (Fix Under Review): mon: removing tier from an EC base pool is forbidden, even if allo...
- https://github.com/ceph/ceph/pull/20190
- 10:36 PM Bug #22754 (In Progress): mon: removing tier from an EC base pool is forbidden, even if allow_ec_...
- 05:33 PM Bug #21402 (Fix Under Review): mds: move remaining containers in CDentry/CDir/CInode to mempool
- https://github.com/ceph/ceph/pull/19954
Also ran two 64-client kernel build tests (one patched, one master) with a...
- 03:42 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- mds_blacklist_interval = 1440
We found that this argument is too small for the HA testing; it should be adjusted l...
- 02:01 PM Feature #12107: mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
- I have added support for MDentryLink and MDentryUnlink;
next step is CInode::encode_lock_state/CInode::decode_lock...
- 03:32 AM Bug #22788: ceph-fuse performance issues with rsync
- With the -H option, rsync does 1k writes; without the -H option, rsync does 4k writes. ceph-fuse does not enable kernel writeba...
- 02:26 AM Bug #22829 (Resolved): ceph-fuse: uses up all snap tags
- Got the following crash during snap tests...
01/29/2018
- 03:23 PM Bug #22788: ceph-fuse performance issues with rsync
- It seems that the -H option causes low performance when the destination is in CephFS. I still haven't figured out why
- 03:20 PM Bug #22754: mon: removing tier from an EC base pool is forbidden, even if allow_ec_overwrites is set
- As far as I'm aware, nobody has worked on it, so that would be a no.
- 03:13 PM Bug #22754: mon: removing tier from an EC base pool is forbidden, even if allow_ec_overwrites is set
- Is this going to make it into 12.2.3?
- 02:53 PM Feature #21995 (Fix Under Review): ceph-fuse: support nfs export
- https://github.com/ceph/ceph/pull/20168
- 02:41 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
- CapSnap is for flushing snapshotted metadata (the metadata that was dirty at the time of mksnap), nothing to do with set...
- 02:27 PM Bug #22776 (Fix Under Review): mds: session count,dns and inos from cli "fs status" is always 0
- 02:26 PM Bug #22734 (Fix Under Review): cephfs-journal-tool: may got assertion failure due to not shutdown
- 12:59 PM Bug #22802: libcephfs: allow setting default perms
- Part of the problem here is that there are really two sets of default permissions in this code. There is one in the C...
- 03:48 AM Bug #22824 (Fix Under Review): Journaler::flush() may flush less data than expected, which causes...
- https://github.com/ceph/ceph/pull/20155
- 03:02 AM Bug #22824 (Resolved): Journaler::flush() may flush less data than expected, which causes flush w...
01/28/2018
- 11:27 AM Bug #22821 (Fix Under Review): mds: session reference leak
- https://github.com/ceph/ceph/pull/20148
- 08:15 AM Bug #22821 (Resolved): mds: session reference leak
- There are several places that get the session via:
"Session *session = static_cast<Session *>(m->get_connection()->get_priv(...
01/26/2018
- 08:16 PM Bug #22802: libcephfs: allow setting default perms
- What I think I'm going to do is just add a ceph_mount_perms_set() function to the API that will reset it to a UserPer...
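For illustration only, a sketch of how a libcephfs consumer might use such a call; the ceph_mount_perms_set()/ceph_userperm_new() signatures shown here are assumptions based on the API that was being proposed, not something this comment guarantees:

```cpp
#include <sys/types.h>
#include <cephfs/libcephfs.h>

// Assumed workflow (after ceph_create()/ceph_conf_* and before ceph_mount()):
// build a UserPerm and install it as the mount's default credentials,
// replacing the old client_mount_uid/gid-style options.
int set_default_perms(struct ceph_mount_info *cmount, uid_t uid, gid_t gid)
{
  UserPerm *perm = ceph_userperm_new(uid, gid, 0, NULL);
  if (!perm)
    return -1;

  int r = ceph_mount_perms_set(cmount, perm);
  ceph_userperm_destroy(perm);   /* assumes the mount keeps its own copy */
  return r;
}
```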
- 05:49 PM Bug #22802: libcephfs: allow setting default perms
- The current default is to set it to -1, so that's probably what we'll do here.
Further down the rabbit hole, we ha...
- 05:36 PM Bug #22802: libcephfs: allow setting default perms
- Jeff Layton wrote:
> Serious question: does anyone actually use the SyntheticClient? It's only linked into the ceph-...
- 04:52 PM Bug #22802: libcephfs: allow setting default perms
- Serious question: does anyone actually use the SyntheticClient? It's only linked into the ceph-syn binary, and I don'...
- 06:16 PM Bug #21091 (Resolved): StrayManager::truncate is broken
- 06:16 PM Backport #21657 (Resolved): luminous: StrayManager::truncate is broken
- 02:35 PM Feature #21156 (Fix Under Review): mds: speed up recovery with many open inodes
- https://github.com/ceph/ceph/pull/20132
- 01:48 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
- Hmm...cap_dirtier_uid is only set to anything non-default in _do_setattr(). That function does this very early:
<p...
- 01:43 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
- I think we should put cap_dirtier_uid/gid into CapSnap
- 01:06 PM Bug #22801: client: Client::flush_snaps still uses obsolete Client::user_id/group_id
- I don't have the best grasp of how snapshots work in cephfs, so I'm a little confused as to what should be done here:...
01/25/2018
- 10:47 PM Bug #22802 (Resolved): libcephfs: allow setting default perms
- -These options no longer work as advertised and only affect the SyntheticClient (with the exception of #22801). Best ...
- 10:45 PM Bug #22801 (Resolved): client: Client::flush_snaps still uses obsolete Client::user_id/group_id
- This appears to be a holdover from the UserPerm work last year and the last remaining user of Client::user_id|user_gi...
- 08:25 PM Bug #22219 (Resolved): mds: mds should ignore export_pin for deleted directory
- 08:25 PM Backport #22385 (Resolved): luminous: mds: mds should ignore export_pin for deleted directory
- 08:02 PM Backport #22385: luminous: mds: mds should ignore export_pin for deleted directory
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19360
merged
- 08:25 PM Bug #22357 (Resolved): mds: read hang in multiple mds setup
- 08:24 PM Backport #22503 (Resolved): luminous: mds: read hang in multiple mds setup
- 08:02 PM Backport #22503: luminous: mds: read hang in multiple mds setup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19646
merged
- 08:24 PM Bug #21985 (Resolved): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- 08:24 PM Backport #22067 (Resolved): luminous: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is w...
- 07:11 PM Backport #22067: luminous: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/18782
merged
- 08:23 PM Bug #21975 (Resolved): MDS: mds gets significantly behind on trimming while creating millions of ...
- 08:23 PM Backport #22068 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
- 07:09 PM Backport #22068: luminous: mds: mds gets significantly behind on trimming while creating millions...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/18783
merged
- 08:22 PM Bug #22009 (Resolved): don't check gid when none specified in auth caps
- 08:22 PM Backport #22074 (Resolved): luminous: don't check gid when none specified in auth caps
- 07:07 PM Backport #22074: luminous: don't check gid when none specified in auth caps
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18835
merged
- 08:21 PM Bug #21722 (Resolved): mds: no assertion on inode being purging in find_ino_peers()
- 08:21 PM Backport #21952 (Resolved): luminous: mds: no assertion on inode being purging in find_ino_peers()
- 07:07 PM Backport #21952: luminous: mds: no assertion on inode being purging in find_ino_peers()
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18869
merged
- 08:21 PM Bug #21843 (Resolved): mds: preserve order of requests during recovery of multimds cluster
- 08:20 PM Backport #21947 (Resolved): luminous: mds: preserve order of requests during recovery of multimds...
- 07:06 PM Backport #21947: luminous: mds: preserve order of requests during recovery of multimds cluster
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18871
merged
- 08:20 PM Bug #21928 (Resolved): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size(...
- 08:19 PM Backport #22077 (Resolved): luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == ...
- 07:05 PM Backport #22077: luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.s...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18912
merged
- 08:19 PM Bug #21959 (Resolved): MDSMonitor: monitor gives constant "is now active in filesystem cephfs as ...
- 08:19 PM Backport #22192 (Resolved): luminous: MDSMonitor: monitor gives constant "is now active in filesy...
- 07:05 PM Backport #22192: luminous: MDSMonitor: monitor gives constant "is now active in filesystem cephfs...
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19055
merged
- 08:18 PM Backport #22379 (Resolved): luminous: client reconnect gather race
- 08:16 PM Feature #19578 (Resolved): mds: optimize CDir::_omap_commit() and CDir::_committed() for large di...
- 08:16 PM Backport #22563 (Resolved): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() f...
- 06:58 PM Backport #22563: luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() for large di...
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19775
merged
- 08:16 PM Bug #21853 (Resolved): mds: mdsload debug too high
- 08:15 PM Backport #22587 (Resolved): luminous: mds: mdsload debug too high
- 06:58 PM Backport #22587: luminous: mds: mdsload debug too high
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19827
merged
- 08:15 PM Backport #22763 (Resolved): luminous: mds: crashes because of old pool id in journal header
- 06:56 PM Backport #22763: luminous: mds: crashes because of old pool id in journal header
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20085
merged
- 08:15 PM Bug #22629 (Resolved): client: avoid recursive lock in ll_get_vino
- 08:14 PM Backport #22765 (Resolved): luminous: client: avoid recursive lock in ll_get_vino
- 06:55 PM Backport #22765: luminous: client: avoid recursive lock in ll_get_vino
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20086
merged
- 08:13 PM Bug #22157 (Resolved): client: trim_caps may remove cap iterator points to
- 08:04 PM Bug #22157: client: trim_caps may remove cap iterator points to
- merged https://github.com/ceph/ceph/pull/19105
- 08:13 PM Backport #22228 (Resolved): luminous: client: trim_caps may remove cap iterator points to
- 07:46 PM Backport #22004 (Resolved): luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
- 07:12 PM Backport #22004: luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
- Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/18008
merged
- 07:45 PM Backport #22078 (Resolved): luminous: ceph.in: tell mds does not understand --cluster
- 07:08 PM Backport #22078: luminous: ceph.in: tell mds does not understand --cluster
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18831
merged
- 07:45 PM Bug #21967 (Resolved): 'ceph tell mds' commands result in 'File exists' errors on client admin so...
- 07:45 PM Backport #22076 (Resolved): luminous: 'ceph tell mds' commands result in 'File exists' errors on ...
- 07:08 PM Backport #22076: luminous: 'ceph tell mds' commands result in 'File exists' errors on client admi...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18831
merged
- 07:11 PM Feature #18490 (Resolved): client: implement delegation support in userland cephfs
- 07:11 PM Backport #22407 (Resolved): luminous: client: implement delegation support in userland cephfs
- 07:00 PM Backport #22407: luminous: client: implement delegation support in userland cephfs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19480
merged
- 07:11 PM Bug #21512 (Resolved): qa: libcephfs_interface_tests: shutdown race failures
- 07:11 PM Backport #21874 (Resolved): luminous: qa: libcephfs_interface_tests: shutdown race failures
- 06:57 PM Backport #21874: luminous: qa: libcephfs_interface_tests: shutdown race failures
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20082
merged
- 07:10 PM Backport #21525 (Resolved): luminous: client: dual client segfault with racing ceph_shutdown
- 06:57 PM Backport #21525: luminous: client: dual client segfault with racing ceph_shutdown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20082
merged
- 07:03 PM Bug #22263: client reconnect gather race
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/19326
merged
- 05:35 PM Backport #22798 (Resolved): luminous: mds: add success return
- No upstream tracker issue for this. It was fixed upstream in https://github.com/ceph/ceph/pull/16778/commits/f519fca9...
- 05:11 PM Backport #21657: luminous: StrayManager::truncate is broken
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/18019
merged
- 05:44 AM Bug #22741 (Fix Under Review): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-...
- https://github.com/ceph/ceph/pull/20113
- 04:45 AM Backport #22764 (In Progress): jewel: mds: crashes because of old pool id in journal header
- https://github.com/ceph/ceph/pull/20111
01/24/2018
- 04:13 PM Bug #22788 (Won't Fix): ceph-fuse performance issues with rsync
- Hi,
I have a performance issue when running rsync on a FUSE-mounted CephFS.
dd runs on "line speed" on my test ...
- 02:02 PM Bug #22249: Need to restart MDS to release cephfs space
- The logs show that the client held caps on stray inodes, which is the root cause of the issue.
Did you try -client_try...
- 09:08 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> junming rao wrote:
> > Zheng Yan wrote:
> > > please try remounting all cephfs with ceph-fuse o...
- 09:31 AM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
- ...
- 02:44 AM Backport #22765 (In Progress): luminous: client: avoid recursive lock in ll_get_vino
- https://github.com/ceph/ceph/pull/20086
- 01:48 AM Backport #22763 (In Progress): luminous: mds: crashes because of old pool id in journal header
- https://github.com/ceph/ceph/pull/20085
01/23/2018
- 06:06 PM Bug #21393 (Resolved): MDSMonitor: inconsistent role/who usage in command help
- 06:05 PM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
- Yes, let's forgo the luminous backport. Thanks for pointing that out Nathan!
- 12:41 PM Bug #22776: mds: session count,dns and inos from cli "fs status" is always 0
- *PR*: https://github.com/ceph/ceph/pull/20079
- 12:07 PM Bug #22776 (Resolved): mds: session count,dns and inos from cli "fs status" is always 0
- ...
- 09:48 AM Backport #22762 (In Progress): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- 09:40 AM Backport #22762 (Resolved): jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/20067
- 09:40 AM Backport #22765 (Resolved): luminous: client: avoid recursive lock in ll_get_vino
- https://github.com/ceph/ceph/pull/20086
- 09:40 AM Backport #22764 (Resolved): jewel: mds: crashes because of old pool id in journal header
- https://github.com/ceph/ceph/pull/20111
- 09:40 AM Backport #22763 (Resolved): luminous: mds: crashes because of old pool id in journal header
- https://github.com/ceph/ceph/pull/20085
01/22/2018
- 10:11 PM Bug #22754 (Resolved): mon: removing tier from an EC base pool is forbidden, even if allow_ec_ove...
- OSDMonitor::_check_remove_tier needs to be made aware that this should be permitted if the base tier is suitable for ...
- 08:06 PM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
- Core: /ceph/teuthology-archive/yuriw-2018-01-19_18:23:03-powercycle-wip-yuri-master-1.19.18-distro-basic-smithi/20909...
- 02:14 PM Bug #22741: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18...
- Assigned to CephFS because it's crashing in the ceph-fuse process (in the absence of a better home for ObjectCacher i...
- 03:03 PM Feature #12107 (In Progress): mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
- 07:28 AM Feature #12107: mds: use versioned wire protocol; obviate CEPH_MDS_PROTOCOL
- I'm working on this; please assign it to me
- 12:05 PM Backport #22508 (Need More Info): luminous: MDSMonitor: inconsistent role/who usage in command help
- Non-trivial backport - since it's essentially a documentation fix, I'm not sure if it's worth the risk.
- 11:18 AM Backport #22078 (In Progress): luminous: ceph.in: tell mds does not understand --cluster
01/20/2018
- 05:33 PM Bug #22741 (Resolved): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-mas...
- Run: http://pulpito.ceph.com/yuriw-2018-01-19_18:23:03-powercycle-wip-yuri-master-1.19.18-distro-basic-smithi/
Jobs:...
01/19/2018
- 04:31 AM Bug #22734: cephfs-journal-tool: may got assertion failure due to not shutdown
- https://github.com/ceph/ceph/pull/19991
- 04:22 AM Bug #22734 (Resolved): cephfs-journal-tool: may got assertion failure due to not shutdown
- ```
2018-01-14T19:36:56.381 INFO:teuthology.orchestra.run.smithi139.stderr:Error loading journal: (2) No such file o...
01/18/2018
- 11:02 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
- Thanks for the report anyway!
- 10:09 PM Bug #22733 (Duplicate): ceph-fuse: failed assert in xlist<T>::iterator& xlist<T>::iterator::opera...
- We had several ceph-fuse crashes with errors like...
- 08:02 PM Bug #22730 (Fix Under Review): mds: scrub crash
- https://github.com/ceph/ceph/pull/20012
- 05:38 PM Bug #22730: mds: scrub crash
- Doug, please take a look at this one.
- 04:17 PM Bug #22730 (Resolved): mds: scrub crash
- This crash can be reproduced in 2 steps:
1. ceph daemon mds.a scrub_path <dir> recursive
2. ceph daemon mds.a scrub_...
- 12:43 AM Backport #22700 (In Progress): jewel: client:_rmdir() uses a deleted memory structure(Dentry) lea...
- https://github.com/ceph/ceph/pull/19993
- 12:27 AM Backport #22700: jewel: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- I'm on it.
01/17/2018
- 10:07 PM Bug #22683 (Fix Under Review): client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 03:34 PM Feature #4208: Add more replication pool tests for Hadoop / Ceph bindings
- Bulk move of hadoop category into FS project.
- 03:34 PM Feature #4361: Setup another gitbuilder VM for building external Hadoop git repo(s)
- Bulk move of hadoop category into FS project.
- 03:34 PM Bug #3544: ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also needs to ch...
- Bulk move of hadoop category into FS project.
- 03:34 PM Bug #1661: Hadoop: expected system directories not present
- Bulk move of hadoop category into FS project.
- 03:34 PM Bug #1663: Hadoop: file ownership/permission not available in hadoop
- Bulk move of hadoop category into FS project.
- 03:26 PM Bug #21748 (Can't reproduce): client assertions tripped during some workloads
- No response in several months, and I've never seen this trip in my own testing. Closing for now. Please reopen if you...
- 03:24 PM Bug #22003 (Resolved): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
- No response in two months. Closing bug.
Please reopen or comment if you've been able to test with that patch and i...
- 03:23 PM Bug #21419 (Rejected): client: is ceph_caps_for_mode correct for r/o opens?
- Ok, I think you're right. may_open happens at a higher level and we will simply request the caps at that point. False...
- 10:50 AM Bug #21734: mount client shows total capacity of cluster but not of a pool
- (Just moving this closed ticket because I'm deleting the bogus "cephfs" category in the toplevel Ceph project)
- 07:05 AM Backport #22719 (In Progress): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/19982
- 06:57 AM Backport #22719 (Resolved): luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/19982
- 05:43 AM Backport #22590 (In Progress): jewel: ceph.in: tell mds does not understand --cluster
- 04:12 AM Bug #22629 (Pending Backport): client: avoid recursive lock in ll_get_vino
- 04:12 AM Bug #22631 (Pending Backport): mds: crashes because of old pool id in journal header
- 04:11 AM Backport #22690 (In Progress): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insuffic...
- https://github.com/ceph/ceph/pull/19976
- 04:10 AM Bug #22647 (Pending Backport): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- 02:38 AM Backport #22689 (In Progress): jewel: client: fails to release to revoking Fc
- 02:38 AM Backport #22689: jewel: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19975
01/16/2018
- 07:28 PM Bug #22428: mds: don't report slow request for blocked filelock request
- Here's a recent example from someone in #ceph:...
- 02:13 PM Backport #22688 (In Progress): luminous: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19970
- 08:16 AM Backport #22688 (Resolved): luminous: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/20342
- 01:57 PM Backport #22699 (In Progress): luminous: client:_rmdir() uses a deleted memory structure(Dentry) ...
- 01:57 PM Backport #22699 (Fix Under Review): luminous: client:_rmdir() uses a deleted memory structure(Den...
- https://github.com/ceph/ceph/pull/19968
- 08:17 AM Backport #22699 (Resolved): luminous: client:_rmdir() uses a deleted memory structure(Dentry) lea...
- https://github.com/ceph/ceph/pull/19968
- 08:34 AM Backport #22579 (In Progress): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster fu...
- 08:31 AM Backport #22580 (In Progress): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_fu...
- 08:23 AM Backport #22695 (In Progress): jewel: mds: fix dump last_sent
- 08:17 AM Backport #22695 (Resolved): jewel: mds: fix dump last_sent
- https://github.com/ceph/ceph/pull/19961
- 08:22 AM Backport #22694 (In Progress): luminous: mds: fix dump last_sent
- 08:17 AM Backport #22694 (Resolved): luminous: mds: fix dump last_sent
- https://github.com/ceph/ceph/pull/19959
- 08:17 AM Backport #22700 (Resolved): jewel: client:_rmdir() uses a deleted memory structure(Dentry) leadin...
- https://github.com/ceph/ceph/pull/19993
- 08:17 AM Backport #22697 (Rejected): jewel: client: dirty caps may never get the chance to flush
- 08:17 AM Backport #22696 (Resolved): luminous: client: dirty caps may never get the chance to flush
- https://github.com/ceph/ceph/pull/21278
- 08:16 AM Backport #22690 (Resolved): luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficien...
- https://github.com/ceph/ceph/pull/19976
- 08:16 AM Backport #22689 (Resolved): jewel: client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19975
- 06:38 AM Bug #22683: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- https://github.com/ceph/ceph/pull/19957
- 02:47 AM Bug #22683 (Resolved): client: coredump when nfs-ganesha use ceph_ll_get_inode()
- Environment:
nfs : nfs-ganesha 2.5.4 + https://github.com/nfs-ganesha/nfs-ganesha/commit/476c2068bd4a3fd22f0d...
01/15/2018
- 02:36 PM Bug #22610 (Fix Under Review): MDS: assert failure when the inode for the cap_export from other M...
01/13/2018
01/12/2018
- 10:42 PM Bug #22652 (Pending Backport): client: fails to release to revoking Fc
- 10:39 PM Bug #22646 (Pending Backport): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- 03:49 PM Feature #21995 (In Progress): ceph-fuse: support nfs export
- 11:07 AM Feature #21156 (In Progress): mds: speed up recovery with many open inodes
01/11/2018
- 10:50 PM Backport #22508: luminous: MDSMonitor: inconsistent role/who usage in command help
- See also: https://github.com/ceph/ceph/pull/19926
- 10:29 PM Bug #21393: MDSMonitor: inconsistent role/who usage in command help
- The fix for this causes upgrade tests to fail: http://tracker.ceph.com/issues/22527#note-9
We will probably need t...
- 08:39 AM Bug #22652 (Fix Under Review): client: fails to release to revoking Fc
- https://github.com/ceph/ceph/pull/19920
- 08:37 AM Bug #22652: client: fails to release to revoking Fc
- The hang in fuse_reverse_inval_inode() was caused by hung page writeback.
01/10/2018
- 11:24 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
- https://github.com/ceph/ceph/pull/19907
- 10:44 PM Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
- I'm on it.
- 04:41 PM Bug #22631 (Fix Under Review): mds: crashes because of old pool id in journal header
- 03:41 PM Backport #22076 (In Progress): luminous: 'ceph tell mds' commands result in 'File exists' errors ...
- 03:17 PM Backport #22076 (Fix Under Review): luminous: 'ceph tell mds' commands result in 'File exists' er...
- 02:45 PM Bug #22652: client: fails to release to revoking Fc
- 01:29 PM Bug #22652: client: fails to release to revoking Fc
- I reproduced it locally. It seems like a kernel issue. The issue happens only when fuse_use_invalidate_cb is true.
- 11:02 AM Bug #22652 (Resolved): client: fails to release to revoking Fc
- http://pulpito.ceph.com/pdonnell-2018-01-09_21:14:38-multimds-wip-pdonnell-testing-20180109.193634-testing-basic-smit...
- 05:54 AM Bug #22647 (Fix Under Review): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- https://github.com/ceph/ceph/pull/19891
- 02:34 AM Bug #22647 (Resolved): mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
- ...
- 01:08 AM Bug #22629 (Fix Under Review): client: avoid recursive lock in ll_get_vino
- 01:05 AM Bug #22562 (Pending Backport): mds: fix dump last_sent
- 01:05 AM Bug #22546 (Pending Backport): client: dirty caps may never get the chance to flush
- 01:04 AM Bug #22536 (Pending Backport): client:_rmdir() uses a deleted memory structure(Dentry) leading a ...
- 12:44 AM Bug #22646: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- https://github.com/ceph/ceph/pull/19885
- 12:40 AM Bug #22646 (Resolved): qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
- This causes startup to fail for ec pool configurations.
(This was included in my fix for #22627 but I'm breaking i...