Activity

From 10/25/2018 to 11/23/2018

11/23/2018

08:16 PM Bug #36189 (Need More Info): ceph-fuse client can't read or write due to backward cap_gen
Zheng writes: "If cap is invalid during reconnect, mds should consider issued caps is empty (just CEPH_CAP_PIN)"
a...
Nathan Cutler
08:14 PM Backport #36462 (Need More Info): luminous: ceph-fuse client can't read or write due to backward ...
First attempted backport, https://github.com/ceph/ceph/pull/25089, was closed because the master PR might have an iss... Nathan Cutler
08:14 PM Backport #36463 (Need More Info): mimic: ceph-fuse client can't read or write due to backward cap...
The first backport was https://github.com/ceph/ceph/pull/25091. The original master fix might have an issue, though, ... Nathan Cutler
05:36 PM Bug #37378: truncate_seq ordering issues with object creation
I don't fully understand the following code, but I suspect the issue could be related to truncate_seq in this OSD fun... Luis Henriques
11:33 AM Bug #37378: truncate_seq ordering issues with object creation
I forgot to mention that using the 'rados' command I'm able to see that the objects in the data pool actually seem to... Luis Henriques
10:17 AM Bug #37378 (Resolved): truncate_seq ordering issues with object creation
I'm seeing a bug with copy_file_range in recent clients. Here's a simple way to reproduce it:... Luis Henriques
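Since the reporter's reproduction steps are truncated above, the following is only a generic, minimal sketch of copy_file_range(2) usage on a CephFS mount (the /mnt/cephfs paths and the 1 MiB length are placeholders, and this is not the actual reproducer); it assumes Linux with glibc 2.27 or newer for the syscall wrapper:

    // Generic copy_file_range illustration; paths are placeholders, not the reporter's test case.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int in = open("/mnt/cephfs/src", O_RDONLY);
        int out = open("/mnt/cephfs/dst", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (in < 0 || out < 0) {
            perror("open");
            return 1;
        }
        // Copy up to 1 MiB using the files' own offsets; the call may copy fewer bytes.
        ssize_t copied = copy_file_range(in, nullptr, out, nullptr, 1 << 20, 0);
        if (copied < 0) {
            perror("copy_file_range");
            return 1;
        }
        printf("copied %zd bytes\n", copied);
        close(in);
        close(out);
        return 0;
    }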

11/22/2018

05:16 PM Backport #36690 (Resolved): mimic: client: request next osdmap for blacklisted client
Nathan Cutler
04:40 PM Backport #36690: mimic: client: request next osdmap for blacklisted client
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24987
merged
Yuri Weinstein
05:15 PM Backport #36218 (Resolved): mimic: Some cephfs tool commands silently operate on only rank 0, eve...
Nathan Cutler
04:39 PM Backport #36218: mimic: Some cephfs tool commands silently operate on only rank 0, even if multip...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25036
merged
Yuri Weinstein
05:15 PM Backport #36461 (Resolved): mimic: mds: rctime not set on system inode (root) at startup
Nathan Cutler
04:38 PM Backport #36461: mimic: mds: rctime not set on system inode (root) at startup
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25042
merged
Yuri Weinstein
04:53 PM Backport #36463 (Resolved): mimic: ceph-fuse client can't read or write due to backward cap_gen
Jonathan Brielmaier
04:41 PM Backport #36463: mimic: ceph-fuse client can't read or write due to backward cap_gen
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25091
merged
Yuri Weinstein
04:53 PM Backport #36457 (Resolved): mimic: client: explicitly show blacklisted state via asok status command
Jonathan Brielmaier
04:39 PM Backport #36457: mimic: client: explicitly show blacklisted state via asok status command
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24993
merged
Yuri Weinstein
04:53 PM Backport #37093 (Resolved): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" duri...
Jonathan Brielmaier
04:37 PM Backport #37093: mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds ...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25095
merged
Yuri Weinstein
09:42 AM Bug #37368 (Resolved): mds: directories pinned keep being replicated back and forth between expor...
Recently, when developing the rstat propagation function, we found that when pinning some directory to a specific ran... Xuehan Xu

11/21/2018

06:26 PM Bug #37355: tasks.cephfs.test_volume_client fails with "ImportError: No module named 'ceph_argpar...
I believe this problem also exists in Luminous? Patrick Donnelly
01:12 PM Bug #37355 (Duplicate): tasks.cephfs.test_volume_client fails with "ImportError: No module named ...
seen here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-11-20_16:46:36-fs-wip-yuri3-testing-2018-11-16-1727-mimic-d... Venky Shankar
06:13 PM Backport #36694 (Resolved): mimic: mds: cache drop command requires timeout argument when it is s...
Patrick Donnelly
06:13 PM Backport #36282 (Resolved): mimic: mds: add drop_cache command
Patrick Donnelly
12:52 PM Bug #24517: "Loading libcephfs-jni: Failure!" in fs suite
seen again here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-11-20_16:46:36-fs-wip-yuri3-testing-2018-11-16-1727-m... Venky Shankar

11/20/2018

04:24 AM Bug #37333 (Resolved): fuse client can't read file due to can't acquire Fr
ceph version: jewel:10.2.2
logs:
client.log...
Ivan Guan

11/19/2018

04:21 PM Bug #25113: mds: allows client to create ".." and "." dirents
Is it possible to create such dirents using this sequence?
1. create symlink "hack" -> ".."
2. mkdir hack
i...
Марк Коренберг
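The sequence proposed in the comment above maps onto two POSIX calls; the sketch below is purely illustrative (the /mnt/cephfs/dir path is a placeholder) and does not assert that a ".." dirent can actually be created this way, since mkdir() on an existing symlink is normally expected to fail with EEXIST:

    // Illustration of the proposed sequence; whether it can create a ".." dirent is the open question.
    #include <sys/stat.h>
    #include <unistd.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main() {
        // Step 1: create a symlink "hack" pointing at "..".
        if (symlink("..", "/mnt/cephfs/dir/hack") < 0)
            fprintf(stderr, "symlink: %s\n", strerror(errno));
        // Step 2: attempt mkdir on the same name.
        if (mkdir("/mnt/cephfs/dir/hack", 0755) < 0)
            fprintf(stderr, "mkdir: %s\n", strerror(errno));
        return 0;
    }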

11/15/2018

02:25 PM Backport #36694 (In Progress): mimic: mds: cache drop command requires timeout argument when it i...
Venky Shankar
03:31 AM Backport #36282 (In Progress): mimic: mds: add drop_cache command
ACK Venky Shankar

11/14/2018

04:09 PM Backport #37093 (In Progress): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" d...
Jonathan Brielmaier
03:52 PM Backport #36694: mimic: mds: cache drop command requires timeout argument when it is supposed to ...
Possibly to be done together with #36282 in a single PR. Nathan Cutler
03:51 PM Backport #36694 (Need More Info): mimic: mds: cache drop command requires timeout argument when i...
Nathan Cutler
02:54 PM Backport #36694: mimic: mds: cache drop command requires timeout argument when it is supposed to ...
Needs the backport of https://github.com/ceph/ceph/pull/21566 first, as mimic is missing the "drop cache" command Jonathan Brielmaier
03:50 PM Backport #36282: mimic: mds: add drop_cache command
@Venky can you combine #36694 with this backport? (Like you already did for the luminous backport afaict) Nathan Cutler
12:59 PM Backport #36463 (In Progress): mimic: ceph-fuse client can't read or write due to backward cap_gen
Jonathan Brielmaier
12:48 PM Backport #36462 (In Progress): luminous: ceph-fuse client can't read or write due to backward cap...
Jonathan Brielmaier

11/13/2018

09:18 PM Backport #37093 (Resolved): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" duri...
https://github.com/ceph/ceph/pull/25095 Nathan Cutler
09:17 PM Backport #37092 (Resolved): luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" d...
https://github.com/ceph/ceph/pull/25826 Nathan Cutler
09:04 PM Bug #36350 (Pending Backport): mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during m...
Patrick Donnelly
03:18 PM Feature #36253 (In Progress): cephfs: clients should send usage metadata to MDSs for administrati...
Venky Shankar
01:56 PM Feature #37085 (Resolved): add command to bring cluster down rapidly
We now have a command to nicely bring the cluster down via `ceph fs set <name> down true`. This does sequential deact... Patrick Donnelly
09:57 AM Feature #25013 (Resolved): mds: add average session age (uptime) perf counter
Nathan Cutler
09:57 AM Backport #35938 (Resolved): mimic: mds: add average session age (uptime) perf counter
Nathan Cutler
09:57 AM Bug #26962 (Resolved): mds: use monotonic clock for beacon sender thread waits
Nathan Cutler
09:57 AM Backport #32090 (Resolved): mimic: mds: use monotonic clock for beacon sender thread waits
Nathan Cutler
09:57 AM Bug #26959 (Resolved): mds: use monotonic clock for beacon message timekeeping
Nathan Cutler
09:57 AM Backport #35837 (Resolved): mimic: mds: use monotonic clock for beacon message timekeeping
Nathan Cutler
09:56 AM Bug #24004 (Resolved): mds: curate priority of perf counters sent to mgr
Nathan Cutler
09:56 AM Backport #26991 (Resolved): mimic: mds: curate priority of perf counters sent to mgr
Nathan Cutler

11/12/2018

08:25 PM Backport #35938: mimic: mds: add average session age (uptime) perf counter
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
Yuri Weinstein
08:25 PM Backport #32090: mimic: mds: use monotonic clock for beacon sender thread waits
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
Yuri Weinstein
08:25 PM Backport #35837: mimic: mds: use monotonic clock for beacon message timekeeping
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
Yuri Weinstein
08:25 PM Backport #26991: mimic: mds: curate priority of perf counters sent to mgr
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
Yuri Weinstein

11/11/2018

10:38 AM Backport #36460 (In Progress): luminous: mds: rctime not set on system inode (root) at startup
Nathan Cutler
10:31 AM Backport #36461 (In Progress): mimic: mds: rctime not set on system inode (root) at startup
Nathan Cutler

11/10/2018

02:56 PM Backport #36218 (In Progress): mimic: Some cephfs tool commands silently operate on only rank 0, ...
Nathan Cutler
02:53 PM Backport #36209 (Need More Info): mimic: mds: runs out of file descriptors after several respawns
Nathan Cutler

11/09/2018

05:20 PM Bug #36593: qa: quota failure caused by clients stepping on each other
And another update: I cannot understand why there are two clients (both on smithi071 btw) that do a readdir in the r... Luis Henriques

11/08/2018

04:56 PM Bug #36593: qa: quota failure caused by clients stepping on each other
Quick update: Looking further at the logs helped me... getting more confused :-)
So, all 4 clients are failing...
Luis Henriques
04:29 PM Backport #36456 (In Progress): luminous: client: explicitly show blacklisted state via asok statu...
Jonathan Brielmaier
04:05 PM Backport #36457 (In Progress): mimic: client: explicitly show blacklisted state via asok status c...
Jonathan Brielmaier
01:11 PM Bug #36703 (Fix Under Review): MDS admin socket command `dump cache` with a very large cache will...
Venky Shankar
04:41 AM Backport #36690 (In Progress): mimic: client: request next osdmap for blacklisted client
Jos Collin
04:34 AM Backport #36691 (In Progress): luminous: client: request next osdmap for blacklisted client
Jos Collin

11/07/2018

11:37 PM Bug #36730 (Fix Under Review): mds: should apply policy to throttle client messages
Patrick Donnelly
11:32 PM Bug #36730 (Rejected): mds: should apply policy to throttle client messages
Currently client messages are not throttled except by the global DispatchQueue::dispatch_throttler which is applied t... Patrick Donnelly
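As a rough illustration of what throttling client messages means here, the sketch below shows a generic byte-budget throttle that blocks a sender once too many bytes are in flight; it is not Ceph's DispatchQueue::dispatch_throttler, and the class name is invented:

    // Generic byte-budget throttle; purely conceptual, not the Ceph implementation.
    #include <condition_variable>
    #include <cstdint>
    #include <mutex>

    class ByteThrottle {
      std::mutex m;
      std::condition_variable cv;
      uint64_t max_bytes;
      uint64_t in_flight = 0;
    public:
      explicit ByteThrottle(uint64_t max) : max_bytes(max) {}

      // Block until the message fits under the budget, then reserve it.
      void get(uint64_t bytes) {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&] { return in_flight + bytes <= max_bytes; });
        in_flight += bytes;
      }

      // Release the budget once the message has been handled.
      void put(uint64_t bytes) {
        std::lock_guard<std::mutex> l(m);
        in_flight -= bytes;
        cv.notify_all();
      }
    };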
10:48 PM Feature #22446 (In Progress): mds: ask idle client to trim more caps
Patrick Donnelly
11:08 AM Feature #36338 (Resolved): Namespace support for libcephfs
Jeff Layton
09:46 AM Feature #36338: Namespace support for libcephfs
Thanks. As it's already implemented, this ticket can be closed. Stefan Kooman

11/06/2018

05:22 PM Feature #36707 (Fix Under Review): client: support getfattr ceph.dir.pin extended attribute
Patrick Donnelly
06:37 AM Feature #36707 (Resolved): client: support getfattr ceph.dir.pin extended attribute
With multiple MDSes, we can set ceph.dir.pin on the client to bind a directory to a specific MDS. But we can't get this attrib... Zhi Zhang
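For context, ceph.dir.pin is set today through the standard xattr interface; the sketch below (with a hypothetical mount path) shows both the existing setxattr call and the getxattr read-back that this feature request asks the client to support:

    // Assumes a CephFS mount at the placeholder path /mnt/cephfs/dir.
    #include <sys/xattr.h>
    #include <cstdio>

    int main() {
        const char *path = "/mnt/cephfs/dir";
        // Pin the directory to MDS rank 1 (already supported).
        if (setxattr(path, "ceph.dir.pin", "1", 1, 0) < 0)
            perror("setxattr ceph.dir.pin");
        // Read the pin back -- the ability requested by this ticket.
        char buf[32];
        ssize_t n = getxattr(path, "ceph.dir.pin", buf, sizeof(buf) - 1);
        if (n < 0) {
            perror("getxattr ceph.dir.pin");
        } else {
            buf[n] = '\0';
            printf("ceph.dir.pin = %s\n", buf);
        }
        return 0;
    }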
04:07 PM Backport #36695 (In Progress): luminous: mds: cache drop command requires timeout argument when i...
Venky Shankar
02:46 PM Bug #16842: mds: replacement MDS crashes on InoTable release
https://github.com/ceph/ceph/pull/24942 can resolve this problem Ivan Guan
06:41 AM Bug #16842: mds: replacement MDS crashes on InoTable release
I think the patch https://github.com/ceph/ceph/pull/14164 can't resolve this bug completely. For example, my situation:... Ivan Guan

11/05/2018

10:31 PM Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
https://bugzilla.redhat.com/show_bug.cgi?id=1642015 Patrick Donnelly
08:59 PM Backport #36643 (Resolved): mimic: Internal fragment of ObjectCacher
Nathan Cutler
02:37 PM Bug #36669 (Fix Under Review): client: displayed as the capacity of all OSDs when there are multi...
Patrick Donnelly
02:36 PM Bug #36593: qa: quota failure caused by clients stepping on each other
Luis Henriques wrote:
> Patrick Donnelly wrote:
> > Luis Henriques wrote:
> > > A quick look at the logs shows tha...
Patrick Donnelly
11:36 AM Bug #36593: qa: quota failure caused by clients stepping on each other
Patrick Donnelly wrote:
> Luis Henriques wrote:
> > A quick look at the logs shows that there are 4 clients running...
Luis Henriques
12:35 PM Bug #36703 (Resolved): MDS admin socket command `dump cache` with a very large cache will hang/ki...
The MDS tries to dump the cache to a formatter which will not work well if the MDS cache is too large (probably start... Venky Shankar
04:22 AM Backport #24759 (In Progress): luminous: test gets ENOSPC from bluestore block device
Venky Shankar

11/04/2018

02:31 PM Backport #36643: mimic: Internal fragment of ObjectCacher
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24873
merged
Yuri Weinstein

11/03/2018

03:51 AM Backport #36695 (Resolved): luminous: mds: cache drop command requires timeout argument when it i...
https://github.com/ceph/ceph/pull/24468 Nathan Cutler
03:50 AM Backport #36694 (Resolved): mimic: mds: cache drop command requires timeout argument when it is s...
https://github.com/ceph/ceph/pull/25118 Nathan Cutler
03:50 AM Backport #36691 (Resolved): luminous: client: request next osdmap for blacklisted client
https://github.com/ceph/ceph/pull/24986 Nathan Cutler
03:50 AM Backport #36690 (Resolved): mimic: client: request next osdmap for blacklisted client
https://github.com/ceph/ceph/pull/24987 Nathan Cutler
03:48 AM Feature #17230 (Resolved): ceph_volume_client: py3 compatible
Nathan Cutler
03:48 AM Backport #26850 (Resolved): mimic: ceph_volume_client: py3 compatible
Nathan Cutler
12:02 AM Bug #36676 (Fix Under Review): qa: wrong setting for msgr failures
Patrick Donnelly
12:00 AM Bug #36676 (Pending Backport): qa: wrong setting for msgr failures
Patrick Donnelly
12:00 AM Bug #36668 (Pending Backport): client: request next osdmap for blacklisted client
Patrick Donnelly

11/02/2018

02:26 PM Backport #36643 (In Progress): mimic: Internal fragment of ObjectCacher
Nathan Cutler

11/01/2018

10:21 PM Bug #36593: qa: quota failure caused by clients stepping on each other
Luis Henriques wrote:
> A quick look at the logs shows that there are 4 clients running this test simultaneously. I...
Patrick Donnelly
09:55 PM Bug #36320 (Pending Backport): mds: cache drop command requires timeout argument when it is suppo...
Patrick Donnelly
09:54 PM Feature #36585 (Resolved): allow nfs-ganesha to export named cephfs filesystems
Patrick Donnelly
08:02 PM Bug #36676 (Fix Under Review): qa: wrong setting for msgr failures
Patrick Donnelly
08:00 PM Bug #36676 (Resolved): qa: wrong setting for msgr failures
https://github.com/ceph/ceph/blob/c0fd904b99a928f3cc2df112f5162edfe6a9165c/qa/suites/fs/thrash/msgr-failures/osd-mds-... Patrick Donnelly
05:14 PM Bug #36668 (Fix Under Review): client: request next osdmap for blacklisted client
Patrick Donnelly
06:56 AM Bug #36668: client: request next osdmap for blacklisted client
https://github.com/ceph/ceph/pull/24870 Zhi Zhang
06:54 AM Bug #36668 (Resolved): client: request next osdmap for blacklisted client
In the Luminous version, we found that a blacklisted client would never get rid of the blacklisted flag if the network was down for some... Zhi Zhang
04:59 PM Backport #26850: mimic: ceph_volume_client: py3 compatible
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24443
merged
Yuri Weinstein
03:37 PM Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
log file showing errors (debug level 3) Jon Morby
03:31 PM Bug #36673 (New): /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
... Jon Morby
12:43 PM Bug #36669: client: displayed as the capacity of all OSDs when there are multiple data pools in t...
Fixup
https://github.com/ceph/ceph/pull/24880
huanwen ren
07:25 AM Bug #36669 (Rejected): client: displayed as the capacity of all OSDs when there are multiple data...
When using ceph-fuse to mount a cephfs directory, if there are multiple data pools in the FS, the capacity o... huanwen ren
10:22 AM Backport #36642 (In Progress): luminous: Internal fragment of ObjectCacher
Nathan Cutler
08:12 AM Backport #36642: luminous: Internal fragment of ObjectCacher
PR: https://github.com/ceph/ceph/pull/24872 Mykola Golub
08:20 AM Backport #36643: mimic: Internal fragment of ObjectCacher
PR: https://github.com/ceph/ceph/pull/24873 Mykola Golub

10/31/2018

08:07 PM Backport #36664 (In Progress): jewel: Internal fragment of ObjectCacher
Nathan Cutler
06:58 PM Backport #36664: jewel: Internal fragment of ObjectCacher
PR: https://github.com/ceph/ceph/pull/24865 Mykola Golub
06:47 PM Backport #36664 (Rejected): jewel: Internal fragment of ObjectCacher
https://github.com/ceph/ceph/pull/24865 Jason Dillaman
06:34 PM Feature #36663 (In Progress): mds: adjust cache memory limit automatically via target that tracks...
The basic idea is to have a new config like `mds_memory_target` that, if set, automatically adjusts `mds_cache_memory_lim... Patrick Donnelly
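A purely conceptual sketch of that idea, with invented names and no claim about the eventual MDS implementation, is to periodically rescale the cache limit by how far observed memory usage sits from the target:

    // Conceptual only: nudge the cache limit toward a memory target based on observed RSS.
    #include <algorithm>
    #include <cstdint>

    uint64_t adjust_cache_limit(uint64_t cache_limit, uint64_t rss, uint64_t memory_target) {
        if (memory_target == 0 || rss == 0)
            return cache_limit;  // target unset or no sample yet: leave the limit alone
        // Scale by target/RSS, clamped so the limit only moves gradually each tick.
        double ratio = static_cast<double>(memory_target) / static_cast<double>(rss);
        ratio = std::clamp(ratio, 0.9, 1.1);
        return static_cast<uint64_t>(cache_limit * ratio);
    }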
05:13 PM Feature #24464: cephfs: file-level snapshots
Onkar M wrote:
> I want to work on this issue. I'm new to Ceph, so want to use this feature to get to know Ceph bett...
Patrick Donnelly

10/30/2018

09:25 PM Bug #36651 (Fix Under Review): ceph-volume-client: cannot set mode for cephfs volumes as required...
Patrick Donnelly
07:12 PM Bug #36651 (Resolved): ceph-volume-client: cannot set mode for cephfs volumes as required by Open...
OpenShift developers report that when they use their dynamic external storage provider (in OpenShift 3.11) with manil... Tom Barron
07:14 PM Feature #24464: cephfs: file-level snapshots
I want to work on this issue. I'm new to Ceph, so I want to use this feature to get to know Ceph better.
Will someo...
Onkar M
06:22 PM Bug #36611: ceph-mds failure
Please update the list with what you did to fix the FS so everyone can learn from the experience. =) Patrick Donnelly
05:31 PM Backport #32092 (Resolved): mimic: mds: migrate strays part by part when shutdown mds
Patrick Donnelly
05:08 PM Backport #32092: mimic: mds: migrate strays part by part when shutdown mds
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24435
merged
Yuri Weinstein
05:31 PM Bug #36346 (Resolved): mimic: mds: purge queue corruption from wrong backport
Patrick Donnelly
04:49 PM Bug #36346: mimic: mds: purge queue corruption from wrong backport
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24485
merged
Yuri Weinstein
05:30 PM Bug #24644 (Resolved): cephfs-journal-tool: wrong layout info used
Patrick Donnelly
05:30 PM Backport #24933 (Resolved): mimic: cephfs-journal-tool: wrong layout info used
Patrick Donnelly
04:48 PM Backport #24933: mimic: cephfs-journal-tool: wrong layout info used
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24583
merged
Yuri Weinstein
05:30 PM Backport #36280 (Resolved): mimic: qa: RuntimeError: FSCID 10 has no rank 1
Patrick Donnelly
04:48 PM Backport #36280: mimic: qa: RuntimeError: FSCID 10 has no rank 1
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24572
merged
Yuri Weinstein
05:30 PM Feature #25188 (Resolved): mds: configurable timeout for client eviction
Patrick Donnelly
05:29 PM Backport #35975 (Resolved): mimic: mds: configurable timeout for client eviction
Patrick Donnelly
04:47 PM Backport #35975: mimic: mds: configurable timeout for client eviction
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24661
merged
Yuri Weinstein
05:29 PM Backport #36501 (Resolved): mimic: qa: increase rm timeout for workunit cleanup
Patrick Donnelly
04:46 PM Backport #36501: mimic: qa: increase rm timeout for workunit cleanup
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24684
merged
Yuri Weinstein
05:15 PM Backport #36643 (Resolved): mimic: Internal fragment of ObjectCacher
https://github.com/ceph/ceph/pull/24873 Patrick Donnelly
05:15 PM Backport #36642 (Resolved): luminous: Internal fragment of ObjectCacher
https://github.com/ceph/ceph/pull/24872 Patrick Donnelly
04:44 PM Bug #36635 (Resolved): mds: purge queue corruption from wrong backport
Master version of #36346. We need to add the special handling for the wrong purge queue format in 13.2.2. Patrick Donnelly
04:13 PM Bug #26969: kclient: mount unexpectedly gets osdmap updates causing test to fail
On mimic: /ceph/teuthology-archive/yuriw-2018-10-23_17:23:46-kcephfs-wip-yuri2-testing-2018-10-23-1513-mimic-testing-... Patrick Donnelly

10/29/2018

08:54 PM Bug #36611: ceph-mds failure
Patrick Donnelly wrote:
> Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the ...
Jon Morby
08:39 PM Bug #36611 (Won't Fix): ceph-mds failure
Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the MDS daemons to 13.2.1.
E...
Patrick Donnelly
01:25 PM Feature #36608 (Fix Under Review): mds: answering all pending getattr/lookups targeting the same ...
Patrick Donnelly

10/28/2018

09:46 PM Bug #36611 (Won't Fix): ceph-mds failure
... Jon Morby
01:35 PM Feature #36608 (Resolved): mds: answering all pending getattr/lookups targeting the same inode in...
As of now, all getattr/lookup requests get processed one by one, which wastes CPU resources. Actually, f... Xuehan Xu
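As a conceptual illustration of the batching idea (types and names invented; not the MDS code), pending callers can be queued per inode and all answered when the first one's result arrives:

    // Queue getattr/lookup waiters per inode and answer them all in one go.
    #include <cstdint>
    #include <functional>
    #include <map>
    #include <vector>

    struct InodeBatcher {
        using Reply = std::function<void(int result)>;
        std::map<uint64_t, std::vector<Reply>> pending;  // ino -> waiting requests

        // Returns true if the caller should do the work (first waiter);
        // later callers simply queue behind the in-flight request.
        bool queue(uint64_t ino, Reply r) {
            auto &v = pending[ino];
            v.push_back(std::move(r));
            return v.size() == 1;
        }

        // When the result for the inode is ready, answer every queued request.
        void complete(uint64_t ino, int result) {
            auto it = pending.find(ino);
            if (it == pending.end())
                return;
            for (auto &r : it->second)
                r(result);
            pending.erase(it);
        }
    };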

10/26/2018

11:29 PM Bug #36395 (Resolved): mds: Documentation for the reclaim mechanism
Patrick Donnelly
12:20 PM Feature #36585: allow nfs-ganesha to export named cephfs filesystems
New ceph interface here. I also have a set of ganesha patches that will use this to generate filehandles when it's de... Jeff Layton
11:23 AM Bug #36192 (Pending Backport): Internal fragment of ObjectCacher
Jason Dillaman

10/25/2018

10:31 AM Bug #36593: qa: quota failure caused by clients stepping on each other
A quick look at the logs shows that there are 4 clients running this test simultaneously. I wonder if this is something... Luis Henriques
08:01 AM Backport #36309 (In Progress): luminous: doc: Typo error on cephfs/fuse/
Jos Collin
 
