Activity
From 10/25/2018 to 11/23/2018
11/23/2018
- 08:16 PM Bug #36189 (Need More Info): ceph-fuse client can't read or write due to backward cap_gen
- Zheng writes: "If cap is invalid during reconnect, mds should consider issued caps is empty (just CEPH_CAP_PIN)"
a...
- 08:14 PM Backport #36462 (Need More Info): luminous: ceph-fuse client can't read or write due to backward ...
- First attempted backport, https://github.com/ceph/ceph/pull/25089, was closed because the master PR might have an iss...
- 08:14 PM Backport #36463 (Need More Info): mimic: ceph-fuse client can't read or write due to backward cap...
- The first backport was https://github.com/ceph/ceph/pull/25091. The original master fix might have an issue, though, ...
- 05:36 PM Bug #37378: truncate_seq ordering issues with object creation
- I don't fully understand the following code, but I suspect the issue could be related to truncate_seq in this OSD fun...
- 11:33 AM Bug #37378: truncate_seq ordering issues with object creation
- I forgot to mention that using the 'rados' command I'm able to see that the objects in the data pool actually seem to...
- 10:17 AM Bug #37378 (Resolved): truncate_seq ordering issues with object creation
- I'm seeing a bug with copy_file_range in recent clients. Here's a simple way to reproduce it:...
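For illustration only (the actual reproducer is elided above), a minimal Python sketch of exercising copy_file_range on a CephFS mount; the /mnt/cephfs path is an assumption:
    import os

    mnt = "/mnt/cephfs"                       # assumed CephFS mountpoint
    src_path = os.path.join(mnt, "src.dat")
    dst_path = os.path.join(mnt, "dst.dat")

    # Write some data to the source file.
    with open(src_path, "wb") as f:
        f.write(b"A" * 4096)

    # Copy into a freshly created destination with copy_file_range
    # (Python 3.8+, Linux), then read it back to see what the client returns.
    src = os.open(src_path, os.O_RDONLY)
    dst = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        copied = os.copy_file_range(src, dst, 4096)
    finally:
        os.close(src)
        os.close(dst)

    with open(dst_path, "rb") as f:
        print("copied", copied, "read back", len(f.read()), "bytes")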
11/22/2018
- 05:16 PM Backport #36690 (Resolved): mimic: client: request next osdmap for blacklisted client
- 04:40 PM Backport #36690: mimic: client: request next osdmap for blacklisted client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24987
merged
- 05:15 PM Backport #36218 (Resolved): mimic: Some cephfs tool commands silently operate on only rank 0, eve...
- 04:39 PM Backport #36218: mimic: Some cephfs tool commands silently operate on only rank 0, even if multip...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25036
merged
- 05:15 PM Backport #36461 (Resolved): mimic: mds: rctime not set on system inode (root) at startup
- 04:38 PM Backport #36461: mimic: mds: rctime not set on system inode (root) at startup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25042
merged
- 04:53 PM Backport #36463 (Resolved): mimic: ceph-fuse client can't read or write due to backward cap_gen
- 04:41 PM Backport #36463: mimic: ceph-fuse client can't read or write due to backward cap_gen
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25091
merged
- 04:53 PM Backport #36457 (Resolved): mimic: client: explicitly show blacklisted state via asok status command
- 04:39 PM Backport #36457: mimic: client: explicitly show blacklisted state via asok status command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24993
merged
- 04:53 PM Backport #37093 (Resolved): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" duri...
- 04:37 PM Backport #37093: mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/25095
merged
- 09:42 AM Bug #37368 (Resolved): mds: directories pinned keep being replicated back and forth between expor...
- Recently, when developing the rstat propagation function, we found that when pinning some directory to a specific ran...
11/21/2018
- 06:26 PM Bug #37355: tasks.cephfs.test_volume_client fails with "ImportError: No module named 'ceph_argpar...
- I believe this problem also exists in Luminous?
- 01:12 PM Bug #37355 (Duplicate): tasks.cephfs.test_volume_client fails with "ImportError: No module named ...
- seen here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-11-20_16:46:36-fs-wip-yuri3-testing-2018-11-16-1727-mimic-d...
- 06:13 PM Backport #36694 (Resolved): mimic: mds: cache drop command requires timeout argument when it is s...
- 06:13 PM Backport #36282 (Resolved): mimic: mds: add drop_cache command
- 12:52 PM Bug #24517: "Loading libcephfs-jni: Failure!" in fs suite
- seen again here: http://qa-proxy.ceph.com/teuthology/yuriw-2018-11-20_16:46:36-fs-wip-yuri3-testing-2018-11-16-1727-m...
11/20/2018
- 04:24 AM Bug #37333 (Resolved): fuse client can't read file due to can't acquire Fr
- ceph version: jewel:10.2.2
logs:
client.log...
11/19/2018
- 04:21 PM Bug #25113: mds: allows client to create ".." and "." dirents
- Is it possible to create such dirents using this sequence?
1. create symlink "hack" -> ".."
2. mkdir hack
i...
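For reference, the sequence being asked about, as a minimal Python sketch run inside a CephFS mount (the mountpoint path is an assumption):
    import os

    os.chdir("/mnt/cephfs")    # assumed CephFS mountpoint
    os.symlink("..", "hack")   # 1. create symlink "hack" -> ".."
    os.mkdir("hack")           # 2. mkdir hack; mkdir does not follow a trailing
                               #    symlink, so this should normally fail with EEXIST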
11/15/2018
- 02:25 PM Backport #36694 (In Progress): mimic: mds: cache drop command requires timeout argument when it i...
- 03:31 AM Backport #36282 (In Progress): mimic: mds: add drop_cache command
- ACK
11/14/2018
- 04:09 PM Backport #37093 (In Progress): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" d...
- 03:52 PM Backport #36694: mimic: mds: cache drop command requires timeout argument when it is supposed to ...
- Possibly to be done together with #36282 in a single PR.
- 03:51 PM Backport #36694 (Need More Info): mimic: mds: cache drop command requires timeout argument when i...
- 02:54 PM Backport #36694: mimic: mds: cache drop command requires timeout argument when it is supposed to ...
- Needs a backport of https://github.com/ceph/ceph/pull/21566 first, as mimic is missing the "drop cache" command
- 03:50 PM Backport #36282: mimic: mds: add drop_cache command
- @Venky can you combine #36694 with this backport? (Like you already did for the luminous backport afaict)
- 12:59 PM Backport #36463 (In Progress): mimic: ceph-fuse client can't read or write due to backward cap_gen
- 12:48 PM Backport #36462 (In Progress): luminous: ceph-fuse client can't read or write due to backward cap...
11/13/2018
- 09:18 PM Backport #37093 (Resolved): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" duri...
- https://github.com/ceph/ceph/pull/25095
- 09:17 PM Backport #37092 (Resolved): luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" d...
- https://github.com/ceph/ceph/pull/25826
- 09:04 PM Bug #36350 (Pending Backport): mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during m...
- 03:18 PM Feature #36253 (In Progress): cephfs: clients should send usage metadata to MDSs for administrati...
- 01:56 PM Feature #37085 (Resolved): add command to bring cluster down rapidly
- We now have a command to nicely bring the cluster down via `ceph fs set <name> down true`. This does sequential deact...
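For reference, the existing (slow) path can be driven like this; a sketch using the CLI, with the filesystem name "cephfs" as an assumption:
    import subprocess

    fs_name = "cephfs"  # assumed filesystem name

    # Mark the filesystem down; ranks are then deactivated one by one,
    # which is the sequential behaviour this ticket wants to speed up.
    subprocess.run(["ceph", "fs", "set", fs_name, "down", "true"], check=True)

    # Watch the ranks wind down.
    subprocess.run(["ceph", "fs", "status", fs_name], check=True)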
- 09:57 AM Feature #25013 (Resolved): mds: add average session age (uptime) perf counter
- 09:57 AM Backport #35938 (Resolved): mimic: mds: add average session age (uptime) perf counter
- 09:57 AM Bug #26962 (Resolved): mds: use monotonic clock for beacon sender thread waits
- 09:57 AM Backport #32090 (Resolved): mimic: mds: use monotonic clock for beacon sender thread waits
- 09:57 AM Bug #26959 (Resolved): mds: use monotonic clock for beacon message timekeeping
- 09:57 AM Backport #35837 (Resolved): mimic: mds: use monotonic clock for beacon message timekeeping
- 09:56 AM Bug #24004 (Resolved): mds: curate priority of perf counters sent to mgr
- 09:56 AM Backport #26991 (Resolved): mimic: mds: curate priority of perf counters sent to mgr
11/12/2018
- 08:25 PM Backport #35938: mimic: mds: add average session age (uptime) perf counter
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
- 08:25 PM Backport #32090: mimic: mds: use monotonic clock for beacon sender thread waits
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
- 08:25 PM Backport #35837: mimic: mds: use monotonic clock for beacon message timekeeping
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
- 08:25 PM Backport #26991: mimic: mds: curate priority of perf counters sent to mgr
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
11/11/2018
- 10:38 AM Backport #36460 (In Progress): luminous: mds: rctime not set on system inode (root) at startup
- 10:31 AM Backport #36461 (In Progress): mimic: mds: rctime not set on system inode (root) at startup
11/10/2018
- 02:56 PM Backport #36218 (In Progress): mimic: Some cephfs tool commands silently operate on only rank 0, ...
- 02:53 PM Backport #36209 (Need More Info): mimic: mds: runs out of file descriptors after several respawns
11/09/2018
- 05:20 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- And another update: I can not understand why there are two clients (both on smithi071 btw) that do a readdir in the r...
11/08/2018
- 04:56 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Quick update: Looking further at the logs helped me... getting more confused :-)
So, all the 4 clients are failing...
- 04:29 PM Backport #36456 (In Progress): luminous: client: explicitly show blacklisted state via asok statu...
- 04:05 PM Backport #36457 (In Progress): mimic: client: explicitly show blacklisted state via asok status c...
- 01:11 PM Bug #36703 (Fix Under Review): MDS admin socket command `dump cache` with a very large cache will...
- 04:41 AM Backport #36690 (In Progress): mimic: client: request next osdmap for blacklisted client
- 04:34 AM Backport #36691 (In Progress): luminous: client: request next osdmap for blacklisted client
11/07/2018
- 11:37 PM Bug #36730 (Fix Under Review): mds: should apply policy to throttle client messages
- 11:32 PM Bug #36730 (Rejected): mds: should apply policy to throttle client messages
- Currently client messages are not throttled except by the global DispatchQueue::dispatch_throttler which is applied t...
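Not Ceph code, just a generic sketch of the kind of per-client policy being proposed (a token bucket keyed by client id), to illustrate the idea:
    import time
    from collections import defaultdict

    class ClientThrottle:
        """Allow at most `rate` messages/sec per client, with bursts up to `burst`."""

        def __init__(self, rate=100.0, burst=200.0):
            self.rate = rate
            self.burst = burst
            self.tokens = defaultdict(lambda: burst)
            self.last = defaultdict(time.monotonic)

        def admit(self, client_id):
            now = time.monotonic()
            elapsed = now - self.last[client_id]
            self.last[client_id] = now
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens[client_id] = min(self.burst,
                                         self.tokens[client_id] + elapsed * self.rate)
            if self.tokens[client_id] >= 1.0:
                self.tokens[client_id] -= 1.0
                return True   # dispatch the message
            return False      # defer/queue it instead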
- 10:48 PM Feature #22446 (In Progress): mds: ask idle client to trim more caps
- 11:08 AM Feature #36338 (Resolved): Namespace support for libcephfs
- 09:46 AM Feature #36338: Namespace support for libcephfs
- Thanks. As it's already implemented this ticket can be closed.
11/06/2018
- 05:22 PM Feature #36707 (Fix Under Review): client: support getfattr ceph.dir.pin extended attribute
- 06:37 AM Feature #36707 (Resolved): client: support getfattr ceph.dir.pin extended attribute
- In a multi-MDS setup, we can set ceph.dir.pin on the client to bind a directory to a specific MDS, but we can't get this attrib...
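For context, pinning is already done by setting the virtual xattr from the client; this feature asks for the matching read side. A sketch using the Linux xattr calls (the directory path is an assumption):
    import os

    d = "/mnt/cephfs/mydir"  # assumed directory on a CephFS mount

    # Pin the subtree to MDS rank 1 (this part already works today).
    os.setxattr(d, "ceph.dir.pin", b"1")

    # What the feature requests: reading the pin back on the client.
    print(os.getxattr(d, "ceph.dir.pin"))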
- 04:07 PM Backport #36695 (In Progress): luminous: mds: cache drop command requires timeout argument when i...
- 02:46 PM Bug #16842: mds: replacement MDS crashes on InoTable release
- https://github.com/ceph/ceph/pull/24942 can resolve this problem
- 06:41 AM Bug #16842: mds: replacement MDS crashes on InoTable release
- I think the patch https://github.com/ceph/ceph/pull/14164 can't resolve this bug completely. For example, my situation:...
11/05/2018
- 10:31 PM Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
- https://bugzilla.redhat.com/show_bug.cgi?id=1642015
- 08:59 PM Backport #36643 (Resolved): mimic: Internal fragment of ObjectCacher
- 02:37 PM Bug #36669 (Fix Under Review): client: displayed as the capacity of all OSDs when there are multi...
- 02:36 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Luis Henriques wrote:
> Patrick Donnelly wrote:
> > Luis Henriques wrote:
> > > A quick look at the logs shows tha...
- 11:36 AM Bug #36593: qa: quota failure caused by clients stepping on each other
- Patrick Donnelly wrote:
> Luis Henriques wrote:
> > A quick look at the logs shows that there are 4 clients running...
- 12:35 PM Bug #36703 (Resolved): MDS admin socket command `dump cache` with a very large cache will hang/ki...
- The MDS tries to dump the cache to a formatter which will not work well if the MDS cache is too large (probably start...
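As a possible workaround (based on my reading of the asok interface, so treat it as an assumption), `dump cache` also accepts a file path, which makes the MDS write the dump to a local file instead of returning it over the socket:
    import subprocess

    mds = "mds.a"  # assumed MDS daemon name

    # Dump to a file on the MDS host rather than streaming the whole
    # cache dump back through the admin socket.
    subprocess.run(["ceph", "daemon", mds, "dump", "cache", "/tmp/mds-cache.txt"],
                   check=True)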
- 04:22 AM Backport #24759 (In Progress): luminous: test gets ENOSPC from bluestore block device
11/04/2018
- 02:31 PM Backport #36643: mimic: Internal fragment of ObjectCacher
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24873
merged
11/03/2018
- 03:51 AM Backport #36695 (Resolved): luminous: mds: cache drop command requires timeout argument when it i...
- https://github.com/ceph/ceph/pull/24468
- 03:50 AM Backport #36694 (Resolved): mimic: mds: cache drop command requires timeout argument when it is s...
- https://github.com/ceph/ceph/pull/25118
- 03:50 AM Backport #36691 (Resolved): luminous: client: request next osdmap for blacklisted client
- https://github.com/ceph/ceph/pull/24986
- 03:50 AM Backport #36690 (Resolved): mimic: client: request next osdmap for blacklisted client
- https://github.com/ceph/ceph/pull/24987
- 03:48 AM Feature #17230 (Resolved): ceph_volume_client: py3 compatible
- 03:48 AM Backport #26850 (Resolved): mimic: ceph_volume_client: py3 compatible
- 12:02 AM Bug #36676 (Fix Under Review): qa: wrong setting for msgr failures
- 12:00 AM Bug #36676 (Pending Backport): qa: wrong setting for msgr failures
- 12:00 AM Bug #36668 (Pending Backport): client: request next osdmap for blacklisted client
11/02/2018
11/01/2018
- 10:21 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Luis Henriques wrote:
> A quick look at the logs shows that there are 4 clients running this test simultaneously. I...
- 09:55 PM Bug #36320 (Pending Backport): mds: cache drop command requires timeout argument when it is suppo...
- 09:54 PM Feature #36585 (Resolved): allow nfs-ganesha to export named cephfs filesystems
- 08:02 PM Bug #36676 (Fix Under Review): qa: wrong setting for msgr failures
- 08:00 PM Bug #36676 (Resolved): qa: wrong setting for msgr failures
- https://github.com/ceph/ceph/blob/c0fd904b99a928f3cc2df112f5162edfe6a9165c/qa/suites/fs/thrash/msgr-failures/osd-mds-...
- 05:14 PM Bug #36668 (Fix Under Review): client: request next osdmap for blacklisted client
- 06:56 AM Bug #36668: client: request next osdmap for blacklisted client
- https://github.com/ceph/ceph/pull/24870
- 06:54 AM Bug #36668 (Resolved): client: request next osdmap for blacklisted client
- In the Luminous version, we found that a blacklisted client would never get rid of the blacklisted flag if the network was down for some...
- 04:59 PM Backport #26850: mimic: ceph_volume_client: py3 compatible
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24443
merged
- 03:37 PM Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
- log file showing errors (debug level 3)
- 03:31 PM Bug #36673 (New): /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
- ...
- 12:43 PM Bug #36669: client: displayed as the capacity of all OSDs when there are multiple data pools in t...
- Fixup
https://github.com/ceph/ceph/pull/24880
- 07:25 AM Bug #36669 (Rejected): client: displayed as the capacity of all data...
- When using ceph-fuse to mount a CephFS directory, if the filesystem has multiple data pools, the capacity o...
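Presumably the capacity was observed via df/statvfs on the mount; a sketch of checking what the client reports (the mountpoint is an assumption):
    import os

    st = os.statvfs("/mnt/cephfs")  # assumed ceph-fuse mountpoint

    # Total and free capacity as the client reports them (what df shows).
    print("total:", st.f_blocks * st.f_frsize, "free:", st.f_bfree * st.f_frsize)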
- 10:22 AM Backport #36642 (In Progress): luminous: Internal fragment of ObjectCacher
- 08:12 AM Backport #36642: luminous: Internal fragment of ObjectCacher
- PR: https://github.com/ceph/ceph/pull/24872
- 08:20 AM Backport #36643: mimic: Internal fragment of ObjectCacher
- PR: https://github.com/ceph/ceph/pull/24873
10/31/2018
- 08:07 PM Backport #36664 (In Progress): jewel: Internal fragment of ObjectCacher
- 06:58 PM Backport #36664: jewel: Internal fragment of ObjectCacher
- PR: https://github.com/ceph/ceph/pull/24865
- 06:47 PM Backport #36664 (Rejected): jewel: Internal fragment of ObjectCacher
- https://github.com/ceph/ceph/pull/24865
- 06:34 PM Feature #36663 (In Progress): mds: adjust cache memory limit automatically via target that tracks...
- Basic idea is to have a new config like `mds_memory_target` that, if set, automatically adjusts `mds_cache_memory_lim...
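Purely to illustrate the proposal (not MDS code; `mds_memory_target` is the new option suggested above), an adjustment loop nudging the cache limit toward a memory target might look like:
    def adjust_cache_limit(cache_limit, rss_bytes, memory_target, step=0.05):
        """Shrink the cache limit when RSS is over the target, grow it when
        there is headroom. Illustrative only, not the actual MDS logic."""
        if rss_bytes > memory_target:
            return max(int(cache_limit * (1.0 - step)), 64 * 2**20)  # floor: 64 MiB
        return min(int(cache_limit * (1.0 + step)), memory_target)

    # Example: 8 GiB target, 9 GiB resident, 4 GiB cache limit -> shrink a bit.
    print(adjust_cache_limit(4 * 2**30, 9 * 2**30, 8 * 2**30))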
- 05:13 PM Feature #24464: cephfs: file-level snapshots
- Onkar M wrote:
> I want to work on this issue. I'm new to Ceph, so want to use this feature to get to know Ceph bett...
10/30/2018
- 09:25 PM Bug #36651 (Fix Under Review): ceph-volume-client: cannot set mode for cephfs volumes as required...
- 07:12 PM Bug #36651 (Resolved): ceph-volume-client: cannot set mode for cephfs volumes as required by Open...
- OpenShift developers report that when they use their dynamic external storage provider (in OpenShift 3.11) with manil...
- 07:14 PM Feature #24464: cephfs: file-level snapshots
- I want to work on this issue. I'm new to Ceph, so want to use this feature to get to know Ceph better.
Will someo...
- 06:22 PM Bug #36611: ceph-mds failure
- Please update the list with what you did to fix the FS so everyone can learn from the experience. =)
- 05:31 PM Backport #32092 (Resolved): mimic: mds: migrate strays part by part when shutdown mds
- 05:08 PM Backport #32092: mimic: mds: migrate strays part by part when shutdown mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24435
merged
- 05:31 PM Bug #36346 (Resolved): mimic: mds: purge queue corruption from wrong backport
- 04:49 PM Bug #36346: mimic: mds: purge queue corruption from wrong backport
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24485
merged
- 05:30 PM Bug #24644 (Resolved): cephfs-journal-tool: wrong layout info used
- 05:30 PM Backport #24933 (Resolved): mimic: cephfs-journal-tool: wrong layout info used
- 04:48 PM Backport #24933: mimic: cephfs-journal-tool: wrong layout info used
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24583
merged
- 05:30 PM Backport #36280 (Resolved): mimic: qa: RuntimeError: FSCID 10 has no rank 1
- 04:48 PM Backport #36280: mimic: qa: RuntimeError: FSCID 10 has no rank 1
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24572
merged
- 05:30 PM Feature #25188 (Resolved): mds: configurable timeout for client eviction
- 05:29 PM Backport #35975 (Resolved): mimic: mds: configurable timeout for client eviction
- 04:47 PM Backport #35975: mimic: mds: configurable timeout for client eviction
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24661
merged
- 05:29 PM Backport #36501 (Resolved): mimic: qa: increase rm timeout for workunit cleanup
- 04:46 PM Backport #36501: mimic: qa: increase rm timeout for workunit cleanup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24684
merged
- 05:15 PM Backport #36643 (Resolved): mimic: Internal fragment of ObjectCacher
- https://github.com/ceph/ceph/pull/24873
- 05:15 PM Backport #36642 (Resolved): luminous: Internal fragment of ObjectCacher
- https://github.com/ceph/ceph/pull/24872
- 04:44 PM Bug #36635 (Resolved): mds: purge queue corruption from wrong backport
- Master version of #36346. We need to add the special handling for the wrong purge queue format in 13.2.2.
- 04:13 PM Bug #26969: kclient: mount unexpectedly gets osdmap updates causing test to fail
- On mimic: /ceph/teuthology-archive/yuriw-2018-10-23_17:23:46-kcephfs-wip-yuri2-testing-2018-10-23-1513-mimic-testing-...
10/29/2018
- 08:54 PM Bug #36611: ceph-mds failure
- Patrick Donnelly wrote:
> Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the ...
- 08:39 PM Bug #36611 (Won't Fix): ceph-mds failure
- Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the MDS daemons to 13.2.1.
E...
- 01:25 PM Feature #36608 (Fix Under Review): mds: answering all pending getattr/lookups targeting the same ...
10/28/2018
- 09:46 PM Bug #36611 (Won't Fix): ceph-mds failure
- ...
- 01:35 PM Feature #36608 (Resolved): mds: answering all pending getattr/lookups targeting the same inode in...
- As of now, all getattr/lookup requests get processed one by one, which wastes CPU resources. Actually, f...
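A generic illustration of the batching idea (not the MDS implementation): group pending requests by target inode and answer the whole group once the first reply is ready:
    from collections import defaultdict

    pending = defaultdict(list)   # inode -> list of waiting reply callbacks

    def submit(ino, reply_cb):
        """Queue a request; only the first per inode should trigger real work."""
        pending[ino].append(reply_cb)
        return len(pending[ino]) == 1

    def complete(ino, attrs):
        """Answer every queued request for this inode in one go."""
        for reply_cb in pending.pop(ino, []):
            reply_cb(attrs)

    # Three lookups race on the same inode; only the first does the work,
    # and a single completion answers all three.
    starters = [submit(0x1000, lambda a, i=i: print("reply", i, a)) for i in range(3)]
    complete(0x1000, {"mode": 0o755})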
10/26/2018
- 11:29 PM Bug #36395 (Resolved): mds: Documentation for the reclaim mechanism
- 12:20 PM Feature #36585: allow nfs-ganesha to export named cephfs filesystems
- New ceph interface here. I also have a set of ganesha patches that will use this to generate filehandles when it's de...
- 11:23 AM Bug #36192 (Pending Backport): Internal fragment of ObjectCacher
10/25/2018
- 10:31 AM Bug #36593: qa: quota failure caused by clients stepping on each other
- A quick look at the logs shows that there are 4 clients running this test simultaneously. I wonder if this is something...
- 08:01 AM Backport #36309 (In Progress): luminous: doc: Typo error on cephfs/fuse/