Activity
From 10/16/2018 to 11/14/2018
11/14/2018
- 04:09 PM Backport #37093 (In Progress): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" d...
- 03:52 PM Backport #36694: mimic: mds: cache drop command requires timeout argument when it is supposed to ...
- Possibly to be done together with #36282 in a single PR.
- 03:51 PM Backport #36694 (Need More Info): mimic: mds: cache drop command requires timeout argument when i...
- 02:54 PM Backport #36694: mimic: mds: cache drop command requires timeout argument when it is supposed to ...
- Needs a backport of https://github.com/ceph/ceph/pull/21566 first, as mimic is missing the "drop cache" command.
- 03:50 PM Backport #36282: mimic: mds: add drop_cache command
- @Venky can you combine #36694 with this backport? (Like you already did for the luminous backport afaict)
- 12:59 PM Backport #36463 (In Progress): mimic: ceph-fuse client can't read or write due to backward cap_gen
- 12:48 PM Backport #36462 (In Progress): luminous: ceph-fuse client can't read or write due to backward cap...
11/13/2018
- 09:18 PM Backport #37093 (Resolved): mimic: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" duri...
- https://github.com/ceph/ceph/pull/25095
- 09:17 PM Backport #37092 (Resolved): luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" d...
- https://github.com/ceph/ceph/pull/25826
- 09:04 PM Bug #36350 (Pending Backport): mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during m...
- 03:18 PM Feature #36253 (In Progress): cephfs: clients should send usage metadata to MDSs for administrati...
- 01:56 PM Feature #37085 (Resolved): add command to bring cluster down rapidly
- We now have a command to nicely bring the cluster down via `ceph fs set <name> down true`. This does sequential deact...
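For reference, a minimal invocation of the existing (non-rapid) command mentioned above, assuming a filesystem named "cephfs"; the faster variant this ticket asks for is not named in this entry:
    ceph fs set cephfs down true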
- 09:57 AM Feature #25013 (Resolved): mds: add average session age (uptime) perf counter
- 09:57 AM Backport #35938 (Resolved): mimic: mds: add average session age (uptime) perf counter
- 09:57 AM Bug #26962 (Resolved): mds: use monotonic clock for beacon sender thread waits
- 09:57 AM Backport #32090 (Resolved): mimic: mds: use monotonic clock for beacon sender thread waits
- 09:57 AM Bug #26959 (Resolved): mds: use monotonic clock for beacon message timekeeping
- 09:57 AM Backport #35837 (Resolved): mimic: mds: use monotonic clock for beacon message timekeeping
- 09:56 AM Bug #24004 (Resolved): mds: curate priority of perf counters sent to mgr
- 09:56 AM Backport #26991 (Resolved): mimic: mds: curate priority of perf counters sent to mgr
11/12/2018
- 08:25 PM Backport #35938: mimic: mds: add average session age (uptime) perf counter
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
- 08:25 PM Backport #32090: mimic: mds: use monotonic clock for beacon sender thread waits
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
- 08:25 PM Backport #35837: mimic: mds: use monotonic clock for beacon message timekeeping
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
- 08:25 PM Backport #26991: mimic: mds: curate priority of perf counters sent to mgr
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24467
merged
11/11/2018
- 10:38 AM Backport #36460 (In Progress): luminous: mds: rctime not set on system inode (root) at startup
- 10:31 AM Backport #36461 (In Progress): mimic: mds: rctime not set on system inode (root) at startup
11/10/2018
- 02:56 PM Backport #36218 (In Progress): mimic: Some cephfs tool commands silently operate on only rank 0, ...
- 02:53 PM Backport #36209 (Need More Info): mimic: mds: runs out of file descriptors after several respawns
11/09/2018
- 05:20 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- And another update: I cannot understand why there are two clients (both on smithi071 btw) that do a readdir in the r...
11/08/2018
- 04:56 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Quick update: Looking further at the logs helped me... getting more confused :-)
So, all 4 clients are failing...
- 04:29 PM Backport #36456 (In Progress): luminous: client: explicitly show blacklisted state via asok statu...
- 04:05 PM Backport #36457 (In Progress): mimic: client: explicitly show blacklisted state via asok status c...
- 01:11 PM Bug #36703 (Fix Under Review): MDS admin socket command `dump cache` with a very large cache will...
- 04:41 AM Backport #36690 (In Progress): mimic: client: request next osdmap for blacklisted client
- 04:34 AM Backport #36691 (In Progress): luminous: client: request next osdmap for blacklisted client
11/07/2018
- 11:37 PM Bug #36730 (Fix Under Review): mds: should apply policy to throttle client messages
- 11:32 PM Bug #36730 (Rejected): mds: should apply policy to throttle client messages
- Currently client messages are not throttled except by the global DispatchQueue::dispatch_throttler which is applied t...
- 10:48 PM Feature #22446 (In Progress): mds: ask idle client to trim more caps
- 11:08 AM Feature #36338 (Resolved): Namespace support for libcephfs
- 09:46 AM Feature #36338: Namespace support for libcephfs
- Thanks. As it's already implemented, this ticket can be closed.
11/06/2018
- 05:22 PM Feature #36707 (Fix Under Review): client: support getfattr ceph.dir.pin extended attribute
- 06:37 AM Feature #36707 (Resolved): client: support getfattr ceph.dir.pin extended attribute
- With multiple MDSs, we can set ceph.dir.pin on the client to bind a directory to a specific MDS, but we can't get this attrib...
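For illustration, assuming a ceph-fuse mount at /mnt/cephfs and a directory named "mydir": setting the pin already works, and reading it back is what this feature adds:
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/mydir
    getfattr -n ceph.dir.pin /mnt/cephfs/mydir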
- 04:07 PM Backport #36695 (In Progress): luminous: mds: cache drop command requires timeout argument when i...
- 02:46 PM Bug #16842: mds: replacement MDS crashes on InoTable release
- https://github.com/ceph/ceph/pull/24942 can resolve this problem
- 06:41 AM Bug #16842: mds: replacement MDS crashes on InoTable release
- I think the patch https://github.com/ceph/ceph/pull/14164 can't resolve this bug completely. For example, in my situation:...
11/05/2018
- 10:31 PM Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
- https://bugzilla.redhat.com/show_bug.cgi?id=1642015
- 08:59 PM Backport #36643 (Resolved): mimic: Internal fragment of ObjectCacher
- 02:37 PM Bug #36669 (Fix Under Review): client: displayed as the capacity of all OSDs when there are multi...
- 02:36 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Luis Henriques wrote:
> Patrick Donnelly wrote:
> > Luis Henriques wrote:
> > > A quick look at the logs shows tha...
- 11:36 AM Bug #36593: qa: quota failure caused by clients stepping on each other
- Patrick Donnelly wrote:
> Luis Henriques wrote:
> > A quick look at the logs shows that there are 4 clients running...
- 12:35 PM Bug #36703 (Resolved): MDS admin socket command `dump cache` with a very large cache will hang/ki...
- The MDS tries to dump the cache to a formatter, which will not work well if the MDS cache is too large (probably start...
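For context, a sketch of the admin socket command in question, assuming an MDS daemon named "a" and dumping to a file rather than streaming the whole dump over the socket:
    ceph daemon mds.a dump cache /tmp/mds.a.cache.txt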
- 04:22 AM Backport #24759 (In Progress): luminous: test gets ENOSPC from bluestore block device
11/04/2018
- 02:31 PM Backport #36643: mimic: Internal fragment of ObjectCacher
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24873
merged
11/03/2018
- 03:51 AM Backport #36695 (Resolved): luminous: mds: cache drop command requires timeout argument when it i...
- https://github.com/ceph/ceph/pull/24468
- 03:50 AM Backport #36694 (Resolved): mimic: mds: cache drop command requires timeout argument when it is s...
- https://github.com/ceph/ceph/pull/25118
- 03:50 AM Backport #36691 (Resolved): luminous: client: request next osdmap for blacklisted client
- https://github.com/ceph/ceph/pull/24986
- 03:50 AM Backport #36690 (Resolved): mimic: client: request next osdmap for blacklisted client
- https://github.com/ceph/ceph/pull/24987
- 03:48 AM Feature #17230 (Resolved): ceph_volume_client: py3 compatible
- 03:48 AM Backport #26850 (Resolved): mimic: ceph_volume_client: py3 compatible
- 12:02 AM Bug #36676 (Fix Under Review): qa: wrong setting for msgr failures
- 12:00 AM Bug #36676 (Pending Backport): qa: wrong setting for msgr failures
- 12:00 AM Bug #36668 (Pending Backport): client: request next osdmap for blacklisted client
11/02/2018
11/01/2018
- 10:21 PM Bug #36593: qa: quota failure caused by clients stepping on each other
- Luis Henriques wrote:
> A quick look at the logs shows that there are 4 clients running this test simultaneously. I...
- 09:55 PM Bug #36320 (Pending Backport): mds: cache drop command requires timeout argument when it is suppo...
- 09:54 PM Feature #36585 (Resolved): allow nfs-ganesha to export named cephfs filesystems
- 08:02 PM Bug #36676 (Fix Under Review): qa: wrong setting for msgr failures
- 08:00 PM Bug #36676 (Resolved): qa: wrong setting for msgr failures
- https://github.com/ceph/ceph/blob/c0fd904b99a928f3cc2df112f5162edfe6a9165c/qa/suites/fs/thrash/msgr-failures/osd-mds-...
- 05:14 PM Bug #36668 (Fix Under Review): client: request next osdmap for blacklisted client
- 06:56 AM Bug #36668: client: request next osdmap for blacklisted client
- https://github.com/ceph/ceph/pull/24870
- 06:54 AM Bug #36668 (Resolved): client: request next osdmap for blacklisted client
- In the Luminous version, we found that a blacklisted client would never get rid of the blacklisted flag if the network was down for some...
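For reference, the cluster-side blacklist can be inspected and an entry removed with commands like the following (the client address is illustrative); the point of this ticket is that the client should then request the next osdmap so it notices the change:
    ceph osd blacklist ls
    ceph osd blacklist rm 192.168.0.10:0/3710147553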
- 04:59 PM Backport #26850: mimic: ceph_volume_client: py3 compatible
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24443
merged
- 03:37 PM Bug #36673: /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
- log file showing errors (debug level 3)
- 03:31 PM Bug #36673 (New): /build/ceph-13.2.1/src/mds/CDir.cc: 1504: FAILED assert(is_auth())
- ...
- 12:43 PM Bug #36669: client: displayed as the capacity of all OSDs when there are multiple data pools in t...
- Fixup
https://github.com/ceph/ceph/pull/24880
- 07:25 AM Bug #36669 (Rejected): client: displayed as the capacity of all OSDs when there are multiple data...
- When using ceph-fuse to mount a CephFS directory, if the filesystem has multiple data pools, the capacity o...
- 10:22 AM Backport #36642 (In Progress): luminous: Internal fragment of ObjectCacher
- 08:12 AM Backport #36642: luminous: Internal fragment of ObjectCacher
- PR: https://github.com/ceph/ceph/pull/24872
- 08:20 AM Backport #36643: mimic: Internal fragment of ObjectCacher
- PR: https://github.com/ceph/ceph/pull/24873
10/31/2018
- 08:07 PM Backport #36664 (In Progress): jewel: Internal fragment of ObjectCacher
- 06:58 PM Backport #36664: jewel: Internal fragment of ObjectCacher
- PR: https://github.com/ceph/ceph/pull/24865
- 06:47 PM Backport #36664 (Rejected): jewel: Internal fragment of ObjectCacher
- https://github.com/ceph/ceph/pull/24865
- 06:34 PM Feature #36663 (In Progress): mds: adjust cache memory limit automatically via target that tracks...
- The basic idea is to have a new config like `mds_memory_target` that, if set, automatically adjusts `mds_cache_memory_lim...
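A hypothetical usage sketch, assuming the option were implemented under the proposed name (mds_memory_target does not exist yet):
    ceph config set mds mds_memory_target 8589934592
The MDS would then derive mds_cache_memory_limit on its own from that target.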
- 05:13 PM Feature #24464: cephfs: file-level snapshots
- Onkar M wrote:
> I want to work on this issue. I'm new to Ceph, so want to use this feature to get to know Ceph bett...
10/30/2018
- 09:25 PM Bug #36651 (Fix Under Review): ceph-volume-client: cannot set mode for cephfs volumes as required...
- 07:12 PM Bug #36651 (Resolved): ceph-volume-client: cannot set mode for cephfs volumes as required by Open...
- OpenShift developers report that when they use their dynamic external storage provider (in OpenShift 3.11) with manil...
- 07:14 PM Feature #24464: cephfs: file-level snapshots
- I want to work on this issue. I'm new to Ceph, so want to use this feature to get to know Ceph better.
Will someo...
- 06:22 PM Bug #36611: ceph-mds failure
- Please update the list with what you did to fix the FS so everyone can learn from the experience. =)
- 05:31 PM Backport #32092 (Resolved): mimic: mds: migrate strays part by part when shutdown mds
- 05:08 PM Backport #32092: mimic: mds: migrate strays part by part when shutdown mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24435
merged
- 05:31 PM Bug #36346 (Resolved): mimic: mds: purge queue corruption from wrong backport
- 04:49 PM Bug #36346: mimic: mds: purge queue corruption from wrong backport
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24485
merged
- 05:30 PM Bug #24644 (Resolved): cephfs-journal-tool: wrong layout info used
- 05:30 PM Backport #24933 (Resolved): mimic: cephfs-journal-tool: wrong layout info used
- 04:48 PM Backport #24933: mimic: cephfs-journal-tool: wrong layout info used
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24583
merged
- 05:30 PM Backport #36280 (Resolved): mimic: qa: RuntimeError: FSCID 10 has no rank 1
- 04:48 PM Backport #36280: mimic: qa: RuntimeError: FSCID 10 has no rank 1
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24572
merged
- 05:30 PM Feature #25188 (Resolved): mds: configurable timeout for client eviction
- 05:29 PM Backport #35975 (Resolved): mimic: mds: configurable timeout for client eviction
- 04:47 PM Backport #35975: mimic: mds: configurable timeout for client eviction
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24661
merged
- 05:29 PM Backport #36501 (Resolved): mimic: qa: increase rm timeout for workunit cleanup
- 04:46 PM Backport #36501: mimic: qa: increase rm timeout for workunit cleanup
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24684
merged
- 05:15 PM Backport #36643 (Resolved): mimic: Internal fragment of ObjectCacher
- https://github.com/ceph/ceph/pull/24873
- 05:15 PM Backport #36642 (Resolved): luminous: Internal fragment of ObjectCacher
- https://github.com/ceph/ceph/pull/24872
- 04:44 PM Bug #36635 (Resolved): mds: purge queue corruption from wrong backport
- Master version of #36346. We need to add the special handling for the wrong purge queue format in 13.2.2.
- 04:13 PM Bug #26969: kclient: mount unexpectedly gets osdmap updates causing test to fail
- On mimic: /ceph/teuthology-archive/yuriw-2018-10-23_17:23:46-kcephfs-wip-yuri2-testing-2018-10-23-1513-mimic-testing-...
10/29/2018
- 08:54 PM Bug #36611: ceph-mds failure
- Patrick Donnelly wrote:
> Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the ...
- 08:39 PM Bug #36611 (Won't Fix): ceph-mds failure
- Please see the announcement on ceph-announce concerning CephFS. You need to downgrade the MDS daemons to 13.2.1.
E...
- 01:25 PM Feature #36608 (Fix Under Review): mds: answering all pending getattr/lookups targeting the same ...
10/28/2018
- 09:46 PM Bug #36611 (Won't Fix): ceph-mds failure
- ...
- 01:35 PM Feature #36608 (Resolved): mds: answering all pending getattr/lookups targeting the same inode in...
- As of now, all getattr/lookup requests get processed one by one, which wastes CPU resources. Actually, f...
10/26/2018
- 11:29 PM Bug #36395 (Resolved): mds: Documentation for the reclaim mechanism
- 12:20 PM Feature #36585: allow nfs-ganesha to export named cephfs filesystems
- New ceph interface here. I also have a set of ganesha patches that will use this to generate filehandles when it's de...
- 11:23 AM Bug #36192 (Pending Backport): Internal fragment of ObjectCacher
10/25/2018
- 10:31 AM Bug #36593: qa: quota failure caused by clients stepping on each other
- A quick look at the logs shows that there are 4 clients running this test simultaneously. I wonder if this is something...
- 08:01 AM Backport #36309 (In Progress): luminous: doc: Typo error on cephfs/fuse/
10/24/2018
- 11:00 PM Bug #36573 (Resolved): mds: ms_handle_authentication does not hold mds lock
- 09:02 PM Bug #36594 (Fix Under Review): qa: pjd test appears to require more than 3h timeout for some conf...
- 08:54 PM Bug #36594 (Resolved): qa: pjd test appears to require more than 3h timeout for some configurations
- ...
- 08:39 PM Bug #36507: client: connection failure during reconnect causes client to hang
- Another: /ceph/teuthology-archive/pdonnell-2018-10-24_02:35:37-fs-wip-pdonnell-testing-20181023.224346-distro-basic-s...
- 08:36 PM Feature #36585: allow nfs-ganesha to export named cephfs filesystems
- Nope, I was right the first time. The issue was with filehandle hash key collisions in ganesha due to the fact that t...
- 01:48 PM Feature #36585: allow nfs-ganesha to export named cephfs filesystems
- I may have misinterpreted the problems I was seeing while attempting this yesterday. Ganesha actually embeds the expo...
- 11:46 AM Feature #36585 (Resolved): allow nfs-ganesha to export named cephfs filesystems
- Recently, libcephfs grew a new ceph_select_filesystem call that allows the caller to select a particular filesystem t...
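For comparison, a ceph-fuse client can already pick a named filesystem through its configuration; a rough sketch, assuming a second filesystem named "cephfs2" and the client_mds_namespace option:
    ceph-fuse /mnt/cephfs2 --client_mds_namespace=cephfs2
The new ceph_select_filesystem call would give nfs-ganesha, via libcephfs, an equivalent way to choose which filesystem to export.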
- 08:27 PM Bug #36593 (New): qa: quota failure caused by clients stepping on each other
- ...
- 07:32 PM Backport #36282: mimic: mds: add drop_cache command
- Yes, please backport it.
- 04:04 PM Bug #36547: mds_beacon_grace and mds_beacon_interval should have a canonical setting
- I agree in principle Greg but aren't we moving to the centrally managed config in Nautilus? (i.e. the MDS and MONs sh...
- 04:34 AM Backport #36578 (Resolved): mimic: qa: teuthology may hang on diagnostic commands for fuse mount
- https://github.com/ceph/ceph/pull/25515
- 04:34 AM Backport #36577 (Resolved): luminous: qa: teuthology may hang on diagnostic commands for fuse mount
- https://github.com/ceph/ceph/pull/25516
- 04:28 AM Backport #36217 (In Progress): luminous: Some cephfs tool commands silently operate on only rank ...
10/23/2018
- 10:24 PM Bug #36394 (Resolved): mds: pending release note for state reclaim
- 06:24 AM Bug #36394 (Fix Under Review): mds: pending release note for state reclaim
- https://github.com/ceph/ceph/pull/24709
- 10:22 PM Bug #36573 (Fix Under Review): mds: ms_handle_authentication does not hold mds lock
- https://github.com/ceph/ceph/pull/24725
- 10:19 PM Bug #36573 (Resolved): mds: ms_handle_authentication does not hold mds lock
- ...
- 10:02 PM Bug #36340 (Resolved): common: fix buffer advance length overflow to cause MDS crash
- 10:01 PM Cleanup #36380 (Resolved): mds: remove cap requirement on ceph tell commands
- 10:01 PM Bug #36493 (Resolved): mds: remove MonClient reconnect when laggy
- 10:00 PM Bug #36390 (Pending Backport): qa: teuthology may hang on diagnostic commands for fuse mount
- 11:02 AM Bug #36359: cephfs slow down when export with samba server
- The cause may be Samba; you can turn on Samba's 'case sensitive' parameter and try again.
- 08:15 AM Backport #36282 (Need More Info): mimic: mds: add drop_cache command
- Feature - awaiting confirmation from Patrick that it really needs to be backported (see the parent tracker issue).
- 07:28 AM Feature #23362: mds: add drop_cache command
- This is a feature; do we really need to backport it?
- 06:27 AM Backport #36313 (Resolved): mimic: doc: fix broken fstab url in cephfs/fuse
10/22/2018
- 11:04 PM Bug #36547 (Won't Fix): mds_beacon_grace and mds_beacon_interval should have a canonical setting
- mds_beacon_grace and mds_beacon_interval are both set as normal config options, and if they don't match on the mons a...
- 02:04 PM Backport #35932 (In Progress): mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- 12:42 PM Bug #36477: mds: up:standbyreplay log replay falls behind up:active
- The OSD replies "no such file or directory" when the standby MDS reads the journal object, for example object 201.026...
- 12:23 PM Bug #36477: mds: up:standbyreplay log replay falls behind up:active
- Take a look at the log of the MDS that is restarting to see if it's saying why.
- 09:31 AM Bug #26969 (Need More Info): kclient: mount unexpectedly gets osdmap updates causing test to fail
- See http://tracker.ceph.com/issues/12895. We have not seen this with the fuse client for a long time; we need a log to check why ...
- 08:49 AM Bug #24053 (Resolved): qa: kernel_mount.py umount must handle timeout arg
- 08:44 AM Bug #24054 (Resolved): kceph: umount on evicted client blocks forever
- 08:43 AM Bug #20681 (Closed): kclient: umount target is busy
- Open a new ticket if it happens again.
- 08:38 AM Bug #13926 (Closed): lockup in multithreaded application
- No update for a long time.
- 08:36 AM Bug #17620 (Resolved): Data Integrity Issue with kernel client vs fuse client
- Splice read issue. Should be fixed by kernel commit 7ce469a53e7106acdaca2e25027941d0f7c12a8e.
- 08:31 AM Bug #23250 (Closed): mds: crash during replay: interval_set.h: 396: FAILED assert(p->first > star...
- 08:30 AM Bug #21861: osdc: truncate Object and remove the bh which have someone wait for read on it occur ...
- I think this bug still exists in master
- 08:24 AM Bug #24028 (Resolved): CephFS flock() on a directory is broken
- 08:22 AM Bug #24665 (Closed): qa: TestStrays.test_hardlink_reintegration fails self.assertTrue(self.get_ba...
- Closing this because it's caused by test environment noise.
10/19/2018
- 10:54 PM Bug #35916 (Resolved): mds: rctime may go back
- 10:54 PM Backport #36136 (Resolved): mimic: mds: rctime may go back
- 08:51 PM Backport #36136: mimic: mds: rctime may go back
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24379
merged
- 10:53 PM Bug #25113 (Resolved): mds: allows client to create ".." and "." dirents
- 10:53 PM Backport #32104 (Resolved): mimic: mds: allows client to create ".." and "." dirents
- 08:51 PM Backport #32104: mimic: mds: allows client to create ".." and "." dirents
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24384
merged
- 10:53 PM Bug #35945 (Resolved): client: update ctime when modifying file content
- 10:52 PM Backport #36134 (Resolved): mimic: client: update ctime when modifying file content
- 08:50 PM Backport #36134: mimic: client: update ctime when modifying file content
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24385
merged
- 10:49 PM Bug #36184 (Resolved): qa: add timeouts to workunits to bound test execution time in the event of...
- 10:49 PM Backport #36278 (Resolved): mimic: qa: add timeouts to workunits to bound test execution time in ...
- 08:49 PM Backport #36278: mimic: qa: add timeouts to workunits to bound test execution time in the event o...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24408
merged
- 10:49 PM Bug #36165 (Resolved): qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/cep...
- 10:48 PM Backport #36323 (Resolved): mimic: qa: Command failed on smithi189 with status 1: 'rm -rf -- /hom...
- 08:49 PM Backport #36323: mimic: qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/ce...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24408
merged
- 10:48 PM Bug #24177 (Resolved): qa: fsstress workunit does not execute in parallel on same host without cl...
- 10:48 PM Backport #36153 (Resolved): mimic: qa: fsstress workunit does not execute in parallel on same hos...
- 08:49 PM Backport #36153: mimic: qa: fsstress workunit does not execute in parallel on same host without c...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24408
merged
- 10:47 PM Backport #36501 (In Progress): mimic: qa: increase rm timeout for workunit cleanup
- 10:46 PM Bug #36114 (Resolved): mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
- 10:45 PM Backport #36195 (Resolved): mimic: mds: internal op missing events time 'throttled', 'all_read', ...
- 08:48 PM Backport #36195: mimic: mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24411
merged
- 10:45 PM Bug #24129 (Resolved): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap) t...
- 10:45 PM Backport #36156 (Resolved): mimic: qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestS...
- 08:48 PM Backport #36156: mimic: qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap) ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24438
merged
- 10:44 PM Bug #36103 (Resolved): ceph-fuse: add SELinux policy
- 10:44 PM Backport #36197 (Resolved): mimic: ceph-fuse: add SELinux policy
- 08:47 PM Backport #36197: mimic: ceph-fuse: add SELinux policy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24439
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
- 10:44 PM Backport #36199 (Resolved): mimic: mds: fix mds damaged due to unexpected journal length
- 08:47 PM Backport #36199: mimic: mds: fix mds damaged due to unexpected journal length
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24463
merged
Reviewed-by: Patrick Donnelly <pdonnell@redha...
- 10:42 PM Backport #36205 (Resolved): mimic: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- 08:46 PM Backport #36205: mimic: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24464
merged
- 10:42 PM Bug #36028 (Resolved): "ceph fs add_data_pool" applies pool application metadata incorrectly
- 10:41 PM Backport #36203 (Resolved): mimic: "ceph fs add_data_pool" applies pool application metadata inco...
- 08:46 PM Backport #36203: mimic: "ceph fs add_data_pool" applies pool application metadata incorrectly
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24470
merged
- 10:40 PM Backport #24929 (Need More Info): luminous: qa: test_recovery_pool tries asok on wrong node
- first attempted backport - https://github.com/ceph/ceph/pull/23086 - was closed after becoming stale
backport is n...
- 10:39 PM Backport #24928 (Resolved): mimic: qa: test_recovery_pool tries asok on wrong node
- 08:44 PM Backport #24928: mimic: qa: test_recovery_pool tries asok on wrong node
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23087
merged
- 10:38 PM Bug #26858 (Resolved): mds: reset heartbeat map at potential time-consuming places
- 10:38 PM Backport #26886 (Resolved): mimic: mds: reset heartbeat map at potential time-consuming places
- 08:44 PM Backport #26886: mimic: mds: reset heartbeat map at potential time-consuming places
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23506
merged
- 10:38 PM Feature #25131 (Resolved): mds: optimize the way how max export size is enforced
- 10:38 PM Backport #32100 (Resolved): mimic: mds: optimize the way how max export size is enforced
- 08:43 PM Backport #32100: mimic: mds: optimize the way how max export size is enforced
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23952
merged
- 10:24 PM Bug #35250 (Resolved): mds: beacon spams is_laggy message
- 10:24 PM Backport #35719 (Resolved): mimic: mds: beacon spams is_laggy message
- 08:43 PM Backport #35719: mimic: mds: beacon spams is_laggy message
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24161
merged
- 09:49 PM Bug #24557 (Resolved): client: segmentation fault in handle_client_reply
- 09:49 PM Backport #35841 (Resolved): mimic: client: segmentation fault in handle_client_reply
- 08:43 PM Backport #35841: mimic: client: segmentation fault in handle_client_reply
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24187
merged
- 09:48 PM Cleanup #36075 (Resolved): qa: remove knfs site from future releases
- 09:48 PM Backport #36102 (Resolved): mimic: qa: remove knfs site from future releases
- 08:42 PM Backport #36102: mimic: qa: remove knfs site from future releases
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24269
merged
- 09:48 PM Bug #35848 (Resolved): MDSMonitor: lookup of gid in prepare_beacon that has been removed will cau...
- 09:47 PM Backport #35858 (Resolved): mimic: MDSMonitor: lookup of gid in prepare_beacon that has been remo...
- 08:41 PM Backport #35858: mimic: MDSMonitor: lookup of gid in prepare_beacon that has been removed will ca...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24272
merged
- 09:46 PM Bug #27051 (Resolved): client: cannot list out files created by another ceph-fuse client
- 09:46 PM Backport #35934 (Resolved): mimic: client: cannot list out files created by another ceph-fuse client
- 08:41 PM Backport #35934: mimic: client: cannot list out files created by another ceph-fuse client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24295
merged
- 08:59 PM Bug #24849 (Resolved): client: statfs inode count odd
- 08:59 PM Backport #35940 (Resolved): mimic: client: statfs inode count odd
- 08:40 PM Backport #35940: mimic: client: statfs inode count odd
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24377
merged
- 08:07 PM Bug #36035: mds: MDCache.cc: 11673: abort()
- In Mimic: /ceph/teuthology-archive/yuriw-2018-10-18_15:37:57-multimds-wip-yuri4-testing-2018-10-17-2308-mimic-testing...
- 04:27 PM Bug #36507: client: connection failure during reconnect causes client to hang
- Zheng Yan wrote:
> client bug or messenger bug?
It is probably two bugs (both).
The client should not get stuc...
- 03:31 AM Bug #36507: client: connection failure during reconnect causes client to hang
- I think the reset was sent by following code...
- 01:57 AM Bug #36507: client: connection failure during reconnect causes client to hang
- client bug or messenger bug?
- 10:11 AM Bug #35829 (Rejected): qa: workunits/fs/misc/acl.sh failure from unexpected system.posix_acl_defa...
- test case issue
- 09:37 AM Bug #24533 (Resolved): PurgeQueue sometimes ignores Journaler errors
10/18/2018
- 09:15 PM Bug #22925 (Resolved): mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when th...
- 09:15 PM Bug #23658 (Resolved): MDSMonitor: crash after assigning standby-replay daemon in multifs setup
- 09:14 PM Bug #10915 (Resolved): client: hangs on umount if it had an MDS session evicted
- 09:14 PM Bug #23837 (Resolved): client: deleted inode's Bufferhead which was in STATE::Tx would lead a ass...
- 09:14 PM Bug #24491 (Resolved): client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- 09:14 PM Backport #23014 (Rejected): jewel: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely ...
- 09:13 PM Backport #23834 (Rejected): jewel: MDSMonitor: crash after assigning standby-replay daemon in mul...
- 09:13 PM Backport #23990 (Rejected): jewel: client: hangs on umount if it had an MDS session evicted
- 09:13 PM Backport #24208 (Rejected): jewel: client: deleted inode's Bufferhead which was in STATE::Tx woul...
- 09:13 PM Backport #24536 (Rejected): jewel: client: _ll_drop_pins travel inode_map may access invalid ‘nex...
- 09:13 PM Backport #24695 (Rejected): jewel: PurgeQueue sometimes ignores Journaler errors
- 09:13 PM Bug #23509 (Resolved): ceph-fuse: broken directory permission checking
- 09:13 PM Backport #23705 (Rejected): jewel: ceph-fuse: broken directory permission checking
- Jewel is EOL. Closing.
- 04:56 PM Backport #23705: jewel: ceph-fuse: broken directory permission checking
- This bug seems to have slipped through the cracks. We'd have to do a little work to backport this as jewel did not ge...
- 09:08 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- Jeff Layton wrote:
> Looking again at this, as I'm starting to look at how we'd populate fs_locations_info to handle...
- 11:49 AM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- Looking again at this, as I'm starting to look at how we'd populate fs_locations_info to handle clustered ganesha mig...
- 06:20 PM Feature #36483: extend the mds auth cap "path=" syntax to enable something like "path=/foo/bar/*"
- Another possibility is to distinguish between `/foo/bar` and `/foo/bar/`. The latter would indicate that the cap does...
- 06:18 PM Feature #36481: separate out the 'p' mds auth cap into separate caps for quotas vs. choosing pool...
- Makes sense to me.
- 04:17 PM Backport #36200 (New): luminous: mds: fix mds damaged due to unexpected journal length
- Reassigning as the PR became stale.
- 01:36 PM Bug #23446 (Resolved): ceph-fuse: getgroups failure causes exception
- 12:38 PM Backport #35975 (In Progress): mimic: mds: configurable timeout for client eviction
10/17/2018
- 10:05 PM Bug #36079: ceph-fuse: hang because it miss reconnect phase when hot standby mds switch occurs
- #36507 is kinda related.
- 09:56 PM Bug #21507: mds: debug logs near respawn are not flushed
- Same failure from same test configuration:...
- 09:50 PM Bug #21507: mds: debug logs near respawn are not flushed
- Another:...
- 09:36 PM Bug #36507: client: connection failure during reconnect causes client to hang
- For posterity, here's the original job that failed: /ceph/teuthology-archive/pdonnell-2018-10-11_17:55:20-fs-wip-pdon...
- 09:25 PM Bug #36507 (Duplicate): client: connection failure during reconnect causes client to hang
- ...
- 09:23 PM Feature #24724 (Resolved): client: put instance/addr information in status asok command
- 09:23 PM Backport #24930 (Rejected): jewel: client: put instance/addr information in status asok command
- Jewel is EOL
- 09:19 PM Backport #36504 (Resolved): luminous: qa: infinite timeout on asok command causes job to die
- https://github.com/ceph/ceph/pull/25805
- 09:19 PM Backport #36503 (Resolved): mimic: qa: infinite timeout on asok command causes job to die
- https://github.com/ceph/ceph/pull/25332
- 09:19 PM Backport #36502 (Resolved): luminous: qa: increase rm timeout for workunit cleanup
- https://github.com/ceph/ceph/pull/25696
- 09:19 PM Backport #36501 (Resolved): mimic: qa: increase rm timeout for workunit cleanup
- https://github.com/ceph/ceph/pull/24684
- 05:19 PM Bug #36365 (Pending Backport): qa: increase rm timeout for workunit cleanup
- 05:17 PM Bug #36335 (Pending Backport): qa: infinite timeout on asok command causes job to die
- 05:03 PM Bug #36493 (Fix Under Review): mds: remove MonClient reconnect when laggy
- https://github.com/ceph/ceph/pull/24640
- 04:53 PM Bug #36493 (Resolved): mds: remove MonClient reconnect when laggy
- With the MonClient keepalives and reconnects, this is no longer necessary.
- 01:21 PM Feature #12282 (Fix Under Review): mds: progress/abort/pause interface for ongoing scrubs
- 12:30 PM Feature #36483 (New): extend the mds auth cap "path=" syntax to enable something like "path=/foo/...
- ... meaning that the cap would apply to anything within bar but not bar itself. John Spray suggested that this would allo...
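For comparison, a sketch of today's cap syntax next to the proposed form, with an illustrative client and pool; the starred variant is hypothetical:
    ceph auth caps client.foo mds 'allow rw path=/foo/bar' mon 'allow r' osd 'allow rw pool=cephfs_data'
    ceph auth caps client.foo mds 'allow rw path=/foo/bar/*' mon 'allow r' osd 'allow rw pool=cephfs_data'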
- 11:17 AM Feature #36481 (New): separate out the 'p' mds auth cap into separate caps for quotas vs. choosin...
- Arne (CERN) requested that we allow OpenStack Manila users to set quotas, but not change the pool layout within Mani...
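For context, a rough sketch of the current behaviour with illustrative names and values; the single 'p' flag in the mds cap below permits both operations, and this ticket asks to allow the first without the second:
    ceph auth caps client.manila mds 'allow rwp path=/volumes' mon 'allow r' osd 'allow rw pool=cephfs_data'
    setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/volumes/share1
    setfattr -n ceph.dir.layout.pool -v some_other_pool /mnt/cephfs/volumes/share1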
- 09:46 AM Bug #36477 (New): mds: up:standbyreplay log replay falls behind up:active
- ...
10/16/2018
- 01:18 PM Backport #36463 (Rejected): mimic: ceph-fuse client can't read or write due to backward cap_gen
- 01:18 PM Backport #36462 (Rejected): luminous: ceph-fuse client can't read or write due to backward cap_gen
- 01:18 PM Backport #36461 (Resolved): mimic: mds: rctime not set on system inode (root) at startup
- https://github.com/ceph/ceph/pull/25042
- 01:18 PM Backport #36460 (Resolved): luminous: mds: rctime not set on system inode (root) at startup
- https://github.com/ceph/ceph/pull/25043
- 11:25 AM Backport #36457 (Resolved): mimic: client: explicitly show blacklisted state via asok status command
- https://github.com/ceph/ceph/pull/24993
- 11:25 AM Backport #36456 (Resolved): luminous: client: explicitly show blacklisted state via asok status c...
- https://github.com/ceph/ceph/pull/24994
- 04:34 AM Bug #36189 (Pending Backport): ceph-fuse client can't read or write due to backward cap_gen
- 04:33 AM Bug #36221 (Pending Backport): mds: rctime not set on system inode (root) at startup
- 04:23 AM Feature #36352 (Pending Backport): client: explicitly show blacklisted state via asok status command
- 04:13 AM Bug #36368 (Resolved): cephfs/tool: cephfs-shell have "no attribute 'decode'" err
- 03:15 AM Backport #35975: mimic: mds: configurable timeout for client eviction
- ACK
- 03:15 AM Backport #35932: mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- ACK