Activity
From 09/23/2018 to 10/22/2018
10/22/2018
- 11:04 PM Bug #36547 (Won't Fix): mds_beacon_grace and mds_beacon_interval should have a canonical setting
- mds_beacon_grace and mds_beacon_interval are both set as normal config options, and if they don't match on the mons a...
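For reference, a minimal ceph.conf sketch of the two options as they are set today (the values shown are the usual upstream defaults; the point of the ticket is that mds_beacon_grace is also consumed by the mons, so it must be kept consistent there):
    [global]
    mds_beacon_interval = 4     # seconds between beacons sent by the MDS
    mds_beacon_grace = 15       # seconds without a beacon before the daemon is considered laggy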
- 02:04 PM Backport #35932 (In Progress): mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- 12:42 PM Bug #36477: mds: up:standbyreplay log replay falls behind up:active
- The OSD replies 'no such file or directory' when the standby mds reads the journal object, for example the object 201.026...
- 12:23 PM Bug #36477: mds: up:standbyreplay log replay falls behind up:active
- Take a look at the log of the MDS that is restarting to see if it's saying why.
- 09:31 AM Bug #26969 (Need More Info): kclient: mount unexpectedly gets osdmap updates causing test to fail
- see http://tracker.ceph.com/issues/12895. we did not see this for fuse-client for a long time. need log to check why ...
- 08:49 AM Bug #24053 (Resolved): qa: kernel_mount.py umount must handle timeout arg
- 08:44 AM Bug #24054 (Resolved): kceph: umount on evicted client blocks forever
- 08:43 AM Bug #20681 (Closed): kclient: umount target is busy
- open new ticket if it happens again
- 08:38 AM Bug #13926 (Closed): lockup in multithreaded application
- no update for a long time
- 08:36 AM Bug #17620 (Resolved): Data Integrity Issue with kernel client vs fuse client
- splice read issue. Should be fixed by kernel commit 7ce469a53e7106acdaca2e25027941d0f7c12a8e
- 08:31 AM Bug #23250 (Closed): mds: crash during replay: interval_set.h: 396: FAILED assert(p->first > star...
- 08:30 AM Bug #21861: osdc: truncate Object and remove the bh which have someone wait for read on it occur ...
- I think this bug still exists in master
- 08:24 AM Bug #24028 (Resolved): CephFS flock() on a directory is broken
- 08:22 AM Bug #24665 (Closed): qa: TestStrays.test_hardlink_reintegration fails self.assertTrue(self.get_ba...
- close this because it's caused by test environment noise
10/19/2018
- 10:54 PM Bug #35916 (Resolved): mds: rctime may go back
- 10:54 PM Backport #36136 (Resolved): mimic: mds: rctime may go back
- 08:51 PM Backport #36136: mimic: mds: rctime may go back
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24379
merged
- 10:53 PM Bug #25113 (Resolved): mds: allows client to create ".." and "." dirents
- 10:53 PM Backport #32104 (Resolved): mimic: mds: allows client to create ".." and "." dirents
- 08:51 PM Backport #32104: mimic: mds: allows client to create ".." and "." dirents
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24384
merged
- 10:53 PM Bug #35945 (Resolved): client: update ctime when modifying file content
- 10:52 PM Backport #36134 (Resolved): mimic: client: update ctime when modifying file content
- 08:50 PM Backport #36134: mimic: client: update ctime when modifying file content
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24385
merged
- 10:49 PM Bug #36184 (Resolved): qa: add timeouts to workunits to bound test execution time in the event of...
- 10:49 PM Backport #36278 (Resolved): mimic: qa: add timeouts to workunits to bound test execution time in ...
- 08:49 PM Backport #36278: mimic: qa: add timeouts to workunits to bound test execution time in the event o...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24408
merged
- 10:49 PM Bug #36165 (Resolved): qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/cep...
- 10:48 PM Backport #36323 (Resolved): mimic: qa: Command failed on smithi189 with status 1: 'rm -rf -- /hom...
- 08:49 PM Backport #36323: mimic: qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/ce...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24408
merged
- 10:48 PM Bug #24177 (Resolved): qa: fsstress workunit does not execute in parallel on same host without cl...
- 10:48 PM Backport #36153 (Resolved): mimic: qa: fsstress workunit does not execute in parallel on same hos...
- 08:49 PM Backport #36153: mimic: qa: fsstress workunit does not execute in parallel on same host without c...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24408
merged
- 10:47 PM Backport #36501 (In Progress): mimic: qa: increase rm timeout for workunit cleanup
- 10:46 PM Bug #36114 (Resolved): mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
- 10:45 PM Backport #36195 (Resolved): mimic: mds: internal op missing events time 'throttled', 'all_read', ...
- 08:48 PM Backport #36195: mimic: mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24411
merged
- 10:45 PM Bug #24129 (Resolved): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap) t...
- 10:45 PM Backport #36156 (Resolved): mimic: qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestS...
- 08:48 PM Backport #36156: mimic: qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap) ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24438
merged
- 10:44 PM Bug #36103 (Resolved): ceph-fuse: add SELinux policy
- 10:44 PM Backport #36197 (Resolved): mimic: ceph-fuse: add SELinux policy
- 08:47 PM Backport #36197: mimic: ceph-fuse: add SELinux policy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24439
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
- 10:44 PM Backport #36199 (Resolved): mimic: mds: fix mds damaged due to unexpected journal length
- 08:47 PM Backport #36199: mimic: mds: fix mds damaged due to unexpected journal length
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24463
merged
Reviewed-by: Patrick Donnelly <pdonnell@redha...
- 10:42 PM Backport #36205 (Resolved): mimic: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- 08:46 PM Backport #36205: mimic: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24464
merged
- 10:42 PM Bug #36028 (Resolved): "ceph fs add_data_pool" applies pool application metadata incorrectly
- 10:41 PM Backport #36203 (Resolved): mimic: "ceph fs add_data_pool" applies pool application metadata inco...
- 08:46 PM Backport #36203: mimic: "ceph fs add_data_pool" applies pool application metadata incorrectly
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24470
merged
- 10:40 PM Backport #24929 (Need More Info): luminous: qa: test_recovery_pool tries asok on wrong node
- first attempted backport - https://github.com/ceph/ceph/pull/23086 - was closed after becoming stale
backport is n...
- 10:39 PM Backport #24928 (Resolved): mimic: qa: test_recovery_pool tries asok on wrong node
- 08:44 PM Backport #24928: mimic: qa: test_recovery_pool tries asok on wrong node
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23087
merged
- 10:38 PM Bug #26858 (Resolved): mds: reset heartbeat map at potential time-consuming places
- 10:38 PM Backport #26886 (Resolved): mimic: mds: reset heartbeat map at potential time-consuming places
- 08:44 PM Backport #26886: mimic: mds: reset heartbeat map at potential time-consuming places
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23506
merged
- 10:38 PM Feature #25131 (Resolved): mds: optimize the way how max export size is enforced
- 10:38 PM Backport #32100 (Resolved): mimic: mds: optimize the way how max export size is enforced
- 08:43 PM Backport #32100: mimic: mds: optimize the way how max export size is enforced
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23952
merged
- 10:24 PM Bug #35250 (Resolved): mds: beacon spams is_laggy message
- 10:24 PM Backport #35719 (Resolved): mimic: mds: beacon spams is_laggy message
- 08:43 PM Backport #35719: mimic: mds: beacon spams is_laggy message
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24161
merged
- 09:49 PM Bug #24557 (Resolved): client: segmentation fault in handle_client_reply
- 09:49 PM Backport #35841 (Resolved): mimic: client: segmentation fault in handle_client_reply
- 08:43 PM Backport #35841: mimic: client: segmentation fault in handle_client_reply
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24187
merged
- 09:48 PM Cleanup #36075 (Resolved): qa: remove knfs site from future releases
- 09:48 PM Backport #36102 (Resolved): mimic: qa: remove knfs site from future releases
- 08:42 PM Backport #36102: mimic: qa: remove knfs site from future releases
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24269
merged
- 09:48 PM Bug #35848 (Resolved): MDSMonitor: lookup of gid in prepare_beacon that has been removed will cau...
- 09:47 PM Backport #35858 (Resolved): mimic: MDSMonitor: lookup of gid in prepare_beacon that has been remo...
- 08:41 PM Backport #35858: mimic: MDSMonitor: lookup of gid in prepare_beacon that has been removed will ca...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24272
merged
- 09:46 PM Bug #27051 (Resolved): client: cannot list out files created by another ceph-fuse client
- 09:46 PM Backport #35934 (Resolved): mimic: client: cannot list out files created by another ceph-fuse client
- 08:41 PM Backport #35934: mimic: client: cannot list out files created by another ceph-fuse client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24295
merged
- 08:59 PM Bug #24849 (Resolved): client: statfs inode count odd
- 08:59 PM Backport #35940 (Resolved): mimic: client: statfs inode count odd
- 08:40 PM Backport #35940: mimic: client: statfs inode count odd
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24377
merged
- 08:07 PM Bug #36035: mds: MDCache.cc: 11673: abort()
- In Mimic: /ceph/teuthology-archive/yuriw-2018-10-18_15:37:57-multimds-wip-yuri4-testing-2018-10-17-2308-mimic-testing...
- 04:27 PM Bug #36507: client: connection failure during reconnect causes client to hang
- Zheng Yan wrote:
> client bug or messenger bug?
It is probably two bugs (both).
The client should not get stuc...
- 03:31 AM Bug #36507: client: connection failure during reconnect causes client to hang
- I think the reset was sent by the following code...
- 01:57 AM Bug #36507: client: connection failure during reconnect causes client to hang
- client bug or messenger bug?
- 10:11 AM Bug #35829 (Rejected): qa: workunits/fs/misc/acl.sh failure from unexpected system.posix_acl_defa...
- test case issue
- 09:37 AM Bug #24533 (Resolved): PurgeQueue sometimes ignores Journaler errors
10/18/2018
- 09:15 PM Bug #22925 (Resolved): mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when th...
- 09:15 PM Bug #23658 (Resolved): MDSMonitor: crash after assigning standby-replay daemon in multifs setup
- 09:14 PM Bug #10915 (Resolved): client: hangs on umount if it had an MDS session evicted
- 09:14 PM Bug #23837 (Resolved): client: deleted inode's Bufferhead which was in STATE::Tx would lead a ass...
- 09:14 PM Bug #24491 (Resolved): client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- 09:14 PM Backport #23014 (Rejected): jewel: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely ...
- 09:13 PM Backport #23834 (Rejected): jewel: MDSMonitor: crash after assigning standby-replay daemon in mul...
- 09:13 PM Backport #23990 (Rejected): jewel: client: hangs on umount if it had an MDS session evicted
- 09:13 PM Backport #24208 (Rejected): jewel: client: deleted inode's Bufferhead which was in STATE::Tx woul...
- 09:13 PM Backport #24536 (Rejected): jewel: client: _ll_drop_pins travel inode_map may access invalid ‘nex...
- 09:13 PM Backport #24695 (Rejected): jewel: PurgeQueue sometimes ignores Journaler errors
- 09:13 PM Bug #23509 (Resolved): ceph-fuse: broken directory permission checking
- 09:13 PM Backport #23705 (Rejected): jewel: ceph-fuse: broken directory permission checking
- Jewel is EOL. Closing.
- 04:56 PM Backport #23705: jewel: ceph-fuse: broken directory permission checking
- This bug seems to have slipped through the cracks. We'd have to do a little work to backport this as jewel did not ge...
- 09:08 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- Jeff Layton wrote:
> Looking again at this, as I'm starting to look at how we'd populate fs_locations_info to handle...
- 11:49 AM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- Looking again at this, as I'm starting to look at how we'd populate fs_locations_info to handle clustered ganesha mig...
- 06:20 PM Feature #36483: extend the mds auth cap "path=" syntax to enable something like "path=/foo/bar/*"
- Another possibility is to distinguish between `/foo/bar` and `/foo/bar/`. The latter would indicate that the cap does...
- 06:18 PM Feature #36481: separate out the 'p' mds auth cap into separate caps for quotas vs. choosing pool...
- Makes sense to me.
- 04:17 PM Backport #36200 (New): luminous: mds: fix mds damaged due to unexpected journal length
- Reassigning as the PR became stale.
- 01:36 PM Bug #23446 (Resolved): ceph-fuse: getgroups failure causes exception
- 12:38 PM Backport #35975 (In Progress): mimic: mds: configurable timeout for client eviction
10/17/2018
- 10:05 PM Bug #36079: ceph-fuse: hang because it miss reconnect phase when hot standby mds switch occurs
- #36507 is kinda related.
- 09:56 PM Bug #21507: mds: debug logs near respawn are not flushed
- Same failure from same test configuration:...
- 09:50 PM Bug #21507: mds: debug logs near respawn are not flushed
- Another:...
- 09:36 PM Bug #36507: client: connection failure during reconnect causes client to hang
- For posterity, here's the original job that failed: /ceph/teuthology-archive/pdonnell-2018-10-11_17:55:20-fs-wip-pdon...
- 09:25 PM Bug #36507 (Duplicate): client: connection failure during reconnect causes client to hang
- ...
- 09:23 PM Feature #24724 (Resolved): client: put instance/addr information in status asok command
- 09:23 PM Backport #24930 (Rejected): jewel: client: put instance/addr information in status asok command
- Jewel is EOL
- 09:19 PM Backport #36504 (Resolved): luminous: qa: infinite timeout on asok command causes job to die
- https://github.com/ceph/ceph/pull/25805
- 09:19 PM Backport #36503 (Resolved): mimic: qa: infinite timeout on asok command causes job to die
- https://github.com/ceph/ceph/pull/25332
- 09:19 PM Backport #36502 (Resolved): luminous: qa: increase rm timeout for workunit cleanup
- https://github.com/ceph/ceph/pull/25696
- 09:19 PM Backport #36501 (Resolved): mimic: qa: increase rm timeout for workunit cleanup
- https://github.com/ceph/ceph/pull/24684
- 05:19 PM Bug #36365 (Pending Backport): qa: increase rm timeout for workunit cleanup
- 05:17 PM Bug #36335 (Pending Backport): qa: infinite timeout on asok command causes job to die
- 05:03 PM Bug #36493 (Fix Under Review): mds: remove MonClient reconnect when laggy
- https://github.com/ceph/ceph/pull/24640
- 04:53 PM Bug #36493 (Resolved): mds: remove MonClient reconnect when laggy
- With the MonClient keepalives and reconnects, this is no longer necessary.
- 01:21 PM Feature #12282 (Fix Under Review): mds: progress/abort/pause interface for ongoing scrubs
- 12:30 PM Feature #36483 (New): extend the mds auth cap "path=" syntax to enable something like "path=/foo/...
- ... meaning that the cap would apply to anything within bar but not bar itself. John Spray suggested that this would allo...
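For context, a hedged sketch of the cap strings involved (the client and pool names are made up; the wildcard form is only the proposal from this ticket, not an existing syntax):
    # today: the cap covers /foo/bar and everything beneath it
    ceph auth caps client.alice mon 'allow r' mds 'allow rw path=/foo/bar' osd 'allow rw pool=cephfs_data'
    # proposed: the cap would cover the contents of bar but not bar itself
    ceph auth caps client.alice mon 'allow r' mds 'allow rw path=/foo/bar/*' osd 'allow rw pool=cephfs_data'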
- 11:17 AM Feature #36481 (New): separate out the 'p' mds auth cap into separate caps for quotas vs. choosin...
- Arne (CERN) requested that we allow OpenStack Manila users to set quotas, but not change the pool layout within Mani...
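A rough illustration of the behaviour this ticket wants to split (client, pool, and path names are hypothetical): today the single 'p' flag in the MDS cap gates both quota and layout changes, for example:
    ceph auth caps client.manila mon 'allow r' mds 'allow rwp path=/volumes' osd 'allow rw pool=cephfs_data'
    # with 'p' the client may set both of these vxattrs:
    setfattr -n ceph.quota.max_bytes -v 10737418240 /mnt/cephfs/volumes/share1   # quota
    setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/volumes/share1  # pool layout
The request is for separate flags so a cap could permit the quota setattr but not the layout one.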
- 09:46 AM Bug #36477 (New): mds: up:standbyreplay log replay falls behind up:active
- ...
10/16/2018
- 01:18 PM Backport #36463 (Rejected): mimic: ceph-fuse client can't read or write due to backward cap_gen
- 01:18 PM Backport #36462 (Rejected): luminous: ceph-fuse client can't read or write due to backward cap_gen
- 01:18 PM Backport #36461 (Resolved): mimic: mds: rctime not set on system inode (root) at startup
- https://github.com/ceph/ceph/pull/25042
- 01:18 PM Backport #36460 (Resolved): luminous: mds: rctime not set on system inode (root) at startup
- https://github.com/ceph/ceph/pull/25043
- 11:25 AM Backport #36457 (Resolved): mimic: client: explicitly show blacklisted state via asok status command
- https://github.com/ceph/ceph/pull/24993
- 11:25 AM Backport #36456 (Resolved): luminous: client: explicitly show blacklisted state via asok status c...
- https://github.com/ceph/ceph/pull/24994
- 04:34 AM Bug #36189 (Pending Backport): ceph-fuse client can't read or write due to backward cap_gen
- 04:33 AM Bug #36221 (Pending Backport): mds: rctime not set on system inode (root) at startup
- 04:23 AM Feature #36352 (Pending Backport): client: explicitly show blacklisted state via asok status command
- 04:13 AM Bug #36368 (Resolved): cephfs/tool: cephfs-shell have "no attribute 'decode'" err
- 03:15 AM Backport #35975: mimic: mds: configurable timeout for client eviction
- ACK
- 03:15 AM Backport #35932: mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- ACK
10/15/2018
- 06:02 PM Feature #36413: make cephfs-data-scan reconstruct snaptable
- I don't think this is necessary for luminous right?
- 01:55 PM Support #36427 (Rejected): Ceph mds is stuck in creating status
- Please take this question to the ceph-users list where you'll receive more eyes to help diagnose the issue. Once we'r...
- 01:31 PM Support #36427: Ceph mds is stuck in creating status
- Looks like a RADOS issue. Retry restarting the mds.
- 09:43 AM Support #36427: Ceph mds is stuck in creating status
- Here is the output of the command:
csl@hpc1:~$ sudo ceph daemon mds.hpc1 objecter_requests
{
"ops": [
...
- 09:37 AM Support #36427: Ceph mds is stuck in creating status
- What is the output of 'ceph daemon mds.hpc1 objecter_requests'?
- 09:22 AM Support #36427 (Rejected): Ceph mds is stuck in creating status
- I successfully deployed a Ceph cluster with 16 OSDs and created CephFS before.
But after a crash due to mds slow request...
- 01:53 PM Bug #36396: mds: handle duplicated uuid in multi-mds cluster
- Simple solution to this is to evict both sessions when we detect this and log an error.
- 01:46 PM Bug #36385 (Need More Info): client: segfault doing snaptests during MDS thrashing
- Needs logs/core.
- 01:39 PM Bug #36359: cephfs slow down when export with samba server
- So you're mounting the directory using kcephfs and exporting that via samba? Have you tried using the vfs_ceph module...
- 01:39 PM Bug #36348 (Need More Info): luminous(?): blogbench I/O with two kernel clients; one stalls
- Need logs.
- 01:05 PM Backport #24928 (In Progress): mimic: qa: test_recovery_pool tries asok on wrong node
- 12:03 PM Backport #35975 (Need More Info): mimic: mds: configurable timeout for client eviction
- non-trivial feature backport; needs a CephFS developer to do it. Venky?
- 12:00 PM Backport #35932 (Need More Info): mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- non-trivial backport - hoping Venky will pick it up?
- 09:19 AM Bug #36035 (Fix Under Review): mds: MDCache.cc: 11673: abort()
- 03:10 AM Bug #36035: mds: MDCache.cc: 11673: abort()
- I reproduced this locally.
Dirfrag A is a subtree root, its parent inode is inode A. Auth mds of dirfrag A is mds.a. ...
- 12:46 AM Backport #36280 (In Progress): mimic: qa: RuntimeError: FSCID 10 has no rank 1
- https://github.com/ceph/ceph/pull/24572
10/12/2018
- 09:03 AM Backport #36277 (Resolved): luminous: qa: add timeouts to workunits to bound test execution time ...
- 09:03 AM Backport #36322 (Resolved): luminous: qa: Command failed on smithi189 with status 1: 'rm -rf -- /...
- 08:51 AM Backport #36152 (Resolved): luminous: qa: fsstress workunit does not execute in parallel on same ...
- 08:46 AM Feature #36413 (Fix Under Review): make cephfs-data-scan reconstruct snaptable
- 08:25 AM Feature #36413 (Resolved): make cephfs-data-scan reconstruct snaptable
- 03:55 AM Bug #36320 (Fix Under Review): mds: cache drop command requires timeout argument when it is suppo...
- PR https://github.com/ceph/ceph/pull/24555
- 01:27 AM Backport #36279 (In Progress): luminous: qa: RuntimeError: FSCID 10 has no rank 1
- https://github.com/ceph/ceph/pull/24552
10/11/2018
- 07:16 PM Backport #36153: mimic: qa: fsstress workunit does not execute in parallel on same host without c...
- ... will also require the backport to http://tracker.ceph.com/issues/36409
- 07:16 PM Backport #36152: luminous: qa: fsstress workunit does not execute in parallel on same host withou...
- ... will also require the backport to http://tracker.ceph.com/issues/36409
- 07:11 PM Backport #32090 (In Progress): mimic: mds: use monotonic clock for beacon sender thread waits
- 01:41 PM Backport #32090: mimic: mds: use monotonic clock for beacon sender thread waits
- PR https://github.com/ceph/ceph/pull/24467
- 07:10 PM Backport #35837 (In Progress): mimic: mds: use monotonic clock for beacon message timekeeping
- 01:40 PM Backport #35837: mimic: mds: use monotonic clock for beacon message timekeeping
- PR https://github.com/ceph/ceph/pull/24467
- 07:10 PM Backport #35938 (In Progress): mimic: mds: add average session age (uptime) perf counter
- 01:40 PM Backport #35938: mimic: mds: add average session age (uptime) perf counter
- PR https://github.com/ceph/ceph/pull/24467
- 05:42 PM Bug #36395: mds: Documentation for the reclaim mechanism
- Jeff Layton wrote:
> We already have some documentation in the header file. Granted, it could be fleshed out a bit, ...
- 11:51 AM Bug #36395: mds: Documentation for the reclaim mechanism
- We already have some documentation in the header file. Granted, it could be fleshed out a bit, but do we require anyt...
- 03:17 AM Bug #36395 (Resolved): mds: Documentation for the reclaim mechanism
- 01:20 PM Bug #36273 (In Progress): qa: add background task for some units which drops MDS cache
- 01:20 PM Bug #36320 (In Progress): mds: cache drop command requires timeout argument when it is supposed t...
- 10:04 AM Bug #36396: mds: handle duplicated uuid in multi-mds cluster
- I think we want to kick clientA's session out at this point and let clientB take over (i.e. last one wins).
The qu...
- 03:21 AM Bug #36396 (New): mds: handle duplicated uuid in multi-mds cluster
- Current code can't detect the following case:
clientA opens an mds.0 session with a uuid
clientB opens an mds.1 session with t...
- 03:30 AM Feature #36397 (New): mds: support real state reclaim
- Current reclaim code only supports resetting the old client session. It would be better to have code that transfers the old session's states t...
- 03:21 AM Feature #26974 (Resolved): mds: provide mechanism to allow new instance of an application to canc...
- 03:16 AM Bug #36394 (Resolved): mds: pending release note for state reclaim
- 02:30 AM Bug #36384: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- ...
- 02:17 AM Bug #36387 (Duplicate): "workunit test fs/snaps/snaptest-git-ceph.sh) on smithi031 with status 128"
- dup of http://tracker.ceph.com/issues/36389
- 02:17 AM Bug #36389: untar encounters unexpected EPERM on kclient/multimds cluster with thrashing
- The 'Permission denied' error was because the client was blacklisted.
The client did notice that mds.0 entered reconnect s...
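For anyone hitting the same symptom, a quick way to confirm a client was blacklisted (command name as of Luminous/Mimic):
    ceph osd blacklist ls    # lists blacklisted client addresses and expiry times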
10/10/2018
- 08:34 PM Bug #36390 (Fix Under Review): qa: teuthology may hang on diagnostic commands for fuse mount
- https://github.com/ceph/ceph/pull/24533
- 08:30 PM Bug #36390 (Resolved): qa: teuthology may hang on diagnostic commands for fuse mount
- ...
- 08:28 PM Bug #36035: mds: MDCache.cc: 11673: abort()
- Another: /ceph/teuthology-archive/pdonnell-2018-10-09_01:07:48-multimds-wip-pdonnell-testing-20181008.224656-distro-b...
- 08:23 PM Bug #36389 (New): untar encounters unexpected EPERM on kclient/multimds cluster with thrashing
- ...
- 07:39 PM Bug #36387 (Duplicate): "workunit test fs/snaps/snaptest-git-ceph.sh) on smithi031 with status 128"
- ...
- 07:32 PM Bug #36385 (Need More Info): client: segfault doing snaptests during MDS thrashing
- ...
- 07:17 PM Bug #36384 (Resolved): src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- ...
- 07:06 PM Cleanup #36380 (Fix Under Review): mds: remove cap requirement on ceph tell commands
- https://github.com/ceph/ceph/pull/24529
- 04:06 PM Cleanup #36380 (In Progress): mds: remove cap requirement on ceph tell commands
- 03:52 PM Cleanup #36380 (Resolved): mds: remove cap requirement on ceph tell commands
- It's ignored; we always require "*" caps: https://github.com/ceph/ceph/blob/a6a2f395b7e5f9fd867c0ba83a9e6525485a575d/...
- 05:03 PM Bug #36348 (New): luminous(?): blogbench I/O with two kernel clients; one stalls
- Reversing duplicate since this is older. Can't believe I forgot I already opened an issue for this.
- 12:29 PM Bug #36348 (Duplicate): luminous(?): blogbench I/O with two kernel clients; one stalls
- http://tracker.ceph.com/issues/36366
- 05:03 PM Bug #36366 (Duplicate): luminous: qa: blogbench hang with two kclients and 3 active mds
- 03:58 PM Bug #36359: cephfs slow down when export with samba server
- I had run a Luminous cluster and hit the same issue; I find that when a file is copied to the Windows mount dir, it will refresh the dir, so get a...
- 01:47 PM Bug #36359: cephfs slow down when export with samba server
- > I created a Jewel CephFS cluster and ran CephFS, then exported the mount dir through a Samba server
Have you tried r...
- 03:50 PM Bug #36379 (Rejected): mds: cache drop command only requires read caps
- Never mind; turns out MDSDaemon ignores the command caps and checks the client has "all":
https://github.com/ceph/ceph/...
- 03:21 PM Bug #36379 (Rejected): mds: cache drop command only requires read caps
- https://github.com/ceph/ceph/blob/a6a2f395b7e5f9fd867c0ba83a9e6525485a575d/src/mds/MDSDaemon.cc#L704-L707
This sho...
- 03:24 PM Bug #36368 (Fix Under Review): cephfs/tool: cephfs-shell have "no attribute 'decode'" err
- https://github.com/ceph/ceph/pull/24508
- 02:04 AM Bug #36368: cephfs/tool: cephfs-shell have "no attribute 'decode'" err
- Other Place...
- 01:37 AM Bug #36368 (Resolved): cephfs/tool: cephfs-shell have "no attribute 'decode'" err
- cephfs-shell have "no attribute 'decode'" err, as follow:...
- 06:25 AM Bug #36370 (Resolved): add information about active scrubs to "ceph -s" (and elsewhere)
- Currently, there is no way to track scrub operations on a filesystem except to check the mds log and figure out whic...
10/09/2018
- 10:52 PM Bug #36367 (Fix Under Review): mds: wait shorter intervals to send beacon if laggy
- 10:46 PM Bug #36367 (Resolved): mds: wait shorter intervals to send beacon if laggy
- MDS beacon upkeep always waits mds_beacon_interval seconds even when laggy. Check more frequently when we stop being ...
- 09:41 PM Bug #36366 (Duplicate): luminous: qa: blogbench hang with two kclients and 3 active mds
- From luminous QA run with -k testing:...
- 08:32 PM Bug #36365 (Fix Under Review): qa: increase rm timeout for workunit cleanup
- https://github.com/ceph/ceph/pull/24503
- 08:28 PM Bug #36365 (Resolved): qa: increase rm timeout for workunit cleanup
- Some workunits like fsstress take ~45 minutes to cleanup:...
- 01:39 PM Bug #36359 (New): cephfs slow down when export with samba server
- I created a Jewel CephFS cluster and ran CephFS, then exported the mount dir through a Samba server
then run a window...
- 12:26 PM Bug #36349: mds: src/mds/MDCache.cc: 1637: FAILED ceph_assert(follows >= realm->get_newest_seq())
- Looks like the mds_table_request (snaptable server_ready) got lost. It's the first message that mds.0 sent to mds.3
...
- 10:50 AM Bug #36350 (Fix Under Review): mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during m...
- 10:49 AM Bug #36350: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds thrashing
- https://github.com/ceph/ceph/pull/24490
- 07:01 AM Feature #36352: client: explicitly show blacklisted state via asok status command
- https://github.com/ceph/ceph/pull/24486
- 06:58 AM Feature #36352 (Resolved): client: explicitly show blacklisted state via asok status command
- In some unstable network environments, we found the ceph-fuse client may become blacklisted. And common users found client ...
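A sketch of how the proposed field would be consumed, via the client admin socket (the socket path is an example):
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok status
The idea in this ticket is for the resulting JSON to explicitly report whether the client considers itself blacklisted.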
10/08/2018
- 11:10 PM Bug #36350 (Resolved): mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds t...
- ...
- 11:06 PM Bug #36349 (Can't reproduce): mds: src/mds/MDCache.cc: 1637: FAILED ceph_assert(follows >= realm-...
- ...
- 09:37 PM Bug #36340 (Fix Under Review): common: fix buffer advance length overflow to cause MDS crash
- 03:29 AM Bug #36340: common: fix buffer advance length overflow to cause MDS crash
- https://github.com/ceph/ceph/pull/24466
After this fix, the MDS never crashes due to the above buffer length overflow.
- 03:27 AM Bug #36340 (Resolved): common: fix buffer advance length overflow to cause MDS crash
- The buffer advance length was defined as int, but in the async messenger the real length of the data buffer was defined as unsigned, s...
- 09:32 PM Feature #36338: Namespace support for libcephfs
- Stefan Kooman wrote:
> @Patrick/@Zheng: I don't get your points. I mean, I do know how to configure namespaces in ce...
- 07:29 PM Feature #36338: Namespace support for libcephfs
- @Patrick/@Zheng: I don't get your points. I mean, I do know how to configure namespaces in cephfs. And I do use it wi...
- 01:56 PM Feature #36338 (New): Namespace support for libcephfs
- you just need to set directory layout on /path/on/cephfs
setfattr -n ceph.file.layout.pool_namespace -v your-name-...
- 01:54 PM Feature #36338 (Rejected): Namespace support for libcephfs
- You need to do a few things:
* Set the auth caps to only mount a certain path: http://docs.ceph.com/docs/master/ce...
- 09:27 PM Bug #36348 (Resolved): luminous(?): blogbench I/O with two kernel clients; one stalls
- ...
- 08:35 PM Bug #36078: mds: 9 active MDS cluster stuck during fsstress
- Also in Luminous: /ceph/teuthology-archive/yuriw-2018-10-05_22:19:38-multimds-wip-yuri4-testing-2018-10-05-2015-lumi...
- 07:55 PM Feature #24854: mds: if MDS fails internal heartbeat, then debugging should be increased to diagn...
- We had "debug_mds=20" when the MDS suddenly started logging "heartbeat_map is_healthy 'MDSRank' had timed out after 1...
- 04:41 PM Bug #36346 (In Progress): mimic: mds: purge queue corruption from wrong backport
- https://github.com/ceph/ceph/pull/24485
- 04:38 PM Bug #36346 (Resolved): mimic: mds: purge queue corruption from wrong backport
- Caused by backport of wrong commit from #24604.
- 04:40 PM Backport #26989: mimic: Implement "cephfs-journal-tool event splice" equivalent for purge queue
- Alas, this backport cherry-picked the wrong commit: http://tracker.ceph.com/issues/36346
- 01:37 PM Bug #36189 (Fix Under Review): ceph-fuse client can't read or write due to backward cap_gen
- https://github.com/ceph/ceph/pull/24286
- 10:05 AM Backport #36203 (In Progress): mimic: "ceph fs add_data_pool" applies pool application metadata i...
- 10:01 AM Backport #36196 (Resolved): luminous: mds: internal op missing events time 'throttled', 'all_read...
- 09:58 AM Backport #35938 (Need More Info): mimic: mds: add average session age (uptime) perf counter
- 09:55 AM Backport #35937 (Resolved): luminous: mds: add average session age (uptime) perf counter
- 09:06 AM Backport #36281: luminous: mds: add drop_cache command
- PR https://github.com/ceph/ceph/pull/24468
- 05:42 AM Backport #36281 (In Progress): luminous: mds: add drop_cache command
- 05:11 AM Backport #26991 (In Progress): mimic: mds: curate priority of perf counters sent to mgr
- PR https://github.com/ceph/ceph/pull/24467
- 04:18 AM Backport #26991: mimic: mds: curate priority of perf counters sent to mgr
- I'll post the backport today...
- 02:54 AM Backport #36206 (In Progress): luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not p...
- https://github.com/ceph/ceph/pull/24465
- 02:52 AM Backport #36205 (In Progress): mimic: nfs-ganesha: ceph_fsal_setattr2 returned Operation not perm...
- https://github.com/ceph/ceph/pull/24464
- 02:49 AM Backport #36199 (In Progress): mimic: mds: fix mds damaged due to unexpected journal length
- https://github.com/ceph/ceph/pull/24463
10/07/2018
- 06:31 PM Feature #36338: Namespace support for libcephfs
- There is a ceph_select_filesystem call in libcephfs which _might_ do what we'd need for this, but it's not clear to m...
- 06:15 PM Feature #36338 (Resolved): Namespace support for libcephfs
- Namespace support for libcephfs would allow applications like nfs-ganesha to make use of namespaces in cephfs. With n...
10/06/2018
- 05:38 PM Bug #36335 (Fix Under Review): qa: infinite timeout on asok command causes job to die
- https://github.com/ceph/ceph/pull/24455/files
- 05:28 PM Bug #36335 (Resolved): qa: infinite timeout on asok command causes job to die
- e.g....
10/05/2018
- 10:32 PM Backport #36204 (Rejected): luminous: "ceph fs add_data_pool" applies pool application metadata i...
- Issue does not exist in Luminous.
- 08:57 PM Backport #36196: luminous: mds: internal op missing events time 'throttled', 'all_read', 'dispatc...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24410
merged
- 08:57 PM Backport #35937: luminous: mds: add average session age (uptime) perf counter
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24421
merged
- 09:23 AM Backport #35837: mimic: mds: use monotonic clock for beacon message timekeeping
- the second commit in this patchset is non-trivial to cherry-pick
- 08:13 AM Backport #26850 (In Progress): mimic: ceph_volume_client: py3 compatible
- 08:08 AM Backport #26991 (Need More Info): mimic: mds: curate priority of perf counters sent to mgr
- non-trivial backport - see https://github.com/ceph/ceph/pull/23882 for a naive backport and the problems it had.
- 08:05 AM Backport #26991 (In Progress): mimic: mds: curate priority of perf counters sent to mgr
- 06:19 AM Backport #36313 (In Progress): mimic: doc: fix broken fstab url in cephfs/fuse
- 02:20 AM Backport #36278 (In Progress): mimic: qa: add timeouts to workunits to bound test execution time ...
- 02:17 AM Backport #36277 (In Progress): luminous: qa: add timeouts to workunits to bound test execution ti...
- 02:12 AM Backport #36323 (In Progress): mimic: qa: Command failed on smithi189 with status 1: 'rm -rf -- /...
- 02:06 AM Backport #36323 (Resolved): mimic: qa: Command failed on smithi189 with status 1: 'rm -rf -- /hom...
- https://github.com/ceph/ceph/pull/24408
- 02:07 AM Backport #36322 (In Progress): luminous: qa: Command failed on smithi189 with status 1: 'rm -rf -...
- 02:06 AM Backport #36322 (Resolved): luminous: qa: Command failed on smithi189 with status 1: 'rm -rf -- /...
- https://github.com/ceph/ceph/pull/24403
- 01:58 AM Backport #36200 (In Progress): luminous: mds: fix mds damaged due to unexpected journal length
- https://github.com/ceph/ceph/pull/24440
- 01:06 AM Backport #36197 (In Progress): mimic: ceph-fuse: add SELinux policy
- https://github.com/ceph/ceph/pull/24439
- 01:05 AM Backport #36156 (In Progress): mimic: qa: test_version_splitting (tasks.cephfs.test_sessionmap.Te...
- https://github.com/ceph/ceph/pull/24438
10/04/2018
- 09:21 PM Bug #36320 (Resolved): mds: cache drop command requires timeout argument when it is supposed to b...
- ...
- 08:27 PM Backport #32088 (Resolved): luminous: mds: use monotonic clock for beacon sender thread waits
- 07:00 PM Backport #32092 (In Progress): mimic: mds: migrate strays part by part when shutdown mds
- 05:10 PM Backport #36153: mimic: qa: fsstress workunit does not execute in parallel on same host without c...
- Followup fixes: http://tracker.ceph.com/issues/36165 http://tracker.ceph.com/issues/36184
- 05:10 PM Backport #36152: luminous: qa: fsstress workunit does not execute in parallel on same host withou...
- Followup fixes: http://tracker.ceph.com/issues/36165 http://tracker.ceph.com/issues/36184
- 05:08 PM Bug #36165 (Pending Backport): qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ub...
- 04:39 PM Backport #36312 (In Progress): luminous: doc: fix broken fstab url in cephfs/fuse
- 09:41 AM Backport #36312 (Resolved): luminous: doc: fix broken fstab url in cephfs/fuse
- https://github.com/ceph/ceph/pull/24434
- 09:41 AM Backport #36313 (Resolved): mimic: doc: fix broken fstab url in cephfs/fuse
- https://github.com/ceph/ceph/pull/24441
- 09:40 AM Backport #36308 (Resolved): mimic: doc: Typo error on cephfs/fuse/
- 05:01 AM Backport #36308 (In Progress): mimic: doc: Typo error on cephfs/fuse/
- 07:07 AM Backport #35937 (In Progress): luminous: mds: add average session age (uptime) perf counter
- PR https://github.com/ceph/ceph/pull/24421
- 12:32 AM Documentation #36286 (Pending Backport): doc: fix broken fstab url in cephfs/fuse
10/03/2018
- 10:00 PM Backport #36309 (Resolved): luminous: doc: Typo error on cephfs/fuse/
- https://github.com/ceph/ceph/pull/24752
- 10:00 PM Backport #36308 (Resolved): mimic: doc: Typo error on cephfs/fuse/
- https://github.com/ceph/ceph/pull/24420
- 09:39 PM Bug #26973 (Resolved): mds: MDBalancer::try_rebalance() may stop prematurely
- 09:39 PM Backport #32084 (Resolved): luminous: mds: MDBalancer::try_rebalance() may stop prematurely
- 09:39 PM Bug #25008 (Resolved): "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" in p...
- 09:38 PM Backport #25033 (Resolved): luminous: "Health check failed: 1 MDSs report slow requests (MDS_SLOW...
- 09:08 PM Backport #36195 (In Progress): mimic: mds: internal op missing events time 'throttled', 'all_read...
- 09:08 PM Backport #36196 (In Progress): luminous: mds: internal op missing events time 'throttled', 'all_r...
- 08:44 PM Backport #32088 (In Progress): luminous: mds: use monotonic clock for beacon sender thread waits
- 11:27 AM Backport #32088 (Need More Info): luminous: mds: use monotonic clock for beacon sender thread waits
- 08:40 PM Backport #35976 (Resolved): luminous: mds: configurable timeout for client eviction
- 07:46 PM Backport #35976: luminous: mds: configurable timeout for client eviction
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24086
merged
- 07:51 PM Backport #35939 (Resolved): luminous: client: statfs inode count odd
- 07:45 PM Backport #35939: luminous: client: statfs inode count odd
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24376
merged
- 07:50 PM Bug #26900 (Resolved): qa: reduce slow warnings arising due to limited testing hardware
- 07:50 PM Backport #26904 (Resolved): luminous: qa: reduce slow warnings arising due to limited testing har...
- 07:50 PM Backport #26904: luminous: qa: reduce slow warnings arising due to limited testing hardware
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23877
merged
- 07:50 PM Cleanup #24839 (Resolved): qa: move mds/client config to qa from teuthology ceph.conf.template
- 07:50 PM Backport #24842 (Resolved): luminous: qa: move mds/client config to qa from teuthology ceph.conf....
- 07:50 PM Backport #24842: luminous: qa: move mds/client config to qa from teuthology ceph.conf.template
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23877
merged
- 07:49 PM Backport #36135 (Resolved): luminous: mds: rctime may go back
- 07:44 PM Backport #36135: luminous: mds: rctime may go back
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24378
merged
- 07:43 PM Backport #36153 (In Progress): mimic: qa: fsstress workunit does not execute in parallel on same ...
- 07:41 PM Backport #36152 (In Progress): luminous: qa: fsstress workunit does not execute in parallel on sa...
- 04:32 PM Backport #36152 (Need More Info): luminous: qa: fsstress workunit does not execute in parallel on...
- 04:30 PM Backport #36152 (In Progress): luminous: qa: fsstress workunit does not execute in parallel on sa...
- 01:47 PM Documentation #36180 (Pending Backport): doc: Typo error on cephfs/fuse/
- 12:41 PM Backport #36148 (Need More Info): mimic: mds: fix instances of wrongly sending client messages ou...
- This one is non-trivial because it seems to depend on https://github.com/ceph/ceph/commit/d2a202af39f
- 12:40 PM Backport #36147 (Need More Info): luminous: mds: fix instances of wrongly sending client messages...
- This one is non-trivial because it seems to depend on https://github.com/ceph/ceph/commit/d2a202af39f
- 11:48 AM Bug #35720 (Resolved): evicting client session may block finisher thread
- 11:47 AM Backport #35722 (Resolved): mimic: evicting client session may block finisher thread
- Since the commit was included in https://github.com/ceph/ceph/pull/23105, https://github.com/ceph/ceph/pull/23945 was...
- 10:04 AM Backport #32091 (Need More Info): luminous: mds: migrate strays part by part when shutdown mds
- 03:51 AM Backport #35937: luminous: mds: add average session age (uptime) perf counter
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24089 has been merged.
Thanks. I'll send the backport.
10/02/2018
- 09:58 PM Backport #36134 (In Progress): mimic: client: update ctime when modifying file content
- 09:57 PM Backport #36133 (Resolved): luminous: client: update ctime when modifying file content
- 09:01 PM Backport #36133: luminous: client: update ctime when modifying file content
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24323
merged
- 09:56 PM Bug #24899 (Resolved): qa: multifs requires 4 mds but gets only 2
- 09:56 PM Backport #24912 (Resolved): luminous: qa: multifs requires 4 mds but gets only 2
- 09:01 PM Backport #24912: luminous: qa: multifs requires 4 mds but gets only 2
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24328
merged
- 09:55 PM Backport #32104 (In Progress): mimic: mds: allows client to create ".." and "." dirents
- 09:50 PM Backport #32103 (Resolved): luminous: mds: allows client to create ".." and "." dirents
- 09:00 PM Backport #32103: luminous: mds: allows client to create ".." and "." dirents
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24329
merged
- 09:43 PM Backport #35837 (Need More Info): mimic: mds: use monotonic clock for beacon message timekeeping
- to be backported together with #32090
- 09:42 PM Backport #32090 (Need More Info): mimic: mds: use monotonic clock for beacon sender thread waits
- to be backported together with #35837
- 09:30 PM Backport #35838 (Resolved): luminous: mds: use monotonic clock for beacon message timekeeping
- 09:02 PM Backport #35838: luminous: mds: use monotonic clock for beacon message timekeeping
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24311
merged - 07:54 PM Support #25089: Many slow requests
- I believe I'm seeing this bug. I was running multiple active mds, and exporting cephfs-fuse via samba. After upgradin...
- 07:32 PM Backport #36136 (In Progress): mimic: mds: rctime may go back
- 07:31 PM Backport #36135 (In Progress): luminous: mds: rctime may go back
- 07:26 PM Backport #35940 (In Progress): mimic: client: statfs inode count odd
- 07:25 PM Backport #35939 (In Progress): luminous: client: statfs inode count odd
- 07:16 PM Backport #35937: luminous: mds: add average session age (uptime) perf counter
- https://github.com/ceph/ceph/pull/24089 has been merged.
- 07:15 PM Backport #35937 (Need More Info): luminous: mds: add average session age (uptime) perf counter
- 07:14 PM Backport #32088 (In Progress): luminous: mds: use monotonic clock for beacon sender thread waits
- 12:02 PM Documentation #36180 (Fix Under Review): doc: Typo error on cephfs/fuse/
- https://github.com/ceph/ceph/pull/24367
- 11:13 AM Bug #36181 (Closed): Bad link on ceph/fuse
- This issue is fixed and tracked here: http://tracker.ceph.com/issues/36286.
- 06:33 AM Backport #26990 (Resolved): luminous: mds: curate priority of perf counters sent to mgr
- 06:33 AM Backport #35983 (Resolved): luminous: mds: change mds perf counters can statistics filesystem ope...
- 06:32 AM Backport #36210 (Resolved): luminous: mds: runs out of file descriptors after several respawns
- 05:11 AM Documentation #36286 (Fix Under Review): doc: fix broken fstab url in cephfs/fuse
- https://github.com/ceph/ceph/pull/24362
- 04:46 AM Documentation #36286 (Resolved): doc: fix broken fstab url in cephfs/fuse
- http://docs.ceph.com/docs/master/cephfs/fuse/fstab is broken in http://docs.ceph.com/docs/master/cephfs/fuse/
10/01/2018
- 09:14 PM Backport #26990: luminous: mds: curate priority of perf counters sent to mgr
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24089
merged
- 09:14 PM Backport #35983: luminous: mds: change mds perf counters can statistics filesystem operations num...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24089
merged
- 09:10 PM Backport #36210: luminous: mds: runs out of file descriptors after several respawns
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24310
merged
- 08:07 PM Backport #36282 (Resolved): mimic: mds: add drop_cache command
- https://github.com/ceph/ceph/pull/25118
- 08:07 PM Backport #36281 (Resolved): luminous: mds: add drop_cache command
- https://github.com/ceph/ceph/pull/24468
- 08:05 PM Backport #36280 (Resolved): mimic: qa: RuntimeError: FSCID 10 has no rank 1
- https://github.com/ceph/ceph/pull/24572
- 08:05 PM Backport #36279 (Resolved): luminous: qa: RuntimeError: FSCID 10 has no rank 1
- https://github.com/ceph/ceph/pull/24552
- 08:05 PM Backport #36278 (Resolved): mimic: qa: add timeouts to workunits to bound test execution time in ...
- https://github.com/ceph/ceph/pull/24408
- 08:05 PM Backport #36277 (Resolved): luminous: qa: add timeouts to workunits to bound test execution time ...
- https://github.com/ceph/ceph/pull/24403
- 07:51 PM Bug #36273 (New): qa: add background task for some units which drops MDS cache
- To confirm no undesirable side-effects may occur during actual workloads.
See qa/tasks/mds_thrash.py for a model. ... - 07:49 PM Feature #23362 (Pending Backport): mds: add drop_cache command
- 05:44 PM Bug #35828 (Pending Backport): qa: RuntimeError: FSCID 10 has no rank 1
- 05:42 PM Bug #36184 (Pending Backport): qa: add timeouts to workunits to bound test execution time in the ...
- 05:38 PM Bug #36192: Internal fragment of ObjectCacher
- I've applied this patch on 12.2.7, rebuilt ceph-fuse and ran my test case that originally produced this issue. Happy...
- 12:17 PM Backport #26851 (Resolved): luminous: ceph_volume_client: py3 compatible
- 12:16 PM Backport #36198 (Resolved): luminous: ceph-fuse: add SELinux policy
- 12:15 PM Backport #22504 (Resolved): luminous: client may fail to trim as many caps as MDS asked for
- 12:11 PM Backport #35718 (Resolved): luminous: mds: beacon spams is_laggy message
- 12:08 PM Backport #36101 (Resolved): luminous: qa: remove knfs site from future releases
- 12:02 PM Backport #35931 (Resolved): luminous: mds: retry remounting in ceph-fuse on dcache invalidation
09/29/2018
- 12:01 PM Bug #23262: kclient: nofail option not supported
- The issue is that the mount.ceph helper doesn't handle nofail, but passes it through to the kernel; nofail is a userspac...
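A hedged fstab example of where nofail would be used (the monitor address, secret file, and mount point are placeholders); the point above is that nofail should be consumed by mount/systemd in userspace rather than passed through to the kernel:
    192.168.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,nofail,_netdev  0 2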
09/28/2018
- 09:30 PM Bug #26898 (Resolved): MDSMonitor: note ignored beacons/map changes at higher debug level
- 09:30 PM Backport #26930 (Resolved): luminous: MDSMonitor: note ignored beacons/map changes at higher debu...
- 08:07 PM Backport #26851: luminous: ceph_volume_client: py3 compatible
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24083
merged
- 08:06 PM Backport #36198: luminous: ceph-fuse: add SELinux policy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24313
merged
- 08:04 PM Backport #22504: luminous: client may fail to trim as many caps as MDS asked for
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24119
merged
- 08:04 PM Backport #35718: luminous: mds: beacon spams is_laggy message
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24138
merged
- 08:03 PM Backport #36101: luminous: qa: remove knfs site from future releases
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24268
merged
- 08:02 PM Backport #35931: luminous: mds: retry remounting in ceph-fuse on dcache invalidation
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24303
merged
- 06:11 PM Feature #36253 (Resolved): cephfs: clients should send usage metadata to MDSs for administration/...
- In particular:
* The capability "cache hits" by clients to provide introspection on the effectiveness of client ca... - 05:54 PM Bug #36252 (Duplicate): cephfs: `ceph fs top` command
- 05:51 PM Bug #36252 (Duplicate): cephfs: `ceph fs top` command
- Similar to the `top` command. It would ideally provide a list of sessions doing I/O, what kind of I/O, bandwidth of r...
- 08:42 AM Backport #32103 (In Progress): luminous: mds: allows client to create ".." and "." dirents
- 08:36 AM Backport #24912 (In Progress): luminous: qa: multifs requires 4 mds but gets only 2
- 08:12 AM Backport #32091 (In Progress): luminous: mds: migrate strays part by part when shutdown mds
- 08:03 AM Backport #36133 (In Progress): luminous: client: update ctime when modifying file content
- 07:01 AM Bug #24173 (Resolved): ceph_volume_client: allow atomic update of RADOS objects
- 07:01 AM Backport #24862 (Resolved): luminous: ceph_volume_client: allow atomic update of RADOS objects
- 07:00 AM Backport #35933 (Resolved): luminous: client: cannot list out files created by another ceph-fuse ...
- 06:49 AM Backport #35937: luminous: mds: add average session age (uptime) perf counter
- Patrick,
I'll backport this once https://github.com/ceph/ceph/pull/24089 merges.
09/27/2018
- 11:41 PM Backport #36198 (In Progress): luminous: ceph-fuse: add SELinux policy
- 11:19 PM Backport #24862: luminous: ceph_volume_client: allow atomic update of RADOS objects
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24084
merged
- 11:19 PM Backport #35933: luminous: client: cannot list out files created by another ceph-fuse client
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24282
merged
- 11:08 PM Backport #35838 (In Progress): luminous: mds: use monotonic clock for beacon message timekeeping
- 07:36 PM Backport #36210 (In Progress): luminous: mds: runs out of file descriptors after several respawns
- 07:27 PM Bug #36192 (Fix Under Review): Internal fragment of ObjectCacher
- 05:19 AM Bug #36192: Internal fragment of ObjectCacher
- https://github.com/ceph/ceph/pull/24297
- 01:30 PM Backport #35931 (In Progress): luminous: mds: retry remounting in ceph-fuse on dcache invalidation
- 02:03 AM Backport #35934 (In Progress): mimic: client: cannot list out files created by another ceph-fuse ...
- https://github.com/ceph/ceph/pull/24295
09/26/2018
- 10:10 PM Bug #24879 (Resolved): mds: create health warning if we detect metadata (journal) writes are slow
- 10:10 PM Backport #25046 (Resolved): luminous: mds: create health warning if we detect metadata (journal) ...
- 04:29 PM Backport #25046: luminous: mds: create health warning if we detect metadata (journal) writes are ...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/24171
merged
- 06:16 PM Bug #36221 (Fix Under Review): mds: rctime not set on system inode (root) at startup
- 06:16 PM Bug #36221: mds: rctime not set on system inode (root) at startup
- https://github.com/ceph/ceph/pull/24292
- 06:13 PM Bug #36221 (Resolved): mds: rctime not set on system inode (root) at startup
- ...
- 04:19 PM Backport #36218 (Resolved): mimic: Some cephfs tool commands silently operate on only rank 0, eve...
- https://github.com/ceph/ceph/pull/25036
- 04:19 PM Backport #36217 (Resolved): luminous: Some cephfs tool commands silently operate on only rank 0, ...
- https://github.com/ceph/ceph/pull/24728
- 04:18 PM Backport #36210 (Resolved): luminous: mds: runs out of file descriptors after several respawns
- https://github.com/ceph/ceph/pull/24310
- 04:18 PM Backport #36209 (Resolved): mimic: mds: runs out of file descriptors after several respawns
- https://github.com/ceph/ceph/pull/25822
- 04:17 PM Backport #36206 (Resolved): luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not perm...
- https://github.com/ceph/ceph/pull/24465
- 04:17 PM Backport #36205 (Resolved): mimic: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- https://github.com/ceph/ceph/pull/24464
- 04:16 PM Backport #36204 (Rejected): luminous: "ceph fs add_data_pool" applies pool application metadata i...
- 04:16 PM Backport #36203 (Resolved): mimic: "ceph fs add_data_pool" applies pool application metadata inco...
- https://github.com/ceph/ceph/pull/24470
- 04:16 PM Backport #36200 (Resolved): luminous: mds: fix mds damaged due to unexpected journal length
- https://github.com/ceph/ceph/pull/24440
- 04:16 PM Backport #36199 (Resolved): mimic: mds: fix mds damaged due to unexpected journal length
- https://github.com/ceph/ceph/pull/24463
- 04:16 PM Backport #36198 (Resolved): luminous: ceph-fuse: add SELinux policy
- https://github.com/ceph/ceph/pull/24313
- 04:16 PM Backport #36197 (Resolved): mimic: ceph-fuse: add SELinux policy
- https://github.com/ceph/ceph/pull/24439
- 04:15 PM Backport #36196 (Resolved): luminous: mds: internal op missing events time 'throttled', 'all_read...
- https://github.com/ceph/ceph/pull/24410
- 04:15 PM Backport #36195 (Resolved): mimic: mds: internal op missing events time 'throttled', 'all_read', ...
- https://github.com/ceph/ceph/pull/24411
- 01:15 PM Bug #36192 (Resolved): Internal fragment of ObjectCacher
- ObjectCacher::Object::merge_left() may cause severe internal fragmentation if both buffer heads only hold a little data (both...
- 07:17 AM Bug #36189 (Resolved): ceph-fuse client can't read or write due to backward cap_gen
- ceph version: jewel(10.2.2)
mds mode: active/hot-standby
bug description:
stack:...
- 12:16 AM Backport #35933 (In Progress): luminous: client: cannot list out files created by another ceph-fu...
- https://github.com/ceph/ceph/pull/24282
09/25/2018
- 11:02 PM Bug #36103 (Pending Backport): ceph-fuse: add SELinux policy
- 08:47 PM Bug #36171 (Fix Under Review): mds: ctime should not use client provided ctime/mtime
- 06:17 PM Bug #36171 (In Progress): mds: ctime should not use client provided ctime/mtime
- 01:35 AM Bug #36171 (New): mds: ctime should not use client provided ctime/mtime
- Otherwise, you can set a ctime that is far in the future and it cannot be rolled back....
- 06:11 PM Bug #36184 (Fix Under Review): qa: add timeouts to workunits to bound test execution time in the ...
- https://github.com/ceph/ceph/pull/24275
- 05:59 PM Bug #36184 (Resolved): qa: add timeouts to workunits to bound test execution time in the event of...
- 04:52 PM Backport #32098 (Resolved): luminous: mds: optimize the way how max export size is enforced
- 04:51 PM Backport #35839 (Duplicate): mimic: unhealthy heartbeat map during subtree migration
- 07:49 AM Backport #35839 (In Progress): mimic: unhealthy heartbeat map during subtree migration
- https://github.com/ceph/ceph/pull/23506
- 04:51 PM Backport #35840 (Duplicate): luminous: unhealthy heartbeat map during subtree migration
- 04:50 PM Bug #24881 (Duplicate): unhealthy heartbeat map during subtree migration
- 04:32 PM Backport #35858 (In Progress): mimic: MDSMonitor: lookup of gid in prepare_beacon that has been r...
- 12:06 AM Backport #35858 (Need More Info): mimic: MDSMonitor: lookup of gid in prepare_beacon that has bee...
- Looks like this backport PR needs backporting of the src/mds/MDSMap.h changes:
/ceph7_mimic/src/mon/MDSMonitor.cc:648:2...
- 04:20 PM Bug #35850 (Pending Backport): mds: runs out of file descriptors after several respawns
- 03:47 PM Bug #36181 (Closed): Bad link on ceph/fuse
- This page has the text "you may add an entry to the system fstab" with fstab being a link.
The link points to http:...
- 02:50 PM Documentation #36180 (Resolved): doc: Typo error on cephfs/fuse/
- In the documentation on how to use ceph fuse, "usernname" is used as an example path while "username" is used on the ...
- 02:48 PM Bug #36114 (Pending Backport): mds: internal op missing events time 'throttled', 'all_read', 'dis...
- 02:36 PM Bug #36165 (Resolved): qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/cep...
- 01:16 PM Backport #36102 (In Progress): mimic: qa: remove knfs site from future releases
- 01:15 PM Backport #36101 (In Progress): luminous: qa: remove knfs site from future releases
09/24/2018
- 10:01 PM Bug #36093 (Pending Backport): mds: fix mds damaged due to unexpected journal length
- 01:35 PM Bug #36093 (Fix Under Review): mds: fix mds damaged due to unexpected journal length
- 09:50 PM Bug #35961 (Pending Backport): nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- 09:49 PM Bug #24780 (Pending Backport): Some cephfs tool commands silently operate on only rank 0, even if...
- 09:41 PM Bug #36028 (Pending Backport): "ceph fs add_data_pool" applies pool application metadata incorrectly
- 09:40 PM Cleanup #24001 (Resolved): MDSMonitor: remove vestiges of `mds deactivate`
- 06:04 PM Bug #36165 (Fix Under Review): qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ub...
- https://github.com/ceph/ceph/pull/24252
- 06:01 PM Bug #36165 (Resolved): qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/cep...
- ...
- 06:01 PM Backport #36153: mimic: qa: fsstress workunit does not execute in parallel on same host without c...
- Follow-up bug: #36165
- 11:02 AM Backport #36153 (Resolved): mimic: qa: fsstress workunit does not execute in parallel on same hos...
- https://github.com/ceph/ceph/pull/24408
- 06:01 PM Backport #36152: luminous: qa: fsstress workunit does not execute in parallel on same host withou...
- Follow-up bug: #36165
- 11:02 AM Backport #36152 (Resolved): luminous: qa: fsstress workunit does not execute in parallel on same ...
- https://github.com/ceph/ceph/pull/24403
- 05:10 PM Bug #36079 (Fix Under Review): ceph-fuse: hang because it miss reconnect phase when hot standby m...
- 01:37 PM Bug #36114 (Fix Under Review): mds: internal op missing events time 'throttled', 'all_read', 'dis...
- 02:58 AM Bug #36114 (In Progress): mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
- 01:36 PM Bug #36103 (Fix Under Review): ceph-fuse: add SELinux policy
- https://github.com/ceph/ceph/pull/24203
- 01:35 PM Bug #26860 (Resolved): client: requests that do name lookup may be sent to wrong mds
- 01:34 PM Backport #26983 (Resolved): luminous: client: requests that do name lookup may be sent to wrong mds
- 01:34 PM Feature #26925 (Resolved): cephfs-data-scan: print the max used ino
- 01:34 PM Backport #26977 (Resolved): luminous: cephfs-data-scan: print the max used ino
- 01:33 PM Backport #35721 (Resolved): luminous: evicting client session may block finisher thread
- 01:33 PM Backport #35859 (Resolved): luminous: MDSMonitor: lookup of gid in prepare_beacon that has been r...
- 01:31 PM Backport #24934 (Resolved): luminous: cephfs-journal-tool: wrong layout info used
- 11:38 AM Cleanup #24820 (Resolved): overhead of g_conf->get_val<type>("config name") is high
- 11:38 AM Backport #25043 (Resolved): luminous: overhead of g_conf->get_val<type>("config name") is high
- 11:37 AM Bug #26834 (Resolved): mds: use self CPU usage to calculate load
- 11:37 AM Backport #26889 (Resolved): luminous: mds: use self CPU usage to calculate load
- 11:36 AM Backport #26885 (Resolved): luminous: mds: reset heartbeat map at potential time-consuming places
- 11:35 AM Bug #26899 (Resolved): MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are ...
- 11:35 AM Backport #26906 (Resolved): luminous: MDSMonitor: consider raising priority of MMDSBeacons from M...
- 11:34 AM Bug #23519 (Resolved): mds: mds got laggy because of MDSBeacon stuck in mqueue
- 11:34 AM Backport #26924 (Resolved): luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- 11:34 AM Bug #25213 (Resolved): handle ceph_ll_close on unmounted filesystem without crashing
- 11:34 AM Backport #26915 (Resolved): luminous: handle ceph_ll_close on unmounted filesystem without crashing
- 11:18 AM Bug #26894 (Resolved): mds: crash when dumping ops in flight
- 11:18 AM Backport #26981 (Resolved): luminous: mds: crash when dumping ops in flight
- 11:18 AM Bug #24840 (Resolved): mds: explain delayed client_request due to subtree migration
- 11:17 AM Backport #26987 (Resolved): luminous: mds: explain delayed client_request due to subtree migration
- 11:17 AM Bug #25141 (Resolved): CephVolumeClient: delay required after adding data pool to MDSMap
- 11:17 AM Backport #25205 (Resolved): luminous: CephVolumeClient: delay required after adding data pool to ...
- 11:02 AM Backport #36156 (Resolved): mimic: qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestS...
- https://github.com/ceph/ceph/pull/24438
- 11:01 AM Backport #36148 (Rejected): mimic: mds: fix instances of wrongly sending client messages outside ...
- 11:01 AM Backport #36147 (Rejected): luminous: mds: fix instances of wrongly sending client messages outsi...
- 11:00 AM Backport #36136 (Resolved): mimic: mds: rctime may go back
- https://github.com/ceph/ceph/pull/24379
- 11:00 AM Backport #36135 (Resolved): luminous: mds: rctime may go back
- https://github.com/ceph/ceph/pull/24378
- 11:00 AM Backport #36134 (Resolved): mimic: client: update ctime when modifying file content
- https://github.com/ceph/ceph/pull/24385
- 11:00 AM Backport #36133 (Resolved): luminous: client: update ctime when modifying file content
- https://github.com/ceph/ceph/pull/24323
09/23/2018
- 01:34 PM Bug #36114: mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
- https://github.com/ceph/ceph/pull/24163
- 01:34 PM Bug #36114 (Resolved): mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
- ...