Activity
From 08/23/2018 to 09/21/2018
09/21/2018
- 08:01 PM Bug #24177 (Pending Backport): qa: fsstress workunit does not execute in parallel on same host wi...
- 09:00 AM Bug #36078 (Need More Info): mds: 9 active MDS cluster stuck during fsstress
- I don't see any stuck client request in any mds. The "balancer runs too long" warning is because the mds ran too slow...
09/20/2018
- 05:26 PM Bug #36103 (Resolved): ceph-fuse: add SELinux policy
- Right now it's being run unconfined.
QA on this should check carefully for avc denials.
- 04:33 PM Backport #36102 (Resolved): mimic: qa: remove knfs site from future releases
- https://github.com/ceph/ceph/pull/24269
- 04:33 PM Backport #36101 (Resolved): luminous: qa: remove knfs site from future releases
- https://github.com/ceph/ceph/pull/24268
- 04:32 PM Cleanup #36075 (Pending Backport): qa: remove knfs site from future releases
- 04:24 PM Backport #25046 (In Progress): luminous: mds: create health warning if we detect metadata (journa...
- 04:22 PM Bug #36085 (Need More Info): ceph mds repeat laggy or crashed
- Please include `debug mds = 20` as well. The logs don't help much with only `debug ms = 1`.
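For reference, a minimal sketch of the settings being requested (placing them under an [mds] section is just one common way to apply them; adjust to your deployment):
    [mds]
        debug mds = 20
        debug ms = 1
The same can usually be applied at runtime with something like `ceph tell mds.* injectargs '--debug-mds 20 --debug-ms 1'`.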
- 04:18 PM Bug #35961 (Fix Under Review): nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- 04:15 PM Bug #36081 (Rejected): delete fs, osd space not reclaimed
- This question is better suited to ceph-users ML. Please repost your question there.
- 08:46 AM Bug #36094: mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- While the active one crashed, the other one also failed ...
- 08:43 AM Bug #36094 (Resolved): mds: crash(FAILED assert(omap_num_objs <= MAX_OBJECTS))
- Hi, I have encountered the MDS error "FAILED assert(omap_num_objs <= MAX_OBJECTS)" in my production environment.
Ceph version: mimic 13.2.1
...
- 07:46 AM Bug #36093: mds: fix mds damaged due to unexpected journal length
- I submitted a PR based on the above analysis:
https://github.com/ceph/ceph/pull/24194
- 07:44 AM Bug #36093 (Resolved): mds: fix mds damaged due to unexpected journal length
- Recently we hit a rare but serious MDS damage issue while frequently switching/restarting the MDS. The MDS reports damaged ...
- 02:17 AM Backport #35841 (In Progress): mimic: client: segmentation fault in handle_client_reply
- https://github.com/ceph/ceph/pull/24187
09/19/2018
- 12:56 PM Backport #25046: luminous: mds: create health warning if we detect metadata (journal) writes are ...
- https://github.com/ceph/ceph/pull/24171
- 11:33 AM Bug #36085 (Need More Info): ceph mds repeat laggy or crashed
- ...
- 07:01 AM Bug #35961: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- Patch for nfs-ganesha submitted:
https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/426017
- 06:57 AM Bug #35961: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- *PR*: https://github.com/ceph/ceph/pull/24170
- 06:48 AM Bug #36081 (Rejected): delete fs, osd space not reclaimed
- Test steps are as follows:
1. Set up a Ceph cluster with BlueStore OSDs and create a CephFS filesystem.
2. Write test files to CephFS wi...
- 04:06 AM Bug #36079 (Resolved): ceph-fuse: hang because it miss reconnect phase when hot standby mds switc...
- Version: jewel(ceph-10.2.2)
MDS mode: active/hot-standby
Description:
As we know, the MDS will kill the session if cl...
- 03:02 AM Bug #36072: ceph fuse client can't write data and reporting waiting for caps need Fw want Fb
- This PR, https://github.com/ceph/ceph/pull/24164, resolves the problem.
- 02:05 AM Backport #35840 (Resolved): luminous: unhealthy heartbeat map during subtree migration
- https://github.com/ceph/ceph/pull/23507
- 12:41 AM Backport #35719 (In Progress): mimic: mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/24161
09/18/2018
- 11:43 PM Bug #36078 (Can't reproduce): mds: 9 active MDS cluster stuck during fsstress
- ...
- 10:30 PM Cleanup #24001 (Fix Under Review): MDSMonitor: remove vestiges of `mds deactivate`
- https://github.com/ceph/ceph/pull/24158
- 10:04 PM Bug #24177 (Fix Under Review): qa: fsstress workunit does not execute in parallel on same host wi...
- https://github.com/ceph/ceph/pull/24157
- 09:57 PM Cleanup #36075: qa: remove knfs site from future releases
- https://github.com/ceph/ceph/pull/24153
https://github.com/ceph/ceph/pull/24154
https://github.com/ceph/ceph/pull/2...
- 08:49 PM Cleanup #36075 (In Progress): qa: remove knfs site from future releases
- 08:48 PM Cleanup #36075 (Resolved): qa: remove knfs site from future releases
- `luminous`, `mimic` etc
IRC discussion:...
- 09:43 PM Bug #19240 (Closed): multimds on linode: troubling op throughput scaling from 8 to 16 MDS in kern...
- These results are likely obsolete. Closing.
- 09:41 PM Bug #24090 (Resolved): mds: fragmentation in QA is slowing down ops enough for WRNs
- Resolved via extended timeouts in a recent fix.
- 09:40 PM Bug #23059 (Resolved): mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCache::r...
- 09:40 PM Backport #23155 (Rejected): jewel: mds: FAILED assert (p != active_requests.end()) in MDRequestRe...
- EOL
- 09:39 PM Bug #22546 (Resolved): client: dirty caps may never get the chance to flush
- 09:39 PM Backport #22697 (Rejected): jewel: client: dirty caps may never get the chance to flush
- EOL
- 09:39 PM Bug #22293 (Resolved): client may fail to trim as many caps as MDS asked for
- 09:39 PM Backport #22505 (Rejected): jewel: client may fail to trim as many caps as MDS asked for
- Closing as jewel is EOL.
- 09:38 PM Bug #23797 (Can't reproduce): qa: cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
- Closing. Haven't seen this in a while.
- 07:58 PM Backport #26983: luminous: client: requests that do name lookup may be sent to wrong mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23793
merged
- 07:58 PM Backport #26977: luminous: cephfs-data-scan: print the max used ino
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23881
merged
- 07:56 PM Backport #35721: luminous: evicting client session may block finisher thread
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23946
merged
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
- 07:56 PM Backport #35859: luminous: MDSMonitor: lookup of gid in prepare_beacon that has been removed will...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23990
merged
- 07:55 PM Backport #24934: luminous: cephfs-journal-tool: wrong layout info used
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/24033
merged
- 06:37 PM Backport #25046 (New): luminous: mds: create health warning if we detect metadata (journal) write...
- 06:32 PM Bug #26869: mds: becomes stuck in up:replay during thrashing (not multimds)
- From Luminous testing: /ceph/teuthology-archive/yuriw-2018-09-17_23:29:33-kcephfs-wip-yuri2-testing-2018-09-17-1940-l...
- 12:48 PM Bug #36072 (New): ceph fuse client can't write data and reporting waiting for caps need Fw want Fb
- Version: jewel(10.2.2)
MDS mode: hot standby
Description:
The client will not be able to continue writing data if M...
- 02:07 AM Backport #35718 (In Progress): luminous: mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/24138
09/17/2018
- 07:48 PM Backport #25043: luminous: overhead of g_conf->get_val<type>("config name") is high
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23408
merged
- 07:47 PM Backport #26889: luminous: mds: use self CPU usage to calculate load
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23505
merged
- 07:47 PM Backport #26885: luminous: mds: reset heartbeat map at potential time-consuming places
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23507
merged
- 07:46 PM Backport #26906: luminous: MDSMonitor: consider raising priority of MMDSBeacons from MDS so they ...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23554
merged
- 07:46 PM Backport #26924: luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23556
merged
- 07:45 PM Backport #26915: luminous: handle ceph_ll_close on unmounted filesystem without crashing
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23617
merged
- 07:44 PM Backport #26981: luminous: mds: crash when dumping ops in flight
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23677
merged
- 07:44 PM Backport #26987: luminous: mds: explain delayed client_request due to subtree migration
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23678
merged
- 07:43 PM Backport #25205: luminous: CephVolumeClient: delay required after adding data pool to MDSMap
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23726
merged
- 07:32 PM Bug #36035 (Resolved): mds: MDCache.cc: 11673: abort()
- ...
- 05:41 PM Bug #35961: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- I take it back. The check is correct. What's happening is that ganesha is just calling CEPH_SETATTR_MTIME with the cu...
- 04:05 PM Bug #35961: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- It fell down on the truncate. The setattr mask shows CEPH_SETATTR_MTIME and CEPH_SETATTR_SIZE. I suspect this check i...
- 03:56 PM Bug #26968: klient: mount fails during MDS failover
- Zheng Yan wrote:
> It's mount timeout. I think it's related to socket failure injection
>
> [...]
Okay, how do...
- 07:55 AM Bug #36029 (New): ceph-fuse assert failed when try to do file lock
- When I use the ceph-fuse client to access CephFS, I found one client panicked and printed the stack below.
The ceph clus...
- 07:21 AM Bug #36028 (Fix Under Review): "ceph fs add_data_pool" applies pool application metadata incorrectly
- https://github.com/ceph/ceph/pull/24125
- 07:20 AM Bug #36028 (Resolved): "ceph fs add_data_pool" applies pool application metadata incorrectly
- From mailing list thread "[ceph-users] CephFS "authorize" on erasure-coded FS"
- 01:28 AM Backport #22504 (In Progress): luminous: client may fail to trim as many caps as MDS asked for
- https://github.com/ceph/ceph/pull/24119
09/16/2018
- 10:01 PM Bug #35916 (Pending Backport): mds: rctime may go back
- 09:59 PM Bug #35945 (Pending Backport): client: update ctime when modifying file content
- 09:51 PM Bug #26961 (Pending Backport): mds: fix instances of wrongly sending client messages outside of M...
- 09:31 PM Bug #24129 (Pending Backport): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessi...
09/15/2018
- 02:15 AM Feature #22105 (Resolved): provide a way to look up snapshotted inodes by vinodeno_t
- 02:15 AM Backport #22239 (Rejected): luminous: provide a way to look up snapshotted inodes by vinodeno_t
- Rejecting this since we don't support snapshots in Luminous and there's no push for this feature.
09/13/2018
- 08:21 PM Backport #35983 (In Progress): luminous: mds: change mds perf counters can statistics filesystem ...
- 08:21 PM Backport #35983 (Resolved): luminous: mds: change mds perf counters can statistics filesystem ope...
- https://github.com/ceph/ceph/pull/24089
- 07:47 PM Backport #26990 (In Progress): luminous: mds: curate priority of perf counters sent to mgr
- 06:26 PM Backport #35976 (In Progress): luminous: mds: configurable timeout for client eviction
- 06:26 PM Backport #35976 (Resolved): luminous: mds: configurable timeout for client eviction
- https://github.com/ceph/ceph/pull/24086
- 06:26 PM Backport #35975 (Resolved): mimic: mds: configurable timeout for client eviction
- https://github.com/ceph/ceph/pull/24661
- 06:25 PM Feature #25188 (Pending Backport): mds: configurable timeout for client eviction
- 06:20 PM Backport #24862 (In Progress): luminous: ceph_volume_client: allow atomic update of RADOS objects
- 06:10 PM Backport #26851 (In Progress): luminous: ceph_volume_client: py3 compatible
- 05:46 PM Feature #14456 (Resolved): mon: prevent older/incompatible clients from mounting the file system
- 05:46 PM Backport #24914 (Resolved): mimic: mon: prevent older/incompatible clients from mounting the file...
- 04:16 PM Backport #24914: mimic: mon: prevent older/incompatible clients from mounting the file system
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23105
merged
- 02:57 AM Bug #35961: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- nfs-ganesha 2.6.3
ceph version 13.2.1
nfs client: nfs-utils-1.3.0-0.54.el7.x86_64
mount nfs export directory with ...
- 01:37 AM Bug #35961 (Resolved): nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
- How to reproduce:
1. mount the nfs-ganesha export directory
2. log in using user1 and create new file named abc.t...
09/12/2018
- 07:40 PM Backport #26888 (Resolved): mimic: mds: use self CPU usage to calculate load
- 05:14 PM Backport #26888: mimic: mds: use self CPU usage to calculate load
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23503
merged
- 07:39 PM Backport #26982 (Resolved): mimic: mds: crash when dumping ops in flight
- 05:13 PM Backport #26982: mimic: mds: crash when dumping ops in flight
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23672
merged
- 07:38 PM Backport #26984 (Resolved): mimic: client: requests that do name lookup may be sent to wrong mds
- 05:13 PM Backport #26984: mimic: client: requests that do name lookup may be sent to wrong mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23700
merged - 07:38 PM Backport #26923 (Resolved): mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
- 05:13 PM Backport #26923: mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23703
merged - 07:37 PM Backport #26929 (Resolved): mimic: MDSMonitor: note ignored beacons/map changes at higher debug l...
- 05:12 PM Backport #26929: mimic: MDSMonitor: note ignored beacons/map changes at higher debug level
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23704
merged - 07:37 PM Backport #25206 (Resolved): mimic: CephVolumeClient: delay required after adding data pool to MDSMap
- 05:12 PM Backport #25206: mimic: CephVolumeClient: delay required after adding data pool to MDSMap
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23725
merged - 07:36 PM Bug #26967 (Resolved): qa: kcephfs suite has kernel build failures
- 07:36 PM Bug #24679 (Resolved): qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 07:35 PM Backport #26976 (Resolved): mimic: qa: kcephfs suite has kernel build failures
- 05:11 PM Backport #26976: mimic: qa: kcephfs suite has kernel build failures
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23769
merged - 07:35 PM Backport #26956 (Resolved): mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- 05:11 PM Backport #26956: mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23769
merged - 07:33 PM Bug #24522 (Resolved): blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- 07:33 PM Backport #24717 (Resolved): luminous: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basi...
- ...
- 07:30 PM Backport #26988 (Resolved): mimic: mds: explain delayed client_request due to subtree migration
- 04:40 PM Backport #26988: mimic: mds: explain delayed client_request due to subtree migration
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23792
merged - 07:30 PM Feature #24604 (Resolved): Implement "cephfs-journal-tool event splice" equivalent for purge queue
- 07:29 PM Backport #26989 (Resolved): mimic: Implement "cephfs-journal-tool event splice" equivalent for pu...
- 04:39 PM Backport #26989: mimic: Implement "cephfs-journal-tool event splice" equivalent for purge queue
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23818
merged - 07:29 PM Backport #24863 (Resolved): mimic: ceph_volume_client: allow atomic update of RADOS objects
- 04:39 PM Backport #24863: mimic: ceph_volume_client: allow atomic update of RADOS objects
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23878
merged - 07:28 PM Backport #26978 (Resolved): mimic: cephfs-data-scan: print the max used ino
- 04:38 PM Backport #26978: mimic: cephfs-data-scan: print the max used ino
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23880
merged - 07:27 PM Backport #32086 (Resolved): mimic: mds: MDBalancer::try_rebalance() may stop prematurely
- 04:37 PM Backport #32086: mimic: mds: MDBalancer::try_rebalance() may stop prematurely
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23883
merged - 05:38 PM Backport #26991 (New): mimic: mds: curate priority of perf counters sent to mgr
- 01:52 PM Bug #35946: ceph df strange usage
- As it happens we are improving the internal metrics we are collecting so these values will be (hopefully) less confus...
- 01:51 PM Bug #35946 (Closed): ceph df strange usage
- The global usage is just adding the used and total stats (statfs(2) output for filestore) for every OSD in the system...
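As a rough worked example (hypothetical numbers): four OSDs each reporting 1 TiB total and 100 GiB used via statfs(2) would show a global total of 4 TiB and global usage of 400 GiB in `ceph df`, regardless of replication, which is why the raw usage can look much larger than the logical data stored.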
- 04:56 AM Bug #35946 (Closed): ceph df strange usage
- ...
- 03:20 AM Bug #26961 (Fix Under Review): mds: fix instances of wrongly sending client messages outside of M...
- https://github.com/ceph/ceph/pull/24048
- 01:51 AM Bug #35828 (Fix Under Review): qa: RuntimeError: FSCID 10 has no rank 1
- https://github.com/ceph/ceph/pull/24044
- 12:47 AM Bug #34531: cephfs does not appear in 'df' if the quota size is too small
- Not really sure how we can fix this...
- 12:42 AM Support #35694 (Rejected): CephFS stops working after upgrade from 12.2.7 to 12.2.8
- The appropriate forum for these questions/support is ceph-users. Please reopen an issue when you're certain you've fo...
- 12:35 AM Bug #27237 (Rejected): unix df reports global raw storage as usable storage after mimic 13.2.1 up...
- not a bug
09/11/2018
- 11:12 PM Bug #23380 (Resolved): mds: ceph.dir.rctime follows dir ctime not inode ctime
- Forked this new fix into a separate issue.
- 07:02 AM Bug #23380 (Fix Under Review): mds: ceph.dir.rctime follows dir ctime not inode ctime
- https://github.com/ceph/ceph/pull/24022
- 11:11 PM Bug #35945 (Resolved): client: update ctime when modifying file content
- Follow-up fix to #23380.
https://github.com/ceph/ceph/pull/24022
- 10:53 PM Backport #26904 (In Progress): luminous: qa: reduce slow warnings arising due to limited testing ...
- 08:39 PM Backport #24841 (Resolved): mimic: qa: move mds/client config to qa from teuthology ceph.conf.tem...
- 08:39 PM Backport #26903 (Resolved): mimic: qa: reduce slow warnings arising due to limited testing hardware
- 07:16 PM Backport #35940 (Resolved): mimic: client: statfs inode count odd
- https://github.com/ceph/ceph/pull/24377
- 07:16 PM Backport #35939 (Resolved): luminous: client: statfs inode count odd
- https://github.com/ceph/ceph/pull/24376
- 07:16 PM Backport #35938 (Resolved): mimic: mds: add average session age (uptime) perf counter
- https://github.com/ceph/ceph/pull/24467
- 07:16 PM Backport #35937 (Resolved): luminous: mds: add average session age (uptime) perf counter
- https://github.com/ceph/ceph/pull/24421
- 07:15 PM Backport #35934 (Resolved): mimic: client: cannot list out files created by another ceph-fuse client
- https://github.com/ceph/ceph/pull/24295
- 07:15 PM Backport #35933 (Resolved): luminous: client: cannot list out files created by another ceph-fuse ...
- https://github.com/ceph/ceph/pull/24282
- 07:15 PM Backport #35932 (Resolved): mimic: mds: retry remounting in ceph-fuse on dcache invalidation
- https://github.com/ceph/ceph/pull/24695
- 07:15 PM Backport #35931 (Resolved): luminous: mds: retry remounting in ceph-fuse on dcache invalidation
- https://github.com/ceph/ceph/pull/24303
- 06:27 PM Bug #27657 (Pending Backport): mds: retry remounting in ceph-fuse on dcache invalidation
- 06:26 PM Bug #27051 (Pending Backport): client: cannot list out files created by another ceph-fuse client
- 06:26 PM Bug #24849 (Pending Backport): client: statfs inode count odd
- 06:22 PM Feature #25013 (Pending Backport): mds: add average session age (uptime) perf counter
- 05:44 PM Documentation #27209 (Resolved): doc: document state of kernel client feature parity with ceph-fuse
- 12:24 PM Bug #27237: unix df reports global raw storage as usable storage after mimic 13.2.1 update.
- Reporting raw available space was the historic behaviour of cephfs's fsstat implementation -- it's because in the gen...
- 10:41 AM Bug #34531: cephfs does not appear in 'df' if the quota size is too small
- Interesting... it's not obvious how we can fix this, if the quota is below one block then it probably doesn't make se...
- 10:35 AM Support #35694: CephFS stops working after upgrade from 12.2.7 to 12.2.8
- Suggest setting "debug mds = 10" and gathering logs from the period when an MDS daemon goes active to the point that ...
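A hedged example of how that could be collected (the daemon name is a placeholder and the log path assumes the packaging default):
    ceph daemon mds.<name> config set debug_mds 10    # raise the debug level at runtime via the admin socket
    # then capture /var/log/ceph/ceph-mds.<name>.log from the moment the MDS goes active until it stops responding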
- 08:22 AM Bug #26968: klient: mount fails during MDS failover
- It's mount timeout. I think it's related to socket failure injection...
- 07:53 AM Bug #35829: qa: workunits/fs/misc/acl.sh failure from unexpected system.posix_acl_default attribute
- Looks like the test executes qa/workunits/fs/misc/acl.sh on two mounts at the same time. acl.sh is not designed to be ru...
- 07:15 AM Bug #35916: mds: rctime may go back
- /ceph/teuthology-archive/pdonnell-2018-09-08_06:58:28-kcephfs-wip-pdonnell-testing-20180908.044939-distro-basic-smith...
- 07:04 AM Bug #35916 (Fix Under Review): mds: rctime may go back
- https://github.com/ceph/ceph/pull/24023
- 02:11 AM Bug #35916 (Resolved): mds: rctime may go back
- Code like the following can cause rctime to go backwards:
if (new_mtime > pi.inode.ctime)
pi.inode.ctime = pi.inode.rstat.rct...
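As a general illustration only (a hypothetical helper, not the actual patch, which is in https://github.com/ceph/ceph/pull/24023 above), the usual guard is to only ever advance the recursive ctime, never rewind it:
    // hypothetical sketch: rctime must be monotonically non-decreasing
    void maybe_advance_rctime(utime_t &rctime, const utime_t &candidate) {
      if (candidate > rctime)
        rctime = candidate;   // advance only; never move rctime backwards
    }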
09/10/2018
- 09:19 PM Bug #35850: mds: runs out of file descriptors after several respawns
- https://github.com/ceph/ceph/pull/24020
- 01:41 PM Feature #35411 (In Progress): mds: store session birth time on-disk in session map
- 01:36 PM Bug #26901: mds: no throttlers set on incoming messages
- Zheng was working on this as part of his efforts to sort client requests by priority but ran into issues. Will revisi...
09/07/2018
- 11:19 PM Backport #35859 (In Progress): luminous: MDSMonitor: lookup of gid in prepare_beacon that has bee...
- 11:18 PM Backport #35859 (Resolved): luminous: MDSMonitor: lookup of gid in prepare_beacon that has been r...
- https://github.com/ceph/ceph/pull/23990
- 11:18 PM Backport #35858 (Resolved): mimic: MDSMonitor: lookup of gid in prepare_beacon that has been remo...
- https://github.com/ceph/ceph/pull/24272
- 11:18 PM Bug #35848 (Pending Backport): MDSMonitor: lookup of gid in prepare_beacon that has been removed ...
- 07:39 PM Bug #35848 (Fix Under Review): MDSMonitor: lookup of gid in prepare_beacon that has been removed ...
- https://github.com/ceph/ceph/pull/23984
- 07:04 PM Bug #35848: MDSMonitor: lookup of gid in prepare_beacon that has been removed will cause exception
- ...
- 05:30 PM Bug #35848 (Resolved): MDSMonitor: lookup of gid in prepare_beacon that has been removed will cau...
- ...
- 11:17 PM Bug #35850 (In Progress): mds: runs out of file descriptors after several respawns
- 10:36 PM Bug #35850 (Pending Backport): mds: runs out of file descriptors after several respawns
- 09:41 PM Bug #35850 (In Progress): mds: runs out of file descriptors after several respawns
- 07:05 PM Bug #35850 (Resolved): mds: runs out of file descriptors after several respawns
- ...
- 10:21 AM Backport #35841 (Resolved): mimic: client: segmentation fault in handle_client_reply
- https://github.com/ceph/ceph/pull/24187
- 10:21 AM Backport #35840 (Duplicate): luminous: unhealthy heartbeat map during subtree migration
- https://github.com/ceph/ceph/pull/23507
- 10:21 AM Backport #35839 (Duplicate): mimic: unhealthy heartbeat map during subtree migration
- https://github.com/ceph/ceph/pull/23506
- 10:21 AM Backport #35838 (Resolved): luminous: mds: use monotonic clock for beacon message timekeeping
- https://github.com/ceph/ceph/pull/24311
- 10:21 AM Backport #35837 (Resolved): mimic: mds: use monotonic clock for beacon message timekeeping
- https://github.com/ceph/ceph/pull/24467
- 08:02 AM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- I just noticed that the qa test added in this PR doesn't check the >> case.
qa/workunits/fs: test for cephfs rs...
- 07:57 AM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- Patrick Donnelly wrote:
> Dan van der Ster wrote:
> > Sorry but I think we need to re-open this.
> >
> > On v12....
- 01:53 AM Bug #15783 (New): client: enable acls by default
- 01:52 AM Bug #18532 (New): mds: forward scrub failing to repair dir stats (was: subdir with corrupted dirs...
- 01:51 AM Bug #24522 (Pending Backport): blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
- 01:50 AM Feature #26974: mds: provide mechanism to allow new instance of an application to cancel old MDS ...
- New PR: https://github.com/ceph/ceph/pull/23069
- 01:48 AM Bug #24881 (Pending Backport): unhealthy heartbeat map during subtree migration
- 01:48 AM Bug #24870 (Resolved): ceph-debug-docker: python3 libraries not installed in docker image
- 01:47 AM Bug #26959 (Pending Backport): mds: use monotonic clock for beacon message timekeeping
- 01:47 AM Documentation #22989 (Resolved): doc: add documentation for MDS states
- 01:37 AM Bug #24557 (Pending Backport): client: segmentation fault in handle_client_reply
09/06/2018
- 06:04 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- Dropping priority on this as there has been no known recurrence.
- 05:59 PM Bug #26995 (Resolved): qa: specify distro/kernel matrix to test in kclient qa-suite
- 05:54 PM Bug #35829 (Rejected): qa: workunits/fs/misc/acl.sh failure from unexpected system.posix_acl_defa...
- ...
- 05:21 PM Bug #35828 (Resolved): qa: RuntimeError: FSCID 10 has no rank 1
- ...
- 06:51 AM Backport #32100 (In Progress): mimic: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23952
- 12:23 AM Backport #35722 (In Progress): mimic: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23945
- 12:11 AM Backport #35722 (Resolved): mimic: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23105
- 12:22 AM Backport #35721 (In Progress): luminous: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23946
- 12:09 AM Backport #35721 (Resolved): luminous: evicting client session may block finisher thread
- https://github.com/ceph/ceph/pull/23946
- 12:06 AM Bug #35720 (Resolved): evicting client session may block finisher thread
- Thread 11 (Thread 0x7f14e11c5700 (LWP 10777)):
#0 0x00007f14ec009995 in pthread_cond_wait@@GLIBC_2.3.2 () from /li...
09/05/2018
- 11:44 PM Backport #35719 (Resolved): mimic: mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/24161
- 11:44 PM Backport #35718 (Resolved): luminous: mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/24138
- 11:43 PM Bug #35250 (Pending Backport): mds: beacon spams is_laggy message
- 04:02 PM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- Dan van der Ster wrote:
> Sorry but I think we need to re-open this.
>
> On v12.2.8 ceph-fuse client and server, ... - 02:56 PM Bug #23380: mds: ceph.dir.rctime follows dir ctime not inode ctime
- Sorry but I think we need to re-open this.
On v12.2.8 ceph-fuse client and server, we see the exact same behaviour... - 12:34 PM Support #35694 (Rejected): CephFS stops working after upgrade from 12.2.7 to 12.2.8
- Hi !
We have done an upgrade from 12.2.7 to 12.2.8 on Ubuntu 14.04.5 LTS (amd64).
In the order MON-OSD-MDS-MGR-RA...
- 05:31 AM Bug #24030 (New): ceph-fuse: double dash meaning
09/04/2018
- 10:00 AM Bug #27657 (Fix Under Review): mds: retry remounting in ceph-fuse on dcache invalidation
- PR https://github.com/ceph/ceph/pull/23908
09/03/2018
- 04:23 AM Feature #35411 (In Progress): mds: store session birth time on-disk in session map
- PR https://github.com/ceph/ceph/pull/23314 adds session birth time to track average session age. During MDS failover ...
09/02/2018
- 05:03 PM Backport #32084 (In Progress): luminous: mds: MDBalancer::try_rebalance() may stop prematurely
- 04:52 PM Backport #32086 (In Progress): mimic: mds: MDBalancer::try_rebalance() may stop prematurely
- 04:48 PM Backport #26991 (In Progress): mimic: mds: curate priority of perf counters sent to mgr
- 04:40 PM Backport #26977 (In Progress): luminous: cephfs-data-scan: print the max used ino
- 04:38 PM Backport #26978 (In Progress): mimic: cephfs-data-scan: print the max used ino
- 02:25 PM Backport #24863 (In Progress): mimic: ceph_volume_client: allow atomic update of RADOS objects
- 02:12 PM Backport #24842 (In Progress): luminous: qa: move mds/client config to qa from teuthology ceph.co...
- 02:10 PM Backport #24841 (In Progress): mimic: qa: move mds/client config to qa from teuthology ceph.conf....
- 01:52 AM Bug #35250 (Fix Under Review): mds: beacon spams is_laggy message
- https://github.com/ceph/ceph/pull/23851
- 01:50 AM Bug #35250 (Resolved): mds: beacon spams is_laggy message
- e.g....
08/30/2018
- 03:15 PM Bug #34531 (New): cephfs does not appear in 'df' if the quota size is too small
- cephfs does not appear in 'df' if the quota size is too small
If the quota is 10000000 (bytes), the mount point ap...
- 05:25 AM Backport #26989 (In Progress): mimic: Implement "cephfs-journal-tool event splice" equivalent for...
- https://github.com/ceph/ceph/pull/23818
08/29/2018
- 05:26 AM Backport #26983 (In Progress): luminous: client: requests that do name lookup may be sent to wron...
- https://github.com/ceph/ceph/pull/23793
- 05:24 AM Backport #26988 (In Progress): mimic: mds: explain delayed client_request due to subtree migration
- https://github.com/ceph/ceph/pull/23792
08/28/2018
- 11:42 PM Backport #32098 (In Progress): luminous: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23789
- 11:10 AM Backport #32098 (Resolved): luminous: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23789
- 10:14 PM Backport #26976 (In Progress): mimic: qa: kcephfs suite has kernel build failures
- 08:35 PM Backport #24931 (Resolved): mimic: client: put instance/addr information in status asok command
- 08:00 PM Backport #24931: mimic: client: put instance/addr information in status asok command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23109
merged
- 08:26 PM Bug #24856 (Resolved): mds may get discontinuous mdsmap
- 08:26 PM Backport #25047 (Resolved): mimic: mds may get discontinuous mdsmap
- 08:00 PM Backport #25047: mimic: mds may get discontinuous mdsmap
- Zheng Yan wrote:
> https://github.com/ceph/ceph/pull/23180
merged
- 08:25 PM Bug #24852 (Resolved): mds: dump MDSMap epoch to log at low debug
- 08:25 PM Backport #25035 (Resolved): mimic: mds: dump MDSMap epoch to log at low debug
- 08:00 PM Backport #25035: mimic: mds: dump MDSMap epoch to log at low debug
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23196
merged
- 08:24 PM Backport #26905 (Resolved): mimic: MDSMonitor: consider raising priority of MMDSBeacons from MDS ...
- 07:59 PM Backport #26905: mimic: MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23565
merged
- 01:00 PM Bug #23958 (Resolved): mds: scrub doesn't always return JSON results
- 01:00 PM Backport #25037 (Resolved): mimic: mds: scrub doesn't always return JSON results
- 01:00 PM Bug #24853 (Resolved): mds: dump recent (memory) log messages before respawning due to being remo...
- 01:00 PM Backport #25040 (Resolved): mimic: mds: dump recent (memory) log messages before respawning due t...
- 12:59 PM Bug #24855 (Resolved): mds: reduce debugging for missing inodes during subtree migration
- 12:59 PM Backport #25042 (Resolved): mimic: mds: reduce debugging for missing inodes during subtree migration
- 12:54 PM Backport #25045 (Resolved): mimic: mds: create health warning if we detect metadata (journal) wri...
- 12:53 PM Backport #25044 (Resolved): mimic: overhead of g_conf->get_val<type>("config name") is high
- 12:51 PM Backport #26914 (Resolved): mimic: handle ceph_ll_close on unmounted filesystem without crashing
- 11:20 AM Backport #26956 (In Progress): mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
- Included both #24679 and #26967:
https://github.com/ceph/ceph/pull/23769
- 11:13 AM Bug #26966 (Resolved): nfs-ganesha: epochs out of sync in cluster
- Patch merged.
- 11:11 AM Backport #32104 (Resolved): mimic: mds: allows client to create ".." and "." dirents
- https://github.com/ceph/ceph/pull/24384
- 11:11 AM Backport #32103 (Resolved): luminous: mds: allows client to create ".." and "." dirents
- https://github.com/ceph/ceph/pull/24329
- 11:10 AM Backport #32100 (Resolved): mimic: mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23952
- 11:10 AM Backport #32092 (Resolved): mimic: mds: migrate strays part by part when shutdown mds
- https://github.com/ceph/ceph/pull/24435
- 11:10 AM Backport #32091 (Resolved): luminous: mds: migrate strays part by part when shutdown mds
- https://github.com/ceph/ceph/pull/24324
- 11:10 AM Backport #32090 (Resolved): mimic: mds: use monotonic clock for beacon sender thread waits
- https://github.com/ceph/ceph/pull/24467
- 11:10 AM Backport #32088 (Resolved): luminous: mds: use monotonic clock for beacon sender thread waits
- https://github.com/ceph/ceph/pull/24375
- 11:10 AM Backport #32086 (Resolved): mimic: mds: MDBalancer::try_rebalance() may stop prematurely
- https://github.com/ceph/ceph/pull/23883
- 11:10 AM Backport #32084 (Resolved): luminous: mds: MDBalancer::try_rebalance() may stop prematurely
- https://github.com/ceph/ceph/pull/23884
- 05:41 AM Bug #27657 (In Progress): mds: retry remounting in ceph-fuse on dcache invalidation
- 05:28 AM Bug #27216: kclient: usable space suddently decreased
- Patrick Donnelly wrote:
> Vasanth M wrote:
> > Zheng Yan wrote:
> > > what do you mean "decrease"
> > We had 19Tb...
08/27/2018
- 08:20 PM Backport #25037: mimic: mds: scrub doesn't always return JSON results
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23225
merged
- 08:20 PM Backport #25040: mimic: mds: dump recent (memory) log messages before respawning due to being rem...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23275
merged
- 08:19 PM Backport #25042: mimic: mds: reduce debugging for missing inodes during subtree migration
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23309
merged
- 08:19 PM Backport #25045: mimic: mds: create health warning if we detect metadata (journal) writes are slow
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23343
merged
- 08:19 PM Cleanup #24820: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23407 merged
- 08:17 PM Backport #26914: mimic: handle ceph_ll_close on unmounted filesystem without crashing
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23603
merged
- 05:14 PM Bug #27216: kclient: usable space suddently decreased
- Vasanth M wrote:
> Zheng Yan wrote:
> > what do you mean "decrease"
> We had 19Tb of total storage , Totally we...
- 09:26 AM Bug #27657 (Resolved): mds: retry remounting in ceph-fuse on dcache invalidation
- Right now, failure in remounting on dcache invalidation results in the following crash:
> 1: (()+0x6d1ff4) [0x55f2...
08/25/2018
- 08:14 PM Feature #25131 (Pending Backport): mds: optimize the way how max export size is enforced
- 08:14 PM Bug #26926 (Pending Backport): mds: migrate strays part by part when shutdown mds
- 07:57 PM Bug #26973 (Pending Backport): mds: MDBalancer::try_rebalance() may stop prematurely
- 07:56 PM Bug #25113 (Pending Backport): mds: allows client to create ".." and "." dirents
- 07:51 PM Bug #26962 (Pending Backport): mds: use monotonic clock for beacon sender thread waits
- 01:02 AM Bug #27237 (Rejected): unix df reports global raw storage as usable storage after mimic 13.2.1 up...
- os: Ubuntu 18.04.1
ceph version: 13.2.1-1
repo: ceph stable for mimic - 'deb https://download.ceph.com/debian-mimic...
08/24/2018
- 10:29 AM Bug #27216: kclient: usable space suddently decreased
- Zheng Yan wrote:
> what do you mean "decrease"
We had 19Tb of total storage , Totally we have 4 osds each have ...
- 09:36 AM Bug #27216: kclient: usable space suddently decreased
- what do you mean "decrease"
- 06:49 AM Bug #27216 (New): kclient: usable space suddently decreased
- Hi,
The available space on the CephFS mount point (client side) suddenly decreased, but the cluster's overall storage avail...
- 09:36 AM Documentation #27209 (In Progress): doc: document state of kernel client feature parity with ceph...
- https://github.com/ceph/ceph/pull/23728
- 03:54 AM Backport #25205 (In Progress): luminous: CephVolumeClient: delay required after adding data pool ...
- 03:50 AM Backport #25206 (In Progress): mimic: CephVolumeClient: delay required after adding data pool to ...
08/23/2018
- 09:15 PM Bug #27051 (Fix Under Review): client: cannot list out files created by another ceph-fuse client
- 04:14 PM Documentation #27209 (Resolved): doc: document state of kernel client feature parity with ceph-fuse
- In particular, snapshots are now supported but as of which version? Quotas? etc.
- 08:19 AM Backport #26929 (In Progress): mimic: MDSMonitor: note ignored beacons/map changes at higher debu...
- https://github.com/ceph/ceph/pull/23704
- 08:16 AM Backport #26923 (In Progress): mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
- https://github.com/ceph/ceph/pull/23703
- 06:28 AM Backport #26984 (In Progress): mimic: client: requests that do name lookup may be sent to wrong mds