Activity
From 07/03/2018 to 08/01/2018
08/01/2018
- 04:54 PM Cleanup #25111 (Resolved): mds: use vector to manage Contexts rather than a list
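A minimal sketch of the cleanup above, using hypothetical names (not the actual MDS context types): completion callbacks held in a contiguous std::vector instead of a node-based std::list, swapped out before running so callbacks can safely queue new waiters without invalidating the iteration.

```cpp
#include <functional>
#include <vector>

// Hypothetical stand-in for the MDS Context type.
using Context = std::function<void(int)>;

// Complete all queued contexts with 'result'. Swapping the vector out
// first means callbacks may push new waiters onto 'waiting' safely.
void finish_contexts(std::vector<Context>& waiting, int result) {
    std::vector<Context> to_run;
    to_run.swap(waiting);          // 'waiting' is now empty and reusable
    for (auto& c : to_run)
        c(result);
}
```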
- 03:06 PM Bug #25215 (Fix Under Review): mds: changing mds_cache_memory_limit causes boost::bad_get exception
- https://github.com/ceph/ceph/pull/23365
- 02:16 PM Bug #25215 (Resolved): mds: changing mds_cache_memory_limit causes boost::bad_get exception
- ...
- 12:57 PM Bug #25213 (Resolved): handle ceph_ll_close on unmounted filesystem without crashing
- Client::_unmount will unmap and tear down all of the open Fh objects before returning. Programs that use the lowlevel...
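A minimal sketch of the guard described above, with hypothetical simplified types (the real Client code is more involved): a lowlevel close on an unmounted client returns an error instead of touching Fh state that _unmount has already torn down.

```cpp
#include <cerrno>
#include <mutex>

struct Fh {};                      // stand-in for the client file handle

struct Client {
    std::mutex lock;
    bool mounted = true;
    int do_close(Fh*) { return 0; }

    // Guarded lowlevel close: report an error on an unmounted client
    // rather than dereferencing handles freed during _unmount.
    int ll_close(Fh* fh) {
        std::lock_guard<std::mutex> l(lock);
        if (!mounted)
            return -ENOTCONN;      // unmount already tore down all Fh objects
        return do_close(fh);
    }
};
```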
- 09:02 AM Feature #25131: mds: optimize the way how max export size is enforced
- When importing a large subtree, the mds can spend a long time sending cap import messages.
Importer...
07/31/2018
- 10:45 PM Backport #25206 (Resolved): mimic: CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/pull/23725
- 10:45 PM Backport #25205 (Resolved): luminous: CephVolumeClient: delay required after adding data pool to ...
- https://github.com/ceph/ceph/pull/23726
- 10:35 PM Feature #25188: mds: configurable timeout for client eviction
- Changed wording.
- 04:18 AM Feature #25188 (Resolved): mds: configurable timeout for client eviction
- If a client refuses to release caps, the MDS should allow for a configurable timeout (default unlimited) where it aut...
- 09:51 PM Feature #24430 (Resolved): libcephfs: provide API to change umask
- 08:48 AM Bug #24307 (Resolved): pjd: cd: too many arguments
- 08:48 AM Backport #24311 (Resolved): luminous: pjd: cd: too many arguments
- 08:47 AM Bug #24579 (Resolved): client: returning garbage (?) for readdir
- 08:47 AM Bug #24680 (Resolved): qa: iogen.sh: line 7: cd: too many arguments
- 08:47 AM Backport #24718 (Resolved): luminous: client: returning garbage (?) for readdir
- 08:47 AM Backport #24828 (Resolved): luminous: qa: iogen.sh: line 7: cd: too many arguments
- 08:46 AM Bug #24239 (Resolved): cephfs-journal-tool: Importing a zero-length purge_queue journal breaks it...
- 08:46 AM Backport #24860 (Resolved): luminous: cephfs-journal-tool: Importing a zero-length purge_queue jo...
- 08:46 AM Bug #22683 (Resolved): client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 08:45 AM Backport #23015 (Resolved): luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 08:45 AM Backport #24932 (Resolved): luminous: client: put instance/addr information in status asok command
- 08:44 AM Backport #25038 (Resolved): luminous: mds: scrub doesn't always return JSON results
- 08:44 AM Bug #24440 (Resolved): common/DecayCounter: set last_decay to current time when decoding decay co...
- 08:44 AM Backport #24538 (Resolved): luminous: common/DecayCounter: set last_decay to current time when de...
- 08:11 AM Bug #24849 (Fix Under Review): client: statfs inode count odd
- 07:12 AM Backport #25045 (In Progress): mimic: mds: create health warning if we detect metadata (journal) ...
- https://github.com/ceph/ceph/pull/23343
- 02:40 AM Backport #25046 (Need More Info): luminous: mds: create health warning if we detect metadata (jou...
- Some files are missing in the luminous branch, e.g. src/mds/OpenFileTable.cc
$ git status
On branch wip-25046-luminous...
07/30/2018
- 11:11 PM Bug #25141 (Pending Backport): CephVolumeClient: delay required after adding data pool to MDSMap
- 09:28 PM Backport #25044: mimic: overhead of g_conf->get_val<type>("config name") is high
- Zheng, PTAL.
- 05:30 AM Backport #25044 (Need More Info): mimic: overhead of g_conf->get_val<type>("config name") is high
- The ConfigProxy is not defined in mimic and needs to be backported:
In file included from /home/pdvian/backport/ceph...
- 08:56 PM Bug #24052 (Resolved): repeated eviction of idle client until some IO happens
- 08:55 PM Backport #24295 (Resolved): luminous: repeated eviction of idle client until some IO happens
- 08:43 PM Backport #24295: luminous: repeated eviction of idle client until some IO happens
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22780
merged
- 08:55 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
- 08:54 PM Backport #23989 (Resolved): luminous: mds: don't report slow request for blocked filelock request
- 08:41 PM Backport #23989: luminous: mds: don't report slow request for blocked filelock request
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22782
merged
- 08:54 PM Bug #24269 (Resolved): multimds pjd open test fails
- 08:53 PM Backport #24540 (Resolved): luminous: multimds pjd open test fails
- 08:39 PM Backport #24540: luminous: multimds pjd open test fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22783
merged
- 08:52 PM Backport #24535 (Resolved): luminous: client: _ll_drop_pins travel inode_map may access invalid ‘...
- 08:48 PM Backport #24694 (Resolved): luminous: PurgeQueue sometimes ignores Journaler errors
- 08:37 PM Backport #24694: luminous: PurgeQueue sometimes ignores Journaler errors
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22811
merged
- 08:48 PM Backport #24311: luminous: pjd: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22883
merged
Reviewed-by: Patrick Donnelly <pdonnell@redh...
- 08:47 PM Backport #24718: luminous: client: returning garbage (?) for readdir
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22955
merged
- 08:47 PM Backport #24828: luminous: qa: iogen.sh: line 7: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22955
merged
- 08:47 PM Backport #24860: luminous: cephfs-journal-tool: Importing a zero-length purge_queue journal break...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22980
merged
- 08:47 PM Backport #23015: luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23012
merged
- 08:46 PM Backport #24932: luminous: client: put instance/addr information in status asok command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23107
merged
- 08:45 PM Backport #25038: luminous: mds: scrub doesn't always return JSON results
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/23222
merged
- 08:44 PM Backport #24538: luminous: common/DecayCounter: set last_decay to current time when decoding deca...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22779
merged - 08:38 PM Backport #24534: mimic: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- https://github.com/ceph/ceph/pull/22786 merged
- 02:35 PM Bug #24849: client: statfs inode count odd
- Assigns total number of files on FS (directories are counted as files) to f_files - https://github.com/ceph/ceph/pull...
- 01:22 PM Feature #12282 (In Progress): mds: progress/abort/pause interface for ongoing scrubs
- 01:14 PM Feature #24286 (Fix Under Review): tools: create CephFS shell
- 01:13 PM Feature #24286: tools: create CephFS shell
- https://github.com/ceph/ceph/pull/23158
- 09:25 AM Feature #25013 (Fix Under Review): mds: add average session age (uptime) perf counter
- PR https://github.com/ceph/ceph/pull/23314
- 04:26 AM Backport #25042 (In Progress): mimic: mds: reduce debugging for missing inodes during subtree mig...
- https://github.com/ceph/ceph/pull/23309
07/29/2018
- 04:02 PM Bug #25148 (Won't Fix - EOL): "ceph session ls" produces unparseable json when run against ceph-m...
- h3. What?
Apparent race condition in upgrade test
h3. Where?
It happened in the upgrade test @upgrade:lumino...
07/28/2018
- 12:04 AM Bug #25141 (Fix Under Review): CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/pull/23297
- 12:00 AM Bug #25141 (In Progress): CephVolumeClient: delay required after adding data pool to MDSMap
07/27/2018
- 11:59 PM Bug #25141 (Resolved): CephVolumeClient: delay required after adding data pool to MDSMap
- https://github.com/ceph/ceph/blob/726ca169ee23df2f5521ca0c9677614c6e3a2145/src/pybind/ceph_volume_client.py#L632-L636...
- 09:54 AM Feature #24444: cephfs: make InodeStat, DirStat, LeaseStat versioned
- https://github.com/ceph/ceph/pull/23280
- 03:44 AM Backport #25040 (In Progress): mimic: mds: dump recent (memory) log messages before respawning du...
- https://github.com/ceph/ceph/pull/23275
- 01:57 AM Feature #25131 (Fix Under Review): mds: optimize the way how max export size is enforced
- https://github.com/ceph/ceph/pull/23088
- 01:55 AM Feature #25131 (Resolved): mds: optimize the way how max export size is enforced
- The current way of enforcing export size is to check the size after the subtree gets frozen. It may freeze a large subtree...
07/26/2018
- 04:49 PM Bug #24823 (Fix Under Review): mds: deadlock when setting config value via admin socket
- 01:05 AM Bug #25113 (Resolved): mds: allows client to create ".." and "." dirents
- Pavani Rajula (our Outreachy Intern) found a fantastic bug. Apparently the MDS will happily allow the client to creat...
07/25/2018
- 11:05 PM Cleanup #25111: mds: use vector to manage Contexts rather than a list
- https://github.com/ceph/ceph/pull/23195
- 11:05 PM Cleanup #25111 (Resolved): mds: use vector to manage Contexts rather than a list
- This is all about the usual benefits of converting a list to a vector without any of the potential drawbacks:
* We...
- 02:19 PM Support #25089: Many slow requests
- Robert Sander wrote:
> Why was this rejected?
>
> I see slow requests when running more than one active MDS in Ce...
- 02:13 PM Support #25089: Many slow requests
- Why was this rejected?
I see slow requests when running more than one active MDS in Ceph Luminous. Ceph Luminous s...
- 02:11 PM Support #25089 (Rejected): Many slow requests
- Please seek further help on ceph-users first until it's more certain a bug is found.
- 01:46 PM Support #25089: Many slow requests
- Zheng Yan wrote:
> you need to run "ceph mds deactive xxx". see http://docs.ceph.com/docs/luminous/cephfs/multimds/
...
- 12:16 PM Support #25089: Many slow requests
- you need to run "ceph mds deactive xxx". see http://docs.ceph.com/docs/luminous/cephfs/multimds/
- 08:20 AM Support #25089 (Rejected): Many slow requests
- We see many slow requests on a multi MDS setup with 12.2.7.
With 12.2.4 this was not the case, at least not that m...
- 02:15 PM Bug #25099: mds: don't use dispatch queue length to calculate mds load
- Zheng, can you share the patch?
- 01:41 PM Bug #25099: mds: don't use dispatch queue length to calculate mds load
- two clients run "fsstress -d . -p 8 -n 100000 -f sync=0 -f write=0 -f dwrite=0". Attached PNGs are graphs of mds loa...
- 01:35 PM Bug #25099 (Closed): mds: don't use dispatch queue length to calculate mds load
- 09:53 AM Backport #25037 (In Progress): mimic: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23225
- 09:31 AM Bug #22008: Processes stuck waiting for write with ceph-fuse
- Zheng Yan wrote:
> The second one is actually different from the first one. Seems like the first one was caused by '...
- 08:26 AM Backport #25038 (In Progress): luminous: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23222
07/24/2018
- 07:12 PM Backport #25041 (In Progress): luminous: mds: reduce debugging for missing inodes during subtree ...
- 07:11 PM Backport #25039 (In Progress): luminous: mds: dump recent (memory) log messages before respawning...
- 07:09 PM Backport #25036 (In Progress): luminous: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23212
- 03:34 PM Backport #25034 (Resolved): mimic: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_RE...
- 03:25 AM Backport #25035 (In Progress): mimic: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23196
07/23/2018
- 05:30 PM Bug #24780 (Fix Under Review): Some cephfs tool commands silently operate on only rank 0, even if...
- PR https://github.com/ceph/ceph/pull/23187
- 05:01 PM Bug #24900 (Rejected): fs: ceph.file.layout.stripe_count changed after modification
- vi/vim will create a new file anytime you save. From strace:...
- 04:22 PM Bug #25049 (Need More Info): MDS journal flush too often and resulted in backlog
- 01:52 PM Bug #25049: MDS journal flush too often and resulted in backlog
- I need some mds logs to check why the mds flushes the journal so often.
- 01:48 PM Bug #24897 (Rejected): client: writes rejected by quota may truncate/zero parts of file
- 01:44 PM Bug #24897: client: writes rejected by quota may truncate/zero parts of file
- I think this is probably NOTABUG. The shell redirect will end up opening the destination file with O_TRUNC which will...
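The O_TRUNC behaviour described above can be demonstrated with plain POSIX calls, no CephFS needed: the truncation happens at open time, before any subsequent write could be rejected (e.g. by a quota).

```cpp
#include <cstdio>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

// Write some contents, then mimic a redirect whose writes all fail:
// open with O_TRUNC and write nothing. Returns the resulting file size.
long size_after_failed_redirect(const char* path) {
    FILE* f = std::fopen(path, "w");
    std::fputs("original contents", f);
    std::fclose(f);

    int fd = ::open(path, O_WRONLY | O_TRUNC);  // truncates immediately
    // ...suppose every write here were rejected; nothing gets written
    ::close(fd);

    struct stat st{};
    ::stat(path, &st);
    return static_cast<long>(st.st_size);       // the open alone emptied it
}
```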
- 01:03 PM Backport #25047 (In Progress): mimic: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23180
- 08:12 AM Feature #17854: mds: only evict an unresponsive client when another client wants its caps
- Unresponsive/stale clients do not hold any caps[1]. Therefore, deferring their eviction would mean keeping them infin...
07/22/2018
- 10:24 AM Backport #25048: luminous: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23169
- 08:51 AM Feature #24430 (Fix Under Review): libcephfs: provide API to change umask
- https://github.com/ceph/ceph/pull/23157
07/21/2018
- 07:04 AM Bug #25049 (Need More Info): MDS journal flush too often and resulted in backlog
- When doing some operation like rmtree, the MDS gets stuck because it flushes the journal too often and the OSD for metadata hav...
- 03:55 AM Backport #25048 (In Progress): luminous: mds may get discontinuous mdsmap
- Zheng's WIP: https://github.com/ukernel/ceph/commits/luminous-24856
- 03:53 AM Backport #25048 (Resolved): luminous: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23169
- 03:53 AM Backport #25047 (Resolved): mimic: mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/23180
- 03:53 AM Bug #24856 (Pending Backport): mds may get discontinuous mdsmap
- 03:28 AM Backport #25046 (Resolved): luminous: mds: create health warning if we detect metadata (journal) ...
- https://github.com/ceph/ceph/pull/24171
- 03:28 AM Backport #25045 (Resolved): mimic: mds: create health warning if we detect metadata (journal) wri...
- https://github.com/ceph/ceph/pull/23343
- 03:28 AM Bug #24879 (Pending Backport): mds: create health warning if we detect metadata (journal) writes ...
- 03:25 AM Backport #25044 (Resolved): mimic: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23407
- 03:25 AM Backport #25043 (Resolved): luminous: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23408
- 03:24 AM Cleanup #24820 (Pending Backport): overhead of g_conf->get_val<type>("config name") is high
- 03:23 AM Backport #25042 (Resolved): mimic: mds: reduce debugging for missing inodes during subtree migration
- https://github.com/ceph/ceph/pull/23309
- 03:23 AM Backport #25041 (Resolved): luminous: mds: reduce debugging for missing inodes during subtree mig...
- https://github.com/ceph/ceph/pull/23214
- 03:23 AM Bug #24855 (Pending Backport): mds: reduce debugging for missing inodes during subtree migration
- 03:19 AM Backport #25040 (Resolved): mimic: mds: dump recent (memory) log messages before respawning due t...
- https://github.com/ceph/ceph/pull/23275
- 03:19 AM Backport #25039 (Resolved): luminous: mds: dump recent (memory) log messages before respawning du...
- https://github.com/ceph/ceph/pull/23213
- 03:18 AM Bug #24853 (Pending Backport): mds: dump recent (memory) log messages before respawning due to be...
- 03:18 AM Backport #25038 (Resolved): luminous: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23222
- 03:18 AM Backport #25037 (Resolved): mimic: mds: scrub doesn't always return JSON results
- https://github.com/ceph/ceph/pull/23225
- 03:17 AM Backport #25036 (Resolved): luminous: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23212
- 03:17 AM Backport #25035 (Resolved): mimic: mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23196
- 03:17 AM Bug #24852 (Pending Backport): mds: dump MDSMap epoch to log at low debug
- 03:13 AM Bug #23958 (Pending Backport): mds: scrub doesn't always return JSON results
- 03:11 AM Bug #24893 (Resolved): client: add ceph_ll_fallocate
- 01:25 AM Backport #25033 (In Progress): luminous: "Health check failed: 1 MDSs report slow requests (MDS_S...
- 01:22 AM Backport #25033 (Resolved): luminous: "Health check failed: 1 MDSs report slow requests (MDS_SLOW...
- https://github.com/ceph/ceph/pull/23155
- 01:24 AM Backport #25034 (In Progress): mimic: "Health check failed: 1 MDSs report slow requests (MDS_SLOW...
- 01:22 AM Backport #25034 (Resolved): mimic: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_RE...
- https://github.com/ceph/ceph/pull/23154
- 12:43 AM Bug #25008 (Pending Backport): "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUES...
07/20/2018
- 10:23 PM Feature #25013 (In Progress): mds: add average session age (uptime) perf counter
- 06:14 AM Feature #25013 (Resolved): mds: add average session age (uptime) perf counter
- Add a performance counter in the MDS to capture average age (uptime) of sessions. This should reflect the average upt...
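A sketch of the proposed counter's arithmetic, using a hypothetical helper (not the actual perf-counter code): the average session age is the mean of (now - session_start) over all open sessions.

```cpp
#include <vector>

// Average session age (uptime) given each session's start timestamp.
// Returns 0.0 when there are no sessions.
double average_session_age(const std::vector<double>& starts, double now) {
    if (starts.empty())
        return 0.0;
    double total = 0.0;
    for (double t : starts)
        total += now - t;               // age of this session
    return total / starts.size();
}
```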
- 07:56 PM Bug #25008: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" in powercycle
- https://github.com/ceph/ceph/pull/23151
- 07:55 PM Bug #25008 (Fix Under Review): "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUES...
- 03:49 AM Bug #24881: unhealthy heartbeat map during subtree migration
- limit size of export
https://github.com/ceph/ceph/pull/23088
- 03:22 AM Bug #24925 (Rejected): qa: pjd test failed ctime change unexpected result
- ...
- 02:43 AM Cleanup #24820: overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23139
- 12:57 AM Documentation #22989 (Fix Under Review): doc: add documentation for MDS states
07/19/2018
- 09:37 PM Bug #25008 (Resolved): "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" in p...
- Run: http://pulpito.ceph.com/yuriw-2018-07-18_21:37:13-powercycle-mimic-distro-basic-smithi/
Jobs: 2796207
Logs: ht...
- 06:12 AM Backport #24934 (In Progress): luminous: cephfs-journal-tool: wrong layout info used
- -https://github.com/ceph/ceph/pull/23125-
- 06:10 AM Backport #24933 (In Progress): mimic: cephfs-journal-tool: wrong layout info used
- -https://github.com/ceph/ceph/pull/23124-
07/18/2018
- 03:37 PM Bug #24872 (Resolved): qa: client socket inaccessible without sudo
- 03:37 PM Backport #24904 (Resolved): mimic: qa: client socket inaccessible without sudo
- 02:25 PM Backport #24904: mimic: qa: client socket inaccessible without sudo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/23030
merged
- 01:26 PM Bug #24856: mds may get discontinuous mdsmap
- Steps to reproduce the mds getting stuck in the reconnect state, for mimic and later (the snap cache code gets confused)...
- 09:45 AM Bug #24823: mds: deadlock when setting config value via admin socket
- An alternate way would be to pass a reference to the lock to handle_conf_change() which could then acquire the locks ...
- 04:01 AM Backport #24931 (In Progress): mimic: client: put instance/addr information in status asok command
- https://github.com/ceph/ceph/pull/23109
- 02:24 AM Backport #24932 (In Progress): luminous: client: put instance/addr information in status asok com...
- https://github.com/ceph/ceph/pull/23107
- 12:56 AM Backport #24914 (In Progress): mimic: mon: prevent older/incompatible clients from mounting the f...
- https://github.com/ceph/ceph/pull/23105
07/17/2018
- 08:35 PM Bug #24284 (Resolved): cephfs: allow prohibiting user snapshots in CephFS
- 08:34 PM Backport #24705 (Resolved): mimic: cephfs: allow prohibiting user snapshots in CephFS
- 08:13 PM Backport #24705: mimic: cephfs: allow prohibiting user snapshots in CephFS
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22812
merged
- 08:34 PM Backport #24913 (Resolved): mimic: qa: multifs requires 4 mds but gets only 2
- 08:11 PM Backport #24913: mimic: qa: multifs requires 4 mds but gets only 2
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22835
merged
- 08:33 PM Backport #24758 (Resolved): mimic: test gets ENOSPC from bluestore block device
- 08:11 PM Backport #24758: mimic: test gets ENOSPC from bluestore block device
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22835
merged
Reviewed-by: Nathan Cutler <ncutler@suse....
- 08:31 PM Backport #24719 (Resolved): mimic: client: returning garbage (?) for readdir
- 08:09 PM Backport #24719: mimic: client: returning garbage (?) for readdir
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22956
merged
- 08:31 PM Backport #24829 (Resolved): mimic: qa: iogen.sh: line 7: cd: too many arguments
- 08:09 PM Backport #24829: mimic: qa: iogen.sh: line 7: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22956
merged
- 08:31 PM Backport #24861 (Resolved): mimic: cephfs-journal-tool: Importing a zero-length purge_queue journ...
- 08:09 PM Backport #24861: mimic: cephfs-journal-tool: Importing a zero-length purge_queue journal breaks i...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22981
merged
- 07:24 AM Backport #24928 (Need More Info): mimic: qa: test_recovery_pool tries asok on wrong node
- https://github.com/ceph/ceph/pull/23087
- 07:22 AM Backport #24929 (Need More Info): luminous: qa: test_recovery_pool tries asok on wrong node
- -https://github.com/ceph/ceph/pull/23086-
- 01:47 AM Feature #24854 (In Progress): mds: if MDS fails internal heartbeat, then debugging should be incr...
07/16/2018
- 06:17 PM Cleanup #24820 (Fix Under Review): overhead of g_conf->get_val<type>("config name") is high
- https://github.com/ceph/ceph/pull/23045/commits/82fd27f73f790e7fd39917974b283fa13678918a
- 04:00 PM Backport #24254: mimic: kceph: umount on evicted client blocks forever
- Fixing PR
- 04:00 PM Backport #24255: mimic: qa: kernel_mount.py umount must handle timeout arg
- fixing PR
- 01:43 PM Bug #24893 (Fix Under Review): client: add ceph_ll_fallocate
- https://github.com/ceph/ceph/pull/23009
- 08:40 AM Backport #24934 (Resolved): luminous: cephfs-journal-tool: wrong layout info used
- https://github.com/ceph/ceph/pull/24033
- 08:40 AM Backport #24933 (Resolved): mimic: cephfs-journal-tool: wrong layout info used
- https://github.com/ceph/ceph/pull/24583
- 08:40 AM Backport #24932 (Resolved): luminous: client: put instance/addr information in status asok command
- https://github.com/ceph/ceph/pull/23107
- 08:40 AM Backport #24931 (Resolved): mimic: client: put instance/addr information in status asok command
- https://github.com/ceph/ceph/pull/23109
- 08:39 AM Backport #24930 (Rejected): jewel: client: put instance/addr information in status asok command
- 08:39 AM Backport #24929 (Resolved): luminous: qa: test_recovery_pool tries asok on wrong node
- https://github.com/ceph/ceph/pull/25569
- 08:39 AM Backport #24928 (Resolved): mimic: qa: test_recovery_pool tries asok on wrong node
- https://github.com/ceph/ceph/pull/23087
- 12:50 AM Bug #24855 (Fix Under Review): mds: reduce debugging for missing inodes during subtree migration
- https://github.com/ceph/ceph/pull/23063
07/15/2018
- 11:58 PM Bug #24853 (Fix Under Review): mds: dump recent (memory) log messages before respawning due to be...
- https://github.com/ceph/ceph/pull/23062
- 11:12 PM Bug #24852 (Fix Under Review): mds: dump MDSMap epoch to log at low debug
- https://github.com/ceph/ceph/pull/23061
- 10:07 PM Bug #24644 (Pending Backport): cephfs-journal-tool: wrong layout info used
- 10:05 PM Feature #24643 (Resolved): libcephfs: add ceph_futimens support
- 10:04 PM Feature #24724 (Pending Backport): client: put instance/addr information in status asok command
- 08:29 PM Bug #24925 (Rejected): qa: pjd test failed ctime change unexpected result
- ...
07/14/2018
- 05:19 PM Bug #24823: mds: deadlock when setting config value via admin socket
- Zheng Yan wrote:
> how about making md_config_impl release its lock when changing handle_conf_change
+1
- 07:40 AM Bug #24823: mds: deadlock when setting config value via admin socket
- how about making md_config_impl release its lock when changing handle_conf_change
- 01:21 AM Bug #24823: mds: deadlock when setting config value via admin socket
- Venky Shankar wrote:
> Patrick/Zheng,
>
> I'm thinking if we could probably make use of Zheng's changes for track...
- 01:17 AM Bug #24858 (Pending Backport): qa: test_recovery_pool tries asok on wrong node
- 12:10 AM Bug #24870 (Fix Under Review): ceph-debug-docker: python3 libraries not installed in docker image
- https://github.com/ceph/ceph/pull/23040
07/13/2018
- 09:09 PM Backport #24912 (New): luminous: qa: multifs requires 4 mds but gets only 2
- 09:08 PM Backport #24912 (In Progress): luminous: qa: multifs requires 4 mds but gets only 2
- 09:00 PM Backport #24912 (Resolved): luminous: qa: multifs requires 4 mds but gets only 2
- https://github.com/ceph/ceph/pull/24328
v12.2.11 follow-up fix: https://github.com/ceph/ceph/pull/25890
- 09:09 PM Backport #24913 (In Progress): mimic: qa: multifs requires 4 mds but gets only 2
- 09:00 PM Backport #24913 (Resolved): mimic: qa: multifs requires 4 mds but gets only 2
- https://github.com/ceph/ceph/pull/22835
- 09:05 PM Backport #24759: luminous: test gets ENOSPC from bluestore block device
- Needs the #24912 backport as well.
- 09:01 PM Backport #24914 (Resolved): mimic: mon: prevent older/incompatible clients from mounting the file...
- https://github.com/ceph/ceph/pull/23105
- 09:00 PM Bug #24899 (Pending Backport): qa: multifs requires 4 mds but gets only 2
- 04:37 AM Bug #24899 (Fix Under Review): qa: multifs requires 4 mds but gets only 2
- https://github.com/ceph/ceph/pull/23018
- 04:34 AM Bug #24899 (Resolved): qa: multifs requires 4 mds but gets only 2
- http://pulpito.ceph.com/pdonnell-2018-07-12_18:01:13-fs-wip-pdonnell-testing-20180712.152150-distro-basic-smithi/2771...
- 08:57 PM Feature #14456 (Pending Backport): mon: prevent older/incompatible clients from mounting the file...
- 05:05 PM Cleanup #24839: qa: move mds/client config to qa from teuthology ceph.conf.template
- Nathan Cutler wrote:
> @Patrick: this is not straightforward to backport, and at first glance it looks like a cleanu...
- 02:45 PM Cleanup #24839: qa: move mds/client config to qa from teuthology ceph.conf.template
- @Patrick: this is not straightforward to backport, and at first glance it looks like a cleanup/refactor. What is the ...
- 02:46 PM Backport #24904 (In Progress): mimic: qa: client socket inaccessible without sudo
- 10:44 AM Backport #24904 (Resolved): mimic: qa: client socket inaccessible without sudo
- https://github.com/ceph/ceph/pull/23030
- 10:56 AM Bug #20593: mds: the number of inode showed by "mds perf dump" not correct after trimming
- Sorry, should have used the PR#16232 merge commit....
- 10:54 AM Bug #20593 (Resolved): mds: the number of inode showed by "mds perf dump" not correct after trimm...
- Master commit was included in the initial luminous v12.2.0 release:...
- 10:55 AM Backport #23642: luminous: mds: the number of inode showed by "mds perf dump" not correct after t...
- Sorry, should have used the merge commit....
- 10:53 AM Backport #23642 (Rejected): luminous: mds: the number of inode showed by "mds perf dump" not corr...
- Master commit was included in the initial luminous v12.2.0 release:...
- 10:47 AM Backport #23642: luminous: mds: the number of inode showed by "mds perf dump" not correct after t...
- Looks like it has already been backported. I can see the commit "MDS: update the mlogger of mds in function check_mem...
- 08:54 AM Bug #24879 (Fix Under Review): mds: create health warning if we detect metadata (journal) writes ...
- 08:54 AM Bug #24879: mds: create health warning if we detect metadata (journal) writes are slow
- https://github.com/ceph/ceph/pull/23022
- 07:44 AM Bug #24900 (Rejected): fs: ceph.file.layout.stripe_count changed after modification
- First I touch an empty file:
> touch /nas/cephfs/zyli.txt
then
> setfattr -n ceph.file.layout.stripe_count -v ...
- 04:38 AM Bug #24872 (Pending Backport): qa: client socket inaccessible without sudo
- 02:13 AM Bug #24897 (Rejected): client: writes rejected by quota may truncate/zero parts of file
- I found a bug as below:
First: setfattr -n ceph.quota.max_bytes -v 5 /nas/cephfs/abc
Second: echo "abc" > /nas/ceph...
07/12/2018
- 11:59 PM Feature #24725 (Fix Under Review): mds: propagate rstats from the leaf dirs up to the specified d...
- 07:46 PM Backport #24190 (In Progress): luminous: fs: reduce number of helper debug messages at level 5 fo...
- 07:45 PM Backport #24136 (In Progress): luminous: MDSMonitor: uncommitted state exposed to clients/mdss
- 07:42 PM Backport #23790 (In Progress): luminous: mds: crash during shutdown_pass
- 07:33 PM Documentation #23427 (Resolved): doc: create doc outlining steps to bring down cluster
- 07:33 PM Backport #23669 (Resolved): luminous: doc: create doc outlining steps to bring down cluster
- 07:30 PM Backport #23642: luminous: mds: the number of inode showed by "mds perf dump" not correct after t...
- Rishabh, please do this backport.
- 07:28 PM Bug #21765 (Resolved): auth|doc: fs authorize error for existing credentials confusing/unclear
- 07:28 PM Backport #23641 (Resolved): luminous: auth|doc: fs authorize error for existing credentials confu...
- 07:25 PM Backport #23015 (In Progress): luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
- 07:17 PM Backport #22504: luminous: client may fail to trim as many caps as MDS asked for
- Zheng, please do this backport.
- 07:11 PM Bug #24894 (Resolved): client: allow overwrites to files with size greater than the max_file_size...
- It's confusing to not allow overwrites as the file size is not **further** increasing beyond the limit (which it alre...
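The proposed rule above might be sketched like this (hypothetical helper, not the actual Client check): a write is rejected only when it would extend the file past max_file_size, while overwrites within an already-oversized file leave the size unchanged and are allowed.

```cpp
#include <cstdint>

// Allow a write iff it does not extend the file beyond max_file_size.
// A pure overwrite (end <= current_size) never changes the size, so it
// is permitted even when the file already exceeds the limit.
bool write_allowed(uint64_t offset, uint64_t length,
                   uint64_t current_size, uint64_t max_file_size) {
    const uint64_t end = offset + length;
    if (end <= current_size)          // pure overwrite, size unchanged
        return true;
    return end <= max_file_size;      // extension must stay within the limit
}
```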
- 07:00 PM Feature #24643: libcephfs: add ceph_futimens support
- Jeff Layton wrote:
> I don't think we need this at all. We have a ceph_fsetattrx call which is a superset of all the...
- 05:40 PM Feature #24643: libcephfs: add ceph_futimens support
- I don't think we need this at all. We have a ceph_fsetattrx call which is a superset of all the futimes/lutimes calls.
- 06:06 PM Bug #24893 (Resolved): client: add ceph_ll_fallocate
- We have a ceph_fallocate command, but we need an analogous ceph_ll_fallocate command for ganesha. Add a trivial wrapper.
- 10:44 AM Backport #24296 (Resolved): mimic: repeated eviction of idle client until some IO happens
- 10:43 AM Backport #24703 (Resolved): mimic: PurgeQueue sometimes ignores Journaler errors
- 10:42 AM Backport #24537 (Resolved): mimic: common/DecayCounter: set last_decay to current time when decod...
- 10:42 AM Backport #24704 (Resolved): mimic: mds: low wrlock efficiency due to dirfrags traversal
- 10:40 AM Bug #24308 (Resolved): mon: mds health metrics sent to cluster log independently
- 10:39 AM Backport #24330 (Resolved): mimic: mon: mds health metrics sent to cluster log independently
- 10:26 AM Bug #24823: mds: deadlock when setting config value via admin socket
- Patrick/Zheng,
I'm thinking if we could probably make use of Zheng's changes for tracker https://tracker.ceph.com/...
- 09:54 AM Feature #24880: pybind/mgr/volumes: restore from snapshot
- I just checked who wrote that "TODO" comment, and it turns out it was me, even though I have no memory of it :-)
I...
- 03:17 AM Feature #24880 (Resolved): pybind/mgr/volumes: restore from snapshot
- Old description:
Manila's cephfs driver does not support recovering data from snapshots. The driver uses the ceph_v...
- 03:31 AM Bug #24881 (Fix Under Review): unhealthy heartbeat map during subtree migration
- https://github.com/ceph/ceph/pull/22999
- 03:26 AM Bug #24881 (Duplicate): unhealthy heartbeat map during subtree migration
- 02:32 AM Feature #22446: mds: ask idle client to trim more caps
- Rishabh Dave wrote:
> Can I get few implementation specific details to get started working on this issue?
Copying...
- 02:26 AM Bug #24867: ceph.file.layout can not set by setfattr
- Patrick Donnelly wrote:
> yuanli zhu wrote:
> > I already know the mistake,the file I set has non-zero size,But i a...
- 02:18 AM Bug #24867: ceph.file.layout can not set by setfattr
- yuanli zhu wrote:
> I already know the mistake,the file I set has non-zero size,But i also want to set the non-zero ...
- 01:45 AM Bug #24867: ceph.file.layout can not set by setfattr
- I already know the mistake,the file I set has non-zero size,But i also want to set the non-zero size file.I want to d...
- 12:59 AM Bug #24867 (Need More Info): ceph.file.layout can not set by setfattr
- Is the file empty? You cannot set the layout on a file that has non-zero size.
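The restriction described here can be pre-checked from a script before attempting setfattr. A minimal illustrative sketch (hypothetical helper, not part of Ceph):

```python
import os

def can_set_layout(path):
    # CephFS rejects changes to ceph.file.layout once a file has data,
    # since existing blocks were already striped with the old layout;
    # a non-zero st_size therefore means setfattr would fail.
    return os.stat(path).st_size == 0
```

A freshly created empty file returns True; after writing any data it returns False, matching the setfattr behavior described above.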
- 02:15 AM Bug #23332 (Closed): kclient: with fstab entry is not coming up reboot
- Closing this as it's a consequence of #24879.
- 02:14 AM Bug #24879 (Resolved): mds: create health warning if we detect metadata (journal) writes are slow
- This can have wide-ranging effects including session establishment taking a long time (sessions are journaled). Let's...
07/11/2018
- 11:25 PM Backport #24296: mimic: repeated eviction of idle client until some IO happens
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22550
merged
- 11:25 PM Backport #24703: mimic: PurgeQueue sometimes ignores Journaler errors
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22810
merged
- 11:24 PM Backport #24537: mimic: common/DecayCounter: set last_decay to current time when decoding decay c...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22816
merged
- 11:24 PM Backport #24704: mimic: mds: low wrlock efficiency due to dirfrags traversal
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22884
merged
- 08:17 PM Backport #24330: mimic: mon: mds health metrics sent to cluster log independently
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22265
merged
- 07:58 PM Documentation #22989 (In Progress): doc: add documentation for MDS states
- WIP: https://github.com/ceph/ceph/pull/22996
- 06:29 PM Bug #24872 (Fix Under Review): qa: client socket inaccessible without sudo
- https://github.com/ceph/ceph/pull/22995
- 06:27 PM Bug #24872 (Resolved): qa: client socket inaccessible without sudo
- https://github.com/ceph/ceph/blob/5688476513a78cf9ab2cf3b1f65e6244f05ea73d/qa/tasks/cephfs/fuse_mount.py#L377
glob...
- 02:51 PM Bug #24819 (Resolved): "terminate called after throwing an instance of 'ceph::buffer::malformed_i...
- the bug is in master only, no backports needed.
- 04:25 AM Bug #24819 (Fix Under Review): "terminate called after throwing an instance of 'ceph::buffer::mal...
- Sage already pushed a PR for this: https://github.com/ceph/ceph/pull/22968
- 02:35 PM Bug #24870 (Resolved): ceph-debug-docker: python3 libraries not installed in docker image
- ceph_volume_client can not be imported using python3 interpreter on the container built by src/script/ceph-debug-dock...
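One quick way to verify which interpreter can see the module is a find_spec probe; this is a generic Python sketch, not the actual check used by ceph-debug-docker:

```python
import importlib.util

def module_available(name):
    """Return True if this interpreter can import `name`."""
    return importlib.util.find_spec(name) is not None

# On the affected image, the python3 bindings were missing, so a probe
# like module_available("ceph_volume_client") came back False there.
```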
- 01:44 PM Feature #24444 (In Progress): cephfs: make InodeStat, DirStat, LeaseStat versioned
- 06:27 AM Backport #24862 (New): luminous: ceph_volume_client: allow atomic update of RADOS objects
- 06:13 AM Backport #24862 (In Progress): luminous: ceph_volume_client: allow atomic update of RADOS objects
- 06:22 AM Backport #24860 (In Progress): luminous: cephfs-journal-tool: Importing a zero-length purge_queue...
- 06:18 AM Backport #24861 (In Progress): mimic: cephfs-journal-tool: Importing a zero-length purge_queue jo...
- 05:40 AM Bug #24868 (Resolved): doc: Incomplete Kernel mount debugging
- 1. http://docs.ceph.com/docs/master/cephfs/troubleshooting/#kernel-mount-debugging seems incomplete.
2. http://docs....
- 05:18 AM Bug #24849: client: statfs inode count odd
- How about setting...
- 03:47 AM Bug #24856 (Fix Under Review): mds may get discontinuous mdsmap
- https://github.com/ceph/ceph/pull/22977
- 03:34 AM Bug #24867: ceph.file.layout can not set by setfattr
- How can this restriction be removed?
- 03:33 AM Bug #24867: ceph.file.layout can not set by setfattr
- When the layout fields of a file are modified using setfattr, this file must be empty, otherwise an error will occur....
- 02:44 AM Bug #24867 (Need More Info): ceph.file.layout can not set by setfattr
- I can get the ceph.file.layout by using getfattr.
[root@node1 ~]# getfattr -n ceph.file.layout /nas/cephfs/test.tx...
07/10/2018
- 08:29 PM Backport #24863 (Resolved): mimic: ceph_volume_client: allow atomic update of RADOS objects
- https://github.com/ceph/ceph/pull/23878
- 08:29 PM Backport #24862 (Resolved): luminous: ceph_volume_client: allow atomic update of RADOS objects
- https://github.com/ceph/ceph/pull/24084
- 08:29 PM Backport #24861 (Resolved): mimic: cephfs-journal-tool: Importing a zero-length purge_queue journ...
- https://github.com/ceph/ceph/pull/22981
- 08:29 PM Backport #24860 (Resolved): luminous: cephfs-journal-tool: Importing a zero-length purge_queue jo...
- https://github.com/ceph/ceph/pull/22980
- 08:26 PM Bug #24239 (Pending Backport): cephfs-journal-tool: Importing a zero-length purge_queue journal b...
- 08:26 PM Bug #24173 (Pending Backport): ceph_volume_client: allow atomic update of RADOS objects
- 08:25 PM Feature #24465 (Resolved): client: allow client to leave state intact on MDS when tearing down ob...
- 08:18 PM Bug #24858 (Fix Under Review): qa: test_recovery_pool tries asok on wrong node
- https://github.com/ceph/ceph/pull/22971
- 08:06 PM Bug #24858 (Resolved): qa: test_recovery_pool tries asok on wrong node
- https://github.com/ceph/ceph/blob/fdb1e142c17d84f40232a8e739f344fb607ca15a/qa/tasks/cephfs/test_recovery_pool.py#L196...
- 02:11 PM Bug #24856 (Resolved): mds may get discontinuous mdsmap
- steps to reproduce. two active mds, no standby
- gdb attach mds.0, wait until 'ceph -w' shows it's laggy
restar...
- 02:05 PM Bug #24855 (Resolved): mds: reduce debugging for missing inodes during subtree migration
- We often see:...
- 02:01 PM Feature #24854 (New): mds: if MDS fails internal heartbeat, then debugging should be increased to...
- This should incrementally increase to 20 as the timeout reaches mds_beacon_grace.
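The ramp described above could look roughly like this; a hypothetical sketch with made-up names (heartbeat_debug_level, base, max_level), not the MDS implementation:

```python
def heartbeat_debug_level(elapsed, grace, base=1, max_level=20):
    """Scale the debug level linearly from `base` up to `max_level`
    as the time since the last successful internal heartbeat
    approaches mds_beacon_grace."""
    frac = max(0.0, min(elapsed / grace, 1.0))
    return base + int(frac * (max_level - base))
```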
- 01:53 PM Bug #24853 (Resolved): mds: dump recent (memory) log messages before respawning due to being remo...
- If the MDS has been stuck working on something and consequently gets removed from the MDSMap, we'd like to know what ...
- 01:50 PM Bug #24852 (Resolved): mds: dump MDSMap epoch to log at low debug
- Production deployments use low debugging but it'd be useful to always know what MDSMap epoch the MDS is currently pro...
- 12:22 PM Bug #24849: client: statfs inode count odd
- We set f_ffree to -1 because it's not meaningful (unlike e.g. ext4, we don't really have an inode limit other than th...
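This is what makes the df -i output look odd: statfs fields are unsigned, so the -1 sentinel wraps to the largest 64-bit value. A small illustration:

```python
import ctypes

# CephFS reports f_ffree as -1 because there is no real inode limit.
# statfs fields are unsigned, so -1 wraps around to 2**64 - 1, which is
# the huge "free inodes" figure `df -i` ends up printing.
f_ffree = ctypes.c_uint64(-1).value
print(f_ffree)  # 18446744073709551615
```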
- 11:34 AM Bug #24849 (Resolved): client: statfs inode count odd
- Hi,
We noticed that df -i on our cephfs was a bit odd:...
- 12:15 PM Bug #23262: kclient: nofail option not supported
- We are running the latest util-linux of centos 7.5, util-linux-2.23.2-52.el7.x86_64.
If we mount another fs type wi...
- 11:18 AM Backport #23641 (In Progress): luminous: auth|doc: fs authorize error for existing credentials co...
- 04:49 AM Backport #24829 (In Progress): mimic: qa: iogen.sh: line 7: cd: too many arguments
- 04:00 AM Backport #24842 (Resolved): luminous: qa: move mds/client config to qa from teuthology ceph.conf....
- https://github.com/ceph/ceph/pull/23877
- 04:00 AM Backport #24841 (Resolved): mimic: qa: move mds/client config to qa from teuthology ceph.conf.tem...
- https://github.com/ceph/ceph/pull/23659
- 03:59 AM Bug #24138 (Resolved): qa: support picking a random distro using new teuthology $
- 03:59 AM Backport #24706 (Resolved): mimic: qa: support picking a random distro using new teuthology $
- 03:58 AM Backport #24534 (Resolved): mimic: client: _ll_drop_pins travel inode_map may access invalid ‘nex...
- 03:57 AM Backport #24539 (Resolved): mimic: multimds pjd open test fails
- 03:57 AM Bug #24240 (Resolved): qa: 1 mutations had unexpected outcomes
- 03:56 AM Backport #24541 (Resolved): mimic: qa: 1 mutations had unexpected outcomes
- 03:56 AM Backport #24310 (Resolved): mimic: pjd: cd: too many arguments
- 03:42 AM Backport #24828 (In Progress): luminous: qa: iogen.sh: line 7: cd: too many arguments
07/09/2018
- 08:31 PM Backport #24706: mimic: qa: support picking a random distro using new teuthology $
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/22700
merged
- 08:28 PM Backport #24534: mimic: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22791
merged
- 08:28 PM Backport #24539: mimic: multimds pjd open test fails
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22819
merged
- 08:27 PM Backport #24541: mimic: qa: 1 mutations had unexpected outcomes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22841
merged
- 08:26 PM Backport #24310: mimic: pjd: cd: too many arguments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/22882
merged
- 08:14 PM Bug #24840 (Resolved): mds: explain delayed client_request due to subtree migration
- Presently you will see messages like:...
- 04:55 PM Cleanup #24839 (Resolved): qa: move mds/client config to qa from teuthology ceph.conf.template
- https://github.com/ceph/ceph/pull/22740
- 02:08 PM Bug #24730 (Duplicate): Client::_invalidate_kernel_dcache causes NFS lookup “deleted” dentry
- 01:37 PM Bug #24730 (Closed): Client::_invalidate_kernel_dcache causes NFS lookup “deleted” dentry
- 01:54 PM Cleanup #24820: overhead of g_conf->get_val<type>("config name") is high
The idea for a fix is to use a cached variable where hot paths currently call get_val. Example:
https://github.com/ceph/ceph/blob/9755...
- 01:50 PM Bug #24802: races with nfs-ganesha reboots and delegation handling
- The right fix here is probably to push forward with end-to-end ceph client reclaim. The main problem is that there is...
- 01:40 PM Bug #24732 (Rejected): lsattr is not useful for ceph-fuse
- ceph-fuse/FUSE doesn't appear to support the inode flags that lsattr needs. Closing this.
- 01:21 PM Backport #24829 (Resolved): mimic: qa: iogen.sh: line 7: cd: too many arguments
- https://github.com/ceph/ceph/pull/22956
- 01:21 PM Backport #24828 (Resolved): luminous: qa: iogen.sh: line 7: cd: too many arguments
- https://github.com/ceph/ceph/pull/22955
- 10:14 AM Bug #24823 (Resolved): mds: deadlock when setting config value via admin socket
- I hit the below deadlock while working on a PR which introduced a configuration option for MDS. The problem is with h...
- 07:06 AM Bug #23958 (Fix Under Review): mds: scrub doesn't always return JSON results
- PR https://github.com/ceph/ceph/pull/22947
- 04:07 AM Bug #24680 (Pending Backport): qa: iogen.sh: line 7: cd: too many arguments
07/08/2018
- 08:15 AM Cleanup #24820: overhead of g_conf->get_val<type>("config name") is high
- !http://tracker.ceph.com/attachments/download/3540/Screen%20Shot%202018-07-08%20at%2016.13.15.png!
- 08:14 AM Cleanup #24820 (Resolved): overhead of g_conf->get_val<type>("config name") is high
07/07/2018
- 03:22 PM Bug #24819 (Resolved): "terminate called after throwing an instance of 'ceph::buffer::malformed_i...
- Run: http://pulpito.ceph.com/teuthology-2018-07-07_02:25:03-upgrade:luminous-x-master-distro-basic-smithi/
Jobs: ['2...
07/06/2018
- 10:51 PM Feature #17230: ceph_volume_client: py3 compatible
- Note: I've deleted the backport issues as this hasn't been merged to master yet.
- 06:07 PM Bug #24802 (New): races with nfs-ganesha reboots and delegation handling
- So I've come up with a thought experiment that I think could be problematic for ganesha with delegations enabled. Thi...
07/05/2018
- 07:57 PM Backport #24696 (In Progress): luminous: mds: low wrlock efficiency due to dirfrags traversal
- 07:50 PM Backport #24704 (In Progress): mimic: mds: low wrlock efficiency due to dirfrags traversal
- 07:38 PM Backport #24311 (In Progress): luminous: pjd: cd: too many arguments
- 07:37 PM Backport #24310 (In Progress): mimic: pjd: cd: too many arguments
- 01:28 PM Bug #24780 (Resolved): Some cephfs tool commands silently operate on only rank 0, even if multipl...
- I think this is biting people, e.g. in thread "[ceph-users] CephFS - How to handle "loaded dup inode" errors"
We...
- 11:47 AM Backport #23669 (In Progress): luminous: doc: create doc outlining steps to bring down cluster
07/04/2018
- 01:46 PM Documentation #24726 (Resolved): Documentation about CephFS snapshots in experimental features is...
- 01:44 PM Backport #24746 (Resolved): mimic: doc: Documentation about CephFS snapshots in experimental feat...
- 09:51 AM Feature #24604: Implement "cephfs-journal-tool event splice" equivalent for purge queue
- https://github.com/ceph/ceph/pull/22850
- 09:47 AM Cleanup #24745: Spurious empty files in CephFS root pool when multiple pools associated
- (see discussion on thread "[ceph-users] Spurious empty files in CephFS root pool when multiple pools associated")
... - 09:25 AM Backport #24759 (Need More Info): luminous: test gets ENOSPC from bluestore block device
- Wait until mimic backport https://github.com/ceph/ceph/pull/22835 is merged and use its minimal version of https://gi...
- 02:21 AM Backport #24541 (In Progress): mimic: qa: 1 mutations had unexpected outcomes
- https://github.com/ceph/ceph/pull/22841
07/03/2018
- 04:22 PM Backport #24758 (In Progress): mimic: test gets ENOSPC from bluestore block device
- 04:12 PM Backport #24758 (Resolved): mimic: test gets ENOSPC from bluestore block device
- https://github.com/ceph/ceph/pull/22835
- 04:12 PM Backport #24759 (Resolved): luminous: test gets ENOSPC from bluestore block device
- https://github.com/ceph/ceph/pull/24924
- 04:11 PM Bug #24238 (Pending Backport): test gets ENOSPC from bluestore block device
- 11:12 AM Backport #24718 (In Progress): luminous: client: returning garbage (?) for readdir
- 11:11 AM Backport #24719 (In Progress): mimic: client: returning garbage (?) for readdir
- 11:01 AM Backport #24539 (In Progress): mimic: multimds pjd open test fails
- https://github.com/ceph/ceph/pull/22819
- 10:58 AM Backport #24537 (In Progress): mimic: common/DecayCounter: set last_decay to current time when de...
- https://github.com/ceph/ceph/pull/22816
- 10:37 AM Backport #24705 (In Progress): mimic: cephfs: allow prohibiting user snapshots in CephFS
- 10:20 AM Backport #24716 (Resolved): mimic: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-s...
- 10:16 AM Backport #24694 (In Progress): luminous: PurgeQueue sometimes ignores Journaler errors
- 10:15 AM Backport #24703 (In Progress): mimic: PurgeQueue sometimes ignores Journaler errors
- 08:45 AM Bug #24512: Raw used space leak
- Unfortunately, I have now to use the disks for production...
Here follow the last tests with smaller pools, which ...
- 03:58 AM Backport #24746 (In Progress): mimic: doc: Documentation about CephFS snapshots in experimental f...
- 03:08 AM Backport #24746 (Resolved): mimic: doc: Documentation about CephFS snapshots in experimental feat...
- https://github.com/ceph/ceph/pull/22803