Activity
From 10/01/2017 to 10/30/2017
10/30/2017
- 09:05 PM Backport #21953 (In Progress): luminous: MDSMonitor commands crashing on cluster upgraded from Ha...
- https://github.com/ceph/ceph/pull/18628
- 08:43 PM Bug #21945 (Resolved): MDSCache::gen_default_file_layout segv on rados/upgrade
- 05:44 AM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
- /a/kchai-2017-10-29_15:49:18-rados-wip-kefu-testing-2017-10-28-2157-distro-basic-mira/1788809
- 06:31 PM Bug #21975: MDS: mds gets significantly behind on trimming while creating millions of files
- https://github.com/ceph/ceph/pull/18624
- 06:31 PM Bug #21975 (Resolved): MDS: mds gets significantly behind on trimming while creating millions of ...
- During creat() heavy workloads, the MDS gets behind on trimming its journal as the journal grows faster than it trims...
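A minimal sketch of the backlog mechanics described above, assuming (this is not taken from the report) a per-tick cap on how many journal segments may be expired; the struct and constant names are illustrative stand-ins, though the real tunables have similar names (mds_log_max_segments, mds_log_max_expiring):
<pre>
#include <cstdio>
#include <deque>

// Illustrative stand-ins; not the real MDLog implementation or option names.
struct Segment { unsigned long seq; };

int main() {
    std::deque<Segment> journal;
    const unsigned max_segments = 128;         // trim target (analogous to mds_log_max_segments)
    const unsigned max_expiring_per_tick = 20; // per-tick expiry cap (analogous to mds_log_max_expiring)
    const unsigned appended_per_tick = 60;     // create-heavy workload outpaces trimming
    unsigned long seq = 0;

    for (int tick = 0; tick < 10; ++tick) {
        for (unsigned i = 0; i < appended_per_tick; ++i)
            journal.push_back({seq++});
        // Trim: expire at most max_expiring_per_tick segments per tick.
        unsigned expired = 0;
        while (journal.size() > max_segments && expired < max_expiring_per_tick) {
            journal.pop_front();
            ++expired;
        }
        std::printf("tick %d: %zu segments (expired %u)\n", tick, journal.size(), expired);
    }
    return 0;
}
</pre>
If the per-tick expiry cap is lower than the append rate, the segment count grows without bound, which is the "gets significantly behind on trimming" symptom.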
- 01:45 PM Bug #21884: client: populate f_fsid in statfs output
- Waiting for FUSE support on this.
- 12:04 PM Bug #21406: ceph.in: tell mds does not understand --cluster
- The workaround `--conf <conf file path>` no longer works with the latest source code on Ceph master. I'm hi...
- 04:18 AM Bug #21967 (Resolved): 'ceph tell mds' commands result in 'File exists' errors on client admin so...
- ...
10/29/2017
- 06:14 PM Bug #21807 (Resolved): mds: trims all unpinned dentries when memory limit is reached
- 06:14 PM Backport #21810 (Resolved): luminous: mds: trims all unpinned dentries when memory limit is reached
- 06:13 PM Bug #19593 (Resolved): purge queue and standby replay mds
- 06:13 PM Backport #21658 (Resolved): luminous: purge queue and standby replay mds
- 06:13 PM Bug #21768 (Resolved): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
- 06:12 PM Backport #21806 (Resolved): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pi...
- 06:12 PM Bug #21191 (Resolved): ceph: tell mds.* results in warning
- 06:12 PM Backport #21324 (Resolved): luminous: ceph: tell mds.* results in warning
- 06:11 PM Bug #21726 (Resolved): limit internal memory usage of object cacher.
- 06:11 PM Backport #21804 (Resolved): luminous: limit internal memory usage of object cacher.
- 06:10 PM Bug #21746 (Resolved): client_metadata can be missing
- 06:10 PM Backport #21805 (Resolved): luminous: client_metadata can be missing
- 06:10 PM Backport #21627 (Resolved): luminous: ceph_volume_client: sets invalid caps for existing IDs with...
- 06:08 PM Backport #21600 (Resolved): luminous: mds: client caps can go below hard-coded default (100)
- 06:08 PM Bug #21476 (Resolved): ceph_volume_client: snapshot dir name hardcoded
- 06:08 PM Backport #21514 (Resolved): luminous: ceph_volume_client: snapshot dir name hardcoded
10/27/2017
- 08:32 PM Bug #21945 (In Progress): MDSCache::gen_default_file_layout segv on rados/upgrade
- WIP https://github.com/ceph/ceph/pull/18603
- 12:09 PM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
- Presumably 7adf0fb819cc98702cd97214192770472eab5d27
- 12:07 PM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
- ...
- 11:55 AM Bug #21945 (Resolved): MDSCache::gen_default_file_layout segv on rados/upgrade
- ...
- 08:23 PM Backport #21953: luminous: MDSMonitor commands crashing on cluster upgraded from Hammer (nonexist...
- Please hold off on merging any backport on this due to http://tracker.ceph.com/issues/21945
- 12:45 PM Backport #21953 (Resolved): luminous: MDSMonitor commands crashing on cluster upgraded from Hamme...
- https://github.com/ceph/ceph/pull/18628
- 07:46 PM Bug #21959 (Fix Under Review): MDSMonitor: monitor gives constant "is now active in filesystem ce...
- https://github.com/ceph/ceph/pull/18600
- 07:16 PM Bug #21959 (Resolved): MDSMonitor: monitor gives constant "is now active in filesystem cephfs as ...
- Cluster log is filled with:...
- 04:58 PM Backport #21955 (In Progress): luminous: qa: add EC data pool to testing
- 12:45 PM Backport #21955 (Resolved): luminous: qa: add EC data pool to testing
- https://github.com/ceph/ceph/pull/18596
- 03:05 PM Bug #21337 (Resolved): luminous: MDS is not getting past up:replay on Luminous cluster
- 12:45 PM Backport #21952 (Resolved): luminous: mds: no assertion on inode being purging in find_ino_peers()
- https://github.com/ceph/ceph/pull/18869
- 12:44 PM Backport #21948 (Resolved): luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_...
- https://github.com/ceph/ceph/pull/19871
- 12:44 PM Backport #21947 (Resolved): luminous: mds: preserve order of requests during recovery of multimds...
- https://github.com/ceph/ceph/pull/18871
- 02:42 AM Bug #21722 (Pending Backport): mds: no assertion on inode being purging in find_ino_peers()
10/26/2017
- 06:04 PM Bug #21153 (Resolved): Incorrect grammar in FS message "1 filesystem is have a failed mds daemon"
- 06:03 PM Bug #21230 (Resolved): the standbys are not updated via "ceph tell mds.* command"
- 04:07 PM Backport #21657 (In Progress): luminous: StrayManager::truncate is broken
- 08:24 AM Bug #21928 (Fix Under Review): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_m...
- https://github.com/ceph/ceph/pull/18555
- 08:07 AM Bug #21928: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size() + snap_in...
- I think the cause is that scrub function allocates shadow inode for base inode
CInode.cc...
- 12:06 AM Bug #21405 (Pending Backport): qa: add EC data pool to testing
10/25/2017
- 11:38 PM Bug #21568 (Pending Backport): MDSMonitor commands crashing on cluster upgraded from Hammer (none...
- 11:37 PM Bug #21821 (Pending Backport): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- 11:36 PM Bug #20596 (Resolved): MDSMonitor: obsolete `mds dump` and other deprecated mds commands
- 08:37 PM Bug #21843 (Pending Backport): mds: preserve order of requests during recovery of multimds cluster
- 08:35 PM Bug #21848 (In Progress): client: re-expand admin_socket metavariables in child process
- Zhi, please revisit this issue as the fix in https://github.com/ceph/ceph/pull/18393 must be reverted due to the reas...
- 06:14 PM Bug #21928 (Resolved): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size(...
- ...
- 02:55 PM Feature #21888: Adding [--repair] option for cephfs-journal-tool make it can recover all journal ...
- Assigning to Ivan, as he had already submitted a PR for this Feature.
- 01:50 PM Bug #18743 (Fix Under Review): Scrub considers dirty backtraces to be damaged, puts in damage tab...
- https://github.com/ceph/ceph/pull/18538
Inspired to fix this from working on today's "[ceph-users] MDS damaged" th...
- 07:53 AM Bug #21393 (In Progress): MDSMonitor: inconsistent role/who usage in command help
- 06:43 AM Bug #21903: ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
- Patrick Donnelly wrote:
> Another one during retest: http://pulpito.ceph.com/pdonnell-2017-10-24_16:24:48-fs-wip-pdo...
10/24/2017
- 09:15 PM Bug #21908: kcephfs: mount fails with -EIO
- ...
- 09:14 PM Bug #21908 (New): kcephfs: mount fails with -EIO
- ...
- 08:23 PM Bug #21884: client: populate f_fsid in statfs output
- Jeff, I liked your suggestion of a vxattr the client can lookup to check if the mount is CephFS.
- 05:17 PM Bug #21903: ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
- Another one during retest: http://pulpito.ceph.com/pdonnell-2017-10-24_16:24:48-fs-wip-pdonnell-testing-201710240410-...
- 05:17 PM Bug #21903 (New): ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
- ...
- 03:24 PM Bug #21393 (Fix Under Review): MDSMonitor: inconsistent role/who usage in command help
- https://github.com/ceph/ceph/pull/18512
10/23/2017
- 07:51 PM Bug #21884: client: populate f_fsid in statfs output
- I'm not sure we want to change f_type. It _is_ still FUSE, regardless of what userland daemon it's talking to.
If ...
- 10:43 AM Bug #21884: client: populate f_fsid in statfs output
- Here's how the kernel client fills this field out:...
- 11:46 AM Bug #21848 (Fix Under Review): client: re-expand admin_socket metavariables in child process
- 02:28 AM Bug #21393 (In Progress): MDSMonitor: inconsistent role/who usage in command help
- 01:51 AM Bug #21892 (Resolved): limit size of subtree migration
- ...
10/22/2017
- 09:19 AM Feature #21888: Adding [--repair] option for cephfs-journal-tool make it can recover all journal ...
- https://github.com/ceph/ceph/pull/18465/
- 06:15 AM Feature #21888 (Fix Under Review): Adding [--repair] option for cephfs-journal-tool make it can r...
- As described in the document, if a journal is damaged or for any reason an MDS is incapable of replaying it, attempt t...
10/21/2017
- 04:05 AM Bug #21884 (Resolved): client: populate f_fsid in statfs output
- We should just reuse the kclient -f_id- f_type as, in principle, the application should only need to know that the fi...
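For background on why f_type alone is ambiguous for ceph-fuse, here is a hedged sketch of the kind of check an application can do today; the magic constants are assumptions about a typical Linux linux/magic.h, not part of the proposed fix:
<pre>
#include <sys/vfs.h>
#include <cstdio>

// Assumed magic values for illustration; verify against linux/magic.h on your system.
#ifndef CEPH_SUPER_MAGIC
#define CEPH_SUPER_MAGIC 0x00c36400   // kernel CephFS
#endif
#ifndef FUSE_SUPER_MAGIC
#define FUSE_SUPER_MAGIC 0x65735546   // any FUSE filesystem, including ceph-fuse
#endif

int main(int argc, char** argv) {
    const char* path = argc > 1 ? argv[1] : "/mnt/cephfs";
    struct statfs st;
    if (statfs(path, &st) != 0) { perror("statfs"); return 1; }
    if (st.f_type == CEPH_SUPER_MAGIC)
        std::printf("%s: kernel CephFS mount\n", path);
    else if (st.f_type == FUSE_SUPER_MAGIC)
        std::printf("%s: FUSE mount (could be ceph-fuse or any other FUSE fs)\n", path);
    else
        std::printf("%s: f_type=0x%lx\n", path, (long)st.f_type);
    return 0;
}
</pre>
A ceph-fuse mount reports the generic FUSE magic, which is why the discussion in this feed floats f_fsid or a vxattr rather than relying on f_type alone.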
10/20/2017
- 03:21 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
- PR to master was https://github.com/ceph/ceph/pull/18139
- 10:08 AM Feature #21877 (Resolved): quota and snaprealm integation
- https://github.com/ceph/ceph/pull/18424/commits/4477f8b93d183eb461798b5b67550d3d5b22c16c
- 09:30 AM Backport #21874 (Resolved): luminous: qa: libcephfs_interface_tests: shutdown race failures
- https://github.com/ceph/ceph/pull/20082
- 09:29 AM Backport #21870 (Resolved): luminous: Assertion in EImportStart::replay should be a damaged()
- https://github.com/ceph/ceph/pull/18930
- 07:10 AM Bug #21861 (New): osdc: truncate Object and remove the bh which have someone wait for read on it ...
- ceph version: jewel 10.2.2
When one OSD is written over the full_ratio (default 0.95), it will lead the cluster t...
10/19/2017
- 11:22 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- This is NMI because we weren't able to reproduce the actual problem. We'll have to wait for QE to reproduce again wit...
- 06:35 PM Bug #21853 (Resolved): mds: mdsload debug too high
- ...
- 01:44 PM Feature #19578: mds: optimize CDir::_omap_commit() and CDir::_committed() for large directory
- this should help large directory performance
- 07:52 AM Bug #21848: client: re-expand admin_socket metavariables in child process
- https://github.com/ceph/ceph/pull/18393
- 07:51 AM Bug #21848 (Resolved): client: re-expand admin_socket metavariables in child process
- The default value of admin_socket is $run_dir/$cluster-$name.asok. If mounting multiple ceph-fuse instances on the sa...
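A hedged sketch of the re-expansion idea, assuming a path template that includes $pid (the report's default is $run_dir/$cluster-$name.asok); the expand() helper is hypothetical and is not Ceph's config code:
<pre>
#include <cstdio>
#include <string>
#include <unistd.h>
#include <sys/wait.h>

// Hypothetical helper, for illustration only: expand a few metavariables
// the way a config layer might. Not Ceph's actual expansion code.
static std::string expand(std::string tmpl, const std::string& cluster,
                          const std::string& name) {
    auto replace = [&tmpl](const std::string& key, const std::string& val) {
        for (size_t pos; (pos = tmpl.find(key)) != std::string::npos; )
            tmpl.replace(pos, key.size(), val);
    };
    replace("$cluster", cluster);
    replace("$name", name);
    replace("$pid", std::to_string(getpid()));
    return tmpl;
}

int main() {
    const std::string tmpl = "/var/run/ceph/$cluster-$name.$pid.asok";
    std::printf("parent: %s\n", expand(tmpl, "ceph", "client.fuse").c_str());
    if (fork() == 0) {
        // Re-expanding in the child yields a distinct socket path per daemon.
        std::printf("child:  %s\n", expand(tmpl, "ceph", "client.fuse").c_str());
        _exit(0);
    }
    wait(nullptr);
    return 0;
}
</pre>
Re-expanding after fork gives each daemonized ceph-fuse its own socket path instead of all instances colliding on the same name.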
- 07:44 AM Bug #21483 (Resolved): qa: test_snapshot_remove (kcephfs): RuntimeError: Bad data at offset 0
- 02:20 AM Bug #21749 (Duplicate): PurgeQueue corruption in 12.2.1
- dup of #19593
- 02:17 AM Backport #21658 (Fix Under Review): luminous: purge queue and standby replay mds
- https://github.com/ceph/ceph/pull/18385
- 01:53 AM Bug #21843 (Fix Under Review): mds: preserve order of requests during recovery of multimds cluster
- https://github.com/ceph/ceph/pull/18384
- 01:50 AM Bug #21843 (Resolved): mds: preserve order of requests during recovery of multimds cluster
- there are several cases where requests get processed in the wrong order
1)
touch a/b/f (handled by mds.1, early ...
10/17/2017
- 10:09 PM Bug #21821 (Fix Under Review): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- https://github.com/ceph/ceph/pull/18366
- 09:55 PM Bug #21821: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- A good opportunity to use the new min/max fields on the config option itself.
I suppose if we accept the idea that...
- 05:36 PM Bug #21821 (Resolved): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- There should be a minimum acceptable value otherwise we see potential behavior where a blacklisted MDS is still writi...
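A free-standing sketch of the bound check being asked for here; the floor value and the Bound struct are assumptions for illustration, and the real change would presumably hang off the option's new min/max fields mentioned in the 09:55 PM comment or be enforced by the mons:
<pre>
#include <cstdio>

// Free-standing sketch; not the real Option/MDSMonitor code.
struct Bound {
    double min;
    bool ok(double v) const { return v >= min; }
};

int main() {
    // Hypothetical floor, for illustration only: a very small interval lets a
    // "dead" but still-running MDS outlive its blacklist entry and keep writing,
    // which is the failure mode this ticket describes.
    const Bound blacklist_interval{1440.0};
    for (double candidate : {5.0, 1440.0, 86400.0}) {
        std::printf("mds_blacklist_interval=%.0f -> %s\n", candidate,
                    blacklist_interval.ok(candidate) ? "accepted" : "rejected (below minimum)");
    }
    return 0;
}
</pre>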
- 02:01 AM Bug #21812 (Closed): standby replay mds may submit log
- I wrongly interpreted the log.
10/16/2017
- 01:19 PM Bug #21812 (Closed): standby replay mds may submit log
- magna002://home/smohan/LOGS/cfs-mds.magna116.trunc.log.gz
mds submitted log entry while it's in standby replay sta...
- 12:07 AM Bug #21807 (Pending Backport): mds: trims all unpinned dentries when memory limit is reached
- 12:03 AM Backport #21810 (Resolved): luminous: mds: trims all unpinned dentries when memory limit is reached
- https://github.com/ceph/ceph/pull/18316
10/14/2017
- 08:49 PM Bug #21807 (Fix Under Review): mds: trims all unpinned dentries when memory limit is reached
- https://github.com/ceph/ceph/pull/18309
- 08:46 PM Bug #21807 (Resolved): mds: trims all unpinned dentries when memory limit is reached
- Generally dentries are pinned by the client cache so this was easy to miss in testing. Bug is here:
https://github...
- 08:17 AM Backport #21805 (In Progress): luminous: client_metadata can be missing
- 12:32 AM Backport #21805 (Resolved): luminous: client_metadata can be missing
- https://github.com/ceph/ceph/pull/18299
- 08:16 AM Backport #21804 (In Progress): luminous: limit internal memory usage of object cacher.
- 12:23 AM Backport #21804 (Resolved): luminous: limit internal memory usage of object cacher.
- https://github.com/ceph/ceph/pull/18298
- 12:39 AM Backport #21806 (In Progress): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export...
- 12:37 AM Backport #21806 (Resolved): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pi...
- https://github.com/ceph/ceph/pull/18300
- 12:16 AM Bug #21512 (Pending Backport): qa: libcephfs_interface_tests: shutdown race failures
- 12:14 AM Bug #21726 (Pending Backport): limit internal memory usage of object cacher.
- 12:14 AM Bug #21746 (Pending Backport): client_metadata can be missing
- 12:13 AM Bug #21759 (Pending Backport): Assertion in EImportStart::replay should be a damaged()
- 12:12 AM Bug #21768 (Pending Backport): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
10/13/2017
- 07:18 PM Feature #21601 (Resolved): ceph_volume_client: add get, put, and delete object interfaces
- 07:18 PM Backport #21602 (Resolved): luminous: ceph_volume_client: add get, put, and delete object interfaces
- 12:37 AM Bug #21777 (Fix Under Review): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- https://github.com/ceph/ceph/pull/18278
10/12/2017
- 10:47 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- MDS may send a MMDSCacheRejoin(MMDSCacheRejoin::OP_WEAK) message to an MDS which is not rejoin/active/stopping. Once ...
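A hedged sketch of the non-asserting handling implied by this description (the state names, message type, and the defer-to-a-waitlist choice are illustrative assumptions, not the actual MDCache fix):
<pre>
#include <cstdio>
#include <deque>

// Illustrative states and message; not the real MDSRank/MDCache types.
enum class State { Replay, Rejoin, Active, Stopping };

struct RejoinMsg { int from; };

struct Handler {
    State state = State::Replay;
    std::deque<RejoinMsg> waiting;

    void handle_weak_rejoin(const RejoinMsg& m) {
        if (state != State::Rejoin && state != State::Active && state != State::Stopping) {
            // Previously: assert(mds->is_rejoin()).  Instead, defer until we
            // reach a state that can process the weak rejoin.
            waiting.push_back(m);
            std::printf("deferred weak rejoin from mds.%d (state not ready)\n", m.from);
            return;
        }
        std::printf("processing weak rejoin from mds.%d\n", m.from);
    }
};

int main() {
    Handler h;
    h.handle_weak_rejoin({1});   // deferred while still in replay
    h.state = State::Rejoin;
    h.handle_weak_rejoin({1});   // processed once rejoin starts
    return 0;
}
</pre>
Whether the real fix defers, drops, or resets the sender is up to the PR above; the sketch only shows the assertion being replaced by a state check.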
- 04:04 AM Bug #21768 (Fix Under Review): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
- https://github.com/ceph/ceph/pull/18261
- 03:58 AM Bug #21768 (Resolved): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
- ...
10/11/2017
- 09:38 PM Bug #21765: auth|doc: fs authorize error for existing credentials confusing/unclear
- Doug, please take this one.
- 09:37 PM Bug #21765 (Resolved): auth|doc: fs authorize error for existing credentials confusing/unclear
- If you attempt to use `fs authorize` on a key that already exists you get an error like:
https://github.com/ceph/c...
- 03:47 PM Bug #21764 (Resolved): common/options.cc: Update descriptions and visibility levels for MDS/clien...
- Go through the options in common/options.cc and figure out which should be LEVEL_DEV (hidden in the UI). BASIC/ADVANC...
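For illustration only, a tiny stand-in for the Option machinery, assuming the LEVEL_BASIC/ADVANCED/DEV split described above; the option names and one-line descriptions are examples, not the audited list this ticket asks for:
<pre>
#include <cstdio>
#include <string>
#include <vector>

// Minimal stand-in for the real Option class; levels mirror the
// LEVEL_BASIC / LEVEL_ADVANCED / LEVEL_DEV split described above.
struct Opt {
    enum Level { BASIC, ADVANCED, DEV };
    std::string name;
    Level level;
    std::string desc;
};

int main() {
    std::vector<Opt> opts = {
        {"mds_cache_memory_limit", Opt::BASIC,    "target memory usage of the MDS cache"},
        {"client_oc_size",         Opt::ADVANCED, "object cacher size per client"},
        {"mds_kill_mdstable_at",   Opt::DEV,      "failure-injection hook; hide from the UI"},
    };
    // A UI would only surface BASIC/ADVANCED options and hide DEV ones.
    for (const auto& o : opts)
        if (o.level != Opt::DEV)
            std::printf("%-24s %s\n", o.name.c_str(), o.desc.c_str());
    return 0;
}
</pre>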
- 12:02 PM Bug #21748: client assertions tripped during some workloads
- Huh. That is an interesting theory. I don't see how ganesha would do that, but maybe. Unfortunately, the original pro...
- 08:27 AM Bug #21748: client assertions tripped during some workloads
- This shouldn't happen even for a traceless reply. I suspect the 'in' passed to ceph_ll_setattr doesn't belong to the 'cmo...
- 11:03 AM Bug #21759 (Fix Under Review): Assertion in EImportStart::replay should be a damaged()
- https://github.com/ceph/ceph/pull/18244
- 10:35 AM Bug #21759 (Resolved): Assertion in EImportStart::replay should be a damaged()
- This is one of a number of assertions that still linger in journal.cc, but since it's been seen in the wild ("[ceph...
- 09:11 AM Bug #21754: mds: src/osdc/Journaler.cc: 402: FAILED assert(!r)
- ...
- 06:31 AM Bug #21749: PurgeQueue corruption in 12.2.1
- Hi Yan,
yes, we had 3 MDS running in standby-replay mode (I switched them to standby now).
Thanks for the offer...
- 02:58 AM Bug #21749: PurgeQueue corruption in 12.2.1
- likely caused by http://tracker.ceph.com/issues/19593.
ping 'yanzheng' at ceph@OFTC, I will help you to recover th...
- 02:40 AM Bug #21745: mds: MDBalancer using total (all time) request count in load statistics
- Although it is simple to add last_timestamp and last_reqcount so that we can get an average TPS, the TPS may fluctuat...
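A hedged sketch of the sampling idea in the comment above, plus the exponential decay it hints is needed to damp fluctuation; the names and the alpha value are made up, and this is not the MDBalancer/get_req_rate() code:
<pre>
#include <cstdio>

// Illustrative rate tracker; not the real MDBalancer code.
struct ReqRate {
    unsigned long last_count = 0;
    double last_time = 0.0;
    double smoothed = 0.0;   // exponentially decayed rate

    double update(unsigned long total_count, double now, double alpha = 0.3) {
        double dt = now - last_time;
        double inst = dt > 0 ? (total_count - last_count) / dt : 0.0;
        last_count = total_count;
        last_time = now;
        smoothed = alpha * inst + (1.0 - alpha) * smoothed;
        return smoothed;
    }
};

int main() {
    ReqRate r;
    // Total (all-time) request counter samples, as l_mds_request would report.
    unsigned long totals[] = {0, 1200, 1300, 5300, 5400};
    double t = 0.0;
    for (unsigned long c : totals) {
        std::printf("t=%.0fs total=%lu smoothed_rate=%.1f req/s\n", t, c, r.update(c, t));
        t += 10.0;
    }
    return 0;
}
</pre>
The instantaneous rate between two samples can swing wildly; blending it into a decayed average gives the balancer a steadier load figure than either the all-time counter or a raw delta.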
10/10/2017
- 10:10 PM Bug #21754 (Rejected): mds: src/osdc/Journaler.cc: 402: FAILED assert(!r)
- ...
- 04:09 PM Bug #21749: PurgeQueue corruption in 12.2.1
- I saved all information/logs/objects, feel free to ask for any of it and further things.
Regards,
Daniel
- 12:05 PM Bug #21749 (Duplicate): PurgeQueue corruption in 12.2.1
- From "[ceph-users] how to debug (in order to repair) damaged MDS (rank)?"
Log snippet during MDS startup:...
- 02:18 PM Bug #21748: client assertions tripped during some workloads
- Actually this is wrong (as Zheng pointed out). The call is made with a zero-length path that starts from the inode on...
- 10:44 AM Bug #21748: client assertions tripped during some workloads
- The right fix is probably to just remove that assertion. I don't think it's really valid anyway. cephfs turns the ino...
- 10:42 AM Bug #21748 (Can't reproduce): client assertions tripped during some workloads
- We had a report of some crashes in ganesha here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/215
Dan and ...
- 12:51 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- The trimsnap states. The rmdir actually completes quickly, but the resulting operations throw the entire cluster int...
- 02:51 AM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- What do you mean by "it takes almost 24 hours to delete a single snapshot"? Did 'rmdir .snap/xxx' take 24 hours, or did pgs on ...
- 09:55 AM Bug #21746 (Fix Under Review): client_metadata can be missing
- https://github.com/ceph/ceph/pull/18215
- 09:52 AM Bug #21746 (Resolved): client_metadata can be missing
- session opened by Server::prepare_force_open_sessions() has no client metadata.
- 09:47 AM Bug #21745 (Resolved): mds: MDBalancer using total (all time) request count in load statistics
- This was pointed out by Xiaoxi Chen
The get_req_rate() function is returning the value of l_mds_request, which is ...
10/09/2017
- 06:25 PM Bug #21405 (Fix Under Review): qa: add EC data pool to testing
- https://github.com/ceph/ceph/pull/18192
- 05:50 PM Bug #21734 (Duplicate): mount client shows total capacity of cluster but not of a pool
- SERVER:
ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
44637G...
- 01:56 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- Note, the bug says "10.2.7" but we have since upgraded to 10.2.9 and the same problem exists.
- 01:55 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- Here is a dump of the cephfs 'dentry_lru' table, in case it is interesting.
- 01:53 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- Here is data collected from a recent attempt to delete a very old and very large snapshot:
The snapshot extended a...
- 10:30 AM Bug #21726 (Fix Under Review): limit internal memory usage of object cacher.
- https://github.com/ceph/ceph/pull/18183
- 10:22 AM Bug #21726 (Resolved): limit internal memory usage of object cacher.
- https://bugzilla.redhat.com/show_bug.cgi?id=1490814
- 07:07 AM Bug #21722: mds: no assertion on inode being purging in find_ino_peers()
- https://github.com/ceph/ceph/pull/18174
- 07:06 AM Bug #21722 (Resolved): mds: no assertion on inode being purging in find_ino_peers()
- Recently we hit an assertion on the MDS a few times, only when the MDS was very busy....
- 03:43 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- The RHEL7 kernel supports FUSE_AUTO_INVAL_DATA, but FUSE_CAP_DONT_MASK was added in libfuse 3.0. Currently no major li...
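A hedged sketch of guarding the capability at compile time; the conn_info stand-in and the fallback message are assumptions, and real client code would include <fuse_lowlevel.h> and pair this with the cmake check discussed on 10/03:
<pre>
#include <cstdio>

// Stand-in for libfuse's fuse_conn_info; real client code includes
// <fuse_lowlevel.h> and gets FUSE_CAP_DONT_MASK (or not) from that header.
struct conn_info { unsigned capable = 0; unsigned want = 0; };

static void client_init(conn_info* conn) {
#ifdef FUSE_CAP_DONT_MASK
    // Only request the capability when the installed libfuse defines it.
    if (conn->capable & FUSE_CAP_DONT_MASK)
        conn->want |= FUSE_CAP_DONT_MASK;   // filesystem applies the umask itself
#else
    (void)conn;
    std::printf("libfuse lacks FUSE_CAP_DONT_MASK; kernel applies the umask\n");
#endif
}

int main() {
    conn_info conn;
    client_init(&conn);
    std::printf("requested caps: 0x%x\n", conn.want);
    return 0;
}
</pre>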
10/06/2017
- 05:46 PM Feature #15066: multifs: Allow filesystems to be assigned RADOS namespace as well as pool for met...
- we should default to using a namespace named after the filesystem unless otherwise specified.
- 05:45 PM Feature #21709 (New): ceph fs authorize should detect the correct data namespace
- when per-FS data namespaces are enabled, ceph fs authorize should be updated to issue caps for them
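A hedged illustration of what the issued cap could look like once data namespaces default to the filesystem name (per Feature #15066 above); the exact string `ceph fs authorize` would emit is an assumption, though pool=/namespace= is the standard OSD cap form:
<pre>
#include <cstdio>
#include <string>

// Illustration only: the OSD cap a namespace-aware `ceph fs authorize`
// might hand out, assuming the namespace defaults to the filesystem name.
int main() {
    const std::string fs_name = "cephfs";
    const std::string data_pool = "cephfs_data";
    const std::string osd_cap =
        "allow rw pool=" + data_pool + " namespace=" + fs_name;
    std::printf("osd cap: %s\n", osd_cap.c_str());
    return 0;
}
</pre>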
- 02:21 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
- I have a PR up that seems to fix this, but it may not be what we need. env_to_vec seems like it ought to be reworked ...
10/05/2017
- 12:04 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
- Looking now to see if we can somehow just fix up lockdep for this. Most of the problems I have seen are fal...
10/04/2017
- 06:42 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
- I'm now looking for ways to selectively disable lockdep for just this test. So far, I've been unable to do so:
...
- 04:22 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
- Patch to make the ShutdownRace test even more thrashy. This has each thread do the setup and teardown in a tight loop...
- 02:32 AM Bug #21568 (Fix Under Review): MDSMonitor commands crashing on cluster upgraded from Hammer (none...
- https://github.com/ceph/ceph/pull/18109
10/03/2017
- 07:41 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- Ok, I think you're probably right there. Do we need a cmake test to ensure the fuse library defines FUSE_AUTO_INVAL_D...
- 02:59 AM Backport #21658 (Resolved): luminous: purge queue and standby replay mds
- https://github.com/ceph/ceph/pull/18385
- 02:58 AM Backport #21657 (Resolved): luminous: StrayManager::truncate is broken
- https://github.com/ceph/ceph/pull/18019
10/02/2017
- 06:29 PM Backport #21626 (In Progress): jewel: ceph_volume_client: sets invalid caps for existing IDs with...
- 06:20 PM Backport #21626 (Resolved): jewel: ceph_volume_client: sets invalid caps for existing IDs with no...
- https://github.com/ceph/ceph/pull/18084
- 06:29 PM Backport #21627 (In Progress): luminous: ceph_volume_client: sets invalid caps for existing IDs w...
- 06:25 PM Backport #21627 (Resolved): luminous: ceph_volume_client: sets invalid caps for existing IDs with...
- https://github.com/ceph/ceph/pull/18085
-https://github.com/ceph/ceph/pull/18447-
- 12:41 PM Bug #21568: MDSMonitor commands crashing on cluster upgraded from Hammer (nonexistent pool?)
- 12:40 PM Bug #21568: MDSMonitor commands crashing on cluster upgraded from Hammer (nonexistent pool?)
- User confirmed the MDSMap referred to data pools that no longer exist. The fix should check for non-existent pools an...
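A free-standing sketch of the check described here, under the assumption that the fix boils down to validating each MDSMap data pool against the currently existing pools before use; the container names are illustrative, not the FSMap/OSDMap API:
<pre>
#include <cstdio>
#include <cstdint>
#include <set>
#include <vector>

// Illustrative stand-ins for the pool lists involved; not the real FSMap/OSDMap types.
int main() {
    std::vector<int64_t> fs_data_pools = {1, 7, 42};  // pools recorded in the MDSMap
    std::set<int64_t> osdmap_pools = {1, 42};         // pools that actually exist now

    for (int64_t pool : fs_data_pools) {
        if (osdmap_pools.count(pool) == 0) {
            // Clusters upgraded from Hammer can reference pools that were deleted;
            // report the inconsistency rather than crashing on the lookup.
            std::printf("warning: filesystem references nonexistent data pool %lld\n",
                        (long long)pool);
            continue;
        }
        std::printf("data pool %lld ok\n", (long long)pool);
    }
    return 0;
}
</pre>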
10/01/2017
- 06:01 AM Bug #21304: mds v12.2.0 crashing
- The following crash still persists with v12.2.1:
2017-10-01 06:07:34.673356 7f1066040700 0 -- 194.249.156.134:680... - 12:46 AM Bug #19593 (Pending Backport): purge queue and standby replay mds
- 12:45 AM Bug #21501 (Pending Backport): ceph_volume_client: sets invalid caps for existing IDs with no caps