Bug #5272


Updating ceph from 0.61.2 to 0.61.3 obviously changes tunables of existing cluster

Added by To Pro almost 11 years ago. Updated almost 11 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I'm running a Ceph cluster with three server nodes, each running one MON, one MDS and three OSDs, to provide CephFS storage to clients. Both the server and client nodes run Debian wheezy with a linux-3.8.x kernel from debian.org and Ceph packages from ceph.com/debian-cuttlefish. The clients mount CephFS using the kernel client. The cluster was originally created under bobtail and then upgraded to cuttlefish 0.61.2 without issues.

Today, when I updated the server nodes to Ceph 0.61.3 and restarted ceph on all server nodes consecutively (service ceph restart), at some point during that process all clients that had CephFS mounted stalled on IO. Rebooting the clients didn't change anything, so I shut down all (eight) clients, stopped the ceph daemons on all servers completely, and waited for things to settle before starting up again. I then restarted the ceph daemons on all server nodes, waited again for things to settle, and then booted a single client and tried to mount CephFS: IO was still blocked.
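For reference, the restart and mount steps looked roughly like this (a minimal sketch: the monitor address, mount point and secret file path are placeholder values, not taken from the report):

    # On each server node in turn (sysvinit packaging on Debian wheezy):
    service ceph restart

    # On a client, mount CephFS with the kernel client; monitor address
    # and secret file below are placeholders:
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # Check cluster state while waiting for things to settle:
    ceph -s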
I then remembered having read about Ceph tunables, and that with linux-3.8 I would not be able to mount the FS if the tunables were set to "optimal". So I gave it a try and ran "ceph osd crush tunables legacy", and from that moment on all my nodes could mount CephFS again, just as before with 0.61.2.
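For anyone hitting the same symptom, the workaround is the single command quoted above; the verification steps after it are my own addition, using the standard getcrushmap/crushtool pair to check which tunables the map actually carries:

    # Revert the CRUSH tunables to the legacy values that older kernel
    # clients (pre-3.9) understand:
    ceph osd crush tunables legacy

    # Optionally verify by decompiling the current crushmap; legacy maps
    # may simply omit the tunable lines:
    ceph osd getcrushmap -o /tmp/crushmap
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt
    grep tunable /tmp/crushmap.txt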

As requested by Sage, I placed a tarred-up mon dir on cephdrop for debugging; the file is called topro_mon_0.61.2_to_0.61.3_tunables_issue.tar.bz2.
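For completeness, collecting such a mon dir looks roughly like this (a sketch: /var/lib/ceph/mon/ceph-a is the assumed default mon data path on this release, not confirmed from the report):

    # Stop the monitor so its store is quiescent, then tar up its data dir:
    service ceph stop mon
    tar cjf topro_mon_0.61.2_to_0.61.3_tunables_issue.tar.bz2 \
        /var/lib/ceph/mon/ceph-a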
