Bug #19422
ceph-mon not up after stop/start of ceph.target
Status: Resolved
Priority: High
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
On CentOS 7.3:
1) The first ps command shows ceph-mon, ceph-mgr, and ceph-osd running.
2) Stopping ceph.target stops all services.
3) Starting ceph.target starts only the OSD (no mon, and also no ceph-mgr, which depends on the mon).
4) 'sudo systemctl status ceph-mon@vpm023.service' then fails, probably because the mon was never started after the target restart.
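For context, the check the systemd task is effectively performing — scanning `ps -eaf | grep ceph` output for the expected daemons — can be sketched as follows. This is an illustrative helper (the function name and sample text are ours, modeled on the log below), not code from the teuthology task:

```python
import re

# Daemon binaries expected to be running after `systemctl start ceph.target`.
EXPECTED = ("ceph-mon", "ceph-mgr", "ceph-osd")

def missing_daemons(ps_output, expected=EXPECTED):
    """Return the expected ceph daemons that do NOT appear in `ps -eaf` output."""
    running = set()
    for line in ps_output.splitlines():
        # Match command paths like /usr/bin/ceph-osd; ignores the grep/bash noise.
        m = re.search(r"/usr/bin/(ceph-\w+)", line)
        if m:
            running.add(m.group(1))
    return [d for d in expected if d not in running]

# Sample modeled on the 'ps -eaf | grep ceph' output after restarting ceph.target:
after_restart = """\
ceph 29523 1 0 17:30 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ubuntu 29572 23642 0 17:30 ? 00:00:00 bash -c sudo ps -eaf | grep ceph
ubuntu 29598 29572 0 17:30 ? 00:00:00 grep ceph
"""
print(missing_daemons(after_restart))  # the mon and mgr never came back
```

Against the post-restart sample, only the OSD is found, which matches the failure reported here.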
2017-03-29T17:30:33.631 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo ps -eaf | grep ceph'
2017-03-29T17:30:33.706 INFO:teuthology.orchestra.run.vpm023.stdout:ceph 27234 1 0 17:25 ? 00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id vpm023 --setuser ceph --setgroup ceph
2017-03-29T17:30:33.706 INFO:teuthology.orchestra.run.vpm023.stdout:ceph 27557 1 0 17:28 ? 00:00:00 /usr/bin/ceph-mgr -f --cluster ceph --id vpm023 --setuser ceph --setgroup ceph
2017-03-29T17:30:33.706 INFO:teuthology.orchestra.run.vpm023.stdout:ceph 28869 1 3 17:29 ? 00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
2017-03-29T17:30:33.706 INFO:teuthology.orchestra.run.vpm023.stdout:ubuntu 29346 23642 0 17:30 ? 00:00:00 bash -c sudo ps -eaf | grep ceph
2017-03-29T17:30:33.706 INFO:teuthology.orchestra.run.vpm023.stdout:ubuntu 29372 29346 0 17:30 ? 00:00:00 grep ceph
2017-03-29T17:30:33.707 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl list-units | grep ceph'
2017-03-29T17:30:33.766 INFO:teuthology.orchestra.run.vpm023.stdout: var-lib-ceph-osd-ceph\x2d1.mount loaded active mounted /var/lib/ceph/osd/ceph-1
2017-03-29T17:30:33.766 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-mgr@vpm023.service loaded active running Ceph cluster manager daemon
2017-03-29T17:30:33.766 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-mon@vpm023.service loaded active running Ceph cluster monitor daemon
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-osd@1.service loaded active running Ceph object storage daemon osd.1
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: system-ceph\x2ddisk.slice loaded active active system-ceph\x2ddisk.slice
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: system-ceph\x2dmgr.slice loaded active active system-ceph\x2dmgr.slice
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: system-ceph\x2dmon.slice loaded active active system-ceph\x2dmon.slice
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: system-ceph\x2dosd.slice loaded active active system-ceph\x2dosd.slice
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-mds.target loaded active active ceph target allowing to start/stop all ceph-mds@.service instances at once
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-mgr.target loaded active active ceph target allowing to start/stop all ceph-mgr@.service instances at once
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-mon.target loaded active active ceph target allowing to start/stop all ceph-mon@.service instances at once
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-osd.target loaded active active ceph target allowing to start/stop all ceph-osd@.service instances at once
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: ceph-radosgw.target loaded active active ceph target allowing to start/stop all ceph-radosgw@.service instances at once
2017-03-29T17:30:33.767 INFO:teuthology.orchestra.run.vpm023.stdout: ceph.target loaded active active ceph target allowing to start/stop all ceph*@.service instances at once
2017-03-29T17:30:33.768 INFO:tasks.systemd: var-lib-ceph-osd-ceph\x2d1.mount loaded active mounted /var/lib/ceph/osd/ceph-1
 ceph-mgr@vpm023.service loaded active running Ceph cluster manager daemon
 ceph-mon@vpm023.service loaded active running Ceph cluster monitor daemon
 ceph-osd@1.service loaded active running Ceph object storage daemon osd.1
 system-ceph\x2ddisk.slice loaded active active system-ceph\x2ddisk.slice
 system-ceph\x2dmgr.slice loaded active active system-ceph\x2dmgr.slice
 system-ceph\x2dmon.slice loaded active active system-ceph\x2dmon.slice
 system-ceph\x2dosd.slice loaded active active system-ceph\x2dosd.slice
 ceph-mds.target loaded active active ceph target allowing to start/stop all ceph-mds@.service instances at once
 ceph-mgr.target loaded active active ceph target allowing to start/stop all ceph-mgr@.service instances at once
 ceph-mon.target loaded active active ceph target allowing to start/stop all ceph-mon@.service instances at once
 ceph-osd.target loaded active active ceph target allowing to start/stop all ceph-osd@.service instances at once
 ceph-radosgw.target loaded active active ceph target allowing to start/stop all ceph-radosgw@.service instances at once
 ceph.target loaded active active ceph target allowing to start/stop all ceph*@.service instances at once
2017-03-29T17:30:33.768 INFO:tasks.systemd:Ceph services in failed state
2017-03-29T17:30:33.768 INFO:tasks.systemd:Stopping all Ceph services
2017-03-29T17:30:33.768 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl stop ceph.target'
2017-03-29T17:30:33.837 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl status ceph.target'
2017-03-29T17:30:33.891 INFO:teuthology.orchestra.run.vpm023.stdout:● ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once
2017-03-29T17:30:33.891 INFO:teuthology.orchestra.run.vpm023.stdout: Loaded: loaded (/usr/lib/systemd/system/ceph.target; enabled; vendor preset: enabled)
2017-03-29T17:30:33.891 INFO:teuthology.orchestra.run.vpm023.stdout: Active: inactive (dead) since Wed 2017-03-29 17:30:33 UTC; 70ms ago
2017-03-29T17:30:33.891 INFO:teuthology.orchestra.run.vpm023.stdout:
2017-03-29T17:30:33.891 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:20:26 vpm023 systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
2017-03-29T17:30:33.891 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:20:26 vpm023 systemd[1]: Starting ceph target allowing to start/stop all ceph*@.service instances at once.
2017-03-29T17:30:33.892 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:33 vpm023 systemd[1]: Stopped target ceph target allowing to start/stop all ceph*@.service instances at once.
2017-03-29T17:30:33.892 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:33 vpm023 systemd[1]: Stopping ceph target allowing to start/stop all ceph*@.service instances at once.
2017-03-29T17:30:33.892 INFO:tasks.systemd:Checking process status
2017-03-29T17:30:33.892 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo ps -eaf | grep ceph'
2017-03-29T17:30:33.955 INFO:teuthology.orchestra.run.vpm023.stdout:ceph 27234 1 0 17:25 ? 00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id vpm023 --setuser ceph --setgroup ceph
2017-03-29T17:30:33.955 INFO:teuthology.orchestra.run.vpm023.stdout:ceph 28869 1 3 17:29 ? 00:00:02 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
2017-03-29T17:30:33.956 INFO:teuthology.orchestra.run.vpm023.stdout:ubuntu 29456 23642 0 17:30 ? 00:00:00 bash -c sudo ps -eaf | grep ceph
2017-03-29T17:30:33.956 INFO:teuthology.orchestra.run.vpm023.stdout:ubuntu 29482 29456 0 17:30 ? 00:00:00 grep ceph
2017-03-29T17:30:33.956 INFO:tasks.systemd:Sucessfully stopped all ceph services
2017-03-29T17:30:33.956 INFO:tasks.systemd:Starting all Ceph services
2017-03-29T17:30:33.956 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl start ceph.target'
2017-03-29T17:30:40.966 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl status ceph.target'
2017-03-29T17:30:41.177 INFO:teuthology.orchestra.run.vpm023.stdout:● ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once
2017-03-29T17:30:41.178 INFO:teuthology.orchestra.run.vpm023.stdout: Loaded: loaded (/usr/lib/systemd/system/ceph.target; enabled; vendor preset: enabled)
2017-03-29T17:30:41.178 INFO:teuthology.orchestra.run.vpm023.stdout: Active: active since Wed 2017-03-29 17:30:40 UTC; 213ms ago
2017-03-29T17:30:41.178 INFO:teuthology.orchestra.run.vpm023.stdout:
2017-03-29T17:30:41.178 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
2017-03-29T17:30:41.178 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 systemd[1]: Starting ceph target allowing to start/stop all ceph*@.service instances at once.
2017-03-29T17:30:41.178 INFO:tasks.systemd:Sucessfully started all Ceph services
2017-03-29T17:30:41.179 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo ps -eaf | grep ceph'
2017-03-29T17:30:41.253 INFO:teuthology.orchestra.run.vpm023.stdout:ceph 29523 1 0 17:30 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
2017-03-29T17:30:41.253 INFO:teuthology.orchestra.run.vpm023.stdout:ubuntu 29572 23642 0 17:30 ? 00:00:00 bash -c sudo ps -eaf | grep ceph
2017-03-29T17:30:41.253 INFO:teuthology.orchestra.run.vpm023.stdout:ubuntu 29598 29572 0 17:30 ? 00:00:00 grep ceph
2017-03-29T17:30:41.253 INFO:tasks.systemd:ceph 29523 1 0 17:30 ? 00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
ubuntu 29572 23642 0 17:30 ? 00:00:00 bash -c sudo ps -eaf | grep ceph
ubuntu 29598 29572 0 17:30 ? 00:00:00 grep ceph
2017-03-29T17:30:45.254 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl status ceph-osd@1.service'
2017-03-29T17:30:45.313 INFO:teuthology.orchestra.run.vpm023.stdout:● ceph-osd@1.service - Ceph object storage daemon osd.1
2017-03-29T17:30:45.313 INFO:teuthology.orchestra.run.vpm023.stdout: Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
2017-03-29T17:30:45.313 INFO:teuthology.orchestra.run.vpm023.stdout: Active: active (running) since Wed 2017-03-29 17:30:40 UTC; 4s ago
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout: Process: 29518 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout: Main PID: 29523 (ceph-osd)
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout: CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@1.service
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout: └─29523 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout:
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 systemd[1]: Starting Ceph object storage daemon osd.1...
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 systemd[1]: Started Ceph object storage daemon osd.1.
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 ceph-osd[29523]: starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
2017-03-29T17:30:45.314 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:41 vpm023 ceph-osd[29523]: 2017-03-29 17:30:41.471753 7f38a4b02ac0 -1 osd.1 29 log_to_monitors {default=true}
2017-03-29T17:30:45.315 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl stop ceph-osd@1.service'
2017-03-29T17:30:49.381 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl status ceph-osd@1.service'
2017-03-29T17:30:49.442 INFO:teuthology.orchestra.run.vpm023.stdout:● ceph-osd@1.service - Ceph object storage daemon osd.1
2017-03-29T17:30:49.443 INFO:teuthology.orchestra.run.vpm023.stdout: Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
2017-03-29T17:30:49.443 INFO:teuthology.orchestra.run.vpm023.stdout: Active: inactive (dead) since Wed 2017-03-29 17:30:45 UTC; 4s ago
2017-03-29T17:30:49.443 INFO:teuthology.orchestra.run.vpm023.stdout: Process: 29523 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=killed, signal=TERM)
2017-03-29T17:30:49.443 INFO:teuthology.orchestra.run.vpm023.stdout: Process: 29518 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
2017-03-29T17:30:49.443 INFO:teuthology.orchestra.run.vpm023.stdout: Main PID: 29523 (code=killed, signal=TERM)
2017-03-29T17:30:49.444 INFO:teuthology.orchestra.run.vpm023.stdout:
2017-03-29T17:30:49.444 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 systemd[1]: Starting Ceph object storage daemon osd.1...
2017-03-29T17:30:49.444 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 systemd[1]: Started Ceph object storage daemon osd.1.
2017-03-29T17:30:49.444 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:40 vpm023 ceph-osd[29523]: starting osd.1 at - osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
2017-03-29T17:30:49.444 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:41 vpm023 ceph-osd[29523]: 2017-03-29 17:30:41.471753 7f38a4b02ac0 -1 osd.1 29 log_to_monitors {default=true}
2017-03-29T17:30:49.444 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:45 vpm023 systemd[1]: Stopping Ceph object storage daemon osd.1...
2017-03-29T17:30:49.445 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:45 vpm023 systemd[1]: Stopped Ceph object storage daemon osd.1.
2017-03-29T17:30:49.445 INFO:tasks.systemd:Sucessfully stopped single osd ceph service
2017-03-29T17:30:49.445 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl start ceph-osd@1.service'
2017-03-29T17:30:53.525 INFO:teuthology.orchestra.run.vpm023:Running: 'sudo systemctl status ceph-mon@vpm023.service'
2017-03-29T17:30:53.584 INFO:teuthology.orchestra.run.vpm023.stdout:● ceph-mon@vpm023.service - Ceph cluster monitor daemon
2017-03-29T17:30:53.584 INFO:teuthology.orchestra.run.vpm023.stdout: Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
2017-03-29T17:30:53.584 INFO:teuthology.orchestra.run.vpm023.stdout: Active: inactive (dead) since Wed 2017-03-29 17:30:34 UTC; 19s ago
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout: Main PID: 27234 (code=exited, status=0/SUCCESS)
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:29:07 vpm023 systemd[1]: [/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:29:07 vpm023 systemd[1]: [/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:29:07 vpm023 systemd[1]: [/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:29:07 vpm023 systemd[1]: [/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:29:08 vpm023 systemd[1]: [/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:29:08 vpm023 systemd[1]: [/usr/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:34 vpm023 systemd[1]: Stopping Ceph cluster monitor daemon...
2017-03-29T17:30:53.585 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:34 vpm023 ceph-mon[27234]: 2017-03-29 17:30:34.009000 7fd0287ef700 -1 received signal: Terminated from PID: 1 task name: /usr/lib/systemd/systemd --system --deserialize 21 UID: 0
2017-03-29T17:30:53.586 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:34 vpm023 ceph-mon[27234]: 2017-03-29 17:30:34.009020 7fd0287ef700 -1 mon.vpm023@1(peon) e2 *** Got Signal Terminated ***
2017-03-29T17:30:53.586 INFO:teuthology.orchestra.run.vpm023.stdout:Mar 29 17:30:34 vpm023 systemd[1]: Stopped Ceph cluster monitor daemon.
2017-03-29T17:30:53.586 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
    manager.__enter__()
  File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/home/teuthworker/src/github.com_ceph_ceph_wip-systemd/qa/tasks/systemd.py", line 93, in task
    remote.run(args=['sudo', 'systemctl', 'status', mon_name])
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 193, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 414, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 149, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 171, in _raise_for_status
    node=self.hostname, label=self.label
CommandFailedError: Command failed on vpm023 with status 3: 'sudo systemctl status ceph-mon@vpm023.service'
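Whether `systemctl start ceph.target` brings the mon back depends on the Wants-links that `systemctl enable` installs: the instance unit is wanted by its per-daemon target, and that target by ceph.target. The sketch below is a simplified reconstruction of that chain for illustration, not a verbatim copy of the shipped unit files (which may differ by version):

```ini
# ceph-mon@.service -- [Install] section (sketch)
[Install]
WantedBy=ceph-mon.target

# ceph-mon.target -- (sketch)
[Unit]
Description=ceph target allowing to start/stop all ceph-mon@.service instances at once
[Install]
WantedBy=multi-user.target ceph.target
```

If any link in that chain is missing on the node (or only runtime-enabled, as the OSD's `enabled-runtime` state in the log above hints for ceph-osd@1.service), starting ceph.target will not pull the mon in; `systemctl list-dependencies ceph.target` on the affected node would show which Wants-links actually exist.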