Bug #24683
ceph-mon binary doesn't report to systemd why it dies
Description
Following the quick start guide, I get to the point where the monitor is supposed to come up, but it doesn't. It also doesn't report why.
I have no doubt that I did something wrong (I'm a Ceph noob), and I have no doubt that I will fix and debug it without external help. But it's really bad if the default setup can't say why it doesn't work, don't you think?
```
$ systemctl status ceph-mon*
● ceph-mon@mo-378c784d4.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Thu 2018-06-28 08:33:44 UTC; 5s ago
Process: 29954 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 29954 (code=exited, status=1/FAILURE)
Jun 28 08:33:44 mo-378c784d4 systemd[1]: Unit ceph-mon@mo-378c784d4.service entered failed state.
Jun 28 08:33:44 mo-378c784d4 systemd[1]: ceph-mon@mo-378c784d4.service failed.
```
Updated by Erik Bernoth almost 6 years ago
If I execute the same command that systemd uses, I get a perfectly readable error message:
```
$ /usr/bin/ceph-mon -f --cluster ceph --id %i --setuser ceph --setgroup ceph
2018-06-28 08:45:36.627547 7fedea477ec0 -1 monitor data directory at '/var/lib/ceph/mon/ceph-%i' does not exist: have you run 'mkfs'?
```
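Note that the command above passes the systemd specifier `%i` through literally, which is why the error mentions `/var/lib/ceph/mon/ceph-%i`. A sketch of the same invocation with the specifiers substituted by hand, assuming the instance name `mo-378c784d4` from the status output and the default cluster name `ceph`:

```shell
# systemd expands %i to the instance name, i.e. the part after the "@"
# in ceph-mon@mo-378c784d4.service; ${CLUSTER} comes from the unit's
# environment and defaults to "ceph".
CLUSTER=ceph
ID=mo-378c784d4   # assumed instance name, taken from the status output above

# The fully substituted ExecStart line (drop the echo to actually run it):
echo /usr/bin/ceph-mon -f --cluster "$CLUSTER" --id "$ID" --setuser ceph --setgroup ceph
```

With the id substituted, the daemon would look for its data directory under `/var/lib/ceph/mon/ceph-mo-378c784d4` rather than the literal `ceph-%i` path.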
But for some reason this message doesn't make it into the journal.
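The daemon's stderr normally does end up in the journal via systemd's default stdout/stderr routing, so as a sketch (assuming the instance name from the status output above), one way to look for it:

```shell
# Last 50 journal entries for this unit instance (includes anything the
# daemon printed to stdout/stderr before exiting). `|| true` keeps this
# from aborting on machines without a systemd journal.
journalctl -u ceph-mon@mo-378c784d4.service --no-pager -n 50 || true

# Or follow all ceph-mon instances live while restarting the unit:
#   journalctl -f -u 'ceph-mon@*'
```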
Updated by Josh Durgin almost 6 years ago
Does this show up in the monitor's log in /var/log/ceph/?
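A sketch of where to look, assuming the default log location and naming (the file name combines the cluster name and the daemon id, e.g. `ceph-mon.mo-378c784d4.log`):

```shell
# List whatever log files exist; `|| true` keeps this from aborting
# if the directory or files are missing.
ls -l /var/log/ceph/ || true

# Tail the monitor log(s) for any id:
tail -n 50 /var/log/ceph/ceph-mon.*.log || true
```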