Bug #23918
closed"ceph-volume lvm prepare" errors with "no valid command found"
% Done:
0%
Source:
Community (user)
Tags:
ceph-volume
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Brand new install on Ubuntu 16.04:
~$ sudo ceph-volume lvm prepare --bluestore --data /dev/sdb --osd-id 8
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new b78ce743-644b-45bc-a01d-89abc167f1d8
 stderr: no valid command found; 10 closest matches:
 stderr: osd setmaxosd <int[0-]>
 stderr: osd pause
 stderr: osd crush rule rm <name>
 stderr: osd crush tree
 stderr: osd crush rule create-simple <name> <root> <type> {firstn|indep}
 stderr: osd crush rule create-erasure <name> {<profile>}
 stderr: osd crush get-tunable straw_calc_version
 stderr: osd crush show-tunables
 stderr: osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default
 stderr: osd crush set-tunable straw_calc_version <int>
 stderr: Error EINVAL: invalid command
--> RuntimeError: Unable to create a new OSD id
Also, "ceph osd purge" returns the same error:
~$ sudo ceph osd purge 0 --yes-i-really-mean-it
no valid command found; 10 closest matches:
osd setmaxosd <int[0-]>
osd pause
osd crush rule rm <name>
osd crush tree
osd crush rule create-simple <name> <root> <type> {firstn|indep}
osd crush rule create-erasure <name> {<profile>}
osd crush get-tunable straw_calc_version
osd crush show-tunables
osd crush tunables legacy|argonaut|bobtail|firefly|hammer|jewel|optimal|default
osd crush set-tunable straw_calc_version <int>
Error EINVAL: invalid command
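For context, both "osd new" and "osd purge" are monitor commands that were introduced in Luminous, so this error pattern usually means the monitor daemons are still running an older release even if the Luminous packages are installed. A minimal check, assuming the mon ids match the hostnames shown in the monmap further down (e.g. mon.Ceph-C1), would be:

# Version of the locally installed client binaries
ceph --version

# Version the monitor daemon is actually running; this queries the
# admin socket, so run it on the mon host itself
sudo ceph daemon mon.Ceph-C1 version

If the installed binaries report Luminous but the daemon still reports Jewel, the monitors simply have not been restarted onto the new code.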
The only thing that was at all off was that my initial install did not include "--release luminous", and I had to re-run the installer with that option.
I have not successfully created any OSDs yet:
~$ sudo ceph -s
    cluster 4ff86631-a50c-4a9c-b63a-2bf40cc60642
     health HEALTH_ERR
            clock skew detected on mon.Ceph-C2
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            64 pgs stuck unclean
            noout flag(s) set
            Monitor clock skew detected
     monmap e1: 3 mons at {Ceph-C1=10.26.12.119:6789/0,Ceph-C2=10.26.12.120:6789/0,Ceph-C3=10.26.12.121:6789/0}
            election epoch 4, quorum 0,1,2 Ceph-C1,Ceph-C2,Ceph-C3
     osdmap e6: 1 osds: 0 up, 0 in
            flags noout,sortbitwise,require_jewel_osds
      pgmap v7: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
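Note that this status output is in the pre-Luminous format and shows the require_jewel_osds flag, which is consistent with the monitors still running Jewel after the package re-install. A minimal sketch of getting them onto the new binaries, assuming systemd-managed daemons on Ubuntu 16.04:

# On each mon host, restart the monitor onto the newly installed binaries
sudo systemctl restart ceph-mon.target

# Then re-check the cluster status; a Luminous quorum unlocks the
# "osd new" and "osd purge" commands used above
sudo ceph -s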
I started to create one manually, but those instructions were not for BlueStore, and I stopped after creating the id (0); that is when I noticed the purge command errors as well.
I am also not sure why it is reporting clock skew. There isn't any...
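One way to rule skew out, assuming the mon hosts sync via ntpd (with systemd-timesyncd, "timedatectl status" gives the same information):

# On each mon host: the "offset" column shows drift from the NTP peers
# in milliseconds; the monitors warn when mons drift apart by more than
# mon_clock_drift_allowed (0.05 s by default)
ntpq -p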
Did I break something by re-running the "ceph-deploy install --release luminous" command over the old install?