Bug #19642

Error adding disk to ceph cluster: journal specified but not allowed by osd backend

Added by elder one almost 7 years ago. Updated about 6 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
% Done: 0%
Regression: No
Severity: 3 - minor

Description

ceph-deploy exits with an error when trying to add a disk to the cluster (the argument uses ceph-deploy's host:data-disk:journal-disk syntax):

# ceph-deploy osd prepare cph02:sdh:/dev/sdm

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy osd prepare cph02:sdh:/dev/sdm
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('cph02', '/dev/sdh', '/dev/sdm')]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f31fae9a290>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f31fb3001b8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cph02:/dev/sdh:/dev/sdm
[cph02][DEBUG ] connected to host: cph02
[cph02][DEBUG ] detect platform information from remote host
[cph02][DEBUG ] detect machine type
[cph02][DEBUG ] find the location of an executable
[cph02][INFO  ] Running command: /sbin/initctl version
[cph02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to cph02
[cph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host cph02 disk /dev/sdh journal /dev/sdm activate False
[cph02][DEBUG ] find the location of an executable
[cph02][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdh /dev/sdm
[cph02][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[cph02][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph --setuser ceph --setgroup ceph
[cph02][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph --setuser ceph --setgroup ceph
[cph02][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph --setuser ceph --setgroup ceph
[cph02][WARNIN] Traceback (most recent call last):
[cph02][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[cph02][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[cph02][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5047, in run
[cph02][WARNIN]     main(sys.argv[1:])
[cph02][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4998, in main
[cph02][WARNIN]     args.func(args)
[cph02][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1812, in main
[cph02][WARNIN]     Prepare.factory(args).prepare()
[cph02][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1808, in factory
[cph02][WARNIN]     return PrepareFilestore(args)
[cph02][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 1821, in __init__
[cph02][WARNIN]     self.journal = PrepareJournal(args)
[cph02][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2133, in __init__
[cph02][WARNIN]     raise Error('journal specified but not allowed by osd backend')
[cph02][WARNIN] ceph_disk.main.Error: Error: journal specified but not allowed by osd backend
[cph02][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdh /dev/sdm
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
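
As a quick sanity check, the probe that ceph-disk runs (visible in the WARNIN lines above) can be rerun by hand on the OSD host; a non-zero exit status from the --check-allows-journal call is what makes ceph-disk raise "journal specified but not allowed by osd backend":

# /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph --setuser ceph --setgroup ceph
# echo $?

An exit status of 0 means the filestore backend accepts an external journal; anything else reproduces the error seen here.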

Disks on host:

# ceph-deploy disk list cph02

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy disk list cph02
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : list
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1f7454a7a0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function disk at 0x7f1f749bc230>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('cph02', None, None)]
[cph02][DEBUG ] connected to host: cph02
[cph02][DEBUG ] detect platform information from remote host
[cph02][DEBUG ] detect machine type
[cph02][DEBUG ] find the location of an executable
[cph02][INFO  ] Running command: /sbin/initctl version
[cph02][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Listing disks on cph02...
[cph02][DEBUG ] find the location of an executable
[cph02][INFO  ] Running command: /usr/sbin/ceph-disk list
[cph02][DEBUG ] /dev/loop0 other, unknown
[cph02][DEBUG ] /dev/loop1 other, unknown
[cph02][DEBUG ] /dev/loop2 other, unknown
[cph02][DEBUG ] /dev/loop3 other, unknown
[cph02][DEBUG ] /dev/loop4 other, unknown
[cph02][DEBUG ] /dev/loop5 other, unknown
[cph02][DEBUG ] /dev/loop6 other, unknown
[cph02][DEBUG ] /dev/loop7 other, unknown
[cph02][DEBUG ] /dev/ram0 other, unknown
[cph02][DEBUG ] /dev/ram1 other, unknown
[cph02][DEBUG ] /dev/ram10 other, unknown
[cph02][DEBUG ] /dev/ram11 other, unknown
[cph02][DEBUG ] /dev/ram12 other, unknown
[cph02][DEBUG ] /dev/ram13 other, unknown
[cph02][DEBUG ] /dev/ram14 other, unknown
[cph02][DEBUG ] /dev/ram15 other, unknown
[cph02][DEBUG ] /dev/ram2 other, unknown
[cph02][DEBUG ] /dev/ram3 other, unknown
[cph02][DEBUG ] /dev/ram4 other, unknown
[cph02][DEBUG ] /dev/ram5 other, unknown
[cph02][DEBUG ] /dev/ram6 other, unknown
[cph02][DEBUG ] /dev/ram7 other, unknown
[cph02][DEBUG ] /dev/ram8 other, unknown
[cph02][DEBUG ] /dev/ram9 other, unknown
[cph02][DEBUG ] /dev/sda :
[cph02][DEBUG ]  /dev/sda2 other, 0x5
[cph02][DEBUG ]  /dev/sda5 swap, swap
[cph02][DEBUG ]  /dev/sda1 other, ext4, mounted on /
[cph02][DEBUG ] /dev/sdb :
[cph02][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.8
[cph02][DEBUG ] /dev/sdc :
[cph02][DEBUG ]  /dev/sdc1 ceph data, active, cluster ceph, osd.9
[cph02][DEBUG ] /dev/sdd :
[cph02][DEBUG ]  /dev/sdd1 ceph data, active, cluster ceph, osd.10
[cph02][DEBUG ] /dev/sde :
[cph02][DEBUG ]  /dev/sde1 ceph data, active, cluster ceph, osd.11
[cph02][DEBUG ] /dev/sdf :
[cph02][DEBUG ]  /dev/sdf1 ceph data, active, cluster ceph, osd.12
[cph02][DEBUG ] /dev/sdg :
[cph02][DEBUG ]  /dev/sdg1 ceph data, active, cluster ceph, osd.13
[cph02][DEBUG ] /dev/sdh other, unknown
[cph02][DEBUG ] /dev/sdi other, unknown
[cph02][DEBUG ] /dev/sdj :
[cph02][DEBUG ]  /dev/sdj1 ceph journal
[cph02][DEBUG ]  /dev/sdj2 ceph journal
[cph02][DEBUG ] /dev/sdk :
[cph02][DEBUG ]  /dev/sdk1 ceph journal
[cph02][DEBUG ]  /dev/sdk2 ceph journal
[cph02][DEBUG ] /dev/sdl :
[cph02][DEBUG ]  /dev/sdl1 ceph journal
[cph02][DEBUG ]  /dev/sdl2 ceph journal
[cph02][DEBUG ] /dev/sdm other, unknown
[cph02][DEBUG ] /dev/sdn :
[cph02][DEBUG ]  /dev/sdn2 ceph journal, for /dev/sdn1
[cph02][DEBUG ]  /dev/sdn1 ceph data, active, cluster ceph, osd.42, journal /dev/sdn2
[cph02][DEBUG ] /dev/sdo :
[cph02][DEBUG ]  /dev/sdo2 ceph journal, for /dev/sdo1
[cph02][DEBUG ]  /dev/sdo1 ceph data, active, cluster ceph, osd.43, journal /dev/sdo2
[cph02][DEBUG ] /dev/sdp :
[cph02][DEBUG ]  /dev/sdp2 ceph journal, for /dev/sdp1
[cph02][DEBUG ]  /dev/sdp1 ceph data, active, cluster ceph, osd.51, journal /dev/sdp2

History

#1 Updated by Greg Farnum almost 7 years ago

  • Project changed from Ceph to Ceph-deploy

#2 Updated by Michael Kidd over 6 years ago

I experienced this and identified the cause as the following setting:

[osd]
setuser_match_path = /var/lib/ceph/$type/$cluster-$id

The specific use case was migrating from Hammer to Jewel without wanting to chown all the old OSDs to the 'ceph' user as part of the upgrade. The workaround in this instance was to create separate [osd.X] entries covering the old OSDs, while leaving the setting out of the main [osd] section. This permitted normal OSD creation under Jewel.
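
For illustration, a minimal sketch of such a ceph.conf layout, assuming osd.8 and osd.9 stand in for the pre-upgrade OSDs whose data directories are still owned by root (the ids here are chosen for illustration, not taken from this report):

[osd]
# setuser_match_path deliberately omitted here, so OSDs created under
# Jewel run as the 'ceph' user and pass the ceph-osd --check-*-journal
# probes during ceph-disk prepare

# pre-Jewel OSDs still owned by root keep the old matching behaviour
[osd.8]
setuser_match_path = /var/lib/ceph/$type/$cluster-$id

[osd.9]
setuser_match_path = /var/lib/ceph/$type/$cluster-$id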

#3 Updated by elder one over 6 years ago

Thank you!

I removed the line

 setuser_match_path = /var/lib/ceph/$type/$cluster-$id

from my ceph.conf and the disk was added successfully.

#4 Updated by Michael Kidd over 6 years ago

  • Subject changed from Error adding disk to ceph cluster to Error adding disk to ceph cluster: journal specified but not allowed by osd backend

#5 Updated by Vikhyat Umrao over 6 years ago

Downstream Red Hat Ceph Storage bug: https://bugzilla.redhat.com/show_bug.cgi?id=1468840

#6 Updated by Alfredo Deza about 6 years ago

  • Status changed from New to Closed

Closing as it is not a ceph-deploy issue (see https://bugzilla.redhat.com/show_bug.cgi?id=1468840)
