Bug #15397 (Closed): OSDs down/out with ceph-deploy bluestore option

Added by Vasu Kulkarni about 8 years ago. Updated over 7 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source: other
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

I think I am passing the right argument, but the OSDs are still in the out state.

http://qa-proxy.ceph.com/teuthology/vasu-2016-04-05_15:01:19-ceph-deploy-master---basic-vps/109836/teuthology.log

2016-04-05T15:55:42.874 INFO:teuthology.orchestra.run.vpm188:Running: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy osd create --bluestore vpm157:vdb'
2016-04-05T15:55:43.033 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ubuntu/.cephdeploy.conf
2016-04-05T15:55:43.033 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ] Invoked (1.5.31): ./ceph-deploy osd create --bluestore vpm157:vdb
2016-04-05T15:55:43.033 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ] ceph-deploy options:
2016-04-05T15:55:43.033 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  username                      : None
2016-04-05T15:55:43.034 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  disk                          : [('vpm157', '/dev/vdb', None)]
2016-04-05T15:55:43.034 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
2016-04-05T15:55:43.034 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  verbose                       : False
2016-04-05T15:55:43.035 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  bluestore                     : True
2016-04-05T15:55:43.035 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
2016-04-05T15:55:43.035 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  subcommand                    : create
2016-04-05T15:55:43.036 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
2016-04-05T15:55:43.036 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  quiet                         : False
2016-04-05T15:55:43.036 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x27fbfc8>
2016-04-05T15:55:43.036 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  cluster                       : ceph
2016-04-05T15:55:43.037 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
2016-04-05T15:55:43.037 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x27eb9b0>
2016-04-05T15:55:43.037 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
2016-04-05T15:55:43.038 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  default_release               : False
2016-04-05T15:55:43.038 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.cli][INFO  ]  zap_disk                      : False
2016-04-05T15:55:43.038 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks vpm157:/dev/vdb:
2016-04-05T15:55:43.101 INFO:teuthology.orchestra.run.vpm188.stderr:Warning: Permanently added 'vpm157,172.21.2.157' (ECDSA) to the list of known hosts.
2016-04-05T15:55:43.355 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] connection detected need for sudo
2016-04-05T15:55:43.420 INFO:teuthology.orchestra.run.vpm188.stderr:Warning: Permanently added 'vpm157,172.21.2.157' (ECDSA) to the list of known hosts.
2016-04-05T15:55:43.683 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] connected to host: vpm157
2016-04-05T15:55:43.683 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] detect platform information from remote host
2016-04-05T15:55:43.712 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] detect machine type
2016-04-05T15:55:43.719 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] find the location of an executable
2016-04-05T15:55:43.721 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
2016-04-05T15:55:43.722 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.osd][DEBUG ] Deploying osd to vpm157
2016-04-05T15:55:43.722 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
2016-04-05T15:55:43.726 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.osd][DEBUG ] Preparing host vpm157 disk /dev/vdb journal None activate True
2016-04-05T15:55:43.726 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] find the location of an executable
2016-04-05T15:55:43.740 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v prepare --bluestore --cluster ceph --fs-type xfs -- /dev/vdb
2016-04-05T15:55:43.857 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
2016-04-05T15:55:43.874 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:43.874 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] set_type: Will colocate block with data on /dev/vdb
2016-04-05T15:55:43.874 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:43.874 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:43.874 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:43.874 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
2016-04-05T15:55:43.877 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
2016-04-05T15:55:43.885 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
2016-04-05T15:55:43.892 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
2016-04-05T15:55:43.909 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:43.909 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] set_data_partition: Creating osd partition on /dev/vdb
2016-04-05T15:55:43.910 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:43.910 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] ptype_tobe_for_name: name = data
2016-04-05T15:55:43.910 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:43.911 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] create_partition: Creating data partition num 1 size 100 on /dev/vdb
2016-04-05T15:55:43.911 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /sbin/sgdisk --new=1:0:+100M --change-name=1:ceph data --partition-guid=1:388cf022-286d-4a61-b894-e370697da7f1 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdb
2016-04-05T15:55:44.986 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] The operation has completed successfully.
2016-04-05T15:55:44.987 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] update_partition: Calling partprobe on created device /dev/vdb
2016-04-05T15:55:44.987 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-05T15:55:44.987 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/partprobe /dev/vdb
2016-04-05T15:55:45.003 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-05T15:55:45.069 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:45.070 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:45.070 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/253:17/dm/uuid
2016-04-05T15:55:45.070 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:45.071 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:45.071 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] ptype_tobe_for_name: name = block
2016-04-05T15:55:45.071 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:45.072 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] create_partition: Creating block partition num 2 size 0 on /dev/vdb
2016-04-05T15:55:45.072 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /sbin/sgdisk --largest-new=2 --change-name=2:ceph block --partition-guid=2:c6c0ba0b-6eea-4809-9b61-6fdbad1b7045 --typecode=2:cafecafe-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/vdb
2016-04-05T15:55:46.089 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] The operation has completed successfully.
2016-04-05T15:55:46.089 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] update_partition: Calling partprobe on created device /dev/vdb
2016-04-05T15:55:46.089 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-05T15:55:46.304 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/partprobe /dev/vdb
2016-04-05T15:55:46.469 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-05T15:55:46.988 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:46.988 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:46.988 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/253:18/dm/uuid
2016-04-05T15:55:46.989 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] prepare_device: Block is GPT partition /dev/disk/by-partuuid/c6c0ba0b-6eea-4809-9b61-6fdbad1b7045
2016-04-05T15:55:46.989 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] prepare_device: Block is GPT partition /dev/disk/by-partuuid/c6c0ba0b-6eea-4809-9b61-6fdbad1b7045
2016-04-05T15:55:46.989 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] populate_data_path_device: Creating xfs fs on /dev/vdb1
2016-04-05T15:55:46.989 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdb1
2016-04-05T15:55:47.207 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] meta-data=/dev/vdb1              isize=2048   agcount=4, agsize=6400 blks
2016-04-05T15:55:47.208 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=1
2016-04-05T15:55:47.208 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ]          =                       crc=0        finobt=0
2016-04-05T15:55:47.208 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] data     =                       bsize=4096   blocks=25600, imaxpct=25
2016-04-05T15:55:47.208 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ]          =                       sunit=0      swidth=0 blks
2016-04-05T15:55:47.209 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
2016-04-05T15:55:47.209 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] log      =internal log           bsize=4096   blocks=864, version=2
2016-04-05T15:55:47.209 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2016-04-05T15:55:47.210 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2016-04-05T15:55:47.210 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] mount: Mounting /dev/vdb1 on /var/lib/ceph/tmp/mnt.0IzDce with options noatime,inode64
2016-04-05T15:55:47.210 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdb1 /var/lib/ceph/tmp/mnt.0IzDce
2016-04-05T15:55:47.210 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.0IzDce
2016-04-05T15:55:47.211 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.0IzDce
2016-04-05T15:55:47.211 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.0IzDce/ceph_fsid.24081.tmp
2016-04-05T15:55:47.224 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.0IzDce/ceph_fsid.24081.tmp
2016-04-05T15:55:47.257 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.0IzDce/fsid.24081.tmp
2016-04-05T15:55:47.258 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.0IzDce/fsid.24081.tmp
2016-04-05T15:55:47.290 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.0IzDce/magic.24081.tmp
2016-04-05T15:55:47.291 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.0IzDce/magic.24081.tmp
2016-04-05T15:55:47.306 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.0IzDce/block_uuid.24081.tmp
2016-04-05T15:55:47.314 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.0IzDce/block_uuid.24081.tmp
2016-04-05T15:55:47.317 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.0IzDce/block -> /dev/disk/by-partuuid/c6c0ba0b-6eea-4809-9b61-6fdbad1b7045
2016-04-05T15:55:47.318 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.0IzDce/type.24081.tmp
2016-04-05T15:55:47.351 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.0IzDce/type.24081.tmp
2016-04-05T15:55:47.351 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/restorecon -R /var/lib/ceph/tmp/mnt.0IzDce
2016-04-05T15:55:47.359 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.0IzDce
2016-04-05T15:55:47.363 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] unmount: Unmounting /var/lib/ceph/tmp/mnt.0IzDce
2016-04-05T15:55:47.363 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.0IzDce
2016-04-05T15:55:47.380 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] get_dm_uuid: get_dm_uuid /dev/vdb uuid path is /sys/dev/block/253:16/dm/uuid
2016-04-05T15:55:47.380 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdb
2016-04-05T15:55:48.450 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] The operation has completed successfully.
2016-04-05T15:55:48.450 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] update_partition: Calling partprobe on prepared device /dev/vdb
2016-04-05T15:55:48.450 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-05T15:55:48.715 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command: Running command: /sbin/partprobe /dev/vdb
2016-04-05T15:55:48.830 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-05T15:55:49.498 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdb1
2016-04-05T15:55:49.556 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][INFO  ] Running command: sudo systemctl enable ceph.target
2016-04-05T15:55:54.774 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][INFO  ] checking OSD status...
2016-04-05T15:55:54.774 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][DEBUG ] find the location of an executable
2016-04-05T15:55:54.785 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
2016-04-05T15:55:55.060 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] there are 2 OSDs down
2016-04-05T15:55:55.060 INFO:teuthology.orchestra.run.vpm188.stderr:[vpm157][WARNING] there are 2 OSDs out
2016-04-05T15:55:55.061 INFO:teuthology.orchestra.run.vpm188.stderr:[ceph_deploy.osd][DEBUG ] Host vpm157 is now ready for osd use.
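
For reference, the two "OSDs down" / "OSDs out" warnings above come from ceph-deploy checking the output of the osd stat command it just ran. A quick way to re-check the cluster state by hand from a node with an admin keyring (the osd tree call is a standard ceph CLI command, not taken from the log above):

# what ceph-deploy inspects to decide whether the OSDs are up/in
sudo ceph --cluster=ceph osd stat --format=json
# shows each OSD's up/down and in/out state explicitly
sudo ceph --cluster=ceph osd tree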

#1 - Updated by Vasu Kulkarni about 8 years ago

  • Assignee set to Loïc Dachary

Loic,

Does the -fs option along with --bluestore to ceph-disk look right to you? Does it need to be fixed in ceph-disk?

#2 - Updated by Loïc Dachary about 8 years ago

I'm not sure I understand the question. Which -fs option?

#3 - Updated by Loïc Dachary about 8 years ago

  • Assignee changed from Loïc Dachary to Vasu Kulkarni

#4 - Updated by Vasu Kulkarni about 8 years ago

  • Assignee changed from Vasu Kulkarni to Loïc Dachary

Loic,

I just pass the '--bluestore' option to osd create, but internally it calls ceph-disk with '--fs-type xfs' along with the '--bluestore' option. I am wondering if that is correct usage?

sudo /usr/sbin/ceph-disk -v prepare --bluestore --cluster ceph --fs-type xfs -- /dev/vdb

#5 - Updated by Loïc Dachary about 8 years ago

The --fs-type option for bluestore is valid. If the OSDs are not activated, there is another reason. You need to go to the machine where the OSD is, run ceph-disk list to verify everything is as expected, and try ceph-disk --verbose activate /dev/vdb1 manually to see why it does not work. If it does work, you will need to check the journalctl output of the ceph-disk unit for /dev/vdb1 to find the error message. What operating system is it?
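
A minimal sketch of those steps on the OSD host; the journalctl unit name below is an assumption based on the per-device ceph-disk@ systemd template on CentOS 7, so adjust it for the actual device:

# 1. verify the prepared partitions look as expected
sudo ceph-disk list

# 2. try manual activation and watch the verbose output for the failure
sudo ceph-disk --verbose activate /dev/vdb1

# 3. if manual activation works, the automatic (udev/systemd) path is what
#    failed; check the logs of the per-device ceph-disk unit
sudo journalctl -u ceph-disk@dev-vdb1.service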

#6 - Updated by Loïc Dachary about 8 years ago

  • Assignee changed from Loïc Dachary to Vasu Kulkarni

#7 - Updated by Vasu Kulkarni about 8 years ago

Using --verbose, it looks like it needs the experimental features 'bluestore' as well as 'rocksdb' enabled; I was expecting this to be handled automatically along with the other OSD settings. It still didn't work for me; looking at other settings.


[ubuntu@mira034 ceph-deploy]$ sudo ceph-disk --verbose activate /dev/sdb1
main_activate: path = /dev/sdb1
get_dm_uuid: get_dm_uuid /dev/sdb1 uuid path is /sys/dev/block/8:17/dm/uuid
command: Running command: /sbin/blkid -o udev -p /dev/sdb1
command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdb1
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
mount: Mounting /dev/sdb1 on /var/lib/ceph/tmp/mnt.yrJOlV with options noatime,inode64
command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/sdb1 /var/lib/ceph/tmp/mnt.yrJOlV
command: Running command: /sbin/restorecon /var/lib/ceph/tmp/mnt.yrJOlV
activate: Cluster uuid is b42004a0-176f-4265-860b-f2d795cfac2b
command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
activate: Cluster name is ceph
activate: OSD uuid is 997ae264-fb34-40f2-a2a2-06638b0f0744
activate: OSD id is 0
activate: Initializing OSD...
command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.yrJOlV/activate.monmap
got monmap epoch 1
command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/lib/ceph/tmp/mnt.yrJOlV/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.yrJOlV --osd-uuid 997ae264-fb34-40f2-a2a2-06638b0f0744 --keyring /var/lib/ceph/tmp/mnt.yrJOlV/keyring --setuser ceph --setgroup ceph
2016-04-08 20:40:58.384159 7f30a6cb0800 -1 *** experimental feature 'bluestore' is not enabled ***
This feature is marked as experimental, which means it
 - is untested
 - is unsupported
 - may corrupt your data
 - may break your cluster is an unrecoverable fashion
To enable this feature, add this to your ceph.conf:
  enable experimental unrecoverable data corrupting features = bluestore

2016-04-08 20:40:58.384164 7f30a6cb0800 -1 unable to create object store
mount_activate: Failed to activate
unmount: Unmounting /var/lib/ceph/tmp/mnt.yrJOlV
command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.yrJOlV
Traceback (most recent call last):
  File "/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4915, in main
    args.func(args)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3269, in main_activate
    reactivate=args.reactivate,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3026, in mount_activate
    (osd_id, cluster) = activate(path, activate_key_template, init)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3202, in activate
    keyring=keyring,
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2678, in mkfs
    '--setgroup', get_ceph_user(),
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 439, in command_check_call
    return subprocess.check_call(arguments)
  File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/lib/ceph/tmp/mnt.yrJOlV/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.yrJOlV', '--osd-uuid', '997ae264-fb34-40f2-a2a2-06638b0f0744', '--keyring', '/var/lib/ceph/tmp/mnt.yrJOlV/keyring', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 237
[ubuntu@mira034 ceph-deploy]$ ceph-disk list
Traceback (most recent call last):
  File "/usr/sbin/ceph-disk", line 9, in <module>
    load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
    main(sys.argv[1:])
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4906, in main
    setup_statedir(args.statedir)
  File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4368, in setup_statedir
    os.mkdir(STATEDIR + "/tmp")
OSError: [Errno 13] Permission denied: '/var/lib/ceph/tmp'
[ubuntu@mira034 ceph-deploy]$ sudo ceph-disk list
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
/dev/sdb :
 /dev/sdb2 ceph block, for /dev/sdb1
 /dev/sdb1 ceph data, prepared, cluster ceph, osd.0, block /dev/sdb2
/dev/sdc other, unknown
/dev/sdd other, unknown
/dev/sde other, xfs
/dev/sdf other, btrfs
/dev/sdg other, xfs
/dev/sdh other, btrfs

#8 - Updated by Loïc Dachary about 8 years ago

Before you create the OSD, you could add the experimental feature support to ceph.conf. Does it work when you do that?
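
A minimal sketch of that ceph.conf addition, following the hint printed by ceph-osd in comment #7; placing it under [osd] and listing rocksdb next to bluestore are assumptions (a '*' value would enable all experimental features instead):

[osd]
enable experimental unrecoverable data corrupting features = bluestore rocksdb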

#9 - Updated by Vasu Kulkarni about 8 years ago

  • Assignee changed from Vasu Kulkarni to Sage Weil

I have enabled the following conf now after ceph-deploy new, but the OSDs are still out. I got this from wip-bluestore. Assigning this to Sage to check what I am missing.

  overrides:
    admin_socket:
      branch: master
    ceph:
      conf:
        mon:
          debug mon: 20
          debug ms: 1
          debug paxos: 20
        osd:
          bluestore block size: 96636764160
          bluestore bluefs env mirror: true
          debug bdev: 40
          debug bluefs: 20
          debug bluestore: 30
          debug filestore: 20
          debug journal: 20
          debug ms: 1
          debug osd: 20
          debug rocksdb: 10
          enable experimental unrecoverable data corrupting features: '*'
          osd debug randomize hobject sort order: false
          osd objectstore: bluestore

http://qa-proxy.ceph.com/teuthology/vasu-2016-04-09_16:25:15-ceph-deploy-master---basic-vps/118353/teuthology.log
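
For comparison, a sketch of what the [osd] section of the deployed ceph.conf would need to end up containing (values taken from the overrides above); whether these teuthology overrides actually reach the ceph.conf that ceph-deploy pushes to the OSD hosts is exactly what is in question here:

[osd]
enable experimental unrecoverable data corrupting features = *
osd objectstore = bluestore
bluestore block size = 96636764160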

#10 - Updated by Sage Weil over 7 years ago

  • Status changed from New to Closed