Bug #23911

closed

ceph:luminous: osd out/down when setup with ubuntu/bluestore

Added by Vasu Kulkarni about 6 years ago. Updated over 2 years ago.

Status:
Won't Fix - EOL
Priority:
High
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This could be a systemd issue, or something more:

a) set up the cluster using ceph-deploy
b) create the OSDs with the ceph-disk/bluestore option:
cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy osd create --bluestore mira006:sdb

c) the OSD is reported out/down

http://qa-proxy.ceph.com/teuthology/teuthology-2018-04-26_05:55:02-ceph-deploy-luminous-distro-basic-mira/2441160/teuthology.log


2018-04-26T09:49:38.010 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.IwXf08
2018-04-26T09:49:38.011 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] unmount: Unmounting /var/lib/ceph/tmp/mnt.IwXf08
2018-04-26T09:49:38.011 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.IwXf08
2018-04-26T09:49:38.125 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] get_dm_uuid: get_dm_uuid /dev/sdb uuid path is /sys/dev/block/8:16/dm/uuid
2018-04-26T09:49:38.132 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb
2018-04-26T09:49:39.149 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][DEBUG ] Warning: The kernel is still using the old partition table.
2018-04-26T09:49:39.149 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][DEBUG ] The new table will be used at the next reboot or after you
2018-04-26T09:49:39.149 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][DEBUG ] run partprobe(8) or kpartx(8)
2018-04-26T09:49:39.149 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][DEBUG ] The operation has completed successfully.
2018-04-26T09:49:39.149 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] update_partition: Calling partprobe on prepared device /dev/sdb
2018-04-26T09:49:39.149 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] command_check_call: Running command: /sbin/udevadm settle --timeout=600
2018-04-26T09:49:39.150 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] command: Running command: /usr/bin/flock -s /dev/sdb /sbin/partprobe /dev/sdb
2018-04-26T09:49:39.263 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] command_check_call: Running command: /sbin/udevadm settle --timeout=600
2018-04-26T09:49:39.263 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] command_check_call: Running command: /sbin/udevadm trigger --action=add --sysname-match sdb1
2018-04-26T09:49:39.280 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][INFO  ] Running command: sudo systemctl enable ceph.target
2018-04-26T09:49:44.402 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][INFO  ] checking OSD status...
2018-04-26T09:49:44.402 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][DEBUG ] find the location of an executable
2018-04-26T09:49:44.406 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
2018-04-26T09:49:44.723 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] there is 1 OSD down
2018-04-26T09:49:44.724 INFO:teuthology.orchestra.run.mira006.stderr:[mira006][WARNING] there is 1 OSD out
2018-04-26T09:49:44.724 INFO:teuthology.orchestra.run.mira006.stderr:[ceph_deploy.osd][DEBUG ] Host mira006 is now ready for osd use.
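For reference, the "checking OSD status" step in the log boils down to parsing the JSON from `ceph --cluster=ceph osd stat --format=json` and comparing the up/in counts against the total. A minimal sketch of that check (the sample payload and field names are assumptions based on the luminous-era `osd stat` schema, not captured from this run):

```python
import json

# Hypothetical sample of `ceph osd stat --format=json` output for a cluster
# where the single OSD never came up/in (fields assumed from the luminous-era
# schema: num_osds, num_up_osds, num_in_osds).
sample = '{"epoch": 12, "num_osds": 1, "num_up_osds": 0, "num_in_osds": 0}'

def count_down_out(osd_stat_json):
    """Return (down, out) OSD counts from an `osd stat` JSON blob."""
    stat = json.loads(osd_stat_json)
    down = stat["num_osds"] - stat["num_up_osds"]
    out = stat["num_osds"] - stat["num_in_osds"]
    return down, out

down, out = count_down_out(sample)
print("there is %d OSD down, %d OSD out" % (down, out))
```

With the sample above this reproduces the warnings seen in the log ("there is 1 OSD down" / "there is 1 OSD out") even though ceph-deploy then reports the host as ready.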
