Bug #7261

closed

No OSDs are up after a successful ceph-deploy call

Added by Alfredo Deza about 10 years ago. Updated about 10 years ago.

Status:
Resolved
Priority:
High
Assignee:
-
Category:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

After ceph-deploy deploys a set of OSDs successfully, the cluster is never able to bring any of them
up, so 'ceph health' always returns HEALTH_ERR.

2014-01-27T13:48:12.516 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy osd create --zap-disk plana87:sdd'
2014-01-27T13:48:12.639 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.cli][INFO  ] Invoked (1.3.4): ./ceph-deploy osd create --zap-disk plana87:sdd
2014-01-27T13:48:12.640 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks plana87:/dev/sdd:
2014-01-27T13:48:12.697 INFO:teuthology.orchestra.run.err:[10.214.133.22]: Warning: Permanently added 'plana87,10.214.133.29' (ECDSA) to the list of known hosts.
2014-01-27T13:48:12.827 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] connected to host: plana87
2014-01-27T13:48:12.828 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] detect platform information from remote host
2014-01-27T13:48:12.846 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] detect machine type
2014-01-27T13:48:12.850 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
2014-01-27T13:48:12.850 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Deploying osd to plana87
2014-01-27T13:48:12.851 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
2014-01-27T13:48:12.853 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:12.877 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing host plana87 disk /dev/sdd journal None activate True
2014-01-27T13:48:12.877 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][INFO  ] Running command: sudo ceph-disk-prepare --zap-disk --fs-type xfs --cluster ceph -- /dev/sdd
2014-01-27T13:48:15.302 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdd
2014-01-27T13:48:22.688 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] Creating new GPT entries.
2014-01-27T13:48:22.689 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
2014-01-27T13:48:22.689 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] other utilities.
2014-01-27T13:48:22.689 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:22.689 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] Information: Moved requested sector from 34 to 2048 in
2014-01-27T13:48:22.692 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:22.692 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:22.692 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
2014-01-27T13:48:22.692 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:22.693 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:22.693 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] meta-data=/dev/sdd1              isize=2048   agcount=4, agsize=30196417 blks
2014-01-27T13:48:22.693 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
2014-01-27T13:48:22.693 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] data     =                       bsize=4096   blocks=120785665, imaxpct=25
2014-01-27T13:48:22.693 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ]          =                       sunit=0      swidth=0 blks
2014-01-27T13:48:22.694 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
2014-01-27T13:48:22.694 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] log      =internal log           bsize=4096   blocks=58977, version=2
2014-01-27T13:48:22.694 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2014-01-27T13:48:22.694 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2014-01-27T13:48:22.694 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:22.695 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana87][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:22.711 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Host plana87 is now ready for osd use.
2014-01-27T13:48:22.723 INFO:teuthology.task.ceph-deploy:successfully created osd
2014-01-27T13:48:22.723 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy osd create --zap-disk plana83:sdb'
2014-01-27T13:48:22.910 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.cli][INFO  ] Invoked (1.3.4): ./ceph-deploy osd create --zap-disk plana83:sdb
2014-01-27T13:48:22.911 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks plana83:/dev/sdb:
2014-01-27T13:48:22.967 INFO:teuthology.orchestra.run.err:[10.214.133.22]: Warning: Permanently added 'plana83,10.214.133.33' (ECDSA) to the list of known hosts.
2014-01-27T13:48:23.096 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] connected to host: plana83
2014-01-27T13:48:23.097 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] detect platform information from remote host
2014-01-27T13:48:23.115 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] detect machine type
2014-01-27T13:48:23.119 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
2014-01-27T13:48:23.119 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Deploying osd to plana83
2014-01-27T13:48:23.119 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
2014-01-27T13:48:23.122 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:23.162 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing host plana83 disk /dev/sdb journal None activate True
2014-01-27T13:48:23.163 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo ceph-disk-prepare --zap-disk --fs-type xfs --cluster ceph -- /dev/sdb
2014-01-27T13:48:26.189 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
2014-01-27T13:48:33.926 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Creating new GPT entries.
2014-01-27T13:48:33.927 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
2014-01-27T13:48:33.927 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] other utilities.
2014-01-27T13:48:33.928 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:33.928 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Information: Moved requested sector from 34 to 2048 in
2014-01-27T13:48:33.930 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:33.931 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:33.931 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
2014-01-27T13:48:33.931 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:33.931 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:33.931 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=30196417 blks
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] data     =                       bsize=4096   blocks=120785665, imaxpct=25
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sunit=0      swidth=0 blks
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] log      =internal log           bsize=4096   blocks=58977, version=2
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2014-01-27T13:48:33.932 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:33.933 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:33.950 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Host plana83 is now ready for osd use.
2014-01-27T13:48:33.961 INFO:teuthology.task.ceph-deploy:successfully created osd
2014-01-27T13:48:33.961 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy osd create --zap-disk plana83:sdc'
2014-01-27T13:48:34.084 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.cli][INFO  ] Invoked (1.3.4): ./ceph-deploy osd create --zap-disk plana83:sdc
2014-01-27T13:48:34.085 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks plana83:/dev/sdc:
2014-01-27T13:48:34.133 INFO:teuthology.orchestra.run.err:[10.214.133.22]: Warning: Permanently added 'plana83,10.214.133.33' (ECDSA) to the list of known hosts.
2014-01-27T13:48:34.265 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] connected to host: plana83
2014-01-27T13:48:34.265 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] detect platform information from remote host
2014-01-27T13:48:34.284 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] detect machine type
2014-01-27T13:48:34.287 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
2014-01-27T13:48:34.288 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Deploying osd to plana83
2014-01-27T13:48:34.288 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
2014-01-27T13:48:34.291 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:34.315 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing host plana83 disk /dev/sdc journal None activate True
2014-01-27T13:48:34.315 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo ceph-disk-prepare --zap-disk --fs-type xfs --cluster ceph -- /dev/sdc
2014-01-27T13:48:36.740 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdc
2014-01-27T13:48:44.226 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Creating new GPT entries.
2014-01-27T13:48:44.227 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
2014-01-27T13:48:44.229 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] other utilities.
2014-01-27T13:48:44.229 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:44.229 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Information: Moved requested sector from 34 to 2048 in
2014-01-27T13:48:44.229 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:44.230 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:44.230 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
2014-01-27T13:48:44.230 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:44.230 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:44.230 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] meta-data=/dev/sdc1              isize=2048   agcount=4, agsize=30196417 blks
2014-01-27T13:48:44.231 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
2014-01-27T13:48:44.232 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] data     =                       bsize=4096   blocks=120785665, imaxpct=25
2014-01-27T13:48:44.232 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sunit=0      swidth=0 blks
2014-01-27T13:48:44.232 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
2014-01-27T13:48:44.233 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] log      =internal log           bsize=4096   blocks=58977, version=2
2014-01-27T13:48:44.233 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2014-01-27T13:48:44.233 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2014-01-27T13:48:44.233 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:44.233 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:44.242 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Host plana83 is now ready for osd use.
2014-01-27T13:48:44.253 INFO:teuthology.task.ceph-deploy:successfully created osd
2014-01-27T13:48:44.253 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest/ceph-deploy && ./ceph-deploy osd create --zap-disk plana83:sdd'
2014-01-27T13:48:44.376 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.cli][INFO  ] Invoked (1.3.4): ./ceph-deploy osd create --zap-disk plana83:sdd
2014-01-27T13:48:44.377 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks plana83:/dev/sdd:
2014-01-27T13:48:44.430 INFO:teuthology.orchestra.run.err:[10.214.133.22]: Warning: Permanently added 'plana83,10.214.133.33' (ECDSA) to the list of known hosts.
2014-01-27T13:48:44.559 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] connected to host: plana83
2014-01-27T13:48:44.559 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] detect platform information from remote host
2014-01-27T13:48:44.579 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] detect machine type
2014-01-27T13:48:44.582 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
2014-01-27T13:48:44.583 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Deploying osd to plana83
2014-01-27T13:48:44.583 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
2014-01-27T13:48:44.586 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:44.610 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Preparing host plana83 disk /dev/sdd journal None activate True
2014-01-27T13:48:44.610 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo ceph-disk-prepare --zap-disk --fs-type xfs --cluster ceph -- /dev/sdd
2014-01-27T13:48:47.035 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdd
2014-01-27T13:48:54.421 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Creating new GPT entries.
2014-01-27T13:48:54.422 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
2014-01-27T13:48:54.423 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] other utilities.
2014-01-27T13:48:54.423 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:54.423 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Information: Moved requested sector from 34 to 2048 in
2014-01-27T13:48:54.426 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:54.426 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:54.426 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
2014-01-27T13:48:54.426 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] order to align on 2048-sector boundaries.
2014-01-27T13:48:54.426 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:54.427 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] meta-data=/dev/sdd1              isize=2048   agcount=4, agsize=30196417 blks
2014-01-27T13:48:54.427 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
2014-01-27T13:48:54.427 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] data     =                       bsize=4096   blocks=120785665, imaxpct=25
2014-01-27T13:48:54.427 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sunit=0      swidth=0 blks
2014-01-27T13:48:54.427 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
2014-01-27T13:48:54.428 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] log      =internal log           bsize=4096   blocks=58977, version=2
2014-01-27T13:48:54.428 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2014-01-27T13:48:54.428 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2014-01-27T13:48:54.428 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][DEBUG ] The operation has completed successfully.
2014-01-27T13:48:54.428 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [plana83][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
2014-01-27T13:48:54.437 INFO:teuthology.orchestra.run.err:[10.214.133.22]: [ceph_deploy.osd][DEBUG ] Host plana83 is now ready for osd use.
2014-01-27T13:48:54.449 INFO:teuthology.task.ceph-deploy:successfully created osd
2014-01-27T13:48:54.449 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest && sudo ceph health'
2014-01-27T13:48:54.748 DEBUG:teuthology.task.ceph-deploy:Ceph health: HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
2014-01-27T13:49:04.748 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest && sudo ceph health'
2014-01-27T13:49:04.980 DEBUG:teuthology.task.ceph-deploy:Ceph health: HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
2014-01-27T13:49:14.980 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest && sudo ceph health'
2014-01-27T13:49:15.215 DEBUG:teuthology.task.ceph-deploy:Ceph health: HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
2014-01-27T13:49:25.216 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest && sudo ceph health'
2014-01-27T13:49:25.452 DEBUG:teuthology.task.ceph-deploy:Ceph health: HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
2014-01-27T13:49:35.452 DEBUG:teuthology.orchestra.run:Running [10.214.133.22]: 'cd /home/ubuntu/cephtest && sudo ceph health'
2014-01-27T13:49:35.685 DEBUG:teuthology.task.ceph-deploy:Ceph health: HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds

Log file: http://qa-proxy.ceph.com/teuthology/teuthology-2014-01-27_09:32:50-ceph-deploy-next-testing-basic-plana/55438/teuthology.log

Josh had some findings:

After enabling udev logging, re-running ceph-disk-prepare, and notifying udev, I see the udev rule is calling ceph-disk-activate correctly, but it is erroring out.

It looks like the root cause is that it bails on journal activation when it fails to find the fs type of the journal (which doesn't make sense, since it's just a partition):

Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) 'Traceback (most recent call last):'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/sbin/ceph-disk", line 2516, in <module>'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    main()'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/sbin/ceph-disk", line 2494, in main'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    args.func(args)'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/sbin/ceph-disk", line 1846, in main_activate'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    init=args.mark_init,'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/sbin/ceph-disk", line 1617, in mount_activate'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    path = mount(dev=dev, fstype=fstype, options=mount_options)'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/sbin/ceph-disk", line 780, in mount'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    path,'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/lib/python2.7/subprocess.py", line 506, in check_call'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    retcode = call(*popenargs, **kwargs)'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/lib/python2.7/subprocess.py", line 493, in call'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    return Popen(*popenargs, **kwargs).wait()'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/lib/python2.7/subprocess.py", line 679, in __init__'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    errread, errwrite)'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '  File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child'
Jan 28 11:37:50 mira012 udevd[12104]: '/usr/sbin/ceph-disk-activate /dev/sdc1'(err) '    raise child_exception'
Jan 28 11:37:50 mira012 rsyslogd-2177: imuxsock begins to drop messages from pid 12104 due to rate-limiting
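The traceback above can be reproduced in miniature. The sketch below is purely illustrative (the function name and mount invocation are hypothetical, not ceph-disk's actual internals), but it shows the mechanism: if filesystem detection yields None and that value goes straight into a subprocess argument list, Python raises from deep inside subprocess while spawning the child, before the mount command ever runs.

```python
import subprocess

def mount(dev, fstype, options, mountpoint='/mnt/tmp'):
    # Build the mount command from the detected fstype as-is. If fstype
    # is None, the argument list contains a non-string and Popen raises
    # (TypeError) while preparing to exec, before 'mount' is looked up.
    subprocess.check_call(
        ['mount', '-t', fstype, '-o', options, '--', dev, mountpoint])

# A bare journal partition carries no filesystem, so detection can
# plausibly return None for it:
try:
    mount('/dev/sdc1', None, 'noatime')
except TypeError as exc:
    print('activation aborted inside subprocess: %s' % exc)
```

Note that nothing is actually mounted here; the point is that the error surfaces as an opaque exception from subprocess internals, which matches the udev log above.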
#1

Updated by Alfredo Deza about 10 years ago

  • Status changed from 12 to Resolved

Pull request: https://github.com/ceph/ceph/pull/1156

Merged into ceph's next branch (cherry-picked into master): 3a39f36
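As a general defensive pattern for this class of failure (a sketch under assumptions, not the change merged in the pull request above), activation code can validate the detected filesystem type before building the mount command, so the error is a readable message rather than an opaque exception from subprocess internals:

```python
import subprocess

def mount_checked(dev, fstype, options, mountpoint='/mnt/tmp'):
    # Hypothetical guard: fail fast with a clear error when filesystem
    # detection came back empty, instead of letting a None fstype reach
    # the subprocess argument list.
    if not fstype:
        raise ValueError(
            'cannot activate %s: no filesystem type detected '
            '(is this a bare journal partition?)' % dev)
    subprocess.check_call(
        ['mount', '-t', fstype, '-o', options, '--', dev, mountpoint])
```

With this guard in place, a udev-triggered activation of a partition with no detectable filesystem would log one explanatory line rather than a full traceback.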
