Bug #15350
Closed
ceph-disk failed on centos in ceph-disk-jewel-distro-basic-mira
Description
Run: http://pulpito.ceph.com/teuthology-2016-03-31_23:13:02-ceph-disk-jewel-distro-basic-mira/
Job: 101561
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2016-03-31_23:13:02-ceph-disk-jewel-distro-basic-mira/101561/teuthology.log
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:get_dm_uuid: get_dm_uuid /dev/mapper/mpatha uuid path is /sys/dev/block/253:0/dm/uuid
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:get_dm_uuid: get_dm_uuid /dev/mapper/mpatha uuid is mpath-2001b4d2000000000
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:populate_data_path_device: Creating xfs fs on /dev/dm-1
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/dm-1
2016-04-01T00:08:39.710 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:existing superblock read failed: Input/output error
2016-04-01T00:08:39.710 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:mkfs.xfs: pwrite64 failed: Input/output error
2016-04-01T00:08:39.710 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:meta-data=/dev/dm-1 isize=2048 agcount=4, agsize=61041197 blks
2016-04-01T00:08:39.710 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:= sectsz=512 attr=2, projid32bit=1
2016-04-01T00:08:39.711 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:= crc=0 finobt=0
2016-04-01T00:08:39.711 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:data = bsize=4096 blocks=244164785, imaxpct=25
2016-04-01T00:08:39.711 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:= sunit=0 swidth=0 blks
2016-04-01T00:08:39.712 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:naming =version 2 bsize=4096 ascii-ci=0 ftype=0
2016-04-01T00:08:39.712 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:log =internal log bsize=4096 blocks=119221, version=2
2016-04-01T00:08:39.712 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:= sectsz=512 sunit=0 blks, lazy-count=1
2016-04-01T00:08:39.712 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:realtime =none extsz=4096 blocks=0, rtextents=0
2016-04-01T00:08:39.879 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:Traceback (most recent call last):
2016-04-01T00:08:39.879 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/sbin/ceph-disk", line 9, in <module>
2016-04-01T00:08:39.879 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
2016-04-01T00:08:39.879 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
2016-04-01T00:08:39.880 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:main(sys.argv[1:])
2016-04-01T00:08:39.881 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4915, in main
2016-04-01T00:08:39.881 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:args.func(args)
2016-04-01T00:08:39.881 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1774, in main
2016-04-01T00:08:39.881 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:Prepare.factory(args).prepare()
2016-04-01T00:08:39.881 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1762, in prepare
2016-04-01T00:08:39.881 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:self.prepare_locked()
2016-04-01T00:08:39.881 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1794, in prepare_locked
2016-04-01T00:08:39.882 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:self.data.prepare(self.journal)
2016-04-01T00:08:39.882 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2446, in prepare
2016-04-01T00:08:39.883 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:self.prepare_device(*to_prepare_list)
2016-04-01T00:08:39.883 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2624, in prepare_device
2016-04-01T00:08:39.883 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:self.populate_data_path_device(*to_prepare_list)
2016-04-01T00:08:39.883 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2579, in populate_data_path_device
2016-04-01T00:08:39.883 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:raise Error(e)
2016-04-01T00:08:39.883 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:ceph_disk.main.Error: Error: Command '['/usr/sbin/mkfs', '-t', 'xfs', '-f', '-i', 'size=2048', '--', '/dev/dm-1']' returned non-zero exit status 1
2016-04-01T00:08:39.937 INFO:tasks.workunit.client.0.mira037.stdout:../../../workunit.client.0/ceph-disk/ceph-disk-test.py::TestCephDisk::test_activate_multipath FAILED
2016-04-01T00:08:39.937 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.937 INFO:tasks.workunit.client.0.mira037.stdout:=================================== FAILURES ===================================
2016-04-01T00:08:39.937 INFO:tasks.workunit.client.0.mira037.stdout:_____________________ TestCephDisk.test_destroy_osd_by_id ______________________
2016-04-01T00:08:39.938 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.938 INFO:tasks.workunit.client.0.mira037.stdout:self = <ceph-disk-test.TestCephDisk object at 0x1fe79d0>
2016-04-01T00:08:39.938 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.938 INFO:tasks.workunit.client.0.mira037.stdout:    def test_destroy_osd_by_id(self):
2016-04-01T00:08:39.938 INFO:tasks.workunit.client.0.mira037.stdout:        c = CephDisk()
2016-04-01T00:08:39.938 INFO:tasks.workunit.client.0.mira037.stdout:        disk = c.unused_disks()[0]
2016-04-01T00:08:39.938 INFO:tasks.workunit.client.0.mira037.stdout:        osd_uuid = str(uuid.uuid1())
2016-04-01T00:08:39.939 INFO:tasks.workunit.client.0.mira037.stdout:        c.sh("ceph-disk --verbose prepare --osd-uuid " + osd_uuid + " " + disk)
2016-04-01T00:08:39.939 INFO:tasks.workunit.client.0.mira037.stdout:        c.wait_for_osd_up(osd_uuid)
2016-04-01T00:08:39.939 INFO:tasks.workunit.client.0.mira037.stdout:        c.check_osd_status(osd_uuid)
2016-04-01T00:08:39.939 INFO:tasks.workunit.client.0.mira037.stdout:>       c.destroy_osd(osd_uuid)
2016-04-01T00:08:39.939 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.940 INFO:tasks.workunit.client.0.mira037.stdout:../../../workunit.client.0/ceph-disk/ceph-disk-test.py:277:
2016-04-01T00:08:39.940 INFO:tasks.workunit.client.0.mira037.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2016-04-01T00:08:39.940 INFO:tasks.workunit.client.0.mira037.stdout:../../../workunit.client.0/ceph-disk/ceph-disk-test.py:168: in destroy_osd
2016-04-01T00:08:39.940 INFO:tasks.workunit.client.0.mira037.stdout:        """.format(id=id))
2016-04-01T00:08:39.940 INFO:tasks.workunit.client.0.mira037.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2016-04-01T00:08:39.940 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.940 INFO:tasks.workunit.client.0.mira037.stdout:command = '\n        set -xe\n        ceph-disk --verbose deactivate --deactivate-by-id 2\n        ceph-disk --verbose destroy --destroy-by-id 2 --zap\n        '
2016-04-01T00:08:39.941 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.941 INFO:tasks.workunit.client.0.mira037.stdout:    @staticmethod
2016-04-01T00:08:39.941 INFO:tasks.workunit.client.0.mira037.stdout:    def sh(command):
2016-04-01T00:08:39.941 INFO:tasks.workunit.client.0.mira037.stdout:        LOG.debug(":sh: " + command)
2016-04-01T00:08:39.941 INFO:tasks.workunit.client.0.mira037.stdout:        proc = subprocess.Popen(
2016-04-01T00:08:39.941 INFO:tasks.workunit.client.0.mira037.stdout:            args=command,
2016-04-01T00:08:39.941 INFO:tasks.workunit.client.0.mira037.stdout:            stdout=subprocess.PIPE,
2016-04-01T00:08:39.942 INFO:tasks.workunit.client.0.mira037.stdout:            stderr=subprocess.STDOUT,
2016-04-01T00:08:39.942 INFO:tasks.workunit.client.0.mira037.stdout:            shell=True,
2016-04-01T00:08:39.942 INFO:tasks.workunit.client.0.mira037.stdout:            bufsize=1)
2016-04-01T00:08:39.942 INFO:tasks.workunit.client.0.mira037.stdout:        lines = []
2016-04-01T00:08:39.942 INFO:tasks.workunit.client.0.mira037.stdout:        with proc.stdout:
2016-04-01T00:08:39.942 INFO:tasks.workunit.client.0.mira037.stdout:            for line in iter(proc.stdout.readline, b''):
2016-04-01T00:08:39.943 INFO:tasks.workunit.client.0.mira037.stdout:                line = line.decode('utf-8')
2016-04-01T00:08:39.943 INFO:tasks.workunit.client.0.mira037.stdout:                if 'dangerous and experimental' in line:
2016-04-01T00:08:39.943 INFO:tasks.workunit.client.0.mira037.stdout:                    LOG.debug('SKIP dangerous and experimental')
2016-04-01T00:08:39.943 INFO:tasks.workunit.client.0.mira037.stdout:                    continue
2016-04-01T00:08:39.943 INFO:tasks.workunit.client.0.mira037.stdout:                lines.append(line)
2016-04-01T00:08:39.943 INFO:tasks.workunit.client.0.mira037.stdout:                LOG.debug(line.strip().encode('ascii', 'ignore'))
2016-04-01T00:08:39.943 INFO:tasks.workunit.client.0.mira037.stdout:        if proc.wait() != 0:
2016-04-01T00:08:39.944 INFO:tasks.workunit.client.0.mira037.stdout:            raise subprocess.CalledProcessError(
2016-04-01T00:08:39.944 INFO:tasks.workunit.client.0.mira037.stdout:                returncode=proc.returncode,
2016-04-01T00:08:39.944 INFO:tasks.workunit.client.0.mira037.stdout:>               cmd=command
2016-04-01T00:08:39.944 INFO:tasks.workunit.client.0.mira037.stdout:            )
2016-04-01T00:08:39.944 INFO:tasks.workunit.client.0.mira037.stdout:E       CalledProcessError: Command '
2016-04-01T00:08:39.944 INFO:tasks.workunit.client.0.mira037.stdout:E       set -xe
2016-04-01T00:08:39.945 INFO:tasks.workunit.client.0.mira037.stdout:E       ceph-disk --verbose deactivate --deactivate-by-id 2
2016-04-01T00:08:39.945 INFO:tasks.workunit.client.0.mira037.stdout:E       ceph-disk --verbose destroy --destroy-by-id 2 --zap
2016-04-01T00:08:39.945 INFO:tasks.workunit.client.0.mira037.stdout:E       ' returned non-zero exit status 1
2016-04-01T00:08:39.945 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.945 INFO:tasks.workunit.client.0.mira037.stdout:../../../workunit.client.0/ceph-disk/ceph-disk-test.py:95: CalledProcessError
2016-04-01T00:08:39.945 INFO:tasks.workunit.client.0.mira037.stdout:_____________________ TestCephDisk.test_activate_multipath _____________________
2016-04-01T00:08:39.945 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.946 INFO:tasks.workunit.client.0.mira037.stdout:self = <ceph-disk-test.TestCephDisk object at 0x1f88850>
2016-04-01T00:08:39.946 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.946 INFO:tasks.workunit.client.0.mira037.stdout:    def test_activate_multipath(self):
2016-04-01T00:08:39.946 INFO:tasks.workunit.client.0.mira037.stdout:        c = CephDisk()
2016-04-01T00:08:39.946 INFO:tasks.workunit.client.0.mira037.stdout:        if c.sh("lsb_release -si").strip() != 'CentOS':
2016-04-01T00:08:39.946 INFO:tasks.workunit.client.0.mira037.stdout:            pytest.skip(
2016-04-01T00:08:39.946 INFO:tasks.workunit.client.0.mira037.stdout:                "see issue https://bugs.launchpad.net/ubuntu/+source/multipath-tools/+bug/1488688")
2016-04-01T00:08:39.947 INFO:tasks.workunit.client.0.mira037.stdout:        c.ensure_sd()
2016-04-01T00:08:39.947 INFO:tasks.workunit.client.0.mira037.stdout:        #
2016-04-01T00:08:39.947 INFO:tasks.workunit.client.0.mira037.stdout:        # Figure out the name of the multipath device
2016-04-01T00:08:39.947 INFO:tasks.workunit.client.0.mira037.stdout:        #
2016-04-01T00:08:39.947 INFO:tasks.workunit.client.0.mira037.stdout:        disk = c.unused_disks('sd.')[0]
2016-04-01T00:08:39.947 INFO:tasks.workunit.client.0.mira037.stdout:        c.sh("mpathconf --enable || true")
2016-04-01T00:08:39.947 INFO:tasks.workunit.client.0.mira037.stdout:        c.sh("multipath " + disk)
2016-04-01T00:08:39.948 INFO:tasks.workunit.client.0.mira037.stdout:        holders = os.listdir(
2016-04-01T00:08:39.948 INFO:tasks.workunit.client.0.mira037.stdout:            "/sys/block/" + os.path.basename(disk) + "/holders")
2016-04-01T00:08:39.948 INFO:tasks.workunit.client.0.mira037.stdout:        assert 1 == len(holders)
2016-04-01T00:08:39.948 INFO:tasks.workunit.client.0.mira037.stdout:        name = open("/sys/block/" + holders[0] + "/dm/name").read()
2016-04-01T00:08:39.948 INFO:tasks.workunit.client.0.mira037.stdout:        multipath = "/dev/mapper/" + name
2016-04-01T00:08:39.948 INFO:tasks.workunit.client.0.mira037.stdout:        #
2016-04-01T00:08:39.949 INFO:tasks.workunit.client.0.mira037.stdout:        # Prepare the multipath device
2016-04-01T00:08:39.949 INFO:tasks.workunit.client.0.mira037.stdout:        #
2016-04-01T00:08:39.949 INFO:tasks.workunit.client.0.mira037.stdout:        osd_uuid = str(uuid.uuid1())
2016-04-01T00:08:39.949 INFO:tasks.workunit.client.0.mira037.stdout:        c.sh("ceph-disk --verbose zap " + multipath)
2016-04-01T00:08:39.949 INFO:tasks.workunit.client.0.mira037.stdout:        c.sh("ceph-disk --verbose prepare --osd-uuid " + osd_uuid +
2016-04-01T00:08:39.949 INFO:tasks.workunit.client.0.mira037.stdout:>            " " + multipath)
2016-04-01T00:08:39.950 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.950 INFO:tasks.workunit.client.0.mira037.stdout:../../../workunit.client.0/ceph-disk/ceph-disk-test.py:613:
2016-04-01T00:08:39.950 INFO:tasks.workunit.client.0.mira037.stdout:_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
2016-04-01T00:08:39.950 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.950 INFO:tasks.workunit.client.0.mira037.stdout:command = 'ceph-disk --verbose prepare --osd-uuid 8bf997e4-f7d8-11e5-ac46-002590085a60 /dev/mapper/mpatha\n'
2016-04-01T00:08:39.950 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.950 INFO:tasks.workunit.client.0.mira037.stdout:    @staticmethod
2016-04-01T00:08:39.951 INFO:tasks.workunit.client.0.mira037.stdout:    def sh(command):
2016-04-01T00:08:39.951 INFO:tasks.workunit.client.0.mira037.stdout:        LOG.debug(":sh: " + command)
2016-04-01T00:08:39.951 INFO:tasks.workunit.client.0.mira037.stdout:        proc = subprocess.Popen(
2016-04-01T00:08:39.951 INFO:tasks.workunit.client.0.mira037.stdout:            args=command,
2016-04-01T00:08:39.951 INFO:tasks.workunit.client.0.mira037.stdout:            stdout=subprocess.PIPE,
2016-04-01T00:08:39.951 INFO:tasks.workunit.client.0.mira037.stdout:            stderr=subprocess.STDOUT,
2016-04-01T00:08:39.951 INFO:tasks.workunit.client.0.mira037.stdout:            shell=True,
2016-04-01T00:08:39.952 INFO:tasks.workunit.client.0.mira037.stdout:            bufsize=1)
2016-04-01T00:08:39.952 INFO:tasks.workunit.client.0.mira037.stdout:        lines = []
2016-04-01T00:08:39.952 INFO:tasks.workunit.client.0.mira037.stdout:        with proc.stdout:
2016-04-01T00:08:39.952 INFO:tasks.workunit.client.0.mira037.stdout:            for line in iter(proc.stdout.readline, b''):
2016-04-01T00:08:39.952 INFO:tasks.workunit.client.0.mira037.stdout:                line = line.decode('utf-8')
2016-04-01T00:08:39.952 INFO:tasks.workunit.client.0.mira037.stdout:                if 'dangerous and experimental' in line:
2016-04-01T00:08:39.953 INFO:tasks.workunit.client.0.mira037.stdout:                    LOG.debug('SKIP dangerous and experimental')
2016-04-01T00:08:39.953 INFO:tasks.workunit.client.0.mira037.stdout:                    continue
2016-04-01T00:08:39.953 INFO:tasks.workunit.client.0.mira037.stdout:                lines.append(line)
2016-04-01T00:08:39.953 INFO:tasks.workunit.client.0.mira037.stdout:                LOG.debug(line.strip().encode('ascii', 'ignore'))
2016-04-01T00:08:39.953 INFO:tasks.workunit.client.0.mira037.stdout:        if proc.wait() != 0:
2016-04-01T00:08:39.953 INFO:tasks.workunit.client.0.mira037.stdout:            raise subprocess.CalledProcessError(
2016-04-01T00:08:39.953 INFO:tasks.workunit.client.0.mira037.stdout:                returncode=proc.returncode,
2016-04-01T00:08:39.954 INFO:tasks.workunit.client.0.mira037.stdout:>               cmd=command
2016-04-01T00:08:39.954 INFO:tasks.workunit.client.0.mira037.stdout:            )
2016-04-01T00:08:39.954 INFO:tasks.workunit.client.0.mira037.stdout:E       CalledProcessError: Command 'ceph-disk --verbose prepare --osd-uuid 8bf997e4-f7d8-11e5-ac46-002590085a60 /dev/mapper/mpatha
2016-04-01T00:08:39.954 INFO:tasks.workunit.client.0.mira037.stdout:E       ' returned non-zero exit status 1
2016-04-01T00:08:39.954 INFO:tasks.workunit.client.0.mira037.stdout:
2016-04-01T00:08:39.954 INFO:tasks.workunit.client.0.mira037.stdout:../../../workunit.client.0/ceph-disk/ceph-disk-test.py:95: CalledProcessError
2016-04-01T00:08:39.954 INFO:tasks.workunit.client.0.mira037.stdout:==================== 2 failed, 19 passed in 2623.10 seconds ====================
2016-04-01T00:08:39.955 INFO:tasks.workunit.client.0.mira037.stderr:+ result=1
2016-04-01T00:08:39.955 INFO:tasks.workunit.client.0.mira037.stderr:++ id -u
2016-04-01T00:08:39.956 INFO:tasks.workunit.client.0.mira037.stderr:++ dirname /home/ubuntu/cephtest/workunit.client.0/ceph-disk/ceph-disk.sh
2016-04-01T00:08:39.956 INFO:tasks.workunit.client.0.mira037.stderr:+ sudo chown -R 1000 /home/ubuntu/cephtest/workunit.client.0/ceph-disk
2016-04-01T00:08:39.964 INFO:tasks.workunit.client.0.mira037.stderr:+ exit 1
2016-04-01T00:08:39.965 INFO:tasks.workunit:Stopping ['ceph-disk/ceph-disk.sh'] on client.0...
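Both failures surface through the test suite's sh() helper, which is visible in the traceback above. A condensed, runnable sketch of that pattern (the 'dangerous and experimental' filtering and logging are omitted here for brevity):

```python
import subprocess

def sh(command):
    """Run a shell command, capture stdout+stderr combined, raise on failure.

    Minimal sketch of the sh() helper from ceph-disk-test.py shown in the
    traceback above: any nonzero exit becomes the CalledProcessError seen
    in both failed tests.
    """
    proc = subprocess.Popen(
        args=command,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # merge stderr into stdout, as the test does
        shell=True)
    lines = []
    with proc.stdout:
        for line in iter(proc.stdout.readline, b''):
            lines.append(line.decode('utf-8'))
    if proc.wait() != 0:
        raise subprocess.CalledProcessError(
            returncode=proc.returncode, cmd=command)
    return "".join(lines)
```

Because stderr is folded into stdout, the mkfs.xfs "Input/output error" messages appear inline in the captured output rather than in the exception itself.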
Updated by Yuri Weinstein about 8 years ago
Looks like the same issues in
Run: http://pulpito.ceph.com/teuthology-2016-04-12_23:13:02-ceph-disk-jewel-distro-basic-mira/
Jobs: all 2
Updated by Yuri Weinstein about 8 years ago
- Priority changed from Normal to Urgent
Updated by Kefu Chai about 8 years ago
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:populate_data_path_device: Creating xfs fs on /dev/dm-1
2016-04-01T00:08:39.307 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/dm-1
2016-04-01T00:08:39.710 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:existing superblock read failed: Input/output error
2016-04-01T00:08:39.710 INFO:tasks.workunit.client.0.mira037.stderr:DEBUG:CephDisk:mkfs.xfs: pwrite64 failed: Input/output error
could be disk error?
But I logged into mira037, and dmesg does not show anything interesting.
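For double-checking a node like this, kernel-level I/O errors usually leave traces in dmesg. A small hypothetical helper (not part of the test suite; the error patterns are common kernel block-layer messages, not an exhaustive list):

```python
import re
import subprocess

# Patterns that commonly appear in kernel logs for failing block devices.
IO_ERROR_RE = re.compile(
    r'(I/O error|Buffer I/O error|critical medium error|blk_update_request)',
    re.IGNORECASE)

def find_io_errors(dmesg_text):
    """Return the lines of a dmesg dump that look like disk I/O errors."""
    return [line for line in dmesg_text.splitlines()
            if IO_ERROR_RE.search(line)]

def dmesg_io_errors():
    """Run dmesg on the node and filter it (requires Linux + permissions)."""
    out = subprocess.check_output(['dmesg']).decode('utf-8', 'replace')
    return find_io_errors(out)
```

An empty result from dmesg_io_errors(), as observed here, points away from a failing physical disk and toward the multipath/partition-table handling.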
Updated by Yuri Weinstein almost 8 years ago
- Status changed from New to 12
I reproduced this job manually; full logs are in yuriw@teuthology:~/logs/140720
No errors were found on the disk.
The job failed on the last command, rm:
2016-04-20 08:45:57,715.715 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-20 08:45:57,741.741 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command: Running command: /usr/sbin/partprobe /dev/mapper/mpathf
2016-04-20 08:45:58,352.352 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
2016-04-20 08:45:58,380.380 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/dm-1
2016-04-20 08:45:59,013.013 INFO:teuthology.orchestra.run.mira041:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0 /home/ubuntu/cephtest/clone'
Re-running the command on the node:
[yuriw@mira041 ~]$ rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/workunit.client.0 /home/ubuntu/cephtest/clone
rm: cannot remove ‘/home/ubuntu/cephtest/workunits.list.client.0’: Permission denied
rm: cannot remove ‘/home/ubuntu/cephtest/workunit.client.0’: Permission denied
rm: cannot remove ‘/home/ubuntu/cephtest/clone’: Permission denied
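The "Permission denied" from rm is about the parent directory, not the files themselves: unlinking requires write+execute permission on the containing directory, which is consistent with the workunit tree having been chowned under sudo in the teardown (the first log shows "sudo chown -R 1000 ..."). A hypothetical helper to sanity-check this before blaming rm:

```python
import os

def removable_by_current_user(path):
    """Heuristic: can the current user unlink `path`?

    Unlink permission lives on the *parent* directory, so a root-owned
    entry inside a root-owned directory yields 'Permission denied' from
    rm even when the entry itself is readable.
    """
    parent = os.path.dirname(os.path.abspath(path)) or '.'
    return os.access(parent, os.W_OK | os.X_OK)
```

Running this as the ubuntu/yuriw user against /home/ubuntu/cephtest entries would presumably return False here.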
Also observed:
966880-2016-04-20 08:22:50,210.210 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
967042-2016-04-20 08:22:50,238.238 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command: Running command: /usr/sbin/partprobe /dev/sdd
967184-2016-04-20 08:22:50,281.281 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:update_partition: partprobe /dev/sdd failed : Error: Error informing the kernel about modifications to partition /dev/sdd2 -- Device or resource busy. This means Linux won't know about any changes you made to /dev/sdd2 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
967576:2016-04-20 08:22:50,282.282 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:Error: Failed to add partition 2 (Device or resource busy)
967722-2016-04-20 08:22:50,282.282 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:(ignored, waiting 60s)
967832-2016-04-20 08:23:50,343.343 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
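The "(ignored, waiting 60s)" line shows ceph-disk tolerating a partprobe failure while udev still holds the device, then retrying after udevadm settle. A minimal sketch of that retry pattern (illustrative simplification, not the exact ceph_disk.main.update_partition code; the runner parameter exists only to make it testable):

```python
import subprocess
import time

def update_partition(dev, runner=subprocess.check_call, tries=2, delay=60):
    """Ask the kernel to re-read a partition table, retrying on failure.

    Mirrors the log above: partprobe can fail with 'Device or resource
    busy' while udev still has the device open; the failure is ignored
    and the probe retried after waiting.
    """
    for attempt in range(tries):
        # Wait for pending udev events before touching the device.
        runner(['udevadm', 'settle', '--timeout=600'])
        try:
            runner(['partprobe', dev])
            return
        except subprocess.CalledProcessError:
            if attempt == tries - 1:
                raise
            time.sleep(delay)  # the "(ignored, waiting 60s)" step
```

The hazard visible in this ticket is that after the retry budget is spent, the kernel may still be using the old partition table when mkfs runs against the mapped device.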
2016-04-20 08:45:49,659.659 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk::sh: ceph-disk list --format json sda sdb sdc sdd sde sdf sdg sdh
2016-04-20 08:45:51,403.403 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:[{"path": "/dev/sda", "partitions": [{"dmcrypt": {}, "uuid": null, "mount": "/", "ptype": "0x83", "is_partition": true, "fs_type": "ext4", "path": "/dev/sda1", "type": "other"}]}, {"path": "/dev/sdb", "partitions": [{"magic": "ceph osd volume v026", "dmcrypt": {}, "uuid": "cc4cad6b-9db6-4d03-b7e4-c8686bc50d29", "mount": null, "ptype": "4fbd7e29-9d25-41b8-afd0-062c0ceff05d", "is_partition": true, "cluster": null, "state": "prepared", "fs_type": "xfs", "ceph_fsid": "192689f2-65d0-40ec-b8f9-6174a8af86b8", "path": "/dev/sdb1", "type": "data", "whoami": "0", "journal_dev": "/dev/sdb2", "journal_uuid": "6d4c236d-983a-4d57-9e88-dc49364b149f"}, {"dmcrypt": {}, "uuid": "6d4c236d-983a-4d57-9e88-dc49364b149f", "ptype": "45b0969e-9b03-4f30-b4c6-b4b80ceff106", "is_partition": true, "journal_for": "/dev/sdb1", "path": "/dev/sdb2", "type": "journal"}]}, {"path": "/dev/sdc", "partitions": [{"dmcrypt": {}, "uuid": "d17080ea-eaf1-40c4-a18e-1b40adf89284", "ptype": "45b0969e-9b03-4f30-b4c6-b4b80ceff106", "is_partition": true, "path": "/dev/sdc1", "type": "journal"}]}, {"path": "/dev/sdd", "type": "other", "dmcrypt": {}, "ptype": "unknown", "is_partition": false}, {"path": "/dev/sde", "partitions": [{"dmcrypt": {}, "uuid": "9d9dcf14-e8ed-11e5-b48c-00259009e2e8", "mount": null, "ptype": "4fbd7e29-9d25-41b8-afd0-062c0ceff05d", "is_partition": true, "state": "unprepared", "fs_type": "xfs", "path": "/dev/sde1", "type": "data"}, {"dmcrypt": {}, "uuid": "f0fdab77-902f-491b-ace3-c686dc411a0a", "ptype": "45b0969e-9b03-4f30-b4c6-b4b80ceff106", "is_partition": true, "path": "/dev/sde2", "type": "journal"}]}, {"path": "/dev/sdf", "type": "other", "dmcrypt": {}, "ptype": "unknown", "is_partition": false}, {"path": "/dev/sdg", "type": "other", "dmcrypt": {}, "ptype": "unknown", "is_partition": false}, {"dmcrypt": {}, "ptype": "unknown", "is_partition": false, "fs_type": "xfs", "path": "/dev/sdh", "type": "other"}]
2016-04-20 08:45:51,423.423 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk::sh: mpathconf --enable || true
2016-04-20 08:45:51,570.570 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk::sh: multipath /dev/sdd
2016-04-20 08:45:51,862.862 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:create: mpathf (2001b4d2011e25000) undef Hitachi ,HUA722010CLA330
2016-04-20 08:45:51,862.862 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:size=932G features='0' hwhandler='0' wp=undef
2016-04-20 08:45:51,862.862 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:`-+- policy='service-time 0' prio=1 status=undef
2016-04-20 08:45:51,863.863 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:`- 6:0:0:3 sdd 8:48 undef ready running
2016-04-20 08:45:51,864.864 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk::sh: ceph-disk --verbose zap /dev/mapper/mpathf
2016-04-20 08:45:51,864.864 INFO:tasks.workunit.client.0.mira041.stderr:
2016-04-20 08:45:52,108.108 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:get_dm_uuid: get_dm_uuid /dev/dm-0 uuid path is /sys/dev/block/253:0/dm/uuid
2016-04-20 08:45:52,108.108 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:get_dm_uuid: get_dm_uuid /dev/dm-0 uuid is mpath-2001b4d2011e25000
2016-04-20 08:45:52,109.109 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:
2016-04-20 08:45:52,110.110 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:get_dm_uuid: get_dm_uuid /dev/dm-0 uuid path is /sys/dev/block/253:0/dm/uuid
2016-04-20 08:45:52,111.111 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:get_dm_uuid: get_dm_uuid /dev/dm-0 uuid is mpath-2001b4d2011e25000
2016-04-20 08:45:52,111.111 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:
2016-04-20 08:45:52,111.111 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:zap: Zapping partition table on /dev/dm-0
2016-04-20 08:45:52,112.112 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/sbin/sgdisk --zap-all -- /dev/dm-0
2016-04-20 08:45:52,122.122 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:Caution: invalid backup GPT header, but valid main header; regenerating
2016-04-20 08:45:52,123.123 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:backup header from main header.
2016-04-20 08:45:52,123.123 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:
2016-04-20 08:45:53,159.159 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:****************************************************************************
2016-04-20 08:45:53,159.159 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
2016-04-20 08:45:53,159.159 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:verification and recovery are STRONGLY recommended.
2016-04-20 08:45:53,159.159 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:****************************************************************************
2016-04-20 08:45:53,160.160 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:Warning: The kernel is still using the old partition table.
2016-04-20 08:45:53,160.160 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:The new table will be used at the next reboot.
2016-04-20 08:45:53,160.160 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:GPT data structures destroyed! You may now partition the disk using fdisk or
And the same error Kefu noticed:
2016-04-20 08:45:58,379.379 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:populate_data_path_device: Creating xfs fs on /dev/dm-1
2016-04-20 08:45:58,380.380 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/dm-1
2016-04-20 08:45:58,686.686 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:existing superblock read failed: Input/output error
2016-04-20 08:45:58,686.686 INFO:tasks.workunit.client.0.mira041.stderr:DEBUG:CephDisk:mkfs.xfs: pwrite64 failed: Input/output error
Notes:
[yuriw@mira041 ~]$ sudo ceph-disk list
/dev/dm-0 :
 /dev/dm-1 other
 /dev/dm-2 other
/dev/dm-1 other
/dev/dm-2 other
/dev/sda :
 /dev/sda1 other, ext4, mounted on /
/dev/sdb :
 /dev/sdb2 ceph journal, for /dev/sdb1
 /dev/sdb1 ceph data, prepared, unknown cluster 192689f2-65d0-40ec-b8f9-6174a8af86b8, osd.0, journal /dev/sdb2
/dev/sdc :
 /dev/sdc1 ceph journal
/dev/sdd other, unknown
/dev/sde :
 /dev/sde2 ceph journal
 /dev/sde1 ceph data, unprepared
/dev/sdf other, unknown
/dev/sdg other, unknown
/dev/sdh other, xfs
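The test's unused_disks() selection works from `ceph-disk list --format json` reports like the dump above. A condensed, illustrative sketch of that kind of filter (the function body is an assumption about the selection criteria, not the exact ceph-disk-test.py code):

```python
import json
import re

def unused_disks(report_json, pattern='sd.'):
    """Pick whole disks with no partitions and an unknown ptype from a
    `ceph-disk list --format json` report (illustrative filter only)."""
    devices = json.loads(report_json)
    unused = []
    for dev in devices:
        if not re.match('/dev/' + pattern + '$', dev.get('path', '')):
            continue            # not a plain sdX whole-disk entry
        if dev.get('partitions'):
            continue            # already partitioned, so in use
        if dev.get('ptype') == 'unknown' and dev.get('type') == 'other':
            unused.append(dev['path'])
    return unused
```

Against the dump above, a filter like this would offer sdd, sdf, sdg and sdh as candidates; sdd is the one the multipath test then feeds to `multipath`.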
Updated by Yuri Weinstein almost 8 years ago
Updated by Yuri Weinstein over 7 years ago
Updated by Kefu Chai over 7 years ago
- Assignee set to Loïc Dachary
Loïc, could you take a look at this issue?
Updated by Yuri Weinstein over 7 years ago
I see this for the jewel 10.2.3 release; not sure whether it has to be fixed for it.
http://qa-proxy.ceph.com/teuthology/teuthology-2016-08-28_03:10:02-ceph-disk-jewel-distro-basic-mira/388624/teuthology.log
Updated by Loïc Dachary over 7 years ago
The ceph-disk suite must be run on VPS.
Updated by Loïc Dachary over 7 years ago
- Is duplicate of Bug #17114: tests: ceph-disk-test.py requires vps added