Bug #47742 (closed)

cephadm/test_dashboard_e2e.sh: OSDs are not created

Added by Kiefer Chang over 3 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Normal
Category: Component - Orchestrator
Target version: -
% Done: 0%
Regression: Yes
Severity: 3 - minor

Description

The workunit fails while creating OSDs.

/a/teuthology-2020-10-01_07:01:02-rados-master-distro-basic-smithi/5485887
/a/teuthology-2020-10-01_07:01:02-rados-master-distro-basic-smithi/5486069
/a/yuriw-2020-10-01_20:53:30-rados-wip-yuri-testing-2020-10-01-1128-distro-basic-smithi/5487068

2020-10-01T09:22:11.416 INFO:tasks.workunit.client.0.smithi192.stdout:  Running:  orchestrator/04-osds.e2e-spec.ts                                                (1 of 1)
2020-10-01T09:22:11.416 INFO:tasks.workunit.client.0.smithi192.stderr:tput: No value for $TERM and no -T specified
2020-10-01T09:22:13.096 INFO:tasks.workunit.client.0.smithi192.stderr:Couldn't determine Mocha version
2020-10-01T09:22:13.102 INFO:tasks.workunit.client.0.smithi192.stdout:
2020-10-01T09:22:13.103 INFO:tasks.workunit.client.0.smithi192.stdout:
2020-10-01T09:22:13.109 INFO:tasks.workunit.client.0.smithi192.stdout:  OSDs page
2020-10-01T09:22:13.110 INFO:tasks.workunit.client.0.smithi192.stdout:    when Orchestrator is available
2020-10-01T09:27:16.111 INFO:tasks.workunit.client.0.smithi192.stdout:      1) should create and delete OSDs
2020-10-01T09:27:16.425 INFO:tasks.workunit.client.0.smithi192.stdout:
2020-10-01T09:27:16.426 INFO:tasks.workunit.client.0.smithi192.stdout:
2020-10-01T09:27:16.426 INFO:tasks.workunit.client.0.smithi192.stdout:  0 passing (5m)
2020-10-01T09:27:16.426 INFO:tasks.workunit.client.0.smithi192.stdout:  1 failing
2020-10-01T09:27:16.427 INFO:tasks.workunit.client.0.smithi192.stdout:
2020-10-01T09:27:16.428 INFO:tasks.workunit.client.0.smithi192.stdout:  1) OSDs page
2020-10-01T09:27:16.428 INFO:tasks.workunit.client.0.smithi192.stdout:       when Orchestrator is available
2020-10-01T09:27:16.429 INFO:tasks.workunit.client.0.smithi192.stdout:         should create and delete OSDs:
2020-10-01T09:27:16.429 INFO:tasks.workunit.client.0.smithi192.stdout:
2020-10-01T09:27:16.429 INFO:tasks.workunit.client.0.smithi192.stdout:      Timed out retrying
2020-10-01T09:27:16.429 INFO:tasks.workunit.client.0.smithi192.stdout:      + expected - actual
2020-10-01T09:27:16.429 INFO:tasks.workunit.client.0.smithi192.stdout:
2020-10-01T09:27:16.429 INFO:tasks.workunit.client.0.smithi192.stdout:      -3
2020-10-01T09:27:16.429 INFO:tasks.workunit.client.0.smithi192.stdout:      +6

Per the diff legend above ("+ expected - actual"), the test expected 6 OSDs but only 3 were present, i.e. the new OSDs were never created. The mgr log shows why: ceph-volume can't create the LVs:

2020-10-01T21:27:43.793+0000 7f3b1fd58700  0 [cephadm DEBUG root] code: 1
2020-10-01T21:27:43.793+0000 7f3b1fd58700  0 [cephadm DEBUG root] err: /bin/podman:stderr --> passed data devices: 3 physical, 0 LVM
/bin/podman:stderr --> relative data size: 1.0
/bin/podman:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 08b0c12b-aa34-4fac-8034-d9c04534176b
/bin/podman:stderr Running command: /usr/sbin/vgcreate --force --yes ceph-250b5e18-244f-4317-8fe3-17d4c119f9bd /dev/sdb
/bin/podman:stderr  stdout: Physical volume "/dev/sdb" successfully created.
/bin/podman:stderr  stdout: Volume group "ceph-250b5e18-244f-4317-8fe3-17d4c119f9bd" successfully created
/bin/podman:stderr Running command: /usr/sbin/lvcreate --yes -l 3839 -n osd-block-08b0c12b-aa34-4fac-8034-d9c04534176b ceph-250b5e18-244f-4317-8fe3-17d4c119f9bd
/bin/podman:stderr  stderr: Volume group "ceph-250b5e18-244f-4317-8fe3-17d4c119f9bd" has insufficient free space (3838 extents): 3839 required.
/bin/podman:stderr --> Was unable to complete a new OSD, will rollback changes
/bin/podman:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.3 --yes-i-really-mean-it
/bin/podman:stderr  stderr: purged osd.3
/bin/podman:stderr -->  RuntimeError: command returned non-zero exit status: 5
Traceback (most recent call last):
  File "<stdin>", line 6042, in <module>
  File "<stdin>", line 1277, in _infer_fsid
  File "<stdin>", line 1360, in _infer_image
  File "<stdin>", line 3589, in command_ceph_volume
  File "<stdin>", line 1039, in call_throws
RuntimeError: Failed command: /bin/podman run --rm --ipc=host --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk -e CONTAINER_IMAGE=quay.ceph.io/ceph-ci/ceph:8381e99a1355712a5b71277f250ab59b0a6518bd -e NODE_NAME=smithi058 -e CEPH_VOLUME_OSDSPEC_AFFINITY=dashboard-admin-1601587659914 -v /var/log/ceph/b3b6f074-042b-11eb-a2ad-001a4aab830c:/var/log/ceph:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /tmp/ceph-tmp6j0dem53:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmpus5oo_jd:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.ceph.io/ceph-ci/ceph:8381e99a1355712a5b71277f250ab59b0a6518bd lvm batch --no-auto /dev/sdb /dev/sdc /dev/sdd --yes --no-systemd
2020-10-01T21:27:43.793+0000 7f3b1fd58700  0 [cephadm ERROR cephadm.utils] executing create_from_spec_one(([('smithi103', <ceph.deployment.drive_selection.selector.DriveSelection object at 0x7f3b38701c18>), ('smithi058', <ceph.deployment.drive_selection.selector.DriveSelection object at 0x7f3b23e634a8>)],)) failed.
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/utils.py", line 62, in do_work
    return f(*arg)
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 43, in create_from_spec_one
    host, cmd, replace_osd_ids=osd_id_claims.get(host, []), env_vars=env_vars
  File "/usr/share/ceph/mgr/cephadm/services/osd.py", line 63, in create_single_host
    code, '\n'.join(err)))
RuntimeError: cephadm exited with an error code: 1, stderr:/bin/podman:stderr --> passed data devices: 3 physical, 0 LVM
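
The key line is lvcreate requesting 3839 extents from a VG that reports only 3838 free. Below is a minimal Python sketch of that off-by-one (the function names, device size, and extent math here are illustrative assumptions, not ceph-volume's actual code): sizing the LV from the raw device capacity ignores the space vgcreate reserves for LVM metadata, while sizing it from the VG's reported free extent count can never over-request.

    import math

    MiB = 1024 * 1024
    EXTENT_SIZE = 4 * MiB  # LVM's default physical extent size

    def extents_from_device(device_bytes, relative_size=1.0):
        # Naive sizing (roughly the failing behavior): derive the extent
        # count from the raw device size. A device with 3839 extents'
        # worth of raw capacity yields 3839, but after vgcreate reserves
        # space for LVM metadata the VG only exposes 3838 free extents,
        # so "lvcreate -l 3839" fails with "insufficient free space".
        return int(device_bytes * relative_size // EXTENT_SIZE)

    def extents_from_vg(vg_free_extents, relative_size=1.0):
        # Safer sizing: query the VG for its free extent count (e.g.
        # "vgs -o vg_free_count") and take a share of that, rounding
        # down so the request cannot exceed what is actually free.
        return math.floor(vg_free_extents * relative_size)

    # Numbers from the log above; the device size is a hypothetical
    # value consistent with 3839 extents of raw capacity.
    device_bytes = 3839 * EXTENT_SIZE + 3 * MiB
    print(extents_from_device(device_bytes))  # 3839 -> lvcreate fails
    print(extents_from_vg(3838))              # 3838 -> fits

The related ceph-volume bug below (#47758, "fail to create OSDs because the requested extent is too large") tracks exactly this over-request.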

Related issues: 1 (0 open, 1 closed)

Related to ceph-volume - Bug #47758: fail to create OSDs because the requested extent is too large (Resolved, Jan Fajerski)

Actions #1

Updated by Kiefer Chang over 3 years ago

  • Related to Bug #47758: fail to create OSDs because the requested extent is too large added
Actions #2

Updated by Kiefer Chang over 3 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Kiefer Chang
  • Pull request ID set to 37575
Actions #3

Updated by Neha Ojha over 3 years ago

/a/teuthology-2020-10-07_07:01:02-rados-master-distro-basic-smithi/5504097

Actions #4

Updated by Kefu Chai over 3 years ago

/a/kchai-2020-10-10_09:47:31-rados-wip-kefu-testing-2020-10-09-1210-distro-basic-smithi/551288

Actions #5

Updated by Kiefer Chang over 3 years ago

  • Status changed from Fix Under Review to Resolved

This error has not been seen for several days since https://github.com/ceph/ceph/pull/37575 was merged.

Actions #6

Updated by Ernesto Puerta about 3 years ago

  • Project changed from mgr to Dashboard
  • Category changed from 185 to Component - Orchestrator