Bug #38221
LC_NUMERIC can make lvm.py fail
Description
(I initially filed this in the wrong place: https://github.com/ceph/ceph-ansible/issues/3579)
Bug Report
What happened:
As a European, I am used to writing e.g. "two and a half" as 2,5 (not 2.5). As such I tend to set `LC_NUMERIC=en_DK.UTF-8` on my Linux workstation[1].
When I ssh to a RHEL7 box, these variables are then passed along as per the default `AcceptEnv LANG LC_CTYPE LC_NUMERIC`… etc. in `/etc/ssh/sshd_config` on RHEL7 (this also affects Fedora 28).
So with this setting, the OSD nodes inherit the variable via their default sshd_config and, as expected, `vgs` output uses the comma as decimal separator:
[root@odroid-hc2-00 ~]# env|grep LC
LC_MEASUREMENT=en_DK.UTF-8
LC_MONETARY=de_DE.UTF-8
LC_COLLATE=en_US.UTF-8
LC_NUMERIC=en_DK.UTF-8
LC_TIME=de_DE.UTF-8
[root@odroid-hc2-00 ~]# vgs --noheadings --readonly --units=g --separator=";"
  ceph-64ff50de-28b8-4bb6-bf76-601df9dc1e8e;1;1;0;wz--n-;931,48g;0g
[root@odroid-hc2-00 ~]# unset LC_NUMERIC
[root@odroid-hc2-00 ~]# vgs --noheadings --readonly --units=g --separator=";"
  ceph-64ff50de-28b8-4bb6-bf76-601df9dc1e8e;1;1;0;wz--n-;931.48g;0g
It seems that `/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py` on the OSDs chokes on the comma, expecting a dot. (My OSDs are Fedora 28 ARM based; do tell if you need me to reproduce with RHEL7 x86_64 based OSDs.)
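The failing conversion can be sketched outside ceph-volume. This is a minimal illustration, not the actual `ceph_volume.util.str_to_int` source: Python's `float()` only ever accepts the C-locale decimal point, so a comma-formatted size string from `vgs` cannot be parsed no matter what locale the Python process inherits.

```python
# Minimal sketch of the conversion that fails in the traceback below.
# NOTE: illustrative only; not the actual ceph_volume.util.str_to_int code.
def str_to_int(string):
    """Parse a size string such as '1862.98' and round it down to an int."""
    try:
        # float() is locale-independent: it only accepts '.' as the
        # decimal separator, regardless of LC_NUMERIC.
        return int(float(string))
    except (TypeError, ValueError):
        raise RuntimeError("Unable to convert to integer: '%s'" % string)

print(str_to_int("1862.98"))   # dot-formatted vgs output parses fine -> 1862
try:
    str_to_int("1862,98")      # comma-formatted output raises, as in the log
except RuntimeError as e:
    print(e)                   # Unable to convert to integer: '1862,98'
```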
If I unset LC_NUMERIC on the host running ceph-ansible, `ansible-playbook site.yml` (a copy of site.yml.sample) works just fine.
If `LC_NUMERIC=en_DK.UTF-8` is set, then I get:
2019-02-06 15:49:45,682 p=27999 u=ansible | TASK [ceph-osd : use ceph-volume lvm batch to create bluestore osds] ************************************************** 2019-02-06 15:49:45,683 p=27999 u=ansible | Wednesday 06 February 2019 15:49:45 +0100 (0:00:00.508) 0:06:06.035 **** 2019-02-06 15:49:49,216 p=27999 u=ansible | fatal: [odroid-hc2-03]: FAILED! => {"changed": true, "cmd": ["ceph-volume", "--cluster", "ceph", "lvm", "batch", "--bluestore", "--yes", "--dmcrypt", "/dev/sda"], "delta": "0:00:02.467296", "end": "2019-02-06 15:49:49.161515", "msg": "non-zero return code", "rc": 1, "start": "2019-02-06 15:49:46.694219", "stderr": "Traceback (most recent call last):\n File \"/usr/sbin/ceph-volume\", line 11, in <module>\n load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()\n File \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 38, in __init__\n self.main(self.argv)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 59, in newfunc\n return f(*a, **kw)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 148, in main\n terminal.dispatch(self.mapper, subcommand_args)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 182, in dispatch\n instance.main()\n File \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py\", line 40, in main\n terminal.dispatch(self.mapper, self.argv)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 182, in dispatch\n instance.main()\n File \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 16, in is_root\n return func(*a, **kw)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py\", line 284, in main\n self.execute(args)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py\", line 175, in execute\n strategy.execute()\n File \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py\", line 126, in execute\n lvs = 
lvm.create_lvs(create['vg'], parts=create['parts'], name_prefix='osd-data')\n File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 643, in create_lvs\n sizing = volume_group.sizing(parts=parts, size=size)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 1067, in sizing\n size = int(self.free / parts)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 1008, in free\n return self._parse_size(self.vg_free)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 994, in _parse_size\n return util.str_to_int(integer)\n File \"/usr/lib/python2.7/site-packages/ceph_volume/util/__init__.py\", line 40, in str_to_int\n raise RuntimeError(error_msg)\nRuntimeError: Unable to convert to integer: '1862,98'", "stderr_lines": ["Traceback (most recent call last):", " File \"/usr/sbin/ceph-volume\", line 11, in <module>", " load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()", " File \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 38, in __init__", " self.main(self.argv)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 59, in newfunc", " return f(*a, **kw)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/main.py\", line 148, in main", " terminal.dispatch(self.mapper, subcommand_args)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 182, in dispatch", " instance.main()", " File \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/main.py\", line 40, in main", " terminal.dispatch(self.mapper, self.argv)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/terminal.py\", line 182, in dispatch", " instance.main()", " File \"/usr/lib/python2.7/site-packages/ceph_volume/decorators.py\", line 16, in is_root", " return func(*a, **kw)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py\", line 284, in main", " self.execute(args)", " File 
\"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/batch.py\", line 175, in execute", " strategy.execute()", " File \"/usr/lib/python2.7/site-packages/ceph_volume/devices/lvm/strategies/bluestore.py\", line 126, in execute", " lvs = lvm.create_lvs(create['vg'], parts=create['parts'], name_prefix='osd-data')", " File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 643, in create_lvs", " sizing = volume_group.sizing(parts=parts, size=size)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 1067, in sizing", " size = int(self.free / parts)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 1008, in free", " return self._parse_size(self.vg_free)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/api/lvm.py\", line 994, in _parse_size", " return util.str_to_int(integer)", " File \"/usr/lib/python2.7/site-packages/ceph_volume/util/__init__.py\", line 40, in str_to_int", " raise RuntimeError(error_msg)", "RuntimeError: Unable to convert to integer: '1862,98'"], "stdout": "Running command: vgcreate --force --yes ceph-8e748fdb-e322-4be0-b3c1-6bc0f1120351 /dev/sda\n stdout: Physical volume \"/dev/sda\" successfully created.\n stdout: Volume group \"ceph-8e748fdb-e322-4be0-b3c1-6bc0f1120351\" successfully created", "stdout_lines": ["Running command: vgcreate --force --yes ceph-8e748fdb-e322-4be0-b3c1-6bc0f1120351 /dev/sda", " stdout: Physical volume \"/dev/sda\" successfully created.", " stdout: Volume group \"ceph-8e748fdb-e322-4be0-b3c1-6bc0f1120351\" successfully created"]}
When this happens, the OSD node is left with a VG defined but no LV in that VG after the playbook errors out. I'll attach a full log of the ansible-playbook run.
What you expected to happen:
If ceph-ansible has requirements for LANG and/or LC_NUMERIC, it should enforce their values.
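One way such an enforcement could look (a hypothetical sketch, not what ceph-ansible or ceph-volume actually ships) is to force a C locale in the environment of every LVM subprocess, so numeric output always uses a dot regardless of what sshd let through:

```python
import os
import subprocess

def run_with_c_locale(args):
    """Run a command with LC_ALL=C so numeric output uses '.' decimals.

    Hypothetical helper for illustration; not the shipped implementation.
    """
    env = dict(os.environ)
    env.pop("LC_NUMERIC", None)  # drop anything inherited via sshd AcceptEnv
    env["LC_ALL"] = "C"          # LC_ALL overrides all other LC_* settings
    return subprocess.check_output(args, env=env)

# e.g. run_with_c_locale(["vgs", "--noheadings", "--units=g", "--separator=;"])
```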
How to reproduce it (minimal and precise):
`LC_NUMERIC=en_DK.UTF-8 ansible-playbook site.yml` with the following configuration:
[ansible@ceph-ansible ceph-ansible]$ grep "^[^#;]" group_vars/osds.yml
---
dummy:
copy_admin_key: true
devices:
  - /dev/sda
dmcrypt: True
osd_scenario: lvm
[ansible@ceph-ansible ceph-ansible]$ grep "^[^#;]" group_vars/all.yml
---
dummy:
fetch_directory: ~/ceph-ansible-keys
cluster: ceph
configure_firewall: False
ntp_service_enabled: true
ntp_daemon_type: chronyd
ceph_origin: distro
ceph_repository: rhcs
ceph_rhcs_version: 3
ceph_repository_type: cdn
fsid: "{{ cluster_uuid.stdout }}"
generate_fsid: True
rbd_cache_writethrough_until_flush: "false"
rbd_client_directories: false # as per CEPH125-RHCS3.0-en-1-20180517 pages 45 and 60
monitor_interface: eth0
journal_size: 5120 # OSD journal size in MB
public_network: 192.168.50.0/24 # HouseNet
cluster_network: "{{ public_network | regex_replace(' ', '') }}"
ceph_conf_overrides:
  global:
    mon_allow_pool_delete: true
  client:
    rbd_default_features: 1
ceph_docker_image: "rhceph-3-rhel7"
ceph_docker_image_tag: "latest"
ceph_docker_registry: "registry.access.redhat.com/rhceph/"
[ansible@ceph-ansible ceph-ansible]$ cat /etc/ansible/hosts
[ceph-arm-nodes]
odroid-hc2-[00:04]
[ceph-x86-nodes]
ceph-ansible
[ceph-housenet:children]
ceph-arm-nodes
ceph-x86-nodes
[ceph-housenet:vars]
ansible_user=ansible
[mons]
odroid-hc2-[00:02]
[mgrs]
odroid-hc2-[00:02]
[osds]
odroid-hc2-[00:04]
[clients]
odroid-hc2-00
Environment:
- OS (e.g. from /etc/os-release): Red Hat Enterprise Linux Server release 7.6 (Maipo)
- Kernel (e.g. `uname -a`): Linux ceph-ansible.internal.pcfe.net 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
- Docker version if applicable (e.g. `docker version`):
- Ansible version (e.g. `ansible-playbook --version`): ansible-playbook 2.6.12
- ceph-ansible version (e.g. `git head or tag or stable branch`): ceph-ansible-3.2.4-1.el7cp.noarch
- Ceph version (e.g. `ceph -v`): ceph version 12.2.10 (177915764b752804194937482a39e95e0ca3de94) luminous (stable)
Note: everything above except `ceph -v` was collected on the RHEL7 x86_64 machine running ceph-ansible; `ceph -v` was collected on an affected OSD running Fedora 28 ARM.
[1] JFYI, the complete list of language-related variables I normally set is:
LANG=en_US.UTF-8
GDM_LANG=en_GB.UTF-8
LANGUAGE=en_GB:en_US
LC_MONETARY=de_DE.UTF-8
LC_NUMERIC=en_DK.UTF-8
LC_COLLATE=en_US.UTF-8
LC_MEASUREMENT=en_DK.UTF-8
LC_TIME=de_DE.UTF-8
History
#1 Updated by Patrick Ernzer over 4 years ago
- File logfile.txt added
Do tell if you need more logs; this is still a test cluster, so I can wipe it easily if more logs are needed. But hopefully the reproducer is sufficient.
#2 Updated by Alfredo Deza over 3 years ago
- Status changed from New to Can't reproduce
This doesn't seem to be an issue with 12.2.12 (ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)); I tried replicating the problem and could not trigger it at all:
[root@node3 vagrant]# LC_ALL=C ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/sdo /dev/sdn

Total OSDs: 4

  Type            Path                      LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdo                  5.37 GB         50%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdo                  5.37 GB         50%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdn                  5.37 GB         50%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdn                  5.37 GB         50%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) yes
Running command: vgcreate --force --yes ceph-308e992d-b0ce-4903-8170-e8db2ba92a53 /dev/sdo
 stdout: Physical volume "/dev/sdo" successfully created.
 stdout: Volume group "ceph-308e992d-b0ce-4903-8170-e8db2ba92a53" successfully created
Running command: vgcreate --force --yes ceph-91f71136-7497-4576-9d3d-8475f10af3f0 /dev/sdn
 stdout: Physical volume "/dev/sdn" successfully created.
 stdout: Volume group "ceph-91f71136-7497-4576-9d3d-8475f10af3f0" successfully created
Running command: lvcreate --yes -l 1374 -n osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0 ceph-91f71136-7497-4576-9d3d-8475f10af3f0
 stdout: Logical volume "osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0" created.
Running command: lvcreate --yes -l 1374 -n osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a ceph-91f71136-7497-4576-9d3d-8475f10af3f0
 stdout: Logical volume "osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a" created.
Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new cbf0a60a-7d34-47c9-8cb2-d86ca0bde6cf Running command: /bin/ceph-authtool --gen-print-key Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-5 Running command: restorecon /var/lib/ceph/osd/ceph-5 Running command: chown -h ceph:ceph /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0 Running command: chown -R ceph:ceph /dev/dm-1 Running command: ln -s /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0 /var/lib/ceph/osd/ceph-5/block Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-5/activate.monmap stderr: got monmap epoch 1 Running command: ceph-authtool /var/lib/ceph/osd/ceph-5/keyring --create-keyring --name osd.5 --add-key AQCnXJZdPKn1IxAAGzNnydiLwxYLOt/Jo+qNkQ== stdout: creating /var/lib/ceph/osd/ceph-5/keyring added entity osd.5 auth auth(auid = 18446744073709551615 key=AQCnXJZdPKn1IxAAGzNnydiLwxYLOt/Jo+qNkQ== with 0 caps) Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/keyring Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-5/ Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 5 --monmap /var/lib/ceph/osd/ceph-5/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-5/ --osd-uuid cbf0a60a-7d34-47c9-8cb2-d86ca0bde6cf --setuser ceph --setgroup ceph --> ceph-volume lvm prepare successful for: ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0 Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0 --path 
/var/lib/ceph/osd/ceph-5 Running command: ln -snf /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0 /var/lib/ceph/osd/ceph-5/block Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-5/block Running command: chown -R ceph:ceph /dev/dm-1 Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-5 Running command: systemctl enable ceph-volume@lvm-5-cbf0a60a-7d34-47c9-8cb2-d86ca0bde6cf stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-5-cbf0a60a-7d34-47c9-8cb2-d86ca0bde6cf.service to /usr/lib/systemd/system/ceph-volume@.service. Running command: systemctl enable --runtime ceph-osd@5 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@5.service to /usr/lib/systemd/system/ceph-osd@.service. Running command: systemctl start ceph-osd@5 --> ceph-volume lvm activate successful for osd ID: 5 --> ceph-volume lvm create successful for: ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-bd35b10f-830b-4dba-94e4-b419ccff13a0 Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2d5ad223-8615-41b2-94a6-9966432b1d58 Running command: /bin/ceph-authtool --gen-print-key Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-6 Running command: restorecon /var/lib/ceph/osd/ceph-6 Running command: chown -h ceph:ceph /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a Running command: chown -R ceph:ceph /dev/dm-2 Running command: ln -s /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a /var/lib/ceph/osd/ceph-6/block Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-6/activate.monmap stderr: got monmap epoch 1 Running command: ceph-authtool 
/var/lib/ceph/osd/ceph-6/keyring --create-keyring --name osd.6 --add-key AQDQXJZd556gARAAwGMblB4vfwtFHv3Zwsm+yg== stdout: creating /var/lib/ceph/osd/ceph-6/keyring added entity osd.6 auth auth(auid = 18446744073709551615 key=AQDQXJZd556gARAAwGMblB4vfwtFHv3Zwsm+yg== with 0 caps) Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/keyring Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/ Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 6 --monmap /var/lib/ceph/osd/ceph-6/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-6/ --osd-uuid 2d5ad223-8615-41b2-94a6-9966432b1d58 --setuser ceph --setgroup ceph --> ceph-volume lvm prepare successful for: ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6 Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a --path /var/lib/ceph/osd/ceph-6 Running command: ln -snf /dev/ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a /var/lib/ceph/osd/ceph-6/block Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-6/block Running command: chown -R ceph:ceph /dev/dm-2 Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-6 Running command: systemctl enable ceph-volume@lvm-6-2d5ad223-8615-41b2-94a6-9966432b1d58 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-6-2d5ad223-8615-41b2-94a6-9966432b1d58.service to /usr/lib/systemd/system/ceph-volume@.service. Running command: systemctl enable --runtime ceph-osd@6 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@6.service to /usr/lib/systemd/system/ceph-osd@.service. 
Running command: systemctl start ceph-osd@6 --> ceph-volume lvm activate successful for osd ID: 6 --> ceph-volume lvm create successful for: ceph-91f71136-7497-4576-9d3d-8475f10af3f0/osd-data-5d307a78-5152-48f5-9c38-5e1da6d9791a Running command: lvcreate --yes -l 1374 -n osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177 ceph-308e992d-b0ce-4903-8170-e8db2ba92a53 stdout: Logical volume "osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177" created. Running command: lvcreate --yes -l 1374 -n osd-data-bc37484a-908c-40d2-b283-4064e8509b52 ceph-308e992d-b0ce-4903-8170-e8db2ba92a53 stdout: Logical volume "osd-data-bc37484a-908c-40d2-b283-4064e8509b52" created. Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1c1b4e99-a4a3-4730-a4a1-f94eecb5a3ff Running command: /bin/ceph-authtool --gen-print-key Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-7 Running command: restorecon /var/lib/ceph/osd/ceph-7 Running command: chown -h ceph:ceph /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177 Running command: chown -R ceph:ceph /dev/dm-3 Running command: ln -s /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177 /var/lib/ceph/osd/ceph-7/block Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-7/activate.monmap stderr: got monmap epoch 1 Running command: ceph-authtool /var/lib/ceph/osd/ceph-7/keyring --create-keyring --name osd.7 --add-key AQDoXJZdQVRwLBAAp+EPXQmE7QpYQw2QBp/fTQ== stdout: creating /var/lib/ceph/osd/ceph-7/keyring added entity osd.7 auth auth(auid = 18446744073709551615 key=AQDoXJZdQVRwLBAAp+EPXQmE7QpYQw2QBp/fTQ== with 0 caps) Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7/keyring Running command: chown -R ceph:ceph 
/var/lib/ceph/osd/ceph-7/ Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 7 --monmap /var/lib/ceph/osd/ceph-7/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-7/ --osd-uuid 1c1b4e99-a4a3-4730-a4a1-f94eecb5a3ff --setuser ceph --setgroup ceph --> ceph-volume lvm prepare successful for: ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177 Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7 Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177 --path /var/lib/ceph/osd/ceph-7 Running command: ln -snf /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177 /var/lib/ceph/osd/ceph-7/block Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-7/block Running command: chown -R ceph:ceph /dev/dm-3 Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-7 Running command: systemctl enable ceph-volume@lvm-7-1c1b4e99-a4a3-4730-a4a1-f94eecb5a3ff stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-7-1c1b4e99-a4a3-4730-a4a1-f94eecb5a3ff.service to /usr/lib/systemd/system/ceph-volume@.service. Running command: systemctl enable --runtime ceph-osd@7 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@7.service to /usr/lib/systemd/system/ceph-osd@.service. 
Running command: systemctl start ceph-osd@7 --> ceph-volume lvm activate successful for osd ID: 7 --> ceph-volume lvm create successful for: ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-ffb068f7-fa86-402b-a1b2-bb7257376177 Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 1bf295fa-e959-407a-8d54-235450f6a2b8 Running command: /bin/ceph-authtool --gen-print-key Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-8 Running command: restorecon /var/lib/ceph/osd/ceph-8 Running command: chown -h ceph:ceph /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-bc37484a-908c-40d2-b283-4064e8509b52 Running command: chown -R ceph:ceph /dev/dm-4 Running command: ln -s /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-bc37484a-908c-40d2-b283-4064e8509b52 /var/lib/ceph/osd/ceph-8/block Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-8/activate.monmap stderr: got monmap epoch 1 Running command: ceph-authtool /var/lib/ceph/osd/ceph-8/keyring --create-keyring --name osd.8 --add-key AQAAXZZdZcTjEBAAH4gbQF0CbX98mNCHoJF5Gw== stdout: creating /var/lib/ceph/osd/ceph-8/keyring added entity osd.8 auth auth(auid = 18446744073709551615 key=AQAAXZZdZcTjEBAAH4gbQF0CbX98mNCHoJF5Gw== with 0 caps) Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/keyring Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/ Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 8 --monmap /var/lib/ceph/osd/ceph-8/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-8/ --osd-uuid 1bf295fa-e959-407a-8d54-235450f6a2b8 --setuser ceph --setgroup ceph --> ceph-volume lvm prepare successful for: ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-bc37484a-908c-40d2-b283-4064e8509b52 Running command: chown 
-R ceph:ceph /var/lib/ceph/osd/ceph-8
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-bc37484a-908c-40d2-b283-4064e8509b52 --path /var/lib/ceph/osd/ceph-8
Running command: ln -snf /dev/ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-bc37484a-908c-40d2-b283-4064e8509b52 /var/lib/ceph/osd/ceph-8/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block
Running command: chown -R ceph:ceph /dev/dm-4
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-8
Running command: systemctl enable ceph-volume@lvm-8-1bf295fa-e959-407a-8d54-235450f6a2b8
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-8-1bf295fa-e959-407a-8d54-235450f6a2b8.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: systemctl enable --runtime ceph-osd@8
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@8.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: systemctl start ceph-osd@8
--> ceph-volume lvm activate successful for osd ID: 8
--> ceph-volume lvm create successful for: ceph-308e992d-b0ce-4903-8170-e8db2ba92a53/osd-data-bc37484a-908c-40d2-b283-4064e8509b52
[root@node3 vagrant]# ceph --version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)
[root@node3 vagrant]# LC_NUMERIC=fi_FI.UTF-8 ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/sdo /dev/sdn
--> All devices are already used by ceph. No OSDs will be created.
[root@node3 vagrant]# LC_NUMERIC=fi_FI.UTF-8 ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/sdp /dev/sdm

Total OSDs: 4

  Type            Path                      LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdp                  5.37 GB         50%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdp                  5.37 GB         50%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdm                  5.37 GB         50%
----------------------------------------------------------------------------------------------------
  [data]          /dev/sdm                  5.37 GB         50%
--> The above OSDs would be created if the operation continues
--> do you want to proceed? (yes/no) yes
Running command: vgcreate --force --yes ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de /dev/sdp
 stdout: Physical volume "/dev/sdp" successfully created.
 stdout: Volume group "ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de" successfully created
Running command: vgcreate --force --yes ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5 /dev/sdm
 stdout: Physical volume "/dev/sdm" successfully created.
 stdout: Volume group "ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5" successfully created
Running command: lvcreate --yes -l 1374 -n osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415 ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5
 stdout: Logical volume "osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415" created.
Running command: lvcreate --yes -l 1374 -n osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790 ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5
 stdout: Logical volume "osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790" created.
Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9214a720-66fe-4970-822c-78150b0d6cd2 Running command: /bin/ceph-authtool --gen-print-key Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-9 Running command: restorecon /var/lib/ceph/osd/ceph-9 Running command: chown -h ceph:ceph /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415 Running command: chown -R ceph:ceph /dev/dm-5 Running command: ln -s /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415 /var/lib/ceph/osd/ceph-9/block Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-9/activate.monmap stderr: got monmap epoch 1 Running command: ceph-authtool /var/lib/ceph/osd/ceph-9/keyring --create-keyring --name osd.9 --add-key AQBPXZZdwB18FBAAjqv5PoIr5GSw06/UHDcvkg== stdout: creating /var/lib/ceph/osd/ceph-9/keyring added entity osd.9 auth auth(auid = 18446744073709551615 key=AQBPXZZdwB18FBAAjqv5PoIr5GSw06/UHDcvkg== with 0 caps) Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-9/keyring Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-9/ Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 9 --monmap /var/lib/ceph/osd/ceph-9/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-9/ --osd-uuid 9214a720-66fe-4970-822c-78150b0d6cd2 --setuser ceph --setgroup ceph --> ceph-volume lvm prepare successful for: ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415 Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-9 Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415 --path 
/var/lib/ceph/osd/ceph-9
Running command: ln -snf /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415 /var/lib/ceph/osd/ceph-9/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-9/block
Running command: chown -R ceph:ceph /dev/dm-5
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-9
Running command: systemctl enable ceph-volume@lvm-9-9214a720-66fe-4970-822c-78150b0d6cd2
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-9-9214a720-66fe-4970-822c-78150b0d6cd2.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: systemctl enable --runtime ceph-osd@9
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@9.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: systemctl start ceph-osd@9
--> ceph-volume lvm activate successful for osd ID: 9
--> ceph-volume lvm create successful for: ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-4dd59f64-c8f9-4fed-995d-3cad24cdd415
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 85f2d85c-a9f9-4ba7-973f-749d774df77f
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-10
Running command: restorecon /var/lib/ceph/osd/ceph-10
Running command: chown -h ceph:ceph /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790
Running command: chown -R ceph:ceph /dev/dm-6
Running command: ln -s /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790 /var/lib/ceph/osd/ceph-10/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-10/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-10/keyring --create-keyring --name osd.10 --add-key AQBmXZZd/9X3NRAA9LwKXGt99BVI/bwZMDLh0g==
 stdout: creating /var/lib/ceph/osd/ceph-10/keyring
added entity osd.10 auth auth(auid = 18446744073709551615 key=AQBmXZZd/9X3NRAA9LwKXGt99BVI/bwZMDLh0g== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-10/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-10/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 10 --monmap /var/lib/ceph/osd/ceph-10/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-10/ --osd-uuid 85f2d85c-a9f9-4ba7-973f-749d774df77f --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790 --path /var/lib/ceph/osd/ceph-10
Running command: ln -snf /dev/ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790 /var/lib/ceph/osd/ceph-10/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-10/block
Running command: chown -R ceph:ceph /dev/dm-6
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-10
Running command: systemctl enable ceph-volume@lvm-10-85f2d85c-a9f9-4ba7-973f-749d774df77f
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-10-85f2d85c-a9f9-4ba7-973f-749d774df77f.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: systemctl enable --runtime ceph-osd@10
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@10.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: systemctl start ceph-osd@10
--> ceph-volume lvm activate successful for osd ID: 10
--> ceph-volume lvm create successful for: ceph-35aa7817-9d50-49bb-80e2-71f5619fd7f5/osd-data-dc4ea8e4-7b80-4781-a2f7-9c53961be790
Running command: lvcreate --yes -l 1374 -n osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830 ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de
 stdout: Logical volume "osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830" created.
Running command: lvcreate --yes -l 1374 -n osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43 ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de
 stdout: Logical volume "osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43" created.
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a3451b9c-8563-44dd-80c0-258f7730ce7c
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-11
Running command: restorecon /var/lib/ceph/osd/ceph-11
Running command: chown -h ceph:ceph /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830
Running command: chown -R ceph:ceph /dev/dm-7
Running command: ln -s /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830 /var/lib/ceph/osd/ceph-11/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-11/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-11/keyring --create-keyring --name osd.11 --add-key AQB/XZZdfITbEBAAe0mSp9kdHqhw8AK43Tplkw==
 stdout: creating /var/lib/ceph/osd/ceph-11/keyring
added entity osd.11 auth auth(auid = 18446744073709551615 key=AQB/XZZdfITbEBAAe0mSp9kdHqhw8AK43Tplkw== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-11/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-11/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 11 --monmap /var/lib/ceph/osd/ceph-11/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-11/ --osd-uuid a3451b9c-8563-44dd-80c0-258f7730ce7c --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-11
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830 --path /var/lib/ceph/osd/ceph-11
Running command: ln -snf /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830 /var/lib/ceph/osd/ceph-11/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-11/block
Running command: chown -R ceph:ceph /dev/dm-7
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-11
Running command: systemctl enable ceph-volume@lvm-11-a3451b9c-8563-44dd-80c0-258f7730ce7c
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-11-a3451b9c-8563-44dd-80c0-258f7730ce7c.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: systemctl enable --runtime ceph-osd@11
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@11.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: systemctl start ceph-osd@11
--> ceph-volume lvm activate successful for osd ID: 11
--> ceph-volume lvm create successful for: ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-5b45a2e2-a7bd-4383-bc41-992681c80830
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 68b7f055-1245-4a6b-94c6-daafcd6f07ef
Running command: /bin/ceph-authtool --gen-print-key
Running command: mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-12
Running command: restorecon /var/lib/ceph/osd/ceph-12
Running command: chown -h ceph:ceph /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43
Running command: chown -R ceph:ceph /dev/dm-8
Running command: ln -s /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43 /var/lib/ceph/osd/ceph-12/block
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-12/activate.monmap
 stderr: got monmap epoch 1
Running command: ceph-authtool /var/lib/ceph/osd/ceph-12/keyring --create-keyring --name osd.12 --add-key AQCWXZZdg0RqNBAATUj0Qx9aHfu0tK3o94pF/A==
 stdout: creating /var/lib/ceph/osd/ceph-12/keyring
added entity osd.12 auth auth(auid = 18446744073709551615 key=AQCWXZZdg0RqNBAATUj0Qx9aHfu0tK3o94pF/A== with 0 caps)
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/keyring
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12/
Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 12 --monmap /var/lib/ceph/osd/ceph-12/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-12/ --osd-uuid 68b7f055-1245-4a6b-94c6-daafcd6f07ef --setuser ceph --setgroup ceph
--> ceph-volume lvm prepare successful for: ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12
Running command: ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43 --path /var/lib/ceph/osd/ceph-12
Running command: ln -snf /dev/ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43 /var/lib/ceph/osd/ceph-12/block
Running command: chown -h ceph:ceph /var/lib/ceph/osd/ceph-12/block
Running command: chown -R ceph:ceph /dev/dm-8
Running command: chown -R ceph:ceph /var/lib/ceph/osd/ceph-12
Running command: systemctl enable ceph-volume@lvm-12-68b7f055-1245-4a6b-94c6-daafcd6f07ef
 stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-12-68b7f055-1245-4a6b-94c6-daafcd6f07ef.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: systemctl enable --runtime ceph-osd@12
 stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@12.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: systemctl start ceph-osd@12
--> ceph-volume lvm activate successful for osd ID: 12
--> ceph-volume lvm create successful for: ceph-6d2e5fb3-0eeb-43cd-9862-b10586ad92de/osd-data-1a77d783-5c2f-4cf6-8b29-606296644b43
I am closing this as "can't reproduce"; it was probably a real issue that has since been resolved. Feel free to reopen otherwise.
#3 Updated by Tomas Petr over 3 years ago
fixed in 12.2.11:
ceph-volume normalize comma to dot for string to int conversions (issue#37442, pr#25776, Alfredo Deza)
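For context, the root cause was that LVM formats decimal numbers according to `LC_NUMERIC`, so `vgs` reports `931,48g` instead of `931.48g` under a European locale, and a plain `float()` conversion then fails. A minimal sketch of the normalization idea behind the referenced fix (this is not the actual ceph-volume code; the helper name is hypothetical):

```python
def size_to_float(size):
    """Parse an LVM size string such as '931,48g' or '931.48g' into a float.

    LVM prints the decimal separator according to LC_NUMERIC, so a locale
    like en_DK.UTF-8 yields ',' where Python's float() expects '.'.
    Normalizing the comma to a dot makes the parse locale-independent.
    (Hypothetical helper sketching the idea; not the real ceph-volume code.)
    """
    # Strip the unit suffix (e.g. the trailing 'g' from `vgs --units=g` output)
    number = size.rstrip('gG')
    # Normalize a comma decimal separator to a dot before converting
    return float(number.replace(',', '.'))

# Both locale variants parse to the same value:
# size_to_float('931,48g') == size_to_float('931.48g') == 931.48
```

An alternative would be to force a fixed locale (e.g. `LC_ALL=C`) in the environment of the `vgs`/`lvs` subprocess, so the separator is always a dot regardless of what sshd's `AcceptEnv` passed through.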