Bug #54109

open

OSD creation failed: devices on the node were skipped

Added by Srinivasa Bharath Kanta over 2 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
1 - critical
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

OSD creation failed with the following error:

2022-02-01 09:46:45.881647 I | cephosd: skipping device "vda1" because it contains a filesystem "ext4"
2022-02-01 09:46:45.881677 I | cephosd: skipping device "vdb" because it contains a filesystem "LVM2_member"
2022-02-01 09:46:45.881720 I | cephosd: skipping device "vdc" because it contains a filesystem "LVM2_member"
2022-02-01 09:46:45.881744 I | cephosd: skipping device "vdd" because it contains a filesystem "LVM2_member"
2022-02-01 09:46:45.886539 I | cephosd: configuring osd devices: {"Entries":{}}
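The skip decisions above come from Rook's device discovery: any device that already carries a filesystem or LVM signature is excluded from OSD provisioning, so when every disk has a stale signature the candidate set is empty and `configuring osd devices: {"Entries":{}}` follows. A minimal sketch of that filtering logic (the function and device list below are illustrative, not Rook's actual code):

```python
# Sketch of the device-filtering behaviour seen in the log above.
# Rook skips any device whose probe reports an existing filesystem
# signature (e.g. "ext4" or "LVM2_member"); only clean devices are
# eligible for OSD creation. Names here are hypothetical.

def select_osd_candidates(devices):
    """Return devices eligible for OSD creation; report the skipped ones."""
    candidates = []
    for name, fstype in devices:
        if fstype:  # any detected signature disqualifies the device
            print(f'skipping device "{name}" because it contains a filesystem "{fstype}"')
        else:
            candidates.append(name)
    return candidates

# Device list mirroring the log: every disk carries a signature,
# so no candidates remain and no OSDs are configured.
probed = [("vda1", "ext4"), ("vdb", "LVM2_member"),
          ("vdc", "LVM2_member"), ("vdd", "LVM2_member")]
print("osd candidates:", select_osd_candidates(probed))
```

Under this assumption, the crimson and classic runs differ only in whether the devices still carried LVM2_member signatures when discovery ran, which is why the same node succeeded with the classic build.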

The node device details:
------------------------
[root@depressa012 ubuntu]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 447.1G 0 disk
└─sda1 8:1 0 447.1G 0 part /
sdb 8:16 0 7T 0 disk
sdc 8:32 0 7T 0 disk
sdd 8:48 0 7T 0 disk
nvme1n1 259:0 0 349.3G 0 disk
nvme0n1 259:1 0 349.3G 0 disk
[root@depressa012 ubuntu]#

Ceph status:
------------
[rook@rook-ceph-tools-d6d7c985c-7lbq5 /]$ ceph -s
cluster:
id: 487e1397-4565-4ffb-988c-80bd4b5cb385
health: HEALTH_WARN
Reduced data availability: 1 pg inactive
OSD count 0 < osd_pool_default_size 1

services:
mon: 1 daemons, quorum a (age 72m)
mgr: a(active, since 71m)
osd: 0 osds: 0 up, 0 in (since 71m)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs: 100.000% pgs unknown
1 unknown

[rook@rook-ceph-tools-d6d7c985c-7lbq5 /]$

OSD creation was successful with the classic build.

Cluster status with the classic build:
--------------------------------------
[rook@rook-ceph-tools-d6d7c985c-chvhn /]$ ceph -s
cluster:
id: d402c0e3-97d5-4b08-8908-3b21137537dc
health: HEALTH_OK

services:
mon: 1 daemons, quorum a (age 5h)
mgr: a(active, since 5h)
osd: 3 osds: 3 up (since 5h), 3 in (since 5h)
data:
pools: 2 pools, 33 pgs
objects: 2 objects, 449 KiB
usage: 18 MiB used, 59 GiB / 59 GiB avail
pgs: 33 active+clean

[rook@rook-ceph-tools-d6d7c985c-chvhn /]$

Upstream build:
--------------- https://shaman.ceph.com/builds/ceph/master/29e1fc1722aa5915b44828a5ad02ec45ce760aa3/crimson/293797/


Files

crimson-log (16.6 KB) - Srinivasa Bharath Kanta, 02/02/2022 05:09 AM
