Bug #24947

Ceph Luminous radosgw: Couldn't init storage provider (RADOS)

Added by Yves Blusseau almost 6 years ago. Updated over 5 years ago.

Status: Closed
Priority: Normal
Assignee: -
% Done: 0%
Source: Community (user)
Regression: No
Severity: 3 - minor

Description

Hello,

Ceph seems to have created the radosgw pools, but the daemon can't start: it fails with the error "Couldn't init storage provider (RADOS)".

# ceph -v
ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)

# ceph -s
  cluster:
    id:     0873b3c6-0f38-4b8d-b1b7-06cc53bbe126
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cnp69preceph01,cnp69preceph02,cnp69preceph03
    mgr: cnp69preceph02(active), standbys: cnp69preceph03, cnp69preceph01
    mds: cephfs-1/1/1 up  {0=cnp69preceph01=up:active}, 2 up:standby
    osd: 8 osds: 8 up, 8 in

  data:
    pools:   2 pools, 136 pgs
    objects: 21 objects, 2246 bytes
    usage:   16795 MB used, 2026 GB / 2042 GB avail
    pgs:     136 active+clean

# cat /etc/ceph/ceph.conf
[global]
  fsid = 0873b3c6-0f38-4b8d-b1b7-06cc53bbe126

  auth_cluster_required = cephx
  auth_service_required = cephx
  auth_client_required = cephx
  cephx_require_signatures = false
  cephx_cluster_require_signatures = true
  cephx_service_require_signatures = false

  osd_pool_default_min_size = 2
  osd_pool_default_size = 3
  osd pool default pg num = 8
  osd pool default pgp num = 8

[mon]

  mon_clock_drift_allowed = .15
  mon_clock_drift_warn_backoff = 30

    [mon.cnp69preceph01]
      host = cnp69preceph01
      mon_addr = 10.203.35.114
    [mon.cnp69preceph02]
      host = cnp69preceph02
      mon_addr = 10.203.35.115
    [mon.cnp69preceph03]
      host = cnp69preceph03
      mon_addr = 10.203.35.116
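
For what it's worth (this check is not part of the original report): pools are created by the monitors, so the osd_pool_default_pg_num / osd_pool_default_pgp_num values above only take effect if the mons actually loaded them. On Luminous the running value can be confirmed through a mon's admin socket, assuming the socket is available on the mon host; given the ceph.conf above, this should report 8:

# ceph daemon mon.cnp69preceph01 config get osd_pool_default_pg_num
{
    "osd_pool_default_pg_num": "8"
}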

# ceph auth list
...

client.rgw.cnp69preceph01
        key: AQD9skxbLJWhORAAehuc9xTJWLeuTQqsPtGcKA==
        caps: [mon] allow rw
        caps: [osd] allow rwx
...

# cat /var/lib/ceph/radosgw/ceph-rgw.cnp69preceph01/keyring
[client.rgw.cnp69preceph01]
        key = AQD9skxbLJWhORAAehuc9xTJWLeuTQqsPtGcKA==
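
For context (not a step from the original report), a keyring with exactly these caps is normally produced with ceph auth get-or-create, along the lines of:

# ceph auth get-or-create client.rgw.cnp69preceph01 mon 'allow rw' osd 'allow rwx' \
      -o /var/lib/ceph/radosgw/ceph-rgw.cnp69preceph01/keyring

The key and caps match the client.rgw.cnp69preceph01 entry from ceph auth list above, which suggests authentication is not the problem here.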

# ceph osd pool ls
cephfs_data
cephfs_metadata

# /usr/bin/radosgw -d --cluster ceph --name client.rgw.cnp69preceph01 --setuser ceph --setgroup ceph --debug-rgw=20
2018-07-16 18:59:36.957979 7f90ad194e00  0 deferred set uid:gid to 167:167 (ceph:ceph)
2018-07-16 18:59:36.958536 7f90ad194e00  0 ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process (unknown), pid 292
2018-07-16 18:59:37.012297 7f9095341700  2 RGWDataChangesLog::ChangesRenewThread: start
2018-07-16 18:59:37.012307 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc141827a0 obj=.rgw.root:default.realm state=0x564251c02860 s->prefetch_data=0
2018-07-16 18:59:37.015016 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182400 obj=.rgw.root:converted state=0x564251c02860 s->prefetch_data=0
2018-07-16 18:59:37.016135 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14181c10 obj=.rgw.root:default.realm state=0x564251c026c0 s->prefetch_data=0
2018-07-16 18:59:37.017044 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14181df0 obj=.rgw.root:zonegroups_names.default state=0x564251c026c0 s->prefetch_data=0
2018-07-16 18:59:37.018675 7f90ad194e00 10 failed to list objects pool_iterate_begin() returned r=-2
2018-07-16 18:59:37.018703 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14181d00 obj=.rgw.root:zone_names.default state=0x564251c026c0 s->prefetch_data=0
2018-07-16 18:59:37.019890 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14181d00 obj=.rgw.root:zonegroups_names.default state=0x564251c026c0 s->prefetch_data=0
2018-07-16 18:59:37.020833 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182980 obj=.rgw.root:region_map state=0x564251c02860 s->prefetch_data=0
2018-07-16 18:59:37.021731 7f90ad194e00 10  cannot find current period zonegroup using local zonegroup
2018-07-16 18:59:37.021747 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182400 obj=.rgw.root:default.realm state=0x564251c02860 s->prefetch_data=0
2018-07-16 18:59:37.022863 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc141825e0 obj=.rgw.root:zonegroups_names.default state=0x564251c02860 s->prefetch_data=0
2018-07-16 18:59:37.023796 7f90ad194e00 10 Creating default zonegroup
2018-07-16 18:59:37.024713 7f90ad194e00 10 couldn't find old data placement pools config, setting up new ones for the zone
2018-07-16 18:59:37.025672 7f90ad194e00 10 failed to list objects pool_iterate_begin() returned r=-2
2018-07-16 18:59:37.025685 7f90ad194e00 10 WARNING: store->list_zones() returned r=-2
2018-07-16 18:59:37.025735 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14181ff0 obj=.rgw.root:zone_names.default state=0x564251c02860 s->prefetch_data=0
2018-07-16 18:59:39.954906 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14181e00 obj=.rgw.root:default.realm state=0x564251c026c0 s->prefetch_data=0
2018-07-16 18:59:39.957125 7f90ad194e00 10 could not read realm id: (2) No such file or directory
2018-07-16 18:59:39.957146 7f90ad194e00 10 WARNING: failed to set zone as default, r=-22
2018-07-16 18:59:39.957157 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182290 obj=.rgw.root:zonegroups_names.default state=0x564251c026c0 s->prefetch_data=0
2018-07-16 18:59:39.975570 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14181f30 obj=.rgw.root:zone_info.e0c70beb-24f6-4e87-8a61-126d3fdbf272 state=0x564251c02ee0 s->prefetch_data=0
2018-07-16 18:59:39.977294 7f90ad194e00 20 get_system_obj_state: s->obj_tag was set empty
2018-07-16 18:59:39.977316 7f90ad194e00 20 rados->read ofs=0 len=524288
2018-07-16 18:59:39.978335 7f90ad194e00 20 rados->read r=0 bl.length=688
2018-07-16 18:59:39.978378 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182670 obj=.rgw.root:zonegroup_info.6e456a04-9c8b-4e4f-baf7-6054c83d65a6 state=0x564251c02ee0 s->prefetch_data=0
2018-07-16 18:59:39.979669 7f90ad194e00 20 get_system_obj_state: s->obj_tag was set empty
2018-07-16 18:59:39.979684 7f90ad194e00 20 rados->read ofs=0 len=524288
2018-07-16 18:59:39.980941 7f90ad194e00 20 rados->read r=0 bl.length=333
2018-07-16 18:59:39.980980 7f90ad194e00 20 zonegroup default
2018-07-16 18:59:39.980998 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182980 obj=.rgw.root:period_config.default state=0x564251c02ee0 s->prefetch_data=0
2018-07-16 18:59:39.981945 7f90ad194e00 10 Cannot find current period zone using local zone
2018-07-16 18:59:39.981951 7f90ad194e00 10  Using default name default
2018-07-16 18:59:39.981957 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182800 obj=.rgw.root:zone_names.default state=0x564251c02ee0 s->prefetch_data=0
2018-07-16 18:59:39.983230 7f90ad194e00 20 get_system_obj_state: s->obj_tag was set empty
2018-07-16 18:59:39.983237 7f90ad194e00 20 rados->read ofs=0 len=524288
2018-07-16 18:59:39.984129 7f90ad194e00 20 rados->read r=0 bl.length=46
2018-07-16 18:59:39.984148 7f90ad194e00 20 get_system_obj_state: rctx=0x7ffc14182800 obj=.rgw.root:zone_info.e0c70beb-24f6-4e87-8a61-126d3fdbf272 state=0x564251c02ee0 s->prefetch_data=0
2018-07-16 18:59:39.986537 7f90ad194e00 20 get_system_obj_state: s->obj_tag was set empty
2018-07-16 18:59:39.986553 7f90ad194e00 20 rados->read ofs=0 len=524288
2018-07-16 18:59:39.987287 7f90ad194e00 20 rados->read r=0 bl.length=688
2018-07-16 18:59:39.987314 7f90ad194e00 20 zone default
2018-07-16 18:59:42.838974 7f90ad194e00 20 add_watcher() i=0
2018-07-16 18:59:42.857118 7f90ad194e00 20 add_watcher() i=1
2018-07-16 18:59:42.868744 7f90ad194e00 20 add_watcher() i=2
2018-07-16 18:59:42.882614 7f90ad194e00 20 add_watcher() i=3
2018-07-16 18:59:42.890586 7f90ad194e00 20 add_watcher() i=4
2018-07-16 18:59:42.898333 7f90ad194e00 20 add_watcher() i=5
2018-07-16 18:59:42.906577 7f90ad194e00 20 add_watcher() i=6
2018-07-16 18:59:42.914454 7f90ad194e00 20 add_watcher() i=7
2018-07-16 18:59:42.914462 7f90ad194e00  2 all 8 watchers are set, enabling cache
2018-07-16 18:59:47.557135 7f90ad194e00 -1 Couldn't init storage provider (RADOS)
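
Note that at --debug-rgw=20 the underlying RADOS error never appears: the trace shows RGW setting up the default zone and zonegroup, registering all 8 cache watchers, and then failing without further detail. One way to dig deeper on 12.2.4 (an extra step, not taken in the original report) is to raise the rados and messenger debug levels as well:

# /usr/bin/radosgw -d --cluster ceph --name client.rgw.cnp69preceph01 \
      --setuser ceph --setgroup ceph --debug-rgw=20 --debug-rados=20 --debug-ms=1

As update #1 below shows, 12.2.6 prints the failing pool_create error by default.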

# ceph osd pool ls
cephfs_data
cephfs_metadata
.rgw.root
default.rgw.control
default.rgw.meta
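
Worth noting before the fix below: RGW did manage to create .rgw.root, default.rgw.control and default.rgw.meta, so pool creation failed partway through rather than at the first pool. When pool creation starts failing, the PGS column of ceph osd df is a convenient way to see how many placement groups each OSD already carries:

# ceph osd df
(the PGS column shows the current placement-group count per OSD)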

#1

Updated by Yves Blusseau almost 6 years ago

After upgrading to Ceph 12.2.6 I get:

# /usr/bin/radosgw -d --cluster ceph --name client.rgw.cnp69preceph01 --setuser ceph --setgroup ceph
2018-07-17 12:46:08.219400 7f9e66f4ce80  0 deferred set uid:gid to 167:167 (ceph:ceph)
2018-07-17 12:46:08.219900 7f9e66f4ce80  0 ceph version 12.2.6 (488df8a1076c4f5fc5b8d18a90463262c438740f) luminous (stable), process radosgw, pid 512
2018-07-17 12:46:19.199561 7f9e66f4ce80  0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2018-07-17 12:46:19.202565 7f9e66f4ce80 -1 Couldn't init storage provider (RADOS)

So my problem was that mon_max_pg_per_osd was exceeded.
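
For context, the arithmetic behind that (the numbers come from the ceph -s output above): 136 PGs replicated 3 ways across 8 OSDs is already about 136 * 3 / 8 = 51 PG replicas per OSD before RGW creates its own pools, and the monitors refuse to create a pool that would push the projected per-OSD count past mon_max_pg_per_osd (200 by default on Luminous). With only ~51 replicas per OSD the default limit would not normally trip, so either the effective limit or the requested pg_num on this cluster was different from the defaults; the report does not say which, and checking both is the first step. Note the upgrade to 12.2.6 did not change the failure, it only made the reason visible. A sketch of how to inspect and raise the limit on Luminous; 300 is an arbitrary example value, and since injectargs does not survive a mon restart the setting should be persisted in ceph.conf as well:

# ceph daemon mon.cnp69preceph01 config get mon_max_pg_per_osd
# ceph tell mon.* injectargs '--mon_max_pg_per_osd=300'

and in /etc/ceph/ceph.conf on the mon hosts:

[global]
  mon_max_pg_per_osd = 300

Alternatives are adding OSDs, or pre-creating the RGW pools by hand with smaller pg_num values before starting the daemon.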

#2

Updated by Abhishek Lekshmanan over 5 years ago

  • Status changed from New to Closed