Bug #23480: No meaningful error when RGW cannot create pools due to lack of available PGs
Status: Closed
% Done: 0%
Backport: luminous
Regression: No
Severity: 3 - minor
Description
Sync logs:
2018-03-27 19:26:25.142318 7f62999e5e80 0 ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable), process (unknown), pid 4641
2018-03-27 19:26:25.303906 7f62999e5e80 1 error read_lastest_epoch .rgw.root:periods.d615c771-5256-4c8b-9d49-25d5bebe2a98.latest_epoch
2018-03-27 19:26:25.303928 7f62999e5e80 0 failed to use_latest_epoch period id d615c771-5256-4c8b-9d49-25d5bebe2a98 realm id : (2) No such file or directory
2018-03-27 19:26:26.639287 7f62999e5e80 1 error read_lastest_epoch .rgw.root:periods.d615c771-5256-4c8b-9d49-25d5bebe2a98.latest_epoch
2018-03-27 19:26:26.747192 7f62999e5e80 0 starting handler: civetweb
2018-03-27 19:26:27.949146 7f626d3b0700 1 meta sync: epoch=0 in sync status comes before remote's oldest mdlog epoch=1, restarting sync
2018-03-27 19:26:32.968205 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:32.968229 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:32.968260 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:32.968268 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:32.968326 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:32.968345 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:32.968418 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:33.645546 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:33.945840 7f627b3cc700 0 meta sync: ERROR: can't store key: bucket.instance:mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1 ret=-34
2018-03-27 19:26:33.945933 7f626d3b0700 0 meta sync: cr:s=0x56075088fd10:op=0x5607720cc800:18RGWMetaSyncShardCR: child operation stack=0x560750c32960 entry=bucket.instance:mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1 returned -34
2018-03-27 19:26:33.988660 7f6280dde700 1 meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
2018-03-27 19:26:50.327672 7f6272bbb700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327686 7f62733bc700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327686 7f62713b8700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327711 7f62733bc700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327712 7f6272bbb700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327715 7f62713b8700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327718 7f6273bbd700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327731 7f6273bbd700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327731 7f627a3ca700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327742 7f627a3ca700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327741 7f626bbad700 0 data sync: ERROR: failed to sync object: mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1/12466_13
2018-03-27 19:26:50.327771 7f62753c0700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327778 7f62753c0700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327779 7f627c3ce700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327785 7f627c3ce700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327790 7f626ebb3700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327796 7f627bbcd700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327800 7f627bbcd700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327805 7f626ebb3700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327805 7f62793c8700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327810 7f62793c8700 0 store->fetch_remote_obj() returned r=-34
2018-03-27 19:26:50.327856 7f626bbad700 0 data sync: ERROR: failed to sync object: mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1/12466_4
2018-03-27 19:26:50.327872 7f626bbad700 0 data sync: ERROR: failed to sync object: mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1/12466_1
2018-03-27 19:26:50.327880 7f627cbcf700 0 ERROR: open_pool_ctx() returned -34
2018-03-27 19:26:50.327887 7f626bbad700 0 data sync: ERROR: failed to sync object: mybucket:0ef1a91a-4aee-427e-bdf8-30589abb2d3e.36605032.1/12466_18
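The repeated return code -34 in these logs is a negative errno: 34 is ERANGE ("Numerical result out of range"), the same error the monitor raises when creating a pool would exceed the PG-per-OSD limit. A minimal C++ sketch to decode it (illustrative, not part of RGW):

#include <cerrno>
#include <cstdio>
#include <cstring>

int main() {
    int r = -34;  // return code logged by open_pool_ctx() and fetch_remote_obj()
    // Negative-errno convention: flip the sign and look up the message.
    printf("%d -> %s (ERANGE == %d)\n", r, strerror(-r), ERANGE);
    return 0;
}

On Linux this prints "-34 -> Numerical result out of range (ERANGE == 34)".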
Pools:
$ rados lspools
testpool
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
Attempting to create the pool by hand:
$ ceph osd pool create default.rgw.bucket 8 8
Error ERANGE: pg_num 8 size 3 would mean 1216 total pgs, which exceeds max 1200 (mon_max_pg_per_osd 300 * num_in_osds 4)
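The arithmetic behind the monitor's refusal: each pool contributes pg_num * size PG replicas, and the projected total across all pools must stay at or below mon_max_pg_per_osd * num_in_osds. A small C++ sketch of that check, using the numbers from the error above (the existing-replica count of 1192 is inferred from 1216 - 8*3):

#include <cstdio>

int main() {
    const int mon_max_pg_per_osd = 300;
    const int num_in_osds = 4;
    const int max_pgs = mon_max_pg_per_osd * num_in_osds;       // 1200

    const int existing_pg_replicas = 1192;  // inferred: 1216 total - 8 * 3 new
    const int pg_num = 8, size = 3;

    const int projected = existing_pg_replicas + pg_num * size;  // 1216
    if (projected > max_pgs)
        printf("Error ERANGE: pg_num %d size %d would mean %d total pgs, "
               "which exceeds max %d\n", pg_num, size, projected, max_pgs);
    return 0;
}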
Some indication of the failure, or a graceful exit, would have been nice, rather than allowing the instance to continue running and returning http_status=416 on every request.
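The http_status=416 is a side effect of the same errno: RGW translates failures into HTTP statuses via its error table, and ERANGE ends up as 416 (InvalidRange), a status normally reserved for bad Range headers, so clients see nothing that points at PG exhaustion. A sketch of that translation step (the table contents here are an assumption about the mapping, not a copy of RGW's table):

#include <cerrno>
#include <cstdio>
#include <map>

int main() {
    // Assumed excerpt of RGW's errno -> HTTP status mapping.
    const std::map<int, int> errno_to_http = { { ERANGE, 416 } };
    const int r = -34;  // error propagated up from open_pool_ctx()
    printf("http_status=%d\n", errno_to_http.at(-r));  // -> 416
    return 0;
}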
Updated by Matt Benjamin about 6 years ago
Agreed, we'd like to improve usability.
Updated by Casey Bodley about 6 years ago
- Related to Bug #22351: Couldn't init storage provider (RADOS) added
Updated by Casey Bodley about 6 years ago
- Status changed from New to Fix Under Review
An error message was added for http://tracker.ceph.com/issues/22351, but it didn't apply to these code paths in multisite sync.
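The shape of the missing check, as a hypothetical sketch (create_sync_pool and ensure_pool are illustrative names, not the actual patch): when a pool create issued from the sync paths comes back with -ERANGE, log a message that names the limit instead of letting -34 surface as http_status=416.

#include <cerrno>
#include <iostream>
#include <string>

// Hypothetical stand-in for the librados pool-create call; hardwired to
// fail with -ERANGE here to exercise the error path.
static int create_sync_pool(const std::string&) { return -ERANGE; }

static int ensure_pool(const std::string& name) {
    int r = create_sync_pool(name);
    if (r == -ERANGE) {
        // The meaningful error this ticket asked for.
        std::cerr << "ERROR: failed to create pool '" << name
                  << "': would exceed mon_max_pg_per_osd * num_in_osds; "
                     "lower pg_num or raise the limit" << std::endl;
    }
    return r;
}

int main() { return ensure_pool("default.rgw.bucket") == 0 ? 0 : 1; }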
Updated by Casey Bodley about 6 years ago
- Status changed from Fix Under Review to Pending Backport
- Backport set to luminous
Updated by Nathan Cutler about 6 years ago
- Copied to Backport #23866: luminous: No meaningful error when RGW cannot create pools due to lack of available PGs added
Updated by Nathan Cutler almost 6 years ago
- Status changed from Pending Backport to Resolved