Bug #62171


All OSD shards should use the same scheduler type when osd_op_queue=debug_random.

Added by Aishwarya Mathuria 10 months ago. Updated about 2 months ago.

Status:
Resolved
Priority:
Normal
Category:
Correctness/Safety
Target version:
-
% Done:

0%

Source:
Tags:
backport_processed
Backport:
quincy, reef
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
rados
Component(RADOS):
OSD
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

In many teuthology tests, osd_op_queue is set to debug_random in order to exercise either the WPQ or the mClock scheduler. However, within a single OSD, each OSDShard is currently assigned a scheduler type independently at random, so shards of the same OSD can end up running different schedulers. The randomization should happen once at the OSD level, not per OSDShard.


2023-07-22T19:15:20.824+0000 7fefa9083640  0 osd.4:1.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=64)
2023-07-22T19:15:20.824+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
2023-07-22T19:15:20.824+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open size 96636764160 (0x1680000000, 90 GiB) block_size 4096 (4 KiB) non-rotational device, discard supported
2023-07-22T19:15:20.824+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label
2023-07-22T19:15:20.824+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label got bdev(osd_uuid 32dbe31f-b837-4bf6-9505-1ffa6e2c7016, size 0x1680000000, btime 2023-07-22T19:14:50.886561+0000, desc main, 13 meta)
2023-07-22T19:15:20.824+0000 7fefa9083640  1 bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2023-07-22T19:15:20.824+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) close
2023-07-22T19:15:21.120+0000 7fefa9083640  0 osd.4:2.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=64)
2023-07-22T19:15:21.120+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
2023-07-22T19:15:21.120+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open size 96636764160 (0x1680000000, 90 GiB) block_size 4096 (4 KiB) non-rotational device, discard supported
2023-07-22T19:15:21.120+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label
2023-07-22T19:15:21.120+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label got bdev(osd_uuid 32dbe31f-b837-4bf6-9505-1ffa6e2c7016, size 0x1680000000, btime 2023-07-22T19:14:50.886561+0000, desc main, 13 meta)
2023-07-22T19:15:21.120+0000 7fefa9083640  1 bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2023-07-22T19:15:21.120+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) close
2023-07-22T19:15:21.408+0000 7fefa9083640  0 osd.4:3.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=64)
2023-07-22T19:15:21.408+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
2023-07-22T19:15:21.408+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open size 96636764160 (0x1680000000, 90 GiB) block_size 4096 (4 KiB) non-rotational device, discard supported
2023-07-22T19:15:21.408+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label
2023-07-22T19:15:21.408+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label got bdev(osd_uuid 32dbe31f-b837-4bf6-9505-1ffa6e2c7016, size 0x1680000000, btime 2023-07-22T19:14:50.886561+0000, desc main, 13 meta)
2023-07-22T19:15:21.408+0000 7fefa9083640  1 bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2023-07-22T19:15:21.408+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) close
2023-07-22T19:15:21.704+0000 7fefa9083640  0 osd.4:4.OSDShard using op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue, cutoff=64)
2023-07-22T19:15:21.704+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
2023-07-22T19:15:21.704+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open size 96636764160 (0x1680000000, 90 GiB) block_size 4096 (4 KiB) non-rotational device, discard supported
2023-07-22T19:15:21.704+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label
2023-07-22T19:15:21.704+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label got bdev(osd_uuid 32dbe31f-b837-4bf6-9505-1ffa6e2c7016, size 0x1680000000, btime 2023-07-22T19:14:50.886561+0000, desc main, 13 meta)
2023-07-22T19:15:21.704+0000 7fefa9083640  1 bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2023-07-22T19:15:21.704+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) close
2023-07-22T19:15:22.008+0000 7fefa9083640  1 mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 58525.17 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
2023-07-22T19:15:22.008+0000 7fefa9083640  0 osd.4:5.OSDShard using op scheduler mClockScheduler
2023-07-22T19:15:22.008+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
2023-07-22T19:15:22.008+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open size 96636764160 (0x1680000000, 90 GiB) block_size 4096 (4 KiB) non-rotational device, discard supported
2023-07-22T19:15:22.008+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label
2023-07-22T19:15:22.008+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label got bdev(osd_uuid 32dbe31f-b837-4bf6-9505-1ffa6e2c7016, size 0x1680000000, btime 2023-07-22T19:14:50.886561+0000, desc main, 13 meta)
2023-07-22T19:15:22.008+0000 7fefa9083640  1 bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2023-07-22T19:15:22.008+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) close
2023-07-22T19:15:22.332+0000 7fefa9083640  1 mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 58525.17 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
2023-07-22T19:15:22.332+0000 7fefa9083640  0 osd.4:6.OSDShard using op scheduler mClockScheduler
2023-07-22T19:15:22.332+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
2023-07-22T19:15:22.332+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) open size 96636764160 (0x1680000000, 90 GiB) block_size 4096 (4 KiB) non-rotational device, discard supported
2023-07-22T19:15:22.332+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label
2023-07-22T19:15:22.332+0000 7fefa9083640 10 bluestore(/var/lib/ceph/osd/ceph-4/block) _read_bdev_label got bdev(osd_uuid 32dbe31f-b837-4bf6-9505-1ffa6e2c7016, size 0x1680000000, btime 2023-07-22T19:14:50.886561+0000, desc main, 13 meta)
2023-07-22T19:15:22.332+0000 7fefa9083640  1 bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2023-07-22T19:15:22.332+0000 7fefa9083640  1 bdev(0x55e9fe1c8e00 /var/lib/ceph/osd/ceph-4/block) close
2023-07-22T19:15:22.612+0000 7fefa9083640  1 mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 58525.17 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
2023-07-22T19:15:22.612+0000 7fefa9083640  0 osd.4:7.OSDShard using op scheduler mClockScheduler

From: /a/rfriedma-2023-07-22_17:48:57-rados:thrash-wip-rf-cephx-init-distro-default-smithi/7348140/, but it can be reproduced easily.
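The fix direction the report implies can be sketched as below. This is a hypothetical standalone sketch, not Ceph source: the names pick_scheduler, init_shards_buggy, and init_shards_fixed are invented for illustration. The idea is to roll the debug_random choice once per OSD and hand the same scheduler type to every shard, instead of re-rolling inside each OSDShard constructor.

```cpp
#include <cassert>
#include <random>
#include <string>
#include <vector>

// debug_random picks one of the two supported scheduler types.
std::string pick_scheduler(std::mt19937 &rng) {
  return (rng() % 2 == 0) ? "wpq" : "mclock";
}

// Buggy behavior (as seen in the log above): each shard rolls its own
// scheduler, so shards of the same OSD can disagree.
std::vector<std::string> init_shards_buggy(std::mt19937 &rng, int nshards) {
  std::vector<std::string> shards;
  for (int i = 0; i < nshards; ++i) {
    shards.push_back(pick_scheduler(rng));
  }
  return shards;
}

// Fixed behavior: roll once at OSD level, then give every shard the
// same scheduler type.
std::vector<std::string> init_shards_fixed(std::mt19937 &rng, int nshards) {
  const std::string type = pick_scheduler(rng);
  return std::vector<std::string>(nshards, type);
}
```

With the fixed variant, all shards of one OSD report the same scheduler in the startup log, while different OSDs can still land on different schedulers across test runs, which preserves the coverage debug_random was meant to provide.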


Related issues (2: 0 open, 2 closed)

Copied to RADOS - Backport #63873: quincy: All OSD shards should use the same scheduler type when osd_op_queue=debug_random. (Rejected, Sridhar Seshasayee)
Copied to RADOS - Backport #63874: reef: All OSD shards should use the same scheduler type when osd_op_queue=debug_random. (Resolved, Sridhar Seshasayee)