Bug #64065 » 64065-journalctl-ceph-mgr-6min.log

mgr log snippet, 14:00 - 14:06 - Mark Glines, 01/19/2024 10:46 AM
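For reading the snippet below, a minimal parsing sketch (not part of the attached log): it pulls the "N/M objects degraded (P%)" figures out of the pgmap lines so recovery progress over the 6-minute window can be tracked as numbers instead of by eye. The regex is written only against the line format visible in this attachment, and the default file name "mgr.log" is a placeholder for wherever the snippet is saved.

    #!/usr/bin/env python3
    # Illustrative sketch: summarize degraded-object progress from ceph-mgr pgmap
    # log lines ("... pgmap vNNN: ... 5133324/142024938 objects degraded (3.614%) ...").
    # Assumes journalctl-style "Mon DD HH:MM:SS" prefixes as in the attached snippet.
    import re
    import sys

    PGMAP_RE = re.compile(
        r'^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}).*pgmap v\d+:.*?'
        r'(?P<deg>\d+)/(?P<total>\d+) objects degraded \((?P<pct>[\d.]+)%\)'
    )

    def main(path):
        prev = None
        with open(path) as fh:
            for line in fh:
                m = PGMAP_RE.search(line)
                if not m:
                    continue
                deg = int(m.group('deg'))
                delta = '' if prev is None else f'  ({deg - prev:+d} since last sample)'
                print(f"{m.group('ts')}  degraded {deg}/{m.group('total')} "
                      f"({m.group('pct')}%){delta}")
                prev = deg

    if __name__ == '__main__':
        main(sys.argv[1] if len(sys.argv) > 1 else 'mgr.log')  # placeholder file name

Run against this snippet it prints one line per pgmap update (roughly every 2 seconds here), showing the degraded count falling by a few hundred objects per sample while the misplaced count stays flat.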

 
Jan 17 14:00:00 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34692] [GET] [200] [0.066s] [admin] [5.8K] /
Jan 17 14:00:00 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330448: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 245 KiB/s wr, 45 op/s; 5133324/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 253 MiB/s, 84 objects/s recovering
Jan 17 14:00:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Optimize plan auto_2024-01-17_19:00:01
Jan 17 14:00:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 17 14:00:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Some objects (0.036144) are degraded; try again later
Jan 17 14:00:02 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34700] [GET] [200] [0.025s] [admin] [5.8K] /
Jan 17 14:00:02 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330449: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 246 KiB/s wr, 46 op/s; 5133252/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 216 MiB/s, 71 objects/s recovering
Jan 17 14:00:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 17 14:00:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:00:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34710] [GET] [200] [0.038s] [admin] [5.8K] /
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:00:04] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330450: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 244 KiB/s wr, 48 op/s; 5133114/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 242 MiB/s, 78 objects/s recovering
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:00:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:00:06 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34724] [GET] [200] [0.073s] [admin] [5.8K] /
Jan 17 14:00:06 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330451: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 259 KiB/s wr, 53 op/s; 5132864/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 276 MiB/s, 92 objects/s recovering
Jan 17 14:00:08 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34728] [GET] [200] [0.029s] [admin] [5.8K] /
Jan 17 14:00:08 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330452: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 245 KiB/s wr, 52 op/s; 5132664/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 276 MiB/s, 91 objects/s recovering
Jan 17 14:00:10 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38782] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:00:10 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330453: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 253 KiB/s wr, 53 op/s; 5132417/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 279 MiB/s, 94 objects/s recovering
Jan 17 14:00:12 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38790] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:00:12 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330454: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 246 KiB/s wr, 52 op/s; 5132329/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 246 MiB/s, 82 objects/s recovering
Jan 17 14:00:14 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38806] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:00:14 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:00:14] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:00:14 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330455: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 237 KiB/s wr, 49 op/s; 5132189/142024938 objects degraded (3.614%); 11458/142024938 objects misplaced (0.008%); 261 MiB/s, 88 objects/s recovering
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38810] [GET] [200] [0.027s] [admin] [5.8K] /
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 27c4bada-d879-4e1e-8e21-dcd6cd0b1dcf does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 6466636a-1fb6-4968-844c-e366d2b688fb does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 5a443e66-05db-4a32-86fe-274b38ec8a77 does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 16b3d803-c8e3-42cf-bf7d-2686ddca563a does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 897cff34-e475-4b70-9ede-4d066e95798c does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev e30945e2-82cd-4f1d-aff2-86b34466c367 does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev bc6f6eac-17c2-4587-b206-ac7a455a81c1 does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev a973d1b2-448d-4fab-b993-d9db93fe2959 does not exist
Jan 17 14:00:16 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330456: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 244 KiB/s wr, 52 op/s; 5131958/142024938 objects degraded (3.613%); 11458/142024938 objects misplaced (0.008%); 280 MiB/s, 96 objects/s recovering
Jan 17 14:00:18 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38820] [GET] [200] [0.237s] [admin] [5.8K] /
Jan 17 14:00:18 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330457: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 243 KiB/s wr, 51 op/s; 5131727/142024938 objects degraded (3.613%); 11458/142024938 objects misplaced (0.008%); 284 MiB/s, 94 objects/s recovering
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] _maybe_adjust
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.19221546764548e-06 of space, bias 1.0, pg target 0.000476886187058192 quantized to 1 (current 1)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.meta' root_id -1 using 7.890654923820866e-05 of space, bias 4.0, pg target 0.12625047878113385 quantized to 16 (current 16)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.data' root_id -1 using 5.321202712097657e-10 of space, bias 1.0, pg target 2.128481084839063e-07 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.longterm' root_id -1 using 0.42808620994345037 of space, bias 1.0, pg target 53.51077624293129 quantized to 64 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.shortterm' root_id -1 using 0.03333907358785394 of space, bias 1.0, pg target 3.813990018450491 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd.ec124' root_id -1 using 0.001054975264257227 of space, bias 1.0, pg target 0.07292516514178082 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd' root_id -1 using 2.7771156889687268e-08 of space, bias 1.0, pg target 6.137425672620887e-06 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 6.385443254517189e-10 of space, bias 1.0, pg target 2.3519715987471645e-07 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.1927216272585943e-10 of space, bias 1.0, pg target 1.1759857993735823e-07 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:00:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.k8s' root_id -1 using 8.513924339356251e-10 of space, bias 1.0, pg target 69.0625 quantized to 64 (current 32)
Jan 17 14:00:20 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:44864] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:00:20 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330458: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 258 KiB/s wr, 53 op/s; 5131478/142024938 objects degraded (3.613%); 11458/142024938 objects misplaced (0.008%); 287 MiB/s, 98 objects/s recovering
Jan 17 14:00:22 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:44874] [GET] [200] [0.024s] [admin] [5.8K] /
Jan 17 14:00:22 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330459: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 260 KiB/s wr, 52 op/s; 5131406/142024938 objects degraded (3.613%); 11458/142024938 objects misplaced (0.008%); 251 MiB/s, 83 objects/s recovering
Jan 17 14:00:24 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:44878] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:00:24 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:00:24] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:00:24 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330460: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 263 KiB/s wr, 50 op/s; 5131275/142024938 objects degraded (3.613%); 11458/142024938 objects misplaced (0.008%); 263 MiB/s, 87 objects/s recovering
Jan 17 14:00:26 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:44892] [GET] [200] [0.025s] [admin] [5.8K] /
Jan 17 14:00:26 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330461: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 285 KiB/s wr, 55 op/s; 5131028/142024938 objects degraded (3.613%); 11458/142024938 objects misplaced (0.008%); 282 MiB/s, 96 objects/s recovering
Jan 17 14:00:28 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42602] [GET] [200] [0.025s] [admin] [5.8K] /
Jan 17 14:00:28 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330462: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 278 KiB/s wr, 53 op/s; 5130791/142024938 objects degraded (3.613%); 11458/142024938 objects misplaced (0.008%); 285 MiB/s, 96 objects/s recovering
Jan 17 14:00:30 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42604] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:00:30 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330463: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 286 KiB/s wr, 54 op/s; 5130531/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 282 MiB/s, 98 objects/s recovering
Jan 17 14:00:32 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42616] [GET] [200] [0.024s] [admin] [5.8K] /
Jan 17 14:00:32 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330464: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 357 KiB/s wr, 52 op/s; 5130439/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 247 MiB/s, 85 objects/s recovering
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:00:34] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42622] [GET] [200] [0.029s] [admin] [5.8K] /
Jan 17 14:00:34 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330465: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 362 KiB/s wr, 54 op/s; 5130292/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 263 MiB/s, 91 objects/s recovering
Jan 17 14:00:36 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42630] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:00:36 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330466: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 383 KiB/s wr, 58 op/s; 5130051/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 291 MiB/s, 101 objects/s recovering
Jan 17 14:00:37 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42692] [GET] [200] [0.154s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42676] [GET] [200] [0.315s] [admin] [966.0B] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42670] [GET] [200] [0.348s] [admin] [875.0B] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42658] [GET] [200] [0.436s] [admin] [857.0B] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42644] [GET] [200] [0.510s] [admin] [19.4K] /api/health/minimal
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42708] [GET] [200] [0.339s] [admin] [936.0B] /api/prometheus
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42704] [GET] [200] [0.398s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42728] [GET] [200] [0.212s] [admin] [133.0B] /api/health/get_cluster_capacity
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42718] [GET] [200] [0.173s] [admin] [73.0B] /api/osd/settings
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42754] [GET] [200] [0.359s] [admin] [954.0B] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42770] [GET] [200] [0.419s] [admin] [807.0B] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42734] [GET] [200] [0.370s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42740] [GET] [200] [0.476s] [admin] [14.4K] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42738] [GET] [200] [0.437s] [admin] [14.4K] /api/prometheus/data
Jan 17 14:00:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43186] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:00:39 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330467: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 360 KiB/s wr, 54 op/s; 5129861/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 290 MiB/s, 96 objects/s recovering
Jan 17 14:00:40 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43190] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:00:41 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330468: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 354 KiB/s wr, 52 op/s; 5129630/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 285 MiB/s, 96 objects/s recovering
Jan 17 14:00:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 17 14:00:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:00:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:00:43 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43200] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:00:43 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330469: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 329 KiB/s wr, 47 op/s; 5129574/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 239 MiB/s, 79 objects/s recovering
Jan 17 14:00:44 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:00:44] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:00:45 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43214] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:00:45 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330471: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 221 KiB/s wr, 43 op/s; 5129438/142024938 objects degraded (3.612%); 11458/142024938 objects misplaced (0.008%); 261 MiB/s, 85 objects/s recovering
Jan 17 14:00:47 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330472: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 213 KiB/s wr, 44 op/s; 5129182/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 258 MiB/s, 86 objects/s recovering
Jan 17 14:00:47 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43222] [GET] [200] [0.028s] [admin] [5.8K] /
Jan 17 14:00:49 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330473: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 221 KiB/s wr, 46 op/s; 5128954/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 268 MiB/s, 90 objects/s recovering
Jan 17 14:00:49 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34100] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO orchestrator] is orchestrator available: True,
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34108] [GET] [200] [0.218s] [admin] [133.0B] /api/health/get_cluster_capacity
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34136] [GET] [200] [0.098s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34118] [GET] [200] [0.306s] [admin] [73.0B] /api/osd/settings
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34104] [GET] [200] [0.401s] [admin] [19.4K] /api/health/minimal
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34124] [GET] [200] [0.287s] [admin] [936.0B] /api/prometheus
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34150] [GET] [200] [0.212s] [admin] [2.3K] /api/prometheus/data
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34158] [GET] [200] [0.165s] [admin] [1.6K] /api/prometheus/data
Jan 17 14:00:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34130] [GET] [200] [0.264s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330474: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 235 KiB/s wr, 49 op/s; 5128729/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 263 MiB/s, 90 objects/s recovering
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34228] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34214] [GET] [200] [0.182s] [admin] [2.2K] /api/prometheus/data
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34226] [GET] [200] [0.214s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34200] [GET] [200] [0.166s] [admin] [993.0B] /api/prometheus/data
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34174] [GET] [200] [0.137s] [admin] [1.5K] /api/prometheus/data
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34188] [GET] [200] [0.329s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:00:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34182] [GET] [200] [0.296s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:00:53 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330475: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 241 KiB/s wr, 51 op/s; 5128657/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 269 MiB/s, 91 objects/s recovering
Jan 17 14:00:53 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34242] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:00:54 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:00:54] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:00:55 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330476: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 227 KiB/s wr, 48 op/s; 5128526/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 249 MiB/s, 84 objects/s recovering
Jan 17 14:00:55 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34256] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:00:55 ceph02 ceph-mgr[1317]: [devicehealth INFO root] Check health
Jan 17 14:00:57 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330477: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 258 KiB/s wr, 53 op/s; 5128307/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 275 MiB/s, 93 objects/s recovering
Jan 17 14:00:57 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34266] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:00:59 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330478: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 271 KiB/s wr, 49 op/s; 5128087/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 272 MiB/s, 90 objects/s recovering
Jan 17 14:00:59 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59864] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:01:01 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330479: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 283 KiB/s wr, 51 op/s; 5127845/142024938 objects degraded (3.611%); 11458/142024938 objects misplaced (0.008%); 274 MiB/s, 91 objects/s recovering
Jan 17 14:01:01 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59876] [GET] [200] [0.015s] [admin] [5.8K] /
Jan 17 14:01:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Optimize plan auto_2024-01-17_19:01:01
Jan 17 14:01:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 17 14:01:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Some objects (0.036105) are degraded; try again later
Jan 17 14:01:03 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330480: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 275 KiB/s wr, 50 op/s; 5127765/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 247 MiB/s, 79 objects/s recovering
Jan 17 14:01:03 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59892] [GET] [200] [0.045s] [admin] [5.8K] /
Jan 17 14:01:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 17 14:01:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:01:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:01:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:01:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:01:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:01:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:01:04 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:01:04] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:01:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:01:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:01:05 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330481: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 275 KiB/s wr, 49 op/s; 5127640/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 261 MiB/s, 84 objects/s recovering
Jan 17 14:01:05 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59902] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:01:07 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330482: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 296 KiB/s wr, 54 op/s; 5127401/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 279 MiB/s, 93 objects/s recovering
Jan 17 14:01:07 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59916] [GET] [200] [0.026s] [admin] [5.8K] /
Jan 17 14:01:09 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330483: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 276 KiB/s wr, 50 op/s; 5127176/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 291 MiB/s, 94 objects/s recovering
Jan 17 14:01:09 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:49264] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:01:11 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330484: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 255 KiB/s wr, 52 op/s; 5126935/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 288 MiB/s, 95 objects/s recovering
Jan 17 14:01:11 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:49272] [GET] [200] [0.029s] [admin] [5.8K] /
Jan 17 14:01:13 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330485: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 246 KiB/s wr, 49 op/s; 5126847/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 250 MiB/s, 82 objects/s recovering
Jan 17 14:01:13 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:49280] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:01:14 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:01:14] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:01:15 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330486: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 274 KiB/s wr, 52 op/s; 5126554/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 296 MiB/s, 100 objects/s recovering
Jan 17 14:01:15 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:49282] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:01:17 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330487: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 278 KiB/s wr, 54 op/s; 5126474/142024938 objects degraded (3.610%); 11458/142024938 objects misplaced (0.008%); 281 MiB/s, 96 objects/s recovering
Jan 17 14:01:17 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:49292] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330488: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 253 KiB/s wr, 49 op/s; 5126260/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 283 MiB/s, 94 objects/s recovering
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38072] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] _maybe_adjust
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.19221546764548e-06 of space, bias 1.0, pg target 0.000476886187058192 quantized to 1 (current 1)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.meta' root_id -1 using 7.890654923820866e-05 of space, bias 4.0, pg target 0.12625047878113385 quantized to 16 (current 16)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.data' root_id -1 using 5.321202712097657e-10 of space, bias 1.0, pg target 2.128481084839063e-07 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.longterm' root_id -1 using 0.428102868766721 of space, bias 1.0, pg target 53.51285859584012 quantized to 64 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.shortterm' root_id -1 using 0.03334194905937551 of space, bias 1.0, pg target 3.814318972392558 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd.ec124' root_id -1 using 0.0010549802129757493 of space, bias 1.0, pg target 0.07292550722194867 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd' root_id -1 using 2.7771156889687268e-08 of space, bias 1.0, pg target 6.137425672620887e-06 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 6.385443254517189e-10 of space, bias 1.0, pg target 2.3519715987471645e-07 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.1927216272585943e-10 of space, bias 1.0, pg target 1.1759857993735823e-07 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:01:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.k8s' root_id -1 using 8.513924339356251e-10 of space, bias 1.0, pg target 69.0625 quantized to 64 (current 32)
Jan 17 14:01:21 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330489: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 262 KiB/s wr, 51 op/s; 5126004/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 279 MiB/s, 96 objects/s recovering
Jan 17 14:01:21 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38074] [GET] [200] [0.026s] [admin] [5.8K] /
Jan 17 14:01:23 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330490: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 255 KiB/s wr, 52 op/s; 5125944/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 242 MiB/s, 81 objects/s recovering
Jan 17 14:01:23 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38086] [GET] [200] [0.087s] [admin] [5.8K] /
Jan 17 14:01:24 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:01:24] "GET /metrics HTTP/1.1" 200 67925 "" "Prometheus/2.43.0"
Jan 17 14:01:25 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330491: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 286 KiB/s wr, 57 op/s; 5125648/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 291 MiB/s, 99 objects/s recovering
Jan 17 14:01:25 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38100] [GET] [200] [0.056s] [admin] [5.8K] /
Jan 17 14:01:27 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330492: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 270 KiB/s wr, 54 op/s; 5125568/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 243 MiB/s, 81 objects/s recovering
Jan 17 14:01:27 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38114] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:01:29 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330493: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 272 KiB/s wr, 53 op/s; 5125391/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 271 MiB/s, 89 objects/s recovering
Jan 17 14:01:29 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52532] [GET] [200] [0.031s] [admin] [5.8K] /
Jan 17 14:01:31 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330494: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 263 KiB/s wr, 51 op/s; 5125155/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 270 MiB/s, 92 objects/s recovering
Jan 17 14:01:31 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52546] [GET] [200] [0.027s] [admin] [5.8K] /
Jan 17 14:01:33 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330495: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 258 KiB/s wr, 46 op/s; 5125155/142024938 objects degraded (3.609%); 11458/142024938 objects misplaced (0.008%); 212 MiB/s, 71 objects/s recovering
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52560] [GET] [200] [0.104s] [admin] [5.8K] /
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:01:34 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:01:34] "GET /metrics HTTP/1.1" 200 67925 "" "Prometheus/2.43.0"
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330496: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 251 KiB/s wr, 41 op/s; 5124841/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 273 MiB/s, 92 objects/s recovering
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 7c8b5860-59f8-4758-971a-e4e6f98cff16 does not exist
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 00fd69d3-40be-4cc3-bbc7-6fd3803475e6 does not exist
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 7ffa596a-0cdb-4a11-912d-e0a450133faf does not exist
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev d6f981dc-fb23-4d4c-8313-415390b6424f does not exist
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev c3bf2f53-d339-49cf-8c36-47d4d2848331 does not exist
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 85142f45-f413-4c47-8d83-5949023f2e94 does not exist
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev a866cc99-b601-4913-a11c-9597b7918673 does not exist
Jan 17 14:01:35 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 731afaf7-9ae1-4b51-ac33-a7aded97550e does not exist
Jan 17 14:01:36 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52564] [GET] [200] [0.022s] [admin] [5.8K] /
Jan 17 14:01:37 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330497: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 206 KiB/s wr, 33 op/s; 5124781/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 216 MiB/s, 72 objects/s recovering
Jan 17 14:01:37 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52606] [GET] [200] [0.204s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52582] [GET] [200] [0.248s] [admin] [133.0B] /api/health/get_cluster_capacity
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52578] [GET] [200] [0.325s] [admin] [937.0B] /api/prometheus
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52592] [GET] [200] [0.307s] [admin] [73.0B] /api/osd/settings
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52574] [GET] [200] [0.533s] [admin] [19.4K] /api/health/minimal
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52672] [GET] [200] [0.229s] [admin] [936.0B] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52684] [GET] [200] [0.237s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52616] [GET] [200] [0.371s] [admin] [947.0B] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52618] [GET] [200] [0.370s] [admin] [848.0B] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52608] [GET] [200] [0.513s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52656] [GET] [200] [0.313s] [admin] [764.0B] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52622] [GET] [200] [0.227s] [admin] [846.0B] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52642] [GET] [200] [0.320s] [admin] [13.9K] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52626] [GET] [200] [0.366s] [admin] [13.9K] /api/prometheus/data
Jan 17 14:01:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34888] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:01:39 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330498: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 187 KiB/s wr, 31 op/s; 5124684/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 221 MiB/s, 73 objects/s recovering
Jan 17 14:01:40 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34890] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:01:41 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330499: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 185 KiB/s wr, 30 op/s; 5124396/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 243 MiB/s, 82 objects/s recovering
Jan 17 14:01:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 17 14:01:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:01:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:01:42 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34900] [GET] [200] [0.024s] [admin] [5.8K] /
Jan 17 14:01:43 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330500: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 217 KiB/s wr, 33 op/s; 5124396/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 187 MiB/s, 63 objects/s recovering
Jan 17 14:01:44 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34906] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:01:44 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:01:44] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:01:45 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330501: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 251 KiB/s wr, 42 op/s; 5124000/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 281 MiB/s, 95 objects/s recovering
Jan 17 14:01:46 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:34920] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:01:47 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330502: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 267 KiB/s wr, 45 op/s; 5123928/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 221 MiB/s, 75 objects/s recovering
Jan 17 14:01:48 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35356] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:01:49 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330503: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 293 KiB/s wr, 48 op/s; 5123811/142024938 objects degraded (3.608%); 11458/142024938 objects misplaced (0.008%); 238 MiB/s, 80 objects/s recovering
Jan 17 14:01:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35414] [GET] [200] [0.273s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:01:50 ceph02 ceph-mgr[1317]: [dashboard INFO orchestrator] is orchestrator available: True,
Jan 17 14:01:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35382] [GET] [200] [0.259s] [admin] [133.0B] /api/health/get_cluster_capacity
Jan 17 14:01:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35426] [GET] [200] [0.417s] [admin] [2.3K] /api/prometheus/data
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35396] [GET] [200] [0.539s] [admin] [938.0B] /api/prometheus
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35388] [GET] [200] [0.474s] [admin] [73.0B] /api/osd/settings
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35372] [GET] [200] [0.709s] [admin] [19.4K] /api/health/minimal
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35402] [GET] [200] [0.765s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35454] [GET] [200] [0.594s] [admin] [1.5K] /api/prometheus/data
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35490] [GET] [200] [0.394s] [admin] [2.2K] /api/prometheus/data
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35440] [GET] [200] [0.610s] [admin] [1.7K] /api/prometheus/data
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35518] [GET] [200] [0.139s] [admin] [5.8K] /
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35506] [GET] [200] [0.290s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35488] [GET] [200] [0.383s] [admin] [997.0B] /api/prometheus/data
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330504: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 318 KiB/s wr, 52 op/s; 5123503/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 286 MiB/s, 97 objects/s recovering
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35472] [GET] [200] [0.357s] [admin] [18.8K] /api/prometheus/data
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:01:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35456] [GET] [200] [0.480s] [admin] [18.8K] /api/prometheus/data
Jan 17 14:01:53 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330505: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 345 KiB/s wr, 58 op/s; 5123370/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 252 MiB/s, 84 objects/s recovering
Jan 17 14:01:53 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35534] [GET] [200] [0.025s] [admin] [5.8K] /
Jan 17 14:01:54 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:01:54] "GET /metrics HTTP/1.1" 200 67924 "" "Prometheus/2.43.0"
Jan 17 14:01:55 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330506: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 329 KiB/s wr, 56 op/s; 5123141/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 307 MiB/s, 103 objects/s recovering
Jan 17 14:01:55 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35540] [GET] [200] [0.029s] [admin] [5.8K] /
Jan 17 14:01:57 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330507: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 305 KiB/s wr, 53 op/s; 5123073/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 232 MiB/s, 77 objects/s recovering
Jan 17 14:01:57 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35554] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:01:59 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330508: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 296 KiB/s wr, 53 op/s; 5122924/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 252 MiB/s, 83 objects/s recovering
Jan 17 14:01:59 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37424] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:02:01 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330509: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 306 KiB/s wr, 57 op/s; 5122599/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 303 MiB/s, 100 objects/s recovering
Jan 17 14:02:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Optimize plan auto_2024-01-17_19:02:01
Jan 17 14:02:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 17 14:02:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Some objects (0.036068) are degraded; try again later
Jan 17 14:02:01 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37434] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:02:03 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330510: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 292 KiB/s wr, 55 op/s; 5122451/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 267 MiB/s, 87 objects/s recovering
Jan 17 14:02:03 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37446] [GET] [200] [0.206s] [admin] [5.8K] /
Jan 17 14:02:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 17 14:02:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:02:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:02:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:02:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:02:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:02:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:02:04 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:02:04] "GET /metrics HTTP/1.1" 200 67928 "" "Prometheus/2.43.0"
Jan 17 14:02:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:02:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:02:05 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330511: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 257 KiB/s wr, 50 op/s; 5122199/142024938 objects degraded (3.607%); 11458/142024938 objects misplaced (0.008%); 292 MiB/s, 97 objects/s recovering
Jan 17 14:02:05 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37462] [GET] [200] [0.029s] [admin] [5.8K] /
Jan 17 14:02:07 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330512: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 251 KiB/s wr, 51 op/s; 5122123/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 256 MiB/s, 84 objects/s recovering
Jan 17 14:02:07 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37468] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:02:09 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330513: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 233 KiB/s wr, 49 op/s; 5121984/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 276 MiB/s, 90 objects/s recovering
Jan 17 14:02:09 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43846] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:02:11 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330514: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 241 KiB/s wr, 51 op/s; 5121691/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 307 MiB/s, 102 objects/s recovering
Jan 17 14:02:11 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43860] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:02:13 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330515: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 217 KiB/s wr, 46 op/s; 5121554/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 262 MiB/s, 86 objects/s recovering
Jan 17 14:02:13 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43862] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:02:14 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:02:14] "GET /metrics HTTP/1.1" 200 67928 "" "Prometheus/2.43.0"
Jan 17 14:02:15 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330516: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 207 KiB/s wr, 44 op/s; 5121325/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 276 MiB/s, 93 objects/s recovering
Jan 17 14:02:15 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43870] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:02:17 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330517: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 208 KiB/s wr, 45 op/s; 5121265/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 234 MiB/s, 77 objects/s recovering
Jan 17 14:02:17 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:43874] [GET] [200] [0.022s] [admin] [5.8K] /
Jan 17 14:02:19 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330518: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 210 KiB/s wr, 44 op/s; 5121120/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 251 MiB/s, 83 objects/s recovering
Jan 17 14:02:19 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:50000] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:02:19 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] _maybe_adjust
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.19221546764548e-06 of space, bias 1.0, pg target 0.000476886187058192 quantized to 1 (current 1)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.meta' root_id -1 using 7.890654923820866e-05 of space, bias 4.0, pg target 0.12625047878113385 quantized to 16 (current 16)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.data' root_id -1 using 5.321202712097657e-10 of space, bias 1.0, pg target 2.128481084839063e-07 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.longterm' root_id -1 using 0.42811871536160967 of space, bias 1.0, pg target 53.514839420201206 quantized to 64 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.shortterm' root_id -1 using 0.033344764827002646 of space, bias 1.0, pg target 3.8146410962091024 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd.ec124' root_id -1 using 0.00105498537454238 of space, bias 1.0, pg target 0.07292586401524202 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd' root_id -1 using 2.7771156889687268e-08 of space, bias 1.0, pg target 6.137425672620887e-06 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 6.385443254517189e-10 of space, bias 1.0, pg target 2.3519715987471645e-07 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.1927216272585943e-10 of space, bias 1.0, pg target 1.1759857993735823e-07 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:02:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.k8s' root_id -1 using 8.513924339356251e-10 of space, bias 1.0, pg target 69.0625 quantized to 64 (current 32)
Jan 17 14:02:21 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330519: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 228 KiB/s wr, 48 op/s; 5120842/142024938 objects degraded (3.606%); 11458/142024938 objects misplaced (0.008%); 283 MiB/s, 95 objects/s recovering
Jan 17 14:02:21 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:50002] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:02:23 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330520: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 6.6 KiB/s rd, 218 KiB/s wr, 47 op/s; 5120703/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 247 MiB/s, 82 objects/s recovering
Jan 17 14:02:23 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:50010] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:02:24 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:02:24] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:02:25 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330521: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 6.6 KiB/s rd, 248 KiB/s wr, 50 op/s; 5120436/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 274 MiB/s, 92 objects/s recovering
Jan 17 14:02:25 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:50024] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:02:27 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330522: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 6.6 KiB/s rd, 273 KiB/s wr, 50 op/s; 5120368/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 242 MiB/s, 79 objects/s recovering
Jan 17 14:02:27 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:50038] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:02:29 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330523: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 6.6 KiB/s rd, 280 KiB/s wr, 48 op/s; 5120249/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 256 MiB/s, 84 objects/s recovering
Jan 17 14:02:29 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46590] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:02:31 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330524: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 6.6 KiB/s rd, 299 KiB/s wr, 53 op/s; 5119926/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 297 MiB/s, 99 objects/s recovering
Jan 17 14:02:32 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46598] [GET] [200] [0.022s] [admin] [5.8K] /
Jan 17 14:02:33 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330525: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 6.6 KiB/s rd, 288 KiB/s wr, 52 op/s; 5119793/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 260 MiB/s, 87 objects/s recovering
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46602] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:02:34] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:02:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:02:35 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330526: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 285 KiB/s wr, 49 op/s; 5119557/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 278 MiB/s, 95 objects/s recovering
Jan 17 14:02:36 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46618] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:02:37 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330527: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 251 KiB/s wr, 46 op/s; 5119465/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 236 MiB/s, 80 objects/s recovering
Jan 17 14:02:37 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46646] [GET] [200] [0.108s] [admin] [73.0B] /api/osd/settings
Jan 17 14:02:37 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46652] [GET] [200] [0.105s] [admin] [132.0B] /api/health/get_cluster_capacity
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46656] [GET] [200] [0.146s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46636] [GET] [200] [0.394s] [admin] [738.0B] /api/prometheus/data
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46622] [GET] [200] [0.499s] [admin] [19.4K] /api/health/minimal
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46704] [GET] [200] [0.223s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46720] [GET] [200] [0.140s] [admin] [5.8K] /
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46654] [GET] [200] [0.313s] [admin] [821.0B] /api/prometheus/data
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46672] [GET] [200] [0.198s] [admin] [949.0B] /api/prometheus/data
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46700] [GET] [200] [0.314s] [admin] [920.0B] /api/prometheus/data
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46662] [GET] [200] [0.275s] [admin] [939.0B] /api/prometheus
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46644] [GET] [200] [0.681s] [admin] [13.8K] /api/prometheus/data
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46682] [GET] [200] [0.275s] [admin] [848.0B] /api/prometheus/data
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46676] [GET] [200] [0.299s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:02:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46692] [GET] [200] [0.481s] [admin] [13.8K] /api/prometheus/data
Jan 17 14:02:39 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330528: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 220 KiB/s wr, 44 op/s; 5119358/142024938 objects degraded (3.605%); 11458/142024938 objects misplaced (0.008%); 245 MiB/s, 83 objects/s recovering
Jan 17 14:02:40 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:39378] [GET] [200] [0.031s] [admin] [5.8K] /
Jan 17 14:02:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 17 14:02:41 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330529: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 226 KiB/s wr, 44 op/s; 5119097/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 276 MiB/s, 94 objects/s recovering
Jan 17 14:02:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:02:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:02:42 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:39382] [GET] [200] [0.075s] [admin] [5.8K] /
Jan 17 14:02:43 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330530: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 201 KiB/s wr, 39 op/s; 5118978/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 229 MiB/s, 77 objects/s recovering
Jan 17 14:02:44 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:39386] [GET] [200] [0.056s] [admin] [5.8K] /
Jan 17 14:02:44 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:02:44] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:02:45 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330531: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 199 KiB/s wr, 36 op/s; 5118751/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 247 MiB/s, 85 objects/s recovering
Jan 17 14:02:46 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:39388] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:02:47 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330532: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 201 KiB/s wr, 38 op/s; 5118691/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 211 MiB/s, 71 objects/s recovering
Jan 17 14:02:48 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:39398] [GET] [200] [0.071s] [admin] [5.8K] /
Jan 17 14:02:49 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330533: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 213 KiB/s wr, 40 op/s; 5118558/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 224 MiB/s, 74 objects/s recovering
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 1185321b-d039-4fa5-9fa6-b244769929f2 does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 75ea2f9e-7a69-4b99-96e5-09cb9c836daf does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 51743ff7-4b79-45e9-a702-da4f4d65a997 does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 99ef359b-72d9-4149-b17f-aa9f44737f66 does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 347d0b66-5139-4437-ac9b-40bc994278a3 does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 2ea24edb-2e6f-4755-9d79-f7245197f4be does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 2a161d02-6e99-485c-b156-6b1337318deb does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev ebf108af-61ef-4979-a8fb-505f611769ec does not exist
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [dashboard INFO orchestrator] is orchestrator available: True,
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53310] [GET] [200] [0.178s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53218] [GET] [200] [0.410s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:02:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53282] [GET] [200] [0.199s] [admin] [73.0B] /api/osd/settings
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53280] [GET] [200] [0.190s] [admin] [134.0B] /api/health/get_cluster_capacity
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53242] [GET] [200] [0.503s] [admin] [1.5K] /api/prometheus/data
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53234] [GET] [200] [0.577s] [admin] [1.6K] /api/prometheus/data
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53196] [GET] [200] [0.736s] [admin] [19.4K] /api/health/minimal
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53300] [GET] [200] [0.503s] [admin] [2.3K] /api/prometheus/data
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53324] [GET] [200] [0.623s] [admin] [939.0B] /api/prometheus
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53296] [GET] [200] [0.566s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53210] [GET] [200] [0.869s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53328] [GET] [200] [0.128s] [admin] [5.8K] /
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53228] [GET] [200] [0.803s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53250] [GET] [200] [0.249s] [admin] [988.0B] /api/prometheus/data
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53266] [GET] [200] [0.161s] [admin] [2.2K] /api/prometheus/data
Jan 17 14:02:51 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330534: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 247 KiB/s wr, 47 op/s; 5118247/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 272 MiB/s, 92 objects/s recovering
Jan 17 14:02:53 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53334] [GET] [200] [0.027s] [admin] [5.8K] /
Jan 17 14:02:53 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330535: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 247 KiB/s wr, 50 op/s; 5118116/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 245 MiB/s, 81 objects/s recovering
Jan 17 14:02:54 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:02:54] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:02:55 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53338] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:02:55 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330536: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 244 KiB/s wr, 49 op/s; 5117883/142024938 objects degraded (3.604%); 11458/142024938 objects misplaced (0.008%); 271 MiB/s, 91 objects/s recovering
Jan 17 14:02:57 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53346] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:02:57 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330537: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 232 KiB/s wr, 47 op/s; 5117823/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 236 MiB/s, 77 objects/s recovering
Jan 17 14:02:59 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:54630] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:02:59 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330538: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 260 KiB/s wr, 44 op/s; 5117711/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 250 MiB/s, 81 objects/s recovering
Jan 17 14:03:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Optimize plan auto_2024-01-17_19:03:01
Jan 17 14:03:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 17 14:03:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Some objects (0.036034) are degraded; try again later
Jan 17 14:03:01 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:54638] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:03:01 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330539: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 267 KiB/s wr, 47 op/s; 5117462/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 276 MiB/s, 91 objects/s recovering
Jan 17 14:03:03 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330540: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 251 KiB/s wr, 44 op/s; 5117312/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 238 MiB/s, 77 objects/s recovering
Jan 17 14:03:03 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:54646] [GET] [200] [0.085s] [admin] [5.8K] /
Jan 17 14:03:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 17 14:03:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:03:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:03:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:03:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:03:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:03:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:03:04 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:03:04] "GET /metrics HTTP/1.1" 200 67933 "" "Prometheus/2.43.0"
Jan 17 14:03:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:03:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:03:05 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330541: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 235 KiB/s wr, 40 op/s; 5117058/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 259 MiB/s, 87 objects/s recovering
Jan 17 14:03:05 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:54658] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:03:07 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330542: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 267 KiB/s wr, 49 op/s; 5116974/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 223 MiB/s, 75 objects/s recovering
Jan 17 14:03:07 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:54662] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:03:09 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330543: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 264 KiB/s wr, 47 op/s; 5116834/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 243 MiB/s, 81 objects/s recovering
Jan 17 14:03:09 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42596] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:03:11 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330544: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 263 KiB/s wr, 52 op/s; 5116508/142024938 objects degraded (3.603%); 11458/142024938 objects misplaced (0.008%); 290 MiB/s, 99 objects/s recovering
Jan 17 14:03:11 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42604] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:03:13 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330545: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 254 KiB/s wr, 51 op/s; 5116377/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 265 MiB/s, 89 objects/s recovering
Jan 17 14:03:13 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42620] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:03:14 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:03:14] "GET /metrics HTTP/1.1" 200 67933 "" "Prometheus/2.43.0"
Jan 17 14:03:15 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330546: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 244 KiB/s wr, 49 op/s; 5116114/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 288 MiB/s, 99 objects/s recovering
Jan 17 14:03:15 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42622] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:03:17 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330547: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 268 KiB/s wr, 55 op/s; 5116042/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 252 MiB/s, 84 objects/s recovering
Jan 17 14:03:17 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:42624] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:03:19 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330548: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 233 KiB/s wr, 46 op/s; 5115900/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 269 MiB/s, 89 objects/s recovering
Jan 17 14:03:19 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52370] [GET] [200] [0.026s] [admin] [5.8K] /
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] _maybe_adjust
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.19221546764548e-06 of space, bias 1.0, pg target 0.000476886187058192 quantized to 1 (current 1)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.meta' root_id -1 using 7.890654923820866e-05 of space, bias 4.0, pg target 0.12625047878113385 quantized to 16 (current 16)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.data' root_id -1 using 5.321202712097657e-10 of space, bias 1.0, pg target 2.128481084839063e-07 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.longterm' root_id -1 using 0.42813454769567505 of space, bias 1.0, pg target 53.51681846195938 quantized to 64 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.shortterm' root_id -1 using 0.03334741430704502 of space, bias 1.0, pg target 3.8149441967259508 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd.ec124' root_id -1 using 0.0010549890993842785 of space, bias 1.0, pg target 0.07292612149493825 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd' root_id -1 using 2.7771156889687268e-08 of space, bias 1.0, pg target 6.137425672620887e-06 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 6.385443254517189e-10 of space, bias 1.0, pg target 2.3519715987471645e-07 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.1927216272585943e-10 of space, bias 1.0, pg target 1.1759857993735823e-07 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:03:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.k8s' root_id -1 using 8.513924339356251e-10 of space, bias 1.0, pg target 69.0625 quantized to 64 (current 32)
Jan 17 14:03:21 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330549: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 269 KiB/s wr, 54 op/s; 5115552/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 315 MiB/s, 106 objects/s recovering
Jan 17 14:03:21 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52372] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:03:23 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330550: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 251 KiB/s wr, 53 op/s; 5115415/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 273 MiB/s, 90 objects/s recovering
Jan 17 14:03:23 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52376] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:03:24 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:03:24] "GET /metrics HTTP/1.1" 200 67927 "" "Prometheus/2.43.0"
Jan 17 14:03:25 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330551: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 251 KiB/s wr, 52 op/s; 5115173/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 296 MiB/s, 99 objects/s recovering
Jan 17 14:03:25 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52386] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:03:27 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330552: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 306 KiB/s wr, 58 op/s; 5115093/142024938 objects degraded (3.602%); 11458/142024938 objects misplaced (0.008%); 257 MiB/s, 84 objects/s recovering
Jan 17 14:03:27 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52392] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:03:29 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330553: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 284 KiB/s wr, 51 op/s; 5114948/142024938 objects degraded (3.601%); 11458/142024938 objects misplaced (0.008%); 277 MiB/s, 90 objects/s recovering
Jan 17 14:03:30 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53614] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:03:31 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330554: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 307 KiB/s wr, 57 op/s; 5114632/142024938 objects degraded (3.601%); 11458/142024938 objects misplaced (0.008%); 319 MiB/s, 104 objects/s recovering
Jan 17 14:03:32 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53618] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:03:33 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330555: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 286 KiB/s wr, 54 op/s; 5114487/142024938 objects degraded (3.601%); 11458/142024938 objects misplaced (0.008%); 272 MiB/s, 88 objects/s recovering
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53624] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:03:34] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:03:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:03:35 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330556: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 274 KiB/s wr, 49 op/s; 5114249/142024938 objects degraded (3.601%); 11458/142024938 objects misplaced (0.008%); 292 MiB/s, 96 objects/s recovering
Jan 17 14:03:36 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53626] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:03:37 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330557: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 287 KiB/s wr, 54 op/s; 5114169/142024938 objects degraded (3.601%); 11458/142024938 objects misplaced (0.008%); 254 MiB/s, 83 objects/s recovering
Jan 17 14:03:37 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53654] [GET] [200] [0.100s] [admin] [133.0B] /api/health/get_cluster_capacity
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53688] [GET] [200] [0.125s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53652] [GET] [200] [0.350s] [admin] [936.0B] /api/prometheus
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53664] [GET] [200] [0.355s] [admin] [827.0B] /api/prometheus/data
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53636] [GET] [200] [0.463s] [admin] [19.4K] /api/health/minimal
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53702] [GET] [200] [0.326s] [admin] [73.0B] /api/osd/settings
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53680] [GET] [200] [0.374s] [admin] [819.0B] /api/prometheus/data
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53784] [GET] [200] [0.287s] [admin] [5.8K] /
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53712] [GET] [200] [0.304s] [admin] [951.0B] /api/prometheus/data
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53750] [GET] [200] [0.448s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53728] [GET] [200] [0.317s] [admin] [902.0B] /api/prometheus/data
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53740] [GET] [200] [0.311s] [admin] [789.0B] /api/prometheus/data
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53764] [GET] [200] [0.489s] [admin] [13.8K] /api/prometheus/data
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53720] [GET] [200] [0.364s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:03:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:53774] [GET] [200] [0.507s] [admin] [13.8K] /api/prometheus/data
Jan 17 14:03:39 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330558: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 220 KiB/s wr, 44 op/s; 5114035/142024938 objects degraded (3.601%); 11458/142024938 objects misplaced (0.008%); 267 MiB/s, 87 objects/s recovering
Jan 17 14:03:40 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38602] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:03:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 17 14:03:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:03:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:03:41 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330559: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 235 KiB/s wr, 50 op/s; 5113713/142024938 objects degraded (3.601%); 11458/142024938 objects misplaced (0.008%); 307 MiB/s, 102 objects/s recovering
Jan 17 14:03:42 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38618] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:03:43 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330560: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 214 KiB/s wr, 47 op/s; 5113562/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 266 MiB/s, 88 objects/s recovering
Jan 17 14:03:44 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38628] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:03:44 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:03:44] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:03:45 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330561: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 202 KiB/s wr, 44 op/s; 5113355/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 278 MiB/s, 93 objects/s recovering
Jan 17 14:03:46 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38638] [GET] [200] [0.025s] [admin] [5.8K] /
Jan 17 14:03:47 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330562: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 222 KiB/s wr, 47 op/s; 5113291/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 239 MiB/s, 79 objects/s recovering
Jan 17 14:03:48 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:38642] [GET] [200] [0.090s] [admin] [5.8K] /
Jan 17 14:03:49 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330563: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 187 KiB/s wr, 37 op/s; 5113170/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 249 MiB/s, 82 objects/s recovering
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51952] [GET] [200] [0.425s] [admin] [1.5K] /api/prometheus/data
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51982] [GET] [200] [0.365s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51974] [GET] [200] [0.456s] [admin] [989.0B] /api/prometheus/data
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52018] [GET] [200] [0.089s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51948] [GET] [200] [0.658s] [admin] [19.4K] /api/health/minimal
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52028] [GET] [200] [0.384s] [admin] [2.3K] /api/prometheus/data
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51976] [GET] [200] [0.652s] [admin] [2.2K] /api/prometheus/data
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51998] [GET] [200] [0.182s] [admin] [133.0B] /api/health/get_cluster_capacity
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52024] [GET] [200] [0.438s] [admin] [1.7K] /api/prometheus/data
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52038] [GET] [200] [0.187s] [admin] [5.8K] /
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52026] [GET] [200] [0.479s] [admin] [937.0B] /api/prometheus
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51954] [GET] [200] [0.830s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51960] [GET] [200] [0.804s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:51990] [GET] [200] [0.086s] [admin] [73.0B] /api/osd/settings
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52004] [GET] [200] [0.319s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:03:51 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330564: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 225 KiB/s wr, 48 op/s; 5112847/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 292 MiB/s, 98 objects/s recovering
Jan 17 14:03:53 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52052] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:03:53 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330565: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 204 KiB/s wr, 42 op/s; 5112710/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 251 MiB/s, 82 objects/s recovering
Jan 17 14:03:54 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:03:54] "GET /metrics HTTP/1.1" 200 67932 "" "Prometheus/2.43.0"
Jan 17 14:03:55 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52058] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:03:55 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330566: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 218 KiB/s wr, 44 op/s; 5112422/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 284 MiB/s, 94 objects/s recovering
Jan 17 14:03:57 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52070] [GET] [200] [0.031s] [admin] [5.8K] /
Jan 17 14:03:57 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330567: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 234 KiB/s wr, 45 op/s; 5112422/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 235 MiB/s, 77 objects/s recovering
Jan 17 14:03:59 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59360] [GET] [200] [0.085s] [admin] [5.8K] /
Jan 17 14:03:59 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330568: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 205 KiB/s wr, 41 op/s; 5112225/142024938 objects degraded (3.600%); 11458/142024938 objects misplaced (0.008%); 268 MiB/s, 89 objects/s recovering
Jan 17 14:04:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Optimize plan auto_2024-01-17_19:04:01
Jan 17 14:04:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 17 14:04:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Some objects (0.035995) are degraded; try again later
Jan 17 14:04:01 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59364] [GET] [200] [0.029s] [admin] [5.8K] /
Jan 17 14:04:01 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330569: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 317 KiB/s wr, 59 op/s; 5111982/142024938 objects degraded (3.599%); 11458/142024938 objects misplaced (0.008%); 292 MiB/s, 99 objects/s recovering
Jan 17 14:04:03 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59374] [GET] [200] [0.116s] [admin] [5.8K] /
Jan 17 14:04:03 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330570: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 302 KiB/s wr, 53 op/s; 5111838/142024938 objects degraded (3.599%); 11458/142024938 objects misplaced (0.008%); 252 MiB/s, 84 objects/s recovering
Jan 17 14:04:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 17 14:04:03 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:04:04] "GET /metrics HTTP/1.1" 200 67932 "" "Prometheus/2.43.0"
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:04:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:04:05 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59376] [GET] [200] [0.076s] [admin] [5.8K] /
Jan 17 14:04:05 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330571: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 356 KiB/s wr, 62 op/s; 5111525/142024938 objects degraded (3.599%); 11458/142024938 objects misplaced (0.008%); 290 MiB/s, 97 objects/s recovering
Jan 17 14:04:07 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59380] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:07 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330572: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 346 KiB/s wr, 59 op/s; 5111525/142024938 objects degraded (3.599%); 11458/142024938 objects misplaced (0.008%); 219 MiB/s, 73 objects/s recovering
Jan 17 14:04:09 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:55636] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:04:09 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330573: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 338 KiB/s wr, 60 op/s; 5111320/142024938 objects degraded (3.599%); 11458/142024938 objects misplaced (0.008%); 271 MiB/s, 90 objects/s recovering
Jan 17 14:04:11 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:55648] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:11 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330574: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 360 KiB/s wr, 67 op/s; 5111042/142024938 objects degraded (3.599%); 11458/142024938 objects misplaced (0.008%); 281 MiB/s, 96 objects/s recovering
Jan 17 14:04:13 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:55664] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:04:13 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330575: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 289 KiB/s wr, 58 op/s; 5110818/142024938 objects degraded (3.599%); 11458/142024938 objects misplaced (0.008%); 286 MiB/s, 95 objects/s recovering
Jan 17 14:04:14 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:04:14] "GET /metrics HTTP/1.1" 200 67932 "" "Prometheus/2.43.0"
Jan 17 14:04:15 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:55676] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:04:15 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330576: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 294 KiB/s wr, 58 op/s; 5110555/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 309 MiB/s, 105 objects/s recovering
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 6215a46a-daf3-4847-85e7-faf903bf7a9f does not exist
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev a6796fd4-29ea-485d-8d39-9646613b57b7 does not exist
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 8acc171d-d28b-427d-a868-44dfb314c371 does not exist
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 972d53e4-dacc-468d-80bc-66a7f9c056ee does not exist
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 8aa3a904-c0de-4aa8-83f8-22ff6edc99fe does not exist
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 2d1a7c04-b939-462c-a2ce-4677d9f64201 does not exist
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 2f3d05ce-1caa-4c6d-983b-458ce16783f3 does not exist
Jan 17 14:04:16 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 48b9318b-c014-4162-8508-982cd0c90b19 does not exist
Jan 17 14:04:17 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:55686] [GET] [200] [0.034s] [admin] [5.8K] /
Jan 17 14:04:18 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330577: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 262 KiB/s wr, 50 op/s; 5110555/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 234 MiB/s, 80 objects/s recovering
Jan 17 14:04:19 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52166] [GET] [200] [0.022s] [admin] [5.8K] /
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330578: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 263 KiB/s wr, 50 op/s; 5110326/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 292 MiB/s, 99 objects/s recovering
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] _maybe_adjust
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.19221546764548e-06 of space, bias 1.0, pg target 0.000476886187058192 quantized to 1 (current 1)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.meta' root_id -1 using 7.890654923820866e-05 of space, bias 4.0, pg target 0.12625047878113385 quantized to 16 (current 16)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.data' root_id -1 using 5.321202712097657e-10 of space, bias 1.0, pg target 2.128481084839063e-07 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.longterm' root_id -1 using 0.4281514108531298 of space, bias 1.0, pg target 53.518926356641224 quantized to 64 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.shortterm' root_id -1 using 0.03335031712954853 of space, bias 1.0, pg target 3.8152762796203517 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd.ec124' root_id -1 using 0.0010549942077388822 of space, bias 1.0, pg target 0.07292647460995023 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd' root_id -1 using 2.7771156889687268e-08 of space, bias 1.0, pg target 6.137425672620887e-06 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 6.385443254517189e-10 of space, bias 1.0, pg target 2.3519715987471645e-07 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.1927216272585943e-10 of space, bias 1.0, pg target 1.1759857993735823e-07 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:04:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.k8s' root_id -1 using 8.513924339356251e-10 of space, bias 1.0, pg target 69.0625 quantized to 64 (current 32)
Jan 17 14:04:21 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52172] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:04:22 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330579: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 265 KiB/s wr, 52 op/s; 5110068/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 300 MiB/s, 103 objects/s recovering
Jan 17 14:04:23 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52184] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:24 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330580: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 271 KiB/s wr, 50 op/s; 5109877/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 290 MiB/s, 96 objects/s recovering
Jan 17 14:04:24 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:04:24] "GET /metrics HTTP/1.1" 200 67929 "" "Prometheus/2.43.0"
Jan 17 14:04:25 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52188] [GET] [200] [0.025s] [admin] [5.8K] /
Jan 17 14:04:26 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330581: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 249 KiB/s wr, 47 op/s; 5109620/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 291 MiB/s, 99 objects/s recovering
Jan 17 14:04:27 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:52190] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:04:28 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330582: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 233 KiB/s wr, 45 op/s; 5109620/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 229 MiB/s, 77 objects/s recovering
Jan 17 14:04:30 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:57960] [GET] [200] [0.023s] [admin] [5.8K] /
Jan 17 14:04:30 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330583: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 204 KiB/s wr, 42 op/s; 5109438/142024938 objects degraded (3.598%); 11458/142024938 objects misplaced (0.008%); 277 MiB/s, 92 objects/s recovering
Jan 17 14:04:32 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:57974] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:32 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330584: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 200 KiB/s wr, 43 op/s; 5109214/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 267 MiB/s, 92 objects/s recovering
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330585: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 191 KiB/s wr, 38 op/s; 5109027/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 258 MiB/s, 86 objects/s recovering
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:57982] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:04:34] "GET /metrics HTTP/1.1" 200 67929 "" "Prometheus/2.43.0"
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:04:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:04:36 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330586: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 157 KiB/s wr, 34 op/s; 5108865/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 244 MiB/s, 84 objects/s recovering
Jan 17 14:04:36 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:57986] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:37 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58014] [GET] [200] [0.207s] [admin] [130.0B] /api/health/get_cluster_capacity
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330587: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 151 KiB/s wr, 32 op/s; 5108809/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 199 MiB/s, 67 objects/s recovering
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:57998] [GET] [200] [0.350s] [admin] [810.0B] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO orchestrator] is orchestrator available: True,
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58090] [GET] [200] [0.106s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58050] [GET] [200] [0.295s] [admin] [73.0B] /api/osd/settings
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58018] [GET] [200] [0.338s] [admin] [937.0B] /api/prometheus
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58112] [GET] [200] [0.448s] [admin] [831.0B] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58124] [GET] [200] [0.272s] [admin] [5.8K] /
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:57992] [GET] [200] [0.853s] [admin] [19.4K] /api/health/minimal
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58098] [GET] [200] [0.514s] [admin] [957.0B] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58034] [GET] [200] [0.701s] [admin] [13.8K] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58060] [GET] [200] [0.273s] [admin] [897.0B] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58024] [GET] [200] [0.700s] [admin] [13.8K] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58062] [GET] [200] [0.369s] [admin] [844.0B] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58066] [GET] [200] [0.374s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:04:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:58076] [GET] [200] [0.434s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:04:40 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330588: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 162 KiB/s wr, 33 op/s; 5108601/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 254 MiB/s, 84 objects/s recovering
Jan 17 14:04:40 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59662] [GET] [200] [0.021s] [admin] [5.8K] /
Jan 17 14:04:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 17 14:04:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:04:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:04:42 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330589: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 233 KiB/s wr, 46 op/s; 5108321/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 274 MiB/s, 92 objects/s recovering
Jan 17 14:04:42 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59670] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:44 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330590: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 292 KiB/s wr, 54 op/s; 5108128/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 278 MiB/s, 90 objects/s recovering
Jan 17 14:04:44 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59674] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:04:44 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:04:44] "GET /metrics HTTP/1.1" 200 67929 "" "Prometheus/2.43.0"
Jan 17 14:04:46 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330591: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 291 KiB/s wr, 55 op/s; 5108128/142024938 objects degraded (3.597%); 11458/142024938 objects misplaced (0.008%); 227 MiB/s, 74 objects/s recovering
Jan 17 14:04:46 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:59684] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:04:48 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330592: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 359 KiB/s wr, 60 op/s; 5107872/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 248 MiB/s, 82 objects/s recovering
Jan 17 14:04:48 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36140] [GET] [200] [0.015s] [admin] [5.8K] /
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330593: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 355 KiB/s wr, 59 op/s; 5107659/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 285 MiB/s, 95 objects/s recovering
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36164] [GET] [200] [0.199s] [admin] [73.0B] /api/osd/settings
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36180] [GET] [200] [0.211s] [admin] [133.0B] /api/health/get_cluster_capacity
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36210] [GET] [200] [0.127s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36188] [GET] [200] [0.385s] [admin] [935.0B] /api/prometheus
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36222] [GET] [200] [0.245s] [admin] [2.3K] /api/prometheus/data
Jan 17 14:04:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36150] [GET] [200] [0.570s] [admin] [19.4K] /api/health/minimal
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36294] [GET] [200] [0.137s] [admin] [5.8K] /
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36196] [GET] [200] [0.501s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36272] [GET] [200] [0.404s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36256] [GET] [200] [0.386s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36244] [GET] [200] [0.416s] [admin] [1.5K] /api/prometheus/data
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36228] [GET] [200] [0.320s] [admin] [1.6K] /api/prometheus/data
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36280] [GET] [200] [0.103s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36278] [GET] [200] [0.101s] [admin] [2.2K] /api/prometheus/data
Jan 17 14:04:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36274] [GET] [200] [0.073s] [admin] [1002.0B] /api/prometheus/data
Jan 17 14:04:52 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330594: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 355 KiB/s wr, 62 op/s; 5107411/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 285 MiB/s, 98 objects/s recovering
Jan 17 14:04:53 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36306] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:04:54 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330595: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 304 KiB/s wr, 54 op/s; 5107195/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 272 MiB/s, 92 objects/s recovering
Jan 17 14:04:54 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:04:54] "GET /metrics HTTP/1.1" 200 67923 "" "Prometheus/2.43.0"
Jan 17 14:04:55 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36322] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:56 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330596: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 245 KiB/s wr, 45 op/s; 5107195/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 220 MiB/s, 76 objects/s recovering
Jan 17 14:04:57 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:36330] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:04:58 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330597: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 256 KiB/s wr, 48 op/s; 5106970/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 275 MiB/s, 95 objects/s recovering
Jan 17 14:04:59 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37422] [GET] [200] [0.022s] [admin] [5.8K] /
Jan 17 14:05:00 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330598: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 199 KiB/s wr, 42 op/s; 5106761/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 273 MiB/s, 92 objects/s recovering
Jan 17 14:05:01 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37424] [GET] [200] [0.030s] [admin] [5.8K] /
Jan 17 14:05:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Optimize plan auto_2024-01-17_19:05:01
Jan 17 14:05:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 17 14:05:01 ceph02 ceph-mgr[1317]: [balancer INFO root] Some objects (0.035957) are degraded; try again later
Jan 17 14:05:02 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330599: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 237 KiB/s wr, 50 op/s; 5106525/142024938 objects degraded (3.596%); 11458/142024938 objects misplaced (0.008%); 275 MiB/s, 94 objects/s recovering
Jan 17 14:05:03 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37432] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330600: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 236 KiB/s wr, 48 op/s; 5106299/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 278 MiB/s, 92 objects/s recovering
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:05:04] "GET /metrics HTTP/1.1" 200 67924 "" "Prometheus/2.43.0"
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:05:04 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:05:05 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37436] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:05:06 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330601: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 218 KiB/s wr, 44 op/s; 5106299/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 224 MiB/s, 74 objects/s recovering
Jan 17 14:05:07 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37452] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:05:08 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330602: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 239 KiB/s wr, 48 op/s; 5106065/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 281 MiB/s, 94 objects/s recovering
Jan 17 14:05:09 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37546] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:05:10 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330603: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 231 KiB/s wr, 45 op/s; 5105854/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 282 MiB/s, 93 objects/s recovering
Jan 17 14:05:11 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37550] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:05:12 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330604: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 257 KiB/s wr, 52 op/s; 5105574/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 291 MiB/s, 98 objects/s recovering
Jan 17 14:05:13 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37566] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:05:14 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330605: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 243 KiB/s wr, 50 op/s; 5105349/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 296 MiB/s, 97 objects/s recovering
Jan 17 14:05:14 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:05:14] "GET /metrics HTTP/1.1" 200 67924 "" "Prometheus/2.43.0"
Jan 17 14:05:15 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37574] [GET] [200] [0.016s] [admin] [5.8K] /
Jan 17 14:05:16 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330606: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 227 KiB/s wr, 48 op/s; 5105349/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 238 MiB/s, 78 objects/s recovering
Jan 17 14:05:17 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:37590] [GET] [200] [0.017s] [admin] [5.8K] /
Jan 17 14:05:18 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330607: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 263 KiB/s wr, 52 op/s; 5105093/142024938 objects degraded (3.595%); 11458/142024938 objects misplaced (0.008%); 295 MiB/s, 100 objects/s recovering
Jan 17 14:05:19 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35834] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330608: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 240 KiB/s wr, 47 op/s; 5104899/142024938 objects degraded (3.594%); 11458/142024938 objects misplaced (0.008%); 289 MiB/s, 96 objects/s recovering
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] _maybe_adjust
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.19221546764548e-06 of space, bias 1.0, pg target 0.000476886187058192 quantized to 1 (current 1)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.meta' root_id -1 using 7.890654923820866e-05 of space, bias 4.0, pg target 0.12625047878113385 quantized to 16 (current 16)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.data' root_id -1 using 5.321202712097657e-10 of space, bias 1.0, pg target 2.128481084839063e-07 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.longterm' root_id -1 using 0.4281688102281819 of space, bias 1.0, pg target 53.521101278522735 quantized to 64 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.shortterm' root_id -1 using 0.033353671136829986 of space, bias 1.0, pg target 3.8156599780533504 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd.ec124' root_id -1 using 0.0010549974004605094 of space, bias 1.0, pg target 0.07292669530683271 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'rbd' root_id -1 using 2.7771156889687268e-08 of space, bias 1.0, pg target 6.137425672620887e-06 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 6.385443254517189e-10 of space, bias 1.0, pg target 2.3519715987471645e-07 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 3.1927216272585943e-10 of space, bias 1.0, pg target 1.1759857993735823e-07 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 32)
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 76975079161856
Jan 17 14:05:20 ceph02 ceph-mgr[1317]: [pg_autoscaler INFO root] Pool 'cephfs.infinoid.oi.k8s' root_id -1 using 8.513924339356251e-10 of space, bias 1.0, pg target 69.0625 quantized to 64 (current 32)
Jan 17 14:05:21 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35836] [GET] [200] [0.037s] [admin] [5.8K] /
Jan 17 14:05:22 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330609: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 236 KiB/s wr, 46 op/s; 5104683/142024938 objects degraded (3.594%); 11458/142024938 objects misplaced (0.008%); 279 MiB/s, 97 objects/s recovering
Jan 17 14:05:23 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35842] [GET] [200] [0.055s] [admin] [5.8K] /
Jan 17 14:05:24 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330610: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 218 KiB/s wr, 41 op/s; 5104479/142024938 objects degraded (3.594%); 11458/142024938 objects misplaced (0.008%); 265 MiB/s, 90 objects/s recovering
Jan 17 14:05:24 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:05:24] "GET /metrics HTTP/1.1" 200 67926 "" "Prometheus/2.43.0"
Jan 17 14:05:25 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35846] [GET] [200] [0.025s] [admin] [5.8K] /
Jan 17 14:05:26 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330611: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 193 KiB/s wr, 34 op/s; 5104479/142024938 objects degraded (3.594%); 11458/142024938 objects misplaced (0.008%); 204 MiB/s, 72 objects/s recovering
Jan 17 14:05:27 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35850] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:05:28 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330612: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 216 KiB/s wr, 38 op/s; 5104247/142024938 objects degraded (3.594%); 11458/142024938 objects misplaced (0.008%); 256 MiB/s, 91 objects/s recovering
Jan 17 14:05:29 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32880] [GET] [200] [0.023s] [admin] [5.8K] /
Jan 17 14:05:30 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330613: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 195 KiB/s wr, 36 op/s; 5104040/142024938 objects degraded (3.594%); 11458/142024938 objects misplaced (0.008%); 253 MiB/s, 87 objects/s recovering
Jan 17 14:05:31 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32892] [GET] [200] [0.042s] [admin] [5.8K] /
Jan 17 14:05:32 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330614: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 328 KiB/s wr, 43 op/s; 5103796/142024938 objects degraded (3.594%); 11458/142024938 objects misplaced (0.008%); 255 MiB/s, 91 objects/s recovering
Jan 17 14:05:33 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32906] [GET] [200] [0.084s] [admin] [5.8K] /
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330615: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 349 KiB/s wr, 46 op/s; 5103599/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 260 MiB/s, 89 objects/s recovering
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:05:34] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] scanning for idle connections..
Jan 17 14:05:34 ceph02 ceph-mgr[1317]: [volumes INFO mgr_util] cleaning up connections: []
Jan 17 14:05:35 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32922] [GET] [200] [0.029s] [admin] [5.8K] /
Jan 17 14:05:36 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330616: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 359 KiB/s wr, 49 op/s; 5103599/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 211 MiB/s, 73 objects/s recovering
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev c73dbb97-1f5c-4ffa-9e7d-c137df98cd72 does not exist
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 3da9ddb0-20f1-4d98-8efe-0b98a2ef54d5 does not exist
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 8fa1262a-584b-4ae4-a01a-e7a32b6bfc11 does not exist
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev c5fe8f65-3e89-485e-aba8-c4c12a5dd574 does not exist
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 69c15493-2537-443c-8edb-b7cd884fe6c2 does not exist
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev b72ca0d2-6697-4e6a-a854-46aa39c34e82 does not exist
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 39f48d1e-3286-4a8b-9bc3-3a2c549719cc does not exist
Jan 17 14:05:37 ceph02 ceph-mgr[1317]: [progress WARNING root] complete: ev 25db24fc-35ed-4980-8930-00b46661cf54 does not exist
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32966] [GET] [200] [0.134s] [admin] [73.0B] /api/osd/settings
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32962] [GET] [200] [0.247s] [admin] [131.0B] /api/health/get_cluster_capacity
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32950] [GET] [200] [0.375s] [admin] [890.0B] /api/prometheus/data
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32956] [GET] [200] [0.319s] [admin] [934.0B] /api/prometheus
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:33032] [GET] [200] [0.225s] [admin] [5.8K] /
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32946] [GET] [200] [0.394s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:33002] [GET] [200] [0.120s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:33016] [GET] [200] [0.278s] [admin] [951.0B] /api/prometheus/data
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330617: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 387 KiB/s wr, 54 op/s; 5103353/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 268 MiB/s, 93 objects/s recovering
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32936] [GET] [200] [0.581s] [admin] [19.4K] /api/health/minimal
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:33022] [GET] [200] [0.341s] [admin] [827.0B] /api/prometheus/data
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:33008] [GET] [200] [0.257s] [admin] [851.0B] /api/prometheus/data
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32984] [GET] [200] [0.128s] [admin] [806.0B] /api/prometheus/data
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32998] [GET] [200] [0.252s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32968] [GET] [200] [0.321s] [admin] [14.0K] /api/prometheus/data
Jan 17 14:05:38 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:32982] [GET] [200] [0.295s] [admin] [14.0K] /api/prometheus/data
Jan 17 14:05:40 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35308] [GET] [200] [0.027s] [admin] [5.8K] /
Jan 17 14:05:40 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330618: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 368 KiB/s wr, 50 op/s; 5103138/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 274 MiB/s, 91 objects/s recovering
Jan 17 14:05:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 17 14:05:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd.ec124, start_after=
Jan 17 14:05:41 ceph02 ceph-mgr[1317]: [rbd_support INFO root] load_schedules: rbd, start_after=
Jan 17 14:05:42 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35320] [GET] [200] [0.015s] [admin] [5.8K] /
Jan 17 14:05:42 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330619: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 389 KiB/s wr, 52 op/s; 5102981/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 252 MiB/s, 87 objects/s recovering
Jan 17 14:05:44 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35336] [GET] [200] [0.018s] [admin] [5.8K] /
Jan 17 14:05:44 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330620: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 256 KiB/s wr, 46 op/s; 5102764/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 258 MiB/s, 85 objects/s recovering
Jan 17 14:05:44 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:05:44] "GET /metrics HTTP/1.1" 200 67930 "" "Prometheus/2.43.0"
Jan 17 14:05:46 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35352] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:05:46 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330621: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 235 KiB/s wr, 42 op/s; 5102764/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 206 MiB/s, 69 objects/s recovering
Jan 17 14:05:48 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:35368] [GET] [200] [0.031s] [admin] [5.8K] /
Jan 17 14:05:48 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330622: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 228 KiB/s wr, 41 op/s; 5102533/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 256 MiB/s, 88 objects/s recovering
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46678] [GET] [200] [0.019s] [admin] [5.8K] /
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330623: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 196 KiB/s wr, 36 op/s; 5102334/142024938 objects degraded (3.593%); 11458/142024938 objects misplaced (0.008%); 249 MiB/s, 84 objects/s recovering
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46726] [GET] [200] [0.177s] [admin] [132.0B] /api/health/get_cluster_capacity
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46710] [GET] [200] [0.184s] [admin] [73.0B] /api/osd/settings
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: [dashboard INFO orchestrator] is orchestrator available: True,
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46742] [GET] [200] [0.223s] [admin] [22.0B] /api/prometheus/notifications
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46800] [GET] [200] [0.280s] [admin] [1.7K] /api/prometheus/data
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46770] [GET] [200] [0.297s] [admin] [934.0B] /api/prometheus
Jan 17 14:05:50 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46786] [GET] [200] [0.406s] [admin] [2.3K] /api/prometheus/data
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46880] [GET] [200] [0.262s] [admin] [51.0B] /api/prometheus/data
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46694] [GET] [200] [0.673s] [admin] [19.4K] /api/health/minimal
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46756] [GET] [200] [0.633s] [admin] [9.9K] /api/prometheus/rules
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46872] [GET] [200] [0.268s] [admin] [2.2K] /api/prometheus/data
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46856] [GET] [200] [0.231s] [admin] [998.0B] /api/prometheus/data
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46812] [GET] [200] [0.214s] [admin] [1.5K] /api/prometheus/data
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: log_channel(audit) log [DBG] : from='mon.2 -' entity='mon.' cmd=[{"prefix": "balancer status", "format": "json"}]: dispatch
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46840] [GET] [200] [0.386s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:05:51 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46826] [GET] [200] [0.345s] [admin] [18.7K] /api/prometheus/data
Jan 17 14:05:52 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46894] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:05:52 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330624: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 230 KiB/s wr, 43 op/s; 5102168/142024938 objects degraded (3.592%); 11458/142024938 objects misplaced (0.008%); 229 MiB/s, 80 objects/s recovering
Jan 17 14:05:54 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46908] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:05:54 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330625: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 235 KiB/s wr, 45 op/s; 5101876/142024938 objects degraded (3.592%); 11458/142024938 objects misplaced (0.008%); 273 MiB/s, 92 objects/s recovering
Jan 17 14:05:54 ceph02 ceph-mgr[1317]: [prometheus INFO cherrypy.access.281472508648136] ::ffff:172.22.0.100 - - [17/Jan/2024:19:05:54] "GET /metrics HTTP/1.1" 200 67933 "" "Prometheus/2.43.0"
Jan 17 14:05:56 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46912] [GET] [200] [0.020s] [admin] [5.8K] /
Jan 17 14:05:56 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330626: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 257 KiB/s wr, 50 op/s; 5101876/142024938 objects degraded (3.592%); 11458/142024938 objects misplaced (0.008%); 214 MiB/s, 73 objects/s recovering
Jan 17 14:05:58 ceph02 ceph-mgr[1317]: log_channel(cluster) log [DBG] : pgmap v330627: 337 pgs: 42 active+undersized+degraded+remapped+backfill_wait, 15 active+remapped+backfill_wait, 8 active+undersized+degraded+remapped+backfilling, 272 active+clean; 25 TiB data, 33 TiB used, 37 TiB / 70 TiB avail; 0 B/s rd, 306 KiB/s wr, 60 op/s; 5101630/142024938 objects degraded (3.592%); 11458/142024938 objects misplaced (0.008%); 272 MiB/s, 94 objects/s recovering
Jan 17 14:05:58 ceph02 ceph-mgr[1317]: [dashboard INFO request] [::ffff:172.22.0.1:46922] [GET] [200] [0.022s] [admin] [5.8K] /