Bug #59551 » mgr.storage01.log

Eugen Block, 05/05/2023 07:37 AM

 
Mai 05 09:23:12 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1369: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:13 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:13 storage01 ceph-mgr[24349]: [stats INFO root] register_mds_perf_query: {'key_descriptor': [{'type': 'mds_rank', 'regex': '^(.*)$'}, {'type': 'client_id', 'regex': '^(client.\\d*\\s+.*):.*'}], 'performance_counter_descriptors': []}
Mai 05 09:23:13 storage01 ceph-mgr[24349]: [stats INFO root] register_global_perf_query: {'key_descriptor': [{'type': 'client_id', 'regex': '^(client.\\d*\\s+.*):.*'}], 'performance_counter_descriptors': ['cap_hit', 'read_latency', 'write_latency', 'metadata_latency', 'dentry_lease', 'opened_files', 'pinned_icaps', 'opened_inodes', 'read_io_sizes', 'write_io_sizes', 'avg_read_latency', 'stdev_read_latency', 'avg_write_latency', 'stdev_write_latency', 'avg_metadata_latency', 'stdev_metadata_latency']}
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:13.214+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:13 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:13.214+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:13 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
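The traceback above repeats every time the stats module parses a client's "feature_bits" metadata as hexadecimal: a client that reports the bare string '0x' makes int(..., 16) raise. A minimal sketch of that failure, using hypothetical sample metadata and one possible defensive guard (an illustration only, not the upstream fix in perf_stats.py):

    # Hypothetical client metadata whose feature_bits is the bare string "0x".
    metadata = {"metric_spec": {"metric_flags": {"feature_bits": "0x"}}}

    feature_bits = metadata["metric_spec"]["metric_flags"]["feature_bits"]

    try:
        # Mirrors the parse at perf_stats.py line 177: "0x" has no hex
        # digits after the prefix, so int() with base 16 rejects it.
        metric_features = int(feature_bits, 16)
    except ValueError as err:
        print(err)  # invalid literal for int() with base 16: '0x'
        # Sketch of a guard: treat a bare "0x" as "no feature bits reported".
        metric_features = 0

    print(metric_features)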
Mai 05 09:23:13 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:13 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.032s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:14 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1370: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:15 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:15 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:15 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:15 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.031s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:16 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:16 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [22.0B] /api/prometheus/notifications
Mai 05 09:23:16 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:23:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1371: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:17 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:18 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:18.210+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:18.210+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:18 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:18 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:18 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:18 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1372: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:19 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:20 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:20 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:20 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.028s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:20 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1373: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:21 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:21 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [22.0B] /api/prometheus/notifications
Mai 05 09:23:21 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.021s] [admin] [1.1K] /api/summary
Mai 05 09:23:22 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1374: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:23.210+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:23 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:23 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:23.210+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:23 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:23 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:23 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:24 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1375: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:25 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:25 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:25 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:25 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:25 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:23:26 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [22.0B] /api/prometheus/notifications
Mai 05 09:23:26 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:26 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:23:26 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [24.0B] /ui-api/motd
Mai 05 09:23:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1376: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:27 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:28.214+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:28 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:28.214+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:28 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:28 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:28 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.034s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:28 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1377: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:29 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:30 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:30 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:30 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:30 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.496s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1378: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:31 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [22.0B] /api/prometheus/notifications
Mai 05 09:23:31 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10695674 -' entity='client.fstop' cmd=[{"prefix": "fs perf stats", "format": "json"}]: dispatch
Mai 05 09:23:32 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.031s] [admin] [1.1K] /api/summary
Mai 05 09:23:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1379: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:33 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:33 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:33 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:33 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:33 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:33.218+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:33.218+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:33 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:33 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:33 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:33 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1380: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:35 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:35 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:35 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:36 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [22.0B] /api/prometheus/notifications
Mai 05 09:23:36 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:23:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1381: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:38 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:38 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:38 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:38 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:38.222+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:38 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:38.222+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:38 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:38 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:38 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:38 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1382: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:23:40
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['tnap-backup', 'tnap-images', 'amphora', 'default.rgw.control', 'tnap-volumes-hdd', 'cephfs_hdd', 'default.rgw.buckets.index', 'cephfs_metadata', '.mgr', 'default.rgw.buckets.data', '.rgw.root', 'default.rgw.meta', 'cephfs_data', 'tnap-volumes', 'tnap-vms', 'default.rgw.log']
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:23:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:23:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1383: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:23:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:23:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:23:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:23:41 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.004s] [admin] [22.0B] /api/prometheus/notifications
Mai 05 09:23:41 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.026s] [admin] [1.1K] /api/summary
Mai 05 09:23:42 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:23:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1384: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:43 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:43 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:43 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:43 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:43.226+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:43 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:43.226+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:43 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:43 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:43 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:43 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.035s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1385: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:45 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:45 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:45 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.031s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.006s] [admin] [22.0B] /api/prometheus/notifications
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 8d2356d8-ec0f-4a80-adc0-75e648b261e3 does not exist
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 412479c0-62ce-4ad9-a0cb-0b157d3ad2e1 does not exist
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 3c888059-cb40-48b4-9aca-b99e03f58a9a does not exist
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 86d6f89f-cd8f-4c34-a518-d54eaa1fac89 does not exist
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev af3a4d79-a921-4cb0-9d60-9f26358b8d05 does not exist
Mai 05 09:23:46 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:44712] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:23:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1386: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:48 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:48 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:48 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:48 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:48.230+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:48 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:48.230+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:48 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:48 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:48 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:48 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1387: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1388: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [24.0B] /ui-api/motd
Mai 05 09:23:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1389: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:53 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:53 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:53 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:53 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:53 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:53.230+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:53.230+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:53 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:53 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:59704] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:23:54 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:59710] [GET] [200] [0.009s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:23:54 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:54 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:23:54 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [9.1K] /api/health/minimal
Mai 05 09:23:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1390: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1391: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:23:57 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:23:57 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:23:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.011s] [admin] [1.5K] /api/logs/all
Mai 05 09:23:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:23:58 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:23:58 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:23:58 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:23:58 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:58.238+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:23:58 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:23:58.238+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:58 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:23:58 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:23:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1392: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1393: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1394: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:03 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:03 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:24:03 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:24:03 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:24:03 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:03.238+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:24:03 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:03.238+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:03 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:24:03 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1395: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1396: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:08 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:08 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:24:08 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:24:08 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:24:08 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:08.242+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:24:08 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:08.242+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:08 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:24:08 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1397: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:24:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:24:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1398: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:24:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:24:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:24:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:24:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1399: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:13 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:24:13 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:24:13 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:24:13 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:13.246+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:24:13 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:13.246+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:13 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:24:13 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1400: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1401: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:24:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:24:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:24:18 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:18.250+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:24:18 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:18.250+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:18 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:24:18 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1402: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1403: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1404: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:24:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:24:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:24:23 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:23.250+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:24:23 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:24:23 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:23.250+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:23 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:24:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1405: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1406: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102916 might be unavailable
Mai 05 09:24:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10102926 might be unavailable
Mai 05 09:24:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.10104856 might be unavailable
Mai 05 09:24:28 storage01 ceph-mgr[24349]: [stats WARNING root] client metadata for client_id=client.8239675 might be unavailable
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:28.254+0000 7f72db5f7700 -1 mgr notify stats.notify:
Mai 05 09:24:28 storage01 ceph-mgr[24349]: mgr notify stats.notify:
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: 2023-05-05T07:24:28.254+0000 7f72db5f7700 -1 mgr notify Traceback (most recent call last):
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: self.fs_perf_stats.notify_cmd(notify_id)
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
Mai 05 09:24:28 storage01 ceph-mgr[24349]: mgr notify Traceback (most recent call last):
File "/usr/share/ceph/mgr/stats/module.py", line 32, in notify
self.fs_perf_stats.notify_cmd(notify_id)
File "/usr/share/ceph/mgr/stats/fs/perf_stats.py", line 177, in notify_cmd
metric_features = int(metadata[CLIENT_METADATA_KEY]["metric_spec"]["metric_flags"]["feature_bits"], 16)
ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]: ValueError: invalid literal for int() with base 16: '0x'
Mai 05 09:24:28 storage01 ceph-877636d0-d118-11ec-83c7-fa163e990a3e-mgr-storage01-ygsvte[24338]:
Mai 05 09:24:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1407: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1408: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1409: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.006s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:33 storage01 ceph-mgr[24349]: [stats INFO root] unregister_mds_perf_queries: filter_spec=<stats.fs.perf_stats.FilterSpec object at 0x7f72750826a0>, query_id=[0, 1]
Mai 05 09:24:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1410: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:36 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [24.0B] /ui-api/motd
Mai 05 09:24:36 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:52300] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:24:36 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:24:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1411: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1412: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:24:40
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['tnap-vms', '.rgw.root', 'amphora', 'cephfs_metadata', 'cephfs_hdd', 'tnap-volumes', 'default.rgw.control', 'tnap-images', '.mgr', 'tnap-backup', 'default.rgw.buckets.index', 'tnap-volumes-hdd', 'cephfs_data', 'default.rgw.buckets.data', 'default.rgw.log', 'default.rgw.meta']
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:24:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:24:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1413: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:24:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:24:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:24:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:24:41 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:24:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1414: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1415: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:46 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.021s] [admin] [1.1K] /api/summary
Mai 05 09:24:46 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:24:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1416: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 7e6d0841-4360-4021-8950-16930b493eb0 does not exist
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 48ed0add-f750-4964-969e-f94de506a90e does not exist
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev d0ac0d92-696c-40a2-91be-49392781e317 does not exist
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 52cc4325-1cc6-46b3-9169-d120354707d5 does not exist
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:24:47 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev e38ab392-c233-4fd1-861a-dee95ef1eae4 does not exist
Mai 05 09:24:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1417: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1418: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:51 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:24:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1419: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1420: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:56 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:24:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1421: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:24:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:24:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1422: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1423: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:02 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:25:03 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1424: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1425: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:06 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:25:06 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:60602] [GET] [200] [0.003s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:25:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1426: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:08 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1427: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:25:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:25:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1428: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:25:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:25:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:25:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:25:11 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:25:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1429: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1430: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:16 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:25:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1431: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1432: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1433: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1434: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1435: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1436: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.006s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1437: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1438: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1439: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1440: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1441: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1442: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:25:40
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['tnap-volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.buckets.index', 'default.rgw.log', 'cephfs_metadata', 'cephfs_data', 'default.rgw.buckets.data', 'tnap-vms', 'amphora', 'default.rgw.control', '.mgr', 'cephfs_hdd', 'tnap-images', 'tnap-backup', 'tnap-volumes-hdd']
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:25:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:25:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1443: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:25:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:25:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:25:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:25:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1444: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1445: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1446: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:47 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 08dd31e0-ca30-4dbe-ad3e-4655f80ba0ac does not exist
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev fd8d3c40-70d8-45d6-abc8-337326bfbc1d does not exist
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 9b3e158e-0eac-490e-9f71-d8f8978594cd does not exist
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev dae8fe97-395e-4feb-9178-c031842c5655 does not exist
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:25:48 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 600bc476-ca46-447e-a184-2e99c1384ed6 does not exist
Mai 05 09:25:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1447: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1448: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1449: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.007s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1450: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1451: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:25:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.008s] [admin] [1.5K] /api/logs/all
Mai 05 09:25:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1452: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1453: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1454: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:03 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1455: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1456: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:08 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1457: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:26:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:26:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1458: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:26:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:26:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:26:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:26:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1459: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1460: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1461: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1462: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1463: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1464: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1465: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1466: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1467: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1468: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1469: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1470: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1471: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1472: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:26:40
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['default.rgw.buckets.data', 'tnap-vms', 'cephfs_metadata', 'tnap-backup', 'default.rgw.buckets.index', 'default.rgw.control', 'cephfs_hdd', 'tnap-volumes-hdd', 'cephfs_data', 'default.rgw.log', 'default.rgw.meta', 'amphora', '.rgw.root', 'tnap-images', 'tnap-volumes', '.mgr']
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:26:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:26:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1473: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:26:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:26:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:26:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:26:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [24.0B] /ui-api/motd
Mai 05 09:26:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:26:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:39080] [GET] [200] [0.036s] [admin] [1.1K] /api/summary
Mai 05 09:26:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1474: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1475: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1476: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:47 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.020s] [admin] [1.1K] /api/summary
Mai 05 09:26:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:48 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:26:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1477: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 282d143d-1a51-4b9d-b3d9-2746c4ff6ee9 does not exist
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 8272aaef-d120-4c3c-b04f-ac7db4199750 does not exist
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev f8a9f726-c5d7-453b-bd59-aeb57ed71150 does not exist
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev fc760b5d-790e-4b8b-a2bd-81c98d1412e7 does not exist
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:26:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 3b5219e4-89ae-4653-b31e-9d100b330f57 does not exist
Mai 05 09:26:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1478: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:26:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1479: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1480: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1481: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:26:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:26:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:26:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1482: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1483: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:02 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.020s] [admin] [1.1K] /api/summary
Mai 05 09:27:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1484: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:03 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1485: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1486: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:07 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:27:08 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1487: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:27:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:27:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1488: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:27:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:27:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:27:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:27:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:34788] [GET] [200] [0.003s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:27:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:27:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1489: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1490: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1491: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:17 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.021s] [admin] [1.1K] /api/summary
Mai 05 09:27:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1492: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1493: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:22 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.020s] [admin] [1.1K] /api/summary
Mai 05 09:27:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1494: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1495: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1496: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:27 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:27:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1497: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1498: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:32 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.032s] [admin] [1.1K] /api/summary
Mai 05 09:27:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1499: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1500: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1501: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:37 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.021s] [admin] [1.1K] /api/summary
Mai 05 09:27:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1502: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:27:40
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['default.rgw.log', 'tnap-volumes-hdd', 'tnap-volumes', 'tnap-vms', 'tnap-backup', 'cephfs_hdd', 'cephfs_metadata', 'default.rgw.buckets.index', '.rgw.root', 'amphora', 'tnap-images', 'default.rgw.buckets.data', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs_data']
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:27:40 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:27:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1503: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:27:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:27:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:41454] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:27:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.034s] [admin] [1.1K] /api/summary
Mai 05 09:27:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [24.0B] /ui-api/motd
Mai 05 09:27:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1504: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:43 storage01 ceph-mgr[24349]: [devicehealth INFO root] Check health
Mai 05 09:27:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1505: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1506: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:47 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.018s] [admin] [1.1K] /api/summary
Mai 05 09:27:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1507: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 9b2e397c-4efc-4351-86de-0b7446119827 does not exist
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 9c1ff174-15d5-4faf-83cb-5f04b66bb578 does not exist
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 07bbcd05-b604-42dc-bb57-037c4b88a651 does not exist
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 7f59cccd-576e-45e9-8f53-7e83e32ea283 does not exist
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:27:50 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 68d795b2-8e47-4deb-bf8e-27791e828317 does not exist
Mai 05 09:27:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1508: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:27:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1509: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1510: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1511: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:27:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.020s] [admin] [1.1K] /api/summary
Mai 05 09:27:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.006s] [admin] [1.5K] /api/logs/all
Mai 05 09:27:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1512: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1513: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:02 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:28:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1514: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:03 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1515: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1516: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:07 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:28:08 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.006s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1517: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:28:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:28:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1518: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:28:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:28:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:28:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:28:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:34782] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:28:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:28:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1519: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1520: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1521: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:17 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:28:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1522: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1523: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:22 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:28:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1524: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1525: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1526: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:27 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:28:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1527: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1528: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:32 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:28:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1529: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1530: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1531: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:37 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:28:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1532: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:28:40
Mai 05 09:28:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:28:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:28:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['amphora', 'tnap-volumes-hdd', 'tnap-volumes', 'cephfs_data', 'cephfs_metadata', 'tnap-backup', 'default.rgw.buckets.data', '.rgw.root', 'default.rgw.log', 'tnap-vms', '.mgr', 'default.rgw.buckets.index', 'default.rgw.meta', 'tnap-images', 'default.rgw.control', 'cephfs_hdd']
Mai 05 09:28:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:28:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:28:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:28:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1533: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:28:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:28:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:47132] [GET] [200] [0.003s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:28:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.034s] [admin] [1.1K] /api/summary
Mai 05 09:28:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [24.0B] /ui-api/motd
Mai 05 09:28:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1534: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1535: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1536: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:47 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:28:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1537: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:51 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:28:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1538: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.031s] [admin] [1.1K] /api/summary
Mai 05 09:28:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1539: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.007s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:53 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:28:54 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:28:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 5b137382-fbe7-410c-a5c9-91baa87b00e9 does not exist
Mai 05 09:28:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev b6bf9a81-7392-4d6c-ba3b-c3bd0f8a3528 does not exist
Mai 05 09:28:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 61d3b64e-44f9-4c96-a5b7-59166dcaf6d7 does not exist
Mai 05 09:28:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev fe75b274-1d26-4806-8a95-8339da2fb56e does not exist
Mai 05 09:28:54 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:28:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 03e37353-1c5a-4aea-bbdb-d619501497e8 does not exist
Mai 05 09:28:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1540: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1541: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:28:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.020s] [admin] [1.1K] /api/summary
Mai 05 09:28:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:28:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1542: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1543: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:02 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:29:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1544: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:03 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1545: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1546: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:07 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.020s] [admin] [1.1K] /api/summary
Mai 05 09:29:08 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1547: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:29:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:29:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1548: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:29:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:29:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:29:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:29:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:57842] [GET] [200] [0.003s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:29:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:29:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1549: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1550: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1551: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:17 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:29:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1552: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1553: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:22 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:29:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1554: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1555: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1556: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:27 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:29:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1557: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1558: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:29:32 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:29:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1559: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Mai 05 09:29:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1560: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Mai 05 09:29:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1561: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Mai 05 09:29:37 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.021s] [admin] [1.1K] /api/summary
Mai 05 09:29:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1562: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 3.5 KiB/s rd, 0 B/s wr, 5 op/s
Mai 05 09:29:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:29:40
Mai 05 09:29:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:29:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:29:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.buckets.index', '.rgw.root', 'tnap-vms', 'amphora', 'default.rgw.buckets.data', 'tnap-volumes', 'tnap-volumes-hdd', 'tnap-backup', 'cephfs_metadata', 'default.rgw.control', 'cephfs_hdd', '.mgr', 'cephfs_data', 'default.rgw.log', 'tnap-images']
Mai 05 09:29:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:29:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:29:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:29:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1563: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:29:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:29:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:43774] [GET] [200] [0.003s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:29:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.032s] [admin] [1.1K] /api/summary
Mai 05 09:29:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [24.0B] /ui-api/motd
Mai 05 09:29:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1564: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Mai 05 09:29:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.006s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1565: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:29:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1566: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.5 KiB/s rd, 0 B/s wr, 12 op/s
Mai 05 09:29:47 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:29:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1567: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Mai 05 09:29:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1568: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Mai 05 09:29:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:29:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1569: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Mai 05 09:29:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev c19d253d-09d4-4d7a-b1b1-874b1828e872 does not exist
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 20d34a8f-3b2a-4d64-a5f7-b76d25963fd1 does not exist
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 16dcc57f-afa3-4aa0-b35c-c7d5161a9139 does not exist
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 01b8de6b-aa8a-44f2-b315-54b3b5681b74 does not exist
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:29:54 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev a0f0e0d7-dfce-43ac-b7af-3a2ce34a6c6b does not exist
Mai 05 09:29:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1570: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Mai 05 09:29:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1571: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.3 KiB/s rd, 0 B/s wr, 3 op/s
Mai 05 09:29:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:29:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:29:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1572: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1573: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:02 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:30:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1574: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:03 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.006s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1575: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1576: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:07 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:30:08 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1577: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:30:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:30:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1578: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:30:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:30:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:30:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:30:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:38678] [GET] [200] [0.003s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:30:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:30:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1579: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:13 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1580: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1581: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:17 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.021s] [admin] [1.1K] /api/summary
Mai 05 09:30:18 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1582: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:30:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1583: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Mai 05 09:30:22 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:30:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1584: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Mai 05 09:30:23 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1585: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 3 op/s
Mai 05 09:30:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1586: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 10 op/s
Mai 05 09:30:27 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:30:28 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1587: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 6.4 KiB/s rd, 0 B/s wr, 10 op/s
Mai 05 09:30:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1588: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 9.7 KiB/s rd, 0 B/s wr, 16 op/s
Mai 05 09:30:32 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:30:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1589: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 12 op/s
Mai 05 09:30:33 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1590: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 12 op/s
Mai 05 09:30:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1591: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 9.8 KiB/s rd, 0 B/s wr, 16 op/s
Mai 05 09:30:37 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:30:38 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1592: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.8 KiB/s rd, 0 B/s wr, 9 op/s
Mai 05 09:30:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:30:40
Mai 05 09:30:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:30:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:30:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['default.rgw.buckets.index', '.mgr', 'tnap-vms', '.rgw.root', 'cephfs_metadata', 'cephfs_data', 'tnap-images', 'amphora', 'tnap-volumes-hdd', 'tnap-volumes', 'tnap-backup', 'default.rgw.buckets.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs_hdd']
Mai 05 09:30:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:30:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:30:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:30:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1593: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:30:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:30:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:36700] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:30:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.034s] [admin] [1.1K] /api/summary
Mai 05 09:30:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [24.0B] /ui-api/motd
Mai 05 09:30:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1594: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:30:43 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1595: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:30:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1596: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Mai 05 09:30:47 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:30:48 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1597: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 9 op/s
Mai 05 09:30:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1598: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Mai 05 09:30:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:30:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1599: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Mai 05 09:30:53 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:30:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1600: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 9 op/s
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 075a14c2-d6cc-4a15-8279-1bce9886b5e2 does not exist
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 914ff23e-b09a-42c6-88d7-181f3f8c18f8 does not exist
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev a056aace-9c1f-4d71-b713-26caa5e625b4 does not exist
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 53237b68-d113-4f14-9b1c-11ecdd3e0310 does not exist
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:30:55 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev ff682640-6c44-4e17-9997-705dba7660e4 does not exist
Mai 05 09:30:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1601: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Mai 05 09:30:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.032s] [admin] [1.1K] /api/summary
Mai 05 09:30:58 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:30:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1602: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:00 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:00 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:00 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:31:00 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:36132] [GET] [200] [0.012s] [admin] [73.0B] /api/osd/settings
Mai 05 09:31:00 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.047s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1603: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 12 op/s
Mai 05 09:31:02 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.018s] [admin] [1.1K] /api/summary
Mai 05 09:31:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1604: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:04 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:04 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:04 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.034s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1605: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1606: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.6 KiB/s rd, 0 B/s wr, 12 op/s
Mai 05 09:31:07 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:31:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1607: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 4.8 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:09 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:09 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:09 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.031s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:31:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:31:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1608: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Mai 05 09:31:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:31:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:31:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:31:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:31:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:49814] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:31:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.033s] [admin] [1.1K] /api/summary
Mai 05 09:31:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1609: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Mai 05 09:31:14 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:14 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:14 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.032s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1610: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Mai 05 09:31:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1611: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.2 KiB/s rd, 0 B/s wr, 12 op/s
Mai 05 09:31:17 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:31:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1612: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 4.9 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:19 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:19 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:19 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.031s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1613: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 7.4 KiB/s rd, 0 B/s wr, 12 op/s
Mai 05 09:31:22 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:31:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1614: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:24 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:24 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:24 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1615: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1616: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 5.3 KiB/s rd, 0 B/s wr, 8 op/s
Mai 05 09:31:27 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:31:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1617: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Mai 05 09:31:29 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:29 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:29 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.032s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1618: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 4 op/s
Mai 05 09:31:32 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:31:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1619: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Mai 05 09:31:34 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:34 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:34 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1620: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Mai 05 09:31:37 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1621: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Mai 05 09:31:37 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:31:39 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1622: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:39 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:39 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:39 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.036s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:31:40
Mai 05 09:31:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:31:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:31:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['default.rgw.buckets.index', 'cephfs_data', 'default.rgw.control', 'tnap-vms', 'tnap-volumes', 'default.rgw.meta', 'cephfs_metadata', '.mgr', 'amphora', 'default.rgw.log', 'tnap-volumes-hdd', 'tnap-backup', 'default.rgw.buckets.data', '.rgw.root', 'tnap-images', 'cephfs_hdd']
Mai 05 09:31:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:31:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:31:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:31:41 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1623: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:31:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:47552] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:31:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [1.1K] /api/summary
Mai 05 09:31:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [24.0B] /ui-api/motd
Mai 05 09:31:43 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1624: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:44 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:44 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:44 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:45 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1625: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:47 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1626: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:47 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:31:49 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1627: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:49 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:49 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:49 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:51 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1628: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.020s] [admin] [1.1K] /api/summary
Mai 05 09:31:53 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1629: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:54 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:54 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:54 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:54 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [1.5K] /api/logs/all
Mai 05 09:31:55 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1630: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:55 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 4e45c95f-25d5-4850-b590-1bf51dd0926a does not exist
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev ebb090e0-7cc4-4b60-8103-c7a021ce404b does not exist
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev c67d71ac-a5ac-4ed4-bdc2-ea45df615ad4 does not exist
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 184607d9-16d0-4fff-8087-7cbd36bed310 does not exist
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:31:56 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev f5adff0d-b901-4018-856d-38ce788b4034 does not exist
Mai 05 09:31:57 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1631: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.036s] [admin] [9.1K] /api/health/minimal
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:34066] [GET] [200] [0.009s] [admin] [73.0B] /api/osd/settings
Mai 05 09:31:59 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1632: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:31:59 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [9.1K] /api/health/minimal
Mai 05 09:32:01 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1633: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:02 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:32:03 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1634: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:04 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:04 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:04 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [9.1K] /api/health/minimal
Mai 05 09:32:05 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1635: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:07 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1636: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:07 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:32:09 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1637: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:09 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:09 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:09 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.033s] [admin] [9.1K] /api/health/minimal
Mai 05 09:32:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:32:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:32:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:32:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:32:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:32:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:32:11 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1638: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:47332] [GET] [200] [0.004s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:32:12 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [1.1K] /api/summary
Mai 05 09:32:13 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1639: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:14 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:14 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:14 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.1K] /api/health/minimal
Mai 05 09:32:15 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1640: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:17 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1641: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:17 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:32:19 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1642: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:19 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:19 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:19 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.1K] /api/health/minimal
Mai 05 09:32:21 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1643: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:22 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.026s] [admin] [1.1K] /api/summary
Mai 05 09:32:23 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1644: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:24 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:24 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:24 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.1K] /api/health/minimal
Mai 05 09:32:25 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1645: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:26 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696120 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:26 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1646: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:26 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696122 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:26 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1647: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:27 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696124 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:27 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1648: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:27 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.046s] [admin] [1.1K] /api/summary
Mai 05 09:32:27 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.005s] [admin] [1.1K] /favicon.ico
Mai 05 09:32:28 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696128 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:28 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1649: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:28 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696130 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:28 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1650: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:29 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:29 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:29 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [9.3K] /api/health/minimal
Mai 05 09:32:29 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696132 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:29 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1651: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:30 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696134 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:30 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1652: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:30 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696136 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:30 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1653: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:31 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696140 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:31 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1654: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:32 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696142 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:32 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1655: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:32 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.019s] [admin] [1.1K] /api/summary
Mai 05 09:32:32 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696144 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:32 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1656: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:33 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696146 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:33 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1657: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:34 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696150 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:34 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1658: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:34 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:34 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:34 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.3K] /api/health/minimal
Mai 05 09:32:34 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696152 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:34 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1659: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:35 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696154 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:35 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1660: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:36 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696156 -' entity='client.crash.storage01' cmd=[{"prefix": "crash post", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:36 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1661: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:37 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:32:38 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1662: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:39 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:39 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:39 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [9.4K] /api/health/minimal
Mai 05 09:32:40 storage01 ceph-mgr[24349]: [balancer INFO root] Optimize plan auto_2023-05-05_07:32:40
Mai 05 09:32:40 storage01 ceph-mgr[24349]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mai 05 09:32:40 storage01 ceph-mgr[24349]: [balancer INFO root] do_upmap
Mai 05 09:32:40 storage01 ceph-mgr[24349]: [balancer INFO root] pools ['cephfs_metadata', 'cephfs_data', 'default.rgw.buckets.data', 'tnap-images', 'default.rgw.meta', 'tnap-volumes', 'amphora', 'default.rgw.buckets.index', 'default.rgw.log', 'tnap-backup', 'cephfs_hdd', '.mgr', 'default.rgw.control', 'tnap-volumes-hdd', 'tnap-vms', '.rgw.root']
Mai 05 09:32:40 storage01 ceph-mgr[24349]: [balancer INFO root] prepared 0/10 changes
Mai 05 09:32:40 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1663: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:32:40 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-images, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-vms, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-backup, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: tnap-volumes-hdd, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [rbd_support INFO root] load_schedules: amphora, start_after=
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:32:41 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:32:42 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1664: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:59674] [GET] [200] [0.003s] [admin] [69.0B] /api/feature_toggles
Mai 05 09:32:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:32:42 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.004s] [admin] [24.0B] /ui-api/motd
Mai 05 09:32:44 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1665: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:44 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:44 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:44 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.4K] /api/health/minimal
Mai 05 09:32:46 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1666: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:47 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696166 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:47 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [1.1K] /api/summary
Mai 05 09:32:48 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1667: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:49 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:49 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:49 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [9.4K] /api/health/minimal
Mai 05 09:32:50 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1668: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:52 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1669: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:52 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:32:53 storage01 ceph-mgr[24349]: log_channel(audit) log [DBG] : from='client.10696172 -' entity='client.admin' cmd=[{"prefix": "crash info", "id": "2023-05-05T07:24:28.258666Z_33a408c8-ecfa-4edf-8c37-c093ccf69bd6", "target": ["mon-mgr", ""]}]: dispatch
Mai 05 09:32:54 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1670: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:54 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:54 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:54 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.4K] /api/health/minimal
Mai 05 09:32:56 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1671: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:56 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.028s] [admin] [1.1K] /api/summary
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 13bfe92b-9a41-4a23-8350-c4e2cb5b784f does not exist
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev e1540927-0525-4217-9f99-6e2aa8ab3821 does not exist
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 8bd372dc-cf0e-4002-8a01-03fe9e4df21f does not exist
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev 4bd027f2-7098-462f-94b3-fddb2237086e does not exist
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [stats WARNING root] cmdtag not found in client metadata
Mai 05 09:32:57 storage01 ceph-mgr[24349]: [progress WARNING root] complete: ev eee7b472-ea5e-46f3-94c1-f215ea2c9e65 does not exist
Mai 05 09:32:58 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1672: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:32:59 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:59 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:32:59 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [9.4K] /api/health/minimal
Mai 05 09:33:00 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1673: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:02 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1674: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:04 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1675: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:04 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:04 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:04 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.4K] /api/health/minimal
Mai 05 09:33:06 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1676: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:08 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1677: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:09 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:09 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:09 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.4K] /api/health/minimal
Mai 05 09:33:10 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1678: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:33:10 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:33:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:33:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:33:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] scanning for idle connections..
Mai 05 09:33:11 storage01 ceph-mgr[24349]: [volumes INFO mgr_util] cleaning up connections: []
Mai 05 09:33:12 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1679: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:14 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1680: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:14 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:14 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:14 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.040s] [admin] [9.4K] /api/health/minimal
Mai 05 09:33:16 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1681: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:18 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1682: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:19 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:19 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:19 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.030s] [admin] [9.4K] /api/health/minimal
Mai 05 09:33:20 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1683: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:22 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1684: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:24 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1685: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:24 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:24 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:24 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.4K] /api/health/minimal
Mai 05 09:33:26 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1686: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:28 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1687: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:29 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:29 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:29 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.027s] [admin] [9.4K] /api/health/minimal
Mai 05 09:33:30 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1688: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:32 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1689: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:34 storage01 ceph-mgr[24349]: log_channel(cluster) log [DBG] : pgmap v1690: 153 pgs: 153 active+clean; 11 GiB data, 33 GiB used, 27 GiB / 60 GiB avail
Mai 05 09:33:34 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:34 storage01 ceph-mgr[24349]: [dashboard INFO orchestrator] is orchestrator available: True,
Mai 05 09:33:34 storage01 ceph-mgr[24349]: [dashboard INFO request] [::ffff:XXX.13:55788] [GET] [200] [0.029s] [admin] [9.4K] /api/health/minimal