Bug #39646
Status: Closed
ceph-mgr log is flooded with pgmap info every two seconds
Description
The ceph-mgr daemon with debug-mgr=0 logs pgmap information every two seconds.
Example:
2019-05-08 17:15:54.443 7fc43d8b5700 0 log_channel(cluster) log [DBG] : pgmap v188: 480 pgs: 480 active+clean; 4.7 KiB data, 71 MiB used, 216 GiB / 228 GiB avail; 5.0 KiB/s rd, 4 op/s
2019-05-08 17:15:56.443 7fc43d8b5700 0 log_channel(cluster) log [DBG] : pgmap v189: 480 pgs: 480 active+clean; 4.7 KiB data, 71 MiB used, 216 GiB / 228 GiB avail; 3.3 KiB/s rd, 3 op/s
2019-05-08 17:15:58.443 7fc43d8b5700 0 log_channel(cluster) log [DBG] : pgmap v190: 480 pgs: 480 active+clean; 4.7 KiB data, 71 MiB used, 216 GiB / 228 GiB avail; 5.0 KiB/s rd, 4 op/s
2019-05-08 17:16:00.443 7fc43d8b5700 0 log_channel(cluster) log [DBG] : pgmap v191: 480 pgs: 480 active+clean; 4.7 KiB data, 71 MiB used, 216 GiB / 228 GiB avail; 3.3 KiB/s rd, 3 op/s
2019-05-08 17:16:02.443 7fc43d8b5700 0 log_channel(cluster) log [DBG] : pgmap v192: 480 pgs: 480 active+clean; 4.7 KiB data, 71 MiB used, 216 GiB / 228 GiB avail; 3.3 KiB/s rd, 3 op/s
2019-05-08 17:16:04.447 7fc43d8b5700 0 log_channel(cluster) log [DBG] : pgmap v193: 480 pgs: 480 active+clean; 4.7 KiB data, 71 MiB used, 216 GiB / 228 GiB avail; 5.0 KiB/s rd, 4 op/s
Do we really need to show this info in the log?
Updated by Lenz Grimmer almost 5 years ago
Is this related to #37886 by any chance?
Updated by Sebastian Wagner almost 5 years ago
I'd be :+1: for removing this, iff creating a new pgmap every two seconds is not a bug.
Updated by Vikhyat Umrao almost 5 years ago
Hi - this is not a bug; this was changed deliberately because during troubleshooting/RCA we need historic IOPS data. If you want to stop these messages, simply change the log level to `info`:
ceph tell mon.* injectargs '--mon_cluster_log_file_level info'
Updated by Sebastian Wagner almost 5 years ago
Vikhyat Umrao wrote:
Hi - this is not a bug; this was changed deliberately because during troubleshooting/RCA we need historic IOPS data. If you want to stop these messages, simply change the log level to `info`:
[...]
Interesting, thanks for the background. To me, this raises the question of whether there are better places to store historic IOPS data, such as Prometheus, and whether the mgr log file is really the best place for this. Especially as this pgmap output is spamming the mgr log file in vstart clusters.
Updated by Vikhyat Umrao almost 5 years ago
Sebastian Wagner wrote:
Vikhyat Umrao wrote:
Hi - this is not a bug; this was changed deliberately because during troubleshooting/RCA we need historic IOPS data. If you want to stop these messages, simply change the log level to `info`:
[...]
Interesting, thanks for the background. To me, this raises the question of whether there are better places to store historic IOPS data, such as Prometheus, and whether the mgr log file is really the best place for this. Especially as this pgmap output is spamming the mgr log file in vstart clusters.
I think you can fix the vstart issue by adding `mon_cluster_log_file_level = info` to the `[global]` section in vstart.sh.
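A minimal sketch of the suggested workaround, assuming the option is placed in the `[global]` section of the ceph.conf that vstart.sh generates (the option name is the one given above; the surrounding layout is illustrative):

```ini
[global]
        # Raise the cluster log file threshold from debug to info so the
        # two-second pgmap [DBG] lines are no longer written to the mgr log.
        mon_cluster_log_file_level = info
```

Note this only suppresses the lines in the cluster log file; the injectargs command shown earlier achieves the same at runtime without a restart, but does not persist.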
Updated by Joao Eduardo Luis almost 5 years ago
- Status changed from New to Fix Under Review
- Assignee set to Joao Eduardo Luis
- Pull request ID set to 28917
Updated by Vikhyat Umrao almost 5 years ago
- Pull request ID changed from 28917 to 29357
Updated by Joao Eduardo Luis over 3 years ago
- Priority changed from Normal to High
Updated by Sebastian Wagner about 2 years ago
- Status changed from Fix Under Review to New
- Assignee deleted (Joao Eduardo Luis)
- Pull request ID deleted (29357)
Updated by Radoslaw Zarzynski almost 2 years ago
- Status changed from New to Won't Fix
Closing, as there has been no consensus for over 3 years, but feel free to reopen anytime.