Bug #49551
cephadm journald logs are mangled
Status: Closed
% Done: 0%
Description
Regression introduced by https://github.com/ceph/ceph/pull/37729,
which fixed a regression introduced by https://github.com/ceph/ceph/pull/36800,
which in turn fixed the original issue: https://tracker.ceph.com/issues/43883
Root cause: https://github.com/containers/conmon/issues/242
Workaround: https://github.com/ceph/ceph/pull/39738
2021-02-16T02:06:07.516 DEBUG:teuthology.orchestra.run.smithi014:> sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-octopus shell --fsid 7c10f384-6fee-11eb-9007-001a4aab830c -- ceph osd last-stat-seq osd.1
2021-02-16T02:06:07.669 INFO:teuthology.orchestra.run.smithi014.stderr:Inferring config /var/lib/ceph/7c10f384-6fee-11eb-9007-001a4aab830c/mon.a/config
2021-02-16T02:06:08.810 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:08 smithi014 conmon[77146]: audit 2021-02-16T02:06:08.501429+0000 mon.a (mon.0) 6763 : audit [INF] from='mgr.34886 172.21.15.91:0/3292838717' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2021-02-16T02:06:08.810 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:08 smithi014 conmon[82143]: audit 2021-02-16T02:06:08.501429+0000 mon.a (mon.0) 6763 : audit [INF] from='mgr.34886 172.21.15.91:0/3292838717' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2021-02-16T02:06:08.937 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:08 smithi091 conmon[56348]: audit 2021-02-16T02:06:08.501429+0000 mon.a (mon.0
2021-02-16T02:06:08.937 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:08 smithi091 conmon[56348]: ) 6763 : audit [INF] from='mgr.34886 172.21.15.91:0/3292838717' entity='mgr.x' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/x/mirror_snapshot_schedule"}]: dispatch
2021-02-16T02:06:09.307 INFO:teuthology.orchestra.run.smithi014.stdout:8559869821018
2021-02-16T02:06:09.810 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:09 smithi014 conmon[82143]: cluster 2021-02-16T02:06:08.988886
2021-02-16T02:06:09.810 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:09 smithi014 conmon[82143]: +0000 mgr.x (mgr.34886) 2892 : cluster [DBG] pgmap v4713: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:09.810 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:09 smithi014 conmon[82143]: audit 2021-02-16T02:06:09.307793+0000 mon.a (mon.0) 6764 : audit [DBG] from='client.? 172.21.15.14:0/765780855' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2021-02-16T02:06:09.810 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:09 smithi014 conmon[77146]: cluster 2021-02-16T02:06:08.988886+0000
2021-02-16T02:06:09.810 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:09 smithi014 conmon[77146]: mgr.x (mgr.34886) 2892 : cluster [DBG] pgmap v4713: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:09.810 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:09 smithi014 conmon[77146]: audit 2021-02-16T02:06:09.307793+0000 mon.a (mon.0) 6764 : audit [DBG] from='client.? 172.21.15.14:0/765780855' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2021-02-16T02:06:09.897 INFO:tasks.cephadm.ceph_manager.ceph:need seq 8559869821015 got 8559869821018 for osd.1
2021-02-16T02:06:09.897 DEBUG:teuthology.orchestra.run.smithi014:> sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-octopus shell --fsid 7c10f384-6fee-11eb-9007-001a4aab830c -- ceph osd last-stat-seq osd.2
2021-02-16T02:06:09.937 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:09 smithi091 conmon[56348]: cluster 2021-02-16T02:06:08.988886+0000 mgr.x (mgr.34886) 2892 : cluster [DBG] pgmap v4713: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:09.937 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:09 smithi091 conmon[56348]:
2021-02-16T02:06:09.937 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:09 smithi091 conmon[56348]: audit 2021-02-16T02:06:09.307793+0000 mon.a (mon.0) 6764 : audit [DBG] from='client.? 172.21.15.14:0/765780855' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch
2021-02-16T02:06:10.050 INFO:teuthology.orchestra.run.smithi014.stderr:Inferring config /var/lib/ceph/7c10f384-6fee-11eb-9007-001a4aab830c/mon.a/config
2021-02-16T02:06:11.310 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:11 smithi014 conmon[77146]: cluster 2021-02-16T02:06:10.989411+0000 mgr.x (mgr.34886) 2893 : cluster
2021-02-16T02:06:11.310 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:11 smithi014 conmon[77146]: [DBG] pgmap v4714: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:11.311 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:11 smithi014 conmon[82143]: cluster 2021-02-16T02:06:10.989411+0000
2021-02-16T02:06:11.311 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:11 smithi014 conmon[82143]: mgr.x (mgr.34886) 2893 : cluster [DBG] pgmap v4714: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:11.437 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:11 smithi091 conmon[56348]: cluster 2021-02-16T02:06:10.989411+0000 mgr.x (mgr.34886) 2893 : cluster [DBG] pgmap v4714: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:11.437 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:11 smithi091 conmon[56348]:
2021-02-16T02:06:11.740 INFO:teuthology.orchestra.run.smithi014.stdout:485331305324
2021-02-16T02:06:12.313 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[77146]: audit
2021-02-16T02:06:12.313 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[77146]: 2021-
2021-02-16T02:06:12.313 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[77146]: 02-16T02:06:11.739869+0000 mon.a (mon.0) 6765
2021-02-16T02:06:12.313 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[77146]: : audit [DBG] from='client.? 172.21.15.14:0/464041634' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2021-02-16T02:06:12.313 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[82143]: audit
2021-02-16T02:06:12.313 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[82143]:
2021-02-16T02:06:12.313 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[82143]: 2021
2021-02-16T02:06:12.314 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[82143]: -02-16T02:06:
2021-02-16T02:06:12.314 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[82143]: 11.739869+0000 mon.a (mon.0) 6765 : audit
2021-02-16T02:06:12.314 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:12 smithi014 conmon[82143]: [DBG] from='client.? 172.21.15.14:0/464041634' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2021-02-16T02:06:12.315 INFO:tasks.cephadm.ceph_manager.ceph:need seq 485331305321 got 485331305324 for osd.2
2021-02-16T02:06:12.315 DEBUG:teuthology.orchestra.run.smithi014:> sudo /home/ubuntu/cephtest/cephadm --image docker.io/ceph/daemon-base:latest-octopus shell --fsid 7c10f384-6fee-11eb-9007-001a4aab830c -- ceph osd last-stat-seq osd.3
2021-02-16T02:06:12.437 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:12 smithi091 conmon[56348]: audit 2021-02-16T02:06:11.739869+0000 mon.a (mon.0) 6765 : audit [DBG]
2021-02-16T02:06:12.437 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:12 smithi091 conmon[56348]: from='client.? 172.21.15.14:0/464041634' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch
2021-02-16T02:06:12.468 INFO:teuthology.orchestra.run.smithi014.stderr:Inferring config /var/lib/ceph/7c10f384-6fee-11eb-9007-001a4aab830c/mon.a/config
2021-02-16T02:06:13.310 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:13 smithi014 conmon[82143]: cluster 2021-02-16T02:06:12
2021-02-16T02:06:13.311 INFO:journalctl@ceph.mon.c.smithi014.stdout:Feb 16 02:06:13 smithi014 conmon[82143]: .990117+0000 mgr.x (mgr.34886) 2894 : cluster [DBG] pgmap v4715: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:13.311 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:13 smithi014 conmon[77146]: cluster
2021-02-16T02:06:13.311 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:13 smithi014 conmon[77146]: 2021-02-16T02:
2021-02-16T02:06:13.311 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:13 smithi014 conmon[77146]: 06:12.990117+0000 mgr.x (mgr.34886) 2894 : cluster [DBG] pgmap v4715: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:13.311 INFO:journalctl@ceph.mon.a.smithi014.stdout:Feb 16 02:06:13 smithi014 conmon[77146]:
2021-02-16T02:06:13.437 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:13 smithi091 conmon[56348]: cluster 2021-02-16T02:06:12.990117+0000 mgr.x (mgr.34886) 2894 : cluster [DBG] pgmap v4715: 213 pgs: 213 active+clean; 4.0 MiB data, 843 MiB used, 710 GiB / 715 GiB avail
2021-02-16T02:06:13.437 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:13 smithi091 conmon[56348]:
2021-02-16T02:06:14.133 INFO:teuthology.orchestra.run.smithi014.stdout:1228360647322
2021-02-16T02:06:14.437 INFO:journalctl@ceph.mon.b.smithi091.stdout:Feb 16 02:06:14 smithi091 conmon[56348]: audit 2021-02-16T02:06:14.133103+0000 mon.a (mon.0) 6766 : audit [DBG] from='client.? 172.21.15.14:0/223062718' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 3}]: dispatch
Updated by Sebastian Wagner about 3 years ago
- Related to Bug #47916: podman containers running in a detached state do not output logs to journald added
Updated by Sebastian Wagner about 3 years ago
- Related to Bug #46654: Unsupported podman container configuration via systemd added
Updated by 玮文 胡 about 3 years ago
I think using std::clog instead of std::cerr to write the log could mitigate this issue, because clog buffers its output while cerr does not. clog should also be more efficient, since it issues fewer system calls.
Updated by Sebastian Wagner about 3 years ago
- Status changed from New to Pending Backport
- Pull request ID set to 40640
Updated by Sebastian Wagner about 3 years ago
- Status changed from Pending Backport to Fix Under Review
Updated by 玮文 胡 almost 3 years ago
Now https://github.com/ceph/ceph/pull/40640 has been merged. Should this be backported?
If so, please don't forget to also backport https://github.com/ceph/ceph/pull/39738 and https://github.com/ceph/ceph/pull/40025. #40025 is already in pacific.
Updated by Sebastian Wagner almost 3 years ago
- Status changed from Fix Under Review to Resolved
Updated by Patrick Donnelly over 1 year ago
- Related to Bug #57923: log: writes to stderr (pipe) may not be atomic added