Bug #45421

Updated by Brad Hubbard about 4 years ago

/a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014152 
 /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014151 
 /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014074 

 <pre> 
 2020-05-02T23:04:24.917 INFO:teuthology.orchestra.run.smithi043:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:beaa4b04bc57ed43e98602e493e8a787a014b4e6 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cfcaf4e8-8cc8-11ea-a068-001a4aab830c -- ceph mon dump -f json 
 2020-05-02T23:04:24.973 INFO:ceph.mon.c.smithi043.stdout:-- Logs begin at Sat 2020-05-02 22:48:38 UTC. -- 
 2020-05-02T23:04:24.974 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:24 smithi043 podman[10108]: 2020-05-02 23:04:24.506439991 +0000 UTC m=+0.549861856 container create c3ed65093dd89d593e40d2d1bbfa03c8dcb5f53ba7bdda77eacde8d9f1a9c28e (image=quay.io/ceph-ci/ceph:beaa4b04bc57ed43e98602e493e8a787a014b4e6, name=ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c-mon.c) 
 2020-05-02T23:04:26.463 INFO:ceph.mon.a.smithi129.stdout:May 02 23:04:26 smithi129 bash[10331]: cluster 2020-05-02T23:04:25.450994+0000 mgr.y (mgr.14142) 61 : cluster [DBG] pgmap v52: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:26.463 INFO:ceph.mon.b.smithi136.stdout:May 02 23:04:26 smithi136 bash[10198]: cluster 2020-05-02T23:04:25.450994+0000 mgr.y (mgr.14142) 61 : cluster [DBG] pgmap v52: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:28.467 INFO:ceph.mon.a.smithi129.stdout:May 02 23:04:28 smithi129 bash[10331]: cluster 2020-05-02T23:04:27.451441+0000 mgr.y (mgr.14142) 62 : cluster [DBG] pgmap v53: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:28.468 INFO:ceph.mon.b.smithi136.stdout:May 02 23:04:28 smithi136 bash[10198]: cluster 2020-05-02T23:04:27.451441+0000 mgr.y (mgr.14142) 62 : cluster [DBG] pgmap v53: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:30.472 INFO:ceph.mon.a.smithi129.stdout:May 02 23:04:30 smithi129 bash[10331]: cluster 2020-05-02T23:04:29.451910+0000 mgr.y (mgr.14142) 63 : cluster [DBG] pgmap v54: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:30.473 INFO:ceph.mon.b.smithi136.stdout:May 02 23:04:30 smithi136 bash[10198]: cluster 2020-05-02T23:04:29.451910+0000 mgr.y (mgr.14142) 63 : cluster [DBG] pgmap v54: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:32.476 INFO:ceph.mon.a.smithi129.stdout:May 02 23:04:32 smithi129 bash[10331]: cluster 2020-05-02T23:04:31.452355+0000 mgr.y (mgr.14142) 64 : cluster [DBG] pgmap v55: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:32.477 INFO:ceph.mon.b.smithi136.stdout:May 02 23:04:32 smithi136 bash[10198]: cluster 2020-05-02T23:04:31.452355+0000 mgr.y (mgr.14142) 64 : cluster [DBG] pgmap v55: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:32.497 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 podman[10108]: 2020-05-02 23:04:32.500299107 +0000 UTC m=+8.543721099 container remove c3ed65093dd89d593e40d2d1bbfa03c8dcb5f53ba7bdda77eacde8d9f1a9c28e (image=quay.io/ceph-ci/ceph:beaa4b04bc57ed43e98602e493e8a787a014b4e6, name=ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c-mon.c) 
 2020-05-02T23:04:32.500 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 bash[10103]: time="2020-05-02T23:04:32Z" level=error msg="unable to remove container c3ed65093dd89d593e40d2d1bbfa03c8dcb5f53ba7bdda77eacde8d9f1a9c28e after failing to start and attach to it" 
 2020-05-02T23:04:32.558 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 bash[10103]: Error: container_linux.go:345: starting container process caused "exec: \"/usr/bin/ceph-mon\": stat /usr/bin/ceph-mon: no such file or directory" 
 2020-05-02T23:04:32.559 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 bash[10103]: : OCI runtime error 
 2020-05-02T23:04:32.579 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 systemd[1]: ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c@mon.c.service: main process exited, code=exited, status=127/n/a 
 2020-05-02T23:04:32.792 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 podman[10373]: Error: no container with name or ID ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c-mon.c found: no such container 
 2020-05-02T23:04:32.815 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 systemd[1]: Unit ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c@mon.c.service entered failed state. 
 2020-05-02T23:04:32.816 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:32 smithi043 systemd[1]: ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c@mon.c.service failed. 
 </pre> 

 <pre> 
 2020-05-02T23:04:42.923 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:42 smithi043 systemd[1]: ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c@mon.c.service holdoff time over, scheduling restart. 
 2020-05-02T23:04:42.923 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:42 smithi043 systemd[1]: Stopped Ceph mon.c for cfcaf4e8-8cc8-11ea-a068-001a4aab830c. 
 2020-05-02T23:04:42.925 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:42 smithi043 systemd[1]: Starting Ceph mon.c for cfcaf4e8-8cc8-11ea-a068-001a4aab830c... 
 2020-05-02T23:04:43.039 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:43 smithi043 podman[10992]: Error: no container with name or ID ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c-mon.c found: no such container 
 2020-05-02T23:04:43.047 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:43 smithi043 systemd[1]: Started Ceph mon.c for cfcaf4e8-8cc8-11ea-a068-001a4aab830c. 
 2020-05-02T23:04:43.516 INFO:ceph.mon.a.smithi129.stdout:May 02 23:04:43 smithi129 bash[10331]: audit 2020-05-02T23:04:42.890594+0000 mon.a (mon.0) 168 : audit [DBG] from='client.? 172.21.15.43:0/3120862389' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 
 2020-05-02T23:04:43.517 INFO:ceph.mon.b.smithi136.stdout:May 02 23:04:43 smithi136 bash[10198]: audit 2020-05-02T23:04:42.890594+0000 mon.a (mon.0) 168 : audit [DBG] from='client.? 172.21.15.43:0/3120862389' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 
 2020-05-02T23:04:43.690 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:43 smithi043 bash[11026]: Error: error creating container storage: the container name "ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c-mon.c" is already in use by "c3ed65093dd89d593e40d2d1bbfa03c8dcb5f53ba7bdda77eacde8d9f1a9c28e". You have to remove that container to be able to reuse that name.: that name is already in use 
 2020-05-02T23:04:43.696 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:43 smithi043 systemd[1]: ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c@mon.c.service: main process exited, code=exited, status=125/n/a 
 2020-05-02T23:04:43.805 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:43 smithi043 podman[11053]: Error: no container with name or ID ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c-mon.c found: no such container 
 2020-05-02T23:04:43.818 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:43 smithi043 systemd[1]: Unit ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c@mon.c.service entered failed state. 
 2020-05-02T23:04:43.819 INFO:ceph.mon.c.smithi043.stdout:May 02 23:04:43 smithi043 systemd[1]: ceph-cfcaf4e8-8cc8-11ea-a068-001a4aab830c@mon.c.service failed. 
 2020-05-02T23:04:44.518 INFO:ceph.mon.a.smithi129.stdout:May 02 23:04:44 smithi129 bash[10331]: cluster 2020-05-02T23:04:43.455211+0000 mgr.y (mgr.14142) 70 : cluster [DBG] pgmap v61: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:44.519 INFO:ceph.mon.b.smithi136.stdout:May 02 23:04:44 smithi136 bash[10198]: cluster 2020-05-02T23:04:43.455211+0000 mgr.y (mgr.14142) 70 : cluster [DBG] pgmap v61: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:04:44.694 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
 </pre> 
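The restart loop above fails because the half-created container from the first attempt still holds the name `ceph-...-mon.c`, and the `podman rm` that runs before the restart reports "no such container" even though the storage entry exists. The general remove-before-create pattern the unit needs can be sketched as follows; this is an illustration with hypothetical names and an injectable command runner, not cephadm's actual unit logic:

```python
import subprocess


def ensure_fresh_container(name, create_cmd, run=subprocess.run):
    """Best-effort removal of any leftover container holding `name`
    before creating a new one, so a half-created container from a
    failed start cannot block the restart with a name conflict.

    `run` is injectable for testing; the podman invocations are
    illustrative, not cephadm's real unit file.
    """
    # Ignore "no such container" errors from the cleanup step.
    run(['podman', 'rm', '-f', name], check=False)
    # Only the create step is allowed to fail loudly.
    return run(create_cmd, check=True)
```

In the failure above even this pattern would not help, because podman's `rm` could not see the leftover storage under that name; only removing the underlying container storage clears the conflict.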

 <pre> 
 2020-05-02T23:16:38.351 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
 2020-05-02T23:16:38.351 INFO:teuthology.orchestra.run.smithi043:> true 
 2020-05-02T23:16:38.467 INFO:teuthology.orchestra.run.smithi043:> sudo /home/ubuntu/cephtest/cephadm --image quay.io/ceph-ci/ceph:beaa4b04bc57ed43e98602e493e8a787a014b4e6 shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid cfcaf4e8-8cc8-11ea-a068-001a4aab830c -- ceph mon dump -f json 
 2020-05-02T23:16:38.680 INFO:ceph.mon.a.smithi129.stdout:May 02 23:16:38 smithi129 bash[10331]: cluster 2020-05-02T23:16:37.628693+0000 mgr.y (mgr.14142) 427 : cluster [DBG] pgmap v418: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:16:38.682 INFO:ceph.mon.b.smithi136.stdout:May 02 23:16:38 smithi136 bash[10198]: cluster 2020-05-02T23:16:37.628693+0000 mgr.y (mgr.14142) 427 : cluster [DBG] pgmap v418: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:16:40.685 INFO:ceph.mon.a.smithi129.stdout:May 02 23:16:40 smithi129 bash[10331]: cluster 2020-05-02T23:16:39.629217+0000 mgr.y (mgr.14142) 428 : cluster [DBG] pgmap v419: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:16:40.686 INFO:ceph.mon.b.smithi136.stdout:May 02 23:16:40 smithi136 bash[10198]: cluster 2020-05-02T23:16:39.629217+0000 mgr.y (mgr.14142) 428 : cluster [DBG] pgmap v419: 1 pgs: 1 unknown; 0 B data, 0 B used, 0 B / 0 B avail 
 2020-05-02T23:16:40.742 INFO:teuthology.orchestra.run.smithi043.stdout: 
 2020-05-02T23:16:40.742 INFO:teuthology.orchestra.run.smithi043.stdout:{"epoch":2,"fsid":"cfcaf4e8-8cc8-11ea-a068-001a4aab830c","modified":"2020-05-02T23:04:12.125642Z","created":"2020-05-02T23:01:50.425360Z","min_mon_release":16,"min_mon_release_name":"pacific","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"172.21.15.129:3300","nonce":0},{"type":"v1","addr":"172.21.15.129:6789","nonce":0}]},"addr":"172.21.15.129:6789/0","public_addr":"172.21.15.129:6789/0","priority":0,"weight":0},{"rank":1,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"172.21.15.136:3300","nonce":0},{"type":"v1","addr":"172.21.15.136:6789","nonce":0}]},"addr":"172.21.15.136:6789/0","public_addr":"172.21.15.136:6789/0","priority":0,"weight":0}],"quorum":[0,1]} 
 2020-05-02T23:16:40.744 INFO:teuthology.orchestra.run.smithi043.stderr:dumped monmap epoch 2 
 2020-05-02T23:16:41.378 ERROR:teuthology.contextutil:Saw exception from nested tasks 
 Traceback (most recent call last): 
   File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 32, in nested 
     vars.append(enter()) 
   File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__ 
     return next(self.gen) 
   File "/home/teuthworker/src/github.com_ceph_ceph_master/qa/tasks/cephadm.py", line 495, in ceph_mons 
     while proceed(): 
   File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 134, in __call__ 
     raise MaxWhileTries(error_msg) 
 teuthology.exceptions.MaxWhileTries: reached maximum tries (180) after waiting for 180 seconds 
 </pre>
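The final `MaxWhileTries` comes from teuthology's bounded retry helper in `contextutil.py`: `ceph_mons` polls `ceph mon dump` via `while proceed():` until all three mons appear, and gives up after the try budget is spent. A minimal sketch of that pattern (names, structure, and the injectable sleep are illustrative, not teuthology's actual implementation):

```python
import time


class MaxWhileTries(Exception):
    """Raised when a bounded wait loop exhausts its try budget."""


class BoundedProceed:
    """Sketch of a safe_while-style loop guard: each call returns True
    for another attempt, sleeping between attempts, and raises
    MaxWhileTries once `tries` attempts have been made."""

    def __init__(self, sleep=1, tries=180, _sleep_fn=time.sleep):
        self.sleep = sleep
        self.tries = tries
        self.attempt = 0
        self._sleep_fn = _sleep_fn  # injectable so tests need not wait

    def __call__(self):
        self.attempt += 1
        if self.attempt > self.tries:
            raise MaxWhileTries(
                'reached maximum tries (%d) after waiting for %d seconds'
                % (self.tries, self.tries * self.sleep))
        if self.attempt > 1:
            self._sleep_fn(self.sleep)
        return True
```

Used as `proceed = BoundedProceed(); while proceed(): ...check monmap...`, the loop here could never succeed: mon.c's container never starts, so the monmap stays at two mons and the guard raises after 180 tries, matching the traceback above.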
