Ceph: Issues
https://tracker.ceph.com/
2021-10-12T10:17:46Z
Orchestrator - Bug #52898 (Resolved): cephadm: Unable to create max luns per iSCSI target: thread...
https://tracker.ceph.com/issues/52898
2021-10-12T10:17:46Z
Sebastian Wagner
Description of problem:

Unable to create the maximum number of LUNs per target. The container crashes after a number of LUNs have been created.

Version-Release number of selected component (if applicable):
ceph version 16.2.0 pacific (stable)

How reproducible:
100%

Steps to Reproduce:

1. Create the gateways using the file below.

<pre>
[ceph: root@host104 ~]# cat iscsi.yaml 
service_type: iscsi
service_id: iscsi
placement:
  hosts:
  - host108
  - host113
spec:
  pool: iscsi_pool
  trusted_ip_list: "ipv4,ipv6"
  api_user: admin
  api_password: admin
[ceph: root@host104 ~]#
</pre>

2. Start the iSCSI gateways using gwcli.

3. Create the target and gateways:

<pre>
/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
  o- iqn.2003-01.com.example.iscsi-gw:ceph-igw ............................................................ [Auth: None, Gateways: 2]
    o- disks ............................................................................................................ [Disks: 0]
    o- gateways .............................................................................................. [Up: 2/2, Portals: 2]
    | o- host108 .............................................................................................. [1.0.0.108 (UP)]
    | o- host113 .............................................................................................. [1.0.0.113 (UP)]
    o- host-groups .................................................................................................... [Groups : 0]
    o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 0]
/iscsi-targets>
</pre>

4. Create the client IQN.

5. Create images and add disks to the client.

Actual results:

<pre>
/iscsi-target...at:rh7-client> disk add iscsi_pool/image127
ok
/iscsi-target...at:rh7-client> disk add iscsi_pool/image128
Exception in thread Thread-11:
Traceback (most recent call last):
  File "/usr/lib64/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.6/threading.py", line 1182, in run
    self.function(*self.args, **self.kwargs)
  File "/usr/lib/python3.6/site-packages/gwcli/gateway.py", line 646, in check_gateways
    check_thread.start()
  File "/usr/lib64/python3.6/threading.py", line 846, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
[root@host108 ubuntu]# podman exec -it ff1f0ffc5f35 sh
Error: no container with name or ID ff1f0ffc5f35 found: no such container
[root@host108 ubuntu]#
</pre>

Expected results:
The maximum number of LUNs per target can be created.
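For context, here is a minimal Python sketch of the failure mode visible in the traceback (hypothetical code, not gwcli's actual implementation; check_gateway is a stand-in name): spawning one OS thread per check with no upper bound eventually exhausts the process's thread limit (inside a container this is commonly bounded by the pids cgroup limit), at which point Thread.start() raises "RuntimeError: can't start new thread". A bounded pool keeps thread consumption independent of how many LUNs exist.

<pre>
# Hypothetical sketch of the failure mode, not gwcli's actual code.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def check_gateway(gw):
    time.sleep(0.1)  # stand-in for the real per-gateway health check

def check_gateways_unbounded(gateways):
    for gw in gateways:
        t = threading.Thread(target=check_gateway, args=(gw,))
        t.start()  # raises RuntimeError("can't start new thread")
                   # once the process hits its thread/pids limit

def check_gateways_bounded(gateways, max_workers=8):
    # A fixed-size pool caps concurrent threads, so the number of
    # LUNs no longer drives thread consumption.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        list(pool.map(check_gateway, gateways))
</pre>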
Orchestrator - Bug #48535 (Resolved): QA smoke test: cephadm is removing mgr.y
https://tracker.ceph.com/issues/48535
2020-12-10T11:32:13Z
Sebastian Wagner
https://pulpito.ceph.com/yuriw-2020-12-08_16:18:10-rados-octopus-distro-basic-smithi/5693969
cephadm is properly deploying mgr.y:
<pre>
Dec 08 18:58:47 smithi099 bash[10707]: audit 2020-12-08T18:58:47.142557+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14138 172.21.15.131:0/1846859103' entity='mgr.y' cmd='[{"prefix":"config-ke
y set","key":"mgr/cephadm/host.smithi131","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi131\", \"container_id\": \"5fdb0de44749\", \"container_image_id\": \"dae82b93a77958a9a6819f28e335d1a604c7e3cdecf8bba
5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_desc\": \"running\", \"is_active\": false, \"last_refresh\": \"2020-12-08T18:58:
47.137207\", \"created\": \"2020-12-08T18:56:38.730085\", \"started\": \"2020-12-08T18:56:45.378062\"}, \"mgr.y\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"y\", \"hostname\": \"smithi131\", \"container_id\": \"620f6dfea3b3\", \"container_image_id\": \"dae82b9
3a77958a9a6819f28e335d1a604c7e3cdecf8bba5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_desc\": \"running\", \"is_active\": fals
e, \"last_refresh\": \"2020-12-08T18:58:47.137300\", \"created\": \"2020-12-08T18:56:47.849275\", \"started\": \"2020-12-08T18:56:47.941925\"}, \"mon.c\": {\"daemon_type\": \"mon\", \"daemon_id\": \"c\", \"hostname\": \"smithi131\", \"container_id\": \"8566d05a0
1c5\", \"container_image_id\": \"dae82b93a77958a9a6819f28e335d1a604c7e3cdecf8bba5e0c820df2b53a64a\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe\", \"version\": \"15.2.7-661-gb1c1268b\", \"status\": 1, \"status_
desc\": \"running\", \"is_active\": false, \"last_refresh\": \"2020-12-08T18:58:47.137347\", \"created\": \"2020-12-08T18:58:16.069285\", \"started\": \"2020-12-08T18:58:16.157886\"}}, \"devices\": [...], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-12-08T18:58:44.148173\"}, \"mgr.y\": {\"deps\": [], \"last_con
fig\": \"2020-12-08T18:58:36.178335\"}, \"mon.c\": {\"deps\": [], \"last_config\": \"2020-12-08T18:58:38.524511\"}}, \"last_daemon_update\": \"2020-12-08T18:58:47.137450\", \"last_device_update\": \"2020-12-08T18:57:36.298984\", \"networks\": {\"172.17.0.0/16\":
[\"172.17.0.1\"], \"172.21.0.0/20\": [\"172.21.15.131\"], \"172.21.15.254\": [\"172.21.15.131\"], \"fe80::/64\": [\"fe80::ec4:7aff:fe88:72f9\"]}, \"last_host_check\": \"2020-12-08T18:57:14.445904\"}"}]': finished
</pre>
But at some point, cephadm.py decides to remove it again:
<pre>
Dec 08 18:58:41.703 INFO:teuthology.orchestra.run.smithi099:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid 07c2237e-3987-11eb-9811-001a4aab830c -- ceph orch apply mgr '2;smithi099=x'
Dec 08 18:58:47 smithi131 bash[10704]: cephadm 2020-12-08T18:58:47.157062+0000 mgr.y (mgr.14138) 62 : cephadm [INF] Deploying daemon mgr.x on smithi099
Dec 08 18:58:47 smithi099 bash[10707]: cephadm 2020-12-08T18:58:47.157062+0000 mgr.y (mgr.14138) 62 : cephadm [INF] Deploying daemon mgr.x on smithi099
Dec 08 18:58:49 smithi099 bash[10707]: cephadm 2020-12-08T18:58:49.238457+0000 mgr.y (mgr.14138) 64 : cephadm [INF] It is presumed safe to stop ['mgr.y']
Dec 08 18:58:49 smithi099 bash[10707]: cephadm 2020-12-08T18:58:49.238723+0000 mgr.y (mgr.14138) 65 : cephadm [INF] Removing daemon mgr.y from smithi131
</pre>
As a result, the mgr is missing:
<pre>
2020-12-08T19:05:16.283 INFO:teuthology.orchestra.run.smithi131.stdout:NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:alertmanager 1/1 89s ago 2m smithi131=a;count:1 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:grafana 1/1 89s ago 2m smithi099=a;count:1 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:iscsi.iscsi 1/1 89s ago 2m smithi099=iscsi.a;count:1 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.284 INFO:teuthology.orchestra.run.smithi131.stdout:mgr 1/2 89s ago 6m smithi099=x;count:2 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:mon 3/0 89s ago - <unmanaged> quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter 2/2 89s ago 2m smithi131=a;smithi099=b;count:2 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:osd.None 8/0 89s ago - <unmanaged> quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:16.285 INFO:teuthology.orchestra.run.smithi131.stdout:prometheus 1/1 89s ago 2m smithi099=a;count:1 docker.io/prom/prometheus:v2.18.1 de242295e225
2020-12-08T19:05:16.286 INFO:teuthology.orchestra.run.smithi131.stdout:rgw.realm.zone 1/1 89s ago 2m smithi131=realm.zone.a;count:1 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779
2020-12-08T19:05:12.375 INFO:teuthology.orchestra.run.smithi131.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2020-12-08T19:05:12.375 INFO:teuthology.orchestra.run.smithi131.stdout:alertmanager.a smithi131 running (89s) 85s ago 2m 0.20.0 docker.io/prom/alertmanager:v0.20.0 0881eb8f169f 5cfd91f2dec4
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:grafana.a smithi099 running (104s) 85s ago 104s 6.6.2 docker.io/ceph/ceph-grafana:6.6.2 a0dce381714a d22d8f54a540
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:iscsi.iscsi.a smithi099 running (2m) 85s ago 2m 3.4 quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 ee4b2dbcfe42
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mgr.x smithi099 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5b50fc3e28b6
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mon.a smithi131 running (8m) 85s ago 8m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5fdb0de44749
2020-12-08T19:05:12.376 INFO:teuthology.orchestra.run.smithi131.stdout:mon.b smithi099 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 6d898409329d
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:mon.c smithi131 running (6m) 85s ago 6m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 8566d05a01c5
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter.a smithi131 running (2m) 85s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf 6c48edd25d55
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:node-exporter.b smithi099 running (2m) 85s ago 2m 0.18.1 docker.io/prom/node-exporter:v0.18.1 e5a616e4b9cf e4321578ec02
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:osd.0 smithi131 running (5m) 85s ago 5m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 013ba62b1a67
2020-12-08T19:05:12.377 INFO:teuthology.orchestra.run.smithi131.stdout:osd.1 smithi131 running (5m) 85s ago 5m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 b3249b0b5044
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.2 smithi131 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 9b79623d7e60
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.3 smithi131 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 4626c1117138
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.4 smithi099 running (4m) 85s ago 4m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 4a1c1bbd5040
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.5 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 c3e1893f1cc6
2020-12-08T19:05:12.378 INFO:teuthology.orchestra.run.smithi131.stdout:osd.6 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 292f1b5ea013
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:osd.7 smithi099 running (3m) 85s ago 3m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 5a88aabbe925
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:prometheus.a smithi099 running (95s) 85s ago 2m 2.18.1 docker.io/prom/prometheus:v2.18.1 de242295e225 45d6bf1407d5
2020-12-08T19:05:12.379 INFO:teuthology.orchestra.run.smithi131.stdout:rgw.realm.zone.a smithi131 running (2m) 85s ago 2m 15.2.7-661-gb1c1268b quay.ceph.io/ceph-ci/ceph:b1c1268b5c492c09ac25a8ffa21109a4387acffe dae82b93a779 ae6e6aa0af5c
</pre>
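The sequence above is consistent with the reconciliation loop over-pruning: the placement '2;smithi099=x' asks for two mgrs and additionally pins daemon x to smithi099, yet the scheduler marks the only other running mgr (mgr.y on smithi131) as "presumed safe to stop". A hypothetical sketch of such a diff (illustrative only, not cephadm's actual scheduler code):

<pre>
# Hypothetical sketch, not cephadm's real scheduler: a reconcile that
# builds the desired set only from the explicit placements, so every
# running daemon not explicitly named looks removable, even though
# removing it drops the service below the requested count.
def reconcile(running, explicit, count):
    """running: {daemon_id: host}; explicit: {daemon_id: host} from the spec."""
    to_deploy = {d: h for d, h in explicit.items() if d not in running}
    # Buggy diff: anything not explicitly placed is scheduled for removal.
    to_remove = [d for d in running if d not in explicit]
    return to_deploy, to_remove

running  = {"y": "smithi131"}
explicit = {"x": "smithi099"}      # from 'ceph orch apply mgr 2;smithi099=x'
deploy, remove = reconcile(running, explicit, count=2)
print(deploy, remove)              # {'x': 'smithi099'} ['y']  ->  1/2 mgrs
</pre>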
CephFS - Bug #48491 (Resolved): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
https://tracker.ceph.com/issues/48491
2020-12-08T11:06:56Z
Sebastian Wagner
<pre>
2020-12-07T20:32:02.847 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-12-07T20:32:02.847 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner3-testing-2020-12-07-1114/qa/tasks/cephfs/test_nfs.py", line 436, in test_cluster_info
2020-12-07T20:32:02.847 INFO:tasks.cephfs_test_runner: self.assertDictEqual(info_output, host_details)
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner:AssertionError: {'tes[21 chars]ithi069', 'ip': ['172.21.15.69', '127.0.1.1'], 'port': 2049}]} != {'tes[21 chars]ithi069', 'ip': ['172.17.0.1', '172.21.15.69'], 'port': 2049}]}
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner: {'test': [{'hostname': 'smithi069',
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner:- 'ip': ['172.21.15.69', '127.0.1.1'],
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner:+ 'ip': ['172.17.0.1', '172.21.15.69'],
2020-12-07T20:32:02.849 INFO:tasks.cephfs_test_runner: 'port': 2049}]}
2020-12-07T20:32:02.849 INFO:tasks.cephfs_test_runner:
2020-12-07T20:32:02.849 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-12-07T20:32:02.850 INFO:tasks.cephfs_test_runner:Ran 1 test in 32.117s
</pre>
https://pulpito.ceph.com/swagner-2020-12-07_12:07:52-rados:cephadm-wip-swagner3-testing-2020-12-07-1114-distro-basic-smithi/5689544/
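The mismatch is in content as well as ordering: `nfs cluster info` reports the loopback alias 127.0.1.1, while `hostname -I` reports what appears to be a container bridge address, 172.17.0.1. A hypothetical sketch of an order-insensitive comparison that ignores loopback addresses (illustrative only, not necessarily the fix that was merged):

<pre>
# Hypothetical sketch, not the merged fix: compare IP lists as sets and
# drop loopback addresses that vary per host configuration.
import ipaddress

def comparable_ips(ips):
    return {ip for ip in ips if not ipaddress.ip_address(ip).is_loopback}

info_ips = ["172.21.15.69", "127.0.1.1"]   # from `nfs cluster info test`
host_ips = ["172.21.15.69", "172.17.0.1"]  # from `hostname -I`

# The cluster-reported addresses should be a subset of the host's addresses.
assert comparable_ips(info_ips) <= comparable_ips(host_ips)
</pre>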
log snippet:
<pre>
2020-12-07T20:31:47.592 INFO:tasks.cephfs_test_runner:Starting test: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
2020-12-07T20:31:47.596 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_info'
2020-12-07T20:31:48.934 INFO:teuthology.orchestra.run.smithi069:> sudo systemctl status nfs-server
2020-12-07T20:31:48.982 INFO:teuthology.orchestra.run.smithi069.stdout:* nfs-server.service - NFS server and services
2020-12-07T20:31:48.983 INFO:teuthology.orchestra.run.smithi069.stdout: Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
2020-12-07T20:31:48.983 INFO:teuthology.orchestra.run.smithi069.stdout: Active: active (exited) since Mon 2020-12-07 20:22:19 UTC; 9min ago
2020-12-07T20:31:48.984 INFO:teuthology.orchestra.run.smithi069.stdout: Main PID: 1291 (code=exited, status=0/SUCCESS)
2020-12-07T20:31:48.985 INFO:teuthology.orchestra.run.smithi069.stdout: Tasks: 0 (limit: 4915)
2020-12-07T20:31:48.985 INFO:teuthology.orchestra.run.smithi069.stdout: CGroup: /system.slice/nfs-server.service
2020-12-07T20:31:48.986 INFO:teuthology.orchestra.run.smithi069.stdout:
2020-12-07T20:31:48.987 INFO:teuthology.orchestra.run.smithi069.stdout:Dec 07 20:22:19 smithi121 systemd[1]: Starting NFS server and services...
2020-12-07T20:31:48.988 INFO:teuthology.orchestra.run.smithi069.stdout:Dec 07 20:22:19 smithi121 systemd[1]: Started NFS server and services.
2020-12-07T20:31:48.991 INFO:tasks.cephfs.test_nfs:Disabling NFS
2020-12-07T20:31:48.993 INFO:teuthology.orchestra.run.smithi069:> sudo systemctl disable nfs-server --now
2020-12-07T20:31:49.057 INFO:teuthology.orchestra.run.smithi069.stderr:Removed /etc/systemd/system/multi-user.target.wants/nfs-server.service.
2020-12-07T20:31:49.194 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster create cephfs test
2020-12-07T20:31:51.150 INFO:teuthology.orchestra.run.smithi069.stdout:NFS Cluster Created Successfully
2020-12-07T20:32:01.168 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.ganesha-test
2020-12-07T20:32:01.498 INFO:teuthology.orchestra.run.smithi069.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2020-12-07T20:32:01.498 INFO:teuthology.orchestra.run.smithi069.stdout:nfs.ganesha-test.smithi069 smithi069 running (6s) 4s ago 6s 3.3 quay.ceph.io/ceph-ci/ceph:acb9f90cf72d3c35abbc0efbef808026624ef6c0 59dd54023d46 4282061f6a85
2020-12-07T20:32:01.512 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info test
2020-12-07T20:32:01.862 INFO:teuthology.orchestra.run.smithi069.stdout:{
2020-12-07T20:32:02.312 INFO:teuthology.orchestra.run.smithi069.stdout: "test": [
2020-12-07T20:32:02.312 INFO:teuthology.orchestra.run.smithi069.stdout: {
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "hostname": "smithi069",
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "ip": [
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "172.21.15.69",
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "127.0.1.1"
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: ],
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: "port": 2049
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: }
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: ]
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout:}
2020-12-07T20:32:02.316 INFO:teuthology.orchestra.run.smithi069:> sudo hostname
2020-12-07T20:32:02.329 INFO:teuthology.orchestra.run.smithi069.stdout:smithi069
2020-12-07T20:32:02.331 INFO:teuthology.orchestra.run.smithi069:> sudo hostname -I
2020-12-07T20:32:02.387 INFO:teuthology.orchestra.run.smithi069.stdout:172.21.15.69 172.17.0.1
</pre>
CephFS - Bug #47009 (Resolved): TestNFS.test_cluster_set_reset_user_config: command failed with s...
https://tracker.ceph.com/issues/47009
2020-08-18T12:50:40Z
Sebastian Wagner
<pre>
sed -n '/2020-08-18T12:13:28.371/,/2020-08-18T12:17:23.469/p' *
2020-08-18T12:13:28.371 INFO:tasks.cephfs_test_runner:Starting test: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2020-08-18T12:13:28.372 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:27.346480+0000 mon.a (mon.0) 397 : audit [INF] from='client.? 172.21.15.36:0/3067776373' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]': finished
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:27.695797+0000 mgr.a (mgr.14247) 35 : audit [DBG] from='client.14288 -' entity='client.admin' cmd=[{"prefix": "nfs cluster config reset", "clusterid": "invalidtest", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: cluster 2020-08-18T12:13:28.012636+0000 client.admin (client.?) 0 : cluster [INF] Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid
2020-08-18T12:13:28.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:28.012780+0000 mon.a (mon.0) 398 : audit [INF] from='client.? 172.21.15.36:0/164629844' entity='client.admin' cmd=[{"prefix": "log", "logtext": ["Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]: dispatch
2020-08-18T12:13:29.377 INFO:teuthology.orchestra.run.smithi036:> sudo systemctl status nfs-server
2020-08-18T12:13:29.399 INFO:teuthology.orchestra.run.smithi036.stdout:* nfs-server.service - NFS server and services
2020-08-18T12:13:29.400 INFO:teuthology.orchestra.run.smithi036.stdout: Loaded: loaded (/lib/systemd/system/nfs-server.service; disabled; vendor preset: enabled)
2020-08-18T12:13:29.400 INFO:teuthology.orchestra.run.smithi036.stdout: Active: inactive (dead)
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:04:04 smithi036 systemd[1]: Starting NFS server and services...
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:04:04 smithi036 systemd[1]: Started NFS server and services.
2020-08-18T12:13:29.402 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:13:04 smithi036 systemd[1]: Stopping NFS server and services...
2020-08-18T12:13:29.402 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:13:04 smithi036 systemd[1]: Stopped NFS server and services.
2020-08-18T12:13:29.404 DEBUG:teuthology.orchestra.run:got remote process result: 3
2020-08-18T12:13:29.404 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early nfs cluster create cephfs test
2020-08-18T12:13:29.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: audit 2020-08-18T12:13:28.348582+0000 mon.a (mon.0) 399 : audit [INF] from='client.? 172.21.15.36:0/164629844' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]': finished
2020-08-18T12:13:29.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: cluster 2020-08-18T12:13:28.692094+0000 client.admin (client.?) 0 : cluster [INF] Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config
2020-08-18T12:13:29.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: audit 2020-08-18T12:13:28.692238+0000 mon.a (mon.0) 400 : audit [INF] from='client.? 172.21.15.36:0/1973330598' entity='client.admin' cmd=[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config"]}]: dispatch
2020-08-18T12:13:29.791 INFO:teuthology.orchestra.run.smithi036.stdout:NFS Cluster Created Successfully
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.354619+0000 mon.a (mon.0) 401 : audit [INF] from='client.? 172.21.15.36:0/1973330598' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config"]}]': finished
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.776446+0000 mgr.a (mgr.14247) 37 : audit [DBG] from='client.14294 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "type": "cephfs", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.785356+0000 mgr.a (mgr.14247) 38 : cephadm [INF] Saving service nfs.ganesha-test spec with placement count:1
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.786010+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.785388\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]: dispatch
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.789858+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.785388\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]': finished
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.792400+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.795539+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.797870+0000 mgr.a (mgr.14247) 39 : cephadm [INF] Checking pool "nfs-ganesha" exists for service nfs.ganesha-test
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.798100+0000 mgr.a (mgr.14247) 40 : cephadm [INF] Saving service nfs.ganesha-test spec with placement count:1
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.798617+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.798122\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]: dispatch
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.802947+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.798122\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]': finished
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.803757+0000 mgr.a (mgr.14247) 41 : cephadm [INF] Create daemon ganesha-test.smithi036 on host smithi036 with spec NFSServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'nfs', 'service_id': 'ganesha-test', 'unmanaged': False, 'preview_only': False, 'pool': 'nfs-ganesha', 'namespace': 'test'})
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.804059+0000 mgr.a (mgr.14247) 42 : cephadm [INF] Checking pool "nfs-ganesha" exists for service nfs.ganesha-test
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.804216+0000 mgr.a (mgr.14247) 43 : cephadm [INF] Create keyring: client.nfs.ganesha-test.smithi036
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.804550+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-test.smithi036"}]: dispatch
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.805602+0000 mgr.a (mgr.14247) 44 : cephadm [INF] Updating keyring caps: client.nfs.ganesha-test.smithi036
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.805983+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "auth caps", "entity": "client.nfs.ganesha-test.smithi036", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test"]}]: dispatch
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.810709+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "auth caps", "entity": "client.nfs.ganesha-test.smithi036", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test"]}]': finished
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.813262+0000 mgr.a (mgr.14247) 45 : cephadm [INF] Rados config object exists: conf-nfs.ganesha-test
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.813995+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.814959+0000 mgr.a (mgr.14247) 46 : cephadm [INF] Deploying daemon nfs.ganesha-test.smithi036 on smithi036
2020-08-18T12:13:30.641 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.815454+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:13:32.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:32 smithi036 bash[14996]: audit 2020-08-18T12:13:31.658178+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-08-18T12:13:34.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.945089+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:34.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.949127+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:34.647 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.953850+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:34.647 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.956396+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:37.808 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early orch ls nfs
2020-08-18T12:13:38.115 INFO:teuthology.orchestra.run.smithi036.stdout:NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
2020-08-18T12:13:38.115 INFO:teuthology.orchestra.run.smithi036.stdout:nfs.ganesha-test 1/1 4s ago 8s count:1 quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7 27a13cd81179
2020-08-18T12:13:38.395 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:38 smithi036 bash[14996]: audit 2020-08-18T12:13:38.112053+0000 mgr.a (mgr.14247) 51 : audit [DBG] from='client.14303 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_type": "nfs", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:08.131 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs volume create user_test_fs
2020-08-18T12:14:09.649 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:09 smithi036 bash[14996]: audit 2020-08-18T12:14:08.447802+0000 mgr.a (mgr.14247) 67 : audit [DBG] from='client.14307 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "user_test_fs", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:09.650 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:09 smithi036 bash[14996]: audit 2020-08-18T12:14:08.448662+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.meta"}]: dispatch
2020-08-18T12:14:10.406 INFO:teuthology.orchestra.run.smithi036.stderr:new fs with metadata pool 3 and data pool 4
2020-08-18T12:14:10.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: debug 2020-08-18T12:14:10.371+0000 7f3f88120700 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2020-08-18T12:14:10.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: audit 2020-08-18T12:14:09.369221+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.meta"}]': finished
2020-08-18T12:14:10.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: cluster 2020-08-18T12:14:09.369383+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e27: 3 total, 3 up, 3 in
2020-08-18T12:14:10.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: audit 2020-08-18T12:14:09.372098+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.data"}]: dispatch
2020-08-18T12:14:11.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.371644+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.data"}]': finished
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.372094+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e28: 3 total, 3 up, 3 in
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.374540+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "fs new", "fs_name": "user_test_fs", "metadata": "cephfs.user_test_fs.meta", "data": "cephfs.user_test_fs.data"}]: dispatch
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.375970+0000 mon.a (mon.0) 429 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.376015+0000 mon.a (mon.0) 430 : cluster [WRN] Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.383345+0000 mon.a (mon.0) 431 : cluster [DBG] osdmap e29: 3 total, 3 up, 3 in
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.383558+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "fs new", "fs_name": "user_test_fs", "metadata": "cephfs.user_test_fs.meta", "data": "cephfs.user_test_fs.data"}]': finished
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.383625+0000 mon.a (mon.0) 433 : cluster [DBG] fsmap user_test_fs:0
2020-08-18T12:14:12.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:12 smithi036 bash[14996]: cluster 2020-08-18T12:14:11.397053+0000 mon.a (mon.0) 434 : cluster [DBG] osdmap e30: 3 total, 3 up, 3 in
2020-08-18T12:14:13.894 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:13 smithi036 bash[14996]: cluster 2020-08-18T12:14:12.402837+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e31: 3 total, 3 up, 3 in
2020-08-18T12:14:30.417 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early auth get-or-create-key client.test mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs' mds 'allow rw path=/'
2020-08-18T12:14:30.757 INFO:teuthology.orchestra.run.smithi036.stdout:AQAmxjtfQePTLBAA90sKBZ/mAeDvfQUPUVQ7AA==
2020-08-18T12:14:30.776 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early nfs cluster info test
2020-08-18T12:14:31.107 INFO:teuthology.orchestra.run.smithi036.stdout:{
2020-08-18T12:14:31.107 INFO:teuthology.orchestra.run.smithi036.stdout: "test": [
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: {
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: "hostname": "smithi036",
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: "ip": [
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: "172.21.15.36"
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: ],
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout: "port": 2049
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout: ]
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout:}
2020-08-18T12:14:31.116 INFO:teuthology.orchestra.run.smithi036:> sudo ceph nfs cluster config set test -i -
2020-08-18T12:14:31.393 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:30.751880+0000 mon.a (mon.0) 436 : audit [INF] from='client.? 172.21.15.36:0/275914116' entity='client.admin' cmd=[{"prefix": "auth get-or-create-key", "entity": "client.test", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs", "mds", "allow rw path=/"]}]: dispatch
2020-08-18T12:14:31.394 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:30.756918+0000 mon.a (mon.0) 437 : audit [INF] from='client.? 172.21.15.36:0/275914116' entity='client.admin' cmd='[{"prefix": "auth get-or-create-key", "entity": "client.test", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs", "mds", "allow rw path=/"]}]': finished
2020-08-18T12:14:31.394 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:31.098832+0000 mgr.a (mgr.14247) 79 : audit [DBG] from='client.14313 -' entity='client.admin' cmd=[{"prefix": "nfs cluster info", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.438397+0000 mgr.a (mgr.14247) 81 : audit [DBG] from='client.14315 -' entity='client.admin' cmd=[{"prefix": "nfs cluster config set", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: cephadm 2020-08-18T12:14:31.477406+0000 mgr.a (mgr.14247) 82 : cephadm [INF] Restart service nfs.ganesha-test
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.478191+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:14:32.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.745368+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:14:47.360 INFO:teuthology.orchestra.run.smithi036.stdout:NFS-Ganesha Config Set Successfully
2020-08-18T12:14:47.607 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:47 smithi036 bash[14996]: audit 2020-08-18T12:14:47.361760+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-08-18T12:14:49.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.826093+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi036","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"d36c25661867\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823102\", \"created\": \"2020-08-18T12:10:16.467600\", \"started\": \"2020-08-18T12:10:22.842682\"}, \"mgr.a\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"c590cd729900\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823283\", \"created\": \"2020-08-18T12:10:24.515600\", \"started\": \"2020-08-18T12:12:53.589820\"}, \"osd.0\": {\"daemon_type\": \"osd\", \"daemon_id\": \"0\", \"hostname\": \"smithi036\", \"container_id\": \"185d14a6bc0e\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823393\", \"created\": \"2020-08-18T12:11:29.443599\", \"started\": \"2020-08-18T12:11:32.358036\"}, \"osd.1\": {\"daemon_type\": \"osd\", \"daemon_id\": \"1\", \"hostname\": \"smithi036\", \"container_id\": \"18c28aa439b9\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823644\", \"created\": \"2020-08-18T12:11:47.099598\", \"started\": \"2020-08-18T12:11:49.606862\"}, \"osd.2\": {\"daemon_type\": \"osd\", \"daemon_id\": \"2\", \"hostname\": \"smithi036\", \"container_id\": \"c5ce7db6d02f\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823856\", \"created\": \"2020-08-18T12:12:04.511598\", \"started\": \"2020-08-18T12:12:07.007047\"}, \"mds.1.smithi036.fwyvuo\": {\"daemon_type\": \"mds\", \"daemon_id\": \"1.smithi036.fwyvuo\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": -1, \"status_desc\": \"error\", \"last_refresh\": \"2020-08-18T12:14:48.824061\", \"created\": \"2020-08-18T12:12:48.687597\"}, \"nfs.ganesha-test.smithi036\": {\"daemon_type\": \"nfs\", 
\"daemon_id\": \"ganesha-test.smithi036\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.824135\", \"created\": \"2020-08-18T12:13:31.627596\"}}, \"devices\": [{\"rejected_reasons\": [\"Insufficient space (<5GB) on vgs\", \"locked\", \"LVM detected\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"osd_id\": \"2\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"b672ef8c-b1b7-4095-9010-9e0dbb7fd560\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tmDUnC-MITj-o9dZ-VrFd-fVuq-YQu7-wAKj21\"}, {\"name\": \"lv_3\", \"osd_id\": \"1\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"00c9da60-5629-446b-81c4-2aa37c742fd3\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tWjwp6-rw22-xfX3-WTsV-NQfm-c2cR-RwkcQ2\"}, {\"name\": \"lv_4\", \"osd_id\": \"0\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"22b971a7-f36a-4ac8-bc5a-bc03c27f6ebd\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"yDckaE-Tm2y-CZ1W-MNvA-Z9at-Sx8F-Wy1sdo\"}, {\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT53300040400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN04\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"cfq\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W4J2QS\"}], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:09.324619\"}, \"mgr.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:10.820348\"}, \"osd.0\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:27.858274\"}, \"osd.1\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:45.513076\"}, \"osd.2\": {\"deps\": [], \"last_config\": \"2020-08-18T12:12:02.780273\"}, \"mds.1.smithi036.fwyvuo\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:01.350567\"}, \"nfs.ganesha-test.smithi036\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:29.803823\"}}, \"last_daemon_update\": \"2020-08-18T12:14:48.824320\", \"last_device_update\": \"2020-08-18T12:12:11.983889\", 
\"networks\": {\"172.21.0.0/20\": [\"172.21.15.36\"], \"172.21.15.254\": [\"172.21.15.36\"], \"fe80::/64\": [\"fe80::ae1f:6bff:fef8:2542\"]}, \"last_host_check\": \"2020-08-18T12:10:50.314546\"}"}]: dispatch
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.830355+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi036","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"d36c25661867\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823102\", \"created\": \"2020-08-18T12:10:16.467600\", \"started\": \"2020-08-18T12:10:22.842682\"}, \"mgr.a\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"c590cd729900\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823283\", \"created\": \"2020-08-18T12:10:24.515600\", \"started\": \"2020-08-18T12:12:53.589820\"}, \"osd.0\": {\"daemon_type\": \"osd\", \"daemon_id\": \"0\", \"hostname\": \"smithi036\", \"container_id\": \"185d14a6bc0e\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823393\", \"created\": \"2020-08-18T12:11:29.443599\", \"started\": \"2020-08-18T12:11:32.358036\"}, \"osd.1\": {\"daemon_type\": \"osd\", \"daemon_id\": \"1\", \"hostname\": \"smithi036\", \"container_id\": \"18c28aa439b9\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823644\", \"created\": \"2020-08-18T12:11:47.099598\", \"started\": \"2020-08-18T12:11:49.606862\"}, \"osd.2\": {\"daemon_type\": \"osd\", \"daemon_id\": \"2\", \"hostname\": \"smithi036\", \"container_id\": \"c5ce7db6d02f\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823856\", \"created\": \"2020-08-18T12:12:04.511598\", \"started\": \"2020-08-18T12:12:07.007047\"}, \"mds.1.smithi036.fwyvuo\": {\"daemon_type\": \"mds\", \"daemon_id\": \"1.smithi036.fwyvuo\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": -1, \"status_desc\": \"error\", \"last_refresh\": \"2020-08-18T12:14:48.824061\", \"created\": \"2020-08-18T12:12:48.687597\"}, \"nfs.ganesha-test.smithi036\": {\"daemon_type\": \"nfs\", 
\"daemon_id\": \"ganesha-test.smithi036\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.824135\", \"created\": \"2020-08-18T12:13:31.627596\"}}, \"devices\": [{\"rejected_reasons\": [\"Insufficient space (<5GB) on vgs\", \"locked\", \"LVM detected\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"osd_id\": \"2\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"b672ef8c-b1b7-4095-9010-9e0dbb7fd560\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tmDUnC-MITj-o9dZ-VrFd-fVuq-YQu7-wAKj21\"}, {\"name\": \"lv_3\", \"osd_id\": \"1\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"00c9da60-5629-446b-81c4-2aa37c742fd3\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tWjwp6-rw22-xfX3-WTsV-NQfm-c2cR-RwkcQ2\"}, {\"name\": \"lv_4\", \"osd_id\": \"0\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"22b971a7-f36a-4ac8-bc5a-bc03c27f6ebd\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"yDckaE-Tm2y-CZ1W-MNvA-Z9at-Sx8F-Wy1sdo\"}, {\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT53300040400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN04\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"cfq\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W4J2QS\"}], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:09.324619\"}, \"mgr.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:10.820348\"}, \"osd.0\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:27.858274\"}, \"osd.1\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:45.513076\"}, \"osd.2\": {\"deps\": [], \"last_config\": \"2020-08-18T12:12:02.780273\"}, \"mds.1.smithi036.fwyvuo\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:01.350567\"}, \"nfs.ganesha-test.smithi036\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:29.803823\"}}, \"last_daemon_update\": \"2020-08-18T12:14:48.824320\", \"last_device_update\": \"2020-08-18T12:12:11.983889\", 
\"networks\": {\"172.21.0.0/20\": [\"172.21.15.36\"], \"172.21.15.254\": [\"172.21.15.36\"], \"fe80::/64\": [\"fe80::ae1f:6bff:fef8:2542\"]}, \"last_host_check\": \"2020-08-18T12:10:50.314546\"}"}]': finished
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.831848+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.834997+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:15:17.380 INFO:teuthology.orchestra.run.smithi036:> sudo rados -p nfs-ganesha -N test ls
2020-08-18T12:15:17.451 INFO:teuthology.orchestra.run.smithi036.stdout:rec-0000000000000003:nfs.ganesha-test.smithi036
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:userconf-nfs.ganesha-test
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:grace
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:conf-nfs.ganesha-test
2020-08-18T12:15:17.456 INFO:tasks.cephfs.test_nfs:b'rec-0000000000000003:nfs.ganesha-test.smithi036\nuserconf-nfs.ganesha-test\ngrace\nconf-nfs.ganesha-test\n'
2020-08-18T12:15:17.456 INFO:teuthology.orchestra.run.smithi036:> sudo rados -p nfs-ganesha -N test get userconf-nfs.ganesha-test -
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout: LOG {
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout: Default_log_level = FULL_DEBUG;
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout:
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: EXPORT {
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: Export_Id = 100;
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: Transports = TCP;
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: Path = /;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Pseudo = /ceph/;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Protocols = 4;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Access_Type = RW;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Attr_Expiration_Time = 0;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Squash = None;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: FSAL {
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: Name = CEPH;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: Filesystem = user_test_fs;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: User_Id = test;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: Secret_Access_Key = 'AQAmxjtfQePTLBAA90sKBZ/mAeDvfQUPUVQ7AA==';
2020-08-18T12:15:17.508 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:15:17.509 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:15:17.509 INFO:teuthology.orchestra.run.smithi036:> sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt
2020-08-18T12:17:22.721 INFO:teuthology.orchestra.run.smithi036.stderr:mount.nfs: Connection timed out
2020-08-18T12:17:22.724 DEBUG:teuthology.orchestra.run:got remote process result: 32
2020-08-18T12:17:22.726 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2020-08-18T12:17:23.460 INFO:tasks.cephfs_test_runner:test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) ... ERROR
2020-08-18T12:17:23.461 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:======================================================================
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:ERROR: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing-2020-08-18-1033/qa/tasks/cephfs/test_nfs.py", line 449, in test_cluster_set_reset_user_config
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner: self.ctx.cluster.run(args=mnt_cmd)
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/cluster.py", line 64, in run
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: return [remote.run(**kwargs) for remote in remotes]
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/cluster.py", line 64, in <listcomp>
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: return [remote.run(**kwargs) for remote in remotes]
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 204, in run
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner: r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner: r.wait()
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner: self._raise_for_status()
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner: node=self.hostname, label=self.label
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi036 with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt'
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:Ran 3 tests in 274.876s
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.469 INFO:tasks.cephfs_test_runner:FAILED (errors=1)
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-08-18_11:48:28-rados:cephadm-wip-swagner-testing-2020-08-18-1033-distro-basic-smithi/5356227/">https://pulpito.ceph.com/swagner-2020-08-18_11:48:28-rados:cephadm-wip-swagner-testing-2020-08-18-1033-distro-basic-smithi/5356227/</a></p>
Dashboard - Bug #46623 (Resolved): mgr/dashboard: ui/navigation.e2e-spec.ts: AssertionError: Time...
https://tracker.ceph.com/issues/46623
2020-07-20T08:20:41Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-dashboard-pull-requests/4980/consoleFull#842358957c212b007-e891-4176-9ee7-2f60eca393b7">https://jenkins.ceph.com/job/ceph-dashboard-pull-requests/4980/consoleFull#842358957c212b007-e891-4176-9ee7-2f60eca393b7</a></p>
<pre>
────────────────────────────────────────────────────────────────────────────────────────────────────
Running: ui/notification.e2e-spec.ts (21 of 23)
Estimated: 56 seconds
Notification page
1) "before all" hook for "should open notification sidebar"
2) "after all" hook for "should open notification sidebar"
0 passing (32s)
2 failing
1) Notification page
"before all" hook for "should open notification sidebar":
AssertionError: Timed out retrying: Expected not to find content: 'Creating...' but continuously found it.
Because this error occurred during a `before all` hook we are skipping the remaining tests in the current suite: `Notification page`
at PoolPageHelper.PageHelper.navigateEdit (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:57:36)
at PoolPageHelper.edit_pool_pg (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:286:14)
at Context.eval (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:326:15)
2) Notification page
"after all" hook for "should open notification sidebar":
CypressError: `cy.click()` failed because it requires a DOM element.
The subject received was:
> `undefined`
The previous command that ran was:
> `cy.get()`
Because this error occurred during a `after all` hook we are skipping the remaining tests in the current suite: `Notification page`
at ensureElement (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:159721:24)
at validateType (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:159545:16)
at Object.ensureSubjectByType (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:159587:11)
at pushSubjectAndValidate (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:167293:15)
at Context.<anonymous> (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/runner/cypress_runner.js:167621:18)
From Your Spec Code:
at PoolPageHelper.PageHelper.delete (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:215:50)
at Context.eval (https://slave-ubuntu08.front.sepia.ceph.com:41082/__cypress/tests?p=cypress/integration/ui/notification.e2e-spec.ts:331:21)
(Results)
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Tests: 4 │
│ Passing: 0 │
│ Failing: 1 │
│ Pending: 0 │
│ Skipped: 3 │
│ Screenshots: 2 │
│ Video: false │
│ Duration: 31 seconds │
│ Estimated: 56 seconds │
│ Spec Ran: ui/notification.e2e-spec.ts │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
(Screenshots)
- /home/jenkins-build/build/workspace/ceph-dashboard-pull-requests/src/pybind/mgr/ (1280x720)
dashboard/frontend/cypress/screenshots/ui/notification.e2e-spec.ts/Notification
page -- should open notification sidebar -- before all hook (failed).png
- /home/jenkins-build/build/workspace/ceph-dashboard-pull-requests/src/pybind/mgr/ (1280x720)
dashboard/frontend/cypress/screenshots/ui/notification.e2e-spec.ts/Notification
page -- should open notification sidebar -- after all hook (failed).png
(Uploading Results)
</pre>
Orchestrator - Documentation #46335 (Resolved): Document "Using cephadm to set up rgw-nfs"
https://tracker.ceph.com/issues/46335
2020-07-03T08:30:46Z
Sebastian Wagner
<ul>
<li>cephadm doesn't manage exports; instead, it simply sets up the daemons.</li>
<li>cephadm only creates an empty 'conf-{service-name}' RADOS object</li>
</ul>
<p>The question now is: how do we set up an RGW export?</p>
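<p>For context, a Ganesha export backed by RGW uses the RGW FSAL rather than the CEPH one. A minimal sketch, assuming a pre-created RGW user; the user name, keys, and export id are illustrative:</p>
<pre>
EXPORT {
    Export_Id = 100;
    Transports = TCP;
    Protocols = 4;
    Access_Type = RW;
    Path = "/";
    Pseudo = "/rgw";
    Squash = None;
    FSAL {
        Name = RGW;                       # RGW FSAL instead of CEPH
        User_Id = "nfs.test";             # illustrative RGW user
        Access_Key_Id = "&lt;access key&gt;";
        Secret_Access_Key = "&lt;secret key&gt;";
    }
}
</pre>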
Orchestrator - Documentation #45411 (Resolved): cephadm: add section about container images
https://tracker.ceph.com/issues/45411
2020-05-06T16:57:38Z
Sebastian Wagner
<p>We recommend against using the <code>:latest</code> tag for container images. The reason is that there is no guarantee that you will get the same image on all hosts, which can leave different nodes running different images. Upgrades also no longer work properly.</p>
<p>Instead, we recommend always using explicit tags or image IDs.</p>
<p>Basically the same as <a class="external" href="https://kubernetes.io/docs/concepts/configuration/overview/#container-images">https://kubernetes.io/docs/concepts/configuration/overview/#container-images</a></p>
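<p>A minimal sketch of pinning an explicit image (the registry URL and tag are illustrative):</p>
<pre>
# Bootstrap with a pinned image rather than an implicit :latest
cephadm --image quay.io/ceph/ceph:v15.2.13 bootstrap --mon-ip &lt;mon-ip&gt;

# Pin the image used for daemons deployed afterwards
ceph config set global container_image quay.io/ceph/ceph:v15.2.13
</pre>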
Orchestrator - Documentation #44905 (Resolved): cephadm troubleshooting SSH errors
https://tracker.ceph.com/issues/44905
2020-04-02T10:07:08Z
Sebastian Wagner
<pre>
<wowas> I'm getting:
<wowas> execnet.gateway_bootstrap.HostNotFound: -F /tmp/cephadm-conf-kbqvkrkw root@10.10.1.2
<wowas> raise OrchestratorError('Failed to connect to %s (%s). Check that the host is reachable and accepts connections using the cephadm SSH key' % (host, addr)) from e
<wowas> orchestrator._interface.OrchestratorError: Failed to connect to 10.10.1.2 (10.10.1.2). Check that the host is reachable and accepts connections using the cephadm SSH key
</pre>
<p>Things users can do:</p>
<p>1. Obtain the SSH identity key that cephadm uses to connect to the hosts:<br /><pre>
[root@mon1 ~]# cephadm shell -- ceph config-key get mgr/cephadm/ssh_identity_key > key
INFO:cephadm:Inferring fsid f8edc08a-7f17-11ea-8707-000c2915dd98
INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
obtained 'mgr/cephadm/ssh_identity_key'
[root@mon1 ~]# chmod 0600 key
</pre></p>
<p>If this fails, cephadm doesn't have a key. Fix this by generating a new one:<br /><pre>
[root@mon1 ~]# cephadm shell -- ceph cephadm generate-ssh-key
</pre><br />or by importing an existing one:<br /><pre>
[root@mon1 ~]# cat key | cephadm shell -- ceph cephadm set-ssh-key -i -
</pre></p>
<p>2. Ensure the SSH config is correct:<br /><pre>
[root@mon1 ~]# cephadm shell -- ceph cephadm get-ssh-config > config
</pre></p>
<p>3. Verify we can connect to the host:</p>
<pre>
[root@mon1 ~]# ssh -F config -i key root@mon1
ssh: connect to host mon1 port 22: Connection timed out
</pre>
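<p>A timeout like the one above points at network reachability. If the connection is instead rejected for authentication reasons, one remedy (sketched; the target address is illustrative) is to install cephadm's public key on the host:</p>
<pre>
[root@mon1 ~]# cephadm shell -- ceph cephadm get-pub-key > ceph.pub
[root@mon1 ~]# ssh-copy-id -f -i ceph.pub root@10.10.1.2
</pre>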
<p>4. Note a current limitation: the SSH user is hardcoded to root.</p>
mgr - Feature #44856 (In Progress): telemetry: report orch backend
https://tracker.ceph.com/issues/44856
2020-03-31T14:43:25Z
Sebastian Wagner
Orchestrator - Documentation #44828 (Resolved): cephadm: clarify "Failed to infer CIDR network fo...
https://tracker.ceph.com/issues/44828
2020-03-31T09:07:47Z
Sebastian Wagner
<pre>
[22:27:31] <xirius> I follow the instruction to bootstrap a new (octopus) cluster: cephadm bootstrap --mon-ip *<mon-ip>* but I get an error: ERROR: Failed to infer CIDR network for mon ip ***; pass --skip-mon-network to configure it later
[22:27:37] <xirius> What does it mean ?
</pre>
<p>We should clarify the error message, e.g. tell users that they have to run</p>
<pre>
ceph config set mon public_network <mon_network>
</pre>
<p>after they've called bootstrap.</p>
<p>(alternatively, add an argument <code>--public-network</code> to bootstrap)</p>
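<p>A sketch of the workaround the error message hints at (the CIDR is illustrative):</p>
<pre>
cephadm bootstrap --mon-ip &lt;mon-ip&gt; --skip-mon-network
# then, once the cluster is up:
ceph config set mon public_network 10.1.0.0/24
</pre>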
ceph-volume - Bug #44096 (Resolved): lvm prepare doesn't create vg and thus does not pass vg name...
https://tracker.ceph.com/issues/44096
2020-02-12T12:00:58Z
Sebastian Wagner
<p>Extracted from: <a class="external" href="https://tracker.ceph.com/issues/44028">https://tracker.ceph.com/issues/44028</a></p>
<pre>
$ ceph-volume lvm prepare --bluestore --data /dev/sdc --no-systemd
INFO:cephadm:/usr/bin/docker:stderr unable to read label for /dev/sdc: (2) No such file or directory
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph-authtool --gen-print-key
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
INFO:cephadm:/usr/bin/docker:stderr stderr: Volume group name has invalid characters
INFO:cephadm:/usr/bin/docker:stderr Run `lvcreate --help' for more information.
INFO:cephadm:/usr/bin/docker:stderr --> Was unable to complete a new OSD, will rollback changes
INFO:cephadm:/usr/bin/docker:stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
INFO:cephadm:/usr/bin/docker:stderr stderr: purged osd.0
INFO:cephadm:/usr/bin/docker:stderr --> RuntimeError: command returned non-zero exit status: 3
</pre>
<p>Looks like "<code>100%FREE</code>" is often used in the ceph-ansible context: <a class="external" href="https://github.com/search?p=1&q=ceph+100%25FREE&type=Code">https://github.com/search?p=1&q=ceph+100%25FREE&type=Code</a></p>
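<p>For comparison, the failing <code>lvcreate</code> above is missing its final volume-group argument, which is why LVM complains about the volume group name. A sketch of the sequence it presumably intends (VG/LV names reuse the UUID from the log; the device is illustrative):</p>
<pre>
# Create the volume group on the target device first ...
vgcreate ceph-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6 /dev/sdc
# ... so lvcreate has a VG to allocate 100%FREE from
lvcreate --yes -l 100%FREE -n osd-block-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6 ceph-11e7e2c4-7e16-417e-b56a-a45b7a4c6dd6
</pre>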
Orchestrator - Feature #39095 (Resolved): mgr/deepsea: return ganesha and iscsi endpoint URLs
https://tracker.ceph.com/issues/39095
2019-04-03T13:55:38Z
Sebastian Wagner
<p>This updates describe_service() to include nfs and iscsi services<br />(deepsea internally refers to these as "ganesha" and "igw" roles).<br />Additionally, if deepsea sets any of container_id, service, version,<br />rados_config_location, service_url, status or status_desc, these will<br />now come through too.</p>
<p>This relies on SUSE/DeepSea#1606 for the new<br />functionality, but if run against an older version of deepsea, it will<br />continue to operate as it did before; it just won't include<br />rados_config_location or service_url data.</p>
<p>Signed-off-by: Tim Serong <a class="email" href="mailto:tserong@suse.com">tserong@suse.com</a></p>
Orchestrator - Fix #39082 (Resolved): mgr/deepsea: use ceph_volume output in get_inventory()
https://tracker.ceph.com/issues/39082
2019-04-02T14:14:19Z
Sebastian Wagner
<p>DeepSea is being updated to use ceph_volume internally (see SUSE/DeepSea#1517 and jschmid1/DeepSea#6). Once this is done, the mgr_orch.get_inventory runner will just return the raw ceph_volume output, so this PR updates the DeepSea mgr module to match. There are also a couple of small cleanup commits. We probably don't want to merge this until the DeepSea PR is in, though.</p>
mgr - Bug #38626 (Resolved): pg_autoscaler is not Python 3 compatible
https://tracker.ceph.com/issues/38626
2019-03-07T14:04:50Z
Sebastian Wagner
<p><code>src/scripts/run_mypy.sh</code> revealed some type errors:</p>
<p>from <a class="external" href="https://gist.github.com/sebastian-philipp/25f70aae3b0d21b1a781c110a7ef8be4">https://gist.github.com/sebastian-philipp/25f70aae3b0d21b1a781c110a7ef8be4</a></p>
<pre>
pybind/mgr/pg_autoscaler/module.py: note: In member "get_subtree_resource_status" of class "PgAutoscaler":
pybind/mgr/pg_autoscaler/module.py:192: error: "Dict[Any, Any]" has no attribute "itervalues"
pybind/mgr/pg_autoscaler/module.py: note: In member "_maybe_adjust" of class "PgAutoscaler":
pybind/mgr/pg_autoscaler/module.py:359: error: Name 'cr_name' is not defined
pybind/mgr/pg_autoscaler/module.py:410: error: "Dict[Any, float]" has no attribute "iteritems"
pybind/mgr/pg_autoscaler/module.py:437: error: "Dict[Any, int]" has no attribute "iteritems"
</pre>
<p>These errors look like Python 2 idioms that are incompatible with Python 3.</p>
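<p>The dict-iteration fixes mypy is pointing at are mechanical; a minimal sketch (the dict here is illustrative):</p>
<pre>
d = {"pool_a": 1.0, "pool_b": 2.0}

# The Python 2-only idioms flagged above raise AttributeError on Python 3:
#   d.itervalues(), d.iteritems()

for v in d.values():        # replaces d.itervalues()
    print(v)
for k, v in d.items():      # replaces d.iteritems()
    print(k, v)
</pre>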
Dashboard - Bug #38590 (Resolved): mimic: dashboard: failed to compile the dashboard: Cannot find...
https://tracker.ceph.com/issues/38590
2019-03-05T18:51:58Z
Sebastian Wagner
<pre>
Date: 2019-03-05T05:49:15.789Z
Hash: 894ed43e42aed84f2e6a
Time: 21545ms
chunk {scripts} scripts.fc88ef4a23399c760d0b.bundle.js (scripts) 210 kB [initial] [rendered]
chunk {0} styles.89887a238a2462b3f866.bundle.css (styles) 211 kB [initial] [rendered]
chunk {1} polyfills.997d8cc03812de50ae67.bundle.js (polyfills) 84 bytes [initial] [rendered]
chunk {2} main.ee32620ecd1edff94184.bundle.js (main) 84 bytes [initial] [rendered]
chunk {3} inline.318b50c57b4eba3d437b.bundle.js (inline) 796 bytes [entry] [rendered]

WARNING in Invalid animation value at 11938:14. Ignoring.

WARNING in Invalid animation value at 11937:22. Ignoring.

ERROR in node_modules/@types/lodash/common/object.d.ts(1689,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1766,12): error TS2304: Cannot find name 'Exclude'.
node_modules/@types/lodash/common/object.d.ts(1842,34): error TS2304: Cannot find name 'Exclude'.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! ceph-dashboard@0.0.0 build: `ng build "--prod"`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the ceph-dashboard@0.0.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/jenkins-build/.npm/_logs/2019-03-05T05_49_15_864Z-debug.log
src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/build.make:1435: recipe for target '../src/pybind/mgr/dashboard/frontend/dist' failed
make[3]: *** [../src/pybind/mgr/dashboard/frontend/dist] Error 1
CMakeFiles/Makefile2:4878: recipe for target 'src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all' failed
make[2]: *** [src/pybind/mgr/dashboard/CMakeFiles/mgr-dashboard-frontend-build.dir/all] Error 2
make[2]: *** Waiting for unfinished jobs....
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036">https://jenkins.ceph.com/job/ceph-pull-requests/15029/consoleFull#-2110717508ed6e825c-5772-40cc-ba3e-4f4430b3e036</a></p>