Ceph : Issues
https://tracker.ceph.com/
2021-12-08T11:14:59Z
Ceph
Redmine
Dashboard - Bug #53526 (Triaged): mgr/dashboard: offline hosts showing UI bug
https://tracker.ceph.com/issues/53526
2021-12-08T11:14:59Z
Sebastian Wagner
<p>NaN - undefined?<br />Status text doesn't fit in the column.</p>
<p><img src="https://tracker.ceph.com/attachments/download/5806/unnamed.png" alt="" /></p>
Orchestrator - Feature #48846 (Closed): cephadm bootstrap: add --cluster-network
https://tracker.ceph.com/issues/48846
2021-01-12T08:53:29Z
Sebastian Wagner
<pre>
cephadm bootstrap --mon-ip
</pre>
<p>defines the mon IP, but how do you specify a separate cluster network?</p>
<p>This is fairly important, and it doesn't seem to be well documented (or documented at all in <a class="external" href="https://docs.ceph.com/en/latest/cephadm/install/">https://docs.ceph.com/en/latest/cephadm/install/</a>?)</p>
<p>The current idea is to pass</p>
<pre>
--skip-mon-network
</pre>
<p>and then later run</p>
<pre>
ceph config set mon cluster_network ...
</pre>
<p>However, we should provide a way to set the cluster network directly at bootstrap time.</p>
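<p>A minimal sketch of that workaround, using only the commands from the description above (the IP and the network are placeholder values, and flag behaviour may differ between cephadm versions):</p>
<pre>
# Skip configuring the mon network during bootstrap ...
cephadm bootstrap --mon-ip 192.168.1.10 --skip-mon-network
# ... and set the cluster network afterwards
ceph config set mon cluster_network 192.168.2.0/24
</pre>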
CephFS - Bug #48491 (Resolved): tasks.cephfs.test_nfs.TestNFS.test_cluster_info: IP mismatch
https://tracker.ceph.com/issues/48491
2020-12-08T11:06:56Z
Sebastian Wagner
<pre>
2020-12-07T20:32:02.847 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-12-07T20:32:02.847 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner3-testing-2020-12-07-1114/qa/tasks/cephfs/test_nfs.py", line 436, in test_cluster_info
2020-12-07T20:32:02.847 INFO:tasks.cephfs_test_runner: self.assertDictEqual(info_output, host_details)
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner:AssertionError: {'tes[21 chars]ithi069', 'ip': ['172.21.15.69', '127.0.1.1'], 'port': 2049}]} != {'tes[21 chars]ithi069', 'ip': ['172.17.0.1', '172.21.15.69'], 'port': 2049}]}
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner: {'test': [{'hostname': 'smithi069',
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner:- 'ip': ['172.21.15.69', '127.0.1.1'],
2020-12-07T20:32:02.848 INFO:tasks.cephfs_test_runner:+ 'ip': ['172.17.0.1', '172.21.15.69'],
2020-12-07T20:32:02.849 INFO:tasks.cephfs_test_runner: 'port': 2049}]}
2020-12-07T20:32:02.849 INFO:tasks.cephfs_test_runner:
2020-12-07T20:32:02.849 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-12-07T20:32:02.850 INFO:tasks.cephfs_test_runner:Ran 1 test in 32.117s
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-12-07_12:07:52-rados:cephadm-wip-swagner3-testing-2020-12-07-1114-distro-basic-smithi/5689544/">https://pulpito.ceph.com/swagner-2020-12-07_12:07:52-rados:cephadm-wip-swagner3-testing-2020-12-07-1114-distro-basic-smithi/5689544/</a></p>
<p>log snippet:<br /><pre>
2020-12-07T20:31:47.592 INFO:tasks.cephfs_test_runner:Starting test: test_cluster_info (tasks.cephfs.test_nfs.TestNFS)
2020-12-07T20:31:47.596 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph log 'Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_info'
2020-12-07T20:31:48.934 INFO:teuthology.orchestra.run.smithi069:> sudo systemctl status nfs-server
2020-12-07T20:31:48.982 INFO:teuthology.orchestra.run.smithi069.stdout:* nfs-server.service - NFS server and services
2020-12-07T20:31:48.983 INFO:teuthology.orchestra.run.smithi069.stdout: Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
2020-12-07T20:31:48.983 INFO:teuthology.orchestra.run.smithi069.stdout: Active: active (exited) since Mon 2020-12-07 20:22:19 UTC; 9min ago
2020-12-07T20:31:48.984 INFO:teuthology.orchestra.run.smithi069.stdout: Main PID: 1291 (code=exited, status=0/SUCCESS)
2020-12-07T20:31:48.985 INFO:teuthology.orchestra.run.smithi069.stdout: Tasks: 0 (limit: 4915)
2020-12-07T20:31:48.985 INFO:teuthology.orchestra.run.smithi069.stdout: CGroup: /system.slice/nfs-server.service
2020-12-07T20:31:48.986 INFO:teuthology.orchestra.run.smithi069.stdout:
2020-12-07T20:31:48.987 INFO:teuthology.orchestra.run.smithi069.stdout:Dec 07 20:22:19 smithi121 systemd[1]: Starting NFS server and services...
2020-12-07T20:31:48.988 INFO:teuthology.orchestra.run.smithi069.stdout:Dec 07 20:22:19 smithi121 systemd[1]: Started NFS server and services.
2020-12-07T20:31:48.991 INFO:tasks.cephfs.test_nfs:Disabling NFS
2020-12-07T20:31:48.993 INFO:teuthology.orchestra.run.smithi069:> sudo systemctl disable nfs-server --now
2020-12-07T20:31:49.057 INFO:teuthology.orchestra.run.smithi069.stderr:Removed /etc/systemd/system/multi-user.target.wants/nfs-server.service.
2020-12-07T20:31:49.194 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster create cephfs test
2020-12-07T20:31:51.150 INFO:teuthology.orchestra.run.smithi069.stdout:NFS Cluster Created Successfully
2020-12-07T20:32:01.168 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph orch ps --service_name=nfs.ganesha-test
2020-12-07T20:32:01.498 INFO:teuthology.orchestra.run.smithi069.stdout:NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
2020-12-07T20:32:01.498 INFO:teuthology.orchestra.run.smithi069.stdout:nfs.ganesha-test.smithi069 smithi069 running (6s) 4s ago 6s 3.3 quay.ceph.io/ceph-ci/ceph:acb9f90cf72d3c35abbc0efbef808026624ef6c0 59dd54023d46 4282061f6a85
2020-12-07T20:32:01.512 INFO:teuthology.orchestra.run.smithi069:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph nfs cluster info test
2020-12-07T20:32:01.862 INFO:teuthology.orchestra.run.smithi069.stdout:{
2020-12-07T20:32:02.312 INFO:teuthology.orchestra.run.smithi069.stdout: "test": [
2020-12-07T20:32:02.312 INFO:teuthology.orchestra.run.smithi069.stdout: {
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "hostname": "smithi069",
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "ip": [
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "172.21.15.69",
2020-12-07T20:32:02.313 INFO:teuthology.orchestra.run.smithi069.stdout: "127.0.1.1"
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: ],
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: "port": 2049
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: }
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout: ]
2020-12-07T20:32:02.314 INFO:teuthology.orchestra.run.smithi069.stdout:}
2020-12-07T20:32:02.316 INFO:teuthology.orchestra.run.smithi069:> sudo hostname
2020-12-07T20:32:02.329 INFO:teuthology.orchestra.run.smithi069.stdout:smithi069
2020-12-07T20:32:02.331 INFO:teuthology.orchestra.run.smithi069:> sudo hostname -I
2020-12-07T20:32:02.387 INFO:teuthology.orchestra.run.smithi069.stdout:172.21.15.69 172.17.0.1
</pre></p>
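<p>The two address lists being compared can be reproduced on the host with the same commands the test runs (see the log above). The extra entries are 172.17.0.1, which looks like the default container bridge address, and 127.0.1.1, the Debian/Ubuntu loopback hostname entry; both are assumptions based on the values in the log:</p>
<pre>
# Commands the test compares, run manually on the host
sudo hostname -I                # kernel view of the host's addresses
ceph nfs cluster info test      # addresses the mgr nfs module reports for the cluster
</pre>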
CephFS - Bug #47009 (Resolved): TestNFS.test_cluster_set_reset_user_config: command failed with s...
https://tracker.ceph.com/issues/47009
2020-08-18T12:50:40Z
Sebastian Wagner
<pre>
sed -n '/2020-08-18T12:13:28.371/,/2020-08-18T12:17:23.469/p' *
2020-08-18T12:13:28.371 INFO:tasks.cephfs_test_runner:Starting test: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2020-08-18T12:13:28.372 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:27.346480+0000 mon.a (mon.0) 397 : audit [INF] from='client.? 172.21.15.36:0/3067776373' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]': finished
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:27.695797+0000 mgr.a (mgr.14247) 35 : audit [DBG] from='client.14288 -' entity='client.admin' cmd=[{"prefix": "nfs cluster config reset", "clusterid": "invalidtest", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:13:28.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: cluster 2020-08-18T12:13:28.012636+0000 client.admin (client.?) 0 : cluster [INF] Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid
2020-08-18T12:13:28.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:28 smithi036 bash[14996]: audit 2020-08-18T12:13:28.012780+0000 mon.a (mon.0) 398 : audit [INF] from='client.? 172.21.15.36:0/164629844' entity='client.admin' cmd=[{"prefix": "log", "logtext": ["Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]: dispatch
2020-08-18T12:13:29.377 INFO:teuthology.orchestra.run.smithi036:> sudo systemctl status nfs-server
2020-08-18T12:13:29.399 INFO:teuthology.orchestra.run.smithi036.stdout:* nfs-server.service - NFS server and services
2020-08-18T12:13:29.400 INFO:teuthology.orchestra.run.smithi036.stdout: Loaded: loaded (/lib/systemd/system/nfs-server.service; disabled; vendor preset: enabled)
2020-08-18T12:13:29.400 INFO:teuthology.orchestra.run.smithi036.stdout: Active: inactive (dead)
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:04:04 smithi036 systemd[1]: Starting NFS server and services...
2020-08-18T12:13:29.401 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:04:04 smithi036 systemd[1]: Started NFS server and services.
2020-08-18T12:13:29.402 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:13:04 smithi036 systemd[1]: Stopping NFS server and services...
2020-08-18T12:13:29.402 INFO:teuthology.orchestra.run.smithi036.stdout:Aug 18 12:13:04 smithi036 systemd[1]: Stopped NFS server and services.
2020-08-18T12:13:29.404 DEBUG:teuthology.orchestra.run:got remote process result: 3
2020-08-18T12:13:29.404 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early nfs cluster create cephfs test
2020-08-18T12:13:29.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: audit 2020-08-18T12:13:28.348582+0000 mon.a (mon.0) 399 : audit [INF] from='client.? 172.21.15.36:0/164629844' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_reset_user_config_with_non_existing_clusterid"]}]': finished
2020-08-18T12:13:29.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: cluster 2020-08-18T12:13:28.692094+0000 client.admin (client.?) 0 : cluster [INF] Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config
2020-08-18T12:13:29.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:29 smithi036 bash[14996]: audit 2020-08-18T12:13:28.692238+0000 mon.a (mon.0) 400 : audit [INF] from='client.? 172.21.15.36:0/1973330598' entity='client.admin' cmd=[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config"]}]: dispatch
2020-08-18T12:13:29.791 INFO:teuthology.orchestra.run.smithi036.stdout:NFS Cluster Created Successfully
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.354619+0000 mon.a (mon.0) 401 : audit [INF] from='client.? 172.21.15.36:0/1973330598' entity='client.admin' cmd='[{"prefix": "log", "logtext": ["Starting test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config"]}]': finished
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.776446+0000 mgr.a (mgr.14247) 37 : audit [DBG] from='client.14294 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "type": "cephfs", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.785356+0000 mgr.a (mgr.14247) 38 : cephadm [INF] Saving service nfs.ganesha-test spec with placement count:1
2020-08-18T12:13:30.636 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.786010+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.785388\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]: dispatch
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.789858+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.785388\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]': finished
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.792400+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.795539+0000 mon.a (mon.0) 405 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:30.637 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.797870+0000 mgr.a (mgr.14247) 39 : cephadm [INF] Checking pool "nfs-ganesha" exists for service nfs.ganesha-test
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.798100+0000 mgr.a (mgr.14247) 40 : cephadm [INF] Saving service nfs.ganesha-test spec with placement count:1
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.798617+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.798122\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]: dispatch
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.802947+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/spec.nfs.ganesha-test","val":"{\"created\": \"2020-08-18T12:13:29.798122\", \"spec\": {\"placement\": {\"count\": 1}, \"service_id\": \"ganesha-test\", \"service_name\": \"nfs.ganesha-test\", \"service_type\": \"nfs\", \"spec\": {\"namespace\": \"test\", \"pool\": \"nfs-ganesha\"}}}"}]': finished
2020-08-18T12:13:30.638 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.803757+0000 mgr.a (mgr.14247) 41 : cephadm [INF] Create daemon ganesha-test.smithi036 on host smithi036 with spec NFSServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'nfs', 'service_id': 'ganesha-test', 'unmanaged': False, 'preview_only': False, 'pool': 'nfs-ganesha', 'namespace': 'test'})
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.804059+0000 mgr.a (mgr.14247) 42 : cephadm [INF] Checking pool "nfs-ganesha" exists for service nfs.ganesha-test
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.804216+0000 mgr.a (mgr.14247) 43 : cephadm [INF] Create keyring: client.nfs.ganesha-test.smithi036
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.804550+0000 mon.a (mon.0) 408 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.ganesha-test.smithi036"}]: dispatch
2020-08-18T12:13:30.639 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.805602+0000 mgr.a (mgr.14247) 44 : cephadm [INF] Updating keyring caps: client.nfs.ganesha-test.smithi036
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.805983+0000 mon.a (mon.0) 409 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "auth caps", "entity": "client.nfs.ganesha-test.smithi036", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test"]}]: dispatch
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.810709+0000 mon.a (mon.0) 410 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "auth caps", "entity": "client.nfs.ganesha-test.smithi036", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test"]}]': finished
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.813262+0000 mgr.a (mgr.14247) 45 : cephadm [INF] Rados config object exists: conf-nfs.ganesha-test
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.813995+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
2020-08-18T12:13:30.640 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: cephadm 2020-08-18T12:13:29.814959+0000 mgr.a (mgr.14247) 46 : cephadm [INF] Deploying daemon nfs.ganesha-test.smithi036 on smithi036
2020-08-18T12:13:30.641 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:30 smithi036 bash[14996]: audit 2020-08-18T12:13:29.815454+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:13:32.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:32 smithi036 bash[14996]: audit 2020-08-18T12:13:31.658178+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-08-18T12:13:34.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.945089+0000 mon.a (mon.0) 418 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:34.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.949127+0000 mon.a (mon.0) 419 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:34.647 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.953850+0000 mon.a (mon.0) 420 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:13:34.647 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:34 smithi036 bash[14996]: audit 2020-08-18T12:13:33.956396+0000 mon.a (mon.0) 421 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:13:37.808 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early orch ls nfs
2020-08-18T12:13:38.115 INFO:teuthology.orchestra.run.smithi036.stdout:NAME RUNNING REFRESHED AGE PLACEMENT IMAGE NAME IMAGE ID
2020-08-18T12:13:38.115 INFO:teuthology.orchestra.run.smithi036.stdout:nfs.ganesha-test 1/1 4s ago 8s count:1 quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7 27a13cd81179
2020-08-18T12:13:38.395 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:13:38 smithi036 bash[14996]: audit 2020-08-18T12:13:38.112053+0000 mgr.a (mgr.14247) 51 : audit [DBG] from='client.14303 -' entity='client.admin' cmd=[{"prefix": "orch ls", "service_type": "nfs", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:08.131 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early fs volume create user_test_fs
2020-08-18T12:14:09.649 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:09 smithi036 bash[14996]: audit 2020-08-18T12:14:08.447802+0000 mgr.a (mgr.14247) 67 : audit [DBG] from='client.14307 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "user_test_fs", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:09.650 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:09 smithi036 bash[14996]: audit 2020-08-18T12:14:08.448662+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.meta"}]: dispatch
2020-08-18T12:14:10.406 INFO:teuthology.orchestra.run.smithi036.stderr:new fs with metadata pool 3 and data pool 4
2020-08-18T12:14:10.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: debug 2020-08-18T12:14:10.371+0000 7f3f88120700 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2020-08-18T12:14:10.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: audit 2020-08-18T12:14:09.369221+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.meta"}]': finished
2020-08-18T12:14:10.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: cluster 2020-08-18T12:14:09.369383+0000 mon.a (mon.0) 424 : cluster [DBG] osdmap e27: 3 total, 3 up, 3 in
2020-08-18T12:14:10.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:10 smithi036 bash[14996]: audit 2020-08-18T12:14:09.372098+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.data"}]: dispatch
2020-08-18T12:14:11.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.371644+0000 mon.a (mon.0) 426 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "osd pool create", "pool": "cephfs.user_test_fs.data"}]': finished
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.372094+0000 mon.a (mon.0) 427 : cluster [DBG] osdmap e28: 3 total, 3 up, 3 in
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.374540+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "fs new", "fs_name": "user_test_fs", "metadata": "cephfs.user_test_fs.meta", "data": "cephfs.user_test_fs.data"}]: dispatch
2020-08-18T12:14:11.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.375970+0000 mon.a (mon.0) 429 : cluster [ERR] Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.376015+0000 mon.a (mon.0) 430 : cluster [WRN] Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.383345+0000 mon.a (mon.0) 431 : cluster [DBG] osdmap e29: 3 total, 3 up, 3 in
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: audit 2020-08-18T12:14:10.383558+0000 mon.a (mon.0) 432 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix": "fs new", "fs_name": "user_test_fs", "metadata": "cephfs.user_test_fs.meta", "data": "cephfs.user_test_fs.data"}]': finished
2020-08-18T12:14:11.646 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:11 smithi036 bash[14996]: cluster 2020-08-18T12:14:10.383625+0000 mon.a (mon.0) 433 : cluster [DBG] fsmap user_test_fs:0
2020-08-18T12:14:12.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:12 smithi036 bash[14996]: cluster 2020-08-18T12:14:11.397053+0000 mon.a (mon.0) 434 : cluster [DBG] osdmap e30: 3 total, 3 up, 3 in
2020-08-18T12:14:13.894 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:13 smithi036 bash[14996]: cluster 2020-08-18T12:14:12.402837+0000 mon.a (mon.0) 435 : cluster [DBG] osdmap e31: 3 total, 3 up, 3 in
2020-08-18T12:14:30.417 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early auth get-or-create-key client.test mon 'allow r' osd 'allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs' mds 'allow rw path=/'
2020-08-18T12:14:30.757 INFO:teuthology.orchestra.run.smithi036.stdout:AQAmxjtfQePTLBAA90sKBZ/mAeDvfQUPUVQ7AA==
2020-08-18T12:14:30.776 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early nfs cluster info test
2020-08-18T12:14:31.107 INFO:teuthology.orchestra.run.smithi036.stdout:{
2020-08-18T12:14:31.107 INFO:teuthology.orchestra.run.smithi036.stdout: "test": [
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: {
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: "hostname": "smithi036",
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: "ip": [
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: "172.21.15.36"
2020-08-18T12:14:31.108 INFO:teuthology.orchestra.run.smithi036.stdout: ],
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout: "port": 2049
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout: ]
2020-08-18T12:14:31.109 INFO:teuthology.orchestra.run.smithi036.stdout:}
2020-08-18T12:14:31.116 INFO:teuthology.orchestra.run.smithi036:> sudo ceph nfs cluster config set test -i -
2020-08-18T12:14:31.393 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:30.751880+0000 mon.a (mon.0) 436 : audit [INF] from='client.? 172.21.15.36:0/275914116' entity='client.admin' cmd=[{"prefix": "auth get-or-create-key", "entity": "client.test", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs", "mds", "allow rw path=/"]}]: dispatch
2020-08-18T12:14:31.394 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:30.756918+0000 mon.a (mon.0) 437 : audit [INF] from='client.? 172.21.15.36:0/275914116' entity='client.admin' cmd='[{"prefix": "auth get-or-create-key", "entity": "client.test", "caps": ["mon", "allow r", "osd", "allow rw pool=nfs-ganesha namespace=test, allow rw tag cephfs data=user_test_fs", "mds", "allow rw path=/"]}]': finished
2020-08-18T12:14:31.394 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:31 smithi036 bash[14996]: audit 2020-08-18T12:14:31.098832+0000 mgr.a (mgr.14247) 79 : audit [DBG] from='client.14313 -' entity='client.admin' cmd=[{"prefix": "nfs cluster info", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.438397+0000 mgr.a (mgr.14247) 81 : audit [DBG] from='client.14315 -' entity='client.admin' cmd=[{"prefix": "nfs cluster config set", "clusterid": "test", "target": ["mon-mgr", ""]}]: dispatch
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: cephadm 2020-08-18T12:14:31.477406+0000 mgr.a (mgr.14247) 82 : cephadm [INF] Restart service nfs.ganesha-test
2020-08-18T12:14:32.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.478191+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:14:32.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:32 smithi036 bash[14996]: audit 2020-08-18T12:14:31.745368+0000 mon.a (mon.0) 439 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "client.nfs.ganesha-test.smithi036", "key": "container_image"}]: dispatch
2020-08-18T12:14:47.360 INFO:teuthology.orchestra.run.smithi036.stdout:NFS-Ganesha Config Set Successfully
2020-08-18T12:14:47.607 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:47 smithi036 bash[14996]: audit 2020-08-18T12:14:47.361760+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "container_image"}]: dispatch
2020-08-18T12:14:49.644 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.826093+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi036","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"d36c25661867\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823102\", \"created\": \"2020-08-18T12:10:16.467600\", \"started\": \"2020-08-18T12:10:22.842682\"}, \"mgr.a\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"c590cd729900\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823283\", \"created\": \"2020-08-18T12:10:24.515600\", \"started\": \"2020-08-18T12:12:53.589820\"}, \"osd.0\": {\"daemon_type\": \"osd\", \"daemon_id\": \"0\", \"hostname\": \"smithi036\", \"container_id\": \"185d14a6bc0e\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823393\", \"created\": \"2020-08-18T12:11:29.443599\", \"started\": \"2020-08-18T12:11:32.358036\"}, \"osd.1\": {\"daemon_type\": \"osd\", \"daemon_id\": \"1\", \"hostname\": \"smithi036\", \"container_id\": \"18c28aa439b9\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823644\", \"created\": \"2020-08-18T12:11:47.099598\", \"started\": \"2020-08-18T12:11:49.606862\"}, \"osd.2\": {\"daemon_type\": \"osd\", \"daemon_id\": \"2\", \"hostname\": \"smithi036\", \"container_id\": \"c5ce7db6d02f\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823856\", \"created\": \"2020-08-18T12:12:04.511598\", \"started\": \"2020-08-18T12:12:07.007047\"}, \"mds.1.smithi036.fwyvuo\": {\"daemon_type\": \"mds\", \"daemon_id\": \"1.smithi036.fwyvuo\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": -1, \"status_desc\": \"error\", \"last_refresh\": \"2020-08-18T12:14:48.824061\", \"created\": \"2020-08-18T12:12:48.687597\"}, \"nfs.ganesha-test.smithi036\": {\"daemon_type\": \"nfs\", 
\"daemon_id\": \"ganesha-test.smithi036\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.824135\", \"created\": \"2020-08-18T12:13:31.627596\"}}, \"devices\": [{\"rejected_reasons\": [\"Insufficient space (<5GB) on vgs\", \"locked\", \"LVM detected\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"osd_id\": \"2\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"b672ef8c-b1b7-4095-9010-9e0dbb7fd560\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tmDUnC-MITj-o9dZ-VrFd-fVuq-YQu7-wAKj21\"}, {\"name\": \"lv_3\", \"osd_id\": \"1\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"00c9da60-5629-446b-81c4-2aa37c742fd3\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tWjwp6-rw22-xfX3-WTsV-NQfm-c2cR-RwkcQ2\"}, {\"name\": \"lv_4\", \"osd_id\": \"0\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"22b971a7-f36a-4ac8-bc5a-bc03c27f6ebd\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"yDckaE-Tm2y-CZ1W-MNvA-Z9at-Sx8F-Wy1sdo\"}, {\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT53300040400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN04\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"cfq\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W4J2QS\"}], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:09.324619\"}, \"mgr.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:10.820348\"}, \"osd.0\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:27.858274\"}, \"osd.1\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:45.513076\"}, \"osd.2\": {\"deps\": [], \"last_config\": \"2020-08-18T12:12:02.780273\"}, \"mds.1.smithi036.fwyvuo\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:01.350567\"}, \"nfs.ganesha-test.smithi036\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:29.803823\"}}, \"last_daemon_update\": \"2020-08-18T12:14:48.824320\", \"last_device_update\": \"2020-08-18T12:12:11.983889\", 
\"networks\": {\"172.21.0.0/20\": [\"172.21.15.36\"], \"172.21.15.254\": [\"172.21.15.36\"], \"fe80::/64\": [\"fe80::ae1f:6bff:fef8:2542\"]}, \"last_host_check\": \"2020-08-18T12:10:50.314546\"}"}]: dispatch
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.830355+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/host.smithi036","val":"{\"daemons\": {\"mon.a\": {\"daemon_type\": \"mon\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"d36c25661867\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823102\", \"created\": \"2020-08-18T12:10:16.467600\", \"started\": \"2020-08-18T12:10:22.842682\"}, \"mgr.a\": {\"daemon_type\": \"mgr\", \"daemon_id\": \"a\", \"hostname\": \"smithi036\", \"container_id\": \"c590cd729900\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.823283\", \"created\": \"2020-08-18T12:10:24.515600\", \"started\": \"2020-08-18T12:12:53.589820\"}, \"osd.0\": {\"daemon_type\": \"osd\", \"daemon_id\": \"0\", \"hostname\": \"smithi036\", \"container_id\": \"185d14a6bc0e\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823393\", \"created\": \"2020-08-18T12:11:29.443599\", \"started\": \"2020-08-18T12:11:32.358036\"}, \"osd.1\": {\"daemon_type\": \"osd\", \"daemon_id\": \"1\", \"hostname\": \"smithi036\", \"container_id\": \"18c28aa439b9\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823644\", \"created\": \"2020-08-18T12:11:47.099598\", \"started\": \"2020-08-18T12:11:49.606862\"}, \"osd.2\": {\"daemon_type\": \"osd\", \"daemon_id\": \"2\", \"hostname\": \"smithi036\", \"container_id\": \"c5ce7db6d02f\", \"container_image_id\": \"27a13cd81179969872c047797f3507863fb5f3e8be88d1df747659c8f9a4b26c\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"version\": \"16.0.0-4463-g3841ae7\", \"status\": 1, \"status_desc\": \"running\", \"osdspec_affinity\": \"None\", \"last_refresh\": \"2020-08-18T12:14:48.823856\", \"created\": \"2020-08-18T12:12:04.511598\", \"started\": \"2020-08-18T12:12:07.007047\"}, \"mds.1.smithi036.fwyvuo\": {\"daemon_type\": \"mds\", \"daemon_id\": \"1.smithi036.fwyvuo\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": -1, \"status_desc\": \"error\", \"last_refresh\": \"2020-08-18T12:14:48.824061\", \"created\": \"2020-08-18T12:12:48.687597\"}, \"nfs.ganesha-test.smithi036\": {\"daemon_type\": \"nfs\", 
\"daemon_id\": \"ganesha-test.smithi036\", \"hostname\": \"smithi036\", \"container_image_name\": \"quay.ceph.io/ceph-ci/ceph:3841ae7519e559854fede71b5930b8e39ccdc3c7\", \"status\": 1, \"status_desc\": \"running\", \"last_refresh\": \"2020-08-18T12:14:48.824135\", \"created\": \"2020-08-18T12:13:31.627596\"}}, \"devices\": [{\"rejected_reasons\": [\"Insufficient space (<5GB) on vgs\", \"locked\", \"LVM detected\"], \"available\": false, \"path\": \"/dev/nvme0n1\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"\", \"model\": \"INTEL SSDPEDMD400G4\", \"rev\": \"\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"512\", \"rotational\": \"0\", \"nr_requests\": \"1023\", \"scheduler_mode\": \"none\", \"partitions\": {}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 400088457216.0, \"human_readable_size\": \"372.61 GB\", \"path\": \"/dev/nvme0n1\", \"locked\": 1}, \"lvs\": [{\"name\": \"lv_1\", \"comment\": \"not used by ceph\"}, {\"name\": \"lv_2\", \"osd_id\": \"2\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"b672ef8c-b1b7-4095-9010-9e0dbb7fd560\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tmDUnC-MITj-o9dZ-VrFd-fVuq-YQu7-wAKj21\"}, {\"name\": \"lv_3\", \"osd_id\": \"1\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"00c9da60-5629-446b-81c4-2aa37c742fd3\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"tWjwp6-rw22-xfX3-WTsV-NQfm-c2cR-RwkcQ2\"}, {\"name\": \"lv_4\", \"osd_id\": \"0\", \"cluster_name\": \"ceph\", \"type\": \"block\", \"osd_fsid\": \"22b971a7-f36a-4ac8-bc5a-bc03c27f6ebd\", \"cluster_fsid\": \"a08e3096-e14b-11ea-a073-001a4aab830c\", \"osdspec_affinity\": \"None\", \"block_uuid\": \"yDckaE-Tm2y-CZ1W-MNvA-Z9at-Sx8F-Wy1sdo\"}, {\"name\": \"lv_5\", \"comment\": \"not used by ceph\"}], \"human_readable_type\": \"ssd\", \"device_id\": \"_CVFT53300040400BGN\"}, {\"rejected_reasons\": [\"locked\"], \"available\": false, \"path\": \"/dev/sda\", \"sys_api\": {\"removable\": \"0\", \"ro\": \"0\", \"vendor\": \"ATA\", \"model\": \"ST1000NM0033-9ZM\", \"rev\": \"SN04\", \"sas_address\": \"\", \"sas_device_handle\": \"\", \"support_discard\": \"0\", \"rotational\": \"1\", \"nr_requests\": \"128\", \"scheduler_mode\": \"cfq\", \"partitions\": {\"sda1\": {\"start\": \"2048\", \"sectors\": \"1953522688\", \"sectorsize\": 512, \"size\": 1000203616256.0, \"human_readable_size\": \"931.51 GB\", \"holders\": []}}, \"sectors\": 0, \"sectorsize\": \"512\", \"size\": 1000204886016.0, \"human_readable_size\": \"931.51 GB\", \"path\": \"/dev/sda\", \"locked\": 1}, \"lvs\": [], \"human_readable_type\": \"hdd\", \"device_id\": \"ST1000NM0033-9ZM173_Z1W4J2QS\"}], \"osdspec_previews\": [], \"daemon_config_deps\": {\"mon.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:09.324619\"}, \"mgr.a\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:10.820348\"}, \"osd.0\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:27.858274\"}, \"osd.1\": {\"deps\": [], \"last_config\": \"2020-08-18T12:11:45.513076\"}, \"osd.2\": {\"deps\": [], \"last_config\": \"2020-08-18T12:12:02.780273\"}, \"mds.1.smithi036.fwyvuo\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:01.350567\"}, \"nfs.ganesha-test.smithi036\": {\"deps\": [], \"last_config\": \"2020-08-18T12:13:29.803823\"}}, \"last_daemon_update\": \"2020-08-18T12:14:48.824320\", \"last_device_update\": \"2020-08-18T12:12:11.983889\", 
\"networks\": {\"172.21.0.0/20\": [\"172.21.15.36\"], \"172.21.15.254\": [\"172.21.15.36\"], \"fe80::/64\": [\"fe80::ae1f:6bff:fef8:2542\"]}, \"last_host_check\": \"2020-08-18T12:10:50.314546\"}"}]': finished
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.831848+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd=[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]: dispatch
2020-08-18T12:14:49.645 INFO:journalctl@ceph.mon.a.smithi036.stdout:Aug 18 12:14:49 smithi036 bash[14996]: audit 2020-08-18T12:14:48.834997+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14247 172.21.15.36:0/1838420726' entity='mgr.a' cmd='[{"prefix":"config-key set","key":"mgr/cephadm/osd_remove_queue","val":"[]"}]': finished
2020-08-18T12:15:17.380 INFO:teuthology.orchestra.run.smithi036:> sudo rados -p nfs-ganesha -N test ls
2020-08-18T12:15:17.451 INFO:teuthology.orchestra.run.smithi036.stdout:rec-0000000000000003:nfs.ganesha-test.smithi036
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:userconf-nfs.ganesha-test
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:grace
2020-08-18T12:15:17.452 INFO:teuthology.orchestra.run.smithi036.stdout:conf-nfs.ganesha-test
2020-08-18T12:15:17.456 INFO:tasks.cephfs.test_nfs:b'rec-0000000000000003:nfs.ganesha-test.smithi036\nuserconf-nfs.ganesha-test\ngrace\nconf-nfs.ganesha-test\n'
2020-08-18T12:15:17.456 INFO:teuthology.orchestra.run.smithi036:> sudo rados -p nfs-ganesha -N test get userconf-nfs.ganesha-test -
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout: LOG {
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout: Default_log_level = FULL_DEBUG;
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:15:17.504 INFO:teuthology.orchestra.run.smithi036.stdout:
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: EXPORT {
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: Export_Id = 100;
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: Transports = TCP;
2020-08-18T12:15:17.505 INFO:teuthology.orchestra.run.smithi036.stdout: Path = /;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Pseudo = /ceph/;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Protocols = 4;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Access_Type = RW;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Attr_Expiration_Time = 0;
2020-08-18T12:15:17.506 INFO:teuthology.orchestra.run.smithi036.stdout: Squash = None;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: FSAL {
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: Name = CEPH;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: Filesystem = user_test_fs;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: User_Id = test;
2020-08-18T12:15:17.507 INFO:teuthology.orchestra.run.smithi036.stdout: Secret_Access_Key = 'AQAmxjtfQePTLBAA90sKBZ/mAeDvfQUPUVQ7AA==';
2020-08-18T12:15:17.508 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:15:17.509 INFO:teuthology.orchestra.run.smithi036.stdout: }
2020-08-18T12:15:17.509 INFO:teuthology.orchestra.run.smithi036:> sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt
2020-08-18T12:17:22.721 INFO:teuthology.orchestra.run.smithi036.stderr:mount.nfs: Connection timed out
2020-08-18T12:17:22.724 DEBUG:teuthology.orchestra.run:got remote process result: 32
2020-08-18T12:17:22.726 INFO:teuthology.orchestra.run.smithi036:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early log 'Ended test tasks.cephfs.test_nfs.TestNFS.test_cluster_set_reset_user_config'
2020-08-18T12:17:23.460 INFO:tasks.cephfs_test_runner:test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS) ... ERROR
2020-08-18T12:17:23.461 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:======================================================================
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:ERROR: test_cluster_set_reset_user_config (tasks.cephfs.test_nfs.TestNFS)
2020-08-18T12:17:23.462 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-swagner-testing-2020-08-18-1033/qa/tasks/cephfs/test_nfs.py", line 449, in test_cluster_set_reset_user_config
2020-08-18T12:17:23.463 INFO:tasks.cephfs_test_runner: self.ctx.cluster.run(args=mnt_cmd)
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/cluster.py", line 64, in run
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: return [remote.run(**kwargs) for remote in remotes]
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/cluster.py", line 64, in <listcomp>
2020-08-18T12:17:23.464 INFO:tasks.cephfs_test_runner: return [remote.run(**kwargs) for remote in remotes]
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 204, in run
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner: r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2020-08-18T12:17:23.465 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner: r.wait()
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
2020-08-18T12:17:23.466 INFO:tasks.cephfs_test_runner: self._raise_for_status()
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner: node=self.hostname, label=self.label
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi036 with status 32: 'sudo mount -t nfs -o port=2049 172.21.15.36:/ceph /mnt'
2020-08-18T12:17:23.467 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:Ran 3 tests in 274.876s
2020-08-18T12:17:23.468 INFO:tasks.cephfs_test_runner:
2020-08-18T12:17:23.469 INFO:tasks.cephfs_test_runner:FAILED (errors=1)
</pre>
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-08-18_11:48:28-rados:cephadm-wip-swagner-testing-2020-08-18-1033-distro-basic-smithi/5356227/">https://pulpito.ceph.com/swagner-2020-08-18_11:48:28-rados:cephadm-wip-swagner-testing-2020-08-18-1033-distro-basic-smithi/5356227/</a></p>
Orchestrator - Bug #46813 (Resolved): `ceph orch * --refresh` is broken
https://tracker.ceph.com/issues/46813
2020-08-03T08:09:19Z
Sebastian Wagner
<p>Those calls violate <a class="external" href="https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers">https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers</a></p>
<p>and are thus dangerous, as they might render cephadm unresponsive.</p>
<p>We simply cannot provide that information synchronously. Note that we then have to give users a way to trigger a refresh explicitly.</p>
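<p>A sketch of the resulting workflow, assuming --refresh stays as a way to schedule a background refresh rather than wait for one:</p>
<pre>
# Ask cephadm to refresh its cached inventory in the background ...
ceph orch ps --refresh
# ... and read the (eventually updated) cached data on the next call
ceph orch ps
</pre>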
Orchestrator - Bug #46748 (Resolved): Module 'cephadm' has failed: auth get failed: failed to fin...
https://tracker.ceph.com/issues/46748
2020-07-29T09:53:48Z
Sebastian Wagner
<p>The OSD was purged yesterday:</p>
<pre>
ceph osd purge 32 --yes-i-really-mean-it
ceph osd tree | grep 32        # => no match
ceph osd crush remove osd.32   # => device 'osd.32' does not appear in the crush map
</pre>
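<p>A possible diagnostic for the state cephadm trips over, assuming the error comes from an auth entry that survived the purge:</p>
<pre>
# Check whether an auth entry for the purged OSD still exists
ceph auth get osd.32
ceph auth ls | grep osd.32
</pre>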
Orchestrator - Bug #46534 (Resolved): cephadm podman pull: Digest did not match
https://tracker.ceph.com/issues/46534
2020-07-14T13:07:14Z
Sebastian Wagner
<p><a class="external" href="https://pulpito.ceph.com/swagner-2020-07-14_12:03:52-rados:cephadm-wip-swagner-testing-2020-07-14-1125-distro-basic-smithi/">https://pulpito.ceph.com/swagner-2020-07-14_12:03:52-rados:cephadm-wip-swagner-testing-2020-07-14-1125-distro-basic-smithi/</a></p>
<pre>
INFO:cephadm:Verifying port 3000 ...
INFO:cephadm:Non-zero exit code 125 from /bin/podman run --rm --net=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi141 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
INFO:cephadm:stat:stderr Trying to pull registry.access.redhat.com/ceph/ceph-grafana:latest...
INFO:cephadm:stat:stderr name unknown: Repo not found
INFO:cephadm:stat:stderr Trying to pull registry.fedoraproject.org/ceph/ceph-grafana:latest...
INFO:cephadm:stat:stderr manifest unknown: manifest unknown
INFO:cephadm:stat:stderr Trying to pull registry.centos.org/ceph/ceph-grafana:latest...
INFO:cephadm:stat:stderr manifest unknown: manifest unknown
INFO:cephadm:stat:stderr Trying to pull docker.io/ceph/ceph-grafana:latest...
INFO:cephadm:stat:stderr Getting image source signatures
INFO:cephadm:stat:stderr Copying blob sha256:003efafe5a84678b585af8a06810c47079aa4705e60d07f1c31a52f0e35ce0b5
INFO:cephadm:stat:stderr Digest did not match, expected sha256:003efafe5a84678b585af8a06810c47079aa4705e60d07f1c31a52f0e35ce0b5, got sha256:f4c0a426e0aa470680560b267b79382c183c9608f3da72acce8cc5beaf8ebe31
INFO:cephadm:stat:stderr Error: unable to pull ceph/ceph-grafana:latest: 4 errors occurred:
INFO:cephadm:stat:stderr * Error initializing source docker://registry.access.redhat.com/ceph/ceph-grafana:latest: Error reading manifest latest in registry.access.redhat.com/ceph/ceph-grafana: name unknown: Repo not found
INFO:cephadm:stat:stderr * Error initializing source docker://registry.fedoraproject.org/ceph/ceph-grafana:latest: Error reading manifest latest in registry.fedoraproject.org/ceph/ceph-grafana: manifest unknown: manifest unknown
INFO:cephadm:stat:stderr * Error initializing source docker://registry.centos.org/ceph/ceph-grafana:latest: Error reading manifest latest in registry.centos.org/ceph/ceph-grafana: manifest unknown: manifest unknown
INFO:cephadm:stat:stderr * Error writing blob: error storing blob to file "/var/tmp/storage976712658/1": Digest did not match, expected sha256:003efafe5a84678b585af8a06810c47079aa4705e60d07f1c31a52f0e35ce0b5, got sha256:f4c0a426e0aa470680560b267b79382c183c9608f3da72acce8cc5beaf8ebe31
INFO:cephadm:stat:stderr
Traceback (most recent call last):
File "<stdin>", line 4247, in <module>
File "<stdin>", line 968, in _default_image
File "<stdin>", line 2508, in command_deploy
File "<stdin>", line 2452, in extract_uid_gid_monitoring
File "<stdin>", line 1535, in extract_uid_gid
File "<stdin>", line 1974, in run
File "<stdin>", line 696, in call_throws
RuntimeError: Failed command: /bin/podman run --rm --net=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi141 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
Traceback (most recent call last):
File "/usr/share/ceph/mgr/cephadm/module.py", line 1613, in _run_cephadm
code, '\n'.join(err)))
RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploying daemon grafana.a ...
</pre>
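<p>A possible mitigation while an upstream registry serves broken content is to pin the monitoring image to a fully qualified reference instead of the unqualified <code>:latest</code> tag (the tag below is illustrative):</p>
<pre>
ceph config set mgr mgr/cephadm/container_image_grafana docker.io/ceph/ceph-grafana:6.6.2
</pre>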
Orchestrator - Bug #46036 (Resolved): cephadm: killmode=none: systemd units failed, but container...
https://tracker.ceph.com/issues/46036
2020-06-16T14:23:18Z
Sebastian Wagner
<pre>
# ceph orch ps
NAME HOST STATUS REFRESHED AGE VERSION IMAGE NAME IMAGE ID CONTAINER ID
osd.0 hostXXXXX-4 error 6m ago 92m 15.2.3.252 ceph/ceph 33194941836f c5dd2b0cc77d
osd.1 hostXXXXX-4 error 6m ago 90m 15.2.3.252 ceph/ceph 33194941836f b65dc56c76a2
</pre>
<p>It turns out the systemd unit failed:<br /><pre>
● ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.2.service - Ceph osd.2 for 92d2d4c0-af05-11ea-9578-0cc47aaa2edc
Loaded: loaded (/etc/systemd/system/ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2020-06-16 12:05:49 UTC; 1h 32min ago
Process: 3861 ExecStopPost=/bin/bash /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.2/unit.poststop (code=exited, status=0/SUCCESS)
Process: 3693 ExecStart=/bin/bash /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.2/unit.run (code=exited, status=125)
Process: 3676 ExecStartPre=/usr/bin/podman rm ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.2 (code=exited, status=2)
Main PID: 3693 (code=exited, status=125)
Tasks: 34
CGroup: /system.slice/system-ceph\x2d92d2d4c0\x2daf05\x2d11ea\x2d9578\x2d0cc47aaa2edc.slice/ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.2.service
├─28935 /bin/bash /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.2/unit.run
├─29335 /usr/bin/podman run --rm --net=host --ipc=host --privileged --group-add=disk --name ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.2 -e CONTAINER_IMAGE=ceph/ceph -e NODE_NAME=hostXXXXX-4 -v /var/run/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc>
└─29396 /usr/bin/conmon --api-version 1 -s -c 2f88b58cb64519fc90842f6a473703da44c5612d2686b5beae86b0ff2a7d50bb -u 2f88b58cb64519fc90842f6a473703da44c5612d2686b5beae86b0ff2a7d50bb -r /usr/sbin/runc -b /var/lib/containers/storage/btrfs-containers/2f88b58cb64519fc90842f6a473703da44c5612d2686b5beae86b0ff2a7d5>
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: Uptime(secs): 5400.0 total, 0.0 interval
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: Flush(GB): cumulative 0.000, interval 0.000
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: AddFile(GB): cumulative 0.000, interval 0.000
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: AddFile(Total Files): cumulative 0, interval 0
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: AddFile(L0 Files): cumulative 0, interval 0
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: AddFile(Keys): cumulative 0, interval 0
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Jun 16 13:32:08 hostXXXXX-4 bash[28935]: ** File Read Latency Histogram By Level [default] **
</pre></p>
<p>where the log shows something like:</p>
<pre>
-- Unit ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service has begun starting up.
Jun 16 12:03:06 hostXXXXX-4 podman[31032]: Error: cannot remove container b65dc56c76a247e9178fa81005a93cf44e502a06fb46bcc79b9bf484128b2907 as it is running ->
Jun 16 12:03:06 hostXXXXX-4 systemd[1]: Started Ceph osd.1 for 92d2d4c0-af05-11ea-9578-0cc47aaa2edc.
-- Subject: Unit ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service has finished starting up.
--
-- The start-up result is done.
Jun 16 12:03:06 hostXXXXX-4 bash[31047]: WARNING: The same type, major and minor should not be used for multiple devices.
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-block-78635646-a4a6-474e->
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/ln -snf /dev/ceph-block-78635646-a4a6-474e-8832-c3a3e668cf9d/osd-block-f645cf27-857b-48ae->
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--block--78635646--a4a6--474e--8832--c3a3e668cf9d-osd-->
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/ln -snf /dev/ceph-block-dbs-83a1ab17-f232-4f60-887f-111b89f3f655/osd-block-db-3c9563a7-9ab>
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-block-dbs-83a1ab17-f232-4f60-887f-111b89f3f655/osd-block-db-3>
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--block--dbs--83a1ab17--f232--4f60--887f--111b89f3f655->
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block.db
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--block--dbs--83a1ab17--f232--4f60--887f--111b89f3f655->
Jun 16 12:03:07 hostXXXXX-4 bash[31047]: --> ceph-volume lvm activate successful for osd ID: 1
Jun 16 12:03:08 hostXXXXX-4 bash[31047]: WARNING: The same type, major and minor should not be used for multiple devices.
Jun 16 12:03:08 hostXXXXX-4 bash[31047]: Error: error creating container storage: the container name "ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.1" is alr>
Jun 16 12:03:08 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Main process exited, code=exited, status=125/n/a
Jun 16 12:03:08 hostXXXXX-4 bash[31227]: WARNING: The same type, major and minor should not be used for multiple devices.
Jun 16 12:03:09 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Unit entered failed state.
Jun 16 12:03:09 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Failed with result 'exit-code'.
Jun 16 12:03:19 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Service RestartSec=10s expired, scheduling restart.
Jun 16 12:03:19 hostXXXXX-4 systemd[1]: Stopped Ceph osd.1 for 92d2d4c0-af05-11ea-9578-0cc47aaa2edc.
-- Subject: Unit ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service has finished shutting down
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service has finished shutting down.
Jun 16 12:03:19 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Start request repeated too quickly.
Jun 16 12:03:19 hostXXXXX-4 systemd[1]: Failed to start Ceph osd.1 for 92d2d4c0-af05-11ea-9578-0cc47aaa2edc.
-- Subject: Unit ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service has failed.
--
-- The result is failed.
Jun 16 12:03:19 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Unit entered failed state.
Jun 16 12:03:19 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Failed with result 'exit-code'.
</pre>
<p>Adding a <code>set -e</code> to /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.1/unit.run changes the output to:</p>
<pre>
hostXXXXX-4:~ # systemctl status ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1
● ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service - Ceph osd.1 for 92d2d4c0-af05-11ea-9578-0cc47aaa2edc
Loaded: loaded (/etc/systemd/system/ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Tue 2020-06-16 14:07:27 UTC; 4s ago
Process: 10391 ExecStopPost=/bin/bash /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.1/unit.poststop (code=exited, status=0/SUCCESS)
Process: 10216 ExecStart=/bin/bash /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.1/unit.run (code=exited, status=125)
Process: 10201 ExecStartPre=/usr/bin/podman rm ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.1 (code=exited, status=2)
Main PID: 10216 (code=exited, status=125)
Tasks: 29
CGroup: /system.slice/system-ceph\x2d92d2d4c0\x2daf05\x2d11ea\x2d9578\x2d0cc47aaa2edc.slice/ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service
├─25971 /bin/bash /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.1/unit.run
├─26395 /usr/bin/podman run --rm --net=host --ipc=host --privileged --group-add=disk --name ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.1 -e CON>
└─26452 /usr/bin/conmon --api-version 1 -s -c b65dc56c76a247e9178fa81005a93cf44e502a06fb46bcc79b9bf484128b2907 -u b65dc56c76a247e9178fa81005a93cf4>
Jun 16 14:07:27 hostXXXXX-4 systemd[1]: ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc@osd.1.service: Failed with result 'exit-code'.
</pre>
<p>Now, let's stop the container:</p>
<pre>
hostXXXXX-4:~ # podman stop ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.1
b65dc56c76a247e9178fa81005a93cf44e502a06fb46bcc79b9bf484128b2907
</pre>
<p>Adding this line to /var/lib/ceph/92d2d4c0-af05-11ea-9578-0cc47aaa2edc/osd.1/unit.run:</p>
<pre>
! /usr/bin/podman rm --storage ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.1
</pre>
<p>Now the service is up again.</p>
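<p>Putting the pieces together, a minimal sketch of the relevant part of unit.run (names and paths taken from above; the trailing podman arguments are elided):</p>
<pre>
set -e
# tolerate a leftover container record; the leading '!' keeps set -e from aborting
# when there is nothing to remove
! /usr/bin/podman rm --storage ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.1
/usr/bin/podman run --rm --net=host --name ceph-92d2d4c0-af05-11ea-9578-0cc47aaa2edc-osd.1 ...
</pre>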
Orchestrator - Feature #45378 (Resolved): cephadm: manage /etc/ceph/ceph.conf
https://tracker.ceph.com/issues/45378
2020-05-04T13:23:04Z
Sebastian Wagner
<p>/etc/ceph/ceph.conf is often used by tools in the Ceph ecosystem. We should provide a mechanism to keep it up to date (e.g. when new mons are added); a manual workaround is sketched below.</p>
<p>See</p>
<ul>
<li><a class="external" href="https://github.com/ceph/ceph/pull/34248">https://github.com/ceph/ceph/pull/34248</a></li>
<li><a class="external" href="https://github.com/ceph/ceph-salt/issues/199">https://github.com/ceph/ceph-salt/issues/199</a></li>
</ul>
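<p>Until such a mechanism exists, one workaround is to regenerate and copy a minimal config by hand (a sketch; the target host is illustrative):</p>
<pre>
ceph config generate-minimal-conf > /etc/ceph/ceph.conf
scp /etc/ceph/ceph.conf root@host2:/etc/ceph/ceph.conf
</pre>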
Orchestrator - Cleanup #45321 (Resolved): Service spec: unify `spec:` vs omitting `spec:`
https://tracker.ceph.com/issues/45321
2020-04-29T09:01:05Z
Sebastian Wagner
<pre><code class="yaml syntaxhl"><span class="CodeRay"><span class="key">service_type</span>: <span class="string"><span class="content">iscsi </span></span>
<span class="key">service_id</span>: <span class="string"><span class="content">test</span></span>
<span class="key">placement</span>:
<span class="key">hosts</span>:
- <span class="string"><span class="content">osd0</span></span>
<span class="key">spec</span>:
<span class="key">pool</span>: <span class="string"><span class="content">rbd</span></span>
<span class="key">api_user</span>: <span class="string"><span class="content">admin</span></span>
<span class="key">api_password</span>: <span class="string"><span class="content">admin</span></span>
<span class="key">trusted_ip_list</span>: <span class="string"><span class="content">192.168.121.1</span></span>
</span></code></pre>
<pre><code class="yaml syntaxhl"><span class="CodeRay"><span class="key">service_type</span>: <span class="string"><span class="content">iscsi </span></span>
<span class="key">service_id</span>: <span class="string"><span class="content">test</span></span>
<span class="key">placement</span>:
<span class="key">hosts</span>:
- <span class="string"><span class="content">osd0</span></span>
<span class="key">pool</span>: <span class="string"><span class="content">rbd</span></span>
<span class="key">api_user</span>: <span class="string"><span class="content">admin</span></span>
<span class="key">api_password</span>: <span class="string"><span class="content">admin</span></span>
<span class="key">trusted_ip_list</span>: <span class="string"><span class="content">192.168.121.1</span></span>
</span></code></pre>
<p>are the same data. We should unify them to one form or the other.</p>
<p>This is especially relevant for OSD specs.</p>
Orchestrator - Cleanup #45118 (Closed): orch (pacific): cleanup CLI
https://tracker.ceph.com/issues/45118
2020-04-16T18:43:25Z
Sebastian Wagner
<p>Use</p>
<pre>
ceph orch <verb> <object>
</pre>
<p>for everything. For example:</p>
<pre>
host rm -> rm host
</pre>
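<p>For illustration only (not the final CLI), the proposed verb-object form next to the current object-verb form:</p>
<pre>
# current
ceph orch host rm node1
# proposed
ceph orch rm host node1
</pre>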
Orchestrator - Bug #45097 (Resolved): cephadm: UX: Traceback, if `orch host add mon1` fails.
https://tracker.ceph.com/issues/45097
2020-04-15T13:36:45Z
Sebastian Wagner
<p>This should not show a Traceback:</p>
<pre>
[root@mon1 ~]# cephadm bootstrap --mon-ip 10.10.101.5
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chronyd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: f8edc08a-7f17-11ea-8707-000c2915dd98
INFO:cephadm:Verifying IP 10.10.101.5 port 3300 ...
INFO:cephadm:Verifying IP 10.10.101.5 port 6789 ...
INFO:cephadm:Mon IP 10.10.101.5 is in CIDR network 10.0.0.0/8
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Non-zero exit code 1 from /bin/firewall-cmd --permanent --query-service ceph-mon
INFO:cephadm:/bin/firewall-cmd:stdout no
INFO:cephadm:Enabling firewalld service ceph-mon in current zone...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Non-zero exit code 1 from /bin/firewall-cmd --permanent --query-service ceph
INFO:cephadm:/bin/firewall-cmd:stdout no
INFO:cephadm:Enabling firewalld service ceph in current zone...
INFO:cephadm:Non-zero exit code 1 from /bin/firewall-cmd --permanent --query-port 8080/tcp
INFO:cephadm:/bin/firewall-cmd:stdout no
INFO:cephadm:Enabling firewalld port 8080/tcp in current zone...
INFO:cephadm:Non-zero exit code 1 from /bin/firewall-cmd --permanent --query-port 8443/tcp
INFO:cephadm:/bin/firewall-cmd:stdout no
INFO:cephadm:Enabling firewalld port 8443/tcp in current zone...
INFO:cephadm:Non-zero exit code 1 from /bin/firewall-cmd --permanent --query-port 9283/tcp
INFO:cephadm:/bin/firewall-cmd:stdout no
INFO:cephadm:Enabling firewalld port 9283/tcp in current zone...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host mon1...
INFO:cephadm:Non-zero exit code 2 from /bin/docker run --rm --net=host -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=mon1 -v /var/log/ceph/f8edc08a-7f17-11ea-8707-000c2915dd98:/var/log/ceph:z -v /tmp/ceph-tmp7k819c_n:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8_mtgn16:/etc/ceph/ceph.conf:z --entrypoint /usr/bin/ceph docker.io/ceph/ceph:v15 orch host add mon1
INFO:cephadm:/usr/bin/ceph:stderr Error ENOENT: Failed to connect to mon1 (mon1). Check that the host is reachable and accepts connections using the cephadm SSH key
INFO:cephadm:/usr/bin/ceph:stderr you may want to run:
INFO:cephadm:/usr/bin/ceph:stderr > ssh -F =(ceph cephadm get-ssh-config) -i =(ceph config-key get mgr/cephadm/ssh_identity_key) root@mon1
Traceback (most recent call last):
File "/sbin/cephadm", line 4282, in <module>
r = args.func()
File "/sbin/cephadm", line 972, in _default_image
return func()
File "/sbin/cephadm", line 2382, in command_bootstrap
cli(['orch', 'host', 'add', host])
File "/sbin/cephadm", line 2243, in cli
).run(timeout=timeout)
File "/sbin/cephadm", line 1976, in run
self.run_cmd(), desc=self.entrypoint, timeout=timeout)
File "/sbin/cephadm", line 700, in call_throws
raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /bin/docker run --rm --net=host -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=mon1 -v /var/log/ceph/f8edc08a-7f17-11ea-8707-000c2915dd98:/var/log/ceph:z -v /tmp/ceph-tmp7k819c_n:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmp8_mtgn16:/etc/ceph/ceph.conf:z --entrypoint /usr/bin/ceph docker.io/ceph/ceph:v15 orch host add mon1
</pre>
<p>Also,</p>
<pre>
ssh -F =(ceph cephadm get-ssh-config) -i =(ceph config-key get mgr/cephadm/ssh_identity_key)
</pre>
<p>is a zshism, so it fails for users whose shell is not zsh.</p>
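<p>A shell-agnostic equivalent is to spell it out with temporary files (a sketch; paths are illustrative):</p>
<pre>
ceph cephadm get-ssh-config > /tmp/cephadm-ssh-config
ceph config-key get mgr/cephadm/ssh_identity_key > /tmp/cephadm-ssh-key
chmod 600 /tmp/cephadm-ssh-key
ssh -F /tmp/cephadm-ssh-config -i /tmp/cephadm-ssh-key root@mon1
</pre>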
<p>Also, we should point to the docs.</p>
<p>Priority = high, as it creates a lot of support load.</p>
Orchestrator - Feature #43691 (Resolved): cephadm: upgrade major releases
https://tracker.ceph.com/issues/43691
2020-01-20T14:06:00Z
Sebastian Wagner
<p>E.g. Upgrading Octopus to Pacific</p>
<p><a class="external" href="https://pad.ceph.com/p/cephadm-upgrades">https://pad.ceph.com/p/cephadm-upgrades</a></p>
Orchestrator - Feature #43686 (Resolved): cephadm: support rgw nfs
https://tracker.ceph.com/issues/43686
2020-01-20T14:00:10Z
Sebastian Wagner
<p><a class="external" href="https://docs.ceph.com/docs/master/radosgw/nfs/">https://docs.ceph.com/docs/master/radosgw/nfs/</a></p>
Orchestrator - Bug #43681 (Resolved): cephadm: Streamline RGW deployment
https://tracker.ceph.com/issues/43681
2020-01-20T13:56:33Z
Sebastian Wagner
<p>The problem is, this doesn't work:</p>
<pre><code class="yaml syntaxhl"><span class="CodeRay"><span class="key">service_type</span>: <span class="string"><span class="content">rgw</span></span>
<span class="key">spec</span>:
<span class="key">realm</span>: <span class="string"><span class="content">myrealm</span></span>
<span class="key">zone</span>: <span class="string"><span class="content">myzone</span></span>
</span></code></pre>
<p>and then:</p>
<pre>
cephadm bootstrap ... --apply-spec rgw.yaml
</pre>
<p><b>because</b></p>
<ul>
<li>Users need to run some commands manually:</li>
</ul>
<pre>
radosgw-admin realm create --rgw-realm=default --default
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=default --master --
</pre>
<ul>
<li>Calling <code>realm create</code> requires running OSDs.</li>
</ul>
<p>Which means we have to (see the sketch below):</p>
<ol>
<li>wait until there are OSDs in the cluster,</li>
<li>then run those radosgw-admin commands.</li>
</ol>
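<p>A sketch of the resulting order of operations (the spec file name is illustrative; the radosgw-admin commands are the ones shown above):</p>
<pre>
# 1. bootstrap, add hosts, and wait until OSDs are up and in
ceph orch apply osd --all-available-devices
ceph -s

# 2. run the radosgw-admin realm/zonegroup/zone commands shown above

# 3. only then apply the rgw spec
ceph orch apply -i rgw.yaml
</pre>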