Ceph: Issues (https://tracker.ceph.com/, feed generated 2020-09-16T14:31:07Z)
Orchestrator - Bug #47501 (Resolved): cephadm: Error bootstrapping with '--container-init' option
https://tracker.ceph.com/issues/47501 | 2020-09-16T14:31:07Z | Ricardo Marques <rimarques@suse.com>
<p>When I try to bootstrap a new cluster using the '--container-init' option, I get the following error:</p>
<pre>
['/usr/bin/podman', 'run', '--rm', '--net=host', '--ipc=host', '-e', 'CONTAINER_IMAGE=registry.opensuse.org/filesystems/ceph/master/upstream/images/ceph/ceph', '-e', 'NODE_NAME=node1', '-v', '/var/log/ceph/86501be0-f825-11ea-ba9f-525400bedfac:/var/log/ceph:z', '-v', '/tmp/ceph-tmpp4xdmtqe:/etc/ceph/ceph.client.admin.keyring:z', '-v', '/tmp/ceph-tmph6bq7nup:/etc/ceph/ceph.conf:z', '--entrypoint', '/usr/bin/ceph', 'registry.opensuse.org/filesystems/ceph/master/upstream/images/ceph/ceph', 'config', 'set', 'mgr', 'mgr/cephadm/container_init', True, '--force']
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 5859, in <module>
    r = args.func()
  File "/usr/sbin/cephadm", line 1248, in _default_image
    return func()
  File "/usr/sbin/cephadm", line 3032, in command_bootstrap
    cli(['config', 'set', 'mgr', 'mgr/cephadm/container_init', args.container_init, '--force'])
  File "/usr/sbin/cephadm", line 2831, in cli
    ).run(timeout=timeout)
  File "/usr/sbin/cephadm", line 2479, in run
    self.run_cmd(), desc=self.entrypoint, timeout=timeout)
  File "/usr/sbin/cephadm", line 907, in call_throws
    out, err, ret = call(command, **kwargs)
  File "/usr/sbin/cephadm", line 799, in call
    logger.debug("Running command: %s" % ' '.join(command))
TypeError: sequence item 22: expected str instance, bool found
</pre>
Dashboard - Bug #47494 (Resolved): mgr/dashboard: Dashboard becomes unresponsive when SMART data ...
https://tracker.ceph.com/issues/47494 | 2020-09-16T09:32:47Z | Ricardo Marques <rimarques@suse.com>
<p>In my environment, 'smartmontools' is not configured, so I see a warning message on the "Cluster &gt; Device health" tab; but if I click on the "SMART" tab, the Dashboard UI stops responding and I'm no longer able to click on other tabs:</p>
<p><img src="https://tracker.ceph.com/attachments/download/5141/smart-error.gif" alt="" /></p>
Orchestrator - Bug #46922 (Resolved): cephadm: IPv6 syntax inconsistency
https://tracker.ceph.com/issues/46922 | 2020-08-13T12:39:43Z | Ricardo Marques <rimarques@suse.com>
<p>While trying to bootstrap a cluster with the `--mon-ip` and `--apply-spec` options, I found that sometimes the IPv6 syntax is `[<IP>]`, and sometimes it is `<IP>`.</p>
<p>The following examples show which combinations will work:</p>
<p><strong>1)</strong><br /><pre>
cephadm --verbose bootstrap --mon-ip fde4:8dba:82e1:0:5054:ff:fecd:3d4e --apply-spec /root/bootstrap-spec.yaml
</pre></p>
<p>Fails because we must use `[<IP>]` format on `--mon-ip`:<br /><pre>
INFO:cephadm:/usr/bin/monmaptool:stderr /usr/bin/monmaptool: invalid ip:port '[v2:fde4:8dba:82e1:0:5054:ff:fecd:3d4e:3300,v1:fde4:8dba:82e1:0:5054:ff:fecd:3d4e:6789]'
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 5159, in <module>
    r = args.func()
  File "/usr/sbin/cephadm", line 1223, in _default_image
    return func()
  File "/usr/sbin/cephadm", line 2661, in command_bootstrap
    tmp_monmap.name: '/tmp/monmap:z',
  File "/usr/sbin/cephadm", line 2404, in run
    self.run_cmd(), desc=self.entrypoint, timeout=timeout)
  File "/usr/sbin/cephadm", line 884, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=docker.io/ceph/ceph:v15 -e NODE_NAME=node1 -v /tmp/ceph-tmpiuac2yke:/tmp/monmap:z --entrypoint /usr/bin/monmaptool docker.io/ceph/ceph:v15 --create --clobber --fsid e308bb2a-dd5f-11ea-88c2-525400ca3d18 --addv node1 [v2:fde4:8dba:82e1:0:5054:ff:fecd:3d4e:3300,v1:fde4:8dba:82e1:0:5054:ff:fecd:3d4e:6789] /tmp/monmap
</pre></p>
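<p>The failure in (1) comes down to the `addr:port` notation being ambiguous for raw IPv6 addresses: every colon inside the address is a candidate port separator, which is why the bracketed form exists. A minimal sketch (hypothetical helper, not cephadm code) illustrating the ambiguity:</p>

```python
def split_addr_port(s):
    """Split an 'addr:port' string.

    With brackets the address/port boundary is explicit; without them,
    the last colon is the only guess available, which silently mangles
    a raw IPv6 address that happens to end in a decimal-looking group.
    """
    if s.startswith('['):
        addr, _, port = s[1:].partition(']:')
        return addr, int(port)
    # Fallback guess: treat everything after the last colon as the port.
    addr, _, port = s.rpartition(':')
    return addr, int(port)

# Unambiguous with brackets:
print(split_addr_port('[fde4:8dba::1]:3300'))   # ('fde4:8dba::1', 3300)
# Without brackets, a port-less address is misparsed:
print(split_addr_port('fde4:8dba::1:6789'))     # ('fde4:8dba::1', 6789)
```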
<p><strong>2)</strong><br /><pre>
# cat /root/bootstrap-spec.yaml
service_type: mgr
service_name: mgr
placement:
  hosts:
    - 'node1'
---
service_type: mon
service_name: mon
placement:
  hosts:
    - 'node1:[fde4:8dba:82e1:0:5054:ff:fecd:3d4e]'
# cephadm --verbose bootstrap --mon-ip [fde4:8dba:82e1:0:5054:ff:fecd:3d4e] --apply-spec /root/bootstrap-spec.yaml
</pre></p>
<p>Fails because `service_spec` does not support the `[<IP>]` format:<br /><pre>
DEBUG:cephadm:Running command: /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=registry.suse.de/devel/storage/7.0/containers/ses/7/ceph/ceph -e NODE_NAME=node1 -v /var/log/ceph/3cebcb98-dd4f-11ea-9787-525400ca3d18:/var/log/ceph:z -v /tmp/ceph-tmp9bfj1zkg:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpg0xv6fke:/etc/ceph/ceph.conf:z -v /tmp/bootstrap-spec.yaml:/tmp/spec.yml:z --entrypoint /usr/bin/ceph registry.suse.de/devel/storage/7.0/containers/ses/7/ceph/ceph orch apply -i /tmp/spec.yml
DEBUG:cephadm:/usr/bin/ceph:stderr Error EINVAL: Traceback (most recent call last):
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/share/ceph/mgr/mgr_module.py", line 1167, in _handle_command
DEBUG:cephadm:/usr/bin/ceph:stderr return self.handle_command(inbuf, cmd)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 138, in handle_command
DEBUG:cephadm:/usr/bin/ceph:stderr return dispatch[cmd['prefix']].call(self, cmd, inbuf)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/share/ceph/mgr/mgr_module.py", line 311, in call
DEBUG:cephadm:/usr/bin/ceph:stderr return self.func(mgr, **kwargs)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 100, in <lambda>
DEBUG:cephadm:/usr/bin/ceph:stderr wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 89, in wrapper
DEBUG:cephadm:/usr/bin/ceph:stderr return func(*args, **kwargs)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/share/ceph/mgr/orchestrator/module.py", line 1162, in _apply_misc
DEBUG:cephadm:/usr/bin/ceph:stderr spec = json_to_generic_spec(s)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1217, in json_to_generic_spec
DEBUG:cephadm:/usr/bin/ceph:stderr return ServiceSpec.from_json(spec)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib/python3.6/site-packages/ceph/deployment/service_spec.py", line 42, in inner
DEBUG:cephadm:/usr/bin/ceph:profile rt=0.9018697738647461, stop=False, exit=None, reads=[12]
DEBUG:cephadm:/usr/bin/ceph:stderr return method(cls, *args, **kwargs)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib/python3.6/site-packages/ceph/deployment/service_spec.py", line 488, in from_json
DEBUG:cephadm:/usr/bin/ceph:stderr return _cls._from_json_impl(c) # type: ignore
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib/python3.6/site-packages/ceph/deployment/service_spec.py", line 495, in _from_json_impl
DEBUG:cephadm:/usr/bin/ceph:stderr v = PlacementSpec.from_json(v)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib/python3.6/site-packages/ceph/deployment/service_spec.py", line 42, in inner
DEBUG:cephadm:/usr/bin/ceph:stderr return method(cls, *args, **kwargs)
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib/python3.6/site-packages/ceph/deployment/service_spec.py", line 245, in from_json
DEBUG:cephadm:/usr/bin/ceph:stderr isinstance(host, str) else
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib/python3.6/site-packages/ceph/deployment/service_spec.py", line 134, in parse
DEBUG:cephadm:/usr/bin/ceph:stderr raise e
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib/python3.6/site-packages/ceph/deployment/service_spec.py", line 131, in parse
DEBUG:cephadm:/usr/bin/ceph:stderr ip_address(six.text_type(network))
DEBUG:cephadm:/usr/bin/ceph:stderr File "/usr/lib64/python3.6/ipaddress.py", line 54, in ip_address
DEBUG:cephadm:/usr/bin/ceph:stderr address)
DEBUG:cephadm:/usr/bin/ceph:stderr ValueError: '[fde4:8dba:82e1:0:5054:ff:fecd:3d4e]' does not appear to be an IPv4 or IPv6 address
</pre></p>
<p><strong>3)</strong><br />The combination that works is:<br /><pre>
# cat /root/bootstrap-spec.yaml
service_type: mgr
service_name: mgr
placement:
  hosts:
    - 'node1'
---
service_type: mon
service_name: mon
placement:
  hosts:
    - 'node1:fde4:8dba:82e1:0:5054:ff:fecd:3d4e'
# cephadm --verbose bootstrap --mon-ip [fde4:8dba:82e1:0:5054:ff:fecd:3d4e] --apply-spec /root/bootstrap-spec.yaml
</pre></p>
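<p>Both accepted syntaxes could be normalized to one canonical form by stripping the brackets before parsing. A minimal sketch (a hypothetical helper, not the actual `service_spec` code):</p>

```python
import ipaddress

def normalize_ip(value):
    """Accept both '<IP>' and '[<IP>]' and return the bare address.

    ipaddress.ip_address() rejects the bracketed form outright (this is
    the ValueError seen in example 2), so the brackets are removed first.
    """
    if value.startswith('[') and value.endswith(']'):
        value = value[1:-1]
    return str(ipaddress.ip_address(value))

print(normalize_ip('[fde4:8dba:82e1:0:5054:ff:fecd:3d4e]'))
print(normalize_ip('fde4:8dba:82e1:0:5054:ff:fecd:3d4e'))
```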
<hr />
<p>Should we be consistent about the supported/required IPv6 syntax? If so, which form should be the correct one? Or should we support both?</p>
Orchestrator - Bug #46910 (Can't reproduce): cephadm: Unable to create an iSCSI target
https://tracker.ceph.com/issues/46910 | 2020-08-12T11:10:16Z | Ricardo Marques <rimarques@suse.com>
<p>When I try to create an iSCSI target I get the following error:</p>
<p><img src="https://tracker.ceph.com/attachments/download/5034/error-creating-target.png" alt="" /></p>
<p>On `ceph-iscsi` container logs I see:</p>
<pre>
Aug 11 16:27:02 node1 bash[26919]: debug ::1 - - [11/Aug/2020 14:27:02] "GET /api/_ping HTTP/1.1" 200 -
Aug 11 16:27:03 node1 bash[26919]: debug (LUN.add_dev_to_lio) Adding image 'mypool/myrbd1' to LIO backstore user:rbd
Aug 11 16:27:03 node1 bash[26919]: debug failed to add mypool/myrbd1 to LIO - error([Errno 2] No such file or directory)
Aug 11 16:27:03 node1 bash[26919]: debug LUN alloc problem - failed to add mypool/myrbd1 to LIO - error([Errno 2] No such file or directory)
</pre>
<p>My service spec:</p>
<pre>
# tail -14 service_spec_gw.yml
---
service_type: iscsi
service_id: iscsi_service
placement:
  hosts:
    - 'node1'
    - 'node2'
spec:
  pool: rbd
  trusted_ip_list: 10.20.139.201,10.20.139.202,10.20.139.203
  api_port: 5000
  api_user: admin1
  api_password: admin2
  api_secure: False
</pre>
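<p>Fields like `trusted_ip_list` in this spec are plain comma-separated strings, so a malformed entry only surfaces at deploy time. A hypothetical validation sketch (the helper name and behavior are assumptions, not cephadm's actual spec handling):</p>

```python
import ipaddress

def parse_trusted_ip_list(value):
    """Split and validate the comma-separated trusted_ip_list field
    from an iSCSI service spec (hypothetical helper)."""
    ips = [entry.strip() for entry in str(value).split(',')]
    for entry in ips:
        ipaddress.ip_address(entry)  # raises ValueError on a bad entry
    return ips

print(parse_trusted_ip_list('10.20.139.201,10.20.139.202,10.20.139.203'))
```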
<p>My test environment was created using the following `sesdev` command:</p>
<pre>
sesdev create pacific --ceph-salt-repo https://github.com/ceph/ceph-salt.git --ceph-salt-branch master pacific
</pre>
Dashboard - Bug #46818 (Resolved): mgr/dashboard: Unable to edit iSCSI logged-in client
https://tracker.ceph.com/issues/46818 | 2020-08-03T15:35:11Z | Ricardo Marques <rimarques@suse.com>
<p>Using the "gwcli" tool, it's possible to perform the following actions on a logged-in client:</p>
<ul>
<li>add/remove disks</li>
<li>set/remove/change auth</li>
</ul>
<p>For consistency, the same operations should be supported by the Ceph Dashboard.</p>
Orchestrator - Bug #46777 (Resolved): cephadm: Error bootstrapping a cluster with '--registry-json...
https://tracker.ceph.com/issues/46777 | 2020-07-30T12:33:26Z | Ricardo Marques <rimarques@suse.com>
<p>When I try to bootstrap a cluster with the new '--registry-json' option, I get the following error:</p>
<pre>
INFO:cephadm:Non-zero exit code 22 from /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=registry.opensuse.org/filesystems/ceph/octopus/upstream/images/ceph/ceph -e NODE_NAME=node1 -v /var/log/ceph/88cd09d8-d254-11ea-b540-5254009ce6f4:/var/log/ceph:z -v /tmp/ceph-tmp8q3k7c2q:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpxa4ehfb0:/etc/ceph/ceph.conf:z --entrypoint /usr/bin/ceph registry.opensuse.org/filesystems/ceph/octopus/upstream/images/ceph/ceph config set mgr mgr/cephadm/registry_url 192.168.1.102:5000
INFO:cephadm:/usr/bin/ceph:stderr Error EINVAL: unrecognized config option 'mgr/cephadm/registry_url'
</pre>
<p>We may need to use the '--force' option, for example: <a class="external" href="https://github.com/ceph/ceph/blob/83f068d04322fdfbfea7dc0481474bbec687a4c6/src/vstart.sh#L917">https://github.com/ceph/ceph/blob/83f068d04322fdfbfea7dc0481474bbec687a4c6/src/vstart.sh#L917</a></p>
Dashboard - Bug #46548 (Closed): mgr/dashboard: Unable to create user with non-ASCII chars using CLI
https://tracker.ceph.com/issues/46548 | 2020-07-15T12:40:32Z | Ricardo Marques <rimarques@suse.com>
<p>When I try to create a user whose name contains umlauts using the dashboard CLI, I see the following error:</p>
<pre>
# ceph dashboard ac-user-create mööö adminpassword
Invalid command: invalid chars ö in mööö
dashboard ac-user-create <username> [<password>] [<rolename>] [<name>] [<email>] [--enabled] [--force-password] [<pwd_expiration_date:int>] [--pwd-update-required] : Create a user
Error EINVAL: invalid command
</pre>
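<p>The error message ("invalid chars ö") suggests the CLI argument parser restricts input to a conservative ASCII character set. A minimal sketch of that kind of difference (hypothetical validators, not the actual dashboard or MonCommand code):</p>

```python
import re

# Conservative ASCII-only pattern, as a CLI-style validator might use:
ASCII_ONLY = re.compile(r'^[A-Za-z0-9_.@-]+$')
# Unicode-aware alternative: in Python 3, \w matches umlauts by default.
UNICODE_WORD = re.compile(r'^[\w.@-]+$')

def cli_accepts(name):
    return ASCII_ONLY.match(name) is not None

def ui_accepts(name):
    return UNICODE_WORD.match(name) is not None

print(cli_accepts('mööö'))  # False: 'ö' is outside the ASCII set
print(ui_accepts('mööö'))   # True: Unicode word characters allowed
```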
<p>But if I use the WebUI, I can create that user:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4985/create_user_1.png" alt="" /></p>
<p><img src="https://tracker.ceph.com/attachments/download/4986/create_user_2.png" alt="" /></p>
Dashboard - Backport #46436 (Resolved): octopus: mgr/dashboard: Unable to edit iSCSI target which...
https://tracker.ceph.com/issues/46436 | 2020-07-09T14:57:39Z | Ricardo Marques <rimarques@suse.com>
<p><a class="external" href="https://github.com/ceph/ceph/pull/35997">https://github.com/ceph/ceph/pull/35997</a></p>
Dashboard - Backport #46435 (Resolved): nautilus: mgr/dashboard: Unable to edit iSCSI target whic...
https://tracker.ceph.com/issues/46435 | 2020-07-09T14:57:32Z | Ricardo Marques <rimarques@suse.com>
<p><a class="external" href="https://github.com/ceph/ceph/pull/35998">https://github.com/ceph/ceph/pull/35998</a></p>
Dashboard - Bug #46383 (Resolved): mgr/dashboard: Unable to edit iSCSI target which has active se...
https://tracker.ceph.com/issues/46383 | 2020-07-07T10:43:37Z | Ricardo Marques <rimarques@suse.com>
<p>The iSCSI target "Edit" button is disabled when that target has an active session, but it's possible to edit such a target using "gwcli".</p>
<p>Note that the Ceph Dashboard should support the same features that "gwcli" supports.</p>
Orchestrator - Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
https://tracker.ceph.com/issues/46283 | 2020-06-30T17:58:18Z | Ricardo Marques <rimarques@suse.com>
<p>I'm getting an error when trying to create an iSCSI target on openSUSE Leap.</p>
<p><strong>How to reproduce:</strong></p>
<p>I've created a ceph cluster using `sesdev`:</p>
<pre>
sesdev create octopus --ceph-salt-repo https://github.com/ceph/ceph-salt.git --ceph-salt-branch master octopus
</pre>
<p>Then I used the dashboard to create the following pool:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4942/1-create-pool.png" alt="" /></p>
<p>to create the following image:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4943/2-create-rbd.png" alt="" /></p>
<p>and to create the following iSCSI target:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4944/3-create-target.png" alt="" /></p>
<p>but I get the following error:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4945/4-error.png" alt="" /></p>
Orchestrator - Bug #46237 (Won't Fix): cephadm: Inconsistent exit code
https://tracker.ceph.com/issues/46237 | 2020-06-26T16:33:34Z | Ricardo Marques <rimarques@suse.com>
<p>If SSH keys are not available, the return code of `ceph orch status` is zero:</p>
<pre>
master:~ # ceph orch status
Backend: cephadm
Available: False (SSH keys not set. Use `ceph cephadm set-priv-key` and `ceph cephadm set-pub-key` or `ceph cephadm generate-key`)
master:~ # echo $?
0
</pre>
<p>but if no backend is specified, the exit code is non-zero:</p>
<pre>
master:~ # ceph orch set backend ''
master:~ # ceph orch status
Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
master:~ # echo $?
2
</pre>
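<p>A consistent mapping would treat both failure modes the same way: an unavailable backend and a missing backend both exit non-zero. A hypothetical sketch (not the actual orchestrator module; today the "SSH keys not set" path returns 0):</p>

```python
def orch_status_rc(backend, available):
    """Return the exit code 'ceph orch status' could use if both
    failure modes were treated consistently."""
    if not backend:
        print('Error ENOENT: No orchestrator configured '
              '(try `ceph orch set backend`)')
        return 2
    print('Backend: %s' % backend)
    print('Available: %s' % available)
    # Report the backend, but still signal failure when it is unusable.
    return 0 if available else 2

print(orch_status_rc('cephadm', False))  # prints status, returns 2
```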
<p>Shouldn't the exit codes be more consistent here?</p>
Orchestrator - Bug #46233 (Resolved): cephadm: Add "--format" option to "ceph orch status"
https://tracker.ceph.com/issues/46233 | 2020-06-26T15:08:06Z | Ricardo Marques <rimarques@suse.com>
<p>At the moment, it's not possible to specify the output format for "ceph orch status":</p>
<pre>
ceph orch status --format=json
</pre>
Orchestrator - Bug #46138 (Resolved): mgr/dashboard: Error creating iSCSI target
https://tracker.ceph.com/issues/46138 | 2020-06-22T10:48:33Z | Ricardo Marques <rimarques@suse.com>
<p>On the latest `master` (pacific), I get the following error when trying to create an iSCSI target in Dashboard:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4927/2020-06-22_11-24-29.png" alt="" /></p>
<p>From mgr logs:</p>
<pre>
Jun 22 12:20:32 master bash[16203]: dashboard.rest_client.RequestException: iscsi REST API failed request with status code 500
Jun 22 12:20:32 master bash[16203]: (b'{"message":"Unhandled exception: [errno 1] RBD permission error (error openi'
Jun 22 12:20:32 master bash[16203]: b'ng image b\'rbd1\' at snapshot None)"}\n')
Jun 22 12:20:32 master bash[16203]: During handling of the above exception, another exception occurred:
Jun 22 12:20:32 master bash[16203]: Traceback (most recent call last):
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 94, in dashboard_exception_handler
Jun 22 12:20:32 master bash[16203]: return handler(*args, **kwargs)
Jun 22 12:20:32 master bash[16203]: File "/usr/lib/python3.6/site-packages/cherrypy/_cpdispatch.py", line 54, in __call__
Jun 22 12:20:32 master bash[16203]: return self.callable(*self.args, **self.kwargs)
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 665, in inner
Jun 22 12:20:32 master bash[16203]: ret = func(*args, **kwargs)
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 860, in wrapper
Jun 22 12:20:32 master bash[16203]: return func(*vpath, **params)
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 467, in wrapper
Jun 22 12:20:32 master bash[16203]: raise ex
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 457, in wrapper
Jun 22 12:20:32 master bash[16203]: status, value = task.wait(self.wait_for)
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/tools.py", line 648, in wait
Jun 22 12:20:32 master bash[16203]: raise self.exception
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/tools.py", line 559, in _run
Jun 22 12:20:32 master bash[16203]: val = self.task.fn(*self.task.fn_args, **self.task.fn_kwargs) # type: ignore
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/controllers/iscsi.py", line 300, in create
Jun 22 12:20:32 master bash[16203]: clients, groups, 0, 100, config, settings)
Jun 22 12:20:32 master bash[16203]: File "/usr/share/ceph/mgr/dashboard/controllers/iscsi.py", line 767, in _create
Jun 22 12:20:32 master bash[16203]: raise DashboardException(msg=content_message, component='iscsi')
</pre>
Dashboard - Bug #45810 (New): mgr/dashboard: Expand/collapse OSD row changes row selection
https://tracker.ceph.com/issues/45810 | 2020-06-02T09:36:04Z | Ricardo Marques <rimarques@suse.com>
<p>Expand/collapse an OSD row changes row selection:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4898/2020-06-02_10-32.gif" alt="" /></p>