Activity
From 04/22/2020 to 05/21/2020
05/21/2020
- 08:23 AM Bug #45625 (In Progress): cephadm: when configuring monitoring with ceph orch, ceph dashboard is ...
- 03:25 AM Feature #45163 (Pending Backport): cephadm: iscsi: read and write config-key for the dashboard
- 12:43 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Looks similar...
- 12:05 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- I think this is caused by our CI downloading too many monitoring images.
I think we have two options now:
1. co...
05/20/2020
- 11:17 PM Bug #45632 (Fix Under Review): nfs: auth credentials for recovery database include mds
- 11:15 PM Bug #45632 (Resolved): nfs: auth credentials for recovery database include mds
- ...
- 06:44 PM Bug #45631 (Closed): Error parsing image configuration: Invalid status code returned when fetchin...
- ...
- 02:56 PM Bug #45629 (Resolved): cephadm: Allow users to provide ssh keys during bootstrap
- ATM, `cephadm bootstrap` will always generate new SSH keys, unless the `--skip-ssh` option is used.
But `--skip-ssh` al...
- 02:28 PM Bug #45618 (Can't reproduce): cephadm tests fail because missing image on quay.io
- see https://status.quay.io
> May 19, 2020
> Quay.io outage
> Resolved - Currently service is restored and stable...
- 08:12 AM Bug #45618: cephadm tests fail because missing image on quay.io
- yesterday, quay.io returned "Bad Gateway" in my runs. I think this was an infrastructure issue at quay.io. I just sch...
- 06:30 AM Bug #45618 (Can't reproduce): cephadm tests fail because missing image on quay.io
- ...
- 02:11 PM Bug #45628 (Resolved): cephadm qa: smoke should verify daemons are actually running
- RGW failed:...
- 01:59 PM Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
- fixed after reboot of active mgr
root@one1-ceph4.storage.:~# ceph cephadm check-host one1-ceph5
one1-ceph5 (None)...
- 01:51 PM Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
- same here after a reboot of the hosts:
root@one1-ceph4.storage.:~# ceph cephadm check-host one1-ceph4
one1-ceph4 ...
- 01:49 PM Bug #45032 (Pending Backport): cephadm: Not recovering from `OSError: cannot send (already closed?)`
- I think we still have this problem.
- 01:47 PM Bug #45627 (Resolved): cephadm: frequently getting `1 hosts fail cephadm check`
- https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ADK3Y2XHTIJ2YV6MFSQX4XPTQ4WP5ETM/...
- 01:07 PM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- in any case, the MDS error is pretty fatal:
https://github.com/ceph/ceph/blob/a7ea259f24dc08abf5458a79935f4f36ad7d...
- 12:55 PM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- mgr log...
- 12:38 PM Bug #45625 (Resolved): cephadm: when configuring monitoring with ceph orch, ceph dashboard is onl...
- I ran the commands mentioned in https://docs.ceph.com/docs/master/cephadm/monitoring/#deploying-monitoring-with-cepha...
- 12:17 PM Bug #45624 (Can't reproduce): cephadm: "ceph orch apply mgr" is deploying in wrong nodes
- I add the mgr label to 3 of my nodes.
Then when I run 'ceph orch apply mgr' I expect that mgr is deployed to all 3...
- 11:58 AM Documentation #45623 (Can't reproduce): cephadm: "ceph orch apply mon" is deploying in wrong nodes
- I add the mon label to 3 of my 4 nodes,
then when I run 'ceph orch apply mon' I expect that mon is deployed to those...
- 10:51 AM Bug #45621: check-host returns terrible unhelpful error message
- I find that doing @ceph mgr fail@ fixes the problem, but one could never guess that from the message.
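The workaround described in this comment can be sketched roughly as follows (a hedged sketch, not the ticket's exact steps; the hostname is illustrative and a standby mgr is assumed):
<pre>
ceph health detail                    # shows CEPHADM_HOST_CHECK_FAILED / CEPHADM_REFRESH_FAILED
ceph mgr fail                         # fail over the active mgr, dropping its stale SSH state
ceph cephadm check-host one1-ceph4    # re-run the host check after failover
</pre>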
- 10:49 AM Bug #45621 (Duplicate): check-host returns terrible unhelpful error message
- After having some CEPHADM_HOST_CHECK_FAILED and CEPHADM_REFRESH_FAILED warnings after rebooting some hosts, I get the...
- 09:55 AM Bug #45594 (In Progress): cephadm: weight of a replaced OSD is 0
- This is certainly not intended. I'll investigate.
- 02:08 AM Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly
- Also, any ideas why the fsname is `all` ??
- 01:54 AM Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly
- We attempted to configure an MDS with file system `all` using an explicit daemon id of `a`?...
- 01:34 AM Bug #45617 (Fix Under Review): mgr/orch: mds with explicit naming
- 01:24 AM Bug #45617 (Resolved): mgr/orch: mds with explicit naming
- Explicitly naming an mds:...
- 12:54 AM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- I've created a PR to bind mount /lib/modules RO: https://github.com/ceph/ceph/pull/35141
Once I have the PR applie...
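For illustration, a read-only bind mount of the host's /lib/modules into a container generally looks like the following (a generic sketch, not the contents of the PR; the image name is arbitrary):
<pre>
# Generic illustration of a read-only bind mount of /lib/modules into a container.
podman run --rm -v /lib/modules:/lib/modules:ro docker.io/library/alpine:3 ls /lib/modules
</pre>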
05/19/2020
- 03:54 PM Bug #45162 (Resolved): cephadm: iscsi should use the correct container image
- 03:53 PM Bug #44792 (Resolved): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- 03:53 PM Bug #45196 (Resolved): cephadm: remove 'fqdn_enabled' parameter from iSCSI service spec
- 03:52 PM Bug #44826 (Resolved): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring provided"
- 03:51 PM Bug #45161 (Resolved): cephadm: iscsi should validate the existence of the given pool
- 12:12 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- It seems I'm all but unable to reproduce this reliably.
- 11:49 AM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- Right, the mgr gets restarted:...
- 09:46 AM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- This failure can also be seen in this test:
http://pulpito.ceph.com/varsha-2020-05-18_10:25:58-rados-wip-integrate-c...
- 08:59 AM Bug #45596 (Resolved): qa/tasks/cephadm: No cephadm module detected
- Something weird is happening here.
First, the mgr fails and cephadm is disabled....
- the logs of a failed MDS:...
- 08:04 AM Bug #45595 (Can't reproduce): qa/tasks/cephadm: No filesystem is configured and MDS daemon gets d...
- On adding mds to roles...
- 10:48 AM Bug #45604 (Duplicate): mgr/cephadm: Failed to create an OSD
- 09:37 AM Bug #45604 (Duplicate): mgr/cephadm: Failed to create an OSD
- Creating an OSD using the following commands fails....
- 07:42 AM Bug #45594 (Resolved): cephadm: weight of a replaced OSD is 0
- Not sure if this is intended.
After deleting an OSD with the `--replace` flag and creating a new OSD on it, the OSD's WEIGHT i...
- 07:01 AM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- Hmm, that didn't happen on my test system. I might need to rebuild to check, I might have to reboot the host just in ...
- 02:44 AM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- Still seeing this after PR 34898 was merged.
insert_error.txt contains more info
05/18/2020
- 03:57 PM Bug #45587: mgr/cephadm: Failed to create encrypted OSD
- note, octopus doesn't contain https://github.com/ceph/ceph/pull/34745
- 03:07 PM Bug #45587 (Resolved): mgr/cephadm: Failed to create encrypted OSD
- I can not create an encrypted OSD using Ceph 15.2.1-277-g17d346932e on SES7....
- 01:36 PM Feature #45463 (Fix Under Review): cephadm: allow custom images for grafana, prometheus, alertman...
- 12:47 PM Bug #45584 (Fix Under Review): qa/tasks/cephadm: With roleless feature no mons are deployed
- 12:30 PM Bug #45584 (Resolved): qa/tasks/cephadm: With roleless feature no mons are deployed
- http://qa-proxy.ceph.com/teuthology/varsha-2020-05-12_12:13:48-rados-wip-varsha-testing-distro-basic-smithi/5049043/
...
- 08:17 AM Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
- I've created a PR (https://github.com/ceph/ceph/pull/35097) that makes the api_user and api_password mandatory.
- 08:05 AM Bug #45576: cephadm: `cephadm ls` does not play well with `cephadm logs`
- Same is true for ...
- 08:01 AM Bug #45576 (Resolved): cephadm: `cephadm ls` does not play well with `cephadm logs`
- Right now users need to run...
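The two-step workflow this ticket complains about looks roughly like this (a sketch; the daemon name is illustrative):
<pre>
cephadm ls                     # prints a JSON list of daemons; note each "name" field
cephadm logs --name mon.host1  # the name has to be copied over from `cephadm ls` by hand
</pre>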
05/17/2020
- 06:12 AM Bug #45572 (Rejected): cephadm: ceph-crash isn't deployed anywhere
- Rejecting in favour of https://github.com/ceph/ceph-salt/issues/236
05/16/2020
- 10:36 PM Bug #45572: cephadm: ceph-crash isn't deployed anywhere
- Ah. Found it. This is due to ceph-salt calling @cephadm bootstrap@ with @--skip-ssh@, so presumably actually needs ...
- 06:50 AM Bug #45572 (Rejected): cephadm: ceph-crash isn't deployed anywhere
- AFAICT when deploying a containerized cluster with cephadm, ceph-crash is never deployed anywhere. This means that i...
05/15/2020
- 11:39 AM Bug #45560 (Fix Under Review): cephadm: fail to create OSDs
- 11:38 AM Bug #45560 (Pending Backport): cephadm: fail to create OSDs
- 09:49 AM Bug #45560 (In Progress): cephadm: fail to create OSDs
- 04:13 AM Bug #45560 (Resolved): cephadm: fail to create OSDs
- OSDs are not created after applying the following spec:...
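The spec itself is truncated here; for context, an OSD service spec of this general shape (a hedged illustration only, with made-up service_id and device filters, not the ticket's actual spec) would be applied like so:
<pre>
# Illustrative OSD spec; not the ticket's actual (truncated) spec.
cat > osd_spec.yml <<EOF
service_type: osd
service_id: example_drive_group
placement:
  host_pattern: '*'
data_devices:
  all: true
EOF
ceph orch apply osd -i osd_spec.yml
</pre>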
- 10:49 AM Feature #45565 (New): cephadm: A daemon should provide information about itself (e.g. service urls)
- As a normal user of Ceph, without much insight into the development and its inner workings, I expect that a fresh depl...
- 10:24 AM Bug #45407 (Fix Under Review): cephadm: Speed up OSD deployment preview
- 09:35 AM Documentation #45564 (Duplicate): cephadm: document workaround for accessing the admin socket by ...
- ...
- 07:52 AM Feature #45463 (In Progress): cephadm: allow custom images for grafana, prometheus, alertmanager ...
05/14/2020
- 11:51 PM Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
- hrm, yeah good question. Any preferences? We could auto gen if not supplied and then something like `ceph orch ls --e...
- 02:13 PM Documentation #44354: cephadm: Log messages are missing
- https://serverfault.com/questions/809093/how-do-view-older-journalctl-logs-after-a-rotation-maybe
- 02:04 PM Feature #44578 (Rejected): cephadm: verify Grafana works with Prometheus HA
- works
- 02:02 PM Feature #44601: cephadm: Mix of hosts: with and without firewall
- Maybe we can expose this via `ceph orch host ls`?
- 02:02 PM Feature #45163 (Fix Under Review): cephadm: iscsi: read and write config-key for the dashboard
- 02:00 PM Bug #44729 (Can't reproduce): cephadm enter using docker is broken
- 01:49 PM Bug #45198 (Closed): cephadm: unable to add iSCSI daemon from service spec yaml file
- 01:48 PM Bug #45286 (Closed): cephadm: Adding hosts to the cluster fails
- 01:46 PM Bug #45394 (Pending Backport): cephadm: fail to create/preview OSDs via drive group
- 01:45 PM Bug #45417 (Pending Backport): cephadm: nfs grace remove killed before completion
- 01:43 PM Feature #43705 (Closed): cephadm: on config change, restart appropriate daemons
- seems to be done. sort of. reopen if required.
- 01:42 PM Bug #44577 (Closed): cephadm: reconfigure Prometheus on MGR failover
- no need. prometheus already knows all instances.
- 01:32 PM Bug #45258 (Duplicate): cephadm: iSCSIServiceSpec: user/password should be mandatory (or autogene...
- 01:30 PM Bug #45245 (Fix Under Review): cephadm: print iscsi container's log to stdout/stderr
- 11:07 AM Bug #44673 (Fix Under Review): cephadm: `orch apply` and `orch daemon add` use completely differe...
05/13/2020
- 03:14 PM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- So, as Joshua pointed out to me (since I had completely missed it), the reason for my confusion was that on upstream teutholo...
- 03:13 PM Bug #45534 (Closed): cephadm: "exec: \"--\": executable file not found in $PATH"
- 02:48 PM Bug #45534 (Closed): cephadm: "exec: \"--\": executable file not found in $PATH"
- ...
05/12/2020
- 05:12 PM Documentation #45411: cephadm: add section about container images
- PR 32410 was previously the Pull Request ID specified in the "Pull request ID" field of this bug.
- 03:44 PM Documentation #45411 (In Progress): cephadm: add section about container images
- 03:58 PM Feature #43940: orchestrator mgr add and rm
- not sure which PR implemented this, but it wasn't https://github.com/ceph/ceph/pull/33072
- 03:48 PM Documentation #45383 (In Progress): Cephadm.py OSD deployment fails: full device path or just the...
- 03:47 PM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- I'd like some feedback from the community (at as many levels as possible) about whether I should add a note to the do...
- 02:54 PM Bug #45032 (Fix Under Review): cephadm: Not recovering from `OSError: cannot send (already closed?)`
- 12:08 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- https://github.com/ceph/ceph/pull/35018 might make this thing go away, without fixing the underlying issue.
- 11:39 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 11:36 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 11:35 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 11:30 AM Bug #45252 (Pending Backport): cephadm: fail to insert modules when creating iSCSI targets
- 11:29 AM Bug #45458 (Pending Backport): non-ascii chars in /etc/prometheus/ceph/ceph_default_alerts.yml
- 07:06 AM Feature #45163 (In Progress): cephadm: iscsi: read and write config-key for the dashboard
05/11/2020
- 05:11 PM Documentation #45411: cephadm: add section about container images
- PR:
https://github.com/ceph/ceph/pull/35006
- 01:02 PM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Apely AGAMAKOU wrote:
> Hi, I've got the same issue:
>
> OS: Debian 10 (buster)
> Ceph: Octopus (15.2.1)
> Node...
- 12:50 PM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Hi, I've got the same issue:
OS: Debian 10 (buster)
Ceph: Octopus (15.2.1)
Nodes: 3
- 10:43 AM Bug #45393 (Pending Backport): Containerized osd config must be updated when adding/removing mons
- 10:41 AM Bug #45465 (Resolved): cephadm: `ceph orch restart osd` has the potential to break your cluster
- Multiple bugs here:
* the cephadm implementation doesn't check anything. (ceph osd ok-to-stop...., HEALTH_ERR)
* ...
- 10:40 AM Bug #45129 (Pending Backport): simple (ceph-disk) style OSDs adopted by cephadm don't start after...
- 10:01 AM Feature #45463 (Resolved): cephadm: allow custom images for grafana, prometheus, alertmanager and...
- Right now, users don't have a way to customize them at all.
I think we're going to need a grafana_image, like in ht...
- 09:54 AM Bug #45462 (Fix Under Review): 'https://download.ceph.com/debian-octopus focal Release' does not ...
- 09:52 AM Bug #45462 (Resolved): 'https://download.ceph.com/debian-octopus focal Release' does not have a R...
- http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/50...
- 12:54 AM Bug #45458 (Resolved): non-ascii chars in /etc/prometheus/ceph/ceph_default_alerts.yml
- http://qa-proxy.ceph.com/teuthology/mgfritch-2020-05-09_01:31:09-rados-wip-mgfritch-testing-2020-05-08-1646-distro-ba...
05/09/2020
- 09:13 PM Bug #44792 (Pending Backport): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- 09:06 PM Bug #45427: cephadm: auth get failed: invalid entity_auth mon
- urgent. right now node-exporter is broken
- 09:05 PM Bug #45427 (Pending Backport): cephadm: auth get failed: invalid entity_auth mon
05/08/2020
- 10:07 PM Bug #45418 (Rejected): cephadm: `orch reconfig` does not reconfig the container image
- confirmed that the needed functionality is already provided via the 'redeploy' command...
- 09:11 PM Bug #45454 (Can't reproduce): cephadm: teardown: hang at sudo systemctl stop ceph-453d3962-9141-1...
- http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/
...
- 08:09 PM Bug #45452: cephadm: while removing ceph-common, unable to remove directory '/var/lib/ceph': Devi...
- http://pulpito.ceph.com/swagner-2020-05-08_13:51:20-rados-wip-swagner2-testing-2020-05-08-1134-distro-basic-smithi/50...
- 04:49 PM Bug #45452 (Closed): cephadm: while removing ceph-common, unable to remove directory '/var/lib/ce...
- http://pulpito.ceph.com/swagner-2020-05-08_13:49:07-rados-wip-swagner-testing-2020-05-08-1133-distro-basic-smithi/503...
- 03:25 PM Bug #45451 (Can't reproduce): cephadm: `ceph orch redeploy mgr` never returns
- problem: the current active manager is restarted synchronously, which means the command never completes.
- 03:16 PM Documentation #45450 (New): cephadm: what does redeploy vs reconfig actually mean and when do do ...
- When would a user want to call reconfig?
Is it more of an internal cephadm thing, and should users always be pointed to...
- 12:00 PM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- It currently seems that using fixed versions for monitoring stack containers is the only way to ensure that major...
- 11:02 AM Bug #45427 (Fix Under Review): cephadm: auth get failed: invalid entity_auth mon
- 07:56 AM Documentation #44284 (Pending Backport): cephadm: provide a way to modify the initial crushmap
05/07/2020
- 03:05 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5...
- 11:35 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- In the past, https://github.com/ceph/ceph/pull/34091 was able to reproduce this bug consistently. I'll look into resu...
- 11:32 AM Bug #44990 (In Progress): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such fil...
- 02:03 PM Bug #45427: cephadm: auth get failed: invalid entity_auth mon
- we probably don't need _get_config_and_keyring for node-exporter
- 10:13 AM Bug #45427 (Resolved): cephadm: auth get failed: invalid entity_auth mon
- http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5...
- 11:52 AM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- The confusing part from my pov is that downstream this commit that strips the device name is breaking the tests and I...
- 11:39 AM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- For some background as to why this exists, see (https://github.com/ceph/ceph/commit/f026a1c9f661fc1442048ef0bfadf84c35c142...
- 11:31 AM Bug #45421 (Duplicate): cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove c...
- 06:38 AM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- /a/yuriw-2020-05-05_15:20:13-rados-wip-yuri8-testing-2020-05-04-2117-octopus-distro-basic-smithi/5024853
- 02:01 AM Bug #45421 (Duplicate): cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove c...
- /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014152
/a/bhubbard-2020-05-01_2...
- 11:26 AM Bug #45394 (In Progress): cephadm: fail to create/preview OSDs via drive group
- 10:24 AM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- These are our current versions:
Grafana 5.3.3
Alertmanager 0.16.2
Prometheus 2.11.1
Node exporter 0.17.0
grafa...
- 10:03 AM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- These are the monitoring stack versions that we use in our nautilus-based releases:
grafana: 5.4.3
prometheus: v2....
- 01:44 AM Bug #45420 (Can't reproduce): cephadmunit.py: teuthology.exceptions.CommandFailedError: Command f...
- /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014156...
- 12:23 AM Bug #45417 (Fix Under Review): cephadm: nfs grace remove killed before completion
- 12:22 AM Bug #45418 (Fix Under Review): cephadm: `orch reconfig` does not reconfig the container image
- 12:19 AM Bug #45418: cephadm: `orch reconfig` does not reconfig the container image
- applies to any changes to the systemd unit, unit.run, unit.poststop scripts etc.
workaround is to completely remov...
- 12:18 AM Bug #45418 (Rejected): cephadm: `orch reconfig` does not reconfig the container image
- define a custom container image:...
05/06/2020
- 11:53 PM Bug #45417 (Resolved): cephadm: nfs grace remove killed before completion
- ganesha-rados-grace remove is killed before completion.
The shutdown of the nfs container + grace remove (unit.pos...
- 06:22 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Could this be caused by the container image not being built yet, or would that present as a different error? With any...
- 04:57 PM Documentation #45411 (Resolved): cephadm: add section about container images
- * we recommend against using the...
- 04:18 PM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- This might not be an issue for minor version upgrades in Grafana and Prometheus, although it would be hard to guarant...
- 04:04 PM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- It would be nice to have these two things:
1. Use fixed-version images by default for the different components of th...
- 03:53 PM Feature #45410 (Resolved): cephadm: Support upgrading alertmanager, grafana, prometheus and node_...
- Right now, we're simply downloading :latest, which might even differ between daemons on different hosts.
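One possible shape of the fix, sketched with per-image mgr/cephadm config options (the option names and image tags here are assumptions for illustration, not something this ticket confirms):
<pre>
# Hypothetical sketch: pin monitoring-stack images instead of pulling :latest.
ceph config set mgr mgr/cephadm/container_image_prometheus "prom/prometheus:v2.18.1"
ceph config set mgr mgr/cephadm/container_image_grafana "grafana/grafana:6.7.4"
</pre>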
- 02:36 PM Bug #45407 (Resolved): cephadm: Speed up OSD deployment preview
- There is a "pending Ceph Dashboard pull request":https://github.com/ceph/ceph/pull/34665 to implement a "preview" fea...
- 11:59 AM Bug #45399 (Resolved): NFS Ganesha : Error searching service specs for all nodes after nfs orch a...
- Environment:
- 3 hypervisors centos 8.1 (hyp00, hyp01, hyp02)
- 19 OSDs.
- cluster upgraded a month ago from n...
- 11:23 AM Bug #45393 (Fix Under Review): Containerized osd config must be updated when adding/removing mons
- It's always the little things...
- 08:58 AM Bug #45393: Containerized osd config must be updated when adding/removing mons
- A quick grep of my logs shows it reconfiguring the mons and mgrs, but not the osds.
- 08:36 AM Bug #45393: Containerized osd config must be updated when adding/removing mons
- Thanks for the pointer, I'll try to figure out what's going on, seeing as I'm the one who hit this :-)
- 07:42 AM Bug #45393: Containerized osd config must be updated when adding/removing mons
- This was fixed in https://github.com/ceph/ceph/pull/33855 . Looks like we have to figure out what went wrong here.
- 06:41 AM Bug #45393 (Resolved): Containerized osd config must be updated when adding/removing mons
- Try this:
- bootstrap a cluster (1 mon, 1 mgr)
- add a bunch of osds (@ceph orch apply osd --all-available-device...
- 08:03 AM Bug #45394: cephadm: fail to create/preview OSDs via drive group
- -I could imagine that you're seeing this because the container images are not fully up to date yet. They're probably ...
- 06:56 AM Bug #45394 (Resolved): cephadm: fail to create/preview OSDs via drive group
- Create OSD with the following config:...
05/05/2020
- 02:40 PM Documentation #44284 (Fix Under Review): cephadm: provide a way to modify the initial crushmap
- 02:25 PM Documentation #44284: cephadm: provide a way to modify the initial crushmap
- also:
> I just deployed a new cluster with cephadm instead of ceph-deploy. In the past, if I
> change ceph.conf ...
- 02:08 PM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- I triggered a run with `sleep-before-teardown` to make it clearer. ...
- 07:41 AM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- There was device /dev/vde and it worked when I dropped the shortname and just used the whole path (https://github.com/...
- 12:45 PM Documentation #44971 (Resolved): cephadm: document the cephadm binary
- 12:45 PM Bug #45120 (Resolved): cephadm: adopt prometheus doesn't work
- 12:45 PM Bug #44832 (Resolved): cephadm: `ceph cephadm generate-key` fails with No such file or directory:...
- 12:44 PM Feature #44625 (Pending Backport): cephadm: test dmcrypt
- 12:43 PM Bug #45284 (Pending Backport): cephadm: Access host files on "cephadm shell"
- 12:43 PM Bug #45294 (Pending Backport): cephdam: rgw realm/zone could contain 'hostname'
- 12:42 PM Bug #45293 (Pending Backport): cephadm: service_id can contain a '.' char (mds, nfs, iscsi)
- 12:42 PM Bug #45249 (Pending Backport): cephadm: fail to apply a iSCSI ServiceSpec
- 06:30 AM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- I've thrown the diff into a PR: https://github.com/ceph/ceph/pull/34898
But if we take this approach we should pro...
- 05:21 AM Bug #45245: cephadm: print iscsi container's log to stdout/stderr
- OK, so if it's just about needing access to the logs via something like journald and journalctl, then that should currently w...
- 02:00 AM Bug #45245: cephadm: print iscsi container's log to stdout/stderr
- Ceph-iscsi logging is hardcoded to point to /dev/log; I don't think ceph-iscsi has an option to log to stderr. But...
05/04/2020
- 07:51 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- /a/yuriw-2020-05-02_20:02:46-rados-wip-yuri6-testing-2020-04-30-2259-octopus-distro-basic-smithi/5016611/
- 07:45 PM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- Looks like you called ...
- 06:25 PM Documentation #45383 (Can't reproduce): Cephadm.py OSD deployment fails: full device path or just...
- OSD deployment on cephadm.py fails on my local teuthology server due to not recognizing the device. When I j...
- 01:48 PM Documentation #44905 (Resolved): cephadm troubleshooting SSH errors
- 01:45 PM Bug #45081 (Resolved): cephadm: `upgrade check 15.2.1` : OrchestratorError: Failed to pull 15.2.1...
- 01:44 PM Bug #44830 (Duplicate): cpehadm bootstrap: improve error message, if `host add` fails
- 01:40 PM Feature #44873 (In Progress): cephadm bootstrap: add --apply-spec <cluster.yaml>
- 01:40 PM Bug #44968 (Need More Info): cehpadm: another "RuntimeError: Set changed size during iteration"
- next time, this traceback should be printed in the logs
- 01:35 PM Bug #45010 (New): cephadm: /etc/ceph/ceph.conf directory /etc/ceph does not exist
- 01:35 PM Bug #45029 (Resolved): cephadm: add-repo fails (silently) when no arguments are given
- 01:34 PM Bug #44769 (Resolved): cephadm doesn't reuse osd_id of 'destroyed' osds
- 01:33 PM Feature #44556 (Resolved): cephadm: preview drivegroups
- 01:33 PM Bug #45108 (Resolved): test_orchestrator: service ls doesn't work
- 01:32 PM Bug #45095 (Resolved): cephadm adopt can't handle offline OSDs
- 01:32 PM Bug #45120 (Pending Backport): cephadm: adopt prometheus doesn't work
- 01:31 PM Bug #45162 (Pending Backport): cephadm: iscsi should use the correct container image
- 01:26 PM Bug #45161 (Pending Backport): cephadm: iscsi should validate the existence of the given pool
- 01:23 PM Feature #45378 (Resolved): cephadm: manage /etc/ceph/ceph.conf
- /etc/ceph/ceph.conf is often used by tools in the ceph ecosystem. We should provide a mechanism to keep this up to dat...
- 01:20 PM Bug #45286 (Need More Info): cephadm: Adding hosts to the cluster fails
- 01:18 PM Bug #45065 (Resolved): cephadm: Config option warn_on_stray_daemons does not work as expected
- 01:17 PM Documentation #44971 (Pending Backport): cephadm: document the cephadm binary
- 01:17 PM Documentation #44828 (Pending Backport): cephadm: clarify "Failed to infer CIDR network for mon ip"
- 01:16 PM Bug #44832 (Pending Backport): cephadm: `ceph cephadm generate-key` fails with No such file or di...
- 01:16 PM Bug #45196 (Pending Backport): cephadm: remove 'fqdn_enabled' parameter from iSCSI service spec
- 10:21 AM Bug #44747: orch: `ceph orch ls --service_type` is broken
- no idea. let's wait, till someone else complains.
- 06:32 AM Bug #44747: orch: `ceph orch ls --service_type` is broken
- I guess step one is to confirm this is still an issue. I had a script run `ceph orch ls --export` until it got a fail...
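A minimal reconstruction of that kind of repro loop (details assumed; the original script is not shown in the ticket):
<pre>
# Loop until `ceph orch ls --export` fails, then re-run once to capture the failure output.
while ceph orch ls --export >/dev/null 2>&1; do
    sleep 1
done
ceph orch ls --export
</pre>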
- 09:28 AM Bug #44252: cephadm: mgr,mds scale-down should prefer standby daemons
- this is WIP, but I need to rework the cephadm scheduler first.
05/01/2020
- 07:10 AM Bug #45252 (In Progress): cephadm: fail to insert modules when creating iSCSI targets
- OK so progress. I've tried preloading the kernel mod (iscsi-target-mod) and that works.
But the next error, and yo...
- 05:02 AM Bug #44826 (Pending Backport): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring pr...
04/30/2020
- 01:08 PM Feature #45203 (Fix Under Review): OSD Spec: allow filtering via explicit hosts and labels
- 09:20 AM Bug #44887 (Rejected): cephadm: Simplify mounting of host dirs into prom/node-exporter
- 12:57 AM Bug #45343 (Resolved): Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshak...
- /a/yuriw-2020-04-28_21:58:13-rados-wip-yuri-testing-2020-04-24-1941-master-distro-basic-smithi/4995287...
04/29/2020
- 09:29 PM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- Just spitballing: the container shares the host kernel, so we could also insmod the required kernel modules before t...
- 12:05 PM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- According to https://unix.stackexchange.com/questions/491013/how-to-load-kernel-modules-for-docker-container-without-...
- 02:54 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- I'd argue that we should only maintain one command for service creation, and I would recommend going with `apply`....
- 01:29 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- Ok,...
- 10:35 AM Bug #45327 (Resolved): cephadm: Orch daemon add is not idempotent
- ...
- 01:38 PM Feature #44625 (Fix Under Review): cephadm: test dmcrypt
- 01:33 PM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- I really hope that https://github.com/ceph/ceph/pull/34633 will fix this.
- 08:15 AM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- ...
- 01:26 PM Feature #45203: OSD Spec: allow filtering via explicit hosts and labels
- the error is technically correct, because of
https://github.com/ceph/ceph/blob/e2c8d49906e11650945fafef92296a9dfc...
- 11:55 AM Bug #45286: cephadm: Adding hosts to the cluster fails
- Right...
- 11:50 AM Bug #45296 (Duplicate): cephadm: daemon add mon failure: orchestrator._interface.OrchestratorVali...
- duplicating the old issue, but the new one has a better description.
- 09:01 AM Cleanup #45321 (Resolved): Service spec: unify `spec:` vs omitting `spec:`
- ...
- 12:30 AM Bug #45249 (Fix Under Review): cephadm: fail to apply a iSCSI ServiceSpec
04/28/2020
- 08:30 PM Bug #45293 (Fix Under Review): cephadm: service_id can contain a '.' char (mds, nfs, iscsi)
- 08:30 PM Bug #45294 (Fix Under Review): cephdam: rgw realm/zone could contain 'hostname'
- 08:23 PM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- Note: this is not a matter of "I asked for 4 MGRs and got 4, only two were unexpectedly colocated."
What is happen...
- 08:10 PM Bug #45093 (New): cephadm: mgrs transiently getting co-located (one node gets two when only one w...
- It happened again. Here is the output of the commands you asked for:
https://paste2.org/CGsgjJWy
NOTE: this tim...
- 05:48 PM Bug #45037 (In Progress): octopus: cephadm: non-ceph units put wrong uid:gid in systemd unit file
- 01:20 PM Feature #45091: cephadm: CephX disabled: bad_method + failed to fetch mon config
- To update, I was able to restart all services and move to using cephx and the two MDS come online.
- 12:35 AM Bug #45296 (Duplicate): cephadm: daemon add mon failure: orchestrator._interface.OrchestratorVali...
- /a/teuthology-2020-04-26_07:01:02-rados-master-distro-basic-smithi/4985864/...
04/27/2020
- 09:55 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-04-25_15:46:30-rados-wip-yuri4-testing-2020-04-25-0009-master-distro-basic-smithi/4984285
- 07:41 PM Bug #45294 (Resolved): cephadm: rgw realm/zone could contain 'hostname'
- A 'hostname' like sub-string could be provided as a realm/zone :...
- 07:40 PM Bug #45293 (Resolved): cephadm: service_id can contain a '.' char (mds, nfs, iscsi)
- Create a cephfs name that contains a '.' char:...
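The underlying problem in #45293/#45294 is that daemon and service names use '.' as a field separator, so an id that itself contains a '.' makes parsing ambiguous. A minimal sketch (illustrative only, not the actual cephadm parsing code):

```python
# Hedged sketch: names of the form "<service_type>.<service_id>" are
# typically split back apart on '.'. If service_id itself contains a
# '.', the split result becomes ambiguous.

def parse_daemon_name(name: str):
    """Split 'type.id' on the first '.' only."""
    service_type, service_id = name.split('.', 1)
    return service_type, service_id

# With a plain id the round trip is unambiguous:
print(parse_daemon_name("mds.myfs"))    # ('mds', 'myfs')

# With a dotted id, 'myfs.a' could mean service_id 'myfs.a' or
# service_id 'myfs' plus a daemon suffix 'a' -- the parser cannot tell:
print(parse_daemon_name("mds.myfs.a"))  # ('mds', 'myfs.a')
```

This is why rejecting '.' in service ids (and hostname-like realm/zone names) at spec-validation time is the safer fix.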
- 01:57 PM Bug #45284 (In Progress): cephadm: Access host files on "cephadm shell"
- 10:13 AM Bug #45284 (Resolved): cephadm: Access host files on "cephadm shell"
- I have a file on my host, that I would like to use to set a "config-key":...
- 10:50 AM Bug #44792: cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- Sebastian Wagner wrote:
> This is a usability bug:
>
> Workaround might be to infer the ceph.conf as well?
>
>... - 10:33 AM Bug #45286 (Closed): cephadm: Adding hosts to the cluster fails
- hi,
using centos8 I can't add a host to the cluster via cephadm.
Following the docs I was able to bootstrap a new clu...
- Hi,
using cephadm to install Octopus on Ubuntu 18.04 fails as soon as I try to install the first monitor.
I f...
04/24/2020
- 02:23 PM Feature #45263 (Resolved): osdspec/drivegroup: not enough filters to define layout
- Considering this layout:...
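For context on what "filters" means here, a hedged sketch of a drive-group style OSD service spec using host and device filters (field names are illustrative of that style; the feature request is that these are not expressive enough for some layouts):

```yaml
# Illustrative OSD spec sketch, not a verbatim example from the report.
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
```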
- 02:09 PM Bug #45167 (New): cephadm: mons are not properly deployed
- hm. this needs more investigation
- 01:40 PM Bug #44825: cephadm: bootstrap is not idempotent
- ceph-salt as well: https://github.com/ceph/ceph-salt/issues/173
- 11:59 AM Bug #45258 (Duplicate): cephadm: iSCSIServiceSpec: user/password should be mandatory (or autogene...
- Some arguments in iSCSIServiceSpec should be mandatory, like user/password/port or at least cephadm should generate d...
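One option the report suggests is generating sensible defaults when credentials are omitted. A minimal sketch of that idea (the function name and defaults here are hypothetical, not the actual iSCSIServiceSpec API):

```python
# Hedged sketch: fall back to generated credentials when user/password
# are not supplied in the spec.
import secrets
import string

def ensure_credentials(user=None, password=None):
    alphabet = string.ascii_letters + string.digits
    if user is None:
        user = 'admin'
    if password is None:
        # 16 random alphanumeric characters from a CSPRNG
        password = ''.join(secrets.choice(alphabet) for _ in range(16))
    return user, password

user, password = ensure_credentials()
print(user, len(password))  # admin 16
```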
- 10:08 AM Bug #44792 (Fix Under Review): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- 08:46 AM Bug #45249: cephadm: fail to apply an iSCSI ServiceSpec
- Ricardo Marques wrote:
> @Kiefer isn't this PR already fixing the issue? https://github.com/ceph/ceph/pull/34663
...
- 08:37 AM Bug #45249: cephadm: fail to apply an iSCSI ServiceSpec
- @Kiefer isn't this PR already fixing the issue? https://github.com/ceph/ceph/pull/34663
- 07:25 AM Bug #45249 (Resolved): cephadm: fail to apply an iSCSI ServiceSpec
- How to reproduce:
* Create a spec file... - 08:17 AM Bug #45252 (Resolved): cephadm: fail to insert modules when creating iSCSI targets
- How to reproduce:
* Enable cephadm, create a pool and enable rbd application on it.
* Create an iSCSI container w...
- 06:30 AM Bug #45196 (In Progress): cephadm: remove 'fqdn_enabled' parameter from iSCSI service spec
- Attached a PR that removes the option.
- 05:39 AM Bug #45198 (In Progress): cephadm: unable to add iSCSI daemon from service spec yaml file
- I've pushed a PR to fix this, and with this PR I managed to deploy using the following json:...
- 04:17 AM Bug #45245 (Resolved): cephadm: print iscsi container's log to stdout/stderr
- 04:17 AM Bug #45245 (Resolved): cephadm: print iscsi container's log to stdout/stderr
- rbd-target-api's log lives in /var/log/rbd-target-api/rbd-target-api.log inside container.
We should print the log t...
04/23/2020
- 09:17 PM Bug #45152 (Rejected): cephadm: data structure doesn't work for multiple CephFS
- Turns out everything is fine, as long as you attach standby MDS to a particular FS, which is the case. Yay!
- 11:32 AM Bug #45152: cephadm: data structure doesn't work for multiple CephFS
- I am unable to follow. Which data structure are you referring to, and how does it need to be corrected?
- 03:07 PM Bug #45235 (Can't reproduce): cephadm: mons are not properly undeployed
- ...
- 12:33 PM Feature #45203 (Resolved): OSD Spec: allow filtering via explicit hosts and labels
- How to reproduce:
1. bootstrap a single-node cluster on a machine with 4 free disks, and then run "ceph versions" ...
- 12:24 PM Bug #45167 (In Progress): cephadm: mons are not properly deployed
- can confirm:...
- 09:41 AM Bug #45197 (Duplicate): cephadm: rgw: failed to bind address 0.0.0.0:80
- close. duplicate.
- 09:26 AM Bug #45197: cephadm: rgw: failed to bind address 0.0.0.0:80
- ...
- 09:01 AM Bug #45197 (Duplicate): cephadm: rgw: failed to bind address 0.0.0.0:80
- Despite running as root, RGW still cannot bind to port 80....
- 09:17 AM Bug #44792: cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- This is a usability bug:
Workaround might be to infer the ceph.conf as well?...
- 09:08 AM Bug #45198 (Closed): cephadm: unable to add iSCSI daemon from service spec yaml file
- Looking into the docs [1], I see that we can use the `add` command in two different ways:
1) *ceph orch daemon add... - 08:49 AM Bug #45196 (Resolved): cephadm: remove 'fqdn_enabled' parameter from iSCSI service spec
- The 'fqdn_enabled' parameter was never merged into ceph-iscsi master branch.
That setting was introduced on https:...
- 08:27 AM Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
- Can we access ceph-iscsi API if we don't specify an `api_user` and `api_pass`?
- 07:26 AM Bug #45129 (Fix Under Review): simple (ceph-disk) style OSDs adopted by cephadm don't start after...
- 01:32 AM Bug #45161: cephadm: iscsi should validate the existence of the given pool
- I've added a pool check to iscsi and nfs (which has the same issue): https://github.com/ceph/ceph/pull/34698
04/22/2020
- 11:27 PM Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
- Isn't that normal? Others do that too; look at _add_rgw above it. If you want to set the non-standard options you are...
- 11:19 AM Bug #45174 (Triaged): cephadm: missing parameters on 'orch daemon add iscsi'
- 09:23 AM Bug #45174 (Resolved): cephadm: missing parameters on 'orch daemon add iscsi'
- `orch daemon add iscsi` is missing some parameters when compared to iSCSI service spec:
add command parameter: htt... - 07:41 PM Bug #45162 (Fix Under Review): cephadm: iscsi should use the correct container image
- 02:59 PM Bug #45087 (Triaged): cephadm: add-repo: cephadm uses the container image ID as Debian repo base
- 02:59 PM Bug #45087: cephadm: add-repo: cephadm uses the container image ID as Debian repo base
- Right, that doesn't make sense.
- 02:55 PM Feature #44869 (Need More Info): cephadm: automatic auth key rotation
- need info
- 02:55 PM Bug #45120 (Fix Under Review): cephadm: adopt prometheus doesn't work
- 02:17 PM Feature #43690: cephadm: service resource limits
- [16:13:08] <jlayton> need to lower the daemon memory limits to try and reproduce a problem
[16:13:27] <jlayton> (plu...
- 02:11 PM Bug #44832 (Fix Under Review): cephadm: `ceph cephadm generate-key` fails with No such file or di...
- 02:00 PM Bug #44826 (Fix Under Review): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring pr...
- 01:49 PM Documentation #44828 (Fix Under Review): cephadm: clarify "Failed to infer CIDR network for mon ip"
- 01:49 PM Documentation #44905 (Pending Backport): cephadm troubleshooting SSH errors
- 01:20 PM Bug #45095 (Pending Backport): cephadm adopt can't handle offline OSDs
- 01:20 PM Bug #45108 (Pending Backport): test_orchestrator: service ls doesn't work
- 01:19 PM Bug #44609 (Resolved): cephadm: grafana: cert problem prevents dashboard integration
- 01:18 PM Bug #45081 (Pending Backport): cephadm: `upgrade check 15.2.1` : OrchestratorError: Failed to pul...
- 12:53 PM Bug #45129: simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
- Sebastian Wagner wrote:
> Hm, we're already injecting this lvm activate into the unit file:
>
> * https://github....
- 09:16 AM Documentation #44971 (Fix Under Review): cephadm: document the cephadm binary
- 09:01 AM Bug #45172 (Resolved): bin/cephadm: logs: Traceback: not enough values to unpack (expected 2, got 1)
- Querying logs for a daemon that doesn't exist results in an uncaught traceback....
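The "not enough values to unpack (expected 2, got 1)" message is the classic failure of tuple-unpacking a split that yields only one part. A hedged sketch of the failure mode and a guarded alternative (names here are illustrative, not the actual cephadm code):

```python
# Hedged sketch: unpacking name.split('.', 1) into two variables raises
# ValueError when the name contains no '.' at all.

def fragile(name: str):
    daemon_type, daemon_id = name.split('.', 1)  # ValueError if no '.'
    return daemon_type, daemon_id

def guarded(name: str):
    parts = name.split('.', 1)
    if len(parts) != 2:
        raise SystemExit(f"{name}: daemon name must look like <type>.<id>")
    return parts[0], parts[1]

try:
    fragile("nosuchdaemon")
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 1)
```

Catching the malformed name up front turns the raw traceback into a readable error message.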