Activity
From 06/06/2020 to 07/05/2020
07/03/2020
- 04:03 PM Tasks #46352 (Won't Fix): add leap support for cephadm
- Currently, the build scripts for ceph-container are designed for shipping centos 8 based images,
as it would be much...
- 01:00 PM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- I got it to work like this:...
- 10:01 AM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Any news on this, I have a new 15.2.4 cluster and its failing.
- 08:32 AM Documentation #46335: Document "Using cephadm to set up rgw-nfs"
- the building blocks are there: setting up ganesha, setting up the RADOS objects, https://docs.ceph.com/docs/master/ra...
- 08:30 AM Documentation #46335 (Resolved): Document "Using cephadm to set up rgw-nfs"
- * cephadm doesn't care about exports. instead it simply sets up the daemons.
* cephadm only creates an empty 'conf-{...
- 05:32 AM Bug #46329 (Fix Under Review): cephadm: Dashboard's ganesha option is not correct if there are mu...
07/02/2020
- 02:39 PM Feature #45263 (In Progress): osdspec/drivegroup: not enough filters to define layout
- 01:54 PM Feature #45263: osdspec/drivegroup: not enough filters to define layout
- This patch allows switching between `AND` and `OR` gating. https://github.com/ceph/ceph/compare/master...jschmid1:dri...
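As a rough illustration of what that patch proposes (the @filter_logic@ field name and all values below are assumptions based on the linked branch, not merged syntax), an OSD spec could then ask for devices matching either filter instead of all of them:

    service_type: osd
    service_id: example_osds        # hypothetical name
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1                 # filter 1
      size: '2TB:'                  # filter 2
    filter_logic: OR                # assumed field: match devices satisfying either filter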
- 01:50 PM Feature #45203 (Resolved): OSD Spec: allow filtering via explicit hosts and labels
- 08:28 AM Bug #46327 (Rejected): cephadm: nfs daemons share the same config object
- Not a bug, by design all daemons within a cluster will share the same config object.
- 08:02 AM Bug #46327 (Won't Fix): cephadm: nfs daemons share the same config object
- If we create an NFS service with multiple instances, those instances share the same rados object as the configuration s...
- 08:09 AM Bug #46329 (Resolved): cephadm: Dashboard's ganesha option is not correct if there are multiple N...
- How to reproduce:
* Create an NFS service with multiple daemons. e.g. With the following spec:...
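For reference, a minimal spec of that shape might look like the sketch below; the service id, pool and namespace are illustrative stand-ins, not the values from the original report:

    service_type: nfs
    service_id: mynfs               # illustrative
    placement:
      count: 2                      # more than one daemon, as described above
    spec:
      pool: nfs-ganesha             # illustrative pool
      namespace: mynfs-ns           # illustrative namespace

Applied with @ceph orch apply -i nfs.yml@.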
07/01/2020
- 11:47 PM Feature #44866 (Pending Backport): cephadm root mode: support non-root users + sudo
- 11:47 PM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
- 08:58 AM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
- Right, I was able to create an iSCSI target on pacific/master:...
06/30/2020
- 11:20 PM Bug #46283: cephadm: Unable to create iSCSI target
- Maybe this PR (https://github.com/ceph/ceph/pull/35141) hasn't been backported into octopus yet.
- 05:58 PM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
- I'm getting an error when trying to create an iSCSI target on openSUSE Leap.
*How to reproduce:*
I've created a...
- 05:29 PM Bug #46237 (New): cephadm: Inconsistent exit code
- 01:56 PM Bug #46237 (In Progress): cephadm: Inconsistent exit code
- 09:36 AM Support #45940 (Need More Info): Orchestrator to be able to deploy multiple OSDs per single drive
- 09:36 AM Support #45940: Orchestrator to be able to deploy multiple OSDs per single drive
- https://docs.ceph.com/docs/master/cephadm/drivegroups/#the-advanced-case works for you?
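A sketch of a drive group that splits each matching device into several OSDs, with purely illustrative values (the @osds_per_device@ option comes from the drive group spec; everything else is made up for the example):

    service_type: osd
    service_id: multi_osd_per_device   # illustrative
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 0                    # e.g. restrict to NVMe/SSD
    osds_per_device: 4                 # create 4 OSDs on each matching device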
- 09:25 AM Bug #46271 (Resolved): podman pull: transient "Error: error creating container storage: error cre...
- ...
- 08:03 AM Bug #44990 (Pending Backport): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no suc...
- was fixed in master.
- 01:15 AM Bug #46175 (Fix Under Review): cephadm: orch apply -i: MON and MGR service specs must not have a ...
- 01:14 AM Bug #46268 (Fix Under Review): cephadm: orch apply -i: RGW service spec id might not contain a zone
- 01:10 AM Bug #46268 (Resolved): cephadm: orch apply -i: RGW service spec id might not contain a zone
- rgw.yaml...
06/29/2020
- 08:03 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-06-29_16:59:21-rados-octopus-distro-basic-smithi/5189862
- 06:40 PM Feature #46265 (Duplicate): test cephadm MDS deployment
- right now, the test is broken.
workaround is to apply it manually: https://github.com/ceph/ceph/blob/cedf2bbd13daba...
- 03:50 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- Sebastian Wagner wrote:
> I think this is actually the correct behavior!
How is it the correct behaviour? If it i...
- 12:23 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- I think this is actually the correct behavior!
- 03:06 PM Bug #46256 (Need More Info): OSDs are getting re-added to the cluster, despite unmanaged=True
- 03:06 PM Bug #46256 (Can't reproduce): OSDs are getting re-added to the cluster, despite unmanaged=True
- 01:10 PM Bug #46138 (Pending Backport): mgr/dashboard: Error creating iSCSI target
- 01:05 PM Bug #46254 (Can't reproduce): cephadm upgrade test: exit condition is wrong. we have to wait longer
- We're exiting the upgrade loop too early. In this case, the OSD didn't have time to come up again. ...
- 12:24 PM Bug #45016 (Pending Backport): mgr: `ceph tell mgr mgr_status` hangs
- 12:21 PM Bug #46038 (Can't reproduce): cephadm mon start failure: Failed to reset failed state of unit cep...
- feel free to reopen the issue!
- 12:21 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
- The logs are gone. Maybe we should put the logs into the tracker here.
- 12:16 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- `daemon add` is too low level. If we want commands to be idempotent, we have to stop calling them from cephadm.py.
- 12:14 PM Bug #45167: cephadm: mons are not properly deployed
- low, until it happens again.
- 12:04 PM Tasks #45814 (In Progress): tasks/cephadm.py: Add iSCSI smoke test
- 11:25 AM Bug #46253 (Resolved): OSD specs without service_id
- ...
- 11:07 AM Bug #46252 (Closed): MGRs should get a random identifier, ONLY if we're co-locating MGRs on the s...
- ...
- 10:28 AM Feature #45565: cephadm: A daemon should provide information about itself (e.g. service urls)
- See DaemonDescription's service_url
- 09:04 AM Bug #46245 (Fix Under Review): cephadm: set-ssh-config/clear-ssh-config command doesn't take effe...
- 07:01 AM Bug #46245: cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immediately
- Can you describe your working environment?
- 06:52 AM Bug #46245 (Resolved): cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immed...
- Cephadm module should reload ssh config when the user sets a new ssh config or clear it.
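For context, a minimal sketch of the workflow in question (the path is illustrative):

    # set a custom ssh config for the cephadm mgr module
    ceph cephadm set-ssh-config -i /path/to/ssh_config
    # drop the custom config again
    ceph cephadm clear-ssh-config

The report is that the module does not pick up the new config immediately after these commands.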
- 07:14 AM Bug #46247: cephadm mon failure: Error: no container with name or ID ... no such container
- http://qa-proxy.ceph.com/teuthology/ideepika-2020-06-25_18:36:29-rados-wip-deepika-testing-2020-06-25-2058-distro-bas...
- 07:13 AM Bug #46247 (Can't reproduce): cephadm mon failure: Error: no container with name or ID ... no suc...
- ...
06/28/2020
- 12:49 AM Bug #46157 (Resolved): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0.0:80: Pe...
06/26/2020
- 05:49 PM Bug #46233 (Fix Under Review): cephadm: Add "--format" option to "ceph orch status"
- 05:37 PM Bug #46233 (In Progress): cephadm: Add "--format" option to "ceph orch status"
- 04:55 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
- we're talking about a function that is 3 LOCs. having multiple tracker issues for this just feels wrong to me.
- 04:35 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
- Nathan Cutler wrote:
> This is actually two different issues ("--format option missing" and "SSH error does not affect...
- 03:16 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
- This is actually two different issues ("--format option missing" and "SSH error does not affect exit status"). Maybe us...
- 03:08 PM Bug #46233 (Resolved): cephadm: Add "--format" option to "ceph orch status"
- ATM it's not possible to specify the output format for "ceph orch status":...
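A sketch of the invocation the request is about, following the @--format@ convention used elsewhere in the orchestrator CLI:

    ceph orch status --format json
    ceph orch status --format yaml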
- 04:33 PM Bug #46237 (Won't Fix): cephadm: Inconsistent exit code
- If SSH keys are not available, then `ceph orch status` return code is zero:...
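A minimal shell illustration of the reported behaviour (not taken from the ticket):

    ceph orch status
    echo $?          # prints 0 even though the SSH keys are missing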
- 01:20 PM Bug #46231 (Fix Under Review): translate.to_ceph_volume: no need to pass the drive group
- 01:18 PM Bug #46231 (Resolved): translate.to_ceph_volume: no need to pass the drive group
- The interface of translate.to_ceph_volume is needlessly complex as it takes a reference to the drive group, that is p...
- 08:06 AM Cleanup #46219 (Resolved): cephadm: remove DaemonDescription.service_id()
- It just doesn't work out.
@DaemonDescription.service_id()@ is *impossible* to implement, as there is no clear rela...
- 03:41 AM Feature #45654 (Rejected): orchestrator: support OSDs backed by LVM LV/VG
- Sebastian Wagner wrote:
> Is this something we need to improve in ceph-volume?
I don't think so. The origin for t...
- 12:03 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- kk, will remove it :)
06/25/2020
- 11:05 PM Bug #46134: ceph mgr should fail if it cannot add osd
- Nope, I can't find any process logs anywhere either; it appears to be just a log print.
- 02:27 PM Bug #46134: ceph mgr should fail if it cannot add osd
- hm strange. did that host appear after a while?
- 11:04 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
- Recurrent failure observed on Fedora 31 and Ubuntu 18.04 (although after removing the stale cluster, a rerun on Ubuntu s...
- 03:43 PM Bug #46132 (Duplicate): cephadm: Failed to add host in cephadm through command 'ceph orch host ad...
- 11:45 AM Bug #46132: cephadm: Failed to add host in cephadm through command 'ceph orch host add node1'
- Error stack trace::
_Promise failed Traceback (most recent call last): File "/usr/share/ceph/mgr/cephadm/module.py...
- 02:50 PM Feature #44886: cephadm: allow use of authenticated registry
- ...
- 02:23 PM Bug #46206: cephadm: podman 2.0
- looks like cephadm is not podman 2.0 compatible
- 11:57 AM Bug #46206: cephadm: podman 2.0
- May be related to https://github.com/ceph/ceph/pull/32995.
- 11:52 AM Bug #46206 (Rejected): cephadm: podman 2.0
- ...
- 10:12 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- ...
- 01:47 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- I managed to recreate the issue. We lock down the caps to just have access to the pool in the config. If I create an ...
- 08:47 AM Bug #46204 (Resolved): cephadm upgrade test: fail if upgrade status is set to error
- http://pulpito.ceph.com/swagner-2020-06-25_08:07:18-rados:cephadm-wip-swagner-testing-2020-06-24-1032-distro-basic-sm...
06/24/2020
- 11:22 PM Bug #46138: mgr/dashboard: Error creating iSCSI target
- oh interesting, maybe there's another cap we're missing? We did lock it down some. Let me have a play and attempt to ...
- 07:16 PM Feature #46182 (Resolved): cephadm should use the same image reference across the cluster
- The documentation has this warning (from https://docs.ceph.com/docs/octopus/install/containers/#containers):
Impor...
- 03:58 PM Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- Nathan Cutler wrote:
> If this is caused by the presence of "service_id: SOME_STRING" in the spec yaml, maybe it wou...
- 12:20 PM Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- If this is caused by the presence of "service_id: SOME_STRING" in the spec yaml, maybe it would make sense for cephad...
- 11:36 AM Bug #46175 (Resolved): cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- service_spec_core.yml...
- 01:07 PM Bug #46098 (In Progress): Exception adding host using cephadm
- 12:02 PM Feature #46177 (New): Investigate, if we can run ssh-agent in the MGR container
- Is it possible to use password encrypted SSH keys?
Maybe something like https://stackoverflow.com/questions/468379... - 10:02 AM Documentation #46168 (In Progress): Add information about 'unmanaged' parameter
- 09:54 AM Documentation #46168 (Resolved): Add information about 'unmanaged' parameter
- Explain parameter 'unmanaged' in OSDs creation and deletion
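A minimal sketch of a spec using it (service id and filters are illustrative):

    service_type: osd
    service_id: example_osds        # illustrative
    unmanaged: true                 # cephadm will not create or remove daemons for this spec
    placement:
      host_pattern: '*'
    data_devices:
      all: true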
- 08:53 AM Bug #46157 (Fix Under Review): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0....
- 07:25 AM Bug #45628: cephadm qa: smoke should verify daemons are actually running
- I think we should solve this by creating a HEALTH_WARN, if a daemon enters ...
06/23/2020
- 03:28 PM Bug #46157 (Resolved): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0.0:80: Pe...
- http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-sm...
- 08:53 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- This is not a Dashboard issue because I get the same error when using `gwcli` tool:...
- 04:11 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- https://github.com/ceph/ceph/pull/35709
06/22/2020
- 05:12 PM Bug #45343 (Resolved): Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshak...
- 03:08 PM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- This is similar to the way that Kubernetes does things.
--Sebastian Wagner, Ceph Orchestrators Meeting 22 Jun 2020
- 08:48 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- We need more examples of YAML files that users can cut and paste or at least cut, alter, and paste.
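For example, a small cut-and-paste-able spec (all names illustrative):

    service_type: mds
    service_id: myfs                # illustrative
    placement:
      count: 2
      label: mds

applied with @ceph orch apply -i mds.yml@.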
- 08:31 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- affected files:
cephadm/adopt.rst
cephadm/install.rst
mgr/orchestrator.rst
- 08:13 AM Documentation #46133 (Resolved): encourage users to apply YAML specs instead of using the CLI
- The Ceph orchestrator has two main ways to interact on the command line: the CLI and YAML specs.
Turned out, the CLI ...
- 12:47 PM Bug #44746 (Closed): cephadm: vstart.sh --cephadm: don't deploy crash by default
- 12:46 PM Bug #45167: cephadm: mons are not properly deployed
- Might be fixed by https://github.com/ceph/ceph/pull/35651
- 10:48 AM Bug #46138 (Resolved): mgr/dashboard: Error creating iSCSI target
- On the latest `master` (pacific), I get the following error when trying to create an iSCSI target in Dashboard:
!2...
- 08:24 AM Bug #46134 (Can't reproduce): ceph mgr should fail if it cannot add osd
- Strangely, after copying the ssh keys to the remote host, when a new osd is added, the process executes without failure.
...
- 07:43 AM Documentation #45820 (Resolved): create OSDs doc refer to --use-all-devices
- 07:43 AM Documentation #45865 (Resolved): cephadm: The service spec documentation is lacking important inf...
- 07:30 AM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- only happens with MGRs
- 07:04 AM Bug #46132: cephadm: Failed to add host in cephadm through command 'ceph orch host add node1'
- Please, someone look into this issue.
- 07:03 AM Bug #46132 (Duplicate): cephadm: Failed to add host in cephadm through command 'ceph orch host ad...
- Failed to add host in cephadm (octopus release) through command 'ceph orch host add node1'
I am trying to add ceph...
06/21/2020
- 07:09 PM Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
- This is a "cephadm/orchestrator" issue, and the backport is being handled as such, so moving it to that project.
06/19/2020
- 10:02 PM Bug #46098: Exception adding host using cephadm
- lol, just discovered this myself. Confirm that the suggested fix is appropriate.
- 08:17 AM Bug #46098 (Triaged): Exception adding host using cephadm
- 04:43 AM Bug #46098: Exception adding host using cephadm
- Typo in 'Environment' section. 15.2.3 not 15.2.2
- 03:21 AM Bug #46098 (Resolved): Exception adding host using cephadm
- After bootstrapping 1st host using cephadm, attempting to add another host fails with an exception (variable referenc...
- 01:14 PM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- I'm starting to suspect that this comes from a race between the host refresh and the scheduler, which starts to create new daemo...
- 10:31 AM Bug #45973 (Fix Under Review): Adopted MDS daemons are removed by the orchestrator because they'r...
- 08:41 AM Bug #46103 (Duplicate): Restart service command restarts all the services and accepts service typ...
- ...
06/18/2020
- 10:03 PM Bug #46097 (Won't Fix): package mode has a hardcoded ssh user
- package mode user is hardcoded to 'cephadm'
- 08:25 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
- Michael Fritch wrote:
> I think there is some confusion on the `orch` cli commands.
>
> `orch ps` will list the c...
- 07:29 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
- I think there is some confusion on the `orch` cli commands.
`orch ps` will list the cephadm daemons, whereas `orch...
- 05:14 PM Documentation #46082 (Can't reproduce): cephadm: deleting (mds) service doesn't work?
- ...
- 02:14 PM Bug #46036 (Fix Under Review): cephadm: killmode=none: systemd units failed, but containers still...
- 01:15 PM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
- Machine is amd64 in virtualbox on Windows
- 12:53 PM Documentation #46073 (Can't reproduce): cephadm install fails: apt:stderr E: Unable to locate pac...
- When following the installation guide on https://ceph.readthedocs.io/en/latest/cephadm/install/ I ran cephadm install...
- 12:36 PM Feature #44875 (Fix Under Review): mgr/rook: PlacementSpec to K8s POD scheduling conversion
- 11:41 AM Bug #45155 (Pending Backport): mgr/dashboard: Error listing orchestrator NFS daemons
- 08:59 AM Documentation #46052 (Fix Under Review): Module 'cephadm' has failed: DaemonDescription: Cannot c...
- 07:55 AM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
- Sebastian Wagner wrote:
> the correct call is
>
> [...]
>
> which documentation / example did you use for this...
- 06:42 AM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- Seems to work once I've applied https://github.com/ceph/ceph/pull/35633 and added the --ipv6 to bootstrap.
- 01:09 AM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
- Aha! Solved it. We bind the mon to IPv6 (::1); in reality its messenger is bound to ::1, however the mgr is still bindin...
06/17/2020
- 08:19 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
- I wound up getting around this by using an Ansible role in which this worked successfully. You can feel free to close...
- 01:46 PM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
- the correct call is...
- 01:40 PM Documentation #46052 (Resolved): Module 'cephadm' has failed: DaemonDescription: Cannot calculate...
- ceph version 15.2.3
using Cephadm...
- 11:45 AM Bug #45097 (Resolved): cephadm: UX: Traceback, if `orch host add mon1` fails.
- 11:10 AM Bug #46045 (Resolved): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- http://qa-proxy.ceph.com/teuthology/kchai-2020-06-17_08:41:50-rados-wip-kefu-testing-2020-06-17-1349-distro-basic-smi...
- 09:46 AM Feature #46044 (Resolved): cephadm: Distribute admin keyring.
- This is similar to the ceph.conf, but more complicated.
Maybe use a placement spec? ...
- 09:40 AM Bug #46037: ceph orch command hangs forever when trying to add osd
- `daemon add` violates https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers ....
- 08:42 AM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user
06/16/2020
- 09:58 PM Feature #44866 (In Progress): cephadm root mode: support non-root users + sudo
- 09:58 PM Feature #45653 (In Progress): cephadm: Improve safety by using a specific user
- 03:33 PM Bug #46038 (Closed): cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcf...
- command used:...
- 03:10 PM Bug #46037 (Can't reproduce): ceph orch command hangs forever when trying to add osd
- after bootstrapping the cephadm cluster when we login to ceph shell, ...
- 02:26 PM Bug #46036: cephadm: killmode=none: systemd units failed, but containers still running
- https://github.com/ceph/ceph/pull/35524 is part of the solution. the other part is adding a @set -e@
- 02:23 PM Bug #46036 (Resolved): cephadm: killmode=none: systemd units failed, but containers still running
- ...
- 11:08 AM Feature #45859 (Fix Under Review): cephadm: use fixed versions
- 10:59 AM Feature #45859 (In Progress): cephadm: use fixed versions
- 10:49 AM Bug #45594: cephadm: weight of a replaced OSD is 0
- The initial weight is never restored after `draining` the OSDs.
We can save the initial weight/reweight and reset ...
- 09:53 AM Bug #46031 (Resolved): Exception: Failed to validate Drive Group: block_wal_size must be of type int
- ...
- 04:42 AM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
- I'll have a poke around and see if I can get this unblocked so we can continue your IPv6 adventure :)
06/15/2020
- 09:24 PM Bug #45999 (Fix Under Review): cephadm shell: picking up legacy_dir
- 09:16 PM Bug #45999 (In Progress): cephadm shell: picking up legacy_dir
- 03:27 PM Bug #45999 (Resolved): cephadm shell: picking up legacy_dir
- ...
- 02:02 PM Feature #45378 (In Progress): cephadm: manage /etc/ceph/ceph.conf
- 11:13 AM Feature #45996 (New): adopted prometheus instance uses port 9095, regardless of original port number
- When adopting prometheus (@cephadm adopt --style legacy --name prometheus.HOSTNAME@), the new prometheus daemon start...
- 11:01 AM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- We have the same problem with adopted prometheus instances (I adopted one, it was working fine for a few minutes, the...
- 08:54 AM Documentation #45977: cephadm: Improve Service removal docs
- Yes, it worked when I entered the command above for every service.
So I deleted every nfs service and daemon and sta...
06/12/2020
- 12:30 PM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- It's not an accident that this is working. OTOH, this behavior needs improvement. Let me think about the chicke...
- 02:32 AM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- Sebastian Wagner wrote:
> Hm. Isn't this a big flaw in adopt, not just for MDS?
Not in practice so far. The docs...
- 06:51 AM Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGr...
- DriveGroups allow specifying filestore; they are not that tightly coupled to cephadm.
I'd argue cephadm should det...
- 04:36 AM Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGr...
- https://docs.ceph.com/docs/master/cephadm/adoption/#limitations says "Cephadm only works with BlueStore OSDs. If ther...
- 06:18 AM Bug #45155 (Fix Under Review): mgr/dashboard: Error listing orchestrator NFS daemons
- 06:14 AM Feature #45982 (Resolved): mgr/cephadm: remove or update Dashboard settings after daemons are des...
- When these services are deployed, cephadm calls Dashboard's command to set settings to make features available in the...
06/11/2020
- 06:19 PM Bug #45097 (In Progress): cephadm: UX: Traceback, if `orch host add mon1` fails.
- 04:57 PM Bug #45980 (Resolved): cephadm: implement missing "FileStore not supported" error message and upd...
- This one is easy to reproduce.
Ask cephadm to create FileStore OSDs:...
- 03:38 PM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- Hm. Isn't this a big flaw in adopt, not just for MDS?
We might need to apply something like this before adopting...
- 10:13 AM Bug #45973 (Rejected): Adopted MDS daemons are removed by the orchestrator because they're orphans
- The "docs":https://docs.ceph.com/docs/master/cephadm/adoption/ say that when converting to cephadm, one needs to rede...
- 03:22 PM Bug #45976: cephadm: prevent rm-daemon from removing legacy daemons
- we should probably prevent *legacy* daemons from being removed altogether.
workarounds:
* Either adopt to a ce... - 01:10 PM Bug #45976 (Duplicate): cephadm: prevent rm-daemon from removing legacy daemons
cephadm displays daemon when none could be found at /var/lib/ceph/unknown/osd.0...
- 03:19 PM Documentation #45977: cephadm: Improve Service removal docs
- ...
- 01:20 PM Documentation #45977 (Resolved): cephadm: Improve Service removal docs
- First of all, yes I know, nfs under ceph orch is still under development but I couldn't find any information about th...
- 12:03 PM Feature #44055 (New): cephadm: make 'ls' faster
- I think a requirement for future refactoring here is:
* having very good pytest coverage with example outpu...
- 10:38 AM Cleanup #45321 (Fix Under Review): Service spec: unify `spec:` vs omitting `spec:`
06/10/2020
- 02:40 PM Bug #44746: cephadm: vstart.sh --cephadm: don't deploy crash by default
- might be fixed https://github.com/ceph/ceph/pull/35472
- 12:29 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- https://github.com/ceph/ceph/pull/35524
06/09/2020
- 10:05 PM Bug #45961 (Fix Under Review): cephadm: high load and slow disk make "cephadm bootstrap" fail
- 08:41 PM Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
- apparently `ceph -s` can take longer than 30sec to return as seen by the partial output in between retries.
- 08:10 PM Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
- the code: https://github.com/ceph/ceph/blob/5a7d75290f4480764b24c241ba11f93fe8917c4b/src/cephadm/cephadm#L2497-L2517
- 08:09 PM Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
- You can see in the log excerpt that the "ceph -s" command eventually produces output, but by the time the earlier att...
- 07:50 PM Bug #45961 (Resolved): cephadm: high load and slow disk make "cephadm bootstrap" fail
- When running "cephadm bootstrap" in a libvirt-based virtual environment (four VMs) running on a machine that has a si...
- 08:30 PM Bug #45962 (Closed): "ceph orch apply nfs" seems to deploy an nfs daemon, but that doesn't show u...
- ...
- 08:17 PM Bug #45962 (Closed): "ceph orch apply nfs" seems to deploy an nfs daemon, but that doesn't show u...
- After running the following commands:...
- 05:28 PM Feature #44055 (In Progress): cephadm: make 'ls' faster
- 09:35 AM Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
- The exception in the description was already fixed.
The rest of the work is to set pool and namespace within cephadm...
- 09:33 AM Bug #45155 (In Progress): mgr/dashboard: Error listing orchestrator NFS daemons
- 08:30 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- fascinating:...
- 05:34 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/kchai-2020-06-08_10:56:36-rados-wip-kefu-testing-2020-06-08-1713-distro-basic-smithi/5128793/
06/08/2020
- 04:21 PM Support #45940 (Closed): Orchestrator to be able to deploy multiple OSDs per single drive
- One might want to have multiple OSDs for a single fast (e.g. NVMe) drive. E.g. single BlueStore instance is known for...
- 04:15 PM Feature #45939 (New): Unable to use device that already has existing LVs.
- Currently the Orchestrator is unable to use a device on which LVM has already been initialized. As a result:
1) User to...
- 03:27 PM Feature #45938 (Closed): "ceph orch daemon add osd" lacks an ability to specify DB/WAL devices
- Looks like this command's functionality is pretty limited - it's able to add new OSDs backed by single main device on...
- 03:11 PM Bug #45343 (In Progress): Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS hands...
- Sebastian Wagner wrote:
> quay.io closed the ticket with "it's your job to do retries"
lol. Well, quay.ceph.io i...
- 01:11 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- quay.io closed the ticket with "it's your job to do retries"
- 02:35 PM Bug #45909 (Resolved): already existing cluster deployed: cephadm bootstrap failure
- 02:35 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Deepika Upadhyay wrote:
> Sebastian Wagner wrote:
> > Deepika Upadhyay wrote:
> > > Sebastian Wagner wrote:
> > >...
- 02:33 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Sebastian Wagner wrote:
> Deepika Upadhyay wrote:
> > Sebastian Wagner wrote:
> > > hm. you already have plenty of...
- 12:12 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Deepika Upadhyay wrote:
> Sebastian Wagner wrote:
> > hm. you already have plenty of clusters already running on yo...
- 08:51 AM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Sebastian Wagner wrote:
> hm. you already have plenty of clusters already running on your machine. is this on purpos...
- 02:34 PM Documentation #45937 (New): cephadm: setting the various certificates
- *Grafana*
how to set the grafana certificate and key:...
- 01:46 PM Documentation #45936 (New): cephadm: document restart the whole cluster
- ...
- 01:35 PM Feature #44414: bubble up errors during 'apply' phase to 'cluster warnings'
- https://github.com/ceph/ceph/pull/35456 will go in this direction.
- 01:34 PM Feature #45905 (Duplicate): cephadm: errors in serve() should create a HEALTH warning
- 12:35 PM Feature #45905: cephadm: errors in serve() should create a HEALTH warning
- https://github.com/ceph/ceph/pull/35456 will go in this direction.
- 01:32 PM Bug #44603 (Rejected): cephadm: `ls --refresh` shows Tracebacks in the log
- I don't plan to fix this. Instead, I'm about to remove support for --refresh.
- 01:26 PM Bug #45172 (Pending Backport): bin/cephadm: logs: Traceback: not enough values to unpack (expecte...
- 01:24 PM Cleanup #45321: Service spec: unify `spec:` vs omitting `spec:`
- The decision was to use @spec@.
- 01:19 PM Feature #43911 (Resolved): test cephadm rgw deployment
- 01:17 PM Feature #45654: orchestrator: support OSDs backed by LVM LV/VG
- Is this something we need to improve in ceph-volume?
- 01:13 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
- might want to run ...
- 01:13 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
- execnet is again very helpful with their exceptions this time.
- 12:41 PM Documentation #45862: orch mds rm is documented but does not exist
- If you want to remove the service, you can use:...
- 12:38 PM Bug #45867: orchestrator: Errors while deployment are hidden behind the log wall
- relates to https://github.com/ceph/ceph/pull/35456
- 12:32 PM Bug #45174 (Resolved): cephadm: missing parameters on 'orch daemon add iscsi'
- 12:30 PM Feature #43836 (Resolved): cephadm adopt: also adopt Prometheus and Grafana daemons from DeepSea
- 12:27 PM Bug #45032 (Resolved): cephadm: Not recovering from `OSError: cannot send (already closed?)`
- 12:27 PM Bug #45627 (Resolved): cephadm: frequently getting `1 hosts fail cephadm check`
- 12:26 PM Documentation #45411 (Resolved): cephadm: add section about container images
- 12:26 PM Bug #45625 (Resolved): cephadm: when configuring monitoring with ceph orch, ceph dashboard is onl...
- 12:25 PM Feature #44886 (New): cephadm: allow use of authenticated registry
- 12:24 PM Bug #45696 (Resolved): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:24 PM Feature #45463 (Resolved): cephadm: allow custom images for grafana, prometheus, alertmanager and...
- 12:24 PM Bug #45632 (Resolved): nfs: auth credentials for recovery database include mds
- 12:24 PM Bug #45629 (Resolved): cephadm: Allow users to provide ssh keys during bootstrap
- 12:23 PM Bug #45617 (Resolved): mgr/orch: mds with explicit naming
- 09:07 AM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- ...
06/06/2020
- 09:46 PM Tasks #45914 (Won't Fix): cephadm: make src/cephadm/vstart-smoke.sh a proper teuthology test
- src/cephadm/vstart-smoke.sh is a simple bash script. This is a perfect template to extend qa/suites/rados/cephadm/wor...
- 09:29 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- hm. you already have plenty of clusters already running on your machine. is this on purpose? If yes, I think ceph-a23...
- 05:18 PM Bug #45909: already existing cluster deployed: cephadm bootstrap failure
- Sebastian Wagner wrote:
> looks as if cephadm wasn't able to start the mons. can you attach
>
> [...]
sure!
...