Activity

From 06/10/2020 to 07/09/2020

07/09/2020

02:04 PM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
Can we get the priority set to something higher, please?
It's not affecting the cluster, but it does cause the clu...
Andy Gold
04:48 AM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
Andy Gold wrote:
> Any news on this, I have a new 15.2.4 cluster and it's failing.
Any update on this, I also hav...
Sathvik Vutukuri
05:16 AM Bug #46429 (Closed): cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
Podman had a major new release recently and it seems that cephadm cannot bootstrap a new cluster because of it.
I in...
Gunther Heinrich
03:18 AM Bug #46327: cephadm: nfs daemons share the same config object
Patrick Donnelly wrote:
> Michael Fritch wrote:
> > > Kiefer Chang wrote:
> > > > This causes a regression in the ...
Michael Fritch
02:27 AM Bug #46327: cephadm: nfs daemons share the same config object
Michael Fritch wrote:
> > Kiefer Chang wrote:
> > > This causes a regression in the Dashboard. RADOS objects are de...
Patrick Donnelly
02:14 AM Bug #46327: cephadm: nfs daemons share the same config object
Patrick Donnelly wrote:
> Hi Kiefer, I left some similar comments on your slide deck but also will say here:
>
...
Michael Fritch
01:32 AM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
Sathvik Vutukuri wrote:
> Sathvik Vutukuri wrote:
> > I have tried cephadm for Ceph installation, but am unable to ru...
Sathvik Vutukuri
01:31 AM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
Sathvik Vutukuri wrote:
> I have tried cephadm for Ceph installation, but am unable to run Ceph RADOS GW and S3-compat...
Sathvik Vutukuri

07/08/2020

09:06 PM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
RGW deployment is supported - see https://ceph.readthedocs.io/en/latest/cephadm/install/#deploy-rgws Josh Durgin
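For illustration, the deployment the linked docs describe boils down to an RGW service spec applied with `ceph orch apply -i`; the following is a hedged sketch only (realm/zone id, host count, and file name are hypothetical, not taken from the ticket):

```yaml
# rgw.yaml -- applied via: ceph orch apply -i rgw.yaml
# service_id and placement values are illustrative.
service_type: rgw
service_id: myrealm.myzone
placement:
  count: 2
```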
08:11 PM Bug #45724 (Fix Under Review): check-host should not fail using fqdn or not that hard
Adam King
05:23 PM Bug #46327: cephadm: nfs daemons share the same config object
Hi Kiefer, I left some similar comments on your slide deck but also will say here:
Kiefer Chang wrote:
> This cau...
Patrick Donnelly
12:20 PM Bug #46329 (Pending Backport): cephadm: Dashboard's ganesha option is not correct if there are mu...
Kefu Chai
09:19 AM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
This is a bug in the documentation only Reinhard Eilmsteiner
09:19 AM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
Reinhard Eilmsteiner wrote:
> Machine is amd64 in virtualbox on Windows
Instruction is missing adding the apt-rep...
Reinhard Eilmsteiner
06:11 AM Bug #46412 (Can't reproduce): cephadm trying to pull mimic based image
... Deepika Upadhyay

07/07/2020

08:24 PM Support #46384: 15.2.4 and cephadm - mds not starting
Here is how I do it:
https://docs.ceph.com/docs/master/cephfs/fs-volumes/#fs-volumes
"This creates a CephFS fil...
Nathan Cutler
10:49 AM Support #46384 (Resolved): 15.2.4 and cephadm - mds not starting
I have a new install of Octopus 15.2.4 and am using cephadm to manage it. The MDS daemons are not being created.
$...
Andy Gold
03:41 PM Bug #46398 (Fix Under Review): cephadm: can't use custom prometheus image
Patrick Seidensal
12:08 PM Bug #46398 (In Progress): cephadm: can't use custom prometheus image
Patrick Seidensal
12:08 PM Bug #46398 (Resolved): cephadm: can't use custom prometheus image
... Patrick Seidensal
01:44 PM Bug #46206 (Rejected): cephadm: podman 2.0
This was a bug in Podman v2.0.0 and has been fixed in Podman v2.0.1.
> Fixed a bug where the --privileged flag had...
Patrick Seidensal
01:24 PM Bug #46206: cephadm: podman 2.0
Removing changes from commit "b5e5c753":https://github.com/ceph/ceph/commit/b5e5c753f415ab1f18ccfe3ad636649a0f51a93a ... Patrick Seidensal
12:55 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
I am still putting additional input and requesting that if this behavior remains as is that a way to stop the process... David Capone
09:00 AM Bug #45792 (Fix Under Review): cephadm: zapped OSD gets re-added to the cluster.
As this is the intended behavior, this needs to be documented. See https://github.com/ceph/ceph/pull/35744 Joshua Schmid
12:38 PM Feature #45859 (Pending Backport): cephadm: use fixed versions
Patrick Seidensal
11:47 AM Bug #46396 (New): osdspec/drivegroup: check for 'intersection' in DriveSelection when multiple OS...
We do allow the use of multiple OSDSpecs explicitly. However, misconfiguration can lead to unwanted behavior.
A go...
Joshua Schmid
10:54 AM Bug #46385 (Duplicate): Can i run rados and s3 compatible object storage device in Cephadm?
I have tried cephadm for Ceph installation, but am unable to run Ceph RADOS GW and S3-compatible object storage.
Ho...
Sathvik Vutukuri
09:05 AM Bug #45861 (Fix Under Review): data_devices: limit 3 deployed 6 osds per node
Joshua Schmid
09:03 AM Bug #45672 (Can't reproduce): Unable to add additional hosts to cluster using cephadm
thanks, closing Joshua Schmid
08:47 AM Bug #44756 (In Progress): drivegroups: replacement op will ignore existing wal/dbs
fixed with https://github.com/ceph/ceph/pull/34740 Joshua Schmid
08:23 AM Bug #45980 (In Progress): cephadm: implement missing "FileStore not supported" error message and ...
Joshua Schmid
08:22 AM Bug #46231 (Pending Backport): translate.to_ceph_volume: no need to pass the drive group
Joshua Schmid
08:15 AM Bug #45172 (Resolved): bin/cephadm: logs: Traceback: not enough values to unpack (expected 2, got 1)
Joshua Schmid
08:14 AM Feature #45263 (Fix Under Review): osdspec/drivegroup: not enough filters to define layout
Joshua Schmid

07/06/2020

03:46 PM Bug #45872 (Fix Under Review): ceph orch device ls exposes the `device_id` under the DEVICES colu...
Joshua Schmid
03:33 PM Documentation #46377 (Resolved): cephadm: Missing 'service_id' in last example in orchestrator#se...
Missing 'service_id' in last example in orchestrator#service-specification. Example can be found right above https://... Stephan Müller
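As a sketch of what the corrected docs example presumably needs, a service spec including the `service_id` line; service type and names here are illustrative, not taken from the docs page:

```yaml
service_type: mds
service_id: myfs        # the line missing from the example
placement:
  count: 3
```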
03:28 PM Tasks #46376 (Resolved): cephadm: Make vagrant usage more comfortable
Currently you can only use a big scale factor with the Vagrant setup. You can have x * (mgr, mon, osd with 2 disks).... Stephan Müller
02:40 PM Feature #44886: cephadm: allow use of authenticated registry
Should registry management and authentication be handled at the CRI-O level by the system admin, or maybe by cephadm as a helper... Denys Kondratenko
02:24 PM Bug #44888 (Fix Under Review): Drivegroup's :limit: isn't working correctly
Joshua Schmid
01:41 PM Bug #46327 (New): cephadm: nfs daemons share the same config object
Reopening, as this change introduces a regression and will potentially break upgrades. Lenz Grimmer
07:03 AM Bug #46327: cephadm: nfs daemons share the same config object
This causes a regression in the Dashboard. RADOS objects are designed to work with multiple daemons and each daemon h... Kiefer Chang

07/03/2020

04:03 PM Tasks #46352 (Won't Fix): add leap support for cephadm
Currently, the build scripts for ceph-container are designed for shipping CentOS 8-based images,
as it would be much...
Deepika Upadhyay
01:00 PM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
I got it to work like this:... Nathan Cutler
10:01 AM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
Any news on this, I have a new 15.2.4 cluster and it's failing. Andy Gold
08:32 AM Documentation #46335: Document "Using cephadm to set up rgw-nfs"
the building blocks are there: setting up ganesha, setting up the RADOS objects, https://docs.ceph.com/docs/master/ra... Zac Dover
08:30 AM Documentation #46335 (Resolved): Document "Using cephadm to set up rgw-nfs"
* cephadm doesn't care about exports. Instead, it simply sets up the daemons.
* cephadm only creates an empty 'conf-{...
Sebastian Wagner
05:32 AM Bug #46329 (Fix Under Review): cephadm: Dashboard's ganesha option is not correct if there are mu...
Kiefer Chang

07/02/2020

02:39 PM Feature #45263 (In Progress): osdspec/drivegroup: not enough filters to define layout
Sebastian Wagner
01:54 PM Feature #45263: osdspec/drivegroup: not enough filters to define layout
This patch allows switching between `AND` and `OR` gating. https://github.com/ceph/ceph/compare/master...jschmid1:dri... Joshua Schmid
01:50 PM Feature #45203 (Resolved): OSD Spec: allow filtering via explicit hosts and labels
Joshua Schmid
08:28 AM Bug #46327 (Rejected): cephadm: nfs daemons share the same config object
Not a bug, by design all daemons within a cluster will share the same config object. Varsha Rao
08:02 AM Bug #46327 (Won't Fix): cephadm: nfs daemons share the same config object
If we create an NFS service with multiple instances, those instances share the same RADOS object as the configuration s... Kiefer Chang
08:09 AM Bug #46329 (Resolved): cephadm: Dashboard's ganesha option is not correct if there are multiple N...
How to reproduce:
* Create an NFS service with multiple daemons. e.g. With the following spec:...
Kiefer Chang

07/01/2020

11:47 PM Feature #44866 (Pending Backport): cephadm root mode: support non-root users + sudo
Daniel Pivonka
11:47 PM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
Daniel Pivonka
08:58 AM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
Right, I was able to create an iSCSI target on pacific/master:... Ricardo Marques

06/30/2020

11:20 PM Bug #46283: cephadm: Unable to create iSCSI target
Maybe this PR (https://github.com/ceph/ceph/pull/35141) hasn't been backported into octopus yet. Matthew Oliver
05:58 PM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
I'm getting an error when trying to create an iSCSI target on openSUSE Leap.
*How to reproduce:*
I've created a...
Ricardo Marques
05:29 PM Bug #46237 (New): cephadm: Inconsistent exit code
Ricardo Marques
01:56 PM Bug #46237 (In Progress): cephadm: Inconsistent exit code
Ricardo Marques
09:36 AM Support #45940 (Need More Info): Orchestrator to be able to deploy multiple OSDs per single drive
Sebastian Wagner
09:36 AM Support #45940: Orchestrator to be able to deploy multiple OSDs per single drive
https://docs.ceph.com/docs/master/cephadm/drivegroups/#the-advanced-case works for you? Sebastian Wagner
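The "advanced case" linked above comes down to an OSDSpec with `osds_per_device`; a hedged sketch, in which the device path, host pattern, and count are illustrative rather than taken from the ticket:

```yaml
service_type: osd
service_id: multi_osd_per_drive
placement:
  host_pattern: '*'
data_devices:
  paths:
    - /dev/nvme0n1
osds_per_device: 4
```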
09:25 AM Bug #46271 (Resolved): podman pull: transient "Error: error creating container storage: error cre...
... Sebastian Wagner
08:03 AM Bug #44990 (Pending Backport): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no suc...
was fixed in master. Sebastian Wagner
01:15 AM Bug #46175 (Fix Under Review): cephadm: orch apply -i: MON and MGR service specs must not have a ...
Michael Fritch
01:14 AM Bug #46268 (Fix Under Review): cephadm: orch apply -i: RGW service spec id might not contain a zone
Michael Fritch
01:10 AM Bug #46268 (Resolved): cephadm: orch apply -i: RGW service spec id might not contain a zone
rgw.yaml... Michael Fritch

06/29/2020

08:03 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
/a/yuriw-2020-06-29_16:59:21-rados-octopus-distro-basic-smithi/5189862 Neha Ojha
06:40 PM Feature #46265 (Duplicate): test cephadm MDS deployment
Right now, the test is broken.
The workaround is to apply it manually: https://github.com/ceph/ceph/blob/cedf2bbd13daba...
Sebastian Wagner
03:50 PM Bug #46103: Restart service command restarts all the services and accepts service type too
Sebastian Wagner wrote:
> I think this is actually the correct behavior!
How is it the correct behaviour? If it i...
Varsha Rao
12:23 PM Bug #46103: Restart service command restarts all the services and accepts service type too
I think this is actually the correct behavior! Sebastian Wagner
03:06 PM Bug #46256 (Need More Info): OSDs are getting re-added to the cluster, despite unmanaged=True
Sebastian Wagner
03:06 PM Bug #46256 (Can't reproduce): OSDs are getting re-added to the cluster, despite unmanaged=True
Sebastian Wagner
01:10 PM Bug #46138 (Pending Backport): mgr/dashboard: Error creating iSCSI target
Sebastian Wagner
01:05 PM Bug #46254 (Can't reproduce): cephadm upgrade test: exit condition is wrong. we have to wait longer
We're exiting the upgrade loop too early. In this case, the OSD didn't have the time to come up again. ... Sebastian Wagner
12:24 PM Bug #45016 (Pending Backport): mgr: `ceph tell mgr mgr_status` hangs
Sebastian Wagner
12:21 PM Bug #46038 (Can't reproduce): cephadm mon start failure: Failed to reset failed state of unit cep...
feel free to reopen the issue! Sebastian Wagner
12:21 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
The logs are gone. Maybe we should put the logs into the tracker. Sebastian Wagner
12:16 PM Bug #45327: cephadm: Orch daemon add is not idempotent
`daemon add` is too low-level. If we want commands to be idempotent, we have to remove calling them from cephadm.py Sebastian Wagner
12:14 PM Bug #45167: cephadm: mons are not properly deployed
low, until it happens again. Sebastian Wagner
12:04 PM Tasks #45814 (In Progress): tasks/cephadm.py: Add iSCSI smoke test
Sebastian Wagner
11:25 AM Bug #46253 (Resolved): OSD specs without service_id
... Sebastian Wagner
11:07 AM Bug #46252 (Closed): MGRs should get a random identifier, ONLY if we're co-locating MGRs on the s...
... Sebastian Wagner
10:28 AM Feature #45565: cephadm: A daemon should provide information about itself (e.g. service urls)
See DaemonDescription's service_url Sebastian Wagner
09:04 AM Bug #46245 (Fix Under Review): cephadm: set-ssh-config/clear-ssh-config command doesn't take effe...
Sebastian Wagner
07:01 AM Bug #46245: cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immediately
Can you describe your work environment? Deepika Upadhyay
06:52 AM Bug #46245 (Resolved): cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immed...
The cephadm module should reload the SSH config when the user sets a new SSH config or clears it.
Kiefer Chang
07:14 AM Bug #46247: cephadm mon failure: Error: no container with name or ID ... no such container
http://qa-proxy.ceph.com/teuthology/ideepika-2020-06-25_18:36:29-rados-wip-deepika-testing-2020-06-25-2058-distro-bas... Deepika Upadhyay
07:13 AM Bug #46247 (Can't reproduce): cephadm mon failure: Error: no container with name or ID ... no suc...
... Deepika Upadhyay

06/28/2020

12:49 AM Bug #46157 (Resolved): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0.0:80: Pe...
Sebastian Wagner

06/26/2020

05:49 PM Bug #46233 (Fix Under Review): cephadm: Add "--format" option to "ceph orch status"
Ricardo Marques
05:37 PM Bug #46233 (In Progress): cephadm: Add "--format" option to "ceph orch status"
Ricardo Marques
04:55 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
We're talking about a function that is 3 LOCs. Having multiple tracker issues for this just feels wrong to me. Sebastian Wagner
04:35 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
Nathan Cutler wrote:
> This is actually two different issues ("--format option missing" and "SSH error does not affect...
Ricardo Marques
03:16 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
This is actually two different issues ("--format option missing" and "SSH error does not affect exit status"). Maybe us... Nathan Cutler
03:08 PM Bug #46233 (Resolved): cephadm: Add "--format" option to "ceph orch status"
ATM it's not possible to specify the output format for "ceph orch status":... Ricardo Marques
04:33 PM Bug #46237 (Won't Fix): cephadm: Inconsistent exit code
If SSH keys are not available, then `ceph orch status` return code is zero:... Ricardo Marques
01:20 PM Bug #46231 (Fix Under Review): translate.to_ceph_volume: no need to pass the drive group
Jan Fajerski
01:18 PM Bug #46231 (Resolved): translate.to_ceph_volume: no need to pass the drive group
The interface of translate.to_ceph_volume is needlessly complex as it takes a reference to the drive group, that is p... Jan Fajerski
08:06 AM Cleanup #46219 (Resolved): cephadm: remove DaemonDescription.service_id()
It just doesn't work out.
@DaemonDescription.service_id()@ is *impossible* to implement, as there is no clear rela...
Sebastian Wagner
03:41 AM Feature #45654 (Rejected): orchestrator: support OSDs backed by LVM LV/VG
Sebastian Wagner wrote:
> Is this something we need to improve in ceph-volume?
I don't think so. The origin for t...
Paul Cuzner
12:03 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
kk, will remove it :) Matthew Oliver

06/25/2020

11:05 PM Bug #46134: ceph mgr should fail if it cannot add osd
Nope, cannot find any process logs anywhere either; it appears to be just a log print. Deepika Upadhyay
02:27 PM Bug #46134: ceph mgr should fail if it cannot add osd
Hm, strange. Did that host appear after a while? Sebastian Wagner
11:04 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
Recurrent failure observed in Fedora 31 and Ubuntu 18.04 (although after removing the stale cluster, a rerun on Ubuntu s... Deepika Upadhyay
03:43 PM Bug #46132 (Duplicate): cephadm: Failed to add host in cephadm through command 'ceph orch host ad...
Sebastian Wagner
11:45 AM Bug #46132: cephadm: Failed to add host in cephadm through command 'ceph orch host add node1'
Error stack trace::
_Promise failed Traceback (most recent call last): File "/usr/share/ceph/mgr/cephadm/module.py...
Sathvik Vutukuri
02:50 PM Feature #44886: cephadm: allow use of authenticated registry
... Sebastian Wagner
02:23 PM Bug #46206: cephadm: podman 2.0
Looks like cephadm is not Podman 2.0 compatible. Sebastian Wagner
11:57 AM Bug #46206: cephadm: podman 2.0
May be related to https://github.com/ceph/ceph/pull/32995. Patrick Seidensal
11:52 AM Bug #46206 (Rejected): cephadm: podman 2.0
... Patrick Seidensal
10:12 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
... Sebastian Wagner
01:47 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
I managed to recreate the issue. We lock down the caps to just have access to the pool in the config. If I create an ... Matthew Oliver
08:47 AM Bug #46204 (Resolved): cephadm upgrade test: fail if upgrade status is set to error
http://pulpito.ceph.com/swagner-2020-06-25_08:07:18-rados:cephadm-wip-swagner-testing-2020-06-24-1032-distro-basic-sm... Sebastian Wagner

06/24/2020

11:22 PM Bug #46138: mgr/dashboard: Error creating iSCSI target
oh interesting, maybe there's another cap we're missing? We did lock it down some. Let me have a play and attempt to ... Matthew Oliver
07:16 PM Feature #46182 (Resolved): cephadm should use the same image reference across the cluster
The documentation has this warning (from https://docs.ceph.com/docs/octopus/install/containers/#containers):
Impor...
Ken Dreyer
03:58 PM Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
Nathan Cutler wrote:
> If this is caused by the presence of "service_id: SOME_STRING" in the spec yaml, maybe it wou...
Sebastian Wagner
12:20 PM Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
If this is caused by the presence of "service_id: SOME_STRING" in the spec yaml, maybe it would make sense for cephad... Nathan Cutler
11:36 AM Bug #46175 (Resolved): cephadm: orch apply -i: MON and MGR service specs must not have a service_id
service_spec_core.yml... Nathan Cutler
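For context, a minimal spec that avoids the error reported here simply omits the `service_id` line for mon/mgr services; a sketch (the count is illustrative):

```yaml
# Valid: mon/mgr specs carry no service_id
service_type: mon
placement:
  count: 3
```

Adding a `service_id: SOME_STRING` line to such a spec is what triggers this bug.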
01:07 PM Bug #46098 (In Progress): Exception adding host using cephadm
Stephan Müller
12:02 PM Feature #46177 (New): Investigate, if we can run ssh-agent in the MGR container
Is it possible to use password-encrypted SSH keys?
Maybe something like https://stackoverflow.com/questions/468379...
Sebastian Wagner
10:02 AM Documentation #46168 (In Progress): Add information about 'unmanaged' parameter
Juan Miguel Olmo Martínez
09:54 AM Documentation #46168 (Resolved): Add information about 'unmanaged' parameter
Explain parameter 'unmanaged' in OSDs creation and deletion Juan Miguel Olmo Martínez
08:53 AM Bug #46157 (Fix Under Review): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0....
Sebastian Wagner
07:25 AM Bug #45628: cephadm qa: smoke should verify daemons are actually running
I think we should solve this by creating a HEALTH_WARN, if a daemon enters ... Sebastian Wagner

06/23/2020

03:28 PM Bug #46157 (Resolved): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0.0:80: Pe...
http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-sm... Sebastian Wagner
08:53 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
This is not a Dashboard issue because I get the same error when using `gwcli` tool:... Ricardo Marques
04:11 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
https://github.com/ceph/ceph/pull/35709 Zac Dover

06/22/2020

05:12 PM Bug #45343 (Resolved): Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshak...
David Galloway
03:08 PM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
This is similar to the way that Kubernetes does things.
--Sebastian Wagner, Ceph Orchestrators Meeting 22 Jun 2020
Zac Dover
08:48 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
We need more examples of YAML files that users can cut and paste or at least cut, alter, and paste. Zac Dover
08:31 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
affected files:
cephadm/adopt.rst
cephadm/install.rst
mgr/orchestrator.rst
Zac Dover
08:13 AM Documentation #46133 (Resolved): encourage users to apply YAML specs instead of using the CLI
The Ceph orchestrator has two main ways to interact on the command line: the CLI and YAML specs.
Turns out, the CLI ...
Sebastian Wagner
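The YAML-first workflow this ticket advocates can be sketched as follows; the spec contents and file name are illustrative, not taken from the affected docs:

```yaml
# node-exporter.yaml -- applied via: ceph orch apply -i node-exporter.yaml
service_type: node-exporter
placement:
  host_pattern: '*'
```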
12:47 PM Bug #44746 (Closed): cephadm: vstart.sh --cephadm: don't deploy crash by default
Sebastian Wagner
12:46 PM Bug #45167: cephadm: mons are not properly deployed
Might be fixed by https://github.com/ceph/ceph/pull/35651 Sebastian Wagner
10:48 AM Bug #46138 (Resolved): mgr/dashboard: Error creating iSCSI target
On the latest `master` (pacific), I get the following error when trying to create an iSCSI target in Dashboard:
!2...
Ricardo Marques
08:24 AM Bug #46134 (Can't reproduce): ceph mgr should fail if it cannot add osd
Strangely, after copying the SSH keys to the remote host, when a new OSD is added, the process executes without failure.
...
Deepika Upadhyay
07:43 AM Documentation #45820 (Resolved): create OSDs doc refer to --use-all-devices
Sebastian Wagner
07:43 AM Documentation #45865 (Resolved): cephadm: The service spec documentation is lacking important inf...
Sebastian Wagner
07:30 AM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
only happens with MGRs Sebastian Wagner
07:04 AM Bug #46132: cephadm: Failed to add host in cephadm through command 'ceph orch host add node1'
Please, someone look into this issue. Sathvik Vutukuri
07:03 AM Bug #46132 (Duplicate): cephadm: Failed to add host in cephadm through command 'ceph orch host ad...
Failed to add host in cephadm (octopus release) through command 'ceph orch host add node1'
I am trying to add ceph...
Sathvik Vutukuri

06/21/2020

07:09 PM Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
This is a "cephadm/orchestrator" issue, and the backport is being handled as such, so moving it to that project. Nathan Cutler

06/19/2020

10:02 PM Bug #46098: Exception adding host using cephadm
lol, just discovered this myself. I confirm that the suggested fix is appropriate. Dan Mick
08:17 AM Bug #46098 (Triaged): Exception adding host using cephadm
Sebastian Wagner
04:43 AM Bug #46098: Exception adding host using cephadm
Typo in 'Environment' section. 15.2.3 not 15.2.2 Mark Kirkwood
03:21 AM Bug #46098 (Resolved): Exception adding host using cephadm
After bootstrapping 1st host using cephadm, attempting to add another host fails with an exception (variable referenc... Mark Kirkwood
01:14 PM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
I'm starting to suspect that this comes from a race between the host refresh and the scheduler, which starts to create new daemo... Sebastian Wagner
10:31 AM Bug #45973 (Fix Under Review): Adopted MDS daemons are removed by the orchestrator because they'r...
Sebastian Wagner
08:41 AM Bug #46103 (Duplicate): Restart service command restarts all the services and accepts service typ...
... Varsha Rao

06/18/2020

10:03 PM Bug #46097 (Won't Fix): package mode has a hardcoded ssh user
The package mode SSH user is hardcoded to 'cephadm'.
Daniel Pivonka
08:25 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
Michael Fritch wrote:
> I think there is some confusion on the `orch` cli commands.
>
> `orch ps` will list the c...
Patrick Donnelly
07:29 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
I think there is some confusion on the `orch` cli commands.
`orch ps` will list the cephadm daemons, whereas `orch...
Michael Fritch
05:14 PM Documentation #46082 (Can't reproduce): cephadm: deleting (mds) service doesn't work?
... Patrick Donnelly
02:14 PM Bug #46036 (Fix Under Review): cephadm: killmode=none: systemd units failed, but containers still...
Sebastian Wagner
01:15 PM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
Machine is amd64 in virtualbox on Windows Reinhard Eilmsteiner
12:53 PM Documentation #46073 (Can't reproduce): cephadm install fails: apt:stderr E: Unable to locate pac...
When following the installation guide on https://ceph.readthedocs.io/en/latest/cephadm/install/ I ran cephadm install... Reinhard Eilmsteiner
12:36 PM Feature #44875 (Fix Under Review): mgr/rook: PlacementSpec to K8s POD scheduling conversion
Sebastian Wagner
11:41 AM Bug #45155 (Pending Backport): mgr/dashboard: Error listing orchestrator NFS daemons
Lenz Grimmer
08:59 AM Documentation #46052 (Fix Under Review): Module 'cephadm' has failed: DaemonDescription: Cannot c...
Sebastian Wagner
07:55 AM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
Sebastian Wagner wrote:
> the correct call is
>
> [...]
>
> which documentation / example did you use for this...
Andy Gold
06:42 AM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
Seems to work once I've applied https://github.com/ceph/ceph/pull/35633 and added the --ipv6 to bootstrap. Matthew Oliver
01:09 AM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
Aha! Solved it. We bind the mon to IPv6 (::1); in reality its messenger is bound to ::1, however the mgr is still bindin... Matthew Oliver

06/17/2020

08:19 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
I wound up getting around this by using an Ansible role in which this worked successfully. You can feel free to close... Dan Skaggs
01:46 PM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
the correct call is... Sebastian Wagner
01:40 PM Documentation #46052 (Resolved): Module 'cephadm' has failed: DaemonDescription: Cannot calculate...
ceph version 15.2.3
using Cephadm...
Andy Gold
11:45 AM Bug #45097 (Resolved): cephadm: UX: Traceback, if `orch host add mon1` fails.
Kefu Chai
11:10 AM Bug #46045 (Resolved): qa/tasks/cephadm: Module 'dashboard' is not enabled error
http://qa-proxy.ceph.com/teuthology/kchai-2020-06-17_08:41:50-rados-wip-kefu-testing-2020-06-17-1349-distro-basic-smi... Varsha Rao
09:46 AM Feature #46044 (Resolved): cephadm: Distribute admin keyring.
This is similar to the ceph.conf, but more complicated.
Maybe use a placement spec? ...
Sebastian Wagner
09:40 AM Bug #46037: ceph orch command hangs forever when trying to add osd
`daemon add` violates https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers .... Sebastian Wagner
08:42 AM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user
Sebastian Wagner

06/16/2020

09:58 PM Feature #44866 (In Progress): cephadm root mode: support non-root users + sudo
Daniel Pivonka
09:58 PM Feature #45653 (In Progress): cephadm: Improve safety by using a specific user
Daniel Pivonka
03:33 PM Bug #46038 (Closed): cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcf...
command used:... Deepika Upadhyay
03:10 PM Bug #46037 (Can't reproduce): ceph orch command hangs forever when trying to add osd
after bootstrapping the cephadm cluster when we login to ceph shell, ... Deepika Upadhyay
02:26 PM Bug #46036: cephadm: killmode=none: systemd units failed, but containers still running
https://github.com/ceph/ceph/pull/35524 is part of the solution. the other part is adding a @set -e@ Sebastian Wagner
02:23 PM Bug #46036 (Resolved): cephadm: killmode=none: systemd units failed, but containers still running
... Sebastian Wagner
11:08 AM Feature #45859 (Fix Under Review): cephadm: use fixed versions
Patrick Seidensal
10:59 AM Feature #45859 (In Progress): cephadm: use fixed versions
Patrick Seidensal
10:49 AM Bug #45594: cephadm: weight of a replaced OSD is 0
The initial weight is never restored after `draining` the OSDs.
We can save the initial weight/reweight and reset ...
Joshua Schmid
09:53 AM Bug #46031 (Resolved): Exception: Failed to validate Drive Group: block_wal_size must be of type int
... Sebastian Wagner
04:42 AM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
I'll have a poke around and see if I can get this unblocked so we can continue your IPv6 adventure :) Matthew Oliver

06/15/2020

09:24 PM Bug #45999 (Fix Under Review): cephadm shell: picking up legacy_dir
Michael Fritch
09:16 PM Bug #45999 (In Progress): cephadm shell: picking up legacy_dir
Michael Fritch
03:27 PM Bug #45999 (Resolved): cephadm shell: picking up legacy_dir
... Deepika Upadhyay
02:02 PM Feature #45378 (In Progress): cephadm: manage /etc/ceph/ceph.conf
Sebastian Wagner
11:13 AM Feature #45996 (New): adopted prometheus instance uses port 9095, regardless of original port number
When adopting prometheus (@cephadm adopt --style legacy --name prometheus.HOSTNAME@), the new prometheus daemon start... Tim Serong
11:01 AM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
We have the same problem with adopted prometheus instances (I adopted one, it was working fine for a few minutes, the... Tim Serong
08:54 AM Documentation #45977: cephadm: Improve Service removal docs
Yes, it worked when I entered the command above for every service.
So I deleted every NFS service and daemon and sta...
Simon Sutter

06/12/2020

12:30 PM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
It's not an accident that this is working. OTOH, this behavior needs improvement. Let me think about the chicke... Sebastian Wagner
02:32 AM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
Sebastian Wagner wrote:
> Hm. Isn't this a big flaw in adopt, not just for MDS?
Not in practice so far. The docs...
Tim Serong
06:51 AM Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGr...
DriveGroups allow specifying FileStore; they are not that tightly coupled to cephadm.
I'd argue cephadm should det...
Jan Fajerski
04:36 AM Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGr...
https://docs.ceph.com/docs/master/cephadm/adoption/#limitations says "Cephadm only works with BlueStore OSDs. If ther... Tim Serong
06:18 AM Bug #45155 (Fix Under Review): mgr/dashboard: Error listing orchestrator NFS daemons
Kiefer Chang
06:14 AM Feature #45982 (Resolved): mgr/cephadm: remove or update Dashboard settings after daemons are des...
When these services are deployed, cephadm calls Dashboard's command to set settings to make features available in the... Kiefer Chang

06/11/2020

06:19 PM Bug #45097 (In Progress): cephadm: UX: Traceback, if `orch host add mon1` fails.
Adam King
04:57 PM Bug #45980 (Resolved): cephadm: implement missing "FileStore not supported" error message and upd...
This one is easy to reproduce.
Ask cephadm to create FileStore OSDs:...
Nathan Cutler
03:38 PM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
Hm. Isn't this a big flaw in adopt, not just for MDS?
We might need to apply something like this before adopting...
Sebastian Wagner
10:13 AM Bug #45973 (Rejected): Adopted MDS daemons are removed by the orchestrator because they're orphans
The "docs":https://docs.ceph.com/docs/master/cephadm/adoption/ say that when converting to cephadm, one needs to rede... Tim Serong
03:22 PM Bug #45976: cephadm: prevent rm-daemon from removing legacy daemons
we should probably prevent *legacy* daemons from being removed altogether.
workarounds:
* Either adopt to a ce...
Sebastian Wagner
01:10 PM Bug #45976 (Duplicate): cephadm: prevent rm-daemon from removing legacy daemons

cephadm displays a daemon when none could be found at /var/lib/ceph/unknown/osd.0...
Deepika Upadhyay
03:19 PM Documentation #45977: cephadm: Improve Service removal docs
... Sebastian Wagner
01:20 PM Documentation #45977 (Resolved): cephadm: Improve Service removal docs
First of all, yes I know, nfs under ceph orch is still under development but I couldn't find any information about th... Simon Sutter
12:03 PM Feature #44055 (New): cephadm: make 'ls' faster
I think a requirement for future refactoring here is:
* having very good pytest coverage with example outpu...
Sebastian Wagner
10:38 AM Cleanup #45321 (Fix Under Review): Service spec: unify `spec:` vs omitting `spec:`
Sebastian Wagner

06/10/2020

02:40 PM Bug #44746: cephadm: vstart.sh --cephadm: don't deploy crash by default
might be fixed https://github.com/ceph/ceph/pull/35472 Sebastian Wagner
12:29 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
https://github.com/ceph/ceph/pull/35524 Sebastian Wagner
 
