Activity
From 06/17/2020 to 07/16/2020
07/16/2020
- 08:18 PM Bug #46578 (Duplicate): Container for iscsi gateway does not have tcmu-runner running as service
- This link describes the problem: https://tracker.ceph.com/issues/46540
In short, the container for iSCSI does not have tc...
- 05:05 PM Bug #45724 (Resolved): check-host should not fail using fqdn or not that hard
- 03:58 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- I guess it will also help for initial cluster bootstrap.
Because the bootstrap workflow also uses the default value...
- 03:03 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- Via an environment variable like we have for CEPHADM_IMAGE, or a dedicated parameter (like --image); both are fine for me.
- 02:55 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- Hm, I honestly don't know if @cephadm adopt@ has the necessary privileges to access the config store. In any case, we'r...
- 12:55 AM Bug #46561 (New): cephadm: monitoring services adoption doesn't honor the container image
- When running the `cephadm adopt` command against monitoring services, the container image set via [1] isn't honored c...
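For illustration, a hedged sketch of the config key involved and a possible workaround; the mgr config key exists in the Octopus cephadm module, while passing the image to adopt via the global --image flag is an assumption, not confirmed behavior:

    # Pin the Prometheus image via the cephadm mgr module config (key exists in Octopus):
    ceph config set mgr mgr/cephadm/container_image_prometheus docker.io/prom/prometheus:v2.18.1
    # Assumed workaround: pass the image explicitly when adopting the legacy daemon
    # (daemon name is illustrative):
    cephadm --image docker.io/prom/prometheus:v2.18.1 adopt --style legacy --name prometheus.host1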
- 02:18 PM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- https://pulpito.ceph.com/gkyratsas-2020-07-16_12:23:41-rados:cephadm:-wip-octopus-backport-swagner-testing-36109-dist...
- 10:20 AM Bug #46568 (Can't reproduce): cephadm: Sometimes setting global container_image does not work
- Here container_image is set before enabling cephadm....
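A hedged sketch of the sequence in question, with an illustrative image reference:

    # Set the default container image globally, then enable the orchestrator backend:
    ceph config set global container_image docker.io/ceph/ceph:v15.2.4
    ceph mgr module enable cephadm
    ceph orch set backend cephadm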
- 09:03 AM Bug #46233 (Resolved): cephadm: Add "--format" option to "ceph orch status"
- 04:37 AM Bug #46329: cephadm: Dashboard's ganesha option is not correct if there are multiple NFS daemons
- The backport is included in https://github.com/ceph/ceph/pull/36109
07/15/2020
- 10:13 PM Bug #46098: Exception adding host using cephadm
- @Stephan Müller, I'd suggest starting with some freshly built VMs (mine were Ubuntu 18.04). Optionally set up the Cep...
- 02:23 PM Bug #46098 (Need More Info): Exception adding host using cephadm
- I was not yet able to reproduce it. (Tried a lot of things.)
I added new hosts to bootstrapped clusters, removed a...
- 09:08 PM Bug #46560 (Resolved): cephadm: assigns invalid id to daemons
- ...
- 06:20 PM Bug #46558 (Resolved): cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- When creating an OSD spec for using dedicated devices for either DB and/or WAL bluestore devices, we can't use the pa...
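A hedged sketch of the kind of spec the report concerns; once paths is honored for db_devices, something like the following should pin DB volumes to the listed device (device names and placement are illustrative):

    cat > osd_spec.yml <<EOF
    service_type: osd
    service_id: osd_with_dedicated_db
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 1        # spinning disks carry the data
    db_devices:
      paths:
        - /dev/nvme0n1     # the attribute the report says is currently ignored
    EOF
    ceph orch apply osd -i osd_spec.yml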
- 02:14 PM Tasks #46551 (Resolved): cephadm: Add a better hint how to add a host
- Currently:...
- 12:17 PM Support #46547 (Resolved): cephadm: Exception adding host via FQDN if host was already added
- To reproduce you need nodes that have a subdomain (not like in current Vagrantfile). I used sesdev to find this issue...
- 11:19 AM Documentation #46546 (Resolved): doc/cephadm: "For each file system" is redundant in cephadm adop...
- $subject
- 08:44 AM Bug #46541: cephadm: OSD is marked as unmanaged in cephadm deployed cluster
- Yep, this is due to the fact that we're missing some annotation to mark "unknown" services. Marking them as "unmanage...
- 12:29 AM Bug #46541 (New): cephadm: OSD is marked as unmanaged in cephadm deployed cluster
- I followed the cephadm instructions (https://docs.ceph.com/docs/master/cephadm/install/#deploy-osds) on deploying a n...
07/14/2020
- 10:47 PM Bug #46540 (Resolved): cephadm: iSCSI gateways problems.
- After installing a clean cluster based on CentOS 8.2 minimal and podman, and adding iSCSI gateways, tcmu-runner only ...
- 01:23 PM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- G. Heinrich wrote:
> Update:
> I did some additional tests and after some digging I found something in /var/log/sys...
- 06:56 AM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Update:
I did some additional tests and after some digging I found something in /var/log/syslog:...
- 01:22 PM Bug #46534 (Fix Under Review): cephadm podman pull: Digest did not match
- 01:07 PM Bug #46534 (Resolved): cephadm podman pull: Digest did not match
- https://pulpito.ceph.com/swagner-2020-07-14_12:03:52-rados:cephadm-wip-swagner-testing-2020-07-14-1125-distro-basic-s...
- 11:11 AM Bug #46036 (Pending Backport): cephadm: killmode=none: systemd units failed, but containers still...
- 11:08 AM Bug #45155 (Closed): mgr/dashboard: Error listing orchestrator NFS daemons
- 11:02 AM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- Nathan Cutler wrote:
> I noticed that "cephadm shell" infers the FSID (I only have one cluster running on this set...
- 10:57 AM Feature #44886 (Fix Under Review): cephadm: allow use of authenticated registry
- 10:57 AM Feature #44886: cephadm: allow use of authenticated registry
- Denys Kondratenko wrote:
> should registry management and authentication be handled on cri-o level by system admin o...
- 10:51 AM Bug #45631 (Closed): Error parsing image configuration: Invalid status code returned when fetchin...
- 10:39 AM Bug #46529 (Resolved): cephadm: error removing storage for container "...-mon": remove /var/lib/c...
- /a/teuthology-2020-07-12_07:01:02-rados-master-distro-basic-smithi/5217488 on centos_7.6...
- 10:26 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- yep, that's a different issue:...
07/13/2020
- 09:19 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Sebastian, I am seeing similar failures in rados/thrash-old-clients on recent master; can you please confirm if they...
- 12:54 PM Bug #45999 (Resolved): cephadm shell: picking up legacy_dir
- 11:57 AM Feature #46499 (Rejected): Requesting a "ceph orch redeploy monitoring" command, as an option, so...
- It recently came to my attention that the process of updating the monitoring stack to the current latest version will...
- 10:18 AM Bug #46497 (Resolved): cephadm: prevent colon character for service_ids.
- https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LL7N4K5RBG2E7P7MDTVC66U2ZHODJEFA/...
- 05:17 AM Bug #46327: cephadm: nfs daemons share the same config object
- Thanks Patrick and Michael for the comments.
To adapt to the new design, I've created some new issues:
* #46493 - Nee...
07/10/2020
- 04:40 PM Tasks #46376 (Fix Under Review): cephadm: Make vagrant usage more comfortable
- 04:05 PM Bug #46453 (Can't reproduce): cephadm: iSCSI container fails to start
- Deploying an iSCSI service doesn't fail...
- 09:23 AM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
- Attached is the screenshot after running the "radosgw-admin realm create --rgw-realm=myorg --default" command in the Ceph shell,...
07/09/2020
- 02:04 PM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Can we get the Priority set to something higher, please?
It's not affecting the cluster, but it does cause the clu...
- 04:48 AM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Andy Gold wrote:
> Any news on this? I have a new 15.2.4 cluster and it's failing.
Any update on this? I also hav...
- 05:16 AM Bug #46429 (Closed): cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Podman had a major new release recently and it seems that cephadm cannot bootstrap a new cluster because of it.
I in...
- 03:18 AM Bug #46327: cephadm: nfs daemons share the same config object
- Patrick Donnelly wrote:
> Michael Fritch wrote:
> > > Kiefer Chang wrote:
> > > > This causes a regression in the ...
- 02:27 AM Bug #46327: cephadm: nfs daemons share the same config object
- Michael Fritch wrote:
> > Kiefer Chang wrote:
> > > This causes a regression in the Dashboard. RADOS objects are de...
- 02:14 AM Bug #46327: cephadm: nfs daemons share the same config object
- Patrick Donnelly wrote:
> Hi Kiefer, I left some similar comments on your slide deck but also will say here:
> ...
- 01:32 AM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
- Sathvik Vutukuri wrote:
> Sathvik Vutukuri wrote:
> > I have tried cephadm for Ceph installation, but am unable to ru...
- 01:31 AM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
- Sathvik Vutukuri wrote:
> I have tried cephadm for Ceph installation, but am unable to run the Ceph RADOS GW and S3-compat...
07/08/2020
- 09:06 PM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
- RGW deployment is supported - see https://ceph.readthedocs.io/en/latest/cephadm/install/#deploy-rgws
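A hedged sketch of the documented flow, with illustrative realm/zone names and placement:

    radosgw-admin realm create --rgw-realm=myorg --default
    radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default
    ceph orch apply rgw myorg us-east-1 --placement="1 host1"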
- 08:11 PM Bug #45724 (Fix Under Review): check-host should not fail using fqdn or not that hard
- 05:23 PM Bug #46327: cephadm: nfs daemons share the same config object
- Hi Kiefer, I left some similar comments on your slide deck but also will say here:
Kiefer Chang wrote:
> This cau...
- 12:20 PM Bug #46329 (Pending Backport): cephadm: Dashboard's ganesha option is not correct if there are mu...
- 09:19 AM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
- This is a bug in the documentation only.
- 09:19 AM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
- Reinhard Eilmsteiner wrote:
> Machine is amd64 in virtualbox on Windows
The instructions are missing the step of adding the apt-rep...
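The truncated step presumably concerns the Ceph apt repository; a hedged sketch of the usual commands from the install docs (release and codename are illustrative):

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo "deb https://download.ceph.com/debian-octopus/ $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update
    sudo apt-get install -y cephadm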
- 06:11 AM Bug #46412 (Can't reproduce): cephadm trying to pull mimic based image
- ...
07/07/2020
- 08:24 PM Support #46384: 15.2.4 and cephadm - mds not starting
- Here is how I do it:
https://docs.ceph.com/docs/master/cephfs/fs-volumes/#fs-volumes
"This creates a CephFS fil... - 10:49 AM Support #46384 (Resolved): 15.2.4 and cephadm - mds not starting
- 10:49 AM Support #46384 (Resolved): 15.2.4 and cephadm - mds not starting
- I have a new install of Octopus, 15.2.4, and am using cephadm to manage it. The MDS daemons are not being created.
$...
- 03:41 PM Bug #46398 (Fix Under Review): cephadm: can't use custom prometheus image
- 12:08 PM Bug #46398 (In Progress): cephadm: can't use custom prometheus image
- 12:08 PM Bug #46398 (Resolved): cephadm: can't use custom prometheus image
- ...
- 01:44 PM Bug #46206 (Rejected): cephadm: podman 2.0
- This was a bug in Podman v2.0.0 and has been fixed in Podman v2.0.1.
> Fixed a bug where the --privileged flag had...
- 01:24 PM Bug #46206: cephadm: podman 2.0
- Removing changes from commit "b5e5c753":https://github.com/ceph/ceph/commit/b5e5c753f415ab1f18ccfe3ad636649a0f51a93a ...
- 12:55 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- I am still providing additional input and requesting that, if this behavior remains as is, a way to stop the process...
- 09:00 AM Bug #45792 (Fix Under Review): cephadm: zapped OSD gets re-added to the cluster.
- As this is the intended behavior, this needs to be documented. See https://github.com/ceph/ceph/pull/35744
- 12:38 PM Feature #45859 (Pending Backport): cephadm: use fixed versions
- 11:47 AM Bug #46396 (New): osdspec/drivegroup: check for 'intersection' in DriveSelection when multiple OS...
- We do allow the use of multiple OSDSpecs explicitly. However, misconfiguration can lead to unwanted behavior.
A go...
- 10:54 AM Bug #46385 (Duplicate): Can i run rados and s3 compatible object storage device in Cephadm?
- I have tried cephadm for Ceph installation, but am unable to run the Ceph RADOS GW and S3-compatible object storage.
Ho...
- 09:05 AM Bug #45861 (Fix Under Review): data_devices: limit 3 deployed 6 osds per node
- 09:03 AM Bug #45672 (Can't reproduce): Unable to add additional hosts to cluster using cephadm
- thanks, closing
- 08:47 AM Bug #44756 (In Progress): drivegroups: replacement op will ignore existing wal/dbs
- fixed with https://github.com/ceph/ceph/pull/34740
- 08:23 AM Bug #45980 (In Progress): cephadm: implement missing "FileStore not supported" error message and ...
- 08:22 AM Bug #46231 (Pending Backport): translate.to_ceph_volume: no need to pass the drive group
- 08:15 AM Bug #45172 (Resolved): bin/cephadm: logs: Traceback: not enough values to unpack (expected 2, got 1)
- 08:14 AM Feature #45263 (Fix Under Review): osdspec/drivegroup: not enough filters to define layout
07/06/2020
- 03:46 PM Bug #45872 (Fix Under Review): ceph orch device ls exposes the `device_id` under the DEVICES colu...
- 03:33 PM Documentation #46377 (Resolved): cephadm: Missing 'service_id' in last example in orchestrator#se...
- Missing 'service_id' in the last example in orchestrator#service-specification. The example can be found right above https://...
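A hedged sketch of a spec carrying the field the docs example omits (service type and placement are illustrative):

    cat > mds_spec.yml <<EOF
    service_type: mds
    service_id: myfs        # the missing field in question
    placement:
      count: 2
    EOF
    ceph orch apply -i mds_spec.yml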
- 03:28 PM Tasks #46376 (Resolved): cephadm: Make vagrant usage more comfortable
- Currently the vagrant setup only supports one big scale factor: you can have x * (mgr, mon, osd with 2 disks)....
- 02:40 PM Feature #44886: cephadm: allow use of authenticated registry
- should registry management and authentication be handled on cri-o level by system admin or maybe by cephadm as helper...
- 02:24 PM Bug #44888 (Fix Under Review): Drivegroup's :limit: isn't working correctly
- 01:41 PM Bug #46327 (New): cephadm: nfs daemons share the same config object
- Reopening, as this change introduces a regression and will potentially break upgrades.
- 07:03 AM Bug #46327: cephadm: nfs daemons share the same config object
- This causes a regression in the Dashboard. RADOS objects are designed to work with multiple daemons and each daemon h...
07/03/2020
- 04:03 PM Tasks #46352 (Won't Fix): add leap support for cephadm
- Currently, the build scripts for ceph-container are designed for shipping CentOS 8 based images,
as it would be much...
- 01:00 PM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- I got it to work like this:...
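The commenter's exact steps are truncated; a hedged sketch of the container-entry workaround named in the issue title (the daemon name is illustrative):

    # Enter the daemon's container; cephadm infers the FSID if only one cluster is present:
    cephadm enter --name mon.host1
    # Inside the container, the admin socket is reachable as usual:
    ceph daemon mon.host1 mon_status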
- 10:01 AM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Any news on this? I have a new 15.2.4 cluster and it's failing.
- 08:32 AM Documentation #46335: Document "Using cephadm to set up rgw-nfs"
- the building blocks are there: setting up ganesha, setting up the RADOS objects, https://docs.ceph.com/docs/master/ra...
- 08:30 AM Documentation #46335 (Resolved): Document "Using cephadm to set up rgw-nfs"
- * cephadm doesn't care about exports; instead, it simply sets up the daemons.
* cephadm only creates an empty 'conf-{...
- 05:32 AM Bug #46329 (Fix Under Review): cephadm: Dashboard's ganesha option is not correct if there are mu...
07/02/2020
- 02:39 PM Feature #45263 (In Progress): osdspec/drivegroup: not enough filters to define layout
- 01:54 PM Feature #45263: osdspec/drivegroup: not enough filters to define layout
- This patch allows switching between `AND` and `OR` gating. https://github.com/ceph/ceph/compare/master...jschmid1:dri...
- 01:50 PM Feature #45203 (Resolved): OSD Spec: allow filtering via explicit hosts and labels
- 08:28 AM Bug #46327 (Rejected): cephadm: nfs daemons share the same config object
- Not a bug: by design, all daemons within a cluster share the same config object.
- 08:02 AM Bug #46327 (Won't Fix): cephadm: nfs daemons share the same config object
- If we create an NFS service with multiple instances, those instances share the same RADOS object as the configuration s...
- 08:09 AM Bug #46329 (Resolved): cephadm: Dashboard's ganesha option is not correct if there are multiple N...
- How to reproduce:
* Create an NFS service with multiple daemons, e.g. with the following spec:...
07/01/2020
- 11:47 PM Feature #44866 (Pending Backport): cephadm root mode: support non-root users + sudo
- 11:47 PM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
- 08:58 AM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
- Right, I was able to create an iSCSI target on pacific/master:...
06/30/2020
- 11:20 PM Bug #46283: cephadm: Unable to create iSCSI target
- Maybe this PR (https://github.com/ceph/ceph/pull/35141) hasn't been backported into octopus yet.
- 05:58 PM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
- I'm getting an error when trying to create an iSCSI target on openSUSE Leap.
*How to reproduce:*
I've created a...
- 05:29 PM Bug #46237 (New): cephadm: Inconsistent exit code
- 01:56 PM Bug #46237 (In Progress): cephadm: Inconsistent exit code
- 09:36 AM Support #45940 (Need More Info): Orchestrator to be able to deploy multiple OSDs per single drive
- 09:36 AM Support #45940: Orchestrator to be able to deploy multiple OSDs per single drive
- Does https://docs.ceph.com/docs/master/cephadm/drivegroups/#the-advanced-case work for you?
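A hedged sketch of the "advanced case" linked above (all values illustrative):

    cat > osd_spec.yml <<EOF
    service_type: osd
    service_id: multiple_osds_per_device
    placement:
      host_pattern: '*'
    data_devices:
      rotational: 0          # e.g. target NVMe devices
    osds_per_device: 2       # deploy two OSDs on each matched device
    EOF
    ceph orch apply osd -i osd_spec.yml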
- 09:25 AM Bug #46271 (Resolved): podman pull: transient "Error: error creating container storage: error cre...
- ...
- 08:03 AM Bug #44990 (Pending Backport): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no suc...
- This was fixed in master.
- 01:15 AM Bug #46175 (Fix Under Review): cephadm: orch apply -i: MON and MGR service specs must not have a ...
- 01:14 AM Bug #46268 (Fix Under Review): cephadm: orch apply -i: RGW service spec id might not contain a zone
- 01:10 AM Bug #46268 (Resolved): cephadm: orch apply -i: RGW service spec id might not contain a zone
- rgw.yaml...
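The attached rgw.yaml is truncated; a hedged sketch of an RGW spec using the conventional realm.zone service_id (names are illustrative):

    cat > rgw.yml <<EOF
    service_type: rgw
    service_id: myrealm.myzone
    rgw_realm: myrealm
    rgw_zone: myzone
    placement:
      count: 1
    EOF
    ceph orch apply -i rgw.yml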
06/29/2020
- 08:03 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-06-29_16:59:21-rados-octopus-distro-basic-smithi/5189862
- 06:40 PM Feature #46265 (Duplicate): test cephadm MDS deployment
- Right now, the test is broken.
The workaround is to apply it manually: https://github.com/ceph/ceph/blob/cedf2bbd13daba...
- 03:50 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- Sebastian Wagner wrote:
> I think this is actually the correct behavior!
How is it the correct behaviour? If it i...
- 12:23 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- I think this is actually the correct behavior!
- 03:06 PM Bug #46256 (Need More Info): OSDs are getting re-added to the cluster, despite unmanaged=True
- 03:06 PM Bug #46256 (Can't reproduce): OSDs are getting re-added to the cluster, despite unmanaged=True
- 01:10 PM Bug #46138 (Pending Backport): mgr/dashboard: Error creating iSCSI target
- 01:05 PM Bug #46254 (Can't reproduce): cephadm upgrade test: exit condition is wrong. we have to wait longer
- We're exiting the upgrade loop too early. In this case, the OSD didn't have the time to come up again. ...
- 12:24 PM Bug #45016 (Pending Backport): mgr: `ceph tell mgr mgr_status` hangs
- 12:21 PM Bug #46038 (Can't reproduce): cephadm mon start failure: Failed to reset failed state of unit cep...
- feel free to reopen the issue!
- 12:21 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
- The logs are gone. Maybe we should put the logs into the tracker here.
- 12:16 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- `daemon add` is too low level. If we want commands to be idempotent, we have to stop calling them from cephadm.py.
- 12:14 PM Bug #45167: cephadm: mons are not properly deployed
- Low priority, until it happens again.
- 12:04 PM Tasks #45814 (In Progress): tasks/cephadm.py: Add iSCSI smoke test
- 11:25 AM Bug #46253 (Resolved): OSD specs without service_id
- ...
- 11:07 AM Bug #46252 (Closed): MGRs should get a random identifier, ONLY if we're co-locating MGRs on the s...
- ...
- 10:28 AM Feature #45565: cephadm: A daemon should provide information about itself (e.g. service urls)
- See DaemonDescription's service_url
- 09:04 AM Bug #46245 (Fix Under Review): cephadm: set-ssh-config/clear-ssh-config command doesn't take effe...
- 07:01 AM Bug #46245: cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immediately
- Can you describe your work environment?
- 06:52 AM Bug #46245 (Resolved): cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immed...
- The cephadm module should reload the SSH config when the user sets a new SSH config or clears it.
- 07:14 AM Bug #46247: cephadm mon failure: Error: no container with name or ID ... no such container
- http://qa-proxy.ceph.com/teuthology/ideepika-2020-06-25_18:36:29-rados-wip-deepika-testing-2020-06-25-2058-distro-bas...
- 07:13 AM Bug #46247 (Can't reproduce): cephadm mon failure: Error: no container with name or ID ... no suc...
- ...
06/28/2020
- 12:49 AM Bug #46157 (Resolved): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0.0:80: Pe...
06/26/2020
- 05:49 PM Bug #46233 (Fix Under Review): cephadm: Add "--format" option to "ceph orch status"
- 05:37 PM Bug #46233 (In Progress): cephadm: Add "--format" option to "ceph orch status"
- 04:55 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
- We're talking about a function that is 3 LOCs; having multiple tracker issues for this just feels wrong to me.
- 04:35 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
- Nathan Cutler wrote:
> This is actually two different issues ("--format option missing" and "SSH error does not affect...
- 03:16 PM Bug #46233: cephadm: Add "--format" option to "ceph orch status"
- This is actually two different issues ("--format option missing" and "SSH error does not affect exit status"). Maybe us...
- 03:08 PM Bug #46233 (Resolved): cephadm: Add "--format" option to "ceph orch status"
- ATM it's not possible to specify the output format for "ceph orch status":...
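With the fix merged, the hedged expectation is that the usual --format switch works here like on other orch commands:

    ceph orch status --format json
    ceph orch status --format yaml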
- 04:33 PM Bug #46237 (Won't Fix): cephadm: Inconsistent exit code
- If SSH keys are not available, then the `ceph orch status` return code is zero:...
- 01:20 PM Bug #46231 (Fix Under Review): translate.to_ceph_volume: no need to pass the drive group
- 01:18 PM Bug #46231 (Resolved): translate.to_ceph_volume: no need to pass the drive group
- The interface of translate.to_ceph_volume is needlessly complex as it takes a reference to the drive group, that is p...
- 08:06 AM Cleanup #46219 (Resolved): cephadm: remove DaemonDescription.service_id()
- It just doesn't work out.
@DaemonDescription.service_id()@ is *impossible* to implement, as there is no clear rela...
- 03:41 AM Feature #45654 (Rejected): orchestrator: support OSDs backed by LVM LV/VG
- Sebastian Wagner wrote:
> Is this something we need to improve in ceph-volume?
I don't think so. The origin for t...
- 12:03 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- OK, will remove it :)
06/25/2020
- 11:05 PM Bug #46134: ceph mgr should fail if it cannot add osd
- Nope, I cannot trace any process logs anywhere either; it appears to be just a log print.
- 02:27 PM Bug #46134: ceph mgr should fail if it cannot add osd
- Hm, strange. Did that host appear after a while?
- 11:04 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
- Recurrent failure observed in Fedora 31 and Ubuntu 18.04 (although after removing the stale cluster, a rerun on Ubuntu s...
- 03:43 PM Bug #46132 (Duplicate): cephadm: Failed to add host in cephadm through command 'ceph orch host ad...
- 03:43 PM Bug #46132 (Duplicate): cephadm: Failed to add host in cephadm through command 'ceph orch host ad...
- 11:45 AM Bug #46132: cephadm: Failed to add host in cephadm through command 'ceph orch host add node1'
- Error stack trace:
_Promise failed Traceback (most recent call last): File "/usr/share/ceph/mgr/cephadm/module.py...
- 02:50 PM Feature #44886: cephadm: allow use of authenticated registry
- ...
- 02:23 PM Bug #46206: cephadm: podman 2.0
- Looks like cephadm is not Podman 2.0 compatible.
- 11:57 AM Bug #46206: cephadm: podman 2.0
- May be related to https://github.com/ceph/ceph/pull/32995.
- 11:52 AM Bug #46206 (Rejected): cephadm: podman 2.0
- ...
- 10:12 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- ...
- 01:47 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- I managed to recreate the issue. We lock down the caps to just have access to the pool in the config. If I create an ...
- 08:47 AM Bug #46204 (Resolved): cephadm upgrade test: fail if upgrade status is set to error
- http://pulpito.ceph.com/swagner-2020-06-25_08:07:18-rados:cephadm-wip-swagner-testing-2020-06-24-1032-distro-basic-sm...
06/24/2020
- 11:22 PM Bug #46138: mgr/dashboard: Error creating iSCSI target
- oh interesting, maybe there's another cap we're missing? We did lock it down some. Let me have a play and attempt to ...
- 07:16 PM Feature #46182 (Resolved): cephadm should use the same image reference across the cluster
- The documentation has this warning (from https://docs.ceph.com/docs/octopus/install/containers/#containers):
Impor...
- 03:58 PM Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- Nathan Cutler wrote:
> If this is caused by the presence of "service_id: SOME_STRING" in the spec yaml, maybe it wou...
- 12:20 PM Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- If this is caused by the presence of "service_id: SOME_STRING" in the spec yaml, maybe it would make sense for cephad...
- 11:36 AM Bug #46175 (Resolved): cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- service_spec_core.yml...
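A hedged sketch of specs that should pass validation, i.e. MON and MGR without any service_id (placement counts are illustrative):

    cat > core_specs.yml <<EOF
    service_type: mon
    placement:
      count: 3
    ---
    service_type: mgr
    placement:
      count: 2
    EOF
    ceph orch apply -i core_specs.yml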
- 01:07 PM Bug #46098 (In Progress): Exception adding host using cephadm
- 12:02 PM Feature #46177 (New): Investigate, if we can run ssh-agent in the MGR container
- Is it possible to use password encrypted SSH keys?
Maybe something like https://stackoverflow.com/questions/468379...
- 10:02 AM Documentation #46168 (In Progress): Add information about 'unmanaged' parameter
- 09:54 AM Documentation #46168 (Resolved): Add information about 'unmanaged' parameter
- Explain the 'unmanaged' parameter in OSD creation and deletion.
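A hedged sketch of the parameter to be documented; with unmanaged set, cephadm should stop creating OSDs for the spec automatically (spec values are illustrative):

    cat > osd_spec.yml <<EOF
    service_type: osd
    service_id: default_drive_group
    unmanaged: true          # the flag this ticket is about
    placement:
      host_pattern: '*'
    data_devices:
      all: true
    EOF
    ceph orch apply osd -i osd_spec.yml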
- 08:53 AM Bug #46157 (Fix Under Review): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0....
- 07:25 AM Bug #45628: cephadm qa: smoke should verify daemons are actually running
- I think we should solve this by raising a HEALTH_WARN if a daemon enters ...
06/23/2020
- 03:28 PM Bug #46157 (Resolved): cephadm upgrade test is broken: RGW: failed to bind address 0.0.0.0:80: Pe...
- http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-sm...
- 08:53 AM Bug #46138: mgr/dashboard: Error creating iSCSI target
- This is not a Dashboard issue because I get the same error when using `gwcli` tool:...
- 04:11 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- https://github.com/ceph/ceph/pull/35709
06/22/2020
- 05:12 PM Bug #45343 (Resolved): Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshak...
- 03:08 PM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- This is similar to the way that Kubernetes does things.
--Sebastian Wagner, Ceph Orchestrators Meeting 22 Jun 2020
- 08:48 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- We need more examples of YAML files that users can cut and paste or at least cut, alter, and paste.
- 08:31 AM Documentation #46133: encourage users to apply YAML specs instead of using the CLI
- Affected files:
cephadm/adopt.rst
cephadm/install.rst
mgr/orchestrator.rst
- 08:13 AM Documentation #46133 (Resolved): encourage users to apply YAML specs instead of using the CLI
- Ceph orchestrator has two main ways to interact on the command line: the CLI and YAML specs.
It turned out, the CLI ...
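A hedged illustration of the two styles for the same service (values are illustrative):

    # CLI style:
    ceph orch apply mds myfs --placement="2"
    # Equivalent YAML spec style:
    cat > mds.yml <<EOF
    service_type: mds
    service_id: myfs
    placement:
      count: 2
    EOF
    ceph orch apply -i mds.yml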
- 12:47 PM Bug #44746 (Closed): cephadm: vstart.sh --cephadm: don't deploy crash by default
- 12:46 PM Bug #45167: cephadm: mons are not properly deployed
- Might be fixed by https://github.com/ceph/ceph/pull/35651
- 10:48 AM Bug #46138 (Resolved): mgr/dashboard: Error creating iSCSI target
- On the latest `master` (pacific), I get the following error when trying to create an iSCSI target in Dashboard:
!2...
- 08:24 AM Bug #46134 (Can't reproduce): ceph mgr should fail if it cannot add osd
- Strangely, after copying the SSH keys to the remote host, when a new OSD is added, the process executes without failure.
...
- 07:43 AM Documentation #45820 (Resolved): create OSDs doc refer to --use-all-devices
- 07:43 AM Documentation #45865 (Resolved): cephadm: The service spec documentation is lacking important inf...
- 07:30 AM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- This only happens with MGRs.
- 07:04 AM Bug #46132: cephadm: Failed to add host in cephadm through command 'ceph orch host add node1'
- Please, someone look into this issue.
- 07:03 AM Bug #46132 (Duplicate): cephadm: Failed to add host in cephadm through command 'ceph orch host ad...
- Failed to add host in cephadm (octopus release) through command 'ceph orch host add node1'
I am trying to add ceph...
06/21/2020
- 07:09 PM Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
- This is a "cephadm/orchestrator" issue, and the backport is being handled as such, so moving it to that project.
06/19/2020
- 10:02 PM Bug #46098: Exception adding host using cephadm
- I just discovered this myself; I can confirm that the suggested fix is appropriate.
- 08:17 AM Bug #46098 (Triaged): Exception adding host using cephadm
- 04:43 AM Bug #46098: Exception adding host using cephadm
- Typo in the 'Environment' section: 15.2.3, not 15.2.2.
- 03:21 AM Bug #46098 (Resolved): Exception adding host using cephadm
- After bootstrapping 1st host using cephadm, attempting to add another host fails with an exception (variable referenc...
- 01:14 PM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- I'm starting to suspect that this comes from a race between the host refresh and the scheduler, which starts to create new daemo...
- 10:31 AM Bug #45973 (Fix Under Review): Adopted MDS daemons are removed by the orchestrator because they'r...
- 08:41 AM Bug #46103 (Duplicate): Restart service command restarts all the services and accepts service typ...
- ...
06/18/2020
- 10:03 PM Bug #46097 (Won't Fix): package mode has a hardcoded ssh user
- The package-mode user is hardcoded to 'cephadm'.
- 08:25 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
- Michael Fritch wrote:
> I think there is some confusion on the `orch` cli commands.
>
> `orch ps` will list the c...
- 07:29 PM Documentation #46082: cephadm: deleting (mds) service doesn't work?
- I think there is some confusion on the `orch` cli commands.
`orch ps` will list the cephadm daemons, whereas `orch...
- 05:14 PM Documentation #46082 (Can't reproduce): cephadm: deleting (mds) service doesn't work?
- ...
- 02:14 PM Bug #46036 (Fix Under Review): cephadm: killmode=none: systemd units failed, but containers still...
- 01:15 PM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
- Machine is amd64 in virtualbox on Windows
- 12:53 PM Documentation #46073 (Can't reproduce): cephadm install fails: apt:stderr E: Unable to locate pac...
- When following the installation guide on https://ceph.readthedocs.io/en/latest/cephadm/install/ I ran cephadm install...
- 12:36 PM Feature #44875 (Fix Under Review): mgr/rook: PlacementSpec to K8s POD scheduling conversion
- 11:41 AM Bug #45155 (Pending Backport): mgr/dashboard: Error listing orchestrator NFS daemons
- 08:59 AM Documentation #46052 (Fix Under Review): Module 'cephadm' has failed: DaemonDescription: Cannot c...
- 07:55 AM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
- Sebastian Wagner wrote:
> the correct call is
>
> [...]
>
> which documentation / example did you use for this...
- 06:42 AM Bug #43816: cephadm: Unable to use IPv6 on "cephadm bootstrap"
- Seems to work once I've applied https://github.com/ceph/ceph/pull/35633 and added --ipv6 to bootstrap.
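A hedged sketch of the bootstrap invocation implied by the comment; the --ipv6 flag comes from the PR mentioned above, and the address is illustrative:

    cephadm bootstrap --mon-ip '2001:db8::10' --ipv6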
- 01:09 AM Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
- Aha! Solved it. We bind the mon to IPv6 (::1); in reality, its messenger is bound to ::1, but the mgr is still bindin...
06/17/2020
- 08:19 PM Bug #45672: Unable to add additional hosts to cluster using cephadm
- I wound up getting around this by using an Ansible role in which this worked successfully. You can feel free to close...
- 01:46 PM Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
- The correct call is...
- 01:40 PM Documentation #46052 (Resolved): Module 'cephadm' has failed: DaemonDescription: Cannot calculate...
- ceph version 15.2.3
using Cephadm...
- 11:45 AM Bug #45097 (Resolved): cephadm: UX: Traceback, if `orch host add mon1` fails.
- 11:10 AM Bug #46045 (Resolved): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- http://qa-proxy.ceph.com/teuthology/kchai-2020-06-17_08:41:50-rados-wip-kefu-testing-2020-06-17-1349-distro-basic-smi...
- 09:46 AM Feature #46044 (Resolved): cephadm: Distribute admin keyring.
- This is similar to the ceph.conf, but more complicated.
Maybe use a placement spec? ...
- 09:40 AM Bug #46037: ceph orch command hangs forever when trying to add osd
- `daemon add` violates https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers ....
- 08:42 AM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user