Activity
From 06/29/2020 to 07/28/2020
07/28/2020
- 02:57 PM Feature #46182: cephadm should use the same image reference across the cluster
- I agree. Users won't get it.
- 02:57 PM Feature #44886 (Resolved): cephadm: allow use of authenticated registry
- 02:37 PM Tasks #46551 (In Progress): cephadm: Add a better hint for how to add a host
- 09:04 AM Bug #46726 (Fix Under Review): cephadm: deploying of monitoring images partially broken
- 08:24 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- Seems podman on CentOS 7 is broken?
- 08:22 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- Brad Hubbard wrote:
> /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smit...
- 03:13 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smithi/5224005
/a/yuriw-20...
07/27/2020
- 04:17 PM Bug #46726 (In Progress): cephadm: deploying of monitoring images partially broken
- https://github.com/ceph/ceph-salt/pull/299
- 02:57 PM Bug #46726 (Resolved): cephadm: deploying of monitoring images partially broken
- On a pacific cluster:
Jul 27 16:46:44 master bash[14902]: INFO:cephadm:stat:stderr Error: unable to pull ceph/ceph...
- 03:32 PM Bug #46704: container_linux.go:349: "exec: \"stat\": executable file not found
- funny thing is: it was eventually deployed:...
- 02:51 PM Documentation #45858 (Fix Under Review): `ceph orch status` doesn't show in progress actions
- 02:48 PM Documentation #46377 (Pending Backport): cephadm: Missing 'service_id' in last example in orchest...
- 02:47 PM Documentation #46701 (Fix Under Review): remove `alias ceph='cephadm shell -- ceph'`
07/24/2020
- 08:45 PM Documentation #44354: cephadm: Log messages are missing
- Indeed, this was most probably caused by "Storage=auto" (the default) and a non-existent /var/log/journal.
Now, it pr...
- 08:15 PM Bug #46704 (Can't reproduce): container_linux.go:349: "exec: \"stat\": executable file not found
- Description: rados:cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start}
https://pulpito.ceph.com/swagner-2020...
- 03:28 PM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Hi, I've got the same issue:
OS: CentOS 8.2.2004
Ceph: Octopus (15.2.4)
Nodes: 3
- 08:16 AM Documentation #46701 (Resolved): remove `alias ceph='cephadm shell -- ceph'`
- this will lead to unexpected behavior, like...
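One illustrative failure mode (a sketch; the spec file name is a placeholder):
    alias ceph='cephadm shell -- ceph'
    ceph orch apply -i ./osd_spec.yaml
    # fails unexpectedly: ./osd_spec.yaml exists on the host, but the command
    # runs inside a freshly started container that cannot see that file;
    # each invocation also starts a new container, which is slow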
07/23/2020
- 09:19 PM Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- Sebastian Wagner wrote:
> indeed. Might relate to https://github.com/ceph/ceph/blob/3b31eea7fdfe9805259fdcc606e0a184...
- 04:18 PM Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- indeed. Might relate to https://github.com/ceph/ceph/blob/3b31eea7fdfe9805259fdcc606e0a1844431a8d8/src/python-common/...
- 08:55 AM Bug #46687 (Can't reproduce): MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- I made a clean install of Ceph using cephadm, then tried to create OSDs via the web interface like that and got a failure:
!im...
- 08:16 PM Bug #46098: Exception adding host using cephadm
- I have hit this as well installing cephadm on Debian 10 buster with an apt upgrade done.
I have the playbook that ...
- 12:53 PM Documentation #46691 (New): Document manual deployment of OSDs
- Sometimes, users want to deploy OSDs completely manually. Maybe drive groups are not expressive enough, or there is ...
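A rough sketch of what such a doc could cover (an assumption about the flow; exact flags may differ by release):
    # inside "cephadm shell": prepare the device and note the OSD id and fsid
    ceph-volume lvm prepare --data /dev/sdb
    ceph-volume lvm list
    # back on the host: hand the prepared OSD over to cephadm (illustrative invocation)
    cephadm deploy --fsid <cluster-fsid> --name osd.<id> --osd-fsid <osd-fsid>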
- 08:27 AM Bug #46685 (Won't Fix): mgr/rook: OSD devices are marked as available
- I deployed a Rook Ceph cluster with the latest master/octopus image today; devices are marked as available even though they are ...
07/22/2020
- 10:40 PM Documentation #46377 (In Progress): cephadm: Missing 'service_id' in last example in orchestrator...
- 01:07 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- Sebastian Wagner wrote:
> REFRESHED doesn't mean the daemon was restarted. Just that the status was refreshed
It ...
- 10:52 AM Bug #46103 (Need More Info): Restart service command restarts all the services and accepts servic...
- 10:52 AM Bug #46103: Restart service command restarts all the services and accepts service type too
- REFRESHED doesn't mean the daemon was restarted. Just that the status was refreshed
- 01:00 PM Bug #46045 (New): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- Sebastian Wagner wrote:
> this was due to the fact that the FS tests don't enable the dashboard?
Yes, even other ...
- 10:58 AM Bug #46045 (Need More Info): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- this was due to the fact that the FS tests don't enable the dashboard?
- 12:10 PM Bug #45819 (Can't reproduce): cephadm: Possible error in deploying-nfs-ganesha docs
- 12:06 PM Bug #45832 (Resolved): cephadm: "ceph orch apply mon" moves daemons
- See the note in https://docs.ceph.com/docs/master/cephadm/install/#deploy-additional-monitors-optional
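In short, pin the monitors to an explicit set of hosts instead of re-running a bare apply; e.g. (hostnames are placeholders):
    ceph orch apply mon --placement="host1,host2,host3"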
- 12:05 PM Bug #44559 (In Progress): cephadm logs an invalid stat command
- I still have this on my plate. The "fix" here is to include the double quotes in the log message. This isn't a bug _p...
- 12:00 PM Feature #44775 (Resolved): cephadm: NFS stage 2
- 11:59 AM Feature #44287 (Rejected): cephadm: Graceful Shutdown of the Whole Ceph Cluster
- Can't do that in cephadm. Needs Salt or Ansible.
- 11:57 AM Documentation #44867 (Rejected): cephadm: document "package" mode
- please don't
- 11:55 AM Bug #44739 (Can't reproduce): ceph.conf parameters set via "cephadm bootstrap -c" are not persist...
- 11:53 AM Bug #44968 (Can't reproduce): cehpadm: another "RuntimeError: Set changed size during iteration"
- 11:50 AM Documentation #44354: cephadm: Log messages are missing
- We need this in the docs:
> Create the /var/log/journal directory, so journal logs will be persisted.
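A minimal sketch of that workaround:
    mkdir /var/log/journal
    systemctl restart systemd-journald   # with Storage=auto, logs persist once this directory exists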
- 11:46 AM Bug #45576 (Resolved): cephadm: `cephadm ls` does not play well with `cephadm logs`
- Resolved in the meantime.
- 11:41 AM Documentation #45728 (Resolved): Add an example for custom images to the "bootstrap a new cluster...
- I think this is done by https://docs.ceph.com/docs/master/man/8/cephadm/#synopsis
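That is, the global --image flag, e.g. (image reference and IP are placeholders):
    cephadm --image quay.io/myorg/ceph:v15.2.4 bootstrap --mon-ip 192.168.0.1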
- 11:40 AM Bug #45399 (Resolved): NFS Ganesha : Error searching service specs for all nodes after nfs orch a...
- 11:39 AM Bug #45791 (Can't reproduce): cephadm: Upgrade is failing octopus on centos 8 %d format a numbe ...
- 11:38 AM Documentation #45896: cephadm: Need a manual howto: "upgrade the cluster manually"
- I think we should document how to manually update the cluster. We'll need this for troubleshooting anyway.
- 11:35 AM Bug #45808 (New): cephadm/test_adoption.sh: Error parsing image configuration: Invalid status cod...
- For tasks/cephadm.py we use the local registry; for test_adoption we don't. Low priority until this appears again.
- 11:33 AM Bug #45867 (Resolved): orchestrator: Errors while deployment are hidden behind the log wall
- https://github.com/ceph/ceph/pull/35456
- 11:03 AM Bug #45976 (Duplicate): cephadm: prevent rm-daemon from removing legacy daemons
- 10:57 AM Bug #43816 (Resolved): cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 10:56 AM Documentation #46133 (Pending Backport): encourage users to apply YAML specs instead of using the...
- 10:56 AM Documentation #46168 (Pending Backport): Add information about 'unmanaged' parameter
- 10:52 AM Bug #46256 (Can't reproduce): OSDs are getting re-added to the cluster, despite unmanaged=True
- 10:51 AM Bug #46268 (Pending Backport): cephadm: orch apply -i: RGW service spec id might not contain a zone
- 10:50 AM Bug #46271 (Pending Backport): podman pull: transient "Error: error creating container storage: e...
- 10:50 AM Support #45940 (Closed): Orchestrator to be able to deploy multiple OSDs per single drive
- 10:49 AM Bug #43681 (Fix Under Review): cephadm: Streamline RGW deployment
- 10:45 AM Bug #45872 (Pending Backport): ceph orch device ls exposes the `device_id` under the DEVICES colu...
- 10:44 AM Feature #45263 (Pending Backport): osdspec/drivegroup: not enough filters to define layout
- 10:42 AM Bug #46398 (Pending Backport): cephadm: can't use custom prometheus image
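(For context: the monitoring container images are configurable via mgr options; an illustrative command, assuming the cephadm module's option name:)
    ceph config set mgr mgr/cephadm/container_image_prometheus prom/prometheus:v2.18.1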
- 10:41 AM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Let's wait until https://github.com/containers/podman/issues/6933 is resolved.
- 10:38 AM Bug #46540 (Fix Under Review): cephadm: iSCSI gateways problems.
- 10:37 AM Support #46547 (Need More Info): cephadm: Exception adding host via FQDN if host was already added
- 10:36 AM Bug #46412 (Need More Info): cephadm trying to pull mimic based image
- 10:36 AM Bug #46412: cephadm trying to pull mimic based image
- works for me: ...
- 10:35 AM Support #46384 (Resolved): 15.2.4 and cephadm - mds not starting
- 10:35 AM Bug #46561 (Need More Info): cephadm: monitoring services adoption doesn't honor the container image
- 09:27 AM Bug #46654: Unsupported podman container configuration via systemd
- interestingly, Red Hat recommends killmode=none for this setup: https://www.redhat.com/sysadmin/podman-shareable-syst...
- 09:27 AM Bug #46654: Unsupported podman container configuration via systemd
- relates to https://github.com/ceph/ceph/pull/33162
- 09:19 AM Feature #46666: cephadm: Introduce 'container' specification to deploy custom containers
- https://github.com/ceph/ceph/blob/12a5c4669828a65ef23d87d22e9a6bfaad68691e/src/cephadm/cephadm#L3047
Needs a new cas...
- 08:57 AM Feature #46666 (Resolved): cephadm: Introduce 'container' specification to deploy custom containers
- By introducing a 'ContainerSpec' it is possible to deploy custom containers and configurations using cephadm without ...
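A hypothetical shape for such a spec (field names are assumptions, not the final API); apply with "ceph orch apply -i container.yaml":
    # container.yaml
    service_type: container
    service_id: mycontainer
    placement:
      hosts:
        - host1
    spec:
      image: docker.io/library/nginx:latest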
- 08:32 AM Bug #46665 (Resolved): cephadm plugin: Failure to start service stops service loop; no other inst...
- In the cephadm plugin, _apply_service doesn't handle any exception from create(), and so the whole list of hosts is a...
07/21/2020
- 12:23 PM Bug #46655 (Resolved): cephadm rm-cluster: Systemd ceph.target not deleted
- The systemd ceph.target persists (active and running) after deleting the cluster using cephadm rm-cluster.
This not pre...
- 12:16 PM Bug #46654 (Resolved): Unsupported podman container configuration via systemd
- Description of problem:
As per https://bugzilla.redhat.com/show_bug.cgi?id=1834974#c4 running podman containers via ...
- 09:55 AM Feature #46651 (Rejected): cephadm: allow daemon/service restarts on a host basis
- Currently we have...
07/20/2020
- 05:43 PM Bug #46385: Can I run rados and s3 compatible object storage device in Cephadm?
- Sathvik Vutukuri wrote:
> Sathvik Vutukuri wrote:
> > Sathvik Vutukuri wrote:
> > > I have tried cephadm for ceph ...
- 11:56 AM Bug #46098 (Fix Under Review): Exception adding host using cephadm
- 02:37 AM Bug #46098: Exception adding host using cephadm
- I've hit this with base RHEL 8.2 physical hosts. In my case the new hosts I tried to add didn't have python3, lvm or ...
- 10:09 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- Some thoughts:
* https://github.com/ceph/ceph/pull/35719 remove centos_7 from suites/rados/cephadm
* https://gith...
- 09:57 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- https://pulpito.ceph.com/kchai-2020-07-18_13:35:09-rados-wip-kefu-testing-2020-07-18-1927-distro-basic-smithi/5237690
- 09:50 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- /a/kchai-2020-07-18_13:35:09-rados-wip-kefu-testing-2020-07-18-1927-distro-basic-smithi/5237560
also centos 7.6 (b...
- 09:50 AM Bug #44990 (Resolved): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file o...
- /a/kchai-2020-07-18_13:35:09-rados-wip-kefu-testing-2020-07-18-1927-distro-basic-smithi/5237560 is actually #46529
07/19/2020
- 05:51 AM Bug #46560 (Pending Backport): cephadm: assigns invalid id to daemons
- 05:44 AM Bug #44990 (New): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or dir...
- 05:43 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
07/17/2020
- 07:26 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- > > Also deploying the monitoring after the bootstrap requires to run an extra ceph command to enable the prometheus ...
- 07:22 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- If it's not useful to deploy the monitoring stack on a MON+MGR node, why does "cephadm bootstrap" do that?
I guess...
- 01:44 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- Dimitri Savineau wrote:
> > The thinking is that it doesn't really make sense to co-locate the monitoring stack with...
- 01:23 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- > The thinking is that it doesn't really make sense to co-locate the monitoring stack with the mon+mgr.
Some peopl...
- 10:13 AM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- Dimitri Savineau wrote:
> As a current workaround, I need to skip the monitoring stack from the bootstrap (--skip-mo...
- 07:26 PM Bug #46606 (Resolved): cephadm: post-bootstrap monitoring deployment only works if the command "c...
- Post-bootstrap monitoring deployment only works if the command "ceph mgr module enable prometheus" has already been i...
- 11:08 AM Bug #46453: cephadm: iSCSI container fails to start
- relates to https://github.com/ceph/ceph/pull/35543#issuecomment-648796978
- 11:02 AM Bug #46534 (Pending Backport): cephadm podman pull: Digest did not match
- 10:58 AM Support #46547: cephadm: Exception adding host via FQDN if host was already added
- Right. The host name you use must match the output of ...
- 10:51 AM Bug #46560 (Fix Under Review): cephadm: assigns invalid id to daemons
- 10:16 AM Tasks #46376 (Resolved): cephadm: Make vagrant usage more comfortable
- 10:15 AM Bug #46329 (Resolved): cephadm: Dashboard's ganesha option is not correct if there are multiple N...
- 10:10 AM Bug #46578 (Duplicate): Container for iscsi gateway does not have tcmu-runner running as service
- 10:10 AM Bug #46540: cephadm: iSCSI gateways problems.
- From the other duplicated issue:
> In short, container for iscsi does not have tcmu-runner running as service
- 09:57 AM Bug #45980 (Pending Backport): cephadm: implement missing "FileStore not supported" error message...
- 09:51 AM Bug #45980 (Resolved): cephadm: implement missing "FileStore not supported" error message and upd...
- 09:53 AM Bug #46245 (Resolved): cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immed...
- 09:53 AM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
- 09:52 AM Bug #46138 (Resolved): mgr/dashboard: Error creating iSCSI target
- 09:52 AM Bug #46231 (Resolved): translate.to_ceph_volume: no need to pass the drive group
- 09:52 AM Cleanup #45321 (Resolved): Servcie spec: unify `spec:` vs omitting `spec:`
- 09:50 AM Bug #45726 (Resolved): Module 'cephadm' has failed: auth get failed: failed to find client.crash....
- 09:50 AM Feature #45859 (Resolved): cephadm: use fixed versions
- 09:49 AM Bug #45016 (Resolved): mgr: `ceph tell mgr mgr_status` hangs
- 09:48 AM Bug #46036 (Resolved): cephadm: killmode=none: systemd units failed, but containers still running
- 09:48 AM Documentation #46052 (Resolved): Module 'cephadm' has failed: DaemonDescription: Cannot calculate...
- 09:45 AM Bug #45961 (Resolved): cephadm: high load and slow disk make "cephadm bootstrap" fail
- 09:44 AM Bug #44990 (Resolved): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file o...
- 09:30 AM Bug #46582 (Resolved): cephadm: NFS services should not share the same namespace in a pool
- NFS services should have their own dedicated pool and namespace to store export and conf objects.
cephadm allows me to ...
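For reference, the spec shape in question (values are placeholders); apply with "ceph orch apply -i nfs.yaml":
    # nfs.yaml
    service_type: nfs
    service_id: mynfs
    placement:
      hosts:
        - host1
    spec:
      pool: nfs-ganesha
      namespace: mynfs-ns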
07/16/2020
- 08:18 PM Bug #46578 (Duplicate): Container for iscsi gateway does not have tcmu-runner running as service
- This link describes the problem https://tracker.ceph.com/issues/46540
In short, container for iscsi does not have tc...
- 05:05 PM Bug #45724 (Resolved): check-host should not fail using fqdn or not that hard
- 03:58 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- I guess it will also help for initial cluster bootstrap.
Because the bootstrap workflow also uses the default value...
- 03:03 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- via an environment variable like we have for CEPHADM_IMAGE or a dedicated parameter (like --image) both are fine for me.
- 02:55 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- Hm. I honestly don't know if @cephadm adopt@ has the necessary privileges to access the config store. In any case, we'r...
- 12:55 AM Bug #46561 (New): cephadm: monitoring services adoption doesn't honor the container image
- When running `cephadm adopt` command against monitoring services then the container image set via [1] isn't honored c...
- 02:18 PM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- https://pulpito.ceph.com/gkyratsas-2020-07-16_12:23:41-rados:cephadm:-wip-octopus-backport-swagner-testing-36109-dist...
- 10:20 AM Bug #46568 (Can't reproduce): cephadm: Sometimes setting global container_image does not work
- Here container_image is set before enabling cephadm....
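(For context, the setting in question; the image reference is a placeholder:)
    ceph config set global container_image docker.io/ceph/ceph:v15.2.4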
- 09:03 AM Bug #46233 (Resolved): cephadm: Add "--format" option to "ceph orch status"
- 04:37 AM Bug #46329: cephadm: Dashboard's ganesha option is not correct if there are multiple NFS daemons
- The backport is included in https://github.com/ceph/ceph/pull/36109
07/15/2020
- 10:13 PM Bug #46098: Exception adding host using cephadm
- @Stephan Müller, I'd suggest starting with some freshly built VMs (mine were Ubuntu 18.04). Optionally set up the Cep...
- 02:23 PM Bug #46098 (Need More Info): Exception adding host using cephadm
- I was not yet able to reproduce it. (Tried a lot of things.)
I added new hosts to bootstrapped clusters, removed a...
- 09:08 PM Bug #46560 (Resolved): cephadm: assigns invalid id to daemons
- ...
- 06:20 PM Bug #46558 (Resolved): cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- When creating an OSD spec for using dedicated devices for either DB and/or WAL bluestore devices, we can't use the pa...
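An illustrative spec that triggers this (device paths are placeholders); apply with "ceph orch apply -i osd.yaml":
    # osd.yaml
    service_type: osd
    service_id: osd_with_dedicated_db
    placement:
      hosts:
        - host1
    data_devices:
      paths:
        - /dev/sdb
    db_devices:
      paths:
        - /dev/nvme0n1   # reportedly ignored when given via paths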
- 02:14 PM Tasks #46551 (Resolved): cephadm: Add a better hint for how to add a host
- Currently:...
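For context, the steps the hint should point to (hostname is a placeholder):
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
    ceph orch host add host2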
- 12:17 PM Support #46547 (Resolved): cephadm: Exception adding host via FQDN if host was already added
- To reproduce you need nodes that have a subdomain (not like in current Vagrantfile). I used sesdev to find this issue...
- 11:19 AM Documentation #46546 (Resolved): doc/cephadm: "For each file system" is redundant in cephadm adop...
- $subject
- 08:44 AM Bug #46541: cephadm: OSD is marked as unmanaged in cephadm deployed cluster
- Yep. This is due to the fact that we're missing some annotation to mark "unknown" services. Marking them as "unmanage...
- 12:29 AM Bug #46541 (New): cephadm: OSD is marked as unmanaged in cephadm deployed cluster
- I followed the cephadm instructions (https://docs.ceph.com/docs/master/cephadm/install/#deploy-osds) on deploying a n...
07/14/2020
- 10:47 PM Bug #46540 (Resolved): cephadm: iSCSI gateways problems.
- After installing a clean cluster based on CentOS 8.2 minimal and podman, and adding iscsi gateways, tcmu-runner only ...
- 01:23 PM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- G. Heinrich wrote:
> Update:
> I did some additional tests and after some digging I found something in /var/log/sys...
- 06:56 AM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Update:
I did some additional tests and after some digging I found something in /var/log/syslog:...
- 01:22 PM Bug #46534 (Fix Under Review): cephadm podman pull: Digest did not match
- 01:07 PM Bug #46534 (Resolved): cephadm podman pull: Digest did not match
- https://pulpito.ceph.com/swagner-2020-07-14_12:03:52-rados:cephadm-wip-swagner-testing-2020-07-14-1125-distro-basic-s...
- 11:11 AM Bug #46036 (Pending Backport): cephadm: killmode=none: systemd units failed, but containers still...
- 11:08 AM Bug #45155 (Closed): mgr/dashboard: Error listing orchestrator NFS daemons
- 11:02 AM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- Nathan Cutler wrote:
> I noticed that "cephadm shell" infers the FSID (I only have one cluster running on this set... - 10:57 AM Feature #44886 (Fix Under Review): cephadm: allow use of authenticated registry
- 10:57 AM Feature #44886: cephadm: allow use of authenticated registry
- Denys Kondratenko wrote:
> should registry management and authentication be handled on cri-o level by system admin o... - 10:51 AM Bug #45631 (Closed): Error parsing image configuration: Invalid status code returned when fetchin...
- 10:39 AM Bug #46529 (Resolved): cephadm: error removing storage for container "...-mon": remove /var/lib/c...
- /a/teuthology-2020-07-12_07:01:02-rados-master-distro-basic-smithi/5217488 on centos_7.6...
- 10:26 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- yep, that's a different issue:...
07/13/2020
- 09:19 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Sebastian, I am seeing similar failures in rados/thrash-old-clients on recent master, can you please confirm if they...
- 12:54 PM Bug #45999 (Resolved): cephadm shell: picking up legacy_dir
- 11:57 AM Feature #46499 (Rejected): Requesting a "ceph orch redeploy monitoring" command, as an option, so...
- It recently came to my attention that the process of updating the monitoring stack to the current latest version will...
- 10:18 AM Bug #46497 (Resolved): cephadm: prevent colon character for service_ids.
- https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LL7N4K5RBG2E7P7MDTVC66U2ZHODJEFA/...
- 05:17 AM Bug #46327: cephadm: nfs daemons share the same config object
- Thanks Patrick and Michael for the comments.
To adapt to the new design, I've created some new issues:
* #46493 - Nee...
07/10/2020
- 04:40 PM Tasks #46376 (Fix Under Review): cephadm: Make vagrant usage more comfortable
- 04:05 PM Bug #46453 (Can't reproduce): cephadm: iSCSI container fails to start
- Deploying an iSCSI service doesn't fail...
- 09:23 AM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
- Attached the screen shot after firing "radosgw-admin realm create --rgw-realm=myorg --default" command in Ceph shell,...
07/09/2020
- 02:04 PM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Can we get the Priority set to something higher, please?
It's not affecting the cluster, but it does cause the clu...
- 04:48 AM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Andy Gold wrote:
> Any news on this, I have a new 15.2.4 cluster and its failing.
Any update on this? I also hav...
- 05:16 AM Bug #46429 (Closed): cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Podman had a major new release recently and it seems that cephadm cannot bootstrap a new cluster because of it.
I in...
- 03:18 AM Bug #46327: cephadm: nfs daemons share the same config object
- Patrick Donnelly wrote:
> Michael Fritch wrote:
> > > Kiefer Chang wrote:
> > > > This causes a regression in the ...
- 02:27 AM Bug #46327: cephadm: nfs daemons share the same config object
- Michael Fritch wrote:
> > Kiefer Chang wrote:
> > > This causes a regression in the Dashboard. RADOS objects are de...
- 02:14 AM Bug #46327: cephadm: nfs daemons share the same config object
- Patrick Donnelly wrote:
> Hi Kiefer, I left some similar comments on your slide deck but also will say here:
>
...
- 01:32 AM Bug #46385: Can I run rados and s3 compatible object storage device in Cephadm?
- Sathvik Vutukuri wrote:
> Sathvik Vutukuri wrote:
> > I have tried cephadm for ceph installation, but unable to ru...
- 01:31 AM Bug #46385: Can I run rados and s3 compatible object storage device in Cephadm?
- Sathvik Vutukuri wrote:
> I have tried cephadm for ceph installation , but unable to run ceph RADOS Gw and s3 compat...
07/08/2020
- 09:06 PM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
- RGW deployment is supported - see https://ceph.readthedocs.io/en/latest/cephadm/install/#deploy-rgws
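The documented flow, roughly (realm/zone/host names are placeholders):
    radosgw-admin realm create --rgw-realm=myorg --default
    radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
    radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=us-east-1 --master --default
    ceph orch apply rgw myorg us-east-1 --placement="1 host1"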
- 08:11 PM Bug #45724 (Fix Under Review): check-host should not fail using fqdn or not that hard
- 05:23 PM Bug #46327: cephadm: nfs daemons share the same config object
- Hi Kiefer, I left some similar comments on your slide deck but also will say here:
Kiefer Chang wrote:
> This cau...
- 12:20 PM Bug #46329 (Pending Backport): cephadm: Dashboard's ganesha option is not correct if there are mu...
- 09:19 AM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
- This is a bug in the documentation only.
- 09:19 AM Documentation #46073: cephadm install fails: apt:stderr E: Unable to locate package cephadm
- Reinhard Eilmsteiner wrote:
> Machine is amd64 in virtualbox on Windows
The instructions are missing adding the apt-rep...
- 06:11 AM Bug #46412 (Can't reproduce): cephadm trying to pull mimic based image
- ...
07/07/2020
- 08:24 PM Support #46384: 15.2.4 and cephadm - mds not starting
- Here is how I do it:
https://docs.ceph.com/docs/master/cephfs/fs-volumes/#fs-volumes
"This creates a CephFS fil... - 10:49 AM Support #46384 (Resolved): 15.2.4 and cephadm - mds not starting
- I have a new install of Octopus, 15.2.4, and am using cephadm to manage it. The mds daemons are not being created.
$...
- 03:41 PM Bug #46398 (Fix Under Review): cephadm: can't use custom prometheus image
- 12:08 PM Bug #46398 (In Progress): cephadm: can't use custom prometheus image
- 12:08 PM Bug #46398 (Resolved): cephadm: can't use custom prometheus image
- ...
- 01:44 PM Bug #46206 (Rejected): cephadm: podman 2.0
- This was a bug in Podman v2.0.0 and has been fixed in Podman v2.0.1.
> Fixed a bug where the --privileged flag had...
- 01:24 PM Bug #46206: cephadm: podman 2.0
- Removing changes from commit "b5e5c753":https://github.com/ceph/ceph/commit/b5e5c753f415ab1f18ccfe3ad636649a0f51a93a ...
- 12:55 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- I am still providing additional input, and requesting that, if this behavior remains as-is, a way to stop the process...
- 09:00 AM Bug #45792 (Fix Under Review): cephadm: zapped OSD gets re-added to the cluster.
- As this is the intended behavior, this needs to be documented. See https://github.com/ceph/ceph/pull/35744
- 12:38 PM Feature #45859 (Pending Backport): cephadm: use fixed versions
- 11:47 AM Bug #46396 (New): osdspec/drivegroup: check for 'intersection' in DriveSelection when multiple OS...
- We do allow the use of multiple OSDSpecs explicitly. However, misconfiguration can lead to unwanted behavior.
A go...
- 10:54 AM Bug #46385 (Duplicate): Can I run rados and s3 compatible object storage device in Cephadm?
- I have tried cephadm for ceph installation, but unable to run ceph RADOS Gw and s3 compatible object storage?
Ho...
- 09:05 AM Bug #45861 (Fix Under Review): data_devices: limit 3 deployed 6 osds per node
- 09:03 AM Bug #45672 (Can't reproduce): Unable to add additional hosts to cluster using cephadm
- thanks, closing
- 08:47 AM Bug #44756 (In Progress): drivegroups: replacement op will ignore existing wal/dbs
- fixed with https://github.com/ceph/ceph/pull/34740
- 08:23 AM Bug #45980 (In Progress): cephadm: implement missing "FileStore not supported" error message and ...
- 08:22 AM Bug #46231 (Pending Backport): translate.to_ceph_volume: no need to pass the drive group
- 08:15 AM Bug #45172 (Resolved): bin/cephadm: logs: Traceback: not enough values to unpack (expected 2, got 1)
- 08:14 AM Feature #45263 (Fix Under Review): osdspec/drivegroup: not enough filters to define layout
07/06/2020
- 03:46 PM Bug #45872 (Fix Under Review): ceph orch device ls exposes the `device_id` under the DEVICES colu...
- 03:33 PM Documentation #46377 (Resolved): cephadm: Missing 'service_id' in last example in orchestrator#se...
- Missing 'service_id' in last example in orchestrator#service-specification. Example can be found right above https://...
- 03:28 PM Tasks #46376 (Resolved): cephadm: Make vagrant usage more comfortable
- Currently you can only use a big scale factor with the vagrant setup. You can have x * (mgr, mon, osd with 2 disks)....
- 02:40 PM Feature #44886: cephadm: allow use of authenticated registry
- should registry management and authentication be handled on cri-o level by system admin or maybe by cephadm as helper...
- 02:24 PM Bug #44888 (Fix Under Review): Drivegroup's :limit: isn't working correctly
- 01:41 PM Bug #46327 (New): cephadm: nfs daemons share the same config object
- Reopening, as this change introduces a regression and will potentially break upgrades.
- 07:03 AM Bug #46327: cephadm: nfs daemons share the same config object
- This causes a regression in the Dashboard. RADOS objects are designed to work with multiple daemons and each daemon h...
07/03/2020
- 04:03 PM Tasks #46352 (Won't Fix): add leap support for cephadm
- Currently, the build scripts for ceph-container are designed for shipping centos 8 based images,
as it would be much...
- 01:00 PM Documentation #45564: cephadm: document workaround for accessing the admin socket by entering run...
- I got it to work like this:...
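Presumably along these lines (a sketch; the daemon name is a placeholder):
    cephadm enter --name mon.host1       # enter the daemon's running container
    ceph daemon mon.host1 mon_status     # the admin socket is reachable inside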
- 10:01 AM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Any news on this? I have a new 15.2.4 cluster and it's failing.
- 08:32 AM Documentation #46335: Document "Using cephadm to set up rgw-nfs"
- the building blocks are there: setting up ganesha, setting up the RADOS objects, https://docs.ceph.com/docs/master/ra...
- 08:30 AM Documentation #46335 (Resolved): Document "Using cephadm to set up rgw-nfs"
- * cephadm doesn't care about exports. Instead it simply sets up the daemons.
* cephadm only creates an empty 'conf-{...
- 05:32 AM Bug #46329 (Fix Under Review): cephadm: Dashboard's ganesha option is not correct if there are mu...
07/02/2020
- 02:39 PM Feature #45263 (In Progress): osdspec/drivegroup: not enough filters to define layout
- 01:54 PM Feature #45263: osdspec/drivegroup: not enough filters to define layout
- This patch allows switching between `AND` and `OR` gating. https://github.com/ceph/ceph/compare/master...jschmid1:dri...
- 01:50 PM Feature #45203 (Resolved): OSD Spec: allow filtering via explicit hosts and labels
- 08:28 AM Bug #46327 (Rejected): cephadm: nfs daemons share the same config object
- Not a bug, by design all daemons within a cluster will share the same config object.
- 08:02 AM Bug #46327 (Won't Fix): cephadm: nfs daemons share the same config object
- If we create an NFS service with multiple instances, those instances share the same rados object as the configuration s...
- 08:09 AM Bug #46329 (Resolved): cephadm: Dashboard's ganesha option is not correct if there are multiple N...
- How to reproduce:
* Create an NFS service with multiple daemons, e.g. with the following spec:...
07/01/2020
- 11:47 PM Feature #44866 (Pending Backport): cephadm root mode: support non-root users + sudo
- 11:47 PM Feature #44866 (Resolved): cephadm root mode: support non-root users + sudo
- 08:58 AM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
- Right, I was able to create an iSCSI target on pacific/master:...
06/30/2020
- 11:20 PM Bug #46283: cephadm: Unable to create iSCSI target
- Maybe this PR (https://github.com/ceph/ceph/pull/35141) hasn't been backported into octopus yet.
- 05:58 PM Bug #46283 (Rejected): cephadm: Unable to create iSCSI target
- I'm getting an error when trying to create an iSCSI target on openSUSE Leap.
*How to reproduce:*
I've created a...
- 05:29 PM Bug #46237 (New): cephadm: Inconsistent exit code
- 01:56 PM Bug #46237 (In Progress): cephadm: Inconsistent exit code
- 09:36 AM Support #45940 (Need More Info): Orchestrator to be able to deploy multiple OSDs per single drive
- 09:36 AM Support #45940: Orchestrator to be able to deploy multiple OSDs per single drive
- Does https://docs.ceph.com/docs/master/cephadm/drivegroups/#the-advanced-case work for you?
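That is, the advanced case's osds_per_device option, e.g. (values are placeholders); apply with "ceph orch apply -i osd.yaml":
    # osd.yaml
    service_type: osd
    service_id: multiple_osds_per_drive
    placement:
      host_pattern: '*'
    data_devices:
      paths:
        - /dev/nvme0n1
    osds_per_device: 2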
- 09:25 AM Bug #46271 (Resolved): podman pull: transient "Error: error creating container storage: error cre...
- ...
- 08:03 AM Bug #44990 (Pending Backport): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no suc...
- was fixed in master.
- 01:15 AM Bug #46175 (Fix Under Review): cephadm: orch apply -i: MON and MGR service specs must not have a ...
- 01:14 AM Bug #46268 (Fix Under Review): cephadm: orch apply -i: RGW service spec id might not contain a zone
- 01:10 AM Bug #46268 (Resolved): cephadm: orch apply -i: RGW service spec id might not contain a zone
- rgw.yaml...
06/29/2020
- 08:03 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-06-29_16:59:21-rados-octopus-distro-basic-smithi/5189862
- 06:40 PM Feature #46265 (Duplicate): test cephadm MDS deployment
- Right now, the test is broken.
The workaround is to apply it manually: https://github.com/ceph/ceph/blob/cedf2bbd13daba...
- 03:50 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- Sebastian Wagner wrote:
> I think this is actually the correct behavior!
How is it the correct behaviour? If it i...
- 12:23 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- I think this is actually the correct behavior!
- 03:06 PM Bug #46256 (Need More Info): OSDs are getting re-added to the cluster, despite unmanaged=True
- 03:06 PM Bug #46256 (Can't reproduce): OSDs are getting re-added to the cluster, despite unmanaged=True
- 01:10 PM Bug #46138 (Pending Backport): mgr/dashboard: Error creating iSCSI target
- 01:05 PM Bug #46254 (Can't reproduce): cephadm upgrade test: exit condition is wrong. we have to wait longer
- We're exiting the upgrade loop too early. In this case, the OSD didn't have time to come up again. ...
- 12:24 PM Bug #45016 (Pending Backport): mgr: `ceph tell mgr mgr_status` hangs
- 12:21 PM Bug #46038 (Can't reproduce): cephadm mon start failure: Failed to reset failed state of unit cep...
- feel free to reopen the issue!
- 12:21 PM Bug #46038: cephadm mon start failure: Failed to reset failed state of unit ceph-9342dcfe-afd5-11...
- The logs are gone. Maybe we should put the logs into the tracker here.
- 12:16 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- `daemon add` is too low level. If we want commands to be idempotent, we have to stop calling them from cephadm.py.
- 12:14 PM Bug #45167: cephadm: mons are not properly deployed
- Low priority, until it happens again.
- 12:04 PM Tasks #45814 (In Progress): tasks/cephadm.py: Add iSCSI smoke test
- 11:25 AM Bug #46253 (Resolved): OSD specs without service_id
- ...
- 11:07 AM Bug #46252 (Closed): MGRs should get a random identifier, ONLY if we're co-locating MGRs on the s...
- ...
- 10:28 AM Feature #45565: cephadm: A daemon should provide information about itself (e.g. service urls)
- See DaemonDescription's service_url
- 09:04 AM Bug #46245 (Fix Under Review): cephadm: set-ssh-config/clear-ssh-config command doesn't take effe...
- 07:01 AM Bug #46245: cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immediately
- Can you describe your work environment?
- 06:52 AM Bug #46245 (Resolved): cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immed...
- The cephadm module should reload the ssh config when the user sets a new ssh config or clears it.
- 07:14 AM Bug #46247: cephadm mon failure: Error: no container with name or ID ... no such container
- http://qa-proxy.ceph.com/teuthology/ideepika-2020-06-25_18:36:29-rados-wip-deepika-testing-2020-06-25-2058-distro-bas...
- 07:13 AM Bug #46247 (Can't reproduce): cephadm mon failure: Error: no container with name or ID ... no suc...
- ...