Activity
From 07/19/2020 to 08/17/2020
08/17/2020
- 07:34 PM Bug #46665 (Fix Under Review): cephadm plugin: Failure to start service stops service loop; no ot...
- 04:26 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- The moral of the story is: wait for the bootstrap MON(s) and MGR(s) to appear in "cephadm ls" before proceeding with ...
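A minimal sketch of that advice (not from the ticket; assumes jq is installed and that "cephadm ls" reports each daemon with a "name" such as "mon.<host>"):
<pre>
# Poll until the bootstrap MON and MGR show up in "cephadm ls",
# then proceed with "ceph orch apply".
until sudo cephadm ls | jq -e \
    'any(.[]; .name|startswith("mon.")) and any(.[]; .name|startswith("mgr."))' >/dev/null
do
    echo "waiting for the bootstrap MON/MGR to appear in cephadm ls ..."
    sleep 10
done
ceph orch apply osd --all-available-devices   # example follow-up step
</pre>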
- 04:00 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- Note: this problem is known to arise (only on machines with root filesystem on HDD) the first time "ceph orch apply" ...
- 12:44 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- Seems to:
(1) happen in libvirt VMs running on slower hardware (e.g. nested virt)
(2) be a recent regression
- 12:37 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- Might relate to https://www.reddit.com/r/ceph/comments/f4rjrk/taking_another_whack_at_ceph_on_odroid_hc2s/
- 12:19 PM Bug #46990 (Can't reproduce): execnet: EOFError: couldnt load message header, expected 9 bytes, g...
- ...
- 03:07 PM Bug #46862 (Need More Info): cephadm: nfs ganesha client mount is read only
- Dimitri Savineau wrote:
> When trying to mount a ganesha share via nfs on a client node, the mount command is succes...
- 01:17 PM Bug #45792 (Resolved): cephadm: zapped OSD gets re-added to the cluster.
- 12:57 PM Bug #46991: cephadm ls does not list legacy rgws
- Sebastian Wagner wrote:
> For RGWs, please follow step 11 to redeploy the RGWs with cephadm!
This was the step 4t...
- 12:55 PM Bug #46991: cephadm ls does not list legacy rgws
- For RGWs, please follow step 11 to redeploy the RGWs with cephadm!
- 12:21 PM Bug #46991 (Resolved): cephadm ls does not list legacy rgws
- I am trying to convert a ceph-ansible cluster to cephadm; when I see ceph -s...
- 12:52 PM Bug #46045 (Resolved): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- 12:52 PM Bug #45594 (Resolved): cephadm: weight of a replaced OSD is 0
- 12:51 PM Feature #44548 (Resolved): cephadm: persist osd removal queue
- 12:51 PM Bug #46740 (Resolved): mgr/cephadm: restart of daemon reports host is empty
- 12:51 PM Bug #43681 (Resolved): cephadm: Streamline RGW deployment
- 12:50 PM Bug #46764: cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy any OSDs
- Though I cannot access the MGR log, I think it's safe to assume it would look like the one in #46990
- 12:49 PM Bug #46764 (Duplicate): cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy any OSDs
- 12:44 PM Bug #46764 (Need More Info): cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy a...
- Thanks for the report. I'll need more info to resolve this. The MGR log might be helpful.
- 12:49 PM Bug #46540 (Resolved): cephadm: iSCSI gateways problems.
- 12:48 PM Documentation #45858 (Resolved): `ceph orch status` doesn't show in progress actions
- 12:48 PM Bug #46175 (Resolved): cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- 12:04 PM Bug #46921: Fedora: Download URL not found for cephadm installation
- I don't think we have official packages for fc32. See https://download.ceph.com/rpm-octopus/
08/15/2020
- 03:37 AM Bug #46921: Fedora: Download URL not found for cephadm installation
- From https://docs.ceph.com/docs/master/releases/octopus/#v15-2-0-octopus, seems it is not yet supported?
08/13/2020
- 03:04 PM Bug #46922: cephadm: IPv6 syntax inconsistency
- I'll have a look into making sure IPv6 addresses can be given wrapped or unwrapped and we'd just detect and do accord...
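A minimal sketch of such detection (an assumption about the eventual fix, not the merged code): strip optional brackets so wrapped and unwrapped IPv6 addresses are treated alike.
<pre>
# Accept both "[2001:db8::1]" and "2001:db8::1".
normalize_ip() {
    local ip=$1
    ip=${ip#\[}    # drop a leading "["
    ip=${ip%\]}    # drop a trailing "]"
    printf '%s\n' "$ip"
}
normalize_ip '[2001:db8::1]'   # -> 2001:db8::1
normalize_ip '2001:db8::1'     # -> 2001:db8::1
</pre>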
- 12:39 PM Bug #46922 (Resolved): cephadm: IPv6 syntax inconsistency
- While trying to bootstrap a cluster with the `--mon-ip` and `--apply-spec` options, I found that sometimes the IPv6 syntax is `...
- 01:13 PM Bug #46910: cephadm: Unable to create an iSCSI target
- ...
- 04:54 AM Bug #46910: cephadm: Unable to create an iSCSI target
- I think I've had this before, if I remember correctly I've seen this when the tcmu-runner container was running but d...
- 12:01 PM Bug #46921 (Resolved): Fedora: Download URL not found for cephadm installation
- I'm following [the guide](https://ceph.readthedocs.io/en/latest/cephadm/install/) for installing Ceph using cephadm. ...
- 04:47 AM Feature #46811 (Fix Under Review): cephadm: add host metadata to the orchestrator's inventory
08/12/2020
- 11:10 AM Bug #46910 (Can't reproduce): cephadm: Unable to create an iSCSI target
- When I try to create an iSCSI target I get the following error:
!error-creating-target.png!
On `ceph-iscsi` con...
- 10:58 AM Bug #46777 (Resolved): cephadm: Error bootstrapping a cluster with '--registry-json' option
- 09:13 AM Bug #44738 (Won't Fix): drivegroups/cephadm: db_devices don't get applied correctly when using "p...
08/10/2020
- 03:42 PM Bug #46253: OSD specs without service_id
- Currently I only get an error running an OSD spec without any service_id.
I will now mute the traceback and just s...
- 08:57 AM Bug #46814 (Pending Backport): cephadm: Deploying alertmanager image is broken
08/07/2020
- 02:41 PM Bug #46862 (Resolved): cephadm: nfs ganesha client mount is read only
- When trying to mount a ganesha share via nfs on a client node, the mount command is successful but the share is in RO...
- 10:41 AM Bug #46854: Error ENOENT: name mon.smithi074 already in use seen on octopus
- Thanks Sebastian. Strangely enough, when you search for '"already in use" ENOENT' under Orchestrator there are no hits...
- 07:56 AM Bug #46854 (Duplicate): Error ENOENT: name mon.smithi074 already in use seen on octopus
- Thanks for the report Brad. We *really* have to remove all `daemon add` calls from Teuthology.
- 04:16 AM Bug #46854 (Duplicate): Error ENOENT: name mon.smithi074 already in use seen on octopus
- /a/yuriw-2020-08-05_14:55:18-rados-wip-yuri-testing-2020-08-04-2244-octopus-distro-basic-smithi/5289214...
- 07:58 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- https://github.com/ceph/ceph/pull/36321
- 04:23 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- All 7.6.
/a/yuriw-2020-08-05_14:55:18-rados-wip-yuri-testing-2020-08-04-2244-octopus-distro-basic-smithi/5289047
...
08/06/2020
- 12:05 PM Bug #46819 (Resolved): orch: 'ceph orch apply mgr 1' prints out dry run info without --dry-run fl...
- 06:48 AM Bug #46833 (Pending Backport): simple (ceph-disk style) OSDs adopted by cephadm must not call `ce...
08/05/2020
- 09:57 PM Bug #44252 (Fix Under Review): cephadm: mgr,mds scale-down should prefer standby daemons
- 02:18 PM Bug #46429 (Closed): cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- 06:59 AM Bug #46833 (Fix Under Review): simple (ceph-disk style) OSDs adopted by cephadm must not call `ce...
- 06:33 AM Bug #46833 (In Progress): simple (ceph-disk style) OSDs adopted by cephadm must not call `ceph-vo...
- 06:32 AM Bug #46833 (Resolved): simple (ceph-disk style) OSDs adopted by cephadm must not call `ceph-volum...
- The unit files created when adopting OSDs include a few @chown@ calls to handle simple (ceph-disk style) OSDs, and al...
08/04/2020
- 03:53 PM Bug #46561: cephadm: monitoring services adoption doesn't honor the container image
- @Sebastien: What information do you need?
- 03:07 PM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Andreas Haase wrote:
> Commands used to create the object gateway:
>
> [...]
>
> Commands used for creating th... - 12:47 PM Bug #46813 (Fix Under Review): `ceph orch * --refresh` is broken
- 11:26 AM Feature #46182 (Fix Under Review): cephadm should use the same image reference across the cluster
- 10:05 AM Feature #46827 (New): cephadm: Pin OSDs to pmem modules connected to specific CPUs
- I have OSDs running on pmem modules, so I want to pin them to CPUs that are connected to those modules. Same with nvm...
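A hypothetical workaround while no such feature exists: pin an OSD's systemd unit to the CPUs local to the device via a drop-in. The fsid, OSD id, and CPU range below are placeholders.
<pre>
# CPUAffinity= is a standard systemd [Service] directive.
sudo mkdir -p /etc/systemd/system/ceph-<fsid>@osd.3.service.d
printf '[Service]\nCPUAffinity=0-15\n' | \
    sudo tee /etc/systemd/system/ceph-<fsid>@osd.3.service.d/cpuaffinity.conf
sudo systemctl daemon-reload
sudo systemctl restart ceph-<fsid>@osd.3.service
</pre>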
08/03/2020
- 07:46 PM Bug #46819 (Fix Under Review): orch: 'ceph orch apply mgr 1' prints out dry run info without --dr...
- 06:05 PM Bug #46819 (Resolved): orch: 'ceph orch apply mgr 1' prints out dry run info without --dry-run fl...
- Sample output attempting to scale down from 3 to 1 mgr daemons. "ceph orch apply mgr 1" prints out dry run output and...
- 02:56 PM Bug #46814 (Fix Under Review): cephadm: Deploying alertmanager image is broken
- 10:05 AM Bug #46814 (Resolved): cephadm: Deploying alertmanager image is broken
- cephadm maps the 'alertmanager/' directory to '/alertmanager' instead of '/etc/alertmanager' for docker.io/prom/alert...
- 02:23 PM Bug #45327: cephadm: Orch daemon add is not idempotent
- https://pulpito.ceph.com/swagner-2020-08-03_12:11:23-rados:cephadm-wip-swagner-testing-2020-08-03-1038-distro-basic-s...
- 08:43 AM Feature #44548 (Pending Backport): cephadm: persist osd removal queue
- 08:37 AM Cleanup #43700: cephadm: make it a proper python package
- Some background info:
I've tried to improve things in https://github.com/ceph/ceph/pull/32526 half a year ago, bu...
- 08:09 AM Bug #46813 (Resolved): `ceph orch * --refresh` is broken
- Those calls violate https://docs.ceph.com/docs/master/dev/cephadm/#note-regarding-network-calls-from-cli-handlers
...
- 02:36 AM Feature #46811 (Closed): cephadm: add host metadata to the orchestrator's inventory
- The current host spec provides no insights into the host that is being managed. By enriching the metadata held for a ...
07/31/2020
- 06:11 PM Bug #45235 (Can't reproduce): cephadm: mons are not properly undeployed
- 05:52 PM Cleanup #46801: cephadm bootstrap should not apply a default spec if user does not supply "--appl...
- > The --no-apply-spec is called --orphan-initial-daemons, but you really don't want to do that, except if you're Teut...
- 04:08 PM Cleanup #46801 (New): cephadm bootstrap should not apply a default spec if user does not supply "...
- The current UX semantics of the "cephadm bootstrap" command lead to unwanted behavior.
Before the "--apply-spec" o...
- 03:53 PM Feature #46182 (In Progress): cephadm should use the same image reference across the cluster
- 10:04 AM Feature #46182: cephadm should use the same image reference across the cluster
- https://github.com/ceph/ceph-ansible/search?q=repodigest&unscoped_q=repodigest
- 01:03 PM Bug #46098 (Resolved): Exception adding host using cephadm
- 01:03 PM Feature #45263 (Resolved): osdspec/drivegroup: not enough filters to define layout
- 01:03 PM Bug #45872 (Resolved): ceph orch device ls exposes the `device_id` under the DEVICES column which...
- 01:03 PM Bug #46398 (Resolved): cephadm: can't use custom prometheus image
- 01:02 PM Bug #46560 (Resolved): cephadm: assigns invalid id to daemons
- 01:02 PM Bug #46534 (Resolved): cephadm podman pull: Digest did not match
- 01:02 PM Bug #46271 (Resolved): podman pull: transient "Error: error creating container storage: error cre...
- 01:01 PM Bug #46268 (Resolved): cephadm: orch apply -i: RGW service spec id might not contain a zone
- 01:01 PM Documentation #46133 (Resolved): encourage users to apply YAML specs instead of using the CLI
- 01:00 PM Bug #45980 (Resolved): cephadm: implement missing "FileStore not supported" error message and upd...
- 12:59 PM Bug #46385 (Duplicate): Can i run rados and s3 compatible object storage device in Cephadm?
- Fixed by #43681. Please reopen if that was not enough. Thanks for reporting this!
- 12:51 PM Bug #46654: Unsupported podman container configuration via systemd
- https://github.com/ceph/ceph-ansible/pull/5443
- 12:43 PM Bug #46773: [RFE] validate argument of 'cephadm add-repo --release'
- note that the https://github.com/ceph/ceph/tree/pacific tree is about 3 months behind master at this point! I'll talk...
- 12:36 PM Bug #46782 (Triaged): cephadm bootstrap demands --mon-ip or --mon-addrv option even when Mon IPs ...
- Getting the info from the YAML spec without any hard dependency on pyyaml is going to be painful.
- 09:14 AM Bug #46782 (New): cephadm bootstrap demands --mon-ip or --mon-addrv option even when Mon IPs are ...
- Scenario:
I run "cephadm bootstrap" sending it the following YAML via --apply-spec:...
- 09:52 AM Bug #45465: cephadm: `ceph orch restart osd` has the potential to break your cluster
- Also we have to make this an asynchronous operation. See https://docs.ceph.com/docs/master/dev/cephadm/#note-regardin...
- 09:49 AM Bug #45465: cephadm: `ceph orch restart osd` has the potential to break your cluster
- And we need to call...
- 08:33 AM Bug #45594 (Fix Under Review): cephadm: weight of a replaced OSD is 0
- 06:06 AM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Some days ago a new version of Podman was released (2.0.3). I installed the latest version virtually and after some ...
- 03:15 AM Bug #46045 (Fix Under Review): qa/tasks/cephadm: Module 'dashboard' is not enabled error
07/30/2020
- 07:44 PM Bug #46777 (Fix Under Review): cephadm: Error bootstrapping a cluster with '--registry-json' option
- 02:48 PM Bug #46777 (In Progress): cephadm: Error bootstrapping a cluster with '--registry-json' option
- 12:33 PM Bug #46777 (Resolved): cephadm: Error bootstrapping a cluster with '--registry-json' option
- When I try to bootstrap a cluster with the new '--registry-json' option, I get the following error:...
- 10:09 AM Bug #46773 (Resolved): [RFE] validate argument of 'cephadm add-repo --release'
- How I hit this:
Wanted to test pacific cephadm ...
- 08:05 AM Support #46758 (Resolved): ERROR: hostname "rgw1" does not match expected hostname "rgw1.clyso.cl...
- please also have a look at https://docs.ceph.com/docs/master/cephadm/concepts/#fully-qualified-domain-names-vs-bare-h...
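A short illustration of the distinction (host names taken from this ticket): compare the bare hostname with the FQDN, and pass the resolvable name as the address when adding the host.
<pre>
hostname      # bare name, e.g. rgw1
hostname -f   # FQDN, e.g. rgw1.clyso.cloud
ceph orch host add rgw1 rgw1.clyso.cloud
</pre>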
07/29/2020
- 08:53 PM Bug #46764 (Can't reproduce): cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy ...
- This started happening about two weeks ago. Before that it was not happening. I've seen it only in single-node deploy...
- 03:50 PM Support #46758: ERROR: hostname "rgw1" does not match expected hostname "rgw1.clyso.cloud"
- My error. I should have added the host like this: ceph orch host add rgw1 rgw1.clyso.cloud
Please delete.
- 02:56 PM Support #46758 (Resolved): ERROR: hostname "rgw1" does not match expected hostname "rgw1.clyso.cl...
- looks like cephadm is checking /etc/hostname instead of hostname -f:...
- 02:54 PM Bug #46748 (Fix Under Review): Module 'cephadm' has failed: auth get failed: failed to find osd.3...
- 09:53 AM Bug #46748 (Resolved): Module 'cephadm' has failed: auth get failed: failed to find osd.32 in key...
- It was purged yesterday: ...
- 02:44 PM Bug #46098: Exception adding host using cephadm
- same here. trying to add a fresh debian buster VM with all updates installed (no additional packages like docker pres...
- 12:05 PM Bug #46103 (Duplicate): Restart service command restarts all the services and accepts service typ...
- 11:38 AM Bug #46103: Restart service command restarts all the services and accepts service type too
- Actually the restart command fails....
- 11:12 AM Bug #46103 (New): Restart service command restarts all the services and accepts service type too
- 11:10 AM Bug #46103: Restart service command restarts all the services and accepts service type too
- Another confusing aspect is it accepts both service type and service name. Is this also intended?...
- 07:42 AM Bug #46745: cephadm does not set systemd dependencies when using docker
- Putting the following in the systemd unit file solves the issue (see the sketch below):
After=network-online.target local-fs.target time-sync.targ...
- 07:34 AM Bug #46745 (Resolved): cephadm does not set systemd dependencies when using docker
- When using docker, systemd units produced by cephadm do not have any dependencies on docker.service.
The effect is t...
- 04:04 AM Bug #46740 (Fix Under Review): mgr/cephadm: restart of daemon reports host is empty
- 03:57 AM Bug #46740 (Resolved): mgr/cephadm: restart of daemon reports host is empty
- When you run "ceph orch daemon restart prometheus.<host>" the immediate response is OK ("restart prometheus.rhs-srv-0...
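A hedged sketch of the drop-in suggested in the Bug #46745 comment above. The unit name embeds the cluster fsid, so "<fsid>" is a placeholder, and Requires=docker.service is an assumption beyond the quoted After= line.
<pre>
sudo mkdir -p /etc/systemd/system/ceph-<fsid>@.service.d
sudo tee /etc/systemd/system/ceph-<fsid>@.service.d/docker-deps.conf <<'EOF'
[Unit]
After=network-online.target local-fs.target time-sync.target docker.service
Requires=docker.service
EOF
sudo systemctl daemon-reload
</pre>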
07/28/2020
- 02:57 PM Feature #46182: cephadm should use the same image reference across the cluster
- I agree. Users won't get it.
- 02:57 PM Feature #44886 (Resolved): cephadm: allow use of authenticated registry
- 02:37 PM Tasks #46551 (In Progress): cephadm: Add a better hint how to add a host
- 09:04 AM Bug #46726 (Fix Under Review): cephadm: deploying of monitoring images partially broken
- 08:24 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- Seems podman on CentOS 7 is broken?
- 08:22 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- Brad Hubbard wrote:
> /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smit...
- 03:13 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smithi/5224005
/a/yuriw-20...
07/27/2020
- 04:17 PM Bug #46726 (In Progress): cephadm: deploying of monitoring images partially broken
- https://github.com/ceph/ceph-salt/pull/299
- 02:57 PM Bug #46726 (Resolved): cephadm: deploying of monitoring images partially broken
- On a pacific cluster:
Jul 27 16:46:44 master bash[14902]: INFO:cephadm:stat:stderr Error: unable to pull ceph/ceph...
- 03:32 PM Bug #46704: container_linux.go:349: "exec: \"stat\": executable file not found
- funny thing is: it was eventually deployed:...
- 02:51 PM Documentation #45858 (Fix Under Review): `ceph orch status` doesn't show in progress actions
- 02:48 PM Documentation #46377 (Pending Backport): cephadm: Missing 'service_id' in last example in orchest...
- 02:47 PM Documentation #46701 (Fix Under Review): remove `alias ceph='cephadm shell -- ceph'`
07/24/2020
- 08:45 PM Documentation #44354: cephadm: Log messages are missing
- Indeed, this was most probably caused by "Storage=auto" (the default) and non-existent /var/log/journal
Now, it pr...
- 08:15 PM Bug #46704 (Can't reproduce): container_linux.go:349: "exec: \"stat\": executable file not found
- Description: rados:cephadm/smoke-roleless/{distro/ubuntu_18.04_podman start}
https://pulpito.ceph.com/swagner-2020...
- 03:28 PM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Hi, I've got the same issue:
OS: CentOS 8.2.2004
Ceph: Octopus (15.2.4)
Nodes: 3
- 08:16 AM Documentation #46701 (Resolved): remove `alias ceph='cephadm shell -- ceph'`
- this will lead to unexpected behavior, like...
07/23/2020
- 09:19 PM Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- Sebastian Wagner wrote:
> indeed. Might relate to https://github.com/ceph/ceph/blob/3b31eea7fdfe9805259fdcc606e0a184...
- 04:18 PM Bug #46687: MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- indeed. Might relate to https://github.com/ceph/ceph/blob/3b31eea7fdfe9805259fdcc606e0a1844431a8d8/src/python-common/...
- 08:55 AM Bug #46687 (Can't reproduce): MGR_MODULE_ERROR: Module 'cephadm' has failed: No filters applied
- I made a clean install of Ceph using cephadm, then tried to create OSDs via the web interface like that and got a failure:
!im...
- 08:16 PM Bug #46098: Exception adding host using cephadm
!im... - 08:16 PM Bug #46098: Exception adding host using cephadm
- I have hit this as well, installing cephadm on debian 10 buster with an apt upgrade done.
I have the playbook that ...
- 12:53 PM Documentation #46691 (New): Document manual deployment of OSDs
- Sometimes, users want to deploy OSDs completely manually. Maybe drive groups are not expressive enough, or there is ...
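One concrete starting point such documentation could cover (a real command, though whether it is "manual" enough for the ticket's scope is unclear):
<pre>
# Create a single OSD on an explicitly named host and device.
ceph orch daemon add osd myhost:/dev/sdb
</pre>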
- 08:27 AM Bug #46685 (Won't Fix): mgr/rook: OSD devices are marked as available
- I deployed a Rook Ceph cluster with the latest master/octopus image today; devices are marked as available even though they are ...
07/22/2020
- 10:40 PM Documentation #46377 (In Progress): cephadm: Missing 'service_id' in last example in orchestrator...
- 01:07 PM Bug #46103: Restart service command restarts all the services and accepts service type too
- Sebastian Wagner wrote:
> REFRESHED doesn't mean the daemon was restarted. Just that the status was refreshed
It ...
- 10:52 AM Bug #46103 (Need More Info): Restart service command restarts all the services and accepts servic...
- 10:52 AM Bug #46103: Restart service command restarts all the services and accepts service type too
- REFRESHED doesn't mean the daemon was restarted. Just that the status was refreshed
- 01:00 PM Bug #46045 (New): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- Sebastian Wagner wrote:
> this was due to the fact that the FS tests don't enable the dashboard?
Yes, even other ...
- 10:58 AM Bug #46045 (Need More Info): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- this was due to the fact that the FS tests don't enable the dashboard?
- 12:10 PM Bug #45819 (Can't reproduce): cephadm: Possible error in deploying-nfs-ganesha docs
- 12:06 PM Bug #45832 (Resolved): cephadm: "ceph orch apply mon" moves daemons
- See the note in https://docs.ceph.com/docs/master/cephadm/install/#deploy-additional-monitors-optional
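The pattern that note describes, sketched (commands as in the linked install docs; host name and IP are illustrative):
<pre>
# Disable automatic mon placement, then pin mons to explicit hosts.
ceph orch apply mon --unmanaged
ceph orch daemon add mon newhost1:10.1.2.123
</pre>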
- 12:05 PM Bug #44559 (In Progress): cephadm logs an invalid stat command
- I still have this on my plate. The "fix" here is to include the double quotes in the log message. This isn't a bug _p...
- 12:00 PM Feature #44775 (Resolved): cephadm: NFS stage 2
- 11:59 AM Feature #44287 (Rejected): cephadm: Graceful Shutdown of the Whole Ceph Cluster
- Can't do that in cephadm; needs Salt or Ansible.
- 11:57 AM Documentation #44867 (Rejected): cephadm: document "package" mode
- please don't
- 11:55 AM Bug #44739 (Can't reproduce): ceph.conf parameters set via "cephadm bootstrap -c" are not persist...
- 11:53 AM Bug #44968 (Can't reproduce): cehpadm: another "RuntimeError: Set changed size during iteration"
- 11:50 AM Documentation #44354: cephadm: Log messages are missing
- We need this in the docs:
> Create the /var/log/journal directory, so journal logs will be persisted.
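A minimal sketch of that step (standard systemd behavior: with Storage=auto the journal is only persisted when /var/log/journal exists):
<pre>
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald   # or: sudo journalctl --flush
</pre>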
- 11:46 AM Bug #45576 (Resolved): cephadm: `cephadm ls` does not play well with `cephadm logs`
- Resolved in the meantime.
- 11:41 AM Documentation #45728 (Resolved): Add an example for custom images to the "bootstrap a new cluster...
- I think this is done by https://docs.ceph.com/docs/master/man/8/cephadm/#synopsis
- 11:40 AM Bug #45399 (Resolved): NFS Ganesha : Error searching service specs for all nodes after nfs orch a...
- 11:39 AM Bug #45791 (Can't reproduce): cephadm: Upgrade is failing octopus on centos 8 %d format a numbe ...
- 11:38 AM Documentation #45896: cephadm: Need a manual howto: "upgrade the cluster manually"
- I think we should document how to manually update the cluster. We'll need this for troubleshooting anyway.
- 11:35 AM Bug #45808 (New): cephadm/test_adoption.sh: Error parsing image configuration: Invalid status cod...
- For tasks/cephadm.py we use the local registry; for test_adoption we don't. Low priority until this appears again.
- 11:33 AM Bug #45867 (Resolved): orchestrator: Errors while deployment are hidden behind the log wall
- https://github.com/ceph/ceph/pull/35456
- 11:03 AM Bug #45976 (Duplicate): cephadm: prevent rm-daemon from removing legacy daemons
- 10:57 AM Bug #43816 (Resolved): cephadm: Unable to use IPv6 on "cephadm bootstrap"
- 10:56 AM Documentation #46133 (Pending Backport): encourage users to apply YAML specs instead of using the...
- 10:56 AM Documentation #46168 (Pending Backport): Add information about 'unmanaged' parameter
- 10:52 AM Bug #46256 (Can't reproduce): OSDS are getting re-added to the cluster, despite unamanged=True
- 10:51 AM Bug #46268 (Pending Backport): cephadm: orch apply -i: RGW service spec id might not contain a zone
- 10:50 AM Bug #46271 (Pending Backport): podman pull: transient "Error: error creating container storage: e...
- 10:50 AM Support #45940 (Closed): Orchestrator to be able to deploy multiple OSDs per single drive
- 10:49 AM Bug #43681 (Fix Under Review): cephadm: Streamline RGW deployment
- 10:45 AM Bug #45872 (Pending Backport): ceph orch device ls exposes the `device_id` under the DEVICES colu...
- 10:44 AM Feature #45263 (Pending Backport): osdspec/drivegroup: not enough filters to define layout
- 10:42 AM Bug #46398 (Pending Backport): cephadm: can't use custom prometheus image
- 10:41 AM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Let's wait until https://github.com/containers/podman/issues/6933 is resolved.
- 10:38 AM Bug #46540 (Fix Under Review): cephadm: iSCSI gateways problems.
- 10:37 AM Support #46547 (Need More Info): cephadm: Exception adding host via FQDN if host was already added
- 10:36 AM Bug #46412 (Need More Info): cephadm trying to pull mimic based image
- 10:36 AM Bug #46412: cephadm trying to pull mimic based image
- works for me: ...
- 10:35 AM Support #46384 (Resolved): 15.2.4 and cephadm - mds not starting
- 10:35 AM Bug #46561 (Need More Info): cephadm: monitoring services adoption doesn't honor the container image
- 09:27 AM Bug #46654: Unsupported podman container configuration via systemd
- Interestingly, Red Hat recommends KillMode=none for this setup: https://www.redhat.com/sysadmin/podman-shareable-syst...
- 09:27 AM Bug #46654: Unsupported podman container configuration via systemd
- relates to https://github.com/ceph/ceph/pull/33162
- 09:19 AM Feature #46666: cephadm: Introduce 'container' specification to deploy custom containers
- https://github.com/ceph/ceph/blob/12a5c4669828a65ef23d87d22e9a6bfaad68691e/src/cephadm/cephadm#L3047
Needs a new cas...
- 08:57 AM Feature #46666 (Resolved): cephadm: Introduce 'container' specification to deploy custom containers
- By introducing a 'ContainerSpec' it is possible to deploy custom containers and configurations using cephadm without ...
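A hypothetical sketch of what such a spec could look like (field names and the image are illustrative; the feature was only being introduced at this point):
<pre>
cat > custom-container.yaml <<'EOF'
service_type: container
service_id: foo
placement:
  host_pattern: '*'
spec:
  image: docker.io/library/nginx:latest
EOF
ceph orch apply -i custom-container.yaml
</pre>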
- 08:32 AM Bug #46665 (Resolved): cephadm plugin: Failure to start service stops service loop; no other inst...
- In the cephadm plugin, _apply_service doesn't handle any exception from create(), and so the whole list of hosts is a...
07/21/2020
- 12:23 PM Bug #46655 (Resolved): cephadm rm-cluster: Systemd ceph.target not deleted
- The systemd ceph.target persists (active and running) after deleting the cluster using cephadm rm-cluster.
This not pre...
- 12:16 PM Bug #46654 (Resolved): Unsupported podman container configuration via systemd
- Description of problem:
As per https://bugzilla.redhat.com/show_bug.cgi?id=1834974#c4 running podman containers via ...
- 09:55 AM Feature #46651 (Rejected): cephadm: allow daemon/service restarts on a host basis
- Currently we have...
07/20/2020
- 05:43 PM Bug #46385: Can i run rados and s3 compatible object storage device in Cephadm?
- Sathvik Vutukuri wrote:
> Sathvik Vutukuri wrote:
> > Sathvik Vutukuri wrote:
> > > I have tried cephadm for ceph ...
- 11:56 AM Bug #46098 (Fix Under Review): Exception adding host using cephadm
- 02:37 AM Bug #46098: Exception adding host using cephadm
- I've hit this with base RHEL 8.2 physical hosts. In my case the new hosts I tried to add didn't have python3, lvm or ...
- 10:09 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- Some thoughts:
* https://github.com/ceph/ceph/pull/35719 remove centos_7 from suites/rados/cephadm
* https://gith...
- 09:57 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- https://pulpito.ceph.com/kchai-2020-07-18_13:35:09-rados-wip-kefu-testing-2020-07-18-1927-distro-basic-smithi/5237690
- 09:50 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- /a/kchai-2020-07-18_13:35:09-rados-wip-kefu-testing-2020-07-18-1927-distro-basic-smithi/5237560
also centos 7.6 (b...
- 09:50 AM Bug #44990 (Resolved): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file o...
- /a/kchai-2020-07-18_13:35:09-rados-wip-kefu-testing-2020-07-18-1927-distro-basic-smithi/5237560 is actually #46529
07/19/2020