Activity
From 01/31/2021 to 03/01/2021
03/01/2021
- 10:42 PM Bug #46745 (Fix Under Review): cephadm does not set systemd dependencies when using docker
- 09:17 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/sage-2021-03-01_20:25:17-rados-wip-sage4-testing-2021-03-01-1042-distro-basic-gibba/5924180
description: rados/...
- 07:16 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- Deepika Upadhyay wrote:
> /ceph/teuthology-archive/yuriw-2021-02-22_20:19:50-rados-wip-yuri-testing-2021-02-22-0812-...
- 06:19 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- /ceph/teuthology-archive/yuriw-2021-02-22_20:19:50-rados-wip-yuri-testing-2021-02-22-0812-octopus-distro-basic-smithi...
- 04:50 PM Bug #49539 (Resolved): test_cephadm.sh failure: container.alertmanager.a doesn't start
- the problem was that the container image was being pulled from docker.io and was a day old and didn't have the needed...
- 04:49 PM Bug #49539: test_cephadm.sh failure: container.alertmanager.a doesn't start
- Sage Weil wrote:
> [...]
> /a/sage-2021-02-28_18:35:15-rados-wip-sage-testing-2021-02-28-1217-distro-basic-smithi/5...
- 11:38 AM Bug #49551: cephadm journald logs are mangled
- I think using std::clog instead of std::cerr to write the log can mitigate this issue, because clog buffers content but...
- 11:13 AM Bug #49551 (Resolved): cephadm journald logs are mangled
- Regression introduced by https://github.com/ceph/ceph/pull/37729
to fix a regression introduced by https://github....
- 11:23 AM Fix #49336 (Fix Under Review): re-enable coredumps for cephadm
02/28/2021
02/26/2021
- 10:07 PM Bug #49522 (Resolved): cephadm: ValueError: not enough values to unpack (expected 2, got 1)
- ...
- 04:43 PM Bug #49506 (Fix Under Review): cephadm: `cephadm ls` broken for SUSE's downstream alertmanager co...
- 04:33 PM Bug #49506 (Resolved): cephadm: `cephadm ls` broken for SUSE's downstream alertmanager container
- ...
- 11:01 AM Support #49499: new osds created by orchestrator running different image version
- Already solved with help from mailing list, sorry for the noise.
The fix (thanks to Tobias Fisher):
Hi Kenneth,
...
02/25/2021
- 07:01 PM Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- ...
- 04:21 PM Support #49499 (Resolved): new osds created by orchestrator running different image version
- Hi,
I installed a new host for OSDs today; they were automatically created using drivegroups service specs (https://do...
- 03:16 PM Support #49497 (Resolved): Cephadm fails to upgrade from 15.2.8 to 15.2.9
- I'm trying to upgrade cluster from 15.2.8 to 15.2.9 by running...
- 11:24 AM Feature #49492 (Resolved): cephadm: Spine-Leaf network architecture
- https://access.redhat.com/webassets/avalon/d/Red_Hat_OpenStack_Platform-13-Spine_Leaf_Networking-en-US/images/0bd8baa...
- 10:46 AM Documentation #49488: Document service specs for iSCSI deployment
- fixed by https://github.com/ceph/ceph/pull/39551
https://ceph--39551.org.readthedocs.build/en/39551/cephadm/iscsi/
- 08:58 AM Documentation #49488 (Resolved): Document service specs for iSCSI deployment
- There's currently no documentation on how to deploy iSCSI gateways (or, if there is, I can't find it beyond what's li...
- 10:34 AM Tasks #49490: cephadm additions/changes to support everything rgw.py needs
- config key/value: https://github.com/ceph/ceph/pull/39648
- 10:32 AM Tasks #49490 (Resolved): cephadm additions/changes to support everything rgw.py needs
- From: https://pad.ceph.com/p/rgw-cephadm
- adopt realm.zone.id naming for tests
- make sure PlacementSpec nam...
- 10:25 AM Bug #49467 (Fix Under Review): cephadm bootstrap process fails in open_ports on octopus branch: T...
- 01:29 AM Bug #49467 (Resolved): cephadm bootstrap process fails in open_ports on octopus branch: TypeError...
- There is no problem until v15.2.8, but an error occurs during bootstrap process in open_ports of the latest octopus b...
- 08:07 AM Bug #49456: cephadm dashboard test: failed to connect to the server
- qa/workunits/cephadm/test_dashboard_e2e.sh (for SEO)
- 06:36 AM Bug #49484 (Resolved): rados/upgrade/pacific-x/parallel fails due to more than 1 version shown af...
- ...
- 06:33 AM Bug #49483 (Can't reproduce): CommandFailedError: Command failed on smithi104 with status 1: 'sud...
- https://sentry.ceph.com/organizations/ceph/issues/4720/...
- 03:40 AM Bug #49436 (In Progress): cephadm bootstrap fails to create /etc/ceph directory
- the creation of the missing dir is there in the src for octopus and pacific. In both examples provided, the source is...
02/24/2021
- 10:50 PM Bug #49462 (Fix Under Review): cephadm: add iscsi and nfs to upgrade
- 09:44 PM Bug #49462 (Resolved): cephadm: add iscsi and nfs to upgrade
- these daemons are not upgraded during the upgrade process but they should be since they use the Ceph image.
- 05:19 PM Bug #49191 (Duplicate): cephadm: service_type: osd: Failed to apply: ''NoneType'' object has no a...
- already fixed
- 02:35 PM Bug #49456: cephadm dashboard test: failed to connect to the server
- /a/kchai-2021-02-24_13:00:59-rados-wip-kefu-testing-2021-02-24-1742-distro-basic-smithi/5911838/
- 02:28 PM Bug #49456 (Can't reproduce): cephadm dashboard test: failed to connect to the server
- https://pulpito.ceph.com/swagner-2021-02-22_13:53:31-rados:cephadm-wip-swagner3-testing-2021-02-22-1135-distro-basic-...
- 12:42 PM Bug #49280 (Duplicate): mds/orch: bare/short hostname as a number is not supported
- will be fixed by #46219
- 11:44 AM Bug #49449 (Won't Fix): cephadm: synchronize container timezone with host
- Right now, the timezone within the container differs from the host. Timestamps in the future (or past) will hinder tr...
- 04:06 AM Feature #47711 (Fix Under Review): mgr/cephadm: add a feature to examine the host facts to look f...
02/23/2021
- 11:25 PM Bug #48870: cephadm: Several services in error status after upgrade to 15.2.8: unrecognized argum...
- Sebastian, I am seeing something similar in pacific upgrade tests, want me create a new tracker issue?
rados/cepha...
- 09:26 PM Bug #49439: logrotate should clean up container logs
- is this caused by cephadm?
- 06:37 PM Bug #49439: logrotate should clean up container logs
- See also https://bugzilla.redhat.com/show_bug.cgi?id=1892170
https://access.redhat.com/solutions/5434641
- 06:35 PM Bug #49439 (Resolved): logrotate should clean up container logs
- ...
- 05:19 PM Bug #48598 (Can't reproduce): "ceph orch daemon redeploy" fails with [errno 13] RADOS permission ...
- these tests pass now. There have been a ton of changes/fixes in the last 2 months.
- 04:18 PM Bug #49436 (Can't reproduce): cephadm bootstrap fails to create /etc/ceph directory
- In trials of Octopus and Pacific, cephadm fails to create /etc/ceph during the bootstrapping process.
These two et...
- 04:16 PM Feature #47145: cephadm: Multiple daemons of the same service on single host
- ...
- 04:10 PM Fix #49336: re-enable coredumps for cephadm
- two issues:
1. messenger v2 bug
2. "wait for snap complete" hangs
- 04:09 PM Fix #49336: re-enable coredumps for cephadm
- sage: sending signals seemed to work fine.
- 04:09 PM Fix #49336: re-enable coredumps for cephadm
- https://github.com/ceph/ceph/pull/39530
- 03:43 PM Bug #49435 (Closed): cephadm: rgw not getting deployed due to HEALTH_WARN
- We should provide a way for users to deploy RGW anyway and at the same time prevent radosgw-admin to block indefinite...
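The "prevent radosgw-admin from blocking indefinitely" idea above amounts to bounding an external command with a timeout. A minimal sketch (hypothetical helper, not the actual cephadm code):

```python
import subprocess

def run_with_timeout(cmd: list, timeout: int = 30) -> str:
    """Run an external command, killing it if it blocks too long
    (e.g. a CLI tool waiting on an unhealthy cluster)."""
    try:
        res = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
        return res.stdout
    except subprocess.TimeoutExpired:
        raise RuntimeError(f"{cmd[0]} did not finish within {timeout}s")
```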
- 03:16 PM Bug #49411 (Fix Under Review): Rook: "ceph orch ls" command fails with KeyError: 'crashcollector'
- 03:16 PM Feature #47261 (Fix Under Review): cephadm integration for cephfs-mirror daemon
- 11:25 AM Feature #49159 (Fix Under Review): "cephadm ceph-volume activate" does not support cephadm
02/22/2021
- 02:33 PM Bug #49126 (Pending Backport): rook: 'ceph orch ls' throws type error
- 11:19 AM Bug #49411 (Resolved): Rook: "ceph orch ls" command fails with KeyError: 'crashcollector'
- ...
- 09:50 AM Feature #49269: cephadm: upgrade stuck in repeating sleep when a host is offline
- Gunther Heinrich wrote:
> There might be some cases where either the host is offline for a longer amount of time or ...
- 08:15 AM Feature #49269: cephadm: upgrade stuck in repeating sleep when a host is offline
- Sebastian Wagner wrote:
> Did you verify that the upgrade continues, if the host is online again?
Yes, the upgrade...
- 01:43 AM Feature #49407 (Fix Under Review): Enable the ability of cephadm to trigger libstoragemgmt info f...
- When this information is gathered, cephadm configuration checks will be able to;
- raise a healthcheck for any dis...
- 01:41 AM Feature #49407 (Resolved): Enable the ability of cephadm to trigger libstoragemgmt info from ceph...
- The default in ceph-volume inventory runs without libstoragemgmt integration, so by default this enhanced information...
02/20/2021
- 02:40 PM Bug #49143 (Resolved): rados/upgrade/pacific-x/parallel: monclient(hunting): authenticate timed o...
02/19/2021
- 08:48 PM Bug #48695 (Closed): cephadm: upgrade should not get blocked, on intermediate UPGRADE_FAILED_PULL
- The problem only is an issue when upgrading from older versions (that don't include https://github.com/ceph/ceph/comm...
- 06:52 PM Bug #49143 (Fix Under Review): rados/upgrade/pacific-x/parallel: monclient(hunting): authenticate...
- 12:12 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- ...
02/18/2021
- 09:27 PM Bug #49366 (In Progress): 'cephadm version' will show a trace back if image comes from a authenti...
- 09:26 PM Bug #49366 (Resolved): 'cephadm version' will show a trace back if image comes from a authenticat...
- @
[root@magna081 sbin]# sudo cephadm version
Non-zero exit code 125 from /bin/podman run --rm --ipc=host --net=host...
- 07:13 PM Bug #49239: cephadm cannot deploy OSDs with selinux-policy-minimum
- JuanMi's backport at https://github.com/ceph/ceph/pull/39636
- 05:39 PM Bug #48754 (Pending Backport): "failed xx 'sudo systemctl start ceph-None@rgw.client.1'" in upgra...
- 04:22 PM Bug #49350 (Duplicate): cephadm/test_dashboard_e2e.sh: AssertionError: Timed out retrying: expect...
- 03:40 PM Bug #48695: cephadm: upgrade should not get blocked, on intermediate UPGRADE_FAILED_PULL
- https://github.com/ceph/ceph/blob/13c0f27837d79655dc27f0f15554cf3e501fa7e4/src/cephadm/cephadm#L3099-L3103
- 06:26 AM Feature #47711 (In Progress): mgr/cephadm: add a feature to examine the host facts to look for co...
- 05:00 AM Bug #49348: cephadm: giving nonexistent service for service action command should return error
- Thanks a lot for working on this. Is this linked to the feature request / bug #49246 I issued some days ago? In that ...
- 04:12 AM Tasks #47369: Ceph scales to 100's of hosts, 1000's of OSDs....can orchestrator?
- There was an issue with bucket aggregation in the heatmap panel, so instead of 7 clusters with 4,096 - 5,792 OSDs eac...
- 12:26 AM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- /a/yuriw-2021-02-15_19:34:09-rados-octopus-distro-basic-smithi/5884688/
02/17/2021
- 11:19 PM Bug #49350 (Duplicate): cephadm/test_dashboard_e2e.sh: AssertionError: Timed out retrying: expect...
- ...
- 11:14 PM Bug #49143: rados/upgrade/pacific-x/parallel: monclient(hunting): authenticate timed out after 30...
- /a/nojha-2021-02-16_17:14:48-rados-master-distro-basic-smithi/5887669
- 10:10 PM Bug #49013: cephadm: Service definition causes some container startups to fail
- Note: https://github.com/ceph/ceph/pull/39385 is intended to allow redeploy of daemons put into the state described i...
- 10:04 PM Bug #49013 (Fix Under Review): cephadm: Service definition causes some container startups to fail
- 10:02 PM Bug #49348 (Fix Under Review): cephadm: giving nonexistent service for service action command sho...
- 09:46 PM Bug #49348 (Resolved): cephadm: giving nonexistent service for service action command should retu...
- Right now, if a user runs a command like "ceph orch redeploy . . ." and gives an incorrect service name no output is ...
- 10:00 PM Bug #48142 (Pending Backport): rados:cephadm/upgrade/mon_election tests are failing: CapAdd and p...
- 09:41 PM Bug #49339 (Fix Under Review): cephadm/osd: OSD draining: don't mark OSDs out
- 04:29 PM Bug #49339 (Resolved): cephadm/osd: OSD draining: don't mark OSDs out
- there is already a --replace flag; we just need to fix the behavior in the non-replace case to set the crush weight to 0 inste...
- 04:21 PM Feature #47261: cephadm integration for cephfs-mirror daemon
- Hey Sebastian,
This tracker is currently unassigned. Would we want to add an assignee? (and to whom)
- 03:20 PM Fix #49336 (Resolved): re-enable coredumps for cephadm
- we reverted the podman --init PR. We need to find out why we have a problem there.
- 12:14 PM Bug #45452 (Closed): cephadm: while removing ceph-common, unable to remove directory '/var/lib/ce...
- 12:11 PM Bug #45907 (Resolved): cephadm: daemon rm for managed services is completely broken
- 11:38 AM Bug #47381 (Can't reproduce): "ceph orch apply --dry-run" reports empty osdspec even though OSDs ...
- 11:25 AM Bug #48930: when removing the iscsi service, the gateway config object remains
- this needs to be implemented in one of the methods here for iscsi:
https://github.com/ceph/ceph/blob/9bdc3b9f6fad2b...
- 10:23 AM Bug #45725 (Can't reproduce): cephadm: Further improve "Failed to infer CIDR network for mon ip"
02/16/2021
- 05:27 PM Bug #48930: when removing the iscsi service, the gateway config object remains
- but who is creating the object in the first place?
- 03:41 PM Bug #48930: when removing the iscsi service, the gateway config object remains
- Sebastian Wagner wrote:
> who's creating the gateway.conf object? I can't find it in https://github.com/ceph/ceph/bl...
- 03:35 PM Bug #48930: when removing the iscsi service, the gateway config object remains
- who's creating the gateway.conf object? I can't find it in https://github.com/ceph/ceph/blob/master/src/pybind/mgr/ce...
- 03:30 PM Bug #48597 (Pending Backport): pybind/mgr/cephadm: mds_join_fs not cleaned up
- 03:05 PM Tasks #49306 (New): cephadm teuthology: Add RGW client workload
- The current test suite is rather limited as we only verify the deployment of the cluster.
Would be great to have ...
- 03:01 PM Cleanup #49305 (New): cephadm: don't block the CLI handler thread
- We still have a lot of places where we run remote SSH calls from within the CLI handler thread:...
- 01:14 PM Bug #48598: "ceph orch daemon redeploy" fails with [errno 13] RADOS permission denied
- reminds me of
https://www.reddit.com/r/ceph/comments/kv3z7h/cephadm_osd_deploy_errors/
but this doesn't make ...
02/15/2021
- 04:41 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- https://github.com/containers/podman/issues/9382
- 04:04 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- * https://github.com/containers/podman/blob/30607d727895036d32aede44c1b4375849566433/cmd/podman/root.go#L161
* https... - 03:25 PM Bug #49293 (Rejected): podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check wit...
- cephadm failed to pull the quay image using podman:...
- 03:55 PM Cleanup #45118 (Closed): orch (pacific): cleanup CLI
- I think the cli is "done" for now
- 01:02 PM Documentation #45767 (In Progress): documentation: disable the scheduler: unmanaged=True + ceph o...
- 12:35 PM Documentation #45833 (In Progress): cephadm: properly document labels
- 12:04 PM Feature #48560: Spec files for each daemon in the monitoring stack
- This is the current **monitoring.yaml**:...
- 12:01 PM Documentation #49214 (In Progress): Docs: howto Restore MON quorum
- 10:56 AM Documentation #48974: document repo_digest
- it's enabled by default now
02/13/2021
- 01:01 AM Bug #47921: Bad auth caps for orchestrated mds daemon
- the problem is probably not related to the missing caps. Are you *really* sure this is the correct solution?
- 12:54 AM Bug #49287 (New): podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.s...
- ...
02/12/2021
- 05:24 PM Tasks #47369: Ceph scales to 100's of hosts, 1000's of OSDs....can orchestrator?
- I was talking with Yaarit for getting real figures from Telemetry, and she mentioned the following ones:
* RBD ima...
- 04:15 PM Bug #49280: mds/orch: bare/short hostname as a number is not supported
- ...
- 04:13 PM Bug #49280 (Duplicate): mds/orch: bare/short hostname as a number is not supported
- If the bare/short hostname is a simple id (example: 2.storage.domain):
----...
- 03:46 PM Bug #49277: cephadm bootstrap --apply-spec <cluster.yaml> hangs
- Reproduced just now and attaching logs and versions.
using latest cephadm from https://download.ceph.com/rpm-octo...
- 02:40 PM Bug #49277 (Duplicate): cephadm bootstrap --apply-spec <cluster.yaml> hangs
- The feature introduced by https://tracker.ceph.com/issues/44873 seems to have the following flaw.
If I bootstrap...
- 01:48 PM Bug #49276 (Duplicate): Create multiple RGW instances in the same realm , same zone fails using c...
- The scenario is to create 2 rgw instances on the same node (different ports) under the same realm and same zone.
Cu...
- 01:13 PM Bug #49273 (Resolved): cephadm fails deployment of node-exporter when ipv6 is disabled
- This is due to the `port_in_use` method checking for both IPv4 and IPv6 support in the same try/except block.
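When both address families share one try/except, an IPv6 failure (e.g. IPv6 disabled on the host) aborts the whole check even though the IPv4 probe would have succeeded. A minimal per-family sketch (illustrative only, not the actual cephadm `port_in_use` implementation):

```python
import socket

def port_in_use(port: int) -> bool:
    """Probe IPv4 and IPv6 separately so that an unsupported address
    family is skipped instead of aborting the whole check."""
    for family, addr in ((socket.AF_INET, "127.0.0.1"),
                         (socket.AF_INET6, "::1")):
        try:
            with socket.socket(family, socket.SOCK_STREAM) as s:
                s.settimeout(1)
                if s.connect_ex((addr, port)) == 0:
                    return True
        except OSError:
            # address family unsupported (e.g. IPv6 disabled): skip it
            continue
    return False
```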
- 12:45 PM Feature #49249 (Duplicate): cephadm: Automatically create OSDs after reinstalling base os
- 12:43 PM Feature #49159: "cephadm ceph-volume activate" does not support cephadm
- Great! Please verify that the container image used is consistent across the cluster after running the adoption process.
- 09:31 AM Feature #49159: "cephadm ceph-volume activate" does not support cephadm
- Hi Sebastian,
Since I would need to do this for +500 OSDs, doing this manually is not really an option..
I guess ...
- 12:30 PM Feature #49269: cephadm: upgrade stuck in repeating sleep when a host is offline
- Did you verify that the upgrade continues, if the host is online again?
I'm a bit inclined to close this as works...
- 09:51 AM Feature #49269 (New): cephadm: upgrade stuck in repeating sleep when a host is offline
- Even though the documentation clearly mentions that all hosts should be online before you initiate an upgrade I never...
- 12:01 AM Feature #47885: Add networking checks
- Currently preparing a PR which deals with requirements 1 and 4.
02/11/2021
- 07:34 PM Bug #49239: cephadm cannot deploy OSDs with selinux-policy-minimum
- Follow-on fix for systems that do not have /usr/share/empty (eg. SUSE): https://github.com/ceph/ceph/pull/39424
An...
- 01:34 AM Bug #49239 (Pending Backport): cephadm cannot deploy OSDs with selinux-policy-minimum
- 07:12 PM Bug #48142 (Fix Under Review): rados:cephadm/upgrade/mon_election tests are failing: CapAdd and p...
- 06:24 PM Feature #49159: "cephadm ceph-volume activate" does not support cephadm
- does https://tracker.ceph.com/issues/46691#note-1 help?
- 05:55 PM Feature #49159: "cephadm ceph-volume activate" does not support cephadm
- Hi,
Thanks for looking into this!
Well, it's a chicken-egg problem:
This is a node that I just reinstalled to a...
- 10:22 AM Feature #49159: "cephadm ceph-volume activate" does not support cephadm
- there should not be such a big difference between running ceph-volume natively vs in a container.
Please have a lo...
- 04:49 PM Feature #49249 (Duplicate): cephadm: Automatically create OSDs after reinstalling base os
- #46691 provides the manual process of deploying cephadm OSDs.
we should probably provide an automated way to do t...
- 03:51 PM Bug #48598: "ceph orch daemon redeploy" fails with [errno 13] RADOS permission denied
- might be related to a wrong or non-up-to-date ceph.conf?
- 12:25 PM Feature #43696 (Rejected): cephadm: check that units start
- this would make the daemon deployment of cephadm super slow
- 12:23 PM Documentation #45383 (Can't reproduce): Cephadm.py OSD deployment fails: full device path or just...
- 12:19 PM Support #49247 (Resolved): cephadm: Add support for single daemon redeployment
- already done: https://docs.ceph.com/en/latest/api/mon_command_api/#orch-daemon-redeploy
- 12:11 PM Support #49247 (Resolved): cephadm: Add support for single daemon redeployment
- Currently, cephadm allows to redeploy services as a whole which sometimes might be a little over the top if only one ...
- 12:17 PM Feature #45770 (Rejected): cephadm: allow count=0 to have services without daemons
- why?
- 12:15 PM Feature #43691 (Resolved): cephadm: upgrade major releases
- done in the meantime
- 12:14 PM Feature #44606 (Resolved): cephadm: RGW firewall + static port
- resolved in the meantime
- 12:14 PM Bug #46134 (Can't reproduce): ceph mgr should fail if it cannot add osd
- 12:13 PM Feature #45769 (Resolved): cephadm: Don't deploy on offline hosts
- done in the meantime
- 12:12 PM Feature #46265 (Duplicate): test cephadm MDS deployment
- 12:11 PM Documentation #46335: Document "Using cephadm to set up rgw-nfs"
- Thing is, I don't want users to use nfs-rgw. Performance and usability are just poor.
- 12:06 PM Bug #46568 (Can't reproduce): cephadm: Sometimes setting global container_image does not work
- please reopen, if it is still reproducible
- 12:05 PM Support #46547 (Resolved): cephadm: Exception adding host via FQDN if host was already added
- 12:01 PM Documentation #46691: Document manually deploment of OSDs
- h3. How to manually (re-)deploy OSDs
In order to manually deploy cephadm OSDs, first run ceph-volume (skip this ...
- 11:56 AM Documentation #44354 (Duplicate): cephadm: Log messages are missing
- fixed by using journald as log driver
- 11:55 AM Bug #46990 (Can't reproduce): execnet: EOFError: couldnt load message header, expected 9 bytes, g...
- 11:54 AM Bug #44673 (Rejected): cephadm: `orch apply` and `orch daemon add` use completely different code ...
- 11:53 AM Bug #47401 (Can't reproduce): improve drive group validation
- workaround: Use *ceph orch apply -i* instead of *ceph orch apply osd -i*
- 11:52 AM Feature #47533 (Rejected): Scan for dangling ceph auth entries
- we now clean up auth entities.
- 11:50 AM Bug #47702 (Can't reproduce): upgrading via ceph orch upgrade start results in partial applicatio...
- 11:49 AM Bug #47694 (Won't Fix): downgrading via ceph orch upgrade start results in partial application an...
- we have to support downgrades to some degree. closing as it worked eventually
- 11:48 AM Bug #47726 (Resolved): disk selector should pass all devices to ceph-volume (available and unavai...
- 11:45 AM Bug #46665 (Resolved): cephadm plugin: Failure to start service stops service loop; no other inst...
- 11:45 AM Bug #47916 (Pending Backport): podman containers running in a detached state do not output logs t...
- 11:44 AM Bug #48105 (Can't reproduce): cephadm.py: failure on interactive on error for archive file handling
- that code changed in the meantime
- 11:42 AM Bug #48171 (Resolved): catatonit not available on CentOS
- pacific now uses...
- 11:41 AM Bug #45808 (Resolved): cephadm/test_adoption.sh: Error parsing image configuration: Invalid statu...
- 11:36 AM Bug #47500 (Won't Fix): Feature <encryption> is not supported" with having it set it to "False"
- downstream issue not upstream
- 11:36 AM Tasks #47369: Ceph scales to 100's of hosts, 1000's of OSDs....can orchestrator?
- yes, we have users with > 1000 osds. that works already :-)
- 11:27 AM Feature #45712 (Duplicate): Add 'state' attribute to ServiceSpec
- 11:25 AM Bug #46247 (Can't reproduce): cephadm mon failure: Error: no container with name or ID ... no suc...
- This was fixed in the meantime
- 11:12 AM Feature #47261 (New): cephadm integration for cephfs-mirror daemon
- 11:10 AM Feature #47261: cephadm integration for cephfs-mirror daemon
- cephfs-mirror daemon
https://github.com/ceph/ceph/blob/72c3b5e6a3a88c40f6b8286cd4b2d6f1a335ed63/doc/man/8/cephfs-mir...
- 11:11 AM Feature #48560 (Need More Info): Spec files for each daemon in the monitoring stack
- 11:11 AM Feature #48560 (New): Spec files for each daemon in the monitoring stack
- 11:06 AM Feature #48560 (Need More Info): Spec files for each daemon in the monitoring stack
- 11:10 AM Bug #47107 (Resolved): device-health-metrics unavailable because image ceph/ceph:latest has smart...
- resolved in ceph-container
- 11:08 AM Feature #48822: Add proper port management to mgr/cephadm
- ...
- 11:05 AM Feature #49246 (Duplicate): cephadm: Display error message when given service name is wrong
- When executing a service command with a wrong service name like...
- 11:05 AM Feature #47145: cephadm: Multiple daemons of the same service on single host
- in order to co-locate daemons, we have to use different ports for those new daemons.
- 11:02 AM Bug #48656: cephadm botched install of ceph-fuse (symbol lookup error)
- tbh, this is somewhat out of scope for cephadm. cephadm mainly cares about containers, not so much about keeping pack...
- 10:57 AM Bug #48442: cephadm: upgrade loops on mixed x86_64/arm64 cluster
- right now, this is somewhat low on our priority list. But in pacific, this should be improved by using repo_digest fo...
- 10:55 AM Bug #48261 (Won't Fix): cephadm ceph-volume inventory -- --format json-pretty: INFO:cephadm:/usr...
- *workaround*: ...
- 10:54 AM Bug #48799 (Can't reproduce): test_cephadm: stderr Job for container.alertmanager.a.service faile...
- gone
- 10:54 AM Subtask #45116 (Resolved): cephadm: RGW Load balancer using HAproxy
- 10:54 AM Documentation #48333 (Rejected): cephadm: document the image used by cephadm to call ceph-volume ...
- 10:53 AM Bug #48894: cephadm e2e: ceph device monitoring off: Error EINVAL
- somehow this is gone now
- 10:52 AM Bug #48894 (Can't reproduce): cephadm e2e: ceph device monitoring off: Error EINVAL
- 10:51 AM Bug #48694 (Resolved): ceph-volume: unrecognized arguments: --filter-for-batch
- 10:50 AM Bug #45973: Adopted MDS daemons are removed by the orchestrator because they're orphans
- prio=low. probably easier to simply redeploy MDS for upstream and find a typical downstream solution for downstream.
- 10:49 AM Bug #45465 (Resolved): cephadm: `ceph orch restart osd` has the potential to break your cluster
- 10:48 AM Support #48630 (Resolved): non-LVM OSD do not start after upgrade from 15.2.4 -> 15.2.7
- 10:47 AM Bug #45628 (Resolved): cephadm qa: smoke should verify daemons are actually running
- 10:47 AM Bug #48925 (Resolved): cephadm: iscsi missing mgr permissions
- 10:47 AM Bug #48947 (Resolved): cephadm: fix rgw osd cap tag
- 10:46 AM Bug #48594 (Resolved): cephadm: too many osd privileges for osd caps
- 10:45 AM Bug #48870 (Resolved): cephadm: Several services in error status after upgrade to 15.2.8: unrecog...
- https://github.com/ceph/ceph/pull/39300
- 10:44 AM Bug #44559 (Can't reproduce): cephadm logs an invalid stat command
- please reopen, if you see this again
- 10:42 AM Bug #49016 (Resolved): find multiple coredumps of conmon
- resolved upstream
- 10:41 AM Bug #49056 (Resolved): faulty behaviour running ceph orch apply mds with missing fsname
- 10:41 AM Bug #49056: faulty behaviour running ceph orch apply mds with missing fsname
- always use yaml files! ...
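The "always use yaml files" advice refers to applying a service spec from a file instead of passing the fsname on the CLI. A minimal MDS spec (the `service_id` and count values here are illustrative) might look like:

```yaml
service_type: mds
service_id: cephfs   # the fsname that was missing on the CLI
placement:
  count: 2
```

applied with `ceph orch apply -i mds.yaml`.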
- 10:39 AM Bug #48916 (Duplicate): "File system None does not exist in the map" in upgrade:octopus-x:paralle...
- 10:38 AM Bug #49014 (Resolved): OSD service specifications ignore "rotational: 0"
- 10:37 AM Feature #47139 (Resolved): Require a minimum version for podman/docker
- 10:37 AM Feature #47139: Require a minimum version for podman/docker
- we can't backport podman >= 2.0 to octopus! Old octopus versions don't support podman 2, thus we have to have a way f...
- 10:26 AM Bug #48164 (Resolved): Orchestrator: failed deployments leave orphaned auth entries
- 10:25 AM Bug #45279: cephadm bootstrap: monmaptool --create: error writing to '/tmp/monmap': (21) Is a dir...
- Could you please verify that /tmp/ceph-tmp6sp3jhv3 is in fact a file?
Then, could you please run the docker comman...
- 02:09 AM Bug #49228 (Resolved): qa/tasks/cephadm.py: file changed as we read it
- pacific backport: https://github.com/ceph/ceph/pull/39403
02/10/2021
- 05:05 PM Bug #49143: rados/upgrade/pacific-x/parallel: monclient(hunting): authenticate timed out after 30...
- The problem seems to occur when the first mon is restarted after upgrade....
- 03:23 PM Bug #49239 (Resolved): cephadm cannot deploy OSDs with selinux-policy-minimum
- When the following conditions are true:
# A host has @selinux-policy-targeted@,
# We mount the host's @/sys@ int...
- 12:47 PM Bug #48142 (New): rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are...
- 12:35 PM Feature #49235 (Resolved): cephadm: Log number of already upgraded daemons during upgrade process
- During the upgrade process it would be helpful if cephadm displays the number of daemons which have already been upgra...
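The requested progress display boils down to a counter over the daemon list; a trivial sketch (names hypothetical, not the actual mgr/cephadm code):

```python
def upgrade_progress(upgraded: int, total: int) -> str:
    """Format an upgrade progress line, e.g. for the upgrade status output."""
    pct = upgraded * 100 // total if total else 0
    return f"Upgrade: {upgraded}/{total} daemons upgraded ({pct}%)"
```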
- 11:26 AM Bug #49233 (New): cephadm shell: TLS handshake timeout
- https://pulpito.ceph.com/swagner-2021-02-09_10:28:14-rados:cephadm-wip-swagner2-testing-2021-02-08-1109-pacific-distr...
- 11:23 AM Bug #49232 (Can't reproduce): standard_init_linux.go:211: exec user process caused "exec format e...
- https://pulpito.ceph.com/swagner-2021-02-09_10:28:14-rados:cephadm-wip-swagner2-testing-2021-02-08-1109-pacific-distr...
02/09/2021
- 10:21 PM Bug #49228: qa/tasks/cephadm.py: file changed as we read it
- In pacific...
- 10:15 PM Bug #49228 (Resolved): qa/tasks/cephadm.py: file changed as we read it
- ...
- 02:35 PM Bug #49223: unrecognized arguments: --container-init
- If I remember correctly, the "--container-init" saga went about like so:
1. in general, there is a need for contai...
- 10:09 AM Bug #49223: unrecognized arguments: --container-init
- looks like https://github.com/ceph/ceph/pull/36822 was broken back then and https://github.com/ceph/ceph/pull/37648 n...
- 09:51 AM Bug #49223 (Resolved): unrecognized arguments: --container-init
- ...
- 10:15 AM Bug #49126 (Fix Under Review): rook: 'ceph orch ls' throws type error
02/08/2021
- 06:47 PM Bug #49143: rados/upgrade/pacific-x/parallel: monclient(hunting): authenticate timed out after 30...
- ...
- 06:30 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- description: rados/cephadm/upgrade/{1-start 2-repo_digest/repo_digest 3-start-upgrade
4-wait distro$/{centos_8....
- 03:52 PM Documentation #49214 (Resolved): Docs: howto Restore MON quorum
- https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/#removing-monitors-from-an-unhealthy-cluster
- 09:38 AM Bug #48068 (Resolved): cephadm: Various properties like 'last_refresh' do not contain timezone
- 06:13 AM Bug #49013: cephadm: Service definition causes some container startups to fail
- > 1. During upgrade, the new mgr doesn't redeploy the other mgrs (again) to ensure the unit.run file is in sync with ...
02/05/2021
- 04:06 PM Bug #49013: cephadm: Service definition causes some container startups to fail
- Sage Weil wrote:
> This wasn't explicitly stated in the initial ticket, but the only daemons that failed to start ...
- 03:36 PM Bug #49013 (In Progress): cephadm: Service definition causes some container startups to fail
- 02:36 PM Bug #49013: cephadm: Service definition causes some container startups to fail
- Okay, summarizing to make sure I understand. We have two problems:
1. During upgrade, the new mgr doesn't redeplo...
- 08:24 AM Bug #49013: cephadm: Service definition causes some container startups to fail
- @Marvin
Thanks for confirming that this is an issue unrelated to my cluster.
I did another upgrade (this time fro...
- 03:50 PM Feature #49171: cephadm: set osd-memory-target
- https://pad.ceph.com/p/autotune_memory_target
- 02:29 PM Bug #49191 (Duplicate): cephadm: service_type: osd: Failed to apply: ''NoneType'' object has no a...
- ...
- 11:25 AM Bug #48933: cephadm: EOFError: couldnt load message header, expected 9 bytes, got 0
- Unfortunately I can no longer go back to do that. It was not a big issue to begin with since, as far as I remember, ...
02/04/2021
- 04:52 PM Bug #48715 (Resolved): docker-mirror: x509: certificate relies on legacy Common Name field, use S...
- appears to be fixed!
- 04:36 PM Bug #48754 (In Progress): "failed xx 'sudo systemctl start ceph-None@rgw.client.1'" in upgrade:oc...
- 03:59 PM Feature #49171 (Resolved): cephadm: set osd-memory-target
- ceph-ansible sets @osd memory target@ based on:
https://github.com/ceph/ceph-ansible/blob/71a5e666e39b11cd7945afa2...
- 02:25 PM Bug #49013: cephadm: Service definition causes some container startups to fail
- Sorry @Gunther, I kind of missed the part where you reported that the mgr daemon shows the same behaviour I observed. T...
- 01:59 PM Bug #49013: cephadm: Service definition causes some container startups to fail
- Hello there,
we are running two identical v15.2.8 clusters that apparently now show the same problem. I also did no...
- 01:54 PM Feature #49165 (Need More Info): ceph crush class in osd service spec
- It would be a nice feature to be able to override a crush class for ceph osd matching a certain drive_group or patter...
- 01:31 PM Bug #48157 (Resolved): test_cephadm.sh failure You have reached your pull rate limit. You may inc...
- not seeing this issue anymore
- 12:17 PM Bug #48933: cephadm: EOFError: couldnt load message header, expected 9 bytes, got 0
- Gunther Heinrich wrote:
> Yes, "python3 -V" gives me "Python 3.8.5".
Hm.
>
> Am I correct to assume that th...
- 12:04 PM Bug #48933: cephadm: EOFError: couldnt load message header, expected 9 bytes, got 0
- Yes, "python3 -V" gives me "Python 3.8.5".
Am I correct to assume that the exception refers to python inside a con...
- 09:29 AM Feature #49159 (Resolved): "cephadm ceph-volume activate" does not support cephadm
- On 15.2.8, when running `cephadm ceph-volume -- lvm activate --all`, I get an error related to dmcrypt:...
- 03:46 AM Feature #44055 (Closed): cephadm: make 'ls' faster
- PR closed without merge. cephadm exporter merge has made this change less important. Focus needs to be on exploiting ...
- 02:29 AM Feature #48846 (Closed): cephadm bootstrap: add --cluster-network
02/03/2021
- 07:47 PM Bug #49143 (Resolved): rados/upgrade/pacific-x/parallel: monclient(hunting): authenticate timed o...
- ...
- 05:19 PM Bug #48788 (Duplicate): cephadm bootstrap: monmaptool --create: error writing to '/tmp/monmap': (...
- 05:18 PM Bug #45279 (New): cephadm bootstrap: monmaptool --create: error writing to '/tmp/monmap': (21) Is...
- 03:25 PM Bug #48164 (Fix Under Review): Orchestrator: failed deployments leave orphaned auth entries
- 01:51 PM Bug #49013: cephadm: Service definition causes some container startups to fail
- Based on some code analysis in "/src/pybind/mgr/cephadm/upgrade.py" I found...
- 12:56 PM Tasks #46551 (Fix Under Review): cephadm: Add a better hint how to add a host
- 12:52 PM Bug #47700 (Resolved): during OSD deletion: Module 'cephadm' has failed: Set changed size during ...
- 12:50 PM Bug #48510: CEPHADM_REFRESH_FAILED: detail item 0 not a [unicode] string
- Works for me. If I do that locally, I'm getting...
- 12:25 PM Bug #48597 (Fix Under Review): pybind/mgr/cephadm: mds_join_fs not cleaned up
- 12:23 PM Feature #49127 (New): rook: Add support for service restart
- ...
- 12:12 PM Bug #49126 (Resolved): rook: 'ceph orch ls' throws type error
- ...
- 12:03 PM Bug #48924 (Fix Under Review): cephadm: upgrade process failed to pull target image: not enough v...
- fixed by https://github.com/ceph/ceph/pull/39069/commits/d31bed79411ca493ec48eeed4e9cbb7ad92295c3
- 12:02 PM Bug #48924: cephadm: upgrade process failed to pull target image: not enough values to unpack (ex...
- ...
- 11:59 AM Bug #48933: cephadm: EOFError: couldnt load message header, expected 9 bytes, got 0
- Is /usr/bin/python3 installed on the remote host?
- 11:57 AM Bug #48939: Orchestrator removes mon daemon from wrong host when removing host from cluster
- At this point in the development,...
- 11:53 AM Feature #47139 (Pending Backport): Require a minimum version for podman/docker
- 11:52 AM Bug #48982 (Resolved): cephadm: ubuntu_18_04: Error: error creating container storage: the contai...
- Fixed by https://github.com/ceph/ceph/pull/39003
- 11:51 AM Bug #48982: cephadm: ubuntu_18_04: Error: error creating container storage: the container name
- We already clean up the storage with
https://github.com/ceph/ceph/blob/faa93b751dc13003b23370f769a8ea252972c3dc/s...
- 11:50 AM Bug #48982: cephadm: ubuntu_18_04: Error: error creating container storage: the container name
- The error is:...
- 11:40 AM Bug #49014 (Pending Backport): OSD service specifications ignore "rotational: 0"
- 11:37 AM Bug #48930: when removing the iscsi service, the gateway config object remains
- depends on https://github.com/ceph/ceph/pull/38883
- 11:35 AM Bug #49041 (Fix Under Review): cephadm: update container image tag for pacific
- 11:15 AM Bug #49076 (Duplicate): cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting val...
- 11:14 AM Bug #49076 (Resolved): cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting valu...
- fixed by https://github.com/containers/conmon/pull/237
- 11:14 AM Bug #48993 (Resolved): cephadm: 'mgr stat' and/or 'pg dump' output truncated
- fixed by https://github.com/containers/conmon/pull/237
- 11:02 AM Bug #49000 (Duplicate): JSONDecodeError when wait_for_mgr_restart()
- 11:01 AM Bug #49000 (New): JSONDecodeError when wait_for_mgr_restart()
- 06:33 AM Bug #48916: "File system None does not exist in the map" in upgrade:octopus-x:parallel-master
- This issue is related to https://tracker.ceph.com/issues/45595
02/01/2021
- 10:23 PM Bug #48981 (Resolved): cephadm exporter: manager errors out with assertion error
- Can't reproduce with current master - assume that Sage's PR https://github.com/ceph/ceph/pull/39097 resolved the issu...
- 05:51 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- /ceph/teuthology-archive/yuriw-2021-01-28_19:54:33-rados-wip-yuri4-testing-2021-01-28-0959-octopus-distro-basic-smith...
- 05:41 PM Bug #48993: cephadm: 'mgr stat' and/or 'pg dump' output truncated
- /ceph/teuthology-archive/yuriw-2021-01-28_19:54:33-rados-wip-yuri4-testing-2021-01-28-0959-octopus-distro-basic-smith...
- 04:30 PM Bug #49079 (Duplicate): cephadm: slow to clear CEPHADM_FAILED_DAEMON
- the health alert is only cleared after a full cluster state update (_refresh_hosts_and_daemons()), but we may find th...
- 01:29 PM Bug #49076: cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting value: line 321...
- yes
- 01:09 PM Bug #49076: cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting value: line 321...
- Thanks for the quick reply and the info. FYI and for the record, it seems that this error - as indicated in the podma...
- 12:51 PM Bug #49076: cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting value: line 321...
- this is due to https://github.com/containers/podman/issues/9096 . Nothing we can do about this right now.
- 12:42 PM Bug #49076 (Duplicate): cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting val...
- On latest Ubuntu 20.04.1 and Podman 2.2.1...
In relation to Bug #49013 I tried to bootstrap a new cluster for test...
- 10:31 AM Bug #49013: cephadm: Service definition causes some container startups to fail
- I have an assumption and hope that someone can check if I'm correct.
For the update to 15.2.8 the service definiti...
01/31/2021