Activity
From 04/28/2021 to 05/27/2021
05/27/2021
- 11:15 PM Bug #49654 (Pending Backport): iSCSI stops working after Upgrade 15.2.4 -> 15.2.9
- 10:09 AM Bug #49654 (In Progress): iSCSI stops working after Upgrade 15.2.4 -> 15.2.9
- 11:06 PM Bug #48442: cephadm: upgrade loops on mixed x86_64/arm64 cluster
- Which versions should this work in? Octopus v15.2.12 and Pacific v16.2.4? Or just Pacific?
- 11:23 AM Bug #48442 (Need More Info): cephadm: upgrade loops on mixed x86_64/arm64 cluster
- This might work now: we're now using repo_dist, which might work across architectures
- 03:50 PM Feature #51011 (Resolved): simplify mgr/cephadm serve() loop
- 03:50 PM Feature #51010 (Resolved): add lamport clock
- 03:50 PM Feature #51009 (Resolved): teach mgr/cephadm to deploy agent (incl generating a keyring for each ...
- 03:50 PM Feature #51008 (Resolved): replace exporter with cephadm agent daemon mode
- 03:50 PM Feature #51007 (Resolved): add cephadm command that push results every so often
- 03:50 PM Feature #51006 (Resolved): add cephadm command to push results to endpoint
- 03:49 PM Feature #51005 (Resolved): add mgr/cephadm endpoint
- 03:41 PM Feature #51004 (Resolved): cephadm agent 2.0
- The idea is to re-build the cephadm agent and make it push data to the mgr
*Things to push to mgr/cephadm*
* ls...
- 11:32 AM Feature #45652 (Duplicate): cephadm: Allow user to select monitoring stack ports
- 11:29 AM Bug #46704 (Can't reproduce): container_linux.go:349: "exec: \"stat\": executable file not found
- 11:28 AM Bug #44644 (Closed): cephadm: RGW: updating the spec doesn't update the mon store
- outdated
- 11:26 AM Feature #48339 (Rejected): Use file references in NFS ganesha service configuration
- This is a bad idea. Especially when thinking about the ingress service.
- 11:23 AM Bug #49232 (Can't reproduce): standard_init_linux.go:211: exec user process caused "exec format e...
- 11:22 AM Feature #47782 (Duplicate): ceph orch host rm <host> is not stopping the services deployed in the...
- resolved via the _no_schedule label
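The _no_schedule flow mentioned above can be sketched roughly like this (hedged: the host name is a placeholder, and the exact sequence may differ between releases):

```shell
# Sketch: drain a host via the _no_schedule label before removing it.
# "host2" is a placeholder host name.
ceph orch host label add host2 _no_schedule   # no new daemons get scheduled here
ceph orch ps host2                            # wait until the daemon list is empty
ceph orch host rm host2                       # now safe to remove the host
```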
- 11:16 AM Feature #49492 (Resolved): cephadm: Spine-Leaf network architecture
- 11:15 AM Bug #49439 (Resolved): logrotate should clean up container logs
- resolved by directly writing to journald
- 11:14 AM Bug #49456 (Can't reproduce): cephadm dashboard test: failed to connect to the server
- 11:13 AM Feature #48340 (Closed): cephadm/rgw: Add rgw_zonegroup to RGWSpec
- 11:12 AM Bug #49590 (Resolved): Error: error parsing host port: invalid port number: strconv.Atoi: parsing...
- 11:10 AM Bug #49742 (Can't reproduce): Mirror was added, but still "toomanyrequests: You have reached your...
- 10:19 AM Bug #50998 (Resolved): OSD replacement not working
- OSDs cannot be replaced because the new systemd "osd service" is not able to start in the host, raising an error like...
- 10:15 AM Bug #49626 (Resolved): cephadm: remove duplicate labels when adding a host
- 10:14 AM Bug #49805 (Resolved): unmanaged OSDs return to be managed OSDs after a cephadm module restart
- 10:13 AM Bug #49884 (Resolved): cephadm bootstrap: verify major version of the image matches major version ...
- 10:13 AM Bug #49872 (Resolved): cephadm: Don't remove the daemon keyring, if redeploy fails
- 10:12 AM Documentation #50273 (Resolved): remove keepalived_user from haproxy docs
- 10:11 AM Bug #50267 (Resolved): rgw service can be deployed with realm and no zone or vice versa
- 10:07 AM Bug #48325 (Resolved): PlacementSpec: 'NoneType' object has no attribute 'copy'
- 10:06 AM Cleanup #48251 (Won't Fix): Find a better way to allow customizing the blink_device_light_cmd cmd
- 10:05 AM Documentation #49806 (Resolved): minor problems in cephadm docs
- 10:05 AM Bug #50401 (Resolved): cephadm: Daemons that don't use ceph image always marked as needing upgrad...
- This is missing a PR ID
- 10:04 AM Support #49497: Cephadm fails to upgrade from 15.2.8 to 15.2.9
- I don't think this is something we can fix, unfortunately. It works for everyone else
- 10:02 AM Bug #50444 (Resolved): host labels order is random
- 10:01 AM Bug #50472 (Resolved): orchestrator doesn't provide a way to remove an entire cluster
- https://github.com/ceph/cephadm-ansible#purge
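For a manual purge (as opposed to the cephadm-ansible playbook linked above), the per-host command looks roughly like this; `<fsid>` is a placeholder:

```shell
# Sketch: remove all state for one cluster from the current host.
# Run on every host of the cluster to purge it entirely.
cephadm rm-cluster --fsid <fsid> --force
```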
- 10:00 AM Feature #50529 (Resolved): cephadm rm-cluster is also not resetting any disks that were used as osds
- 09:59 AM Documentation #50534: docs: add full cluster purge
- https://github.com/ceph/cephadm-ansible#purge
- 09:58 AM Bug #50548 (Resolved): cephadm doesn't deploy monitors when multiple public networks
- 09:58 AM Feature #46044 (Resolved): cephadm: Distribute admin keyring.
- 09:56 AM Feature #50360 (Resolved): Configure the IP address for Ganesha
- https://github.com/ceph/ceph/commit/ae4ab5d2041856c8a891e25df610b88bc07b14f1
- 09:54 AM Documentation #50687 (Pending Backport): cephadm: must redeploy monitoring stack daemon after cha...
- 09:47 AM Feature #45410 (Resolved): cephadm: Support upgrading alertmanager, grafana, prometheus and node_...
- 09:47 AM Feature #46499 (Rejected): Requesting a "ceph orch redeploy monitoring" command, as an option, so...
- I don't see a compelling reason to do this. We should focus on more important things.
- 09:46 AM Documentation #45860 (Rejected): cephadm: document upgrades of monitoring components
- works out of the box
- 09:40 AM Bug #50113 (Resolved): Upgrading to v16 breaks rgw_frontends setting
- turns out doc/releases/pacific.rst is completely missing in pacific
- 09:36 AM Bug #50113 (Pending Backport): Upgrading to v16 breaks rgw_frontends setting
- 09:34 AM Bug #50526 (Resolved): OSD massive creation: OSDs not created
- closing this for now. I'd create a new issue, if this pops up again
- 09:32 AM Bug #49551 (Resolved): cephadm journald logs are mangled
05/26/2021
- 04:22 PM Bug #50981 (Fix Under Review): cephadm: --service-type arg in 'orch ls' not handled properly for ...
- 04:13 PM Bug #50981 (Closed): cephadm: --service-type arg in 'orch ls' not handled properly for services w...
- the running daemon count comes out incorrect:...
- 11:14 AM Bug #50979 (Duplicate): rook: implement_array_function method already has a docstring
- 11:06 AM Bug #50979 (Duplicate): rook: implement_array_function method already has a docstring
- ...
05/25/2021
05/24/2021
05/21/2021
- 06:11 PM Bug #50830 (Resolved): rgw-ingress does not install
- 06:11 PM Bug #49856 (Resolved): mgr/cephadm: prometheus default alerts not provisioned
- 06:11 PM Bug #50717 (Resolved): cephadm: prometheus.yml.j2 contains "tab" character
- 12:18 PM Bug #50928 (Resolved): OSD size count mismatched with ceph orch commands
- Listing OSD service particularly shows 0 out of 4 OSDs running.
----------------------------------------------------...
- 08:59 AM Bug #49551: cephadm journald logs are mangled
- Now https://github.com/ceph/ceph/pull/40640 has been merged. Should this be backported?
If so, please don't forget...
05/20/2021
- 06:25 AM Bug #47873 (Resolved): /usr/lib/sysctl.d/90-ceph-osd.conf getting installed in container, renderi...
05/19/2021
- 02:22 PM Support #50887 (Closed): ERROR: Daemon not found: mgr.ceph-node1.ruuwlz. See cephadm ls
- The previous access address of the Ceph dashboard was https://ceph-node1:8443, but now it's https://ceph-node2:443, to ch...
- 02:20 PM Bug #50886 (Can't reproduce): TypeError: can't subtract offset-naive and offset-aware datetimes
- ...
- 02:12 PM Bug #50526: OSD massive creation: OSDs not created
- Juan Miguel Olmo Martínez wrote:
> @Cory Snyder wrote:
> > @Juan, allow me to provide more detail on the scenario t...
- 07:40 AM Bug #50526: OSD massive creation: OSDs not created
- @Cory Snyder wrote:
> @Juan, allow me to provide more detail on the scenario that we encountered. As far as I can te...
- 12:05 PM Bug #50113 (Fix Under Review): Upgrading to v16 breaks rgw_frontends setting
- 11:55 AM Feature #47507 (Resolved): qa: add testing for Rook
- 11:52 AM Bug #50830 (Pending Backport): rgw-ingress does not install
- 11:18 AM Documentation #50883 (Duplicate): cephadm: mds_cache_memory_limit
- Users can apply:...
05/17/2021
- 08:07 PM Bug #50830 (Fix Under Review): rgw-ingress does not install
- 03:50 AM Bug #50830: rgw-ingress does not install
- Sage, could you kindly take a look at this failure?
- 03:50 AM Bug #50830: rgw-ingress does not install
- i tried to rebuild https://github.com/ceph/ceph/commit/1a55d822295f03309a696a301bbd1314953974a1, which is the merge c...
- 10:24 AM Bug #50693 (Resolved): cephadm: commands fail with "ValueError: not enough values to unpack (expe...
- backported now.
- 10:23 AM Bug #50616 (Duplicate): ValueError: not enough values to unpack (expected 2, got 1) during upgrad...
05/16/2021
- 03:55 PM Feature #50784 (Resolved): cephadm: orch upgrade check should check if the target image provided ...
- 03:54 PM Bug #50805 (Resolved): Replacement of OSDs not working in hosts with FQDN host name
- 03:51 PM Bug #50717 (Pending Backport): cephadm: prometheus.yml.j2 contains "tab" character
- 03:39 PM Bug #50830: rgw-ingress does not install
- this failure is reproducible.
- 03:39 PM Bug #50830 (Resolved): rgw-ingress does not install
- rados:cephadm:smoke-roleless/{0-distro/centos_8.2_kubic_stable 1-start 2-services/rgw-ingress 3-final}...
- 02:22 PM Bug #50693: cephadm: commands fail with "ValueError: not enough values to unpack (expected 2, got...
- Kefu Chai wrote:
> @Sam, does https://github.com/ceph/ceph/pull/40555 address this issue?
Yes, I can confirm. App...
- 10:09 AM Bug #50693 (Need More Info): cephadm: commands fail with "ValueError: not enough values to unpack...
- @Sam, does https://github.com/ceph/ceph/pull/40555 address this issue?
05/14/2021
- 07:26 PM Bug #50526: OSD massive creation: OSDs not created
- @Juan, allow me to provide more detail on the scenario that we encountered. As far as I can tell, the root cause of o...
- 05:04 PM Bug #50526: OSD massive creation: OSDs not created
- David Orman wrote:
> We've created a PR to fix the root cause of this issue: https://github.com/alfredodeza/remoto/p...
- 04:00 PM Bug #50526: OSD massive creation: OSDs not created
- We've created a PR to fix the root cause of this issue: https://github.com/alfredodeza/remoto/pull/63
- 06:01 PM Bug #50817 (Closed): cephadm: upgrade loops forever if not enough mds daemons
- If you don't have enough mds daemons for ok-to-stop to ever pass, the upgrade just loops forever without providing ...
- 04:18 PM Bug #50717 (Fix Under Review): cephadm: prometheus.yml.j2 contains "tab" character
- 01:59 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- ...
- 09:51 AM Feature #50815 (Resolved): cephadm: Removing an offline host
- but doesn't that address only part of the problem? For example, any daemons that Ceph (not cephadm) knew about are st...
05/13/2021
- 04:55 PM Bug #50805 (Resolved): Replacement of OSDs not working in hosts with FQDN host name
- In a host with FQDN name:
#hostname
test1.lab.com
#ceph orch osd rm 4 --replace
# ceph osd tree
ID CLASS ...
- 02:45 PM Bug #50041 (Resolved): cephadm bootstrap with apply-spec and ssh-user option failed while adding...
- 02:15 PM Tasks #50804 (Resolved): cephadm bootstrap. add a warning that users should not use --fsid
- cephadm bootstrap. add a warning that users should not use --fsid
Reason: this doesn't really give the user any adva...
- 01:55 PM Bug #50359 (In Progress): Configure the IP address for the monitoring stack components
05/12/2021
- 05:49 PM Feature #50784 (Fix Under Review): cephadm: orch upgrade check should check if the target image p...
- 05:43 PM Feature #50784 (Resolved): cephadm: orch upgrade check should check if the target image provided ...
- If the user provides an image to orch upgrade check that they could not actually upgrade to because of the ceph versi...
- 03:01 PM Bug #50776 (New): cephadm: CRUSH uses bare host names
- https://github.com/ceph/ceph/blob/master/src/pybind/mgr/cephadm/module.py#L1411
https://github.com/ceph/ceph/blob/...
- 12:08 PM Bug #48930 (Resolved): when removing the iscsi service, the gateway config object remains
- 09:59 AM Bug #50359: Configure the IP address for the monitoring stack components
- I think that being able to customize the port would also help (exposing a spec parameter is also required in this context)
05/11/2021
- 06:28 PM Feature #50733 (In Progress): cephadm: provide message in orch upgrade status saying upgrade is c...
- 04:06 PM Bug #50526: OSD massive creation: OSDs not created
- To be clear, we have not applied this patch. I was merely adding information to point out the impact is not restricte...
- 03:57 PM Bug #50526: OSD massive creation: OSDs not created
- David Orman wrote:
> Juan Miguel Olmo Martínez wrote:
> > I think that the fix will also work for your issue, it wo...
- 03:55 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- Before you set the port (if it's not too late), can you attach the rgw portion of the 'ceph orch ls --export' output?
- 02:41 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- workaround is to manually set the port:...
- 02:04 PM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s...
- 02:21 PM Bug #50759 (Rejected): Redeploying daemon prometheus.a on host smithi159 failed: 'latin-1' codec ...
- was caused by https://github.com/ceph/ceph/pull/40172
- 02:17 PM Bug #50759 (Rejected): Redeploying daemon prometheus.a on host smithi159 failed: 'latin-1' codec ...
- https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s...
- 02:10 PM Bug #47480: cephadm: tcmu-runner container is logging inside the container
- https://pulpito.ceph.com/swagner-2021-05-11_09:16:20-rados:cephadm-wip-swagner-testing-2021-05-06-1235-distro-basic-s...
- 11:42 AM Bug #50691: cephadm: bootstrap fails with "IndexError: list index out of range" during cephadm se...
- workaround: do not specify the ssh user when bootstrapping; set it later on.
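A sketch of that workaround (the IP address and user name are placeholders):

```shell
# Bootstrap without --ssh-user, then configure the ssh user afterwards.
cephadm bootstrap --mon-ip 10.0.0.1
ceph cephadm set-user cephadm-user   # set the non-root ssh user post-bootstrap
```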
- 11:09 AM Bug #50691 (Fix Under Review): cephadm: bootstrap fails with "IndexError: list index out of range...
- 10:23 AM Bug #50691: cephadm: bootstrap fails with "IndexError: list index out of range" during cephadm se...
- https://github.com/ceph/ceph/commit/777f236ad885b03b551dd820f41a00b9c89761eb#diff-d0f7acffbce59b9e36a1479d1b1f32955cd...
05/10/2021
- 10:32 PM Bug #50526: OSD massive creation: OSDs not created
- Juan Miguel Olmo Martínez wrote:
> I think that the fix will also work for your issue, it would be nice if you can c... - 06:58 PM Feature #50733 (Closed): cephadm: provide message in orch upgrade status saying upgrade is complete
- Right now, the upgrade status just says the upgrade is no longer in progress and no explicit message is given to say ...
- 01:57 PM Feature #45864 (Resolved): cephadm: include monitoring components in usual upgrade process
- 09:21 AM Bug #49860: cephadm adopt - Report conf file missing - now it says could not detect legacy fsid
- Do you remember, were there any other ceph daemons deployed on that host? cephadm needs to know the fsid of the clus...
- 08:57 AM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- prio=normal, as this is not trivial to implement
- 08:53 AM Feature #48102: cephadm: configure HA (cluster flags) for Alertmanager
- Isn't the alertmanager already HA by itself? I thought that alertmanager already creates a fault-tolerant cluster on...
- 08:48 AM Feature #48980 (Closed): orch: add image properties to monitoring spec files
- 08:42 AM Feature #48560 (Closed): Spec files for each daemon in the monitoring stack
05/09/2021
- 11:13 AM Bug #50717 (Resolved): cephadm: prometheus.yml.j2 contains "tab" character
- Hello.
in file /usr/share/ceph/mgr/cephadm/templates/services/prometheus/prometheus.yml.j2 provided by package
c...
05/08/2021
- 09:33 AM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- Sam Overton wrote:
> #50693 explains the root-cause, which is that @/sys/kernel/security/apparmor/profiles@ is an em...
05/07/2021
- 10:13 PM Bug #50671: cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client.admin.ke...
- I think this might be a permissions issue - it looks like cephadm is writing the keyring without changing its permiss...
- 08:47 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- Deepika Upadhyay wrote:
> /ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-...
- 08:01 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- #50693 explains the root-cause, which is that @/sys/kernel/security/apparmor/profiles@ is an empty file and cephadm i...
- 07:57 PM Bug #50693 (Resolved): cephadm: commands fail with "ValueError: not enough values to unpack (expe...
- Occurs with ceph/cephadm 16.2.1 running on a clean Debian 10.9 install.
The following error is from a failed OSD D...
- 07:31 PM Bug #50691 (Resolved): cephadm: bootstrap fails with "IndexError: list index out of range" during...
- Running on a cleanly installed Debian 10.9 host with ceph/cephadm 16.2.3.
The same command in 16.2.1, running on t...
- 06:29 PM Bug #50690 (Can't reproduce): ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command not...
- Description of problem:
ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command is not generating the e...
- 03:27 PM Documentation #50687 (In Progress): cephadm: must redeploy monitoring stack daemon after changing...
- 02:03 PM Documentation #50687 (Resolved): cephadm: must redeploy monitoring stack daemon after changing im...
- We document that, to use a different image from the default for a monitoring stack daemon, you must change the image ...
- 10:09 AM Bug #50685 (Resolved): wrong exception type: Exception("No filters applied")
- ...
- 09:20 AM Bug #48930: when removing the iscsi service, the gateway config object remains
- follow-up PR: https://github.com/ceph/ceph/pull/41181
- 09:19 AM Bug #48930 (Fix Under Review): when removing the iscsi service, the gateway config object remains
- 04:22 AM Bug #50113: Upgrading to v16 breaks rgw_frontends setting
- ...
05/06/2021
- 06:29 PM Bug #50443 (Resolved): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- 06:24 PM Bug #50544 (Resolved): cephadm: monitoring stack containers in conf file passed to bootstrap not ...
- 06:23 PM Bug #50364 (Resolved): cephadm: removing daemons from hosts in maintenance mode
- 05:43 AM Bug #50671 (Closed): cephadm.py OSD status check fails 'no keyring found at /etc/ceph/ceph.client...
- OSD status Check fails with no keyring found.
CLI:
2021-05-01T12:08:20.050 INFO:tasks.cephadm:Waiting for OSDs t...
05/05/2021
- 02:52 PM Bug #48142: rados:cephadm/upgrade/mon_election tests are failing: CapAdd and privileged are mutua...
- still seeing in octopus: http://qa-proxy.ceph.com/teuthology/yuriw-2021-05-04_19:53:28-rados-wip-yuri-testing-2021-05...
05/04/2021
- 04:44 PM Bug #49293: podman 3.0 on ubuntu 18.04: failed to mount overlay for metacopy check with "nodev,me...
- /ceph/teuthology-archive/yuriw-2021-05-03_16:25:32-rados-wip-yuri-testing-2021-04-29-1033-octopus-distro-basic-smithi...
- 11:44 AM Feature #50639 (New): Request to provide an option to specify erasure coded pool as datapool whil...
- ...
- 04:01 AM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- Just to confirm this is how the section looks after my edit...
05/03/2021
- 04:51 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- I did as suggested but the upgrade still fails with the following new error...
- 03:40 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- workaround is to replace
/var/lib/ceph/30449cba-44e4-11eb-ba64-dda10beff041/cephadm.17068a0b484bdc911a9c50d6408adf...
- 03:36 PM Bug #50616: ValueError: not enough values to unpack (expected 2, got 1) during upgrade from 15.2....
- ...
- 01:57 PM Bug #50616 (Duplicate): ValueError: not enough values to unpack (expected 2, got 1) during upgrad...
- Started an upgrade from 15.2.8 to 16.2.1 via cephadm running on Ubuntu 20.04 & Docker.
MON/MGR/MDS upgraded fine a...
- 03:49 PM Bug #50399: cephadm ignores registry settings
- You also have to update the image to point to your registry; otherwise cephadm doesn't actually use the registry
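Roughly, after a `cephadm registry-login`, that means something like the following (registry.example.com and the tag are placeholders):

```shell
# Point the container image at the private registry so cephadm pulls from it.
ceph config set global container_image registry.example.com/ceph/ceph:v16.2.4
# For an upgrade, pass the registry image explicitly:
ceph orch upgrade start --image registry.example.com/ceph/ceph:v16.2.4
```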
- 03:45 PM Bug #44587 (New): failed to write <pid> to cgroup.procs:
05/02/2021
- 08:55 AM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- > - make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe include a --force in ca...
04/30/2021
- 07:42 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- I'd *definitely* go for making 'orch apply prometheus' silently enable the prometheus module.
- 04:46 PM Bug #46606: cephadm: post-bootstrap monitoring deployment only works if the command "ceph mgr mod...
- A couple options:
- make the 'orch apply prometheus' fail if the mgr prometheus module isn't enabled. (maybe incl...
- 06:49 PM Support #50594 (Resolved): ceph orch / cephadm does not allow deploying multiple MDS daemons per ...
- I have 3 hosts, with lots of cores. I have a filesystem with ~150M files that requires several active MDS daemons to ...
- 10:15 AM Feature #50593 (Resolved): cephadm: cephfs-mirror service should enable "mgr/mirror"
- cephadm: cephfs-mirror service should enable "mgr/mirror"
- 07:00 AM Bug #50592 (Closed): "ceph orch apply <svc_type>" applies placement by default without providing ...
- ...
04/29/2021
- 09:13 AM Bug #50526: OSD massive creation: OSDs not created
- Andreas Håkansson wrote:
> We have the same or a very similar problem,
> In our test case adding more than 8 disks w...
04/28/2021
- 08:07 PM Bug #50102 (Resolved): spec jsons that expect a list in a field dont verify that a list was actua...
- 06:27 PM Bug #50306 (Pending Backport): /etc/hosts is not passed to ceph containers. clusters that were re...
- 06:26 PM Feature #46044 (Pending Backport): cephadm: Distribute admin keyring.
- 06:26 PM Bug #50443 (Pending Backport): cephadm: Don't allow upgrade start with not enough mgr or mon daemons
- 06:25 PM Bug #50544 (Pending Backport): cephadm: monitoring stack containers in conf file passed to bootst...
- 12:47 PM Bug #50544 (Fix Under Review): cephadm: monitoring stack containers in conf file passed to bootst...
- 06:24 PM Bug #50548 (Pending Backport): cephadm doesn't deploy monitors when multiple public networks
- 07:21 AM Bug #50548: cephadm doesn't deploy monitors when multiple public networks
- PR created: https://github.com/ceph/ceph/pull/41055
- 06:58 AM Bug #50548 (Resolved): cephadm doesn't deploy monitors when multiple public networks
- The issue spotted on Ceph 16.2.1 deployed with cephadm+docker, although the master branch seems to also be affected.
...
- 05:44 PM Bug #50062 (Resolved): orch host add with multiple labels and no addr
- 05:32 PM Bug #50248 (Resolved): rgw-nfs daemons marked as stray
- 04:07 PM Feature #49960 (Resolved): cephadm: put max on number of daemons in placement count based on numb...
- 04:06 PM Documentation #50257 (Resolved): cephadm docs: wrong command for getting events for single daemon
- 04:06 PM Bug #49757 (Resolved): orch: --format flag name not included in help for 'orch ps' and 'orch ls'
- 09:48 AM Bug #50526: OSD massive creation: OSDs not created
- We have the same or a very similar problem,
In our test case adding more than 8 disks with db on a separate nvme devi...
- 09:26 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
- duplicates #47873
- 09:21 AM Bug #50551: Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by default
- We've been setting fs.aio-max-nr to 1048576 since early bluestore days with no apparent downside. That would be a sim...
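That simple fix would look something like this (the sysctl.d file name is a placeholder):

```shell
# Persistently raise the AIO limit so many OSDs can start on one host.
echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/90-ceph-aio-max-nr.conf
sysctl --system   # re-read all sysctl.d settings to apply it now
```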
- 09:14 AM Bug #50551 (Duplicate): Massive OSD creation: kernel parameter fs.aio-max-nr with a low value by ...
- fs.aio-max-nr: The Asynchronous non-blocking I/O (AIO) feature that allows a process to initiate multiple I/O operati...