Activity
From 05/06/2020 to 06/04/2020
06/04/2020
- 01:18 PM Documentation #45896 (New): cephadm: Need a manual howto: "upgrade the cluster manually"
- symptom:...
- 10:10 AM Feature #45876 (New): cephadm: handle port conflicts gracefully
- ...
- 09:30 AM Bug #45872: ceph orch device ls exposes the `device_id` under the DEVICES column which isn't too ...
- Sebastian Wagner wrote:
> what about pointing users to --format yaml? that would be an easy fix.
Yep, that's proba...
- 08:03 AM Bug #45872: ceph orch device ls exposes the `device_id` under the DEVICES column which isn't too ...
- what about pointing users to --format yaml? that would be an easy fix.
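The suggestion above can be illustrated with the structured output mode that `ceph orch device ls` already supports; a minimal sketch (to be run against a live cluster):

```
# The plain table elides details; YAML output includes the full
# device properties, device_id among them
ceph orch device ls --format yaml
```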
- 07:43 AM Bug #45872 (Resolved): ceph orch device ls exposes the `device_id` under the DEVICES column which...
- Instead of just listing the device_id, we should consider adding columns for disk properties that can be filtered for...
- 08:11 AM Documentation #45820 (Pending Backport): create OSDs doc refer to --use-all-devices
- 08:05 AM Documentation #45865 (Fix Under Review): cephadm: The service spec documentation is lacking impor...
- 07:59 AM Bug #45867: orchestrator: Errors while deployment are hidden behind the log wall
- relates to https://github.com/ceph/ceph/pull/35375
- 07:13 AM Bug #45604: mgr/cephadm: Failed to create an OSD
- I haven't seen this issue in a while now. It would be interesting to know whether this still exists in the latest master.
- 07:10 AM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- There is an `unmanaged` flag that can be set for any ServiceSpec...
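For reference, the `unmanaged` flag mentioned above is set in the service spec itself; a minimal sketch (the service_id and placement are illustrative, not taken from the issue):

```
service_type: osd
service_id: all_available   # illustrative name
unmanaged: true             # cephadm stops creating/re-creating daemons for this spec
placement:
  host_pattern: '*'
data_devices:
  all: true
```

With `unmanaged: true` applied, a zapped device should no longer be picked up automatically.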
- 02:14 AM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- As an additional comment on this...
I also think similar logic should apply to any drivespec that is used to apply...
06/03/2020
- 09:38 PM Bug #45808 (Fix Under Review): cephadm/test_adoption.sh: Error parsing image configuration: Inval...
- 05:47 PM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- https://github.com/ceph/teuthology/pull/1501
- 05:48 PM Bug #45807: cephadm/test_cephadm.sh: unable to pull image: Error parsing image configuration: too...
- https://github.com/ceph/teuthology/pull/1501
- 05:48 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- https://github.com/ceph/teuthology/pull/1501
- 04:32 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- We stepped into that too; IMO it's quite non-intuitive behavior. One removes an OSD and it reappears again shortly aft...
- 04:19 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- That is exactly the behavior we see.
I do not think that is intuitive or should be the expected behavior.
Maybe...
- 04:09 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- -Hmm there is something racy going on. I have a cluster too deployed with --all-available-devices. I remove an osd ...
- 02:43 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- I'd say that this is the intended behavior. There is a tracking issue for adding a `blacklisting` command that allows...
- 04:08 PM Bug #45867 (Resolved): orchestrator: Errors while deployment are hidden behind the log wall
- As an end user I expect that when running a 'ceph orch apply xxxx' command I will see the errors on the CLI when some...
- 03:41 PM Documentation #45865 (Resolved): cephadm: The service spec documentation is lacking important inf...
- The service spec documentation does not explain when to use the 'unmanaged' field or what happens when this is set to...
- 02:55 PM Feature #45864 (Resolved): cephadm: include monitoring components in usual upgrade process
- Monitoring components are tied to the Ceph version released.
For instance, on Ceph Nautilus, Grafana 5 is supporte...
- 02:46 PM Documentation #45820 (In Progress): create OSDs doc refer to --use-all-devices
- 02:41 PM Documentation #45862 (Resolved): orch mds rm is documented but does not exist
- ...
- 02:25 PM Bug #45861 (Resolved): data_devices: limit 3 deployed 6 osds per node
- We have 5 OSD nodes, all looks similar (except node 4 has -1 ssd):...
- 02:20 PM Documentation #45860 (Rejected): cephadm: document upgrades of monitoring components
- Since the container images used for monitoring components can now be specified manually, it is possible to upgrade th...
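A sketch of how such a manual override might look, using the cephadm mgr config options (the image names and tags below are examples only, not recommendations from the issue):

```
ceph config set mgr mgr/cephadm/container_image_prometheus prom/prometheus:v2.18.1
ceph config set mgr mgr/cephadm/container_image_grafana ceph/ceph-grafana:6.6.2
ceph config set mgr mgr/cephadm/container_image_alertmanager prom/alertmanager:v0.20.0
ceph config set mgr mgr/cephadm/container_image_node_exporter prom/node-exporter:v0.18.1
# redeploy so the new image is picked up
ceph orch redeploy prometheus
```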
- 02:19 PM Feature #45859 (Resolved): cephadm: use fixed versions
- Use fixed versions for container image in cephadm.
Missing: Grafana.
It also needs to be checked if the same ne...
- 01:51 PM Documentation #45858 (Resolved): `ceph orch status` doesn't show in progress actions
- https://docs.ceph.com/docs/master/mgr/orchestrator/#status documents that @ceph orch status@ shows in-progress operati...
- 09:20 AM Bug #45737: Module 'cephadm' has failed: cannot send (already closed?)
- Hi,
So besides the fact that this is a duplicate of an issue that is waiting to have a fix reviewed, what should I do ...
- 08:59 AM Feature #45463 (Pending Backport): cephadm: allow custom images for grafana, prometheus, alertman...
- 08:45 AM Documentation #45833 (Resolved): cephadm: properly document labels
- # How to add / remove labels to hosts
# How to show labels
# how to use labels for placing daemons
Those bits ar...
- 08:40 AM Bug #45832: cephadm: "ceph orch apply mon" moves daemons
- you used ceph-salt to deploy the initial mon?
- 08:27 AM Bug #45832 (Resolved): cephadm: "ceph orch apply mon" moves daemons
- Somewhat related to the issue I copied this from.
I started with a bootstrap cluster (1 mon, 1 mgr), wanted to add...
- 07:02 AM Bug #45407: cephadm: Speed up OSD deployment preview
- PR https://github.com/ceph/ceph/pull/34665 won't benefit from the preview cache implemented in this issue, beca...
06/02/2020
- 08:29 PM Bug #45819: cephadm: Possible error in deploying-nfs-ganesha docs
- This is likely due to the NFSv3 config, which requires a running rpcbind service....
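If the rpcbind theory above holds, a likely workaround on the ganesha host would be something like the following (the systemd unit name may vary by distro):

```
# NFSv3 requires rpcbind; make sure it is running on the host
sudo systemctl enable --now rpcbind
```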
- 03:21 PM Bug #45819 (Can't reproduce): cephadm: Possible error in deploying-nfs-ganesha docs
- Simon Sutter sent an email to ceph-users that included this:
https://docs.ceph.com/docs/master/cephadm/install/#de...
- 03:37 PM Documentation #45820 (Resolved): create OSDs doc refer to --use-all-devices
- https://docs.ceph.com/docs/octopus/mgr/orchestrator/#create-osds (also master) refer to...
- 02:32 PM Feature #44873 (Resolved): cephadm bootstrap: add --apply-spec <cluster.yaml>
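A sketch of how the resolved `--apply-spec` feature might be invoked (the mon IP and spec file name are illustrative):

```
# bootstrap the cluster and apply a service spec in one step
cephadm bootstrap --mon-ip 192.168.0.10 --apply-spec cluster.yaml
```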
- 01:56 PM Bug #45462 (Resolved): 'https://download.ceph.com/debian-octopus focal Release' does not have a R...
- 01:54 PM Bug #45407 (Resolved): cephadm: Speed up OSD deployment preview
- 01:54 PM Bug #45245 (Resolved): cephadm: print iscsi container's log to stdout/stderr
- 01:13 PM Bug #45249 (Resolved): cephadm: fail to apply a iSCSI ServiceSpec
- 01:11 PM Bug #45293 (Resolved): cephadm: service_id can contain a '.' char (mds, nfs, iscsi)
- 01:11 PM Bug #45294 (Resolved): cephadm: rgw realm/zone could contain 'hostname'
- 01:10 PM Bug #45284 (Resolved): cephadm: Access host files on "cephadm shell"
- 01:09 PM Feature #44625 (Resolved): cephadm: test dmcrypt
- 01:09 PM Bug #45587 (Resolved): mgr/cephadm: Failed to create encrypted OSD
- 01:09 PM Bug #45700 (Resolved): cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Oc...
- 01:09 PM Documentation #44284 (Resolved): cephadm: provide a way to modify the initial crushmap
- 01:08 PM Bug #45427 (Resolved): cephadm: auth get failed: invalid entity_auth mon
- 01:08 PM Bug #45129 (Resolved): simple (ceph-disk) style OSDs adopted by cephadm don't start after reboot
- 01:08 PM Bug #45393 (Resolved): Containerized osd config must be updated when adding/removing mons
- 01:07 PM Bug #45458 (Resolved): non-ascii chars in /etc/prometheus/ceph/ceph_default_alerts.yml
- 01:06 PM Bug #45417 (Resolved): cephadm: nfs grace remove killed before completion
- 01:06 PM Bug #45394 (Resolved): cephadm: fail to create/preview OSDs via drive group
- 01:03 PM Bug #45252 (Resolved): cephadm: fail to insert modules when creating iSCSI targets
- 01:03 PM Feature #45163 (Resolved): cephadm: iscsi: read and write config-key for the dashboard
- 12:59 PM Bug #45632 (Pending Backport): nfs: auth credentials for recovery database include mds
- 12:53 PM Bug #45632 (Resolved): nfs: auth credentials for recovery database include mds
- 12:59 PM Bug #45696 (Pending Backport): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:53 PM Bug #45696 (Resolved): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:54 PM Bug #45629 (Pending Backport): cephadm: Allow users to provide ssh keys during bootstrap
- 12:53 PM Bug #45629 (Resolved): cephadm: Allow users to provide ssh keys during bootstrap
- 12:52 PM Bug #45560 (Resolved): cephadm: fail to create OSDs
- 12:51 PM Documentation #44828 (Resolved): cephadm: clarify "Failed to infer CIDR network for mon ip"
- 11:11 AM Tasks #45814 (Resolved): tasks/cephadm.py: Add iSCSI smoke test
- We need a block similar to
https://github.com/ceph/ceph/blob/cca6533da2dbb756769bf3640b19705a1d0ea1fa/qa/tasks/cep...
- 10:50 AM Bug #45791: cephadm: Upgrade is failing octopus on centos 8: %d format: a number is required: not ...
- If more log is needed I can add, but I believe below is the relevant snip you need that pertains to the upgrade....
...
- 08:33 AM Bug #45791 (Need More Info): cephadm: Upgrade is failing octopus on centos 8: %d format: a number ...
- Can you please attach the full MGR log file?
- 08:24 AM Bug #45807 (Duplicate): cephadm/test_cephadm.sh: unable to pull image: Error parsing image config...
- Different traceback, but same origin. Close as duplicate. Thank you for reporting this!
- 04:42 AM Bug #45807 (Duplicate): cephadm/test_cephadm.sh: unable to pull image: Error parsing image config...
- /a/yuriw-2020-05-30_02:18:17-rados-wip-yuri-master_5.29.20-distro-basic-smithi/5104549...
- 08:23 AM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- Right, test_adoption doesn't use tasks/cephadm.py
- 08:21 AM Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code retu...
- should work! I'm seeing the registry getting accessed:...
- 05:52 AM Bug #45808 (Resolved): cephadm/test_adoption.sh: Error parsing image configuration: Invalid statu...
- /a/yuriw-2020-05-30_02:18:17-rados-wip-yuri-master_5.29.20-distro-basic-smithi/5104472...
- 04:43 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- Looks like this change needs to be applied more widely. See https://tracker.ceph.com/issues/45807
- 03:20 AM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- When using the command `ceph orch apply osd --all-available-devices` to create OSDs, an OSDSpec is created to apply all u...
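The spec that `ceph orch apply osd --all-available-devices` generates is roughly equivalent to the following (a sketch; the exact service_id may differ):

```
service_type: osd
service_id: all-available-devices
placement:
  host_pattern: '*'
data_devices:
  all: true
```

Because the spec stays active, a zapped device becomes "available" again and is re-consumed unless the spec is deleted or marked `unmanaged`.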
06/01/2020
- 09:45 PM Bug #45399: NFS Ganesha : Error searching service specs for all nodes after nfs orch apply nfs......
- This also affects other services such as MDS etc, by causing the orchestrator to deploy/remove them in a loop:
<pr...
- 09:44 PM Bug #45399 (Fix Under Review): NFS Ganesha : Error searching service specs for all nodes after nf...
- A short hostname should not contain a dot char ('.'), but it looks like the validation is missing during `host add` ...
...
- 03:53 PM Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
- Zac Dover wrote:
> https://pad.ceph.com/p/cidr_error_cephadm_docs
>
> The list of commands in this Etherpad repre...
05/30/2020
- 06:54 PM Bug #45792: cephadm: zapped OSD gets re-added to the cluster.
- One thing I didn't mention is that cephadm's attempts to apply OSDs to available devices SURVIVE disabling and ena...
- 06:35 PM Bug #45792 (Resolved): cephadm: zapped OSD gets re-added to the cluster.
- Using version 15.2.1 with octopus cluster running centos 8.
When the cluster was initially deployed, OSDs were cre...
- 06:28 PM Bug #45791 (Can't reproduce): cephadm: Upgrade is failing octopus on centos 8: %d format: a number ...
- New dev cluster running centos 8 with octopus release deployed using cephadm. Deployed on version 15.2.1.
Attempt...
05/29/2020
- 06:55 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- ...
- 05:40 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Yeah, that's what I meant when I said I can't figure out the appropriate ...
- 09:49 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- ...
- 01:48 PM Tasks #45143 (Closed): cephadm: scheduler improvements
- split into different issues
- 01:47 PM Feature #45770 (Rejected): cephadm: allow count=0 to have services without daemons
- 01:47 PM Feature #45769 (Resolved): cephadm: Don't deploy on offline hosts
- 01:46 PM Feature #45768 (Rejected): cephadm: remove daemons should check HEALTH
- 01:45 PM Documentation #45767 (Resolved): documentation: disable the scheduler: unmanaged=True + ceph orch...
- 01:45 PM Feature #45766 (New): cephadm: Removal: make sure, enough daemons joined the maps
- 01:43 PM Bug #45726 (Fix Under Review): Module 'cephadm' has failed: auth get failed: failed to find clien...
- 01:42 PM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user
- 01:20 PM Bug #45560 (Pending Backport): cephadm: fail to create OSDs
- 01:20 PM Bug #45632 (Pending Backport): nfs: auth credentials for recovery database include mds
- 09:09 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Maybe we can actually fix this by moving to our internal registry.
- 04:29 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-05-28_02:23:45-rados-wip-yuri-master_5.27.20-distro-basic-smithi/5098059
/a/yuriw-2020-05-28_02:23:45-...
- 12:21 AM Bug #45627 (Fix Under Review): cephadm: frequently getting `1 hosts fail cephadm check`
- 12:16 AM Bug #45700: cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Octopus
- I expect that the backport of https://github.com/ceph/ceph/pull/34745 will fix this.
- 12:13 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- Right, we're now using containers seriously. Thanks, David, for setting up the registries.
05/28/2020
- 09:45 PM Bug #45631 (In Progress): Error parsing image configuration: Invalid status code returned when fe...
- 09:45 PM Bug #45631 (New): Error parsing image configuration: Invalid status code returned when fetching b...
- /a/yuriw-2020-05-28_02:23:45-rados-wip-yuri-master_5.27.20-distro-basic-smithi/5098001...
- 08:57 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Okay, I set up a docker mirror and a quay registry today.
h3. docker-mirror.front.sepia.ceph.com...
- 03:44 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Not getting any better:...
- 08:21 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- David Galloway wrote:
> Sebastian Wagner wrote:
> > David, do you have an idea how to make quay.io reliable?
>
>...
- 06:26 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- ...
- 03:46 PM Bug #45737 (Duplicate): Module 'cephadm' has failed: cannot send (already closed?)
- 09:39 AM Bug #45737 (Duplicate): Module 'cephadm' has failed: cannot send (already closed?)
- Hi,
I have a development cluster running on 4 VMs. They're all running CentOS 8 Stream and were bootstrapped using ce...
- 02:31 PM Bug #45656 (Duplicate): orchestrator: Host affinity/antiaffinity
- 12:37 PM Bug #45696 (Pending Backport): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 12:37 PM Bug #45629 (Pending Backport): cephadm: Allow users to provide ssh keys during bootstrap
- 07:12 AM Bug #45625 (Pending Backport): cephadm: when configuring monitoring with ceph orch, ceph dashboar...
05/27/2020
- 09:35 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- I have no idea about the timing of this run other than to say as a general rule I wait until the shaman build page is...
- 12:04 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- I've been trying for ages to reproduce this one, but have failed so far. See my attempts at https://github.com/ceph/ceph/pull/...
- 12:01 PM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- Removed /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014074 since it's a diff...
- 11:52 AM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014074...
- 04:59 PM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- Sebastian Wagner wrote:
> David, do you have an idea how to make quay.io reliable?
I'm still a container noob. S...
- 09:42 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- David, do you have an idea how to make quay.io reliable?
- 01:25 AM Bug #45343: Error: error pulling image "quay.io/ceph-ci/ceph:" "net/http: TLS handshake timeout"
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083366...
- 01:49 PM Bug #45596 (Resolved): qa/tasks/cephadm: No cephadm module detected
- 01:21 PM Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name>...
- Error lines related to this issue can be found at line 660 and following.
- 11:00 AM Bug #45726 (Resolved): Module 'cephadm' has failed: auth get failed: failed to find client.crash....
- [ceph: root@ceph-node-00 /]# ceph -s
cluster:
id: 8dc6f04a-9fee-11ea-a46a-525400622549
health: HEALT...
- 12:34 PM Documentation #45728 (Resolved): Add an example for custom images to the "bootstrap a new cluster...
- ...
- 12:29 PM Feature #44628: cephadm: Add initial firewall management to cephadm
- For this, we'll need control of and information about the ports all the daemons use, especially if they're configurable,...
- 12:26 PM Bug #45624: cephadm: "ceph orch apply mgr" is deploying in wrong nodes
- probably getting fixed by https://github.com/ceph/ceph/pull/34633
- 12:26 PM Documentation #45623: cephadm: "ceph orch apply mon" is deploying in wrong nodes
- probably getting fixed by https://github.com/ceph/ceph/pull/34633
- 12:24 PM Bug #45621 (Duplicate): check-host returns terrible unhelpful error message
- 12:22 PM Bug #45617 (Pending Backport): mgr/orch: mds with explicit naming
- 12:09 PM Bug #45174 (Fix Under Review): cephadm: missing parameters on 'orch daemon add iscsi'
- 12:08 PM Bug #45587 (Need More Info): mgr/cephadm: Failed to create encrypted OSD
- Is this reproducible with the latest master?
- 10:17 AM Feature #44886: cephadm: allow use of authenticated registry
- see also https://github.com/ceph/ceph/pull/35217
- 09:50 AM Bug #45725 (Can't reproduce): cephadm: Further improve "Failed to infer CIDR network for mon ip"
- ...
- 09:33 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- Good idea! Will push up a PR for ceph (and one for remoto) in the morning :)
- 07:33 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- I'd go with solution 1.
Plus a monkey patch. Something like...
- 05:25 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- We just need to be connection aware.
Solution 1
===================
Remoto annoyingly doesn't have a method in...
- 03:01 AM Bug #45627: cephadm: frequently getting `1 hosts fail cephadm check`
- I've updated the duplicate bug with:
I've managed to recreate it: I have 2 nodes, node1 (10.20.92.201) and node2 (10.2...
- 09:26 AM Bug #45724 (Resolved): check-host should not fail using fqdn or not that hard
- I would suggest either identifying that it's an FQDN or answering "Host not found. Use 'ceph orch host ls' to see all manag...
- 07:52 AM Bug #45701 (Duplicate): rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health c...
- 07:43 AM Bug #45701: rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health check
- /a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085552
/a/yuriw-2020-05-23_15:15:01-...
- 07:30 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- solution:
1. set up a mirror at vossi04
2. added toml as a dependency for test, see https://github.com/ceph/teuth... - 07:27 AM Bug #45631 (Resolved): Error parsing image configuration: Invalid status code returned when fetch...
- 01:12 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083351...
- 02:59 AM Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
- I've managed to recreate it: I have 2 nodes, node1 (10.20.92.201) and node2 (10.20.92.202).
Node2 happens to be the cu...
- 02:58 AM Bug #45719 (Can't reproduce): CommandFailedError: Command failed on smithi073 with status 1: 'tes...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083241
/a/yuriw-2020-05-22_19:55:53-...
- 02:45 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083157
/a/yuriw-2020-05-22_19:55:53-...
05/26/2020
- 03:00 PM Feature #45712 (Duplicate): Add 'state' attribute to ServiceSpec
- We should track the following states:
* Pending
* Failure
* Success
* Creating
when applying/creating servic...
- 08:54 AM Documentation #44828: cephadm: clarify "Failed to infer CIDR network for mon ip"
- https://pad.ceph.com/p/cidr_error_cephadm_docs
The list of commands in this Etherpad represents Zac Dover's 26 May...
- 08:11 AM Bug #45631 (In Progress): Error parsing image configuration: Invalid status code returned when fe...
- 05:34 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083319
INFO:cephadm:ceph:stderr Er...
- 04:33 AM Bug #45701 (Duplicate): rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health c...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083267...
05/25/2020
- 08:48 PM Bug #44832: cephadm: `ceph cephadm generate-key` fails with No such file or directory: '/tmp/...
- @Vladimir: with Octopus, to find the version you are using it is no longer sufficient to examine RPMs. Please post th...
- 03:56 PM Bug #45700: cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Octopus
- Further testing: it appears none of the cryptsetup commands exits properly when called from ceph. When running them on...
- 03:40 PM Bug #45700 (Resolved): cryptsetup LuksOpen hangs while creating or starting an encrypted OSD - Oc...
- When creating or starting an encrypted OSD a cryptsetup process hangs indefinitely and prevents the creation or start...
- 12:02 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- - https://planet.jboss.org/post/deploy_and_configure_a_local_docker_caching_proxy
- https://docs.docker.com/registry... - 10:59 AM Bug #45696 (Fix Under Review): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 10:45 AM Bug #45696 (In Progress): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- 10:44 AM Bug #45696 (Resolved): cephadm: Validate bootstrap dashboard "key" and "cert" file exists
- While bootstrapping a cluster, I've provided "--dashboard-key" and "--dashboard-crt" options pointing to files that d...
- 09:39 AM Bug #45629 (Fix Under Review): cephadm: Allow users to provide ssh keys during bootstrap
05/24/2020
- 12:19 PM Bug #45672 (Can't reproduce): Unable to add additional hosts to cluster using cephadm
- After configuring nodes 2 and 3 with permission for the node 1 root user to SSH with Ceph's configuration and key, co...
- 11:52 AM Bug #44832: cephadm: `ceph cephadm generate-key` fails with No such file or directory: '/tmp/...
- Seems the target version does not include fix for the issue:
...
INFO:cephadm:Generating ssh key...
INFO:cephadm:N...
- 05:26 AM Feature #44886 (In Progress): cephadm: allow use of authenticated registry
05/23/2020
- 03:52 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- ...
- 03:18 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- Sebastian Wagner wrote:
> I think this is caused by our ci downloading too many monitoring images.
>
> I think we...
- 03:05 PM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- ...
05/22/2020
- 11:58 AM Bug #45596 (Fix Under Review): qa/tasks/cephadm: No cephadm module detected
- 10:43 AM Bug #45656 (Duplicate): orchestrator: Host affinity/antiaffinity
- Maybe we should follow the design/"spirit" embedded in:
"k8s's Assigning Pods to Nodes":https://kubernetes.io/docs/...
- 10:41 AM Feature #45655 (New): orchestrator: Host affinity/antiaffinity
- Maybe we should follow the design/"spirit" embedded in:
[[k8s's Assigning Pods to Nodes:https://kubernetes.io/docs/...
- 09:18 AM Feature #45654 (Rejected): orchestrator: support OSDs backed by LVM LV/VG
- Available devices should show unused LVs/VGs.
OSDs should use other logical volumes/volume groups as "storage dev...
- 09:17 AM Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly
- Michael Fritch wrote:
> Also, any ideas why the fsname is `all` ??
It is from here in ceph_mdss()...
- 08:41 AM Feature #45653 (Duplicate): cephadm: Improve safety by using a specific user
- cephadm must support a <username> option so you can specify any user that has password-less sudo.
Needed also to ...
- 08:35 AM Feature #45652 (Duplicate): cephadm: Allow user to select monitoring stack ports
- Users must be able to change the port settings for the monitoring stack.
We can continue using as default:...
- 07:42 AM Feature #44628: cephadm: Add initial firewall management to cephadm
- Users must be able to decide which ports to use (both http/https).
- 06:53 AM Bug #45625 (Fix Under Review): cephadm: when configuring monitoring with ceph orch, ceph dashboar...
- 04:44 AM Bug #45584 (Resolved): qa/tasks/cephadm: With roleless feature no mons are deployed
- 03:11 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- ...
- 02:54 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 01:03 AM Bug #45037 (Resolved): octopus: cephadm: non-ceph units put wrong uid:gid in systemd unit file
- 12:56 AM Bug #44894 (Resolved): cephadm: non-ceph units put wrong uid:gid in systemd unit file
05/21/2020
- 08:23 AM Bug #45625 (In Progress): cephadm: when configuring monitoring with ceph orch, ceph dashboard is ...
- 03:25 AM Feature #45163 (Pending Backport): cephadm: iscsi: read and write config-key for the dashboard
- 12:43 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Looks similar...
- 12:05 AM Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 42...
- I think this is caused by our ci downloading too many monitoring images.
I think we have two options now:
1. co...
05/20/2020
- 11:17 PM Bug #45632 (Fix Under Review): nfs: auth credentials for recovery database include mds
- 11:15 PM Bug #45632 (Resolved): nfs: auth credentials for recovery database include mds
- ...
- 06:44 PM Bug #45631 (Closed): Error parsing image configuration: Invalid status code returned when fetchin...
- ...
- 02:56 PM Bug #45629 (Resolved): cephadm: Allow users to provide ssh keys during bootstrap
- ATM, `cephadm bootstrap` will always generate new SSH keys, unless the `--skip-ssh` option is used.
But `--skip-ssh` al...
- 02:28 PM Bug #45618 (Can't reproduce): cephadm tests fail because missing image on quay.io
- see https://status.quay.io
> May 19, 2020
> Quay.io outage
> Resolved - Currently service is restored and stable...
- 08:12 AM Bug #45618: cephadm tests fail because missing image on quay.io
- Yesterday, quay.io returned "Bad Gateway" in my runs. I think this was an infrastructure issue at quay.io. I just sch...
- 06:30 AM Bug #45618 (Can't reproduce): cephadm tests fail because missing image on quay.io
- ...
- 02:11 PM Bug #45628 (Resolved): cephadm qa: smoke should verify daemons are actually running
- RGW failed:...
- 01:59 PM Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
- fixed after reboot of active mgr
root@one1-ceph4.storage.:~# ceph cephadm check-host one1-ceph5
one1-ceph5 (None)...
- 01:51 PM Bug #45032: cephadm: Not recovering from `OSError: cannot send (already closed?)`
- same here after a reboot of the hosts:
root@one1-ceph4.storage.:~# ceph cephadm check-host one1-ceph4
one1-ceph4 ...
- 01:49 PM Bug #45032 (Pending Backport): cephadm: Not recovering from `OSError: cannot send (already closed?)`
- I think we still have this problem.
- 01:47 PM Bug #45627 (Resolved): cephadm: frequently getting `1 hosts fail cephadm check`
- https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/ADK3Y2XHTIJ2YV6MFSQX4XPTQ4WP5ETM/...
- 01:07 PM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- in any case, the MDS error is pretty fatal:
https://github.com/ceph/ceph/blob/a7ea259f24dc08abf5458a79935f4f36ad7d...
- 12:55 PM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- mgr log...
- 12:38 PM Bug #45625 (Resolved): cephadm: when configuring monitoring with ceph orch, ceph dashboard is onl...
- I run the commands mentioned in https://docs.ceph.com/docs/master/cephadm/monitoring/#deploying-monitoring-with-cepha...
- 12:17 PM Bug #45624 (Can't reproduce): cephadm: "ceph orch apply mgr" is deploying in wrong nodes
- I add the mgr label to 3 of my nodes.
Then when I run 'ceph orch apply mgr' I expect that mgr is deployed to all 3...
- 11:58 AM Documentation #45623 (Can't reproduce): cephadm: "ceph orch apply mon" is deploying in wrong nodes
- I add the mon label to 3 of my 4 nodes,
then when I run 'ceph orch apply mon' I expect that mon is deployed to those...
- 10:51 AM Bug #45621: check-host returns terrible unhelpful error message
- I find that doing @ceph mgr fail@ fixes the problem, but one could never guess that from the message.
- 10:49 AM Bug #45621 (Duplicate): check-host returns terrible unhelpful error message
- After having some CEPHADM_HOST_CHECK_FAILED and CEPHADM_REFRESH_FAILED warnings after rebooting some hosts, I get the...
- 09:55 AM Bug #45594 (In Progress): cephadm: weight of a replaced OSD is 0
- This is certainly not intended. I'll investigate.
- 02:08 AM Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly
- Also, any ideas why the fsname is `all` ??
- 01:54 AM Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly
- We attempted to configure an MDS with file system `all` using an explicit daemon id of `a`?...
- 01:34 AM Bug #45617 (Fix Under Review): mgr/orch: mds with explicit naming
- 01:34 AM Bug #45617 (Fix Under Review): mgr/orch: mds with explicit naming
- 01:24 AM Bug #45617 (Resolved): mgr/orch: mds with explicit naming
- Explicitly naming an mds:...
- 12:54 AM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- I've created a PR to bind mount /lib/modules RO: https://github.com/ceph/ceph/pull/35141
Once I have the PR applie...
05/19/2020
- 03:54 PM Bug #45162 (Resolved): cephadm: iscsi should use the correct container image
- 03:53 PM Bug #44792 (Resolved): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- 03:53 PM Bug #45196 (Resolved): cephadm: remove 'fqdn_enabled' parameter from iSCSI service spec
- 03:52 PM Bug #44826 (Resolved): cephadm: "Deploying daemon crash.li221-238... ERROR: no keyring provided"
- 03:51 PM Bug #45161 (Resolved): cephadm: iscsi should validate the existence of the given pool
- 12:12 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- It seems that I'm nearly unable to reproduce this reliably.
- 11:49 AM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- Right, the mgr gets restarted:...
- 09:46 AM Bug #45596: qa/tasks/cephadm: No cephadm module detected
- This failure can also be seen in this test:
http://pulpito.ceph.com/varsha-2020-05-18_10:25:58-rados-wip-integrate-c...
- 08:59 AM Bug #45596 (Resolved): qa/tasks/cephadm: No cephadm module detected
- Something weird is happening here.
First mgr fails and cephadm is disabled.... - 10:55 AM Bug #45595: qa/tasks/cephadm: No filesystem is configured and MDS daemon gets deployed repeatedly
- the logs of a failed MDS:...
- 08:04 AM Bug #45595 (Can't reproduce): qa/tasks/cephadm: No filesystem is configured and MDS daemon gets d...
- On adding mds to roles...
- 10:48 AM Bug #45604 (Duplicate): mgr/cephadm: Failed to create an OSD
- 09:37 AM Bug #45604 (Duplicate): mgr/cephadm: Failed to create an OSD
- Creating an OSD using the following commands fails....
- 07:42 AM Bug #45594 (Resolved): cephadm: weight of a replaced OSD is 0
- Not sure if this is intended.
After deleting an OSD with the `--replace` flag and creating a new OSD on it, the OSD's WEIGHT i...
- 07:01 AM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- Hmm, that didn't happen on my test system. I might need to rebuild to check, I might have to reboot the host just in ...
- 02:44 AM Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
- Still seeing this after PR 34898 merged.
insert_error.txt contains more info
05/18/2020
- 03:57 PM Bug #45587: mgr/cephadm: Failed to create encrypted OSD
- note, octopus doesn't contain https://github.com/ceph/ceph/pull/34745
- 03:07 PM Bug #45587 (Resolved): mgr/cephadm: Failed to create encrypted OSD
- I cannot create an encrypted OSD using Ceph 15.2.1-277-g17d346932e on SES7....
- 01:36 PM Feature #45463 (Fix Under Review): cephadm: allow custom images for grafana, prometheus, alertman...
- 12:47 PM Bug #45584 (Fix Under Review): qa/tasks/cephadm: With roleless feature no mons are deployed
- 12:30 PM Bug #45584 (Resolved): qa/tasks/cephadm: With roleless feature no mons are deployed
- http://qa-proxy.ceph.com/teuthology/varsha-2020-05-12_12:13:48-rados-wip-varsha-testing-distro-basic-smithi/5049043/
...
- 08:17 AM Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
- I've created a PR (https://github.com/ceph/ceph/pull/35097) that makes the api_user and api_password mandatory.
- 08:05 AM Bug #45576: cephadm: `cephadm ls` does not play well with `cephadm logs`
- Same is true for ...
- 08:01 AM Bug #45576 (Resolved): cephadm: `cephadm ls` does not play well with `cephadm logs`
- Right now users need to run...
05/17/2020
- 06:12 AM Bug #45572 (Rejected): cephadm: ceph-crash isn't deployed anywhere
- Rejecting in favour of https://github.com/ceph/ceph-salt/issues/236
05/16/2020
- 10:36 PM Bug #45572: cephadm: ceph-crash isn't deployed anywhere
- Ah. Found it. This is due to ceph-salt calling @cephadm bootstrap@ with @--skip-ssh@, so presumably actually needs ...
- 06:50 AM Bug #45572 (Rejected): cephadm: ceph-crash isn't deployed anywhere
- AFAICT when deploying a containerized cluster with cephadm, ceph-crash is never deployed anywhere. This means that i...
05/15/2020
- 11:39 AM Bug #45560 (Fix Under Review): cephadm: fail to create OSDs
- 11:38 AM Bug #45560 (Pending Backport): cephadm: fail to create OSDs
- 09:49 AM Bug #45560 (In Progress): cephadm: fail to create OSDs
- 04:13 AM Bug #45560 (Resolved): cephadm: fail to create OSDs
- OSDs are not created after applying the following spec:...
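The spec in the report is truncated above; for orientation, a drive group spec of the general shape that `ceph orch apply osd -i` accepts looks roughly like this (a sketch only; the service_id, placement, and device filters are hypothetical, not the reporter's actual spec):

```yaml
service_type: osd
service_id: example_drivegroup   # hypothetical name
placement:
  host_pattern: '*'              # apply on every host
data_devices:
  rotational: 1                  # HDDs carry the data
db_devices:
  rotational: 0                  # SSDs carry the DB/WAL
```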
- 10:49 AM Feature #45565 (New): cephadm: A daemon should provide information about itself (e.g. service urls)
- As a normal user of Ceph, without much insight into its development and inner life, I expect that a fresh depl...
- 10:24 AM Bug #45407 (Fix Under Review): cephadm: Speed up OSD deployment preview
- 09:35 AM Documentation #45564 (Duplicate): cephadm: document workaround for accessing the admin socket by ...
- ...
- 07:52 AM Feature #45463 (In Progress): cephadm: allow custom images for grafana, prometheus, alertmanager ...
05/14/2020
- 11:51 PM Bug #45174: cephadm: missing parameters on 'orch daemon add iscsi'
- hrm, yeah good question. Any preferences? We could auto-generate them if not supplied and then something like `ceph orch ls --e...
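The auto-generation idea mentioned above could be sketched like this (purely illustrative Python; `autogen_credential` is a hypothetical helper, not the cephadm implementation):

```python
import secrets
import string

def autogen_credential(length=16):
    """Return a random alphanumeric credential, for use when the caller
    did not supply api_user/api_password explicitly (illustrative only)."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# e.g. fall back to a generated password when the spec omitted one
api_password = autogen_credential()
```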
- 02:13 PM Documentation #44354: cephadm: Log messages are missing
- https://serverfault.com/questions/809093/how-do-view-older-journalctl-logs-after-a-rotation-maybe
- 02:04 PM Feature #44578 (Rejected): cephadm: verify Grafana works with Prometheus HA
- works
- 02:02 PM Feature #44601: cephadm: Mix of hosts: with and without firewall
- Maybe we can expose this via `ceph orch host ls`?
- 02:02 PM Feature #45163 (Fix Under Review): cephadm: iscsi: read and write config-key for the dashboard
- 02:00 PM Bug #44729 (Can't reproduce): cephadm enter using docker is broken
- 01:49 PM Bug #45198 (Closed): cephadm: unable to add iSCSI daemon from service spec yaml file
- 01:48 PM Bug #45286 (Closed): cephadm: Adding hosts to the cluster fails
- 01:46 PM Bug #45394 (Pending Backport): cephadm: fail to create/preview OSDs via drive group
- 01:45 PM Bug #45417 (Pending Backport): cephadm: nfs grace remove killed before completion
- 01:43 PM Feature #43705 (Closed): cephadm: on config change, restart appropriate daemons
- seems to be done. sort of. reopen if required.
- 01:42 PM Bug #44577 (Closed): cephadm: reconfigure Prometheus on MGR failover
- no need. prometheus already knows all instances.
- 01:32 PM Bug #45258 (Duplicate): cephadm: iSCSIServiceSpec: user/password should be mandatory (or autogene...
- 01:30 PM Bug #45245 (Fix Under Review): cephadm: print iscsi container's log to stdout/stderr
- 11:07 AM Bug #44673 (Fix Under Review): cephadm: `orch apply` and `orch daemon add` use completely differe...
05/13/2020
- 03:14 PM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- So, as Joshua pointed out to me (since I had completely missed it), the reason for my confusion was that on upstream teutholo...
- 03:13 PM Bug #45534 (Closed): cephadm: "exec: \"--\": executable file not found in $PATH"
- 02:48 PM Bug #45534 (Closed): cephadm: "exec: \"--\": executable file not found in $PATH"
- ...
05/12/2020
- 05:12 PM Documentation #45411: cephadm: add section about container images
- PR 32410 was previously the Pull Request ID specified in the "Pull request ID" field of this bug.
- 03:44 PM Documentation #45411 (In Progress): cephadm: add section about container images
- 03:58 PM Feature #43940: orchestrator mgr add and rm
- not sure which PR implemented this, but it wasn't https://github.com/ceph/ceph/pull/33072
- 03:48 PM Documentation #45383 (In Progress): Cephadm.py OSD deployment fails: full device path or just the...
- 03:47 PM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- I'd like some feedback from the community (at as many levels as possible) about whether I should add a note to the do...
- 02:54 PM Bug #45032 (Fix Under Review): cephadm: Not recovering from `OSError: cannot send (already closed?)`
- 12:08 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- https://github.com/ceph/ceph/pull/35018 might make this thing go away, without fixing the underlying issue.
- 11:39 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 11:36 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 11:35 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- ...
- 11:30 AM Bug #45252 (Pending Backport): cephadm: fail to insert modules when creating iSCSI targets
- 11:29 AM Bug #45458 (Pending Backport): non-ascii chars in /etc/prometheus/ceph/ceph_default_alerts.yml
- 07:06 AM Feature #45163 (In Progress): cephadm: iscsi: read and write config-key for the dashboard
05/11/2020
- 05:11 PM Documentation #45411: cephadm: add section about container images
- PR:
https://github.com/ceph/ceph/pull/35006
- 01:02 PM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Apely AGAMAKOU wrote:
> Hi, I've got the same issue:
>
> OS: Debian 10 (buster)
> Ceph: Octopus (15.2.1)
> Node...
- 12:50 PM Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
- Hi, I've got the same issue:
OS: Debian 10 (buster)
Ceph: Octopus (15.2.1)
Nodes: 3
- 10:43 AM Bug #45393 (Pending Backport): Containerized osd config must be updated when adding/removing mons
- 10:41 AM Bug #45465 (Resolved): cephadm: `ceph orch restart osd` has the potential to break your cluster
- Multiple bugs here:
* the cephadm implementation doesn't check anything. (ceph osd ok-to-stop...., HEALTH_ERR)
* ...
- 10:40 AM Bug #45129 (Pending Backport): simple (ceph-disk) style OSDs adopted by cephadm don't start after...
- 10:01 AM Feature #45463 (Resolved): cephadm: allow custom images for grafana, prometheus, alertmanager and...
- Right now, users don't have a way to customize them at all.
I think we're going to need a grafana_image, like in ht...
- 09:54 AM Bug #45462 (Fix Under Review): 'https://download.ceph.com/debian-octopus focal Release' does not ...
- 09:52 AM Bug #45462 (Resolved): 'https://download.ceph.com/debian-octopus focal Release' does not have a R...
- http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/50...
- 12:54 AM Bug #45458 (Resolved): non-ascii chars in /etc/prometheus/ceph/ceph_default_alerts.yml
- http://qa-proxy.ceph.com/teuthology/mgfritch-2020-05-09_01:31:09-rados-wip-mgfritch-testing-2020-05-08-1646-distro-ba...
05/09/2020
- 09:13 PM Bug #44792 (Pending Backport): cephadm: make `cephadm shell` independent from /etc/ceph/ceph.conf
- 09:06 PM Bug #45427: cephadm: auth get failed: invalid entity_auth mon
- urgent. right now node-exporter is broken
- 09:05 PM Bug #45427 (Pending Backport): cephadm: auth get failed: invalid entity_auth mon
05/08/2020
- 10:07 PM Bug #45418 (Rejected): cephadm: `orch reconfig` does not reconfig the container image
- confirmed that the needed functionality is already provided via the 'redeploy' command...
- 09:11 PM Bug #45454 (Can't reproduce): cephadm: teardown: hang at sudo systemctl stop ceph-453d3962-9141-1...
- http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/
...
- 08:09 PM Bug #45452: cephadm: while removing ceph-common, unable to remove directory '/var/lib/ceph': Devi...
- http://pulpito.ceph.com/swagner-2020-05-08_13:51:20-rados-wip-swagner2-testing-2020-05-08-1134-distro-basic-smithi/50...
- 04:49 PM Bug #45452 (Closed): cephadm: while removing ceph-common, unable to remove directory '/var/lib/ce...
- http://pulpito.ceph.com/swagner-2020-05-08_13:49:07-rados-wip-swagner-testing-2020-05-08-1133-distro-basic-smithi/503...
- 03:25 PM Bug #45451 (Can't reproduce): cephadm: `ceph orch redeploy mgr` never returns
- problem: the current active manager is restarted synchronously, which means the command never completes.
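A hedged sketch of one possible way around this (names hypothetical, not the actual cephadm scheduler): cycle the standby daemons first and leave the active mgr for last, so the CLI command served by the active mgr can return before that daemon goes down.

```python
def restart_order(daemons, active):
    """Order daemons for restart so the active mgr comes last; the command
    handler can then reply before its own daemon is restarted (sketch)."""
    standbys = [d for d in daemons if d != active]
    return standbys + ([active] if active in daemons else [])
```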
- 03:16 PM Documentation #45450 (New): cephadm: what does redeploy vs reconfig actually mean and when do do ...
- When would a user want to call reconfig?
Is it more of an internal cephadm thing, such that users should always be pointed to...
- 12:00 PM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- It currently seems that using fixed versions for monitoring stack containers is the only way to ensure that major...
- 11:02 AM Bug #45427 (Fix Under Review): cephadm: auth get failed: invalid entity_auth mon
- 07:56 AM Documentation #44284 (Pending Backport): cephadm: provide a way to modify the initial crushmap
05/07/2020
- 03:05 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5...
- 11:35 AM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- In the past, https://github.com/ceph/ceph/pull/34091 was able to reproduce this bug consistently. I'll look into resu...
- 11:32 AM Bug #44990 (In Progress): cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such fil...
- 02:03 PM Bug #45427: cephadm: auth get failed: invalid entity_auth mon
- we probably don't need _get_config_and_keyring for node-exporter
- 10:13 AM Bug #45427 (Resolved): cephadm: auth get failed: invalid entity_auth mon
- http://pulpito.ceph.com/mgfritch-2020-05-07_02:27:06-rados-wip-mgfritch-testing-2020-05-06-1821-distro-basic-smithi/5...
- 11:52 AM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- The confusing part from my pov is that downstream this commit that strips the device name is breaking the tests and I...
- 11:39 AM Documentation #45383: Cephadm.py OSD deployment fails: full device path or just the name?
- Some background as to why this exists see (https://github.com/ceph/ceph/commit/f026a1c9f661fc1442048ef0bfadf84c35c142...
- 11:31 AM Bug #45421 (Duplicate): cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove c...
- 06:38 AM Bug #45421: cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove container c3e...
- /a/yuriw-2020-05-05_15:20:13-rados-wip-yuri8-testing-2020-05-04-2117-octopus-distro-basic-smithi/5024853
- 02:01 AM Bug #45421 (Duplicate): cephadm: MaxWhileTries: Waiting for 3 mons in monmap: "unable to remove c...
- /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014152
/a/bhubbard-2020-05-01_2... - 11:26 AM Bug #45394 (In Progress): cephadm: fail to create/preview OSDs via drive group
- 10:24 AM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- These are our current versions:
Grafana 5.3.3
Alertmanager 0.16.2
Prometheus 2.11.1
Node exporter 0.17.0
grafa...
- 10:03 AM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- These are the monitoring stack versions that we use in our nautilus-based releases:
grafana: 5.4.3
prometheus: v2....
- 01:44 AM Bug #45420 (Can't reproduce): cephadmunit.py: teuthology.exceptions.CommandFailedError: Command f...
- /a/bhubbard-2020-05-01_23:30:27-rados:thrash-old-clients-master-distro-basic-smithi/5014156...
- 12:23 AM Bug #45417 (Fix Under Review): cephadm: nfs grace remove killed before completion
- 12:22 AM Bug #45418 (Fix Under Review): cephadm: `orch reconfig` does not reconfig the container image
- 12:19 AM Bug #45418: cephadm: `orch reconfig` does not reconfig the container image
- applies to any changes to the systemd unit, unit.run, unit.poststop scripts etc.
workaround is to completely remov... - 12:18 AM Bug #45418 (Rejected): cephadm: `orch reconfig` does not reconfig the container image
- define a custom container image:...
05/06/2020
- 11:53 PM Bug #45417 (Resolved): cephadm: nfs grace remove killed before completion
- ganesha-rados-grace remove is killed before completion.
The shutdown of the nfs container + grace remove (unit.pos...
- 06:22 PM Bug #44990: cephadm: exec: "/usr/bin/ceph-mon": stat /usr/bin/ceph-mon: no such file or directory
- Could this be caused by the container image not being built yet, or would that present as a different error? With any...
- 04:57 PM Documentation #45411 (Resolved): cephadm: add section about container images
- * we recommend against using the...
- 04:18 PM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- This might not be an issue for minor version upgrades in Grafana and Prometheus, although it would be hard to guarant...
- 04:04 PM Feature #45410: cephadm: Support upgrading alertmanager, grafana, prometheus and node_exporter
- It would be nice to have these two things:
1. Use fixed-version images by default for the different components of th...
- 03:53 PM Feature #45410 (Resolved): cephadm: Support upgrading alertmanager, grafana, prometheus and node_...
- Right now, we're simply downloading :latest, which might even differ between daemons on different hosts.
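To illustrate why `:latest` is a problem (illustrative helper only, not part of cephadm): two hosts pulling the same `:latest` reference at different times can receive different images, whereas a digest or an explicit version tag pins the bits.

```python
def is_pinned(image):
    """True if an image reference pins a digest or a non-'latest' tag,
    i.e. every host pulling it gets identical bits (sketch)."""
    if "@sha256:" in image:
        return True
    last = image.rsplit("/", 1)[-1]   # strip registry/repository path
    if ":" not in last:
        return False                  # no tag means implicit :latest
    return last.rsplit(":", 1)[1] != "latest"
```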
- 02:36 PM Bug #45407 (Resolved): cephadm: Speed up OSD deployment preview
- There is a "pending Ceph Dashboard pull request":https://github.com/ceph/ceph/pull/34665 to implement a "preview" fea...
- 11:59 AM Bug #45399 (Resolved): NFS Ganesha : Error searching service specs for all nodes after nfs orch a...
- Environment :
- 3 hypervisors centos 8.1 (hyp00, hyp01, hyp02)
- 19 OSDs.
- cluster upgraded a month ago from n...
- 11:23 AM Bug #45393 (Fix Under Review): Containerized osd config must be updated when adding/removing mons
- It's always the little things...
- 08:58 AM Bug #45393: Containerized osd config must be updated when adding/removing mons
- A quick grep of my logs shows it reconfiguring the mons and mgrs, but not the osds.
- 08:36 AM Bug #45393: Containerized osd config must be updated when adding/removing mons
- Thanks for the pointer, I'll try to figure out what's going on, seeing as I'm the one who hit this :-)
- 07:42 AM Bug #45393: Containerized osd config must be updated when adding/removing mons
- This was fixed in https://github.com/ceph/ceph/pull/33855 . Looks like we have to figure out, what went wrong here.
- 06:41 AM Bug #45393 (Resolved): Containerized osd config must be updated when adding/removing mons
- Try this:
- bootstrap a cluster (1 mon, 1 mgr)
- add a bunch of osds (@ceph orch apply osd --all-available-device...
- 08:03 AM Bug #45394: cephadm: fail to create/preview OSDs via drive group
- -I could imagine that you're seeing this because the container images are not fully up to date yet. They're probably ...
- 06:56 AM Bug #45394 (Resolved): cephadm: fail to create/preview OSDs via drive group
- Create OSD with the following config:...