Activity
From 08/13/2020 to 09/11/2020
09/11/2020
- 03:04 PM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- -Could you push the code to a branch, please?- What Sebastian said :)
Also, what about finding a suitable timeslot...
- 03:01 PM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- Paul, can you make a WIP Pr with your current code?
- 05:24 AM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- quick demo of the work so far
https://gist.githubusercontent.com/pcuzner/574113c5976e05a78a6334901eba6319/raw/ae2f...
- 02:28 PM Feature #47335 (Fix Under Review): cephadm: Make the Monitoring templates super flexible
- 02:27 PM Support #47233 (Resolved): cephadm: orch apply mon "label:osd" crashes cluster
- 12:09 PM Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
- Tested
dev-box-1:/var/lib/ceph/662190fd-30cc-4f4f-945d-dd429e32b0c0 # podman --version
podman version 2.0.6
a...
- 11:32 AM Bug #46558: cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- DriveGroups are supposed to `describe` a state/layout without explicitly pointing to disk identifiers.
If there is...
- 10:35 AM Bug #47381: "ceph orch apply --dry-run" reports empty osdspec even though OSDs will be deployed
- Joshua Schmid wrote:
> This is most likely a known timing issue of the `--dry-run` command.
Could you provide a ...
- 09:35 AM Cleanup #47402 (New): CLI: orch osd rm: Replace svc_id with osd_id
- https://ceph.readthedocs.io/en/latest/mgr/orchestrator/#remove-an-osd
and section below refer to svc_id. Should be...
- 09:28 AM Bug #47401 (Resolved): improve drive group validation
- ...
09/10/2020
- 11:10 PM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- cephadm ls in 0.5 secs...on what system :) the current ls code takes 10secs per host in my physical lab!
Like I sa...
- 08:20 AM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- Actually this change is really impactful, as this introduces a fundamental change in the architecture of cephadm.
... - 05:33 AM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- I'll push the code tomorrow as a draft for people to poke at.
At this point deployment from cephadm looks like
c...
- 06:03 PM Bug #47291 (Resolved): cephadm: invalid unit.run file generated for iSCSI
- 03:12 PM Bug #47078 (In Progress): cephadm: ForwardToSyslog=yes (aka really huge syslog)
- 02:09 PM Bug #47387: rook: 'ceph orch ps' does not list daemons correctly
- Sebastian Wagner wrote:
> but `ceph orch ls` or `ceph orch status` works?
'ceph orch ls' doesn't work: https://tr...
- 01:40 PM Bug #47387: rook: 'ceph orch ps' does not list daemons correctly
- but `ceph orch ls` or `ceph orch status` works?
- 11:49 AM Bug #47387 (Resolved): rook: 'ceph orch ps' does not list daemons correctly
- Daemons are deployed successfully...
09/09/2020
- 10:49 PM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- This work has a simple goal - provide the data that mgr/cephadm needs faster to make the data more current and the UI...
- 02:55 PM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- Juan Miguel Olmo Martínez wrote:
> Sebastian Wagner wrote:
>
> >
> > * What's the point in establishing SSH con...
- 02:26 PM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- Sebastian Wagner wrote:
>
> * What's the point in establishing SSH connections from the MGR, if we already have ...
- 11:20 AM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- Juan Miguel Olmo Martínez wrote:
> I think that the target here is to provide a way to collect information of the ho...
- 08:54 AM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- I think that the target here is to provide a way to collect information of the hosts and daemons running in each host...
- 08:24 AM Feature #47368: Provide a daemon mode for cephadm to handle host/daemon state requests
- This has the potential to screw up cephadm pretty badly for all eternity, if we design the architecture wrong. We hav...
- 04:07 PM Bug #47384 (Fix Under Review): cephadm: Remove assignment to member variable in ServiceSpecs
- 04:07 PM Bug #47384 (Resolved): cephadm: Remove assignment to member variable in ServiceSpecs
- Remove unnecessary assignment to member variable 'preview_only', this is done in the constructor of the derived Servi...
- 03:02 PM Bug #47381: "ceph orch apply --dry-run" reports empty osdspec even though OSDs will be deployed
- This is most likely a known timing issue of the `--dry-run` command.
You may verify this by running the provided ...
- 02:01 PM Bug #47381 (Can't reproduce): "ceph orch apply --dry-run" reports empty osdspec even though OSDs ...
- If I precede "ceph orch apply --dry-run" with "ceph orch device ls --refresh", everything is fine.
But AFAICT we a...
- 01:55 PM Bug #47358: "ceph orch apply osd" chokes on valid service_spec.yml
- ...
- 11:14 AM Tasks #47369: Ceph scales to 100's of hosts, 1000's of OSDs....can orchestrator?
- A single Prometheus instance can, on a properly sized host, handle 1000 nodes. As an OSD is usually accompanied by ot...
- 07:47 AM Tasks #47369 (Resolved): Ceph scales to 100's of hosts, 1000's of OSDs....can orchestrator?
- This bug is intended to help us identify areas in mgr/orchestrator, mgr/cephadm, cephadm, mgr/prometheus and mgr/rook...
- 08:10 AM Feature #47370: cephadm to support configuration of rbd mirroring daemon
- what *exactly* do you need?
- 07:58 AM Feature #47370 (Resolved): cephadm to support configuration of rbd mirroring daemon
- This is a request for enhancement for cephadm to support configuration of the rbd mirroring daemon.
09/08/2020
- 09:51 PM Feature #47368 (Resolved): Provide a daemon mode for cephadm to handle host/daemon state requests
- Scaling the current methods for list_daemons and gather_facts to 100+ hosts represents problems in mgr multi-threadin...
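The truncated entry above is about scale, and the arithmetic is easy to sketch: one serial SSH round-trip per host makes refresh time grow linearly with host count, while a bounded worker pool caps it. An illustrative Python sketch (the `gather_facts` stand-in and pool size are invented for the example, not cephadm's actual code):

```python
import concurrent.futures
import time

def gather_facts(host: str) -> dict:
    """Stand-in for one SSH round-trip to a host (here just a short delay)."""
    time.sleep(0.01)
    return {"host": host, "up": True}

hosts = [f"node-{i:03d}" for i in range(100)]

# Serial cost: ~100 round-trips back to back.
# Pooled cost: ~ceil(100 / 32) batches of round-trips.
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
    facts = list(pool.map(gather_facts, hosts))

print(len(facts), all(f["up"] for f in facts))
```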
- 05:24 PM Bug #47366 (Closed): Mgr keeps dispatching `osd crush reweight` after OSD removal process is done
- hello,
First of all, I am not certain that this may have already been fixed by the PR of this resolved issue, if ...
- 02:37 PM Bug #47078: cephadm: ForwardToSyslog=yes (aka really huge syslog)
- https://docs.ceph.com/docs/master/cephadm/operations/#logging-to-files
- 02:36 PM Bug #47078: cephadm: ForwardToSyslog=yes (aka really huge syslog)
- https://github.com/systemd/systemd/pull/7198
- 12:29 PM Bug #47360 (Fix Under Review): cephadm: osd unit.run creates /var/run/ceph/$FSID too late, so OSD...
If you've got a node running OSDs and no other services, this is trivially reproducible by first running @ceph orch rm...
- 10:44 AM Bug #47360 (Resolved): cephadm: osd unit.run creates /var/run/ceph/$FSID too late, so OSD may not...
- 10:44 AM Bug #47360 (Resolved): cephadm: osd unit.run creates /var/run/ceph/$FSID too late, so OSD may not...
- The OSD unit.run file currently has the following form:...
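The generated file itself is truncated above, but the fix the bug title implies can be sketched: make sure the per-fsid run directory exists before the container starts. Paths and fsid below are examples (the fsid is borrowed from the podman output earlier in this log), not the actual unit.run contents:

```shell
#!/bin/sh
# Sketch only: create the fsid run directory *before* launching the container,
# so the daemon's admin socket has a home from the first second.
FSID="662190fd-30cc-4f4f-945d-dd429e32b0c0"   # example fsid from this log
RUN_DIR="${TMPDIR:-/tmp}/run/ceph/$FSID"      # stands in for /var/run/ceph/$FSID
install -d -m 0770 "$RUN_DIR"                 # idempotent pre-start step
echo "run dir ready: $RUN_DIR"
# ... the podman/docker run command for the OSD would follow here ...
```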
- 11:39 AM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- https://github.com/ceph/ceph/pull/36766 is probably going to help finding this.
- 11:35 AM Bug #44644: cephadm: RGW: updating the spec doesn't update the mon store
- Deleting realms, zones and RGW users is something we need to *carefully* think about. Under all circumstances we have...
- 10:32 AM Bug #47358 (Resolved): "ceph orch apply osd" chokes on valid service_spec.yml
- The following service_spec.yml works fine for "ceph orch apply", but produces an error with "ceph orch apply osd"
...
- 03:44 AM Bug #47340: _list_devices: 'NoneType' object has no attribute 'get'
- Doesn't this require https://tracker.ceph.com/issues/47146 first to backport in ceph-volume, and then orch device ls ...
09/07/2020
- 04:01 PM Bug #47340 (Duplicate): _list_devices: 'NoneType' object has no attribute 'get'
- https://pulpito.ceph.com/swagner-2020-09-07_12:17:11-rados:cephadm-wip-swagner2-testing-2020-09-07-1101-distro-basic-...
- 11:22 AM Bug #47337 (Closed): rook: 'ceph orch ls' fails
- ...
- 11:14 AM Bug #47336: `orch device ls`: Unexpected argument '--wide'
- interesting: `master` actually has --wide!...
- 11:04 AM Bug #47336 (Can't reproduce): `orch device ls`: Unexpected argument '--wide'
- https://pulpito.ceph.com/teuthology-2020-09-03_07:01:02-rados-master-distro-basic-smithi/...
- 11:01 AM Bug #47109 (Resolved): while doing an upgrade: Module 'osd_support' has failed: Not found or unlo...
- 11:00 AM Feature #47335 (Resolved): cephadm: Make the Monitoring templates super flexible
- The idea is to allow power-users to modify the templates, if they really want to.
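As an illustration of that idea (names and storage are invented here; this is not cephadm's actual template API): render a built-in default unless the operator has supplied an override.

```python
from string import Template

# Built-in defaults; a power-user may replace any entry wholesale.
DEFAULT_TEMPLATES = {
    "prometheus.yml": Template("scrape_interval: ${interval}\n"),
}
# In a real design this could come from e.g. `ceph config-key` storage.
user_overrides: dict = {}

def render(name: str, **kw) -> str:
    """Prefer the operator-supplied template, fall back to the default."""
    tpl = user_overrides.get(name, DEFAULT_TEMPLATES[name])
    return tpl.substitute(**kw)

print(render("prometheus.yml", interval="15s"))  # default template
user_overrides["prometheus.yml"] = Template("scrape_interval: ${interval}  # tuned\n")
print(render("prometheus.yml", interval="5s"))   # operator override wins
```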
- 10:26 AM Bug #47333 (Resolved): repo_digest: fix `orch ps`
- https://github.com/ceph/ceph/pull/36432#issuecomment-682414409
According to systemctl status, my podman is execute...
- 09:57 AM Bug #47332 (Resolved): repo_digest: Follow up's
- ...
- 08:48 AM Bug #47305 (Pending Backport): cephadm: podman not optional on SUSE
09/06/2020
- 10:32 AM Documentation #47142: docs: explain the difference between services and daemons
- While the previous comment has a technical definition of various terms, it doesn't make clear the difference between ...
09/05/2020
09/04/2020
- 03:49 PM Bug #47305 (Resolved): cephadm: podman not optional on SUSE
- 01:25 PM Bug #47291 (Fix Under Review): cephadm: invalid unit.run file generated for iSCSI
- 09:28 AM Documentation #47142: docs: explain the difference between services and daemons
- Let's define things for *cephadm* a bit more in detail:
* a *service_type* is something like mon, mgr, alertmanage...
- 06:41 AM Tasks #46551: cephadm: Add better a better hint how to add a host
- This will give you that warning:...
- 04:16 AM Bug #47298 (Resolved): mgr/cephadm: keyrings are left behind after daemon removal
- keyrings are not removed by cephadm after a daemon removal, which can cause the number of created keyrings to grow af...
09/03/2020
- 11:25 PM Bug #47291: cephadm: invalid unit.run file generated for iSCSI
- oh interesting, good spot. I deploy using podman and haven't seen that error. But it's true they probably shouldn't b...
- 05:21 PM Bug #47291: cephadm: invalid unit.run file generated for iSCSI
- This is specific to hosts using podman. Hosts using docker shouldn't see this.
- 05:18 PM Bug #47291 (Resolved): cephadm: invalid unit.run file generated for iSCSI
- Current unit.run generated for iSCSI looks like ...
- 10:34 PM Bug #47109: while doing an upgrade: Module 'osd_support' has failed: Not found or unloadable
https://github.com/ceph/ceph/pull/36973 merged
- 09:04 PM Feature #47165 (Closed): Update orch device ls (plain) to show the LSM data now available from ce...
- PR merged
- 12:44 PM Bug #46922 (Pending Backport): cephadm: IPv6 syntax inconsistency
- 12:23 PM Feature #47286 (Resolved): cephadm: Local registry setup
- The idea is to optimize cluster deployment time by creating a local registry that will then be used to pull images fr...
- 02:04 AM Feature #44055 (Fix Under Review): cephadm: make 'ls' faster
- PR submitted
09/02/2020
- 05:57 PM Bug #46704: container_linux.go:349: "exec: \"stat\": executable file not found
- Sebastian Wagner wrote:
> Deepika Upadhyay wrote:
> > [...]
> >
> > /a/yuriw-2020-08-20_00:20:21-rados-wip-yuri7...
- 04:05 PM Feature #47274 (New): cephadm: make the container_image setting available to the cephadm binary i...
- The problem is:
If someone calls...
- 02:09 PM Bug #47109 (Fix Under Review): while doing an upgrade: Module 'osd_support' has failed: Not found...
- 07:50 AM Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
- In my experience using yaml is typically enough to prevent those errors.
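For comparison, the YAML spec form of the same request is harder to get wrong than shell quoting (the label name here is an example, applied with @ceph orch apply -i mon.yaml@):

```
service_type: mon
placement:
  label: mon
```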
- 07:05 AM Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
- I support this notion as it will help to massively reduce the risk of doing somewhat obvious errors. The yaml approac...
- 05:30 AM Feature #47261 (Resolved): cephadm integration for cephfs-mirror daemon
- ...
09/01/2020
- 04:11 PM Bug #46529 (Pending Backport): cephadm: error removing storage for container "...-mon": remove /v...
- 03:54 PM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- https://github.com/ceph/ceph/pull/36915 merged
https://github.com/ceph/ceph/pull/36931 - octopus PR
- 10:23 AM Bug #46529 (Fix Under Review): cephadm: error removing storage for container "...-mon": remove /v...
- 02:06 PM Bug #46654 (Resolved): Unsupported podman container configuration via systemd
- 02:01 PM Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
- G. Heinrich wrote:
> how cephadm could prevent such situation in the first place.
We need to encourage users to u...
- 01:31 PM Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
- Thanks for your feedback. Your solution is working perfectly, I edited the ceph.conf and afterwards the cluster statu...
- 12:23 PM Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
- Ha! You should now have three MONs on osd-01, osd-02 and osd-03.
Unfortunately your /etc/ceph/ceph.conf is outdated...
- 12:06 PM Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
- Do you have the MGR logs? https://docs.ceph.com/docs/master/cephadm/troubleshooting/
- 11:59 AM Support #47233 (Resolved): cephadm: orch apply mon "label:osd" crashes cluster
- I have a virtual Ceph cluster with 6 VMs/Hosts running on Ubuntu Server 20.04. The cluster is running on Podman.
Thr...
- 12:04 PM Bug #47234 (Resolved): by default, cephadm creates 4 MONs
- In case you have 4 hosts, cephadm will by default spawn 4 MONs.
We should probably improve this and only create 3...
- 10:54 AM Bug #45093: cephadm: mgrs transiently getting co-located (one node gets two when only one was ask...
- > only happens with MGRs
It has recently been reported to happen at bootstrap, when supplying a spec that asks for...
- 09:01 AM Cleanup #43700: cephadm: make it a proper python package
- I've attached a possible roadmap for doing this
!https://tracker.ceph.com/attachments/download/5080/Screenshot_202...
- 06:20 AM Bug #46910: cephadm: Unable to create an iSCSI target
- The first OBS request has been accepted. The suse product container doesn't suffer from the same problem. It seems it...
- 04:50 AM Documentation #47142: docs: explain the difference between services and daemons
- Sebastian Wagner wrote:
> https://docs.ceph.com/docs/master/mgr/orchestrator/#orchestrator-cli ??
That's a good f...
08/31/2020
- 11:38 PM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- https://github.com/ceph/ceph/pull/36915
- 11:48 AM Bug #47171 (Resolved): mgr/cephadm: removing last service of a type raises IndexError
- 09:53 AM Bug #47185 (Resolved): TypeError: _daemon_add_misc() got an unexpected keyword argument
- 02:46 AM Bug #47185: TypeError: _daemon_add_misc() got an unexpected keyword argument
- also, ...
- 02:35 AM Bug #47185: TypeError: _daemon_add_misc() got an unexpected keyword argument
- ...
- 01:08 AM Bug #47185: TypeError: _daemon_add_misc() got an unexpected keyword argument
- offending PR https://github.com/ceph/ceph/pull/29489
- 01:03 AM Bug #47185: TypeError: _daemon_add_misc() got an unexpected keyword argument
- urgent, because almost all cephadm tests are failing: https://pulpito.ceph.com/kchai-2020-08-29_13:54:19-rados-wip-ke...
- 02:29 AM Feature #47165 (In Progress): Update orch device ls (plain) to show the LSM data now available fr...
- PR raised which gives the following output...
- 01:44 AM Bug #46910: cephadm: Unable to create an iSCSI target
- Created an open build system (OBS) request (PR) to add the missing package to the ceph-base pattern used in building...
08/29/2020
08/28/2020
- 10:28 AM Bug #47185 (Resolved): TypeError: _daemon_add_misc() got an unexpected keyword argument
- https://pulpito.ceph.com/swagner-2020-08-28_09:46:34-rados:cephadm-wip-swagner-testing-2020-08-28-1004-distro-basic-s...
- 10:16 AM Feature #47165: Update orch device ls (plain) to show the LSM data now available from ceph-volume
- We're quickly approaching a state where the table is too big to look nice.
Might probably make sense to add somet...
- 10:10 AM Bug #46412 (Can't reproduce): cephadm trying to pull mimic based image
- please reopen, if you still see this
- 10:09 AM Bug #47134 (Pending Backport): cephadm: Unable to infer container image when tag is missing
- 10:08 AM Bug #45279 (Resolved): cephadm bootstrap: monmaptool --create: error writing to '/tmp/monmap': (2...
- I think this is fixed with https://github.com/ceph/ceph/pull/35651
Please reopen if you still see this.
- 10:04 AM Bug #47141 (Resolved): cephadm: make check: _check_pool_exists: PyCapsule_GetPointer called with ...
- 10:01 AM Bug #47170 (Fix Under Review): cephadm "ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d-osd.3-activate"...
- 07:35 AM Bug #47170: cephadm "ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d-osd.3-activate" is already in use ...
- /a/yuriw-2020-08-26_18:16:40-rados-wip-yuri-testing-2020-08-26-1631-octopus-distro-basic-smithi/5378288
- 08:06 AM Bug #47127: osd_id_claims uses shortlabel instead of the FQDN and cannot be fulfilled.
- I have tracked it down to the ...
- 07:34 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- 5378365, 5378277, 5378451, 5378510
yuriw-2020-08-26_18:16:40-rados-wip-yuri-testing-2020-08-26-1631-octopus-distro-b...
- 12:21 AM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- /a/teuthology-2020-08-26_07:01:02-rados-master-distro-basic-smithi/5377136
/a/teuthology-2020-08-26_07:01:02-rados-m...
08/27/2020
- 07:16 PM Bug #46654 (Fix Under Review): Unsupported podman container configuration via systemd
- 04:52 PM Bug #47171 (Fix Under Review): mgr/cephadm: removing last service of a type raises IndexError
- 03:40 PM Bug #47171 (Resolved): mgr/cephadm: removing last service of a type raises IndexError
- ...
- 03:39 PM Bug #47170 (Resolved): cephadm "ceph-c2f4ec26-c63c-11ea-80c1-90b11c20b87d-osd.3-activate" is alre...
- ...
- 03:02 PM Bug #45465: cephadm: `ceph orch restart osd` has the potential to break your cluster
- ...
- 12:22 PM Feature #46666 (In Progress): cephadm: Introduce 'container' specification to deploy custom conta...
- 12:08 PM Bug #47169 (Won't Fix): cephadm: wrong command parsing for `cephadm logs`
- ...
- 04:33 AM Feature #46811 (Closed): cephadm: add host metadata to the orchestrator's inventory
- 04:30 AM Feature #47165 (Closed): Update orch device ls (plain) to show the LSM data now available from ce...
- Data returned from ceph-volume now includes libstoragemgmt data (health, transport, led support etc), so the orch dev...
08/26/2020
- 12:56 PM Bug #44231 (Fix Under Review): cephadm: cannot capture core files
- 09:12 AM Feature #47145 (Closed): cephadm: Multiple daemons of the same service on single host
- ...
- 08:09 AM Documentation #47142: docs: explain the difference between services and daemons
- https://docs.ceph.com/docs/master/mgr/orchestrator/#orchestrator-cli ??
- 07:56 AM Documentation #47142 (Resolved): docs: explain the difference between services and daemons
- 26 Aug 2020, ~1700 AEST:
<wowas> https://tracker.ceph.com/issues/46082 (...)`orch ps` will list the cephadm daemons,...
- 07:50 AM Bug #47141 (Resolved): cephadm: make check: _check_pool_exists: PyCapsule_GetPointer called with ...
- https://jenkins.ceph.com/job/ceph-pull-requests/58466/consoleFull...
- 01:56 AM Feature #44055 (In Progress): cephadm: make 'ls' faster
- here's the gist https://gist.github.com/pcuzner/c8940e4af5f817b817640e97bff50e91
to test, just download to a ceph ...
- 12:23 AM Feature #47139: Require a minimum version for podman/docker
- +1000
the check_host code currently does no checking at all on component versions that ceph functionality depends on...
08/25/2020
- 09:33 PM Feature #44055: cephadm: make 'ls' faster
- I think there are a couple of things that we need to change in our approach
1. Don't inspect the containers, and exe...
- 09:18 PM Feature #47139 (Resolved): Require a minimum version for podman/docker
- Requiring a minimum version for podman/docker (and just pre-reqs in general) will reduce qa overhead and make it easi...
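As a sketch of what such a gate could look like in check_host (the minimum version and the parsing are assumptions based on the `podman --version` output quoted earlier in this log, not an agreed requirement):

```python
import re

MIN_PODMAN = (2, 0, 0)  # assumed floor, not an official requirement

def parse_version(output: str) -> tuple:
    """Parse e.g. 'podman version 2.0.6' into the tuple (2, 0, 6)."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", output)
    if not m:
        raise ValueError(f"unrecognized version output: {output!r}")
    return tuple(int(g) for g in m.groups())

# Tuple comparison gives a natural "new enough?" check.
assert parse_version("podman version 2.0.6") >= MIN_PODMAN
assert parse_version("podman version 1.6.4") < MIN_PODMAN
```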
- 02:28 PM Bug #47134 (Resolved): cephadm: Unable to infer container image when tag is missing
- The image tag for a running container can move to another container image on pull:...
- 12:23 PM Cleanup #47131 (Resolved): mgr/cephadm: Move serve() into a dedicated serve.py
- Including all methods called by serve()
Including all methods called by methods called by serve()
Including all...
- 08:19 AM Bug #47127 (New): osd_id_claims uses shortlabel instead of the FQDN and cannot be fulfilled.
- Hello!
I was testing replacing disks on my 9 node ceph 15.2.4 cluster provisioned by cephadm and using the orchest...
08/24/2020
- 01:53 PM Bug #46558: cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- @Sebastien : does that mean we need another tracker for implementing this feature or it won't happen at all ?
I th...
- 01:48 PM Bug #46558 (Resolved): cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- 01:22 PM Bug #46558: cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- It looks like it has been decided to ignore this at the moment
https://github.com/ceph/ceph/pull/36543
- 04:31 AM Bug #46558: cephadm: paths attribute ignored for db_devices/wal_devices via OSD spec
- Dimitri Savineau wrote:
> When creating an OSD spec for using dedicated devices for either DB and/or WAL bluestore d...
- 01:51 PM Bug #44926 (Resolved): dashboard: creating a new bucket causes InvalidLocationConstraint
- 01:50 PM Feature #44628 (Resolved): cephadm: Add initial firewall management to cephadm
- 01:49 PM Bug #44252 (Resolved): cephadm: mgr,mds scale-down should prefer standby daemons
- 01:44 PM Documentation #46701 (Resolved): remove `alias ceph='cephadm shell -- ceph'`
- 01:44 PM Bug #46813 (Resolved): `ceph orch * --refresh` is broken
- 01:43 PM Bug #46833 (Resolved): simple (ceph-disk style) OSDs adopted by cephadm must not call `ceph-volum...
- 01:43 PM Bug #46748 (Resolved): Module 'cephadm' has failed: auth get failed: failed to find osd.32 in key...
- 01:38 PM Bug #46922 (Fix Under Review): cephadm: IPv6 syntax inconsistency
- 12:09 PM Bug #47109 (Resolved): while doing an upgrade: Module 'osd_support' has failed: Not found or unlo...
- ...
- 10:33 AM Bug #47107 (Resolved): device-health-metrics unavailable because image ceph/ceph:latest has smart...
- Hey hello,
My ceph 15 cluster built by cephadm using ceph/ceph:latest docker image cannot read smart data on dis...
- 09:15 AM Bug #47078: cephadm: ForwardToSyslog=yes (aka really huge syslog)
- Maybe we can use journald namespaces??? https://www.freedesktop.org/software/systemd/man/systemd-journald.service.htm...
08/22/2020
- 05:09 PM Feature #47079 (Resolved): cephadm should create a log file
- If someone runs into issues that only are logged to stdout during the first time a specific action is executed - it i...
- 05:07 PM Bug #47078 (Need More Info): cephadm: ForwardToSyslog=yes (aka really huge syslog)
@ForwardToSyslog=yes@ is the default in many distros.
Which means, we have a chain here:
container's stdout -> syste...
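One per-host way to break the container-stdout → journald → syslog duplication chain described above is a journald drop-in, followed by @systemctl restart systemd-journald@ (a local mitigation, not necessarily the fix cephadm will ship):

```
# /etc/systemd/journald.conf.d/ceph-no-syslog.conf
[Journal]
ForwardToSyslog=no
```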
08/21/2020
- 03:50 PM Bug #45465: cephadm: `ceph orch restart osd` has the potential to break your cluster
- depends on https://github.com/ceph/ceph/pull/36753
- 03:48 PM Bug #46704: container_linux.go:349: "exec: \"stat\": executable file not found
- Deepika Upadhyay wrote:
> [...]
>
> /a/yuriw-2020-08-20_00:20:21-rados-wip-yuri7-testing-2020-08-19-2051-octopus-...
- 12:12 PM Bug #46704: container_linux.go:349: "exec: \"stat\": executable file not found
- ...
- 03:47 PM Bug #46529: cephadm: error removing storage for container "...-mon": remove /var/lib/containers/s...
- ...
- 12:37 PM Bug #45861: data_devices: limit 3 deployed 6 osds per node
- I tried a different approach:
```
service_type: osd
service_id: 3db_per_nvme
placement:
host_pattern: 'bluesha...
- 08:29 AM Bug #46748 (Pending Backport): Module 'cephadm' has failed: auth get failed: failed to find osd.3...
- 08:29 AM Documentation #46701 (Pending Backport): remove `alias ceph='cephadm shell -- ceph'`
- 01:45 AM Bug #46910: cephadm: Unable to create an iSCSI target
- Finally tracked down the issue. Turns out our suse ceph image doesn't contain the tcmu-runner-handler-rbd package. If...
08/19/2020
- 10:07 PM Feature #47038: cephadm: Automatically deploy failed daemons on other hosts
- Nathan Cutler wrote:
> Which daemon types would this apply to? I can think of more potential problems, depending on ...
- 08:03 PM Feature #47038: cephadm: Automatically deploy failed daemons on other hosts
- Which daemon types would this apply to? I can think of more potential problems, depending on daemon type:
OSD daem...
- 12:42 PM Feature #47038 (New): cephadm: Automatically deploy failed daemons on other hosts
- currently cephadm doesn't automatically re-distribute containers to new hosts. Right now, this is a manual step.
l...
- 12:37 PM Documentation #45623: cephadm: "ceph orch apply mon" is deploying in wrong nodes
- Ah, got it:
The label is a user-defined field. There is no automatic deduction of the placement. You need to set t...
- 12:32 PM Bug #43838 (New): cephadm: Forcefully Remove Services (unresponsive hosts)
- 09:34 AM Bug #47035 (Resolved): TypeError: _daemon_action_redeploy() missing 1 required positional argumen...
- ...
- 07:16 AM Bug #46862: cephadm: nfs ganesha client mount is read only
- > The configuration is basically the same than the template from [1]
The export block is missing and it is expecte...
08/18/2020
- 06:02 PM Bug #46862: cephadm: nfs ganesha client mount is read only
- > It seems the export creation was not successful. Please share the ganesha config file.
The configuration is basi...
08/17/2020
- 07:34 PM Bug #46665 (Fix Under Review): cephadm plugin: Failure to start service stops service loop; no ot...
- 04:26 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
The moral of the story is: wait for the bootstrap MON(s) and MGR(s) to appear in "cephadm ls" before proceeding with ...
- 04:00 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- 04:00 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
Note: this problem is known to arise (only on machines with root filesystem on HDD) the first time "ceph orch apply" ...
- 12:44 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- 12:44 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- Seems to:
(1) happen in libvirt VMs running on slower hardware (e.g. nested virt)
(2) be a recent regression
- 12:37 PM Bug #46990: execnet: EOFError: couldnt load message header, expected 9 bytes, got 0
- Might relate to https://www.reddit.com/r/ceph/comments/f4rjrk/taking_another_whack_at_ceph_on_odroid_hc2s/
- 12:19 PM Bug #46990 (Can't reproduce): execnet: EOFError: couldnt load message header, expected 9 bytes, g...
- ...
- 03:07 PM Bug #46862 (Need More Info): cephadm: nfs ganesha client mount is read only
- Dimitri Savineau wrote:
> When trying to mount a ganesha share via nfs on a client node, the mount command is succes...
- 01:17 PM Bug #45792 (Resolved): cephadm: zapped OSD gets re-added to the cluster.
- 12:57 PM Bug #46991: cephadm ls does not list legacy rgws
- Sebastian Wagner wrote:
> For RGWs, please follow step 11 to redeploy the RGWs with cephadm!
This was the step 4t...
- 12:55 PM Bug #46991: cephadm ls does not list legacy rgws
- For RGWs, please follow step 11 to redeploy the RGWs with cephadm!
- 12:21 PM Bug #46991 (Resolved): cephadm ls does not list legacy rgws
- I am trying to convert ceph-ansible cluster to cephadm, when I see ceph -s...
- 12:52 PM Bug #46045 (Resolved): qa/tasks/cephadm: Module 'dashboard' is not enabled error
- 12:52 PM Bug #45594 (Resolved): cephadm: weight of a replaced OSD is 0
- 12:51 PM Feature #44548 (Resolved): cephadm: persist osd removal queue
- 12:51 PM Bug #46740 (Resolved): mgr/cephadm: restart of daemon reports host is empty
- 12:51 PM Bug #43681 (Resolved): cephadm: Streamline RGW deployment
- 12:50 PM Bug #46764: cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy any OSDs
- Though I cannot access the MGR log, I think it's safe to assume it would look like the one in #46990
- 12:49 PM Bug #46764 (Duplicate): cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy any OSDs
- 12:44 PM Bug #46764 (Need More Info): cephadm (ceph orch apply) sometimes gets "stuck" and cannot deploy a...
- Thanks for the report. I'll need more info to resolve this. MGR log might be helpful
- 12:49 PM Bug #46540 (Resolved): cephadm: iSCSI gateways problems.
- 12:48 PM Documentation #45858 (Resolved): `ceph orch status` doesn't show in progress actions
- 12:48 PM Bug #46175 (Resolved): cephadm: orch apply -i: MON and MGR service specs must not have a service_id
- 12:04 PM Bug #46921: Fedora: Download URL not found for cephadm installation
- I don't think we have official packages for fc32. See https://download.ceph.com/rpm-octopus/
08/15/2020
- 03:37 AM Bug #46921: Fedora: Download URL not found for cephadm installation
- From https://docs.ceph.com/docs/master/releases/octopus/#v15-2-0-octopus, seems it is not yet supported?
08/13/2020
- 03:04 PM Bug #46922: cephadm: IPv6 syntax inconsistency
- I'll have a look into making sure IPv6 addresses can be given wrapped or unwrapped and we'd just detect and do accord...
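Detecting wrapped vs. unwrapped addresses is straightforward with the Python stdlib; a sketch of the idea (the function name is invented for the example):

```python
import ipaddress

def normalize_mon_ip(addr: str) -> str:
    """Accept '::1', '[::1]' or '10.0.0.1'; return IPv6 in wrapped form."""
    stripped = addr[1:-1] if addr.startswith("[") and addr.endswith("]") else addr
    ip = ipaddress.ip_address(stripped)  # raises ValueError on garbage
    return f"[{ip}]" if ip.version == 6 else str(ip)

assert normalize_mon_ip("[2001:db8::1]") == "[2001:db8::1]"
assert normalize_mon_ip("2001:db8::1") == "[2001:db8::1]"
assert normalize_mon_ip("10.0.0.1") == "10.0.0.1"
```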
- 12:39 PM Bug #46922 (Resolved): cephadm: IPv6 syntax inconsistency
- While trying to bootstrap a cluster with `--mon-ip` and `--apply-spec` options I found that something IPv6 syntax is `...
- 01:13 PM Bug #46910: cephadm: Unable to create an iSCSI target
- ...
- 04:54 AM Bug #46910: cephadm: Unable to create an iSCSI target
- I think I've had this before, if I remember correctly I've seen this when the tcmu-runner container was running but d...
- 12:01 PM Bug #46921 (Resolved): Fedora: Download URL not found for cephadm installation
- I'm following [the guide](https://ceph.readthedocs.io/en/latest/cephadm/install/) for installing Ceph using cephadm. ...
- 04:47 AM Feature #46811 (Fix Under Review): cephadm: add host metadata to the orchestrator's inventory