Activity
From 02/09/2020 to 03/09/2020
03/09/2020
- 09:52 PM Feature #43962 (Resolved): cephadm: Make mgr/cephadm declarative
- 06:14 PM Bug #44440 (In Progress): cephadm should be able to infer running container
- 05:27 PM Bug #44526 (Fix Under Review): sporadic cephadm bootstrap failures: 'timed out'
- 05:27 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- I think the fundamental problem here is how ceph.in is using librados. One thread is trying to do some work, which i...
- 04:12 PM Bug #44526: sporadic cephadm bootstrap failures: 'timed out'
- ceph.in sets a short 5s timeout for -h, and that's triggering shutdown, but then ceph isn't cleanly stopping...
- 03:55 PM Bug #44526 (Resolved): sporadic cephadm bootstrap failures: 'timed out'
- ...
- 04:12 AM Bug #44513 (Resolved): mgr/cephadm: `orch ps --refresh` returns no results
- h3. Steps to reproduce
* Create services and list daemons...
- 04:05 AM Bug #44512 (Resolved): mgr/cephadm: `orch ls` doesn't obey filters
- h3. Steps to reproduce
* Create a service, e.g. mgr
* List the service with service_type filter, say `osd`. The r...
03/08/2020
- 10:30 PM Bug #44253 (Resolved): _apply_service should move services, not just expand/contract
- 10:30 PM Bug #44254 (Resolved): scheduler should prefer existing daemon locations
- 10:30 PM Bug #44392 (Resolved): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Servi...
- 10:30 PM Bug #44491 (Resolved): mgr/cephadm: fail to load service specs after restarting
- 10:29 PM Bug #44167 (Resolved): cephadm/ def _update_service: Remove should make use of spec.placement.hosts
- 10:29 PM Bug #44302 (Resolved): cephadm: apply_mon: NotImplementedError
03/07/2020
- 05:48 PM Bug #43713 (Resolved): drive group filters: use `and` instead of `or`
- 12:21 AM Bug #44440: cephadm should be able to infer running container
- ceph-container proposal for adding a new LABEL - https://github.com/ceph/ceph-container/pull/1604
03/06/2020
- 09:25 PM Feature #43937 (Rejected): cephadm: make default image configurable
- Closing in favor of https://tracker.ceph.com/issues/44440, see https://github.com/ceph/ceph/pull/33781 for more infor...
- 12:29 PM Feature #43937 (Fix Under Review): cephadm: make default image configurable
- 09:02 PM Bug #44302 (Fix Under Review): cephadm: apply_mon: NotImplementedError
- 07:21 PM Bug #44302 (In Progress): cephadm: apply_mon: NotImplementedError
- From https://github.com/ceph/ceph/pull/33548#issuecomment-591443581
I removed apply_mon because simply reusing _ap...
- 07:25 PM Bug #44440: cephadm should be able to infer running container
- As described in https://github.com/ceph/ceph/pull/33781#issuecomment-595760420, if 'image' isn't specified then cepha...
- 07:13 PM Bug #44440 (New): cephadm should be able to infer running container
- 12:40 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- 33755 fixed this for c-v prepare on a single device, which is what teuthology does.
- 11:37 AM Bug #44491 (Resolved): mgr/cephadm: fail to load service specs after restarting
- h3. Steps to reproduce:
* Enable cephadm backend and add a host mgr0
* Create a mgr daemon*...
- 08:30 AM Feature #44461 (Pending Backport): cephadm: watch Grafana certificates
- Add a periodic check for the validity of the provided Grafana certificates and raise a health alert if they aren't hea...
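A rough sketch of what such a periodic check could classify (the health-check names and the 30-day warning window are assumptions; in practice the expiry timestamp would be parsed from the PEM, e.g. with the `cryptography` package):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical warning window; a real check would likely make this configurable.
WARN_WINDOW = timedelta(days=30)

def cert_health(not_valid_after, now=None):
    """Classify a certificate expiry timestamp into a health state.

    `not_valid_after` is the certificate's expiry as a tz-aware datetime
    (e.g. parsed from the PEM with the `cryptography` package).
    """
    now = now or datetime.now(timezone.utc)
    if now >= not_valid_after:
        return "CEPHADM_CERT_ERROR"    # expired: raise a health error
    if not_valid_after - now <= WARN_WINDOW:
        return "CEPHADM_CERT_WARNING"  # expiring soon: warn
    return "OK"
```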
03/05/2020
- 10:50 PM Feature #43937: cephadm: make default image configurable
- I've learned that `cephadm bootstrap` is already storing the pulled image path in a config-key:...
- 07:47 PM Feature #43937 (In Progress): cephadm: make default image configurable
- 06:18 PM Feature #43937: cephadm: make default image configurable
- I would prefer to be able to set a MON config-key with the default value.
If the env variable is defined, it should ov...
- 06:06 PM Feature #43937: cephadm: make default image configurable
- Sebastian Wagner wrote:
> prio low, as this can be done by setting the env variable system-wide
How does one set ...
- 06:57 PM Backport #43993 (New): mimic: ceph orchestrator rgw rm: no valid command found
- 06:29 PM Bug #44440 (Duplicate): cephadm should be able to infer running container
- 02:07 PM Bug #44440 (Resolved): cephadm should be able to infer running container
- If I use "cephadm" to deploy a Ceph cluster using "Container A" (not the default one) and I have that cluster running...
- 03:44 PM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- OK, some more information:...
- 11:18 AM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- OK, I will reproduce, obtain dmesg output, and post here.
One thing I did notice is that, with the upstream contai...
- 07:00 AM Bug #44392 (In Progress): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Se...
03/04/2020
- 11:25 PM Feature #44429 (Rejected): cephadm: make upgrade work with 'packaged' mode
- In order to upgrade when the cephadm binary is installed via a package, mgr/cephadm needs to update the cephadm packa...
- 07:34 PM Bug #44273 (Can't reproduce): Getting "stray daemon osd.3 on host admin not managed by cephadm" o...
- not getting this anymore
- 11:27 AM Feature #44414 (Resolved): bubble up errors during 'apply' phase to 'cluster warnings'
- Since we moved to a fully declarative approach which handles most of the deployment in the background (k8-like) it be...
- 01:54 AM Bug #44390 (Resolved): cephadm: fail to create daemons
03/03/2020
- 08:47 PM Bug #44401 (In Progress): cephadm: check host performed every time through serve loop
- 08:41 PM Bug #44401 (Resolved): cephadm: check host performed every time through serve loop
- This should only check every N seconds (say, 10 minutes)
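The throttling idea can be sketched as follows (class name and interval are illustrative, not the actual mgr/cephadm code):

```python
import time

CHECK_INTERVAL = 600.0  # seconds, i.e. at most one check per 10 minutes

class HostChecker:
    """Remember when each host was last checked and skip recent ones."""

    def __init__(self):
        self.last_check = {}  # hostname -> monotonic timestamp

    def maybe_check_host(self, host, now=None):
        """Return True if the (expensive) host check should run now."""
        now = time.monotonic() if now is None else now
        last = self.last_check.get(host)
        if last is not None and now - last < CHECK_INTERVAL:
            return False  # checked recently: skip this serve() iteration
        self.last_check[host] = now
        return True  # caller performs the real check
```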
- 08:42 PM Feature #44402 (Resolved): cephadm: more complete smoke test that can be run with vstart
- Frequently we are making (mgr/cephadm and cephadm) code changes and are developing against vstart. It would be nice ...
- 08:33 PM Feature #44205 (Resolved): cephadm: push/apply config.yml
- 06:20 PM Bug #44397 (Resolved): cephadm: make rgw daemons avoid the same host
- A test verifying this behavior was removed, check history for test_rgw_update_fail
- 11:10 AM Bug #44392 (Fix Under Review): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output ...
- 10:07 AM Bug #44392 (Resolved): mgr/orchestrator: missing SPEC and PLACEMENT field in JSON output of Servi...
- A new column `SPEC` was added in PR https://github.com/ceph/ceph/pull/33553.
And PLACEMENT field was added in PR htt...
- 03:44 AM Bug #44390 (Fix Under Review): cephadm: fail to create daemons
- 03:38 AM Bug #44390 (In Progress): cephadm: fail to create daemons
- Might be a regression from https://github.com/ceph/ceph/pull/33658/files#diff-8b586ec9c3ad3e8421a8858888f7ddf0R2067.
- 03:36 AM Bug #44390 (Resolved): cephadm: fail to create daemons
- I hit this error when creating OSDs:...
03/02/2020
- 05:35 PM Cleanup #44379 (Won't Fix): orchestrator: {to,from}_json inconsistent
- Sometimes to_json returns a dict (that can be fed to json.dumps) and sometimes it returns the JSON string. We should be...
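The inconsistency, and one way to normalize it, can be illustrated with hypothetical spec classes (not the actual orchestrator classes):

```python
import json

class DictSpec:
    """to_json() returns a dict, suitable for json.dumps()."""
    def to_json(self):
        return {"service_type": "mgr"}

class StrSpec:
    """to_json() returns an already-serialized JSON string; feeding
    this to json.dumps() again would double-encode it."""
    def to_json(self):
        return '{"service_type": "mon"}'

def to_json_dict(obj):
    """Normalize either convention to a plain dict."""
    data = obj.to_json()
    return json.loads(data) if isinstance(data, str) else data
```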
02/28/2020
- 05:20 PM Documentation #44354: cephadm: Log messages are missing
- https://github.com/ceph/ceph/pull/33627 is an attempt to clarify the cephadm log gathering docs, and it has some rela...
- 04:43 PM Documentation #44354 (Duplicate): cephadm: Log messages are missing
- There is a bug in master/octopus which causes OSDs to crash in msgr V2 without any load on the cluster. But that's no...
- 04:52 PM Bug #43713 (Fix Under Review): drive group filters: use `and` instead of `or`
- 02:33 PM Bug #44028 (Resolved): cephadm: usability: failing to add an osd, useless message
- I don't think we can make the error message any more useful. We IMO have to fix bugs in c-v and cephadm if they appear.
- 02:30 PM Bug #44272 (Triaged): on SUSE, crash daemon starts but then always stops a couple minutes later
- 02:30 PM Bug #44312 (Duplicate): ceph-volume prepare is not idempotent and may get called twice
- 02:28 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- Sage Weil wrote:
> An alternative workaround would be to make cephadm look for 'RuntimeError: skipping vg_nvme/lv_4,...
- 02:04 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- -Your QA run really had an excessive amount of duplicates-:
Ah, each mon logs the command.... - 02:27 PM Bug #44167 (In Progress): cephadm/ def _update_service: Remove should make use of spec.placement....
- 02:26 PM Bug #44167: cephadm/ def _update_service: Remove should make use of spec.placement.hosts
- should be fixed by https://github.com/ceph/ceph/pull/33523
- 02:24 PM Feature #43696: cephadm: check that units start
- I'm inclined to close this as won't fix. What shall we do?
h3. Wait until the process is responsive?
Like http...
- 01:49 AM Feature #44308 (Resolved): mgr/cephadm: Enable alertmanager configuration in mgr/cephadm (the orc...
02/27/2020
- 07:03 PM Bug #44121 (Resolved): calling cephadm shell again loses bash history
- 12:38 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- For ceph-salt, I have a workaround here:
https://github.com/ceph/ceph-salt/pull/99
02/26/2020
- 08:50 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- similar failure, this time a 'daemon mon add' dup:
/a/sage-2020-02-26_08:10:43-rados-wip-sage2-testing-2020-02-25-...
- 08:27 PM Bug #44313: ceph-volume prepare is not idempotent and may get called twice
- One possible fix would be to make ceph-volume itself idempotent, so that calling prepare on an already-prepared devic...
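Abstractly, the idempotency fix amounts to making a repeated prepare call a no-op instead of an error; a hypothetical sketch (not actual ceph-volume code, which would detect preparedness from on-disk LVM metadata):

```python
def prepare_device(device, prepared_devices, do_prepare):
    """Prepare `device` idempotently.

    `prepared_devices` stands in for whatever state lets us detect an
    already-prepared device; `do_prepare` is the real, non-idempotent action.
    """
    if device in prepared_devices:
        # Already prepared: succeed without touching the device, so a
        # duplicated 'prepare' call (e.g. a retried command) is harmless.
        return "already-prepared"
    do_prepare(device)
    prepared_devices.add(device)
    return "prepared"
```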
- 08:26 PM Bug #44313 (Resolved): ceph-volume prepare is not idempotent and may get called twice
- symptom is a failure like so:...
- 08:24 PM Bug #44312 (Duplicate): ceph-volume prepare is not idempotent and may get called twice
- 04:59 PM Bug #43680 (Resolved): parallelize osd provisioning
- 01:46 PM Feature #44205 (Fix Under Review): cephadm: push/apply config.yml
- 10:41 AM Feature #44308 (Resolved): mgr/cephadm: Enable alertmanager configuration in mgr/cephadm (the orc...
- 09:44 AM Bug #44302 (Fix Under Review): cehpadm: apply_mon: NotImplementedError
- 09:08 AM Bug #44302 (Resolved): cehpadm: apply_mon: NotImplementedError
- ...
- 09:32 AM Feature #44305 (Resolved): mgr/cephadm: Add support for removing MONs
- As a future follow up of https://tracker.ceph.com/issues/44302
02/25/2020
- 03:07 PM Feature #44287 (Rejected): cephadm: Graceful Shutdown of the Whole Ceph Cluster
- even though the use case is limited, it would make it possible for users to stop their production clusters for schedu...
- 02:59 PM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- Rethinking. I think this is an apparmor problem. Adding the output of dmesg would be helpful.
- 02:41 PM Bug #44272: on SUSE, crash daemon starts but then always stops a couple minutes later
- related: https://github.com/opencontainers/runc/issues/2236
After reading the code at container_linux.go:389, podm...
- 11:57 AM Documentation #44284: cephadm: provide a way to modify the initial crushmap
- Actually, when this workaround is used, the option ends up being set in the MON store, and is reflected in the initia...
- 11:51 AM Documentation #44284 (Resolved): cephadm: provide a way to modify the initial crushmap
- If we don't edit the map when bootstrapping, the CRUSH map has to be edited at runtime, which is non-trivial.
Howe...
02/24/2020
- 07:29 PM Bug #41746 (Resolved): mgr/rook: `ceph orchestrator device ls` doesn't set `available`
- this is working now AFAICS
- 07:29 PM Bug #43838: cephadm: Forcefully Remove Services (unresponsive hosts)
- One option is to have them 'ceph orch host rm $hostname'...
- 07:27 PM Bug #44121 (Fix Under Review): calling cephadm shell again loses bash history
- 07:19 PM Bug #44270 (Triaged): Under certain circumstances, "ceph orch apply" returns success even when no...
- I bet the problem is that the drive inventory isn't populated yet immediately after bootstrap.
- 03:12 PM Bug #44270: Under certain circumstances, "ceph orch apply" returns success even when no OSDs are ...
- we might need some more in-depth validation of drive groups here.
- 03:10 PM Bug #44270 (Can't reproduce): Under certain circumstances, "ceph orch apply" returns success even...
- On a single-node cluster, the "cephadm bootstrap" command deploys 1 MGR and 1 MON.
On very recent versions of mast...
- 07:17 PM Bug #44273 (Need More Info): Getting "stray daemon osd.3 on host admin not managed by cephadm" on...
- This should have been fixed by 607263224c26... can you reproduce this with debug_mgr = 20 and attach a log?
- 03:46 PM Bug #44273 (Can't reproduce): Getting "stray daemon osd.3 on host admin not managed by cephadm" o...
- I typically test the simplest deployment imaginable: single node, 1 MGR, 1 MON, and 4 OSDs. The deployment is done...
- 07:13 PM Feature #43867 (Resolved): cephadm: progress item for upgrade
- 04:16 PM Feature #43673 (Need More Info): ceph-ansible playbook: pivot to cephadm
- 03:38 PM Bug #44272 (Resolved): on SUSE, crash daemon starts but then always stops a couple minutes later
- Recently cephadm/orchestrator started deploying crash daemon on all cluster nodes.
On SUSE (at least), the crash d...
- 01:33 PM Feature #44205: cephadm: push/apply config.yml
- to sum up our discussion from Friday:
* What about doing all calls synchronously and only return async completions...
- 01:30 PM Feature #44205 (In Progress): cephadm: push/apply config.yml
02/23/2020
- 08:12 PM Feature #44255 (New): cephadm: scheduler should consider other daemons on each node
- When choosing a home for a daemon, we should prefer nodes that have fewer daemons, and/or fewer daemons of the same t...
- 08:11 PM Bug #44254 (Resolved): scheduler should prefer existing daemon locations
- If we are placing N daemons, then we should select nodes that already have daemons for the service. (Otherwise, an a...
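The preference can be sketched as a small placement function (hypothetical, not the actual cephadm scheduler):

```python
def place(candidates, existing, count):
    """Pick `count` hosts from `candidates`, preferring hosts that
    already run a daemon of this service, so re-applying an unchanged
    spec does not shuffle daemons between hosts."""
    keep = [h for h in candidates if h in existing]
    fresh = [h for h in candidates if h not in existing]
    return (keep + fresh)[:count]
```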
- 08:10 PM Bug #44253 (Resolved): _apply_service should move services, not just expand/contract
- if placement is based on, e.g., labels, then moving a label should cause us to move services too (add first, then rem...
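The move semantics (add first, then remove) can be sketched as a hypothetical reconciliation helper:

```python
def reconcile(current, target):
    """Yield actions that converge the `current` host set to `target`,
    creating new daemons before removing old ones so the service stays
    available throughout the move. Sorted for deterministic ordering."""
    for host in sorted(target - current):
        yield ("add", host)
    for host in sorted(current - target):
        yield ("remove", host)
```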
- 07:17 PM Bug #44252 (Resolved): cephadm: mgr,mds scale-down should prefer standby daemons
- There are three types of daemons:
1. active daemons
2. standby daemons
3. unknown daemons that are (not yet) par...
- 02:36 PM Bug #44170 (Duplicate): Teuthology is testing unrelated container images
- this was because the wip-swagner-testing branch was reused and because #44242 had not been fixed
02/22/2020
- 02:16 PM Feature #43675 (Resolved): workflow for using a signed dashboard cert
- 12:02 PM Bug #44170: Teuthology is testing unrelated container images
- Saw in recent run: http://pulpito.ceph.com/ideepika-2020-02-21_16:12:00-rados-wip-deepika-testing-21-02-2020-distro-b...
- 03:03 AM Bug #43949 (Resolved): mgr/cephadm: ceph fs volume create: TypeError: %d format: a number is requ...
- 03:02 AM Bug #44119 (Resolved): installing cephadm on bionic is painful
- 03:01 AM Bug #44121: calling cephadm shell again loses bash history
- We could bind-mount the root .bash_history file to something like /var/lib/ceph/$fsid/.bash_history?
- 03:01 AM Bug #44209 (Resolved): qa/workunits/cephadm/test_cephadm.sh: prometheus:latest: Invalid JSON in -
- 03:01 AM Bug #44003 (Resolved): cephadm: multiple mgrs scheduled on same host
02/21/2020
- 08:28 PM Feature #43675 (Fix Under Review): workflow for using a signed dashboard cert
- 08:21 PM Feature #43675: workflow for using a signed dashboard cert
- https://github.com/ceph/ceph/pull/33472
- 03:30 PM Feature #43675 (In Progress): workflow for using a signed dashboard cert
- 08:25 PM Feature #43694 (Fix Under Review): cephadm: flag dashboard user to change password
- https://github.com/ceph/ceph/pull/32990
- 02:41 PM Bug #43835 (Resolved): cephadm: `ceph fs volume create` can't create MDS daemons
- https://github.com/ceph/ceph/pull/33441
- 09:20 AM Bug #44235 (Can't reproduce): qa/tasks/cephadm.py: sudo systemctl stop ceph-fsid@mgr.y raises Key...
- that was a test run. never mind.
- 09:16 AM Bug #44235 (Can't reproduce): qa/tasks/cephadm.py: sudo systemctl stop ceph-fsid@mgr.y raises Key...
- ...
02/20/2020
- 07:32 PM Bug #44231 (Resolved): cephadm: cannot capture core files
- At least, I can't figure it out.
On my test box, I set kernel.core_pattern to both a valid host and container path...
- 02:08 PM Feature #43695 (Resolved): cephadm: alertmanager
- 02:05 PM Feature #43860 (Resolved): mgr/cephadm: Implement "ceph orchestrator node_exporter add"
- 02:04 PM Feature #43836: cephadm adopt: also adopt Prometheus and Grafana daemons from DeepSea
- prometheus: https://github.com/ceph/ceph/pull/33417
- 02:02 PM Bug #44027 (Resolved): cephadm: usability: python backtrace on usage error
- 02:00 PM Bug #44026 (Resolved): cephadm: usability: confusing error message when trying to add a host with...
- 01:59 PM Feature #44031 (Resolved): cephadm: Also cache `device ls`.
- 01:57 PM Bug #44079 (Resolved): cephadm: ModuleNotFoundError: No module named 'distutils.spawn'
- 01:50 PM Bug #44209: qa/workunits/cephadm/test_cephadm.sh: prometheus:latest: Invalid JSON in -
- let's see if https://github.com/ceph/ceph/pull/33433 helps.
- 01:44 AM Bug #44180 (Resolved): cephadm: missing describe_service call crashes the MGR when accessing Dash...
- Fixed in https://github.com/ceph/ceph/pull/33359
02/19/2020
- 10:47 PM Bug #44169 (Resolved): informative exception eaten
- 10:38 PM Feature #43670 (Resolved): teuthology: Add new upgrade/downgrade process
- 10:37 PM Feature #43867 (Fix Under Review): cephadm: progress item for upgrade
- 10:36 PM Feature #44031 (Fix Under Review): cephadm: Also cache `device ls`.
- 10:35 PM Feature #43836 (Fix Under Review): cephadm adopt: also adopt Prometheus and Grafana daemons from ...
- 10:35 PM Feature #43695 (Fix Under Review): cephadm: alertmanager
- 10:35 PM Feature #43940 (Resolved): orchestrator mgr add and rm
- 10:35 PM Feature #43685 (Resolved): host prepare
- 07:00 PM Bug #44165 (Resolved): test_load_data fails
- 06:59 PM Bug #44188 (Resolved): Module 'cephadm' has failed: dictionary changed size during iteration
- 01:37 PM Bug #44188: Module 'cephadm' has failed: dictionary changed size during iteration
- > yes, deployed without issue, but again only one monitor was deployed. And when I tried to add other monitors this e...
- 03:08 PM Cleanup #43674 (Resolved): rename/merge orchestrator_cli -> orchestrator
- 02:31 PM Bug #44209 (Resolved): qa/workunits/cephadm/test_cephadm.sh: prometheus:latest: Invalid JSON in -
- http://qa-proxy.ceph.com/teuthology/swagner-2020-02-19_13:52:54-rados-wip-swagner-testing-2020-02-19-1014-distro-basi...
- 12:33 PM Bug #44175 (Resolved): cephadm: adopt does not work with filestore OSDs
- 11:22 AM Feature #44205: cephadm: push/apply config.yml
- Sebastian Wagner wrote:
> hm, what about not inventing a new schema here? and instead simply concatenate the service...
- 11:09 AM Feature #44205: cephadm: push/apply config.yml
- hm, what about not inventing a new schema here? and instead simply concatenate the service specs for all types?
Li...
- 10:51 AM Feature #44205 (Resolved): cephadm: push/apply config.yml
- Having a push/apply-config option would enable us to define multiple services/daemons before the actual deployment.
...
02/18/2020
- 10:25 PM Bug #44188 (Resolved): Module 'cephadm' has failed: dictionary changed size during iteration
- ...
- 08:32 PM Bug #44175 (Fix Under Review): cephadm: adopt does not work with filestore OSDs
- 02:30 PM Bug #44168 (Resolved): qa/tasks/cephadm in ceph_bootstrap: AttributeError: 'NoneType' object has ...
- 09:16 AM Bug #44180: cephadm: missing describe_service call crashes the MGR when accessing Dashboard
- +1 for adding mypy to mgr/dashboard.
- 09:12 AM Bug #44180 (Resolved): cephadm: missing describe_service call crashes the MGR when accessing Dash...
- How to reproduce:
1. Enable cephadm
bin/ceph mgr module enable cephadm
bin/ceph orch set backend cephadm
2. A...
02/17/2020
- 11:32 PM Bug #44175 (Resolved): cephadm: adopt does not work with filestore OSDs
- it tries to *copy* the data directory, bad bad bad
- 05:40 PM Feature #43685 (Fix Under Review): host prepare
- 04:18 PM Bug #44169 (Fix Under Review): informative exception eaten
- 02:13 PM Bug #44169 (Resolved): informative exception eaten
- ...
- 02:46 PM Bug #44170 (Duplicate): Teuthology is testing unrelated container images
- shaman build: https://shaman.ceph.com/builds/ceph/wip-swagner-testing/290ad805b6b133320a894170a2157f1ffb45ed45/defaul...
- 01:06 PM Bug #44168 (Resolved): qa/tasks/cephadm in ceph_bootstrap: AttributeError: 'NoneType' object has ...
- ...
- 11:34 AM Feature #43708 (Fix Under Review): mgr/rook: Blink enclosure LED
- 11:33 AM Feature #43708: mgr/rook: Blink enclosure LED
- https://github.com/ceph/ceph/pull/33366
- 10:52 AM Bug #44167 (Resolved): cephadm/ def _update_service: Remove should make use of spec.placement.hosts
- Right now, we're randomly removing services, like...
- 05:37 AM Bug #44165 (Fix Under Review): test_load_data fails
- 04:45 AM Bug #44165 (Resolved): test_load_data fails
- tasks.mgr.test_orchestrator_cli.TestOrchestratorCli fails with...
02/14/2020
- 06:10 PM Bug #44138 (Resolved): ModuleNotFoundError: No module named 'jsonpatch'
- This should also address upgrade failures like http://pulpito.ceph.com/teuthology-2020-02-13_20:11:56-upgrade:mimic-x...
- 08:46 AM Bug #44138 (Fix Under Review): ModuleNotFoundError: No module named 'jsonpatch'
- 08:39 AM Bug #44138: ModuleNotFoundError: No module named 'jsonpatch'
- http://qa-proxy.ceph.com/teuthology/teuthology-2020-02-14_05:00:03-smoke-master-testing-basic-smithi/4761991/teutholo...
- 08:36 AM Bug #44138: ModuleNotFoundError: No module named 'jsonpatch'
- http://qa-proxy.ceph.com/teuthology/pdonnell-2020-02-14_04:02:50-fs-wip-pdonnell-testing-20200214.001201-distro-basic...
- 08:35 AM Bug #44138 (Resolved): ModuleNotFoundError: No module named 'jsonpatch'
- https://github.com/ceph/ceph/commit/846761ef7afab43144f38bf5631fd859d6964820
- 02:54 PM Bug #43415: python3-remoto not available in ubuntu
- prio low, as we only need remoto within the mgr node, which can trivially be centos or suse.
- 02:51 PM Documentation #43683 (Resolved): Missing docs for HostSpec
- afaik this is resolved
- 02:50 PM Feature #43696: cephadm: check that units start
- e.g. by scheduling a `daemon ls` run on that host?
- 02:48 PM Bug #43153 (Can't reproduce): mgr orchestrator hangs
- please reopen, if this happens again.
- 02:47 PM Documentation #43834: cephadm: some command only support `hosts`. make sure users know this limit...
- prio low, as users should get a proper error message when making mistakes
- 02:46 PM Feature #43707: mgr/rook: OSD create for non-trivial drive groups
- Kind of obsoleted by making Rook drive-group aware.
- 02:44 PM Feature #43937: cephadm: make default image configurable
- prio low, as this can be done by setting the env variable system-wide
- 02:44 PM Bug #43898 (Resolved): cephadm CEPHADM_STRAY_HOST: FQDN vs short host names
- 02:43 PM Bug #43949 (Fix Under Review): mgr/cephadm: ceph fs volume create: TypeError: %d format: a number...
- 02:40 PM Bug #44027 (Fix Under Review): cephadm: usability: python backtrace on usage error
- 10:54 AM Feature #43911: test cephadm rgw deployment
- Right now, this is blocked by https://github.com/ceph/ceph/pull/33310
- 10:44 AM Bug #44026 (Fix Under Review): cephadm: usability: confusing error message when trying to add a h...
02/13/2020
- 03:29 PM Bug #44026: cephadm: usability: confusing error message when trying to add a host without adding ...
- The message is now a tiny bit better:...
- 03:25 PM Feature #43911: test cephadm rgw deployment
- looks like I need to wait for https://github.com/ceph/ceph/pull/33041
- 01:49 PM Feature #43911: test cephadm rgw deployment
- while trying to implement this, I got:...
- 02:35 PM Bug #44122 (Won't Fix): bin/cephadm cannot read vstart's ceph.conf
- ...
- 02:23 PM Bug #44121 (Resolved): calling cephadm shell again loses bash history
- As we're always creating a new container, we're obviously also losing the bash history, which is unfortunate.
espec...
- 01:36 PM Bug #44119 (Resolved): installing cephadm on bionic is painful
- h3. After downloading cephadm:...
- 11:50 AM Bug #43932 (Resolved): bin/cephadm: All daemons should call port_in_use
- 11:50 AM Bug #43932: bin/cephadm: All daemons should call port_in_use
- Has been fixed by https://github.com/ceph/ceph/pull/33205
- 10:14 AM Feature #43839: enhance `host ls`
- Some groundwork for this: https://github.com/ceph/ceph/pull/33258
- 01:09 AM Bug #44077 (Resolved): grafana container doesn't start on 18.04
02/12/2020
- 05:26 PM Feature #43678 (Resolved): CEPH_VERSION label
- cephadm part is done. rest is https://github.com/ceph/ceph-container/issues/1508
- 05:24 PM Bug #44026: cephadm: usability: confusing error message when trying to add a host without adding ...
- I hope this is now resolved with https://github.com/ceph/ceph/pull/33179 !
- 05:22 PM Bug #43972 (Resolved): rook: 'Device' object has no attribute 'swagger_types'
- 05:22 PM Bug #44019 (Resolved): cephadm: rgw update doesn't work
- 05:21 PM Bug #44054 (Resolved): test_host_list fails
- 05:20 PM Cleanup #43700 (New): cephadm: make it a proper python package
- 05:20 PM Bug #42769 (Resolved): mgr/ssh: Irritating success message
- 05:19 PM Feature #43684 (New): Make use of progress items for OSD deployment
- 04:23 PM Feature #43839 (In Progress): enhance `host ls`
- 04:22 PM Cleanup #43674 (Fix Under Review): rename/merge orchestrator_cli -> orchestrator
- 11:55 AM Bug #44028: cephadm: usability: failing to add an osd, useless message
- ceph-ansible! See https://github.com/search?q=ceph+100%25FREE&type=Code
Looks like a regression in ceph-volume t...
- 11:52 AM Bug #44028: cephadm: usability: failing to add an osd, useless message
- The error originated from ceph-volume:...
02/11/2020
- 08:09 PM Bug #43835 (In Progress): cephadm: `ceph fs volume create` can't create MDS daemons
- 04:49 PM Bug #44077: grafana container doesn't start on 18.04
- Do you think we can add a `$CEPHADM logs --name grafana.a | cat` to the script?
- 02:33 PM Bug #44077 (Resolved): grafana container doesn't start on 18.04
- ...
- 04:43 PM Bug #44018 (Resolved): cephadm: down host kills serve() thread
- 04:24 PM Bug #44079 (Resolved): cephadm: ModuleNotFoundError: No module named 'distutils.spawn'
- As cephadm is supposed to not require any Python dependencies, we might need to document this:...
- 03:11 PM Bug #44054 (Fix Under Review): test_host_list fails
- 02:38 PM Bug #44054: test_host_list fails
- http://pulpito.ceph.com/sage-2020-02-11_06:58:37-rados-wip-sage2-testing-2020-02-10-2058-distro-basic-smithi/4754155/
- 12:29 PM Bug #44003: cephadm: multiple mgrs scheduled on same host
- A fix for this should make use of https://github.com/ceph/ceph/pull/33205
- 12:26 PM Bug #44019 (Fix Under Review): cephadm: rgw update doesn't work
02/10/2020
- 11:14 PM Bug #43913 (Resolved): test_error (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli): Assertio...
- 01:34 PM Bug #43913 (Fix Under Review): test_error (tasks.mgr.test_orchestrator_cli.TestOrchestratorCli): ...
- 06:17 PM Bug #43883 (Resolved): cephadm: Found left-over process 15516 (podman) in control group while sta...
- 03:08 PM Feature #44005 (Resolved): cephadm: associate addrs to hosts
- 12:29 PM Bug #43802 (Resolved): cephadm: error creating container: "Your kernel does not support swap limi...
- 11:24 AM Bug #43972 (Fix Under Review): rook: 'Device' object has no attribute 'swagger_types'
- 11:22 AM Bug #43972: rook: 'Device' object has no attribute 'swagger_types'
- Fix available in:
https://github.com/ceph/ceph/pull/33176
Note:
The right command to create the OSD in the examp...
02/09/2020
- 09:34 PM Feature #44055 (Closed): cephadm: make 'ls' faster
- For both podman and docker, 'ps' tells you the image name but not its hash.
With podman, you can do:...
- 03:48 PM Bug #44054 (Resolved): test_host_list fails
- ...
- 12:24 AM Bug #43703 (Resolved): selinux vs logrotate
- I'm calling this one "fixed", even though for el 8.0 and 8.1 (pre-z-stream) the error is still there.
https://gith...
- 12:22 AM Bug #43883 (Fix Under Review): cephadm: Found left-over process 15516 (podman) in control group w...