From 01/03/2018 to 02/01/2018
- 10:43 PM Bug #22463: "zap command needs both HOSTNAME and DISK but got "None vpm099:vdb"" in ceph-deploy-l...
- Also needs openstack entry:
- 01:47 PM Bug #22871 (Resolved): ceph-deploy disk zap subcommand needs debug argument
- It looks like ceph-deploy disk zap is expecting a debug argument in args but none seems to have been supplied in the ...
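The pattern behind bug #22871 can be sketched as follows. This is a hypothetical, minimal illustration (the parser names and flags are assumptions, not ceph-deploy's actual code): reading an attribute such as `args.debug` that the active subcommand's parser never registered raises `AttributeError`, and `getattr` with a default is one defensive way to read optional flags.

```python
import argparse

# Hypothetical sketch of the args-attribute pitfall: the "zap" subparser
# below never registers a --debug flag, so args has no "debug" attribute.
parser = argparse.ArgumentParser()
sub = parser.add_subparsers(dest="command")
zap = sub.add_parser("zap")  # note: no --debug defined here

args = parser.parse_args(["zap"])

# args.debug would raise AttributeError; getattr with a default does not.
debug = getattr(args, "debug", False)
assert debug is False
```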
- 02:52 PM Feature #7505: Support Archlinux
- PR: https://github.com/ceph/ceph-deploy/pull/461
- 02:51 PM Bug #22816: TypeError: startswith first arg must be bytes or a tuple of bytes, not str in osd.py:375
- PR: https://github.com/ceph/ceph-deploy/pull/462
- 02:46 PM Bug #22816 (Resolved): TypeError: startswith first arg must be bytes or a tuple of bytes, not str...
- Target Version: 2.0.0
Python Version: 3.6.4
remoto lib version: 0.0.29
When running @ceph-deploy disk list <host...
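The error in bug #22816 is the classic Python 3 bytes/str mismatch: remote command output arrives as `bytes`, and `bytes.startswith()` rejects a `str` prefix. A minimal sketch of the failure and the two common fixes (the sample output line is made up for illustration):

```python
# Stand-in for a line of subprocess output, which is bytes under Python 3.
output = b"/dev/sda1 part"

# Passing a str prefix to bytes.startswith() raises:
# TypeError: startswith first arg must be bytes or a tuple of bytes, not str
try:
    output.startswith("/dev/")
    raised = False
except TypeError:
    raised = True
assert raised

# Fix 1: decode the bytes before comparing.
line = output.decode("utf-8")
assert line.startswith("/dev/")

# Fix 2: compare against a bytes prefix instead.
assert output.startswith(b"/dev/")
```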
- 01:25 AM Feature #7505: Support Archlinux
- Hello! With a friend, we wrote support for Archlinux in ceph-deploy release 2.0.0
I'm going to use this ticket for r...
- 05:46 PM Bug #21161 (Won't Fix): ceph-deploy randomly times out
- We've seen this with a combination of:
* Older python versions (2.6, and even some in 2.7)
* Older versions of ce...
- 04:55 PM Feature #19301 (Resolved): add Virtuozzo Linux support to the ceph-deploy
- Released as part of 1.5.38
- 04:54 PM Bug #19992 (Resolved): [mita] add python3 label to nodes
- This has been fixed for a while now
- 03:19 PM Bug #9122 (Closed): ceph-deploy: disk list throws an exception
- It's been a while since we last saw these. We also added a newer execnet version (the backend library to connect ...
- 03:02 PM Feature #22118 (Resolved): ceph-volume support for bluestore non-encrypted OSDs
- Merged and released as part of ceph-deploy 2.0.0
- 02:45 PM Bug #19850 (Resolved): new 1.5.38 release
- Released on 2017-05-19
- 01:41 PM Bug #22315 (Resolved): release ceph-deploy 2.0.0
- 2.0.0 was released 2018-01-16
- 05:34 PM Bug #22725 (Rejected): osd create doesn't work with Ceph jewel
- When trying to create an OSD on Ceph Jewel based machine, it fails:...
- 05:31 PM Bug #22712: mgr fails on missing directory
- It seems that this is resolved, as PR was accepted: https://github.com/ceph/ceph-deploy/pull/459
- 07:58 PM Bug #22712 (Resolved): mgr fails on missing directory
- If `/var/lib/ceph/mgr` directory is missing, `safe_mkdir` still does not fail, but subsequent code is unable to reach...
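The failure mode described above can be sketched as follows. Both helper names here are hypothetical illustrations, not ceph-deploy's actual implementation: a "safe" mkdir that swallows `OSError` returns successfully even when the parent directory is missing, leaving later code to fail far from the real cause, whereas `os.makedirs(..., exist_ok=True)` creates missing parents and only tolerates an existing leaf.

```python
import os
import tempfile

def safe_mkdir_swallowing(path):
    # Hypothetical: silently ignores any OSError, including a missing parent.
    try:
        os.mkdir(path)
    except OSError:
        pass

def safe_mkdir_strict(path):
    # Creates missing parent directories; only an existing leaf is tolerated.
    os.makedirs(path, exist_ok=True)

base = tempfile.mkdtemp()
missing_parent = os.path.join(base, "no-such-parent", "mgr")

safe_mkdir_swallowing(missing_parent)      # returns without complaint...
assert not os.path.isdir(missing_parent)   # ...but nothing was created

safe_mkdir_strict(missing_parent)
assert os.path.isdir(missing_parent)
```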
- 09:02 PM Bug #22675 (Need More Info): cleanup ceph-deploy docs preflight documentation
- 1) Cleanup all the unnecessary manual edits of repo mentioned in pre-flight docs
- 02:12 AM Bug #22348: mon rebuild failed on 12.2.2
- Alfredo Deza wrote:
> This will need further debugging on your side. Seems like these monitors can't reach each s3-1...
- 05:30 PM Bug #22463: "zap command needs both HOSTNAME and DISK but got "None vpm099:vdb"" in ceph-deploy-l...

- yuriw - if you can test and merge this one here; it needs the backports of changes that went into master
- 04:22 PM Bug #22463: "zap command needs both HOSTNAME and DISK but got "None vpm099:vdb"" in ceph-deploy-l...
- Vasu, can you take a look pls? Those tests fail some upgrades as well
- 01:49 PM Bug #22463: "zap command needs both HOSTNAME and DISK but got "None vpm099:vdb"" in ceph-deploy-l...
- I think Vasu was updating these tests. This is an easy one to get fixed, but I am not sure where that would go.
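The error quoted in the #22463 title ("got None vpm099:vdb") suggests a combined `host:disk` token was passed where the newer CLI expects HOSTNAME and DISK as separate arguments. A hypothetical compatibility shim (names and behavior are assumptions for illustration, not ceph-deploy's actual parser) could accept both forms:

```python
def parse_target(tokens):
    # Hypothetical shim: accept either the old "host:disk" token or the
    # newer separate HOSTNAME DISK arguments.
    if len(tokens) == 1 and ":" in tokens[0]:
        host, disk = tokens[0].split(":", 1)
    else:
        host, disk = tokens[0], tokens[1]
    return host, disk

# Old combined syntax and new separate syntax both resolve cleanly.
assert parse_target(["vpm099:vdb"]) == ("vpm099", "vdb")
assert parse_target(["vpm099", "/dev/vdb"]) == ("vpm099", "/dev/vdb")
```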
- 04:11 PM Bug #22348 (Need More Info): mon rebuild failed on 12.2.2
- This will need further debugging on your side. Seems like these monitors can't reach each s3-1-4 while the others one...
- 04:58 PM Bug #22598 (Duplicate): "[ FAILED ] cls_rgw.gc_defer" in upgrade:jewel-x-luminous-distro-basic-...
- 04:51 PM Bug #22598 (Duplicate): "[ FAILED ] cls_rgw.gc_defer" in upgrade:jewel-x-luminous-distro-basic-...
- Run: http://pulpito.ceph.com/teuthology-2018-01-04_22:29:17-upgrade:jewel-x-luminous-distro-basic-smithi/#