Activity
From 10/24/2018 to 11/22/2018
11/22/2018
- 04:57 PM Feature #37086 (In Progress): Add several flags to ceph-volume lvm batch
- I had a look into the batch subcommand. The current argument structure (with data devices as positional arguments and ...
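The argument structure mentioned above (data devices as positional arguments, tuning options as flags) can be modeled with a small argparse sketch. This is a simplified illustration using the flags named in this ticket, not ceph-volume's actual parser:

```python
import argparse

def build_batch_parser():
    # Simplified model of an "lvm batch"-style CLI: data devices are
    # positional, tuning options are flags. Not ceph-volume's real parser.
    parser = argparse.ArgumentParser(prog="lvm-batch-sketch")
    parser.add_argument("devices", nargs="+", help="data devices (positional)")
    parser.add_argument("--osds-per-device", type=int, default=1)
    parser.add_argument("--dmcrypt", action="store_true")
    parser.add_argument("--objectstore", choices=["bluestore", "filestore"],
                        default="bluestore")
    return parser

args = build_batch_parser().parse_args(
    ["--osds-per-device", "2", "--dmcrypt", "/dev/sdb", "/dev/sdc"]
)
print(args.devices)          # ['/dev/sdb', '/dev/sdc']
print(args.osds_per_device)  # 2
```

With this shape, adding a new flag does not disturb the positional device list, which is the friction point the ticket discusses.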
11/21/2018
- 02:18 PM Bug #37356: ceph-volume lvm batch broken on py3 environments
- https://github.com/ceph/ceph/pull/25203
- 02:14 PM Bug #37356 (Resolved): ceph-volume lvm batch broken on py3 environments
- ...
- 09:21 AM Feature #37083: ceph-volume should report device_id as part of the device list
- Alfredo Deza wrote:
> The choice is to have these configuration-management tasks be done with a configuration manage...
11/20/2018
- 08:29 PM Feature #37087: add ceph-volume zap <osd-id> <osd-uuid> subcommand
> Does it make sense to add a general ceph-volume zap command? This could re-use the lvm zap command in case it's an...
- 03:32 PM Feature #37087: add ceph-volume zap <osd-id> <osd-uuid> subcommand
- zap looks good for managing lvm-based osds. I think we'll need a subcommand to zap ceph-disk deployed OSDs as well th...
11/19/2018
- 05:58 PM Feature #37083: ceph-volume should report device_id as part of the device list
- The choice is to have these configuration-management tasks be done with a configuration management tool, not in ceph-...
- 05:08 PM Feature #37083: ceph-volume should report device_id as part of the device list
- @alfredo : We have no choice here. We need to list the devices _before_ Ceph runs. So c-v has to be resilient when c...
- 02:57 PM Feature #37083: ceph-volume should report device_id as part of the device list
- There is an assumption here that ceph-volume can be installed without any other ceph component. A lot of the internal...
- 01:35 PM Feature #37087: add ceph-volume zap <osd-id> <osd-uuid> subcommand
- Alfredo Deza wrote:
> I understand that removing an OSD is something that was discussed, I am not clear why the orch...
11/16/2018
- 02:05 PM Feature #37087: add ceph-volume zap <osd-id> <osd-uuid> subcommand
- I understand that removing an OSD is something that was discussed, I am not clear why the orchestrator can't issue th...
- 01:30 PM Feature #37087: add ceph-volume zap <osd-id> <osd-uuid> subcommand
- Alfredo Deza wrote:
> Why would removing an OSD be something that ceph-volume needs to do?
This was identified in...
- 09:52 AM Feature #37083: ceph-volume should report device_id as part of the device list
- Yes, it would be better to have a single implementation to avoid mismatches.
I see two sets of calls of get_device_...
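The "single implementation" discussed here would compose a device_id from whatever metadata is available when ID_SERIAL is missing. A minimal sketch of that idea follows; the function name, field names, and underscore format are assumptions for illustration, not the actual ceph-volume code:

```python
def make_device_id(vendor, model, serial):
    # Hypothetical helper: compose a device_id from the metadata that is
    # available, mirroring the "compose the ID if ID_SERIAL is missing"
    # idea discussed in this ticket. Empty/None fields are skipped.
    parts = [p.strip().replace(" ", "_") for p in (vendor, model, serial) if p]
    return "_".join(parts)

print(make_device_id("ATA", "Samsung SSD 850", "S21NNXAG"))
# ATA_Samsung_SSD_850_S21NNXAG
```

Keeping one such helper shared between the C++ side and ceph-volume is what avoids the mismatches mentioned above.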
11/15/2018
- 12:24 PM Bug #27062 (Resolved): GPT devices are excluded
- mimic PR: https://github.com/ceph/ceph/pull/25103
luminous PR: https://github.com/ceph/ceph/pull/25104
11/14/2018
- 08:02 PM Feature #36603: Allow loop device as a LVM backend
- :-(
But the second method of checking whether a device is valid (using Python's `stat` package) recognizes loop devices...
- 07:59 PM Feature #36603 (Closed): Allow loop device as a LVM backend
- Closing this as it is possible to use loop devices (if the LV is created beforehand). We are not going to add functi...
- 06:05 PM Bug #27062 (In Progress): GPT devices are excluded
- https://github.com/ceph/ceph/pull/25098
- 04:42 PM Feature #37083: ceph-volume should report device_id as part of the device list
- I had a look into this and I have two discussion points:
The current C++ implementation uses the ID_SERIAL udev pr...
- 03:48 PM Feature #37083: ceph-volume should report device_id as part of the device list
- It is not clear to me what spec I should adhere to. It seems like it isn't 1 piece of information added, but to compo...
- 12:21 PM Feature #37083: ceph-volume should report device_id as part of the device list
- We need to share the same format so the manager can 'recognize' a disk c-v exposes as the same disk as the one it ha...
11/13/2018
- 07:00 PM Bug #36768 (Resolved): tests failures when /dev/sda doesn't exist
- mimic PR: https://github.com/ceph/ceph/pull/25066
luminous PR: https://github.com/ceph/ceph/pull/25067
- 04:03 PM Feature #37087: add ceph-volume zap <osd-id> <osd-uuid> subcommand
- Why would removing an OSD be something that ceph-volume needs to do?
There is already a ticket for destroying/zapp...
- 02:37 PM Feature #37087 (New): add ceph-volume zap <osd-id> <osd-uuid> subcommand
- Add subcommand to remove an OSD and its underlying storage. This should work for ceph-disk osds as well as lvm osds.
...
- 04:01 PM Feature #37086: Add several flags to ceph-volume lvm batch
- --dmcrypt, --objectstore, and --osds-per-device are already supported
I don't think that --num-slots or --shared-u...
- 02:33 PM Feature #37086 (Resolved): Add several flags to ceph-volume lvm batch
- as discussed in several orchestrator meetings, the batch subcommand needs several arg additions.
-replace-osd-ids=...
- 03:56 PM Feature #37083: ceph-volume should report device_id as part of the device list
- Seems like by 'ID' you mean something that gets composed somehow if certain information is not available. Do you need...
- 03:32 PM Feature #37083: ceph-volume should report device_id as part of the device list
- As we can't rely on a running ceph cluster we'll sadly have to duplicate the code for this feature.
Code is here:
...
- 02:47 PM Feature #37083: ceph-volume should report device_id as part of the device list
- Yeah, that's starting with Nautilus. A typical output looks like:
$ ceph device ls
DEVICE ...
- 12:03 PM Feature #37083: ceph-volume should report device_id as part of the device list
- Could you expand a bit on this ticket? I've never seen `ceph device` before. Is that Nautilus only? When you say "lis...
- 11:08 AM Feature #37083 (Resolved): ceph-volume should report device_id as part of the device list
- When listing the device list, the orchestrator needs to get the device_id as defined in "ceph device" command.
- 10:54 AM Feature #37082 (Rejected): Split ceph-volume into a separate package
- As per the orchestrator sandwich, orchestrator should be in a position to list a host device list _before_ ceph is in...
11/12/2018
- 06:00 PM Bug #36768 (Fix Under Review): tests failures when /dev/sda doesn't exist
- master PR https://github.com/ceph/ceph/pull/25063
- 04:21 PM Bug #36768 (Resolved): tests failures when /dev/sda doesn't exist
- PR https://github.com/ceph/ceph/pull/24859 introduced new unit tests that rely implicitly on systems that have device...
11/09/2018
- 08:25 PM Bug #24972 (Resolved): `inventory` top-level sub-command
- mimic PR: https://github.com/ceph/ceph/pull/25013
luminous PR: https://github.com/ceph/ceph/pull/25014
- 01:51 PM Bug #24972 (In Progress): `inventory` top-level sub-command
- master PR: https://github.com/ceph/ceph/pull/24859
- 08:13 PM Bug #36470 (Resolved): ceph-volume simple activate needs --no-systemd flag
- mimic PR: https://github.com/ceph/ceph/pull/25011
luminous PR: https://github.com/ceph/ceph/pull/25012
- 05:37 PM Bug #36648 (Resolved): osds don't come up after reboot
- mimic PR https://github.com/ceph/ceph/pull/24852
luminous PR https://github.com/ceph/ceph/pull/24853
- 02:20 PM Bug #36701: calling Device.is_valid repeatedly duplicates entries in Device._rejected_reasons
- https://github.com/ceph/ceph/pull/25007
11/08/2018
- 07:20 PM Bug #36470 (Fix Under Review): ceph-volume simple activate needs --no-systemd flag
- master PR: https://github.com/ceph/ceph/pull/24998
11/07/2018
- 09:22 PM Bug #36589 (Rejected): ceph-volume: generate bad clustername in /etc/ceph/osd files by default.
- An admin that deploys Ceph with a custom cluster name is required to use @--cluster=name@ *everywhere*. There is no w...
- 07:33 PM Bug #36728 (Resolved): ceph-volume does not respect $PATH
- Hello!
Attempting to use ceph-volume when LVM is installed in a non-standard location results in:...
- 02:47 PM Bug #36601 (Resolved): ceph-volume: use console_scripts instead of scripts for binaries
- master PR: https://github.com/ceph/ceph/pull/24773
mimic PR: https://github.com/ceph/ceph/pull/24852
luminous PR: h...
- 02:46 PM Bug #36246 (Resolved): confusing message when running a 'ceph-volume scan'
- mimic PR: https://github.com/ceph/ceph/pull/24826
luminous PR: https://github.com/ceph/ceph/pull/24827
- 02:44 PM Bug #36704 (Resolved): tox tests broken after systemd fixes
- master PR: https://github.com/ceph/ceph/pull/24937
mimic PR: https://github.com/ceph/ceph/pull/24957
luminous PR: ...
- 02:43 PM Bug #36672 (Resolved): tox: consume ceph-ansible's requirements.txt
- master PR: https://github.com/ceph/ceph/pull/24881
mimic PR: https://github.com/ceph/ceph/pull/24959
luminous PR: h...
11/06/2018
- 12:43 PM Bug #36648: osds don't come up after reboot
- master PR https://github.com/ceph/ceph/pull/24840
11/05/2018
- 07:01 PM Bug #36704 (Resolved): tox tests broken after systemd fixes
- Current master broken with:...
- 05:43 PM Feature #36446: Adding is_valid() function to filter out devices
- Jan Fajerski wrote:
> This is done, is it not?
Yep. Closing then.
- 10:38 AM Feature #36446: Adding is_valid() function to filter out devices
- This is done, is it not?
- 10:46 AM Bug #36701 (Resolved): calling Device.is_valid repeatedly duplicates entries in Device._rejected_...
- Device.is_valid only ever adds to _rejected_reasons. Two fixes are possible:
- is_valid resets _rejected_reasons on...
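The first proposed fix (resetting the list on every call) can be sketched like so. The class and attribute names follow the ticket; everything else, including the sample validity check, is illustrative:

```python
class Device:
    # Minimal sketch of the first proposed fix: is_valid() resets
    # _rejected_reasons on every call, so repeated calls do not
    # accumulate duplicate entries.
    def __init__(self, path):
        self.path = path
        self._rejected_reasons = []

    def is_valid(self):
        self._rejected_reasons = []  # reset instead of only appending
        if not self.path.startswith("/dev/"):
            self._rejected_reasons.append("not an absolute /dev path")
        return not self._rejected_reasons

d = Device("sda")
d.is_valid()
d.is_valid()
print(d._rejected_reasons)  # ['not an absolute /dev path'] — no duplicates
```

The alternative (computing reasons once and caching) would avoid repeated work as well, at the cost of staleness if the device state changes between calls.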
11/01/2018
- 12:45 PM Bug #36672 (Resolved): tox: consume ceph-ansible's requirements.txt
- All testing is currently blocked since the ceph-ansible project made a change that requires python-netaddr installed ...
10/30/2018
- 07:28 PM Bug #36648: osds don't come up after reboot
- Fallout from https://github.com/ceph/ceph/pull/24773 getting merged. I don't understand how our tests didn't pick it up.
- 05:24 PM Bug #36648 (Resolved): osds don't come up after reboot
- ...
10/29/2018
- 06:37 PM Feature #36603: Allow loop device as a LVM backend
- Ah, yes, ceph-deploy will not help here.
If you create the LVs beforehand (manually, with some other method beside...
- 06:00 PM Feature #36603: Allow loop device as a LVM backend
- Hi Alfredo,
to be honest - I'm not sure. To deploy both production and testing clusters I'm using code that executes...
- 09:55 AM Feature #36603: Allow loop device as a LVM backend
- For small testing/development clusters it is still viable to pre-create the LVs and then pass those onto ceph-volume....
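The pre-create workaround described above might look like the following command sequence. Everything here needs root, the sizes and the device/VG/LV names are illustrative, and this is a sketch rather than a supported recipe:

```shell
# Back a loop device with a sparse file (illustrative size and path)
truncate -s 10G /var/lib/ceph-loop.img
LOOPDEV=$(losetup -f --show /var/lib/ceph-loop.img)

# Pre-create the LV ourselves, since ceph-volume will not do it on a loop device
pvcreate "$LOOPDEV"
vgcreate ceph-test "$LOOPDEV"
lvcreate -l 100%FREE -n osd-data ceph-test

# Hand the existing vg/lv to ceph-volume
ceph-volume lvm create --bluestore --data ceph-test/osd-data
```

Because ceph-volume receives an existing vg/lv rather than a raw device, its device-validity checks never see the loop device.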
- 09:52 AM Bug #24795 (Resolved): Unclear documentation on preparing devices
- mimic PR: https://github.com/ceph/ceph/pull/24449
luminous PR: https://github.com/ceph/ceph/pull/24451
10/26/2018
- 07:35 PM Feature #36603 (Closed): Allow loop device as a LVM backend
- Loop device usage is a convenient way to deploy small Ceph clusters for development; however, @ceph-volume lvm create@...
- 01:09 PM Bug #36601 (Resolved): ceph-volume: use console_scripts instead of scripts for binaries
- ceph-volume should use console_scripts like most Python software today.
console_scripts make it possible to support out of the ...
- 01:05 PM Bug #36600 (New): ceph-volume: Some test_process tests fail in silence.
- None of the tests in TestFunctionalCall check returned values; for example:
* test_stdin ran "echo echo '/' | ...
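The silent-pass problem described in Bug #36600 is that a test runs a command but never asserts on its result. A minimal sketch of the difference, using generic subprocess code rather than the actual TestFunctionalCall suite:

```python
import subprocess

def run(cmd, stdin=None):
    # Run a command and capture everything a test might want to assert on.
    return subprocess.run(cmd, input=stdin, capture_output=True, text=True)

# Silent version: the result is computed but never checked, so the test
# "passes" even if the command misbehaved.
run(["echo", "/"])

# Checked version: assert on the return code and output so failures surface.
result = run(["echo", "/"])
assert result.returncode == 0
assert result.stdout.strip() == "/"
```

Any test that only exercises the call path without the trailing assertions is a no-op as far as correctness goes.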
10/25/2018
- 10:25 PM Feature #36363 (Resolved): batch prepare only for containerized deployments
- luminous backport: https://github.com/ceph/ceph/pull/24759
mimic backport: https://github.com/ceph/ceph/pull/24760
- 06:25 PM Bug #36386 (Resolved): remove --version
- mimic PR: https://github.com/ceph/ceph/pull/24753
luminous PR: https://github.com/ceph/ceph/pull/24754
- 04:00 PM Bug #36519 (Closed): simple tests failing (ceph-disk related)
- This was fixed in ceph-ansible when the commit was reverted
- 01:28 PM Bug #36492 (Resolved): ceph lvm list reports wrong json from within a container
- mimic PR https://github.com/ceph/ceph/pull/24740
luminous PR https://github.com/ceph/ceph/pull/24741
10/24/2018
- 02:58 PM Bug #36492 (Fix Under Review): ceph lvm list reports wrong json from within a container
- master PR https://github.com/ceph/ceph/pull/24738
- 02:48 PM Bug #36492: ceph lvm list reports wrong json from within a container
- [root@magna059 ~]# cat /var/log/ceph/ceph-volume.log
[2018-10-24 14:47:04,000][ceph_volume.main][INFO ] Running com...
- 01:30 PM Bug #36589 (Rejected): ceph-volume: generate bad clustername in /etc/ceph/osd files by default.
- I'm on ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
I have tried to convert a cep...
- 01:15 PM Bug #36586: ceph-volume /etc/ceph/osd files contains keyring and is worldwide readable.
- or something like ceph:ceph 400
- 01:13 PM Bug #36586 (New): ceph-volume /etc/ceph/osd files contains keyring and is worldwide readable.
- ceph-volume simple scan creates world-readable files in /etc/ceph/osd.
These files contain the keyring of the osd...
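One way to avoid the exposure described in this bug is to create such files with restrictive permissions at open() time, along the lines of the "ceph:ceph 400" suggestion above. The helper below is a hypothetical sketch, not ceph-volume code:

```python
import os
import stat
import tempfile

def write_private(path, data):
    # Create the file with mode 0600 atomically at open() time, so there is
    # no window in which a world-readable file exists on disk.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(data)

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "osd-keyring.json")
write_private(path, '{"keyring": "..."}')
print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o600
```

Passing the mode to os.open (rather than chmod-ing afterwards) is what closes the race window between creation and permission tightening.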