Activity
From 10/21/2020 to 11/19/2020
11/19/2020
- 09:18 PM Backport #48304 (Resolved): octopus: prepare: the *-slots arguments have no effect
- https://github.com/ceph/ceph/pull/38205
- 09:17 PM Backport #48303 (In Progress): octopus: ceph-volume lvm batch fails activating filestore dymcrypt...
- 09:15 PM Backport #48303 (Resolved): octopus: ceph-volume lvm batch fails activating filestore dymcrypt osds
- https://github.com/ceph/ceph/pull/38199
- 09:17 PM Backport #48302 (In Progress): nautilus: ceph-volume lvm batch fails activating filestore dymcryp...
- 09:15 PM Backport #48302 (Resolved): nautilus: ceph-volume lvm batch fails activating filestore dymcrypt osds
- https://github.com/ceph/ceph/pull/38198
- 09:14 PM Bug #44494 (Pending Backport): prepare: the *-slots arguments have no effect
- This didn't make octopus :/
- 09:11 PM Bug #48271 (Pending Backport): ceph-volume lvm batch fails activating filestore dymcrypt osds
- 08:35 AM Bug #48271 (Fix Under Review): ceph-volume lvm batch fails activating filestore dymcrypt osds
- 08:34 AM Bug #48271: ceph-volume lvm batch fails activating filestore dymcrypt osds
- When the journal device is prepared, the uuid set in `tags['ceph.journal_uuid']` currently refers to the uuid generat...
- 07:41 PM Bug #37805 (Closed): ceph-ansible includes an incompatible role in stable-3.2
- 01:19 PM Feature #41294 (In Progress): ceph-volume should be able to list metadata for a specific osd
11/18/2020
- 09:33 PM Bug #48270 (In Progress): libstoragemgmt calls fatally wound Areca RAID controllers on mira
- 09:31 PM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- That's a good point Dan. Could we document this in the release notes (as prominently as possible) as a 'known issue'?
- 09:29 PM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- Yes, I intend to follow up with libstoragemgmt maintainers as well. It's possible there's a way to make this code fr...
- 08:43 AM Bug #48270 (Fix Under Review): libstoragemgmt calls fatally wound Areca RAID controllers on mira
- Oh wow...that is not very nice. I pushed a PR to make this data retrieval optional and opt-in.
We should probably ...
- 05:32 AM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- I've reserved mira083 installed with CentOS 7 (we don't yet have a CentOS 8 image for mira), and can reproduce the bu...
- 05:18 AM Bug #48270 (Resolved): libstoragemgmt calls fatally wound Areca RAID controllers on mira
- Preface: mira is very old hardware and has shown a tendency to be buggy WRT SCSI commands in the past (since the driv...
- 10:21 AM Bug #48271: ceph-volume lvm batch fails activating filestore dymcrypt osds
- Ok so the root cause of this bug then is something like this?
The ceph-osd --mkfs call with a journal fails for so...
- 07:52 AM Bug #48271 (Resolved): ceph-volume lvm batch fails activating filestore dymcrypt osds
- --> DEPRECATION NOTICE
--> --journal-size as integer is parsed as megabytes
--> A future release will p...
- 09:02 AM Bug #48273 (Resolved): lvm/create.py: typo in the help message
- This is a *convinience* command that combines the prepare
11/17/2020
- 12:35 PM Backport #48184 (Resolved): octopus: ceph-volume lvm batch doesn't work anymore with --auto and f...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38045
m...
- 10:19 AM Support #48259 (Closed): Ceph-ansible OSD node add fail (ceph.volume Failed to find physical volume)
- Deployment Node:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: T...
11/13/2020
- 08:01 PM Backport #48184: octopus: ceph-volume lvm batch doesn't work anymore with --auto and full SSD/NVM...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38045
merged
- 03:55 PM Backport #48184 (In Progress): octopus: ceph-volume lvm batch doesn't work anymore with --auto an...
- 05:31 PM Backport #48189 (In Progress): nautilus: remove mention of dmcache from docs and help text
- 05:31 PM Backport #48188 (In Progress): octopus: remove mention of dmcache from docs and help text
- 04:00 PM Backport #48185 (In Progress): nautilus: ceph-volume lvm batch doesn't work anymore with --auto a...
11/12/2020
- 04:51 PM Backport #48189: nautilus: remove mention of dmcache from docs and help text
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/38048
ceph-backport.sh versi...
- 04:47 PM Backport #48188: octopus: remove mention of dmcache from docs and help text
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/38047
ceph-backport.sh versi...
- 04:31 PM Backport #48185: nautilus: ceph-volume lvm batch doesn't work anymore with --auto and full SSD/NV...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/38046
ceph-backport.sh versi...
- 04:29 PM Backport #48184: octopus: ceph-volume lvm batch doesn't work anymore with --auto and full SSD/NVM...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/38045
ceph-backport.sh versi...
11/11/2020
- 02:19 PM Backport #48189 (Resolved): nautilus: remove mention of dmcache from docs and help text
- https://github.com/ceph/ceph/pull/38048
- 02:19 PM Backport #48188 (Resolved): octopus: remove mention of dmcache from docs and help text
- https://github.com/ceph/ceph/pull/38047
- 02:19 PM Backport #48187 (Resolved): nautilus: the --log-level flag is not respected
- https://github.com/ceph/ceph/pull/38372
- 02:18 PM Backport #48186 (Resolved): octopus: the --log-level flag is not respected
- https://github.com/ceph/ceph/pull/38426
- 02:18 PM Backport #48185 (Resolved): nautilus: ceph-volume lvm batch doesn't work anymore with --auto and ...
- https://github.com/ceph/ceph/pull/38046
- 02:18 PM Backport #48184 (Resolved): octopus: ceph-volume lvm batch doesn't work anymore with --auto and f...
- https://github.com/ceph/ceph/pull/38045
- 01:35 PM Backport #48087 (In Progress): nautilus: ceph-volume simple activate ignores osd_mount_options_xf...
- 12:04 PM Backport #48088 (In Progress): octopus: ceph-volume simple activate ignores osd_mount_options_xfs...
11/10/2020
- 08:31 PM Backport #48087: nautilus: ceph-volume simple activate ignores osd_mount_options_xfs for Filestor...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/38015
ceph-backport.sh versi...
- 08:29 PM Backport #48088: octopus: ceph-volume simple activate ignores osd_mount_options_xfs for Filestore...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/38014
ceph-backport.sh versi...
- 02:58 PM Bug #47966: Fails to deploy osd in rook, throws index error
- I think so yes. The batch subcommand does not handle partitions, but full devices or LVs. See also https://docs.ceph....
- 02:50 PM Bug #48106 (Pending Backport): ceph-volume lvm batch doesn't work anymore with --auto and full SS...
- 02:50 PM Bug #48150: add more tests for _sort_rotational_disks
- was added to https://github.com/ceph/ceph/pull/37942
- 02:49 PM Bug #48150 (Resolved): add more tests for _sort_rotational_disks
11/09/2020
- 10:33 AM Fix #48039 (Pending Backport): remove mention of dmcache from docs and help text
- 10:31 AM Bug #48045 (Pending Backport): the --log-level flag is not respected
- 10:26 AM Bug #48150 (Resolved): add more tests for _sort_rotational_disks
- because we only added one test for the full SSD scenario (https://github.com/ceph/ceph/pull/37942) but I guess we sho...
11/04/2020
- 09:23 AM Bug #48106 (Fix Under Review): ceph-volume lvm batch doesn't work anymore with --auto and full SS...
11/03/2020
- 10:08 PM Bug #48106 (Resolved): ceph-volume lvm batch doesn't work anymore with --auto and full SSD/NVMe d...
- Since the lvm batch refactor [1], it's not possible anymore to run...
- 02:50 PM Bug #47831: ceph-volume reject md-devices [rejected reason: Insufficient space <5GB]
- * LSBLK for MD127 on the docker host ...
- 02:47 PM Bug #47831: ceph-volume reject md-devices [rejected reason: Insufficient space <5GB]
- h2. LSBLK for MD127 on the docker host...
- 11:24 AM Backport #48088 (Resolved): octopus: ceph-volume simple activate ignores osd_mount_options_xfs fo...
- https://github.com/ceph/ceph/pull/38014
- 11:24 AM Backport #48087 (Resolved): nautilus: ceph-volume simple activate ignores osd_mount_options_xfs f...
- https://github.com/ceph/ceph/pull/38015
10/29/2020
- 07:15 PM Bug #48045 (Fix Under Review): the --log-level flag is not respected
- 07:10 PM Bug #48045 (Resolved): the --log-level flag is not respected
- Regardless of what is given to --log-level, the file log level is always set to DEBUG.
In looking at the code it s...
- 02:43 PM Fix #48039 (Resolved): remove mention of dmcache from docs and help text
- With the introduction of bluestore dmcache is no longer needed and
is no longer supported with `ceph-volume lvm`. We...
10/28/2020
- 12:55 PM Bug #48018: ceph-volume simple activate ignores osd_mount_options_xfs for Filestore OSD
- That's perfect like this.
Thanks Jan.
- 08:23 AM Bug #48018 (Pending Backport): ceph-volume simple activate ignores osd_mount_options_xfs for File...
- @Dimitri Let me know if this is needed further back than nautilus please.
- 08:27 AM Bug #47831: ceph-volume reject md-devices [rejected reason: Insufficient space <5GB]
- I don't remember a particular change in ceph-volume between these versions. Especially not something that shrinks the i...
10/27/2020
- 03:52 PM Bug #48018 (Resolved): ceph-volume simple activate ignores osd_mount_options_xfs for Filestore OSD
- ceph-volume lvm prepare/activate consumes osd_mount_options_xfs value from the ceph configuration file for mounting Fi...
10/26/2020
- 10:47 AM Backport #47845 (Resolved): octopus: add no-systemd argument to zap
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37722
m...
- 08:43 AM Bug #47978: mgr inventory page displays zram as 'ssd'
- Thanks for the report, Harry. Updating project, as the data displayed on the dashboard is collected via the Orchestra...
10/25/2020
- 05:05 PM Bug #47978 (New): mgr inventory page displays zram as 'ssd'
- The manager cluster/inventory page includes zram devices as 'inventory' in the 'ssd' class.
This suggests ceph a...
10/23/2020
- 06:51 PM Backport #47845: octopus: add no-systemd argument to zap
- Jan Fajerski wrote:
> https://github.com/ceph/ceph/pull/37722
merged
- 05:05 PM Bug #47966: Fails to deploy osd in rook, throws index error
- On a teuthology smithi machine, I don't get the error but OSDs are not deployed either. Is this expected behaviour?
<pr...
- 03:27 PM Bug #47966 (In Progress): Fails to deploy osd in rook, throws index error
- Hmm batch shouldn't accept partitions. That is certainly a bug.
But batch should only be fed with bare devices or ...
- 09:07 AM Bug #47966 (Resolved): Fails to deploy osd in rook, throws index error
- ...
10/21/2020
- 10:57 AM Bug #36242 (Resolved): broken journal and filestore data size in `lvm batch --report`
- 10:57 AM Bug #36283 (Resolved): --journal-size flag broken when less than 1GB
- 10:57 AM Bug #37502 (Resolved): lvm batch potentially creates multi-pv volume groups
- 10:57 AM Bug #37590 (Resolved): api.vgcreate uses a PE size of 1G
- 10:57 AM Bug #38168 (Resolved): c-v inventory command reports disk with 0 Bytes as available
- 10:56 AM Bug #42412 (Resolved): lvm create needs --journal for filestore
- 10:56 AM Bug #43899 (Resolved): cephadm: Remove the clutch between Teuthology and ceph-volume
- 10:56 AM Bug #44749 (Resolved): lvm batch does not re-use db devices with free space on VGs
- 10:56 AM Feature #44783 (Resolved): c-v's batch report doesn't show if the disk is going to be encrypted
- 10:55 AM Feature #44951 (Resolved): add support for 'slots' in lvm batch
- 10:54 AM Bug #46033 (Resolved): "AttributeError: 'MixedType' object has no attribute 'wal_vg_extents'" whi...
- 10:54 AM Bug #24969 (Resolved): `lvm create` should allow carving out LVs from existing VGs
- 09:08 AM Feature #47925 (New): add ceph-volume simple zap <osd-id> <osd-uuid> subcommand
- As a prerequisite to a high-level zap command, we'll need a zap for simple (i.e. ceph-disk deployed osds)
- 09:01 AM Feature #44630: cephadm: improve behaviour with virtual disks
- I'm not sure what the request is. Can we clarify this a bit?
- 08:59 AM Feature #44911 (Resolved): support dmcrypt device that is already encrypted by user