Activity
From 11/15/2020 to 12/14/2020
12/14/2020
- 10:21 PM Bug #44494 (Resolved): prepare: the *-slots arguments have no effect
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #47966 (Resolved): Fails to deploy osd in rook, throws index error
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #48018 (Resolved): ceph-volume simple activate ignores osd_mount_options_xfs for Filestore OSD
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Fix #48039 (Resolved): remove mention of dmcache from docs and help text
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:19 PM Bug #48045 (Resolved): the --log-level flag is not respected
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:18 PM Bug #48271 (Resolved): ceph-volume lvm batch fails activating filestore dymcrypt osds
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
12/12/2020
- 03:28 PM Backport #48304 (Resolved): octopus: prepare: the *-slots arguments have no effect
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38205
m...
- 03:28 PM Backport #48088 (Resolved): octopus: ceph-volume simple activate ignores osd_mount_options_xfs fo...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38014
m...
- 03:28 PM Backport #48188 (Resolved): octopus: remove mention of dmcache from docs and help text
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38047
m...
- 03:28 PM Backport #48303 (Resolved): octopus: ceph-volume lvm batch fails activating filestore dymcrypt osds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38199
m...
- 02:46 PM Backport #48353 (Resolved): octopus: Fails to deploy osd in rook, throws index error
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38280
m...
- 02:46 PM Backport #48186 (Resolved): octopus: the --log-level flag is not respected
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38426
m...
12/10/2020
- 04:34 PM Bug #47758: fail to create OSDs because the requested extent is too large
- Hi @Jan,
I was able to reproduce the issue on node 'clara013'; please find the details below.
Zap was...
12/08/2020
- 12:45 PM Feature #47584 (In Progress): make create/prepare idempotent
- 12:20 PM Bug #47758: fail to create OSDs because the requested extent is too large
- Sorry Juan, I'm only now getting to look at this. I'm somewhat hesitant to solve it the way your PR does. It would be much better ...
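The size-calculation issue discussed in this thread can be illustrated with a minimal sketch (this is an assumption about the failure mode, not the actual ceph-volume code): LVM allocates logical volumes in whole physical extents (4 MiB by default), so a computed LV size that is not rounded down to an extent multiple can ask the volume group for more than it can allocate.

```python
# Minimal illustration, not ceph-volume's real implementation: round a
# requested LV size down to a whole number of LVM extents so the request
# never exceeds what the volume group can actually provide.
DEFAULT_EXTENT_SIZE = 4 * 1024 * 1024  # LVM's default physical extent size (4 MiB)

def round_down_to_extents(size_bytes, extent_size=DEFAULT_EXTENT_SIZE):
    """Return the largest extent-aligned size not exceeding size_bytes."""
    return (size_bytes // extent_size) * extent_size
```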
- 12:00 PM Bug #48492 (Resolved): util/disk.py can't parse PB size suffix
- This leads to ceph-volume throwing an IndexError when a PB sized disk is attached to a node.
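The failure mode is easy to reproduce with a sketch (an assumed shape for the formatter, not the exact util/disk.py code): if a human-readable size formatter keeps a fixed suffix table that stops before 'PB', a PB-scale value indexes past the end of the table.

```python
# Hedged sketch of the bug, not the exact util/disk.py code: a suffix table
# that stopped at 'TB' was indexed past its end for PB-scale values.
# Listing 'PB' and capping the loop at the table length avoids the IndexError.
def human_readable_size(size):
    suffixes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB']
    index = 0
    while size >= 1024 and index < len(suffixes) - 1:
        size /= 1024.0
        index += 1
    return '{:.2f} {}'.format(size, suffixes[index])
```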
12/07/2020
12/04/2020
- 12:45 PM Bug #48464: blkid holds old entries in cache
- https://github.com/ceph/ceph/pull/38447
- 12:43 PM Bug #48464 (Resolved): blkid holds old entries in cache
- OS: Ubuntu 16.04
Ceph: Nautilus 14.2.11
Activating an OSD drive could fail because of old entries in the blkid cache
<pre...
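One way around a stale cache, sketched below, is to make blkid probe the device directly; this is a hedged illustration of the general technique, not the actual fix in the linked PR. `blkid -p` performs low-level superblock probing and bypasses the cache file entirely, so leftover entries for a re-used device cannot mislead activation.

```python
# Hedged sketch, not the fix from the linked PR: query blkid with low-level
# probing (-p) so the on-disk cache (/run/blkid/blkid.tab, or the legacy
# /etc/blkid.tab on older Ubuntu) is never consulted.
import subprocess

def blkid_probe_cmd(device):
    # -p: low-level probing, bypasses the blkid cache
    # -o udev: KEY=value output that is easy to parse
    return ['blkid', '-p', '-o', 'udev', device]

def probe_device(device):
    out = subprocess.check_output(blkid_probe_cmd(device)).decode()
    return dict(line.split('=', 1) for line in out.splitlines() if '=' in line)
```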
12/03/2020
- 05:21 PM Backport #48187 (Resolved): nautilus: the --log-level flag is not respected
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38372
m...
- 05:20 PM Backport #48413 (Resolved): nautilus: lvm/create.py: typo in the help message
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38371
m...
- 12:20 PM Backport #48186 (In Progress): octopus: the --log-level flag is not respected
- 12:19 PM Backport #48414 (In Progress): octopus: lvm/create.py: typo in the help message
- 02:43 AM Feature #47295 (Rejected): Optimize ceph-volume inventory to reduce runtime
- rejected - an alternate approach was implemented
- 01:50 AM Bug #48445 (New): ceph-volume lvm zap fails to properly dismantle volume groups
- After successfully deploying with cephadm, and then successfully purging the cluster with cephadm, I tried to use zap...
12/01/2020
- 07:45 PM Backport #48366: octopus: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38299
m...
- 04:35 PM Backport #48366 (Resolved): octopus: libstoragemgmt calls fatally wound Areca RAID controllers on...
- 07:44 PM Backport #48352: nautilus: Fails to deploy osd in rook, throws index error
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38279
m...
- 04:36 PM Backport #48352 (Resolved): nautilus: Fails to deploy osd in rook, throws index error
- 04:42 PM Backport #48187 (In Progress): nautilus: the --log-level flag is not respected
- 04:40 PM Backport #48413 (In Progress): nautilus: lvm/create.py: typo in the help message
- 04:39 PM Backport #48413 (Resolved): nautilus: lvm/create.py: typo in the help message
- https://github.com/ceph/ceph/pull/38371
- 04:39 PM Backport #48414 (Resolved): octopus: lvm/create.py: typo in the help message
- https://github.com/ceph/ceph/pull/38425
- 04:39 PM Bug #48273 (Pending Backport): lvm/create.py: typo in the help message
- 04:35 PM Bug #48270 (Resolved): libstoragemgmt calls fatally wound Areca RAID controllers on mira
- 04:34 PM Support #48259 (Closed): Ceph-ansible OSD node add fail (ceph.volume Failed to find physical volume)
- The /dev/sdk device that ansible is trying to use does not exist.
So closing this as not a ceph-volume issue. If the host...
- 12:12 PM Feature #41294 (Fix Under Review): ceph-volume should be able to list metadata for a specific osd
- 10:00 AM Bug #47831: ceph-volume reject md-devices [rejected reason: Insufficient space <5GB]
- confirming that "Insufficient space <5GB" goes for md devices on newer versions of ceph
https://github.com/rook/rook...
11/30/2020
- 10:14 PM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- The libstoragemgmt bug has a fix upstream: https://github.com/libstorage/libstoragemgmt/pull/444
- 01:11 PM Bug #47758: fail to create OSDs because the requested extent is too large
- @Jan, have you seen this?:
https://github.com/ceph/ceph/pull/38335
We have verified that this change fixes the issue.
- 01:08 PM Bug #47758: fail to create OSDs because the requested extent is too large
- Jan Fajerski wrote:
> @Juan was this seen in a CI setup or just a rook cluster?
>
> I'll look into adjusting the ...
- 12:10 PM Bug #47758 (In Progress): fail to create OSDs because the requested extent is too large
- @Juan was this seen in a CI setup or just a rook cluster?
I'll look into adjusting the size calculation by a poten...
- 12:03 PM Bug #47758 (Duplicate): fail to create OSDs because the requested extent is too large
- 12:03 PM Bug #48383 (Duplicate): OSD creation fails because volume group has insufficient free space to pl...
11/26/2020
- 05:08 PM Bug #48383 (Duplicate): OSD creation fails because volume group has insufficient free space to pl...
- Error when trying to create an OSD using the Ceph orchestrator. After several tests we were able to discover that the prob...
- 11:17 AM Feature #47541 (Resolved): add no-systemd argument to zap
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:19 AM Backport #48367 (Rejected): nautilus: libstoragemgmt calls fatally wound Areca RAID controllers o...
- LSM data retrieval was not backported to nautilus
- 07:15 AM Backport #48367 (Rejected): nautilus: libstoragemgmt calls fatally wound Areca RAID controllers o...
- 07:16 AM Backport #48366 (In Progress): octopus: libstoragemgmt calls fatally wound Areca RAID controllers...
- 07:15 AM Backport #48366 (Resolved): octopus: libstoragemgmt calls fatally wound Areca RAID controllers on...
- https://github.com/ceph/ceph/pull/38299
- 07:14 AM Bug #48270 (Pending Backport): libstoragemgmt calls fatally wound Areca RAID controllers on mira
11/25/2020
- 07:51 PM Backport #48087 (Resolved): nautilus: ceph-volume simple activate ignores osd_mount_options_xfs f...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38015
m...
- 07:51 PM Backport #48189 (Resolved): nautilus: remove mention of dmcache from docs and help text
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38048
m...
- 07:51 PM Backport #47846 (Resolved): nautilus: add no-systemd argument to zap
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37723
m...
- 07:46 PM Backport #48302 (Resolved): nautilus: ceph-volume lvm batch fails activating filestore dymcrypt osds
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38198
m...
- 10:44 AM Backport #48353 (In Progress): octopus: Fails to deploy osd in rook, throws index error
- 10:42 AM Backport #48353 (Resolved): octopus: Fails to deploy osd in rook, throws index error
- https://github.com/ceph/ceph/pull/38280
- 10:44 AM Backport #48352 (In Progress): nautilus: Fails to deploy osd in rook, throws index error
- 10:42 AM Backport #48352 (Resolved): nautilus: Fails to deploy osd in rook, throws index error
- https://github.com/ceph/ceph/pull/38279
- 10:42 AM Bug #47966 (Pending Backport): Fails to deploy osd in rook, throws index error
11/24/2020
- 03:02 PM Bug #48106 (Resolved): ceph-volume lvm batch doesn't work anymore with --auto and full SSD/NVMe d...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
11/23/2020
- 06:21 PM Backport #48302: nautilus: ceph-volume lvm batch fails activating filestore dymcrypt osds
- Jan Fajerski wrote:
> https://github.com/ceph/ceph/pull/38198
merged
- 01:05 PM Backport #48185 (Resolved): nautilus: ceph-volume lvm batch doesn't work anymore with --auto and ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38046
m...
11/20/2020
- 05:54 PM Backport #48185: nautilus: ceph-volume lvm batch doesn't work anymore with --auto and full SSD/NV...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/38046
merged
- 08:11 AM Backport #48304 (In Progress): octopus: prepare: the *-slots arguments have no effect
- 01:30 AM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- https://github.com/libstorage/libstoragemgmt/issues/442
- 12:53 AM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- In fact...
- 12:31 AM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- I've been debugging the libstoragemgmt code behind link_type_get. It appears the command failure/bus reset occurs on ...
11/19/2020
- 09:18 PM Backport #48304 (Resolved): octopus: prepare: the *-slots arguments have no effect
- https://github.com/ceph/ceph/pull/38205
- 09:17 PM Backport #48303 (In Progress): octopus: ceph-volume lvm batch fails activating filestore dymcrypt...
- 09:15 PM Backport #48303 (Resolved): octopus: ceph-volume lvm batch fails activating filestore dymcrypt osds
- https://github.com/ceph/ceph/pull/38199
- 09:17 PM Backport #48302 (In Progress): nautilus: ceph-volume lvm batch fails activating filestore dymcryp...
- 09:15 PM Backport #48302 (Resolved): nautilus: ceph-volume lvm batch fails activating filestore dymcrypt osds
- https://github.com/ceph/ceph/pull/38198
- 09:14 PM Bug #44494 (Pending Backport): prepare: the *-slots arguments have no effect
- This didn't make octopus :/
- 09:11 PM Bug #48271 (Pending Backport): ceph-volume lvm batch fails activating filestore dymcrypt osds
- 08:35 AM Bug #48271 (Fix Under Review): ceph-volume lvm batch fails activating filestore dymcrypt osds
- 08:34 AM Bug #48271: ceph-volume lvm batch fails activating filestore dymcrypt osds
- When the journal device is prepared, the uuid set in `tags['ceph.journal_uuid']` currently refers to the uuid generat...
- 07:41 PM Bug #37805 (Closed): ceph-ansible includes an incompatible role in stable-3.2
- 01:19 PM Feature #41294 (In Progress): ceph-volume should be able to list metadata for a specific osd
11/18/2020
- 09:33 PM Bug #48270 (In Progress): libstoragemgmt calls fatally wound Areca RAID controllers on mira
- 09:31 PM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- That's a good point Dan. Could we document this in the release notes (as prominently as possible) as a 'known issue'?
- 09:29 PM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- Yes, I intend to follow up with libstoragemgmt maintainers as well. It's possible there's a way to make this code fr...
- 08:43 AM Bug #48270 (Fix Under Review): libstoragemgmt calls fatally wound Areca RAID controllers on mira
- Oh wow...that is not very nice. I pushed a PR to make this data retrieval optional and opt-in.
We should probably ...
- 05:32 AM Bug #48270: libstoragemgmt calls fatally wound Areca RAID controllers on mira
- I've reserved mira083 installed with CentOS 7 (we don't yet have a CentOS 8 image for mira), and can reproduce the bu...
- 05:18 AM Bug #48270 (Resolved): libstoragemgmt calls fatally wound Areca RAID controllers on mira
- Preface: mira is very old hardware and has shown a tendency to be buggy WRT SCSI commands in the past (since the driv...
- 10:21 AM Bug #48271: ceph-volume lvm batch fails activating filestore dymcrypt osds
- Ok so the root cause of this bug then is something like this?
The ceph-osd --mkfs call with a journal fails for so...
- 07:52 AM Bug #48271 (Resolved): ceph-volume lvm batch fails activating filestore dymcrypt osds
- --> DEPRECATION NOTICE
--> --journal-size as integer is parsed as megabytes
--> A future release will p... - 09:02 AM Bug #48273 (Resolved): lvm/create.py: typo in the help message
- This is a *convinience* command that combines the prepare
11/17/2020
- 12:35 PM Backport #48184 (Resolved): octopus: ceph-volume lvm batch doesn't work anymore with --auto and f...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/38045
m...
- 10:19 AM Support #48259 (Closed): Ceph-ansible OSD node add fail (ceph.volume Failed to find physical volume)
- Deployment Node:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: T...