Bug #61582
rbd unable to create snapshot: failed to allocate snapshot id: (95) Operation not supported
Description
Hi,
Running Proxmox 7.4-3 with Ceph version 17.2.6 (995dec2cdae920da21db2d455e55efbc339bde24) quincy (stable).
Kernel version: Linux kvm-01 5.15.107-2-pve #1 SMP PVE 5.15.107-2 (2023-05-10T09:10Z) x86_64 GNU/Linux
I recently upgraded from 17.2.5 to 17.2.6.
Ever since, trying to create an RBD snapshot fails with the following error:
# rbd snap create k8s_cephfs_data/csi-vol-8f127c1e-5dc8-4b9a-acc5-7f28eefd1ed0@test123
Creating snap: 10% complete...2023-06-04T00:59:34.873+0200 7f480695f700 -1 librbd::SnapshotCreateRequest: failed to allocate snapshot id: (95) Operation not supported
Creating snap: 10% complete...failed.
rbd: failed to create snapshot: (95) Operation not supported

root@kvm-01:~# rbd info k8s_cephfs_data/csi-vol-8f127c1e-5dc8-4b9a-acc5-7f28eefd1ed0
rbd image 'csi-vol-8f127c1e-5dc8-4b9a-acc5-7f28eefd1ed0':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 18ff5fe96f9e7d
        block_name_prefix: rbd_data.18ff5fe96f9e7d
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Mon May 15 20:32:39 2023
        access_timestamp: Mon May 15 20:32:39 2023
        modify_timestamp: Mon May 15 20:32:39 2023
This seems related to https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TEHTDANDVT6P6BKCRHCABHFBM2PJFDJ5/#5RRA44GGQN5NQ2L35CCK3UPOLIO3R77S and https://forum.proxmox.com/threads/backup-vzsnap-fails-after-update-to-ceph-17-2-6.128265/
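Per the linked mailing-list thread, the error is tied to the pool's application tag: since 17.2.6 the cluster refuses to allocate self-managed (RBD) snapshot IDs on a pool that is also tagged for CephFS. A minimal way to check what applications are enabled on the affected pool (pool name taken from this report; adjust for your cluster):

```shell
# Show which applications are enabled on the affected pool.
# A pool used by CephFS will list "cephfs"; a pure RBD pool lists "rbd".
ceph osd pool application get k8s_cephfs_data

# Compare against all pools and their application tags.
ceph osd pool ls detail
```

If the pool shows both "cephfs" and "rbd", that matches the situation described below, where a leftover CephFS on the pool blocked RBD snapshots.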
Updated by Florius rk 11 months ago
This is done with an admin account:
client.admin
        key: <SNIP>
        caps: [mds] allow *
        caps: [mgr] allow *
        caps: [mon] allow *
        caps: [osd] allow *
Updated by Florius rk 11 months ago
I created a new pool, and making a snapshot on the new pool worked. The image type and pool settings are the same:
root@kvm-01:~# rbd info test/test123
rbd image 'test123':
        size 1 GiB in 256 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 58a4955b5797e2
        block_name_prefix: rbd_data.58a4955b5797e2
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Sun Jun 4 11:34:02 2023
        access_timestamp: Sun Jun 4 11:34:02 2023
        modify_timestamp: Sun Jun 4 11:34:02 2023

root@kvm-01:~# rbd info k8s_cephfs_data/csi-vol-785f53c1-1a96-4aa9-a9fb-b25654af9c83
rbd image 'csi-vol-785f53c1-1a96-4aa9-a9fb-b25654af9c83':
        size 700 GiB in 179200 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: dc65ed407e8fc1
        block_name_prefix: rbd_data.dc65ed407e8fc1
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Sun Apr 16 00:26:42 2023
        access_timestamp: Sun Apr 16 00:26:42 2023
        modify_timestamp: Sun Apr 16 00:26:42 2023
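The control test above can be reproduced on any cluster with a throwaway pool (the pool and image names here are illustrative, not from the report):

```shell
# Create a fresh pool, initialize it for RBD, and confirm snapshots work there.
ceph osd pool create test 32
rbd pool init test
rbd create --size 1G test/test123

# On a pool with only the "rbd" application tag, this should succeed:
rbd snap create test/test123@snap1
rbd snap ls test/test123
```

If snapshots succeed on the fresh pool but fail on the original one, the difference is in the pools themselves (e.g. their application tags), not the images.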
Updated by Florius rk 11 months ago
So I still had an old Ceph FS on that specific pool, k8s_cephfs_data.
I removed the FS and snapshots worked again.
This worked before 17.2.6. Is this a bug or intended behaviour? Please advise, thank you!
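The workaround described above (removing the stale CephFS from the pool) can be sketched as follows. This is destructive and only appropriate if the filesystem on the pool is genuinely unused; <fsname> is a placeholder for whatever name `ceph fs ls` reports:

```shell
# Find the filesystem that still references the pool.
ceph fs ls

# WARNING: destructive. Bring the filesystem down, then remove it.
ceph fs fail <fsname>
ceph fs rm <fsname> --yes-i-really-mean-it

# Verify the pool's application tags afterwards; RBD snapshots should
# work again once the pool is no longer tied to a CephFS.
ceph osd pool application get k8s_cephfs_data
```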
Updated by Paul Kusters about 2 months ago
The issue still exists on Ceph version 18.2.1; for me, removing the CephFS on the affected pool isn't an option.