Bug #57205: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)

Added by Venky Shankar over 1 year ago. Updated over 1 year ago.

Status: Pending Backport
Priority: Normal
Category: Correctness/Safety
Target version:
% Done: 0%
Source:
Tags: backport_processed
Backport: pacific,quincy
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): mgr/volumes
Labels (FS): qa
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/vshankar-2022-08-18_04:30:42-fs-wip-vshankar-testing1-20220818-082047-testing-default-smithi/6978395

2022-08-18T10:56:05.907 INFO:tasks.cephfs_test_runner:test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) ... ERROR
2022-08-18T10:56:05.908 INFO:tasks.cephfs_test_runner:
2022-08-18T10:56:05.909 INFO:tasks.cephfs_test_runner:======================================================================
2022-08-18T10:56:05.910 INFO:tasks.cephfs_test_runner:ERROR: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
2022-08-18T10:56:05.910 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2022-08-18T10:56:05.911 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2022-08-18T10:56:05.911 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_c7606b2c8bdef281f7eee6b171bb9adc82612b43/qa/tasks/cephfs/test_volumes.py", line 1677, in test_subvolume_group_ls_filter_internal_directories
2022-08-18T10:56:05.912 INFO:tasks.cephfs_test_runner:    self._fs_cmd("subvolume", "snapshot", "rm", self.volname, subvolume, snapshot)
2022-08-18T10:56:05.913 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_c7606b2c8bdef281f7eee6b171bb9adc82612b43/qa/tasks/cephfs/test_volumes.py", line 38, in _fs_cmd
2022-08-18T10:56:05.913 INFO:tasks.cephfs_test_runner:    return self.mgr_cluster.mon_manager.raw_cluster_cmd("fs", *args)
2022-08-18T10:56:05.914 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_c7606b2c8bdef281f7eee6b171bb9adc82612b43/qa/tasks/ceph_manager.py", line 1609, in raw_cluster_cmd
2022-08-18T10:56:05.915 INFO:tasks.cephfs_test_runner:    return self.run_cluster_cmd(**kwargs).stdout.getvalue()
2022-08-18T10:56:05.917 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_c7606b2c8bdef281f7eee6b171bb9adc82612b43/qa/tasks/ceph_manager.py", line 1600, in run_cluster_cmd
2022-08-18T10:56:05.917 INFO:tasks.cephfs_test_runner:    return self.controller.run(**kwargs)
2022-08-18T10:56:05.917 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b1d387f12b117399cb87c86aaa341398fa0c0919/teuthology/orchestra/remote.py", line 510, in run
2022-08-18T10:56:05.918 INFO:tasks.cephfs_test_runner:    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
2022-08-18T10:56:05.918 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b1d387f12b117399cb87c86aaa341398fa0c0919/teuthology/orchestra/run.py", line 455, in run
2022-08-18T10:56:05.918 INFO:tasks.cephfs_test_runner:    r.wait()
2022-08-18T10:56:05.919 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b1d387f12b117399cb87c86aaa341398fa0c0919/teuthology/orchestra/run.py", line 161, in wait
2022-08-18T10:56:05.919 INFO:tasks.cephfs_test_runner:    self._raise_for_status()
2022-08-18T10:56:05.919 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_b1d387f12b117399cb87c86aaa341398fa0c0919/teuthology/orchestra/run.py", line 183, in _raise_for_status
2022-08-18T10:56:05.920 INFO:tasks.cephfs_test_runner:    node=self.hostname, label=self.label
2022-08-18T10:56:05.920 INFO:tasks.cephfs_test_runner:teuthology.exceptions.CommandFailedError: Command failed on smithi063 with status 11: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs subvolume snapshot rm cephfs subvolume_0000000000650649 snapshot_0000000001000075'

The failure is easily reproducible.
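
For context, the sequence below is a minimal standalone sketch of the kind of operations that trigger the EAGAIN: cloning a subvolume snapshot and then removing the snapshot while the clone is still in flight. It assumes a running cluster with a filesystem named cephfs; the subvolume, snapshot and clone names are illustrative, not the ones generated by the test.

#!/usr/bin/env python3
# Sketch only: reproduce "snapshot has pending clones" (EAGAIN, exit status 11)
# by removing a subvolume snapshot while a clone of it is still in progress.
import subprocess

def fs_cmd(*args):
    # Thin wrapper around the ceph CLI, similar in spirit to _fs_cmd() in
    # qa/tasks/cephfs/test_volumes.py.
    return subprocess.run(["ceph", "fs", *args], check=True,
                          capture_output=True, text=True).stdout

VOL = "cephfs"  # assumed filesystem name

fs_cmd("subvolume", "create", VOL, "repro_subvol")
fs_cmd("subvolume", "snapshot", "create", VOL, "repro_subvol", "repro_snap")
fs_cmd("subvolume", "snapshot", "clone", VOL, "repro_subvol", "repro_snap", "repro_clone")

# While the clone above is still pending/in-progress, this raises
# CalledProcessError with exit status 11 (EAGAIN: snapshot has pending clones).
fs_cmd("subvolume", "snapshot", "rm", VOL, "repro_subvol", "repro_snap")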


Related issues 2 (0 open, 2 closed)

Copied to CephFS - Backport #57718: pacific: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) - Resolved - Kotresh Hiremath Ravishankar
Copied to CephFS - Backport #57719: quincy: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) - Resolved - Kotresh Hiremath Ravishankar

#1 - Updated by Venky Shankar over 1 year ago

Nikhil, PTAL.

#2 - Updated by Venky Shankar over 1 year ago

  • Assignee changed from Nikhilkumar Shelke to Kotresh Hiremath Ravishankar

Kotresh, PTAL.

#3 - Updated by Kotresh Hiremath Ravishankar over 1 year ago

  • Status changed from New to In Progress
  • Pull request ID set to 47985
  • Labels (FS) qa added

#4 - Updated by Kotresh Hiremath Ravishankar over 1 year ago

The snapshot removal failed because the snapshot still had pending clones. Please see the log below; a possible test-side remedy is sketched after it.

2022-08-18T10:55:51.450 INFO:teuthology.orchestra.run.smithi063.stderr:2022-08-18T10:55:51.438+0000 7f78abfff700  1 --2- 172.21.15.63:0/1068448419 >> [v2:172.21.15.63:6832/17362,v1:172.21.15.63:6833/17362] conn(0x7f788c060c70 0x7f788c063120 secure :-1 s=READY pgs=1966 cs=0 l=1 rev1=1 crypto rx=0x7f789c015040 tx=0x7f789c00a000 comp rx=0 tx=0).ready entity=mgr.4108 client_cookie=0 server_cookie=0 in_seq=0 out_seq=0
2022-08-18T10:55:51.451 INFO:teuthology.orchestra.run.smithi063.stderr:2022-08-18T10:55:51.442+0000 7f78a8ff9700  1 -- 172.21.15.63:0/1068448419 <== mon.0 v2:172.21.15.63:3300/0 6 ==== mon_command_ack([{"prefix": "get_command_descriptions"}]=0  v0) v1 ==== 72+0+166138 (secure 0 0 0) 0x7f78ac0c3f90 con 0x7f78ac0c2230
2022-08-18T10:55:51.659 INFO:teuthology.orchestra.run.smithi063.stderr:2022-08-18T10:55:51.650+0000 7f78b10a4700  1 -- 172.21.15.63:0/1068448419 --> [v2:172.21.15.63:6832/17362,v1:172.21.15.63:6833/17362] -- mgr_command(tid 0: {"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "subvolume_0000000000650649", "snap_name": "snapshot_0000000001000075", "target": ["mon-mgr", ""]}) v1 -- 0x7f78ac0c3f90 con 0x7f788c060c70
2022-08-18T10:55:53.293 INFO:teuthology.orchestra.run.smithi063.stderr:2022-08-18T10:55:53.286+0000 7f78a8ff9700  1 -- 172.21.15.63:0/1068448419 <== mon.0 v2:172.21.15.63:3300/0 7 ==== mgrmap(e 33) v1 ==== 81172+0+0 (secure 0 0 0) 0x7f78a401b030 con 0x7f78ac0c2230
2022-08-18T10:55:55.557 INFO:tasks.ceph.mgr.y.smithi063.stderr:2022-08-18T10:55:55.550+0000 7fb20257f700 -1 mgr.server reply reply (11) Resource temporarily unavailable snapshot 'snapshot_0000000001000075' has pending clones
2022-08-18T10:55:55.558 INFO:teuthology.orchestra.run.smithi063.stderr:2022-08-18T10:55:55.550+0000 7f78a8ff9700  1 -- 172.21.15.63:0/1068448419 <== mgr.4108 v2:172.21.15.63:6832/17362 1 ==== mgr_command_reply(tid 0: -11 snapshot 'snapshot_0000000001000075' has pending clones) v1 ==== 63+0+0 (secure 0 0 0) 0x7f78ac0c3f90 con 0x7f788c060c70
2022-08-18T10:55:55.560 INFO:teuthology.orchestra.run.smithi063.stderr:Error EAGAIN: snapshot 'snapshot_0000000001000075' has pending clones
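
A plausible test-side remedy (a sketch under assumptions, not necessarily what PR 47985 implements) is to poll the clone state via "ceph fs clone status" and only remove the snapshot once the clone reports complete. The helper and the names below are illustrative and reuse the reproduction sketch from the description.

# Sketch only: wait for a pending clone to finish before removing the source
# snapshot, so "subvolume snapshot rm" does not return EAGAIN.
import json
import subprocess
import time

def fs_cmd(*args):
    return subprocess.run(["ceph", "fs", *args], check=True,
                          capture_output=True, text=True).stdout

def wait_for_clone_complete(volname, clonename, timeout=120, interval=5):
    # Poll "ceph fs clone status" until the clone leaves the pending /
    # in-progress states; raise if it does not finish in time.
    state = "unknown"
    deadline = time.time() + timeout
    while time.time() < deadline:
        state = json.loads(fs_cmd("clone", "status", volname, clonename))["status"]["state"]
        if state == "complete":
            return
        time.sleep(interval)
    raise TimeoutError(f"clone {clonename} not complete after {timeout}s (state: {state})")

# Only after the clone has completed is the snapshot safe to remove.
wait_for_clone_complete("cephfs", "repro_clone")
fs_cmd("subvolume", "snapshot", "rm", "cephfs", "repro_subvol", "repro_snap")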

#5 - Updated by Rishabh Dave over 1 year ago

  • Status changed from In Progress to Pending Backport

#6 - Updated by Backport Bot over 1 year ago

  • Copied to Backport #57718: pacific: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) added

#7 - Updated by Backport Bot over 1 year ago

  • Copied to Backport #57719: quincy: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups) added

#8 - Updated by Backport Bot over 1 year ago

  • Tags set to backport_processed