Bug #48125

qa: test_subvolume_snapshot_clone_cancel_in_progress failure

Added by Patrick Donnelly over 3 years ago. Updated almost 3 years ago.

Status: Resolved
Priority: Normal
% Done: 0%
Backport: pacific,octopus,nautilus
Regression: No
Severity: 3 - minor
Component(FS): kceph, qa-suite
Labels (FS): qa-failure

Description

2020-11-05T03:00:02.672 INFO:teuthology.orchestra.run.smithi079:> (cd /home/ubuntu/cephtest/mnt.0 && exec sudo bash -c 'stat -c %h /home/ubuntu/cephtest/mnt.0/./volumes/_deleting')
2020-11-05T03:00:02.706 INFO:teuthology.orchestra.run.smithi079.stdout:3
2020-11-05T03:00:07.707 INFO:teuthology.orchestra.run:Running command with timeout 900
2020-11-05T03:00:07.708 INFO:teuthology.orchestra.run.smithi079:> (cd /home/ubuntu/cephtest/mnt.0 && exec sudo bash -c 'stat -c %h /home/ubuntu/cephtest/mnt.0/./volumes/_deleting')
2020-11-05T03:00:07.741 INFO:teuthology.orchestra.run.smithi079.stdout:3
...
2020-11-05T03:00:12.917 INFO:tasks.cephfs_test_runner:======================================================================
2020-11-05T03:00:12.918 INFO:tasks.cephfs_test_runner:ERROR: test_subvolume_snapshot_clone_cancel_in_progress (tasks.cephfs.test_volumes.TestSubvolumeSnapshotClones)
2020-11-05T03:00:12.918 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2020-11-05T03:00:12.918 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2020-11-05T03:00:12.918 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201104.220526/qa/tasks/cephfs/test_volumes.py", line 2829, in test_subvolume_snapshot_clone_cancel_in_progress
2020-11-05T03:00:12.919 INFO:tasks.cephfs_test_runner:    self._wait_for_trash_empty()
2020-11-05T03:00:12.919 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201104.220526/qa/tasks/cephfs/test_volumes.py", line 291, in _wait_for_trash_empty
2020-11-05T03:00:12.919 INFO:tasks.cephfs_test_runner:    self.mount_a.wait_for_dir_empty(trashdir, timeout=timeout)
2020-11-05T03:00:12.919 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_ceph-c_wip-pdonnell-testing-20201104.220526/qa/tasks/cephfs/mount.py", line 810, in wait_for_dir_empty
2020-11-05T03:00:12.919 INFO:tasks.cephfs_test_runner:    while proceed():
2020-11-05T03:00:12.920 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/contextutil.py", line 133, in __call__
2020-11-05T03:00:12.920 INFO:tasks.cephfs_test_runner:    raise MaxWhileTries(error_msg)
2020-11-05T03:00:12.920 INFO:tasks.cephfs_test_runner:teuthology.exceptions.MaxWhileTries: reached maximum tries (6) after waiting for 30 seconds

From: /ceph/teuthology-archive/pdonnell-2020-11-05_00:20:13-fs-wip-pdonnell-testing-20201104.220526-distro-basic-smithi/5591293/teuthology.log
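For context, the failing check is a simple polling loop: `wait_for_dir_empty` repeatedly runs `stat -c %h` on the trash directory and waits for the hard-link count to drop to 2. A minimal sketch of that loop follows; the function name mirrors the traceback, but the body, the 6-try count, and the 5-second cadence are inferred from the log above, not copied from the qa code:

    # Hypothetical reconstruction of the check in qa/tasks/cephfs/mount.py;
    # try count and sleep interval are assumptions inferred from the log.
    import subprocess
    import time

    def wait_for_dir_empty(path, tries=6, interval=5):
        """Wait until `path` is an empty directory.

        `stat -c %h` prints the hard-link count; an empty directory has
        exactly 2 links ("." plus its entry in the parent), so anything
        above 2 means a subdirectory is still present.
        """
        for _ in range(tries):
            out = subprocess.check_output(['stat', '-c', '%h', path])
            nlinks = int(out.decode().strip())
            if nlinks == 2:
                return
            time.sleep(interval)  # 6 tries x 5 s == the 30 s in the failure
        raise TimeoutError(f'{path}: still {nlinks} links after {tries} tries')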

The mgr had in fact deleted the directory quickly:

2020-11-05T02:59:42.611+0000 7fc9edf0e700  8 client.10770 rmdir(#0x1000000021a/fc1b25a9-dc09-47f2-9257-8d7f7f61593b) = 0

From: /ceph/teuthology-archive/pdonnell-2020-11-05_00:20:13-fs-wip-pdonnell-testing-20201104.220526-distro-basic-smithi/5591293/remote/smithi079/log/ceph-mgr.y.log.gz

It looks like stat is returning a stale result: 3 hard links (i.e. one subdirectory still counted) instead of the 2 expected for an empty directory, even though the mgr's rmdir had already completed.
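For reference, on filesystems with conventional directory link accounting (ext4, XFS, and CephFS, which this test relies on) an empty directory reports exactly 2 links and each subdirectory adds one via its ".." entry, so a count of 3 means one subdirectory is still visible. A quick standalone demonstration on a local filesystem:

    # Demonstrates the link-count invariant the test depends on
    # (run on a local ext4/XFS mount, not CephFS).
    import os
    import tempfile

    with tempfile.TemporaryDirectory() as d:
        print(os.stat(d).st_nlink)        # 2: "." plus the entry in the parent
        os.mkdir(os.path.join(d, 'child'))
        print(os.stat(d).st_nlink)        # 3: the child's ".." adds one link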
