Bug #64012

closed

qa: Command failed qa/workunits/fs/full/subvolume_clone.sh

Added by Milind Changire 4 months ago. Updated 3 months ago.

Status:
Duplicate
Priority:
Normal
Category:
Testing
Target version:
v19.0.0
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
qa-suite
Labels (FS):
qa, qa-failure
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2024-01-11T05:58:15.533 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/run_tasks.py", line 105, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/run_tasks.py", line 83, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_4bcedb9fc11bed1fd6fd0c4fd59187401f309680/qa/tasks/workunit.py", line 129, in task
    p.spawn(_run_tests, ctx, refspec, role, tests,
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_4bcedb9fc11bed1fd6fd0c4fd59187401f309680/qa/tasks/workunit.py", line 424, in _run_tests
    remote.run(
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/orchestra/remote.py", line 523, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_teuthology_cd45576300487d997e5a85abed65500b9f5d143b/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed (workunit test fs/full/subvolume_clone.sh) on smithi029 with status 22: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=4bcedb9fc11bed1fd6fd0c4fd59187401f309680 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 CEPH_MNT=/home/ubuntu/cephtest/mnt.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/fs/full/subvolume_clone.sh'

Quincy:
https://pulpito.ceph.com/yuriw-2024-01-10_19:20:36-fs-wip-vshankar-testing1-quincy-2024-01-10-2010-quincy-distro-default-smithi/7512375
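
For readers unfamiliar with teuthology's orchestration layer, the following is a minimal, self-contained sketch of the pattern visible in the traceback above: run a command, wait for it, and raise an exception carrying the exit status if it is nonzero. This is illustrative only, not teuthology's actual code; the class and function bodies here are assumptions modeled on the log.

    # Illustrative sketch (NOT teuthology itself) of run() / wait() /
    # _raise_for_status() as seen in the traceback above.
    import subprocess

    class CommandFailedError(Exception):
        def __init__(self, command, exitstatus, node=None):
            self.command = command
            self.exitstatus = exitstatus
            self.node = node
            super().__init__(
                f"Command failed ({command!r}) on {node} "
                f"with status {exitstatus}"
            )

    def run_workunit(command, node="smithi029"):
        """Run a workunit command; raise on a nonzero exit status."""
        proc = subprocess.run(command, shell=True)
        if proc.returncode != 0:
            raise CommandFailedError(command, proc.returncode, node=node)

    # A script exiting 22 reproduces the failure mode reported here:
    # run_workunit("exit 22")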


Related issues 1 (1 open, 0 closed)

Related to CephFS - Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning (Status: Pending Backport; Assignee: Kotresh Hiremath Ravishankar)

Actions #1

Updated by Venky Shankar 4 months ago

  • Category set to Testing
  • Status changed from New to Triaged
  • Assignee set to Kotresh Hiremath Ravishankar
  • Target version set to v19.0.0
Actions #2

Updated by Kotresh Hiremath Ravishankar 3 months ago

This test exercises subvolume cloning under the OSD-full condition: the clone delay is set to 15 seconds and 10 clones are triggered, and the test expects all 10 clones to reach the pending state. Because of https://tracker.ceph.com/issues/63132, the available storage is much less than expected, and sometimes the subvolume clone command itself fails with ENOSPC. This is therefore another consequence of https://tracker.ceph.com/issues/63132, so marking this as a duplicate of 63132.
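
A hypothetical sketch of the scenario described above, written in Python rather than as the actual shell workunit: the mgr/volumes snapshot_clone_delay option and the ceph fs subvolume snapshot clone command are the interfaces the test relies on, but the helper names, volume/subvolume names, and control flow here are illustrative assumptions, not the logic of subvolume_clone.sh.

    # Hypothetical sketch of the failure mode: with a 15s clone delay,
    # trigger 10 clones and expect each to sit in the 'pending' state;
    # on a near-full cluster the clone command itself can instead fail
    # with ENOSPC. Names and flow are assumptions for illustration.
    import subprocess

    def ceph(*args):
        """Run a ceph CLI command, returning (exit_status, stdout)."""
        proc = subprocess.run(("ceph",) + args,
                              capture_output=True, text=True)
        return proc.returncode, proc.stdout

    def trigger_clones(vol="cephfs", subvol="sv", snap="snap1", count=10):
        # Delay clone start so all clones can be observed as 'pending'.
        ceph("config", "set", "mgr",
             "mgr/volumes/snapshot_clone_delay", "15")
        for i in range(count):
            status, _ = ceph("fs", "subvolume", "snapshot", "clone",
                             vol, subvol, snap, f"clone_{i}")
            if status != 0:
                # The failure reported here: the clone command itself
                # fails (e.g. ENOSPC) instead of the clone going pending.
                raise RuntimeError(f"clone_{i} failed with status {status}")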

Actions #3

Updated by Kotresh Hiremath Ravishankar 3 months ago

  • Status changed from Triaged to Duplicate
Actions #4

Updated by Kotresh Hiremath Ravishankar 3 months ago

  • Related to Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning added