Bug #58726


Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)

Added by Venky Shankar about 1 year ago. Updated 7 months ago.

Status:
Pending Backport
Priority:
Normal
Assignee:
Category:
Testing
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
backport_processed
Backport:
pacific,quincy
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
qa, qa-failure
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/yuriw-2023-02-13_20:43:24-fs-wip-yuri5-testing-2023-02-07-0850-quincy-distro-default-smithi/7171607

2023-02-14T04:09:34.961 INFO:tasks.cephfs_test_runner:======================================================================
2023-02-14T04:09:34.962 INFO:tasks.cephfs_test_runner:ERROR: test_acls (tasks.cephfs.test_acls.TestACLs)
2023-02-14T04:09:34.962 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2023-02-14T04:09:34.962 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2023-02-14T04:09:34.962 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e4842e6def3c0c51750b118e8ec702f6124b84a2/qa/tasks/cephfs/xfstests_dev.py", line 18, in setUp
2023-02-14T04:09:34.962 INFO:tasks.cephfs_test_runner:    self.prepare_xfstests_dev()
2023-02-14T04:09:34.963 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e4842e6def3c0c51750b118e8ec702f6124b84a2/qa/tasks/cephfs/xfstests_dev.py", line 23, in prepare_xfstests_dev
2023-02-14T04:09:34.963 INFO:tasks.cephfs_test_runner:    self.install_deps()
2023-02-14T04:09:34.963 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_e4842e6def3c0c51750b118e8ec702f6124b84a2/qa/tasks/cephfs/xfstests_dev.py", line 117, in install_deps
2023-02-14T04:09:34.963 INFO:tasks.cephfs_test_runner:    raise RuntimeError('expected a yum based or a apt based system')
2023-02-14T04:09:34.963 INFO:tasks.cephfs_test_runner:RuntimeError: expected a yum based or a apt based system
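The failure comes from install_deps() in qa/tasks/cephfs/xfstests_dev.py, which decides how to install xfstests build dependencies by probing the test node's package manager and bails out if it recognizes neither yum nor apt. A minimal sketch of that kind of probe, assuming a local detect_pkg_manager() helper (hypothetical; the real teuthology code runs the checks on the remote node):

```python
import shutil

def detect_pkg_manager() -> str:
    """Hypothetical sketch of the package-manager probe that
    install_deps() performs; the real logic lives in
    qa/tasks/cephfs/xfstests_dev.py and runs remotely."""
    if shutil.which('yum'):
        return 'yum'
    if shutil.which('apt-get'):
        return 'apt'
    # Mirrors the RuntimeError in the traceback above.
    raise RuntimeError('expected a yum based or a apt based system')
```

On a distro that ships neither tool (or where detection misfires), the probe falls through to the RuntimeError seen in the log.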

Related issues 2 (1 open, 1 closed)

Copied to CephFS - Backport #58991: quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs) (In Progress, Rishabh Dave)
Copied to CephFS - Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs) (Rejected, Rishabh Dave)
Actions #1

Updated by Dhairya Parmar about 1 year ago

  • Pull request ID set to 50142
Actions #2

Updated by Rishabh Dave about 1 year ago

  • Status changed from New to Fix Under Review
Actions #3

Updated by Venky Shankar about 1 year ago

  • Subject changed from quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs) to Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
Actions #4

Updated by Rishabh Dave about 1 year ago

  • Backport set to pacific,quincy
Actions #5

Updated by Rishabh Dave about 1 year ago

  • Status changed from Fix Under Review to Pending Backport
Actions #6

Updated by Backport Bot about 1 year ago

  • Copied to Backport #58991: quincy: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs) added
Actions #7

Updated by Backport Bot about 1 year ago

  • Copied to Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs) added
Actions #8

Updated by Backport Bot about 1 year ago

  • Tags set to backport_processed
Actions #10

Updated by Ilya Dryomov 11 months ago

  • Target version deleted (v17.2.6)
Actions #12

Updated by alexandre derumier 7 months ago

Hi,

I'm seeing this warning on a custom cluster, with Ceph running in a QEMU virtual machine with virtio-scsi disks.
They are virtual disks backed by real SSD storage.

2023-09-20T15:14:32.356616+0200 osd.1 (osd.1) 1 : cluster [WRN] OSD bench result of 13847.075721 IOPS exceeded the threshold limit of 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].

ceph osd tree shows them as ssd:

# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME     STATUS  REWEIGHT  PRI-AFF
-1         6.98630  root default
....
 1  ssd    0.87328      osd.1     up      1.00000   1.00000

but I can only silence this warning by setting the value for hdd:

ceph config set osd osd_mclock_max_capacity_iops_hdd 15000

I think this is because virtio disks are exposed with rotational=1 (this can be forced to 0 via a QEMU parameter):

/sys/block/sda/queue/rotational=1

(but I'm not sure why "ceph osd tree" sees them as ssd)
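The mismatch described above can be checked directly: the kernel's per-device rotational flag is what mclock-style heuristics see, independently of the device class shown by ceph osd tree. A small sketch that reads that flag (the sysfs parameter is real; the helper name and function are illustrative only):

```python
from pathlib import Path

def is_rotational(dev: str, sysfs: str = "/sys/block") -> bool:
    """Return the kernel's rotational flag for a block device.

    QEMU virtio/virtio-scsi disks report 1 (rotational) by default
    unless rotational=0 is forced on the QEMU command line, which
    would explain falling back to the HDD capacity setting even
    though the backing storage is SSD.
    """
    flag = Path(sysfs, dev, "queue", "rotational").read_text().strip()
    return flag == "1"
```

For example, is_rotational("sda") returning True matches the /sys/block/sda/queue/rotational=1 observation in the comment above.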

