Bug #61243

closed

Bug #58945: qa: xfstests-dev's generic test suite has 20 failures with fuse client

qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed

Added by Kotresh Hiremath Ravishankar 11 months ago. Updated 5 months ago.

Status:
Duplicate
Priority:
Normal
Assignee:
Category:
-
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

Job: fs/functional/{begin/{0-install 1-ceph 2-logrotate} clusters/1a3s-mds-4c-client conf/{client mds mon osd} distro/{rhel_8} mount/fuse objectstore/bluestore-bitmap overrides/{ignorelist_health ignorelist_wrongly_marked_down no_client_pidfile} tasks/xfstests-dev}

Job Link: https://pulpito.ceph.com/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/7270142/

Log: http://qa-proxy.ceph.com/teuthology/yuriw-2023-05-10_18:21:40-fs-wip-yuri7-testing-2023-05-10-0742-distro-default-smithi/7270142/teuthology.log

Sentry event: https://sentry.ceph.com/organizations/ceph/?query=e6e39b552a1841ebb47955dc68c99360

Failure Reason:
Test failure: test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev)

2023-05-11T12:41:06.933 INFO:tasks.cephfs_test_runner:======================================================================
2023-05-11T12:41:06.933 INFO:tasks.cephfs_test_runner:FAIL: test_generic (tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev)
2023-05-11T12:41:06.934 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2023-05-11T12:41:06.934 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2023-05-11T12:41:06.934 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_efff14fe2c359453eb8f53b726b8f9c12bdb8748/qa/tasks/cephfs/tests_from_xfstests_dev.py", line 12, in test_generic
2023-05-11T12:41:06.934 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_efff14fe2c359453eb8f53b726b8f9c12bdb8748/qa/tasks/cephfs/xfstests_dev.py", line 385, in run_generic_tests
2023-05-11T12:41:06.935 INFO:tasks.cephfs_test_runner:  File "/home/teuthworker/src/github.com_ceph_ceph-c_efff14fe2c359453eb8f53b726b8f9c12bdb8748/qa/tasks/cephfs/xfstests_dev.py", line 377, in run_testdir
2023-05-11T12:41:06.935 INFO:tasks.cephfs_test_runner:AssertionError: 17 != 0
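The "AssertionError: 17 != 0" comes from the QA wrapper asserting that the number of failed xfstests is zero after a test-directory run. A minimal hypothetical sketch of that pattern (the function and variable names are illustrative, not the actual qa/tasks/cephfs code; xfstests' ./check does print a "Failed N of M tests" summary line on failure):

```python
import re

def count_failures(check_output: str) -> int:
    # xfstests' ./check prints a summary such as:
    #   Failures: generic/020 generic/126 ...
    #   Failed 17 of 412 tests
    m = re.search(r'Failed (\d+) of \d+ tests', check_output)
    return int(m.group(1)) if m else 0

def run_testdir(check_output: str) -> None:
    # Fail the whole QA run if any individual test failed,
    # producing an error of the form "AssertionError: 17 != 0".
    failed = count_failures(check_output)
    assert failed == 0, f'{failed} != 0'

summary = ("Ran: 412 tests\n"
           "Failures: generic/020 generic/126\n"
           "Failed 17 of 412 tests")
```

With the sample summary above, `run_testdir(summary)` raises `AssertionError: 17 != 0`, matching the traceback's failure mode.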

PRs included in the testing batch:

https://github.com/ceph/ceph/pull/50312
https://github.com/ceph/ceph/pull/50876
https://github.com/ceph/ceph/pull/51004

None of these PRs look related to the failure; it needs further investigation.


Subtasks 4 (4 open, 0 closed)

Bug #61496: ceph-fuse: generic/020 failed with "No space left on device" (Fix Under Review, Xiubo Li)

Bug #61501: ceph-fuse: generic/126 failed because a file couldn't be executed without the 'r' mode (Fix Under Review, Xiubo Li)

Bug #61551: ceph-fuse: generic/192 failed with "delta1 is NOT in range 5 .. 7" (Fix Under Review, Xiubo Li)

Bug #61552: ceph-fuse: generic/193 failed because chmod a+r by a non-root user on a root-owned file succeeded (Fix Under Review, Xiubo Li)
#1

Updated by Kotresh Hiremath Ravishankar 11 months ago

The cluster log shows the filesystem going degraded/offline and the health checks clearing again before the test ends, followed by a bunch of OSD failure messages after the test ended.

1683808862.486255 mgr.y (mgr.4104) 12231 : cluster 0 pgmap v12255: 105 pgs: 105 active+clean; 44 GiB data, 52 GiB used, 668 GiB / 720 GiB avail; 102 B/s wr, 12 op/s
1683808862.88876 mon.a (mon.0) 497 : cluster 3 Health check failed: 1 filesystem is degraded (FS_DEGRADED)
1683808862.8887892 mon.a (mon.0) 498 : cluster 4 Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
1683808862.8947642 mon.a (mon.0) 500 : cluster 0 mds.? [v2:172.21.15.72:6835/2415202150,v1:172.21.15.72:6837/2415202150] up:boot
1683808862.894821 mon.a (mon.0) 501 : cluster 0 fsmap cephfs:0/1 4 up:standby, 1 failed
1683808863.893283 mon.a (mon.0) 504 : cluster 1 Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
1683808863.893308 mon.a (mon.0) 505 : cluster 1 Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
1683808863.8933294 mon.a (mon.0) 506 : cluster 1 Cluster is now healthy
1683808863.8991568 mon.a (mon.0) 508 : cluster 0 fsmap  4 up:standby
1683808864.4866972 mgr.y (mgr.4104) 12232 : cluster 0 pgmap v12256: 105 pgs: 105 active+clean; 44 GiB data, 52 GiB used, 668 GiB / 720 GiB avail; 0 B/s wr, 13 op/s
1683808864.9116302 mon.a (mon.0) 511 : cluster 0 osdmap e39: 8 total, 8 up, 8 in
1683808865.9139888 mon.a (mon.0) 514 : cluster 0 osdmap e40: 8 total, 8 up, 8 in
1683808866.8799782 client.admin (client.?) 0 : cluster 1 Ended test tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev.test_generic
...
1683808917.6982274 mon.a (mon.0) 527 : cluster 0 osd.0 reported immediately failed by osd.1
1683808917.6983435 mon.a (mon.0) 528 : cluster 0 osd.0 reported immediately failed by osd.4
1683808917.6984441 mon.a (mon.0) 529 : cluster 0 osd.0 reported immediately failed by osd.5
1683808917.6985614 mon.a (mon.0) 530 : cluster 0 osd.0 reported immediately failed by osd.4
1683808917.6986647 mon.a (mon.0) 531 : cluster 0 osd.0 reported immediately failed by osd.3
1683808917.6987848 mon.a (mon.0) 532 : cluster 0 osd.0 reported immediately failed by osd.7
1683808917.6989002 mon.a (mon.0) 533 : cluster 0 osd.0 reported immediately failed by osd.5
1683808917.6990209 mon.a (mon.0) 534 : cluster 0 osd.0 reported immediately failed by osd.3
1683808917.6991365 mon.a (mon.0) 535 : cluster 0 osd.0 reported immediately failed by osd.7
1683808917.8331187 mon.a (mon.0) 552 : cluster 0 osd.2 reported immediately failed by osd.4
1683808917.8332915 mon.a (mon.0) 553 : cluster 0 osd.2 reported immediately failed by osd.5
1683808917.8334563 mon.a (mon.0) 554 : cluster 0 osd.2 reported immediately failed by osd.6
1683808917.8336303 mon.a (mon.0) 555 : cluster 0 osd.2 reported immediately failed by osd.7
1683808917.833843 mon.a (mon.0) 556 : cluster 0 osd.2 reported immediately failed by osd.4
1683808917.8340497 mon.a (mon.0) 557 : cluster 0 osd.2 reported immediately failed by osd.5
1683808917.8342583 mon.a (mon.0) 558 : cluster 0 osd.2 reported immediately failed by osd.6
1683808917.8344486 mon.a (mon.0) 559 : cluster 0 osd.2 reported immediately failed by osd.7
1683808917.89796 mon.a (mon.0) 560 : cluster 0 osd.0 reported immediately failed by osd.7
1683808917.8981495 mon.a (mon.0) 561 : cluster 0 osd.0 reported immediately failed by osd.4
1683808917.8983102 mon.a (mon.0) 562 : cluster 0 osd.0 reported immediately failed by osd.5
1683808917.8984678 mon.a (mon.0) 563 : cluster 0 osd.0 reported immediately failed by osd.4
1683808917.8986285 mon.a (mon.0) 564 : cluster 0 osd.0 reported immediately failed by osd.5
1683808917.8987913 mon.a (mon.0) 565 : cluster 0 osd.0 reported immediately failed by osd.6
1683808917.8989465 mon.a (mon.0) 566 : cluster 0 osd.0 reported immediately failed by osd.7
1683808917.8991039 mon.a (mon.0) 567 : cluster 0 osd.0 reported immediately failed by osd.6
1683808917.9181037 mon.a (mon.0) 568 : cluster 0 osd.3 reported immediately failed by osd.5

#2

Updated by Milind Changire 11 months ago

  • Assignee set to Xiubo Li

Please disable the tests that need to be disabled.
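One common way to disable known-bad tests in xfstests-dev is an exclude list handed to its ./check script via the -E option; a minimal sketch (the file name and test list here are hypothetical, taken from the subtasks above, and the actual QA wiring may differ):

```python
# Hypothetical exclude list for the generic tests currently failing with
# ceph-fuse. xfstests-dev's ./check skips every test named in a file
# passed via its -E option, e.g.:
#   ./check -E ceph-fuse.exclude -g generic/quick
FAILING_TESTS = ["generic/020", "generic/126", "generic/192", "generic/193"]

with open("ceph-fuse.exclude", "w") as f:
    f.write("\n".join(FAILING_TESTS) + "\n")
```

This keeps the rest of the generic suite running while the individual failures are investigated in their subtasks.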

#3

Updated by Xiubo Li 11 months ago

  • Status changed from New to In Progress

#5

Updated by Xiubo Li 11 months ago

  • Status changed from In Progress to Duplicate
  • Parent task set to #58945

Rishabh,

Currently I have only fixed 4 of the 17 failures and haven't started on the other 13 yet. As discussed on Google Chat, I will leave the rest to you.

Thanks.
