Bug #42446
mgr/dashboard: tasks.mgr.dashboard.test_ganesha.GaneshaTest.test_create_export fails in ceph-dashboard-pr-backend
Description
https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/33/console
...snip...
2019-10-23 14:26:16,342.342 INFO:__main__:Starting test: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)
2019-10-23 14:26:16,342.342 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:21,848.848 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:27,358.358 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:32,976.976 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:38,513.513 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-23 14:26:39,038.038 INFO:__main__:test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest) ... ERROR
2019-10-23 14:26:39,039.039 INFO:__main__:Stopped test: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest) in 22.696573s
2019-10-23 14:26:39,039.039 INFO:__main__:Running ['./bin/radosgw-admin', 'user', 'rm', '--uid', 'admin', '--purge-data']
2019-10-23 14:26:39,483.483 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'delete', 'ganesha', 'ganesha', '--yes-i-really-really-mean-it']
2019-10-23 14:26:40,181.181 INFO:tasks.mgr.dashboard.helper:command result:
2019-10-23 14:26:40,181.181 INFO:__main__:
2019-10-23 14:26:40,181.181 INFO:__main__:======================================================================
2019-10-23 14:26:40,182.182 INFO:__main__:ERROR: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)
2019-10-23 14:26:40,182.182 INFO:__main__:----------------------------------------------------------------------
2019-10-23 14:26:40,182.182 INFO:__main__:Traceback (most recent call last):
2019-10-23 14:26:40,182.182 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 154, in setUp
2019-10-23 14:26:40,182.182 INFO:__main__: self.wait_for_health_clear(20)
2019-10-23 14:26:40,182.182 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 131, in wait_for_health_clear
2019-10-23 14:26:40,183.183 INFO:__main__: self.wait_until_true(is_clear, timeout)
2019-10-23 14:26:40,183.183 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/ceph_test_case.py", line 163, in wait_until_true
2019-10-23 14:26:40,183.183 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2019-10-23 14:26:40,183.183 INFO:__main__:RuntimeError: Timed out after 20s
2019-10-23 14:26:40,183.183 INFO:__main__:
2019-10-23 14:26:40,183.183 INFO:__main__:----------------------------------------------------------------------
2019-10-23 14:26:40,184.184 INFO:__main__:Ran 48 tests in 501.380s
Updated by Alfonso Martínez over 4 years ago
More info: {u'status': u'HEALTH_WARN', u'checks': {u'POOL_PG_NUM_NOT_POWER_OF_TWO': {u'muted': False, u'severity': u'HEALTH_WARN', u'summary': {u'count': 1, u'message': u'1 pool(s) have non-power-of-two pg_num'}}}, u'mutes': []}
2019-10-24 07:01:25,321.321 INFO:__main__:Starting test: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)
2019-10-24 07:01:25,321.321 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-24 07:01:25,879.879 ERROR:tasks.ceph_test_case:{u'status': u'HEALTH_WARN', u'checks': {u'POOL_PG_NUM_NOT_POWER_OF_TWO': {u'muted': False, u'severity': u'HEALTH_WARN', u'summary': {u'count': 1, u'message': u'1 pool(s) have non-power-of-two pg_num'}}}, u'mutes': []}
2019-10-24 07:01:30,883.883 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-24 07:01:31,428.428 ERROR:tasks.ceph_test_case:{u'status': u'HEALTH_WARN', u'checks': {u'POOL_PG_NUM_NOT_POWER_OF_TWO': {u'muted': False, u'severity': u'HEALTH_WARN', u'summary': {u'count': 1, u'message': u'1 pool(s) have non-power-of-two pg_num'}}}, u'mutes': []}
2019-10-24 07:01:36,432.432 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-24 07:01:36,968.968 ERROR:tasks.ceph_test_case:{u'status': u'HEALTH_WARN', u'checks': {u'POOL_PG_NUM_NOT_POWER_OF_TWO': {u'muted': False, u'severity': u'HEALTH_WARN', u'summary': {u'count': 1, u'message': u'1 pool(s) have non-power-of-two pg_num'}}}, u'mutes': []}
2019-10-24 07:01:41,972.972 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-24 07:01:42,500.500 ERROR:tasks.ceph_test_case:{u'status': u'HEALTH_WARN', u'checks': {u'POOL_PG_NUM_NOT_POWER_OF_TWO': {u'muted': False, u'severity': u'HEALTH_WARN', u'summary': {u'count': 1, u'message': u'1 pool(s) have non-power-of-two pg_num'}}}, u'mutes': []}
2019-10-24 07:01:47,504.504 INFO:__main__:Running ['./bin/ceph', 'health', '--format=json']
2019-10-24 07:01:48,036.036 ERROR:tasks.ceph_test_case:{u'status': u'HEALTH_WARN', u'checks': {u'POOL_PG_NUM_NOT_POWER_OF_TWO': {u'muted': False, u'severity': u'HEALTH_WARN', u'summary': {u'count': 1, u'message': u'1 pool(s) have non-power-of-two pg_num'}}}, u'mutes': []}
2019-10-24 07:01:48,036.036 INFO:__main__:test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest) ... ERROR
2019-10-24 07:01:48,036.036 INFO:__main__:Stopped test: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest) in 22.71562s
2019-10-24 07:01:48,037.037 INFO:__main__:Running ['./bin/radosgw-admin', 'user', 'rm', '--uid', 'admin', '--purge-data']
2019-10-24 07:01:48,218.218 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'delete', 'ganesha', 'ganesha', '--yes-i-really-really-mean-it']
2019-10-24 07:01:49,514.514 INFO:tasks.mgr.dashboard.helper:command result:
2019-10-24 07:01:49,514.514 INFO:__main__:
2019-10-24 07:01:49,514.514 INFO:__main__:======================================================================
2019-10-24 07:01:49,515.515 INFO:__main__:ERROR: test_create_export (tasks.mgr.dashboard.test_ganesha.GaneshaTest)
2019-10-24 07:01:49,515.515 INFO:__main__:----------------------------------------------------------------------
2019-10-24 07:01:49,515.515 INFO:__main__:Traceback (most recent call last):
2019-10-24 07:01:49,515.515 INFO:__main__: File "/ceph/qa/tasks/mgr/dashboard/helper.py", line 154, in setUp
2019-10-24 07:01:49,515.515 INFO:__main__: self.wait_for_health_clear(20)
2019-10-24 07:01:49,515.515 INFO:__main__: File "/ceph/qa/tasks/ceph_test_case.py", line 133, in wait_for_health_clear
2019-10-24 07:01:49,516.516 INFO:__main__: self.wait_until_true(is_clear, timeout)
2019-10-24 07:01:49,516.516 INFO:__main__: File "/ceph/qa/tasks/ceph_test_case.py", line 165, in wait_until_true
2019-10-24 07:01:49,516.516 INFO:__main__: raise RuntimeError("Timed out after {0}s".format(elapsed))
2019-10-24 07:01:49,516.516 INFO:__main__:RuntimeError: Timed out after 20s
2019-10-24 07:01:49,516.516 INFO:__main__:
2019-10-24 07:01:49,516.516 INFO:__main__:----------------------------------------------------------------------
2019-10-24 07:01:49,517.517 INFO:__main__:Ran 1 test in 82.088s
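For context, the `RuntimeError: Timed out after 20s` in both tracebacks comes from `wait_until_true` in `qa/tasks/ceph_test_case.py`: `setUp` polls `ceph health` until the cluster reports `HEALTH_OK`, and gives up after the timeout because the `POOL_PG_NUM_NOT_POWER_OF_TWO` warning never clears. A minimal sketch of such a poll loop (function and exception names taken from the traceback; the body and the `period` parameter are assumptions, not the actual qa code) looks like:

```python
import time


def wait_until_true(condition, timeout, period=5):
    """Poll `condition` every `period` seconds until it returns True.

    Raises RuntimeError once `timeout` seconds have elapsed, mirroring
    the "Timed out after {0}s" message seen in the traceback.
    """
    elapsed = 0
    while not condition():
        if elapsed >= timeout:
            raise RuntimeError("Timed out after {0}s".format(elapsed))
        time.sleep(period)
        elapsed += period
    return elapsed
```

With a 20s timeout and a health warning that never goes away, the condition stays false on every poll and the helper raises, which is exactly the failure mode logged above.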
Updated by Alfonso Martínez over 4 years ago
- Related to Bug #42215: cluster [WRN] "Health check failed: 1 pool(s) have non-power-of-two pg_num (POOL_PG_NUM_NOT_POWER_OF_TWO)" in cluster log added
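The related `POOL_PG_NUM_NOT_POWER_OF_TWO` warning fires when a pool's `pg_num` is not a power of two. A small sketch of that condition and of picking a conforming value (helper names are illustrative, not Ceph code):

```python
def is_power_of_two(n):
    """True when n is a positive power of two (1, 2, 4, 8, ...)."""
    return n > 0 and (n & (n - 1)) == 0


def next_power_of_two(n):
    """Smallest power of two >= n, i.e. a pg_num that clears the warning."""
    p = 1
    while p < n:
        p *= 2
    return p
```

For example, a pool created with `pg_num` 12 triggers the warning; resizing it to 16 (e.g. `ceph osd pool set <pool> pg_num 16`) would presumably let the cluster return to `HEALTH_OK` so that `wait_for_health_clear` can succeed.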
Updated by Tatjana Dehler over 4 years ago
- Related to Bug #42512: mgr/dashboard: tasks.mgr.dashboard.test_rbd_mirroring suite is failing added
Updated by Laura Paduano over 4 years ago
mgr.x.log:
2019-11-05T09:19:35.903+0000 7fa7ea875700 10 monclient: _send_mon_message to mon.c at v2:127.0.0.1:40023/0
2019-11-05T09:19:35.903+0000 7fa7ea875700 1 -- 127.0.0.1:0/21007 --> [v2:127.0.0.1:40023/0,v1:127.0.0.1:40024/0] -- mgrbeacon mgr.x(27547b9b-bc9a-423a-b5eb-6fc270089bc0,5111, , 0) v9 -- 0x70badc0 con 0x2fbb000
2019-11-05T09:19:37.659+0000 7fa7f1082700 1 -- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad000 msgr2=0x70b6b00 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_bulk peer close file descriptor 37
2019-11-05T09:19:37.659+0000 7fa7f1082700 1 -- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad000 msgr2=0x70b6b00 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until read failed
2019-11-05T09:19:37.659+0000 7fa7f1082700 1 --2- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad000 0x70b6b00 crc :-1 s=READY pgs=11 cs=0 l=1 rx=0 tx=0).handle_read_frame_preamble_main read frame length and tag failed r=-1 ((1) Operation not permitted)
2019-11-05T09:19:37.659+0000 7fa7f1082700 1 --2- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad000 0x70b6b00 crc :-1 s=READY pgs=11 cs=0 l=1 rx=0 tx=0).stop
2019-11-05T09:19:37.659+0000 7fa7ee07c700 0 client.0 ms_handle_reset on v2:127.0.0.1:6800/21047
2019-11-05T09:19:37.659+0000 7fa7ee07c700 1 -- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad000 msgr2=0x70b6b00 unknown :-1 s=STATE_CLOSED l=1).mark_down
2019-11-05T09:19:37.659+0000 7fa7ee07c700 1 --2- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad000 0x70b6b00 unknown :-1 s=CLOSED pgs=11 cs=0 l=1 rx=0 tx=0).stop
2019-11-05T09:19:37.659+0000 7fa7ee07c700 1 --2- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad400 0x70b7080 unknown :-1 s=NONE pgs=0 cs=0 l=0 rx=0 tx=0).connect
2019-11-05T09:19:37.659+0000 7fa7ee07c700 1 -- 127.0.0.1:0/21007 --> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] -- mgropen(unknown.x) v3 -- 0x71e2000 con 0x70ad400
2019-11-05T09:19:37.659+0000 7fa7f1082700 1 -- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad400 msgr2=0x70b7080 unknown :-1 s=STATE_CONNECTING_RE l=0).process reconnect failed to v2:127.0.0.1:6800/21047
2019-11-05T09:19:37.659+0000 7fa7f1082700 1 --2- 127.0.0.1:0/21007 >> [v2:127.0.0.1:6800/21047,v1:127.0.0.1:6801/21047] conn(0x70ad400 0x70b7080 unknown :-1 s=START_CONNECT pgs=0 cs=0 l=0 rx=0 tx=0)._fault waiting 0.200000
2019-11-05T09:19:37.690+0000 7fa7ef07e700 -1 received signal: Terminated from python ../qa/tasks/vstart_runner.py --ignore-missing-binaries tasks.mgr.test_dashboard tasks.mgr.dashboard.test_auth tasks.mgr.dashboard.test_cephfs tasks.mgr.dashboard.test_cluster_configuration tasks.mgr.dashboard.test_erasure_code_profile tasks.mgr.dashboard.test_ganesha tasks.mgr.dashboard.test_health tasks.mgr.dashboard.test_host tasks.mgr.dashboard.test_logs tasks.mgr.dashboard.test_mgr_module tasks.mgr.dashboard.test_monitor tasks.mgr.dashboard.test_orchestrator tasks.mgr.dashboard.test_osd tasks.mgr.dashboard.test_perf_counters tasks.mgr.dashboard.test_pool tasks.mgr.dashboard.test_rbd_mirroring tasks.mgr.dashboard.test_rbd tasks.mgr.dashboard.test_requests tasks.mgr.dashboard.test_rgw tasks.mgr.dashboard.test_role tasks.mgr.dashboard.test_settings tasks.mgr.dashboard.test_summary tasks.mgr.dashboard.test_user tasks.mgr.test_module_selftest (PID: 15524) UID: 1001
2019-11-05T09:19:37.690+0000 7fa7ef07e700 -1 mgr handle_signal *** Got signal Terminated ***
Updated by Ernesto Puerta about 3 years ago
- Project changed from mgr to Dashboard
- Category changed from 144 to Component - NFS