Bug #45806

qa/task/vstart_runner.py: setting the network namespace "ceph-ns--tmp-tmpq1pg2pz7-mnt.0" failed: Invalid argument

Added by Xiubo Li about 1 month ago. Updated about 1 month ago.

Status: Fix Under Review
Priority: Normal
Assignee:
Category: -
Target version:
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): qa-suite
Labels (FS): qa
Pull request ID:
Crash signature:

Description

(virtualenv) 18:46:01 [mchangir@indraprastha build] $ python3 ../qa/tasks/vstart_runner.py tasks.cephfs.test_scrub.TestScrub.test_scrub_backtrace_for_new_files
../qa/tasks/vstart_runner.py:1234: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if IP(opt_brxnet).iptype() is 'PUBLIC':
Using guessed paths /home/mchangir/work/mchangir-ceph.git/build/lib/ ['/home/mchangir/work/mchangir-ceph.git/qa', '/home/mchangir/work/mchangir-ceph.git/build/lib/cython_modules/lib.3', '/home/mchangir/work/mchangir-ceph.git/src/pybind']
../qa/tasks/vstart_runner.py:1234: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if IP(opt_brxnet).iptype() is 'PUBLIC':
Using guessed paths /home/mchangir/work/mchangir-ceph.git/build/lib/ ['/home/mchangir/work/mchangir-ceph.git/qa', '/home/mchangir/work/mchangir-ceph.git/build/lib/cython_modules/lib.3', '/home/mchangir/work/mchangir-ceph.git/src/pybind']
2020-05-27 18:46:10,110.110 INFO:__main__:Executing modules: ['tasks.cephfs.test_scrub.TestScrub.test_scrub_backtrace_for_new_files']
2020-05-27 18:46:10,115.115 INFO:__main__:Loaded: [<unittest.suite.TestSuite tests=[<tasks.cephfs.test_scrub.TestScrub testMethod=test_scrub_backtrace_for_new_files>]>]
2020-05-27 18:46:10,115.115 INFO:__main__:e: <unittest.suite.TestSuite tests=[<unittest.suite.TestSuite tests=[<tasks.cephfs.test_scrub.TestScrub testMethod=test_scrub_backtrace_for_new_files>]>]>
2020-05-27 18:46:10,115.115 INFO:__main__:e: <unittest.suite.TestSuite tests=[<tasks.cephfs.test_scrub.TestScrub testMethod=test_scrub_backtrace_for_new_files>]>
2020-05-27 18:46:10,115.115 INFO:__main__:Running ['ps', '-u25405']
2020-05-27 18:46:10,130.130 WARNING:__main__:Killing stray process  258133 pts/1    00:00:00 ceph-mds
2020-05-27 18:46:10,132.132 INFO:__main__:Running ['./bin/ceph', 'auth', 'get-or-create', 'client.0', 'osd', 'allow rw', 'mds', 'allow', 'mon', 'allow r']
2020-05-27 18:46:10,470.470 INFO:__main__:Running ['./bin/ceph', 'tell', 'osd.*', 'injectargs', '--osd-mon-report-interval', '5']
2020-05-27 18:46:10,696.696 INFO:__main__:Searching for existing instance osd_mon_report_interval/osd
2020-05-27 18:46:10,697.697 INFO:__main__:Searching for existing instance osd_mon_report_interval/osd
2020-05-27 18:46:10,697.697 INFO:__main__:Searching for existing instance mds log max segments/mds
2020-05-27 18:46:10,697.697 INFO:__main__:Searching for existing instance osd_mon_report_interval/osd
2020-05-27 18:46:10,698.698 INFO:__main__:Searching for existing instance mds log max segments/mds
2020-05-27 18:46:10,698.698 INFO:__main__:Searching for existing instance mds root ino uid/global
2020-05-27 18:46:10,698.698 INFO:__main__:Searching for existing instance osd_mon_report_interval/osd
2020-05-27 18:46:10,698.698 INFO:__main__:Searching for existing instance mds log max segments/mds
2020-05-27 18:46:10,699.699 INFO:__main__:Searching for existing instance mds root ino uid/global
2020-05-27 18:46:10,699.699 INFO:__main__:Searching for existing instance mds root ino gid/global
2020-05-27 18:46:10,699.699 INFO:__main__:Executing modules: ['tasks.cephfs.test_scrub.TestScrub.test_scrub_backtrace_for_new_files']
2020-05-27 18:46:10,700.700 INFO:__main__:Loaded: [<unittest.suite.TestSuite tests=[<tasks.cephfs.test_scrub.TestScrub testMethod=test_scrub_backtrace_for_new_files>]>]
2020-05-27 18:46:10,700.700 INFO:__main__:e: <unittest.suite.TestSuite tests=[<unittest.suite.TestSuite tests=[<tasks.cephfs.test_scrub.TestScrub testMethod=test_scrub_backtrace_for_new_files>]>]>
2020-05-27 18:46:10,700.700 INFO:__main__:e: <unittest.suite.TestSuite tests=[<tasks.cephfs.test_scrub.TestScrub testMethod=test_scrub_backtrace_for_new_files>]>
2020-05-27 18:46:10,700.700 INFO:__main__:Disabling 0 tests because of is_for_teuthology or needs_trimming
2020-05-27 18:46:10,700.700 INFO:__main__:Starting test: test_scrub_backtrace_for_new_files (tasks.cephfs.test_scrub.TestScrub)
2020-05-27 18:46:10,700.700 INFO:__main__:Running ['./bin/ceph', 'log', 'Starting test tasks.cephfs.test_scrub.TestScrub.test_scrub_backtrace_for_new_files']
2020-05-27 18:46:11,432.432 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json-pretty']
2020-05-27 18:46:11,703.703 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-05-27 18:46:11,998.998 INFO:__main__:Running ['./bin/ceph', 'fs', 'fail', 'a']
2020-05-27 18:46:12,509.509 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-05-27 18:46:12,810.810 INFO:__main__:Running ['./bin/ceph', 'fs', 'rm', 'a', '--yes-i-really-mean-it']
2020-05-27 18:46:13,515.515 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'delete', 'cephfs.a.meta', 'cephfs.a.meta', '--yes-i-really-really-mean-it']
2020-05-27 18:46:14,570.570 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'delete', 'cephfs.a.data', 'cephfs.a.data', '--yes-i-really-really-mean-it']
2020-05-27 18:46:15,577.577 INFO:__main__:Running ['ps', 'ww', '-u25405']
2020-05-27 18:46:15,595.595 INFO:__main__:No match for mds a
2020-05-27 18:46:15,596.596 INFO:__main__:Running ['./bin/./ceph-mds', '-i', 'a']
2020-05-27 18:46:15,681.681 INFO:__main__:Running ['./bin/ceph', 'osd', 'blacklist', 'clear']
2020-05-27 18:46:16,623.623 INFO:tasks.cephfs.cephfs_test_case:['0']
2020-05-27 18:46:16,623.623 INFO:__main__:Running ['./bin/ceph', 'auth', 'ls', '--format=json-pretty']
2020-05-27 18:46:16,927.927 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.bootstrap-mds']
2020-05-27 18:46:17,259.259 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.bootstrap-mgr']
2020-05-27 18:46:17,585.585 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.bootstrap-osd']
2020-05-27 18:46:17,905.905 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.bootstrap-rbd']
2020-05-27 18:46:18,235.235 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.bootstrap-rbd-mirror']
2020-05-27 18:46:18,566.566 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.bootstrap-rgw']
2020-05-27 18:46:18,895.895 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.fs']
2020-05-27 18:46:19,216.216 INFO:__main__:Running ['./bin/ceph', 'auth', 'del', 'client.fs_a']
2020-05-27 18:46:19,539.539 INFO:__main__:Discovered MDS IDs: ['a']
2020-05-27 18:46:19,539.539 INFO:__main__:Running ['./bin/ceph', '--format=json-pretty', 'osd', 'lspools']
2020-05-27 18:46:19,874.874 INFO:tasks.cephfs.filesystem:Creating filesystem 'cephfs'
2020-05-27 18:46:19,874.874 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'create', 'cephfs_metadata', '8']
2020-05-27 18:46:20,790.790 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'create', 'cephfs_data', '8']
2020-05-27 18:46:21,824.824 INFO:__main__:Running ['./bin/ceph', 'fs', 'new', 'cephfs', 'cephfs_metadata', 'cephfs_data', '--force']
2020-05-27 18:46:22,163.163 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json']
2020-05-27 18:46:22,431.431 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json']
2020-05-27 18:46:22,700.700 INFO:__main__:Running ['./bin/ceph', 'fs', 'set', 'cephfs', 'standby_count_wanted', '0']
2020-05-27 18:46:23,204.204 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-05-27 18:46:23,524.524 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json']
2020-05-27 18:46:23,798.798 INFO:__main__:Running ['./bin/ceph', 'auth', 'caps', 'client.0', 'mds', 'allow', 'mon', 'allow r', 'osd', 'allow rw pool=cephfs_data']
2020-05-27 18:46:24,132.132 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-05-27 18:46:24,430.430 INFO:tasks.cephfs.filesystem:are_daemons_healthy: mds map: {'epoch': 9, 'flags': 18, 'ever_allowed_features': 0, 'explicitly_allowed_features': 0, 'created': '2020-05-27T18:46:22.123624+0530', 'modified': '2020-05-27T18:46:23.135893+0530', 'tableserver': 0, 'root': 0, 'session_timeout': 60, 'session_autoclose': 300, 'min_compat_client': '0 (unknown)', 'max_file_size': 1099511627776, 'last_failure': 0, 'last_failure_osd_epoch': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}, 'max_mds': 1, 'in': [0], 'up': {'mds_0': 4166}, 'failed': [], 'damaged': [], 'stopped': [], 'info': {'gid_4166': {'gid': 4166, 'name': 'a', 'rank': 0, 'incarnation': 8, 'state': 'up:active', 'state_seq': 3, 'addr': '192.168.1.100:6811/2476124280', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.1.100:6810', 'nonce': 2476124280}, {'type': 'v1', 'addr': '192.168.1.100:6811', 'nonce': 2476124280}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138292837744639, 'flags': 0}}, 'data_pools': [5], 'metadata_pool': 4, 'enabled': True, 'fs_name': 'cephfs', 'balancer': '', 'standby_count_wanted': 0}
2020-05-27 18:46:24,430.430 INFO:tasks.cephfs.filesystem:are_daemons_healthy: 1/1
2020-05-27 18:46:24,430.430 INFO:__main__:Running ['./bin/ceph', 'daemon', 'mds.a', 'status']
2020-05-27 18:46:24,602.602 INFO:tasks.cephfs.filesystem:_json_asok output: b'{\n    "cluster_fsid": "2e803508-fe98-42da-beeb-e56aade02eba",\n    "whoami": 0,\n    "id": 4166,\n    "want_state": "up:active",\n    "state": "up:active",\n    "rank_uptime": 2.4572795360000002,\n    "mdsmap_epoch": 9,\n    "osdmap_epoch": 22,\n    "osdmap_epoch_barrier": 21,\n    "uptime": 8.9165379510000005\n}\n'
2020-05-27 18:46:24,603.603 INFO:__main__:Discovered MDS IDs: ['a']
2020-05-27 18:46:24,603.603 INFO:__main__:Wait for MDS to reach steady state...
2020-05-27 18:46:24,603.603 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-05-27 18:46:24,897.897 INFO:tasks.cephfs.filesystem:are_daemons_healthy: mds map: {'epoch': 9, 'flags': 18, 'ever_allowed_features': 0, 'explicitly_allowed_features': 0, 'created': '2020-05-27T18:46:22.123624+0530', 'modified': '2020-05-27T18:46:23.135893+0530', 'tableserver': 0, 'root': 0, 'session_timeout': 60, 'session_autoclose': 300, 'min_compat_client': '0 (unknown)', 'max_file_size': 1099511627776, 'last_failure': 0, 'last_failure_osd_epoch': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}, 'max_mds': 1, 'in': [0], 'up': {'mds_0': 4166}, 'failed': [], 'damaged': [], 'stopped': [], 'info': {'gid_4166': {'gid': 4166, 'name': 'a', 'rank': 0, 'incarnation': 8, 'state': 'up:active', 'state_seq': 3, 'addr': '192.168.1.100:6811/2476124280', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.1.100:6810', 'nonce': 2476124280}, {'type': 'v1', 'addr': '192.168.1.100:6811', 'nonce': 2476124280}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138292837744639, 'flags': 0}}, 'data_pools': [5], 'metadata_pool': 4, 'enabled': True, 'fs_name': 'cephfs', 'balancer': '', 'standby_count_wanted': 0}
2020-05-27 18:46:24,897.897 INFO:tasks.cephfs.filesystem:are_daemons_healthy: 1/1
2020-05-27 18:46:24,897.897 INFO:__main__:Running ['./bin/ceph', 'daemon', 'mds.a', 'status']
2020-05-27 18:46:25,058.058 INFO:tasks.cephfs.filesystem:_json_asok output: b'{\n    "cluster_fsid": "2e803508-fe98-42da-beeb-e56aade02eba",\n    "whoami": 0,\n    "id": 4166,\n    "want_state": "up:active",\n    "state": "up:active",\n    "rank_uptime": 2.9138192859999998,\n    "mdsmap_epoch": 9,\n    "osdmap_epoch": 22,\n    "osdmap_epoch_barrier": 21,\n    "uptime": 9.3730771379999993\n}\n'
2020-05-27 18:46:25,059.059 INFO:__main__:Ready to start LocalFuseMount...
2020-05-27 18:46:25,059.059 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/tmp/tmpz_tteuil/mnt.0'
2020-05-27 18:46:25,059.059 INFO:__main__:Running ['ip', 'addr']
2020-05-27 18:46:25,062.062 INFO:__main__:Running ['echo', '1', '|', 'sudo', 'tee', '/proc/sys/net/ipv4/ip_forward']
2020-05-27 18:46:25,067.067 INFO:__main__:Running ['route']
2020-05-27 18:46:26,673.673 INFO:__main__:Running ['sudo', 'bash', '-c', 'iptables -A FORWARD -o wlp61s0 -i ceph-brx -j ACCEPT']
2020-05-27 18:46:26,717.717 INFO:__main__:Running ['sudo', 'bash', '-c', 'iptables -A FORWARD -i wlp61s0 -o ceph-brx -j ACCEPT']
2020-05-27 18:46:26,746.746 INFO:__main__:Running ['sudo', 'bash', '-c', 'iptables -t nat -A POSTROUTING -s 192.168.255.254/16 -o wlp61s0 -j MASQUERADE']
2020-05-27 18:46:26,773.773 INFO:__main__:Running ['ip', 'netns', 'list']
2020-05-27 18:46:26,779.779 INFO:__main__:Running ['ip', 'netns', 'list-id']
2020-05-27 18:46:26,784.784 INFO:__main__:Running ['ip', 'netns', 'list-id']
2020-05-27 18:46:26,788.788 INFO:__main__:Running ['ip', 'netns', 'list-id']
2020-05-27 18:46:26,792.792 INFO:__main__:Running ['ip', 'netns', 'list-id']
2020-05-27 18:46:26,796.796 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns add ceph-ns--tmp-tmpz_tteuil-mnt.0']
2020-05-27 18:46:26,825.825 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns set ceph-ns--tmp-tmpz_tteuil-mnt.0 3']
2020-05-27 18:46:26,852.852 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpx12z69wa-mnt.0 ip addr']
2020-05-27 18:46:26,880.880 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpj4n5ohuy-mnt.0 ip addr']
2020-05-27 18:46:26,904.904 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpqpmy1kuh-mnt.0 ip addr']
2020-05-27 18:46:26,925.925 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpq1pg2pz7-mnt.0 ip addr']
setting the network namespace "ceph-ns--tmp-tmpq1pg2pz7-mnt.0" failed: Invalid argument
2020-05-27 18:46:26,945.945 INFO:__main__:test_scrub_backtrace_for_new_files (tasks.cephfs.test_scrub.TestScrub) ... ERROR
2020-05-27 18:46:26,945.945 INFO:__main__:Stopped test: test_scrub_backtrace_for_new_files (tasks.cephfs.test_scrub.TestScrub) in 16.245124s
2020-05-27 18:46:26,945.945 INFO:__main__:
2020-05-27 18:46:26,945.945 INFO:__main__:======================================================================
2020-05-27 18:46:26,946.946 INFO:__main__:ERROR: test_scrub_backtrace_for_new_files (tasks.cephfs.test_scrub.TestScrub)
2020-05-27 18:46:26,946.946 INFO:__main__:----------------------------------------------------------------------
2020-05-27 18:46:26,946.946 INFO:__main__:Traceback (most recent call last):
2020-05-27 18:46:26,946.946 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/test_scrub.py", line 118, in setUp
2020-05-27 18:46:26,946.946 INFO:__main__:    super().setUp()
2020-05-27 18:46:26,946.946 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/cephfs_test_case.py", line 145, in setUp
2020-05-27 18:46:26,946.946 INFO:__main__:    self.mounts[i].mount_wait()
2020-05-27 18:46:26,946.946 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/mount.py", line 339, in mount_wait
2020-05-27 18:46:26,946.946 INFO:__main__:    self.mount(mount_path=mount_path, mount_fs_name=mount_fs_name, mountpoint=mountpoint,
2020-05-27 18:46:26,946.946 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 706, in mount
2020-05-27 18:46:26,946.946 INFO:__main__:    self.setup_netns()
2020-05-27 18:46:26,946.946 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/mount.py", line 299, in setup_netns
2020-05-27 18:46:26,946.946 INFO:__main__:    self._setup_netns()
2020-05-27 18:46:26,946.946 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/mount.py", line 188, in _setup_netns
2020-05-27 18:46:26,946.946 INFO:__main__:    p = self.client_remote.run(args=args, stderr=StringIO(),
2020-05-27 18:46:26,946.946 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 356, in run
2020-05-27 18:46:26,946.946 INFO:__main__:    return self._do_run(**kwargs)
2020-05-27 18:46:26,946.946 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 420, in _do_run
2020-05-27 18:46:26,946.946 INFO:__main__:    proc.wait()
2020-05-27 18:46:26,947.947 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 203, in wait
2020-05-27 18:46:26,947.947 INFO:__main__:    raise CommandFailedError(self.args, self.exitstatus)
2020-05-27 18:46:26,947.947 INFO:__main__:teuthology.exceptions.CommandFailedError: Command failed with status 255: ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpq1pg2pz7-mnt.0 ip addr']
2020-05-27 18:46:26,947.947 INFO:__main__:
2020-05-27 18:46:26,947.947 INFO:__main__:----------------------------------------------------------------------
2020-05-27 18:46:26,947.947 INFO:__main__:Ran 1 test in 16.245s
2020-05-27 18:46:26,947.947 INFO:__main__:
2020-05-27 18:46:26,947.947 INFO:__main__:FAILED (errors=1)
2020-05-27 18:46:26,947.947 INFO:__main__:
2020-05-27 18:46:26,947.947 INFO:__main__:======================================================================
2020-05-27 18:46:26,947.947 INFO:__main__:ERROR: test_scrub_backtrace_for_new_files (tasks.cephfs.test_scrub.TestScrub)
2020-05-27 18:46:26,947.947 INFO:__main__:----------------------------------------------------------------------
2020-05-27 18:46:26,947.947 INFO:__main__:Traceback (most recent call last):
2020-05-27 18:46:26,947.947 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/test_scrub.py", line 118, in setUp
2020-05-27 18:46:26,947.947 INFO:__main__:    super().setUp()
2020-05-27 18:46:26,947.947 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/cephfs_test_case.py", line 145, in setUp
2020-05-27 18:46:26,947.947 INFO:__main__:    self.mounts[i].mount_wait()
2020-05-27 18:46:26,947.947 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/mount.py", line 339, in mount_wait
2020-05-27 18:46:26,948.948 INFO:__main__:    self.mount(mount_path=mount_path, mount_fs_name=mount_fs_name, mountpoint=mountpoint,
2020-05-27 18:46:26,948.948 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 706, in mount
2020-05-27 18:46:26,948.948 INFO:__main__:    self.setup_netns()
2020-05-27 18:46:26,948.948 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/mount.py", line 299, in setup_netns
2020-05-27 18:46:26,948.948 INFO:__main__:    self._setup_netns()
2020-05-27 18:46:26,948.948 INFO:__main__:  File "/home/mchangir/work/mchangir-ceph.git/qa/tasks/cephfs/mount.py", line 188, in _setup_netns
2020-05-27 18:46:26,948.948 INFO:__main__:    p = self.client_remote.run(args=args, stderr=StringIO(),
2020-05-27 18:46:26,948.948 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 356, in run
2020-05-27 18:46:26,948.948 INFO:__main__:    return self._do_run(**kwargs)
2020-05-27 18:46:26,948.948 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 420, in _do_run
2020-05-27 18:46:26,948.948 INFO:__main__:    proc.wait()
2020-05-27 18:46:26,948.948 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 203, in wait
2020-05-27 18:46:26,948.948 INFO:__main__:    raise CommandFailedError(self.args, self.exitstatus)
2020-05-27 18:46:26,948.948 INFO:__main__:teuthology.exceptions.CommandFailedError: Command failed with status 255: ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpq1pg2pz7-mnt.0 ip addr']
2020-05-27 18:46:26,948.948 INFO:__main__:
(virtualenv) 18:46:27 [mchangir@indraprastha build] $ 
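Incidentally, the SyntaxWarning at the top of the log points at a separate small bug in vstart_runner.py: `iptype() is 'PUBLIC'` compares object identity rather than string value, so it can evaluate to False even when the text matches. A minimal illustration (not the actual fix in the tree):

```python
# `is` tests object identity; `==` tests equality. A string built at
# runtime is generally a different object from the interned 'PUBLIC'
# literal, so the identity check is unreliable even when the text matches.
runtime_value = ''.join(['PUB', 'LIC'])   # builds a fresh 'PUBLIC' string

print(runtime_value == 'PUBLIC')          # True: the intended value comparison
print(runtime_value is 'PUBLIC')          # identity comparison; not guaranteed True
```

The warning itself suggests the correct change: `if IP(opt_brxnet).iptype() == 'PUBLIC':`.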

History

#1 Updated by Xiubo Li about 1 month ago

  • Pull request ID set to 35283

#2 Updated by Xiubo Li about 1 month ago

  • Status changed from New to Fix Under Review

If a previous test case failed, its network namespaces will not have been removed or cleaned up, we cannot be sure what state they are in, and we may hit errors like this one.

We should always delete the stale network namespaces left over from previous runs at the beginning of vstart_runner.py.
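A minimal sketch of such a startup cleanup, assuming the `ceph-ns-` naming prefix seen in the log; the helper names here are illustrative, not the actual patch:

```python
import subprocess

def stale_ceph_netns(list_output):
    """Pick out the ceph-ns-* namespace names from `ip netns list` output.

    vstart_runner names each namespace after the mountpoint, e.g.
    "ceph-ns--tmp-tmpq1pg2pz7-mnt.0"; lines may carry an "(id: N)" suffix.
    """
    names = []
    for line in list_output.splitlines():
        fields = line.split()
        if fields and fields[0].startswith('ceph-ns-'):
            names.append(fields[0])
    return names

def cleanup_stale_netns():
    # Delete every leftover ceph-ns-* namespace before any test runs,
    # so a crashed earlier run cannot leave namespaces in an unknown state.
    out = subprocess.run(['ip', 'netns', 'list'],
                         capture_output=True, text=True).stdout
    for name in stale_ceph_netns(out):
        subprocess.run(['sudo', 'ip', 'netns', 'delete', name], check=False)
```

Calling something like `cleanup_stale_netns()` once at startup would have removed the stale `ceph-ns--tmp-tmpq1pg2pz7-mnt.0` namespace before the test tried to use it.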

#3 Updated by Patrick Donnelly about 1 month ago

  • Target version set to v16.0.0
  • Source set to Q/A
  • ceph-qa-suite deleted (fs)

Xiubo Li wrote:

If a previous test case failed, its network namespaces will not have been removed or cleaned up, we cannot be sure what state they are in, and we may hit errors like this one.

We should always delete the stale network namespaces left over from previous runs at the beginning of vstart_runner.py.

Xiubo, when pasting text from a failed QA run, please also include the source file.

#4 Updated by Patrick Donnelly about 1 month ago

Patrick Donnelly wrote:

Xiubo Li wrote:

If a previous test case failed, its network namespaces will not have been removed or cleaned up, we cannot be sure what state they are in, and we may hit errors like this one.

We should always delete the stale network namespaces left over from previous runs at the beginning of vstart_runner.py.

Xiubo, when pasting text from a failed QA run, please also include the source file.

Nevermind, I see this came from `vstart_runner.py` now.
