Bug #44571 (closed)

qa: test_abort_conn failed with "CalledProcessError: Command 'sudo python3 -c"

Added by Xiubo Li about 4 years ago. Updated about 4 years ago.

Status: Resolved
Priority: Normal
Assignee:
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2020-03-11 22:59:04,565.565 INFO:__main__:Running ['ps', 'ww', '-u0']
2020-03-11 22:59:04,570.570 INFO:__main__:Running ['ps', 'ww', '-u0']
2020-03-11 22:59:04,577.577 INFO:__main__:Running ['ps', 'ww', '-u0']
2020-03-11 22:59:04,584.584 INFO:__main__:No match for mds a
2020-03-11 22:59:04,585.585 INFO:__main__:Running ['./bin/./ceph-mds', '-i', 'a']
2020-03-11 22:59:04,590.590 INFO:__main__:No match for mds c
2020-03-11 22:59:04,590.590 INFO:__main__:Running ['./bin/./ceph-mds', '-i', 'c']
2020-03-11 22:59:04,594.594 INFO:__main__:No match for mds b
2020-03-11 22:59:04,594.594 INFO:__main__:Running ['./bin/./ceph-mds', '-i', 'b']
2020-03-11 22:59:04,727.727 INFO:__main__:Running ['./bin/ceph', 'osd', 'blacklist', 'clear']
2020-03-11 22:59:05,805.805 INFO:tasks.cephfs.cephfs_test_case:['0', '1']
2020-03-11 22:59:05,806.806 INFO:__main__:Running ['./bin/ceph', 'auth', 'ls', '--format=json-pretty']
2020-03-11 22:59:06,318.318 INFO:__main__:Discovered MDS IDs: ['a', 'c', 'b']
2020-03-11 22:59:06,320.320 INFO:__main__:Running ['./bin/ceph', '--format=json-pretty', 'osd', 'lspools']
2020-03-11 22:59:06,774.774 INFO:tasks.cephfs.filesystem:Creating filesystem 'cephfs'
2020-03-11 22:59:06,775.775 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'create', 'cephfs_metadata', '8']
2020-03-11 22:59:07,650.650 INFO:__main__:Running ['./bin/ceph', 'osd', 'pool', 'create', 'cephfs_data', '8']
2020-03-11 22:59:08,680.680 INFO:__main__:Running ['./bin/ceph', 'fs', 'new', 'cephfs', 'cephfs_metadata', 'cephfs_data', '--force']
2020-03-11 22:59:09,222.222 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json']
2020-03-11 22:59:09,657.657 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json']
2020-03-11 22:59:10,114.114 INFO:__main__:Running ['./bin/ceph', 'fs', 'set', 'cephfs', 'standby_count_wanted', '0']
2020-03-11 22:59:11,362.362 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-03-11 22:59:11,856.856 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json']
2020-03-11 22:59:12,293.293 INFO:__main__:Running ['./bin/ceph', 'auth', 'caps', 'client.0', 'mds', 'allow', 'mon', 'allow r', 'osd', 'allow rw pool=cephfs_data']
2020-03-11 22:59:12,862.862 INFO:__main__:Running ['./bin/ceph', 'auth', 'caps', 'client.1', 'mds', 'allow', 'mon', 'allow r', 'osd', 'allow rw pool=cephfs_data']
2020-03-11 22:59:13,429.429 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-03-11 22:59:13,922.922 INFO:tasks.cephfs.filesystem:are_daemons_healthy: mds map: {u'session_autoclose': 300, u'balancer': u'', u'up': {u'mds_0': 8223}, u'last_failure_osd_epoch': 0, u'in': [0], u'last_failure': 0, u'max_file_size': 1099511627776, u'explicitly_allowed_features': 0, u'damaged': [], u'tableserver': 0, u'metadata_pool': 48, u'failed': [], u'epoch': 171, u'flags': 18, u'max_mds': 1, u'compat': {u'compat': {}, u'ro_compat': {}, u'incompat': {u'feature_10': u'snaprealm v2', u'feature_8': u'no anchor table', u'feature_9': u'file layout v2', u'feature_2': u'client writeable ranges', u'feature_3': u'default file layouts on dirs', u'feature_1': u'base v0.20', u'feature_6': u'dirfrag is stored in omap', u'feature_4': u'dir inode in separate object', u'feature_5': u'mds uses versioned encoding'}}, u'min_compat_client': u'0 (unknown)', u'data_pools': [49], u'info': {u'gid_8223': {u'export_targets': [], u'name': u'b', u'incarnation': 168, u'state_seq': 3, u'state': u'up:active', u'gid': 8223, u'features': 4540138292836696063, u'rank': 0, u'flags': 0, u'join_fscid': -1, u'addrs': {u'addrvec': [{u'nonce': 2586056693, u'type': u'v1', u'addr': u'10.72.36.245:6815'}]}, u'addr': u'10.72.36.245:6815/2586056693'}}, u'fs_name': u'cephfs', u'created': u'2020-03-11T22:59:09.165027-0400', u'standby_count_wanted': 0, u'enabled': True, u'modified': u'2020-03-11T22:59:11.297778-0400', u'session_timeout': 60, u'stopped': [], u'ever_allowed_features': 0, u'root': 0}
2020-03-11 22:59:13,923.923 INFO:tasks.cephfs.filesystem:are_daemons_healthy: 1/1
2020-03-11 22:59:13,924.924 INFO:__main__:Running ['./bin/ceph', 'daemon', 'mds.b', 'status']
2020-03-11 22:59:14,167.167 INFO:tasks.cephfs.filesystem:_json_asok output: {
    "cluster_fsid": "f537aa68-bb64-461b-9ae3-ce6b41d340c7",
    "whoami": 0,
    "id": 8223,
    "want_state": "up:active",
    "state": "up:active",
    "rank_uptime": 4.9731281960000002,
    "mdsmap_epoch": 171,
    "osdmap_epoch": 338,
    "osdmap_epoch_barrier": 338,
    "uptime": 9.43330409
}

2020-03-11 22:59:14,169.169 INFO:__main__:Discovered MDS IDs: ['a', 'c', 'b']
2020-03-11 22:59:14,170.170 INFO:__main__:Wait for MDS to reach steady state...
2020-03-11 22:59:14,171.171 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2020-03-11 22:59:14,690.690 INFO:tasks.cephfs.filesystem:are_daemons_healthy: mds map: {u'session_autoclose': 300, u'balancer': u'', u'up': {u'mds_0': 8223}, u'last_failure_osd_epoch': 0, u'in': [0], u'last_failure': 0, u'max_file_size': 1099511627776, u'explicitly_allowed_features': 0, u'damaged': [], u'tableserver': 0, u'metadata_pool': 48, u'failed': [], u'epoch': 171, u'flags': 18, u'max_mds': 1, u'compat': {u'compat': {}, u'ro_compat': {}, u'incompat': {u'feature_10': u'snaprealm v2', u'feature_8': u'no anchor table', u'feature_9': u'file layout v2', u'feature_2': u'client writeable ranges', u'feature_3': u'default file layouts on dirs', u'feature_1': u'base v0.20', u'feature_6': u'dirfrag is stored in omap', u'feature_4': u'dir inode in separate object', u'feature_5': u'mds uses versioned encoding'}}, u'min_compat_client': u'0 (unknown)', u'data_pools': [49], u'info': {u'gid_8223': {u'export_targets': [], u'name': u'b', u'incarnation': 168, u'state_seq': 3, u'state': u'up:active', u'gid': 8223, u'features': 4540138292836696063, u'rank': 0, u'flags': 0, u'join_fscid': -1, u'addrs': {u'addrvec': [{u'nonce': 2586056693, u'type': u'v1', u'addr': u'10.72.36.245:6815'}]}, u'addr': u'10.72.36.245:6815/2586056693'}}, u'fs_name': u'cephfs', u'created': u'2020-03-11T22:59:09.165027-0400', u'standby_count_wanted': 0, u'enabled': True, u'modified': u'2020-03-11T22:59:11.297778-0400', u'session_timeout': 60, u'stopped': [], u'ever_allowed_features': 0, u'root': 0}
2020-03-11 22:59:14,690.690 INFO:tasks.cephfs.filesystem:are_daemons_healthy: 1/1
2020-03-11 22:59:14,690.690 INFO:__main__:Running ['./bin/ceph', 'daemon', 'mds.b', 'status']
2020-03-11 22:59:14,934.934 INFO:tasks.cephfs.filesystem:_json_asok output: {
    "cluster_fsid": "f537aa68-bb64-461b-9ae3-ce6b41d340c7",
    "whoami": 0,
    "id": 8223,
    "want_state": "up:active",
    "state": "up:active",
    "rank_uptime": 5.7402896449999998,
    "mdsmap_epoch": 171,
    "osdmap_epoch": 338,
    "osdmap_epoch_barrier": 338,
    "uptime": 10.200459702
}

2020-03-11 22:59:14,935.935 INFO:__main__:Ready to start LocalFuseMount...
2020-03-11 22:59:14,936.936 INFO:tasks.cephfs.mount:Setting the 'None' netns for '/tmp/tmp5dbVun/mnt.0'
2020-03-11 22:59:14,937.937 INFO:__main__:Running ['ip', 'addr']
2020-03-11 22:59:14,945.945 INFO:__main__:Running ['ip', 'netns', 'list']
2020-03-11 22:59:14,951.951 INFO:__main__:Running ['ip', 'netns', 'list-id']
2020-03-11 22:59:14,955.955 INFO:__main__:Running ['ip', 'netns', 'list-id']
2020-03-11 22:59:14,960.960 INFO:__main__:Running ['ip', 'netns', 'list-id']
2020-03-11 22:59:14,963.963 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns add ceph-ns--tmp-tmp5dbVun-mnt.0']
2020-03-11 22:59:15,014.014 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns set ceph-ns--tmp-tmp5dbVun-mnt.0 2']
2020-03-11 22:59:15,068.068 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpttLd0H-mnt.1 ip addr']
2020-03-11 22:59:15,127.127 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpttLd0H-mnt.0 ip addr']
2020-03-11 22:59:15,184.184 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpttLd0H-mnt.1 ip addr']
2020-03-11 22:59:15,245.245 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpttLd0H-mnt.1 ip addr']
2020-03-11 22:59:15,311.311 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmpttLd0H-mnt.0 ip addr']
2020-03-11 22:59:15,360.360 INFO:tasks.cephfs.mount:Setuping the netns 'ceph-ns--tmp-tmp5dbVun-mnt.0' with 192.168.0.3/16
2020-03-11 22:59:15,361.361 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip link add veth0 netns ceph-ns--tmp-tmp5dbVun-mnt.0 type veth peer name brx.2']
2020-03-11 22:59:15,428.428 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmp5dbVun-mnt.0 ip addr add 192.168.0.3/16 brd 192.168.255.255 dev veth0']
2020-03-11 22:59:15,484.484 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmp5dbVun-mnt.0 ip link set veth0 up']
2020-03-11 22:59:15,554.554 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmp5dbVun-mnt.0 ip link set lo up']
2020-03-11 22:59:15,622.622 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip netns exec ceph-ns--tmp-tmp5dbVun-mnt.0 ip route add default via 192.168.255.254']
2020-03-11 22:59:15,690.690 INFO:__main__:Running ['sudo', 'bash', '-c', 'ip link set brx.2 up']
2020-03-11 22:59:15,754.754 INFO:__main__:Running ['sudo', 'bash', '-c', 'brctl addif ceph-brx brx.2']
2020-03-11 22:59:15,818.818 INFO:__main__:Running ['mkdir', '-p', '/tmp/tmp5dbVun/mnt.0']
2020-03-11 22:59:15,827.827 INFO:__main__:Running ['mount', '-t', 'fusectl', '/sys/fs/fuse/connections', '/sys/fs/fuse/connections']
mount: /sys/fs/fuse/connections is already mounted or /sys/fs/fuse/connections busy
2020-03-11 22:59:15,836.836 INFO:__main__:Running ['ls', '/sys/fs/fuse/connections']
2020-03-11 22:59:15,841.841 INFO:__main__:Pre-mount connections: [43, 45, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71]
2020-03-11 22:59:15,841.841 INFO:__main__:Running ['nsenter', '--net=/var/run/netns/ceph-ns--tmp-tmp5dbVun-mnt.0', './bin/ceph-fuse', '-f', '--name', 'client.0', '/tmp/tmp5dbVun/mnt.0']
2020-03-11 22:59:15,845.845 INFO:__main__:Mounting client.0 with pid 9519
2020-03-11 22:59:15,845.845 INFO:__main__:Running ['mount', '-t', 'fusectl', '/sys/fs/fuse/connections', '/sys/fs/fuse/connections']
mount: /sys/fs/fuse/connections is already mounted or /sys/fs/fuse/connections busy
2020-03-11 22:59:15,856.856 INFO:__main__:Running ['ls', '/sys/fs/fuse/connections']
2020-03-11 22:59:16,863.863 INFO:__main__:Running ['mount', '-t', 'fusectl', '/sys/fs/fuse/connections', '/sys/fs/fuse/connections']
mount: /sys/fs/fuse/connections is already mounted or /sys/fs/fuse/connections busy
2020-03-11 22:59:16,876.876 INFO:__main__:Running ['ls', '/sys/fs/fuse/connections']
2020-03-11 22:59:16,884.884 INFO:__main__:Post-mount connections: [43, 45, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72]
2020-03-11 22:59:16,886.886 INFO:__main__:I think my launching pid was 9519
2020-03-11 22:59:16,886.886 INFO:teuthology.misc::sh: sudo python3 -c 
import glob
import re
import os
import subprocess

def find_socket(client_name):
        asok_path = "/tmp/ceph-asok.R9gHnv//client.0.9519.asok" 
        files = glob.glob(asok_path)

        # Given a non-glob path, it better be there
        if "*" not in asok_path:
            assert(len(files) == 1)
            return files[0]

        for f in files:
                pid = re.match(".*\.(\d+)\.asok$", f).group(1)
                if os.path.exists("/proc/{0}".format(pid)):
                        return f
        raise RuntimeError("Client socket {0} not found".format(client_name))

print(find_socket("client.0"))

2020-03-11 22:59:16,952.952 INFO:__main__:test_abort_conn (tasks.cephfs.test_client_recovery.TestClientRecovery) ... ERROR
2020-03-11 22:59:16,953.953 INFO:__main__:Stopped test: test_abort_conn (tasks.cephfs.test_client_recovery.TestClientRecovery) in 19.594384s
2020-03-11 22:59:16,954.954 INFO:__main__:
2020-03-11 22:59:16,954.954 INFO:__main__:======================================================================
2020-03-11 22:59:16,954.954 INFO:__main__:ERROR: test_abort_conn (tasks.cephfs.test_client_recovery.TestClientRecovery)
2020-03-11 22:59:16,955.955 INFO:__main__:----------------------------------------------------------------------
2020-03-11 22:59:16,955.955 INFO:__main__:Traceback (most recent call last):
2020-03-11 22:59:16,955.955 INFO:__main__:  File "/data/ceph/qa/tasks/cephfs/cephfs_test_case.py", line 145, in setUp
2020-03-11 22:59:16,956.956 INFO:__main__:    self.mounts[i].mount()
2020-03-11 22:59:16,956.956 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 900, in mount
2020-03-11 22:59:16,956.956 INFO:__main__:    self.gather_mount_info()
2020-03-11 22:59:16,956.956 INFO:__main__:  File "/data/ceph/qa/tasks/cephfs/fuse_mount.py", line 170, in gather_mount_info
2020-03-11 22:59:16,956.956 INFO:__main__:    status = self.admin_socket(['status'])
2020-03-11 22:59:16,956.956 INFO:__main__:  File "/data/ceph/qa/tasks/cephfs/fuse_mount.py", line 427, in admin_socket
2020-03-11 22:59:16,956.956 INFO:__main__:    ], timeout=(15*60)).strip()
2020-03-11 22:59:16,956.956 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 398, in sh
2020-03-11 22:59:16,957.957 INFO:__main__:    env=env)
2020-03-11 22:59:16,957.957 INFO:__main__:  File "/data/teuthology/teuthology/misc.py", line 1332, in sh
2020-03-11 22:59:16,957.957 INFO:__main__:    output=output
2020-03-11 22:59:16,957.957 INFO:__main__:CalledProcessError: Command 'sudo python3 -c 
2020-03-11 22:59:16,957.957 INFO:__main__:import glob
2020-03-11 22:59:16,957.957 INFO:__main__:import re
2020-03-11 22:59:16,957.957 INFO:__main__:import os
2020-03-11 22:59:16,958.958 INFO:__main__:import subprocess
2020-03-11 22:59:16,958.958 INFO:__main__:
2020-03-11 22:59:16,958.958 INFO:__main__:def find_socket(client_name):
2020-03-11 22:59:16,958.958 INFO:__main__:        asok_path = "/tmp/ceph-asok.R9gHnv//client.0.9519.asok" 
2020-03-11 22:59:16,958.958 INFO:__main__:        files = glob.glob(asok_path)
2020-03-11 22:59:16,958.958 INFO:__main__:
2020-03-11 22:59:16,958.958 INFO:__main__:        # Given a non-glob path, it better be there
2020-03-11 22:59:16,959.959 INFO:__main__:        if "*" not in asok_path:
2020-03-11 22:59:16,959.959 INFO:__main__:            assert(len(files) == 1)
2020-03-11 22:59:16,959.959 INFO:__main__:            return files[0]
2020-03-11 22:59:16,959.959 INFO:__main__:
2020-03-11 22:59:16,959.959 INFO:__main__:        for f in files:
2020-03-11 22:59:16,959.959 INFO:__main__:                pid = re.match(".*\.(\d+)\.asok$", f).group(1)
2020-03-11 22:59:16,959.959 INFO:__main__:                if os.path.exists("/proc/{0}".format(pid)):
2020-03-11 22:59:16,959.959 INFO:__main__:                        return f
2020-03-11 22:59:16,960.960 INFO:__main__:        raise RuntimeError("Client socket {0} not found".format(client_name))
2020-03-11 22:59:16,960.960 INFO:__main__:
2020-03-11 22:59:16,960.960 INFO:__main__:print(find_socket("client.0"))
2020-03-11 22:59:16,960.960 INFO:__main__:' returned non-zero exit status 1
2020-03-11 22:59:16,960.960 INFO:__main__:
2020-03-11 22:59:16,960.960 INFO:__main__:----------------------------------------------------------------------
2020-03-11 22:59:16,960.960 INFO:__main__:Ran 2 tests in 20.312s
2020-03-11 22:59:16,961.961 INFO:__main__:
2020-03-11 22:59:16,961.961 INFO:__main__:FAILED (errors=1, skipped=1)
2020-03-11 22:59:16,961.961 INFO:__main__:
2020-03-11 22:59:16,961.961 INFO:__main__:======================================================================
2020-03-11 22:59:16,961.961 INFO:__main__:ERROR: test_abort_conn (tasks.cephfs.test_client_recovery.TestClientRecovery)
2020-03-11 22:59:16,961.961 INFO:__main__:----------------------------------------------------------------------
2020-03-11 22:59:16,962.962 INFO:__main__:Traceback (most recent call last):
2020-03-11 22:59:16,962.962 INFO:__main__:  File "/data/ceph/qa/tasks/cephfs/cephfs_test_case.py", line 145, in setUp
2020-03-11 22:59:16,962.962 INFO:__main__:    self.mounts[i].mount()
2020-03-11 22:59:16,962.962 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 900, in mount
2020-03-11 22:59:16,962.962 INFO:__main__:    self.gather_mount_info()
2020-03-11 22:59:16,962.962 INFO:__main__:  File "/data/ceph/qa/tasks/cephfs/fuse_mount.py", line 170, in gather_mount_info
2020-03-11 22:59:16,962.962 INFO:__main__:    status = self.admin_socket(['status'])
2020-03-11 22:59:16,963.963 INFO:__main__:  File "/data/ceph/qa/tasks/cephfs/fuse_mount.py", line 427, in admin_socket
2020-03-11 22:59:16,963.963 INFO:__main__:    ], timeout=(15*60)).strip()
2020-03-11 22:59:16,963.963 INFO:__main__:  File "../qa/tasks/vstart_runner.py", line 398, in sh
2020-03-11 22:59:16,963.963 INFO:__main__:    env=env)
2020-03-11 22:59:16,963.963 INFO:__main__:  File "/data/teuthology/teuthology/misc.py", line 1332, in sh
2020-03-11 22:59:16,963.963 INFO:__main__:    output=output
2020-03-11 22:59:16,964.964 INFO:__main__:CalledProcessError: Command 'sudo python3 -c 
2020-03-11 22:59:16,964.964 INFO:__main__:import glob
2020-03-11 22:59:16,964.964 INFO:__main__:import re
2020-03-11 22:59:16,964.964 INFO:__main__:import os
2020-03-11 22:59:16,964.964 INFO:__main__:import subprocess
2020-03-11 22:59:16,965.965 INFO:__main__:
2020-03-11 22:59:16,965.965 INFO:__main__:def find_socket(client_name):
2020-03-11 22:59:16,965.965 INFO:__main__:        asok_path = "/tmp/ceph-asok.R9gHnv//client.0.9519.asok" 
2020-03-11 22:59:16,965.965 INFO:__main__:        files = glob.glob(asok_path)
2020-03-11 22:59:16,965.965 INFO:__main__:
2020-03-11 22:59:16,965.965 INFO:__main__:        # Given a non-glob path, it better be there
2020-03-11 22:59:16,965.965 INFO:__main__:        if "*" not in asok_path:
2020-03-11 22:59:16,966.966 INFO:__main__:            assert(len(files) == 1)
2020-03-11 22:59:16,966.966 INFO:__main__:            return files[0]
2020-03-11 22:59:16,966.966 INFO:__main__:
2020-03-11 22:59:16,966.966 INFO:__main__:        for f in files:
2020-03-11 22:59:16,966.966 INFO:__main__:                pid = re.match(".*\.(\d+)\.asok$", f).group(1)
2020-03-11 22:59:16,966.966 INFO:__main__:                if os.path.exists("/proc/{0}".format(pid)):
2020-03-11 22:59:16,966.966 INFO:__main__:                        return f
2020-03-11 22:59:16,966.966 INFO:__main__:        raise RuntimeError("Client socket {0} not found".format(client_name))
2020-03-11 22:59:16,966.966 INFO:__main__:
2020-03-11 22:59:16,966.966 INFO:__main__:print(find_socket("client.0"))
2020-03-11 22:59:16,966.966 INFO:__main__:' returned non-zero exit status 1
2020-03-11 22:59:16,967.967 INFO:__main__:
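For context on the traceback above: `CalledProcessError` is the standard exception `subprocess` raises when a child process exits non-zero. The inline `find_socket` script takes the non-glob branch (its `asok_path` contains no `*`), so when the expected `.asok` file is absent the `assert len(files) == 1` fails, the child exits with status 1, and `teuthology.misc.sh` re-raises that as the error shown. A minimal sketch of that failure shape (the path below is hypothetical and deliberately absent):

```python
import subprocess

# Inline script of the same shape as the find_socket helper in the log:
# it asserts that a (hypothetical, deliberately missing) .asok path exists.
script = (
    "import glob\n"
    "files = glob.glob('/tmp/nonexistent-asok-dir/client.0.asok')\n"
    "assert len(files) == 1\n"
    "print(files[0])\n"
)
try:
    subprocess.check_output(["python3", "-c", script],
                            stderr=subprocess.DEVNULL)
    exit_status = 0
except subprocess.CalledProcessError as e:
    # A non-zero child exit surfaces as CalledProcessError, matching
    # the "returned non-zero exit status 1" in the traceback above.
    exit_status = e.returncode

print("child exit status:", exit_status)
```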

Actions #1

Updated by Xiubo Li about 4 years ago

  • Status changed from New to In Progress
Actions #2

Updated by Xiubo Li about 4 years ago

  • Status changed from In Progress to Fix Under Review
Actions #3

Updated by Xiubo Li about 4 years ago

Fixed by:

commit 2cc0ee709c36eabe03311a00b72295da468bccf4
Author: Rishabh Dave <ridave@gmail.com>
Date:   Fri Mar 13 07:03:50 2020 +0000

    qa/vstart_runner: update vstart_runner.LocalRemote.sh

    Commit 9f6c764f10f replaces remote.run calls by remote.sh without
    updating the definition of vstart_runner.LocalRemote.sh which breaks the
    cephfs tests when executed locally.

    Fixes: https://tracker.ceph.com/issues/44579
    Signed-off-by: Rishabh Dave <ridave@redhat.com>
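The commit message above points at `vstart_runner.LocalRemote.sh` having drifted from the interface that `remote.sh` callers expect after the 9f6c764f10f migration. As a rough illustration only (not the actual vstart_runner code; class and parameter names are assumptions), a local `sh` helper compatible with such callers might look like:

```python
import subprocess

class LocalRemote:
    """Hypothetical sketch of a local 'remote' whose sh() mirrors the
    remote.sh convention: run a command locally and return its decoded
    stdout, raising CalledProcessError on non-zero exit. Illustrative
    only; the real vstart_runner.LocalRemote differs."""

    def sh(self, command, timeout=None, env=None):
        # Accept either a shell string or an argv list, as callers
        # in the log pass both styles.
        return subprocess.check_output(
            command,
            shell=isinstance(command, str),
            timeout=timeout,
            env=env,
        ).decode()

remote = LocalRemote()
out = remote.sh("echo hello")
print(out)
```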

Actions #4

Updated by Xiubo Li about 4 years ago

  • Status changed from Fix Under Review to Resolved
Actions
