Bug #40035
opensmoke.sh failing in jenkins "make check" test randomly
Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
125/175 Test #4: smoke.sh ................................***Failed  245.97 sec
...
//home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1978: flush_pg_stats: ceph osd last-stat-seq 0
ERROR: ld.so: object 'ASAN_LIBRARY-NOTFOUND' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'ASAN_LIBRARY-NOTFOUND' from LD_PRELOAD cannot be preloaded: ignored.
waiting osd.1 seq 38654705680
/home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1978: flush_pg_stats: test 21474836500 -lt 21474836498
/home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1974: flush_pg_stats: for s in '$seqs'
//home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1975: flush_pg_stats: echo 1-38654705680
//home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1975: flush_pg_stats: cut -d - -f 1
/home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1975: flush_pg_stats: osd=1
//home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1976: flush_pg_stats: cut -d - -f 2
//home/jenkins-build/build/workspace/ceph-pull-requests/qa/standalone/ceph-helpers.sh:1976: flush_pg_stats: echo 1-38654705680
/home/jenkins-build/build/workspace/ceph-pull-requests/qa/st
Errors while running CTest
Build step 'Execute shell' marked build as failure
[PostBuildScript] - Executing post build scripts.
[ceph-pull-requests] $ /bin/sh -xe /tmp/jenkins8731416451300077444.sh
+ sudo reboot
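For context, the xtrace above is the pg-stat flush helper in ceph-helpers.sh polling each OSD until its last-stat-seq catches up to the sequence captured at flush time. A minimal sketch of the loop shape visible in the trace (lines 1974-1978); `get_last_stat_seq` is a hypothetical stand-in for `ceph osd last-stat-seq <id>`, not the helper's actual code:

```shell
#!/bin/sh
# Hedged sketch of the wait loop the trace shows at
# ceph-helpers.sh:1974-1978: each entry in $1 is an 'osd-seq' pair and
# the helper polls until that OSD's last-stat-seq catches up.

# Stand-in for 'ceph osd last-stat-seq <id>' -- a real cluster query in
# ceph-helpers.sh; here it simply reports a fixed sequence number.
get_last_stat_seq() {
    echo 38654705680
}

flush_wait() {
    for s in $1; do
        osd=$(echo "$s" | cut -d - -f 1)
        seq=$(echo "$s" | cut -d - -f 2)
        while test "$(get_last_stat_seq "$osd")" -lt "$seq"; do
            echo "waiting osd.$osd seq $seq"
            sleep 1
        done
    done
}

flush_wait "1-38654705680"
```

If the daemon never reports the expected sequence, a loop of this shape spins until the test times out, which matches the intermittent 245-second failure above.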
see https://jenkins.ceph.com/job/ceph-pull-requests/817/console
I tried to reproduce this locally in my Xenial docker container, but failed.
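Worth noting: `ASAN_LIBRARY-NOTFOUND` is the shape CMake gives a failed `find_library(ASAN_LIBRARY ...)` lookup, which suggests an unresolved placeholder was exported into `LD_PRELOAD` verbatim. A hedged sketch of guarding such an export; the variable names and the `gcc -print-file-name` probe are assumptions, not taken from the ceph build scripts:

```shell
#!/bin/sh
# Hypothetical guard: only export LD_PRELOAD when the ASan runtime
# resolves to a real file, instead of leaking a CMake-style
# 'ASAN_LIBRARY-NOTFOUND' placeholder into every child process.
resolve_asan() {
    lib=$(gcc -print-file-name=libasan.so 2>/dev/null)
    # gcc echoes the bare name back when it cannot find the library
    if [ -n "$lib" ] && [ "$lib" != "libasan.so" ] && [ -e "$lib" ]; then
        printf '%s\n' "$lib"
        return 0
    fi
    return 1
}

if asan_lib=$(resolve_asan); then
    export LD_PRELOAD="$asan_lib"
else
    echo "libasan.so not found; running without LD_PRELOAD" >&2
fi
```

With a guard like this the warning would disappear, though the warning itself is marked "ignored" by ld.so and may be unrelated to the actual test failure.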
Updated by Laura Paduano over 4 years ago
Kefu Chai wrote:
[...]
see https://jenkins.ceph.com/job/ceph-pull-requests/817/console
I tried to reproduce this locally in my Xenial docker container, but failed.
Not sure if this is related, but we're seeing something similar wrt the error
ERROR: ld.so: object 'ASAN_LIBRARY-NOTFOUND' from LD_PRELOAD cannot be preloaded: ignored.
We're seeing this in Jenkins when running the dashboard backend API tests in Nautilus:
2019-09-25 15:25:26,388.388 ERROR:__main__:tried to stop a non-running daemon
2019-09-25 15:25:26,389.389 INFO:__main__:Running ['./bin/ceph', 'mds', 'fail', 'a']
2019-09-25 15:25:27,283.283 INFO:__main__:Running ['./bin/ceph', 'osd', 'dump', '--format=json-pretty']
2019-09-25 15:25:27,910.910 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2019-09-25 15:25:28,568.568 INFO:__main__:Running ['./bin/ceph', 'fs', 'dump', '--format=json']
2019-09-25 15:25:29,187.187 INFO:__main__:Discovered MDS IDs: ['a']
2019-09-25 15:25:29,188.188 INFO:__main__:Running ['./bin/ceph', '--format=json-pretty', 'osd', 'lspools']
2019-09-25 15:25:29,776.776 INFO:tasks.cephfs.filesystem:Creating filesystem 'cephfs'
2019-09-25 15:25:29,777.777 INFO:__main__:Running ['./bin/ceph', 'daemon', 'mon.a', 'config', 'get', 'mon_pg_warn_min_per_osd']
ERROR: ld.so: object 'ASAN_LIBRARY-NOTFOUND' from LD_PRELOAD cannot be preloaded: ignored.
ERROR: ld.so: object 'ASAN_LIBRARY-NOTFOUND' from LD_PRELOAD cannot be preloaded: ignored.
Can't get admin socket path: unable to get conf option admin_socket for mon.a: ERROR: ld.so: object 'ASAN_LIBRARY-NOTFOUND' from LD_PRELOAD cannot be preloaded: ignored.
2019-09-25 15:25:29,984.984 INFO:__main__:ERROR
2019-09-25 15:25:29,984.984 INFO:__main__:
2019-09-25 15:25:29,985.985 INFO:__main__:======================================================================
2019-09-25 15:25:29,985.985 INFO:__main__:ERROR: setUpClass (tasks.mgr.dashboard.test_cephfs.CephfsTest)
2019-09-25 15:25:29,985.985 INFO:__main__:----------------------------------------------------------------------
2019-09-25 15:25:29,986.986 INFO:__main__:Traceback (most recent call last):
2019-09-25 15:25:29,986.986 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/mgr/dashboard/helper.py", line 127, in setUpClass
2019-09-25 15:25:29,986.986 INFO:__main__: cls.fs = cls.mds_cluster.newfs(create=True)
2019-09-25 15:25:29,986.986 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 705, in newfs
2019-09-25 15:25:29,986.986 INFO:__main__: return LocalFilesystem(self._ctx, name=name, create=create)
2019-09-25 15:25:29,987.987 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 755, in __init__
2019-09-25 15:25:29,987.987 INFO:__main__: self.create()
2019-09-25 15:25:29,987.987 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/cephfs/filesystem.py", line 516, in create
2019-09-25 15:25:29,987.987 INFO:__main__: pgs_per_fs_pool = self.get_pgs_per_fs_pool()
2019-09-25 15:25:29,987.987 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 776, in get_pgs_per_fs_pool
2019-09-25 15:25:29,988.988 INFO:__main__: return 3 * int(self.get_config('mon_pg_warn_min_per_osd'))
2019-09-25 15:25:29,988.988 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 644, in get_config
2019-09-25 15:25:29,988.988 INFO:__main__: return self.json_asok(['config', 'get', key], service_type, service_id)[key]
2019-09-25 15:25:29,988.988 INFO:__main__: File "/home/jenkins-build/build/workspace/ceph-dashboard-pr-backend/qa/tasks/cephfs/filesystem.py", line 178, in json_asok
2019-09-25 15:25:29,988.988 INFO:__main__: proc = self.mon_manager.admin_socket(service_type, service_id, command, timeout=timeout)
2019-09-25 15:25:29,989.989 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 618, in admin_socket
2019-09-25 15:25:29,989.989 INFO:__main__: timeout=timeout
2019-09-25 15:25:29,989.989 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 301, in run
2019-09-25 15:25:29,989.989 INFO:__main__: proc.wait()
2019-09-25 15:25:29,989.989 INFO:__main__: File "../qa/tasks/vstart_runner.py", line 179, in wait
2019-09-25 15:25:29,990.990 INFO:__main__: raise CommandFailedError(self.args, self.exitstatus)
2019-09-25 15:25:29,991.991 INFO:__main__:CommandFailedError: Command failed with status 22: ['./bin/ceph', 'daemon', 'mon.a', 'config', 'get', 'mon_pg_warn_min_per_osd']
2019-09-25 15:25:29,991.991 INFO:__main__:
2019-09-25 15:25:29,991.991 INFO:__main__:----------------------------------------------------------------------
2019-09-25 15:25:29,991.991 INFO:__main__:Ran 11 tests in 187.385s
2019-09-25 15:25:29,991.991 INFO:__main__:
2019-09-25 15:25:29,991.991 INFO:__main__:FAILED (errors=1)
source: https://jenkins.ceph.com/job/ceph-dashboard-pr-backend/24/consoleText
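The "Can't get admin socket path" message above suggests the harness resolves the socket path from a captured conf lookup, and the ld.so warning printed on that lookup's output ends up glued onto the path. A minimal sketch of the mitigation (the helper name is hypothetical; `ceph-conf` invocation shown only as an illustration): capture stdout alone so loader noise on stderr cannot corrupt the value.

```shell
#!/bin/sh
# Hypothetical helper: capture a conf value from a tool's stdout only,
# so warnings the dynamic loader prints on stderr (as in the failure
# above) cannot end up inside the captured socket path.
get_conf_value() {
    # "$@" stands in for something like
    # './bin/ceph-conf --name mon.a admin_socket'
    "$@" 2>/dev/null
}

# Simulate a tool that prints an ld.so warning on stderr and the real
# value on stdout; only the value should be captured.
get_conf_value sh -c 'echo "ERROR: ld.so: preload ignored." >&2; echo /tmp/ceph-mon.a.asok'
```

This does not explain why only the Jenkins nautilus job hits it, but it would make the admin-socket lookup immune to the preload warning.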
Updated by Alfonso Martínez over 4 years ago
In addition to what Laura reported: this failure is seen only in the Jenkins job, and only when the job runs on the nautilus branch. The tests pass when run locally.
Updated by Alfonso Martínez over 4 years ago
- Related to Bug #42832: nautilus: mgr/dashboard: Nautilus: backend API test failure: setUpClass (tasks.mgr.dashboard.test_cephfs.CephfsTest) added