Bug #7084: segv in RGWDataChangesLog::RGWDataChangesLog
Status: Closed
% Done: 0%
Source: Q/A
Severity: 3 - minor
Description
2014-01-02T13:01:17.183 INFO:teuthology.orchestra.run.err:[10.214.131.19]: *** Caught signal (Segmentation fault) **
2014-01-02T13:01:17.184 INFO:teuthology.orchestra.run.err:[10.214.131.19]: in thread 7fda906a9780
2014-01-02T13:01:17.189 INFO:teuthology.orchestra.run.err:[10.214.131.19]: ceph version 0.67.5-14-g4d88dd1 (4d88dd10bfab4e5fb45632245be5f79eeba73a30)
2014-01-02T13:01:17.189 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 1: radosgw-admin() [0x50a56a]
2014-01-02T13:01:17.189 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 2: (()+0xfcb0) [0x7fda8efbecb0]
2014-01-02T13:01:17.189 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 3: (std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&)+0xb) [0x7fda8e663f2b]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 4: (RGWDataChangesLog::RGWDataChangesLog(CephContext*, RGWRados*)+0x17a) [0x4b91ea]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 5: (RGWRados::init_rados()+0xa7) [0x49b517]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 6: (RGWCache<RGWRados>::init_rados()+0x17) [0x4b9447]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 7: (RGWRados::initialize()+0xa) [0x49ff7a]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 8: (RGWStoreManager::init_storage_provider(CephContext*, bool)+0x571) [0x498fd1]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 9: (main()+0x196e) [0x43ea2e]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 10: (__libc_start_main()+0xed) [0x7fda8dd1376d]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 11: radosgw-admin() [0x4488f9]
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 2014-01-02 13:01:17.184198 7fda906a9780 -1 *** Caught signal (Segmentation fault) **
2014-01-02T13:01:17.190 INFO:teuthology.orchestra.run.err:[10.214.131.19]: in thread 7fda906a9780
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]:
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: ceph version 0.67.5-14-g4d88dd1 (4d88dd10bfab4e5fb45632245be5f79eeba73a30)
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 1: radosgw-admin() [0x50a56a]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 2: (()+0xfcb0) [0x7fda8efbecb0]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 3: (std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&)+0xb) [0x7fda8e663f2b]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 4: (RGWDataChangesLog::RGWDataChangesLog(CephContext*, RGWRados*)+0x17a) [0x4b91ea]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 5: (RGWRados::init_rados()+0xa7) [0x49b517]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 6: (RGWCache<RGWRados>::init_rados()+0x17) [0x4b9447]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 7: (RGWRados::initialize()+0xa) [0x49ff7a]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 8: (RGWStoreManager::init_storage_provider(CephContext*, bool)+0x571) [0x498fd1]
2014-01-02T13:01:17.191 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 9: (main()+0x196e) [0x43ea2e]
2014-01-02T13:01:17.192 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 10: (__libc_start_main()+0xed) [0x7fda8dd1376d]
2014-01-02T13:01:17.192 INFO:teuthology.orchestra.run.err:[10.214.131.19]: 11: radosgw-admin() [0x4488f9]
2014-01-02T13:01:17.192 INFO:teuthology.orchestra.run.err:[10.214.131.19]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
The test was:
ubuntu@teuthology:/var/lib/teuthworker/archive/sage-2014-01-02_12:58:22-upgrade:parallel-next-testing-basic-plana/22332$ cat orig.config.yaml
archive_path: /var/lib/teuthworker/archive/sage-2014-01-02_12:58:22-upgrade:parallel-next-testing-basic-plana/22332
description: upgrade/parallel/rgw/{0-cluster/start.yaml 1-dumpling-install/dumpling.yaml
  2-workload/s3tests.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/final.yaml
  distro/ubuntu_12.04.yaml}
email: null
job_id: '22332'
kernel:
  kdb: true
  sha1: f48db1e9ac6f1578ab7efef9f66c70279e2f0cb5
last_in_suite: false
machine_type: plana
name: sage-2014-01-02_12:58:22-upgrade:parallel-next-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
os_version: '12.04'
overrides:
  admin_socket:
    branch: next
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 5
    log-whitelist:
    - slow request
    sha1: fe3fd5fb4aa6327256dfaf731914dc4f79b0d44e
  ceph-deploy:
    branch:
      dev: next
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
  install:
    ceph:
      sha1: fe3fd5fb4aa6327256dfaf731914dc4f79b0d44e
  s3tests:
    branch: next
  workunit:
    sha1: fe3fd5fb4aa6327256dfaf731914dc4f79b0d44e
owner: scheduled_sage@flab
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
- - mon.b
  - mon.c
  - osd.2
  - osd.3
- - client.0
  - client.1
tasks:
- chef: null
- clock.check: null
- install:
    branch: dumpling
- ceph:
    fs: xfs
- parallel:
  - workload
  - upgrade-sequence
- rgw:
  - client.1
- swift:
    client.1:
      rgw_server: client.1
teuthology_branch: next
upgrade-sequence:
  sequential:
  - install.upgrade:
      all:
        branch: emperor
  - ceph.restart:
    - mon.a
    - mon.b
    - mon.c
    - mds.a
    - osd.0
    - osd.1
    - osd.2
    - osd.3
    - rgw.client.0
verbose: true
workload:
  sequential:
  - rgw:
    - client.0
  - s3tests:
      client.0:
        force-branch: dumpling
        rgw_server: client.0
Updated by Tamilarasi muthamizhan over 10 years ago
recent log: ubuntu@teuthology:/a/sage-2014-01-02_12:58:22-upgrade:parallel-next-testing-basic-plana/22332
Updated by Sage Weil over 10 years ago
- Status changed from New to Can't reproduce
Reopen if this ever comes up again... but it looks like a bad build or something :/