Bug #8411
closed
"Segmentation fault" in radosgw in upgrade:dumpling-dumpling-testing-basic-vps
Status:
Can't reproduce
Priority:
Normal
Assignee:
-
Target version:
-
% Done:
0%
Source:
other
Tags:
Backport:
Regression:
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
2014-05-19T22:34:49.777 INFO:teuthology.orchestra.run.err:[10.214.138.60]: -5> 2014-05-20 05:34:49.754894 7f8f5ddca780 1 -- 10.214.138.60:0/1006187 --> 10.214.138.60:6790/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 0x19f7850 con 0x19fa660
2014-05-19T22:34:49.777 INFO:teuthology.orchestra.run.err:[10.214.138.60]: -4> 2014-05-20 05:34:49.754905 7f8f5ddca780 10 monclient: renew_subs
2014-05-19T22:34:49.777 INFO:teuthology.orchestra.run.err:[10.214.138.60]: -3> 2014-05-20 05:34:49.754911 7f8f5ddca780 10 monclient: _send_mon_message to mon.c at 10.214.138.60:6790/0
2014-05-19T22:34:49.777 INFO:teuthology.orchestra.run.err:[10.214.138.60]: -2> 2014-05-20 05:34:49.754917 7f8f5ddca780 1 -- 10.214.138.60:0/1006187 --> 10.214.138.60:6790/0 -- mon_subscribe({monmap=2+,osdmap=0}) v2 -- ?+0 0x19f7db0 con 0x19fa660
2014-05-19T22:34:49.777 INFO:teuthology.orchestra.run.err:[10.214.138.60]: -1> 2014-05-20 05:34:49.754942 7f8f5ddca780 1 librados: init done
2014-05-19T22:34:49.777 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0> 2014-05-20 05:34:49.757475 7f8f5ddca780 -1 *** Caught signal (Segmentation fault) **
2014-05-19T22:34:49.777 INFO:teuthology.orchestra.run.err:[10.214.138.60]: in thread 7f8f5ddca780
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]:
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: ceph version 0.67.1 (e23b817ad0cf1ea19c0a7b7c9999b30bed37d533)
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1: radosgw-admin() [0x4ffe5a]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 2: (()+0xfcb0) [0x7f8f5c6e1cb0]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 3: (std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&)+0xb) [0x7f8f5bd86f2b]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 4: (RGWDataChangesLog::RGWDataChangesLog(CephContext*, RGWRados*)+0x16e) [0x4b398e]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 5: (RGWRados::init_rados()+0xa7) [0x4965c7]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 6: (RGWCache<RGWRados>::init_rados()+0x17) [0x4b3bc7]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 7: (RGWRados::initialize()+0xa) [0x49afba]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 8: (RGWStoreManager::init_storage_provider(CephContext*, bool)+0x62f) [0x4940cf]
2014-05-19T22:34:49.778 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 9: (main()+0x18b1) [0x43ba61]
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 10: (__libc_start_main()+0xed) [0x7f8f5b43676d]
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 11: radosgw-admin() [0x4456c9]
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
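Frames 3 and 4 of the backtrace show the std::basic_string copy constructor faulting while the RGWDataChangesLog constructor runs, which is the usual signature of copying a string through a null or dangling pointer. The sketch below is a hypothetical illustration of that failure class and its guard; Config, ChangesLog, log_pool, and "default.log" are invented names for illustration, not the actual rgw types in ceph 0.67.1.

```cpp
#include <cassert>
#include <string>

// Hypothetical config object whose string member gets copied into a
// constructed object, analogous to frame 4 of the backtrace.
struct Config {
    std::string log_pool;
};

struct ChangesLog {
    std::string pool;
    // If `conf` were null or dangling, copying `conf->log_pool` would make
    // the basic_string copy constructor read through a bad address and
    // crash with SIGSEGV, as in frame 3 of the backtrace. The guard below
    // substitutes a default instead of dereferencing a null pointer.
    explicit ChangesLog(const Config* conf)
        : pool(conf ? conf->log_pool : std::string("default.log")) {}
};
```

In the real binary the fix would be to ensure the pointer handed to the constructor is initialized before use; the guard here only shows the shape of the defect.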
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]:
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: --- logging levels ---
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 none
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 1 lockdep
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 1 context
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 1 crush
2014-05-19T22:34:49.779 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 mds
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 mds_balancer
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 mds_locker
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 mds_log
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 mds_log_expire
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 mds_migrator
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 1 buffer
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 1 timer
2014-05-19T22:34:49.780 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 1 filer
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 1 striper
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 1 objecter
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 rados
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 rbd
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 journaler
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 objectcacher
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 client
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 osd
2014-05-19T22:34:49.781 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 optracker
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 objclass
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 3 filestore
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 3 journal
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 ms
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 mon
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/10 monc
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 paxos
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 0/ 5 tp
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 auth
2014-05-19T22:34:49.782 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 crypto
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 1 finisher
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 heartbeatmap
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 perfcounter
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 rgw
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 hadoop
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 javaclient
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 5 asok
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 1/ 1 throttle
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: -2/-2 (syslog threshold)
2014-05-19T22:34:49.783 INFO:teuthology.orchestra.run.err:[10.214.138.60]: 99/99 (stderr threshold)
2014-05-19T22:34:49.784 INFO:teuthology.orchestra.run.err:[10.214.138.60]: max_recent 500
2014-05-19T22:34:49.784 INFO:teuthology.orchestra.run.err:[10.214.138.60]: max_new 1000
2014-05-19T22:34:49.784 INFO:teuthology.orchestra.run.err:[10.214.138.60]: log_file /var/log/ceph/ceph-client.0.6187.log
2014-05-19T22:34:49.784 INFO:teuthology.orchestra.run.err:[10.214.138.60]: --- end dump of recent events ---
2014-05-19T22:34:49.791 INFO:teuthology.task.s3tests:Removing s3-tests...
2014-05-19T22:34:49.792 DEBUG:teuthology.orchestra.run:Running [10.214.138.60]: 'rm -rf /home/ubuntu/cephtest/s3-tests'
2014-05-19T22:34:49.933 INFO:teuthology.task.rgw:Stopping apache...
2014-05-19T22:34:49.954 INFO:teuthology.misc:Shutting down rgw daemons...
2014-05-19T22:34:49.954 DEBUG:teuthology.task.rgw.client.0:waiting for process to exit
2014-05-19T22:34:49.988 INFO:teuthology.task.rgw.client.0:Stopped
2014-05-19T22:34:49.988 DEBUG:teuthology.orchestra.run:Running [10.214.138.60]: 'rm -f /home/ubuntu/cephtest/rgw.opslog.client.0.sock'
2014-05-19T22:34:50.057 INFO:teuthology.task.rgw:Removing apache config...
2014-05-19T22:34:50.057 DEBUG:teuthology.orchestra.run:Running [10.214.138.60]: 'rm -f /home/ubuntu/cephtest/apache/apache.client.0.conf && rm -f /home/ubuntu/cephtest/apache/htdocs.client.0/rgw.fcgi'
2014-05-19T22:34:50.126 INFO:teuthology.task.rgw:Cleaning up apache directories...
2014-05-19T22:34:50.126 DEBUG:teuthology.orchestra.run:Running [10.214.138.60]: 'rm -rf /home/ubuntu/cephtest/apache/tmp.client.0 && rmdir /home/ubuntu/cephtest/apache/htdocs.client.0'
2014-05-19T22:34:50.198 DEBUG:teuthology.orchestra.run:Running [10.214.138.60]: 'rmdir /home/ubuntu/cephtest/apache'
2014-05-19T22:34:50.266 ERROR:teuthology.run_tasks:Saw exception from tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 25, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 14, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/parallel.py", line 41, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/teuthology-dumpling/teuthology/parallel.py", line 88, in __exit__
    raise
CommandCrashedError: Command crashed: '/home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid bar.client.0 --purge-data'
2014-05-19T22:34:50.267 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x2570590>
2014-05-19T22:34:50.267 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x2508990>
2014-05-19T22:34:50.267 DEBUG:teuthology.run_tasks:Unwinding manager <contextlib.GeneratorContextManager object at 0x2504a90>
2014-05-19T22:34:50.267 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/teuthology-dumpling/teuthology/contextutil.py", line 27, in nested
    yield vars
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/ceph.py", line 1168, in task
    yield
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 25, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/teuthology-dumpling/teuthology/run_tasks.py", line 14, in run_one_task
    return fn(**kwargs)
  File "/home/teuthworker/teuthology-dumpling/teuthology/task/parallel.py", line 41, in task
    p.spawn(_run_spawned, ctx, confg, taskname)
  File "/home/teuthworker/teuthology-dumpling/teuthology/parallel.py", line 88, in __exit__
    raise
CommandCrashedError: Command crashed: '/home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid bar.client.0 --purge-data'
2014-05-19T22:34:50.267 INFO:teuthology.misc:Shutting down mds daemons...
2014-05-19T22:34:50.268 DEBUG:teuthology.task.ceph.mds.a:waiting for process to exit
2014-05-19T22:34:50.276 INFO:teuthology.task.ceph.mds.a:Stopped
2014-05-19T22:34:50.276 INFO:teuthology.misc:Shutting down osd daemons...
2014-05-19T22:34:50.276 DEBUG:teuthology.task.ceph.osd.1:waiting for process to exit
2014-05-19T22:34:50.326 INFO:teuthology.task.ceph.osd.1:Stopped
archive_path: /var/lib/teuthworker/archive/teuthology-2014-05-19_19:15:03-upgrade:dumpling-dumpling-testing-basic-vps/263541
branch: dumpling
description: upgrade/dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/cuttlefish.v0.67.1.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/monthrash.yaml}
email: null
job_id: '263541'
kernel: &id001
  kdb: true
  sha1: 335cb91ce950ce0e12294af671c64a468d89194c
last_in_suite: false
machine_type: vps
name: teuthology-2014-05-19_19:15:03-upgrade:dumpling-dumpling-testing-basic-vps
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: dumpling
  ceph:
    conf:
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
    fs: xfs
    log-whitelist:
    - slow request
    - scrub
    sha1: bd5d6f116416d1b410d57ce00cb3e2abf6de102b
  ceph-deploy:
    branch:
      dev: dumpling
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  install:
    ceph:
      sha1: bd5d6f116416d1b410d57ce00cb3e2abf6de102b
  s3tests:
    branch: dumpling
  workunit:
    sha1: bd5d6f116416d1b410d57ce00cb3e2abf6de102b
owner: scheduled_teuthology@teuthology
roles:
- - mon.a
  - mds.a
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mon.c
  - osd.3
  - osd.4
  - osd.5
  - client.0
suite: upgrade:dumpling
targets:
  ubuntu@vpm011.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5CkkjZtvY96dBuT9Qa4uY2wfNJqgbFGnRCv5QUbFv6b+aD2/Or0XLqK4K/lGQVILuCK+yajo2D2vwzlZU7NSlrA79ZgHjrLgaAw+otdVDYQ7UznnJ7x049HfSiUndcnNZRRgoGziwbBoXwIC4TIt2Q0GwzR7+KGyl1RaLhsTK1mA6QECU+Rrlvhwbw/v6Qc0pX1J6M5jQtE9uC5fSQIwxB7Mich330HN0v8B1yWHCJDOdXLJz+qJSRjaVYZG1TIJTKBpwXtqps2UvQrz/F6lNhoO+VL4L/yHVetjaBr+dG+jSU1wBhwVT8jD5yKJtaF0Khhug1362Zhp+as6eTaNZ
  ubuntu@vpm016.front.sepia.ceph.com: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCYkxoEJsOn8Ps4h7AOw5obb/ug7carf5JZNHp+a6i1MzyzZZbo0tcMnYzehaHEbgcQ2Fwnk1BICeDpEQtoSzRm1xZLH2o5wTGR0VrD9u3TrQtK//efPt4i6B4ofwJ6NcPzL29W/DLEqmjR5UDbDEPmhX5kC+gV/+h+OXorM34mH0yQK8adyY56YK4tnhJIUctRUCEg/xFq/cwXUlMeNdKNOEAHtYUlO0SjYA6zPeZIgZWA2v4m3luXNMK5jZaSZyTT12J0tlGQpazJzdKd6HY0gyqZUGNNwLFKz3MaAWuNsfRrwMMCNvUG1hl5DFRBS5VZ8VkkNkUNKtYNMk1+gCVd
tasks:
- internal.lock_machines:
  - 2
  - vps
- internal.save_config: null
- internal.check_lock: null
- internal.connect: null
- internal.check_conflict: null
- internal.check_ceph_data: null
- internal.vm_setup: null
- kernel: *id001
- internal.base: null
- internal.archive: null
- internal.coredump: null
- internal.syslog: null
- internal.timer: null
- chef: null
- clock.check: null
- install:
    branch: cuttlefish
- ceph: null
- install.upgrade:
    all:
      tag: v0.67.1
- ceph.restart: null
- parallel:
  - workload
  - upgrade-sequence
- mon_thrash:
    revive_delay: 20
    thrash_delay: 1
- swift:
    client.0:
      rgw_server: client.0
teuthology_branch: dumpling
upgrade-sequence:
  sequential:
  - install.upgrade:
      all:
        branch: dumpling
  - ceph.restart:
    - mon.a
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.b
  - sleep:
      duration: 60
  - ceph.restart:
    - mon.c
  - sleep:
      duration: 60
  - ceph.restart:
    - mds.a
  - sleep:
      duration: 60
  - ceph.restart:
    - osd.0
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.1
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.2
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.3
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.4
  - sleep:
      duration: 30
  - ceph.restart:
    - osd.5
  - sleep:
      duration: 30
  - ceph.restart:
    - rgw.client.0
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.vps.19345
workload:
  sequential:
  - rgw:
    - client.0
  - s3tests:
      client.0:
        rgw_server: client.0
description: upgrade/dumpling/rgw/{0-cluster/start.yaml 1-dumpling-install/cuttlefish.v0.67.1.yaml 2-workload/testrgw.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final/monthrash.yaml}
duration: 486.2718291282654
failure_reason: 'Command crashed: ''/home/ubuntu/cephtest/adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage radosgw-admin -n client.0 user rm --uid bar.client.0 --purge-data'''
flavor: basic
mon.a-kernel-sha1: 335cb91ce950ce0e12294af671c64a468d89194c
mon.b-kernel-sha1: 335cb91ce950ce0e12294af671c64a468d89194c
owner: scheduled_teuthology@teuthology
success: false
Updated by Zack Cerza almost 10 years ago
- Project changed from teuthology to rgw
Updated by Sage Weil about 9 years ago
- Status changed from New to Can't reproduce