Bug #12677

agent has AttributeErrors on wrong exception references

Added by Alfredo Deza over 8 years ago. Updated over 8 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Target version:
-
% Done:

0%

Source:
other
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

The agent is causing errors like:

2015-08-12 03:48:37,789.789 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 03:48:37,782 13877 [radosgw_agent.worker][WARNING] error unlocking log, continuing anyway since lock will timeout. Traceback:
2015-08-12 03:48:37,789.789 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Traceback (most recent call last):
2015-08-12 03:48:37,790.790 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/worker.py", line 65, in unlock_shard
2015-08-12 03:48:37,790.790 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    self.lock.release_and_clear()
2015-08-12 03:48:37,790.790 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/lock.py", line 98, in release_and_clear
2015-08-12 03:48:37,790.790 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    except client.HttpError as e:
2015-08-12 03:48:37,791.791 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:AttributeError: 'module' object has no attribute 'HttpError'

There are a few places where this exception is referenced incorrectly: `HttpError` needs to be imported from `radosgw_agent.exc`, not from `client`.
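A minimal sketch of why this bug only surfaces at runtime (the module and class names below are illustrative stand-ins, not the agent's actual code): Python evaluates the expression in an `except` clause lazily, only when an exception is actually in flight, so a dangling attribute reference passes import and compiles cleanly but raises `AttributeError` the first time the handler is hit.

```python
class FakeClientModule:
    """Stands in for a module like `client` that does not define HttpError."""
    pass

class HttpError(Exception):
    """Stands in for the class that actually exists, e.g. radosgw_agent.exc.HttpError."""
    pass

client = FakeClientModule()

def release_lock_broken():
    try:
        raise HttpError("409 Conflict")
    except client.HttpError:  # evaluated only now -> AttributeError, masking the original error
        return "handled"

def release_lock_fixed():
    try:
        raise HttpError("409 Conflict")
    except HttpError:  # reference the exception class that really exists
        return "handled"

try:
    release_lock_broken()
except AttributeError as e:
    print("broken handler raised:", e)

print("fixed handler:", release_lock_fixed())
```

Note that the broken handler never catches anything: evaluating `client.HttpError` raises a fresh `AttributeError`, which is what shows up in the agent's log instead of the lock-release error it was meant to handle.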

History

#1 Updated by Alfredo Deza over 8 years ago

  • Status changed from 12 to Fix Under Review

#2 Updated by Vasu Kulkarni over 8 years ago

The fix resolves the AttributeError previously seen in issue #12661, but the run now fails differently:

[...] getting bucket list
2015-08-12 19:08:07,541.541 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:07,527 18188 [radosgw_agent.client][INFO  ] creating connection to endpoint: http://vpm174.front.sepia.ceph.com:7280
2015-08-12 19:08:07,542.542 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:07,528 18188 [radosgw_agent.client][INFO  ] creating connection to endpoint: http://vpm174.front.sepia.ceph.com:7281
2015-08-12 19:08:07,545.545 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:07,530 18188 [radosgw_agent.sync][INFO  ] waiting to make sure bucket log is consistent
2015-08-12 19:08:37,582.582 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:37,567 18188 [radosgw_agent.sync][INFO  ] Starting sync
2015-08-12 19:08:37,657.657 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:37,643 19542 [radosgw_agent.worker][DEBUG ] bucket instance is "myfoodata:r0z0.4144.4" with marker 0#00000000002.2.3
2015-08-12 19:08:37,657.657 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:37,643 19542 [radosgw_agent.worker][INFO  ] ********************************************************************************
2015-08-12 19:08:37,657.657 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:37,643 19542 [radosgw_agent.worker][INFO  ] syncing bucket "myfoodata" 
2015-08-12 19:08:37,714.714 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:37,700 19542 [radosgw_agent.worker][DEBUG ] syncing object myfoodata/tiny_file
2015-08-12 19:08:37,728.728 INFO:tasks.rgw.client.1.vpm174.stdout:*** Caught signal (Segmentation fault) **
2015-08-12 19:08:37,728.728 INFO:tasks.rgw.client.1.vpm174.stdout: in thread 7f76d27f4700
2015-08-12 19:08:37,731.731 INFO:tasks.rgw.client.1.vpm174.stdout: ceph version 9.0.2-1252-gd8d6bb8 (d8d6bb898d976a479c33baebcd4ade0ef120a18e)
2015-08-12 19:08:37,731.731 INFO:tasks.rgw.client.1.vpm174.stdout: 1: (()+0x2ef91a) [0x7f77104c391a]
2015-08-12 19:08:37,731.731 INFO:tasks.rgw.client.1.vpm174.stdout: 2: (()+0x10340) [0x7f770c880340]
2015-08-12 19:08:37,732.732 INFO:tasks.rgw.client.1.vpm174.stdout: 3: (strlen()+0x2a) [0x7f770af5faea]
2015-08-12 19:08:37,732.732 INFO:tasks.rgw.client.1.vpm174.stdout: 4: (req_info::req_info(CephContext*, RGWEnv*)+0x253) [0x7f77104486f3]
2015-08-12 19:08:37,732.732 INFO:tasks.rgw.client.1.vpm174.stdout: 5: (RGWRESTStreamReadRequest::get_obj(RGWAccessKey&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, rgw_obj&)+0x179) [0x7f7710480859]
2015-08-12 19:08:37,732.732 INFO:tasks.rgw.client.1.vpm174.stdout: 6: (RGWRESTConn::get_obj(std::string const&, req_info*, rgw_obj&, bool, RGWGetDataCB*, RGWRESTStreamReadRequest**)+0x70b) [0x7f7710427b5b]
2015-08-12 19:08:37,732.732 INFO:tasks.rgw.client.1.vpm174.stdout: 7: (RGWRados::fetch_remote_obj(RGWObjectCtx&, std::string const&, std::string const&, std::string const&, req_info*, std::string const&, rgw_obj&, rgw_obj&, RGWBucketInfo&, RGWBucketInfo&, long*, long*, long const*, long const*, char const*, char const*, RGWRados::AttrsMod, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >&, RGWObjCategory, unsigned long, std::string*, std::string*, std::string*, rgw_err*, void (*)(long, void*), void*)+0x40c) [0x7f7710408edc]
2015-08-12 19:08:37,732.732 INFO:tasks.rgw.client.1.vpm174.stdout: 8: (RGWRados::copy_obj(RGWObjectCtx&, std::string const&, std::string const&, std::string const&, req_info*, std::string const&, rgw_obj&, rgw_obj&, RGWBucketInfo&, RGWBucketInfo&, long*, long*, long const*, long const*, char const*, char const*, RGWRados::AttrsMod, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >&, RGWObjCategory, unsigned long, std::string*, std::string*, std::string*, rgw_err*, void (*)(long, void*), void*)+0x395) [0x7f771040a005]
2015-08-12 19:08:37,733.733 INFO:tasks.rgw.client.1.vpm174.stdout: 9: (RGWCopyObj::execute()+0x1ae) [0x7f77104333de]
2015-08-12 19:08:37,733.733 INFO:tasks.rgw.client.1.vpm174.stdout: 10: (()+0x1abd10) [0x7f771037fd10]
2015-08-12 19:08:37,733.733 INFO:tasks.rgw.client.1.vpm174.stdout: 11: (()+0x1ac928) [0x7f7710380928]
2015-08-12 19:08:37,733.733 INFO:tasks.rgw.client.1.vpm174.stdout: 12: (()+0x2b711f) [0x7f771048b11f]
2015-08-12 19:08:37,733.733 INFO:tasks.rgw.client.1.vpm174.stdout: 13: (()+0x2b90ee) [0x7f771048d0ee]
2015-08-12 19:08:37,733.733 INFO:tasks.rgw.client.1.vpm174.stdout: 14: (()+0x8182) [0x7f770c878182]
2015-08-12 19:08:37,733.733 INFO:tasks.rgw.client.1.vpm174.stdout: 15: (clone()+0x6d) [0x7f770afd147d]
2015-08-12 19:08:37,734.734 INFO:tasks.rgw.client.1.vpm174.stdout:2015-08-12 19:08:37.716882 7f76d27f4700 -1 *** Caught signal (Segmentation fault) **
2015-08-12 19:08:37,734.734 INFO:tasks.rgw.client.1.vpm174.stdout: in thread 7f76d27f4700
2015-08-12 19:08:37,734.734 INFO:tasks.rgw.client.1.vpm174.stdout:
2015-08-12 19:08:37,734.734 INFO:tasks.rgw.client.1.vpm174.stdout: ceph version 9.0.2-1252-gd8d6bb8 (d8d6bb898d976a479c33baebcd4ade0ef120a18e)
2015-08-12 19:08:37,734.734 INFO:tasks.rgw.client.1.vpm174.stdout: 1: (()+0x2ef91a) [0x7f77104c391a]
2015-08-12 19:08:37,734.734 INFO:tasks.rgw.client.1.vpm174.stdout: 2: (()+0x10340) [0x7f770c880340]
2015-08-12 19:08:37,734.734 INFO:tasks.rgw.client.1.vpm174.stdout: 3: (strlen()+0x2a) [0x7f770af5faea]
2015-08-12 19:08:37,735.735 INFO:tasks.rgw.client.1.vpm174.stdout: 4: (req_info::req_info(CephContext*, RGWEnv*)+0x253) [0x7f77104486f3]
2015-08-12 19:08:37,735.735 INFO:tasks.rgw.client.1.vpm174.stdout: 5: (RGWRESTStreamReadRequest::get_obj(RGWAccessKey&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, rgw_obj&)+0x179) [0x7f7710480859]
2015-08-12 19:08:37,735.735 INFO:tasks.rgw.client.1.vpm174.stdout: 6: (RGWRESTConn::get_obj(std::string const&, req_info*, rgw_obj&, bool, RGWGetDataCB*, RGWRESTStreamReadRequest**)+0x70b) [0x7f7710427b5b]
2015-08-12 19:08:37,735.735 INFO:tasks.rgw.client.1.vpm174.stdout: 7: (RGWRados::fetch_remote_obj(RGWObjectCtx&, std::string const&, std::string const&, std::string const&, req_info*, std::string const&, rgw_obj&, rgw_obj&, RGWBucketInfo&, RGWBucketInfo&, long*, long*, long const*, long const*, char const*, char const*, RGWRados::AttrsMod, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >&, RGWObjCategory, unsigned long, std::string*, std::string*, std::string*, rgw_err*, void (*)(long, void*), void*)+0x40c) [0x7f7710408edc]
2015-08-12 19:08:37,735.735 INFO:tasks.rgw.client.1.vpm174.stdout: 8: (RGWRados::copy_obj(RGWObjectCtx&, std::string const&, std::string const&, std::string const&, req_info*, std::string const&, rgw_obj&, rgw_obj&, RGWBucketInfo&, RGWBucketInfo&, long*, long*, long const*, long const*, char const*, char const*, RGWRados::AttrsMod, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >&, RGWObjCategory, unsigned long, std::string*, std::string*, std::string*, rgw_err*, void (*)(long, void*), void*)+0x395) [0x7f771040a005]
2015-08-12 19:08:37,735.735 INFO:tasks.rgw.client.1.vpm174.stdout: 9: (RGWCopyObj::execute()+0x1ae) [0x7f77104333de]
2015-08-12 19:08:37,735.735 INFO:tasks.rgw.client.1.vpm174.stdout: 10: (()+0x1abd10) [0x7f771037fd10]
2015-08-12 19:08:37,736.736 INFO:tasks.rgw.client.1.vpm174.stdout: 11: (()+0x1ac928) [0x7f7710380928]
2015-08-12 19:08:37,736.736 INFO:tasks.rgw.client.1.vpm174.stdout: 12: (()+0x2b711f) [0x7f771048b11f]
2015-08-12 19:08:37,736.736 INFO:tasks.rgw.client.1.vpm174.stdout: 13: (()+0x2b90ee) [0x7f771048d0ee]
2015-08-12 19:08:37,736.736 INFO:tasks.rgw.client.1.vpm174.stdout: 14: (()+0x8182) [0x7f770c878182]
2015-08-12 19:08:37,736.736 INFO:tasks.rgw.client.1.vpm174.stdout: 15: (clone()+0x6d) [0x7f770afd147d]
2015-08-12 19:08:37,736.736 INFO:tasks.rgw.client.1.vpm174.stdout: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2015-08-12 19:08:37,737.737 INFO:tasks.rgw.client.1.vpm174.stdout:
2015-08-12 19:08:37,803.803 INFO:tasks.rgw.client.1.vpm174.stdout:     0> 2015-08-12 19:08:37.716882 7f76d27f4700 -1 *** Caught signal (Segmentation fault) **
2015-08-12 19:08:37,804.804 INFO:tasks.rgw.client.1.vpm174.stdout: in thread 7f76d27f4700
2015-08-12 19:08:37,804.804 INFO:tasks.rgw.client.1.vpm174.stdout:
2015-08-12 19:08:37,804.804 INFO:tasks.rgw.client.1.vpm174.stdout: ceph version 9.0.2-1252-gd8d6bb8 (d8d6bb898d976a479c33baebcd4ade0ef120a18e)
2015-08-12 19:08:37,804.804 INFO:tasks.rgw.client.1.vpm174.stdout: 1: (()+0x2ef91a) [0x7f77104c391a]
2015-08-12 19:08:37,805.805 INFO:tasks.rgw.client.1.vpm174.stdout: 2: (()+0x10340) [0x7f770c880340]
2015-08-12 19:08:37,805.805 INFO:tasks.rgw.client.1.vpm174.stdout: 3: (strlen()+0x2a) [0x7f770af5faea]
2015-08-12 19:08:37,805.805 INFO:tasks.rgw.client.1.vpm174.stdout: 4: (req_info::req_info(CephContext*, RGWEnv*)+0x253) [0x7f77104486f3]
2015-08-12 19:08:37,805.805 INFO:tasks.rgw.client.1.vpm174.stdout: 5: (RGWRESTStreamReadRequest::get_obj(RGWAccessKey&, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > >&, rgw_obj&)+0x179) [0x7f7710480859]
2015-08-12 19:08:37,805.805 INFO:tasks.rgw.client.1.vpm174.stdout: 6: (RGWRESTConn::get_obj(std::string const&, req_info*, rgw_obj&, bool, RGWGetDataCB*, RGWRESTStreamReadRequest**)+0x70b) [0x7f7710427b5b]
2015-08-12 19:08:37,806.806 INFO:tasks.rgw.client.1.vpm174.stdout: 7: (RGWRados::fetch_remote_obj(RGWObjectCtx&, std::string const&, std::string const&, std::string const&, req_info*, std::string const&, rgw_obj&, rgw_obj&, RGWBucketInfo&, RGWBucketInfo&, long*, long*, long const*, long const*, char const*, char const*, RGWRados::AttrsMod, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >&, RGWObjCategory, unsigned long, std::string*, std::string*, std::string*, rgw_err*, void (*)(long, void*), void*)+0x40c) [0x7f7710408edc]
2015-08-12 19:08:37,806.806 INFO:tasks.rgw.client.1.vpm174.stdout: 8: (RGWRados::copy_obj(RGWObjectCtx&, std::string const&, std::string const&, std::string const&, req_info*, std::string const&, rgw_obj&, rgw_obj&, RGWBucketInfo&, RGWBucketInfo&, long*, long*, long const*, long const*, char const*, char const*, RGWRados::AttrsMod, std::map<std::string, ceph::buffer::list, std::less<std::string>, std::allocator<std::pair<std::string const, ceph::buffer::list> > >&, RGWObjCategory, unsigned long, std::string*, std::string*, std::string*, rgw_err*, void (*)(long, void*), void*)+0x395) [0x7f771040a005]
2015-08-12 19:08:37,806.806 INFO:tasks.rgw.client.1.vpm174.stdout: 9: (RGWCopyObj::execute()+0x1ae) [0x7f77104333de]
2015-08-12 19:08:37,806.806 INFO:tasks.rgw.client.1.vpm174.stdout: 10: (()+0x1abd10) [0x7f771037fd10]
2015-08-12 19:08:37,806.806 INFO:tasks.rgw.client.1.vpm174.stdout: 11: (()+0x1ac928) [0x7f7710380928]
2015-08-12 19:08:37,806.806 INFO:tasks.rgw.client.1.vpm174.stdout: 12: (()+0x2b711f) [0x7f771048b11f]
2015-08-12 19:08:37,807.807 INFO:tasks.rgw.client.1.vpm174.stdout: 13: (()+0x2b90ee) [0x7f771048d0ee]
2015-08-12 19:08:37,807.807 INFO:tasks.rgw.client.1.vpm174.stdout: 14: (()+0x8182) [0x7f770c878182]
2015-08-12 19:08:37,807.807 INFO:tasks.rgw.client.1.vpm174.stdout: 15: (clone()+0x6d) [0x7f770afd147d]
2015-08-12 19:08:37,807.807 INFO:tasks.rgw.client.1.vpm174.stdout: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
2015-08-12 19:08:37,807.807 INFO:tasks.rgw.client.1.vpm174.stdout:
2015-08-12 19:08:38,048.048 INFO:tasks.rgw.client.1.vpm174.stderr:daemon-helper: command crashed with signal 11
2015-08-12 19:08:48,779.779 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:48,765 19542 [radosgw_agent.worker][WARNING] encountered an error during sync: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:08:58,257.257 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:08:58,242 19542 [radosgw_agent.worker][DEBUG ] error geting op state: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:08:58,257.257 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Traceback (most recent call last):
2015-08-12 19:08:58,258.258 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/worker.py", line 273, in wait_for_object
2015-08-12 19:08:58,258.258 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    bucket, obj)
2015-08-12 19:08:58,258.258 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 250, in get_op_state
2015-08-12 19:08:58,259.259 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    'client-id': client_id,
2015-08-12 19:08:58,259.259 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 207, in request
2015-08-12 19:08:58,259.259 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    raise NetworkError(msg)
2015-08-12 19:08:58,259.259 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:NetworkError: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:09:07,786.786 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:07,772 19542 [radosgw_agent.worker][DEBUG ] error geting op state: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:09:07,786.786 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Traceback (most recent call last):
2015-08-12 19:09:07,787.787 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/worker.py", line 273, in wait_for_object
2015-08-12 19:09:07,787.787 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    bucket, obj)
2015-08-12 19:09:07,787.787 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 250, in get_op_state
2015-08-12 19:09:07,787.787 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    'client-id': client_id,
2015-08-12 19:09:07,787.787 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 207, in request
2015-08-12 19:09:07,788.788 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    raise NetworkError(msg)
2015-08-12 19:09:07,788.788 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:NetworkError: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:09:08,788.788 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:08,773 19542 [radosgw_agent.worker][ERROR ] failed to sync object myfoodata/tiny_file:
2015-08-12 19:09:08,788.788 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:08,774 19542 [radosgw_agent.worker][WARNING] will retry sync of failed object at next incremental sync
2015-08-12 19:09:08,789.789 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:08,774 19542 [radosgw_agent.worker][INFO  ] synced 0 objects
2015-08-12 19:09:08,789.789 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:08,775 19542 [radosgw_agent.worker][INFO  ] completed syncing bucket "myfoodata" 
2015-08-12 19:09:08,789.789 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:08,775 19542 [radosgw_agent.worker][INFO  ] ********************************************************************************
2015-08-12 19:09:08,815.815 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:08,776 19542 [radosgw_agent.worker][WARNING] error setting worker bound for key "myfoodata:r0z0.4144.4", may duplicate some work later. Traceback:
2015-08-12 19:09:08,815.815 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Traceback (most recent call last):
2015-08-12 19:09:08,815.815 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/worker.py", line 87, in set_bound
2015-08-12 19:09:08,815.815 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    data=data)
2015-08-12 19:09:08,816.816 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 493, in set_worker_bound
2015-08-12 19:09:08,816.816 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    data=json.dumps(data),
2015-08-12 19:09:08,816.816 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/usr/lib/python2.7/json/__init__.py", line 243, in dumps
2015-08-12 19:09:08,816.816 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    return _default_encoder.encode(obj)
2015-08-12 19:09:08,816.816 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
2015-08-12 19:09:08,816.816 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    chunks = self.iterencode(o, _one_shot=True)
2015-08-12 19:09:08,817.817 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
2015-08-12 19:09:08,817.817 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    return _iterencode(o, 0)
2015-08-12 19:09:08,817.817 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/usr/lib/python2.7/json/encoder.py", line 184, in default
2015-08-12 19:09:08,817.817 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    raise TypeError(repr(o) + " is not JSON serializable")
2015-08-12 19:09:08,817.817 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:TypeError: <boto.s3.user.User object at 0x7f8a4e71c610> is not JSON serializable
2015-08-12 19:09:16,783.783 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Exception in thread Thread-1:
2015-08-12 19:09:16,783.783 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Traceback (most recent call last):
2015-08-12 19:09:16,783.783 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
2015-08-12 19:09:16,784.784 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    self.run()
2015-08-12 19:09:16,784.784 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/lock.py", line 109, in run
2015-08-12 19:09:16,784.784 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    self._acquire()
2015-08-12 19:09:16,784.784 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/lock.py", line 76, in _acquire
2015-08-12 19:09:16,784.784 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    self.zone_id, self.timeout, self.locker_id)
2015-08-12 19:09:16,785.785 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 424, in lock_shard
2015-08-12 19:09:16,785.785 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    expect_json=False)
2015-08-12 19:09:16,785.785 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 207, in request
2015-08-12 19:09:16,785.785 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    raise NetworkError(msg)
2015-08-12 19:09:16,785.785 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:NetworkError: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:09:16,785.785 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:
2015-08-12 19:09:24,012.012 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:23,996 19542 [radosgw_agent.worker][WARNING] error unlocking log, continuing anyway since lock will timeout. Traceback:
2015-08-12 19:09:24,012.012 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Traceback (most recent call last):
2015-08-12 19:09:24,013.013 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/worker.py", line 65, in unlock_shard
2015-08-12 19:09:24,013.013 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    self.lock.release_and_clear()
2015-08-12 19:09:24,013.013 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/lock.py", line 98, in release_and_clear
2015-08-12 19:09:24,013.013 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    self.zone_id, self.locker_id)
2015-08-12 19:09:24,013.013 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 436, in unlock_shard
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    expect_json=False)
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 207, in request
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    raise NetworkError(msg)
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:NetworkError: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:23,997 19542 [radosgw_agent.worker][INFO  ] finished syncing shard 124
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:23,997 19542 [radosgw_agent.worker][INFO  ] incremental sync will need to retry buckets: [u'myfoodata']
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:23,997 19542 [radosgw_agent.worker][INFO  ] No more entries in queue, exiting
2015-08-12 19:09:24,014.014 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:23,998 18188 [radosgw_agent.sync][DEBUG ] synced item (124, [u'myfoodata']) successfully
2015-08-12 19:09:32,698.698 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:32,683 18188 [radosgw_agent.sync][WARNING] could not set worker bounds, may repeat some work.Traceback:
2015-08-12 19:09:32,699.699 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:Traceback (most recent call last):
2015-08-12 19:09:32,699.699 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/sync.py", line 119, in complete_item
2015-08-12 19:09:32,700.700 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    data)
2015-08-12 19:09:32,700.700 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 494, in set_worker_bound
2015-08-12 19:09:32,700.700 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    special_first_param='work_bound',
2015-08-12 19:09:32,700.700 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:  File "/home/ubuntu/cephtest/radosgw-agent.client.0/radosgw_agent/client.py", line 207, in request
2015-08-12 19:09:32,700.700 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:    raise NetworkError(msg)
2015-08-12 19:09:32,700.700 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:NetworkError: unable to connect to vpm174.front.sepia.ceph.com:7281 [Errno 111] Connection refused
2015-08-12 19:09:32,701.701 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:32,684 18188 [radosgw_agent.sync][INFO  ] 1/1 items processed
2015-08-12 19:09:32,703.703 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:2015-08-12 19:09:32,685 18188 [radosgw_agent.sync][ERROR ] Encountered errors syncing these 1 shards: [u'myfoodata']
2015-08-12 19:09:32,703.703 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:10.214.140.50 - - [12/Aug/2015 19:09:32] "POST /data/full HTTP/1.1" 200 0
2015-08-12 19:09:32,703.703 INFO:tasks.radosgw_agent.ubuntu@vpm174.front.sepia.ceph.com.8000.syncdaemon.vpm174.stderr:10.214.140.50 - - [12/Aug/2015 19:09:32] "POST /data/full HTTP/1.1" 200 -
2015-08-12 19:10:34,165.165 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/ubuntu/teuthology/teuthology/run_tasks.py", line 53, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/ubuntu/teuthology/teuthology/run_tasks.py", line 41, in run_one_task
    return fn(**kwargs)
  File "/home/ubuntu/src/ceph-qa-suite_master/tasks/radosgw_admin.py", line 472, in task
    dest_k = dest_connection.get_bucket(bucket_name + 'data').get_key('tiny_file')
  File "/home/ubuntu/teuthology/virtualenv/local/lib/python2.7/site-packages/boto/s3/connection.py", line 502, in get_bucket
    return self.head_bucket(bucket_name, headers=headers)
  File "/home/ubuntu/teuthology/virtualenv/local/lib/python2.7/site-packages/boto/s3/connection.py", line 521, in head_bucket
    response = self.make_request('HEAD', bucket_name, headers=headers)
  File "/home/ubuntu/teuthology/virtualenv/local/lib/python2.7/site-packages/boto/s3/connection.py", line 664, in make_request
    retry_handler=retry_handler
  File "/home/ubuntu/teuthology/virtualenv/local/lib/python2.7/site-packages/boto/connection.py", line 1071, in make_request
    retry_handler=retry_handler)
  File "/home/ubuntu/teuthology/virtualenv/local/lib/python2.7/site-packages/boto/connection.py", line 1030, in _mexe
    raise ex
error: [Errno 111] Connection refused

#3 Updated by Alfredo Deza over 8 years ago

  • Status changed from Fix Under Review to Resolved

merged commit 5d0d60b into master
