Bug #10532
closed
Failed "joining thrashosds" - suspected low memory config on vps
Added by Yuri Weinstein over 9 years ago.
Updated over 8 years ago.
Description
Logs are in http://qa-proxy.ceph.com/teuthology/teuthology-2015-01-11_17:05:01-upgrade:giant-x-next-distro-basic-vps/698063/
2015-01-12T21:46:45.720 INFO:teuthology.orchestra.run.vpm106.stdout:successfully deleted pool unique_pool_2
2015-01-12T21:46:45.722 DEBUG:teuthology.run_tasks:Unwinding manager ceph.restart
2015-01-12T21:46:45.722 DEBUG:teuthology.run_tasks:Unwinding manager rados
2015-01-12T21:46:45.722 INFO:tasks.rados:joining rados
2015-01-12T21:46:45.722 DEBUG:teuthology.run_tasks:Unwinding manager rados
2015-01-12T21:46:45.723 INFO:tasks.rados:joining rados
2015-01-12T21:46:45.723 DEBUG:teuthology.run_tasks:Unwinding manager ceph.restart
2015-01-12T21:46:45.723 DEBUG:teuthology.run_tasks:Unwinding manager thrashosds
2015-01-12T21:46:45.723 INFO:tasks.thrashosds:joining thrashosds
2015-01-12T21:46:45.723 ERROR:teuthology.run_tasks:Manager failed: thrashosds
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 119, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_next/tasks/thrashosds.py", line 174, in task
    thrash_proc.do_join()
  File "/var/lib/teuthworker/src/ceph-qa-suite_next/tasks/ceph_manager.py", line 314, in do_join
    self.thread.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
Exception: timed out waiting for admin_socket to appear after osd.10 restart
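For context, the failure mode behind this exception is a poll-with-timeout on the OSD's admin socket file: after a restart, the thrasher waits for the socket to appear, and on a slow VPS the OSD can take longer than the allowed window, so the wait expires and the thrasher greenlet raises, surfacing at `do_join()` as shown above. A minimal sketch of that kind of check (hypothetical helper name and defaults; not the actual `ceph_manager.py` code):

```python
import os
import time

def wait_for_admin_socket(path, timeout=60.0, interval=1.0):
    """Poll for an admin socket file at `path`.

    Returns True if the file appears within `timeout` seconds,
    False otherwise. Illustrative only -- the real teuthology
    check lives in ceph_manager.py and raises on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False

# The caller would translate False into the exception seen in the log:
#   Exception: timed out waiting for admin_socket to appear after osd.10 restart
```

On underpowered or low-memory VPS nodes the OSD simply comes up too slowly for the poll window, which is why the batch was ultimately attributed to the vps environment rather than a Ceph bug.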
See the same in http://qa-proxy.ceph.com/teuthology/teuthology-2015-01-13_17:18:01-upgrade:firefly-x-next-distro-basic-vps/701660/
2015-01-14T08:48:14.050 INFO:teuthology.orchestra.run.vpm054:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rados rmpool unique_pool_2 unique_pool_2 --yes-i-really-really-mean-it'
2015-01-14T08:48:14.225 INFO:teuthology.orchestra.run.vpm054.stdout:successfully deleted pool unique_pool_2
2015-01-14T08:48:14.227 DEBUG:teuthology.run_tasks:Unwinding manager ceph.restart
2015-01-14T08:48:14.227 DEBUG:teuthology.run_tasks:Unwinding manager rados
2015-01-14T08:48:14.227 INFO:tasks.rados:joining rados
2015-01-14T08:48:14.228 DEBUG:teuthology.run_tasks:Unwinding manager rados
2015-01-14T08:48:14.228 INFO:tasks.rados:joining rados
2015-01-14T08:48:14.228 DEBUG:teuthology.run_tasks:Unwinding manager ceph.restart
2015-01-14T08:48:14.228 DEBUG:teuthology.run_tasks:Unwinding manager thrashosds
2015-01-14T08:48:14.228 INFO:tasks.thrashosds:joining thrashosds
2015-01-14T08:48:14.229 ERROR:teuthology.run_tasks:Manager failed: thrashosds
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/run_tasks.py", line 119, in run_tasks
    suppress = manager.__exit__(*exc_info)
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/var/lib/teuthworker/src/ceph-qa-suite_next/tasks/thrashosds.py", line 174, in task
    thrash_proc.do_join()
  File "/var/lib/teuthworker/src/ceph-qa-suite_next/tasks/ceph_manager.py", line 314, in do_join
    self.thread.get()
  File "/usr/lib/python2.7/dist-packages/gevent/greenlet.py", line 308, in get
    raise self._exception
Exception: timed out waiting for admin_socket to appear after osd.13 restart
Sam: slow vps on this one.
- Priority changed from Normal to Urgent
- Status changed from New to Rejected
I'm going to declare this batch to be vps related.
- Project changed from Ceph to devops
- Status changed from Rejected to New
- Assignee set to Sage Weil
Sage, assigned to you for prioritization.
- Subject changed from Failed "joining thrashosds" in upgrade:giant-x-next-distro-basic-vps run to Failed "joining thrashosds" - suspected low memory config on vps
- Status changed from New to Won't Fix
- Regression set to No