Bug #13275
Closed - "Error ENOENT: unrecognized pool"
Added by Yuri Weinstein over 8 years ago. Updated over 8 years ago.
Description
Run: http://pulpito.ceph.com/teuthology-2015-09-27_17:10:01-upgrade:hammer-x-infernalis-distro-basic-vps/
Job: 1072831
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2015-09-27_17:10:01-upgrade:hammer-x-infernalis-distro-basic-vps/1072831/teuthology.log
2015-09-28T13:59:44.940 INFO:tasks.workunit.client.1.vpm097.stdout:[       OK ] LibRadosTwoPoolsPP.EvictSnap2 (8376 ms)
2015-09-28T13:59:44.941 INFO:tasks.workunit.client.1.vpm097.stdout:[ RUN      ] LibRadosTwoPoolsPP.TryFlush
2015-09-28T13:59:45.166 INFO:teuthology.orchestra.run.vpm185:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool get test-rados-api-vpm097-21564-21 pg_num'
2015-09-28T13:59:45.406 INFO:teuthology.orchestra.run.vpm185.stderr:Error ENOENT: unrecognized pool 'test-rados-api-vpm097-21564-21'
2015-09-28T13:59:45.416 ERROR:teuthology.parallel:Exception in parallel execution
Per Sage, IIRC the "AttributeError: 'function' object has no attribute 'info'" issue was fixed recently.
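For context, this class of AttributeError typically means a function object was stored where a logger instance was expected, for example `logging.getLogger` left uncalled. A minimal, hypothetical reproduction — not the actual teuthology code, which lived in ceph-qa-suite's tasks/ceph.py:

```python
import logging

# Hypothetical reproduction -- illustrative only, not the real ceph-qa-suite code.
log = logging.getLogger  # bug: the call is missing, so 'log' is the function itself

try:
    log.info("starting task")  # a plain function object has no .info attribute
    error = None
except AttributeError as exc:
    error = str(exc)

print(error)
```

The fix for this pattern is simply `log = logging.getLogger(__name__)`, after which `log.info(...)` works as intended.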
Updated by Yuri Weinstein over 8 years ago
- Project changed from teuthology to Ceph
Updated by Yuri Weinstein over 8 years ago
Run: http://pulpito.ceph.com/teuthology-2015-09-29_10:41:00-upgrade:firefly-hammer-x:parallel-infernalis-distro-basic-vps/
Jobs: ['1076295', '1076296', '1076298']
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2015-09-29_10:41:00-upgrade:firefly-hammer-x:parallel-infernalis-distro-basic-vps/1076295/teuthology.log
2015-09-29T11:18:53.514 INFO:teuthology.orchestra.run.vpm032:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2015-09-29T11:18:53.925 INFO:tasks.workunit.client.0.vpm129.stdout:[----------] 17 tests from LibRadosTwoPoolsPP (212345 ms total)
2015-09-29T11:18:53.925 INFO:tasks.workunit.client.0.vpm129.stdout:
2015-09-29T11:18:53.925 INFO:tasks.workunit.client.0.vpm129.stdout:[----------] 3 tests from LibRadosTierECPP
2015-09-29T11:18:54.070 INFO:teuthology.orchestra.run.vpm032.stderr:dumped all in format json
2015-09-29T11:18:55.246 INFO:tasks.ceph:Scrubbing osd osd.2
2015-09-29T11:18:55.247 INFO:teuthology.orchestra.run.vpm032:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd deep-scrub osd.2'
2015-09-29T11:18:55.596 INFO:teuthology.orchestra.run.vpm032.stderr:osd.2 instructed to deep-scrub
2015-09-29T11:18:55.614 INFO:tasks.ceph:Scrubbing osd osd.3
2015-09-29T11:18:55.614 INFO:teuthology.orchestra.run.vpm032:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd deep-scrub osd.3'
2015-09-29T11:18:55.908 INFO:teuthology.orchestra.run.vpm032.stderr:osd.3 instructed to deep-scrub
2015-09-29T11:18:55.922 INFO:tasks.ceph:Scrubbing osd osd.0
2015-09-29T11:18:55.922 INFO:teuthology.orchestra.run.vpm032:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd deep-scrub osd.0'
2015-09-29T11:18:59.218 INFO:teuthology.orchestra.run.vpm032.stderr:osd.0 instructed to deep-scrub
2015-09-29T11:18:59.233 INFO:tasks.ceph:Scrubbing osd osd.1
2015-09-29T11:18:59.233 INFO:teuthology.orchestra.run.vpm032:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd deep-scrub osd.1'
2015-09-29T11:18:59.560 INFO:teuthology.orchestra.run.vpm032.stderr:osd.1 instructed to deep-scrub
2015-09-29T11:18:59.574 INFO:teuthology.orchestra.run.vpm032:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph pg dump --format json'
2015-09-29T11:19:02.896 INFO:teuthology.orchestra.run.vpm032.stderr:dumped all in format json
2015-09-29T11:19:02.925 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthworker/src/teuthology_master/teuthology/contextutil.py", line 30, in nested
    yield vars
  File "/var/lib/teuthworker/src/ceph-qa-suite_infernalis/tasks/ceph.py", line 1372, in task
    yield
AttributeError: 'function' object has no attribute 'info'
2015-09-29T11:19:02.926 INFO:teuthology.misc:Shutting down mds daemons...
2015-09-29T11:19:02.926 DEBUG:tasks.ceph.mds.a:waiting for process to exit
2015-09-29T11:19:08.915 INFO:tasks.ceph.mds.a:Stopped
2015-09-29T11:19:08.915 INFO:teuthology.misc:Shutting down osd daemons...
2015-09-29T11:19:08.916 DEBUG:tasks.ceph.osd.1:waiting for process to exit
2015-09-29T11:19:12.088 INFO:tasks.workunit.client.0.vpm129.stdout:test/librados/TestCase.cc:216: Failure
2015-09-29T11:19:12.088 INFO:tasks.workunit.client.0.vpm129.stdout:Value of: create_one_ec_pool_pp(pool_name, s_cluster)
2015-09-29T11:19:12.088 INFO:tasks.workunit.client.0.vpm129.stdout:  Actual: "mon_command osd pool create pool:test-rados-api-vpm129-14707-25 pool_type:erasure failed with error -4"
2015-09-29T11:19:12.088 INFO:tasks.workunit.client.0.vpm129.stdout:Expected: ""
2015-09-29T11:19:12.088 INFO:tasks.workunit.client.0.vpm129.stdout:[ RUN      ] LibRadosTierECPP.Dirty
2015-09-29T11:19:12.089 INFO:tasks.workunit.client.0.vpm129.stderr:*** Caught signal (Segmentation fault) **
2015-09-29T11:19:12.089 INFO:tasks.workunit.client.0.vpm129.stderr: in thread 7ff63ff167c0
2015-09-29T11:19:12.091 INFO:tasks.workunit.client.0.vpm129.stderr: ceph version 0.80.10-138-gecabc67 (ecabc6796f19c03947bb5b14da4e1b761ff8847f)
2015-09-29T11:19:12.091 INFO:tasks.workunit.client.0.vpm129.stderr: 1: ceph_test_rados_api_tier() [0x4bf7df]
2015-09-29T11:19:12.091 INFO:tasks.workunit.client.0.vpm129.stderr: 2: (()+0x10340) [0x7ff63ebb0340]
2015-09-29T11:19:12.091 INFO:tasks.workunit.client.0.vpm129.stderr: 3: (__pthread_rwlock_rdlock()+0xa) [0x7ff63ebaba4a]
2015-09-29T11:19:12.092 INFO:tasks.workunit.client.0.vpm129.stderr: 4: (librados::RadosClient::lookup_pool(char const*)+0x47) [0x7ff63ef70da7]
2015-09-29T11:19:12.092 INFO:tasks.workunit.client.0.vpm129.stderr: 5: (librados::RadosClient::create_ioctx(char const*, librados::IoCtxImpl**)+0x1a) [0x7ff63ef711da]
2015-09-29T11:19:12.092 INFO:tasks.workunit.client.0.vpm129.stderr: 6: (rados_ioctx_create()+0x12) [0x7ff63ef52c12]
2015-09-29T11:19:12.092 INFO:tasks.workunit.client.0.vpm129.stderr: 7: (librados::Rados::ioctx_create(char const*, librados::IoCtx&)+0x15) [0x7ff63ef52c55]
2015-09-29T11:19:12.092 INFO:tasks.workunit.client.0.vpm129.stderr: 8: (RadosTestECPP::SetUp()+0x3a) [0x4cca7a]
2015-09-29T11:19:12.093 INFO:tasks.workunit.client.0.vpm129.stderr: 9: (testing::Test::Run()+0x43) [0x4b3253]
2015-09-29T11:19:12.093 INFO:tasks.workunit.client.0.vpm129.stderr: 10: (testing::internal::TestInfoImpl::Run()+0xd8) [0x4b3378]
2015-09-29T11:19:12.093 INFO:tasks.workunit.client.0.vpm129.stderr: 11: (testing::TestCase::Run()+0x95) [0x4b3415]
2015-09-29T11:19:12.093 INFO:tasks.workunit.client.0.vpm129.stderr: 12: (testing::internal::UnitTestImpl::RunAllTests()+0x247) [0x4b64d7]
2015-09-29T11:19:12.094 INFO:tasks.workunit.client.0.vpm129.stderr: 13: (main()+0x73) [0x42b0d3]
2015-09-29T11:19:12.094 INFO:tasks.workunit.client.0.vpm129.stderr: 14: (__libc_start_main()+0xf5) [0x7ff63ddd8ec5]
2015-09-29T11:19:12.094 INFO:tasks.workunit.client.0.vpm129.stderr: 15: ceph_test_rados_api_tier() [0x42c22e]
2015-09-29T11:19:12.094 INFO:tasks.workunit.client.0.vpm129.stderr:2015-09-29 18:19:12.083022 7ff63ff167c0 -1 *** Caught signal (Segmentation fault) **
2015-09-29T11:19:12.094 INFO:tasks.workunit.client.0.vpm129.stderr: in thread 7ff63ff167c0
2015-09-29T11:19:12.098 INFO:tasks.workunit.client.0.vpm129.stderr: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
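Reading the backtrace, the segfault appears to be a follow-on failure rather than the root cause: `create_one_ec_pool_pp()` had already failed (error -4), yet `RadosTestECPP::SetUp()` went on to call `ioctx_create()` against a pool that was never created. A minimal sketch of the guard that avoids this pattern — all names below are illustrative stand-ins, not the real librados API:

```python
# Hypothetical sketch of the failure pattern above: pool creation fails, but the
# caller ignores the error and later tries to open an I/O context on the missing
# pool. Checking the result up front turns a crash into a clear error.

def create_pool(name, monitors_up=True):
    """Stand-in for create_one_ec_pool_pp: returns "" on success, else an error string."""
    if not monitors_up:
        return "mon_command osd pool create pool:%s pool_type:erasure failed with error -4" % name
    return ""

def make_ioctx(pool_registry, name):
    """Stand-in for Rados::ioctx_create: only valid if the pool actually exists."""
    if name not in pool_registry:
        raise LookupError("unrecognized pool '%s'" % name)  # fail loudly, don't crash later
    return {"pool": name}

pools = {}
err = create_pool("test-pool", monitors_up=True)
if err == "":  # only register the pool if creation actually succeeded
    pools["test-pool"] = True

ioctx = make_ioctx(pools, "test-pool")
```

In the real fixture the equivalent check would be aborting the test (not just recording a gtest failure) when SetUp's pool creation returns a non-empty error.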
Updated by Sage Weil over 8 years ago
- Status changed from New to Resolved
I fixed this a couple of days ago; it should be okay now: https://github.com/ceph/ceph-qa-suite/commit/0e2814d81e882dbaa81bc7c3f0d7118fc4752451
The later instances are different issues.