Bug #7689 (Closed)
librados: ENOENT on ioctx create
% Done: 0%
Source: Q/A
Severity: 3 - minor
Description
I think this is a race with pool creation, mon quorum changes, and new client startup getting a not-quite-fresh osdmap:
2014-03-11T13:47:41.704 INFO:teuthology.task.rados.ceph_manager:creating pool_name unique_pool_0
2014-03-11T13:47:41.704 DEBUG:teuthology.orchestra.run:Running [10.214.132.37]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd pool create unique_pool_0 1'
2014-03-11T13:47:41.839 DEBUG:teuthology.orchestra.run:Running [10.214.132.37]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph -m 10.214.132.37:6793 mon_status'
2014-03-11T13:47:42.063 DEBUG:teuthology.orchestra.run:Running [10.214.132.37]: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph -m 10.214.132.24:6789 mon_status'
2014-03-11T13:47:42.140 INFO:teuthology.orchestra.run.err:[10.214.132.37]: pool 'unique_pool_0' created
2014-03-11T13:47:42.151 DEBUG:teuthology.orchestra.run:Running [10.214.132.24]: 'CEPH_CLIENT_ID=0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph_test_rados --op read 100 --op write 100 --op delete 50 --max-ops 4000 --objects 50 --max-in-flight 16 --size 4000000 --min-stride-size 400000 --max-stride-size 800000 --max-seconds 0 --op copy_from 50 --op snap_create 50 --op snap_remove 50 --op rollback 50 --pool unique_pool_0'
2014-03-11T13:47:42.171 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: adding op weight read -> 100
2014-03-11T13:47:42.171 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: adding op weight write -> 100
2014-03-11T13:47:42.171 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: adding op weight delete -> 50
2014-03-11T13:47:42.171 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: adding op weight copy_from -> 50
2014-03-11T13:47:42.172 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: adding op weight snap_create -> 50
2014-03-11T13:47:42.172 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: adding op weight snap_remove -> 50
2014-03-11T13:47:42.172 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: adding op weight rollback -> 50
2014-03-11T13:47:42.172 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: ceph version 0.77-791-g1249b0b (1249b0bd76bde12796652aa5dc52ecba51fd52ce)
2014-03-11T13:47:42.173 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: Configuration:
2014-03-11T13:47:42.173 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: Number of operations: 4000
2014-03-11T13:47:42.173 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: Number of objects: 50
2014-03-11T13:47:42.173 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: Max in flight operations: 16
2014-03-11T13:47:42.173 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: Object size (in bytes): 4000000
2014-03-11T13:47:42.174 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: Write stride min: 400000
2014-03-11T13:47:42.174 INFO:teuthology.task.rados.rados.0.out:[10.214.132.24]: Write stride max: 800000
2014-03-11T13:47:42.181 INFO:teuthology.task.rados.rados.0.err:[10.214.132.24]: Error initializing rados test context: (2) No such file or directory
ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-03-11_02:30:01-rados-firefly-distro-basic-plana/125883
I saw this a few days earlier, too :(
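If the race described above is the cause, one client-side mitigation is to treat ENOENT from ioctx creation as transient and retry while the client's osdmap catches up with the freshly created pool. The sketch below is a hypothetical helper, not part of the test or of librados: `cluster` and `open_ioctx()` stand in for whatever client object is in use (the librados Python bindings raise their own exception type rather than `OSError`, so the error handling would need adapting).

```python
import errno
import time

def create_ioctx_with_retry(cluster, pool_name, attempts=5, delay=0.5):
    """Hypothetical workaround sketch: retry ioctx creation on ENOENT,
    giving the client time to fetch an osdmap that includes the new pool.

    Assumes `cluster.open_ioctx(name)` raises an OSError-like exception
    carrying errno.ENOENT when the pool is not yet in the client's osdmap.
    """
    for i in range(attempts):
        try:
            return cluster.open_ioctx(pool_name)
        except OSError as e:
            if e.errno != errno.ENOENT or i == attempts - 1:
                raise  # a different error, or out of retries: give up
            # Back off and retry; a new osdmap should arrive shortly.
            time.sleep(delay * (2 ** i))
```

A more direct fix would be for the client to explicitly wait for the latest osdmap from the monitors before giving up, rather than failing on the first stale map.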
Updated by Sage Weil about 10 years ago
Actually, no mon_command creating the pool appears in the logs; that does not jibe with the teuthology log.
Updated by Sage Weil about 10 years ago
- Priority changed from Urgent to High
Downgrading this weirdness until I see it again.