Bug #20389
closed: "Error EPERM: min_compat_client jewel < luminous, which is required for pg-upmap" in powercycle
Status:
Won't Fix
Priority:
Immediate
Assignee:
-
Category:
-
Target version:
-
% Done:
0%
Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
powercycle
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
Run: http://pulpito.ceph.com/yuriw-2017-06-22_23:59:13-powercycle-wip-yuri-testing2_2017_7_22-distro-basic-smithi/
Job: 1317428
Logs: http://qa-proxy.ceph.com/teuthology/yuriw-2017-06-22_23:59:13-powercycle-wip-yuri-testing2_2017_7_22-distro-basic-smithi/1317428/teuthology.log
2017-06-23T00:30:35.502 INFO:teuthology.orchestra.run.smithi030:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 30 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok dump_blocked_ops'
2017-06-23T00:30:35.533 INFO:tasks.ceph.mon.a.smithi130.stderr:2017-06-23 00:30:35.535797 7f09bbfa5700 -1 bad boost::get: key id is not type long
2017-06-23T00:30:35.538 INFO:tasks.ceph.mon.a.smithi130.stderr:2017-06-23 00:30:35.540223 7f09bbfa5700 -1
2017-06-23T00:30:35.541 INFO:teuthology.orchestra.run.smithi130.stderr:Error EPERM: min_compat_client jewel < luminous, which is required for pg-upmap
2017-06-23T00:30:35.553 INFO:tasks.thrashosds.thrasher:Failed to rm-pg-upmap, ignoring
2017-06-23T00:30:35.569 INFO:tasks.ceph.osd.0.smithi146.stderr:2017-06-23 00:30:35.567789 7f9373967700 -1 received signal: Hangup from PID: 6510 task name: /usr/bin/python /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 0 UID: 0
2017-06-23T00:30:35.633 INFO:teuthology.orchestra.run.smithi030:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 30 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok dump_historic_ops'
2017-06-23T00:30:35.682 INFO:tasks.ceph.osd.1.smithi030.stderr:2017-06-23 00:30:35.684453 7f8072066700 -1 received signal: Hangup from PID: 6703 task name: /usr/bin/python /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1 UID: 0
2017-06-23T00:30:35.752 INFO:teuthology.orchestra.run.smithi146:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 30 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight'
2017-06-23T00:30:35.791 INFO:tasks.ceph.osd.1.smithi030.stderr:2017-06-23 00:30:35.793441 7f8072066700 -1 received signal: Hangup from PID: 6703 task name: /usr/bin/python /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1 UID: 0
2017-06-23T00:30:35.874 INFO:tasks.ceph.osd.0.smithi146.stderr:2017-06-23 00:30:35.871298 7f9373967700 -1 received signal: Hangup from PID: 6510 task name: /usr/bin/python /bin/daemon-helper kill ceph-osd -f --cluster ceph -i 0 UID: 0 ...
Updated by Yuri Weinstein almost 7 years ago
- Status changed from New to 12
Updated by Sage Weil almost 7 years ago
- Priority changed from Normal to Immediate
Updated by Sage Weil almost 7 years ago
- Status changed from 12 to Won't Fix
This is actually fine; we're ignoring errors from these commands (so the thrasher can keep working when the feature is unavailable, during an upgrade, etc.).
The problem in the failed run seems to be that the machines take a long time to reboot, and there were unfound objects during one of the failures, making rados bench cleanup slow.
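For illustration, the "log and ignore" behavior described above (the thrasher's `Failed to rm-pg-upmap, ignoring` line in the log) can be sketched roughly as follows. This is a minimal, hypothetical sketch, not the actual thrasher code; the function name and the use of plain `subprocess` are assumptions for the example.

```python
import subprocess

def run_cmd_ignoring_errors(args, action_name):
    """Run a command and swallow failures, logging instead of raising.

    Hypothetical sketch of the pattern described in the comment above:
    commands like rm-pg-upmap may legitimately fail (e.g. with EPERM when
    min_compat_client predates luminous), so the thrasher logs the failure
    and carries on rather than aborting the run.
    """
    try:
        subprocess.check_call(args)
        return True
    except subprocess.CalledProcessError:
        # Matches the spirit of the log line: "Failed to rm-pg-upmap, ignoring"
        print('Failed to %s, ignoring' % action_name)
        return False

# Simulate the EPERM case with a command that always exits nonzero:
run_cmd_ignoring_errors(['false'], 'rm-pg-upmap')
```

The key design point is that the failure is treated as expected rather than fatal, which is why this ticket was closed as Won't Fix.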