Bug #6058 (closed)
Parent: Bug #6057: osd: log bound mismatch after bobtail -> dumpling -> next upgrade
Upgrading from bobtail to dumpling to next: log bound mismatch and "wrong node" messages in the logs
% Done: 0%
Source: Q/A
Tags:
Backport:
Regression:
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):
Description
These failures are seen when running the rgw upgrade tests from bobtail to dumpling to the next branch.
Logs from the nightly failures: ubuntu@teuthology:/a/teuthology-2013-08-19_01:31:11-upgrade-next-testing-basic-plana/1459
Also, the failed jobs from that run:
1453 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:readwrite.yaml 3151s
1454 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:s3tests.yaml 2021s
1455 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:swift.yaml 2135s
1456 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:readwrite.yaml 3040s
1457 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:s3tests.yaml 2004s
1458 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:swift.yaml 2105s
1459 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:readwrite.yaml 2099s
1460 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:s3tests.yaml 1348s
1461 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:swift.yaml 1530s
1462 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:readwrite.yaml 2444s
1463 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:s3tests.yaml 1327s
1464 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:swift.yaml 1296s
1465 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:readwrite.yaml 2212s
1466 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:s3tests.yaml 1576s
1467 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:swift.yaml 1444s
1468 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:readwrite.yaml 2212s
1469 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:s3tests.yaml 1406s
1470 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_mon_mds_osd.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:swift.yaml 1508s
1471 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:readwrite.yaml 3009s
1472 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:s3tests.yaml 2242s
1473 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:swift.yaml 2385s
1474 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:readwrite.yaml 2770s
1475 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:s3tests.yaml 2230s
1476 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:readwrite.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:swift.yaml 2200s
1477 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:readwrite.yaml 2137s
1478 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:s3tests.yaml 1248s
1479 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:swift.yaml 1317s
1480 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:readwrite.yaml 2080s
1481 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:s3tests.yaml 1485s
1482 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:s3tests.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:swift.yaml 1520s
1483 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:readwrite.yaml 2185s
1484 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:s3tests.yaml 1434s
1485 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:mon-mds-osd.yaml 8-next-workload:swift.yaml 1351s
1486 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:readwrite.yaml 2193s
1487 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:s3tests.yaml 1430s
1488 FAIL scheduled_teuthology@teuthology collection:rgw-double 0-cluster:start.yaml 1-bobtail-install:bobtail.yaml 2-bobtail-workload:s3tests.yaml 3-upgrade:dumpling.yaml 4-restart:upgrade_osd_mds_mon.yaml 5-dumpling-workload:swift.yaml 6-upgrade-next:next.yaml 7-restart:osd-mds-mon.yaml 8-next-workload:swift.yaml 1326s
2013-08-19 06:43:24.082849 7feaa01d2780 0 ceph version 0.67.1-5-g290bcd8 (290bcd8a718887eb0e28aa2d97bceeee79068ea9), process ceph-osd, pid 22862
2013-08-19 06:43:24.551088 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) limited size xattrs -- filestore_xattr_use_omap already enabled
2013-08-19 06:43:24.637153 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
2013-08-19 06:43:24.637166 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-08-19 06:43:24.637358 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
2013-08-19 06:43:24.848493 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-08-19 06:43:24.848945 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
2013-08-19 06:43:25.299644 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-08-19 06:43:25.301176 7feaa01d2780 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2013-08-19 06:43:25.301201 7feaa01d2780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:43:25.374750 7feaa01d2780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:43:25.375485 7feaa01d2780 1 journal close /var/lib/ceph/osd/ceph-1/journal
2013-08-19 06:43:25.796076 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) limited size xattrs -- filestore_xattr_use_omap already enabled
2013-08-19 06:43:25.895470 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
2013-08-19 06:43:25.895483 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-08-19 06:43:25.895670 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
2013-08-19 06:43:26.089990 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-08-19 06:43:26.090124 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
2013-08-19 06:43:26.432287 7feaa01d2780 0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-08-19 06:43:26.433090 7feaa01d2780 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2013-08-19 06:43:26.433102 7feaa01d2780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:43:26.433215 7feaa01d2780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:43:26.588048 7feaa01d2780 -1 osd.1 22 PGs are upgrading
2013-08-19 06:43:33.083292 7feaa0198700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.33:6789/0 pipe(0x3381500 sd=39 :0 s=1 pgs=0 cs=0 l=1 c=0x3379580).fault
2013-08-19 06:43:36.083040 7fea88cef700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.25:6790/0 pipe(0x3570780 sd=151 :0 s=1 pgs=0 cs=0 l=1 c=0x3379b00).fault
2013-08-19 06:43:39.083205 7feaa0198700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.33:6789/0 pipe(0x3570a00 sd=151 :0 s=1 pgs=0 cs=0 l=1 c=0x3379c60).fault
2013-08-19 06:43:42.083815 7fea88cef700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.25:6789/0 pipe(0x3381500 sd=151 :0 s=1 pgs=0 cs=0 l=1 c=0x3379580).fault
2013-08-19 06:43:45.083986 7feaa0198700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.33:6789/0 pipe(0x3570c80 sd=39 :0 s=1 pgs=0 cs=0 l=1 c=0x3379dc0).fault
2013-08-19 06:43:48.084494 7fea88cef700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.25:6790/0 pipe(0x3570780 sd=151 :0 s=1 pgs=0 cs=0 l=1 c=0x3379b00).fault
2013-08-19 06:43:51.084708 7feaa0198700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.25:6789/0 pipe(0x3570a00 sd=151 :0 s=1 pgs=0 cs=0 l=1 c=0x3379c60).fault
2013-08-19 06:43:54.085062 7fea88cef700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.25:6790/0 pipe(0x3381500 sd=39 :0 s=1 pgs=0 cs=0 l=1 c=0x3379580).fault
2013-08-19 06:43:57.085471 7feaa0198700 0 -- 0.0.0.0:6802/22862 >> 10.214.131.25:6789/0 pipe(0x3570c80 sd=39 :0 s=1 pgs=0 cs=0 l=1 c=0x3379dc0).fault
2013-08-19 06:44:03.086167 7fea88bee700 0 -- 10.214.131.33:6802/22862 >> 10.214.131.25:6790/0 pipe(0x3570a00 sd=39 :0 s=1 pgs=0 cs=0 l=1 c=0x3379c60).fault
2013-08-19 06:44:09.086927 7fea88cef700 0 -- 10.214.131.33:6802/22862 >> 10.214.131.25:6789/0 pipe(0x3381500 sd=151 :0 s=1 pgs=0 cs=0 l=1 c=0x3379dc0).fault
2013-08-19 06:44:13.425141 7fea92502700 0 osd.1 22 handle_osd_map fsid 5f41b1d7-5b14-4ae2-aca7-b04333ab0727 != 00000000-0000-0000-0000-000000000000
2013-08-19 06:44:15.093843 7fea88bee700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.25:6801/3931 pipe(0x3570780 sd=151 :46466 s=1 pgs=0 cs=0 l=0 c=0x3824000).connect claims to be 10.214.131.25:6801/4688 not 10.214.131.25:6801/3931 - wrong node!
2013-08-19 06:44:15.093886 7fea88bee700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.25:6801/3931 pipe(0x3570780 sd=151 :46466 s=1 pgs=0 cs=0 l=0 c=0x3824000).fault with nothing to send, going to standby
2013-08-19 06:44:15.094632 7fea88bee700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.25:6801/3931 pipe(0x3570780 sd=42 :46467 s=1 pgs=0 cs=1 l=0 c=0x3824000).connect claims to be 10.214.131.25:6801/4688 not 10.214.131.25:6801/3931 - wrong node!
2013-08-19 06:44:15.094676 7fea88bee700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.25:6801/3931 pipe(0x3570780 sd=42 :46467 s=1 pgs=0 cs=1 l=0 c=0x3824000).fault
2013-08-19 06:44:15.095011 7fea88aed700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.33:6802/21811 pipe(0x3843000 sd=44 :57367 s=1 pgs=0 cs=0 l=0 c=0x3824160).connect claims to be 10.214.131.33:6802/22862 not 10.214.131.33:6802/21811 - wrong node!
2013-08-19 06:44:15.095070 7fea88aed700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.33:6802/21811 pipe(0x3843000 sd=44 :57367 s=1 pgs=0 cs=0 l=0 c=0x3824160).fault
2013-08-19 06:44:15.095169 7fea88bee700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.25:6801/3931 pipe(0x3570780 sd=42 :46468 s=1 pgs=0 cs=1 l=0 c=0x3824000).connect claims to be 10.214.131.25:6801/4688 not 10.214.131.25:6801/3931 - wrong node!
2013-08-19 06:44:15.095344 7fea88aed700 0 -- 0.0.0.0:6807/22862 >> 10.214.131.33:6802/21811 pipe(0x3843000 sd=44 :57368 s=1 pgs=0 cs=0 l=0 c=0x3824160).connect claims to be 10.214.131.33:6802/22862 not 10.214.131.33:6802/21811 - wrong node!
...
2013-08-19 06:50:33.091534 7f0363d99780 0 ceph version 0.67-136-gb007b33 (b007b3304c2020aa9f122ec6fef83a909053db3a), process ceph-osd, pid 26397
2013-08-19 06:50:33.274642 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) limited size xattrs -- filestore_xattr_use_omap already enabled
2013-08-19 06:50:33.325326 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
2013-08-19 06:50:33.325340 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-08-19 06:50:33.325522 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
2013-08-19 06:50:33.552000 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-08-19 06:50:33.552173 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
2013-08-19 06:50:33.946082 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-08-19 06:50:33.947795 7f0363d99780 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2013-08-19 06:50:33.947811 7f0363d99780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:50:34.048264 7f0363d99780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:50:34.049366 7f0363d99780 1 journal close /var/lib/ceph/osd/ceph-1/journal
2013-08-19 06:50:34.306078 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) limited size xattrs -- filestore_xattr_use_omap already enabled
2013-08-19 06:50:34.365236 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is supported and appears to work
2013-08-19 06:50:34.365249 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount FIEMAP ioctl is disabled via 'filestore fiemap' config option
2013-08-19 06:50:34.365438 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount did NOT detect btrfs
2013-08-19 06:50:34.542386 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount syncfs(2) syscall fully supported (by glibc and kernel)
2013-08-19 06:50:34.542548 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount found snaps <>
2013-08-19 06:50:34.750922 7f0363d99780 0 filestore(/var/lib/ceph/osd/ceph-1) mount: enabling WRITEAHEAD journal mode: btrfs not detected
2013-08-19 06:50:34.751835 7f0363d99780 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
2013-08-19 06:50:34.751846 7f0363d99780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:50:34.752147 7f0363d99780 1 journal _open /var/lib/ceph/osd/ceph-1/journal fd 21: 104857600 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-19 06:50:35.031060 7f0363d99780 0 log [ERR] : 3.0 log bound mismatch, info (0'0,29'218] actual [20'1,22'80]
2013-08-19 06:50:35.072927 7f0363d99780 0 log [ERR] : 3.1 log bound mismatch, info (0'0,29'204] actual [20'1,22'81]
2013-08-19 06:50:35.081222 7f0363d99780 0 log [ERR] : 3.3 log bound mismatch, info (0'0,29'240] actual [17'1,22'92]
2013-08-19 06:50:35.089496 7f0363d99780 0 log [ERR] : 3.4 log bound mismatch, info (0'0,29'263] actual [20'1,22'78]
2013-08-19 06:50:35.106146 7f0363d99780 0 log [ERR] : 3.6 log bound mismatch, info (0'0,29'222] actual [20'1,22'111]
2013-08-19 06:50:35.114580 7f0363d99780 0 log [ERR] : 4.0 log bound mismatch, info (0'0,29'27] actual [10'1,22'10]
2013-08-19 06:50:35.131286 7f0363d99780 0 log [ERR] : 4.2 log bound mismatch, info (0'0,29'10] actual [9'1,22'6]
2013-08-19 06:50:35.148163 7f0363d99780 0 log [ERR] : 4.3 log bound mismatch, info (0'0,29'41] actual [10'1,22'18]
2013-08-19 06:50:35.164736 7f0363d99780 0 log [ERR] : 4.5 log bound mismatch, info (0'0,29'21] actual [9'1,22'13]
2013-08-19 06:50:35.214784 7f0363d99780 0 log [ERR] : 6.7 log bound mismatch, info (0'0,29'4] actual [12'1,22'2]
2013-08-19 06:50:35.231581 7f0363d99780 0 log [ERR] : 7.0 log bound mismatch, info (0'0,29'4] actual [16'1,22'2]
2013-08-19 06:50:35.256636 7f0363d99780 0 log [ERR] : 7.7 log bound mismatch, info (0'0,29'4] actual [14'1,22'2]
2013-08-19 06:50:35.290128 7f0363d99780 0 log [ERR] : 9.0 log bound mismatch, info (0'0,29'682] actual [19'1,22'412]
2013-08-19 06:50:35.315118 7f0363d99780 0 log [ERR] : 9.4 log bound mismatch, info (0'0,29'562] actual [20'1,22'287]
2013-08-19 06:50:35.331712 7f0363d99780 0 log [ERR] : 9.5 log bound mismatch, info (0'0,29'592] actual [20'1,22'291]
2013-08-19 06:50:35.356842 7f0363d99780 0 log [ERR] : 9.6 log bound mismatch, info (0'0,29'728] actual [18'1,22'364]
2013-08-19 06:50:35.373544 7f0363d99780 0 log [ERR] : 10.3 log bound mismatch, info (0'0,29'651] actual [20'1,22'341]
2013-08-19 06:50:35.390330 7f0363d99780 0 log [ERR] : 10.4 log bound mismatch, info (0'0,29'478] actual [20'1,22'257]
2013-08-19 06:50:35.415202 7f0363d99780 0 log [ERR] : 10.5 log bound mismatch, info (0'0,29'488] actual [20'1,22'198]
2013-08-19 06:50:35.432052 7f0363d99780 0 log [ERR] : 10.7 log bound mismatch, info (0'0,29'501] actual [20'1,22'237]
2013-08-19 06:50:35.448578 7f0363d99780 0 log [ERR] : 11.2 log bound mismatch, info (0'0,29'8] actual [22'1,22'5]
2013-08-19 06:50:35.456887 7f0363d99780 0 log [ERR] : 11.3 log bound mismatch, info (0'0,29'17] actual [22'1,22'13]
2013-08-19 06:50:35.473697 7f0363d99780 0 log [ERR] : 11.7 log bound mismatch, info (0'0,29'2] actual [22'1,22'1]
...
2013-08-19 06:50:40.012543 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=58 :34054 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.012589 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=58 :34054 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).fault
2013-08-19 06:50:40.012733 7f034c6b4700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.33:6811/22869 pipe(0x19e2c80 sd=58 :0 s=1 pgs=0 cs=0 l=0 c=0x1cfe2c0).fault with nothing to send, going to standby
2013-08-19 06:50:40.012961 7f034c5b3700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6802/4688 pipe(0x1d2b280 sd=60 :0 s=1 pgs=0 cs=0 l=0 c=0x1cfe580).fault with nothing to send, going to standby
2013-08-19 06:50:40.013182 7f034c6b4700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.33:6811/22869 pipe(0x19e2c80 sd=58 :0 s=1 pgs=0 cs=1 l=0 c=0x1cfe2c0).fault
2013-08-19 06:50:40.013209 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=59 :34056 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.013728 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=59 :34060 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.014361 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=59 :34064 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.014985 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=58 :34067 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.015339 7f034c5b3700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6802/4688 pipe(0x1d2b280 sd=58 :0 s=1 pgs=0 cs=1 l=0 c=0x1cfe580).fault
2013-08-19 06:50:40.015563 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=60 :34072 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.016131 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=60 :34076 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.016690 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=60 :34079 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
2013-08-19 06:50:40.017259 7f0363d6f700 0 -- 0.0.0.0:6803/26397 >> 10.214.131.25:6807/4695 pipe(0x19e2780 sd=60 :34080 s=1 pgs=0 cs=0 l=0 c=0x1cfe160).connect claims to be 0.0.0.0:6807/7749 not 10.214.131.25:6807/4695 - wrong node!
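For context on the "log bound mismatch" errors above: the PG info records the range of log entries the PG believes it has, as an interval (log_tail, last_update] of epoch'version pairs, and on startup the OSD compares those recorded bounds against the log it actually reads back from disk. Here the info claims e.g. (0'0,29'218] while the on-disk log spans [20'1,22'80]. A minimal sketch of that consistency check, in illustrative Python (not the actual Ceph C++ code; the function and parameter names are hypothetical):

```python
# Illustrative sketch of a PG-log bound consistency check.
# An "eversion" is an (epoch, version) pair, printed as epoch'version in the logs.

def check_log_bounds(info_tail, info_head, log_entries):
    """info_tail/info_head: eversion bounds recorded in the PG info,
    describing the interval (info_tail, info_head].
    log_entries: the eversions actually read back from disk, oldest first.
    Returns None when consistent, else a message shaped like the log lines."""
    if not log_entries:
        return None  # empty on-disk log: nothing to compare against
    actual_first, actual_last = log_entries[0], log_entries[-1]
    if actual_last != info_head:
        # The newest entry on disk does not match what the info claims.
        return ("log bound mismatch, info (%d'%d,%d'%d] actual [%d'%d,%d'%d]"
                % (info_tail + info_head + actual_first + actual_last))
    return None

# The values from PG 3.0 above: info says (0'0,29'218], disk has [20'1,22'80].
msg = check_log_bounds((0, 0), (29, 218), [(20, 1), (22, 80)])
```

The real check lives in the OSD's PG-log load path and also drives the on-disk upgrade ("22 PGs are upgrading" in the first log), which is why the mismatch only surfaces after the second restart.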
Updated by Tamilarasi muthamizhan over 10 years ago
- Status changed from New to Duplicate
- Parent task set to #6057