Bug #12715
"[ERR] bad backtrace on dir ino 600" in cluster log
Status: Closed
Description
Run: http://pulpito.ceph.com/teuthology-2015-08-14_16:56:20-upgrade:firefly-x-hammer-distro-basic-multi/
Job: 1014715
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2015-08-14_16:56:20-upgrade:firefly-x-hammer-distro-basic-multi/1014715/teuthology.log
failure_reason: '"2015-08-14 19:49:12.208057 mds.0 10.214.131.32:6800/6653 1 : [ERR] bad backtrace on dir ino
Updated by Yuri Weinstein over 8 years ago
Run: http://pulpito.ceph.com/teuthology-2015-08-21_08:42:54-upgrade:firefly-x-hammer-distro-basic-vps/
Jobs: 1024928, 1024929
Logs: http://qa-proxy.ceph.com/teuthology/teuthology-2015-08-21_08:42:54-upgrade:firefly-x-hammer-distro-basic-vps/1024928/teuthology.log
2015-08-21T11:29:42.713 INFO:teuthology.task.print:**** done branch: -x install.upgrade
2015-08-21T11:29:42.714 INFO:teuthology.task.sequential:In sequential, running task ceph.restart...
2015-08-21T11:29:42.714 INFO:tasks.ceph.mds.a:Restarting daemon
2015-08-21T11:29:42.714 INFO:tasks.ceph.mds.a:Stopping old one...
2015-08-21T11:29:42.714 DEBUG:tasks.ceph.mds.a:waiting for process to exit
2015-08-21T11:29:48.713 INFO:tasks.ceph.mds.a:Stopped
2015-08-21T11:29:48.713 INFO:teuthology.orchestra.run.vpm048:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mds -f -i a'
2015-08-21T11:29:48.716 INFO:tasks.ceph.mds.a:Started
2015-08-21T11:29:48.716 INFO:tasks.ceph:Waiting until ceph is healthy...
2015-08-21T11:29:48.716 INFO:teuthology.orchestra.run.vpm048:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph osd dump --format=json'
2015-08-21T11:29:50.793 DEBUG:teuthology.misc:6 of 6 OSDs are up
2015-08-21T11:29:50.794 INFO:teuthology.orchestra.run.vpm048:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph health'
2015-08-21T11:29:51.648 DEBUG:teuthology.misc:Ceph health: HEALTH_OK
2015-08-21T11:29:51.648 INFO:teuthology.task.sequential:In sequential, running task sleep...
2015-08-21T11:29:51.649 INFO:teuthology.task.sleep:Sleeping for 60
2015-08-21T11:29:51.766 INFO:tasks.ceph.mds.a.vpm048.stdout:starting mds.a at :/0
2015-08-21T11:29:52.126 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:29:52.131618 7f7451264780 -1 mds.-1.0 log_to_monitors {default=true}
2015-08-21T11:30:01.831 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:01.837113 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 600
2015-08-21T11:30:01.833 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:01.838770 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 601
2015-08-21T11:30:01.835 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:01.840552 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 602
2015-08-21T11:30:01.837 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:01.843379 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 603
2015-08-21T11:30:01.868 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:01.873581 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 604
2015-08-21T11:30:01.992 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:01.998149 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 605
2015-08-21T11:30:01.994 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:01.999831 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 606
2015-08-21T11:30:01.996 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:02.001806 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 607
2015-08-21T11:30:01.998 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:02.003164 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 608
2015-08-21T11:30:01.999 INFO:tasks.ceph.mds.a.vpm048.stderr:2015-08-21 18:30:02.005031 7f7449574700 -1 log_channel(cluster) log [ERR] : bad backtrace on dir ino 609
2015-08-21T11:30:51.648 INFO:teuthology.task.sequential:In sequential, running task ceph.restart...
Updated by Sage Weil over 8 years ago
This is an old bug, right? We should just whitelist this?
Updated by Yuri Weinstein over 8 years ago
- Assignee set to Zheng Yan
Zheng, Sage mentioned that this may have been fixed by you; can you take a look?
Updated by Zheng Yan over 8 years ago
The test uses firefly 0.80.4 (7c241cfaa6c8c068bc9da8578ca00b9f4fc7567f); the newest firefly includes the fix (commit a5970963).
Updated by Yuri Weinstein over 8 years ago
Also in run:
http://pulpito.ceph.com/teuthology-2015-08-21_08:42:54-upgrade:firefly-x-hammer-distro-basic-vps/
Jobs: 1024928, 1024929, 1024930
Updated by Yuri Weinstein over 8 years ago
Whitelist the "bad backtrace on dir ino" warning message (per IRC chat with Greg).
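For reference, a whitelist entry in the suite's teuthology yaml would look roughly like the following (a sketch; the exact override location in the suite files is an assumption):

```yaml
overrides:
  ceph:
    log-whitelist:
      # Ignore the known-benign backtrace errors emitted when a newer MDS
      # reads directory objects written by an older (pre-fix) firefly.
      - bad backtrace on dir ino
```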
Updated by Yuri Weinstein over 8 years ago
- Assignee changed from Zheng Yan to Yuri Weinstein
Updated by Yuri Weinstein over 8 years ago
Updated by Shinobu Kinjo over 8 years ago
This kind of warning still happens on the following upgrade path:
Dumpling -> Firefly -> Hammer
Each version is the newest available.
Are those "warnings" ignorable, or is there anything we have to fix?
mds.0 [ERR] bad backtrace on dir ino 600
mds.0 [ERR] bad backtrace on dir ino 601
mds.0 [ERR] bad backtrace on dir ino 602
mds.0 [ERR] bad backtrace on dir ino 603
mds.0 [ERR] bad backtrace on dir ino 604
mds.0 [ERR] bad backtrace on dir ino 605
mds.0 [ERR] bad backtrace on dir ino 606
mds.0 [ERR] bad backtrace on dir ino 607
mds.0 [ERR] bad backtrace on dir ino 608
mds.0 [ERR] bad backtrace on dir ino 609