Bug #20959

Updated by Sage Weil over 6 years ago

"2017-08-09 06:52:11.115593 mon.a mon.0 172.21.15.12:6789/0 154 : cluster [WRN] Health check failed: application not enabled on 1 pool(s) (POOL_APP_NOT_ENABLED)" in cluster log

/a/sage-2017-08-09_05:29:48-rados-luminous-distro-basic-smithi/1500948 http://pulpito.ceph.com/sage-2017-08-09_05:29:48-rados-luminous-distro-basic-smithi/1500948
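For context, POOL_APP_NOT_ENABLED fires when a pool has an empty application_metadata. A quick way to see which pools are tagged on a Luminous cluster (pool names taken from this run; the exact commands here are a sketch, not from the run itself):
<pre>
# list which health checks are failing and the pools they name
ceph health detail

# show the application tags on the cephfs pools; "fs new" is expected to tag them with "cephfs"
ceph osd pool application get cephfs_metadata
ceph osd pool application get cephfs_data
</pre>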

The log shows
<pre>
2017-08-09T06:51:53.348 INFO:teuthology.orchestra.run.smithi012:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs new cephfs cephfs_metadata cephfs_data'
...
2017-08-09T06:51:53.845 INFO:teuthology.orchestra.run.smithi012.stderr:2017-08-09 06:51:53.837735 7f4347fff700 1 -- 172.21.15.12:0/147144223 <== mon.0 172.21.15.12:6789/0 9 ==== mon_command_ack([{"prefix": "fs new", "data": "cephfs_data", "fs_name": "cephfs", "metadata": "cephfs_metadata"}]=0 new fs with metadata pool 2 and data pool 3 v2) v1 ==== 172+0+0 (3358715605 0 0) 0x7f4348002010 con 0x7f4358192360
</pre>
but tens of seconds later,
<pre>
"application_metadata":{}}
...
ions":{},"application_metadata":{}}],"o
</pre>
for the metadata pools. The rbd pool is fine,
<pre>
,"application_metadata":{"rbd":{}}},
</pre>
that's at
<pre>
2017-08-09T06:52:14.246
</pre>
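Not from this run, but assuming the tag simply never got applied (rather than the dump racing the "fs new"), a manual workaround on Luminous would be to enable the application on the pools directly:
<pre>
# tag the cephfs pools so the health check clears
ceph osd pool application enable cephfs_metadata cephfs
ceph osd pool application enable cephfs_data cephfs
</pre>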
