Bug #20959


cephfs application metadata not set by ceph.py

Added by Sage Weil over 6 years ago. Updated over 6 years ago.

Status:
Resolved
Priority:
Immediate
Assignee:
Category:
Correctness/Safety
Target version:
-
% Done:
0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(RADOS):
Monitor
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

"2017-08-09 06:52:11.115593 mon.a mon.0 172.21.15.12:6789/0 154 : cluster [WRN] Health check failed: application not enabled on 1 pool(s) (POOL_APP_NOT_ENABLED)" in cluster log

/a/sage-2017-08-09_05:29:48-rados-luminous-distro-basic-smithi/1500948

The log shows

2017-08-09T06:51:53.348 INFO:teuthology.orchestra.run.smithi012:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph fs new cephfs cephfs_metadata cephfs_data'
...
2017-08-09T06:51:53.845 INFO:teuthology.orchestra.run.smithi012.stderr:2017-08-09 06:51:53.837735 7f4347fff700  1 -- 172.21.15.12:0/147144223 <== mon.0 172.21.15.12:6789/0 9 ==== mon_command_ack([{"prefix": "fs new", "data": "cephfs_data", "fs_name": "cephfs", "metadata": "cephfs_metadata"}]=0 new fs with metadata pool 2 and data pool 3 v2) v1 ==== 172+0+0 (3358715605 0 0) 0x7f4348002010 con 0x7f4358192360

but tens of seconds later,
"application_metadata":{}}
...
ions":{},"application_metadata":{}}],"o

for the cephfs pools; the rbd pool is fine:
,"application_metadata":{"rbd":{}}},

That output is at 2017-08-09T06:52:14.246.
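
For reference, the empty fields above are the per-pool "application_metadata" entries in the osdmap JSON (e.g. from ceph osd dump --format=json). A minimal sketch of the condition the health check is flagging, assuming a reachable ceph CLI; pools_without_application is just an illustrative helper, not code from ceph.py:

# Rough sketch, not the actual ceph.py task code: list pools whose
# application_metadata is empty, i.e. the pools that would trigger the
# POOL_APP_NOT_ENABLED health warning.
import json
import subprocess

def pools_without_application(cluster='ceph'):
    # 'ceph osd dump --format=json' carries a per-pool "application_metadata"
    # field, the same field shown empty in the excerpt above.
    out = subprocess.check_output(
        ['ceph', '--cluster', cluster, 'osd', 'dump', '--format=json'])
    osdmap = json.loads(out.decode('utf-8'))
    return [p['pool_name'] for p in osdmap['pools']
            if not p.get('application_metadata')]

if __name__ == '__main__':
    for name in pools_without_application():
        print('pool %s has no application enabled' % name)

Manually, "ceph osd pool application enable <pool> cephfs" clears the warning for the cephfs pools, but the expectation here is that fs new (via the ceph.py task) already tags them.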


Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #20891: mon: mysterious "application not enabled on <N> pool(s)" (Resolved, Greg Farnum, 08/03/2017)
