Bug #20916 (closed)

HEALTH_WARN application not enabled on 1 pool(s)

Added by Vasu Kulkarni over 6 years ago. Updated over 6 years ago.

Status: Duplicate
Priority: Normal
Assignee: Josh Durgin
Target version: -
% Done: 0%
Regression: No
Severity: 3 - minor

Description

Not sure if this is related to rbd, but I have seen a recent PR regarding creating an rbd application pool.

ceph-common             x86_64   2:12.1.2-142.g9964cf1.el7  

1) during the systemd test, ceph is installed using ceph-deploy and by default no rbd pool is created
2) the test restarts various daemons using systemctl and reboots the nodes
3) after the reboot it waits for HEALTH_OK, but on CentOS it stays at HEALTH_WARN due to:
HEALTH_WARN application not enabled on 1 pool(s)
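
To see which pool the warning refers to, something along these lines should work on a mon node (this wasn't captured in the run below, so the affected pool is unknown here; these are standard Luminous CLI commands, not taken from the log):

sudo ceph health detail        # lists the POOL_APP_NOT_ENABLED check and names the offending pool(s)
sudo ceph osd pool ls detail   # shows each pool, including any application tags already set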

2017-08-04T13:34:29.774 INFO:teuthology.orchestra.run.vpm037:Running: 'sudo reboot'
2017-08-04T13:34:29.924 INFO:teuthology.orchestra.run.vpm059:Running: 'sudo reboot'
2017-08-04T13:34:29.950 INFO:teuthology.orchestra.run.vpm069:Running: 'sudo reboot'
2017-08-04T13:34:29.967 INFO:teuthology.orchestra.run.vpm101:Running: 'sudo reboot'
2017-08-04T13:36:30.040 INFO:teuthology.misc:Re-opening connections...
2017-08-04T13:36:30.040 INFO:teuthology.misc:trying to connect to ubuntu@vpm069.front.sepia.ceph.com
2017-08-04T13:36:30.040 DEBUG:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'vpm069.front.sepia.ceph.com', 'key_filename': ['/home/teuthworker/.ssh/id_rsa'], 'timeout': 60}
2017-08-04T13:36:30.421 INFO:teuthology.orchestra.run.vpm069:Running: 'true'
2017-08-04T13:36:31.139 INFO:teuthology.misc:trying to connect to ubuntu@vpm059.front.sepia.ceph.com
2017-08-04T13:36:31.140 DEBUG:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'vpm059.front.sepia.ceph.com', 'key_filename': ['/home/teuthworker/.ssh/id_rsa'], 'timeout': 60}
2017-08-04T13:36:31.313 INFO:teuthology.orchestra.run.vpm059:Running: 'true'
2017-08-04T13:36:31.535 DEBUG:teuthology.misc:waited 1.49497103691
2017-08-04T13:36:32.535 INFO:teuthology.misc:trying to connect to ubuntu@vpm037.front.sepia.ceph.com
2017-08-04T13:36:32.536 DEBUG:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'vpm037.front.sepia.ceph.com', 'key_filename': ['/home/teuthworker/.ssh/id_rsa'], 'timeout': 60}
2017-08-04T13:36:32.958 INFO:teuthology.orchestra.run.vpm037:Running: 'true'
2017-08-04T13:36:33.646 DEBUG:teuthology.misc:waited 3.60671806335
2017-08-04T13:36:34.647 INFO:teuthology.misc:trying to connect to ubuntu@vpm101.front.sepia.ceph.com
2017-08-04T13:36:34.648 DEBUG:teuthology.orchestra.connection:{'username': 'ubuntu', 'hostname': 'vpm101.front.sepia.ceph.com', 'key_filename': ['/home/teuthworker/.ssh/id_rsa'], 'timeout': 60}
2017-08-04T13:36:34.961 INFO:teuthology.orchestra.run.vpm101:Running: 'true'
2017-08-04T13:36:35.645 DEBUG:teuthology.misc:waited 5.60494208336
2017-08-04T13:36:36.647 INFO:teuthology.orchestra.run.vpm037:Running: 'sudo ps -eaf | grep ceph'
2017-08-04T13:36:36.984 INFO:teuthology.orchestra.run.vpm037.stdout:ceph       937     1  0 13:35 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id vpm037 --setuser ceph --setgroup ceph
2017-08-04T13:36:36.984 INFO:teuthology.orchestra.run.vpm037.stdout:ceph       938     1  0 13:35 ?        00:00:00 /usr/bin/ceph-mgr -f --cluster ceph --id vpm037 --setuser ceph --setgroup ceph
2017-08-04T13:36:36.984 INFO:teuthology.orchestra.run.vpm037.stdout:ceph       991     1  1 13:35 ?        00:00:01 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
2017-08-04T13:36:36.984 INFO:teuthology.orchestra.run.vpm037.stdout:ubuntu    6499  6473  0 13:36 ?        00:00:00 bash -c sudo ps -eaf | grep ceph
2017-08-04T13:36:36.984 INFO:teuthology.orchestra.run.vpm037.stdout:ubuntu    6525  6499  0 13:36 ?        00:00:00 grep ceph
2017-08-04T13:36:36.985 INFO:teuthology.orchestra.run.vpm059:Running: 'sudo ps -eaf | grep ceph'
2017-08-04T13:36:37.120 INFO:teuthology.orchestra.run.vpm059.stdout:ceph       926     1  0 13:34 ?        00:00:00 /usr/bin/ceph-mds -f --cluster ceph --id vpm059 --setuser ceph --setgroup ceph
2017-08-04T13:36:37.120 INFO:teuthology.orchestra.run.vpm059.stdout:ceph       954     1  1 13:34 ?        00:00:01 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
2017-08-04T13:36:37.120 INFO:teuthology.orchestra.run.vpm059.stdout:ubuntu    6450  6424  0 13:36 ?        00:00:00 bash -c sudo ps -eaf | grep ceph
2017-08-04T13:36:37.120 INFO:teuthology.orchestra.run.vpm059.stdout:ubuntu    6476  6450  0 13:36 ?        00:00:00 grep ceph
2017-08-04T13:36:37.121 INFO:teuthology.orchestra.run.vpm069:Running: 'sudo ps -eaf | grep ceph'
2017-08-04T13:36:37.321 INFO:teuthology.orchestra.run.vpm069.stdout:ceph       814     1  0 13:35 ?        00:00:00 /usr/bin/ceph-mon -f --cluster ceph --id vpm069 --setuser ceph --setgroup ceph
2017-08-04T13:36:37.322 INFO:teuthology.orchestra.run.vpm069.stdout:ubuntu    6182  6156  0 13:36 ?        00:00:00 bash -c sudo ps -eaf | grep ceph
2017-08-04T13:36:37.322 INFO:teuthology.orchestra.run.vpm069.stdout:ubuntu    6208  6182  0 13:36 ?        00:00:00 grep ceph
2017-08-04T13:36:37.322 INFO:teuthology.orchestra.run.vpm101:Running: 'sudo ps -eaf | grep ceph'
2017-08-04T13:36:37.738 INFO:teuthology.orchestra.run.vpm101.stdout:ceph      1032     1  1 13:35 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
2017-08-04T13:36:37.739 INFO:teuthology.orchestra.run.vpm101.stdout:ceph      6534     1  1 13:35 ?        00:00:00 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
2017-08-04T13:36:37.739 INFO:teuthology.orchestra.run.vpm101.stdout:ubuntu    6713  6687  0 13:36 ?        00:00:00 bash -c sudo ps -eaf | grep ceph
2017-08-04T13:36:37.739 INFO:teuthology.orchestra.run.vpm101.stdout:ubuntu    6739  6713  0 13:36 ?        00:00:00 grep ceph
2017-08-04T13:36:37.740 INFO:teuthology.orchestra.run.vpm037:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage sudo ceph --cluster ceph health'
2017-08-04T13:36:38.486 INFO:teuthology.misc.health.vpm037.stdout:HEALTH_WARN application not enabled on 1 pool(s)
2017-08-04T13:36:38.486 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN application not enabled on 1 pool(s)
2017-08-04T13:36:45.488 INFO:teuthology.orchestra.run.vpm037:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage sudo ceph --cluster ceph health'
2017-08-04T13:36:45.997 INFO:teuthology.misc.health.vpm037.stdout:HEALTH_WARN application not enabled on 1 pool(s)
2017-08-04T13:36:45.997 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN application not enabled on 1 pool(s)
2017-08-04T13:36:53.000 INFO:teuthology.orchestra.run.vpm037:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage sudo ceph --cluster ceph health'
2017-08-04T13:36:53.505 INFO:teuthology.misc.health.vpm037.stdout:HEALTH_WARN application not enabled on 1 pool(s)
2017-08-04T13:36:53.505 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN application not enabled on 1 pool(s)
2017-08-04T13:37:00.507 INFO:teuthology.orchestra.run.vpm037:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage sudo ceph --cluster ceph health'

logs:
http://qa-proxy.ceph.com/teuthology/teuthology-2017-08-04_05:00:17-smoke-master-testing-basic-vps/1481539/teuthology.log
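
The usual manual way to clear this warning (assuming the pool simply lacks an application tag; <pool-name> below is a placeholder, since the affected pool isn't identified in the log) would be:

sudo ceph osd pool application enable <pool-name> rbd    # or 'cephfs'/'rgw', whichever matches the pool's intended use

Whether pools created during this kind of deployment should be tagged automatically is a separate question; see the related bug #20891 below.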


Related issues: 1 (0 open, 1 closed)

Related to Ceph - Bug #20891: mon: mysterious "application not enabled on <N> pool(s)" (Resolved, Greg Farnum, 08/03/2017)

#1

Updated by Vasu Kulkarni over 6 years ago

  • Assignee set to Josh Durgin
#2

Updated by Nathan Cutler over 6 years ago

Duplicate of #20891?

#3

Updated by Josh Durgin over 6 years ago

  • Related to Bug #20891: mon: mysterious "application not enabled on <N> pool(s)" added
#4

Updated by Josh Durgin over 6 years ago

  • Status changed from New to Duplicate

Good call Nathan - let's handle this in that issue.
