Bug #19771

HEALTH_WARN pool rbd pg_num 244 > pgp_num 224 during upgrade

Added by Sage Weil about 7 years ago. Updated over 6 years ago.

Status:
Resolved
Priority:
Immediate
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Tags:
Backport:
kraken
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2017-04-25T16:42:57.790 INFO:teuthology.orchestra.run.smithi078:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph health'
2017-04-25T16:42:57.929 INFO:teuthology.misc.health.smithi078.stdout:HEALTH_WARN pool rbd pg_num 244 > pgp_num 224
2017-04-25T16:42:57.929 DEBUG:teuthology.misc:Ceph health: HEALTH_WARN pool rbd pg_num 244 > pgp_num 224
2017-04-25T16:43:04.931 INFO:teuthology.orchestra.run.smithi078:Running: 'adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage ceph --cluster ceph health'
2017-04-25T16:43:05.093 INFO:teuthology.misc.health.smithi078.stdout:HEALTH_WARN pool rbd pg_num 244 > pgp_num 224

/a/sage-2017-04-25_15:37:28-upgrade:jewel-x-master---basic-smithi/1068352
description: upgrade:jewel-x/stress-split-erasure-code/{0-cluster/{openstack.yaml
  start.yaml} 1-jewel-install/jewel.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml
  4-workload/ec-rados-default.yaml 5-finish-upgrade.yaml 6-luminous.yaml 7-final-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml
  distros/ubuntu_14.04.yaml}
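The warning fires because pgp_num lags behind pg_num on the pool. A minimal sketch of that check (hypothetical helper, a simplification and not the actual monitor code):

```python
def pg_pgp_warnings(pools):
    """Return HEALTH_WARN-style messages for pools whose pgp_num lags
    behind pg_num (hypothetical simplification of the mon check)."""
    warnings = []
    for name, props in pools.items():
        if props["pg_num"] > props["pgp_num"]:
            warnings.append(
                "pool %s pg_num %d > pgp_num %d"
                % (name, props["pg_num"], props["pgp_num"]))
    return warnings

# The pool state seen in the teuthology log above.
msgs = pg_pgp_warnings({"rbd": {"pg_num": 244, "pgp_num": 224}})
```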

Related issues 1 (0 open, 1 closed)

Copied to Ceph - Backport #20024: kraken: HEALTH_WARN pool rbd pg_num 244 > pgp_num 224 during upgrade (Resolved, Nathan Cutler)
#1

Updated by Sage Weil almost 7 years ago

/a/sage-2017-05-01_11:43:01-upgrade:kraken-x-master---basic-smithi/1087965

#2

Updated by Kefu Chai almost 7 years ago

2017-05-01T13:00:05.534 INFO:tasks.thrashosds.thrasher:fixing pg num pool rbd
...
{"pgid":"0.147","version":"0'0","reported_seq":"0","reported_epoch":"0","state":"creating","last_fresh":"2017-05-01 12:59:46.655825" 
...
{"pgid":"0.146","version":"0'0","reported_seq":"0","reported_epoch":"0","state":"creating","last_fresh":"2017-05-01 12:57:47.667230",

Some PGs were still being created.

    def set_pool_pgpnum(self, pool_name):
        """
        Set pgp_num property of pool_name pool.
        """
        with self.lock:
            assert isinstance(pool_name, basestring)
            assert pool_name in self.pools
            if self.get_num_creating() > 0:
                # Bails out while PGs are still creating, so pgp_num
                # is never raised to match pg_num.
                return False
            self.set_pool_property(pool_name, 'pgp_num', self.pools[pool_name])
            return True


So set_pool_pgpnum() returned early, and pgp_num of pool rbd was never raised to match pg_num when OSD thrashing finished.
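One way to avoid leaving the pool in this state is to retry the call until no PGs are creating, rather than giving up on the first False return. A hedged sketch with a stand-in manager object (FakeCephManager and fix_pgp_num are illustrative names, not the real teuthology ceph_manager API):

```python
import time

class FakeCephManager:
    """Stand-in for the teuthology CephManager (illustrative only)."""
    def __init__(self, creating_polls):
        self.pools = {"rbd": 244}          # pool -> target pg_num
        self._creating_polls = creating_polls  # PGs still creating, per poll
        self.pgp_num = 224

    def get_num_creating(self):
        return self._creating_polls.pop(0) if self._creating_polls else 0

    def set_pool_property(self, pool_name, prop, value):
        if prop == "pgp_num":
            self.pgp_num = value

    def set_pool_pgpnum(self, pool_name):
        # Same early-return behaviour as the snippet above.
        if self.get_num_creating() > 0:
            return False
        self.set_pool_property(pool_name, "pgp_num", self.pools[pool_name])
        return True

def fix_pgp_num(manager, pool_name, attempts=10, delay=0):
    """Retry until PG creation settles and pgp_num catches up to pg_num."""
    for _ in range(attempts):
        if manager.set_pool_pgpnum(pool_name):
            return True
        time.sleep(delay)
    return False

# Two polls still report creating PGs; the third attempt succeeds.
mgr = FakeCephManager(creating_polls=[2, 1])
assert fix_pgp_num(mgr, "rbd")
assert mgr.pgp_num == 244
```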

#3

Updated by Kefu Chai almost 7 years ago

  • Status changed from New to Fix Under Review
  • Assignee set to Kefu Chai
#4

Updated by Sage Weil almost 7 years ago

  • Status changed from Fix Under Review to Pending Backport
  • Backport set to kraken
#5

Updated by Nathan Cutler almost 7 years ago

  • Copied to Backport #20024: kraken: HEALTH_WARN pool rbd pg_num 244 > pgp_num 224 during upgrade added
#6

Updated by Kefu Chai almost 7 years ago

  • Assignee deleted (Kefu Chai)
#7

Updated by Nathan Cutler over 6 years ago

  • Status changed from Pending Backport to Resolved