Bug #39116

Draining filestore osd, removing, and adding new bluestore osd causes OSDs to crash

Added by Iain Buclaw almost 5 years ago. Updated over 4 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
-
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
2 - major
Reviewed:
Affected Versions:
ceph-qa-suite:
rados
Component(RADOS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

2019-04-04 10:40:22.600 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check failed: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2019-04-04 10:42:27.859 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check update: Possible data damage: 2 pgs inconsistent (PG_DAMAGED)
2019-04-04 10:59:59.998 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : overall HEALTH_ERR 34 scrub errors; Possible data damage: 2 pgs inconsistent
2019-04-04 11:30:56.586 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check update: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2019-04-04 11:33:06.445 7f44cb2a9700  0 log_channel(cluster) log [INF] : Health check cleared: PG_DAMAGED (was: Possible data damage: 1 pg inconsistent)
2019-04-04 11:40:01.996 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check failed: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2019-04-04 11:59:59.994 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : overall HEALTH_ERR 5 scrub errors; Possible data damage: 1 pg inconsistent
2019-04-04 12:59:59.999 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : overall HEALTH_ERR 8 scrub errors; Possible data damage: 1 pg inconsistent
2019-04-04 13:12:05.033 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check update: Possible data damage: 2 pgs inconsistent (PG_DAMAGED)
2019-04-04 13:14:32.941 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check update: Possible data damage: 3 pgs inconsistent (PG_DAMAGED)
2019-04-04 13:25:08.437 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check update: Possible data damage: 2 pgs inconsistent (PG_DAMAGED)
2019-04-04 13:59:59.995 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : overall HEALTH_ERR 10 scrub errors; Possible data damage: 2 pgs inconsistent
2019-04-04 14:18:04.007 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check update: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2019-04-04 14:19:26.517 7f44cb2a9700  0 log_channel(cluster) log [INF] : Health check cleared: PG_DAMAGED (was: Possible data damage: 1 pg inconsistent)
2019-04-04 14:29:22.432 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check failed: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2019-04-04 14:53:59.230 7f44cb2a9700  0 log_channel(cluster) log [INF] : Health check cleared: PG_DAMAGED (was: Possible data damage: 1 pg inconsistent)
2019-04-04 14:56:22.430 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : Health check failed: Possible data damage: 1 pg inconsistent (PG_DAMAGED)
2019-04-04 14:59:59.995 7f44cb2a9700 -1 log_channel(cluster) log [ERR] : overall HEALTH_ERR noout flag(s) set; 19 scrub errors; Possible data damage: 1 pg inconsistent
2019-04-04 15:09:24.482 7f44cb2a9700  0 log_channel(cluster) log [INF] : Health check cleared: PG_DAMAGED (was: Possible data damage: 1 pg inconsistent)

Each scrub on the new bluestore OSD finds a new missing object. As per #39115, ceph pg repair is not enough to fix this; the newly added OSD needs to be restarted constantly.
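In command form, the cycle looks roughly like this (the pg and osd ids are the ones that appear in the comments below):

# ceph pg repair 51.177
  (the acting primary hits an assert and aborts during the resulting recovery; backtrace in the comments below)
# systemctl restart ceph-osd@9
  (the next scrub reports another missing object and the repair has to be rerun)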


Related issues

Related to RADOS - Bug #43174: pgs inconsistent, union_shard_errors=missing Resolved
Duplicated by RADOS - Bug #39115: ceph pg repair doesn't fix itself if osd is bluestore Duplicate 04/04/2019

History

#1 Updated by David Zafman almost 5 years ago

Please find a stack trace in the osd log. Is there an assert that would look like this?

/build/ceph-13.2.5-g######/src/.../.....: ##: FAILED ceph_assert(.....

Or a segmentation fault / abort?
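Something along these lines (log path assumed) should turn either of those up:

# grep -E -B2 -A10 'FAILED (ceph_)?assert|Caught signal' /var/log/ceph/ceph-osd.*.log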

#2 Updated by Casey Bodley almost 5 years ago

  • Project changed from rgw to Ceph

#3 Updated by Iain Buclaw almost 5 years ago

David Zafman wrote:

Please find a stack trace in the osd log. Is there an assert that would look like this?

/build/ceph-13.2.5-g######/src/.../.....: ##: FAILED ceph_assert(.....

Or a segmentation fault / abort?

I found the following, which correlates with the health check logged at 13:25:08.

2019-04-04 13:25:05.579 7fbbeedfb700 -1 log_channel(cluster) log [ERR] : 51.fc repair : stat mismatch, got 20334/20333 objects, 0/0 clones, 20334/20333 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 228850639/228845532 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes.
2019-04-04 13:25:05.579 7fbbeedfb700 -1 log_channel(cluster) log [ERR] : 51.fc repair 1 missing, 0 inconsistent objects
2019-04-04 13:25:05.579 7fbbeedfb700 -1 log_channel(cluster) log [ERR] : 51.fc repair 3 errors, 2 fixed
2019-04-04 13:25:05.591 7fbbeadf3700 -1 /build/ceph-13.2.5/src/osd/ReplicatedBackend.cc: In function 'void ReplicatedBackend::prepare_pull(eversion_t, const hobject_t&, ObjectContextRef, ReplicatedBackend::RPGHandle*)' thread 7fbbeadf3700 time 2019-04-04 13:25:05.584424
/build/ceph-13.2.5/src/osd/ReplicatedBackend.cc: 1308: FAILED assert(peer_missing.count(fromshard))

 ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7fbc157485c2]
 2: (()+0x2f9787) [0x7fbc15748787]
 3: (ReplicatedBackend::prepare_pull(eversion_t, hobject_t const&, std::shared_ptr<ObjectContext>, ReplicatedBackend::RPGHandle*)+0x12d4) [0xa2c364]
 4: (ReplicatedBackend::recover_object(hobject_t const&, eversion_t, std::shared_ptr<ObjectContext>, std::shared_ptr<ObjectContext>, PGBackend::RecoveryHandle*)+0xdc) [0xa2ef4c]
 5: (PrimaryLogPG::recover_missing(hobject_t const&, eversion_t, int, PGBackend::RecoveryHandle*)+0x1d9) [0x8b6b39]
 6: (PrimaryLogPG::recover_primary(unsigned long, ThreadPool::TPHandle&)+0x9c9) [0x8f1979]
 7: (PrimaryLogPG::start_recovery_ops(unsigned long, ThreadPool::TPHandle&, unsigned long*)+0x210) [0x8f7c70]
 8: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x36a) [0x76793a]
 9: (PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x19) [0x9c5b69]
 10: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x590) [0x769240]
 11: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x476) [0x7fbc1574e6f6]
 12: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x7fbc1574f8b0]
 13: (()+0x76ba) [0x7fbc13dbc6ba]
 14: (clone()+0x6d) [0x7fbc133cb41d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

There are others in the same log; all have the same backtrace.

OSD version is:

ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)

#4 Updated by Iain Buclaw almost 5 years ago

# ceph pg deep-scrub 51.177
instructing pg 51.177 on osd.9 to deep-scrub

ceph -w logs:

2019-04-05 07:12:01.200548 osd.9 [ERR] 51.177 shard 0 51:eedfc6f8::4653376157940334334:47:head : missing
2019-04-05 07:12:06.580296 osd.9 [ERR] 51.177 shard 9 51:eedfc6f8::4653376157940334334:47:head : missing
2019-04-05 07:12:38.708078 osd.9 [ERR] 51.177 deep-scrub : stat mismatch, got 19714/19713 objects, 0/0 clones, 19714/19713 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 246025587/246016008 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes.
2019-04-05 07:12:38.708093 osd.9 [ERR] 51.177 deep-scrub 1 missing, 0 inconsistent objects
2019-04-05 07:12:38.708096 osd.9 [ERR] 51.177 deep-scrub 3 errors

On osd.0:

# find 51.177_head/ -name "*47_4653376157940334334*" 
51.177_head/DIR_7/DIR_7/DIR_B/DIR_F/47_4653376157940334334_head_1F63FB77__33
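The equivalent check on the bluestore osd.9 would be something along these lines with ceph-objectstore-tool (the osd has to be stopped first, and the data path is assumed):

# systemctl stop ceph-osd@9
# ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-9 --pgid 51.177 --op list | grep 4653376157940334334
# systemctl start ceph-osd@9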

Output of rados list-inconsistent-obj 51.177:

{
    "epoch": 53141,
    "inconsistents": [
        {
            "object": {
                "name": "47",
                "nspace": "",
                "locator": "4653376157940334334",
                "snap": "head",
                "version": 4526675
            },
            "errors": [],
            "union_shard_errors": [
                "missing" 
            ],
            "selected_object_info": {
                "oid": {
                    "oid": "47",
                    "key": "4653376157940334334",
                    "snapid": -2,
                    "hash": 526646135,
                    "max": 0,
                    "pool": 51,
                    "namespace": "" 
                },
                "version": "52284'454181",
                "prior_version": "52249'377382",
                "last_reqid": "client.36576007.0:15677702",
                "user_version": 4526675,
                "size": 9579,
                "mtime": "2019-02-07 10:58:34.211435",
                "local_mtime": "2019-02-07 10:58:34.218297",
                "lost": 0,
                "flags": [
                    "dirty",
                    "data_digest",
                    "omap_digest" 
                ],
                "truncate_seq": 0,
                "truncate_size": 0,
                "data_digest": "0x39906511",
                "omap_digest": "0xffffffff",
                "expected_object_size": 0,
                "expected_write_size": 0,
                "alloc_hint_flags": 0,
                "manifest": {
                    "type": 0
                },
                "watchers": {}
            },
            "shards": [
                {
                    "osd": 0,
                    "primary": false,
                    "errors": [],
                    "size": 9579,
                    "omap_digest": "0xffffffff",
                    "data_digest": "0x39906511" 
                },
                {
                    "osd": 9,
                    "primary": true,
                    "errors": [
                        "missing" 
                    ]
                }
            ]
        }
    ]
}
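The same report condenses to the interesting fields with jq, for example:

# rados list-inconsistent-obj 51.177 | jq '.inconsistents[] | {object: .object, errors: .union_shard_errors, shards: [.shards[] | {osd, errors}]}'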

Repairing:

# ceph pg repair 51.177
instructing pg 51.177 on osd.9 to repair

ceph -w logs:

2019-04-05 07:31:34.248658 osd.9 [ERR] 51.177 shard 9 51:eee4d2c1::171336252089860175:169:head : missing
2019-04-05 07:31:39.532834 osd.9 [ERR] 51.177 shard 0 51:eee4d2c1::171336252089860175:169:head : missing
2019-04-05 07:31:39.532835 osd.9 [ERR] 51.177 shard 0 51:eeea33e8::383290891110620472:41:head : missing
2019-04-05 07:31:44.802083 osd.9 [ERR] 51.177 shard 9 51:eeea33e8::383290891110620472:41:head : missing
2019-04-05 07:32:06.433237 mon.ap-120 [INF] osd.9 failed (root=default,host=ap-124) (connection refused reported by osd.8)
2019-04-05 07:32:06.485653 mon.ap-120 [WRN] Health check failed: 1 osds down (OSD_DOWN)
2019-04-05 07:32:09.979661 mon.ap-120 [WRN] Health check failed: Reduced data availability: 3 pgs inactive, 41 pgs peering (PG_AVAILABILITY)
2019-04-05 07:32:09.979712 mon.ap-120 [WRN] Health check failed: Degraded data redundancy: 312778/23936452 objects degraded (1.307%), 18 pgs degraded (PG_DEGRADED)
2019-04-05 07:32:11.240626 mon.ap-120 [INF] Health check cleared: OSD_SCRUB_ERRORS (was: 3 scrub errors)
2019-04-05 07:32:11.240652 mon.ap-120 [INF] Health check cleared: PG_DAMAGED (was: Possible data damage: 1 pg inconsistent)
2019-04-05 07:32:13.483748 mon.ap-120 [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 3 pgs inactive, 41 pgs peering)
2019-04-05 07:32:19.295771 mon.ap-120 [WRN] Health check update: Degraded data redundancy: 2429737/23936452 objects degraded (10.151%), 159 pgs degraded (PG_DEGRADED)
2019-04-05 07:32:25.339488 mon.ap-120 [WRN] Health check update: Degraded data redundancy: 2429737/23936488 objects degraded (10.151%), 159 pgs degraded (PG_DEGRADED)
2019-04-05 07:32:35.207942 mon.ap-120 [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2019-04-05 07:32:35.580655 mon.ap-120 [INF] osd.9 172.28.19.9:6800/379046 boot
2019-04-05 07:32:37.834878 mon.ap-120 [WRN] Health check update: Degraded data redundancy: 2228323/23936488 objects degraded (9.309%), 147 pgs degraded (PG_DEGRADED)
2019-04-05 07:32:41.458732 mon.ap-120 [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1335902/23936488 objects degraded (5.581%), 85 pgs degraded)
2019-04-05 07:32:41.458757 mon.ap-120 [INF] Cluster is now healthy

OSD crash logs

2019-04-05 07:32:05.864 7fd837b89700 -1 *** Caught signal (Aborted) **
 in thread 7fd837b89700 thread_name:tp_osd_tp

 ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)
 1: /usr/bin/ceph-osd() [0xcaccb0]
 2: (()+0x11390) [0x7fd85afdf390]
 3: (gsignal()+0x38) [0x7fd85a512428]
 4: (abort()+0x16a) [0x7fd85a51402a]
 5: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x250) [0x7fd85c961710]
 6: (()+0x2f9787) [0x7fd85c961787]
 7: (ReplicatedBackend::prepare_pull(eversion_t, hobject_t const&, std::shared_ptr<ObjectContext>, ReplicatedBackend::RPGHandle*)+0x12d4) [0xa2c364]
 8: (ReplicatedBackend::recover_object(hobject_t const&, eversion_t, std::shared_ptr<ObjectContext>, std::shared_ptr<ObjectContext>, PGBackend::RecoveryHandle*)+0xdc) [0xa2ef4c]
 9: (PrimaryLogPG::recover_missing(hobject_t const&, eversion_t, int, PGBackend::RecoveryHandle*)+0x1d9) [0x8b6b39]
 10: (PrimaryLogPG::recover_primary(unsigned long, ThreadPool::TPHandle&)+0x9c9) [0x8f1979]
 11: (PrimaryLogPG::start_recovery_ops(unsigned long, ThreadPool::TPHandle&, unsigned long*)+0x210) [0x8f7c70]
 12: (OSD::do_recovery(PG*, unsigned int, unsigned long, ThreadPool::TPHandle&)+0x36a) [0x76793a]
 13: (PGRecovery::run(OSD*, OSDShard*, boost::intrusive_ptr<PG>&, ThreadPool::TPHandle&)+0x19) [0x9c5b69]
 14: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x590) [0x769240]
 15: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x476) [0x7fd85c9676f6]
 16: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x7fd85c9688b0]
 17: (()+0x76ba) [0x7fd85afd56ba]
 18: (clone()+0x6d) [0x7fd85a5e441d]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

Shortly after start-up, another PG is reported as inconsistent, but I suspect this is simply because it was next on the list of scheduled scrubs.

2019-04-05 07:33:03.211561 osd.9 [ERR] 51.137 shard 3 51:ec991f27::372956414560036835:41:head : missing
2019-04-05 07:33:08.422089 osd.9 [ERR] 51.137 shard 9 51:ec991f27::372956414560036835:41:head : missing
2019-04-05 07:34:31.983093 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:24:head : missing
2019-04-05 07:34:31.983094 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:25:head : missing
2019-04-05 07:34:31.983095 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:26:head : missing
2019-04-05 07:34:31.983096 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:27:head : missing
2019-04-05 07:34:31.983096 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:33:head : missing
2019-04-05 07:34:31.983097 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:34:head : missing
2019-04-05 07:34:31.983097 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:41:head : missing
2019-04-05 07:34:31.983098 osd.9 [ERR] 51.137 shard 3 51:ecef2e12::237649098503494993:42:head : missing
2019-04-05 07:34:37.212930 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:24:head : missing
2019-04-05 07:34:37.212931 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:25:head : missing
2019-04-05 07:34:37.212932 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:26:head : missing
2019-04-05 07:34:37.212933 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:27:head : missing
2019-04-05 07:34:37.212934 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:33:head : missing
2019-04-05 07:34:37.212935 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:34:head : missing
2019-04-05 07:34:37.212935 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:41:head : missing
2019-04-05 07:34:37.212936 osd.9 [ERR] 51.137 shard 9 51:ecef2e12::237649098503494993:42:head : missing
2019-04-05 07:34:52.624365 osd.9 [ERR] 51.137 scrub : stat mismatch, got 20016/20007 objects, 0/0 clones, 20016/20007 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 238983468/238906209 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes.
2019-04-05 07:34:52.624374 osd.9 [ERR] 51.137 scrub 9 missing, 0 inconsistent objects
2019-04-05 07:34:52.624377 osd.9 [ERR] 51.137 scrub 19 errors
2019-04-05 07:34:57.022091 mon.ap-120 [ERR] Health check failed: 19 scrub errors (OSD_SCRUB_ERRORS)
2019-04-05 07:34:57.022119 mon.ap-120 [ERR] Health check failed: Possible data damage: 1 pg inconsistent (PG_DAMAGED)

#5 Updated by Iain Buclaw almost 5 years ago

Another PG.

# rados list-inconsistent-obj 51.1fa | jq .inconsistents[].object
{
  "name": "169",
  "nspace": "",
  "locator": "193061343227287063",
  "snap": "head",
  "version": 4309859
}
{
  "name": "169",
  "nspace": "",
  "locator": "238938732312934742",
  "snap": "head",
  "version": 4285966
}
{
  "name": "21",
  "nspace": "",
  "locator": "238938732312934742",
  "snap": "head",
  "version": 4285956
}
{
  "name": "22",
  "nspace": "",
  "locator": "238938732312934742",
  "snap": "head",
  "version": 4285957
}
{
  "name": "23",
  "nspace": "",
  "locator": "238938732312934742",
  "snap": "head",
  "version": 4285958
}

I did a rados get/put on each of the objects listed above (sketched below), then another deep-scrub:
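The per-object round-trip looks like this (the pool name is a placeholder; the locator key has to be passed explicitly):

# rados -p <pool> --object-locator 193061343227287063 get 169 /tmp/obj
# rados -p <pool> --object-locator 193061343227287063 put 169 /tmp/obj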

# ceph pg deep-scrub 51.1fa 
instructing pg 51.1fa on osd.0 to deep-scrub

Logs:

2019-04-05 08:31:24.459809 osd.0 [ERR] 51.1fa shard 9 51:5fd90f47::193061343227287063:169:head : missing
2019-04-05 08:31:29.722528 osd.0 [ERR] 51.1fa shard 0 51:5fd90f47::193061343227287063:169:head : missing
2019-04-05 08:31:40.254726 osd.0 [ERR] 51.1fa shard 9 51:5fe9e70d::238938732312934742:169:head : missing
2019-04-05 08:31:40.254730 osd.0 [ERR] 51.1fa shard 9 51:5fe9e70d::238938732312934742:21:head : missing
2019-04-05 08:31:40.254731 osd.0 [ERR] 51.1fa shard 9 51:5fe9e70d::238938732312934742:22:head : missing
2019-04-05 08:31:40.254732 osd.0 [ERR] 51.1fa shard 9 51:5fe9e70d::238938732312934742:23:head : missing
2019-04-05 08:31:45.529356 osd.0 [ERR] 51.1fa shard 0 51:5fe9e70d::238938732312934742:169:head : missing
2019-04-05 08:31:45.529359 osd.0 [ERR] 51.1fa shard 0 51:5fe9e70d::238938732312934742:21:head : missing
2019-04-05 08:31:45.529360 osd.0 [ERR] 51.1fa shard 0 51:5fe9e70d::238938732312934742:22:head : missing
2019-04-05 08:31:45.529361 osd.0 [ERR] 51.1fa shard 0 51:5fe9e70d::238938732312934742:23:head : missing
2019-04-05 08:32:06.461456 osd.0 [ERR] 51.1fa deep-scrub : stat mismatch, got 19524/19519 objects, 0/0 clones, 19524/19519 dirty, 0/0 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 227386438/227371540 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes.
2019-04-05 08:32:06.461466 osd.0 [ERR] 51.1fa deep-scrub 5 missing, 0 inconsistent objects
2019-04-05 08:32:06.461469 osd.0 [ERR] 51.1fa deep-scrub 11 errors

The objects are still reported as missing.

Running repair again causes osd.0 to crash with the same backtrace.

#6 Updated by Iain Buclaw almost 5 years ago

The fix so far is switching the osd back to filestore.
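The revert itself is the usual OSD redeploy, roughly as follows (device names are illustrative; filestore needs a separate journal device or partition):

# systemctl stop ceph-osd@9
# ceph osd purge 9 --yes-i-really-mean-it
# ceph-volume lvm zap /dev/sdX --destroy
# ceph-volume lvm create --filestore --data /dev/sdX --journal /dev/sdY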

#7 Updated by Greg Farnum almost 5 years ago

  • Project changed from Ceph to bluestore

#8 Updated by Neha Ojha almost 5 years ago

  • Project changed from bluestore to RADOS

#9 Updated by David Zafman over 4 years ago

  • Duplicated by Bug #39115: ceph pg repair doesn't fix itself if osd is bluestore added

#10 Updated by David Zafman over 4 years ago

  • Subject changed from Draining filestore osd, removing, and adding new bluestore osd causes many PGs mapped to bluestore to go inconsistent to Draining filestore osd, removing, and adding new bluestore osd causes OSDs to crash

#11 Updated by David Zafman over 4 years ago

  • Related to Bug #43174: pgs inconsistent, union_shard_errors=missing added
