osd: PG eternally stuck in 'unfound_recovery'
A PG can become eternally stuck in 'unfound_recovery' after some OSDs are marked down.
The following steps reproduce the problem.
1) Create an EC 2+1 pool. Assume a PG has up/acting set [1,0,2].
2) Execute "ceph osd out osd.0 osd.2". The PG now has up/acting set [1,3,5].
3) Put some objects into the PG.
4) Execute "ceph osd in osd.0 osd.2". The PG starts recovering back to [1,0,2].
5) Execute "ceph osd down osd.3 osd.5". These downs are momentary; osd.3
and osd.5 boot again instantly.
This leads the PG to transition to 'unfound_recovery' and stay there forever,
even though all OSDs are up.
The situation is resolved by marking down an OSD in the acting set.
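
As a rough shell sketch of steps 1)-5) on a test cluster (the pool name, EC profile name, object name, and the OSD/PG ids shown are illustrative and will differ per cluster):

  # 1) create an EC 2+1 pool (names are illustrative)
  ceph osd erasure-code-profile set ec21 k=2 m=1
  ceph osd pool create ecpool 1 1 erasure ec21
  ceph pg dump pgs_brief            # note the PG's up/acting set, e.g. [1,0,2]

  # 2) take the original members out so the PG remaps, e.g. to [1,3,5]
  ceph osd out osd.0 osd.2

  # 3) write some objects into the pool
  rados -p ecpool put obj1 /etc/hosts

  # 4) bring the OSDs back in; recovery back to [1,0,2] starts
  ceph osd in osd.0 osd.2

  # 5) momentarily mark the temporary members down (they rejoin immediately)
  ceph osd down osd.3 osd.5

  # the PG now reports unfound objects and stays in 'unfound_recovery'
  ceph health detail
  ceph pg dump_stuck unclean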
6) Execute "ceph osd down osd.0", then unfound objects are resolved
and the PG restarts recovering.
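
A minimal sketch of the workaround in step 6), with a couple of commands to confirm that recovery resumes (the PG id 1.0 is taken from the attached log below; adjust for your cluster):

  # mark a member of the acting set down to force the replicas to re-peer
  ceph osd down osd.0

  # the unfound objects should clear and recovery should resume
  ceph health detail
  ceph pg 1.0 query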
From my investigation: if the downed OSD is not a member of the current up/acting set,
its PG might stay in 'ReplicaActive' and discard peering requests from the primary.
As a result, the primary OSD can never exit the unfound state.
The PGs of a downed OSD should transition to the 'Reset' state and start peering.
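
A small sketch of how to confirm this from the replica side, assuming the default log path and debug_osd raised to 10 or higher (both are assumptions; adjust the path and the PG id 1.0 for your cluster):

  # on the node hosting the downed-but-not-acting OSD (osd.3 here), check that the PG
  # stays in Started/ReplicaActive and never logs the expected transition to Reset
  grep -E 'ReplicaActive|should_restart_peering' /var/log/ceph/ceph-osd.3.log

  # from a client/mon node, the primary still reports unfound objects
  ceph pg 1.0 query
  ceph pg 1.0 list_unfound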
#3 Updated by Kouya Shimura about 1 year ago
- File ceph-osd.3.log.gz added
Attached the full log (ceph-osd.3.log.gz); the relevant excerpt from osd.3 is below.
### pg1.0s1 is in 'Started/ReplicaActive' state and osd.3 is not a member of acting set [1,0,2]
2018-06-06 09:46:15.740 7ff794727700 10 osd.3 pg_epoch: 33 pg[1.0s1... [1,0,2]] state<Started/ReplicaActive>: Activate Finished
...
### Executed "ceph osd down osd.3 osd.5" at 09:46:17
...
2018-06-06 09:46:20.532 7ff79df3a700 1 osd.3 35 state: booting -> active
...
2018-06-06 09:46:20.532 7ff794727700 10 osd.3 pg_epoch: 35 pg[1.0s1...] state<Started>: Started advmap
### After this, no message "should_restart_peering, transitioning to Reset" (the PG stays in ReplicaActive)
...