Subtask #2659 of Feature #2611: mon: Single-Paxos (closed)
mon: Single-Paxos: ceph tool -w subscriptions not being updated
% Done: 0%
Description
How to reproduce:
ubuntu@plana41:~/ceph/src$ ./ceph -w
   health HEALTH_WARN 24 pgs degraded; 24 pgs stuck unclean; recovery 149/298 degraded (50.000%); 1 mons down, quorum 0,1 a,b
   monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
   osdmap e8: 1 osds: 1 up, 1 in
   pgmap v169: 24 pgs: 24 active+degraded; 380 MB data, 30225 MB used, 404 GB / 457 GB avail; 149/298 degraded (50.000%)
   mdsmap e11: 3/3/3 up {0=b=up:active,1=a=up:active,2=c=up:active}

2012-06-27 09:52:49.436905 mon.0 [INF] pgmap v169: 24 pgs: 24 active+degraded; 380 MB data, 30225 MB used, 404 GB / 457 GB avail; 149/298 degraded (50.000%)
Leave it running and, in another terminal, do the following:
ubuntu@plana41:~/ceph/src$ ./ceph osd create `uuidgen`
1
ubuntu@plana41:~/ceph/src$ uuidgen
69676514-c383-45eb-af92-20b006394d67
ubuntu@plana41:~/ceph/src$ ./ceph osd create `uuidgen`
2
ubuntu@plana41:~/ceph/src$ ./ceph -s
   health HEALTH_WARN 24 pgs degraded; 24 pgs stuck unclean; recovery 149/298 degraded (50.000%); 1 mons down, quorum 0,1 a,b
   monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
   osdmap e13: 3 osds: 1 up, 1 in
   pgmap v177: 24 pgs: 24 active+degraded; 380 MB data, 30231 MB used, 404 GB / 457 GB avail; 149/298 degraded (50.000%)
   mdsmap e11: 3/3/3 up {0=b=up:active,1=a=up:active,2=c=up:active}
ubuntu@plana41:~/ceph/src$ ./ceph -w
   health HEALTH_WARN 24 pgs degraded; 24 pgs stuck unclean; recovery 149/298 degraded (50.000%); 1 mons down, quorum 0,1 a,b
   monmap e1: 3 mons at {a=127.0.0.1:6789/0,b=127.0.0.1:6790/0,c=127.0.0.1:6791/0}
   osdmap e13: 3 osds: 1 up, 1 in
   pgmap v177: 24 pgs: 24 active+degraded; 380 MB data, 30231 MB used, 404 GB / 457 GB avail; 149/298 degraded (50.000%)
   mdsmap e11: 3/3/3 up {0=b=up:active,1=a=up:active,2=c=up:active}

2012-06-27 09:55:01.309559 mon.0 [INF] pgmap v177: 24 pgs: 24 active+degraded; 380 MB data, 30231 MB used, 404 GB / 457 GB avail; 149/298 degraded (50.000%)
The first ./ceph -w didn't receive any subscription updates, even though updates did occur: compare the timestamp of the last log line in the first run (09:52:49, pgmap v169) with the one in the second run (09:55:01, pgmap v177).
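For context, ceph -w relies on a publish/subscribe pattern: the client registers a subscription with the monitor, and the monitor is expected to push every subsequent map and log update to it. The toy model below (plain Python, not Ceph code; all names are illustrative) captures the invariant this bug violates: an already-registered watcher must see every update published after it subscribed.

```python
# Toy publish/subscribe model. Illustrative only -- the class and method
# names here are NOT Ceph internals.

class Monitor:
    def __init__(self):
        self.version = 0
        self.subscribers = []   # callbacks registered by watchers

    def subscribe(self, callback):
        """Register a watcher; it is called on every future update."""
        self.subscribers.append(callback)

    def publish(self, event):
        """Apply an update and notify every registered subscriber.

        The reported bug behaves as if this notification loop were
        skipped for an existing watcher: state advances, but the
        watcher is never told.
        """
        self.version += 1
        for cb in self.subscribers:
            cb(self.version, event)


# Usage: one watcher subscribes, then two updates are published.
seen = []
mon = Monitor()
mon.subscribe(lambda v, e: seen.append((v, e)))
mon.publish("osdmap update")
mon.publish("pgmap update")
# Correct behavior: the watcher saw both updates. In the reported bug,
# `seen` would have stayed empty even though `version` advanced.
assert seen == [(1, "osdmap update"), (2, "pgmap update")]
```

In the reproduction above, the second ceph -w shows the new osdmap e13 only because it opens a fresh subscription; the first, still-running ceph -w is the stale watcher whose pushes stopped.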
Updated by Joao Eduardo Luis over 11 years ago
Can't recall if this was fixed at some point, or if the root cause was even related.
This must be tested again once the new tree is ready for testing.
Updated by Joao Eduardo Luis almost 11 years ago
- Status changed from New to Can't reproduce
- Remaining hours set to 0.00