Activity
From 09/16/2019 to 10/15/2019
10/15/2019
- 10:13 PM Bug #42332 (In Progress): CephContext::CephContextServiceThread might pause for 5 seconds at shut...
- 10:11 PM Bug #42332 (Resolved): CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
- The entry loop in CephContext::CephContextServiceThread doesn't check for thread exit prior to waiting. This can resu...
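The pattern at issue, as a minimal self-contained sketch (the member names here are stand-ins, not the actual CephContext fields): a service-thread loop that re-checks its exit flag before every wait wakes immediately on shutdown instead of sleeping out the rest of the interval.

    // Illustrative sketch only; _exit_flag/_cond/_lock are hypothetical names.
    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    struct ServiceThread {
      std::mutex _lock;
      std::condition_variable _cond;
      bool _exit_flag = false;

      void entry() {
        std::unique_lock<std::mutex> l(_lock);
        while (!_exit_flag) {                         // re-check before each wait, so shutdown
          _cond.wait_for(l, std::chrono::seconds(5)); // is not stalled for the full interval
        }
      }

      void shutdown() {
        {
          std::lock_guard<std::mutex> g(_lock);
          _exit_flag = true;
        }
        _cond.notify_all();                           // wake the waiting loop immediately
      }
    };

    int main() {
      ServiceThread t;
      std::thread worker([&] { t.entry(); });
      t.shutdown();                                   // returns promptly; no 5-second pause
      worker.join();
    }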
- 07:52 PM Backport #41918 (Resolved): mimic: osd: scrub error on big objects; make bluestore refuse to star...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30784
m...
- 07:44 PM Backport #41918: mimic: osd: scrub error on big objects; make bluestore refuse to start on big ob...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30784
merged
- 05:25 PM Backport #42326 (In Progress): nautilus: max_size from crushmap ignored when increasing size on pool
- 09:47 AM Backport #42326 (Resolved): nautilus: max_size from crushmap ignored when increasing size on pool
- https://github.com/ceph/ceph/pull/30941
- 11:18 AM Bug #42328 (Resolved): osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- Observing on the recent master when running rbd suite [1]:...
- 08:54 AM Feature #42321 (Fix Under Review): Add a new mode to balance pg layout by primary osds
- There has been an upmap optimizer since the Luminous version. The upmap optimizer helps balance PGs across OSDs,...
- 08:47 AM Bug #42060: Slow ops seen when one ceph private interface is shut down
- Yes, ~3 minutes after disabling the network, the OSDs went down. I started the network after 5 minutes and until th...
- 07:51 AM Backport #42126 (In Progress): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- 07:45 AM Backport #42127 (In Progress): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 07:40 AM Backport #42127 (New): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- 07:39 AM Backport #42128 (In Progress): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- 07:31 AM Backport #41844 (In Progress): mimic: tools/rados: allow list objects in a specific pg in a pool
- 07:23 AM Bug #36732: tools/rados: fix segmentation fault
- This fix was merged before the v14.2.0 (nautilus) release.
Backports:
* luminous https://github.com/ceph/ceph/p...
- 06:39 AM Backport #42240 (In Progress): mimic: Adding Placement Group id in Large omap log message
- 06:35 AM Backport #42242 (In Progress): nautilus: Adding Placement Group id in Large omap log message
- 06:33 AM Backport #42241 (In Progress): luminous: Adding Placement Group id in Large omap log message
10/14/2019
- 10:15 PM Documentation #42315 (New): Improve rados command usage, man page and tutorial
- 10:14 PM Documentation #42314 (New): Improve ceph-objectstore-tool usage, man page and create tutorial
- 02:56 PM Bug #42111 (Pending Backport): max_size from crushmap ignored when increasing size on pool
- 12:55 PM Backport #42153 (In Progress): luminous: Removed OSDs with outstanding peer failure reports crash...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:54 PM Backport #42152 (In Progress): nautilus: Removed OSDs with outstanding peer failure reports crash...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:53 PM Backport #42154 (In Progress): mimic: Removed OSDs with outstanding peer failure reports crash th...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:45 PM Bug #22350: nearfull OSD count in 'ceph -w'
- Note: backported to luminous via https://github.com/ceph/ceph/pull/30902
- 12:44 PM Backport #42138 (In Progress): luminous: Remove unused full and nearful output from OSDMap summary
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:40 PM Backport #42137 (In Progress): mimic: Remove unused full and nearful output from OSDMap summary
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:40 PM Backport #42136 (In Progress): nautilus: Remove unused full and nearful output from OSDMap summary
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:39 PM Backport #42128 (Need More Info): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- -https://github.com/ceph/ceph/pull/30576#issuecomment-541652768-
NOTE: this fixes a bug introduced into mimic by h...
- 12:33 PM Backport #42128 (In Progress): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:39 PM Backport #42126 (Need More Info): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.fi...
- -https://github.com/ceph/ceph/pull/30576#issuecomment-541652768-
NOTE: fixes a bug introduced into nautilus by htt...
- 12:34 PM Backport #42126 (In Progress): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:38 PM Backport #42127 (Need More Info): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.fi...
- -see https://github.com/ceph/ceph/pull/30576#issuecomment-541652768-
This is needed to fix a bug introduced into l...
- 12:31 PM Backport #42037 (In Progress): luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671:...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:28 PM Backport #42036 (In Progress): mimic: Enable auto-scaler and get src/osd/PeeringState.cc:3671: fa...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 12:21 PM Backport #41964 (In Progress): mimic: Segmentation fault in rados ls when using --pgid and --pool...
- Updated automatically by ceph-backport.sh version 15.0.0.6113
- 11:38 AM Backport #41961 (In Progress): mimic: tools/rados: add --pgid in help
- Updated automatically by ceph-backport.sh version 15.0.0.6113
10/13/2019
10/12/2019
- 09:17 AM Backport #41584 (Need More Info): mimic: backfill_toofull seen on cluster where the most full OSD...
- non-trivial due to refactoring in master, but if the nautilus backport gets accepted, we could cherry-pick from there...
- 09:07 AM Backport #41583 (In Progress): nautilus: backfill_toofull seen on cluster where the most full OSD...
10/11/2019
- 09:08 AM Feature #24099: osd: Improve workflow when creating OSD on raw block device if there was bluestor...
- Triggered the same issue! WTF?
- 08:56 AM Bug #42207: ceph osd df showing 0/0/0
- Yes, and it also shows an err state.
- 12:29 AM Bug #42186: "2019-10-04T19:31:51.053283+0000 osd.7 (osd.7) 108 : cluster [ERR] 2.5s0 shard 0(1) 2...
From the original description /a/sage-2019-10-04_18:20:43-rados-wip-sage-testing-2019-10-04-0923-distro-basic-smith...
10/10/2019
- 09:02 PM Bug #42115 (In Progress): Turn off repair pg state when leaving recovery
- 10:26 AM Backport #42259 (Resolved): nautilus: document new option mon_max_pg_per_osd
- https://github.com/ceph/ceph/pull/31300
- 10:26 AM Backport #42258 (Resolved): mimic: document new option mon_max_pg_per_osd
- https://github.com/ceph/ceph/pull/31875
- 09:38 AM Bug #41748: log [ERR] : 7.19 caller_ops.size 62 > log size 61
- Sage, the calls to log_weirdness() in the places you suggested are already in place. They were added by you as part o...
- 08:36 AM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
- Adding 14.2.4 as an affected version, as I am seeing the same issue on a 14.2.4 cluster that has recently had 9 OSDs ...
- 06:08 AM Documentation #42221 (Pending Backport): document new option mon_max_pg_per_osd
- https://github.com/ceph/ceph/pull/30787
- 05:54 AM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
- We do a cleanup before this https://github.com/ceph/ceph/blob/master/qa/tasks/cbt.py#L251-L275, but these seem to be ...
- 03:01 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
- Greg Farnum wrote:
> Okay, so the issue here is that osd.1 managed to reconnect to osd.5 and osd.9 without triggerin...
- 01:10 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
- Our cluster is running on Luminous 12.2.12.
I do not think PR https://github.com/ceph/ceph/pull/25343 can solve...
10/09/2019
- 09:28 PM Bug #42060: Slow ops seen when one ceph private interface is shut down
- Do the OSDs ever stay down once their cluster network is disabled?
Generally speaking, if they only have the clust...
- 09:24 PM Bug #42058 (Duplicate): OSD reconnected across map epochs, inconsistent pg logs created
- Oh sorry I didn't look at that PR. It is the correct fix; if we do another luminous point release it should show up o...
- 04:19 PM Bug #42058 (New): OSD reconnected across map epochs, inconsistent pg logs created
- Okay, so the issue here is that osd.1 managed to reconnect to osd.5 and osd.9 without triggering a wider reset of the...
- 07:13 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
- see PR: https://github.com/ceph/ceph/pull/25343 which also avoids triggering RESETSESSION.
- 06:52 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
- @Greg
Assume pg 1.1a maps to osds[1,5,9], osd1 is the primary osd.
Time 1: osd1 osd5 osd9 was online and could...
- 09:09 PM Bug #42173 (Closed): _pinned_map closest pinned map ver 252615 not available! error: (2) No such ...
- Rocksdb repair isn't guaranteed to get all the data back - it sounds like it lost some maps in this case. For further...
- 09:07 PM Bug #42207: ceph osd df showing 0/0/0
- Does this persist after you create a pool?
- 03:07 PM Bug #42214: cosbench workloads failing (seen on master and mimic backport testing)
- Investigation gist: https://gist.github.com/rzarzynski/2d3b9be8986bebb6435c4b5a5bd7df13.
On today's meeting Kefu poi...
- 02:59 PM Bug #42214 (Duplicate): cosbench workloads failing (seen on master and mimic backport testing)
- 11:43 AM Bug #42214 (In Progress): cosbench workloads failing (seen on master and mimic backport testing)
- 03:00 PM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
- 08:36 AM Backport #42242 (Resolved): nautilus: Adding Placement Group id in Large omap log message
- https://github.com/ceph/ceph/pull/30923
- 08:36 AM Backport #42241 (Resolved): luminous: Adding Placement Group id in Large omap log message
- https://github.com/ceph/ceph/pull/30922
- 08:36 AM Backport #42240 (Resolved): mimic: Adding Placement Group id in Large omap log message
- https://github.com/ceph/ceph/pull/30924
- 07:54 AM Feature #41359 (Pending Backport): Adding Placement Group id in Large omap log message
10/08/2019
- 12:09 PM Bug #42060: Slow ops seen when one ceph private interface is shut down
- We monitor rados outage in both scenarios.
For luminous, when the interface was shut down - ~60 seconds rados outag...
- 10:10 AM Bug #42225: target_max_bytes and target_max_objects should accept values in [M,G,T]iB and M, G, T...
- I will open PR for this tracker once PR#30701 gets merged.
- 10:03 AM Bug #42225 (Resolved): target_max_bytes and target_max_objects should accept values in [M,G,T]iB ...
- 1. To flush or evict at 1 TB, we execute the following:...
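For context, the unit arithmetic this tracker wants the CLI to absorb, as a small standalone sketch (illustration only; it does not call any Ceph API):

    // Byte values behind the "1 TB" example above.
    #include <cstdint>
    #include <iostream>

    int main() {
      const uint64_t TiB = 1ULL << 40;                    // 1 TiB = 1,099,511,627,776 bytes
      const uint64_t TB  = 1000ULL * 1000 * 1000 * 1000;  // 1 TB  = 1,000,000,000,000 bytes
      std::cout << "target_max_bytes for 1 TiB: " << 1 * TiB << "\n";
      std::cout << "target_max_bytes for 1 TB:  " << 1 * TB  << "\n";
    }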
- 08:56 AM Bug #41191: osd: scrub error on big objects; make bluestore refuse to start on big objects
- @Florian - backports on the way. I'll try to keep my eye on them.
- 08:02 AM Bug #41191: osd: scrub error on big objects; make bluestore refuse to start on big objects
- As far as I can see, without this patch there is no way to detect excessively large objects other than doing @rados s...
- 08:56 AM Backport #41919 (In Progress): luminous: osd: scrub error on big objects; make bluestore refuse t...
- Updated automatically by ceph-backport.sh version 15.0.0.5775
- 08:51 AM Backport #41918 (In Progress): mimic: osd: scrub error on big objects; make bluestore refuse to s...
- Updated automatically by ceph-backport.sh version 15.0.0.5775
- 08:50 AM Backport #41920 (In Progress): nautilus: osd: scrub error on big objects; make bluestore refuse t...
- Updated automatically by ceph-backport.sh version 15.0.0.5775
- 08:02 AM Documentation #42221 (Resolved): document new option mon_max_pg_per_osd
- https://github.com/ceph/ceph/pull/28525 was opened against mimic, but the bug exists in master.
Please open a PR t...
- 07:59 AM Backport #38206 (Resolved): mimic: osds allows to partially start more than N+2
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29241
m...
- 07:58 AM Backport #41447 (Resolved): mimic: osd/PrimaryLogPG: Access destroyed references in finish_degrad...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30291
m...
- 07:58 AM Backport #41863 (Resolved): mimic: Mimic MONs have slow/long running ops
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30481
m...
- 07:58 AM Backport #41922 (Resolved): mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30485
m...
- 07:52 AM Backport #41704 (Resolved): mimic: oi(object_info_t).size does not match on disk size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30275
m...
- 07:34 AM Bug #42215: cluster [WRN] Health check failed: 1 pool(s) have non-power-of-two pg_num (POOL_PG_NU...
- the power of two check was (or is in the process of being) backported to nautilus
- 06:07 AM Bug #42173: _pinned_map closest pinned map ver 252615 not available! error: (2) No such file or d...
- Hi,
sorry yes I forgot to elaborate.
We had another issue resulting in crashing mon because of apparent rocksdb cor...
- 05:50 AM Feature #41359 (Fix Under Review): Adding Placement Group id in Large omap log message
- 04:47 AM Bug #41924: asynchronous recovery can not function under certain circumstances
- Neha Ojha wrote:
> @Nathan The PR that merged is based on https://github.com/ceph/ceph/pull/24004, which has not bee...
- 04:45 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...
- David Zafman wrote:
> This is already included in back porting of https://tracker.ceph.com/issues/40640
Thanks, D...
- 12:14 AM Bug #42111: max_size from crushmap ignored when increasing size on pool
10/07/2019
- 09:07 PM Support #42174 (Closed): Ceph Nautilus OSD isn't able to add to cluster
- The irc channel or user mailing list are better ways to get help with this sort of thing: https://ceph.io/irc/ :)
- 09:06 PM Bug #42173 (Need More Info): _pinned_map closest pinned map ver 252615 not available! error: (2) ...
- This came up in "[ceph-users] mon sudden crash loop - pinned map" as well; is that perhaps the same cluster?
Were ...
- 07:31 PM Backport #38206: mimic: osds allows to partially start more than N+2
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29241
merged
- 07:26 PM Backport #41447: mimic: osd/PrimaryLogPG: Access destroyed references in finish_degraded_object
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30291
merged
- 07:25 PM Backport #41863: mimic: Mimic MONs have slow/long running ops
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30481
merged
- 07:25 PM Backport #41863: mimic: Mimic MONs have slow/long running ops
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30481
merged
- 07:24 PM Bug #40287: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- merged https://github.com/ceph/ceph/pull/30485
- 07:21 PM Backport #41704: mimic: oi(object_info_t).size does not match on disk size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30275
merged
- 07:21 PM Bug #42215 (New): cluster [WRN] Health check failed: 1 pool(s) have non-power-of-two pg_num (POOL...
Tests either need a whitelist or fixes to use power-of-two PG nums. This was introduced by the new health check.
Exam...
- 07:19 PM Bug #42214 (Duplicate): cosbench workloads failing (seen on master and mimic backport testing)
- I'm not sure where the error is in the teuthology.log but they always report:
Command failed on xxxxxxxx with status...
- 01:40 PM Bug #42207 (New): ceph osd df showing 0/0/0
- I have a Ceph Mimic cluster with 1 mon and 2 OSD nodes. After adding both OSDs to the cluster, my data store is still ...
- 01:36 PM Bug #42102: use-after-free in Objecter timer handing
- I tested a Q&D patch that made start_tick take a unique_lock, but that didn't seem to fix the issue, so the race does...
- 12:40 PM Backport #42202 (Rejected): luminous: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- https://github.com/ceph/ceph/pull/31033
- 12:40 PM Backport #42201 (Rejected): mimic: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- https://github.com/ceph/ceph/pull/31032
- 12:40 PM Backport #42200 (Resolved): nautilus: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- https://github.com/ceph/ceph/pull/31031
- 12:40 PM Backport #42199 (Resolved): luminous: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- https://github.com/ceph/ceph/pull/31030
- 12:40 PM Backport #42198 (Resolved): mimic: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- https://github.com/ceph/ceph/pull/31029
- 12:39 PM Backport #42197 (Resolved): nautilus: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- https://github.com/ceph/ceph/pull/31028
- 02:49 AM Bug #42178 (Duplicate): scrub errors due to missing objects
10/06/2019
- 10:48 PM Bug #42186: "2019-10-04T19:31:51.053283+0000 osd.7 (osd.7) 108 : cluster [ERR] 2.5s0 shard 0(1) 2...
- /a/sage-2019-10-06_19:16:50-rados-master-distro-basic-smithi/4364824
"2019-10-06T21:24:06.336207+0000 osd.1 (osd.1... - 05:41 PM Backport #42168: nautilus: readable.sh test fails
- Nathan Cutler wrote:
> @Venky Please use "src/script/backport-create-issue" from the master branch to create backpor...
- 02:05 PM Bug #42177 (Pending Backport): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- 02:05 PM Bug #38345: mon: segv in MonOpRequest::~MonOpRequest OpHistory::cleanup
- ...
10/05/2019
- 07:36 PM Bug #41924: asynchronous recovery can not function under certain circumstances
- @Nathan The PR that merged is based on https://github.com/ceph/ceph/pull/24004, which has not been backported to mimi...
- 02:38 PM Bug #41924: asynchronous recovery can not function under certain circumstances
- Adding mimic backport, since the first attempted fix ( see https://github.com/ceph/ceph/pull/30459 ) targeted mimic.
- 03:14 AM Feature #40955 (Resolved): Extend the scrub sleep time when the period is outside [osd_scrub_begi...
- 02:48 AM Bug #41743 (Resolved): Long heartbeat ping times on front interface seen, longest is 2237.999 mse...
- This is already included in back porting of https://tracker.ceph.com/issues/40640
- 01:42 AM Bug #42114 (Pending Backport): mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- 01:37 AM Bug #42186 (Can't reproduce): "2019-10-04T19:31:51.053283+0000 osd.7 (osd.7) 108 : cluster [ERR] ...
- /a/sage-2019-10-04_18:20:43-rados-wip-sage-testing-2019-10-04-0923-distro-basic-smithi/4358878
10/04/2019
- 06:25 PM Bug #42111 (Fix Under Review): max_size from crushmap ignored when increasing size on pool
- 06:23 PM Bug #42111: max_size from crushmap ignored when increasing size on pool
- - With fix:...
- 06:20 PM Bug #42111: max_size from crushmap ignored when increasing size on pool
- - I was able to reproduce in the master branch in `vstart` cluster....
- 05:45 PM Bug #42111 (In Progress): max_size from crushmap ignored when increasing size on pool
- 02:38 PM Backport #42168: nautilus: readable.sh test fails
- @Venky Please use "src/script/backport-create-issue" from the master branch to create backport issues. Or, if you nee...
- 02:08 PM Backport #40082 (Need More Info): luminous: osd: Better error message when OSD count is less than...
- Does not pass make check, not clear how to make it pass, and luminous is borderline EOL so the question is: how impor...
- 09:08 AM Bug #26970 (Resolved): src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:08 AM Bug #38040 (Resolved): osd_map_message_max default is too high?
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:07 AM Bug #38416 (Resolved): crc cache should be invalidated when posting preallocated rx buffers
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:07 AM Bug #38827 (Resolved): valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandl...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:07 AM Bug #38828 (Resolved): should set EPOLLET flag on del_event()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:06 AM Bug #38839 (Resolved): .mgrstat failed to decode mgrstat state; luminous dev version?
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:05 AM Bug #40377 (Resolved): osd beacon sometimes has empty pg list
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:59 AM Backport #41534 (Resolved): nautilus: valgrind: UninitCondition in ceph::crypto::onwire::AES128GC...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29928
m...
- 08:58 AM Backport #41703 (Resolved): nautilus: oi(object_info_t).size does not match on disk size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30278
m...
- 08:58 AM Backport #41963 (Resolved): nautilus: Segmentation fault in rados ls when using --pgid and --pool...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30605
m...
- 08:58 AM Backport #41960 (Resolved): nautilus: tools/rados: add --pgid in help
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30607
m...
- 08:57 AM Backport #38277 (Resolved): mimic: osd_map_message_max default is too high?
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29242
m...
- 08:56 AM Backport #38852 (Resolved): mimic: .mgrstat failed to decode mgrstat state; luminous dev version?
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29249
m...
- 08:56 AM Backport #38437 (Resolved): mimic: crc cache should be invalidated when posting preallocated rx b...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29247
m...
- 08:56 AM Backport #40884 (Resolved): mimic: ceph mgr module ls -f plain crashes mon
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29593
m...
- 08:56 AM Backport #40949 (Resolved): mimic: Better default value for osd_snap_trim_sleep
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29732
m...
- 08:53 AM Backport #38450: mimic: src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29976
m...
- 04:24 AM Backport #38450 (Resolved): mimic: src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
- 08:53 AM Backport #41595: mimic: ceph-objectstore-tool can't remove head with bad snapset
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30081
m...
- 04:28 AM Backport #41595 (Resolved): mimic: ceph-objectstore-tool can't remove head with bad snapset
- 08:53 AM Backport #40083 (Resolved): mimic: osd: Better error message when OSD count is less than osd_pool...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30180
m...
- 08:47 AM Backport #40732 (Resolved): mimic: mon: auth mon isn't loading full KeyServerData after restart
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30181
m...
- 08:47 AM Backport #41291 (Resolved): mimic: filestore pre-split may not split enough directories
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30182
m...
- 08:46 AM Backport #41351 (Resolved): mimic: hidden corei7 requirement in binary packages
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30183
m...
- 08:46 AM Backport #41490 (Resolved): mimic: OSDCap.PoolClassRNS test aborts
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30214
m...
- 08:46 AM Backport #41502 (Resolved): mimic: Warning about past_interval bounds on deleting pg
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30222
m...
- 08:44 AM Backport #40464 (Resolved): mimic: osd beacon sometimes has empty pg list
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29253
m...
- 08:44 AM Backport #38351: mimic: Limit loops waiting for force-backfill/force-recovery to happen
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29245
m...
- 04:33 AM Backport #38351 (Resolved): mimic: Limit loops waiting for force-backfill/force-recovery to happen
- 08:43 AM Backport #38856 (Resolved): mimic: should set EPOLLET flag on del_event()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29250
m...
- 08:43 AM Backport #40179: mimic: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29251
m...
- 04:32 AM Backport #40179 (Resolved): mimic: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
- 04:32 AM Bug #40078 (Resolved): qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
10/03/2019
- 11:45 PM Backport #38277: mimic: osd_map_message_max default is too high?
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29242
merged
- 11:39 PM Backport #38852: mimic: .mgrstat failed to decode mgrstat state; luminous dev version?
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29249
merged
- 11:37 PM Backport #38437: mimic: crc cache should be invalidated when posting preallocated rx buffers
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29247
merged
- 11:37 PM Backport #40884: mimic: ceph mgr module ls -f plain crashes mon
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29593
merged
- 11:36 PM Backport #40949: mimic: Better default value for osd_snap_trim_sleep
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29732
merged
- 11:35 PM Backport #38450: mimic: src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29976
merged
- 11:34 PM Backport #41595: mimic: ceph-objectstore-tool can't remove head with bad snapset
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30081
merged
- 11:34 PM Backport #40083: mimic: osd: Better error message when OSD count is less than osd_pool_default_size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30180
merged
- 11:32 PM Backport #40732: mimic: mon: auth mon isn't loading full KeyServerData after restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30181
merged
- 11:32 PM Backport #41291: mimic: filestore pre-split may not split enough directories
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30182
merged
- 11:31 PM Backport #41351: mimic: hidden corei7 requirement in binary packages
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30183
merged
- 11:31 PM Backport #41490: mimic: OSDCap.PoolClassRNS test aborts
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30214
merged
- 11:30 PM Backport #41502: mimic: Warning about past_interval bounds on deleting pg
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30222
merged
- 11:27 PM Backport #40464: mimic: osd beacon sometimes has empty pg list
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29253
merged
- 11:26 PM Backport #38351: mimic: Limit loops waiting for force-backfill/force-recovery to happen
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29245
merged
- 11:24 PM Backport #38856: mimic: should set EPOLLET flag on del_event()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29250
merged
- 11:23 PM Backport #40179: mimic: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
- David Zafman wrote:
> https://github.com/ceph/ceph/pull/29251
merged
- 08:57 PM Bug #42102: use-after-free in Objecter timer handing
- I will note that the test has to run for several minutes before the ASAN warning pops. ASAN does slow things down, bu...
- 08:39 PM Bug #42114 (Fix Under Review): mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- 07:52 PM Backport #41534: nautilus: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxH...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29928
merged
- 07:51 PM Backport #41703: nautilus: oi(object_info_t).size does not match on disk size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30278
merged
- 07:50 PM Backport #41963: nautilus: Segmentation fault in rados ls when using --pgid and --pool/-p togethe...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30605
merged
- 07:49 PM Backport #41960: nautilus: tools/rados: add --pgid in help
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30607
merged
- 06:12 PM Bug #42178 (Duplicate): scrub errors due to missing objects
- ...
- 06:07 PM Bug #42176 (Duplicate): FAILED ceph_assert(obc) in PrimaryLogPG::recover_backfill()
- 05:40 PM Bug #42176 (Duplicate): FAILED ceph_assert(obc) in PrimaryLogPG::recover_backfill()
- ...
- 06:01 PM Bug #42177 (Fix Under Review): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- 05:58 PM Bug #42177 (Resolved): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- First the object is deleted,...
- 05:34 PM Bug #42175 (Can't reproduce): _txc_add_transaction error (2) No such file or directory not handl...
- ...
- 05:12 PM Bug #38219: rebuild-mondb hangs
- Just a quick note as this might be relevant for the decision whether or not to integrate this PR:
Running mimic 13...
- 04:36 PM Support #42174 (Closed): Ceph Nautilus OSD isn't able to add to cluster
- Ceph cluster on Debian, Nautilus version. The issue is that any time I try creating the data store, the OSDs don't get added ...
- 04:18 PM Bug #36631: potential deadlock in PG::_scan_snaps when repairing snap mapper
- jewel is EOL - @Mykola, does any of this apply to luminous?
- 04:01 PM Bug #42173 (Closed): _pinned_map closest pinned map ver 252615 not available! error: (2) No such ...
- -4> 2019-10-03 17:58:44.023 7fde2e2f9700 5 mon.km-fsn-1-dc4-m1-797678@0(leader).paxos(paxos active c 4545611..45463...
- 01:09 PM Backport #42168 (In Progress): nautilus: readable.sh test fails
- 11:18 AM Backport #42168 (Resolved): nautilus: readable.sh test fails
- https://github.com/ceph/ceph/pull/30704
- 11:19 AM Bug #41424 (Pending Backport): readable.sh test fails
- 07:13 AM Feature #41905: Add ability to change fsid of cluster
- Splitting the cluster meant no data copy from A to B. Minimal downtime for the RGW application and no downtime for th...
10/02/2019
- 10:02 PM Feature #41905: Add ability to change fsid of cluster
- This sounds to me like the kind of thing we don't want to support directly. What's the use case for splitting a clust...
- 09:09 PM Bug #42060 (Need More Info): Slow ops seen when one ceph private interface is shut down
- What workload are you running; does it have its own metrics? Is there evidence that Nautilus is slower or behaving wo...
- 09:04 PM Bug #42113: ceph -h usage should indicate CephChoices --name= is sometime required
- No failures so this is normal priority?
- 04:43 PM Bug #41754: Use dump_stream() instead of dump_float() for floats where max precision isn't helpful
From json.org:
JSON (JavaScript Object Notation) is a lightweight data-interchange format. *It is easy for human...
- 01:20 PM Bug #20924 (Resolved): osd: leaked Session on osd.7
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:19 PM Feature #37935 (Resolved): Add clear-data-digest command to objectstore tool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:13 PM Documentation #41004 (Resolved): doc: pg_num should always be a power of two
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:12 PM Documentation #41403 (Resolved): doc: mon_health_to_clog_* values flipped
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:11 PM Backport #42154 (Resolved): mimic: Removed OSDs with outstanding peer failure reports crash the m...
- https://github.com/ceph/ceph/pull/30903
- 01:11 PM Backport #42153 (Resolved): luminous: Removed OSDs with outstanding peer failure reports crash th...
- https://github.com/ceph/ceph/pull/30905
- 01:11 PM Backport #42152 (Resolved): nautilus: Removed OSDs with outstanding peer failure reports crash th...
- https://github.com/ceph/ceph/pull/30904
- 01:09 PM Backport #42141 (Resolved): nautilus: asynchronous recovery can not function under certain circum...
- https://github.com/ceph/ceph/pull/31077
- 01:09 PM Backport #42138 (Resolved): luminous: Remove unused full and nearful output from OSDMap summary
- https://github.com/ceph/ceph/pull/30902
- 01:09 PM Backport #42137 (Resolved): mimic: Remove unused full and nearful output from OSDMap summary
- https://github.com/ceph/ceph/pull/30901
- 01:09 PM Backport #42136 (Resolved): nautilus: Remove unused full and nearful output from OSDMap summary
- https://github.com/ceph/ceph/pull/30900
- 01:08 PM Backport #42128 (Resolved): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- https://github.com/ceph/ceph/pull/30898
- 01:08 PM Backport #42127 (Resolved): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- https://github.com/ceph/ceph/pull/30926
- 01:07 PM Backport #42126 (Resolved): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- https://github.com/ceph/ceph/pull/30899
- 01:07 PM Backport #42125 (Resolved): nautilus: weird daemon key seen in health alert
- https://github.com/ceph/ceph/pull/31039
- 12:13 PM Backport #24360 (Resolved): luminous: osd: leaked Session on osd.7
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29859
m...
- 12:08 AM Backport #24360: luminous: osd: leaked Session on osd.7
- Samuel Just wrote:
> https://github.com/ceph/ceph/pull/29859
merged
- 12:12 PM Backport #38436 (Resolved): luminous: crc cache should be invalidated when posting preallocated r...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29248
m...
- 12:10 PM Backport #41568 (Resolved): nautilus: doc: pg_num should always be a power of two
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30004
m...
- 12:09 PM Backport #41529 (Resolved): nautilus: doc: mon_health_to_clog_* values flipped
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30003
m...
- 11:19 AM Backport #42120 (In Progress): nautilus: pg_autoscaler should show a warning if pg_num isn't a po...
- 11:09 AM Backport #42120 (Resolved): nautilus: pg_autoscaler should show a warning if pg_num isn't a power...
- https://github.com/ceph/ceph/pull/30689
- 10:54 AM Bug #42102: use-after-free in Objecter timer handing
- Su Yue wrote:
> Jeff Layton wrote:
> > While hunting a crash in tracker #42026, I ran across this bug when testing ...
- 06:24 AM Bug #42102: use-after-free in Objecter timer handing
- Su Yue wrote:
> Jeff Layton wrote:
> > While hunting a crash in tracker #42026, I ran across this bug when testing ...
- 03:47 AM Bug #42102: use-after-free in Objecter timer handing
- Jeff Layton wrote:
> While hunting a crash in tracker #42026, I ran across this bug when testing with ASAN:
>
> [...
- 10:18 AM Backport #41921: nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- duplicate PR https://github.com/ceph/ceph/pull/30568 was closed
- 03:32 AM Feature #41359 (In Progress): Adding Placement Group id in Large omap log message
- 02:37 AM Bug #42115 (Resolved): Turn off repair pg state when leaving recovery
We set the repair pg state during recovery initiated by repair. To handle all cases we need to clear it when trans...
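A toy sketch of the behaviour the title asks for, assuming the repair flag is tracked as a state bit on the PG (the names and structure below are invented for illustration, not the actual fix):

    // Toy model only -- invented names, not the real PG code or the actual patch.
    #include <cstdint>
    #include <iostream>

    struct ToyPG {
      static constexpr uint32_t RECOVERING = 1u << 0;
      static constexpr uint32_t REPAIR     = 1u << 1;
      uint32_t state = 0;

      void start_repair_recovery() { state |= RECOVERING | REPAIR; }

      void leave_recovery() {
        state &= ~RECOVERING;
        state &= ~REPAIR;   // clear repair on every exit path, not just the happy one
      }
    };

    int main() {
      ToyPG pg;
      pg.start_repair_recovery();
      pg.leave_recovery();
      std::cout << pg.state << "\n";   // 0: no stale "repair" flag left behind
    }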
10/01/2019
- 10:58 PM Backport #38436: luminous: crc cache should be invalidated when posting preallocated rx buffers
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29248
merged
- 10:19 PM Bug #42114 (Resolved): mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
- By default, I see...
- 09:25 PM Bug #42113 (Fix Under Review): ceph -h usage should indicate CephChoices --name= is sometime requ...
- ...
- 07:26 PM Bug #42111 (Resolved): max_size from crushmap ignored when increasing size on pool
- Hello,
when the crushmap-rule has "max_size=2" for example, and you set size=3 on the pool, all I/O stops withou...
- 02:05 PM Backport #41922: mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- https://github.com/ceph/ceph/pull/30547 was closed because identical PR https://github.com/ceph/ceph/pull/30485 was o...
- 11:28 AM Bug #42102: use-after-free in Objecter timer handing
- Found by running LibRadosMisc.ShutdownRace test built with -DWITH_ASAN=ON. I had to set:...
- 10:31 AM Bug #42102 (Can't reproduce): use-after-free in Objecter timer handing
- While hunting a crash in tracker #42026, I ran across this bug when testing with ASAN:...
- 01:08 AM Feature #40419 (Resolved): [RFE] Estimated remaining time on recovery?
09/30/2019
- 12:28 PM Backport #42095 (In Progress): nautilus: global osd crash in DynamicPerfStats::add_to_reports
- 12:28 PM Backport #42095 (Resolved): nautilus: global osd crash in DynamicPerfStats::add_to_reports
- https://github.com/ceph/ceph/pull/30648
- 05:46 AM Backport #41958 (In Progress): nautilus: scrub errors after quick split/merge cycle
- https://github.com/ceph/ceph/pull/30643
09/29/2019
- 09:58 PM Bug #42082 (Duplicate): pybind/rados: set_omap() crash on py3
- 10:17 AM Bug #42079 (Pending Backport): weird daemon key seen in health alert
- 09:40 AM Bug #42079: weird daemon key seen in health alert
- an alternative fix: https://github.com/ceph/ceph/pull/30635
09/28/2019
- 02:25 PM Bug #41748: log [ERR] : 7.19 caller_ops.size 62 > log size 61
- I suggest putting a call to log_weirdness() in the Reset state entry point, so we can tell if the problem came from t...
- 08:01 AM Bug #41891 (Pending Backport): global osd crash in DynamicPerfStats::add_to_reports
09/27/2019
- 05:08 PM Feature #41647 (Pending Backport): pg_autoscaler should show a warning if pg_num isn't a power of...
- 03:55 PM Bug #42015 (Pending Backport): Remove unused full and nearful output from OSDMap summary
- 07:02 AM Bug #42015 (Resolved): Remove unused full and nearful output from OSDMap summary
- 03:27 PM Bug #42084 (New): df output difference if 8 OSD cluster has 5+3 shared EC pool vs larger cluster
I created an 8 OSD cluster with 1 EC pool 5+3 and this ceph df detail output....
- 02:42 PM Bug #42082 (Resolved): pybind/rados: set_omap() crash on py3
- Details see https://github.com/ceph/ceph/pull/30483#issuecomment-535873920
- 12:47 PM Bug #36572 (Closed): ceph-in: --connect-timeout doesn't work while pinging mon
- 12:30 PM Bug #42079 (Resolved): weird daemon key seen in health alert
- e.g.:
19 slow ops, oldest one blocked for 34 sec, daemons [osd,2,osd,4] have slow ops.
- 05:19 AM Bug #41680 (Pending Backport): Removed OSDs with outstanding peer failure reports crash the monitor
- 02:40 AM Bug #42052 (Pending Backport): mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- 02:39 AM Bug #41924 (Pending Backport): asynchronous recovery can not function under certain circumstances
- 01:36 AM Bug #42058 (In Progress): OSD reconnected across map epochs, inconsistent pg logs created
- 12:27 AM Backport #41845 (In Progress): luminous: tools/rados: allow list objects in a specific pg in a pool
- 12:26 AM Backport #41959 (In Progress): luminous: tools/rados: add --pgid in help
- 12:26 AM Backport #41962 (In Progress): luminous: Segmentation fault in rados ls when using --pgid and --p...
- https://github.com/ceph/ceph/pull/30608
09/26/2019
- 11:04 PM Backport #41960 (In Progress): nautilus: tools/rados: add --pgid in help
- 10:53 PM Backport #41963 (In Progress): nautilus: Segmentation fault in rados ls when using --pgid and --p...
- 10:48 AM Bug #42060: Slow ops seen when one ceph private interface is shut down
- Hi,
When I mention private network I am referring to the cluster_network.
- 10:30 AM Bug #42060 (Need More Info): Slow ops seen when one ceph private interface is shut down
- Environment -
5 node Nautilus cluster
67 OSDs per node - 4TB HDD per OSD
We are trying a use case where we shut...
- 08:53 AM Bug #42058 (Duplicate): OSD reconnected across map epochs, inconsistent pg logs created
- Get the lossless cluster connection between osd.2 and osd.47 for example.
When osd.47 is restarted and at the same...
- 08:37 AM Bug #40035: smoke.sh failing in jenkins "make check" test randomly
- In addition to what Laura reported, it must be said that this failure is seen in the jenkins job only
when running the j...
- 08:26 AM Bug #40035: smoke.sh failing in jenkins "make check" test randomly
- Kefu Chai wrote:
> [...]
>
> see https://jenkins.ceph.com/job/ceph-pull-requests/817/console
>
> i tried to re...
- 03:21 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...
This shows the send on osd.0 and receive at osd.6. ...
- 02:52 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...
- This shows the front and back interface. I don't know which is which, but it already sent the second interface maybe...
- 02:32 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...
I confused the front and back interface with a retransmit. The ports are the 2 interfaces.
-At the ping receivi...
09/25/2019
- 11:41 PM Bug #41924 (Fix Under Review): asynchronous recovery can not function under certain circumstances
- 09:27 PM Bug #41924: asynchronous recovery can not function under certain circumstances
- 09:46 PM Bug #41874 (Resolved): mon-osdmap-prune.sh fails
- 09:45 PM Bug #41873 (Resolved): test-erasure-code.sh fails
- 09:28 PM Bug #41939 (Need More Info): Scaling with unfound options might leave PGs in state "unknown"
- 09:28 PM Bug #41939: Scaling with unfound options might leave PGs in state "unknown"
- How are we ending up in this state? What were the previous states of those PGs?
- 09:24 PM Bug #41943 (Need More Info): ceph-mgr fails to report OSD status correctly
- Do you have any other information from that OSD while this happened?
- 09:22 PM Bug #41943: ceph-mgr fails to report OSD status correctly
- Sounds like this OSD was somehow up enough that it responded to peer heartbeats, but was not processing any client re...
- 09:03 PM Bug #41908 (Resolved): TMAPUP operation results in OSD assertion failure
- 12:11 PM Bug #42052 (Resolved): mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
- > OSDMap.cc: 4603: FAILED ceph_assert(osd_weight.count(i.first))
>
> ceph version v15.0.0-5429-gac828d7 (ac828d732...
- 10:50 AM Bug #41866 (Fix Under Review): OSD cannot report slow operation warnings in time.
- 10:49 AM Bug #41866: OSD cannot report slow operation warnings in time.
- 08:26 AM Backport #41921 (In Progress): nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` com...
- https://github.com/ceph/ceph/pull/30568
09/24/2019
- 09:50 PM Bug #38724: _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1...
- Bumping priority based on community feedback.
- 07:53 PM Backport #42037 (Resolved): luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671: fa...
- https://github.com/ceph/ceph/pull/30896
- 07:52 PM Backport #42036 (Resolved): mimic: Enable auto-scaler and get src/osd/PeeringState.cc:3671: faile...
- https://github.com/ceph/ceph/pull/30895
- 04:11 PM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
- the log files were created by cosbench. see https://github.com/intel-cloud/cosbench/blob/ca68b333e85c51829ea68f203877...
- 12:19 PM Backport #41922 (In Progress): mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- https://github.com/ceph/ceph/pull/30547
- 12:15 PM Backport #41917 (In Progress): nautilus: osd: failure result of do_osd_ops not logged in prepare_...
- https://github.com/ceph/ceph/pull/30546
09/23/2019
- 09:33 PM Bug #42015 (In Progress): Remove unused full and nearful output from OSDMap summary
- 09:27 PM Bug #42015 (Resolved): Remove unused full and nearful output from OSDMap summary
in OSDMap::print_oneline_summary() and OSDMap::print_summary() (CEPH_OSDMAP_FULL and CEPH_OSDMAP_NEARFULL checks)
- 08:41 PM Backport #42014 (In Progress): nautilus: Enable auto-scaler and get src/osd/PeeringState.cc:3671:...
- 08:35 PM Backport #42014 (Resolved): nautilus: Enable auto-scaler and get src/osd/PeeringState.cc:3671: fa...
- https://github.com/ceph/ceph/pull/30528
- 07:42 PM Feature #41647 (Fix Under Review): pg_autoscaler should show a warning if pg_num isn't a power of...
- 07:20 PM Bug #42012: mon osd_snap keys grow unbounded
- This is (mostly) fixed in master by https://github.com/ceph/ceph/pull/30518. There is still one set of per-epoch key...
- 03:41 PM Bug #42012: mon osd_snap keys grow unbounded
- Link to the full "dump-keys | grep osd_snap"
https://wustl.box.com/s/3r7bgv32hs5hw4jmgmywbo9qvqrqsmwn
- 03:26 PM Bug #42012 (Resolved): mon osd_snap keys grow unbounded
- ...
- 07:19 PM Bug #41680: Removed OSDs with outstanding peer failure reports crash the monitor
- 05:09 PM Bug #41944: inconsistent pool count in ceph -s output
- Is this after pools are deleted? In that case, it's #40011
- 04:27 PM Backport #41864 (In Progress): luminous: Mimic MONs have slow/long running ops
- 02:27 PM Bug #37875: osdmaps aren't being cleaned up automatically on healthy cluster
- Still ongoing here, with mimic too. On one 13.2.6 cluster we have this, for example:...
- 02:12 PM Bug #41816 (Pending Backport): Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed as...
- 09:02 AM Backport #41964 (Resolved): mimic: Segmentation fault in rados ls when using --pgid and --pool/-p...
- https://github.com/ceph/ceph/pull/30893
- 09:02 AM Backport #41963 (Resolved): nautilus: Segmentation fault in rados ls when using --pgid and --pool...
- https://github.com/ceph/ceph/pull/30605
- 09:02 AM Backport #41962 (Resolved): luminous: Segmentation fault in rados ls when using --pgid and --pool...
- 09:02 AM Backport #41961 (Resolved): mimic: tools/rados: add --pgid in help
- https://github.com/ceph/ceph/pull/30893
- 09:02 AM Backport #41960 (Resolved): nautilus: tools/rados: add --pgid in help
- https://github.com/ceph/ceph/pull/30607
- 09:02 AM Backport #41959 (Resolved): luminous: tools/rados: add --pgid in help
- https://github.com/ceph/ceph/pull/30608
- 09:02 AM Backport #41958 (Resolved): nautilus: scrub errors after quick split/merge cycle
- https://github.com/ceph/ceph/pull/30643
09/22/2019
- 10:12 PM Cleanup #41876 (Pending Backport): tools/rados: add --pgid in help
- 11:55 AM Bug #41950 (Can't reproduce): crimson compile
- Can I know which version of the Seastar code crimson uses in the ceph-15 version?
When compiling, the following option is output:
<...
- 04:12 AM Bug #41936 (Pending Backport): scrub errors after quick split/merge cycle
- 03:45 AM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
- ...
- 02:09 AM Bug #41946 (Duplicate): cbt perf test fails due to leftover in /home/ubuntu/cephtest
- ...
- 03:42 AM Bug #41875 (Pending Backport): Segmentation fault in rados ls when using --pgid and --pool/-p tog...
09/20/2019
- 09:01 PM Bug #41156 (Rejected): dump_float() poor output
- 08:47 PM Bug #41817 (Closed): qa/standalone/scrub/osd-recovery-scrub.sh timed out waiting for scrub
- 07:17 PM Bug #41913 (Fix Under Review): With auto scaler operating stopping an OSD can lead to COT crashin...
- The real bug here is that the pg split, so the pgid specified to COT is wrong. The attached PR adds a check in COT ...
- 06:22 PM Bug #41944 (Resolved): inconsistent pool count in ceph -s output
- ...
- 06:08 PM Bug #41816 (Fix Under Review): Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed as...
- 05:36 PM Bug #41816: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_comp...
- The complete_to pointer is already at log end before recover_got() is called. I think it's because during split() we ...
- 04:35 PM Bug #41943 (Closed): ceph-mgr fails to report OSD status correctly
- After an inexplicable cluster event that resulted in around 10% of our OSDs falsely reported down (and shortly after ...
- 12:47 PM Bug #41834: qa: EC Pool configuration and slow op warnings for OSDs caused by recent master changes
- Might as well add some RBD failures while piling on:
http://pulpito.ceph.com/trociny-2019-09-19_12:41:57-rbd-wip-m...
- 02:13 AM Bug #41939 (Need More Info): Scaling with unfound options might leave PGs in state "unknown"
With osd_pool_default_pg_autoscale_mode="on"
../qa/run-standalone.sh TEST_rep_recovery_unfound
The test failu...
- 01:59 AM Backport #41863 (In Progress): mimic: Mimic MONs have slow/long running ops
- https://github.com/ceph/ceph/pull/30481
- 01:57 AM Backport #41862 (In Progress): nautilus: Mimic MONs have slow/long running ops
- https://github.com/ceph/ceph/pull/30480
09/19/2019
- 11:12 PM Bug #41817: qa/standalone/scrub/osd-recovery-scrub.sh timed out waiting for scrub
- The fix for this particular issue is to just disable the auto scaler because it just causes a hang in the test but no cr...
- 10:59 PM Bug #41923: 3 different ceph-osd asserts caused by enabling auto-scaler
I think this stack better reflects the thread that hit the suicide timeout. However, every time I've seen this thre...
- 09:41 PM Bug #41923: 3 different ceph-osd asserts caused by enabling auto-scaler
Look at the assert(op.hinfo); it is caused by the corruption injected by the test. I'll verify that the asserts are...
- 12:05 AM Bug #41923 (Can't reproduce): 3 different ceph-osd asserts caused by enabling auto-scaler
Change config osd_pool_default_pg_autoscale_mode to "on"
Saw these 4 core dumps on 3 different sub-tests.
../...
- 04:51 PM Bug #41936 (Fix Under Review): scrub errors after quick split/merge cycle
- 04:51 PM Bug #41936 (Resolved): scrub errors after quick split/merge cycle
- PGs split and then merge soon after. There is a pg stat scrub mismatch.
- 04:48 PM Bug #41834: qa: EC Pool configuration and slow op warnings for OSDs caused by recent master changes
- This shows up in rgw's ec pool tests also. In osd logs, I see slow ops on MOSDECSubOpRead/Reply messages, and they al...
- 09:32 AM Feature #41647: pg_autoscaler should show a warning if pg_num isn't a power of two
- Note: contrary to what the bug description says, pg_autoscaler will (apparently) *not* be automatically turned on wit...
- 01:56 AM Bug #41924 (Resolved): asynchronous recovery can not function under certain circumstances
- guoracle reports that:
> In the asynchronous recovery feature,
> the asynchronous recovery target OSD is selected ...
- 01:39 AM Bug #41866: OSD cannot report slow operation warnings in time.
- *report_callback* thread is also blocked on PG::lock with MGRClient::lock locked while getting the pg stats. This in ...
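A minimal toy model of that blocking chain (simplified names, not Ceph code): the I/O thread holds the per-PG lock while the device is hung, and the reporting thread, already holding the reporting-side lock, stalls trying to take it, so the warning never goes out.

    // Toy model of the lock chain described above.
    #include <chrono>
    #include <mutex>
    #include <thread>

    std::mutex pg_lock;    // stands in for PG::lock, held across the stuck I/O
    std::mutex mgr_lock;   // stands in for the reporting-side lock

    void io_thread() {
      std::lock_guard<std::mutex> l(pg_lock);
      std::this_thread::sleep_for(std::chrono::seconds(5));   // "hung device" (indefinite in the real case)
    }

    void report_thread() {
      std::lock_guard<std::mutex> m(mgr_lock);   // reporting lock taken first...
      std::lock_guard<std::mutex> p(pg_lock);    // ...then stuck here, so no slow-op warning is sent
    }

    int main() {
      std::thread a(io_thread);
      std::thread b(report_thread);
      a.join();
      b.join();
    }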
- 12:54 AM Bug #41816: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_comp...
This can be reproduced by setting config osd_pool_default_pg_autoscale_mode="on" and executing this test:
../qa/...
- 12:29 AM Bug #41754: Use dump_stream() instead of dump_float() for floats where max precision isn't helpful
I was suspicious that the trailing 0999999994 in the elapsed time is noise. Could this be caused by a float being...
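That suspicion is easy to confirm with a tiny standalone check: a double that has no exact binary representation prints the long noisy form once it is dumped at maximum precision, and the short form at ordinary stream precision.

    // Standalone check of the float-noise suspicion.
    #include <iomanip>
    #include <iostream>

    int main() {
      double elapsed = 10.1;   // any value without an exact binary representation
      std::cout << std::setprecision(20) << elapsed << "\n";  // long noisy form, e.g. 10.0999999999999996...
      std::cout << std::setprecision(6)  << elapsed << "\n";  // 10.1
    }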
09/18/2019
- 06:33 PM Backport #41922 (Resolved): mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- https://github.com/ceph/ceph/pull/30485
- 06:33 PM Backport #41921 (Resolved): nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` command
- https://github.com/ceph/ceph/pull/30486
- 06:31 PM Backport #41920 (Resolved): nautilus: osd: scrub error on big objects; make bluestore refuse to s...
- https://github.com/ceph/ceph/pull/30783
- 06:31 PM Backport #41919 (Resolved): luminous: osd: scrub error on big objects; make bluestore refuse to s...
- https://github.com/ceph/ceph/pull/30785
- 06:31 PM Backport #41918 (Resolved): mimic: osd: scrub error on big objects; make bluestore refuse to star...
- https://github.com/ceph/ceph/pull/30784
- 06:31 PM Backport #41917 (Resolved): nautilus: osd: failure result of do_osd_ops not logged in prepare_tra...
- https://github.com/ceph/ceph/pull/30546
- 04:25 PM Bug #41900 (Resolved): auto-scaler breaks many standalone tests
- 03:38 PM Bug #41913 (Resolved): With auto scaler operating stopping an OSD can lead to COT crashing instea...
- ...
- 03:03 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
- Answering myself - seems that rbd_support cannot be disabled anyway
# ceph mgr module disable rbd_support
Error E...
- 10:59 AM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
- I don't believe this command was running at that time, however "rbd_support" mgr module was active. Could this be the...
- 10:53 AM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
- Marcin, I believe I know the cause and I am now discussing the fix [1]. A workaround could be not to use "rbd perf im...
- 10:13 AM Bug #41891 (Fix Under Review): global osd crash in DynamicPerfStats::add_to_reports
- 06:24 AM Bug #41891 (In Progress): global osd crash in DynamicPerfStats::add_to_reports
- 01:55 PM Bug #41908 (Fix Under Review): TMAPUP operation results in OSD assertion failure
- 01:47 PM Bug #41908 (Resolved): TMAPUP operation results in OSD assertion failure
- In 'do_tmapup', the object is READ into a 'newop' structure and then when it is re-written, the same 'newop' structur...
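A self-contained toy of that hazard (not the OSD code; names are invented): reusing the op object that already carries the read result for the subsequent write trips the sanity check, while a fresh op does not.

    // Toy illustration only -- invented names throughout.
    #include <cassert>
    #include <string>

    struct ToyOp {
      std::string indata;    // payload supplied with the op
      std::string outdata;   // payload produced by executing the op
    };

    void do_read(ToyOp& op)  { op.outdata = "existing object contents"; }

    void do_write(const ToyOp& op) {
      assert(op.outdata.empty());   // write ops are expected to carry no output yet
    }

    int main() {
      ToyOp readop;
      do_read(readop);              // fills readop.outdata
      // do_write(readop);          // reusing the same op would fire the assert
      ToyOp writeop;                // a fresh op (or clearing the old one) avoids it
      writeop.indata = readop.outdata;
      do_write(writeop);
    }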
- 10:52 AM Bug #41677: Cephmon:fix mon crash
- @shuguang what is the exact version of ceph-mon? I cannot match the backtrace with the source code of master HEAD.
- 09:46 AM Feature #41905 (New): Add ability to change fsid of cluster
- There is a case where you want to change the fsid of a cluster: when you have split a cluster into two different c...
09/17/2019
- 09:50 PM Bug #41900 (Resolved): auto-scaler breaks many standalone tests
Caused by https://github.com/ceph/ceph/pull/30112
In some cases I had to kill processes to get past hung tests. ...
- 08:46 PM Bug #41816: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_comp...
- This crash didn't reproduce for me using run-standalone.sh with the auto scaler turned off.
- 08:35 PM Bug #40287 (Pending Backport): OSDMonitor: missing `pool_id` field in `osd pool ls` command
- 08:30 PM Bug #41191 (Pending Backport): osd: scrub error on big objects; make bluestore refuse to start on...
- 08:29 PM Bug #41210 (Pending Backport): osd: failure result of do_osd_ops not logged in prepare_transactio...
- @shuguang wang did you want this to be backported to a release older than nautilus?
- 06:59 PM Bug #41336: All OSD Faild after Reboot.
- Hi,
two questions:
- How to find out if a pool is affected?
"ceph osd erasure-code-profile get" does not list... - 05:04 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
- Yes, I use "rbd perf image iotop/iostat" (one of the reasons for upgrade:-) ). Not exporting per image data with prom...
- 03:51 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
- Marcin, are you using `rbd perf image iotop|iostat` commands? Or may be prometheus mgr module with rbd per image stat...
- 01:49 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
- As crash seems to be related to stats reporting - don't know if it is related, but it was soon after eliminating "Leg...
- 10:30 AM Bug #41891 (Resolved): global osd crash in DynamicPerfStats::add_to_reports
- Hi,
during routine host maintenance, I've encountered massive osd crash across entire cluster. The sequence of event... - 01:19 PM Feature #40420 (Need More Info): Introduce an ceph.conf option to disable HEALTH_WARN when nodeep...
- https://github.com/ceph/ceph/pull/29422 has been merged, but not yet backported
- 08:05 AM Bug #41754: Use dump_stream() instead of dump_float() for floats where max precision isn't helpful
- Regarding elapsed time, it might be important (for `compact` it is not, but for benchmarking it is). Another important thi...
- 06:15 AM Backport #41238 (In Progress): nautilus: Implement mon_memory_target
09/16/2019
- 10:10 PM Cleanup #41876 (Fix Under Review): tools/rados: add --pgid in help
- 10:09 PM Cleanup #41876 (Resolved): tools/rados: add --pgid in help
- 09:39 PM Bug #41817 (In Progress): qa/standalone/scrub/osd-recovery-scrub.sh timed out waiting for scrub
- This is likely caused by enabling the auto scaler.
- 03:27 PM Bug #41817: qa/standalone/scrub/osd-recovery-scrub.sh timed out waiting for scrub
- /a/kchai-2019-09-15_15:37:26-rados-wip-kefu-testing-2019-09-15-1533-distro-basic-mira/4311115/
/a/pdonnell-2019-09-1...
- 08:05 PM Bug #41875 (Fix Under Review): Segmentation fault in rados ls when using --pgid and --pool/-p tog...
- 07:55 PM Bug #41875 (Resolved): Segmentation fault in rados ls when using --pgid and --pool/-p together as...
- - Works fine with only --pgid...
- 07:57 PM Bug #41816: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_comp...
- Reproduced with logs: /a/nojha-2019-09-13_21:45:51-rados:standalone-master-distro-basic-smithi/4304313/remote/smithi1...
- 03:25 PM Bug #40522: on_local_recover doesn't touch?
- /a/pdonnell-2019-09-14_22:40:03-rados-master-distro-basic-smithi/4307679/
/a/kchai-2019-09-15_15:37:26-rados-wip-kef...
- 03:23 PM Bug #41874 (Resolved): mon-osdmap-prune.sh fails
- ...
- 03:19 PM Bug #41873 (Resolved): test-erasure-code.sh fails
- ...
- 01:46 PM Backport #41238: nautilus: Implement mon_memory_target
- The old PR is unlinked from the tracker as more commits need to be pulled in for this backport. I will update this tr...
- 01:04 PM Backport #41238 (Need More Info): nautilus: Implement mon_memory_target
- first attempted backport https://github.com/ceph/ceph/pull/29652 was closed - apparently, the backport is not trivial...
- 01:23 PM Backport #40993: mimic: Ceph status in some cases does not report slow ops
- just for completeness - the mimic fix is (I think): https://github.com/ceph/ceph/pull/30391
- 10:39 AM Bug #41866: OSD cannot report slow operation warnings in time.
- assumed that bluestore is used.
- 10:23 AM Bug #41866 (Fix Under Review): OSD cannot report slow operation warnings in time.
- If an underlying device is blocked due to H/W issues, a thread that checks slow ops can’t report slow op warning in t...
- 07:21 AM Backport #41864 (Resolved): luminous: Mimic MONs have slow/long running ops
- https://github.com/ceph/ceph/pull/30519
- 07:21 AM Backport #41863 (Resolved): mimic: Mimic MONs have slow/long running ops
- https://github.com/ceph/ceph/pull/30481
- 07:21 AM Backport #41862 (Resolved): nautilus: Mimic MONs have slow/long running ops
- https://github.com/ceph/ceph/pull/30480
- 07:14 AM Backport #41845 (Resolved): luminous: tools/rados: allow list objects in a specific pg in a pool
- https://github.com/ceph/ceph/pull/30608
- 07:14 AM Backport #41844 (Resolved): mimic: tools/rados: allow list objects in a specific pg in a pool
- https://github.com/ceph/ceph/pull/30893