Activity

From 09/17/2019 to 10/16/2019

10/16/2019

11:26 PM Backport #41449: mimic: mon: C_AckMarkedDown has not handled the Callback Arguments
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30213
merged
Yuri Weinstein
11:07 PM Backport #39537: luminous: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log()....
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28989
merged
Yuri Weinstein
12:48 PM Bug #42341 (New): OSD PGs are not being purged
related ML thread: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-October/037017.html
Apparently some PG...
Anonymous
09:02 AM Feature #40420 (Resolved): Introduce a ceph.conf option to disable HEALTH_WARN when nodeep-scrub...
I posed David's question on backport targets at https://github.com/ceph/ceph/pull/29422#issuecomment-532215897 and it... Nathan Cutler

10/15/2019

10:13 PM Bug #42332 (In Progress): CephContext::CephContextServiceThread might pause for 5 seconds at shut...
Jason Dillaman
10:11 PM Bug #42332 (Resolved): CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
The entry loop in CephContext::CephContextServiceThread doesn't check for thread exit prior to waiting. This can resu... Jason Dillaman
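The pattern Jason describes can be sketched like this (a Python stand-in for illustration; the real code is C++): the loop must re-check the exit flag before each wait, and the wait itself must be interruptible, so shutdown returns promptly instead of stalling for up to a full interval.

```python
import threading

# Illustrative sketch, not the actual Ceph code: a service thread whose
# loop checks the exit flag *before* blocking, and whose wait wakes early
# when exit() signals the condition.
class ServiceThread:
    def __init__(self, interval=5.0):
        self._cond = threading.Condition()
        self._exit = False
        self._interval = interval
        self._thread = threading.Thread(target=self._entry)

    def start(self):
        self._thread.start()

    def _entry(self):
        with self._cond:
            while True:
                if self._exit:  # check before waiting, not only after
                    return
                # wait_for returns early when exit() notifies with the
                # predicate satisfied; otherwise it ticks every interval
                self._cond.wait_for(lambda: self._exit,
                                    timeout=self._interval)

    def exit(self):
        with self._cond:
            self._exit = True
            self._cond.notify_all()
        self._thread.join()
```

Without the pre-wait check and the predicate-based wake-up, a shutdown request that arrives while the thread is blocked would have to ride out the remainder of the 5-second wait.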
07:52 PM Backport #41918 (Resolved): mimic: osd: scrub error on big objects; make bluestore refuse to star...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30784
m...
Nathan Cutler
07:44 PM Backport #41918: mimic: osd: scrub error on big objects; make bluestore refuse to start on big ob...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30784
merged
Yuri Weinstein
05:25 PM Backport #42326 (In Progress): nautilus: max_size from crushmap ignored when increasing size on pool
Vikhyat Umrao
09:47 AM Backport #42326 (Resolved): nautilus: max_size from crushmap ignored when increasing size on pool
https://github.com/ceph/ceph/pull/30941 Nathan Cutler
11:18 AM Bug #42328 (Resolved): osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
Observing on the recent master when running rbd suite [1]:... Mykola Golub
08:54 AM Feature #42321 (Fix Under Review): Add a new mode to balance pg layout by primary osds
An upmap optimizer has existed since the Luminous release. The upmap optimizer helps balance PGs across OSDs,... Rixin Luo
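The proposal can be illustrated with a toy calculation (hypothetical data, not Ceph code): even when the PG count per OSD is balanced, the primary assignments, which drive read traffic, may not be.

```python
from collections import Counter

# Hypothetical helper for illustration: given a pg -> acting-set map
# (first OSD in the acting set is the primary), count how often each
# OSD serves as primary.
def primary_counts(pg_acting_sets):
    return Counter(acting[0] for acting in pg_acting_sets.values() if acting)

# Three PGs spread across osds 1, 5, 9: each OSD holds three PG copies,
# so the existing balancer sees nothing to fix, yet osd.1 is primary for
# every PG and would take all the reads.
pgs = {"1.0": [1, 5, 9], "1.1": [1, 9, 5], "1.2": [1, 5, 9]}
```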
08:47 AM Bug #42060: Slow ops seen when one ceph private interface is shut down
Yes, ~3 minutes after disabling the network, the OSDs went down. I restarted the network after 5 minutes, and until th... Nokia ceph-users
07:51 AM Backport #42126 (In Progress): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
Nathan Cutler
07:45 AM Backport #42127 (In Progress): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
07:40 AM Backport #42127 (New): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
Nathan Cutler
07:39 AM Backport #42128 (In Progress): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
Nathan Cutler
07:31 AM Backport #41844 (In Progress): mimic: tools/rados: allow list objects in a specific pg in a pool
Nathan Cutler
07:23 AM Bug #36732: tools/rados: fix segmentation fault
This fix was merged before the v14.2.0 (nautilus) release.
Backports:
* luminous https://github.com/ceph/ceph/p...
Nathan Cutler
06:39 AM Backport #42240 (In Progress): mimic: Adding Placement Group id in Large omap log message
Vikhyat Umrao
06:35 AM Backport #42242 (In Progress): nautilus: Adding Placement Group id in Large omap log message
Brad Hubbard
06:33 AM Backport #42241 (In Progress): luminous: Adding Placement Group id in Large omap log message
Brad Hubbard

10/14/2019

10:15 PM Documentation #42315 (New): Improve rados command usage, man page and tutorial
David Zafman
10:14 PM Documentation #42314 (Resolved): Improve ceph-objectstore-tool usage, man page and create tutorial
David Zafman
02:56 PM Bug #42111 (Pending Backport): max_size from crushmap ignored when increasing size on pool
Kefu Chai
12:55 PM Backport #42153 (In Progress): luminous: Removed OSDs with outstanding peer failure reports crash...
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:54 PM Backport #42152 (In Progress): nautilus: Removed OSDs with outstanding peer failure reports crash...
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:53 PM Backport #42154 (In Progress): mimic: Removed OSDs with outstanding peer failure reports crash th...
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:45 PM Bug #22350: nearfull OSD count in 'ceph -w'
Note: backported to luminous via https://github.com/ceph/ceph/pull/30902 Nathan Cutler
12:44 PM Backport #42138 (In Progress): luminous: Remove unused full and nearfull output from OSDMap summary
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:40 PM Backport #42137 (In Progress): mimic: Remove unused full and nearfull output from OSDMap summary
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:40 PM Backport #42136 (In Progress): nautilus: Remove unused full and nearfull output from OSDMap summary
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:39 PM Backport #42128 (Need More Info): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
-https://github.com/ceph/ceph/pull/30576#issuecomment-541652768-
NOTE: this fixes a bug introduced into mimic by h...
Nathan Cutler
12:33 PM Backport #42128 (In Progress): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:39 PM Backport #42126 (Need More Info): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.fi...
-https://github.com/ceph/ceph/pull/30576#issuecomment-541652768-
NOTE: fixes a bug introduced into nautilus by htt...
Nathan Cutler
12:34 PM Backport #42126 (In Progress): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:38 PM Backport #42127 (Need More Info): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.fi...
-see https://github.com/ceph/ceph/pull/30576#issuecomment-541652768-
This is needed to fix a bug introduced into l...
Nathan Cutler
12:31 PM Backport #42037 (In Progress): luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671:...
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:28 PM Backport #42036 (In Progress): mimic: Enable auto-scaler and get src/osd/PeeringState.cc:3671: fa...
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
12:21 PM Backport #41964 (In Progress): mimic: Segmentation fault in rados ls when using --pgid and --pool...
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler
11:38 AM Backport #41961 (In Progress): mimic: tools/rados: add --pgid in help
Updated automatically by ceph-backport.sh version 15.0.0.6113 Nathan Cutler

10/13/2019

09:55 PM Bug #42082 (Pending Backport): pybind/rados: set_omap() crash on py3
Brad Hubbard

10/12/2019

09:17 AM Backport #41584 (Need More Info): mimic: backfill_toofull seen on cluster where the most full OSD...
non-trivial due to refactoring in master, but if the nautilus backport gets accepted, we could cherry-pick from there... Nathan Cutler
09:07 AM Backport #41583 (In Progress): nautilus: backfill_toofull seen on cluster where the most full OSD...
Nathan Cutler

10/11/2019

09:08 AM Feature #24099: osd: Improve workflow when creating OSD on raw block device if there was bluestor...
Triggered the same issue! WTF? Марк Коренберг
08:56 AM Bug #42207: ceph osd df showing 0/0/0
Yes, and it also shows an err state. kennedy osei
12:29 AM Bug #42186: "2019-10-04T19:31:51.053283+0000 osd.7 (osd.7) 108 : cluster [ERR] 2.5s0 shard 0(1) 2...

From the original description /a/sage-2019-10-04_18:20:43-rados-wip-sage-testing-2019-10-04-0923-distro-basic-smith...
David Zafman

10/10/2019

09:02 PM Bug #42115 (In Progress): Turn off repair pg state when leaving recovery
David Zafman
10:26 AM Backport #42259 (Resolved): nautilus: document new option mon_max_pg_per_osd
https://github.com/ceph/ceph/pull/31300 Nathan Cutler
10:26 AM Backport #42258 (Resolved): mimic: document new option mon_max_pg_per_osd
https://github.com/ceph/ceph/pull/31875 Nathan Cutler
09:38 AM Bug #41748: log [ERR] : 7.19 caller_ops.size 62 > log size 61
Sage, the calls to log_weirdness() in the places you suggested are already in place. They were added by you as part o... Sridhar Seshasayee
08:36 AM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
Adding 14.2.4 as an affected version, as I am seeing the same issue on a 14.2.4 cluster that has recently had 9 OSDs ... Florian Haas
06:08 AM Documentation #42221 (Pending Backport): document new option mon_max_pg_per_osd
https://github.com/ceph/ceph/pull/30787 Kefu Chai
05:54 AM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
We do a cleanup before this https://github.com/ceph/ceph/blob/master/qa/tasks/cbt.py#L251-L275, but these seem to be ... Neha Ojha
03:01 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
Greg Farnum wrote:
> Okay, so the issue here is that osd.1 managed to reconnect to osd.5 and osd.9 without triggerin...
相洋 于
01:10 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
Our cluster is running on Luminous 12.2.12.
I do not think PR https://github.com/ceph/ceph/pull/25343 can solve...
相洋 于

10/09/2019

09:28 PM Bug #42060: Slow ops seen when one ceph private interface is shut down
Do the OSDs ever stay down once their cluster network is disabled?
Generally speaking, if they only have the clust...
Greg Farnum
09:24 PM Bug #42058 (Duplicate): OSD reconnected across map epochs, inconsistent pg logs created
Oh sorry I didn't look at that PR. It is the correct fix; if we do another luminous point release it should show up o... Greg Farnum
04:19 PM Bug #42058 (New): OSD reconnected across map epochs, inconsistent pg logs created
Okay, so the issue here is that osd.1 managed to reconnect to osd.5 and osd.9 without triggering a wider reset of the... Greg Farnum
07:13 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
See PR https://github.com/ceph/ceph/pull/25343, which also avoids triggering RESETSESSION.
相洋 于
06:52 AM Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
@Greg
Assume pg 1.1a maps to osds[1,5,9], osd1 is the primary osd.
Time 1: osd1 osd5 osd9 was online and could...
相洋 于
09:09 PM Bug #42173 (Closed): _pinned_map closest pinned map ver 252615 not available! error: (2) No such ...
Rocksdb repair isn't guaranteed to get all the data back - it sounds like it lost some maps in this case. For further... Josh Durgin
09:07 PM Bug #42207: ceph osd df showing 0/0/0
Does this persist after you create a pool? Josh Durgin
03:07 PM Bug #42214: cosbench workloads failing (seen on master and mimic backport testing)
Investigation gist: https://gist.github.com/rzarzynski/2d3b9be8986bebb6435c4b5a5bd7df13.
On today's meeting Kefu poi...
Radoslaw Zarzynski
02:59 PM Bug #42214 (Duplicate): cosbench workloads failing (seen on master and mimic backport testing)
Kefu Chai
11:43 AM Bug #42214 (In Progress): cosbench workloads failing (seen on master and mimic backport testing)
Radoslaw Zarzynski
03:00 PM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
Kefu Chai
08:36 AM Backport #42242 (Resolved): nautilus: Adding Placement Group id in Large omap log message
https://github.com/ceph/ceph/pull/30923 Nathan Cutler
08:36 AM Backport #42241 (Resolved): luminous: Adding Placement Group id in Large omap log message
https://github.com/ceph/ceph/pull/30922 Nathan Cutler
08:36 AM Backport #42240 (Resolved): mimic: Adding Placement Group id in Large omap log message
https://github.com/ceph/ceph/pull/30924 Nathan Cutler
07:54 AM Feature #41359 (Pending Backport): Adding Placement Group id in Large omap log message
Brad Hubbard

10/08/2019

12:09 PM Bug #42060: Slow ops seen when one ceph private interface is shut down
We monitored the rados outage in both scenarios.
For luminous, when the interface was shut down - ~60 seconds rados outag...
Nokia ceph-users
10:10 AM Bug #42225: target_max_bytes and target_max_objects should accept values in [M,G,T]iB and M, G, T...
I will open a PR for this tracker once PR#30701 is merged. Prashant D
10:03 AM Bug #42225 (Resolved): target_max_bytes and target_max_objects should accept values in [M,G,T]iB ...
1. To flush or evict at 1 TB, we execute the following:... Prashant D
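The request is for these options to accept human-readable suffixes instead of raw byte counts. The conversion it asks for can be sketched as follows (hypothetical helper, not Ceph code):

```python
# Hypothetical helper for illustration: accept IEC (KiB, MiB, GiB, TiB)
# and SI (K, M, G, T) suffixes and return the plain byte count that
# target_max_bytes currently requires.
def parse_size(s: str) -> int:
    s = s.strip()
    iec = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}
    si = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12}
    # IEC suffixes are checked first so "1KiB" is not misread as SI "B".
    for suffix, mult in iec.items():
        if s.endswith(suffix):
            return int(float(s[:-len(suffix)]) * mult)
    for suffix, mult in si.items():
        if s.endswith(suffix):
            return int(float(s[:-1]) * mult)
    return int(s)  # bare byte count, today's accepted form
```

With such a helper, "1TiB" would map to 1099511627776 bytes rather than requiring the user to compute and paste the raw number.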
08:56 AM Bug #41191: osd: scrub error on big objects; make bluestore refuse to start on big objects
@Florian - backports on the way. I'll try to keep my eye on them. Nathan Cutler
08:02 AM Bug #41191: osd: scrub error on big objects; make bluestore refuse to start on big objects
As far as I can see, without this patch there is no way to detect excessively large objects other than doing @rados s... Florian Haas
08:56 AM Backport #41919 (In Progress): luminous: osd: scrub error on big objects; make bluestore refuse t...
Updated automatically by ceph-backport.sh version 15.0.0.5775 Nathan Cutler
08:51 AM Backport #41918 (In Progress): mimic: osd: scrub error on big objects; make bluestore refuse to s...
Updated automatically by ceph-backport.sh version 15.0.0.5775 Nathan Cutler
08:50 AM Backport #41920 (In Progress): nautilus: osd: scrub error on big objects; make bluestore refuse t...
Updated automatically by ceph-backport.sh version 15.0.0.5775 Nathan Cutler
08:02 AM Documentation #42221 (Resolved): document new option mon_max_pg_per_osd
https://github.com/ceph/ceph/pull/28525 was opened against mimic, but the bug exists in master.
Please open a PR t...
Nathan Cutler
07:59 AM Backport #38206 (Resolved): mimic: osds allows to partially start more than N+2
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29241
m...
Nathan Cutler
07:58 AM Backport #41447 (Resolved): mimic: osd/PrimaryLogPG: Access destroyed references in finish_degrad...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30291
m...
Nathan Cutler
07:58 AM Backport #41863 (Resolved): mimic: Mimic MONs have slow/long running ops
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30481
m...
Nathan Cutler
07:58 AM Backport #41922 (Resolved): mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30485
m...
Nathan Cutler
07:52 AM Backport #41704 (Resolved): mimic: oi(object_info_t).size does not match on disk size
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30275
m...
Nathan Cutler
07:34 AM Bug #42215: cluster [WRN] Health check failed: 1 pool(s) have non-power-of-two pg_num (POOL_PG_NU...
The power-of-two check was (or is in the process of being) backported to nautilus. Nathan Cutler
06:07 AM Bug #42173: _pinned_map closest pinned map ver 252615 not available! error: (2) No such file or d...
Hi,
sorry, yes, I forgot to elaborate.
We had another issue resulting in a crashing mon because of apparent rocksdb cor...
Anonymous
05:50 AM Feature #41359 (Fix Under Review): Adding Placement Group id in Large omap log message
Neha Ojha
04:47 AM Bug #41924: asynchronous recovery can not function under certain circumstances
Neha Ojha wrote:
> @Nathan The PR that merged is based on https://github.com/ceph/ceph/pull/24004, which has not bee...
Nathan Cutler
04:45 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...
David Zafman wrote:
> This is already included in back porting of https://tracker.ceph.com/issues/40640
Thanks, D...
Nathan Cutler
12:14 AM Bug #42111: max_size from crushmap ignored when increasing size on pool
Vikhyat Umrao

10/07/2019

09:07 PM Support #42174 (Closed): Ceph Nautilus OSD isn't able to add to cluster
The irc channel or user mailing list are better ways to get help with this sort of thing: https://ceph.io/irc/ :) Greg Farnum
09:06 PM Bug #42173 (Need More Info): _pinned_map closest pinned map ver 252615 not available! error: (2) ...
This came up in "[ceph-users] mon sudden crash loop - pinned map" as well; is that perhaps the same cluster?
Were ...
Greg Farnum
07:31 PM Backport #38206: mimic: osds allows to partially start more than N+2
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29241
merged
Yuri Weinstein
07:26 PM Backport #41447: mimic: osd/PrimaryLogPG: Access destroyed references in finish_degraded_object
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30291
merged
Yuri Weinstein
07:25 PM Backport #41863: mimic: Mimic MONs have slow/long running ops
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30481
merged
Yuri Weinstein
07:24 PM Bug #40287: OSDMonitor: missing `pool_id` field in `osd pool ls` command
merged https://github.com/ceph/ceph/pull/30485 Yuri Weinstein
07:21 PM Backport #41704: mimic: oi(object_info_t).size does not match on disk size
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30275
merged
Yuri Weinstein
07:21 PM Bug #42215 (New): cluster [WRN] Health check failed: 1 pool(s) have non-power-of-two pg_num (POOL...

Tests either need a whitelist or fixes to use power-of-two PG nums. This was introduced by the new health check.
Exam...
David Zafman
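The health check flags any pool whose pg_num is not a power of two. A minimal sketch of that predicate (illustrative, not the monitor's actual code):

```python
# Illustrative predicate: a positive integer is a power of two iff it has
# exactly one set bit; n & (n - 1) clears the lowest set bit, so the
# result is zero only for powers of two.
def is_power_of_two(n: int) -> bool:
    return n > 0 and (n & (n - 1)) == 0
```

A pool with pg_num 64 passes; one with pg_num 100 would raise the warning.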
07:19 PM Bug #42214 (Duplicate): cosbench workloads failing (seen on master and mimic backport testing)
I'm not sure where the error is in the teuthology.log but they always report:
Command failed on xxxxxxxx with status...
David Zafman
01:40 PM Bug #42207 (New): ceph osd df showing 0/0/0
I have a ceph mimic cluster with 1 mon and 2 OSD nodes. After adding both OSDs to the cluster, my data store is still ... kennedy osei
01:36 PM Bug #42102: use-after-free in Objecter timer handing
I tested a Q&D patch that made start_tick take a unique_lock, but that didn't seem to fix the issue, so the race does... Jeff Layton
12:40 PM Backport #42202 (Rejected): luminous: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
https://github.com/ceph/ceph/pull/31033 Nathan Cutler
12:40 PM Backport #42201 (Rejected): mimic: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
https://github.com/ceph/ceph/pull/31032 Nathan Cutler
12:40 PM Backport #42200 (Resolved): nautilus: mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
https://github.com/ceph/ceph/pull/31031 Nathan Cutler
12:40 PM Backport #42199 (Resolved): luminous: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
https://github.com/ceph/ceph/pull/31030 Nathan Cutler
12:40 PM Backport #42198 (Resolved): mimic: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
https://github.com/ceph/ceph/pull/31029 Nathan Cutler
12:39 PM Backport #42197 (Resolved): nautilus: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
https://github.com/ceph/ceph/pull/31028 Nathan Cutler
02:49 AM Bug #42178 (Duplicate): scrub errors due to missing objects
Neha Ojha

10/06/2019

10:48 PM Bug #42186: "2019-10-04T19:31:51.053283+0000 osd.7 (osd.7) 108 : cluster [ERR] 2.5s0 shard 0(1) 2...
/a/sage-2019-10-06_19:16:50-rados-master-distro-basic-smithi/4364824
"2019-10-06T21:24:06.336207+0000 osd.1 (osd.1...
Sage Weil
05:41 PM Backport #42168: nautilus: readable.sh test fails
Nathan Cutler wrote:
> @Venky Please use "src/script/backport-create-issue" from the master branch to create backpor...
Venky Shankar
02:05 PM Bug #42177 (Pending Backport): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
Sage Weil
02:05 PM Bug #38345: mon: segv in MonOpRequest::~MonOpRequest OpHistory::cleanup
... Sage Weil

10/05/2019

07:36 PM Bug #41924: asynchronous recovery can not function under certain circumstances
@Nathan The PR that merged is based on https://github.com/ceph/ceph/pull/24004, which has not been backported to mimi... Neha Ojha
02:38 PM Bug #41924: asynchronous recovery can not function under certain circumstances
Adding mimic backport, since the first attempted fix ( see https://github.com/ceph/ceph/pull/30459 ) targeted mimic. Nathan Cutler
03:14 AM Feature #40955 (Resolved): Extend the scrub sleep time when the period is outside [osd_scrub_begi...
David Zafman
02:48 AM Bug #41743 (Resolved): Long heartbeat ping times on front interface seen, longest is 2237.999 mse...
This is already included in back porting of https://tracker.ceph.com/issues/40640 David Zafman
01:42 AM Bug #42114 (Pending Backport): mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
Sage Weil
01:37 AM Bug #42186 (Can't reproduce): "2019-10-04T19:31:51.053283+0000 osd.7 (osd.7) 108 : cluster [ERR] ...
/a/sage-2019-10-04_18:20:43-rados-wip-sage-testing-2019-10-04-0923-distro-basic-smithi/4358878 Sage Weil

10/04/2019

06:25 PM Bug #42111 (Fix Under Review): max_size from crushmap ignored when increasing size on pool
Vikhyat Umrao
06:23 PM Bug #42111: max_size from crushmap ignored when increasing size on pool
- With fix:... Vikhyat Umrao
06:20 PM Bug #42111: max_size from crushmap ignored when increasing size on pool
- I was able to reproduce in the master branch in `vstart` cluster.... Vikhyat Umrao
05:45 PM Bug #42111 (In Progress): max_size from crushmap ignored when increasing size on pool
Vikhyat Umrao
02:38 PM Backport #42168: nautilus: readable.sh test fails
@Venky Please use "src/script/backport-create-issue" from the master branch to create backport issues. Or, if you nee... Nathan Cutler
02:08 PM Backport #40082 (Need More Info): luminous: osd: Better error message when OSD count is less than...
Does not pass make check, not clear how to make it pass, and luminous is borderline EOL so the question is: how impor... Nathan Cutler
09:08 AM Bug #26970 (Resolved): src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
09:08 AM Bug #38040 (Resolved): osd_map_message_max default is too high?
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
09:07 AM Bug #38416 (Resolved): crc cache should be invalidated when posting preallocated rx buffers
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
09:07 AM Bug #38827 (Resolved): valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandl...
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
09:07 AM Bug #38828 (Resolved): should set EPOLLET flag on del_event()
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
09:06 AM Bug #38839 (Resolved): .mgrstat failed to decode mgrstat state; luminous dev version?
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
09:05 AM Bug #40377 (Resolved): osd beacon sometimes has empty pg list
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
08:59 AM Backport #41534 (Resolved): nautilus: valgrind: UninitCondition in ceph::crypto::onwire::AES128GC...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29928
m...
Nathan Cutler
08:58 AM Backport #41703 (Resolved): nautilus: oi(object_info_t).size does not match on disk size
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30278
m...
Nathan Cutler
08:58 AM Backport #41963 (Resolved): nautilus: Segmentation fault in rados ls when using --pgid and --pool...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30605
m...
Nathan Cutler
08:58 AM Backport #41960 (Resolved): nautilus: tools/rados: add --pgid in help
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30607
m...
Nathan Cutler
08:57 AM Backport #38277 (Resolved): mimic: osd_map_message_max default is too high?
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29242
m...
Nathan Cutler
08:56 AM Backport #38852 (Resolved): mimic: .mgrstat failed to decode mgrstat state; luminous dev version?
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29249
m...
Nathan Cutler
08:56 AM Backport #38437 (Resolved): mimic: crc cache should be invalidated when posting preallocated rx b...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29247
m...
Nathan Cutler
08:56 AM Backport #40884 (Resolved): mimic: ceph mgr module ls -f plain crashes mon
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29593
m...
Nathan Cutler
08:56 AM Backport #40949 (Resolved): mimic: Better default value for osd_snap_trim_sleep
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29732
m...
Nathan Cutler
08:53 AM Backport #38450: mimic: src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29976
m...
Nathan Cutler
04:24 AM Backport #38450 (Resolved): mimic: src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
David Zafman
08:53 AM Backport #41595: mimic: ceph-objectstore-tool can't remove head with bad snapset
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30081
m...
Nathan Cutler
04:28 AM Backport #41595 (Resolved): mimic: ceph-objectstore-tool can't remove head with bad snapset
David Zafman
08:53 AM Backport #40083 (Resolved): mimic: osd: Better error message when OSD count is less than osd_pool...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30180
m...
Nathan Cutler
08:47 AM Backport #40732 (Resolved): mimic: mon: auth mon isn't loading full KeyServerData after restart
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30181
m...
Nathan Cutler
08:47 AM Backport #41291 (Resolved): mimic: filestore pre-split may not split enough directories
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30182
m...
Nathan Cutler
08:46 AM Backport #41351 (Resolved): mimic: hidden corei7 requirement in binary packages
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30183
m...
Nathan Cutler
08:46 AM Backport #41490 (Resolved): mimic: OSDCap.PoolClassRNS test aborts
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30214
m...
Nathan Cutler
08:46 AM Backport #41502 (Resolved): mimic: Warning about past_interval bounds on deleting pg
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30222
m...
Nathan Cutler
08:44 AM Backport #40464 (Resolved): mimic: osd beacon sometimes has empty pg list
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29253
m...
Nathan Cutler
08:44 AM Backport #38351: mimic: Limit loops waiting for force-backfill/force-recovery to happen
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29245
m...
Nathan Cutler
04:33 AM Backport #38351 (Resolved): mimic: Limit loops waiting for force-backfill/force-recovery to happen
David Zafman
08:43 AM Backport #38856 (Resolved): mimic: should set EPOLLET flag on del_event()
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29250
m...
Nathan Cutler
08:43 AM Backport #40179: mimic: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29251
m...
Nathan Cutler
04:32 AM Backport #40179 (Resolved): mimic: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
David Zafman
04:32 AM Bug #40078 (Resolved): qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
David Zafman

10/03/2019

11:45 PM Backport #38277: mimic: osd_map_message_max default is too high?
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29242
merged
Yuri Weinstein
11:39 PM Backport #38852: mimic: .mgrstat failed to decode mgrstat state; luminous dev version?
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29249
merged
Yuri Weinstein
11:37 PM Backport #38437: mimic: crc cache should be invalidated when posting preallocated rx buffers
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29247
merged
Yuri Weinstein
11:37 PM Backport #40884: mimic: ceph mgr module ls -f plain crashes mon
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29593
merged
Yuri Weinstein
11:36 PM Backport #40949: mimic: Better default value for osd_snap_trim_sleep
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29732
merged
Yuri Weinstein
11:35 PM Backport #38450: mimic: src/osd/OSDMap.h: 1065: FAILED assert(__null != pool)
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29976
merged
Yuri Weinstein
11:34 PM Backport #41595: mimic: ceph-objectstore-tool can't remove head with bad snapset
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30081
merged
Yuri Weinstein
11:34 PM Backport #40083: mimic: osd: Better error message when OSD count is less than osd_pool_default_size
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30180
merged
Yuri Weinstein
11:32 PM Backport #40732: mimic: mon: auth mon isn't loading full KeyServerData after restart
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30181
merged
Yuri Weinstein
11:32 PM Backport #41291: mimic: filestore pre-split may not split enough directories
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30182
merged
Yuri Weinstein
11:31 PM Backport #41351: mimic: hidden corei7 requirement in binary packages
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30183
merged
Yuri Weinstein
11:31 PM Backport #41490: mimic: OSDCap.PoolClassRNS test aborts
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30214
merged
Yuri Weinstein
11:30 PM Backport #41502: mimic: Warning about past_interval bounds on deleting pg
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30222
merged
Yuri Weinstein
11:27 PM Backport #40464: mimic: osd beacon sometimes has empty pg list
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29253
merged
Yuri Weinstein
11:26 PM Backport #38351: mimic: Limit loops waiting for force-backfill/force-recovery to happen
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29245
merged
Yuri Weinstein
11:24 PM Backport #38856: mimic: should set EPOLLET flag on del_event()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29250
merged
Yuri Weinstein
11:23 PM Backport #40179: mimic: qa/standalone/scrub/osd-scrub-snaps.sh sometimes fails
David Zafman wrote:
> https://github.com/ceph/ceph/pull/29251
merged
Yuri Weinstein
08:57 PM Bug #42102: use-after-free in Objecter timer handing
I will note that the test has to run for several minutes before the ASAN warning pops. ASAN does slow things down, bu... Jeff Layton
08:39 PM Bug #42114 (Fix Under Review): mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
Neha Ojha
07:52 PM Backport #41534: nautilus: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxH...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29928
merged
Yuri Weinstein
07:51 PM Backport #41703: nautilus: oi(object_info_t).size does not match on disk size
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30278
merged
Yuri Weinstein
07:50 PM Backport #41963: nautilus: Segmentation fault in rados ls when using --pgid and --pool/-p togethe...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30605
merged
Yuri Weinstein
07:49 PM Backport #41960: nautilus: tools/rados: add --pgid in help
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30607
merged
Yuri Weinstein
06:12 PM Bug #42178 (Duplicate): scrub errors due to missing objects
... Neha Ojha
06:07 PM Bug #42176 (Duplicate): FAILED ceph_assert(obc) in PrimaryLogPG::recover_backfill()
Neha Ojha
05:40 PM Bug #42176 (Duplicate): FAILED ceph_assert(obc) in PrimaryLogPG::recover_backfill()
... Neha Ojha
06:01 PM Bug #42177 (Fix Under Review): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
Sage Weil
05:58 PM Bug #42177 (Resolved): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
First the object is deleted,... Sage Weil
05:34 PM Bug #42175 (Can't reproduce): _txc_add_transaction error (2) No such file or directory not handl...
... Neha Ojha
05:12 PM Bug #38219: rebuild-mondb hangs
Just a quick note as this might be relevant for the decision whether or not to integrate this PR:
Running mimic 13...
Christoffer Anselm
04:36 PM Support #42174 (Closed): Ceph Nautilus OSD isn't able to add to cluster
Ceph cluster on Debian, Nautilus version. The issue is that any time I try creating the data store, the OSDs don't get added ... kennedy osei
04:18 PM Bug #36631: potential deadlock in PG::_scan_snaps when repairing snap mapper
jewel is EOL - @Mykola, does any of this apply to luminous? Nathan Cutler
04:01 PM Bug #42173 (Closed): _pinned_map closest pinned map ver 252615 not available! error: (2) No such ...
-4> 2019-10-03 17:58:44.023 7fde2e2f9700 5 mon.km-fsn-1-dc4-m1-797678@0(leader).paxos(paxos active c 4545611..45463... Anonymous
01:09 PM Backport #42168 (In Progress): nautilus: readable.sh test fails
Venky Shankar
11:18 AM Backport #42168 (Resolved): nautilus: readable.sh test fails
https://github.com/ceph/ceph/pull/30704 Venky Shankar
11:19 AM Bug #41424 (Pending Backport): readable.sh test fails
Venky Shankar
07:13 AM Feature #41905: Add ability to change fsid of cluster
Splitting the cluster meant no data copy from A to B. Minimal downtime for the RGW application and no downtime for th... Wido den Hollander

10/02/2019

10:02 PM Feature #41905: Add ability to change fsid of cluster
This sounds to me like the kind of thing we don't want to support directly. What's the use case for splitting a clust... Greg Farnum
09:09 PM Bug #42060 (Need More Info): Slow ops seen when one ceph private interface is shut down
What workload are you running; does it have its own metrics? Is there evidence that Nautilus is slower or behaving wo... Greg Farnum
09:04 PM Bug #42113: ceph -h usage should indicate CephChoices --name= is sometime required
No failures so this is normal priority? Greg Farnum
04:43 PM Bug #41754: Use dump_stream() instead of dump_float() for floats where max precision isn't helpful

From json.org:
JSON (JavaScript Object Notation) is a lightweight data-interchange format. *It is easy for human...
David Zafman
01:20 PM Bug #20924 (Resolved): osd: leaked Session on osd.7
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
01:19 PM Feature #37935 (Resolved): Add clear-data-digest command to objectstore tool
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
01:13 PM Documentation #41004 (Resolved): doc: pg_num should always be a power of two
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
01:12 PM Documentation #41403 (Resolved): doc: mon_health_to_clog_* values flipped
While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ... Nathan Cutler
01:11 PM Backport #42154 (Resolved): mimic: Removed OSDs with outstanding peer failure reports crash the m...
https://github.com/ceph/ceph/pull/30903 Nathan Cutler
01:11 PM Backport #42153 (Resolved): luminous: Removed OSDs with outstanding peer failure reports crash th...
https://github.com/ceph/ceph/pull/30905 Nathan Cutler
01:11 PM Backport #42152 (Resolved): nautilus: Removed OSDs with outstanding peer failure reports crash th...
https://github.com/ceph/ceph/pull/30904 Nathan Cutler
01:09 PM Backport #42141 (Resolved): nautilus: asynchronous recovery can not function under certain circum...
https://github.com/ceph/ceph/pull/31077 Nathan Cutler
01:09 PM Backport #42138 (Resolved): luminous: Remove unused full and nearful output from OSDMap summary
https://github.com/ceph/ceph/pull/30902 Nathan Cutler
01:09 PM Backport #42137 (Resolved): mimic: Remove unused full and nearful output from OSDMap summary
https://github.com/ceph/ceph/pull/30901 Nathan Cutler
01:09 PM Backport #42136 (Resolved): nautilus: Remove unused full and nearful output from OSDMap summary
https://github.com/ceph/ceph/pull/30900 Nathan Cutler
01:08 PM Backport #42128 (Resolved): mimic: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
https://github.com/ceph/ceph/pull/30898 Nathan Cutler
01:08 PM Backport #42127 (Resolved): luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
https://github.com/ceph/ceph/pull/30926 Nathan Cutler
01:07 PM Backport #42126 (Resolved): nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
https://github.com/ceph/ceph/pull/30899 Nathan Cutler
01:07 PM Backport #42125 (Resolved): nautilus: weird daemon key seen in health alert
https://github.com/ceph/ceph/pull/31039 Nathan Cutler
12:13 PM Backport #24360 (Resolved): luminous: osd: leaked Session on osd.7
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29859
m...
Nathan Cutler
12:08 AM Backport #24360: luminous: osd: leaked Session on osd.7
Samuel Just wrote:
> https://github.com/ceph/ceph/pull/29859
merged
Yuri Weinstein
12:12 PM Backport #38436 (Resolved): luminous: crc cache should be invalidated when posting preallocated r...
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29248
m...
Nathan Cutler
12:10 PM Backport #41568 (Resolved): nautilus: doc: pg_num should always be a power of two
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30004
m...
Nathan Cutler
12:09 PM Backport #41529 (Resolved): nautilus: doc: mon_health_to_clog_* values flipped
This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30003
m...
Nathan Cutler
11:19 AM Backport #42120 (In Progress): nautilus: pg_autoscaler should show a warning if pg_num isn't a po...
Nathan Cutler
11:09 AM Backport #42120 (Resolved): nautilus: pg_autoscaler should show a warning if pg_num isn't a power...
https://github.com/ceph/ceph/pull/30689 Nathan Cutler
10:54 AM Bug #42102: use-after-free in Objecter timer handing
Su Yue wrote:
> Jeff Layton wrote:
> > While hunting a crash in tracker #42026, I ran across this bug when testing ...
Jeff Layton
06:24 AM Bug #42102: use-after-free in Objecter timer handing
Su Yue wrote:
> Jeff Layton wrote:
> > While hunting a crash in tracker #42026, I ran across this bug when testing ...
Su Yue
03:47 AM Bug #42102: use-after-free in Objecter timer handing
Jeff Layton wrote:
> While hunting a crash in tracker #42026, I ran across this bug when testing with ASAN:
>
> [...
Su Yue
10:18 AM Backport #41921: nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` command
duplicate PR https://github.com/ceph/ceph/pull/30568 was closed Nathan Cutler
03:32 AM Feature #41359 (In Progress): Adding Placement Group id in Large omap log message
Brad Hubbard
02:37 AM Bug #42115 (Resolved): Turn off repair pg state when leaving recovery

We set the repair pg state during recovery initiated by repair. To handle all cases we need to clear it when trans...
David Zafman

10/01/2019

10:58 PM Backport #38436: luminous: crc cache should be invalidated when posting preallocated rx buffers
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29248
merged
Yuri Weinstein
10:19 PM Bug #42114 (Resolved): mon: /var/lib/ceph/mon/* data (esp rocksdb) is not 0600
by default, i see... Sage Weil
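A minimal sketch of the check this report implies: mode 0600 means no group/other permission bits may be set. This uses a temporary directory as a stand-in for /var/lib/ceph/mon (an assumption for illustration, not the actual mon code):

```python
import os
import stat
import tempfile

def too_permissive(path):
    """True if any group/other permission bits are set (i.e. wider than 0600)."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & 0o077 != 0

# Stand-in for a mon data dir; "CURRENT" mimics a rocksdb file name.
d = tempfile.mkdtemp()
p = os.path.join(d, "CURRENT")
open(p, "w").close()

os.chmod(p, 0o600)
assert not too_permissive(p)  # owner-only: fine

os.chmod(p, 0o644)
assert too_permissive(p)      # group/world readable: flagged
```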
09:25 PM Bug #42113 (Fix Under Review): ceph -h usage should indicate CephChoices --name= is sometime requ...
... Sage Weil
07:26 PM Bug #42111 (Resolved): max_size from crushmap ignored when increasing size on pool
Hello,
when the crush map rule has "max_size=2", for example, and you set size=3 on the pool, all I/O stops withou...
Alex Masteo
02:05 PM Backport #41922: mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
https://github.com/ceph/ceph/pull/30547 was closed because identical PR https://github.com/ceph/ceph/pull/30485 was o... Nathan Cutler
11:28 AM Bug #42102: use-after-free in Objecter timer handing
Found by running LibRadosMisc.ShutdownRace test built with -DWITH_ASAN=ON. I had to set:... Jeff Layton
10:31 AM Bug #42102 (Can't reproduce): use-after-free in Objecter timer handing
While hunting a crash in tracker #42026, I ran across this bug when testing with ASAN:... Jeff Layton
01:08 AM Feature #40419 (Resolved): [RFE] Estimated remaining time on recovery?
xie xingguo

09/30/2019

12:28 PM Backport #42095 (In Progress): nautilus: global osd crash in DynamicPerfStats::add_to_reports
Mykola Golub
12:28 PM Backport #42095 (Resolved): nautilus: global osd crash in DynamicPerfStats::add_to_reports
https://github.com/ceph/ceph/pull/30648 Mykola Golub
05:46 AM Backport #41958 (In Progress): nautilus: scrub errors after quick split/merge cycle
https://github.com/ceph/ceph/pull/30643 Prashant D

09/29/2019

09:58 PM Bug #42082 (Duplicate): pybind/rados: set_omap() crash on py3
Brad Hubbard
10:17 AM Bug #42079 (Pending Backport): weird daemon key seen in health alert
Kefu Chai
09:40 AM Bug #42079: weird daemon key seen in health alert
an alternative fix: https://github.com/ceph/ceph/pull/30635 Kefu Chai

09/28/2019

02:25 PM Bug #41748: log [ERR] : 7.19 caller_ops.size 62 > log size 61
I suggest putting a call to log_weirdness() in the Reset state entry point, so we can tell if the problem came from t... Sage Weil
08:01 AM Bug #41891 (Pending Backport): global osd crash in DynamicPerfStats::add_to_reports
Kefu Chai

09/27/2019

05:08 PM Feature #41647 (Pending Backport): pg_autoscaler should show a warning if pg_num isn't a power of...
Sage Weil
03:55 PM Bug #42015 (Pending Backport): Remove unused full and nearful output from OSDMap summary
David Zafman
07:02 AM Bug #42015 (Resolved): Remove unused full and nearful output from OSDMap summary
Kefu Chai
03:27 PM Bug #42084 (New): df output difference if 8 OSD cluster has 5+3 shared EC pool vs larger cluster

I created an 8 OSD cluster with 1 EC pool 5+3 and this ceph df detail output....
David Zafman
02:42 PM Bug #42082 (Resolved): pybind/rados: set_omap() crash on py3
Details see https://github.com/ceph/ceph/pull/30483#issuecomment-535873920 Nathan Cutler
12:47 PM Bug #36572 (Closed): ceph-in: --connect-timeout doesn't work while pinging mon
Rishabh Dave
12:30 PM Bug #42079 (Resolved): weird daemon key seen in health alert
e.g.:
19 slow ops, oldest one blocked for 34 sec, daemons [osd,2,osd,4] have slow ops.
xie xingguo
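The odd `[osd,2,osd,4]` rendering likely comes from flattening (daemon_type, daemon_id) pairs into the list instead of joining each pair with a dot first. A minimal Python sketch of the two renderings (this is an assumption about the cause, not the actual mon code):

```python
daemons = [("osd", 2), ("osd", 4)]

# Buggy-style rendering: tuples flattened element by element.
flat = ",".join(str(x) for pair in daemons for x in pair)
assert flat == "osd,2,osd,4"

# Expected rendering: each (type, id) pair joined with '.' first.
fixed = ",".join(f"{t}.{i}" for t, i in daemons)
assert fixed == "osd.2,osd.4"
```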
05:19 AM Bug #41680 (Pending Backport): Removed OSDs with outstanding peer failure reports crash the monitor
Kefu Chai
02:40 AM Bug #42052 (Pending Backport): mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
xie xingguo
02:39 AM Bug #41924 (Pending Backport): asynchronous recovery can not function under certain circumstances
xie xingguo
01:36 AM Bug #42058 (In Progress): OSD reconnected across map epochs, inconsistent pg logs created
Greg Farnum
12:27 AM Backport #41845 (In Progress): luminous: tools/rados: allow list objects in a specific pg in a pool
Vikhyat Umrao
12:26 AM Backport #41959 (In Progress): luminous: tools/rados: add --pgid in help
Vikhyat Umrao
12:26 AM Backport #41962 (In Progress): luminous: Segmentation fault in rados ls when using --pgid and --p...
https://github.com/ceph/ceph/pull/30608 Vikhyat Umrao

09/26/2019

11:04 PM Backport #41960 (In Progress): nautilus: tools/rados: add --pgid in help
Vikhyat Umrao
10:53 PM Backport #41963 (In Progress): nautilus: Segmentation fault in rados ls when using --pgid and --p...
Vikhyat Umrao
10:48 AM Bug #42060: Slow ops seen when one ceph private interface is shut down
Hi,
when I mention the private network I am referring to the cluster_network.
Nokia ceph-users
10:30 AM Bug #42060 (Need More Info): Slow ops seen when one ceph private interface is shut down
Environment -
5 node Nautilus cluster
67 OSDs per node - 4TB HDD per OSD
We are trying a use case where we shut...
Nokia ceph-users
08:53 AM Bug #42058 (Duplicate): OSD reconnected across map epochs, inconsistent pg logs created
Take the lossless cluster connection between osd.2 and osd.47 as an example.
When osd.47 is restarted and at the same...
相洋 于
08:37 AM Bug #40035: smoke.sh failing in jenkins "make check" test randomly
In addition to what Laura reported, it must be said that this failure is seen only in the jenkins job
when running the j...
Alfonso Martínez
08:26 AM Bug #40035: smoke.sh failing in jenkins "make check" test randomly
Kefu Chai wrote:
> [...]
>
> see https://jenkins.ceph.com/job/ceph-pull-requests/817/console
>
> i tried to re...
Laura Paduano
03:21 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...

This shows the send on osd.0 and receive at osd.6. ...
David Zafman
02:52 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...
This shows the front and back interface. I don't know which is which, but it already sent the second interface maybe... David Zafman
02:32 AM Bug #41743: Long heartbeat ping times on front interface seen, longest is 2237.999 msec (OSD_SLOW...

I confused the front and back interface with a retransmit. The ports are the 2 interfaces.
-At the ping receivi...
David Zafman

09/25/2019

11:41 PM Bug #41924 (Fix Under Review): asynchronous recovery can not function under certain circumstances
Neha Ojha
09:27 PM Bug #41924: asynchronous recovery can not function under certain circumstances
Greg Farnum
09:46 PM Bug #41874 (Resolved): mon-osdmap-prune.sh fails
David Zafman
09:45 PM Bug #41873 (Resolved): test-erasure-code.sh fails
David Zafman
09:28 PM Bug #41939 (Need More Info): Scaling with unfound options might leave PGs in state "unknown"
Neha Ojha
09:28 PM Bug #41939: Scaling with unfound options might leave PGs in state "unknown"
How are we ending up in this state? What were the previous states of those PGs? Neha Ojha
09:24 PM Bug #41943 (Need More Info): ceph-mgr fails to report OSD status correctly
Do you have any other information from that OSD while this happened? Neha Ojha
09:22 PM Bug #41943: ceph-mgr fails to report OSD status correctly
Sounds like this OSD was somehow up enough that it responded to peer heartbeats, but was not processing any client re... Greg Farnum
09:03 PM Bug #41908 (Resolved): TMAPUP operation results in OSD assertion failure
Neha Ojha
12:11 PM Bug #42052 (Resolved): mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
> OSDMap.cc: 4603: FAILED ceph_assert(osd_weight.count(i.first))
>
> ceph version v15.0.0-5429-gac828d7 (ac828d732...
xie xingguo
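A minimal sketch of the failure mode behind this assert: an item id from the crush tree (e.g. a recently removed OSD) with no entry in osd_weight. The names are illustrative, not the balancer's actual code; skipping unknown ids is one defensive alternative to asserting:

```python
# Weights known to the balancer; osd.2 was removed and has no entry.
osd_weight = {0: 1.0, 1: 1.0}
items = [0, 1, 2]

# Defensive lookup: keep only ids with a weight, instead of asserting
# membership for every id (the pattern the ceph_assert enforces).
present = [i for i in items if i in osd_weight]
assert present == [0, 1]
```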
10:50 AM Bug #41866 (Fix Under Review): OSD cannot report slow operation warnings in time.
Kefu Chai
10:49 AM Bug #41866: OSD cannot report slow operation warnings in time.
Kefu Chai
08:26 AM Backport #41921 (In Progress): nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` com...
https://github.com/ceph/ceph/pull/30568 Prashant D

09/24/2019

09:50 PM Bug #38724: _txc_add_transaction error (39) Directory not empty not handled on operation 21 (op 1...
Bumping priority based on community feedback. Brad Hubbard
07:53 PM Backport #42037 (Resolved): luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671: fa...
https://github.com/ceph/ceph/pull/30896 Nathan Cutler
07:52 PM Backport #42036 (Resolved): mimic: Enable auto-scaler and get src/osd/PeeringState.cc:3671: faile...
https://github.com/ceph/ceph/pull/30895 Nathan Cutler
04:11 PM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
The log files were created by cosbench; see https://github.com/intel-cloud/cosbench/blob/ca68b333e85c51829ea68f203877... Kefu Chai
12:19 PM Backport #41922 (In Progress): mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
https://github.com/ceph/ceph/pull/30547 Prashant D
12:15 PM Backport #41917 (In Progress): nautilus: osd: failure result of do_osd_ops not logged in prepare_...
https://github.com/ceph/ceph/pull/30546 Prashant D

09/23/2019

09:33 PM Bug #42015 (In Progress): Remove unused full and nearful output from OSDMap summary
David Zafman
09:27 PM Bug #42015 (Resolved): Remove unused full and nearful output from OSDMap summary

in OSDMap::print_oneline_summary() and OSDMap::print_summary() (CEPH_OSDMAP_FULL and CEPH_OSDMAP_NEARFULL checks)
David Zafman
08:41 PM Backport #42014 (In Progress): nautilus: Enable auto-scaler and get src/osd/PeeringState.cc:3671:...
Nathan Cutler
08:35 PM Backport #42014 (Resolved): nautilus: Enable auto-scaler and get src/osd/PeeringState.cc:3671: fa...
https://github.com/ceph/ceph/pull/30528 Nathan Cutler
07:42 PM Feature #41647 (Fix Under Review): pg_autoscaler should show a warning if pg_num isn't a power of...
Sage Weil
07:20 PM Bug #42012: mon osd_snap keys grow unbounded
This is (mostly) fixed in master by https://github.com/ceph/ceph/pull/30518. There is still one set of per-epoch key... Sage Weil
03:41 PM Bug #42012: mon osd_snap keys grow unbounded
Link to the full "dump-keys | grep osd_snap"
https://wustl.box.com/s/3r7bgv32hs5hw4jmgmywbo9qvqrqsmwn
Brian Koebbe
03:26 PM Bug #42012 (Resolved): mon osd_snap keys grow unbounded
... Sage Weil
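The counting done above with `dump-keys | grep osd_snap` can be sketched as follows; the key strings here are illustrative stand-ins, not actual mon store keys:

```python
# Stand-in sample of mon store keys (illustrative only).
keys = [
    "osd_snap / purged_snap_1_0000000000000001",
    "osd_snap / removed_snap_1_0000000000000002",
    "osdmap / full_100",
]

# Equivalent of: dump-keys | grep -c '^osd_snap'
n_osd_snap = sum(k.startswith("osd_snap") for k in keys)
assert n_osd_snap == 2
```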
07:19 PM Bug #41680: Removed OSDs with outstanding peer failure reports crash the monitor
Greg Farnum
05:09 PM Bug #41944: inconsistent pool count in ceph -s output
Is this after pools are deleted? In that case, it's #40011 Nathan Cutler
04:27 PM Backport #41864 (In Progress): luminous: Mimic MONs have slow/long running ops
Nathan Cutler
02:27 PM Bug #37875: osdmaps aren't being cleaned up automatically on healthy cluster
Still ongoing here, with mimic too. On one 13.2.6 cluster we have this, for example:... Dan van der Ster
02:12 PM Bug #41816 (Pending Backport): Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed as...
Sage Weil
09:02 AM Backport #41964 (Resolved): mimic: Segmentation fault in rados ls when using --pgid and --pool/-p...
https://github.com/ceph/ceph/pull/30893 Nathan Cutler
09:02 AM Backport #41963 (Resolved): nautilus: Segmentation fault in rados ls when using --pgid and --pool...
https://github.com/ceph/ceph/pull/30605 Nathan Cutler
09:02 AM Backport #41962 (Resolved): luminous: Segmentation fault in rados ls when using --pgid and --pool...
Nathan Cutler
09:02 AM Backport #41961 (Resolved): mimic: tools/rados: add --pgid in help
https://github.com/ceph/ceph/pull/30893 Nathan Cutler
09:02 AM Backport #41960 (Resolved): nautilus: tools/rados: add --pgid in help
https://github.com/ceph/ceph/pull/30607 Nathan Cutler
09:02 AM Backport #41959 (Resolved): luminous: tools/rados: add --pgid in help
https://github.com/ceph/ceph/pull/30608 Nathan Cutler
09:02 AM Backport #41958 (Resolved): nautilus: scrub errors after quick split/merge cycle
https://github.com/ceph/ceph/pull/30643 Nathan Cutler

09/22/2019

10:12 PM Cleanup #41876 (Pending Backport): tools/rados: add --pgid in help
Vikhyat Umrao
11:55 AM Bug #41950 (Can't reproduce): crimson compile
Can I know which version of the Seastar code crimson uses in the ceph-15 version?
When compiling, the following option is output:
<...
YongSheng Zhang
04:12 AM Bug #41936 (Pending Backport): scrub errors after quick split/merge cycle
Kefu Chai
03:45 AM Bug #41946: cbt perf test fails due to leftover in /home/ubuntu/cephtest
... Kefu Chai
02:09 AM Bug #41946 (Duplicate): cbt perf test fails due to leftover in /home/ubuntu/cephtest
... Kefu Chai
03:42 AM Bug #41875 (Pending Backport): Segmentation fault in rados ls when using --pgid and --pool/-p tog...
Kefu Chai

09/20/2019

09:01 PM Bug #41156 (Rejected): dump_float() poor output
David Zafman
08:47 PM Bug #41817 (Closed): qa/standalone/scrub/osd-recovery-scrub.sh timed out waiting for scrub
David Zafman
07:17 PM Bug #41913 (Fix Under Review): With auto scaler operating stopping an OSD can lead to COT crashin...
The real bug here is that the pg split, so the pgid specified to COT is wrong. The attached PR adds a check in COT ... Sage Weil
06:22 PM Bug #41944 (Resolved): inconsistent pool count in ceph -s output
... Sage Weil
06:08 PM Bug #41816 (Fix Under Review): Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed as...
Sage Weil
05:36 PM Bug #41816: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_comp...
The complete_to pointer is already at log end before recover_got() is called. I think it's because during split() we ... Sage Weil
04:35 PM Bug #41943 (Closed): ceph-mgr fails to report OSD status correctly
After an inexplicable cluster event that resulted in around 10% of our OSDs falsely reported down (and shortly after ... Brian Andrus
12:47 PM Bug #41834: qa: EC Pool configuration and slow op warnings for OSDs caused by recent master changes
Might as well add some RBD failures while piling on:
http://pulpito.ceph.com/trociny-2019-09-19_12:41:57-rbd-wip-m...
Jason Dillaman
02:13 AM Bug #41939 (Need More Info): Scaling with unfound options might leave PGs in state "unknown"

With osd_pool_default_pg_autoscale_mode="on"
../qa/run-standalone.sh TEST_rep_recovery_unfound
The test failu...
David Zafman
01:59 AM Backport #41863 (In Progress): mimic: Mimic MONs have slow/long running ops
https://github.com/ceph/ceph/pull/30481 Prashant D
01:57 AM Backport #41862 (In Progress): nautilus: Mimic MONs have slow/long running ops
https://github.com/ceph/ceph/pull/30480 Prashant D

09/19/2019

11:12 PM Bug #41817: qa/standalone/scrub/osd-recovery-scrub.sh timed out waiting for scrub
This fix for this particular issue is to just disable auto scaler because it just causes a hang in the test but no cr... David Zafman
10:59 PM Bug #41923: 3 different ceph-osd asserts caused by enabling auto-scaler

I think this stack better reflects the thread that hit the suicide timeout. However, every time I've seen this thre...
David Zafman
09:41 PM Bug #41923: 3 different ceph-osd asserts caused by enabling auto-scaler

Look at the assert(op.hinfo); it is caused by the corruption injected by the test. I'll verify that the asserts are...
David Zafman
12:05 AM Bug #41923 (Can't reproduce): 3 different ceph-osd asserts caused by enabling auto-scaler

Change config osd_pool_default_pg_autoscale_mode to "on"
Saw these 4 core dumps on 3 different sub-tests.
../...
David Zafman
04:51 PM Bug #41936 (Fix Under Review): scrub errors after quick split/merge cycle
Sage Weil
04:51 PM Bug #41936 (Resolved): scrub errors after quick split/merge cycle
PGs split and then merge soon after. There is a pg stat scrub mismatch. Sage Weil
04:48 PM Bug #41834: qa: EC Pool configuration and slow op warnings for OSDs caused by recent master changes
This shows up in rgw's ec pool tests also. In osd logs, I see slow ops on MOSDECSubOpRead/Reply messages, and they al... Casey Bodley
09:32 AM Feature #41647: pg_autoscaler should show a warning if pg_num isn't a power of two
Note: contrary to what the bug description says, pg_autoscaler will (apparently) *not* be automatically turned on wit... Nathan Cutler
01:56 AM Bug #41924 (Resolved): asynchronous recovery can not function under certain circumstances
guoracle reported that:
> In the asynchronous recovery feature,
> the asynchronous recovery target OSD is selected ...
xie xingguo
01:39 AM Bug #41866: OSD cannot report slow operation warnings in time.
*report_callback* thread is also blocked on PG::lock with MGRClient::lock locked while getting the pg stats. This in ... Ilsoo Byun
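A minimal sketch of the lock-ordering problem described above: holding the outer (MgrClient-like) lock while waiting on a busy inner (PG-like) lock stalls all reporting. The lock names are stand-ins, not Ceph's actual locks; snapshotting under the inner lock first is one way to decouple them:

```python
import threading

stats_lock = threading.Lock()  # stand-in for the MgrClient lock
pg_lock = threading.Lock()     # stand-in for a contended PG lock

def report_stats():
    with stats_lock:           # held for the whole collection...
        with pg_lock:          # ...so a slow PG lock blocks every reporter
            return {"num_objects": 42}

def report_stats_decoupled():
    # Safer pattern: take the PG snapshot first, then publish it,
    # never holding both locks at once.
    with pg_lock:
        snapshot = {"num_objects": 42}
    with stats_lock:
        return snapshot

assert report_stats_decoupled() == {"num_objects": 42}
```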
12:54 AM Bug #41816: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_comp...

This can be reproduced by setting config osd_pool_default_pg_autoscale_mode="on" and executing this test:
../qa/...
David Zafman
12:29 AM Bug #41754: Use dump_stream() instead of dump_float() for floats where max precision isn't helpful

I was suspicious that the trailing 0999999994 in the elapsed time is noise. Could this be caused by a float being...
David Zafman

09/18/2019

06:33 PM Backport #41922 (Resolved): mimic: OSDMonitor: missing `pool_id` field in `osd pool ls` command
https://github.com/ceph/ceph/pull/30485
Nathan Cutler
06:33 PM Backport #41921 (Resolved): nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` command
https://github.com/ceph/ceph/pull/30486 Nathan Cutler
06:31 PM Backport #41920 (Resolved): nautilus: osd: scrub error on big objects; make bluestore refuse to s...
https://github.com/ceph/ceph/pull/30783 Nathan Cutler
06:31 PM Backport #41919 (Resolved): luminous: osd: scrub error on big objects; make bluestore refuse to s...
https://github.com/ceph/ceph/pull/30785 Nathan Cutler
06:31 PM Backport #41918 (Resolved): mimic: osd: scrub error on big objects; make bluestore refuse to star...
https://github.com/ceph/ceph/pull/30784 Nathan Cutler
06:31 PM Backport #41917 (Resolved): nautilus: osd: failure result of do_osd_ops not logged in prepare_tra...
https://github.com/ceph/ceph/pull/30546 Nathan Cutler
04:25 PM Bug #41900 (Resolved): auto-scaler breaks many standalone tests
David Zafman
03:38 PM Bug #41913 (Resolved): With auto scaler operating stopping an OSD can lead to COT crashing instea...
... David Zafman
03:03 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
Answering myself: it seems that rbd_support cannot be disabled anyway:
# ceph mgr module disable rbd_support
Error E...
Marcin Gibula
10:59 AM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
I don't believe this command was running at that time, however "rbd_support" mgr module was active. Could this be the... Marcin Gibula
10:53 AM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
Marcin, I believe I know the cause and I am now discussing the fix [1]. A workaround could be not to use "rbd perf im... Mykola Golub
10:13 AM Bug #41891 (Fix Under Review): global osd crash in DynamicPerfStats::add_to_reports
Mykola Golub
06:24 AM Bug #41891 (In Progress): global osd crash in DynamicPerfStats::add_to_reports
Mykola Golub
01:55 PM Bug #41908 (Fix Under Review): TMAPUP operation results in OSD assertion failure
Jason Dillaman
01:47 PM Bug #41908 (Resolved): TMAPUP operation results in OSD assertion failure
In 'do_tmapup', the object is READ into a 'newop' structure and then when it is re-written, the same 'newop' structur... Jason Dillaman
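A minimal sketch of the buffer-reuse pattern described here: reusing the same op structure for the read and the subsequent re-write lets stale read data leak into the write. The class and helpers are hypothetical illustrations, not the actual OSD code:

```python
class Op:
    """Hypothetical op holding an output buffer (illustrative only)."""
    def __init__(self):
        self.outdata = b""

def read_into(op):
    # The read phase fills the op's output buffer.
    op.outdata = b"old-tmap-bytes"

def write_from(op, payload):
    # Bug pattern: the write phase sees whatever the read left behind.
    return op.outdata + payload

op = Op()
read_into(op)
assert write_from(op, b"-new") == b"old-tmap-bytes-new"  # stale data leaked

fresh = Op()  # fix pattern: use a fresh op for the write phase
assert write_from(fresh, b"-new") == b"-new"
```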
10:52 AM Bug #41677: Cephmon:fix mon crash
@shuguang what is the exact version of ceph-mon? i cannot match the backtrace with the source code of master HEAD. Kefu Chai
09:46 AM Feature #41905 (New): Add ability to change fsid of cluster
There is a case where you want to change the fsid of a cluster: when you have split a cluster into two different c... Wido den Hollander

09/17/2019

09:50 PM Bug #41900 (Resolved): auto-scaler breaks many standalone tests

Caused by https://github.com/ceph/ceph/pull/30112
In some cases I had to kill processes to get past hung tests. ...
David Zafman
08:46 PM Bug #41816: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_comp...
This crash didn't reproduce for me using run-standalone.sh with the auto scaler turned off. David Zafman
08:35 PM Bug #40287 (Pending Backport): OSDMonitor: missing `pool_id` field in `osd pool ls` command
Neha Ojha
08:30 PM Bug #41191 (Pending Backport): osd: scrub error on big objects; make bluestore refuse to start on...
Neha Ojha
08:29 PM Bug #41210 (Pending Backport): osd: failure result of do_osd_ops not logged in prepare_transactio...
@shuguang wang did you want this to be backported to a release older than nautilus? Neha Ojha
06:59 PM Bug #41336: All OSD Faild after Reboot.
Hi,
two questions:
- How to find out if a pool is affected?
"ceph osd erasure-code-profile get" does not list...
Oliver Freyermuth
05:04 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
Yes, I use "rbd perf image iotop/iostat" (one of the reasons for upgrade:-) ). Not exporting per image data with prom... Marcin Gibula
03:51 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
Marcin, are you using `rbd perf image iotop|iostat` commands? Or may be prometheus mgr module with rbd per image stat... Mykola Golub
01:49 PM Bug #41891: global osd crash in DynamicPerfStats::add_to_reports
As crash seems to be related to stats reporting - don't know if it is related, but it was soon after eliminating "Leg... Marcin Gibula
10:30 AM Bug #41891 (Resolved): global osd crash in DynamicPerfStats::add_to_reports
Hi,
during routine host maintenance, I've encountered a massive osd crash across the entire cluster. The sequence of event...
Marcin Gibula
01:19 PM Feature #40420 (Need More Info): Introduce an ceph.conf option to disable HEALTH_WARN when nodeep...
https://github.com/ceph/ceph/pull/29422 has been merged, but not yet backported Nathan Cutler
08:05 AM Bug #41754: Use dump_stream() instead of dump_float() for floats where max precision isn't helpful
Regarding elapsed time, it might be important (for `compact` it is not, but for benchmarking it is). Another important thi... Марк Коренберг
06:15 AM Backport #41238 (In Progress): nautilus: Implement mon_memory_target
Sridhar Seshasayee
 
