Activity
From 11/17/2017 to 12/16/2017
12/16/2017
- 04:30 AM Bug #22419 (Pending Backport): Pool Compression type option doesn't apply to new OSD's
- 04:27 AM Bug #22093 (Resolved): osd stuck in loop processing resent ops due to ms inject socket failures: 500
- 02:21 AM Backport #22406 (In Progress): jewel: osd: deletes are performed inline during pg log processing
- 12:48 AM Bug #22462 (Resolved): mon: unknown message type 1537 in luminous->mimic upgrade tests
- http://pulpito.ceph.com/teuthology-2017-12-14_22:26:40-upgrade:luminous-x:point-to-point-x-master-distro-basic-ovh/
...
12/15/2017
- 10:22 PM Backport #22450: luminous: Visibility for snap trim queue length
- unmerged pr can't be cherry-picked anyway...
- 12:05 PM Backport #22450: luminous: Visibility for snap trim queue length
- master PR https://github.com/ceph/ceph/pull/19520 has not been merged yet - do not backport! (this backport ticket wa...
- 08:16 AM Backport #22450 (Resolved): luminous: Visibility for snap trim queue length
- https://github.com/ceph/ceph/pull/20098
- 09:50 PM Bug #22354: v12.2.2 unable to create bluestore osd using ceph-disk
- So I dug a little deeper on this, and followed this gentleman's efforts to manually set up bluestore OSDs (although h...
- 02:47 PM Feature #22456 (New): efficient snapshot rollback
- #21305 :
> Rolling back images is painfully slow. Yes, I know, "rbd clone", but this creates another image and au...
- 12:10 PM Feature #22448: Visibility for snap trim queue length
- @Nathan: yeah, sorry, I thought this process is more manual.
- 12:08 PM Feature #22448: Visibility for snap trim queue length
- @Piotr: It's OK to add e.g. "jewel, luminous" to the "Backport" field right from the beginning, though.
When the ...
- 12:07 PM Feature #22448 (Fix Under Review): Visibility for snap trim queue length
- master PR is https://github.com/ceph/ceph/pull/19520
- 12:06 PM Feature #22448: Visibility for snap trim queue length
- @Piotr: Please wait until the master PR is merged before starting the backporting process. Thanks.
- 08:11 AM Feature #22448 (Resolved): Visibility for snap trim queue length
- We observed unexplained, constant disk space usage increase on a few of our prod clusters. At first we thought that i...
- 12:04 PM Backport #22449: jewel: Visibility for snap trim queue length
- master PR https://github.com/ceph/ceph/pull/19520 has not been merged yet - do not backport! (this backport ticket wa...
- 08:13 AM Backport #22449 (Resolved): jewel: Visibility for snap trim queue length
- https://github.com/ceph/ceph/pull/21200
- 06:23 AM Bug #22093 (Fix Under Review): osd stuck in loop processing resent ops due to ms inject socket fa...
- https://github.com/ceph/ceph/pull/19542
- 06:02 AM Bug #22346: OSD_ORPHAN issues after jewel->luminous upgrade, but orphaned osds not in crushmap
- Hi Graham,
The consensus is that this was caused by a bug in a previous release which failed to remove the devices...
- 05:53 AM Bug #22440: New pgs per osd hard limit can cause peering issues on existing clusters
- We could definitely add a health warning for when we hit that condition in maybe_wait_for_max_pg()? that should show ...
12/14/2017
- 09:39 PM Bug #22354: v12.2.2 unable to create bluestore osd using ceph-disk
- I am getting the exact same behavior during @ceph-deploy osd activate@ (which uses @ceph-disk activate@) on a newly-d...
- 09:27 PM Bug #22440: New pgs per osd hard limit can cause peering issues on existing clusters
- Sure that makes sense.
If not a new state, how about something that would show up in pg query. I queried the pg wi...
- 09:19 PM Bug #22440: New pgs per osd hard limit can cause peering issues on existing clusters
- I'm inclined to think we just need to surface this better (perhaps as a new state?) rather than try and let it peer i...
- 11:45 AM Bug #22440 (Resolved): New pgs per osd hard limit can cause peering issues on existing clusters
- During an upgrade of OSDs in a cluster from Filestore to Bluestore, the CRUSH layout changed in my cluster. This result...
- 08:17 PM Bug #22445 (New): ceph osd metadata reports wrong "back_iface"
- ceph osd metadata reports wrong "back_iface". Example: ceph osd metadata 0
{
"id": 0,
"arch": "x86_64",
...
- 08:05 PM Feature #22260: osd: recover after network outages
- Thanks Joao for fielding Shinobu's question.
- 04:08 PM Feature #22442 (New): ceph daemon mon.id mon_status -> ceph daemon mon.id status
- ceph mon_status is not consistent with the status command for all other daemons. It would bring consistency to ceph w...
- 12:42 PM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- Pushed https://shaman.ceph.com/builds/ceph/wip-jewel-22064/ pointing to 900e8cfc0d3a8057c3528b5e1787560bb6c2f198 whic...
- 12:28 PM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- I'm going through the jewel rados nightlies now to figure out when it started. Here is a list of SHA1s where the bug ...
- 11:12 AM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- Raising priority because this is readily reproducible in baseline jewel. (By "baseline jewel" I mean jewel HEAD, curr...
- 12:36 PM Bug #21997 (Resolved): thrashosds defaults to min_in 3, some ec tests are (2,2)
- 12:34 PM Backport #22391 (Resolved): luminous: thrashosds defaults to min_in 3, some ec tests are (2,2)
- 02:13 AM Backport #22391 (Closed): luminous: thrashosds defaults to min_in 3, some ec tests are (2,2)
- d21809b is already in luminous.
- 11:05 AM Bug #22438 (Fix Under Review): mon: leak in lttng dlopen / __tracepoints__init
- https://github.com/ceph/ceph/pull/19515
- 10:17 AM Bug #22438: mon: leak in lttng dlopen / __tracepoints__init
- now daemons are linked against ceph-common, and when ceph-common is dlopen'ed @__tracepoints__init()@ is called; when...
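A minimal illustration of the pattern described above (this is not Ceph or lttng code; the file and symbol names are invented): an allocation made in a shared library's constructor during dlopen() is reported by valgrind as lost if nothing releases it before the library is unloaded, which is how allocations made from @__tracepoints__init()@ can surface in the mon's valgrind report.
<pre>
/* libtpdemo.c -- build: cc -shared -fPIC libtpdemo.c -o libtpdemo.so */
#include <stdlib.h>

static char *registry;                   /* never freed by any destructor */

__attribute__((constructor))
static void demo_tracepoints_init(void)  /* runs automatically inside dlopen() */
{
    registry = malloc(4096);             /* valgrind attributes this block to the
                                            constructor's call chain */
}

/* main.c -- build: cc main.c -o main -ldl
 * run:   valgrind --leak-check=full ./main */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("./libtpdemo.so", RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
    dlclose(h);   /* unmapping the library discards the only pointer to the
                     allocation, so valgrind reports it as definitely lost */
    return 0;
}
</pre>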
- 02:18 AM Bug #22438 (Resolved): mon: leak in lttng dlopen / __tracepoints__init
- In the 3 valgrind failures here: http://pulpito.ceph.com/yuriw-2017-12-12_20:47:55-rados-wip-yuri2-testing-2017-12-12...
- 07:48 AM Backport #22400 (In Progress): jewel: PR #16172 causing performance regression
- 07:14 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- 02:20 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- How does https://github.com/ceph/ceph/pull/19461 fix the bug in gcc7?
12/13/2017
- 10:58 PM Backport #22399 (In Progress): luminous: Manager daemon x is unresponsive. No standby daemons ava...
- https://github.com/ceph/ceph/pull/19501
- 10:49 PM Backport #22402 (In Progress): luminous: osd: replica read can trigger cache promotion
- https://github.com/ceph/ceph/pull/19499
- 10:25 PM Backport #22405 (In Progress): jewel: store longer dup op information
- 09:42 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- also in
http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/
'1946610'
- 09:41 PM Bug #22063: "RadosModel.h: 1703: FAILED assert(!version || comp->get_version64() == version)" inr...
- also in
http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/
'1946540'
- 09:40 PM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- Also in http://pulpito.ceph.com/teuthology-2017-12-09_02:00:03-rados-jewel-distro-basic-smithi/
Jobs: '1946659', '19...
- 02:34 PM Backport #22389 (In Progress): luminous: ceph-objectstore-tool: Add option "dump-import" to exami...
- 02:21 PM Bug #22419 (Fix Under Review): Pool Compression type option doesn't apply to new OSD's
- https://github.com/ceph/ceph/pull/19486
- 02:11 PM Bug #22419: Pool Compression type option doesn't apply to new OSD's
- 11:49 AM Bug #22419 (Resolved): Pool Compression type option doesn't apply to new OSD's
- If you set the pool compression type option to something like snappy, existing bluestore OSD's will then start compre...
- 02:13 PM Feature #22420: Add support for obtaining a list of available compression options
- This came up on IRC: my suggestion was to include the list of usable plugins in the MOSDBoot metadata.
- 12:30 PM Feature #22420 (Resolved): Add support for obtaining a list of available compression options
- According to the documentation, Ceph supports a variety of compression algorithms when creating Pools on BlueStore vi...
- 01:16 PM Backport #22069: luminous: osd/ReplicatedPG.cc: recover_replicas: object added to missing set for...
- opened http://tracker.ceph.com/issues/22423 for the backport; waiting for green light from Sage.
- 01:01 PM Backport #22069: luminous: osd/ReplicatedPG.cc: recover_replicas: object added to missing set for...
- David Zafman wrote:
> This will backport more cleanly if we also backport #17708 (https://github.com/ceph/ceph/pull/...
- 01:14 PM Backport #22423: luminous: osd: initial minimal efforts to clean up PG interface
- h3. description
Backport of https://github.com/ceph/ceph/pull/17708 which does not have an associated tracker issue.
- 01:13 PM Backport #22423 (Closed): luminous: osd: initial minimal efforts to clean up PG interface
- 01:09 PM Backport #22421 (In Progress): mon doesn't send health status after paxos service is inactive tem...
- 12:55 PM Backport #22421 (Resolved): mon doesn't send health status after paxos service is inactive tempor...
- https://github.com/ceph/ceph/pull/19481
- 01:08 PM Support #22422 (New): Block fsid does not match our fsid
- Hi, i'm deploying new OSDs with luminous and bluestore.
I'm trying with:
"ceph-disk prepare --bluestore /dev/sda --... - 12:33 PM Bug #22142 (Pending Backport): mon doesn't send health status after paxos service is inactive tem...
- 12:05 PM Bug #21557: osd.6 found snap mapper error on pg 2.0 oid 2:0e781f33:::smithi14431805-379 ... :187 ...
- https://github.com/ceph/ceph/pull/19366 is merged for more verbose logs
- 11:29 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- @Kefu: thanks for the quick fix!
- 04:36 AM Bug #22220 (Fix Under Review): osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type...
- https://github.com/ceph/ceph/pull/19461
- 03:58 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- The build in https://github.com/ceph/ceph/pull/19457 was done on Ubuntu :(
- 03:57 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- please see https://github.com/ceph/ceph/pull/19426. that's why it popped up recently.
Nathan, to downgrade the GCC...
- 02:07 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- ...
- 01:22 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- ...
- 01:05 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- Is Jenkins using Fedora? If not I'd suggest we create a bug against the appropriate OS and component. I suspect this ...
- 12:57 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- Raising priority because this error is now affecting (all?) Jewel PRs. See e.g. https://github.com/ceph/ceph/pull/194...
- 05:29 AM Bug #22369: out of order reply on set-chunks.yaml workload
- https://github.com/ceph/ceph/pull/19464
- 02:50 AM Bug #22415: 'pg dump' fails after mon rebuild
- probably the mgr commands didn't get included in the rebuilt mon? i think they should get set after the mgr daemon r...
- 02:50 AM Bug #22415 (Duplicate): 'pg dump' fails after mon rebuild
- ...
- 01:12 AM Bug #18239 (New): nan in ceph osd df again
- 01:05 AM Bug #19700: OSD remained up despite cluster network being inactive?
- 12:18 AM Bug #22413: can't delete object from pool when Ceph out of space
- You can get around this by using rados_write_op_operate with the 'LIBRADOS_OPERATION_FULL_FORCE' flag (128), like the...
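A minimal sketch of that workaround using the librados C API (the pool name, object name, and ceph.conf path below are placeholders, and error handling is trimmed; each call returns a negative errno on failure): queue the delete in a write op and submit it with rados_write_op_operate() passing LIBRADOS_OPERATION_FULL_FORCE, so the operation is allowed through even when the cluster is marked full.
<pre>
/* force_rm.c -- illustrative only; build: cc force_rm.c -o force_rm -lrados */
#include <rados/librados.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rados_write_op_t op;
    int r;

    rados_create(&cluster, "admin");                      /* connect as client.admin */
    rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
    rados_connect(cluster);
    rados_ioctx_create(cluster, "mypool", &io);           /* "mypool"/"myobject" are placeholders */

    op = rados_create_write_op();
    rados_write_op_remove(op);                            /* queue the delete in the op */
    /* LIBRADOS_OPERATION_FULL_FORCE (value 128) lets the op proceed on a full cluster */
    r = rados_write_op_operate(op, io, "myobject", NULL, LIBRADOS_OPERATION_FULL_FORCE);
    printf("remove returned %d\n", r);

    rados_release_write_op(op);
    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return r < 0 ? 1 : 0;
}
</pre>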
12/12/2017
- 10:35 PM Bug #22413: can't delete object from pool when Ceph out of space
- forgot to mention I get errors like this when it fills up:
192.168.203.54: 2017-12-12 22:33:58.369563 7f2f1cb7ee40...
- 08:57 PM Bug #22413 (Resolved): can't delete object from pool when Ceph out of space
- I ran into a situation where python librados script would hang while trying to delete an object when Ceph storage was...
- 03:16 PM Bug #22409 (Resolved): ceph_objectstore_tool: no flush before collection_empty() calls; ObjectSto...
- Currently we need callers to flush the sequencer before collection_list (and thus collection_empty).
/a/sage-2017-...
- 02:51 PM Bug #22408 (Can't reproduce): objecter: sent out of order ops
- ...
- 02:23 PM Bug #22232 (Duplicate): ceph df %used output is wrong
- Closing as a duplicate of http://tracker.ceph.com/issues/22247
- 01:55 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- hongpeng lu wrote:
> Hmm, actually, I don't know how to submit a PR.
https://github.com/ceph/ceph/blob/master/Sub...
- 08:45 AM Backport #22406 (Rejected): jewel: osd: deletes are performed inline during pg log processing
- https://github.com/ceph/ceph/pull/19558
- 08:45 AM Backport #22405 (Rejected): jewel: store longer dup op information
- -https://github.com/ceph/ceph/pull/19497-
https://github.com/ceph/ceph/pull/19558
- 08:45 AM Backport #22403 (Resolved): jewel: osd: replica read can trigger cache promotion
- https://github.com/ceph/ceph/pull/21199
- 08:45 AM Backport #22402 (Resolved): luminous: osd: replica read can trigger cache promotion
- 08:45 AM Backport #22400 (Rejected): jewel: PR #16172 causing performance regression
- -https://github.com/ceph/ceph/pull/19497-
https://github.com/ceph/ceph/pull/19558
- 08:45 AM Backport #22399 (Resolved): luminous: Manager daemon x is unresponsive. No standby daemons available
- https://github.com/ceph/ceph/pull/19501
- 08:43 AM Backport #22391 (Resolved): luminous: thrashosds defaults to min_in 3, some ec tests are (2,2)
- https://github.com/ceph/ceph/pull/18702
- 08:43 AM Backport #22390 (Rejected): jewel: ceph-objectstore-tool: Add option "dump-import" to examine an ...
- https://github.com/ceph/ceph/pull/21193
- 08:43 AM Backport #22389 (Resolved): luminous: ceph-objectstore-tool: Add option "dump-import" to examine ...
- https://github.com/ceph/ceph/pull/19487
- 08:43 AM Backport #22387 (Resolved): luminous: PG stuck in recovery_unfound
- https://github.com/ceph/ceph/pull/20055
- 07:26 AM Bug #22350: nearfull OSD count in 'ceph -w'
- Greg Farnum wrote:
Hi Greg,
> Can you produce logs of the monitor doing this? With "debug mon = 20" set?
Not...
- 01:39 AM Bug #22369: out of order reply on set-chunks.yaml workload
- Let me take a look at this issue.
12/11/2017
- 09:44 PM Bug #22350 (Need More Info): nearfull OSD count in 'ceph -w'
- 09:44 PM Bug #22350: nearfull OSD count in 'ceph -w'
- Can you produce logs of the monitor doing this? With "debug mon = 20" set?
- 09:29 PM Bug #19971 (Pending Backport): osd: deletes are performed inline during pg log processing
- 09:20 PM Bug #22369: out of order reply on set-chunks.yaml workload
- 09:19 PM Bug #22369 (Resolved): out of order reply on set-chunks.yaml workload
- ...
12/09/2017
- 03:40 AM Feature #22086 (Pending Backport): ceph-objectstore-tool: Add option "dump-import" to examine an ...
12/08/2017
- 07:29 PM Bug #22354: v12.2.2 unable to create bluestore osd using ceph-disk
- Reproducing steps...
=======================
Stopping osd.112
# systemctl stop ceph-osd@112
Removing 112 fro...
- 06:35 PM Bug #22354 (Resolved): v12.2.2 unable to create bluestore osd using ceph-disk
- Hello,
We are aware that ceph-disk is deprecated in 12.2.2. As part of my testing, I can still use this ceph...
- 05:08 PM Bug #22346: OSD_ORPHAN issues after jewel->luminous upgrade, but orphaned osds not in crushmap
- Interesting! It seems like we probably removed 30 osds from the old retired hardware, so it's curious that just 3 had...
- 07:27 AM Bug #22346: OSD_ORPHAN issues after jewel->luminous upgrade, but orphaned osds not in crushmap
- So this is happening because entries for "device2", "device14", and "device19" still have entries in the "name_map" s...
- 12:13 AM Bug #22346: OSD_ORPHAN issues after jewel->luminous upgrade, but orphaned osds not in crushmap
- 12:12 AM Bug #22346: OSD_ORPHAN issues after jewel->luminous upgrade, but orphaned osds not in crushmap
- Thanks Graham,
I'll be taking a look into this. I can confirm I can reproduce the issue locally with the osdmaptoo...
- 04:29 PM Bug #22142 (Fix Under Review): mon doesn't send health status after paxos service is inactive tem...
- https://github.com/ceph/ceph/pull/19404
- 03:51 PM Bug #22142 (In Progress): mon doesn't send health status after paxos service is inactive temporarily
- 03:49 PM Bug #22142: mon doesn't send health status after paxos service is inactive temporarily
- mon/MgrMonitor::send_digests() stops the periodic digests if PaxosService goes inactive for a time (say when a MON go...
- 08:30 AM Bug #22351 (Resolved): Couldn't init storage provider (RADOS)
2017-12-08 16:25:46.172119 7f12bf18de00 0 deferred set uid:gid to 167:167 (ceph:ceph)
2017-12-08 16:25:46.172...
- 08:00 AM Bug #22350 (Resolved): nearfull OSD count in 'ceph -w'
- Hello,
While looking at the 'ceph -w' output i noticed that sometimes the 'nearfull' information is wrong:
"201... - 07:14 AM Bug #22093: osd stuck in loop processing resent ops due to ms inject socket failures: 500
- Sage, let's make it 1000 then. if it helps with the test. ...
- 07:12 AM Bug #22349 (New): valgrind: Leak_StillReachable in rocksdb
- ...
- 07:07 AM Bug #22278: FreeBSD fails to build with WITH_SPDK=ON
- http://dpdk.org/dev/patchwork/patch/31865/
12/07/2017
- 10:39 PM Bug #22346 (Resolved): OSD_ORPHAN issues after jewel->luminous upgrade, but orphaned osds not in ...
- Just updated a fairly long-lived (originally firefly) cluster from jewel to luminous 12.2.2.
One of the issues I se...
- 03:32 AM Bug #20924: osd: leaked Session on osd.7
- /a/sage-2017-12-06_22:54:32-rados-wip-sage-testing-2017-12-06-1352-distro-basic-smithi/1939984
osd.7 again!
- 01:51 AM Feature #22086 (Fix Under Review): ceph-objectstore-tool: Add option "dump-import" to examine an ...
- 01:51 AM Feature #22086: ceph-objectstore-tool: Add option "dump-import" to examine an export
- https://github.com/ceph/ceph/pull/19368
12/06/2017
- 11:56 PM Bug #22329: mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
- I took a look and didn't see anything going through MDSMonitor.* or FSCommand.*.
It looks like a leaked session du...
- 10:12 PM Bug #22329: mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
- We'll keep this here in case we see it elsewhere, but the leaks I see are of messages and the AuthSessions associated...
- 06:27 AM Bug #22329 (Closed): mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
- See: /ceph/teuthology-archive/pdonnell-2017-12-05_06:48:09-fs-wip-pdonnell-testing-20171205.044504-testing-basic-smit...
- 11:49 PM Bug #21557: osd.6 found snap mapper error on pg 2.0 oid 2:0e781f33:::smithi14431805-379 ... :187 ...
- saw this again,
/a/sage-2017-12-05_16:19:46-rados-mimic-dev1-distro-basic-smithi/1933230
still confused. it looks...
- 11:00 PM Bug #22233: prime_pg_temp breaks on uncreated pgs
- /a/sage-2017-12-05_18:31:27-rados-wip-pg-scrub-preempt-distro-basic-smithi/1934001 ?
- 06:33 AM Bug #22330: ec: src/common/interval_map.h: 161: FAILED assert(len > 0)
- Another: /ceph/teuthology-archive/pdonnell-2017-12-05_06:54:06-kcephfs-wip-pdonnell-testing-20171205.044504-testing-b...
- 06:31 AM Bug #22330 (Resolved): ec: src/common/interval_map.h: 161: FAILED assert(len > 0)
- ...
12/05/2017
- 12:30 AM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- 12:23 AM Support #22132 (Resolved): OSDs stuck in "booting" state after catastrophic data loss
- This isn't impossible but I believe you've gone about it the wrong way. See http://docs.ceph.com/docs/master/rados/tr...
- 12:17 AM Bug #22144 (Can't reproduce): *** Caught signal (Aborted) ** in thread thread_name:tp_peering
- This was discussed on the mailing list thread "[ceph-users] OSD Random Failures - Latest Luminous" and ended without ...
- 12:07 AM Support #22224 (Resolved): memory leak
- There was an issue in luminous where it was mis-estimating the amount of memory used in bluestore, but that is resolv...
12/04/2017
- 11:56 PM Bug #22310 (Fix Under Review): wrong comparison at evicting
- 02:31 AM Bug #22310 (Fix Under Review): wrong comparison at evicting
- Colder object gets lower temperature(smaller value).
Thus just use its ranking level in rating scale to
do comparis...
- 11:16 PM Backport #22069: luminous: osd/ReplicatedPG.cc: recover_replicas: object added to missing set for...
- This will backport more cleanly if we also backport #17708 (https://github.com/ceph/ceph/pull/17708) reducing risk of...
- 07:15 PM Bug #22317 (New): pybind: InvalidArgumentError does not take keyword arguments
- This is caused because the exception class inherits from parent classes that do not define any keyword arguments
<...
- 03:43 PM Support #22224: memory leak
- We had this issue in a couple of OSDs which came down
We then tried to decrease 'bluestore_cache_size' to 100MB ( ...
12/02/2017
- 06:14 AM Bug #21147 (Pending Backport): Manager daemon x is unresponsive. No standby daemons available
- 06:14 AM Bug #22257 (Pending Backport): mon: mgrmaps not trimmed
- 05:02 AM Bug #20188: filestore: os/filestore/FileStore.h: 357: FAILED assert(q.empty()) from ceph_test_obj...
- /a/kchai-2017-12-01_08:15:14-rados-wip-kefu-testing-2017-12-01-1256-distro-basic-mira/1912013
12/01/2017
- 10:00 PM Bug #22300 (Rejected): ceph osd reweightn command seems to change weight value
- Hi,
on 12.2.2, when using the ceph osd reweightn command it looks like the weight value gets changed to an unexpec...
- 03:43 PM Bug #19299: Jewel -> Kraken: OSD boot takes 1+ hours, unusually high CPU
- We did not have any similar issues updating from Jewel to Luminous. As far as I'm concerned there's no reason to kee...
- 11:29 AM Feature #22260: osd: recover after network outages
- The behavior Karol looked into would have benefited from a suicide in case of a timeout. Instead, what he saw was OSD...
- 10:35 AM Bug #21925: cluster capacity is much more smaller than it should be
- Hi jianpeng, Sage,
Thanks for your attention. It seems that bluestore doesn't support partitioned block device nor ...
- 04:53 AM Bug #21825: OSD won't stay online and crashes with abort
- Looks like this ticket won't get investigated anymore, should I delete my object-store-tool export of the PG? You ca...
11/30/2017
- 07:25 PM Bug #21721: ceph pg force-backfill cmd failed with ENOENT error
- Just hit this myself- strangely it worked and then it didn't in the space of minutes. have a kinda small cluster that...
- 06:51 PM Bug #22093: osd stuck in loop processing resent ops due to ms inject socket failures: 500
- caught in a loop resending lots of queued ops, with
ms inject socket failures: 500
so that we fail the c...
- 03:24 PM Bug #21218: thrash-eio + bluestore (hangs with unfound objects or read_log_and_missing assert)
- demoting this since i haven't seen it recently. will update the bug next time it happens in qa.
- 02:29 PM Bug #22266 (Resolved): mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- 02:27 PM Backport #22275 (Resolved): luminous: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- 09:29 AM Bug #18239: nan in ceph osd df again
- 10.2.10 still has this bug:...
- 06:09 AM Bug #22278 (Resolved): FreeBSD fails to build with WITH_SPDK=ON
- quote from Willem Jan Withagen
> I'm having a bit of trouble in FreeBSD jenkins as well:...
- 05:09 AM Support #22243: Luminous: EC pool using more space than it should
- That definitely makes sense that there's additional space taken up from metadata and other information. It just seems...
11/29/2017
- 10:27 PM Bug #20059: miscounting degraded objects
- I don't know what Sage is referring to regarding accounting for missing_loc. Should num_objects_missing be set for t...
- 05:11 PM Bug #20059 (In Progress): miscounting degraded objects
- David fixed most of this, but there is still one piece left (accurate accounting for missing_loc)
- 09:25 PM Backport #22270 (Resolved): ceph df report wrong pool usage pct
- 08:22 AM Backport #22270 (In Progress): ceph df report wrong pool usage pct
- https://github.com/ceph/ceph/pull/19230
- 08:19 AM Backport #22270 (Resolved): ceph df report wrong pool usage pct
- https://github.com/ceph/ceph/pull/19230
- 06:54 PM Bug #19700: OSD remained up despite cluster network being inactive?
- Yes, we can still reproduce this on 10.2.10. We have not updated to luminous as of yet.
- 05:14 PM Bug #19700 (Need More Info): OSD remained up despite cluster network being inactive?
- Patrick, can you still reproduce this?
- 06:49 PM Bug #22145 (Pending Backport): PG stuck in recovery_unfound
- 04:21 PM Bug #22145: PG stuck in recovery_unfound
- https://github.com/ceph/ceph/pull/18974
- 05:38 PM Bug #22050: ERROR type entries of pglog do not update min_last_complete_ondisk, potentially ballo...
- Josh thinks we still want to trim since it's a write to disk.
- 05:33 PM Bug #11907 (Closed): crushmap validation must not block the monitor
- there is a timeout
- 05:32 PM Bug #12405 (Resolved): filestore: syncfs causes high cpu load due to kernel implementation in hig...
- Newer kernels fix syncfs(2) to use a dirty inode list for this. (The fix isn't in the latest el7 kernel(s) yet, thou...
- 05:32 PM Bug #15936 (Can't reproduce): Osd-s on cache pool crash after upgrade from Hammer to Jewel
- not enough info here to go on..
- 05:31 PM Bug #16236: cache/proxied ops from different primaries (cache interval change) don't order proper...
- 05:25 PM Bug #17553 (Can't reproduce): OSD crashed with signal 6, BackoffThrottle::get(unsigned long)
- 05:25 PM Bug #17660 (Can't reproduce): objecter_requests 'mtime": "1970-01-01 00:00:00.000000s"' and '"las...
- My guess is this is related to some issues with full handling that are fixed in luminous and were backported to later...
- 05:21 PM Bug #18239 (Need More Info): nan in ceph osd df again
- Thinking/hoping we cleaned these up? Dan, do you still see this?
- 05:20 PM Bug #18240 (Can't reproduce): Deep scrub errors running cephfs kernel client on jewel
- 05:19 PM Bug #19377 (Duplicate): mark_unfound_lost revert won't actually recover the objects unless there ...
- I think https://github.com/ceph/ceph/pull/18974 addresses this!
- 05:18 PM Feature #21198: Monitors don't handle incomplete network splits
- Yep. If you have a network partition that can be crossed by some monitors but not others, we're screwed.
Resolving...
- 05:16 PM Bug #19449 (Won't Fix): 10.2.3->10.2.6 upgrade switched crush tunables, generated crc errors whil...
- jewel isn't as careful about crc mismatches; going forward this won't happen but we can't change old 10.2.x versions.
- 05:16 PM Bug #21211 (Need More Info): 12.2.0,cephfs(meta replica 2, data ec 2+1),ceph-osd coredump
- We can't do anything without logs and a cluster description here. Was this on bluestore?
- 05:16 PM Bug #19606 (Can't reproduce): monitors crash on incorrect OSD UUID (and bad uuid following reboot?)
- 05:12 PM Bug #19299 (Can't reproduce): Jewel -> Kraken: OSD boot takes 1+ hours, unusually high CPU
- Note that this code is slated to get replaced in mimic.
- 05:10 PM Bug #20451 (Can't reproduce): osd Segmentation fault after upgrade from jewel (10.2.5) to kraken ...
- 05:10 PM Bug #20476 (Can't reproduce): ops stuck waiting_for_map
- 05:10 PM Bug #21721 (Can't reproduce): ceph pg force-backfill cmd failed with ENOENT error
- I don't think we've seen this elsewhere and we have no context for how to make it happen again...
- 05:09 PM Bug #21931: osd: src/osd/ECBackend.cc: 2164: FAILED assert((offset + length) <= (range.first.get_...
- Oh, I think this is the thing where we're requesting reads of non-existent chunks?
- 05:09 PM Bug #20491 (Can't reproduce): objecter leaked OSDMap in handle_osd_map
- 05:07 PM Bug #20000 (Can't reproduce): osd assert in shared_cache.hpp: 107: FAILED assert(weak_refs.empty())
- 05:05 PM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- 05:04 PM Bug #20908 (Resolved): qa/standalone/misc failure in TEST_mon_features
- 05:03 PM Bug #15653: crush: low weight devices get too many objects for num_rep > 1
- The new weight-set capability in crush gives us the tool to fix this, but the balancer module does not try to fix it y...
- 05:02 PM Bug #20973 (Can't reproduce): src/osdc/ Objecter.cc: 3106: FAILED assert(check_latest_map_ops.fin...
- 05:02 PM Bug #20798 (Can't reproduce): LibRadosLockECPP.LockExclusiveDurPP gets EEXIST
- 05:02 PM Bug #20974 (Can't reproduce): osd/PG.cc: 3377: FAILED assert(r == 0) (update_snap_map remove fails)
- 05:01 PM Bug #20986 (Can't reproduce): segv in crush_destroy_bucket_straw2 on rados/standalone/misc.yaml
- 05:01 PM Bug #20876 (Can't reproduce): BADAUTHORIZER on mgr, hung ceph tell mon.*
- 04:59 PM Bug #21263 (Resolved): when disk error happens, osd reports assertion failure without any error i...
- 04:58 PM Bug #21580 (Resolved): osd: stalled recovery ends up in recovery_wait
- PG_STATE_RECOVERY_UNFOUND
- 04:57 PM Bug #21592 (Can't reproduce): LibRadosCWriteOps.CmpExt got 0 instead of -4095-1
- 04:56 PM Bug #20909: Error ETIMEDOUT: crush test failed with -110: timed out during smoke test (5 seconds)
- Haven't seen since mons were moved to nvme on smithi
- 04:56 PM Bug #20909 (Can't reproduce): Error ETIMEDOUT: crush test failed with -110: timed out during smok...
- 04:55 PM Bug #20981 (Can't reproduce): ./run_seed_to_range.sh errored out
- this failure seems to have gone away..
- 04:50 PM Bug #21997 (Pending Backport): thrashosds defaults to min_in 3, some ec tests are (2,2)
- 04:49 PM Bug #21825 (Closed): OSD won't stay online and crashes with abort
- Looks like stuff is working now.
- 04:49 PM Bug #21144 (Resolved): daemon-helper: command crashed with signal 1
- I missed before that this was an upgrade test; it's working now.
- 04:46 PM Bug #22123: osd: objecter sends out of sync with pg epochs for proxied ops
- 04:46 PM Bug #22123: osd: objecter sends out of sync with pg epochs for proxied ops
- We could also cancel them in reverse order; that won't break ordering and doesn't require new interfaces.
- 04:42 PM Bug #21823: on_flushed: object ... obc still alive (ec + cache tiering)
- The problem is that we want the (weak) refs to go away. If they still exist, then someone has a handle to an obc fro...
- 04:42 PM Bug #22165: split pg not actually created, gets stuck in state unknown
- Two solutions. Sage thinks it wouldn't be hard to make create code account for splits that need to be processed.
S...
- 04:32 PM Bug #20922: misdirected op with localize_reads set
- Downgrading this as it's localized_reads.
- 04:31 PM Bug #20919 (Pending Backport): osd: replica read can trigger cache promotion
- 04:31 PM Bug #20874 (Can't reproduce): osd/PGLog.h: 1386: FAILED assert(miter == missing.get_items().end()...
- There have been changes to bluestore, and I think maybe bugfixes to the missing set. And no reproductions.
- 04:30 PM Bug #21147 (Fix Under Review): Manager daemon x is unresponsive. No standby daemons available
- https://github.com/ceph/ceph/pull/19242
- 04:27 PM Bug #21147 (In Progress): Manager daemon x is unresponsive. No standby daemons available
- Sage believes this is due to high failure injections in the messenger in some of our testing, which makes it sometime...
- 04:29 PM Bug #21130 (Need More Info): "FAILED assert(bh->last_write_tid > tid)" in powercycle-master-testi...
- Apparently haven't seen this since and there are no logs any more. :(
- 04:25 PM Bug #21165: 2 pgs stuck in unknown during thrashing
- Haven't seen this since then.
- 04:23 PM Bug #20910: spurious MON_DOWN, apparently slow/laggy mon
- Moved the /var/log/ceph into NVMe and haven't seen it since.
- 04:23 PM Bug #20910 (Resolved): spurious MON_DOWN, apparently slow/laggy mon
- 04:22 PM Bug #21846 (Closed): Default ms log level results in ~40% performance degradation on RBD 4K rando...
- 04:19 PM Bug #22266 (Pending Backport): mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- 03:22 PM Bug #22266 (Fix Under Review): mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- https://github.com/ceph/ceph/pull/19238
- 12:22 PM Bug #22266: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- 04:17 PM Bug #22135 (Resolved): bluestore: "Caught signal (Segmentation fault)" in fs-luminous-distro-basi...
- 04:16 PM Bug #21750: scrub stat mismatch on bytes
- Also, we are talking about mostly reverting this and replacing it with accounting done at the bluestore layer.
- 04:15 PM Bug #21750: scrub stat mismatch on bytes
- I think this is fixed now?
- 03:53 PM Backport #22275 (In Progress): luminous: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- https://github.com/ceph/ceph/pull/19240
- 03:52 PM Backport #22275 (Resolved): luminous: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- ...
- 03:19 PM Support #22224: memory leak
- Clearly the system is out of memory, but what's the evidence for the OSDs specifically being the culprit?
- 03:01 PM Bug #21925 (Need More Info): cluster capacity is much more smaller than it should be
- the osd partitions are probably small. 'ceph osd df' and a regular 'df' on the osd host(s) will help.
- 08:07 AM Bug #22093: osd stuck in loop processing resent ops due to ms inject socket failures: 500
- http://pulpito.ceph.com/kchai-2017-11-28_10:23:40-rados-wip-kefu-testing-2017-11-28-1542-distro-basic-mira/1901349/
...
11/28/2017
- 11:14 PM Bug #22266: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- /a/yuriw-2017-11-27_23:31:26-rados-luminous-distro-basic-smithi/1897131
reproducible!
- 06:17 PM Bug #22266: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- The monitor is sending the manager a MgrMap with epoch zero. It's happening immediately after a monitor restart, whi...
- 04:31 PM Bug #22266 (Resolved): mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- ...
- 06:45 PM Backport #21650 (Resolved): luminous: buffer_anon leak during deep scrub (on otherwise idle osd)
- Fixed code is already in luminous.
- 09:52 AM Bug #22095 (Fix Under Review): ceph status shows wrong number of objects
- 09:51 AM Bug #22232 (Fix Under Review): ceph df %used output is wrong
- 05:08 AM Feature #22260: osd: recover after network outages
- Do you want to avoid OSD(s) suicide because of timeout?
- 04:52 AM Feature #22260 (New): osd: recover after network outages
- We've run into a situation where after an 802.3ad/lacp enabled switch(es) has been rebooted, some OSDs failed to reco...
- 02:41 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Greg Farnum wrote:
> Ummm, yep, that looks right to me at a quick glance! Can you submit a PR with that change? :)
...
- 02:04 AM Backport #22258 (In Progress): mon: mgrmaps not trimmed
- 02:04 AM Backport #22258 (Resolved): mon: mgrmaps not trimmed
- mgrmonitor does not trim old mgrmaps. these can accumulate forever.
https://github.com/ceph/ceph/pull/19187
11/27/2017
- 11:04 PM Bug #22257: mon: mgrmaps not trimmed
- https://github.com/ceph/ceph/pull/19185
- 11:04 PM Bug #22257 (Resolved): mon: mgrmaps not trimmed
- mgrmonitor does not trim old mgrmaps. these can accumulate forever.
- 10:05 PM Support #22243: Luminous: EC pool using more space than it should
- There’s a few things here to look at.
OSDs do use up space for non-object metadata and tracking purposes. It’s not a...
- 07:53 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- 07:52 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Ummm, yep, that looks right to me at a quick glance! Can you submit a PR with that change? :)
11/25/2017
- 04:38 PM Support #22243 (New): Luminous: EC pool using more space than it should
- Hello,
I have an erasure-coded pool that is using more space on the OSDs than it should be. The EC profile is set ...
11/24/2017
- 10:05 PM Bug #21625 (Resolved): ceph-kvstore-tool does not call bluestore's umount when exit
- 10:04 PM Bug #21624 (Resolved): BlueStore::umount will crash when the BlueStore is opened by start_kv_only()
- 09:54 PM Bug #22039 (Resolved): bluestore: segv during unmount
- 08:40 AM Bug #22233 (In Progress): prime_pg_temp breaks on uncreated pgs
- # mon.b instructed osd.3 to create pg 92.4. the up set was [3,6]
# osd.3 created pg 92.4, and sent "created" message ...
- 03:45 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Greg Farnum wrote:
> Hmm, nope, it's definitely something different if it's happening in Luminous.
I changed some...
11/23/2017
- 11:51 AM Bug #22095: ceph status shows wrong number of objects
- https://github.com/ceph/ceph/pull/19117
- 11:45 AM Bug #22232: ceph df %used output is wrong
- https://github.com/ceph/ceph/pull/19116
- 11:43 AM Bug #22232 (Duplicate): ceph df %used output is wrong
- Consider this output:
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
30.2G 14.3G 15.9G ...
- 11:37 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Captured monitor's log...
- 06:28 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- It happened again....
- 09:46 AM Support #22224: memory leak
- One correction, we're running 12.2.1
Thanks
11/22/2017
- 03:10 PM Support #22224: memory leak
- Hello
We have a fresh 'luminous' ( 12.2.0 ) (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc) ( installed us...
- 02:06 PM Support #22224 (Resolved): memory leak
- Hello
We have a fresh 'luminous' ( 12.2.0 ) (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc) ( install... - 05:16 AM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- Also affects fc27.
This is a gcc bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82155 so this is just a place-holde...
- 04:44 AM Bug #22220 (Resolved): osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at...
- Build failure for Jewel stable release on fc26 ...
11/21/2017
- 10:01 PM Backport #22128: "test/librados/TestCase.cc:485: Failure" in upgrade:jewel-x-luminous
- This was a luminous-only fix, so there's no master PR.
- 09:43 PM Backport #22213 (In Progress): luminous: On pg repair the primary is not favored as was intended
- 07:31 PM Backport #22213 (Resolved): luminous: On pg repair the primary is not favored as was intended
- https://github.com/ceph/ceph/pull/19083
- 09:36 PM Bug #22138: jewel: On pg repair the primary is not favored as was intended
- This jewel backport was effected by cherry-picking a single commit - 1ad05b1068ddd5d3312af45af1a60587200ddcd7 - into ...
- 09:33 PM Bug #21907: On pg repair the primary is not favored as was intended
- Just to recapitulate, the jewel backport was effected by cherry-picking a single commit - 1ad05b1068ddd5d3312af45af1a...
- 08:50 PM Bug #22135: bluestore: "Caught signal (Segmentation fault)" in fs-luminous-distro-basic-smithi
- Still not resolved
See run http://pulpito.ceph.com/yuriw-2017-11-21_17:55:44-upgrade:jewel-x-luminous-distro-basic-v...
- 08:44 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Hmm, nope, it's definitely something different if it's happening in Luminous.
- 01:35 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Greg Farnum wrote:
> What version are you running? This sounds like a known issue but it was fixed long ago.
Can ...
- 01:19 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Greg Farnum wrote:
> What version are you running? This sounds like a known issue but it was fixed long ago.
[roo...
- 01:17 AM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- What version are you running? This sounds like a known issue but it was fixed long ago.
- 08:50 AM Backport #22176: luminous: osd: pg limit on replica test failure
- @Prashant - if you can, for backports, please put the PR link in the issue description. (We do it this way because it...
- 01:10 AM Bug #22050 (Triaged): ERROR type entries of pglog do not update min_last_complete_ondisk, potenti...
- 01:10 AM Bug #22050: ERROR type entries of pglog do not update min_last_complete_ondisk, potentially ballo...
- This one's tricky; I'm not sure we *want* to trim based on error entries in the general case. If a broken client subm...
11/20/2017
- 11:08 PM Backport #22176: luminous: osd: pg limit on replica test failure
- https://github.com/ceph/ceph/pull/19059
- 11:24 AM Backport #22176 (In Progress): luminous: osd: pg limit on replica test failure
- 11:05 AM Backport #22176 (Resolved): luminous: osd: pg limit on replica test failure
- https://github.com/ceph/ceph/pull/19059
- 06:44 PM Bug #22135 (Resolved): bluestore: "Caught signal (Segmentation fault)" in fs-luminous-distro-basi...
- 04:34 PM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- Also in http://qa-proxy.ceph.com/teuthology/teuthology-2017-11-17_18:17:24-rados-jewel-distro-basic-smithi/1857527/te...
- 03:57 PM Bug #22201 (New): PG removal with ceph-objectstore-tool segfaulting
- I am using ceph 10.2.10. ceph-objectstore-tool is failing to delete a PG with the below segfault. The initial attemp...
- 02:55 PM Bug #21474 (Resolved): bluestore fsck took 224.778802 seconds to complete which caused "timed out...
- 05:50 AM Bug #21474: bluestore fsck took 224.778802 seconds to complete which caused "timed out waiting fo...
- http://pulpito.ceph.com/dmick-2017-11-17_15:07:33-smoke-wip-ceph-disk-fsid-distro-basic-smithi/
https://github.com...
- 02:55 PM Backport #22166 (Resolved): luminous: bluestore fsck took 224.778802 seconds to complete which ca...
- 05:54 AM Backport #22166 (In Progress): luminous: bluestore fsck took 224.778802 seconds to complete which...
- https://github.com/ceph/ceph/pull/19025
- 05:53 AM Backport #22166 (Resolved): luminous: bluestore fsck took 224.778802 seconds to complete which ca...
- https://github.com/ceph/ceph/pull/19025
- 02:16 PM Backport #22199 (In Progress): crushtool decompile prints bogus when osd < max_osd_id are missing
- 02:10 PM Backport #22199 (Resolved): crushtool decompile prints bogus when osd < max_osd_id are missing
https://github.com/ceph/ceph/pull/19039
- 02:08 PM Bug #22117 (Pending Backport): crushtool decompile prints bogus when osd < max_osd_id are missing
- 01:23 PM Backport #22128 (Resolved): "test/librados/TestCase.cc:485: Failure" in upgrade:jewel-x-luminous
- 01:22 PM Bug #22165: split pg not actually created, gets stuck in state unknown
- - pg 6.6 was never created yet
- osd went down
- split
- osd comes up
- osd gets pg_create on 6.6
- does not pro...
- 03:43 AM Bug #22165 (Resolved): split pg not actually created, gets stuck in state unknown
- pg is in state unknown.
the primary shows lots of...
- 11:21 AM Backport #22167 (In Progress): luminous: Various odd clog messages for mons
- 08:09 AM Backport #22167 (Fix Under Review): luminous: Various odd clog messages for mons
- https://github.com/ceph/ceph/pull/19031
- 07:56 AM Backport #22167 (In Progress): luminous: Various odd clog messages for mons
- 07:56 AM Backport #22167 (Resolved): luminous: Various odd clog messages for mons
- https://github.com/ceph/ceph/pull/19031
- 06:42 AM Bug #22113 (Pending Backport): osd: pg limit on replica test failure
- 04:12 AM Backport #22164 (In Progress): luminous: cluster [ERR] Unhandled exception from module 'balancer'...
- https://github.com/ceph/ceph/pull/19023
- 01:57 AM Backport #22164 (Resolved): luminous: cluster [ERR] Unhandled exception from module 'balancer' wh...
- https://github.com/ceph/ceph/pull/19023
11/19/2017
11/18/2017
- 03:44 PM Backport #21701 (Resolved): luminous: ceph-kvstore-tool does not call bluestore's umount when exit
- 03:44 PM Backport #21702 (Resolved): luminous: BlueStore::umount will crash when the BlueStore is opened b...
- 03:24 PM Bug #22144: *** Caught signal (Aborted) ** in thread thread_name:tp_peering
- Have also tried to start an OSD with noup set as suggested by a user on the ML.
However OSD still fails on the sam...
11/17/2017
- 03:43 PM Feature #22152 (New): Implement rocksdb cache perf counter
- Implement rocksdb cache perf counter
Maybe we can use ceph daemon osd.$id perf dump and add rocksdb cache perf cou...
- 02:25 PM Feature #21073: mgr: ceph/rgw: show hostnames and ports in ceph -s status output
- The line with ...
- 12:34 PM Backport #22150 (In Progress): luminous: bluestore: segv during unmount
- 12:33 PM Backport #22150: luminous: bluestore: segv during unmount
- https://github.com/ceph/ceph/pull/18983
- 10:01 AM Backport #22150 (Resolved): luminous: bluestore: segv during unmount
- https://github.com/ceph/ceph/pull/18983
- 11:57 AM Bug #21823: on_flushed: object ... obc still alive (ec + cache tiering)
- It seems that SharedLRU causes refs leak. My guess is that this is the same cause as http://tracker.ceph.com/issues/2...
- 11:54 AM Bug #21823: on_flushed: object ... obc still alive (ec + cache tiering)
- It seems that SharedLRU causes refs leak. My guess is that this is same cause as http://tracker.ceph.com/issues/20000...
- 04:34 AM Bug #22144: *** Caught signal (Aborted) ** in thread thread_name:tp_peering
- ...