| # | Project | Tracker | Status | Priority | Subject | Assignee | Updated | Category | Target version |
|---|---------|---------|--------|----------|---------|----------|---------|----------|----------------|
| 24421 | Ceph | Bug | New | Normal | async messenger thread CPU high, OSD service not normal until restart | | 06/20/2018 01:44 AM | OSD | Ceph - v12.2.6 |
| 23329 | Messengers | Bug | New | Normal | async messenger lost session during IO performance testing, does not recover until restart | | 03/12/2019 11:25 PM | AsyncMessenger | Ceph - v12.2.5 |
| 23272 | Linux kernel client | Bug | New | Normal | switch port down, cephfs kernel client lost session, blocked and does not recover until port up | | 04/18/2018 09:21 AM | fs/ceph | Ceph - v12.2.5 |
| 23234 | fs | Bug | Won't Fix | Normal | mds: damage detected while opening remote dentry | Zheng Yan | 04/09/2018 08:39 PM | multimds | |
| 23199 | rgw | Bug | Resolved | Normal | radosgw coredump in RGWGC::process | | 03/01/2019 01:22 PM | | Ceph - v10.2.11 |
| 23198 | rgw | Bug | Triaged | Normal | osd coredump in ClassHandler::ClassMethod::exec | Matt Benjamin | 03/08/2018 07:19 PM | | Ceph - v10.2.11 |
| 23185 | Ceph | Bug | Resolved | Normal | ceph: decode mds dump cache bytes failed | Kefu Chai | 04/04/2018 06:17 PM | | |
| 23116 | fs | Bug | Resolved | Normal | cephfs-journal-tool: add time to event list | | 03/06/2018 11:34 PM | Introspection/Control | Ceph - v13.0.0 |
| 23082 | Messengers | Bug | New | Normal | msg/Async drops message, IO blocked for a long time | Haomai Wang | 03/12/2019 11:25 PM | AsyncMessenger | Ceph - v10.2.11 |
| 22848 | RADOS | Bug | New | Normal | Pull the cable, 5 minutes later put the cable back; pg stuck for a long time until ceph-osd restart | | 02/08/2018 12:51 AM | | Ceph - v10.2.11 |
| 22523 | fs | Bug | Closed | Normal | Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position | Jos Collin | 01/31/2018 12:49 AM | | Ceph - v10.2.11 |
| 21362 | fs | Bug | Need More Info | Normal | cephfs ec data pool + windows fio: ceph cluster degraded for several hours, osd up and down | | 12/12/2017 02:33 AM | | |
| 21262 | RADOS | Bug | Need More Info | Normal | cephfs ec data pool, many osds marked down | | 12/22/2017 01:11 PM | | Ceph - v12.2.0 |
| 21255 | bluestore | Bug | Closed | Normal | stop bluestore nvme osd, sgdisk hangs, sync operation hangs | | 11/29/2017 11:20 PM | | |
| 21211 | RADOS | Bug | Need More Info | Normal | 12.2.0, cephfs (meta replica 2, data ec 2+1), ceph-osd coredump | | 11/29/2017 05:16 PM | | Ceph - v12.2.0 |