Activity
From 01/21/2018 to 02/19/2018
02/19/2018
- 10:59 PM Bug #18178 (Won't Fix): Unfound objects lost after OSD daemons restarted
Reasons this is being closed:
1. PG repair is moving to user mode so on the fly object repair probably won't use r...
- 09:58 PM Feature #23045: mon: warn on slow ops in OpTracker
- I've assigned this to myself but I don't know when I can get to it, so if you want to work on this feel free to take it!
- 09:56 PM Feature #23045 (Resolved): mon: warn on slow ops in OpTracker
- The monitor has an OpTracker now, but it doesn't warn on slow ops the way the MDS or OSD do. We should enable that to...
- 09:52 PM Bug #23030: osd: crash during recovery with assert(p != recovery_info.ss.clone_snap)and assert(re...
- This snapshot assert looks like "Ceph Luminous - pg is down due to src/osd/SnapMapper.cc: 246: FAILED assert(r == -2)...
- 09:02 PM Feature #23044 (New): osd: use madvise with MADV_DONTDUMP to prevent cached data from being core ...
- Idea here is to reduce the size of the coredumps but also to prevent sensitive data from being leaked.
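As a rough illustration of the madvise approach (a hypothetical Python sketch, not the OSD's actual buffer code; MADV_DONTDUMP is Linux-only, hence the feature check):

```python
import mmap

def exclude_from_coredump(buf: mmap.mmap) -> bool:
    """Ask the kernel to omit this mapping from core dumps.

    Returns False where MADV_DONTDUMP is unavailable (non-Linux platforms).
    """
    advice = getattr(mmap, "MADV_DONTDUMP", None)
    if advice is None:
        return False
    buf.madvise(advice)  # kernel skips these pages when writing a core file
    return True

# Stand-in for a buffer of cached object data.
cache = mmap.mmap(-1, 16 * 4096)
cache.write(b"\xab" * len(cache))
excluded = exclude_from_coredump(cache)
```

The equivalent C call would be madvise(addr, len, MADV_DONTDUMP); MADV_DODUMP reverses it if a region later needs to appear in dumps again.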
- 02:55 PM Bug #22123 (Fix Under Review): osd: objecter sends out of sync with pg epochs for proxied ops
- https://github.com/ceph/ceph/pull/20484
I opted for the marginally more complex solution of cancelling multiple o...
02/17/2018
- 02:20 AM Bug #23031 (New): FAILED assert(!parent->get_log().get_missing().is_missing(soid))
- Using vstart to start 3 OSDs with -o filestore debug inject read err=1
Manually injectdataerr on all replicas of o...
- 12:37 AM Bug #23030 (Fix Under Review): osd: crash during recovery with assert(p != recovery_info.ss.clone...
- I've got some OSDs in a 5/3 EC pool crashing during recovery. The crash happens simultaneously on 5 to 10 OSDs, some...
- 12:36 AM Bug #22743: "RadosModel.h: 854: FAILED assert(0)" in upgrade:hammer-x-jewel-distro-basic-smithi
- I looked at it briefly and the output doesn't make any sense to me, but I don't have a lot of context around what the...
02/16/2018
- 11:49 PM Bug #22114 (Fix Under Review): mon: ops get stuck in "resend forwarded message to leader"
- https://github.com/ceph/ceph/pull/20467
- 02:14 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Greg Farnum wrote:
> Ummm, yep, that looks right to me at a quick glance! Can you submit a PR with that change? :)
...
- 02:04 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- Maybe not. you should check the code on github.com.
- 01:22 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- hongpeng lu wrote:
> The messages cannot be forwarded appropriately; you must change the code like this.
> [...]
...
- 01:17 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- The messages cannot be forwarded appropriately; you must change the code like this....
- 12:52 PM Bug #22114: mon: ops get stuck in "resend forwarded message to leader"
- We have the same problem on all our Luminous clusters. Any news regarding fix?
Most stuck messages in our case are o...
- 10:35 PM Bug #22743: "RadosModel.h: 854: FAILED assert(0)" in upgrade:hammer-x-jewel-distro-basic-smithi
- @nathan This doesn't have cache tier, so it would be a different issue. Maybe related to upgrade?
- 07:58 PM Bug #22743: "RadosModel.h: 854: FAILED assert(0)" in upgrade:hammer-x-jewel-distro-basic-smithi
- @David I guess this is a duplicate, too?
- 04:27 PM Bug #22743: "RadosModel.h: 854: FAILED assert(0)" in upgrade:hammer-x-jewel-distro-basic-smithi
- seems reproducible, see
http://pulpito.ceph.com/teuthology-2018-02-16_01:15:03-upgrade:hammer-x-jewel-distro-basic-...
- 10:01 PM Bug #23029 (New): osd does not handle eio on meta objects (e.g., osdmap)
- ...
- 05:00 PM Bug #22063 (Duplicate): "RadosModel.h: 1703: FAILED assert(!version || comp->get_version64() == v...
- 04:59 PM Bug #22064 (Duplicate): "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- 11:03 AM Backport #23024 (Resolved): luminous: thrash-eio + bluestore (hangs with unfound objects or read_...
- https://github.com/ceph/ceph/pull/20495
- 12:07 AM Bug #21218 (Pending Backport): thrash-eio + bluestore (hangs with unfound objects or read_log_and...
02/15/2018
- 06:53 PM Bug #22952: Monitor stopped responding after awhile
- Frank Li wrote:
> either 12.2.2 + the patch or 12.2.3 RC + the patch would be good, whichever is more convenient to ...
- 05:09 PM Bug #22996 (Fix Under Review): Snapset inconsistency is no longer detected
- 04:13 PM Bug #18746: monitors crashing ./include/interval_set.h: 355: FAILED assert(0) (jewel+kraken)
- I've got a cluster here where this issue is 100% reproducible when trying to delete snapshots. Let me know if we can ...
- 04:07 PM Bug #21833: Multiple asserts caused by DNE pgs left behind after lots of OSD restarts
- I'm also seeing this on 12.2.2. The crashing OSD has some bad PG which crashes it on startup. I first assumed the dis...
- 03:47 PM Backport #21871 (In Progress): luminous: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
- 03:45 PM Backport #21871 (Need More Info): luminous: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
- somewhat non-trivial, @Kefu could you take a look?
- 03:40 PM Backport #21871 (In Progress): luminous: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
- 03:39 PM Backport #21872 (Resolved): jewel: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
- 07:34 AM Support #23005 (New): Implement rados for Python library with some problem
- Hi all,
This is my first time here.
I use the ceph rados library to implement custom Python code, and ...
02/14/2018
- 10:02 PM Bug #18746: monitors crashing ./include/interval_set.h: 355: FAILED assert(0) (jewel+kraken)
- I'm seeing this on Luminous. Some kRBD clients are sending requests of death killing the active monitor.
No special ...
- 08:30 PM Bug #22462: mon: unknown message type 1537 in luminous->mimic upgrade tests
- @Kefu could you please take a look?
- 05:48 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- ok, I'll wait for 12.2.4 or a 12.2.3 + the patch then.
- 09:10 AM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Frank Li wrote:
> just curious, I saw this patch got merged to the master branch and has the target version of 12.2....
- 06:51 AM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- just curious, I saw this patch got merged to the master branch and has the target version of 12.2.3, does that mean i...
- 06:50 AM Bug #22952: Monitor stopped responding after awhile
- either 12.2.2 + the patch or 12.2.3 RC + the patch would be good, whichever is more convenient to build.
- 06:05 AM Bug #22996: Snapset inconsistency is no longer detected
- We also need this fix to include tests that happen in the QA suite to prevent a future regression! :)
(Presumably th...
- 03:39 AM Bug #22996: Snapset inconsistency is no longer detected
- 03:37 AM Bug #22996 (Resolved): Snapset inconsistency is no longer detected
The fix for #20243 required additional handling of snapset inconsistency. The Object info and snapset aren't part ...
02/13/2018
- 07:53 PM Bug #22994 (New): rados bench doesn't use --max-objects
- It would be useful for testing OSD caching behavior if rados bench respected the --max-objects parameter. It seems t...
- 07:30 PM Bug #22992: mon: add RAM usage (including avail) to HealthMonitor::check_member_health?
- Turned out it was just the monitor being thrashed (didn't realize we were doing that in kcephfs!): #22993
Still, m...
- 06:43 PM Bug #22992 (New): mon: add RAM usage (including avail) to HealthMonitor::check_member_health?
- I'm looking into several MON_DOWN failures from
http://pulpito.ceph.com/pdonnell-2018-02-13_17:49:41-kcephfs-wip-p...
- 06:12 PM Bug #21218: thrash-eio + bluestore (hangs with unfound objects or read_log_and_missing assert)
- https://github.com/ceph/ceph/pull/20410
- 04:04 AM Bug #21218 (Fix Under Review): thrash-eio + bluestore (hangs with unfound objects or read_log_and...
- 12:27 PM Bug #22063: "RadosModel.h: 1703: FAILED assert(!version || comp->get_version64() == version)" inr...
- Another jewel run with this bug:
* http://qa-proxy.ceph.com/teuthology/smithfarm-2018-02-06_21:07:15-rados-wip-jew...
- 06:52 AM Bug #22952: Monitor stopped responding after awhile
- Kefu Chai wrote:
> > I reproduced the issue in a separate cluster
>
> could you share the steps to reproduce this...
02/12/2018
- 10:35 PM Bug #21218: thrash-eio + bluestore (hangs with unfound objects or read_log_and_missing assert)
This assert can only happen in the following two cases:
osd debug verify missing on start = true. Used in t...
- 10:07 PM Bug #21218: thrash-eio + bluestore (hangs with unfound objects or read_log_and_missing assert)
- For kefu's run above,...
- 03:07 AM Bug #21218: thrash-eio + bluestore (hangs with unfound objects or read_log_and_missing assert)
- thrash-eio + bluestore
/a/kchai-2018-02-11_04:16:47-rados-wip-kefu-testing-2018-02-11-0959-distro-basic-mira/2181825...
- 10:05 AM Bug #22354 (Fix Under Review): v12.2.2 unable to create bluestore osd using ceph-disk
- https://github.com/ceph/ceph/pull/20400
- 09:52 AM Bug #22445: ceph osd metadata reports wrong "back_iface"
- John Spray wrote:
> Hmm, this could well be the first time anyone's really tested the IPv6 path here.
https://git...
- 09:27 AM Backport #22942 (In Progress): luminous: ceph osd force-create-pg cause all ceph-mon to crash and...
- 08:57 AM Bug #22952: Monitor stopped responding after awhile
- > I reproduced the issue in a separate cluster
could you share the steps to reproduce this issue? so i can try it ...
- 05:58 AM Bug #22949 (Rejected): ceph_test_admin_socket_output --all times out
- 05:57 AM Bug #22949: ceph_test_admin_socket_output --all times out
- thanks Brad. my bad, i thought the bug was in master also. closing this ticket, as the related PR is not yet merged.
02/10/2018
- 08:50 AM Bug #22949: ceph_test_admin_socket_output --all times out
- 08:39 AM Bug #22949: ceph_test_admin_socket_output --all times out
- This is not a problem with the test (although it highlights a deficiency with error reporting which I'll submit a PR ...
- 02:32 AM Bug #22882 (In Progress): Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- I finally realized that the op throttler *does* drop the global rwlock while waiting for throttle, so it at least doe...
02/09/2018
- 10:08 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Just FYI: using this new patch, the leader ceph-mon will hang once it is up and any kind of OSD command is run, like...
- 10:06 PM Bug #22952: Monitor stopped responding after awhile
- Frank Li wrote:
> Frank Li wrote:
> > I reproduced the issue in a separate cluster, it seems that whichever ceph-mo...
- 08:40 PM Bug #22952: Monitor stopped responding after awhile
- Frank Li wrote:
> I reproduced the issue in a separate cluster, it seems that whichever ceph-mon became the leader w...
- 08:35 PM Bug #22952: Monitor stopped responding after awhile
- I reproduced the issue in a separate cluster, it seems that whichever ceph-mon became the leader will be stuck, as I ...
- 07:50 PM Feature #22973 (Duplicate): log lines when hitting "pg overdose protection"
- You're right that it's bad! This will be fixed in the next luminous release after a belated backport finally happened...
- 02:15 PM Feature #22973 (Duplicate): log lines when hitting "pg overdose protection"
- After upgrading to Luminous we ran into a situation where 10% of our pgs remained unavailable, stuck in "activating" st...
- 04:24 PM Bug #22300 (Rejected): ceph osd reweightn command seems to change weight value
- the parameter of reweightn is an array of fixed point integers. and the integers are int(weight * 0x10000), where weig...
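That encoding can be sketched as follows (my own illustration of the int(weight * 0x10000) conversion described above, not code from Ceph):

```python
# "ceph osd reweightn" transmits each weight as a 16.16 fixed-point
# integer: int(weight * 0x10000). Decoding divides back by 0x10000,
# so any weight that is not an exact multiple of 1/65536 comes back
# slightly changed, which explains the apparent weight-value change.

def encode_weight(weight: float) -> int:
    return int(weight * 0x10000)

def decode_weight(fixed: int) -> float:
    return fixed / 0x10000

print(encode_weight(1.0))                   # 65536 (0x10000), round-trips exactly
print(decode_weight(encode_weight(0.731)))  # ~0.7310, but not exactly 0.731
```

So the round-trip error is bounded by 1/65536; values like 1.0 or 0.5 survive exactly, most others do not.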
- 02:20 PM Feature #22974 (Resolved): documentation - pg state table missing "activating" state
- "activating" is not listed in the pg state table:
http://docs.ceph.com/docs/master/rados/operations/pg-states/
...
- 06:41 AM Bug #22949: ceph_test_admin_socket_output --all times out
- Sure mate, added a patch to get better debugging and will test as soon as it's built.
- 12:24 AM Bug #22882: Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- Oh, and I had the LingerOp and Op conflated in my head a bit when looking at that before, but they are different.
... - 12:03 AM Bug #22882: Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- Jason, how did you establish the number of in-flight ops? I wonder if maybe it *did* have them but they weren't able ...
- 12:02 AM Bug #22882: Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- Okay, so presumably on resend you shouldn't need to grab op budget again, since it's already budgeted, right?
And ...
02/08/2018
- 02:37 PM Bug #22949: ceph_test_admin_socket_output --all times out
- Brad, i am not able to reproduce this issue. could you help take a look?
- 02:25 AM Bug #20086 (Resolved): LibRadosLockECPP.LockSharedDurPP gets EEXIST
- 02:24 AM Bug #22440 (Resolved): New pgs per osd hard limit can cause peering issues on existing clusters
- @Nick, if you think this issue deserves a different fix, please feel free to reopen this ticket
- 12:51 AM Bug #22848: Pull the cable,5mins later,Put back to the cable,pg stuck a long time ulitl to resta...
- Hi Josh Durgin,
1. They are both fibre-optic cables on our network card.
2. Log files can't be found yet, due to at...
02/07/2018
- 11:09 PM Bug #22220: osd/ReplicatedPG.h:1667:14: internal compiler error: in force_type_die, at dwarf2out....
- Fixed by gcc-7.3.1-2.fc26 gcc-7.3.1-2.fc27 in fc27
- 10:49 PM Bug #22440: New pgs per osd hard limit can cause peering issues on existing clusters
- Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/20204
merged
- 09:44 PM Bug #22848: Pull the cable,5mins later,Put back to the cable,pg stuck a long time ulitl to resta...
- Which cable are you pulling? Do you have logs from the monitors and osds? The default failure detection timeouts can ...
- 09:40 PM Bug #22916 (Duplicate): OSD crashing in peering
- 09:40 PM Bug #21287 (Duplicate): 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->i...
- 03:52 AM Bug #21287: 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->is_error())"
- see https://github.com/ceph/ceph/pull/16675
- 02:37 AM Bug #21287: 1 PG down, OSD fails with "FAILED assert(i->prior_version == last || i->is_error())"
- we hit this bug too in an EC 2+1 pool; I found one peer did not receive one piece of an op message sent from the primary osd, ...
- 06:12 PM Bug #22952: Monitor stopped responding after awhile
- here is where the first mon server is stuck, running mon_status hang:
[root@dl1-kaf101 frli]# ceph --admin-daemon /v...
- 06:06 PM Bug #22952 (Duplicate): Monitor stopped responding after awhile
- After a crash of ceph-mon in 12.2.2 and using a private build provided by ceph developers, the ceph-mon would come up...
- 06:06 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- https://tracker.ceph.com/issues/22952
ticket opened for ceph-mon not responding issue.
- 06:02 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- I'll open a separate ticket to track the monitor not responding issue. the fix for the force-create-pg issue is good.
- 06:01 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Kefu Chai wrote:
> [...]
>
>
> the cluster formed a quorum of [0,1,2,3,4] since 18:02:21. and it was not in pro...
- 05:58 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Kefu Chai wrote:
> [...]
>
> was any osd up when you were testing?
Yes, but they were in Booting State, all of...
- 06:56 AM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- ...
- 06:12 AM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- ...
- 04:05 PM Bug #22746 (Resolved): osd/common: ceph-osd process is terminated by the logratote task
- 03:33 PM Bug #22949 (Rejected): ceph_test_admin_socket_output --all times out
- http://pulpito.ceph.com/kchai-2018-02-07_01:22:25-rados-wip-kefu-testing-2018-02-06-1514-distro-basic-mira/2161301/
- 05:50 AM Backport #22942 (Resolved): luminous: ceph osd force-create-pg cause all ceph-mon to crash and un...
- https://github.com/ceph/ceph/pull/20399
- 05:01 AM Backport #22934 (Resolved): luminous: filestore journal replay does not guard omap operations
- https://github.com/ceph/ceph/pull/21547
- 12:54 AM Backport #22866 (In Progress): jewel: ceph osd df json output validation reported invalid numbers...
- https://github.com/ceph/ceph/pull/20344
02/06/2018
- 08:01 PM Bug #22350 (Resolved): nearfull OSD count in 'ceph -w'
- 07:49 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- so is there anything I can do to help recover the cluster?
- 06:50 AM Bug #22847 (Pending Backport): ceph osd force-create-pg cause all ceph-mon to crash and unable to...
- 01:23 AM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- please see attached logs for when the monitor was started, and then later got into the stuck mode.
I just replaced t...
- 04:54 PM Bug #22920: filestore journal replay does not guard omap operations
- lowering the priority since in practice we don't clone objects with omap on them.
- 04:53 PM Bug #22920 (Pending Backport): filestore journal replay does not guard omap operations
- 04:07 PM Bug #22656: scrub mismatch on bytes (cache pools)
- aah, just popped up on luminous: http://pulpito.ceph.com/yuriw-2018-02-05_23:07:16-rados-wip-yuri-testing-2018-02-05-...
- 02:24 PM Bug #20924: osd: leaked Session on osd.7
- /a/yuriw-2018-02-02_20:31:37-rados-wip_yuri_master_2.2.18-distro-basic-smithi/2143177
02/05/2018
- 09:06 PM Feature #4305: CRUSH: it should be possible use ssd as primary and hdd for replicas but still mak...
- Assuming @Patrick meant "RADOS" and not "rados-java"
- 08:58 PM Bug #21977: null map from OSDService::get_map in advance_pg
- Seems to persist; see
http://qa-proxy.ceph.com/teuthology/teuthology-2018-02-05_04:23:02-upgrade:jewel-x-lumino...
- 08:01 PM Feature #3586 (Resolved): CRUSH: separate library
- 07:53 PM Feature #3764: osd: async replicas
- 07:33 PM Feature #11046 (Resolved): osd: rados io hints improvements
- PR merged.
- 03:17 PM Bug #22920: filestore journal replay does not guard omap operations
- https://github.com/ceph/ceph/pull/20279
- 03:16 PM Bug #22920 (Resolved): filestore journal replay does not guard omap operations
- omap operations are replayed without checking the guards, which means that omap data can leak between objects that ar...
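A hypothetical sketch of what such a replay guard does (illustrative only; the names and structures here are my own, not FileStore's actual code):

```python
# Each object records the last journal sequence already reflected on disk.
# A replayed op is skipped when the object's state is newer than the op's
# sequence; applying omap ops without this check is how stale omap data
# could leak onto an object that was since deleted and recreated.

def replay(journal, objects):
    """journal: iterable of (seq, obj_name, op, payload) tuples."""
    for seq, obj, op, payload in journal:
        state = objects.setdefault(obj, {"applied_seq": 0, "omap": {}})
        if seq <= state["applied_seq"]:
            continue  # guard: already applied, do not replay
        if op == "omap_set":
            state["omap"].update(payload)
        state["applied_seq"] = seq
```

Under this model, replaying an old omap_set after a newer transaction is a no-op, which is the behavior the omap path was missing.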
- 12:05 PM Bug #20924: osd: leaked Session on osd.7
- /a/yuriw-2018-02-02_20:31:37-rados-wip_yuri_master_2.2.18-distro-basic-smithi/2143177/remote/smithi111/log/valgrind/o...
- 09:37 AM Support #22917 (New): mon keeps on crashing ( 12.2.2 )
- mon keeps on crashing ( 0> 2018-02-05 00:22:49.915541 7f6d0a781700 -1 *** Caught signal (Aborted) **
in thread 7f6d...
- 08:49 AM Bug #22916 (Duplicate): OSD crashing in peering
- Bluestore OSD crashed with a stacktrace:...
- 03:52 AM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- > Should I be updating the ceph-osd to the same patched version ??
no need to update ceph-osd.
> but very soon,...
- 01:41 AM Bug #22668 (Resolved): osd/ExtentCache.h: 371: FAILED assert(tid == 0)
02/04/2018
- 07:29 AM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Also, the monitors came up and formed a quorum, but very soon they would all stop responding again, and then I fi...
02/03/2018
- 09:39 PM Backport #21239 (In Progress): jewel: test_health_warnings.sh can fail
- 07:01 PM Backport #22450 (Resolved): luminous: Visibility for snap trim queue length
- 06:37 PM Bug #22409 (Resolved): ceph_objectstore_tool: no flush before collection_empty() calls; ObjectSto...
- 06:37 PM Backport #22707 (Resolved): luminous: ceph_objectstore_tool: no flush before collection_empty() c...
- 06:36 PM Bug #21147 (Resolved): Manager daemon x is unresponsive. No standby daemons available
- 06:35 PM Backport #22399 (Resolved): luminous: Manager daemon x is unresponsive. No standby daemons available
- 07:18 AM Backport #22906 (Rejected): jewel: bluestore: New OSD - Caught signal - bstore_kv_sync (throttle ...
- 07:17 AM Bug #22539: bluestore: New OSD - Caught signal - bstore_kv_sync
- Adding jewel backport on the theory that (1) Jenkins CI is using modern glibc/kernel to run make check on jewel, brea...
- 12:45 AM Backport #22389 (Resolved): luminous: ceph-objectstore-tool: Add option "dump-import" to examine ...
- 12:43 AM Bug #22837 (In Progress): discover_all_missing() not always called during activating
- Part of https://github.com/ceph/ceph/pull/20220
- 12:41 AM Bug #18162 (Resolved): osd/ReplicatedPG.cc: recover_replicas: object added to missing set for bac...
- 12:40 AM Backport #22013 (Resolved): jewel: osd/ReplicatedPG.cc: recover_replicas: object added to missing...
02/02/2018
- 11:08 PM Backport #22707: luminous: ceph_objectstore_tool: no flush before collection_empty() calls; Objec...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19967
merged
- 11:01 PM Backport #22389: luminous: ceph-objectstore-tool: Add option "dump-import" to examine an export
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19487
merged
- 11:00 PM Backport #22399: luminous: Manager daemon x is unresponsive. No standby daemons available
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19501
merged
- 09:15 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Frank Li wrote:
> I've updated all the ceph-mon with the RPMs from the patch repo, they came up fine, and I've resta...
- 09:14 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- I've updated all the ceph-mon with the RPMs from the patch repo, they came up fine, and I've restarted the OSDs, but ...
- 08:29 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Just for future operational reference, is there any way to revert the monitor map to a previous state in the case of ...
- 06:22 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- Please note the crash happened on the monitor, not the OSD; the OSDs all stayed up, but all the monitors crashed.
- 06:21 PM Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
- -4> 2018-01-31 22:47:22.942381 7fc641d0b700 1 -- 10.102.52.37:6789/0 <== mon.0 10.102.52.37:6789/0 0 ==== log(1 ...
- 06:09 PM Bug #22847 (Fix Under Review): ceph osd force-create-pg cause all ceph-mon to crash and unable to...
- https://github.com/ceph/ceph/pull/20267
- 05:46 PM Bug #22847 (Need More Info): ceph osd force-create-pg cause all ceph-mon to crash and unable to c...
- Can you attach the entire osd log for the crashed osd? (In particular, we need to see what assertion failed.) Thanks!
- 07:32 PM Bug #22902 (Resolved): src/osd/PG.cc: 6455: FAILED assert(0 == "we got a bad state machine event")
http://pulpito.ceph.com/dzafman-2018-02-01_09:46:36-rados-wip-zafman-testing-distro-basic-smithi/2138315
I think...
- 07:23 PM Bug #22834 (Resolved): Primary ends up in peer_info which isn't supposed to be there
- 09:48 AM Bug #22257 (Resolved): mon: mgrmaps not trimmed
- 09:48 AM Backport #22258 (Resolved): mon: mgrmaps not trimmed
- 09:47 AM Backport #22402 (Resolved): luminous: osd: replica read can trigger cache promotion
- 08:05 AM Backport #22807 (Resolved): luminous: "osd pool stats" shows recovery information bugly
- 07:54 AM Bug #22715 (Resolved): log entries weirdly zeroed out after 'osd pg-temp' command
- 07:54 AM Backport #22744 (Resolved): luminous: log entries weirdly zeroed out after 'osd pg-temp' command
- 05:46 AM Documentation #22843: [doc][luminous] the configuration guide still contains osd_op_threads and d...
- For downstream Red Hat products, you should use the Red Hat bugzilla to report bugs. This is the upstream bug tracker...
- 05:15 AM Backport #22013 (In Progress): jewel: osd/ReplicatedPG.cc: recover_replicas: object added to miss...
- 12:17 AM Bug #22882: Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- When I saw the test running for 4 hours my first thought was that the cluster was unhealthy -- but all OSDs were up a...
02/01/2018
- 11:26 PM Bug #22117 (Resolved): crushtool decompile prints bogus when osd < max_osd_id are missing
- 11:25 PM Backport #22199 (Resolved): crushtool decompile prints bogus when osd < max_osd_id are missing
- 11:24 PM Bug #22113 (Resolved): osd: pg limit on replica test failure
- 11:24 PM Backport #22176 (Resolved): luminous: osd: pg limit on replica test failure
- 11:24 PM Bug #21907 (Resolved): On pg repair the primary is not favored as was intended
- 11:23 PM Backport #22213 (Resolved): luminous: On pg repair the primary is not favored as was intended
- 11:10 PM Backport #22258: mon: mgrmaps not trimmed
- Kefu Chai wrote:
> mgrmonitor does not trim old mgrmaps. these can accumulate forever.
>
> https://github.com/ce...
- 11:08 PM Backport #22402: luminous: osd: replica read can trigger cache promotion
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/19499
merged
- 11:04 PM Bug #22673 (Resolved): osd checks out-of-date osdmap for DESTROYED flag on start
- 11:03 PM Backport #22761 (Resolved): luminous: osd checks out-of-date osdmap for DESTROYED flag on start
- 11:01 PM Backport #22761: luminous: osd checks out-of-date osdmap for DESTROYED flag on start
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20068
merged
- 11:00 PM Backport #22807: luminous: "osd pool stats" shows recovery information bugly
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20150
merged
- 10:59 PM Bug #22419 (Resolved): Pool Compression type option doesn't apply to new OSD's
- 10:59 PM Backport #22502 (Resolved): luminous: Pool Compression type option doesn't apply to new OSD's
- 09:04 PM Backport #22502: luminous: Pool Compression type option doesn't apply to new OSD's
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20106
merged
- 10:43 PM Bug #22887 (Duplicate): osd/ECBackend.cc: 2202: FAILED assert((offset + length) <= (range.first.g...
- ...
- 10:29 PM Bug #22882: Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- It's not quite that simple; ops on a failed OSD or closed session get moved into the homeless_session and at a quick ...
- 06:52 PM Bug #22882: Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- http://qa-proxy.ceph.com/teuthology/jdillaman-2018-02-01_08:21:33-rbd-wip-jd-testing-luminous-distro-basic-smithi/213...
- 06:51 PM Bug #22882 (Resolved): Objecter deadlocked on op budget while holding rwlock in ms_handle_reset()
- ...
- 09:06 PM Bug #22715: log entries weirdly zeroed out after 'osd pg-temp' command
- merged https://github.com/ceph/ceph/pull/20042
- 06:09 PM Bug #22881 (Resolved): scrub interaction with HEAD boundaries and snapmapper repair is broken
- symptom:...
- 11:45 AM Bug #22842: (luminous) ceph-disk prepare of simple filestore failed with 'Unable to set partition...
- John Spray wrote:
> I would suspect that something is strange about the disk (non-GPT partition table perhaps?), and... - 11:11 AM Bug #22842: (luminous) ceph-disk prepare of simple filestore failed with 'Unable to set partition...
- I would suspect that something is strange about the disk (non-GPT partition table perhaps?), and you're getting less-...
- 11:43 AM Backport #22449: jewel: Visibility for snap trim queue length
- presumably non-trivial backport; assigning to the developer
- 11:40 AM Feature #22448 (Pending Backport): Visibility for snap trim queue length
- 10:49 AM Backport #22866 (Resolved): jewel: ceph osd df json output validation reported invalid numbers (-...
- https://github.com/ceph/ceph/pull/20344
- 08:12 AM Bug #21750 (Resolved): scrub stat mismatch on bytes
- The code is gone.
- 05:42 AM Bug #22848: Pull the cable,5mins later,Put back to the cable,pg stuck a long time ulitl to resta...
why is the pg status always peering? I am fairly sure the monitors and osds are both ok.
those pg state machine should wo...
- 05:32 AM Bug #22848 (New): Pull the cable,5mins later,Put back to the cable,pg stuck a long time ulitl to...
- Hi all,
We have a 3-node ceph cluster, version 10.2.10.
It is a newly installed environment with prosessional rpms from downlo...
- during the course of trouble-shooting an osd issue, I ran this command:
ceph osd force-create-pg 1.ace11d67
then al...
01/31/2018
- 10:39 PM Bug #22656: scrub mismatch on bytes (cache pools)
- We just aren't assigning that much priority to cache tiering.
- 10:27 PM Bug #22752 (Fix Under Review): snapmapper inconsistency, crash on luminous
- 03:32 PM Bug #22440: New pgs per osd hard limit can cause peering issues on existing clusters
- https://github.com/ceph/ceph/pull/20204
- 01:53 PM Documentation #22843 (Won't Fix): [doc][luminous] the configuration guide still contains osd_op_t...
- The configuration guide for RHCS 3 still mentions osd_op_threads, which is no longer part of the RHCS 3 code.
... - 01:51 PM Bug #22842 (New): (luminous) ceph-disk prepare of simple filestore failed with 'Unable to set par...
- Hi,
can't create a simple filestore with the help of ceph-disk under Ubuntu Trusty; please have a look at this...
<p... - 12:50 PM Bug #21496: doc: Manually editing a CRUSH map, Word 'type' missing.
- Yes, I believe so.
- 12:48 PM Bug #21496: doc: Manually editing a CRUSH map, Word 'type' missing.
- Sorry Is it fine now?
- 12:44 PM Bug #21496: doc: Manually editing a CRUSH map, Word 'type' missing.
- No, the line should be:...
- 12:34 PM Bug #21496: doc: Manually editing a CRUSH map, Word 'type' missing.
- Made the changes. Please tell me whether they are correct and review them. Sorry if I am wrong.
- 12:03 PM Bug #22142 (Resolved): mon doesn't send health status after paxos service is inactive temporarily
- 12:03 PM Backport #22421 (Resolved): mon doesn't send health status after paxos service is inactive tempor...
- 04:10 AM Bug #22837 (Resolved): discover_all_missing() not always called during activating
Sometimes discover_all_missing() isn't called so we don't get a complete picture of misplaced objects. This makes ...
- 12:44 AM Backport #22164: luminous: cluster [ERR] Unhandled exception from module 'balancer' while running...
- Prashant D wrote:
> https://github.com/ceph/ceph/pull/19023
merged
- 12:44 AM Backport #22167: luminous: Various odd clog messages for mons
- Prashant D wrote:
> https://github.com/ceph/ceph/pull/19031
merged
- 12:43 AM Backport #22199: crushtool decompile prints bogus when osd < max_osd_id are missing
- Jan Fajerski wrote:
> https://github.com/ceph/ceph/pull/19039
merged
- 12:41 AM Backport #22176: luminous: osd: pg limit on replica test failure
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19059
merged
- 12:40 AM Backport #22213: luminous: On pg repair the primary is not favored as was intended
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/19083
merged
- 12:13 AM Bug #22834: Primary ends up in peer_info which isn't supposed to be there
Workaround
https://github.com/ceph/ceph/pull/20189
01/30/2018
- 11:43 PM Bug #22834 (Resolved): Primary ends up in peer_info which isn't supposed to be there
rados/singleton/{all/lost-unfound.yaml msgr-failures/few.yaml msgr/random.yaml objectstore/bluestore-bitmap.yaml ra...
- 04:01 PM Bug #22440: New pgs per osd hard limit can cause peering issues on existing clusters
- Will backport https://github.com/ceph/ceph/pull/18614 to luminous; it helps make this status more visible to users.
01/29/2018
- 09:26 PM Bug #22656: scrub mismatch on bytes (cache pools)
- /a/sage-2018-01-29_18:07:24-rados-wip-sage-testing-2018-01-29-0927-distro-basic-smithi/2122957
description: rados/th...
- 08:01 PM Bug #22743: "RadosModel.h: 854: FAILED assert(0)" in upgrade:hammer-x-jewel-distro-basic-smithi
- I don't think a bug in a hammer binary during an upgrade test to jewel is an urgent problem at this point?
- 03:15 PM Bug #22201: PG removal with ceph-objectstore-tool segfaulting
- We're getting close to converting the OSDs in this cluster to Bluestore. If you would like any tests to be run on th...
- 02:56 PM Bug #22668: osd/ExtentCache.h: 371: FAILED assert(tid == 0)
- simpler fix: https://github.com/ceph/ceph/pull/20169
- 02:38 PM Bug #22440: New pgs per osd hard limit can cause peering issues on existing clusters
- First, perhaps this will help to make these issues more visible: https://github.com/ceph/ceph/pull/20167
Second, i...
- 10:23 AM Bug #20086 (Fix Under Review): LibRadosLockECPP.LockSharedDurPP gets EEXIST
- https://github.com/ceph/ceph/pull/20161
- 07:28 AM Bug #20086: LibRadosLockECPP.LockSharedDurPP gets EEXIST
- /a/kchai-2018-01-28_09:53:35-rados-wip-kefu-testing-2018-01-27-2356-distro-basic-mira/2120659...
- 01:15 AM Bug #21471 (Resolved): mon osd feature checks for osdmap flags and require-osd-release fail if 0 ...
01/28/2018
- 11:59 PM Backport #22807 (In Progress): luminous: "osd pool stats" shows recovery information bugly
- https://github.com/ceph/ceph/pull/20150
- 12:31 AM Backport #22818 (In Progress): jewel: repair_test fails due to race with osd start
01/27/2018
- 08:35 AM Backport #21872 (In Progress): jewel: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
- 06:49 AM Bug #22662 (Pending Backport): ceph osd df json output validation reported invalid numbers (-nan)...
01/26/2018
- 06:01 PM Backport #22818 (Resolved): jewel: repair_test fails due to race with osd start
- https://github.com/ceph/ceph/pull/20146
- 05:54 PM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
- +1 for null, which is an English word and hence far more comprehensible than "NaN", which is what I would call "Progr...
- 05:42 PM Bug #21577 (Resolved): ceph-monstore-tool --readable mode doesn't understand FSMap, MgrMap
- 05:41 PM Backport #21636 (Resolved): luminous: ceph-monstore-tool --readable mode doesn't understand FSMap...
- 05:21 PM Bug #20705 (Pending Backport): repair_test fails due to race with osd start
- Seen in Jewel so marking for backport
http://qa-proxy.ceph.com/teuthology/dzafman-2018-01-25_13:41:04-rados-wip-za...
- 05:16 PM Backport #21872: jewel: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore
- This backport is needed as seen in:
http://qa-proxy.ceph.com/teuthology/dzafman-2018-01-25_13:41:04-rados-wip-zafm...
- 11:55 AM Bug #18239 (New): nan in ceph osd df again
- 11:54 AM Bug #18239 (Duplicate): nan in ceph osd df again
- Duplicate of #22662.
- 08:00 AM Backport #22808 (Rejected): jewel: "osd pool stats" shows recovery information bugly
- 08:00 AM Backport #22807 (Resolved): luminous: "osd pool stats" shows recovery information bugly
- https://github.com/ceph/ceph/pull/20150
- 07:30 AM Bug #22727 (Pending Backport): "osd pool stats" shows recovery information bugly
01/25/2018
- 07:59 PM Bug #20243 (Resolved): Improve size scrub error handling and ignore system attrs in xattr checking
- 07:59 PM Backport #21051 (Resolved): luminous: Improve size scrub error handling and ignore system attrs i...
- 07:58 PM Bug #21382 (Resolved): Erasure code recovery should send additional reads if necessary
- 07:56 PM Bug #22145 (Resolved): PG stuck in recovery_unfound
- 07:56 PM Bug #20059 (Resolved): miscounting degraded objects
- 07:55 PM Backport #22724 (Resolved): luminous: miscounting degraded objects
- 07:33 PM Backport #22724 (Fix Under Review): luminous: miscounting degraded objects
- Included in https://github.com/ceph/ceph/pull/20055
- 07:55 PM Backport #22387 (Resolved): luminous: PG stuck in recovery_unfound
- 07:35 PM Backport #22387 (Fix Under Review): luminous: PG stuck in recovery_unfound
- 07:54 PM Backport #21653 (Resolved): luminous: Erasure code recovery should send additional reads if neces...
- 07:53 PM Backport #22069 (Resolved): luminous: osd/ReplicatedPG.cc: recover_replicas: object added to miss...
- 04:18 PM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
- Chang Liu wrote:
> Enrico Labedzki wrote:
> > Chang Liu wrote:
> > > Enrico Labedzki wrote:
> > > > Chang Liu wro...
- 03:45 PM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
- Enrico Labedzki wrote:
> Chang Liu wrote:
> > Enrico Labedzki wrote:
> > > Chang Liu wrote:
> > > > Sage Weil wro...
- 09:40 AM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
- Chang Liu wrote:
> Enrico Labedzki wrote:
> > Chang Liu wrote:
> > > Sage Weil wrote:
> > > > 1. it's not valid j...
- 09:30 AM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
- Enrico Labedzki wrote:
> Chang Liu wrote:
> > Sage Weil wrote:
> > > 1. it's not valid json.. Formatter shouldn't ...
- 08:52 AM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
- Chang Liu wrote:
> Sage Weil wrote:
> > 1. it's not valid json.. Formatter shouldn't allow it
> > 2. we should hav...
- 06:36 AM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
- Sage Weil wrote:
> 1. it's not valid json.. Formatter shouldn't allow it
> 2. we should have a valid value (or 0) t...
- 04:02 AM Bug #22662: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
This bug has been fixed by https://github.com/ceph/ceph/pull/13531. We should backport it to Jewel.
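On the invalid-JSON point raised in this thread: the JSON grammar has no NaN or Infinity literal, so output containing `-nan` is simply unparseable by strict consumers, and emitting `null` (or `0`) keeps it valid. A minimal Python sketch of the same trade-off (illustrative only, not Ceph's Formatter code; `sanitize` is a hypothetical helper):

```python
import json

# Python's default encoder emits the non-standard token NaN,
# much like a formatter that prints a raw float would.
lax = json.dumps({"utilization": float("nan")})
assert lax == '{"utilization": NaN}'  # not valid per RFC 8259

# A strict encoder refuses to serialize it at all...
try:
    json.dumps({"utilization": float("nan")}, allow_nan=False)
except ValueError:
    pass  # "Out of range float values are not JSON compliant"

# ...so the practical fix is to substitute a valid value up front.
def sanitize(x):
    # NaN is the only float value unequal to itself.
    return None if x != x else x

strict = json.dumps({"utilization": sanitize(float("nan"))})
assert strict == '{"utilization": null}'
```

This mirrors the "+1 for null" suggestion above: a JSON consumer can handle `null` with no special casing, whereas `NaN`/`-nan` breaks validation outright.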
- 04:08 PM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
As Josh said it seems easier to trigger in Jewel. This is based on my attempt to reproduce in master.
All 50 ma...
- 02:22 AM Bug #22064: "RadosModel.h: 865: FAILED assert(0)" in rados-jewel-distro-basic-smithi
- Looking through the logs more with David, we found this sequence of events in 1946610:
1) osd.5 gets a write to ob...
- 12:45 PM Bug #22266: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
- Master PR for second round of backporting: https://github.com/ceph/ceph/pull/19780
Luminous backport PR: https://g...
- 08:44 AM Bug #22656: scrub mismatch on bytes (cache pools)
- Happened here as well: http://pulpito.ceph.com/smithfarm-2018-01-24_19:46:55-rados-wip-smithfarm-testing-distro-basic...
- 04:24 AM Backport #22794 (In Progress): jewel: heartbeat peers need to be updated when a new OSD added int...
- https://github.com/ceph/ceph/pull/20108
- 04:14 AM Backport #22794 (Resolved): jewel: heartbeat peers need to be updated when a new OSD added into a...
- https://github.com/ceph/ceph/pull/20108
- 04:13 AM Backport #22793 (Rejected): osd: sends messages to marked-down peers
- I wanted to backport the fix for #18004, not this one.
- 04:12 AM Backport #22793 (Rejected): osd: sends messages to marked-down peers
- the async osdmap updates introduce a new problem:
- handle_osd_map map X marks down osd Y
- pg thread uses map X-...
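The race described above can be sketched abstractly: the map-handling thread publishes a newer epoch that marks a peer down, while a worker thread still holding the older epoch keeps that peer in its target set and would happily message it. A toy Python model (not Ceph code; `ClusterMap` and the function names are invented for illustration):

```python
import threading

class ClusterMap:
    """Toy stand-in for an osdmap: an epoch plus the set of up OSDs."""
    def __init__(self, epoch, up_osds):
        self.epoch = epoch
        self.up = frozenset(up_osds)

latest = ClusterMap(epoch=10, up_osds={1, 2, 3})

def pg_worker(cached_map, results):
    # The PG thread decides message targets from the epoch it sampled earlier.
    results.append([osd for osd in (2, 3) if osd in cached_map.up])

def handle_osd_map():
    # Meanwhile the map thread publishes epoch X+1 marking osd.3 down.
    global latest
    latest = ClusterMap(epoch=11, up_osds={1, 2})

cached = latest            # PG thread samples the map at epoch 10
handle_osd_map()           # map X+1 marks osd.3 down
results = []
t = threading.Thread(target=pg_worker, args=(cached, results))
t.start()
t.join()
assert results[0] == [2, 3]   # worker would still message osd.3
assert 3 not in latest.up     # even though the latest map says it is down
```

The fix direction implied by the entry is for the sender to re-check the current epoch (or for the messenger to drop connections to peers the latest map marks down) before dispatch.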
01/24/2018
- 09:18 PM Backport #21636: luminous: ceph-monstore-tool --readable mode doesn't understand FSMap, MgrMap
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18754
merged - 09:10 PM Bug #22329: mon: Valgrind: mon (Leak_DefinitelyLost, Leak_IndirectlyLost)
- New one:
/ceph/teuthology-archive/yuriw-2018-01-23_20:26:59-multimds-wip-yuri-testing-2018-01-22-1653-luminous-tes...
- 07:56 PM Backport #22502: luminous: Pool Compression type option doesn't apply to new OSD's
- Master commit was reverted - redoing the backport.
- 06:12 PM Bug #22624: filestore: 3180: FAILED assert(0 == "unexpected error"): error (2) No such file or di...
- Disregard my previous comment; different error message for the same assert was unfortunately buried in the logs. Sorr...
- 06:04 PM Bug #22624: filestore: 3180: FAILED assert(0 == "unexpected error"): error (2) No such file or di...
- FWIW, I am currently reproducing this quite reliably on my dev env, on a quite outdated version of master (cbe78ae629...
- 01:55 PM Bug #21407 (Resolved): backoff causes out of order op
- 01:54 PM Backport #21794 (Resolved): luminous: backoff causes out of order op
- 11:23 AM Backport #22450 (In Progress): luminous: Visibility for snap trim queue length
- https://github.com/ceph/ceph/pull/20098
01/23/2018
- 11:57 PM Bug #21566 (Resolved): OSDService::recovery_need_sleep read+updated without locking
- 11:57 PM Backport #21697 (Resolved): luminous: OSDService::recovery_need_sleep read+updated without locking
- 11:06 PM Backport #21697: luminous: OSDService::recovery_need_sleep read+updated without locking
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18753
merged - 11:56 PM Backport #21785 (Resolved): luminous: OSDMap cache assert on shutdown
- 11:07 PM Backport #21785: luminous: OSDMap cache assert on shutdown
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18749
merged - 11:55 PM Bug #21845 (Resolved): Objecter::_send_op unnecessarily constructs costly hobject_t
- 11:55 PM Backport #21921 (Resolved): luminous: Objecter::_send_op unnecessarily constructs costly hobject_t
- 11:09 PM Backport #21921: luminous: Objecter::_send_op unnecessarily constructs costly hobject_t
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18745
merged - 11:54 PM Backport #21922 (Resolved): luminous: Objecter::C_ObjectOperation_sparse_read throws/catches exce...
- 11:10 PM Backport #21922: luminous: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -...
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18744
merged - 11:25 PM Bug #21818 (Resolved): ceph_test_objectstore fails ObjectStore/StoreTest.Synthetic/1 (filestore) ...
- 11:25 PM Backport #21924 (Resolved): luminous: ceph_test_objectstore fails ObjectStore/StoreTest.Synthetic...
- 11:10 PM Backport #21924: luminous: ceph_test_objectstore fails ObjectStore/StoreTest.Synthetic/1 (filesto...
- Shinobu Kinjo wrote:
> https://github.com/ceph/ceph/pull/18742
merged - 08:30 PM Backport #22423 (Closed): luminous: osd: initial minimal efforts to clean up PG interface
- I was able to cleanly backport http://tracker.ceph.com/issues/22069 without this large change.
- 11:01 AM Bug #22351: Couldn't init storage provider (RADOS)
- No, I set it to Luminous based on the request by theanalyst in https://github.com/ceph/ceph/pull/20023. I'm fine with...
- 10:24 AM Bug #22351: Couldn't init storage provider (RADOS)
- @Brad Assigning to you and leaving the backport field on "luminous" (but feel free to zero it out if it's enough to m...
- 10:14 AM Bug #21833: Multiple asserts caused by DNE pgs left behind after lots of OSD restarts
- @David I can only guess that this is not reproducible in master and that's why it requires a luminous-only fix. Could...
- 10:01 AM Backport #22761 (In Progress): luminous: osd checks out-of-date osdmap for DESTROYED flag on start
- 09:40 AM Backport #22761 (Resolved): luminous: osd checks out-of-date osdmap for DESTROYED flag on start
- https://github.com/ceph/ceph/pull/20068
- 07:48 AM Bug #22673 (Pending Backport): osd checks out-of-date osdmap for DESTROYED flag on start
- 06:38 AM Bug #22727: "osd pool stats" shows recovery information bugly
- Need to backport this to jewel and luminous, but it dates back at least to 9.2.0. See also http://lists.ceph.com/piperm...
- 06:32 AM Bug #22727 (Fix Under Review): "osd pool stats" shows recovery information bugly
01/22/2018
- 11:50 PM Bug #22419 (Pending Backport): Pool Compression type option doesn't apply to new OSD's
- 08:12 AM Bug #22419 (Fix Under Review): Pool Compression type option doesn't apply to new OSD's
- https://github.com/ceph/ceph/pull/20044
- 11:46 PM Bug #22711 (Resolved): qa/workunits/cephtool/test.sh fails with test_mon_cephdf_commands: expect...
- 12:53 PM Bug #22711 (Fix Under Review): qa/workunits/cephtool/test.sh fails with test_mon_cephdf_commands:...
- https://github.com/ceph/ceph/pull/20046
- 11:06 AM Bug #22711: qa/workunits/cephtool/test.sh fails with test_mon_cephdf_commands: expect_false test...
- the weirdness of this issue is that some PGs are mapped to a single OSD:...
- 03:13 AM Bug #22711: qa/workunits/cephtool/test.sh fails with test_mon_cephdf_commands: expect_false test...
- the curr_object_copies_rate value in PGMap.cc dump_object_stat_sum is .5, which is counteracting the 2x replication f...
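For intuition on that observation, here is a rough sketch of the arithmetic with hypothetical numbers (not the actual PGMap.cc code; `dumped_bytes` is an invented helper): with 2x replication the raw-usage multiplier should be 2.0, but a copies rate of 0.5 cancels it exactly, so the dumped stats look as if the pool were unreplicated.

```python
def dumped_bytes(stored_bytes, replication_factor, curr_object_copies_rate):
    # Simplified model of how a per-pool stat could be scaled before dumping.
    return stored_bytes * replication_factor * curr_object_copies_rate

stored = 100
# With a copies rate of 0.5, the 2x replication factor is cancelled out:
assert dumped_bytes(stored, 2, 0.5) == 100   # reported as if unreplicated
# With a rate of 1.0, the replicated footprint shows up as expected:
assert dumped_bytes(stored, 2, 1.0) == 200
```

That cancellation would explain why the test's expect_false check stops tripping: the 0.5 factor hides the 2x replication in the reported numbers.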
- 07:04 PM Bug #22752: snapmapper inconsistency, crash on luminous
- https://github.com/ceph/ceph/pull/20040
- 07:03 PM Bug #22752 (Resolved): snapmapper inconsistency, crash on luminous
- from Stefan Priebe on ceph-devel ML:...
- 06:47 PM Backport #22387 (In Progress): luminous: PG stuck in recovery_unfound
Included with another dependent backport as https://github.com/ceph/ceph/pull/20055
- 12:40 PM Backport #22387 (Need More Info): luminous: PG stuck in recovery_unfound
- Non-trivial backport
- 02:27 PM Feature #22750 (Fix Under Review): libradosstriper conditional compile
- -https://github.com/ceph/ceph/pull/18197-
- 01:21 PM Feature #22750 (Resolved): libradosstriper conditional compile
- Currently libradosstriper is a hard dependency of the rados CLI tool.
Please add a "WITH_LIBRADOSSTRIPER" compile-...
- 02:16 PM Bug #22746 (Fix Under Review): osd/common: ceph-osd process is terminated by the logratote task
- 11:51 AM Bug #22746 (Resolved): osd/common: ceph-osd process is terminated by the logratote task
- 1. Steps to reproduce:
(1) Step 1:
Open terminal 1 and
prepare the command: "killall -q -1 ceph-mon ...
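For background on why signal 1 matters in this reproduction: logrotate commonly delivers SIGHUP (`kill -1`) so a daemon reopens its log files, but a process that has not installed a SIGHUP handler is terminated by the signal's default action. A small Python sketch of the defensive handler (illustrative only, not the ceph-osd code):

```python
import os
import signal

reopen_requested = False

def on_sighup(signum, frame):
    # Instead of dying, note that log files should be reopened.
    global reopen_requested
    reopen_requested = True

# Without this line, SIGHUP's default disposition kills the process.
signal.signal(signal.SIGHUP, on_sighup)

os.kill(os.getpid(), signal.SIGHUP)  # what "killall -1 <daemon>" delivers
assert reopen_requested  # the process survived and flagged a log reopen
```

This is consistent with the bug title: if the daemon's SIGHUP handling is missing or lost, the logrotate task's signal terminates it.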
- Why does the dmClock algorithm in Ceph attribute recovery read and write OPs to osd_op_queue_mclock_osd_sub, so that whe...
- 12:41 PM Backport #22724 (Need More Info): luminous: miscounting degraded objects
- 12:41 PM Backport #22724: luminous: miscounting degraded objects
- David, while you're doing this one, can you include https://tracker.ceph.com/issues/22387 as well?
- 12:23 PM Support #22680 (Resolved): mons segmentation faults New 12.2.2 cluster
- 03:04 AM Bug #22715 (Pending Backport): log entries weirdly zeroed out after 'osd pg-temp' command
- 03:04 AM Backport #22744 (In Progress): luminous: log entries weirdly zeroed out after 'osd pg-temp' command
- https://github.com/ceph/ceph/pull/20042
- 03:03 AM Backport #22744 (Resolved): luminous: log entries weirdly zeroed out after 'osd pg-temp' command
- https://github.com/ceph/ceph/pull/20042
01/21/2018
- 08:29 PM Bug #22715 (Resolved): log entries weirdly zeroed out after 'osd pg-temp' command
- 06:56 PM Bug #22743 (New): "RadosModel.h: 854: FAILED assert(0)" in upgrade:hammer-x-jewel-distro-basic-sm...
- Run: http://pulpito.ceph.com/teuthology-2018-01-19_01:15:02-upgrade:hammer-x-jewel-distro-basic-smithi/
Job: 2088826...