Activity
From 04/28/2020 to 05/27/2020
05/27/2020
- 10:34 PM Bug #45733 (Fix Under Review): osd-scrub-repair.sh: SyntaxError: invalid syntax
- 10:29 PM Bug #45733 (Resolved): osd-scrub-repair.sh: SyntaxError: invalid syntax
- /a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085557...
- 09:21 PM Bug #45660: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- ...
- 06:59 AM Bug #45660: osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- /a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085557
- 09:05 PM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- Lowering severity since we haven't seen it in two weeks.
- 08:37 PM Bug #45619 (Triaged): Health check failed: Reduced data availability: PG_AVAILABILITY
- http://pulpito.front.sepia.ceph.com/yuvalif-2020-05-19_14:52:46-rgw:verify-fix-amqp-urls-with-vhosts-distro-basic-smi...
- 06:35 PM Bug #45619: Health check failed: Reduced data availability: PG_AVAILABILITY
- Neha Ojha wrote:
> Seen in the rados suite: /a/nojha-2020-05-21_19:33:40-rados-wip-32601-distro-basic-smithi/5077159...
- 01:58 PM Bug #44981 (Pending Backport): rados/test_envlibrados_for_rocksdb.sh build failure (seen in nauti...
- 08:22 AM Bug #45721 (Resolved): CommandFailedError: Command failed (workunit test rados/test_python.sh) FA...
- /a/yuriw-2020-05-24_19:30:40-rados-wip-yuri-master_5.24.20-distro-basic-smithi/5088170...
- 07:02 AM Bug #43888: osd/osd-bench.sh 'tell osd.N bench' hang
- /a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085549
- 02:20 AM Bug #43888: osd/osd-bench.sh 'tell osd.N bench' hang
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083462
- 06:55 AM Bug #45661: valgrind issue: UninitValue in ProtocolV2
- /a/yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085545
/a/yuriw-2020-05-23_15:15:01-...
05/26/2020
- 03:11 PM Bug #45695: librados: significant memory consumption
- David Disseldorp wrote:
> I've tested with in-memory logging disabled via the client ceph.conf:
>
> [...]
>
> ...
- 11:33 AM Bug #45706 (New): Memory usage in buffer_anon showing unbounded growth in osds on EC pool. (14.2.9)
- Hi,
Re these threads in the mailing list: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/DPBVNJQX...
- 07:12 AM Bug #45588 (Resolved): test_envlibrados_for_rocksdb.sh fails on master
- 04:44 AM Bug #45702 (Fix Under Review): PGLog::read_log_and_missing: ceph_assert(miter == missing.get_item...
- /a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083350...
05/25/2020
- 05:01 PM Backport #45677 (In Progress): nautilus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (s...
- 04:58 PM Backport #45676 (In Progress): octopus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (se...
- 02:28 PM Bug #43825 (Resolved): osd stuck down
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:28 PM Bug #44062 (Resolved): LibRadosWatchNotify.WatchNotify failure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #44439 (Resolved): osd/osd-scrub-repair.sh fails: scrub/osd-scrub-repair.sh:698: TEST_repair_...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #44518 (Resolved): osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_clean timeout
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:27 PM Bug #44532 (Resolved): nautilus: FAILED ceph_assert(head.version == 0 || e.version.version > head...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:26 PM Bug #45266 (Resolved): follower monitors can grow beyond memory target
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:51 AM Bug #45698 (New): PrioritizedQueue: messages in normal queue
- if(i->second.front().first < i->second.num_tokens())
{
//never go in, if cost equal to num_tokens(), which valu...
- 11:09 AM Backport #44686 (Resolved): nautilus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_clea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35047
m...
- 11:09 AM Backport #45224 (Resolved): nautilus: LibRadosWatchNotify.WatchNotify failure
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35049
m...
- 11:09 AM Backport #44689 (Resolved): nautilus: osd/osd-scrub-repair.sh fails: scrub/osd-scrub-repair.sh:69...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35048
m...
- 11:08 AM Backport #43919 (Resolved): nautilus: osd stuck down
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35024
m...
- 11:08 AM Backport #44841 (Resolved): nautilus: nautilus: FAILED ceph_assert(head.version == 0 || e.version...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34957
m...
- 11:06 AM Backport #44490 (Resolved): nautilus: lz4 compressor corrupts data when buffers are unaligned
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35004
m...
- 11:06 AM Backport #45391 (Resolved): nautilus: follower monitors can grow beyond memory target
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34916
m...
- 11:06 AM Backport #45359 (Resolved): nautilus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34882
m...
- 10:55 AM Bug #45695: librados: significant memory consumption
- I've tested with in-memory logging disabled via the client ceph.conf:...
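The config snippet referenced above is elided in this export; as a hedged illustration only (not the reporter's actual file), disabling in-memory logging in a client ceph.conf usually means setting the second number of the `<file-level>/<memory-level>` pair to 0 for the relevant subsystems. The subsystems listed here are illustrative:

```ini
[client]
# log levels use <file-level>/<memory-level> syntax;
# a memory level of 0 disables the in-memory log ring buffer
debug ms = 0/0
debug rados = 0/0
debug objecter = 0/0
```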
- 10:27 AM Bug #45695: librados: significant memory consumption
- I should have mentioned that my client ceph.conf is minimal, with only the _mon host_ and _keyring_ options set.
- 10:22 AM Bug #45695 (New): librados: significant memory consumption
- I did some valgrind massif heap profiling with the following simple librados (octopus 15.2.1) program:...
- 02:28 AM Bug #45690 (New): pg_interval_t::check_new_interval is overly generous about guessing when EC PGs...
One EC PG is stuck at peering+down forever; the problem occurs through the following steps:
Suppose the pg's acting set...
05/24/2020
- 10:09 PM Bug #44981: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- Nathan Cutler wrote:
>
> New -> In Progress -> Fix Under Review -> Pending Backport
>
> This, I thought, was th...
- 08:59 PM Bug #44981: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- Brad Hubbard wrote:
> Sorry Nathan, Could you explain why you changed this from 'In Progress' to 'Fix Under Review'?...
- 09:04 PM Backport #45677 (Resolved): nautilus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen...
- https://github.com/ceph/ceph/pull/35237
- 09:04 PM Backport #45676 (Resolved): octopus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen ...
- https://github.com/ceph/ceph/pull/35236
- 09:03 PM Backport #45673 (Resolved): octopus: qa: powercycle: install task runs twice with double unwind c...
- https://github.com/ceph/ceph/pull/35441
- 07:55 PM Bug #45606 (Fix Under Review): build_incremental_map_msg missing incremental map while snaptrim o...
- 04:00 PM Bug #22052: ceph-mon: possible Leak in OSDMap::build_simple_optioned
- ...
05/23/2020
- 09:56 PM Bug #45561 (Pending Backport): rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nau...
- 03:11 PM Bug #24531: Mimic MONs have slow/long running ops
- We had this issue yesterday. We had a broken mon cluster which I was able to repair by shutting down all mons, scalin...
05/22/2020
- 06:54 PM Backport #44686: nautilus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_clean timeout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35047
merged
- 06:47 PM Backport #45224: nautilus: LibRadosWatchNotify.WatchNotify failure
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35049
merged
- 06:46 PM Backport #44689: nautilus: osd/osd-scrub-repair.sh fails: scrub/osd-scrub-repair.sh:698: TEST_rep...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35048
merged
- 06:40 PM Backport #43919: nautilus: osd stuck down
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35024
merged
- 06:39 PM Backport #44841: nautilus: nautilus: FAILED ceph_assert(head.version == 0 || e.version.version > ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34957
merged
- 04:53 PM Bug #45661 (Resolved): valgrind issue: UninitValue in ProtocolV2
- ...
- 04:32 PM Bug #20960: ceph_test_rados: mismatched version (due to pg import/export)
- Has started appearing more frequently recently - /a/nojha-2020-05-21_19:33:40-rados-wip-32601-distro-basic-smithi/507...
- 04:30 PM Bug #45660 (Resolved): osd-scrub-repair.sh:TEST_corrupt_scrub_replicated failed
- ...
- 04:24 PM Bug #45647: "ceph --cluster ceph --log-early osd last-stat-seq osd.0" times out due to msgr-failu...
- /a/nojha-2020-05-21_19:33:40-rados-wip-32601-distro-basic-smithi/5076944/
- 03:36 AM Bug #45647 (New): "ceph --cluster ceph --log-early osd last-stat-seq osd.0" times out due to msgr...
- ...
- 02:40 PM Bug #45619 (New): Health check failed: Reduced data availability: PG_AVAILABILITY
- Seen in the rados suite: /a/nojha-2020-05-21_19:33:40-rados-wip-32601-distro-basic-smithi/5077159/
- 02:35 PM Bug #45619: Health check failed: Reduced data availability: PG_AVAILABILITY
- We've been seeing a lot of this in the rgw suite over the last month or two.
- 02:37 PM Bug #45298 (Resolved): cram: balancer/misplaced.t fails with 'Error EAGAIN: Some objects (0.00891...
- This was a result of d4fbaf7ea959fd945857abd327271a97fb1da631, which only applies to master.
- 04:41 AM Bug #45298 (Pending Backport): cram: balancer/misplaced.t fails with 'Error EAGAIN: Some objects ...
- 04:40 AM Feature #43324 (Resolved): Make zlib windowBits configurable for compression
- 04:30 AM Bug #45612 (Pending Backport): qa: powercycle: install task runs twice with double unwind causing...
- 04:03 AM Bug #44595: cache tiering: Error: oid 48 copy_from 493 returned error code -2
- ...
- 03:59 AM Bug #24613: luminous: rest/test.py fails with expected 200, got 400
- /a/nojha-2020-05-21_19:42:29-rados-wip-29089-luminous-distro-basic-smithi/5077334
05/21/2020
- 09:32 PM Bug #44981: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- Sorry Nathan, Could you explain why you changed this from 'In Progress' to 'Fix Under Review'? The PR has been review...
- 04:43 PM Bug #44981 (Fix Under Review): rados/test_envlibrados_for_rocksdb.sh build failure (seen in nauti...
- 05:34 PM Bug #45614 (Resolved): qa/workunits/cephtool/test.sh failures due to dropping obsolete cache tier...
- 04:41 PM Bug #45614: qa/workunits/cephtool/test.sh failures due to dropping obsolete cache tiering options
- Backport will be handled via #45514
- 02:52 AM Bug #45619: Health check failed: Reduced data availability: PG_AVAILABILITY
It's a new thing. Also, before whitelisting things, we're better off figuring out why we should whitelist it.
05/20/2020
- 09:13 PM Bug #45606: build_incremental_map_msg missing incremental map while snaptrim or backfilling
- Nothing to worry about, this message should just be a dout instead.
- 09:08 PM Bug #45619 (Need More Info): Health check failed: Reduced data availability: PG_AVAILABILITY
- Is this something that has started appearing recently? If not, probably just needs whitelisting.
- 07:34 AM Bug #45619 (Resolved): Health check failed: Reduced data availability: PG_AVAILABILITY
- multiple RGW tests are failing on different branches, with:...
- 07:29 PM Bug #20960: ceph_test_rados: mismatched version (due to pg import/export)
- /a/nojha-2020-05-19_23:54:26-rados-wip-cephadm-test-distro-basic-smithi/5070712
- 03:20 PM Backport #44490: nautilus: lz4 compressor corrupts data when buffers are unaligned
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35004
merged
- 03:19 PM Backport #45391: nautilus: follower monitors can grow beyond memory target
- Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/34916
merged
- 03:19 PM Backport #45359: nautilus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- Dan Hill wrote:
> https://github.com/ceph/ceph/pull/34882
merged
- 10:42 AM Bug #45611: crimson: centos 8 vstart failure
- caught segfault at points
1. run with the following option in gdb: ...
- 10:39 AM Bug #45611: crimson: centos 8 vstart failure
- How to reproduce:
1. launch a centos 8 container and build vstart with -DWITH_SEASTAR=ON
2. start a vstart base...
- 02:17 AM Bug #44981: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- Thanks Nathan.
- 02:16 AM Bug #44981 (In Progress): rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- 12:34 AM Bug #45615 (Resolved): api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.Watc...
- ...
- 12:26 AM Bug #45614: qa/workunits/cephtool/test.sh failures due to dropping obsolete cache tiering options
- /a/nojha-2020-05-19_00:53:41-rados-wip-revert-34894-distro-basic-smithi/5068016
- 12:24 AM Bug #45614 (Resolved): qa/workunits/cephtool/test.sh failures due to dropping obsolete cache tier...
- Caused by https://github.com/ceph/ceph/pull/35015
05/19/2020
- 10:30 PM Bug #45612 (Fix Under Review): qa: powercycle: install task runs twice with double unwind causing...
- 10:24 PM Bug #45612 (Resolved): qa: powercycle: install task runs twice with double unwind causing fatal e...
- Continuation of #45387. My fix was incomplete.
http://pulpito.ceph.com/teuthology-2020-04-25_03:09:02-powercycle-m...
- 07:23 PM Bug #45611: crimson: centos 8 vstart failure
- caught some memory leaks using core dumps, but they seem to be related to asan/libc...
- 02:37 PM Bug #45611 (New): crimson: centos 8 vstart failure
- ...
- 11:44 AM Bug #45606 (Resolved): build_incremental_map_msg missing incremental map while snaptrim or backfi...
- Hello,
I'm not sure if this is an issue or not. On one cluster I see the following messages, most times when snapt...
- 09:34 AM Backport #44370: nautilus: msg/async: the event center is blocked by rdma construct conection for...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34780
m...
- 02:33 AM Backport #44370 (Resolved): nautilus: msg/async: the event center is blocked by rdma construct co...
- 02:54 AM Bug #45588: test_envlibrados_for_rocksdb.sh fails on master
- http://pulpito.ceph.com/kchai-2020-05-19_02:54:14-rados:singleton-wip-kefu2-testing-2020-05-13-1200-distro-basic-smithi/
05/18/2020
- 03:42 PM Bug #45588: test_envlibrados_for_rocksdb.sh fails on master
- https://github.com/facebook/rocksdb/pull/6855
- 03:41 PM Bug #45588 (Resolved): test_envlibrados_for_rocksdb.sh fails on master
- ...
- 02:44 PM Backport #44370: nautilus: msg/async: the event center is blocked by rdma construct conection for...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34780
merged
05/15/2020
- 03:38 PM Backport #44413: nautilus: FTBFS on s390x in openSUSE Build Service due to presence of -O2 in RPM...
- c8af73e19ab02617411fe689ff1b98b8f4d096ca did not make v14.2.9, and it will be in v14.2.10.
- 11:24 AM Bug #45561 (In Progress): rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nautilus)
- 06:59 AM Bug #45561 (Fix Under Review): rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nau...
- 06:40 AM Bug #45561 (Resolved): rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nautilus)
- http://qa-proxy.ceph.com/teuthology/bhubbard-2020-05-13_06:50:26-rados-wip-nautilus-badone-testing-2-distro-basic-smi...
05/14/2020
- 12:13 AM Bug #44715 (Need More Info): common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list...
- I am not able to reproduce this failure on octopus or on master:
http://pulpito.ceph.com/nojha-2020-05-13_17:20:4...
05/13/2020
- 09:09 PM Bug #45533 (Resolved): cls/queue: fix empty markers when listing entries
- 12:53 PM Bug #45533: cls/queue: fix empty markers when listing entries
- already fixed in: https://github.com/ceph/ceph/pull/34788
- 12:51 PM Bug #45533 (Resolved): cls/queue: fix empty markers when listing entries
- markers are sometimes empty when listing entries
- 09:02 PM Backport #44489 (In Progress): mimic: lz4 compressor corrupts data when buffers are unaligned
- 03:43 PM Backport #45224 (In Progress): nautilus: LibRadosWatchNotify.WatchNotify failure
- 03:42 PM Backport #44689 (In Progress): nautilus: osd/osd-scrub-repair.sh fails: scrub/osd-scrub-repair.sh...
- 03:40 PM Backport #44686 (In Progress): nautilus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_c...
- 02:49 AM Bug #44981 (Fix Under Review): rados/test_envlibrados_for_rocksdb.sh build failure (seen in nauti...
05/12/2020
- 06:07 PM Bug #45292: pg autoscaler merging issue
- Sorry for the delay. We are working to get a reservation on one of our internal labs so we can recreate the issue and...
- 03:02 PM Bug #37875: osdmaps aren't being cleaned up automatically on healthy cluster
- nautilus backport tracked by https://tracker.ceph.com/issues/45402
- 03:01 PM Bug #37875 (Duplicate): osdmaps aren't being cleaned up automatically on healthy cluster
- 02:30 PM Backport #43919 (In Progress): nautilus: osd stuck down
- 02:29 PM Backport #43919: nautilus: osd stuck down
- first attempted backport - https://github.com/ceph/ceph/pull/33156 - was closed
- 02:29 PM Backport #43919 (New): nautilus: osd stuck down
05/11/2020
- 09:54 PM Bug #45356: nautilus: rados/upgrade/mimic-x-singleton failures due to mon_client_directed_command...
- https://github.com/ceph/ceph/pull/34884 merged
- 04:41 PM Backport #44490 (In Progress): nautilus: lz4 compressor corrupts data when buffers are unaligned
- 02:23 PM Bug #44827 (Resolved): osd: incorrect read bytes stat in SPARSE_READ
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:21 PM Bug #45075 (Resolved): scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:51 PM Backport #45392 (Resolved): octopus: follower monitors can grow beyond memory target
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34917
m...
- 12:04 PM Bug #44959 (Closed): health warning: pgs not deep-scrubbed in time although it was in time
- Aaaha, that was it. Thank you very much!
I've set the @osd deep scrub interval@ under @[osd]@ so the mgr did not g... - 11:53 AM Bug #44959: health warning: pgs not deep-scrubbed in time although it was in time
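Per the exchange above, the warning persisted because the interval lived only under [osd] while the mgr, which evaluates the health check, still saw the default. A hedged ceph.conf sketch of the fix (the value shown is an illustrative example, not the reporter's setting):

```ini
# Put the interval where the mgr can see it too: [global]
# (or additionally [mgr]) rather than only [osd].
[global]
osd deep scrub interval = 1209600   # example: 14 days, in seconds
```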
- Have you changed the values on the MGR? mgr checks that and if mgr still has defaults, it will issue warnings..
@c...
- 03:28 AM Bug #45298: cram: balancer/misplaced.t fails with 'Error EAGAIN: Some objects (0.008913) are degr...
- /a/yuriw-2020-05-04_17:54:17-rados-wip-yuri5-testing-2020-05-04-1554-nautilus-distro-basic-smithi/5022793
05/10/2020
- 02:39 AM Bug #45457 (Pending Backport): CEPH Graylog Logging Missing "host" Field
- Hello,
I have tried sending CEPH logs to Graylog with the following configuration:
mon_cluster_log_to_graylog =...
05/08/2020
- 07:28 PM Backport #45392: octopus: follower monitors can grow beyond memory target
- Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/34917
merged
- 03:06 PM Backport #45039 (Resolved): octopus: mon: reset min_size when changing pool size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34528
m...
- 03:06 PM Backport #44836 (Resolved): octopus: librados mon_command (mgr) command hang
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34416
m...
- 12:10 PM Bug #24531: Mimic MONs have slow/long running ops
- Something similar, Ceph v14.2.0....
- 09:03 AM Bug #45390: FreeBSD: osdmap decode and encode does not give the same OSDMap
- Willem Jan Withagen wrote:
> Added code to dump JSON tree for both cases, and then it seems both trees are equal.
>... - 07:28 AM Bug #45441 (Resolved): rados: Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in clust...
/a/yuriw-2020-05-05_15:20:13-rados-wip-yuri8-testing-2020-05-04-2117-octopus-distro-basic-smithi/5024888...
- 05:06 AM Backport #44841 (In Progress): nautilus: nautilus: FAILED ceph_assert(head.version == 0 || e.vers...
05/07/2020
- 10:37 PM Backport #45039: octopus: mon: reset min_size when changing pool size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34528
merged
- 10:33 PM Backport #44836: octopus: librados mon_command (mgr) command hang
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34416
merged
- 08:54 PM Bug #45353: FAILED ceph_assert(pg_upmap.empty())
- Haven't been able to reproduce so far post https://github.com/ceph/ceph/pull/34748
- 06:18 PM Backport #44841 (Need More Info): nautilus: nautilus: FAILED ceph_assert(head.version == 0 || e.v...
- non-trivial because of https://github.com/ceph/ceph/pull/33910/commits/d4b1cc61e6526d325fd759f98e13e5a10523f5f7
- 03:47 PM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- http://pulpito.ceph.com/swagner-2020-05-07_09:50:39-rados-wip-swagner3-testing-2020-05-06-1727-distro-basic-smithi/50...
- 06:27 AM Bug #45424 (New): api_watch_notify_pp: [ FAILED ] LibRadosWatchNotifyECPP.WatchNotify watch_not...
- /a/yuriw-2020-05-05_15:20:13-rados-wip-yuri8-testing-2020-05-04-2117-octopus-distro-basic-smithi/5024839...
- 06:19 AM Bug #45423 (Pending Backport): api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
- /a/yuriw-2020-05-05_15:20:13-rados-wip-yuri8-testing-2020-05-04-2117-octopus-distro-basic-smithi/5024839...
05/06/2020
- 10:19 PM Bug #45390: FreeBSD: osdmap decode and encode does not give the same OSDMap
- Added code to dump JSON tree for both cases, and then it seems both trees are equal.
So the serialized OSDMap contai... - 09:06 PM Bug #45381 (Need More Info): unfound objects in erasure-coded CephFS
- Is cache tiering involved here too? Do you have any osd logs from the same time?
- 08:05 AM Backport #45392 (In Progress): octopus: follower monitors can grow beyond memory target
- 06:20 AM Backport #45392 (Resolved): octopus: follower monitors can grow beyond memory target
- https://github.com/ceph/ceph/pull/34917
- 08:02 AM Backport #45391 (In Progress): nautilus: follower monitors can grow beyond memory target
- 06:18 AM Backport #45391 (Resolved): nautilus: follower monitors can grow beyond memory target
- https://github.com/ceph/ceph/pull/34916
05/05/2020
- 08:49 PM Bug #45390 (Closed): FreeBSD: osdmap decode and encode does not give the same OSDMap
The problem occurs in both Octopus and master.
This is a simplified version of part of test_compression.cc:
<pre...
- 07:14 PM Backport #45357: octopus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- Will do, thanks for the explanation.
- 01:12 PM Backport #45357: octopus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- @Dan - please leave "Target Version" empty when you stage your backports.
The name "Target Version" is a bit of a ... - 05:06 PM Bug #45388 (New): Insufficient monitor logging to diagnose downed OSDs
- We just had a case where in a Ceph Luminous cluster the monitor forced newly started OSDs to commit suicide. Communic...
- 04:30 PM Backport #44468 (Resolved): nautilus: mon: Get session_map_lock before remove_session
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34677
m...
- 04:26 PM Backport #45314 (Resolved): octopus: scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34830
m...
- 04:25 PM Backport #45041 (Resolved): octopus: osd: incorrect read bytes stat in SPARSE_READ
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34809
m...
- 04:25 PM Backport #44842 (Resolved): octopus: nautilus: FAILED ceph_assert(head.version == 0 || e.version....
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34807
m...
- 04:25 PM Backport #44685 (Resolved): octopus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_clean...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34806
m...
- 04:49 AM Bug #45266 (Pending Backport): follower monitors can grow beyond memory target
05/04/2020
- 09:04 PM Bug #45298 (Fix Under Review): cram: balancer/misplaced.t fails with 'Error EAGAIN: Some objects ...
- 08:42 PM Backport #44468: nautilus: mon: Get session_map_lock before remove_session
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34677
merged
- 08:13 PM Backport #45314: octopus: scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34830
merged
- 08:13 PM Backport #45041: octopus: osd: incorrect read bytes stat in SPARSE_READ
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34809
merged
- 08:11 PM Backport #44842: octopus: nautilus: FAILED ceph_assert(head.version == 0 || e.version.version > h...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34807
merged
- 08:10 PM Backport #44685: octopus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_clean timeout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34806
merged
- 04:18 PM Bug #45076: rados: Sharded OpWQ drops suicide_grace after waiting for work
- This issue is also present in Luminous, which is EOL now that Octopus has released.
Should I open a tracker/pr fo...
- 03:58 PM Bug #45381 (Need More Info): unfound objects in erasure-coded CephFS
- Encountered something weird with cephfs today that shouldn't happen
Setup:
* Ceph 14.2.8
* 8 OSD servers, 8 SS...
- 03:25 PM Bug #44286: Cache tiering shows unfound objects after OSD reboots
- this occasionally comes up on the mailing list as well. it's not reproducible on my test setup, though :(
- 03:21 PM Bug #45356 (Fix Under Review): nautilus: rados/upgrade/mimic-x-singleton failures due to mon_clie...
- 10:55 AM Feature #43324 (Fix Under Review): Make zlib windowBits configurable for compression
- 09:14 AM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- http://pulpito.ceph.com/swagner-2020-04-30_14:11:16-rados-wip-swagner2-testing-2020-04-29-1247-distro-basic-smithi/50...
05/02/2020
- 02:15 AM Backport #45358 (In Progress): mimic: rados: Sharded OpWQ drops suicide_grace after waiting for work
- 02:15 AM Backport #45358 (New): mimic: rados: Sharded OpWQ drops suicide_grace after waiting for work
- 01:16 AM Backport #45358 (In Progress): mimic: rados: Sharded OpWQ drops suicide_grace after waiting for work
- 01:13 AM Backport #45358 (Rejected): mimic: rados: Sharded OpWQ drops suicide_grace after waiting for work
- https://github.com/ceph/ceph/pull/34883
- 02:14 AM Backport #45359 (In Progress): nautilus: rados: Sharded OpWQ drops suicide_grace after waiting fo...
- 02:13 AM Backport #45359 (New): nautilus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- 01:17 AM Backport #45359 (In Progress): nautilus: rados: Sharded OpWQ drops suicide_grace after waiting fo...
- 01:13 AM Backport #45359 (Resolved): nautilus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- https://github.com/ceph/ceph/pull/34882
- 02:11 AM Backport #45357 (In Progress): octopus: rados: Sharded OpWQ drops suicide_grace after waiting for...
- 02:10 AM Backport #45357 (New): octopus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- 01:14 AM Backport #45357 (In Progress): octopus: rados: Sharded OpWQ drops suicide_grace after waiting for...
- 01:13 AM Backport #45357 (Resolved): octopus: rados: Sharded OpWQ drops suicide_grace after waiting for work
- https://github.com/ceph/ceph/pull/34881
05/01/2020
- 11:49 PM Bug #45076 (Pending Backport): rados: Sharded OpWQ drops suicide_grace after waiting for work
- 10:49 PM Bug #45353: FAILED ceph_assert(pg_upmap.empty())
- Damn, missed that thanks Neha. Let me run this again on current master.
- 09:06 PM Bug #45353: FAILED ceph_assert(pg_upmap.empty())
- We have removed jewel from thrash-old-clients in https://github.com/ceph/ceph/pull/34748. We should check if this fai...
- 06:56 PM Bug #45353: FAILED ceph_assert(pg_upmap.empty())
- 'rados/thrash-old-clients/{0-size-min-size-overrides/2-size-2-min-size.yaml
1-install/jewel.yaml backoff/normal.yaml...
- 05:11 AM Bug #45353 (New): FAILED ceph_assert(pg_upmap.empty())
- /a/bhubbard-2020-05-01_01:03:08-rados-wip-yuri-testing-2020-04-24-1941-master-distro-basic-smithi/5003239...
- 04:52 PM Bug #45356 (Resolved): nautilus: rados/upgrade/mimic-x-singleton failures due to mon_client_direc...
- ...
04/30/2020
- 06:55 AM Bug #45298: cram: balancer/misplaced.t fails with 'Error EAGAIN: Some objects (0.008913) are degr...
- This looks similar....
- 06:48 AM Bug #45345: tasks/rados.py fails with "psutil.NoSuchProcess: psutil.NoSuchProcess process no long...
- /a/teuthology-2020-04-26_07:01:02-rados-master-distro-basic-smithi/4985956
- 06:39 AM Bug #45345 (Can't reproduce): tasks/rados.py fails with "psutil.NoSuchProcess: psutil.NoSuchProce...
- /a/yuriw-2020-04-28_21:58:13-rados-wip-yuri-testing-2020-04-24-1941-master-distro-basic-smithi/4995279
Looking at ...
04/29/2020
- 11:53 PM Bug #45266 (Fix Under Review): follower monitors can grow beyond memory target
- 02:04 PM Bug #45266: follower monitors can grow beyond memory target
- Taking ownership of this.
-Sridhar
- 11:36 PM Bug #45298 (In Progress): cram: balancer/misplaced.t fails with 'Error EAGAIN: Some objects (0.00...
- No success in reproducing this so far: http://pulpito.ceph.com/nojha-2020-04-29_18:44:55-rados:singleton-nomsgr-maste...
- 09:26 PM Bug #45240: Not able to export objects using ceph-objectstore-tool
- I don't think this is a bug in the ceph-objectstore-tool but more a case of export failing when it encounters corrupt...
- 09:17 PM Bug #45292 (Need More Info): pg autoscaler merging issue
- Can you provide pg query output for one of those PGs? Also, osd logs with debug_osd=20 will be helpful.
- 12:17 PM Backport #45314 (In Progress): octopus: scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_fai...
- 03:52 AM Bug #45318 (New): Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log r...
- /a/teuthology-2020-04-26_02:30:03-rados-octopus-distro-basic-smithi/4984906
The MON log shows it came back up arou...
- 03:15 AM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- /a/teuthology-2020-04-26_02:30:03-rados-octopus-distro-basic-smithi/4984693
/a/teuthology-2020-04-26_02:30:03-rados-...
- 03:15 AM Bug #42347: nautilus assert during osd shutdown: FAILED ceph_assert((sharded_in_flight_list.back(...
- /a/teuthology-2020-04-26_02:30:03-rados-octopus-distro-basic-smithi/4984693
04/28/2020
- 08:04 PM Bug #44076 (Resolved): mon: update + monmap update triggers spawn loop
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:04 PM Bug #44248 (Resolved): Receiving RemoteBackfillReserved in WaitLocalBackfillReserved can cause th...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:02 PM Backport #45314 (Resolved): octopus: scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed...
- https://github.com/ceph/ceph/pull/34830
- 07:28 PM Support #45270 (Resolved): after reboot osd move to localhost
- I believe this has been discussed several times on the mailing list. If your OSDs don't get reliably told what their ...
- 05:51 PM Backport #45041 (In Progress): octopus: osd: incorrect read bytes stat in SPARSE_READ
- 05:47 PM Backport #44842 (In Progress): octopus: nautilus: FAILED ceph_assert(head.version == 0 || e.versi...
- 05:46 PM Backport #44685 (In Progress): octopus: osd/osd-backfill-stats.sh TEST_backfill_out2: wait_for_cl...
- 03:44 PM Bug #45075 (Pending Backport): scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- 12:05 AM Bug #45075: scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- https://github.com/ceph/ceph/pull/34602 merged
- 09:25 AM Backport #44324: nautilus: Receiving RemoteBackfillReserved in WaitLocalBackfillReserved can caus...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34512
m...
- 03:01 AM Backport #44324 (Resolved): nautilus: Receiving RemoteBackfillReserved in WaitLocalBackfillReserv...
- 09:25 AM Backport #44289: nautilus: mon: update + monmap update triggers spawn loop
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34500
m...
- 03:00 AM Backport #44289 (Resolved): nautilus: mon: update + monmap update triggers spawn loop
- 02:56 AM Backport #44370 (In Progress): nautilus: msg/async: the event center is blocked by rdma construct...
- 02:05 AM Bug #45298 (Resolved): cram: balancer/misplaced.t fails with 'Error EAGAIN: Some objects (0.00891...
- /a/teuthology-2020-04-26_07:01:02-rados-master-distro-basic-smithi/4985666...
- 01:31 AM Bug #36304: FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wake_split_child(PG*)
- /a/teuthology-2020-04-26_07:01:02-rados-master-distro-basic-smithi/4986119