Activity
From 03/24/2020 to 04/22/2020
04/22/2020
- 11:53 PM Bug #44062 (Pending Backport): LibRadosWatchNotify.WatchNotify failure
- 11:52 PM Bug #44062: LibRadosWatchNotify.WatchNotify failure
- Seeing this in Nautilus so setting backport.
http://pulpito.ceph.com/yuriw-2020-04-21_20:54:00-rados-wip-yuri8-tes...
- 05:43 PM Bug #45191 (New): erasure-code/test-erasure-eio.sh: TEST_ec_single_recovery_error fails
- ...
- 05:15 PM Bug #45190 (New): osd dump times out
- ...
- 10:04 AM Backport #44468 (In Progress): nautilus: mon: Get session_map_lock before remove_session
04/21/2020
- 11:19 PM Backport #44908: mimic: mon: rados/multimon tests fail with clock skew
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34370
merged
- 11:19 PM Backport #44083: mimic: expected MON_CLOCK_SKEW but got none
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34370
merged
- 11:09 PM Backport #45040: nautilus: mon: reset min_size when changing pool size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34585
merged
- 11:08 PM Bug #45168 (New): mimic: cephtool/test.sh: test_mon_osd_pool_set failure
- ...
- 06:46 PM Backport #45053 (Resolved): octopus: nautilus upgrade should recommend ceph-osd restarts after en...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34523
m...
- 06:37 PM Backport #45054 (Resolved): nautilus: nautilus upgrade should recommend ceph-osd restarts after e...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34524
m...
- 12:34 PM Bug #44715 (Fix Under Review): common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_li...
- 02:45 AM Bug #39039: mon connection reset, command not resent
- I tested this on a lab cluster after disabling cephx per https://docs.ceph.com/docs/octopus/rados/configuration/auth-...
- 02:10 AM Bug #45113 (Resolved): workunits/cls/test_cls_cmpomap.sh fails
- 02:10 AM Bug #44901: luminous: osd continue down because of the hearbeattimeout
- Solved it! It is because we deployed ceph in docker using kolla-ansible.
We started some dockers by hand and missed some...
04/20/2020
- 02:24 AM Bug #39039: mon connection reset, command not resent
- This also continues to happen on octopus, I just tested on 15.2.0.
I have attached the build instructions I used t...
04/18/2020
- 08:06 AM Fix #45140: osd/tiering: flush cache pool may lead to slow write requests
- Pull request ID: 34623
- 07:26 AM Fix #45140 (New): osd/tiering: flush cache pool may lead to slow write requests
- In OSD tiering, when flushing objects from the cache pool to the base pool, there are two problems that can lead to slow requests:
...
04/17/2020
- 10:25 PM Bug #45139: osd/osd-markdown.sh: markdown_N_impl failure
- This was seen after the fix for https://tracker.ceph.com/issues/44662 merged.
- 10:24 PM Bug #45139 (New): osd/osd-markdown.sh: markdown_N_impl failure
- ...
- 09:57 PM Bug #43888: osd/osd-bench.sh 'tell osd.N bench' hang
- Still fails occasionally
/a/nojha-2020-04-10_22:42:57-rados:standalone-master-distro-basic-smithi/4943804/
- 05:02 PM Bug #41735: pg_autoscaler throws HEALTH_WARN with auto_scale on for all pools
- nautilus backport: https://github.com/ceph/ceph/pull/34618
- 04:54 PM Bug #41735 (Pending Backport): pg_autoscaler throws HEALTH_WARN with auto_scale on for all pools
- This change needs to be backported into Nautilus to fix a regression (#45135)
- 07:21 AM Bug #45113 (Fix Under Review): workunits/cls/test_cls_cmpomap.sh fails
04/16/2020
- 11:23 PM Bug #45121 (New): nautilus: osd-scrub-snaps.sh: TEST_scrub_snaps failure
- ...
- 11:09 PM Bug #45075 (Fix Under Review): scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- 07:44 PM Bug #45075 (In Progress): scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- 05:50 PM Bug #45075: scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- /a/bhubbard-2020-04-16_09:57:54-rados-wip-badone-testing-distro-basic-smithi/4957883/
- 04:15 PM Bug #45113 (Triaged): workunits/cls/test_cls_cmpomap.sh fails
- Thank you Casey! I will see if we can use the default list.
- 02:54 PM Bug #45113: workunits/cls/test_cls_cmpomap.sh fails
- I didn't realize this ran in the rados suite. It's passing in the rgw/verify suite.
It looks like the rados suite...
- 02:25 PM Bug #45113 (Resolved): workunits/cls/test_cls_cmpomap.sh fails
- ...
- 09:39 AM Backport #45038 (In Progress): mimic: mon: reset min_size when changing pool size
- 09:37 AM Backport #45040 (In Progress): nautilus: mon: reset min_size when changing pool size
- 08:28 AM Feature #44025 (Resolved): Make it harder to set pool replica size to 1
- 12:30 AM Bug #45076 (Fix Under Review): rados: Sharded OpWQ drops suicide_grace after waiting for work
04/15/2020
- 09:33 PM Bug #45008: [osd crash]The ceph-osd assert with rbd bench io
- Sebastian Wagner wrote:
> duplicate of 44715 ?
Looks like a dup of 42347, which was on the osd.
- 03:54 PM Bug #45008: [osd crash]The ceph-osd assert with rbd bench io
- duplicate of 44715 ?
- 03:54 PM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- http://pulpito.ceph.com/swagner-2020-04-15_09:10:55-rados-wip-swagner2-testing-2020-04-14-1813-distro-basic-smithi/
04/14/2020
- 10:25 AM Feature #45079 (New): HEALTH_WARN, if require-osd-release is < mimic and OSD wants to join the cl...
- When upgrading a cluster to octopus, users should get a warning if require-osd-release is < mimic, as this prevents o...
- 08:54 AM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- http://pulpito.ceph.com/swagner-2020-04-09_21:46:02-rados-wip-swagner2-testing-2020-04-09-1541-distro-basic-smithi/
04/13/2020
- 10:39 PM Bug #45076 (Resolved): rados: Sharded OpWQ drops suicide_grace after waiting for work
- The Sharded OpWQ will opportunistically wait for more work when processing an empty queue. While waiting, the default...
- 07:45 PM Bug #45075 (Resolved): scrub/osd-scrub-repair.sh: TEST_auto_repair_bluestore_failed failure
- ...
- 06:08 PM Backport #44486 (In Progress): nautilus: Nautilus: Random mon crashes in failed assertion at ceph...
- 02:44 PM Backport #43232: nautilus: pgs stuck in laggy state
- @Neha - Can you make a decision whether to backport this to nautilus or not? Sage wrote:
"I'm not sure whether we ...
04/12/2020
- 10:34 PM Bug #44883 (Resolved): upgrade to octopus can complain about orchestrator_cli
- 11:25 AM Backport #45039 (In Progress): octopus: mon: reset min_size when changing pool size
- 11:24 AM Feature #44025 (Pending Backport): Make it harder to set pool replica size to 1
04/11/2020
- 11:43 AM Backport #45054 (In Progress): nautilus: nautilus upgrade should recommend ceph-osd restarts afte...
- 09:40 AM Backport #45054 (Resolved): nautilus: nautilus upgrade should recommend ceph-osd restarts after e...
- https://github.com/ceph/ceph/pull/34524
- 11:39 AM Backport #45053 (In Progress): octopus: nautilus upgrade should recommend ceph-osd restarts after...
- 09:39 AM Backport #45053 (Resolved): octopus: nautilus upgrade should recommend ceph-osd restarts after en...
- https://github.com/ceph/ceph/pull/34523
- 09:42 AM Feature #41666 (Resolved): Issue a HEALTH_WARN when a Pool is configured with [min_]size == 1
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:38 AM Bug #44684 (Resolved): pgs entering premerge state that still need backfill
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:37 AM Backport #45041 (Resolved): octopus: osd: incorrect read bytes stat in SPARSE_READ
- https://github.com/ceph/ceph/pull/34809
- 09:37 AM Backport #45040 (Resolved): nautilus: mon: reset min_size when changing pool size
- https://github.com/ceph/ceph/pull/34585
- 09:37 AM Backport #45039 (Resolved): octopus: mon: reset min_size when changing pool size
- https://github.com/ceph/ceph/pull/34528
- 09:36 AM Backport #45038 (Rejected): mimic: mon: reset min_size when changing pool size
- https://github.com/ceph/ceph/pull/34586
04/10/2020
- 04:27 PM Backport #44324 (In Progress): nautilus: Receiving RemoteBackfillReserved in WaitLocalBackfillRes...
- 06:44 AM Backport #45025 (Need More Info): mimic: hung osd_repop, bluestore committed but failed to trigge...
- To backport https://github.com/ceph/ceph/pull/24761 to mimic we would first need to backport https://github.com/ceph/...
- 06:26 AM Backport #45025 (Rejected): mimic: hung osd_repop, bluestore committed but failed to trigger repo...
- 05:31 AM Bug #36473: hung osd_repop, bluestore committed but failed to trigger repop_commit
- 24761
- 03:29 AM Bug #25174: osd: assert failure with FAILED assert(repop_queue.front() == repop) In function 'vo...
- This is likely a duplicate of https://tracker.ceph.com/issues/22570
and resolved by https://github.com/ceph/ceph/pu...
04/09/2020
- 03:51 PM Bug #44352: pool listings are slow after deleting objects
- I think this is a known issue with slow [omap] listing caused by RocksDB fragmentation.
There was a bunch of improve...
- 02:30 PM Bug #44352: pool listings are slow after deleting objects
- radosgw-admin commands are just listing the pool, and the performance degradation happens with 'rados ls' too - movin...
- 02:38 PM Backport #44289 (In Progress): nautilus: mon: update + monmap update triggers spawn loop
- 09:46 AM Backport #44711 (Resolved): nautilus: pgs entering premerge state that still need backfill
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34354
m...
- 09:44 AM Backport #42662 (Resolved): nautilus:Issue a HEALTH_WARN when a Pool is configured with [min_]siz...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31842
m...
- 09:43 AM Backport #44360 (Resolved): nautilus: Rados should use the '-o outfile' convention
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33641
m...
- 06:26 AM Bug #45008 (New): [osd crash]The ceph-osd assert with rbd bench io
- ceph version: 14.2.5
OS:centos 7.6.1810
Procedure:
1. Create an rbd image.
2. Use the rbd bench tool to write some dat...
- 02:47 AM Bug #44981: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- We should be testing the version the rocksdb submodule is pointing to. In nautilus that's...
$ git submodule statu...
- 01:55 AM Bug #44981: rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- We are trying to compile rocksdb master with gcc 4.8.5 but std::max_align_t only became available in 4.9.
04/08/2020
- 10:53 PM Backport #44711: nautilus: pgs entering premerge state that still need backfill
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34354
merged
- 10:07 PM Bug #44939: The mon and/or osd pod memory consumption is not even. One of them consumes about 50%...
- What is your mon_memory_target and osd_memory_target?
Uneven memory on the mons is likely due to the leader doing ...
- 09:50 PM Bug #42347: nautilus assert during osd shutdown: FAILED ceph_assert((sharded_in_flight_list.back(...
- This is still an issue on 14.2.8 (at least the one shipped with proxmox):...
- 03:59 PM Bug #45001 (Duplicate): mon+cephadm: ceph_assert((sharded_in_flight_list.back())->ops_in_flight_s...
- 03:29 PM Bug #45001 (Duplicate): mon+cephadm: ceph_assert((sharded_in_flight_list.back())->ops_in_flight_s...
- http://pulpito.ceph.com/swagner-2020-04-08_10:27:55-rados-wip-swagner2-testing-2020-04-08-0014-distro-basic-smithi/49...
- 03:31 PM Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- http://pulpito.ceph.com/swagner-2020-04-08_10:27:55-rados-wip-swagner2-testing-2020-04-08-0014-distro-basic-smithi/49...
- 07:43 AM Bug #44827 (Pending Backport): osd: incorrect read bytes stat in SPARSE_READ
- 07:40 AM Bug #44862 (Pending Backport): mon: reset min_size when changing pool size
04/07/2020
- 06:54 PM Backport #42662: nautilus:Issue a HEALTH_WARN when a Pool is configured with [min_]size == 1
- Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/31842
merged
- 05:51 PM Backport #44360: nautilus: Rados should use the '-o outfile' convention
- Kefu Chai wrote:
> https://github.com/ceph/ceph/pull/33641
merged
- 04:35 PM Bug #44981 (Resolved): rados/test_envlibrados_for_rocksdb.sh build failure (seen in nautilus)
- ...
04/06/2020
- 02:44 PM Bug #44959 (Closed): health warning: pgs not deep-scrubbed in time although it was in time
- Hi!
Some of my PGs are listed as "not scrubbed in time" in my 14.2.8 cluster.
My scrub settings are:...
- 11:00 AM Backport #44835 (Need More Info): nautilus: librados mon_command (mgr) command hang
- non-trivial due to post-nautilus refactoring
- 10:44 AM Backport #44836 (In Progress): octopus: librados mon_command (mgr) command hang
- 02:59 AM Bug #44945: Mon High CPU usage when another mon syncing from it
- It is probably related to huge removed_snap keys.
- 02:58 AM Bug #44945 (Need More Info): Mon High CPU usage when another mon syncing from it
- Each sync request takes a very long time to come back, as shown below. And from the TOP of the source mon, the CPU was on d...
04/04/2020
- 08:16 PM Bug #44939: The mon and/or osd pod memory consumption is not even. One of them consumes about 50%...
- Here's the overall memory consumption under traffic. The OSD consumes much more memory as well.
# knc top pods | egrep...
- 08:10 PM Bug #44939 (New): The mon and/or osd pod memory consumption is not even. One of them consumes abo...
- This is a ceph deployment with rook release 1.2.7/ceph 14.2.8. After deployment, one of the mon pods and/or osd pods ...
04/03/2020
- 09:32 PM Documentation #43896 (Pending Backport): nautilus upgrade should recommend ceph-osd restarts afte...
- 09:02 PM Bug #36473 (Pending Backport): hung osd_repop, bluestore committed but failed to trigger repop_co...
- Tagging for Mimic backport consideration.
- 09:01 PM Bug #36473: hung osd_repop, bluestore committed but failed to trigger repop_commit
- Mimic (pr#22739 + pr#24269) introduced this race condition, which was fixed in Nautilus (pr#24761).
Was this evalu...
04/02/2020
- 10:54 PM Bug #44901 (Rejected): luminous: osd continue down because of the hearbeattimeout
- There is clearly an issue with your network which is not a ceph issue.
- 02:22 AM Bug #44901 (Rejected): luminous: osd continue down because of the hearbeattimeout
- Hi all! Thanks for reading this msg.
I have one ceph cluster installed with ceph v12.2.12. It runs well for abo...
- 10:25 PM Bug #44631: ceph pg dump error code 124
- I think the pg dump command is timing out for some reason. The timestamps between the following log lines indicate th...
- 10:57 AM Backport #44908 (In Progress): mimic: mon: rados/multimon tests fail with clock skew
- 10:51 AM Backport #44908 (Resolved): mimic: mon: rados/multimon tests fail with clock skew
- https://github.com/ceph/ceph/pull/34370
- 10:53 AM Backport #44083 (In Progress): mimic: expected MON_CLOCK_SKEW but got none
- 10:51 AM Bug #40112 (Pending Backport): mon: rados/multimon tests fail with clock skew
- 01:28 AM Bug #44815 (Fix Under Review): Pool stats increase after PG merged (PGMap::apply_incremental does...
- 01:27 AM Bug #44797 (Closed): mon/cephx : trace of a deleted customer in the "auth" index
04/01/2020
- 09:12 PM Bug #44859 (Closed): add osd ceph cluster status slow requests are blocked > 32 sec. Implicated o...
- It sounds like you may be running into the maximum pgs per osd limits. You can increase these to get around this. If ...
- 08:13 PM Backport #44711 (In Progress): nautilus: pgs entering premerge state that still need backfill
- 07:50 PM Backport #44847: octopus: osd-backfill-recovery-log.sh fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34313
m... - 06:55 PM Backport #44847 (Resolved): octopus: osd-backfill-recovery-log.sh fails
- 07:34 PM Bug #43807 (Resolved): osd-backfill-recovery-log.sh fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:12 PM Bug #44755 (In Progress): Create stronger affinity between drivegroup specs and osd daemons
- 01:35 PM Bug #44884: mon: weight-set create may return on uncomitted state
- ...
- 01:33 PM Bug #44884 (New): mon: weight-set create may return on uncomitted state
- ...
- 01:28 PM Bug #44883 (Resolved): upgrade to octopus can complain about orchestrator_cli
- ...
- 01:22 PM Bug #44882 (New): osd: leaked buffer (alloc via CephxAuthorizeHandler::verify_authorizer)
- ...
- 09:39 AM Bug #44862 (In Progress): mon: reset min_size when changing pool size
- 02:57 AM Bug #39039: mon connection reset, command not resent
- #44197 looks related?
03/31/2020
- 04:37 PM Bug #44827 (Fix Under Review): osd: incorrect read bytes stat in SPARSE_READ
- 04:38 AM Bug #44827 (Resolved): osd: incorrect read bytes stat in SPARSE_READ
- The local variable 'total_read', which is always zero in the code, was used to accumulate the total bytes it reads from
bluest...
- 03:22 PM Bug #44862 (Resolved): mon: reset min_size when changing pool size
- See https://github.com/rook/rook/issues/5127
Currently 'ceph osd pool set size x' only changes min_size if it's ab...
- 03:19 PM Bug #44859: add osd ceph cluster status slow requests are blocked > 32 sec. Implicated osds 10,15
- The following message appears in the log: maybe_wait_for_max_pg withhold creation of pg 8.11: 600 >= 600
- 03:16 PM Bug #44859 (Closed): add osd ceph cluster status slow requests are blocked > 32 sec. Implicated o...
- Hello
I have a question. I added an osd to a ceph cluster; the ceph version is 12.2.8, but the cluster status shows slow requests ar...
- 10:10 AM Bug #43807: osd-backfill-recovery-log.sh fails
- Neha Ojha wrote:
> Nathan, we need https://github.com/ceph/ceph/pull/34126 as well - See https://tracker.ceph.com/is...
- 10:08 AM Backport #44847 (In Progress): octopus: osd-backfill-recovery-log.sh fails
- 10:06 AM Backport #44847 (Resolved): octopus: osd-backfill-recovery-log.sh fails
- https://github.com/ceph/ceph/pull/34313
- 10:03 AM Bug #41424 (Resolved): readable.sh test fails
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:02 AM Documentation #42221 (Resolved): document new option mon_max_pg_per_osd
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:01 AM Bug #42810 (Resolved): ceph config rm does not revert debug_mon to default
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:01 AM Bug #42964 (Resolved): monitor config store: Deleting logging config settings does not decrease l...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:00 AM Bug #43903 (Resolved): osd segv in ceph::buffer::v14_2_0::ptr::release (PGTempMap::decode)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:00 AM Bug #44052 (Resolved): ceph -s does not show >32bit pg states
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:58 AM Bug #44507 (Resolved): osd/PeeringState.cc: 5582: FAILED ceph_assert(ps->is_acting(osd_with_shard...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:58 AM Backport #44842 (Resolved): octopus: nautilus: FAILED ceph_assert(head.version == 0 || e.version....
- https://github.com/ceph/ceph/pull/34807
- 09:58 AM Backport #44841 (Resolved): nautilus: nautilus: FAILED ceph_assert(head.version == 0 || e.version...
- https://github.com/ceph/ceph/pull/34957
- 09:57 AM Bug #44759 (Resolved): fast luminous -> nautilus -> octopus upgrade asserts out
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:57 AM Backport #44836 (Resolved): octopus: librados mon_command (mgr) command hang
- https://github.com/ceph/ceph/pull/34416
- 09:57 AM Backport #44835 (Rejected): nautilus: librados mon_command (mgr) command hang
- 08:31 AM Backport #44770: octopus: fast luminous -> nautilus -> octopus upgrade asserts out
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34204
m...
- 08:30 AM Backport #44717: octopus: osd/PeeringState.cc: 5582: FAILED ceph_assert(ps->is_acting(osd_with_sh...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34123
m...
- 08:24 AM Backport #43257 (Resolved): mimic: monitor config store: Deleting logging config settings does no...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33327
m...
- 08:23 AM Backport #42258 (Resolved): mimic: document new option mon_max_pg_per_osd
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31875
m...
- 08:22 AM Backport #43469: nautilus: asynchronous recovery + backfill might spin pg undersized for a long time
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32849
m...
03/30/2020
- 10:33 PM Backport #42168 (Resolved): nautilus: readable.sh test fails
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30704
m...
- 10:27 PM Bug #43807 (Pending Backport): osd-backfill-recovery-log.sh fails
- Nathan, we need https://github.com/ceph/ceph/pull/34126 as well - See https://tracker.ceph.com/issues/43807#note-15
- 10:24 PM Bug #43807 (Resolved): osd-backfill-recovery-log.sh fails
- ...
- 02:42 PM Bug #44815: Pool stats increase after PG merged (PGMap::apply_incremental doesn't subtract stats ...
- https://github.com/ceph/ceph/pull/34289
- 02:30 PM Bug #44815 (Resolved): Pool stats increase after PG merged (PGMap::apply_incremental doesn't subt...
- Pool stats like num_objects and num_bytes increased after PGs were merged following a manual pg_num change.
Steps to reproduc...
- 01:33 PM Bug #43591: /sbin/fstrim can interfere with umount
- /a/sage-2020-03-29_22:38:58-fs-wip-sage-testing-2020-03-29-0834-distro-basic-smithi/4902553
- 01:24 PM Bug #44798 (Pending Backport): librados mon_command (mgr) command hang
- 12:55 PM Bug #44184: Slow / Hanging Ops after pool creation
- Fwiw on the cluster I'm seeing this on, I did set this flag after tidying the osd map (removed a couple of destroyed ...
- 11:20 AM Bug #44691 (New): mon/caps.sh fails with "Expected return 13, got 0"
- 11:20 AM Bug #44691 (New): mon/caps.sh fails with "Expected return 13, got 0"
03/29/2020
- 01:37 PM Bug #44184: Slow / Hanging Ops after pool creation
- When searching through the code I found this in src/mon/OSDMonitor.cc...
03/28/2020
- 09:23 PM Bug #44798 (Fix Under Review): librados mon_command (mgr) command hang
- 09:13 PM Bug #44798 (Resolved): librados mon_command (mgr) command hang
- - mon starts
- mgr starts
- mgr fetches mon metadata
- more mons are added to the cluster (post-bootstrap)
- libra... - 12:27 PM Backport #43469 (Resolved): nautilus: asynchronous recovery + backfill might spin pg undersized f...
- 10:46 AM Bug #44797: mon/cephx : trace of a deleted customer in the "auth" index
- It was a hidden character.
I do not have the rights to close the ticket.
- 10:40 AM Bug #44797 (Closed): mon/cephx : trace of a deleted customer in the "auth" index
- ...
- 12:59 AM Backport #44770 (Resolved): octopus: fast luminous -> nautilus -> octopus upgrade asserts out
- 12:57 AM Backport #44717 (Resolved): octopus: osd/PeeringState.cc: 5582: FAILED ceph_assert(ps->is_acting(...
- 12:57 AM Bug #44631: ceph pg dump error code 124
- /a/sage-2020-03-27_13:32:58-rados-wip-sage3-testing-2020-03-26-1757-distro-basic-smithi/4895381
03/27/2020
- 09:45 PM Backport #43257: mimic: monitor config store: Deleting logging config settings does not decrease ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/33327
merged
- 03:37 AM Bug #44184: Slow / Hanging Ops after pool creation
- Andrew Mitroshin wrote:
> Jan Fajerski wrote:
> > Andrew Mitroshin wrote:
> > > Could you please submit output for...
03/26/2020
- 10:37 AM Backport #44770 (In Progress): octopus: fast luminous -> nautilus -> octopus upgrade asserts out
- 10:36 AM Backport #44770 (Resolved): octopus: fast luminous -> nautilus -> octopus upgrade asserts out
- https://github.com/ceph/ceph/pull/34204
- 10:34 AM Bug #44759 (Pending Backport): fast luminous -> nautilus -> octopus upgrade asserts out
- 07:36 AM Bug #23937: FAILED assert(info.history.same_interval_since != 0)
- Unfortunately, we still hit this after the patch (https://github.com/ceph/ceph/pull/20571) was applied.
We didn't do exp...
- 12:11 AM Bug #44532 (Pending Backport): nautilus: FAILED ceph_assert(head.version == 0 || e.version.versio...
03/25/2020
- 11:40 PM Backport #44206 (Resolved): nautilus: osd segv in ceph::buffer::v14_2_0::ptr::release (PGTempMap:...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33530
m...
- 11:21 PM Backport #44081 (Resolved): nautilus: ceph -s does not show >32bit pg states
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33275
m...
- 11:21 PM Backport #43997 (Resolved): nautilus: Ceph tools utilizing "global_[pre_]init" no longer process ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33261
m...
- 10:15 PM Bug #44759 (Fix Under Review): fast luminous -> nautilus -> octopus upgrade asserts out
- 09:58 PM Bug #44759 (Resolved): fast luminous -> nautilus -> octopus upgrade asserts out
- ...
- 03:05 PM Bug #44755 (Resolved): Create stronger affinity between drivegroup specs and osd daemons
- We currently only show the name of the drivegroup spec in `orch ls`...
03/24/2020
- 10:03 PM Bug #44724: compressor: Set default Zstd compression level to 1
- I've closed *PR:* https://github.com/ceph/ceph/pull/34133 in favor of making it a separate commit as part of:
*PR:...