Activity
From 12/10/2019 to 01/08/2020
01/08/2020
- 10:23 PM Bug #43312 (Pending Backport): Change default upmap_max_deviation to 5
- 10:10 PM Bug #43307 (Pending Backport): Remove use of rules batching for upmap balancer
- 10:09 PM Bug #43397 (Fix Under Review): FS_DEGRADED to cluster log despite --no-mon-health-to-clog
- 10:04 PM Bug #43412: cephadm ceph_manager IndexError: list index out of range
- Kefu's got a PR for this
- 05:31 AM Bug #43412: cephadm ceph_manager IndexError: list index out of range
- I'm guessing it's caused by there being no pools at the time. So the random choice fails. Maybe we need to do somethi...
- 10:02 PM Bug #43422: qa/standalone/mon/osd-pool-create.sh fails to grep utf8 pool name
- probably need to set LANG to utf8
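The suggested workaround can be sketched as follows. This is a minimal illustration, not the actual test fix: the file path and the pool name bytes are made up, and the point is only that forcing a UTF-8 locale (rather than LANG=C, under which grep may treat multibyte names as binary) lets grep match a UTF-8 pool name.

```shell
# Illustrative only: write a line containing a UTF-8 pool name,
# then grep for it with an explicit UTF-8 locale.
printf 'pool \xe6\xb1\xa0 created\n' > /tmp/pool_list.txt
LANG=C.UTF-8 LC_ALL=C.UTF-8 grep -q "$(printf '\xe6\xb1\xa0')" /tmp/pool_list.txt && echo matched
```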
- 08:23 AM Bug #43185: ceph -s not showing client activity
- We run 14.2.4. I see the mgr process at 100% sometimes and I have been told that the reason for the lack of activity shown might be...
- 02:24 AM Bug #43520 (In Progress): segfault in kstore's pending stripes
- 02:23 AM Bug #43520: segfault in kstore's pending stripes
- ceph version 14.2.1-700.3.0.2.407 (c823e6bbf85437561d2165c0f4b5d8c6bd726975) nautilus (stable)
1: (()+0xf5e0) [0x7f...
- 02:20 AM Bug #43520 (In Progress): segfault in kstore's pending stripes
01/07/2020
- 02:46 PM Documentation #41389 (Resolved): wrong datatype describing crush_rule
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:45 PM Bug #42177 (Resolved): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:43 PM Bug #42906 (Resolved): ceph-mon --mkfs: public_address type (v1|v2) is not respected
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:25 AM Backport #43495 (In Progress): nautilus: ceph monitor crashes after updating 'mon_memory_target' ...
- 10:24 AM Backport #43495 (New): nautilus: ceph monitor crashes after updating 'mon_memory_target' config s...
- 10:01 AM Backport #43495 (Resolved): nautilus: ceph monitor crashes after updating 'mon_memory_target' con...
- https://github.com/ceph/ceph/pull/32520
- 09:34 AM Bug #43454: ceph monitor crashes after updating 'mon_memory_target' config setting.
- Tested the fix without using rocksdb and confirmed that the crash is not observed now:
2020-01-07T12:53:09.942+053...
- 08:41 AM Bug #43454 (Pending Backport): ceph monitor crashes after updating 'mon_memory_target' config set...
- 02:46 AM Backport #39474: luminous: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32349
merged
- 02:45 AM Backport #41730: luminous: osd/ReplicatedBackend.cc: 1349: FAILED ceph_assert(peer_missing.count(...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31855
merged
01/06/2020
- 11:21 PM Bug #43490 (New): nautilus: "[WRN] Monitor daemon marked osd.2 down, but it is still running" in ...
- Run: http://pulpito.ceph.com/yuriw-2020-01-04_16:08:12-rados-wip-yuri8-testing-2020-01-03-2031-nautilus-distro-basic-...
- 11:18 PM Backport #42997: nautilus: acting_recovery_backfill won't catch all up peers
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32064
merged
- 11:17 PM Backport #42853: nautilus: format error: ceph osd stat --format=json
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32062
merged
- 11:16 PM Backport #42846: nautilus: src/msg/async/net_handler.cc: Fix compilation
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31736
merged
- 10:23 PM Bug #43489 (New): PG.cc: 953: FAILED assert(0 == "past_interval start interval mismatch")
Upgrade runs from Jewel to Luminous and Luminous to Mimic
yuriw-2019-12-23_19:53:50-rados-wip-yuri3-testing-2019...
- 07:21 PM Bug #41718 (Resolved): ceph osd stat JSON output incomplete
- 07:21 PM Bug #43485 (Fix Under Review): Deprecated full/nearfull added back by mistake
- 07:16 PM Bug #43485 (Resolved): Deprecated full/nearfull added back by mistake
The change for
https://tracker.ceph.com/issues/41718 (dff411f1905cc69bfb2cfa8b62a00b4702e6aa46)
also added back...
- 06:26 PM Backport #43325 (Resolved): luminous: wrong datatype describing crush_rule
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32267
m...
- 06:25 PM Backport #43315 (Resolved): mimic:wrong datatype describing crush_rule
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32255
m...
- 06:24 PM Backport #42197 (Resolved): nautilus: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31028
m...
- 06:23 PM Backport #43140 (Resolved): nautilus: ceph-mon --mkfs: public_address type (v1|v2) is not respected
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32028
m...
- 01:31 PM Backport #43473 (Resolved): nautilus: recursive lock of OpTracker::lock (70)
- https://github.com/ceph/ceph/pull/32858
- 01:30 PM Backport #43472 (Resolved): mimic: negative num_objects can set PG_STATE_DEGRADED
- https://github.com/ceph/ceph/pull/33331
- 01:30 PM Backport #43471 (Resolved): nautilus: negative num_objects can set PG_STATE_DEGRADED
- https://github.com/ceph/ceph/pull/32857
- 01:30 PM Backport #43470 (Rejected): mimic: asynchronous recovery + backfill might spin pg undersized for ...
- https://github.com/ceph/ceph/pull/33330
- 01:30 PM Backport #43469 (Resolved): nautilus: asynchronous recovery + backfill might spin pg undersized f...
- https://github.com/ceph/ceph/pull/32849
- 08:00 AM Bug #42861 (Fix Under Review): Libceph-common.so needs to use private link attribute when includi...
01/04/2020
- 03:15 PM Bug #43334 (Resolved): nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubunt...
- i've recompiled cmake3 for xenial/amd64 with GCC-5, and uploaded the built packages to the chacra repo. please reopen...
- 02:38 AM Bug #43334: nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubuntu_16.04.yaml
- i need to rebuild cmake3 using the original libstdc++ instead of the one from the gcc-8/gcc-9 ppa repo.
01/03/2020
- 11:54 PM Bug #43421 (Fix Under Review): mon spends too much time to build incremental osdmap
- 10:09 AM Bug #43421: mon spends too much time to build incremental osdmap
- It takes 5 seconds to build 640 incremental osdmaps for one client.
- 08:15 AM Bug #43421: mon spends too much time to build incremental osdmap
- sorry. It took 5 seconds
- 11:49 PM Bug #43185 (Need More Info): ceph -s not showing client activity
- super xor wrote:
> Possible relation to https://tracker.ceph.com/issues/43364 and https://tracker.ceph.com/issues/43...
- 10:48 PM Bug #43311 (Pending Backport): asynchronous recovery + backfill might spin pg undersized for a lo...
- 09:01 PM Feature #40870: Implement mon_memory_target
- Another follow-on fix: https://github.com/ceph/ceph/pull/32473
- 09:00 PM Bug #43454 (Fix Under Review): ceph monitor crashes after updating 'mon_memory_target' config set...
- 08:24 AM Bug #43454 (Resolved): ceph monitor crashes after updating 'mon_memory_target' config setting.
- Refer bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1760257 for more details.
- 08:06 PM Backport #42197: nautilus: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31028
merged
- 04:39 PM Bug #43334: nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubuntu_16.04.yaml
- /a/yuriw-2019-12-23_20:23:51-rados-wip-yuri-testing-2019-12-16-2241-nautilus-distro-basic-smithi/4628899/
01/02/2020
- 03:41 PM Bug #43403: unittest_lockdep unreliable
- Happened in https://github.com/ceph/ceph/pull/27792 (among others)
01/01/2020
- 11:01 AM Documentation #42315: Improve rados command usage, man page and tutorial
- RADOS(8) Ceph RADOS(8)
NAME
rados - rados object s...
- 10:52 AM Documentation #42315: Improve rados command usage, man page and tutorial
- [zdover@192-168-1-112 ~]$ rados -h
usage: rados [options] [commands]
POOL COMMANDS
lspools ...
12/25/2019
- 03:24 PM Bug #43422 (Resolved): qa/standalone/mon/osd-pool-create.sh fails to grep utf8 pool name
- ...
- 12:33 PM Bug #43421: mon spends too much time to build incremental osdmap
- In my cluster, it took five minutes to build 1300 versions of incremental osdmaps.
patch: https://github.com/ceph/ceph/...
- 09:49 AM Bug #43421 (Fix Under Review): mon spends too much time to build incremental osdmap
- If a client's osdmap version is too low, the mon spends too much time building incremental osdmaps.
The mon can't handle norma...
12/24/2019
- 05:03 AM Bug #43308 (Pending Backport): negative num_objects can set PG_STATE_DEGRADED
- 05:02 AM Bug #42780 (Pending Backport): recursive lock of OpTracker::lock (70)
- 01:53 AM Bug #43413 (New): Virtual IP address of iface lo results in failing to start an OSD
- We added a virtual IP on the loopback interface lo to complete the LVS configuration....
12/23/2019
- 11:54 PM Bug #43412 (Resolved): cephadm ceph_manager IndexError: list index out of range
- ...
- 08:26 PM Backport #43140: nautilus: ceph-mon --mkfs: public_address type (v1|v2) is not respected
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32028
merged
Reviewed-by: Ricardo Dias <rdias@suse.com>
- 02:18 PM Bug #43174: pgs inconsistent, union_shard_errors=missing
- Hi David.
> Are you running your own Ceph build?
No, we use the official (community) build.
> Sortbitwise needed to...
12/20/2019
- 11:39 PM Bug #42328 (Resolved): osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- I can't check the original reports (logs have been removed), but assuming it's the same root cause PR #32382 5bb932c3...
- 01:31 AM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- I observed something similar on a ceph_test_rados teuthology run: sjust-2019-12-19_20:05:13-rados-wip-sjust-read-from...
- 11:37 PM Bug #43394 (Resolved): crimson::dmclock segv in crimson::IndIntruHeap
- Should be fixed with PR #32380 2c9542901532feafd569d92e9f67ccd2e1af3129
- 08:53 PM Bug #43403 (Resolved): unittest_lockdep unreliable
- ...
- 08:22 AM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
- Hi David:
Good to know the bug is indeed fixed ... too bad it didn't make it in 13.2.8. Anyways ... building patch...
- 04:50 AM Bug #38345 (In Progress): mon: segv in MonOpRequest::~MonOpRequest OpHistory::cleanup
- 01:50 AM Bug #43174: pgs inconsistent, union_shard_errors=missing
Scrub incorrectly thinks the object really isn't there, but we know it is.
The way that you can see missing obje...
12/19/2019
- 11:57 PM Bug #42780 (Fix Under Review): recursive lock of OpTracker::lock (70)
- https://github.com/ceph/ceph/pull/32364
- 12:09 PM Bug #42780 (In Progress): recursive lock of OpTracker::lock (70)
- 10:30 PM Bug #43307 (Fix Under Review): Remove use of rules batching for upmap balancer
- 10:27 PM Bug #43397 (Resolved): FS_DEGRADED to cluster log despite --no-mon-health-to-clog
- ...
- 09:38 PM Bug #43394 (Resolved): crimson::dmclock segv in crimson::IndIntruHeap
- ...
- 07:06 PM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
- A backport to Mimic of the fix can be found here:
https://github.com/ceph/ceph/pull/32361
Or if you can build fro...
- 02:34 PM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
- We added a CRUSH policy (replicated_nvme) and set this policy on our cephfs metadata pool (with 1.2 Bilion objects) a...
- 07:02 PM Backport #41584 (In Progress): mimic: backfill_toofull seen on cluster where the most full OSD is...
- 02:29 PM Bug #43306: segv in collect_sys_info
- Neha Ojha wrote:
> This looks similar to https://tracker.ceph.com/issues/38296, though the mon seems to have been up...
- 02:22 PM Backport #39474 (In Progress): luminous: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- 02:18 PM Bug #41383 (Resolved): scrub object count mismatch on device_health_metrics pool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:14 PM Backport #42739 (Resolved): nautilus: scrub object count mismatch on device_health_metrics pool
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31735
m... - 07:39 AM Bug #43382: medium io/system load causes quorum failure
- Or due to limited bandwidth? 10G NICs dedicated.
- 07:36 AM Bug #43382 (New): medium io/system load causes quorum failure
- We just found out that if you put some io pressure on your system by e.g. big rsync, the mon process has issues proba...
- 05:44 AM Bug #43126 (Fix Under Review): OSD_SLOW_PING_TIME_BACK nits
- 02:20 AM Bug #43318: monitor mark all services(osd mgr) down
- The mgr produced no log output even with debug_mgr set to 40.
12/18/2019
- 10:31 PM Bug #43193 (Need More Info): "ceph ping mon.<id>" cannot work
- Can you provide the sequence of commands that fail? Also, please attach the monitor names and monmap.
- 10:25 PM Bug #43305 (Won't Fix): "psutil.NoSuchProcess process no longer exists" error in luminous-x-nauti...
- This is an infra issue....
- 10:23 PM Bug #43306: segv in collect_sys_info
- This looks similar to https://tracker.ceph.com/issues/38296, though the mon seems to have been upgraded to nautilus(w...
- 10:17 PM Bug #43318 (Need More Info): monitor mark all services(osd mgr) down
- Can you provide mgr logs from when this happened?
- 10:12 PM Feature #43377 (Resolved): Make Zstandard compression level a configurable option
- I've played with using the different compression algorithms on the RGWs and the default compression level for Zstanda...
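The level-vs-ratio tradeoff behind Feature #43377 can be sketched with stdlib zlib standing in for Zstandard (a zstd binding is not assumed to be installed; the payload is made up). Higher levels spend more CPU to produce smaller output on compressible data, which is exactly why making the level configurable is useful.

```python
import zlib

# Illustrative payload; real RGW objects will compress differently.
data = b"object-payload " * 2048

# Compare output sizes at a fast, default, and strong level.
sizes = {level: len(zlib.compress(data, level)) for level in (1, 6, 9)}

# The strongest level should not lose to the fastest one on this input,
# and all levels should beat the uncompressed size.
assert sizes[9] <= sizes[1] < len(data)
print(sizes)
```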
- 07:38 PM Backport #42739: nautilus: scrub object count mismatch on device_health_metrics pool
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31735
merged
- 03:53 PM Backport #43316 (Resolved): nautilus:wrong datatype describing crush_rule
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32254
m...
- 12:11 PM Bug #43365: Nautilus: Random mon crashes in failed assertion at ceph::time_detail::signedspan
- So it's asserting inside of to_timespan, and the Paxos code triggering that assert is
> auto start = ceph::coarse_...
- 12:03 PM Bug #43365 (Resolved): Nautilus: Random mon crashes in failed assertion at ceph::time_detail::sig...
- Thanks to 14.2.5 auto warning for recent crashes, we are observing frequent (somewhat daily period) random crashes of...
- 09:35 AM Bug #43185: ceph -s not showing client activity
- Possible relation to https://tracker.ceph.com/issues/43364 and https://tracker.ceph.com/issues/43317
12/17/2019
- 05:39 PM Bug #43308 (Fix Under Review): negative num_objects can set PG_STATE_DEGRADED
- 09:19 AM Backport #43346 (Resolved): nautilus: short pg log + cache tier ceph_test_rados out of order reply
- https://github.com/ceph/ceph/pull/32848
- 06:47 AM Bug #41950 (Can't reproduce): crimson compile
- 06:46 AM Bug #41950: crimson compile
- i assume that you were trying to compile crimson-osd not crimson-old. please check the submodule of seastar to unders...
12/16/2019
- 10:36 PM Bug #43296 (Need More Info): Ceph assimilate-conf results in config entries which can not be removed
- Can you attach the (relevant) output from "ceph config-key dump | grep config"? I think the keys are being installed...
- 10:22 PM Bug #43296: Ceph assimilate-conf results in config entries which can not be removed
- Might be related to #42964?
- 10:06 PM Bug #43334 (Resolved): nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubunt...
- Run: http://pulpito.ceph.com/yuriw-2019-12-15_16:25:11-rados-wip-yuri-nautilus-baseline_12.13.19-distro-basic-smithi/...
- 08:36 PM Bug #38358 (Pending Backport): short pg log + cache tier ceph_test_rados out of order reply
- Seen in nautilus: /a/yuriw-2019-12-15_16:25:11-rados-wip-yuri-nautilus-baseline_12.13.19-distro-basic-smithi/4605500/
- 12:40 PM Bug #43174 (New): pgs inconsistent, union_shard_errors=missing
- Hmm this may be something else then. David, does it look familiar?
- 08:40 AM Feature #43324: Make zlib windowBits configurable for compression
- Xiyuan Wang wrote:
> Now the zlib windowBits is hardcoding as -15[1]. But it should be set to different value for di...
- 03:38 AM Feature #43324 (Resolved): Make zlib windowBits configurable for compression
- Now the zlib windowBits is hardcoded as -15[1], but it should be set to different values for different cases.
Accor...
- 07:27 AM Backport #43325 (In Progress): luminous: wrong datatype describing crush_rule
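The -15 value reported in Feature #43324 above can be illustrated with Python's stdlib zlib (a sketch; the payload is made up). A negative wbits means raw deflate with no zlib header or trailer, and its magnitude is the log2 of the window size, so -15 means a 32 KiB window; a smaller window such as -9 uses less memory but may compress worse, which is the motivation for making it configurable.

```python
import zlib

data = b"a configurable window " * 512

def roundtrip(wbits: int) -> bytes:
    # Negative wbits = raw deflate; |wbits| = log2 of the window size.
    comp = zlib.compressobj(level=6, wbits=wbits)
    blob = comp.compress(data) + comp.flush()
    decomp = zlib.decompressobj(wbits=wbits)
    return decomp.decompress(blob) + decomp.flush()

# Both window sizes round-trip the data; only speed/memory/ratio differ.
assert roundtrip(-15) == data
assert roundtrip(-9) == data
```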
- 07:24 AM Backport #43325 (New): luminous: wrong datatype describing crush_rule
- 07:24 AM Backport #43325 (Resolved): luminous: wrong datatype describing crush_rule
- https://github.com/ceph/ceph/pull/32267
12/15/2019
- 10:04 PM Documentation #41389 (Pending Backport): wrong datatype describing crush_rule
- 03:55 PM Bug #38076 (Resolved): osds allows to partially start more than N+2
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:53 PM Feature #40528 (Resolved): Better default value for osd_snap_trim_sleep
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:53 PM Backport #43320 (Resolved): mimic: PeeringState::GoClean will call purge_strays unconditionally
- https://github.com/ceph/ceph/pull/33329
- 03:53 PM Backport #43319 (Resolved): nautilus: PeeringState::GoClean will call purge_strays unconditionally
- https://github.com/ceph/ceph/pull/32847
- 01:27 PM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- Looking at the historical test runs, it seems to have started after [1] but before [2].
[1] http://pulpito.ceph.co...
- 01:30 AM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- http://qa-proxy.ceph.com/teuthology/teuthology-2019-12-02_02:01:02-rbd-master-distro-basic-smithi/4559106/teuthology.log
- 01:29 AM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- http://qa-proxy.ceph.com/teuthology/jdillaman-2019-12-14_17:15:11-rbd-wip-jd-testing-distro-basic-smithi/4603518/teut...
- 06:55 AM Bug #43318 (Need More Info): monitor mark all services(osd mgr) down
- Suddenly, all mgrs and osds in my cluster began to be set to down by the monitor.
the monitor log looks like this
```
...
12/14/2019
- 08:28 AM Documentation #41389 (In Progress): wrong datatype describing crush_rule
- 07:21 AM Documentation #41389 (Pending Backport): wrong datatype describing crush_rule
- 02:42 AM Documentation #41389: wrong datatype describing crush_rule
- Just needs a cherry-pick of 3ed3de6c964ba998d5b18ceb997d1a6dffe355db
- 08:26 AM Backport #43315 (In Progress): mimic:wrong datatype describing crush_rule
- 08:02 AM Backport #43315 (Resolved): mimic:wrong datatype describing crush_rule
- https://github.com/ceph/ceph/pull/32255
- 08:24 AM Backport #43316 (In Progress): nautilus:wrong datatype describing crush_rule
- 08:03 AM Backport #43316 (Resolved): nautilus:wrong datatype describing crush_rule
- https://github.com/ceph/ceph/pull/32254
- 02:50 AM Bug #43307 (In Progress): Remove use of rules batching for upmap balancer
- 02:49 AM Bug #43312 (In Progress): Change default upmap_max_deviation to 5
- 02:06 AM Bug #43312 (Resolved): Change default upmap_max_deviation to 5
- 12:24 AM Bug #43311 (Resolved): asynchronous recovery + backfill might spin pg undersized for a long time
- When an osd that is part of the current up set gets chosen as an
async_recovery_target, it gets removed from the acting ...
- 12:16 AM Bug #43308 (In Progress): negative num_objects can set PG_STATE_DEGRADED
12/13/2019
- 08:40 PM Bug #40963 (Resolved): mimic: MQuery during Deleting state
- 08:40 PM Bug #41317 (Pending Backport): PeeringState::GoClean will call purge_strays unconditionally
- 07:47 PM Bug #43308 (Resolved): negative num_objects can set PG_STATE_DEGRADED
- ...
- 07:05 PM Bug #43296: Ceph assimilate-conf results in config entries which can not be removed
- Alwin from Proxmox provided a work around but this still appears to be a bug:
https://forum.proxmox.com/threads/ceph...
- 04:51 PM Bug #43296: Ceph assimilate-conf results in config entries which can not be removed
- Setting debug_rdb to 5/5 unfortunately doesn't reveal anything:
Commands:...
- 03:37 AM Bug #43296 (Resolved): Ceph assimilate-conf results in config entries which can not be removed
- We assimilated our Ceph configuration file and now have a minimal config file. We are subsequently not able ...
- 04:31 PM Bug #43307 (Resolved): Remove use of rules batching for upmap balancer
Due to the cost of calculations for very large PG/shard counts, we will settle for balancing each pool individually for...
- 03:43 PM Bug #25174 (Can't reproduce): osd: assert failure with FAILED assert(repop_queue.front() == repop...
- 02:43 PM Bug #43306 (Resolved): segv in collect_sys_info
- Run: http://pulpito.ceph.com/teuthology-2019-12-13_02:25:03-upgrade:luminous-x-nautilus-distro-basic-smithi/
Job: '4...
- 02:40 PM Bug #43305 (Won't Fix): "psutil.NoSuchProcess process no longer exists" error in luminous-x-nauti...
- Run: http://pulpito.ceph.com/teuthology-2019-12-13_02:25:03-upgrade:luminous-x-nautilus-distro-basic-smithi/
Jobs: '...
- 08:23 AM Backport #42259 (Resolved): nautilus: document new option mon_max_pg_per_osd
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31300
m...
- 08:22 AM Backport #40947 (Resolved): luminous: Better default value for osd_snap_trim_sleep
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31857
m...
- 08:22 AM Backport #38205 (Resolved): luminous: osds allows to partially start more than N+2
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31858
m...
- 08:22 AM Backport #43093 (Resolved): luminous: Improve OSDMap::calc_pg_upmaps() efficiency
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31992
m...
- 06:17 AM Bug #40712: ceph-mon crash with assert(err == 0) after rocksdb->get
- We met this problem recently.
We believe it is related more to rocksdb than to ceph.
12/12/2019
- 04:41 PM Backport #40947: luminous: Better default value for osd_snap_trim_sleep
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31857
merged
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
- 04:41 PM Backport #38205: luminous: osds allows to partially start more than N+2
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31858
merged - 04:40 PM Backport #43093: luminous: Improve OSDMap::calc_pg_upmaps() efficiency
- David Zafman wrote:
> https://github.com/ceph/ceph/pull/31992
merged - 10:16 AM Bug #43174: pgs inconsistent, union_shard_errors=missing
- Greg thanks for the reply.
Greg Farnum wrote:
> If you fetch an object in RGW and its backing RADOS objects are m...
- 09:41 AM Bug #38330 (Resolved): osd/OSD.cc: 1515: abort() in Service::build_incremental_map_msg
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:23 AM Backport #43119 (Resolved): mimic: osd/OSD.cc: 1515: abort() in Service::build_incremental_map_msg
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32000
m...
- 08:44 AM Bug #43193: "ceph ping mon.<id>" cannot work
- The command "ceph ping mon.a" or "ceph ping mon.b" or "ceph ping mon.c" works fine.
If the mon id is not specified, ...
- 05:31 AM Bug #41317 (Fix Under Review): PeeringState::GoClean will call purge_strays unconditionally
- 12:04 AM Bug #43267 (Rejected): unexpected error in BlueStore::_txc_add_transaction
- 12:02 AM Bug #43267: unexpected error in BlueStore::_txc_add_transaction
- Nope, it was full. Well spotted:...
12/11/2019
- 11:28 PM Bug #43267: unexpected error in BlueStore::_txc_add_transaction
This is caused by an out of space condition that won't usually happen. Check your BlueStore configuration.
Is ...
- 10:21 PM Bug #43267: unexpected error in BlueStore::_txc_add_transaction
- This is simply out-of-space condition, see:
-6> 2019-12-11T16:13:44.466-0500 7fcbe4ecd700 -1 bluestore(/build/ce...
- 09:39 PM Bug #43267 (Rejected): unexpected error in BlueStore::_txc_add_transaction
- I was testing kcephfs vs. a vstart cluster and the OSD crashed. fsstress was running at the time, so it was being kep...
- 10:26 PM Bug #43268 (New): Restrict admin socket commands more from the Ceph tool
- https://bugzilla.redhat.com/show_bug.cgi?id=1780458
It sounds like we've given admin socket access to any cephx us...
- 10:17 PM Bug #43106 (Resolved): mimic: crash in build_incremental_map_msg
- Marking this resolved as all the backports are now in place.
- 10:17 PM Bug #43174 (Closed): pgs inconsistent, union_shard_errors=missing
- If you fetch an object in RGW and its backing RADOS objects are missing, it just fills in the space with zeros. It so...
- 10:15 PM Bug #43173 (Duplicate): pgs inconsistent, union_shard_errors=missing
- 08:07 PM Bug #43266 (Fix Under Review): common: admin socket compiler warning
- 08:03 PM Bug #43266 (Resolved): common: admin socket compiler warning
- ...
- 01:38 PM Backport #43257 (Resolved): mimic: monitor config store: Deleting logging config settings does no...
- https://github.com/ceph/ceph/pull/33327
- 01:38 PM Backport #43256 (Resolved): nautilus: monitor config store: Deleting logging config settings does...
- https://github.com/ceph/ceph/pull/32846
- 04:05 AM Bug #42964 (Pending Backport): monitor config store: Deleting logging config settings does not de...
12/10/2019
- 08:44 PM Backport #40890 (In Progress): mimic: Pool settings aren't populated to OSD after restart.
- 08:41 PM Backport #40891 (In Progress): nautilus: Pool settings aren't populated to OSD after restart.
- 08:34 PM Backport #43246 (Resolved): nautilus: Nearfull warnings are incorrect
- https://github.com/ceph/ceph/pull/32773
- 08:29 PM Backport #43245 (Resolved): nautilus: osd: increase priority in certain OSD perf counters
- https://github.com/ceph/ceph/pull/32845
- 08:25 PM Backport #43239 (Resolved): nautilus: ok-to-stop incorrect for some ec pgs
- https://github.com/ceph/ceph/pull/32844
- 08:24 PM Backport #43232 (Rejected): nautilus: pgs stuck in laggy state
- 04:10 PM Bug #42346 (Pending Backport): Nearfull warnings are incorrect
- 03:26 PM Bug #42961 (Pending Backport): osd: increase priority in certain OSD perf counters
- 02:51 PM Bug #43189 (Pending Backport): pgs stuck in laggy state
- I'm not sure whether we should backport this to nautilus or not. We only noticed qa failures because the new octopus...
- 02:50 PM Bug #43189 (Resolved): pgs stuck in laggy state
- 01:48 AM Bug #43048: nautilus: upgrade/mimic-x/stress-split: failed to recover before timeout expired
- /a/yuriw-2019-12-06_21:30:44-upgrade:mimic-x-nautilus-distro-basic-smithi/4576681