Activity
From 12/16/2019 to 01/14/2020
01/14/2020
- 09:28 PM Bug #40649: set_mon_vals failed to set cluster_network = 10.1.2.0/24: Configuration option 'clust...
- FYI, I was able to remove the config settings with:
$ ceph config rm <who> <what>
followed by
$ ceph config ...
- 08:13 PM Bug #43485 (Resolved): Deprecated full/nearfull added back by mistake
- 03:46 PM Bug #43582 (In Progress): rebuild-mondb doesn't populate mgr commands -> pg dump EINVAL
- 03:46 PM Bug #43582: rebuild-mondb doesn't populate mgr commands -> pg dump EINVAL
- I double-checked update_mgrmap() in ceph_monstore_tool.cc, which is called when handling the rebuild subcommand. Will try...
- 01:50 PM Bug #43597: stuck waiting for pg to advance to epoch
- 1.c...
- 01:36 PM Bug #43597 (New): stuck waiting for pg to advance to epoch
- ...
- 08:58 AM Bug #43306: segv in collect_sys_info
- https://github.com/ceph/ceph/pull/32630 is posted to avoid using fgets().
- 08:40 AM Documentation #4568 (Closed): FAQ entry for changing journal size/moving journal
- This bug has been judged too old to fix. This is because it is either 1) raised against a version of Ceph prio...
- 08:39 AM Documentation #3466 (Closed): rados manpage: bench still documents "read" rather than "seq/rand"
- This bug has been judged too old to fix. This is because it is either 1) raised against a version of Ceph prio...
- 08:37 AM Documentation #3447 (Closed): doc: how to recover from a failed journal device
- This bug has been judged too old to fix. This is because it is either 1) raised against a version of Ceph prio...
- 08:36 AM Documentation #3218 (Closed): Doc: osdmaptool manpage out of date with code *and* usage
- This bug has been judged too old to fix. This is because it is either 1) raised against a version of Ceph prio...
- 08:35 AM Documentation #3166 (Closed): doc: Explain OSD up/down, in/out: what does it mean, where does it ...
- This bug has been judged too old to fix. This is because it is either 1) raised against a version of Ceph prio...
- 08:34 AM Documentation #3054 (Closed): doc: omap, tmap, xattrs
- This bug has been judged too old to fix. This is because it is either 1) raised against a version of Ceph prio...
- 08:32 AM Documentation #2272 (Closed): FAQs: RADOS reliability and availability
- This bug has been judged too old to fix. This is because it is either 1) raised against a version of Ceph prio...
- 02:35 AM Bug #43592 (Resolved): osd-recovery-space.sh has a race
The function wait_for_state() returns success when there are no PGs in a selected state. The test's purpose of wai...
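The race above can be sketched in a few lines. This is a hypothetical Python stand-in for the shell helper, not the actual test code: the point is that a waiter must see the state appear before treating "no PGs in state" as success, otherwise "not started yet" is indistinguishable from "already finished".

```python
import time

def wait_for_state(get_pgs_in_state, state, timeout=5.0, poll=0.05):
    """Race-aware sketch: only report success after the state has
    actually been entered and then left again."""
    deadline = time.monotonic() + timeout
    seen = False
    while time.monotonic() < deadline:
        if get_pgs_in_state(state):
            seen = True          # the state has actually been entered
        elif seen:
            return True          # entered and then left: really done
        time.sleep(poll)
    return False                 # timed out, or the state never appeared
```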
01/13/2020
- 10:06 PM Backport #43532 (Resolved): luminous: Change default upmap_max_deviation to 5
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32586
m...
- 10:05 PM Backport #39474 (Resolved): luminous: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32349
m...
- 10:04 PM Backport #41730 (Resolved): luminous: osd/ReplicatedBackend.cc: 1349: FAILED ceph_assert(peer_mis...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31855
m...
- 08:28 PM Bug #43591 (New): /sbin/fstrim can interfere with umount
- ...
- 08:13 PM Bug #43306 (Fix Under Review): segv in collect_sys_info
- 08:11 PM Bug #43306: segv in collect_sys_info
- #38296 changed the buffer to 1024 chars, but /proc/cpuinfo can be bigger than that, too. On smithi (8 CPUs), it's 9...
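The failure mode above can be illustrated with a small Python sketch (the real fix is in the C++ code via PR #32630; `read_all` and the sample text here are made up for the demo): a single fixed-size read silently drops everything past the buffer, while reading to EOF in chunks does not.

```python
import io

def read_all(f, bufsize=1024):
    # Read until EOF in bufsize chunks instead of relying on one
    # fixed-size read, which silently truncates longer input.
    chunks = []
    while True:
        chunk = f.read(bufsize)
        if not chunk:
            break
        chunks.append(chunk)
    return "".join(chunks)

# Fake /proc/cpuinfo content, comfortably bigger than 1024 chars
# (as it is on machines with many CPUs).
cpuinfo = "processor\t: 0\nmodel name\t: Fake CPU @ 2.00GHz\n" * 64

truncated = io.StringIO(cpuinfo).read(1024)   # fixed buffer: cut off
complete = read_all(io.StringIO(cpuinfo))     # chunked read: intact
```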
- 02:47 PM Bug #43587 (Resolved): mon shutdown timeout (race with async compaction)
- ...
- 02:42 PM Bug #39555: backfill_toofull while OSDs are not full (Unneccessary HEALTH_ERR)
- FWIW, I am seeing this issue after an upgrade from 12.2.12 to 14.2.6.
The status is HEALTH_WARN not HEALTH_ERR but...
- 02:21 PM Bug #43404: mon crash in OSDMap::_pg_to_raw_osds from update_pending_pgs
- /a/sage-2020-01-12_21:37:03-rados-wip-sage-testing-2020-01-12-0621-distro-basic-smithi/4660728...
- 02:16 PM Bug #43584 (Resolved): MON_DOWN during mon_join process
- /a/sage-2020-01-12_21:37:03-rados-wip-sage-testing-2020-01-12-0621-distro-basic-smithi/4660691...
- 02:02 PM Bug #43582 (Resolved): rebuild-mondb doesn't populate mgr commands -> pg dump EINVAL
- ...
- 01:39 PM Bug #43580 (Fix Under Review): pg: fastinfo incorrect when last_update moves backward in time
- 01:05 PM Bug #43580 (Resolved): pg: fastinfo incorrect when last_update moves backward in time
- If, during peering, last_update moves backwards, we may rewrite the full info but leave a fastinfo record in place wi...
- 12:24 PM Bug #42821 (Resolved): src/msg/async/net_handler.cc: Fix compilation
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:23 PM Bug #43454 (Resolved): ceph monitor crashes after updating 'mon_memory_target' config setting.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:19 PM Backport #43495 (Resolved): nautilus: ceph monitor crashes after updating 'mon_memory_target' con...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32520
m...
- 12:13 PM Backport #42997 (Resolved): nautilus: acting_recovery_backfill won't catch all up peers
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32064
m...
- 12:13 PM Backport #42853 (Resolved): nautilus: format error: ceph osd stat --format=json
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32062
m...
- 12:12 PM Backport #42846 (Resolved): nautilus: src/msg/async/net_handler.cc: Fix compilation
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31736
m... - 07:38 AM Bug #43555: raw usage is far from total pool usage
- only overwrite file
01/12/2020
- 09:29 PM Backport #43532: luminous: Change default upmap_max_deviation to 5
- David Zafman wrote:
> https://github.com/ceph/ceph/pull/32586
merged
01/10/2020
- 11:36 PM Bug #43555 (New): raw usage is far from total pool usage
- ceph -v
ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)...
- 10:32 PM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- http://pulpito.ceph.com/nojha-2020-01-10_19:11:03-rbd:mirror-thrash-master-distro-basic-smithi/4653675/
Observatio...
- 08:30 PM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- Reproduces with -s rbd:mirror-thrash and --filter 'rbd-mirror-fsx-workunit'
http://pulpito.ceph.com/nojha-2020-01-...
- 10:03 PM Bug #43553 (Can't reproduce): mon: client mon_status fails
- ...
- 09:07 PM Bug #40649: set_mon_vals failed to set cluster_network = 10.1.2.0/24: Configuration option 'clust...
- This also happened to me during an upgrade from Luminous to Nautilus.
The cluster/public networks were not defined...
- 07:26 PM Bug #43552 (Resolved): nautilus: OSDMonitor: SIGFPE in OSDMonitor::share_map_with_random_osd
- ...
- 02:45 PM Bug #43365: Nautilus: Random mon crashes in failed assertion at ceph::time_detail::signedspan
- We are also running into this issue.
Jan 07 19:03:42 pmxc05 ceph-mon[3701783]: 2020-01-07 19:03:42.625 7fe59c03d...
- 01:39 PM Bug #39665 (Resolved): kstore: memory may leak on KStore::_do_read_stripe
- 01:34 PM Bug #43412 (Resolved): cephadm ceph_manager IndexError: list index out of range
- 04:55 AM Backport #43532 (In Progress): luminous: Change default upmap_max_deviation to 5
- 04:54 AM Backport #43531 (In Progress): mimic: Change default upmap_max_deviation to 5
01/09/2020
- 10:00 PM Bug #42328 (New): osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- This issue is still occurring with today's master branch:
http://qa-proxy.ceph.com/teuthology/jdillaman-2020-01-09...
- 04:56 PM Backport #43495: nautilus: ceph monitor crashes after updating 'mon_memory_target' config setting.
- Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/32520
merged
- 02:28 AM Bug #43412 (Fix Under Review): cephadm ceph_manager IndexError: list index out of range
- 12:39 AM Backport #43529 (In Progress): nautilus: Remove use of rules batching for upmap balancer
- 12:27 AM Backport #43529 (Resolved): nautilus: Remove use of rules batching for upmap balancer
- https://github.com/ceph/ceph/pull/31956
- 12:39 AM Backport #43530 (In Progress): nautilus: Change default upmap_max_deviation to 5
- 12:28 AM Backport #43530 (Resolved): nautilus: Change default upmap_max_deviation to 5
- https://github.com/ceph/ceph/pull/31956
- 12:28 AM Backport #43532 (Resolved): luminous: Change default upmap_max_deviation to 5
- https://github.com/ceph/ceph/pull/32586
- 12:28 AM Backport #43531 (Resolved): mimic: Change default upmap_max_deviation to 5
- https://github.com/ceph/ceph/pull/31957
01/08/2020
- 10:23 PM Bug #43312 (Pending Backport): Change default upmap_max_deviation to 5
- 10:10 PM Bug #43307 (Pending Backport): Remove use of rules batching for upmap balancer
- 10:09 PM Bug #43397 (Fix Under Review): FS_DEGRADED to cluster log despite --no-mon-health-to-clog
- 10:04 PM Bug #43412: cephadm ceph_manager IndexError: list index out of range
- Kefu's got a PR for this
- 05:31 AM Bug #43412: cephadm ceph_manager IndexError: list index out of range
- I'm guessing it's caused by there being no pools at the time, so the random choice fails. Maybe we need to do somethi...
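The traceback is consistent with Python's random.choice raising IndexError on an empty sequence. A minimal guard could look like this; `choose_pool` is a hypothetical helper for illustration, not the actual ceph_manager code:

```python
import random

def choose_pool(pools):
    # random.choice([]) raises "IndexError: list index out of range",
    # matching the reported failure when no pools exist yet.
    if not pools:
        return None        # or retry/wait until a pool is created
    return random.choice(pools)
```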
- 10:02 PM Bug #43422: qa/standalone/mon/osd-pool-create.sh fails to grep utf8 pool name
- probably need to set LANG to utf8
- 08:23 AM Bug #43185: ceph -s not showing client activity
- We run 14.2.4. I see the mgr process at 100% sometimes, and I've been told that the reason for the lack of activity shown might be...
- 02:24 AM Bug #43520 (In Progress): segfault in kstore's pending stripes
- 02:23 AM Bug #43520: segfault in kstore's pending stripes
- ceph version 14.2.1-700.3.0.2.407 (c823e6bbf85437561d2165c0f4b5d8c6bd726975) nautilus (stable)
1: (()+0xf5e0) [0x7f...
- 02:20 AM Bug #43520 (In Progress): segfault in kstore's pending stripes
01/07/2020
- 02:46 PM Documentation #41389 (Resolved): wrong datatype describing crush_rule
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:45 PM Bug #42177 (Resolved): osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:43 PM Bug #42906 (Resolved): ceph-mon --mkfs: public_address type (v1|v2) is not respected
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:25 AM Backport #43495 (In Progress): nautilus: ceph monitor crashes after updating 'mon_memory_target' ...
- 10:24 AM Backport #43495 (New): nautilus: ceph monitor crashes after updating 'mon_memory_target' config s...
- 10:01 AM Backport #43495 (Resolved): nautilus: ceph monitor crashes after updating 'mon_memory_target' con...
- https://github.com/ceph/ceph/pull/32520
- 09:34 AM Bug #43454: ceph monitor crashes after updating 'mon_memory_target' config setting.
- Tested the fix without using rocksdb and confirmed that the crash is not observed now:
2020-01-07T12:53:09.942+053...
- 08:41 AM Bug #43454 (Pending Backport): ceph monitor crashes after updating 'mon_memory_target' config set...
- 02:46 AM Backport #39474: luminous: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32349
merged
- 02:45 AM Backport #41730: luminous: osd/ReplicatedBackend.cc: 1349: FAILED ceph_assert(peer_missing.count(...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31855
merged
01/06/2020
- 11:21 PM Bug #43490 (New): nautilus: "[WRN] Monitor daemon marked osd.2 down, but it is still running" in ...
- Run: http://pulpito.ceph.com/yuriw-2020-01-04_16:08:12-rados-wip-yuri8-testing-2020-01-03-2031-nautilus-distro-basic-...
- 11:18 PM Backport #42997: nautilus: acting_recovery_backfill won't catch all up peers
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32064
merged
- 11:17 PM Backport #42853: nautilus: format error: ceph osd stat --format=json
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32062
merged
- 11:16 PM Backport #42846: nautilus: src/msg/async/net_handler.cc: Fix compilation
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31736
merged
- 10:23 PM Bug #43489 (New): PG.cc: 953: FAILED assert(0 == "past_interval start interval mismatch")
Upgrade runs from Jewel to Luminous and Luminous to Mimic
yuriw-2019-12-23_19:53:50-rados-wip-yuri3-testing-2019...
- 07:21 PM Bug #41718 (Resolved): ceph osd stat JSON output incomplete
- 07:21 PM Bug #43485 (Fix Under Review): Deprecated full/nearfull added back by mistake
- 07:16 PM Bug #43485 (Resolved): Deprecated full/nearfull added back by mistake
The change for
https://tracker.ceph.com/issues/41718 (dff411f1905cc69bfb2cfa8b62a00b4702e6aa46)
also added back...
- 06:26 PM Backport #43325 (Resolved): luminous: wrong datatype describing crush_rule
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32267
m...
- 06:25 PM Backport #43315 (Resolved): mimic:wrong datatype describing crush_rule
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32255
m...
- 06:24 PM Backport #42197 (Resolved): nautilus: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31028
m...
- 06:23 PM Backport #43140 (Resolved): nautilus: ceph-mon --mkfs: public_address type (v1|v2) is not respected
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32028
m...
- 01:31 PM Backport #43473 (Resolved): nautilus: recursive lock of OpTracker::lock (70)
- https://github.com/ceph/ceph/pull/32858
- 01:30 PM Backport #43472 (Resolved): mimic: negative num_objects can set PG_STATE_DEGRADED
- https://github.com/ceph/ceph/pull/33331
- 01:30 PM Backport #43471 (Resolved): nautilus: negative num_objects can set PG_STATE_DEGRADED
- https://github.com/ceph/ceph/pull/32857
- 01:30 PM Backport #43470 (Rejected): mimic: asynchronous recovery + backfill might spin pg undersized for ...
- https://github.com/ceph/ceph/pull/33330
- 01:30 PM Backport #43469 (Resolved): nautilus: asynchronous recovery + backfill might spin pg undersized f...
- https://github.com/ceph/ceph/pull/32849
- 08:00 AM Bug #42861 (Fix Under Review): Libceph-common.so needs to use private link attribute when includi...
01/04/2020
- 03:15 PM Bug #43334 (Resolved): nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubunt...
- I've recompiled cmake3 for xenial/amd64 with GCC-5, and uploaded the built packages to the chacra repo. Please reopen...
- 02:38 AM Bug #43334: nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubuntu_16.04.yaml
- I need to rebuild cmake3 using the original libstdc++ instead of the one from the gcc-8/gcc-9 ppa repo.
01/03/2020
- 11:54 PM Bug #43421 (Fix Under Review): mon spends too much time to build incremental osdmap
- 10:09 AM Bug #43421: mon spends too much time to build incremental osdmap
- It takes 5 seconds to build 640 incremental osdmaps for one client.
- 08:15 AM Bug #43421: mon spends too much time to build incremental osdmap
- Sorry, it took 5 seconds.
- 11:49 PM Bug #43185 (Need More Info): ceph -s not showing client activity
- super xor wrote:
> Possible relation to https://tracker.ceph.com/issues/43364 and https://tracker.ceph.com/issues/43...
- 10:48 PM Bug #43311 (Pending Backport): asynchronous recovery + backfill might spin pg undersized for a lo...
- 09:01 PM Feature #40870: Implement mon_memory_target
- Another follow-on fix: https://github.com/ceph/ceph/pull/32473
- 09:00 PM Bug #43454 (Fix Under Review): ceph monitor crashes after updating 'mon_memory_target' config set...
- 08:24 AM Bug #43454 (Resolved): ceph monitor crashes after updating 'mon_memory_target' config setting.
- Refer bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1760257 for more details.
- 08:06 PM Backport #42197: nautilus: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31028
merged
- 04:39 PM Bug #43334: nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubuntu_16.04.yaml
- /a/yuriw-2019-12-23_20:23:51-rados-wip-yuri-testing-2019-12-16-2241-nautilus-distro-basic-smithi/4628899/
01/02/2020
- 03:41 PM Bug #43403: unittest_lockdep unreliable
- Happened in https://github.com/ceph/ceph/pull/27792 (among others)
01/01/2020
- 11:01 AM Documentation #42315: Improve rados command usage, man page and tutorial
- RADOS(8) Ceph RADOS(8)
NAME
rados - rados object s...
- 10:52 AM Documentation #42315: Improve rados command usage, man page and tutorial
- [zdover@192-168-1-112 ~]$ rados -h
usage: rados [options] [commands]
POOL COMMANDS
lspools ...
12/25/2019
- 03:24 PM Bug #43422 (Resolved): qa/standalone/mon/osd-pool-create.sh fails to grep utf8 pool name
- ...
- 12:33 PM Bug #43421: mon spends too much time to build incremental osdmap
- In my cluster, it took five minutes to build 1300 versions of incremental osdmap.
patch: https://github.com/ceph/ceph/...
- 09:49 AM Bug #43421 (Fix Under Review): mon spends too much time to build incremental osdmap
- If a client's osdmap version is too low, the mon spends too much time building incremental osdmaps.
The mon can't handle norma...
12/24/2019
- 05:03 AM Bug #43308 (Pending Backport): negative num_objects can set PG_STATE_DEGRADED
- 05:02 AM Bug #42780 (Pending Backport): recursive lock of OpTracker::lock (70)
- 01:53 AM Bug #43413 (New): Virtual IP address of iface lo results in failing to start an OSD
- We added a virtual IP on the loopback interface lo to complete the LVS configuration....
12/23/2019
- 11:54 PM Bug #43412 (Resolved): cephadm ceph_manager IndexError: list index out of range
- ...
- 08:26 PM Backport #43140: nautilus: ceph-mon --mkfs: public_address type (v1|v2) is not respected
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32028
merged
Reviewed-by: Ricardo Dias <rdias@suse.com>
- 02:18 PM Bug #43174: pgs inconsistent, union_shard_errors=missing
- Hi David.
> Are you running your own Ceph build?
No, we use the official (community) build.
> Sortbitwise needed to...
12/21/2019
12/20/2019
- 11:39 PM Bug #42328 (Resolved): osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- I can't check the original reports (logs have been removed), but assuming it's the same root cause PR #32382 5bb932c3...
- 01:31 AM Bug #42328: osd/PrimaryLogPG.cc: 3962: ceph_abort_msg("out of order op")
- I observed something similar on a ceph_test_rados teuthology run: sjust-2019-12-19_20:05:13-rados-wip-sjust-read-from...
- 11:37 PM Bug #43394 (Resolved): crimson::dmclock segv in crimson::IndIntruHeap
- Should be fixed with PR #32380 2c9542901532feafd569d92e9f67ccd2e1af3129
- 08:53 PM Bug #43403 (Resolved): unittest_lockdep unreliable
- ...
- 08:22 AM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
- Hi David:
Good to know the bug is indeed fixed ... too bad it didn't make it in 13.2.8. Anyways ... building patch...
- 04:50 AM Bug #38345 (In Progress): mon: segv in MonOpRequest::~MonOpRequest OpHistory::cleanup
- 01:50 AM Bug #43174: pgs inconsistent, union_shard_errors=missing
Scrub incorrectly thinks the object really isn't there, but we know it is.
The way that you can see missing obje...
12/19/2019
- 11:57 PM Bug #42780 (Fix Under Review): recursive lock of OpTracker::lock (70)
- https://github.com/ceph/ceph/pull/32364
- 12:09 PM Bug #42780 (In Progress): recursive lock of OpTracker::lock (70)
- 10:30 PM Bug #43307 (Fix Under Review): Remove use of rules batching for upmap balancer
- 10:27 PM Bug #43397 (Resolved): FS_DEGRADED to cluster log despite --no-mon-health-to-clog
- ...
- 09:38 PM Bug #43394 (Resolved): crimson::dmclock segv in crimson::IndIntruHeap
- ...
- 07:06 PM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
- A backport to Mimic of the fix can be found here:
https://github.com/ceph/ceph/pull/32361
Or if you can build fro...
- 02:34 PM Bug #41255: backfill_toofull seen on cluster where the most full OSD is at 1%
- We added a CRUSH policy (replicated_nvme) and set this policy on our cephfs metadata pool (with 1.2 Bilion objects) a...
- 07:02 PM Backport #41584 (In Progress): mimic: backfill_toofull seen on cluster where the most full OSD is...
- 02:29 PM Bug #43306: segv in collect_sys_info
- Neha Ojha wrote:
> This looks similar to https://tracker.ceph.com/issues/38296, though the mon seems to have been up...
- 02:22 PM Backport #39474 (In Progress): luminous: segv in fgets() in collect_sys_info reading /proc/cpuinfo
- 02:18 PM Bug #41383 (Resolved): scrub object count mismatch on device_health_metrics pool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 02:14 PM Backport #42739 (Resolved): nautilus: scrub object count mismatch on device_health_metrics pool
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/31735
m...
- 07:39 AM Bug #43382: medium io/system load causes quorum failure
- Or due to limited bandwidth? 10G NICs dedicated.
- 07:36 AM Bug #43382 (New): medium io/system load causes quorum failure
- We just found out that if you put some I/O pressure on the system, e.g. with a big rsync, the mon process has issues proba...
- 05:44 AM Bug #43126 (Fix Under Review): OSD_SLOW_PING_TIME_BACK nits
- 02:20 AM Bug #43318: monitor mark all services(osd mgr) down
- mgr has no log when setting the debug_mgr to 40.
12/18/2019
- 10:31 PM Bug #43193 (Need More Info): "ceph ping mon.<id>" cannot work
- Can you provide the sequence of commands that fail? Also, please attach the monitor names and monmap.
- 10:25 PM Bug #43305 (Won't Fix): "psutil.NoSuchProcess process no longer exists" error in luminous-x-nauti...
- This is an infra issue....
- 10:23 PM Bug #43306: segv in collect_sys_info
- This looks similar to https://tracker.ceph.com/issues/38296, though the mon seems to have been upgraded to nautilus(w...
- 10:17 PM Bug #43318 (Need More Info): monitor mark all services(osd mgr) down
- Can you provide mgr logs from when this happened?
- 10:12 PM Feature #43377 (Resolved): Make Zstandard compression level a configurable option
- I've played with using the different compression algorithms on the RGWs and the default compression level for Zstanda...
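The level/size trade-off behind this request can be illustrated with Python's stdlib zlib (used here only because Zstandard bindings are not in the stdlib; the exact numbers differ for zstd, but the shape of the trade-off is the same):

```python
import zlib

# Highly compressible sample payload (64 KiB).
data = b"0123456789abcdef" * 4096

fast = zlib.compress(data, 1)   # low level: fastest, biggest output
best = zlib.compress(data, 9)   # high level: slowest, smallest output
```

Making the level configurable lets operators pick a point on this curve per pool or per gateway instead of living with the library default.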
- 07:38 PM Backport #42739: nautilus: scrub object count mismatch on device_health_metrics pool
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31735
merged
- 03:53 PM Backport #43316 (Resolved): nautilus:wrong datatype describing crush_rule
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32254
m...
- 12:11 PM Bug #43365: Nautilus: Random mon crashes in failed assertion at ceph::time_detail::signedspan
- So it's asserting inside of to_timespan, and the Paxos code triggering that assert is
> auto start = ceph::coarse_...
- 12:03 PM Bug #43365 (Resolved): Nautilus: Random mon crashes in failed assertion at ceph::time_detail::sig...
- Thanks to 14.2.5 auto warning for recent crashes, we are observing frequent (somewhat daily period) random crashes of...
- 09:35 AM Bug #43185: ceph -s not showing client activity
- Possible relation to https://tracker.ceph.com/issues/43364 and https://tracker.ceph.com/issues/43317
12/17/2019
- 05:39 PM Bug #43308 (Fix Under Review): negative num_objects can set PG_STATE_DEGRADED
- 09:19 AM Backport #43346 (Resolved): nautilus: short pg log + cache tier ceph_test_rados out of order reply
- https://github.com/ceph/ceph/pull/32848
- 06:47 AM Bug #41950 (Can't reproduce): crimson compile
- 06:46 AM Bug #41950: crimson compile
- I assume that you were trying to compile crimson-osd, not crimson-old. Please check the submodule of seastar to unders...
12/16/2019
- 10:36 PM Bug #43296 (Need More Info): Ceph assimilate-conf results in config entries which can not be removed
- Can you attach the (relevant) output from "ceph config-key dump | grep config"? I think the keys are being installed...
- 10:22 PM Bug #43296: Ceph assimilate-conf results in config entries which can not be removed
- Might be related to #42964?
- 10:06 PM Bug #43334 (Resolved): nautilus: rados/test_envlibrados_for_rocksdb.sh broken packages with ubunt...
- Run: http://pulpito.ceph.com/yuriw-2019-12-15_16:25:11-rados-wip-yuri-nautilus-baseline_12.13.19-distro-basic-smithi/...
- 08:36 PM Bug #38358 (Pending Backport): short pg log + cache tier ceph_test_rados out of order reply
- Seen in nautilus: /a/yuriw-2019-12-15_16:25:11-rados-wip-yuri-nautilus-baseline_12.13.19-distro-basic-smithi/4605500/
- 12:40 PM Bug #43174 (New): pgs inconsistent, union_shard_errors=missing
- Hmm this may be something else then. David, does it look familiar?
- 08:40 AM Feature #43324: Make zlib windowBits configurable for compression
- Xiyuan Wang wrote:
> Now the zlib windowBits is hardcoding as -15[1]. But it should be set to different value for di...
- 03:38 AM Feature #43324 (Resolved): Make zlib windowBits configurable for compression
- Now the zlib windowBits is hard-coded as -15 [1], but it should be set to different values for different cases.
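In Python's zlib, whose wbits parameter mirrors the underlying zlib C API, -15 selects a raw DEFLATE stream (no zlib header/trailer) with the full 32 KiB window. A small sketch of making it a parameter; the helper names here are hypothetical, not Ceph's actual compressor API:

```python
import zlib

def compress_cfg(data, wbits=-15, level=6):
    # wbits=-15: raw DEFLATE, the hard-coded default described above.
    # Values from -9 to -15 trade window size (memory) for ratio.
    c = zlib.compressobj(level, zlib.DEFLATED, wbits)
    return c.compress(data) + c.flush()

def decompress_cfg(blob, wbits=-15):
    # Decompression must use a window at least as large as the one
    # used for compression, so the setting has to match on both sides.
    d = zlib.decompressobj(wbits)
    return d.decompress(blob) + d.flush()
```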
Accor...
- 07:27 AM Backport #43325 (In Progress): luminous: wrong datatype describing crush_rule
- 07:24 AM Backport #43325 (New): luminous: wrong datatype describing crush_rule
- 07:24 AM Backport #43325 (Resolved): luminous: wrong datatype describing crush_rule
- https://github.com/ceph/ceph/pull/32267