Activity
From 08/21/2020 to 09/19/2020
09/19/2020
09/18/2020
- 10:29 PM Documentation #47522 (Duplicate): Document "ceph df detail"
- 09:34 PM Bug #47508 (Fix Under Review): Multiple read errors cause repeated entry/exit recovery for each e...
- 12:38 AM Bug #47508: Multiple read errors cause repeated entry/exit recovery for each error
- Without this fix every object is a recovery. Only with 2 added dout()s....
- 09:32 PM Bug #47509: test: ceph_test_argparse.py no longer passes/is correctly invoked in CI
- We'll revisit this test to see what's going on but not urgent IMO.
- 07:17 PM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- this still fails consistently
/a/teuthology-2020-09-17_07:01:02-rados-master-distro-basic-smithi/5443303
- 02:01 AM Feature #39012 (Resolved): osd: distinguish unfound + impossible to find, vs start some down OSDs...
09/17/2020
- 10:55 PM Bug #47492 (Fix Under Review): tools/osdmaptool.cc: fix inaccurate pg map result when simulating ...
- 09:31 PM Documentation #47523 (Resolved): ceph df documentation is outdated
- Fields have changed meaning and new ones have been added in the 'ceph df detail' output:
https://github.com/ceph/...
- 09:07 PM Documentation #47522 (Closed): Document "ceph df detail"
- The output of "ceph df detail" has a lot of useful information, including per-pool stats. Let's document whatever is n...
- 07:03 PM Backport #47092 (Resolved): nautilus: mon: stuck osd_pgtemp message forwards
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/37171
m...
- 07:03 PM Backport #47296 (Resolved): nautilus: osdmaps aren't being cleaned up automatically on healthy cl...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36982
m...
- 07:02 PM Backport #47257: nautilus: Add pg count for pools in the `ceph df` command
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36944
m...
- 04:46 PM Feature #47519: Gracefully detect MTU mismatch
- Good suggestion; the pings already use large sizes, but we don't particularly warn about MTU right now, you just...
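A quick manual probe for an MTU mismatch (a sketch; the peer address and sizes are examples for a 9000-byte path MTU):
    # do-not-fragment pings just under and just over the expected path MTU
    $ ping -M do -s 8972 10.0.0.2    # 9000 minus 28 bytes of IP/ICMP headers; should succeed
    $ ping -M do -s 8973 10.0.0.2    # should fail if the path MTU is really 9000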
- 04:22 PM Feature #47519 (New): Gracefully detect MTU mismatch
- I ran into an issue this morning with flapping OSDs. The ...
- 03:59 AM Bug #47508 (In Progress): Multiple read errors cause repeated entry/exit recovery for each error
- 01:08 AM Bug #47509 (New): test: ceph_test_argparse.py no longer passes/is correctly invoked in CI
- Despite the test showing up as passed in Jenkins, it's apparent that most of the actual tests are now being skipped i...
09/16/2020
- 09:32 PM Bug #47180 (Fix Under Review): qa/standalone/mon/mon-handle-forward.sh failure
- 09:06 PM Backport #47092: nautilus: mon: stuck osd_pgtemp message forwards
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/37171
merged
- 07:21 PM Bug #47447 (Resolved): test_osd_cannot_recover (tasks.mgr.test_progress.TestProgress) fails
- 06:15 PM Bug #47508: Multiple read errors cause repeated entry/exit recovery for each error
- ...
- 06:12 PM Bug #47508 (In Progress): Multiple read errors cause repeated entry/exit recovery for each error
- After looking at https://github.com/ceph/ceph/pull/36989 I realized that after the first read error all the other g...
- 04:35 PM Bug #47239 (Resolved): thrashosds.thrasher error in rados
- 03:44 PM Backport #47296: nautilus: osdmaps aren't being cleaned up automatically on healthy cluster
- Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/36982
merged
- 03:22 PM Bug #38219: rebuild-mondb hangs
- /a/yuriw-2020-09-16_01:27:14-rados-wip-yuri3-testing-2020-09-16-0014-nautilus-distro-basic-smithi/5437537/teuthology.log
- 08:42 AM Bug #47492 (Resolved): tools/osdmaptool.cc: fix inaccurate pg map result when simulating osd out
- When simulating osd out, it will always adjust this osd's crush weight to 1.0. Hence the pg map result is not the same as...
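A sketch of exercising the simulation offline (the map path, pool id and osd id are examples):
    # export the current osdmap, then simulate marking osd.3 out and remap the pgs
    $ ceph osd getmap -o /tmp/osdmap
    $ osdmaptool /tmp/osdmap --mark-out 3 --test-map-pgs --pool 1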
- 07:55 AM Backport #47257 (Resolved): nautilus: Add pg count for pools in the `ceph df` command
09/15/2020
- 11:59 PM Backport #47092 (In Progress): nautilus: mon: stuck osd_pgtemp message forwards
- 08:39 PM Backport #47257: nautilus: Add pg count for pools in the `ceph df` command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36944
merged
- 08:35 PM Bug #47239: thrashosds.thrasher error in rados
- /a/gregf-2020-09-14_05:25:36-rados-wip-stretch-mode-distro-basic-smithi/5433134
- 08:07 PM Bug #47239 (Fix Under Review): thrashosds.thrasher error in rados
- 04:13 PM Bug #47440: nautilus: valgrind caught leak in Messenger::ms_deliver_verify_authorizer
- ...
- 12:54 PM Bug #47447 (Fix Under Review): test_osd_cannot_recover (tasks.mgr.test_progress.TestProgress) fails
- 12:02 PM Bug #47223 (Rejected): Change the default value of option osd_async_recovery_min_cost from 100 to 10
- 03:17 AM Bug #47452 (Fix Under Review): invalid values of crush-failure-domain should not be allowed while...
- 02:26 AM Bug #47452 (Resolved): invalid values of crush-failure-domain should not be allowed while creatin...
- # ceph osd erasure-code-profile set testprofile k=4 m=2 crush-failure-domain=x
# ceph osd erasure-code-profile get...
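For reference, valid crush-failure-domain values are the bucket type names defined in the crush map; a hedged way to list them:
    # prints osd, host, rack, etc. for the cluster at hand
    $ ceph osd crush dump | jq -r '.types[].name'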
09/14/2020
- 10:36 PM Bug #47239: thrashosds.thrasher error in rados
- I think 530982129ec131ef78e2f9989abfaeddb0959c65 caused this issue.
- 09:53 PM Bug #47239: thrashosds.thrasher error in rados
- /a/teuthology-2020-09-14_07:01:01-rados-master-distro-basic-smithi/5433978
- 08:42 PM Bug #47447 (Triaged): test_osd_cannot_recover (tasks.mgr.test_progress.TestProgress) fails
- The same test passed with this revert https://github.com/ceph/ceph/pull/37122.
https://pulpito.ceph.com/nojha-2020...
- 08:18 PM Bug #47447 (Resolved): test_osd_cannot_recover (tasks.mgr.test_progress.TestProgress) fails
- ...
- 01:38 PM Bug #47440 (New): nautilus: valgrind caught leak in Messenger::ms_deliver_verify_authorizer
- http://qa-proxy.ceph.com/teuthology/yuriw-2020-09-10_15:32:32-rados-wip-yuri2-testing-2020-09-08-1946-nautilus-distro...
09/13/2020
- 05:50 PM Feature #39012 (Fix Under Review): osd: distinguish unfound + impossible to find, vs start some d...
09/11/2020
- 09:49 PM Bug #47420 (New): nautilus: test_rados.TestIoctx.test_aio_read fails with AssertionError: 5 != 2
- http://qa-proxy.ceph.com/teuthology/yuriw-2020-09-10_22:09:32-rados-wip-yuri2-testing-2020-09-10-0000-nautilus-dist...
- 09:39 PM Bug #47419 (Resolved): make check: src/test/smoke.sh: TEST_multimon: timeout 8 rados -p foo bench...
- The PR did not change code at all. Log is attached.
- 09:19 PM Bug #47344 (Resolved): osd: Poor client IO throughput/latency observed with dmclock scheduler dur...
- 03:03 PM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- Brad, thanks. will create a separate ticket.
- 03:23 AM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- Kefu,
/a/kchai-2020-09-10_16:44:13-rados-wip-kefu-testing-2020-09-10-1633-distro-basic-smithi/5421813/teuthology.l...
- 02:40 AM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- ...
- 05:07 AM Bug #47395 (New): ceph raw used is more than total used in all pools (ceph df detail)
- In my ceph cluster, when I run the *ceph df detail* command it shows the following result...
- 02:36 AM Bug #47024: rados/test.sh: api_tier_pp LibRadosTwoPoolsPP.ManifestSnapRefcount failed
- ...
09/10/2020
- 04:43 AM Bug #24531: Mimic MONs have slow/long running ops
- please note, this fix was backported as f0697a9af54bf866572036bd6d582abd5299d0c8...
09/09/2020
- 12:58 PM Bug #47361: invalid upmap not getting cleaned
- As a workaround for our cluster operations I have removed the unused "rack" level from our osd tree, and now the upm...
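A sketch of inspecting and clearing upmap entries by hand (the pg id is an example):
    # list current upmap exceptions and the crush tree they map against
    $ ceph osd dump | grep pg_upmap_items
    $ ceph osd crush tree
    # drop one bad entry
    $ ceph osd rm-pg-upmap-items 2.1a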
- 10:56 AM Bug #47361: invalid upmap not getting cleaned
- This seems to be still breaking in master.
osdmap is attached.
- 12:32 PM Bug #47380 (Resolved): mon: slow ops due to osd_failure
- ...
- 09:15 AM Backport #47364 (In Progress): luminous: pgs inconsistent, union_shard_errors=missing
09/08/2020
- 06:06 PM Backport #47365 (In Progress): mimic: pgs inconsistent, union_shard_errors=missing
- 02:48 PM Backport #47365 (Resolved): mimic: pgs inconsistent, union_shard_errors=missing
- https://github.com/ceph/ceph/pull/37053
- 05:47 PM Backport #47362 (In Progress): nautilus: pgs inconsistent, union_shard_errors=missing
- 02:01 PM Backport #47362 (Resolved): nautilus: pgs inconsistent, union_shard_errors=missing
- https://github.com/ceph/ceph/pull/37051
- 03:23 PM Backport #47363 (In Progress): octopus: pgs inconsistent, union_shard_errors=missing
- 02:01 PM Backport #47363 (Resolved): octopus: pgs inconsistent, union_shard_errors=missing
- https://github.com/ceph/ceph/pull/37048
- 02:48 PM Backport #47364 (Resolved): luminous: pgs inconsistent, union_shard_errors=missing
- https://github.com/ceph/ceph/pull/37062
- 01:52 PM Bug #43174 (Pending Backport): pgs inconsistent, union_shard_errors=missing
- 01:35 PM Bug #43174 (Resolved): pgs inconsistent, union_shard_errors=missing
- 01:38 PM Bug #47361: invalid upmap not getting cleaned
- We deleted all the pg_upmap_items and let the balancer start again. It created bad upmap rules again in the first ite...
- 01:08 PM Bug #47361 (Rejected): invalid upmap not getting cleaned
- In v14.2.11 we have some invalid upmaps which don't get cleaned. (And I presume they were created by the balancer).
...
- 03:35 AM Bug #47344 (Fix Under Review): osd: Poor client IO throughput/latency observed with dmclock sched...
09/07/2020
- 10:00 PM Bug #47352 (In Progress): rados ls improvements.
- "rados ls" should NOT include deleted head objects, with json output include snapshots but add snapid fields to output.
- 08:09 PM Feature #46842 (Resolved): librados: add LIBRBD_SUPPORTS_GETADDRS support
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 PM Backport #47346 (Resolved): octopus: mon/mon-last-epoch-clean.sh failure
- https://github.com/ceph/ceph/pull/37349
- 08:08 PM Backport #47345 (Resolved): nautilus: mon/mon-last-epoch-clean.sh failure
- https://github.com/ceph/ceph/pull/37478
- 08:04 PM Bug #47344 (Resolved): osd: Poor client IO throughput/latency observed with dmclock scheduler dur...
- Regardless of the higher weight given to client IO compared to recovery IO, poor client throughput/latency i...
09/06/2020
- 06:55 PM Bug #47328 (Resolved): nautilus: ObjectStore/SimpleCloneTest: invalid rm coll
- job getting dead, but not sure what killed it (is it because smithi was not able to handle the said number of threads?...
- 10:26 AM Backport #46932 (Resolved): nautilus: librados: add LIBRBD_SUPPORTS_GETADDRS support
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36853
m...
- 09:11 AM Bug #47299: Assertion in pg_missing_set: p->second.need <= v || p->second.is_delete()
- Neha Ojha wrote:
> Is it possible for you to capture osd logs with debug_osd=30? We'll also try to reproduce this at...
09/04/2020
- 11:15 PM Bug #47309 (Fix Under Review): mon/mon-last-epoch-clean.sh failure
- 09:50 PM Bug #47309 (Resolved): mon/mon-last-epoch-clean.sh failure
- The test needs to be updated based on https://github.com/ceph/ceph/pull/36977...
- 11:07 PM Backport #47297: octopus: osdmaps aren't being cleaned up automatically on healthy cluster
- Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/36981
merged
- 12:17 AM Backport #47297 (In Progress): octopus: osdmaps aren't being cleaned up automatically on healthy ...
- 12:08 AM Backport #47297 (Resolved): octopus: osdmaps aren't being cleaned up automatically on healthy clu...
- https://github.com/ceph/ceph/pull/36981
- 10:32 PM Bug #47204: ceph osd getting shutdown after joining to cluster
- The log shows systemd is stopping the osd, doing a clean shutdown via SIGTERM. It's unclear what caused systemd to st...
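A sketch of digging out who asked systemd to stop the unit (the osd id is an example):
    # the journal records the stop request and the resulting SIGTERM
    $ journalctl -u ceph-osd@12 | grep -iE 'stopping|sigterm|killing'
    $ systemctl status ceph-osd@12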
- 10:23 PM Bug #47299 (Need More Info): Assertion in pg_missing_set: p->second.need <= v || p->second.is_del...
- Is it possible for you to capture osd logs with debug_osd=30? We'll also try to reproduce this at our end.
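A sketch of raising the log level through the mon config store (Mimic and later), then reverting once the crash is captured:
    $ ceph config set osd debug_osd 30
    $ ceph config rm osd debug_osd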
- 06:57 AM Bug #47299 (Need More Info): Assertion in pg_missing_set: p->second.need <= v || p->second.is_del...
- Some of our OSDs will sometimes crash with the following message:...
- 10:08 PM Bug #47300: mount.ceph fails to understand AAAA records from SRV record
- Thanks for the detailed description. The earlier fix clearly depends on ms_bind_ipv6: https://github.com/ceph/ceph/pu...
- 07:49 AM Bug #47300 (Resolved): mount.ceph fails to understand AAAA records from SRV record
- Hello,
Unsure if this belongs to CephFS or RADOS :-). I have seen numerous issues here regarding IPv6/AAAA re...
- 12:32 AM Backport #47296 (In Progress): nautilus: osdmaps aren't being cleaned up automatically on healthy...
- 12:07 AM Backport #47296 (Resolved): nautilus: osdmaps aren't being cleaned up automatically on healthy cl...
- https://github.com/ceph/ceph/pull/36982
- 12:04 AM Bug #47290 (Pending Backport): osdmaps aren't being cleaned up automatically on healthy cluster
09/03/2020
- 11:45 PM Bug #47278 (Won't Fix): run-tox-qa failing in make check Jenkins run
- The issue was related to this PR https://github.com/ceph/ceph/pull/36397.
./tasks/lost_unfound.py:146:13: F841 loc...
- 03:26 PM Bug #47290 (Fix Under Review): osdmaps aren't being cleaned up automatically on healthy cluster
- 03:10 PM Bug #47290 (Resolved): osdmaps aren't being cleaned up automatically on healthy cluster
- in https://github.com/ceph/ceph/pull/19076/commits/e62269c8929e414284ad0773c4a3c82e43735e4e, we made a mistake. we sh...
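A hedged way to watch whether trimming resumes, using the report section named in #47273 (the jq filter is an example):
    # shows the epochs the mons consider clean/trimmable
    $ ceph report | jq '.osdmap_clean_epochs'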
- 11:44 AM Bug #47263 (Closed): memory leak, overusage OSD process since 14.2.10
- 05:05 AM Bug #47263: memory leak, overusage OSD process since 14.2.10
- Please CLOSE. Solved, user error
09/02/2020
- 11:36 PM Bug #47278 (Won't Fix): run-tox-qa failing in make check Jenkins run
- See this run:
https://jenkins.ceph.com/job/ceph-pull-requests/59034/consoleFull#231334942c19247c4-fcb7-4c61-9a5d-7...
- 10:54 PM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- Here's the actual problem I think. Working on a fix....
- 10:31 PM Bug #47024: rados/test.sh: api_tier_pp LibRadosTwoPoolsPP.ManifestSnapRefcount failed
- /a/nojha-2020-09-02_20:45:56-rados:verify-master-distro-basic-smithi/5400369
- 10:14 PM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- /a/yuriw-2020-08-31_19:07:15-rados-octopus-distro-basic-smithi/5396037
- 09:40 PM Bug #47180 (In Progress): qa/standalone/mon/mon-handle-forward.sh failure
- 09:38 PM Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
- Issue is that `mon_host` is now only used for bootstrapping the MonClient. After that, it uses whatever the current m...
- 09:03 PM Bug #47180 (Triaged): qa/standalone/mon/mon-handle-forward.sh failure
- I am able to reproduce this locally and it fails consistently on master https://pulpito.ceph.com/nojha-2020-09-01_20:...
- 04:43 PM Backport #46932: nautilus: librados: add LIBRBD_SUPPORTS_GETADDRS support
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36853
merged
- 01:25 PM Bug #47273 (Resolved): ceph report missing osdmap_clean_epochs if answered by peon
- The ceph report section `osdmap_clean_epochs` is empty when the response comes from a peon.
E.g., a good report fro...
- 11:50 AM Bug #47263: memory leak, overusage OSD process since 14.2.10
- MALLOC: 17863967056 (17036.4 MiB) Bytes in use by application
MALLOC: + 0 ( 0.0 MiB) Bytes in page ...
- 09:19 AM Bug #47263: memory leak, overusage OSD process since 14.2.10
- mempools for one using 20GB, rising atm...
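A sketch of pulling the same numbers from a running OSD (the osd id is an example):
    # tcmalloc heap stats and ceph's own mempool accounting
    $ ceph tell osd.12 heap stats
    $ ceph daemon osd.12 dump_mempools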
- 08:10 AM Bug #47263: memory leak, overusage OSD process since 14.2.10
- Logs only show compaction stats...
- 08:09 AM Bug #47263 (Closed): memory leak, overusage OSD process since 14.2.10
- Since the upgrade to 14.2.10 I have multiple OSDs in our big cluster that are overconsuming memory and are being OOM ...
- 08:20 AM Backport #47258 (In Progress): octopus: Add pg count for pools in the `ceph df` command
- 05:06 AM Backport #47258 (Resolved): octopus: Add pg count for pools in the `ceph df` command
- https://github.com/ceph/ceph/pull/36945
- 08:14 AM Backport #47257 (In Progress): nautilus: Add pg count for pools in the `ceph df` command
- 05:06 AM Backport #47257 (Resolved): nautilus: Add pg count for pools in the `ceph df` command
- https://github.com/ceph/ceph/pull/36944
- 05:04 AM Backport #47251 (Resolved): octopus: add ability to clean_temps in osdmaptool
- https://github.com/ceph/ceph/pull/37348
- 05:04 AM Backport #47250 (Resolved): nautilus: add ability to clean_temps in osdmaptool
- https://github.com/ceph/ceph/pull/37477
09/01/2020
- 03:50 PM Bug #47239 (Resolved): thrashosds.thrasher error in rados
- Run: https://pulpito.ceph.com/yuriw-2020-08-31_23:49:50-rados:thrash-old-clients-master-distro-basic-smithi/
Job: 53...
08/31/2020
- 09:44 PM Bug #47180: qa/standalone/mon/mon-handle-forward.sh failure
- similar one...
- 06:49 PM Bug #47223 (In Progress): Change the default value of option osd_async_recovery_min_cost from 100...
- 06:45 PM Bug #47223 (Rejected): Change the default value of option osd_async_recovery_min_cost from 100 to 10
- - While testing different RGW workloads[1] we do not see async recovery benefits specifically in EC pools with the def...
- 06:08 AM Bug #47207 (New): Mon crashes during adding osd
- Mon crashes while adding an osd:
Found that the osd was not successfully added to the crush map
mon log:
debug 202...
- 06:07 AM Feature #46663 (Pending Backport): Add pg count for pools in the `ceph df` command
- 05:31 AM Feature #46663 (Resolved): Add pg count for pools in the `ceph df` command
- 03:17 AM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- /a/yuriw-2020-08-27_00:49:53-rados-wip-yuri8-testing-2020-08-26-2329-octopus-distro-basic-smithi/5379176/
08/30/2020
- 01:22 PM Bug #47206 (New): Ceph-mon crashes with zero exit code when no space left on device
- When the filesystem fills up to 100%, ceph-mon exits with a zero exit code and systemd reports that everything is OK.
h3...
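A sketch of checking how close a mon is to its disk-space thresholds (the path is an example):
    # mons raise MON_DISK_LOW at mon_data_avail_warn and MON_DISK_CRIT at mon_data_avail_crit percent free
    $ df -h /var/lib/ceph/mon
    $ ceph config get mon mon_data_avail_crit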
08/29/2020
- 01:21 PM Bug #47204: ceph osd getting shutdown after joining to cluster
- uploaded ceph osd logs screenshot for reference
- 01:20 PM Bug #47204 (New): ceph osd getting shutdown after joining to cluster
- after adding new disks to an existing cluster running luminous 12.2.12 (old servers), with one more node addition wi...
08/28/2020
- 05:26 PM Bug #47159 (Pending Backport): add ability to clean_temps in osdmaptool
- 02:39 PM Bug #46445 (Resolved): nautilis client may hunt for mon very long if msg v2 is not enabled on mons
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:58 PM Support #47150: ceph df - big difference between per-class and per-pool usage
- Marius Leustean wrote:
> Alright, *ceph osd df tree class nvme* outputs the same capacity as the RAW STORAGE above (...
- 11:19 AM Support #47150: ceph df - big difference between per-class and per-pool usage
- Alright, *ceph osd df tree class nvme* outputs the same capacity as the RAW STORAGE above (11TB).
Does it mean tha...
- 10:33 AM Support #47150: ceph df - big difference between per-class and per-pool usage
- MAX AVAIL in POOL sections is primarily determined by the OSD with the least amount of free space.
"ceph osd df tree... - 08:54 AM Bug #45647: "ceph --cluster ceph --log-early osd last-stat-seq osd.0" times out due to msgr-failu...
- /a/shuzhenyi-2020-08-28_05:41:22-rados:thrash-wip-shuzhenyi-testing-2020-08-28-0955-distro-basic-smithi/5381787/
rad...
- 07:56 AM Bug #45647: "ceph --cluster ceph --log-early osd last-stat-seq osd.0" times out due to msgr-failu...
- /a/yuriw-2020-08-27_00:49:53-rados-wip-yuri8-testing-2020-08-26-2329-octopus-distro-basic-smithi/5379206/
rados/si...
- 07:58 AM Bug #46508: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
- /a/ http://qa-proxy.ceph.com/teuthology/yuriw-2020-08-27_00:49:53-rados-wip-yuri8-testing-2020-08-26-2329-octopus-dis...
- 07:41 AM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- /a/yuriw-2020-08-26_18:16:40-rados-wip-yuri-testing-2020-08-26-1631-octopus-distro-basic-smithi/5378493...
- 07:38 AM Bug #46318: mon_recovery: quorum_status times out
- /a/yuriw-2020-08-26_18:16:40-rados-wip-yuri-testing-2020-08-26-1631-octopus-distro-basic-smithi/5378436...
- 12:34 AM Bug #47181 (New): "sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeo...
- ...
- 12:26 AM Bug #47180 (Resolved): qa/standalone/mon/mon-handle-forward.sh failure
- ...
08/27/2020
- 10:29 PM Bug #24990: api_watch_notify: LibRadosWatchNotify.Watch3Timeout failed
- /a/teuthology-2020-08-26_07:01:02-rados-master-distro-basic-smithi/5377129
- 10:27 PM Bug #45423: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
- /a/teuthology-2020-08-26_07:01:02-rados-master-distro-basic-smithi/5377120/
- 05:59 PM Documentation #47176 (New): creating pool doc is very out-of-date
- https://docs.ceph.com/docs/master/rados/operations/pools/#create-a-pool
Ideally we shouldn't be talking about PGs ... - 01:10 PM Backport #46932 (In Progress): nautilus: librados: add LIBRBD_SUPPORTS_GETADDRS support
- 10:55 AM Backport #46952 (Resolved): nautilus: nautilis client may hunt for mon very long if msg v2 is not...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36634
m...
- 10:54 AM Backport #46586: octopus: The default value of osd_scrub_during_recovery is false since v11.1.1
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36661
m...
- 04:41 AM Bug #47002 (Resolved): python-rados: connection error
08/26/2020
- 10:22 PM Documentation #47163 (New): document the difference between disk commit and apply time
- <wowas> whats the difference between the disk commit and apply time?
<cervigni> ask zdover
- 10:17 PM Backport #46586 (Resolved): octopus: The default value of osd_scrub_during_recovery is false sinc...
- 06:25 PM Bug #47159 (Fix Under Review): add ability to clean_temps in osdmaptool
- 06:21 PM Bug #47159 (Resolved): add ability to clean_temps in osdmaptool
- This is particularly useful for debugging purposes when clean_temps() takes an abnormally long time due to fla...
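A sketch of the intended offline usage, assuming the new option is exposed as --clean-temps (the map path is an example):
    # run pg_temp/primary_temp cleanup against a saved osdmap
    $ ceph osd getmap -o /tmp/osdmap
    $ osdmaptool /tmp/osdmap --clean-temps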
- 04:54 PM Bug #44862 (Resolved): mon: reset min_size when changing pool size
- 04:49 PM Bug #47153 (Won't Fix - EOL): monitor crash during upgrade due to LogSummary encoding changes bet...
- ...
- 04:15 PM Backport #46952: nautilus: nautilis client may hunt for mon very long if msg v2 is not enabled on...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36634
merged
- 03:32 PM Bug #47002: python-rados: connection error
- A cherry-pick of this PR has caused the ragweed tests to start passing and https://tracker.ceph.com/issues/47060 to g...
- 04:02 AM Bug #47002 (In Progress): python-rados: connection error
- 03:57 AM Bug #47002: python-rados: connection error
- Ok, I've had a chance to test this. Without my patch in rados_connect we see the following value for 'keyring' and th...
- 03:24 PM Support #47150 (New): ceph df - big difference between per-class and per-pool usage
- Considering the below example:...
- 10:25 AM Feature #46663 (In Progress): Add pg count for pools in the `ceph df` command
08/25/2020
- 11:08 AM Bug #47002: python-rados: connection error
- I have a candidate patch for this that I should be able to test tomorrow morning APAC time (it's building now).
- 08:11 AM Bug #47002: python-rados: connection error
- Progress update.
I eliminated anything to do with python by writing a small C client that just reads the conf file...
08/24/2020
- 09:52 PM Bug #47119 (Resolved): perf suite fails with backend objectstore/filestore-xfs
- 06:28 PM Bug #47119: perf suite fails with backend objectstore/filestore-xfs
- Neha Ojha wrote:
> Most likely caused by https://github.com/ceph/ceph/pull/36090
merged
- 04:49 PM Bug #47119 (Fix Under Review): perf suite fails with backend objectstore/filestore-xfs
- 04:28 PM Bug #47119: perf suite fails with backend objectstore/filestore-xfs
- Most likely caused by https://github.com/ceph/ceph/pull/36090
- 03:37 PM Bug #47119 (Resolved): perf suite fails with backend objectstore/filestore-xfs
- Run: https://pulpito.ceph.com/teuthology-2020-08-18_03:57:03-perf-basic-master-distro-basic-smithi/
Jobs: 5355382, 5...
- 11:01 AM Bug #46266: Monitor crashed in creating pool in CrushTester::test_with_fork()
- This happens again :(
- 10:39 AM Bug #46285: osd: error from smartctl is always reported as invalid JSON
- Yaarit Hatuka wrote:
> Is the cluster containerized?
> If so - smartctl version in the container might be old (mean... - 08:45 AM Bug #47002: python-rados: connection error
- There's something quite different about the local environment (vstart cluster) and the teuthology environment. For ex...
- 08:19 AM Bug #47002 (New): python-rados: connection error
- So, I can reproduce this on teuthology but not locally. Kefu mentioned that https://github.com/ceph/ceph/pull/36516 m...
- 04:20 AM Bug #47002: python-rados: connection error
- I can reproduce this and I'm looking into it.
- 07:13 AM Bug #47103 (New): Ceph is not going into read only mode when it is 85% full.
- We are using ceph 14.2.9 with the help of rook 1.2.7 in an OpenShift environment. We are observing that our cluster is n...
08/23/2020
08/22/2020
- 07:47 PM Bug #43888 (Resolved): osd/osd-bench.sh 'tell osd.N bench' hang
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:44 PM Backport #47092 (Resolved): nautilus: mon: stuck osd_pgtemp message forwards
- https://github.com/ceph/ceph/pull/37171
- 07:44 PM Backport #47091 (Resolved): octopus: mon: stuck osd_pgtemp message forwards
- https://github.com/ceph/ceph/pull/37347
- 07:42 PM Backport #46964 (Resolved): octopus: Pool stats increase after PG merged (PGMap::apply_incrementa...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36667
m...
- 06:09 PM Backport #46964: octopus: Pool stats increase after PG merged (PGMap::apply_incremental doesn't s...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36667
merged
- 07:41 PM Backport #46934 (Resolved): octopus: "No such file or directory" when exporting or importing a po...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36666
m...
- 06:08 PM Backport #46934: octopus: "No such file or directory" when exporting or importing a pool if locat...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36666
merged
- 07:41 PM Backport #46739 (Resolved): octopus: mon: expected_num_objects warning triggers on bluestore-only...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36665
m...
- 06:07 PM Backport #46739: octopus: mon: expected_num_objects warning triggers on bluestore-only setups
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36665
merged
- 07:41 PM Backport #46722 (Resolved): octopus: osd/osd-bench.sh 'tell osd.N bench' hang
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36664
m...
- 06:06 PM Backport #46722: octopus: osd/osd-bench.sh 'tell osd.N bench' hang
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36664
merged
- 07:41 PM Backport #46709 (Resolved): octopus: Negative peer_num_objects crashes osd
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36663
m...
- 06:05 PM Backport #46709: octopus: Negative peer_num_objects crashes osd
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36663
merged
- 07:40 PM Backport #46951 (Resolved): octopus: nautilis client may hunt for mon very long if msg v2 is not ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36633
m...
- 07:40 PM Backport #46931 (Resolved): octopus: librados: add LIBRBD_SUPPORTS_GETADDRS support
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36643
m...
- 03:40 PM Bug #45441: rados: Health check failed: 1/3 mons down, quorum a,c (MON_DOWN)" in cluster log'
- yuriw-2020-08-20_19:48:15-rados-wip-yuri-testing-2020-08-17-1723-octopus-distro-basic-smithi/5362945/
- 03:39 PM Bug #44595: cache tiering: Error: oid 48 copy_from 493 returned error code -2
- seeing on octopus as well: yuriw-2020-08-20_19:48:15-rados-wip-yuri-testing-2020-08-17-1723-octopus-distro-basic-smit...
08/21/2020
- 07:47 PM Backport #46951: octopus: nautilis client may hunt for mon very long if msg v2 is not enabled on ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36633
merged
- 06:30 PM Backport #46931: octopus: librados: add LIBRBD_SUPPORTS_GETADDRS support
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36643
merged
- 04:32 PM Bug #46325 (Rejected): A pool at size 3 should have a min_size 2
- My testing using vstart is wrong because somehow the config osd_pool_default_min_size == 1 even though I specified th...
- 07:49 AM Bug #46325: A pool at size 3 should have a min_size 2
- This is from `octopus` - `15.2.4`....
- 07:34 AM Bug #47004 (Resolved): watch/notify may not function well after lingerOp failed
- 01:53 AM Bug #46914 (Pending Backport): mon: stuck osd_pgtemp message forwards