Activity
From 07/09/2020 to 08/07/2020
08/07/2020
- 12:42 PM Feature #46842 (Pending Backport): librados: add LIBRBD_SUPPORTS_GETADDRS support
- Backports need to also include the new method from commit df507cde8d71
- 07:33 AM Bug #46847: Loss of placement information on OSD reboot
- Thanks a lot for this info. There have been a few more scenarios discussed on the users-list, all involving changes t...
- 06:51 AM Bug #46845: Newly orchestrated OSD fails with 'unable to find any IPv4 address in networks '2001:...
- Matthew Oliver wrote:
> I've managed to recreate the issue in a vstart env. It happens when I use ipv6 but set the `...
- 02:08 AM Bug #46845: Newly orchestrated OSD fails with 'unable to find any IPv4 address in networks '2001:...
- I've managed to recreate the issue in a vstart env. It happens when I use ipv6 but set the `public network` to an ipv...
08/06/2020
- 10:35 PM Bug #46264 (Fix Under Review): mon: check for mismatched daemon versions
- 05:59 PM Backport #46742: octopus: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36340
merged
- 05:14 PM Bug #46847: Loss of placement information on OSD reboot
- We have had this problem for a long time, one reason was resolved in #37439. But it still persists in some cases, and...
- 01:59 PM Bug #46847 (Need More Info): Loss of placement information on OSD reboot
- During rebalancing after adding new disks to a cluster, the cluster loses placement information on reboot of an "old...
- 09:24 AM Bug #46845: Newly orchestrated OSD fails with 'unable to find any IPv4 address in networks '2001:...
- I think this is a duplicate of https://tracker.ceph.com/issues/39711
The workaround was to disable `ms_bind_ipv4`, a...
- 08:33 AM Bug #46845 (Resolved): Newly orchestrated OSD fails with 'unable to find any IPv4 address in netw...
- I just started deploying 60 OSDs to my new 15.2.4 Octopus IPv6 cephadm cluster. I applied the spec for the OSDs and t...
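Referring to the `ms_bind_ipv4` workaround mentioned above, a minimal sketch of how one might apply it on an IPv6-only cluster (assumes the centralized config database; verify on a test cluster first):

    # Workaround sketch for an IPv6-only cluster; check the effect before rolling out widely
    ceph config set global ms_bind_ipv4 false
    ceph config set global ms_bind_ipv6 true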
- 05:27 AM Bug #46829: Periodic Lagged/Stalled PGs on new Cluster
- This is on current master, right?
- 05:25 AM Bug #46829: Periodic Lagged/Stalled PGs on new Cluster
- Can you paste two adjacent cycles? I'm curious about the timestamp of the subsequent cycle.
- 04:25 AM Feature #46842 (Resolved): librados: add LIBRBD_SUPPORTS_GETADDRS support
- This will be very helpful when releasing a Ceph package (like an RPM) that backports the rados_getaddrs() API to a previous versi...
08/04/2020
- 06:29 PM Bug #45721: CommandFailedError: Command failed (workunit test rados/test_python.sh) FAIL: test_ra...
- rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async-v2only objectstore/bluestore-comp-zstd r...
- 06:08 PM Bug #43048: nautilus: upgrade/mimic-x/stress-split: failed to recover before timeout expired
- /a/teuthology-2020-08-01_16:48:51-upgrade:mimic-x-nautilus-distro-basic-smithi/5277468
- 05:01 PM Bug #46829 (New): Periodic Lagged/Stalled PGs on new Cluster
- While Radek and I have been working on improving bufferlist append overhead I've been noticing that periodically I en...
- 04:12 PM Bug #21592: LibRadosCWriteOps.CmpExt got 0 instead of -4095-1
- /a/yuriw-2020-08-01_15:45:48-rados-nautilus-distro-basic-smithi/5276330
- 08:47 AM Bug #46824 (Pending Backport): "No such file or directory" when exporting or importing a pool if ...
- 04:48 AM Bug #46824 (Fix Under Review): "No such file or directory" when exporting or importing a pool if ...
- 04:45 AM Bug #46824 (Resolved): "No such file or directory" when exporting or importing a pool if locator ...
- Fixes the following error when exporting a pool that contains objects
with a locator key set:...
08/03/2020
08/02/2020
- 02:46 PM Bug #43413: Virtual IP address of iface lo results in failing to start an OSD
- lei xin wrote:
> I ran into the same problem and my trigger condition was the same, i.e. when configuring the VIP o...
- 02:33 PM Bug #43413: Virtual IP address of iface lo results in failing to start an OSD
- I ran into the same problem and my trigger condition was the same, i.e. when configuring the VIP on the loopback int...
07/31/2020
- 09:56 PM Feature #39012 (In Progress): osd: distinguish unfound + impossible to find, vs start some down O...
- 12:35 PM Bug #46445 (Pending Backport): nautilis client may hunt for mon very long if msg v2 is not enable...
- 12:29 PM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- /a/kchai-2020-07-31_01:42:48-rados-wip-kefu-testing-2020-07-30-2107-distro-basic-smithi/5271969
- 11:34 AM Bug #38322 (Closed): luminous: mons do not trim maps until restarted
- 11:34 AM Bug #38322 (Resolved): luminous: mons do not trim maps until restarted
- 11:32 AM Support #8600 (Closed): MON crashes on new crushmap injection
- closing because no one has complained for 6 years.
- 10:33 AM Bug #44755: Create stronger affinity between drivegroup specs and osd daemons
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:32 AM Bug #46224 (Resolved): Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:34 AM Feature #38603 (Resolved): mon: osdmap prune
- luminous is EOL
- 09:32 AM Backport #38610 (Rejected): luminous: mon: osdmap prune
- luminous is EOL and the backport PR has been closed
- 09:15 AM Fix #6496 (Closed): mon: PGMap::dump should use TextTable
- 09:13 AM Bug #18859 (Closed): kraken monitor fails to bootstrap off jewel monitors if it has booted before
- 09:12 AM Bug #18043 (Closed): ceph-mon prioritizes public_network over mon_host address
- 06:42 AM Backport #46741 (Resolved): nautilus: ceph_osd crash in _committed_osd_maps when failed to encode...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36339
m...
07/30/2020
- 11:59 PM Backport #46741: nautilus: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36339
merged
- 08:02 PM Bug #46732: teuthology.exceptions.MaxWhileTries: 'check for active or peered' reached maximum tri...
- ...
- 11:35 AM Bug #46318 (Triaged): mon_recovery: quorum_status times out
- 11:32 AM Bug #46428: mon: all the 3 mon daemons crashed when running the fs aio test
- Are you co-locating the test and the monitors? Can this be fd depletion?
- 05:17 AM Backport #46408 (Resolved): octopus: Health check failed: 4 mgr modules have failed (MGR_MODULE_E...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35995
m...
- 05:15 PM Backport #46372 (Resolved): osd: expose osdspec_affinity to osd_metadata
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35957
m...
07/29/2020
- 05:35 PM Documentation #46760 (Fix Under Review): The default value of osd_op_queue is wpq since v11.0.0
- 03:36 PM Documentation #46760: The default value of osd_op_queue is wpq since v11.0.0
- https://github.com/ceph/ceph/pull/36354
- 03:32 PM Documentation #46760 (Fix Under Review): The default value of osd_op_queue is wpq since v11.0.0
- Since 14adc9d33f, `osd_op_queue` defaults to `wpq`, but the documentation was still stating that its default value is...
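A quick way to confirm what a running cluster actually uses (illustrative sketch; `osd.0` is a placeholder daemon id):

    # Ask the config database for the effective value
    ceph config get osd osd_op_queue

    # Or ask a running daemon directly over its admin socket
    ceph daemon osd.0 config get osd_op_queue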
- 04:53 AM Bug #46732 (Need More Info): teuthology.exceptions.MaxWhileTries: 'check for active or peered' re...
- Looks like osd.2 was taken down by the thrasher and did not come back up. We'd probably need a full set of logs to wo...
- 04:31 AM Backport #46742 (In Progress): octopus: ceph_osd crash in _committed_osd_maps when failed to enco...
- 04:30 AM Backport #46742 (Resolved): octopus: ceph_osd crash in _committed_osd_maps when failed to encode ...
- https://github.com/ceph/ceph/pull/36340
- 04:31 AM Bug #45991 (Resolved): PG merge: FAILED ceph_assert(info.history.same_interval_since != 0)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:30 AM Backport #46741 (In Progress): nautilus: ceph_osd crash in _committed_osd_maps when failed to enc...
- 04:29 AM Backport #46741 (Resolved): nautilus: ceph_osd crash in _committed_osd_maps when failed to encode...
- https://github.com/ceph/ceph/pull/36339
- 04:19 AM Backport #46706 (Resolved): nautilus: Cancellation of on-going scrubs
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36292
m...
- 04:19 AM Backport #46090 (Resolved): nautilus: PG merge: FAILED ceph_assert(info.history.same_interval_sin...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36161
m...
- 01:11 AM Bug #46443 (Pending Backport): ceph_osd crash in _committed_osd_maps when failed to encode first ...
07/28/2020
- 05:35 PM Backport #46706: nautilus: Cancellation of on-going scrubs
- David Zafman wrote:
> https://github.com/ceph/ceph/pull/36292
merged
- 05:34 PM Backport #46090: nautilus: PG merge: FAILED ceph_assert(info.history.same_interval_since != 0)
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36161
merged
- 03:32 PM Backport #46739 (Resolved): octopus: mon: expected_num_objects warning triggers on bluestore-only...
- https://github.com/ceph/ceph/pull/36665
- 03:32 PM Backport #46738 (Resolved): nautilus: mon: expected_num_objects warning triggers on bluestore-onl...
- https://github.com/ceph/ceph/pull/37474
- 02:59 PM Backport #46408: octopus: Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35995
merged
- 02:59 PM Backport #46372: osd: expose osdspec_affinity to osd_metadata
- Joshua Schmid wrote:
> https://github.com/ceph/ceph/pull/35957
merged
- 01:43 PM Bug #23031: FAILED assert(!parent->get_log().get_missing().is_missing(soid))
- Hi guys, what's the status of this problem now? Have we resolved the assert in the QA tests?
- 04:39 AM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smithi/5224163
- 04:03 AM Bug #46732 (Need More Info): teuthology.exceptions.MaxWhileTries: 'check for active or peered' re...
- /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smithi/5223971...
- 03:37 AM Bug #45615: api_watch_notify_pp: LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify/1 f...
- /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smithi/5223919
- 03:35 AM Bug #45423: api_tier_pp: [ FAILED ] LibRadosTwoPoolsPP.HitSetWrite
- /ceph/teuthology-archive/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smith...
- 03:27 AM Bug #27053: qa: thrashosds: "[ERR] : 2.0 has 1 objects unfound and apparently lost"
- /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smithi/5224148
- 02:56 AM Bug #45318: Health check failed: 2/6 mons down, quorum b,a,c,e (MON_DOWN)" in cluster log running...
- 'msgr-failures/few', 'msgr/async-v1only', 'no_pools', 'objectstore/bluestore-comp-zlib', 'rados', 'rados/multimon/{cl...
- 02:50 AM Bug #45761: mon_thrasher: "Error ENXIO: mon unavailable" during sync_force command leads to "fail...
- /a/yuriw-2020-07-13_23:00:15-rados-wip-yuri8-testing-2020-07-13-1946-octopus-distro-basic-smithi/5224050
- 02:21 AM Bug #37532 (Pending Backport): mon: expected_num_objects warning triggers on bluestore-only setups
- 02:17 AM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- /a/kchai-2020-07-27_15:50:48-rados-wip-kefu-testing-2020-07-27-2127-distro-basic-smithi/5261869
07/27/2020
- 06:41 PM Bug #27053: qa: thrashosds: "[ERR] : 2.0 has 1 objects unfound and apparently lost"
- /ceph/teuthology-archive/pdonnell-2020-07-17_01:54:54-kcephfs-wip-pdonnell-testing-20200717.003135-distro-basic-smith...
- 04:18 PM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Updated the affected versions, as this impacts all Octopus releases.
- 04:02 PM Bug #46443 (Fix Under Review): ceph_osd crash in _committed_osd_maps when failed to encode first ...
- 03:31 PM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Ahh now I understand why v14.2.10 crashes: fa842716b6dc3b2077e296d388c646f1605568b0 changed the `osdmap` in _committe...
- 11:57 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Maybe this will fix (untested -- use on a test cluster first):...
- 10:27 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- The issue also persists in the latest Octopus release.
- 08:39 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- I think it is the mon, not the peer OSD. (We just upgraded the mon from 14.2.10 to 15.2.4; the log below is with mon 15.2.4.)
...
- 07:52 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- > For those osd cannot start ,it is 100% reproducible.
Could you set debug_ms = 1 on that osd, then inspect the lo...
- 06:19 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Dan van der Ster wrote:
> @Xiaoxi thanks for confirming. What are the circumstances of your crash? Did it start spon...
- 06:18 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- @Dan
Yes/no, it is not 100% same that in our case we have several clusters that start adding OSDs with 14.2.10 into...
- 11:35 AM Backport #46722 (Resolved): octopus: osd/osd-bench.sh 'tell osd.N bench' hang
- https://github.com/ceph/ceph/pull/36664
- 11:33 AM Bug #45561 (Resolved): rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nautilus)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:32 AM Bug #46064 (Resolved): Add statfs output to ceph-objectstore-tool
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:30 AM Backport #46710 (Resolved): nautilus: Negative peer_num_objects crashes osd
- https://github.com/ceph/ceph/pull/37473
- 11:30 AM Backport #46709 (Resolved): octopus: Negative peer_num_objects crashes osd
- https://github.com/ceph/ceph/pull/36663
07/26/2020
- 08:04 PM Backport #46460 (Resolved): octopus: pybind/mgr/balancer: should use "==" and "!=" for comparing ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36036
m...
- 08:03 PM Backport #46116 (Resolved): nautilus: Add statfs output to ceph-objectstore-tool
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35713
m...
- 08:02 PM Backport #45677 (Resolved): nautilus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35237
m...
07/25/2020
- 05:55 PM Bug #46705 (Pending Backport): Negative peer_num_objects crashes osd
- 12:45 AM Bug #46705 (Resolved): Negative peer_num_objects crashes osd
- https://pulpito.ceph.com/xxg-2020-07-20_02:56:08-rados:thrash-nautilus-lie-distro-basic-smithi/5240518/
Full stack...
- 04:49 PM Backport #46706 (In Progress): nautilus: Cancellation of on-going scrubs
- 04:09 PM Backport #46706 (Resolved): nautilus: Cancellation of on-going scrubs
- https://github.com/ceph/ceph/pull/36292
- 04:17 PM Backport #46707 (In Progress): octopus: Cancellation of on-going scrubs
- 04:10 PM Backport #46707 (Resolved): octopus: Cancellation of on-going scrubs
- https://github.com/ceph/ceph/pull/36291
- 03:57 PM Bug #46275 (Pending Backport): Cancellation of on-going scrubs
07/24/2020
- 09:29 PM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- I'm not seeing this on my build machine using run-standalone.sh
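For reference, a sketch of how one might try to reproduce this locally with the standalone test (paths are assumptions based on a typical built source tree; the exact invocation may differ):

    # Run just the failing standalone test from the build directory
    cd build
    ../qa/run-standalone.sh osd-rep-recov-eio.sh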
- 07:11 PM Backport #46116: nautilus: Add statfs output to ceph-objectstore-tool
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35713
merged
- 07:08 PM Backport #45677: nautilus: rados/test_envlibrados_for_rocksdb.sh fails on Xenial (seen in nautilus)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35237
merged
- 06:21 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- @Xiaoxi thanks for confirming. What are the circumstances of your crash? Did it start spontaneously after you upgrade...
- 06:16 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- I do have a coredump captured; the osdmap is null, which leads to the segmentation fault in osdmap->isup.
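A generic sketch of how the null pointer could be confirmed from such a core (binary and core paths are placeholders):

    # Dump backtraces from the captured core in batch mode
    gdb /usr/bin/ceph-osd /path/to/core.ceph-osd -batch -ex 'thread apply all bt'
    # In an interactive session, one would then select the OSD::_committed_osd_maps
    # frame ("frame N") and run "print osdmap" to confirm it is null.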
07/23/2020
- 02:37 PM Bug #43888 (Pending Backport): osd/osd-bench.sh 'tell osd.N bench' hang
- 04:34 AM Bug #46428: mon: all the 3 mon daemons crashed when running the fs aio test
- The steps:
1. mount one cephfs kernel client to /mnt/cephfs/
2. run the following command:...
- 04:32 AM Bug #46428: mon: all the 3 mon daemons crashed when running the fs aio test
- I couldn't reproduce it locally; let the core team check the above core dump to see whether they have any idea about...
07/22/2020
- 10:00 PM Bug #43888 (Fix Under Review): osd/osd-bench.sh 'tell osd.N bench' hang
- More details in https://gist.github.com/aclamk/fac791df3510840c640e18a0e6a4c724
- 07:55 PM Bug #46275 (Fix Under Review): Cancellation of on-going scrubs
- 04:02 PM Backport #46460: octopus: pybind/mgr/balancer: should use "==" and "!=" for comparing strings
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36036
merged
- 03:52 PM Bug #46670 (New): refuse to remove mon from the monmap if the mon is in quorum
- Before accepting to remove the mon when "ceph mon remove" is used, we must not acknowledge the request if the mon is ...
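For context, the manual pre-check an operator can do today before removing a monitor (a sketch; "c" is a placeholder mon name):

    # See which mons are currently in quorum
    ceph mon stat
    # Only remove a mon once you are sure the remaining mons keep quorum
    ceph mon remove c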
07/21/2020
- 06:36 PM Feature #46663 (Resolved): Add pg count for pools in the `ceph df` command
- Add pg count for pools in the `ceph df` command
- 06:31 PM Bug #43174: pgs inconsistent, union_shard_errors=missing
- https://github.com/ceph/ceph/pull/35938 is closed in favor of https://github.com/ceph/ceph/pull/36230
- 02:17 AM Bug #46428 (In Progress): mon: all the 3 mon daemons crashed when running the fs aio test
07/20/2020
- 05:39 PM Bug #46242: rados -p default.rgw.buckets.data returning over millions objects No such file or dir...
- I think you should first verify the correct name, as Josh suggested, with the `rados stat` command.
For example I did tr...
- 03:19 PM Bug #43861 (Resolved): ceph_test_rados_watch_notify hang
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:16 PM Bug #46143 (Resolved): osd: make message cap option usable again
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 01:21 AM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Initially there's a crc error building the full from the first incremental in the loop:...
07/19/2020
- 06:20 PM Bug #43174 (In Progress): pgs inconsistent, union_shard_errors=missing
- 06:03 PM Bug #46562 (Rejected): ceph tell PGID scrub/deep_scrub stopped working
- This wasn't really the problem I was seeing.
07/17/2020
- 06:10 PM Bug #46596 (Resolved): ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***: /usr/bi...
- 03:47 PM Bug #46596 (Fix Under Review): ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***:...
- https://github.com/ceph/ceph-container/pull/1712
- 11:53 AM Bug #46596: ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***: /usr/bin/ceph-osd ...
- there is a small possibility that this is related to https://github.com/ceph/ceph/pull/33770
- 11:24 AM Bug #46596 (Resolved): ceph-osd --mkfs: *** longjmp causes uninitialized stack frame ***: /usr/bi...
- This is likely a regression that was merged yesterday into master (July 16th)....
- 05:56 PM Backport #46017 (Resolved): nautilus: ceph_test_rados_watch_notify hang
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36031
m...
- 05:39 PM Backport #46017: nautilus: ceph_test_rados_watch_notify hang
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36031
merged
- 05:56 PM Backport #46164 (Resolved): nautilus: osd: make message cap option usable again
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35738
m...
- 05:38 PM Backport #46164: nautilus: osd: make message cap option usable again
- Neha Ojha wrote:
> https://github.com/ceph/ceph/pull/35738
merged
- 05:27 PM Bug #46603 (New): osd/osd-backfill-space.sh: TEST_ec_backfill_simple: return 1
- ...
- 04:16 PM Backport #46090 (In Progress): nautilus: PG merge: FAILED ceph_assert(info.history.same_interval_...
- 02:44 PM Bug #40081: mon: luminous crash attempting to decode maps after nautilus quorum has been formed
- https://github.com/ceph/ceph/pull/28671 was closed
That PR was cherry-picked to nautilus via https://github.com/ce...
- 11:17 AM Backport #46595 (Resolved): octopus: crash in Objecter and CRUSH map lookup
- https://github.com/ceph/ceph/pull/36662
- 11:17 AM Bug #44314 (Resolved): osd-backfill-stats.sh failing intermittently in TEST_backfill_sizeup_out()...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:16 AM Bug #45606 (Resolved): build_incremental_map_msg missing incremental map while snaptrim or backfi...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:16 AM Bug #45733 (Resolved): osd-scrub-repair.sh: SyntaxError: invalid syntax
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:16 AM Bug #45795 (Resolved): PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recovery_backfill().e...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:16 AM Bug #45943 (Resolved): Ceph Monitor heartbeat grace period does not reset.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:15 AM Bug #46053 (Resolved): osd: wakeup all threads of shard rather than one thread
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:14 AM Backport #46587 (Resolved): nautilus: The default value of osd_scrub_during_recovery is false sin...
- https://github.com/ceph/ceph/pull/37472
- 11:14 AM Backport #46586 (Resolved): octopus: The default value of osd_scrub_during_recovery is false sinc...
- https://github.com/ceph/ceph/pull/36661
- 11:13 AM Backport #45890 (Resolved): nautilus: osd: pg stuck in waitactingchange when new acting set doesn...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35389
m...
- 11:13 AM Backport #45883 (Resolved): nautilus: osd-scrub-repair.sh: SyntaxError: invalid syntax
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35388
m...
- 11:12 AM Backport #45776 (Resolved): nautilus: build_incremental_map_msg missing incremental map while sna...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35386
m...
- 11:09 AM Backport #46286 (Resolved): octopus: mon: log entry with garbage generated by bad memory access
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36035
m...
- 11:06 AM Backport #46261 (Resolved): octopus: larger osd_scrub_max_preemptions values cause Floating point...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36034
m...
- 11:06 AM Backport #46089 (Resolved): octopus: PG merge: FAILED ceph_assert(info.history.same_interval_sinc...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36033
m...
- 11:05 AM Backport #46086 (Resolved): octopus: osd: wakeup all threads of shard rather than one thread
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36032
m...
- 11:05 AM Backport #46016 (Resolved): octopus: osd-backfill-stats.sh failing intermittently in TEST_backfil...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36030
m...
- 11:05 AM Backport #46007 (Resolved): octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recover...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36029
m...
07/16/2020
- 05:09 PM Backport #45890: nautilus: osd: pg stuck in waitactingchange when new acting set doesn't change
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35389
merged
- 05:08 PM Backport #45883: nautilus: osd-scrub-repair.sh: SyntaxError: invalid syntax
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35388
merged
- 05:08 PM Backport #45776: nautilus: build_incremental_map_msg missing incremental map while snaptrim or ba...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35386
merged
- 04:30 PM Backport #46286: octopus: mon: log entry with garbage generated by bad memory access
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36035
merged
- 01:14 PM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- Markus do you have a coredump available for further debugging?
- 12:52 PM Bug #43365: Nautilus: Random mon crashes in failed assertion at ceph::time_detail::signedspan
- We experienced this bug as well, so I investigated it myself, though I'm no timekeeping, Ceph, or C++ expert, jus...
- 11:36 AM Documentation #46554 (Resolved): Malformed sentence in RADOS page
- 09:54 AM Bug #42668 (Won't Fix): ceph daemon osd.* fails in osd container but ceph daemon mds.* does not f...
- Ben, just run `unset CEPH_ARGS` once in the OSD container, then you will be able to use the socket commands.
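A minimal sketch of that workflow inside the OSD container (`osd.3` is a placeholder id):

    # Clear CEPH_ARGS (set in the container environment); the admin-socket commands then work
    unset CEPH_ARGS
    ceph daemon osd.3 config show | head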
- 09:23 AM Bug #44311 (Pending Backport): crash in Objecter and CRUSH map lookup
- 02:38 AM Bug #46562 (Rejected): ceph tell PGID scrub/deep_scrub stopped working
- At this point I don't know if it was broken by the https://github.com/ceph/ceph/pull/30217 change.
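For reference, the command forms in question (illustrative; `2.0` is a placeholder pgid, and exact spellings may vary by release):

    # The tell-based form that reportedly stopped working
    ceph tell 2.0 scrub
    ceph tell 2.0 deep_scrub
    # The older pg-based form as a fallback
    ceph pg scrub 2.0
    ceph pg deep-scrub 2.0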
07/15/2020
- 09:30 PM Bug #46405: osd/osd-rep-recov-eio.sh: TEST_rados_repair_warning: return 1
- /a/yuriw-2020-07-13_23:06:23-rados-wip-yuri5-testing-2020-07-13-1944-octopus-distro-basic-smithi/5224649
- 04:36 PM Documentation #46554: Malformed sentence in RADOS page
- Item 22 here:
https://pad.ceph.com/p/Report_Documentation_Bugs
- 04:32 PM Documentation #46554 (Resolved): Malformed sentence in RADOS page
- https://docs.ceph.com/docs/master/rados/
The current sentence:
Once you have a deployed a Ceph Storage Clust...
- 03:58 PM Documentation #45988 (Resolved): [doc/os]: Centos 8 is not listed even though it is supported
- 11:09 AM Documentation #46545 (New): Two Developer Guide pages might be redundant
- https://docs.ceph.com/docs/master/dev/internals/
and
https://docs.ceph.com/docs/master/dev/developer_guide/
...
- 10:58 AM Documentation #46531 (Pending Backport): The default value of osd_scrub_during_recovery is false ...
- 10:24 AM Bug #46323: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- /a//kchai-2020-07-15_09:19:03-rados-wip-kefu-testing-2020-07-13-2108-distro-basic-smithi/5228761
07/14/2020
- 10:50 PM Bug #44311 (Fix Under Review): crash in Objecter and CRUSH map lookup
- 04:43 PM Bug #44311: crash in Objecter and CRUSH map lookup
- I am hitting this all the time now that librbd is using the 'neorados' API [1]. I plan to just rebuild the rmaps when...
- 04:41 PM Bug #44311 (In Progress): crash in Objecter and CRUSH map lookup
- 07:12 PM Backport #46228 (Resolved): nautilus: Ceph Monitor heartbeat grace period does not reset.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35798
m...
- 04:22 PM Backport #46228: nautilus: Ceph Monitor heartbeat grace period does not reset.
- Sridhar Seshasayee wrote:
> https://github.com/ceph/ceph/pull/35798
merged
- 07:02 PM Documentation #46531 (Fix Under Review): The default value of osd_scrub_during_recovery is false ...
- 12:05 PM Documentation #46531: The default value of osd_scrub_during_recovery is false since v11.1.1
- I can't figure out how to add the pull request ID above, so here's a link to it instead: https://github.com/ceph/ceph...
- 11:58 AM Documentation #46531 (Resolved): The default value of osd_scrub_during_recovery is false since v1...
- Since 8dca17c, `osd_scrub_during_recovery` defaults to `false`, but the documentation was still stating that its defa...
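A quick sketch of checking the effective value, and of the runtime override for clusters that still want scrubbing during recovery (assumes the centralized config database):

    # Show the effective default
    ceph config get osd osd_scrub_during_recovery
    # Explicitly re-enable scrubs during recovery if desired
    ceph config set osd osd_scrub_during_recovery true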
- 06:58 PM Bug #37532 (Fix Under Review): mon: expected_num_objects warning triggers on bluestore-only setups
- 08:58 AM Bug #37532: mon: expected_num_objects warning triggers on bluestore-only setups
- Joao Eduardo Luis wrote:
> I don't think it's wise to simply remove the code because filestore is no longer the defa...
- 04:19 PM Backport #46261: octopus: larger osd_scrub_max_preemptions values cause Floating point exception
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36034
merged
- 04:18 PM Backport #46089: octopus: PG merge: FAILED ceph_assert(info.history.same_interval_since != 0)
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36033
merged
- 04:18 PM Backport #46086: octopus: osd: wakeup all threads of shard rather than one thread
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/36032
merged
- 04:17 PM Backport #46016: octopus: osd-backfill-stats.sh failing intermittently in TEST_backfill_sizeup_ou...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36030
merged
- 04:16 PM Backport #46007: octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recovery_backfill(...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36029
merged
- 04:01 PM Bug #43888: osd/osd-bench.sh 'tell osd.N bench' hang
- /a/yuriw-2020-07-13_19:30:53-rados-wip-yuri6-testing-2020-07-13-1520-octopus-distro-basic-smithi/5223525
- 01:20 PM Bug #17170: mon/monclient: update "unable to obtain rotating service keys when osd init" to sugge...
- Issue fixed after setting the correct NTP server on the machines.
Followed the instructions here: https://access.redhat....
- 09:36 AM Bug #17170: mon/monclient: update "unable to obtain rotating service keys when osd init" to sugge...
- issue still seen in pacific dev version:...
07/13/2020
- 07:32 PM Bug #46508: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
- Neha Ojha wrote:
> Does not look related to https://tracker.ceph.com/issues/45619 or caused by https://github.com/ce...
- 07:30 PM Bug #46508: Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
- Does not look related to https://tracker.ceph.com/issues/45619 or caused by https://github.com/ceph/ceph/commit/d4fba...
- 07:02 PM Bug #46508 (New): Health check failed: Reduced data availability: 1 pg inactive (PG_AVAILABILITY)
- rados/basic/{ceph clusters/{fixed-2 openstack} msgr-failures/many msgr/async objectstore/bluestore-comp-snappy rados ...
- 05:48 PM Bug #46506 (New): RuntimeError: Exiting scrub checking -- not all pgs scrubbed.
- ...
- 01:20 PM Documentation #16356 (Resolved): doc: manual deployment of ceph monitor needs fix
- https://github.com/ceph/ceph/pull/31452 resolves this issue.
- 11:19 AM Bug #46445 (Fix Under Review): nautilis client may hunt for mon very long if msg v2 is not enable...
- 05:04 AM Bug #46242: rados -p default.rgw.buckets.data returning over millions objects No such file or dir...
- Hi Josh,
Object still exists:
Just did your test and it failed with single quotes -> https://gyazo.com/a04fcf5b522...
07/10/2020
- 09:39 PM Bug #46443: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- fa842716b6dc3b2077e296d388c646f1605568b0 arrived in v14.2.10 and touches _committed_osd_maps
- 07:42 AM Bug #46443 (Resolved): ceph_osd crash in _committed_osd_maps when failed to encode first inc map
- We upgraded a mimic cluster to v14.2.10; everything was running and OK.
I triggered a monmap change with the command...
- 05:50 PM Backport #46460 (Resolved): octopus: pybind/mgr/balancer: should use "==" and "!=" for comparing ...
- https://github.com/ceph/ceph/pull/36036
- 09:31 PM Backport #46286 (In Progress): octopus: mon: log entry with garbage generated by bad memory access
- 09:30 PM Backport #46261 (In Progress): octopus: larger osd_scrub_max_preemptions values cause Floating po...
- 09:29 PM Backport #46089 (In Progress): octopus: PG merge: FAILED ceph_assert(info.history.same_interval_s...
- 09:28 PM Backport #46086 (In Progress): octopus: osd: wakeup all threads of shard rather than one thread
- 09:27 PM Backport #46017 (In Progress): nautilus: ceph_test_rados_watch_notify hang
- 09:26 PM Backport #46018 (Resolved): octopus: ceph_test_rados_watch_notify hang
- The original fix went into octopus during the pre-release phase when bugfixes were being merged to octopus and octopu...
- 09:24 PM Backport #46016 (In Progress): octopus: osd-backfill-stats.sh failing intermittently in TEST_back...
- 09:23 PM Backport #46007 (In Progress): octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_reco...
- 08:46 PM Bug #43888: osd/osd-bench.sh 'tell osd.N bench' hang
- Couple of updates on this:
1. Reproduced the issue with some extra debug logging.
https://pulpito.ceph.com/no...
- 05:50 PM Backport #46461 (Resolved): nautilus: pybind/mgr/balancer: should use "==" and "!=" for comparing...
- https://github.com/ceph/ceph/pull/37471
- 08:20 AM Bug #46445 (Resolved): nautilis client may hunt for mon very long if msg v2 is not enabled on mons
- The problem is observed for a nautilus client. For newer client versions the situation is accidentally much better (s...
- 06:10 AM Backport #46229 (Resolved): octopus: Ceph Monitor heartbeat grace period does not reset.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35799
m...
- 06:09 AM Backport #46165 (Resolved): octopus: osd: make message cap option usable again
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35737
m...
07/09/2020
- 07:08 PM Bug #46437 (Closed): Admin Socket leaves behind .asok files after daemons (ex: RGW) shut down gra...
- Reproducer(s):
0. be in build dir
1. run vstart.sh
2. edit stop.sh to not `rm -rf "${asok_dir}"`
3. do ls of /tmp...
- 02:16 PM Backport #46408 (In Progress): octopus: Health check failed: 4 mgr modules have failed (MGR_MODUL...
- 02:34 AM Bug #46428 (In Progress): mon: all the 3 mon daemons crashed when running the fs aio test
- The logs:...