Activity
From 02/19/2024 to 03/19/2024
03/18/2024
- 10:54 PM crimson Bug #64975: crimson: "Health check failed: 9 scrub errors (OSD_SCRUB_ERRORS)" in cluster log
- Testing a fix -- going to stick the snap mapper keys into the pgmeta object and avoid the problem entirely.
- 09:45 PM crimson Bug #64975 (New): crimson: "Health check failed: 9 scrub errors (OSD_SCRUB_ERRORS)" in cluster log
- ERROR 2024-03-15 10:04:01,561 [shard 1:main] osd - pg_epoch 198 pg[2.2( empty local-lis/les=11/12 n=0 ec=11/11 lis/c...
- 09:21 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- I made the changes in a temporary repo (https://copr.fedorainfracloud.org/coprs/ktdreyer/grpc/), and the packages ins...
- 07:15 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- This will likely fix the (I now know) centos8-only issue. I saw the copr for el9 in the build and mistakenly thought...
- 03:49 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- I think I made a mistake when I chose the @protobuf@ version in https://copr.fedorainfracloud.org/coprs/ceph/grpc/ . ...
- 07:29 PM RADOS Bug #64972: qa: "ceph tell 4.3a deep-scrub" command not found
- and https://github.com/ceph/ceph/pull/54214
- 07:29 PM RADOS Bug #64972 (New): qa: "ceph tell 4.3a deep-scrub" command not found
- ...
- 07:24 PM RADOS Bug #63967 (Resolved): qa/tasks/ceph.py: "ceph tell <pgid> deep_scrub" fails
- 06:56 PM RADOS Bug #64646: ceph osd pool rmsnap clone object leak
- In QA.
- 06:56 PM RADOS Bug #64854: decoding chunk_refs_by_hash_t return wrong values
- Hmm, I guess I saw a PR for that.
- 06:55 PM RADOS Bug #64824: mon: ceph-16.2.14/src/mon/Monitor.cc: 5661: FAILED ceph_assert(err == 0)
- Would need logs with @debug_mon=20@ and @debug_rocksdb=20@ from the period before the assertion.
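For reference, a minimal sketch of raising those levels at runtime via the standard config interface (an injectargs-based approach would also work):
ceph config set mon debug_mon 20
ceph config set mon debug_rocksdb 20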
- 06:51 PM RADOS Bug #64670: LibRadosAioEC.RoundTrip2 hang and pkill
- Nothing new but still observing. Bump up.
- 06:50 PM RADOS Bug #64866: rados/test.sh: LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 failed
- Hi Nitzan! Would you mind taking a look?
- 06:49 PM RADOS Bug #64863: rados/thrash-old-clients: Health detail: HEALTH_WARN 1/3 mons down, quorum a,c in clu...
- Hmm, I think I saw Laura's PR for @MON_DOWN@.
- 06:44 PM RADOS Bug #58436: ceph cluster log reporting log level in numeric format for the clog messages
- Do we need to backport?
- 06:43 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- In QA.
- 06:39 PM rgw Bug #64971 (New): Rgw lifecycle skip
- Running ceph ver. 18.2.0
We observe that bucket lifecycle cleanups are being dropped on a random day each week.
Sh...
- 06:36 PM RADOS Bug #64558: librados: use CEPH_OSD_FLAG_FULL_FORCE for IoCtxImpl::remove
- Sent to QA.
- 06:28 PM RADOS Bug #57782 (Fix Under Review): [mon] high cpu usage by fn_monstore thread
- The fix awaits QA.
- 06:26 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- Passed QA.
- 06:25 PM RADOS Bug #64938: Pool created with single PG splits into many on single OSD causes OSD to hit max_pgs_...
- Reviewed.
- 06:21 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- https://github.com/ceph/ceph/pull/54492 merged
- 06:12 PM RADOS Bug #64968 (Fix Under Review): mon: MON_DOWN warnings when mons are first booting
- 04:11 PM RADOS Bug #64968 (Fix Under Review): mon: MON_DOWN warnings when mons are first booting
- ...
- 05:58 PM RADOS Bug #56393: failed to complete snap trimming before timeout
- Hi Matan,
would you mind taking a look? Not a high priority.
- 01:53 PM RADOS Bug #56393: failed to complete snap trimming before timeout
- /a/yuriw-2024-03-15_19:59:43-rados-wip-yuri6-testing-2024-03-15-0709-distro-default-smithi/7603381/...
- 05:52 PM RADOS Bug #64347: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps)
- In QA.
- 04:26 PM RADOS Bug #64347: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps)
- /a/yuriw-2024-03-15_19:59:43-rados-wip-yuri6-testing-2024-03-15-0709-distro-default-smithi/7603610/
- 05:48 PM RADOS Bug #64917: SnapMapperTest.CheckObjectKeyFormat object key changed
- I think this is already tackled by https://github.com/ceph/ceph/pull/56142.
Assigning to Matan for confirmation. I...
- 04:31 PM RADOS Bug #64917: SnapMapperTest.CheckObjectKeyFormat object key changed
- /a/yuriw-2024-03-15_19:59:43-rados-wip-yuri6-testing-2024-03-15-0709-distro-default-smithi/7603418/
/a/yuriw-2024-03...
- 05:46 PM Dashboard Bug #64970 (Fix Under Review): mgr/dashboard: fix duplicate grafana panels when on mgr failover
- h3. Description of problem
_here_
The metrics data was not shown with "multiple matches for labels grouping l...
- 05:43 PM RADOS Bug #64437: qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13
- Bump up.
- 05:12 PM RADOS Bug #64437: qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13
- /a/yuriw-2024-03-15_19:59:43-rados-wip-yuri6-testing-2024-03-15-0709-distro-default-smithi/7603349
- 05:41 PM RADOS Bug #53240: full-object read crc is mismatch, because truncate modify oi.size and forget to clear...
- In QA.
- 05:40 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- I'm going to propose a patch removing the @--force@.
- 05:39 PM RADOS Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- Bump up.
- 05:22 PM bluestore Support #64966: OSDs crash | Assert error | KernelDevice::aio_submit | when backfills 3 replica ...
- I think the issue is caused by high fragmentation of some onode(s). Looks like there are more than 64K logical non-co...
- 12:54 PM bluestore Support #64966 (New): OSDs crash | Assert error | KernelDevice::aio_submit | when backfills 3 re...
- Dear all,
we faced some strange errors.
When one of the OSDs died, the cluster started to remap/backfill, but then for one ...
- 04:58 PM rgw Bug #59380: rados/singleton-nomsgr: test failing from "Health check failed: 1 full osd(s) (OSD_FU...
- /a/yuriw-2024-03-15_19:59:43-rados-wip-yuri6-testing-2024-03-15-0709-distro-default-smithi/7603379/
/a/yuriw-2024-03...
- 04:28 PM Orchestrator Bug #64872: rados/cephadm/smoke: Health check failed: 1 stray daemon(s) not managed by cephadm (C...
- /a/yuriw-2024-03-15_19:59:43-rados-wip-yuri6-testing-2024-03-15-0709-distro-default-smithi/7603598
/a/yuriw-2024-03-...
- 03:53 PM devops Bug #64962: WITH_ZBD should be removed
- @Rongqi Sun - please fix Pull request ID - looks like it's wrong.
- 08:25 AM devops Bug #64962 (Fix Under Review): WITH_ZBD should be removed
- 08:22 AM devops Bug #64962 (Fix Under Review): WITH_ZBD should be removed
- Take a jenkins job for example: https://jenkins.ceph.com/job/ceph-pull-requests/131553/consoleFull
Shows:
CMake...
- 03:04 PM Orchestrator Backport #62974 (Resolved): quincy: cephadm: allow zapping OSD devices as part of host drain proc...
- 01:54 PM ceph-volume Backport #64944 (In Progress): quincy: ceph-volume lvm zap fails with "undefined name 'List'"
- 01:53 PM ceph-volume Backport #64943 (In Progress): reef: ceph-volume lvm zap fails with "undefined name 'List'"
- 01:53 PM ceph-volume Backport #64945 (In Progress): squid: ceph-volume lvm zap fails with "undefined name 'List'"
- 01:25 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> We have https://github.com/ceph/ceph/blob/main/src/common/ceph_releases.h which I'm using to...
- 12:16 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- We have https://github.com/ceph/ceph/blob/main/src/common/ceph_releases.h which I'm using to fetch the last two versi...
- 12:43 PM CephFS Bug #64937 (Resolved): reef: qa: AttributeError: 'TestSnapSchedulesSubvolAndGroupArguments' objec...
- The problem was fixed and the PR is merged.
- 12:27 PM Orchestrator Backport #64688 (Resolved): quincy: cephadm: host filtering with label and host pattern only uses...
- 12:26 PM Orchestrator Backport #64630 (Resolved): quincy: cephadm: asyncio timeout handler can't handle concurrent.futu...
- 12:25 PM Orchestrator Bug #63481 (Resolved): cephadm: OSD weights are not restored when you stop removal of an OSD
- 12:25 PM Orchestrator Backport #63534 (Resolved): quincy: cephadm: OSD weights are not restored when you stop removal o...
- 12:24 PM Orchestrator Feature #58820 (Resolved): cephadm: allow draining host without removing conf and keyring files
- 12:24 PM Orchestrator Backport #62531 (Resolved): quincy: cephadm: allow draining host without removing conf and keyrin...
- 10:40 AM Orchestrator Bug #64894 (Resolved): [node-proxy] the RedFishClient.logout() can never logout from the redfish API
- 10:40 AM Orchestrator Bug #64951 (Resolved): [node-proxy] RedFish APIs don't return always the same format for Location...
- 08:26 AM Orchestrator Bug #64951 (Pending Backport): [node-proxy] RedFish APIs don't return always the same format for ...
- 10:40 AM Orchestrator Backport #64932 (Resolved): reef: [node-proxy] the RedFishClient.logout() can never logout from t...
- 08:35 AM Orchestrator Backport #64932 (In Progress): reef: [node-proxy] the RedFishClient.logout() can never logout fro...
- 10:39 AM Orchestrator Backport #64964 (Resolved): reef: [node-proxy] RedFish APIs don't return always the same format f...
- 08:29 AM Orchestrator Backport #64964 (Resolved): reef: [node-proxy] RedFish APIs don't return always the same format f...
- 10:39 AM Orchestrator Backport #64931 (Resolved): squid: [node-proxy] the RedFishClient.logout() can never logout from ...
- 08:34 AM Orchestrator Backport #64931 (In Progress): squid: [node-proxy] the RedFishClient.logout() can never logout fr...
- 10:38 AM Orchestrator Backport #64963 (Resolved): squid: [node-proxy] RedFish APIs don't return always the same format ...
- 08:29 AM Orchestrator Backport #64963 (Resolved): squid: [node-proxy] RedFish APIs don't return always the same format ...
- 09:34 AM Orchestrator Bug #64965 (New): Dynamic prometheus configuration uses hostname instead of IP address
- Starting with Ceph Reef our stats on the dashboard stopped working. We got the error: `The mgr/prometheus module at s...
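For reference, a hedged sketch for checking which endpoint the module advertises and pinning it to an address; the mgr/prometheus options below are standard, but whether this works around the hostname issue here is an assumption:
ceph mgr services
ceph config set mgr mgr/prometheus/server_addr <mgr-ip>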
- 09:27 AM Support #64952: crc32 at s390x arch
- The PR is here: https://github.com/ceph/ceph/pull/56224
- 08:52 AM Dashboard Backport #64930 (In Progress): reef: mgr/dashboard: add cephfs authentication
- 08:49 AM Dashboard Backport #64929 (In Progress): squid: mgr/dashboard: add cephfs authentication
- 08:36 AM CephFS Bug #59582 (Resolved): snap-schedule: allow retention spec to specify max number of snaps to retain
- 07:58 AM CephFS Bug #64961 (In Progress): ceph-fuse: crash when try to open & trunc a encrypted file
- Mount a kclient and encrypt it, then use a ceph-fuse client to try to open & trunc it; though it returned failure w...
- 06:54 AM Dashboard Backport #64960 (New): squid: mgr/dashboard: nfs mount command in attach dialog need syntax modif...
- 06:54 AM Dashboard Backport #64959 (New): reef: mgr/dashboard: nfs mount command in attach dialog need syntax modifi...
- 06:48 AM Dashboard Bug #64933 (Pending Backport): mgr/dashboard: nfs mount command in attach dialog need syntax modi...
- 05:53 AM CephFS Bug #55148 (Closed): snap_schedule: remove subvolume(-group) interfaces
- this tracker is no longer relevant since the subvolume and subvolumegroup interfaces have been re-added to snap-schedule
- 05:38 AM CephFS Backport #55579 (In Progress): quincy: snap_schedule: avoid throwing traceback for bad or missing...
- backport is available in quincy
- 05:30 AM CephFS Backport #58599 (Resolved): quincy: mon: prevent allocating snapids allocated for CephFS
- commit is available in quincy
- 05:04 AM Dashboard Cleanup #64958 (Fix Under Review): mgr/dashboard: store remote cluster token in cookie rather tha...
- 05:03 AM Dashboard Cleanup #64958 (Fix Under Review): mgr/dashboard: store remote cluster token in cookie rather tha...
- right now the remote cluster token is stored in localStorage temporarily while switching between clusters. Instead us...
- 02:37 AM CephFS Feature #58057 (Resolved): cephfs-top: enhance fstop tests to cover testing displayed data
- 02:37 AM CephFS Feature #58057 (Resolved): cephfs-top: enhance fstop tests to cover testing displayed data
- 02:36 AM CephFS Bug #61397 (Resolved): cephfs-top: enhance --dump code to include the missing fields
- 02:35 AM CephFS Backport #63553 (Resolved): reef: cephfs-top: enhance --dump code to include the missing fields
- 02:32 AM crimson Bug #64957 (New): crimson/seastore: osd crashes when serving new object writes
- ...
- 02:13 AM CephFS Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- https://pulpito.ceph.com/vshankar-2024-03-14_16:52:41-fs-wip-vshankar-testing1-quincy-2024-03-14-0655-quincy-testing-...
- 02:10 AM crimson Bug #63307: crimson: SnapTrimObjSubEvent doesn't actually seem to submit delta_stats
- let me try to solve this issue
03/17/2024
- 10:35 PM Documentation #64956 (New): Typo in Bucket Notification Docs
- The docs on bucket notifications describe the topic-to-notification relationship as follows.
"There can be multiple n...
- 06:55 PM CephFS Bug #64912 (Fix Under Review): make check: QuiesceDbTest.MultiRankRecovery Failed
- 02:35 PM rgw Backport #64834 (Resolved): squid: RGW segmentation fault when reading object permissions via the...
- 02:35 PM rgw Backport #64876 (Resolved): squid: x-amz-expiration HTTP header: expiry-date sometimes broken
- 02:35 PM rgw Backport #64887 (Resolved): squid: kafka: RGW hangs when broker is down for no persistent notific...
- 02:35 PM rgw Backport #64909 (Resolved): squid: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- 02:06 PM Bug #64955 (New): osd/scrub: state_cast() usage in the scrubber FSM is unreliable
- Using statechart::state_cast while handling an event is not guaranteed to work.
The cast should be replaced with 'in...
03/16/2024
- 06:42 PM rbd Backport #64914 (Resolved): squid: [diff-iterate] discards that truncate aren't accounted for by ...
- 06:23 PM rbd Backport #64553 (Resolved): squid: [test][krbd] volume data corruption when using rbd-mirror w/fa...
- 06:22 PM RADOS Backport #64406 (Resolved): reef: Failed to encode map X with expected CRC
- 06:20 PM Backport #64509 (Resolved): reef: Debian bookworm package needs to explicitly specify cephadm hom...
- 06:00 PM rbd Backport #64463 (Resolved): quincy: "rbd children" should support --image-id option
- 04:08 PM rbd Backport #64463: quincy: "rbd children" should support --image-id option
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55618
merged
- 06:00 PM rbd Backport #64461 (Resolved): quincy: split() is broken in SparseExtentSplitMerge and SparseBufferl...
- 04:09 PM rbd Backport #64461: quincy: split() is broken in SparseExtentSplitMerge and SparseBufferlistExtentSp...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55664
merged
- 05:59 PM rbd Backport #64555 (Resolved): quincy: [test][krbd] volume data corruption when using rbd-mirror w/f...
- 04:10 PM rbd Backport #64555: quincy: [test][krbd] volume data corruption when using rbd-mirror w/failover
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55763
merged
- 04:30 PM CephFS Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Venky Shankar wrote:
> lei liu wrote:
> > We recently encountered a similar issue, may I ask if there is a solution...
- 06:54 AM rgw Bug #64571: lifecycle transition crashes since merge end-to-end tracing
- >>>
#0 __pthread_kill_implementation (
threadid=<optimized out>, signo=signo@entry=11,
no_tid=no_tid@entr... - 06:53 AM rgw Bug #64571: lifecycle transition crashes since merge end-to-end tracing
- Not able to reproduce this issue locally. Pasting the bt of two crashes reported so far -
>>>
2024-03-13T13:33...
03/15/2024
- 10:49 PM RADOS Bug #64802: rados: generalize stretch mode pg temp handling to be usable without stretch mode
- I recently created a draft PR https://github.com/ceph/ceph/pull/56233/, adding the additional arguments peering_bucke...
- 10:14 PM RADOS Bug #64802: rados: generalize stretch mode pg temp handling to be usable without stretch mode
- WIP PR: https://github.com/ceph/ceph/pull/56233
- 08:55 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- jammy has libprotobuf.so.23.0.4 from libprotobuf23, protobuf-compiler-3.12.4 depends on it
python3-grpcio-1.30.2-3...
- 06:46 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Collating data:
8stream
protobuf:
AppStream contains protobuf-compiler 3.5.0-15.el8
depends on libprotobuf...
- 07:50 PM Orchestrator Backport #64632 (Resolved): reef: secure monitoring stack support is not documented
- 07:49 PM Orchestrator Bug #57614 (Resolved): "ceph nfs cluster create ..." always show process bound to 2049: unable to...
- 07:49 PM Orchestrator Backport #62532 (Resolved): reef: "ceph nfs cluster create ..." always show process bound to 2049...
- 07:47 PM Orchestrator Backport #62973 (Resolved): reef: cephadm: allow zapping OSD devices as part of host drain procedure
- 07:47 PM Orchestrator Feature #58933 (Resolved): Setup Ingress service and NFS to use PROXY protocol
- 07:47 PM Orchestrator Backport #61539 (Resolved): reef: Setup Ingress service and NFS to use PROXY protocol
- 07:46 PM Orchestrator Feature #63220 (Resolved): cephadm: warn users about draining a host explicitly listed in a servi...
- 07:46 PM Orchestrator Backport #63508 (Resolved): reef: cephadm: warn users about draining a host explicitly listed in ...
- 07:45 PM Orchestrator Feature #59680 (Resolved): Embed version of ceph - as built - in the cephadm binary
- 07:45 PM Orchestrator Backport #61687 (Resolved): reef: Embed version of ceph - as built - in the cephadm binary
- 07:45 PM Orchestrator Feature #61809 (Resolved): Support ArgumentSpec based objects in extra_container_args and extra_e...
- 07:45 PM Orchestrator Backport #61936 (Resolved): reef: Support ArgumentSpec based objects in extra_container_args and ...
- 07:44 PM Orchestrator Bug #61889 (Resolved): cephadm: cephadm module crashes trying to migrate simple rgw specs
- 07:44 PM Orchestrator Backport #61939 (Resolved): quincy: cephadm: cephadm module crashes trying to migrate simple rgw ...
- 07:43 PM Orchestrator Backport #62008 (Resolved): reef: testing for extra daemon/container features
- 07:43 PM Orchestrator Bug #59443 (Resolved): cephadm: port 9095 not opened in firewall after adopting prometheus
- 07:43 PM Orchestrator Backport #61678 (Resolved): reef: cephadm: port 9095 not opened in firewall after adopting promet...
- 07:42 PM Orchestrator Bug #62276 (Resolved): cephadm: keepalived configured with incorrect unicast IPs if VIP is in dif...
- 07:42 PM Orchestrator Backport #62472 (Resolved): reef: cephadm: keepalived configured with incorrect unicast IPs if VI...
- 07:41 PM Orchestrator Bug #57931 (Resolved): RGW rgw_frontend_type field is not checked correctly during the spec parsing
- 07:41 PM Orchestrator Backport #63011 (Resolved): quincy: RGW rgw_frontend_type field is not checked correctly during t...
- 07:40 PM Orchestrator Backport #64629 (Resolved): reef: cephadm: asyncio timeout handler can't handle concurrent.future...
- 07:39 PM Orchestrator Backport #63533 (Resolved): reef: cephadm: OSD weights are not restored when you stop removal of ...
- 07:37 PM Orchestrator Backport #64698 (Resolved): reef: allow idmap overrides in nfs-ganesha configuration
- 07:36 PM Orchestrator Bug #64382 (Resolved): cephadm: remove restriction for crush device classes
- 07:36 PM Orchestrator Backport #64645 (Resolved): quincy: cephadm: remove restriction for crush device classes
- 07:35 PM Orchestrator Bug #63525 (Resolved): cephadm: drivespec limit not working correctly
- 07:35 PM Orchestrator Backport #63818 (Resolved): quincy: cephadm: drivespec limit not working correctly
- 07:35 PM Orchestrator Bug #63729 (Resolved): OSD with dedicated db devices redeployments fail when no service_id provid...
- 07:34 PM Orchestrator Backport #63816 (Resolved): quincy: OSD with dedicated db devices redeployments fail when no serv...
- 07:25 PM Orchestrator Feature #63031 (Resolved): cephadm: remove host entry from crush map during host removal
- 07:25 PM Orchestrator Backport #63446 (Resolved): quincy: cephadm: remove host entry from crush map during host removal
- 07:24 PM Orchestrator Bug #63238 (Resolved): cephadm: daemon events not updated on repeat events.
- 07:24 PM Orchestrator Backport #63435 (Resolved): quincy: cephadm: daemon events not updated on repeat events.
- 07:23 PM Orchestrator Bug #61885 (Resolved): Using [] around an ipv6 address while adding a host fails
- 07:23 PM Orchestrator Backport #63116 (Resolved): quincy: Using [] around an ipv6 address while adding a host fails
- 07:19 PM Orchestrator Backport #62471 (Resolved): quincy: cephadm: keepalived configured with incorrect unicast IPs if ...
- 07:18 PM Orchestrator Backport #61676 (Resolved): quincy: cephadm: port 9095 not opened in firewall after adopting prom...
- 05:41 PM rgw Bug #64571: lifecycle transition crashes since merge end-to-end tracing
- Casey Bodley wrote:
> hey Shreyansh and Soumya, any updates here? this is one urgent for the squid release
Hi Cas...
- 04:53 PM rgw Bug #64571: lifecycle transition crashes since merge end-to-end tracing
- hey Shreyansh and Soumya, any updates here? this is one urgent for the squid release
- 04:51 PM rgw Backport #64954 (New): squid: Notification FilterRules for S3key, S3Metadata & S3Tags spit incorr...
- 04:47 PM rgw Bug #64653 (Pending Backport): Notification FilterRules for S3key, S3Metadata & S3Tags spit incor...
- 03:21 PM rgw Backport #64949 (In Progress): squid: rgw-multisite: add x-rgw-replicated-at
- 01:50 PM rgw Backport #64949 (In Progress): squid: rgw-multisite: add x-rgw-replicated-at
- https://github.com/ceph/ceph/pull/56226
- 03:08 PM rgw Bug #64953 (Fix Under Review): [CVE-2023-46159] RGW crash upon misconfigured CORS rule
- 02:33 PM Support #64952 (New): crc32 at s390x arch
- ceph has no crc32c support for the s390x arch.
This can lead to a huge overhead by the SW crc32 code.
- 02:32 PM Orchestrator Bug #64951 (Fix Under Review): [node-proxy] RedFish APIs don't return always the same format for ...
- 02:18 PM Orchestrator Bug #64951 (Resolved): [node-proxy] RedFish APIs don't return always the same format for Location...
- After some tests, it turns out that depending on the hardware, the header 'Location' which is returned by the server ...
- 02:32 PM rgw Bug #64950: rgw-nfs: various file mv (rename) operations fail
- update: not quincy, reef
- 02:08 PM rgw Bug #64950 (Fix Under Review): rgw-nfs: various file mv (rename) operations fail
- 01:57 PM rgw Bug #64950 (In Progress): rgw-nfs: various file mv (rename) operations fail
- 01:56 PM rgw Bug #64950 (Fix Under Review): rgw-nfs: various file mv (rename) operations fail
- This is a regression, probably dating back to Quincy. The initial regression was caused by zipper integration.
- 02:27 PM rgw Backport #64541 (Resolved): squid: metadata cache races on deletes
- 02:19 PM CephFS Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- I think this happens when there are concurrent lookups and deletes under a directory. _readdir_cache_cb() has code li...
- 02:01 PM CephFS Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- lei liu wrote:
> We recently encountered a similar issue, may I ask if there is a solution?
For now, restart ceph...
- 01:36 PM CephFS Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- Venky Shankar wrote:
> lei liu wrote:
> > We recently encountered a similar issue, may I ask if there is a solution...
- 01:26 PM CephFS Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- lei liu wrote:
> We recently encountered a similar issue, may I ask if there is a solution?
Which version was thi...
- 12:10 PM CephFS Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, ...
- We recently encountered a similar issue, may I ask if there is a solution?
- 01:52 PM CephFS Backport #64223: reef: qa: flush journal may cause timeouts of `scrub status`
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55915
merged
- 01:51 PM CephFS Backport #64582: reef: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55829
merged
- 01:50 PM CephFS Backport #64222: reef: Test failure: test_filesystem_sync_stuck_for_around_5s (tasks.cephfs.test_...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55746
merged
- 01:50 PM CephFS Backport #64075: reef: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.Test...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55743
merged
- 01:49 PM CephFS Backport #64045: reef: mds: use explicitly sized types for network and disk encoding
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55742
merged
- 01:49 PM rgw Feature #64365 (Pending Backport): rgw-multisite: add x-rgw-replicated-at
- 01:48 PM CephFS Backport #63553: reef: cephfs-top: enhance --dump code to include the missing fields
- Jos Collin wrote:
> https://github.com/ceph/ceph/pull/54520
merged
- 01:47 PM Backport #63147: reef: [Client] handle_command_reply overwrite outbl, so cannot get previous resu...
- Rishabh Dave wrote:
> https://github.com/ceph/ceph/pull/53893
merged
- 01:47 PM CephFS Backport #62026: reef: mds/MDSAuthCaps: "fsname", path, root_squash can't be in same cap with uid...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52581
merged
- 01:44 PM rgw Backport #64948 (Resolved): squid: support lifecycle filters ObjectSizeLessThan and ObjectSizeGre...
- squid backport included with https://tracker.ceph.com/issues/64876
- 01:27 PM rgw Backport #64948 (Resolved): squid: support lifecycle filters ObjectSizeLessThan and ObjectSizeGre...
- 01:38 PM CephFS Bug #64947 (Fix Under Review): qa: fix continued use of log-whitelist
- 01:18 PM CephFS Bug #64947 (Fix Under Review): qa: fix continued use of log-whitelist
- 01:25 PM rgw Feature #63304 (Pending Backport): support lifecycle filters ObjectSizeLessThan and ObjectSizeGre...
- 12:50 PM rbd Bug #64785: RBD persistent error corruption
- Jacobus Erasmus wrote:
> Ilya Dryomov wrote:
> > What errors were observed when trying to boot the VM, if any? Doe...
- 12:19 PM ceph-volume Bug #59375: ceph-volume should support symbolic links to devices e.g. for multipath
- This can be closed now, it was fixed by https://github.com/ceph/ceph/pull/53309.
- 12:01 PM CephFS Backport #64896 (In Progress): squid: mds: QuiesceDb to manage subvolume quiesce state
- PR: https://github.com/ceph/ceph/pull/56202
- 11:44 AM rbd Backport #64675 (Resolved): squid: rbd: scalability issue on Windows due to TCP session count
- 10:39 AM CephFS Bug #64616: selinux denials with centos9.stream
- https://pulpito.ceph.com/yuriw-2024-03-08_16:17:06-fs-wip-yuri10-testing-2024-03-07-1242-reef-distro-default-smithi/7...
- 10:30 AM CephFS Bug #64946 (New): qa: unable to locate package libcephfs1
- https://pulpito.ceph.com/yuriw-2024-03-08_16:17:06-fs-wip-yuri10-testing-2024-03-07-1242-reef-distro-default-smithi/7...
- 10:25 AM rbd Backport #64914 (In Progress): squid: [diff-iterate] discards that truncate aren't accounted for ...
- 10:12 AM rbd Backport #64916 (In Progress): reef: [diff-iterate] discards that truncate aren't accounted for b...
- 10:10 AM rbd Backport #64915 (In Progress): quincy: [diff-iterate] discards that truncate aren't accounted for...
- 09:43 AM CephFS Backport #64919 (In Progress): reef: qa: enhance labeled perf counters test for cephfs-mirror
- 09:29 AM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > im in strong favour of having this do...
- 09:02 AM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > im in strong favour of having this done, considering the caveat di...
- 09:17 AM CephFS Backport #64918 (In Progress): squid: qa: enhance labeled perf counters test for cephfs-mirror
- 09:02 AM RADOS Bug #56393: failed to complete snap trimming before timeout
- /a/yuriw-2024-03-13_19:25:03-rados-wip-yuri6-testing-2024-03-12-0858-distro-default-smithi/7597884
/a/yuriw-2024-03-...
- 08:29 AM ceph-volume Backport #64945 (In Progress): squid: ceph-volume lvm zap fails with "undefined name 'List'"
- https://github.com/ceph/ceph/pull/56258
- 08:29 AM ceph-volume Backport #64944 (In Progress): quincy: ceph-volume lvm zap fails with "undefined name 'List'"
- https://github.com/ceph/ceph/pull/56260
- 08:29 AM ceph-volume Backport #64943 (In Progress): reef: ceph-volume lvm zap fails with "undefined name 'List'"
- https://github.com/ceph/ceph/pull/56259
- 08:19 AM ceph-volume Bug #64898 (Pending Backport): ceph-volume lvm zap fails with "undefined name 'List'"
- 08:06 AM RADOS Bug #64942 (New): rados/verify: valgrind reports "Invalid read of size 8" error.
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587319
/a/yuriw-2024-03-...
- 06:18 AM CephFS Backport #64941 (New): quincy: qa: Add multifs root_squash testcase
- 06:18 AM CephFS Backport #64940 (New): reef: qa: Add multifs root_squash testcase
- 06:17 AM CephFS Backport #64939 (New): squid: qa: Add multifs root_squash testcase
- 06:07 AM CephFS Bug #64641 (Pending Backport): qa: Add multifs root_squash testcase
- 04:47 AM CephFS Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- Dan van der Ster wrote:
> There was a similar case back in nautilus:
> * https://lists.ceph.io/hyperkitty/list/ceph...
- 01:44 AM crimson Bug #64934: crimson: failed assert in get_or_create_mapping
- PGShardMapping::get_or_create_mapping was renamed from maybe_create_pg()
- 01:01 AM RADOS Bug #64938 (Fix Under Review): Pool created with single PG splits into many on single OSD causes ...
- 12:51 AM RADOS Bug #64938 (Fix Under Review): Pool created with single PG splits into many on single OSD causes ...
- With autoscale mode ON, if a new pool is created without specifying the pg_num/pgp_num values then the pool gets crea...
- 12:47 AM crimson Bug #64789 (Resolved): crimson unitest timeout (Reactor backend: io_uring) as liburing mismatch
- 12:42 AM CephFS Bug #48562 (New): qa: scrub - object missing on disk; some files may be lost
- https://pulpito.ceph.com/yuriw-2024-03-12_14:59:27-fs-wip-yuri11-testing-2024-03-11-0838-reef-distro-default-smithi/7...
- 12:40 AM CephFS Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.Test...
- /teuthology/yuriw-2024-03-12_14:59:27-fs-wip-yuri11-testing-2024-03-11-0838-reef-distro-default-smithi/7593782/teutho...
- 12:33 AM CephFS Bug #64937 (Resolved): reef: qa: AttributeError: 'TestSnapSchedulesSubvolAndGroupArguments' objec...
- ...
03/14/2024
- 10:01 PM rgw Bug #64803: ninja all on fedora 39 fails because arrow_ext requires C++14
- Hey Casey, I just worked around it with...
- 02:41 PM rgw Bug #64803: ninja all on fedora 39 fails because arrow_ext requires C++14
- Thanks Brad. Our arrow submodule is rather old; it needs to be updated.
epel does have system packages for this (li...
- 08:24 PM cephsqlite Bug #63902 (Duplicate): Crashed MGR - sqlite3.InternalError: unknown operation
- 07:46 PM rgw Backport #64228: reef: sts: CreateRole fails with MalformedPolicyDocument if policy document cont...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55356
merged
- 05:45 PM bluestore Backport #64936 (In Progress): quincy: BlueStore/DeferredWriteTest.NewData/3 is broken
- https://github.com/ceph/ceph/pull/56201
- 05:30 PM bluestore Backport #64936 (In Progress): quincy: BlueStore/DeferredWriteTest.NewData/3 is broken
- 05:25 PM bluestore Backport #64123 (Rejected): pacific: ObjectStore/StoreTestSpecificAUSize.SpilloverFixedTest/2 fai...
- Pacific is at EOL
- 05:19 PM bluestore Backport #64647 (In Progress): reef: BlueStore/DeferredWriteTest.NewData/3 is broken
- https://github.com/ceph/ceph/pull/56199
- 05:19 PM bluestore Backport #64648 (In Progress): squid: BlueStore/DeferredWriteTest.NewData/3 is broken
- https://github.com/ceph/ceph/pull/56200
- 05:01 PM bluestore Backport #64928 (In Progress): reef: KeyValueDB/KVTest.RocksDB_estimate_size tests failing
- https://github.com/ceph/ceph/pull/56197
- 01:32 PM bluestore Backport #64928 (In Progress): reef: KeyValueDB/KVTest.RocksDB_estimate_size tests failing
- 04:53 PM crimson Bug #64935 (New): crimson: heap use after free during ~OSD()
- https://pulpito.ceph.com/sjust-2024-03-14_07:56:00-crimson-rados-wip-sjust-cb2b48d7-2024-03-12-distro-default-smithi/...
- 04:51 PM CephFS Backport #64926 (In Progress): reef: mds: disable defer_client_eviction_on_laggy_osds by default
- 12:56 PM CephFS Backport #64926 (In Progress): reef: mds: disable defer_client_eviction_on_laggy_osds by default
- https://github.com/ceph/ceph/pull/56196
- 04:51 PM CephFS Bug #62653: qa: unimplemented fcntl command: 1036 with fsstress
- In rados run: /a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7597989
- 04:51 PM crimson Bug #64934: crimson: failed assert in get_or_create_mapping
- Also:
https://pulpito.ceph.com/sjust-2024-03-14_07:56:00-crimson-rados-wip-sjust-cb2b48d7-2024-03-12-distro-default-...
- 04:44 PM crimson Bug #64934 (New): crimson: failed assert in get_or_create_mapping
- https://pulpito.ceph.com/sjust-2024-03-14_07:56:00-crimson-rados-wip-sjust-cb2b48d7-2024-03-12-distro-default-smithi/...
- 04:50 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> im in strong favour of having this done, considering the caveat discussed above i still feel... - 04:41 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Github docs for rate limits especiall... - 02:45 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- I'm in strong favour of having this done; considering the caveat discussed above I still feel the risk:reward is signi...
- 02:42 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Github docs for rate limits especially unauthenticated requests to... - 02:07 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya Parmar wrote:
> Github docs for rate limits especially unauthenticated requests to the raw content API is no... - 01:38 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- GitHub docs for rate limits, especially for unauthenticated requests to the raw content API, are non-existent, but since gi...
- 12:27 PM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- I'm thinking of making use of the ceph repo's raw content access of src/ceph_release, i.e. [0]. This is still a github API ca...
- 04:47 PM CephFS Backport #64925 (In Progress): quincy: mds: disable defer_client_eviction_on_laggy_osds by default
- 12:55 PM CephFS Backport #64925 (In Progress): quincy: mds: disable defer_client_eviction_on_laggy_osds by default
- https://github.com/ceph/ceph/pull/56195
- 04:46 PM CephFS Backport #64924 (In Progress): squid: mds: disable defer_client_eviction_on_laggy_osds by default
- 12:55 PM CephFS Backport #64924 (In Progress): squid: mds: disable defer_client_eviction_on_laggy_osds by default
- https://github.com/ceph/ceph/pull/56194
- 04:28 PM CephFS Feature #63664 (Fix Under Review): mds: add quiesce protocol for halting I/O on subvolumes
- 04:27 PM CephFS Tasks #63706 (Closed): mds: Integrate the QuiesceDbManager and the QuiesceAgent into the MDS rank
- Folding this into #63664.
- 04:26 PM CephFS Tasks #63709 (Closed): mds: Plug the QuiesceProtocol implementation into the QuiesceAgent control...
- Folding this into #63664.
- 04:16 PM Dashboard Bug #64933 (Pending Backport): mgr/dashboard: nfs mount command in attach dialog need syntax modi...
- h3. Description of problem
Currently it displays:
sudo mount -t nfs -o port=<PORT> <IP of active_mds daemon>:/ <...
- 03:51 PM Feature #57515: The way to know the data format of each OSD and MON was created
- https://github.com/ceph/ceph/pull/53858 merged
- 03:30 PM Orchestrator Backport #64932 (Resolved): reef: [node-proxy] the RedFishClient.logout() can never logout from t...
- https://github.com/ceph/ceph/pull/56252
- 03:30 PM Orchestrator Backport #64931 (Resolved): squid: [node-proxy] the RedFishClient.logout() can never logout from ...
- https://github.com/ceph/ceph/pull/56251
- 03:25 PM Orchestrator Bug #64894 (Pending Backport): [node-proxy] the RedFishClient.logout() can never logout from the ...
- 02:48 PM CephFS Bug #64927 (Fix Under Review): qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_i...
- 01:06 PM CephFS Bug #64927 (Fix Under Review): qa/cephfs: test_cephfs_mirror_blocklist raises "KeyError: 'rados_i...
- Saw this failure occur a couple of times in recent QA run for CephFS PRs -
https://pulpito.ceph.com/rishabh-2024-0...
- 02:43 PM rgw Bug #64253 (Won't Fix): bucket notifications publish all events when TopicConfiguration.Events ar...
- Yuval Lifshitz wrote:
> according to our documentation: https://docs.ceph.com/en/latest/radosgw/s3/bucketops/#create... - 02:18 PM RADOS Bug #64802: rados: generalize stretch mode pg temp handling to be usable without stretch mode
- peering_crush_bucket_[count|target|barrier]
- 01:47 PM RADOS Bug #64802: rados: generalize stretch mode pg temp handling to be usable without stretch mode
- Don't forget that there is also pg_pool_t::peering_crush_bucket_count that directly requires a minimum number of high...
- 12:38 PM RADOS Bug #64802: rados: generalize stretch mode pg temp handling to be usable without stretch mode
- My current plan/script to set up a vstart cluster to test the above hypothesis:...
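The script itself is truncated above; for context, a generic vstart bring-up from a build directory looks roughly like this (the cluster sizing is illustrative, not the author's):
cd build
MON=3 OSD=6 MDS=0 ../src/vstart.sh -n -x -d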
- 02:10 PM rgw Bug #64571: lifecycle transition crashes since merge end-to-end tracing
- saw an expiration crash in https://qa-proxy.ceph.com/teuthology/cbodley-2024-03-13_12:30:10-rgw-wip-cbodley-testing-d...
- 01:47 PM Dashboard Backport #64930 (In Progress): reef: mgr/dashboard: add cephfs authentication
- https://github.com/ceph/ceph/pull/56254
- 01:47 PM Dashboard Backport #64929 (In Progress): squid: mgr/dashboard: add cephfs authentication
- https://github.com/ceph/ceph/pull/56253
- 01:42 PM Dashboard Bug #64660 (Pending Backport): mgr/dashboard: add cephfs authentication
- 01:28 PM bluestore Bug #63121 (Pending Backport): KeyValueDB/KVTest.RocksDB_estimate_size tests failing
- 01:28 PM bluestore Bug #63121: KeyValueDB/KVTest.RocksDB_estimate_size tests failing
- Seeing this failure in reef runs: /a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-def...
- 01:17 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- /a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7598397
/a/yuriw-2024...
- 01:01 PM Orchestrator Bug #64208: test_cephadm.sh: Container version mismatch causes job to fail.
- /a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7598333
/a/yuriw-2024...
- 12:57 PM RADOS Backport #63559: reef: Heartbeat crash in osd
- /a/yuriw-2024-03-13_19:26:09-rados-wip-yuri-testing-2024-03-12-1240-reef-distro-default-smithi/7598201
- 12:56 PM CephFS Bug #64615: tools/first-damage: Skips root and lost+found inode
- We don't backport first-damage tool fixes to releases.
- 12:32 PM CephFS Bug #64615 (Resolved): tools/first-damage: Skips root and lost+found inode
- We don't backport first-damage tool fixes, do we?
- 12:54 PM CephFS Bug #64685 (Pending Backport): mds: disable defer_client_eviction_on_laggy_osds by default
- 12:32 PM Messengers Bug #64923 (New): When creating qp fails, ceph service crash
- In my RoCE environment:
ubuntu22.04 Linux node194 5.15.0-60-generic #66-Ubuntu SMP Fri Jan 20 14:29:49 UTC 2023 x86_6...
- 11:25 AM CephFS Backport #64922 (New): reef: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:25 AM CephFS Backport #64921 (New): quincy: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:25 AM CephFS Backport #64920 (New): squid: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:25 AM CephFS Backport #64919 (In Progress): reef: qa: enhance labeled perf counters test for cephfs-mirror
- https://github.com/ceph/ceph/pull/56211
- 11:25 AM CephFS Backport #64918 (In Progress): squid: qa: enhance labeled perf counters test for cephfs-mirror
- https://github.com/ceph/ceph/pull/56210
- 11:23 AM CephFS Bug #64058 (Pending Backport): qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 11:22 AM CephFS Bug #64486 (Pending Backport): qa: enhance labeled perf counters test for cephfs-mirror
- 11:06 AM Orchestrator Bug #63784: qa/standalone/mon/mkfs.sh:'mkfs/a' already exists and is not empty: monitor may alrea...
- /a/yuriw-2024-03-12_18:29:22-rados-wip-yuri8-testing-2024-03-11-1138-distro-default-smithi/7594882
- 11:00 AM RADOS Bug #64917 (New): SnapMapperTest.CheckObjectKeyFormat object key changed
- /a/yuriw-2024-03-12_18:29:22-rados-wip-yuri8-testing-2024-03-11-1138-distro-default-smithi/7594695...
- 10:28 AM rbd Backport #64916 (In Progress): reef: [diff-iterate] discards that truncate aren't accounted for b...
- https://github.com/ceph/ceph/pull/56213
- 10:27 AM rbd Backport #64915 (In Progress): quincy: [diff-iterate] discards that truncate aren't accounted for...
- https://github.com/ceph/ceph/pull/56212
- 10:27 AM rbd Backport #64914 (Resolved): squid: [diff-iterate] discards that truncate aren't accounted for by ...
- https://github.com/ceph/ceph/pull/56216
- 10:26 AM rbd Bug #63770 (Pending Backport): [diff-iterate] discards that truncate aren't accounted for by Obje...
- 09:32 AM Bug #64305: ceph_assert error on rgw start in rook-ceph-rgw-ceph-objectstore pod
- rook 1.13 supports ceph v17 or v18.
but according to: https://docs.ceph.com/en/latest/start/os-recommendations/
we ...
- 08:54 AM Dashboard Bug #64913 (New): mgr/dashboard: Allow DELETE method api request, X-TOTAL-COUNT header in CORS co...
- Allow DELETE method api request, X-TOTAL-COUNT header in CORS config in dashboard
- 07:37 AM CephFS Bug #64912 (Fix Under Review): make check: QuiesceDbTest.MultiRankRecovery Failed
- From: https://jenkins.ceph.com/job/ceph-pull-requests/131349/console...
- 07:04 AM crimson Bug #64728 (Resolved): osd crashes when there are enough number of pgs in a single seastore based...
- 06:58 AM Dashboard Cleanup #64911 (New): mgr/dashboard: fix creation of empty tag
- When creating a bucket from the dashboard, the tagset field was empty on creation if no tags were set.
Tagset field ...
- 06:00 AM Dashboard Bug #62972: ERROR: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)
- https://jenkins.ceph.com/job/ceph-api/70518/
- 05:59 AM rgw Backport #64909 (In Progress): squid: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- 05:58 AM rgw Backport #64909 (Resolved): squid: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- https://github.com/ceph/ceph/pull/56181
- 05:58 AM rgw Backport #64910 (New): quincy: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- 05:58 AM rgw Backport #64908 (New): reef: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- 05:56 AM rgw Bug #64875 (Pending Backport): rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- 04:24 AM CephFS Bug #64659: mds: switch to using xlists instead of elists
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Patrick Donnelly wrote:
> > > > Dhai...
- 03:34 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Please use https://tracker.ceph.com/issues/64907 for further discussion of "wrong versions of protobuf files installed"
- 03:19 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- I'm going to open another bug about the protoc/libprotobuf-dev damage. This bug is now about the missing runtime dep...
- 03:18 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- > adami01 has the same issue
No it doesn't:
[dmick@adami01 ~]$ ls /usr/bin/protoc*
/usr/bin/protoc
- 03:17 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Indeed, the libprotobuf in the copr repo is libprotobuf.so.30, the version that the built packages require...that is,...
- 02:50 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Samuel Just wrote:
> From what we can tell, someone tampered with /usr/bin/protoc on 3 jammy hosts:
>
> [...]
> ...
- 03:33 AM Bug #64907 (New): Some jammy jenkins builders had rogue versions of protobuf packages
- This is a breakout of the original issue from https://tracker.ceph.com/issues/64696
sjust and dmick diagnosed a bu...
- 03:08 AM rgw Bug #63657: The usage of all buckets will be deleted(If you execute DELETE /admin/usage?bucket={b...
- This bug does not reproduce on main.
See the output of this script (https://paste.sh/2IyRPtr4#tuYSmCnqmS4RlogGPvdX... - 12:35 AM CephFS Bug #50719: xattr returning from the dead (sic!)
- Matthew Hutchinson wrote:
> Is there anything in the logs you saw that could be causing this issue? I am eager to ge...
03/13/2024
- 11:10 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- https://github.com/ceph/ceph/pull/55444 added protobuf-devel and protobuf-compiler as build requirements for a new ve...
- 09:14 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Changing the title as it seems like this is the cause for the centos8 shaman failures with Crimson.
https://jenkin...
- 09:11 PM rgw Feature #64903 (New): sns: support AddPermission/RemovePermission for granular topic policy
- rgw supports setting topic policy via the Policy attribute in SetTopicAttributes. add support for AddPermission and R...
- 08:01 PM Orchestrator Bug #64899 (Fix Under Review): bootstrap fails when no container engine is present
- 02:30 PM Orchestrator Bug #64899 (Fix Under Review): bootstrap fails when no container engine is present
[root@node-1 cephadm]# ./cephadm bootstrap --mon-ip=10.10.10.11 --skip-dashboard --skip-monitoring-stack --single-hos...
- 07:30 PM Orchestrator Bug #64902 (In Progress): cephadm: public_network config check does not pick up changes in public...
- 07:30 PM Orchestrator Bug #64902 (In Progress): cephadm: public_network config check does not pick up changes in public...
- This causes a bit of a workflow issue with responding to the health warning. A user will see the warning, and potenti...
- 07:12 PM Orchestrator Backport #64699 (Rejected): quincy: allow idmap overrides in nfs-ganesha configuration
- 07:12 PM Orchestrator Backport #64699: quincy: allow idmap overrides in nfs-ganesha configuration
- serious merge conflicts in this backport. Going to say we don't bother unless we really need this in quincy
- 06:51 PM crimson Bug #64040: PGBackend unhandled throw exceptions
- Need to understand the difference between...
- 06:48 PM crimson Bug #64040: PGBackend unhandled throw exceptions
- Nope, that didn't work:...
- 11:39 AM crimson Bug #64040: PGBackend unhandled throw exceptions
- I think for those methods with signature XX::interruptible_future<> we can replace a throw by the corresponding cri...
- 06:21 PM Orchestrator Feature #62185 (Resolved): Add "networks" parameter to "orch apply rgw"
- 06:21 PM Orchestrator Backport #62447 (Resolved): quincy: Add "networks" parameter to "orch apply rgw"
- 06:05 PM rbd Backport #64274 (Resolved): quincy: rbd_snap_get_timestamp() hits an assert if non-existing snap ...
- 04:44 PM RADOS Bug #57782: [mon] high cpu usage by fn_monstore thread
- Hi,
Thanks to this article https://blog.palark.com/sre-troubleshooting-ceph-systemd-containerd/, I think root caus...
- 04:02 PM mgr Feature #64900 (New): Expose RBD per-pool stats in mgr/prometheus module
- Allow the mgr/prometheus module to collect RBD per-pool stats.
Such stats are now available via `rbd pool stats` c...
- 03:16 PM bluestore Backport #64072 (Resolved): quincy: src/common/PriorityCache.cc: FAILED ceph_assert(mem_avail >= 0)
- 02:08 PM Orchestrator Backport #64627 (Resolved): reef: cephadm: ceph-exporter fails to deploy when placed first
- 02:07 PM ceph-volume Bug #64898 (Fix Under Review): ceph-volume lvm zap fails with "undefined name 'List'"
- 02:02 PM ceph-volume Bug #64898 (Pending Backport): ceph-volume lvm zap fails with "undefined name 'List'"
- when zapping an encrypted partition, ceph-volume fails because List is undefined.
- 02:07 PM Orchestrator Backport #64622 (Resolved): reef: mgr/cephadm is not defining haproxy tcp healthchecks for Ganesha
- 02:05 PM Orchestrator Backport #64620 (Resolved): reef: cephadm is not accounting for the memory required nvme gateways...
- 02:01 PM rgw Bug #64184: test_bn.py -v -a kafka_test: Fatal glibc error: tpp.c:87 (__pthread_tpp_change_priori...
- similar crash, but with "Attempt to free invalid pointer" in tcmalloc:...
- 01:59 PM Orchestrator Backport #63447 (Resolved): reef: cephadm: remove host entry from crush map during host removal
- 01:51 PM CephFS Backport #64218 (In Progress): reef: fs/cephadm/renamevolume: volume rename failure
- 01:50 PM CephFS Backport #64217 (In Progress): quincy: fs/cephadm/renamevolume: volume rename failure
- 01:49 PM CephFS Backport #64047 (In Progress): reef: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.T...
- 01:49 PM CephFS Backport #64046 (In Progress): quincy: qa: test_fragmented_injection (tasks.cephfs.test_data_scan...
- 01:48 PM CephFS Backport #64250 (In Progress): reef: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is...
- 01:47 PM CephFS Backport #64249 (In Progress): quincy: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' ...
- 01:34 PM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- Ilya Dryomov wrote:
> No, snap2 would continue to exist and one should be able to "rollback" to it. Rollback is rea...
- 10:16 AM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- Matan Breizman wrote:
> Ilya Dryomov wrote:
> > Put another way: rollback is a destructive operation. One isn't ex...
- 10:00 AM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- Ilya Dryomov wrote:
> Put another way: rollback is a destructive operation. One isn't expected to be able to go bac... - 01:19 PM Orchestrator Bug #64894 (Fix Under Review): [node-proxy] the RedFishClient.logout() can never logout from the ...
- 01:14 PM Orchestrator Bug #64894 (Resolved): [node-proxy] the RedFishClient.logout() can never logout from the redfish API
- the endpoint passed down to util.query() is wrong, it makes the client unable to properly logout.
- 01:15 PM RADOS Bug #64897 (New): unittest_ceph_crypto - valgrind failed
- running the unit test with valgrind:
ctest -R unittest_ceph_crypto -T memcheck... - 01:15 PM CephFS Backport #64896 (In Progress): squid: mds: QuiesceDb to manage subvolume quiesce state
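For anyone reproducing: ctest's memcheck mode writes its reports under the build tree, so after a run like the above the valgrind output can be inspected with (assuming valgrind is installed and memcheck is configured for the build):
less Testing/Temporary/MemoryChecker.*.log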
- PR: https://github.com/ceph/ceph/pull/56202
- 01:14 PM RADOS Bug #64895 (New): unittest_perf_counters_cache - valgrind failed
- running the unit test with valgrind:
ctest -R unittest_perf_counters_cache -T memcheck...
- 01:13 PM RADOS Bug #64893 (New): unittest_bufferlist - valgrind failed
- running the unit test with valgrind:
ctest -R unittest_bufferlist -T memcheck...
- 01:11 PM RADOS Bug #64892 (New): unittest_ipaddr - valgrind failed
- running the unit test with valgrind:
ctest -R unittest_ipaddr -T memcheck...
- 01:09 PM CephFS Tasks #63708 (Resolved): mds: MDS message transport for inter-rank QuiesceDbManager communications
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:09 PM rgw Bug #64816: garbage collection not processing
- Thanks for the info.
I've tried running `radosgw-admin gc process` a few times but it doesn't appear to be doing anyt...
- 01:09 PM CephFS Feature #63666 (Resolved): pybind/mgr/volumes: add quiesce protocol API
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:09 PM CephFS Feature #63668 (Resolved): pybind/mgr/volumes: add quiesce protocol API
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:09 PM CephFS Tasks #63707 (Resolved): mds: AdminSocket command to control the QuiesceDbManager
- Backport tracking in https://tracker.ceph.com/issues/63665
- 01:08 PM RADOS Bug #64891 (New): unittest_admin_socket - valgrind failed
- running the unit test with valgrind:
ctest -R unittest_admin_socket -T memcheck...
- 01:08 PM CephFS Feature #63665 (Pending Backport): mds: QuiesceDb to manage subvolume quiesce state
- 01:05 PM Dashboard Feature #64890 (In Progress): mgr/dashboard: update NVMe-oF API
- NVMe-oF API requires several fixes/improvements:
# Error handling,
# Not bypassing gRPC data (currently it basical...
- 01:01 PM CephFS Bug #50719: xattr returning from the dead (sic!)
- Matthew Hutchinson wrote:
> Is there anything in the logs you saw that could be causing this issue? I am eager to ge...
- 12:57 PM CephFS Bug #50719: xattr returning from the dead (sic!)
- Is there anything in the logs you saw that could be causing this issue? I am eager to get this resolved for all of ou...
- 12:59 PM mgr Bug #49693: Manager daemon is unresponsive, replacing it with standby daemon
- Are there any logs or information needed to get this resolved? Currently the dashboard is disabled on the cluster to stop th...
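For reference, toggling the module uses the standard mgr module commands (shown only as a sketch of the workaround mentioned above):
ceph mgr module disable dashboard
ceph mgr module enable dashboard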
- 12:55 PM CephFS Backport #64809 (Rejected): pacific: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- 12:52 PM CephFS Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
- I hit this while testing one of my PRs: https://github.com/ceph/ceph/pull/56153
https://pulpito.ceph.com/khiremat-2024-...
- 12:11 PM rgw Backport #64886 (In Progress): quincy: kafka: RGW hangs when broker is down for no persistent not...
- 10:35 AM rgw Backport #64886 (In Progress): quincy: kafka: RGW hangs when broker is down for no persistent not...
- https://github.com/ceph/ceph/pull/56163
- 12:08 PM CephFS Bug #63265: qa: fs/snaps/snaptest-git-ceph.sh failed when resetting to tag 'v0.1'
- https://pulpito.ceph.com/rishabh-2024-03-08_19:54:47-fs-rishabh-2024mar8-testing-default-smithi/7588250.
Same erro...
- 11:56 AM rgw Bug #64889 (New): Deleting an rgw realm, does not clear the 'realm_id' and it is listed in the 'd...
- Description of problem:
Deleting an rgw realm does not clear the 'realm_id', and it is listed in the 'default_inf...
- 11:41 AM rgw Bug #64888 (In Progress): RFE: Realm get default should show the realm name with id.
- [root@magna016 ~]# radosgw-admin realm create --rgw-realm test2 --default
{
"id": "1f1ad6c8-47ae-46bf-8f29-b...
- 11:31 AM rgw Backport #64885 (In Progress): reef: kafka: RGW hangs when broker is down for no persistent notif...
- 10:35 AM rgw Backport #64885 (In Progress): reef: kafka: RGW hangs when broker is down for no persistent notif...
- https://github.com/ceph/ceph/pull/56158
- 10:47 AM rgw Backport #64887 (In Progress): squid: kafka: RGW hangs when broker is down for no persistent noti...
- 10:36 AM rgw Backport #64887 (Resolved): squid: kafka: RGW hangs when broker is down for no persistent notific...
- https://github.com/ceph/ceph/pull/56156
- 10:41 AM Dashboard Backport #64884 (In Progress): squid: mgr/dashboard: fix snap schedule time format
- 10:18 AM Dashboard Backport #64884 (In Progress): squid: mgr/dashboard: fix snap schedule time format
- https://github.com/ceph/ceph/pull/56155
- 10:40 AM Dashboard Backport #64883 (In Progress): reef: mgr/dashboard: fix snap schedule time format
- 10:18 AM Dashboard Backport #64883 (In Progress): reef: mgr/dashboard: fix snap schedule time format
- https://github.com/ceph/ceph/pull/56154
- 10:29 AM rgw Bug #64710 (Pending Backport): kafka: RGW hangs when broker is down for no persistent notifications
- 10:19 AM rgw Bug #64253: bucket notifications publish all events when TopicConfiguration.Events array is empty
- according to our documentation: https://docs.ceph.com/en/latest/radosgw/s3/bucketops/#create-notification...
- 10:17 AM Dashboard Bug #64831 (Pending Backport): mgr/dashboard: fix snap schedule time format
- 09:56 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > I have finally reproduced by pulling the latest ceph code. I believe ... - 09:12 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> I have finally reproduced by pulling the latest ceph code. I believe there is one commit in MDS... - 09:10 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Rerunning with the updated testing kernel
https://pulpito.ceph.com/vshankar-2024-03-13_05:41:30-fs-wip-vshankar-te... - 02:55 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- I have finally reproduced by pulling the latest ceph code. I believe there is one commit in the MDS that has improved the ...
- 09:36 AM crimson Feature #64086 (Resolved): Enable multicore messenger
- https://github.com/ceph/ceph/pull/55641
https://github.com/ceph/ceph/pull/55708 - 08:37 AM devops Bug #64882 (Fix Under Review): rocksdb: Fast CRC32 supported: Not supported on Arm64
- When building rocksdb, the build output shows:
Fast CRC32 supported: Not supported on Arm64
But actually it is supported. - 08:08 AM RADOS Backport #64881 (In Progress): reef: singleton/ec-inconsistent-hinfo.yaml: Include a possible ben...
- 07:34 AM RADOS Backport #64881 (In Progress): reef: singleton/ec-inconsistent-hinfo.yaml: Include a possible ben...
- https://github.com/ceph/ceph/pull/56151
- 07:32 AM RADOS Bug #64314 (Resolved): cluster log: Cluster log level string representation missing in the cluste...
- 07:30 AM RADOS Fix #64573 (Pending Backport): singleton/ec-inconsistent-hinfo.yaml: Include a possible benign cl...
- 06:22 AM Dashboard Bug #64813 (Resolved): mgr/dashboard: fix snap schedule list toggle cols
- 06:21 AM Dashboard Backport #64825 (Resolved): squid: mgr/dashboard: fix snap schedule list toggle cols
- 06:21 AM Dashboard Backport #64826 (Resolved): reef: mgr/dashboard: fix snap schedule list toggle cols
- 06:18 AM Dashboard Bug #64880 (Fix Under Review): mgr/dashboard: dashboard multi-cluster improvements and bug fixes
- 1. Since, after adding a new cluster, we add its IP to the list of prometheus federation targets. The metrics of the ad...
- 05:26 AM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Brad Hubbard wrote:
> Nitzan Mordechai wrote:
> > now the segfault happens on check_one function where we also have... - 02:27 AM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Nitzan Mordechai wrote:
> now the segfault happens on check_one function where we also have pre-regex to truncate th... - 05:07 AM CephFS Bug #64751 (Fix Under Review): cephfs-mirror coredumped when acquiring pthread mutex
- 04:03 AM CephFS Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- @Xiubo Li Could you briefly summarise them for me? For the second one, it's marked as Resolved but there seems to be ...
- 02:14 AM Orchestrator Bug #64879 (In Progress): cephadm: cephadm shell with --name fail when named daemon is stopped/fa...
- The `cephadm shell` command with --name uses `podman inspect` on the container of the daemon to pick out an image to ...
- 01:49 AM rgw Backport #64795 (Resolved): squid: rgw: compatibility issues on BucketPublicAccessBlock
- 01:40 AM Orchestrator Backport #64635 (Resolved): reef: cephadm/nvmeof: scrape nvmeof prometheus endpoint
- 01:39 AM Orchestrator Backport #64689 (Resolved): reef: cephadm: host filtering with label and host pattern only uses t...
- 01:38 AM Orchestrator Backport #64644 (Resolved): reef: cephadm: remove restriction for crush device classes
- 01:38 AM Orchestrator Backport #64634 (Resolved): reef: cephadm: cephadm does not clean up /etc/ceph/podman-auth.json i...
- 01:37 AM Orchestrator Bug #64229 (Resolved): cephadm: traceback when running `cephadm ls` with nvmeof daemon present
- 01:36 AM Orchestrator Backport #64414 (Resolved): reef: cephadm: traceback when running `cephadm ls` with nvmeof daemon...
- 01:36 AM Orchestrator Feature #63864 (Resolved): When listing devices it would be helpful to have a summary footer with...
- 01:36 AM Orchestrator Backport #63985 (Resolved): reef: When listing devices it would be helpful to have a summary foot...
- 01:35 AM Orchestrator Bug #63865 (Resolved): ceph orch host ls --detail reports the incorrect CPU thread count
- 01:35 AM Orchestrator Backport #63984 (Resolved): reef: ceph orch host ls --detail reports the incorrect CPU thread count
- 01:35 AM Orchestrator Backport #63817 (Resolved): reef: cephadm: drivespec limit not working correctly
- 01:34 AM Orchestrator Backport #63815 (Resolved): reef: OSD with dedicated db devices redeployments fail when no servic...
- 01:33 AM Orchestrator Bug #63388 (Resolved): mgr: discovery service (port 8765) fails if ms_bind ipv6 only
- 01:33 AM Orchestrator Backport #63448 (Resolved): reef: mgr: discovery service (port 8765) fails if ms_bind ipv6 only
- 01:32 AM Orchestrator Backport #63434 (Resolved): reef: cephadm: daemon events not updated on repeat events.
- 01:30 AM Orchestrator Bug #59704 (Resolved): jaeger-agents in error state if deployed before jaeger-collector
- 01:30 AM Orchestrator Backport #63190 (Resolved): reef: jaeger-agents in error state if deployed before jaeger-collector
03/12/2024
- 08:29 PM RADOS Bug #64725 (Fix Under Review): rados/singleton: application not enabled on pool 'rbd'
- 01:48 PM RADOS Bug #64725: rados/singleton: application not enabled on pool 'rbd'
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587549
- 07:44 PM rgw Feature #64083: Response code from rgw rate-limit should be 429 not 503, best is configurable
- +1 on this request
- 07:35 PM rgw Backport #64793 (In Progress): quincy: Notification kafka: Persistent messages are removed even w...
- 07:30 PM rgw Backport #64876 (In Progress): squid: x-amz-expiration HTTP header: expiry-date sometimes broken
- 07:19 PM rgw Backport #64876 (Resolved): squid: x-amz-expiration HTTP header: expiry-date sometimes broken
- https://github.com/ceph/ceph/pull/56144
- 07:19 PM rgw Backport #64878 (New): reef: x-amz-expiration HTTP header: expiry-date sometimes broken
- 07:19 PM rgw Backport #64877 (New): quincy: x-amz-expiration HTTP header: expiry-date sometimes broken
- 07:12 PM rgw Bug #63973 (Pending Backport): x-amz-expiration HTTP header: expiry-date sometimes broken
- 07:03 PM Orchestrator Bug #64720: Cannot infer CIDR network on Hetzner Cloud
- In order to reliably fix the issue without regressions we'll want to reproduce the problem. However, I can't find doc...
- 07:01 PM rgw Backport #64861: squid: Notification kafka: Persistent messages are removed even when the broker ...
- Krunal Chheda wrote:
> @yuval, is the tag v19.0.0 for squid and is this the squid code base (https://github.com/cep... - 06:45 PM rgw Backport #64861: squid: Notification kafka: Persistent messages are removed even when the broker ...
- @yuval, is the tag v19.0.0 for squid and is this the squid code base (https://github.com/ceph/ceph/blob/v19.0.0/src/r...
- 12:41 PM rgw Backport #64861 (Rejected): squid: Notification kafka: Persistent messages are removed even when ...
- Krunal Chheda wrote:
> Why do we need a backport for squid? This PR was part of main before squid was cut
you ... - 12:37 PM rgw Backport #64861: squid: Notification kafka: Persistent messages are removed even when the broker ...
- Why do we need a backport for squid? This PR was part of main before squid was cut
- 12:14 PM rgw Backport #64861 (Rejected): squid: Notification kafka: Persistent messages are removed even when ...
- 06:21 PM RADOS Bug #58436: ceph cluster log reporting log level in numeric format for the clog messages
- https://github.com/ceph/ceph/pull/49730 merged
- 05:29 PM rgw Backport #64768 (Resolved): squid: rgw: awssigv4: new trailer boundary case
- 05:15 PM rgw Bug #64875 (Pending Backport): rgw: rgw-restore-bucket-index -- sort uses specified temp dir
- Allows sort to use the specified temporary directory. Also fixes a bug in the temp file clean-up (backslash missing).
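Presumably this means handing the directory through to GNU sort rather than letting it default to /tmp, along these lines (illustrative only, not the exact script change; @$temp_dir@ and the file names are placeholders):
  sort -T "$temp_dir" unsorted-entries > sorted-entries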
- 05:03 PM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- Ilya Dryomov wrote:
> This is because rollback discards all changes made to image HEAD and makes it identical to the... - 04:30 PM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- Matan Breizman wrote:
> the suggested change here suggests that the disk usage should actually be:
> NAME ... - 04:13 PM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- Hi Matan,
We are able to roll back and forth between arbitrary snapshots, and the suggested change in https://... - 02:24 PM RADOS Bug #64735 (Need More Info): OSD/MON: rollback_to snap the latest overlap is not right
- We should first understand whether this is a bug or intentional behavior, given the following order of operations:
<... - 04:53 PM crimson Bug #64040: PGBackend unhandled throw exceptions
- Looking at the same source (src/crimson/osd/pg_backend.cc), there are uses of ct_error() instead of thrown exceptions, ...
- 04:29 PM crimson Bug #64040: PGBackend unhandled throw exceptions
- I was curious about this, and having a look at src/crimson/common/errorator.h (which looks quite scary), particularly...
- 04:34 PM rbd Bug #64874 (New): post-rollback "rbd du" output is incorrect
- See discussion in https://tracker.ceph.com/issues/64735:
> There are two separate areas here:
> - rollback op han... - 03:41 PM rgw Backport #64764 (Resolved): squid: SSL session id reuse speedup mechanism of the SSL_CTX_set_sess...
- 03:35 PM RADOS Bug #64437: qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13
- /a/yuriw-2024-03-08_16:19:51-rados-wip-yuri2-testing-2024-03-01-1606-distro-default-smithi/7587184
- 01:20 PM RADOS Bug #64437: qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587334
- 03:33 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- /a/yuriw-2024-03-08_16:19:51-rados-wip-yuri2-testing-2024-03-01-1606-distro-default-smithi/7587174/
- 01:18 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587531
/a/yuriw-2024-03-... - 03:27 PM Dashboard Bug #64873 (New): mgr/dashboard: Health check failed: Degraded data redundancy: 2/6 objects degra...
- h3. Description of problem
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smi... - 03:09 PM Orchestrator Bug #64872 (New): rados/cephadm/smoke: Health check failed: 1 stray daemon(s) not managed by ceph...
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587708
Description: ra... - 02:48 PM Orchestrator Bug #64871 (New): rados/cephadm/workunits: Health check failed: 1 failed cephadm daemon(s) (CEPHA...
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587672
/a/yuriw-2024-03-... - 02:33 PM Dashboard Bug #64870 (New): mgr/dashboard: Health check failed: 1 osds down (OSD_DOWN)" in cluster log
- h3. Description of problem
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smi... - 02:21 PM rgw Backport #64792 (In Progress): reef: Notification kafka: Persistent messages are removed even whe...
- 02:15 PM RADOS Bug #64869 (New): rados/thrash: slow reservation response from 1 (115547ms) in cluster log
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587833
The cluster log... - 02:14 PM rgw Backport #64661 (Resolved): squid: uncaught exception from AWSv4ComplMulti during java AWS4Test.t...
- 02:14 PM rgw Backport #64664 (Resolved): squid: object lock: An object uploaded through a multipart upload can...
- 02:14 PM rgw Backport #64493 (Resolved): squid: Disable/Enable access key Feature
- 02:06 PM Bug #64827 (Fix Under Review): osd/scrub: "reservation requested while still reserved" error in c...
- 01:59 PM Orchestrator Bug #64868 (New): cephadm/osds, cephadm/workunits: Health check failed: 1 pool(s) do not have an ...
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587558
/a/yuriw-2024-03-... - 01:53 PM rbd Bug #64785: RBD persistent error corruption
- Ilya Dryomov wrote:
> What exactly does "virtual host runs out of memory" amount to -- QEMU processes getting axed... - 01:49 PM rgw Bug #64867 (New): Doc for #migrating-a-single-site-deployment-to-multi-site needs to be updated.
- The upstream documentation for migrating a single site to a multisite (with default zone configuration) needs to be up...
- 01:27 PM RADOS Bug #64866 (New): rados/test.sh: LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/1 ...
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587349
There was a sim... - 01:19 PM RADOS Bug #62832 (Resolved): common: config_proxy deadlock during shutdown (and possibly other times)
- 01:19 PM RADOS Backport #63457 (Resolved): quincy: common: config_proxy deadlock during shutdown (and possibly o...
- 01:16 PM Orchestrator Bug #64865 (New): cephadm: Health check failed: 1 osds down (OSD_DOWN) in cluster log
- The following tests in the cephadm suite failed with the warning:
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-tes... - 12:56 PM Orchestrator Bug #64864 (New): cephadm: Health detail: HEALTH_WARN 1/3 mons down, quorum a,c in cluster log
- The following tests in the cephadm suite failed with the warning:
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-tes... - 12:55 PM CephFS Bug #64751 (In Progress): cephfs-mirror coredumped when acquiring pthread mutex
- 12:54 PM Bug #61582: rbd unable to create snapshot: failed to allocate snapshot id: (95) Operation not sup...
- The issue still exists on Ceph version 18.2.1; for me, removing the CephFS on the affected pool isn't an option.
- 12:52 PM CephFS Bug #63907 (Duplicate): cephfs-mirror: Mirror::update_fs_mirrors crashes while taking lock
- https://tracker.ceph.com/issues/64751
- 12:44 PM RADOS Bug #64863 (New): rados/thrash-old-clients: Health detail: HEALTH_WARN 1/3 mons down, quorum a,c ...
- The following tests in the rados suite failed with the warning:
/a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testi... - 12:38 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Venky Shankar wrote:
> > > Dhairya Parmar wrote:
> > > > Venky S... - 12:10 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> the old client can simply be put on hold - revoke caps and pause I/O. Wait for the time auto... - 12:09 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Venky Shankar wrote:
> > > > Dhairya... - 11:50 AM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Venky Shankar wrote:
> > > Dhairya Parmar wrote:
> > > > Patrick... - 11:32 AM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> Venky Shankar wrote:
> > Dhairya Parmar wrote:
> > > Patrick Donnelly wrote:
> > > > Dhai... - 12:37 PM rgw Backport #64773 (Resolved): squid: rgw: make rgw-restore-bucket-index more robust
- 12:32 PM rgw Backport #63839 (Resolved): reef: [Errno 13] Permission denied: '/vstart_runner.log'
- 12:21 PM RADOS Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587455
- 12:18 PM Orchestrator Bug #52109: test_cephadm.sh: Timeout('Port 8443 not free on 127.0.0.1.',)
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587370
- 12:14 PM crimson Feature #64862 (In Progress): Support PG split/merge
- 12:06 PM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Locally I have tried all the possible cases with the upstream ceph code ... - 09:22 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> Locally I have tried all the possible cases with the upstream ceph code and couldn't reproduce it,... - 07:53 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Locally I have tried all the possible cases with the upstream ceph code and couldn't reproduce it, and have partially...
- 02:04 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo Li wrote:
> > > > Venky Shankar wrote... - 01:25 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > Venky Shankar wrote:
> > > > Xiubo,
> > > >
>... - 12:01 PM CephFS Bug #64729: mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadat...
- Discussed a bit in standup - this could be fallout from the recently introduced mdlog trim decay counter.
- 11:30 AM bluestore Backport #64860 (In Progress): reef: ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- https://github.com/ceph/ceph/pull/56139
- 10:57 AM bluestore Backport #64860 (In Progress): reef: ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- 11:27 AM bluestore Backport #64858 (In Progress): quincy: ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- https://github.com/ceph/ceph/pull/56138
- 10:57 AM bluestore Backport #64858 (In Progress): quincy: ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- 11:26 AM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Now the segfault happens in the check_one function, where we also have a pre-regex to truncate the output that is causing the segfa...
- 07:55 AM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- according to the console logs:...
- 04:22 AM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Radoslaw Zarzynski wrote:
> The fix isn't merged yet, which could explain the recurrence above
The run mentioned... - 11:12 AM bluestore Backport #64859 (Rejected): pacific: ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- Not applied to Pacific
- 10:57 AM bluestore Backport #64859 (Rejected): pacific: ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- 10:59 AM bluestore Backport #59010: pacific: OSD metadata should show the min_alloc_size that each OSD was built with
- Fixed starting with v16.2.14
- 10:55 AM bluestore Bug #62330 (Pending Backport): ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- 10:40 AM bluestore Bug #62330: ObjectStore/StoreTest.BluestoreRepairGlobalStats/1 is broken
- Fixed as a part of https://github.com/ceph/ceph/pull/53178
- 10:32 AM bluestore Backport #62779 (Rejected): pacific: btree allocator doesn't pass alloctor's UTs
- Pacific is at EOL
- 10:25 AM CephFS Bug #64042 (Fix Under Review): mgr/snap_schedule: Adding retention which already exists gives imp...
- 08:28 AM RADOS Bug #64514 (Duplicate): LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- Closing as this is a duplicate.
- 08:27 AM RADOS Bug #64646: ceph osd pool rmsnap clone object leak
- Radoslaw Zarzynski wrote:
> Need a squid backport as well.
Awaiting main merge (https://github.com/ceph/ceph/pull... - 08:19 AM CephFS Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- Niklas Hambuechen wrote:
> Are these possibly related?
>
> * https://tracker.ceph.com/issues/63364 - MDS_CLIENT_O... - 04:57 AM CephFS Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- Are these possibly related?
* https://tracker.ceph.com/issues/63364 - MDS_CLIENT_OLDEST_TID: 15 clients failing to... - 04:55 AM CephFS Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- Some logs from `/var/log/ceph/ceph-mds.mycluster-node-4.log` to show that the problematic op hung for multiple hours:...
- 04:42 AM CephFS Bug #64852: MDS hangs on "joining batch getattr" when client does statx
- The log this is triggering is this:
https://github.com/ceph/ceph/commit/5cf60960b642f999ce08d404a6b6e14c1eb434ca
... - 04:37 AM CephFS Bug #64852 (New): MDS hangs on "joining batch getattr" when client does statx
- Every couple days, our CephFS hangs on one specific directory:...
- 08:15 AM CephFS Bug #64856 (New): mds crashes when extracting from a tar is cancelled
- On fresh @vstart@ cluster following commands were run -...
- 07:24 AM Dashboard Bug #64855 (New): mgr/dashboard: improve bucket deletion feedback
- Improve bucket deletion feedback, by adding a notification on bucket deletion and adding better explanation on the bu...
- 07:07 AM Dashboard Backport #64594 (Resolved): squid: mgr/dashboard: fix volume creation with multiple hosts
- 06:31 AM RADOS Bug #64854 (In Progress): decoding chunk_refs_by_hash_t return wrong values
- When running the ceph-dencoder test on a clang-14 build, the JSON dump of chunk_refs_by_hash_t will show:...
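For anyone reproducing, the usual invocation for this kind of check is along these lines (the input file name here is made up):
  ceph-dencoder type chunk_refs_by_hash_t import encoded.bin decode dump_json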
- 06:28 AM CephFS Bug #63514 (Fix Under Review): mds: avoid sending inode/stray counters as part of health warning ...
- 06:09 AM Backport #64836 (Resolved): reef: run-tox-mgr-dashboard-py3 failing with many 400 responses
- 06:09 AM Backport #64838 (Resolved): squid: run-tox-mgr-dashboard-py3 failing with many 400 responses
- 06:02 AM RADOS Bug #56393: failed to complete snap trimming before timeout
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587430
/a/yuriw-2024-03-... - 05:39 AM CephFS Bug #64717: MDS stuck in replay/resolve use
- Hi Abhishek,
This tracker was discussed in yesterday's cephfs standup. In particular, the mds crash backtrace that... - 05:03 AM Bug #64853 (New): Assertion failure common/buffer.cc: 510: FAILED ceph_assert(_raw) while dumping...
- I found this crash on one of the clusters to which I have access.
It was not me who told the MDS to dump the inode... - 04:58 AM CephFS Bug #61947: mds: enforce a limit on the size of a session in the sessionmap
- Potentially related issue:
* https://tracker.ceph.com/issues/64852 - MDS hangs on "joining batch getattr" when cli... - 04:58 AM CephFS Bug #63364: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush tid
- Potentially related issue:
* https://tracker.ceph.com/issues/64852 - MDS hangs on "joining batch getattr" when cli... - 04:37 AM CephFS Bug #62847: mds: blogbench requests stuck (5mds+scrub+snaps-flush)
- Potentially related: https://tracker.ceph.com/issues/64852
- 03:45 AM crimson Bug #64789 (Fix Under Review): crimson unitest timeout (Reactor backend: io_uring) as liburing mi...
- 03:29 AM crimson Bug #64789: crimson unitest timeout (Reactor backend: io_uring) as liburing mismatch
- PR https://github.com/ceph/ceph/pull/55787 bumps liburing from 0.7 to 2.5.
With liburing-dev (2.1) installed on ub... - 03:13 AM rgw Bug #63177: RGW user quotas is not honored when bucket owner is different than uploader
- The problem persists even after running sync-stats. In fact, I identified the issue weeks after the quota had already...
- 02:08 AM RADOS Bug #64824: mon: ceph-16.2.14/src/mon/Monitor.cc: 5661: FAILED ceph_assert(err == 0)
- Radoslaw Zarzynski wrote:
> Looks like a mon-scrub failure. This can be caused by a HW issue or by a corruption.
> ... - 01:18 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Patrick Donnelly wrote:
> Venky Shankar wrote:
> > Venky Shankar wrote:
> > > Venky Shankar wrote:
> > > > This i... - 12:33 AM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Yes, those hosts don't have this problem anymore. I think the current issue is something different.
03/11/2024
- 10:22 PM rbd Bug #64785: RBD persistent error corruption
- Jacobus Erasmus wrote:
> If a virtual machine is set up with a rbd_persistent_cache_mode=ssd, and rbd_plugin=pwl_cac... - 10:11 PM rbd Bug #64784: rbd_plugins does not check if the plugin exist before changing the value.
- Jacobus Erasmus wrote:
> When a rbd_plugins is set incorrectly (no plugin available) it will accept the change witho... - 08:55 PM RADOS Bug #64438: NeoRadosWatchNotify.WatchNotifyTimeout times out along with FAILED ceph_assert(op->se...
- Fails here in the neorados test:...
- 08:44 PM Feature #64845: Support read_from_replica everywhere
- Pool flag?
- 07:07 PM Feature #64845 (New): Support read_from_replica everywhere
- RADOS supports reads from replicas now, and has done so for a while. It is not on by default and requires setting a f...
- 08:26 PM rbd Feature #64850: rbd: Support read_from_replica everywhere
- Greg Farnum wrote:
> librbd already supports read-from-replica via the rbd_read_from_replica_policy config; does RBD... - 07:19 PM rbd Feature #64850 (New): rbd: Support read_from_replica everywhere
- librbd already supports read-from-replica via the rbd_read_from_replica_policy config; does RBD need other modificati...
- 07:32 PM Orchestrator Bug #64262 (Closed): non of osd
- maybe you hit submit too early but this issue is information-free and non-actionable so I'm just going to close it. P...
- 07:30 PM Orchestrator Bug #51361: KillMode=none is deprecated
- FWIW, the cephadm team is aware of the issue, it's just that it has been a lower priority as it's "just" a warning - ...
- 07:20 PM rgw Feature #64851 (New): RGW: Support read_from_replica everywhere
- We would like RGW to support read-from-replica for stretch clusters.
- 07:18 PM RADOS Feature #64849 (New): rados: Support read_from_replica everywhere
- The Objecter supports read-from-replica if you pass in the LOCALIZE_READS flag. If we want to serve all read IO from ...
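As a rough sketch of what opting in looks like from librados today (illustrative only; assumes the C++ API's OPERATION_LOCALIZE_READS flag, which maps onto the Objecter's LOCALIZE_READS):

  #include <rados/librados.hpp>

  // Issue a read that may be served by a nearby replica instead of the primary.
  int localized_read(librados::IoCtx& ioctx, const std::string& oid,
                     librados::bufferlist* out)
  {
    librados::ObjectReadOperation op;
    int rval = 0;
    op.read(0, 4096, out, &rval);
    librados::AioCompletion* c = librados::Rados::aio_create_completion();
    int r = ioctx.aio_operate(oid, c, &op,
                              librados::OPERATION_LOCALIZE_READS, out);
    if (r == 0) {
      c->wait_for_complete();
      r = c->get_return_value();
    }
    c->release();
    return r;
  }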
- 07:14 PM mgr Feature #64848 (New): mgr: Support read_from_replica everywhere (including modules)
- We would like to be able to use read-from-replica throughout the Ceph stack. Some mgr modules access the cluster (mgr...
- 07:12 PM Linux kernel client Feature #64847 (New): krbd/kcephfs: Support read_from_replica everywhere
- We would like to be able to use read-from-replica throughout the Ceph stack. Should RBD and CephFS in the kernel use ...
- 07:10 PM CephFS Bug #64846 (New): CephFS: support read_from_replica everywhere
- We would like to be able to use read-from-replica throughout the CephFS stack. Right now, there's a libcephfs::ceph_l...
- 06:40 PM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- There is PR posted: https://github.com/ceph/ceph/pull/55991
- 06:06 PM RADOS Bug #64735: OSD/MON: rollback_to snap the latest overlap is not right
- Hi Matan! Would you mind taking a look?
- 06:32 PM rgw Backport #64693: reef: rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55969
merged - 06:31 PM rgw Backport #64600: reef: unittest_rgw_dmclock_scheduler fails for arm64
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55790
merged - 06:29 PM rgw Backport #64500: reef: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_ma...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55655
merged - 06:28 PM rgw Backport #64426: reef: rgw: rados objects wrongly deleted
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55621
merged - 06:28 PM rgw Backport #64448: reef: invalid olh attributes on the target object after copy_object in a version...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55606
merged - 06:27 PM Orchestrator Backport #64844 (New): reef: Regression: Permanent KeyError: 'TYPE' : return self.blkid_api['TYP...
- 06:26 PM Orchestrator Backport #64843 (New): quincy: Regression: Permanent KeyError: 'TYPE' : return self.blkid_api['T...
- 06:26 PM rgw Backport #64088: reef: multisite: don't write data/bilog entries for lifecycle transitions/deletes
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55289
merged - 06:24 PM rgw Backport #63960: reef: rgw: lack of headers in 304 response
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55094
merged - 06:23 PM rgw Backport #63940: reef: 'radosgw-admin zone set' overwrites default-placement target
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55061
merged - 06:22 PM Orchestrator Bug #63502 (Pending Backport): Regression: Permanent KeyError: 'TYPE' : return self.blkid_api['T...
- looks like this was never backported.
- 06:10 PM Orchestrator Bug #63502: Regression: Permanent KeyError: 'TYPE' : return self.blkid_api['TYPE'] == 'part'
- Remains in 18.2.2
- 06:22 PM rgw Backport #63777: reef: [rgw][lc] using custom lc schedule (work time) may cause lc processing to ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54866
merged - 06:18 PM RADOS Bug #64670: LibRadosAioEC.RoundTrip2 hang and pkill
- Bump up.
- 06:16 PM RADOS Bug #54182: OSD_TOO_MANY_REPAIRS cannot be cleared in >=Octopus
- Review in progress.
- 06:15 PM RADOS Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- Bump up.
- 06:09 PM RADOS Bug #64725: rados/singleton: application not enabled on pool 'rbd'
- Fix is to add this to the ignorelist.
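For the record, that usually means adding a pattern under the suite's log-ignorelist override, something like the following (a sketch; the exact yaml file and pattern wording may differ):
  overrides:
    ceph:
      log-ignorelist:
        - 'application not enabled on pool'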
- 06:02 PM RADOS Bug #64646: ceph osd pool rmsnap clone object leak
- Need a squid backport as well.
- 06:00 PM RADOS Bug #64824 (Need More Info): mon: ceph-16.2.14/src/mon/Monitor.cc: 5661: FAILED ceph_assert(err =...
- Looks like a mon-scrub failure. This can be caused by a HW issue or by a corruption.
Is there a sign of malfunctioni... - 08:24 AM RADOS Bug #64824 (Need More Info): mon: ceph-16.2.14/src/mon/Monitor.cc: 5661: FAILED ceph_assert(err =...
- -1> 2024-03-11T02:29:03.716+0000 7f6600eaf700 -1 /root/rpmbuild/BUILD/ceph-16.2.14/src/mon/Monitor.cc: In functio...
- 05:58 PM Bug #64827: osd/scrub: "reservation requested while still reserved" error in cluster log
- Testing a fix as https://github.com/ceph/ceph/pull/56132 (now in draft status)
- 05:55 PM Bug #64827 (In Progress): osd/scrub: "reservation requested while still reserved" error in cluste...
- 12:32 PM Bug #64827 (Fix Under Review): osd/scrub: "reservation requested while still reserved" error in c...
- The scenario encountered:
- the replica is reserved, and is now in ReplicaReserved
- the primary starts scrubbing a... - 05:55 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- The fix isn't merged yet, which could explain the recurrence above
- 02:45 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2024-03-08_16:20:46-rados-wip-yuri4-testing-2024-03-05-0854-distro-default-smithi/7587684
/a/yuriw-2024-03-... - 05:51 PM RADOS Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- Bump up.
- 05:50 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- 1. I'm still not sure we need @--force@. 2. If it turns out to be justified, shouldn't it be @--yes-i-really-really-mean-it@?
- 05:42 PM RADOS Bug #64314: cluster log: Cluster log level string representation missing in the cluster logs.
- Still in testing.
- 05:37 PM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> Venky Shankar wrote:
> > Venky Shankar wrote:
> > > This is not as bad as it looks. The cep... - 11:36 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> Venky Shankar wrote:
> > This is not as bad as it looks. The ceph-fuse process seems to be e... - 05:34 PM bluestore Feature #57785: fragmentation score in metrics
- Hi team,
We encountered this issue too.
I can see [1] is merged already.
May I ask if there are any updates on `implement th...
- 05:21 PM Feature #64842 (New): Make min_size=1 change configurable when triggering degraded stretch mode
- See also this Ceph user mailing list thread: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PVYFRJMN...
- 05:06 PM rgw Bug #64841 (New): java_s3tests: testObjectCreateBadExpectMismatch failure
- ex. http://qa-proxy.ceph.com/teuthology/cbodley-2024-03-10_14:50:40-rgw-wip-cbodley2-testing-distro-default-smithi/75...
- 04:44 PM Feature #64840 (New): Add possibility to disable stretch mode
- See this ceph-users mailing list thread: https://www.spinics.net/lists/ceph-users/msg74238.html
Currently it is no... - 04:11 PM Backport #64837 (In Progress): quincy: run-tox-mgr-dashboard-py3 failing with many 400 responses
- 04:03 PM Backport #64837 (In Progress): quincy: run-tox-mgr-dashboard-py3 failing with many 400 responses
- https://github.com/ceph/ceph/pull/56128
- 04:10 PM Backport #64836 (In Progress): reef: run-tox-mgr-dashboard-py3 failing with many 400 responses
- 04:03 PM Backport #64836 (Resolved): reef: run-tox-mgr-dashboard-py3 failing with many 400 responses
- https://github.com/ceph/ceph/pull/56127
- 04:09 PM Backport #64838 (In Progress): squid: run-tox-mgr-dashboard-py3 failing with many 400 responses
- 04:03 PM Backport #64838 (Resolved): squid: run-tox-mgr-dashboard-py3 failing with many 400 responses
- https://github.com/ceph/ceph/pull/56126
- 04:02 PM Bug #64684 (Pending Backport): run-tox-mgr-dashboard-py3 failing with many 400 responses
- 03:54 PM rgw Backport #64833 (In Progress): reef: RGW segmentation fault when reading object permissions via t...
- https://github.com/ceph/ceph/pull/56004
- 03:39 PM rgw Backport #64833 (In Progress): reef: RGW segmentation fault when reading object permissions via t...
- 03:44 PM rgw Backport #64834 (In Progress): squid: RGW segmentation fault when reading object permissions via ...
- 03:39 PM rgw Backport #64834 (Resolved): squid: RGW segmentation fault when reading object permissions via the...
- https://github.com/ceph/ceph/pull/56125
- 03:43 PM rgw Bug #64835 (New): valgrind invalid read related to D3nDataCache::d3n_libaio_create_write_request()
- d3n datacache job failing with valgrind issue
example job: http://qa-proxy.ceph.com/teuthology/cbodley-2024-03-10_... - 03:34 PM rgw Bug #63684 (Pending Backport): RGW segmentation fault when reading object permissions via the swi...
- 03:29 PM Backport #64293 (Resolved): quincy: Incorrect OSD transaction id type, flipping on Windows
- 03:17 PM Backport #64293: quincy: Incorrect OSD transaction id type, flipping on Windows
- merged
- 03:19 PM rgw Bug #64832 (New): valgrind UninitCondition in RGWSelectObj_ObjStore_S3::run_s3select_on_csv
- from rgw/verify job: http://qa-proxy.ceph.com/teuthology/cbodley-2024-03-10_14:49:11-rgw-wip-cbodley-testing-distro-d...
- 03:16 PM rgw Bug #63786: rados_cls_all: TestCls2PCQueue.MultiProducer hangs
- saw in http://qa-proxy.ceph.com/teuthology/cbodley-2024-03-10_14:49:11-rgw-wip-cbodley-testing-distro-default-smithi/...
- 03:13 PM Stable releases Tasks #64721 (Resolved): reef 18.2.2 (hot-fix)
- 03:09 PM Dashboard Bug #64831 (In Progress): mgr/dashboard: fix snap schedule time format
- 03:08 PM Dashboard Bug #64831 (Pending Backport): mgr/dashboard: fix snap schedule time format
- h3. Description of problem
_here_
h3. Environment
* @ceph version@ string:
* Platform (OS/distro/release)... - 02:19 PM CephFS Backport #64738 (In Progress): squid: Memory leak detected when accessing a CephFS volume from Sa...
- 02:18 PM CephFS Backport #64737 (In Progress): reef: Memory leak detected when accessing a CephFS volume from Sam...
- 02:14 PM CephFS Backport #64736 (In Progress): quincy: Memory leak detected when accessing a CephFS volume from S...
- 01:40 PM Dashboard Backport #64830 (New): reef: mgr/dashboard: Locking improvements in bucket create form
- 01:40 PM Dashboard Backport #64829 (New): squid: mgr/dashboard: Locking improvements in bucket create form
- 01:34 PM Dashboard Cleanup #64658 (Pending Backport): mgr/dashboard: Locking improvements in bucket create form
- 01:33 PM Dashboard Cleanup #64658: mgr/dashboard: Locking improvements in bucket create form
- moved tracker to cleanup to include backports
- 01:12 PM CephFS Bug #64729 (Triaged): mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report sl...
- 01:10 PM Dashboard Backport #64808 (Resolved): squid: mgr/dashboard: add snap schedule M and Y repeat frequencies to...
- 01:10 PM Dashboard Backport #64807 (Resolved): reef: mgr/dashboard: add snap schedule M and Y repeat frequencies to ...
- 01:08 PM CephFS Bug #64730 (Triaged): fs/misc/multiple_rsync.sh workunit times out
- 01:08 PM CephFS Bug #59413 (Duplicate): cephfs: qa snaptest-git-ceph.sh failed with "got remote process result: 128"
- 01:08 PM CephFS Bug #64748 (Duplicate): reef: snaptest-git-ceph.sh failure
- 12:32 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Patrick Donnelly wrote:
> > > Dhairya Parmar wrote:
> > > > I'm ... - 12:27 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- the old client can simply be put on hold - revoke caps and pause I/O. Wait for the time autoclose arrives (def 300s); ...
- 12:24 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Patrick Donnelly wrote:
> > Dhairya Parmar wrote:
> > > I'm waiting for patrick/venky's res... - 12:15 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Patrick Donnelly wrote:
> Dhairya Parmar wrote:
> > I'm waiting for patrick/venky's response on this since they had... - 12:00 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> Patrick Donnelly wrote:
> > Dhairya Parmar wrote:
> > > I'm waiting for patrick/venky's re... - 11:23 AM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Patrick Donnelly wrote:
> Dhairya Parmar wrote:
> > I'm waiting for patrick/venky's response on this since they had... - 12:06 PM CephFS Bug #64659: mds: switch to using xlists instead of elists
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > Patrick Donnelly wrote:
> > > Dhairya Parmar wrote:
> > > > Patr... - 11:17 AM rgw Backport #64766 (In Progress): reef: SSL session id reuse speedup mechanism of the SSL_CTX_set_se...
- 11:16 AM CephFS Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Abhishek Lekshmanan wrote:
> Hi Venky, Patrick
>
> further to our talk, we saw the MDS growing with a lot of log ... - 10:33 AM CephFS Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hi Andras,
Patrick has a proposed fix that optimizes the iteration - https://github.com/ceph/ceph/pull/55768
I ... - 11:07 AM rgw Backport #64767 (In Progress): quincy: SSL session id reuse speedup mechanism of the SSL_CTX_set_...
- 10:54 AM CephFS Bug #64711 (Fix Under Review): Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks...
- 09:51 AM Dashboard Backport #64825 (In Progress): squid: mgr/dashboard: fix snap schedule list toggle cols
- 09:25 AM Dashboard Backport #64825 (Resolved): squid: mgr/dashboard: fix snap schedule list toggle cols
- https://github.com/ceph/ceph/pull/56116
- 09:49 AM Dashboard Backport #64826 (In Progress): reef: mgr/dashboard: fix snap schedule list toggle cols
- 09:25 AM Dashboard Backport #64826 (Resolved): reef: mgr/dashboard: fix snap schedule list toggle cols
- https://github.com/ceph/ceph/pull/56115
- 09:31 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
> > > Xiubo,
> > >
> > > I see the following co... - 09:27 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo,
> >
> > I see the following commit in the testing kernel:
> >... - 07:45 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Venky Shankar wrote:
> Xiubo,
>
> I see the following commit in the testing kernel:
>
> [...]
>
> The inter... - 06:18 AM CephFS Bug #64679: cephfs: removexattr should always return -ENODATA when xattr doesn't exist
- Xiubo,
I see the following commit in the testing kernel:... - 09:22 AM Dashboard Bug #64813 (Pending Backport): mgr/dashboard: fix snap schedule list toggle cols
- 06:14 AM CephFS Bug #64752: cephfs-mirror: valgrind report leaks
- The test test_peer_commands_with_mirroring_disabled passes, but then in the unwinding process, there's a CommandFaile...
- 04:54 AM CephFS Bug #63830: MDS fails to start
- Heðin Ejdesgaard Møller wrote:
> Milind Changire wrote:
> > Heðin Ejdesgaard Møller wrote:
> > > I have made a cor... - 04:32 AM CephFS Bug #64717: MDS stuck in replay/resolve use
- Hi Abhishek,
Abhishek Lekshmanan wrote:
> We have a cephfs cluster where we ran a lot of metadata intensive workl... - 04:15 AM CephFS Backport #64780 (Rejected): squid: qa/fscrypt: switch to postmerge fragment to distiguish the mou...
- This is already included in the *squid* release.
03/10/2024
- 10:54 PM Orchestrator Bug #64823 (New): cephadm uninstall fails on debian if perl is not installed
- I encountered the following issue when attempting to uninstall cephadm 18.2.1 on debian bookworm:...
- 09:30 PM Orchestrator Backport #64635 (In Progress): reef: cephadm/nvmeof: scrape nvmeof prometheus endpoint
- 09:28 PM Orchestrator Backport #64635 (New): reef: cephadm/nvmeof: scrape nvmeof prometheus endpoint
- 09:04 PM Orchestrator Backport #64689 (In Progress): reef: cephadm: host filtering with label and host pattern only use...
- 09:01 PM Orchestrator Backport #64644 (In Progress): reef: cephadm: remove restriction for crush device classes
- 08:51 PM Orchestrator Backport #64634 (In Progress): reef: cephadm: cephadm does not clean up /etc/ceph/podman-auth.jso...
- 08:50 PM Orchestrator Backport #64632 (In Progress): reef: secure monitoring stack support is not documented
- 08:47 PM Orchestrator Backport #64629 (In Progress): reef: cephadm: asyncio timeout handler can't handle conccurent.fut...
- 08:45 PM Orchestrator Backport #64627 (In Progress): reef: cephadm: ceph-exporter fails to deploy when placed first
- 08:27 PM Orchestrator Backport #64622 (In Progress): reef: mgr/cephadm is not defining haproxy tcp healthchecks for Gan...
- 08:25 PM Orchestrator Backport #64620 (In Progress): reef: cephadm is not accounting for the memory required nvme gatew...
- 08:20 PM Orchestrator Backport #64414 (In Progress): reef: cephadm: traceback when running `cephadm ls` with nvmeof dae...
- 08:13 PM Orchestrator Backport #63985 (In Progress): reef: When listing devices it would be helpful to have a summary f...
- 08:12 PM Orchestrator Backport #63984 (In Progress): reef: ceph orch host ls --detail reports the incorrect CPU thread ...
- 08:10 PM Orchestrator Backport #63817 (In Progress): reef: cephadm: drivespec limit not working correctly
- 08:09 PM Orchestrator Backport #63815 (In Progress): reef: OSD with dedicated db devices redeployments fail when no ser...
- 08:08 PM Orchestrator Backport #63533 (In Progress): reef: cephadm: OSD weights are not restored when you stop removal ...
- 08:03 PM Orchestrator Backport #63508 (In Progress): reef: cephadm: warn users about draining a host explicitly listed ...
- 07:57 PM Orchestrator Backport #63448 (In Progress): reef: mgr: discovery service (port 8765) fails if ms_bind ipv6 only
- 07:52 PM Orchestrator Backport #63447 (In Progress): reef: cephadm: remove host entry from crush map during host removal
- 07:45 PM Orchestrator Backport #63434 (In Progress): reef: cephadm: daemon events not updated on repeat events.
- 07:44 PM Orchestrator Backport #63190 (In Progress): reef: jaeger-agents in error state if deployed before jaeger-colle...
- 07:43 PM Orchestrator Backport #62917 (Rejected): reef: Update nvmeof gw default version to 0.0.3
- This version has since been updated again. No point doing this backport.
- 07:37 PM Orchestrator Backport #62974 (In Progress): quincy: cephadm: allow zapping OSD devices as part of host drain p...
- 07:27 PM Orchestrator Backport #55960 (Rejected): pacific: Exception when running 'rook' task.
- no more pacific releases afaik
- 07:23 PM Orchestrator Backport #64688 (In Progress): quincy: cephadm: host filtering with label and host pattern only u...
- 07:22 PM Orchestrator Backport #64645 (In Progress): quincy: cephadm: remove restriction for crush device classes
- 07:14 PM Orchestrator Backport #64630 (In Progress): quincy: cephadm: asyncio timeout handler can't handle conccurent.f...
- 07:12 PM Orchestrator Backport #63818 (In Progress): quincy: cephadm: drivespec limit not working correctly
- 07:11 PM Orchestrator Backport #63816 (In Progress): quincy: OSD with dedicated db devices redeployments fail when no s...
- 07:08 PM Orchestrator Backport #63534 (In Progress): quincy: cephadm: OSD weights are not restored when you stop remova...
- 07:06 PM Orchestrator Backport #63446 (In Progress): quincy: cephadm: remove host entry from crush map during host removal
- 07:02 PM Orchestrator Backport #63435 (In Progress): quincy: cephadm: daemon events not updated on repeat events.
- 07:01 PM Orchestrator Backport #63116 (In Progress): quincy: Using [] around an ipv6 address while adding a host fails
- 06:43 PM Orchestrator Backport #62533 (Rejected): quincy: "ceph nfs cluster create ..." always show process bound to 20...
- A quincy backport of this isn't really feasible: 25 merge conflicts in the cephadm binary alone on the first commit.
- 06:35 PM Orchestrator Bug #61667 (Resolved): cephadm: tcmu-runner not restarted on failure
- 06:34 PM Orchestrator Backport #62800 (Resolved): quincy: cephadm: tcmu-runner not restarted on failure
- 06:27 PM Orchestrator Bug #62679 (Resolved): cephadm: don't provide tag when grabbing auth token during "orch upgrade ls"
- 06:26 PM Orchestrator Backport #62796 (Resolved): quincy: cephadm: don't provide tag when grabbing auth token during "o...
- 06:22 PM Orchestrator Bug #61571 (Resolved): cephadm: cephadm does not include tcmu-runner in logrotate config
- 06:22 PM Orchestrator Backport #62468 (Resolved): quincy: cephadm: cephadm does not include tcmu-runner in logrotate co...
- 06:21 PM Orchestrator Feature #62009 (Resolved): cephadm: support for CA signed keys
- 06:21 PM Orchestrator Backport #62461 (Resolved): quincy: cephadm: support for CA signed keys
- 06:20 PM Orchestrator Cleanup #61548 (Resolved): Add function to check if a host is unreachable by the hostname
- 06:19 PM Orchestrator Backport #61965 (Resolved): quincy: Add function to check if a host is unreachable by the hostname
- 06:18 PM Orchestrator Bug #61533 (Resolved): osd specs with 'spec' field but device selection outside of 'spec' field f...
- 06:18 PM Orchestrator Backport #61685 (Resolved): quincy: osd specs with 'spec' field but device selection outside of '...
- 06:00 PM Orchestrator Bug #61592 (Resolved): cephadm: Message about limit policy spams logs if using `limit` field in O...
- 06:00 PM Orchestrator Backport #61682 (Resolved): quincy: cephadm: Message about limit policy spams logs if using `limi...
- 05:59 PM Orchestrator Bug #61330 (Resolved): public_network is set as 'mon' instead of global while bootsraping via cep...
- 05:58 PM Orchestrator Backport #61543 (Resolved): quincy: public_network is set as 'mon' instead of global while bootsr...
- 02:45 PM rgw-testing Bug #64822 (Resolved): s3tests: boto2 test_headers.py failures
- https://github.com/ceph/s3-tests/pull/555
- 02:38 PM rgw-testing Bug #64822: s3tests: boto2 test_headers.py failures
- from a good run on 3/8: http://qa-proxy.ceph.com/teuthology/cbodley-2024-03-08_14:54:28-rgw-wip-rgw-account-v3-distro...
- 02:27 PM rgw-testing Bug #64822 (Resolved): s3tests: boto2 test_headers.py failures
- all of the below are failing with @AssertionError: S3ResponseError not raised@...
- 02:00 PM rgw Bug #64816 (Won't Fix - EOL): garbage collection not processing
- > Is there anything that can be done to force GC to run?
`radosgw-admin gc process` will try to process the queue,... - 09:18 AM nvme-of Fix #64821: cephadm - make changes to ceph-nvmeof.conf template
- enable_prometheus_exporter=true
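Taken together with the field list below, the rendered template section would presumably end up looking something like this (values here are placeholders, not necessarily the shipped defaults):
  [gateway]
  state_update_notify = true
  state_update_interval_sec = 5
  enable_prometheus_exporter = true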
- 09:06 AM nvme-of Fix #64821 (New): cephadm - make changes to ceph-nvmeof.conf template
- Make these fields configurable:
[gateway]
state_update_notify
state_update_interval_sec
enable_prometheus_expo... - 07:37 AM RADOS Bug #64657 (Rejected): Ceph test cases starting cluster not waiting for OSDs to join fully
- 茁野 鲍, thanks for letting us know!
I'll reject that bug.
03/08/2024
- 11:50 PM RADOS Bug #64804 (Duplicate): gcc-13 apparently breaks SafeTimer
- 04:07 AM RADOS Bug #64804 (Duplicate): gcc-13 apparently breaks SafeTimer
- https://github.com/ceph/ceph/pull/55886
Probably related to https://bugzilla.redhat.com/show_bug.cgi?id=2241339 . - 08:16 PM Orchestrator Bug #64712 (Resolved): the node-proxy daemon fails to send data to the mgr endpoint
- 08:16 PM Orchestrator Backport #64750 (Resolved): reef: the node-proxy daemon fails to send data to the mgr endpoint
- 07:59 PM Orchestrator Backport #64749 (Resolved): squid: the node-proxy daemon fails to send data to the mgr endpoint
- 07:11 PM CephFS Feature #61866 (Fix Under Review): MDSMonitor: require --yes-i-really-mean-it when failing an MDS...
- 07:08 PM CephFS Tasks #64819 (In Progress): data corruption during rmw after lseek
- There's data corruption during an rmw after a seek.
reproducer... - 06:42 PM rgw Backport #64818 (In Progress): squid: [RFE] multisite: Bucket notification information should be ...
- https://github.com/ceph/ceph/pull/56069
- 06:14 PM rgw Backport #64818 (In Progress): squid: [RFE] multisite: Bucket notification information should be ...
- 06:15 PM CephFS Tasks #64723: ffsb configure issues (gcc fails)
- This is an issue with not updating metadata with the new appended file size.
- 06:13 PM rgw Feature #50078 (Pending Backport): [RFE] multisite: Bucket notification information should be sha...
- Casey Bodley wrote:
> already merged:
> https://github.com/ceph/ceph/pull/55688 test/rgw/notifications: split tests... - 05:30 PM Documentation #54544 (Resolved): Several links to pgcalc lead to a 404
- This was finally addressed in March 2024 by Zac Dover and Josh Durgin. The first of the two PRs below removes the bro...
- 04:56 PM Bug #64817: Stretch mode does not work for pools that use CRUSH rule with device classes
- Note: I have made cluster state files (ceph-collect: https://github.com/42on/ceph-collect) during the conversion proc...
- 04:48 PM Bug #64817 (New): Stretch mode does not work for pools that use CRUSH rule with device classes
- I have converted a (test) 3 node replicated cluster (2 storage nodes, 1 node with monitor only, min_size=2, size=4) s...
- 04:49 PM CephFS Bug #63830: MDS fails to start
- Milind Changire wrote:
> Heðin Ejdesgaard Møller wrote:
> > I have made a coredump of the mds service, but it's siz... - 03:01 PM rbd Bug #63770 (Fix Under Review): [diff-iterate] discards that truncate aren't accounted for by Obje...
- 02:36 PM rgw Bug #64816 (Won't Fix - EOL): garbage collection not processing
- Hello
Just noticed after deleting some large buckets from a rgw.buckets.data pool that GC is not actually deleting... - 01:49 PM Bug #64814 (New): [monclient::_reopen_session()] Client crash if there is a monitor with a weight...
- Before 2019, monitors had a weight of 10 => https://github.com/ceph/ceph/commit/2d113dedf851995e000d3cce136b69
Since... - 01:17 PM Dashboard Bug #64724 (Resolved): mgr/dashboard: Http failure parsing data error and no data rendered on poo...
- 11:45 AM Dashboard Bug #64813 (Resolved): mgr/dashboard: fix snap schedule list toggle cols
- h3. Description of problem
_here_
h3. Environment
* @ceph version@ string:
* Platform (OS/distro/release)... - 10:48 AM Dashboard Bug #64812 (Fix Under Review): mgr/dashboard: add support for NFSv3 exports
- 10:42 AM CephFS Backport #64811 (In Progress): reef: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- 10:05 AM CephFS Backport #64811 (In Progress): reef: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- https://github.com/ceph/ceph/pull/56062
- 10:41 AM CephFS Backport #64810 (In Progress): quincy: mds: add debug logs for handling setxattr for ceph.dir.sub...
- 10:05 AM CephFS Backport #64810 (In Progress): quincy: mds: add debug logs for handling setxattr for ceph.dir.sub...
- https://github.com/ceph/ceph/pull/56061
- 10:19 AM RADOS Bug #62338: osd: choose_async_recovery_ec may select an acting set < min_size
- Hello again.
Apparently I got a tiny little bit too excited.
I tested the case described above with 16.2.15 and... - 10:04 AM CephFS Backport #64809 (Rejected): pacific: mds: add debug logs for handling setxattr for ceph.dir.subvo...
- 10:00 AM Dashboard Backport #64808 (In Progress): squid: mgr/dashboard: add snap schedule M and Y repeat frequencies...
- 09:55 AM Dashboard Backport #64808 (Resolved): squid: mgr/dashboard: add snap schedule M and Y repeat frequencies to...
- https://github.com/ceph/ceph/pull/56060
- 09:58 AM Dashboard Backport #64807 (In Progress): reef: mgr/dashboard: add snap schedule M and Y repeat frequencies ...
- 09:55 AM Dashboard Backport #64807 (Resolved): reef: mgr/dashboard: add snap schedule M and Y repeat frequencies to ...
- https://github.com/ceph/ceph/pull/56059
- 09:55 AM CephFS Bug #61958 (Pending Backport): mds: add debug logs for handling setxattr for ceph.dir.subvolume
- 09:49 AM Dashboard Bug #64614 (Pending Backport): mgr/dashboard: add snap schedule M and Y repeat frequencies to cre...
- 09:44 AM Dashboard Bug #64614 (In Progress): mgr/dashboard: add snap schedule M and Y repeat frequencies to create form
- 09:29 AM Dashboard Cleanup #64806 (New): mgr/dashboard: directoryStore improvements for cephfs
- When making use of the directoryStore to load the directories of the fs, there are a couple issues that could be impr...
- 07:15 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> This is not as bad as it looks. The ceph-fuse process seems to be exiting as usual - its some... - 04:48 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- This is not as bad as it looks. The ceph-fuse process seems to be exiting as usual - its somewhere in the qa world wh...
- 02:01 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Continuing on this today - fusermount(1) is basically invoking umount2(2). Will try to see what's going on.
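i.e. once the setuid plumbing is out of the way, the unmount itself should boil down to roughly this call (illustrative; the mountpoint is made up):

  #include <cstdio>
  #include <sys/mount.h>

  int main() {
    // what fusermount -u <mountpoint> ultimately performs
    if (umount2("/mnt/ceph-fuse", 0) < 0) {
      std::perror("umount2");
      return 1;
    }
    return 0;
  }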
- 06:51 AM Dashboard Bug #64681: mgr/dashboard: grpc deps broken in some builds
- Just FYI, these both fail for me locally as well with updated versions of grpcio and grpcio-tools
run-tox-mgr-dash... - 05:31 AM Dashboard Bug #64681 (In Progress): mgr/dashboard: grpc deps broken in some builds
- 05:30 AM Dashboard Bug #64681: mgr/dashboard: grpc deps broken in some builds
- Hi Brad,
It's the same. The reason I didn't pin it to a later version was that in centos8 the latest version ava...
- When rgw dynamic resharding is triggered, writing will be blocked. I think this is unreasonable and may have a seriou...
- 02:20 AM rgw Bug #64803 (New): ninja all on fedora 39 fails because arrow_ext requires C++14
- commit 6c9d1b64849f1531cde0b6dd9bc1fd91c9ce153...
- 12:39 AM CephFS Backport #64585 (In Progress): squid: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cr...
- 12:37 AM CephFS Backport #64586 (In Progress): quincy: crash: void Locker::handle_file_lock(ScatterLock*, ceph::c...
- 12:33 AM CephFS Backport #64584 (In Progress): reef: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cre...
- 12:26 AM RADOS Bug #64802 (New): rados: generalize stretch mode pg temp handling to be usable without stretch mode
- PeeringState::calc_replicated_acting_stretch encodes special behavior for stretch clusters which prohibits the primar...
03/07/2024
- 09:56 PM rgw-testing Bug #64801 (New): s3tests: unpin botocore version
- the version was pinned to work around a v2 signature regression in botocore-1.28.0
Matt confirmed that botocore 1.... - 09:37 PM rbd Bug #64800 (In Progress): unable to remove RBD image when OSD is full and trash object is not alr...
- In a vstart cluster, created an RBD image, wrote data to it, and configured set-full-ratio to make the OSDs full. Failed t...
- 07:52 PM mgr Cleanup #63421 (Resolved): pybind/mgr: remove bogus __del__() methods of python mgr modules
- The fix was merged. See https://github.com/ceph/ceph/pull/54375
- 07:11 PM ceph-volume Feature #64798 (Fix Under Review): ceph-volume: add support for encryption per device type
- 06:47 AM ceph-volume Feature #64798: ceph-volume: add support for encryption per device type
- RFC: https://github.com/ceph/ceph/pull/56046
- 05:43 PM ceph-volume Feature #64798 (Fix Under Review): ceph-volume: add support for encryption per device type
- Allow ceph-volume to encrypt only the requested device type, e.g. leave the block device unencrypted but encrypt the db.
- 06:50 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya Parmar wrote:
> I'm waiting for patrick/venky's response on this since they had discussed some approach rega... - 10:32 AM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- I'm waiting for patrick/venky's response on this since they had discussed some approach regarding changes to some pro...
- 06:40 PM CephFS Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
- There was a similar case back in nautilus:
* https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/TNXNORN... - 06:38 PM mgr Bug #64799 (Fix Under Review): mgr: update cluster state for new maps from the mons before notify...
- 06:32 PM mgr Bug #64799 (Fix Under Review): mgr: update cluster state for new maps from the mons before notify...
- https://github.com/ceph/ceph/blob/639d182732644edc5c413562ebc904ab5b953303/src/mgr/Mgr.cc#L590-L597
Oddly the maps... - 06:03 PM rgw Feature #64797 (Fix Under Review): config option to disable s3 presigned urls
- 05:07 PM rgw Feature #64797 (Fix Under Review): config option to disable s3 presigned urls
- Allow admins to disable authentication via presigned URLs.
- 05:06 PM Dashboard Bug #64614 (Pending Backport): mgr/dashboard: add snap schedule M and Y repeat frequencies to cre...
- 05:00 PM Dashboard Bug #64614 (In Progress): mgr/dashboard: add snap schedule M and Y repeat frequencies to create form
- 10:29 AM Dashboard Bug #64614 (Pending Backport): mgr/dashboard: add snap schedule M and Y repeat frequencies to cre...
- 10:28 AM Dashboard Bug #64614 (New): mgr/dashboard: add snap schedule M and Y repeat frequencies to create form
- 04:12 PM rgw Backport #64795 (In Progress): squid: rgw: compatibility issues on BucketPublicAccessBlock
- 04:11 PM rgw Backport #64795 (Resolved): squid: rgw: compatibility issues on BucketPublicAccessBlock
- https://github.com/ceph/ceph/pull/56043
- 04:11 PM rgw Backport #64796 (New): reef: rgw: compatibility issues on BucketPublicAccessBlock
- 04:11 PM rgw Backport #64794 (New): quincy: rgw: compatibility issues on BucketPublicAccessBlock
- 04:04 PM rgw Bug #64492 (Pending Backport): rgw: compatibility issues on BucketPublicAccessBlock
- 03:34 PM rgw Bug #61716: Production random data not accessible. Object not found on healthy cluster.
- This sounds like it could be a duplicate of this bug that was resolved:
https://tracker.ceph.com/issues/63642
... - 03:31 PM rgw Backport #64793 (In Progress): quincy: Notification kafka: Persistent messages are removed even w...
- https://github.com/ceph/ceph/pull/56145
- 03:31 PM rgw Backport #64792 (In Progress): reef: Notification kafka: Persistent messages are removed even whe...
- https://github.com/ceph/ceph/pull/56140
- 03:30 PM CephFS Bug #64008 (Fix Under Review): mds: CInode::item_caps used in two different lists
- 03:27 PM rgw Bug #62808 (Can't reproduce): Buckets mtime equal to creation time
- 03:25 PM rgw Bug #63335 (Pending Backport): Notification kafka: Persistent messages are removed even when the ...
- 03:23 PM rgw Bug #63275 (Duplicate): RGW: Using bucket chown causes objects to change to the standard storage ...
- 03:22 PM rgw Bug #63542 (Won't Fix): Delete-Marker deletion inconsistencies
- 03:21 PM rgw Bug #63546 (Resolved): rgwlc: even current object versions have a unique instance
- 03:12 PM rgw Bug #64264 (Resolved): RGW:use radosgw-admin list object in bucket, radosgw-admin -h no useful info
- 03:02 PM rgw Bug #64710 (Fix Under Review): kafka: RGW hangs when broker is down for no persistent notifications
- 02:48 PM nvme-of Feature #64777 (Fix Under Review): mon: add NVMe-oF gateway monitor and HA
- 06:46 AM nvme-of Feature #64777 (Fix Under Review): mon: add NVMe-oF gateway monitor and HA
- Ceph nvmeof monitor
- gateway submodule
This PR adds high availability support for the nvmeof Ceph service. High ... - 02:26 PM Orchestrator Backport #64697 (In Progress): squid: allow idmap overrides in nfs-ganesha configuration
- 02:11 PM rgw Backport #64764 (In Progress): squid: SSL session id reuse speedup mechanism of the SSL_CTX_set_s...
- 01:56 AM rgw Backport #64764 (Resolved): squid: SSL session id reuse speedup mechanism of the SSL_CTX_set_sess...
- https://github.com/ceph/ceph/pull/56037
- 01:58 PM rgw Backport #64768 (In Progress): squid: rgw: awssigv4: new trailer boundary case
- 01:57 AM rgw Backport #64768 (Resolved): squid: rgw: awssigv4: new trailer boundary case
- https://github.com/ceph/ceph/pull/56036
- 01:17 PM Dashboard Backport #64791 (New): squid: mgr/dashboard: In rgw multisite, during zone creation access/secret ...
- 01:17 PM Dashboard Backport #64790 (New): reef: mgr/dashboard: In rgw multisite, during zone creation access/secret k...
- 01:07 PM Dashboard Bug #64080 (Pending Backport): mgr/dashboard: In rgw multisite, during zone creation access/secret...
- 12:56 PM CephFS Backport #64778 (In Progress): squid: mds: add per-client perf counters (w/ label) support
- 06:59 AM CephFS Backport #64778 (In Progress): squid: mds: add per-client perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/56035
- 12:55 PM CephFS Backport #64779 (In Progress): squid: cephfs_mirror: add perf counters (w/ label) support
- 06:59 AM CephFS Backport #64779 (In Progress): squid: cephfs_mirror: add perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/56035
- 12:44 PM crimson Bug #64789 (Resolved): crimson unittest timeout (Reactor backend: io_uring) due to liburing mismatch
- From:https://jenkins.ceph.com/job/ceph-pull-requests/130681/consoleFull...
- 12:35 PM rgw Backport #64692 (Rejected): quincy: rgw/s3select: crashes in test_progress_expressions in run_s3s...
- 12:34 PM rgw Backport #64692: quincy: rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
- this issue is not relevant to quincy.
at that time it was a different CSV system. - 12:17 PM RADOS Bug #64788 (Fix Under Review): EpollDriver::del_event() crashes when the nic is unplugged
- 11:48 AM RADOS Bug #64788 (Fix Under Review): EpollDriver::del_event() crashes when the nic is unplugged
- librbd uses msgr to talk to its Ceph cluster. If the client's NIC is hot-unplugged, there is a chance that @EpollDriver...
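A minimal standalone sketch of the defensive shape such a fix could take, assuming the crash comes from treating @epoll_ctl(EPOLL_CTL_DEL)@ failure on an already-torn-down fd as fatal; @del_event_safe@ is a hypothetical name, not the actual msgr code:
<pre>
#include <sys/epoll.h>
#include <cerrno>
#include <cstdio>
#include <cstring>

// Hypothetical sketch: once the NIC (and its socket fd) is gone, the
// kernel may report EBADF/ENOENT on EPOLL_CTL_DEL; tolerate that
// instead of aborting.
int del_event_safe(int epfd, int fd) {
  if (epoll_ctl(epfd, EPOLL_CTL_DEL, fd, nullptr) < 0) {
    if (errno == EBADF || errno == ENOENT)
      return 0;  // fd already invalid; nothing left to delete
    fprintf(stderr, "epoll_ctl DEL failed: %s\n", strerror(errno));
    return -errno;
  }
  return 0;
}
</pre>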
- 11:46 AM Documentation #64787 (New): Missing teuthology-suite command on integration test workflow page
- Unable to see sample teuthology-suite command on integration test workflow page, under "TRIGGERING TESTS" section's s...
- 11:44 AM CephFS Backport #64619 (In Progress): quincy: mds: check the layout in Server::handle_client_mknod
- 11:44 AM CephFS Bug #64786 (New): mds: make ceph.dir.subvolume available via getfattr
- * vxattr ceph.dir.subvolume can't be fetched for a subvolume
* there's no integration test to test presence of ceph.... - 11:42 AM CephFS Backport #64618 (In Progress): reef: mds: check the layout in Server::handle_client_mknod
- 11:39 AM CephFS Backport #64617 (In Progress): squid: mds: check the layout in Server::handle_client_mknod
- 11:21 AM Orchestrator Backport #64698 (In Progress): reef: allow idmap overrides in nfs-ganesha configuration
- 11:17 AM CephFS Bug #64659: mds: switch to using xlists instead of elists
- Dhairya Parmar wrote:
> Patrick Donnelly wrote:
> > Dhairya Parmar wrote:
> > > Patrick Donnelly wrote:
> > > > >... - 09:31 AM rbd Bug #64785 (New): RBD persistent error corruption
- If a virtual machine is set up with rbd_persistent_cache_mode=ssd and rbd_plugin=pwl_cache
When the virtual hos... - 09:25 AM rbd Bug #64784 (New): rbd_plugins does not check if the plugin exist before changing the value.
- When rbd_plugins is set incorrectly (no such plugin available), it will accept the change without any error. If you later...
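A small sketch of the validate-before-accept behavior the report asks for; the plugin list here is an illustrative assumption, not the authoritative set:
<pre>
#include <set>
#include <string>

// Reject unknown plugin names at config-set time instead of failing
// later when the image is opened. The 'known' set is an assumption.
bool is_valid_rbd_plugin(const std::string& name) {
  static const std::set<std::string> known = {"pwl_cache", "parent_cache"};
  return known.count(name) > 0;
}
</pre>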
- 09:04 AM RADOS Bug #64657: Ceph test cases starting cluster not waiting for OSDs to join fully
- Thank you for addressing this issue; I appreciate your effort in fixing it.
I apologize for the oversight o... - 08:35 AM Dashboard Bug #64783 (New): mgr/dashboard: applitools e2e failure
- It's constantly failing, so we should find an easier way to debug and fix this.
- 08:26 AM crimson Bug #64782 (New): test_python.sh TestIoctx.test_locator fails in case of SeaStore
- ...
- 08:16 AM Dashboard Bug #64781 (New): mgr/dashboard: create Jsonnet for ceph cluster dashboard.
- h3. Description of problem
create Jsonnet for ceph cluster dashboard.
https://github.com/ceph/ceph/blob/main/m... - 07:39 AM CephFS Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation...
- Milind, I'm taking this one.
- 07:38 AM CephFS Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> Venky,
>
> Have you ever seen the *"On-disk backtrace is divergent or newer"* error ?
>
> [... - 06:59 AM CephFS Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> Venky,
>
> Have you ever seen the *"On-disk backtrace is divergent or newer"* error ?
>
> [... - 06:50 AM CephFS Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Venky,
Have you ever seen the *"On-disk backtrace is divergent or newer"* error ?... - 06:29 AM CephFS Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo Li wrote:
> > > It seems metadata damaged:
> >
> > Right. I sa... - 05:59 AM CephFS Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Venky Shankar wrote:
> Xiubo Li wrote:
> > It seems metadata damaged:
>
> Right. I saw that in the mds log but l... - 03:59 AM CephFS Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- Xiubo Li wrote:
> It seems metadata damaged:
Right. I saw that in the mds log but left that out while creating th... - 03:23 AM CephFS Bug #64730: fs/misc/multiple_rsync.sh workunit times out
- It seems metadata damaged:...
- 07:09 AM CephFS Backport #64780 (Rejected): squid: qa/fscrypt: switch to postmerge fragment to distiguish the mou...
- 06:33 AM CephFS Backport #64776 (In Progress): quincy: qa/cephfs: add MON_DOWN and `deprecated feature inline_dat...
- 06:28 AM CephFS Backport #64776 (In Progress): quincy: qa/cephfs: add MON_DOWN and `deprecated feature inline_dat...
- https://github.com/ceph/ceph/pull/56023
- 06:21 AM CephFS Backport #64763 (In Progress): reef: qa/cephfs: add MON_DOWN and `deprecated feature inline_data'...
- 01:49 AM CephFS Backport #64763 (In Progress): reef: qa/cephfs: add MON_DOWN and `deprecated feature inline_data'...
- https://github.com/ceph/ceph/pull/56022
- 06:16 AM CephFS Backport #64762 (In Progress): squid: qa/cephfs: add MON_DOWN and `deprecated feature inline_data...
- 01:49 AM CephFS Backport #64762 (In Progress): squid: qa/cephfs: add MON_DOWN and `deprecated feature inline_data...
- https://github.com/ceph/ceph/pull/56021
- 05:47 AM CephFS Backport #64757 (In Progress): quincy: selinux denials with centos9.stream
- 01:37 AM CephFS Backport #64757 (In Progress): quincy: selinux denials with centos9.stream
- https://github.com/ceph/ceph/pull/56020
- 05:34 AM Dashboard Backport #64775 (In Progress): squid: mgr/dashboard: fix nvmeof documentation and traddr issue
- 05:34 AM Dashboard Backport #64775 (In Progress): squid: mgr/dashboard: fix nvmeof documentation and traddr issue
- https://github.com/ceph/ceph/pull/55685
- 05:32 AM CephFS Backport #64756 (In Progress): reef: selinux denials with centos9.stream
- 01:37 AM CephFS Backport #64756 (In Progress): reef: selinux denials with centos9.stream
- https://github.com/ceph/ceph/pull/56019
- 05:27 AM Dashboard Bug #64714 (Pending Backport): mgr/dashboard: fix nvmeof documentation and traddr issue
- 05:19 AM Dashboard Bug #62969 (Resolved): mgr/dashboard: Show the OSD's Out and Down panels as red whenever an OSD i...
- 05:19 AM Dashboard Backport #63571 (Resolved): reef: mgr/dashboard: Show the OSD's Out and Down panels as red whenev...
- 04:42 AM CephFS Backport #64755 (In Progress): squid: selinux denials with centos9.stream
- 01:37 AM CephFS Backport #64755 (In Progress): squid: selinux denials with centos9.stream
- https://github.com/ceph/ceph/pull/56018
- 04:41 AM CephFS Backport #64758 (In Progress): squid: osdc/Journaler: better handle ENOENT during replay as up:st...
- 01:37 AM CephFS Backport #64758 (In Progress): squid: osdc/Journaler: better handle ENOENT during replay as up:st...
- https://github.com/ceph/ceph/pull/56017
- 04:40 AM CephFS Backport #64759 (In Progress): reef: osdc/Journaler: better handle ENOENT during replay as up:sta...
- 01:37 AM CephFS Backport #64759 (In Progress): reef: osdc/Journaler: better handle ENOENT during replay as up:sta...
- https://github.com/ceph/ceph/pull/56016
- 04:39 AM CephFS Backport #64760 (In Progress): quincy: osdc/Journaler: better handle ENOENT during replay as up:s...
- 01:37 AM CephFS Backport #64760 (In Progress): quincy: osdc/Journaler: better handle ENOENT during replay as up:s...
- https://github.com/ceph/ceph/pull/56015
- 04:32 AM Dashboard Bug #64716 (Resolved): mgr/dashboard: fixed cephfs mount command
- 04:32 AM Dashboard Backport #64731 (Resolved): squid: mgr/dashboard: fixed cephfs mount command
- 04:31 AM Dashboard Backport #64732 (Resolved): reef: mgr/dashboard: fixed cephfs mount command
- 04:28 AM Dashboard Bug #63591 (Fix Under Review): mgr/dashboard: pyyaml==6.0 installation fails with "AttributeError...
- 03:12 AM CephFS Bug #64748: reef: snaptest-git-ceph.sh failure
- Venky,
These two failures are both caused by the *EOF* issue; there is an existing tracker for this, please see ... - 02:38 AM rgw Backport #64773 (In Progress): squid: rgw: make rgw-restore-bucket-index more robust
- 02:06 AM rgw Backport #64773 (Resolved): squid: rgw: make rgw-restore-bucket-index more robust
- https://github.com/ceph/ceph/pull/56009
- 02:06 AM CephFS Bug #63830: MDS fails to start
- Heðin Ejdesgaard Møller wrote:
> I have made a coredump of the mds service, but its size is ~10MB, so I'm unable to... - 02:06 AM rgw Backport #64772 (New): reef: rgw: make rgw-restore-bucket-index more robust
- 02:06 AM rgw Backport #64771 (New): quincy: rgw: make rgw-restore-bucket-index more robust
- 01:57 AM rgw Backport #64770 (New): quincy: rgw: awssigv4: new trailer boundary case
- 01:57 AM rgw Backport #64769 (New): reef: rgw: awssigv4: new trailer boundary case
- 01:56 AM rgw Backport #64767 (In Progress): quincy: SSL session id reuse speedup mechanism of the SSL_CTX_set_...
- https://github.com/ceph/ceph/pull/56119
- 01:56 AM rgw Backport #64766 (In Progress): reef: SSL session id reuse speedup mechanism of the SSL_CTX_set_se...
- https://github.com/ceph/ceph/pull/56120
- 01:56 AM rgw Feature #64765 (Pending Backport): rgw: make rgw-restore-bucket-index more robust
- This experimental tool writes a series of temporary files, the combined size of which is roughly proportional to some ...
- 01:54 AM rgw Bug #64676 (Pending Backport): rgw: awssigv4: new trailer boundary case
- 01:53 AM rgw Bug #64719 (Pending Backport): SSL session id reuse speedup mechanism of the SSL_CTX_set_session_...
- 01:46 AM CephFS Bug #64761 (New): cephfs-mirror: add throttling to mirror daemon ops
- Right now, there is no control over the number of concurrent in-flight operations. Introduce a mechanism to throttle o...
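A minimal counting-semaphore sketch of the throttling mechanism being requested, assuming the goal is a cap on concurrent in-flight operations; the class and names are illustrative, not cephfs-mirror code:
<pre>
#include <condition_variable>
#include <mutex>

// Blocks new operations once max_inflight are outstanding; each op
// calls get() before starting and put() when it completes.
class OpThrottle {
  std::mutex m;
  std::condition_variable cv;
  int available;
public:
  explicit OpThrottle(int max_inflight) : available(max_inflight) {}
  void get() {
    std::unique_lock<std::mutex> l(m);
    cv.wait(l, [this] { return available > 0; });
    --available;
  }
  void put() {
    std::lock_guard<std::mutex> l(m);
    ++available;
    cv.notify_one();
  }
};
</pre>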
- 01:37 AM CephFS Bug #64746 (Pending Backport): qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to he...
- 01:36 AM CephFS Bug #64616 (Pending Backport): selinux denials with centos9.stream
- 01:34 AM CephFS Bug #57048 (Pending Backport): osdc/Journaler: better handle ENOENT during replay as up:standby-r...
- 01:25 AM Dashboard Bug #64681: mgr/dashboard: grpc deps broken in some builds
- Nizamudeen,
I'm seeing the following failures on Fedora 39 with grpcio and grpcio-tools.... - 12:58 AM crimson Bug #64587 (Resolved): seastar reactor_backend.cc compile error: no member named 'features' in 'i...
03/06/2024
- 11:45 PM Orchestrator Bug #63502: Regression: Permanent KeyError: 'TYPE' : return self.blkid_api['TYPE'] == 'part'
- Remains in 18.2.1
- 09:37 PM Backport #64754 (In Progress): squid: No matching package to install: 'qatlib-devel'
- 09:16 PM Backport #64754 (In Progress): squid: No matching package to install: 'qatlib-devel'
- https://github.com/ceph/ceph/pull/56007
- 09:11 PM Dashboard Bug #64724 (Fix Under Review): mgr/dashboard: Http failure parsing data error and no data rendere...
- 11:15 AM Dashboard Bug #64724: mgr/dashboard: Http failure parsing data error and no data rendered on pools list page
- This is the issue copied from Paul's feedback.
The values are coming back Infinite; I think we need an error handler here ... - 09:07 PM Bug #64678 (Pending Backport): No matching package to install: 'qatlib-devel'
- 02:28 PM Bug #64678 (Fix Under Review): No matching package to install: 'qatlib-devel'
- 07:15 PM RADOS Bug #64726: LibRadosAioEC.MultiWritePP hang and pkill
- ...
- 07:14 PM RADOS Bug #64726: LibRadosAioEC.MultiWritePP hang and pkill
- I think the direct reason behind the test's hang is the death of @osd.5@:...
- 08:22 AM RADOS Bug #64726: LibRadosAioEC.MultiWritePP hang and pkill
- removed the "Related issues"
- 08:21 AM RADOS Bug #64726: LibRadosAioEC.MultiWritePP hang and pkill
- The last op that LibRadosAioEC.MultiWritePP is trying to do is writing the oid_MultiWritePP_ obj:... - 05:21 PM Bug #64070: CEPHADM_CHECK_LINKSPEED gets majority wrong
- 05:21 PM Bug #64070: CEPHADM_CHECK_LINKSPEED gets majority wrong
- Still the same issue on 18.2.1
- 04:41 PM CephFS Tasks #64723: ffsb configure issues (gcc fails)
- The sleep interval is directly tied to client_caps_release_delay.
- 04:12 PM Linux kernel client Bug #64607: ceph: fstest generic/580 test failure with infinitely loop
- Patrick Donnelly wrote:
> Here is one job where we would have likely caught this bug:
>
> https://pulpito.ceph.co... - 03:16 PM Linux kernel client Bug #64607: ceph: fstest generic/580 test failure with infinitely loop
- I want to answer a question from Ilya about whether we should be catching this in upstream QA. The answer is, we shou...
- 04:06 PM CephFS Bug #64752 (New): cephfs-mirror: valgrind report leaks
- /a/yuriw-2024-03-01_20:51:20-fs-squid-distro-default-smithi/7578146...
- 04:01 PM CephFS Backport #64098: reef: mount command returning misleading error message
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55300
merged - 03:59 PM CephFS Backport #63262: reef: MDS slow requests for the internal 'rename' requests
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54467
merged - 03:56 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Ah, interesting. Were you able to fix those hosts? Or do we need someone from Infrastructure to take a look?
- 03:51 PM rgw Bug #63684 (Fix Under Review): RGW segmentation fault when reading object permissions via the swi...
- 03:25 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- It seems https://github.com/ceph/ceph/commit/db8d1d455c7f41b2527fb79ab510f186a7d63109 somehow got lost during Ceph R...
- 12:45 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- I see it was already fixed at some point: https://tracker.ceph.com/issues/56029. Wondering why it happens again, maybe becau...
- 12:01 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- Do we have any WA for this?
- 12:00 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- We are facing the same issue on ceph v18.2.1 (rook v1.12.10)...
- 03:33 PM CephFS Bug #64751 (Fix Under Review): cephfs-mirror coredumped when acquiring pthread mutex
- /a/yuriw-2024-03-01_20:51:20-fs-squid-distro-default-smithi/7578112
Log: ./remote/smithi134/log/ceph-client.mirror... - 03:20 PM RADOS Bug #63389: Failed to encode map X with expected CRC
- The problem arose because of a commit that introduced the commented-out check for @SERVER_REEF@ in @OSDMap::encode()@....
- 03:17 PM rgw Feature #50078 (Fix Under Review): [RFE] multisite: Bucket notification information should be sha...
- 03:16 PM CephFS Feature #61866: MDSMonitor: require --yes-i-really-mean-it when failing an MDS with MDS_HEALTH_TR...
- Rishabh, please take this one.
- 01:53 PM Orchestrator Backport #64750 (In Progress): reef: the node-proxy daemon fails to send data to the mgr endpoint
- 01:40 PM Orchestrator Backport #64750 (Resolved): reef: the node-proxy daemon fails to send data to the mgr endpoint
- https://github.com/ceph/ceph/pull/55999
- 01:52 PM Orchestrator Backport #64749 (In Progress): squid: the node-proxy daemon fails to send data to the mgr endpoint
- 01:39 PM Orchestrator Backport #64749 (Resolved): squid: the node-proxy daemon fails to send data to the mgr endpoint
- https://github.com/ceph/ceph/pull/55998
- 01:37 PM Orchestrator Bug #64712 (Pending Backport): the node-proxy daemon fails to send data to the mgr endpoint
- 01:20 PM CephFS Bug #64748 (Duplicate): reef: snaptest-git-ceph.sh failure
- - /a/vshankar-2024-03-05_07:34:10-fs-wip-vshankar-testing1-reef-2024-03-05-1017-testing-default-smithi/7582083
- /a/... - 01:20 PM CephFS Backport #64741 (In Progress): squid: client: do not proceed with I/O if filehandle is invalid
- https://github.com/ceph/ceph/pull/55997
- 08:47 AM CephFS Backport #64741 (In Progress): squid: client: do not proceed with I/O if filehandle is invalid
- 01:10 PM CephFS Backport #64739: quincy: client: do not proceed with I/O if filehandle is invalid
- Ah, this will be a bit tricky: src/test/client/nonblocking.cc is not present in reef or quincy.
- 08:46 AM CephFS Backport #64739 (New): quincy: client: do not proceed with I/O if filehandle is invalid
- 01:10 PM CephFS Backport #64740: reef: client: do not proceed with I/O if filehandle is invalid
- Ah, this will be a bit tricky: src/test/client/nonblocking.cc is not present in reef or quincy.
- 08:46 AM CephFS Backport #64740 (New): reef: client: do not proceed with I/O if filehandle is invalid
- 12:13 PM CephFS Bug #64659: mds: switch to using xlists instead of elists
- Patrick Donnelly wrote:
> Dhairya Parmar wrote:
> > Patrick Donnelly wrote:
> > > > working with elist might lead ... - 11:02 AM CephFS Bug #64747 (New): postgresql pkg install failure
- /a/vshankar-2024-03-05_07:34:10-fs-wip-vshankar-testing1-reef-2024-03-05-1017-testing-default-smithi/7582129...
- 10:36 AM CephFS Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirror...
- ...
- 05:09 AM CephFS Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirror...
- ...
- 04:23 AM CephFS Bug #64711 (In Progress): Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.ceph...
- 10:34 AM Dashboard Backport #64732 (In Progress): reef: mgr/dashboard: fixed cephfs mount command
- 06:46 AM Dashboard Backport #64732 (Resolved): reef: mgr/dashboard: fixed cephfs mount command
- https://github.com/ceph/ceph/pull/55993
- 10:31 AM Dashboard Backport #64731 (In Progress): squid: mgr/dashboard: fixed cephfs mount command
- 06:46 AM Dashboard Backport #64731 (Resolved): squid: mgr/dashboard: fixed cephfs mount command
- https://github.com/ceph/ceph/pull/55992
- 10:16 AM Bug #64718: download.ceph.com URLs broken for Reef
- Yes this is definitely not resolved, please reopen!
- 09:59 AM bluestore Support #64702: how to create spdk backend osd
- Thanks @Igor Fedotov for your reply.
I figured it out. The errors could be ignored as the spdk backend device doesn't sup... - 09:49 AM CephFS Bug #64746 (Fix Under Review): qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to he...
- 09:42 AM CephFS Bug #64746 (Pending Backport): qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to he...
- Probably a fallout from https://github.com/ceph/ceph/pull/54312
- 09:42 AM CephFS Backport #64744 (In Progress): squid: mds: `dump dir` command should indicate that a dir is not c...
- 08:47 AM CephFS Backport #64744 (In Progress): squid: mds: `dump dir` command should indicate that a dir is not c...
- https://github.com/ceph/ceph/pull/55989
- 09:36 AM CephFS Backport #64743 (In Progress): reef: mds: `dump dir` command should indicate that a dir is not ca...
- 08:47 AM CephFS Backport #64743 (In Progress): reef: mds: `dump dir` command should indicate that a dir is not ca...
- https://github.com/ceph/ceph/pull/55987
- 09:34 AM CephFS Backport #64742 (In Progress): quincy: mds: `dump dir` command should indicate that a dir is not ...
- 08:47 AM CephFS Backport #64742 (In Progress): quincy: mds: `dump dir` command should indicate that a dir is not ...
- https://github.com/ceph/ceph/pull/55986
- 09:06 AM devops Bug #64745 (New): remove cruft recursively
- When running make-dist, it shows:
cleanup...
rm: cannot remove 'ceph-erasure-code-corpus': Is a directory
rm: cannot rem... - 08:51 AM crimson Bug #64009: Crimson: PGShardMapping::maybe_create_pg() assert failure
- https://pulpito.ceph.com/matan-2024-03-05_16:18:08-crimson-rados-wip-matanb-crimson-testing-march-5-distro-crimson-sm...
- 08:46 AM CephFS Backport #64738 (In Progress): squid: Memory leak detected when accessing a CephFS volume from Sa...
- https://github.com/ceph/ceph/pull/56123
- 08:46 AM CephFS Backport #64737 (In Progress): reef: Memory leak detected when accessing a CephFS volume from Sam...
- https://github.com/ceph/ceph/pull/56122
- 08:46 AM CephFS Backport #64736 (In Progress): quincy: Memory leak detected when accessing a CephFS volume from S...
- https://github.com/ceph/ceph/pull/56121
- 08:45 AM CephFS Bug #63093 (Pending Backport): mds: `dump dir` command should indicate that a dir is not cached
- Jos, please update the original backport PR with the additional commit.
- 08:41 AM CephFS Bug #64313 (Pending Backport): client: do not proceed with I/O if filehandle is invalid
- 08:40 AM CephFS Bug #64479 (Pending Backport): Memory leak detected when accessing a CephFS volume from Samba usi...
- 08:21 AM RADOS Bug #64735 (Need More Info): OSD/MON: rollback_to snap the latest overlap is not right
- When doing rollback_to a snap, we use the latest clone's current overlap to intersection_of the older snapshot's clone overlap. ...
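A standalone sketch of the extent-intersection step the report refers to (Ceph's own type is @interval_set@ with @intersection_of@; this is not that code). It illustrates the operation being chained: once the latest clone's current overlap has shrunk, intersecting through it can drop ranges the head still shares with an older clone:
<pre>
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Sorted, non-overlapping (offset, length) extents.
using extents = std::vector<std::pair<uint64_t, uint64_t>>;

extents intersect(const extents& a, const extents& b) {
  extents out;
  size_t i = 0, j = 0;
  while (i < a.size() && j < b.size()) {
    uint64_t lo = std::max(a[i].first, b[j].first);
    uint64_t hi = std::min(a[i].first + a[i].second,
                           b[j].first + b[j].second);
    if (lo < hi)
      out.emplace_back(lo, hi - lo);
    // advance whichever extent ends first
    if (a[i].first + a[i].second < b[j].first + b[j].second) ++i; else ++j;
  }
  return out;
}
</pre>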
- 08:11 AM Dashboard Bug #64734 (New): mgr/dashboard: Clicking on the Ceph logo does not take you to the dashboard
- h3. Clicking on the Ceph logo does not take you to the dashboard
Clicking on the logo right now updates the URL, but the user remains o... - 07:43 AM RADOS Bug #62338: osd: choose_async_recovery_ec may select an acting set < min_size
- Hello. Just FYI, this fixes a very nasty issue in my EC setup.
Here are some details.
The EC setup and crush rule... - 07:32 AM CephFS Bug #64572 (Fix Under Review): workunits/fsx.sh failure
- 04:00 AM CephFS Bug #64572: workunits/fsx.sh failure
- Xiubo Li wrote:
> The *XFS_IOC_FREESP64* and *XFS_IOC_ALLOCSP64* macros are from */usr/include/xfs/xfs_fs.h*, which ... - 02:24 AM CephFS Bug #64572: workunits/fsx.sh failure
- The *XFS_IOC_FREESP64* and *XFS_IOC_ALLOCSP64* macros are from */usr/include/xfs/xfs_fs.h*, which is from *xfsprogs-d...
- 07:26 AM Bug #64733 (New): Monitor keeps crashing on 1 specific node
- Hello,
I have one monitor in my cluster of 3 Nodes, which keeps on crashing after a while. If I then remove the daem... - 06:41 AM Dashboard Bug #64716 (Pending Backport): mgr/dashboard: fixed cephfs mount command
- 05:35 AM CephFS Bug #64730 (Triaged): fs/misc/multiple_rsync.sh workunit times out
- /a/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/7580882...
- 04:56 AM Dashboard Feature #64530 (Resolved): mgr/dashboard: introduce multicluster monitoring and management
- 04:28 AM CephFS Bug #64729 (Triaged): mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report sl...
- /a/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/7580913...
- 04:11 AM Bug #63603 (Duplicate): install-deps.sh fails with 'AttributeError: cython_sources'
- 02:31 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- https://github.com/ceph/ceph/pull/55979
- 02:21 AM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- I'm working on a minor refactor to handle the above issues.
- 02:20 AM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- Removing the above noop stages created a different problem.
SnapTrimEvent doesn't actually do or block on WaitForA... - 02:16 AM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- WaitSubop, WaitTrimTimmer, and WaitRepop are pipeline stages local to the
operation. As such they don't actually pr... - 02:19 AM crimson Bug #64728 (Resolved): osd crashes when there are enough number of pgs in a single seastore based...
- ...
- 01:23 AM CephFS Bug #64641 (Triaged): qa: Add multifs root_squash testcase
03/05/2024
- 11:09 PM CephFS Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- /a/yuriw-2024-03-05_15:31:54-smoke-reef-release-distro-default-smithi/7582350
- 10:52 PM Orchestrator Bug #64208: test_cephadm.sh: Container version mismatch causes job to fail.
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581706
- 10:50 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581448
- 10:47 PM RADOS Bug #64726 (New): LibRadosAioEC.MultiWritePP hang and pkill
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581519...
- 10:43 PM Dashboard Cleanup #59142: mgr/dashboard: fix e2e for dashboard v3
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581558
- 10:42 PM RADOS Bug #55141: thrashers/fastread: assertion failure: rollback_info_trimmed_to == head
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581575
- 10:39 PM Orchestrator Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581730
- 10:33 PM RADOS Bug #64725 (Fix Under Review): rados/singleton: application not enabled on pool 'rbd'
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581526...
- 10:27 PM Dashboard Bug #64724 (Resolved): mgr/dashboard: Http failure parsing data error and no data rendered on poo...
- h3. Http failure parsing data error and no data rendered on pools list page
Looking at the http response, I see in... - 10:24 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- /a/yuriw-2024-03-04_20:52:58-rados-reef-release-distro-default-smithi/7581722
/a/yuriw-2024-03-04_20:52:58-rados-ree... - 10:24 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- Update on this: The PR is ready to be reviewed again.
- 09:29 PM CephFS Tasks #64723 (In Progress): ffsb configure issues (gcc fails)
- Isolated the failure, see below
When appending to a file and ls is performed on it at or before 5s, it appears stale ... - 09:28 PM CephFS Tasks #64166 (Resolved): RMW issue with xfstest ffsb
- This is resolved by commit 9a083b09355; cherry-picked into wip-fscrypt branch.
- 07:17 PM Orchestrator Backport #62800 (In Progress): quincy: cephadm: tcmu-runner not restarted on failure
- 06:29 PM rgw Feature #7791 (Rejected): radosgw-agent should show statistics
- 06:28 PM rgw Feature #8800 (Rejected): Radosgw-agent on pypi.python.org
- 05:20 PM Orchestrator Backport #62796 (In Progress): quincy: cephadm: don't provide tag when grabbing auth token during...
- 05:10 PM Orchestrator Backport #62531 (In Progress): quincy: cephadm: allow draining host without removing conf and key...
- 05:03 PM Bug #64718: download.ceph.com URLs broken for Reef
- @Yuri:
The point of this issue is not that debian-reef_OLD existed, but that debian-reef *no longer* exists.
- It was unintentionally left over from the testing prerelease for 16.2.15
Removed now - 01:59 PM Bug #64718 (Resolved): download.ceph.com URLs broken for Reef
- URLs for reef now have _OLD appended:
https://download.ceph.com/debian-reef_OLD/
https://download.ceph.com/rpm-re... - 04:08 PM Orchestrator Backport #62471 (In Progress): quincy: cephadm: keepalived configured with incorrect unicast IPs ...
- 04:06 PM rgw Backport #64693 (In Progress): reef: rgw/s3select: crashes in test_progress_expressions in run_s3...
- 03:34 PM Stable releases Tasks #64721: reef 18.2.2 (hot-fix)
- h3. QE VALIDATION (STARTED 3/5/24)
PRs list => https://pad.ceph.com/p/v18.2.2_QE_PRs_LIST
https://pad.ceph.com/... - 03:27 PM Stable releases Tasks #64721 (Resolved): reef 18.2.2 (hot-fix)
- h3. Workflow
* "Preparing the release":http://ceph.com/docs/master/dev/development-workflow/#preparing-a-new-relea... - 03:24 PM Stable releases Tasks #64151 (Resolved): pacific v16.2.15
- 03:24 PM Orchestrator Bug #64720 (New): Cannot infer CIDR network on Hetzner Cloud
- On Hetzner Cloud, I run @cephadm bootstrap --mon-ip 10.0.0.6@ and receive the "Cannot infer CIDR network" error.
T... - 03:23 PM rgw Bug #64719 (Fix Under Review): SSL session id reuse speedup mechanism of the SSL_CTX_set_session_...
- 02:16 PM rgw Bug #64719 (Pending Backport): SSL session id reuse speedup mechanism of the SSL_CTX_set_session_...
- The OpenSSL session-id reuse acceleration mechanism that is described in SSL_CTX_set_session_id_context
https://ww... - 03:17 PM Orchestrator Backport #62468 (In Progress): quincy: cephadm: cephadm does not include tcmu-runner in logrotate...
- 03:16 PM Orchestrator Backport #62461 (In Progress): quincy: cephadm: support for CA signed keys
- 03:08 PM Orchestrator Backport #61965 (In Progress): quincy: Add function to check if a host is unreachable by the host...
- 03:06 PM Orchestrator Backport #61939 (In Progress): quincy: cephadm: cephadm module crashes trying to migrate simple r...
- 03:06 PM Orchestrator Backport #63011 (In Progress): quincy: RGW rgw_frontend_type field is not checked correctly durin...
- 02:56 PM Orchestrator Backport #61685 (In Progress): quincy: osd specs with 'spec' field but device selection outside o...
- 02:55 PM Orchestrator Backport #61682 (In Progress): quincy: cephadm: Message about limit policy spams logs if using `l...
- 02:52 PM Orchestrator Backport #61676 (In Progress): quincy: cephadm: port 9095 not opened in firewall after adopting p...
- 02:48 PM Orchestrator Backport #61543 (In Progress): quincy: public_network is set as 'mon' instead of global while boo...
- 02:39 PM Dashboard Tasks #64708 (Fix Under Review): mgr/dashboard: Improvement in Placement targets in bucket form
- 08:15 AM Dashboard Tasks #64708 (In Progress): mgr/dashboard: Improvement in Placement targets in bucket form
- 08:06 AM Dashboard Tasks #64708 (Fix Under Review): mgr/dashboard: Improvement in Placement targets in bucket form
- *Placement Targets*
* Use the default placement, and provide an advanced section to change this option. Follow RBD i... - 01:55 PM CephFS Bug #64659: mds: switch to using xlists instead of elists
- Dhairya Parmar wrote:
> Patrick Donnelly wrote:
> > > working with elist might lead to severe consequences at times... - 09:05 AM CephFS Bug #64659: mds: switch to using xlists instead of elists
- Patrick Donnelly wrote:
> > working with elist might lead to severe consequences at times if the same class member i... - 01:49 PM Orchestrator Bug #64712 (Fix Under Review): the node-proxy daemon fails to send data to the mgr endpoint
- 10:25 AM Orchestrator Bug #64712: the node-proxy daemon fails to send data to the mgr endpoint
- It turns out this is because RedFish returns unexpected data.
For instance:... - 10:09 AM Orchestrator Bug #64712 (Resolved): the node-proxy daemon fails to send data to the mgr endpoint
- When the node-proxy daemon tries to send its data to the mgr endpoint, it fails with a 500 Error.
Typical failure:... - 01:49 PM CephFS Tasks #64691: Symlink target not set correctly in unencrypted dir
- Christopher Hoffman wrote:
> in->symlink_plain wasn't being set in case of non-fscrypt.
>
> [...]
is this pat... - 01:28 PM CephFS Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hi Venky, Patrick
Further to our talk, we saw the MDS accumulating a lot of log segments and crashing in the up:re... - 01:26 PM CephFS Bug #64717 (New): MDS stuck in replay/resolve use
- We have a cephfs cluster where we ran a lot of metadata intensive workloads with snapshots enabled. In our monitoring...
- 01:04 PM RADOS Bug #64514 (In Progress): LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- 01:04 PM RADOS Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- This may be related to the bug fixed in https://tracker.ceph.com/issues/64347. However, the outcome here is different, whi...
- 12:38 PM Dashboard Bug #64716 (Resolved): mgr/dashboard: fixed cephfs mount command
- h3. Description of problem
The Attach option gives a sample mount command for attaching the filesystem.
We ar... - 12:17 PM Bug #64715 (New): osdmap offline optimization and balancer should take bluestore_min_alloc_size i...
- I have access to a cluster and tried to perform offline OSDmap optimization as described here: https://docs.ceph.com/...
- 12:04 PM Dashboard Bug #57924: mgr/dashboard: fails with "Module 'dashboard' has failed: key type unsupported" when ...
- I have the same issue.
Is there any update?
-> ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) qu... - 11:02 AM Dashboard Feature #64329 (Resolved): mgr/dashboard: add hardware status
- 11:01 AM Dashboard Feature #64329 (Pending Backport): mgr/dashboard: add hardware status
- 10:42 AM Dashboard Bug #64714 (Fix Under Review): mgr/dashboard: fix nvmeof documentation and traddr issue
- 10:41 AM Dashboard Bug #64714 (Pending Backport): mgr/dashboard: fix nvmeof documentation and traddr issue
- The traddr in the listener is always none.
From Aviv... - 10:21 AM Dashboard Bug #64713 (Fix Under Review): mgr/dashboard: nvmeof grpc requests are not properly closed
- 10:17 AM Dashboard Bug #64713 (Fix Under Review): mgr/dashboard: nvmeof grpc requests are not properly closed
- After initializing a connection, the grpc requests are not properly closed. This can potentially lead to a wastage of ...
- 09:53 AM CephFS Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirror...
- The check_peer_snap_in_progress() doesn't wait for 'syncing' to appear. It just checks the state at the moment and re...
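The fix shape described here is a poll-until loop rather than a one-shot check. The actual test lives in Python qa code; this is a language-agnostic sketch of the pattern with hypothetical names:
<pre>
#include <chrono>
#include <functional>
#include <thread>

// Repeatedly evaluate pred until it holds or the deadline passes,
// instead of sampling the state once and racing with the daemon.
bool wait_until(const std::function<bool()>& pred,
                std::chrono::seconds timeout,
                std::chrono::milliseconds interval =
                    std::chrono::milliseconds(500)) {
  const auto deadline = std::chrono::steady_clock::now() + timeout;
  while (std::chrono::steady_clock::now() < deadline) {
    if (pred())
      return true;
    std::this_thread::sleep_for(interval);
  }
  return pred();  // final check at the deadline
}
</pre>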
- 09:35 AM CephFS Bug #64711 (Fix Under Review): Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks...
- /a/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/7580933
Probably a ... - 09:38 AM bluestore Support #64702: how to create spdk backend osd
- AFAIK SPDK backend for BlueStore isn't properly maintained and I've never seen it in production so I doubt it's funct...
- 02:48 AM bluestore Support #64702: how to create spdk backend osd
- Hi, there
I'm trying to create an spdk backend osd, but it can't succeed. My operations are like this:
1. build ceph 1... - 02:33 AM bluestore Support #64702 (New): how to create spdk backend osd
- Hi, there
I'm trying to create an spdk backend osd, but it can't succeed. My operations are like this:
1. build ceph 1... - 09:32 AM bluestore Bug #52464: FAILED ceph_assert(current_shard->second->valid())
- ah... just reread Michael's post at reddit - smartctl reports unrecoverable disk errors according to it - I believe t...
- 09:30 AM bluestore Bug #52464: FAILED ceph_assert(current_shard->second->valid())
- Michael Collins wrote:
> We're run into this bug twice recently, we don't have any urgency when it comes to re-provi... - 06:51 AM bluestore Bug #52464: FAILED ceph_assert(current_shard->second->valid())
We encountered the same issue on Ceph 16.2.10, suspecting it may be related to reusing an Iterator in RocksDB, rocksdb ... - 09:29 AM rgw Bug #64710 (Pending Backport): kafka: RGW hangs when broker is down for no persistent notifications
- based on this comment: https://github.com/ceph/ceph/pull/55051#issuecomment-1961950841
the main issue is that the li... - 09:23 AM CephFS Bug #63949: leak in mds.c detected by valgrind during CephFS QA run
- https://pulpito.ceph.com/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/...
- 09:23 AM CephFS Bug #64149: valgrind+mds/client: gracefully shutdown the mds during valgrind tests
- https://pulpito.ceph.com/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/...
- 08:16 AM Support #64709 (New): Unable to Determine Encryption Status of RBD Image
- # Execute the commands provided below to encrypt an RBD image.
# Use the rbd info command to inspect the properties of t... - 08:12 AM Dashboard Cleanup #64658 (Fix Under Review): mgr/dashboard: Locking improvements in bucket create form
- 08:09 AM Dashboard Bug #64588 (Resolved): mgr/dashboard: rgw roles page broken with items don't have permission poli...
- 08:09 AM Dashboard Backport #64683 (Resolved): quincy: mgr/dashboard: rgw roles page broken with items don't have pe...
- 08:09 AM Dashboard Bug #64270 (Resolved): mgr/dashboard: Ceph dashboard throws 500 Internal Server Error while acces...
- 08:09 AM Dashboard Backport #64368 (Resolved): quincy: mgr/dashboard: Ceph dashboard throws 500 Internal Server Erro...
- 08:08 AM RADOS Bug #64657: Ceph test cases starting cluster not waiting for OSDs to join fully
- Without the full log it will be hard to tell whether the symptoms I see are exactly what 茁野 鲍 sees, but we are missing t...
- 07:33 AM CephFS Bug #64707 (New): suites/fsstress.sh hangs on one client - test times out
- https://pulpito.ceph.com/vshankar-2024-03-04_08:26:39-fs-wip-vshankar-testing-20240304.042522-testing-default-smithi/...
- 07:20 AM CephFS Bug #64679 (Fix Under Review): cephfs: removexattr should always return -ENODATA when xattr doesn...
- 06:05 AM CephFS Bug #64572: workunits/fsx.sh failure
- I looked at this closely, and it seems that the compilation failure is deliberately triggered from the xfstest code wh...
- 05:10 AM rgw Backport #64694: squid: rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
- https://github.com/ceph/ceph/pull/55941
- 04:40 AM CephFS Backport #64704 (In Progress): quincy: Test failure: test_mount_all_caps_absent (tasks.cephfs.tes...
- 04:10 AM CephFS Backport #64704 (In Progress): quincy: Test failure: test_mount_all_caps_absent (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/55944
- 04:40 AM CephFS Backport #64706 (Rejected): squid: mount command returning misleading error message
- Already in squid when branching...
- 04:10 AM CephFS Backport #64706 (Rejected): squid: mount command returning misleading error message
- 04:39 AM CephFS Backport #64703 (Rejected): squid: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_mu...
- Already in squid when branching...
- 04:10 AM CephFS Backport #64703 (Rejected): squid: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_mu...
- 04:36 AM CephFS Backport #64705 (In Progress): reef: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_...
- 04:10 AM CephFS Backport #64705 (In Progress): reef: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_...
- https://github.com/ceph/ceph/pull/55943
- 04:04 AM CephFS Bug #64700 (Pending Backport): Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multif...
- 01:37 AM CephFS Bug #64700 (Pending Backport): Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multif...
- The actual error is:...
- 03:36 AM CephFS Backport #64701 (In Progress): squid: mgr/volumes: Support to reject CephFS clones if cloner thre...
- 01:42 AM CephFS Backport #64701 (In Progress): squid: mgr/volumes: Support to reject CephFS clones if cloner thre...
- https://github.com/ceph/ceph/pull/55940
- 01:41 AM CephFS Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Backport note: required additional commits from https://github.com/ceph/ceph/pull/55930
- 01:09 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Another debug session with @set detach-on-fork on@ which is supposed to let gdb debug both parent and child processes...
03/04/2024
- 11:23 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- From what we can tell, someone tampered with /usr/bin/protoc on 3 jammy hosts:...
- 10:55 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- From dmick@irvingi05,
$ protoc --version
libprotoc 25.1
/usr/bin/protoc -> protoc-25.1.0
ii protobuf-compi... - 09:22 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- I'm trying to create a jammy environment to reproduce this.
- 09:11 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
dmick@irvingi05:~$ dpkg -l | grep protobuf
ii libprotobuf-dev:amd64 3.12.4-1ubuntu7.22.04...- 08:55 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- I'm able to build in a local centos 8 container with protobuf, protobuf-devel, protobuf-compiler 3.5.0.
The two en... - 08:05 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- From another PR https://jenkins.ceph.com/job/ceph-pull-requests/130486/consoleFull#85719909585e1414f-af06-4588-8fed-a...
- 07:54 PM crimson Bug #64696: Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Likely related, centos 8 crimson builds are failing with (https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64...
- 06:34 PM crimson Bug #64696 (New): Centos8 Crimson container builds - nothing provides libprotobuf.so.30()
- Make check fails from:
https://jenkins.ceph.com/job/ceph-pull-requests/130462/consoleFull#85719909585e1414f-af06-4... - 09:19 PM RADOS Backport #63526 (Resolved): quincy: crash: int OSD::shutdown(): assert(end_time - start_time_func...
- 09:18 PM bluestore Bug #62815 (Resolved): hybrid/avl allocators might be very ineffective when serving bluefs alloca...
- 08:43 PM bluestore Bug #62815: hybrid/avl allocators might be very ineffective when serving bluefs allocations
- https://github.com/ceph/ceph/pull/54877 merged
- 09:17 PM bluestore Backport #63761 (Resolved): quincy: hybrid/avl allocators might be very ineffective when serving ...
- 09:17 PM bluestore Bug #63618 (Resolved): Allocator configured with 64K alloc unit might get 4K requests
- 08:43 PM bluestore Bug #63618: Allocator configured with 64K alloc unit might get 4K requests
- https://github.com/ceph/ceph/pull/54877 merged
- 09:17 PM bluestore Backport #63758 (Resolved): quincy: Allocator configured with 64K alloc unit might get 4K requests
- 08:45 PM RADOS Bug #61140: crash: int OSD::shutdown(): assert(end_time - start_time_func < cct->_conf->osd_fast_...
- https://github.com/ceph/ceph/pull/55134 merged
- 08:44 PM Backport #63986: quincy: mon: add exception handling to ceph health mute
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55117
merged - 08:23 PM Orchestrator Backport #64699 (Rejected): quincy: allow idmap overrides in nfs-ganesha configuration
- 08:22 PM Orchestrator Backport #64698 (Resolved): reef: allow idmap overrides in nfs-ganesha configuration
- https://github.com/ceph/ceph/pull/56029
- 08:22 PM Orchestrator Backport #64697 (In Progress): squid: allow idmap overrides in nfs-ganesha configuration
- https://github.com/ceph/ceph/pull/56038
- 08:21 PM Orchestrator Feature #64577 (Pending Backport): allow idmap overrides in nfs-ganesha configuration
- 08:11 PM CephFS Documentation #51428 (Resolved): mgr/nfs: move nfs doc from cephfs to mgr
- 08:11 PM CephFS Backport #51790 (Rejected): pacific: mgr/nfs: move nfs doc from cephfs to mgr
- pacific is EOL
- 08:09 PM bluestore Bug #61466 (Resolved): Add bluefs write op count metrics
- 08:09 PM CephFS Tasks #64691 (Resolved): Symlink target not set correctly in unencrypted dir
- in->symlink_plain wasn't being set in the non-fscrypt case. ...
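A hypothetical sketch of the fix shape this note describes: the cleartext target (symlink_plain) must be populated in the non-fscrypt case as well, not only when the directory is encrypted. The struct, helper, and encryption stand-in below are illustrative, not Ceph's client code:
<pre>
#include <string>

// Illustrative stand-in for the client's encryption of symlink
// targets under fscrypt; not Ceph code.
static std::string encrypt_target(const std::string& plain) {
  return "<encrypted:" + plain + ">";
}

struct InodeLike {              // hypothetical, mirrors the 'in->' names
  std::string symlink;          // on-wire/stored form
  std::string symlink_plain;    // cleartext target
};

void set_symlink(InodeLike& in, const std::string& target, bool fscrypt) {
  in.symlink = fscrypt ? encrypt_target(target) : target;
  in.symlink_plain = target;    // previously skipped when !fscrypt
}
</pre>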
- 04:17 PM CephFS Tasks #64691 (Resolved): Symlink target not set correctly in unencrypted dir
- Symlink does not work outside of an unencrypted dir. The target does not get set...
- 08:09 PM bluestore Backport #61468 (Rejected): pacific: Add bluefs write op count metrics
- pacific is EOL
- 08:07 PM RADOS Backport #58337 (Rejected): pacific: mon-stretched_cluster: degraded stretched mode lead to Monit...
- 08:06 PM RADOS Backport #58337 (Duplicate): pacific: mon-stretched_cluster: degraded stretched mode lead to Moni...
- pacific is EOL
- 08:07 PM RADOS Bug #59271 (Resolved): mon: FAILED ceph_assert(osdmon()->is_writeable())
- 08:07 PM RADOS Backport #59700 (Rejected): pacific: mon: FAILED ceph_assert(osdmon()->is_writeable())
- pacific is EOL
- 08:06 PM RADOS Bug #57017 (Resolved): mon-stretched_cluster: degraded stretched mode lead to Monitor crash
- 08:00 PM RADOS Bug #64657: Ceph test cases starting cluster not waiting for OSDs to join fully
- Hi Nitzan! Would you mind taking a look?
- 07:59 PM RADOS Bug #64637: LeakPossiblyLost in BlueStore::_do_write_small() in osd
- Looks like a typical symptom of (CPU/memory) starvation.
- 07:59 PM RADOS Bug #64646: ceph osd pool rmsnap clone object leak
- note from bug scrub: reviewed, went to QA.
- 07:58 PM RADOS Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- Bump up.
- 07:56 PM RADOS Bug #54182: OSD_TOO_MANY_REPAIRS cannot be cleared in >=Octopus
- note from bug scrub: reviewed, changes requested.
- 07:55 PM RADOS Bug #64670: LibRadosAioEC.RoundTrip2 hang and pkill
- Might be something new. Bump up and observe.
- 07:53 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- note from scrub: the PR is approved. Needs-qa.
- 07:51 PM RADOS Bug #64674 (Resolved): src/scripts/ceph-backport.sh
- I guess we don't need to backport anything.
- 07:49 PM RADOS Bug #64258: osd/PrimaryLogPG.cc: FAILED ceph_assert(inserted)
- note from bug scrub: reviewed.
- 01:40 PM RADOS Bug #64258 (Fix Under Review): osd/PrimaryLogPG.cc: FAILED ceph_assert(inserted)
- 07:49 PM RADOS Bug #64695: Aborted signal starting in AsyncConnection::send_message()
- ...
- 05:39 PM RADOS Bug #64695 (New): Aborted signal starting in AsyncConnection::send_message()
- /a/yuriw-2024-03-01_16:47:30-rados-wip-yuri11-testing-2024-02-28-0950-reef-distro-default-smithi/7577623...
- 07:44 PM RADOS Bug #64314: cluster log: Cluster log level string representation missing in the cluster logs.
- Still in QA. Bump up.
- 07:36 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- Thank you very, very much for the scenario! This throws a lot of light on what has happened.
I'm not sure whether th... - 07:32 PM RADOS Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- note from bug scrub: Aishwarya is addressing the review's comments.
- 07:27 PM Bug #64684: run-tox-mgr-dashboard-py3 failing with many 400 responses
- Thank you for looking into it! Great find. I hope that once they do release their changes as a major version we won't...
- 05:27 PM Bug #64684: run-tox-mgr-dashboard-py3 failing with many 400 responses
- After they yanked that release of pytest, it looks like the issue went away. In any case I'll keep this PR there in ca...
- 03:53 PM Bug #64684 (Fix Under Review): run-tox-mgr-dashboard-py3 failing with many 400 responses
- I opened a PR which is still a draft, but the failure mostly looks to be caused by a new (and now yanked) release[1] of pyte... - 11:42 AM Bug #64684 (Pending Backport): run-tox-mgr-dashboard-py3 failing with many 400 responses
- 11:42 AM Bug #64684 (Pending Backport): run-tox-mgr-dashboard-py3 failing with many 400 responses
- Failures observed in a multitude of unrelated PR builds:
https://jenkins.ceph.com/job/ceph-pull-requests/130431/co... - 07:25 PM rbd Backport #64667 (In Progress): quincy: [test] cross-pollinate diff-continuous and compare-mirror-...
- 07:12 PM Backport #64509: reef: Debian bookworm package needs to explicitly specify cephadm home directory
- Matthew Vernon wrote:
> please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/557... - 07:11 PM rbd Backport #64668 (In Progress): reef: [test] cross-pollinate diff-continuous and compare-mirror-im...
- 06:56 PM rbd Backport #64669 (In Progress): squid: [test] cross-pollinate diff-continuous and compare-mirror-i...
- 06:27 PM RADOS Bug #53240: full-object read crc is mismatch, because truncate modify oi.size and forget to clear...
- The fix goes into QA.
- 05:27 PM rgw Backport #64694 (Resolved): squid: rgw/s3select: crashes in test_progress_expressions in run_s3se...
- 05:27 PM rgw Backport #64693 (In Progress): reef: rgw/s3select: crashes in test_progress_expressions in run_s3...
- https://github.com/ceph/ceph/pull/55969
- 05:27 PM rgw Backport #64692 (Rejected): quincy: rgw/s3select: crashes in test_progress_expressions in run_s3s...
- 05:27 PM rgw Bug #63245 (Pending Backport): rgw/s3select: crashes in test_progress_expressions in run_s3select...
- 03:12 PM rgw Bug #63245 (New): rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
- The PR fixes the too-small-chunk issue (the flow was changed to append these small chunks);
thus, upon compression... - 02:54 PM rgw Bug #63245 (Fix Under Review): rgw/s3select: crashes in test_progress_expressions in run_s3select...
- 05:26 PM Backport #64663 (Resolved): squid: crimson: unittest-seastar-socket failing intermittently
- 03:57 PM rgw Bug #64690 (New): TestAMQP.MaxConnections failure
- a make check failure from https://jenkins.ceph.com/job/ceph-pull-requests/130468/consoleFull#-1839254854e840cee4-f4a4...
- 03:52 PM Orchestrator Backport #64689 (Resolved): reef: cephadm: host filtering with label and host pattern only uses t...
- https://github.com/ceph/ceph/pull/56107
- 03:52 PM Orchestrator Backport #64688 (Resolved): quincy: cephadm: host filtering with label and host pattern only uses...
- https://github.com/ceph/ceph/pull/56088
- 03:52 PM Orchestrator Backport #64687 (New): squid: cephadm: host filtering with label and host pattern only uses the l...
- 03:51 PM Orchestrator Bug #64428 (Pending Backport): cephadm: host filtering with label and host pattern only uses the ...
- 02:24 PM Dashboard Feature #64686 (New): mgr/dashboard: Use help texts below the input fields and legends in forms
- h3. Use help texts below the input fields and legends in forms
_Introduces a new UXD change which adds help texts ... - 02:02 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- qa test case reproducer: https://github.com/ceph/ceph/pull/55784
- 01:52 PM CephFS Bug #64572 (Triaged): workunits/fsx.sh failure
- 01:51 PM CephFS Bug #64572: workunits/fsx.sh failure
- Venky Shankar wrote:
> https://pulpito.ceph.com/vshankar-2024-02-26_05:44:42-fs:workload-wip-vshankar-testing-202402... - 01:31 PM CephFS Bug #64685 (Fix Under Review): mds: disable defer_client_eviction_on_laggy_osds by default
- 01:28 PM CephFS Bug #64685 (Pending Backport): mds: disable defer_client_eviction_on_laggy_osds by default
- This config can result in a single client holding up the mds from servicing other clients, since once a client is deferred fro... - 12:31 PM CephFS Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
- 12:31 PM CephFS Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
- The progress of this request is tracked at
[1] https://tracker.ceph.com/issues/61397
[2] https://tracker.ceph.com/issue... - 12:13 PM CephFS Bug #57594 (Can't reproduce): pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_da...
- 10:55 AM Bug #64548 (Fix Under Review): ceph-base: /var/lib/ceph/crash/posted not chowned to ceph:ceph cau...
- 10:35 AM Bug #64548: ceph-base: /var/lib/ceph/crash/posted not chowned to ceph:ceph causing ceph-crash to ...
- Opened a PR for this, as this was something I've been working on as part of a patch series.
You can find the PR he...
- 10:32 AM Dashboard Backport #64683: quincy: mgr/dashboard: rgw roles page broken with items don't have permission po...
- https://github.com/ceph/ceph/pull/55516
- 10:00 AM Dashboard Backport #64683 (Resolved): quincy: mgr/dashboard: rgw roles page broken with items don't have pe...
- https://github.com/ceph/ceph/pull/55516
- 10:19 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Finally, with the infra and kclient issues set aside, I was able to gdb the ceph-fuse process and add a breakpoint in...
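For anyone retracing this, a minimal sketch of attaching a debugger to a live ceph-fuse; the PID lookup is an assumption for illustration, and the actual breakpoint is elided in the comment above:
<pre>
# attach gdb to the oldest running ceph-fuse process (needs debug symbols installed)
gdb -p "$(pgrep -o ceph-fuse)"
# then set the breakpoint under investigation and trigger the unmount
</pre>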
- 09:58 AM CephFS Backport #64224 (In Progress): quincy: qa: flush journal may cause timeouts of `scrub status`
- 09:48 AM CephFS Backport #64223 (In Progress): reef: qa: flush journal may cause timeouts of `scrub status`
- 09:37 AM Dashboard Bug #64682 (New): mgr/dashboard: Grafana issues.
- h3. Description of problem
1. All the dashboards need to be revisited to remove the duplicate content and con...
- 09:28 AM Dashboard Bug #64588 (Pending Backport): mgr/dashboard: rgw roles page broken with items don't have permiss...
- 09:28 AM Dashboard Bug #64588 (Fix Under Review): mgr/dashboard: rgw roles page broken with items don't have permiss...
- 09:26 AM Dashboard Bug #64588 (Pending Backport): mgr/dashboard: rgw roles page broken with items don't have permiss...
- 09:26 AM Dashboard Bug #64588 (Fix Under Review): mgr/dashboard: rgw roles page broken with items don't have permiss...
- 08:15 AM Dashboard Bug #64681 (In Progress): mgr/dashboard: grpc deps broken in some builds
- In Fedora, the currently pinned versions of grpcio and grpcio-tools are not available, so we need to relax the pinnings.
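A hedged illustration of what relaxing such a pin can look like; the package versions and the pip invocation below are made up for the example and are not Ceph's actual constraints:
<pre>
# instead of an exact pin that may be missing from Fedora's repos:
#   grpcio==1.47.0
# allow a compatible range (versions illustrative):
pip install 'grpcio>=1.47,<2' 'grpcio-tools>=1.47,<2'
</pre>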
- 08:03 AM crimson Bug #64680 (New): transaction_manager_test/tm_random_block_device_test_t.scatter_allocation/0 sta...
- ERROR seastore_cleaner - RBMSpaceTracker::equals: block addr 133074944 mismatch other used: false
Caused by multi...
- 07:10 AM CephFS Bug #64486 (Fix Under Review): qa: enhance labeled perf counters test for cephfs-mirror
- 07:05 AM CephFS Bug #64679 (Fix Under Review): cephfs: removexattr should always return -ENODATA when xattr doesn...
- This issue is from https://github.com/ceph/ceph/pull/55087.
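As a quick illustration of the POSIX expectation discussed in the next sentence, removing a missing xattr from a shell (the mountpoint and attribute name are made up):
<pre>
# removing an xattr that was never set should fail with ENODATA
setfattr -x user.nosuchattr /mnt/cephfs/somefile
# expected: setfattr: /mnt/cephfs/somefile: No data available
</pre>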
POSIX says we should return -ENODATA when the corr...
- 04:19 AM Bug #64678 (Pending Backport): No matching package to install: 'qatlib-devel'
- On a RHEL 9 arm OS, sh install-deps.sh reports:
No matching package to install: 'qatlib-devel'
No matching package t...
- 02:16 AM crimson Bug #64512 (Resolved): crimson: asan stack-use-after-return false positive on osd startup with cl...
- 01:58 AM Bug #64210 (Fix Under Review): make check(arm64): check-generated.sh is running too slow, needed ...
- 01:26 AM CephFS Bug #64616 (Fix Under Review): selinux denials with centos9.stream
- 12:57 AM Linux kernel client Bug #64607 (Fix Under Review): ceph: fstest generic/580 test failure with infinitely loop
- v1 patchwork link: https://patchwork.kernel.org/project/ceph-devel/list/?series=830937&archive=both
- 12:18 AM RADOS Bug #63066: rados/objectstore - application not enabled on pool '.mgr'
- /a/yuriw-2024-02-28_15:47:41-rados-wip-yuri4-testing-2024-02-27-1111-quincy-distro-default-smithi/7575815
/a/yuriw-2...
- 12:15 AM bluestore Bug #56788: crash: void KernelDevice::_aio_thread(): abort
- /a/yuriw-2024-02-28_15:47:41-rados-wip-yuri4-testing-2024-02-27-1111-quincy-distro-default-smithi/7575637
03/03/2024
- 09:55 PM crimson Bug #64589: seastar prometheus.cc compile error: call to member function 'Set' is ambiguous
- squid backport: https://github.com/ceph/ceph/pull/55907
- 09:28 AM crimson Bug #64589 (Resolved): seastar prometheus.cc compile error: call to member function 'Set' is ambi...
- 04:29 PM rgw Bug #63245: rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
- ignore the previous comment.
the exception (before that it was an assert) was caused by a small-size chunk.
the s...
- 03:00 PM CephFS Feature #64677 (New): Enhance Message with a generic method that can be used to delay payload dec...
- Message payload has a serialized version of the message content. Using the standard `decode_payload` suggests that th...
- 07:29 AM mgr Bug #56246 (In Progress): crash: File "mgr/nfs/module.py", in cluster_ls: return available_cluste...
03/02/2024
- 07:36 PM rgw Bug #64676 (Fix Under Review): rgw: awssigv4: new trailer boundary case
- 07:34 PM rgw Bug #64676 (Pending Backport): rgw: awssigv4: new trailer boundary case
- I observed an environment in which the maven test suite, using http, generated a trailer chunk
boundary of "0;" rath...
- 07:06 PM rbd Backport #64675 (In Progress): squid: rbd: scalability issue on Windows due to TCP session count
- 06:54 PM rbd Backport #64675 (Resolved): squid: rbd: scalability issue on Windows due to TCP session count
- https://github.com/ceph/ceph/pull/55893
- 06:53 PM rbd Feature #63645 (Pending Backport): rbd: scalability issue on Windows due to TCP session count
- 12:21 PM rgw Bug #63245: rgw/s3select: crashes in test_progress_expressions in run_s3select_on_csv
- https://github.com/ceph/ceph/pull/55891
this PR removes the assert residing in the CSV-parser and replaces it with...
- 12:02 AM mgr Backport #63796: reef: devicehealth: sqlite3.IntegrityError: UNIQUE constraint failed: DeviceHeal...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54879
merged
03/01/2024
- 11:57 PM Backport #63277: reef: cmake: dependency ordering error for liburing and librocksdb
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54122
merged
- 11:19 PM RADOS Bug #64674: src/scripts/ceph-backport.sh
- revert PR: https://github.com/ceph/ceph/pull/55884
will fix this
- 11:16 PM RADOS Bug #64674 (Resolved): src/scripts/ceph-backport.sh
- src/script/ceph-backport.sh: line 1737: ../../../ceph/.github/pull_request_template.md: No such file or directory
...
- 11:01 PM RADOS Backport #64673 (In Progress): quincy: test_pool_min_size: AssertionError: wait_for_clean: failed...
- 10:58 PM RADOS Backport #64673 (In Progress): quincy: test_pool_min_size: AssertionError: wait_for_clean: failed...
- https://github.com/ceph/ceph/pull/55882
- 10:58 PM RADOS Backport #64672 (New): pacific: test_pool_min_size: AssertionError: wait_for_clean: failed before...
- 10:58 PM RADOS Backport #64671 (New): reef: test_pool_min_size: AssertionError: wait_for_clean: failed before ti...
- 10:55 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2024-02-28_22:53:11-rados-wip-yuri2-testing-2024-02-16-0829-reef-distro-default-smithi/7576306
- 10:54 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- /a/yuriw-2024-02-28_22:53:11-rados-wip-yuri2-testing-2024-02-16-0829-reef-distro-default-smithi/7576311
- 10:53 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- /a/yuriw-2024-02-28_22:53:11-rados-wip-yuri2-testing-2024-02-16-0829-reef-distro-default-smithi/7576314
- 09:30 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- /a/yuriw-2024-02-28_22:53:11-rados-wip-yuri2-testing-2024-02-16-0829-reef-distro-default-smithi/7576298
- 10:53 PM RADOS Bug #59172 (Pending Backport): test_pool_min_size: AssertionError: wait_for_clean: failed before ...
- 10:51 PM RADOS Bug #64670 (New): LibRadosAioEC.RoundTrip2 hang and pkill
- /a/yuriw-2024-02-28_22:53:11-rados-wip-yuri2-testing-2024-02-16-0829-reef-distro-default-smithi/7576303...
- 09:51 PM Orchestrator Bug #64208: test_cephadm.sh: Container version mismatch causes job to fail.
- /a/yuriw-2024-02-28_22:53:11-rados-wip-yuri2-testing-2024-02-16-0829-reef-distro-default-smithi/7576313
- 08:34 PM CephFS Bug #64659: mds: switch to using xlists instead of elists
- > working with elist might lead to severe consequences at times if the same class member is used to initialise multip...
- 12:16 PM CephFS Bug #64659 (New): mds: switch to using xlists instead of elists
- ...
- 05:58 PM rbd Backport #64669 (In Progress): squid: [test] cross-pollinate diff-continuous and compare-mirror-i...
- https://github.com/ceph/ceph/pull/55927
- 05:58 PM rbd Backport #64668 (In Progress): reef: [test] cross-pollinate diff-continuous and compare-mirror-im...
- https://github.com/ceph/ceph/pull/55928
- 05:58 PM rbd Backport #64667 (In Progress): quincy: [test] cross-pollinate diff-continuous and compare-mirror-...
- https://github.com/ceph/ceph/pull/55929
- 05:56 PM rbd Bug #64574 (Pending Backport): [test] cross-pollinate diff-continuous and compare-mirror-image te...
- 05:25 PM rgw Backport #64664 (In Progress): squid: object lock: An object uploaded through a multipart upload ...
- 05:22 PM rgw Backport #64664 (Resolved): squid: object lock: An object uploaded through a multipart upload can...
- https://github.com/ceph/ceph/pull/55876
- 05:22 PM rgw Backport #64666 (New): reef: object lock: An object uploaded through a multipart upload can be de...
- 05:22 PM rgw Backport #64665 (New): quincy: object lock: An object uploaded through a multipart upload can be ...
- 05:20 PM rgw Bug #63724 (Pending Backport): object lock: An object uploaded through a multipart upload can be ...
- 03:45 PM rbd Feature #41591 (Rejected): [rbd]:clear up all objects when pool is empty
- 03:42 PM Bug #64213: MGR modules incompatible with later PyO3 versions - PyO3 modules may only be initiali...
- Interesting... Because of this problem, and the fact that debian-ceph packages are not even tested before release, I ...
- 03:00 PM Bug #64213: MGR modules incompatible with later PyO3 versions - PyO3 modules may only be initiali...
- CentOS 9 Stream is also affected btw; we are hitting this PyO3 import error too. Subscribing to this issue.
- 03:35 PM Backport #64663 (In Progress): squid: crimson: unittest-seatar-socket failing intermittently
- 03:35 PM Backport #64663 (Resolved): squid: crimson: unittest-seatar-socket failing intermittently
- https://github.com/ceph/ceph/pull/55873
- 03:35 PM crimson Bug #64457: crimson: unittest-seatar-socket failing intermittently
- ...
- 03:24 PM crimson Bug #64457 (Pending Backport): crimson: unittest-seatar-socket failing intermittently
- tagged for squid backport since we're seeing the failures there too
- 02:29 PM rbd Feature #64662 (In Progress): allow cloning from group snapshots (.group -- snapshots in a group ...
- Currently .group snapshots can only be mapped (read-only, as any other snapshot) and used for rollback. This is unne...
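For context, a sketch of what is possible today; the pool, image, and the group-snapshot name below are illustrative, since the exact .group snapshot naming is not spelled out in the comment:
<pre>
# today a group snapshot can only be mapped read-only, like any other snapshot,
# or used for rollback; it cannot yet serve as a clone parent
rbd device map --read-only mypool/myimage@.group.1234
</pre>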
- 01:56 PM rgw Backport #64661 (In Progress): squid: uncaught exception from AWSv4ComplMulti during java AWS4Tes...
- 01:36 PM rgw Backport #64661 (Resolved): squid: uncaught exception from AWSv4ComplMulti during java AWS4Test.t...
- https://github.com/ceph/ceph/pull/55871
- 01:46 PM CephFS Bug #50719: xattr returning from the dead (sic!)
- Xiubo Li wrote:
> Matthew Hutchinson wrote:
> > We are currently working on recreating this issue internally as thi...
- 01:28 PM rgw Bug #64549 (Pending Backport): uncaught exception from AWSv4ComplMulti during java AWS4Test.testM...
- 12:52 PM Dashboard Bug #64660 (Pending Backport): mgr/dashboard: add cephfs authentication
- Add the ability to use cephx auth from the Dashboard's cephfs
- 12:11 PM RADOS Backport #64649 (In Progress): quincy: min_last_epoch_clean is not updated, causing osdmap to be ...
- 12:00 PM RADOS Backport #64650 (In Progress): reef: min_last_epoch_clean is not updated, causing osdmap to be un...
- 11:50 AM Dashboard Cleanup #64658 (Pending Backport): mgr/dashboard: Locking improvements in bucket create form
- *Summary of Improvements:*
* Rename to 'Object Locking'
* If locking is enabled the versioning needs to be defaul...
- 11:44 AM RADOS Backport #64651 (In Progress): squid: min_last_epoch_clean is not updated, causing osdmap to be u...
- 09:53 AM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Venky Shankar wrote:
> Dhairya,
>
> Before we go into improving the lagginess detection infrastructure, let's ver...
- 09:19 AM RADOS Bug #64657: Ceph test cases starting cluster not waiting for OSDs to join fully
- e.g., to reproduce the issue:
diff slicer-src/src/test/osd/safe-to-destroy.sh
function run() {
@@ -32,18 +32,3...
- 09:12 AM RADOS Bug #64657 (Rejected): Ceph test cases starting cluster not waiting for OSDs to join fully
- I've identified an issue in the Ceph testing framework where, after starting a temporary cluster using functions like...
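A minimal sketch of the kind of wait being asked for, assuming a 3-OSD test cluster and a 60-second timeout (both illustrative; the standalone framework has its own helpers):
<pre>
# poll until all 3 OSDs report up and in, instead of proceeding immediately
for i in $(seq 1 60); do
    ceph osd stat | grep -q '3 up.*3 in' && break
    sleep 1
done
</pre>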
- 08:29 AM Orchestrator Backport #64635 (In Progress): reef: cephadm/nvmeof: scrape nvmeof prometheus endpoint
- 05:48 AM CephFS Backport #64656 (Rejected): quincy: qa/fscrypt: switch to postmerge fragment to distiguish the mo...
- fscrypt isn't supported yet in quincy.
- 03:23 AM CephFS Backport #64656 (Rejected): quincy: qa/fscrypt: switch to postmerge fragment to distiguish the mo...
- 05:48 AM CephFS Backport #64655 (In Progress): reef: qa/fscrypt: switch to postmerge fragment to distiguish the m...
- 03:23 AM CephFS Backport #64655 (In Progress): reef: qa/fscrypt: switch to postmerge fragment to distiguish the m...
- https://github.com/ceph/ceph/pull/55857
- 05:32 AM Linux kernel client Bug #64607: ceph: fstest generic/580 test failure with infinitely loop
- For the messenger V2 test for *fscrypt* we need to backport https://tracker.ceph.com/issues/59195.
- 05:30 AM Linux kernel client Bug #64607: ceph: fstest generic/580 test failure with infinitely loop
- [Edit] https://pulpito.ceph.com/vshankar-2024-02-27_04:05:06-fs-wip-vshankar-testing-20240226.124304-testing-default-...
- 05:15 AM Dashboard Backport #64639 (Resolved): squid: mgr/dashboard: rgw roles page broken with items don't have per...
- 05:15 AM Dashboard Backport #64640 (Resolved): reef: mgr/dashboard: rgw roles page broken with items don't have perm...
- 03:13 AM CephFS Bug #64654 (Duplicate): fscrypt: add mount-syntax/v2 test for fscrypt
- This has been fixed by https://tracker.ceph.com/issues/59195 coincidentally, and we just need to backport it to quincy a...
- 03:07 AM CephFS Bug #64654 (Duplicate): fscrypt: add mount-syntax/v2 test for fscrypt
- We missed the v2 test for fscrypt.
- 03:12 AM CephFS Bug #59195 (Pending Backport): qa/fscrypt: switch to postmerge fragment to distiguish the mounter...
- 02:57 AM Feature #64335: Add alerts to ceph monitoring stack for the nvmeof gateways
- PR Merged
- 02:56 AM nvme-of Feature #64578: Add a top tool to the nvmeof CLI to support troubleshooting
- Example code https://github.com/pcuzner/ceph-nvmeof-top
Container here quay.io/cuznerp/nvmeof-top:latest
- 02:00 AM bluestore Bug #52464: FAILED ceph_assert(current_shard->second->valid())
- We've run into this bug twice recently; we don't have any urgency when it comes to re-provisioning these disks and ar...
02/29/2024
- 10:56 PM crimson Feature #64375: crimson: introduce support for C++ coroutines
- https://github.com/ceph/ceph/pull/55846 https://github.com/ceph/ceph/pull/55847 add initial support and convert some ...
- 09:25 PM RADOS Backport #64406: reef: Failed to encode map X with expected CRC
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/55712
merged
- 09:00 PM RADOS Bug #64637: LeakPossiblyLost in BlueStore::_do_write_small() in osd
- Laura Flores wrote:
> /a/yuriw-2024-02-22_21:33:08-rados-wip-yuri8-testing-2024-02-22-0734-reef-distro-default-smith...
- 09:00 PM RADOS Bug #64637 (New): LeakPossiblyLost in BlueStore::_do_write_small() in osd
- 08:57 PM RADOS Bug #64637 (Duplicate): LeakPossiblyLost in BlueStore::_do_write_small() in osd
- 08:54 PM RADOS Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- /a/yuriw-2024-02-28_22:39:54-rados-wip-yuri8-testing-2024-02-22-0734-reef-distro-default-smithi/7576288
- 08:42 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- /a/yuriw-2024-02-28_22:39:54-rados-wip-yuri8-testing-2024-02-22-0734-reef-distro-default-smithi/7576292
- 08:03 PM rgw Bug #64653 (Fix Under Review): Notification FilterRules for S3key, S3Metadata & S3Tags spit incor...
- 07:43 PM rgw Bug #64653 (Pending Backport): Notification FilterRules for S3key, S3Metadata & S3Tags spit incor...
- Currently, if both prefix & suffix filter rules are set and we try to do a `radosgw-admin notification get` & captur...
- 06:42 PM rbd Bug #64652 (New): [test] modernize rbd-mirror test setups to use mirroring instances that run on ...
- The rbd-mirror tests in qa/suites/rbd/{mirror, mirror-thrash}, and qa/suites/krbd/mirror are not set up as they're ty...
- 06:26 PM RADOS Backport #64651 (In Progress): squid: min_last_epoch_clean is not updated, causing osdmap to be u...
- https://github.com/ceph/ceph/pull/55865
- 06:15 PM RADOS Backport #64650 (In Progress): reef: min_last_epoch_clean is not updated, causing osdmap to be un...
- https://github.com/ceph/ceph/pull/55867
- 06:15 PM RADOS Backport #64649 (In Progress): quincy: min_last_epoch_clean is not updated, causing osdmap to be ...
- https://github.com/ceph/ceph/pull/55868
- 06:08 PM RADOS Bug #63883 (Pending Backport): min_last_epoch_clean is not updated, causing osdmap to be unable t...
- 04:14 PM bluestore Backport #64648 (In Progress): squid: BlueStore/DeferredWriteTest.NewData/3 is broken
- 04:14 PM bluestore Backport #64647 (In Progress): reef: BlueStore/DeferredWriteTest.NewData/3 is broken
- 04:07 PM bluestore Bug #64443 (Pending Backport): BlueStore/DeferredWriteTest.NewData/3 is broken
- 03:55 PM Orchestrator Feature #64334 (Duplicate): The nvmeof gateway has an embedded prometheus exporter than should be...
- 03:45 PM rgw Backport #64501 (Resolved): squid: multisite: Deadlock in RGWDeleteMultiObj with default rgw_mult...
- 03:45 PM rgw Backport #64601 (Resolved): squid: unittest_rgw_dmclock_scheduler fails for arm64
- 03:29 PM rgw Bug #64418 (Can't reproduce): RGW garbage collection stuck and growing
- 03:28 PM rgw Bug #64431 (Triaged): metadata sync does not replicate iam OpenIDConnectProvider metadata
- 03:12 PM rgw Bug #64598 (Triaged): Radosgw Instance ID Mismatch between metadata counters and RGW exporter met...
- 03:12 PM rgw Bug #64185 (Triaged): timeouts when listing versioned buckets that skip over many entries
- 02:46 PM RADOS Bug #64646 (Fix Under Review): ceph osd pool rmsnap clone object leak
- There are two ways to remove pool snaps: the rados tool, or the mon command (ceph osd pool rmsnap).
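For reference, the two paths side by side (pool and snapshot names are made up):
<pre>
# pool snapshot removal via the rados tool
rados -p mypool rmsnap mysnap
# ...and via the mon command
ceph osd pool rmsnap mypool mysnap
</pre>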
It seems that the monitor c...
- 02:17 PM CephFS Tasks #64413: File size is not correct after rmw
- Spoke to Chris regarding this.
Chris, if you can attach the debug client/mds logs from the two runs you mention (o...
- 02:08 PM CephFS Bug #58090: Non-existent pending clone shows up in snapshot info
- Neeraj and I had a discussion regarding this.
We fixed a bunch of issues around clones and dangling index symlinks...
- 12:48 PM Orchestrator Backport #64645 (Resolved): quincy: cephadm: remove restriction for crush device classes
- https://github.com/ceph/ceph/pull/56087
- 12:48 PM Orchestrator Backport #64644 (Resolved): reef: cephadm: remove restriction for crush device classes
- https://github.com/ceph/ceph/pull/56106
- 12:47 PM Orchestrator Bug #64382 (Pending Backport): cephadm: remove restriction for crush device classes
- 11:10 AM Dashboard Bug #64487 (Resolved): mgr/dashboard: fix subvolume group edit
- 11:10 AM Dashboard Backport #64609 (Resolved): squid: mgr/dashboard: fix subvolume group edit
- 11:09 AM Dashboard Backport #64610 (Resolved): reef: mgr/dashboard: fix subvolume group edit
- 08:36 AM ceph-volume Backport #64643 (New): quincy: lvm list should filter also on vg name
- 08:36 AM ceph-volume Backport #64642 (New): reef: lvm list should filter also on vg name
- 08:33 AM ceph-volume Bug #62320 (Pending Backport): lvm list should filter also on vg name
- 08:04 AM CephFS Bug #64616: selinux denials with centos9.stream
- Dan Mick wrote:
> I bet you didn't mean to change the project to Calamari, which is long-dead
Oh god. I meant to ...
- 07:58 AM CephFS Bug #64616: selinux denials with centos9.stream
- I bet you didn't mean to change the project to Calamari, which is long-dead
- 07:37 AM CephFS Bug #64641 (Pending Backport): qa: Add multifs root_squash testcase
- Multifs root_squash test is missing. Add it.
- 07:33 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- Samuel Just wrote:
> https://github.com/athanatos/ceph/tree/sjust/wip-64546-max-creating-pgs
We've just finished ...
- 07:02 AM RADOS Bug #53342: Exiting scrub checking -- not all pgs scrubbed
- Radoslaw Zarzynski wrote:
> Ronen, do we need any backporting?
No. The fix (55478) made it in time for Squid.
- 06:12 AM CephFS Backport #64583 (In Progress): squid: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FU...
- 06:10 AM CephFS Backport #64582 (In Progress): reef: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FUL...
- 06:08 AM CephFS Backport #64581 (In Progress): quincy: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_F...
- 06:05 AM Dashboard Backport #64640 (In Progress): reef: mgr/dashboard: rgw roles page broken with items don't have p...
- 05:51 AM Dashboard Backport #64640 (Resolved): reef: mgr/dashboard: rgw roles page broken with items don't have perm...
- https://github.com/ceph/ceph/pull/55827
- 06:04 AM Dashboard Backport #64639 (In Progress): squid: mgr/dashboard: rgw roles page broken with items don't have ...
- 05:51 AM Dashboard Backport #64639 (Resolved): squid: mgr/dashboard: rgw roles page broken with items don't have per...
- https://github.com/ceph/ceph/pull/55826
- 05:48 AM Dashboard Bug #64588 (Pending Backport): mgr/dashboard: rgw roles page broken with items don't have permiss...
- 04:24 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Venky Shankar wrote:
> Started running into
>
> > ceph: stderr Error: OCI runtime error: crun: bpf create ``: In...
- 04:23 AM Orchestrator Bug #64482 (Resolved): ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not i...
- Ilya posted a couple of changes to ceph-build to resolve this.
- https://github.com/ceph/ceph-build/pull/2204
- h...
- 04:19 AM RADOS Bug #64471: osd: upgrades from v18.2.[01] to main fail with "heartbeat_check: no reply from"
- Xiubo Li wrote:
> Patrick,
>
> From console_logs/smithi196.log, the kernel just crashed when copying data from us...
- 12:44 AM RADOS Bug #64471: osd: upgrades from v18.2.[01] to main fail with "heartbeat_check: no reply from"
- Xiubo Li wrote:
> Patrick,
>
> From console_logs/smithi196.log, the kernel just crashed when copying data from us...
- 04:03 AM CephFS Feature #64506: qa: update fs:upgrade to test from reef/squid to main
- Dhairya, please take this one (on prio).
- 12:53 AM CephFS Bug #50719: xattr returning from the dead (sic!)
- Matthew Hutchinson wrote:
> We are currently working on recreating this issue internally as this was a customer clus...
- 12:49 AM devops Bug #64638 (New): get lastest code from github, build in centos 8 stream,sh install-deps.sh: No m...
- After getting the latest code from GitHub and building on CentOS (Python 3.6.8):
sh install-deps.sh
ERROR: Could not find a...
02/28/2024
- 10:43 PM Orchestrator Bug #57755: task/test_orch_cli: test_cephfs_mirror times out
- /a/yuriw-2024-02-19_20:16:06-rados-wip-yuri2-testing-2024-02-16-0829-reef-distro-default-smithi/7566897
- 10:32 PM RADOS Bug #64637 (New): LeakPossiblyLost in BlueStore::_do_write_small() in osd
- /a/yuriw-2024-02-22_21:33:08-rados-wip-yuri8-testing-2024-02-22-0734-reef-distro-default-smithi/7571350...
- 10:20 PM Orchestrator Bug #64208: test_cephadm.sh: Container version mismatch causes job to fail.
- /a/yuriw-2024-02-22_21:33:08-rados-wip-yuri8-testing-2024-02-22-0734-reef-distro-default-smithi/7571334
- 08:36 PM rbd Backport #64464 (Resolved): reef: "rbd children" should support --image-id option
- 08:32 PM rbd Backport #64464: reef: "rbd children" should support --image-id option
- merged
- 08:35 PM rbd Backport #64462 (Resolved): reef: split() is broken in SparseExtentSplitMerge and SparseBufferlis...
- 08:33 PM rbd Backport #64462: reef: split() is broken in SparseExtentSplitMerge and SparseBufferlistExtentSpli...
- merged
- 08:14 PM Orchestrator Backport #64636 (New): squid: cephadm/nvmeof: scrape nvmeof prometheus endpoint
- 08:14 PM Orchestrator Backport #64635 (Resolved): reef: cephadm/nvmeof: scrape nvmeof prometheus endpoint
- https://github.com/ceph/ceph/pull/56108
- 08:10 PM Orchestrator Bug #64536 (Pending Backport): cephadm/nvmeof: scrape nvmeof prometheus endpoint
- 07:53 PM CephFS Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- We shouldn't overload. The background quiesce is an important test, but it is not functional e2e testing. For that we...
- 07:40 PM Orchestrator Backport #64634 (Resolved): reef: cephadm: cephadm does not clean up /etc/ceph/podman-auth.json i...
- https://github.com/ceph/ceph/pull/56105
- 07:40 PM Orchestrator Backport #64633 (New): squid: cephadm: cephadm does not clean up /etc/ceph/podman-auth.json in rm...
- 07:33 PM Orchestrator Bug #64433 (Pending Backport): cephadm: cephadm does not clean up /etc/ceph/podman-auth.json in r...
- 07:32 PM Orchestrator Backport #64632 (Resolved): reef: secure monitoring stack support is not documented
- https://github.com/ceph/ceph/pull/56104
- 07:32 PM Orchestrator Backport #64631 (New): squid: secure monitoring stack support is not documented
- 07:30 PM Orchestrator Documentation #64596 (Pending Backport): secure monitoring stack support is not documented
- 07:03 PM CephFS Bug #64616: selinux denials with centos9.stream
- Venky Shankar wrote:
> Patrick, I saw you working around with selinux denials in @qa/suites/fs/workload/tasks/5-work... - 03:32 PM CephFS Bug #64616: selinux denials with centos9.stream
- Patrick, I saw you working around with selinux denials in @qa/suites/fs/workload/tasks/5-workunit/postgres.yaml@, how...
- 02:55 PM CephFS Bug #64616 (Pending Backport): selinux denials with centos9.stream
- /a/vshankar-2024-02-26_10:07:12-fs-wip-vshankar-testing-20240226.064629-testing-default-smithi/7573529...
- 06:45 PM Orchestrator Backport #64630 (Resolved): quincy: cephadm: asyncio timeout handler can't handle conccurent.futu...
- https://github.com/ceph/ceph/pull/56086
- 06:45 PM Orchestrator Backport #64629 (Resolved): reef: cephadm: asyncio timeout handler can't handle conccurent.future...
- https://github.com/ceph/ceph/pull/56103
- 06:44 PM Orchestrator Backport #64628 (New): squid: cephadm: asyncio timeout handler can't handle conccurent.futures.Ca...
- 06:44 PM Orchestrator Backport #64627 (Resolved): reef: cephadm: ceph-exporter fails to deploy when placed first
- https://github.com/ceph/ceph/pull/56102
- 06:44 PM Orchestrator Backport #64626 (New): squid: cephadm: ceph-exporter fails to deploy when placed first
- 06:43 PM Orchestrator Bug #64473 (Pending Backport): cephadm: asyncio timeout handler can't handle conccurent.futures.C...
- 06:41 PM Orchestrator Bug #64491 (Pending Backport): cephadm: ceph-exporter fails to deploy when placed first
- 05:55 PM rgw Bug #49387: several crashes from bad_alloc exceptions
- I'm hearing reports that RHEL 8.6 is shipping a gperftools 2.8.1 that has these same crashes?
edit: oops, this isn...
- 05:24 PM Dashboard Backport #64625 (In Progress): squid: mgr/dashboard: fix snap schedule date format
- 05:20 PM Dashboard Backport #64625 (In Progress): squid: mgr/dashboard: fix snap schedule date format
- https://github.com/ceph/ceph/pull/55816
- 05:23 PM Dashboard Backport #64624 (In Progress): reef: mgr/dashboard: fix snap schedule date format
- 05:20 PM Dashboard Backport #64624 (In Progress): reef: mgr/dashboard: fix snap schedule date format
- https://github.com/ceph/ceph/pull/55815
- 05:13 PM Dashboard Bug #64613 (Pending Backport): mgr/dashboard: fix snap schedule date format
- 05:12 PM Dashboard Bug #64613 (New): mgr/dashboard: fix snap schedule date format
- 04:30 PM Dashboard Bug #64613 (Pending Backport): mgr/dashboard: fix snap schedule date format
- 04:30 PM Dashboard Bug #64613 (In Progress): mgr/dashboard: fix snap schedule date format
- 02:36 PM Dashboard Bug #64613 (Pending Backport): mgr/dashboard: fix snap schedule date format
- 09:57 AM Dashboard Bug #64613 (Pending Backport): mgr/dashboard: fix snap schedule date format
- h3. Description of problem
Snapshot schedule from dashboard is f...
- 05:06 PM rgw Bug #51437 (Resolved): the lifecycle transition operation does not work after set object acl
- The fix is available from quincy release.
- 05:03 PM rgw Bug #51437: the lifecycle transition operation does not work after set object acl
- This issue was already fixed as part of https://tracker.ceph.com/issues/61770 (https://github.com/ceph/ceph/pull/52160)
- 04:55 PM CephFS Bug #64615 (Fix Under Review): tools/first-damage: Skips root and lost+found inode
- 12:04 PM CephFS Bug #64615 (Resolved): tools/first-damage: Skips root and lost+found inode
- The 'first-damage.py' tool skips both the root and lost+found inodes; as
a result the tool can't be used to repair/remove ...
- 04:45 PM Orchestrator Backport #64623 (New): squid: mgr/cephadm is not defining haproxy tcp healthchecks for Ganesha
- 04:45 PM Orchestrator Backport #64622 (Resolved): reef: mgr/cephadm is not defining haproxy tcp healthchecks for Ganesha
- https://github.com/ceph/ceph/pull/56101
- 04:45 PM Orchestrator Backport #64621 (New): squid: cephadm is not accounting for the memory required nvme gateways are...
- 04:44 PM Orchestrator Backport #64620 (Resolved): reef: cephadm is not accounting for the memory required nvme gateways...
- https://github.com/ceph/ceph/pull/56100
- 04:43 PM rgw Bug #62774: datalog crash consistency: missing objects on the secondary site after the multisite ...
- design doc: https://pad.ceph.com/p/Datalog_Crash_Consistency
- 04:43 PM Orchestrator Bug #64020 (Pending Backport): cephadm is not accounting for the memory required nvme gateways ar...
- 04:42 PM Orchestrator Bug #62638 (Pending Backport): mgr/cephadm is not defining haproxy tcp healthchecks for Ganesha
- 03:49 PM Dashboard Bug #62089 (Resolved): mgr/dashboard: TypeError: string indices must be integers
- 03:49 PM CephFS Backport #64619 (In Progress): quincy: mds: check the layout in Server::handle_client_mknod
- https://github.com/ceph/ceph/pull/56032
- 03:49 PM CephFS Backport #64618 (In Progress): reef: mds: check the layout in Server::handle_client_mknod
- https://github.com/ceph/ceph/pull/56031
- 03:49 PM CephFS Backport #64617 (In Progress): squid: mds: check the layout in Server::handle_client_mknod
- https://github.com/ceph/ceph/pull/56030
- 03:45 PM CephFS Bug #64061 (Pending Backport): mds: check the layout in Server::handle_client_mknod
- 03:40 PM CephFS Bug #64058 (Fix Under Review): qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- 03:40 PM CephFS Bug #64058: qa: Command failed (workunit test fs/snaps/untar_snap_rm.sh)
- Expanding this a bit:...
- 03:40 PM CephFS Bug #64290 (Closed): mds: erroneous "MDS abort because newly corrupt dentry to be committed" beca...
- I'm not sure why I forked #64058.
- 03:28 PM Orchestrator Bug #54436 (Closed): allow idmap overrides in nfs-ganesha configuration
- 03:28 PM Orchestrator Bug #54436 (Duplicate): allow idmap overrides in nfs-ganesha configuration
- 03:08 PM Orchestrator Feature #64577 (Fix Under Review): allow idmap overrides in nfs-ganesha configuration
- 02:43 PM rgw Bug #64571: lifecycle transition crashes since merge end-to-end tracing
- when testing the squid backport of end2end tracing https://github.com/ceph/ceph/pull/55625, I didn't see these crashe...
- 02:35 PM Bug #64612 (Duplicate): make check arm64: failing mempool.check_shard_select
- 02:35 PM Bug #64612 (Duplicate): make check arm64: failing mempool.check_shard_select
- 09:37 AM Bug #64612 (Duplicate): make check arm64: failing mempool.check_shard_select
- ...
- 02:26 PM Bug #64597: MDS Crashing Repeatedly in UP:Replay (Failed Assert)
- It looks like the journal integrity check is fine:...
- 02:02 PM CephFS Backport #64565 (In Progress): reef: Difference in error code returned while removing system xatt...
- 06:21 AM CephFS Backport #64565: reef: Difference in error code returned while removing system xattrs using remov...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/55803
ceph-backport.sh versi...
- 01:00 PM CephFS Bug #50719: xattr returning from the dead (sic!)
- We are currently working on recreating this issue internally as this was a customer cluster that was having the issue...
- 12:53 PM mgr Bug #49693: Manager daemon is unresponsive, replacing it with standby daemon
- I'm running a cluster on Ubuntu 20.04 with Quincy version 17.2.7, and I'm encountering the same issue. Once I acces...
- 11:38 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Started running into
> ceph: stderr Error: OCI runtime error: crun: bpf create ``: Invalid argument
again in
...
- 10:47 AM Bug #63557 (Fix Under Review): NVMe-oF gateway prometheus endpoints
- 10:14 AM Dashboard Bug #64614 (Pending Backport): mgr/dashboard: add snap schedule M and Y repeat frequencies to cre...
- h3. Description of problem
h3. Environment
* @ceph vers...
- 10:05 AM Dashboard Backport #64610 (In Progress): reef: mgr/dashboard: fix subvolume group edit
- 07:09 AM Dashboard Backport #64610 (Resolved): reef: mgr/dashboard: fix subvolume group edit
- https://github.com/ceph/ceph/pull/55811
- 10:03 AM Dashboard Backport #64609 (In Progress): squid: mgr/dashboard: fix subvolume group edit
- 07:09 AM Dashboard Backport #64609 (Resolved): squid: mgr/dashboard: fix subvolume group edit
- https://github.com/ceph/ceph/pull/55810
- 08:50 AM CephFS Bug #64611 (New): Inconsistent usage of the return codes in the MDS code base
- A Ceph cluster may comprise daemons running on different platforms with "incompatible numeric values of the errno de...
- 07:01 AM Dashboard Bug #64487 (Pending Backport): mgr/dashboard: fix subvolume group edit
- 06:23 AM CephFS Backport #64566: squid: Difference in error code returned while removing system xattrs using remo...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/55805
ceph-backport.sh versi...
- 06:17 AM CephFS Backport #64564: quincy: Difference in error code returned while removing system xattrs using rem...
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/55802
ceph-backport.sh versi...
- 05:50 AM Dashboard Backport #64608 (New): squid: mgr/dashboard: replace grafana piechart panel plugin with native gr...
- 05:43 AM Dashboard Cleanup #64579 (Pending Backport): mgr/dashboard: replace grafana piechart panel plugin with nati...
- 04:41 AM Linux kernel client Bug #64607 (Fix Under Review): ceph: fstest generic/580 test failure with infinitely loop
- This is reported by Luis, please see https://patchwork.kernel.org/project/ceph-devel/patch/20240125023920.1287555-4-x...
- 04:07 AM Dashboard Backport #64606 (New): squid: mgr/dashboard: dashboard thread abort
- 04:07 AM Dashboard Backport #64605 (New): quincy: mgr/dashboard: dashboard thread abort
- 04:06 AM Dashboard Backport #64604 (New): reef: mgr/dashboard: dashboard thread abort
- 04:05 AM Dashboard Bug #62972: ERROR: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)
- https://jenkins.ceph.com/job/ceph-api/69441/
- 04:03 AM Dashboard Bug #61844 (Pending Backport): mgr/dashboard: dashboard thread abort
- 02:03 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- https://github.com/athanatos/ceph/tree/sjust/wip-64546-max-creating-pgs
- 01:14 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- In the logs above, it seems that pools 39 (8 pgs), 40 (8 pgs), 41 (32 pgs), 42 (32 pgs), 43 (32 pgs), 44 (32 pgs), 46...
- 01:06 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- OSDMonitor::update_pending_pgs limits the number of pgs it's willing to process in each invocation -- mon_osd_max_cre...
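Assuming the truncated option is mon_osd_max_creating_pgs, a quick way to inspect and, for testing, raise the throttle (the value shown is illustrative, not a recommendation):
<pre>
ceph config get mon mon_osd_max_creating_pgs
ceph config set mon mon_osd_max_creating_pgs 2048   # test value only
</pre>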
- 12:27 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- DEBUG 2024-02-22 09:56:51,337 [shard 0:main] osd - client_request(id=2536, detail=m=[osd_op(client.4303.0:2 46.0 46.7...
- 12:23 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- 2024-02-22T09:56:52.307+0000 2b476bef4700 10 mon.a@0(leader).osd e16 update_pending_pgs pg 46.0 just added, up [2,0...
- 12:07 AM RADOS Bug #64471: osd: upgrades from v18.2.[01] to main fail with "heartbeat_check: no reply from"
- Patrick,
From console_logs/smithi196.log, the kernel just crashed when copying data from userspace:...
02/27/2024
- 11:24 PM rbd Bug #64574 (Fix Under Review): [test] cross-pollinate diff-continuous and compare-mirror-image te...
- 08:51 PM Bug #64597: MDS Crashing Repeatedly in UP:Replay (Failed Assert)
- The full assert section of the MDS logs shows this interesting line....
- 02:38 PM Bug #64597 (New): MDS Crashing Repeatedly in UP:Replay (Failed Assert)
- Came in after the weekend and found all our Active/Standby MDS crashed out. It seems to get past the journal re...
- 08:29 PM mgr Backport #62887 (New): pacific: [pg-autoscaler] Peformance issue with the autoscaler when we have...
- 07:23 PM mgr Backport #62887 (In Progress): pacific: [pg-autoscaler] Peformance issue with the autoscaler when...
- 08:27 PM rgw Feature #64251: allow AWS lifecycle event types to configure lifecycle notifications and Replicat...
- As part of this tracker, the lifecycle and ObjectSynced (multisite replication) events are made AWS compatible along w...
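A hedged sketch of subscribing a bucket to such events; the bucket, topic ARN, endpoint, and the exact event-type strings are assumptions for illustration and may differ once the AWS-compatible names land:
<pre>
cat > notif.json <<'EOF'
{"TopicConfigurations": [{"Id": "lc-events",
  "TopicArn": "arn:aws:sns:default::mytopic",
  "Events": ["s3:ObjectLifecycle:Expiration:*"]}]}
EOF
aws --endpoint-url http://rgw.example.com:8000 s3api \
    put-bucket-notification-configuration \
    --bucket mybucket --notification-configuration file://notif.json
</pre>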
- 07:22 PM Dashboard Backport #64593 (Rejected): quincy: mgr/dashboard: fix volume creation with multiple hosts
- No need to backport to quincy; the feature exists from reef onwards.
- 12:29 PM Dashboard Backport #64593 (Rejected): quincy: mgr/dashboard: fix volume creation with multiple hosts
- 07:18 PM Backport #64603 (Rejected): [Not a Bug Just Testing] testing1
- 07:12 PM Backport #64603 (New): [Not a Bug Just Testing] testing1
- 07:11 PM Backport #64603 (Pending Backport): [Not a Bug Just Testing] testing1
- 07:09 PM Backport #64603 (Rejected): [Not a Bug Just Testing] testing1
- testing the ceph-backport.sh script
- 06:33 PM CephFS Bug #64602 (Fix Under Review): tools/cephfs: cephfs-journal-tool does not recover dentries with a...
- 06:31 PM CephFS Bug #64602 (Fix Under Review): tools/cephfs: cephfs-journal-tool does not recover dentries with a...
- https://github.com/ceph/ceph/blob/4a1c26b52121803d1bd0f8c1c06eb856f2add307/src/tools/cephfs/JournalTool.cc#L867-L870
...
- 04:34 PM RADOS Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- Hi guys,
this bug came up a few weeks ago and I've asked one of the PR authors of the run I was reviewing to take ...
- 04:21 PM rgw Backport #64601 (In Progress): squid: unittest_rgw_dmclock_scheduler fails for arm64
- 04:15 PM rgw Backport #64601 (Resolved): squid: unittest_rgw_dmclock_scheduler fails for arm64
- https://github.com/ceph/ceph/pull/55791
- 04:21 PM rgw Backport #64600 (In Progress): reef: unittest_rgw_dmclock_scheduler fails for arm64
- 04:15 PM rgw Backport #64600 (In Progress): reef: unittest_rgw_dmclock_scheduler fails for arm64
- https://github.com/ceph/ceph/pull/55790
- 04:20 PM rgw Backport #64599 (In Progress): quincy: unittest_rgw_dmclock_scheduler fails for arm64
- 04:15 PM rgw Backport #64599 (In Progress): quincy: unittest_rgw_dmclock_scheduler fails for arm64
- https://github.com/ceph/ceph/pull/55789
- 04:06 PM rgw Bug #64568 (Pending Backport): unittest_rgw_dmclock_scheduler fails for arm64
- 03:28 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- Nicolas Dandrimont wrote:
> I believe I understand what might have gone wrong though: One of our benchmarking script...
- 02:01 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- Hi!
Radoslaw Zarzynski wrote:
> Loic, is there an object store of one of those dead OSDs available for investigat...
- 03:23 PM ceph-volume Bug #64560 (Fix Under Review): ceph-volume: when create osd, vgcreate stderr failed to find PV
- 02:47 PM Dashboard Bug #64242: mgr/dashboard: typo: upgrade page 'Failed to fetch informations' should be information
- Hey,
This was the part of Paul's feedback here: https://issues.redhat.com/browse/RHCSDASH-1264
Maybe we can combin...
- 10:33 AM Dashboard Bug #64242: mgr/dashboard: typo: upgrade page 'Failed to fetch informations' should be information
- Just for completeness' sake, there's another line containing "informations":
https://github.com/ceph/ceph/blob/v18....
- 02:41 PM rgw Bug #64598 (Triaged): Radosgw Instance ID Mismatch between metadata counters and RGW exporter met...
- Currently, metrics consumed by Prometheus related to the RGW are being generated by combining two parts:
1. The RGW ...
- 02:17 PM CephFS Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- Leonid Usov wrote:
> >> If that is the case, we could cope with a background script config
> > Didn't understand th...
- 02:10 PM CephFS Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- > then we didn't even need a dedicated thrasher script, just a yaml config file with the shell loop that issues the qu...
- 02:05 PM Orchestrator Documentation #64596 (Pending Backport): secure monitoring stack support is not documented
- Secure monitoring stack support, introduced mainly by PR https://github.com/ceph/ceph/pull/46601,
is not documented.
- 12:42 PM Dashboard Backport #64595 (In Progress): reef: mgr/dashboard: fix volume creation with multiple hosts
- 12:29 PM Dashboard Backport #64595 (In Progress): reef: mgr/dashboard: fix volume creation with multiple hosts
- https://github.com/ceph/ceph/pull/55786
- 12:41 PM Dashboard Backport #64594 (In Progress): squid: mgr/dashboard: fix volume creation with multiple hosts
- 12:29 PM Dashboard Backport #64594 (Resolved): squid: mgr/dashboard: fix volume creation with multiple hosts
- https://github.com/ceph/ceph/pull/55785
- 12:33 PM Dashboard Bug #64559: mgr/dashboard: fix volume creation with multiple hosts
- Wrong Pull request ID - updated it
- 12:27 PM Dashboard Bug #64559 (Pending Backport): mgr/dashboard: fix volume creation with multiple hosts
- 12:29 PM RADOS Bug #64504: aio ops queued but never executed
- So far what I see is that the client op didn't get scheduled at all on osd.1 due to a continuous stream of higher priori...
- 10:39 AM Dashboard Backport #64529 (Resolved): squid: mgr/dashboard: TypeError: string indices must be integers
- 10:39 AM Dashboard Backport #64529 (Resolved): squid: mgr/dashboard: TypeError: string indices must be integers
- 10:14 AM CephFS Backport #64204 (In Progress): quincy: task/test_nfs: AttributeError: 'TestNFS' object has no att...
- 10:07 AM CephFS Backport #64205 (In Progress): reef: task/test_nfs: AttributeError: 'TestNFS' object has no attri...
- 09:56 AM CephFS Backport #64205: reef: task/test_nfs: AttributeError: 'TestNFS' object has no attribute 'run_ceph...
Adam, I'm taking this one and the other release backports for this change.
- 09:58 AM bluestore Backport #63914 (In Progress): quincy: Some of ObjectStore/*Deferred* test cases are failing with...
- 09:58 AM bluestore Backport #63914 (In Progress): quincy: Some of ObjectStore/*Deferred* test cases are failing with...
- https://github.com/ceph/ceph/pull/55779
- 09:55 AM bluestore Backport #63913 (In Progress): reef: Some of ObjectStore/*Deferred* test cases are failing with b...
- https://github.com/ceph/ceph/pull/55778
- 09:47 AM bluestore Backport #64091 (In Progress): reef: ceph-bluestore-tool bluefs-bdev-expand doesn't adjust OSD fr...
- https://github.com/ceph/ceph/pull/55777
- 09:45 AM bluestore Backport #64092 (In Progress): quincy: ceph-bluestore-tool bluefs-bdev-expand doesn't adjust OSD ...
- https://github.com/ceph/ceph/pull/55776
- 09:42 AM bluestore Backport #64592 (New): quincy: BlueFS: l_bluefs_log_compactions is counted twice in sync log comp...
- 09:42 AM bluestore Backport #64591 (New): squid: BlueFS: l_bluefs_log_compactions is counted twice in sync log compa...
- 09:42 AM bluestore Backport #64590 (New): reef: BlueFS: l_bluefs_log_compactions is counted twice in sync log compac...
- 09:40 AM bluestore Bug #64533 (Pending Backport): BlueFS: l_bluefs_log_compactions is counted twice in sync log comp...
- 09:34 AM bluestore Backport #64115 (In Progress): quincy: ObjectStore/StoreTest.SimpleCloneTest/2 times out from an ...
- https://github.com/ceph/ceph/pull/55775
- 09:32 AM bluestore Backport #64116 (In Progress): reef: ObjectStore/StoreTest.SimpleCloneTest/2 times out from an ab...
- https://github.com/ceph/ceph/pull/55774
- 09:28 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Trying my luck with this today - hopefully no infra issues show up.
- 08:49 AM crimson Bug #64587: seastar reactor_backend.cc compile error: no member named 'features' in 'io_uring'
- That is because seastar searches build/src/liburing/src/include/liburing.h for the include file, and that is a very old libur...
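A quick way to check which liburing.h the build actually picks up and whether it declares the field (the grep pattern is illustrative):
<pre>
# the bundled copy seastar finds first (path from the comment above):
grep -n 'features' build/src/liburing/src/include/liburing.h | head
# compare with the system copy, if one is installed:
grep -n 'features' /usr/include/liburing.h | head
</pre>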
- 07:18 AM crimson Bug #64587 (Fix Under Review): seastar reactor_backend.cc compile error: no member named 'feature...
- 06:57 AM crimson Bug #64587 (Resolved): seastar reactor_backend.cc compile error: no member named 'features' in 'i...
- From: https://jenkins.ceph.com/job/ceph-pull-requests/129984/console...
- 07:51 AM crimson Bug #64589: seastar prometheus.cc compile error: call to member function 'Set' is ambiguous
- the latest seastar has fixed this, see https://github.com/scylladb/seastar/pull/2112; update the seastar version?
- 07:45 AM crimson Bug #64589 (Resolved): seastar prometheus.cc compile error: call to member function 'Set' is ambi...
- From: https://jenkins.ceph.com/job/ceph-pull-requests/129982/consoleFull#1956812383e4bfde06-44c1-4b3c-8379-d9ee175fb2...
- 07:43 AM Linux kernel client Bug #64172 (Fix Under Review): Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFs...
- Updated the kclient patch series to fix it: https://patchwork.kernel.org/project/ceph-devel/list/?series=830176&archi...
- 07:21 AM Dashboard Bug #64588 (Fix Under Review): mgr/dashboard: rgw roles page broken with items don't have permiss...
- 07:21 AM Dashboard Bug #64588 (Fix Under Review): mgr/dashboard: rgw roles page broken with items don't have permiss...
- 07:18 AM Dashboard Bug #64588 (Resolved): mgr/dashboard: rgw roles page broken with items don't have permission poli...
- simply create an rgw role from the UI and see that the list page is broken with an exception in the backend...
- 05:58 AM CephFS Backport #64586 (In Progress): quincy: crash: void Locker::handle_file_lock(ScatterLock*, ceph::c...
- 05:58 AM CephFS Backport #64586 (In Progress): quincy: crash: void Locker::handle_file_lock(ScatterLock*, ceph::c...
- https://github.com/ceph/ceph/pull/56050
- 05:58 AM CephFS Backport #64585 (In Progress): squid: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cr...
- https://github.com/ceph/ceph/pull/56051
- 05:58 AM CephFS Backport #64584 (In Progress): reef: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cre...
- https://github.com/ceph/ceph/pull/56049
- 05:55 AM CephFS Bug #62077 (Fix Under Review): mgr/nfs: validate path when modifying cephfs export
- 05:51 AM CephFS Bug #54833 (Pending Backport): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<ML...
- 05:51 AM CephFS Backport #64583 (In Progress): squid: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FU...
- https://github.com/ceph/ceph/pull/55830
- 05:51 AM CephFS Backport #64582 (In Progress): reef: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FUL...
- https://github.com/ceph/ceph/pull/55829
- 05:51 AM CephFS Backport #64581 (In Progress): quincy: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_F...
- https://github.com/ceph/ceph/pull/55828
- 05:50 AM Documentation #64580: Obsolete/Broken instructions for joining the CEPH development mailing list
- Apologies for adding an "Affected Versions" field to the issue. I wasn't able to figure out how to unselect it.
- 05:48 AM Documentation #64580 (New): Obsolete/Broken instructions for joining the CEPH development mailing list
- 05:48 AM Documentation #64580 (New): Obsolete/Broken instructions for joining the CEPH development mailing...
- This is regarding instructions on joining the CEPH Dev mailing list here, https://docs.ceph.com/en/latest/dev/develop...
- 05:48 AM CephFS Bug #63132 (Pending Backport): qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- 05:24 AM Dashboard Cleanup #64579 (Pending Backport): mgr/dashboard: replace grafana piechart panel plugin with nati...
- Since the grafana piechart panel plugin is deprecated in the latest grafana versions, the existing panels should be r...
- 03:35 AM nvme-of Feature #64578 (New): Add a top tool to the nvmeof CLI to support troubleshooting
- 03:35 AM nvme-of Feature #64578 (New): Add a top tool to the nvmeof CLI to support troubleshooting
- By adding a top subcommand the admin should be able to understand the performance of the gateway from reactor CPU to ...
- 01:02 AM RADOS Bug #64194 (Duplicate): make check(arm64): unittest_rgw_dmclock_scheduler Failed
02/26/2024
- 10:04 PM Orchestrator Feature #64577 (Pending Backport): allow idmap overrides in nfs-ganesha configuration
- idmapd.conf allows controlling the NFSv4.x server side id mapping settings such as adding a "Domain" or setting the i...
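For reference, the kind of override the feature would need to expose; the domain value is an example, not a default:
<pre>
# typical idmapd.conf fragment an operator may want to inject
[General]
Domain = example.com
</pre>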
- 08:14 PM rgw Bug #64571: lifecycle transition crashes since merge end-to-end tracing
- Wasn't able to reproduce under vstart; tested both with a debug build and a release build.
required extra storage-c...
- 03:58 PM rgw Bug #64571 (New): lifecycle transition crashes since merge end-to-end tracing
- regression from https://github.com/ceph/ceph/pull/52114, whose test results included these failures https://pulpito.c...
- 07:32 PM RADOS Bug #54182: OSD_TOO_MANY_REPAIRS cannot be cleared in >=Octopus
- Bump up.
- 07:30 PM RADOS Bug #64347: src/osd/PG.cc: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps)
- Bump up – needs qa.
- 07:24 PM RADOS Bug #64438: NeoRadosWatchNotify.WatchNotifyTimeout times out along with FAILED ceph_assert(op->se...
- Continuing to look at this bug.
- 07:20 PM RADOS Bug #64258 (In Progress): osd/PrimaryLogPG.cc: FAILED ceph_assert(inserted)
- Moving back to _in progress_ per https://github.com/ceph/ceph/pull/55410#issuecomment-1945423142.
- 07:14 PM RADOS Bug #64460: rados/upgrade/parallel: "[WRN] MON_DOWN: 1/3 mons down, quorum a,b" in cluster log
- This needs a whitelist PR. Discussed in bug scrub.
- 07:09 PM RADOS Bug #64471: osd: upgrades from v18.2.[01] to main fail with "heartbeat_check: no reply from"
- I think we were investigating it together with Patrick on @CEPH-RADOS@. The finding was that the entire node was unre...
- 07:07 PM RADOS Bug #64437: qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13
- Bump up.
- 07:06 PM RADOS Bug #53342 (Resolved): Exiting scrub checking -- not all pgs scrubbed
- Ronen, do we need any backporting?
- 07:05 PM RADOS Bug #61385: TEST_dump_scrub_schedule fails from "key is query_active: negation:0 # expected: true...
- In the previous tracker, the error message was "expected: false, in actual: true". This one is "expected: true, in ac...
- 07:02 PM RADOS Bug #64504: aio ops queued but never executed
- Asked Sridhar to judge whether it's dmclock-related.
- 06:58 PM RADOS Bug #62777: rados/valgrind-leaks: expected valgrind issues and found none
- Let's watch to see if this is fixed by https://github.com/ceph/ceph/pull/52639.
- 06:55 PM RADOS Bug #64514: LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- Hmm, it seems to happen *before* the scrub part:...
- 06:50 PM RADOS Bug #64558 (Fix Under Review): librados: use CEPH_OSD_FLAG_FULL_FORCE for IoCtxImpl::remove
- 02:43 AM RADOS Bug #64558 (Fix Under Review): librados: use CEPH_OSD_FLAG_FULL_FORCE for IoCtxImpl::remove
- librados::OPERATION_FULL_FORCE should be translated to CEPH_OSD_FLAG_FULL_FORCE before calling IoCtxImpl::remove().
...
- 06:46 PM RADOS Backport #64576 (New): quincy: Incorrect behavior on combined cmpext+write ops in the face of ses...
- 06:46 PM RADOS Backport #64575 (New): reef: Incorrect behavior on combined cmpext+write ops in the face of sessi...
- 06:44 PM RADOS Bug #64314: cluster log: Cluster log level string representation missing in the cluster logs.
- Bump up. Already in the QA.
- 06:42 PM RADOS Bug #52657: MOSDPGLog::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_NAUTILUS)'
- Bump up.
- 06:40 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- bump up
- 06:40 PM RADOS Bug #64192 (Pending Backport): Incorrect behavior on combined cmpext+write ops in the face of ses...
- 05:17 PM rbd Bug #64574 (Pending Backport): [test] cross-pollinate diff-continuous and compare-mirror-image te...
- In order to expand coverage:
- add compare_mirror_image_alternate_primary.sh workunit to krbd suite to run against...
- 05:04 PM RADOS Fix #64573 (Fix Under Review): singleton/ec-inconsistent-hinfo.yaml: Include a possible benign cl...
- 04:37 PM RADOS Fix #64573 (Pending Backport): singleton/ec-inconsistent-hinfo.yaml: Include a possible benign cl...
- The changes introduced as part of PR: https://github.com/ceph/ceph/pull/53524
made the randomized values of osd_op_q...
- 04:49 PM rbd Backport #64555 (In Progress): quincy: [test][krbd] volume data corruption when using rbd-mirror ...
- 04:48 PM rbd Backport #64554 (In Progress): reef: [test][krbd] volume data corruption when using rbd-mirror w/...
- 04:47 PM rbd Backport #64553 (In Progress): squid: [test][krbd] volume data corruption when using rbd-mirror w...
- 04:10 PM CephFS Bug #64572 (Fix Under Review): workunits/fsx.sh failure
- https://pulpito.ceph.com/vshankar-2024-02-26_05:44:42-fs:workload-wip-vshankar-testing-20240216.060239-testing-defaul...
- 03:53 PM Orchestrator Support #64570 (New): prometheus couldnt start daemon
- I need some help, and I don't know if this is a bug.
I'm using ceph v18.2.1 with 3 bare-metal nodes.
I was deploying cep...
- 03:36 PM rgw Feature #64569 (New): sns: implement Subscribe/Unsubscribe instead of configuring endpoints in To...
- 02:51 PM CephFS Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- >> If that is the case, we could cope with a background script config
> Didn't understand this question.
I meant ...
- 02:48 PM CephFS Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- Leonid Usov wrote:
> @Patrick, I have several discussion points wrt the approach
>
> 1. Should we add more client...
- 02:32 PM CephFS Tasks #63669: qa: add teuthology tests for quiescing a group of subvolumes
- @Patrick, I have several discussion points wrt the approach
1. Should we add more clients and/or more mountpoints ...
- 02:48 PM rgw Bug #64568 (Fix Under Review): unittest_rgw_dmclock_scheduler fails for arm64
- Can't test on arm, but I was able to reproduce the failures by running the unittest under valgrind. The 1ms sleeps were j...
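The repro is easy to retry; a sketch, assuming a local build tree for the binary path:
<pre>
# valgrind's slowdown stretches the 1ms sleeps enough to expose the timing assumption
cd build && valgrind --error-exitcode=1 ./bin/unittest_rgw_dmclock_scheduler
</pre>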
- 02:01 PM rgw Bug #64568 (Pending Backport): unittest_rgw_dmclock_scheduler fails for arm64
- ...
- 02:36 PM mgr Feature #64318: mgr/prometheus add support for TLS and client cert authentication
- Christian Rohmann wrote:
> Redouane Kachach Elhichou wrote:
> > > But is there any TLS added? Looking at https://do...
- 01:15 PM mgr Feature #64318: mgr/prometheus add support for TLS and client cert authentication
- Redouane Kachach Elhichou wrote:
> But is there any TLS added? Looking at https://docs.ceph.com/en/reef/mgr/dashbo...
- 01:00 PM mgr Feature #64318: mgr/prometheus add support for TLS and client cert authentication
- Christian Rohmann wrote:
> Redouane Kachach Elhichou wrote:
> > cephadm has already support to enable security acro...
- 02:30 PM Bug #63824 (Duplicate): "ceph orch ls" fail with exception KeyError: 'exporter'
- 02:27 PM Orchestrator Bug #63805 (Duplicate): "ceph orch ls" fail with exception KeyError: 'ceph-exporter'
- https://tracker.ceph.com/issues/63123
- 02:26 PM Orchestrator Bug #63805 (Resolved): "ceph orch ls" fail with exception KeyError: 'ceph-exporter'
- Resolved by the PR: https://github.com/ceph/ceph/pull/53910
- 02:19 PM RADOS Bug #64562: Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
- .
- 02:18 PM RADOS Bug #64562: Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
- No problem with the rename Igor, thank you!
Igor Fedotov wrote:
> Hi Paolo,
> mind me renaming the ticket to som...
- 02:16 PM RADOS Bug #64562: Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
- Igor Fedotov wrote:
> Hi Paolo,
> mind me renaming the ticket to something like "Occasional segmentation faults in ...
- 01:27 PM RADOS Bug #64562: Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
- Hi Paolo,
mind me renaming the ticket to something like "Occasional segmentation faults in ScrubQueue::collect_ripe_...
- 10:07 AM RADOS Bug #64562 (New): Occasional segmentation faults in ScrubQueue::collect_ripe_jobs
- Hello!
Igor Fedotov suggested that I open a new ticket under the RADOS subproject; the original ticket is [[https://tra...
- 02:17 PM CephFS Bug #61009: crash: void interval_set<T, C>::erase(T, T, std::function<bool(T, T)>) [with T = inod...
- @here - curious if https://tracker.ceph.com/issues/60241 is actually related to this ticket. The former has got compl...
- 01:59 PM CephFS Bug #64563 (Triaged): mds: enhance laggy clients detections due to laggy OSDs
- 01:58 PM CephFS Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
- Dhairya,
Before we go into improving the lagginess detection infrastructure, let's verify if there isn't a (corner...
- 11:52 AM CephFS Bug #64563 (Triaged): mds: enhance laggy clients detections due to laggy OSDs
- Right now the code happily accepts that if there is any laggy OSD and a client got laggy then it must be due to the O...
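A sketch of the inference in question (hypothetical names, not the actual MDS code), plus the kind of tightened check the ticket is asking to consider:

    # Hypothetical names; models the current behavior described above:
    # any laggy OSD at all is taken to explain a laggy client.
    def blame_osds(client_is_laggy: bool, laggy_osds: set) -> bool:
        return client_is_laggy and bool(laggy_osds)

    # A tightened variant would tie the client's lagginess to the OSDs
    # it is actually waiting on (placeholder logic, for illustration):
    def blame_osds_checked(client_is_laggy: bool, laggy_osds: set,
                           osds_client_waits_on: set) -> bool:
        return client_is_laggy and bool(laggy_osds & osds_client_waits_on)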
- 01:47 PM bluestore Bug #64567 (New): Expanding main device might cause leaked extents.
- This happens when the freelist manager is in bitmap (not-null) mode and the original block device size is not aligned with the...
- 01:39 PM CephFS Feature #63936 (Closed): client, libcephfs: enable sparse read capability in libcephfs I/O code p...
- Closed due to the fixes not being targeted anytime soon.
- 01:31 PM CephFS Backport #64566 (New): squid: Difference in error code returned while removing system xattrs usin...
- 01:31 PM CephFS Backport #64565 (In Progress): reef: Difference in error code returned while removing system xatt...
- https://github.com/ceph/ceph/pull/55803
- 01:31 PM CephFS Backport #64564 (New): quincy: Difference in error code returned while removing system xattrs usi...
- 01:30 PM CephFS Bug #64542 (Pending Backport): Difference in error code returned while removing system xattrs usi...
- 01:14 PM Orchestrator Cleanup #61408 (Resolved): Improve reliability and organization around cephadm deployment
- 01:14 PM Orchestrator Backport #61806 (Resolved): reef: Improve reliability and organization around cephadm deployment
- 01:12 PM Orchestrator Bug #59254 (Resolved): cephadm: misleading error message when trying to add host missing using a ...
- 01:11 PM Orchestrator Backport #61538 (Resolved): quincy: cephadm: misleading error message when trying to add host mis...
- 01:10 PM Orchestrator Bug #63338 (Closed): rook crash when accessing pvs_in_sc field
- 01:08 PM Orchestrator Bug #64516 (Resolved): 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 01:08 PM Orchestrator Backport #64521 (Resolved): reef: 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 01:08 PM Orchestrator Backport #64521: reef: 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- https://github.com/ceph/ceph/pull/55706
- 01:07 PM Orchestrator Backport #64522 (Resolved): squid: 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 01:07 PM Orchestrator Backport #64522: squid: 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- https://github.com/ceph/ceph/pull/55707
- 01:05 PM Orchestrator Bug #64211 (Resolved): drivegroup specific code is still available on rook orch
- 01:05 PM Orchestrator Backport #64523 (Resolved): squid: drivegroup specific code is still available on rook orch
- 01:05 PM Orchestrator Backport #64523: squid: drivegroup specific code is still available on rook orch
- https://github.com/ceph/ceph/pull/55707
- 01:04 PM Orchestrator Backport #64520 (Resolved): reef: drivegroup specific code is still available on rook orch
- 01:04 PM Orchestrator Backport #64520: reef: drivegroup specific code is still available on rook orch
- https://github.com/ceph/ceph/pull/55706
- 12:53 PM CephFS Bug #64008: mds: CInode::item_caps used in two different lists
- It seems like only the MDS code is using elist; why not just switch to using xlist? That way we completely avoid t...
- 12:37 PM CephFS Bug #64486 (In Progress): qa: enhance labeled perf counters test for cephfs-mirror
- 12:27 PM CephFS Bug #61182 (Resolved): qa: workloads/cephfs-mirror-ha-workunit - stopping mirror daemon after the...
- 12:26 PM CephFS Backport #62176 (Resolved): quincy: qa: workloads/cephfs-mirror-ha-workunit - stopping mirror dae...
- 11:23 AM CephFS Bug #62925 (Fix Under Review): cephfs-journal-tool: Add preventive measures in the tool to avoid ...
- 10:17 AM ceph-volume Bug #64561: ceph-volume in containerized environment cannot find the correct osd directory
- This OSD was set up as "unmanaged" in an older Ceph version, where the layout looks like this: block - hdd, db - nvme0/...
- 09:42 AM ceph-volume Bug #64561 (New): ceph-volume in containerized environment cannot find the correct osd directory
- When trying to perform a 'ceph-volume lvm migrate --osd-id 88 --osd-fsid <fsid> --from-db --target vg-nvme1n1/lv-3500...
- 10:13 AM Orchestrator Bug #58920: logrotate - delaycompress and duplicate entry errors
- I cannot comment on installations without containers at the moment.
In the case of a containerized installation, @ceph.au...
- 09:35 AM Orchestrator Bug #58920: logrotate - delaycompress and duplicate entry errors
- I think the pattern ceph-*.log is not enough.
there are not only ceph-<daemon>.log files but also the ceph.log and ...
- 09:55 AM Linux kernel client Bug #64172 (In Progress): Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAutho...
- 09:40 AM Dashboard Backport #64528 (Resolved): reef: mgr/dashboard: TypeError: string indices must be integers
- 09:37 AM ceph-volume Bug #64560 (Fix Under Review): ceph-volume: when create osd, vgcreate stderr failed to find PV
- When I create an OSD, I get an error message in the attachment:
# ceph orch daemon add osd ceph02:/dev/sdl
bin/...
- 07:36 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- oh well, infra issues now :/...
- 04:41 AM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- I am working on this. The plan is to gdb the ceph-fuse process after fusermount. Will update by EOD today.
- 06:57 AM Dashboard Bug #64559 (Pending Backport): mgr/dashboard: fix volume creation with multiple hosts
- An error occurs when creating a CephFS volume from the Dashboard with multiple hosts
- 05:54 AM Orchestrator Bug #64482: ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
- Ilya has a change in ceph-build[*] which I'm putting to test today. Will report when done.
[*]: https://github.com...
- 05:24 AM CephFS Bug #53192: High cephfs MDS latency and CPU load with snapshots and unlink operations
- Hi Andras,
Nice to hear from you in the User/Dev monthly meet-up. You had a question related to what exactly is Sn...
- 04:48 AM CephFS Support #64442: Ceph stripe parallel write
- Hi Nishit,
Nishit Khosla wrote:
> Hello,
>
> We are trying to do performance troubleshooting for cephfs and ex...
- 02:37 AM CephFS Backport #64222 (In Progress): reef: Test failure: test_filesystem_sync_stuck_for_around_5s (task...
- 02:34 AM CephFS Backport #64221 (In Progress): quincy: Test failure: test_filesystem_sync_stuck_for_around_5s (ta...
- 02:28 AM CephFS Backport #64076 (In Progress): quincy: testing: Test failure: test_snapshot_remove (tasks.cephfs....
- 02:25 AM CephFS Backport #64075 (In Progress): reef: testing: Test failure: test_snapshot_remove (tasks.cephfs.te...
- 02:19 AM CephFS Backport #64043 (In Progress): quincy: mds: use explicitly sized types for network and disk encoding
- 02:19 AM CephFS Backport #64045 (In Progress): reef: mds: use explicitly sized types for network and disk encoding
- 01:49 AM Linux kernel client Documentation #62837: Add support for read_from_replica=localize for cephfs similar to krbd
- Documentation update here: https://github.com/ceph/ceph/pull/55683
02/25/2024
- 05:16 AM rgw Bug #64557 (New): Error in rgw python bindings
- Using the python binding for rgw fails with this error:...
- 04:37 AM crimson Bug #64556 (New): crimson osd crashes when got an empty omap header
- ...
02/24/2024
- 06:00 AM crimson Bug #64546: client io requests hang when issued before the creation of the related pgs
- osd.0 crash is unrelated -- should be fixed by https://github.com/ceph/ceph/pull/55705
02/23/2024
- 11:13 PM Bug #64544: Scrub stuck and 'pg has invalid (post-split) stat'
- Among all the actions taken, the last attempt, which fixed the issue, was to move the cache pool to a dedicated OSD (whic...
- 12:20 AM Bug #64544 (New): Scrub stuck and 'pg has invalid (post-split) stat'
- Following an upgrade from Nautilus (14.2.22) to Pacific (16.2.13) with ceph-ansible, we
encounter an issue with a ca... - 10:51 PM rbd Backport #64555 (Resolved): quincy: [test][krbd] volume data corruption when using rbd-mirror w/f...
- https://github.com/ceph/ceph/pull/55763
- 10:51 PM rbd Backport #64554 (In Progress): reef: [test][krbd] volume data corruption when using rbd-mirror w/...
- https://github.com/ceph/ceph/pull/55762
- 10:51 PM rbd Backport #64553 (Resolved): squid: [test][krbd] volume data corruption when using rbd-mirror w/fa...
- https://github.com/ceph/ceph/pull/55761
- 10:47 PM rbd Bug #61617 (Pending Backport): [test][krbd] volume data corruption when using rbd-mirror w/failover
- 06:16 PM rgw Bug #64543: ceph_test_librgw_file_nfsns crashes on infinite loop
- squid backport is included in https://github.com/ceph/ceph/pull/55625
- 06:16 PM rgw Bug #64543 (Resolved): ceph_test_librgw_file_nfsns crashes on infinite loop
- 06:11 PM rgw Bug #64543 (Pending Backport): ceph_test_librgw_file_nfsns crashes on infinite loop
- 04:07 PM rgw Backport #64552 (New): squid: rgw/multisite: objects named "." or ".." are not replicated
- 04:07 PM rgw Backport #64551 (New): reef: rgw/multisite: objects named "." or ".." are not replicated
- 04:07 PM rgw Backport #64550 (New): quincy: rgw/multisite: objects named "." or ".." are not replicated
- 04:01 PM rgw Bug #64366 (Pending Backport): rgw/multisite: objects named "." or ".." are not replicated
- 03:47 PM rgw Support #64547: List topic
- Just FYI, you might also need to recreate the topic if the user info is not stored on the topic.
The PR to store user i...
- 08:43 AM rgw Support #64547: List topic
- We are facing a problem with the topic operations used to send notifications, particularly when using the amqp protocol.
...
- 08:36 AM rgw Support #64547 (New): List topic
- 02:58 PM Bug #61589: 16.2.13: ERROR:ceph-crash:directory /var/lib/ceph/crash/posted does not exist; please...
- I just ran into a similar issue with the Debian packaged Ceph: https://tracker.ceph.com/issues/64548
(/var/lib/ceph/...
- 02:23 PM rgw Bug #64549 (Fix Under Review): uncaught exception from AWSv4ComplMulti during java AWS4Test.testM...
- 02:22 PM rgw Bug #64549: uncaught exception from AWSv4ComplMulti during java AWS4Test.testMultipartUploadWithP...
- ...
- 01:48 PM rgw Bug #64549 (Pending Backport): uncaught exception from AWSv4ComplMulti during java AWS4Test.testM...
- from http://qa-proxy.ceph.com/teuthology/cbodley-2024-02-23_03:36:00-rgw-wip-cbodley-testing-distro-default-smithi/75...
- 01:18 PM Bug #64548 (Fix Under Review): ceph-base: /var/lib/ceph/crash/posted not chowned to ceph:ceph cau...
- The Debian package ceph-base postinst applies some chown to ceph:ceph in @https://github.com/ceph/ceph/blob/87f6091b9...
- 05:55 AM crimson Bug #64546 (New): client io requests hang when issued before the creation of the related pgs
- The issue is as follows:
1. Pool X is created at osdmap epoch 16
2. The monitor sends out pg_create messages to o...
- 05:34 AM crimson Bug #64545 (New): crimson: OrderedConcurrentPhase::ExitBarrier::exit() does not guarantee that p...
- ...
02/22/2024
- 11:03 PM Bug #64446: Backport PR#55540 to Squid (and only Squid) when its commits are merged to main
- Ilya requested a Reef backport. Ilya gets a Reef backport.
https://github.com/ceph/ceph/pull/55723 - Squid backpor...
- 10:51 PM rgw Bug #64543: ceph_test_librgw_file_nfsns crashes on infinite loop
- touché
- 10:38 PM rgw Bug #64543: ceph_test_librgw_file_nfsns crashes on infinite loop
- see how useful librgw is? :)
- 10:01 PM rgw Bug #64543 (Fix Under Review): ceph_test_librgw_file_nfsns crashes on infinite loop
- 09:57 PM rgw Bug #64543 (Resolved): ceph_test_librgw_file_nfsns crashes on infinite loop
- examples from https://pulpito.ceph.com/cbodley-2024-02-21_22:01:37-rgw-wip-rgw-account-v3-distro-default-smithi/
<...
- 08:33 PM rgw Feature #50078: [RFE] multisite: Bucket notification information should be shared between zones.
- already merged:
https://github.com/ceph/ceph/pull/55688 test/rgw/notifications: split tests between basic, kafka and...
- 06:08 PM CephFS Bug #64542 (Pending Backport): Difference in error code returned while removing system xattrs usi...
- During a removexattr() operation for xattrs in the "system." namespace, the kernel client returns ENOTSUP in an early stag...
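A minimal way to observe the returned errno from different clients (the path and xattr name below are placeholders, assuming a mounted CephFS):

    # Placeholder path and xattr name; run against a kernel mount and a
    # FUSE mount and compare the errno each client returns.
    import errno, os

    try:
        os.removexattr("/mnt/cephfs/testfile", "system.somexattr")
    except OSError as e:
        print(errno.errorcode.get(e.errno, e.errno))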
- 06:02 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- Steve Taylor wrote:
> Steve Taylor wrote:
> > J. Eric Ivancich wrote:
> > > J. Eric Ivancich wrote:
> > > > Steve...
- 05:56 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- Steve Taylor wrote:
> J. Eric Ivancich wrote:
> > J. Eric Ivancich wrote:
> > > Steve Taylor wrote:
> > > > After...
- 05:53 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- J. Eric Ivancich wrote:
> J. Eric Ivancich wrote:
> > Steve Taylor wrote:
> > > After some additional testing, it ...
- 05:51 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- J. Eric Ivancich wrote:
> Steve Taylor wrote:
> > After some additional testing, it doesn't look like the RGWBucket...
- 05:27 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- Steve Taylor wrote:
> After some additional testing, it doesn't look like the RGWBucketInfo whose attempted retrieva...
- 05:47 PM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Also affects pacific and reef v18.2.0 (possibly v18.2.1 too):
https://pulpito.ceph.com/yuriw-2024-02-21_23:06:32-f...
- 05:40 PM CephFS Tasks #63708 (Fix Under Review): mds: MDS message transport for inter-rank QuiesceDbManager commu...
- 05:21 PM RADOS Feature #56956: osdc: Add objecter fastfail
- > There is no point in indefinitely waiting when pg of an object is inactive.
This is not correct for CephFS or RB...
- 05:16 PM rgw Backport #64539 (In Progress): quincy: metadata cache races on deletes
- 04:38 PM rgw Backport #64539 (In Progress): quincy: metadata cache races on deletes
- https://github.com/ceph/ceph/pull/55718
- 04:41 PM rgw Backport #64540 (In Progress): reef: metadata cache races on deletes
- 04:38 PM rgw Backport #64540 (In Progress): reef: metadata cache races on deletes
- https://github.com/ceph/ceph/pull/55716
- 04:40 PM rgw Backport #64541 (In Progress): squid: metadata cache races on deletes
- 04:38 PM rgw Backport #64541 (Resolved): squid: metadata cache races on deletes
- https://github.com/ceph/ceph/pull/55715
- 04:37 PM rgw Bug #64480 (Pending Backport): metadata cache races on deletes
- 04:01 PM rgw Bug #64527: Radosgw 504 timeouts & Garbage collection is frozen
- Thanks Michael
This is unfortunately common for librados clients like radosgw when pgs are down or unresponsive. t...
- 02:32 AM rgw Bug #64527: Radosgw 504 timeouts & Garbage collection is frozen
- Ouch, syntax errors... Sorry, I expected markdown to work here.
- 02:30 AM rgw Bug #64527 (New): Radosgw 504 timeouts & Garbage collection is frozen
- Ceph version: 17.2.6-1
OS: Ubuntu 20.04
Deployed without Cephadm using SystemD.
Cluster specs:...
- 03:36 PM mgr Feature #64318: mgr/prometheus add support for TLS and client cert authentication
- Redouane Kachach Elhichou wrote:
> cephadm already has support to enable security across the whole monitoring stack (i...
- 02:12 PM mgr Feature #64318: mgr/prometheus add support for TLS and client cert authentication
- cephadm already has support to enable security across the whole monitoring stack (including all the components). The ...
- 03:11 PM rgw Bug #63177: RGW user quotas is not honored when bucket owner is different than uploader
- Paul,
The user stats not being updated makes it seem as if the user stats are not being updated fast enough. Can yo...
- 03:04 PM RADOS Backport #64406 (In Progress): reef: Failed to encode map X with expected CRC
- 02:58 PM CephFS Feature #63668 (Fix Under Review): pybind/mgr/volumes: add quiesce protocol API
- 01:35 PM Dashboard Bug #64369: mgr/dashboard: Add SSL to prometheus federation in the prometheus config
- Is this related to my feature request on adding SSL / TLS to the Prometheus metrics endpoints in general?
-> https:/...
- 12:31 PM CephFS Bug #64538 (Fix Under Review): cephfs-shell: hangs and then aborts
- When cephfs-shell is launched, it prints a deprecation warning, hangs, and then aborts -...
- 11:53 AM CephFS Bug #64537 (New): mds: lower the log level when rejecting a session reclaim request
- I'm seeing a case where an old NFS Ganesha client got evicted but not due to a reclaim request by the new incarnation...
- 11:13 AM CephFS Bug #62925 (In Progress): cephfs-journal-tool: Add preventive measures in the tool to avoid corru...
- 11:01 AM Orchestrator Bug #64536 (Pending Backport): cephadm/nvmeof: scrape nvmeof prometheus endpoint
- 10:06 AM Backport #64509: reef: Debian bookworm package needs to explicitly specify cephadm home directory
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/55709
ceph-backport.sh versi...
- 09:03 AM crimson Bug #64535 (New): crimson osd crashes during crimson-rados-experimental teuthology tests
- ...
- 09:02 AM Dashboard Backport #64528 (In Progress): reef: mgr/dashboard: TypeError: string indices must be integers
- 04:58 AM Dashboard Backport #64528 (Resolved): reef: mgr/dashboard: TypeError: string indices must be integers
- https://github.com/ceph/ceph/pull/55704
- 09:01 AM Dashboard Backport #64529 (In Progress): squid: mgr/dashboard: TypeError: string indices must be integers
- 04:58 AM Dashboard Backport #64529 (Resolved): squid: mgr/dashboard: TypeError: string indices must be integers
- https://github.com/ceph/ceph/pull/55703
- 08:29 AM CephFS Bug #64534 (New): qa: test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite
- test_cephfs_mirror_cancel_sync fails in a 100 jobs run of fs:mirror suite
https://pulpito.ceph.com/jcollin-2024-02-...
- 07:04 AM RADOS Bug #59196 (Fix Under Review): ceph_test_lazy_omap_stats segfault while waiting for active+clean
- 06:53 AM bluestore Bug #64533 (Pending Backport): BlueFS: l_bluefs_log_compactions is counted twice in sync log comp...
- During sync log compaction, in BlueFS::_compact_log_sync_LNF_LD, l_bluefs_log_compactions is first counted in _rewrite_...
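A minimal sketch of the double-count pattern being described, with hypothetical names standing in for the BlueFS methods:

    # Hypothetical names; the outer sync-compaction path bumps the
    # counter, and a helper it calls bumps the same counter again.
    log_compactions = 0

    def rewrite_log():
        global log_compactions
        log_compactions += 1  # counted once inside the helper

    def compact_log_sync():
        global log_compactions
        rewrite_log()
        log_compactions += 1  # counted a second time by the caller

    compact_log_sync()
    print(log_compactions)  # prints 2 although only one compaction ran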
- 06:33 AM Orchestrator Bug #64532 (Fix Under Review): cephadm: make filter & timestamp for rgw counters configurable
- Introduce 2 new configs in the RGW service spec for the new configs added here for the RGW daemon: https://github.com/ceph/ceph/pu...
- 06:31 AM CephFS Bug #54834 (Duplicate): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&):...
- 05:24 AM CephFS Feature #64531 (New): mds,mgr: identify metadata heavy workloads
- This is coming from the folks in the field - apparently it helps to know early on before the MDS starts throwing up c...
- 05:09 AM Dashboard Feature #64530 (Fix Under Review): mgr/dashboard: introduce multicluster monitoring and management
- 05:08 AM Dashboard Feature #64530 (Resolved): mgr/dashboard: introduce multicluster monitoring and management
- h3. Description of problem
_here_
h3. Environment
* @ceph version@ string:
* Platform (OS/distro/release)...
- 04:50 AM Dashboard Bug #62089 (Pending Backport): mgr/dashboard: TypeError: string indices must be integers
- 04:20 AM Linux kernel client Bug #63814: File deletion is 20x slower on kernel mount compared to libcephfs
- Niklas Hambuechen wrote:
> I mean the libcephfs Python bindings with docs at https://docs.ceph.com/en/latest/cephfs/...
- 01:50 AM Linux kernel client Bug #63814: File deletion is 20x slower on kernel mount compared to libcephfs
- I mean the libcephfs Python bindings with docs at https://docs.ceph.com/en/latest/cephfs/api/libcephfs-py/
In the ...
- 12:04 AM RADOS Backport #63843 (In Progress): quincy: Add health error if one or more OSDs registered v1/v2 publ...
- 12:01 AM RADOS Backport #63842 (In Progress): reef: Add health error if one or more OSDs registered v1/v2 public...
02/21/2024
- 11:43 PM rgw Bug #51437: the lifecycle transition operation does not work after set object acl
- Hi Soumya, when you return, could you look at this older report?
Thanks!
Matt
- 09:17 PM rgw Feature #64526 (New): support x-amz-expected-bucket-owner
- https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-owner-condition.html
> Bucket owner condition isn't a...
- 08:47 PM Orchestrator Bug #58920: logrotate - delaycompress and duplicate entry errors
- Manuel Lausch wrote:
> I opened a PR for this issue
> https://github.com/ceph/ceph/pull/55662
Thanks Manuel for ...
- 12:00 PM Orchestrator Bug #58920: logrotate - delaycompress and duplicate entry errors
- I opened a PR for this issue
https://github.com/ceph/ceph/pull/55662 - 07:42 PM RADOS Bug #64519: OSD/MON: No snapshot metadata keys trimming
- This reminded me of the notes in https://pad.ceph.com/p/removing_removed_snaps/timeslider#4651 that talk about why th...
- 10:22 AM RADOS Bug #64519 (New): OSD/MON: No snapshot metadata keys trimming
- The Monitor's purged_snap_ / purged_epoch_ keys and the OSD's PSN_ (SnapMapper::PURGED_SNAP_PREFIX) keys are not trimm...
- 07:32 PM Bug #64323: PG recovery stuck without making progress
- We've seen this in a few clusters and reverted to osd_op_queue = wpq to work around the issue in the mclock scheduler.
- 07:20 PM rgw Bug #63445: valgrind leak from D3nDataCache::d3n_libaio_create_write_request
- @D3nDataCache::d3n_libaio_create_write_request@ popped up again on main. maybe just an unlikely race vs @ainit.aio_id...
- 06:46 PM Orchestrator Bug #54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
- /a/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/7566747/
- 06:46 PM Orchestrator Bug #58145 (Pending Backport): orch/cephadm: nfs tests failing to mount exports (mount -t nfs 10....
- As discussed offline with Adam King
``tracker was "fixed" by a change in ganesha itself and then the version we're... - 06:12 PM Orchestrator Bug #58145 (New): orch/cephadm: nfs tests failing to mount exports (mount -t nfs 10.0.31.120:/fak...
- Hi guys,
This problem popped up in a RADOS Pacific branch run:
/a/yuriw-2024-02-19_19:25:49-rados-pacific-release...
- 06:16 PM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- https://pulpito.ceph.com/sjust-2024-02-21_05:52:17-crimson-rados-wip-sjust-crimson-testing-2024-02-20-distro-default-...
- 06:14 PM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- https://pulpito.ceph.com/sjust-2024-02-21_05:52:17-crimson-rados-wip-sjust-crimson-testing-2024-02-20-distro-default-...
- 01:40 PM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- https://pulpito.ceph.com/matan-2024-02-21_12:07:57-crimson-rados-wip-matanb-crimson-alien-buf-v3-testing-distro-crims...
- 06:13 PM CephFS Tasks #64413: File size is not correct after rmw
- In the case of O_TRUNC, I've added code to update_inode_file_size() to set effective_size to 0 when size is 0....
- 03:13 PM RADOS Bug #63389: Failed to encode map X with expected CRC
- The actual fix (not just the log message change) is: https://github.com/ceph/ceph/pull/55401.
It got approved 2 hour...
- 03:05 PM Bug #64213: MGR modules incompatible with later PyO3 versions - PyO3 modules may only be initiali...
- https://tracker.ceph.com/issues/63529 is a subset of this issue (relating to the dashboard), and has a fix just for t...
- 02:40 PM CephFS Backport #64518 (In Progress): reef: mgr/volumes: Support to reject CephFS clones if cloner threa...
- 07:17 AM CephFS Backport #64518 (In Progress): reef: mgr/volumes: Support to reject CephFS clones if cloner threa...
- https://github.com/ceph/ceph/pull/55692
- 02:26 PM Dashboard Bug #64524 (In Progress): mgr/dashboard: fix retention add for subvolume
- h3. Description of problem
Description of problem:
-----------------------
h3. Environment
* @ceph vers...
- 02:19 PM CephFS Backport #64517 (In Progress): quincy: mgr/volumes: Support to reject CephFS clones if cloner thr...
- 07:17 AM CephFS Backport #64517 (In Progress): quincy: mgr/volumes: Support to reject CephFS clones if cloner thr...
- https://github.com/ceph/ceph/pull/55690
- 01:46 PM Orchestrator Backport #64523 (Resolved): squid: drivegroup specific code is still available on rook orch
- 01:46 PM Orchestrator Backport #64522 (Resolved): squid: 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 01:38 PM Orchestrator Backport #64521 (Resolved): reef: 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 01:37 PM crimson Bug #64009: Crimson: PGShardMapping::maybe_create_pg() assert failure
- https://pulpito.ceph.com/matan-2024-02-21_12:07:57-crimson-rados-wip-matanb-crimson-alien-buf-v3-testing-distro-crims...
- 01:31 PM Orchestrator Bug #64516 (Pending Backport): 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 07:36 AM Orchestrator Bug #64516 (Fix Under Review): 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 07:16 AM Orchestrator Bug #64516 (Resolved): 50x errors are thrown when entering "Cluster > Upgrade" dashboard
- 01:19 PM crimson Bug #64038 (Resolved): clone_overlap assertion failed when doing rollback image
- Song Zhang wrote:
> Matan Breizman wrote:
> > Song Zhang wrote:
> > > To be more precise:
> > > 2. sequential wri...
- 01:06 PM crimson Bug #64038: clone_overlap assertion failed when doing rollback image
- Matan Breizman wrote:
> Song Zhang wrote:
> > To be more precise:
> > 2. sequential write to fill image by fio.
>...
- 12:59 PM crimson Bug #64038 (Need More Info): clone_overlap assertion failed when doing rollback image
- Song Zhang wrote:
> To be more precise:
> 2. sequential write to fill image by fio.
> 4/6. random write with 10M d...
- 01:16 PM crimson Bug #53001 (Resolved): unittest-btree-lba-manager times out in arm64 tests jenkins PR checks
- 285/289 Test #251: unittest-btree-lba-manager ................ Passed 3106.63 sec
- 01:16 PM crimson Bug #62550: osd crashes when doing peering
- Matan Breizman wrote:
> Was this resolved? If not, is it reproducible?
This should have been fixed by https://git...
- 01:04 PM crimson Bug #62550 (Need More Info): osd crashes when doing peering
- Was this resolved? If not, is it reproducible?
- 01:15 PM crimson Bug #51758 (Can't reproduce): SeaStore Vstart fail
- Please re-open if still relevant
- 01:14 PM crimson Bug #51460 (Can't reproduce): crimson: assert happen in crimson::os::seastore::CachedExtent::is_d...
- Please re-open if still relevant
- 01:14 PM crimson Bug #48868 (Can't reproduce): Background recovery request fall into infinite recursive loop
- Please re-open if still relevant
- 01:13 PM crimson Bug #48810 (Can't reproduce): "mount fsck found 1 errors" in crimson-rados-master
- 01:13 PM crimson Bug #47457 (Closed): segfault in BlueStore::_do_write()
- 01:12 PM crimson Bug #47312 (Closed): segfault in alien store
- 01:12 PM crimson Bug #45821 (Closed): crimson: crimson running into segfault after dumping tons of core files
- Please reopen if still relevant.
- 01:11 PM crimson Bug #45818 (Closed): crimson failing with bluestore as objectstore
- Please re-open if still relevant.
- 01:10 PM crimson Bug #57549 (Closed): Crimson: Alienstore not work after ceph enable c++20
- Jianxin Li wrote:
> This problem disappeared after updating the GCC compiler to the 12.2.0 version. And I met the Segmenta...
- 01:08 PM crimson Bug #59241 (Resolved): [crimson] OSD logs with debug 20/20 are not captured
- 01:06 PM crimson Bug #62098 (Resolved): long latency of repop delivering
- 01:05 PM crimson Bug #62525 (Resolved): Admin socket address already in use after restart
- 01:05 PM crimson Bug #62526 (In Progress): during recovery crimson sends OI_ATTR with MAXed soid and kills classic...
- 01:03 PM crimson Bug #62857 (Resolved): crimson osd fails to reboot
- 01:02 PM crimson Bug #62740 (Won't Fix): PGAdvanceMap can't handle skip_maps
- Will re-open if it still occurs.
- 01:01 PM crimson Bug #63845 (Duplicate): crimson: use-after-free in seastar::shard_mutex::unlock()
- Closing as this is a duplicate.
- 12:58 PM crimson Bug #64049 (Resolved): crimson-compile-error: error: declaration of 'seastar::net::offload_info s...
- Crimson's Seastar submodule was updated, closing this issue.
Please re-open if you still encounter that.
- 12:57 PM crimson Bug #64140 (Resolved): crimson: crash during crimson-osd --mkrs
- 12:56 PM crimson Bug #64513 (Fix Under Review): crimson: stack-use-after-free in build_incremental_map_msg
- 12:34 AM crimson Bug #64513 (Fix Under Review): crimson: stack-use-after-free in build_incremental_map_msg
- ...
- 12:56 PM crimson Bug #64512 (Fix Under Review): crimson: asan stack-use-after-return false positive on osd startup...
- 12:19 AM crimson Bug #64512 (Resolved): crimson: asan stack-use-after-return false positive on osd startup with cl...
- On clang-17 (output below) and also gcc-12/13, address sanitizer seems to be throwing stack-use-after-return errors r...
- 12:55 PM crimson Bug #63996 (Resolved): crimson: osd crash during OSD::_handle_osd_maps
- 12:54 PM crimson Bug #64457 (Resolved): crimson: unittest-seatar-socket failing intermittently
- 12:35 PM Orchestrator Backport #64520 (Resolved): reef: drivegroup specific code is still available on rook orch
- 12:34 PM Orchestrator Bug #64467: cephadm deployed NFS Ganesha clusters should disable attribute and dir caching
- I've been told by one of the ganesha team members that this isn't a good idea, but I don't have an explanation why. I...
- 12:29 PM Orchestrator Bug #64211 (Pending Backport): drivegroup specific code is still available on rook orch
- 11:51 AM mgr Bug #63529: Python Sub-Interpreter Model Used by ceph-mgr Incompatible With Python Modules Based ...
- Since that PR is a partial fix for this, I've put in https://github.com/ceph/ceph/pull/55689 to backport it to reef.
- 09:26 AM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- Venky Shankar wrote:
> Xiubo - assigning this tracker to you since you own the kclient patchset.
Sure.
- 09:19 AM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- Xiubo - assigning this tracker to you since you own the kclient patchset.
- 09:26 AM rgw Bug #64184: test_bn.py -v -a kafka_test: Fatal glibc error: tpp.c:87 (__pthread_tpp_change_priori...
- Casey Bodley wrote:
> @Yuval maybe it would make sense to split the rgw/notifications suite into two separate jobs f...
- 09:23 AM Orchestrator Bug #64482: ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not implemented
- This happens in the fs:workload suite since it uses cephadm-based installs. I have temporarily switched away from that to...
- 09:05 AM Backport #64509: reef: Debian bookworm package needs to explicitly specify cephadm home directory
- Kefu asked me to do the backport for this, but the @ceph-backport.sh@ script refuses to do so because this task is ow...
- 07:08 AM CephFS Feature #59714 (Pending Backport): mgr/volumes: Support to reject CephFS clones if cloner threads...
- 06:30 AM Dashboard Backport #64515 (In Progress): squid: mgr/dashboard: nvmeof api broken for v1.0.0
- 06:28 AM Dashboard Backport #64515 (In Progress): squid: mgr/dashboard: nvmeof api broken for v1.0.0
- https://github.com/ceph/ceph/pull/55685
- 06:28 AM Dashboard Bug #64384 (Pending Backport): mgr/dashboard: nvmeof api broken for v1.0.0
- 03:53 AM Orchestrator Bug #64434: rados/cephadm/osds: [WRN] CEPHADM_DAEMON_PLACE_FAIL: Failed to place 1 daemon(s)
- /a/yuriw-2024-02-14_14:58:57-rados-wip-yuri4-testing-2024-02-13-1546-distro-default-smithi/7559855
/a/yuriw-2024-02-...
- 03:50 AM RADOS Bug #64514 (Duplicate): LibRadosTwoPoolsPP.PromoteSnapScrub test failed
- In rados_api_tests: ...
- 02:21 AM Linux kernel client Bug #63814: File deletion is 20x slower on kernel mount compared to libcephfs
- What do you mean by *libcephfs*? Do you mean using *ceph-fuse*, or directly using third-party apps to call the ...
- 02:13 AM Linux kernel client Bug #59259 (Resolved): KASAN: use-after-free Write in encode_cap_msg
- Applied to Linus' tree.
- 01:57 AM CephFS Bug #50719: xattr returning from the dead (sic!)
- Austin Axworthy wrote:
> Hello,
>
> I've come across this Ceph issue and noticed it hasn't been updated in 9 mont...
02/20/2024
- 09:36 PM bluestore Bug #64511 (Fix Under Review): kv/RocksDBStore: rocksdb_cf_compact_on_deletion has no effect on t...
- 09:25 PM bluestore Bug #64511 (Fix Under Review): kv/RocksDBStore: rocksdb_cf_compact_on_deletion has no effect on t...
- This setting is applied via update_column_family_options(), which is called for each configured CF but not for the de...
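A minimal sketch of the gap (hypothetical CF names, not the actual RocksDBStore code): iterating only the configured column families leaves the implicit default CF with stock options:

    # Hypothetical CF names, for illustration only.
    configured_cfs = ["omap", "meta"]       # CFs named in the configuration
    all_cfs = configured_cfs + ["default"]  # the default CF exists implicitly

    applied = set()
    for cf in configured_cfs:               # per-CF options applied here...
        applied.add(cf)

    print([cf for cf in all_cfs if cf not in applied])  # ['default']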
- 07:08 PM rgw Backport #64510 (In Progress): squid: backport rgw/lc: decorating log events with more details
- 06:57 PM rgw Backport #64510: squid: backport rgw/lc: decorating log events with more details
- Backport (to Squid) PR: https://github.com/ceph/ceph/pull/55673
- 06:33 PM rgw Backport #64510 (In Progress): squid: backport rgw/lc: decorating log events with more details
- https://github.com/ceph/ceph/pull/55673 (backported from https://github.com/ceph/ceph/pull/55286)
- 07:04 PM CephFS Bug #50719: xattr returning from the dead (sic!)
- Hello,
I've come across this Ceph issue and noticed it hasn't been updated in 9 months. I aim to shed light on thi...
- 06:58 PM Feature #64436: rgw: add remaining x-amz-replication-status options
- The pace of bilog trimming seems acceptable, given that the value of this header would be to provide persistent informati...
- 06:15 PM Orchestrator Bug #64057: task/test_cephadm_timeout - failed with timeout
- /a/yuriw-2024-02-14_14:58:57-rados-wip-yuri4-testing-2024-02-13-1546-distro-default-smithi/7560090/
- 05:57 PM Backport #64509 (Resolved): reef: Debian bookworm package needs to explicitly specify cephadm hom...
- 05:57 PM Backport #64508 (New): quincy: Debian bookworm package needs to explicitly specify cephadm home d...
- 05:57 PM Bug #64069 (Pending Backport): Debian bookworm package needs to explicitly specify cephadm home d...
- 04:59 PM Bug #64069: Debian bookworm package needs to explicitly specify cephadm home directory
- Can this now be set to Status: Pending Backport please? I'd like to backport this change to (at least) reef.
- 05:33 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- This tracker looks very interesting: https://tracker.ceph.com/issues/57757.
- 05:00 PM RADOS Bug #64333: PG autoscaler tuning => catastrophic ceph cluster crash
- Links to crash sites:
* https://github.com/ceph/ceph/blob/v17.2.7/src/osd/ECBackend.cc#L676
* https://github.co...
- 05:33 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2024-02-14_14:58:57-rados-wip-yuri4-testing-2024-02-13-1546-distro-default-smithi/7560007/
- 05:23 PM CephFS Tasks #63669 (In Progress): qa: add teuthology tests for quiescing a group of subvolumes
- 05:21 PM CephFS Feature #64507 (New): pybind/mgr/snap_schedule: support crash-consistent snapshots
- Right now the module is limited to a 1-1 mapping of schedule to subvolume (or path). The module should be enhanced...
- 05:17 PM RADOS Bug #62777: rados/valgrind-leaks: expected valgrind issues and found none
- /a/yuriw-2024-02-14_14:58:57-rados-wip-yuri4-testing-2024-02-13-1546-distro-default-smithi/7559915
- 05:14 PM rgw Bug #64366 (Fix Under Review): rgw/multisite: objects named "." or ".." are not replicated
- 05:13 PM rgw Bug #64366: rgw/multisite: objects named "." or ".." are not replicated
- The existing integration test case _test_multi.py:test_object_sync_ is updated to reproduce the issue. Objects with k...
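A hedged reproduction sketch along the same lines, assuming boto3 pointed at one zone; the endpoint, bucket, and credentials are placeholders:

    # Placeholders throughout; a sketch of the reproduction, not the
    # actual test_multi.py change.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://zone-a.example:8000",
        aws_access_key_id="ACCESS",
        aws_secret_access_key="SECRET",
    )
    for key in (".", "..", "regular-object"):
        s3.put_object(Bucket="test-bucket", Key=key, Body=b"payload")
    # Per the report, after sync only "regular-object" shows up when
    # listing the bucket in the peer zone.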
- 05:07 PM rgw Bug #63724 (Fix Under Review): object lock: An object uploaded through a multipart upload can be ...
- 05:05 PM rgw Bug #64184: test_bn.py -v -a kafka_test: Fatal glibc error: tpp.c:87 (__pthread_tpp_change_priori...
- another crash trace from kafka test:...
- 04:14 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- Steve Taylor wrote:
> After some additional testing, it doesn't look like the RGWBucketInfo whose attempted retrieva...
- 03:42 PM CephFS Feature #64506 (New): qa: update fs:upgrade to test from reef/squid to main
- 02:59 PM CephFS Backport #64505 (In Progress): reef: mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mas...
- 02:44 PM CephFS Backport #64505 (In Progress): reef: mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mas...
- https://github.com/ceph/ceph/pull/55669
- 02:54 PM CephFS Bug #64440: mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mask v18.2.1 <-> main
- Breaking relation to #62724 to allow backport script to work.
- 02:38 PM CephFS Bug #64440 (Pending Backport): mds: reversed encoding of MDSMap max_xattr_size/bal_rank_mask v18....
- 02:51 PM CephFS Bug #64477 (New): pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with duplicated se...
- Unassigning myself; this is not related to the MDSMap encoding changes.
However, it does look like we're now seein...
- 02:41 PM CephFS Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Unassigning myself to return to other high priority tasks.
This issue is only revealed by the fix for i64440 which...
- 02:15 AM CephFS Bug #64502 (New): pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
- Every ceph-fuse mount for quincy fails to unmount for reef->main:
https://pulpito.ceph.com/pdonnell-2024-02-19_18:...
- 02:39 PM CephFS Backport #62724 (Resolved): reef: mon/MDSMonitor: optionally forbid to use standby for another fs...
- 12:29 PM rbd Bug #63770: [diff-iterate] discards that truncate aren't accounted for by ObjectListSnapsRequest
- Note that fast-diff feature being disabled is relevant only to "rbd diff". Other users of ObjectListSnapsRequest, su...
- 12:14 PM rbd Feature #63341 (Resolved): improve rbd_diff_iterate2() performance in fast-diff mode
- 12:13 PM rbd Feature #63341: improve rbd_diff_iterate2() performance in fast-diff mode
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:05 PM rbd Bug #53897 (Resolved): diff-iterate can report holes when diffing against the beginning of time (...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:05 PM rbd Bug #58740 (Resolved): "rbd feature disable" remote request hangs when proxied to rbd-nbd
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #61567 (Resolved): [test] nose framework is not available on centos stream 9
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #62140 (Resolved): pybind/rbd/rbd.pyx does not build with Cython-3
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #62891 (Resolved): [test][rbd] test recovery of rbd_support module from repeated blocklisting...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #62994 (Resolved): mgr/rbd_support: recovery from client blocklisting halts after MirrorSnaps...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #63028 (Resolved): ceph-mgr seg faults when testing for rbd_support module recovery on repeat...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #63422 (Resolved): librbd crash in journal discard wait_event
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #63654 (Resolved): [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJECT behav...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #63673 (Resolved): qa/workunits/rbd/cli_generic.sh: rbd support module command not failing as...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:04 PM rbd Bug #64139 (Resolved): rbd-nbd: image resizing doesn't update size of an image that is mapped usi...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 11:50 AM RADOS Bug #58130: LibRadosAio.SimpleWrite hang and pkill
- Thanks to Aishwarya, who also looked at the queued ops that didn't execute. I opened a new bug for it: https://tracker...
- 11:43 AM RADOS Bug #64504 (New): aio ops queued but never executed
- A few teuthology tests failed when trying to execute aio_write; wait_for_complete then never completed.
the...
- 10:30 AM rbd Backport #64462 (In Progress): reef: split() is broken in SparseExtentSplitMerge and SparseBuffer...
- 10:30 AM Bug #64420 (Resolved): arm64: Could NOT find Protobuf (missing: Protobuf_LIBRARIES Protobuf_INCLU...
- 10:28 AM rbd Backport #64461 (In Progress): quincy: split() is broken in SparseExtentSplitMerge and SparseBuff...
- 10:15 AM rgw Bug #63909 (Fix Under Review): persistent topic stats test fails
- 10:04 AM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- Xiubo Li wrote:
> Xiubo Li wrote:
> > I found the bug in the kclient patch series *[PATCH v3 0/6] ceph: check the ce...
- 04:03 AM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- Xiubo Li wrote:
> I found the bug in the kclient patch series *[PATCH v3 0/6] ceph: check the cephx mds auth access i...
- 02:20 AM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- I found the bug in the kclient patch series *[PATCH v3 0/6] ceph: check the cephx mds auth access in client side*, the...
- 12:55 AM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- Rishabh,
The test failed at around *2024-01-22T08:27:53.365*:...
- 05:01 AM CephFS Feature #59714: mgr/volumes: Support to reject CephFS clones if cloner threads are not available
- Backport note: additionally include commits from https://github.com/ceph/ceph/pull/55660
- 02:18 AM CephFS Bug #64503 (Fix Under Review): client: log message when unmount call is received
- 02:16 AM CephFS Bug #64503 (Fix Under Review): client: log message when unmount call is received
02/19/2024
- 09:44 PM cleanup Tasks #62741: rgw_thread_pool_size is no longer used
- @rgw_thread_pool_size@ is still used by the beast frontend for the thread pool. It's just not limited to 1 connection...
- 09:28 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- After some additional testing, it doesn't look like the RGWBucketInfo whose attempted retrieval is causing the segfau...
- 07:44 PM crimson Feature #64375: crimson: introduce support for C++ coroutines
- It seems there are other reasons to switch compilers, I'm looking into using clang for crimson builds now.
- 06:34 PM RADOS Bug #61385: TEST_dump_scrub_schedule fails from "key is query_active: negation:0 # expected: true...
- Why copy into a new bug report instead of marking as a duplicate?
- 06:32 PM RADOS Bug #53342: Exiting scrub checking -- not all pgs scrubbed
- I think we can close this bug as 'resolved'
- 06:31 PM RADOS Bug #62119: timeout on reserving replicsa
- @aishwarya - I think we can lower the severity, or maybe even close this bug.
It seems as though some specific tests...
- 06:28 PM RADOS Bug #64310 (Rejected): osd/scrub: PGs remain in the scrub queue after an interval change
- My mistake. Not exactly a bug.
(Fuller explanation:
recent changes to the scrub state-machine changed the point in ...
- 06:25 PM RADOS Bug #64437: qa/standalone/scrub/osd-scrub-repair.sh: TEST_repair_stats_ec: test 26 = 13
- Will take a look.
- 06:13 PM rgw Backport #64500 (In Progress): reef: multisite: Deadlock in RGWDeleteMultiObj with default rgw_mu...
- 06:03 PM rgw Backport #64500 (In Progress): reef: multisite: Deadlock in RGWDeleteMultiObj with default rgw_mu...
- https://github.com/ceph/ceph/pull/55655
- 06:12 PM rgw Backport #64501 (In Progress): squid: multisite: Deadlock in RGWDeleteMultiObj with default rgw_m...
- 06:03 PM rgw Backport #64501 (Resolved): squid: multisite: Deadlock in RGWDeleteMultiObj with default rgw_mult...
- https://github.com/ceph/ceph/pull/55654
- 06:11 PM rgw Bug #64492 (Fix Under Review): rgw: compatibility issues on BucketPublicAccessBlock
- 05:34 PM rgw Bug #64492: rgw: compatibility issues on BucketPublicAccessBlock
- PR: https://github.com/ceph/ceph/pull/55652
- 05:28 PM rgw Bug #64492 (Pending Backport): rgw: compatibility issues on BucketPublicAccessBlock
- - The root element on GetPublicAccessBlock should be PublicAccessBlockConfiguration.
- s3GetBucketPublicAccessBlock ...
- 06:03 PM rgw Backport #64499 (New): squid: rgw: add s3select bytes processed and bytes returned to usage
- 06:03 PM rgw Backport #64498 (New): quincy: rgw: add s3select bytes processed and bytes returned to usage
- 06:02 PM rgw Backport #64497 (New): reef: rgw: add s3select bytes processed and bytes returned to usage
- 06:02 PM rgw Backport #64496 (New): squid: keystone admin token is not invalidated on http 401 response
- 06:02 PM rgw Backport #64495 (New): quincy: keystone admin token is not invalidated on http 401 response
- 06:02 PM rgw Backport #64494 (New): reef: keystone admin token is not invalidated on http 401 response
- 06:00 PM rgw Backport #64493 (In Progress): squid: Disable/Enable access key Feature
- 05:54 PM rgw Backport #64493 (Resolved): squid: Disable/Enable access key Feature
- https://github.com/ceph/ceph/pull/55653
- 05:59 PM rgw Bug #63373 (Pending Backport): multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_ob...
- 05:57 PM rgw Feature #63563 (Pending Backport): rgw: add s3select bytes processed and bytes returned to usage
- 05:55 PM rgw Bug #64094 (Pending Backport): keystone admin token is not invalidated on http 401 response
- 05:51 PM rgw Feature #59186 (Pending Backport): Disable/Enable access key Feature
- 04:58 PM Orchestrator Bug #64118: cephadm: RuntimeError: Failed command: apt-get update: E: The repository 'https://dow...
- /a/lflores-2024-02-09_16:51:13-rados-wip-yuri3-testing-2024-02-07-1233-distro-default-smithi/7554126/
- 04:24 PM Orchestrator Bug #64491 (Pending Backport): cephadm: ceph-exporter fails to deploy when placed first
- This is due to the default socket directory it uses being a directory that is usually made by other ceph daemons. If ...
- 04:18 PM rgw Bug #64489 (Fix Under Review): rgw: pick the last ip in x-forwarded-for chain
- 03:45 PM rgw Bug #64489: rgw: pick the last ip in x-forwarded-for chain
- PR: https://github.com/ceph/ceph/pull/55646
- 03:43 PM rgw Bug #64489 (Fix Under Review): rgw: pick the last ip in x-forwarded-for chain
- Currently, when rgw_remote_addr_param is set to HTTP_X_FORWARDED_FOR, it will pick the first IP from the chain. As de...
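As an illustration of the proposed behavior (not the RGW implementation): take the last entry of the chain, since it is appended by the proxy nearest the server and is the hardest for a client to spoof:

    # Illustrative sketch only.
    def remote_addr_from_xff(header: str) -> str:
        # "client, proxy1, proxy2" -> "proxy2"
        return header.split(",")[-1].strip()

    assert remote_addr_from_xff("198.51.100.7, 203.0.113.9") == "203.0.113.9"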
- 04:07 PM CephFS Bug #64478: Upgrading mon from v18.2.1 to latest-reef-devel image is causing mon to fail when dec...
- Great to hear you're already investigating, thanks!
- 04:01 PM CephFS Bug #64490 (Fix Under Review): mds: some request errors come from errno.h rather than fs_types.h
- 03:59 PM CephFS Bug #64490 (Fix Under Review): mds: some request errors come from errno.h rather than fs_types.h
- (See future PR for where.)
- 03:24 PM rbd Bug #64345: rbd/test_librbd_python.sh: ERROR at teardown of TestImage.test_diff_iterate
- /a/lflores-2024-02-09_16:51:13-rados-wip-yuri3-testing-2024-02-07-1233-distro-default-smithi/7554153/
- 03:09 PM Orchestrator Bug #61940: "test_cephfs_mirror" fails from stray cephadm daemon
- /a/lflores-2024-02-09_16:51:13-rados-wip-yuri3-testing-2024-02-07-1233-distro-default-smithi/7554136/
- 02:44 PM rgw Bug #63973 (Fix Under Review): x-amz-expiration HTTP header: expiry-date sometimes broken
- 02:37 PM Dashboard Bug #62089: mgr/dashboard: TypeError: string indices must be integers
- Hi, it's not happening on a normal cluster as far as I have seen.
It seems something needs to be wrong in the cluster for t...
- 02:02 PM rgw Bug #63642: rgw: rados objects wrongly deleted
- For future reference, the `rgw-gap-list` has been tested to work correctly with the golang reproducer to find the affe...
- 02:02 PM CephFS Bug #64477 (In Progress): pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with dupli...
- 01:45 PM devops Bug #64488: upgrading packages from debian distro to ceph-io packages on bookworm fails because o...
- Please, is it possible to reformat the issue? I'm too used to writing everything in markdown ;(
- 01:43 PM devops Bug #64488 (New): upgrading packages from debian distro to ceph-io packages on bookworm fails bec...
- Upgrading the cephadm package fails when going from the distro package (16.x) to 18.2.1 provided by the ceph.io repository
aft... - 01:37 PM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- It seems we have hit this 3 times, right?
- 09:48 AM Linux kernel client Bug #64172: Test failure: test_multiple_path_r (tasks.cephfs.test_admin.TestFsAuthorize)
- Rishabh Dave wrote:
> Venky Shankar wrote:
> > Rishabh Dave wrote:
> > > I also ran this test with FUSE and kernel...
- 01:28 PM CephFS Bug #64284 (Won't Fix): client: align get/put caps with kclient
- Dhairya Parmar wrote:
> > While having similar implementation in the kclient and user-space is desired, I don't thin...
- 12:52 PM Dashboard Bug #64487 (Resolved): mgr/dashboard: fix subvolume group edit
- h3. Description of problem
Description of problem:
-----------------------
Editing of Subvolume Group is not wor...
- 12:51 PM CephFS Bug #62265: cephfs-mirror: use monotonic clocks in cephfs mirror daemon
- Jos, please continue where Manish left off.
- 12:50 PM CephFS Bug #62720: mds: identify selinux relabelling and generate health warning
- Chris, please take this one whenever you get some time off from the fscrypt work :)
- 12:42 PM CephFS Backport #64484 (In Progress): reef: mds: add per-client perf counters (w/ label) support
- 08:51 AM CephFS Backport #64484 (In Progress): reef: mds: add per-client perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/55640
- 12:39 PM CephFS Backport #64485 (In Progress): reef: cephfs_mirror: add perf counters (w/ label) support
- 08:51 AM CephFS Backport #64485 (In Progress): reef: cephfs_mirror: add perf counters (w/ label) support
- https://github.com/ceph/ceph/pull/55640
- 12:07 PM CephFS Bug #57048 (Fix Under Review): osdc/Journaler: better handle ENOENT during replay as up:standby-r...
- 09:33 AM CephFS Bug #64486 (Pending Backport): qa: enhance labeled perf counters test for cephfs-mirror
- In particular, verify peer metric counters.
- 08:49 AM CephFS Feature #64387 (Pending Backport): mds: add per-client perf counters (w/ label) support
- 08:49 AM CephFS Feature #63945 (Pending Backport): cephfs_mirror: add perf counters (w/ label) support
- 08:47 AM CephFS Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
- I've made the notes a little clearer here: https://github.com/ceph/ceph/pull/55637
I've tried to help the reader d...
- 08:39 AM CephFS Documentation #64483 (In Progress): doc: document labelled perf metrics for mds/cephfs-mirror
- 07:56 AM CephFS Documentation #64483 (In Progress): doc: document labelled perf metrics for mds/cephfs-mirror
- 07:43 AM Orchestrator Bug #64482 (Resolved): ceph: stderr Error: OCI runtime error: crun: bpf create ``: Function not i...
- See - /a/vshankar-2024-02-16_12:40:36-fs:workload-wip-vshankar-testing-20240216.103400-testing-default-smithi/7562301...
- 07:19 AM CephFS Bug #63700: qa: test_cd_with_args failure
- Neeraj, PTAL asap.
- 07:19 AM CephFS Bug #63699: qa: failed cephfs-shell test_reading_conf
- Neeraj, PTAL asap.
- 07:13 AM CephFS Bug #64149: valgrind+mds/client: gracefully shutdown the mds during valgrind tests
- Kotresh, please take this one (spoke to Milind regarding this before reassigning).
- 04:21 AM CephFS Bug #64479 (Fix Under Review): Memory leak detected when accessing a CephFS volume from Samba usi...