Activity
From 11/08/2023 to 12/07/2023
Today
- 10:17 AM rbd Backport #63745 (In Progress): pacific: librbd crash in journal discard wait_event
- 09:40 AM RADOS Bug #53240: full-object read crc mismatch, because truncate modifies oi.size and forgets to clear...
- Our cluster seems to have encountered a similar issue to yours. We haven't yet identified the root cause. I'm wonderi...
- 09:32 AM rgw Bug #63542: Delete-Marker deletion inconsistencies
- Thanks Daniel for having looked into this; unfortunately I'm currently not in a position to be able to follow this iss...
- 09:31 AM rgw Bug #63560: retry_raced_bucket_write considerations
- Thanks Daniel for having looked into this; unfortunately I'm currently not in a position to be able to follow this iss...
- 08:48 AM sepia Support #63566: access to global build stats jenkins dashboard
- Done
- 08:04 AM sepia Support #63566: access to global build stats jenkins dashboard
- Thanks Adam, you can add Ivo there!
- 08:12 AM Orchestrator Bug #63720: cephadm: Cannot set values for --daemon-types, --services or --hosts when upgrade alr...
- /a/yuriw-2023-12-06_15:05:11-rados-wip-yuri6-testing-2023-12-05-0753-pacific-distro-default-smithi/7480324
- 07:57 AM Infrastructure Bug #59193: "Failed to fetch package version from https://shaman.ceph.com/api/search ..."
- /a/yuriw-2023-12-06_15:05:11-rados-wip-yuri6-testing-2023-12-05-0753-pacific-distro-default-smithi/7480308
- 07:52 AM Infrastructure Bug #63750 (New): qa/workunits/post-file.sh: Couldn't create directory: No such file or directory
- h3. Description of problem
/a/yuriw-2023-12-06_15:05:11-rados-wip-yuri6-testing-2023-12-05-0753-pacific-distro-def...
- 06:32 AM rgw Bug #63749 (New): rdma: when set ms_async_rdma_receive_queue_len is low, ceph health detail could...
- In my RoCE environment:
ubuntu22.04 Linux node194 5.15.0-60-generic #66-Ubuntu SMP Fri Jan 20 14:29:49 UTC 2023 x86_6...
- 04:50 AM Linux kernel client Bug #63586: libceph: osd_sparse_read: failed to allocate 2208169984 sparse read extents
- Just the *read_partial()* couldn't successfully read the *21* bytes *footer* data from the socket:...
12/06/2023
- 10:41 PM rgw Bug #63723 (Closed): a
- 09:38 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- /a/yuriw-2023-12-05_18:59:03-rados-wip-yuri4-testing-2023-12-04-1129-reef-distro-default-smithi/7478311...
- 09:33 PM RADOS Bug #44595: cache tiering: Error: oid 48 copy_from 493 returned error code -2
- This looks similar:
/a/yuriw-2023-12-05_18:59:03-rados-wip-yuri4-testing-2023-12-04-1129-reef-distro-default-smith...
- 09:29 PM rbd Backport #63745: pacific: librbd crash in journal discard wait_event
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/54820
ceph-backport.sh versi...
- 08:17 PM rbd Backport #63745 (In Progress): pacific: librbd crash in journal discard wait_event
- https://github.com/ceph/ceph/pull/54820
- 09:26 PM Ceph Bug #63748 (New): qa/workunits/post-file.sh: Couldn't create directory
- /a/yuriw-2023-01-13_20:42:41-rados-pacific_16.2.11_RC6.6-distro-default-smithi/7142054/...
- 09:21 PM RADOS Bug #52155: crash: pthread_rwlock_rdlock() in queue_want_up_thru
- Similar crash: http://telemetry.front.sepia.ceph.com:4000/d/jByk5HaMz/crash-spec-x-ray?var-sig_v2=606d36db613472cf50f...
- 09:11 PM RADOS Bug #52155: crash: pthread_rwlock_rdlock() in queue_want_up_thru
- /a/yuriw-2023-12-05_18:59:03-rados-wip-yuri4-testing-2023-12-04-1129-reef-distro-default-smithi/7478026
- 09:20 PM rbd Backport #63747: quincy: librbd crash in journal discard wait_event
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/54819
ceph-backport.sh versi...
- 08:17 PM rbd Backport #63747 (New): quincy: librbd crash in journal discard wait_event
- 09:17 PM rbd Backport #63746: reef: librbd crash in journal discard wait_event
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/54818
ceph-backport.sh versi...
- 08:17 PM rbd Backport #63746 (New): reef: librbd crash in journal discard wait_event
- 08:11 PM rbd Bug #63422 (Pending Backport): librbd crash in journal discard wait_event
- 06:50 PM rgw Feature #63744 (New): support new notification event types
- see: https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html
f...
- 06:45 PM Dashboard Bug #62972: ERROR: test_list_enabled_module (tasks.mgr.dashboard.test_mgr_module.MgrModuleTest)
- still failing in https://jenkins.ceph.com/job/ceph-api/65558/
- 04:54 PM mgr Bug #57460: Json formatted ceph pg dump hangs on large clusters
- This was discussed in the CDM (https://tracker.ceph.com/projects/ceph/wiki/CDM_06-DEC-2023) and the conclusion was:
...
- 04:00 PM CephFS Bug #63743 (New): MDS crash on ceph_assert(in->is_auth())
- The MDS crashes and continues to crash on the same assert....
- 03:29 PM rgw Backport #63742 (New): reef: rgwlc: lock_lambda overwrites ret val
- 03:29 PM rgw Backport #63741 (New): quincy: rgwlc: lock_lambda overwrites ret val
- 03:22 PM rgw Bug #63740 (Pending Backport): rgwlc: lock_lambda overwrites ret val
- lock_lambda captures ret by reference, so it will overwrite the
returned value of bucket_lc_process when wait_backoff is c...
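As an aside on the bug class described above: the real code is the C++ lambda in rgw_lc, but a minimal Python analogue (all names below are hypothetical) shows how a later callback that writes through the same capture can clobber the failure code stored by bucket_lc_process.
<pre>
def run_with_lock(work, wait_backoff):
    ret = 0

    def lock_lambda():
        nonlocal ret          # captured "by reference", like the C++ lambda capturing ret
        ret = work()          # stores the failure code we want to propagate

    def backoff_cb():
        nonlocal ret
        ret = 0               # a later backoff path writes through the same capture...

    lock_lambda()
    wait_backoff(backoff_cb)  # ...and clobbers the earlier failure code
    return ret                # bug: returns 0 even though work() failed


# work() fails with -16, but the caller sees 0
print(run_with_lock(lambda: -16, lambda cb: cb()))
</pre>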
- 11:23 AM CephFS Backport #63512 (Resolved): pacific: client: queue a delay cap flushing if there are dirty caps/s...
- 11:03 AM bluestore Feature #63739 (New): We might need some clarification/rework to properly locate service files if...
- In some cases - primarily after some recovery and DB exporting - we might need to use RocksDB on top of regular files...
- 10:50 AM rbd Backport #63738 (New): reef: [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJECT...
- 10:50 AM rbd Backport #63737 (New): quincy: [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJE...
- 10:50 AM rbd Backport #63736 (New): pacific: [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJ...
- 10:49 AM rbd Bug #63654 (Pending Backport): [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJE...
- 10:43 AM mgr Bug #63735 (New): Module 'pg_autoscaler' has failed: float division by zero
- Dear maintainer,
After the last upgrade from 17.2.6 to 17.2.7 my Ceph is stuck in HEALTH_ERR with the following me...
- 10:23 AM CephFS Bug #63619 (Closed): client: check for negative value of iovcnt before passing it to internal fun...
- 10:23 AM CephFS Bug #63629 (Closed): client: handle context completion during async I/O call when the client is n...
- 10:23 AM CephFS Bug #63648 (Closed): client: ensure callback is finished if write fails during async I/O
- 10:23 AM CephFS Bug #63632 (Closed): client: fh obtained using O_PATH can stall the caller during async I/O
- 10:22 AM CephFS Bug #63734 (In Progress): client: handle callback when async io fails
- There are several cases where the async call might experience some failure and the helper functions would just return...
- 09:58 AM rbd Backport #63733 (In Progress): reef: pybind/rbd/rbd.pyx does not build with Cython-3
- 09:54 AM rbd Backport #63733 (In Progress): reef: pybind/rbd/rbd.pyx does not build with Cython-3
- https://github.com/ceph/ceph/pull/54807
- 09:52 AM rbd Bug #62140 (Pending Backport): pybind/rbd/rbd.pyx does not build with Cython-3
- 09:45 AM CephFS Bug #63710: client.5394 isn't responding to mclientcaps(revoke), ino 0x10000000001 pending pAsLsX...
- Stefan Kooman wrote:
> We have parsed our logs and occasionally see the same Ceph health warning "MDS_CLIENT_LATE_RE...
- 09:40 AM CephFS Bug #63710: client.5394 isn't responding to mclientcaps(revoke), ino 0x10000000001 pending pAsLsX...
- We have parsed our logs and occasionally see the same Ceph health warning "MDS_CLIENT_LATE_RELEASE" with exactly thes...
- 09:24 AM Ceph Bug #63732 (New): RGW jwt auth not adhering to RFC
- Hi Ceph team,
I was testing out OIDC auth when I came across a Bug that broke that feature completely for us.
Our...
- 08:43 AM Orchestrator Bug #63731 (New): mgr/nfs: add subvolume & groups name flags for export creation
- For creating CephFS export with subvolumes/subvolumeGroups currently user needs to provide path for the same. To impr...
- 06:56 AM Linux kernel client Bug #63586: libceph: osd_sparse_read: failed to allocate 2208169984 sparse read extents
- Locally I reproduce a similar case and get the debug log by enabling the whole *libceph.ko* module debug logs, which ...
- 06:03 AM CephFS Feature #56428: add command "fs deauthorize"
- Hey Rishabh,
As we spoke yesterday, please update here with details of the discussion you had with Greg.
- 06:03 AM CephFS Bug #61279: qa: test_dirfrag_limit (tasks.cephfs.test_strays.TestStrays) failed
- Rishabh - please get an RCA for this.
- 04:52 AM Dashboard Bug #63730 (New): mgr/dashboard: improve rgw bucket encryption workflow
- Allow the user to edit and view the rgw bucket encryption config values
12/05/2023
- 08:22 PM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- Ilya Dryomov wrote:
> I think the issue might be that static initialization is now split between librados and librbd...
- 07:58 PM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- Casey Bodley wrote:
> i tried adding a similar @librbd.map@ in https://github.com/ceph/ceph/pull/54788, but couldn't...
- 06:38 PM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- i'm not familiar with the @--version-script@ stuff outside of what we did in @librados.map@. i tried adding a similar...
- 08:07 PM rgw Backport #62413 (In Progress): pacific: qa/sts: test_list_buckets_invalid_auth and test_list_buck...
- 08:05 PM rgw Backport #59492 (In Progress): pacific: test_librgw_file.sh crashes: src/tcmalloc.cc:332] Attempt...
- 08:04 PM rgw Bug #53220 (Resolved): notification tests failing: 'find /home/ubuntu/cephtest -ls ; rmdir -- /ho...
- 08:04 PM rgw Backport #53866 (Rejected): pacific: notification tests failing: 'find /home/ubuntu/cephtest -ls ...
- 08:00 PM rgw Backport #59494 (In Progress): quincy: test_librgw_file.sh crashes: src/tcmalloc.cc:332] Attempt ...
- 07:54 PM rgw Backport #62414 (In Progress): quincy: qa/sts: test_list_buckets_invalid_auth and test_list_bucke...
- 07:53 PM rgw Backport #62412 (In Progress): reef: qa/sts: test_list_buckets_invalid_auth and test_list_buckets...
- 07:52 PM rgw Backport #63624 (In Progress): pacific: SignatureDoesNotMatch for certain RGW Admin Ops endpoints...
- 07:51 PM rgw Backport #63625 (In Progress): quincy: SignatureDoesNotMatch for certain RGW Admin Ops endpoints ...
- 07:51 PM rgw Backport #63626 (In Progress): reef: SignatureDoesNotMatch for certain RGW Admin Ops endpoints wh...
- 07:45 PM rgw Bug #50721 (Resolved): make fetching of certs while validating tokens, more generic.
- 07:45 PM rgw Backport #52778 (Resolved): pacific: make fetching of certs while validating tokens, more generic.
- 07:44 PM rgw Bug #55286 (Resolved): Segfault when Open Policy Agent authorization is enabled
- 07:43 PM rgw Backport #55500 (Resolved): pacific: Segfault when Open Policy Agent authorization is enabled
- 07:43 PM rgw Bug #57901 (Resolved): s3:ListBuckets response limited to 1000 buckets (by default) since Octopus
- 07:42 PM rgw Backport #58234 (Resolved): pacific: s3:ListBuckets response limited to 1000 buckets (by default)...
- 07:42 PM rgw Backport #58584 (Resolved): pacific: Keys returned by Admin API during user creation on secondary...
- 07:41 PM rgw Backport #59131 (Rejected): pacific: DeleteObjects response does not include DeleteMarker/DeleteM...
- 07:40 PM rgw Backport #59610 (Resolved): pacific: sts: every AssumeRole writes to the RGWUserInfo
- 07:40 PM rgw Bug #58950 (Resolved): Swift static large objects are not deleted when segment object path set in...
- 07:40 PM rgw Backport #61174 (Resolved): quincy: Swift static large objects are not deleted when segment objec...
- 07:40 PM rgw Backport #61175 (Resolved): pacific: Swift static large objects are not deleted when segment obje...
- 07:40 PM rgw Backport #61432 (Resolved): reef: rgw: multisite data log flag not used
- 07:39 PM rgw Backport #61433 (Resolved): pacific: rgw: multisite data log flag not used
- 07:38 PM rgw Bug #51598 (Resolved): Session policy evaluation incorrect for CreateBucket.
- 07:38 PM rgw Backport #52784 (Resolved): pacific: Session policy evaluation incorrect for CreateBucket.
- 07:08 PM rgw Backport #52784: pacific: Session policy evaluation incorrect for CreateBucket.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44476
merged
- 07:38 PM rgw Bug #55460 (Resolved): GetBucketTagging returns wrong NoSuchTagSetError instead of NoSuchTagSet
- 07:38 PM rgw Backport #55613 (Resolved): pacific: GetBucketTagging returns wrong NoSuchTagSetError instead of ...
- 07:09 PM rgw Backport #55613: pacific: GetBucketTagging returns wrong NoSuchTagSetError instead of NoSuchTagSet
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50533
merged
- 07:37 PM CephFS Feature #63666 (Fix Under Review): mds: QuiesceAgent to execute quiesce operations on an MDS rank
- 07:36 PM CephFS Feature #63665 (Fix Under Review): mds: QuiesceDb to manage subvolume quiesce state
- 07:36 PM CephFS Tasks #63707 (Fix Under Review): mds: AdminSocket command to control the QuiesceDbManager
- 07:10 PM CephFS Backport #63513: pacific: MDS slow requests for the internal 'rename' requests
- Xiubo Li wrote:
> https://github.com/ceph/ceph/pull/54517
merged
- 06:39 PM rgw Bug #63672 (Fix Under Review): qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1...
- 04:57 PM Orchestrator Bug #63729 (In Progress): OSD with dedicated db devices redeployments fail when no service_id pro...
- When no `service_id` is provided to the service spec for osd service, OSD redeployments fail.
if no service_id is ...
- 04:38 PM CephFS Bug #61732: pacific: test_cluster_info fails from "No daemons reported"
- merged
- 04:20 PM CephFS Backport #63588: pacific: qa: fs:mixed-clients kernel_untar_build failure
- merged
- 04:18 PM CephFS Backport #63512: pacific: client: queue a delay cap flushing if there are dirty caps/snapcaps
- merged
- 04:17 PM CephFS Backport #63173: pacific: crash: void MDLog::trim(int): assert(segments.size() >= pre_segments_size)
- merged
- 04:17 PM CephFS Backport #62520: pacific: client: FAILED ceph_assert(_size == 0)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53981
merged
- 04:16 PM CephFS Backport #62916: pacific: client: syncfs flush is only fast with a single MDS
- merged
- 04:15 PM CephFS Backport #62406: pacific: pybind/mgr/volumes: pending_subvolume_deletions count is always zero in...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53574
merged
- 03:24 PM sepia Support #63566: access to global build stats jenkins dashboard
- I can add one more username. Which one should I add?
- 11:24 AM sepia Support #63566: access to global build stats jenkins dashboard
- thanks Adam, it's working for both of them!
- 03:12 PM rgw Bug #63178 (Fix Under Review): multisite: don't write data/bilog entries for lifecycle transition...
- 02:11 PM Linux kernel client Bug #59259: KASAN: use-after-free Write in encode_cap_msg
- Xiubo - reassigning this to you after talking to Rishabh since Rishabh mentioned that this is being hit pretty freque...
- 01:39 PM bluestore Bug #52398 (Resolved): ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/...
- https://tracker.ceph.com/issues/63606 is a standalone Pacific ticket, this one is not related. Hence closing.
- 11:09 AM bluestore Bug #63606 (Fix Under Review): pacific: ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoShar...
- 09:27 AM bluestore Bug #63606: pacific: ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/2 ...
- This is a standalone Pacific issue, not a reincarnation of #52398, due to lack of NCB support.
- 10:58 AM CephFS Feature #61334 (Fix Under Review): cephfs-mirror: use snapdiff api for efficient tree traversal
- 08:40 AM RADOS Bug #63727: LogClient: do not output meaningless logs by default
- https://github.com/ceph/ceph/pull/54780
changed 52145 to 54780, because 52145 cannot be reopened
- 05:44 AM RADOS Bug #63727 (New): LogClient: do not output meaningless logs by default
- When using log_to_syslog to dump logs to remote,
Before printing a ceph log, a "log to syslog" will be printed.
Thi...
- 08:15 AM Dashboard Backport #63728 (New): reef: mgr/dashboard: rgw roles edit and delete
- 08:09 AM Dashboard Bug #63230 (Pending Backport): mgr/dashboard: rgw roles edit and delete
- 06:27 AM CephFS Bug #52581 (New): Dangling fs snapshots on data pool after change of directory layout
- Bumping the prio since a similar report has showed up in -users list - https://marc.info/?l=ceph-users&m=170151942902...
- 06:00 AM CephFS Feature #47264: "fs authorize" subcommand should work for multiple FSs too
- Apparently, release note was added in wrong part of PendingReleaseNote (or it was added to right part but it was move...
- 05:58 AM CephFS Bug #63154: fs rename must require FS to be offline and refuse_client_session to be set
- Release note wasn't added in the PR. Therefore a new PR has been raised to get this done - https://github.com/ceph/ce...
- 03:10 AM Orchestrator Bug #46991 (Resolved): cephadm ls does not list legacy rgws
- 01:50 AM CephFS Bug #63141: qa/cephfs: test_idem_unaffected_root_squash fails
- The corresponding kclient fix https://patchwork.kernel.org/project/ceph-devel/list/?series=806849.
- 01:49 AM CephFS Bug #63141 (Fix Under Review): qa/cephfs: test_idem_unaffected_root_squash fails
- 12:48 AM CephFS Bug #62739 (Resolved): cephfs-shell: remove distutils Version classes because they're deprecated
- 12:47 AM CephFS Backport #62998 (Resolved): reef: cephfs-shell: remove distutils Version classes because they're ...
12/04/2023
- 11:47 PM Ceph Bug #63557: NVMe-oF gateway prometheus endpoints
- Endpoint PR https://github.com/ceph/ceph-nvmeof/pull/344
- 09:20 PM RADOS Bug #63389: Failed to encode map X with expected CRC
- 1. Currently what we found is that the issue is always reproducible in runs that contain “upgrade/parallel” + “upgra...
- 07:35 PM RADOS Bug #63389: Failed to encode map X with expected CRC
- bump up
- 07:47 PM RADOS Bug #63609 (Fix Under Review): osd acquire map_cache_lock high latency
- 07:46 PM RADOS Bug #63658 (Fix Under Review): OSD trim_maps - possible too slow lead to using too much storage s...
- 07:43 PM RADOS Bug #42597 (Fix Under Review): mon and mds ok-to-stop commands should validate input names exist ...
- 07:42 PM RADOS Bug #62470 (Need More Info): Rook: OSD Crash Looping / Caught signal (Aborted) / thread_name:tp_o...
- 07:36 PM CephFS Backport #63419: pacific: mds: client request may complete without queueing next replay request
- merged
- 07:34 PM RADOS Bug #61968: rados::connect() gets segment fault
- Hi Nitzan! Would you mind taking a look?
- 06:17 PM bluestore Bug #63618: Allocator configured with 64K alloc unit might get 4K requests
- Whether #62282 is related or not is still an open question.
- 06:16 PM bluestore Bug #63618 (Fix Under Review): Allocator configured with 64K alloc unit might get 4K requests
- So the issue occurs when custom bluefs_shared_alloc_size is in use and it's below min_alloc_size persistent for BlueS...
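A quick sketch of the problematic configuration described above, as a sanity check one could run against a cluster's settings (the helper below is hypothetical; the two option names mirror the comment):
<pre>
def shared_alloc_size_is_sane(bluefs_shared_alloc_size: int, min_alloc_size: int) -> bool:
    # Reported failure mode: a custom bluefs_shared_alloc_size that is smaller
    # than the min_alloc_size persisted for the BlueStore instance.
    return bluefs_shared_alloc_size >= min_alloc_size


# e.g. a 32K shared alloc size against a 64K persistent min_alloc_size is flagged
print(shared_alloc_size_is_sane(32 * 1024, 64 * 1024))  # False -> matches the bug scenario
</pre>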
- 06:12 PM rgw Bug #63724 (Fix Under Review): object lock: An object uploaded through a multipart upload can be ...
- 12:12 PM rgw Bug #63724: object lock: An object uploaded through a multipart upload can be deleted without the...
- The fix has been submitted at https://github.com/ceph/ceph/pull/54767
The test results are shown below.
T...
- 08:48 AM rgw Bug #63724 (Fix Under Review): object lock: An object uploaded through a multipart upload can be ...
- Set object locks on buckets and upload objects to buckets. Objects can be deleted without the x-amz-bypass-governance...
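For context, this is the behaviour normally expected from S3 object lock in GOVERNANCE mode; a hedged boto3 sketch (endpoint, bucket and key are placeholders), where deleting a locked object version should only succeed when the bypass flag is sent by a permitted user:
<pre>
import boto3
from botocore.exceptions import ClientError

# placeholder endpoint; assumes a versioned bucket created with object lock enabled
s3 = boto3.client("s3", endpoint_url="http://rgw.example.com:8000")

# deleting a locked version without the bypass header should be rejected
try:
    s3.delete_object(Bucket="locked-bucket", Key="obj", VersionId="v1")
except ClientError as e:
    print("rejected as expected:", e)

# bypassing GOVERNANCE retention requires the explicit flag
# (boto3 sends it as the x-amz-bypass-governance-retention header)
s3.delete_object(Bucket="locked-bucket", Key="obj", VersionId="v1",
                 BypassGovernanceRetention=True)
</pre>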
- 04:47 PM mgr Backport #57474 (Resolved): pacific: mgr: FAILED ceph_assert(daemon != nullptr)
- 03:53 PM mgr Backport #57474: pacific: mgr: FAILED ceph_assert(daemon != nullptr)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52993
merged
- 04:46 PM RADOS Bug #52553 (Resolved): pybind: rados.RadosStateError raised when closed watch object goes out of ...
- 04:46 PM RADOS Backport #52557 (Resolved): pacific: pybind: rados.RadosStateError raised when closed watch objec...
- 03:52 PM RADOS Backport #52557: pacific: pybind: rados.RadosStateError raised when closed watch object goes out ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51259
merged
- 04:08 PM rbd Backport #63714 (In Progress): pacific: qa/workunits/rbd/cli_generic.sh: rbd support module comma...
- 04:08 PM CephFS Backport #63413: reef: mon/MDSMonitor: metadata not loaded from PAXOS on update
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54316
merged
- 04:07 PM CephFS Backport #63418: reef: mds: client request may complete without queueing next replay request
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54313
merged
- 04:07 PM rbd Backport #63716 (In Progress): quincy: qa/workunits/rbd/cli_generic.sh: rbd support module comman...
- 04:07 PM CephFS Backport #63264: reef: mgr/volumes: fix `subvolume group rm` command error message
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54207
merged
- 04:06 PM CephFS Backport #62998: reef: cephfs-shell: remove distutils Version classes because they're deprecated
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54119
merged
- 04:05 PM rbd Backport #63715 (In Progress): reef: qa/workunits/rbd/cli_generic.sh: rbd support module command ...
- 04:04 PM CephFS Backport #62738: reef: quota: accept values in human readable format as well
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/53333
merged
- 03:54 PM RADOS Backport #63600: pacific: RBD cloned image is slow in 4k write with "waiting for rw locks"
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54593
merged
- 03:53 PM sepia Support #63566: access to global build stats jenkins dashboard
- The "avanthakkar" and "aaSharma14" users should have access; can they check that they can see the dashboard?
- 11:23 AM sepia Support #63566: access to global build stats jenkins dashboard
- I'll add their names and github profiles below.
Aashish Sharma: https://github.com/aaSharma14
Ankush Behl: https:...
- 03:44 PM teuthology Support #63655: ssh permissions pdiazbou@teuthology...
- Please check that you have access
- 03:25 PM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- trying to retrace Jason's steps from https://github.com/ceph/ceph/pull/38000#pullrequestreview-526802351:...
- 09:48 AM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- Ilya Dryomov wrote:
> I suspect https://github.com/ceph/ceph/pull/50821.
Confirmed: https://github.com/ceph/ceph/...
- 01:39 PM CephFS Bug #63710 (Need More Info): client.5394 isn't responding to mclientcaps(revoke), ino 0x100000000...
- 04:28 AM CephFS Bug #63710 (In Progress): client.5394 isn't responding to mclientcaps(revoke), ino 0x10000000001 ...
- I couldn't reproduce it.
- 01:25 PM CephFS Bug #63726 (New): cephfs-shell: support bootstrapping via monitor address
- Right now, cephfs-shell requires the ceph configuration file. However, it's possible to connect to a ceph cluster by p...
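A sketch of the kind of bootstrapping the request asks for, using the Python rados binding (monitor address and keyring path are placeholders): the cluster handle can be built from a mon address passed in a conf dict instead of a ceph.conf file.
<pre>
import rados

# no ceph.conf: pass the monitor address (and a keyring) directly
cluster = rados.Rados(
    name="client.admin",
    conf={
        "mon_host": "192.0.2.10:6789",                     # placeholder monitor address
        "keyring": "/etc/ceph/ceph.client.admin.keyring",  # placeholder keyring path
    },
)
cluster.connect()
print(cluster.get_fsid())
cluster.shutdown()
</pre>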
- 01:03 PM Linux kernel client Bug #63586: libceph: osd_sparse_read: failed to allocate 2208169984 sparse read extents
- Greg Farnum wrote:
> If the OSD says it's returning 64K of read data, it seems like it can't have returned 220816998...
- 01:01 PM CephFS Bug #63722 (Fix Under Review): cephfs/fuse: renameat2 with flags has wrong semantics
- 12:24 PM CephFS Bug #54462: Command failed (workunit test fs/snaps/snaptest-git-ceph.sh) on smithi055 with status...
- pacific - https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific-distro-defa...
- 12:23 PM CephFS Backport #58992: pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific-distro-default-smithi...
- 12:23 PM CephFS Bug #62501: pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snaps...
- https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific-distro-default-smithi...
- 12:22 PM mgr Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED ...
- Seen in pacific run:
https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific...
- 12:22 PM RADOS Bug #52624: qa: "Health check failed: Reduced data availability: 1 pg peering (PG_AVAILABILITY)"
- Seen in pacific:
https://pulpito.ceph.com/yuriw-2023-11-30_15:46:13-fs-wip-yuri8-testing-2023-11-29-0706-pacific-dis...
- 11:50 AM CephFS Bug #63633: client: handle nullptr context in async i/o api
- we would need this in quincy since all other client trackers (that i created) have backports set up until quincy
- 11:39 AM CephFS Bug #63633 (In Progress): client: handle nullptr context in async i/o api
- 11:41 AM mgr Feature #59561 (In Progress): mgr/prometheus,exporter: add integration test against TargetDown al...
- 09:49 AM RADOS Bug #62777: rados/valgrind-leaks: expected valgrind issues and found none
- Radoslaw Zarzynski wrote:
> Hi Nitzan!
>
> IIUC the test doesn't properly wait for exit of the process. Am I corr...
- 08:50 AM Dashboard Backport #63725 (New): reef: mgr/dashboard: cephfs subvolume snapshot list
- 08:49 AM Dashboard Backport #63652 (In Progress): reef: mgr/dashboard: rgw ports from config entities like endpoint ...
- 08:43 AM Dashboard Bug #63237 (Pending Backport): mgr/dashboard: cephfs subvolume snapshot list
- 08:38 AM RADOS Bug #46318: mon_recovery: quorum_status times out
- Laura Flores wrote:
> /a/yuriw-2023-11-27_22:36:50-rados-wip-yuri-testing-2023-11-27-1028-pacific-distro-default-smi...
- 07:58 AM rgw Bug #63723: a
- I created this issue by mistake; please help remove it, thanks.
- 07:55 AM rgw Bug #63723 (Closed): a
- ds
- 04:42 AM CephFS Bug #63141 (In Progress): qa/cephfs: test_idem_unaffected_root_squash fails
- 04:16 AM CephFS Bug #63510: ceph fs (meta) data inconsistent with python shutil.copy()
- Frank Schilder wrote:
> Here is a test run showing the issue.
>
> Script with name @test-copy@:
>
> [...]
>
...
- 01:16 AM rgw Bug #63178: multisite: don't write data/bilog entries for lifecycle transitions/deletes
- https://github.com/ceph/ceph/pull/54759
12/03/2023
- 06:31 PM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- There were no changes in librbd in this area. I suspect https://github.com/ceph/ceph/pull/50821.
- 03:08 PM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- First observed in https://pulpito.ceph.com/yuriw-2023-11-10_19:55:46-rbd-wip-yuri5-testing-2023-11-10-0828-distro-def...
- 02:50 PM rbd Bug #63682: "rbd migration prepare" crashes when importing from http stream
- The crash is in the bowels of ASIO while trying to connect:...
- 06:02 PM sepia Support #63566: access to global build stats jenkins dashboard
- Hey Nizamudeen, which users do you want to have access? I will add them to one of the groups
- 09:04 AM CephFS Bug #63722 (Fix Under Review): cephfs/fuse: renameat2 with flags has wrong semantics
- Linux renameat2 system call has additional optional flags (RENAME_EXCHANGE, RENAME_NOREPLACE, RENAME_WHITEOUT) which ...
- 08:15 AM rgw Bug #63721 (New): deleting bucket failed: cls_bucket_list_unordered error in rgw_rados_operate (b...
- One of our users cannot delete several buckets with radosgw-admin due to the error like below:...
- 08:02 AM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- https://pulpito.ceph.com/matan-2023-11-30_16:06:51-crimson-rados-wip-matanb-crimson-osdmap-trimming-distro-crimson-sm...
- 08:02 AM crimson Bug #63556: TestClsRbd.directory_methods failed test
- https://pulpito.ceph.com/matan-2023-11-30_16:06:51-crimson-rados-wip-matanb-crimson-osdmap-trimming-distro-crimson-sm...
12/02/2023
- 01:32 PM rgw Bug #63495: Object gateway doesn't support POST DELETE
- Hi,
So we've identified that the entries are indeed missing from rgw_dns_name - I'm working with the relevant team...
- 12:56 PM rbd Bug #63654 (Fix Under Review): [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJE...
- 12:01 PM RADOS Bug #62470: Rook: OSD Crash Looping / Caught signal (Aborted) / thread_name:tp_osd_tp
- It might be helpful to see the full (or at least last 20K lines of) OSD log preceding the crash.
- 12:19 AM RADOS Bug #46318: mon_recovery: quorum_status times out
- /a/yuriw-2023-11-27_22:36:50-rados-wip-yuri-testing-2023-11-27-1028-pacific-distro-default-smithi/7469028
12/01/2023
- 11:58 PM Orchestrator Bug #63720 (New): cephadm: Cannot set values for --daemon-types, --services or --hosts when upgra...
- /a/yuriw-2023-11-29_18:10:24-rados-wip-yuri-testing-2023-11-27-1028-pacific-distro-default-smithi/7471181...
- 10:47 PM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- I have an example (non-https) where we see a failure with aws-sdk-java/1.12.367
https://gist.githubusercontent.c...
- 09:54 PM rgw Bug #63717 (Fix Under Review): radosgw-admin-rest: user listing fails assert out['truncated'] == ...
- 04:45 PM rgw Bug #63717: radosgw-admin-rest: user listing fails assert out['truncated'] == False
- https://github.com/ceph/ceph/pull/50359 seems the most likely culprit, since those radosgw-admin-rest jobs in https:/...
- 04:43 PM rgw Bug #63717 (Fix Under Review): radosgw-admin-rest: user listing fails assert out['truncated'] == ...
- a recent regression on main: http://qa-proxy.ceph.com/teuthology/cbodley-2023-12-01_15:54:49-rgw-main-distro-default-...
- 08:08 PM rbd Bug #63719 (New): [test] scribble()-based DiffIterate tests are too weak
- scribble()-based DiffIterate tests are too weak: at least two regressions that should have been caught by DiffIterate.Diff...
- 07:16 PM bluestore Bug #63618: Allocator configured with 64K alloc unit might get 4K requests
- Notes from today's discussion: https://pad.ceph.com/p/RCA_62282
- 07:04 PM RADOS Backport #63401 (Resolved): pacific: pybind: ioctx.get_omap_keys asserts if start_after parameter...
- 04:41 PM RADOS Backport #63401: pacific: pybind: ioctx.get_omap_keys asserts if start_after parameter is non-empty
- merged
- 05:48 PM RADOS Bug #52288 (Resolved): doc: clarify use of `rados rm` command
- 05:47 PM rgw Feature #63563 (In Progress): rgw: add s3select bytes processed and bytes returned to usage
- 05:47 PM RADOS Backport #52307 (Resolved): pacific: doc: clarify use of `rados rm` command
- 04:39 PM RADOS Backport #52307: pacific: doc: clarify use of `rados rm` command
- merged
- 05:43 PM rgw Feature #63718 (New): STS integration with Spiffe/Spire
- https://spiffe.io/
- 05:22 PM CephFS Backport #61829: pacific: qa: test_join_fs_vanilla is racy
- merged
- 01:05 PM CephFS Bug #63510: ceph fs (meta) data inconsistent with python shutil.copy()
- I collected the straces of the python commands for both runs in #63510#note-5 and attached them here. They are signif...
- 12:47 PM CephFS Bug #63510: ceph fs (meta) data inconsistent with python shutil.copy()
- Here is a test run showing the issue.
Script with name @test-copy@:...
- 11:28 AM nvme-of Bug #62693: Update nvme gw cli usage with proper entities
- Hey Rahul,
What's the issue here? Containers usually wrap the main command under the ENTRYPOINT, and the arguments...
- 09:52 AM CephFS Bug #63713 (Fix Under Review): mds: encode `bal_rank_mask` with a higher version
- 09:36 AM CephFS Bug #63713 (Fix Under Review): mds: encode `bal_rank_mask` with a higher version
- For details see - https://github.com/ceph/ceph/pull/53340#discussion_r1399255031
- 09:39 AM rbd Backport #63716 (In Progress): quincy: qa/workunits/rbd/cli_generic.sh: rbd support module comman...
- https://github.com/ceph/ceph/pull/54770
- 09:39 AM rbd Backport #63715 (In Progress): reef: qa/workunits/rbd/cli_generic.sh: rbd support module command ...
- https://github.com/ceph/ceph/pull/54769
- 09:39 AM rbd Backport #63714 (In Progress): pacific: qa/workunits/rbd/cli_generic.sh: rbd support module comma...
- https://github.com/ceph/ceph/pull/54771
- 09:33 AM rbd Bug #63673 (Pending Backport): qa/workunits/rbd/cli_generic.sh: rbd support module command not fa...
- 08:15 AM CephFS Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- I reverted the rocksdb compression PR and ran the tests.
http://qa-proxy.ceph.com/teuthology/khiremat-2023-12-01_0...
- 07:23 AM Dashboard Bug #63698: mgr/dashboard: 500 - Internal Server Error for RGW
- Nizamudeen A wrote:
> Hi Morteza,
>
> We have fixed this bug in our code https://github.com/ceph/ceph/pull/53323...
- 07:04 AM CephFS Bug #63712 (New): client: add test cases for sync io code paths
- Client::pwritev, Client::preadv aren't tested much, just a single simple test case in test/libcephfs/test.cc, so having...
- 06:09 AM CephFS Bug #63679: client: handle zero byte sync/async write cases
- from yesterday's conversation with venky during standup; fixing this on client side
- 06:08 AM CephFS Bug #63679: client: handle zero byte sync/async write cases
- Xiubo Li wrote:
> Dhairya Parmar wrote:
> > i experimented by removing the assertion, the test case passed but test...
- 06:02 AM Dashboard Bug #63608 (Resolved): mgr/dashboard: cephfs rename only works when fs is offline
- 03:21 AM CephFS Bug #63710: client.5394 isn't responding to mclientcaps(revoke), ino 0x10000000001 pending pAsLsX...
- ...
- 02:22 AM CephFS Bug #63710 (Need More Info): client.5394 isn't responding to mclientcaps(revoke), ino 0x100000000...
- This time it says the *Fs* caps is not released by the client, while from the client side when I stat the correspondi...
- 03:09 AM Ceph Support #63711 (New): How to transfer the cephfs metadata pool to a new pool
- The situation is as follows: I have a Ceph cluster that initially served as RBD block storage for OpenStack. Later, I...
- 03:02 AM Linux kernel client Bug #63586: libceph: osd_sparse_read: failed to allocate 2208169984 sparse read extents
- If the OSD says it's returning 64K of read data, it seems like it can't have returned 2208169984 separate extents? I ...
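A back-of-the-envelope check of that reasoning (the 16-byte per-extent descriptor size is only an assumption, to show the scale): bookkeeping for that many extents dwarfs the 64K the OSD claims to return, so the count itself must be bogus.
<pre>
extents = 2208169984          # count from the log message
read_len = 64 * 1024          # what the OSD says it returned

# assume ~16 bytes per extent descriptor (offset + length); the exact size doesn't matter
desc_bytes = extents * 16
print(desc_bytes / 2**30)     # ~32.9 GiB of descriptors alone

# even 1 byte of data per extent would need ~2.06 GiB, vs. a 64 KiB read
print(extents / 2**30, read_len / 2**10)
</pre>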
- 01:03 AM RADOS Bug #62225: pacific upgrade test fails when upgrading OSDs due to degraded pgs
- /a/lflores-2023-11-21_19:51:47-rados-wip-yuri2-testing-2023-11-13-0820-pacific-distro-default-smithi/7463960
11/30/2023
- 10:28 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- Quick update. The segfault only occurs when the url contains a trailing slash and index.html is omitted. If the trail...
- 09:35 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- A colleague was able to find a simple sequence that reproduces this segmentation fault consistently. Perform the foll...
- 03:16 PM rgw Bug #63684: RGW segmentation fault when reading object permissions via the swift API
- Adding some log info into the tracker:
-7> 2023-11-29T21:03:36.307+0000 7f5aa8ccb700 20 req 575854782490507380...
- 08:55 PM CephFS Tasks #63709 (New): mds: Plug the QuiesceProtocol implementation into the QuiesceAgent control in...
- 08:54 PM CephFS Tasks #63708 (New): mds: MDS message transport for inter-rank QuiesceDbManager communications
- 08:53 PM CephFS Tasks #63707 (Fix Under Review): mds: AdminSocket command to control the QuiesceDbManager
- As the first phase of the quiesce db integration into the MDS cluster, we should allow for a single-rank setup where ...
- 08:47 PM CephFS Tasks #63706 (New): mds: Integrate the QuiesceDbManager and the QuiesceAgent into the MDS rank
- Every MDS Rank should manage an instance of a manager and an agent. The instances should be provided with the require...
- 06:29 PM rgw Bug #63485 (Fix Under Review): inaccessible bucket: Error reading IAM Policy: Terminate parsing d...
- 05:15 PM rgw Backport #63705 (New): reef: notifications: add observability to persistent notification queues
- 05:15 PM rgw Bug #63672: qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1cb4.el8.x86_64
- the #sepia folks haven't figured out how to enable this repo yet. if that can't be resolved in a timely manner, we co...
- 05:08 PM rgw Feature #63704 (Pending Backport): notifications: add observability to persistent notification qu...
- following PRs are implementing the feature:
https://github.com/ceph/ceph/pull/52087
https://github.com/ceph/ceph/pu...
- 04:50 PM Ceph Backport #63476 (Resolved): reef: MClientRequest: properly handle ceph_mds_request_head_legacy fo...
- 04:44 PM Ceph Backport #63476: reef: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num_r...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54407
merged
- 04:49 PM rgw Bug #63560: retry_raced_bucket_write considerations
- If you'd prefer to discuss this in person, feel free to come to one of the Wed. refactoring meetings (on the communit...
- 04:48 PM rgw Bug #63560: retry_raced_bucket_write considerations
- In principle, you are probably correct. However, all the cases where put_info() uses retry_raced_bucket_write(), the...
- 04:33 PM rgw Bug #63542: Delete-Marker deletion inconsistencies
- RGWRados::get_obj_state() indeed returns -ENOENT if the object is a delete marker, and it's been that way for a long ...
- 04:02 PM CephFS Bug #63697 (In Progress): client: zero byte sync write fails
- 01:17 PM CephFS Bug #63697 (In Progress): client: zero byte sync write fails
- ...
- 03:56 PM rgw Bug #63355: test/cls_2pc_queue: fails during migration tests
- unless we backport the new persistent topic observability feature, we don't need to backport this fix.
this is just ... - 03:26 PM rgw Bug #63355 (Pending Backport): test/cls_2pc_queue: fails during migration tests
- 03:55 PM Ceph Feature #63703 (New): If a prefix is available, allow it be used to narrow the bounds of OMAP ite...
- In the current implementation of BlueStore::get_omap_iterator(), it creates an iterator with the bounds that cover th...
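The idea in the request, sketched generically in Python (hypothetical helper, not the BlueStore API): given a key prefix, the iteration bounds can be narrowed to [prefix, next-prefix) instead of covering the whole omap of the object.
<pre>
def prefix_bounds(prefix: bytes):
    """Return (lower, upper) such that every key starting with `prefix`
    satisfies lower <= key < upper, letting an ordered iterator stop early."""
    lower = prefix
    p = bytearray(prefix)
    # drop trailing 0xff bytes, then increment the last remaining byte
    while p and p[-1] == 0xFF:
        p.pop()
    upper = bytes(p[:-1] + bytes([p[-1] + 1])) if p else None  # None -> unbounded
    return lower, upper


keys = sorted([b"foo\x00a", b"foo\x00b", b"fop\x00x", b"zzz"])
lower, upper = prefix_bounds(b"foo\x00")
print([k for k in keys if k >= lower and (upper is None or k < upper)])
# [b'foo\x00a', b'foo\x00b'] -- the iterator never needs to visit 'fop' or 'zzz'
</pre>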
- 03:28 PM rgw Bug #63153 (In Progress): Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Check...
- 03:28 PM bluestore Bug #63106 (Fix Under Review): ObjectStore/StoreTestSpecificAUSize.DeferredOnBigOverwrite/1 is ra...
- 03:28 PM rgw Bug #63245 (Can't reproduce): rgw/s3select: crashes in test_progress_expressions in run_s3select_...
- 03:24 PM rgw Bug #63495: Object gateway doesn't support POST DELETE
- any update on this?
- 03:17 PM rgw Bug #63613 (Fix Under Review): [rgw][lc] using custom lc schedule (work time) may cause lc proces...
- 03:11 PM rgw Bug #20802 (Fix Under Review): rgw multisite: clean up deleted bucket instance and index objects
- 03:10 PM rgw Bug #61561 (Can't reproduce): Trino does not stop the statement processing upon a failure of the ...
- 03:04 PM rgw Backport #63702 (In Progress): quincy: kafka crashed during message callback in teuthology
- 02:07 PM rgw Backport #63702 (In Progress): quincy: kafka crashed during message callback in teuthology
- https://github.com/ceph/ceph/pull/54737
- 02:31 PM rgw Backport #63701 (In Progress): reef: kafka crashed during message callback in teuthology
- 02:07 PM rgw Backport #63701 (In Progress): reef: kafka crashed during message callback in teuthology
- https://github.com/ceph/ceph/pull/54736
- 02:00 PM CephFS Bug #63700 (New): qa: test_cd_with_args failure
- ...
- 01:59 PM rgw Bug #63314 (Pending Backport): kafka crashed during message callback in teuthology
- 01:59 PM Dashboard Bug #63698: mgr/dashboard: 500 - Internal Server Error for RGW
- Hi Morteza,
We have fixed this bug in our code https://github.com/ceph/ceph/pull/53323, but it's still waiting for ...
- 01:34 PM Dashboard Bug #63698 (New): mgr/dashboard: 500 - Internal Server Error for RGW
- h3. Description of problem
While opening Object Gateway tab from left sidebar I get the error
500 - Internal S...
- 01:56 PM CephFS Bug #63699 (New): qa: failed cephfs-shell test_reading_conf
- ...
- 01:34 PM CephFS Bug #63471: client: error code inconsistency when accessing a mount of a deleted dir
- On the latest main, I am seeing a RADOS time out error. I will dig further into this....
- 12:59 PM Dashboard Bug #63696 (New): mgr/dashboard: prometheus API is being pinged when no prometheus-api-host is pr...
- if the prometheus-api-host is empty, it still tries to do a ping on the prometheus endpoint and ends up failing
<pr...
- 12:59 PM Orchestrator Bug #63678 (Fix Under Review): Some inconsistencies on the dashboard "Services" tab
- 12:58 PM Orchestrator Bug #63694 (Fix Under Review): List of physical devices on a node is missing some entries
- 10:11 AM Orchestrator Bug #63694 (Fix Under Review): List of physical devices on a node is missing some entries
- 10:51 AM rgw Feature #63695 (New): kafka: allow more than one broker in the bootsrap list
- currently we provide the kafka endpoint in the topic configuration. but kafka allows for multiple bootstrap brokers t...
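For reference, a multi-broker bootstrap list on the client side looks like this (kafka-python sketch, placeholder broker addresses); the feature request is to accept the same kind of list in the topic/endpoint configuration instead of a single broker.
<pre>
from kafka import KafkaProducer

# a bootstrap list: the client can discover the cluster through any live broker
producer = KafkaProducer(
    bootstrap_servers=["kafka1.example.com:9092",
                       "kafka2.example.com:9092",
                       "kafka3.example.com:9092"],
)
producer.send("bucket-notifications", b"event payload")
producer.flush()
</pre>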
- 09:47 AM CephFS Bug #63685 (Rejected): mds: FAILED ceph_assert(_head.empty())
- Venky Shankar wrote:
> Spoke to Xiubo - this branch had changes from https://github.com/ceph/ceph/pull/54725.
Oka...
- 05:07 AM CephFS Bug #63685: mds: FAILED ceph_assert(_head.empty())
- Spoke to Xiubo - this branch had changes from https://github.com/ceph/ceph/pull/54725.
- 03:19 AM CephFS Bug #63685: mds: FAILED ceph_assert(_head.empty())
- I can reproduce this and it has blocked my tests. I will work on it today.
- 02:26 AM CephFS Bug #63685 (Rejected): mds: FAILED ceph_assert(_head.empty())
- When I was running some xfstests locally I hit the mds crash:...
- 09:27 AM CephFS Bug #59119: mds: segmentation fault during replay of snaptable updates
- https://tracker.ceph.com/issues/54741 was seen in a cluster where it's unlikely that "flush journal" was run. It's po...
- 09:13 AM Dashboard Backport #63693 (New): reef: mgr/dashboard: add bucket tags in bucket form and details
- 09:07 AM Dashboard Feature #63412 (Pending Backport): mgr/dashboard: add bucket tags in bucket form and details
- 08:09 AM mgr Bug #62641: mgr/(object_format && nfs/export): enhance nfs export update failure response
- backports withheld until https://tracker.ceph.com/issues/63692 is addressed. Updated the respective backport trackers...
- 06:00 AM mgr Bug #62641 (Pending Backport): mgr/(object_format && nfs/export): enhance nfs export update failu...
- Holding back the backport till we decide on https://github.com/ceph/ceph/pull/53431#issuecomment-1823873370
Dhairy...
- 08:05 AM mgr Backport #63691: reef: mgr/(object_format && nfs/export): enhance nfs export update failure response
- need to discuss and address https://tracker.ceph.com/issues/63692 first (during bug scrub) before moving on with the ...
- 06:09 AM mgr Backport #63691 (New): reef: mgr/(object_format && nfs/export): enhance nfs export update failure...
- 08:03 AM mgr Backport #63690: quincy: mgr/(object_format && nfs/export): enhance nfs export update failure res...
- need to discuss and address https://tracker.ceph.com/issues/63692 first (during bug scrub) before moving on with the ...
- 06:09 AM mgr Backport #63690 (New): quincy: mgr/(object_format && nfs/export): enhance nfs export update failu...
- 07:57 AM CephFS Bug #63692 (New): mgr/nfs: return tuple convention not being followed in case of a failure
- This came to the light while discussing [0] with venky where we saw that the tuple returned by the Responder decorato...
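For context, the convention referred to here is that ceph-mgr command handlers report results as a (retval, stdout, stderr) tuple, with a failure carrying a non-zero retval and the message in stderr. A minimal sketch (handler name and error text are hypothetical):
<pre>
import errno


def apply_export(update_ok: bool):
    """Toy mgr-style handler returning (retval, stdout, stderr)."""
    if not update_ok:
        # failure: non-zero retval, message goes to stderr, stdout stays empty
        return -errno.EINVAL, "", "export update failed: invalid block"
    return 0, "export updated", ""


print(apply_export(False))   # (-22, '', 'export update failed: invalid block')
</pre>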
- 07:46 AM CephFS Bug #63679: client: handle zero byte sync/async write cases
- Dhairya Parmar wrote:
> i experimented by removing the assertion, the test case passed but test case execution could...
- 06:13 AM CephFS Bug #63679: client: handle zero byte sync/async write cases
- Venky Shankar wrote:
> Does this fail in the sync path? If not, then it could be due to the async changes.
I see ...
- 05:40 AM CephFS Bug #63679: client: handle zero byte sync/async write cases
- Does this fail in the sync path? If not, then it could be due to the async changes.
- 07:10 AM CephFS Backport #63688 (In Progress): reef: qa/cephfs: improvements for name generators in test_volumes.py
- 05:19 AM CephFS Backport #63688 (In Progress): reef: qa/cephfs: improvements for name generators in test_volumes.py
- https://github.com/ceph/ceph/pull/54729
- 06:27 AM CephFS Backport #63687 (In Progress): quincy: qa/cephfs: improvements for name generators in test_volume...
- 05:19 AM CephFS Backport #63687 (In Progress): quincy: qa/cephfs: improvements for name generators in test_volume...
- https://github.com/ceph/ceph/pull/54727
- 05:54 AM Orchestrator Bug #63689 (New): "REFRESHED" Column is not being refreshed as expected
- I have installed a Ceph cluster using Cephadm, and the version I am currently using is 17.2.6.
However, I have notic...
- 05:45 AM CephFS Bug #59343: qa: fs/snaps/snaptest-multiple-capsnaps.sh failed
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Xiubo - this is seen in reef branch - https://pulpito.ceph.com/vshankar-...
- 05:23 AM CephFS Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Venky Shankar wrote:
> Any update on this, Xiubo?
Already did this weeks ago, please see Jira https://issues.red...
- 05:04 AM CephFS Bug #62510: snaptest-git-ceph.sh failure with fs/thrash
- Any update on this, Xiubo?
- 05:17 AM CephFS Bug #63680 (Pending Backport): qa/cephfs: improvements for name generators in test_volumes.py
- 04:49 AM Dashboard Bug #63686 (New): mgr/dashboard: adapt service creation form to support nvmeof creation
- We'll need to add form entries based on https://github.com/ceph/ceph/blob/main/src/python-common/ceph/deployment/serv...
- 01:25 AM CephFS Bug #43393 (Fix Under Review): qa: add support/qa for cephfs-shell on CentOS 9 / RHEL9
- 01:19 AM CephFS Bug #43393 (In Progress): qa: add support/qa for cephfs-shell on CentOS 9 / RHEL9
11/29/2023
- 10:49 PM RADOS Bug #61968: rados::connect() gets segment fault
- I believe this is the same issue being reported on fedora 39 at https://bugzilla.redhat.com/show_bug.cgi?id=2252160
- 10:12 PM rbd Bug #63673 (Fix Under Review): qa/workunits/rbd/cli_generic.sh: rbd support module command not fa...
- 10:47 AM rbd Bug #63673 (In Progress): qa/workunits/rbd/cli_generic.sh: rbd support module command not failing...
- 01:23 AM rbd Bug #63673 (Pending Backport): qa/workunits/rbd/cli_generic.sh: rbd support module command not fa...
- Observed in https://pulpito.ceph.com/yuriw-2023-11-27_17:14:54-rbd-wip-yuri10-testing-2023-11-22-1112-distro-default-...
- 09:54 PM rgw Bug #63684 (New): RGW segmentation fault when reading object permissions via the swift API
- The specifics of the circumstances under which this is reproduced are still not completely understood, but a swift ge...
- 09:54 PM rgw Bug #63672: qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1cb4.el8.x86_64
- Zack is trying to get this repo enabled in https://github.com/ceph/ceph-cm-ansible/pull/749
- 06:56 PM rgw Bug #63672: qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1cb4.el8.x86_64
- > are we positive that lua-devel is a runtime dependency?
luarocks compiles the packages, so it needs lua.h which com...
- 02:15 PM rgw Bug #63672: qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1cb4.el8.x86_64
- are we positive that @lua-devel@ is a runtime dependency? did you try without and see failures? i don't understand wh...
- 09:54 AM rgw Bug #63672: qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1cb4.el8.x86_64
- in the above test, lua-devel is found for:
* centos8 as part of the "powertools" repo
* centos9 as part of the "CRB...
- 06:02 PM rgw Documentation #63683 (New): doc: document migration from ldap auth to STS
- document a migration strategy so the ldap auth engine can be deprecated
- 05:19 PM rbd Bug #63682 (New): "rbd migration prepare" crashes when importing from http stream
- Seen on 22.04 and 9.stream so far:...
- 04:02 PM rgw-testing Backport #63639: reef: lua integration tests
- the backport will need to include the resolution for https://tracker.ceph.com/issues/63672
reef is still testing a...
- 03:42 PM RADOS Bug #42597: mon and mds ok-to-stop commands should validate input names exist to prevent misleadi...
- Created a PR for this issue: https://github.com/ceph/ceph/pull/54719
- 03:24 PM bluestore Bug #62815: hybrid/avl allocators might be very ineffective when serving bluefs allocations
- Pacific backport is https://github.com/ceph/ceph/pull/54434
- 03:20 PM rgw Bug #63583 (Resolved): [Errno 13] Permission denied: '/vstart_runner.log'
- unclear whether this will need backports - i guess we'll see whether the errors show up on other releases
- 03:07 PM rbd Bug #63681: io_context_pool threads deadlock possibility when journaling is enabled
- Oh, I totally forgot to mention in the description - by increasing librados_thread_count to something high, like 15, ...
- 03:06 PM rbd Bug #63681 (New): io_context_pool threads deadlock possibility when journaling is enabled
- Ilya discovered that the stress test introduced in https://github.com/ceph/ceph/pull/54377 would sometimes hang when ...
- 02:17 PM RADOS Bug #17170 (Resolved): mon/monclient: update "unable to obtain rotating service keys when osd ini...
- 02:14 PM CephFS Backport #59198 (Rejected): pacific: cephfs: qa enables kclient for newop test
- https://github.com/ceph/ceph/pull/50992#issuecomment-1831975609
- 02:13 PM CephFS Backport #58992 (Rejected): pacific: Test failure: test_acls (tasks.cephfs.test_acls.TestACLs)
- https://github.com/ceph/ceph/pull/52584#issuecomment-1831972637
- 02:07 PM CephFS Backport #62514 (Rejected): pacific: Error: Unable to find a match: python2 with fscrypt tests
- https://github.com/ceph/ceph/pull/53625#issuecomment-1831964165
- 02:07 PM CephFS Backport #62798 (Rejected): pacific: qa: run nfs related tests with fs suite
- https://github.com/ceph/ceph/pull/53905#issuecomment-1831963103
- 02:06 PM CephFS Backport #55865 (Rejected): pacific: qa/cephfs: setting to sudo to True has no effect on _run_pyt...
- https://github.com/ceph/ceph/pull/54180#issuecomment-1831960244
- 02:05 PM CephFS Backport #55863 (Rejected): pacific: qa/cephfs: mon cap not properly tested in caps_helper.py
- https://github.com/ceph/ceph/pull/54182#issuecomment-1831959758
- 02:05 PM CephFS Backport #52954 (Rejected): pacific: qa/xfstest-dev.py: update to include centos stream
- https://github.com/ceph/ceph/pull/54184#issuecomment-1831958537
- 02:02 PM RADOS Backport #58333 (Rejected): pacific: mon/monclient: update "unable to obtain rotating service key...
- https://github.com/ceph/ceph/pull/54556#issuecomment-1831952739
- 01:47 PM CephFS Bug #63680 (Pending Backport): qa/cephfs: improvements for name generators in test_volumes.py
- Generate a name that is shorter and easier to remember. This helps while investigating test failures with vstart_runn...
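A sketch of the kind of generator described (word lists and format are hypothetical): short adjective-noun names with a small numeric suffix are much easier to spot and grep for in vstart_runner logs than long random strings.
<pre>
import random

ADJECTIVES = ["brisk", "calm", "dusty", "mellow", "quick"]
NOUNS = ["otter", "falcon", "maple", "comet", "pebble"]


def short_name(prefix="subvol"):
    # e.g. "subvol_quick_otter_07" -- short, readable, unlikely to collide within one run
    return "{}_{}_{}_{:02d}".format(
        prefix, random.choice(ADJECTIVES), random.choice(NOUNS), random.randrange(100))


print(short_name())
print(short_name("group"))
</pre>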
- 01:46 PM CephFS Bug #61950 (Need More Info): mds/OpenFileTable: match MAX_ITEMS_PER_OBJ does not honor osd_deep_s...
- cannot reproduce
- 01:37 PM CephFS Backport #63675 (In Progress): quincy: High memory usage on standby replay MDS
- 10:24 AM CephFS Backport #63675 (In Progress): quincy: High memory usage on standby replay MDS
- https://github.com/ceph/ceph/pull/54717
- 01:36 PM CephFS Backport #63676 (In Progress): reef: High memory usage on standby replay MDS
- 10:24 AM CephFS Backport #63676 (In Progress): reef: High memory usage on standby replay MDS
- https://github.com/ceph/ceph/pull/54716
- 01:21 PM Linux kernel client Bug #59259: KASAN: use-after-free Write in encode_cap_msg
- Talked with Rishabh, assign it back to Rishabh.
- 08:18 AM Linux kernel client Bug #59259: KASAN: use-after-free Write in encode_cap_msg
- Hi Rishabh,
I will take this tracker. As I remember, there was one customer BZ that also hit this.
Thanks
- 01:09 PM CephFS Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- I suggested that a test be run with disabling rocksdb compression to check if the space utilization shoots up and mak...
- 01:06 PM CephFS Bug #63679: client: handle zero byte sync/async write cases
- i experimented by removing the assertion, the test case passed but test case execution could not be completed, its st...
- 01:01 PM CephFS Bug #63679 (New): client: handle zero byte sync/async write cases
- A bit of background: libcephfs is now capable of performing async I/O and a new api `Client::ll_preadv_pwritev` is av...
- 12:39 PM Orchestrator Bug #63678 (Fix Under Review): Some inconsistencies on the dashboard "Services" tab
- OSD daemon running count double of total on "Services" tab:
https://github.com/rook/rook/issues/13278
MDS daemo...
- 11:33 AM Dashboard Bug #58827 (Resolved): mgr/dashboard: update bcrypt dep in requirements.txt
- 11:33 AM Dashboard Backport #58829 (Resolved): pacific: mgr/dashboard: update bcrypt dep in requirements.txt
- 11:24 AM CephFS Backport #63575 (In Progress): reef: qa: ModuleNotFoundError: No module named XXXXXX
- 11:22 AM CephFS Backport #63588 (In Progress): pacific: qa: fs:mixed-clients kernel_untar_build failure
- 11:21 AM CephFS Backport #63589 (In Progress): quincy: qa: fs:mixed-clients kernel_untar_build failure
- 11:20 AM CephFS Backport #63590 (In Progress): reef: qa: fs:mixed-clients kernel_untar_build failure
- 11:02 AM ceph-volume Backport #63677 (In Progress): pacific: ceph-volume prepare doesn't use partitions as-is anymore
- 10:56 AM ceph-volume Backport #63677 (In Progress): pacific: ceph-volume prepare doesn't use partitions as-is anymore
- https://github.com/ceph/ceph/pull/54709
- 11:02 AM ceph-volume Bug #54053 (Won't Fix): activate --all command stops on the first error and might leave part of v...
- 11:01 AM rgw Bug #58034 (Resolved): RGW misplaces index entries after dynamically resharding bucket
- 11:00 AM rgw Backport #58171 (Resolved): quincy: RGW misplaces index entries after dynamically resharding bucket
- 11:00 AM ceph-volume Bug #45443 (Won't Fix): ceph-volume: support mode-specific availability fields in inventory subco...
- 10:59 AM ceph-volume Bug #45443 (Rejected): ceph-volume: support mode-specific availability fields in inventory subcom...
- 10:58 AM ceph-volume Bug #38044 (Won't Fix): implement --version flag
- 10:57 AM ceph-volume Bug #62002 (Resolved): raw list shouldn't list lvm OSDs
- 10:55 AM ceph-volume Bug #58943 (Resolved): ceph-volume's deactivate doesn't close encrypted volumes
- 10:33 AM ceph-volume Bug #58569 (Fix Under Review): Add the ability to configure options for ceph-volume to pass to cr...
- 10:31 AM ceph-volume Bug #62320 (Fix Under Review): lvm list should filter also on vg name
- 10:27 AM ceph-volume Bug #48797 (Duplicate): lvm batch calculates wrong extends
- Closed as dup of #47758 (now resolved)
- 10:14 AM CephFS Bug #48673 (Pending Backport): High memory usage on standby replay MDS
- 10:03 AM teuthology Support #63655: ssh permissions pdiazbou@teuthology...
- pere@main QO737Etb2x9bOia0GDLiSw 4f94f4f946f4651c6347076de1834ccc92cdb55455b8f06cb5f99cf9347876f4
- 08:05 AM ceph-volume Bug #57907 (Duplicate): ceph-volume complains about "Insufficient space (<5GB)" on 1.75TB device
- #58591
- 08:03 AM ceph-volume Backport #63311 (In Progress): pacific: report "Insufficient space (<5GB)" even when disk size is...
- 07:48 AM ceph-volume Backport #63313 (In Progress): quincy: report "Insufficient space (<5GB)" even when disk size is ...
- 07:48 AM ceph-volume Backport #63312 (In Progress): reef: report "Insufficient space (<5GB)" even when disk size is su...
- 07:45 AM ceph-volume Bug #58306 (Duplicate): empty disk rejected with 'Insufficient space (<5GB)'
- #58591
- 07:15 AM CephFS Bug #63265 (In Progress): qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
- 07:15 AM CephFS Bug #63265: qa: fs/snaps/snaptest-git-ceph.sh failed when reseting to tag 'v0.1'
- Xiubo Li wrote:
> https://pulpito.ceph.com/yuriw-2023-10-16_14:43:00-fs-wip-yuri4-testing-2023-10-11-0735-reef-distr... - 06:45 AM CephFS Bug #62067 (New): ffsb.sh failure "Resource temporarily unavailable"
- /a/vshankar-2023-11-14_06:10:15-fs-wip-vshankar-testing-20231107.042705-distro-default-smithi/7456853
- 06:45 AM CephFS Bug #62067: ffsb.sh failure "Resource temporarily unavailable"
- Kotresh, please RCA.
- 04:19 AM bluestore Backport #63441 (Resolved): pacific: resharding RocksDB after upgrade to Pacific breaks OSDs
- 04:18 AM bluestore Backport #63661 (Resolved): reef: Typo in reshard example
- 03:01 AM Ceph Bug #63674 (New): Ceph incorrectly calculates RAW space for OSD based on VDO
- Tested on Ceph versions quincy, reef
Linux distribution - Rocky Linux 9.3.
VDO 8.2.1.6 from the repository.
This p... - 12:27 AM CephFS Bug #63646: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
- Venky Shankar wrote:
> Xiubo Li wrote:
> > Venky Shankar wrote:
[...]
> >
> > It will miss the state machine wh...
11/28/2023
- 10:44 PM rgw Bug #63672 (Fix Under Review): qa: nothing provides lua-devel needed by ceph-2:18.0.0-7530.g67eb1...
- QA tests on RHEL broken by https://github.com/ceph/ceph/pull/52931
see: https://pulpito.ceph.com/rishabh-2023-11-2... - 08:03 PM bluestore Backport #63441: pacific: resharding RocksDB after upgrade to Pacific breaks OSDs
- merged
- 08:01 PM bluestore Backport #59178: pacific: BLK/Kernel: Improve protection against running one OSD twice
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53567
merged - 07:59 PM rgw Backport #61433: pacific: rgw: multisite data log flag not used
- merged
- 07:58 PM rgw Backport #59610: pacific: sts: every AssumeRole writes to the RGWUserInfo
- merged
- 07:58 PM rgw Backport #58584: pacific: Keys returned by Admin API during user creation on secondary zone not v...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51602
merged - 07:55 PM rgw Backport #59729 (Resolved): pacific: S3 CompleteMultipartUploadResult has empty ETag element
- 07:55 PM rgw Backport #59729: pacific: S3 CompleteMultipartUploadResult has empty ETag element
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51445
merged - 07:55 PM rgw Backport #61175: pacific: Swift static large objects are not deleted when segment object path set...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51600
merged - 07:55 PM Ceph Backport #55063 (Resolved): pacific: Attempting to modify bucket sync pipe results in segfault
- 07:54 PM Ceph Backport #55063: pacific: Attempting to modify bucket sync pipe results in segfault
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51256
merged - 07:54 PM rgw Backport #55149 (Resolved): pacific: rgw: Update "CEPH_RGW_DIR_SUGGEST_LOG_OP" for remove entries
- 07:53 PM rgw Backport #55149: pacific: rgw: Update "CEPH_RGW_DIR_SUGGEST_LOG_OP" for remove entries
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50540
merged - 07:54 PM rgw Bug #52302 (Resolved): assumed-role: s3api head-object returns 403 Forbidden, even if role has Li...
- 07:53 PM rgw Backport #53648 (Resolved): pacific: assumed-role: s3api head-object returns 403 Forbidden, even ...
- 07:51 PM rgw Backport #53648: pacific: assumed-role: s3api head-object returns 403 Forbidden, even if role has...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44471
merged - 07:53 PM rgw Bug #49797 (Resolved): rgw/sts: chunked upload fails using STS temp credentials generated by GetS...
- 07:52 PM rgw Backport #52785 (Resolved): pacific: rgw/sts: chunked upload fails using STS temp credentials gen...
- 07:49 PM rgw Backport #52785: pacific: rgw/sts: chunked upload fails using STS temp credentials generated by G...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44463
merged - 07:52 PM rgw Backport #58234: pacific: s3:ListBuckets response limited to 1000 buckets (by default) since Octopus
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/49526
merged - 07:52 PM rgw Backport #55500: pacific: Segfault when Open Policy Agent authorization is enabled
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/46106
merged - 07:50 PM rgw Backport #52778: pacific: make fetching of certs while validating tokens, more generic.
- merged
- 06:55 PM teuthology Bug #63671 (Closed): RERUN options seems broken "No module named 'lxml'"
- 06:54 PM teuthology Bug #63671: RERUN options seems broken "No module named 'lxml'"
- running `bootstrap` fixed it
yuriw@teuthology ~/teuthology [18:53:21]> rm -rf virtualenv/
yuriw@teuthology ~/teu... - 06:34 PM teuthology Bug #63671 (Closed): RERUN options seems broken "No module named 'lxml'"
- ...
- 05:55 PM rgw Bug #63314 (Fix Under Review): kafka crashed during message callback in teuthology
- 05:49 PM rgw Bug #63314: kafka crashed during message callback in teuthology
- crash is happening in the dtor of the unique_lock defined here (when it goes out of scope): https://github.com/ceph/c...
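For readers skimming this entry, a minimal C++ sketch of the failure mode being described (hypothetical types, not the actual RGW Kafka code): if the object that owns the mutex is torn down inside the callback, the std::unique_lock destructor later unlocks freed memory.
<pre><code class="cpp">
#include <memory>
#include <mutex>

// Hypothetical stand-in for a connection guarded by its own mutex.
struct Connection {
  std::mutex lock;
};

std::unique_ptr<Connection> conn = std::make_unique<Connection>();

void handle_message() {
  std::unique_lock<std::mutex> l(conn->lock);  // guards connection state
  // ... the message callback runs; if it decides the connection is dead
  // and destroys it here, the mutex disappears while still locked ...
  conn.reset();
  // when 'l' goes out of scope its destructor calls unlock() on memory
  // that was already freed, so the crash surfaces in the dtor.
}

int main() { handle_message(); }
</code></pre>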
- 05:43 PM bluestore Backport #63660 (In Progress): pacific: Typo in reshard example
- 04:57 PM bluestore Backport #63660 (In Progress): pacific: Typo in reshard example
- https://github.com/ceph/ceph/pull/54696
- 05:42 PM bluestore Backport #63662 (In Progress): quincy: Typo in reshard example
- 04:57 PM bluestore Backport #63662 (In Progress): quincy: Typo in reshard example
- https://github.com/ceph/ceph/pull/54695
- 05:41 PM bluestore Backport #63661 (In Progress): reef: Typo in reshard example
- 04:57 PM bluestore Backport #63661 (Resolved): reef: Typo in reshard example
- https://github.com/ceph/ceph/pull/54694
- 05:24 PM CephFS Feature #63670: mds,client: add light-weight quiesce protocol
- This tracker is optional and can be done in the future if desired.
- 05:23 PM CephFS Feature #63670 (New): mds,client: add light-weight quiesce protocol
- This protocol would be lighter weight than the cap-revoke version (#63664) by having the MDS tell clients to halt wri...
- 05:19 PM CephFS Tasks #63669 (New): qa: add teuthology tests for quiescing a group of subvolumes
- This will probably require a number of parallel workload tasks with periodic crash consistent snapshots being execute...
- 05:16 PM CephFS Feature #63668 (New): pybind/mgr/volumes: add quiesce protocol API
- Wires into libcephfs/cephfs.pyx.
- 05:16 PM CephFS Feature #63667 (New): client,libcephfs,cephfs.pyx: add quiesce protocol
- <stub>
- 05:15 PM CephFS Feature #63666 (Fix Under Review): mds: QuiesceAgent to execute quiesce operations on an MDS rank
- QuiesceAgent is the layer responsible for converting updates from the QuiesceDbManager into actual quiesce requests a...
- 05:13 PM CephFS Feature #63665 (Fix Under Review): mds: QuiesceDb to manage subvolume quiesce state
- @QuiesceDb@ is an ephemeral versioned replicated database holding a collection of @QuiesceSet@ s. Each quiesce set ca...
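To make the shape of this more concrete, a loose C++ sketch of the structures described above (field names and states are guesses for illustration only, not the real mds/QuiesceDb types):
<pre><code class="cpp">
#include <cstdint>
#include <map>
#include <string>

// Assumed states for illustration; the real set of states may differ.
enum class QuiesceState { Quiescing, Quiesced, Releasing, Released };

// A named quiesce set: a group of subvolume roots and their per-member state.
struct QuiesceSet {
  uint64_t version = 0;                         // bumped on every change
  std::map<std::string, QuiesceState> members;  // subvolume root -> state
};

// The ephemeral, versioned database: a collection of quiesce sets by id.
struct QuiesceDb {
  uint64_t db_version = 0;
  std::map<std::string, QuiesceSet> sets;
};

int main() {
  QuiesceDb db;
  db.sets["set-1"].members["/volumes/group/subvol1"] = QuiesceState::Quiescing;
  db.sets["set-1"].version++;
  db.db_version++;
}
</code></pre>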
- 05:12 PM CephFS Feature #63664 (In Progress): mds: add quiesce protocol for halting I/O on subvolumes
- This is for the cap revoke version of the protocol.
- 05:11 PM CephFS Feature #63663 (In Progress): mds,client: add crash-consistent snapshot support
- <stub>
- 04:50 PM bluestore Bug #63436 (Pending Backport): Typo in reshard example
- 04:48 PM RADOS Backport #62996 (Resolved): pacific: Add detail description for delayed op in osd log file
- 04:43 PM bluestore Bug #63618: Allocator configured with 64K alloc unit might get 4K requests
- PR https://github.com/ceph/ceph/pull/48854 "os/bluestore: enable 4K allocation unit for BlueFS"
was created with cas... - 04:29 PM RADOS Bug #56136 (Resolved): [Progress] Do not show NEW PG_NUM value for pool if autoscaler is set to off
- 04:28 PM RADOS Backport #56649 (Resolved): pacific: [Progress] Do not show NEW PG_NUM value for pool if autoscal...
- 04:01 PM RADOS Backport #56649: pacific: [Progress] Do not show NEW PG_NUM value for pool if autoscaler is set t...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53464
merged - 04:28 PM RADOS Bug #52781 (Resolved): shard-threads cannot wakeup bug
- 04:27 PM RADOS Backport #52841 (Resolved): pacific: shard-threads cannot wakeup bug
- 04:00 PM RADOS Backport #52841: pacific: shard-threads cannot wakeup bug
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/51262
merged - 04:21 PM crimson Bug #63647 (In Progress): SnapTrimEvent AddressSanitizer: heap-use-after-free
- 01:39 AM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- https://pulpito.ceph.com/yingxin-2023-11-27_09:03:31-crimson-rados-wip-yingxin-crimson-make-crosscore-send-ordered-di...
- 04:04 PM CephFS Fix #63634: [RFC] limit iov structures to 1024 while performing async I/O
- The limit to IOV_MAX should not be a problem for the intended consumer, Ganesha. At the moment, Ganesha doesn't submi...
- 02:11 PM CephFS Fix #63634: [RFC] limit iov structures to 1024 while performing async I/O
- Dhairya Parmar wrote:
> This tracker is just to get initial thoughts/opinions about limiting the no. of iovec struct... - 04:00 PM RADOS Backport #61822: pacific: osdmaptool crush
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52203
merged - 03:59 PM RADOS Backport #59085: pacific: cache tier set proxy faild
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50552
merged - 03:59 PM mgr Backport #58805: pacific: ceph mgr fail after upgrade to pacific
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/50194
merged - 03:39 PM rgw Documentation #63659 (New): rgw_lc_max_wp_worker should be increased not decreased
- In the documentation https://docs.ceph.com/en/latest/radosgw/config-ref/#lifecycle-settings it reads in the last para...
- 02:48 PM RADOS Bug #62918 (Fix Under Review): all write io block but no flush is triggered
- 02:45 PM mgr Bug #63615 (Fix Under Review): mgr: consider raising priority of MMgrBeacon
- 02:40 PM CephFS Bug #63519 (Triaged): ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
- Ongoing discussion here - https://github.com/ceph/ceph/pull/53340#discussion_r1399255031
- 02:39 PM CephFS Bug #63538 (Need More Info): mds: src/mds/Locker.cc: 2357: FAILED ceph_assert(!cap->is_new())
- No logs and no core available. Checking back with folks who reported this (downstream).
- 02:38 PM RADOS Backport #63651 (In Progress): reef: Ceph-object-store to skip getting attrs of pgmeta objects
- 02:37 PM RADOS Backport #63650 (In Progress): quincy: Ceph-object-store to skip getting attrs of pgmeta objects
- 02:37 PM RADOS Backport #63649 (In Progress): pacific: Ceph-object-store to skip getting attrs of pgmeta objects
- 02:25 PM CephFS Bug #63120 (Triaged): mgr/nfs: support providing export ID while creating exports using 'nfs expo...
- 02:25 PM CephFS Bug #63120: mgr/nfs: support providing export ID while creating exports using 'nfs export create ...
- Dhairya Parmar wrote:
> h3. TL;DR: Since we can provide export_ids while creating exports using 'nfs export apply -i... - 02:18 PM CephFS Bug #63473 (Triaged): fsstressh.sh fails with errno 124
- 02:18 PM CephFS Bug #63473: fsstressh.sh fails with errno 124
- Rishabh, could you please link the failed job here?
- 11:02 AM RADOS Bug #63524: ceph_test_rados create object without reference
- ...
- 07:44 AM CephFS Bug #54833 (Fix Under Review): crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<ML...
- 07:07 AM RADOS Bug #63658: OSD trim_maps - possible too slow lead to using too much storage space
- https://github.com/ceph/ceph/pull/54686
- 06:54 AM RADOS Bug #63658: OSD trim_maps - possible too slow lead to using too much storage space
- ...
- 06:07 AM RADOS Bug #63658: OSD trim_maps - possible too slow lead to using too much storage space
- @jianwei zhang
do you have steps to reproduce the OSDMap accumulating inside the OSD in your first scenario ? - 05:40 AM RADOS Bug #63658 (Fix Under Review): OSD trim_maps - possible too slow lead to using too much storage s...
current osdmap trim code logic in ceph-osd:
1. osd receives MOSDMap sent from mon or osd, maybe 40 osdmaps, will c...- 06:19 AM CephFS Bug #63646: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
- Xiubo Li wrote:
> Venky Shankar wrote:
> > Venky Shankar wrote:
> > > Discussion from another channel:
> > >
> ... - 05:20 AM rgw Bug #63657: The usage of all buckets will be deleted(If you execute DELETE /admin/usage?bucket={b...
- The usage of all buckets will be deleted(If you execute DELETE /admin/usage?bucket={bucket-name})
[Reproduction St... - 05:18 AM rgw Bug #63657 (New): The usage of all buckets will be deleted(If you execute DELETE /admin/usage?buc...
- The usage of all buckets will be deleted(If you execute DELETE /admin/usage?bucket={bucket-name})
[Reproduction St... - 03:07 AM CephFS Bug #63587 (Fix Under Review): Test failure: test_filesystem_sync_stuck_for_around_5s (tasks.ceph...
- 01:53 AM CephFS Bug #63587: Test failure: test_filesystem_sync_stuck_for_around_5s (tasks.cephfs.test_misc.TestMisc)
- ...
- 01:44 AM CephFS Bug #63614 (Fix Under Review): cephfs-mirror: the peer list/snapshot mirror status always display...
- 01:39 AM crimson Bug #61651: entries.begin()->version > info.last_update
- https://pulpito.ceph.com/yingxin-2023-11-27_09:03:31-crimson-rados-wip-yingxin-crimson-make-crosscore-send-ordered-di...
- 12:21 AM CephFS Bug #63510 (Need More Info): ceph fs (meta) data inconsistent with python shutil.copy()
11/27/2023
- 10:22 PM rbd Bug #62140 (Fix Under Review): pybind/rbd/rbd.pyx does not build with Cython-3
- 10:00 PM Ceph Bug #63656 (New): RGW usage not trimmed fully if many keys to delete in a shard
- If there are a lot of usage log entries to trim in a usage shard, not all entries are trimmed in some cases.
I sus... - 09:31 PM RADOS Bug #63524: ceph_test_rados create object without reference
- Can you add the "overlap error" lines to the bug?
- 08:19 PM Orchestrator Bug #63525 (In Progress): cephadm: drivespec limit not working correctly
- 08:18 PM bluestore Cleanup #63612 (Fix Under Review): get-attrs,set-attrs and rm-attrs option details are not includ...
- 07:37 PM teuthology Support #63655 (New): ssh permissions pdiazbou@teuthology...
- need to update my keys to my new computer
- 04:19 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- /a/yuriw-2023-11-20_15:34:30-rados-wip-yuri7-testing-2023-11-17-0819-distro-default-smithi/7463356
- 02:29 PM RADOS Bug #63609: osd acquire map_cache_lock high latency
- after this PR
1 shard consumed 3617 ==> 3877, all 261 osdmap epochs, sum latency 0.536 seconds
- 0.104 + 0.070 ... - 02:26 PM RADOS Bug #63609: osd acquire map_cache_lock high latency
- https://github.com/ceph/ceph/pull/54612...
- 01:59 PM CephFS Bug #63614: cephfs-mirror: the peer list/snapshot mirror status always displays only one mon host...
- As discussed with Venky in the bug scrub, just remove the entire 'mon_host' entry from the peer_list to make it simil...
- 01:44 PM CephFS Bug #63619 (Fix Under Review): client: check for negative value of iovcnt before passing it to in...
- 01:27 PM rbd Bug #63654: [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJECT behavior is broken
- And there is an additional regression, also introduced in pacific, in the same if statement! In read_whole_object ca...
- 12:46 PM rbd Bug #63654 (Pending Backport): [diff-iterate] ObjectListSnapsRequest's LIST_SNAPS_FLAG_WHOLE_OBJE...
- Since pacific, LIST_SNAPS_FLAG_WHOLE_OBJECT flag is treated the same as read_whole_object case. This causes bogus ho...
- 01:22 PM CephFS Bug #63632 (Fix Under Review): client: fh obtained using O_PATH can stall the caller during async...
- 11:37 AM CephFS Bug #63632 (In Progress): client: fh obtained using O_PATH can stall the caller during async I/O
- 01:20 PM rgw Backport #63627 (In Progress): quincy: object lock: The query does not match after the worm prote...
- 01:19 PM rgw Backport #63628 (In Progress): reef: object lock: The query does not match after the worm protect...
- 12:32 PM teuthology Bug #63643: [Teuthology] Unable to add multiple hosts as a time in teuthology command also failin...
- adam kraitman wrote:
> Hey Sunil, are you able to reimage other nodes (not tala) ?
yes reimage is working. - 11:37 AM teuthology Bug #63643 (In Progress): [Teuthology] Unable to add multiple hosts as a time in teuthology comma...
- Hey Sunil, are you able to reimage other nodes (not tala) ?
- 07:25 AM teuthology Bug #63643: [Teuthology] Unable to add multiple hosts as a time in teuthology command also failin...
- Ilya Dryomov wrote:
> Hi Sunil,
>
> Why did you file this against RBD?
Sorry, i didn't notice the Project ther... - 07:15 AM teuthology Bug #63643: [Teuthology] Unable to add multiple hosts as a time in teuthology command also failin...
- Hi Sunil,
Why did you file this against RBD? - 07:07 AM teuthology Bug #63643 (In Progress): [Teuthology] Unable to add multiple hosts as a time in teuthology comma...
- In the below command, functionality is working, Able to lock the node and reimage successfully.
But errors out as be... - 12:22 PM Dashboard Backport #63653 (New): quincy: mgr/dashboard: rgw ports from config entities like endpoint and ss...
- 12:22 PM Dashboard Backport #63652 (In Progress): reef: mgr/dashboard: rgw ports from config entities like endpoint ...
- https://github.com/ceph/ceph/pull/54764
- 12:20 PM Dashboard Bug #63357 (Resolved): quincy: mgr/dashboard: disable dashboard v3 in quincy
- 12:18 PM Dashboard Bug #63564 (Pending Backport): mgr/dashboard: rgw ports from config entities like endpoint and ss...
- 11:55 AM CephFS Bug #63646: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
- Venky Shankar wrote:
> Venky Shankar wrote:
> > Discussion from another channel:
> >
> > The above issued/pendin... - 11:27 AM CephFS Bug #63646: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
- Venky Shankar wrote:
> Discussion from another channel:
>
> The above issued/pending caps in all the clients look... - 10:47 AM CephFS Bug #63646: mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for filelock
- Discussion from another channel:
The above issued/pending caps in all the clients look so strange to me. While whe... - 08:03 AM CephFS Bug #63646 (Fix Under Review): mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for fi...
- 07:59 AM CephFS Bug #63646 (Fix Under Review): mds: incorrectly issued the Fc caps in LOCK_EXCL_XSYN state for fi...
- ...
- 11:18 AM RADOS Backport #63651 (In Progress): reef: Ceph-object-store to skip getting attrs of pgmeta objects
- https://github.com/ceph/ceph/pull/54693
- 11:18 AM RADOS Backport #63650 (In Progress): quincy: Ceph-object-store to skip getting attrs of pgmeta objects
- https://github.com/ceph/ceph/pull/54692
- 11:18 AM RADOS Backport #63649 (In Progress): pacific: Ceph-object-store to skip getting attrs of pgmeta objects
- https://github.com/ceph/ceph/pull/54691
- 11:10 AM RADOS Bug #63640 (Pending Backport): Ceph-object-store to skip getting attrs of pgmeta objects
- 11:04 AM CephFS Bug #63648 (In Progress): client: ensure callback is finished if write fails during async I/O
- 10:25 AM CephFS Bug #63648: client: ensure callback is finished if write fails during async I/O
- let's say if the file is read only, the internal function Client::_write() returns EBADF but doesn't take care of the...
- 10:21 AM CephFS Bug #63648 (Closed): client: ensure callback is finished if write fails during async I/O
- else it will stall the caller
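A hedged sketch of the general fix shape (generic callback type, not the actual Client code): when the internal write fails, the error still has to be delivered through the completion callback, otherwise the async caller never wakes up.
<pre><code class="cpp">
#include <cerrno>
#include <functional>
#include <iostream>

using Completion = std::function<void(int /* bytes written or -errno */)>;

// Hypothetical async entry point: 'writable' models the check that fails
// (e.g. the file was opened read-only, so the write comes back as -EBADF).
void async_write(bool writable, Completion on_finish) {
  if (!writable) {
    on_finish(-EBADF);  // complete the callback with the error...
    return;             // ...instead of returning and leaving it dangling
  }
  // ... queue the real write and call on_finish(bytes_written) later ...
  on_finish(0);
}

int main() {
  async_write(false, [](int r) { std::cout << "completed with r=" << r << "\n"; });
}
</code></pre>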
- 10:28 AM rgw Bug #63644 (Fix Under Review): setting invalid ip for raw endpoint, RGW service status normal,the...
- 07:23 AM rgw Bug #63644 (Fix Under Review): setting invalid ip for raw endpoint, RGW service status normal,the...
- firstly setting an invalid IP for the RGW endpoint in ceph.conf
secondly start the service, like systemctl start ceph-rado... - 09:47 AM CephFS Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
- So, this happens in a standby-replay daemon...
- 09:12 AM crimson Bug #63299 (Resolved): The lifecycle of SnapTrimObjSubEvent::WaitRepop should be extended in case...
- Yingxin Cheng wrote:
> Issue is still present after https://github.com/ceph/ceph/pull/54513
>
> See https://pulpi... - 08:50 AM crimson Bug #63299: The lifecycle of SnapTrimObjSubEvent::WaitRepop should be extended in case of interru...
- Issue is still present after https://github.com/ceph/ceph/pull/54513
See https://pulpito.ceph.com/yingxin-2023-11-... - 09:01 AM crimson Bug #63647: SnapTrimEvent AddressSanitizer: heap-use-after-free
- Also see https://pulpito.ceph.com/yingxin-2023-11-27_02:14:08-crimson-rados-wip-yingxin-crimson-make-crosscore-send-o...
- 08:58 AM crimson Bug #63647 (In Progress): SnapTrimEvent AddressSanitizer: heap-use-after-free
- Sanitized backtrace:...
- 08:55 AM crimson Bug #61651: entries.begin()->version > info.last_update
- https://pulpito.ceph.com/yingxin-2023-11-27_02:15:02-crimson-rados-wip-yingxin-crimson-improve-mempool5-distro-defaul...
- 07:46 AM Ceph Bug #63645 (Fix Under Review): rbd: scalability issue on Windows due to TCP session count
- rbd-wnbd doesn't currently reuse OSD connections across disk mappings, which can lead to an excessive amount of TCP s...
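As a rough illustration of the direction such a fix could take (hypothetical types, not rbd-wnbd's actual code), a small registry that hands every new mapping the same live cluster connection instead of opening a fresh set of OSD sessions per disk:
<pre><code class="cpp">
#include <cassert>
#include <map>
#include <memory>
#include <mutex>
#include <string>

// In real code this would wrap a librados cluster handle.
struct ClusterConnection {};

class ConnectionRegistry {
  std::mutex lock_;
  std::map<std::string, std::weak_ptr<ClusterConnection>> cache_;
public:
  std::shared_ptr<ClusterConnection> get(const std::string& cluster_key) {
    std::scoped_lock l(lock_);
    if (auto existing = cache_[cluster_key].lock())
      return existing;                 // reuse the live connection
    auto fresh = std::make_shared<ClusterConnection>();
    cache_[cluster_key] = fresh;       // remember it for later mappings
    return fresh;
  }
};

int main() {
  ConnectionRegistry reg;
  auto a = reg.get("ceph/client.admin");
  auto b = reg.get("ceph/client.admin");
  assert(a == b);  // second mapping reuses the first connection
}
</code></pre>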
- 06:16 AM CephFS Bug #62962 (Duplicate): mds: standby-replay daemon crashes on replay
- Duplicate of https://tracker.ceph.com/issues/54741
- 06:04 AM rgw Bug #63642: rgw: rados objects wronly deleted
- rados object list. shadow_xxxxx.2_3, 2_4 lost...
- 06:00 AM rgw Bug #63642: rgw: rados objects wronly deleted
- We checked the s3 user's logs. The multipart upload retries were automatically attempted by the s3-transfer SDK.
W... - 03:49 AM rgw Bug #63642: rgw: rados objects wronly deleted
- We encountered data loss when using multipart upload. We found that some rados objects were lost.
h2. Logs on prod... - 03:23 AM rgw Bug #63642 (New): rgw: rados objects wronly deleted
11/26/2023
- 02:55 PM rgw Feature #63641 (New): kafka: expose librdkafka retry parameters as conf parameters
- there are setups where sync notifications (non persistent) are required.
in these setups we need to detect that the ... - 02:23 PM rgw Bug #61640 (Duplicate): RGW Kafka: Radosgw crashes when DeleteMultiObj is executed on a bucket th...
- 12:28 PM RADOS Bug #63640 (Pending Backport): Ceph-object-store to skip getting attrs of pgmeta objects
- See "Error getting attr on" after creating an empty pool:...
- 11:43 AM Infrastructure Bug #63595: magna002.ceph.redhat.com server responding very slow
- We are starting to upgrade the Octo ceph cluster and I will check magna002 after the upgrade
- 10:53 AM rgw-testing Backport #63639 (New): reef: lua integration tests
- 10:47 AM rgw-testing Bug #63616 (Pending Backport): lua integration tests
- 10:24 AM Ceph Backport #63638 (In Progress): reef: debian packaging is missing bcrypt dependency for ceph-mgr's...
- 08:33 AM Ceph Backport #63638 (In Progress): reef: debian packaging is missing bcrypt dependency for ceph-mgr's...
- https://github.com/ceph/ceph/pull/54662
- 08:33 AM Ceph Bug #63637 (Pending Backport): debian packaging is missing bcrypt dependency for ceph-mgr's .requ...
11/25/2023
- 06:34 PM mgr Bug #59580: memory leak (RESTful module, maybe others?)
- Hi Team,
We are affected as well. Since we upgraded to 16.2.14 on two different clusters, 'ceph-mgr' OOMs and gets killed.... - 02:24 PM Ceph Bug #63637 (Fix Under Review): debian packaging is missing bcrypt dependency for ceph-mgr's .requ...
- 11:59 AM Ceph Bug #63637 (Pending Backport): debian packaging is missing bcrypt dependency for ceph-mgr's .requ...
- Creating this ticket as asked in the pull request [0] to make backporting possible.
[0]: https://github.com/ceph/c...
11/24/2023
- 06:25 PM Dashboard Backport #58829 (In Progress): pacific: mgr/dashboard: update bcrypt dep in requirements.txt
- 04:52 PM Ceph Bug #63636 (New): Catastrophic failure when ECONNREFUSED is triggered
- We’ve recently had a serious outage at work, after a host had a network problem. This was seemingly caused by the @os...
- 01:57 PM CephFS Bug #63629 (In Progress): client: handle context completion during async I/O call when the client...
- 08:21 AM CephFS Bug #63629 (Closed): client: handle context completion during async I/O call when the client is n...
- when Client::ll_preadv_pwritev() is called and if the client is not mounting then it would return ENOTCONN but the ca...
- 12:49 PM Dashboard Bug #63635 (New): mgr/dashboard: multi-site topology page just says object in breadcrumb
- h3. Description of problem
!image.png!
_here_
h3. Environment
* @ceph version@ string:
* Platform (OS/dist... - 11:35 AM Dashboard Feature #56429: mgr/dashboard: Remote user authentication (e.g. via apache2)
- Just a hint for the Dashboard team (Nizam, Avan) if they decide to implement this.
Currently the Dashboard support... - 10:41 AM CephFS Fix #63634 (New): [RFC] limit iov structures to 1024 while performing async I/O
- This tracker is just to get initial thoughts/opinions about limiting the no. of iovec structures that can be read/wri...
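For illustration, a minimal C++ guard of the kind being discussed (a hypothetical helper, not the actual libcephfs code); it also covers the negative-count case tracked in #63619:
<pre><code class="cpp">
#include <cerrno>
#include <limits.h>   // IOV_MAX, commonly 1024 on Linux
#include <sys/uio.h>  // struct iovec

// Reject obviously invalid vectors before they reach the async I/O path.
int validate_iov(const struct iovec* iov, int iovcnt) {
  if (iov == nullptr || iovcnt < 0)
    return -EINVAL;   // negative counts would otherwise crash deeper layers
  if (iovcnt > IOV_MAX)
    return -EINVAL;   // cap the number of iovec structures per call
  return 0;
}

int main() {
  struct iovec one = {nullptr, 0};
  return validate_iov(&one, 1);  // returns 0 on success
}
</code></pre>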
- 10:20 AM CephFS Bug #63633 (In Progress): client: handle nullptr context in async i/o api
- ...
- 10:04 AM CephFS Bug #63632 (Closed): client: fh obtained using O_PATH can stall the caller during async I/O
- If `O_PATH` flag is used with the call to obtain a fh; the internal function handling async I/O does check for the O_...
- 09:39 AM Dashboard Bug #63591: mgr/dashboard: pyyaml==6.0 installation fails with "AttributeError: cython_sources"
- Hello!
I tried it myself and it looks like I hit the same issue as Brad; the proposed fix also seems to work, as it ... - 09:37 AM rgw Backport #63631 (New): quincy: "test pushing kafka s3 notification on master" - no events are sent
- 09:37 AM rgw Backport #63630 (New): reef: "test pushing kafka s3 notification on master" - no events are sent
- 09:37 AM rgw Bug #62136 (Pending Backport): "test pushing kafka s3 notification on master" - no events are sent
- 06:53 AM CephFS Bug #63619 (In Progress): client: check for negative value of iovcnt before passing it to interna...
- 05:13 AM Ceph Bug #63607 (Fix Under Review): get_pool_is_selfmanaged_snaps_mode() API is broken by design
- 04:37 AM CephFS Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
- The LogSegment has an empty `pending_commit_tids` map...
- 02:54 AM rgw Backport #63628 (In Progress): reef: object lock: The query does not match after the worm protect...
- https://github.com/ceph/ceph/pull/54674
- 02:53 AM rgw Backport #63627 (In Progress): quincy: object lock: The query does not match after the worm prote...
- https://github.com/ceph/ceph/pull/54675
- 02:48 AM rgw Bug #63537 (Pending Backport): object lock: The query does not match after the worm protection ti...
- 02:16 AM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- fwiw:...
- 02:10 AM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- random data point.
I don't find the string STREAMING-AWS4-HMAC-SHA256-PAYLOAD in aws-sdk-go-v2, but certainly do f... - 12:29 AM rgw Backport #63626 (In Progress): reef: SignatureDoesNotMatch for certain RGW Admin Ops endpoints wh...
- https://github.com/ceph/ceph/pull/54791
- 12:29 AM rgw Backport #63625 (In Progress): quincy: SignatureDoesNotMatch for certain RGW Admin Ops endpoints ...
- https://github.com/ceph/ceph/pull/54792
- 12:29 AM rgw Backport #63624 (In Progress): pacific: SignatureDoesNotMatch for certain RGW Admin Ops endpoints...
- https://github.com/ceph/ceph/pull/54793
- 12:29 AM rgw Backport #63623 (New): reef: rgw: link only radosgw with ALLOC_LIBS
- 12:29 AM rgw Backport #63622 (New): quincy: rgw: link only radosgw with ALLOC_LIBS
- 12:15 AM rgw Bug #63394 (Pending Backport): rgw: link only radosgw with ALLOC_LIBS
- 12:14 AM rgw Bug #62105 (Pending Backport): SignatureDoesNotMatch for certain RGW Admin Ops endpoints when usi...
- 12:12 AM rgw Backport #63621 (New): reef: tempest failures: test_create_container_with_remove_metadata_key/value
- 12:12 AM rgw Backport #63620 (New): quincy: tempest failures: test_create_container_with_remove_metadata_key/v...
- 12:10 AM rgw Bug #51772 (Pending Backport): tempest failures: test_create_container_with_remove_metadata_key/v...
11/23/2023
- 09:13 PM bluestore Bug #62282: BlueFS and BlueStore use the same space (init_rm_free assert)
- This patch simulates a scenario for Hybrid Allocator which might result in marking used extents as free and hence cau...
- 09:11 PM bluestore Bug #62282: BlueFS and BlueStore use the same space (init_rm_free assert)
- I think this could be related to https://tracker.ceph.com/issues/63618 if DB shares main device and legacy 64K alloc ...
- 08:37 PM CephFS Bug #63619: client: check for negative value of iovcnt before passing it to internal functions du...
- At first glance I thought that maybe it is the way coded (function taking care of edge cases before calling internal ...
- 08:23 PM CephFS Bug #63619 (Closed): client: check for negative value of iovcnt before passing it to internal fun...
- While running async I/O with negative iovcnt in Client::ll_preadv_pwritev(), I experienced a crash:...
- 08:12 PM Ceph Bug #63607: get_pool_is_selfmanaged_snaps_mode() API is broken by design
- > This precaution was added in https://github.com/ceph/ceph/pull/9187, but unfortunately it never worked because get_...
- 08:05 PM bluestore Bug #63618: Allocator configured with 64K alloc unit might get 4K requests
- -1> 2023-11-23T23:03:27.872+0300 7f6b9aa4f0c0 -1 /home/if/ceph.3/src/os/bluestore/fastbmap_allocator_impl.h: In fu...
- 07:56 PM bluestore Bug #63618 (Fix Under Review): Allocator configured with 64K alloc unit might get 4K requests
- Legacy (pre-pacific deployed) OSDs setup their main device allocator with 64K allocation unit - as configured in OSD'...
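As a back-of-the-envelope illustration of why that matters (a generic helper, not the BlueStore allocator itself): requests reaching a 64K-unit allocator should already be rounded up to that unit, otherwise the allocator is handed a 4K size it was never configured to serve.
<pre><code class="cpp">
#include <cassert>
#include <cstdint>

// Round a request length up to the allocator's allocation unit.
constexpr uint64_t round_up(uint64_t len, uint64_t alloc_unit) {
  return (len + alloc_unit - 1) / alloc_unit * alloc_unit;
}

int main() {
  assert(round_up(4096,  65536) == 65536);   // 4K request on a 64K-unit allocator
  assert(round_up(65536, 65536) == 65536);
  assert(round_up(70000, 65536) == 131072);
}
</code></pre>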
- 06:40 PM Ceph Bug #63617 (New): ceph-common: CommonSafeTimer<std::mutex>::timer_thread(): python3.12 killed by ...
https://bugzilla.redhat.com/show_bug.cgi?id=2251165
Description of problem:
Version-Release number of selec...- 04:36 PM bluestore Bug #62815 (Fix Under Review): hybrid/avl allocators might be very ineffective when serving bluef...
- 04:36 PM rgw-testing Bug #63616 (Pending Backport): lua integration tests
- 04:00 PM ceph-volume Bug #63391: OSDs fail to be created on PVs or LVs in v17.2.7 due to failure in ceph-volume raw list
- May I ask if there will be a bugfix to the 17.2.7 point release with this fix?
- 03:00 PM mgr Bug #59580: memory leak (RESTful module, maybe others?)
- Hello folks
After upgrading to version 16.2.13, we encountered MGR OOM issues. It's worth noting that the restful ... - 02:20 PM mgr Bug #59580 (Fix Under Review): memory leak (RESTful module, maybe others?)
- 02:34 PM rgw Bug #62136 (Fix Under Review): "test pushing kafka s3 notification on master" - no events are sent
- 09:07 AM bluestore Bug #63558 (Rejected): NCB alloc map recovery procedure doesn't take main device residing bluefs ...
- false alarm
- 08:33 AM ceph-volume Backport #63598 (In Progress): quincy: ceph-volume prepare doesn't use partitions as-is anymore
- 08:32 AM ceph-volume Backport #63599 (In Progress): reef: ceph-volume prepare doesn't use partitions as-is anymore
- 07:32 AM Linux kernel client Bug #63586: libceph: osd_sparse_read: failed to allocate 2208169984 sparse read extents
- Raised two PRs to add more debug logs in osd and bluestore:
https://github.com/ceph/ceph/pull/54571
https://githu... - 05:42 AM Linux kernel client Bug #63586 (In Progress): libceph: osd_sparse_read: failed to allocate 2208169984 sparse read ext...
- 06:21 AM mgr Bug #63615 (Fix Under Review): mgr: consider raising priority of MMgrBeacon
- We have a cluster with more than 5000 osds. When deleting a large number of rbd snapshots, active mgr frequently swit...
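A toy C++ model of the rationale (illustrative priority values, not the real messenger constants): messages enqueued at a higher priority are dispatched ahead of bulk traffic, so a raised-priority beacon keeps reaching the mon even while the mgr is saturated.
<pre><code class="cpp">
#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct Msg { int prio; std::string what; };
struct ByPrio {
  bool operator()(const Msg& a, const Msg& b) const { return a.prio < b.prio; }
};

int main() {
  std::priority_queue<Msg, std::vector<Msg>, ByPrio> dispatch;
  dispatch.push({127, "client op"});
  dispatch.push({196, "MMgrBeacon"});  // sent at a raised priority
  dispatch.push({127, "client op"});
  while (!dispatch.empty()) {          // beacon is dequeued first
    std::cout << dispatch.top().what << "\n";
    dispatch.pop();
  }
}
</code></pre>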
- 05:36 AM CephFS Bug #63614 (Fix Under Review): cephfs-mirror: the peer list/snapshot mirror status always display...
- The secondary cluster is configured with 3 mons, when a peer connection is established, the peer list/snapshot mirror...
- 05:08 AM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- I guess the general feedback for this enhancement seems to be positive in terms of performance improvement and resour...
- 05:03 AM CephFS Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
- Venky Shankar wrote:
> ... and we have a core to inspect. Hopefully things go well.
Sounds good. - 04:58 AM CephFS Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
- ... and we have a core to inspect. Hopefully things go well.
- 04:48 AM CephFS Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
- Venky Shankar wrote:
> BTW, this crash is seen in the standby-replay MDS daemon.
Sure Venky, not getting a chance... - 04:46 AM CephFS Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
- BTW, this crash is seen in the standby-replay MDS daemon.
- 04:45 AM CephFS Bug #54741: crash: MDSTableClient::got_journaled_ack(unsigned long)
- Xiubo, I'm taking this one. This has been hit in a downstream build lately.
- 02:46 AM CephFS Bug #63510: ceph fs (meta) data inconsistent with python shutil.copy()
- Locally I tried it with *shutil.copy2* and *shutil.copy*, but couldn't reproduce it.
The *shutil.copy2* script:
... - 02:43 AM CephFS Bug #63510 (In Progress): ceph fs (meta) data inconsistent with python shutil.copy()
11/22/2023
- 10:04 PM rgw Bug #63613: [rgw][lc] using custom lc schedule (work time) may cause lc processing to stall
- Added the PR https://github.com/ceph/ceph/pull/54622 addressing this issue.
- 10:02 PM rgw Bug #63613 (Fix Under Review): [rgw][lc] using custom lc schedule (work time) may cause lc proces...
- We use different lc processing time windows for our different clusters utilizing the knob rgw_lifecycle_work_time ([[...
- 08:28 PM Ceph Backport #63478 (Resolved): pacific: MClientRequest: properly handle ceph_mds_request_head_legacy...
- 08:15 PM CephFS Backport #63339: reef: mds: warning `clients failing to advance oldest client/flush tid` seen wit...
- merged
- 07:57 PM Ceph Backport #63023: pacific: AsyncMessenger::wait() isn't checking for spurious condition wakeup
- Leonid Usov wrote:
> https://github.com/ceph/ceph/pull/53716
merged - 07:56 PM CephFS Backport #63144: pacific: qa: FAIL: test_client_blocklisted_oldest_tid (tasks.cephfs.test_client_...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53634
merged - 07:56 PM CephFS Backport #62584: pacific: mds: enforce a limit on the size of a session in the sessionmap
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53634
merged - 07:55 PM CephFS Backport #62906: pacific: mds,qa: some balancer debug messages (<=5) not printed when debug_mds i...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53552
merged - 07:29 PM Dashboard Feature #56429: mgr/dashboard: Remote user authentication (e.g. via apache2)
- Hello Scott,
Yes, having users auto-created (or not at all because all user details are transmitted in every reque... - 03:32 PM bluestore Cleanup #63612: get-attrs,set-attrs and rm-attrs option details are not included in the ceph-obje...
- Relevant BZ: https://bugzilla.redhat.com/show_bug.cgi?id=2251011
- 03:31 PM bluestore Cleanup #63612 (Fix Under Review): get-attrs,set-attrs and rm-attrs option details are not includ...
- Description of problem:
The options that exist in the troubleshooting guide for getting, setting and removing obje... - 02:51 PM mgr Bug #59580: memory leak (RESTful module, maybe others?)
- Our 16.2.14 cluster is affected as well. Modules enabled:
"always_on_modules": [
"balancer",
... - 02:05 PM RADOS Bug #62918: all write io block but no flush is triggered
- hi, my point is that: while _maybe_wait_for_writeback blocks for max_dirty_bh (calculated by 64k pages), flusher_entry do ...
- 01:19 PM ceph-volume Bug #62320: lvm list should filter also on vg name
- PR fixing this issue: https://github.com/ceph/ceph/pull/53841
BTW this also breaks ceph-ansible (at least in some ... - 12:29 PM CephFS Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- Copying the slack communication here
-------------------------------------
Kotresh H R
@rzarzynski
Thanks for ... - 12:02 PM CephFS Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- A speculation: in main we compress RocksDB (@bluestore_rocksdb_options@; see https://github.com/ceph/ceph/pull/53343)...
- 10:09 AM Ceph Backport #63611 (In Progress): reef: flaky cephfs test on Windows
- 10:08 AM Ceph Backport #63611 (In Progress): reef: flaky cephfs test on Windows
- https://github.com/ceph/ceph/pull/54614
- 10:05 AM Ceph Bug #63610 (Pending Backport): flaky cephfs test on Windows
- One of the libcephfs tests is comparing timestamps, which sometimes fails on Windows, possibly due to reduced timesta...
- 08:49 AM RADOS Bug #63609 (Fix Under Review): osd acquire map_cache_lock high latency
- ...
- 08:28 AM Linux kernel client Documentation #62837 (In Progress): Add support for read_from_replica=localize for cephfs similar...
- Raised one PR for this https://github.com/ceph/ceph/pull/54611.
- 07:52 AM Linux kernel client Documentation #62837: Add support for read_from_replica=localize for cephfs similar to krbd
- Xiubo Li wrote:
> In kclient it seems this hasn't be supported everywhere. Such as for the *sync* and *direct* read.... - 07:50 AM Linux kernel client Documentation #62837: Add support for read_from_replica=localize for cephfs similar to krbd
- In kclient it seems this hasn't been supported everywhere, such as for the *sync* and *direct* read.
- 06:53 AM Linux kernel client Documentation #62837: Add support for read_from_replica=localize for cephfs similar to krbd
- Rakshith R wrote:
> Thanks,
>
> Can we use this tracker to implement changes in cephfs mount options documentati... - 06:49 AM Linux kernel client Documentation #62837: Add support for read_from_replica=localize for cephfs similar to krbd
- Thanks,
Can we use this tracker to implement changes in cephfs mount options documentation ?
This way more users... - 06:13 AM Dashboard Backport #63593 (Rejected): reef: mgr/dashboard: add a new ceph cluster advanced dashboard to gra...
- 06:13 AM Dashboard Feature #63592 (Resolved): mgr/dashboard: add a new ceph cluster advanced dashboard to grafana da...
- 05:41 AM Dashboard Bug #63591: mgr/dashboard: pyyaml==6.0 installation fails with "AttributeError: cython_sources"
- Indeed, so try this.
Make sure you clear the install cache in your repo from your last run.... - 03:48 AM Dashboard Bug #63591: mgr/dashboard: pyyaml==6.0 installation fails with "AttributeError: cython_sources"
- Hi Brad,
To test this, I started a docker container with f39 docker run -it -v $PWD:/ceph fedora:39 bash and ran t... - 05:34 AM Dashboard Bug #63608 (Resolved): mgr/dashboard: cephfs rename only works when fs is offline
- according to the recent change in cephfs, the ceph fs rename will only happen when an fs
is offline and refuse_clien... - 03:23 AM CephFS Bug #54833: crash: void Locker::handle_file_lock(ScatterLock*, ceph::cref_t<MLock>&): assert(lock...
- Found another case:
Firstly in auth mds the *simple_sync()* just transmits the *filelock* state *LOCK_MIX* --> *LO...
11/21/2023
- 09:46 PM Ceph Bug #63607 (Fix Under Review): get_pool_is_selfmanaged_snaps_mode() API is broken by design
- "rados cppool srcpool dstpool" command where srcpool is in selfmanaged snaps mode should require --yes-i-really-mean-...
- 07:16 PM bluestore Bug #63606 (Fix Under Review): pacific: ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoShar...
- 07:12 PM bluestore Bug #52398 (Pending Backport): ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRep...
- 10:19 AM bluestore Bug #52398: ObjectStore/StoreTestSpecificAUSize.BluestoreBrokenNoSharedBlobRepairTest/2 triggers ...
- Needs to be backported to Pacific: /a/yuriw-2023-11-16_22:29:26-rados-wip-yuri3-testing-2023-11-14-1227-pacific-distr...
- 06:39 PM Orchestrator Bug #63605: Failed to extract uid/gid for path /etc/prometheus
- /a/yuriw-2023-11-14_15:40:04-rados-wip-yuri2-testing-2023-11-13-0820-pacific-distro-default-smithi/7457750...
- 06:34 PM Orchestrator Bug #63605 (New): Failed to extract uid/gid for path /etc/prometheus
- /a/yuriw-2023-11-14_15:40:04-rados-wip-yuri2-testing-2023-11-13-0820-pacific-distro-default-smithi/7457697...
- 06:37 PM Infrastructure Bug #62714 (New): Bad file descriptor when stopping Ceph iscsi
- Found another instance
/a/yuriw-2023-11-14_15:40:04-rados-wip-yuri2-testing-2023-11-13-0820-pacific-distro-default-s... - 06:24 PM RADOS Bug #56028: thrash_cache_writeback_proxy_none: FAILED ceph_assert(version == old_value.version) i...
- /a/yuriw-2023-11-14_15:40:04-rados-wip-yuri2-testing-2023-11-13-0820-pacific-distro-default-smithi/7457700
- 05:51 PM Infrastructure Bug #63595 (In Progress): magna002.ceph.redhat.com server responding very slow
- Hey, it's probably because we are still having issues with the ceph cluster and since magna002 is using a cephfs moun...
- 07:12 AM Infrastructure Bug #63595 (In Progress): magna002.ceph.redhat.com server responding very slow
- Hi,
magna002.ceph.redhat.com server responding very slowly. Before raising the bug I verified the internet speed a... - 05:41 PM Infrastructure Bug #63565: Reimage the ESX on argo machines
- Done, I will send you all the details by Slack
- 03:37 PM Orchestrator Bug #58659: mds_upgrade_sequence: failure when deploying node-exporter
- https://pulpito.ceph.com/yuriw-2023-11-14_20:31:57-fs-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi...
- 03:32 PM rgw Bug #63583: [Errno 13] Permission denied: '/vstart_runner.log'
- haven't been able to figure out what changed to cause this. Ali will comment out the vstart_runner stuff from the tas...
- 03:02 PM rgw Bug #63597 (Need More Info): rgw: multi-part upload will make head object metadata error during a...
- tao song wrote:
> There is a bug when you do a breakpoint continuation by using aws java Signature Version 4.
can... - 10:04 AM rgw Bug #63597 (Need More Info): rgw: multi-part upload will make head object metadata error during a...
- There is a bug when you do a breakpoint continuation by using aws java Signature Version 4. The head object metadata ...
- 02:30 PM RADOS Fix #63604 (Fix Under Review): osd: Tune snaptrim cost for the mClock scheduler.
- 02:13 PM RADOS Fix #63604 (Fix Under Review): osd: Tune snaptrim cost for the mClock scheduler.
- Currently, the cost of snap trim operation is set to a static value (see config option 'osd_snap_trim_cost') of 1 MiB...
- 02:27 PM CephFS Bug #63516 (Rejected): mds may try new batch head that is killed
- NAB
- 02:07 PM mgr Bug #63292: Prometheus exporter bug
- We did some more digging and believe the culprit is in OSDMap.cc/h. The "rbi_round() function":https://github.com/cep...
- 02:02 PM Ceph Backport #63476 (In Progress): reef: MClientRequest: properly handle ceph_mds_request_head_legacy...
- 01:55 PM Ceph Bug #63603 (New): install-deps.sh fails with 'AttributeError: cython_sources'
- Fedora 39. 'main'
Collecting pyyaml==6.0 (from -r requirements-alerts.txt (line 1))
Downloading PyYAML-6.0.tar.... - 01:47 PM Ceph Bug #62293 (Resolved): osd mclock QoS : osd_mclock_scheduler_client_lim is not limited
- 01:46 PM Ceph Backport #62546 (Resolved): reef: osd mclock QoS : osd_mclock_scheduler_client_lim is not limited
- 01:43 PM RADOS Bug #62171 (Fix Under Review): All OSD shards should use the same scheduler type when osd_op_queu...
- 01:34 PM rgw-testing Bug #59424 (Pending Backport): run s3tests against keystone EC2
- 01:34 PM rgw-testing Bug #59424 (Fix Under Review): run s3tests against keystone EC2
- 12:29 PM RADOS Backport #63602 (In Progress): reef: RBD cloned image is slow in 4k write with "waiting for rw lo...
- 11:42 AM RADOS Backport #63602 (In Progress): reef: RBD cloned image is slow in 4k write with "waiting for rw lo...
- https://github.com/ceph/ceph/pull/54595
- 12:28 PM RADOS Backport #63601 (In Progress): quincy: RBD cloned image is slow in 4k write with "waiting for rw ...
- 11:42 AM RADOS Backport #63601 (In Progress): quincy: RBD cloned image is slow in 4k write with "waiting for rw ...
- https://github.com/ceph/ceph/pull/54594
- 12:27 PM RADOS Backport #63600 (In Progress): pacific: RBD cloned image is slow in 4k write with "waiting for rw...
- 11:42 AM RADOS Backport #63600 (In Progress): pacific: RBD cloned image is slow in 4k write with "waiting for rw...
- https://github.com/ceph/ceph/pull/54593
- 11:28 AM mgr Bug #63596: After copyup cloned volume objects are written to, the osd ops carries stat op
- This seems to be a duplicate of https://tracker.ceph.com/issues/53593.
- 07:29 AM mgr Bug #63596: After copyup cloned volume objects are written to, the osd ops carries stat op
- stat op causes an exclusive lock to be obtained for write operations on an object. In high concurrency, write operati...
- 07:23 AM mgr Bug #63596 (New): After copyup cloned volume objects are written to, the osd ops carries stat op
- 1. rbd create --size 10240 zq.rgw.buckets.index/image02
2. write to zq.rgw.buckets.index/image02
3. rbd snap create... - 11:26 AM ceph-volume Backport #63599 (In Progress): reef: ceph-volume prepare doesn't use partitions as-is anymore
- https://github.com/ceph/ceph/pull/54629
- 11:26 AM ceph-volume Backport #63598 (In Progress): quincy: ceph-volume prepare doesn't use partitions as-is anymore
- https://github.com/ceph/ceph/pull/54630
- 11:25 AM RADOS Bug #53593 (Pending Backport): RBD cloned image is slow in 4k write with "waiting for rw locks"
- The fix landed in main, but post reef freeze. I think we need to backport this at least to reef, and probably older ...
- 11:25 AM ceph-volume Bug #58812 (Pending Backport): ceph-volume prepare doesn't use partitions as-is anymore
- 10:27 AM ceph-volume Bug #58812 (Fix Under Review): ceph-volume prepare doesn't use partitions as-is anymore
- 10:00 AM CephFS Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
- Kotresh, I'm taking this one.
- 09:47 AM Orchestrator Bug #54071: rados/cephadm/osds: Invalid command: missing required parameter hostname(<string>)
- /a/yuriw-2023-11-16_22:29:26-rados-wip-yuri3-testing-2023-11-14-1227-pacific-distro-default-smithi/7461277
- 09:33 AM Ceph Feature #63574: support setting quota in the format of {K|M}iB along with the K|M, {K|M}i
- Venky Shankar wrote:
> Dhairya Parmar wrote:
> > I had used common code(strtol.cc)'s strict_iec_cast() to enable ac... - 09:26 AM Ceph Feature #63574: support setting quota in the format of {K|M}iB along with the K|M, {K|M}i
- Dhairya Parmar wrote:
> I had used common code(strtol.cc)'s strict_iec_cast() to enable acceptance of human readable... - 06:56 AM CephFS Bug #57087 (Fix Under Review): qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDat...
- 05:11 AM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- Raimund Sacherer wrote:
> Ah, I forgot, the most important part for me to write this script was to give the CU a pro... - 05:05 AM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- Kotresh Hiremath Ravishankar wrote:
> I further looked into the snippet of the script used. It is as below.
>
> [... - 04:01 AM rgw Fix #63507 (Duplicate): 304 response is not RFC9110 compliant
- thanks, i revived the original fix from https://tracker.ceph.com/issues/45736. we'll track progress there since it's ...
- 03:59 AM rgw Bug #45736 (Fix Under Review): rgw: lack of headers in 304 response
- 02:54 AM rgw Bug #63537: object lock: The query does not match after the worm protection time is set to 2124, ...
- Here's another way to solve the problem: set the retain_until_date to a 64-bit integer variable.
https://github.c...
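Assuming the mismatch comes from the retention timestamp wrapping in 32 bits (which is what the 64-bit suggestion above implies), the arithmetic is quick to check:
<pre><code class="cpp">
#include <cstdint>
#include <iostream>

int main() {
  // A retain-until date in 2124 is roughly 154 years past the Unix epoch
  // (leap days ignored), which is more seconds than 32 bits can hold.
  const uint64_t seconds_to_2124 = 154ULL * 365 * 24 * 3600;  // ~4.86e9
  const uint64_t u32_max = UINT32_MAX;                        // ~4.29e9
  std::cout << seconds_to_2124 << (seconds_to_2124 > u32_max ? " > " : " <= ")
            << u32_max << ", so a 32-bit timestamp wraps; int64_t does not.\n";
}
</code></pre>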
11/20/2023
- 08:48 PM Ceph Backport #63478: pacific: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_nu...
- In this backport ("MClientRequest: handle owner_uid and owner_gid from ceph_mds_request_head_legacy") commit was drop...
- 07:04 PM RADOS Bug #63520 (Fix Under Review): the usage of osd_pg_stat_report_interval_max is not uniform
- 06:59 PM RADOS Bug #63389: Failed to encode map X with expected CRC
- scrub note: Junior is going to setup a meeting to discuss the issue.
- 06:55 PM RADOS Bug #63509: osd/scrub: some replica states specified incorrectly
- Introduced in commit 31af98f52d9749401b1b2b9afff9a0620b68cec8 (Wed Feb 22 21:04:17 2023 -0800) which is not in reef. ...
- 06:50 PM RADOS Bug #63524: ceph_test_rados create object without reference
- scrub note: should the status be @in progress@?
- 06:41 PM RADOS Bug #63541: Observing client.admin crash in thread_name 'rados' on executing 'rados clearomap..'
- The bug has been introduced in d333b35aa10bf03a8bc047994d5cf3fed019b49a (Feb 5 15:28:04 2021 +0000). It's present in ...
- 06:36 PM RADOS Bug #63541: Observing client.admin crash in thread_name 'rados' on executing 'rados clearomap..'
- The fix is approved and awaits QA.
- 04:07 PM Orchestrator Bug #63561: cephadm: build time install of dependencies fails on build systems that disable network
- The dependencies are available as packages on the system.
- 03:18 PM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- Ah, I forgot, the most important part for me to write this script was to give the CU a progress counter so we all act...
- 03:16 PM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- Hi,
I created a script to do this as I had to do it now on two CU clusters. I think one of the issues is running ... - 03:00 PM Ceph Bug #58965: radosgw: Stale entries in bucket indexes following successful DELETE request
- We see the same issue. We are on quincy (seen same issue on pacific when we were on that before).
- 01:54 PM CephFS Bug #63521 (Rejected): qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_check...
- 01:48 PM CephFS Bug #63538: mds: src/mds/Locker.cc: 2357: FAILED ceph_assert(!cap->is_new())
- Just FYI - this was a multimds setup.
- 01:47 PM CephFS Bug #63539 (Duplicate): fs/full/subvolume_clone.sh: Health check failed: 1 full osd(s) (OSD_FULL)
- Dup of https://tracker.ceph.com/issues/63132
- 01:34 PM ceph-volume Bug #63545 (Resolved): 'ceph-volume raw list' is broken
- 01:33 PM ceph-volume Backport #63555 (Resolved): quincy: 'ceph-volume raw list' is broken
- 01:24 PM devops Bug #61244 (Resolved): compile error while building ceph with liburing
- https://github.com/ceph/ceph/pull/51559#issuecomment-1795491162
- 01:10 PM Linux kernel client Bug #63586: libceph: osd_sparse_read: failed to allocate 2208169984 sparse read extents
- Ilya Dryomov wrote:
> > The Rados just returned a very large extents 2208169984 and kernel libceph couldn't allocate... - 10:42 AM Linux kernel client Bug #63586: libceph: osd_sparse_read: failed to allocate 2208169984 sparse read extents
- > The Rados just returned a very large extents 2208169984 and kernel libceph couldn't allocate enough memory for it a...
- 01:43 AM Linux kernel client Bug #63586 (In Progress): libceph: osd_sparse_read: failed to allocate 2208169984 sparse read ext...
- https://pulpito.ceph.com/yuriw-2023-11-15_23:07:34-fs-wip-yuri-testing-2023-11-14-0743-reef-distro-default-smithi/745...
- 12:51 PM Infrastructure Bug #63565 (In Progress): Reimage the ESX on argo machines
- Working on it
- 11:14 AM Ceph Bug #63398 (Fix Under Review): Windows Unicode support
- 11:02 AM rgw Documentation #63530 (Pending Backport): notifications: default event types does not include sync...
- 09:27 AM CephFS Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- This isn't as bad as I made it sound in #note-10 - just a qa thing. Fix coming up... - 07:28 AM CephFS Backport #63576 (In Progress): quincy: qa: ModuleNotFoundError: No module named XXXXXX
- 07:28 AM CephFS Backport #63576 (In Progress): quincy: qa: ModuleNotFoundError: No module named XXXXXX
- https://github.com/ceph/ceph/pull/54090
- 06:30 AM Orchestrator Bug #46991: cephadm ls does not list legacy rgws
- I have created a PR for this issue: https://github.com/ceph/ceph/pull/54567
- 06:12 AM Dashboard Backport #63593 (Rejected): reef: mgr/dashboard: add a new ceph cluster advanced dashboard to gra...
- 06:05 AM Dashboard Feature #63592 (Pending Backport): mgr/dashboard: add a new ceph cluster advanced dashboard to gr...
- 06:01 AM Dashboard Feature #63592 (Resolved): mgr/dashboard: add a new ceph cluster advanced dashboard to grafana da...
- Add a new Ceph Cluster - Advanced dashboard to grafana which shows the advanced cluster details
- 04:09 AM Dashboard Bug #63591 (New): mgr/dashboard: pyyaml==6.0 installation fails with "AttributeError: cython_sour...
- h3. Description of problem
pyyaml==6.0 installation fails with "AttributeError: cython_sources" and is an instance... - 02:21 AM CephFS Bug #50220: qa: dbench workload timeout
- reef: https://pulpito.ceph.com/yuriw-2023-11-15_23:07:34-fs-wip-yuri-testing-2023-11-14-0743-reef-distro-default-smit...
- 02:20 AM CephFS Backport #63590 (In Progress): reef: qa: fs:mixed-clients kernel_untar_build failure
- https://github.com/ceph/ceph/pull/54711
- 02:20 AM CephFS Backport #63589 (In Progress): quincy: qa: fs:mixed-clients kernel_untar_build failure
- https://github.com/ceph/ceph/pull/54712
- 02:20 AM CephFS Backport #63588 (In Progress): pacific: qa: fs:mixed-clients kernel_untar_build failure
- https://github.com/ceph/ceph/pull/54713
- 02:18 AM CephFS Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- reef: https://pulpito.ceph.com/yuriw-2023-11-15_23:07:34-fs-wip-yuri-testing-2023-11-14-0743-reef-distro-default-smit...
- 02:16 AM CephFS Bug #63587 (Fix Under Review): Test failure: test_filesystem_sync_stuck_for_around_5s (tasks.ceph...
- https://pulpito.ceph.com/yuriw-2023-11-15_23:07:34-fs-wip-yuri-testing-2023-11-14-0743-reef-distro-default-smithi/746...
- 02:12 AM CephFS Bug #62658: error during scrub thrashing: reached maximum tries (31) after waiting for 900 seconds
- reef: https://pulpito.ceph.com/yuriw-2023-11-15_23:07:34-fs-wip-yuri-testing-2023-11-14-0743-reef-distro-default-smit...
- 02:09 AM CephFS Bug #57655 (Pending Backport): qa: fs:mixed-clients kernel_untar_build failure
- reef: https://pulpito.ceph.com/yuriw-2023-11-15_23:07:34-fs-wip-yuri-testing-2023-11-14-0743-reef-distro-default-smit...
- 02:05 AM CephFS Bug #62221: Test failure: test_add_ancestor_and_child_directory (tasks.cephfs.test_mirroring.Test...
- reef: https://pulpito.ceph.com/yuriw-2023-11-15_23:07:34-fs-wip-yuri-testing-2023-11-14-0743-reef-distro-default-smit...
11/19/2023
- 07:37 PM rgw Backport #63585 (New): reef: notifications: sending notifications with multidelete is causing a c...
- 07:37 PM rgw Backport #63584 (New): quincy: notifications: sending notifications with multidelete is causing a...
- 07:35 PM rgw Bug #63580 (Pending Backport): notifications: sending notifications with multidelete is causing a...
- 11:17 AM rgw Bug #63580 (Pending Backport): notifications: sending notifications with multidelete is causing a...
- this is a regression from: https://github.com/ceph/ceph/commit/6b6592f50b6b81fa13a330bcb91273ba7f25c0c9
- 07:30 PM rgw Bug #63583: [Errno 13] Permission denied: '/vstart_runner.log'
- https://github.com/ceph/ceph/pull/50212 touched qa/tasks/vstart_runner.py recently, but i tried reverting that pr and...
- 07:29 PM rgw Bug #63583 (Resolved): [Errno 13] Permission denied: '/vstart_runner.log'
- all rgw/singleton jobs now failing in qa/tasks/radosgw_admin.py because it imports vstart_runner
ex. http://qa-pro... - 04:41 PM RADOS Bug #63582 (Closed): ---
- 04:38 PM RADOS Bug #63582 (Closed): ---
- 04:20 PM mgr Bug #63581: mgr/zabbix: Incorrect OSD IDs in Zabbix discovery for non-default CRUSH hierarchies
- Created Pull Request: https://github.com/ceph/ceph/pull/54562
- 03:44 PM mgr Bug #63581 (New): mgr/zabbix: Incorrect OSD IDs in Zabbix discovery for non-default CRUSH hierarc...
- The current implementation of the Zabbix discovery only provides the correct OSD IDs if the CRUSH hierarchy is the de...
- 10:26 AM rgw Bug #63532: notification: etag is missing in CompleteMultipartUpload event
- the "publish_commit()" should be moved from "execute()" to "commit()" where the final etag is calculated.
(based o... - 10:23 AM rgw Bug #63532 (New): notification: etag is missing in CompleteMultipartUpload event
- 09:48 AM Ceph Bug #63579 (New): build: enable code ready builder for centos9
- when running "install-deps.sh" on a centos9 machine, the following packages are not found:...
11/18/2023
- 09:17 PM rgw Feature #63563: rgw: add s3select bytes processed and bytes returned to usage
- RFC PR: https://github.com/ceph/ceph/pull/54554
- 08:21 PM rgw Backport #62137 (Resolved): reef: rgw_object_lock.cc:The maximum time of bucket object lock is 24...
- 08:21 PM rgw Backport #63045 (Resolved): reef: s3test test_list_buckets_bad_auth fails with Keystone EC2
- 08:20 PM rgw Backport #63043 (Resolved): pacific: s3test test_list_buckets_bad_auth fails with Keystone EC2
- 08:20 PM rgw Backport #61352 (Resolved): reef: Object Ownership Inconsistent
- 08:20 PM rgw Backport #61351 (Resolved): pacific: Object Ownership Inconsistent
- 08:19 PM rgw Bug #62875 (Resolved): SignatureDoesNotMatch when extra headers start with 'x-amzn'
- 08:18 PM rgw Backport #62138 (Resolved): pacific: rgw_object_lock.cc:The maximum time of bucket object lock is...
- 08:18 PM rgw Backport #63053 (Resolved): quincy: SignatureDoesNotMatch when extra headers start with 'x-amzn'
- 08:17 PM rgw Bug #60094 (Resolved): crash: RGWSI_Notify::unwatch(RGWSI_RADOS::Obj&, unsigned long)
- 08:17 PM rgw Backport #63059 (Resolved): quincy: crash: RGWSI_Notify::unwatch(RGWSI_RADOS::Obj&, unsigned long)
- 08:17 PM rgw Backport #63052 (Resolved): pacific: SignatureDoesNotMatch when extra headers start with 'x-amzn'
- 08:16 PM rgw Backport #63058 (Resolved): pacific: crash: RGWSI_Notify::unwatch(RGWSI_RADOS::Obj&, unsigned long)
- 08:16 PM rgw Bug #62014 (Resolved): rgw/syncpolicy: sync status doesn't reflect syncpolicy set
- 08:16 PM rgw Backport #62306 (Resolved): reef: rgw/syncpolicy: sync status doesn't reflect syncpolicy set
- 08:16 PM rgw Backport #62307 (Resolved): quincy: rgw/syncpolicy: sync status doesn't reflect syncpolicy set
- 08:15 PM rgw Backport #62308 (Resolved): pacific: rgw/syncpolicy: sync status doesn't reflect syncpolicy set
- 08:15 PM rgw Bug #62250 (Resolved): retry metadata cache notifications with INVALIDATE_OBJ
- 08:15 PM rgw Backport #62300 (Resolved): pacific: retry metadata cache notifications with INVALIDATE_OBJ
- 08:10 PM RADOS Bug #52316 (Resolved): qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum']) == len(...
- 08:10 PM RADOS Backport #61452 (Resolved): reef: qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quorum...
- 08:10 PM RADOS Backport #61451 (Resolved): quincy: qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quor...
- 08:09 PM RADOS Backport #61450 (Resolved): pacific: qa/tasks/mon_thrash.py: _do_thrash AssertionError len(s['quo...
- 08:09 PM RADOS Bug #49888 (Resolved): rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reach...
- 08:09 PM RADOS Backport #61449 (Resolved): quincy: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhil...
- 08:09 PM RADOS Backport #61448 (Resolved): pacific: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhi...
- 08:08 PM RADOS Bug #61349 (Resolved): ObjectWriteOperation::mtime2() works with IoCtx::operate() but not aio_ope...
- 08:08 PM RADOS Backport #61729 (Resolved): reef: ObjectWriteOperation::mtime2() works with IoCtx::operate() but ...
- 08:08 PM RADOS Backport #61730 (Resolved): pacific: ObjectWriteOperation::mtime2() works with IoCtx::operate() b...
- 08:07 PM RADOS Bug #58884 (Resolved): ceph: osd blocklist does not accept v2/v1: prefix for addr
- 08:07 PM RADOS Backport #61487 (Resolved): quincy: ceph: osd blocklist does not accept v2/v1: prefix for addr
- 08:07 PM RADOS Backport #61488 (Resolved): pacific: ceph: osd blocklist does not accept v2/v1: prefix for addr
- 08:06 PM RADOS Backport #63179 (Resolved): pacific: Share mon's purged snapshots with OSD
- 08:03 PM RADOS Backport #57794 (In Progress): pacific: intrusive_lru leaking memory when
- 08:02 PM RADOS Backport #57795 (In Progress): quincy: intrusive_lru leaking memory when
- 07:56 PM RADOS Backport #59538 (Rejected): pacific: osd/scrub: verify SnapMapper consistency not backported
- 07:54 PM RADOS Bug #52605 (Resolved): osd: add scrub duration to pg dump
- 07:54 PM RADOS Backport #52845 (Rejected): pacific: osd: add scrub duration to pg dump
- 07:52 PM RADOS Bug #45721 (Resolved): CommandFailedError: Command failed (workunit test rados/test_python.sh) FA...
- 07:52 PM RADOS Backport #57545 (Resolved): quincy: CommandFailedError: Command failed (workunit test rados/test_...
- 07:52 PM RADOS Backport #57544 (Resolved): pacific: CommandFailedError: Command failed (workunit test rados/test...
- 07:52 PM RADOS Bug #43584 (Resolved): MON_DOWN during mon_join process
- 07:51 PM RADOS Backport #52747 (Resolved): pacific: MON_DOWN during mon_join process
- 07:50 PM RADOS Backport #58333 (In Progress): pacific: mon/monclient: update "unable to obtain rotating service ...
- 07:49 PM RADOS Backport #58334 (Resolved): quincy: mon/monclient: update "unable to obtain rotating service keys...
- 07:48 PM RADOS Bug #53855 (Resolved): rados/test.sh hangs while running LibRadosTwoPoolsPP.ManifestFlushDupCount
- 07:48 PM RADOS Backport #56099 (Resolved): pacific: rados/test.sh hangs while running LibRadosTwoPoolsPP.Manifes...
- 07:46 PM RADOS Backport #56655 (Resolved): quincy: rados/test.sh hangs while running LibRadosTwoPoolsPP.TierFlus...
- 07:46 PM RADOS Bug #52012 (Resolved): osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(state_cast<con...
- 07:45 PM RADOS Backport #53338 (Resolved): pacific: osd/scrub: src/osd/scrub_machine.cc: 55: FAILED ceph_assert(...
- 07:45 PM RADOS Backport #55791 (Rejected): pacific: CEPH Graylog Logging Missing "host" Field
- 07:43 PM mgr Backport #57473 (In Progress): quincy: mgr: FAILED ceph_assert(daemon != nullptr)
- 07:39 PM mgr Backport #63366 (In Progress): pacific: mgr: remove out&down osd from mgr daemons to avoid warnings
- 07:36 PM mgr Bug #45147 (Resolved): Module 'diskprediction_local' takes forever to load
- 07:35 PM mgr Backport #50165 (Rejected): pacific: Module 'diskprediction_local' takes forever to load
- 07:34 PM CephFS Backport #63165 (Resolved): reef: pybind/mgr/volumes: Log missing mutexes to help debug
- 07:34 PM CephFS Backport #63164 (Resolved): pacific: pybind/mgr/volumes: Log missing mutexes to help debug
- 07:33 PM CephFS Backport #63269 (Resolved): pacific: mds: report clients laggy due laggy OSDs only after checking...
- 07:30 PM CephFS Backport #55580 (Resolved): pacific: snap_schedule: avoid throwing traceback for bad or missing a...
- 04:56 PM rgw Bug #63578 (New): compilation fails with WITH_RADOSGW_LUA_PACKAGES=OFF
- ...
11/17/2023
- 10:03 PM Orchestrator Bug #58658: mds_upgrade_sequence: Error: initializing source docker://prom/alertmanager:v0.20.0
- Jobs from run /a/yuriw-2023-11-16_22:27:54-rados-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/7461...
- 09:59 PM Dashboard Bug #57386: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selecto...
- /a/yuriw-2023-11-16_22:27:54-rados-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/7461192
- 09:58 PM Orchestrator Bug #57303: rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/searc...
- /a/yuriw-2023-11-16_22:27:54-rados-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/7461242
- 09:57 PM Orchestrator Bug #63577: cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull rate ...
- /a/yuriw-2023-11-16_22:27:54-rados-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/7461246
/a/yuriw-...
- 09:54 PM Orchestrator Bug #63577 (New): cephadm: docker.io/library/haproxy: toomanyrequests: You have reached your pull...
- /a/yuriw-2023-11-16_22:27:54-rados-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/7461250...
- 09:56 PM Infrastructure Bug #63531: Error authenticating with smithiXXX.front.sepia.ceph.com: SSHException('No existing s...
- /a/yuriw-2023-11-16_22:27:54-rados-wip-yuri4-testing-2023-11-13-0820-pacific-distro-default-smithi/7461248
- 07:28 PM rgw Backport #63252 (Resolved): reef: Add bucket versioning info to radosgw-admin bucket stats output
- 07:27 PM rgw Backport #63254 (Resolved): pacific: Add bucket versioning info to radosgw-admin bucket stats output
- 07:27 PM rgw Backport #58238 (Resolved): pacific: beast frontend crashes on exception from socket.local_endpoi...
- 07:27 PM rgw Bug #61727 (Resolved): beast: add max_header_size option
- 07:25 PM rgw Backport #61728 (Resolved): pacific: beast: add max_header_size option
- 07:25 PM rgw Bug #57877 (Resolved): rgw: some operations may not have a valid bucket object
- 07:24 PM rgw Backport #58817 (Resolved): pacific: rgw: some operations may not have a valid bucket object
- 07:18 PM RADOS Backport #62819 (In Progress): reef: osd: choose_async_recovery_ec may select an acting set < min...
- 07:18 PM RADOS Backport #62817 (In Progress): quincy: osd: choose_async_recovery_ec may select an acting set < m...
- 07:17 PM RADOS Backport #62818 (In Progress): pacific: osd: choose_async_recovery_ec may select an acting set < ...
- 07:11 PM rgw Bug #58330 (Resolved): RGW service crashes regularly with floating point exception
- 07:10 PM rgw Backport #58479 (Resolved): quincy: RGW service crashes regularly with floating point exception
- 07:10 PM rgw Backport #58478 (Resolved): pacific: RGW service crashes regularly with floating point exception
- 07:10 PM rgw Bug #62681 (Resolved): high virtual memory consumption when dealing with Chunked Upload
- 07:09 PM rgw Backport #63056 (Resolved): quincy: high virtual memory consumption when dealing with Chunked Upload
- 07:09 PM rgw Backport #63055 (Resolved): pacific: high virtual memory consumption when dealing with Chunked Up...
- 07:08 PM rgw Bug #61629 (Resolved): rgw: add support http_date if http_x_amz_date is missing for sigv4
- 07:08 PM rgw Backport #61871 (Resolved): reef: rgw: add support http_date if http_x_amz_date is missing for sigv4
- 07:08 PM rgw Backport #61870 (Resolved): quincy: rgw: add support http_date if http_x_amz_date is missing for ...
- 07:07 PM rgw Backport #61872 (Resolved): pacific: rgw: add support http_date if http_x_amz_date is missing for...
- 07:07 PM rgw Bug #62013 (Resolved): Object with null version when using versioning and transition
- 07:06 PM rgw Backport #62753 (Resolved): quincy: Object with null version when using versioning and transition
- 07:06 PM rgw Backport #62752 (Resolved): reef: Object with null version when using versioning and transition
- 07:05 PM rgw Backport #59693 (Resolved): reef: metadata in bucket notification include attributes other than x...
- 07:05 PM rgw Backport #62751 (Resolved): pacific: Object with null version when using versioning and transition
- 07:05 PM rgw Backport #59692 (Resolved): pacific: metadata in bucket notification include attributes other tha...
- 07:03 PM Ceph Backport #63477 (In Progress): quincy: MClientRequest: properly handle ceph_mds_request_head_lega...
- 07:02 PM Ceph Backport #63478 (In Progress): pacific: MClientRequest: properly handle ceph_mds_request_head_leg...
- 06:58 PM Ceph Backport #55989 (Resolved): pacific: don't trim excessive PGLog::IndexedLog::dups entries
- 05:22 PM rgw Bug #63373: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_max_aio > 1
- We could just get rid of LazyFIFO. It speeds startup time of radosgw-admin, but isn't required for any kind of correc...
- 04:24 PM rgw Bug #63373: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_max_aio > 1
- Pattern leading to the deadlock in 18.2.1 and patch has not changed:...
- 02:01 PM rgw Bug #63373: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_max_aio > 1
- Just built reef 18.2.1 with the patch on top, but does not seem to fix the problem.
Looking further...
- 04:16 PM rbd Bug #53897: diff-iterate can report holes when diffing against the beginning of time (fromsnapnam...
- Christopher Hoffman wrote:
> I don't see anything within ceph code that explicitly relies on showing a zeroed entry....
- 12:51 PM CephFS Backport #63576 (In Progress): quincy: qa: ModuleNotFoundError: No module named XXXXXX
- 12:51 PM CephFS Backport #63575 (In Progress): reef: qa: ModuleNotFoundError: No module named XXXXXX
- https://github.com/ceph/ceph/pull/54714
- 12:49 PM CephFS Bug #62706 (Pending Backport): qa: ModuleNotFoundError: No module named XXXXXX
- 10:09 AM Ceph Feature #63574: support setting quota in the format of {K|M}iB along with the K|M, {K|M}i
- I had used common code(strtol.cc)'s strict_iec_cast() to enable acceptance of human readable values for max_bytes quo...
- 10:06 AM Ceph Feature #63574 (New): support setting quota in the format of {K|M}iB along with the K|M, {K|M}i
- Currently, from CLI to set quota we are supporting values in the format of "K" or "Ki" but not "KiB". However in dash...
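The intent is that @10K@, @10Ki@ and @10KiB@ should all resolve to the same byte count. A minimal, hypothetical parser sketch showing the accepted forms (not Ceph's @strict_iec_cast()@ from src/common/strtol.cc):
<pre><code class="python">
# Hypothetical helper illustrating the requested behaviour: accept "K",
# "Ki" and "KiB" style suffixes for quota values.
import re

_UNITS = {"": 1, "K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def parse_quota_bytes(value):
    m = re.fullmatch(r"(\d+)\s*([KMGT]?)(i?)(B?)", value.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("invalid quota value: %r" % value)
    num, unit, _i, _b = m.groups()
    return int(num) * _UNITS[unit.upper()]

# "10K", "10Ki" and "10KiB" all map to 10240 bytes.
assert parse_quota_bytes("10K") == parse_quota_bytes("10KiB") == 10240
</code></pre>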
- 09:04 AM Dashboard Bug #63573: mgr/dashboard: rgw discovery wrong port and ssl settings when rgw behind proxy
- Sorry the formatting is a little bit wrong, but I don't have the permission to edit the description.
- 09:02 AM Dashboard Bug #63573 (New): mgr/dashboard: rgw discovery wrong port and ssl settings when rgw behind proxy
- h3. Description of problem
When the RGW is behind a proxy and not reachable from outside the dashboard rgw discovery...
- 08:04 AM sepia Support #63566 (In Progress): access to global build stats jenkins dashboard
- 05:03 AM sepia Support #63566 (In Progress): access to global build stats jenkins dashboard
As copied from the email I sent:
Our team usually relies on this jenkins dashboard https://jenkins.ceph.com/plugi...
- 07:00 AM Dashboard Backport #63570 (In Progress): pacific: mgr/dashboard: Graphs in Grafana Dashboard are not showin...
- 05:51 AM Dashboard Backport #63570 (In Progress): pacific: mgr/dashboard: Graphs in Grafana Dashboard are not showin...
- https://github.com/ceph/ceph/pull/54542
- 06:58 AM Dashboard Backport #63568 (In Progress): reef: mgr/dashboard: Graphs in Grafana Dashboard are not showing c...
- 05:51 AM Dashboard Backport #63568 (In Progress): reef: mgr/dashboard: Graphs in Grafana Dashboard are not showing c...
- https://github.com/ceph/ceph/pull/54541
- 06:57 AM Dashboard Backport #63569 (In Progress): quincy: mgr/dashboard: Graphs in Grafana Dashboard are not showing...
- 05:51 AM Dashboard Backport #63569 (In Progress): quincy: mgr/dashboard: Graphs in Grafana Dashboard are not showing...
- https://github.com/ceph/ceph/pull/54540
- 06:56 AM Dashboard Backport #63572 (In Progress): quincy: mgr/dashboard: Show the OSD's Out and Down panels as red w...
- 05:59 AM Dashboard Backport #63572 (In Progress): quincy: mgr/dashboard: Show the OSD's Out and Down panels as red w...
- https://github.com/ceph/ceph/pull/54539
- 06:51 AM Dashboard Backport #63571 (In Progress): reef: mgr/dashboard: Show the OSD's Out and Down panels as red whe...
- 05:59 AM Dashboard Backport #63571 (In Progress): reef: mgr/dashboard: Show the OSD's Out and Down panels as red whe...
- https://github.com/ceph/ceph/pull/54538
- 06:23 AM CephFS Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- So what happens is, when the MDSs are stopped before the disaster recovery steps can be run (scan_extents, etc..), th...
- 05:55 AM Dashboard Bug #62969 (Pending Backport): mgr/dashboard: Show the OSD's Out and Down panels as red whenever ...
- 05:44 AM Dashboard Bug #63088 (Pending Backport): mgr/dashboard: Graphs in Grafana Dashboard are not showing consist...
- 05:37 AM sepia Bug #63567 (New): apt-key add is deprecated, must stop using ansible apt_key module
- ansible's apt_key uses apt-key add, which has been deprecated for some time and is now obsolete. We must change our ...
- 05:27 AM CephFS Bug #62057 (Resolved): mds: add TrackedOp event for batching getattr/lookup
- 05:27 AM CephFS Backport #62732 (Resolved): quincy: mds: add TrackedOp event for batching getattr/lookup
- 05:27 AM CephFS Backport #62733 (Resolved): reef: mds: add TrackedOp event for batching getattr/lookup
- 05:26 AM CephFS Backport #62731 (Resolved): pacific: mds: add TrackedOp event for batching getattr/lookup
- 04:29 AM Infrastructure Bug #63565 (In Progress): Reimage the ESX on argo machines
- Hi,
I am unable to open the link - https://tracker.ceph.com/issues/63140#change-248085 . It says "You are not auth...
- 04:18 AM Dashboard Bug #63564 (Pending Backport): mgr/dashboard: rgw ports from config entities like endpoint and ss...
- rgw port can be fetched not just via port/ssl_port, people can also pass config with endpoint/ssl_endpoint and that n...
- 02:21 AM rgw Feature #63563 (In Progress): rgw: add s3select bytes processed and bytes returned to usage
- Expose bytes processed and bytes returned for s3select requests to log usage.
11/16/2023
- 11:28 PM sepia Bug #63562: new users in teuthology
- We'll have to invent a new special case for teuthology. I'll have to scope that before I can predict how hard/how long.
- 05:47 PM sepia Bug #63562 (New): new users in teuthology
- New users on teuthology.front have a home directory created in /home instead of /cephfs/home. The ansible playbook ne...
- 10:09 PM mgr Backport #63367 (In Progress): quincy: mgr: remove out&down osd from mgr daemons to avoid warnings
- 10:06 PM mgr Backport #63365 (In Progress): reef: mgr: remove out&down osd from mgr daemons to avoid warnings
- 05:31 PM Orchestrator Bug #63561 (New): cephadm: build time install of dependencies fails on build systems that disable...
- A recent change to .../cephadm/build.py added
...
class Config:
def __init__(self, cli_args):
self....
- 04:58 PM rgw Bug #63560 (New): retry_raced_bucket_write considerations
- Updating bucket's metadata concurrently by two or more threads is allowed in radosgw.
There is a retry mechanism: re...
- 03:53 PM rgw Bug #63532 (Fix Under Review): notification: etag is missing in CompleteMultipartUpload event
- 03:34 PM rgw Bug #62487: rgw-multisite: few objects are duplicated on archive zone intermittently
- 1. create three zones with one of them being archive zone
2. stop rgw service on archive zone
3. create bucket and ...
- 02:48 PM RADOS Backport #63559 (In Progress): reef: Heartbeat crash in osd
- 02:46 PM RADOS Backport #63559 (In Progress): reef: Heartbeat crash in osd
- https://github.com/ceph/ceph/pull/54527
- 02:45 PM RADOS Bug #62992 (Pending Backport): Heartbeat crash in reset_timeout and clear_timeout
- 02:34 PM ceph-volume Backport #63554 (Resolved): reef: 'ceph-volume raw list' is broken
- 09:33 AM ceph-volume Backport #63554 (In Progress): reef: 'ceph-volume raw list' is broken
- 09:30 AM ceph-volume Backport #63554 (Resolved): reef: 'ceph-volume raw list' is broken
- https://github.com/ceph/ceph/pull/54521
- 01:41 PM bluestore Bug #63558: NCB alloc map recovery procedure doesn't take main device residing bluefs extents int...
- Apparently the culprit is int BlueStore::read_allocation_from_drive_on_startup() method which doesn't call alloc->ini...
- 01:30 PM bluestore Bug #63558: NCB alloc map recovery procedure doesn't take main device residing bluefs extents int...
- Relevant BlueFS log snippet, note the last extent at bdev 2:
log_fnode file(ino 1 size 0x0 mtime 2023-11-15T14:09...
- 01:28 PM bluestore Bug #63558 (Rejected): NCB alloc map recovery procedure doesn't take main device residing bluefs ...
- One has to mark relevant BlueFS extents as free before destaging new allocation map file after the full recovery.
Th...
- 01:26 PM RADOS Bug #63501: ceph::common::leak_some_memory() got interpreted as an actual leak
- Looks like it's related to bug https://github.com/ceph/ceph/pull/52639
it's not something with leak_some_memory
- 01:19 PM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- I further looked into the snippet of the script used. It is as below....
- 05:27 AM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- Kotresh Hiremath Ravishankar wrote:
> The time consumption is part of fetching the list of cancelled clones and ...
- 05:01 AM CephFS Feature #63544: mgr/volumes: bulk delete canceled clones
- The time consumption is part of fetching the list of cancelled clones and issuing subvolume rm? If that's the ca...
- 01:05 PM CephFS Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- Venky Shankar wrote:
> Kotresh Hiremath Ravishankar wrote:
> > It is strange that the tests are passing in main ...
- 11:48 AM Ceph Bug #63557 (New): NVMe-oF gateway prometheus endpoints
- Add support in ceph adm for NVMe-oF gateway prometheus endpoints.
- 11:01 AM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- I don't know what the status of https://github.com/ceph/s3-tests is or what other unit tests suites are used on RGW. ...
- 10:03 AM crimson Bug #63556 (New): TestClsRbd.directory_methods failed test
- https://pulpito.ceph.com/matan-2023-11-16_08:02:56-crimson-rados-wip-matanb-crimson-testing-11-15-distro-crimson-smit...
- 10:01 AM crimson Bug #62740: PGAdvanceMap can't handle skip_maps
- The osdmap to return enoent will also not store any full or incremental maps in it:...
- 09:57 AM crimson Bug #62162: local_shared_foreign_ptr: Assertion `ptr && *ptr' failed
- https://pulpito.ceph.com/matan-2023-11-16_08:02:56-crimson-rados-wip-matanb-crimson-testing-11-15-distro-crimson-smit...
- 09:54 AM rgw Bug #63445: valgrind leak from D3nDataCache::d3n_libaio_create_write_request
- reproduced with valgrind suppression generation @ run:
https://pulpito.ceph.com/mkogan-2023-11-15_16:30:36-rgw-rgw-w...
- 09:54 AM CephFS Bug #63225: hanging cephfs mounts. ceph reporting slow requests and mclientcaps(revoke)
- did not see the issue again recently after removing some faulty osd drive
- 09:33 AM ceph-volume Backport #63555 (In Progress): quincy: 'ceph-volume raw list' is broken
- 09:30 AM ceph-volume Backport #63555 (Resolved): quincy: 'ceph-volume raw list' is broken
- https://github.com/ceph/ceph/pull/54522
- 09:29 AM ceph-volume Bug #63545 (Pending Backport): 'ceph-volume raw list' is broken
- 09:27 AM CephFS Backport #63553 (In Progress): reef: cephfs-top: enhance --dump code to include the missing fields
- 09:21 AM CephFS Backport #63553 (In Progress): reef: cephfs-top: enhance --dump code to include the missing fields
- https://github.com/ceph/ceph/pull/54520
- 09:15 AM CephFS Bug #61397 (Pending Backport): cephfs-top: enhance --dump code to include the missing fields
- 09:15 AM CephFS Bug #61397 (New): cephfs-top: enhance --dump code to include the missing fields
- 07:42 AM CephFS Bug #63364: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush tid
- The kclient fixing patches: https://patchwork.kernel.org/project/ceph-devel/list/?series=801531
- 07:39 AM CephFS Bug #63552 (Fix Under Review): mds: use explicitly sized types for network and disk encoding
- 07:36 AM CephFS Bug #63552 (Fix Under Review): mds: use explicitly sized types for network and disk encoding
- Some members use *unsigned* for network and disk encoding, which will be of different sizes on different OSs.
- 07:23 AM RADOS Bug #63541 (Fix Under Review): Observing client.admin crash in thread_name 'rados' on executing '...
- 07:06 AM rgw Bug #62938 (Resolved): RGW s3website API prefetches data for range requests
- 07:05 AM rgw Backport #63050 (Resolved): quincy: RGW s3website API prefetches data for range requests
- 07:04 AM rgw Backport #63051 (Resolved): reef: RGW s3website API prefetches data for range requests
- 07:04 AM rgw Backport #63049 (Resolved): pacific: RGW s3website API prefetches data for range requests
- 07:02 AM rgw Backport #59026 (Resolved): pacific: relying on boost flatmap emplace behavior is risky
- 07:01 AM rgw Backport #57199 (Resolved): pacific: rgw: 'bucket check' deletes index of multipart meta when its...
- 06:59 AM rgw Backport #59361 (Resolved): pacific: metadata cache: if a watcher is disconnected and reinit() fa...
- 06:55 AM rgw Backport #55701 (Resolved): pacific: `radosgw-admin user modify --placement-id` crashes without `...
- 06:54 AM rgw Backport #57635 (Resolved): pacific: RGW crash due to PerfCounters::inc assert_condition during m...
- 06:53 AM rgw Backport #62823 (Resolved): pacific: RadosGW API: incorrect bucket quota in response to HEAD /{bu...
- 06:52 AM Ceph Bug #53181 (Resolved): rgw: wrong UploadPartCopy error code when src object not exist and src buc...
- 06:52 AM Ceph Backport #53658 (Resolved): pacific: rgw: wrong UploadPartCopy error code when src object not exi...
- 06:51 AM rgw Backport #58902 (Resolved): pacific: PostObj may incorrectly return 400 EntityTooSmall
- 04:26 AM rgw Bug #52976 (Resolved): 'radosgw-admin bi purge' unable to delete index if bucket entrypoint doesn...
- 04:26 AM rgw Backport #53152 (Resolved): pacific: 'radosgw-admin bi purge' unable to delete index if bucket en...
- 04:06 AM CephFS Backport #63551 (New): pacific: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest clien...
- 03:05 AM RADOS Bug #44715: common/TrackedOp.cc: 163: FAILED ceph_assert((sharded_in_flight_list.back())->ops_in_...
- affected version: v16.2.13
- 12:44 AM CephFS Backport #63536: reef: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush tid
- This needs https://tracker.ceph.com/issues/62952 to get merged first.
- 12:43 AM CephFS Backport #63535: quincy: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/flush...
- This needs https://tracker.ceph.com/issues/62951 to get merged first.
- 12:31 AM CephFS Backport #63513: pacific: MDS slow requests for the internal 'rename' requests
- Xiubo Li wrote:
> Need to wait for https://tracker.ceph.com/issues/62859 to get merged first.
Since this has been ...
- 12:30 AM CephFS Backport #63513 (In Progress): pacific: MDS slow requests for the internal 'rename' requests
- 12:19 AM CephFS Backport #62859 (Resolved): pacific: mds: deadlock between unlink and linkmerge
11/15/2023
- 11:51 PM RADOS Feature #63550 (New): NUMA enhancements
- Right now we support pinning an OSD process to the CPUs that correspond to a particular NUMA node according to whi...
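For context: on Linux the CPU set of a NUMA node can be read from sysfs and applied with @sched_setaffinity()@. The following is a rough sketch of that mechanism under those assumptions, not the OSD's implementation:
<pre><code class="python">
# Rough sketch (assumptions only, not Ceph code): pin a process to the CPUs
# belonging to one NUMA node, using sysfs and sched_setaffinity() on Linux.
import os

def numa_node_cpus(node):
    """Parse e.g. '0-7,16-23' from /sys/devices/system/node/nodeN/cpulist."""
    with open("/sys/devices/system/node/node%d/cpulist" % node) as f:
        cpus = set()
        for part in f.read().strip().split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

def pin_to_numa_node(pid, node):
    # pid 0 means "the calling process" for sched_setaffinity().
    os.sched_setaffinity(pid, numa_node_cpus(node))

# Example: pin the current process to NUMA node 0.
# pin_to_numa_node(0, 0)
</code></pre>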
- 11:37 PM CephFS Backport #62834 (Resolved): pacific: cephfs-top: enhance --dump code to include the missing fields
- 03:50 PM CephFS Backport #62834: pacific: cephfs-top: enhance --dump code to include the missing fields
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53453
merged
- 11:34 PM Ceph Bug #63223 (Resolved): ceph-build-pull-requests/ansible: join() missing 1 required positional arg...
- 11:33 PM Ceph Bug #63223: ceph-build-pull-requests/ansible: join() missing 1 required positional argument: 'a'
- Pinning ansible-core has resolved this: https://github.com/ceph/ceph-build/pull/2175
- 11:13 PM Stable releases Tasks #63548: Change jenkins job to not push release tag to the ceph repo
- To be clear, the flow is such:
# The build lead starts the build job, as defined in https://docs.ceph.com/en/lates...
- 08:29 PM Stable releases Tasks #63548 (New): Change jenkins job to not push release tag to the ceph repo
- Currently, the Ceph jenkins job pushes the release tag from the ceph-releases repo to the ceph repo straight away. Ho...
- 10:24 PM rgw Backport #53152: pacific: 'radosgw-admin bi purge' unable to delete index if bucket entrypoint do...
- merged
- 09:47 PM rgw Backport #61351: pacific: Object Ownership Inconsistent
- merged
- 09:46 PM rgw Backport #63043: pacific: s3test test_list_buckets_bad_auth fails with Keystone EC2
- merged
- 09:45 PM rgw Backport #61872: pacific: rgw: add support http_date if http_x_amz_date is missing for sigv4
- merged
- 09:44 PM rgw Backport #59026: pacific: relying on boost flatmap emplace behavior is risky
- merged
- 09:30 PM rgw Bug #63549 (New): rgw-multisite: occasionally bucket full sync fails to sync objects
- 1. configure multisite with three zones.
2. stop rgw service on one of the non-master zones
3. create versioned ...
- 09:20 PM rgw Backport #62138: pacific: rgw_object_lock.cc:The maximum time of bucket object lock is 24855 days
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52605
merged
- 09:06 PM rgw Backport #57199: pacific: rgw: 'bucket check' deletes index of multipart meta when its pending_ma...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54016
merged
- 09:05 PM rgw Backport #59361: pacific: metadata cache: if a watcher is disconnected and reinit() fails, it won...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54014
merged
- 09:05 PM rgw Backport #63055: pacific: high virtual memory consumption when dealing with Chunked Upload
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53775
merged
- 09:04 PM rgw Backport #63052: pacific: SignatureDoesNotMatch when extra headers start with 'x-amzn'
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53772
merged
- 09:03 PM rgw Backport #63049: pacific: RGW s3website API prefetches data for range requests
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53769
merged
- 09:02 PM rgw Backport #63058: pacific: crash: RGWSI_Notify::unwatch(RGWSI_RADOS::Obj&, unsigned long)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53759
merged
- 09:01 PM rgw Backport #58478: pacific: RGW service crashes regularly with floating point exception
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53593
merged
- 09:01 PM rgw Backport #55701: pacific: `radosgw-admin user modify --placement-id` crashes without `--storage-c...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53474
merged
- 09:00 PM rgw Backport #57635: pacific: RGW crash due to PerfCounters::inc assert_condition during multisite sy...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53472
merged
- 09:00 PM rgw Backport #62823: pacific: RadosGW API: incorrect bucket quota in response to HEAD /{bucket}/?usage
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53439
merged
- 08:59 PM rgw Backport #62308: pacific: rgw/syncpolicy: sync status doesn't reflect syncpolicy set
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53410
merged
- 08:59 PM rgw Backport #62751: pacific: Object with null version when using versioning and transition
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53400
merged
- 08:51 PM rgw Backport #59692: pacific: metadata in bucket notification include attributes other than x-amz-meta-*
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53376
merged
- 08:50 PM Ceph Backport #53658: pacific: rgw: wrong UploadPartCopy error code when src object not exist and src ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53356
merged
- 08:49 PM rgw Backport #58902: pacific: PostObj may incorrectly return 400 EntityTooSmall
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52936
merged
- 08:48 PM rgw Backport #62300: pacific: retry metadata cache notifications with INVALIDATE_OBJ
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52797
merged
- 08:47 PM rgw Backport #58817: pacific: rgw: some operations may not have a valid bucket object
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52729
merged
- 08:45 PM rgw Backport #61728: pacific: beast: add max_header_size option
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/52113
merged
- 07:53 PM rgw Bug #63537 (Fix Under Review): object lock: The query does not match after the worm protection ti...
- 07:28 PM rgw Bug #63537 (Triaged): object lock: The query does not match after the worm protection time is set...
- i was able to reproduce this by modifying the following s3-tests case:...
- 08:07 AM rgw Bug #63537 (Pending Backport): object lock: The query does not match after the worm protection ti...
- Set the protection time to 2124.1.1 and then check the protection of this object to get 198x.x.x time...
- 07:02 PM rgw Bug #63546 (Fix Under Review): rgwlc: even current object versions have a unique instance
- 06:38 PM rgw Bug #63546 (Fix Under Review): rgwlc: even current object versions have a unique instance
- Lifecycle processing was clearing list prior to sending the info to rgw_notify.
- 06:51 PM Dashboard Feature #63547 (New): mgr/dashboard: add option to manage bucket ACLs
- Manage bucket ACLs when editing a bucket
- 05:48 PM rbd Bug #63431 (Resolved): [test] clean up RBD_MIRROR_MODE vs MIRROR_IMAGE_MODE in rbd_mirror_helpers...
- 09:53 AM rbd Bug #63431 (Fix Under Review): [test] clean up RBD_MIRROR_MODE vs MIRROR_IMAGE_MODE in rbd_mirror...
- 04:29 PM rgw Backport #58238: pacific: beast frontend crashes on exception from socket.local_endpoint()
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54167
merged
- 03:55 PM CephFS Backport #63269: pacific: mds: report clients laggy due laggy OSDs only after checking any OSD is...
- Dhairya Parmar wrote:
> https://github.com/ceph/ceph/pull/54120
merged
- 03:55 PM CephFS Backport #63164: pacific: pybind/mgr/volumes: Log missing mutexes to help debug
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53916
merged
- 03:54 PM CephFS Backport #57157: pacific: doc: update snap-schedule notes regarding 'start' time
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53576
merged
- 03:53 PM CephFS Backport #62731: pacific: mds: add TrackedOp event for batching getattr/lookup
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53556
merged
- 03:52 PM CephFS Backport #62897: pacific: client: evicted warning because client completes unmount before thrashe...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53555
merged
- 03:52 PM CephFS Backport #62902: pacific: mds: log a message when exiting due to asok "exit" command
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53550
merged
- 03:51 PM CephFS Backport #62859: pacific: mds: deadlock between unlink and linkmerge
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53495
merged
- 03:50 PM CephFS Backport #62854: pacific: qa: "cluster [ERR] MDS abort because newly corrupt dentry to be committ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53486
merged
- 03:39 PM CephFS Bug #63488 (Fix Under Review): smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not d...
- 03:30 PM ceph-volume Bug #63545 (Resolved): 'ceph-volume raw list' is broken
- 'ceph-volume raw list' is broken for a specific use case (rook)
rook copies devices from /dev/ to /mnt for specifi...
- 02:59 PM rgw Bug #63109: [rgw][sts] the ceph config item to set the max session duration is not honored
- Thank you! I believe backport is not needed.
- 02:56 PM CephFS Feature #63544 (New): mgr/volumes: bulk delete canceled clones
- Creating this feature tracker to discuss the effort involved and if it would really help in cleaning out canceled clo...
- 01:52 PM Ceph Bug #63543 (New): rgw going down upon executing s3select-request
- the trim operator is missing a type check, which causes an exception (not caught by s3select-engine)
- 01:47 PM rgw Bug #63542 (New): Delete-Marker deletion inconsistencies
- We stumbled into an issue trying to make the following s3test pass for the downstream rgw/sfs driver implementation:
...
- 01:40 PM RADOS Bug #63541 (Fix Under Review): Observing client.admin crash in thread_name 'rados' on executing '...
- Description of problem:
Observing client.admin crash in thread_name 'rados' on executing 'rados clearomap' for a r...
- 01:32 PM CephFS Bug #63461: Long delays when two threads modify the same directory
- Xavi Hernandez wrote:
> Venky Shankar wrote:
> > Which ceph version are you using and what's max_mds set to?
>
>...
- 01:30 PM CephFS Bug #63521: qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_checks.TestScrub...
- Milind, isn't this caused by a PR that was included in the test branch for integration testing? If yes, then there is...
- 12:52 PM Orchestrator Bug #63540 (New): Cephadm did not automatically modify firewall rules to allow access to port 992...
- ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
Cephadm did not automatically modify ...
- 12:11 PM CephFS Bug #63539 (Duplicate): fs/full/subvolume_clone.sh: Health check failed: 1 full osd(s) (OSD_FULL)
- The job was halted with following log messages -...
- 10:52 AM CephFS Bug #63538 (Need More Info): mds: src/mds/Locker.cc: 2357: FAILED ceph_assert(!cap->is_new())
- Seen in an internal cluster running multimds. ...
- 04:57 AM CephFS Backport #63536 (New): reef: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client/f...
- 04:57 AM CephFS Backport #63535 (New): quincy: MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client...
- 04:54 AM CephFS Bug #63364 (Pending Backport): MDS_CLIENT_OLDEST_TID: 15 clients failing to advance oldest client...
- 01:18 AM CephFS Bug #57244 (Resolved): [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10...
11/14/2023
- 09:06 PM mgr Bug #63292: Prometheus exporter bug
- Another command that's running into the same issue:...
- 03:47 PM mgr Bug #63292: Prometheus exporter bug
- Hi,
Thank you for your response. I'm sorry but I didn't fully explain the problem. We initially discovered the pro...
- 05:31 AM mgr Bug #63292: Prometheus exporter bug
- Hi Navid,
From reef, the metrics are not exported by prometheus by default and instead a new daemon called ceph-ex...
- 08:26 PM Orchestrator Backport #63534 (New): quincy: cephadm: OSD weights are not restored when you stop removal of an OSD
- 08:26 PM Orchestrator Backport #63533 (New): reef: cephadm: OSD weights are not restored when you stop removal of an OSD
- 08:22 PM Orchestrator Bug #63481 (Pending Backport): cephadm: OSD weights are not restored when you stop removal of an OSD
- 07:56 PM rgw Bug #63373 (Fix Under Review): multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_ob...
- 03:23 PM rgw Bug #63373: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_max_aio > 1
- another report in https://tracker.ceph.com/issues/63528
seems to be specific to the @RGWDeleteMultiObj@ op when @r...
- 02:02 PM rgw Bug #63373: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_max_aio > 1
- We also suffer from the same deletion problem with large buckets reported above; Signature is identical.
- 01:40 PM rgw Bug #63373: multisite: Deadlock in RGWDeleteMultiObj with default rgw_multi_obj_del_max_aio > 1
- In our setup this is easily reproduced by just running:
s3cmd rm -r s3://some-large-bucket
We also have multisi...
- 04:38 PM rgw Bug #63532 (New): notification: etag is missing in CompleteMultipartUpload event
- ...
- 04:18 PM RADOS Bug #63509 (Fix Under Review): osd/scrub: some replica states specified incorrectly
- 04:14 PM Infrastructure Bug #63531 (New): Error authenticating with smithiXXX.front.sepia.ceph.com: SSHException('No exis...
- /a/yuriw-2023-11-13_23:56:46-smoke-reef-release-distro-default-smithi/7456670...
- 03:24 PM rgw Bug #63528 (Duplicate): rgw recursive delete deadlock
- 02:12 PM rgw Bug #63528: rgw recursive delete deadlock
- Enrico Bocchi wrote:
> Do you see this happening in a multi-site RGW setup or also with simpler single-zone clusters...
- 02:06 PM rgw Bug #63528: rgw recursive delete deadlock
- Do you see this happening in a multi-site RGW setup or also with simpler single-zone clusters?
We are seeing the s...
- 11:21 AM rgw Bug #63528 (Duplicate): rgw recursive delete deadlock
- Recursive bucket delete operations deadlock when using rgw_multi_obj_del_max_aio > 1.
This is most likely caused b...
- 03:14 PM rgw Documentation #63530 (Pending Backport): notifications: default event types does not include sync...
- currently documentation says that, by default, notifications are sent for all event types.
- 03:10 PM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- implementation notes:
h3. accept @STREAMING-UNSIGNED-PAYLOAD-TRAILER@ and @STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAI...
- 02:07 PM CephFS Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- Kotresh Hiremath Ravishankar wrote:
> It is strange that the tests are passing in main branch.
>
> On the main...
- 02:06 PM mgr Bug #63529 (New): Python Sub-Interpreter Model Used by ceph-mgr Incompatible With Python Modules ...
- Our users noticed that Ceph's dashboard is broken in Proxmox Virtual Environment 8. On a more closer investigation, t...
- 12:25 PM Dashboard Bug #63515 (In Progress): mgr/dashboard: explicitly shutdown cephfs mount in controllers.cephfs a...
- 12:11 PM CephFS Backport #63480 (In Progress): quincy: src/mds/MDLog.h: 100: FAILED ceph_assert(!segments.empty())
- https://github.com/ceph/ceph/pull/54494
- 12:05 PM CephFS Backport #63479 (In Progress): reef: src/mds/MDLog.h: 100: FAILED ceph_assert(!segments.empty())
- 12:05 PM CephFS Backport #63479: reef: src/mds/MDLog.h: 100: FAILED ceph_assert(!segments.empty())
- https://github.com/ceph/ceph/pull/54493
- 11:34 AM RADOS Bug #62992 (Fix Under Review): Heartbeat crash in reset_timeout and clear_timeout
- 11:23 AM RADOS Bug #63501: ceph::common::leak_some_memory() got interpreted as an actual leak
- Looks like it is the correct response, valgrind caught the leak, but exit-on-first-error caused the mons and osds to ...
- 11:01 AM RADOS Backport #63527 (New): reef: crash: int OSD::shutdown(): assert(end_time - start_time_func < cct-...
- 11:01 AM RADOS Backport #63526 (New): quincy: crash: int OSD::shutdown(): assert(end_time - start_time_func < cc...
- 10:56 AM RADOS Bug #61140 (Pending Backport): crash: int OSD::shutdown(): assert(end_time - start_time_func < cc...
- 10:17 AM CephFS Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- The interesting bit from mds log ./remote/smithi081/log/ceph-mds.c.log.gz...
- 05:50 AM CephFS Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
- Seen again - https://pulpito.ceph.com/vshankar-2023-11-06_10:33:49-fs-wip-vshankar-testing-20231106.073650-testing-de...
- 09:48 AM Dashboard Feature #56429: mgr/dashboard: Remote user authentication (e.g. via apache2)
- This is an interesting proposal, though on your diagram i notice you would like the user to be created on the dashboa...
- 08:59 AM RADOS Bug #63520: the usage of osd_pg_stat_report_interval_max is not uniform
- solution:
main idea:
1. Compatible with the judgment of epoch and seconds at the same time
2. Reason 1: 500 epoc...
- 05:57 AM RADOS Bug #63520: the usage of osd_pg_stat_report_interval_max is not uniform
- Disadvantages of non-uniform units: After 500 osdmaps are reached, it may not be reached within 500 seconds.
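The proposal above amounts to honouring the option under either interpretation, reporting when either the epoch delta or the wall-clock delta reaches the configured maximum. A toy sketch of that logic (hypothetical names, not the OSD code):
<pre><code class="python">
# Toy illustration (hypothetical names, not OSD code): treat the configured
# maximum as satisfied when EITHER the epoch delta OR the elapsed seconds
# reach it, so both interpretations of the option stay consistent.
import time

class PgStatReporter:
    def __init__(self, interval_max):
        self.interval_max = interval_max      # used for both epochs and seconds
        self.last_epoch = 0
        self.last_time = time.monotonic()

    def should_report(self, current_epoch):
        epoch_due = current_epoch - self.last_epoch >= self.interval_max
        time_due = time.monotonic() - self.last_time >= self.interval_max
        if epoch_due or time_due:
            self.last_epoch = current_epoch
            self.last_time = time.monotonic()
            return True
        return False
</code></pre>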
- 05:55 AM RADOS Bug #63520 (Fix Under Review): the usage of osd_pg_stat_report_interval_max is not uniform
Scenario 1: osd_pg_stat_report_interval_max is in epoch units...
- 08:20 AM bluestore Backport #58588: quincy: OSD is unable to allocate free space for BlueFS
- Fixed in 17.2.6
- 08:07 AM Orchestrator Bug #63525: cephadm: drivespec limit not working correctly
- Some more background discussion in the slack thread: https://ceph-storage.slack.com/archives/C04SNUBD2M6/p16995377049...
- 08:05 AM Orchestrator Bug #63525 (In Progress): cephadm: drivespec limit not working correctly
- There appears to be a bug in cephadm/ceph orch when applying limits inside a drive specification.
My test cluster ha...
- 07:43 AM rgw Backport #59568 (Rejected): pacific: Multipart re-uploads cause orphan data
- 07:42 AM rgw Backport #59568 (Duplicate): pacific: Multipart re-uploads cause orphan data
- 07:41 AM rgw Bug #46563 (Resolved): Metadata synchronization failed,"metadata is behind on 1 shards" appear
- 07:41 AM rgw Backport #55703 (Resolved): quincy: Metadata synchronization failed,"metadata is behind on 1 shar...
- 07:40 AM rgw Backport #55702 (Rejected): pacific: Metadata synchronization failed,"metadata is behind on 1 sha...
- 07:40 AM rgw Backport #62284 (Resolved): reef: [ FAILED ] TestAMQP.IdleConnection (30132 ms)
- 07:39 AM RADOS Bug #63524 (New): ceph_test_rados create object without reference
- tier ops that add refs into the low_tier pools can hit an overlap offset, which causes a few empty entries in low_tier obj...
- 07:32 AM teuthology Bug #63522: likely PYTHONPATH mess when running cephfs tests
- ...
- 07:31 AM teuthology Bug #63522: likely PYTHONPATH mess when running cephfs tests
- ...
- 07:30 AM teuthology Bug #63522: likely PYTHONPATH mess when running cephfs tests
- ...
- 07:29 AM teuthology Bug #63522: likely PYTHONPATH mess when running cephfs tests
- ...
- 07:09 AM teuthology Bug #63522 (New): likely PYTHONPATH mess when running cephfs tests
- "No module named 'tasks.ceph_fuse'":https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-202...
- 07:23 AM teuthology Bug #63523 (New): qa: Command failed - qa/workunits/fs/misc/general_vxattrs.sh
- ...
- 07:21 AM CephFS Bug #63013: I/O is hang while I try to write to a file with two processes, maybe due to metadata ...
- fuchen ma wrote:
> Still don't know how to change the version of CephFS with cephadm. cephadm shell only shows the v...
- 06:51 AM CephFS Bug #63013: I/O is hang while I try to write to a file with two processes, maybe due to metadata ...
- Still don't know how to change the version of CephFS with cephadm. cephadm shell only shows the versions of ceph-mon ...
- 07:16 AM rgw Bug #63355 (Fix Under Review): test/cls_2pc_queue: fails during migration tests
- 06:57 AM CephFS Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 06:56 AM CephFS Bug #63141: qa/cephfs: test_idem_unaffected_root_squash fails
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 06:52 AM CephFS Bug #61243: qa: tasks.cephfs.tests_from_xfstests_dev.TestXFSTestsDev - 17 tests failed
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 06:51 AM CephFS Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 06:50 AM CephFS Bug #62580: testing: Test failure: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays)
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 06:44 AM CephFS Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 06:42 AM CephFS Bug #63521 (Rejected): qa: Test failure: test_scrub_merge_dirfrags (tasks.cephfs.test_scrub_check...
- ...
- 06:26 AM CephFS Bug #63233: mon|client|mds: valgrind reports possible leaks in the MDS
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 06:21 AM CephFS Bug #53859: qa: Test failure: test_pool_perm (tasks.cephfs.test_pool_perm.TestPoolPerm)
- main: https://pulpito.ceph.com/mchangir-2023-11-13_10:27:15-fs-wip-mchangir-testing-20231110.052303-testing-default-s...
- 05:46 AM CephFS Bug #63519 (Triaged): ceph-fuse: reef ceph-fuse crashes with main branch ceph-mds
- Creating a tracker for recording in run wiki since it shows up in integrations tests for main branch. Explanation is ...
- 04:31 AM rgw Bug #58481 (Resolved): rgw: remove guard_reshard in bucket_index_read_olh_log
- 04:30 AM rgw Backport #58495 (Rejected): pacific: rgw: remove guard_reshard in bucket_index_read_olh_log
- 04:29 AM rgw Bug #52067 (Resolved): RadosGW requests to Open Policy Agent add 40ms latency
- 04:28 AM rgw Backport #52241 (Rejected): pacific: RadosGW requests to Open Policy Agent add 40ms latency
- 04:06 AM CephFS Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
- Laura Flores wrote:
> @Venky want to attach your PR to this tracker?
My changes aren't fully ready yet - it solve...
11/13/2023
- 09:50 PM RADOS Bug #63310: use-after-move in OSDService::build_incremental_map_msg()
- https://github.com/ceph/ceph/pull/54269 merged
- 09:12 PM Infrastructure Bug #63518 (Duplicate): Selinux denial in rados/standalone job
- 09:10 PM Infrastructure Bug #63518 (Duplicate): Selinux denial in rados/standalone job
- /a/yuriw-2023-11-10_18:18:41-rados-wip-yuri3-testing-2023-11-09-1355-quincy-distro-default-smithi/7454517...
- 09:12 PM Infrastructure Bug #55443: "SELinux denials found.." in rados run
- Seen in a rados/standalone job, which seems new:
Description: rados/standalone/{supported-random-distro$/{rhel_8} ...
- 09:00 PM RADOS Bug #63066: rados/objectstore - application not enabled on pool '.mgr'
- /a/yuriw-2023-11-10_18:18:41-rados-wip-yuri3-testing-2023-11-09-1355-quincy-distro-default-smithi/7454409
- 08:57 PM Dashboard Bug #61578: test_dashboard_e2e.sh: Can't run because no spec files were found
- /a/yuriw-2023-11-10_18:18:41-rados-wip-yuri3-testing-2023-11-09-1355-quincy-distro-default-smithi/7454533
- 08:55 PM Orchestrator Bug #58476: test_non_existent_cluster: cluster does not exist
- /a/yuriw-2023-11-10_18:18:41-rados-wip-yuri3-testing-2023-11-09-1355-quincy-distro-default-smithi/7454561
- 08:52 PM CephFS Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
- @Venky want to attach your PR to this tracker?
- 07:36 PM mgr Bug #63292: Prometheus exporter bug
- We did some more digging and found the JSON that the mgr is trying to decode is indeed incorrect:...
- 07:16 PM RADOS Bug #62918: all write io block but no flush is triggered
- So your point that the @while@'s body isn't executed when @max_dirty_bh@ is calculated for 64k pages, right?...
- 07:11 PM Ceph Bug #63517 (New): Process (rbd) crashed in handle_oneshot_fatal_signal
- After upgrading from Fedora 37 to Fedora 39, I can no longer mount a CEPH volume. rbdmap fails to start:
Nov 14 06...
- 07:04 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- Brad is taking a look.
- 07:02 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- Hi Matan! How about sending a PR with the timeout bump up?
- 06:56 PM RADOS Bug #63389: Failed to encode map X with expected CRC
- bump up
- 05:56 PM CephFS Bug #63516 (Fix Under Review): mds may try new batch head that is killed
- 05:55 PM CephFS Bug #63516 (Rejected): mds may try new batch head that is killed
- https://github.com/ceph/ceph/blob/7325a8d317d1cef199306833e7829b171378266b/src/mds/Server.cc#L2641-L2645
- 05:33 PM Dashboard Bug #63123 (Closed): Dashboard crash when opening NFS view
- 04:54 PM Dashboard Bug #63515 (In Progress): mgr/dashboard: explicitly shutdown cephfs mount in controllers.cephfs a...
- The controllers.cephfs and controllers.nfs modules use services.cephfs.CephFS object to mount a Ceph file system. The...
- 04:53 PM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- Christian Rohmann wrote:
> Maybe it also makes sense to change the title of this issue to
>
> "Uploads by AWS ... - 04:43 PM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- Christian Rohmann wrote:
> Are there any plans on prioritizing the implementation of the missing check-summing featu...
- 04:32 PM Infrastructure Bug #63492: Not able to access and mount magna002 ceph directory
- Please use cephfs to mount the volume
- 04:02 PM Ceph Feature #63343: Add fields to ceph-nvmeof conf and fix cpumask default
also please make the field called "enable_spdk_doscovery_controller" configurable.
- 03:21 PM CephFS Bug #63461: Long delays when two threads modify the same directory
- Venky Shankar wrote:
> Which ceph version are you using and what's max_mds set to?
I'm using a recent build from ...
- 11:33 AM CephFS Bug #63461: Long delays when two threads modify the same directory
- Xavi Hernandez wrote:
> I've just seen that the delay corresponds roughly to the value of *mds_tick_interval* option...
- 03:09 PM CephFS Bug #63514 (New): mds: avoid sending inode/stray counters as part of health warning for standby-r...
- Otherwise a health warning from a s-r daemon looks like...
- 02:53 PM Orchestrator Bug #63483 (Resolved): Add github command for rook e2e jenkins job
- 02:52 PM Orchestrator Backport #63489 (Resolved): reef: Add github command for rook e2e jenkins job
- 02:52 PM Orchestrator Backport #63489: reef: Add github command for rook e2e jenkins job
- backported by PR: https://github.com/ceph/ceph/pull/54224
- 02:52 PM Orchestrator Feature #63462 (Resolved): adding e2e testing for rook orchestrator
- 02:52 PM Orchestrator Backport #63463 (Resolved): reef: adding e2e testing for rook orchestrator
- 02:52 PM Orchestrator Backport #63463: reef: adding e2e testing for rook orchestrator
- backported by PR: https://github.com/ceph/ceph/pull/54224
- 02:51 PM Orchestrator Bug #63291 (Resolved): rook cluster uses hard-coded namespace 'rook-ceph' in some calls
- 02:51 PM Orchestrator Backport #63359 (Resolved): reef: rook cluster uses hard-coded namespace 'rook-ceph' in some calls
- backported by PR: https://github.com/ceph/ceph/pull/54224
- 02:51 PM Orchestrator Bug #63107 (Resolved): rook crashes when trying to list services
- 02:51 PM Orchestrator Backport #63266 (Resolved): reef: rook crashes when trying to list services
- 02:50 PM Orchestrator Feature #63038 (Resolved): Adding support to automatically discover storage classes on Rook cluster
- 02:50 PM Orchestrator Backport #63484 (Resolved): reef: Adding support to automatically discover storage classes on Roo...
- backported by PR: https://github.com/ceph/ceph/pull/54224
- 02:50 PM Orchestrator Bug #63326 (Resolved): This Orchestrator does not support `orch prometheus access info`
- 02:49 PM Orchestrator Backport #63444 (Resolved): reef: This Orchestrator does not support `orch prometheus access info`
- 02:49 PM Orchestrator Backport #63444: reef: This Orchestrator does not support `orch prometheus access info`
- backported by PR: https://github.com/ceph/ceph/pull/54224
- 02:46 PM CephFS Backport #61166 (Resolved): pacific: [WRN] : client.408214273 isn't responding to mclientcaps(rev...
- 02:45 PM rgw Bug #63445 (In Progress): valgrind leak from D3nDataCache::d3n_libaio_create_write_request
- was not able to repro on local env, working on generating a valgrind suppression
- 02:09 PM CephFS Bug #61879 (Resolved): mds: linkmerge assert check is incorrect in rename codepath
- 02:08 PM CephFS Backport #62240 (Resolved): reef: mds: linkmerge assert check is incorrect in rename codepath
- 02:08 PM CephFS Backport #62241 (Resolved): quincy: mds: linkmerge assert check is incorrect in rename codepath
- 02:08 PM CephFS Backport #62242 (Resolved): pacific: mds: linkmerge assert check is incorrect in rename codepath
- 02:06 PM CephFS Bug #61864 (Resolved): mds: replay thread does not update some essential perf counters
- 02:06 PM CephFS Backport #62191 (Resolved): quincy: mds: replay thread does not update some essential perf counters
- 02:06 PM CephFS Backport #62189 (Resolved): reef: mds: replay thread does not update some essential perf counters
- 02:05 PM CephFS Backport #62190 (Resolved): pacific: mds: replay thread does not update some essential perf counters
- 02:05 PM CephFS Bug #59318 (Resolved): mon/MDSMonitor: daemon booting may get failed if mon handles up:boot beaco...
- 02:05 PM CephFS Backport #61425 (Resolved): quincy: mon/MDSMonitor: daemon booting may get failed if mon handles ...
- 02:05 PM CephFS Backport #61424 (Resolved): reef: mon/MDSMonitor: daemon booting may get failed if mon handles up...
- 02:04 PM CephFS Backport #61426 (Resolved): pacific: mon/MDSMonitor: daemon booting may get failed if mon handles...
- 02:04 PM CephFS Bug #24403 (Resolved): mon failed to return metadata for mds
- 02:04 PM CephFS Backport #61691 (Resolved): quincy: mon failed to return metadata for mds
- 02:04 PM CephFS Backport #61693 (Resolved): reef: mon failed to return metadata for mds
- 02:03 PM CephFS Backport #61692 (Resolved): pacific: mon failed to return metadata for mds
- 02:02 PM CephFS Backport #62337 (Resolved): pacific: MDSAUthCaps: use g_ceph_context directly
- 02:01 PM CephFS Backport #59015 (Rejected): pacific: Command failed (workunit test fs/quota/quota.sh) on smithi08...
- 01:57 PM CephFS Documentation #62791 (Resolved): doc: write cephfs commands in full
- 01:57 PM CephFS Backport #62805 (Resolved): quincy: doc: write cephfs commands in full
- 01:56 PM CephFS Backport #62806 (Resolved): reef: doc: write cephfs commands in full
- 01:56 PM CephFS Backport #62807 (Resolved): pacific: doc: write cephfs commands in full
- 01:55 PM CephFS Backport #57776 (Resolved): pacific: Clarify security implications of path-restricted cephx capab...
- 01:53 PM CephFS Backport #62289 (Resolved): quincy: ceph_test_libcephfs_reclaim crashes during test
- 01:52 PM CephFS Backport #62288 (Rejected): pacific: ceph_test_libcephfs_reclaim crashes during test
- 01:51 PM CephFS Backport #62003 (Rejected): pacific: client: readdir_r_cb: get rstat for dir only if using rbytes...
- 01:50 PM CephFS Backport #62879 (Resolved): pacific: cephfs-shell: update path to cephfs-shell since its location...
- 01:50 PM CephFS Backport #62331 (Rejected): pacific: MDSAuthCaps: minor improvements
- 01:49 PM CephFS Bug #59067 (Resolved): mds: add cap acquisition throttled event to MDR
- 01:48 PM CephFS Backport #62572 (Resolved): pacific: mds: add cap acquisition throttled event to MDR
- 01:48 PM CephFS Bug #58678 (Resolved): cephfs_mirror: local and remote dir root modes are not same
- 01:48 PM CephFS Backport #59408 (Resolved): reef: cephfs_mirror: local and remote dir root modes are not same
- 01:48 PM CephFS Backport #59000 (Resolved): quincy: cephfs_mirror: local and remote dir root modes are not same
- 01:48 PM CephFS Backport #59001 (Resolved): pacific: cephfs_mirror: local and remote dir root modes are not same
- 01:47 PM CephFS Bug #62072 (Resolved): cephfs-mirror: do not run concurrent C_RestartMirroring context
- 01:45 PM CephFS Backport #62948 (Resolved): quincy: cephfs-mirror: do not run concurrent C_RestartMirroring context
- 01:45 PM CephFS Backport #62950 (Resolved): reef: cephfs-mirror: do not run concurrent C_RestartMirroring context
- 01:45 PM CephFS Backport #62949 (Resolved): pacific: cephfs-mirror: do not run concurrent C_RestartMirroring context
- 01:45 PM CephFS Bug #61753 (Resolved): Better help message for cephfs-journal-tool -help command for --rank option.
- 01:44 PM CephFS Backport #61804 (Resolved): quincy: Better help message for cephfs-journal-tool -help command for...
- 01:44 PM CephFS Backport #61805 (Resolved): reef: Better help message for cephfs-journal-tool -help command for -...
- 01:43 PM CephFS Backport #61803 (Resolved): pacific: Better help message for cephfs-journal-tool -help command fo...
- 01:42 PM bluestore Backport #63440 (In Progress): quincy: resharding RocksDB after upgrade to Pacific breaks OSDs
- 01:41 PM bluestore Backport #63442 (In Progress): reef: resharding RocksDB after upgrade to Pacific breaks OSDs
- 01:41 PM bluestore Backport #63441 (In Progress): pacific: resharding RocksDB after upgrade to Pacific breaks OSDs
- 11:36 AM CephFS Bug #63013: I/O hangs while I try to write to a file with two processes, maybe due to metadata ...
- fuchen ma wrote:
> Can I change the version of ceph by using cephadm?
I believe using
> cephadm shell -- ceph ... - 08:13 AM CephFS Backport #63513: pacific: MDS slow requests for the internal 'rename' requests
- Need to wait for https://tracker.ceph.com/issues/62859 to get merged first.
- 08:12 AM CephFS Backport #63513 (In Progress): pacific: MDS slow requests for the internal 'rename' requests
- https://github.com/ceph/ceph/pull/54517
- 08:05 AM CephFS Backport #63512 (Resolved): pacific: client: queue a delay cap flushing if there are dirty caps/s...
- https://github.com/ceph/ceph/pull/54472
- 06:57 AM rgw Documentation #63511 (New): radosgw-admin doc is out of sync with actual command
- see: https://docs.ceph.com/en/latest/man/8/radosgw-admin/
e.g. pubsub documentation exists - 04:59 AM CephFS Feature #63468 (Fix Under Review): mds/purgequeue: add l_pq_executed_ops counter
- 03:15 AM CephFS Backport #62952 (In Progress): reef: kernel/fuse client using ceph ID with uid restricted MDS cap...
- 03:13 AM CephFS Backport #63262 (In Progress): reef: MDS slow requests for the internal 'rename' requests
- 03:11 AM CephFS Backport #63262 (New): reef: MDS slow requests for the internal 'rename' requests
- 01:05 AM CephFS Backport #63262 (In Progress): reef: MDS slow requests for the internal 'rename' requests
- 03:12 AM CephFS Backport #63261 (New): quincy: MDS slow requests for the internal 'rename' requests
- 03:08 AM CephFS Backport #63261 (In Progress): quincy: MDS slow requests for the internal 'rename' requests
- 01:02 AM CephFS Backport #63261: quincy: MDS slow requests for the internal 'rename' requests
- This needs https://tracker.ceph.com/issues/62858 to get merged first.
- 03:11 AM CephFS Backport #62951 (In Progress): quincy: kernel/fuse client using ceph ID with uid restricted MDS c...
- 01:07 AM Linux kernel client Bug #61333 (Fix Under Review): kernel/fuse client using ceph ID with uid restricted MDS caps cann...
- The V1 patchwork link: https://patchwork.kernel.org/project/ceph-devel/list/?series=799865
- 12:40 AM CephFS Backport #63274 (In Progress): quincy: client: queue a delay cap flushing if there are dirty caps...
- 12:40 AM CephFS Backport #63273 (In Progress): reef: client: queue a delay cap flushing if there are dirty caps/s...
- 12:21 AM CephFS Bug #63510 (Need More Info): ceph fs (meta) data inconsistent with python shutil.copy()
- This was reported by Frank via the *ceph-users* mail list: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/t...
11/12/2023
- 04:23 PM rgw Bug #63314: kafka crashed during message callback in teuthology
- according to the log, centos9 has this version: librdkafka-1.6.1.
and according to the git log, this version has the... - 07:34 AM RADOS Bug #63509 (Fix Under Review): osd/scrub: some replica states specified incorrectly
- ReplicaActiveOp sub-states are specified with the wrong parent attribute.
- 07:32 AM RADOS Backport #63372 (Resolved): pacific: use-after-move in OSDService::build_incremental_map_msg()
- 07:31 AM RADOS Backport #63370 (In Progress): quincy: use-after-move in OSDService::build_incremental_map_msg()
- https://github.com/ceph/ceph/pull/54269 approved and pending tests
11/11/2023
- 01:21 PM rbd Bug #63431: [test] clean up RBD_MIRROR_MODE vs MIRROR_IMAGE_MODE in rbd_mirror_helpers.sh and rel...
- Aniket Singh Rawat wrote:
> Hello I would like to work on this issue.
Hey Aniket. Are you still working on the is... - 10:22 AM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- Maybe it also makes sense to change the title of this issue to
"Uploads by AWS Go SDK v2 fail with XAmzContentS... - 10:18 AM rgw Bug #63153: Uploads by AWS Go SDK v2 fail with XAmzContentSHA256Mismatch when Checksum is requested
- With the terraform issue arising due to their switch to the current AWS Go SDK v2 see (https://github.com/hashicorp/t...
- 09:31 AM rgw Bug #63495: Object gateway doesn't support POST DELETE
- I am finding out, will get back to you.
11/10/2023
- 10:07 PM Orchestrator Backport #63508 (New): reef: cephadm: warn users about draining a host explicitly listed in a ser...
- 10:06 PM Orchestrator Feature #63220 (Pending Backport): cephadm: warn users about draining a host explicitly listed in...
- 05:13 PM rgw Fix #63507: 304 response is not RFC9110 compliant
- Here is a link for the PR mentioned - https://github.com/ceph/ceph/pull/35284
- 05:12 PM rgw Fix #63507 (Duplicate): 304 response is not RFC9110 compliant
- When returning 304 (NotModified), both the s3 and s3website endpoints violate RFC9110 (https://www.rfc-editor.org/rfc/rfc...
- 04:48 PM CephFS Bug #63506 (Fix Under Review): qa/cephfs: some tests test_volumes overwrite build/keyring
- When a new client is being created, its keyring must be written in a new file. This requires calling @teuthology.orch...
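For illustration of the approach described in the entry above, here is a minimal Python sketch (not the actual qa/teuthology helper): give each newly created client its own keyring file instead of over-writing the shared build/keyring. The per-client file layout and the capability strings are assumptions.
<pre><code class="python">
# Minimal sketch, not the qa helper itself: store each new client's keyring in
# its own file so the shared build/keyring is never over-written.
# The build/keyring.client.<id> layout and the cap strings are assumptions.
import subprocess
from pathlib import Path

def create_client_keyring(client_id: str, build_dir: str = "build") -> Path:
    entity = f"client.{client_id}"
    keyring_path = Path(build_dir) / f"keyring.{entity}"   # hypothetical per-client file
    # 'ceph auth get-or-create' prints the keyring for the entity on stdout
    out = subprocess.run(
        ["ceph", "auth", "get-or-create", entity,
         "mon", "allow r", "mds", "allow", "osd", "allow rw"],
        check=True, capture_output=True, text=True,
    ).stdout
    keyring_path.write_text(out)
    return keyring_path
</code></pre>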
- 04:44 PM CephFS Bug #63504: qa/vstart_runner.py: build/keyring is over-written
- The bug actually is not in the helper method but in how that method is used in test_volumes.py.
- 12:49 PM CephFS Bug #63504 (Rejected): qa/vstart_runner.py: build/keyring is over-written
- 12:38 PM CephFS Bug #63504 (Rejected): qa/vstart_runner.py: build/keyring is over-written
- When test_subvolume_evict_client is run on a dev machine, this test over-writes build/keyring, making the Ceph cluster inoper...
- 01:08 PM mgr Feature #63505: influx: tag device_class of ceph_daemon_stats
- https://github.com/ceph/ceph/pull/54449
- 12:41 PM mgr Feature #63505 (New): influx: tag device_class of ceph_daemon_stats
- It would be helpful to have the device class as a tag in the InfluxDB stats, like in Prometheus.
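As a sketch of what the request amounts to, the snippet below builds an InfluxDB line-protocol point that carries a device_class tag next to a ceph_daemon tag; the measurement/tag/field names are illustrative assumptions, not the mgr/influx module's actual code.
<pre><code class="python">
# Illustrative only: an InfluxDB line-protocol point for ceph_daemon_stats with a
# device_class tag (the proposed addition). Metric/tag names here are assumptions.
def daemon_stat_point(daemon: str, device_class: str, field: str, value: float) -> str:
    # line protocol: <measurement>,<tag_key>=<tag_value>,... <field_key>=<field_value>
    return (f"ceph_daemon_stats,ceph_daemon={daemon},device_class={device_class} "
            f"{field}={value}")

print(daemon_stat_point("osd.12", "ssd", "op_latency", 0.0042))
# ceph_daemon_stats,ceph_daemon=osd.12,device_class=ssd op_latency=0.0042
</code></pre>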
- 10:52 AM devops Bug #61470: Quincy: Missing repository for Ubuntu 22.04
- We now have a DAEMON_OLD_VERSION warning because our hosts on Ubuntu 20.04 got 17.2.7 from the Ceph APT repository, b...
- 09:01 AM Ceph Bug #63327: compiler cython error
- this issue still exists on 17.2.7
- 07:45 AM Ceph Bug #63503 (New): data corruption after rbd migration
- we're struggling with a strange issue which I think might be a bug
causing snapshot data corruption while migrating RB... - 07:27 AM CephFS Bug #63259: mds: failed to store backtrace and force file system read-only
- Sorry, it should be this link https://pulpito.ceph.com/yuriw-2023-10-16_14:43:00-fs-wip-yuri4-testing-2023-10-11-0735...
- 12:32 AM CephFS Bug #63259: mds: failed to store backtrace and force file system read-only
- Kotresh Hiremath Ravishankar wrote:
> Hi Xiubo,
>
> The logs for the job link in the description is not matching ... - 06:38 AM Dashboard Bug #63469 (Resolved): mgr/dashboard: fix rgw multi-site import form helper
- 06:38 AM Dashboard Backport #63470 (Resolved): reef: mgr/dashboard: fix rgw multi-site import form helper
- 05:21 AM CephFS Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
- Realized that ubuntu 20.04 doesn't have the kclient changes that add functionality to fetch client-id via an xattr q...
- 03:54 AM CephFS Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
- So, this failure was also seen in earlier runs. Testing up a fix.
- 04:14 AM CephFS Bug #63411: qa: flush journal may cause timeouts of `scrub status`
- "GREEN Jobs after bumping timeout":https://pulpito.ceph.com/mchangir-2023-11-10_02:21:55-fs:workload-main-distro-defa...
- 03:11 AM RADOS Bug #62119: timeout on reserving replicas
- The logs of the recent failures indicate a different issue than the one the tracker was raised with.
osd.1 tries to... - 01:19 AM Ceph Bug #63502 (New): Regression: Permanent KeyError: 'TYPE' : return self.blkid_api['TYPE'] == 'part'
- A bug reported long ago, apparently fixed, has re-appeared when migrating within Quincy to 17.2.7
On Tue, 7 Nov 20... - 12:20 AM CephFS Bug #44565 (Resolved): src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state...
- 12:20 AM CephFS Backport #62523 (Resolved): pacific: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_...
11/09/2023
- 10:31 PM Orchestrator Bug #63151: reef: cephadm: deployment of nfs over rgw fails with "free(): invalid pointer"
- I think what you're experiencing is:
https://tracker.ceph.com/issues/63394
This fix hasn't merged yet, but it p... - 09:11 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- Another one from the reef.1 batch: https://pulpito.ceph.com/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-defau...
- 09:06 PM RADOS Bug #61774: centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
- From reef.1 testing: https://pulpito.ceph.com/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448...
- 08:57 PM RADOS Bug #63501 (New): ceph::common::leak_some_memory() got interpreted as an actual leak
- 1. We run the osd.0 under valgrind in the @exit-on-first-error@ mode,
2. then we provoke a leak by @ceph tell osd.0 ... - 08:34 PM RADOS Bug #47589: radosbench times out "reached maximum tries (800) after waiting for 4800 seconds"
- Reoccurrence from 18.2.1 testing:
/a/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448372 - 08:28 PM Infrastructure Bug #63500 (New): No module named 'tasks.nvme_loop'
- /a/yuriw-2023-10-24_00:11:03-rados-wip-yuri2-testing-2023-10-23-0917-distro-default-smithi/7435995/...
- 07:48 PM ceph-volume Bug #63391 (Resolved): OSDs fail to be created on PVs or LVs in v17.2.7 due to failure in ceph-vo...
- 07:59 AM ceph-volume Bug #63391 (Pending Backport): OSDs fail to be created on PVs or LVs in v17.2.7 due to failure in...
- 07:48 PM ceph-volume Backport #63490 (Resolved): quincy: OSDs fail to be created on PVs or LVs in v17.2.7 due to failu...
- 08:03 AM ceph-volume Backport #63490 (In Progress): quincy: OSDs fail to be created on PVs or LVs in v17.2.7 due to fa...
- 08:00 AM ceph-volume Backport #63490 (Resolved): quincy: OSDs fail to be created on PVs or LVs in v17.2.7 due to failu...
- https://github.com/ceph/ceph/pull/54430
- 07:41 PM mgr Backport #62885 (Resolved): reef: [pg-autoscaler] Peformance issue with the autoscaler when we ha...
- 07:37 PM RADOS Backport #62479 (Resolved): quincy: ceph status does not report an application is not enabled on ...
- 07:36 PM RADOS Backport #62478 (Resolved): reef: ceph status does not report an application is not enabled on th...
- 07:01 PM Orchestrator Bug #63499 (New): cephadm: rgw multisite test "ceph rgw realm bootstrap" fails with "KeyError: 'r...
- seen this in two separate runs with totally different PRs included, so fairly confident it's caused by something alre...
- 06:22 PM CephFS Backport #57110 (Resolved): pacific: mds: handle deferred client request core when mds reboot
- 04:57 PM CephFS Backport #57110: pacific: mds: handle deferred client request core when mds reboot
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53362
merged - 05:02 PM RADOS Backport #63372: pacific: use-after-move in OSDService::build_incremental_map_msg()
- Ronen Friedman wrote:
> PR# 54268
merged - 04:59 PM CephFS Backport #62523: pacific: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || st...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53662
merged - 04:58 PM CephFS Backport #61803: pacific: Better help message for cephfs-journal-tool -help command for --rank op...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53645
merged - 04:58 PM CephFS Backport #62949: pacific: cephfs-mirror: do not run concurrent C_RestartMirroring context
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53640
merged - 04:57 PM CephFS Backport #59001: pacific: cephfs_mirror: local and remote dir root modes are not same
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53270
merged - 04:56 PM CephFS Backport #62572: pacific: mds: add cap acquisition throttled event to MDR
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/53169
merged - 04:00 PM rgw Bug #63495: Object gateway doesn't support POST DELETE
- Gareth Humphries wrote:
> POST /?delete HTTP/1.1
> Host: <bucket>.<domain>:<port>
thanks. can you confirm that r... - 03:19 PM rgw Bug #63495: Object gateway doesn't support POST DELETE
- ...
- 03:05 PM rgw Bug #63495: Object gateway doesn't support POST DELETE
- There is a HOST header present (along with many others), I removed them, both for brevity and because they contain po...
- 02:59 PM rgw Bug #63495 (Need More Info): Object gateway doesn't support POST DELETE
- rgw does support the DeleteObjects API. you can find lots of test coverage from calls to boto3's @delete_objects()@ i...
- 02:37 PM rgw Bug #63495 (Need More Info): Object gateway doesn't support POST DELETE
- I can see a case for this being a FR, but since it's there in the spec and not implemented in ceph, bug seems more ap...
- 03:36 PM rgw Bug #63314: kafka crashed during message callback in teuthology
- will attempt to fix the issue by using librdkafka as a git submodule,
using a version that has the fix above. - 03:11 PM rgw Bug #63314 (Triaged): kafka crashed during message callback in teuthology
- 03:35 PM rgw Backport #63498 (New): reef: test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueu...
- 03:30 PM rgw Bug #62449 (Pending Backport): test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQu...
- 03:27 PM rgw Bug #62449: test/cls_2pc_queue: TestCls2PCQueue.MultiProducer and TestCls2PCQueue.AsyncConsumer f...
- Yuval, since we merged https://github.com/ceph/ceph/pull/53265 for this, should we go ahead and move this to pending ...
- 03:22 PM rgw Bug #62822 (Won't Fix): CORS doesn't work when used with Keystone and implicit tenants scenario.
- 03:17 PM rgw Bug #63438 (Fix Under Review): RGW: Cloud sync module fails to sync folders
- 03:09 PM rgw Backport #63497 (New): reef: rgw: s3 notifications not generated on object transition
- 03:09 PM rgw Backport #63496 (New): quincy: rgw: s3 notifications not generated on object transition
- 03:03 PM rgw Bug #58641 (Pending Backport): rgw: s3 notifications not generated on object transition
- backports will need additional fix from https://github.com/ceph/ceph/pull/54372
- 03:02 PM rgw Bug #63458 (Resolved): rgwlc: currentversion expiration incorrectly removes objects (rather than ...
- 02:56 PM Infrastructure Bug #63492 (In Progress): Not able to access and mount magna002 ceph directory
- 10:51 AM Infrastructure Bug #63492 (In Progress): Not able to access and mount magna002 ceph directory
- Description of problem
Not able to access and mount magna002 ceph directory
How to reproduce
Mount magna002
... - 02:18 PM CephFS Bug #63259: mds: failed to store backtrace and force file system read-only
- Hi Xiubo,
The logs for the job link in the description do not match the log snippet provided by you.
I see... - 02:10 PM RADOS Bug #62119: timeout on reserving replicas
- https://pulpito.ceph.com/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448316/
- 02:03 PM Dashboard Cleanup #59142: mgr/dashboard: fix e2e for dashboard v3
- https://pulpito.ceph.com/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448355...
- 02:00 PM ceph-volume Backport #63491 (Resolved): reef: OSDs fail to be created on PVs or LVs in v17.2.7 due to failure...
- 08:03 AM ceph-volume Backport #63491 (In Progress): reef: OSDs fail to be created on PVs or LVs in v17.2.7 due to fail...
- 08:00 AM ceph-volume Backport #63491 (Resolved): reef: OSDs fail to be created on PVs or LVs in v17.2.7 due to failure...
- https://github.com/ceph/ceph/pull/54429
- 01:53 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- From @/a/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448450/teuthology.log@:...
- 12:42 PM CephFS Bug #62501: pacific(?): qa: mgr-osd-full causes OSD aborts due to ENOSPC (fs/full/subvolume_snaps...
- Dhairya,
I will take this one as I am debugging the same issue on quincy branch.
https://tracker.ceph.com/issue... - 12:41 PM CephFS Bug #63132: qa: subvolume_snapshot_rm.sh stalls when waiting for OSD_FULL warning
- It is strange that the tests are passing in the main branch.
On the main branch - Available is 997 MiB ... - 12:33 PM mgr Bug #59580: memory leak (RESTful module, maybe others?)
- Adrien, can you please add a massif output file from the mgr (one without restful enabled) that is leaking memory?
- 11:53 AM mgr Bug #59580: memory leak (RESTful module, maybe others?)
- Adrien Georget wrote:
> Is there a way to limit memory usage for MGR?
As a temporary solution, I have limited the... - 11:50 AM Ceph Feature #63344: Set and manage nvmeof gw - controller ids ranges
- Another change to this request. By default, cephadm should set 2K ids per GW, i.e. GW 1 will get 1..2048, GW 2 wi...
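A small Python sketch of the allocation scheme described above (2048 controller ids per gateway, handed out in consecutive blocks); the helper name is hypothetical.
<pre><code class="python">
# Sketch of the proposed default allocation: 2048 controller ids per gateway,
# assigned in consecutive blocks (GW 1 -> 1..2048, GW 2 -> 2049..4096, ...).
IDS_PER_GW = 2048

def controller_id_range(gw_index: int, ids_per_gw: int = IDS_PER_GW) -> range:
    """Inclusive controller-id block for a 1-based gateway index (hypothetical helper)."""
    start = (gw_index - 1) * ids_per_gw + 1
    return range(start, start + ids_per_gw)

r = controller_id_range(2)
print(r.start, r[-1])  # 2049 4096
</code></pre>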
- 11:50 AM Dashboard Bug #63348 (Resolved): mgr/dashboard: update rgw multisite import form helper info
- 11:48 AM Ceph Bug #63494 (Fix Under Review): all: daemonizing may release CephContext:: _fork_watchers_lock whe...
- 11:46 AM Ceph Bug #63494 (Fix Under Review): all: daemonizing may release CephContext:: _fork_watchers_lock whe...
- Was debugging a ceph-fuse crash reported downstream which has the following backtrace:...
- 11:24 AM Ceph Bug #63493: Problem with Pgs Deep-scrubbing ceph
- Abu Sayed wrote:
> Hi ,
>
> We operate a Ceph cluster running the Octopus version (latest 15.2.17). The setup inc... - 10:59 AM Ceph Bug #63493 (New): Problem with Pgs Deep-scrubbing ceph
- Hi,
We operate a Ceph cluster running the Octopus version (latest 15.2.17). The setup includes *13 hosts*, totali... - 11:15 AM CephFS Feature #62670 (Need More Info): [RFE] cephfs should track and expose subvolume usage and quota
- 11:12 AM CephFS Feature #62670: [RFE] cephfs should track and expose subvolume usage and quota
- Hi Paul/Venky,
The mgr/volumes module exposes two APIs to get the information asked here.
1. All subvolume related info... - 09:52 AM crimson Bug #61825 (Resolved): SnapTrim leak clones
- 09:51 AM crimson Bug #61846 (Resolved): ceph config show osd returns nothing for Crimson OSD
- 09:48 AM crimson Bug #63299 (Fix Under Review): The lifecycle of SnapTrimObjSubEvent::WaitRepop should be extended...
- 08:20 AM crimson Bug #63299: The lifecycle of SnapTrimObjSubEvent::WaitRepop should be extended in case of interru...
- osd.3:
https://pulpito.ceph.com/matan-2023-11-08_11:46:17-crimson-rados-wip-matanb-crimson-do_osd_ops_execute-v3-d... - 09:26 AM rgw Bug #62760 (Resolved): versioned bucket stats can be incorrect after reshard or radosgw-admin buc...
- The fix for this was included as part of the fix for https://tracker.ceph.com/issues/62075. It was already backported...
- 08:18 AM crimson Bug #61653: PG is printed after destruction
- https://pulpito.ceph.com/matan-2023-11-08_11:46:17-crimson-rados-wip-matanb-crimson-do_osd_ops_execute-v3-distro-crim...
- 08:17 AM crimson Bug #61653 (Fix Under Review): PG is printed after destruction
- 08:11 AM rbd Bug #63422 (Fix Under Review): librbd crash in journal discard wait_event
- 05:58 AM CephFS Feature #62925: cephfs-journal-tool: Add preventive measures in the tool to avoid corruting a cep...
- Bumping priority since it's essential that we have this functionality ASAP.
- 05:43 AM Orchestrator Backport #63489 (Resolved): reef: Add github command for rook e2e jenkins job
- 04:22 AM CephFS Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
- This failure is odd - nothing has changed w.r.t. DEBUGFS_META_DIR when it got merged in commit...
- 04:19 AM CephFS Bug #63488 (In Progress): smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
- 03:51 AM CephFS Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- Venky Shankar wrote:
> Milind Changire wrote:
> > Patrick Donnelly wrote:
> > > Milind Changire wrote:
> > > > Pa...
11/08/2023
- 10:20 PM rbd Backport #63382 (Resolved): pacific: mgr/rbd_support: recovery from client blocklisting halts aft...
- 09:37 PM rbd Backport #63382: pacific: mgr/rbd_support: recovery from client blocklisting halts after MirrorSn...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54293
merged - 10:20 PM rbd Backport #63351 (Resolved): pacific: "rbd feature disable" remote request hangs when proxied to r...
- 09:36 PM rbd Backport #63351: pacific: "rbd feature disable" remote request hangs when proxied to rbd-nbd
- Ilya Dryomov wrote:
> https://github.com/ceph/ceph/pull/54256
merged - 10:19 PM rbd Backport #63385 (Resolved): pacific: [test][rbd] test recovery of rbd_support module from repeate...
- 09:38 PM rbd Backport #63385: pacific: [test][rbd] test recovery of rbd_support module from repeated blocklist...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/54294
merged - 10:19 PM CephFS Bug #63488: smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not defined"
- /a/yuriw-2023-11-07_20:47:24-smoke-reef-release-distro-default-smithi/7451519
- 10:18 PM CephFS Bug #63488 (Fix Under Review): smoke test fails from "NameError: name 'DEBUGFS_META_DIR' is not d...
- /a/yuriw-2023-11-07_20:47:24-smoke-reef-release-distro-default-smithi/7451517...
- 09:38 PM RADOS Bug #59196: ceph_test_lazy_omap_stats segfault while waiting for active+clean
- /a/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448518
- 09:30 PM RADOS Bug #62992: Heartbeat crash in reset_timeout and clear_timeout
- /a/yuriw-2023-11-05_15:32:58-rados-reef-release-distro-default-smithi/7448450
/a/yuriw-2023-11-05_15:32:58-rados-ree... - 08:33 PM Ceph Feature #62019 (Fix Under Review): ceph cli: please report the fullest OSD in addition to overall...
- IMO this information is better provided on a per-pool basis; hence it is implemented via the 'ceph df' command.
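As an illustration of the kind of data involved (not the proposed 'ceph df' change itself), the sketch below pulls per-OSD utilization from 'ceph osd df -f json' and reports the fullest one; the 'nodes'/'utilization' field names are assumptions about the JSON layout.
<pre><code class="python">
# Illustration only, not the proposed 'ceph df' change: report the fullest OSD
# from 'ceph osd df -f json'. The 'nodes' / 'utilization' field names are assumed.
import json
import subprocess

def fullest_osd() -> tuple[str, float]:
    out = subprocess.run(["ceph", "osd", "df", "-f", "json"],
                         check=True, capture_output=True, text=True).stdout
    nodes = json.loads(out).get("nodes", [])
    top = max(nodes, key=lambda n: n.get("utilization", 0.0))
    return top.get("name", "?"), top.get("utilization", 0.0)

if __name__ == "__main__":
    name, util = fullest_osd()
    print(f"fullest OSD: {name} at {util:.1f}% utilization")
</code></pre>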
- 05:49 PM Ceph Feature #63487 (New): allow idmap overrides in nfs-ganesha configuration
- Description of problem:
idmapd.conf allows controlling the NFSv4.x server side id mapping settings such as adding ... - 04:19 PM rgw Bug #62136: "test pushing kafka s3 notification on master" - no events are sent
- also failing for reef 18.2.1:
http://qa-proxy.ceph.com/teuthology/yuriw-2023-11-07_20:50:58-rgw-reef-release-distro-... - 03:51 PM CephFS Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- Milind Changire wrote:
> Patrick Donnelly wrote:
> > Milind Changire wrote:
> > > Patrick Donnelly wrote:
> > > >... - 12:31 PM CephFS Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- Patrick Donnelly wrote:
> Milind Changire wrote:
> > Patrick Donnelly wrote:
> > > [...]
> > >
> > > /teutholog... - 03:48 PM CephFS Feature #58129 (Resolved): mon/FSCommands: support swapping file systems by name
- 03:31 PM CephFS Bug #57206 (Rejected): ceph_test_libcephfs_reclaim crashes during test
- https://github.com/ceph/ceph/pull/53648#issuecomment-1802128014
- 03:30 PM CephFS Bug #61999 (Rejected): client: readdir_r_cb: get rstat for dir only if using rbytes for size
- https://github.com/ceph/ceph/pull/54179#issuecomment-1802126318
- 03:27 PM CephFS Bug #62329 (Rejected): MDSAuthCaps: minor improvements
- https://github.com/ceph/ceph/pull/54143#issuecomment-1802120736
- 03:21 PM rgw Bug #63486 (New): reef: test_lifecycle_cloud_transition_large_obj FAILED
- http://qa-proxy.ceph.com/teuthology/yuriw-2023-11-07_20:50:58-rgw-reef-release-distro-default-smithi/7451553/teutholo...
- 02:30 PM rgw Bug #59474: Cannot delete object using multi-delete operation on a bucket with policy
- Running into the same issue in 18.2.0.
- 02:21 PM rgw Bug #63485 (Fix Under Review): inaccessible bucket: Error reading IAM Policy: Terminate parsing d...
- log shared from ceph-users mailing list:...
- 01:50 PM rbd Bug #63422: librbd crash in journal discard wait_event
- As a consequence of the above, a very simple workaround for this crash appears to be to set rbd_skip_partial_discard...
- 11:55 AM Orchestrator Backport #63484 (Resolved): reef: Adding support to automatically discover storage classes on Roo...
- 11:52 AM Orchestrator Bug #63483 (Resolved): Add github command for rook e2e jenkins job
- Adding command for rook e2e jenkins job
- 10:17 AM CephFS Backport #61793 (Resolved): pacific: mgr/snap_schedule: catch all exceptions to avoid crashing mo...
- 10:17 AM CephFS Backport #61795 (Resolved): quincy: mgr/snap_schedule: catch all exceptions to avoid crashing module
- 10:11 AM CephFS Backport #61990 (Resolved): reef: snap-schedule: allow retention spec to specify max number of sn...
- 10:11 AM CephFS Backport #61991 (Resolved): quincy: snap-schedule: allow retention spec to specify max number of ...
- 09:11 AM ceph-volume Bug #63391: OSDs fail to be created on PVs or LVs in v17.2.7 due to failure in ceph-volume raw list
- -As noted in the GH issue and PR, I don't think the linked PR fixes the reported crash. I have pushed https://github....
- 07:14 AM CephFS Backport #62571: quincy: ceph_fs.h: add separate owner_{u,g}id fields
- https://github.com/ceph/ceph/pull/54411
- 07:06 AM Ceph Backport #63477: quincy: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_num...
- https://github.com/ceph/ceph/pull/54411
- 06:51 AM Ceph Backport #63478: pacific: MClientRequest: properly handle ceph_mds_request_head_legacy for ext_nu...
- https://github.com/ceph/ceph/pull/54410
- 06:22 AM CephFS Bug #63482 (Fix Under Review): qa: fs/nfs suite needs debug mds/client
- 06:20 AM CephFS Bug #63482 (Fix Under Review): qa: fs/nfs suite needs debug mds/client
- 04:50 AM CephFS Bug #63233: mon|client|mds: valgrind reports possible leaks in the MDS
- Seen in reef-release (18.2.1) validation branch - https://pulpito.ceph.com/vshankar-2023-11-07_05:14:36-fs-reef-relea...
- 04:49 AM CephFS Bug #63233: mon|client|mds: valgrind reports possible leaks in the MDS
- Bumping prio. Rishabh, PTAL at the earliest.
- 02:20 AM CephFS Backport #63475 (In Progress): quincy: cephfs-mirror: ceph_getxattr call always return -61 (ENODA...
- 02:18 AM CephFS Backport #63474 (In Progress): reef: cephfs-mirror: ceph_getxattr call always return -61 (ENODATA...