Activity
From 09/22/2022 to 10/21/2022
10/21/2022
- 09:16 PM Bug #57914 (Resolved): centos 8 build failed
- I see it on main and pacific
https://jenkins.ceph.com/job/ceph-dev-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILAB...
- 06:26 PM bluestore Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
- Hi Sven,
Thanks for reporting telemetry! The issue you reported is tracked in https://tracker.ceph.com/issues/5620...
- 04:41 PM bluestore Bug #53002: crash BlueStore::Onode::put from BlueStore::TransContext::~TransContext
- We have almost daily crashes on our octopus cluster, which are also reported via telemetry, which look like this bug,...
- 05:31 PM Backport #57505 (Resolved): quincy: openSUSE Leap 15.x needs to explicitly specify gcc-11
- 03:26 PM Backport #57505: quincy: openSUSE Leap 15.x needs to explicitly specify gcc-11
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48058
merged
- 05:28 PM rbd Bug #52915: rbd du versus rbd diff values wildly different when snapshots are present
- Alex Yarbrough wrote:
> If I _rbd du_ all of the ~200 images that I have, and sum the result, my total is about 24 T...
- 03:25 PM rbd Bug #52915: rbd du versus rbd diff values wildly different when snapshots are present
- Ilya, first thank you for the time you put into your messages. I am aware of the issue regarding RBD object size vers...
- 04:52 PM CephFS Backport #57719: quincy: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48327
merged
- 04:19 PM RADOS Bug #55809: "Leak_IndirectlyLost" valgrind report on mon.c
- /a/yuriw-2022-10-12_16:24:50-rados-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/7063948/
- 04:16 PM RADOS Bug #57913 (Duplicate): Thrashosd: timeout 120 ceph --cluster ceph osd pool rm unique_pool_2 uniq...
- /a/yuriw-2022-10-12_16:24:50-rados-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/7063868/
rados/t...
- 03:57 PM rbd Backport #57843 (Resolved): quincy: rbd CLI inconsistencies affecting "--namespace" arg
- 03:29 PM rbd Backport #57843: quincy: rbd CLI inconsistencies affecting "--namespace" arg
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48458
merged
- 03:55 PM Orchestrator Bug #56951: rook/smoke: Updating cephclusters/rook-ceph is forbidden
- /a/yuriw-2022-10-12_16:24:50-rados-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/7063866/
- 03:36 PM Orchestrator Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- /a/yuriw-2022-10-12_16:24:50-rados-wip-yuri8-testing-2022-10-12-0718-quincy-distro-default-smithi/7063706/
- 12:41 PM Dashboard Bug #57912 (Fix Under Review): mgr/dashboard: Dashboard creation of NFS exports with RGW backend ...
- 12:12 PM Dashboard Bug #57912 (Fix Under Review): mgr/dashboard: Dashboard creation of NFS exports with RGW backend ...
- When attempting to create a NFS export with RGW as the backend from Dashboard, this fails as per the description.
Ho...
- 10:39 AM rgw Bug #57911 (Pending Backport): Segmentation fault when uploading file with bucket policy on Quincy
- RGW crashes when a file is uploaded and a bucket policy has been set up.
The crash has been "reproduced for latest...
- 10:28 AM bluestore Bug #57895: OSD crash in Onode::put()
- dongdong tao wrote:
> Yaarit Hatuka wrote:
> > Status changed from "New" to "Duplicate" since this issue duplicates...
- 12:20 AM bluestore Bug #57895: OSD crash in Onode::put()
- Yaarit Hatuka wrote:
> Status changed from "New" to "Duplicate" since this issue duplicates https://tracker.ceph.com...
- 09:53 AM Orchestrator Bug #57910 (New): ingress: HAProxy fails to start because keepalived IP address not yet available...
- After deploying a new cluster _sometimes_ HAProxy fails to start on ingress nodes:...
- 08:41 AM RADOS Bug #57699: slow osd boot with valgrind (reached maximum tries (50) after waiting for 300 seconds)
- @Nitzan Mordechai this is probably similar to,
https://tracker.ceph.com/issues/52948 and https://tracker.ceph.com/is...
- 07:47 AM RADOS Fix #57040 (Resolved): osd: Update osd's IOPS capacity using async Context completion instead of ...
- 07:46 AM RADOS Backport #57443 (Resolved): quincy: osd: Update osd's IOPS capacity using async Context completio...
- 06:03 AM Orchestrator Feature #55490: cephadm: allow passing grafana cert and frontend-api-url in spec
- The OP mentioned @set-grafana-frontend-api-url@ but missed mentioning setting @set-grafana-api-url@ from a spec which...
10/20/2022
- 11:33 PM RADOS Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- Notes from the rados suite review:
We may need to check if we're shutting down while sending pg stats; if so, we d...
- 10:47 PM bluestore Feature #57785: fragmentation score in metrics
- I'm just a user so I can't answer some of the questions. I'll fill in what I know though.
1. Not sure
3. No priva...
- 10:26 PM bluestore Feature #57785: fragmentation score in metrics
- Hey Kevin (and Vikhyat),
I have a few questions regarding the fragmentation score:
1. Where are all the places ...
- 09:25 PM rbd Bug #52915: rbd du versus rbd diff values wildly different when snapshots are present
- Going back to CephRBD_NVMe/vm-101-disk-0 image, your "rbd du" output makes perfect sense to me based on what you said...
- 09:12 PM rbd Bug #52915: rbd du versus rbd diff values wildly different when snapshots are present
- Hi Alex,
"rbd diff CephRBD_NVMe/vm-101-disk-0" reports the allocated areas of the image without taking snapshots i... - 03:40 PM rbd Bug #52915: rbd du versus rbd diff values wildly different when snapshots are present
- Greetings all. I have read through the related issues that are resolved. I do not believe this issue is duplicated or...
- 06:11 PM Orchestrator Feature #57909 (Resolved): cephadm: make logging host refresh data to debug logs configurable
- The amount of data we log in the debug logs when refreshing a host is too verbose, even for debug level. It renders t...
- 04:14 PM Support #57908 (New): rgw common prefix performance on large bucket
- Hi, I'm facing the same issue mentioned here:
https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/36P62BOOCJBVVJ...
- 04:09 PM ceph-volume Bug #57907: ceph-volume complains about "Insufficient space (<5GB)" on 1.75TB device
- I added a workaround screenshot for disabling Hotplug in the BIOS.
- 04:01 PM ceph-volume Bug #57907: ceph-volume complains about "Insufficient space (<5GB)" on 1.75TB device
- The problem is that in @util/device.py@ line 582, the call to @int(self.sys_api.get('size', 0))@ is always 0 if s...
- 03:15 PM ceph-volume Bug #57907 (Duplicate): ceph-volume complains about "Insufficient space (<5GB)" on 1.75TB device
- On a one week old working cluster 17.2.5, I try to add another host with 2 SSDs and 4 HDDs.
None of them is shown as...
- 03:07 PM RADOS Bug #57152 (Resolved): segfault in librados via libcephsqlite
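A minimal standalone sketch of the #57907 behavior quoted above, assuming the @int(self.sys_api.get('size', 0))@ default of 0 is what trips the size check; @MIN_SIZE@ and @usable()@ are hypothetical names, not ceph-volume source:
<pre>
# Hypothetical illustration, not ceph-volume code: if the collected sys_api
# dict lacks a populated 'size' key, the quoted default of 0 makes a 1.75TB
# device look like a 0-byte one and fail the "<5GB" minimum-size check.
MIN_SIZE = 5 * 1024**3  # the 5GB threshold named in the error message

def usable(sys_api):
    size = int(sys_api.get('size', 0))  # 0 whenever 'size' was never filled in
    return size >= MIN_SIZE

print(usable({'size': 1.75 * 1024**4}))  # True: 1.75TB reported correctly
print(usable({}))                        # False: missing key reads as 0 bytes
</pre>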
- 03:06 PM RADOS Backport #57373 (Resolved): pacific: segfault in librados via libcephsqlite
- 02:56 PM RADOS Backport #57373: pacific: segfault in librados via libcephsqlite
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48187
merged
- 03:01 PM Orchestrator Backport #57638: pacific: applying osd service spec with size filter fails if there's tiny (KB-si...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48243
merged
- 02:58 PM Orchestrator Backport #57639: pacific: cephadm: `ceph orch ps` doesn't list container versions in some cases
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48210
merged
- 02:55 PM ceph-volume Backport #57566: pacific: inventory a device get_partitions_facts called many times
- Guillaume Abrioux wrote:
> https://github.com/ceph/ceph/pull/48126
merged
- 02:53 PM ceph-volume Backport #57564: pacific: functional test lvm-centos8-filestore-create is broken
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48123
merged
- 02:45 PM Bug #57906 (New): ceph -s show too many executing tasks
- I got a lot of executing tasks with ceph -s but I'm sure there is nothing running. How can I clean these messages? Als...
- 02:24 PM rgw Bug #57770 (Triaged): RGW (pacific) misplaces index entries after dynamically resharding bucket
- 02:24 PM rgw Bug #57770 (New): RGW (pacific) misplaces index entries after dynamically resharding bucket
- 02:21 PM rgw Bug #57783: multisite: data sync reports shards behind after source zone fully trims datalog
- related work in https://github.com/ceph/ceph/pull/47682 and https://github.com/ceph/ceph/pull/48397
- 02:20 PM rgw Bug #57804: Enabling sync on bucket not working
- I can only recommend running the command until it succeeds
- 02:18 PM rgw Bug #57853 (Need More Info): multisite sync process block after long time running
- 02:16 PM rgw Bug #57901 (Fix Under Review): s3:ListBuckets response limited to 1000 buckets (by default) since...
- 02:11 PM rgw Bug #57231 (Resolved): Valgrind: jump on unitialized in s3select
- 01:51 PM bluestore Bug #57895 (Duplicate): OSD crash in Onode::put()
- Status changed from "New" to "Duplicate" since this issue duplicates https://tracker.ceph.com/issues/56382.
- 10:10 AM bluestore Bug #57895: OSD crash in Onode::put()
- Please help to review this one, https://github.com/ceph/ceph/pull/48566
Here is the related log: https://pastebin....
- 01:30 PM rgw Bug #57905 (Pending Backport): multisite: terminate called after throwing an instance of 'ceph::b...
- example from rgw/multisite suite: http://qa-proxy.ceph.com/teuthology/cbodley-2022-10-19_23:28:37-rgw-wip-cbodley-tes...
- 10:54 AM bluestore Bug #56851: crash: int BlueStore::read_allocation_from_onodes(SimpleBitmap*, BlueStore::read_allo...
- @Sudhin - curious if you can reproduce the issue? If so it would be great to get OSD log with debug-bluestore set to ...
- 10:52 AM bluestore Bug #52464: FAILED ceph_assert(current_shard->second->valid())
- IMO this is rather related to DB sharding stuff introduced by https://github.com/ceph/ceph/pull/34006
Hence reassign...
- 10:46 AM bluestore Bug #52464: FAILED ceph_assert(current_shard->second->valid())
- Neha Ojha wrote:
> Gabi, I am assigning it to you for now, since this looks related to NCB.
> No, apparently this i...
- 09:49 AM bluestore Bug #57857 (Fix Under Review): KernelDevice::read doesn't translate error codes correctly
- 09:40 AM bluestore Bug #56382 (Fix Under Review): ONode ref counting is broken
- 09:10 AM bluestore Bug #56382 (Pending Backport): ONode ref counting is broken
- 06:33 AM CephFS Bug #54557 (Fix Under Review): scrub repair does not clear earlier damage health status
- 06:24 AM Dashboard Bug #57284 (Resolved): mgr/dashboard: 500 internal server error seen on ingress service creation ...
- 06:24 AM Dashboard Backport #57485 (Resolved): pacific: mgr/dashboard: 500 internal server error seen on ingress ser...
- 05:57 AM rgw Bug #57562: multisite replication issue on Quincy
- We have an example scenario here where one of the objects in a bucket failed to be synced to the secondary.
* Mdlog...
- 05:28 AM CephFS Backport #57716 (Resolved): pacific: libcephfs: incorrectly showing the size for snapdirs when st...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48413
Merged.
- 04:54 AM CephFS Backport #57874 (In Progress): quincy: Permissions of the .snap directory do not inherit ACLs
- 04:17 AM CephFS Backport #57723 (Resolved): pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48417
Merged.
10/19/2022
- 11:31 PM rbd Bug #57902 (Resolved): [rbd-nbd] add --snap-id option to "rbd device map" to allow mapping arbitr...
- As any snapshot in a non-user snapshot namespace, mirror snapshots are inaccessible to most rbd CLI commands. As suc...
- 11:16 PM rbd Bug #57066 (Fix Under Review): rbd snap list not change the last read when more than 64 group snaps
- 09:28 PM rgw Bug #57901 (Resolved): s3:ListBuckets response limited to 1000 buckets (by default) since Octopus
- Since Octopus, s3:ListBuckets is limited to rgw_list_buckets_max_chunk buckets in its response due to loss of truncat...
- 09:21 PM RADOS Backport #52747 (In Progress): pacific: MON_DOWN during mon_join process
- 09:09 PM RADOS Backport #52746 (Rejected): octopus: MON_DOWN during mon_join process
- Octopus is EOL.
- 08:59 PM RADOS Bug #43584: MON_DOWN during mon_join process
- /a/yuriw-2022-10-05_20:44:57-rados-wip-yuri4-testing-2022-10-05-0917-pacific-distro-default-smithi/7055594
- 08:46 PM RADOS Bug #57900 (In Progress): mon/crush_ops.sh: mons out of quorum
- /a/teuthology-2022-10-09_07:01:03-rados-quincy-distro-default-smithi/7059463...
- 05:56 PM Orchestrator Bug #57341: cephadm: failures from tests comparing output strings are difficult to debug
- See attached screenshot for a better colorized example.
- 05:53 PM Orchestrator Bug #57341: cephadm: failures from tests comparing output strings are difficult to debug
- I did a few minutes of research and found two packages that may help:
pytest-mock (https://pytest-mock.readthedocs.i...
- 03:38 PM Linux kernel client Bug #57898: ceph client extremely slow kernel version between 5.15 and 6.0
- Even with the ceph-fuse method in the body it gets slow again over time.
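Regarding the pytest-mock suggestion on #57341 above, a hedged sketch of how patching nondeterministic collaborators plus pytest's assertion rewriting makes output-string comparisons debuggable; @render_status@ and the patched target are invented for illustration, not actual cephadm test code:
<pre>
# Requires pytest and pytest-mock. On a mismatch, pytest prints a
# line-by-line diff of the two strings instead of a bare AssertionError.

def render_status(daemons):  # stand-in for the code under test
    return "\n".join(f"{name}: {state}" for name, state in daemons)

def test_render_status(mocker):  # the "mocker" fixture comes from pytest-mock
    mocker.patch("time.time", return_value=0)  # pin nondeterministic inputs
    out = render_status([("mgr.a", "running"), ("osd.0", "running")])
    assert out == "mgr.a: running\nosd.0: running"
</pre>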
- 12:47 PM Linux kernel client Bug #57898 (In Progress): ceph client extremely slow kernel version between 5.15 and 6.0
- Hello! I am very new to Ceph. Thank you for taking that into consideration and reading.
I recently changed the ker...
- 03:20 PM RADOS Bug #57698 (Pending Backport): osd/scrub: "scrub a chunk" requests are sent to the wrong set of r...
- 03:05 PM rgw Bug #16767 (In Progress): RadosGW Multipart Cleanup Failure
- 02:55 PM rgw Bug #16767: RadosGW Multipart Cleanup Failure
- Vicki Good wrote:
> I've encountered this bug in Ceph 14 and 15 and it's a pretty big problem for us for the same re...
- 02:16 PM CephFS Backport #57875 (In Progress): pacific: Permissions of the .snap directory do not inherit ACLs
- 01:45 PM bluestore Bug #57855: cannot enable level_compaction_dynamic_level_bytes
- I found that the level_compaction_dynamic_level_bytes option does not apply if opt.db_paths exists when opening rocks...
- 01:26 PM bluestore Bug #55324: rocksdb omap iterators become extremely slow in the presence of large delete range to...
- Benoît Knecht wrote:
> > I see this was backported in: https://github.com/ceph/ceph/pull/45963 but was later reverte...
- 12:09 PM bluestore Bug #55324: rocksdb omap iterators become extremely slow in the presence of large delete range to...
- Sven Kieske wrote:
> I assume this was not backported to the last octopus release?
Yes, Octopus is EOL.
- 12:04 PM bluestore Bug #55324: rocksdb omap iterators become extremely slow in the presence of large delete range to...
- > I see this was backported in: https://github.com/ceph/ceph/pull/45963 but was later reverted in https://github.com/...
- 11:21 AM bluestore Bug #55324: rocksdb omap iterators become extremely slow in the presence of large delete range to...
- Sven Kieske wrote:
> I don't see the PR showing up in any release notes. I assume this was not backported to the las...
- 11:16 AM bluestore Bug #55324: rocksdb omap iterators become extremely slow in the presence of large delete range to...
- I don't see the PR showing up in any release notes. I assume this was not backported to the last octopus release? In ...
- 09:06 AM bluestore Bug #55324 (Resolved): rocksdb omap iterators become extremely slow in the presence of large dele...
- 01:20 PM rgw-testing Bug #54104: test_rgw_datacache.py: s3cmd fails with '403 (SignatureDoesNotMatch)' in ubuntu
ping @Mark, this remains a blocker for enabling ubuntu in the rgw/verify suite. that subsuite contains most of our fu...
- 01:11 PM rgw Bug #57899 (Pending Backport): admin: cannot use tenant with notification topic
- 01:11 PM rgw Bug #57899 (Pending Backport): admin: cannot use tenant with notification topic
- issue was a regression introduced in: 200f71a90c9e77c91452cec128c2c8be0d3d6f1f
topic notification commands should be...
- 01:03 PM mgr Bug #55046 (Resolved): mgr: perf counters node exporter
- 12:59 PM mgr Backport #57141 (Resolved): quincy: mgr: perf counters node exporter
- 12:27 PM Orchestrator Bug #57897 (New): ceph mgr restart causes restart of all iscsi daemons in a loop
- We have observed that since v17.2.4, a restart of the active ceph mgr appears to cause all iSCSI daemons to restart a...
- 11:49 AM Dashboard Feature #57896 (New): mgr/dashboard: create per component high level dashboard view
- h3. Description of problem
A great improvement to the dashboard would be to have a higher level view of each compo...
- 11:49 AM bluestore Bug #57895: OSD crash in Onode::put()
- This is observed from 15.2.16, but I believe the code defect to cause this kind of race condition is still present on...
- 11:42 AM bluestore Bug #57895 (Duplicate): OSD crash in Onode::put()
- This issue happens when an Onode is being trimmed right away after it's unpinned. This is possible when the LRU lis...
- 11:01 AM Bug #57868: iSCSI: rbd-target-api reports python version and identified 'unsupported version' tri...
- This likely goes for all ceph-container containers... Guillaume, could you please take a look?
- 10:30 AM Orchestrator Feature #57894 (Fix Under Review): Move prometheus spec check to the service_spec module
- 10:18 AM Orchestrator Feature #57894 (Pending Backport): Move prometheus spec check to the service_spec module
- 10:29 AM RADOS Bug #57699: slow osd boot with valgrind (reached maximum tries (50) after waiting for 300 seconds)
- The issue is that we having deadlock on specific condition. When we are trying to update the mClockScheduler config c...
- 09:13 AM CephFS Bug #57882: Kernel Oops, kernel NULL pointer dereference
- Xiubo Li wrote:
> It's a known bug and I will check this today or this week.
Oh my! I did search for anything pr...
- 08:46 AM bluestore Bug #55328 (Closed): OSD crashed due to checksum error
- 08:45 AM Bug #57893 (Fix Under Review): make-dist creates ceph.spec with incorrect Release tag for SUSE-ba...
- 08:04 AM Bug #57893 (Pending Backport): make-dist creates ceph.spec with incorrect Release tag for SUSE-ba...
- @ceph.spec.in@ says:...
- 07:43 AM Dashboard Bug #57805 (Pending Backport): mgr/dashboard: Unable to change subuser permission
- 07:42 AM Dashboard Bug #57805 (Resolved): mgr/dashboard: Unable to change subuser permission
- 07:42 AM Dashboard Backport #57841 (Resolved): quincy: mgr/dashboard: Unable to change subuser permission
- 07:33 AM Dashboard Feature #57826 (Resolved): mgr/dashboard: add server side encryption to rgw/s3
- 07:33 AM Dashboard Backport #57835 (Resolved): quincy: mgr/dashboard: add server side encryption to rgw/s3
- 05:57 AM rbd Bug #57872 (Fix Under Review): [pwl] inconsistent "rbd status" output (clean = true but dirty_byt...
- 05:31 AM RADOS Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
- I was able to reproduce this using the test Laura mentioned above - http://pulpito.front.sepia.ceph.com/amathuri-2022...
- 05:12 AM Dashboard Bug #39726 (Resolved): mgr/dashboard: "Striping" feature checkbox missing in RBD image dialog
- 05:12 AM Dashboard Backport #56566 (Resolved): pacific: mgr/dashboard: "Striping" feature checkbox missing in RBD im...
- 05:06 AM crimson Bug #57629: crimson: segfault during mkfs
- Using GCC 12.2.0 on Ubuntu 22.04, I am facing the same problem.
- 03:26 AM crimson Bug #57549: Crimson: Alienstore not work after ceph enable c++20
- This problem disappeared after updating the GCC compiler to version 12.2.0. And I met the segmentation fault on https:/...
10/18/2022
- 07:16 PM Dashboard Bug #48258: mgr/dashboard: Switch from tslint to eslint
- Thanks Nizam, will get working on it
- 06:25 PM Documentation #57858: v17.2.4 release does not contain latest cherry-picks
- Bottom line: The quincy-release branch (and future release branches) should be up-to-date on the Ceph repository for ...
- 06:04 PM Orchestrator Bug #57891 (Resolved): [Gibba Cluster] HEALTH_ERR: Upgrade: failed due to an unexpected exception
- - Upgrade paused due to one host not being reachable in the cluster.
- Resumed the upgrade with the resume command
...
- 05:29 PM Bug #57890: cmd_getval() throws but many callers don't catch the exception
- For reference, here are crashes with `cmd_getval` in their backtrace:
http://telemetry.front.sepia.ceph.com:4000/d/N...
- 05:02 PM Bug #57890 (New): cmd_getval() throws but many callers don't catch the exception
- In https://github.com/ceph/ceph/pull/23557 we switched @cmd_getval()@ to throw on error. This family of functions hav...
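A loose Python analogue of the #57890 failure mode (the real @cmd_getval()@ is C++, and these names are invented for illustration, not Ceph source):
<pre>
# Once a getter raises on bad input instead of returning false, every
# caller that skips the try/except turns a malformed command into a crash.

class BadCmdGet(Exception):
    pass

def cmd_getval(cmdmap, key, expected_type):
    val = cmdmap.get(key)
    if not isinstance(val, expected_type):
        raise BadCmdGet(f"bad or missing value for {key!r}")
    return val

cmdmap = {"pool": 123}  # client sent an int where a string was expected

try:
    cmd_getval(cmdmap, "pool", str)  # careful caller: degrades to an error reply
except BadCmdGet as e:
    print(f"EINVAL: {e}")

cmd_getval(cmdmap, "pool", str)  # careless caller: uncaught, terminates the process
</pre>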
- 04:31 PM RADOS Bug #51729: Upmap verification fails for multi-level crush rule
- Chris, can you please provide your osdmap binary?
- 04:13 PM rgw Backport #57889 (Rejected): pacific: amqp: rgw crash when ca location is used for amqp connections
- 04:12 PM rgw Backport #57888 (In Progress): quincy: amqp: rgw crash when ca location is used for amqp connections
- https://github.com/ceph/ceph/pull/54170
- 04:08 PM rgw Bug #57850 (Pending Backport): amqp: rgw crash when ca location is used for amqp connections
- 03:49 PM Orchestrator Backport #57787 (In Progress): quincy: mgr/nfs: Add a sectype field to nfs exports created by nfs...
- 03:39 PM rgw Bug #57881 (Fix Under Review): LDAP invalid password resource leak fix
- 09:56 AM rgw Bug #57881: LDAP invalid password resource leak fix
- I created a pull request for a possible fix:
https://github.com/ceph/ceph/pull/48509
- 01:02 PM rgw Bug #57877 (Fix Under Review): rgw: some operations may not have a valid bucket object
- 09:53 AM mgr Backport #57887 (In Progress): pacific: mgr/prometheus: avoid duplicates and deleted entries for ...
- 09:04 AM mgr Backport #57887 (Resolved): pacific: mgr/prometheus: avoid duplicates and deleted entries for rbd...
- https://github.com/ceph/ceph/pull/48524
- 09:49 AM mgr Backport #57886 (In Progress): quincy: mgr/prometheus: avoid duplicates and deleted entries for r...
- 09:04 AM mgr Backport #57886 (Resolved): quincy: mgr/prometheus: avoid duplicates and deleted entries for rbd_...
- https://github.com/ceph/ceph/pull/48523
- 09:35 AM Linux kernel client Bug #47450 (Resolved): stop parsing the error string in the session reject message
- Fixed in:...
- 09:33 AM Linux kernel client Bug #46904: kclient: cluster [WRN] client.4478 isn't responding to mclientcaps(revoke)
- Fixed it in kernel and the patchwork link: https://patchwork.kernel.org/project/ceph-devel/list/?series=686074
- 09:27 AM Backport #57885 (In Progress): quincy: disable system_pmdk on s390x for SUSE distros
- 08:49 AM Backport #57885 (Resolved): quincy: disable system_pmdk on s390x for SUSE distros
- https://github.com/ceph/ceph/pull/48522
- 09:03 AM RADOS Bug #57845: MOSDRepOp::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_OCTOPUS...
- Hi Neha,
the logs from the crash instance that I reported initially are already rotated out on the particular node...
- 09:00 AM mgr Bug #57797 (Pending Backport): mgr/prometheus: avoid duplicates and deleted entries for rbd_stats...
- 08:41 AM Bug #57860 (Pending Backport): disable system_pmdk on s390x for SUSE distros
- 08:19 AM Orchestrator Bug #57096: osd not restarting after upgrading to quincy due to podman args --cgroups=split
- I manually created the unit.meta, and it seems to work. thanks again.
- 06:28 AM Orchestrator Bug #57096: osd not restarting after upgrading to quincy due to podman args --cgroups=split
- The unit.meta file is not yet present in Octopus. I'll try to figure something out or wait for the PR release.
Tha...
- 02:48 AM RADOS Bug #57852: osd: unhealthy osd cannot be marked down in time
- Radoslaw Zarzynski wrote:
> Could you please clarify a bit? Do you mean there some extra, unnecessary (from the POV ...
- 02:19 AM CephFS Backport #57880 (In Progress): pacific: NFS client unable to see newly created files when listing...
- 02:14 AM CephFS Backport #57879 (In Progress): quincy: NFS client unable to see newly created files when listing ...
- 12:52 AM CephFS Bug #57882 (Duplicate): Kernel Oops, kernel NULL pointer dereference
- It's a known bug and I will check this today or this week.
10/17/2022
- 07:29 PM Orchestrator Bug #57800: ceph orch upgrade does not appear to work with FQNDs.
- alright, looking back at the original traceback...
- 06:55 PM Orchestrator Bug #57884 (Resolved): cephadm: attempting a daemon redeploy of the active mgr with a specified i...
- If I run something like...
- 06:27 PM RADOS Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
- Link to the discussion on ceph-users: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/AZHAIGY3BIM4SGB...
- 06:20 PM RADOS Bug #57883: test-erasure-code.sh: TEST_rados_put_get_jerasure fails on "rados_put_get: grep '\<5...
- Let's first see if it's easily reproducible:
http://pulpito.front.sepia.ceph.com/lflores-2022-10-17_18:19:55-rados:s...
- 06:03 PM RADOS Bug #57883: test-erasure-code.sh: TEST_rados_put_get_jerasure fails on "rados_put_get: grep '\<5...
- The failed function:
qa/standalone/erasure-code/test-erasure-code.sh...
- 05:52 PM RADOS Bug #57883 (Resolved): test-erasure-code.sh: TEST_rados_put_get_jerasure fails on "rados_put_get:...
- /a/yuriw-2022-10-13_17:24:48-rados-main-distro-default-smithi/7065580...
- 06:16 PM RADOS Bug #57845 (Need More Info): MOSDRepOp::encode_payload(uint64_t): Assertion `HAVE_FEATURE(feature...
These reports in telemetry look similar: http://telemetry.front.sepia.ceph.com:4000/d/Nvj6XTaMk/spec-search?orgId=1&v...
- 06:08 PM RADOS Bug #57852 (Need More Info): osd: unhealthy osd cannot be marked down in time
- 06:08 PM RADOS Bug #57852 (Need More Info): osd: unhealthy osd cannot be marked down in time
Could you please clarify a bit? Do you mean there some extra, unnecessary (from the POV of judging whether an OSD is ...
- 06:01 PM mgr Bug #57460: Json formatted ceph pg dump hangs on large clusters
- 06:01 PM mgr Bug #57460: Json formatted ceph pg dump hangs on large clusters
- Thanks, Radoslaw! I'll look into modifying the patch as you suggested, targeting Reef.
- 05:48 PM RADOS Bug #57782: [mon] high cpu usage by fn_monstore thread
- NOT A FIX (extra debugs): https://github.com/ceph/ceph/pull/48513
- 05:45 PM RADOS Bug #57698 (Fix Under Review): osd/scrub: "scrub a chunk" requests are sent to the wrong set of r...
- 05:43 PM RADOS Bug #51729: Upmap verification fails for multi-level crush rule
- A note from bug scrub: this is going to be assigned tomorrow.
- 02:49 PM Bug #57613: Kernel Oops, kernel NULL pointer dereference
- Moved (copied) to CephFS, might get an echo from a better spot :) this one can be closed.
- 02:47 PM CephFS Bug #57882 (Duplicate): Kernel Oops, kernel NULL pointer dereference
- (repost from Ceph (#57613), I couldn't find a way to move the bug entry from one project to another)
Hello everyon...
- 02:19 PM rbd Backport #57779 (Resolved): quincy: [test] fio 3.16 doesn't build on recent kernels due to remova...
- 02:10 PM rbd Backport #57779: quincy: [test] fio 3.16 doesn't build on recent kernels due to removal of linux/...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48386
merged
- 01:45 PM Dashboard Backport #57828 (Resolved): quincy: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^f...
- 01:25 PM rbd Tasks #54312: combine the journal and snapshot test scripts
- Please set the state to Fix Under Review once the lab stuff is sorted out and you have a link to a test run.
- 01:22 PM rbd Bug #57066 (In Progress): rbd snap list not change the last read when more than 64 group snaps
- 12:30 PM rgw Bug #57881 (Pending Backport): LDAP invalid password resource leak fix
- I have noticed that in the case a User tries to log in using LDAP with a wrong password, two new LDAP sessions will b...
- 12:04 PM CephFS Backport #57880 (Resolved): pacific: NFS client unable to see newly created files when listing di...
- https://github.com/ceph/ceph/pull/48521
- 12:04 PM CephFS Backport #57879 (Resolved): quincy: NFS client unable to see newly created files when listing dir...
- https://github.com/ceph/ceph/pull/48520
- 11:57 AM CephFS Bug #57210 (Pending Backport): NFS client unable to see newly created files when listing director...
- 11:53 AM CephFS Backport #57261 (Resolved): pacific: standby-replay mds is removed from MDSMap unexpectedly
- 10:54 AM Orchestrator Feature #57878 (Resolved): Add typing checks for rgw module
- 10:23 AM bluestore Bug #57855: cannot enable level_compaction_dynamic_level_bytes
- I did some more digging on this and found that this PR was the cause.
https://github.com/ceph/ceph/pull/43100
- 09:32 AM Orchestrator Bug #57876 (Fix Under Review): prometheus ERROR failed to collect metrics
- 09:13 AM Orchestrator Bug #57876 (Resolved): prometheus ERROR failed to collect metrics
- ...
- 09:19 AM rgw Bug #57877 (Resolved): rgw: some operations may not have a valid bucket object
- Some codepaths may not always have a valid bucket, so add checks to detect this.
- 08:57 AM Linux kernel client Bug #46904 (Fix Under Review): kclient: cluster [WRN] client.4478 isn't responding to mclientcaps...
- 04:52 AM Linux kernel client Bug #46904: kclient: cluster [WRN] client.4478 isn't responding to mclientcaps(revoke)
- The MDS was waiting for _*Fw*_ caps:...
- 03:43 AM Linux kernel client Bug #56524 (Resolved): xfstest-dev: generic/467 failed with "open_by_handle(/mnt/kcephfs.A/467-di...
- 03:42 AM Linux kernel client Bug #57321 (Resolved): xfstests: ceph/004 setfattr: /mnt/kcephfs.A/test-004/dest: Invalid argument
- 03:41 AM Linux kernel client Bug #57342 (Resolved): kclient: incorrectly showing the size for snapdirs when stating them
10/16/2022
- 02:50 PM CephFS Backport #57875 (Resolved): pacific: Permissions of the .snap directory do not inherit ACLs
- https://github.com/ceph/ceph/pull/48553
- 02:50 PM CephFS Backport #57874 (Resolved): quincy: Permissions of the .snap directory do not inherit ACLs
- https://github.com/ceph/ceph/pull/48563
- 02:49 PM CephFS Bug #57084 (Pending Backport): Permissions of the .snap directory do not inherit ACLs
- 02:46 PM CephFS Bug #57084 (Resolved): Permissions of the .snap directory do not inherit ACLs
10/15/2022
- 08:36 PM crimson Bug #57873 (New): crimson: override overrides.ceph.flavor in crimson_qa_overrides.yaml as well
- overrides.ceph.flavor = default gets set by teuthology/suite/placeholder.py
- 09:19 AM rbd Bug #57872 (Resolved): [pwl] inconsistent "rbd status" output (clean = true but dirty_bytes = 61440)
- This popped up in a quincy integration branch run, but the code in main is exactly the same:...
10/14/2022
- 09:17 PM rgw Bug #52027: XML responses return different order of XML elements
- Hi
I think this is not fully addressed.
I've added a comment to pull request https://github.com/ceph/ceph/pull/42...
- 09:13 PM RADOS Bug #51729: Upmap verification fails for multi-level crush rule
- Andras,
Thanks for the extra info. This needs to be addressed. Anyone?
- 08:48 PM RADOS Bug #51729: Upmap verification fails for multi-level crush rule
- Just to clarify - the error "verify_upmap number of buckets X exceeds desired Y" comes from the C++ code in ceph-mon ...
- 06:47 PM RADOS Bug #51729: Upmap verification fails for multi-level crush rule
- I am now seeing this issue on pacific, 16.2.10 on rocky8 linux.
If I have a >2 level rule on an ec pool (6+2), suc...
- 06:54 PM rgw Backport #57430: quincy: key is used after move in RGWGetObj_ObjStore_S3::override_range_hdr
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48228
merged
- 06:50 PM Orchestrator Bug #57870 (Resolved): cephadm: --apply-spec is trying to do too much and failing as a result
- --apply-spec is intended to do 2 things:
1) distribute ssh keys to hosts with hosts specs in the applied spec
2) ...
- 04:15 PM RADOS Bug #57698: osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
- Following some discussions: here are excerpts from a run demonstrating this issue.
Test run rfriedma-2022-09-28_15:5...
- 04:04 PM Orchestrator Bug #57800: ceph orch upgrade does not appear to work with FQNDs.
- Oh, by all combinations, I mean I created DNS entries for all hosts, not just ceph02.
- 04:03 PM Orchestrator Bug #57800: ceph orch upgrade does not appear to work with FQNDs.
- I added DNS entries for all combinations. So both ceph02.oldname.local and ceph02.domain.local are now valid names but...
- 02:05 PM rbd Tasks #54312 (In Progress): combine the journal and snapshot test scripts
- 01:42 PM rgw Bug #44660: Multipart re-uploads cause orphan data
- Writing on behalf of Ulrich Klein <Ulrich.Klein@ulrichklein.de>, he wanted to add some info to this tracker, below is...
- 10:45 AM Orchestrator Bug #57781 (Rejected): Fix prometheus dependencies calculation
- closing as the current behavior is correct. We just need to add some comments to clarify the logic.
- 10:21 AM Orchestrator Bug #57366 (Pending Backport): prometheus is not re-deployed when service-discovery port changes
- 10:20 AM Orchestrator Bug #57816 (Fix Under Review): Add support to configure protocol (http or https) for Grafana url ...
- 09:24 AM Dashboard Bug #48258: mgr/dashboard: Switch from tslint to eslint
- great, thanks Sedrick. You can assign it to yourself. There are two PRs opened currently. You can go over the discussions ...
- 09:19 AM Dashboard Bug #48258: mgr/dashboard: Switch from tslint to eslint
- 09:19 AM Dashboard Bug #48258: mgr/dashboard: Switch from tslint to eslint
- Hi, I would like to work on this one
- 08:15 AM rgw Bug #57804: Enabling sync on bucket not working
- Hello Casey,
The init command ended after running for 60 minutes.
Unfortunately the two errors are returned constan...
- 07:46 AM Bug #57868 (New): iSCSI: rbd-target-api reports python version and identified 'unsupported versio...
- When running the cephadm deployed iSCSI container images, the API endpoint exposes python versions. This triggers vu...
- 04:35 AM Dashboard Cleanup #57867 (Resolved): mgr/dashboard: migrate bootstrap 4 to 5
- h3. Description of problem
_here_
h3. Environment
* @ceph version@ string:
* Platform (OS/distro/release)... - 04:34 AM Dashboard Cleanup #57866 (Resolved): mgr/dashboard: update to angular 13
- 12:14 AM crimson Bug #57549: Crimson: Alienstore not work after ceph enable c++20
- do you mean rados bench works on ubuntu 20.04 in your machine for alienstore?
10/13/2022
- 11:27 PM CephFS Bug #48673 (Fix Under Review): High memory usage on standby replay MDS
- 10:19 PM Documentation #57858: v17.2.4 release does not contain latest cherry-picks
- ...
- 08:02 PM crimson Bug #57791 (Resolved): crimson: zero becomes truncate if region exceeds object bound
- https://github.com/ceph/ceph/pull/48405
- 08:02 PM crimson Bug #57789 (Resolved): crimson: add list_snaps
- https://github.com/ceph/ceph/pull/48405
- 08:02 PM crimson Bug #57773 (Resolved): crimson: TestLibRBD.TestCompareAndWriteStripeUnitSuccessPP fails with EINVAL
- https://github.com/ceph/ceph/pull/48405
- 08:02 PM crimson Bug #57759 (Resolved): crimson: rbdv1 needs TMAP, easier to implement than to skip rbdv1 tests
- https://github.com/ceph/ceph/pull/48405
- 06:40 PM Bug #57864 (In Progress): qa: fail "Checking cluster log for badness" check (and therefore the jo...
- 10:18 AM Bug #57864 (In Progress): qa: fail "Checking cluster log for badness" check (and therefore the jo...
- Discovered in https://github.com/ceph/ceph/pull/48288#discussion_r993883997:
----------
It appears there's a ca...
- 04:08 PM Orchestrator Bug #57800: ceph orch upgrade does not appear to work with FQNDs.
- it's odd that the hostname it reports not having an address for isn't even a hostname it has stored "ceph02.domain.lo...
- 03:10 PM CephFS Bug #54760 (Closed): crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()-...
- Venky Shankar wrote:
> I think https://github.com/ceph/ceph/pull/46331 would mitigate this issue, however, the unlin...
- 03:07 PM rbd Backport #57780 (Resolved): pacific: [test] fio 3.16 doesn't build on recent kernels due to remov...
- 03:07 PM rbd Backport #57780: pacific: [test] fio 3.16 doesn't build on recent kernels due to removal of linux...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48385
merged
- 02:18 PM rgw Bug #57783 (In Progress): multisite: data sync reports shards behind after source zone fully trim...
- 02:17 PM rgw Bug #57804 (Need More Info): Enabling sync on bucket not working
- did the `bucket sync enable` command finish? I imagine it would take a while with 60000 index shards...
- 02:15 PM rgw Bug #57807 (Duplicate): The cloud sync module does not work starting with Pacific
- 02:01 PM rgw Bug #57724 (Triaged): Keys returned by Admin API during user creation on secondary zone not valid
- 12:44 PM rgw Bug #57770: RGW (pacific) misplaces index entries after dynamically resharding bucket
- Nick Janus wrote:
> J. Eric Ivancich wrote:
> > The theory is that the bucket index shard does not exist at this mo...
- 09:28 AM Dashboard Tasks #57863 (Resolved): mgr/dashboard: cluster-utilization card
- h3. Description
One of the cards of the new landing page (https://github.com/ceph/ceph/tree/feature-landing-page-r...
- 09:18 AM Dashboard Bug #57018: host.containers.internal accessing grafana's performance graphs
- I see similar behaviour here and this started with podman 4.1 where podman is injecting an entry into /etc/hosts insi...
- 09:16 AM Dashboard Tasks #57862 (Resolved): mgr/dashboard: capacity card
- h3. Description
The capacity card is one of the cards for the new landing page. It is currently implemented on the...
- 09:08 AM Dashboard Feature #57861 (Pending Backport): mgr/dashboard: Dashboard landing page revamp
- h3. Description
Tasks for the landing page revamp.
- 08:27 AM Linux kernel client Bug #57656 (Need More Info): [testing] dbench: write failed on handle 10009 (Resource temporaril...
- Today I spent more than half a day reading the mds and osd side logs, but still couldn't find any suspect logs. Usually if...
- 07:39 AM RADOS Bug #57859 (Fix Under Review): bail from handle_command() if _generate_command_map() fails
- 07:39 AM RADOS Bug #57859 (Fix Under Review): bail from handle_command() if _generate_command_map() fails
- 03:51 AM RADOS Bug #57859 (Resolved): bail from handle_command() if _generate_command_map() fails
- https://tracker.ceph.com/issues/54558 catches an exception from handle_command() to avoid mon termination due to a po...
- 04:34 AM Bug #57860 (Fix Under Review): disable system_pmdk on s390x for SUSE distros
- 04:28 AM Bug #57860 (Pending Backport): disable system_pmdk on s390x for SUSE distros
- Same as https://tracker.ceph.com/issues/56491 which addressed RHEL and Fedora not shipping libpmem on s390x, but for ...
- 04:03 AM RADOS Bug #54558: malformed json in a Ceph RESTful API call can stop all ceph-mon services
- nikhil kshirsagar wrote:
> Ilya Dryomov wrote:
> > I don't think https://github.com/ceph/ceph/pull/45547 is a compl...
- 03:21 AM crimson Bug #57549: Crimson: Alienstore not work after ceph enable c++20
- tried the latest version with gcc-12.2.0 and ubuntu 22.04, met the same problem on https://tracker.ceph.com/issues/57...
- 02:59 AM crimson Bug #57693 (Resolved): Messenger test failed against test_messenger_peer.cc
- The fix was merged.
- 02:58 AM crimson Bug #56589 (Resolved): perf-crimson-msgr: segmentation fault happens when shutdown
- The fix was merged.
- 02:56 AM crimson Bug #56520: perf-crimson-msgr: Aborting on shard 0
- The fix was merged.
- 02:55 AM crimson Bug #56520 (Resolved): perf-crimson-msgr: Aborting on shard 0
10/12/2022
- 09:18 PM Documentation #57858: v17.2.4 release does not contain latest cherry-picks
- Here's how I think we should go about this.
We know that the v17.2.4 tag is missing from the Quincy branch. We sho...
- 07:35 PM Documentation #57858: v17.2.4 release does not contain latest cherry-picks
- The signed v17.2.4 tag was also not included in https://github.com/ceph/ceph/pull/48290. This seems to have occurred ...
- 06:57 PM Documentation #57858 (Resolved): v17.2.4 release does not contain latest cherry-picks
- Earlier today, I went to check one of the Telemetry commands in the Long Running Cluster, and the command caused a cr...
- 08:59 PM bluestore Bug #56851: crash: int BlueStore::read_allocation_from_onodes(SimpleBitmap*, BlueStore::read_allo...
- Sudhin Bengeri wrote:
> We are running into the same problem in our ceph cluster, we are running ceph v17.2.3.
We...
- 08:57 PM bluestore Bug #56851: crash: int BlueStore::read_allocation_from_onodes(SimpleBitmap*, BlueStore::read_allo...
- We are running into the same problem in our ceph cluster, we are running ceph v17.2.3
- 06:41 PM bluestore Bug #57857 (Pending Backport): KernelDevice::read doesn't translate error codes correctly
- "(()+0xf630) [0x7f746eadc630]",
"(gsignal()+0x37) [0x7f746d8cf387]",
"(abort()+0x148) [0x7f... - 06:40 PM CephFS Backport #57848 (In Progress): pacific: mgr/volumes: addition of human-readable flag to volume in...
- 05:57 PM CephFS Backport #57849 (In Progress): quincy: mgr/volumes: addition of human-readable flag to volume inf...
- 05:11 PM crimson Bug #55326 (Resolved): crimson: formatter recursion loop crash
- 05:08 PM RADOS Bug #57782: [mon] high cpu usage by fn_monstore thread
- Hey Radek,
makes sense, I created a debug branch https://github.com/ceph/ceph-ci/pull/new/wip-crush-debug and migh...
- 05:00 PM rgw Bug #57770: RGW (pacific) misplaces index entries after dynamically resharding bucket
- J. Eric Ivancich wrote:
> The theory is that the bucket index shard does not exist at this moment, as it was deleted...
- 03:45 PM CephFS Bug #57856 (Fix Under Review): cephfs-top: Skip refresh when the perf stats query shows no metrics
- 03:39 PM CephFS Bug #57856 (Closed): cephfs-top: Skip refresh when the perf stats query shows no metrics
- In cephfs-top loading the clients usually takes time. So skip refreshing the main window when there are no metrics.
- 03:15 PM bluestore Feature #57785: fragmentation score in metrics
- Looks like we can get the fragmentation score via an admin socket command:...
- 02:43 PM bluestore Feature #57785: fragmentation score in metrics
- Yaarit/Laura - can we do something in telemetry perf channels?
- 02:54 PM bluestore Bug #57855 (Resolved): cannot enable level_compaction_dynamic_level_bytes
- create an osd with the following options....
- 12:29 PM CephFS Bug #53573 (Fix Under Review): qa: test new clients against older Ceph clusters
- 09:06 AM Linux kernel client Bug #57656: [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
- Another failure is:...
- 05:22 AM Linux kernel client Bug #57656 (In Progress): [testing] dbench: write failed on handle 10009 (Resource temporarily u...
- 07:46 AM rbd Backport #57844 (In Progress): pacific: rbd CLI inconsistencies affecting "--namespace" arg
- 07:42 AM rbd Backport #57843 (In Progress): quincy: rbd CLI inconsistencies affecting "--namespace" arg
- 06:19 AM CephFS Bug #57854 (Resolved): mds: make num_fwd and num_retry to __u32
- The num_fwd in MClientRequestForward is int32_t, while the num_fwd
in ceph_mds_request_head is __u8. This is buggy w...
- 05:44 AM crimson Bug #57693 (Fix Under Review): Messenger test failed against test_messenger_peer.cc
- Should be fixed by https://github.com/ceph/ceph/pull/48457
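Back on #57854 above, a quick demonstration of the described width mismatch:
<pre>
# Storing an int32_t-style forward counter into a __u8-sized wire field
# silently truncates modulo 256, which is the mismatch #57854 describes.
import ctypes

num_fwd = 300                         # int32_t-sized counter on one side
wire_field = ctypes.c_uint8(num_fwd)  # __u8-sized field on the other
print(wire_field.value)               # prints 44 (300 % 256), not 300
</pre>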
- 05:09 AM Dashboard Bug #57166 (Resolved): mgr/dashboard: "Average GET/PUT Latencies" panel lacks details
- 05:09 AM Dashboard Backport #57487 (Resolved): pacific: mgr/dashboard: "Average GET/PUT Latencies" panel lacks details
- 05:08 AM Dashboard Feature #56699 (Resolved): mgr/dashboard: improve dashboard redirect address
- 05:07 AM Dashboard Backport #57661 (Resolved): quincy: mgr/dashboard: improve dashboard redirect address
- 05:07 AM Dashboard Backport #57663 (Resolved): pacific: mgr/dashboard: improve dashboard redirect address
- 04:56 AM CephFS Backport #57836 (In Progress): pacific: Failure in snaptest-git-ceph.sh (it's an async unlink/cre...
- 04:11 AM rgw Backport #57197 (Resolved): pacific: x-amz-date protocol change breaks aws v4 signature logic: wa...
- 04:11 AM rgw Bug #47527 (Resolved): Ceph returns s3 incompatible xml response for listMultipartUploads
- 04:10 AM rgw Backport #53148 (Rejected): octopus: Ceph returns s3 incompatible xml response for listMultipartU...
- Octopus is EOL
- 04:10 AM rgw Backport #53149 (Resolved): pacific: Ceph returns s3 incompatible xml response for listMultipartU...
- 02:52 AM CephFS Backport #57837 (In Progress): quincy: Failure in snaptest-git-ceph.sh (it's an async unlink/crea...
- 02:51 AM rgw Bug #57853 (Pending Backport): multisite sync process block after long time running
- 1. Deploy RADOSGW multisite
2. Put a lot of objects
3. Keep it running for a long time
- 02:39 AM RADOS Bug #57852 (Need More Info): osd: unhealthy osd cannot be marked down in time
- Before an unhealthy osd is marked down by mon, other osd may choose it as
heartbeat peer and then report an incorrec...
- 01:03 AM bluestore Bug #55328: OSD crashed due to checksum error
- Hi Igor
> I will start to run the same test scenario with a newer Ceph version (v16.2.10) in a few weeks, and run the...
10/11/2022
- 10:25 PM CephFS Backport #57718: pacific: Test failure: test_subvolume_group_ls_filter_internal_directories (task...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48328
merged
- 10:25 PM CephFS Backport #57261: pacific: standby-replay mds is removed from MDSMap unexpectedly
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48270
merged
- 10:24 PM CephFS Backport #57194: pacific: ceph pacific fails to perform fs/mirror test
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48269
merged
- 10:23 PM rgw Backport #57649: pacific: rgw: fix bool/int logic error when calling get_obj_head_ioctx
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48230
merged
- 10:22 PM rgw Backport #57429: pacific: key is used after move in RGWGetObj_ObjStore_S3::override_range_hdr
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48229
merged
- 10:20 PM rgw Backport #57753: pacific: Log status of individual object deletions for multi-object delete reque...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48348
merged
- 10:19 PM rgw Backport #57197: pacific: x-amz-date protocol change breaks aws v4 signature logic: was rfc 2616....
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48313
merged
- 10:18 PM rgw Backport #55918: pacific: Bucket sync policy core dumped
- https://github.com/ceph/ceph/pull/47994 merged
- 10:17 PM rgw Backport #57450: pacific: 'radosgw-admin sync flow create' cmd crashes if flow-type omitted
- https://github.com/ceph/ceph/pull/47994 merged
- 10:15 PM rgw Backport #55245: pacific: rgwlc: ordinary expiration can remove delete-markers at end of current...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47231
merged
- 10:14 PM rgw Backport #56185: pacific: rgw crash when use swift api
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47230
merged
- 10:14 PM rgw Backport #55135: pacific: multisite: data sync only spawns one bucket sync at a time
- Casey Bodley wrote:
> https://github.com/ceph/ceph/pull/45713
merged
- 10:13 PM rgw Backport #54144: pacific: bilog trim: segfault in RGWRadosBILogTrimCR::send_request if bucket sha...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44907
merged
- 10:11 PM rgw Backport #53149: pacific: Ceph returns s3 incompatible xml response for listMultipartUploads
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/44558
merged
- 06:04 PM rgw Bug #57770: RGW (pacific) misplaces index entries after dynamically resharding bucket
- Here is the code that does this:...
- 06:02 PM rgw Bug #57770 (Need More Info): RGW (pacific) misplaces index entries after dynamically resharding b...
- So I looked at the code in 16.2.9 to try to understand how this might happen. The final step in adding an object to t...
- 05:44 PM mgr Bug #57851 (Fix Under Review): pybind/mgr/snap_schedule: use temp_store for db
- 05:42 PM mgr Bug #57851 (Resolved): pybind/mgr/snap_schedule: use temp_store for db
- ...
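A hedged sketch of what the #57851 title suggests: pointing SQLite's @temp_store@ pragma at memory so temporary tables and indices avoid on-disk temp files; the path is a placeholder, not the snap_schedule module's actual storage:
<pre>
import sqlite3

conn = sqlite3.connect("snap_schedule.db")  # placeholder path
# temp_store: 0 = DEFAULT (compile-time choice), 1 = FILE, 2 = MEMORY
conn.execute("PRAGMA temp_store = MEMORY")
</pre>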
- 04:19 PM Dashboard Backport #57831 (Resolved): quincy: mgr/dashboard: weird data in OSD details
- 04:19 PM Dashboard Backport #57831 (Rejected): quincy: mgr/dashboard: weird data in OSD details
- 06:11 AM Dashboard Backport #57831 (In Progress): quincy: mgr/dashboard: weird data in OSD details
- 05:37 AM Dashboard Backport #57831 (Resolved): quincy: mgr/dashboard: weird data in OSD details
- https://github.com/ceph/ceph/pull/48433
- 04:19 PM Dashboard Backport #57847 (Resolved): quincy: mgr/dashboard: auto-coloring label
- 11:03 AM Dashboard Backport #57847 (Resolved): quincy: mgr/dashboard: auto-coloring label
- https://github.com/ceph/ceph/pull/48433
- 01:50 PM rgw Bug #57850 (Fix Under Review): amqp: rgw crash when ca location is used for amqp connections
- 01:45 PM rgw Bug #57850 (Pending Backport): amqp: rgw crash when ca location is used for amqp connections
- ca location value is stored as a reference, and the original string may already be destroyed when ca location is used
- 01:36 PM Linux kernel client Bug #54044: intermittent hangs waiting for caps
- Hi Xiubo,
here are the answers to the open questions:
* My max_mds value is 1
* My ceph version is 17.2.2
I...
- 01:21 PM Dashboard Bug #52811 (Can't reproduce): mgr/dashboard: mgr crashes when viewing unavailable filesystem info...
- Looks like it was fixed on the latest main branch.
- 11:42 AM CephFS Backport #57821 (In Progress): pacific: cephfs-data-scan: scan_links is not verbose enough
- 11:41 AM CephFS Backport #57820 (In Progress): quincy: cephfs-data-scan: scan_links is not verbose enough
- 11:35 AM CephFS Bug #56162 (Resolved): mgr/stats: add fs_name as field in perf stats command output
- 11:34 AM CephFS Bug #56169 (Resolved): mgr/stats: 'perf stats' command shows incorrect output with non-existing m...
- 11:34 AM CephFS Bug #56483 (Resolved): mgr/stats: missing clients in perf stats command output.
- 11:33 AM CephFS Feature #54978 (Resolved): cephfs-top:addition of filesystem menu(improving GUI)
- 11:33 AM CephFS Bug #55861 (Resolved): Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_metr...
- 11:32 AM CephFS Backport #57283 (Resolved): quincy: cephfs-top:addition of filesystem menu(improving GUI)
- 11:32 AM CephFS Backport #57273 (Resolved): quincy: mgr/stats: missing clients in perf stats command output.
- 11:32 AM CephFS Backport #57330 (Resolved): quincy: Test failure: test_client_metrics_and_metadata (tasks.cephfs....
- 11:31 AM CephFS Backport #57276 (Resolved): quincy: mgr/stats: 'perf stats' command shows incorrect output with n...
- 11:31 AM CephFS Backport #57278 (Resolved): quincy: mgr/stats: add fs_name as field in perf stats command output
- 11:27 AM CephFS Backport #57849 (Resolved): quincy: mgr/volumes: addition of human-readable flag to volume info c...
- https://github.com/ceph/ceph/pull/48466
- 11:26 AM CephFS Backport #57848 (Resolved): pacific: mgr/volumes: addition of human-readable flag to volume info ...
- https://github.com/ceph/ceph/pull/48468
- 11:19 AM CephFS Bug #57620 (Pending Backport): mgr/volumes: addition of human-readable flag to volume info command
- 11:08 AM Dashboard Backport #57838 (Resolved): quincy: mgr/dashboard: prometheus: change name of pg_repaired_objects
- 08:18 AM Dashboard Backport #57838 (In Progress): quincy: mgr/dashboard: prometheus: change name of pg_repaired_objects
- 07:45 AM Dashboard Backport #57838 (Resolved): quincy: mgr/dashboard: prometheus: change name of pg_repaired_objects
- https://github.com/ceph/ceph/pull/48438
- 11:03 AM Dashboard Backport #57846 (Resolved): pacific: mgr/dashboard: auto-coloring label
- https://github.com/ceph/ceph/pull/50121
- 11:00 AM Dashboard Feature #55922 (Pending Backport): mgr/dashboard: auto-coloring label
- 11:00 AM Dashboard Backport #57835 (In Progress): quincy: mgr/dashboard: add server side encryption to rgw/s3
- 07:25 AM Dashboard Backport #57835 (Resolved): quincy: mgr/dashboard: add server side encryption to rgw/s3
- https://github.com/ceph/ceph/pull/48441
- 10:58 AM Dashboard Backport #57841 (In Progress): quincy: mgr/dashboard: Unable to change subuser permission
- 09:41 AM Dashboard Backport #57841 (Resolved): quincy: mgr/dashboard: Unable to change subuser permission
- https://github.com/ceph/ceph/pull/48440
- 10:13 AM RADOS Bug #57845 (New): MOSDRepOp::encode_payload(uint64_t): Assertion `HAVE_FEATURE(features, SERVER_O...
- ...
- 10:02 AM rbd Backport #57844 (Resolved): pacific: rbd CLI inconsistencies affecting "--namespace" arg
- https://github.com/ceph/ceph/pull/48459
- 10:02 AM rbd Backport #57843 (Resolved): quincy: rbd CLI inconsistencies affecting "--namespace" arg
- https://github.com/ceph/ceph/pull/48458
- 09:58 AM rbd Bug #57765 (Pending Backport): rbd CLI inconsistencies affecting "--namespace" arg
- 09:43 AM Dashboard Bug #57840 (Triaged): mgr/dashboard: "Add Host" rejects IPv6 addresses
- 09:21 AM Dashboard Bug #57840 (Triaged): mgr/dashboard: "Add Host" rejects IPv6 addresses
- h3. Description of problem
After setting up a cluster with cephadm it is not possible to add hosts with their IPv6 address...
- 09:41 AM Dashboard Backport #57842 (Rejected): pacific: mgr/dashboard: Unable to change subuser permission
- 09:41 AM Dashboard Bug #57805 (Pending Backport): mgr/dashboard: Unable to change subuser permission
- 08:23 AM Dashboard Backport #57839 (In Progress): pacific: mgr/dashboard: prometheus: change name of pg_repaired_obj...
- 07:46 AM Dashboard Backport #57839 (Resolved): pacific: mgr/dashboard: prometheus: change name of pg_repaired_objects
- https://github.com/ceph/ceph/pull/48439
- 07:55 AM Orchestrator Backport #55991: pacific: Allow setting crush_device_class in OSD service specs
- I just tried to use the feature "crush_device_class" as it's supposed to be available, but it fails (I try to create ...
- 07:33 AM Dashboard Bug #57806 (Pending Backport): mgr/dashboard: prometheus: change name of pg_repaired_objects
- 07:25 AM CephFS Backport #57837 (Resolved): quincy: Failure in snaptest-git-ceph.sh (it's an async unlink/create ...
- https://github.com/ceph/ceph/pull/48452
- 07:25 AM CephFS Backport #57836 (Resolved): pacific: Failure in snaptest-git-ceph.sh (it's an async unlink/create...
- https://github.com/ceph/ceph/pull/48453
- 07:18 AM Dashboard Feature #57826 (Pending Backport): mgr/dashboard: add server side encryption to rgw/s3
- 04:39 AM Dashboard Feature #57826 (Resolved): mgr/dashboard: add server side encryption to rgw/s3
- Add the capability to add server side encryption to the buckets in rgw
- 07:17 AM CephFS Bug #55332 (Pending Backport): Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
- 06:53 AM CephFS Documentation #57778: CephFS subvolume metadata not available in pacific
- Thanks for the update and looking into it. :-)
- 05:23 AM CephFS Documentation #57778: CephFS subvolume metadata not available in pacific
- Hi Eugen,
The doc changes are backported when a backport PR gets merged. The 16.2.10 release is a hotfix release w...
- 05:37 AM Dashboard Backport #57833 (Resolved): quincy: mgr/dashboard: cephadm dashboard e2e failure "being covered b...
- https://github.com/ceph/ceph/pull/48432
- 05:37 AM Dashboard Backport #57832 (Rejected): pacific: mgr/dashboard: cephadm dashboard e2e failure "being covered ...
- 05:36 AM Dashboard Backport #57830 (Resolved): pacific: mgr/dashboard: weird data in OSD details
- https://github.com/ceph/ceph/pull/50121
- 05:25 AM Dashboard Backport #57828 (In Progress): quincy: cephadm/test_dashboard_e2e.sh: Expected to find content: '...
- 05:02 AM Dashboard Backport #57828 (Resolved): quincy: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^f...
- https://github.com/ceph/ceph/pull/48432
- 05:20 AM Dashboard Bug #57511 (Pending Backport): mgr/dashboard: cephadm dashboard e2e failure "being covered by ano...
- 05:09 AM Dashboard Bug #57803 (Pending Backport): mgr/dashboard: weird data in OSD details
- 05:02 AM Dashboard Backport #57829 (Resolved): pacific: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^...
- https://github.com/ceph/ceph/pull/55415
- 04:43 AM Dashboard Tasks #57827 (New): mgr/dashboard: add e2e tests for cephx user creation
- 04:40 AM Dashboard Bug #57114 (Resolved): mgr/dashboard: Squash is not mandatory field in "Create NFS export" page
- 04:39 AM Dashboard Backport #57435 (Resolved): pacific: mgr/dashboard: Squash is not mandatory field in "Create NFS ...
- 04:39 AM Dashboard Backport #57582 (Resolved): pacific: AssertionError: Expected to find element: `cd-modal .badge:n...
- 04:38 AM Dashboard Backport #57581 (Resolved): quincy: AssertionError: Expected to find element: `cd-modal .badge:no...
- 04:30 AM Dashboard Bug #57386 (Pending Backport): cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/'...
- 04:03 AM CephFS Bug #57299: qa: test_dump_loads fails with JSONDecodeError
- Rishabh, should this change be backported to p/q releases?
- 02:16 AM Orchestrator Bug #57800: ceph orch upgrade does not appear to work with FQNDs.
- So, I did notice that I had set the domain name on one of the nodes to the "oldname.local" (when I was doing the find...
- 01:19 AM Orchestrator Bug #57800: ceph orch upgrade does not appear to work with FQDNs.
- What does `ceph orch host ls` report for this host? This error should only be raised if we can't find any IP stored f...
- 01:24 AM Orchestrator Documentation #57596: MON Service
- the "networks" parameter inside the service spec is a separate thing from the public/cluster network. The public_netw...
10/10/2022
- 08:54 PM rgw Bug #57807: The cloud sync module does not work starting with Pacific
- Related issue: https://tracker.ceph.com/issues/55310
- 12:48 PM rgw Bug #57807 (Duplicate): The cloud sync module does not work starting with Pacific
- We have a cluster running Ceph Pacific storing objects with S3 and we want to sync the objects with an external endpo...
- 07:19 PM rgw Bug #57562: multisite replication issue on Quincy
- Are there any suggestions/tips on how we can debug this type of multisite/replication issue?
- 06:48 PM CephFS Backport #57825 (Resolved): pacific: qa: mirror tests should cleanup fs during unwind
- https://github.com/ceph/ceph/pull/50765
- 06:47 PM CephFS Backport #57824 (Resolved): quincy: qa: mirror tests should cleanup fs during unwind
- https://github.com/ceph/ceph/pull/50766
- 06:47 PM CephFS Backport #57823 (Rejected): pacific: Test failure: test_newops_getvxattr (tasks.cephfs.test_newop...
- 06:47 PM CephFS Backport #57822 (Rejected): quincy: Test failure: test_newops_getvxattr (tasks.cephfs.test_newops...
- 06:47 PM CephFS Backport #57821 (Resolved): pacific: cephfs-data-scan: scan_links is not verbose enough
- https://github.com/ceph/ceph/pull/48443
- 06:47 PM CephFS Backport #57820 (Resolved): quincy: cephfs-data-scan: scan_links is not verbose enough
- https://github.com/ceph/ceph/pull/48442
- 06:44 PM CephFS Bug #57248 (Pending Backport): qa: mirror tests should cleanup fs during unwind
- 06:37 PM CephFS Bug #57589 (Pending Backport): cephfs-data-scan: scan_links is not verbose enough
- 06:37 PM CephFS Bug #57580 (Pending Backport): Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.Test...
- 06:33 PM RADOS Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
Radoslaw,
Yes, I saw that piece of code too. But I *think* I figured it out just a short time ago. I had the cru...
- 06:05 PM RADOS Bug #57796 (Need More Info): after rebalance of pool via pgupmap balancer, continuous issues in m...
- Thanks for the report! The log comes from here:...
- 06:23 PM RADOS Bug #57782 (Need More Info): [mon] high cpu usage by fn_monstore thread
- It looks like we're burning CPU in @close(2)@. The single call site I can spot is in @write_data_set_to_csv@. Let's analyz...
- 05:30 PM mgr Bug #57460: Json formatted ceph pg dump hangs on large clusters
- Hi Ponnuvel! Thanks for the analysis. The problem is genuine and the exponential explosion is simply a no-no.
I just...
- 05:07 PM mgr Bug #57768 (Fix Under Review): mgr/balancer: check for end_weekday is exclusive, stops balancing ...
- 03:48 PM Dashboard Backport #57819 (New): quincy: mgr/dashboard: update legal links
- 03:48 PM Dashboard Backport #57818 (Rejected): pacific: mgr/dashboard: update legal links
- 03:35 PM Dashboard Bug #57792 (Pending Backport): mgr/dashboard: update legal links
- 02:36 PM Linux kernel client Bug #57686: general protection fault and CephFS kernel client hangs after MDS failover
- I believe that https://tracker.ceph.com/issues/57817 is another instance of this bug, but I wasn't sure so I opened a...
- 02:35 PM Linux kernel client Bug #57817 (Duplicate): general protection fault and CephFS kernel client hangs after MDS failover
- I believe that this is the same bug as https://tracker.ceph.com/issues/57686, but in case I'm wrong, I'm opening this...
- 02:25 PM Orchestrator Bug #57816 (Pending Backport): Add support to configure protocol (http or https) for Grafana url ...
- Right now cephadm always deploys Grafana using https. In some testing scenarios it would be helpful to configure th...
- 01:45 PM Dashboard Feature #57815 (New): mgr/dashboard: smart automatic capabilities creator
- h3. Description
h1. Capabilities are formed by using known keywords, as unknown values are not permitted. This mean...
- 01:41 PM Dashboard Feature #57814 (New): mgr/dashboard: add enum fields
- h3. Description
Entities in capabilities are known beforehand, therefore we can fill a dropdown and let the user ju...
- 01:39 PM Dashboard Feature #57813 (New): mgr/dashboard: include form name in breadcrumbs
- h3. Description
The breadcrumbs should include the current form name (e.g.: Cluster >> Users >> Create), and the ...
- 01:39 PM Dashboard Feature #57812 (New): mgr/dashboard: map icons in backend to frontend
- h3. Description
Rather than using literal Font-Awesome icon names in the back-end, we could just use an enum set ...
- 01:28 PM Dashboard Feature #57811 (New): mgr/dashboard: infer form path
- h3. Description
f"{obj.action_type} {obj.__class__.__name__.title()}"
h3. Target persona
{{collapse(Example...
- 01:25 PM Dashboard Feature #57810 (New): mgr/dashboard: auto generated routing in backend forms
- h3. Description of problem
Backend generated forms should generate the needed routing from the backend too and not...
- 01:24 PM Dashboard Bug #57809 (New): mgr/dashboard: disable drag and drop in array forms
- h3. Description of problem
Arrays in the Angular JSON schema have a drag-and-drop functionality that must be dropped si...
- 01:23 PM Dashboard Feature #57808 (New): mgr/dashboard: authx improvements
- h3. Description of problem
This is the epic of followup tasks in the authx feature
h3. Environment
* @ceph v...
- 12:52 PM CephFS Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- Hitting a similar issue in today's run against Ubuntu 22.04:
http://qa-proxy.ceph.com/teuthology/dparmar-2022-10-...
- 07:53 AM CephFS Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
- Maybe this can help here https://lore.kernel.org/all/alpine.LSU.2.21.2004031057320.25955@pobox.suse.cz/.
- 11:50 AM CephFS Backport #57723 (In Progress): pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- 10:56 AM CephFS Bug #57641: Ceph FS fscrypt clones missing fscrypt metadata
- Thanks Venky, I did not submit this right away as I'm unsure the xattr copy is the right way to do this.
I wonder ...
- 08:03 AM CephFS Bug #57641 (Fix Under Review): Ceph FS fscrypt clones missing fscrypt metadata
- Hi Marcel,
I pushed a PR with your commit. We should probably take this opportunity to copy user xattrs during clone.
- 10:36 AM Dashboard Bug #57806 (Fix Under Review): mgr/dashboard: prometheus: change name of pg_repaired_objects
- 10:18 AM Dashboard Bug #57806 (Resolved): mgr/dashboard: prometheus: change name of pg_repaired_objects
- h3. Description of problem
pg_repaired_objects > pool_repaired_objects
h3. Environment
* @ceph version@ stri...
- 10:36 AM Dashboard Bug #57623 (Resolved): mgr/dashboard: expose num repaired objects metric per pool
- 10:13 AM Dashboard Bug #57805 (Resolved): mgr/dashboard: Unable to change subuser permission
- Tried to edit the permission of a subuser, but after changing the permission, the edited permission is not seen in the user info of...
- 09:49 AM CephFS Documentation #57778: CephFS subvolume metadata not available in pacific
- Hi Eugen,
The latest pacific (v16.2.10) only included CVE fixes as per
https://github.com/ceph/ceph/blo...
- 09:29 AM CephFS Bug #57610: qa: timeout during unwinding of qa/workunits/suites/fsstress.sh
- Milind, not sure I remember this correctly -- did you RCA this?
- 09:28 AM CephFS Backport #57717 (In Progress): quincy: libcephfs: incorrectly showing the size for snapdirs when ...
- 09:23 AM CephFS Backport #57716 (In Progress): pacific: libcephfs: incorrectly showing the size for snapdirs when...
- 06:47 AM rgw Bug #57804 (Need More Info): Enabling sync on bucket not working
- Hello,
I'm having a problem when trying to enable sync on one of our buckets (multi-site) from the master zone.
Her...
- 06:33 AM Dashboard Feature #56155 (Resolved): mgr/dashboard: Add daemon logs tab to Cluster -> Logs component
- 06:08 AM RADOS Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- Laura Flores wrote:
> I contacted some Telemetry users. I will report back here with any information.
>
I am on...
- 05:52 AM Dashboard Bug #57803 (Fix Under Review): mgr/dashboard: weird data in OSD details
- 05:37 AM Dashboard Bug #57803 (Resolved): mgr/dashboard: weird data in OSD details
- Please see the attached screenshot:
* OSD.3 device shows as "sdcsdc" (rhceph53)
* With daemons OSD.3 and OSD.4
* B...
- 05:27 AM crimson Bug #57773 (Fix Under Review): crimson: TestLibRBD.TestCompareAndWriteStripeUnitSuccessPP fails w...
- 02:57 AM Bug #57802 (New): RGW crash when upload file through swift RGWFormPost function
- Hi,
When I use swift RGWFormPost to upload a file, I got a ** Caught signal (Segmentation fault) ** error. It will cause ...
- 12:39 AM crimson Bug #57801 (New): crimson: tag pool types as crimson, disallow snapshot, scrub, ec operations
- add mon_pool_default_crimson option to enable it by default, set in vstart, teuthology
--crimson flag during pool crea...
10/09/2022
- 01:44 AM CephFS Bug #57674: fuse mount crashes the standby MDSes
- Jos Collin wrote:
> This is not a bug, just the limit reached.
>
Processor -- accept open file descriptions lim...
- 01:20 AM Orchestrator Bug #57800 (New): ceph orch upgrade does not appear to work with FQDNs.
- This is purely speculative on my part, but after attempting an upgrade to 17.2.4 from 17.2.3, it just sits there doin...
10/08/2022
- 09:09 PM crimson Bug #57799 (Resolved): crimson: add guard rails to enable crimson on a cluster
- - crimson experimental feature
- ceph osd set-allow-crimson
- disallow crimson-osd booting without that flag
- 05:43 PM crimson Bug #57758: crimson: disable autoscale for crimson in teuthology
- Actually already defaults to this, need to figure out why the test I saw was doing merges.
- 04:42 PM crimson Bug #57798 (Resolved): crimson: actually set CRIMSON_COMPAT for teuthology workunits
- 07:27 AM rgw Bug #56992: rgw_op.cc:Deleting a non-existent object also generates a delete marker
- Because I do not fully understand the PR process, I need to close the old PR; the new one is in https://github.com/c...
- 04:26 AM CephFS Backport #57362 (Resolved): quincy: ffsb.sh test failure
- 04:25 AM CephFS Backport #57240 (Resolved): quincy: ceph-fs crashes on getfattr
10/07/2022
- 09:01 PM rgw Bug #57562: multisite replication issue on Quincy
- Hi,
Here's some extra data from another test which used increased rgw debugging levels by feeding in the options _...
- 08:32 PM RADOS Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
- I removed the hosts holding the osds reported by verify_upmap from the default root rule that no one uses, and the lo...
- 05:56 PM RADOS Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
- Note that the balancer balanced a replicated pool, using its own custom crush root too. The hosts in that pool (not i...
- 05:46 PM RADOS Bug #57796: after rebalance of pool via pgupmap balancer, continuous issues in monitor log
- preformatting the crush info so it shows up properly ......
- 05:43 PM RADOS Bug #57796 (Need More Info): after rebalance of pool via pgupmap balancer, continuous issues in m...
The pgupmap balancer was not balancing well, and after setting mgr/balancer/upmap_max_deviation to 1 (ceph config-k...
- 08:00 PM mgr Bug #57797 (Resolved): mgr/prometheus: avoid duplicates and deleted entries for rbd_stats_pool
- 04:46 PM RADOS Backport #57795 (In Progress): quincy: intrusive_lru leaking memory when
- https://github.com/ceph/ceph/pull/54557
- 04:46 PM RADOS Backport #57794 (Resolved): pacific: intrusive_lru leaking memory when
- https://github.com/ceph/ceph/pull/54558
- 04:46 PM Orchestrator Backport #57793 (New): quincy: Update monitoring doc to reflect the new location of grafana key/cert
- 04:32 PM Orchestrator Documentation #57769 (Pending Backport): Update monitoring doc to reflect the new location of gra...
- 04:29 PM RADOS Bug #57573 (Pending Backport): intrusive_lru leaking memory when
- 03:10 PM Dashboard Bug #57792 (Fix Under Review): mgr/dashboard: update legal links
- 10:10 AM Dashboard Bug #57792 (Pending Backport): mgr/dashboard: update legal links
- The legal links in the login page are outdated:
* "Help" is broken, and it should probably point to docs.ceph.com
*...
- 02:12 PM Feature #57455 (Rejected): msg: change to allow separate port ranges for MDS and OSD
- This has always been possible by way of [osd], [mds], etc. sections in the ceph.conf file. See discussion in https://gith...
- 02:11 PM rgw Bug #51919 (Duplicate): crash: ceph::common::PerfCounters::inc(int, unsigned long) (in RGWAsyncFe...
- Changed status from Resolved to Duplicate since this issue duplicates https://tracker.ceph.com/issues/49666.
- 12:53 PM CephFS Bug #57594 (In Progress): pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_s...
- http://pulpito.front.sepia.ceph.com/jcollin-2022-10-07_11:57:35-fs-wip-jcollin-B57594-main-check-distro-default-smith...
- 12:36 PM mgr Bug #54788: crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)
- See bug 54744.
- 12:36 PM RADOS Bug #54773: crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)
- See bug 54744.
- 12:35 PM RADOS Bug #54744: crash: void MonMap::add(const mon_info_t&): assert(addr_mons.count(a) == 0)
- Rook v1.6.5 / Ceph v12.2.9 running on the host network and not inside the Kubernetes SDN caused the creation of a mon canary...
- 09:40 AM CephFS Bug #57764: Thread md_log_replay is hung forever.
- Thanks for the bug report. Seems like you found a subtle race. I haven't gone through the fix yet, but I'll get to it...
- 07:41 AM cleanup Tasks #57569: implement chown admin rest entrypoint
- This item has been repurposed to: *implement chown admin rest entrypoint*.
After a chat with Daniel Gryniewicz we ag...
- 07:33 AM rgw Bug #57784: beast frontend crashes on exception from socket.local_endpoint()
- Hey,
here is a full stack trace from the RGW daemon. I removed bucket/file/user names.
The host is:
Ubuntu 20.04...
10/06/2022
- 11:28 PM crimson Bug #57738 (Resolved): crimson: repop ordering bug
- 10:22 PM rbd Bug #56561 (Resolved): rbd perf image iostat/iotop lost the ability to gather data across pools
- 10:11 PM crimson Bug #57791 (Resolved): crimson: zero becomes truncate if region exceeds object bound
- 09:17 PM Bug #56098: api_tier_pp: failure on LibRadosTwoPoolsPP.ManifestRefRead
- /a/yuriw-2022-10-05_21:09:57-rados-main-distro-default-smithi/7056369
- 09:10 PM crimson Bug #57789 (Resolved): crimson: add list_snaps
- Some librbd library functions and tests use list_snaps. Worth adding, though snapshot support doesn't really work yet.
- 08:38 PM RADOS Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- I contacted some Telemetry users. I will report back here with any information.
Something to note: The large maj...
- 07:49 PM Orchestrator Bug #57303: rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/searc...
- Laura Flores wrote:
> /a/yuriw-2022-09-20_17:39:55-rados-wip-yuri5-testing-2022-09-19-1007-pacific-distro-default-sm...
- 07:37 PM Bug #57756 (Resolved): upgrade: notify retry canceled due to unrecoverable error after 1 attempts...
- I think this was resolved in 17.2.4, so it shouldn't happen anymore.
- 06:58 PM Orchestrator Backport #57788 (Resolved): pacific: mgr/nfs: Add a sectype field to nfs exports created by nfs m...
- https://github.com/ceph/ceph/pull/49929
- 06:58 PM Orchestrator Backport #57787 (Resolved): quincy: mgr/nfs: Add a sectype field to nfs exports created by nfs mg...
- https://github.com/ceph/ceph/pull/48531
- 06:51 PM Orchestrator Feature #57404 (Pending Backport): mgr/nfs: Add a sectype field to nfs exports created by nfs mgr...
- 06:35 PM Orchestrator Feature #57786 (Resolved): cephadm: open ports in firewall when deploying iscsi
- specifically, 3260 and whatever the user provides for the api_port. We already have logic in the deploy command in th...
- 06:30 PM Orchestrator Bug #57750: cephadm fails to upgrade systems not running sudo
- 06:30 PM Orchestrator Bug #57750: cephadm fails to upgrade systems not running sudo
- I can confirm that this was with the root user, no custom user involved.
Upgrades prior to 17.x.x worked like a ch...
- 06:24 PM Orchestrator Bug #57750: cephadm fails to upgrade systems not running sudo
- assuming this wasn't with a custom (not-root) ssh user anyway, in which case sudo would be required as cephadm needs ...
- 06:22 PM Orchestrator Bug #57750 (In Progress): cephadm fails to upgrade systems not running sudo
- this is a legit bug. However, I think this should have been fixed by https://github.com/ceph/ceph/pull/47898 which ap...
- 04:10 PM Orchestrator Bug #57750: cephadm fails to upgrade systems not running sudo
- The documentation does not state that one needs sudo at all. It's an option. So one cannot make the assumption that ev...
- 03:52 PM Orchestrator Bug #57750 (Need More Info): cephadm fails to upgrade systems not running sudo
- I'd say that's the expected behavior. The user you use with cephadm needs passwordless sudo access to all the hosts tha...
- 06:19 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- I created an issue to surface the fragmentation score via prom here: https://tracker.ceph.com/issues/57785
Not a 1...
- 06:17 PM bluestore Feature #57785 (New): fragmentation score in metrics
- Currently the bluestore fragmentation score does not seem to be exported in metrics. Due to the issue described in ht...
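As a sketch of how the score could be polled per OSD today, assuming the admin socket command @bluestore allocator score block@ and its @fragmentation_rating@ output field (this is illustrative, not part of the feature request):
<pre>
import json
import subprocess

def fragmentation_score(osd_id: int) -> float:
    # Assumed admin socket command; expected to return JSON such as
    # {"fragmentation_rating": 0.23}
    out = subprocess.check_output(
        ["ceph", "daemon", f"osd.{osd_id}",
         "bluestore", "allocator", "score", "block"]
    )
    return float(json.loads(out)["fragmentation_rating"])

print(fragmentation_score(0))
</pre>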
- 05:25 PM rgw Bug #57784 (Fix Under Review): beast frontend crashes on exception from socket.local_endpoint()
- 05:19 PM rgw Bug #57784 (Pending Backport): beast frontend crashes on exception from socket.local_endpoint()
- reported on ceph-users in https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/FSXGU7WVFJTPHW5S5A63IN4AEOV...
- 05:12 PM Backport #50382 (Resolved): pacific: DecayCounter: Expected: (std::abs(total-expected)/expected) ...
- 05:09 PM Backport #50382: pacific: DecayCounter: Expected: (std::abs(total-expected)/expected) < (0.01), a...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48031
merged
- 05:11 PM CephFS Backport #57554: quincy: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48164
merged
- 05:08 PM RADOS Backport #57545: quincy: CommandFailedError: Command failed (workunit test rados/test_python.sh) ...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48113
merged
- 05:05 PM RADOS Backport #57496: quincy: Invalid read of size 8 in handle_recovery_delete()
- Nitzan Mordechai wrote:
> https://github.com/ceph/ceph/pull/48039
merged
- 05:04 PM RADOS Backport #57443: quincy: osd: Update osd's IOPS capacity using async Context completion instead o...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47983
merged
- 05:03 PM RADOS Backport #57346: quincy: expected valgrind issues and found none
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47933
merged
- 05:01 PM RADOS Backport #56602: quincy: ceph report missing osdmap_clean_epochs if answered by peon
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47928
merged
- 05:00 PM RADOS Backport #55282: quincy: osd: add scrub duration for scrubs after recovery
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47926
merged
- 04:58 PM CephFS Backport #57362: quincy: ffsb.sh test failure
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47890
merged
- 04:58 PM CephFS Backport #57240: quincy: ceph-fs crashes on getfattr
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47890
merged
- 04:56 PM CephFS Backport #57283: quincy: cephfs-top:addition of filesystem menu(improving GUI)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47820
merged
- 04:56 PM CephFS Backport #57273: quincy: mgr/stats: missing clients in perf stats command output.
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47820
merged
- 04:56 PM CephFS Backport #57330: quincy: Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_me...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47820
merged
- 04:56 PM CephFS Backport #57276: quincy: mgr/stats: 'perf stats' command shows incorrect output with non-existing...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47820
merged
- 04:56 PM CephFS Backport #57278: quincy: mgr/stats: add fs_name as field in perf stats command output
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47820
merged
- 04:51 PM CephFS Backport #57555: pacific: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48165
merged
- 04:50 PM Dashboard Backport #57582: pacific: AssertionError: Expected to find element: `cd-modal .badge:not(script,s...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48142
merged
- 04:47 PM RADOS Backport #57544: pacific: CommandFailedError: Command failed (workunit test rados/test_python.sh)...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48112
merged
- 04:44 PM rgw Bug #57562: multisite replication issue on Quincy
- The difference between this issue and Bug #57783 is that in our case, the buckets/objects are NOT synced.
I tried a...
- 04:44 PM CephFS Backport #56468: pacific: mgr/volumes: display in-progress clones for a snapshot
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47112
merged
- 03:28 PM rgw Bug #57783 (In Progress): multisite: data sync reports shards behind after source zone fully trim...
- workload tests have been producing cases where data sync shows shards behind, although the source zone has fully trim...
- 02:08 PM RADOS Bug #57782 (Fix Under Review): [mon] high cpu usage by fn_monstore thread
- We observed high CPU usage by the ms_dispatch and fn_monstore threads (amounting to 99-100% in top). The Ceph deployment was ...
- 12:45 PM CephFS Bug #57764 (Fix Under Review): Thread md_log_replay is hung forever.
- 12:29 PM Bug #56610: FTBFS with fmtlib 9.0.0
- Can confirm adding -DFMT_DEPRECATED_OSTREAM to CXXFLAGS downstream in openSUSE fixes this (as it does for Debian). N...
- 11:49 AM Orchestrator Bug #57695 (Resolved): cephadm: upgrade tests fail with "Upgrade: Paused due to UPGRADE_BAD_TARGE...
- 10:58 AM rbd Backport #57779 (In Progress): quincy: [test] fio 3.16 doesn't build on recent kernels due to rem...
- 10:43 AM rbd Backport #57779 (Resolved): quincy: [test] fio 3.16 doesn't build on recent kernels due to remova...
- https://github.com/ceph/ceph/pull/48386
- 10:57 AM rbd Backport #57780 (In Progress): pacific: [test] fio 3.16 doesn't build on recent kernels due to re...
- 10:43 AM rbd Backport #57780 (Resolved): pacific: [test] fio 3.16 doesn't build on recent kernels due to remov...
- https://github.com/ceph/ceph/pull/48385
- 10:57 AM Orchestrator Bug #57781 (Rejected): Fix prometheus dependencies calculation
- https://github.com/ceph/ceph/pull/46400 introduced a new http service discovery mechanism but we are still including ...
- 10:43 AM rbd Bug #57766 (Pending Backport): [test] fio 3.16 doesn't build on recent kernels due to removal of ...
- 07:18 AM CephFS Bug #54501 (Fix Under Review): libcephfs: client needs to update the mtime and change attr when s...
- 06:07 AM CephFS Documentation #57778 (New): CephFS subvolume metadata not available in pacific
- According to the current Pacific docs [1] it should be possible to set subvolume metadata for a cephfs volume:
<pr...
- 05:52 AM Dashboard Cleanup #54356 (Resolved): mgr/dashboard: Grafana e2e tests
- 05:52 AM Dashboard Cleanup #56426 (Resolved): mgr/dashboard: update cypress to 9.7
- 05:51 AM Dashboard Backport #56588 (Resolved): pacific: mgr/dashboard: update cypress to 9.7
- 05:51 AM Dashboard Backport #55468 (Resolved): pacific: mgr/dashboard: Grafana e2e tests
- 05:27 AM CephFS Backport #57777 (In Progress): quincy: Clarify security implications of path-restricted cephx cap...
- https://github.com/ceph/ceph/pull/53559
- 05:26 AM CephFS Backport #57776 (Resolved): pacific: Clarify security implications of path-restricted cephx capab...
- https://github.com/ceph/ceph/pull/53560
- 05:19 AM CephFS Bug #56507: pacific: Test failure: test_rapid_creation (tasks.cephfs.test_fragment.TestFragmentat...
- "New run":http://qa-proxy.ceph.com/teuthology/yuriw-2022-10-03_22:11:49-fs-wip-yuri-testing-2022-10-03-1342-pacific-d...
- 05:11 AM CephFS Documentation #57737 (Pending Backport): Clarify security implications of path-restricted cephx c...
- 01:56 AM crimson Bug #57774 (Closed): crimson: skip snapshot tests for test_librbd
- 01:54 AM crimson Bug #57773 (Resolved): crimson: TestLibRBD.TestCompareAndWriteStripeUnitSuccessPP fails with EINVAL
- ./bin/ceph_test_librbd --gtest_filter=TestLibRBD.TestCompareAndWriteStripeUnitSuccessPP
- 12:33 AM Orchestrator Backport #57772 (New): quincy: cephadm: watch Grafana certificates
- 12:22 AM Orchestrator Feature #44461 (Pending Backport): cephadm: watch Grafana certificates
10/05/2022
- 11:51 PM crimson Bug #57740 (Resolved): crimson: op hang while running ./bin/ceph_test_rados_api_aio_pp and ./bin/...
- https://github.com/ceph/ceph/pull/48352
- 11:48 PM crimson Bug #57617 (Resolved): crimson: need to actually set version/user_version for duplicate ops
- https://github.com/ceph/ceph/pull/48195
- 11:07 PM Orchestrator Bug #51361 (New): KillMode=none is deprecated
- I was wrong about @KillMode=none@ for my use case
- 08:54 PM Orchestrator Bug #57771 (Pending Backport): orch/cephadm suite: 'TESTDIR=/home/ubuntu/cephtest bash -s' fails
- It seems to be failing to install some selinux package...
- 06:49 PM RADOS Bug #57699 (Fix Under Review): slow osd boot with valgrind (reached maximum tries (50) after wait...
- 06:48 PM RADOS Bug #57049 (Duplicate): cluster logging does not adhere to mon_cluster_log_file_level
- 06:46 PM RADOS Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
- Hi Laura. Any luck with verifying the hypothesis from comment #17?
- 06:43 PM RADOS Bug #57532 (Duplicate): Notice discrepancies in the performance of mclock built-in profiles
- Marked as duplicate per comment #4.
- 06:25 PM RADOS Bug #57757: ECUtil: terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of...
- There is a coredump on the teuthology node (@/ceph/teuthology-archive/yuriw-2022-09-29_16:44:24-rados-wip-lflores-tes...
- 06:19 PM RADOS Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
- I think there is a fix for this that got reverted in quincy (https://tracker.ceph.com/issues/53806), but it's still in @main@. ...
- 06:12 PM RADOS Bug #50042: rados/test.sh: api_watch_notify failures
- Assigning to Nitzan just for the sake of testing the hypothesis from https://tracker.ceph.com/issues/50042#note-35.
- 06:06 PM RADOS Cleanup #57587 (Resolved): mon: fix Elector warnings
- Resolved by https://github.com/ceph/ceph/pull/48289.
- 06:05 PM RADOS Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- This won't be easy to reproduce but there are still some options like:
* contacting owners of the external cluster...
- 05:58 PM rgw Bug #57770 (Resolved): RGW (pacific) misplaces index entries after dynamically resharding bucket
- When RGW reshards buckets with ~250k index entries*, I've noticed some s3:PutObject requests that return 200 end up w...
- 04:43 PM mgr Bug #57710 (Rejected): Exports cannot be removed with ceph_argparse
- 04:37 PM mgr Bug #57694 (Rejected): Exports not created correctly when using ceph_argparse
- 04:37 PM mgr Bug #57711 (Rejected): Exports not updated correctly when using ceph_argparse
- 03:21 PM crimson Bug #57578 (Fix Under Review): crimson: assertion failure in _do_transaction_step()
- 02:42 PM mgr Bug #57768: mgr/balancer: check for end_weekday is exclusive, stops balancing too early
- *PR*: https://github.com/ceph/ceph/pull/48375
- 01:50 PM mgr Bug #57768: mgr/balancer: check for end_weekday is exclusive, stops balancing too early
- I am working on this issue
- 01:45 PM mgr Bug #57768 (Resolved): mgr/balancer: check for end_weekday is exclusive, stops balancing too early
- According to the "docs":https://docs.ceph.com/en/latest/rados/operations/balancer/ @end_weekday@ restricts automatic ...
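A minimal sketch of the off-by-one being reported; this is not the balancer's actual code, and it assumes crontab-style weekday numbering (0 = Sunday .. 6 = Saturday):
<pre>
def in_window(weekday: int, begin_weekday: int, end_weekday: int) -> bool:
    # The exclusive '<' on the end bound is the reported bug: balancing
    # stops when end_weekday begins instead of when it ends.
    return begin_weekday <= weekday < end_weekday

# begin_weekday=1 (Monday), end_weekday=5 (Friday):
print(in_window(4, 1, 5))  # Thursday -> True
print(in_window(5, 1, 5))  # Friday -> False, earlier than users expect
</pre>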
- 02:20 PM Orchestrator Documentation #57769 (In Progress): Update monitoring doc to reflect the new location of grafana ...
- 02:20 PM Orchestrator Documentation #57769 (Pending Backport): Update monitoring doc to reflect the new location of gra...
- As part of PR https://github.com/ceph/ceph/pull/47098, grafana key/cert are now stored per node, but the doc has not be...
- 01:58 PM Orchestrator Bug #57173 (Resolved): cephadm: bootstrap should return non-zero exit code when applying spec fails
- 01:57 PM Orchestrator Backport #57379 (Resolved): pacific: cephadm: bootstrap should return non-zero exit code when app...
- 10:55 AM rbd Bug #57766 (Fix Under Review): [test] fio 3.16 doesn't build on recent kernels due to removal of ...
- 08:07 AM rbd Bug #57766 (Resolved): [test] fio 3.16 doesn't build on recent kernels due to removal of linux/raw.h
- ...
- 10:23 AM rbd Bug #57765 (Fix Under Review): rbd CLI inconsistencies affecting "--namespace" arg
- Making Lucian the nominal assignee as Stefan doesn't seem to have a tracker account.
- 06:26 AM rbd Bug #57765 (Resolved): rbd CLI inconsistencies affecting "--namespace" arg
- There are a few rbd CLI inconsistencies that affect the "--namespace" parameter:
* unlike "rbd device map", "rbd d... - 08:16 AM ceph-volume Bug #57767 (In Progress): ceph-volume should check if device is locked prior to zapping it
- ceph-volume allows zapping a device although its related `ceph-osd` process is running...
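One way such a guard could look on Linux; this is a sketch, not ceph-volume's actual code:
<pre>
import errno
import os

def device_in_use(dev_path: str) -> bool:
    # Opening a block device with O_EXCL (and no O_CREAT) fails with
    # EBUSY while the kernel considers it in use -- mounted, claimed by
    # device-mapper, held by a running OSD, etc. See open(2).
    try:
        fd = os.open(dev_path, os.O_RDONLY | os.O_EXCL)
    except OSError as e:
        if e.errno == errno.EBUSY:
            return True
        raise
    os.close(fd)
    return False

# zap could then refuse to proceed while device_in_use("/dev/sdc") is True
</pre>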
- 08:08 AM Dashboard Bug #57456 (Resolved): mgr/dashboard: Cephfs snapshot creation with same name on UI throws 500 In...
- 08:07 AM Dashboard Backport #57498 (Resolved): pacific: mgr/dashboard: Cephfs snapshot creation with same name on UI...
- 08:00 AM Bug #56610: FTBFS with fmtlib 9.0.0
- We're having the same issue with Pacific on openSUSE (https://bugzilla.opensuse.org/show_bug.cgi?id=1202292).
10/04/2022
- 11:21 PM CephFS Bug #57764 (Resolved): Thread md_log_replay is hung forever.
- In a production environment, we have a problem: one standby-replay's md_log_replay thread is hung.
1. The reason:
...
- 10:42 PM rgw Bug #57562: multisite replication issue on Quincy
- We are able to consistently reproduce the replication issue now. The following are the environment and the steps to r...
- 07:31 PM Bug #57763: monitor DB grows without bound during rebalance
- edit:
Why do the Monitor DBs continue to grow in size when the rebalance, backfill, balancer, and autoscaler are dis...
- 07:17 PM Bug #57763 (New): monitor DB grows without bound during rebalance
- We have a very large cluster of about 680 OSDs across 18 storage servers. The largest and most active pool is our RGW...
- 05:39 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Sure. https://tracker.ceph.com/issues/57762
- 03:51 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Kevin Fox wrote:
> For the record, ssd/ssd or hdd/hdd seems to work fine even though the documentation makes it soun...
- 03:30 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- For the record, ssd/ssd or hdd/hdd seems to work fine even though the documentation makes it sound like it doesn't.
...
- 05:38 PM bluestore Documentation #57762 (New): documentation about same hardware class wrong
- The documentation in at least one place:
https://docs.ceph.com/en/pacific/man/8/ceph-bluestore-tool/ bluefs-bdev-mig...
- 05:25 PM RADOS Bug #50042: rados/test.sh: api_watch_notify failures
- /a/yuriw-2022-09-29_16:40:30-rados-wip-all-kickoff-r-distro-default-smithi/7047940...
- 05:17 PM rgw Bug #51574: Segfault when uploading file
- Here is the stacktrace from running the test script with ceph version 17.2.3 (dff484dfc9e19a9819f375586300b3b79d80034...
- 04:02 PM rgw Bug #51574: Segfault when uploading file
- Hello,
As Ceph Quincy 17.2.3 is still segfaulting using the same test script as for Pacific before the fix, we sti...
- 01:53 PM Linux kernel client Bug #57703: unable to handle page fault for address and system lockup after MDS failover
- Minor correction: "@ceph: update_snap_trace error -5@" is still seen in dmesg after MDS failovers when mounting the w...
- 01:44 PM CephFS Bug #57674 (Closed): fuse mount crashes the standby MDSes
- This is not a bug, just the limit reached.
Processor -- accept open file descriptions limit reached sd = 20 errno ...
- 10:26 AM CephFS Bug #57674 (In Progress): fuse mount crashes the standby MDSes
- 12:56 PM CephFS Backport #57748 (Rejected): pacific: doc: Fix disaster recovery documentation
- Not required for Pacific
- 12:56 PM CephFS Backport #57743 (Rejected): pacific: qa: test_recovery_pool uses wrong recovery procedure
- Not required for pacific.
- 12:49 PM CephFS Bug #57676 (Triaged): qa: error during scrub thrashing: rank damage found: {'backtrace'}
- 12:48 PM CephFS Bug #57682 (Triaged): client: ERROR: test_reconnect_after_blocklisted
- 07:43 AM rbd Backport #57388: quincy: [test] iscsi rest_api_create.t and rest_api_delete.t need formatting adj...
- Hi Guillaume,
Could you please take a look at this? It seemed like a ceph-container issue to me.
- 06:26 AM CephFS Backport #57761 (Resolved): pacific: qa: test_scrub_pause_and_resume_with_abort failure
- https://github.com/ceph/ceph/pull/49458
- 06:26 AM CephFS Backport #57760 (Resolved): quincy: qa: test_scrub_pause_and_resume_with_abort failure
- https://github.com/ceph/ceph/pull/49459
- 06:15 AM CephFS Bug #48812 (Pending Backport): qa: test_scrub_pause_and_resume_with_abort failure
- 05:35 AM CephFS Bug #57411: mutiple mds crash seen while running db workloads with regular snapshots and journal ...
- Patrick Donnelly wrote:
> Apparently this one is known.
Yeah, and it's only seen when running database workloads on...
- 12:44 AM crimson Bug #57759: crimson: rbdv1 needs TMAP, easier to implement than to skip rbdv1 tests
- src/test/librbd/test_librbd.cc...
- 12:19 AM crimson Bug #57759 (Resolved): crimson: rbdv1 needs TMAP, easier to implement than to skip rbdv1 tests
- ...
10/03/2022
- 11:58 PM crimson Bug #57758 (New): crimson: disable autoscale for crimson in teuthology
- 11:44 PM rgw Bug #23264: Server side encryption support for s3 COPY operation
- It does not silently corrupt objects as far as I can tell, but it does still return a 501 NotImplemented when you try...
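A hypothetical reproducer of that path with boto3; the endpoint, credentials, and bucket/object names are placeholders:
<pre>
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8000",
    aws_access_key_id="ACCESS",
    aws_secret_access_key="SECRET",
)
# Requesting SSE on a server-side copy is the case that reportedly
# comes back as 501 NotImplemented.
s3.copy_object(
    Bucket="dst-bucket",
    Key="copy-of-object",
    CopySource={"Bucket": "src-bucket", "Key": "object"},
    ServerSideEncryption="AES256",
)
</pre>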
- 10:21 PM RADOS Bug #53575: Valgrind reports memory "Leak_PossiblyLost" errors concerning lib64
- Found a similar instance here:
/a/lflores-2022-09-30_21:47:41-rados-wip-lflores-testing-distro-default-smithi/7050...
- 10:07 PM RADOS Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
- /a/yuriw-2022-09-29_16:44:24-rados-wip-lflores-testing-distro-default-smithi/7048304
/a/lflores-2022-09-30_21:47:41-...
- 10:01 PM RADOS Bug #57757: ECUtil: terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of...
- Put affected version as "14.2.9" since there is no option for "14.2.19".
- 09:59 PM RADOS Bug #57757 (Fix Under Review): ECUtil: terminate called after throwing an instance of 'ceph::buff...
- /a/yuriw-2022-09-29_16:44:24-rados-wip-lflores-testing-distro-default-smithi/7048173/remote/smithi133/crash/posted/20...
- 08:53 PM Bug #57756 (Resolved): upgrade: notify retry canceled due to unrecoverable error after 1 attempts...
- Upgrade tests loop on this line when trying to reach prometheus_receiver. Might have something to do with the new v17...
- 08:33 PM Orchestrator Bug #57755 (New): task/test_orch_cli: test_cephfs_mirror times out
- /a/yuriw-2022-09-29_16:44:24-rados-wip-lflores-testing-distro-default-smithi/7048282...
- 08:31 PM Orchestrator Bug #54029: orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_orch_cli} test f...
- @Matan yes, let's create a new Tracker. I found the same issue on a test branch which includes the patch for this Tra...
- 08:25 PM mgr Bug #57480 (Resolved): mgr/telemetry: log exceptions as "exception" instead of "error"
- 08:22 PM mgr Bug #57480: mgr/telemetry: log exceptions as "exception" instead of "error"
- https://github.com/ceph/ceph/pull/48152 merged
- 08:18 PM cephsqlite Bug #55142: [ERR] : Unhandled exception from module 'devicehealth' while running on mgr.gibba002....
- /a/yuriw-2022-09-29_16:44:24-rados-wip-lflores-testing-distro-default-smithi/7048202
- 06:53 PM rgw Backport #57753 (In Progress): pacific: Log status of individual object deletions for multi-objec...
- 04:02 PM rgw Backport #57753 (Resolved): pacific: Log status of individual object deletions for multi-object d...
- https://github.com/ceph/ceph/pull/48348
- 06:04 PM Bug #57540 (Fix Under Review): FMT Cmake code does not work on Ubuntu Kinetic with system libfmt
- 05:54 PM rgw Bug #57608 (Fix Under Review): RGW need to support Kafka with more SASL mechanism
- 05:15 PM rgw Bug #57588 (Resolved): rgw: async refcount operate in copy_obj
- 04:23 PM cleanup Tasks #57647 (In Progress): prototype metadata sync with c++20 coroutines and neorados
- work in progress:
* cpp20 coroutines in neorados: https://github.com/ceph/ceph/pull/48129
* UTs for neorados: https...
- 04:01 PM rgw Backport #57752 (Resolved): quincy: Log status of individual object deletions for multi-object de...
- https://github.com/ceph/ceph/pull/49084
- 04:01 PM rgw Feature #56645 (Pending Backport): Log status of individual object deletions for multi-object del...
- 02:45 PM CephFS Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Milind Changire wrote:
> Venky Shankar wrote:
> > Milind Changire wrote:
> > > This doesn't crash on my local ubun...
- 09:26 AM CephFS Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Venky Shankar wrote:
> Milind Changire wrote:
> > This doesn't crash on my local ubuntu focal vstart cluster.
> > ...
- 12:59 PM RADOS Bug #57751 (Resolved): LibRadosAio.SimpleWritePP hang and pkill
- /a/nmordech-2022-10-02_08:27:55-rados:verify-wip-nm-51282-distro-default-smithi/7051967/...
- 12:21 PM Orchestrator Bug #57750 (In Progress): cephadm fails to upgrade systems not running sudo
- cephadm fails to upgrade systems not running sudo.
It appears to have started with this commit:
https://github...
- 10:01 AM CephFS Backport #57748 (In Progress): pacific: doc: Fix disaster recovery documentation
- 05:47 AM CephFS Backport #57748 (Rejected): pacific: doc: Fix disaster recovery documentation
- https://github.com/ceph/ceph/pull/48344
- 09:56 AM CephFS Backport #57747 (In Progress): quincy: doc: Fix disaster recovery documentation
- 05:46 AM CephFS Backport #57747 (In Progress): quincy: doc: Fix disaster recovery documentation
- https://github.com/ceph/ceph/pull/48343
- 09:53 AM CephFS Bug #56808 (In Progress): crash: LogSegment* MDLog::get_current_segment(): assert(!segments.empty())
- This seems to be fixed by the PR https://github.com/ceph/ceph/pull/46833
Unfortunately, we don't have mds logs assoc...
- 05:44 AM CephFS Documentation #57734 (Pending Backport): doc: Fix disaster recovery documentation
- 05:03 AM CephFS Backport #57746 (Duplicate): quincy: qa: broad snapshot functionality testing across clients
- 04:39 AM CephFS Bug #23724 (Pending Backport): qa: broad snapshot functionality testing across clients
- 04:39 AM CephFS Backport #57745 (Rejected): quincy: qa: postgresql test suite workunit
- 04:38 AM CephFS Backport #57744 (Resolved): quincy: qa: test_recovery_pool uses wrong recovery procedure
- https://github.com/ceph/ceph/pull/50767
- 04:38 AM CephFS Backport #57743 (Resolved): pacific: qa: test_recovery_pool uses wrong recovery procedure
- https://github.com/ceph/ceph/pull/50860
- 04:38 AM CephFS Feature #55470 (Pending Backport): qa: postgresql test suite workunit
- 04:37 AM CephFS Bug #57598 (Pending Backport): qa: test_recovery_pool uses wrong recovery procedure
- 04:27 AM CephFS Feature #57091 (Resolved): mds: modify scrub to catch dentry corruption
10/01/2022
- 04:40 PM mgr Bug #57742 (New): Setting predict_interval in diskprediction_local causes module failure
- After upgrading from 16.2.10 to 17.2.4, a critical health error is raised;...
- 12:17 PM Documentation #57741 (New): Documentation dependencies probably out of date as of October 2022
- https://docs.ceph.com/en/quincy/start/documenting-ceph/#build-the-source-first-time
[zdover@fedora ceph]$ pip inst...
- 07:39 AM CephFS Backport #57718 (In Progress): pacific: Test failure: test_subvolume_group_ls_filter_internal_dir...
- 07:37 AM CephFS Backport #57719 (In Progress): quincy: Test failure: test_subvolume_group_ls_filter_internal_dire...
- 07:35 AM CephFS Backport #57723: pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- Waiting on https://github.com/ceph/ceph/pull/47112 to be merged
- 07:29 AM CephFS Backport #57722 (In Progress): quincy: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- 12:21 AM crimson Bug #57740: crimson: op hang while running ./bin/ceph_test_rados_api_aio_pp and ./bin/ceph_test_r...
- Bug is that we aren't erroring out the blocking future in get_or_create_pg if the pool does not exist.
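As an analogy only (crimson itself is C++/seastar, this is Python asyncio): an op awaiting a future that is never resolved when the pool lookup fails hangs forever, while failing the future surfaces the error to the client.
<pre>
import asyncio

async def get_or_create_pg(pools: dict, pool_id: int) -> str:
    ready: asyncio.Future = asyncio.get_running_loop().create_future()
    if pool_id not in pools:
        # the fix, in spirit: error out the blocking future rather than
        # leaving it pending forever
        ready.set_exception(LookupError(f"pool {pool_id} does not exist"))
    else:
        ready.set_result(pools[pool_id])
    return await ready

print(asyncio.run(get_or_create_pg({1: "pg 1.0"}, 1)))
</pre>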
- 12:18 AM crimson Bug #57740: crimson: op hang while running ./bin/ceph_test_rados_api_aio_pp and ./bin/ceph_test_r...
- osd.1:
DEBUG 2022-09-30 23:31:32,641 [shard 2] osd - Creating 258.c
...
DEBUG 2022-09-30 23:31:32,641 [shard 2] ...
- 12:08 AM crimson Bug #57740: crimson: op hang while running ./bin/ceph_test_rados_api_aio_pp and ./bin/ceph_test_r...
- osd.2 has a bunch of ops blocked on:...
- 12:04 AM crimson Bug #57740 (Resolved): crimson: op hang while running ./bin/ceph_test_rados_api_aio_pp and ./bin/...
- MDS=0 MGR=1 OSD=3 MON=1 ../src/vstart.sh --without-dashboard -X --crimson --redirect-output --debug -n --no-restart -...
- 12:09 AM crimson Bug #57547: Hang with seastore at wait_for_active stage
- https://tracker.ceph.com/issues/57740 is probably related.
09/30/2022
- 11:56 PM Bug #52435 (Resolved): seastore: lba pin crash
- 11:55 PM crimson Bug #47212 (Resolved): out-of-order "Error: finished tid 3 when last_acked_tid was 5"
- No backports for crimson for now.
- 11:14 PM crimson Bug #57739 (New): crimson: LogMissingRequest and RepRequest operator<< access possibly invalid req
- ...
- 09:49 PM crimson Bug #57738: crimson: repop ordering bug
- ...
- 09:43 PM crimson Bug #57738: crimson: repop ordering bug
- ...
- 09:39 PM crimson Bug #57738 (Resolved): crimson: repop ordering bug
- ...
- 09:47 PM Stable releases Tasks #57472 (Resolved): quincy v17.2.4
- 07:34 PM CephFS Documentation #57737 (Pending Backport): Clarify security implications of path-restricted cephx c...
- https://docs.ceph.com/en/latest/cephfs/client-auth/#path-restriction suggests that you can restrict clients to a subt...
- 07:13 PM RADOS Bug #17170 (Fix Under Review): mon/monclient: update "unable to obtain rotating service keys when...
- 06:32 PM Orchestrator Bug #51361 (Fix Under Review): KillMode=none is deprecated
- I filed a "fix for this":https://github.com/ceph/ceph/pull/48317 because it was preventing me from setting up cluster...
- 04:49 PM RADOS Bug #57105: quincy: ceph osd pool set <pool> size math error
- Looks like in both cases something is being subtracted from a zero-value unsigned int64 and overflowing.
2^64 − ...
- 03:37 PM RADOS Bug #57105: quincy: ceph osd pool set <pool> size math error
- Setting the size (from 3) to 2, then setting it to 1 works......
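A worked example of the underflow hypothesized in the 04:49 PM comment, modelling uint64 arithmetic by reducing mod 2^64:
<pre>
def sub_u64(a: int, b: int) -> int:
    # Model unsigned 64-bit subtraction: results wrap instead of going
    # negative.
    return (a - b) % 2**64

print(sub_u64(0, 1))  # 18446744073709551615
print(2**64 - 1)      # same value
</pre>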
- 03:38 AM RADOS Bug #57105: quincy: ceph osd pool set <pool> size math error
- I created a new cluster today to do a very specific test and ran into this (or something like it) again. In th...
- 02:41 PM Dashboard Backport #57691 (In Progress): pacific: mgr/dashboard: permission denied when creating a NFS export
- 02:40 PM Dashboard Backport #57692 (In Progress): quincy: mgr/dashboard: permission denied when creating a NFS export
- 02:37 PM Dashboard Cleanup #43116 (Resolved): mgr/dashboard: Add text to empty life expectancy column
- 02:36 PM Dashboard Backport #57681 (Resolved): pacific: mgr/dashboard: Add text to empty life expectancy column
- 02:35 PM Dashboard Backport #57680 (Resolved): quincy: mgr/dashboard: Add text to empty life expectancy column
- 01:37 PM rbd Bug #57605: rbd/test_librbd_python.sh: cluster [WRN] pool 'test-librbd-smithi137-24673-7' is full...
- The librbd api test "RemoveFullTry" runs here:
2022-09-16T00:35:56.053 INFO:tasks.workunit.client.0.smithi137.stdo...
- 01:12 PM rbd Bug #57605: rbd/test_librbd_python.sh: cluster [WRN] pool 'test-librbd-smithi137-24673-7' is full...
- Chris, could you please add your timestamp analysis pointing away from test_librbd_python.sh and to test_librbd.sh here?
- 10:46 AM rgw Backport #57197 (In Progress): pacific: x-amz-date protocol change breaks aws v4 signature logic:...
- 10:45 AM rgw Backport #57196 (In Progress): quincy: x-amz-date protocol change breaks aws v4 signature logic: ...
- 10:40 AM RADOS Bug #49777 (Resolved): test_pool_min_size: 'check for active or peered' reached maximum tries (5)...
- 10:39 AM RADOS Backport #57022 (Resolved): pacific: test_pool_min_size: 'check for active or peered' reached max...
- 10:24 AM Bug #51625 (Resolved): osd/OSD: mkfs needs to wait for the transaction to completely finish
- 10:22 AM Bug #51625 (Pending Backport): osd/OSD: mkfs needs to wait for the transaction to completely finish
- 10:14 AM Bug #51625 (Resolved): osd/OSD: mkfs needs to wait for the transaction to completely finish
- 10:24 AM Backport #56635 (In Progress): pacific: log_max_recent setting broken as of Nautilus
- 10:17 AM Backport #56636 (Rejected): octopus: log_max_recent setting broken as of Nautilus
- Octopus is EOL
- 10:16 AM Backport #56637 (In Progress): quincy: log_max_recent setting broken as of Nautilus
- 10:16 AM Dashboard Cleanup #57735: mgr/dashboard: improve visualization of disk availability
- Shouldn't we add the True label in case it is available?
It'd be a bit noisy though, so if the condition that is more...
- 09:00 AM Dashboard Cleanup #57735: mgr/dashboard: improve visualization of disk availability
- Just a couple of suggestions
1. Instead of False, could it be shown as "No"?
2. In addition to the "available" field the...
- 08:05 AM Dashboard Cleanup #57735 (New): mgr/dashboard: improve visualization of disk availability
- If the disk is not available we just show empty in the Available column of the Physical Disks page. Instead it should...
- 10:12 AM rgw Bug #53464 (Resolved): pubsub: list topics, radosgw coredump
- 10:12 AM rgw Bug #53325 (Resolved): Test failures in ceph_test_cls_rgw_gc
- 10:12 AM rgw Backport #53638 (Rejected): octopus: Test failures in ceph_test_cls_rgw_gc
- Octopus is EOL
- 10:11 AM rgw Bug #53705 (Resolved): rgw: in bucket reshard list, clarify new num shards is tentative
- 10:11 AM rgw Backport #54153 (Rejected): octopus: rgw: in bucket reshard list, clarify new num shards is tenta...
- Octopus is EOL
- 10:11 AM rgw Backport #54154 (Resolved): quincy: rgw: in bucket reshard list, clarify new num shards is tentative
- 10:11 AM rgw Bug #49747 (Resolved): tempest: test_create_object_with_transfer_encoding fails
- 10:10 AM rgw Backport #51782 (Resolved): octopus: tempest: test_create_object_with_transfer_encoding fails
- 10:10 AM rgw Bug #53469 (Resolved): cohort::lru may unlock twice
- 10:10 AM rgw Backport #53470 (Resolved): octopus: cohort::lru may unlock twice
- 10:10 AM rgw Bug #52673 (Resolved): rgw: remove rgw_rados_pool_pg_num_min and its use on pool creation
- 10:10 AM rgw Bug #53367 (Resolved): Log S3 access key ID in ops logs
- 10:09 AM rgw Backport #55999 (Resolved): quincy: Log S3 access key ID in ops logs
- 10:08 AM rgw Bug #50141 (Resolved): consecutive complete-multipart-upload requests following the 1st (successf...
- 10:08 AM rgw Backport #53867 (Resolved): octopus: consecutive complete-multipart-upload requests following the...
- 10:08 AM rgw Bug #54500 (Resolved): Trim olh entries with empty name from bi
- 10:08 AM rgw Backport #55249 (Resolved): octopus: Trim olh entries with empty name from bi
- Octopus is EOL
- 10:07 AM rgw Bug #55432 (Resolved): ceph_test_librgw_file_nfsns crash on shutdown during ~OpsLogFile()
- 10:07 AM rgw Bug #53788 (Resolved): radosgw should reopen ops log file on SIGHUP
- 10:07 AM rgw Backport #55997 (Resolved): quincy: radosgw should reopen ops log file on SIGHUP
- 10:07 AM rgw Bug #53003 (Resolved): Performance regression on rgw/s3 copy operation
- 10:06 AM rgw Backport #53145 (Resolved): pacific: Performance regression on rgw/s3 copy operation
- 10:06 AM rgw Backport #55996 (Resolved): pacific: radosgw should reopen ops log file on SIGHUP
- 10:06 AM rgw Backport #55456 (Resolved): pacific: ceph_test_librgw_file_nfsns crash on shutdown during ~OpsLog...
- 10:06 AM rgw Backport #55250 (Resolved): pacific: Trim olh entries with empty name from bi
- 10:06 AM rgw Backport #53868 (Resolved): pacific: consecutive complete-multipart-upload requests following the...
- 10:06 AM rgw Backport #55998 (Resolved): pacific: Log S3 access key ID in ops logs
- 10:06 AM rgw Backport #54278 (Resolved): pacific: rgw: remove rgw_rados_pool_pg_num_min and its use on pool cr...
- 10:05 AM rgw Backport #53471 (Resolved): pacific: cohort::lru may unlock twice
- 10:05 AM rgw Backport #51783 (Resolved): pacific: tempest: test_create_object_with_transfer_encoding fails
- 10:05 AM rgw Backport #54152 (Resolved): pacific: rgw: in bucket reshard list, clarify new num shards is tenta...
- 10:05 AM rgw Backport #53637 (Resolved): pacific: Test failures in ceph_test_cls_rgw_gc
- 10:05 AM rgw Backport #53518 (Resolved): pacific: pubsub: list topics, radosgw coredump
- 10:05 AM rgw Bug #53252 (Resolved): compiler warning, enumeration value ‘AMQP_STATUS_SSL_SET_ENGINE_FAILED’ no...
- 10:05 AM rgw Backport #53640 (Resolved): octopus: compiler warning, enumeration value ‘AMQP_STATUS_SSL_SET_ENG...
- 10:05 AM rgw Bug #53226 (Resolved): `radosgw-admin reshard stale-instances rm` can't remove stale-indexs which...
- 10:04 AM rgw Backport #53653 (Resolved): octopus: `radosgw-admin reshard stale-instances rm` can't remove stal...
- 10:04 AM rgw Bug #47861 (Resolved): swift rest api on GET returns X-Container-* headers with zero value
- 10:04 AM rgw Bug #48755 (Resolved): [doc] mention support for cross-region replication (added with per-bucket ...
- 10:04 AM rgw Backport #53836 (Resolved): octopus: [doc] mention support for cross-region replication (added wi...
- 10:04 AM rgw Backport #53835 (Resolved): pacific: [doc] mention support for cross-region replication (added wi...
- 10:03 AM rgw Backport #53818 (Resolved): pacific: swift rest api on GET returns X-Container-* headers with zer...
- 10:03 AM rgw Backport #53654 (Resolved): pacific: `radosgw-admin reshard stale-instances rm` can't remove stal...
- 10:03 AM rgw Backport #53639 (Resolved): pacific: compiler warning, enumeration value ‘AMQP_STATUS_SSL_SET_ENG...
- 10:03 AM rgw Bug #51560 (Resolved): the root cause of rgw.none appearance
- 10:03 AM rgw Backport #53255 (Rejected): octopus: the root cause of rgw.none appearance
- Octopus is EOL
- 10:03 AM rgw Backport #53254 (Resolved): pacific: the root cause of rgw.none appearance
- 10:02 AM rgw Bug #51305 (Resolved): notification: zero size in COPY events
- 10:02 AM rgw Backport #51348 (Resolved): pacific: notification: zero size in COPY events
- 10:02 AM rgw Bug #52738 (Resolved): notifications: http endpoints with one trailing slash are considered malfo...
- 10:02 AM rgw Backport #53078 (Resolved): octopus: notifications: http endpoints with one trailing slash are co...
- 10:02 AM rgw Backport #53079 (Resolved): pacific: notifications: http endpoints with one trailing slash are co...
- 10:02 AM rgw Bug #51466 (Resolved): rgw: cls_bucket_list_unordered() might return repeating or partial results...
- 10:01 AM rgw Backport #53037 (Resolved): octopus: rgw: cls_bucket_list_unordered() might return repeating or p...
- 10:01 AM rgw Backport #53036 (Resolved): pacific: rgw: cls_bucket_list_unordered() might return repeating or p...
- 10:00 AM mgr Bug #57710: Exports cannot be removed with ceph_argparse
- Thanks Ramana, that was the issue, fixed it on the driver. We can close this tracker.
- 09:59 AM mgr Bug #57694: Exports not created correctly when using ceph_argparse
- Thanks Ramana, that was the issue, fixed it on the driver. We can close this tracker.
- 09:59 AM mgr Bug #57711: Exports not updated correctly when using ceph_argparse
- Thanks Ramana, that was the issue, fixed it on the driver. We can close this tracker.
- 09:58 AM rgw Bug #52091 (Resolved): multisite: if mdlogs are trimmed prematurely, 'radosgw-admin sync status' ...
- 09:58 AM rgw Bug #52037 (Resolved): PutObjRentention allows invalid changes to retention mode
- 09:58 AM rgw Bug #52941 (Resolved): rgw: have "bucket check --fix" fix pool ids correctly
- 09:58 AM rgw Backport #52991 (Resolved): octopus: rgw: have "bucket check --fix" fix pool ids correctly
- 09:57 AM rgw Bug #43259 (Resolved): S3 CopyObject: failed to parse copy location
- 09:57 AM rgw Backport #51700 (Resolved): octopus: S3 CopyObject: failed to parse copy location
- 09:57 AM rgw Bug #52069 (Resolved): hadoop: broken mirror for apache-maven download
- 09:57 AM rgw Backport #52114 (Resolved): octopus: hadoop: broken mirror for apache-maven download
- 09:56 AM rgw Backport #52114 (Rejected): octopus: hadoop: broken mirror for apache-maven download
- 09:56 AM rgw Bug #51114 (Resolved): TestAMQP.ClosedConnection failing in master
- 09:56 AM rgw Backport #52072 (Resolved): octopus: PutObjRentention allows invalid changes to retention mode
- 09:56 AM rgw Backport #52108 (Resolved): octopus: multisite: if mdlogs are trimmed prematurely, 'radosgw-admin...
- 09:56 AM rgw Backport #52107 (Resolved): pacific: multisite: if mdlogs are trimmed prematurely, 'radosgw-admin...
- 09:56 AM rgw Backport #52071 (Resolved): pacific: PutObjRentention allows invalid changes to retention mode
- 09:56 AM rgw Backport #51351 (Resolved): pacific: TestAMQP.ClosedConnection failing in master
- 09:55 AM rgw Backport #52115 (Resolved): pacific: hadoop: broken mirror for apache-maven download
- 09:55 AM rgw Backport #51777 (Resolved): pacific: rgw forward request in multisite for RGWDeleteBucketPolicy a...
- 09:55 AM rgw Backport #51701 (Resolved): pacific: S3 CopyObject: failed to parse copy location
- 09:55 AM rgw Backport #52990 (Resolved): pacific: rgw: have "bucket check --fix" fix pool ids correctly
- 09:55 AM rgw Bug #48860 (Resolved): sse: on missing kms keyid, return 400 InvalidArgument instead 403 InvalidA...
- 09:55 AM rgw Documentation #52830 (Resolved): rgw: document rgw_lc_debug_interval configuration option
- 09:55 AM rgw Backport #53157 (Resolved): octopus: sse: on missing kms keyid, return 400 InvalidArgument instea...
- 09:55 AM rgw Backport #52989 (Resolved): octopus: rgw: document rgw_lc_debug_interval configuration option
- 09:54 AM rgw Bug #51253 (Resolved): rgw: add function entry logging to make more thorough and consistent
- 09:54 AM rgw Backport #52612 (Rejected): octopus: rgw: add function entry logging to make more thorough and co...
- Octopus is EOL
- 09:54 AM rgw Backport #53158 (Resolved): pacific: sse: on missing kms keyid, return 400 InvalidArgument instea...
- 09:54 AM rgw Backport #52988 (Resolved): pacific: rgw: document rgw_lc_debug_interval configuration option
- 09:54 AM rgw Backport #52611 (Resolved): pacific: rgw: add function entry logging to make more thorough and co...
- 09:54 AM rgw Bug #52027 (Resolved): XML responses return different order of XML elements
- 09:53 AM rgw Backport #52349 (Resolved): octopus: XML responses return different order of XML elements
- 09:53 AM rgw Backport #52348 (Resolved): pacific: XML responses return different order of XML elements
- 09:53 AM rgw Bug #52070 (Resolved): kmip and barbican tests failing on 'pip install pytz'
- 09:53 AM rgw Backport #52112 (Rejected): octopus: kmip and barbican tests failing on 'pip install pytz'
- Octopus is EOL
- 09:53 AM rgw Backport #52113 (Resolved): pacific: kmip and barbican tests failing on 'pip install pytz'
- 09:52 AM rgw Bug #51677 (Resolved): default value of follow_olh in get_obj_state() differs between sal::Object...
- 09:52 AM rgw Backport #51810 (Resolved): pacific: default value of follow_olh in get_obj_state() differs betwe...
- 09:52 AM rgw Backport #51810 (Duplicate): pacific: default value of follow_olh in get_obj_state() differs betw...
- 09:51 AM rgw Bug #46625 (Resolved): rgw: md5 signatures do not match when rgw compress is enabled for RGWBulkUpload
- 09:51 AM rgw Backport #51702 (Resolved): octopus: rgw: md5 signatures do not match when rgw compress is enabled for R...
- 09:51 AM rgw Bug #49128 (Resolved): Non-default storage class results in some garbage
- 09:51 AM rgw Backport #51353 (Rejected): octopus: Non-default storage class results in some garbage
- Octopus is EOL
- 09:50 AM rgw Backport #51703 (Resolved): pacific: rgw: md5 signatures do not match when rgw compress is enabled for R...
- 09:50 AM rgw Backport #51352 (Resolved): pacific: Non-default storage class results in some garbage
- 09:50 AM rgw Bug #54114 (Resolved): PostObj ignores error, may lose data
- 09:50 AM rgw Bug #54417 (Resolved): radosgw-admin bucket sync run command crashes when send_chain() gets invoked
- 09:49 AM rgw Backport #54150 (Resolved): quincy: PostObj ignores error, may lose data
- 09:49 AM rgw Backport #54428 (Resolved): quincy: radosgw-admin bucket sync run command crashes when send_chain...
- 09:49 AM rgw Backport #54427 (Rejected): octopus: radosgw-admin bucket sync run command crashes when send_chai...
- Octopus is EOL
- 09:48 AM rgw Backport #54149 (Resolved): octopus: PostObj ignores error, may lose data
- 09:48 AM rgw Backport #54426 (Resolved): pacific: radosgw-admin bucket sync run command crashes when send_chai...
- 09:48 AM rgw Backport #54148 (Resolved): pacific: PostObj ignores error, may lose data
- 09:48 AM rgw Bug #54116 (Resolved): admin: datalog list always returns max-entries
- 09:48 AM rgw Bug #53728 (Resolved): rgwlc: warn at level 0 if lifecycle processing for a valid bucket marker f...
- 09:47 AM rgw Backport #54146 (Resolved): quincy: admin: datalog list always returns max-entries
- 09:47 AM rgw Backport #54093 (Resolved): quincy: rgwlc: warn at level 0 if lifecycle processing for a valid bu...
- 09:47 AM rgw Backport #54147 (Resolved): pacific: admin: datalog list always returns max-entries
- 09:47 AM rgw Backport #54092 (Resolved): pacific: rgwlc: warn at level 0 if lifecycle processing for a valid b...
- 09:47 AM rgw Bug #50194 (Resolved): librgw: make rgw file handle versioned
- 09:47 AM rgw Backport #54085 (Resolved): octopus: librgw: make rgw file handle versioned
- 09:46 AM rgw Backport #54084 (Resolved): quincy: librgw: make rgw file handle versioned
- 09:46 AM rgw Backport #54083 (Resolved): pacific: librgw: make rgw file handle versioned
- 09:46 AM rgw Bug #53599 (Resolved): Memory leak in radosgw-admin bucket chown command
- 09:46 AM rgw Backport #54075 (Resolved): octopus: Memory leak in radosgw-admin bucket chown command
- 09:45 AM rgw Backport #54076 (Resolved): quincy: Memory leak in radosgw-admin bucket chown command
- 09:45 AM rgw Backport #54077 (Resolved): pacific: Memory leak in radosgw-admin bucket chown command
- 09:45 AM rgw Bug #53731 (Resolved): remove bucket API returns NoSuchKey rather than NoSuchBucket
- 09:45 AM rgw Backport #54040 (Rejected): quincy: remove bucket API returns NoSuchKey rather than NoSuchBucket
- 09:39 AM rgw Backport #54039 (Rejected): octopus: remove bucket API returns NoSuchKey rather than NoSuchBucket
- Octopus is EOL
- 09:38 AM rgw Backport #54041 (Resolved): pacific: remove bucket API returns NoSuchKey rather than NoSuchBucket
- 09:38 AM rgw Backport #55713 (Rejected): octopus: user policy API incompatibilities with aws
- Octopus is EOL
- 09:38 AM rgw Backport #55714 (Resolved): quincy: user policy API incompatibilities with aws
- 09:36 AM rgw Bug #54130 (Resolved): OpsLogRados::log segfaults in rgw/multisite suite
- 09:35 AM rgw Backport #54162 (Resolved): quincy: OpsLogRados::log segfaults in rgw/multisite suite
- 09:35 AM rgw Backport #54537 (Resolved): pacific: OpsLogRados::log segfaults in rgw/multisite suite
- 09:35 AM rgw Bug #53856 (Resolved): rgw: fix bucket index list minor calculation bug
- 09:35 AM rgw Backport #54072 (Resolved): octopus: rgw: fix bucket index list minor calculation bug
- 09:34 AM rgw Backport #54074 (Resolved): pacific: rgw: fix bucket index list minor calculation bug
- 09:32 AM rgw Bug #48001 (Resolved): Broken SwiftAPI anonymous access
- 09:32 AM rgw Backport #56955 (Resolved): quincy: Broken SwiftAPI anonymous access
- 09:31 AM rgw Backport #56954 (Resolved): pacific: Broken SwiftAPI anonymous access
- 09:31 AM rgw Backport #55968 (Resolved): quincy: RGWRados::check_disk_state not checking object's storage_class
- 09:30 AM rgw Backport #55969 (Resolved): pacific: RGWRados::check_disk_state not checking object's storage_class
- 09:30 AM rgw Bug #50924 (Resolved): rgw/rgw_string.h: has missing includes when compiling with boost 1.75 on a...
- 09:30 AM rgw Backport #56731 (Resolved): pacific: rgw/rgw_string.h: has missing includes when compiling with b...
- 09:28 AM RADOS Bug #50192 (Resolved): FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get_missing().is_...
- 09:27 AM RADOS Backport #50274 (Resolved): pacific: FAILED ceph_assert(attrs || !recovery_state.get_pg_log().get...
- 09:27 AM RADOS Bug #53516 (Resolved): Disable health warning when autoscaler is on
- 09:27 AM RADOS Backport #53644 (Resolved): pacific: Disable health warning when autoscaler is on
- 09:27 AM RADOS Bug #51942 (Resolved): src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<const NotActive*>())
- 09:26 AM RADOS Backport #53339 (Resolved): pacific: src/osd/scrub_machine.cc: FAILED ceph_assert(state_cast<cons...
- 09:26 AM RADOS Bug #55001 (Resolved): rados/test.sh: Early exit right after LibRados global tests complete
- 09:26 AM RADOS Backport #57029 (Resolved): pacific: rados/test.sh: Early exit right after LibRados global tests ...
- 09:26 AM RADOS Bug #57119 (Resolved): Heap command prints with "ceph tell", but not with "ceph daemon"
- 09:25 AM RADOS Backport #57313 (Resolved): pacific: Heap command prints with "ceph tell", but not with "ceph dae...
- 09:04 AM mgr Bug #56486 (Resolved): mgr/telemetry: reset health warning after re-opting-in
- 09:04 AM mgr Backport #56720 (Resolved): pacific: mgr/telemetry: reset health warning after re-opting-in
- 09:04 AM mgr Bug #53475 (Resolved): progress: dump of pg_stats holds ClusterState::lock for long periods on lar...
- 09:03 AM mgr Backport #53634 (Resolved): pacific: progress: dump of pg_stats holds ClusterState::lock for long ...
- 09:03 AM mgr Bug #53039 (Resolved): osd: ceph osd stop does not take effect
- 09:03 AM mgr Backport #53201 (Resolved): pacific: osd: ceph osd stop does not take effect
- 09:03 AM Dashboard Bug #53022 (Resolved): mgr/dashboard: monitoring: grafonnet refactoring for hosts dashboards
- 09:02 AM Dashboard Backport #53023 (Resolved): pacific: mgr/dashboard: monitoring: grafonnet refactoring for hosts d...
- 09:02 AM Dashboard Bug #53374 (Resolved): mgr/dashboard: monitoring: refactor into ceph-mixin
- 09:02 AM Dashboard Backport #54138 (Resolved): pacific: mgr/dashboard: monitoring: refactor into ceph-mixin
- 09:01 AM Dashboard Bug #54176 (Resolved): mgr/dashboard: change monitoring directories in cephadm bootstrap script
- 09:01 AM Dashboard Backport #54177 (Resolved): pacific: mgr/dashboard: change monitoring directories in cephadm boot...
- 09:00 AM Dashboard Bug #55195 (Resolved): mgr/dashboard: update grafana piechart and vonage status panel versions
- 08:59 AM Dashboard Backport #55199 (Resolved): pacific: mgr/dashboard: update grafana piechart and vonage status pan...
- 08:59 AM Dashboard Bug #56074 (Resolved): mgr/dashboard: bump moment from 2.29.1 to 2.29.3 (CVE-2022-24785)
- 08:58 AM Dashboard Backport #56076 (Resolved): quincy: mgr/dashboard: bump moment from 2.29.1 to 2.29.3 (CVE-2022-24...
- 08:56 AM Dashboard Backport #56075 (Resolved): pacific: mgr/dashboard: bump moment from 2.29.1 to 2.29.3 (CVE-2022-2...
- 08:56 AM Dashboard Bug #56413 (Resolved): mgr/dashboard: OSDs are not created with the Throughput-optimized which is...
- 08:56 AM Dashboard Backport #56615 (Resolved): pacific: mgr/dashboard: OSDs are not created with the Throughput-opti...
- 08:55 AM Dashboard Bug #56688 (Resolved): mgr/dashboard: add required validation for frontend and monitor port
- 08:55 AM Dashboard Backport #56966 (Resolved): pacific: mgr/dashboard: add required validation for frontend and moni...
- 08:55 AM Dashboard Bug #57006 (Resolved): mgr/dashboard: remove debug log line printing the JWT
- 08:54 AM Dashboard Backport #57009 (Resolved): quincy: mgr/dashboard: remove debug log line printing the JWT
- 08:54 AM Dashboard Backport #57010 (Rejected): pacific: mgr/dashboard: remove debug log line printing the JWT
- 08:54 AM Dashboard Bug #57345 (Resolved): mgr/dashboard: "openapi-check" test fails
- 08:53 AM Dashboard Backport #57493 (Resolved): pacific: mgr/dashboard: "openapi-check" test fails
- 08:52 AM CephFS Backport #57282 (Resolved): pacific: cephfs-top: addition of filesystem menu (improving GUI)
- 08:51 AM CephFS Backport #57395 (Resolved): pacific: crash: int Client::_do_remount(bool): abort
- 08:51 AM CephFS Backport #57393 (Resolved): pacific: client: abort the client daemons when we couldn't invalidate...
- 08:48 AM CephFS Bug #55190 (Resolved): mgr/volumes: Show clone failure reason in clone status command
- 08:48 AM CephFS Backport #55349 (Resolved): pacific: mgr/volumes: Show clone failure reason in clone status command
- 08:48 AM CephFS Bug #55313 (Resolved): Unexpected file access behavior using ceph-fuse
- 08:47 AM CephFS Backport #55926 (Resolved): quincy: Unexpected file access behavior using ceph-fuse
- 08:47 AM CephFS Backport #55927 (Resolved): pacific: Unexpected file access behavior using ceph-fuse
- 08:47 AM CephFS Bug #55240 (Resolved): mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
- 08:46 AM CephFS Backport #55659 (Resolved): pacific: mds: stuck 2 seconds and keeps retrying to find ino from aut...
- 08:46 AM CephFS Bug #54237 (Resolved): pybind/cephfs: Add mapping for Errno 13: Permission Denied and adding path ...
- 08:46 AM CephFS Backport #54578 (Resolved): quincy: pybind/cephfs: Add mapping for Errno 13: Permission Denied and...
- 08:43 AM CephFS Backport #54577 (Resolved): pacific: pybind/cephfs: Add mapping for Errno 13: Permission Denied an...
- 08:41 AM CephFS Bug #54653 (Resolved): crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): assert(stag_snap_ma...
- 08:40 AM CephFS Backport #56056 (Resolved): pacific: crash: uint64_t CephFuse::Handle::fino_snap(uint64_t): asser...
- 08:40 AM CephFS Bug #56012 (Resolved): mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
- 08:40 AM CephFS Backport #56526 (Resolved): quincy: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_...
- 08:39 AM CephFS Backport #56527 (Resolved): pacific: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any...
- 08:38 AM CephFS Bug #54052 (Resolved): mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
- 08:38 AM CephFS Backport #55055 (Resolved): quincy: mgr/snap-schedule: scheduled snapshots are not created after ...
- 08:38 AM CephFS Backport #55056 (Resolved): pacific: mgr/snap-schedule: scheduled snapshots are not created after...
- 08:38 AM CephFS Bug #54046 (Resolved): inaccessible dentries after fsstress run with namespace-restricted caps
- 08:37 AM CephFS Backport #55427 (Resolved): pacific: inaccessible dentries after fsstress run with namespace-rest...
- 08:37 AM CephFS Bug #54374 (Resolved): mgr/snap_schedule: include timezone information in scheduled snapshots
- 08:36 AM CephFS Backport #55384 (Resolved): pacific: mgr/snap_schedule: include timezone information in scheduled...
- 08:36 AM CephFS Bug #54625 (Resolved): Issue removing subvolume with retained snapshots - Possible quincy regress...
- 08:35 AM CephFS Backport #55335 (Resolved): pacific: Issue removing subvolume with retained snapshots - Possible ...
- 08:34 AM CephFS Bug #52642 (Resolved): snap scheduler: cephfs snapshot schedule status doesn't list the snapshot ...
- 08:34 AM CephFS Backport #53760 (Resolved): pacific: snap scheduler: cephfs snapshot schedule status doesn't list...
- 08:33 AM CephFS Bug #55217 (Resolved): pybind/mgr/volumes: Clone operation hangs
- 08:33 AM CephFS Backport #55353 (Resolved): quincy: pybind/mgr/volumes: Clone operation hangs
- 08:33 AM CephFS Backport #55352 (Resolved): pacific: pybind/mgr/volumes: Clone operation hangs
- 08:33 AM CephFS Bug #52606 (Resolved): qa: test_dirfrag_limit
- 08:32 AM CephFS Backport #52875 (Resolved): pacific: qa: test_dirfrag_limit
- 08:32 AM CephFS Bug #51707 (Resolved): pybind/mgr/volumes: Cloner threads stuck in loop trying to clone the stale...
- 08:32 AM CephFS Backport #52384 (Resolved): pacific: pybind/mgr/volumes: Cloner threads stuck in loop trying to c...
- 08:31 AM CephFS Bug #54081 (Resolved): mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from v1...
- 08:31 AM CephFS Backport #54160 (Resolved): quincy: mon/MDSMonitor: sanity assert when inline data turned on in M...
- 08:31 AM CephFS Backport #54161 (Resolved): pacific: mon/MDSMonitor: sanity assert when inline data turned on in ...
- 08:31 AM CephFS Bug #53911 (Resolved): client: client session state stuck in opening and hang all the time
- 08:30 AM CephFS Backport #54216 (Resolved): quincy: client: client session state stuck in opening and hang all th...
- 08:30 AM CephFS Backport #54217 (Resolved): pacific: client: client session state stuck in opening and hang all t...
- 08:30 AM CephFS Bug #51062 (Resolved): mds,client: support getvxattr RPC
- 08:30 AM CephFS Backport #54533 (Resolved): quincy: mds,client: support getvxattr RPC
- 08:29 AM CephFS Backport #54532 (Resolved): pacific: mds,client: support getvxattr RPC
- 08:24 AM CephFS Bug #54066 (Resolved): mgr/volumes: uid/gid of the clone is incorrect
- 08:24 AM CephFS Backport #54420 (Rejected): octopus: mgr/volumes: uid/gid of the clone is incorrect
- Octopus is EOL
- 08:24 AM CephFS Backport #54256 (Resolved): pacific: mgr/volumes: uid/gid of the clone is incorrect
- 07:23 AM rgw Bug #51462 (Resolved): rgw: resolve empty ordered bucket listing results w/ CLS filtering
- 07:23 AM rgw Backport #52076 (Resolved): octopus: rgw: resolve empty ordered bucket listing results w/ CLS fil...
- 07:23 AM rgw Backport #52075 (Resolved): pacific: rgw: resolve empty ordered bucket listing results w/ CLS fil...
- 06:37 AM CephFS Documentation #57734 (Resolved): doc: Fix disaster recovery documentation
- The note about symlink recovery at the link below needs to be fixed; symlink recovery is fixed in quincy.
ht...
- 05:36 AM Dashboard Backport #57733 (New): quincy: mgr/dashboard: Improve level A accessibility for missing aria labe...
- 05:18 AM RADOS Backport #57372 (Resolved): quincy: segfault in librados via libcephsqlite
- 05:08 AM Dashboard Bug #55872 (Pending Backport): mgr/dashboard: Improve level A accessibility for missing aria labe...
- 04:23 AM RADOS Bug #57532: Notice discrepancies in the performance of mclock built-in profiles
- As Sridhar has mentioned in the BZ, the Case 2 results are due to the max limit setting for best effort clients. This...
- 02:34 AM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- So, it looks like moving the db to a db volume works with ceph-bluestore-tool bluefs-bdev-migrate. So most of the way...
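For reference, a minimal sketch (in Python, wrapping the CLI) of the kind of migration described above; the OSD path is illustrative, the OSD must be stopped, and it assumes the target DB volume has already been attached (e.g. via bluefs-bdev-new-db):

    import subprocess

    osd_path = "/var/lib/ceph/osd/ceph-0"  # example OSD data dir, not from this report

    # Move BlueFS (RocksDB) data off the main device onto the separate DB volume.
    subprocess.run(
        ["ceph-bluestore-tool", "bluefs-bdev-migrate",
         "--path", osd_path,
         "--devs-source", osd_path + "/block",
         "--dev-target", osd_path + "/block.db"],
        check=True,
    )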
- 02:19 AM RADOS Bug #49888: rados/singleton: radosbench.py: teuthology.exceptions.MaxWhileTries: reached maximum ...
- /a/yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046230/
- 01:55 AM Orchestrator Bug #54029: orch:cephadm/workunits/{agent/on mon_election/connectivity task/test_orch_cli} test f...
- yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046362
09/29/2022
- 09:01 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Igor Fedotov wrote:
> The issue with that fragmentation score is that there is no strong math behind it. Originally ...
- 08:39 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Igor Fedotov wrote:
> If this is still available - may I ask you to run ceph-bluestore-tool's free-dump command and ...
- 08:38 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Igor Fedotov wrote:
> This is totally irrelevant - these are warnings showing legacy formatted omaps for this OSD....
- 08:15 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Kevin Fox wrote:
> Random other thing... during repairs, I see:
> [root@pc20 ceph]# ceph-bluestore-tool --log-lev...
- 08:10 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Kevin Fox wrote:
> Just saw this again, on a small scale. Just one of the osds that I had moved the db off to its ow...
- 08:08 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Kevin Fox wrote:
> One note I see in the rook documentation:
> "Notably, ceph-volume will not use a device of the s...
- 08:06 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Kevin Fox wrote:
> Hi Igor,
>
>
> Does the fragmentation score alone show how fragmented things are? I still se...
- 04:30 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Random other thing... during repairs, I see:
[root@pc20 ceph]# ceph-bluestore-tool --log-level 30 --path /var/lib/...
- 04:19 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Got some more info... during the outage, I had 12 drives that wouldn't recover by moving off the db. Looking back thr...
- 03:51 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Just saw this again, on a small scale. Just one of the osds that I had moved the db off to its own volume, just enter...
- 08:37 PM RADOS Bug #56101: Gibba Cluster: 17.2.0 to 17.2.1 RC upgrade OSD crash in function safe_timer
- This was visible again in the LRC upgrade today....
- 08:08 PM Stable releases Tasks #57472: quincy v17.2.4
- Release build https://jenkins.ceph.com/view/all/job/ceph/572/
1353ed37dec8d74973edc3d5d5908c20ad5a7332
podman pull...
- 07:48 PM Dashboard Bug #57730 (Resolved): mgr/dashboard: iscsi service created from cephadm's gateway is down
- the service_spec applied is without api_user and api_password....
- 07:43 PM mgr Bug #57711 (Need More Info): Exports not updated correctly when using ceph_argparse
- Can you try replacing the 'nfs_cluster_id' with 'cluster_id' in argdict as mentioned in the docs,
https://docs.ceph....
- 08:06 AM mgr Bug #57711 (Rejected): Exports not updated correctly when using ceph_argparse
- When issuing the command nfs export apply through the ceph_argparse.py library and there is an export already created...
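For anyone hitting the same thing, a minimal sketch of the suggested argdict fix, assuming the command is issued through ceph_argparse.json_command (the same advice applies to #57694 and #57710 below); the cluster name and export path here are made up, and 'pseudo_path' is an assumed parameter name:

    import rados
    from ceph_argparse import json_command

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()

    # The key must be 'cluster_id' (per the docs), not 'nfs_cluster_id'.
    # The nfs commands are handled by the mgr, hence the 'mgr' target.
    ret, outbuf, outs = json_command(
        cluster,
        target=("mgr", ""),
        prefix="nfs export rm",
        argdict={"cluster_id": "mynfs", "pseudo_path": "/myexport"},
    )
    print(ret, outs)
    cluster.shutdown()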
- 07:40 PM mgr Bug #57694 (Need More Info): Exports not created correctly when using ceph_argparse
- Can you try replacing the 'nfs_cluster_id' with 'cluster_id' in argdict as mentioned in the docs,
https://docs.ceph....
- 07:36 PM mgr Bug #57710 (Need More Info): Exports cannot be removed with ceph_argparse
- The argument 'nfs_cluster_id' in argdict doesn't look correct. Try replacing it with 'cluster_id' as
mentioned in ht...
- 07:50 AM mgr Bug #57710 (Rejected): Exports cannot be removed with ceph_argparse
- When issuing the command nfs export rm through the ceph_argparse.py library a rados error exception is raised as foll...
- 07:32 PM Orchestrator Bug #57311: rook: ensure CRDs are installed first
- /a/yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046305
- 02:41 PM Orchestrator Bug #57311: rook: ensure CRDs are installed first
- /a/yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046147
- 07:31 PM RADOS Bug #50222: osd: 5.2s0 deep-scrub : stat mismatch
- yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046253
- 07:21 PM RADOS Bug #53768: timed out waiting for admin_socket to appear after osd.2 restart in thrasher/defaults...
- yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046234
- 07:16 PM Orchestrator Bug #52321: qa/tasks/rook times out: 'check osd count' reached maximum tries (90) after waiting f...
- yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046227
- 07:03 PM Dashboard Bug #57386: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selecto...
- yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046226
- 06:59 PM Orchestrator Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- yuriw-2022-09-27_23:37:28-rados-wip-yuri2-testing-2022-09-27-1455-distro-default-smithi/7046226
- 06:02 PM RADOS Bug #55435 (Resolved): mon/Elector: notify_ranked_removed() does not properly erase dead_ping in ...
- 06:01 PM RADOS Backport #56550 (Resolved): pacific: mon/Elector: notify_ranked_removed() does not properly erase...
- 06:00 PM rgw Backport #57559 (In Progress): quincy: data corruption due to network jitter
- 06:00 PM rgw Backport #57560 (In Progress): pacific: data corruption due to network jitter
- 05:53 PM rgw Backport #52729 (Rejected): octopus: Federated user can modify policies in other tenants
- Octopus is EOL
- 03:55 PM RADOS Bug #54611 (Resolved): prometheus metrics shows incorrect ceph version for upgraded ceph daemon
- 03:54 PM RADOS Backport #55309 (Resolved): pacific: prometheus metrics shows incorrect ceph version for upgraded...
- 03:00 PM Linux kernel client Bug #57686: general protection fault and CephFS kernel client hangs after MDS failover
- When the affected directory is mounted with "noquotadf", the issue does not occur.
- 02:56 PM Linux kernel client Bug #57703: unable to handle page fault for address and system lockup after MDS failover
- When mounting "/" (the whole CephFS) instead of the individual directories, MDS failovers don't cause any issues (not...
- 02:52 PM RADOS Bug #57727: mon_cluster_log_file_level option doesn't take effect
- Yes. I was trying to close it as a duplicate after editing my comment. Thank you for closing it.
- 02:50 PM RADOS Bug #57727 (Duplicate): mon_cluster_log_file_level option doesn't take effect
- Ah, you edited your comment to say "Closing this tracker as a duplicate of 57049".
- 02:48 PM RADOS Bug #57727 (Fix Under Review): mon_cluster_log_file_level option doesn't take effect
- 02:41 PM RADOS Bug #57727: mon_cluster_log_file_level option doesn't take effect
- Hi Ilya,
I had a PR#47480 opened for this issue but closed it in favor of PR#47502. We have an old tracker 57049 fo...
- 02:00 PM RADOS Bug #57727 (Duplicate): mon_cluster_log_file_level option doesn't take effect
- This appears to be a regression introduced in quincy in https://github.com/ceph/ceph/pull/42014:...
- 02:44 PM RADOS Bug #57049: cluster logging does not adhere to mon_cluster_log_file_level
- I had PR#47480 opened for this issue but closed it in favor of PR#47502, which addresses this issue along wi...
- 02:37 PM rgw Bug #57679: RGW/swift: Lost data if copy SLO-object and delete original
- This sounds a lot like a group of issues Eric worked on a few years ago...
- 02:26 PM rgw Bug #57679 (Need More Info): RGW/swift: Lost data if copy SLO-object and delete original
- Hi Andrey, are you able to test this in the real openstack environment? It's hard for us to know what the right behav...
- 02:32 PM CephFS Backport #57729 (Resolved): quincy: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- https://github.com/ceph/ceph/pull/49967
- 02:32 PM CephFS Backport #57728 (Resolved): pacific: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- https://github.com/ceph/ceph/pull/49966
- 02:21 PM CephFS Bug #57072 (Pending Backport): Quincy 17.2.3 pybind/mgr/status: assert metadata failed
- 02:15 PM RADOS Backport #56735 (Resolved): octopus: unnecessarily long laggy PG state
- 02:14 PM rgw Bug #57231 (Fix Under Review): Valgrind: jump on uninitialized in s3select
- 02:14 PM RADOS Bug #50806 (Resolved): osd/PrimaryLogPG.cc: FAILED ceph_assert(attrs || !recovery_state.get_pg_lo...
- 02:13 PM RADOS Backport #50893 (Resolved): pacific: osd/PrimaryLogPG.cc: FAILED ceph_assert(attrs || !recovery_s...
- 02:07 PM RADOS Bug #55158 (Resolved): mon/OSDMonitor: properly set last_force_op_resend in stretch mode
- 02:07 PM RADOS Backport #55281 (Resolved): pacific: mon/OSDMonitor: properly set last_force_op_resend in stretch...
- 02:05 PM rgw Backport #54518 (Rejected): octopus: add OPT_DATA_SYNC_RUN to gc_ops_list to initialize gc that p...
- Octopus is EOL
- 02:05 PM rgw Bug #54433 (Resolved): add OPT_DATA_SYNC_RUN to gc_ops_list to initialize gc that prevents send_c...
- 02:01 PM rgw Backport #54520 (Resolved): quincy: add OPT_DATA_SYNC_RUN to gc_ops_list to initialize gc that pr...
- 02:01 PM rgw Backport #54519 (Resolved): pacific: add OPT_DATA_SYNC_RUN to gc_ops_list to initialize gc that p...
- 02:01 PM rgw Bug #51152 (Resolved): add role information and auth type to ops log
- 02:00 PM rgw Backport #52782 (Resolved): pacific: add role information and auth type to ops log
- 02:00 PM rgw Bug #52085 (Resolved): crypt: can't load client cert from /home/ubuntu/cephtest/ca/kmip-client.crt
- 02:00 PM rgw Backport #54494 (Resolved): pacific: segmentation fault in UserAsyncRefreshHandler::init_fetch
- 02:00 PM rgw Backport #54035 (Resolved): pacific: crypt: can't load client cert from /home/ubuntu/cephtest/ca/...
- 01:57 PM CephFS Bug #54121 (Resolved): mgr/volumes: File Quota attributes not getting inherited to the cloned volume
- 01:57 PM CephFS Backport #54333 (Resolved): quincy: mgr/volumes: File Quota attributes not getting inherited to t...
- 01:57 PM CephFS Backport #54331 (Rejected): octopus: mgr/volumes: File Quota attributes not getting inherited to ...
- 01:56 PM CephFS Backport #54332 (Resolved): pacific: mgr/volumes: File Quota attributes not getting inherited to ...
- 01:56 PM CephFS Bug #54099 (Resolved): mgr/volumes: A deleted subvolumegroup when listed using "ceph fs subvolume...
- 01:56 PM CephFS Backport #54336 (Resolved): quincy: mgr/volumes: A deleted subvolumegroup when listed using "ceph...
- 01:55 PM CephFS Backport #54334 (Rejected): octopus: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- 01:55 PM CephFS Backport #54335 (Resolved): pacific: mgr/volumes: A deleted subvolumegroup when listed using "cep...
- 01:54 PM CephFS Backport #53458 (Resolved): pacific: pacific: qa: Test failure: test_deep_split (tasks.cephfs.tes...
- 01:53 PM CephFS Backport #53912 (Resolved): pacific: qa: fs:upgrade test fails mds count check
- 01:53 PM CephFS Bug #52274 (Resolved): mgr/nfs: add more log messages
- 01:52 PM CephFS Backport #52823 (Resolved): pacific: mgr/nfs: add more log messages
- 01:51 PM ceph-volume Bug #57144 (Resolved): ceph-volume lvm zap throws an error: "Unable to detect the partition numbe...
- 01:51 PM ceph-volume Backport #57380 (Resolved): quincy: ceph-volume lvm zap throws an error: "Unable to detect the pa...
- 01:51 PM ceph-volume Backport #57381 (Resolved): pacific: ceph-volume lvm zap throws an error: "Unable to detect the p...
- 01:48 PM Bug #53281 (Resolved): Windows IPv6 support
- 01:47 PM Backport #56728 (Resolved): quincy: Windows IPv6 support
- 01:47 PM Backport #56729 (Resolved): pacific: Windows IPv6 support
- 01:47 PM Bug #56480 (Resolved): std::shared_mutex deadlocks on Windows
- 01:47 PM Backport #57054 (Resolved): quincy: std::shared_mutex deadlocks on Windows
- 01:47 PM Backport #57053 (Resolved): pacific: std::shared_mutex deadlocks on Windows
- 01:46 PM Bug #57308 (Resolved): Incorrect err pointer casts on Windows
- 01:46 PM Backport #57403 (Resolved): pacific: Incorrect err pointer casts on Windows
- 01:45 PM rbd Bug #57726 (Resolved): [rbd_support] set_localized_module_option(..., None) is spamming the audit...
- https://github.com/rook/rook/discussions/11052
- 01:38 PM Bug #56400 (Resolved): Multiple "unsolicited reservation grant" messages logged, with no justific...
- 01:38 PM Backport #56404 (Resolved): pacific: Multiple "unsolicited reservation grant" messages logged, wi...
- 01:31 PM Bug #49627 (Resolved): mgr: fix dump duplicate info for ceph osd df
- 01:31 PM Backport #49733 (Resolved): pacific: mgr: fix dump duplicate info for ceph osd df
- 01:25 PM Orchestrator Feature #57725 (New): cephadm: add default dest to build.py
- Was thinking that a file named "cephadm" in the directory where build.py lives (typically ceph/src/cephadm) would make...
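A rough sketch of what that default could look like, assuming build.py takes the destination as an optional positional argument (names here are illustrative, not the actual build.py code):

    import argparse
    import pathlib

    HERE = pathlib.Path(__file__).resolve().parent  # typically ceph/src/cephadm

    parser = argparse.ArgumentParser()
    parser.add_argument(
        "dest",
        nargs="?",
        default=str(HERE / "cephadm"),  # default: a file named "cephadm" next to build.py
        help="path to write the compiled cephadm binary",
    )
    args = parser.parse_args()
    print("writing to", args.dest)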
- 12:56 PM rgw Bug #57724 (Resolved): Keys returned by Admin API during user creation on secondary zone not valid
- When in a multisite configuration, user creation on a secondary zone via the admin API returns keys that are not vali...
- 12:47 PM CephFS Backport #57723 (Resolved): pacific: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://github.com/ceph/ceph/pull/48417
- 12:46 PM CephFS Backport #57722 (In Progress): quincy: qa: test_subvolume_snapshot_info_if_orphan_clone fails
- https://github.com/ceph/ceph/pull/48325
- 12:46 PM CephFS Backport #57721 (Resolved): pacific: qa: data-scan/journal-tool do not output debugging in upstre...
- https://github.com/ceph/ceph/pull/50773
- 12:46 PM CephFS Backport #57720 (Resolved): quincy: qa: data-scan/journal-tool do not output debugging in upstrea...
- https://github.com/ceph/ceph/pull/50772
- 12:44 PM CephFS Bug #57597 (Pending Backport): qa: data-scan/journal-tool do not output debugging in upstream tes...
- 12:26 PM CephFS Bug #57446 (Pending Backport): qa: test_subvolume_snapshot_info_if_orphan_clone fails
- PR was reviewed, tested and merged.
- 12:25 PM CephFS Backport #57719 (Resolved): quincy: Test failure: test_subvolume_group_ls_filter_internal_directo...
- https://github.com/ceph/ceph/pull/48327
- 12:25 PM CephFS Backport #57718 (Resolved): pacific: Test failure: test_subvolume_group_ls_filter_internal_direct...
- https://github.com/ceph/ceph/pull/48328
- 12:25 PM CephFS Backport #57717 (Resolved): quincy: libcephfs: incorrectly showing the size for snapdirs when sta...
- https://github.com/ceph/ceph/pull/48414
- 12:25 PM CephFS Backport #57716 (Resolved): pacific: libcephfs: incorrectly showing the size for snapdirs when st...
- https://github.com/ceph/ceph/pull/48413
- 12:24 PM CephFS Bug #57205 (Pending Backport): Test failure: test_subvolume_group_ls_filter_internal_directories ...
- 12:21 PM CephFS Bug #57344 (Pending Backport): libcephfs: incorrectly showing the size for snapdirs when stating ...
- 11:58 AM RADOS Bug #57699: slow osd boot with valgrind (reached maximum tries (50) after waiting for 300 seconds)
- I was not able to reproduce it with the extra debug messages; I created a PR with the debug message and will wait for re...
- 11:57 AM CephFS Backport #57715 (Resolved): quincy: mds: scrub locates mismatch between child accounted_rstats an...
- https://github.com/ceph/ceph/pull/50774
- 11:57 AM CephFS Backport #57714 (Resolved): pacific: mds: scrub locates mismatch between child accounted_rstats a...
- https://github.com/ceph/ceph/pull/50775
- 11:57 AM CephFS Backport #57713 (Resolved): quincy: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- https://github.com/ceph/ceph/pull/50768
- 11:57 AM CephFS Backport #57712 (Resolved): pacific: qa: "1 MDSs behind on trimming (MDS_TRIM)"
- https://github.com/ceph/ceph/pull/50757
- 11:55 AM CephFS Bug #57657 (Pending Backport): mds: scrub locates mismatch between child accounted_rstats and sel...
- 11:54 AM CephFS Bug #57657: mds: scrub locates mismatch between child accounted_rstats and self rstats
- Patrick Donnelly wrote:
> During standup I was thinking of something else. This test deliberately creates this kind ... - 11:52 AM CephFS Bug #57677 (Pending Backport): qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 07:28 AM RADOS Bug #56289 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
- 07:28 AM RADOS Bug #54710 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
- 07:28 AM RADOS Bug #54709 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
- 07:21 AM RADOS Bug #54708 (Duplicate): crash: void PeeringState::check_past_interval_bounds() const: abort
- 07:02 AM RADOS Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start
- Radoslaw Zarzynski wrote:
> A note from the bug scrub: work in progress.
WIP: https://gist.github.com/Matan-B/ca5...
- 05:37 AM Dashboard Bug #56971: mgr/dashboard: inactive to active dashboard redirect to ip instead of hostname
- Duplicate of: https://tracker.ceph.com/issues/56699
- 04:58 AM mgr Bug #57700 (Resolved): mgr/telemetry: ValueError: too many values to unpack (expected 2) in get_m...
- 04:18 AM CephFS Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dhairya,
Were you able to RCA this?
- 04:16 AM Bug #57708: Segmentation Fault in librados2
- Any suggestions on whether this was fixed in a later version of librados2 (>14.2.11-0), or what can be done to avoid such segfau...
- 04:09 AM Bug #57708 (New): Segmentation Fault in librados2
- The segfault below happens with librados2-14.2.11-0,
Program terminated with signal 11, Segmentation fault.
(g...
- 02:47 AM RADOS Bug #57532: Notice discrepancies in the performance of mclock built-in profiles
- Hi Bharath, could you also add the mClock configuration values from the osd config show command here? One way to pull them is sketched below.
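A sketch in Python around the CLI (assumes an admin keyring on the host; osd.0 is an example id):

    import subprocess

    # Dump the running config of one OSD and keep only the mClock-related
    # options, e.g. osd_mclock_profile and the osd_mclock_max_capacity_iops_*
    # settings.
    out = subprocess.run(
        ["ceph", "config", "show", "osd.0"],
        check=True, capture_output=True, text=True,
    ).stdout
    print("\n".join(line for line in out.splitlines() if "mclock" in line))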
- 02:21 AM Bug #57707 (New): ceph build failure on Rocky Linux release 8.6 (Green Obsidian)
- ceph build fails with the following error,
Error: src/app/shared/components/notifications-sidebar/notifications-sidebar....
- 01:20 AM rgw Bug #57706 (Can't reproduce): When creating a new user, if the 'uid' is not provided, error repor...
- When creating a new user, if the 'uid' is not provided, an error reporting 'Permission denied' is returned; we think it is reason...
09/28/2022
- 08:25 PM rbd Bug #57605 (Fix Under Review): rbd/test_librbd_python.sh: cluster [WRN] pool 'test-librbd-smithi1...
- 08:00 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- One note I see in the rook documentation:
"Notably, ceph-volume will not use a device of the same device class (HDD,...
- 07:32 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Hi Igor,
Thanks for the details. That makes sense and helps me feel much more comfortable that the hack I put in p...
- 09:07 AM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Kevin Fox wrote:
> I can find no evidence that the cluster got full. I've seen it occasionally go up a little past 8...
- 06:03 PM RADOS Bug #53806 (New): unnecessarily long laggy PG state
- Reopening b/c the original fix had to be reverted: https://github.com/ceph/ceph/pull/44499#issuecomment-1247315820.
- 05:54 PM RADOS Bug #57618: rados/test.sh hang and pkilled (LibRadosWatchNotifyEC.WatchNotify)
- Note from a scrub: might be worth talking about.
- 05:51 PM RADOS Bug #57650 (In Progress): mon-stretch: reweighting an osd to a big number, then back to original ...
- 05:51 PM RADOS Bug #57678 (Fix Under Review): Mon fail to send pending metadata through MMgrUpdate after an upgr...
- 05:50 PM RADOS Bug #57698: osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
- What are the symptoms? How bad is it? A hang, maybe? I'm asking to understand the impact.
- 05:48 PM RADOS Bug #57698 (In Progress): osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
- IIRC Ronen has mentioned the scrub code interchanges @get_acting_set()@ and @get_acting_recovery_backfill()@.
- 01:40 PM RADOS Bug #57698 (Resolved): osd/scrub: "scrub a chunk" requests are sent to the wrong set of replicas
- The Primary registers its intent to scrub with the 'get_actingset()', as it should.
But the actual chunk requests ar... - 05:45 PM RADOS Bug #57699 (In Progress): slow osd boot with valgrind (reached maximum tries (50) after waiting f...
- Marking WIP per our morning talk.
- 01:58 PM RADOS Bug #57699 (Resolved): slow osd boot with valgrind (reached maximum tries (50) after waiting for ...
- /a/yuriw-2022-09-23_20:38:59-rados-wip-yuri6-testing-2022-09-23-1008-quincy-distro-default-smithi/7042504 ...
- 05:44 PM RADOS Backport #57705 (Resolved): pacific: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when redu...
- 05:44 PM RADOS Backport #57704 (Resolved): quincy: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reduc...
- 05:43 PM RADOS Bug #57529 (In Progress): mclock backfill is getting higher priority than WPQ
- Marking as WIP as IIRC Sridhar was talking about this issue during core standups.
- 05:42 PM RADOS Bug #57573 (In Progress): intrusive_lru leaking memory when
- As I understood:
1. @evict()@ intends to not free too much (which makes sense).
2. The dtor reuses @evict()@ for c...
- 05:39 PM RADOS Bug #49689: osd/PeeringState.cc: ceph_abort_msg("past_interval start interval mismatch") start
- A note from the bug scrub: work in progress.
- 05:37 PM mgr Bug #57700 (Fix Under Review): mgr/telemetry: ValueError: too many values to unpack (expected 2) ...
- 03:21 PM mgr Bug #57700: mgr/telemetry: ValueError: too many values to unpack (expected 2) in get_mempool
- Telemetry expects the daemon to be formatted like "mds.a", where there is 1 value before the '.', and one value after...
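A minimal reproduction of that unpack failure, with a made-up daemon name; splitting with maxsplit=1 is only one possible direction for the fix:

    daemon = "osd.foo.bar"  # illustrative daemon name containing an extra '.'

    try:
        daemon_type, daemon_id = daemon.split(".")  # expects exactly two fields
    except ValueError as e:
        print(e)  # too many values to unpack (expected 2)

    # Splitting only on the first '.' keeps the rest of the id intact.
    daemon_type, daemon_id = daemon.split(".", 1)
    print(daemon_type, daemon_id)  # osd foo.bar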
- 02:45 PM mgr Bug #57700 (Resolved): mgr/telemetry: ValueError: too many values to unpack (expected 2) in get_m...
- A ValueError occurs when generating mempool stats in the perf channel:
This is 17.2.4 RC: ceph version 17.2.3-770-...
- 05:37 PM devops Backport #57685 (Duplicate): quincy: build: LTO can cause false positives in cmake tests resultin...
- 05:35 PM RADOS Bug #50089 (Pending Backport): mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing n...
- 11:06 AM RADOS Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...
- ...
- 11:03 AM RADOS Bug #50089: mon/MonMap.h: FAILED ceph_assert(m < ranks.size()) when reducing number of monitors i...
- I am seeing the same crash in version : ceph version 16.2.10 and just noticed that PR linked in this thread is merged...
- 05:06 PM rgw Bug #57562: multisite replication issue on Quincy
- cosbench workload file has been uploaded
- 02:06 PM rgw Bug #57562: multisite replication issue on Quincy
- We would like to know the workload in detail.
It would be great if the cosbench.xml file for the workload that ran for ...
- 04:00 PM Linux kernel client Bug #57703 (Duplicate): unable to handle page fault for address and system lockup after MDS failover
- We have a four-node Ceph cluster (Ceph 17.2.1, Ubuntu 20.04, kernel 5.15.0-48-generic #54~20.04.1-Ubuntu), managed by...
- 03:47 PM Bug #57385: OSDs “slow ops” with multi-hour delay.
- The VM cluster was left to its own devices for a while. It became stuck recovering with no errors, or even slow io a...
- 03:17 PM mgr Bug #57460: Json formatted ceph pg dump hangs on large clusters
- Further looking at `ceph pg dump` to get the total PGs and total heartbeat peers in the user environment where this w...
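For reference, a sketch of pulling just the PG total out of the JSON dump instead of eyeballing the whole thing; the 'pg_map'/'pg_stats' field names are assumptions based on recent releases, and this still pays the full cost of the dump that this tracker is about:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "pg", "dump", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    pg_stats = json.loads(out)["pg_map"]["pg_stats"]
    print("total PGs:", len(pg_stats))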
- 03:02 PM rgw Backport #57702 (Resolved): quincy: rgw/crypt/barbican: Cannot create secret
- 03:02 PM rgw Backport #57701 (Resolved): pacific: rgw/crypt/barbican: Cannot create secret
- 02:53 PM rgw Bug #51772 (Triaged): tempest failures: test_create_container_with_remove_metadata_key/value
- 02:53 PM rgw Bug #51772 (New): tempest failures: test_create_container_with_remove_metadata_key/value
- 02:52 PM rgw Bug #54247 (Pending Backport): rgw/crypt/barbican: Cannot create secret
- 02:37 PM Bug #56725: open file hang using vim with ceph-fuse client
- I'm also experiencing this issue. I'm running "ceph-fuse version 17.2.0" with a "ceph version 17.2.3" cluster. Do you...
- 01:25 PM mgr Feature #57697 (New): provide capability to choose between Prometheus and OpenTelemetry for mgr me...
- Issue:
currently ceph mgr exports its metrics in Prometheus format, making them unable to be parsed and handled by o...
- 01:10 PM RADOS Backport #57696 (Resolved): quincy: ceph log last command fails to log by verbosity level
- https://github.com/ceph/ceph/pull/50407
- 01:05 PM Orchestrator Bug #57303: rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api/searc...
- Laura Flores wrote:
> /a/yuriw-2022-09-20_17:39:55-rados-wip-yuri5-testing-2022-09-19-1007-pacific-distro-default-sm...
- 01:04 PM RADOS Feature #52424 (Resolved): [RFE] Limit slow request details to mgr log
- 01:03 PM RADOS Bug #57340 (Pending Backport): ceph log last command fails to log by verbosity level
- 01:02 PM Orchestrator Bug #57695 (Resolved): cephadm: upgrade tests fail with "Upgrade: Paused due to UPGRADE_BAD_TARGE...
- it looks like this is because we now officially have v18 in main and the test still starts from Octopus (v15.2.0 spec...
- 11:08 AM Orchestrator Feature #51618 (Rejected): rgw_frontends configuration in config database not persistent
- 11:08 AM Orchestrator Feature #51618: rgw_frontends configuration in config database not persistent
- The issue sounds like it is solved by the proposal; the proposed feature should be opened as a separate tracker.
- 10:59 AM Orchestrator Bug #54235 (Closed): Filtered out host ceph03: does not belong to mon public_network
- Closing as this issue is a duplicate of an already solved bug. Feel free to reopen if you think it's not.
- 10:57 AM Orchestrator Bug #54235 (Duplicate): Filtered out host ceph03: does not belong to mon public_network
- 10:32 AM Orchestrator Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- I've already provided a detailed explanation of the root cause of this issue. Please consider upgrading the needed sw...
- 10:28 AM Orchestrator Bug #54142 (Resolved): quincy cephadm-purge-cluster needs work
- I'm not able to reproduce these issues with the code on the main branch anymore. Please, feel free to re-open if you ...
- 09:32 AM mgr Bug #57694 (Rejected): Exports not created correctly when using ceph_argparse
- When issuing the command nfs export apply through the ceph_argparse.py library the execution finishes successfully bu...
- 09:21 AM bluestore Backport #57688 (In Progress): quincy: unable to read osd superblock on AArch64 with page size 64K
- https://github.com/ceph/ceph/pull/48279
- 09:19 AM bluestore Backport #57687 (In Progress): pacific: unable to read osd superblock on AArch64 with page size 64K
- https://github.com/ceph/ceph/pull/48278
- 08:30 AM crimson Bug #57693 (Resolved): Messenger test failed against test_messenger_peer.cc
- ...
- 08:22 AM Dashboard Backport #57681 (In Progress): pacific: mgr/dashboard: Add text to empty life expectancy column
- 07:38 AM Dashboard Backport #57692 (Resolved): quincy: mgr/dashboard: permission denied when creating a NFS export
- https://github.com/ceph/ceph/pull/48315
- 07:38 AM Dashboard Backport #57691 (Resolved): pacific: mgr/dashboard: permission denied when creating a NFS export
- https://github.com/ceph/ceph/pull/48316
- 07:26 AM Dashboard Bug #48686 (Pending Backport): mgr/dashboard: permission denied when creating a NFS export
- 07:23 AM Dashboard Feature #57690 (Pending Backport): mgr/dashboard: add JSON driven forms
- https://github.com/hamzahamidi/ajsf
- 02:44 AM crimson Bug #57549: Crimson: Alienstore does not work after ceph enables C++20
- Removing --bluestore-devs /dev/nvme7n1 or using a debug build, still the same problem.
I find a different place comp... - 02:27 AM rgw Backport #57559: quincy: data corruption due to network jitter
- please link this Backport tracker issue with GitHub PR https://github.com/ceph/ceph/pull/48273
ceph-backport.sh versi...
09/27/2022
- 06:04 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- Thank you, Igor. I think Kevin answered with as much background as he had on the issue.
- 05:38 PM cephsqlite Backport #57184 (Resolved): quincy: crash: pthread_mutex_lock()
- 05:36 PM Orchestrator Bug #57303 (New): rados/cephadm: Failed to fetch package version from https://shaman.ceph.com/api...
- /a/yuriw-2022-09-20_17:39:55-rados-wip-yuri5-testing-2022-09-19-1007-pacific-distro-default-smithi/7039204
@Adam h... - 05:27 PM Orchestrator Bug #49287: podman: setting cgroup config for procHooks process caused: Unit libpod-$hash.scope n...
- /a/yuriw-2022-09-20_17:39:55-rados-wip-yuri5-testing-2022-09-19-1007-pacific-distro-default-smithi/7039249
- 05:22 PM Orchestrator Bug #57689 (New): cephadm/smoke-roleless: RuntimeError: dictionary changed size during iteration
- /a/yuriw-2022-09-20_17:39:55-rados-wip-yuri5-testing-2022-09-19-1007-pacific-distro-default-smithi/7039167...
- 05:10 PM Orchestrator Bug #57255: rados/cephadm/mds_upgrade_sequence, pacific : cephadm [ERR] Upgrade: Paused due to UP...
- /a/yuriw-2022-09-20_17:39:55-rados-wip-yuri5-testing-2022-09-19-1007-pacific-distro-default-smithi/7039447
- 05:02 PM Bug #57368 (Duplicate): The CustomResourceDefinition "installations.operator.tigera.io" is invali...
- 04:58 PM Orchestrator Bug #57268: rook: The CustomResourceDefinition "installations.operator.tigera.io" is invalid
- /a/yuriw-2022-09-20_17:39:55-rados-wip-yuri5-testing-2022-09-19-1007-pacific-distro-default-smithi/7039330
- 04:34 PM bluestore Backport #57688 (Resolved): quincy: unable to read osd superblock on AArch64 with page size 64K
- 04:34 PM bluestore Backport #57687 (Resolved): pacific: unable to read osd superblock on AArch64 with page size 64K
- 04:30 PM mgr Bug #57460: Json formatted ceph pg dump hangs on large clusters
- Some further analysis...
I've deployed Ceph clusters at various OSD counts (from 3 to 20) and looked at how
the o... - 04:18 PM bluestore Bug #57537 (Pending Backport): unable to read osd superblock on AArch64 with page size 64K
- 04:10 PM Orchestrator Bug #56951: rook/smoke: Updating cephclusters/rook-ceph is forbidden
- /a/yuriw-2022-09-23_20:38:59-rados-wip-yuri6-testing-2022-09-23-1008-quincy-distro-default-smithi/7042736
- 04:08 PM Orchestrator Bug #57311: rook: ensure CRDs are installed first
- /a/yuriw-2022-09-23_20:38:59-rados-wip-yuri6-testing-2022-09-23-1008-quincy-distro-default-smithi/7042656
- 03:06 PM Orchestrator Bug #57311: rook: ensure CRDs are installed first
- /a/yuriw-2022-09-21_21:00:57-rados-wip-yuri3-testing-2022-09-21-0921-distro-default-smithi/7040424
- 04:07 PM Dashboard Bug #57386: cephadm/test_dashboard_e2e.sh: Expected to find content: '/^foo$/' within the selecto...
- /a/yuriw-2022-09-23_20:38:59-rados-wip-yuri6-testing-2022-09-23-1008-quincy-distro-default-smithi/7042734
- 04:01 PM devops Backport #57684 (Duplicate): quincy: undefined reference to "__atomic_load_16" on s390x
- 04:00 PM devops Backport #57684: quincy: undefined reference to "__atomic_load_16" on s390x
- We'll track this in #57622, since that one is older and this involves multiple PRs.
- 03:59 PM devops Backport #57684 (In Progress): quincy: undefined reference to "__atomic_load_16" on s390x
- 03:01 PM devops Backport #57684 (Duplicate): quincy: undefined reference to "__atomic_load_16" on s390x
- https://github.com/ceph/ceph/pull/48263
- 03:41 PM rbd Bug #57605 (In Progress): rbd/test_librbd_python.sh: cluster [WRN] pool 'test-librbd-smithi137-24...
- 03:28 PM Linux kernel client Bug #57686 (Duplicate): general protection fault and CephFS kernel client hangs after MDS failover
- We have a four-node Ceph cluster (Ceph 17.2.1, Ubuntu 20.04, kernel 5.15.0-48-generic #54~20.04.1-Ubuntu), managed by...
- 03:01 PM devops Backport #57685 (Duplicate): quincy: build: LTO can cause false positives in cmake tests resultin...
- 03:00 PM devops Bug #56492: undefined reference to "__atomic_load_16" on s390x
- quincy backport in https://github.com/ceph/ceph/pull/48263
- 02:59 PM devops Bug #56492 (Pending Backport): undefined reference to "__atomic_load_16" on s390x
- 02:59 PM devops Bug #54514 (Pending Backport): build: LTO can cause false positives in cmake tests resulting in b...
- quincy backport in https://github.com/ceph/ceph/pull/48263
- 02:13 PM CephFS Backport #57393: pacific: client: abort the client daemons when we couldn't invalidate the dentry...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48109
merged - 02:13 PM CephFS Backport #57395: pacific: crash: int Client::_do_remount(bool): abort
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/48108
merged - 02:12 PM CephFS Backport #57282: pacific: cephfs-top:addition of filesystem menu(improving GUI)
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47998
merged - 01:15 PM rgw Bug #41230: multisite: better spread multisite sync load over cooperating gateways
- Casey Bodley wrote:
> each radosgw tries to lock every shard of each multisite log for processing, and can hold the ... - 01:02 PM RADOS Bug #17170 (New): mon/monclient: update "unable to obtain rotating service keys when osd init" to...
- 01:02 PM RADOS Bug #17170 (Closed): mon/monclient: update "unable to obtain rotating service keys when osd init"...
- This report can technically have other causes, but it's just always because the OSDs are too far out of clock sync wi...
- 12:54 PM CephFS Bug #57682 (Triaged): client: ERROR: test_reconnect_after_blocklisted
- ...
- 12:37 PM CephFS Bug #53573: qa: test new clients against older Ceph clusters
- Dhairya has started work on this.
- 10:05 AM Dashboard Backport #57680 (In Progress): quincy: mgr/dashboard: Add text to empty life expectancy column
- 09:50 AM Dashboard Backport #57680 (Resolved): quincy: mgr/dashboard: Add text to empty life expectancy column
- https://github.com/ceph/ceph/pull/48271
- 09:50 AM Dashboard Backport #57681 (Resolved): pacific: mgr/dashboard: Add text to empty life expectancy column
- https://github.com/ceph/ceph/pull/48276
- 09:48 AM Dashboard Cleanup #43116 (Pending Backport): mgr/dashboard: Add text to empty life expectancy column
- 09:48 AM Dashboard Cleanup #43116 (Resolved): mgr/dashboard: Add text to empty life expectancy column
- 08:11 AM Dashboard Cleanup #43116 (Pending Backport): mgr/dashboard: Add text to empty life expectancy column
- 08:10 AM Dashboard Cleanup #43116 (Fix Under Review): mgr/dashboard: Add text to empty life expectancy column
- 05:29 AM Dashboard Cleanup #43116 (Pending Backport): mgr/dashboard: Add text to empty life expectancy column
- 09:35 AM CephFS Bug #57611 (Duplicate): qa: failure during qa/workunits/fs/snaps/snaptest-git-ceph.sh
- Duplicate of #54462
- 09:34 AM CephFS Bug #57612 (Duplicate): qa: segmentation fault during qa/workunits/libcephfs/test.sh
- Duplicate of #57206
- 09:26 AM CephFS Backport #56713 (Resolved): quincy: mds: standby-replay daemon always removed in MDSMonitor::prep...
- Backport Bot wrote:
> https://github.com/ceph/ceph/pull/47281
merged. - 09:05 AM CephFS Backport #57370 (Resolved): quincy: standby-replay mds is removed from MDSMap unexpectedly
- 07:51 AM CephFS Backport #57261 (In Progress): pacific: standby-replay mds is removed from MDSMap unexpectedly
- 07:42 AM CephFS Backport #57194 (In Progress): pacific: ceph pacific fails to perform fs/mirror test
- 07:41 AM CephFS Backport #57193 (In Progress): quincy: ceph pacific fails to perform fs/mirror test
- 06:24 AM CephFS Backport #57193: quincy: ceph pacific fails to perform fs/mirror test
- Rishabh, I'm taking this one.
- 07:37 AM rgw Bug #57679 (Need More Info): RGW/swift: Lost data if copy SLO-object and delete original
- 1. We use Swift/API and Large composite objects. Specifically Static Large Objects.
https://docs.openstack.org/swi... - 06:25 AM CephFS Backport #56978 (Resolved): pacific: mgr/volumes: Subvolume creation failed on FIPs enabled system
- 03:12 AM RADOS Bug #57678 (Resolved): Mon fail to send pending metadata through MMgrUpdate after an upgrade resu...
- The prometheus metrics still show an older ceph version for the upgraded mon. This issue is observed if we upgrade cluste...
- 02:21 AM CephFS Bug #57677 (Fix Under Review): qa: "1 MDSs behind on trimming (MDS_TRIM)"
- 02:00 AM CephFS Bug #57677 (Resolved): qa: "1 MDSs behind on trimming (MDS_TRIM)"
- /ceph/teuthology-archive/pdonnell-2022-09-26_19:11:10-fs-wip-pdonnell-testing-20220923.171109-distro-default-smithi/7...
- 01:32 AM Linux kernel client Bug #57656: [testing] dbench: write failed on handle 10009 (Resource temporarily unavailable)
- /ceph/teuthology-archive/pdonnell-2022-09-26_19:11:10-fs-wip-pdonnell-testing-20220923.171109-distro-default-smithi/7...
- 12:30 AM CephFS Bug #57676 (Triaged): qa: error during scrub thrashing: rank damage found: {'backtrace'}
- Backtrace scrub failures are back with damage checking in fwd_scrub.py, introduced by
https://github.com/ceph/ceph...
09/26/2022
- 10:29 PM Orchestrator Bug #57651 (Fix Under Review): cephadm: serve loop just loops forever if migration_current is too...
- 08:49 PM devops Backport #57622: quincy: cmake: do not use GCC extension when detecting 16-byte atomic op
- https://github.com/ceph/ceph/pull/48263
- 07:46 PM CephFS Backport #57671 (In Progress): pacific: mds: damage table only stores one dentry per dirfrag
- 07:45 PM CephFS Backport #57670 (In Progress): quincy: mds: damage table only stores one dentry per dirfrag
- 06:33 PM Orchestrator Bug #57675 (Resolved): mgr/cephadm: upgrades with 3 or more mgr daemons can get stuck in endless ...
- When daemons are upgraded by cephadm, there are two criteria taken into
account for a daemon to be considered totall...
- 05:37 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- I can find no evidence that the cluster got full. I've seen it occasionally go up a little past 85 (usually if I'm re...
- 05:34 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- I switched one of the pods to /bin/bash and tried various things to fsck the osds. Every time it hit the point where ...
- 05:06 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- The cluster involved was provisioned with ceph:v14.2.4-20190917 in Oct 2019. It's been running nautilus until last month....
- 09:12 AM bluestore Bug #57672 (Need More Info): SSD OSD won't start after high fragmentation score!
- 09:12 AM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- @Vikhyat - what Ceph release are we talking about?
- 05:29 PM CephFS Bug #57657 (Fix Under Review): mds: scrub locates mismatch between child accounted_rstats and sel...
- 05:26 PM CephFS Bug #57657: mds: scrub locates mismatch between child accounted_rstats and self rstats
- During standup I was thinking of something else. This test deliberately creates this kind of damage by manually delet...
- 12:48 PM CephFS Bug #57657 (Triaged): mds: scrub locates mismatch between child accounted_rstats and self rstats
- 05:08 PM CephFS Bug #57641: Ceph FS fscrypt clones missing fscrypt metadata
- Hi Marcel,
> Copying the ceph.fscrypt.auth xattr in the mgr/volumes async clone seems to work:
> https://github.c... - 02:44 PM RADOS Bug #51688 (In Progress): "stuck peering for" warning is misleading
- 02:44 PM RADOS Bug #51688: "stuck peering for" warning is misleading
- Shreyansh Sancheti is working on this bug.
- 02:18 PM CephFS Bug #57674: fuse mount crashes the standby MDSes
- Jos said he could take more of a look at this.
- 12:45 PM CephFS Bug #57674 (Triaged): fuse mount crashes the standby MDSes
- 10:51 AM CephFS Bug #57674 (Closed): fuse mount crashes the standby MDSes
- Fuse-mounting the fs on a large number of clients crashes the standby MDSes and hangs df. Thus 2000 fuse clients cannot be achie...
- 01:11 PM RADOS Backport #57258 (In Progress): pacific: Assert in Ceph messenger
- 01:07 PM CephFS Bug #57531: Mutipule zombie processes, and more and more
- ... or the daemon crashes are a different issue than the zombie processes (ceph-mds??).
- 01:06 PM CephFS Bug #57531: Mutipule zombie processes, and more and more
- Are you saying the zombie processes are ceph-osd daemons?
- 01:04 PM CephFS Bug #57610: qa: timeout during unwinding of qa/workunits/suites/fsstress.sh
- Venky Shankar wrote:
> Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects... - 01:03 PM CephFS Bug #57610: qa: timeout during unwinding of qa/workunits/suites/fsstress.sh
- Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects/cephfs/wiki/Main (for f...
- 01:03 PM CephFS Bug #57611: qa: failure during qa/workunits/fs/snaps/snaptest-git-ceph.sh
- Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects/cephfs/wiki/Main (for f...
- 01:03 PM CephFS Bug #57612: qa: segmentation fault during qa/workunits/libcephfs/test.sh
- Milind, please mark this as a duplicate. Run wiki is here - https://tracker.ceph.com/projects/cephfs/wiki/Main (for f...
- 01:02 PM CephFS Documentation #57115: Explanation for cache pressure
- Eugen Block wrote:
> Venky Shankar wrote:
> > Eugen, thanks for the detailed explanation. It would be immensely hel...
- 12:57 PM CephFS Bug #57594 (Triaged): pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan....
- Jos, PTAL.
- 12:51 PM CephFS Bug #57655 (Triaged): qa: fs:mixed-clients kernel_untar_build failure
- 12:29 PM RADOS Backport #56722 (In Progress): pacific: osd thread deadlock
- 12:07 PM CephFS Backport #57665 (In Progress): pacific: Do not abort MDS on unknown messages
- https://github.com/ceph/ceph/pull/48253
- 11:41 AM CephFS Backport #57666 (In Progress): quincy: Do not abort MDS on unknown messages
- https://github.com/ceph/ceph/pull/48252
- 11:30 AM CephFS Bug #57359 (Fix Under Review): mds/Server: -ve values cause unexpected client eviction while hand...
- 10:27 AM crimson Bug #57578: crimson: assertion failure in _do_transaction_step()
- The correct order is:...
- 10:12 AM CephFS Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- Milind Changire wrote:
> This doesn't crash on my local ubuntu focal vstart cluster.
> The stack trace points to a ...
- 09:43 AM Backport #57593 (In Progress): pacific: STORE==USED in ceph df
- https://github.com/ceph/ceph/pull/48250
- 09:41 AM Backport #57592 (In Progress): quincy: STORE==USED in ceph df
- https://github.com/ceph/ceph/pull/48249
- 09:33 AM Bug #54347 (Duplicate): ceph df stats break when there is an OSD with CRUSH weight == 0
- 09:20 AM RADOS Backport #55633: octopus: ceph-osd takes all memory before oom on boot
- Konstantin Shalygin wrote:
> Igor, seems when the `version` field is not set it's possible to change issue `status`
>
...
- 08:46 AM bluestore Bug #57292 (Fix Under Review): Failed to start OSD when upgrading from nautilus to pacific with b...
- 08:06 AM CephFS Documentation #57673 (Resolved): doc: document the relevance of mds_namespace mount option
- Users get lost trying to mount the file-system with the old syntax and find no mention of the 'mds_namespace' mount option in...
- 08:01 AM Dashboard Bug #57668 (Resolved): exporter: don't skip loop if pid path is empty
- 08:01 AM Dashboard Backport #57669 (Resolved): quincy: exporter: don't skip loop if pid path is empty
- 02:53 AM Orchestrator Backport #57638 (In Progress): pacific: applying osd service spec with size filter fails if there...
- 02:52 AM Orchestrator Backport #57637 (In Progress): quincy: applying osd service spec with size filter fails if there'...
- 01:00 AM CephFS Bug #57210 (Fix Under Review): NFS client unable to see newly created files when listing director...
09/25/2022
- 07:40 AM rgw Backport #56407 (In Progress): pacific: rgw gc object leak when gc omap set entry failed with a l...
- 07:06 AM rgw Backport #56406 (In Progress): quincy: rgw gc object leak when gc omap set entry failed with a la...
09/24/2022
- 04:41 PM Bug #57385: OSDs “slow ops” with multi-hour delay.
- I think WE have found it, THANKS DISCORD! Well, where the problem is, at least.
I have been able to work several ot...
- 08:08 AM RADOS Bug #56495 (Resolved): Log at 1 when Throttle::get_or_fail() fails
- 08:08 AM RADOS Backport #56641 (Resolved): quincy: Log at 1 when Throttle::get_or_fail() fails
- 08:07 AM RADOS Backport #56642 (Resolved): pacific: Log at 1 when Throttle::get_or_fail() fails
- 08:04 AM RADOS Backport #57257 (Resolved): quincy: Assert in Ceph messenger
- 08:03 AM RADOS Backport #56723 (Resolved): quincy: osd thread deadlock
- 07:58 AM RADOS Backport #55633: octopus: ceph-osd takes all memory before oom on boot
- Igor, seems when the `version` field is not set it's possible to change issue `status`
Radoslaw, what is the current s...
- 07:57 AM RADOS Backport #55633 (In Progress): octopus: ceph-osd takes all memory before oom on boot
- 07:56 AM RADOS Backport #55631 (Resolved): pacific: ceph-osd takes all memory before oom on boot
- Now PR merged, set resolved
09/23/2022
- 06:49 PM rgw Bug #56609 (Closed): performance issues causing teuthology failures: RGWWatcher::handle_error (10...
- 06:49 PM rgw Bug #56609 (Resolved): performance issues causing teuthology failures: RGWWatcher::handle_error (...
- Something has been changed (probably in the OSD) over the last few months to make this issue go away. I'm closing thi...
- 06:10 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- User question:...
- 06:09 PM bluestore Bug #57672: SSD OSD won't start after high fragmentation score!
- The user was not able to capture any debug data because it hit the cluster so hard that it went down.
- 06:07 PM bluestore Bug #57672 (Duplicate): SSD OSD won't start after high fragmentation score!
- One of the rook upstream users reported this issue in the upstream rook channel!...
- 06:03 PM rgw Backport #55508 (Rejected): octopus: rgw: remove entries from bucket index shards directly in lim...
- 05:53 PM rgw Backport #57648 (In Progress): quincy: rgw: fix bool/int logic error when calling get_obj_head_ioctx
- 05:49 PM rgw Backport #57649 (In Progress): pacific: rgw: fix bool/int logic error when calling get_obj_head_i...
- 05:44 PM rgw Backport #57429 (In Progress): pacific: key is used after move in RGWGetObj_ObjStore_S3::override...
- 05:42 PM rgw Backport #57430 (In Progress): quincy: key is used after move in RGWGetObj_ObjStore_S3::override_...
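For context on the bug class in these backport titles, a minimal use-after-move sketch (names are hypothetical, not the actual RGWGetObj_ObjStore_S3 code):

#include <iostream>
#include <string>
#include <utility>

int main() {
  std::string key = "object-key";
  std::string dest = std::move(key);  // `key` is now valid but unspecified
  // Bug class fixed by the change: reading `key` after the move, e.g.
  // std::cout << key, relies on that unspecified state.
  std::cout << dest << "\n";          // read from the new owner instead
  return 0;
}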
- 05:19 PM CephFS Bug #57411 (Duplicate): mutiple mds crash seen while running db workloads with regular snapshots ...
- Apparently this one is known.
- 01:36 PM Dashboard Backport #57669 (In Progress): quincy: exporter: don't skip loop if pid path is empty
- 10:40 AM Dashboard Backport #57669 (Resolved): quincy: exporter: don't skip loop if pid path is empty
- https://github.com/ceph/ceph/pull/48225
- 12:20 PM CephFS Bug #57206: ceph_test_libcephfs_reclaim crashes during test
- This doesn't crash on my local ubuntu focal vstart cluster.
The stack trace points to a boost::lexical_cast<>
Hyp...
- 10:41 AM CephFS Backport #57671 (Resolved): pacific: mds: damage table only stores one dentry per dirfrag
- https://github.com/ceph/ceph/pull/48262
- 10:40 AM CephFS Backport #57670 (Resolved): quincy: mds: damage table only stores one dentry per dirfrag
- https://github.com/ceph/ceph/pull/48261
- 10:30 AM CephFS Bug #57249 (Pending Backport): mds: damage table only stores one dentry per dirfrag
- 10:28 AM Dashboard Bug #57668 (Resolved): exporter: don't skip loop if pid path is empty
- The pid file config comes back empty from config dump, which prevents metrics from being added.
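A minimal sketch of the failure mode behind these exporter fixes (the helper is hypothetical, not the exporter's actual code): std::stoi on an empty string throws std::invalid_argument, so an empty pid string has to be skipped rather than converted blindly.

#include <iostream>
#include <string>

int parse_pid_or_default(const std::string& pid_str, int fallback = -1) {
  if (pid_str.empty())
    return fallback;             // skip conversion entirely for empty input
  try {
    return std::stoi(pid_str);
  } catch (const std::exception&) {
    return fallback;             // non-numeric input handled the same way
  }
}

int main() {
  std::cout << parse_pid_or_default("") << "\n";      // -1, no throw
  std::cout << parse_pid_or_default("1234") << "\n";  // 1234
  return 0;
}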
- 10:25 AM Dashboard Backport #57633 (Resolved): quincy: exporter: avoid stoi for empty pid_str mgr/dashboard: short_...
- 10:04 AM Dashboard Feature #57667 (New): mgr/dashboard: add a filtering to the notification sidebar
- The notification sidebar is cluttered with all the notifications coming from the different sources and it's impossible...
- 07:22 AM CephFS Backport #55749 (In Progress): quincy: snap_schedule: remove subvolume(-group) interfaces
- 07:20 AM CephFS Backport #55748 (In Progress): pacific: snap_schedule: remove subvolume(-group) interfaces
- 07:17 AM CephFS Backport #57666 (Resolved): quincy: Do not abort MDS on unknown messages
- 07:17 AM CephFS Backport #57665 (Resolved): pacific: Do not abort MDS on unknown messages
- https://github.com/ceph/ceph/pull/48253
- 07:16 AM CephFS Bug #56522 (Pending Backport): Do not abort MDS on unknown messages
- 07:14 AM Dashboard Backport #57663 (In Progress): pacific: mgr/dashboard: improve dashboard redirect address
- 06:57 AM Dashboard Backport #57663 (Resolved): pacific: mgr/dashboard: improve dashboard redirect address
- https://github.com/ceph/ceph/pull/48220
- 07:11 AM Dashboard Backport #57661 (In Progress): quincy: mgr/dashboard: improve dashboard redirect address
- 06:57 AM Dashboard Backport #57661 (Resolved): quincy: mgr/dashboard: improve dashboard redirect address
- https://github.com/ceph/ceph/pull/48219
- 06:54 AM Dashboard Feature #56699 (Pending Backport): mgr/dashboard: improve dashboard redirect address
- 06:42 AM Dashboard Backport #57514 (Resolved): quincy: mgr/dashboard: pre-select osd form filters
- 06:03 AM rgw Bug #56992: rgw_op.cc:Deleting a non-existent object also generates a delete marker
- This is my PR: https://github.com/ceph/ceph/pull/47526
- 05:31 AM Dashboard Feature #57457 (Resolved): mgr/dashboard: Add a Silence button shortcut to alert notifications
- 05:31 AM Dashboard Backport #57512 (Resolved): quincy: mgr/dashboard: Add a Silence button shortcut to alert notific...
- 05:30 AM Dashboard Feature #37327 (Resolved): mgr/dashboard: Add details to the modal which displays the `safe-to-de...
- 05:30 AM Dashboard Backport #57584 (Resolved): pacific: mgr/dashboard: Add details to the modal which displays the `...
- 05:28 AM Dashboard Backport #57583 (Resolved): quincy: mgr/dashboard: Add details to the modal which displays the `s...
- 03:14 AM rgw Backport #57659 (Rejected): pacific: fail to set requestPayment in slave zone
- 03:14 AM rgw Backport #57658 (New): quincy: fail to set requestPayment in slave zone
- 03:13 AM rgw Bug #57468 (Pending Backport): fail to set requestPayment in slave zone
- 02:05 AM CephFS Bug #57657 (Resolved): mds: scrub locates mismatch between child accounted_rstats and self rstats
- ...
- 01:55 AM Linux kernel client Bug #57656 (Need More Info): [testing] dbench: write failed on handle 10009 (Resource temporaril...
- When testing with my postgres changes:
https://github.com/ceph/ceph/labels/wip-pdonnell-testing2
I've observed ...
- 01:03 AM CephFS Bug #57655 (Pending Backport): qa: fs:mixed-clients kernel_untar_build failure
- ...
09/22/2022
- 09:47 PM crimson Bug #57654 (New): crimson/osd: check blocked peering ops when we get a new map and cancel any for...
- 08:30 PM RADOS Backport #56642: pacific: Log at 1 when Throttle::get_or_fail() fails
- Radoslaw Zarzynski wrote:
> https://github.com/ceph/ceph/pull/47764
merged
- 08:10 PM crimson Bug #57653 (New): crimson/os: remove CollectionRef from FuturizedStore interface
- It seems to serve no purpose other than to impose a bunch of complexity to maintain the refcount for FuturizedCollect...
- 07:21 PM crimson Bug #57629: crimson: segfault during mkfs
- Tried https://github.com/ceph/ceph/pull/48203, slightly different backtrace...
- 01:35 AM crimson Bug #57629: crimson: segfault during mkfs
- More complete backtrace from gdb:...
- 12:07 AM crimson Bug #57629 (New): crimson: segfault during mkfs
- Release build: ./do_cmake.sh -DWITH_SEASTAR=ON -DWITH_MGR_DASHBOARD_FRONTEND=OFF -DWITH_CCACHE=ON -DCMAKE_BUILD_TYPE=...
- 05:29 PM Orchestrator Bug #57651 (Resolved): cephadm: serve loop just loops forever if migration_current is too high
- cephadm uses the config option mgr/cephadm/migration_current to track its progress through a set of migration steps. In normal op...
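A minimal sketch of the assumed loop logic (the constant and the check are hypothetical, not cephadm's actual code): if the stored counter can end up above the last migration the module knows about, an equality-style completion test never fires and the serve loop spins forever.

#include <iostream>

constexpr int LAST_MIGRATION = 5;  // hypothetical highest known step

// With `== LAST_MIGRATION`, a stored value of e.g. 7 (written by a newer
// release) never matches; `>=` (or clamping the stored value) terminates.
bool migrations_done(int migration_current) {
  return migration_current >= LAST_MIGRATION;
}

int main() {
  std::cout << std::boolalpha << migrations_done(7) << "\n";  // true
  return 0;
}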
- 05:18 PM Orchestrator Bug #55605 (Resolved): Rook orchestrator py exception with NFS commands
- 05:11 PM RADOS Bug #57650: mon-stretch: reweighting an osd to a big number, then back to original causes uneven ...
- ceph osd tree:...
- 05:09 PM RADOS Bug #57650 (In Progress): mon-stretch: reweighting an osd to a big number, then back to original ...
- Reweighting an osd from 0.0900 to 0.7000
and then back to 0.0900 causes uneven weights between
two zones rep...
- 05:09 PM rgw Backport #57649 (Resolved): pacific: rgw: fix bool/int logic error when calling get_obj_head_ioctx
- https://github.com/ceph/ceph/pull/48230
- 05:09 PM rgw Backport #57648 (Resolved): quincy: rgw: fix bool/int logic error when calling get_obj_head_ioctx
- https://github.com/ceph/ceph/pull/48231
- 05:01 PM rgw Bug #57543 (Pending Backport): rgw: fix bool/int logic error when calling get_obj_head_ioctx
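For context, a sketch of the bool/int pitfall the title refers to (stub function, not the real get_obj_head_ioctx): with the 0-on-success / negative-errno convention, an implicit int-to-bool conversion turns an error code into true.

#include <iostream>

// Stub following the Ceph convention: 0 on success, negative errno on error.
int get_ioctx_stub() { return -5; /* e.g. -EIO */ }

int main() {
  int r = get_ioctx_stub();
  bool ok = r;           // bug class: int->bool keeps only zero/nonzero,
                         // so the error code -5 converts to `true`
  if (r < 0)             // correct: test the sign explicitly
    std::cout << "error " << r << " (ok wrongly = " << ok << ")\n";
  return 0;
}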
- 04:49 PM Orchestrator Backport #57601 (Resolved): quincy: Rook orchestrator py exception with NFS commands
- 04:24 PM ceph-volume Bug #57627 (Resolved): ceph-volume activate takes time to complete
- 04:01 PM cleanup Tasks #57647 (In Progress): prototype metadata sync with c++20 coroutines and neorados
- a skeletal design for metadata sync coroutines, along with abstractions for unit testing (a generic coroutine skeleton is sketched below):
https://gist.github.com/...
- 03:48 PM Orchestrator Backport #57644 (In Progress): pacific: haproxy requires the `net.ipv4.ip_nonlocal_bind` sysctl s...
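The sketch referenced above: a generic, self-contained C++20 coroutine skeleton of the idea (the task type and the sync step are assumptions, not the gist's actual design).

#include <coroutine>
#include <cstdio>

// Minimal eager task type; a real design would integrate with neorados
// and an executor instead of running to completion immediately.
struct task {
  struct promise_type {
    task get_return_object() { return {}; }
    std::suspend_never initial_suspend() noexcept { return {}; }
    std::suspend_never final_suspend() noexcept { return {}; }
    void return_void() {}
    void unhandled_exception() {}
  };
};

// Hypothetical sync step as a coroutine: each co_await point is where an
// async RADOS operation would suspend instead of blocking a thread.
task sync_one_entry(int shard, int marker) {
  std::printf("sync shard=%d marker=%d\n", shard, marker);
  co_return;
}

int main() {
  sync_one_entry(0, 42);
  return 0;
}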
- 02:38 PM Orchestrator Backport #57644 (Resolved): pacific: haproxy requires the `net.ipv4.ip_nonlocal_bind` sysctl setting
- https://github.com/ceph/ceph/pull/48212
- 03:46 PM Orchestrator Backport #57645 (In Progress): quincy: haproxy requires the `net.ipv4.ip_nonlocal_bind` sysctl se...
- 02:39 PM Orchestrator Backport #57645 (Resolved): quincy: haproxy requires the `net.ipv4.ip_nonlocal_bind` sysctl setting
- https://github.com/ceph/ceph/pull/48211
- 03:28 PM cleanup Tasks #57646 (New): stop mangling request header names
- rgw frontends have always converted incoming request headers from their native format like "x-amz-date" into the form...
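For illustration, the classic CGI-style meta-variable mangling this presumably describes (the exact target form is truncated above, so this is an assumption):

#include <algorithm>
#include <cctype>
#include <iostream>
#include <string>

// "x-amz-date" -> "HTTP_X_AMZ_DATE": uppercase, dashes to underscores,
// HTTP_ prefix.
std::string mangle_header(std::string name) {
  std::transform(name.begin(), name.end(), name.begin(),
                 [](unsigned char c) { return std::toupper(c); });
  std::replace(name.begin(), name.end(), '-', '_');
  return "HTTP_" + name;
}

int main() {
  std::cout << mangle_header("x-amz-date") << "\n";  // HTTP_X_AMZ_DATE
  return 0;
}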
- 03:03 PM RADOS Bug #57628: osd:PeeringState.cc: FAILED ceph_assert(info.history.same_interval_since != 0)
- The same issue was reported in telemetry also on version 15.0.0:
http://telemetry.front.sepia.ceph.com:4000/d/jByk5H...
- 02:33 PM Orchestrator Bug #57563 (Pending Backport): haproxy requires the `net.ipv4.ip_nonlocal_bind` sysctl setting
- 02:28 PM Orchestrator Bug #57305 (Closed): bootstrap mgr timeout is too short
- Closing because the timeout is configurable and the default values are good enough.
- 02:25 PM rgw Backport #57643 (Resolved): pacific: using a string_view on a temporary string object
- https://github.com/ceph/ceph/pull/52159
- 02:24 PM rgw Backport #57642 (Resolved): quincy: using a string_view on a temporary string object
- https://github.com/ceph/ceph/pull/52158
- 02:23 PM Orchestrator Backport #57639 (In Progress): pacific: cephadm: `ceph orch ps` doesn't list container versions i...
- 01:55 PM Orchestrator Backport #57639 (Resolved): pacific: cephadm: `ceph orch ps` doesn't list container versions in s...
- https://github.com/ceph/ceph/pull/48210
- 02:23 PM rgw Bug #56992: rgw_op.cc:Deleting a non-existent object also generates a delete marker
- It would be interesting to see what Amazon S3 does.
Are you able to produce a PR, yuxuan yang? - 02:14 PM rgw Bug #57322 (Resolved): in DBMultipartWriter part_num is used for oid creation before being initia...
- 02:13 PM rgw Bug #57325 (Resolved): memory issues in newDBStore()
- 02:13 PM rgw Bug #57326 (Pending Backport): using a string_view on a temporary string object
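For context, a minimal sketch of the dangling-view bug class in the title (hypothetical function, not the actual RGW code):

#include <iostream>
#include <string>
#include <string_view>

std::string make_name() { return "bucket-name"; }

int main() {
  // Bug class: std::string_view sv = make_name(); leaves sv dangling the
  // moment the temporary string is destroyed (end of that statement).
  std::string owner = make_name();  // keep an owning string alive instead
  std::string_view sv = owner;      // the view never outlives its owner
  std::cout << sv << "\n";
  return 0;
}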
- 02:12 PM Orchestrator Feature #57254 (Closed): RGW deployment should fail early if port number is already in use
- Technically it should be possible to detect which ports are in use on all the nodes (see the sketch below), but there are more things to keep ...
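As a sketch of that detection on a single node (IPv4 only; privileged ports and remote nodes would need extra care; hypothetical helper, not cephadm code):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Try to bind the port; EADDRINUSE (bind failure) means it's taken.
bool port_in_use(unsigned short port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0) return false;  // cannot tell; assume free
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_ANY);
  addr.sin_port = htons(port);
  bool in_use = bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0;
  close(fd);
  return in_use;
}

int main() {
  std::printf("port 8080 in use: %d\n", port_in_use(8080));
  return 0;
}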
- 02:07 PM RADOS Bug #57570 (Fix Under Review): mon-stretched_cluster: Site weights are not monitored post stretch...
- 02:02 PM rgw Bug #57399 (Can't reproduce): multisite tests segfault in 'radosgw-admin bucket sync run'
- 02:01 PM Orchestrator Backport #57640 (In Progress): quincy: cephadm: `ceph orch ps` doesn't list container versions in...
- 01:55 PM Orchestrator Backport #57640 (Resolved): quincy: cephadm: `ceph orch ps` doesn't list container versions in so...
- https://github.com/ceph/ceph/pull/48208
- 01:59 PM CephFS Bug #57641 (Fix Under Review): Ceph FS fscrypt clones missing fscrypt metadata
- h2. Summary
When cloning a Ceph FS volume containing fscrypt-enabled subtrees,
the clone misses fscrypt metadata....
- 01:54 PM Orchestrator Backport #57638 (Resolved): pacific: applying osd service spec with size filter fails if there's ...
- https://github.com/ceph/ceph/pull/48243
- 01:54 PM Orchestrator Backport #57637 (Resolved): quincy: applying osd service spec with size filter fails if there's t...
- https://github.com/ceph/ceph/pull/48242
- 01:51 PM Orchestrator Bug #57558 (Pending Backport): cephadm: `ceph orch ps` doesn't list container versions in some cases
- 01:46 PM Orchestrator Bug #57609 (Pending Backport): applying osd service spec with size filter fails if there's tiny (...
- 01:31 PM rgw Backport #57636 (In Progress): quincy: RGW crash due to PerfCounters::inc assert_condition during...
- https://github.com/ceph/ceph/pull/53471
- 01:31 PM rgw Backport #57635 (Resolved): pacific: RGW crash due to PerfCounters::inc assert_condition during m...
- https://github.com/ceph/ceph/pull/53472
- 01:12 PM rgw Bug #49666 (Pending Backport): RGW crash due to PerfCounters::inc assert_condition during multisi...
- 12:58 PM CephFS Bug #57523: CephFS performance degredation in mountpoint
- Hi,
yes, the MDS is running with the default configuration; we only tested if two active MDS were helping but it didn't ...
- 12:43 PM RADOS Bug #57632 (In Progress): test_envlibrados_for_rocksdb: free(): invalid pointer
- 06:44 AM RADOS Bug #57632 (Closed): test_envlibrados_for_rocksdb: free(): invalid pointer
- /a/kchai-2022-08-23_13:19:39-rados-wip-kefu-testing-2022-08-22-2243-distro-default-smithi/6987883/...
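For context, what that glibc message means in miniature (generic example, not the rocksdb test itself): free() aborts with "free(): invalid pointer" when handed a pointer that never came from the allocator.

#include <cstdlib>

int main() {
  int on_stack = 0;
  (void)on_stack;
  // free(&on_stack);  // would abort: "free(): invalid pointer", because
                       // the address was never returned by malloc
  int* p = static_cast<int*>(std::malloc(sizeof(int)));
  std::free(p);        // valid: heap pointer, freed exactly once
  return 0;
}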
- 12:27 PM Dashboard Backport #57633 (In Progress): quincy: exporter: avoid stoi for empty pid_str mgr/dashboard: sho...
- 09:37 AM Dashboard Backport #57633 (Resolved): quincy: exporter: avoid stoi for empty pid_str mgr/dashboard: short_...
- https://github.com/ceph/ceph/pull/48206
- 12:02 PM Dashboard Backport #57223 (Resolved): quincy: mgr/dashboard: dashboard connects via ssl to an ip address in...
- 12:00 PM Dashboard Backport #57224 (Resolved): pacific: mgr/dashboard: dashboard connects via ssl to an ip address i...
- 11:57 AM CephFS Bug #57634 (Closed): mgr/volumes: small fixes in doc.
- Not Required!! Closing it.
- 10:50 AM CephFS Bug #57634 (Closed): mgr/volumes: small fixes in doc.
- 11:52 AM Dashboard Backport #57625 (In Progress): quincy: mgr/dashboard: expose num repaired objects metric per pool
- 10:08 AM CephFS Bug #47693 (Rejected): qa: snap replicator tests
- 10:08 AM CephFS Bug #54064 (Resolved): pacific: qa: mon assertion failure during upgrade
- 09:35 AM Dashboard Bug #57619 (Pending Backport): exporter: avoid stoi for empty pid_str mgr/dashboard: short_descr...
- 06:45 AM RADOS Bug #57163 (Resolved): free(): invalid pointer
- test_envlibrados_for_rocksdb failure will be tracked here: https://tracker.ceph.com/issues/57632
- 06:39 AM rgw Bug #52000 (Resolved): cephadm deployed RGW prints: "rgw is configured to optionally allow insecu...
- 06:39 AM rgw Backport #52050 (Resolved): pacific: cephadm deployed RGW prints: "rgw is configured to optionall...
- 05:32 AM CephFS Fix #57295 (Rejected): qa: remove RHEL from job matrix
- 05:30 AM CephFS Backport #57631 (Rejected): quincy: first-damage.sh does not handle dentries with spaces
- 05:30 AM CephFS Backport #57630 (Rejected): pacific: first-damage.sh does not handle dentries with spaces
- 05:25 AM CephFS Bug #57586 (Pending Backport): first-damage.sh does not handle dentries with spaces
- 05:23 AM RADOS Bug #57546: rados/thrash-erasure-code: wait_for_recovery timeout due to "active+clean+remapped+la...
- Thanks for the reproducer, Laura; I'm looking into the failures.
- 05:22 AM CephFS Bug #54557: scrub repair does not clear earlier damage health status
- Neeraj, please take this one.
- 04:27 AM CephFS Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
- Dhairya Parmar wrote:
> I can tell this issue exists even in Quincy. The Ceph environment I used operates on `ceph 17...
- 02:08 AM rgw Bug #47866: Object not found on healthy cluster
- yunqing wang wrote:
> will this affect NAUTILUS?
No, nautilus did not have the newer GC code that introduced this ...
- 01:51 AM rgw Bug #47866: Object not found on healthy cluster
- will this affect NAUTILUS?
- 01:29 AM rgw Bug #57468 (Resolved): fail to set requestPayment in slave zone
- 12:02 AM crimson Bug #57549: Crimson: Alienstore not work after ceph enable c++20
- Ah, with a release build, I'm getting a segfault during mkfs (https://tracker.ceph.com/issues/57629). Not apparently...