Activity
From 02/07/2017 to 03/08/2017
03/08/2017
- 10:43 PM Bug #19232 (Fix Under Review): "Error EINVAL: invalid command" in upgrade:kraken-x-master-distro-...
- wip-19232
https://github.com/ceph/ceph/pull/13892 merged
- 10:08 PM Bug #19232: "Error EINVAL: invalid command" in upgrade:kraken-x-master-distro-basic-smithi
- https://github.com/ceph/ceph/pull/13852 fixed this for jewel-x but we need the same addition for kraken-x
- 06:33 PM Bug #19232: "Error EINVAL: invalid command" in upgrade:kraken-x-master-distro-basic-smithi
- There is a newly-added command "osd set-full-ratio" (and set-nearfull-ratio). Looks like this is a 'test suite doesn...
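(For orientation: these are the new full-ratio monitor commands on master. A minimal illustration of their use, with example values only:
ceph osd set-full-ratio 0.95
ceph osd set-nearfull-ratio 0.85
Presumably the EINVAL comes from the suite issuing them against monitors that predate the commands, which is why the same qa addition made for jewel-x in https://github.com/ceph/ceph/pull/13852 was needed for kraken-x.)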
- 05:21 PM Bug #19232: "Error EINVAL: invalid command" in upgrade:kraken-x-master-distro-basic-smithi
- Test error... Something did "ceph --admin-daemon" when they meant "ceph daemon"
Nope, read it wrong on the phone. ...
- 05:12 PM Bug #19232 (Resolved): "Error EINVAL: invalid command" in upgrade:kraken-x-master-distro-basic-sm...
- Run: http://pulpito.ceph.com/teuthology-2017-03-08_03:25:22-upgrade:kraken-x-master-distro-basic-smithi/
Logs: http:...
- 09:38 PM RADOS Bug #19237 (New): "PG.cc: 3100: FAILED assert(e.version > info.last_update)" in upgrade:kraken-x-...
- Run: http://pulpito.ceph.com/teuthology-2017-03-08_02:25:22-upgrade:kraken-x-master-distro-basic-vps/
Job: 894399
L...
- 09:09 PM Feature #19217: add optimized crc32c algorithm for ppc64le architecture
- Fix in progress.
- 09:08 PM Bug #17432: CephFS - Bonnie++: Can't create file
- Same issue still present in v10.2.6
- 07:47 PM rgw Bug #18300: leak from RGWMetaSyncShardCR::incremental_sync
- both backports are resolved, can we close this one?
- 07:46 PM rgw Backport #18287: kraken: multisite: coroutine deadlock in RGWMetaSyncCR after ECANCELED errors
- this commit 73cd8df887fb5e45f2d49275cedfeab31809ddc8 "rgw: use explicit flag to cancel RGWCoroutinesManager::run()" i...
- 06:48 PM Backport #18610 (Need More Info): kraken: osd: ENOENT on clone
- 06:41 PM Backport #19208 (In Progress): jewel: snap trim blocked behind ec read, never woken, on kraken-x ...
- 06:38 PM rgw Backport #17786 (Fix Under Review): jewel: multisite: assertion in RGWRados::wakeup_data_sync_shards
- https://github.com/ceph/ceph/pull/13886
- 06:35 PM Backport #19224 (In Progress): jewel: osd ops (sent and?) arrive at osd out of order
- 08:40 AM Backport #19224 (Resolved): jewel: osd ops (sent and?) arrive at osd out of order
- https://github.com/ceph/ceph/pull/17893
- 06:31 PM rgw Bug #16129 (Resolved): HTTPConnectionPool(host='mira101.front.sepia.ceph.com', port=8000): Max re...
- 06:21 PM rgw Bug #19236 (Resolved): multisite: some 'radosgw-admin data sync' commands hang
- RGWRemoteDataLog::init_sync_status() and read_sync_status() use their own RGWCoroutinesManager, because RGWCoroutines...
- 06:20 PM Backport #19210 (In Progress): jewel: pre-jewel "osd rm" incrementals are misinterpreted
- 06:14 PM Backport #19209 (In Progress): kraken: pre-jewel "osd rm" incrementals are misinterpreted
- 06:01 PM Backport #19207 (In Progress): kraken: snap trim blocked behind ec read, never woken, on kraken-x...
- 05:54 PM Backport #19225 (In Progress): kraken: osd ops (sent and?) arrive at osd out of order
- 08:40 AM Backport #19225 (Rejected): kraken: osd ops (sent and?) arrive at osd out of order
- 05:20 PM Bug #19233 (Won't Fix): broken packages in upgrade:hammer-jewel-x-kraken-distro-basic-smithi
- Run: http://pulpito.ceph.com/teuthology-2017-03-08_02:25:22-upgrade:hammer-jewel-x-kraken-distro-basic-smithi/
Jobs:...
- 05:15 PM CephFS Bug #16914: multimds: pathologically slow deletions in some tests
- Notes on discussion today:
* The issue is that we usually (single mds) do fine in these cases because of the proj...
- 05:14 PM Bug #19076: osd/ReplicatedBackend.cc: 884: FAILED assert(j != bc->pulling.end())
- https://github.com/ceph/ceph/pull/13879
- 03:42 AM Bug #19076: osd/ReplicatedBackend.cc: 884: FAILED assert(j != bc->pulling.end())
- /a/sage-2017-03-07_22:50:21-rados-wip-sage-testing---basic-smithi/893435
- 04:11 PM rgw Feature #19053: rgw: swift API: support to retrieve manifest for SLO
- Same experiment performed on OpenStack Swift yields the result -...
- 03:53 PM rgw Bug #19231 (Resolved): upgrade to multisite v2 fails if there is a zone without zone info
- 2017-03-08 15:03:26.541590 7eff663e09c0 0 failed to init zoneparams as: (2) No such file or directory
2017-03-08 1...
- 02:55 PM rgw Backport #19179 (Need More Info): jewel: rgw: 204 No Content is returned when putting illformed S...
- non-trivial backport, conflicts with https://github.com/ceph/ceph/commit/f1007832acb8f7bcf5a39708b58e4913fd60f892 at ...
- 02:51 PM rgw Backport #19178 (In Progress): kraken: anonymous user's error code of getting object is not consi...
- 02:50 PM rgw Backport #19177 (In Progress): jewel: anonymous user's error code of getting object is not consis...
- 02:45 PM rgw Backport #19172 (In Progress): kraken: rgw: S3 create bucket should not do response in json
- 02:44 PM CephFS Feature #19230 (Resolved): Limit MDS deactivation to one at a time
- This needs to happen in the thrasher code, and also in the mon
- 02:44 PM rgw Backport #19171 (In Progress): jewel: rgw: S3 create bucket should not do response in json
- 02:38 PM rgw Backport #19164 (In Progress): kraken: radosgw-admin: add the 'object stat' command to usage
- 02:37 PM rgw Backport #19163 (In Progress): jewel: radosgw-admin: add the 'object stat' command to usage
- 02:31 PM rgw Backport #19170 (In Progress): kraken: rgw_file: allow setattr on placeholder (common_prefix) di...
- 02:31 PM rgw Backport #19168 (In Progress): kraken: rgw_file: RGWReaddir (and cognate ListBuckets request) do...
- 02:30 PM rgw Backport #19166 (In Progress): kraken: rgw_file: "exact match" invalid for directories, in RGWLi...
- 02:28 PM rgw Backport #19229 (In Progress): kraken: librgw: objects created from s3 apis are not visible from ...
- 02:27 PM rgw Backport #19229 (Resolved): kraken: librgw: objects created from s3 apis are not visible from nfs...
- https://github.com/ceph/ceph/pull/13871
- 02:26 PM rbd Backport #19228 (Resolved): jewel: Enabling mirroring for a pool with clones may fail
- https://github.com/ceph/ceph/pull/14663
- 02:26 PM rbd Backport #19227 (Resolved): kraken: rbd: Enabling mirroring for a pool with clones may fail
- https://github.com/ceph/ceph/pull/16113
- 02:26 PM rgw Bug #18651 (Pending Backport): librgw: objects created from s3 apis are not visible from nfs moun...
- 02:25 PM rgw Backport #19162 (In Progress): kraken: rgw_file: fix marker computation
- 02:14 PM rgw Backport #19165 (In Progress): jewel: rgw_file: "exact match" invalid for directories, in RGWLib...
- 02:13 PM rgw Backport #19169 (In Progress): jewel: rgw_file: allow setattr on placeholder (common_prefix) dir...
- 02:13 PM rgw Backport #19167 (In Progress): jewel: rgw_file: RGWReaddir (and cognate ListBuckets request) don...
- 02:12 PM rgw Backport #19161 (In Progress): jewel: rgw_file: fix marker computation
- 02:10 PM rgw Backport #19160 (In Progress): kraken: multisite: RGWMetaSyncShardControlCR gives up on EIO
- 02:09 PM rgw Backport #19159 (In Progress): jewel: multisite: RGWMetaSyncShardControlCR gives up on EIO
- 02:08 PM rgw Backport #19157 (In Progress): kraken: RGW health check errors out incorrectly
- 02:07 PM rgw Backport #19158 (In Progress): jewel: RGW health check errors out incorrectly
- 01:59 PM rgw Backport #19156 (In Progress): kraken: rgw: typo in rgw_admin.cc
- 01:55 PM rgw Backport #19155 (In Progress): jewel: rgw: typo in rgw_admin.cc
- 01:42 PM rbd Feature #19034 (Resolved): [rbd CLI] import-diff should use concurrent writes
- 01:41 PM rbd Bug #19130 (Pending Backport): Enabling mirroring for a pool with clones may fail
- 01:40 PM Bug #19226 (Won't Fix): ceph-disk: cannot deactivate a directory-based osd
- Hi,
We cannot deactivate a path-based OSD.
Creating the OSD:...
- 12:48 PM rgw Backport #19154 (In Progress): kraken: rgw_file: fix recycling of invalid mkdir handles
- 12:31 PM rgw Backport #19153 (In Progress): jewel: rgw_file: fix recycling of invalid mkdir handles
- 12:17 PM CephFS Bug #19204: MDS assert failed when shutting down
- https://github.com/ceph/ceph/pull/13859
- 12:15 PM CephFS Bug #19204 (Fix Under Review): MDS assert failed when shutting down
- Hmm, we do shut down the objecter before the finisher, which is clearly not handling this case.
Let's try swapping...
- 12:17 PM RADOS Feature #18943 (Resolved): crush: add devices class that rules can use as a filter
- 12:09 PM rgw Backport #19152 (In Progress): jewel: rgw_file: restore (corrected) fix for dir "partial match" ...
- 11:20 AM CephFS Bug #19201 (Resolved): mds: status asok command prints MDS_RANK_NONE as unsigned long -1: 1844674...
- 11:19 AM CephFS Feature #11950 (Resolved): Strays enqueued for purge cause MDCache to exceed size limit
- PurgeQueue has merged to master, will be in Luminous.
- 10:54 AM Bug #12100: OSD crash, unexpected aio error in FileJournal.cc
- Wido den Hollander wrote:
> I encountered this issue this morning on several machines where the unattended upgrades ...
- 08:41 AM Backport #19223 (In Progress): jewel: osd crashes during hit_set_trim and hit_set_remove_all if h...
- 08:39 AM Backport #19223 (Resolved): jewel: osd crashes during hit_set_trim and hit_set_remove_all if hit ...
- https://github.com/ceph/ceph/pull/13827
- 08:40 AM Backport #19222 (In Progress): hammer: osd crashes during hit_set_trim and hit_set_remove_all if ...
- 08:39 AM Backport #19222 (Rejected): hammer: osd crashes during hit_set_trim and hit_set_remove_all if hit...
- https://github.com/ceph/ceph/pull/13826
- 08:39 AM Bug #19185 (Pending Backport): osd crashes during hit_set_trim and hit_set_remove_all if hit set ...
- Apparently, this is a special-case bug that is only present in hammer and jewel, not in master. I will stage backport...
- 08:30 AM rgw Bug #18665: no http referer info in container metadata dump in swift API
- This fix depends on the swift http reference feature, which has not been backported to jewel. Marking the jewel backp...
- 08:29 AM rgw Backport #18897 (Need More Info): jewel: no http referer info in container metadata dump in swift...
- it depends on the swift http reference feature, which has not been backported to jewel. So it had a compilation error
- 08:21 AM CephFS Feature #19135: Multi-Mds: One dead, all dead; what is Robustness??
- John Spray wrote:
> After decreasing max_mds, you also need to use "ceph mds deactivate <rank>" to shrink the active...
- 08:18 AM Bug #19221 (Can't reproduce): unittest_compression segfaults in CompressionPlugin.all
- ...
- 04:07 AM CephFS Bug #19220 (New): jewel: mds crashed (FAILED assert(mds->mds_lock.is_locked_by_me())
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-02-19_10:10:02-fs-jewel---basic-smithi/833293/teuthology.log
<...
- 03:32 AM Bug #19133 (Pending Backport): osd ops (sent and?) arrive at osd out of order
- 03:12 AM rgw Bug #19219 (Resolved): Data sync problem in multisite
- In zone A and zone B, if we upload a object obj to A. The data sync process is :
1. B get the bi log from A;
2. B f...
- 02:59 AM rgw Bug #19218: swift static web can not display index.html automatically in sub-directories
- https://github.com/ceph/ceph/pull/13850
- 02:48 AM rgw Bug #19218 (New): swift static web can not display index.html automatically in sub-directories
- If we create sub-directories(eg. sub-dir) for site, in static web mode, which includes index.html.
when we enter t...
- 01:51 AM Bug #19187: Delete/discard operations initiated by a qemu/kvm guest get stuck
- @Adam: It's fortunate that you are able to reproduce it from the rados CLI as well. Assuming you have an asok for OSD...
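(The admin-socket query being suggested here is presumably along these lines; the OSD id and socket path are illustrative:
ceph daemon osd.12 dump_ops_in_flight
ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok dump_ops_in_flight
Both forms list any requests the OSD is still sitting on, which is the usual first check for ops that appear stuck.)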
03/07/2017
- 09:47 PM rgw Backport #19151 (In Progress): kraken: rgw_file: avoid interning ".." in FHCache table and don't...
- 09:36 PM Bug #19187: Delete/discard operations initiated by a qemu/kvm guest get stuck
- I do have OSD logs, but the debug level is low so there isn't much there (I only turned up debug on the client side)....
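(One way to raise OSD-side logging before reproducing; the OSD id and levels are illustrative:
ceph tell osd.12 injectargs '--debug-osd 20 --debug-ms 1'
With that in place the OSD logs from the primary and replica would show much more detail for the stuck ops.)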
- 06:44 PM Bug #19187: Delete/discard operations initiated by a qemu/kvm guest get stuck
- @Andy: Any chance you have logs from those two OSDs? Are your file descriptor limits set high enough on both the clie...
- 06:35 PM Bug #19187: Delete/discard operations initiated by a qemu/kvm guest get stuck
- Hi Jason,
Our clients are running 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b). The OSDs are running a mix of...
- 05:56 PM Bug #19187 (Need More Info): Delete/discard operations initiated by a qemu/kvm guest get stuck
- @Adam: what version of Ceph are you running on your OSDs and clients? Since it appears like the PG is stuck and not r...
- 09:35 PM Feature #19217 (Resolved): add optimized crc32c algorithm for ppc64le architecture
- Add a performance optimized crc32c function for ppc64le that uses Altivec assembly instructions. Add an architectur...
- 09:30 PM rgw Backport #19150 (In Progress): jewel: rgw_file: avoid interning ".." in FHCache table and don't ...
- 09:20 PM rgw Backport #19147 (In Progress): kraken: rgw daemon's DUMPABLE flag is cleared by setuid preventing...
- 09:19 PM rgw Backport #19148 (In Progress): jewel: rgw daemon's DUMPABLE flag is cleared by setuid preventing ...
- 09:17 PM rgw Backport #19146 (In Progress): kraken: rgw: a few cases where rgw_obj is incorrectly initialized
- 09:16 PM rgw Backport #19145 (In Progress): jewel: rgw: a few cases where rgw_obj is incorrectly initialized
- 09:16 PM rgw Bug #19096: rgw: a few cases where rgw_obj is incorrectly initialized
- *master PR*: https://github.com/ceph/ceph/pull/13676
- 09:11 PM rgw Backport #19144 (In Progress): kraken: rgw_file: FHCache residence check should be exhaustive
- 09:07 PM rgw Backport #19143 (In Progress): jewel: rgw_file: FHCache residence check should be exhaustive
- 08:43 PM rgw Backport #19098 (Need More Info): RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- 08:35 PM rgw Backport #19049 (In Progress): kraken: multisite: some yields in RGWMetaSyncShardCR::full_sync() ...
- 08:32 PM rgw Backport #19048 (In Progress): jewel: multisite: some yields in RGWMetaSyncShardCR::full_sync() r...
- 04:21 PM rgw Backport #18969 (In Progress): jewel: Change loglevel to 20 for 'System already converted' message
- 03:42 PM rgw Backport #18908 (In Progress): jewel: the swift container acl does not support field ".ref"
- 02:52 PM rgw Bug #19214 (Resolved): rgw_file: RGWLibFS::get_inst() not stable
- This is supposed to return the stable instance, not the current instance counter value.
Found by Gui Hecheng<guima...
- 02:41 PM rgw Bug #18079 (Resolved): rgw: need to stream metadata full sync init
- 02:38 PM CephFS Backport #19206 (In Progress): jewel: Invalid error code returned by MDS is causing a kernel clie...
- 01:28 PM CephFS Backport #19206 (Resolved): jewel: Invalid error code returned by MDS is causing a kernel client ...
- https://github.com/ceph/ceph/pull/13831
- 02:16 PM Bug #18515 (Rejected): Ceph -s give us wrong information about the cluster when OSDs in a cluster...
- it's expected. we rely on osd peers to report the failure to mon. and mon will mark an osd down if it has not receive...
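(The behaviour described here is governed by a handful of options; the option names are real, the values shown are the usual defaults and are only illustrative:
[osd]
osd heartbeat grace = 20
[mon]
mon osd min down reporters = 2
mon osd report timeout = 900
An OSD is only marked down once enough peers report it missing, or once the monitor has heard nothing from it for the report timeout.)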
- 01:45 PM rgw Backport #18898 (In Progress): kraken: no http referer info in container metadata dump in swift API
- 01:43 PM rgw Backport #18897 (In Progress): jewel: no http referer info in container metadata dump in swift API
- 01:32 PM rgw Bug #19213 (New): GET on SLO Object uploaded without etags and size_bytes keys in manifest fails
- etags and size_bytes keys are optional (https://docs.openstack.org/developer/swift/overview_large_objects.html)
SLO ...
- 01:28 PM rgw Backport #19212 (Resolved): kraken: rgw: "cluster [WRN] bad locator @X on object @X...." in clust...
- https://github.com/ceph/ceph/pull/14065
- 01:28 PM rgw Backport #19211 (Resolved): jewel: rgw: "cluster [WRN] bad locator @X on object @X...." in cluste...
- https://github.com/ceph/ceph/pull/14064
- 01:28 PM Backport #19210 (Resolved): jewel: pre-jewel "osd rm" incrementals are misinterpreted
- https://github.com/ceph/ceph/pull/13884
- 01:28 PM Backport #19209 (Resolved): kraken: pre-jewel "osd rm" incrementals are misinterpreted
- https://github.com/ceph/ceph/pull/13883
- 01:28 PM Backport #19208 (Resolved): jewel: snap trim blocked behind ec read, never woken, on kraken-x upg...
- https://github.com/ceph/ceph/pull/16015
- 01:28 PM Backport #19207 (Rejected): kraken: snap trim blocked behind ec read, never woken, on kraken-x up...
- https://github.com/ceph/ceph/pull/16082
- 01:22 PM CephFS Bug #19205 (Resolved): Invalid error code returned by MDS is causing a kernel client WARNING
- Found by an occasionally failing xfstest generic/011.
After some investigation, it was found out that a positive e...
- 01:01 PM devops Bug #6281 (Closed): ceph-deploy config.py write_conf throws away old config just because they are...
- Closing as this is intended behavior
- 09:49 AM Bug #19185: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
- fix for hammer: https://github.com/ceph/ceph/pull/13826
fix for jewel: https://github.com/ceph/ceph/pull/13827
- 08:27 AM Bug #19185: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
- Just to emphasize again: use_gmt_hitset = false at the time of the first "could not load hitset with timestamp 7 hour...
- 08:13 AM Bug #19185: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
- Kefu Chai wrote:
> the use-gmt flag is unset for pools created before upgrading, so i don't think that's the case.
...
- 06:31 AM Bug #19185 (Fix Under Review): osd crashes during hit_set_trim and hit_set_remove_all if hit set ...
- the use-gmt flag is unset for pools created before upgrading, so i don't think that's the case.
https://github.com...
- 09:20 AM CephFS Bug #19204 (Resolved): MDS assert failed when shutting down
- We encountered a failed assertion when trying to shutdown an MDS. Here is a snippet of the log:
> -14> 2017-01-22 ...
- 07:24 AM RADOS Bug #19191: osd/ReplicatedBackend.cc: 1109: FAILED assert(!parent->get_log().get_missing().is_mis...
- Do we actually want to clear the missing set here, or just filter it for the correct child PG?
...I presume killing ...
- 07:19 AM RADOS Bug #19191: osd/ReplicatedBackend.cc: 1109: FAILED assert(!parent->get_log().get_missing().is_mis...
- Summary:
During pg split we are resetting last_backfill but not clearing the
local missing set. This comes back t...
- 07:08 AM rgw Backport #19176 (In Progress): jewel: swift API: cannot disable object versioning with empty X-Ve...
- https://github.com/ceph/ceph/pull/13823
- 07:06 AM CephFS Subtask #10489: Mpi tests fail on both ceph-fuse and kclient
- Zheng Yan wrote:
> [...]
>
> the test passes after applying above patch and adding "-W -R" options to fsx-mpi, me...
- 04:50 AM Bug #19202: FAILED assert(p != spanning_blob_map.end()) in ceph_test_objectstore
- i am trying to reproduce it using...
- 04:16 AM Bug #19202 (Can't reproduce): FAILED assert(p != spanning_blob_map.end()) in ceph_test_objectstore
- ...
- 04:19 AM Documentation #19203 (Resolved): gitbuilders are still referenced by docs
- Briefly in dev/repo-access.rst and throughout install/get-packages.rst. gitbuilders have been replaced by https://sha...
- 04:16 AM rgw Feature #19053: rgw: swift API: support to retrieve manifest for SLO
- 04:16 AM rgw Feature #19053: rgw: swift API: support to retrieve manifest for SLO
- Here is our test procedure:...
- 04:11 AM Bug #18226 (Fix Under Review): assert in bluestore_extent_ref_map_t::record_t::bound_encode with ...
- https://github.com/ceph/ceph/pull/13819
- 03:21 AM Bug #19134 (Fix Under Review): mds suicide during multimds test
- https://github.com/ceph/ceph/pull/13818
- 02:58 AM Bug #19134: mds suicide during multimds test
- The reason is that
Beacon::is_laggy() calls MonClient::reopen_session(). MonClient::_reopen_session() reset MonCli...
- 02:13 AM Bug #15912 (Resolved): An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed
- https://github.com/ceph/ceph/pull/13425
- 02:12 AM Bug #16878 (Resolved): filestore: utilization ratio calculation does not take journal size into a...
- https://github.com/ceph/ceph/pull/13425
- 01:19 AM RADOS Bug #19199: Odd OSD failure path; ERROR: osd init failed: (110) Connection timed out
- Earlier in the log the root cause appears:...
03/06/2017
- 08:50 PM Bug #19131 (Pending Backport): snap trim blocked behind ec read, never woken, on kraken-x upgrade
- 08:45 PM CephFS Bug #19201 (Fix Under Review): mds: status asok command prints MDS_RANK_NONE as unsigned long -1:...
- https://github.com/ceph/ceph/pull/13816
- 08:36 PM CephFS Bug #19201 (Resolved): mds: status asok command prints MDS_RANK_NONE as unsigned long -1: 1844674...
- i.e....
- 07:51 PM Bug #19200 (Resolved): RHEL 7.3 Selinux denials at OSD start
- 07:51 PM Bug #19200 (Resolved): RHEL 7.3 Selinux denials at OSD start
- I get a batch of SElinux denials when starting Kraken OSD. However there does not seem to be any impairment of the f...
- 06:48 PM RADOS Bug #19199 (New): Odd OSD failure path; ERROR: osd init failed: (110) Connection timed out
- 06:48 PM RADOS Bug #19199 (New): Odd OSD failure path; ERROR: osd init failed: (110) Connection timed out
- See attached OSD log for more details.
commit 6f8e4b38103d6f519e6661acc97a47ceccf5e5fc was the latest master
Interm...
- 06:28 PM Bug #19119 (Pending Backport): pre-jewel "osd rm" incrementals are misinterpreted
- 05:13 PM rgw Bug #18980 (Pending Backport): rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
- 05:13 PM rgw Bug #18980 (Resolved): rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
- 05:05 PM RADOS Bug #19198 (Closed): Bluestore doubles mem usage when caching object content
- When trying to cache object content BlueStore uses twice as much memory than it really caches.
The root cause for ...
- 01:33 PM Linux kernel client Bug #19095 (In Progress): handle image feature mismatches
- 01:25 PM Bug #19197 (Can't reproduce): live obc after interval change on EC pool
- ...
- 01:06 PM CephFS Feature #19135: Multi-Mds: One dead, all dead; what is Robustness??
- After decreasing max_mds, you also need to use "ceph mds deactivate <rank>" to shrink the active cluster, so that one...
- 04:21 AM CephFS Feature #19135: Multi-Mds: One dead, all dead; what is Robustness??
- 04:21 AM CephFS Feature #19135: Multi-Mds: One dead, all dead; what is Robustness??
- John Spray wrote:
> Having multiple active MDS daemons does not remove the need for standby daemons. Set max_mds to...
- 12:54 PM CephFS Bug #19118: MDS heartbeat timeout during rejoin, when working with large amount of caps/inodes
- John Spray wrote:
> https://github.com/ceph/ceph/pull/13807
>
> I've added a periodic heartbeat reset in the long...
- 12:00 PM CephFS Bug #19118 (Fix Under Review): MDS heartbeat timeout during rejoin, when working with large amoun...
- https://github.com/ceph/ceph/pull/13807
I've added a periodic heartbeat reset in the long loop in export_remaining...
- 12:45 PM rgw Bug #19195 (Resolved): when converting region_map we need to use rgw_zone_root_pool
- Hammer version stores region map in rgw_zone_root_pool and not rgw_region_root_pool.
- 12:40 PM rbd Backport #18321: jewel: librbd::ResizeRequest: failed to update image header: (16) Device or reso...
- https://github.com/ceph/ceph/pull/12740 conflicts with current Jewel, so I've submitted a new PR: https://github.com/...
- 12:31 PM Bug #19188: error: pathspec 'jewel' did not match any file(s) known to git
- 12:31 PM Bug #19188: error: pathspec 'jewel' did not match any file(s) known to git
- Did your ceph branch wip-my-branch include edceabbd4769 ("qa/tasks/workunit: use ceph.git as an alternative of ceph-c...
- 12:20 PM rbd Feature #19034 (Fix Under Review): [rbd CLI] import-diff should use concurrent writes
- 12:20 PM rbd Feature #19034 (Fix Under Review): [rbd CLI] import-diff should use concurrent writes
- PR: https://github.com/ceph/ceph/pull/13782
- 11:21 AM rgw Feature #19053: rgw: swift API: support to retrieve manifest for SLO
- I'm working on this feature. In the build from the latest source code, I'm unable to retrieve the SLO object (GET ret...
- 11:02 AM Bug #19185: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
- 11:02 AM Bug #19185: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
- created https://github.com/ceph/ceph/pull/13805 to perform i/o with cache tier when upgrade from hammer to jewel
- 10:56 AM Feature #12430: libmailrados: Mailbox storage on RADOS
- OK, thanks Wido!
- 09:48 AM RADOS Bug #19191: osd/ReplicatedBackend.cc: 1109: FAILED assert(!parent->get_log().get_missing().is_mis...
- yuri, the backtrace you posted is another issue. i am building your branch of wip-yuri-testing_2017_3_4 to see if "ce...
- 01:22 AM RADOS Bug #19191: osd/ReplicatedBackend.cc: 1109: FAILED assert(!parent->get_log().get_missing().is_mis...
- Also see during PRs testing https://trello.com/c/il60a5yB
http://qa-proxy.ceph.com/teuthology/yuriw-2017-03-05_23:...
- 09:28 AM rbd Subtask #18787 (In Progress): rbd-mirror A/A: proxy InstanceReplayer APIs via InstanceWatcher RPC
- 09:25 AM rbd Subtask #18785 (Fix Under Review): rbd-mirror A/A: separate ImageReplayer handling from Replayer
- PR: https://github.com/ceph/ceph/pull/13803
- 09:14 AM rgw Backport #19176: jewel: swift API: cannot disable object versioning with empty X-Versions-Location
- Changing the subject field as it might be misleading and suggest we don't support disabling Swift's object versioning...
- 09:13 AM rgw Backport #19175: kraken: swift API: cannot disable object versioning with empty X-Versions-Location
- Changing the subject field as it might be misleading and suggest we don't support disabling Swift's object versioning...
- 09:11 AM rgw Bug #18852: swift API: cannot disable object versioning with empty X-Versions-Location
- Changing the subject field as it might be misleading and suggest we don't support disabling Swift's object versioning...
- 06:52 AM rgw Bug #19194 (Duplicate): after s3cmd put file to bucket, 'radosgw-admin usage show --categories=pu...
- * descriptions:
after put file to bucket with command: 's3cmd put file s3://bucket', check with command: 'radosgw-a...
- 06:31 AM Bug #19193 (Resolved): Latest 0.94.10 RPMs for Hammer / el6 / x86_64 missing at http://download.c...
- As per subject line, the 0.94.10 RPMs are not (yet) available for Hammer / el6 / x86_64.
It is noted that the 0.94...
- 06:13 AM CephFS Bug #18797 (Duplicate): valgrind jobs hanging in fs suite
- the master now passes, see http://pulpito.ceph.com/kchai-2017-03-06_05:57:24-fs-master---basic-mira/
- 04:54 AM Bug #19068 (Duplicate): timeout on "ceph --cluster ceph -- mon tell '*' injectargs '--mon_osd_dow...
- 03:36 AM Bug #19192 (Fix Under Review): brag fails to count "in" mds
- https://github.com/ceph/ceph/pull/13798
- 03:32 AM Bug #19192 (Resolved): brag fails to count "in" mds
- this is introduced by 4e9b953 (i.e. in jewel)
- 01:55 AM Linux kernel client Bug #19189: cephfs kernel 4.9.13 file read hangs
- The bug was introduced in 4.9 kernel by commit https://github.com/ceph/ceph-client/commit/1afe478569ba7414dde8a874dda...
03/05/2017
- 09:33 AM Backport #18724 (In Progress): jewel: osd: calc_clone_subsets misuses try_read_lock vs missing
- 03:54 AM Backport #18724 (New): jewel: osd: calc_clone_subsets misuses try_read_lock vs missing
- 03:54 AM Backport #18724: jewel: osd: calc_clone_subsets misuses try_read_lock vs missing
- Please let me close the PR for backporting.
- 03:46 AM Backport #18724: jewel: osd: calc_clone_subsets misuses try_read_lock vs missing
- @Nathan Should this backport request be achieved?
I'm asking because this backport request is really confusing bec...
- 08:51 AM Backport #18581: jewel: osd: ENOENT on clone
- NOTE: this backport was later reverted by https://github.com/ceph/ceph/pull/13280 (both backport and revert are conta...
03/04/2017
- 08:33 PM Bug #18371 (Resolved): ceph-disk: error on _bytes2str
- 08:33 PM Bug #18694 (Resolved): ceph-disk activate for partition failing
- 07:24 PM RADOS Bug #19191 (Resolved): osd/ReplicatedBackend.cc: 1109: FAILED assert(!parent->get_log().get_missi...
- ...
- 05:22 PM Backport #18431 (Resolved): kraken: ceph-disk: error on _bytes2str
- 04:54 PM Backport #19190: logrotate fails on debian distros with systemd
- Note: hammer may have some systemd stuff in it, but it is unsupported.
- 04:51 PM Backport #19190: logrotate fails on debian distros with systemd
- h3. description
logrotate fails on ceph v0.94.3 and later on debian distros with systemd. This probably happens be... - 04:11 PM Backport #19190: logrotate fails on debian distros with systemd
- Proposed fix : https://github.com/ceph/ceph/pull/13793
- 03:54 PM Backport #19190 (Rejected): logrotate fails on debian distros with systemd
- https://github.com/ceph/ceph/pull/13793
- 11:08 AM Linux kernel client Bug #19189 (Resolved): cephfs kernel 4.9.13 file read hangs
- using cephfs 10.2.5 on compute cluster with 4k cores and kernel 4.8.17 works like a charm. upgrading 3 nodes to 4.9.1...
- 01:50 AM Bug #19185: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
- 01:50 AM Bug #19185: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
- So my take on this is that for some reason, either the new jewel OSDs or the old hammer ones started creating GMT tim...
- 01:11 AM Bug #19188: error: pathspec 'jewel' did not match any file(s) known to git
- 01:11 AM Bug #19188: error: pathspec 'jewel' did not match any file(s) known to git
A workaround for this is to do the following:
git push ci wip-my-branch
git push origin wip-my-branch
WAIT FO...
- 12:44 AM Bug #19188 (Rejected): error: pathspec 'jewel' did not match any file(s) known to git
- Don't look for distinguished release names in ceph-ci.git
- 12:39 AM Bug #19188 (Rejected): error: pathspec 'jewel' did not match any file(s) known to git
dzafman-2017-03-03_09:17:05-rados-wip-15912-distro-basic-smithi/878196...
03/03/2017
- 11:36 PM Backport #19181 (In Progress): kraken: mon: force_create_pg could leave pg stuck in creating state
- 11:38 AM Backport #19181 (Resolved): kraken: mon: force_create_pg could leave pg stuck in creating state
- https://github.com/ceph/ceph/pull/13790
- 09:41 PM Backport #19182 (In Progress): jewel: mon: force_create_pg could leave pg stuck in creating state
- 11:38 AM Backport #19182 (Resolved): jewel: mon: force_create_pg could leave pg stuck in creating state
- https://github.com/ceph/ceph/pull/17008
- 08:54 PM Backport #19183 (In Progress): jewel: os/filestore/HashIndex: be loud about splits
- 11:38 AM Backport #19183 (Resolved): jewel: os/filestore/HashIndex: be loud about splits
- https://github.com/ceph/ceph/pull/13788
- 08:46 PM Bug #19187: Delete/discard operations initiated by a qemu/kvm guest get stuck
- @rbd info@ for the image, in case it is helpful:...
- 08:39 PM Bug #19187 (Closed): Delete/discard operations initiated by a qemu/kvm guest get stuck
- We are frequently seeing delete/discard operations get stuck on rbd devices attached to qemu/kvm VMs. In the guest, t...
- 07:07 PM Backport #19140 (In Progress): jewel: osdc/Objecter: If osd full, it should pause read op which w...
- 07:07 PM Backport #19140 (In Progress): jewel: osdc/Objecter: If osd full, it should pause read op which w...
- 11:35 AM Backport #19140 (Resolved): jewel: osdc/Objecter: If osd full, it should pause read op which w/ r...
- https://github.com/ceph/ceph/pull/17893
- 06:58 PM Backport #19142 (In Progress): jewel: Ceph Xenial Packages - ceph-base missing dependency for psmisc
- 11:35 AM Backport #19142 (Resolved): jewel: Ceph Xenial Packages - ceph-base missing dependency for psmisc
- https://github.com/ceph/ceph/pull/13786
- 06:48 PM Feature #12430: libmailrados: Mailbox storage on RADOS
- Florian Haas wrote:
> Wido, is there new information on this? As far as I can see, there haven't been any commits to...
- 07:46 AM Feature #12430: libmailrados: Mailbox storage on RADOS
- Wido, is there new information on this? As far as I can see, there haven't been any commits to the GitHub repo yet.
- 06:35 PM Backport #19141 (In Progress): hammer: osdc/Objecter: If osd full, it should pause read op which ...
- 11:35 AM Backport #19141 (Rejected): hammer: osdc/Objecter: If osd full, it should pause read op which w/ ...
- https://github.com/ceph/ceph/pull/13784
- 05:25 PM rgw Bug #18980 (Fix Under Review): rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
- https://github.com/ceph/ceph/pull/13783
The teuthology failures don't occur on kraken or earlier, but I still tagg...
- 04:40 PM rgw Bug #18980: rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
- Yeah, I managed to track this down in one of my runs: http://qa-proxy.ceph.com/teuthology/cbodley-2017-03-01_14:55:21...
- 04:45 PM Bug #19185 (Resolved): osd crashes during hit_set_trim and hit_set_remove_all if hit set object d...
- i started an upgrade process to go from 0.94.7 to 10.2.5 on a production cluster that is using cache tiering. this cl...
- 03:12 PM devops Bug #19184 (Fix Under Review): rpm: groups do not conform to latest openSUSE standards
- 03:12 PM devops Bug #19184 (Fix Under Review): rpm: groups do not conform to latest openSUSE standards
- https://github.com/ceph/ceph/pull/13781
- 02:33 PM devops Bug #19184 (Resolved): rpm: groups do not conform to latest openSUSE standards
- We are using "Group:" lines in the specfile to conform to openSUSE packaging guidelines. These "Group:" lines are ins...
- 02:15 PM rgw Backport #18896 (In Progress): kraken: should parse the url to http host to compare with the cont...
- 02:15 PM rgw Backport #18895 (Need More Info): jewel: should parse the url to http host to compare with the co...
- jewel backport is non-trivial because jewel does not have the ACLReferer struct introduced in src/rgw/rgw_acl.h by 78...
- 02:14 PM rgw Bug #18685: should parse the url to http host to compare with the container referer acl
- 02:14 PM rgw Bug #18685: should parse the url to http host to compare with the container referer acl
- jewel backport is non-trivial because jewel does not have the ACLReferer struct introduced in src/rgw/rgw_acl.h by 78...
- 02:08 PM rgw Backport #18866 (In Progress): jewel: 'radosgw-admin sync status' on master zone of non-master zo...
- 02:08 PM rgw Backport #18866 (In Progress): jewel: 'radosgw-admin sync status' on master zone of non-master zo...
- 02:06 PM rgw Backport #18811 (In Progress): jewel: librgw: RGWLibFS::setattr fails on directories
- 01:47 PM rgw Backport #18714 (In Progress): jewel: rgw: multipart uploads copy part support
- 01:42 PM devops Backport #18854 (In Progress): hammer: upstart: radosgw-all does not start on boot if ceph-base i...
- 12:39 PM Bug #16878: filestore: utilization ratio calculation does not take journal size into account
- @David: We were able to reproduce the bug on a very recent version of master plus your patches from https://github.co...
- 12:01 PM Bug #19139: osdc/Objecter: If osd full, it should pause read op which w/ rwordered flag
- 12:01 PM Bug #19139: osdc/Objecter: If osd full, it should pause read op which w/ rwordered flag
- Combine jewel backport with #19133 in a single PR.
- 11:34 AM Bug #19139 (Resolved): osdc/Objecter: If osd full, it should pause read op which w/ rwordered flag
- master commit: https://github.com/ceph/ceph/commit/07b2a22210e26eac1b2825c30629788da05e5e12
- 11:59 AM Bug #19133: osd ops (sent and?) arrive at osd out of order
- 11:59 AM Bug #19133: osd ops (sent and?) arrive at osd out of order
- I guess it would make sense to backport #19139 and this to jewel in a single PR.
- 11:37 AM Bug #19133 (Fix Under Review): osd ops (sent and?) arrive at osd out of order
- 03:21 AM Bug #19133: osd ops (sent and?) arrive at osd out of order
- https://github.com/ceph/ceph/pull/13759
- 03:15 AM Bug #19133: osd ops (sent and?) arrive at osd out of order
- 03:15 AM Bug #19133: osd ops (sent and?) arrive at osd out of order
- Okay, two bugs:
1. In jewel,...
- 02:53 AM Bug #19133: osd ops (sent and?) arrive at osd out of order
- Okay, the root cause here was a bug in my patch that was setting/unsetting the full flag on the osdmap. *But*... ther...
- 02:30 AM Bug #19133 (Resolved): osd ops (sent and?) arrive at osd out of order
- 02:30 AM Bug #19133 (Resolved): osd ops (sent and?) arrive at osd out of order
- ...
- 11:37 AM rgw Backport #19180 (Resolved): kraken: rgw: 204 No Content is returned when putting illformed Swift'...
- https://github.com/ceph/ceph/pull/14516
- 11:37 AM rgw Backport #19179 (Rejected): jewel: rgw: 204 No Content is returned when putting illformed Swift's...
- 11:37 AM rgw Backport #19178 (Resolved): kraken: anonymous user's error code of getting object is not consiste...
- https://github.com/ceph/ceph/pull/13877
- 11:37 AM rgw Backport #19177 (Rejected): jewel: anonymous user's error code of getting object is not consisten...
- https://github.com/ceph/ceph/pull/13876
- 11:36 AM rgw Backport #19176 (Resolved): jewel: swift API: cannot disable object versioning with empty X-Versi...
- https://github.com/ceph/ceph/pull/13823
- 11:36 AM rgw Backport #19175 (Resolved): kraken: swift API: cannot disable object versioning with empty X-Vers...
- https://github.com/ceph/ceph/pull/14519
- 11:36 AM rbd Backport #19174 (Resolved): jewel: rbd_clone_copy_on_read ineffective with exclusive-lock
- https://github.com/ceph/ceph/pull/16124
- 11:36 AM rbd Backport #19173 (Resolved): kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
- https://github.com/ceph/ceph/pull/14543
- 11:36 AM rgw Backport #19172 (Resolved): kraken: rgw: S3 create bucket should not do response in json
- https://github.com/ceph/ceph/pull/13875
- 11:36 AM rgw Backport #19171 (Resolved): jewel: rgw: S3 create bucket should not do response in json
- https://github.com/ceph/ceph/pull/13874
- 11:36 AM rgw Backport #19170 (Resolved): kraken: rgw_file: allow setattr on placeholder (common_prefix) direc...
- https://github.com/ceph/ceph/pull/13871
- 11:36 AM rgw Backport #19169 (Resolved): jewel: rgw_file: allow setattr on placeholder (common_prefix) direct...
- https://github.com/ceph/ceph/pull/13869
- 11:36 AM rgw Backport #19168 (Resolved): kraken: rgw_file: RGWReaddir (and cognate ListBuckets request) don't...
- https://github.com/ceph/ceph/pull/13871
- 11:36 AM rgw Backport #19167 (Resolved): jewel: rgw_file: RGWReaddir (and cognate ListBuckets request) don't ...
- https://github.com/ceph/ceph/pull/13869
- 11:36 AM rgw Backport #19166 (Resolved): kraken: rgw_file: "exact match" invalid for directories, in RGWLibFS...
- https://github.com/ceph/ceph/pull/13871
- 11:36 AM rgw Backport #19165 (Resolved): jewel: rgw_file: "exact match" invalid for directories, in RGWLibFS:...
- https://github.com/ceph/ceph/pull/13869
- 11:36 AM rgw Backport #19164 (Resolved): kraken: radosgw-admin: add the 'object stat' command to usage
- https://github.com/ceph/ceph/pull/13873
- 11:36 AM rgw Backport #19163 (Resolved): jewel: radosgw-admin: add the 'object stat' command to usage
- https://github.com/ceph/ceph/pull/13872
- 11:36 AM rgw Backport #19162 (Resolved): kraken: rgw_file: fix marker computation
- https://github.com/ceph/ceph/pull/13871
- 11:36 AM rgw Backport #19161 (Resolved): jewel: rgw_file: fix marker computation
- https://github.com/ceph/ceph/pull/13869
- 11:36 AM rgw Backport #19160 (Resolved): kraken: multisite: RGWMetaSyncShardControlCR gives up on EIO
- https://github.com/ceph/ceph/pull/13868
- 11:36 AM rgw Backport #19159 (Resolved): jewel: multisite: RGWMetaSyncShardControlCR gives up on EIO
- https://github.com/ceph/ceph/pull/13867
- 11:36 AM rgw Backport #19158 (Resolved): jewel: RGW health check errors out incorrectly
- https://github.com/ceph/ceph/pull/13865
- 11:35 AM rgw Backport #19157 (Resolved): kraken: RGW health check errors out incorrectly
- https://github.com/ceph/ceph/pull/13866
- 11:35 AM rgw Backport #19156 (Resolved): kraken: rgw: typo in rgw_admin.cc
- https://github.com/ceph/ceph/pull/13864
- 11:35 AM rgw Backport #19155 (Resolved): jewel: rgw: typo in rgw_admin.cc
- https://github.com/ceph/ceph/pull/13863
- 11:35 AM rgw Backport #19154 (Resolved): kraken: rgw_file: fix recycling of invalid mkdir handles
- https://github.com/ceph/ceph/pull/13871
- 11:35 AM rgw Backport #19153 (Resolved): jewel: rgw_file: fix recycling of invalid mkdir handles
- https://github.com/ceph/ceph/pull/13848
- 11:35 AM rgw Backport #19152 (Resolved): jewel: rgw_file: restore (corrected) fix for dir "partial match" (re...
- https://github.com/ceph/ceph/pull/13858
- 11:35 AM rgw Backport #19151 (Duplicate): kraken: rgw_file: avoid interning ".." in FHCache table and don't r...
- 11:35 AM rgw Backport #19150 (Duplicate): jewel: rgw_file: avoid interning ".." in FHCache table and don't re...
- 11:35 AM rgw Backport #19149 (Resolved): kraken: rgw_file: ensure valid_s3_object_name for directories
- https://github.com/ceph/ceph/pull/14518
- 11:35 AM rgw Backport #19148 (Resolved): jewel: rgw daemon's DUMPABLE flag is cleared by setuid preventing cor...
- https://github.com/ceph/ceph/pull/13844
- 11:35 AM rgw Backport #19147 (Resolved): kraken: rgw daemon's DUMPABLE flag is cleared by setuid preventing co...
- https://github.com/ceph/ceph/pull/13845
- 11:35 AM rgw Backport #19146 (Resolved): kraken: rgw: a few cases where rgw_obj is incorrectly initialized
- https://github.com/ceph/ceph/pull/13843
- 11:35 AM rgw Backport #19145 (Resolved): jewel: rgw: a few cases where rgw_obj is incorrectly initialized
- https://github.com/ceph/ceph/pull/13842
- 11:35 AM rgw Backport #19144 (Resolved): kraken: rgw_file: FHCache residence check should be exhaustive
- https://github.com/ceph/ceph/pull/13871
- 11:35 AM rgw Backport #19143 (Resolved): jewel: rgw_file: FHCache residence check should be exhaustive
- https://github.com/ceph/ceph/pull/14169
- 10:18 AM CephFS Bug #16914: multimds: pathologically slow deletions in some tests
- Pulled the test off wip-11950 and onto https://github.com/ceph/ceph/pull/13770
- 09:32 AM Feature #18235 (Pending Backport): os/filestore/HashIndex: be loud about splits
- 09:09 AM CephFS Feature #19135 (Rejected): Multi-Mds: One dead, all dead; what is Robustness??
- Having multiple active MDS daemons does not remove the need for standby daemons. Set max_mds to something less than ...
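(Roughly the sequence being described, using an assumed filesystem name "cephfs" and pre-luminous syntax; the rank and values are illustrative:
ceph fs set cephfs max_mds 1
ceph mds deactivate 1
i.e. lower max_mds first, then deactivate the now-surplus rank, so the remaining active daemons plus standbys can cover failures.)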
- 07:54 AM CephFS Feature #19135: Multi-Mds: One dead, all dead; what is Robustness??
- Zheng Yan wrote:
> multiple active mds is mainly for improving performance (balance load to multiple mds). robustnes... - 06:59 AM CephFS Feature #19135: Multi-Mds: One dead, all dead; what is Robustness??
- multiple active mds is mainly for improving performance (balance load to multiple mds). robustness is achieved standb...
- 02:57 AM CephFS Feature #19135 (Rejected): Multi-Mds: One dead, all dead; what is Robustness??
- I have three mds, when I copy file to fuse-type mountpoint for test.
when I shutdown one mds, the client hang, the m...
- 08:11 AM Bug #19138 (Won't Fix): "Ceph -w" may have overlapped output when there are too many PG status
- An example is as follows.
2017-02-28 17:20:15.644382 mon.0 [INF] pgmap v1240292: 1024 pgs: 1 active+remapped+backf...
- 07:57 AM Bug #18962: ceph-disk: Zap disk doesn't clear OSD journal data
- Updated pull request https://github.com/ceph/ceph/pull/13766
- 07:19 AM CephFS Feature #19137 (New): samba: implement aio callback for ceph vfs module
- 03:22 AM rgw Bug #19136 (Resolved): The parameter of torrent request is not the same as amazon s3
- In amazon s3, the get torrent request is: `GET /ObjectName?torrent`
In ceph, request is `GET /ObjectName?get_torrent...
- 03:02 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- turns out, to split the single big debuginfo packages into subpackages, is not a trivial task. see https://bugzil...
- 02:55 AM Bug #19076: osd/ReplicatedBackend.cc: 884: FAILED assert(j != bc->pulling.end())
- 02:55 AM Bug #19076: osd/ReplicatedBackend.cc: 884: FAILED a ssert(j != bc->pulling.end())
- /a/sage-2017-03-03_00:23:40-rados-wip-osd-full---basic-smithi/875756
- 02:33 AM Bug #19134 (Resolved): mds suicide during multimds test
- ...
- 12:45 AM rbd Feature #18748 (Fix Under Review): [cli] add ability to demote/promote all mirrored images in a pool
- *PR*: https://github.com/ceph/ceph/pull/13758
03/02/2017
- 11:10 PM Bug #19129 (Pending Backport): Ceph Xenial Packages - ceph-base missing dependency for psmisc
- 11:44 AM Bug #19129 (Fix Under Review): Ceph Xenial Packages - ceph-base missing dependency for psmisc
- https://github.com/ceph/ceph/pull/13744
- 11:20 AM Bug #19129 (Resolved): Ceph Xenial Packages - ceph-base missing dependency for psmisc
- I'm running ceph on Xenial
root@ceph-2-man5-sata-c18:~# lsb_release -a
No LSB modules are available.
Distributor...
- 09:04 PM Bug #19131 (Fix Under Review): snap trim blocked behind ec read, never woken, on kraken-x upgrade
- https://github.com/ceph/ceph/pull/13755
- 08:12 PM Bug #19131 (Rejected): snap trim blocked behind ec read, never woken, on kraken-x upgrade
- /a/sage-2017-03-02_17:08:17-upgrade:kraken-x-master---basic-smithi/875222
/a/sage-2017-03-02_17:08:17-upgrade:kraken...
- 07:52 PM rbd Bug #19130 (Fix Under Review): Enabling mirroring for a pool with clones may fail
- PR: https://github.com/ceph/ceph/pull/13752
- 01:53 PM rbd Bug #19130 (Resolved): Enabling mirroring for a pool with clones may fail
- When enabling RBD mirroring within a pool (rbd.mirror_mode_set(ioctx, RBD_MIRROR_MODE_POOL)), it tries to enable mirr...
- 07:47 PM rbd Cleanup #19104 (Fix Under Review): [test] librados_test_stub should support multiple connections
- 07:47 PM rbd Cleanup #19104 (Fix Under Review): [test] librados_test_stub should support multiple connections
- *PR*: https://github.com/ceph/ceph/pull/13737
- 07:47 PM rbd Cleanup #19010 (Resolved): Simplify asynchronous image close behavior
- 07:36 PM rgw Bug #18980: rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
- seems like a problem where we send a delete with an empty object name. Maybe on radosgw-admin user rm, but not 100% s...
- 07:12 PM rgw Bug #17995 (Fix Under Review): rgw: zonegroup add cmd didn't update zone's realm_id
- 07:12 PM rgw Bug #17995 (Fix Under Review): rgw: zonegroup add cmd didn't update zone's realm_id
- 07:12 PM rgw-testing Bug #18341 (In Progress): add tenant testing to s3tests and swift test
- 07:11 PM rgw Bug #18230 (Need More Info): rgw: ERROR: got unexpected error when trying to read object: -2
- Still waiting for more info
- 07:10 PM rgw Bug #18343 (Need More Info): Error responses do not fully conform to AWS spec
- Hi, is this proposal still active? The PR above was closed-by-author.
- 07:07 PM rgw Bug #18426 (Fix Under Review): sync fail when upload file after disable versioning
- 07:06 PM rgw Bug #18210 (In Progress): rgw: multisite: commit period issue when creating multiple zonegroups
- 07:05 PM rgw Bug #18359 (Fix Under Review): radosgw Segmentation fault when use swiftclient upload file
- 07:05 PM rgw Bug #15513 (Resolved): RGW multisite: Delete requests do not propagate to the peer zone
- 07:04 PM rgw Bug #12898 (Won't Fix): Objects starting with underscore inaccessible after upgrade to 0.94.3
- 06:59 PM rgw Bug #18592: radosgw/teuthology/ssl: 30 s3-tests fail with https.
- 06:58 PM rgw Bug #18585: librgw: ensure clean shutdown after abnormal startup
- 06:58 PM rgw Bug #19011 (In Progress): rgw: add radosclient finisher to perf counter
- triage: adding perf counters has runtime cost; verifying whether rados team is on board for that
- 06:55 PM rgw Bug #18796 (Pending Backport): rgw: 204 No Content is returned when putting illformed Swift's ACL
- 06:55 PM rgw Bug #16736: rgw: a bucket with non-empty tenant can't link to specified user
- triage: need forward progress for Luminous; policies will be available; need more discussion by 3/16
- 06:51 PM rgw Bug #18725 (In Progress): "zonegroupmap set" does not work
- triage: propose to remove this command
- 06:51 PM rgw Bug #18860 (Resolved): the thread name of sync is not clear
- 06:51 PM rbd Subtask #18784 (Resolved): rbd-mirror A/A: leader should track up/down rbd-mirror instances
- 06:49 PM rgw Bug #18852 (Pending Backport): swift API: cannot disable object versioning with empty X-Versions-...
- 06:48 PM rgw Bug #18806 (Pending Backport): anonymous user's error code of getting object is not consistent wi...
- 06:38 PM rgw Bug #19085: Objects incorrectly report size 0 bytes after upgrade to 11.2.0
- Hi Pranjal,
Can you update this ticket with your analysis? If you have a proposed fix, create a github pull reque...
- 06:34 PM rgw Bug #18940 (Closed): ERROR RESTFUL_IO with S3 GET/PUT operations
- 06:13 PM Bug #19099 (Resolved): valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incr...
- 05:24 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- ...
- 04:28 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- testing at http://pulpito.ceph.com/kchai-2017-03-02_16:25:12-rados:verify-wip-disable-dwz-kefu---basic-mira/...
- 03:58 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 03:58 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- i guess more importantly, is there a reason we want/need the compressed dwz in the package? maybe we can just turn ...
- 03:58 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 03:58 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- FWIW debian does a -dbg packages for every subpackage (e.g., ceph-osd-dbg, ceph-test-dbg) and it is *so* much nicer!!...
- 03:55 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 03:55 PM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- teuthology-suite -s rados/verify --filter valgrind --limit 1 -p 10 -c wip-sage-testing
tail teuthology.log to see ...
- 11:05 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- If someone can give me clear instructions on how to reproduce this (assume I'm dumb) or better yet give me access to ...
- 05:03 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 05:03 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- debuginfo sizes collected from https://download.ceph.com/ and https://copr-be.cloud.fedoraproject.org/results/badone/...
- 04:46 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 04:46 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- guess the redhat-rpm-config is updated in our build machine to 9.1.0-30 or up. which includes the fix of https://bugz...
- 12:50 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 12:50 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- okay, according to tomhughes in #valgrind-dev, this is the compressed debug info, and because it's compressed valgrin...
- 12:20 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 12:20 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- hmm, it is reading the debuginfo file......
- 12:04 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- 12:04 AM Bug #19099: valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incredibly* slo...
- Watching this happen, I observed all 3 osds on a host at 100% CPU, and very very slow progression of log messages thro...
- 05:24 PM Bug #19113 (Duplicate): slow osd boot with valgrind (reached maximum tries (50) after waiting for...
- 05:24 PM Bug #19113 (Duplicate): slow osd boot with valgrind (reached maximum tries (50) after waiting for...
- 04:07 PM Bug #19107 (Resolved): avoid unnecessary memory copy in ConfUtil class
- 11:52 AM CephFS Bug #17656: cephfs: high concurrent causing slow request
- jichao sun wrote:
> I have the same problem too!!!
- 08:27 AM CephFS Bug #17656: cephfs: high concurrent causing slow request
- 08:27 AM CephFS Bug #17656: cephfs: high concurrent causing slow request
- I have the same problem too!!!
- 11:48 AM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- > I believe we no longer support that in master. Do we in kraken?
Nope. Jewel is the last release that uses autot...
- 10:29 AM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- Well, I think I have good news. Turns out the cmake logic already gets the right soname in xenial. So, nothing to f...
- 10:13 AM Bug #18962: ceph-disk: Zap disk doesn't clear OSD journal data
- 10:13 AM Bug #18962: ceph-disk: Zap disk doesn't clear OSD journal data
- 10:05 AM Bug #15781: No scrub information available for pg 0.8
- We've just found the same issue on our cluster too:
ceph health detail
HEALTH_ERR 1 pgs inconsistent; 6 scrub err...
- 08:33 AM CephFS Bug #18995 (Resolved): ceph-fuse always fails If pid file is non-empty and run as daemon
- 08:11 AM CephFS Bug #18730: mds: backtrace issues getxattr for every file with cap on rejoin
- Zheng Yan wrote:
> I think we should design a new mechanism to track in-use inodes (current method isn't scalable be...
- 03:40 AM CephFS Bug #18730: mds: backtrace issues getxattr for every file with cap on rejoin
- I think we should design a new mechanism to track in-use inodes (current method isn't scalable because it journals al...
- 06:46 AM rbd Bug #19128 (Resolved): rbd import needs to sanity check auto-generated image name
- I see a qa :
[root@lab8106 ~]# rbd import /bin/ls ls@snap
rbd: destination snapname specified for a command that d...
- 05:15 AM Bug #18732 (Duplicate): monitor takes 2 minutes to start
- 04:08 AM Linux kernel client Feature #15107 (Resolved): kcephfs: be less pessimistic about rcu walk in d_revalidate
- by commits:
5b484a513149f53613d376a9d1cd0391de099fb4
14fb9c9efe3570459b6bddd76a990140917237ad
f49d1e058d23a5fabeb1...
- 04:05 AM Linux kernel client Feature #17805 (Resolved): Match fuse_require_active_mds behaviour in kernel client
- 03:02 AM Linux kernel client Bug #19127 (Resolved): NULL pointer dereference in ceph_readdir
- ...
- 02:54 AM Feature #5521 (Duplicate): Enhance PGLS or new op to list all namespace/objects in a pool.
- Duplicate of #9031
03/01/2017
- 11:30 PM CephFS Bug #19118: MDS heartbeat timeout during rejoin, when working with large amount of caps/inodes
- I have that already. I did set the beacon_grace to 600s to walk around the bug and bring the cluster back.
Seems r...
- 11:16 PM CephFS Bug #19118: MDS heartbeat timeout during rejoin, when working with large amount of caps/inodes
- I have that already. I did set the beacon_grace to 600s to walk around the bug and bring the cluster back.
In firs...
- 03:52 PM CephFS Bug #19118: MDS heartbeat timeout during rejoin, when working with large amount of caps/inodes
- May be related to http://tracker.ceph.com/issues/18730, although perhaps not since that one shouldn't be causing the ...
- 03:36 PM CephFS Bug #19118 (Resolved): MDS heartbeat timeout during rejoin, when working with large amount of cap...
- 03:36 PM CephFS Bug #19118 (Resolved): MDS heartbeat timeout during rejoin, when working with large amount of cap...
- We set an alarm every OPTION(mds_beacon_grace, OPT_FLOAT, 15) seconds, if mds_rank doesn't finish its task within this...
- 11:26 PM Bug #18298 (Pending Backport): mon: force_create_pg could leave pg stuck in creating state
- 11:26 PM Bug #18298 (Pending Backport): mon: force_create_pg could leave pg stuck in creating state
- 10:55 PM rgw Bug #19089 (Pending Backport): rgw daemon's DUMPABLE flag is cleared by setuid preventing coredumps
- 06:11 AM rgw Bug #19089 (Fix Under Review): rgw daemon's DUMPABLE flag is cleared by setuid preventing coredumps
- 10:30 PM Fix #11081 (Resolved): ceph.conf - $host always expands to localhost instead of actual hostname
- 10:27 PM Bug #19110 (Resolved): segv in OpTracker::dump_ops_in_flight
- 08:37 AM Bug #19110: segv in OpTracker::dump_ops_in_flight
- https://github.com/ceph/ceph/pull/13702
- 10:21 PM Backport #18610 (New): kraken: osd: ENOENT on clone
- 10:21 PM Backport #18610 (New): kraken: osd: ENOENT on clone
- 07:31 PM rbd Bug #18938: Unable to build 11.2.0 under i686
- Hello,
I almost have the same problem but on an ARM platform....
- 07:26 PM Bug #19119: pre-jewel "osd rm" incrementals are misinterpreted
- https://github.com/ceph/ceph/pull/13732 (jewel note)
https://github.com/ceph/ceph/pull/13731 (master note)
https://...
- 06:00 PM Bug #19119 (Fix Under Review): pre-jewel "osd rm" incrementals are misinterpreted
- https://github.com/ceph/ceph/pull/13730
- 05:36 PM Bug #19119: pre-jewel "osd rm" incrementals are misinterpreted
- 04:23 PM Bug #19119: pre-jewel "osd rm" incrementals are misinterpreted
- It looks like Sage's commit in https://github.com/ceph/ceph/pull/6900 is the culprit. That "set weight to 1" was car...
- 03:56 PM Bug #19119 (Resolved): pre-jewel "osd rm" incrementals are misinterpreted
- I have a bunch of misdirected requests from a recent kernel client to a hammer cluster, triggered by osd rm:...
- 07:15 PM rbd Feature #19123 (New): rbd/rados drivers in PyPI repo
- 07:15 PM rbd Feature #19123 (New): rbd/rados drivers in PyPI repo
- We have a need to install modules wholly contained within python virtualenv. As of now, we're extracting compiled dri...
- 06:59 PM Bug #19039: ceph-osd --mkkey --mkfs segfaults with bluestore
- 06:59 PM Bug #19039: ceph-osd --mkkey --mkfs segfaults with bluestore
- The culprit seems to be passing...
- 06:30 PM Linux kernel client Bug #19122 (Resolved): pre-jewel "osd rm" incrementals are misinterpreted (kernel client)
- 06:30 PM Linux kernel client Bug #19122 (Resolved): pre-jewel "osd rm" incrementals are misinterpreted (kernel client)
- Carry over the fix for #19119.
- 05:36 PM CephFS Bug #18798: FS activity hung, MDS reports client "failing to respond to capability release"
- Zheng Yan wrote:
> probably fixed by https://github.com/ceph/ceph-client/commit/10a2699426a732cbf3fc9e835187e8b914f0...
- 03:31 PM CephFS Feature #16523 (In Progress): Assert directory fragmentation is occurring during stress tests
- 02:55 PM rgw Backport #18626 (In Progress): jewel: TempURL verification broken for URI encoded object names
- https://github.com/ceph/ceph/pull/13145 was opened, but had a compilation error, so it was replaced by https://github...
- 01:40 PM rgw Backport #18626: jewel: TempURL verification broken for URI encoded object names
- 01:40 PM rgw Backport #18626: jewel: TempURL verification broken for URI encoded object names
- https://github.com/ceph/ceph/pull/13724
- 01:18 PM rgw Bug #19085: Objects incorrectly report size 0 bytes after upgrade to 11.2.0
- I'm working on this. Can someone set the assignee to me (Pranjal Agrawal), since I don't have sufficient privileges yet?
- 12:07 PM rgw Bug #19025 (Pending Backport): RGW health check errors out incorrectly
- 12:07 PM rgw Bug #19025 (Pending Backport): RGW health check errors out incorrectly
- 11:09 AM CephFS Bug #18883: qa: failures in samba suite
- open ticket for samba build http://tracker.ceph.com/issues/19117
- 10:16 AM Bug #19116 (Duplicate): Sporadic segfaults in lockdep_locked on startup
- 10:16 AM Bug #19116 (Duplicate): Sporadic segfaults in lockdep_locked on startup
- /home/jenkins-build/build/workspace/ceph-pull-requests/src/ceph_objectstore_tool_dir/out/client.admin.18512.asok is t...
- 10:08 AM Bug #19099 (Fix Under Review): valgrind runs time out in wait_for_up_osds after 300s; osd logs sh...
- 10:08 AM Bug #19099 (Fix Under Review): valgrind runs time out in wait_for_up_osds after 300s; osd logs sh...
- * https://github.com/ceph/teuthology/pull/1036
* https://github.com/ceph/ceph/pull/13721
checked dmesg, on the te...
- 09:00 AM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- @Marcus: This issue (#19096) is for your autotools fix - are you going to open a PR against jewel for that, at or aro...
- 08:58 AM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- 08:58 AM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- Marcus Watts wrote:
> Is it worth trying to fix the cmake for this in jewel also or do we just ignore it?
cmake i... - 07:04 AM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- I'll make a PR for master that fixes cmake. Is it worth trying to fix the cmake for this in jewel also or do we just...
- 08:42 AM Bug #19113: slow osd boot with valgrind (reached maximum tries (50) after waiting for 300 seconds)
- @Kefu: see #19099
- 05:29 AM Bug #19113: slow osd boot with valgrind (reached maximum tries (50) after waiting for 300 seconds)
- ...
- 05:21 AM Bug #19113 (Duplicate): slow osd boot with valgrind (reached maximum tries (50) after waiting for...
- it takes more than 5 minutes to boot an OSD with valgrind....
- 08:21 AM CephFS Bug #17828 (Need More Info): libceph setxattr returns 0 without setting the attr
- ceph.quota.max_files is a hidden xattr. It doesn't show up in listxattr; you need to get it explicitly (getfattr -n ceph.qu...
- 07:19 AM rgw Backport #19115 (In Progress): jewel: rgw_file: ensure valid_s3_object_name for directories
- https://github.com/ceph/ceph/pull/13717
- 07:03 AM rgw Backport #19115 (Resolved): jewel: rgw_file: ensure valid_s3_object_name for directories
- https://github.com/ceph/ceph/pull/13717
- 07:12 AM Backport #18881 (Rejected): jewel: fix test_rgw_ldap.cc for search filter
- cmake is not ready on jewel
- 06:59 AM rgw Bug #19114 (Duplicate): radosgw: problem on xenial resolving sonames for libssl.so libcrypto.so
- Oops. This is a duplicate of 19098. Sorry.
- 06:55 AM rgw Bug #19114 (Duplicate): radosgw: problem on xenial resolving sonames for libssl.so libcrypto.so
- The logic I have for resolving sonames at configure time relies on building a dummy program linked against -lssl -lcr...
- 04:58 AM rgw Feature #19052: rgw: multiple zonegroups: support of request redirection to access different zone...
- I attached sequence charts created by the "PlantUML plugin":http://www.redmine.org/plugins/plantuml
!plantuml_zone_config... - 04:51 AM rgw Bug #19043: rgw: multiple zonegroups: asymmetric behavior of creating bucket on secondary zonegroup
- I attached sequence charts created by the "PlantUML plugin":http://www.redmine.org/plugins/plantuml
!plantuml_zone_config... - 04:40 AM rgw Bug #19042: rgw: multiple zonegroups: bucket can't be created if the user name isn't registered o...
- I attached sequence charts created by the "PlantUML plugin":http://www.redmine.org/plugins/plantuml
If the user account d... - 04:32 AM rgw Bug #19041: rgw: multiple zonegroups: asymmetric behavior of creating user account
- I attached a sequence chart created by the "PlantUML plugin":http://www.redmine.org/plugins/plantuml
!plantuml_zone_confi... - 03:11 AM mgr Feature #17449: Make stats period configurable
- https://github.com/ceph/ceph/pull/12732 66fd36d51fb456140d25cfe7a73b50ff899db6d6
- 01:22 AM rgw Bug #19112 (Resolved): rgw_file: RGWFileHandle dtor must also cond-unlink from FHCache
- Formerly masked in part by the reclaim() action, direct-delete now substitutes for reclaim() iff its LRU lane is over...
02/28/2017
- 09:55 PM Bug #16878: filestore: utilization ratio calculation does not take journal size into account
- @nathan I would love to know if the fix I have for 15912 which accounts for the amount of dirty journal fixes the iss...
- 09:40 PM rgw Bug #19111 (Pending Backport): rgw_file: FHCache residence check should be exhaustive
- PR to master: https://github.com/ceph/ceph/pull/13703
- 08:56 PM rgw Bug #19111 (Resolved): rgw_file: FHCache residence check should be exhaustive
- Current "! deleted()" check is not explicit and apparently not 100% reliable (nfs-ganesha mdcache lru cache full case).
- 08:19 PM rbd Cleanup #19104 (In Progress): [test] librados_test_stub should support multiple connections
- 03:02 AM rbd Cleanup #19104 (Resolved): [test] librados_test_stub should support multiple connections
- For tests where client ids need to be unique or where blacklisting is required, the librados_test_stub should be able...
- 08:15 PM rgw Bug #19019 (Pending Backport): multisite: RGWMetaSyncShardControlCR gives up on EIO
- 07:20 PM Bug #19097 (Resolved): segv in MonClient::build_authorizer(int)
- 07:03 AM Bug #19097 (Fix Under Review): segv in MonClient::build_authorizer(int)
- https://github.com/ceph/ceph/pull/13685
- 02:43 AM Bug #19097: segv in MonClient::build_authorizer(int)
- From code inspection I'm suspicious of this change that merged on Feb 14:...
- 07:20 PM Bug #19110 (Fix Under Review): segv in OpTracker::dump_ops_in_flight
- 06:10 PM Bug #19110: segv in OpTracker::dump_ops_in_flight
- /a/sage-2017-02-27_21:39:28-rados-wip-sage-testing---basic-smithi/866800
- 05:35 PM Bug #19110: segv in OpTracker::dump_ops_in_flight
- Actually I don't think this is a race. I think it's just unsafe clearing of desc:...
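For illustration only, a minimal C++ sketch (hypothetical names, not the actual OpTracker code) of why clearing a string that another thread may be dumping is unsafe, and a lock-guarded alternative:
<pre>
#include <mutex>
#include <string>

struct TrackedOpSketch {                  // hypothetical stand-in for a tracked op
  std::mutex lock;
  std::string desc;

  // Unsafe: if a dump thread is reading 'desc' concurrently, both threads
  // touch the same string buffer with no synchronization -- a data race.
  void clear_desc_unsafe() { desc.clear(); }

  // Safer: clear and read under the same lock.
  void clear_desc_locked() {
    std::lock_guard<std::mutex> l(lock);
    desc.clear();
  }

  std::string dump_desc() {
    std::lock_guard<std::mutex> l(lock);
    return desc;                          // copy out while holding the lock
  }
};
</pre>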
- 05:13 PM Bug #19110 (Resolved): segv in OpTracker::dump_ops_in_flight
- ...
- 05:00 PM rbd Cleanup #19010 (Fix Under Review): Simplify asynchronous image close behavior
- *PR*: https://github.com/ceph/ceph/pull/13701
- 02:24 PM rbd Feature #19034 (In Progress): [rbd CLI] import-diff should use concurrent writes
- 02:12 PM CephFS Bug #19103: cephfs: Out of space handling
- Looking again now that I'm a few coffees into my day -- all the cephfs enospc stuff is just aimed at providing a slic...
- 08:58 AM CephFS Bug #19103: cephfs: Out of space handling
- Believe it or not I sneezed and somehow that caused me to select some affected versions...
- 08:58 AM CephFS Bug #19103: cephfs: Out of space handling
- Also, I'm not sure we actually need to do sync writes when the cluster is near full -- we already have machinery that...
- 08:56 AM CephFS Bug #19103: cephfs: Out of space handling
- Could the "ENOSPC on failsafe_full_ratio" behaviour be the default? It seems like any application layer that wants t...
- 12:13 AM CephFS Bug #19103 (Won't Fix): cephfs: Out of space handling
Cephfs needs to be more careful on a cluster with almost full OSDs. There is a delay in OSDs reporting stats, a MO...- 01:58 PM rgw Bug #19096 (Pending Backport): rgw: a few cases where rgw_obj is incorrectly initialized
- 01:37 PM CephFS Feature #19109 (Resolved): Use data pool's 'df' for statfs instead of global stats, if there is o...
The client sends a MStatfs to the mon to get the info for a statfs system call. Currently the mon gives it the glo...- 01:26 PM rbd Bug #19108 (Fix Under Review): rbd-nbd: prompt message when input nbds_max, and nbd module alread...
- PR: https://github.com/ceph/ceph/pull/13694
- 12:58 PM rbd Bug #19108 (Resolved): rbd-nbd: prompt message when input nbds_max, and nbd module already loaded.
- When the user specifies --nbds_max, rbd-nbd will try to load the nbd module and set the nbds_max parameter. But if the nbd module is al...
- 09:17 AM mgr Feature #17503 (Fix Under Review): Enable python modules to subscribe to cluster log
- https://github.com/ceph/ceph/pull/13690
- 09:14 AM mgr Bug #17738: Deadlock when shutdown() is called while still in init()
- Chang: hi, I think I wasn't receiving emails for mgr tickets, so I didn't notice your comments. Are you still workin...
- 09:11 AM Bug #19107 (Resolved): avoid unnecessary memory copy in ConfUtil class
- Prefer passing by reference to avoid the copy.
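A generic sketch of what "prefer passing by reference" means here (hypothetical names, not the actual ConfUtil code): by-value parameters copy their arguments on every call, while const references do not:
<pre>
#include <map>
#include <string>

using ConfMap = std::map<std::string, std::string>;   // hypothetical table

// Pass-by-value: both the map and the key are copied on every call.
static bool get_val_copy(ConfMap table, std::string key, std::string *out) {
  auto it = table.find(key);
  if (it == table.end())
    return false;
  *out = it->second;
  return true;
}

// Pass-by-const-reference: same behaviour, no copies of the inputs.
static bool get_val_ref(const ConfMap &table, const std::string &key,
                        std::string *out) {
  auto it = table.find(key);
  if (it == table.end())
    return false;
  *out = it->second;
  return true;
}
</pre>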
- 09:08 AM mgr Feature #17456 (In Progress): Migrate nonessential PGMap data away from ceph-mon
- The initial movement of pg commands: https://github.com/ceph/ceph/pull/13617
- 09:06 AM mgr Bug #17455 (Resolved): mgr ignores mgr_beacon_period setting
- ...
- 09:03 AM CephFS Bug #17828: libceph setxattr returns 0 without setting the attr
- This ticket didn't get noticed because it was filed in the 'mgr' component instead of the 'fs' component.
Chris: d... - 09:01 AM Cleanup #19106 (Resolved): config: eliminate config_t::set_val unsafe option
- https://trello.com/c/4TehDNGd
- 09:01 AM mgr Bug #18764: Crash on missing 'ceph_version' in daemon metadata
- n.b. it's PyModules::dump_server that actually does the naughty .at() call.
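As a general illustration of why an unchecked .at() is risky when a metadata key such as "ceph_version" can be missing (standalone sketch, not the mgr code):
<pre>
#include <iostream>
#include <map>
#include <string>

int main() {
  std::map<std::string, std::string> metadata;   // daemon metadata, key absent

  // metadata.at("ceph_version") would throw std::out_of_range here and,
  // if uncaught, terminate the process.

  // Defensive lookup: check first and fall back to a placeholder.
  auto it = metadata.find("ceph_version");
  std::cout << (it != metadata.end() ? it->second : "unknown") << std::endl;
  return 0;
}
</pre>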
- 08:49 AM rgw Documentation #18889 (Pending Backport): rgw: S3 create bucket should not do response in json
- 07:30 AM rbd Subtask #18784 (Fix Under Review): rbd-mirror A/A: leader should track up/down rbd-mirror instances
- PR: https://github.com/ceph/ceph/pull/13571
- 07:29 AM rbd Subtask #18783 (Resolved): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/follower RPC
- 04:42 AM Bug #19105 (Won't Fix): ceph-disk prepare fails when the output of parted has got more string att...
- Hi, I've encountered an error during ceph-disk_prepare
"stderr": "prepare_device: OSD will not be hot-swappable i... - 04:30 AM Bug #18967: Cluster can't process any new requests after 3 hosts crashed in 4+2 EC
- Wow, I just found that this problem has already been solved:
https://github.com/ceph/ceph/pull/12342
- 03:15 AM rbd Bug #18888 (Pending Backport): rbd_clone_copy_on_read ineffective with exclusive-lock
- 01:45 AM Bug #19039: ceph-osd --mkkey --mkfs segfaults with bluestore
- This issue only happens when I set WITH_LTTNG, WITH_EVENTTRACE, HAVE_BABELTRACE flags and build with DEB_BUILD_OPTION...
- 12:51 AM rgw Bug #18942: swift ver location owner is not consistent, the object cannot be able to delete
- https://github.com/ceph/ceph/pull/13433
02/27/2017
- 11:00 PM Bug #19097: segv in MonClient::build_authorizer(int)
Nothing too notable. A rados put crashed after a rados mksnap. Because we don't see the experimental features wa...- 07:24 PM Bug #19097 (Resolved): segv in MonClient::build_authorizer(int)
- ...
- 10:17 PM Backport #19080: "TestLibRBD.UpdateFeatures" tests failed in upgrade:client-upgrade-jewel-distro-...
- > @Nathan - how would you like to finish it for 10.2.6 point release ?
There are only two options: either ignore t... - 09:33 PM Backport #19080: "TestLibRBD.UpdateFeatures" tests failed in upgrade:client-upgrade-jewel-distro-...
- @Nathan - how would you like to finish it for 10.2.6 point release ?
- 09:14 PM CephFS Bug #19101 (Closed): "samba3error [Unknown error/failure. Missing torture_fail() or torture_asser...
- This is jewel point v10.2.6
Run: http://pulpito.ceph.com/yuriw-2017-02-24_20:42:46-samba-jewel---basic-smithi/
Jo... - 09:07 PM Linux kernel client Bug #17221: "Failures: xfs/001 generic/275 generic/225 generic/079" in krbd
- Similar on jewel v10.2.6
Run: http://pulpito.ceph.com/yuriw-2017-02-24_21:11:45-krbd-jewel-testing-basic-smithi/
... - 08:57 PM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- Thomas/Marcus: Please open a PR at ceph/ceph.git against the "jewel" branch. The end of the PR description could look...
- 08:54 PM rgw Backport #19098: RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- h3. Description
Marcus Watts has a fix for this BZ:
RGW service fails to start with SSL configured on Ubuntu
h... - 07:35 PM rgw Backport #19098 (Resolved): RGW/openssl fix for autoconf logic problem in Ubuntu Xenial
- https://github.com/ceph/ceph/pull/14215
- 08:45 PM Bug #19100 (Can't reproduce): "AssertionError: failed to complete snap trimming before timeout" i...
- Run: http://pulpito.ceph.com/teuthology-2017-02-26_19:25:18-upgrade:kraken-x-master-distro-basic-vps/
Jobs: many
L... - 08:15 PM Bug #19099 (Resolved): valgrind runs time out in wait_for_up_osds after 300s; osd logs show *incr...
- a taste...
- 08:11 PM rgw Feature #19088 (Fix Under Review): Add support for multipart upload expiration
- 01:09 AM rgw Feature #19088: Add support for multipart upload expiration
- https://github.com/ceph/ceph/pull/13622
- 01:05 AM rgw Feature #19088 (Fix Under Review): Add support for multipart upload expiration
- In Amazon S3, users can configure a lifecycle for multipart uploads so that these uploads can be aborted after a certain ti...
- 08:10 PM rgw Feature #19071 (Closed): Add S3 lifecycle support for multipart uploads
- 01:09 AM rgw Feature #19071: Add S3 lifecycle support for multipart uploads
- Closing this tracker; http://tracker.ceph.com/issues/19088 was created for this feature.
- 08:06 PM Backport #18677 (In Progress): kraken: OSD metadata reports filestore when using bluestore
- 07:46 PM Bug #18638: OSD metadata reports filestore when using bluestore
- Well, that's a good point =)
- 06:34 PM Bug #18638: OSD metadata reports filestore when using bluestore
- Eh, I'm not too worried about jewel since nobody should really be using that bluestore code anymore. We certainly co...
- 05:39 PM Bug #18638: OSD metadata reports filestore when using bluestore
- @Sage
-Fix needs to be backported into jewel as well?-
-https://github.com/ceph/ceph/blob/jewel/src/osd/OSD.cc#L4... - 05:32 PM Bug #18638: OSD metadata reports filestore when using bluestore
- https://github.com/ceph/ceph/pull/13072
- 06:41 PM rbd Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
- Yes, for luminous I think we'll have that flag still - mainly because it's a really bad idea to enable on filestore, ...
- 02:08 PM rbd Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
- @Josh: do you also envision that users will need to set that flag in Luminous -- or should EC overwrites just work ou...
- 06:35 PM rgw Bug #19096 (Resolved): rgw: a few cases where rgw_obj is incorrectly initialized
- in the following:...
- 05:50 PM Linux kernel client Bug #19095: handle image feature mismatches
- @vasu do you have a run/log to illustrate the problem?
- 05:48 PM Linux kernel client Bug #19095 (Resolved): handle image feature mismatches
- Kernels which don't support the newer features like object-map, fast-diff, deep-flatten fail with a not-so-friendly
err... - 05:10 PM Bug #19094 (Closed): rare stalled aio on trusty kernel (3.13.0-110-generic)
- /a/yuriw-2017-02-24_16:24:00-rados-wip-yuri-testing2_2017_2_24-distro-basic-smithi/856482
I've seen this once befo... - 03:36 PM Bug #18979: [ceph-mon] create-keys:Cannot get or create admin key
- I want to own this.
- 01:59 PM rbd Fix #19091 (Need More Info): rbd: rbd du cmd calc total volume is smaller than used
- Looks like this was an unintended consequence of commit 1ccdcb5b6c1cfd176a86df4f115a88accc81b4d0.
- 08:37 AM rbd Fix #19091 (Rejected): rbd: rbd du cmd calc total volume is smaller than used
- The result which rbd du a snapshot image is good, but rbd du a original image seems unreasonable, because the PROVISI...
- 01:28 PM Support #19093 (Rejected): Failed to deploy multiple SPDK blueStore OSD instances per node
- I succeeded in creating the first OSD using the following commands:
ceph osd create
rm -rf /var/lib/ceph/osd/ceph-0/
m... - 12:55 PM rgw Feature #18800: rgw: support AWS4 authentication for S3 Post Object API
- Yes, I am having a look in this bug.
- 12:47 PM CephFS Bug #18883: qa: failures in samba suite
- Both the ubuntu and centos samba packages have no dependency on libcephfs. It works when libcephfs1 happens to be present.
- 11:21 AM CephFS Bug #18883: qa: failures in samba suite
- Which packages were you seeing the linkage issue on? The centos ones?
- 09:25 AM CephFS Bug #18883: qa: failures in samba suite
- ...
- 10:23 AM Bug #13499: FAILED assert(repop_queue.front() == repop)
- Here is one on current jewel:...
- 10:03 AM RADOS Bug #19092 (New): cluster [ERR] scrub 2.1 ... is an unexpected clone" in cluster log
- see http://pulpito.ceph.com/kchai-2017-02-27_04:13:29-rados-wip-kefu-testing---basic-smithi/862801/
after evicting... - 09:43 AM Bug #19069 (Duplicate): segv in CephxClientHandler::handle_response, MonConnection::authenticate
- 05:15 AM Bug #19069: segv in CephxClientHandler::handle_response, MonConnection::authenticate
- testing https://github.com/ceph/ceph/pull/13656
* /a/sage-2017-02-24_06:15:05-rados-wip-sage-testing---basic-smith... - 09:42 AM Bug #19015 (Resolved): ceph tell mon.* times out
- 08:14 AM rgw Bug #19056: radosgw S3 AWS4 signature and keystone integration broken
- This feature is very important to us. A lot of S3 client libraries and CLIs support the AWS4 signature by default, so the ke...
- 07:56 AM CephFS Bug #18757: Jewel ceph-fuse does not recover after lost connection to MDS
- I updated the PR to do _closed_mds_session(s).
As for the config option, I would expect the client to reconnect automagically ... - 07:47 AM Cleanup #19090 (Resolved): Wrong hard-coded urls
- There are some hard-coded URLs in:
* src/mon/OSDMonitor.cc
* src/sample.ceph.conf
e.g.,
3042: ss << "; see ... - 06:53 AM RADOS Feature #18943: crush: add devices class that rules can use as a filter
- https://github.com/ceph/ceph/pull/13444
- 06:22 AM rgw Bug #19089 (In Progress): rgw daemon's DUMPABLE flag is cleared by setuid preventing coredumps
- https://github.com/ceph/ceph/pull/13657
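For context, the usual shape of this kind of fix on Linux (a sketch only, assuming prctl(); the linked PR may do it differently): setuid() clears the process's dumpable flag, so it has to be turned back on afterwards if core dumps are still wanted:
<pre>
#include <cstdio>
#include <sys/prctl.h>
#include <unistd.h>

// Sketch: drop privileges, then restore the dumpable flag so the kernel
// will still write a core file if the daemon crashes later.
static int drop_privs_keep_coredumps(uid_t uid, gid_t gid) {
  if (setgid(gid) != 0 || setuid(uid) != 0)
    return -1;                           // failed to drop privileges
  if (prctl(PR_SET_DUMPABLE, 1) != 0)    // setuid cleared the flag; set it back
    perror("prctl(PR_SET_DUMPABLE)");
  return 0;
}
</pre>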
- 05:53 AM rgw Bug #19089 (Resolved): rgw daemon's DUMPABLE flag is cleared by setuid preventing coredumps
- #17650 resolved this issue for the MON and OSD daemons but rgw calls setuid later in its start-up via civetweb.
Th... - 03:27 AM RADOS Bug #19086 (Need More Info): BlockDevice::create should add check for readlink result instead of ...
- 12:52 AM rgw Bug #19074: cannot cover the object expiration
- https://github.com/ceph/ceph/pull/13621
02/26/2017
- 04:25 PM Bug #19015 (Fix Under Review): ceph tell mon.* times out
- 04:24 PM Bug #19015: ceph tell mon.* times out
- https://github.com/ceph/ceph/pull/13656
- 03:33 PM rgw Bug #19085: Objects incorrectly report size 0 bytes after upgrade to 11.2.0
- In the meantime I found out that if I put an object with s3cmd the size of the object is correct, so the problem is o...
- 02:14 PM Bug #19087: Bluestore panic with jemalloc
- Intel folks report that jewel goes well with jemalloc
- 02:10 PM Bug #19087 (Can't reproduce): Bluestore panic with jemalloc
- Checkout Kraken and build from source, with "cmake -D ALLOCATOR=jemalloc -DBOOST_J=$(nproc) "$@" .. "
OSD will pan... - 11:19 AM RADOS Bug #19086: BlockDevice::create should add check for readlink result instead of raise error until...
- https://github.com/ceph/ceph/pull/13654
- 09:49 AM RADOS Bug #19086 (Rejected): BlockDevice::create should add check for readlink result instead of raise ...
- ...
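A generic sketch of what checking the readlink result usually looks like (hypothetical code, not the BlueStore source): readlink(2) returns -1 on error and never NUL-terminates the buffer, so both cases need handling:
<pre>
#include <cerrno>
#include <climits>
#include <string>
#include <unistd.h>

#ifndef PATH_MAX
#define PATH_MAX 4096   // fallback for platforms that don't define it
#endif

// Returns 0 and fills *target on success, -errno on failure, instead of
// silently using an uninitialized buffer when readlink fails.
static int resolve_link(const std::string &path, std::string *target) {
  char buf[PATH_MAX];
  ssize_t r = ::readlink(path.c_str(), buf, sizeof(buf) - 1);
  if (r < 0)
    return -errno;        // e.g. -EINVAL if 'path' is not a symlink
  buf[r] = '\0';          // readlink does not append a terminating NUL
  *target = buf;
  return 0;
}
</pre>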
- 03:45 AM Backport #19080: "TestLibRBD.UpdateFeatures" tests failed in upgrade:client-upgrade-jewel-distro-...
- One option is to modify the test so it specifies the exact SHA1 (but then it won't get any newer commits that might b...
- 03:39 AM Backport #19080: "TestLibRBD.UpdateFeatures" tests failed in upgrade:client-upgrade-jewel-distro-...
- Indeed. Teuthology correctly recognizes the infernalis SHA1 that is needed, but when it queries Shaman it only asks f...
- 03:20 AM Backport #18890 (In Progress): ruleset is out of scope in CrushTester::test_with_crushtoo
02/25/2017
- 11:24 PM Backport #18842 (Resolved): kraken: kernel client feature mismatch on latest master test runs
- 11:19 PM Backport #18842: kraken: kernel client feature mismatch on latest master test runs
- Bumping priority to match the severity of the issue. Let me know if anyone needs a hand getting this into kraken quic...
- 05:59 PM rgw Bug #19085 (Closed): Objects incorrectly report size 0 bytes after upgrade to 11.2.0
- I'm running a docker registry using the s3aws storage driver on a ceph cluster. This setup was running very well unti...
- 01:10 PM Bug #19084 (Can't reproduce): Cannot Boot: A start job is running for dev-mapper-ceph\x2d\x2dos\x...
- I rebooted the server and it got stuck with this message:
A start job is running for dev-mapper-ceph\x2d\x2dos\x2droot.dev... - 09:58 AM Backport #19080: "TestLibRBD.UpdateFeatures" tests failed in upgrade:client-upgrade-jewel-distro-...
- This is a problem with the test. The fix for this test was merged to the infernalis branch a long time ago:...
- 08:04 AM Feature #16966: Multiple SPDK BlueStore OSD instances per node
- thanks!
by the way, I just deployed multiple SPDK BlueStore OSD instances per node on ceph 12.0.0,
but still mistak... - 05:27 AM Backport #18677 (New): kraken: OSD metadata reports filestore when using bluestore
- 02:40 AM Backport #19083 (Resolved): jewel: osd: preserve allocation hint attribute during recovery
- https://github.com/ceph/ceph/pull/13647
- 12:10 AM Backport #19046 (In Progress): jewel: IPv6 Heartbeat packets are not marked with DSCP QoS - async...
02/24/2017
- 11:36 PM Bug #18960: PG stuck peering after host reboot
- George Vasilakakos wrote:
>
> I think this solution should be able to get a PG unstuck without having to incur the... - 03:57 PM Bug #18960: PG stuck peering after host reboot
- Brad Hubbard wrote:
> The next step then would be recreating osd.7 (marking it out+down, then lost, re-formatting it... - 10:55 AM Bug #18960: PG stuck peering after host reboot
- Brad Hubbard wrote:
> The next step then would be recreating osd.7 (marking it out+down, then lost, re-formatting it... - 10:10 AM Bug #18960: PG stuck peering after host reboot
- The next step then would be recreating osd.7 (marking it out+down, then lost, re-formatting it and making a new osd i...
- 09:56 AM Bug #18960: PG stuck peering after host reboot
- Brad Hubbard wrote:
> This looks like corruption of the leveldb database on osd.7. Could you upload the omap directo... - 09:32 AM Bug #18960: PG stuck peering after host reboot
- Brad Hubbard wrote:
> Given the amount of retries in the logs and the history of http://tracker.ceph.com/issues/1850... - 03:12 AM Bug #18960: PG stuck peering after host reboot
- Given the amount of retries in the logs and the history of http://tracker.ceph.com/issues/18508 could you try with as...
- 12:50 AM Bug #18960: PG stuck peering after host reboot
- This looks like corruption of the leveldb database on osd.7. Could you upload the omap directory from 0sd.7's data pa...
- 10:12 PM Bug #19076: osd/ReplicatedBackend.cc: 884: FAILED a ssert(j != bc->pulling.end())
- Something like https://github.com/athanatos/ceph/tree/wip-17831-18583-18809-18927-19076
- 07:31 PM Bug #19076: osd/ReplicatedBackend.cc: 884: FAILED a ssert(j != bc->pulling.end())
- 03:20 PM Bug #19076 (Resolved): osd/ReplicatedBackend.cc: 884: FAILED a ssert(j != bc->pulling.end())
- ...
- 10:12 PM rbd Bug #19081: rbd: refuse to use an ec pool that doesn't support overwrites
- The flag will stick around for luminous. In the future if all ec pools supported overwrites, the flag would just alwa...
- 09:06 PM rbd Bug #19081 (Need More Info): rbd: refuse to use an ec pool that doesn't support overwrites
- @Josh: what's the API for determining if that flag is set? Is that flag only valid for Kraken?
- 09:00 PM rbd Bug #19081 (Resolved): rbd: refuse to use an ec pool that doesn't support overwrites
- When using an ec data pool that does not have the overwrites flag set, librbd ends up hitting an assert in the i/o pa...
- 10:12 PM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- Something like: https://github.com/athanatos/ceph/tree/wip-19023
- 10:10 PM Bug #18961 (Resolved): objecter continually resends ops which don't have a callback
- 10:09 PM Bug #18937 (Resolved): cache/tiering flush bug with head delete
- 06:24 PM Backport #19080 (Closed): "TestLibRBD.UpdateFeatures" tests failed in upgrade:client-upgrade-jewe...
- Fixed by David Galloway in #18089 - see https://github.com/ceph/ceph/pull/14459 for more information.
- 06:02 PM Bug #18933 (Resolved): asok ops operator<< races with handle_pull, which mutates MOSDPGPull
- 05:52 PM Bug #19077: Resize TextTable columns to the widest element of each column
- Updating the example for better formatting.
Here is an example:... - 04:49 PM Bug #19077 (Resolved): Resize TextTable columns to the widest element of each column
- Running the command "rados df" on a cluster with long pool names or lots of data results in the stair stepping of the...
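A minimal standalone sketch of the sizing rule being requested (generic code, not the actual TextTable implementation): pad each column to the width of its widest cell instead of a fixed width:
<pre>
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
  std::vector<std::vector<std::string>> rows = {
      {"POOL_NAME", "USED", "OBJECTS"},
      {"a-very-long-pool-name", "1.2TiB", "52341234"},
      {"rbd", "15GiB", "371"}};

  // Column width = widest cell seen in that column.
  std::vector<size_t> width(rows[0].size(), 0);
  for (const auto &row : rows)
    for (size_t c = 0; c < row.size(); ++c)
      width[c] = std::max(width[c], row[c].size());

  for (const auto &row : rows) {
    for (size_t c = 0; c < row.size(); ++c)
      std::cout << row[c] << std::string(width[c] - row[c].size() + 2, ' ');
    std::cout << '\n';
  }
  return 0;
}
</pre>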
- 05:45 PM rgw Feature #19079 (New): RGW Bucket Index cleanup tool needed
- Much like the orphan object cleanup tool, there needs to be a cleanup tool to remove orphaned RGW bucket indices.
... - 04:53 PM Backport #18723 (In Progress): kraken: osd: calc_clone_subsets misuses try_read_lock vs missing
- 04:33 PM Bug #19069: segv in CephxClientHandler::handle_response, MonConnection::authenticate
- /a/sage-2017-02-24_06:15:05-rados-wip-sage-testing---basic-smithi/855243
- 03:19 PM Bug #19069: segv in CephxClientHandler::handle_response, MonConnection::authenticate
- /a/sage-2017-02-24_06:15:05-rados-wip-sage-testing---basic-smithi/855189
- 04:22 PM rgw Bug #18331: RGW leaking data
- I forgot to mention that this is a cluster that was first deployed as Giant, then upgraded to Hammer and finally to J...
- 04:18 PM rgw Bug #18331: RGW leaking data
- Hi,
I have the same problem and I think the leaked objects come from some failed or interrupted multipart uploads ... - 02:29 PM CephFS Feature #19075 (Fix Under Review): Extend 'p' mds auth cap to cover quotas and all layout fields
- https://github.com/ceph/ceph/pull/13628
- 02:21 PM CephFS Feature #19075 (Resolved): Extend 'p' mds auth cap to cover quotas and all layout fields
- Re. mailing list thread "quota change restriction" http://marc.info/?l=ceph-devel&m=148769159329755&w=2
We should ... - 02:12 PM CephFS Bug #18600 (Resolved): multimds suite tries to run quota tests against kclient, fails
- 02:11 PM CephFS Bug #17990 (Resolved): newly created directory may get fragmented before it gets journaled
- 02:11 PM rgw Bug #19066 (Pending Backport): rgw_file: ensure valid_s3_object_name for directories
- https://github.com/ceph/ceph/pull/13614
- 02:10 PM CephFS Bug #16768 (Resolved): multimds: check_rstat assertion failure
- 02:10 PM CephFS Bug #18159 (Resolved): "Unknown mount option mds_namespace"
- 02:09 PM CephFS Bug #18646 (Resolved): mds: rejoin_import_cap FAILED assert(session)
- 11:26 AM Backport #18890: ruleset is out of scope in CrushTester::test_with_crushtoo
- h3. description
In CrushTester::test_with_crushtool:
if (ruleset >= 0) {
crushtool.add_cmd_args(
"--r... - 11:15 AM Backport #18890 (Fix Under Review): ruleset is out of scope in CrushTester::test_with_crushtoo
- https://github.com/ceph/ceph/pull/13627
- 10:24 AM Backport #18890: ruleset is out of scope in CrushTester::test_with_crushtoo
- -https://github.com/ceph/ceph/pull/13357-
- 09:56 AM rgw Feature #18800: rgw: support AWS4 authentication for S3 Post Object API
- hi, anyone working on this?
- 09:51 AM CephFS Bug #18953 (Resolved): mds applies 'fs full' check for CEPH_MDS_OP_SETFILELOCK
- 09:50 AM CephFS Bug #18663 (Resolved): teuthology teardown hangs if kclient umount fails
- 09:48 AM CephFS Bug #18675 (Resolved): client: during multimds thrashing FAILED assert(session->requests.empty())
- 07:42 AM rgw Feature #19071: Add S3 lifecycle support for multipart uploads
- https://github.com/ceph/ceph/pull/13622
- 02:56 AM rgw Feature #19071 (Closed): Add S3 lifecycle support for multipart uploads
- In Amazon S3's lifecycle feature, multipart uploads can be expired: if the user still has not completed a multipart upload within a certain time after it is initiated, the system aborts it, which saves the user space. In addition, in ceph, deleting a container does not clean up those incomplete multipart...
- 07:31 AM rgw Bug #19074 (Resolved): cannot cover the object expiration
- When we set the object expiration and then upload the same object to the container again, the previous object expiration ...
- 07:28 AM rbd Feature #19073 (Duplicate): rbd: support namespace
- Support namespaces in rbd. A design is at the link below.
http://pad.ceph.com/p/rbd_namespace - 04:44 AM Backport #19070: hammer: rebuild monstore failed due to illegal memory access
- h3. description
1. ceph version
ceph version 0.94.10 (b1e0532418e4631af01acbc0cedd426f1905f4af)
2.test scripts... - 02:52 AM Backport #19070 (Fix Under Review): hammer: rebuild monstore failed due to illegal memory access
- 02:24 AM Backport #19070: hammer: rebuild monstore failed due to illegal memory access
- https://github.com/ceph/ceph/pull/13605
- 02:21 AM Backport #19070 (Rejected): hammer: rebuild monstore failed due to illegal memory access
- https://github.com/ceph/ceph/pull/13605
- 04:30 AM Bug #18809: FAILED assert(object_contexts.empty()) (live on master only from Jan-Feb 2017, all ot...
- https://github.com/ceph/ceph/pull/13280 (the jewel revert)
- 04:04 AM Bug #10411: PG stuck incomplete after failed node
- what about the data in pg 5.3d2...
- 03:28 AM rbd Feature #19072: rbd-fuse support rbd image snap
- @jason dillaman
- 03:26 AM rbd Feature #19072 (New): rbd-fuse support rbd image snap
- Currently, rbd-fuse does not support mounting image snapshots.
We can add this feature to rbd-fuse.. - 01:47 AM CephFS Bug #18759 (Resolved): multimds suite tries to run norstats tests against kclient
02/23/2017
- 11:54 PM Bug #18993 (Resolved): osd/PrimaryLogPG.cc: 9888: FAILED assert(object_contexts.empty())
- Nevermind, got my git log wrong, that pr was not included in that run, and should fix this. Re-open if it appears again.
- 11:47 PM Bug #18993: osd/PrimaryLogPG.cc: 9888: FAILED assert(object_contexts.empty())
- same assert even after https://github.com/ceph/ceph/pull/13569 :
http://qa-proxy.ceph.com/teuthology/dis-2017-02-2... - 11:24 PM CephFS Feature #9754: A 'fence and evict' client eviction command
- Underway on jcsp/wip-17980 along with #17980
- 12:07 PM CephFS Feature #9754 (In Progress): A 'fence and evict' client eviction command
- 11:08 PM Bug #19069 (Duplicate): segv in CephxClientHandler::handle_response, MonConnection::authenticate
- ...
- 11:06 PM Bug #18933 (Fix Under Review): asok ops operator<< races with handle_pull, which mutates MOSDPGPull
- /a/sage-2017-02-23_15:15:06-rados-master---basic-smithi/852417
https://github.com/ceph/ceph/pull/13545 - 11:05 PM Bug #19068 (Duplicate): timeout on "ceph --cluster ceph -- mon tell '*' injectargs '--mon_osd_dow...
- this appears to be an issue with tell mon.* failing to retry properly with teh new multi-mon monclient code, then fal...
- 11:00 PM RADOS Bug #19067 (Need More Info): missing set not persisted
- ...
- 09:07 PM rgw Bug #19066 (Resolved): rgw_file: ensure valid_s3_object_name for directories
- The logic in RGWLibFS::mkdir() validated bucket names, but not
object names (though RGWLibFS::create() did so).
T... - 07:06 PM rgw Bug #19013 (Pending Backport): radosgw-admin: add the 'object stat' command to usage
- 07:01 PM rgw Bug #19026 (Pending Backport): rgw: typo in rgw_admin.cc
- https://github.com/ceph/ceph/pull/13576
- 07:00 PM rgw Bug #18965: rgw: S3 v4 sign is not working with aws java sdk
- Will retest on Jewel (no plans to backport to Infernalis).
- 06:57 PM rgw Bug #18965 (Need More Info): rgw: S3 v4 sign is not working with aws java sdk
- 06:49 PM rgw Bug #19018 (Pending Backport): rgw_file: fix marker computation
- https://github.com/ceph/ceph/pull/13529
- 06:44 PM rgw Bug #19056: radosgw S3 AWS4 signature and keystone integration broken
- @rzarzynski can you take a look at this one?
- 12:56 PM rgw Bug #19056 (Resolved): radosgw S3 AWS4 signature and keystone integration broken
- We have an Jewel radosgw, with the s3 authentication integration with keystone enabled (rgw_s3_auth_use_keystone = tr...
- 06:41 PM rgw Bug #18940: ERROR RESTFUL_IO with S3 GET/PUT operations
- This isn't necessarily a bug, it's just client closing down the connection. Could happen when cosbench is shutting down.
- 06:40 PM rgw Bug #19036 (Pending Backport): rgw_file: fix recycling of invalid mkdir handles
- 06:40 PM rgw Bug #19059 (Pending Backport): rgw_file: restore (corrected) fix for dir "partial match" (return...
- https://github.com/ceph/ceph/pull/13607
- 03:21 PM rgw Bug #19059 (Resolved): rgw_file: restore (corrected) fix for dir "partial match" (return of FLAG...
- Allow callers of rgw_lookup() on objects attested in an rgw_readdir() callback the ability to bypass exact match in R...
- 06:37 PM rgw Bug #19060 (Pending Backport): rgw_file: avoid interning ".." in FHCache table and don't ref for...
- https://github.com/ceph/ceph/pull/13590
- 03:25 PM rgw Bug #19060 (Duplicate): rgw_file: avoid interning ".." in FHCache table and don't ref for them
- These refs won't be returned by nfs-ganesha, and are sufficiently
magical that other consumers should be persuaded t... - 06:36 PM rgw Bug #18989 (Pending Backport): rgw_file: allow setattr on placeholder (common_prefix) directories
- https://github.com/ceph/ceph/pull/13252
- 06:34 PM rgw Bug #18991 (Pending Backport): rgw_file: RGWReaddir (and cognate ListBuckets request) don't enum...
- https://github.com/ceph/ceph/pull/13529
- 06:32 PM rgw Bug #18992 (Pending Backport): rgw_file: "exact match" invalid for directories, in RGWLibFS::sta...
- https://github.com/ceph/ceph/pull/13607
- 06:09 PM Bug #18960: PG stuck peering after host reboot
- Since we need this pool to work again, we decided to take the data loss and try to move on.
So far, no luck. We tr... - 04:44 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. QE VALIDATION (STARTED 2/23/17)
(Note: *%{color:green}PASSED% / %{color:red}FAILED%* - indicates "TEST IS IN P... - 04:24 PM Feature #16966: Multiple SPDK BlueStore OSD instances per node
- Yes, please take a look at https://github.com/ceph/ceph/pull/12604
- 02:19 AM Feature #16966: Multiple SPDK BlueStore OSD instances per node
- Is there any solution?
- 03:44 PM devops Backport #19062 (In Progress): jewel: Build ceph-resource-agents package for rpm based os
- 03:42 PM devops Backport #19062 (Resolved): jewel: Build ceph-resource-agents package for rpm based os
- https://github.com/ceph/ceph/pull/13606
- 03:40 PM devops Bug #17613 (Pending Backport): Build ceph-resource-agents package for rpm based os
- 03:36 PM Backport #18720 (Resolved): jewel: systemd restarts Ceph Mon to quickly after failing to start
- 03:22 PM Bug #18927 (Resolved): on_flushed: object ... obc still alive
- 02:45 PM RADOS Bug #19058: osd: backfill failed to remove racing evict
- /a/sage-2017-02-21_20:58:58-rados-wip-sage-testing---basic-smithi/844754
- 02:43 PM RADOS Bug #19058 (New): osd: backfill failed to remove racing evict
- we are backfilling......
- 02:12 PM Stable releases Tasks #19009: kraken v11.2.1
- h3. rbd...
- 01:52 PM Stable releases Tasks #19009: kraken v11.2.1
- h3. rgw...
- 01:48 PM Stable releases Tasks #19009: kraken v11.2.1
- h3. fs...
- 01:45 PM Stable releases Tasks #19009: kraken v11.2.1
- h3. powercycle...
- 01:43 PM Stable releases Tasks #19009: kraken v11.2.1
- h3. ceph-disk...
- 01:40 PM Stable releases Tasks #19009: kraken v11.2.1
- h3. Upgrade jewel-x...
- 01:37 PM Stable releases Tasks #19009: kraken v11.2.1
- h3. rados...
- 01:36 PM rbd Bug #19057 (Won't Fix): krbd suite does not run on hammer (rbd task fails with "No route to host")
- Reproducer: ...
- 12:06 PM Stable releases Tasks #19055 (Closed): jewel v10.2.7
- h3. Workflow
* "Preparing the release":http://ceph.com/docs/master/dev/development-workflow/#preparing-a-new-relea... - 11:25 AM Bug #19054 (Duplicate): mark an osd down sooner once we have collect enough reports
- 11:12 AM Bug #19054 (Duplicate): mark an osd down sooner once we have collect enough reports
- https://github.com/ceph/ceph/pull/7942
see also https://bugzilla.redhat.com/show_bug.cgi?id=1425115 - 11:05 AM rbd Feature #18865: rbd: wipe data in disk in rbd removing
- Okay, makes sense. Will investigate it more in the OSD. Thanks.
Jason Dillaman wrote:
> @Yang: As I mentioned, th... - 10:31 AM rgw Bug #7169: rgw: list multipart parts broken (> 1000 parts)
- Although this one has been marked as resolved I just ran into this situation with Jewel 10.2.5.
The listing replie... - 10:30 AM rgw Feature #19053 (New): rgw: swift API: support to retrieve manifest for SLO
- Current RGW code (tested on jewel v10.2.5) doesn't support retrieving manifest.
Even if ... - 09:50 AM rgw Feature #19052 (New): rgw: multiple zonegroups: support of request redirection to access differen...
- This is a part of issues discussed on ceph-devel ML: http://marc.info/?t=148671813300008
It is related to #19043 and... - 09:42 AM CephFS Bug #18757: Jewel ceph-fuse does not recover after lost connection to MDS
- You can use 'ceph daemon client.xxx kick_stale_sessions' to recover from this issue. Maybe we should add a config option to ...
- 09:32 AM rgw Backport #19049 (Resolved): kraken: multisite: some yields in RGWMetaSyncShardCR::full_sync() res...
- https://github.com/ceph/ceph/pull/13838
- 09:32 AM rgw Backport #19048 (Resolved): jewel: multisite: some yields in RGWMetaSyncShardCR::full_sync() resu...
- https://github.com/ceph/ceph/pull/13837
- 09:31 AM rgw Backport #19047 (Resolved): kraken: RGW leaking data
- https://github.com/ceph/ceph/pull/14517
- 09:31 AM Backport #18842: kraken: kernel client feature mismatch on latest master test runs
- Additional PR for the record -https://github.com/ceph/ceph/pull/13486-
- 09:30 AM Backport #19046 (Rejected): jewel: IPv6 Heartbeat packets are not marked with DSCP QoS - async me...
- 09:30 AM CephFS Backport #19045 (Resolved): kraken: buffer overflow in test LibCephFS.DirLs
- https://github.com/ceph/ceph/pull/14571
- 09:30 AM CephFS Backport #19044 (Resolved): jewel: buffer overflow in test LibCephFS.DirLs
- https://github.com/ceph/ceph/pull/14671
- 09:28 AM rbd Bug #18990 (Pending Backport): [rbd-mirror] deleting a snapshot during sync can result in read er...
- 09:22 AM Backport #18457 (Resolved): jewel: selinux: Allow ceph to manage tmp files
- 09:22 AM Backport #18582 (Resolved): Issue with upgrade from 0.94.9 to 10.2.5
- 09:21 AM Backport #18804 (Resolved): jewel: "ERROR: Export PG's map_epoch 3901 > OSD's epoch 3281" in upgr...
- 09:21 AM Backport #18812 (Resolved): jewel: hammer client generated misdirected op against jewel cluster
- 08:16 AM rgw Bug #19043 (New): rgw: multiple zonegroups: asymmetric behavior of creating bucket on secondary z...
- This is a part of issues discussed on ceph-devel ML: http://marc.info/?t=148671813300008
I tentatively configured ... - 06:26 AM rgw Bug #19042 (New): rgw: multiple zonegroups: bucket can't be created if the user name isn't regist...
- This is a part of issues discussed on ceph-devel ML: http://marc.info/?t=148671813300008
It may be related to #19041... - 05:29 AM rgw Bug #19041 (New): rgw: multiple zonegroups: asymmetric behavior of creating user account
- This is a part of issues discussed on ceph-devel ML: http://marc.info/?t=148671813300008
I tentatively configured ... - 04:47 AM rgw Bug #19040: rgw: multiple zonegroups: authorization error to post period
- FYI...
- 04:38 AM rgw Bug #19040 (New): rgw: multiple zonegroups: authorization error to post period
- This is a part of issues discussed on ceph-devel ML: http://marc.info/?t=148671813300008
I tentatively configured ...
02/22/2017
- 11:44 PM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- Well, sort of. last_epoch_clean is really about when we can forget OSDMaps. Should we retain OSDMaps on the mon (an...
- 11:33 PM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- 2017-02-20 20:45:59.104093 7f75c93f8700 10 osd.3 pg_epoch: 284 pg[1.16( v 278'379 (0'0,278'379] local-les=277 n=1 ec=...
- 12:09 AM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- 2017-02-20 20:46:28.567065 7ffa3242c700 10 osd.4 pg_epoch: 255 pg[1.16( v 254'369 (0'0,254'369] local-les=164 n=3 ec=...
- 12:05 AM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- 2017-02-20 20:46:40.165108 7f9e2ffc3700 10 osd.0 pg_epoch: 300 pg[1.16( DNE empty local-les=0 n=0 ec=0 les/c/f 0/0/0 ...
- 12:03 AM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- 2017-02-20 20:46:41.743173 7f9e277b2700 10 osd.0 pg_epoch: 301 pg[1.16( empty local-les=0 n=0 ec=141 les/c/f 164/164/...
- 10:34 PM Feature #18052: Replace past_intervals with more compact structure
- https://github.com/athanatos/ceph/tree/wip-past-intervals
- 10:34 PM Bug #17916 (Can't reproduce): osd/PGLog.cc: 1047: FAILED assert(oi.version == i->first)
- 10:33 PM Bug #18961 (Fix Under Review): objecter continually resends ops which don't have a callback
- https://github.com/ceph/ceph/pull/13570
- 10:33 PM Bug #18927 (Fix Under Review): on_flushed: object ... obc still alive
- https://github.com/ceph/ceph/pull/13569
- 10:32 PM Bug #18937 (Fix Under Review): cache/tiering flush bug with head delete
- https://github.com/ceph/ceph/pull/13570
- 10:07 PM Bug #19039 (Can't reproduce): ceph-osd --mkkey --mkfs segfaults with bluestore
- Deploying ceph (master) with bluestore results in a segmentation fault when attempting to create the OSD key.
Ste... - 07:10 PM rbd Backport #19038 (In Progress): jewel: [rbd-mirror] deleting a snapshot during sync can result in ...
- 06:51 PM rbd Backport #19038 (Resolved): jewel: [rbd-mirror] deleting a snapshot during sync can result in rea...
- https://github.com/ceph/ceph/pull/13596
- 07:00 PM rbd Backport #18215 (Closed): jewel: TestImageSync.SnapshotStress fails on bluestore
- I would like to avoid backporting sparse object reads to jewel unless required.
- 07:00 PM rbd Bug #18146 (Resolved): TestImageSync.SnapshotStress fails on bluestore
- 07:00 PM rbd Feature #16780 (Resolved): rbd-mirror: use sparse read during image sync
- 07:00 PM rbd Backport #17879 (Closed): jewel: rbd-mirror: use sparse read during image sync
- I would like to avoid backporting sparse object reads to jewel unless required.
- 06:50 PM rbd Backport #19037 (Resolved): kraken: rbd-mirror: deleting a snapshot during sync can result in rea...
- https://github.com/ceph/ceph/pull/14622
- 05:54 PM rbd Bug #18884 (Resolved): systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/r...
- 05:35 PM Bug #18979: [ceph-mon] create-keys:Cannot get or create admin key
- Kefu,
Do you want to own this or this should be assigned to someone else? - 10:35 AM Bug #18979: [ceph-mon] create-keys:Cannot get or create admin key
- ceph cli crashed with coredump...
- 04:28 AM Bug #18979: [ceph-mon] create-keys:Cannot get or create admin key
- 2017-02-17T18:18:11.237 INFO:teuthology.orchestra.run.vpm031.stderr:[vpm165.front.sepia.ceph.com][ERROR ] "ceph auth ...
- 05:16 PM Backport #18723: kraken: osd: calc_clone_subsets misuses try_read_lock vs missing
- Cherry-picked and RE-pushed :)
3833440adea6f8bcb0093603c3a9d16360ed57ec
91b74235027c8a4872dcab6b37767b12c3267061
6... - 04:26 PM rgw Bug #19036 (Fix Under Review): rgw_file: fix recycling of invalid mkdir handles
- PR to master: https://github.com/ceph/ceph/pull/13590
- 03:43 PM rgw Bug #19036 (Resolved): rgw_file: fix recycling of invalid mkdir handles
- To avoid a string copy in the common mkdir path, handles for proposed buckets currently are staged in the handle tabl...
- 03:42 PM rbd Bug #19035 (Resolved): [rbd CLI] map with cephx disabled results in error message
- ...
- 03:38 PM rbd Feature #19034 (Resolved): [rbd CLI] import-diff should use concurrent writes
- The export, export-diff, and import commands all issue concurrent operations to the librbd API. The import-diff comma...
- 02:45 PM CephFS Bug #19033: cephfs: mds is crashed, after I set about 400 64KB xattr kv pairs to a file
- Thank you for the very detailed report.
- 02:44 PM CephFS Bug #19033 (Fix Under Review): cephfs: mds is crashed, after I set about 400 64KB xattr kv pairs ...
- 01:04 PM CephFS Bug #19033: cephfs: mds is crashed, after I set about 400 64KB xattr kv pairs to a file
- h1. Fix proposal
https://github.com/ceph/ceph/pull/13587 - 12:22 PM CephFS Bug #19033 (Resolved): cephfs: mds is crashed, after I set about 400 64KB xattr kv pairs to a file
- h1. 1. Problem
After I set about 400 64KB xattr kv pairs on a file,
the MDS crashed. Every time I try to star... - 02:26 PM Bug #18960: PG stuck peering after host reboot
- I have uploaded two more log files: cd758c7b-7e74-4ff3-a00c-24b1391c77a2
These are from osd.7, one is the normal o... - 11:42 AM Bug #18960: PG stuck peering after host reboot
- I have uploaded ceph-objectstore-tool output for 307, 595, 1391.
I have also uploaded logs from 307 and 1391, from O... - 11:07 AM Bug #18960: PG stuck peering after host reboot
- Brad Hubbard wrote:
> We'd need logging at a minimum of debug osd = 20 and debug ms = 20. What we need going forward... - 02:20 AM Bug #18960: PG stuck peering after host reboot
- The recently uploaded logs show the same behaviour.
$ grep "PriorSet: affected_by_map" ceph-osd*
ceph-osd.1391-de... - 02:10 AM Bug #18960: PG stuck peering after host reboot
- We'd need logging at a minimum of debug osd = 20 and debug ms = 20. What we need going forward depends on what the ac...
- 02:15 PM rbd Bug #17251 (Resolved): Potential seg fault when blacklisting a client
- 01:44 PM rbd Backport #17261 (Resolved): jewel: Potential seg fault when blacklisting a client
- The patch has been backported and merged into Jewel as a part of https://github.com/ceph/ceph/pull/12890, see https:/...
- 01:34 PM rbd Bug #17210 (Resolved): ImageWatcher: double unwatch of failed watch handle
- 01:27 PM rbd Backport #17242 (Resolved): jewel: ImageWatcher: double unwatch of failed watch handle
- This one has been backported and merged into Jewel as a part of https://github.com/ceph/ceph/pull/12890, see https://...
- 01:21 PM Bug #18873: Need to use thread safe random number generation (unless c++11 already provides this)
- David Zafman wrote:
> We are using rand() throughout the code which man says isn't thread safe. We should determine... - 12:10 PM Backport #16585 (In Progress): jewel: enormous CLOSE_WAIT connections after re-spawning a mds daemon
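A common C++11 replacement pattern (a sketch, not necessarily what the codebase will adopt): give each thread its own engine via thread_local, so callers never share state the way they do with rand():
<pre>
#include <cstdint>
#include <random>

// Per-thread engine, seeded once per thread; no locks, no shared state.
static uint32_t thread_safe_rand(uint32_t lo, uint32_t hi) {
  thread_local std::mt19937 gen{std::random_device{}()};
  std::uniform_int_distribution<uint32_t> dist(lo, hi);
  return dist(gen);
}
</pre>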
- https://github.com/ceph/ceph/pull/13585
- 11:04 AM rgw Bug #19008: rgw: adding bucket lifecycle does not work with V4 signature
- I have the same issue with the 'aws-sdk' gem 2.7.11 and 's3cmd' 1.6.1. Ceph is 11.1.0.
- 10:45 AM Stable releases Tasks #17151 (Resolved): hammer v0.94.10
- 10:37 AM rgw Backport #18901 (In Progress): jewel: librgw: path segments neglect to ref parents
- 10:37 AM rgw Backport #18901: jewel: librgw: path segments neglect to ref parents
- https://github.com/ceph/ceph/pull/13583
- 10:05 AM CephFS Bug #18941 (Pending Backport): buffer overflow in test LibCephFS.DirLs
- It's a rare thing, but let's backport so that we don't have to re-diagnose it in the future.
- 09:48 AM CephFS Bug #18964 (Resolved): mon: fs new is no longer idempotent
- 09:36 AM CephFS Bug #19022 (Fix Under Review): Crash in Client::queue_cap_snap when thrashing
- https://github.com/ceph/ceph/pull/13579
- 09:36 AM CephFS Bug #18914 (Fix Under Review): cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume...
- https://github.com/ceph/ceph/pull/13580
- 08:37 AM Bug #19020 (Fix Under Review): "unlikely race on master" OSDMapMapping.h: 31: FAILED assert(shard...
- https://github.com/ceph/ceph/pull/13574
- 08:21 AM Bug #17821 (Pending Backport): ceph-disk and dmcrypt does not support cluster names different tha...
- 08:21 AM Bug #17821: ceph-disk and dmcrypt does not support cluster names different than 'ceph'
- For the purpose of backporting only https://github.com/ceph/ceph/pull/13573/commits/7f66672b675abbc0262769d32a38112c7...
- 07:46 AM RADOS Bug #18926: Why osds do not release memory?
- Hello,
Version: L12.0.0, bluestore, 2x replication.
Memory size: 16GB
OSD count: 12
After trying to...
02/21/2017
- 11:39 PM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- Notably, when it goes active at the end there, it's missing the 10 commits which happened during the [3,1] interval.
- 11:38 PM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- At epoch 255, 1.16 is on [4,3] and is active+clean
2017-02-20 20:45:10.962790 7fd9b7cba700 10 osd.4 pg_epoch: 255 ... - 01:35 AM RADOS Bug #19023: ceph_test_rados invalid read caused apparently by lost intervals due to mons trimming...
- I assume from your description that this was a dirty interval the monitor shouldn't have trimmed? Or did osd.4 perhap...
- 01:27 AM RADOS Bug #19023 (Resolved): ceph_test_rados invalid read caused apparently by lost intervals due to mo...
- samuelj@teuthology:/a/samuelj-2017-02-20_18:45:04-rados-wip-18937---basic-smithi/839771/remote
If you look back in... - 11:14 PM devops Bug #17613 (Resolved): Build ceph-resource-agents package for rpm based os
- 10:58 PM Bug #17821: ceph-disk and dmcrypt does not support cluster names different than 'ceph'
- Better fix https://github.com/ceph/ceph/pull/13573
- 08:26 PM Bug #17821 (Fix Under Review): ceph-disk and dmcrypt does not support cluster names different tha...
- 08:09 PM Bug #17821 (In Progress): ceph-disk and dmcrypt does not support cluster names different than 'ceph'
- Regression: activate must not have a --cluster argument, working on it
- 09:51 PM Stable releases Tasks #19009: kraken v11.2.1
- ...
- 09:01 PM Stable releases Tasks #19009 (In Progress): kraken v11.2.1
- 09:24 PM Bug #18928 (Pending Backport): IPv6 Heartbeat packets are not marked with DSCP QoS - async messenger
- 09:00 PM Bug #18928: IPv6 Heartbeat packets are not marked with DSCP QoS - async messenger
- https://github.com/ceph/ceph/pull/13418 merged
- 09:00 PM Bug #18887: IPv6 Heartbeat packets are not marked with DSCP QoS - simple messenger
- https://github.com/ceph/ceph/pull/13418 merged (async messenger fix)
- 08:57 PM rbd Bug #18990 (Fix Under Review): [rbd-mirror] deleting a snapshot during sync can result in read er...
- *PR*: https://github.com/ceph/ceph/pull/13568
- 08:48 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. RADOS ON PR#13131 AND PR#13255...
- 08:40 PM Bug #18401 (Resolved): Cannot reserve CentOS 7.2 smithi machines
- 08:40 PM Backport #18406 (Resolved): jewel: Cannot reserve CentOS 7.2 smithi machines
- 08:35 PM rgw Bug #19027 (Fix Under Review): multisite: EPERM when trying to read SLO objects as system/admin user
- https://github.com/ceph/ceph/pull/13561
- 03:25 PM rgw Bug #19027 (Resolved): multisite: EPERM when trying to read SLO objects as system/admin user
- RGWGetObj::read_user_manifest_part() calls verify_object_permission() on each SLO segment, but doesn't take the user'...
- 06:08 PM Backport #18724: jewel: osd: calc_clone_subsets misuses try_read_lock vs missing
- Failed:
-https://jenkins.ceph.com/job/ceph-pull-requests/18894/consoleFull#507451577e4bfde06-44c1-4b3c-8379-d9ee175f... - 06:06 PM Backport #18724: jewel: osd: calc_clone_subsets misuses try_read_lock vs missing
- -Cherry-picked commit:
--91b74235027c8a4872dcab6b37767b12c3267061- - 05:44 PM Backport #18724 (In Progress): jewel: osd: calc_clone_subsets misuses try_read_lock vs missing
- 05:47 PM Bug #19028 (Duplicate): LibRadosLockECPP.BreakLockPP and LibRadosLockECPP.ListLockersPP failure
- ...
- 05:17 PM Bug #18960: PG stuck peering after host reboot
- I have uploaded two more logs obtained with osd debug = 10 in ceph.conf.
ceph-post-file ID: 1bd61b3e-d035-4a58-ba06-... - 05:15 PM CephFS Feature #18490: client: implement delegation support in userland cephfs
- Greg Farnum wrote:
> I guess I'm not sure what you're going for with the Fb versus Fc here. Sure, if you have Fwb an... - 04:14 PM CephFS Feature #18490: client: implement delegation support in userland cephfs
- I guess I'm not sure what you're going for with the Fb versus Fc here. Sure, if you have Fwb and then get an Fr read ...
- 04:04 PM CephFS Feature #18490: client: implement delegation support in userland cephfs
- Greg Farnum wrote:
> > BTW: CEPH_CAPFILE_BUFFER does also imply CEPH_CAP_FILE_CACHE, doesn't it?
>
> No, I don't ... - 01:21 AM CephFS Feature #18490: client: implement delegation support in userland cephfs
- > BTW: CEPH_CAPFILE_BUFFER does also imply CEPH_CAP_FILE_CACHE, doesn't it?
No, I don't think it does. In practice... - 05:15 PM CephFS Bug #18914: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeC...
- Of course, you're right.
- 02:43 PM CephFS Bug #18914: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeC...
- I think the cephfs python binding calls ceph_setxattr instead of ceph_ll_setxattr. There is no such code in Client::setxattr.
- 12:31 PM CephFS Bug #18914: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeC...
- The thing that's confusing me is that Client::ll_setxattr has this block:...
- 09:05 AM CephFS Bug #18914: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeC...
- The error is because the MDS had an outdated osdmap and thought the newly created pool did not exist. (The MDS has code that m...
- 03:41 PM Cleanup #19016 (Resolved): docs still refer to "mon osd min down reports" conf option
- 11:35 AM Cleanup #19016 (Fix Under Review): docs still refer to "mon osd min down reports" conf option
- I've created this pull request:
https://github.com/ceph/ceph/pull/13558
Please review. - 03:39 PM Bug #17366: "assert len(unclean) == num_unclean" in dump_stuck.py in rados suite
- Seen again in jewel 10.2.6 integration testing: http://pulpito.ceph.com/smithfarm-2017-02-21_14:15:43-rados-wip-jewel...
- 02:17 PM rbd Backport #18668 (Resolved): kraken: [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade...
- 02:17 PM rbd Backport #18703 (Resolved): kraken: Prevent librbd from blacklisting the in-use librados client
- 02:15 PM CephFS Bug #19022 (In Progress): Crash in Client::queue_cap_snap when thrashing
- 02:15 PM rgw Bug #19026 (Resolved): rgw: typo in rgw_admin.cc
- realms is misspelled in rgw_admin.cc (1)
Here is some example output:
# radosgw-admin realm list
failed to list... - 11:26 AM rgw Bug #18331 (Pending Backport): RGW leaking data
- 11:25 AM rgw Bug #18258 (Duplicate): rgw: radosgw-admin orphan find goes into infinite loop
- 11:25 AM rgw Backport #18913 (Duplicate): kraken: rgw: radosgw-admin orphan find goes into infinite loop
- 11:17 AM rgw Backport #18912 (Duplicate): jewel: rgw: radosgw-admin orphan find goes into infinite loop
- http://tracker.ceph.com/issues/18827
- 11:13 AM rgw Backport #18912: jewel: rgw: radosgw-admin orphan find goes into infinite loop
- This one has already been backported and merged: https://github.com/ceph/ceph/pull/13358
- 11:09 AM rgw Bug #19025 (Fix Under Review): RGW health check errors out incorrectly
- https://github.com/ceph/ceph/pull/13557
- 11:02 AM rgw Bug #19025 (Resolved): RGW health check errors out incorrectly
- Looks like the config 'rgw_healthcheck_disabling_path' is checking for the presence of a path and if the path is acce...
- 09:30 AM CephFS Backport #18707 (In Progress): kraken: failed filelock.can_read(-1) assertion in Server::_dir_is_...
- 09:28 AM CephFS Backport #18708 (Resolved): jewel: failed filelock.can_read(-1) assertion in Server::_dir_is_none...
- 09:23 AM Bug #18968 (Resolved): mon changing client global_id on restart (failure in TestVolumeClient.test...
- 04:07 AM Bug #18968: mon changing client global_id on restart (failure in TestVolumeClient.test_data_isola...
- https://github.com/ceph/ceph/pull/13550
- 09:23 AM Cleanup #19012 (Fix Under Review): Add override in header files of rbd subsystem
- 02:31 AM Cleanup #19012: Add override in header files of rbd subsystem
- https://github.com/ceph/ceph/pull/13536
- 07:11 AM rgw Backport #18903 (In Progress): jewel: rgw: first write also tries to read object
- 07:11 AM rgw Backport #18903: jewel: rgw: first write also tries to read object
- -https://github.com/ceph/ceph/pull/13552- (closed)
- 03:51 AM Bug #19020: "unlikely race on master" OSDMapMapping.h: 31: FAILED assert(shards == 0)
- Also confirmed on master (tests ran on xenial)
- 01:42 AM Bug #19024 (Can't reproduce): ec pool stuck incomplete, active+remapped -- crush mapping anomaly?
- samuelj@teuthology:/a/samuelj-2017-02-20_18:45:04-rados-wip-18937---basic-smithi/839838
I killed the osd.5 process...
02/20/2017
- 11:56 PM CephFS Bug #19022: Crash in Client::queue_cap_snap when thrashing
- http://pulpito.ceph.com/jspray-2017-02-20_15:59:37-fs-master---basic-smithi 837749 and 837668
- 11:55 PM CephFS Bug #19022 (Resolved): Crash in Client::queue_cap_snap when thrashing
- Seen on master. Mysterious regression....
- 11:51 PM Feature #14860 (Duplicate): scrub/repair: persist scrub results (do not overwrite deep scrub resu...
- 10:25 PM Bug #16878: filestore: utilization ratio calculation does not take journal size into account
- @David We found a reproducer for this, let me know if you need/want it.
- 10:10 PM Bug #19020 (Resolved): "unlikely race on master" OSDMapMapping.h: 31: FAILED assert(shards == 0)
- Per Josh's request
@FAILED: 92 - test-erasure-code.sh (Timeout)@ (can't attach the log, it's too big)
Was part of PRs testi... - 09:47 PM Stable releases Tasks #17851: jewel v10.2.6
- Release blockers to be merged urgently upon obtaining green rados/powercycle/upgrade:
* https://github.com/ceph/ce... - 09:31 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. ceph-disk...
- 09:21 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. Upgrade hammer-x ...
- 09:21 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. Upgrade jewel point-to-point-x...
- 09:20 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. powercycle...
- 09:19 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. rados...
- 09:15 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. fs...
- 11:53 AM Stable releases Tasks #17851: jewel v10.2.6
- ...
- 09:11 PM rgw Bug #19019 (Fix Under Review): multisite: RGWMetaSyncShardControlCR gives up on EIO
- https://github.com/ceph/ceph/pull/13546
- 08:58 PM rgw Bug #19019 (Resolved): multisite: RGWMetaSyncShardControlCR gives up on EIO
- Testing of multisite sync while the master zone is stopping OSDs was seen to cause several errors of the form:...
- 08:04 PM rgw Bug #19018 (Resolved): rgw_file: fix marker computation
- The underlying type of RGWFileHandle::Directory::marker_cache (dirent_string) was not correct for storing object mark...
- 08:03 PM rbd Feature #13025 (Resolved): Add scatter/gather support to librbd C/C++ APIs
- 06:49 PM Stable releases Tasks #19014: hammer v0.94.11
- h3. upgrade/firefly-x...
- 03:54 PM Stable releases Tasks #19014: hammer v0.94.11
- h3. krbd...
- 03:50 PM Stable releases Tasks #19014: hammer v0.94.11
- ...
- 03:49 PM Stable releases Tasks #19014 (Rejected): hammer v0.94.11
- h3. Workflow
* "Preparing the release":http://ceph.com/docs/master/dev/development-workflow/#preparing-a-new-relea... - 05:13 PM Bug #18516: "osd marked itself down" will not recognised if host runs mon + osd on shutdown/reboot
- systemd fixes are not needed in hammer, since systemd support was still in its infancy when hammer was released.
- 05:12 PM rgw Backport #17756 (Resolved): jewel: rgw: bucket resharding
- 05:12 PM rgw Backport #18563 (Resolved): jewel: leak from RGWMetaSyncShardCR::incremental_sync
- 05:11 PM rgw Backport #18773 (Resolved): jewel: rgw crashes when updating period with placement group
- 05:11 PM Backport #19004 (Resolved): jewel: tests: qa/suites/upgrade/hammer-x/stress-split: finish thrashi...
- 11:10 AM Backport #19004 (Resolved): jewel: tests: qa/suites/upgrade/hammer-x/stress-split: finish thrashi...
- https://github.com/ceph/ceph/pull/13222
- 05:11 PM Backport #19006 (Resolved): jewel: tests: upgrade/hammer-x/stress-split-erasure-code(-x86_64) bre...
- 11:15 AM Backport #19006 (In Progress): jewel: tests: upgrade/hammer-x/stress-split-erasure-code(-x86_64) ...
- https://github.com/ceph/ceph/pull/13222 broke the upgrade/hammer-x/stress-split-erasure-code and upgrade/hammer-x/str...
- 11:12 AM Backport #19006 (Resolved): jewel: tests: upgrade/hammer-x/stress-split-erasure-code(-x86_64) bre...
- https://github.com/ceph/ceph/pull/13533
- 05:11 PM Backport #18998 (In Progress): jewel: 'ceph auth import -i' overwrites caps, should alert user be...
- 10:51 AM Backport #18998 (Resolved): jewel: 'ceph auth import -i' overwrites caps, should alert user befor...
- https://github.com/ceph/ceph/pull/13544
- 05:05 PM Backport #18997 (In Progress): kraken: ceph-disk prepare get wrong group name in bluestore
- 10:51 AM Backport #18997 (Resolved): kraken: ceph-disk prepare get wrong group name in bluestore
- https://github.com/ceph/ceph/pull/13543
- 05:03 PM Bug #18960: PG stuck peering after host reboot
- Brad Hubbard wrote:
> The debug log shows the following.
>
> 2017-02-15 18:25:13.649706 We transition to primary ... - 05:00 PM Backport #18999 (In Progress): kraken: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_sha...
- 10:51 AM Backport #18999 (Resolved): kraken: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard)...
- https://github.com/ceph/ceph/pull/13542
- 04:57 PM rgw Bug #18076 (Pending Backport): multisite: some yields in RGWMetaSyncShardCR::full_sync() resume i...
- https://github.com/ceph/ceph/pull/12223
- 04:55 PM Backport #19000 (In Progress): jewel: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shar...
- 10:51 AM Backport #19000 (Resolved): jewel: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) ...
- https://github.com/ceph/ceph/pull/13541
- 04:29 PM Cleanup #19016 (Resolved): docs still refer to "mon osd min down reports" conf option
- commit:0269a0c17723fd3e22738f7495fe017225b924a4 removed the "mon_osd_min_down_reports" option. The docs still mention...
- 03:50 PM Bug #19015 (Resolved): ceph tell mon.* times out
- I think this is related to the monc refactor....
- 03:14 PM Backport #18812: jewel: hammer client generated misdirected op against jewel cluster
- Preferring Sage's backport https://github.com/ceph/ceph/pull/13255 over https://github.com/ceph/ceph/pull/13270 even ...
- 03:02 PM rgw Bug #19013 (Resolved): radosgw-admin: add the 'object stat' command to usage
- https://github.com/ceph/ceph/pull/13291
- 02:29 PM Bug #19005: spg_t::decode(ceph::buffer::list::iterator&) decode past end of struct encoding on ma...
- https://github.com/ceph/ceph/pull/13537
- 11:11 AM Bug #19005 (Resolved): spg_t::decode(ceph::buffer::list::iterator&) decode past end of struct enc...
- ...
- 02:23 PM Cleanup #19012 (Resolved): Add override in header files of rbd subsystem
- 02:12 PM rgw Bug #18331 (Resolved): RGW leaking data
- 02:04 PM CephFS Bug #18883: qa: failures in samba suite
- Latest run:
http://pulpito.ceph.com/jspray-2017-02-20_12:27:44-samba-master-testing-basic-smithi/
Now we're seein... - 08:01 AM CephFS Bug #18883: qa: failures in samba suite
- fix for the ceph-fuse bug: https://github.com/ceph/ceph/pull/13532
- 02:00 PM rgw Bug #19011: rgw: add radosclient finisher to perf counter
- https://github.com/ceph/ceph/pull/13535
- 02:00 PM rgw Bug #19011 (In Progress): rgw: add radosclient finisher to perf counter
- The radosclient finisher's processing is important, so I think it is necessary to add it to the perf counters.
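A hedged illustration of where such a counter would surface once added; the daemon socket name is illustrative:
  # dump the RGW daemon's perf counters over the admin socket
  ceph daemon client.rgw.gateway perf dump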
- 01:55 PM rbd Cleanup #19010 (Resolved): Simplify asynchronous image close behavior
- Currently, an image cannot be closed when invoked from the image's op work queue nor can the image's memory be releas...
- 12:32 PM Stable releases Tasks #19009 (Resolved): kraken v11.2.1
- h3. Workflow
* "Preparing the release":http://ceph.com/docs/master/dev/development-workflow/#preparing-a-new-relea... - 12:30 PM rgw Bug #19008 (New): rgw: adding bucket lifecycle does not work with V4 signature
- When trying to set a new bucket lifecycle using the AWS SDK, the request fails with "501 Not Implemented"....
- 11:44 AM Backport #18758 (Resolved): Various upgrade/hammer-x failures in jewel 10.2.6 integration testing
- 11:43 AM rgw Backport #18985 (In Progress): kraken: rgw: sending Content-Length in 204 and 304 responses shoul...
- 11:42 AM rgw Backport #18983 (In Progress): jewel: rgw: sending Content-Length in 204 and 304 responses should...
- 11:32 AM RADOS Bug #18996: api_misc: [ FAILED ] LibRadosMiscConnectFailure.ConnectFailure
- the authentication would time out at "15:59:24.639011"....
- 10:28 AM RADOS Bug #18996 (New): api_misc: [ FAILED ] LibRadosMiscConnectFailure.ConnectFailure
- ...
- 11:13 AM CephFS Bug #1656 (Won't Fix): Hadoop client unit test failures
- I'm told that there is a newer hdfs test suite that we would adopt if refreshing the hdfs support, so this ticket is ...
- 10:59 AM rgw Backport #17313 (Resolved): jewel: rgw-ldap: add ldap lib to rgw lib deps based on build config
- 10:59 AM rbd Backport #18285 (Resolved): jewel: partition func should be enabled When load nbd.ko for rbd-nbd
- 10:59 AM Backport #18512 (Resolved): build/ops: compilation error when --with-radowsgw=no
- 10:55 AM rgw Backport #19003 (In Progress): jewel: civetweb defaults to libssl.so and libcrypto.so when versio...
- 10:53 AM rgw Backport #19003 (Resolved): jewel: civetweb defaults to libssl.so and libcrypto.so when versions ...
- https://github.com/ceph/ceph/pull/12917
- 10:50 AM rgw Bug #11239 (Pending Backport): civetweb defaults to libssl.so and libcrypto.so when versions not ...
- 08:00 AM CephFS Bug #18995 (Fix Under Review): ceph-fuse always fails If pid file is non-empty and run as daemon
- https://github.com/ceph/ceph/pull/13532
- 07:58 AM CephFS Bug #18995 (Resolved): ceph-fuse always fails If pid file is non-empty and run as daemon
- It always fails with message like:...
- 07:53 AM mgr Bug #18994 (Closed): High CPU usage for ceph-mgr daemon v11.2.0
- Hello,
We are facing high CPU usage from the ceph-mgr process.
Environment:-
RHEL7.2
kraken - ... - 07:47 AM Bug #18967: Cluster can't process any new requests after 3 hosts crashed in 4+2 EC
- And restarting the rgw process won't release the OSD's throttle.
- 06:49 AM Bug #18993: osd/PrimaryLogPG.cc: 9888: FAILED assert(object_contexts.empty())
- and /a/kchai-2017-02-18_17:57:32-rados-wip-kefu-testing---basic-mira/831320
- 06:40 AM Bug #18993: osd/PrimaryLogPG.cc: 9888: FAILED assert(object_contexts.empty())
- -6945> 2017-02-19 00:35:28.425515 7f31585d3700 10 osd.4 pg_epoch: 285 pg[1.11( v 276'358 lc 265'344 (0'0,276'358] l...
- 06:38 AM Bug #18993 (Resolved): osd/PrimaryLogPG.cc: 9888: FAILED assert(object_contexts.empty())
- ...
- 06:29 AM Bug #18976: ceph-disk prepare accepts a directory who's name starts with /dev
- The --data-dev, --journal-dev (--*-dev for each kind of argument) can be used to fail in case the provided argument i...
- 06:11 AM rgw Bug #18965: rgw: S3 v4 sign is not working with aws java sdk
- Hi okwap,
I have tried this with Ceph master code; AWS v4 authorization passes, as:
2017-02-20 14:10:... - 03:06 AM Feature #18932 (Pending Backport): 'ceph auth import -i' overwrites caps, should alert user befor...
- 01:43 AM rgw Bug #18992 (Resolved): rgw_file: "exact match" invalid for directories, in RGWLibFS::stat_leaf()
- rgw_file: invalid use of RGWFileHandle::FLAG_EXACT_MATCH
The change which introduced this flag also ca... - 01:43 AM rgw Bug #18991 (In Progress): rgw_file: RGWReaddir (and cognate ListBuckets request) don't enumerate...
- 01:30 AM rgw Bug #18991 (Resolved): rgw_file: RGWReaddir (and cognate ListBuckets request) don't enumerate mu...
- This issue has one root cause in librgw, namely that the marker argument to these requests was incorrectly formatted ...
- 01:16 AM Bug #17821: ceph-disk and dmcrypt does not support cluster names different than 'ceph'
- Apologies. Wrong error message.
ganeshma@otccldstore04:~$ ceph-deploy osd activate otccldstore04:/dev/sdb1:/dev/nv... - 01:15 AM Bug #17821: ceph-disk and dmcrypt does not support cluster names different than 'ceph'
- I recently ran into issues deploying ceph using ceph-deploy with this change. I believe these two changes should fix ...
02/19/2017
- 11:56 PM rbd Bug #18990 (Resolved): [rbd-mirror] deleting a snapshot during sync can result in read errors
- Given an image with zero snapshots and some data written to object X, if you create a snapshot, start a full rbd-mirr...
- 11:20 PM rgw Bug #18989 (Resolved): rgw_file: allow setattr on placeholder (common_prefix) directories
- When a POSIX path <bucket>/foo/ is known only as an implicit path segment from other objects (e.g., <bucket>/foo/bar....
- 07:57 PM rbd Bug #18982: How to get out of weird situation after rbd flatten?
- The affected Ceph version as assigned to the ticket: 0.94.7. Kernel (on Ceph hosts) is 4.4.27 (soon to be updated to ...
- 06:55 PM rbd Bug #18982: How to get out of weird situation after rbd flatten?
- Please write Ceph and Kernel versions your cluster running.
- 11:25 AM Bug #18988 (Duplicate): Needs backport: ceph-disk prepare accepts a directory who's name starts w...
- Populated backport field in #18976
- 08:59 AM Bug #18988 (Duplicate): Needs backport: ceph-disk prepare accepts a directory who's name starts w...
- See http://tracker.ceph.com/issues/18976.
I have issued a pull request to fix the above issue at https://github.com/... - 10:54 AM rgw Feature #18855: support "user list" command in radosgw-admin
- RGW already has a `user list` command. This ticket could be closed.
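Per that comment, a quick hedged check (assuming an admin keyring is available):
  # radosgw-admin user list
The older @radosgw-admin metadata list user@ enumerates the same uids.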
- 09:50 AM CephFS Bug #18757: Jewel ceph-fuse does not recover after lost connection to MDS
- I created https://github.com/ceph/ceph/pull/13522
This resolves the hang and allows working with the mountpoint in this test ... - 09:14 AM Cleanup #17749 (Fix Under Review): duplicate upgrade warning code
- 08:59 AM Bug #18976 (Fix Under Review): ceph-disk prepare accepts a directory who's name starts with /dev
- 08:55 AM Bug #18976: ceph-disk prepare accepts a directory who's name starts with /dev
- See https://github.com/ceph/ceph/pull/13520
- 03:48 AM Backport #18723: kraken: osd: calc_clone_subsets misuses try_read_lock vs missing
- Cherry-picked:
3833440adea6f8bcb0093603c3a9d16360ed57ec
91b74235027c8a4872dcab6b37767b12c3267061 - 03:46 AM Backport #18723 (Need More Info): kraken: osd: calc_clone_subsets misuses try_read_lock vs missing
- 12:00 AM Bug #18960: PG stuck peering after host reboot
- The debug log shows the following.
2017-02-15 18:25:13.649706 We transition to primary and begin peering.
2017-...
02/18/2017
- 10:59 PM rbd Bug #18987 (Won't Fix): "[ FAILED ] TestLibRBD.ExclusiveLock" in upgrade:client-upgrade-kraken-...
- Run: http://pulpito.ceph.com/teuthology-2017-02-17_22:07:49-upgrade:client-upgrade-kraken-distro-basic-smithi/
Job: ... - 09:51 PM RADOS Documentation #18986 (New): Need to document monitor health configuration values
All configuration variables referenced in OSDMonitor::get_health() need to be documented. These values affect the ...- 06:59 PM Backport #18958 (In Progress): jewel: IPv6 Heartbeat packets are not marked with DSCP QoS - simpl...
- 05:54 PM Backport #18958 (Need More Info): jewel: IPv6 Heartbeat packets are not marked with DSCP QoS - si...
- https://jenkins.ceph.com/job/ceph-pull-requests/18689/consoleFull#-6328728062a811ea2-3e7b-466b-84b4-d13df7e35809
- 06:57 PM Bug #18929 (Pending Backport): "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || p...
- 01:38 PM devops Bug #17613 (Fix Under Review): Build ceph-resource-agents package for rpm based os
- https://github.com/ceph/ceph/pull/13515
- 01:21 PM devops Feature #15043 (Resolved): Build and distribute packages for Ubuntu 16.04 LTS (Xenial Xerus)
- I checked http://download.ceph.com/ and found Xenial packages there, including for hammer.
- 01:16 PM devops Bug #15452 (Resolved): https://jenkins.ceph.com/job/ceph-pull-requests/ stuck
- Appears to have been fixed long ago
- 01:11 PM Bug #18954 (Pending Backport): ceph-disk prepare get wrong group name in bluestore
- 01:17 AM Bug #18954: ceph-disk prepare get wrong group name in bluestore
- This PR has been merged. Please help to close this issue, thanks!
- 11:51 AM rgw Backport #18985 (Fix Under Review): kraken: rgw: sending Content-Length in 204 and 304 responses ...
- https://github.com/ceph/ceph/pull/13514
- 11:43 AM rgw Backport #18985 (In Progress): kraken: rgw: sending Content-Length in 204 and 304 responses shoul...
- 11:42 AM rgw Backport #18985 (Resolved): kraken: rgw: sending Content-Length in 204 and 304 responses should b...
- https://github.com/ceph/ceph/pull/13514
- 11:23 AM rbd Feature #18984 (New): RFE: let rbd export write directly to a block device
- It would be great if `rbd export` could write directly to a block device.
Right now it won't let you:
# rbd expo... - 06:48 AM rgw Bug #18965: rgw: S3 v4 sign is not working with aws java sdk
- Thanks. It would be better to change the RGW logging level to 20. You can do it as follows:
1. add "debug rgw = 20... - 06:35 AM Cleanup #17749 (In Progress): duplicate upgrade warning code
- https://github.com/ceph/ceph/pull/13512
02/17/2017
- 11:47 PM Backport #18959 (In Progress): jewel: Disallow enabling 'hashpspool' option to a pool without som...
- 11:08 PM rgw Backport #18983 (Fix Under Review): jewel: rgw: sending Content-Length in 204 and 304 responses s...
- 11:08 PM rgw Backport #18983 (Resolved): jewel: rgw: sending Content-Length in 204 and 304 responses should be...
- https://github.com/ceph/ceph/pull/13503
- 10:53 PM Backport #18958 (In Progress): jewel: IPv6 Heartbeat packets are not marked with DSCP QoS - simpl...
- 10:26 PM rgw Feature #16602 (Pending Backport): rgw: sending Content-Length in 204 and 304 responses should be...
- 10:21 PM rbd Bug #18982 (Duplicate): How to get out of weird situation after rbd flatten?
- Hope this is good for the tracker instead of the mailing list...
We have an image that was cloned from a snapshot:... - 10:05 PM ceph-ansible Bug #18981: Don't try to install ceph-fs-common in kraken or later
- The problem here is the 'ceph-fs-common' package. It's not in the chacra repo because it's not built anymore: http:/...
- 09:57 PM ceph-ansible Bug #18981: Don't try to install ceph-fs-common in kraken or later
- Let's manually try to install....
- 09:39 PM ceph-ansible Bug #18981: Don't try to install ceph-fs-common in kraken or later
- From: http://qa-proxy.ceph.com/teuthology/teuthology-2017-02-16_11:15:02-ceph-ansible-kraken-distro-basic-vps/821536/...
- 09:24 PM ceph-ansible Bug #18981: Don't try to install ceph-fs-common in kraken or later
- logs : http://pulpito.front.sepia.ceph.com/teuthology-2017-02-16_11:15:02-ceph-ansible-kraken-distro-basic-vps/821536/
- 09:16 PM ceph-ansible Bug #18981 (New): Don't try to install ceph-fs-common in kraken or later
- ceph-ansible version: 2.2.1
ceph version: Kraken
Distro: Xenial
looks like broken packages causing the following... - 08:43 PM rgw Bug #18980 (Resolved): rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
- During the clean up phase of teuthology (run as master as the --ceph-suite), an egrep of the logs reveals warnings si...
- 08:21 PM Bug #18979 (Duplicate): [ceph-mon] create-keys:Cannot get or create admin key
- During ceph-deploy run on master branch, I see the following only on *centos 7.3*...
- 06:58 PM Feature #18978 (New): ceph-disk: Implement easy replace of OSD
- As discussed here: http://marc.info/?l=ceph-devel&m=148717797822608&w=2
Maybe something like:... - 06:43 PM rgw Bug #18977 (Resolved): rgw: end_marker parameter doesn't work on Swift container's listing
- RadosGW and Swift are loaded with the same data.
RadosGW:
> $ curl -i "${publicURL}/TestContainer-32150453/?&end_... - 06:41 PM Backport #18378 (Need More Info): kraken: msg/simple/SimpleMessenger.cc: 239: FAILED assert(!clea...
- 06:39 PM Bug #18975: osd: pg log split does not rebuild index for parent or child
- https://github.com/ceph/ceph/pull/13493
- 05:47 PM Bug #18975 (Resolved): osd: pg log split does not rebuild index for parent or child
- 06:39 PM Backport #18431 (In Progress): kraken: ceph-disk: error on _bytes2str
- 06:33 PM Backport #18610 (Need More Info): kraken: osd: ENOENT on clone
- 06:30 PM Backport #18677 (Need More Info): kraken: OSD metadata reports filestore when using bluestore
- 06:27 PM Backport #18682 (In Progress): kraken: mon: 'osd crush move ...' doesnt work on osds
- 06:14 PM Backport #18723 (In Progress): kraken: osd: calc_clone_subsets misuses try_read_lock vs missing
- 06:09 PM Bug #18976 (Won't Fix): ceph-disk prepare accepts a directory who's name starts with /dev
This bug is a proposal for ceph-disk prepare to reject an argument
that starts with /dev if that argument is not a...- 06:04 PM CephFS Bug #18883: qa: failures in samba suite
- Testing fix for the fuse thing here: https://github.com/ceph/ceph/pull/13498
Haven't looked into the smbtorture fa... - 05:52 PM CephFS Bug #18883: qa: failures in samba suite
- The weird "failed to lock pidfile" ones are all with the "mount/native.yaml" fragment
- 05:44 PM CephFS Bug #18883: qa: failures in samba suite
- Some of them are something different, too:
http://qa-proxy.ceph.com/teuthology/zack-2017-02-08_12:21:51-samba-mast... - 05:32 PM CephFS Bug #18883: qa: failures in samba suite
- Weird, that was supposed to be fixed so that ceph-fuse never tries to create a pidfile: https://github.com/ceph/ceph/...
- 11:23 AM CephFS Bug #18883: qa: failures in samba suite
- all failure have similar errors...
- 06:03 PM Backport #18973 (In Progress): kraken: ceph-disk does not support cluster names different than 'c...
- 03:16 PM Backport #18973 (Resolved): kraken: ceph-disk does not support cluster names different than 'ceph'
- https://github.com/ceph/ceph/pull/13497
- 05:59 PM Backport #18972 (In Progress): jewel: ceph-disk does not support cluster names different than 'ceph'
- 02:49 PM Backport #18972 (Resolved): jewel: ceph-disk does not support cluster names different than 'ceph'
- https://github.com/ceph/ceph/pull/14765
- 05:50 PM Backport #18907 (In Progress): kraken: "osd marked itself down" will not recognised if host runs ...
- 05:46 PM Backport #18906 (In Progress): jewel: "osd marked itself down" will not recognised if host runs m...
- 05:32 PM Backport #18758 (In Progress): Various upgrade/hammer-x failures in jewel 10.2.6 integration testing
- 05:25 PM Backport #18952 (In Progress): kraken: segfault in ceph-osd --flush-journal
- 05:17 PM Backport #18957 (In Progress): jewel: ceph-disk: bluestore --setgroup incorrectly set with user
- 05:13 PM Backport #18956 (In Progress): kraken: ceph-disk: bluestore --setgroup incorrectly set with user
- 05:05 PM Backport #18756 (Rejected): upgrade/jewel-x in jewel should use latest released jewel as base
- We'll just run upgrade/jewel-x/point-to-point on jewel for integration testing and QE will run the full upgrade/jewel...
- 05:04 PM Backport #18894 (In Progress): kraken: Possible lockdep false alarm for ThreadPool lock
- 04:56 PM Backport #18842 (In Progress): kraken: kernel client feature mismatch on latest master test runs
- 04:45 PM Backport #18814 (In Progress): kraken: PrimaryLogPG: up_osd_features used without the requires_kr...
- 04:36 PM Linux kernel client Bug #18690: kclient: FAILED assert(0 == "old msgs despite reconnect_seq feature")
- Haomai, here's a test run:
http://pulpito.ceph.com/pdonnell-2017-02-17_15:35:17-multimds:thrash-master-testing-bas... - 04:35 PM CephFS Bug #18872: write to cephfs mount hangs, ceph-fuse and kernel
- make check finishes with 2 failed suites.
* FAIL: test/osd/osd-scrub-repair.sh
* FAIL: test/osd/osd-scrub-snaps.s... - 04:29 PM Bug #18694: ceph-disk activate for partition failing
- OK, I see that the most recent file from master introduces changes that are incompatible with the rest of the ceph co...
- 03:03 PM Bug #18694: ceph-disk activate for partition failing
- Hrm, it would seem I was a bit hasty.
After purging everything I had and starting again, using ceph-ansible to con... - 03:33 PM Stable releases Tasks #17151: hammer v0.94.10
- ...
- 02:49 PM rbd Backport #18971 (Resolved): jewel: AdminSocket::bind_and_listen failed after rbd-nbd mapping
- https://github.com/ceph/ceph/pull/14701
- 02:49 PM rbd Backport #18970 (Resolved): kraken: rbd: AdminSocket::bind_and_listen failed after rbd-nbd mapping
- https://github.com/ceph/ceph/pull/14540
- 02:47 PM rgw Backport #18969 (Resolved): jewel: Change loglevel to 20 for 'System already converted' message
- https://github.com/ceph/ceph/pull/13834
- 02:33 PM Bug #17821 (Pending Backport): ceph-disk and dmcrypt does not support cluster names different tha...
- 11:38 AM Bug #18968: mon changing client global_id on restart (failure in TestVolumeClient.test_data_isola...
Here it is: client.4219 sends an auth message, and gets its ID changed to 14100...- 11:35 AM Bug #18968 (Resolved): mon changing client global_id on restart (failure in TestVolumeClient.test...
This particular FS test restarts a mon while a client is mounted. The test is then hanging because that client app...- 11:28 AM CephFS Bug #17939: non-local cephfs quota changes not visible until some IO is done
- I just realised that rstats have the same problem. Client A is adding data to a Manila share, and Client B doesn't se...
- 11:06 AM Backport #18951 (In Progress): jewel: segfault in ceph-osd --flush-journal
- https://github.com/ceph/ceph/pull/13477
- 10:15 AM Bug #18967 (Closed): Cluster can't process any new requests after 3 hosts crashed in 4+2 EC
- Hi, guys. I'm using rgw on a 4+2 EC pool whose failure domain is host level. There are 20 hosts in total and each has...
- 07:54 AM rbd Bug #17951 (Pending Backport): AdminSocket::bind_and_listen failed after rbd-nbd mapping
- PR: https://github.com/ceph/ceph/pull/12433
- 07:12 AM rgw Bug #18965: rgw: S3 v4 sign is not working with aws java sdk
- ...
- 05:37 AM rgw Bug #18965: rgw: S3 v4 sign is not working with aws java sdk
- Please paste rgw's log if you could get. I'm working on this.
- 03:04 AM rgw Bug #18965: rgw: S3 v4 sign is not working with aws java sdk
- request ...
- 03:03 AM rgw Bug #18965 (Closed): rgw: S3 v4 sign is not working with aws java sdk
- java sdk version: 1.11.91
The following code snippet returns 400 bad request (copy from http://docs.ceph.com/docs/... - 06:06 AM Feature #18966 (New): Add mon_osd_down_out_subtree_max_osd
- In addition to mon_osd_down_out_subtree_limit there should be a config option mon_osd_down_out_subtree_max_osd
The... - 06:03 AM Bug #18962 (In Progress): ceph-disk: Zap disk doesn't clear OSD journal data
- See PR: -https://github.com/ceph/ceph/pull/13474-
- 02:36 AM CephFS Bug #18964 (Fix Under Review): mon: fs new is no longer idempotent
- PR: https://github.com/ceph/ceph/pull/13471
- 02:33 AM CephFS Bug #18964 (Resolved): mon: fs new is no longer idempotent
- ...
- 02:07 AM rgw Bug #18940: ERROR RESTFUL_IO with S3 GET/PUT operations
- Yehuda Sadeh wrote:
> does cosbench report any error?
No, cosbench reports no error. - 01:32 AM rgw Bug #18505: kill nfs-ganesha + rgwfile get segfault
- @matt, yes, nfs-ganesha will be blocked if the librgw thread joins,
and can you reproduce this issue with SIGUSR1? - 01:17 AM rgw Documentation #18889: rgw: S3 create bucket should not do response in json
- Cool.
02/16/2017
- 11:56 PM Bug #11224: crushtool --test --simulate not working correctly
- This appears to be due to the additional --simulate flag that is being passed to crushtool. The docs are somewhat thi...
- 11:49 PM Feature #18932 (Fix Under Review): 'ceph auth import -i' overwrites caps, should alert user befor...
- https://github.com/ceph/ceph/pull/13468
- 10:38 PM rgw Backport #18833 (Resolved): jewel: rgw: usage stats and quota are not operational for multi-tenan...
- 10:38 PM rbd Bug #18963: rbd-mirror: forced failover does not function when peer is unreachable
- The individual ImageReplayers are stuck in the STOPPING state, trying to stop the replay of the remote journal. Due t...
- 10:12 PM rbd Bug #18963 (Resolved): rbd-mirror: forced failover does not function when peer is unreachable
- When a local image is force promoted to primary, the local rbd-mirror daemon should detect that the local images are ...
- 10:00 PM rgw Bug #18885 (Fix Under Review): rgw: data sync of versioned objects, note updating bi marker
- 06:58 PM rgw Bug #18885: rgw: data sync of versioned objects, note updating bi marker
- https://github.com/ceph/ceph/pull/13426
- 09:58 PM CephFS Backport #18708 (In Progress): jewel: failed filelock.can_read(-1) assertion in Server::_dir_is_n...
- 02:43 PM CephFS Backport #18708 (Fix Under Review): jewel: failed filelock.can_read(-1) assertion in Server::_dir...
- https://github.com/ceph/ceph/pull/13459
- 09:49 PM Bug #18962 (Resolved): ceph-disk: Zap disk doesn't clear OSD journal data
- When zapping a disk and re-creating an OSD on the same disk, there is a chance that the partition table will be...
- 09:29 PM Bug #18961 (Resolved): objecter continually resends ops which don't have a callback
- This is triggered by the delete op sent during OSD flush.
- 09:16 PM CephFS Feature #17980 (In Progress): MDS should reject connections from OSD-blacklisted clients
- There's a jcsp/wip-17980 with a first cut of this.
- 06:55 PM rgw Bug #18939: rgw: versioned object sync inconsistency
- https://github.com/ceph/ceph/pull/13426
- 06:54 PM Backport #18813 (In Progress): hammer: hammer client generated misdirected op against jewel cluster
- 05:56 PM Backport #18813 (New): hammer: hammer client generated misdirected op against jewel cluster
- 06:51 PM rgw Bug #18940: ERROR RESTFUL_IO with S3 GET/PUT operations
- does cosbench report any error?
- 06:48 PM rgw Bug #18505: kill nfs-ganesha + rgwfile get segfault
- This doesn't look like a SIGSEGV (SIGUSR1).
I'll revisit shutdown process--I know you also posted a patch to addre... - 06:43 PM rgw Bug #18936: rgw slo manifest: etag and size should be optional
- Based on review of Swift behavior, agreed behavior for missing etag shall be changed to never report precondition-fai...
- 06:40 PM rgw Bug #18079 (Fix Under Review): rgw: need to stream metadata full sync init
- 06:37 PM rgw Bug #18829: RGW S3 v4 authentication issue with X-Amz-Expires
- 05:54 PM Bug #18840 (Resolved): Miss volume_backend_name in openstack cinder.conf example
- 05:29 PM rgw Documentation #18889 (In Progress): rgw: S3 create bucket should not do response in json
- 05:29 PM rgw Documentation #18889: rgw: S3 create bucket should not do response in json
- I've created https://github.com/ceph/ceph/pull/13461 to make a note in docs.
- 04:13 PM rgw Documentation #18889: rgw: S3 create bucket should not do response in json
- Maybe we should add a warning about this in the docs.
- 04:12 PM rgw Documentation #18889: rgw: S3 create bucket should not do response in json
- okwap okwap wrote:
> Abhishek Lekshmanan wrote:
> > Can you post more detailed logs on how you got the json, boto (... - 12:40 AM rgw Documentation #18889: rgw: S3 create bucket should not do response in json
- Abhishek Lekshmanan wrote:
> Can you post more detailed logs on how you got the json, boto (both 2 and 3) should wor... - 03:32 PM CephFS Feature #18490: client: implement delegation support in userland cephfs
- Zheng asked a pointed question about this today, so to be clear...
This would be 100% an opportunistic thing. You ... - 02:46 PM Backport #18120 (Resolved): jewel: fixed the issue when --disable-server, compilation fails
- 01:53 PM Bug #18960: PG stuck peering after host reboot
- Upload ID from ceph-post-file:
84bf663c-cfa4-48d5-9836-0fda8a4d8990 - 01:48 PM Bug #18960 (Closed): PG stuck peering after host reboot
- On a cluster running Jewel 10.2.5, rebooting a host resulted in a PG getting stuck in the peering state.
pg 1.323 ... - 01:11 PM rgw Bug #18919 (Pending Backport): Change loglevel to 20 for 'System already converted' message
- 12:57 PM Backport #18959 (Resolved): jewel: Disallow enabling 'hashpspool' option to a pool without some k...
- https://github.com/ceph/ceph/pull/13507
- 12:56 PM Backport #18958 (Resolved): jewel: IPv6 Heartbeat packets are not marked with DSCP QoS - simple m...
- https://github.com/ceph/ceph/pull/13450
- 12:56 PM Backport #18957 (Resolved): jewel: ceph-disk: bluestore --setgroup incorrectly set with user
- https://github.com/ceph/ceph/pull/13489
- 12:56 PM Backport #18956 (Resolved): kraken: ceph-disk: bluestore --setgroup incorrectly set with user
- https://github.com/ceph/ceph/pull/13488
- 12:56 PM Bug #18955 (Pending Backport): ceph-disk: bluestore --setgroup incorrectly set with user
- 12:54 PM Bug #18955 (Resolved): ceph-disk: bluestore --setgroup incorrectly set with user
- https://github.com/ceph/ceph/pull/13457
- 11:27 AM Bug #18954: ceph-disk prepare get wrong group name in bluestore
- *PR*: https://github.com/ceph/ceph/pull/13457
- 11:14 AM Bug #18954 (Resolved): ceph-disk prepare get wrong group name in bluestore
- When executing the ceph-disk prepare command with --setgroup in bluestore, we get the wrong group name.
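A hedged reproduction sketch of the reported case; the device path is illustrative:
  # prepare a bluestore OSD with explicit ownership; per the report, the group
  # applied here ends up wrong (taken from the user instead)
  ceph-disk --setuser ceph --setgroup ceph prepare --bluestore /dev/sdb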
- 11:20 AM RADOS Bug #18924: kraken-bluestore 11.2.0 memory leak issue
- This was discussed during yesterday's performance meeting and Sage suggested that this is indeed a memory leak.
Al... - 11:17 AM RADOS Bug #18926: Why osds do not release memory?
- Seems to be related to #18924, doesn't it?
Machines seem to be running out of memory with BlueStore. - 10:01 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. rbd...
- 10:00 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. rgw...
- 09:59 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. fs...
- 09:58 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. powercycle...
- 09:57 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. ceph-disk...
- 09:56 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. Upgrade hammer-x ...
- 09:55 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. Upgrade jewel point-to-point-x...
- 09:53 AM Stable releases Tasks #17851: jewel v10.2.6
- h3. rados...
- 08:31 AM CephFS Bug #18953 (Fix Under Review): mds applies 'fs full' check for CEPH_MDS_OP_SETFILELOCK
- https://github.com/ceph/ceph/pull/13455
- 08:13 AM CephFS Bug #18953 (Resolved): mds applies 'fs full' check for CEPH_MDS_OP_SETFILELOCK
- Clients should be able to acquire/release file locks when the fs is full.
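A hedged check of the intended behaviour; the mount point and file name are illustrative:
  # taking and releasing an advisory lock should succeed even while the fs is flagged full
  flock /mnt/cephfs/lockfile -c 'echo lock taken and released'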
- 06:17 AM Feature #18836: list-inconsistent-obj should show which osd is the primary
- Why do you need this feature?
- 04:50 AM Bug #18887 (Pending Backport): IPv6 Heartbeat packets are not marked with DSCP QoS - simple messe...
- https://github.com/ceph/ceph/pull/13370
- 04:49 AM Feature #18468 (Pending Backport): Disallow enabling 'hashpspool' option to a pool without some k...
- 12:18 AM Bug #17177 (Resolved): cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_dig...
02/15/2017
- 11:55 PM rbd Feature #13025: Add scatter/gather support to librbd C/C++ APIs
- *PR*: https://github.com/ceph/ceph/pull/13447
- 10:54 PM Backport #18952 (Resolved): kraken: segfault in ceph-osd --flush-journal
- https://github.com/ceph/ceph/pull/13490
- 10:54 PM Backport #18951 (Resolved): jewel: segfault in ceph-osd --flush-journal
- https://github.com/ceph/ceph/pull/13477
- 10:54 PM CephFS Backport #18950 (Resolved): kraken: mds/StrayManager: avoid reusing deleted inode in StrayManager...
- https://github.com/ceph/ceph/pull/14570
- 10:54 PM CephFS Backport #18949 (Resolved): jewel: mds/StrayManager: avoid reusing deleted inode in StrayManager:...
- https://github.com/ceph/ceph/pull/14670
- 10:54 PM rbd Backport #18948 (Resolved): jewel: rbd-mirror: additional test stability improvements
- https://github.com/ceph/ceph/pull/14154
- 10:54 PM rbd Backport #18947 (Resolved): kraken: rbd-mirror: additional test stability improvements
- https://github.com/ceph/ceph/pull/14155
- 10:48 PM rgw Backport #17886 (Resolved): jewel: multisite: ECANCELED & 500 error on bucket delete
- 10:47 PM rgw Backport #17961 (Resolved): rgw, jewel: TempURL fails if rgw_keystone_implicit_tenants is enabled
- 10:47 PM CephFS Backport #18100 (Resolved): jewel: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- 10:47 PM Backport #18183 (Resolved): jewel: cephfs metadata pool: deep-scrub error "omap_digest != best gu...
- 10:47 PM rbd Backport #18556 (Resolved): jewel: Potential race when removing two-way mirroring image
- 10:47 PM rbd Backport #18608 (Resolved): jewel: Removing a clone that fails to open its parent might leave dan...
- 10:47 PM RADOS Feature #18943: crush: add devices class that rules can use as a filter
- <loicd> sage: I'm confused by how we should handle the weights with the device classes. The weight of the generated b...
- 03:00 PM RADOS Feature #18943: crush: add devices class that rules can use as a filter
- Instead of ...
- 12:24 PM RADOS Feature #18943 (Resolved): crush: add devices class that rules can use as a filter
- h3. Problem
1. We want to have different types of devices (SSD, HDD, NVMe) backing different OSDs within the same ... - 10:47 PM CephFS Backport #18679 (Resolved): jewel: failed to reconnect caps during snapshot tests
- 10:46 PM Backport #18869 (Resolved): jewel: tests: SUSE yaml facets in qa/distros/all are out of date
- 10:46 PM Backport #18870 (Resolved): kraken: tests: SUSE yaml facets in qa/distros/all are out of date
- 10:46 PM Backport #18891 (Resolved): jewel: rgw: add option to log custom HTTP headers (rgw_log_http_headers)
- 09:55 PM Bug #18929: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up(osd_with_s...
- I don't understand why this is not popping up. Sage's patch is correct, but something else is going on. Why is the ...
- 09:54 PM Bug #18929: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up(osd_with_s...
- samuelj@teuthology:/a/samuelj-2017-02-15_01:03:44-rados-wip-sam-testing---basic-smithi/816292 also
- 06:24 PM rgw Bug #18936: rgw slo manifest: etag and size should be optional
- to break things down further, here's what happens when just size_bytes or just etag are omitted:
1. omit size_byte... - 12:31 AM rgw Bug #18936: rgw slo manifest: etag and size should be optional
- For the "hang" problem, I believe that you will find that radosgw didn't issue an "HTTP status" line. I get a hang w...
- 06:23 PM CephFS Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
- Jeff Layton wrote:
> Greg Farnum wrote:
> > Gah. I've run out of time to work on this right now. I've got a branch ... - 11:53 AM CephFS Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
- Greg Farnum wrote:
> Gah. I've run out of time to work on this right now. I've got a branch at git@github.com:gregsf... - 01:39 AM CephFS Bug #17594: cephfs: permission checking not working (MDS should enforce POSIX permissions)
- Gah. I've run out of time to work on this right now. I've got a branch at git@github.com:gregsfortytwo/ceph.git which...
- 03:16 PM rgw Backport #17208: jewel: rgw: setting rgw_swift_url_prefix = "/" doesn't work as expected
- https://github.com/ceph/ceph/pull/11497
- 02:59 PM rgw Bug #18841 (Fix Under Review): http referer acl conflict with public read in swift api
- https://github.com/ceph/ceph/pull/13294
- 02:41 PM rbd Bug #18935 (Pending Backport): rbd-mirror: additional test stability improvements
- 02:33 PM Stable releases Tasks #17851: jewel v10.2.6
- ...
- 02:23 PM rgw Documentation #18889: rgw: S3 create bucket should not do response in json
- Can you post more detailed logs on how you got the json, boto (both 2 and 3) should work seamlessly with radosgw
- 02:04 PM Cleanup #18922: add 'override' where possible when implementing an interface
- https://github.com/ceph/ceph/pull/13436
https://github.com/ceph/ceph/pull/13437
https://github.com/ceph/ceph/pull/1... - 01:19 PM Bug #18945 (Closed): ceph-disk activate with dmcrypt should mount lockbox directory
- During the activation sequence of the encrypted OSD the lockbox directory is not mounted which results in the followi...
- 01:18 PM Bug #18944 (Won't Fix): ceph-disk prepare with dmcrypt should unmount lockbox directory
- When the preparation of the encrypted OSD is completed, the lockbox directory is left mounted.
It should not be.
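A hedged way to spot the leftover mount after a dmcrypt prepare; the grep pattern is illustrative:
  mount | grep lockbox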
- 08:20 AM rgw Bug #18942 (New): swift ver location owner is not consistent, the object cannot be able to delete
- When the owner of the swift ver location is not consistent with the src bucket, the object in the src bucket cannot be deleted, w...
- 04:02 AM CephFS Bug #18941 (Fix Under Review): buffer overflow in test LibCephFS.DirLs
- https://github.com/ceph/ceph/pull/13429
- 03:57 AM CephFS Bug #18941 (Resolved): buffer overflow in test LibCephFS.DirLs
- http://pulpito.ceph.com/jspray-2017-02-14_02:39:19-fs-wip-jcsp-testing-20170214-distro-basic-smithi/812889
- 03:54 AM Bug #18857 (Fix Under Review): os/store_test is unable to vary min_alloc_size anymore
- https://github.com/ceph/ceph/pull/13415
- 03:47 AM rgw Bug #18940 (Closed): ERROR RESTFUL_IO with S3 GET/PUT operations
- I have a Ceph version 12.0.0 (b7d9d6eb542e2b946ac778bd3a381ce466f60f6a) cluster consisting of 1 node = 1 MON + 4 OSD ...
- 01:52 AM rgw Bug #18939 (Fix Under Review): rgw: versioned object sync inconsistency
- When running test_multi.py, one of the tests fail due to olh pointing at a different object version at one of the zon...
- 12:56 AM rbd Bug #18938 (Won't Fix): Unable to build 11.2.0 under i686
- Hello,
The ceph 11.2.0 tarball fails to build under the i686 architecture while it succeeds under x86_64.
Here is my ... - 12:52 AM Bug #15912: An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed
- https://github.com/ceph/ceph/pull/13425
There are multiple issues to address. This pull requests addresses some o... - 12:26 AM CephFS Feature #18490: client: implement delegation support in userland cephfs
- John Spray wrote:
> So the "client completely unresponsive but only evict it when someone else wants its caps" case ... - 12:09 AM Bug #18937 (Resolved): cache/tiering flush bug with head delete
- base: 77=[77,76,74,71,6f,6d,62,61]:[]+head
promoted at 77, then deleted in cache
cache: 7a=[7a,76,74,6f,6d,62,... - 12:08 AM Bug #18809 (Resolved): FAILED assert(object_contexts.empty()) (live on master only from Jan-Feb 2...
- 12:07 AM Bug #18529 (Resolved): ERROR: test_rados.TestRados.test_ping_monitor
- 12:07 AM Bug #18927: on_flushed: object ... obc still alive
02/14/2017
- 11:03 PM Bug #17826 (Resolved): ceph-init syntax error breaks package remove, daemon stop
- 06:40 PM Bug #17826 (In Progress): ceph-init syntax error breaks package remove, daemon stop
- 06:33 PM Bug #17826: ceph-init syntax error breaks package remove, daemon stop
- 299b7d06ac18c5cd30b8b65c7d25df9fc00287db introduced a syntax error into ceph_common.sh that prevents at least some pa...
- 06:17 PM Bug #17826: ceph-init syntax error breaks package remove, daemon stop
- Here's another case of nuke failing: http://paste2.org/8jzbtLGa
Note that ceph-common is explicitly removed early ... - 10:57 PM Feature #18932 (In Progress): 'ceph auth import -i' overwrites caps, should alert user before ove...
- 08:00 PM Feature #18932: 'ceph auth import -i' overwrites caps, should alert user before overwrite
- File - src/mon/AuthMonitor.cc...
- 07:54 PM Feature #18932 (Resolved): 'ceph auth import -i' overwrites caps, should alert user before overwrite
- - User unknowingly imported a ceph.client.admin.keyring file which was lacking caps and was situated in their /etc/ce...
- 10:45 PM rgw Bug #18936 (Fix Under Review): rgw slo manifest: etag and size should be optional
- In debugging a downstream problem with RGW swift SLO, I found the following apparent (?) issues
1. downloading an ... - 10:25 PM CephFS Bug #18830 (Resolved): Coverity: bad iterator dereference in Locker::acquire_locks
- 10:24 PM CephFS Bug #18877 (Pending Backport): mds/StrayManager: avoid reusing deleted inode in StrayManager::_pu...
- 10:21 PM CephFS Feature #18490: client: implement delegation support in userland cephfs
- So the "client completely unresponsive but only evict it when someone else wants its caps" case is http://tracker.cep...
- 04:27 PM CephFS Feature #18490: client: implement delegation support in userland cephfs
- I started taking a look at this. One thing we have to solve first, is that I don't think there is any automatic resol...
- 08:59 PM rbd Bug #18935 (Fix Under Review): rbd-mirror: additional test stability improvements
- *PR*: https://github.com/ceph/ceph/pull/13421
- 08:57 PM rbd Bug #18935 (Resolved): rbd-mirror: additional test stability improvements
- 08:43 PM Messengers Bug #18351: msg/DispatchQueue.h: 228: FAILED assert(mqueue.empty())
- /a/sage-2017-02-14_14:45:43-rados-wip-pg-split-interval---basic-smithi/815357
- 08:33 PM Bug #13522: Apparent deadlock between tcmalloc getting a stacktrace and dlopen allocating memory
- That this is occurring on xenial is disturbing, because it seems to have gperftools 2.4:...
- 02:34 PM Bug #13522: Apparent deadlock between tcmalloc getting a stacktrace and dlopen allocating memory
- still see this on xenial: /a/sage-2017-02-14_06:55:13-rados-wip-pg-split-interval---basic-smithi/813242
- 08:32 PM Bug #18933 (Resolved): asok ops operator<< races with handle_pull, which mutates MOSDPGPull
- handle_pull modifies the MOSDPGPull RecoveryInfo in place. An ill-timed asok dump ops will call operator<< and crash et...
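A hedged sketch of the racing admin-socket call; the osd id is illustrative:
  # formats each in-flight op with operator<<, racing handle_pull's in-place update
  ceph daemon osd.0 dump_ops_in_flight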
- 08:00 PM Bug #17831 (Resolved): osd: ENOENT on clone
- http://tracker.ceph.com/issues/18927 and http://tracker.ceph.com/issues/18809 were caused by this series, I don't thi...
- 07:13 PM Bug #18929: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up(osd_with_s...
- https://github.com/ceph/ceph/pull/13420
- 04:46 PM Bug #18929: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up(osd_with_s...
- ...
- 04:42 PM Bug #18929 (Resolved): "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up...
- (part of PRs testing batch https://trello.com/c/wqtuiCLb)
Run: http://pulpito.ceph.com/yuriw-2017-02-14_00:18:31-r... - 06:25 PM CephFS Bug #7750: Attempting to mount a kNFS export of a sub-directory of a CephFS filesystem fails with...
- Actually, it doesn't fail with stale; instead, the NFS mount command eventually times out.
mount.nfs: Connection timed out - 06:23 PM CephFS Bug #7750: Attempting to mount a kNFS export of a sub-directory of a CephFS filesystem fails with...
- Happens for me also:
Debian Jessie with backported kernel
Linux drbl 4.8.0-0.bpo.2-amd64 #1 SMP Debian 4.8.15-2~bpo... - 05:59 PM Bug #18820 (Pending Backport): segfault in ceph-osd --flush-journal
- https://github.com/ceph/ceph/pull/13311 merged
- 05:16 PM RADOS Bug #18930 (New): received Segmentation fault in PGLog::IndexedLog::add
- 2017-02-15 00:12:04.566736 7fee7b9ec700 -1 *** Caught signal (Segmentation fault) **
in thread 7fee7b9ec700 thread_... - 04:30 PM Bug #18928 (Resolved): IPv6 Heartbeat packets are not marked with DSCP QoS - async messenger
- osd_heartbeat_use_min_delay_socket is being ignored for IPv6.
(see also wip-18887 that covers the same change for ... - 04:23 PM Bug #18927 (Resolved): on_flushed: object ... obc still alive
- ...
- 03:47 PM rgw Feature #18916 (Fix Under Review): support non-current version expiration
- https://github.com/ceph/ceph/pull/13385
- 01:11 PM Cleanup #18922 (Fix Under Review): add 'override' where possible when implementing an interface
- https://github.com/ceph/ceph/pull/13414
- 08:11 AM Cleanup #18922 (Resolved): add 'override' where possible when implementing an interface
- https://trello.com/c/qxT20W82
- 12:09 PM RADOS Bug #18924: kraken-bluestore 11.2.0 memory leak issue
- Marek Panek wrote:
> We observe the same effect in 11.2 with bluestore. After some time OSDs consume ~6G RAM memory ... - 12:07 PM RADOS Bug #18924: kraken-bluestore 11.2.0 memory leak issue
- We observe the same effect in 11.2. After some time OSDs consume ~6G RAM memory (12 OSDs per 64G RAM server) and fina...
- 08:58 AM RADOS Bug #18924 (Resolved): kraken-bluestore 11.2.0 memory leak issue
- Hi All,
On our 5-node cluster with ceph 11.2.0 we encounter memory leak issues.
Cluster details : 5 node wi... - 12:00 PM Feature #16091 (Resolved): Monclient: hunt for mons in parallel
- 11:40 AM RADOS Bug #18926 (Duplicate): Why osds do not release memory?
- Version: K11.2.0, bluestore, 2x replication.
test: testing cluster with fio, with parmeters "-direct=1 -iodepth 6... - 11:10 AM CephFS Bug #18838 (Resolved): valgrind: Leak_StillReachable in libceph-common __tracepoints__init
- 10:38 AM RADOS Bug #18925 (Can't reproduce): Leak_DefinitelyLost in KernelDevice::aio_write
See on fs test branch based on master.
http://pulpito.ceph.com/jspray-2017-02-14_02:39:19-fs-wip-jcsp-testing-20...- 09:02 AM rbd Bug #18844: import-diff failed: (33) Numerical argument out of domain - if image size of the chil...
- Whoops - I forgot that one line. It is basically the same as in the validate case.
These are all the steps to repro... - 07:34 AM rbd Bug #18844: import-diff failed: (33) Numerical argument out of domain - if image size of the chil...
- how do you create vms/test-larger?
- 08:16 AM rgw Feature #18923 (Resolved): add a s3-test case in set acl
- Add a test case for setting the current version's ACL with no version id specified.
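A hedged sketch of the scenario such a test would cover; bucket and key names are illustrative:
  # set the ACL without --version-id; it should apply to the current version only
  aws s3api put-object-acl --bucket mybucket --key mykey --acl public-read
  aws s3api get-object-acl --bucket mybucket --key mykey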
- 01:55 AM rgw Bug #18921: change log level in user quota sync
- https://github.com/ceph/ceph/pull/13408
- 01:46 AM rgw Bug #18921 (Resolved): change log level in user quota sync
- If we create a user and this user hasn't done anything, the user's quota sync will print an error like the one below.
@0 ERROR: c... - 12:29 AM CephFS Bug #18914: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeC...
- Oh yeah, the client does have code that's meant to be doing that, and on the client side it's a wait_for_latest. So ...
02/13/2017
- 10:46 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. rbd...
- 09:52 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. rgw...
- 09:26 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. fs...
- 09:18 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. powercycle...
- 08:40 PM Stable releases Tasks #17851: jewel v10.2.6
- h3. rados...
- 06:49 AM Stable releases Tasks #17851: jewel v10.2.6
- ...
- 09:47 PM Feature #18468 (Fix Under Review): Disallow enabling 'hashpspool' option to a pool without some k...
- https://github.com/ceph/ceph/pull/13406
- 09:32 PM rbd Documentation #17978 (Resolved): Wrong diskcache parameter name for OpenStack Havana and Icehouse
- 08:20 PM rbd Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- I've opened a pull request: https://github.com/ceph/ceph/pull/13403
@Jason: The documentation fix doesn't apply to... - 08:20 PM Bug #18840 (Fix Under Review): Miss volume_backend_name in openstack cinder.conf example
- https://github.com/ceph/ceph/pull/13400
- 06:52 PM Bug #18840 (In Progress): Miss volume_backend_name in openstack cinder.conf example
- 07:20 PM rgw Feature #18916: support non-current version expiration
- I assume this is an RGW feature request.
- 08:50 AM rgw Feature #18916 (Fix Under Review): support non-current version expiration
- Add support for expiration of non-current object versions.
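A hedged example of the kind of lifecycle rule this would enable; the bucket name and values are illustrative:
  aws s3api put-bucket-lifecycle-configuration --bucket mybucket \
    --lifecycle-configuration '{"Rules":[{"ID":"expire-noncurrent","Status":"Enabled","Filter":{"Prefix":""},"NoncurrentVersionExpiration":{"NoncurrentDays":30}}]}'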
- 07:05 PM CephFS Bug #18914: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeC...
- That's odd; I thought clients validated pools before passing them to the mds. Maybe that's wrong or undesirable for o...
- 12:29 PM CephFS Bug #18914: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeC...
- Hmm, so this is happening because volume client creates a pool, then tries to use it as a layout at a time before its...
- 07:43 AM CephFS Bug #18914 (Resolved): cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client....
- http://qa-proxy.ceph.com/teuthology/teuthology-2017-02-12_10:10:02-fs-jewel---basic-smithi/808861/
- 06:34 PM rgw Bug #18919 (Fix Under Review): Change loglevel to 20 for 'System already converted' message
- 06:33 PM rgw Bug #18919: Change loglevel to 20 for 'System already converted' message
- https://github.com/ceph/ceph/pull/13399
- 05:04 PM rgw Bug #18919 (Resolved): Change loglevel to 20 for 'System already converted' message
- - If we upgrade hammer to jewel and metadata gets converted successfully from hammer to jewel, we log a message:
<... - 05:56 PM Bug #17573 (Duplicate): librados doesn't properly report failure to create socket
- This is a non-issue with async messenger, since it uses a fixed number of fds. In jewel, this is a well-known issue t...
- 05:17 PM Bug #17573: librados doesn't properly report failure to create socket
- rados bench doesn't use librbd -- the "problem" is at the lower level within librados. We really don't want VMs to cr...
- 05:12 PM Bug #18637 (Can't reproduce): "BlueStore.cc: 4967: FAILED assert(allow_eio || r != -5)" in powerc...
- Unable to reproduce this. Best guess is another bad mira disk.
- 04:07 PM Linux kernel client Bug #17153: kernel hung task warnings on teuthology.front kernel
- My guess at this point is that this may be a different manifestation of bug 18130. This is also occurring in the ceph...
- 04:03 PM Bug #18586 (Rejected): osd map update sending -1 in flags when pool hits quota
- NOTABUG. I got confused here by some protocol changes that occurred between John's original kernel patchset posting a...
- 04:00 PM Linux kernel client Bug #18686 (Resolved): too many on the wire revalidations from ceph_d_revalidate
- Moving to Resolved under the assumption that we'll be merging this set into v4.11.
- 03:59 PM Linux kernel client Bug #18474 (Resolved): oops in __unregister_request
- 03:39 PM rgw Bug #18747 (Fix Under Review): Unable to enforce RGW quota if there are more than one RGW nodes i...
- Related doc fix: https://github.com/ceph/ceph/pull/13395
- 03:29 PM rgw Bug #18918 (Resolved): rgw fails to compile with fastcgi support enabled
- 02:12 PM rgw Bug #18918: rgw fails to compile with fastcgi support enabled
- Fix submitted against master via: https://github.com/ceph/ceph/pull/13393
- 01:47 PM rgw Bug #18918 (Resolved): rgw fails to compile with fastcgi support enabled
- I see the following error when attempting to compile current master:
-------------
In file included from /home/da... - 03:14 PM Backport #18890 (Rejected): ruleset is out of scope in CrushTester::test_with_crushtoo
- -https://github.com/ceph/ceph/pull/13381-
- 04:13 AM Backport #18890 (Rejected): ruleset is out of scope in CrushTester::test_with_crushtoo
- https://github.com/ceph/ceph/pull/13627
- 03:12 PM rbd Feature #18865: rbd: wipe data in disk in rbd removing
- @Yang: As I mentioned, there is no way for librbd to overwrite snapshot objects -- they are read-only from the point-...
- 06:49 AM rbd Feature #18865: rbd: wipe data in disk in rbd removing
- Jason Dillaman wrote:
> @Yang: can you provide more background on your intended request use-case? If you are trying ... - 03:11 PM Cleanup #18846 (Pending Backport): remove qa/suites/buildpackages
- 03:10 PM Bug #17306 (Resolved): crushtool --compile is create output despite of missing item
- 02:38 PM CephFS Bug #18838 (Fix Under Review): valgrind: Leak_StillReachable in libceph-common __tracepoints__init
- https://github.com/ceph/ceph/pull/13394
- 01:52 PM rbd Subtask #18785 (In Progress): rbd-mirror A/A: separate ImageReplayer handling from Replayer
- 12:04 PM CephFS Bug #18915: valgrind causes ceph-fuse mount_wait timeout
- Probably duplicate of http://tracker.ceph.com/issues/18797
- 08:15 AM CephFS Bug #18915 (New): valgrind causes ceph-fuse mount_wait timeout
- http://pulpito.ceph.com/teuthology-2017-02-11_17:15:02-fs-master---basic-smithi/
http://qa-proxy.ceph.com/teutholo... - 10:44 AM rbd Feature #18917 (New): rbd: show the latest snapshot in rbd info
- When we do a snapshot rollback, we want to know which snapshot the current head is based on.
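A hedged sketch of where this could surface; the image spec is illustrative:
  # rbd info rbd/myimage
The request is for this output to also identify which snapshot the current head is based on.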
- 08:58 AM CephFS Bug #18872: write to cephfs mount hangs, ceph-fuse and kernel
- make check seems to get stuck after PASS: unittest_log on unittest_throttle.
_edit_ The machine has only very littl... - 07:24 AM rgw Backport #18913 (Duplicate): kraken: rgw: radosgw-admin orphan find goes into infinite loop
- 07:24 AM rgw Backport #18912 (Duplicate): jewel: rgw: radosgw-admin orphan find goes into infinite loop
- 07:24 AM rbd Backport #18911 (Resolved): jewel: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
- 07:24 AM rbd Backport #18910 (Resolved): kraken: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped c...
- https://github.com/ceph/ceph/pull/14540
- 07:23 AM rgw Backport #18909 (Resolved): kraken: rgw: the swift container acl does not support field ".ref"
- https://github.com/ceph/ceph/pull/14516
- 07:23 AM rgw Backport #18908 (Resolved): jewel: the swift container acl does not support field ".ref"
- https://github.com/ceph/ceph/pull/13833
- 07:23 AM Backport #18907 (Resolved): kraken: "osd marked itself down" will not recognised if host runs mon...
- https://github.com/ceph/ceph/pull/13494
- 07:23 AM Backport #18906 (Resolved): jewel: "osd marked itself down" will not recognised if host runs mon ...
- https://github.com/ceph/ceph/pull/13492
- 07:23 AM rgw Backport #18904 (Resolved): kraken: rgw: first write also tries to read object
- https://github.com/ceph/ceph/pull/14515
- 07:23 AM rgw Backport #18903 (Rejected): jewel: rgw: first write also tries to read object
- 07:23 AM rgw Backport #18902 (Resolved): kraken: librgw: path segments neglect to ref parents
- https://github.com/ceph/ceph/pull/13871
- 07:23 AM rgw Backport #18901 (Resolved): jewel: librgw: path segments neglect to ref parents
- https://github.com/ceph/ceph/pull/13583
- 07:22 AM CephFS Backport #18900 (Resolved): jewel: Test failure: test_open_inode
- https://github.com/ceph/ceph/pull/14669
- 07:22 AM CephFS Backport #18899 (Resolved): kraken: Test failure: test_open_inode
- https://github.com/ceph/ceph/pull/14569
- 07:22 AM rgw Backport #18898 (Resolved): kraken: no http referer info in container metadata dump in swift API
- https://github.com/ceph/ceph/pull/13829
- 07:22 AM rgw Backport #18897 (Rejected): jewel: no http referer info in container metadata dump in swift API
- 07:22 AM rgw Backport #18896 (Resolved): kraken: should parse the url to http host to compare with the contain...
- https://github.com/ceph/ceph/pull/13780
- 07:22 AM rgw Backport #18895 (Rejected): jewel: should parse the url to http host to compare with the containe...
- 07:22 AM Backport #18894 (Resolved): kraken: Possible lockdep false alarm for ThreadPool lock
- https://github.com/ceph/ceph/pull/13487
- 07:22 AM rbd Backport #18893 (Resolved): jewel: Incomplete declaration for ContextWQ in librbd/Journal.h
- https://github.com/ceph/ceph/pull/14152
- 07:22 AM rbd Backport #18892 (Resolved): kraken: Incomplete declaration for ContextWQ in librbd/Journal.h
- https://github.com/ceph/ceph/pull/14153
- 07:21 AM Backport #18848 (Resolved): jewel: remove qa/suites/buildpackages
- 07:20 AM Backport #18849 (Resolved): kraken: remove qa/suites/buildpackages
- 07:18 AM Backport #17334 (Resolved): jewel: crushtool --compile is create output despite of missing item
- 06:29 AM Backport #18657 (In Progress): jewel: Fix OSD network address in OSD heartbeat_check log message
- h3. original description
- Tracker [1] introduced this OSD network address in the heartbeat_check log message.
... - 06:21 AM Backport #18891 (Resolved): jewel: rgw: add option to log custom HTTP headers (rgw_log_http_headers)
- https://github.com/ceph/ceph/pull/12490
- 03:22 AM rgw Documentation #18889 (Resolved): rgw: S3 create bucket should not do response in json
- rgw s3 cannot work with boto3, because create bucket returns JSON instead of XML.
boto3 error:... - 02:58 AM rgw Bug #18828: RGW S3 v4 authentication issue with X-Amz-Expires
- ...
02/12/2017
- 11:03 AM Backport #18814: kraken: PrimaryLogPG: up_osd_features used without the requires_kraken flag in k...
- @Shinobu Backporting is an excellent way to practice with git :-) HINT: read about "interactive rebasing" - if you ma...
- 08:28 AM Backport #18814 (New): kraken: PrimaryLogPG: up_osd_features used without the requires_kraken fla...
- 08:27 AM Backport #18814: kraken: PrimaryLogPG: up_osd_features used without the requires_kraken flag in k...
- @Nathan I deleted the remote branch that was used for PR#13373 because the PR was definitely wrong and messy be...
- 12:11 AM Backport #18814 (In Progress): kraken: PrimaryLogPG: up_osd_features used without the requires_kr...
- @Nathan Yay finally. Hopefully there is nothing stupid :3
-https://github.com/ceph/ceph/pull/13373- - 09:32 AM Bug #18840 (New): Miss volume_backend_name in openstack cinder.conf example
- 12:37 AM Bug #18840 (Fix Under Review): Miss volume_backend_name in openstack cinder.conf example
- -https://github.com/ceph/ceph/pull/13374-
- 12:20 AM Bug #18840 (In Progress): Miss volume_backend_name in openstack cinder.conf example
- 07:05 AM Bug #18820: segfault in ceph-osd --flush-journal
- Alexey Sheplyakov wrote:
> @Sergey:
> Could you please check if https://github.com/ceph/ceph/pull/13311 solves the ... - 05:45 AM rbd Bug #18888 (Fix Under Review): rbd_clone_copy_on_read ineffective with exclusive-lock
- PR: https://github.com/ceph/ceph/pull/13196
- 05:10 AM rbd Bug #18888 (In Progress): rbd_clone_copy_on_read ineffective with exclusive-lock
- 05:10 AM rbd Bug #18888 (Resolved): rbd_clone_copy_on_read ineffective with exclusive-lock
- With layering+exclusive-lock feature, rbd_clone_copy_on_read does not trigger object copyups from parent image. This ...
02/11/2017
- 10:57 PM Backport #18814 (New): kraken: PrimaryLogPG: up_osd_features used without the requires_kraken fla...
- 06:40 PM Bug #18887 (Resolved): IPv6 Heartbeat packets are not marked with DSCP QoS - simple messenger
- osd_heartbeat_use_min_delay_socket is being ignored for IPv6.
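A hedged illustration of the underlying issue (a generic sketch, not the Ceph messenger code; the socket setup and the DSCP value 46 are assumptions made for the example): setting IP_TOS only marks IPv4 traffic, so an IPv6 socket must also set IPV6_TCLASS or the DSCP marking is silently dropped.
```cpp
// Illustrative only: mark a heartbeat-style socket with a DSCP value.
// IP_TOS applies to IPv4 sockets; IPv6 sockets need IPV6_TCLASS instead,
// otherwise the Traffic Class field stays unmarked.
#include <netinet/in.h>
#include <netinet/ip.h>
#include <sys/socket.h>
#include <cstdio>

static int set_dscp(int fd, int family, int dscp)
{
  int tos = dscp << 2;  // DSCP lives in the upper 6 bits of TOS/Traffic Class
  int r;
  if (family == AF_INET)
    r = setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
  else  // AF_INET6
    r = setsockopt(fd, IPPROTO_IPV6, IPV6_TCLASS, &tos, sizeof(tos));
  if (r < 0)
    perror("setsockopt");
  return r;
}

int main()
{
  int fd4 = socket(AF_INET, SOCK_STREAM, 0);
  int fd6 = socket(AF_INET6, SOCK_STREAM, 0);
  set_dscp(fd4, AF_INET, 46);   // 46 (EF) chosen only as an example value
  set_dscp(fd6, AF_INET6, 46);
  return 0;
}
```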
- 03:53 PM rgw Backport #18827 (Resolved): jewel: RGW leaking data
- 03:52 PM Bug #18516: "osd marked itself down" will not recognised if host runs mon + osd on shutdown/reboot
- @Greg IIRC systemd support wasn't production-ready when hammer was released, but it is backportable because there is ...
- 02:29 PM rbd Feature #18335 (Pending Backport): rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped co...
- 12:27 PM CephFS Bug #18872: write to cephfs mount hangs, ceph-fuse and kernel
- The commit is from the SUSE repo. It's part of the ses4 branch: https://github.com/SUSE/ceph/commits/ses4. Sorry, shoul...
- 12:22 AM CephFS Bug #18872: write to cephfs mount hangs, ceph-fuse and kernel
- PPC clients! Wondering if you've tried running any of the automated tests (the unit tests, or teuthology suites?) on...
02/10/2017
- 10:24 PM Bug #18817 (Duplicate): MDS crush: thread_name:ms_dispatch
- #18816
- 10:23 PM CephFS Bug #18816: MDS crashes with log disabled
- For some reason we still let people disable the MDS log. That's...bad. I think it only existed for some cheap benchma...
- 10:18 PM CephFS Bug #18872: write to cephfs mount hangs, ceph-fuse and kernel
- Well, the problem is clearly indicated by the client...
- 10:05 PM RADOS Bug #18749: OSD: allow EC PGs to do recovery below min_size
- See https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35273.html for user discovery.
- 08:56 PM rgw Bug #18885 (Resolved): rgw: data sync of versioned objects, note updating bi marker
- This happens because the unlink_instance has a higher mtime than the del object operation (that happens later), so wh...
- 08:04 PM rgw Backport #18827: jewel: RGW leaking data
- Tested, merged (Jewel):
https://github.com/ceph/ceph/pull/13358 - 07:59 PM Bug #18516 (Pending Backport): "osd marked itself down" will not recognised if host runs mon + os...
- Adding backport fields. This should go wherever we have systemd in supported releases; I think I got the right ones.
- 06:36 PM Bug #18516 (Resolved): "osd marked itself down" will not recognised if host runs mon + osd on shu...
- 06:19 PM rgw Bug #18829 (Fix Under Review): RGW S3 v4 authentication issue with X-Amz-Expires
- 01:13 PM rgw Bug #18829: RGW S3 v4 authentication issue with X-Amz-Expires
- I have verified this bug; it exists.
I have submitted a PR for this: https://github.com/ceph/ceph/pull/13354
- 06:12 PM rbd Bug #18884: systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/rbdmap
- I have a fix which uses an RBDMAP_UNMAP_ALL parameter in /etc/sysconfig/ceph to control whether all RBD images (if "y...
- 06:04 PM rbd Bug #18884 (Resolved): systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/r...
- Copy of downstream bug report:
When stopping the service rbdmap it unmaps ALL mapped RBDs instead just unmapping t... - 05:48 PM CephFS Bug #18661 (Pending Backport): Test failure: test_open_inode
- 03:17 PM Bug #18819 (Pending Backport): Possible lockdep false alarm for ThreadPool lock
- 02:44 PM CephFS Bug #18883 (New): qa: failures in samba suite
First Samba run in ages:
http://pulpito.ceph.com/zack-2017-02-08_12:21:51-samba-master---basic-smithi/
Let's ge...- 02:24 PM rbd Bug #17913: librbd io deadlock after host lost network connectivity
- @Dan van der Ster:
If you can install all necessary debug packages and get a complete gdb core backtrace via "thre... - 02:22 PM rbd Bug #18839 (Resolved): fsx segfault on clone op
- 02:19 PM rbd Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- @Michael: note that Icehouse and Havana are both EOLed by the upstream community. Does this issue apply to Grizzly+ r...
- 10:44 AM rbd Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- @Michael: Can you open a PR at https://github.com/ceph/ceph with your proposed fix? The documentation is under doc/
- 09:58 AM rbd Documentation #17978: Wrong diskcache parameter name for OpenStack Havana and Icehouse
- *Ping* Any progress on this?
- 02:17 PM rbd Feature #18865 (Need More Info): rbd: wipe data in disk in rbd removing
- @Yang: can you provide more background on your intended request use-case? If you are trying to implement a secure del...
- 01:59 PM rbd Bug #18862 (Pending Backport): Incomplete declaration for ContextWQ in librbd/Journal.h
- *PR*: https://github.com/ceph/ceph/pull/13322
- 01:11 PM rgw Bug #18828: RGW S3 v4 authentication issue with X-Amz-Expires
- https://github.com/ceph/ceph/pull/13354
- 06:29 AM rgw Bug #18828: RGW S3 v4 authentication issue with X-Amz-Expires
- I'm testing it and trying to fix it. Please assign this issue to me.
- 12:20 PM CephFS Bug #18882 (New): StrayManager::advance_delayed() can use tens of seconds
- I saw the MDS become laggy when running blogbench in a loop. The command I ran is "while `true`; do ls | xargs -P8 -n1 rm...
- 10:15 AM Backport #18881: jewel: fix test_rgw_ldap.cc for search filter
- ...
- 10:15 AM Backport #18881: jewel: fix test_rgw_ldap.cc for search filter
- In jewel, we already have the new LDAPHelper constructor,
so we should cherry-pick the following commit to make the test pass:... - 10:13 AM Backport #18881 (Rejected): jewel: fix test_rgw_ldap.cc for search filter
- https://github.com/ceph/ceph/pull/13355
- 09:59 AM Backport #18814: kraken: PrimaryLogPG: up_osd_features used without the requires_kraken flag in k...
- @Nathan Thank you so much for your help. Please let me give it another try and I will write something up here.
- 09:28 AM rgw Bug #18880 (New): error code of swift object versioning is not consistent with SWIFT
- When the versions location of the container does not exist and the object is uploaded two or more times to the container, it will...
- 09:13 AM Backport #18878: jewel: cmake: should define the install destination
- h3. description
In jewel, version 10.2.5 (8c87d09),
running `run_cmake_check` fails with the following error:... - 08:25 AM Backport #18878 (Fix Under Review): jewel: cmake: should define the install destination
- 07:09 AM Backport #18878: jewel: cmake: should define the install destination
- https://github.com/ceph/ceph/pull/13348
- 07:03 AM Backport #18878: jewel: cmake: should define the install destination
- After cherry-picking 7a602ec292b7e92c994d1718402e35a6c6692a02,
there is another problem, as follows:... - 07:00 AM Backport #18878: jewel: cmake: should define the install destination
- 7a602ec292b7e92c994d1718402e35a6c6692a02
will cherry-pick the above commit to solve it - 06:59 AM Backport #18878 (Closed): jewel: cmake: should define the install destination
- https://github.com/ceph/ceph/pull/13348
- 06:05 AM Bug #18809 (Fix Under Review): FAILED assert(object_contexts.empty()) (live on master only from J...
- https://github.com/ceph/ceph/pull/13342
- 03:26 AM CephFS Bug #18877: mds/StrayManager: avoid reusing deleted inode in StrayManager::_purge_stray_logged
- https://github.com/ceph/ceph/pull/13347
- 03:25 AM CephFS Bug #18877 (Resolved): mds/StrayManager: avoid reusing deleted inode in StrayManager::_purge_stra...
- This issue was found by testing another PR (https://github.com/ceph/ceph/pull/12792), which makes MDS directly uses T...
02/09/2017
- 11:39 PM Bug #18876 (Resolved): make check fails due to missing bc in ceph-helper.sh
- A bunch of tests are failing:
1 - test-ceph-helpers.sh (Failed)
5 - cephtool-test-mon.sh (Failed)
88 - test... - 11:30 PM RADOS Cleanup #18875 (New): osd: give deletion ops a cost when performing backfill
- From PrimaryLogPG, line 11134 (at time of writing)...
- 11:11 PM rgw Backport #17756 (In Progress): jewel: rgw: bucket resharding
- 07:31 PM rgw Bug #18593: radosgw/ssl: sslv3 vs. tls1
- Whatever is decided, we need to make it configurable. Default should probably be TLS1.
- 07:29 PM rgw Bug #18620: rgw should provide ALLOC_HINT_FLAG_INCOMPRESSIBLE when writing compressed/encrypted data
- https://github.com/ceph/ceph/pull/13165
- 07:28 PM rgw Bug #18639: multisite: incremental metadata sync does not properly advance to next period
- tests for this also need review in https://github.com/ceph/ceph/pull/13067
- 07:26 PM rgw Bug #18725: "zonegroupmap set" does not work
- What version are you running? This command is an artifact from multisite v1 and probably doesn't work and should be r...
- 07:20 PM rgw Bug #18726 (Closed): Starting an old rgw instance reverts my zonegroup changes
- Closing this. It was fixed after 10.2.2; that's why 10.2.5 works fine.
- 07:14 PM rgw Bug #18685 (Pending Backport): should parse the url to http host to compare with the container re...
- 07:13 PM rgw Bug #18665 (Pending Backport): no http referer info in container metadata dump in swift API
- 07:10 PM rgw Bug #18484 (Pending Backport): the swift container acl does not support field ".ref"
- 07:10 PM Backport #18814: kraken: PrimaryLogPG: up_osd_features used without the requires_kraken flag in k...
- @Shinobu The idea is to resolve the conflicts, if you can, and describe how you did it in the commit message. This ca...
- 01:27 AM Backport #18814 (Need More Info): kraken: PrimaryLogPG: up_osd_features used without the requires...
- 1st one is fine but 2nd one:
[root@octopus ceph]# git cherry-pick -x 1a5cc32f0a3bf5ef06642402e930e3786700ab7d
[wi... - 12:25 AM Backport #18814 (New): kraken: PrimaryLogPG: up_osd_features used without the requires_kraken fla...
- Yay git did some trick =)
- 07:05 PM rgw Bug #18852 (Fix Under Review): swift API: cannot disable object versioning with empty X-Versions-...
- 07:02 PM rgw Bug #18860 (Fix Under Review): the thread name of sync is not clear
- https://github.com/ceph/ceph/pull/13324
- 02:18 AM rgw Bug #18860 (Resolved): the thread name of sync is not clear
- The thread name used for sync is "radosgw", which is the same as the civetweb threads; that is not good for problem analysis.
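For context, a minimal sketch of per-thread naming on Linux (illustrative only; the name "data-sync" and the worker function are assumptions, not the actual RGW change): distinct names let the sync threads be told apart from the "radosgw" frontend threads in top -H, ps, or gdb.
```cpp
// Illustrative only: give a worker thread its own name so it is
// distinguishable in process listings and debuggers. On Linux the name
// is limited to 15 characters plus the terminating NUL.
// Build with: g++ -pthread example.cc
#include <pthread.h>
#include <chrono>
#include <thread>

static void sync_worker()
{
  pthread_setname_np(pthread_self(), "data-sync");        // hypothetical name
  std::this_thread::sleep_for(std::chrono::seconds(1));   // placeholder work
}

int main()
{
  std::thread t(sync_worker);
  t.join();
  return 0;
}
```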
- 07:01 PM rgw Bug #16736: rgw: a bucket with non-empty tenant can't link to specified user
- Yeah, currently the problem is that bucket instance id includes the tenant. Changing it is problematic, because it ca...
- 05:23 PM rgw Bug #16736: rgw: a bucket with non-empty tenant can't link to specified user
- I can confirm that this issue also exists in the most recent kraken release.
This unfortunately makes working in a mul... - 06:57 PM rgw Bug #17779: rgw: s3 API does not honor rgw_keystone_implicit_tenants when keystone integration is...
- @rzarzynski can you take a look at this one?
- 06:55 PM rgw Bug #17964: rgw: multipart parts on versioned bucket create versioned bucket index entries
- this is something we can test in ragweed.
- 06:54 PM rgw Bug #18079 (In Progress): rgw: need to stream metadata full sync init
- https://github.com/ceph/ceph/pull/12429
- 06:53 PM rgw Bug #17574: multisite: many duplicate mdlog entries cause race to sync and result in ECANCELED
- Discussed in bug scrub. We think that what we see there is not necessarily a bug, but s3-tests doing a lot of metadat...
- 06:52 PM Bug #18873 (Closed): Need to use thread safe random number generation (unless c++11 already provi...
We are using rand() throughout the code, which the man page says isn't thread safe. We should determine if rand() with c++11... (a hedged C++ sketch follows after the next entry) - 06:51 PM Bug #18533: two instances of omap_digest mismatch
- wip-18533 is now cleaned up and has two specific unit tests and a fuzzer which reproduce invalid iterator results.
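Regarding Bug #18873 above: a hedged sketch of what thread-safe generation with C++11 could look like (an illustration only, not the change made in Ceph; the helper name is invented). rand() shares hidden global state, whereas a thread_local <random> engine gives each thread its own independent state.
```cpp
// Sketch for Bug #18873 (not the actual Ceph change): each thread gets its
// own Mersenne Twister engine via thread_local, so concurrent callers never
// share generator state the way rand() does.
#include <cstdio>
#include <random>
#include <thread>
#include <vector>

static int thread_safe_rand(int lo, int hi)
{
  thread_local std::mt19937 gen{std::random_device{}()};
  std::uniform_int_distribution<int> dist(lo, hi);
  return dist(gen);
}

int main()
{
  std::vector<std::thread> workers;
  for (int i = 0; i < 4; ++i)
    workers.emplace_back([] { std::printf("%d\n", thread_safe_rand(0, 99)); });
  for (auto& t : workers)
    t.join();
  return 0;
}
```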
- 03:39 AM Bug #18533: two instances of omap_digest mismatch
- This is the result of one of Sam's now-failing tests with my complete checking code, which also outputs the complete...
- 01:50 AM Bug #18533: two instances of omap_digest mismatch
- Nevermind, the bug can produce a more general set of errors than I had realized. See the more recent updates to the ...
- 12:28 AM Bug #18533: two instances of omap_digest mismatch
- If the entries David added a few days ago are the right ones, then the above bug doesn't explain what's happening in ...
- 12:13 AM Bug #18533: two instances of omap_digest mismatch
- David: Can you add the list of keys which are present on that node but shouldn't be?
- 06:50 PM rgw Bug #18650 (Pending Backport): librgw: path segments neglect to ref parents
- 06:50 PM rgw Bug #18622 (Pending Backport): rgw: first write also tries to read object
- 06:42 PM rgw Bug #18258 (Pending Backport): rgw: radosgw-admin orphan find goes into infinite loop
- https://github.com/ceph/ceph/pull/13147
- 03:36 PM CephFS Bug #18872: write to cephfs mount hangs, ceph-fuse and kernel
- Also daemon commands don't return anything. That is for the client mds_requests and objecter_requests and ops_in_flig...
- 03:27 PM CephFS Bug #18872 (Resolved): write to cephfs mount hangs, ceph-fuse and kernel
- When trying to write to a cephfs mount using 'dd' the client hangs indefinitely. The kernel client can be <ctrl-c>'ed...
- 02:52 PM Backport #18805 (Resolved): kraken: "ERROR: Export PG's map_epoch 3901 > OSD's epoch 3281" in upg...
- 02:15 PM RADOS Bug #18871 (New): problem about create pool with expected-num-objects does not cause collection s...
- I created a pool and wanted the PG folder splitting to happen at pool creation time, but I found it did not happen.
1. ...
- Grabbing the raw file directly from git and putting it in place of the original definitely stops me getting the error...
- 12:18 PM Backport #18868 (In Progress): hammer: tests: SUSE yaml facets in qa/distros/all are out of date
- 11:54 AM Backport #18868 (Rejected): hammer: tests: SUSE yaml facets in qa/distros/all are out of date
- https://github.com/ceph/ceph/pull/13333
- 12:16 PM Linux kernel client Bug #18474: oops in __unregister_request
- On second thought, I don't really like that since __register_request doesn't put the thing on the list. I'm going to ...
- 12:10 PM Backport #18848 (In Progress): jewel: remove qa/suites/buildpackages
- 12:10 PM Backport #18849 (In Progress): kraken: remove qa/suites/buildpackages
- 12:07 PM rgw Bug #18828: RGW S3 v4 authentication issue with X-Amz-Expires
- Hi, what's your X-Amz-Expires?
- 12:04 PM Backport #18869 (In Progress): jewel: tests: SUSE yaml facets in qa/distros/all are out of date
- 11:54 AM Backport #18869 (Resolved): jewel: tests: SUSE yaml facets in qa/distros/all are out of date
- https://github.com/ceph/ceph/pull/13331
- 12:02 PM Backport #18870 (In Progress): kraken: tests: SUSE yaml facets in qa/distros/all are out of date
- 11:54 AM Backport #18870 (Resolved): kraken: tests: SUSE yaml facets in qa/distros/all are out of date
- https://github.com/ceph/ceph/pull/13330
- 11:53 AM Bug #18856 (Pending Backport): tests: SUSE yaml facets in qa/distros/all are out of date
- 11:32 AM rgw Backport #18866 (Resolved): jewel: 'radosgw-admin sync status' on master zone of non-master zoneg...
- https://github.com/ceph/ceph/pull/13779
- 10:22 AM rbd Feature #18864: rbd export/import for consistent group
- It should be a feature, not a bug.
- 10:15 AM rbd Feature #18864 (New): rbd export/import for consistent group
- 10:18 AM rbd Feature #18865: rbd: wipe data in disk in rbd removing
- It should be a feature instead of a bug.
- 10:16 AM rbd Feature #18865 (Rejected): rbd: wipe data in disk in rbd removing
- 10:14 AM rbd Feature #18863 (New): rbd export/import improvement.
- Add a snap timestamp for each diff, and add a CRC check for it.
- 09:37 AM CephFS Bug #18816: MDS crashes with log disabled
- * Updated by Ahmed Akhuraidah on the mailing list:
The issue can be reproduced with upstream Ceph packages.
ahmed@ubcephno... - 09:31 AM devops Bug #18464 (Resolved): FTBFS in ppc64(le)
- 09:25 AM rbd Bug #18862 (Fix Under Review): Incomplete declaration for ContextWQ in librbd/Journal.h
- PR: https://github.com/ceph/ceph/pull/13322
- 08:43 AM rbd Bug #18862 (Resolved): Incomplete declaration for ContextWQ in librbd/Journal.h
- There is an incomplete declaration for ContextWQ and we call its method in Journal<I>::MetadataListener::handle_updat...
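A generic illustration of this failure mode (the class and member names below are invented; this is not the librbd code): calling a member function through a type that is only forward-declared does not compile, so the full definition, normally pulled in via its header, has to be visible at the call site.
```cpp
// Generic illustration: member access through an incomplete (forward-declared)
// type is ill-formed; the complete class definition must be visible where the
// member is called.
class WorkQueue;                 // forward (incomplete) declaration

void broken(WorkQueue *wq) {
  // wq->queue(nullptr);         // error: member access into incomplete type
  (void)wq;
}

// The fix is to make the definition visible before the call, e.g. by
// including the header that defines the class (shown inline here).
class WorkQueue {
public:
  void queue(void *item) { (void)item; }
};

void fixed(WorkQueue *wq) {
  wq->queue(nullptr);            // fine: WorkQueue is now a complete type
}

int main() {
  WorkQueue wq;
  fixed(&wq);
  return 0;
}
```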
- 06:50 AM Linux kernel client Bug #18690: kclient: FAILED assert(0 == "old msgs despite reconnect_seq feature")
- yes..
- 12:41 AM Backport #18813 (Need More Info): hammer: hammer client generated misdirected op against jewel cl...
- @Nathan What should I do?
# git cherry-pick -x 923e7f5ce5ed437af15e178299a61029ff48e4a2
error: could not apply 9...
02/08/2017
- 11:33 PM Bug #18533: two instances of omap_digest mismatch
- I'm pretty comfortable pinning the cluster trouble on that one, assuming the extra keys and the overlapping complete ...
- 11:30 PM Bug #18533: two instances of omap_digest mismatch
- wip-18533 above now has a unit test which causes the iterator to return a deleted value.
- 08:28 PM Bug #18533: two instances of omap_digest mismatch
- debugging: https://github.com/athanatos/ceph/tree/wip-18533
- 08:28 PM Bug #18533: two instances of omap_digest mismatch
- <davidzlap> sjust: 100011cf577.00000000
<davidzlap> sjust: I meant http://pastebin.com/19W78B6U
<sjust> davidzlap: ... - 07:53 PM Bug #18533: two instances of omap_digest mismatch
- Corrupt complete mapping found on pg 1.25 primary osd.72 for oid 100011cf577.00000000:
http://pastebin.com/19W78B6U - 11:09 PM Backport #18814 (Need More Info): kraken: PrimaryLogPG: up_osd_features used without the requires...
- 11:07 PM Backport #18814: kraken: PrimaryLogPG: up_osd_features used without the requires_kraken flag in k...
- @Loic Sorry for the delay. The reason I waited is that there are 2 commits for the bug:
#1 https://github.... - 05:55 PM Backport #18814 (New): kraken: PrimaryLogPG: up_osd_features used without the requires_kraken fla...
- master pull request: https://github.com/ceph/ceph/pull/13114
Please set this to "In Progress" when a backport pull ... - 09:53 PM Cleanup #18846 (Fix Under Review): remove qa/suites/buildpackages
- 08:26 PM Cleanup #18846: remove qa/suites/buildpackages
- We need to backport https://github.com/ceph/ceph/pull/13319 as well.
- 08:10 PM RADOS Bug #18859 (Closed): kraken monitor fails to bootstrap off jewel monitors if it has booted before
- To reproduce: bootstrap a quorum off of jewel. Stop one of the monitors, remove its filesystem contents, re-create i...
- 08:06 PM Bug #18856 (Fix Under Review): tests: SUSE yaml facets in qa/distros/all are out of date
- *master PR*: https://github.com/ceph/ceph/pull/13313
- 02:25 PM Bug #18856 (Resolved): tests: SUSE yaml facets in qa/distros/all are out of date
- 07:31 PM rbd Subtask #18753 (In Progress): rbd-mirror HA: create teuthology thrasher for rbd-mirror
- 05:53 PM Backport #18849 (Resolved): kraken: remove qa/suites/buildpackages
- 05:41 PM Bug #18857 (Resolved): os/store_test is unable to vary min_alloc_size anymore
- Some test cases from store_test used to update min_alloc_size on the fly; they are unable to do that any more due to ...
- 05:00 PM RADOS Bug #18162: osd/ReplicatedPG.cc: recover_replicas: object added to missing set for backfill, but ...
- Sorry, github is off-limits for me (it tries to run non-Free Software on my browser, and it refuses to work if I don'...
- 03:59 PM Linux kernel client Bug #18690: kclient: FAILED assert(0 == "old msgs despite reconnect_seq feature")
- It looks like teuthology sets "ms die on old message = true" in its ceph.conf.template file.
Haomai, we don't *exp... - 02:05 AM Linux kernel client Bug #18690: kclient: FAILED assert(0 == "old msgs despite reconnect_seq feature")
- OH, I guess the kernel client handles the reconnect seq inconsistently with the async msgr. If the fuse client helps, it would be great.
- 03:26 PM rgw Bug #18091 (Pending Backport): 'radosgw-admin sync status' on master zone of non-master zonegroup
- 02:25 PM Linux kernel client Bug #18474: oops in __unregister_request
- Jeff Layton wrote:
> Is there ever a time you'd want to remove it from the request tree but leave it on the s_unsafe... - 12:05 PM Linux kernel client Bug #18474: oops in __unregister_request
- Zheng Yan wrote:
> I think wait_requests() should remove request from unsafe list before calling __unregister_reques... - 10:20 AM Linux kernel client Bug #18474: oops in __unregister_request
- I think wait_requests() should remove request from unsafe list before calling __unregister_request()
- 02:18 PM rbd Subtask #18784 (In Progress): rbd-mirror A/A: leader should track up/down rbd-mirror instances
- 02:17 PM rbd Subtask #18783 (Fix Under Review): rbd-mirror A/A: InstanceWatcher watch/notify stub for leader/f...
- PR: https://github.com/ceph/ceph/pull/13312
- 01:55 PM Bug #18820: segfault in ceph-osd --flush-journal
- @Sergey:
Could you please check if https://github.com/ceph/ceph/pull/13311 solves the problem for you? - 01:29 PM rgw Feature #18855 (Resolved): support "user list" command in radosgw-admin
- 12:23 PM devops Backport #18854 (Rejected): hammer: upstart: radosgw-all does not start on boot if ceph-base is n...
- https://github.com/ceph/ceph/pull/13777
- 12:23 PM devops Backport #18853 (Resolved): jewel: upstart: radosgw-all does not start on boot if ceph-base is no...
- https://github.com/ceph/ceph/pull/16294
- 12:17 PM Backport #18848 (Resolved): jewel: remove qa/suites/buildpackages
- 08:54 AM devops Bug #18464 (Fix Under Review): FTBFS in ppc64(le)
- 07:35 AM rgw Bug #18852: swift API: cannot disable object versioning with empty X-Versions-Location
- https://github.com/ceph/ceph/pull/13303
- 07:07 AM rgw Bug #18852 (Resolved): swift API: cannot disable object versioning with empty X-Versions-Location
- ceph swift cannot disable object versioning by removing its X-Versions-Location metadata header, i.e. by sending an empt...
- 05:16 AM devops Bug #18313 (Pending Backport): upstart: radosgw-all does not start on boot if ceph-base is not in...
- We might want to note the user-visible change in the release notes when backporting this change to jewel and hammer, where up...
02/07/2017
- 11:23 PM Bug #18533: two instances of omap_digest mismatch
- Here's the output from a deep-scrub on 2/7:...
- 11:11 PM Linux kernel client Bug #18690: kclient: FAILED assert(0 == "old msgs despite reconnect_seq feature")
- Client logs are missing because it's the kernel client. I will need to rerun the test suite to see if I can coax a fa...
- 05:29 AM Linux kernel client Bug #18690: kclient: FAILED assert(0 == "old msgs despite reconnect_seq feature")
- Needs client log too.. http://qa-proxy.ceph.com/teuthology/pdonnell-2017-02-06_19:24:21-multimds:thrash-master-testin...
- 10:13 PM Feature #18851 (New): Ability to add comments in certain views of Ceph daemons or status
- It would be nice to maintain a record of why an OSD is down, or why a flag has been set within the Ceph Cluster so th...
- 09:51 PM CephFS Bug #18850: Leak in MDCache::handle_dentry_unlink
- Attached full valgrind from mds.a in jspray-2017-02-07_16:25:53-multimds-wip-jcsp-testing-20170206-testing-basic-smit...
- 09:49 PM CephFS Bug #18850 (Rejected): Leak in MDCache::handle_dentry_unlink
While there are various bits of valgrind noise going around at the moment, this one does look like a multimds speci...- 09:22 PM Backport #18847 (In Progress): hammer: remove qa/suites/buildpackages
- 08:42 PM Backport #18847 (Rejected): hammer: remove qa/suites/buildpackages
- https://github.com/ceph/ceph/pull/13332
- 09:06 PM Backport #18848 (In Progress): jewel: remove qa/suites/buildpackages
- 08:42 PM Backport #18848 (Resolved): jewel: remove qa/suites/buildpackages
- https://github.com/ceph/ceph/pull/13299
https://github.com/ceph/ceph/pull/13331 - 08:57 PM Backport #18849 (In Progress): kraken: remove qa/suites/buildpackages
- 08:42 PM Backport #18849 (Resolved): kraken: remove qa/suites/buildpackages
- https://github.com/ceph/ceph/pull/13298
https://github.com/ceph/ceph/pull/13330 - 08:42 PM Cleanup #18846 (Pending Backport): remove qa/suites/buildpackages
- 05:38 PM Cleanup #18846 (Resolved): remove qa/suites/buildpackages
- It should live in teuthology, not in Ceph. And it is currently broken:
there is no need to keep it around.
https:... - 08:34 PM CephFS Bug #18845: valgrind failure in fs suite
- also at http://pulpito.ceph.com/abhi-2017-02-07_15:12:56-fs-wip-luminous-2---basic-smithi/795353/
- 03:38 PM CephFS Bug #18845 (New): valgrind failure in fs suite
- Saw a valgrind failure on fs suite on the master branch as of fc2df15, run http://pulpito.ceph.com/abhi-2017-02-07_09...
- 02:58 PM Linux kernel client Bug #18697: Kernel panic on cephfs kernel client (4.4.0-57-generic #78-Ubuntu SMP)
- I think we hit this again, but this time it's hanging there.
[980732.927323] BUG: unable to handle kernel NULL po... - 01:06 PM Bug #16263: "ERROR: Export PG's map_epoch 3901 > OSD's epoch 3281" in upgrade:infernalis-x-jewel-...
- additional fix from PR#13279 cherry-picked into all three backport PRs
- 12:14 PM rbd Bug #18844 (Resolved): import-diff failed: (33) Numerical argument out of domain - if image size ...
- *Steps to setup the test case (create a basic image):*
rbd create vms/test -s 1G
rbd snap create vms/test@snap
rbd... - 12:06 PM Linux kernel client Bug #18474: oops in __unregister_request
- I added this just before the kfree(req) in ceph_mdsc_release_request:...
- 11:49 AM rgw Backport #18843 (Resolved): kraken: rgw: usage stats and quota are not operational for multi-tena...
- https://github.com/ceph/ceph/pull/14513
- 11:48 AM Backport #18842 (Resolved): kraken: kernel client feature mismatch on latest master test runs
- https://github.com/ceph/ceph/pull/13485
- 10:57 AM Backport #18813: hammer: hammer client generated misdirected op against jewel cluster
- @Nathan O.k, got it.
- 10:55 AM Backport #18813: hammer: hammer client generated misdirected op against jewel cluster
- @Shinobu: We already know the master tracker # from the "Copied from" record. In the description, please put the gith...
- 10:38 AM Backport #18497 (In Progress): kraken: osd_recovery_incomplete: failed assert not manager.is_reco...
- https://github.com/ceph/ceph/pull/13295
- 10:04 AM rgw Bug #18841 (Resolved): http referer acl conflict with public read in swift api
- When the container read ACL is set to .r:*,.r:-.example.com, www.example.com will not have permission to read in openstack sw...
- 09:08 AM Linux kernel client Bug #18807 (Duplicate): I/O error on rbd device after adding new OSD to Crush map
- 08:41 AM Linux kernel client Bug #18807: I/O error on rbd device after adding new OSD to Crush map
- Hi Ilya.
I have upgraded the kernel and it looks like the bug was fixed - I can't reproduce it.
Thank you. - 09:05 AM Linux kernel client Bug #14901: misdirected requests on 4.2 during rebalancing
- In 3.16.39.
- 07:38 AM Bug #18840 (Resolved): Miss volume_backend_name in openstack cinder.conf example
- http://docs.ceph.com/docs/master/rbd/rbd-openstack/#configuring-cinder
<<<<< Current Content
[DEFAULT]
...
en... - 03:40 AM rbd Bug #18839: fsx segfault on clone op
- fixed by:
https://github.com/ceph/ceph/pull/13287 - 03:34 AM rbd Bug #18839 (Resolved): fsx segfault on clone op
- exec:
./ceph_test_librbd_fsx -N 1000 rbd fsx -d
segfault:
123 write 0x2398d thru 0x2b8d9 (0x7f4d bytes)... - 02:17 AM Bug #18820: segfault in ceph-osd --flush-journal
- Please ignore me - you appear to have stumbled across a different problem, given that you managed to get it to SEGFAU...
- 12:41 AM Bug #18820: segfault in ceph-osd --flush-journal
- Was this by chance a newly created journal? (As in, not really used before you tried to flush.) We had a similar pr...
- 01:33 AM RADOS Backport #17445: jewel: list-snap cache tier missing promotion logic (was: rbd cli segfault when ...
- FWIW: in our case, the rbd pool is tiered in write-back mode.
- 01:27 AM RADOS Backport #17445: jewel: list-snap cache tier missing promotion logic (was: rbd cli segfault when ...
- This particular snapshot was created on the 20th of January, and I'm relatively certain clients/osds/monitors/etc. r...
- 01:15 AM RADOS Backport #17445: jewel: list-snap cache tier missing promotion logic (was: rbd cli segfault when ...
- I suspect we're hitting the same....
- 12:01 AM Backport #18814 (In Progress): kraken: PrimaryLogPG: up_osd_features used without the requires_kr...