# v10.2.8

* Backport #16585: jewel: enormous CLOSE_WAIT connections after re-spawning an mds daemon
* Backport #17768: jewel: transient jerasure unit test failures
* Backport #18193: jewel: transient jerasure unit test failures
* Backport #18321: jewel: librbd::ResizeRequest: failed to update image header: (16) Device or resource busy
* Backport #18496: jewel: Possible deadlock performing a synchronous API action while refresh in-progress
* Backport #18626: jewel: TempURL verification broken for URI encoded object names
* Backport #18669: jewel: [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade-kraken-distro-basic-smithi
* Backport #18683: jewel: mon: 'osd crush move ...' doesn't work on osds
* Backport #18699: jewel: client: fix the cross-quota rename boundary check conditions
* Backport #18705: jewel: fragment space check can cause replayed request to fail
* Backport #18775: jewel: Qemu crash triggered by network issues
* Backport #18778: jewel: rbd --pool=x rename y z does not work
* Backport #18792: jewel: Client message throttles are not changeable without restart
* Backport #18815: jewel: rados tool does not work with 0 length input (crashes if '--striper' enabled)
* Backport #18823: jewel: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD.ObjectMapConsistentSnap
* Backport #18866: jewel: 'radosgw-admin sync status' on master zone of non-master zonegroup
* Backport #18893: jewel: Incomplete declaration for ContextWQ in librbd/Journal.h
* Backport #18900: jewel: Test failure: test_open_inode
* Backport #18906: jewel: "osd marked itself down" will not be recognised if host runs mon + osd on shutdown/reboot
* Backport #18908: jewel: the swift container acl does not support field ".ref"
* Backport #18911: jewel: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped correctly
* Backport #18948: jewel: rbd-mirror: additional test stability improvements
* Backport #18949: jewel: mds/StrayManager: avoid reusing deleted inode in StrayManager::_purge_stray_logged
* Backport #18951: jewel: segfault in ceph-osd --flush-journal
* Backport #18957: jewel: ceph-disk: bluestore --setgroup incorrectly set with user
* Backport #18958: jewel: IPv6 Heartbeat packets are not marked with DSCP QoS - simple messenger
* Backport #18969: jewel: Change loglevel to 20 for 'System already converted' message
* Backport #18971: jewel: AdminSocket::bind_and_listen failed after rbd-nbd mapping
* Backport #18972: jewel: ceph-disk does not support cluster names different than 'ceph'
* Backport #18998: jewel: 'ceph auth import -i' overwrites caps, should alert user before overwrite
* Backport #19000: jewel: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up(osd_with_shard))" in rados/upgrade
* Backport #19044: jewel: buffer overflow in test LibCephFS.DirLs
* Backport #19048: jewel: multisite: some yields in RGWMetaSyncShardCR::full_sync() resume in incremental_sync()
* Backport #19062: jewel: Build ceph-resource-agents package for rpm-based OS
* Backport #19083: jewel: osd: preserve allocation hint attribute during recovery
* Backport #19142: jewel: Ceph Xenial Packages - ceph-base missing dependency for psmisc
* Backport #19145: jewel: rgw: a few cases where rgw_obj is incorrectly initialized
* Backport #19155: jewel: rgw: typo in rgw_admin.cc
* Backport #19158: jewel: RGW health check errors out incorrectly
* Backport #19163: jewel: radosgw-admin: add the 'object stat' command to usage
* Backport #19171: jewel: rgw: S3 create bucket should not do response in json
* Backport #19183: jewel: os/filestore/HashIndex: be loud about splits
* Backport #19206: jewel: Invalid error code returned by MDS is causing a kernel client WARNING
* Backport #19210: jewel: pre-jewel "osd rm" incrementals are misinterpreted
* Backport #19211: jewel: rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
* Backport #19223: jewel: osd crashes during hit_set_trim and hit_set_remove_all if hit set object doesn't exist
* Backport #19311: segmentation fault when calling do_fiemap() in filestore
* Backport #19314: jewel: osd: pg log split does not rebuild index for parent or child
* Backport #19321: jewel: multisite: possible infinite loop in RGWFetchAllMetaCR
* Backport #19325: jewel: [api] temporarily restrict (rbd_)mirror_peer_add from adding multiple peers
* Backport #19328: jewel: osd_snap_trim_sleep option does not work
* Backport #19330: jewel: upgrade to multisite v2 fails if there is a zone without zone info
* Backport #19332: jewel: brag fails to count "in" mds
* Backport #19334: jewel: MDS heartbeat timeout during rejoin, when working with large amounts of caps/inodes
* Backport #19352: jewel: RadosImport::import should return an error if Rados::connect fails
* Backport #19353: jewel: multisite: some 'radosgw-admin data sync' commands hang
* Backport #19355: jewel: when converting region_map we need to use rgw_zone_root_pool
* Backport #19357: jewel: systemctl stop rbdmap unmaps all rbds and not just the ones in /etc/ceph/rbdmap
* Backport #19358: jewel: rbdmap documentation is inadequate
* Backport #19392: jewel: mon: remove bad rocksdb option
* Backport #19394: OSD blocks all readonly ops when OSD reaches full
* Backport #19404: jewel: core: two instances of omap_digest mismatch
* Backport #19461: jewel: admin ops: fix the quota section
* Backport #19464: jewel: monitor creation with IPv6 public network segfaults
* Backport #19468: jewel: [api] is_exclusive_lock_owner doesn't detect that it has been blacklisted
* Backport #19469: jewel: rgw_file: leaf objects (which store Unix attrs) can be deleted when children exist
* Backport #19474: jewel: multisite: EPERM when trying to read SLO objects as system/admin user
* Backport #19476: jewel: RGW S3 v4 authentication issue with X-Amz-Expires
* Backport #19478: jewel: "zonegroupmap set" does not work
* Backport #19481: jewel: ceph degraded and misplaced status output inaccurate
* Backport #19482: jewel: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
* Backport #19484: jewel: Newly added OSD always down when full flag is set
* Backport #19493: jewel: ceph-disk: Racing between partition creation & device node creation
* Backport #19495: jewel: Objecter::epoch_barrier isn't respected in _op_submit()
* Backport #19508: Upgrading from 0.94.6 to 10.2.6 can overload monitors (failed to encode map with expected crc)
* Backport #19523: jewel: `radosgw-admin zone create` command with specified zone-id creates a zone with different id
* Backport #19525: jewel: rgwfs hung due to missing unlock within unlink operation
* Backport #19536: jewel: ceph-disk list reports mount error for OSD having mount options with SELinux context
* Tasks #19538: jewel v10.2.8
* Backport #19546: jewel integration branch fails to build for centos (regression)
* Backport #19547: rbdmap.service not included in debian packaging (jewel-only)
* Backport #19562: jewel: "api_misc: [ FAILED ] LibRadosMiscConnectFailure.ConnectFailure"
* Backport #19575: jewel: rgw: unsafe access in RGWListBucket_ObjStore_SWIFT::send_response()
* Backport #19607: jewel: multisite: fetch_remote_obj() gets wrong version when copying from remote
* Backport #19610: jewel: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return correct value
* Backport #19612: jewel: Issues with C API image metadata retrieval functions
* Backport #19617: jewel: common/LogClient.cc: 310: FAILED assert(num_unsent <= log_queue.size())
* Backport #19619: jewel: MDS server crashes due to inconsistent metadata
* Backport #19646: jewel: ceph-disk: directory-backed OSDs do not start on boot
* Backport #19660: jewel: rgw_file: fix readdir after dir-change
* Backport #19662: jewel: rgw_file: fix event expire check, don't expire directories being read
* Backport #19665: jewel: C_MDSInternalNoop::complete doesn't free itself
* Backport #19666: jewel: fs: the mount point breaks off when an mds switch happens
* Backport #19668: jewel: MDS goes readonly writing backtrace for a file whose data pool has been removed
* Backport #19671: jewel: MDS assert failed when shutting down
* Backport #19673: jewel: cephfs: mds crashes after setting about 400 64KB xattr kv pairs on a file
* Backport #19675: jewel: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeClient)
* Backport #19677: jewel: Jewel ceph-fuse does not recover after lost connection to MDS
* Backport #19686: jewel: Give requested scrubs a higher priority
* Backport #19690: jewel: Improvements to crushtool manpage
* Backport #19701: jewel: osd/PGLog.cc: 1047: FAILED assert(oi.version == i->first)
* Backport #19709: jewel: Enable MDS to start when session ino info is corrupt
* Backport #19711: jewel: [test] test_notify.py: rbd.InvalidArgument: error updating features for image test_notify_clone2
* Backport #19722: jewel: rgw_file: introduce rgw_lookup type hints
* Backport #19724: jewel: RGW S3 v4 authentication issue with X-Amz-Expires
* Backport #19727: jewel: rbd-nbd: immediate seg fault starting the daemon
* Backport #19728: jewel: rgw: add radosgw-admin command to check progress toward bucket sharding limits
* Backport #19736: radosgw/s3 chunked transfer encodings and fast_forward_request
* Backport #19757: jewel: fix failure to create bucket if a non-master zonegroup has a single zone
* Backport #19762: jewel: non-local cephfs quota changes not visible until some IO is done
* Backport #19772: jewel: rgw: swift: disable revocation thread under certain circumstances
* Backport #19774: jewel: osd: promote throttle parameters are reversed
* Backport #19786: jewel: ceph jewel fails to create s3 type subuser from admin rest api
* Backport #19806: jewel: APIs to support Ragweed suite
* Backport #19846: jewel: write to cephfs mount hangs, ceph-fuse and kernel
* Backport #20014: jewel: multisite: bi_list() decode failures
* Backport #20027: jewel: Deadlock on two ceph-fuse clients accessing the same file
* Backport #20032: jewel: osd_scrub_sleep option blocks op thread in jewel + later
* Backport #20078: rgw jewel valgrind failure: make_params
* Backport #20088: valgrind reports leak in RGWRemoteMetadataCR
* Backport #20126: Jewel: Can't repair when there is only an attr object error
* Backport #20140: jewel: Journaler may execute on_safe contexts prematurely
* Backport #20148: jewel: Too many stat ops when MDS trying to probe a large file
* Backport #20151: jewel: ceph-disk fails if OSD udev rule triggers prior to mount of /var
* Backport #20157: jewel: rgw_file: handle chunked readdir
* Backport #20172: jewel: PR #14886 creates a SafeTimer thread per PG
* Backport #20401: jewel: "MaxWhileTries: 'wait_until_healthy' reached maximum tries (150) after waiting for 900 seconds" in upgrade/hammer-jewel-x/parallel
* Backport #20412: test_remote_update_write (tasks.cephfs.test_quota.TestQuota) fails in Jewel 10.2.8 integration testing
* Bug #20502: crush: Jewel upgrade misbehaving with custom roots/rulesets
* Backport #20578: jewel: mon: fix 'sortbitwise' warning on jewel
* Feature #20733: RGW bucket limits