# v13.2.2

* Backport #24359: mimic: osd: leaked Session on osd.7
* Backport #24543: mimic: Dashboard: Prevent RGW API user deletion
* Backport #24629: mimic: rgw: fail to recover index from crash
* Bug #24674: mgr/dashboard: Unable to disable SSL for proxy environments
* Backport #24841: mimic: qa: move mds/client config to qa from teuthology ceph.conf.template
* Backport #24863: mimic: ceph_volume_client: allow atomic update of RADOS objects
* Backport #24905: mimic: mimic 13.2.0 doesn't build in Fedora rawhide
* Backport #24914: mimic: mon: prevent older/incompatible clients from mounting the file system
* Backport #24931: mimic: client: put instance/addr information in status asok command
* Backport #24945: mimic: image create request should validate data pool for self-managed snapshot support
* Backport #24984: mimic: 'radosgw-admin sync error trim' only trims partially
* Backport #24986: mimic: multisite: object metadata operations are skipped by sync
* Backport #24989: mimic: Limit pg log length during recovery/backfill so that we don't run out of memory.
* Backport #24992: mimic: valgrind-leaks.yaml: expected valgrind issues and found none
* Backport #25021: mimic: multisite: curl client does not time out on sync requests
* Backport #25032: mimic: SPDK compiles with -march=native
* Backport #25035: mimic: mds: dump MDSMap epoch to log at low debug
* Backport #25037: mimic: mds: scrub doesn't always return JSON results
* Backport #25040: mimic: mds: dump recent (memory) log messages before respawning due to being removed from MDSMap
* Backport #25042: mimic: mds: reduce debugging for missing inodes during subtree migration
* Backport #25044: mimic: overhead of g_conf->get_val("config name") is high
* Backport #25045: mimic: mds: create health warning if we detect metadata (journal) writes are slow
* Backport #25047: mimic: mds may get discontinuous mdsmap
* Backport #25055: mimic: doc: http://docs.ceph.com/docs/mimic/rados/operations/pg-states/
* Backport #25073: mimic: Boost system library is no longer required to compile and link example librados program
* Backport #25078: mimic: Intermittent http_status=409 with op status=-17 on ceph rgw with compression enabled
* Backport #25083: mimic: [deep-copy] object map can get improperly invalidated
* Backport #25088: mimic: change default rgw_thread_pool_size to 512
* Backport #25101: mimic: jewel->luminous: osdmap crc mismatch
* Backport #25118: mimic: newly added python test test_rbd.TestClone.test_trash_snapshot fails
* Backport #25119: mimic: tests: "cluster [WRN] 25 slow requests" in powercycle
* Backport #25120: mimic: mgr/dashboard: URL prefix is not working
* Backport #25121: mimic: [clone v2] auto-delete trashed snapshot upon release of last child
* Backport #25126: mimic: Allow repair of an object with a bad data_digest in object_info on all replicas
* Backport #25142: mimic: mimic selinux denials comm="tp_fstore_op / comm="ceph-osd dev=dm-0 and dm-1
* Backport #25144: mimic: Automatically set expected_num_objects for new pools with >=100 PGs per OSD
* Backport #25176: mimic: osd,mon: increase mon_max_pg_per_osd to 300
* Backport #25178: mimic: rados: not all exceptions accept keyargs
* Backport #25200: mimic: FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
* Backport #25202: mimic: ceph-mgr: Module 'influx' has failed
* Backport #25204: mimic: rados python bindings use prval from stack
* Backport #25206: mimic: CephVolumeClient: delay required after adding data pool to MDSMap
* Backport #25218: mimic: valgrind failures related to --max-threads prevent radosgw from starting
* Backport #25220: mimic: osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
* Backport #25222: mimic: common: Cond.h:C_SaferCond does not check done before calling cond.WaitInterval, creating a race condition
* Backport #25225: mimic: [RFE] Filestore split log should show PG that is splitting
* Backport #25226: mimic: OSD: still returning EIO instead of recovering objects on checksum errors
* Backport #26837: mimic: Can't turn off mgrc stats with mgr_stats_threshold
* Backport #26842: mimic: rgw_file: "deep stat"/stats of unenumerated paths not handled
* Backport #26845: mimic: Lifecycle rules number on one bucket should be limited.
* Backport #26847: mimic: Delete marker generated by lifecycle has no owner
* Backport #26849: mimic: rgw: civetweb fails on urls with control characters
* Backport #26870: mimic: osd: segfaults under normal operation
* Backport #26881: mimic: ceph-base debian package compiled on ubuntu/xenial has unmet runtime dependencies
* Backport #26888: mimic: mds: use self CPU usage to calculate load
* Backport #26903: mimic: qa: reduce slow warnings arising due to limited testing hardware
* Backport #26905: mimic: MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are processed before other client messages
* Backport #26907: mimic: kv: MergeOperator name() returns string, and caller calls c_str() on the temporary
* Backport #26909: mimic: PGLog.cc: saw valgrind issues while accessing complete_to->version
* Backport #26912: mimic: "balancer execute" only requires read permissions
* Backport #26914: mimic: handle ceph_ll_close on unmounted filesystem without crashing
* Backport #26916: mimic: doc: Fix broken urls
* Backport #26920: mimic: Mimic Dashboard does not allow deletion of snapshots containing "+" in their name
* Backport #26921: mimic: possibly wrong log level in gc_iterate_entries (src/cls/rgw/cls_rgw.cc:3291)
* Backport #26923: mimic: mds: mds got laggy because of MDSBeacon stuck in mqueue
* Backport #26929: mimic: MDSMonitor: note ignored beacons/map changes at higher debug level
* Backport #26931: mimic: scrub livelock
* Backport #26933: mimic: segv in OSDMap::calc_pg_upmaps from balancer
* Backport #26944: mimic: os/bluestore/BlueStore.cc: 1025: FAILED assert(buffer_bytes >= b->length) from ObjectStore/StoreTest.ColSplitTest2/2
* Backport #26946: mimic: tempest tests failing with pysaml2 version conflict
* Backport #26956: mimic: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
* Backport #26976: mimic: qa: kcephfs suite has kernel build failures
* Backport #26978: mimic: cephfs-data-scan: print the max used ino
* Backport #26980: mimic: multisite: intermittent failures in test_bucket_sync_disable_enable
* Backport #26982: mimic: mds: crash when dumping ops in flight
* Backport #26984: mimic: client: requests that do name lookup may be sent to wrong mds
* Backport #26988: mimic: mds: explain delayed client_request due to subtree migration
* Backport #26989: mimic: Implement "cephfs-journal-tool event splice" equivalent for purge queue
* Backport #27059: mimic: ceph-mgr package does not remove /usr/lib/ceph/mgr compiled files (Debian only?)
* Backport #27060: mimic: run-rbd-unit-tests.sh test fails to finish in jenkin's "make check" run
* Backport #27212: mimic: rpm: should change ceph-mgr package dependency from py-bcrypt to python2-bcrypt
* Backport #27213: mimic: libradosstriper conditional compile
* Backport #32082: mimic: mgr balancer does not save optimized plan but latest
* Backport #32086: mimic: mds: MDBalancer::try_rebalance() may stop prematurely
* Backport #32108: mimic: object errors found in be_select_auth_object() aren't logged the same
* Backport #32129: mimic: docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searchfilter'
* Backport #34532: mimic: force-create-pg broken
* Backport #35068: mimic: deep scrub cannot find the bitrot if the object is cached
* Backport #35070: mimic: cls/rgw: add rgw_usage_log_entry type to ceph-dencoder
* Backport #35073: mimic: osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to) after upgrade to luminous
* Backport #35078: mimic: broken bash example in bluestore migration
* Backport #35079: mimic: mgr/dashboard: RestClient can't handle ProtocolError exceptions
* Backport #35706: mimic: mgr/dashboard: Display RGW user/bucket quota max size in human readable form
* Backport #35722: mimic: evicting client session may block finisher thread
* Backport #35835: mimic: mgr/dashboard: Frontend timeouts when RGW takes too long to respond
* Bug #35906: ceph-disk: is_mounted() returns None for mounted OSDs with Python 3
* Backport #35942: mimic: "ceph tell osd.x bench" writes resulting JSON to stderr instead of stdout.
* Backport #35954: mimic: rgw: s3cmd sync fails
* Bug #35973: radosgw-admin bucket limit check stuck generating high read ops with > 999 buckets per user
* Bug #36234: swift: dump_account_metadata doesn't return quota info
* Bug #36345: librados C API aio read empty buffer
* Bug #36364: Bluestore OSD IO Hangs near Flush (flush in 90.330556)
* Bug #36398: There is no dependent package in the repository Ubuntu 18.04
* Bug #36531: 'MAX AVAIL' in 'ceph df' showing wrong information
* Bug #36619: radosgw-admin realm pull fails with an error "request failed: (13) Permission denied If the realm has been changed on the master zone, the master zone's gateway may need to be restarted to recognize this user."
* Bug #36700: Establishing a valid contact for telemetry plugin.
* Bug #36726: Module 'dashboard' has experienced an error and cannot handle commands: No module named ordered_dict
* Support #37157: how to use "RGW_ACCESS_KEY_ID" with S3/swift for AD user ?
* Bug #37445: RGW Swift API issue with repeatable downloading large object when range of bytes is required