# v12.2.9

* Backport #22504: luminous: client may fail to trim as many caps as MDS asked for
* Backport #23408: luminous: mgrc's ms_handle_reset races with send_pgstats()
* Backport #23604: luminous: Discard ops should flush affected objects from in-memory cache
* Backport #23998: luminous: osd/EC: slow/hung ops in multimds suite test
* Backport #24478: luminous: read object attrs failed at EC recovery
* Backport #24630: luminous: cls_bucket_list fails causes cascading osd crashes
* Backport #24842: luminous: qa: move mds/client config to qa from teuthology ceph.conf.template
* Backport #24862: luminous: ceph_volume_client: allow atomic update of RADOS objects
* Backport #24912: luminous: qa: multifs requires 4 mds but gets only 2
* Backport #24934: luminous: cephfs-journal-tool: wrong layout info used
* Backport #24946: luminous: image create request should validate data pool for self-managed snapshot support
* Backport #24983: luminous: 'radosgw-admin sync error trim' only trims partially
* Backport #24985: luminous: multisite: object metadata operations are skipped by sync
* Backport #24988: luminous: Limit pg log length during recovery/backfill so that we don't run out of memory.
* Backport #25025: luminous: cls_rgw test is only run in rados suite: add it to rgw suite as well
* Backport #25043: luminous: overhead of g_conf->get_val("config name") is high
* Backport #25046: luminous: mds: create health warning if we detect metadata (journal) writes are slow
* Backport #25087: luminous: change default rgw_thread_pool_size to 512
* Backport #25145: luminous: Automatically set expected_num_objects for new pools with >=100 PGs per OSD
* Backport #25177: luminous: osd,mon: increase mon_max_pg_per_osd to 300
* Backport #25199: luminous: FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
* Backport #25203: luminous: rados python bindings use prval from stack
* Backport #25205: luminous: CephVolumeClient: delay required after adding data pool to MDSMap
* Backport #25217: luminous: valgrind failures related to --max-threads prevent radosgw from starting
* Backport #25219: luminous: osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
* Backport #26838: luminous: Can't turn off mgrc stats with mgr_stats_threshold
* Backport #26840: luminous: librados application's symbol could conflict with the libceph-common
* Backport #26844: luminous: rgw_file: "deep stat"/stats of unenumerated paths not handled
* Backport #26846: luminous: Lifecycle rules number on one bucket should be limited.
* Backport #26848: luminous: Delete marker generated by lifecycle has no owner
* Backport #26851: luminous: ceph_volume_client: py3 compatible
* Backport #26885: luminous: mds: reset heartbeat map at potential time-consuming places
* Backport #26889: luminous: mds: use self CPU usage to calculate load
* Backport #26904: luminous: qa: reduce slow warnings arising due to limited testing hardware
* Backport #26906: luminous: MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are processed before other client messages
* Backport #26908: luminous: kv: MergeOperator name() returns string, and caller calls c_str() on the temporary
* Backport #26910: luminous: PGLog.cc: saw valgrind issues while accessing complete_to->version
* Backport #26915: luminous: handle ceph_ll_close on unmounted filesystem without crashing
* Backport #26917: luminous: doc: Fix broken urls
* Backport #26922: luminous: possibly wrong log level in gc_iterate_entries (src/cls/rgw/cls_rgw.cc:3291)
* Backport #26924: luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
* Backport #26930: luminous: MDSMonitor: note ignored beacons/map changes at higher debug level
* Backport #26934: luminous: segv in OSDMap::calc_pg_upmaps from balancer
* Backport #26977: luminous: cephfs-data-scan: print the max used ino
* Backport #26979: luminous: multisite: intermittent failures in test_bucket_sync_disable_enable
* Backport #26981: luminous: mds: crash when dumping ops in flight
* Backport #26983: luminous: client: requests that do name lookup may be sent to wrong mds
* Backport #26987: luminous: mds: explain delayed client_request due to subtree migration
* Backport #26990: luminous: mds: curate priority of perf counters sent to mgr
* Backport #26992: luminous: discover_all_missing() not always called during activating
* Backport #27058: luminous: ceph-mgr package does not remove /usr/lib/ceph/mgr compiled files (Debian only?)
* Backport #27061: luminous: run-rbd-unit-tests.sh test fails to finish in jenkin's "make check" run
* Backport #27987: luminous: Refuses to release lock when cookie is the same at rewatch
* Backport #32080: luminous: mgr balancer does not save optimized plan but latest
* Backport #32084: luminous: mds: MDBalancer::try_rebalance() may stop prematurely
* Backport #32088: luminous: mds: use monotonic clock for beacon sender thread waits
* Backport #32098: luminous: mds: optimize the way how max export size is enforced
* Backport #32103: luminous: mds: allows client to create ".." and "." dirents
* Backport #32106: luminous: object errors found in be_select_auth_object() aren't logged the same
* Backport #32127: luminous: docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searchfilter'
* Backport #35069: luminous: cls/rgw: add rgw_usage_log_entry type to ceph-dencoder
* Backport #35072: luminous: osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to) after upgrade to luminous
* Backport #35537: luminous: Bad URL for unmap.t in krbd run
* Backport #35703: luminous: multisite: out of order updates to sync status markers
* Backport #35704: luminous: "rbd import --export-format 2" fails when the input is a pipe
* Backport #35707: luminous: A period pull occasionally raises "curl_easy_perform returned status 28 error: Operation too slow"
* Backport #35709: luminous: deadlock on shutdown in RGWIndexCompletionManager::stop()
* Backport #35711: luminous: Enabling journaling on an in-use image ignores any journal options
* Backport #35713: luminous: [rbd-mirror] aborted in Operation::execute_snap_remove()
* Backport #35716: luminous: msg: "challenging authorizer" messages appear at debug_ms=0
* Backport #35718: luminous: mds: beacon spams is_laggy message
* Backport #35721: luminous: evicting client session may block finisher thread
* Backport #35838: luminous: mds: use monotonic clock for beacon message timekeeping
* Backport #35844: luminous: objecter cannot resend split-dropped op when racing with con reset
* Backport #35854: luminous: should remove mentioning of "scrubq" in ceph(8) manpage
* Backport #35856: luminous: multisite: segfault on shutdown/realm reload
* Backport #35859: luminous: MDSMonitor: lookup of gid in prepare_beacon that has been removed will cause exception
* Backport #35929: luminous: mon/OSDMonitor: cancel_report causes obsolete max_failed_since
* Backport #35931: luminous: mds: retry remounting in ceph-fuse on dcache invalidation
* Backport #35933: luminous: client: cannot list out files created by another ceph-fuse client
* Backport #35937: luminous: mds: add average session age (uptime) perf counter
* Backport #35939: luminous: client: statfs inode count odd
* Backport #35941: luminous: "ceph tell osd.x bench" writes resulting JSON to stderr instead of stdout.
* Backport #35958: luminous: assert in execute_flatten() when flattening a clone with no overlap
* Backport #35960: luminous: assert(total_data_size % sinfo.get_chunk_size() == 0) with ec overwrite flag set
* Backport #35962: luminous: choose_acting picked want > pool size
* Backport #35976: luminous: mds: configurable timeout for client eviction
* Backport #35978: luminous: multisite: incremental data sync makes unnecessary call to RGWReadRemoteDataLogShardInfoCR
* Backport #35980: luminous: multisite: data sync error repo processing does not back off on empty
* Backport #35981: luminous: ceph-disk: is_mounted() returns None for mounted OSDs with Python 3
* Backport #35983: luminous: mds: change mds perf counters can statistics filesystem operations number and latency
* Backport #35991: luminous: ceph-objectstore-tool apply-layout-settings optional target level can't be specified.
* Backport #36101: luminous: qa: remove knfs site from future releases
* Backport #36116: luminous: [test] not valid to have different parents between image snapshots
* Backport #36119: luminous: [rbd-mirror] failed assertion when updating mirror status
* Backport #36124: luminous: Chunked encoding fails if chunk greater than 1MiB
* Backport #36126: luminous: msg: AsyncConnection keeps previous message buffers until new message comes in
* Backport #36128: luminous: abort_bucket_multiparts() fails on missing multipart meta objects
* Backport #36131: luminous: "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on centos 7.4
* Backport #36133: luminous: client: update ctime when modifying file content
* Backport #36135: luminous: mds: rctime may go back
* Backport #36137: luminous: multisite: update index segfault on shutdown/realm reload
* Backport #36139: luminous: multisite: make redundant data sync errors less scary
* Backport #36141: luminous: rgw: return x-amz-version-id: null when delete obj in versioning suspended bucket
* Backport #36143: luminous: Blacklisted client might not notice it lost the lock
* Backport #36152: luminous: qa: fsstress workunit does not execute in parallel on same host without clobbering files
* Backport #36157: luminous: [simple/msg] Add heartbeat timeout before Accepter::entry break out for osd thread
* Backport #36196: luminous: mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
* Backport #36198: luminous: ceph-fuse: add SELinux policy
* Backport #36202: luminous: multisite: intermittent test_bucket_index_log_trim failures
* Backport #36210: luminous: mds: runs out of file descriptors after several respawns
* Backport #36224: luminous: [rbd-mirror] object map is getting invalidated during rbd-mirror-fsx-workunit test
* Backport #36274: luminous: osd/PrimaryLogPG: fix potential pg-log overtrimming
* Backport #36277: luminous: qa: add timeouts to workunits to bound test execution time in the event of crashes/bugs
* Backport #36311: luminous: multi-site: object name should be urlencoded when we put it into ES
* Backport #36322: luminous: qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 /home/ubuntu/cephtest/mnt.0/client.0/tmp'
* Backport #36382: luminous: resharding produces invalid values of bucket stats
* Bug #36406: Cache-tier forward mode hang in luminous (again)
* Bug #36411: OSD crash starting recovery/backfill with EC pool
* Backport #36431: luminous: [qa] fsstress workunit uses unavailable "realpath" command
* Backport #36514: luminous: add a missing dependency for e2fsprogs
* Bug #36567: Segmentation fault in BlueStore::Blob::discard_unallocated
* Bug #36626: couldn't rewatch after network was blocked and client blacklisted
* Bug #36725: luminous: Apparent Memory Leak in OSD
* Bug #37280: librbd's generate_image_id() is not so random
* Bug #37299: ceph-disk: ceph osd start failed: Command '['/usr/bin/systemctl', 'disable', 'ceph-osd@0', '--runtime']'
* Backport #38165: luminous: os/bluestore: avoid frequent and massive allocator's dump on bluefs rebalance failure