# v12.2.8

* Backport #23015: luminous: client: coredump when nfs-ganesha use ceph_ll_get_inode()
* Backport #23772: luminous: ceph status shows wrong number of objects
* Backport #23790: luminous: mds: crash during shutdown_pass
* Backport #23989: luminous: mds: don't report slow request for blocked filelock request
* Backport #24068: luminous: osd sends op_reply out of order
* Backport #24083: luminous: rados: not all exceptions accept keyargs
* Backport #24136: luminous: MDSMonitor: uncommitted state exposed to clients/mdss
* Backport #24190: luminous: fs: reduce number of helper debug messages at level 5 for client
* Backport #24295: luminous: repeated eviction of idle client until some IO happens
* Backport #24311: luminous: pjd: cd: too many arguments
* Backport #24387: luminous: Allow removal of RBD images even if the journal is corrupt
* Backport #24469: luminous: osd crashes in on_local_recover due to stray clone
* Backport #24471: luminous: Ceph-osd crash when activate SPDK
* Backport #24474: luminous: Buffer overflow if operation returns more data than length provided to aio_execute.
* Backport #24495: luminous: osd: segv in Session::have_backoff
* Backport #24498: luminous: "invalid object map" flag may be not stored on disk
* Backport #24501: luminous: osd: eternal stuck PG in 'unfound_recovery'
* Backport #24514: luminous: "radosgw-admin zonegroup set" requires realm
* Backport #24535: luminous: client: _ll_drop_pins travel inode_map may access invalid ‘next’ iterator
* Backport #24538: luminous: common/DecayCounter: set last_decay to current time when decoding decay counter
* Backport #24540: luminous: multimds pjd open test fails
* Backport #24546: luminous: rgw: change order of authentication back to local, remote
* Backport #24584: luminous: osdc: wrong offset in BufferHead
* Backport #24628: luminous: rgw: fail to recover index from crash
* Backport #24632: luminous: rgw performance regression for luminous 12.2.4
* Backport #24690: luminous: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
* Backport #24692: luminous: rgw: index complete miss zones_trace set
* Backport #24693: luminous: rgw: meta and data notify thread miss stop cr manager
* Backport #24694: luminous: PurgeQueue sometimes ignores Journaler errors
* Backport #24696: luminous: mds: low wrlock efficiency due to dirfrags traversal
* Backport #24697: luminous: ceph osd safe-to-destroy crashes the mgr
* Backport #24714: luminous: Add option to view IP addresses of clients in output of 'ceph features'
* Backport #24717: luminous: blogbench.sh failed in upgrade:luminous-x-mimic-distro-basic-smithi
* Backport #24718: luminous: client: returning garbage (?) for readdir
* Backport #24735: luminous: order rbdmap.service before remote-fs-pre.target
* Backport #24737: luminous: add unit test for cls bi list command
* Backport #24739: luminous: Bring back diff -y for non-FreeBSD
* Backport #24748: luminous: change default filestore_merge_threshold to -10
* Backport #24770: luminous: set correctly shard for existed Collection.
* Backport #24772: luminous: osd: may get empty info at recovery
* Backport #24774: luminous: Mimic build fails with -DWITH_RADOSGW=0
* Backport #24782: luminous: rgw: set cr state if aio_read err return in RGWCloneMetaLogCoroutine::state_send_rest_request
* Backport #24798: luminous: FAILED assert(0 == "can't mark unloaded shard dirty") with compression enabled
* Backport #24804: luminous: Python bindings use iteritems method which is not Python 3 compatible
* Backport #24808: luminous: rgw gc may cause a large number of read traffic
* Backport #24810: luminous: Invalid Access-Control-Request-Request may bypass validate_cors_rule_method
* Backport #24814: luminous: REST admin metadata API paging failure bucket & bucket.instance: InvalidArgument
* Backport #24824: luminous: test_ceph_argparse.py broken on py3-only system
* Backport #24828: luminous: qa: iogen.sh: line 7: cd: too many arguments
* Backport #24830: luminous: "radosgw-admin objects expire" always returns ok even if the process fails.
* Backport #24833: luminous: 'radosgw-admin reshard status' command should print text for reshard status
* Backport #24844: luminous: rgw: require --yes-i-really-mean-it to run radosgw-admin orphans find
* Backport #24845: luminous: tools/ceph-objectstore-tool: split filestore directories offline to target hash level
* Backport #24860: luminous: cephfs-journal-tool: Importing a zero-length purge_queue journal breaks its integrity.
* Backport #24864: luminous: Abort in OSDMap::decode() during qa/standalone/erasure-code/test-erasure-eio.sh
* Backport #24886: luminous: Multiple races related to destruction of SharedBlob and BlueStore::split_cache()
* Bug #24895: rgw fail to put elasticsearch document when compression is set
* Backport #24916: luminous: rgw: partial ordered bucket listing feature
* Backport #24932: luminous: client: put instance/addr information in status asok command
* Backport #24979: luminous: ceph-helpers.sh tries to use dirname without mandatory parameter
* Backport #25023: luminous: multisite: curl client does not time out on sync requests
* Backport #25033: luminous: "Health check failed: 1 MDSs report slow requests (MDS_SLOW_REQUEST)" in powercycle
* Backport #25036: luminous: mds: dump MDSMap epoch to log at low debug
* Backport #25038: luminous: mds: scrub doesn't always return JSON results
* Backport #25039: luminous: mds: dump recent (memory) log messages before respawning due to being removed from MDSMap
* Backport #25041: luminous: mds: reduce debugging for missing inodes during subtree migration
* Backport #25048: luminous: mds may get discontinuous mdsmap
* Backport #25063: luminous: ceph-bluestore-tool manpage not getting rendered correctly
* Backport #25074: luminous: Boost system library is no longer required to compile and link example librados program
* Backport #25079: luminous: Intermittent http_status=409 with op status=-17 on ceph rgw with compression enabled
* Backport #25100: luminous: jewel->luminous: osdmap crc mismatch
* Backport #25102: luminous: mgr: make rados handle available to all modules (dashboard_v2 req.)
* Backport #25117: luminous: mgr: add units to performance counters (dashboard_v2 req.)
* Backport #25127: luminous: Allow repair of an object with a bad data_digest in object_info on all replicas
* Backport #25143: luminous: mimic selinux denials comm="tp_fstore_op / comm="ceph-osd dev=dm-0 and dm-1
* Bug #25152: chown: cannot access ‘/var/lib/ceph/osd/*/block*’: No such file or directory
* Backport #25223: luminous: [RFE] Filestore split log should show PG that is splitting
* Backport #25227: luminous: OSD: still returning EIO instead of recovering objects on checksum errors
* Backport #26833: luminous: mds: recovering mds receive export_cancel message
* Backport #26871: luminous: osd: segfaults under normal operation
* Backport #26887: luminous: tests: "cluster [WRN] 25 slow requests" in powercycle
* Bug #27202: Use aws s3 java sdk failed to read a file when key like /test/test.txt
* Bug #33420: Forced deep-scrub doesn't start
* Bug #33561: PG repair doesn't start on an inconsistent group
* Bug #34321: OSD crash because of DBObjectMap.cc: 662: FAILED assert(state.legacy)
* Fix #34537: cls/rgw: add rgw_usage_log_entry type to ceph-dencoder
* Feature #35687: rgw: storing and reading total usage data to construct rgw service monitor by prometheus
* Bug #35808: ceph osd ok-to-stop result doesn't match the real situation
* Bug #36108: Assertion due to ENOENT result on clonerange2