# v12.2.5

* Backport #20823: jewel: client::mkdirs not handle well when two clients send mkdir request for a same dir
* Backport #22383: luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
* Backport #22635: luminous: s3cmd move object error
* Backport #22688: luminous: client: fails to release to revoking Fc
* Backport #22766: luminous: RGW doesn't check time skew in auth v4 http header request
* Backport #22812: luminous: Civetweb reports bad response code.
* Backport #22856: luminous: build Debian installation packages failure
* Backport #22857: luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete: r=0
* Backport #22858: luminous: beast: bind to specific ip address
* Backport #22862: luminous: cephfs-journal-tool: may got assertion failure due to not shutdown
* Backport #22884: luminous: rgw: document civetweb ssl configuration
* Backport #22889: luminous: rgw_file: avoid fragging thread_local log buffer
* Backport #22891: luminous: qa: kcephfs lacks many configurations in the fs/multimds suites
* Backport #22935: luminous: client: setattr should drop "Fs" rather than "As" for mtime and size
* Backport #22936: luminous: client: readdir bug
* Backport #22940: luminous: Double free in rados_getxattrs_next
* Backport #22942: luminous: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
* Backport #22964: luminous: [rbd-mirror] infinite loop is possible when formatting the status message
* Backport #22966: luminous: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
* Backport #22967: luminous: Journaler::flush() may flush less data than expected, which causes flush waiter to hang
* Backport #22969: luminous: mds: session reference leak
* Backport #22971: luminous: mon: removing tier from an EC base pool is forbidden, even if allow_ec_overwrites is set
* Backport #22972: luminous: mds: move remaining containers in CDentry/CDir/CInode to mempool
* Backport #22983: luminous: balancer should warn about missing requirements
* Backport #23011: luminous: [journal] allocating a new tag after acquiring the lock should use on-disk committed position
* Backport #23013: luminous: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when there are lots of clients issue writes or reads to the same file
* Backport #23016: luminous: mds: assert when inode moves during scrub
* Backport #23020: luminous: The parameter of max-uploads doesn't work when List Multipart Uploads
* Backport #23022: luminous: can not set user quota with specific value
* Backport #23024: luminous: thrash-eio + bluestore (hangs with unfound objects or read_log_and_missing assert)
* Backport #23025: luminous: rgw: data sync of versioned objects, note updating bi marker
* Backport #23060: luminous: qa: ignore more warnings during mds-full test
* Backport #23061: luminous: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
* Backport #23062: luminous: qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set (OSDMAP_FLAGS)" in cluster log
* Backport #23063: luminous: osd: BlueStore.cc: BlueStore::_balance_bluefs_freespace: assert(0 == "allocate failed, wtf");
* Backport #23064: luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
* Backport #23074: luminous: bluestore: statfs available can go negative
* Backport #23075: luminous: osd: objecter sends out of sync with pg epochs for proxied ops
* Backport #23077: luminous: mon: ops get stuck in "resend forwarded message to leader"
* Backport #23101: luminous: ceph-mgr fails to start after a system reboot on Ubuntu 16.04
* Backport #23102: luminous: Objects only serving first 512K
* Backport #23114: luminous: can't delete object from pool when Ceph out of space
* Bug #23140: ceph-volume lvm list should work with raw devices
* Backport #23150: luminous: mds: add uptime to status asok command
* Backport #23152: luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
* Backport #23154: luminous: mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCache::request_get(metareqid_t)
* Backport #23156: luminous: ceph-fuse: clarify -i is not a valid option
* Backport #23159: luminous: Drop upgrade/jewel-x/point-to-point-x in luminous and master
* Backport #23160: luminous: Multiple asserts caused by DNE pgs left behind after lots of OSD restarts
* Backport #23173: luminous: BlueFS reports rotational journals if BDEV_WAL is not set
* Backport #23174: luminous: SRV resolution fails to lookup AAAA records
* Backport #23175: luminous: mgr not reporting when ports conflict
* Backport #23176: luminous: some rgw suites override frontend setting in frontend/beast.yaml
* Backport #23177: luminous: [test] OpenStack tempest test is failing across all branches (again)
* Backport #23178: luminous: run-make-check.sh thinks it needs debianutils on SUSE
* Backport #23179: luminous: rgw: can't download object with range when compression enabled
* Backport #23180: luminous: radosgw-admin data sync run crashes
* Backport #23186: luminous: ceph tell mds.* prints only one matching usage
* Backport #23192: rgw_log (and rgw_file): don't use undefined/unset RGWEnv key/value pairs
* Backport #23221: luminous: possible issue with ssl + libcurl
* Backport #23224: luminous: mgr log spamming about down osds
* Backport #23225: luminous: rgw: list bilog will loop forever
* Backport #23226: luminous: bluestore_cache_data uses too much memory
* Bug #23229: usage trim loops forever: infinite calls to rgw.user_usage_log_trim
* Backport #23230: luminous: Update mgr/restful documentation
* Backport #23239: luminous: Curl+OpenSSL support in RGW
* Backport #23245: luminous: multisite: segfault in radosgw-admin realm pull
* Backport #23252: luminous: The return value of auth v2/v4 in RGW is wrong when Expires/X-Amz-Expires missing
* Backport #23256: luminous: bluestore: should recalc_allocated when decoding bluefs_fnode_t
* Backport #23268: luminous: osd: add numpg_removing metric
* Bug #23272: switch port down, cephfs kernel client lost session, blocked not recover ok until port up
* Backport #23275: luminous: ceph-objectstore-tool command to trim the pg log
* Backport #23302: luminous: rgw: add radosgw-admin sync error trim to trim sync error log
* Backport #23304: luminous: parent blocks are still seen after a whole-object discard
* Backport #23306: luminous: Assertion is raised when fetching file event in Ceph 12.2.1
* Backport #23310: luminous: s3 website: some s3tests are failing because redirects include index doc suffix
* Backport #23312: luminous: invalid JSON returned when querying pool parameters
* Backport #23313: luminous: mgr: prometheus: internal server error while new OSDs are being added to the cluster.
* Backport #23314: luminous: client: allow client to use caps that are revoked but not yet returned
* Backport #23315: luminous: pool create cmd's expected_num_objects is not correctly interpreted
* Backport #23317: luminous: Cannot specify multiple ports for civetweb port/listening_ports due to config parsing
* Backport #23318: luminous: rgw: crash with rgw_run_sync_thread=false
* Backport #23323: luminous: ERROR type entries of pglog do not update min_last_complete_ondisk, potentially ballooning memory usage
* Bug #23329: async messenger lost session when IO performance testing, not recover until restart
* Backport #23346: luminous: RGWCopyObj silently corrupts the object that was multipart-uploaded in SSE-C
* Backport #23347: luminous: rgw: inefficient buffer usage for PUTs
* Backport #23349: luminous: Couldn't init storage provider (RADOS)
* Backport #23351: luminous: filestore: do_copy_range replay bad return value
* Backport #23355: luminous: client: prevent fallback to remount when dentry_invalidate_cb is true but root->dir is NULL
* Backport #23357: luminous: Admin API support for bucket quota change
* Bug #23390: Identifying NVMe via PCI serial isn't sufficient (Bluestore/SPDK)
* Backport #23407: luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
* Backport #23409: luminous: mgr: fix MSG_MGR_MAP handling
* Backport #23410: luminous: Documentation license version is ambiguous
* Backport #23412: luminous: delete type mismatch in CephContext teardown
* Backport #23414: luminous: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous upgrades
* Backport #23423: luminous: librados/snap_set_diff: don't assert on empty snapset
* Backport #23472: luminous: add --add-bucket and --move options to crushtool
* Backport #23478: should not check for VERSION_ID
* Backport #23485: luminous: scrub errors not cleared on replicas can cause inconsistent pg state when replica takes over primary
* Bug #23496: ceph-volume: lsblk: unknown column: PKNAME,PARTLABEL
* Bug #23497: ceph-volume: lvcreate: unrecognized option '--yes'
* Backport #23500: luminous: snapmapper inconsistency, crash on luminous
* Backport #23501: luminous: OSD bind to IPv6 link-local address
* Backport #23507: luminous: test_admin_socket.sh may fail on wait_for_clean
* Backport #23520: luminous: ceph_authtool: add mode option
* Backport #23522: luminous: tests: unittest_pglog timeout
* Backport #23524: luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh may return false positive
* Backport #23542: luminous: rbd-nbd: EBUSY when do map
* Backport #23544: luminous: aio_t::rval int type not enough to contain io_event::res with unsigned long type, cause core dump
* Backport #23545: luminous: "Message too long" error when appending journal
* Backport #23561: luminous: mds: mds gets significantly behind on trimming while creating millions of files (cont.)
* Backport #23570: luminous: mds: counter decay incorrect
* Backport #23572: luminous: mds: make sure that MDBalancer uses heartbeat info from the same epoch
* Backport #23606: luminous: "ENGINE Error in 'start' listener
* Backport #23690: luminous: multisite Synchronization failed when read and write delete at the same time
* Backport #23691: luminous: radosgw-admin: add an option to reset user stats
* Backport #23720: luminous: radosgw-admin user stats --sync-stats without a user will create an empty object
* Backport #23758: luminous: usage trim loops forever: infinite calls to rgw.user_usage_log_trim
* Documentation #23765: librbd hangs if permissions are incorrect
* Bug #23817: Bucket policy and colons in filename
* Bug #23831: bucket policy ipdeny not in effect
* Bug #23918: "ceph-volume lvm prepare" errors with "no valid command found"
* Bug #23944: OSD going down randomly
* Bug #24011: [rgw] Bucket Policy - not works with object tags
* Backport #24299: luminous: rgw: download object might fail for local invariable uninitialized
* Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:
* Bug #24603: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR