# v12.2.6

* Backport #22637: luminous: rgw:lc: set LifecycleConfiguration without "Rule" tag returns OK
* Backport #22696: luminous: client: dirty caps may never get the chance to flush
* Backport #22769: luminous: allow client requests to preempt scrub
* Backport #22934: luminous: filestore journal replay does not guard omap operations
* Backport #22937: luminous: beast: listen on multiple endpoints
* Backport #23151: luminous: doc: update ceph-fuse with FUSE options
* Backport #23157: luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
* Backport #23227: luminous: clang compilation error in BoundedKeyCounter
* Backport #23231: luminous: rgw_statfs should report the correct stats
* Backport #23308: luminous: doc: Fix -d option in ceph-fuse doc
* Backport #23474: luminous: client: allow caller to request that setattr request be synchronous
* Backport #23475: luminous: ceph-fuse: trim ceph-fuse -V output
* Backport #23607: luminous: import-diff failed: (33) Numerical argument out of domain - if image size of the child is larger than the size of its parent
* Backport #23631: luminous: python bindings fixes and improvements
* Backport #23632: luminous: mds: handle client requests when mds is stopping
* Backport #23635: luminous: client: fix request send_to_auth was never really used
* Backport #23636: luminous: mds: kicked out by monitor during rejoin
* Backport #23637: luminous: mds: assertion in MDSRank::validate_sessions
* Backport #23638: luminous: ceph-fuse: getgroups failure causes exception
* Backport #23640: luminous: rbd: import with option --export-format fails to protect snapshot
* Backport #23666: luminous: SIGFPE, Arithmetic exception in AsyncConnection::_process_connection
* Backport #23668: luminous: There is no 'ceph osd pool get erasure allow_ec_overwrites' command
* Backport #23671: luminous: mds: MDBalancer using total (all time) request count in load statistics
* Backport #23672: luminous: bluestore: ENODATA on aio
* Backport #23675: luminous: qa/workunits/mon/test_mon_config_key.py fails on master
* Backport #23681: luminous: mg_read() call has wrong arguments
* Backport #23682: luminous: rgw: failed to pass test_bucket_list_maxkeys_unreadable in s3-test
* Backport #23683: luminous: radosgw-admin should not use metadata cache when not needed
* Backport #23685: luminous: rgw_file: post deadlock fix, directly deleted RGWFileHandle objects can remain in handle table
* Backport #23698: luminous: mds: load balancer fixes
* Backport #23700: luminous: osd: KernelDevice.cc: 539: FAILED assert(r == 0)
* Backport #23702: luminous: mds: sessions opened by journal replay do not get dirtied properly
* Backport #23703: luminous: MDSMonitor: mds health warnings printed in bad format
* Backport #23704: luminous: ceph-fuse: broken directory permission checking
* Backport #23750: luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
* Backport #23770: luminous: ceph-fuse: return proper exit code
* Backport #23771: luminous: client: fix gid_count check in UserPerm->deep_copy_from()
* Backport #23782: luminous: table of contents doesn't render for luminous/jewel docs
* Backport #23784: luminous: osd: Warn about objects with too many omap entries
* Backport #23786: luminous: "utilities/env_librados.cc:175:33: error: unused parameter 'offset' [-Werror=unused-parameter]" in rados
* Backport #23791: luminous: MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
* Backport #23792: luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
* Backport #23802: luminous: slow ceph_ll_sync_inode calls after setattr
* Backport #23808: luminous: upgrade: bad pg num and stale health status in mixed luminous/mimic cluster
* Backport #23818: luminous: client: add option descriptions and review levels (e.g. LEVEL_DEV)
* Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
* Backport #23835: luminous: mds: fix occasional dir rstat inconsistency between multiple MDSes
* Backport #23850: luminous: Read operations segfaulting multiple OSDs
* Backport #23852: luminous: OSD crashes on empty snapset
* Backport #23861: luminous: rgw: admin rest api shouldn't return error when getting user's stats if the user hasn't created any bucket
* Backport #23862: luminous: aws4 auth not implemented for PutBucketRequestPayment
* Backport #23863: luminous: scrub interaction with HEAD boundaries and clones is broken
* Backport #23864: luminous: compression ratio depends on block size, which is much smaller (16K vs 4M) in multisite sync
* Backport #23865: luminous: [rgw] GET ?torrent returns object's body instead of torrent-file
* Backport #23866: luminous: No meaningful error when RGW cannot create pools due to lack of available PGs
* Backport #23868: luminous: rgw: do not reflect period if not current in RGWPeriodPuller::pull
* Backport #23869: luminous: rgw sends garbage meta.compression to ElasticSearch
* Backport #23870: luminous: null map from OSDService::get_map in advance_pg
* Backport #23881: luminous: Bluestore OSD hit assert((log_reader->buf.pos & ~super.block_mask()) == 0)
* Backport #23886: luminous: Resharding hangs with versioning-enabled buckets
* Backport #23900: luminous: [rbd-mirror] asok hook for image replayer not re-registered after bootstrap
* Backport #23902: luminous: [rbd-mirror] local tag predecessor mirror uuid is incorrectly replaced with remote
* Backport #23904: luminous: Deleting a pool with active watch/notify linger ops can result in seg fault
* Backport #23906: luminous: libcurl ignores headers with empty value, leading to signature mismatches
* Backport #23912: luminous: mon: High MON cpu usage when cluster is changing
* Backport #23913: luminous: rbd-nbd can deadlock in logging thread
* Backport #23914: luminous: cache-try-flush hits wrlock, busy loops
* Backport #23915: luminous: monitors crashing ./include/interval_set.h: 355: FAILED assert(0) (jewel+kraken)
* Backport #23924: luminous: LibRadosAio.PoolQuotaPP failed
* Backport #23925: luminous: assert on pg upmap
* Backport #23930: luminous: mds: scrub code stuck at trimming log segments
* Backport #23931: luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
* Backport #23933: luminous: client: avoid second lock on client_lock
* Backport #23934: luminous: client: "remove_session_caps still has dirty|flushing caps" when thrashing max_mds
* Backport #23935: luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
* Backport #23936: luminous: cephfs-journal-tool: segfault during journal reset
* Backport #23945: luminous: potential race in rbd-mirror disconnect QA test
* Backport #23946: luminous: mds: crash during failover
* Backport #23950: luminous: mds: stopping rank 0 cannot shut down until log is trimmed
* Backport #23951: luminous: mds: stuck during up:stopping
* Backport #23977: luminous: multisite: misleading error on normal shutdown
* Backport #23982: luminous: qa: TestVolumeClient.test_lifecycle needs updating for new eviction behavior
* Backport #23984: luminous: mds: scrub on fresh file system fails
* Backport #23985: luminous: librbd::Watcher's handle_rewatch_complete might fire after object destroyed
* Backport #23986: luminous: recursive lock of objecter session::lock on cancel
* Backport #23987: luminous: cephfs does not count st_nlink for directories correctly?
* Backport #23988: luminous: luminous->master: luminous crashes with AllReplicasRecovered in Started/Primary/Active/NotRecovering state
* Backport #23991: luminous: client: hangs on umount if it had an MDS session evicted
* Backport #24014: luminous: mgr/influx: Module fails to parse service names if RGW is present
* Backport #24015: luminous: UninitCondition in PG::RecoveryState::Incomplete::react(PG::AdvMap const&)
* Backport #24016: luminous: scrub interaction with HEAD boundaries and snapmapper repair is broken
* Backport #24042: luminous: ceph-disk log is written to /var/run/ceph
* Backport #24043: luminous: java compile error: Source/Target option 1.5 is not supported since jdk 9
* Backport #24048: luminous: pg-upmap cannot balance in some cases
* Backport #24049: luminous: ceph-fuse: missing dentries in readdir result
* Backport #24050: luminous: mds: MClientCaps should carry inode's dirstat
* Backport #24055: luminous: VolumeClient: allow ceph_volume_client to create 'volumes' without namespace isolation
* Backport #24059: luminous: Deleting a pool with active notify linger ops can result in seg fault
* Backport #24060: luminous: RGW multi-site: add 'detail' flag to radosgw-admin sync status and radosgw-admin bucket sync status commands
* Backport #24063: luminous: Misnamed S3 operation
* Backport #24070: luminous: build-integration-branch script can fail with UnicodeEncodeError
* Backport #24084: luminous: [rbd-mirror] bootstrap should not raise -EREMOTEIO if local image still attached
* Backport #24086: luminous: [rbd-mirror] potential races during PoolReplayer shut-down
* Backport #24107: luminous: PurgeQueue::_consume() could return true when no purge queue items were actually executed
* Backport #24108: luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
* Backport #24120: luminous: rgw: 403 error when creating an object with metadata containing a sequence of spaces
* Backport #24122: luminous: selinux denials with ceph-deploy/ceph-volume lvm device
* Backport #24130: luminous: mds: race with new session from connection and imported session
* Backport #24132: luminous: "122 - unittest_bluefs (OTHER_FAULT)" during ctest run
* Backport #24153: luminous: Eviction still raced with scrub due to preemption
* Backport #24156: luminous: [rbd-mirror] potential deadlock when running asok 'flush' command
* Backport #24185: luminous: client: segfault in trim_caps
* Backport #24188: luminous: kceph: umount on evicted client blocks forever
* Backport #24189: luminous: qa: kernel_mount.py umount must handle timeout arg
* Backport #24198: luminous: mon: slow op on log message
* Backport #24201: luminous: client: fails to respond to cap revoke from non-auth mds
* Backport #24205: luminous: mds: broadcast quota to relevant clients when quota is explicitly set
* Backport #24207: luminous: client: deleted inode's Bufferhead which was in STATE::Tx would lead to an assert failure
* Backport #24214: luminous: Module 'balancer' has failed: could not find bucket -14
* Backport #24216: luminous: "process (unknown)" in ceph logs
* Bug #24225: AArch64 CRC32 crash with SIGILL
* Backport #24245: luminous: Manager daemon y is unresponsive during teuthology cluster teardown
* Backport #24247: luminous: SharedBlob::put() racy
* Backport #24252: luminous: Admin Ops API overwrites email when user is modified
* Backport #24258: luminous: crush device class: Monitor crash when moving bucket into default root
* Backport #24279: luminous: RocksDB compression is not supported, at least on Debian
* Backport #24290: luminous: common: JSON output from rados bench write has typo in max_latency key
* Backport #24298: luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' when compression enabled
* Backport #24302: luminous: rgw: (jewel) can't delete swift acls with swift command
* Backport #24314: luminous: multisite test failures in test_versioned_object_incremental_sync
* Backport #24328: luminous: assert manager.get_num_active_clean() == pg_num on rados/singleton/all/max-pg-per-osd.from-primary.yaml
* Backport #24331: luminous: mon: mds health metrics sent to cluster log independently
* Backport #24334: luminous: ARMv8 feature detection broken, leading to illegal instruction crashes
* Backport #24341: luminous: mds memory leak
* Backport #24351: luminous: slow mon ops from osd_failure
* Backport #24353: luminous: rgw: request with range defined as "bytes=0--1" returns 416 InvalidRange
* Backport #24356: luminous: osd: pg hard limit too easy to hit
* Bug #24369: luminous: checking quota while holding cap ref may deadlock
* Bug #24370: luminous: root dir's new snapshot lost when restarting mds
* Backport #24374: luminous: mon: auto compaction on rocksdb should kick in more often
* Backport #24378: luminous: [rbd-mirror] entries_behind_master will not be zero after mirroring is over
* Backport #24393: luminous: rgw: making implicit_tenants backwards compatible
* Bug #24421: async messenger thread cpu high, osd service not normal until restart
* Backport #24477: luminous: Bucket lifecycles stick around after buckets are deleted
* Backport #24503: luminous: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixNoCsum/2 failed fsck with stray objects
* Bug #24511: osd crashed at thread_name:safe_timer
* Bug #24622: No module named rados
* Bug #24661: os/bluestore: don't store/use path_block.{db,wal} from meta
* Documentation #24712: Memory recommendations for bluestore
* Backport #24740: luminous: CLI unit formatting tests are broken
* Backport #24750: luminous: os/bluestore: don't store/use path_block.{db,wal} from meta
* Bug #24789: [rgw] ERROR: unable to remove bucket(2) No such file or directory (if --bypass-gc)
* Backport #24806: luminous: rgw workload makes osd memory explode
* Bug #24947: Ceph Luminous radosgw: Couldn't init storage provider (RADOS)
* Bug #25098: Bluestore OSD failed to start with `bluefs_types.h: 54: FAILED assert(pos <= end)`