# v12.2.3

* Backport #21359: luminous: racy is_mounted() checks in libcephfs
* Backport #21479: luminous: Services reported with blank hostname by mgr
* Backport #21525: luminous: client: dual client segfault with racing ceph_shutdown
* Backport #21631: luminous: remove region from "INSTALL CEPH OBJECT GATEWAY"
* Backport #21636: luminous: ceph-monstore-tool --readable mode doesn't understand FSMap, MgrMap
* Backport #21641: luminous: rbd ls -l crashes with SIGABRT
* Backport #21644: luminous: [rbd-mirror] image-meta is not replicated as part of initial sync
* Backport #21646: luminous: Image-meta should be dynamically refreshed
* Backport #21653: luminous: Erasure code recovery should send additional reads if necessary
* Backport #21657: luminous: StrayManager::truncate is broken
* Backport #21688: luminous: Possible deadlock in 'list_children' when refresh is required
* Backport #21690: luminous: [qa] rbd_mirror_helpers.sh request_resync_image function saves image id to wrong variable
* Backport #21694: luminous: compare-and-write -EILSEQ failures should be filtered when committing journal events
* Backport #21695: luminous: failed CompleteMultipartUpload request does not release lock
* Backport #21697: luminous: OSDService::recovery_need_sleep read+updated without locking
* Backport #21700: luminous: rbd-mirror: Allow a different data-pool to be used on the secondary cluster
* Backport #21785: luminous: OSDMap cache assert on shutdown
* Backport #21788: luminous: [journal] image-meta set event should refresh the image after it's applied instead of before
* Backport #21793: luminous: [rbd-mirror] primary image should register in remote, non-primary image's journal
* Backport #21794: luminous: backoff causes out of order op
* Backport #21863: luminous: ceph-conf: dump parsed config in plain text or as json
* Backport #21865: luminous: rbd: rbd crashes during map
* Backport #21868: luminous: [iscsi] documentation tweaks
* Backport #21869: luminous: Add ceph-monstore-tool in ceph-mon package, ceph-kvstore-tool in ceph-mon and ceph-osd, and ceph-osdomap-tool in ceph-osd package.
* Backport #21870: luminous: Assertion in EImportStart::replay should be a damaged()
* Backport #21874: luminous: qa: libcephfs_interface_tests: shutdown race failures
* Backport #21875: luminous: ceph-mgr spuriously reloading OSD metadata on map changes
* Backport #21916: luminous: msg/async/AsyncConnection.cc: 1835: FAILED assert(state == STATE_CLOSED)
* Backport #21920: luminous: sparse-reads should not be used for small IO requests
* Backport #21921: luminous: Objecter::_send_op unnecessarily constructs costly hobject_t
* Backport #21922: luminous: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT
* Backport #21924: luminous: ceph_test_objectstore fails ObjectStore/StoreTest.Synthetic/1 (filestore) buffer content mismatch
* Backport #21946: luminous: `fs status` always says 0 clients
* Backport #21947: luminous: mds: preserve order of requests during recovery of multimds cluster
* Backport #21948: luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
* Backport #21949: luminous: rgw: null instance mtime incorrect when enable versioning
* Backport #21952: luminous: mds: no assertion on inode being purging in find_ino_peers()
* Backport #21953: luminous: MDSMonitor commands crashing on cluster upgraded from Hammer (nonexistent pool?)
* Backport #21969: luminous: [rbd-mirror] spurious "bufferlist::end_of_buffer" exception
* Backport #21970: luminous: [journal] tags are not being expired if no other clients are registered
* Backport #21973: luminous: [test] UpdateFeatures RPC message should be included in test_notify.py
* Backport #22004: luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
* Backport #22016: luminous: RGWCrashError: RGW will crash when generating random bucket name and object name during loadgen process
* Backport #22017: luminous: Segmentation fault when starting radosgw after reverting .rgw.root
* Backport #22021: luminous: rgw: modify s3 type subuser access permission fail
* Backport #22023: luminous: mgr: dashboard plugin OSD daemons' table the Usage column's value is always zero.
* Backport #22024: luminous: RGWCrashError: RGW will crash if a putting lc config request does not include an ID tag in the request xml
* Backport #22025: luminous: ceph_test_cls_log failures related to cls_cxx_subop_version()
* Backport #22026: luminous: Policy parser may or may not dereference uninitialized boost::optional sometimes
* Backport #22027: luminous: multisite: destination zone does not compress synced objects
* Backport #22029: luminous: restarting active ceph-mgr cause glitches in bps and iops metrics
* Backport #22030: luminous: List of filesystems does not get refreshed after a filesystem deletion
* Backport #22033: luminous: [tcmu-runner] export librbd IO perf counters to mgr
* Backport #22067: luminous: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
* Backport #22068: luminous: mds: mds gets significantly behind on trimming while creating millions of files
* Backport #22069: luminous: osd/ReplicatedPG.cc: recover_replicas: object added to missing set for backfill, but is not in recovering, error!
* Backport #22073: luminous: [api] compare-and-write methods not properly advertised
* Backport #22074: luminous: don't check gid when none specified in auth caps
* Backport #22075: luminous: mgr tests don't indicate failure if exception thrown from serve()
* Backport #22076: luminous: 'ceph tell mds' commands result in 'File exists' errors on client admin socket
* Backport #22077: luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size() + snap_inode_map.size())
* Backport #22078: luminous: ceph.in: tell mds does not understand --cluster
* Backport #22089: luminous: Scrub considers dirty backtraces to be damaged, puts in damage table even though it repairs
* Backport #22164: luminous: cluster [ERR] Unhandled exception from module 'balancer' while running on mgr.x: "'NoneType' object has no attribute 'iteritems'" in cluster log
* Backport #22167: luminous: Various odd clog messages for mons
* Backport #22169: luminous: *** Caught signal (Segmentation fault) ** in thread thread_name:tp_librbd
* Backport #22171: luminous: rgw: log keystone errors at a higher level
* Backport #22172: luminous: [rbd-nbd] Fedora does not register resize events
* Backport #22174: luminous: possible deadlock in various maintenance operations
* Backport #22176: luminous: osd: pg limit on replica test failure
* Backport #22177: luminous: rgw: lifecycle process may block RGWRealmReloader::reload
* Backport #22179: luminous: Swift object expiry incorrectly trims entries, leaving behind some of the objects to be not deleted
* Backport #22181: luminous: rgw segfaults after running radosgw-admin data sync init
* Backport #22183: luminous: rgw: multisite with jewel as master will not sync data
* Backport #22184: luminous: Dynamic bucket indexing, resharding and tenants seems to be broken
* Backport #22185: luminous: abort in listing mapped nbd devices when running in a container
* Backport #22187: luminous: rgw: add cors header rule check in cors option request
* Backport #22189: luminous: osdc/Objecter: objecter op_send_bytes perf counter always 0
* Backport #22190: luminous: class rbd.Image discard----OSError: [errno 2147483648] error discarding region
* Backport #22192: luminous: MDSMonitor: monitor gives constant "is now active in filesystem cephfs as rank" cluster log info messages
* Backport #22193: luminous: OSD crash on boot with assert caused by Bluefs on flush write
* Backport #22194: luminous: Default kernel.pid_max is easily exceeded during recovery on high OSD-count system
* Backport #22196: luminous: mgr[zabbix] float division by zero (osd['kb'] = 0)
* Backport #22197: luminous: mgr: mark_down of osd without metadata is broken
* Backport #22198: luminous: Compare and write against a clone can result in failure
* Backport #22199: crushtool decompile prints bogus when osd < max_osd_id are missing
* Backport #22208: luminous: 'rbd du' on empty pool results in output of "specified image"
* Backport #22210: luminous: radosgw-admin zonegroup get and zone get should return defaults when there is no realm
* Backport #22213: luminous: On pg repair the primary is not favored as was intended
* Backport #22214: luminous: Bucket policy evaluation is not carried out for DeleteBucketWebsite
* Backport #22215: luminous: rgw: bucket index object not deleted after radosgw-admin bucket rm --purge-objects --bypass-gc
* Backport #22216: luminous: "osd status" command exception if OSD not in pgmap stats
* Backport #22228: luminous: client: trim_caps may remove cap iterator points to
* Backport #22237: luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_replay()" may hang at acquiring locks
* Backport #22238: luminous: prometheus module 500 if 'deep' in pg states
* Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
* Backport #22242: luminous: mds: limit size of subtree migration
* Backport #22252: luminous: ceph-volume@.service modifications to ceph.spec.in
* Backport #22258: mon: mgrmaps not trimmed
* Backport #22264: luminous: bluestore: db.slow used when db is not full
* Backport #22309: luminous: use user-defined literals for default sizes
* Backport #22339: luminous: ceph-fuse: failure to remount in startup test does not handle client_die_on_failed_remount properly
* Bug #22351: Couldn't init storage provider (RADOS)
* Backport #22365: luminous: log rotate causes rgw realm reload
* Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
* Backport #22375: luminous: ceph 12.2.x Luminous: Build fails with --without-radosgw
* Backport #22376: luminous: Python RBD metadata_get does not work.
* Backport #22379: luminous: client reconnect gather race
* Backport #22385: luminous: mds: mds should ignore export_pin for deleted directory
* Backport #22387: luminous: PG stuck in recovery_unfound
* Backport #22388: luminous: rgw: 501 is returned when init multipart is using V4 signature and chunk encoding
* Backport #22389: luminous: ceph-objectstore-tool: Add option "dump-import" to examine an export
* Backport #22393: luminous: librbd: cannot copy all image-metas if we have more than 64 key/value pairs
* Backport #22395: luminous: librbd: cannot clone all image-metas if we have more than 64 key/value pairs
* Backport #22397: luminous: rgw: radosgw-admin reshard command argument error.
* Backport #22399: luminous: Manager daemon x is unresponsive. No standby daemons available
* Backport #22401: luminous: rgw: make HTTP dechunking compatible with Amazon S3
* Backport #22402: luminous: osd: replica read can trigger cache promotion
* Backport #22404: luminous: crush_ruleset is invalid command in luminous
* Backport #22407: luminous: client: implement delegation support in userland cephfs
* Backport #22421: mon doesn't send health status after paxos service is inactive temporarily
* Backport #22426: luminous: S3 API: incorrect error code on GET website bucket
* Backport #22434: luminous: rgw: user stats increased after bucket reshard
* Backport #22450: luminous: Visibility for snap trim queue length
* Backport #22452: luminous: msg/async: unregister connection failed when racing happened
* Backport #22453: luminous: mgr/balancer/upmap_max_iterations must be cast to integer
* Backport #22454: luminous: cluster resource agent ocf:ceph:rbd - wrong permissions
* Backport #22455: luminous: balancer crush-compat sends "foo" command
* Feature #22456: efficient snapshot rollback
* Backport #22490: luminous: mds: handle client session messages when mds is stopping
* Backport #22493: luminous: mds: crash during exiting
* Backport #22496: luminous: KeyError: ('name',) in balancer rm
* Backport #22497: luminous: [rbd-mirror] new pools might not be detected
* Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
* Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
* Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph osd pool set cephfs_data allow_ec_overwrites true'
* Backport #22502: luminous: Pool Compression type option doesn't apply to new OSD's
* Backport #22503: luminous: mds: read hang in multiple mds setup
* Backport #22506: luminous: rgw usage trim only trims a few entries
* Backport #22507: luminous: bluestore: do not crash on over-large objects
* Backport #22509: luminous: osd: "sudo cp /var/lib/ceph/osd/ceph-0/fsid ..." fails
* Documentation #22533: [iscsi-gw] Incorrect package version is specified
* Backport #22563: luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() for large directory
* Backport #22564: luminous: Locker::calc_new_max_size does not take layout.stripe_count into account
* Backport #22573: luminous: AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
* Backport #22574: luminous: Random 500 errors in Swift PutObject
* Backport #22576: luminous: zabbix throws exception
* Backport #22577: luminous: [test] rbd-mirror split brain test case can have a false-positive failure until teuthology
* Backport #22579: luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
* Backport #22580: luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_different_file
* Backport #22581: luminous: multisite: 'radosgw-admin sync error list' contains temporary EBUSY errors
* Backport #22583: luminous: rgw: chained cache size is growing above rgw_cache_lru_size limit
* Backport #22585: luminous: Prometheus exporter can't get metrics after update to 12.2.2
* Backport #22586: luminous: RGWBug: rewrite a versioning object create a new object
* Backport #22587: luminous: mds: mdsload debug too high
* Backport #22588: luminous: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
* Backport #22591: luminous: radosgw refuses upload when Content-Type missing from POST policy
* Backport #22593: luminous: [ FAILED ] TestLibRBD.RenameViaLockOwner
* Backport #22601: luminous: S3 API Policy Conditions IpAddress and NotIpAddress do not work
* Backport #22602: luminous: Bucket Policy Evaluation Logical Error
* Backport #22611: luminous: "Transaction check error" in upgrade:client-upgrade-kraken-luminous
* Backport #22618: luminous: put bucket policy panics RGW process
* Backport #22621: luminous: compilation failures with boost 1.66
* Backport #22622: luminous: rgw opslog not compatible with s3
* Backport #22623: luminous: rgw opslog cannot record referrer when using curl as client
* Backport #22630: doc: misc fixes for CephFS best practices
* Backport #22633: luminous: OSD crashes with FAILED assert(used_blocks.size() > count) during the first start after upgrade 12.2.1 -> 12.2.2
* Backport #22634: luminous: ceph-mgr dashboard has dependency on python-jinja2
* Backport #22671: luminous: OSD heartbeat timeout due to too many omap entries read in each 'chunk' being backfilled
* Backport #22690: luminous: qa: qa/cephfs/clusters/fixed-2-ucephfs.yaml has insufficient osds
* Backport #22691: luminous: ceph-base symbols not stripped in debs
* Backport #22692: luminous: simplelru does O(n) std::list::size()
* Backport #22694: luminous: mds: fix dump last_sent
* Backport #22698: luminous: bluestore: New OSD - Caught signal - bstore_kv_sync
* Backport #22699: luminous: client: _rmdir() uses a deleted memory structure (Dentry) leading to a core
* Backport #22701: luminous: ceph-volume fails when centos7 image doesn't have lvm2 installed
* Backport #22705: luminous: pg coloring broke in dashboard
* Backport #22706: luminous: tests: force backfill test can conflict with pool removal
* Backport #22707: luminous: ceph_objectstore_tool: no flush before collection_empty() calls; ObjectStore/StoreTest.SimpleAttrTest/2 fails
* Backport #22708: luminous: rgw: copy_object doubles leading underscore on object names.
* Backport #22719: luminous: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
* Backport #22724: luminous: miscounting degraded objects
* Backport #22744: luminous: log entries weirdly zeroed out after 'osd pg-temp' command
* Backport #22753: luminous: multisite: trim bilogs as data sync peer zones catch up
* Bug #22756: RGW will not list contents of older buckets at all: reshard makes it show up again
* Backport #22760: luminous: mgr: prometheus: missed osd commit/apply latency metrics.
* Backport #22761: luminous: osd checks out-of-date osdmap for DESTROYED flag on start
* Backport #22763: luminous: mds: crashes because of old pool id in journal header
* Backport #22765: luminous: client: avoid recursive lock in ll_get_vino
* Backport #22767: luminous: Librgw shutdown incorrectly
* Backport #22768: luminous: Service daemons never recover from transient outage
* Backport #22770: luminous: ceph-objectstore-tool set-size should maybe clear data-digest
* Backport #22773: luminous: rgw file deadlock on lru evicting
* Bug #22776: mds: session count, dns and inos from cli "fs status" is always 0
* Bug #22785: ceph-volume does not activate OSD using mount options in ceph.conf
* Backport #22798: luminous: mds: add success return
* Backport #22805: luminous: rgw distributes cache updates on exclusive creates
* Backport #22806: luminous: [librbd] force removing snapshots cannot remove children
* Backport #22807: luminous: "osd pool stats" shows recovery information incorrectly
* Backport #22809: luminous: rbd snap create/rm takes 60s long
* Backport #22811: luminous: Authentication failed, did you specify a mgr ID with a valid keyring?
* Backport #22831: luminous: Dashboard on backup MGRs always redirects to /, breaking reverse proxy support
* Bug #22847: ceph osd force-create-pg cause all ceph-mon to crash and unable to come up again
* Backport #22859: luminous: mds: session count, dns and inos from cli "fs status" is always 0
* Backport #22860: luminous: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18-distro-basic-smithi
* Backport #22864: luminous: mds: scrub crash
* Backport #22867: luminous: MDS: assert failure when the inode for the cap_export from other MDS happened not in MDCache
* Backport #22892: luminous: _read_bdev_label unable to decode label at offset
* Backport #22907: luminous: mds: admin socket wait for scrub completion is racy
* Backport #22921: luminous: dashboard module: 404 for static resources
* Backport #22922: luminous: rgw: resharding doesn't seem to preserve bucket acls
* Backport #22930: luminous: beast gets SignatureDoesNotMatch in v4 auth
* Backport #22938: luminous: system user can't delete bucket completely
* Bug #22988: mount check needs to resolve realpaths
* Bug #22994: rados bench doesn't use --max-objects
* Bug #23079: Sysctl options from packages should be in /usr/lib
* Bug #23080: Broken 'ceph-volume lvm prepare' mount options
* Bug #23137: [upstream] rbd-nbd does not resize on Ubuntu
* Bug #23145: OSD crashes during recovery of EC pg
* Bug #23373: Problem with UID starting with underscores
* Bug #24151: ceph-mgr have lost prio=0 perf counters? get_counter seems to ignore them
* Bug #24419: ceph-objectstore-tool unable to open mon store
* Bug #24588: osd: may get empty info at recovery