# v12.2.13

* Bug #23147: RGW: metrics 'qlen', 'qactive' are not work
* Backport #23223: luminous: rgw: garbage collector removes objects slowly
* Backport #23237: Corrupted downloads from civetweb when using multipart with slow connections
* Backport #24360: luminous: osd: leaked Session on osd.7
* Backport #36080: luminous: aarch64: Compiler-based detection of crc32 extended CPU type is broken
* Backport #37497: luminous: get or set rgw realm zonegroup zone should check user's caps for security
* Backport #37612: luminous: rpm: missing dependency on python34-ceph-argparse from python34-cephfs (and others?)
* Backport #37692: luminous: Image mirroring should be disabled when it is moved to trash
* Backport #37748: luminous: Add clear-data-digest command to objectstore tool
* Backport #37892: luminous: doc: wrong value of usage log default in logging section
* Backport #38205: luminous: osds allows to partially start more than N+2
* Backport #38242: luminous: msg/async: connection race + winner fault can leave connection in standby
* Backport #38276: luminous: osd_map_message_max default is too high?
* Backport #38340: luminous: mds: may leak gather during cache drop
* Backport #38397: luminous: rgw: when exclusive lock fails due existing lock, log add'l info
* Backport #38436: luminous: crc cache should be invalidated when posting preallocated rx buffers
* Backport #38440: luminous: compare-and-write skips compare after copyup without object map
* Backport #38442: luminous: osd-markdown.sh can fail with CLI_DUP_COMMAND=1
* Backport #38445: luminous: mds: drop cache does not timeout as expected
* Backport #38508: luminous: [rbd-mirror] LeaderWatcher stuck in loop if pool deleted
* Backport #38551: luminous: core: lazy omap stat collection
* Backport #38564: luminous: [librbd] race condition possible when validating RBD pool
* Backport #38567: luminous: osd_recovery_priority is not documented (but osd_recovery_op_priority is)
* Backport #38674: luminous: Performance improvements for object-map
* Backport #38686: luminous: kcephfs TestClientLimits.test_client_pin fails with "client caps fell below min"
* Backport #38714: luminous: rgw: gc entries with zero-length chains are not cleaned up
* Backport #38719: luminous: crush: choose_args array size mis-sized when weight-sets are enabled
* Backport #38748: luminous: non existant mdlog failures logged at level 0
* Backport #38750: luminous: should report EINVAL in ErasureCode::parse() if m<=0
* Backport #38781: luminous: mgr/balancer: blame if upmap won't actually work
* Backport #38873: luminous: Rados.get_fsid() returning bytes in python3
* Backport #38877: luminous: mds: high debug logging with many subtrees is slow
* Backport #38880: luminous: ENOENT in collection_move_rename on EC backfill target
* Backport #38884: luminous: Lifecycle doesn't remove delete markers
* Backport #38887: luminous: GetBucketCORS API returns "Not Found" error code when CORS configuration does not exist
* Backport #38902: luminous: Minor rados related documentation fixes
* Backport #38905: luminous: osd/PGLog.h: print olog_can_rollback_to before deciding to rollback
* Backport #38908: luminous: rgw: read not exists null version success and return empty data
* Backport #38920: luminous: "Caught signal (Aborted) thread_name:radosgw" in ceph dashboard tests Jenkins job
* Backport #38925: luminous: beast frontend option to set the TCP_NODELAY socket option
* Backport #38954: luminous: backport krbd discard qa fixes to stable branches
* Backport #38958: luminous: multisite: sync status on master zone does not show "oldest incremental change not applied"
* Backport #38962: luminous: DaemonServer::handle_conf_change - broken locking
* Backport #38975: luminous: return ETIMEDOUT if we meet a timeout in poll
* Backport #39016: luminous: unable to cancel reshard operations for buckets with tenants
* Backport #39042: luminous: osd/PGLog: preserve original_crt to check rollbackability
* Backport #39177: luminous: rgw: remove_olh_pending_entries() does not limit the number of xattrs to remove
* Backport #39180: luminous: rgw: orphans find perf improvments
* Backport #39191: luminous: mds: crash during mds restart
* Backport #39198: luminous: mds: we encountered "No space left on device" when moving huge number of files into one directory
* Backport #39204: luminous: osd: leaked pg refs on shutdown
* Backport #39208: luminous: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server::cap_revoke_eviction_timeout
* Backport #39213: luminous: mds: there is an assertion when calling Beacon::shutdown()
* Backport #39218: luminous: osd: FAILED ceph_assert(attrs || !pg_log.get_missing().is_missing(soid) || (it_objects != pg_log.get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) in PrimaryLogPG::get_object_context()
* Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
* Backport #39227: luminous: rgw_file: can't retrieve etag of empty object written through NFS
* Backport #39231: luminous: kclient: nofail option not supported
* Backport #39239: luminous: "sudo yum -y install python34-cephfs" fails on mimic
* Backport #39243: luminous: msg/async: connection race + winner fault can leave connection stuck at replacing forever
* Backport #39247: luminous: os/bluestore: fix length overflow
* Backport #39254: luminous: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
* Backport #39272: luminous: rgw: S3 policy evaluated incorrectly
* Backport #39277: luminous: platform.linux_distribution() is deprecated; stop using it
* Backport #39314: luminous: krbd: fix rbd map hang due to udev return subsystem unordered
* Backport #39332: luminous: Build with lttng on openSUSE
* Backport #39343: luminous: ceph-objectstore-tool rename dump-import to dump-export
* Backport #39358: luminous: Compliance to aws s3's relaxed query handling behaviour
* Backport #39360: luminous: rgw:failed to pass test_bucket_create_naming_bad_punctuation in s3test
* Backport #39373: luminous: ceph tell osd.xx bench help : gives wrong help
* Bug #39395: ceph: ceph fs auth fails
* Backport #39409: luminous: inefficient unordered bucket listing
* Backport #39420: luminous: Don't mark removed osds in when running "ceph osd in any|all|*"
* Backport #39424: luminous: mgr: deadlock
* Backport #39427: luminous: 'rbd mirror status --verbose' will occasionally seg fault
* Backport #39431: luminous: Degraded PG does not discover remapped data on originating OSD
* Backport #39444: luminous: OSD crashed in BitmapAllocator::init_add_free()
* Backport #39457: luminous: mgr/prometheus: replace whitespaces in metric names
* Backport #39460: luminous: [rbd-mirror] "bad crc in data" error when listing large pools
* Backport #39463: luminous: print client IP in default debug_ms log level when "bad crc in {front|middle|data}" occurs
* Backport #39468: luminous: There is no punctuation mark or blank between tid and client_id in the output of "ceph health detail"
* Backport #39474: luminous: segv in fgets() in collect_sys_info reading /proc/cpuinfo
* Backport #39497: luminous: rgw admin: object stat command output's delete_at not readable
* Backport #39537: luminous: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log().get_log().objects.count(soid) && (get_parent()->get_log().get_log().objects.find(soid)->second->op == pg_log_entry_t::LOST_REVERT) && (get_parent()->get_log().get_log().object
* Backport #39563: luminous: Error message displayed when mon_osd_max_split_count would be exceeded is not as user-friendly as it could be
* Backport #39565: luminous: ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main device (slot 2)
* Backport #39572: luminous: send x-amz-version-id header in PUT response
* Backport #39589: luminous: qa/tasks/rbd_fio: fixed missing delimiter between 'cd' and 'configure'
* Backport #39603: luminous: document CreateBucketConfiguration for s3 PUT Bucket request
* Backport #39615: luminous: civetweb frontend: response is buffered in memory if content length is not explicitly specified
* Backport #39638: luminous: fsck on mkfs breaks ObjectStore/StoreTestSpecificAUSize.BlobReuseOnOverwrite
* Backport #39673: luminous: [test] possible race condition in rbd-nbd disconnect
* Backport #39691: luminous: mds: error "No space left on device" when create a large number of dirs
* Backport #39696: luminous: rgw: success returned for put bucket versioning on a non existant bucket
* Backport #39719: luminous: short pg log+nautilus-p2p-stress-split: "Error: finished tid 3 when last_acked_tid was 5" in upgrade:nautilus-p2p
* Backport #39727: luminous: [test] devstack is broken (again)
* Backport #39732: luminous: rgw: allow radosgw-admin bucket list to use the --allow-unordered flag
* Backport #39733: luminous: multisite: mismatch of bucket creation times from List Buckets
* Backport #39747: luminous: Add support for --bypass-gc flag of radosgw-admin bucket rm command in RGW Multi-site
* Backport #40004: luminous: do_cmake.sh: "source" not found
* Backport #40032: luminous: rgw metadata search (elastic search): meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
* Backport #40041: luminous: avoid trimming too many log segments after mds failover
* Backport #40082: luminous: osd: Better error message when OSD count is less than osd_pool_default_size
* Backport #40092: luminous: Missing Documentation for radosgw-admin reshard commands (man pages)
* Backport #40127: luminous: rgw: Swift interface: server side copy fails if object name contains `?`
* Backport #40132: luminous: rgw: putting X-Object-Manifest via TempURL should be prohibited
* Backport #40135: luminous: rgw: the Multi-Object Delete operation of S3 API wrongly handles the "Code" response element
* Backport #40138: luminous: document steps to disable metadata_heap on existing zones
* Backport #40143: luminous: multisite: 'radosgw-admin bucket sync status' should call syncs_from(source.name) instead of id
* Backport #40149: luminous: rgw: bucket may redundantly list keys after BI_PREFIX_CHAR
* Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
* Backport #40163: luminous: mount: key parsing fail when doing a remount
* Backport #40166: luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanoseconds component
* Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
* Bug #40200: luminous: mds: does fails assert(session->get_nref() == 1) when balancing
* Backport #40218: luminous: TestMisc.test_evict_client fails
* Backport #40221: luminous: mds: reset heartbeat during long-running loops in recovery
* Backport #40229: luminous: maybe_remove_pg_upmap can be super inefficient for large clusters
* Backport #40233: luminous: [CLI]rbd: get positional argument error when using --image
* Backport #40266: luminous: data race in OutputDataSocket
* Bug #40286: luminous: qa: remove ubuntu 14.04 testing
* Backport #40318: luminous: "make: *** [hello_world_cpp] Error 127" in rados
* Backport #40343: luminous: mds: fix corner case of replaying open sessions
* Backport #40347: luminous: ssl tests failing with SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed'),)",)
* Backport #40350: luminous: rgw/OutputDataSocket: append_output(buffer::list&) says it will (but does not) discard output at data_max_backlog
* Backport #40359: luminous: rgw: set null version object issues
* Backport #40422: luminous: Bitmap allocator return duplicate entries which cause interval_set assert
* Backport #40463: luminous: possible crash when replaying journal with invalid/corrupted ranges
* Backport #40496: luminous: Object Gateway multisite document read-only argument error
* Backport #40499: luminous: [cli] 'export' should handle concurrent IO completions
* Backport #40502: luminous: osd: rollforward may need to mark pglog dirty
* Backport #40506: luminous: rgw: conditionally allow builtin users with non-unique email addresses
* Backport #40534: luminous: pool compression options not consistently applied
* Backport #40548: luminous: Keyrings created by ceph auth get are not suitable for ceph auth import
* Backport #40551: luminous: [test] qemu-iotests tests fails under latest Ubuntu kernel
* Backport #40559: luminous: rgw: the log output gets very spammy in multisite clusters
* Backport #40574: luminous: Disabling journal might result in assertion failure
* Bug #40584: kernel build failure in kernel_untar_build.sh
* Backport #40592: luminous: rbd_mirror/ImageSyncThrottler.cc: 61: FAILED ceph_assert(m_queue.empty())
* Backport #40638: luminous: osd: report omap/data/metadata usage
* Backport #40650: luminous: os/bluestore: fix >2GB writes
* Backport #40653: luminous: Lower the default value of osd_deep_scrub_large_omap_object_key_threshold
* Backport #40697: luminous: test_envlibrados_for_rocksdb.yaml fails installing g++-4.7 on 18.04
* Backport #40735: luminous: multisite: failover docs should use 'realm pull' instead of 'period pull'
* Backport #40756: luminous: stupid allocator might return extents with length = 0
* Backport #40807: luminous: mds: msg weren't destroyed before handle_client_reconnect returned, if the reconnect msg was from non-existent session
* Backport #40852: luminous: multisite: radosgw-admin commands should not modify metadata on a non-master zone
* Backport #40880: luminous: Reduce log level for cls/journal and cls/rbd expected errors
* Backport #40892: luminous: mds: cleanup truncating inodes when standby replay mds trim log segments
* Backport #40947: luminous: Better default value for osd_snap_trim_sleep
* Backport #40978: luminous: missing string substitution when reporting mounts
* Backport #41000: luminous: client: failed to drop dn and release caps causing mds stary stacking.
* Backport #41020: luminous: simple: when 'type' file is not present activate fails
* Backport #41057: luminous: ceph-volume does not recognize wal/db partitions created by ceph-disk
* Backport #41104: luminous: rgw: when usring radosgw-admin to list bucket, can set --max-entries excessively high
* Backport #41111: luminous: rgw: fix drain handles error when deleting bucket with bypass-gc option
* Backport #41139: luminous: ceph-volume prints errors to stdout with --format json
* Backport #41202: luminous: ceph-volume prints log messages to stdout
* Backport #41266: luminous: beast frontend throws an exception when running out of FDs
* Backport #41278: luminous: mgr/prometheus: Setting scrape_interval breaks cache timeout comparison
* Backport #41281: luminous: BlueStore tool to check fragmentation
* Backport #41285: luminous: error from replay does not stored in rbd-mirror status
* Backport #41289: luminous: fix and improve doc regarding manual bluestore cache settings.
* Backport #41322: luminous: multisite: datalog/mdlog trim don't loop until done
* Backport #41334: luminous: ceph-test RPM not built for SUSE
* Backport #41338: luminous: os/bluestore/BlueFS: use 64K alloc_size on the shared device
* Bug #41367: rocksdb: submit_transaction error: Corruption: block checksum mismatch code = 2
* Bug #41370: [RGW] RGW in website mode: rgw_rados.h: 2150: FAILED assert(!obj.empty()
* Backport #41373: luminous: batch functional idempotency test fails since message is now on stderr
* Backport #41382: luminous: rgw: housekeeping of reset stats operation in radosgw-admin and cls back-end
* Bug #41401: rgw: api_name fixes from Nautilus (e.g., allows CreateBucket w/alternate placement)
* Backport #41421: luminous: `rbd mirror pool status --verbose` test is missing
* Backport #41439: luminous: [rbd-mirror] cannot connect to remote cluster when running as 'ceph' user
* Backport #41458: luminous: proc_replica_log need preserve replica log's crt
* Backport #41480: luminous: rgw dns name is not case sensitive
* Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropped when the session is close.
* Backport #41510: luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
* Backport #41532: luminous: Move bluefs alloc size initialization log message to log level 1
* Backport #41544: luminous: [test] rbd-nbd FSX test runs are failing
* Backport #41579: luminous: rgw: api_name fixes from Nautilus (e.g., allows CreateBucket w/alternate placement)
* Backport #41613: luminous: ceph-volume lvm list is O(n^2)
* Backport #41621: luminous: in rbd-ggate the assert in Log:open() will trigger
* Backport #41626: luminous: multisite: ENOENT errors from FetchRemoteObj causing bucket sync to stall without retry
* Backport #41644: luminous: QA run failures "Command failed on smithi with status 1: '\n sudo yum -y install ceph-radosgw\n ' "
* Backport #41697: luminous: Network ping monitoring
* Backport #41706: luminous: in cls_bucket_list_unordered() listing of entries following an entry for which check_disk_state() returns -ENOENT may not get listed
* Backport #41709: luminous: Set concurrent max_background_compactions in rocksdb to 2
* Backport #41713: luminous: can't remove rados objects after copy rgw-object fail
* Backport #41730: luminous: osd/ReplicatedBackend.cc: 1349: FAILED ceph_assert(peer_missing.count(fromshard))
* Backport #41733: luminous: osd: need clear PG_STATE_CLEAN when repair object
* Backport #41772: luminous: RBD image manipulation using python API crashing since Nautilus
* Backport #41808: luminous: rgw: fix minimum of unordered bucket listing
* Backport #41845: luminous: tools/rados: allow list objects in a specific pg in a pool
* Backport #41864: luminous: Mimic MONs have slow/long running ops
* Backport #41914: luminous: mgr/test_localpool.sh fails after multiple tries on luminous
* Backport #41919: luminous: osd: scrub error on big objects; make bluestore refuse to start on big objects
* Backport #41959: luminous: tools/rados: add --pgid in help
* Backport #41962: luminous: Segmentation fault in rados ls when using --pgid and --pool/-p together as options
* Backport #42037: luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_complete == info.last_update
* Backport #42039: luminous: client: _readdir_cache_cb() may use the readdir_cache already clear
* Backport #42049: luminous: fix pytest warnings
* Bug #42056: rgw: librgw write wrongly closed in NFS3
* Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
* Backport #42127: luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
* Backport #42138: luminous: Remove unused full and nearful output from OSDMap summary
* Backport #42153: luminous: Removed OSDs with outstanding peer failure reports crash the monitor
* Bug #42175: _txc_add_transaction error (2) No such file or directory not handled on operation 15
* Bug #42193: luminous: MDS crash running upgrade test
* Backport #42199: luminous: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
* Backport #42241: luminous: Adding Placement Group id in Large omap log message
* Backport #42264: luminous: mimic and luminous still need to read ceph.conf.template from teuthology
* Bug #42316: msg/async: do not bump connect_seq for fault during ACCEPTING_SESSION
* Backport #42361: luminous: python3-cephfs should provide python36-cephfs
* Backport #42390: luminous: mgr/balancer: 'dict_keys' object does not support indexing
* Backport #42393: luminous: CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
* Backport #42415: luminous: sphinx spits warning when rendering doc/rbd/qemu-rbd.rst
* Backport #42425: luminous: [rbd] rbd map hangs up infinitely after osd down
* Backport #42527: luminous: concurrent "rbd unmap" failures due to udev
* Backport #42548: luminous: verify_upmaps can not cancel invalid upmap_items in some cases
* Backport #42573: luminous: restful: Query nodes_by_id for items
* Backport #42580: luminous: p2p tests fail due to missing python3-cephfs package
* Backport #42586: luminous: out of order caused by letting old msg from down peer be processed to RESETSESSION
* Backport #42663: luminous: RBD mirroring test cases broken in mimic due to bad backport
* Backport #42672: luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
* Backport #42678: luminous: qa: malformed job
* Backport #42698: luminous: Larger cluster using upmap mode balancer can block other balancer commands
* Backport #42774: luminous: mds: add command that modify session metadata
* Backport #42784: luminous: mgr/prometheus: UnboundLocalError occurs when obj_store is neither filestore nor bluestore
* Backport #42796: luminous: unnecessary error message "calc_pg_upmaps failed to build overfull/underfull"
* Bug #42828: rbd journal err assert(ictx->journal != __null) when release exclusive_lock
* Backport #42834: luminous: STATE_KV_SUBMITTED is set too early.
* Backport #42849: luminous: ceph osd status - units invisible using black background
* Backport #42895: luminous: rgw: add list user admin OP API
* Backport #42988: luminous: update kernel.sh for read-only changes
* Backport #43013: luminous: rgw: crypt: permit RGW-AUTO/default with SSE-S3 headers
* Backport #43093: luminous: Improve OSDMap::calc_pg_upmaps() efficiency
* Bug #43175: pgs inconsistent, union_shard_errors=missing
* Backport #43234: luminous: rgw: radosgw_admin teuthology task: No module named bunch
* Bug #43269: rgw: lc: continue past get_obj_state() failure
* Backport #43278: luminous: "cd /home/ubuntu/cephtest/s3-tests && ./bootstrap" fails on ubuntu
* Backport #43325: luminous: wrong datatype describing crush_rule
* Bug #43421: mon spends too much time to build incremental osdmap
* Backport #43499: luminous: rbd-mirror daemons don't logrotate correctly
* Backport #43532: luminous: Change default upmap_max_deviation to 5
* Bug #43562: Error in tcmalloc
* Backport #43577: luminous: StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals)
* Backport #43651: luminous: Improve upmap change reporting in logs
* Backport #43759: luminous: functional tests only assume correct number is osds if branch tests is mimic or luminous
* Backport #43926: luminous: kernel_untar_build.sh: bison: command not found
* Bug #44008: multi-part upload will lost part data when you abort and resume a multipart upload request by using aws java Signature Version 4 api
* Bug #44967: rgw:rgw crash when putting object tagging and post object with malformedXML