# v14.2.5

* Backport #39412: nautilus: pool_stat.dump() - value of num_store_stats is wrong
* Backport #39517: nautilus: Improvements to standalone tests.
* Backport #39604: mgr/dashboard: Allow the decrease of PGs of an existing pool
* Backport #39660: nautilus: rgw: Segfault during request processing
* Backport #39682: nautilus: filestore pre-split may not split enough directories
* Backport #39700: nautilus: [RFE] If the nodeep-scrub/noscrub flags are set on pools instead of the global cluster, list the pool names in the ceph status
* Backport #40043: nautilus: mgr/zabbix: Typo in key name for PGs in backfill_wait state
* Backport #40045: nautilus: mgr/dashboard: Unify button/URL actions naming for iSCSI and NFS
* Backport #40084: nautilus: osd: Better error message when OSD count is less than osd_pool_default_size
* Backport #40131: nautilus: Document behaviour of fsync-after-close
* Backport #40227: nautilus: msg: bad address encoding when CEPH_FEATURE_MSG_ADDR2 enabled
* Backport #40270: nautilus: make check failure: "patch: command not found"
* Backport #40449: nautilus: "no available blob id" assertion might occur
* Backport #40495: nautilus: test_volume_client: declare only one default for python version
* Backport #40504: nautilus: osd: rollforward may need to mark pglog dirty
* Backport #40597: nautilus: rgw_file: directory enumeration can be accelerated 1-2 orders of magnitude by taking stats from the bucket index, Part I (stats from S3/Swift only)
* Backport #40630: nautilus: rgw multisite: datalogs/bilogs are not trimmed if no peer zones are processing them
* Backport #40830: nautilus: mgr/dashboard: Validate iSCSI controls
* Backport #40840: nautilus: Explicitly requested repair of an inconsistent PG cannot be scheduled timely on an OSD with ongoing recovery
* Backport #40849: nautilus: lifecycle transitions on non-existent placement targets
* Backport #40854: nautilus: test_volume_client: test_put_object_versioned is unreliable
* Backport #40857: nautilus: ceph_volume_client: python program embedded in test_volume_client.py uses python2.7
* Backport #40878: nautilus: Unable to reset / unset module options
* Backport #40887: nautilus: ceph_volume_client: to_bytes converts NoneType object str
* Backport #40894: nautilus: mds: clean up truncating inodes when standby replay mds trims log segments
* Backport #40895: nautilus: pybind: Add standard error message and fix print of path as byte object in error message
* Backport #40897: nautilus: ceph_volume_client: fs_name must be converted to string before using it
* Backport #40900: nautilus: mds: only evict an unresponsive client when another client wants its caps
* Backport #40944: nautilus: mgr: failover during qa testing causes unresponsive client warnings
* Backport #40946: nautilus: ceph-crash crashes: 'memoryview: a bytes-like object is required'
* Bug #41067: mgr/dashboard: (nautilus) change bucket owner between owners from same tenant
* Backport #41081: nautilus: Pool and namespace should be separated by a slash
* Backport #41087: nautilus: qa: AssertionError: u'open' != 'stale'
* Backport #41090: nautilus: [rpm packaging] librgw2 contains files that belong in librgw-devel
* Backport #41091: nautilus: mgr/dashboard: ceph-mgr-dashboard RPM contains duplicated files that are not hard- or symlinks
* Backport #41093: nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_caps causes (MDS_CLIENT_LATE_RELEASE)
* Backport #41095: nautilus: qa: race in test_standby_replay_singleton_fail
* Backport #41096: nautilus: mds: map client_caps has been inserted by mistake
* Backport #41099: nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
* Backport #41102: nautilus: rgw: when using radosgw-admin to list a bucket, --max-entries can be set excessively high
* Backport #41107: nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
* Backport #41109: nautilus: rgw: fix drain handles error when deleting bucket with bypass-gc option
* Backport #41113: nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
* Backport #41119: nautilus: rgw: rgw-admin: search for user by access key
* Backport #41125: nautilus: RGW returns one byte more data than the requested range from the SLO object.
* Backport #41128: nautilus: qa: power off still resulted in client sending session close
* Backport #41130: nautilus: RGW Swift metadata dropped after S3 bucket versioning enabled
* Backport #41238: nautilus: Implement mon_memory_target
* Backport #41258: nautilus: os/bluestore: Don't forget sub kv_submitted_waiters.
* Backport #41264: nautilus: Potential crash in putbj
* Backport #41267: nautilus: beast frontend throws an exception when running out of FDs
* Backport #41269: nautilus: cephfs-shell: Convert files path type from string to bytes
* Backport #41272: nautilus: rgw: rgw-log issues the wrong message when decompression fails
* Backport #41276: nautilus: qa: malformed job
* Backport #41279: nautilus: mgr/prometheus: Setting scrape_interval breaks cache timeout comparison
* Backport #41282: nautilus: BlueStore tool to check fragmentation
* Backport #41286: nautilus: error from replay is not stored in rbd-mirror status
* Backport #41290: nautilus: fix and improve doc regarding manual bluestore cache settings.
* Backport #41323: nautilus: multisite: datalog/mdlog trim doesn't loop until done
* Backport #41333: nautilus: ceph-test RPM not built for SUSE
* Backport #41340: nautilus: os/bluestore/BlueFS: use 64K alloc_size on the shared device
* Backport #41341: nautilus: "CMake Error" in test_envlibrados_for_rocksdb.sh
* Backport #41350: nautilus: hidden corei7 requirement in binary packages
* Backport #41380: nautilus: rgw: housekeeping of reset stats operation in radosgw-admin and cls back-end
* Backport #41408: nautilus: List objects version 2
* Backport #41420: nautilus: too slow to delete a big empty volume
* Backport #41422: nautilus: `rbd mirror pool status --verbose` test is missing
* Backport #41436: nautilus: pg_autoscaler: pool id key not present in pool_stats
* Backport #41437: nautilus: mgr/volumes: subvolume and subvolume group path exists even when creation failed
* Backport #41440: nautilus: [rbd-mirror] cannot connect to remote cluster when running as 'ceph' user
* Backport #41441: nautilus: mgr/rbd_support: module.py:1088: error: Name 'image_spec' is not defined
* Backport #41443: nautilus: osd: need to clear PG_STATE_CLEAN when repairing an object
* Backport #41444: nautilus: mgr/volumes: handle incorrect pool_layout setting during `fs subvolume/subvolume group create`
* Backport #41446: nautilus: rgw_file: readdir: do not construct markers w/leading '/'
* Backport #41448: nautilus: osd/PrimaryLogPG: Access destroyed references in finish_degraded_object
* Backport #41452: nautilus: support S3 Object Lock
* Backport #41453: nautilus: mon: C_AckMarkedDown has not handled the Callback Arguments
* Backport #41456: nautilus: proc_replica_log needs to preserve the replica log's crt
* Backport #41459: nautilus: rgw: Put User Policy is sensitive to whitespace
* Backport #41460: nautilus: incorrect RW_IO_MAX
* Backport #41463: nautilus: ceph-objectstore-tool: update-mon-db returns EINVAL with missing inc_osdmap
* Backport #41465: nautilus: mount.ceph: doesn't accept "strictatime"
* Backport #41467: nautilus: mds: recall capabilities more regularly when under cache pressure
* Backport #41477: nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
* Backport #41479: nautilus: rgw dns name is not case sensitive
* Backport #41482: nautilus: rgw: potential realm watch lost
* Backport #41485: nautilus: rgw: list bucket with delimiter wrongly skips some special keys
* Backport #41488: nautilus: client: client should return EIO when its unsafe reqs have been dropped and the session is closed
* Backport #41491: nautilus: OSDCap.PoolClassRNS test aborts
* Backport #41493: nautilus: multisite: radosgw-admin bucket sync status incorrectly reports "caught up" during full sync
* Backport #41495: nautilus: qa: 'ceph osd require-osd-release nautilus' fails
* Backport #41498: nautilus: RGW S3Website didn't do the necessary checking of the website configuration
* Backport #41501: nautilus: backfill_toofull while OSDs are not full (Unnecessary HEALTH_ERR)
* Backport #41503: nautilus: Warning about past_interval bounds on deleting pg
* Backport #41509: nautilus: Python 3 Ceph throws exception when sending via zabbix-sender
* Backport #41529: nautilus: doc: mon_health_to_clog_* values flipped
* Backport #41531: nautilus: Move bluefs alloc size initialization log message to log level 1
* Backport #41534: nautilus: valgrind: UninitCondition in ceph::crypto::onwire::AES128GCM_OnWireRxHandler::authenticated_decrypt_update_final()
* Backport #41545: nautilus: [test] rbd-nbd FSX test runs are failing
* Backport #41548: nautilus: monc: send_command to specific down mon breaks other mon msgs
* Backport #41568: nautilus: doc: pg_num should always be a power of two
* Backport #41583: nautilus: backfill_toofull seen on cluster where the most full OSD is at 1%
* Backport #41588: nautilus: Lifecycle expiration action generates delete marker continuously
* Backport #41596: nautilus: ceph-objectstore-tool can't remove head with bad snapset
* Backport #41604: nautilus: dashboard: predefined system roles don't include read access to grafana scope
* Backport #41620: nautilus: in rbd-ggate the assert in Log:open() will trigger
* Backport #41624: nautilus: rgw/rgw_op: Remove get_val from hotpath via legacy options
* Backport #41627: nautilus: multisite: ENOENT errors from FetchRemoteObj causing bucket sync to stall without retry
* Backport #41629: nautilus: failed to remove image with mirroring enabled and data pool deleted
* Backport #41631: nautilus: rgw: report error "unrecognized arg rm" when using "radosgw-admin zone rm"
* Backport #41640: nautilus: FAILED ceph_assert(info.history.same_interval_since != 0) in PG::start_peering_interval()
* Backport #41695: nautilus: Network ping monitoring
* Backport #41700: nautilus: "make check" failing in GitHub due to python packaging conflict
* Backport #41703: nautilus: oi(object_info_t).size does not match on-disk size
* Backport #41705: nautilus: Incorrect logical operator in Monitor::handle_auth_request()
* Backport #41707: nautilus: in cls_bucket_list_unordered(), entries following an entry for which check_disk_state() returns -ENOENT may not get listed
* Backport #41711: nautilus: man page for ceph-kvstore-tool missing command
* Backport #41712: nautilus: FAILED ceph_assert(p != pg_slots.end()) in OSDShard::register_and_wake_split_child(PG*)
* Backport #41720: nautilus: Add mgr module for kubernetes event integration
* Backport #41724: nautilus: build failed with option "-DWITH_TESTS=off"
* Backport #41764: nautilus: TestClsRbd.sparsify fails when using filestore
* Backport #41766: nautilus: ceph.spec.in: 1800MB of memory per build job is not sufficient to prevent OOM
* Backport #41771: nautilus: RBD image manipulation using python API crashing since Nautilus
* Backport #41773: nautilus: mgr/dashboard: NFS Ganesha Object Gateway exports should default to read-only and warn if RW is requested
* Backport #41785: nautilus: Make dumping of reservation info congruent between scrub and recovery
* Backport #41804: nautilus: Slow op warning does not display correctly
* Backport #41806: nautilus: rgw: fix minimum of unordered bucket listing
* Backport #41809: nautilus: Total amount of PGs is more than 100%
* Backport #41813: nautilus: mgr/dashboard: SSL-enabled dashboard does not play nicely with a frontend HAproxy
* Backport #41846: nautilus: beast frontend reads body in small buffers
* Backport #41850: nautilus: mgr/volumes: drop unused size in fs volume create
* Backport #41851: nautilus: mds: MDSIOContextBase instance leak
* Backport #41855: nautilus: client: removing dir reports "not empty" issue due to client side filling in the wrong dir offset
* Backport #41858: nautilus: memory usage of: radosgw-admin bucket rm
* Backport #41862: nautilus: Mimic MONs have slow/long running ops
* Backport #41883: nautilus: [trash] cannot restore mirroring sourced images
* Backport #41884: nautilus: mgr/volumes: prevent negative subvolume size
* Backport #41886: nautilus: mds: client evicted twice in one tick
* Backport #41889: nautilus: mgr/volumes: retry spawning purge threads on failure
* Backport #41890: nautilus: mount.ceph: enable consumption of ceph keyring files
* Backport #41898: nautilus: rgw: data sync start delays if remote hasn't initialized data_log
* Backport #41915: nautilus: avoid page cache for krbd discard round off tests
* Backport #41917: nautilus: osd: failure result of do_osd_ops not logged in prepare_transaction function
* Backport #41920: nautilus: osd: scrub error on big objects; make bluestore refuse to start on big objects
* Backport #41921: nautilus: OSDMonitor: missing `pool_id` field in `osd pool ls` command
* Backport #41933: nautilus: mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph-mgr daemon hits 100% CPU utilization
* Bug #41948: nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode does not cleanup unneeded client_snap_caps)
* Backport #41956: nautilus: ceph-volume: typo in log message of ceph-volume systemd
* Backport #41957: nautilus: rbd-nbd backport netlink changes from master
* Backport #41958: nautilus: scrub errors after quick split/merge cycle
* Backport #41960: nautilus: tools/rados: add --pgid in help
* Backport #41963: nautilus: Segmentation fault in rados ls when using --pgid and --pool/-p together as options
* Backport #41968: nautilus: [rbd-mirror] image status reports "down" after msgr v2 reconnect
* Backport #41970: nautilus: rgw: ldap auth: S3 auth failure should return InvalidAccessKeyId for consistency
* Backport #41972: nautilus: [rbd-mirror] simplify peer bootstrapping
* Backport #41974: nautilus: Admin user becomes normal after being synced to another zone.
* Backport #41976: nautilus: if a user doesn't exist then bucket list should give an error/info message (saying the user doesn't exist) rather than showing an empty list
* Backport #41980: nautilus: mgr/dashboard: passwords and other sensitive information is written to logs
* Backport #41981: nautilus: RGW compression does not take effect when using the command "radosgw-admin zone placement modify ……"
* Backport #41983: nautilus: osd status reports old crush location after osd moves
* Backport #41995: nautilus: mgr/dashboard: NFS export list should display the "Pseudo Path"
* Backport #42013: nautilus: hammer client failed to auth against master OSD
* Backport #42014: nautilus: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_complete == info.last_update
* Backport #42024: nautilus: mgr/dashboard: Error editing iSCSI image advanced settings
* Backport #42030: nautilus: mgr/dashboard: Error during iSCSI target editing
* Backport #42041: nautilus: bluestore objectstore_blackhole=true violates read-after-write
* Backport #42050: nautilus: fix pytest warnings
* Bug #42054: osd child thread does not inherit main thread affinity attribute
* Backport #42065: nautilus: mgr/dashboard: Support iSCSI target-level CHAP authentication
* Backport #42066: nautilus: mgr/dashboard: iSCSI control inputs should be rendered based on control "type"
* Backport #42070: nautilus: install-deps.sh does not support aarch64
* Backport #42083: nautilus: pybind/rados: set_omap() crash on py3
* Backport #42095: nautilus: global osd crash in DynamicPerfStats::add_to_reports
* Bug #42099: OSDs crashed during the fio test
* Backport #42105: nautilus: "docs: build check" broken in stable branches
* Bug #42108: pg_autoscaler: pool "target_size_bytes" setting doesn't allow for T, G, M values
* Backport #42124: nautilus: deadlock caused by ceph_get_module_option()
* Backport #42125: nautilus: weird daemon key seen in health alert
* Backport #42126: nautilus: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
* Backport #42136: nautilus: Remove unused full and nearful output from OSDMap summary
* Backport #42141: nautilus: asynchronous recovery cannot function under certain circumstances
* Backport #42144: nautilus: mgr/prometheus: KeyError in mgr/prometheus/module.py", line 490, in get_mgr_status
* Backport #42149: nautilus: mgr/volumes: missing protection for `fs volume rm` command
* Backport #42150: nautilus: mgr/dashboard: Configuring a URL prefix does not work as expected
* Backport #42152: nautilus: Removed OSDs with outstanding peer failure reports crash the monitor
* Backport #42155: nautilus: mds: infinite loop in Locker::file_update_finish()
* Backport #42163: nautilus: mgr/dashboard: MDS counter chart in Filesystems page is not automatically refreshed
* Bug #42173: _pinned_map closest pinned map ver 252615 not available! error: (2) No such file or directory
* Support #42174: Ceph Nautilus OSD isn't able to be added to the cluster
* Backport #42180: nautilus: mgr/volumes: creating subvolume and subvolume group snapshot fails
* Backport #42182: nautilus: es sync module supports ES's new type-less mapping
* Backport #42200: nautilus: mon: /var/lib/ceph/mon/* data (esp. rocksdb) is not 0600
* Backport #42204: nautilus: Error cloning snapshot when using RBD namespaces
* Bug #42209: STATE_KV_SUBMITTED is set too early.
* Backport #42234: nautilus: api/lvm: VolumeGroups.filter purges the object
* Backport #42236: nautilus: api/lvm: PVolumes.filter purges the object
* Backport #42239: nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
* Backport #42242: nautilus: Adding Placement Group id in Large omap log message
* Backport #42260: nautilus: mgr: pg_autoscaler: problem with pool_logical_used
* Backport #42277: nautilus: unittest_rgw_amqp failing in master
* Backport #42280: nautilus: Continuation token doesn't work in bucket list operation
* Backport #42281: nautilus: rgw: lifecycle: days may be 0
* Backport #42283: nautilus: mgr/dashboard: Building the frontend with --prod causes problems
* Bug #42293: bluestore/rocksdb: wrong Fast CRC32 supported log printing on AArch64 platform
* Backport #42295: nautilus: mgr/dashboard: Delete actions should provide the name of the object being deleted
* Feature #42321: Add a new mode to balance pg layout by primary osds
* Backport #42326: nautilus: max_size from crushmap ignored when increasing size on pool
* Bug #42354: Ceph Prometheus plugin uses device mapper devices instead of LVM vg_name/lv_name
* Backport #42356: nautilus: mgr/dashboard: iSCSI target details should display the disk WWN and LUN number
* Backport #42363: nautilus: python3-cephfs should provide python36-cephfs
* Backport #42392: nautilus: mgr/balancer: 'dict_keys' object does not support indexing
* Backport #42395: nautilus: CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
* Backport #42401: nautilus: cmake: Allow cephfs and ceph-mds to be built when building on FreeBSD
* Backport #42417: nautilus: sphinx spits warning when rendering doc/rbd/qemu-rbd.rst
* Backport #42427: nautilus: [rbd] rbd map hangs up infinitely after osd down
* Backport #42438: nautilus: msg/async: nonexistent auth users lead to auth timeout, not fast failure
* Backport #42439: nautilus: Ceph libraries need /etc/ceph to work, yet installing them does not create this directory
* Backport #42458: nautilus: mgr/dashboard: Failing Teuthology tests due to "client eviction warning"
* Backport #42460: nautilus: test/{fs,cephfs}: Get libcephfs and cephfs to compile with FreeBSD
* Backport #42482: nautilus: mgr/dashboard: Backport transifex-i18ntool for an easier sync with transifex
* Backport #42524: nautilus: concurrent "rbd unmap" failures due to udev
* Backport #42532: nautilus: rearrange api/lvm.py
* Backport #42535: nautilus: unit test: lvm mocking insufficient
* Backport #42540: nautilus: api/lvm: check if list of LVs is empty
* Backport #42545: nautilus: cbt task does not clean up
* Backport #42547: nautilus: verify_upmaps cannot cancel invalid upmap_items in some cases
* Backport #42562: nautilus: mgr/dashboard: Move QA tests to support running the rados/dashboard QA tests in isolation
* Backport #42574: nautilus: restful: Query nodes_by_id for items
* Backport #42589: nautilus: mgr/dashboard: error when editing image: TypeError: Cannot read property 'pool_name' of undefined
* Bug #42592: ceph-mon/mgr PGstat Segmentation Fault
* Bug #42613: nautilus: mgr/dashboard: "unsupported 'ceph-iscsi' config version. Expected 10 but found 11"
* Backport #42621: nautilus: mgr/dashboard: dashboard e2e Jenkins job failures on Nautilus backport PRs
* Backport #42645: nautilus: backport "common/thread: Fix race condition in make_named_thread" to mimic and nautilus
* Backport #42648: nautilus: ceph device show-health-metrics foo crashes mgr
* Backport #42677: nautilus: mgr/dashboard: Fix grafana dashboards
* Backport #42679: nautilus: mgr/dashboard: Should be possible to set the iSCSI disk WWN and LUN number from the UI
* Backport #42682: nautilus: mgr/{dashboard,prometheus}: Fix hostname in `ceph mgr services`
* Backport #42694: nautilus: mgr/dashboard: non-pool fields shown in pool details
* Backport #42696: nautilus: Larger cluster using upmap mode balancer can block other balancer commands
* Backport #42729: nautilus: mgr/dashboard: RBD tests must use pools with power-of-two pg_num
* Backport #42730: nautilus: mgr/dashboard: dashboard test fails due to adaptation of iscsi response
* Backport #42731: nautilus: assert(addr_mons.count(m.public_addr) == 0);
* Backport #42743: nautilus: mgr/dashboard: false alignment of MDS chart data points
* Bug #42745: mgr/dashboard: nautilus backports: NFS breadcrumb test failures in dashboard e2e tests
* Backport #42747: nautilus: mgr/dashboard: MDS counters chart's tooltip is overlapping the data points
* Backport #42751: nautilus: mgr/restful: requests api adds support for parallel as well as sequential execution of commands
* Backport #42755: nautilus: allow skipping calls to restorecon
* Backport #42795: nautilus: common: fix typo in rgw_user_max_buckets option long description.
* Backport #42799: nautilus: unittest_rgw_amqp failure
* Bug #42822: mgr/dashboard: slashes in RBD pools and RBD images
* Backport #42836: nautilus: Recent rgw-website change causes master FTBFS on OpenSUSE Build Service
* Backport #42838: nautilus: master/octopus FTBFS on s390x
* Backport #42858: nautilus: ceph -s shows wrong number of pools when pool was deleted
* Backport #42965: nautilus: ceph-volume fails on non-SELinux systems
* Backport #42973: nautilus: ceph-volume fails on non-SELinux systems (py2)
* Backport #43009: nautilus: TestClsRbd.mirror_image_status failure during mimic->master(octopus) upgrade
* Backport #43030: nautilus: rados manpage fails to mention --namespace option
* Bug #43224: ceph osd status error
* Bug #43287: HEALTH_ERR Module 'crash' has failed: time data '2019-07-06 15:5' does not match format '%Y-%m-%d %H:%M:%S.%f'
* Bug #43296: Ceph assimilate-conf results in config entries which cannot be removed
* Bug #43407: mds crash after update to v14.2.5
* Bug #43443: ceph build: when running ARGS="-DWITH_BLKIN=ON" ./do_cmake.sh, an error occurs on the official ceph version
* Bug #44268: multisite/lc: lc doesn't run in the slave
* Bug #44409: log: the time precision of the log is only milliseconds because the option log_coarse_timestamps doesn't work well
* Bug #44635: Params "append" and "position" are not signed when appending an object
* Bug #44678: Cannot set CORS via AWS S3 API
* Bug #44813: Sendfile on cephfs results in 0 bytes of data on other node
* Bug #45008: [osd crash] The ceph-osd asserts with rbd bench io