# v14.2.2

* Backport #38808: nautilus: mgr/orchestrator: Remove `(add|test|remove)_stateful_service_rule`
* Backport #38850: upgrade: 1 nautilus mon + 1 luminous mon can't automatically form quorum
* Backport #38869: nautilus: flush skips requests in QOS throttler queue
* Backport #38874: nautilus: doc: cleanup HTTP Frontends documentation
* Backport #38876: nautilus: mds: high debug logging with many subtrees is slow
* Backport #38881: nautilus: ENOENT in collection_move_rename on EC backfill target
* Backport #38918: nautilus: multisite: add perf counters for data sync
* Backport #38982: nautilus: pg_autoscaler is not Python 3 compatible
* Backport #39018: nautilus: unable to cancel reshard operations for buckets with tenants
* Backport #39043: nautilus: osd/PGLog: preserve original_crt to check rollbackability
* Backport #39046: nautilus: rgw: update resharding documentation
* Backport #39048: nautilus: rgw: beast endpoint doesn't set a default port
* Backport #39050: nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
* Backport #39051: nautilus: doc: add LAZYIO
* Backport #39169: nautilus: doc/mgr/orchestrator_cli: Rook orch supports mon update
* Backport #39176: nautilus: doc: add documentation for `fs set min_compat_client`
* Backport #39178: nautilus: rgw: remove_olh_pending_entries() does not limit the number of xattrs to remove
* Backport #39184: nautilus: Primary won't automatically repair replica on pulling error
* Backport #39192: nautilus: mds: crash during mds restart
* Backport #39195: nautilus: Several race conditions are possible between io::ObjectRequest and io::CopyupRequest
* Backport #39197: nautilus: cephfs-shell: ls command produces error: no "colorize" attribute found error
* Backport #39199: nautilus: mds: we encountered "No space left on device" when moving huge number of files into one directory
* Backport #39202: nautilus: rgw: race condition between resharding and ops waiting on resharding
* Backport #39205: nautilus: osd: leaked pg refs on shutdown
* Backport #39209: nautilus: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server::cap_revoke_eviction_timeout
* Backport #39211: nautilus: MDSTableServer.cc: 83: FAILED assert(version == tid)
* Backport #39214: nautilus: mds: there is an assertion when calling Beacon::shutdown()
* Backport #39219: nautilus: osd: FAILED ceph_assert(attrs || !pg_log.get_missing().is_missing(soid) || (it_objects != pg_log.get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) in PrimaryLogPG::get_object_context()
* Backport #39222: nautilus: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
* Backport #39224: nautilus: deep cp a migration prepared image will result in assert
* Backport #39226: nautilus: [sparsify] verify that image isn't using an EC data pool
* Backport #39228: nautilus: rgw_file: can't retrieve etag of empty object written through NFS
* Backport #39232: nautilus: kclient: nofail option not supported
* Backport #39241: nautilus: msg/async: connection race + winner fault can leave connection stuck at replacing forever
* Backport #39256: nautilus: occasional ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
* Backport #39273: nautilus: S3 policy evaluated incorrectly
* Backport #39288: nautilus: [rbd-mirror] image replayer should periodically flush IO and commit positions
* Backport #39308: nautilus: mgr/ActivePyModules: handle_command may keep lock locked if module not available
* Backport #39312: nautilus: mgr/rook: Added missing rgw daemons in "service ls"
* Backport #39313: nautilus: mgr/rook: Fix RGW creation
* Backport #39315: nautilus: krbd: fix rbd map hang due to udev return subsystem unordered
* Backport #39344: nautilus: progress: KeyError on pg_to_state[pg_str]['stat_sum']['num_bytes_recovered']
* Backport #39345: nautilus: mgr/dashboard: code documentation
* Backport #39346: nautilus: mgr/dashboard: Manager should complain about wrong dashboard certificate
* Backport #39348: nautilus: raise an alert when per pool stats aren't used
* Backport #39356: nautilus: mgr/rook: Remove support for Rook older than v0.9
* Backport #39370: nautilus: mgr/dashboard: Buggy data table search field
* Backport #39371: nautilus: mgr/dashboard: Localization for date picker module
* Backport #39375: nautilus: ceph tell osd.xx bench help: gives wrong help
* Backport #39376: nautilus: cephfs-shell: mkdir creates directory with invalid octal mode
* Backport #39377: nautilus: cephfs-shell: python traceback on reattempt of mkdir
* Backport #39378: nautilus: cephfs-shell: support mkdir with non-octal mode
* Backport #39379: nautilus: cephfs-shell: python traceback with mkdir inside nonexistent directory
* Backport #39397: nautilus: deadlock on command completion
* Backport #39410: nautilus: inefficient unordered bucket listing
* Backport #39414: nautilus: multisite: period pusher gets 403 Forbidden against other zonegroups
* Backport #39417: nautilus: rgw multisite: bucket index logs do not get trimmed unless zones 'sync_from_all'
* Backport #39419: nautilus: rados/upgrade/nautilus-x-singleton: mon.c@1(electing).elector(11) Shutting down because I lack required monitor features
* Backport #39421: nautilus: Don't mark removed osds in when running "ceph osd in any|all|*"
* Backport #39423: nautilus: Drop "ceph_test_librbd_api" target
* Backport #39425: nautilus: mgr: deadlock
* Backport #39428: nautilus: 'rbd mirror status --verbose' will occasionally seg fault
* Backport #39430: nautilus: qa: test_sessionmap assumes simple messenger
* Backport #39432: nautilus: Degraded PG does not discover remapped data on originating OSD
* Backport #39446: nautilus: OSD crashed in BitmapAllocator::init_add_free()
* Backport #39450: librbd cannot open image against Jewel cluster
* Backport #39452: nautilus: mgr/dashboard: iSCSI form is showing a warning
* Backport #39453: nautilus: mgr/dashboard: Adapt iSCSI discovery auth for read-only users
* Backport #39454: nautilus: mgr/dashboard: Validate if any client belongs to more than one group
* Backport #39459: nautilus: mgr/prometheus: replace whitespaces in metric names
* Backport #39462: nautilus: [rbd-mirror] "bad crc in data" error when listing large pools
* Backport #39465: nautilus: print client IP in default debug_ms log level when "bad crc in {front|middle|data}" occurs
* Backport #39467: nautilus: mgr/dashboard: Admin resource not honored
* Backport #39470: nautilus: There is no punctuation mark or blank between tid and client_id in the output of "ceph health detail"
* Backport #39471: nautilus: Expose CephFS snapshot creation time to clients
* Backport #39473: nautilus: mds: fail to resolve snapshot name containing '_'
* Backport #39476: nautilus: segv in fgets() in collect_sys_info reading /proc/cpuinfo
* Backport #39479: nautilus: test/rgw: fix race in test_rgw_reshard_wait and test_rgw_reshard_wait uses same clock for timing
* Backport #39496: nautilus: rgw admin: object stat command output's delete_at not readable
* Backport #39502: nautilus: mgr/dashboard: RGW Bucket API should provide times in UTC that will be converted into local time by Angular
* Backport #39503: nautilus: rgw: clean up some logging
* Backport #39504: nautilus: Give recovery for inactive PGs a higher priority
* Backport #39512: nautilus: osd acting cycle
* Backport #39514: nautilus: osd: segv in _preboot -> heartbeat
* Backport #39519: nautilus: snaps missing in mapper, should be: ca was r -2...repaired
* Backport #39524: nautilus: mgr/dashboard: Can't login with a bigger time difference between user and server or make auth token work with UTC times only
* Backport #39530: nautilus: tox failures when running "make check"
* Backport #39534: nautilus: mgr/dashboard: New RBD snapshot names should be prefixed with a local time bound ISO timestamp, not UTC
* Backport #39535: nautilus: mgr/dashboard: Make all columns sortable
* Backport #39536: nautilus: test_orchestrator: AttributeError: 'TestWriteCompletion' object has no attribute 'id'
* Backport #39539: nautilus: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log().get_log().objects.count(soid) && (get_parent()->get_log().get_log().objects.find(soid)->second->op == pg_log_entry_t::LOST_REVERT) && (get_parent()->get_log().get_log().object
* Backport #39540: nautilus: Provide a base set of Prometheus alert manager rules that notify the user about common Ceph error conditions
* Backport #39541: nautilus: [test] qemu-iotests tests fail under latest Ubuntu kernel
* Backport #39558: nautilus: mgr/dashboard: KV-table transforms dates through pipe
* Backport #39559: nautilus: mgr/ansible: Host ls implementation
* Backport #39560: nautilus: mgr/dashboard: Queue notifications as default
* Backport #39573: nautilus: common: Clang requires a default constructor, but it can be empty #27844
* Backport #39574: nautilus: rgw: cloud sync module logs attrs in the log
* Backport #39575: nautilus: librgw: unexpected crash when creating bucket
* Backport #39577: nautilus: build/rgw: unittest_rgw_dmclock_scheduler does not need Boost_LIBRARIES #26799
* Backport #39590: nautilus: qa/tasks/rbd_fio: fixed missing delimiter between 'cd' and 'configure'
* Backport #39591: nautilus: mgr/dashboard: Upgrade to ceph-iscsi config v9
* Bug #39600: CRUSH rule device classes mystery
* Backport #39601: nautilus: document CreateBucketConfiguration for s3 PUT Bucket request
* Backport #39612: nautilus: os/bluestore: fix for FreeBSD iocb structure
* Backport #39616: nautilus: mgr/dashboard: iSCSI should allow exporting an RBD image with Journaling enabled
* Backport #39630: nautilus: mgr/dashboard: iSCSI GET requests should not be logged
* Backport #39631: nautilus: mgr/dashboard: iSCSI form does not support IPv6
* Feature #39637: rgw: allow radosgw-admin bucket list to use the --allow-unordered flag
* Backport #39658: nautilus: mgr/dashboard: Avoid merge conflicts in messages.xlf by auto-generating it at build time?
* Backport #39664: nautilus: mgr/dashboard: incorrect help message for minimum blob size
* Backport #39670: nautilus: mds: output lock state in format dump
* Backport #39671: nautilus: make cluster_network work well
* Backport #39672: nautilus: os/bluestore: fix missing discard in BlueStore::_kv_sync_thread
* Backport #39675: nautilus: [test] possible race condition in rbd-nbd disconnect
* Backport #39676: nautilus: rgw: crypto: HMAC ctors cannot safely assert in (e.g.) FIPS mode
* Backport #39678: nautilus: cephfs-shell: fix string decode for ls command
* Backport #39680: nautilus: pybind: add the lseek() function to pybind of cephfs
* Backport #39684: nautilus: rgw: cloud sync module fails to sync multipart objects
* Backport #39686: nautilus: ceph-fuse: client hangs because of its bad session PipeConnection to mds
* Backport #39690: nautilus: mds: error "No space left on device" when creating a large number of dirs
* Backport #39695: nautilus: restful/api is not Python 3 compatible
* Backport #39699: nautilus: OSD down on snaptrim
* Bug #39713: "Ceph -s" execution consumes too much time
* Backport #39721: nautilus: short pg log+nautilus-p2p-stress-split: "Error: finished tid 3 when last_acked_tid was 5" in upgrade:nautilus-p2p
* Backport #39729: nautilus: [test] devstack is broken (again)
* Backport #39735: nautilus: multisite: mismatch of bucket creation times from List Buckets
* Backport #39736: nautilus: mgr/dashboard: 'RBD_FEATURE_MIGRATING' is missing in `rbd.pyx`
* Backport #39738: nautilus: Binary data in OSD log from "CRC header" message
* Backport #39739: nautilus: Fast-diff can be disabled w/o disabling object-map
* Backport #39740: nautilus: rgw: swift object expiry fails when a bucket reshards
* Backport #39745: nautilus: On rook master branch s3cmd make bucket returns error InvalidLocationConstraint
* Backport #39746: nautilus: beast: multiple v4 and v6 endpoints with the same port will cause failure
* Backport #39932: nautilus: dashboard: Grafana dashboards use outdated metric names from the prometheus module
* Backport #39934: nautilus: mgr/volumes: add CephFS subvolumes library
* Backport #39935: nautilus: cephfs-shell: teuthology tests
* Backport #39936: nautilus: cephfs-shell: add commands to manipulate quotas
* Backport #39937: nautilus: cephfs-shell: add a "stat" command
* Backport #39960: nautilus: cephfs-shell: mkdir error for relative path
* Backport #39961: nautilus: mgr/dashboard: Unify the look of dashboard charts
* Backport #39962: nautilus: mgr/dashboard: openssl exception when verifying certificates of HTTPS requests
* Backport #39975: nautilus: mgr/dashboard: NFS export creation: Add more info to the validation message of the field "Pseudo"
* Backport #39988: nautilus: mgr/dashboard: Unable to see tcmu-runner perf counters
* Backport #39993: nautilus: mgr/dashboard: inconsistent result when editing an RBD image's features
* Backport #40003: nautilus: do_cmake.sh: "source" not found
* Backport #40006: nautilus: Several embedded Grafana dashboards are not displayed due to changed uids
* Backport #40030: nautilus: mgr/dashboard: Some validations are not updated and prevent the submission of a form
* Backport #40031: nautilus: mgr/dashboard: "local variable 'cluster_id' referenced before assignment" error when trying to list NFS Ganesha daemons
* Backport #40037: nautilus: dashboard: orchestrator mgr modules assert failure on iscsi service request
* Backport #40040: nautilus: avoid trimming too many log segments after mds failover
* Backport #40044: nautilus: common: segfault while parsing POD_MEMORY_REQUEST
* Backport #40048: nautilus: mgr/dashboard: Display correct dialog title
* Bug #40051: mgr/dashboard: Dashboard login page broken; summary returns 401
* Backport #40057: nautilus: mgr/dashboard: NFS clients information is not displayed in the details view
* Backport #40059: nautilus: mgr/dashboard: Add custom dialogue for configuring PG scrub parameters
* Backport #40065: monitoring: SNMP OID per every Prometheus alert rule
* Backport #40067: nautilus: Ceph RPM build fails on openSUSE Tumbleweed with GCC 9
* Backport #40074: nautilus: mgr/dashboard: Error creating NFS client without squash
* Backport #40075: nautilus: mgr/dashboard: Angular is creating multiple instances of the same service
* Backport #40076: nautilus: mgr/dashboard: Reduce the number of renders on the tables
* Backport #40077: nautilus: mgr/dashboard: Only one root node is shown in the crush map viewer
* Backport #40087: nautilus: backport new iso8601 parsing logic
* Backport #40090: nautilus: ceph-mgr should log an error if it can't find any modules to load
* Backport #40105: nautilus: [test] qemu_dynamic_features workunit fails to disable fast-diff+object-map
* Bug #40116: nautilus: qa: cannot schedule kcephfs/multimds
* Backport #40122: nautilus: wrong format for rbd_mirror prometheus metrics
* Backport #40145: nautilus: Multisite sync corruption for large multipart obj
* Backport #40148: nautilus: rgw: bucket may redundantly list keys after BI_PREFIX_CHAR
* Backport #40157: nautilus: mgr/volumes: cannot create subvolumes with py3 libraries
* Backport #40158: nautilus: mgr/volumes: unable to set quota on fs subvolumes
* Backport #40161: nautilus: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
* Backport #40164: nautilus: mount: key parsing fails when doing a remount
* Backport #40167: nautilus: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanoseconds component
* Backport #40169: nautilus: qa: Expected: (btime) < (new_btime), actual: 2019-05-09 23:33:09.400554 vs 2019-05-09 23:33:09.094205
* Backport #40189: nautilus: mgr/dashboard: misplaced objects not shown anymore
* Backport #40192: nautilus: Rados.get_fsid() returning bytes in python3
* Backport #40217: nautilus: cephfs-shell: Fix flake8 errors
* Backport #40220: nautilus: TestMisc.test_evict_client fails
* Backport #40223: nautilus: mds: reset heartbeat during long-running loops in recovery
* Backport #40232: nautilus: build/ops: python3 pybind RPMs do not replace their python2 counterparts on upgrade even though they should
* Backport #40236: nautilus: mds: blacklisted clients eviction is broken
* Bug #40288: mds: lost mds journal when hot-standby mds switch occurs
* Backport #40301: nautilus: OBS fails to build master due to RPATH issue
* Backport #40313: nautilus: cephfs-shell: 'lls' command errors
* Backport #40314: nautilus: cephfs-shell: Incorrect error message is printed in 'lcd' command
* Backport #40321: nautilus: test: extend mgr/volume test to cover new interfaces
* Backport #40338: nautilus: mgr/volumes: Name 'sub_name' is not defined
* Backport #40344: nautilus: mds: fix corner case of replaying open sessions
* Backport #40346: nautilus: ssl tests failing with SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed'),)",)
* Bug #40373: nautilus: qa: still testing simple messenger
* Bug #40374: nautilus: qa: disable "HEALTH_WARN Legacy BlueStore stats reporting..."
* Backport #40378: nautilus: mgr/volume: refactor volume module
* Backport #40379: nautilus: [rbd-mirror] image sync can crash when updating progress
* Backport #40402: nautilus: "/bootstrap: missing required packages" in upgrade:nautilus-p2p-nautilus
* Backport #40469: nautilus: cephfs-shell: test only python3 and assert python3 in cephfs-shell
* Backport #40470: nautilus: cephfs-shell: fix unnecessary usage of to_bytes for file paths
* Backport #40471: nautilus: cephfs-shell: Fix flake8 warnings and errors
* Backport #40569: nautilus: mgr/volumes: subvolume.py calls Exceptions with too few arguments
* Backport #40570: nautilus: mgr/volumes: allow setting data pool layout for fs subvolumes
* Backport #40571: nautilus: mgr/volumes: allow setting mode on fs subvol, subvol group
* Backport #40762: nautilus: rgw: list bucket with start marker and delimiter will miss next object with char '0'
* Bug #40770: lvm activate --> lvm activate
* Bug #40781: ceph-crash crashes: 'memoryview: a bytes-like object is required'
* Tasks #40937: Problem "open vSwitch" networkbond set_numa_affinity
* Documentation #40996: Calling messenger v1 protocol legacy is misleading
* Bug #41025: 2/3 mon process crash - complete cluster failure
* Bug #41026: MDS process crashes on 14.2.2
* Bug #41049: adding ceph secret key to kernel failed: Invalid argument
* Bug #41134: ceph-osd does not release memory to OS
* Bug #41183: pg autoscale on EC pools
* Bug #41234: More than 100% in a dashboard PG Status
* Bug #41313: PG distribution completely messed up since Nautilus
* Bug #41526: Choosing the next PG for a deep scrub is wrong
* Bug #41618: 14.2.1 -> 14.2.2 ceph-mgr hard segfault. devicehealth?
* Bug #42641: Starting MGR fails: handle_connect_reply_2 connect got BADAUTHORIZER