# v11.2.1

* Backport #18378: kraken: msg/simple/SimpleMessenger.cc: 239: FAILED assert(!cleared)
* Backport #18387: kraken: Cannot clone ceph/s3-tests.git (missing branch)
* Backport #18403: kraken: cache tiering: base pool last_force_resend not respected (racing read got wrong version)
* Backport #18418: kraken: leveldb corruption leads to "Operation not permitted not handled" and assert
* Backport #18431: kraken: ceph-disk: error on _bytes2str
* Backport #18439: kraken: TestVolumeClient.test_evict_client failure creating pidfile
* Backport #18456: kraken: Attempting to remove an image w/ incompatible features results in partial removal
* Backport #18463: kraken: Decode errors on backtrace will crash MDS
* Backport #18493: kraken: rbd-mirror: sporadic image replayer shut down failure
* Backport #18495: kraken: rbd: Possible deadlock performing a synchronous API action while refresh in-progress
* Backport #18497: kraken: osd_recovery_incomplete: failed assert not manager.is_recovered()
* Backport #18499: kraken: rgw: Realm set does not create a new period
* Backport #18501: kraken: rbd-mirror: potential race mirroring cloned image
* Backport #18531: kraken: speed up readdir by skipping unwanted dn
* Backport #18540: kraken: Test failure: test_session_reject (tasks.cephfs.test_sessionmap.TestSessionMap)
* Backport #18548: kraken: multisite: segfault after changing value of rgw_data_log_num_shards
* Backport #18549: kraken: rbd: 'metadata_set' API operation should not change global config setting
* Backport #18552: kraken: ceph-fuse crash during snapshot tests
* Backport #18554: kraken: peon wrongly deletes routed pg stats op before receiving pg stats ack
* Backport #18555: kraken: rbd: Potential race when removing two-way mirroring image
* Backport #18557: kraken: rbd: 'rbd bench-write' will crash if --io-size is 4G
* Backport #18562: kraken: Test failure: kcephfs test_client_recovery.TestClientRecovery
* Backport #18566: kraken: MDS crashes on missing metadata object
* Backport #18571: kraken: Python Swift client commands in Quick Developer Guide don't match configuration in vstart.sh
* Backport #18601: kraken: rbd: Add missing parameter feedback to 'rbd snap limit'
* Backport #18604: kraken: cephfs test failures (ceph.com/qa is broken, should be download.ceph.com/qa)
* Backport #18606: kraken: ceph-disk prepare writes osd log 0 with root owner
* Backport #18609: kraken: Removing a clone that fails to open its parent might leave a dangling rbd_children reference
* Backport #18610: kraken: osd: ENOENT on clone
* Backport #18612: kraken: client: segfault on ceph_rmdir path "/"
* Backport #18616: kraken: segfault in handle_client_caps
* Backport #18627: kraken: TempURL verification broken for URI-encoded object names
* Backport #18632: kraken: rbd: [qa] crash in journal-enabled fsx run
* Backport #18659: kraken: /home/dzafman/ceph/src/osd/PG.h: 441: FAILED assert(needs_recovery_map.count(hoid))
* Backport #18668: kraken: [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade-kraken-distro-basic-smithi
* Backport #18677: kraken: OSD metadata reports filestore when using bluestore
* Backport #18678: kraken: failed to reconnect caps during snapshot tests
* Backport #18682: kraken: mon: 'osd crush move ...' doesn't work on osds
* Backport #18700: kraken: client: fix the cross-quota rename boundary check conditions
* Backport #18703: kraken: Prevent librbd from blacklisting the in-use librados client
* Backport #18706: kraken: fragment space check can cause replayed request to fail
* Backport #18707: kraken: failed filelock.can_read(-1) assertion in Server::_dir_is_nonempty
* Backport #18709: kraken: multisite: sync status reports master is on a different period
* Backport #18711: kraken: slave zonegroup cannot enable bucket versioning
* Backport #18713: kraken: radosgw-admin period update reverts deleted zonegroup
* Backport #18721: kraken: systemd restarts Ceph mon too quickly after failing to start
* Backport #18722: kraken: bluestore: full osd will not start; _do_alloc_write failed to reserve 0x10000, etc.
* Backport #18723: kraken: osd: calc_clone_subsets misuses try_read_lock vs missing
* Backport #18769: kraken: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
* Backport #18771: kraken: rbd: Improve compatibility between librbd + krbd for the data pool
* Backport #18772: kraken: rgw crashes when updating period with placement group
* Backport #18776: kraken: Qemu crash triggered by network issues
* Backport #18777: kraken: rbd --pool=x rename y z does not work
* Backport #18780: kraken: radosgw swift: error messages: spurious newline after http body causes weird errors
* Backport #18793: kraken: Client message throttles are not changeable without restart
* Backport #18805: kraken: "ERROR: Export PG's map_epoch 3901 > OSD's epoch 3281" in upgrade:infernalis-x-jewel-distro-basic-vps
* Backport #18810: kraken: librgw: RGWLibFS::setattr fails on directories
* Backport #18814: kraken: PrimaryLogPG: up_osd_features used without the requires_kraken flag in kraken
* Backport #18822: kraken: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD.ObjectMapConsistentSnap
* Backport #18842: kraken: kernel client feature mismatch on latest master test runs
* Backport #18843: kraken: rgw: usage stats and quota are not operational for multi-tenant users
* Backport #18849: kraken: remove qa/suites/buildpackages
* Backport #18870: kraken: tests: SUSE yaml facets in qa/distros/all are out of date
* Backport #18892: kraken: Incomplete declaration for ContextWQ in librbd/Journal.h
* Backport #18894: kraken: Possible lockdep false alarm for ThreadPool lock
* Backport #18896: kraken: should parse the URL into an HTTP host to compare with the container referer ACL
* Backport #18898: kraken: no http referer info in container metadata dump in swift API
* Backport #18899: kraken: Test failure: test_open_inode
* Backport #18902: kraken: librgw: path segments neglect to ref parents
* Backport #18904: kraken: rgw: first write also tries to read object
* Backport #18907: kraken: "osd marked itself down" will not be recognised if host runs mon + osd on shutdown/reboot
* Backport #18909: kraken: rgw: the swift container acl does not support field ".ref"
* Backport #18910: kraken: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped correctly
* Bug #18926: Why do OSDs not release memory?
* Backport #18947: kraken: rbd-mirror: additional test stability improvements
* Backport #18950: kraken: mds/StrayManager: avoid reusing deleted inode in StrayManager::_purge_stray_logged
* Backport #18952: kraken: segfault in ceph-osd --flush-journal
* Backport #18956: kraken: ceph-disk: bluestore --setgroup incorrectly set with user
* Backport #18970: kraken: rbd: AdminSocket::bind_and_listen failed after rbd-nbd mapping
* Backport #18973: kraken: ceph-disk does not support cluster names other than 'ceph'
* Backport #18985: kraken: rgw: sending Content-Length in 204 and 304 responses should be controllable
* Backport #18997: kraken: ceph-disk prepare gets wrong group name in bluestore
* Backport #18999: kraken: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up(osd_with_shard))" in rados/upgrade
* Tasks #19009: kraken v11.2.1
* Backport #19037: kraken: rbd-mirror: deleting a snapshot during sync can result in read errors
* Backport #19045: kraken: buffer overflow in test LibCephFS.DirLs
* Backport #19047: kraken: RGW leaking data
* Backport #19049: kraken: multisite: some yields in RGWMetaSyncShardCR::full_sync() resume in incremental_sync()
* Backport #19144: kraken: rgw_file: FHCache residence check should be exhaustive
* Backport #19146: kraken: rgw: a few cases where rgw_obj is incorrectly initialized
* Backport #19147: kraken: rgw daemon's DUMPABLE flag is cleared by setuid, preventing coredumps
* Backport #19149: kraken: rgw_file: ensure valid_s3_object_name for directories
* Backport #19154: kraken: rgw_file: fix recycling of invalid mkdir handles
* Backport #19156: kraken: rgw: typo in rgw_admin.cc
* Backport #19157: kraken: RGW health check errors out incorrectly
* Backport #19160: kraken: multisite: RGWMetaSyncShardControlCR gives up on EIO
* Backport #19162: kraken: rgw_file: fix marker computation
* Backport #19164: kraken: radosgw-admin: add the 'object stat' command to usage
* Backport #19166: kraken: rgw_file: "exact match" invalid for directories, in RGWLibFS::stat_leaf()
* Backport #19168: kraken: rgw_file: RGWReaddir (and cognate ListBuckets request) don't enumerate multi-segment directories
* Backport #19170: kraken: rgw_file: allow setattr on placeholder (common_prefix) directories
* Backport #19172: kraken: rgw: S3 create bucket should not respond in JSON
* Backport #19173: kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
* Backport #19175: kraken: swift API: cannot disable object versioning with empty X-Versions-Location
* Backport #19178: kraken: anonymous user's error code when getting an object is not consistent with Swift
* Backport #19180: kraken: rgw: 204 No Content is returned when putting an ill-formed Swift ACL
* Backport #19181: kraken: mon: force_create_pg could leave pg stuck in creating state
* Backport #19209: kraken: pre-jewel "osd rm" incrementals are misinterpreted
* Backport #19212: kraken: rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
* Backport #19227: kraken: rbd: Enabling mirroring for a pool with clones may fail
* Backport #19229: kraken: librgw: objects created from s3 apis are not visible from nfs mount point
* Backport #19315: kraken: osd: pg log split does not rebuild index for parent or child
* Backport #19322: kraken: multisite: possible infinite loop in RGWFetchAllMetaCR
* Backport #19324: kraken: rbd: [api] temporarily restrict (rbd_)mirror_peer_add from adding multiple peers
* Backport #19326: kraken: bluestore bdev: flush no-op optimization is racy
* Backport #19327: kraken: bluefs: missing flush_bdev in fsync path
* Backport #19329: kraken: osd_snap_trim_sleep option does not work
* Backport #19331: kraken: upgrade to multisite v2 fails if there is a zone without zone info
* Backport #19333: kraken: brag fails to count "in" mds
* Backport #19335: kraken: MDS heartbeat timeout during rejoin, when working with a large number of caps/inodes
* Backport #19336: kraken: rbd: refuse to use an ec pool that doesn't support overwrites
* Backport #19340: kraken: An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed
* Backport #19342: kraken: 'period update' does not remove short_zone_ids of deleted zones
* Backport #19351: kraken: RadosImport::import should return an error if Rados::connect fails
* Backport #19354: kraken: multisite: some 'radosgw-admin data sync' commands hang
* Backport #19356: kraken: when converting region_map we need to use rgw_zone_root_pool
* Backport #19391: kraken: two instances of omap_digest mismatch
* Backport #19460: kraken: rpm spec file mentions non-existent ceph-create-keys systemd unit file, causing ceph-mon units to not be enabled via preset
* Backport #19462: kraken: rgw: admin ops: fix the quota section
* Backport #19465: kraken: monitor creation with IPv6 public network segfaults
* Backport #19467: kraken: [api] is_exclusive_lock_owner doesn't detect that it has been blacklisted
* Backport #19470: kraken: rgw_file: leaf objects (which store Unix attrs) can be deleted when children exist
* Backport #19471: kraken: rgw_file: RGWFileHandle dtor must also cond-unlink from FHCache
* Backport #19472: kraken: cannot cover the object expiration
* Backport #19475: kraken: rgw: multisite: EPERM when trying to read SLO objects as system/admin user
* Backport #19477: kraken: rgw: S3 v4 authentication issue with X-Amz-Expires
* Backport #19479: kraken: rgw: "zonegroupmap set" does not work
* Backport #19480: kraken: ceph degraded and misplaced status output inaccurate
* Backport #19483: kraken: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
* Backport #19485: kraken: Newly added OSD always down when full flag is set
* Backport #19496: kraken: Objecter::epoch_barrier isn't respected in _op_submit()
* Backport #19524: kraken: rgw: 'radosgw-admin zone create' command with specified zone-id creates a zone with a different id
* Backport #19526: kraken: rgwfs hung due to missing unlock within unlink operation
* Backport #19534: kraken: rgw: error parsing xml when getting bucket lifecycle
* Backport #19537: kraken: ceph-disk list reports mount error for OSD having mount options with SELinux context
* Backport #19544: kraken: ceph-disk: add 'fix' subcommand (kraken backport)
* Backport #19560: kraken: objecter: full_try behavior not consistent with osd
* Backport #19561: kraken: "api_misc: [ FAILED ] LibRadosMiscConnectFailure.ConnectFailure"
* Backport #19564: kraken: Ceph Xenial packages: ceph-base is missing a dependency on psmisc
* Backport #19573: kraken: rgw: response header of Swift API returned by radosgw does not contain "x-openstack-request-id", but Swift returns it
* Backport #19574: kraken: rgw: unsafe access in RGWListBucket_ObjStore_SWIFT::send_response()
* Bug #19593: purge queue and standby replay mds
* Backport #19608: kraken: multisite: fetch_remote_obj() gets wrong version when copying from remote
* Backport #19609: kraken: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return correct value
* Backport #19611: kraken: Issues with C API image metadata retrieval functions
* Backport #19614: kraken: multisite: rest api fails to decode large period on 'period commit'
* Backport #19616: kraken: multisite: bucket zonegroup redirect not working
* Backport #19618: kraken: common/LogClient.cc: 310: FAILED assert(num_unsent <= log_queue.size())
* Backport #19620: kraken: MDS server crashes due to inconsistent metadata
* Backport #19621: kraken: rbd-nbd: add signal handler
* Backport #19622: kraken: hammer client generated misdirected op against jewel cluster
* Backport #19647: kraken: ceph-disk: directory-backed OSDs do not start on boot
* Backport #19659: kraken: upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in kraken 11.2.1 integration testing
* Backport #19661: kraken: rgw_file: fix readdir after dir-change
* Backport #19663: kraken: rgw_file: fix event expire check, don't expire directories being read
* Backport #19664: kraken: C_MDSInternalNoop::complete doesn't free itself
* Backport #19667: kraken: fs: the mount point breaks off when an MDS switch happens
* Backport #19669: kraken: MDS goes readonly writing backtrace for a file whose data pool has been removed
* Backport #19670: kraken: logrotate is missing from debian package (kraken, master)
* Backport #19672: kraken: MDS assert failed when shutting down
* Backport #19674: kraken: cephfs: MDS crashes after setting about 400 64KB xattr kv pairs on a file
* Backport #19676: kraken: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeClient)
* Backport #19678: kraken: Jewel ceph-fuse does not recover after lost connection to MDS
* Backport #19680: kraken: MDS: damage reporting by ino number is useless
* Backport #19685: kraken: Give requested scrubs a higher priority
* Backport #19693: kraken: [test] test_notify.py: rbd.InvalidArgument: error updating features for image test_notify_clone2
* Backport #19702: kraken: osd/PGLog.cc: 1047: FAILED assert(oi.version == i->first)
* Backport #19704: civetweb-worker segmentation fault
* Backport #19710: kraken: Enable MDS to start when session ino info is corrupt
* Backport #19723: kraken: rgw_file: introduce rgw_lookup type hints
* Backport #19725: kraken: RGW S3 v4 authentication issue with X-Amz-Expires
* Backport #19759: kraken: multisite: after CreateBucket is forwarded to master, local bucket may use different value for bucket index shards
* Backport #19760: kraken: osd: leaked MOSDMap
* Backport #19763: kraken: non-local cephfs quota changes not visible until some IO is done
* Backport #19766: kraken: rgw: when uploading objects continuously to a versioned bucket, some objects will not sync
* Backport #19776: kraken: multisite: realm rename does not propagate to other clusters
* Backport #19777: kraken: rgw: implement support for OS-REVOKE extension of OpenStack Identity API v3
* Backport #19794: kraken: [test] test_notify.py: assert(not image.is_exclusive_lock_owner()) on line 147
* Backport #19807: kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
* Backport #19809: kraken: APIs to support Ragweed suite
* Bug #19821: apt-purge ceph-mon, apt-purge ceph-osd fails
* Backport #19833: kraken: Cannot delete some snapshots after upgrade from jewel to kraken
* Backport #19837: kraken: rgw: S3 object uploads using AWSv4's multi-chunk mode hang RadosGW
* Backport #19839: kraken: reduce log level of 'storing entry at' in cls_log
* Backport #19840: kraken: civetweb frontend segfaults in Luminous
* Backport #19841: kraken: clean up min/max span warning
* Backport #19843: kraken: Add custom user data support in bucket index
* Backport #19845: kraken: write to cephfs mount hangs, ceph-fuse and kernel
* Backport #19872: kraken: [rbd-mirror] failover and failback of unmodified image results in split-brain
* Backport #19916: kraken: osd/OSD.h: 706: FAILED assert(removed) in PG::unreg_next_scrub
* Backport #19928: kraken: mon crash on shutdown, lease_ack_timeout event
* Backport #20010: kraken: ceph-disk: separate ceph-osd --check-needs-* logs
* Backport #20015: kraken: multisite: bi_list() decode failures
* Backport #20022: kraken: rbd-mirror replay fails on attempting to reclaim data to local site (LS) from distant-end after DE promotion
* Backport #20024: kraken: HEALTH_WARN pool rbd pg_num 244 > pgp_num 224 during upgrade
* Backport #20026: kraken: cephfs: MDS became unresponsive when truncating a very large file
* Backport #20028: kraken: Deadlock on two ceph-fuse clients accessing the same file
* Backport #20031: kraken: rgw: Swift's at-root features (/crossdomain.xml, /info, /healthcheck) are broken
* Backport #20033: kraken: osd_scrub_sleep option blocks op thread in jewel and later
* Backport #20034: kraken: ceph-disk: race between partition creation & device node creation
* Backport #20035: kraken: mon: MAX AVAIL calculation does not factor in mon_osd_full_ratio
* Backport #20125: kraken: Can't repair when there is only an attr object error
* Backport #20147: kraken: rgw: 'gc list --include-all' command loops infinitely over the first 1000 items
* Backport #20150: kraken: ceph-disk fails if OSD udev rule triggers prior to mount of /var
* Backport #20154: kraken: Potential IO hang if image is flattened while read request is in-flight
* Backport #20156: kraken: fix: rgw crash caused by shard id out of range when listing data log
* Backport #20158: kraken: rgw_file: handle chunked readdir
* Backport #20173: kraken: PR #14886 creates a SafeTimer thread per PG
* Bug #20177: RGW lifecycle not expiring objects due to permissions on lc pool
* Backport #20191: kraken: SELinux denials (the files in /var/log/ceph get mislabeled)
* Backport #20193: kraken: Speed up upgrade from non-SELinux-enabled ceph to an SELinux-enabled one
* Backport #20195: kraken: rgw_file: restore (corrected) fix for dir "partial match" (return of FLAG_EXACT_MATCH)
* Backport #20261: kraken: 'radosgw-admin usage show' listing 0 bytes_sent/received
* Backport #20263: kraken: "datalog trim" does not work as expected
* Backport #20264: kraken: [cli] ensure positional arguments exist before casting
* Backport #20266: kraken: [api] is_exclusive_lock_owner shouldn't return -EBUSY
* Backport #20268: kraken: wrong content returned when downloading an object with a specific range while compression is enabled
* Backport #20269: kraken: wrong object size after copy of uncompressed multipart objects
* Backport #20271: kraken: LibRadosMiscConnectFailure.ConnectFailure hang
* Backport #20293: kraken: multisite: log_meta on secondary zone causes continuous loop of metadata sync
* Backport #20315: kraken: mon: fail to form large quorum; msg/async busy loop
* Backport #20345: kraken: make check fails with Error EIO: load dlopen(build/lib/libec_FAKE.so): build/lib/libec_FAKE.so: cannot open shared object file: No such file or directory
* Backport #20347: kraken: rgw: meta sync thread crash at RGWMetaSyncShardCR
* Backport #20351: kraken: test_librbd_api.sh fails in upgrade test
* Backport #20363: kraken: VersionIdMarker and NextVersionIdMarker are not returned when listing object versions
* Backport #20365: kraken: mon: osd crush set crushmap needs sanity check
* Backport #20366: kraken: kraken-bluestore 11.2.0 memory leak issue
* Backport #20405: kraken: Lifecycle thread will still handle the bucket even if it has been removed
* Backport #20443: kraken: osd: client IOPS drops to zero frequently
* Backport #20487: kraken: make check fails due to missing bc in ceph-helper.sh
* Backport #20495: Bluestore memory leak (uninit)
* Backport #20497: kraken: MaxWhileTries: reached maximum tries (105) after waiting for 630 seconds from radosbench.yaml
* Backport #20499: kraken: tests: ObjectStore/StoreTest.OnodeSizeTracking/2 fails on bluestore
* Backport #20500: kraken: src/test/pybind/test_cephfs.py fails
* Backport #20517: kraken: [rbd CLI] map with cephx disabled results in error message
* Backport #20520: kraken: rados/upgrade rgw swift test fails
* Backport #20522: kraken: FAILED assert(object_contexts.empty()) (live on master only from Jan-Feb 2017, all other instances are different)
* Backport #20523: kraken: on_flushed: object ... obc still alive
* Backport #20634: kraken: [test] rbd-mirror teuthology task doesn't start daemon in foreground mode
* Backport #20638: kraken: EPERM: cannot set require_min_compat_client to luminous: 6 connected client(s) look like jewel (missing 0x800000000200000)
* Backport #20672: kraken: Bad status warning for mon_warn_osd_usage_min_max_delta
* Bug #20843: assert(i->prior_version == last) when a MODIFY entry follows an ERROR entry
* Backport #20881: Thrasher: update pgp_num of all expanded pools if not yet
* Backport #20884: kraken: bluestore: allocator fails for 0x80000000 allocations
* Bug #21021: Failed assert starting OSD after mark_unfound_lost caused process crash
* Bug #21038: Upgrading from jewel to kraken - mgr create throws EACCESS: access denied
* Bug #21068: ceph-disk deploy bluestore fails to create correct block symlink for multipath devices