# v10.2.11

* Backport #17445: jewel: list-snap cache tier missing promotion logic (was: rbd cli segfault when doing diff on an image (cache tiering))
* Backport #18277: jewel: multisite: trim data changes log as peer zones catch up
* Backport #18853: jewel: upstart: radosgw-all does not start on boot if ceph-base is not installed
* Backport #19140: jewel: osdc/Objecter: if the osd is full, it should pause read ops that carry the rwordered flag
* Backport #19224: jewel: osd ops (sent and?) arrive at osd out of order
* Backport #20399: jewel: rgw_file: recursive lane lock can occur in LRU drain
* Backport #20410: jewel: rgw: Part's index is not removed when versioning enabled
* Backport #20480: jewel: rgw log includes zero byte sometimes
* Backport #20637: jewel: rbd-mirror: cluster watcher should ignore -EPERM errors when reading the 'rbd_mirroring' object
* Backport #20675: jewel: Addition of online osd 'omap' compaction command
* Backport #20719: jewel: rgw: Truncated objects
* Backport #20824: jewel: rgw: AWSv4 encoding/signature problem, can happen with listobjects marker
* Backport #21032: jewel: osd: default osd_scrub_during_recovery=false
* Backport #21033: jewel: rpm: bump epoch ahead of ceph-common in RHEL base
* Backport #21036: jewel: snapset xattr corruption propagated from primary to other shards
* Backport #21053: jewel: RHEL 7.3 SELinux denials at OSD start
* Backport #21067: jewel: MDS integer overflow fix
* Backport #21117: jewel: osd: osd_scrub_during_recovery only considers primary, not replicas
* Backport #21150: jewel: tests: btrfs copy_clone returns errno 95 (Operation not supported)
* Backport #21205: rgw: bi list entry count incremented on error, distorting error code
* Backport #21239: jewel: test_health_warnings.sh can fail
* Backport #21266: jewel: [cli] rename of non-existent image results in seg fault
* Backport #21290: jewel: [rbd] image-meta list does not return all entries
* Backport #21308: jewel: pre-luminous: aio_read returns erroneous data when rados_osd_op_timeout is set but not reached
* Backport #21440: jewel: Performance: Slow OSD startup, heavy LevelDB activity
* Backport #21442: jewel: [cli] mirror "getter" commands will fail if mirroring has never been enabled
* Backport #21447: jewel: rgw: multisite: Get bucket location on a bucket located in another zonegroup returns "301 Moved Permanently"
* Backport #21478: jewel: systemd: Add explicit Before=ceph.target
* Backport #21481: jewel: "FileStore.cc: 2930: FAILED assert(0 == "unexpected error")" in fs
* Backport #21485: jewel: fix typo in the thread name
* Backport #21489: jewel: qa: failures from pjd fstest
* Backport #21519: jewel: qa: test_client_pin times out waiting for dentry release from kernel
* Backport #21522: jewel: ceph-disk omits "--runtime" when enabling ceph-osd@$ID.service units for device-backed OSDs
* Backport #21546: jewel: rgw file write error
* Backport #21626: jewel: ceph_volume_client: sets invalid caps for existing IDs with no caps
* Backport #21632: jewel: remove region from "INSTALL CEPH OBJECT GATEWAY"
* Backport #21642: jewel: rbd ls -l crashes with SIGABRT
* Backport #21689: jewel: Possible deadlock in 'list_children' when refresh is required
* Backport #21691: jewel: [qa] rbd_mirror_helpers.sh request_resync_image function saves image id to wrong variable
* Backport #21718: jewel: doc fails build with latest breathe
* Backport #21730: jewel: ceph-disk: retry on OSError
* Tasks #21742: jewel v10.2.11
* Backport #21784: jewel: cli/crushtools/build.t sometimes fails in jenkins' "make check" run
check" run * Backport #21786: jewel: OSDMap cache assert on shutdown * Backport #21791: jewel: RGW: Multipart upload may double the quota * Backport #21796: jewel: Ubuntu amd64 client can not discover the ubuntu arm64 ceph cluster * Backport #21864: jewel: ceph-conf: dump parsed config in plain text or as json * Backport #21866: jewel: rbd: rbd crashes during map * Backport #21867: jewel: [object map] removing a large image (~100TB) with an object map may result in loss of OSD * Backport #21872: jewel: ObjectStore/StoreTest.FiemapHoles/3 fails with kstore * Backport #21873: jewel: failed CompleteMultipartUpload request does not release lock * Backport #21911: jewel: Errors in test_librbd_api.sh in upgrade:client-upgrade-jewel-luminous * Backport #21912: jewel: tests: "assert((features & RBD_FEATURE_FAST_DIFF) != 0)" in upgrade:client-upgrade-jewel-luminous * Backport #21915: jewel: [rbd-mirror] peer cluster connections should filter out command line optionals * Backport #21923: jewel: Objecter::C_ObjectOperation_sparse_read throws/catches exceptions on -ENOENT * Backport #21950: jewel: rgw: null instance mtime incorrect when enable versioning * Backport #21951: jewel: multisite: data sync status advances despite failure in RGWListBucketIndexesCR * Backport #21954: jewel: list bucket which enable versioning get wrong result when user marker * Backport #21971: jewel: [journal] tags are not being expired if no other clients are registered * Backport #22013: jewel: osd/ReplicatedPG.cc: recover_replicas: object added to missing set for backfill, but is not in recovering, error! * Backport #22018: jewel: Segmentation fault when starting radosgw after reverting .rgw.root * Backport #22028: jewel: boto3 v4 SignatureDoesNotMatch failure due to sorting of sse-kms headers * Backport #22031: jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty * Backport #22104: jewel: common/config: set rocksdb_cache_size to OPT_U64 * Backport #22170: jewel: *** Caught signal (Segmentation fault) ** in thread thread_name:tp_librbd * Backport #22173: jewel: [rbd-nbd] Fedora does not register resize events * Backport #22175: jewel: possible deadlock in various maintenance operations * Backport #22180: jewel: Swift object expiry incorrectly trims entries, leaving behind some of the objects to be not deleted * Backport #22182: jewel: rgw segfaults after running radosgw-admin data sync init * Backport #22186: jewel: abort in listing mapped nbd devices when running in a container * Backport #22188: jewel: rgw: add cors header rule check in cors option request * Backport #22191: jewel: class rbd.Image discard----OSError: [errno 2147483648] error discarding region * Backport #22209: jewel: 'rbd du' on empty pool results in output of "specified image" * Backport #22236: jewel: ceph-disk flake8 test fails on very old, and very new, versions of flake8 * Backport #22241: jewel: Processes stuck waiting for write with ceph-fuse * Bug #22248: system user can't delete bucket completely * Backport #22259: jewel: rgw: swift anonymous access doesn't work in jewel * Bug #22261: Object remaining in domain_root pool after delete bucket * Bug #22273: Duplicate logrotate entries (AGAIN!) 
* Bug #22352: rados gateway computes wrong AWS4 signature if canonical request contains the tilde (~) character
* Backport #22378: jewel: ceph-fuse: failure to remount in startup test does not handle client_die_on_failed_remount properly
* Backport #22380: jewel: client reconnect gather race
* Backport #22384: jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
* Backport #22394: jewel: librbd: cannot copy all image-metas if we have more than 64 key/value pairs
* Backport #22396: jewel: librbd: cannot clone all image-metas if we have more than 64 key/value pairs
* Backport #22403: jewel: osd: replica read can trigger cache promotion
* Backport #22425: jewel: S3 API: incorrect error code on GET website bucket
* Backport #22449: jewel: Visibility for snap trim queue length
* Backport #22494: jewel: unsigned integer overflow in file_layout_t::get_period
* Backport #22498: jewel: [rbd-mirror] new pools might not be detected
* Bug #22523: Jewel 10.2.10 cephfs journal corrupt; later event jumps to a previous position
* Support #22553: ceph-object-tool cannot remove objects from the metadata pool
* Backport #22569: jewel: doc: clarify path restriction instructions
* Backport #22572: jewel: Stale bucket index entry remains after object deletion
* Backport #22575: jewel: Random 500 errors in Swift PutObject
* Backport #22578: jewel: [test] rbd-mirror split brain test case can have a false-positive failure until teuthology
* Backport #22582: jewel: multisite: 'radosgw-admin sync error list' contains temporary EBUSY errors
* Backport #22584: jewel: rgw: chained cache size is growing above rgw_cache_lru_size limit
* Backport #22589: jewel: rgw: put cors operation returns 500 unknown error (ops are ECANCELED)
* Backport #22590: jewel: ceph.in: tell mds does not understand --cluster
* Backport #22592: jewel: radosgw refuses upload when Content-Type missing from POST policy
* Backport #22594: jewel: [ FAILED ] TestLibRBD.RenameViaLockOwner
* Backport #22636: jewel: s3cmd move object error
* Support #22649: rbd-mirror use of ceph public_network
* Backport #22658: filestore: randomize split threshold
* Backport #22670: jewel: OSD heartbeat timeout due to too many omap entries read in each 'chunk' being backfilled
* Bug #22685: create user, but uid error
* Backport #22689: jewel: client: fails to release to revoking Fc
* Backport #22693: jewel: simplelru does O(n) std::list::size()
* Backport #22695: jewel: mds: fix dump last_sent
* Backport #22700: jewel: client: _rmdir() uses a deleted memory structure (Dentry), leading to a core dump
* Backport #22703: jewel: rgw: offline resharding doesn't seem to preserve bucket acls
* Backport #22709: jewel: rgw: copy_object doubles leading underscore on object names
* Backport #22762: jewel: mds: unbalanced auth_pin/auth_unpin in RecoveryQueue code
* Backport #22764: jewel: mds: crashes because of old pool id in journal header
* Backport #22771: jewel: ceph-objectstore-tool set-size should maybe clear data-digest
* Backport #22772: jewel: user creation can overwrite existing user even if different uid is given
* Backport #22774: jewel: rgw file deadlock on lru evicting
* Backport #22794: jewel: heartbeat peers need to be updated when a new OSD is added to an existing cluster
* Backport #22810: jewel: rbd snap create/rm takes 60s
* Backport #22818: jewel: repair_test fails due to race with osd start
* Backport #22830: jewel: expose --sync-stats via admin api
* Bug #22848: pull the cable; 5 minutes later, plug it back in; pg stuck for a long time until ceph-osd is restarted
* Backport #22861: jewel: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-master-1.19.18-distro-basic-smithi
* Backport #22863: jewel: cephfs-journal-tool: may hit assertion failure due to not shutting down
* Backport #22865: jewel: mds: scrub crash
* Backport #22866: jewel: ceph osd df json output validation reported invalid numbers (-nan) (jewel)
* Backport #22894: jewel: rgw: ECANCELED in rgw_get_system_obj() leads to infinite loop
* Backport #22904: jewel: rgw: copying a part without the http header "x-amz-copy-source-range" will be mistaken for copying an object
* Backport #22912: jewel: ceph-objectstore-tool: "$OBJ get-omaphdr" and "$OBJ list-omap" scan all pgs instead of using specific pg
* Backport #22913: jewel: rbd discard ret value truncated
* Backport #22939: jewel: system user can't delete bucket completely
* Backport #22941: jewel: Double free in rados_getxattrs_next
* Backport #22965: jewel: [rbd-mirror] infinite loop is possible when formatting the status message
* Backport #22968: jewel: Journaler::flush() may flush less data than expected, which causes flush waiter to hang
* Backport #22970: jewel: mds: session reference leak
* Backport #22987: jewel: rgw: user stats increased after bucket reshard
* Backport #23010: jewel: Filestore rocksdb compaction readahead option not set by default
* Backport #23012: jewel: [journal] allocating a new tag after acquiring the lock should use on-disk committed position
* Backport #23021: jewel: the max-uploads parameter doesn't work for List Multipart Uploads
* Backport #23023: jewel: cannot set user quota with specific value
* Backport #23026: jewel: rgw: data sync of versioned objects, note updating bi marker
* Backport #23065: jewel: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
* Backport #23076: jewel: osd: objecter sends out of sync with pg epochs for proxied ops
* Bug #23082: msg/Async drops a message; io blocked for a long time
* Backport #23153: jewel: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
* Backport #23158: jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
* Backport #23171: core dump: recursive lock of RGWKeystoneTokenCache
* Backport #23181: jewel: Can't repair corrupt object info due to bad oid on all replicas
* Bug #23198: osd coredump in ClassHandler::ClassMethod::exec
* Bug #23199: radosgw coredump in RGWGC::process
* Bug #23210: ceph-fuse: exported nfs gets "stale file handle" when mds is migrating
* Backport #23240: jewel: Curl+OpenSSL support in RGW
* Backport #23243: jewel: possible issue with ssl + libcurl
* Backport #23244: jewel: multisite: segfault in radosgw-admin realm pull
* Bug #23255: radosgw records wrong data logs when a bucket is created and deleted repeatedly within seconds
* Backport #23274: jewel: abort early if frontends signal an initialization error
* Backport #23303: jewel: rgw: add radosgw-admin sync error trim to trim sync error log
* Backport #23305: jewel: parent blocks are still seen after a whole-object discard
* Backport #23307: jewel: ceph-objectstore-tool command to trim the pg log
* Backport #23311: jewel: s3 website: some s3tests are failing because redirects include index doc suffix
* Backport #23316: jewel: pool create cmd's expected_num_objects is not correctly interpreted
* Backport #23338: jewel: radosgw-admin: add an option to reset user stats
* Backport #23348: jewel: rgw: inefficient buffer usage for PUTs
* Backport #23356: jewel: client: prevent fallback to remount when dentry_invalidate_cb is true but root->dir is NULL
* Bug #23403: Mon cannot join quorum
* Backport #23411: jewel: Documentation license version is ambiguous
* Backport #23413: jewel: delete type mismatch in CephContext teardown
* Bug #23469: jewel: rgw: radosgw in jewel has not been linked with tcmalloc when selected in configure
* Backport #23486: jewel: scrub errors not cleared on replicas can cause inconsistent pg state when replica takes over primary
* Backport #23508: jewel: test_admin_socket.sh may fail on wait_for_clean
* Backport #23521: jewel: ceph_authtool: add mode option
* Backport #23523: jewel: tests: unittest_pglog timeout
* Backport #23525: jewel: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh may return false positive
* Backport #23543: jewel: rbd-nbd: EBUSY when doing map
* Backport #23546: jewel: "Message too long" error when appending journal
* Backport #23558: jewel: swift test fails because pip no longer accepts --allow-unverified
* Backport #23673: jewel: auth: ceph auth add does not sanity-check caps
* Backport #23721: jewel: radosgw-admin user stats --sync-stats without a user will create an empty object
* Backport #23783: jewel: table of contents doesn't render for luminous/jewel docs
* Support #23839: RGW GC stuck
* Backport #23905: jewel: Deleting a pool with active watch/notify linger ops can result in seg fault
* Backport #23932: jewel: client: avoid second lock on client_lock
* Bug #24007: rados.connect gets a segmentation fault
* Backport #24058: jewel: Deleting a pool with active notify linger ops can result in seg fault
* Bug #24159: Monitor down when a large data store needs compaction, triggered by the ceph tell mon.xx compact command
* Backport #24244: jewel: osd/EC: slow/hung ops in multimds suite test
* Backport #24291: jewel: common: JSON output from rados bench write has typo in max_latency key
* Bug #24324: [Hammer] pg trim got segmentation fault
* Bug #24529: monitor reports empty client io rate when clocks are not synchronized
* Backport #24742: jewel: CLI unit formatting tests are broken
* Support #38125: Multisite ceph cluster storage for data replication
* Bug #38308: segfault when deleting rbd in python bindings
* Bug #38675: Not all OSDs restart after upgrading the ceph cluster servers from Ubuntu 14.04 to Ubuntu 16.04
* Bug #39054: osd push failed because local copy is 4394'133607637
* Bug #40552: rgw: occasionally receive an unexpected 403 return code when putting an object in chunked upload mode
* Bug #40822: rbd-nbd: reproducible crashes on nbd request timeout