# v10.2.6

* Backport #13512: If blkid hangs, ceph-osd appears to start but does not come up on mon, and gdb can't backtrace (aka "2 of 4 OSDs are up")
* Backport #16871: jewel: Have a flavor of bucket deletion in radosgw-admin to bypass garbage collection
* Cleanup #16985: Improve error reporting from "rbd feature enable/disable"
* Backport #17057: jewel: The "request lock" RPC message might be incorrectly ignored
* Backport #17119: jewel: multisite: assert(next) failed in RGWMetaSyncCR
* Backport #17134: jewel: FAILED assert(m_image_ctx.journal == nullptr)
* Backport #17162: jewel: rgw multisite: doesn't retry RGWFetchAllMetaCR on failed lease
* Backport #17208: jewel: rgw: setting rgw_swift_url_prefix = "/" doesn't work as expected
* Backport #17242: jewel: ImageWatcher: double unwatch of failed watch handle
* Backport #17243: jewel: Deadlock in several librbd teuthology test cases
* Backport #17261: jewel: Potential seg fault when blacklisting a client
* Backport #17313: jewel: rgw-ldap: add ldap lib to rgw lib deps based on build config
* Backport #17334: jewel: crushtool --compile creates output despite missing item
* Backport #17340: jewel: exclusive_lock::AcquireRequest doesn't handle -ERESTART on image::RefreshRequest
* Backport #17342: jewel: teuthology: assertion failure in a radosgw-admin related task
* Backport #17343: jewel: radosgw consumes too much CPU time synchronizing metadata or data between multisite
* Backport #17472: jewel: rpm: /etc/ceph/rbdmap is packaged with executable access rights
* Backport #17478: jewel: MDS goes damaged on blacklist (failed to read JournalPointer: -108 ((108) Cannot send after transport endpoint shutdown))
* Backport #17507: jewel: multisite: 'radosgw-admin period prepare' is obsolete
* Backport #17512: jewel: multisite: metadata master can get the wrong value for 'oldest_log_period'
* Backport #17514: jewel: rgw: bucket check remove _multipart_ prefix
* Backport #17582: jewel: monitor assertion failure when deactivating mds in (invalid) fscid 0
* Backport #17583: jewel: utime.h: fix timezone issue in round_to_* funcs.
* Backport #17600: jewel: common: Improve Linux dcache hash algorithm
* Backport #17601: jewel: mon: health does not report pgs stuck in more than one state
* Backport #17615: jewel: mds: false "failing to respond to cache pressure" warning
* Backport #17617: jewel: [cephfs] fuse client crash when adding a new osd
* Backport #17666: jewel: OSD scrubs same PG over and over
* Backport #17674: jewel: set_acl fails for objects beginning and ending in underscore
* Backport #17679: jewel: monitor should send monmap updates when the monmap is updated
* Backport #17697: jewel: MDS long-time blocked ops; ceph-fuse locks up with getattr of file
* Backport #17705: jewel: ceph_volume_client: recovery of partial auth update is broken
* Backport #17706: jewel: multimds: mds entering up:replay and processing down mds aborts
* Backport #17708: jewel: 'radosgw-admin bucket sync init' crashes
* Backport #17709: jewel: multisite: coroutine deadlock assertion on error in FetchAllMetaCR
* Backport #17710: jewel: multisite: race between ReadSyncStatus and InitSyncStatus leads to EIO errors
* Backport #17712: jewel: 'rbd du' of missing image does not return error
* Backport #17720: jewel: MDS: false "failing to respond to cache pressure" warning
* Backport #17721: jewel: osd_max_backfills default has changed; documentation should reflect that
* Backport #17732: jewel: ceph daemons' DUMPABLE flag is cleared by setuid, preventing coredumps
* Backport #17733: jewel: multisite: after finishing full sync on a bucket, incremental sync starts over from the beginning
* Backport #17735: jewel: RGW will not list Argonaut-era bucket via HTTP (but radosgw-admin works)
* Backport #17754: jewel: ceph-create-keys loops forever
* Backport #17756: jewel: rgw: bucket resharding
* Backport #17763: jewel: TestLibRBD.DiscardAfterWrite doesn't handle "rbd_skip_partial_discard = true"
* Backport #17765: jewel: collection_list shadow return value
* Backport #17766: jewel: Exclusive lock improperly initialized on read-only image when using snap_set API
* Backport #17767: jewel: rbd-mirror: disabling mirroring with option '--force' makes RBD images inaccessible
* Backport #17769: jewel: disable virtual hosting of buckets when no hostnames are configured
* Backport #17783: jewel: rgw: json encode/decode of RGWBucketInfo missing index_type field
* Backport #17784: jewel: osd crashes when the "radosgw-admin bi list --max-entries=1" command is running
* Backport #17785: jewel: ldap: unhandled exception from rgw::from_base64() in RGW_Auth_S3::authorize_v2()
* Backport #17838: jewel: leak in RGWFetchAllMetaCR
* Backport #17839: jewel: rgw: the value of total_time is wrong in the result of 'radosgw-admin log show'
* Backport #17841: jewel: mds fails to respawn if executable has changed
* Backport #17842: jewel: Remove the runtime dependency on lsb_release
* Backport #17844: jewel: rbd-nbd: disallow mapping images >2TB in size
* Backport #17845: jewel: rbd-mirror: snap protect of non-layered image results in split-brain
* Backport #17846: jewel: RGWHTTPManager deadlock when HAVE_CURL_MULTI_WAIT=0
* Tasks #17851: jewel v10.2.6
* Backport #17875: jewel: rgw file: remove spurious mount entries for RGW buckets
* Backport #17876: jewel: osd: update_log_missing does not order correctly with osd_ops
* Backport #17877: jewel: FileStore: fiemap cannot be fully retrieved in xfs when the number of extents > 1364
* Backport #17881: jewel: Add config option to disable new scrubs during recovery
* Backport #17884: jewel: OSDs wrongly marked OUT after monitor failover
* Backport #17885: jewel: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
* Backport #17886: jewel: multisite: ECANCELED & 500 error on bucket delete
* Backport #17895: tools: snapshotted RBD extent objects can't be manually evicted from a cache tier
* Backport #17903: jewel: tests: flake8 3.1.1 behavior changed
* Backport #17904: jewel: Error EINVAL: removing mon.a at 172.21.15.16:6789/0, there will be 1 monitors
* Backport #17908: jewel: rgw: the response element "X-Timestamp" of swift stat bucket api is zero
* Backport #17909: jewel: ReplicatedBackend::build_push_op: add a second config to limit omap entries/chunk independently of object data
* Backport #17926: jewel: ceph-disk --dmcrypt create must not require admin key
* Backport #17953: jewel: restarting an osd twice quickly enough with no other map changes can leave it running, but not up
* Backport #17956: jewel: Clients without pool-changing caps shouldn't be allowed to change pool_namespace
* Backport #17961: rgw, jewel: TempURL fails if rgw_keystone_implicit_tenants is enabled
* Backport #17969: jewel: multisite upgrade from hammer -> jewel ignores rgw_region_root_pool
* Backport #17974: jewel: ceph/Client segfaults in handle_mds_map when switching mds
* Backport #18007: jewel: ceph-disk: ceph-disk@.service races with ceph-osd@.service
* Backport #18008: jewel: Cannot create deep directories when caps contain "path=/somepath"
* Backport #18009: jewel: ceph-disk: udev permission race with dm
* Backport #18010: jewel: Cleanly reject "session evict" command when in replay
* Backport #18011: jewel: test fails due to "The UNIX domain socket path"
* Backport #18012: jewel: qa/workunits/rbd: improvements for rbd-mirror tests
* Backport #18024: jewel: "FAILED assert(m_processing == 0)" while running test_lock_fence.sh
* Backport #18025: jewel: rgw: gc records empty entries
* Backport #18026: jewel: ceph_volume_client.py: Error: Can't handle arrays of non-strings
* Backport #18060: jewel: timeout during ceph-disk trigger due to /var/lock/ceph-disk flock contention
* Backport #18061: jewel: rgw: fix deleting objects whose names begin and end with underscores from one bucket using the POST method of the JS SDK
* Backport #18098: jewel: rgw: Swift's "prefix" parameter is not supported on account listing
* Backport #18100: jewel: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
* Backport #18101: jewel: Add workaround for upgrade issues for older jewel versions
* Backport #18102: jewel: rgw: Unable to commit period zonegroup change
* Backport #18103: jewel: truncate can cause unflushed snapshot data loss
* Backport #18104: jewel: ceph osd down detection behaviour
* Backport #18107: jewel: multisite: failed assertion in 'radosgw-admin bucket sync status'
* Backport #18108: jewel: msg/simple/Pipe: error decoding addr
* Backport #18110: jewel: diff calculation can hide parent extents when examining first snapshot in clone
* Backport #18112: jewel: multisite requests failing with '400 Bad Request' with civetweb 1.8
* Backport #18120: jewel: fix compilation failure when building with --disable-server
* Backport #18133: jewel: undefined references when building unit tests with --with-xio
* Backport #18135: jewel: make check fails when hostname not properly set, but run-make-check.sh does not check this
* Backport #18136: jewel: qa: rbd-mirror workunit false negative when waiting for image deletion after resync
* Backport #18183: jewel: cephfs metadata pool: deep-scrub error "omap_digest != best guess omap_digest"
* Backport #18190: jewel: rbd-mirror: gmock warnings in bootstrap request unit tests
* Backport #18191: jewel: "rbd mirror image resync" does not force resync after split-brain
* Backport #18192: jewel: standby-replay daemons can sometimes miss events
* Backport #18194: jewel: rbd-mirror split-brain issues should be clearly visible in mirror status
* Backport #18195: jewel: cephfs: fix missing ll_get for ll_walk
* Backport #18199: jewel: build/ops: install-deps.sh based on /etc/os-release
* Backport #18212: jewel: rgw: radosgw server aborts when accepting a CORS request with short origin
* Backport #18214: jewel: add max_part and nbds_max options in rbd nbd map, in order to keep consistent with
* Backport #18216: jewel: rgw-admin: missing command to modify placement targets
* Backport #18217: jewel: rgw sends omap_getvals with (u64)-1 limit
* Backport #18219: jewel: msg: upper 32 bits of message sequence get lost
* Backport #18221: jewel: dumpling, hammer, jewel: qemu/tests/qemu-iotests/077 fails
* Backport #18270: jewel: add image id block name prefix APIs
* Backport #18274: jewel: Memory leaks in object_list_begin and object_list_end
* Backport #18276: jewel: rbd-nbd: invalid error code for "failed to read nbd request" messages
* Backport #18278: jewel: RBD diff got SIGABRT with "--whole-object" for RBD whose parent also has the fast-diff feature enabled
* Backport #18280: jewel: mon: osd flag health message is misleading
* Backport #18282: jewel: monitor cannot start because of "FAILED assert(info.state == MDSMap::STATE_STANDBY)"
* Backport #18284: jewel: Need CLI ability to add, edit and remove omap values with binary keys
* Backport #18285: jewel: partition function should be enabled when loading nbd.ko for rbd-nbd
* Backport #18286: jewel: multisite: coroutine deadlock in RGWMetaSyncCR after ECANCELED errors
* Backport #18288: jewel: rbd-mirror: image sync object map reload logs message
* Backport #18290: jewel: objectmap does not show object existence correctly
* Backport #18307: path-restricted cephx caps not working correctly
* Backport #18308: ceph-fuse not clearing setuid/setgid bits on chown
* Backport #18312: 'make dist' fails because tar uses `--format=ustar`
* Backport #18320: jewel: rbd status: json format has duplicated/overwritten key
* Backport #18323: jewel: JournalMetadata flooding with errors when being blacklisted
* Backport #18337: jewel: Expose librbd API methods to directly acquire and release the exclusive lock (see the sketch after this list)
* Backport #18340: jewel: rgw: more descriptive error message when failing to read zone/realm/zg info
* Backport #18348: jewel: rgw ldap: enforce simple_bind w/LDAPv3 redux
* Backport #18349: jewel: AWS S3 Version 4 signatures sometimes fail
* Backport #18376: jewel: rados/upgrade test fails with git clone https://github.com/ceph/ceph.git /home/ubuntu/cephtest/clone.client.0 ; cd -- /home/ubuntu/cephtest/clone.client.0 && git checkout jewel
* Backport #18379: jewel: msg/simple/SimpleMessenger.cc: 239: FAILED assert(!cleared)
* Backport #18386: jewel: Cannot clone ceph/s3-tests.git (missing branch)
* Backport #18391: jewel: qa/workunits/rbd/test_lock_fence.sh fails (regression)
* Backport #18402: jewel: tests: objecter_requests workunit fails on wip branches
* Backport #18404: jewel: cache tiering: base pool last_force_resend not respected (racing read got wrong version)
* Backport #18406: jewel: Cannot reserve CentOS 7.2 smithi machines
* Backport #18413: jewel: lookup of /.. in jewel returns -ENOENT
* Backport #18417: jewel: leveldb corruption leads to "Operation not permitted not handled" and assert
* Backport #18433: jewel: rados bench seq must verify the hostname
* Backport #18434: jewel: Improve error reporting from "rbd feature enable/disable"
* Backport #18450: jewel: [teuthology] update "rbd/singleton/all/formatted-output.yaml" to support ceph-ci
* Backport #18453: jewel: [iscsi]: need an API to break the exclusive lock
* Backport #18455: jewel: Attempting to remove an image w/ incompatible features results in partial removal
* Backport #18457: jewel: selinux: Allow ceph to manage tmp files
* Backport #18462: jewel: Decode errors on backtrace will crash MDS
* Backport #18466: jewel: install-deps.sh doesn't run on SLES
* Backport #18485: jewel: osd_recovery_incomplete: failed assert not manager.is_recovered()
* Backport #18494: jewel: [rbd-mirror] sporadic image replayer shutdown failure
* Backport #18498: jewel: rgw: Realm set does not create a new period
* Backport #18504: jewel: crash adding snap to purged_snaps in ReplicatedPG::WaitingOnReplicas (part 2)
* Backport #18512: build/ops: compilation error when --with-radosgw=no
* Backport #18520: jewel: speed up readdir by skipping unwanted dn
* Backport #18526: jewel: rgw: implement swift /info api
* Backport #18545: jewel: [teuthology] update Ubuntu image URL after ceph.com refactor
* Backport #18547: jewel: multisite: segfault after changing value of rgw_data_log_num_shards
* Backport #18550: jewel: 'metadata_set' API operation should not change global config setting
* Backport #18551: jewel: ceph-fuse crash during snapshot tests
* Backport #18553: jewel: peon wrongly deletes routed pg stats op before receiving pg stats ack
* Backport #18556: jewel: Potential race when removing two-way mirroring image
* Backport #18558: jewel: rbd bench-write will crash if "--io-size" is 4G
* Backport #18559: jewel: multisite: memory leak from RGWSimpleRadosLockCR::send_request()
* Backport #18560: jewel: multisite: memory leak in RGWCloneMetaLogCoroutine::state_store_mdlog_entries()
* Backport #18563: jewel: leak from RGWMetaSyncShardCR::incremental_sync
* Backport #18565: jewel: MDS crashes on missing metadata object
* Backport #18569: jewel: radosgw valgrind "invalid read size 4" in RGWGetObj
* Backport #18570: jewel: Python Swift client commands in Quick Developer Guide don't match configuration in vstart.sh
* Backport #18582: Issue with upgrade from 0.94.9 to 10.2.5
* Backport #18603: jewel: cephfs test failures (ceph.com/qa is broken, should be download.ceph.com/qa)
* Backport #18605: jewel: ceph-disk prepare writes osd log 0 with root owner
* Backport #18608: jewel: Removing a clone that fails to open its parent might leave a dangling rbd_children reference
* Backport #18611: jewel: client: segfault on ceph_rmdir path "/"
* Backport #18615: jewel: segfault in handle_client_caps
* Backport #18633: jewel: [qa] crash in journal-enabled fsx run
* Backport #18634: jewel: RGWRados::get_system_obj() sends unnecessary stat request before read
* Backport #18652: jewel: Test failure: test_session_reject (tasks.cephfs.test_sessionmap.TestSessionMap)
* Backport #18672: jewel: teuthology: qemu-iotests on xenial fails with unexpected console output
* Backport #18676: jewel: librgw: objects created from s3 apis are not visible from nfs mount point
* Backport #18679: jewel: failed to reconnect caps during snapshot tests
* Backport #18684: jewel: multisite: sync status reports master is on a different period
* Backport #18708: jewel: failed filelock.can_read(-1) assertion in Server::_dir_is_nonempty
* Backport #18710: jewel: slave zonegroup cannot enable bucket versioning
* Backport #18712: jewel: radosgw-admin period update reverts deleted zonegroup
* Backport #18719: tests: lfn-upgrade-hammer: cluster stuck in HEALTH_WARN after last upgraded node reboots
* Backport #18720: jewel: systemd restarts Ceph mon too quickly after it fails to start
* Backport #18729: jewel: ceph-disk: error on _bytes2str
* Backport #18758: Various upgrade/hammer-x failures in jewel 10.2.6 integration testing
* Backport #18773: jewel: rgw crashes when updating period with placement group
* Backport #18779: jewel: leak in RGWAsyncRadosProcessor::handle_request
* Backport #18804: jewel: "ERROR: Export PG's map_epoch 3901 > OSD's epoch 3281" in upgrade:infernalis-x-jewel-distro-basic-vps
* Backport #18812: jewel: hammer client generated misdirected op against jewel cluster
* Backport #18827: jewel: RGW leaking data
* Backport #18833: jewel: rgw: usage stats and quota are not operational for multi-tenant users
* Backport #18848: jewel: remove qa/suites/buildpackages
* Backport #18869: jewel: tests: SUSE yaml facets in qa/distros/all are out of date
* Backport #18891: jewel: rgw: add option to log custom HTTP headers (rgw_log_http_headers)
* Backport #19004: jewel: tests: qa/suites/upgrade/hammer-x/stress-split: finish thrashing before final upgrade
* Backport #19006: jewel: tests: upgrade/hammer-x/stress-split-erasure-code(-x86_64) breaks symlinks
* Bug #20490: rgw: cannot list buckets when S3 website hosting is enabled and a domain is used
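
Backport #18337 above is an API addition rather than a pure bug fix: it exposes librbd calls for taking and releasing an image's exclusive lock directly, instead of relying on the lock being acquired implicitly on the first write. Below is a minimal sketch of driving those calls through the librbd C API, assuming the `rbd_lock_acquire()`/`rbd_lock_release()` entry points and the `RBD_LOCK_MODE_EXCLUSIVE` constant that this backport introduces; the pool name, image name, and error handling are illustrative only.

```c
#include <rados/librados.h>
#include <rbd/librbd.h>
#include <stdio.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    int r;

    /* Connect with defaults: standard ceph.conf search path, client.admin. */
    if (rados_create(&cluster, NULL) < 0)
        return 1;
    rados_conf_read_file(cluster, NULL);
    if (rados_connect(cluster) < 0)
        return 1;

    /* "rbd" and "test-image" are placeholder pool/image names. */
    if (rados_ioctx_create(cluster, "rbd", &ioctx) < 0) {
        rados_shutdown(cluster);
        return 1;
    }
    if (rbd_open(ioctx, "test-image", &image, NULL) < 0) {
        rados_ioctx_destroy(ioctx);
        rados_shutdown(cluster);
        return 1;
    }

    /* Take the exclusive lock up front instead of waiting for it to be
     * acquired implicitly on the first write request. */
    r = rbd_lock_acquire(image, RBD_LOCK_MODE_EXCLUSIVE);
    printf("rbd_lock_acquire returned %d\n", r);

    if (r == 0) {
        /* ... I/O issued here runs while this client owns the lock ... */
        rbd_lock_release(image);
    }

    rbd_close(image);
    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}
```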
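Explicit acquisition lets management tooling confirm that it owns the lock before issuing maintenance operations; the related request in Backport #18453 asks for a matching API to break the exclusive lock for iSCSI use cases.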