# v10.2.3

* Backport #15608: jewel: Ownership of run dir, pid file and asok file not set when deferring permissions drop
* Backport #15700: jewel: rados/test.sh workunit times out on OpenStack
* Backport #15767: jewel: list-inconsistent-obj and list-inconsistent-snapset improvements
* Backport #15768: jewel: FileStore: umount hang because sync thread doesn't exit
* Backport #15806: jewel: New pools have bogus "stuck inactive/unclean" HEALTH_ERR messages until they are first active and clean
* Backport #15841: jewel: s3website: x-amz-website-redirect-location header returns malformed HTTP response
* Backport #15898: jewel: Confusing MDS log message when shut down with stalled journaler reads
* Backport #15954: jewel: rgw: initial slashes are not properly handled in Swift's BulkDelete
* Backport #15960: jewel: rgw: custom metadata aren't camelcased in Swift's responses
* Backport #15964: jewel: realm pull fails when using apache frontend
* Backport #15965: jewel: No Last-Modified, Content-Size and X-Object-Manifest headers if no segments in DLO manifest
* Backport #15967: jewel: rgw: account/container metadata not actually present in a request are deleted during POST through Swift API
* Backport #15968: jewel: ceph status mds output ignores active MDS when there is a standby replay
* Backport #15978: jewel: rgw: leak
* Backport #15998: jewel: most admin operations see a "failed to create default zone" error message
* Backport #15999: jewel: CephFSVolumeClient: read-only authorization for volumes
* Backport #16009: radosgw-admin: failure for user create after upgrade from hammer to jewel
* Backport #16037: jewel: MDSMonitor::check_subs() is very buggy
* Backport #16039: jewel: rgw: updating custom metadata on account/container is broken
* Backport #16040: jewel: rgw: updating CORS/ACLs might not work in some circumstances
* Backport #16041: jewel: mds/StrayManager.cc: 520: FAILED assert(dnl->is_primary())
* Backport #16071: jewel: remove deprecated radosgw-admin temp command
* Backport #16080: jewel: osd: sched_time not actually randomized
* Backport #16081: jewel: mon: "mon metadata" fails when only one monitor exists
* Backport #16083: jewel: mds: wrongly treat symlink inode as normal file/dir when symlink inode is stale on kcephfs
* Backport #16085: jewel: A query on a static large object fails with 404 error
* Backport #16086: jewel: 'radosgw-admin zone modify' clears master_zone if no --master is given
* Backport #16099: jewel: ceph-base requires parted
* Backport #16109: jewel: retry on bucket sync failures
* Backport #16112: jewel: CORS: Access-Control-Allow-Origin should return * when set that way
* Backport #16117: jewel: rgw: aws4 parsing issue
* Backport #16135: jewel: MDS: fix getattr starving setattr
* Backport #16136: jewel: MDSMonitor fixes
* Backport #16148: jewel: Scrub error: 0/1 pinned
* Backport #16150: jewel: crash adding snap to purged_snaps in ReplicatedPG::WaitingOnReplicas
* Backport #16152: jewel: fs: client: fstat cap release
* Backport #16153: jewel: Missing export for rados_aio_get_version in src/include/rados/librados.h
* Backport #16163: jewel: rgw: cannot limit users to not create buckets
* Backport #16182: jewel: backport static sites fixes master->jewel
* Backport #16193: jewel: Fixes for list-inconsistent-*
* Backport #16194: jewel: Earlier versions of jq don't have the -S option
* Backport #16215: jewel: client: crash in unmount when fuse_use_invalidate_cb is enabled
* Backport #16232: jewel: Improve rbd-mirror test case coverage
* Backport #16249: jewel: sparse_read on ec pool should return extents with correct offset
* Backport #16272: jewel: rgw ldap: fix ldap bindpw parsing
* Backport #16299: jewel: mds: fix SnapRealm::have_past_parents_open()
* Backport #16311: jewel: Add STREAMING-AWS4-HMAC-SHA256-PAYLOAD support
* Backport #16312: jewel: selinux denials in RGW
* Backport #16315: jewel: When journaling is enabled, a flush request shouldn't flush the cache
* Backport #16319: jewel: radosgw-admin: inconsistency in uid/email handling
* Backport #16320: jewel: fs: fuse-mounted file systems fail SAMBA CTDB ping_pong rw test with v9.0.2
* Backport #16324: jewel: rbd: create error: (38) Function not implemented in upgrade:client
* Backport #16338: jewel: rados bench: add cleanup message with time it has taken to delete the objects when cleanup starts for written objects
* Backport #16339: jewel: rgw: support size suffixes for --max-size in radosgw-admin command
* Tasks #16344: jewel v10.2.3
* Backport #16371: jewel: rbd-mirror: ensure replay status formatter has completed before stopping replay
* Backport #16372: jewel: Unable to disable journaling feature if in unexpected mirror state
* Backport #16373: jewel: rbd-mirror: gracefully handle missing sync point snapshots
* Backport #16374: jewel: AsyncConnection::lock msg/async lockdep cycle: AsyncMessenger::lock, MDSDaemon::mds_lock, AsyncConnection::lock
* Backport #16380: jewel: msg/async: connection race hang
* Backport #16381: jewel: comparing return code to ERR_NOT_MODIFIED in rgw_rest_s3.cc (needs minus sign)
* Backport #16392: jewel: rgw: master: build failures with boost > 1.58
* Backport #16393: jewel: rgw: Swift's object versioning doesn't support restoring a previous version on DELETE
* Backport #16422: jewel: Sporadic test failures when image update notification missed
* Backport #16423: jewel: Journal duplicate op detection can cause lockdep error
* Backport #16424: jewel: Journal needs to handle duplicate maintenance op tids
* Backport #16425: jewel: rbd-mirror: potential race condition accessing local image journal
* Backport #16426: jewel: Possible race condition during journal transition from replay to ready
* Backport #16427: jewel: prepare_pgtemp needs to only update up_thru if newer than the existing one
* Backport #16429: jewel: OSDMonitor: drop pg temps from not the current primary
* Backport #16431: jewel: librados,osd: bad flags can crash the osd
* Backport #16437: jewel: async messenger mon crash
* Backport #16438: jewel: Delete image when a resync is requested
* Backport #16459: jewel: rbd-mirror should disable proxied maintenance ops for non-primary image
* Backport #16460: jewel: Crash when utilizing advisory locking API functions
* Backport #16461: jewel: ceph Resource Agent does not work with systemd
* Backport #16482: jewel: Timeout sending mirroring notification shouldn't result in failure
* Backport #16483: jewel: Close journal and object map before flagging exclusive lock as released
* Backport #16484: jewel: ExclusiveLock object leaked when switching to snapshot
* Backport #16485: jewel: Whitelist EBUSY error from "snap unprotect" for journal replay
* Backport #16486: jewel: Object map/fast-diff invalidated if journal replays the same snap remove event
* Backport #16487: jewel: async_messenger: implement ms_inject_delay*
* Backport #16507: jewel: "ceph_test_librbd_api: symbol lookup error: ceph_test_librbd_api: undefined symbol" in upgrade:client-upgrade-jewel-distro-basic-smithi
* Backport #16511: jewel: rbd-mirror: 'wait_for_scheduled_deletion' callback might deadlock
* Backport #16512: jewel: rbd-mirror: potential race condition when restarting image replayer
* Backport #16513: jewel: rbd-mirror: image-replayer: segfault when removing resync listener
* Backport #16514: jewel: Image removal doesn't necessarily clean up all rbd_mirroring entries
* Backport #16515: jewel: Session::check_access() is buggy
* Backport #16518: jewel: TaskFinisher: cancel all tasks and wait until finisher is done
* Backport #16520: jewel: librbd: potential use after free on refresh error
* Backport #16547: jewel: ObjectCacher: doesn't correctly handle read replies on split BufferHeads
* Backport #16549: jewel: Monitor dies if moncommand lacks "prefix" item
* Backport #16560: jewel: mds: enforce a dirfrag limit on entries
* Backport #16565: jewel: rgw: data sync stops after getting error in all data log sync shards
* Backport #16576: jewel: rbd-mirror: FAILED assert(m_local_image_ctx->object_map != nullptr)
* Backport #16577: jewel: 60-ceph-partuuid-workaround-rules still needed by debian jessie (udev 215-17)
* Backport #16586: jewel: partprobe intermittent issues during ceph-disk prepare
* Backport #16589: jewel: multisite sync races with deletes
* Backport #16593: jewel: FAILED assert(object_no < m_object_map.size())
* Backport #16599: jewel: rgw: Swift API returns double space usage and objects of account metadata
* Backport #16601: jewel: librbd: fix missing return statement if failed to get mirror image state
* Backport #16620: jewel: Fix MDS shutdown timing out due to deadlock
* Backport #16621: jewel: mds: `session evict` tell command blocks forever with async messenger (TestVolumeClient.test_evict_client failure)
* Backport #16625: jewel: Failing file operations on kernel based cephfs mount point leave an inaccessible file behind on hammer 0.94.7
* Backport #16636: jewel: rgw: document multi tenancy
* Backport #16637: jewel: add socket backlog setting via ceph.conf
* Backport #16658: jewel: rbd-mirror: gracefully handle being blacklisted
* Backport #16659: jewel: ReplicatedBackend doesn't increment stats on pull, only push
* Backport #16696: jewel: segfault in RGWBucketShardIncrementalSyncCR
* Backport #16697: jewel: ceph-fuse is not linked to libtcmalloc
* Backport #16699: jewel: multidelete query parameter not correctly parsed
* Backport #16700: jewel: rgw: segmentation fault on error_repo in data sync
* Backport #16701: jewel: rbd-mirror: image sync throttle needs to use pool id + image id to form unique key
* Backport #16702: jewel: multisite: bucket sync failures with tenant users
* Backport #16731: jewel: failed to create bucket after upgrade from hammer to jewel
* Backport #16732: jewel: Bucket index shards orphaned after bucket delete
* Backport #16735: jewel: rbd-nbd does not properly handle resize notifications
* Backport #16747: jewel: rbd-mirror: snap rename does not correctly replicate
* Backport #16748: jewel: mount.ceph: move from ceph-base to ceph-common and add symlink in /sbin for SUSE
* Backport #16750: jewel: ceph-osd-prestart.sh contains Upstart-specific code
* Backport #16778: multisite: 400-error with certain complete multipart upload requests
* Backport #16795: jewel: libatomic-ops-devel was renamed in May 2012 - fix in ceph.spec
* Backport #16796: jewel: Renaming old format image results in "Transport endpoint is not connected" error
* Backport #16797: jewel: MDS Deadlock on shutdown active rank while busy with metadata IO
* Backport #16798: jewel: ceph command line tool chokes on ceph –w (the dash is unicode 'en dash' &ndash, copy-paste to reproduce)
* Backport #16830: jewel: CephFSVolumeClient: List authorized IDs by share
* Backport #16831: jewel: Add versioning to CephFSVolumeClient interface
* Backport #16862: jewel: "default" zone and zonegroup cannot be added to a realm
* Backport #16863: jewel: use zone endpoints instead of zonegroup endpoints
* Backport #16864: jewel: multisite segfault on ~RGWRealmWatcher if realm was deleted
* Backport #16865: jewel: saw valgrind issues in ReplicatedPG::new_repop
* Backport #16867: jewel: mkfs.xfs slow performance with discards and object map
* Backport #16869: jewel: Discard hangs when 'rbd_skip_partial_discard' is enabled
* Backport #16901: jewel: segfault in RGWOp_MDLog_Notify
* Backport #16903: jewel: Non-primary image is recording journal events during image sync
* Backport #16904: jewel: journal should prefetch small chunks of the object during replay
* Backport #16915: Jewel: OSD crash with Hammer to Jewel Upgrade: void FileStore::init_temp_collections()
* Backport #16934: jewel: Add zone rename to radosgw_admin
* Backport #16945: jewel: RGW/civetweb no longer listens on IPv6: invalid port spec
* Backport #16950: jewel: librbd/ExclusiveLock.cc: 197: FAILED assert(m_watch_handle != 0)
* Backport #16958: jewel: Bug when using port 443s in rgw
* Backport #16959: jewel: rpm: OBS needs ExclusiveArch
* Backport #16969: jewel: src/script/subman fails with KeyError: '\n"band'
* Backport #16978: jewel: rbd-mirror: FAILED assert(m_on_update_status_finish == nullptr)
* Backport #17004: jewel: rbd-mirror: FAILED assert(m_state == STATE_STOPPING)
* Backport #17005: jewel: ImageReplayer::is_replaying does not include flush state
* Backport #17006: jewel: Increase log level for messages occurring while running rgw admin command
* Backport #17026: jewel: doc: format 2 is now the default image format
* Backport #17032: jewel: multisite: RGWPeriodPuller tries to pull from itself
* Backport #17033: jewel: rgw: there may be some objects not deleted in some circumstances
* Backport #17034: jewel: rgw: object expirer's hints might be trimmed without processing in some circumstances
* Backport #17061: jewel: bashism in src/rbdmap
* Backport #17063: jewel: Throttle in-flight image syncs to only X concurrent
* Backport #17066: jewel: rbd-mirror: remove ceph_test_rbd_mirror_image_replay test case
* Backport #17080: jewel: the option 'rbd_cache_writethrough_until_flush=true' doesn't work
* Backport #17088: jewel: Sporadic failure in TestImageReplayer.StartReplayAndWrite
* Backport #17089: jewel: OSD failed to subscribe skipped osdmaps after "ceph osd pause"
* Backport #17092: jewel: build/ops: need rocksdb commit 7ca731b12ce for ppc64le build
* Backport #17126: mds: fix double-unlock on shutdown
* Backport #17147: jewel: rgw multisite "ERROR" messages during "normal operation"
* Backport #17485: jewel: Periodically update the sync point object number during sync