# v10.2.4

* Backport #16069: jewel: set full flags doesn't work
* Backport #16313: jewel: client: FAILED assert(root_ancestor->qtree == __null)
* Backport #16337: jewel: Improve OSD heartbeat_check log message by including host name (besides OSD numbers)
* Backport #16377: jewel: msgr/async: Messenger thread long time lock hold risk
* Backport #16441: jewel: [initscripts] systemd-run is not needed in initscripts
* Backport #16447: jewel: default quota fixes
* Backport #16458: jewel: Potential crash during journal::Replay shut down
* Backport #16564: jewel: cors auto memleak
* Backport #16583: jewel: mon crash: crush/CrushWrapper.h: 940: FAILED assert(successful_detach)
* Backport #16657: jewel: i386 tarball gitbuilder failure on master
* Backport #16667: jewel: incorrect value of CINIT_FLAG_DEFER_DROP_PRIVILEGES
* Backport #16792: jewel: metadata sync can skip markers for failed/incomplete entries
* Backport #16793: jewel: upgrade from old multisite to new multisite fails
* Backport #16794: jewel: multisite bucket sync doesn't retry objects that it fails to fetch
* Backport #16866: jewel: OSD: "ceph osd df" does not show summarized info correctly if one or more OSDs are out
* Backport #16868: jewel: Prevent the creation of a clone from a non-primary mirrored image
* Backport #16902: jewel: rbd-mirror: image deleter should use pool id + global image uuid for key
* Backport #16946: jewel: client: nlink count is not maintained correctly
* Backport #16951: jewel: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
* Backport #17007: jewel: ceph-disk should timeout when a lock cannot be acquired
* Backport #17056: jewel: mon/osdmonitor: decouple adjust_heartbeat_grace and min_down_reporters
* Backport #17058: jewel: Disabling pool mirror mode with registered peers results in orphaned mirrored images
* Backport #17059: jewel: rbd bench-write: seg fault when "--io-size" is larger than image size
* Backport #17060: jewel: Cannot disable journaling or remove non-mirrored, "non-primary" image
* Backport #17062: jewel: "[ FAILED ] TestClsRbd.mirror_image" in upgrade:jewel-x-master-distro-basic-vps
* Backport #17064: jewel: rgw: radosgw daemon cores when reopening logs
* Backport #17065: jewel: rbd mirror: after promote, the mirror image is often up+error
* Backport #17067: jewel: Request exclusive lock if owner sends -ENOTSUPP for proxied maintenance op
* Backport #17082: jewel: disable LTTng-UST in openSUSE builds
* Backport #17094: jewel: ceph-osd-prestart.sh fails confusingly when data directory does not exist
* Backport #17095: jewel: rpm: ceph installs stuff in %_udevrulesdir but does not own that directory
* Backport #17118: jewel: rgw: period commit returns an error when the current period has a zonegroup which doesn't have a master zone
* Backport #17121: jewel: the %USED of "ceph df" is wrong
* Backport #17122: jewel: COPY broke multipart files uploaded under dumpling
* Backport #17131: jewel: segfault in ObjectCacher::FlusherThread
* Backport #17135: jewel: ceph-mon segmentation fault after setting crush_ruleset on ceph 10.2.2
* Backport #17140: jewel: period commit loses zonegroup changes: region_map converted repeatedly
* Backport #17141: jewel: PG::_update_calc_stats wrong for CRUSH_ITEM_NONE up set items
* Backport #17143: jewel: rgw file uses too much CPU in gc/idle thread
* Backport #17144: jewel: mark_all_unfound_lost() leaves unapplied changes
* Backport #17145: jewel: PG::choose_acting valgrind error or ./common/hobject.h: 182: FAILED assert(!max || (*this == hobject_t(hobject_t::get_max())))
* Backport #17149: jewel: ceph-disk: expected systemd unit failures are confusing
* Backport #17161: jewel: multisite: StateBuildingFullSyncMaps doesn't check for errors before advancing to StateSync
* Backport #17206: jewel: ceph-fuse crash in Client::get_root_ino
* Backport #17207: jewel: ceph-fuse crash on force unmount with file open
* Backport #17241: jewel: "*** Caught signal" in krbd
* Backport #17244: jewel: Failure in snaptest-git-ceph.sh
* Backport #17245: jewel: tests: scsi_debug fails /dev/disk/by-partuuid
* Backport #17246: jewel: Log path as well as ino when detecting metadata damage
* Backport #17262: jewel: rbd-nbd IO hang
* Backport #17263: jewel: ceph-objectstore-tool: ability to perform filestore splits offline
* Backport #17264: jewel: multimds: allow_multimds not required when max_mds is set in ceph.conf at startup
* Backport #17265: jewel: Possible deadlock race condition between image close and librados shutdown
* Backport #17290: jewel: ImageWatcher: use after free within C_UnwatchAndFlush
* Backport #17292: jewel: add a tool to rebuild mon store from OSD
* Backport #17312: jewel: build/ops: allow building RGW with LDAP disabled
* Backport #17319: jewel: rgw nfs 28
* Backport #17321: jewel: rgw: file: remove busy-wait in RGWLibFS::gc()
* Backport #17322: jewel: rgw nfs v3 completions
* Backport #17323: jewel: rgw: rgw_file: restore local definition of RGWLibFS gc interval
* Backport #17324: jewel: rgw: ldap: protect rgw::from_base64 from non-base64 input
* Backport #17325: jewel: rgw: rgw_file: fix return value signedness (rgw_readdir)
* Backport #17326: jewel: rgw: rgw_file: fix bug where rgw_lookup cannot exactly match a file name
* Backport #17327: jewel: rgw: remove duplicated calls to getattr
* Backport #17332: jewel: rgw: file setattr
* Backport #17335: jewel: radosgw-admin(8) does not describe "--job-id" or "--max-concurrent-ios"
* Backport #17337: jewel: radosgw-admin lacks docs for "--orphan-stale-secs"
* Backport #17339: jewel: objects in the metadata_heap pool are created, but never read or removed
* Backport #17341: jewel: librados memory leaks from ceph::crypto (WITH_NSS)
* Backport #17344: jewel: "line 16: exec: ceph-coverage: not found" in upgrade:client-upgrade-jewel-distro-basic-smithi
* Backport #17345: jewel: Ceph Status - Segmentation Fault
* Backport #17347: jewel: ceph-create-keys: sometimes blocks forever if mds "allow" is set
* Backport #17349: jewel: Modification for "TEST S3 ACCESS" section in "INSTALL CEPH OBJECT GATEWAY" page
* Backport #17350: jewel: rgw: response information is wrong when getting token of swift account
* Backport #17358: jewel: doc: fix description for rsize and rasize
* Backport #17360: jewel: ceph-objectstore-tool crashes if --journal-path
* Backport #17373: jewel: image.stat() call in librbdpy fails sometimes
* Backport #17375: jewel: "sysfs write failed" in smoke
* Backport #17376: jewel: Assign LOG_INFO priority to syslog calls
* Backport #17377: jewel: LIBRADOS modify Pipe::connect() to return the error code
* Backport #17382: jewel: logrotate script permissions need to match qemu
* Backport #17384: jewel: helgrind: TestLibRBD.TestIOPP potential deadlock closing an image with read-ahead enabled
* Backport #17393: jewel: rgw: rgw_file: fix set_attrs operation
* Backport #17394: jewel: rgw: nfs: fix NFS creation (and other?) times for S3-created buckets
* Backport #17402: jewel: OSDMonitor: Missing nearfull flag set
* Backport #17404: jewel: update_features API needs to support backwards/forward compatibility
* Backport #17405: jewel: Sporadic failure in TestMockJournal.ReplayOnDiskPostFlushError
* Backport #17406: jewel: rbd-mirror: force-promoted image will remain R/O until local rbd-mirror daemon restarted
* Backport #17471: jewel: rgw: versioning is broken in current master
* Backport #17474: jewel: Failure in dirfrag.sh
* Backport #17475: jewel: rbd-mirror: potential race condition results in heap corruption
* Backport #17476: jewel: Failure in snaptest-git-ceph.sh
* Backport #17477: jewel: Crash in Client::_invalidate_kernel_dcache when reconnecting during unmount
* Backport #17479: jewel: Duplicate damage table entries
* Backport #17480: jewel: ACL request for objects with underscore at end and beginning
* Backport #17481: jewel: Proxied operations shouldn't result in error messages if replayed
* Backport #17482: jewel: Enable/Disable of features is allowed even when the features are already enabled/disabled
* Backport #17483: jewel: RBD should restrict mirror enable/disable actions on parents/clones
* Backport #17484: jewel: performance: journaling results in 4X slowdown when writes are not blocked by cache
* Backport #17486: jewel: Optionally unregister "laggy" journal clients
* Tasks #17487: jewel v10.2.4
* Backport #17505: jewel: rgw: doc: description of multipart part entity is wrong
* Backport #17506: jewel: assert failure in run-rbd-unit-tests.sh
* Backport #17508: jewel: rbd-mirror: potential crash during replay shut down
* Backport #17509: jewel: Config parameter "rgw keystone make new tenants" in radosgw multitenancy does not work
* Backport #17510: jewel: ERROR: got unexpected error when trying to read object: -2
* Backport #17511: jewel: s3tests-test-readwrite failing with 500
* Backport #17513: jewel: S3 object versioning fails when applied on a non-master zone
* Backport #17537: jewel: Improve resiliency of rbd-mirror stress test case
* Backport #17538: jewel: rgw: user email can be modified to empty even when it has a value
* Backport #17542: jewel: systemd: add install section to rbdmap.service file
* Backport #17543: jewel: ldap auth custom search filter
* Backport #17555: jewel: krbd-related CLI patches
* Backport #17556: jewel: librbd::Operations: update notification failed: (2) No such file or directory
* Backport #17557: jewel: MDSMonitor: non-existent standby_for_fscid not caught
* Backport #17559: jewel: librbd should permit removal of image being bootstrapped by rbd-mirror
* Backport #17560: jewel: build/ops: include more files in "make dist" tarball
* Backport #17575: jewel: aarch64: Compiler-based detection of crc32 extended CPU type is broken
* Backport #17576: jewel: RGW loses realm/period/zonegroup/zone data: period overwritten if somewhere in the cluster is still running Hammer
* Backport #17603: jewel: mon/tool: PGMonitor::check_osd_map assert fails when rebuilding the mon store
* Backport #17609: jewel: tests: ceph-disk must ignore debug monc
* Backport #17616: jewel: FAILED assert(m_in_flight_object_closes > 0)
* Backport #17642: jewel: TestJournalReplay: sporadic "assert(m_state == STATE_READY || m_state == STATE_STOPPING)" failure
* Backport #17673: jewel: rgw crash when client posts object with null conditions
* Backport #17676: jewel: swift: Problems with DLO containing 0 length segments
* Backport #17686: jewel: don't loop forever when reading data from 0 sized segment
* Backport #17693: jewel: rgw: don't loop forever when reading data from 0 sized segment
* Backport #17694: jewel: mon: forwarded message is encoded with sending client's features
* Backport #17695: jewel: discard after write results in assertion failure
* Backport #17696: jewel: ceph osd metadata produced bad json on error
* Backport #17707: jewel: ceph-disk: using a regular file as a journal fails
* Backport #17715: jewel: init file should be owned by ceph
* Backport #17734: jewel: Upgrading 0.94.6 -> 0.94.9 saturating mon node networking
* Backport #17748: jewel: Message.cc: 193: FAILED assert(middle.length() == 0)
* Backport #17787: jewel: ceph-post-file doesn't work
* Backport #17816: jewel: Missing comma in ceph-create-keys causes concatenation of arguments
* Backport #17915: jewel: filestore: can get stuck in an unbounded loop during scrub
* Backport #18015: jewel: osd: condition OSDMap encoding on features
* Backport #18059: librados: new setxattr overload breaks c++ ABI
* Backport #18095: jewel: Improve deep-scrub performance with many snapshots