# v0.94.10

* Backport #13927: hammer: cephfs-java ftruncate unit test failure
* Backport #14323: hammer: OpTracker needs to release the message throttle in _unregistered
* Backport #16151: hammer: crash adding snap to purged_snaps in ReplicatedPG::WaitingOnReplicas
* Backport #16225: hammer: SIGABRT in TrackedOp::dump() via dump_ops_in_flight()
* Backport #16318: hammer: radosgw-admin: inconsistency in uid/email handling
* Backport #16428: hammer: prepare_pgtemp needs to only update up_thru if newer than the existing one
* Backport #16432: hammer: librados,osd: bad flags can crash the osd
* Backport #16442: hammer: [initscripts]: systemd-run is not needed in initscripts
* Backport #16448: hammer: default quota fixes
* Backport #16546: hammer: ObjectCacher doesn't correctly handle read replies on split BufferHeads
* Backport #16584: hammer: mon crash: crush/CrushWrapper.h: 940: FAILED assert(successful_detach)
* Backport #16594: hammer: RGW Swift API: ranged request on a DLO provides wrong values in Content-Range HTTP header
* Backport #16870: hammer: OSD: crash on EIO during deep-scrubbing
* Backport #16918: hammer: making stop.sh more portable
* Backport #16952: hammer: ceph 10.2.2 rbd status on image format 2 returns "(2) No such file or directory"
* Backport #17068: hammer: Request exclusive lock if owner sends -ENOTSUPP for proxied maintenance op
* Backport #17120: hammer: the %USED of "ceph df" is wrong
* Backport #17123: hammer: COPY broke multipart files uploaded under dumpling
* Backport #17142: hammer: osd: PG::_update_calc_stats wrong for CRUSH_ITEM_NONE up set items
* Backport #17146: hammer: PG::choose_acting valgrind error or ./common/hobject.h: 182: FAILED assert(!max || (*this == hobject_t(hobject_t::get_max())))
* Backport #17150: hammer: rgw: Anonymous user is able to read bucket with authenticated read ACL
* Tasks #17151: hammer v0.94.10
* Backport #17285: ceph-mon leaks in MDSMonitor when ceph-mds process is running but MDS is not configured
* Backport #17291: hammer: add a tool to rebuild mon store from OSD
* Backport #17333: hammer: crushtool --compile creates output despite missing item
* Backport #17336: hammer: radosgw-admin(8) does not describe "--job-id" or "--max-concurrent-ios"
* Backport #17338: hammer: radosgw-admin lacks docs for "--orphan-stale-secs"
* Backport #17346: hammer: Ceph Status - Segmentation Fault
* Backport #17359: hammer: ceph-objectstore-tool crashes if --journal-path
* Backport #17374: hammer: image.stat() call in librbdpy fails sometimes
* Backport #17383: hammer: ceph-objectstore-tool: ability to perform filestore splits offline
* Backport #17403: hammer: OSDMonitor: Missing nearfull flag set
* Backport #17534: hammer: doc: document the changed upgrade steps for hammer
* Backport #17602: hammer: mon/tool: PGMonitor::check_osd_map assert failure when rebuilding the mon store
* Backport #17631: hammer: Fix rgw crash when client posts object with null condition
* Backport #17671: Fix coding mistake in pre-refactor rbd shell
* Backport #17677: hammer: swift: Problems with DLO containing 0 length segments
* Backport #17678: hammer: monitor should send monmap updates when the monmap is updated
* Backport #17764: hammer: collection_list shadow return value
* Backport #17840: hammer: rgw: the value of total_time is wrong in the result of 'radosgw-admin log show' opt
* Backport #17878: hammer: FileStore: fiemap cannot be totally retrieved in xfs when the number of extents > 1364
* Backport #17883: hammer: OSDs marked OUT wrongly after monitor failover
* Backport #17905: hammer: Error EINVAL: removing mon.a at 172.21.15.16:6789/0, there will be 1 monitors
* Backport #17957: hammer: "RWLock.h: 124: FAILED assert(r == 0)" in rados-jewel-distro-basic-smithi
* Backport #18109: hammer: msg/simple/Pipe: error decoding addr
* Backport #18111: hammer: diff calculation can hide parent extents when examining first snapshot in clone
* Backport #18132: hammer: ReplicatedBackend::build_push_op: add a second config to limit omap entries/chunk independently of object data
* Backport #18213: hammer: rgw: radosgw server aborts when accepting a CORS request with a short origin
* Backport #18218: hammer: rgw sends omap_getvals with (u64)-1 limit
* Backport #18222: hammer: dumpling, hammer, jewel: qemu/tests/qemu-iotests/077 fails
* Backport #18237: hammer: rbd import-diff does not complain when image-name is not given
* Backport #18281: hammer: mon: osd flag health message is misleading
* Backport #18317: hammer: TempURL does not behave like its Swift counterpart
* Backport #18377: hammer: rados/upgrade test fails with git clone https://github.com/ceph/ceph.git /home/ubuntu/cephtest/clone.client.0 ; cd -- /home/ubuntu/cephtest/clone.client.0 && git checkout jewel
* Backport #18383: NameError: global name 'mnt_point' is not defined in hammer 0.94.10 integration testing
* Backport #18385: hammer: Cannot clone ceph/s3-tests.git (missing branch)
* Backport #18390: hammer: qa/workunits/rbd/test_lock_fence.sh fails (regression)
* Backport #18397: OSDs commit suicide in rbd suite when testing on btrfs
* Backport #18399: hammer: tests: objecter_requests workunit fails on wip branches
* Backport #18405: hammer: Cannot reserve CentOS 7.2 smithi machines
* Backport #18432: hammer: ceph-create-keys loops forever
* Backport #18448: hammer: osd: filestore: FALLOC_FL_PUNCH_HOLE must be used with FALLOC_FL_KEEP_SIZE
* Backport #18449: hammer: [teuthology] update "rbd/singleton/all/formatted-output.yaml" to support ceph-ci
* Backport #18544: hammer: [teuthology] update Ubuntu image url after ceph.com refactor
* Backport #18602: hammer: cephfs test failures (ceph.com/qa is broken, should be download.ceph.com/qa)
* Backport #18628: hammer: osd: Fix map gaps again (bug 15943)