# v0.94.6

* Bug #11969: chain_xattr: deal with 254 byte limit on inline xfs attrs
* Backport #12483: mon: MonitorDBStore iterator's get_next_key() returns wrong keys
* Backport #12587: FileStore calls syncfs(2) even when it is not supported
* Backport #12590: "ceph mds add_data_pool" check for EC pool is wrong
* Backport #12835: mon: map_cache can become inaccurate if osd does not receive the osdmaps
* Backport #12856: rgw: missing handling of encoding-type=url when listing keys in bucket
* Backport #12923: logrotate reload error on Ubuntu 14.04
* Backport #12925: ceph.spec.in: rgw placeholder dirs are not packaged
* Backport #12928: ceph.spec.in: libcephfs_jni1 has no %post and %postun
* Backport #12932: Unpackaged directories cause SUSE build failure
* Backport #12940: IO error on kvm/rbd with an erasure coded pool tier (after an upgrade from 0.87.1 to 0.94.2)
* Backport #12948: Heavy memory shuffling in rados bench
* Backport #12949: Race condition in rados bench
* Backport #13035: requeue_scrub when kick_object_context_blocked
* Backport #13036: osd: avoid setting osd_op.outdata multiple times in tier pool
* Backport #13037: hit set clear repops fired in same epoch as map change -- segfault since they fall into the new interval even though the repops are cleared
* Backport #13040: common/Thread: pthread_attr_destroy(thread_attr) when done with it
* Backport #13042: ThreadPool add/remove work queue methods not thread safe
* Backport #13045: rbd export-diff crashes in librbd::simple_diff_cb
* Backport #13047: Content-Type header should have correct initial capitals
* Backport #13171: objecter: cancellation bugs
* Backport #13172: rbd-replay* should ship in ceph-common
* Backport #13195: should recalculate min_last_epoch_clean when decoding PGMap
* Backport #13205: ReplicatedBackend: populate recovery_info.size for clone (bug symptom is size mismatch on replicated backend on a clone in scrub)
* Backport #13210: tests: fix broken Makefiles after integration of lttng into rados
* Backport #13233: mon: include min_last_epoch_clean as part of PGMap::print_summary and PGMap::dump
* Backport #13245: client nonce collision due to unshared pid namespaces
* Backport #13307: dumpling incrementals do not work properly on hammer and newer
* Backport #13335: hammer: OSD crashed when the pool's max_bytes quota was reached
* Backport #13336: osd: we do not ignore notify from down osds
* Backport #13337: segfault in agent_work
* Backport #13338: filestore: fix peek_queue for OpSequencer
* Backport #13339: mon: check for store writeability before participating in election
* Backport #13340: small probability sigabrt when setting rados_osd_op_timeout
* Backport #13341: ceph upstart script rbdmap.conf incorrectly processes parameters
* Tasks #13356: hammer v0.94.6
* Backport #13387: librbd: reads larger than cache size hang
* Backport #13409: randomize scrub times
* Backport #13425: wrong conditional for boolean function KeyServer::get_auth()
* Backport #13440: ceph-disk prepare fails if device is a symlink
* Backport #13460: rbd-replay-prep: crashes on incorrect trace file
* Backport #13461: librbd: object map invalid but not flagged as such
* Backport #13488: object_info_t::decode() has wrong version
* Backport #13513: rgw: value of Swift API's X-Object-Manifest header is not url_decoded during segment look up
* Backport #13535: LibRadosWatchNotify.WatchNotify2Timeout
* Backport #13536: rgw: bucket listing hangs on versioned buckets
* Backport #13538: rgw: orphan tool should be careful about removing head objects
* Backport #13540: rgw: get bucket location returns region name, not region api name
* Backport #13541: LTTng-UST should be optionally enabled
* Backport #13550: qemu workunit refers to apt-mirror.front.sepia.ceph.com
* Backport #13588: OSD::build_past_intervals_parallel() shall reset primary and up_primary when beginning a new past_interval
* Backport #13590: mon: should not set isvalid = true when cephx_verify_authorizer returns false
* Backport #13599: rbd-replay-prep not getting packaged for SUSE
* Backport #13620: osd: pg stuck in replay
* Backport #13621: CephFS restriction on removing cache tiers is overly strict
* Backport #13637: FileStore: potential memory leak if getattrs fails
* Backport #13654: crush: crash if we see CRUSH_ITEM_NONE in early rule step
* Backport #13672: tests: testprofile must be removed before it is re-created
* Backport #13692: osd: do not cache unused memory in attrs
* Backport #13693: bug with cache/tiering and snapshot reads [merged, needs a test in ceph-qa-suite]
* Backport #13695: init-rbdmap uses distro-specific functions
* Backport #13716: rgw: Swift with Civetweb SSL cannot get the right URL
* Backport #13734: rgw: swift API returns more than real object count and bytes used when retrieving account metadata
* Backport #13753: Avoid re-writing old-format image header on resize
* Backport #13755: QEMU hangs after creating snapshot and stopping VM
* Backport #13758: rbd: pure virtual method called
* Backport #13760: unknown argument --quiet in udevadm settle
* Backport #13770: Objecter: pool op callback may hang forever
* Backport #13786: rbd-replay-* moved from ceph-test-dbg to ceph-common-dbg as well
* Backport #13789: Objecter: potential null pointer access when doing pool_snap_list
* Backport #13820: hammer: Setting ACL on Object removes ETag
* Backport #13831: hammer: init script reload doesn't work on EL7
* Backport #13859: hammer: ceph.spec.in License line does not reflect COPYING
* Backport #13870: hammer: OSD: race condition detected during send_failures
* Backport #13888: hammer: orphans finish segfaults
* Backport #13892: hammer: auth/cephx: large amounts of log are produced by osd
* Backport #13930: hammer: Ceph Pools' MAX AVAIL is 0 if some OSDs' weight is 0
* Backport #13936: hammer: Ceph daemon failed to start because the service name was already used
* Backport #14043: hammer: osd/PG.cc: 288: FAILED assert(info.last_epoch_started >= info.history.last_epoch_started)
* Backport #14063: hammer: rbd merge-diff doesn't properly handle >2GB diffs
* Backport #14138: hammer: Use of syslog results in all log messages at priority "emerg"
* Backport #14143: hammer: Verify self-managed snapshot functionality on image create
* Backport #14236: "OSDMonitor.cc: 2116: FAILED assert(0)" in rados-hammer-distro-basic-openstack
* Backport #14283: hammer: rbd: fix bench-write
* Backport #14285: hammer: osd/OSD.cc: 2469: FAILED assert(pg_stat_queue.empty()) on shutdown
* Backport #14287: hammer: ReplicatedPG: wrong result code checking logic during sparse_read
* Backport #14288: hammer: ceph osd pool stats broken in hammer
* Backport #14292: osd/PG.cc: 3837: FAILED assert(0 == "Running incompatible OSD")
* Backport #14329: hammer: osd/ReplicatedPG: fix promotion recency logic
* Backport #14331: hammer: configure.ac: no need to add "+" before ac_ext=c
* Backport #14376: scrub suicide timeout is the same as the regular timeout -- should probably match recovery at least
* Backport #14441: hammer: man: document listwatchers cmd in "rados" manpage
* Backport #14466: hammer: rbd-replay does not check for EOF and goes into an endless loop
* Backport #14467: hammer: disable filestore_xfs_extsize by default
* Backport #14470: hammer: tool to artificially inflate the leveldb of the mon store for testing purposes
* Backport #14497: hammer: ceph-monstore-tool: FAILED assert(!is_open)
* Backport #14543: hammer: Cannot reliably create snapshot after freezing QEMU IO
* Backport #14553: hammer: rbd: TaskFinisher::cancel should remove event from SafeTimer
* Backport #14569: hammer: Make RGW_MAX_PUT_SIZE configurable
* Backport #14570: hammer: Incorrect ETAG calculated for POST uploads
* Backport #14581: hammer: should compact full epochs in monitor
* Backport #14584: hammer: fsstress.sh fails
* Backport #14624: hammer: fsx failed to compile
* Backport #14643: hammer: rgw: radosgw-admin man page doesn't contain the "orphans" commands
* Backport #14651: hammer: rgw: radosgw-admin --help doesn't show the "orphans find" command
* Backport #14801: hammer: build/ops: deb: strip tracepoint libraries from Wheezy/Precise builds
* Backport #14802: hammer: build/ops: pass tcmalloc env through to ceph-os
* Backport #14803: hammer: osd: smaller object_info_t xattrs