# v12.1.0

* Tasks #16784: RADOSGW feature requires FCGI, which no longer has an upstream source
* Feature #19109: Use data pool's 'df' for statfs instead of global stats, if there is only one data pool
* Bug #19635: Deadlock on two ceph-fuse clients accessing the same file
* Feature #19657: An EIO from a single device should not be a client-visible failure
* Fix #19691: Remove journaler_allow_split_entries option
* Bug #20109: src/common/crc32c_ppc_asm.s ppc64le build break
* Documentation #20119: Documentation of the Python RBD API does not say that aio_* functions call their callbacks in a different (dummy) thread
* Bug #20288: armhf build failure
* Fix #20330: msg: increase tcp backlog parameter
* Bug #20334: I/O becomes slow with multiple MDS when the subtree root has replicas
* Bug #20415: dashboard: Health is not updated
* Bug #20426: some generic options cannot be passed by rbd-nbd
* Bug #20435: `ceph -s` repeats some health details in the Luminous RC release
* Bug #20484: snapshot can be removed while it is in use by an rbd-nbd device
* Bug #20525: ceph osd replace problem with osd out
* Bug #20545: erasure coding = crashes
* Bug #20583: mds: improve wording when mds respawns due to mdsmap removal
* Bug #20595: mds: export_pin should be included in `get subtrees` output
* Bug #20612: radosgw ceases responding to list requests
* Bug #20659: MDSMonitor: assertion failure if two MDSs report the same health warning
* Bug #20663: Segmentation fault when exporting an rgw bucket in nfs-ganesha
* Bug #20677: mds: abort during migration
* Bug #20680: osd: FAILED assert(num_unsent <= log_queue.size())
* Bug #20683: mon: HealthMonitor.cc: 216: FAILED assert(store_size > 0)
* Bug #20745: Error on create-initial with --cluster
* Bug #20746: dashboard: usage graph keeps growing larger
* Bug #20755: radosgw crash when the service is restarted during lifecycle processing
* Bug #20756: radosgw daemon crash when the service is restarted during lifecycle processing
* Feature #20760: mds: add perf counters for all mds-to-mds messages
* Bug #20765: bluestore: mismatched uuid in bdev_label after unclean shutdown
* Bug #20767: Zabbix plugin assumes legacy compat mode for health JSON
* Bug #20787: a bucket created by Swift cannot be used by S3
* Feature #20801: ability to rebuild BlueStore WAL journals is missing
* Bug #20807: Error in boot.log - Failed to start Ceph disk activation - Luminous
* Bug #20856: osd: luminous osd bluestore crashes with jemalloc enabled on Debian 9
* Bug #20861: Object data loss in RGW when multipart upload completion times out