Index by date
03/07/2024
03/05/2024
02/07/2024
12/07/2023
11/03/2023
10/04/2023
09/05/2023
08/08/2023
07/06/2023
07/03/2023
06/28/2023
06/21/2023
06/14/2023
06/05/2023
05/02/2023
04/05/2023
03/02/2023
02/01/2023
01/30/2023
01/04/2023
12/07/2022
11/03/2022
10/05/2022
09/06/2022
08/25/2022
06/29/2022
06/01/2022
04/30/2022
03/03/2022
02/02/2022
01/12/2022
12/01/2021
11/04/2021
10/06/2021
09/01/2021
08/04/2021
06/24/2021
06/02/2021
05/06/2021
04/28/2021
04/20/2021
03/26/2021
03/24/2021
03/17/2021
03/03/2021
01/27/2021
12/02/2020
11/22/2020
11/04/2020
11/02/2020
10/28/2020
10/05/2020
08/05/2020
07/02/2020
06/30/2020
06/03/2020
06/02/2020
02/05/2020
12/04/2019
11/22/2019
11/07/2019
10/30/2019
09/05/2019
08/06/2019
07/04/2019
05/31/2019
04/03/2019
03/07/2019
02/06/2019
01/02/2019
12/05/2018
11/08/2018
10/03/2018
09/06/2018
08/01/2018
07/12/2018
06/06/2018
05/03/2018
04/04/2018
04/03/2018
03/08/2018
02/07/2018
01/04/2018
12/06/2017
11/02/2017
11/01/2017
10/03/2017
09/07/2017
08/02/2017
07/06/2017
06/07/2017
05/23/2017
05/03/2017
04/21/2017
04/06/2017
04/04/2017
02/07/2017
02/01/2017
01/10/2017
12/20/2016
12/07/2016
11/02/2016
10/05/2016
10/04/2016
09/21/2016
09/07/2016
08/11/2016
08/08/2016
08/03/2016
07/18/2016
06/28/2016
06/22/2016
- CAB 2016-03-23
- CAB 2016-06-22
- Cache Tiering - Improve efficiency of read-miss operations
- Ceph Advisory Board
06/07/2016
06/01/2016
05/20/2016
05/12/2016
04/25/2016
04/15/2016
04/06/2016
03/21/2016
03/17/2016
03/16/2016
03/03/2016
02/24/2016
02/15/2016
01/14/2016
01/09/2016
12/21/2015
12/16/2015
12/15/2015
12/01/2015
11/06/2015
11/03/2015
11/02/2015
10/26/2015
10/21/2015
10/20/2015
09/28/2015
09/02/2015
08/26/2015
- Infernalis
- Osd - Tiering II (Warm->Cold)
- Osd - Transactions
- Press
- Rbd - kernel rbd client supports copy-on-read
- RBD Async Mirroring
- RGW - Active/Active Arch
- Rgw - Hadoop FileSystem Interface for a RADOS Gateway Caching Tier
- RGW - NFS
- RGW Multitenancy
- Romana - calamari-clients repo gets a new name
08/19/2015
08/14/2015
08/13/2015
08/12/2015
08/11/2015
08/10/2015
08/06/2015
08/04/2015
07/10/2015
- Generic support for plugins installation and upgrade
- Improve tail latency
- LMDB key/value backend for Ceph
- NewStore (new osd backend)
- OpenStack@Ceph
- Openstack manila and ceph
- Osd - erasure coding pool overwrite support
- Osd - Faster Peering
07/08/2015
- Continue Ceph/Docker integration work
- Dynamic data relocation for cache tiering
- Export rbd diff between clone and parent
07/06/2015
- Accelio RDMA Messenger
- Accelio xio integration with kernel RBD client for RDMA support
- Adding a proprietary key value store to CEPH as a pluggable module
- Add Metadata Mechanism To LibRBD
- A standard framework for Ceph performance profiling with latency breakdown
- Cache tier improvements - hitsets proxy write
- Calamari - How to implement high-level stories in an intelligent API
- Cephfs - multitenancy features
- Ceph Governance
- Clustered SCSI target using RBD Status
07/03/2015
- CDS Giant
- CephFS fsck Progress &
- Create Versionable and Fault-Tolerant Storage Devices with Ceph and VirtualBox
- Fixed memory layout for MessageOp passing
- Get Started with the Calamari REST API and PHP
- How to make Ceph enterprise ready
- Kerberos authn, AD authn/authz
- Librados - expose checksums
- Librbd - shared flag object map
- Monitor - reweight near full osd autonomically
- OSD - add flexible cache control of object data
- Osd - opportunistic whole-object checksums
- Osd - prepopulate pg temp
- Osd - Scrub/SnapTrim IO prioritization
- Osd - tiering - fine-grained promotion unit
- Osd - tiering - reduce read/write latencies on cache tier miss
- Osd - update Transaction encoding
- Planning (CDS)
- Quotas vs subtrees
- Rados - improve ex-import functionality
- Rbd - Copy-on-read for clones in kernel rbd client
- RBD - Mirroring
- Rgw - bucket index scalability
- Rgw - Snapshots
- Towards Ceph Cold Storage
07/02/2015
07/01/2015
- Rbd - copy-on-read for clones
- Calamariapihardwarestorage
- Calamari localization
- Calamari RESTful API
- CephFS - Forward Scrub
- CephFS - Hadoop Support
- CephFS quota support discussion
- Ceph Security hardening
- Krbd exclusive locking
- Passive monitors
- Rados cache tier promotion queue and throttling
- Rados qos
- Rbd - Database performance
- Rbd journal
- Rgw - compound object (phase 1)
- Rgw - If-Match on user-defined metadata
- Rgw new multisite configuration
- Security - CephX brute-force protection through auto-blacklisting
- Wiki IA Overhaul
06/30/2015
- Giant
- Librados/Objecter trace capture and replay
- Mon - dispatch messages while waiting for IO to complete
- Osd - create backend for seagate kinetic
- Osd - Locally repairable code
- Pyramid Erasure Code
06/29/2015
06/23/2015
- Sideboard
- Add QoS capacity to librbd
- Add Systemtap/Dtrace static markers
- Annotate config options
- Annotate perfcounters
- Calamari - localization infrastructure Chinese version
- CDS Emperor
- Ceph deployment - ceph-deploy, puppet, chef, salt, ansible
- Create a Scalable and Resilient Object Gateway with Ceph and VirtualBox
- Crush extension for more flexible object placement
- Diagnosability
- Diff - integrity local import
- Librados/Objecter - improve threading
- Librados - support parallel reads
- Mongoose/Civetweb frontend for RGW
- Mon - Independently dispatch non-conflicting messages
- Mon - PaxosServices relying on hooks instead of hardcoded order to update/propose
- Mon - Prioritize messages
- Osd - tiering - new cache modes
- Reference counter for protected snapshots
- V0.94.2
- V9.0.3
- V9.0.4
06/22/2015
- Ceph User Committee
- 1A - Welcome Introduction and Housekeeping
- 1B - Ceph Management API
- 1C - Erasure Encoding as a Storage Backend
- 1D - RGW Geo-Replication and Disaster Recovery
- 1E - RADOS Gateway refactor into library internal APIs
- 1F - Enforced bucket-level quotas in RGW
- 1G - Client Security for CephFS
- 1H - Inline data support
- 1I - Fallocate/Hole Punching
- 2E - Chef Cookbook Consolidation &
- 2F - Testing build/release &
- 2G - RADOS namespaces, CRUSH language extension, CRUSH library
- 2H - Ceph stats and monitoring tools
- 2I - A hook framework for Ceph FS operation
- 5 minute guide - Deploying a Ceph Cluster
- Add CRUSH management to calamari API
- CDS Dumpling
- CDS Firefly
- Ceph-Brag
- Ceph-deploy
- Cephfs quota support
- Ceph CLI Experience
- Ceph Infrastructure
- Chat Logs
- Erasure coded storage backend (step 3)
- Etherpad Snapshots
- Object striping in librados
- Osdmap - primary role affinity
- Osd - new key/value backend
- Osd - tiering - cache pool overlay
- PowerDNS backend for RGW
- Rados cache pool (part 2)
- Rbd - cloud management platform features
- Rgw - bucket level quota
- Rgw - Multi-region Disaster Recovery (phase 2)
- Rgw - object versioning
- Rgw support for swift temp url
- Specify File layout by kernel client and fuse client
- Test Automation
06/21/2015
- 5 Ways to Contribute to Calamari
- 6 Important Calamari API Methods for Developers
- 7 Best Practices to Maximize Your Ceph Cluster's Performance
- Erasure coded storage backend (step 2)
- Increasing Ceph portability
- Kernel client read ahead optimization
- Librados/Objecter - smarter localized reads
- Librgw
- Mds - Inline data support (Step 2)
- Msgr - implement infiniband support via rsockets
- Osd - ceph on zfs
06/20/2015
06/19/2015
06/18/2015
06/17/2015
06/16/2015
06/13/2015
- Ceph 0 day for performance regression
- Let's make Calamari easier to troubleshoot
- Testing - non-functional tests
06/12/2015
06/11/2015
06/10/2015
- Add IO hint in CephFS
- Hadoop over Ceph RGW status update
- SAMPLE BLUEPRINT
- Tiering-enhancement
- Transcript - Erasure coded storage backend (step 2)
06/09/2015
- Add LevelDB support to ceph cluster backend store
- Add new feature - Write Once Read Many volume
- CephFS - file creation and object-level backtraces
- CephFS - Security &
- Cephfs encryption support
- Chat Log - Sessions 1-16
- Chat Log - Sessions 17-29
- CMake
- Cold Storage Pools
- Fallocate/Hole Punching Support for Ceph
- Fix memory leaks
- Inline data support for Ceph
- Osd - tiering - object redirects
- RADOS Gateway refactor into library internal APIs
- Rados namespaces
- Rados redirects
- Rgw - active-active architecture
- RGW Geo-Replication and Disaster Recovery
- Scalability Stress and Portability test
- Strong AuthN and AuthZ for CephFS
- Zero-copy bufferlists
06/08/2015
- A hook framework for Ceph FS operation
- Better Swift Compatibility for Radosgw
- Ceph Management API
- Ceph stats and monitoring tools
- Continuous OSD Stress Testing and Analysis
- Enforced bucket-level quotas in the Ceph Object Gateway
- Erasure encoding as a storage backend
- Extend crush rule language
06/07/2015
- Create and Maintain S3 feature list for compatibility tracking
- Create crush library
- Dumpling
- Emperor
- Firefly
- Hammer
- Mds - reduce memory consumption
- Mds dumpability
- Osd - clone from journal on btrfs
- Rbd - shared read cache
- Rbd copy-on-read for clones
- Rgw - multitenancy
- Rgw - plugin architecture
- Schedule
- Sideboard
06/06/2015
06/05/2015
06/03/2015
- Can Block, CephFS, and Gateway Clients Share Data
- Can Ceph Export a Filesystem via NFS or Samba/CIFS
- Can Ceph Support Multiple Data Centers
- Can Ceph use other Multi-tenancy Modules
- Can I Access Ceph via a Hypervisor
- Can I Develop a Client With Another Language
- Can I Use the Same Drive for Multiple OSDs
- Does Ceph Authentication Provide Multi-tenancy
- Does Ceph Enforce Quotas
- Does Ceph Provide Billing
- Does Ceph Track Per User Usage
- Do Ceph Clients Run on Windows
- How Can I Give Ceph a Try
- How Does Ceph Authenticate Users
- How Does Ceph Ensure Data Integrity Across Replicas
- How Many NICs Per Host
- How Many OSDs Can I Run per Host
- What Kind of Hardware Does Ceph Require
- What Kind of OS Does Ceph Require
- What Programming Languages can Interact with the Object Store
- What Underlying Filesystem Do You Recommend
- Which Ceph Clients Support Striping
- Why Do You Recommend One Drive Per OSD
06/02/2015
06/01/2015
- Ceph Technical Committee
- Committer List
- Final report
- New RelyGUI
- Reliability model
- RESTful API for DR Geo-Replication
- RGW Object Versioning
- Rgw sync agent architecture
- Technical details on the model
- Tentative schedule
05/30/2015
05/29/2015
05/28/2015
05/25/2015
- Ceph User Committee
- Ceph User Committee meeting 2014-04-03
- Ceph User Committee meeting 2014-05-02
- Event Calendar
- Foundation
- Meetings
05/14/2015