- Email: email@example.com
- Registered on: 03/03/2011
- Last connection: 11/14/2013
- 06:25 AM Ceph Bug #6761: emperor's "dirty" flag is being interpreted as "lost" by Dumpling OSDs
- Likewise for me; following the instructions, everything is now back up and running without issues. Thanks!
- 07:24 AM fs Bug #5290: mds: crash whilst trying to reconnect
- I certainly haven't been hit by this again, so if you consider it resolved...
- 06:10 AM fs Bug #5290: mds: crash whilst trying to reconnect
- Hi Zheng,
Is this what you mean?
- 02:16 AM fs Bug #5290 (Can't reproduce): mds: crash whilst trying to reconnect
Recently I experienced an issue with the mds servers in my cluster; the cluster storage would be absolutely fi...
- 08:46 AM Ceph Feature #3095 (Resolved): rbd tool resize improvements
- It might be handy if the rbd CLI tool could warn an admin when performing a resize operation that would ultimately en...
- 07:00 AM Ceph Bug #1725 (Rejected): osd: os/FileStore.cc: 2426: FAILED assert(0 == "unexpected error")
- Getting a crash on one OSD when it tries to start up after upgrading to 0.38.
Here is the log of start up to crash...
- 10:45 AM fs Bug #1108: Large number of files in a directory makes things grind to a halt
- I've just re-created the cluster I was testing this on, and given a 50G lv to store the ceph logs on, so running ever...
- 02:34 PM Ceph Bug #1486: osd: 0-length meta/pginfo_* files
- I'm afraid I don't have anything decent. I'm going to sort out a way I can store all the log data and run with a high...
- 03:14 AM Ceph Bug #1486: osd: 0-length meta/pginfo_* files
- I've attached the logs from the 3 OSDs exhibiting this behaviour. They're unable to start at the moment. Thanks to th...
- 02:59 AM Ceph Bug #1486 (Resolved): osd: 0-length meta/pginfo_* files
- Linux 3.0, Ceph 0.34 + patches (5ae3e13617c9a63d12d12c8506daefd2be14677d, f13ad83d43e883938369e7c06574daa8ff2fc4ee, f...