- Email: firstname.lastname@example.org
- Registered on: 05/19/2010
- Last connection: 09/25/2012
- 02:42 AM Ceph Bug #3212: librados: failed to decode message of type 59 v1: buffer::end_of_buffer
- Ok, with packages 0.51-700-g1a9c8c7-1precise from wip-3212 and back to 0.41-1ubuntu2.1 on the client, it now refuses ...
- 03:07 AM Ceph Bug #3172 (Resolved): ceph::buffer::bad_alloc downloading a large object using rados
- I got a 1 GiB object; when I get it, I hit a bad_alloc error.
This is on a 64-bit Ubuntu 12.04.1 LTS box, with packages...
- 06:29 AM Ceph Bug #1674 (Can't reproduce): daemons crash when sent random data
- The mon seems to crash every time; the osd seems to take a few attempts (similar stack trace). Not tested on the mds...
- 03:18 PM Ceph Bug #1628 (Resolved): segfault attempting to map an rbd snapshot
- 02:58 PM fs Bug #1535: concurrent creating and removing directories crashes cmds
- Logs from both mds servers, from startup through to the crash of one node (and then shutdown of the other).
debug ms = 5
- 01:43 PM fs Bug #1535 (Resolved): concurrent creating and removing directories crashes cmds
- Set up two clients with a mounted ceph filesystem, had one creating a hierarchy of empty directories in a loop and the...
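The repro described in this entry can be sketched as two competing loops. This is a minimal sketch only, assuming a local directory (`/tmp/cephfs-repro`) standing in for the mounted ceph filesystem; the path, hierarchy depth, and iteration counts are all made up:

```shell
#!/bin/sh
# Hypothetical sketch of the bug #1535 repro: one "client" creates a
# hierarchy of empty directories in a loop while another concurrently
# removes it. MNT stands in for the ceph mount point.
MNT=${MNT:-/tmp/cephfs-repro}
mkdir -p "$MNT"

# Client 1: create nested empty directories in a loop.
(
  for i in $(seq 1 50); do
    mkdir -p "$MNT/a/b/c/d$i" 2>/dev/null
  done
) &

# Client 2: concurrently tear the same hierarchy down.
(
  for i in $(seq 1 50); do
    rm -rf "$MNT/a" 2>/dev/null
  done
) &

wait
echo "done"
```

On a local filesystem both loops simply race harmlessly; against the mounted ceph filesystem this pattern was what crashed cmds.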
- 03:03 AM Ceph Bug #1376: errant scrub stat mismatch logs after upgrade
- I upgraded to get that patch, but also got the on-disk filestore update patch, which was buggy and broke all my osds, ...
- 11:26 AM Ceph Bug #1502 (Resolved): osd FAILED assert(pg->log.tail <= pg->info.last_complete || pg->log.backlog)
- I had a 4-osd cluster. I ran kill -9 on one cosd process (as a test); it was detected as failed and the cluster became degr...
- 02:09 PM Ceph Bug #1470: broken osd after filestore upgrade
- It's empty!...
- 03:30 AM Ceph Bug #1470 (Closed): broken osd after filestore upgrade
- upgraded to latest master (commit 961260d3a) and cosd began automatic upgrade of the filestore, which seemed to compl...