Activity
From 08/15/2013 to 09/13/2013
09/13/2013
- 06:52 PM Feature #6001 (Resolved): EC: [link] jerasure plugin
- 01ec6a3fdf9da8b35cc034788ab306957d5fd969 landed
- 03:54 PM rgw Bug #6302: rgw: tasks/rgw_s3tests_multiregion.yaml tests hang
- Pushed a fix to next (commit:4216eac0f59af60f60d4ce909b9ace87a7b64ccc). Still need to validate that this was the actu...
- 02:06 PM rgw Bug #6302: rgw: tasks/rgw_s3tests_multiregion.yaml tests hang
- I see this when trying to set worker bound:...
- 08:09 AM rgw Bug #6302 (Resolved): rgw: tasks/rgw_s3tests_multiregion.yaml tests hang
- > [30666] rgw/verify/{clusters/fixed-2.yaml fs/btrfs.yaml msgr-failures/few.yaml tasks/rgw_s3tests_multiregion.yaml v...
- 02:46 PM rgw Feature #6317 (New): rgw: replace pool creation
- A new API to create more than 8 PGs per generated pool now exists; switch to using it.
- 01:55 PM Fix #6059: osd: block reads while repgather is writing across replicas
- Note, just extending the obc->write_lock() region doesn't really work since it can cause the op_tp to be blocked prev...
- 01:40 PM Feature #6147 (Fix Under Review): mon: calculate, expose per-pool pg stat deltas
- wip-6147 ; pull request 596
- 01:27 PM rbd Documentation #5009 (In Progress): doc: explain how to get qemu packages for each distro
- 01:24 PM rbd Feature #4013: rbd: openstack: extend nova boot api to support going from image to volume
- https://review.openstack.org/#/c/42474/
- 01:21 PM rbd Feature #4013 (Resolved): rbd: openstack: extend nova boot api to support going from image to volume
- 01:23 PM rbd Subtask #4016 (Resolved): rbd: openstack: extend nova boot api: modify libvirt driver to support ...
- 01:21 PM rbd Feature #4211 (Rejected): get good qemu, libvirt versions+patches in CentOS+
- 01:17 PM rgw Feature #6195: rgw: test full sync (with large object)
- 01:11 PM rgw Feature #4342 (Duplicate): rgw: dr: data sync agent: update sync processing state
- 01:04 PM rgw Feature #4342 (In Progress): rgw: dr: data sync agent: update sync processing state
- 01:10 PM rgw Documentation #5119 (Resolved): rgw: document which pools allowed to collide
- Added commentary to the ceph configuration reference.
- 01:09 PM rgw Documentation #5651 (Resolved): rgw: secret features need documentation
- Community fix provided.
- 01:09 PM rgw Feature #6191 (Fix Under Review): rgw: DR: per bucket replica_log handling
- 01:04 PM rgw Feature #6191 (In Progress): rgw: DR: per bucket replica_log handling
- 01:09 PM rgw Feature #6190 (Fix Under Review): rgw: DR: read list of buckets to do full sync on
- 01:04 PM rgw Feature #6190 (In Progress): rgw: DR: read list of buckets to do full sync on
- 01:07 PM devops Bug #6314: catch case where mon starts up but has rank -1 according to mon_status.
- But...it *should* have rank -1 before it gets the chance to join the quorum, shouldn't it?
Or have I forgotten how r...
- 01:00 PM devops Bug #6314 (Resolved): catch case where mon starts up but has rank -1 according to mon_status.
- Now that `mon_status` is implemented we need some glue code to check it, make it a bit more robust, and ensure that the ra...
- 01:06 PM rgw Feature #6193 (In Progress): rgw: DR: per bucket sync
- 01:06 PM rgw Feature #6192 (In Progress): rgw: DR: per object sync
- 12:24 PM devops Feature #6020: radosgw-apache opinionated package
- The package should have the following:
- depend on our mod_fastcgi
- enable fastcgi module
- enable rewrite mo...
- 12:11 PM Bug #6313: dumpling: FAILED assert(latest->is_update()) from recover_primary()
- We restarted osd.4 and osd.1 to fix the issue.
- 12:08 PM Bug #6313 (Duplicate): dumpling: FAILED assert(latest->is_update()) from recover_primary()
- restarting the backfill target for the osd fixed the issue
- 11:51 AM devops Feature #6310: Get Dumpling into CentOS Ceph repo
- 11:50 AM devops Feature #6310 (Closed): Get Dumpling into CentOS Ceph repo
- 11:51 AM Bug #6311 (Resolved): Some scripts assume the location of python
- The following scripts' shebang strings all hardcode python's location to /usr/bin/python. They don't work on FreeBSD...
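The portable fix is the `#!/usr/bin/env python` form. As an illustration only (a hypothetical helper, not code from the tree), a check for hardcoded interpreter paths might look like:

```python
# Illustrative sketch: detect scripts whose shebang hardcodes the
# interpreter path, which breaks on systems (e.g. FreeBSD) where python
# lives elsewhere. The portable form is "#!/usr/bin/env python".
import os


def is_hardcoded_shebang(first_line):
    """True if the line pins python to /usr/bin/python specifically."""
    return first_line.strip().startswith("#!/usr/bin/python")


def find_hardcoded(root):
    """Yield files under root whose first line hardcodes the python path."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path) as f:
                    if is_hardcoded_shebang(f.readline()):
                        yield path
            except (OSError, UnicodeDecodeError):
                continue  # unreadable or binary file; skip it
```
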
- 11:41 AM devops Feature #5956: Implement a radosgw command in ceph-deploy
- Is this a dup of #4408?
- 11:40 AM devops Feature #6067 (In Progress): ceph-deploy: make mon create catch common errors
- 11:39 AM Bug #5492: scripts installing into /usr/usr/sbin (with --prefix=/usr)
- Danny's fix does not resolve the original problem in either Ubuntu 12.04 or FreeBSD 9.1. I think that the real probl...
- 10:05 AM RADOS Documentation #6308 (Resolved): crushtool examples are confusing/out of date
- this example:
crushtool --build 128 shelf uniform 4 rack straw 8 root straw 0 -o map
does not work
this does wo...
- 09:59 AM rgw Bug #6175 (Pending Backport): rgw: valgrind: invalid reads in copy_obj
- 09:36 AM rgw Bug #6268 (Pending Backport): rgw: RGWPutObj calls processor->complete() before all inflight data...
- 04:59 AM Bug #6301 (Need More Info): ceph-osd hung by XFS using linux 3.10
- 02:39 AM Bug #6301 (Closed): ceph-osd hung by XFS using linux 3.10
- The kernel configuration ( 3.10.11 sources ) is here : attachment:config.gz
The general ceph setup is described here...
09/12/2013
- 06:17 PM CephFS Bug #5418: kceph: crash in remove_session_caps
- 06:16 PM CephFS Bug #5927 (Resolved): kcephfs: ENOTEMPTY on rm -r
- 04:42 PM devops Bug #6300 (Resolved): ceph-deploy: mon create prompts for password
- ceph-deploy version: 1.2.3
ceph version: dumpling [v0.67.3]
on a single node setup, "mon create" command prompt...
- 03:28 PM Cleanup #6287 (Resolved): OSDmonitor.c message "still creating pgs, wait" should be explicit abou...
- 07:20 AM Cleanup #6287 (Fix Under Review): OSDmonitor.c message "still creating pgs, wait" should be expli...
- https://github.com/ceph/ceph/pull/592
- 03:18 PM rbd Bug #6299: Dumpling Creates Extra Log Files
- I have this exact same behavior since updating to dumpling. It only seems to affect dumpling. Cuttlefish and earlier ...
- 03:11 PM rbd Bug #6299 (Resolved): Dumpling Creates Extra Log Files
- Created my cluster with mkcephfs, and distribute a common ceph.conf to all nodes with monitors and osd stanzas define...
- 02:34 PM RADOS Bug #6297 (In Progress): ceph osd tell * will break when FD limit reached, messenger should close...
- In environments with a large number of OSD's (approaching or exceeding the file descriptor limit set), ceph osd tell ...
- 02:18 PM Bug #6296 (Won't Fix): osd_scrub_load_threshold Should Take Core Count Into Consideration
- osd_scrub_load_threshold defaults to 0.5, which prevents scrubs from starting on relatively unloaded multi-core systems. Th...
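The suggestion here reduces to comparing per-core load against the threshold. A minimal sketch with hypothetical names (not the actual OSD code):

```python
# Sketch of the ticket's proposal (hypothetical helper, not OSD code):
# scale the configured threshold by the core count, so an 8-core box
# with a 1-minute load average of 2.0 is still eligible to scrub.
def scrub_allowed(loadavg_1min, threshold=0.5, ncores=1):
    """Allow scrub when load per core stays under the threshold."""
    return loadavg_1min < threshold * ncores
```

At runtime the inputs would come from something like `os.getloadavg()[0]` and `os.cpu_count()`.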
- 01:26 PM devops Feature #6017: ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no...
- Pull request to implement the new mon_status : https://github.com/ceph/ceph-deploy/pull/71
Was merged into ceph-de...
- 01:24 PM devops Feature #6017 (In Progress): ceph-deploy mon create: create on all mons in ceph.conf + then do ga...
- 01:25 PM devops Bug #6288: 'ceph-deploy mon create' with --cluster flag fails
- What happens here is that the call to start the daemon in `ceph_deploy/hosts/debian/mon/create.py` does not pass the ...
- 08:27 AM devops Bug #6288 (Resolved): 'ceph-deploy mon create' with --cluster flag fails
- The command:
$ ceph-deploy --cluster ceph0 mon create cephgs-{0,1,2}
runs without error yet it fails to do anyt...
- 12:32 PM Fix #6278: osd: throttle snap trimming
- For a few weeks, I've had to run with noscrub and nodeep-scrub, because both can cause 100% spindle contention result...
- 11:40 AM Fix #6278: osd: throttle snap trimming
- The fix for bug 6291 resolves an issue with recovery using more resources than it should.
A workaround is to disab...
- 11:56 AM Bug #6294 (Rejected): avoid using std::list::size()
- It's O(N) and might have performance implications.
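In C++03, `std::list::size()` is allowed to walk the whole list; the usual cure is to maintain a counter alongside mutations and to prefer `empty()` for emptiness checks. A sketch of that bookkeeping pattern (illustrative only, not Ceph code):

```python
# The pattern behind the fix: instead of walking the list to compute its
# size (O(N) in C++03 std::list::size()), maintain a counter on every
# mutation, and use an emptiness check where only emptiness matters.
# CountedList is a hypothetical illustration, not Ceph code.
class CountedList:
    def __init__(self):
        self._head = None
        self._count = 0  # updated on every push/pop: O(1) size queries

    def push(self, value):
        self._head = (value, self._head)
        self._count += 1

    def pop(self):
        value, self._head = self._head
        self._count -= 1
        return value

    def size(self):
        return self._count  # O(1), no traversal

    def empty(self):
        return self._head is None  # prefer this over size() == 0
```
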
- 11:44 AM rbd Bug #6293 (Resolved): Using wildcard produces wrong output
Using a wildcard should output "osds ..... instructed to deep-scrub"
$ ceph osd deep-scrub '*'
osds 0,1,2,3 instr...
- 11:20 AM Bug #6292 (Pending Backport): Verify that recovery is truly complete
- daf417f9ccc9181c549ad2d4a19b16b0c3caf85f
- 11:20 AM Bug #6292 (Resolved): Verify that recovery is truly complete
- This came out of the analysis of bug 6226. Even though that bug was actually fixed by changes that prevented a negat...
- 11:16 AM Bug #6291 (Pending Backport): Recovery can take more resources than it should
- 139a714e13aa3c7f42091270b55dde8a17b3c4b8
- 11:07 AM Bug #6291: Recovery can take more resources than it should
- Caused by 944f3b73 "OSD: only start osd_recovery_max_single_start at once"
- 11:01 AM Bug #6291 (Resolved): Recovery can take more resources than it should
The code in do_recovery() uses MAX() instead of MIN(), so it can exceed osd_recovery_max_active.
- 10:58 AM Feature #6189 (In Progress): cachepool: osd: promote on read
- I'm starting on this now.
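The do_recovery() problem described under Bug #6291 above comes down to clamping with a minimum rather than a maximum. A hedged sketch with hypothetical names:

```python
# Hypothetical sketch of the clamping described in Bug #6291 (not the
# actual OSD code): the number of recovery ops to start must be bounded
# by both the per-call batch size and the remaining global budget; using
# max() here lets active ops exceed osd_recovery_max_active.
def ops_to_start(active, max_active, max_single_start):
    """Return how many recovery ops may start without exceeding budget."""
    budget = max_active - active
    if budget <= 0:
        return 0
    return min(budget, max_single_start)
```
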
- 10:50 AM Bug #6226 (Resolved): after editing crushmap and adding new hosts, injecting it, several existing...
This was already fixed by backport commit 1ea6b561 in the v0.61.8 release. See previous comment.
- 10:40 AM CephFS Bug #6279 (Pending Backport): creating a new fs on pools from an old fs can lead to lost MDS Tables
- 10:39 AM CephFS Bug #6279 (Resolved): creating a new fs on pools from an old fs can lead to lost MDS Tables
- Merged in commit:a41aa9468f4c8dd92604c20e015904ac75f1e746.
Thanks!
- 10:07 AM Feature #6033 (Resolved): cachepool: osd: basic io decision: read/write from/to cache pool or EAG...
- This got merged into master a couple days ago; commit:383d8a199ea70578bb418cf36812963b04c42873
- 10:03 AM CephFS Feature #6290 (Resolved): Journaler: warn and shut down if we hit end of journal too early
- If a gap gets created in the journal, the Journaler shuts down replay but doesn't warn that it hit ENOENT unexpectedl...
- 09:02 AM devops Bug #6289 (Resolved): new remote connection needs to be able to know about sudo
- The new execnet library needs to know about the need to insert `sudo` to remote commands (something that was already ...
- 08:30 AM devops Feature #4954 (Resolved): ceph-deploy: help and document need to be updated for osd create
- Merged into ceph-deploy master branch with hash: 26a41b6
- 08:21 AM devops Feature #4954 (In Progress): ceph-deploy: help and document need to be updated for osd create
- pull request opened https://github.com/ceph/ceph-deploy/pull/74
- 08:23 AM devops Bug #5975 (Resolved): Find a real fix for the pushy issue of hanging/deadlocking during long-runn...
- Merged into ceph-deploy master with hash: 5a87cb8
We have now started the migration away from pushy which fixes ...
- 07:26 AM devops Bug #5975 (In Progress): Find a real fix for the pushy issue of hanging/deadlocking during long-r...
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/71
- 07:33 AM Bug #5700: very high memory usage after update
- I just upgraded to dumpling: adjusted ceph.conf, restarted all mons, restarted all osds
After the restart, the osd...
09/11/2013
- 11:43 PM Cleanup #6287 (Resolved): OSDmonitor.c message "still creating pgs, wait" should be explicit abou...
- In the case where the cluster is still busy creating PGs after a 'pool set pg_num' and a 'pool set pgp_num' is done, ...
- 11:00 PM rgw Bug #6286 (Pending Backport): rgw: use of std::list::size() in ObjectCache
- 10:26 PM rgw Bug #6286 (Resolved): rgw: use of std::list::size() in ObjectCache
- We shouldn't call list::size(); it's O(n).
- 10:45 PM Bug #6226 (Fix Under Review): after editing crushmap and adding new hosts, injecting it, several ...
- 04:01 PM Bug #6226: after editing crushmap and adding new hosts, injecting it, several existing OSD crashed
There is a race already fixed in a later release by (01d3e094) which could allow start_recovery_ops() to be called ...
- 12:10 AM Bug #6226: after editing crushmap and adding new hosts, injecting it, several existing OSD crashed
- I was wrong - we are indeed on 0.61.7
root@ineri ~$ ndo all_nodes ceph --version
h0 ceph version 0.61.7...
- 07:51 PM CephFS Bug #6284 (Fix Under Review): client: failed xlist assert on put_inode()
- https://github.com/ceph/ceph/pull/590
- 02:05 PM CephFS Bug #6284 (Resolved): client: failed xlist assert on put_inode()
- teuthology archives: /a/teuthology-2013-09-11_01:30:26-upgrade-fs-next-testing-basic-plana/29583/remote/ubuntu@plana3...
- 07:51 PM CephFS Bug #6279 (Fix Under Review): creating a new fs on pools from an old fs can lead to lost MDS Tables
- 06:55 PM CephFS Bug #6279: creating a new fs on pools from an old fs can lead to lost MDS Tables
- https://github.com/ceph/ceph/pull/589
- 12:39 PM CephFS Bug #6279 (Resolved): creating a new fs on pools from an old fs can lead to lost MDS Tables
- See the thread at http://www.mail-archive.com/ceph-users@lists.ceph.com/msg03918.html
If you delete your FS (but n...
- 05:03 PM rbd Bug #6257: rbd: cp on sparse image allocates objects in dest
- Same goes for "rbd flatten", which also seems to allocate every block in the resulting image, even if large parts are...
- 04:47 PM CephFS Bug #4221 (Fix Under Review): MDS: LogEvent::decode needs to respect mds_log_skip_corrupt_events ...
- 04:46 PM CephFS Bug #4221: MDS: LogEvent::decode needs to respect mds_log_skip_corrupt_events for DECODE macros
- A corrupt object might lead to asserts getting thrown as part of DECODE_START or DECODE_FINISH. These macros are not ...
- 01:47 PM rgw Bug #6175 (Fix Under Review): rgw: valgrind: invalid reads in copy_obj
- 01:27 PM Bug #6283 (Resolved): ppc build fails
- josef reports: http://kojipkgs.fedoraproject.org//work/tasks/4881/5924881/build.log
- 01:20 PM devops Bug #6281 (Closed): ceph-deploy config.py write_conf throws away old config just because they are...
- For example:
ceph-deploy new node-1:10.20.0.2
echo [global] >> /etc/ceph/ceph.conf
echo public address = 10.20.0.0...
- 12:37 PM Fix #6278: osd: throttle snap trimming
- We are collecting perf dump metrics from all OSD and RBD admin sockets. What metrics would be the most useful to grap...
- 12:33 PM Fix #6278 (Resolved): osd: throttle snap trimming
- Qemu guests on our cluster experience high I/O latency, stalls or complete halts when spindle contention is created b...
- 12:25 PM rbd Bug #5955 (Resolved): qemu deadlock when librbd caching enabled (writethru or writeback).
- 11:34 AM rbd Bug #5955: qemu deadlock when librbd caching enabled (writethru or writeback).
- Sage,
This issue should be closed, as it seems to be resolved by upgrading qemu to include joshd's async flush pat...
- 10:39 AM rgw Bug #6078 (Resolved): rgw: CORS not working
- Done. Backported to dumpling, merged at commit:a304016fa01b02efd500135c00b9bf3407a9999c. Also, commit:670db7e80ddc9c2...
- 10:01 AM rbd Bug #5428 (Can't reproduce): libceph: null deref in ceph_auth_reset
- 10:01 AM Feature #6274 (Fix Under Review): mon: MonCommand.h unit tests
- 01:16 AM Feature #6274 (Resolved): mon: MonCommand.h unit tests
- "work in progress":https://github.com/ceph/ceph/pull/588
unit tests to validate the syntax of the strings: syntax...
- 10:00 AM rbd Bug #5876 (Resolved): Assertion failure in rbd_img_obj_callback() : rbd_assert(which >= img_reque...
- 09:59 AM Linux kernel client Bug #5429 (Duplicate): libceph: rcu stall, null deref in osd_reset->__reset_osd->__remove_osd
- i think/hope this is a duplicate of the async notify racing with shutdown
- 09:59 AM rbd Bug #5647 (Resolved): krbd: EBLACKLIST osd reply resulting in an oops on 3.9
- 09:58 AM rbd Bug #5636 (Resolved): krbd: crash in image refresh
- 09:57 AM rbd Bug #5454 (Resolved): krbd: assertion failure in rbd_img_obj_callback()
- 09:57 AM rbd Bug #5760 (Resolved): libceph: osdc_build_request(): BUG_ON(p > msg->front.iov_base + msg->front....
- 09:36 AM rgw Bug #6268 (Fix Under Review): rgw: RGWPutObj calls processor->complete() before all inflight data...
- 08:34 AM Feature #6227 (New): make osd crush placement on startup handle multiple trees (e.g., ssd + sas)
- 12:11 AM Feature #6227: make osd crush placement on startup handle multiple trees (e.g., ssd + sas)
- ok - that works nicely as a workaround for us.
Looking forward to better management tools for hierarchies ;)
/jc
- 07:59 AM devops Bug #6269 (Resolved): ceph-deploy needs to use the equivalent of `hostname -s`
- Merged to ceph-deploy master branch with hash: c7756f8
- 07:58 AM devops Bug #6269: ceph-deploy needs to use the equivalent of `hostname -s`
- Pull request opened: https://github.com/ceph/ceph-deploy/pull/72
09/10/2013
- 08:08 PM rgw Bug #6214 (Resolved): rgw: PUT object with chunked upload doesn't propagate client side errors
- 08:08 PM rgw Bug #6214: rgw: PUT object with chunked upload doesn't propagate client side errors
- No need for backporting this one. It is clear that this wasn't the culprit for the original issue (seems like it was a...
- 08:04 PM rgw Bug #6152 (Resolved): New S3 auth code fails when using response-* query string params to overrid...
- Was already merged into dumpling (commit:9b953aa4100eca5de2319b3c17c54bc2f6b03064).
- 05:17 PM Bug #6272 (Need More Info): ceph command usage missing setcrushmap
- ...
- 03:56 PM Bug #6272 (Closed): ceph command usage missing setcrushmap
This documented feature does not show up in ceph --help output
ceph osd setcrushmap -i {compiled-crushmap-filen...
- 04:42 PM devops Cleanup #6273 (New): Doxygen Renderer
- Sphinx has been upgraded to 1.1.3, fixing the index problem. However, we do have an outstanding issue with Doxygen. ...
- 03:16 PM Bug #5922 (Duplicate): osd: unfound objects on next
- 02:27 PM Fix #6116: osd: incomplete pg from thrashing on next
- Ok, there are enough logs to confirm that this is the primary-thinks-it's-clean vs backfill-peer-thinks-it's-clean race.
- 02:18 PM Fix #6116: osd: incomplete pg from thrashing on next
- 1.3f does appear to be incomplete in the osd log
- 02:16 PM Fix #6116: osd: incomplete pg from thrashing on next
- From the mon logs, last reported seems to be
2013-09-09 22:31:28.047555 7f56db94d700 15 mon.a@0(leader).pg v1614 go...
- 02:13 PM Fix #6116: osd: incomplete pg from thrashing on next
- There appear to be no pgs in incomplete state according to the osd log. Issue notifying the mon?
- 02:12 PM Fix #6116: osd: incomplete pg from thrashing on next
- The task was in process of letting the cluster recover with osd.2 down.
- 02:10 PM Fix #6116: osd: incomplete pg from thrashing on next
- Hmm, the last osd log entry indicates that the pg in question may have gone clean?
2013-09-09 22:27:19.022997 7f1724...
- 02:09 PM Fix #6116: osd: incomplete pg from thrashing on next
- time: 2717s
log: http://qa-proxy.ceph.com/teuthology/teuthology-2013-09-09_20:00:20-rados-dumpling-testing-basi...
- 02:11 PM CephFS Bug #4221: MDS: LogEvent::decode needs to respect mds_log_skip_corrupt_events for DECODE macros
- Actually, I think the details here are incorrect. mds_log_skip_corrupt_events behavior is broken, but it's not the re...
- 01:50 PM Bug #6118: failed to recover before timeout expired on radosbench, rados api tests
- http://qa-proxy.ceph.com/teuthology/teuthology-2013-09-04_20:00:07-rados-dumpling-testing-basic-plana/21637/
Seems...
- 10:51 AM devops Bug #6269 (Resolved): ceph-deploy needs to use the equivalent of `hostname -s`
- It currently uses:...
- 10:17 AM Bug #6226: after editing crushmap and adding new hosts, injecting it, several existing OSD crashed
- The bug description claims that cluster is running v0.61.5 but attached log says v0.61.7. Could there be a mix of no...
- 09:58 AM devops Feature #4954: ceph-deploy: help and document need to be updated for osd create
- ...
- 09:19 AM rgw Bug #6268 (Resolved): rgw: RGWPutObj calls processor->complete() before all inflight data is drained
- This is problematic because we have an ordering issue here. We end up updating the head before flushing all data is done...
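The ordering constraint described here — never publish the head until every inflight write has drained — can be sketched generically. ChunkedUpload is an illustration, not the rgw implementation:

```python
# Hedged sketch of the drain-before-complete pattern (illustrative only,
# not rgw code): complete() must block until every inflight chunk write
# has landed before it publishes the head object.
import threading


class ChunkedUpload:
    def __init__(self):
        self._inflight = 0
        self._cond = threading.Condition()
        self.head_written = False

    def begin_chunk(self):
        with self._cond:
            self._inflight += 1

    def chunk_done(self):
        with self._cond:
            self._inflight -= 1
            if self._inflight == 0:
                self._cond.notify_all()

    def complete(self):
        """Wait for all inflight chunks to drain, then write the head."""
        with self._cond:
            while self._inflight:
                self._cond.wait()
            self.head_written = True
```
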
- 12:49 AM Bug #5823: cpu load on cluster node is very high, client can't get data on pg from primary node ...
- The slow request increase frequently ...
see the attached file for more detail
09/09/2013
- 10:40 PM rbd Bug #6267: krbd: null deref in __kick_osd_requests+0x15e/0x1b0
- Sage Weil wrote:
> can you try a 3.10 kernel? there was at least one locking fix during that interval that could ex...
- 10:33 PM rbd Bug #6267 (Need More Info): krbd: null deref in __kick_osd_requests+0x15e/0x1b0
- can you try a 3.10 kernel? there was at least one locking fix during that interval that could explain this. (also, ...
- 10:24 PM rbd Bug #6267 (Resolved): krbd: null deref in __kick_osd_requests+0x15e/0x1b0
- [639680.982539] BUG: unable to handle kernel NULL pointer dereference at 0000000000000498
[639680.986988] IP: [<ffff...
- 09:51 PM RADOS Bug #6246 (Pending Backport): crushtool dumps core with non-unique bucket IDs
- 01:09 PM RADOS Bug #6246 (Fix Under Review): crushtool dumps core with non-unique bucket IDs
- 05:55 PM devops Bug #6266 (Resolved): ceph-deploy new command broken
- 04:49 PM devops Bug #6266: ceph-deploy new command broken
- wip-6266 works fine!
- 04:44 PM devops Bug #6266 (Fix Under Review): ceph-deploy new command broken
- wip-6266
- 04:32 PM devops Bug #6266 (Resolved): ceph-deploy new command broken
- on mira016, that has the latest version of ceph-deploy [v1.2.3], new command is broken and so am not able to proceed ...
- 04:37 PM devops Bug #6255 (Resolved): ceph-deploy: fail to parse json from mon_status
- commit:f957f89
- 06:04 AM devops Bug #6255 (Fix Under Review): ceph-deploy: fail to parse json from mon_status
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/69
- 04:22 PM rbd Bug #6265 (Resolved): krbd: blockdev --setr{o,w} claims success but has no effect
- From http://permalink.gmane.org/gmane.comp.file-systems.ceph.user/3957...
- 04:18 PM rbd Feature #6264 (Resolved): rbd: expose all options available to rbd map
- These include ro/rw, share/noshare for osd_clients, crc/nocrc, and osd client timeouts.
These are mainly useful for ...
- 04:15 PM Bug #6254 (In Progress): ceph_test_rados: rollback then delete gets ENOENT
- 04:12 PM Feature #6038 (Fix Under Review): cachepool: filestore/osd: infrastructure for large object COPY ...
- 04:12 PM Feature #6147 (In Progress): mon: calculate, expose per-pool pg stat deltas
- 04:01 PM Bug #6230 (Pending Backport): ceph osd crush move appears to be broken
- 09:41 AM Bug #6230: ceph osd crush move appears to be broken
- Joao - please take a look.
- 08:56 AM Bug #6230: ceph osd crush move appears to be broken
- nope, looks like this is supposed to work (and did in 0.61). from ML:...
- 03:01 PM Feature #6000 (Resolved): EC: [link] erasure plugin mechanism and abstract API
- 03:00 PM Subtask #5878 (Resolved): erasure plugin mechanism and abstract API
- 01:58 PM RADOS Fix #6262 (New): toofull osd prevents backfilling of other pg replicas
- Say a pg is to be 4-way replicated across osds [0,1,2,3].
AFAICT, if any of the osds 0, 1 or 2 hit the toofull thr...
- 01:48 PM Feature #6261 (Resolved): ceph-filestore-dump use cases for disaster recovery
- Context: I often take cluster snapshots and compare file hashes and replication factors of all osds in my cluster, to...
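Comparing file hashes across OSD data directories, as described here, can be sketched as hashing every file under each root and diffing the resulting maps (an illustration only, not ceph-filestore-dump):

```python
# Illustrative only (not ceph-filestore-dump): hash every file under a
# root and return {relative_path: digest}; comparing the maps from two
# OSD data directories reveals missing or divergent object files.
import hashlib
import os


def tree_digests(root):
    """Map each file's path (relative to root) to its SHA-1 hex digest."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            digests[os.path.relpath(path, root)] = h.hexdigest()
    return digests
```
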
- 01:24 PM Feature #6033 (Fix Under Review): cachepool: osd: basic io decision: read/write from/to cache poo...
- The OSD is slightly less stupid now, and it looks like making it smarter is going to be part of future tickets (the c...
- 12:09 PM rbd Bug #5426: librbd: mutex assert in perfcounters::tinc in librbd::AioCompletion::complete()
- ubuntu@teuthology:/a/teuthology-2013-09-09_01:36:24-upgrade-small-next-testing-basic-vps/27433
- 09:47 AM Bug #6040 (Resolved): Significant slowdown of osds since v0.67 Dumpling
- 09:45 AM RADOS Fix #6250: OSD: handle ENODEV on reads
- This was a mailing list report about when the OSD gets back ENODEV from the underlying filesystem.
- 09:39 AM RADOS Fix #6250: OSD: handle ENODEV on reads
- do you have a log? i see no instance of ENODEV in the code..
- 09:38 AM Feature #6258: ceph-disk: zap should wipefs
- Waiting for feedback from list.
- 09:07 AM Feature #6258 (Rejected): ceph-disk: zap should wipefs
- see
http://thread.gmane.org/gmane.comp.file-systems.ceph.user/3726/focus=3774
- 09:17 AM rgw Bug #6240: rgw: invalid read on addr in msgr via objecter
- What I see in the failed case is this:...
- 08:51 AM devops Feature #6017: ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no...
- And I just made this work with execnet.
... and it doesn't hang *at all*...
- 06:57 AM devops Feature #6017: ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no...
- And now we have to back out of this change because once again, the infamous pushy bug (#5975) that cannot be fixed.
...
- 08:17 AM Bug #5175: leveldb: LOG and MANIFEST file grow without bound (LOG being _text_ log !)
- Here's how leveldb works in respect to the log:
- setting it to /dev/null will only make the info messages to be w...
- 08:03 AM Bug #5804 (Can't reproduce): mon: binds to 0.0.0.0:6800something port
- Have inspected the code and haven't found a reason for this to happen yet, nor was I able to reproduce this at all.
...
- 02:12 AM Bug #6043: upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
- Has anyone else experienced this issue? It seems to be affecting a few others - http://lists.ceph.com/pipermail/ceph-...
- 01:53 AM Subtask #6113 (In Progress): add ceph osd pool create [name] [key=value]
09/08/2013
- 10:21 PM rgw Bug #6240: rgw: invalid read on addr in msgr via objecter
- the first difference i see between a passing and failing run is that the passing run gets ENOENT when reading default...
- 09:45 PM rbd Bug #6257 (Resolved): rbd: cp on sparse image allocates objects in dest
- ...
- 09:41 PM Bug #6256 (Resolved): rados bench: segfault in ceph_clock_now
- 04:14 PM Bug #6256 (Resolved): rados bench: segfault in ceph_clock_now
- ...
- 07:04 PM Support #6238: About the data migration in Ceph
- You can find all the information on signing up for the mailing lists here:
http://ceph.com/resources/mailing-list-...
- 03:43 AM Support #6238: About the data migration in Ceph
- Greg Farnum wrote:
> These sorts of questions are appropriate for the ceph-users list. Please send them there. :)
...
- 04:17 PM Bug #6207: Found incorrect object contents
- ubuntu@teuthology:/a/teuthology-2013-09-08_01:00:04-rados-master-testing-basic-plana/25850
- 09:59 AM Bug #6118: failed to recover before timeout expired on radosbench, rados api tests
- another one with full logs: ubuntu@teuthology:/a/teuthology-2013-09-07_13:39:47-rados-dumpling-testing-basic-plana/25183
- 09:56 AM devops Bug #6255 (Resolved): ceph-deploy: fail to parse json from mon_status
- ...
- 09:22 AM Bug #6254 (Resolved): ceph_test_rados: rollback then delete gets ENOENT
- ubuntu@teuthology:/a/teuthology-2013-09-07_01:00:04-rados-next-testing-basic-plana/24495
ubuntu@teuthology:/a/teutho...
09/07/2013
- 04:33 PM rgw Bug #6240: rgw: invalid read on addr in msgr via objecter
- ...
- 03:31 PM devops Feature #1668: collectd: push ceph plugin upstream
- I've issued a pull request with some updates: https://github.com/ceph/collectd/pull/1
I'll rebase against the mast...
- 09:02 AM Bug #6128 (Rejected): glance image-create with rbd --location fails to create image in rbd
- Closing in the ceph tracker as it's not ceph-specific, but a general bug in glance.
09/06/2013
- 02:53 PM devops Feature #6017: ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no...
- Added a helper to check the mon_status in the remote host, it will be used to make sure it is actually running correc...
- 12:32 PM Bug #6249: daemon mon_status should report a daemon is not running
- Something along those lines would totally work for me :)
- 10:00 AM Bug #6249: daemon mon_status should report a daemon is not running
- The path could just be the wrong path as well (eg, got changed in config but daemon wasn't restarted). To be accurate...
- 08:48 AM Bug #6249 (Resolved): daemon mon_status should report a daemon is not running
- The current behavior is that it will return a generic exception log with a `No such file or directory` which is confu...
- 10:34 AM Bug #5823: cpu load on cluster node is very high, client can't get data on pg from primary node ...
- I'm using Ubuntu 12.10 (GNU/Linux 3.5.0-25-generic x86_64) kernel for all cluster nodes.
sometimes, i saw the osd ...
- 10:11 AM RADOS Fix #6250 (New): OSD: handle ENODEV on reads
- Apparently the OSD translates a read error response of ENODEV into ENOENT and returns that to the client. That respon...
- 09:31 AM Bug #6235 (Resolved): fast intel crc code reads trailing words
- oops, he already did.. i'll merge this in!
- 09:29 AM Bug #6235: fast intel crc code reads trailing words
- Greg - please review.
- 09:04 AM devops Feature #5282 (In Progress): Get Dumpling into EPEL
- I've pinged both the epel/fedora and centos maintainers about including 0.67.3 when it's released.
- 09:02 AM devops Feature #5847 (Resolved): Build own versions of most recent leveldb for all supported platforms.
- Removing the Basho fix appears to fix the problem with centos/rhel6.3 platforms, so going with that solution.
Note...
- 08:53 AM rgw Bug #5702 (Resolved): Radosgw RPM unnecessarily requires mod_fcgid
- Resolved with:
commit 8df504c157fc9de526657e5787c8fb532b678320
Author: Gary Lowell <gary.lowell@inktank.com>
Dat...
- 08:51 AM Bug #6083 (Resolved): fedora18 rpm packages for ceph should be built with proper naming conventio...
- Resolved with the following commit:
commit 22e26a694da98da29a3fb3aa63e1627140831f39
Author: Gary Lowell <gary.low...
- 08:45 AM rbd Feature #4917 (Resolved): iSCSI: Package tgt
- Pushed to ceph-extras repo.
- 08:37 AM Bug #6223 (Resolved): error generating keys
- 03:47 AM Bug #6223: error generating keys
- Hi!!
I've reinstalled Ceph with the new hostname and it's working fine now!! :)
Thanks a lot!
- 08:24 AM devops Bug #6104 (Resolved): ceph-deploy should workaround pseudo-tty in SSH
- Merged into ceph-deploy master branch, with hash: 8eee1a3
- 07:22 AM devops Bug #6104 (Fix Under Review): ceph-deploy should workaround pseudo-tty in SSH
- This is not fixable with `pushy` (see ticket thread). So we are now falling back to disabling sudo if you are connect...
- 05:19 AM RADOS Bug #6246 (Resolved): crushtool dumps core with non-unique bucket IDs
- Any crushmap with duplicate bucket IDs will cause crushtool to dump core on compile. It would be much better if crush...
09/05/2013
- 10:42 PM Bug #6233: OSD crash during repair
- Was missing xattrs:
2013-09-06 09:30:19.813811 7f0ae8cbc700 0 log [INF] : applying configuration change: internal...
- 07:46 PM Bug #6233: OSD crash during repair
- The pg being repaired at the time is 2.12, which 'ceph pg dump' tells me lives on [6,7]. Attached log is the output a...
- 05:23 PM devops Bug #6245 (Resolved): ceph-deploy: install command broken
- Fix merged to ceph-deploy master branch, with hash 225b7fbbab3ab9880484e2b814a4443986765849
After going back from ...
- 05:21 PM devops Bug #6245 (Fix Under Review): ceph-deploy: install command broken
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/64
- 04:32 PM devops Bug #6245 (Resolved): ceph-deploy: install command broken
- ceph deploy version: 1.2.3
when trying to install ceph using ceph-deploy, it always installs the default ubuntu ...
- 12:17 PM rgw Bug #6214 (Pending Backport): rgw: PUT object with chunked upload doesn't propagate client side e...
- Looks good to me.
- 12:17 PM devops Bug #6035 (Closed): ceph-deploy: ceph-create-keys stuck on fedora 18 VMs
- Closing as I cannot replicate this any more.
- 12:16 PM devops Bug #6160 (Resolved): allow installation of packages only
- merge on ceph-deploy's master branch with hash: 73b920a
- 07:03 AM devops Bug #6160: allow installation of packages only
- pull request opened https://github.com/ceph/ceph-deploy/pull/62
- 12:12 PM devops Feature #6017: ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no...
- currently blocked as I cannot correctly implement mon_status on remote hosts because I get output like:...
- 10:05 AM Feature #6033: cachepool: osd: basic io decision: read/write from/to cache pool or EAGAIN; make o...
- wip-6033-redirects
The Objecter half of this is done and seems to be working correctly. The OSD portion is a bit mor...
- 09:41 AM Bug #6223: error generating keys
- jordi arcas wrote:
> Hi Sage,
> It doesn't work...
> I've changed the hostname, which really is NCSL007.
> I atta...
- 09:36 AM Bug #6223: error generating keys
- Hi Sage,
It doesn't work...
I've changed the hostname, which really is NCSL007.
I attach you all logs with the new...
- 08:15 AM Bug #6223: error generating keys
- jordi arcas wrote:
> Hi! I attach you all the logs I found
>
> Thanks!!
Heh, it was actually the IPs and hostn...
- 01:25 AM Bug #6223: error generating keys
- Hi! I attach you all the logs I found
Thanks!!
- 09:41 AM Fix #6242 (Resolved): ceph CLI should honor `--help` and not return log lines back
- I wanted to have access to the help menu in the `ceph` CLI, but for some reason it just returns a few log lines
with...
- 09:40 AM Support #6238 (Rejected): About the data migration in Ceph
- These sorts of questions are appropriate for the ceph-users list. Please send them there. :)
- 06:15 AM Support #6238 (Rejected): About the data migration in Ceph
- Hi all,
Recently I read the source code and the paper, and I have some questions about the data movement:
1. when...
- 09:39 AM Bug #6230: ceph osd crush move appears to be broken
- IIRC move is only for leaf items.. there are link and unlink for buckets?
- 09:31 AM devops Bug #6237 (Resolved): osd create fails because key is not in remote function
- Merged into ceph-deploy master branch with hash: 3b49407cdc5d3cfe2a55fd843f7e8c8723a4c7db
- 07:05 AM devops Bug #6237: osd create fails because key is not in remote function
- Pull request opened: https://github.com/ceph/ceph-deploy/pull/63
- 06:02 AM devops Bug #6237 (Resolved): osd create fails because key is not in remote function
- ...
- 08:26 AM Bug #6003: journal Unable to read past sequence 406 ...
- ubuntu@teuthology:/a/teuthology-2013-09-04_17:24:00-rados-master-testing-basic-plana/21102
- 08:20 AM rbd Bug #5426: librbd: mutex assert in perfcounters::tinc in librbd::AioCompletion::complete()
- ubuntu@teuthology:/a/teuthology-2013-09-05_01:00:58-rbd-next-testing-basic-plana/22153
- 08:05 AM Bug #6118: failed to recover before timeout expired on radosbench, rados api tests
- http://qa-proxy.ceph.com/teuthology/teuthology-2013-09-04_20:00:07-rados-dumpling-testing-basic-plana/21637/
- 08:04 AM rgw Bug #6240 (Resolved): rgw: invalid read on addr in msgr via objecter
- ...
09/04/2013
- 11:28 PM Bug #6236 (Won't Fix): ceph pg dump columns sometimes out of alignment
We should pad columns so that even when multiple states are present for a pg the column headers are in the right po...
- 09:31 PM Bug #6235 (Resolved): fast intel crc code reads trailing words
- ...
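The padding proposed in #6236 above is a per-column width computation. A minimal Python sketch of the technique (a hypothetical helper, not the actual `ceph pg dump` formatter):

```python
def format_aligned(rows):
    """Pad every column to its widest cell so the headers stay aligned
    even when a pg reports several '+'-joined states in one cell."""
    widths = [max(len(str(c)) for c in col) for col in zip(*rows)]
    return [
        "  ".join(str(c).ljust(w) for c, w in zip(row, widths)).rstrip()
        for row in rows
    ]

rows = [
    ("pg_stat", "state", "up"),
    ("2.12", "active+clean+scrubbing+deep", "[6,7]"),
    ("2.13", "active+clean", "[4,2]"),
]
for line in format_aligned(rows):
    print(line)
```

Each column is left-justified to the width of its widest value, so a long multi-state cell stretches the whole column rather than shifting the columns to its right.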
- 09:04 PM Bug #6003: journal Unable to read past sequence 406 ...
- ubuntu@teuthology:/a/teuthology-2013-09-04_17:24:00-rados-master-testing-basic-plana/21102
- 06:48 PM Documentation #6234 (Resolved): all our new-user paths need to document name-resolution restrictions
- Yet another new user was confused about needing a non-127/8 address for the hostname today.
http://ceph.com/docs/m...
- 05:11 PM Bug #6233 (Closed): OSD crash during repair
- On 0.56.7-1~bpo70+1, whilst trying to repair an OSD:
2013-09-05 09:19:33.020619 7f540a12d700 0 log [ERR] : 2.12 r...
- 04:47 PM rgw Bug #6111 (Resolved): rgw: multipart upload fails when last chunk < 512k
- Fixed, commit:9a551296e0811f2b65972377b25bb28dbb42f575.
- 03:45 PM rgw Bug #6111: rgw: multipart upload fails when last chunk < 512k
- The fix looks good to me. It looks like the s3-tests branch (wip-6111) has a couple extra lines left over from debugg...
- 04:39 PM Fix #5989 (Resolved): librados: document that bufferlist usage model is inconsistent
- 04:26 PM rgw Bug #6088 (Resolved): rgw: When uploading via POST specifying text instead of file formdata input...
- Fixed, commit:c8ec532fadc0df36e4b265fe20a2ff3e35319744. Also cherry-picked to bobtail, cuttlefish, and dumpling.
- 04:22 PM rgw Bug #6088: rgw: When uploading via POST specifying text instead of file formdata input field, a s...
- The bug reproduction was manual; not sure how easy it'd be to create a boto-based test for that.
- 03:01 PM rgw Bug #6088: rgw: When uploading via POST specifying text instead of file formdata input field, a s...
- Looks good to me, is there an associated s3-tests branch as well?
- 04:18 PM rgw Bug #6078 (Pending Backport): rgw: CORS not working
- 02:52 PM rgw Bug #6078: rgw: CORS not working
- Left a couple comments on github. It'd be nice to try to break up some of the changes and explicitly explain why chan...
- 04:02 PM rgw Feature #6232 (New): rgw: create unit test for is_string_in_set()
- Recommended in code review for issue #6078
- 01:46 PM rbd Feature #4017 (Resolved): rbd: openstack: simplify volume booting with new api
- 01:41 PM rbd Feature #4017: rbd: openstack: simplify volume booting with new api
- Implemented by https://review.openstack.org/#/c/41728/
- 01:42 PM devops Bug #6160 (In Progress): allow installation of packages only
- 01:34 PM Feature #6231 (Resolved): buffer: cache crc in buffer::raw (or similar) to avoid recalculation fo...
- 01:32 PM rbd Subtask #4020 (Resolved): rbd: openstack: simplify volume booting with new api: make image boot b...
- superseded by modified boot panel interface: https://review.openstack.org/#/c/41728/
- 01:31 PM rbd Subtask #4019 (Resolved): rbd: openstack: simplify volume booting with new api: add boot option t...
- superseded by modified boot panel interface: https://review.openstack.org/#/c/41728/
- 01:30 PM rbd Subtask #4018 (Resolved): rbd: openstack: simplify volume booting with new api: modify boot panel...
- superseded by modified boot panel interface: https://review.openstack.org/#/c/41728/
- 01:26 PM rbd Subtask #4015 (Resolved): rbd: openstack: extend nova boot api: add block_dev_mapping_v2 to nova-...
- https://review.openstack.org/#/c/38815/
- 01:23 PM rbd Subtask #4014 (Resolved): rbd: openstack: extend nova boot api: add block_dev_mapping_v2 to nova-api
- https://review.openstack.org/#/c/32568/
- 01:14 PM rbd Subtask #4016 (Fix Under Review): rbd: openstack: extend nova boot api: modify libvirt driver to ...
- https://review.openstack.org/#/c/42474/
- 12:41 PM Bug #6230 (Resolved): ceph osd crush move appears to be broken
Per documentation at: http://ceph.com/docs/master/rados/operations/crush-map/
ceph osd crush move {bucket-name} ...
- 10:53 AM Bug #6207: Found incorrect object contents
- The problem offset appears not to have changed since the prior successful read, and kern.log had:
2013-09-04T08:57...
- 09:44 AM Bug #6207: Found incorrect object contents
- full logs here: ubuntu@teuthology:/var/lib/teuthworker/archive/sage-bug-6222-b/20456
- 10:41 AM devops Bug #4924 (Resolved): ceph-deploy: gatherkeys fails on raring (cuttlefish)
- 10:30 AM Bug #6222 (Resolved): ceph_test_rados failure; user_version 0 after a write
- Merged into master. :)
- 09:41 AM Bug #6223 (Need More Info): error generating keys
- can you attach your ceph.conf, /etc/hosts from the host where the mon is, and the output from 'ceph daemon mon.`hostn...
- 04:17 AM Bug #6223 (Resolved): error generating keys
- Hi!
I'm trying to prepare an admin and server node with ceph-deploy.
I've configured SSH, hostnames and I've instal...
- 09:24 AM rgw Bug #6214: rgw: PUT object with chunked upload doesn't propagate client side errors
- Josh - can you please review?
- 09:21 AM devops Feature #6017 (In Progress): ceph-deploy mon create: create on all mons in ceph.conf + then do ga...
- Issue #6132 has been resolved and that adds a big warning message when the provided hostname does not match that of t...
- 09:15 AM Feature #6227: make osd crush placement on startup handle multiple trees (e.g., ssd + sas)
- The simplest solution is to set 'osd crush update on start = false' in your ceph.conf. We don't have a mechanism rig...
- 08:02 AM Feature #6227 (New): make osd crush placement on startup handle multiple trees (e.g., ssd + sas)
- See our crush map for layout: basically, we have 64 OSD SATA drives in one hierarchy and 6 SSD drives in a 'ssd' root...
- 09:07 AM devops Bug #6132 (Resolved): ceph-deploy to detect and warn when host != hostname
- Merged to ceph-deploy master branch with hash: 124e53e3513e80b306c68af7db1f4fe13e5d9d58
- 08:59 AM devops Bug #6132 (Fix Under Review): ceph-deploy to detect and warn when host != hostname
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/60
- 08:06 AM rbd Feature #6228 (Resolved): image name metavariable
- h3. User description
Symptoms:
* When a VM has two RBD volumes attached, the admin socket only access one of th...
- 07:39 AM Bug #6226 (Resolved): after editing crushmap and adding new hosts, injecting it, several existing...
- I have edited the crushmap on our 70 OSD, 10 server, 0.61.5 ceph cluster (see attached file) and injected it.
I ha...
- 06:50 AM rbd Bug #5760: libceph: osdc_build_request(): BUG_ON(p > msg->front.iov_base + msg->front.iov_len);
- Same thing here : I cherry-picked your commit (a9fb92762883e2522fc4d1dcd403c5d888264746 : rbd: fix buffer size for wr...
- 06:36 AM devops Bug #5944: ceph-deploy osd needs to be moved to use the new remote helpers
- This is a **massive** effort because `osd` is huge. Will update as I make progress on this.
`prepare` and `activat...
09/03/2013
- 10:42 PM Bug #6222 (Fix Under Review): ceph_test_rados failure; user_version 0 after a write
- 08:56 PM Bug #6222 (Resolved): ceph_test_rados failure; user_version 0 after a write
- ...
- 08:53 PM Bug #6179 (Resolved): ceph_test_rados user_version checks fail during thrashing
- 09:36 AM Bug #6179 (Fix Under Review): ceph_test_rados user_version checks fail during thrashing
- 09:35 AM Bug #6179: ceph_test_rados user_version checks fail during thrashing
- Looks like maybe he found the bug (wip-6179).
- 04:26 PM Feature #6221 (New): Objecter,OSD: make it easy to determine when the network is misbehaving
- This probably takes the form of:
1) perf counter for Pipe reconnects
2) maybe an average for the period from sendin...
- 04:01 PM Feature #6038 (In Progress): cachepool: filestore/osd: infrastructure for large object COPY atomi...
- 04:01 PM Feature #6031 (Resolved): cachepool: osd: COPY from another pool; small objects only
- 12:33 PM Feature #6031 (In Progress): cachepool: osd: COPY from another pool; small objects only
- 11:13 AM Feature #6031 (Resolved): cachepool: osd: COPY from another pool; small objects only
- 02:20 PM Bug #6196 (Resolved): unittest_lfnindex testing older HASH_INDEX_TAG
- 7ec0b4fb780b91b44427ed94eee82c3c6b6fff9f
- 09:36 AM Bug #6196 (In Progress): unittest_lfnindex testing older HASH_INDEX_TAG
- 02:08 PM rgw Documentation #6217 (Closed): rgw: update man pages
- for radosgw, radosgw-admin
- 01:47 PM devops Bug #6216 (Resolved): rpm missing package when junit not installed
- Either need to not fail if libcephfs_java is not built, or configure needs to fail if the required package is not fou...
- 01:33 PM rgw Bug #6161 (Resolved): radosgw 0.67.2 update -> "ERROR: failed to initialize watch"
- Pushed a fix, commit:1d1f7f18dfbdc46fdb09a96ef973475cd29feef5.
- 01:29 PM rgw Cleanup #6215 (New): rgw: get rid / rename base op class ret value
- This is a remnant of the ancient original gateway architecture, and should probably die now. Issues like #6214 ca...
- 01:27 PM rgw Bug #6214 (Fix Under Review): rgw: PUT object with chunked upload doesn't propagate client side e...
- 01:23 PM rgw Bug #6214 (Resolved): rgw: PUT object with chunked upload doesn't propagate client side errors
- E.g., client disconnected in the middle of upload. This affects also multi-part upload (when uploading part data).
- 01:07 PM Bug #5700: very high memory usage after update
- Ah ok, that increase might explain the increased memory usage. We'll know for sure in a few days :)
But anyway, is...
- 09:33 AM Bug #5700: very high memory usage after update
- Ah, sorry, I missed that.
And yeah, the massif output confirms that ~80% of the heap is consumed by the pg logs. ...
- 08:54 AM Bug #5700: very high memory usage after update
- Great! There are no other show stoppers and upgrade should be smooth, right? :)
I uploaded my binary and the massi...
- 08:28 AM Bug #5700: very high memory usage after update
- Corin Langosch wrote:
> For testing I'd like to wait for the next dumpling release (hopefully with fixed http://...
- 08:12 AM Bug #5700: very high memory usage after update
- For testing I'd like to wait for the next dumpling release (hopefully with fixed http://tracker.ceph.com/issues/6...
- 07:39 AM Bug #5700: very high memory usage after update
- Also, massif should have generated a report file that indicates which callers are allocating all of the memory. Can ...
- 07:38 AM Bug #5700: very high memory usage after update
- Corin Langosch wrote:
> Do these settings affect data safety in any way? The cluster is an important production one,...
- 12:57 PM devops Bug #5763 (Resolved): ceph-deploy new [IP] should error out
- Merged to ceph-deploy master branch with hash: 77438b522c82e79fdb0f9b0c5963ba5a61f07f10
The validator was updated ...
- 08:09 AM devops Bug #5763 (Fix Under Review): ceph-deploy new [IP] should error out
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/58
- 11:31 AM rgw Bug #6208: rgw: md5 checksum failed on readwrite during upgrade-next tests
- There's a bunch of these in the apache logs:...
- 08:16 AM rgw Bug #6208 (Can't reproduce): rgw: md5 checksum failed on readwrite during upgrade-next tests
- ...
- 11:02 AM devops Feature #6018 (In Progress): Build ceph via jenkins
- 10:59 AM Bug #6177 (Can't reproduce): osd/ReplicatedPG.cc: 3852: FAILED assert(ctx->user_at_version > ctx-...
- We believe we fixed this up in the fix chain for #6179. My guess is on the pg_log_event_t changes to only specify new...
- 10:39 AM rbd Feature #4917 (In Progress): iSCSI: Package tgt
- Package build is set up except for opensuse12.2 and sles11sp2. It looks like for SuSE the IB devel libraries come fr...
- 10:21 AM Feature #5984: mon: probe monitors to check on their status regardless of quorum
- After a bit more thought I think we should make the mon_command() more robust and put this there. The fact that mon_...
- 10:18 AM devops Bug #5944 (In Progress): ceph-deploy osd needs to be moved to use the new remote helpers
- 10:15 AM Bug #6209 (Resolved): Objecter::recalc_op_target() breaks on non-existent pool target
- N/m, Sage got this already in commit:1610768d4a5373e57a0bde62e1a3365c2f5b0073
- 10:11 AM Bug #6209 (Resolved): Objecter::recalc_op_target() breaks on non-existent pool target
- I busted it with my tiering changes and will need to refactor in order to deal with unknown pools.
- 10:07 AM rgw Documentation #5669 (Resolved): Default site in Apache interferes with Gateway
- Not sure why this was opened. It has actually been in the documentation for quite some time.
- 10:06 AM rgw Documentation #5165: rgw: multisite: regions and global namespace documentation
- Wrote a "conceptual" document that failed Tamil's ease-of-use test, and rewrote in a quick-install procedure style. W...
- 08:13 AM Bug #6118: failed to recover before timeout expired on radosbench, rados api tests
- 4 objects degraded, 1 pg stuck in recovery_wait...
- 07:48 AM Linux kernel client Bug #5429: libceph: rcu stall, null deref in osd_reset->__reset_osd->__remove_osd
- ...
09/02/2013
- 09:32 PM Bug #6043: upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
I seem to be running into a similar issue. Running 13.04 and 0.67.2, but it was also happening with 0.61.7; it seems...
- 12:39 PM Bug #5700: very high memory usage after update
- BTW, it would be nice if this issue were re-opened. I cannot do this.. :(
- 09:52 AM Bug #6207 (Resolved): Found incorrect object contents
- ...
09/01/2013
- 11:06 PM rbd Bug #5760: libceph: osdc_build_request(): BUG_ON(p > msg->front.iov_base + msg->front.iov_len);
- More than 20GB later, still no bug, so for my part it's solved. Thanks! :)
- 04:32 PM rbd Bug #5760: libceph: osdc_build_request(): BUG_ON(p > msg->front.iov_base + msg->front.iov_len);
- git cherry-pick 103673bf04c8207c92c3286005dfaa2d259ac9b6 68d253bc92e5fd780869b1fb31dd8e49267b8d4e
from v3.10.9 (0a4b...
- 11:59 AM Bug #6177: osd/ReplicatedPG.cc: 3852: FAILED assert(ctx->user_at_version > ctx->new_obs.oi.user_v...
- http://teuthology.front.sepia.ceph.com/archive/teuthology-2013-09-01_01:01:33-krbd-master-testing-basic-plana/16054
08/31/2013
- 04:47 PM Bug #6179: ceph_test_rados user_version checks fail during thrashing
- 04:47 PM Bug #6178 (Resolved): mon/DataHealthService.cc: 131: FAILED assert(store_size > 0)
- 12:35 PM Bug #6178 (Fix Under Review): mon/DataHealthService.cc: 131: FAILED assert(store_size > 0)
- wip-6178 ; pull request 561
- 11:16 AM Bug #5700: very high memory usage after update
- Do these settings affect data safety in any way? The cluster is an important production one, so I cannot really play ...
- 10:18 AM Bug #5700: very high memory usage after update
- Corin Langosch wrote:
> I just thought recreating the journal would help, but it didn't help at all.
>
> kill osd.1...
- 03:07 AM Bug #5700: very high memory usage after update
- I just thought recreating the journal would help, but it didn't help at all.
kill osd.12
/usr/bin/ceph-osd -i 12 --...
- 02:00 AM Bug #5700: very high memory usage after update
- !http://s21.postimg.org/lzotzs1on/massif.jpg!
Looks like ceph is reading the whole log file (1GB) into memory and no...
- 01:59 AM Bug #5700: very high memory usage after update
- !http://postimg.org/image/8vj9n39mr/!
Looks like ceph is reading the whole log file (1GB) into memory and not freein...
- 01:45 AM Bug #5700: very high memory usage after update
- Here we go:
** restart with valgrind **
valgrind --tool=massif /usr/bin/ceph-osd -i 12 --pid-file /var/run/ceph...
- 12:59 AM Bug #5700: very high memory usage after update
- > It's the pgs per osd that matters. But yeah, I'm not happy with my answer either, but I don't have much else to go ...
- 05:19 AM rbd Feature #4231 (Closed): librbd: Java bindings
- Yes!
I'll close this one.
08/30/2013
- 11:53 PM Bug #6206 (Won't Fix): OSD::do_convertfs() doesn't transfer attributes
If we bump the FileStore::on_disk_version (renaming to FileStore::target_version), after the convertfs we can't mou...
- 06:15 PM Feature #6198 (New): packaging, admin_socket: create ceph group, make socket be group writable
- admin_socket consumers must be root today (because the daemons run as root and create the socket without manipulating...
- 05:03 PM Feature #6036 (Resolved): cachepool: osd: add objecter
- 04:31 PM rgw Feature #5842 (Resolved): rgw: integrate multi-region s3tests into teuthology task
- These tests have run successfully in the nightlies (they had valgrind issues that are being tracked on other tickets)...
- 12:30 PM rgw Feature #5842 (In Progress): rgw: integrate multi-region s3tests into teuthology task
- teuthology was updated to generate the necessary S3TEST_CONF input to trigger multi-region tests in the s3tests. This...
- 04:30 PM Bug #6179: ceph_test_rados user_version checks fail during thrashing
- Unfortunately I didn't set up any debug output for user versions because...well, because I didn't want to confuse thi...
- 10:19 AM Bug #6179: ceph_test_rados user_version checks fail during thrashing
- flab:teuthology 10:17 AM $ teuthology-schedule a.yaml --name sage-bug-6179-a -n 1
Job scheduled with ID 13732
sho...
- 09:25 AM Bug #6179: ceph_test_rados user_version checks fail during thrashing
- 09:22 AM Bug #6179 (Resolved): ceph_test_rados user_version checks fail during thrashing
- 2013-08-30T01:33:42.532 INFO:teuthology.task.rados.rados.0.out:[10.214.132.15]: Writing 44 current snap is 201
...
...
- 04:06 PM Bug #6196 (Resolved): unittest_lfnindex testing older HASH_INDEX_TAG
This unit test should test newer HOBJECT_WITH_POOL index_version.- 03:35 PM Bug #5700: very high memory usage after update
- Corin Langosch wrote:
> Hi Sage,
>
> to be honest I'm a little disappointed by your answer. 8192 isn't a lot of ...
- 02:06 PM Bug #5700: very high memory usage after update
- Hi Sage,
to be honest I'm a little disappointed by your answer. 8192 isn't a lot of pgs? The docs say 50-100 pgs ... - 03:31 PM rgw Feature #6195 (Resolved): rgw: test full sync (with large object)
- Create a large object (greater than 5GB) and test full sync that includes that object.
- 03:27 PM Feature #6032 (Resolved): cachepool: objecter: send requests to cache pool
- Thanks Sage!
merged to master in commit:b882aa2ace54099a1b5c2ce5b25ac29e29b9ec14
- 03:09 PM Feature #6032: cachepool: objecter: send requests to cache pool
- https://github.com/ceph/ceph/pull/560, branch wip-6032-cache-objecter
- 02:49 PM Feature #6032 (Fix Under Review): cachepool: objecter: send requests to cache pool
- 03:24 PM rgw Feature #6193 (Resolved): rgw: DR: per bucket sync
- Per bucket sync. Get the bucket sync position (name of objects that are currently in sync). For each such object, do ...
- 03:23 PM rgw Feature #6192 (Resolved): rgw: DR: per object sync
- Single object sync
Need to polish this a bit, but basically there are two separate
issues: request a copy and wait ...
- 03:22 PM rgw Feature #6191 (Resolved): rgw: DR: per bucket replica_log handling
- Each agent will try to lock a replica log shard; if it succeeds, it
will get the next bucket that it needs to work in f...
- 03:22 PM devops Bug #5763 (In Progress): ceph-deploy new [IP] should error out
- On the latest ceph-deploy version 1.2.3, this problem exists.
The ceph-deploy new command seems to accept any number o...
- 03:21 PM rgw Feature #6190 (Resolved): rgw: DR: read list of buckets to do full sync on
- read list of buckets and store that info sharded in the replica_log
so that we could later on run multiple agents ag...
- 03:17 PM rbd Feature #5275 (Resolved): openstack: port always_use_volumes option to grizzly
- 03:15 PM rbd Feature #4917: iSCSI: Package tgt
- 03:13 PM rgw Documentation #5165 (In Progress): rgw: multisite: regions and global namespace documentation
- 03:11 PM rgw Documentation #5166 (In Progress): rgw: dr: async repl and DR documentation
- 03:11 PM rbd Bug #5454 (In Progress): krbd: assertion failure in rbd_img_obj_callback()
- I think this may be a refcounting race with resends. Looking into it further.
- 03:11 PM rgw Feature #4340 (In Progress): rgw: dr: data sync agent: implement full sync
- 03:04 PM Feature #6189 (Resolved): cachepool: osd: promote on read
- 03:01 PM Feature #6188 (Resolved): cachepool: osd: promote on write and mark object dirty
- 02:56 PM Feature #6186 (Resolved): cachepool: osd: dirty state for an object in cache pool
- 02:52 PM Feature #6033 (In Progress): cachepool: osd: basic io decision: read/write from/to cache pool or ...
- 02:48 PM Tasks #6184 (Closed): filestore should record if filestore_xattr_use_omap has ever been enabled a...
- Forgot that I already added this.
- 02:33 PM Tasks #6184 (Closed): filestore should record if filestore_xattr_use_omap has ever been enabled a...
- 02:46 PM Feature #5993 (In Progress): EC: [link] Refactor recovery to use PGBackend methods
- 02:45 PM Tasks #6185 (New): expand upgrade tests to be able to test downgrade
- 02:41 PM Feature #5998 (Fix Under Review): EC: [link] FileStore must work with ghobjects rather than hobjects
- 02:35 PM Fix #4635 (Resolved): mon: many ops expose uncommitted state
- 02:33 PM Feature #6031 (Fix Under Review): cachepool: osd: COPY from another pool; small objects only
- 02:19 PM Feature #6030 (Resolved): cachepool: osd: pg_pool_t cache_pool property
- This and initial tooling around it is merged into master in commit:b30a1b288996c2f7a6471f38c13030e6047052a2.
- 01:47 PM Bug #6128: glance image-create with rbd --location fails to create image in rdb
- http://pastebin.com/ZwfuGckH --glance-api.log from glance image-create from stdin
http://pastebin.com/YwEEz0xG --gla...
- 01:43 PM rgw Documentation #6182 (Resolved): Conflicting locations for s3gw.fcgi
- This applies to the following page: http://ceph.com/docs/next/install/rpm/
When creating an Apache httpd virtual h... - 01:07 PM rbd Feature #4231: librbd: Java bindings
- Wido,
Can we close this ticket now?
- 12:29 PM rgw Feature #5605: rgw: teuthology tests to check bucket issues in multi region env
- The ceph-qa-suite commits were actually c37faa8cf90abe54cba051b045edfe4ab9750bbc
The previously specified commit wa...
- The tests were added to teuthology/task/radosgw-admin.py in commit ff2a209f8d05fd018c0c6709ea70ed5fb1360435
The Y... - 11:53 AM devops Feature #3347: ceph-deploy: allow setting ssh user
- Would suggest default behaviour is to run as the user who is invoking the script.
- 11:48 AM devops Feature #6020 (Need More Info): radosgw-apache opinionated package
- We need an explicit description of what should be packaged up for a new radosgw package.
E.g., config files to add, r...
- 10:42 AM rgw Bug #6121 (Resolved): key error during readwrite test in upgrade suite
- Fixed a code path where the traceback key was not being set.
The fix has been in for a few days and we've seen no o... - 10:10 AM devops Feature #5956: Implement a radosgw command in ceph-deploy
- ceph-deploy should be able to install/deploy radosgw to a remote host
- 09:21 AM Bug #6178 (Resolved): mon/DataHealthService.cc: 131: FAILED assert(store_size > 0)
- ...
- 08:41 AM Messengers Bug #5508: msg/SimpleMessenger.cc: 230: FAILED assert(!cleared)
- ...
- 08:40 AM Bug #6177: osd/ReplicatedPG.cc: 3852: FAILED assert(ctx->user_at_version > ctx->new_obs.oi.user_v...
- ubuntu@teuthology:/a/teuthology-2013-08-30_01:01:28-krbd-master-testing-basic-plana/13463
- 08:30 AM Bug #6177 (Can't reproduce): osd/ReplicatedPG.cc: 3852: FAILED assert(ctx->user_at_version > ctx-...
- ...
- 08:27 AM rgw Bug #6176 (Resolved): rgw: valgrind: leak in copy_obj
- ...
- 08:26 AM rgw Bug #6175 (Resolved): rgw: valgrind: invalid reads in copy_obj
- ...
- 07:48 AM rbd Bug #6174 (Can't reproduce): osdc/ObjectCacher.cc: 526: FAILED assert(i->empty()) on cuttlefish f...
- ...
- 07:36 AM Feature #6173 (Resolved): Add LevelDB support to ceph cluster backend store
- http://wiki.ceph.com/01Planning/02Blueprints/Emperor/Add_LevelDB_support_to_ceph_cluster_backend_store
08/29/2013
- 10:44 PM Bug #6083: fedora18 rpm packages for ceph should be built with proper naming convention for ceph-dbg
- Looks like the issue is with the expect script that does the rpm signing timing out. It has a default timeout of 10 seco...
- 09:30 AM Bug #6083: fedora18 rpm packages for ceph should be built with proper naming convention for ceph-dbg
- The incorrectly named debug package is valid and signed with the autobuild key. So we know that it's good through at...
- 10:30 PM devops Feature #5847: Build own versions of most recent leveldb for all supported platforms.
- CentOS 6.3/6.4 & RHEL 6.3/6.4 are backported from the Fedora 19 leveldb 1.12 package.
12.04 is backported from the Ra... - 09:58 PM devops Feature #5847: Build own versions of most recent leveldb for all supported platforms.
- Can you confirm which versions of LevelDB you are packaging for each OS:
CentOS 6.3
CentOS 6.4
RHEL 6.3
RHEL 6....
- 07:51 PM rbd Bug #5636 (Fix Under Review): krbd: crash in image refresh
- branch wip-rbd-bugs-shutdown-lock contains a few fixes
- 07:48 PM rbd Bug #5391 (Duplicate): krbd: crash in rbd_obj_request_create -> strlen
- pretty sure this is the same notify vs shutdown race as #5636
- 07:42 PM rbd Feature #5003: cinder/nova: don't require ceph.conf on a compute host / support multiple clusters
- I can't edit status for some reason, but this was merged a couple months ago https://review.openstack.org/#/c/30790/
- 04:46 PM rbd Bug #6139: kernel panic in vms during disk benchmarking
- I've updated kernel to the latest linux-image-lts-raring and I am also having vm crashes. I can get the crash by runn...
- 03:43 PM Feature #6032 (In Progress): cachepool: objecter: send requests to cache pool
- wip-6032-cache-objecter, currently based on top of https://github.com/ceph/ceph/pull/554
The objecter will follow th... - 03:14 PM Bug #6151 (Resolved): OSD: FAILED assert(!log_keys_debug->count(i->first))
- 42d65b0a7057696f4b8094f7c686d467c075a64d
- 01:14 AM Bug #6151 (Resolved): OSD: FAILED assert(!log_keys_debug->count(i->first))
- /a/teuthology-2013-08-28_21:55:13-krbd-master-testing-basic-plana/11274$ zless remote/ubuntu@plana24.front.sepia.c...
- 03:04 PM Linux kernel client Feature #6163 (Resolved): support caching pool and redirects
- This is the kernel client analogue to #6032.
- 02:03 PM Linux kernel client Feature #6162 (Resolved): support user_version & replay_version
- We've now implemented separate user and replay versions in the userspace Ceph stuff, but the kernel client still only...
- 01:26 PM rbd Feature #4917: iSCSI: Package tgt
- Looks right to me. The "split into .so's" patch still hasn't been accepted, but that probably doesn't bother us yet....
- 01:16 PM rbd Feature #4917 (Fix Under Review): iSCSI: Package tgt
- Dan - do you agree with Gary's steps above?
- 01:06 PM devops Bug #6104: ceph-deploy should workaround pseudo-tty in SSH
- One possible solution to this is to detect a `root` user and just not do the sudo connection.
Other than that, I t... - 10:12 AM devops Bug #6104: ceph-deploy should workaround pseudo-tty in SSH
- pushy dev says this might not be straightforward so fixing this will depend on pushy fixing it on their end.
- 01:03 PM rgw Bug #6161 (Resolved): radosgw 0.67.2 update -> "ERROR: failed to initialize watch"
- off the mailing list.
- 12:40 PM devops Bug #6160 (Resolved): allow installation of packages only
- A lot of users maintain their own repositories and keys, making ceph-deploy a no-go when attempting to install ceph.
... - 12:20 PM rgw Bug #6159 (Won't Fix): syncing an existing user causes an error
- It's not uncommon for the system users in a sync relationship to use the same keys. If this occurs, every metadata sy...
- 11:25 AM devops Bug #6158: selective sync of ceph precise dependencies from havana cloud archive
- Packages for 12.04 that Ceph Dumpling requires are: perftools, leveldb, libs3, libunwind, qemu
- 11:24 AM devops Bug #6158 (Rejected): selective sync of ceph precise dependencies from havana cloud archive
- leveldb
libs3-dev
gperftools
libunwind
reprepro lets you do a selective sync, according to James.
- 07:20 AM rgw Bug #6152 (Resolved): New S3 auth code fails when using response-* query string params to overrid...
- Previously there was a list of subresources to ignore when generating a signature. This has changed to a list of subr...
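The change described in #6152 swaps an ignore-list for an allow-list of signed subresources. A minimal Python sketch of the allow-list approach (the subresource set shown is an illustrative subset, not radosgw's actual list, and the helper name is hypothetical):

```python
# Illustrative subset of S3 v2 "signed subresources"; the real list
# in the gateway and in the S3 spec is longer.
SIGNED_SUBRESOURCES = {
    "acl", "cors", "delete", "lifecycle", "location", "logging",
    "partNumber", "policy", "uploadId", "uploads", "versioning",
    "response-content-type", "response-content-disposition",
}

def canonical_subresource_string(query_params):
    """Keep only the query params that belong in the string-to-sign,
    instead of filtering out a known-bad list. Params with value None
    are rendered bare (e.g. '?acl')."""
    kept = sorted(
        (k, v) for k, v in query_params.items() if k in SIGNED_SUBRESOURCES
    )
    if not kept:
        return ""
    return "?" + "&".join(k if v is None else f"{k}={v}" for k, v in kept)

print(canonical_subresource_string(
    {"response-content-type": "text/plain", "foo": "bar"}
))  # → ?response-content-type=text/plain
```

With an allow-list, a newly introduced response-* override param only signs correctly once it is added to the set, which is the failure mode this bug describes.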
- 10:22 AM devops Feature #6018: Build ceph via jenkins
- 10:16 AM rbd Feature #5938: openstack: nova: allow live-migration without detach/reattach for rbd
- 10:00 AM devops Feature #3347: ceph-deploy: allow setting ssh user
- Changing subject so we are not prescriptive about where the ssh user is set. IMO it should be a ceph-deploy switch, n...
- 09:38 AM rbd Bug #6129 (Resolved): krbd: build broken when CEPH_FSCACHE disabled
- 09:35 AM devops Feature #6154: ceph-deploy should be able to use an argument "fs-type" to specify the filesystem ...
- As a workaround until this gets implemented, you can specify "fs type = foo" in the ceph.conf (at whatever granularit...
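Until ceph-deploy grows an `--fs-type` argument, the workaround above amounts to a ceph.conf fragment along these lines (the option spelling follows the comment's "fs type = foo"; xfs is just an example value, and the section granularity is up to you):

```ini
[osd]
fs type = xfs
```
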
- 08:20 AM devops Feature #6154 (Resolved): ceph-deploy should be able to use an argument "fs-type" to specify the ...
- According to the documentation, it seems intended and expected that ceph-deploy should be able to define which filesy...
- 09:22 AM Bug #5896 (Fix Under Review): mon: MonmapMonitor: 'ceph mon add' always returns 'mon already exists'
- wip-5896, pull request 557
- 08:40 AM devops Feature #4954 (Need More Info): ceph-deploy: help and document need to be updated for osd create
- From what I see, `prepare` does `activate`; how are you running ceph-deploy, and what are you seeing as output that te...
- 08:32 AM devops Bug #6035 (Need More Info): ceph-deploy: ceph-create-keys stuck on fedora 18 VMs
- 08:31 AM devops Bug #6035: ceph-deploy: ceph-create-keys stuck on fedora 18 VMs
- I can no longer reproduce this.
Also the hosts have changed from Fedora to Ubuntu. And I really don't see where i... - 08:07 AM devops Bug #6138 (Resolved): 'ceph-deploy disk list' fails on CentOS 6.4
- Merged to ceph-deploy master branch
Hash: 69a765cfa94716813e5d77aa910c3ad21951b93b
Now the $PATH is always pass... - 06:05 AM devops Bug #6138 (Fix Under Review): 'ceph-deploy disk list' fails on CentOS 6.4
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/55
- 07:59 AM Bug #6153 (Duplicate): osd/PGLog.cc: 632: FAILED assert(!log_keys_debug->count(i->first))
- #6151
- 07:37 AM Bug #6153 (Duplicate): osd/PGLog.cc: 632: FAILED assert(!log_keys_debug->count(i->first))
- ...
- 07:50 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- this will go into the next dumpling point release. thanks again for helping track it down!
- 07:49 AM devops Bug #4924 (Pending Backport): ceph-deploy: gatherkeys fails on raring (cuttlefish)
- 12:40 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- P.S.:
When will the fix make it into the main branch (dumpling)?
Should I keep working with wip-4924 for now?
- 12:36 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage Weil wrote:
> thanks for helping test this!
better free software ;-)
> add the --zap-disk argument to blo... - 07:40 AM Fix #6116: osd: incomplete pg from thrashing on next
- ubuntu@teuthology:/a/teuthology-2013-08-28_01:00:04-rados-master-testing-basic-plana/10150
08/28/2013
- 04:49 PM Bug #6047 (Pending Backport): mon: Assert and monitor-crash when attemting to create pool-snapsho...
- 04:41 PM Feature #6147 (Resolved): mon: calculate, expose per-pool pg stat deltas
- so we can get iops, recovery stats on a per-pool basis
- 04:30 PM Subtask #5862: FileStore must work with ghobjects rather than hobjects
- pg_info_t has our standard encode/decode function versioning. We've got some fairly complex ones you can examine wher...
- 04:06 PM Subtask #5862: FileStore must work with ghobjects rather than hobjects
- The ceph-filestore-dump probably needs a version bump to prevent an import of an export which includes erasure coding...
- 02:11 PM Bug #6040 (Pending Backport): Significant slowdown of osds since v0.67 Dumpling
- 01:51 PM Feature #6143 (Resolved): OSD: kill filestore_xattr_use_omap, leave it enabled forever, adjust xa...
- Otherwise you might get a corrupt osd.
- 01:28 PM Bug #6083: fedora18 rpm packages for ceph should be built with proper naming convention for ceph-dbg
- The problem with suffix on the debug package is happening intermittently on fedora17, fedora18, and sles11sp2 gitbuil...
- 01:15 PM rbd Feature #4917: iSCSI: Package tgt
- It looks like what is needed:
1) Clone upstream source, https://github.com/fujita/tgt, into the github repo
2) Ad... - 12:56 PM rbd Bug #6140 (Rejected): qa: add phoronix-test-suite benchmark to rbd test suite
- This just runs benchmarks we already have like bonnie, iozone, and dbench.
- 09:03 AM rbd Bug #6140 (Rejected): qa: add phoronix-test-suite benchmark to rbd test suite
- for krbd and/or qemu+librbd. see #6139
- 11:49 AM Bug #6110 (Resolved): v0.67.1 branch is missing on gitbuilder for debian precise
- This time for sure.
It really is there:
$ curl http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/v0.... - 10:02 AM Bug #6110 (In Progress): v0.67.1 branch is missing on gitbuilder for debian precise
- I still don't see it on http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/
Gary is looking into it. - 10:44 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- bernhard glomm wrote:
> Sage Weil wrote:
> > Bernhard, thanks for those logs--I think I've identified the problem. ... - 10:43 AM devops Bug #4924 (Resolved): ceph-deploy: gatherkeys fails on raring (cuttlefish)
- thanks for helping test this!
- 09:22 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage Weil wrote:
> Bernhard, thanks for those logs--I think I've identified the problem. Can you try with wip-4924 ... - 08:22 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- sorry, didn't realise at first glance it was a normal package repository...
got it, will post results ASAP - 01:54 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage Weil wrote:
> Bernhard, thanks for those logs--I think I've identified the problem. Can you try with wip-4924 ... - 10:16 AM Bug #6141: OSDs crash on recovery
- Dropping the caches was not a problem. Freeing dentries and inodes took about 30 minutes, and I guess ceph was not able...
- 10:00 AM Bug #6141: OSDs crash on recovery
- What was happening on your cluster at the time you dropped the caches? There are internal and external limits well be...
- 09:39 AM Bug #6141 (Can't reproduce): OSDs crash on recovery
- After (mistakenly) executing "echo 2 > /proc/sys/vm/drop_caches" instead of "echo 1 > /proc/sys/vm/drop_caches" to cl...
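For reference, per the kernel's vm sysctl documentation, writing 1 to /proc/sys/vm/drop_caches frees the page cache, 2 frees reclaimable slab objects (dentries and inodes), and 3 frees both. A tiny helper sketch (function name illustrative):

```shell
# Map a cache class to its /proc/sys/vm/drop_caches value
# (1 = page cache, 2 = slab/dentries+inodes, 3 = both).
drop_caches_value() {
  case "$1" in
    pagecache) echo 1 ;;
    slab)      echo 2 ;;
    both)      echo 3 ;;
    *)         return 1 ;;
  esac
}
# Usage (as root): sync; echo "$(drop_caches_value pagecache)" > /proc/sys/vm/drop_caches
```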
- 09:45 AM Documentation #6142 (Resolved): Ceph needs more than 32k pids
- I kinda painfully discovered that one of my Hosts with 45 OSDs on it spawned 1.4 Million threads when starting it int...
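One way to raise the limit is a sysctl fragment like the following (path and value are illustrative; kernel.pid_max bounds thread ids too, since each thread consumes a pid):

```ini
# /etc/sysctl.d/10-ceph-pids.conf (illustrative): raise the pid/thread id limit
# above the ~32k default so a many-OSD host can spawn enough threads.
kernel.pid_max = 4194303
```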
- 09:19 AM Feature #6029 (Resolved): cachepool: osd: separate object version from pg version
- I merged this in this morning, commit:be9a39b766ba825ef348ca6e2de1f4db7c091dff
A suite run saw a lot of the btrfs ... - 08:31 AM devops Bug #6138 (In Progress): 'ceph-deploy disk list' fails on CentOS 6.4
- 07:48 AM devops Bug #5193: RHEL6 does not ship with xfsprogs
- Xfsprogs is only in the Scalable File System add-on to RHEL 6.4, but xfs will be the default fs in RHEL 7.
- 05:39 AM rbd Bug #6139 (Closed): kernel panic in vms during disk benchmarking
- Hi
I am having regular issues with virtual machines running heavy disk io benchmarks.
My ceph setup:
Ceph v...
08/27/2013
- 07:14 PM Bug #6097: btrfs locking regression on async snap ioctl
- Yan Zheng added a diagnosis on linux-btrfs:
btrfs_ioctl_start_sync() calls btrfs_attach_transaction_barrier() whic... - 05:56 PM Bug #6097: btrfs locking regression on async snap ioctl
- Question asked on linux-btrfs: http://article.gmane.org/gmane.comp.file-systems.btrfs/27911
- 06:57 PM CephFS Bug #5665 (Duplicate): mds takeover too early causes new mds to shutdown
- I think this is a duplicate of #4894
- 04:56 PM devops Bug #6138 (Resolved): 'ceph-deploy disk list' fails on CentOS 6.4
- When attempting to run 'ceph-deploy disk list den2ceph003' I get the following error message:
$ ceph-deploy disk list... - 04:24 PM rbd Bug #5647: krbd: EBlACKLIST osd reply resulting in an oops on 3.9
- 03:47 PM Feature #6032: cachepool: objecter: send requests to cache pool
- Our current thinking is that the cache/tiering flags specify the write behavior which the Objecter handles, and that ...
- 03:42 PM Feature #6029 (Fix Under Review): cachepool: osd: separate object version from pg version
- There's a pull request at https://github.com/ceph/ceph/pull/549. Would also like to schedule another suite on it, but...
- 03:19 PM rbd Bug #5760 (Fix Under Review): libceph: osdc_build_request(): BUG_ON(p > msg->front.iov_base + msg...
- fix in branch wip-rbd-bugs in ceph-client.git, test in wip-krbd-workunits for ceph.git
- 01:14 PM phprados Feature #6137 (New): Add RADOS namespace support
- Add RADOS namespace support
- 01:13 PM rados-java Feature #6136 (New): Add RADOS namespace support
- Add the new RADOS namespace support
- 12:59 PM rgw Feature #6135 (New): Add a flag to radosgw-agent indicating whether exceptions should be propagated
- At present, there is no way for a caller of radosgw-agent to know whether an exception was encountered other than par...
- 12:54 PM rgw Bug #6134 (Resolved): RGW returns an error on set_worker_bound if a zone's log_pool doesn't alrea...
- At present, when a zone is configured, if its 'log_pool' doesn't happen to already exist, then RGW runs into issues. ...
- 12:00 PM Bug #6130 (Resolved): SharedPtrRegistry_all.wait_lookup_or_create fails inconsistently
- cool :-)
- 11:11 AM Bug #6130: SharedPtrRegistry_all.wait_lookup_or_create fails inconsistently
- Sam reviewed and merged into master, commit: 7cc2eb246df14925ca27b8dee19b32e0bdb505a8
- 05:08 AM Bug #6130 (Fix Under Review): SharedPtrRegistry_all.wait_lookup_or_create fails inconsistently
- 04:46 AM Bug #6130 (In Progress): SharedPtrRegistry_all.wait_lookup_or_create fails inconsistently
- 04:46 AM Bug #6130: SharedPtrRegistry_all.wait_lookup_or_create fails inconsistently
- while : ; do ./unittest_sharedptr_registry --gtest_filter=SharedPtrRegistry_all.wait_lookup_or_create || break ; don...
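That loop reruns the gtest case until it fails. The same idea as a self-contained helper (name illustrative):

```shell
# Rerun a command until it fails, then report how many runs succeeded;
# useful for surfacing intermittent test failures.
rerun_until_failure() {
  local n=0
  while "$@"; do
    n=$((n + 1))
  done
  echo "failed after $n successful runs"
}
```

e.g. `rerun_until_failure ./unittest_sharedptr_registry --gtest_filter=SharedPtrRegistry_all.wait_lookup_or_create`.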
- 11:11 AM Bug #6117 (Resolved): osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- Sam reviewed and merged this into master, commit:ea2fc85e091683ced062594ad25fa569e5c1bbd7
- 08:22 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- Running the following against "the wip-6117 branch":http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-deb-precise-amd6...
- 07:23 AM Bug #6117 (Fix Under Review): osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- 06:58 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- The following:...
- 06:35 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- crashes with the provided yaml file and *sudo gdb /usr/bin/ceph-osd cephtest/lo1308271455/archive/coredump/137760836...
- 06:24 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- ...
- 05:29 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- context_registry_on_change was...
- 03:20 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- teuthology-2013-08-26_01:01:03-rbd-master-testing-basic-plana/5820/remote/ubuntu@plana63.front.sepia.ceph.com/log/cep...
- 02:55 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- /a/teuthology-2013-08-25_09:24:44-rbd-master-testing-basic-plana/4847/remote/ubuntu@plana21.front.sepia.ceph.com/log/...
- 02:45 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- trying to run the config.yaml standalone to see if it reproduces the problem
- 11:03 AM devops Feature #6124 (Resolved): `ceph-deploy new` should accept node:IP pairs
- Merged on ceph-deploy's master branch, hash: cdc8f8a8c38543587e7120fbc17954c909c1b0fa
The argument validator was b... - 10:45 AM rgw Feature #6133 (New): Enhance the validation of JSON passed into radosgw-admin calls
- At present, any JSON data passed into a radosgw-admin call ('zone set' for example) is only validated in terms of bei...
- 10:32 AM Bug #3434: Unknown variables in test_xattr_support
- I was unaware this was assigned to me on this tracker. The good news is that this has been fixed, I believe, in the ...
- 10:18 AM RADOS Feature #6114: Complete python binding interfaces for librados
- Haomai, can you take a look at wip-5900 ? I am working on getting the Python bindings properly packaged.
I am at t... - 10:05 AM Bug #6131 (Rejected): debian: fd limits are set too high for standard users on the monitors
- Hmm, you're right! Misdiagnosis on my part from some faulty memories of making it work in teuthology. :)
- 09:36 AM Bug #6131 (Need More Info): debian: fd limits are set too high for standard users on the monitors
- who has actually observed this problem? AIUI root can set the ulimit however they want, so i think this can only hap...
- 09:40 AM devops Bug #6132 (Resolved): ceph-deploy to detect and warn when host != hostname
- Whenever ceph-deploy attempts to configure a mon and do an initial install, a user might use a nodename that specifie...
- 09:38 AM Fix #6116: osd: incomplete pg from thrashing on next
- Sam, please take a look.
- 08:26 AM Fix #6116: osd: incomplete pg from thrashing on next
- ubuntu@teuthology:/a/teuthology-2013-08-26_15:47:58-rados-next-testing-basic-plana/6694
cluster is still hung - 09:38 AM rgw Bug #6121: key error during readwrite test in upgrade suite
- 09:37 AM rbd Bug #6129 (In Progress): krbd: build broken when CEPH_FSCACHE disabled
- 09:26 AM Bug #6083 (In Progress): fedora18 rpm packages for ceph should be built with proper naming conven...
- 09:11 AM Bug #6083: fedora18 rpm packages for ceph should be built with proper naming convention for ceph-dbg
- The debug packages are built following the conventions of the platforms they are built on.
For debian the packages... - 09:24 AM Bug #6110 (Resolved): v0.67.1 branch is missing on gitbuilder for debian precise
- This appears to have just been a timing issue between github updates and when the build occurred. The tag built ok,...
- 09:24 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Bernhard, thanks for those logs--I think I've identified the problem. Can you try with wip-4924 (based off of dumpli...
- 07:46 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage Weil wrote:
> bernhard glomm wrote:
> > Sage Weil wrote:
> > > Ooh, I think I know what this is. This is pro... - 08:36 AM Bug #5896: mon: MonmapMonitor: 'ceph mon add' always returns 'mon already exists'
- I thought I had fixed this. I'll check.
- 04:45 AM CephFS Bug #2825: File lock doesn't work properly
- @Jean-Sébastien Frerot: Please retest with the following fix:
commit 476e4902907dfadb3709ba820453299ececf990b
A... - 01:28 AM Bug #6101: ceph-osd crash on corrupted store
- Samuel Just wrote:
> What kernel version are you running?
The Debian wheezy kernel package, so it's a 3.2 (linux-... - 01:11 AM Bug #6101: ceph-osd crash on corrupted store
- Hi, thanks for following the issue :) I couldn't upload the following logs from where I was so it took me some time, ...
08/26/2013
- 07:56 PM rgw Bug #6088 (Fix Under Review): rgw: When uploading via POST specifying text instead of file formda...
- 06:24 PM rgw Bug #6088: rgw: When uploading via POST specifying text instead of file formdata input field, a s...
- Bug confirmed on latest. From what I can tell cache entry for bucket gets corrupted.
- 06:48 PM rbd Bug #5647 (Fix Under Review): krbd: EBLACKLIST osd reply resulting in an oops on 3.9
- wip-5647, patch on ceph-devel
- 01:28 PM rbd Bug #5647 (In Progress): krbd: EBLACKLIST osd reply resulting in an oops on 3.9
- 06:13 PM Bug #6131: debian: fd limits are set too high for standard users on the monitors
- This may be appropriate for somebody else to work on, but for now I'm following the "you break it; you buy it" princi...
- 06:12 PM Bug #6131 (Rejected): debian: fd limits are set too high for standard users on the monitors
- f653aa570e5ebfd5ca955fafb7f500148a144bd7 upped the fd limits (apparently deliberately, for the OSDs) to 32k. This is ...
- 04:26 PM Bug #6130 (Resolved): SharedPtrRegistry_all.wait_lookup_or_create fails inconsistently
- "pull request":https://github.com/ceph/ceph/pull/544
I noticed via the gitbuilders that this unit test is inconsis... - 03:38 PM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- Is this a possible scenario for this error:
* thread A: "pthread_mutex_trylock":https://github.com/ceph/ceph/blob/eb... - 10:06 AM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- ubuntu@teuthology:/a/teuthology-2013-08-26_01:01:39-krbd-master-testing-basic-plana/5937
- 03:12 PM Feature #6030 (In Progress): cachepool: osd: pg_pool_t cache_pool property
- 03:04 PM Feature #6029: cachepool: osd: separate object version from pg version
- A-hah, I think I found it. Running an updated branch through a short set of tests and updating the documentation, the...
- 10:12 AM Feature #6029 (In Progress): cachepool: osd: separate object version from pg version
- Huh, thought I updated this already. Sage went over it and liked what he'd seen, but there were some test failures I ...
- 02:30 PM rbd Bug #6129 (Resolved): krbd: build broken when CEPH_FSCACHE disabled
- See http://gitbuilder.sepia.ceph.com/gitbuilder-kernel-deb-precise-amd64-basic/log.cgi?log=7c6203242d8a0338294def5708...
- 02:16 PM Bug #6128 (Rejected): glance image-create with rbd --location fails to create image in rbd
- glance image-create --name cirros-5 --container-format bare --disk-format qcow2 --is-public yes --location https://la...
- 02:09 PM Documentation #6127 (Resolved): CEPH_ARGS example for RHEL
- the CEPH_ARGS note in http://ceph.com/docs/next/rbd/rbd-openstack/ is Debian-centric and won't work for RHEL users.
... - 02:00 PM rgw Bug #6126 (Resolved): rgw: swift subuser access mask not working
- 01:58 PM devops Bug #6102 (Resolved): if EPEL has been added skip adding it again
- Merged into ceph-deploy master.
Fixed by using `--replacepkgs` which tells RPM to not exit with a non-zero status
... - 06:31 AM devops Bug #6102 (Fix Under Review): if EPEL has been added skip adding it again
- Pull request opened: https://github.com/ceph/ceph-deploy/pull/50
- 06:12 AM devops Bug #6102: if EPEL has been added skip adding it again
- dmick mentioned that it might be better to just use `--replacepkgs` to quiet this down. It is unfortunate that wget...
- 01:22 PM Bug #6101: ceph-osd crash on corrupted store
- 52f622a2/rbd_data.22d4c74b0dc51.0000000000002853/c4//2 seems to be the missing object.
Can you locate that object ... - 01:11 PM Bug #6101 (New): ceph-osd crash on corrupted store
- Ah. That's less good. The logs would be the place to start. What kernel version are you running?
- 12:15 PM Bug #6101: ceph-osd crash on corrupted store
- After some work this weekend, I tried quite a few things to keep the test locked in the cluster (to avoid my user ent...
- 11:08 AM Bug #6101 (Can't reproduce): ceph-osd crash on corrupted store
- Our handling of this situation could be better, but basically it's crashing because it got messed up information out ...
- 01:20 PM Feature #5909 (Resolved): mon: keep track of monitor store size estimate vs 'du $mon_data'
- 01:14 PM devops Bug #5599 (Resolved): ceph-disk: prepare should issue a partprobe on the journal device too
- 01:13 PM devops Bug #4642 (Resolved): ceph-deploy: disk zap can throw a better error message
- Merged into ceph-deploy master, basically catching None arguments before they go off.
Hash: aeada3ec6659474ceb... - 07:46 AM devops Bug #4642 (Fix Under Review): ceph-deploy: disk zap can throw a better error message
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/51
- 01:13 PM Bug #6045 (Resolved): mon/OSDMonitor.cc: 1609: FAILED assert(err == 0)
- 01:10 PM CephFS Bug #6004 (Resolved): osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(loff_t)leng...
- 01:09 PM Bug #6090 (Resolved): mon/OSDMonitor.cc: 186: FAILED assert(err == 0)
- 01:08 PM Bug #6108 (Resolved): broken readdir_r usage
- 09:50 AM Bug #6108: broken readdir_r usage
- commit:057588f41af39460f46f36594ac1c5c962068289
- 01:02 PM devops Feature #6124: `ceph-deploy new` should accept node:IP pairs
- Updated the wording. It has nothing to do with `mon create`, only with `new`.
Got confused with the help menu :/ - 10:57 AM devops Feature #6124: `ceph-deploy new` should accept node:IP pairs
- one clarification: mon create ... takes a host or fqdn or ip; it doesn't matter, it's just a target to connect to. i...
- 10:41 AM devops Feature #6124 (Resolved): `ceph-deploy new` should accept node:IP pairs
- Issue 5763 prevented users from creating a new config by passing an IP alone, but the validator that was added
preve... - 12:57 PM Bug #6122 (Fix Under Review): osd: rados cmd.cc test does not tolerate thrashing
- 10:16 AM Bug #6122 (Resolved): osd: rados cmd.cc test does not tolerate thrashing
- teuthology-2013-08-25_20:00:14-rados-dumpling-testing-basic-plana/5311
- 11:34 AM rbd Feature #5275: openstack: port always_use_volumes option to grizzly
- 11:01 AM devops Feature #6120 (Duplicate): ceph-deploy: accept HOST:IP on 'new' line
- Work for this issue is being tracked at 6124
- 09:20 AM devops Feature #6120 (Duplicate): ceph-deploy: accept HOST:IP on 'new' line
- currently it seems to only accept HOST:FQDN, but not HOST:IP, or so a user reports on ceph-users.
- 10:35 AM rgw Cleanup #6123 (Resolved): rgw: don't warn about missing region map
- 09:57 AM rgw Bug #6121 (Resolved): key error during readwrite test in upgrade suite
- ...
- 09:33 AM rgw Bug #6111 (Fix Under Review): rgw: multipart upload fails when last chunk < 512k
- 09:28 AM rgw Bug #6111 (In Progress): rgw: multipart upload fails when last chunk < 512k
- 08:56 AM devops Bug #4924 (Need More Info): ceph-deploy: gatherkeys fails on raring (cuttlefish)
- bernhard glomm wrote:
> Sage Weil wrote:
> > Ooh, I think I know what this is. This is probably cuttlefish v0.61.7... - 12:38 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage Weil wrote:
> Ooh, I think I know what this is. This is probably cuttlefish v0.61.7 or older, right? There is... - 08:20 AM Bug #5239: osd: Segmentation fault in ceph-osd / tcmalloc
- Since the last update here, we've been running our own builds of the cuttlefish branch, built exactly as described ab...
- 03:31 AM Subtask #6119 (Won't Fix): replace PG::object_contexts with SharedPtrRegistry
- "API proposal":https://github.com/dachary/ceph/commit/60958095585a1f8392d8a967767f7620089d547d that compiles to show ...
- 03:17 AM Subtask #5510 (Resolved): ObjectContext : replace ref with shared_ptr
- merged, and the rados test results from the past few days do not exhibit problems that can be obviously traced back...
08/25/2013
- 09:27 PM Bug #6118 (Can't reproduce): failed to recover before timeout expired on radosbench, rados api tests
- ubuntu@teuthology:/a/teuthology-2013-08-25_09:23:30-rados-master-testing-basic-plana/4753
- 09:25 PM Bug #6117: osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- also
ubuntu@teuthology:/a/teuthology-2013-08-25_09:24:44-rbd-master-testing-basic-plana/4847
ubuntu@teuthology:/a... - 09:24 PM Bug #6117 (Resolved): osd: bad mutex assert in ReplicatedPG::context_registry_on_change()
- "pull request":https://github.com/ceph/ceph/pull/545...
- 06:09 PM rgw Bug #5374: Avoid relying on keystone's admin token
- I rebased wip-5374 again, went over it. Almost there but not quite yet, will get there soon.
- 09:11 AM Feature #5511: rados.py support for object locking
- Haomai Wang wrote:
> Is there any progress?
> May I give a hand?
No progress. Go for it!
08/24/2013
- 11:34 PM Feature #5511: rados.py support for object locking
- Is there any progress?
May I give a hand? - 10:05 PM Fix #6116 (Resolved): osd: incomplete pg from thrashing on next
- ... u'overall_status': u'HEALTH_WARN', u'summary': [{u'severity': u'HEALTH_WARN', u'summary': u'1 pgs incomplete'}]} ...
- 03:44 PM Bug #6115 (Resolved): doc: asphyxiate does not support class
- "asphyxiate":https://github.com/ceph/asphyxiate is "unable to handle classes":http://comments.gmane.org/gmane.comp.fi...
- 01:36 PM Bug #6003: journal Unable to read past sequence 406 ...
- teuthology-2013-08-23_01:00:10-rados-master-testing-basic-plana/1275
08/23/2013
- 09:26 PM RADOS Feature #6114 (New): Complete python binding interfaces for librados
- Currently the Python binding for librados only supports basic operations like read and write. A lot of interfaces librados.h imple...
- 06:40 PM CephFS Bug #2218 (Resolved): CephFS "mismatch between child accounted_rstats and my rstats!"
- I have run MDS with "mds verify_scatter = 1" for months, didn't hit this.
- 04:29 PM Feature #6001 (Fix Under Review): EC: [link] jerasure plugin
- 04:27 PM Feature #6000 (Fix Under Review): EC: [link] erasure plugin mechanism and abstract API
- 04:24 PM Subtask #6113 (Resolved): add ceph osd pool create [name] [key=value]
- "work in progress":https://github.com/ceph/ceph/pull/578
* add *ceph osd pool create [name] [key=value]* where *ke... - 04:23 PM Bug #6112 (Resolved): rgw test failed in the nightly during upgrade from dumpling to next
- logs: ubuntu@teuthology:/a/teuthology-2013-08-23_01:35:04-upgrade-parallel-next-testing-basic-vps/1798...
- 04:12 PM Bug #6083: fedora18 rpm packages for ceph should be built with proper naming convention for ceph-dbg
- we may need to check this for centos as well. I am not able to check this right now as v0.67.1 is missing on gitbuild...
- 04:09 PM Subtask #5878 (Fix Under Review): erasure plugin mechanism and abstract API
- 04:08 PM Subtask #5879 (Fix Under Review): jerasure plugin
- 03:54 PM Documentation #5690: ceph "global options" should be documented somewhere
- Here's some notes I scribbled for myself:
early_options:
--version: show version to stdout, exit
--conf/-c: se... - 03:35 PM Bug #6099 (Resolved): ceph-rest-api: default log file doesn't work because not daemon
- 03:15 PM Bug #6099 (Fix Under Review): ceph-rest-api: default log file doesn't work because not daemon
- 03:22 PM CephFS Feature #3426: ceph-fuse: build/run on os x
- Giving this to Noah since he's actually done it already in a branch.
- 03:21 PM Bug #2901 (Resolved): librados-config should not read ceph.conf
- 03:18 PM Bug #3163 (Won't Fix): doc: explain meaning of pg dump output
- 03:18 PM CephFS Bug #3544 (Won't Fix): ./configure checks CFLAGS for jni.h if --with-hadoop is specified but also...
- 03:17 PM Bug #3030 (Won't Fix): config/option parser: Avoid needing to list command line options in a glob...
- 03:17 PM Bug #3029 (Won't Fix): config/option parser: Avoid needing to list obscure one-use options in glo...
- 03:16 PM Bug #2520 (Duplicate): iozone random read/write with 4k block size hangs
- 03:14 PM Bug #3662 (Won't Fix): mkcephfs --mkfs is not inserting any default settings
- 03:13 PM Bug #2690 (Won't Fix): mon: persist quorum features
- we should use compatset features in cases where this is unsafe.
- 03:12 PM Bug #2207 (Resolved): osd: crash when op length is greater than op input data
- 03:12 PM Bug #5291 (Can't reproduce): Bug with client naming for Cinder-Volume usage
- 03:11 PM Bug #2618 (Can't reproduce): error: unable to open OSD superblock
- 03:10 PM Bug #5891 (Won't Fix): rados bench displaying wrong unit
- we want to use base-2, but MiB etc is not pretty
- 03:09 PM Bug #3660 (Resolved): osd: marking objects lost invalidates pg stats
- 03:09 PM Bug #2354 (Resolved): osd: make watch timeout configurable
- 03:08 PM Bug #2902 (Resolved): common lib tries to open literal ~/.ceph/ceph.conf
- 03:08 PM Bug #2507 (Resolved): auth: "ceph auth get-or-create-key" argument validation is lacking
- 03:08 PM Bug #2205 (Won't Fix): mkcephfs throws "No such file or directory" errors when the pwd the script...
- 03:08 PM Bug #1036 (Won't Fix): obsync: handle LFN for file://
- 03:07 PM Bug #2551 (Rejected): leveldb broke "make distcheck"
- 03:07 PM rgw Bug #3896 (Can't reproduce): rest-bench common/WorkQueue.cc: 54: FAILED assert(_threads.empty())
- 03:07 PM Bug #5078 (Won't Fix): Debian missing sudo results in unclear error
- 03:07 PM Bug #5471 (Resolved): mon: do not join a quorum if quorum's version is lower than ours
- 03:06 PM CephFS Bug #3551 (Can't reproduce): mds: journaler hang
- 02:57 PM Bug #2914 (Resolved): librados set_complete_callback, set_safe_callback clobber each other's argu...
- 02:51 PM Bug #3526 (Resolved): Commands mentioned in documentation are incomplete ?
- 02:50 PM Bug #3584 (Resolved): Ranlib fails from 64-bit client on a file in 32-bit based Ceph cluster.
- i believe this has been fixed.
- 02:49 PM rgw Bug #6111 (Resolved): rgw: multipart upload fails when last chunk < 512k
- 02:49 PM CephFS Bug #3598 (Resolved): MDS should shut down cleanly on EBLACKLIST
- 02:48 PM Bug #3780 (Won't Fix): pg_num inappropriately low on new pools
- 02:48 PM Bug #2828 (Resolved): osd: assign_bid was allowed to mutate and return data
- 02:48 PM Bug #2653 (Resolved): Web docs point to obsolete "fusermount" page
- 02:47 PM Bug #3300 (Resolved): ceph::buffer::end_of_buffer isn't caught
- 02:46 PM Bug #3899 (Won't Fix): osd: failed to decode object_info_t
- 02:46 PM Bug #3268 (Rejected): osd: localize reads handling is incorrect
- 02:45 PM Bug #3894 (Closed): monclient: --keyring failed despite presence of file
- 02:45 PM Bug #2890 (Resolved): monitor: "recognize" heap commands
- 02:44 PM Bug #3972 (Resolved): new boost dependency: libboost-program-options
- 02:42 PM Messengers Bug #1674 (Can't reproduce): daemons crash when sent random data
- 02:41 PM Bug #3903 (Resolved): OSDMap::raw_pg_to_pps causes pools to have similar mappings
- 02:37 PM Bug #4780 (Resolved): RBD-Enabling Discard Trim
- 02:37 PM CephFS Bug #5021: ceph-fuse: crash on traceless reply
- What's the status of wip-5021?
- 02:36 PM Bug #3434: Unknown variables in test_xattr_support
- I'm curious why this is set to Won't Fix - the bug still exists in master.
- 02:32 PM Bug #3434 (Won't Fix): Unknown variables in test_xattr_support
- 02:35 PM Bug #4344 (Can't reproduce): osd/ReplicatedPG.cc: 5378: FAILED assert(pi.recovery_info.soid.snap ...
- 02:34 PM CephFS Bug #6087 (Resolved): mds: do not loop on old dirs missing backpointer xattrs
- I put this into Dumpling. The issue didn't exist in Cuttlefish since Yan hadn't written the open-by-ino code at that ...
- 02:33 PM Bug #4109 (Duplicate): incorrect degraded count
- same as negative degraded
- 02:33 PM Fix #4205: librados: Improve Watch-notify semantics
- sam, figure out what this means.
- 02:29 PM Bug #6110 (Resolved): v0.67.1 branch is missing on gitbuilder for debian precise
- There is no v0.67.1 branch on http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/
and even more confusing is ... - 02:28 PM Bug #2891 (Can't reproduce): heap profiler hangs when trying to start it up on the mon
- 02:27 PM Bug #5251 (Can't reproduce): wrong node messages in mds log
- 02:27 PM Bug #5449 (Can't reproduce): osd crash immediately after booting up
- 02:26 PM Bug #5459 (Resolved): ceph-mon failure using wip-mon-pgmap on ARM
- 02:26 PM Bug #5500 (Resolved): ceph CLI should validate, reject bad daemon commands
- 02:24 PM Bug #5733 (Won't Fix): monitor: validate pg_temp entries from OSDs
- remove_down_pg_temp() cleans up
- 02:22 PM Bug #5946 (Resolved): lockstatus.get_status called although check-locks: false
- Has been merged already, closing.
- 02:17 PM Bug #5946: lockstatus.get_status called although check-locks: false
- pull request?
- 02:17 PM Bug #5788 (Resolved): ceph: try new, fallback to old can race with daemon upgrade
- 02:15 PM Bug #5972 (Won't Fix): Permissions on /var/run/ceph changed causing permission error messages
- 02:14 PM Fix #5989: librados: document that bufferlist usage model is inconsistent
- 02:13 PM Bug #6043 (Need More Info): upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
- is this still a problem? unless we can figure out the sequence to reproduce this i'm not sure what to do here. upst...
- 02:08 PM Bug #5932 (Won't Fix): osdmaptool --create-from-conf ignore "osd pool default pg[p] num"
- pg split now works; let's rely on that instead.
also nothing really uses osdmaptool to create the initial osdmap... - 02:05 PM Bug #5395: arm: osd: big performance differential between read/write
- is this still present?
- 02:04 PM Bug #5445 (Can't reproduce): random osd EPERM on journal
- 02:04 PM Bug #5641 (Resolved): occasional crush_ops.sh failure
- 02:03 PM Bug #5776 (Can't reproduce): ceph: passing -1 osd id
- 02:02 PM Bug #5700 (Can't reproduce): very high memory usage after update
- Don't see anything strange from the core; I suspect this is just lots of pgs...
- 02:01 PM Bug #5823: cpu load on cluster node is very high, client can't get data on pg from primary node ...
- what kernel are you running?
- 02:00 PM rbd Bug #5890 (Need More Info): can't remove rbd image from pool
- does 'ceph health' say OK?
- 01:59 PM Bug #5925 (Can't reproduce): hung ceph_test_rados_delete_pools_parallel
- 01:57 PM Bug #5981: osd: journal didn't preallocate
- the problem is that ceph-disk creates the journal but does not allocate it
- 01:51 PM Documentation #6107 (Resolved): Broken link on upgrade doc
- See http://ceph.com/docs/master/install/upgrading-ceph/
- 12:39 PM Documentation #6107 (Resolved): Broken link on upgrade doc
- There is a broken link in the doc at http://ceph.com/docs/next/install/upgrading-ceph/
The broken link is: http://... - 01:51 PM Bug #5985 (Resolved): very slow recovery for some objects
- 01:51 PM Bug #5923 (Duplicate): osd: 6 up, 5 in; 91 active+clean, 1 remapped
- 01:51 PM Bug #5901 (Duplicate): stuck incomplete immediately after clean
- 01:50 PM Bug #5922 (In Progress): osd: unfound objects on next
- 01:30 PM rbd Bug #5615 (Duplicate): lock ops are not re-sent when cluster gets marked un-full
- this is the linger resend on unfull bug #6070
- 01:27 PM rbd Bug #5812 (Need More Info): qemu-kvm guest hangs on disk write with rbd storage
- 01:13 PM rbd Bug #5812: qemu-kvm guest hangs on disk write with rbd storage
- Since this mostly goes away with caching enabled, I'm guessing this is the same as #5919 - does it still occur with 0...
- 01:26 PM rbd Bug #3619 (Resolved): librbd: read_iterate sparse behavior broken
- 01:25 PM rbd Bug #5184 (Resolved): libceph: create_singlethread_workqueue() error handling
- Dan Carpenter fixed this
- 01:24 PM rbd Bug #5955 (In Progress): qemu deadlock when librbd caching enabled (writethru or writeback).
- 01:20 PM rgw Bug #5362 (Resolved): rgw: failure when listing objects with prefix that starts with underscore
- 01:20 PM rgw Bug #4410 (Can't reproduce): rgw: exits uncleanly on fastcgi socket error
- 01:18 PM rgw Bug #5374 (Fix Under Review): Avoid relying on keystone's admin token
- 01:18 PM rgw Bug #5374: Avoid relying on keystone's admin token
- wip-5374
- 01:16 PM rgw Bug #5820 (Resolved): radosgw-admin should fail on non-valid flags
- Pull request merged.
- 01:16 PM rgw Bug #5885 (Resolved): Valgrind issue found while running s3 and swift tests
- looks like leaks
- 01:13 PM rgw Bug #5192 (Won't Fix): RGW: radosgw-admin user rm --access-key not working on bobtail
- 01:12 PM rgw Bug #5953 (Resolved): rgw: drain requests when going down
- 01:09 PM rgw Bug #6046 (Resolved): rgw: empty pool created for control objects
- 01:06 PM RADOS Fix #6109 (New): pg <pgid> mark_unfound_lost fails if a completely-gone OSD still in map
- cluster on mira045 et al. had a bad disk on osd.25; marked out, much data extracted, but for some
reason one pgid (2.... - 12:48 PM Bug #6108 (Resolved): broken readdir_r usage
- the buffer needs to be ~ sizeof(struct dirent) + PATH_MAX; it can't be a struct dirent or else the filename itself wi...
- 12:44 PM CephFS Bug #5649 (Can't reproduce): smbtorture test gets ebusy on kclient umount
- hasn't come up in a few weeks.
- 12:43 PM CephFS Bug #5927: kcephfs: ENOTEMPTY on rm -r
- 10:39 AM Bug #6090 (Pending Backport): mon/OSDMonitor.cc: 186: FAILED assert(err == 0)
- 10:17 AM rgw Bug #6056 (Resolved): rgw: sync agent is not propagating bucket delete
- Fixed, commit:2632846e24e3c26139e982e0a569951d25e1589b
- 09:58 AM rbd Bug #5426: librbd: mutex assert in perfcounters::tinc in librbd::AioCompletion::complete()
- ubuntu@teuthology:/a/teuthology-2013-08-23_00:30:06-ceph-deploy-master---basic-saya/1087
slightly different, thoug... - 09:40 AM Feature #5909 (In Progress): mon: keep track of monitor store size estimate vs 'du $mon_data'
- 08:49 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Ooh, I think I know what this is. This is probably cuttlefish v0.61.7 or older, right? There is a fix in dumpling (...
- 03:29 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage,
sorry for being late on this; other tasks kept me busy,
but here is the info you were asking for:
> > ceph... - 08:29 AM Bug #6049: pgmap json output shows bytes_* values quadrupled
- oops, sorry about that. pushed to dumpling branch. commit:a0ac88272511d670b5c3756dda2d02c93c2e9776
- 02:52 AM Bug #6049: pgmap json output shows bytes_* values quadrupled
- Maybe this warrants a backport to dumpling? The fix is tiny and risk-free.
- 08:16 AM Bug #6085 (Resolved): specify filetype flag (-t) when calling mount
- 06:03 AM Bug #6085 (Fix Under Review): specify filetype flag (-t) when calling mount
- Opened pull request: https://github.com/ceph/ceph/pull/534
- 07:24 AM devops Bug #6104 (Resolved): ceph-deploy should workaround pseudo-tty in SSH
- When connecting to a host that does not allow `sudo` over SSH, it returns an error similar to:...
- 05:27 AM devops Bug #6102 (Resolved): if EPEL has been added skip adding it again
- ceph-deploy as of 1.2.2 adds the EPEL repo to CentOS and Scientific; it should not try to add this again if it alread...
- 03:11 AM rgw Bug #5931: radosgw crashes when deleting object
- Stumbled upon the same problem using bobtail binaries for ubuntu precise (from http://ceph.com/debian-bobtail repo), ...
08/22/2013
- 08:53 PM Bug #6101 (Can't reproduce): ceph-osd crash on corrupted store
- I see a problem with one of my dumpling OSD under debian, on a compressed btrfs. I think my BTRFS is corrupted or som...
- 06:40 PM Bug #5951: osd: next: EEXIST on mkcoll
- Nothing useful from the last one.
~/teuthology [mine?] » ./virtualenv/bin/teuthology-schedule --name "samuelj-5951... - 06:01 PM Bug #5239 (Can't reproduce): osd: Segmentation fault in ceph-osd / tcmalloc
- Let us know if this is still happening for you. Thanks!
- 06:00 PM Bug #5695 (Resolved): Debian packaging fails when removed but not purged
- 05:58 PM CephFS Bug #5883 (Resolved): mds: broken locking, ref count in handle_accept
- 05:53 PM Bug #6090 (Fix Under Review): mon/OSDMonitor.cc: 186: FAILED assert(err == 0)
- 03:56 PM Bug #6090: mon/OSDMonitor.cc: 186: FAILED assert(err == 0)
- 10:41 AM Bug #6090 (Resolved): mon/OSDMonitor.cc: 186: FAILED assert(err == 0)
- ...
- 05:18 PM Bug #6099 (Resolved): ceph-rest-api: default log file doesn't work because not daemon
- ceph-rest-api assumes that it can either get the user's choice or the default log file from
rados_conf_get("log_file... - 04:42 PM devops Feature #6098 (Rejected): put teuthology.front.sepia.ceph.com apache configuration files under so...
- There are several configuration files that are on teuthology.front.sepia.ceph.com that should probably be saved in gi...
- 04:32 PM Bug #6097 (Resolved): btrfs locking regression on async snap ioctl
- ...
- 01:40 PM rgw Bug #6078 (Fix Under Review): rgw: CORS not working
- Pushed a bunch of changes to wip-6078.
- 12:15 PM devops Bug #5499 (Resolved): ceph-deploy --cluster clustername osd prepare fails
- Merged into ceph-deploy master branch: 9605cefd71770118097a11f99a9fc27c1e30b1f5
- 09:57 AM devops Bug #5499 (In Progress): ceph-deploy --cluster clustername osd prepare fails
- 08:30 AM devops Bug #5499: ceph-deploy --cluster clustername osd prepare fails
- Thanks for the update, this should get fixed today with a release before the end of the week.
- 01:36 AM devops Bug #5499: ceph-deploy --cluster clustername osd prepare fails
- Bah, redmine formatting sucks... Patch attached.
- 01:32 AM devops Bug #5499: ceph-deploy --cluster clustername osd prepare fails
- This should fix it:
--- /usr/share/pyshared/ceph_deploy/osd.py
+++ /usr/share/pyshared/ceph_deploy/osd.py
@@ -113,... - 11:06 AM devops Bug #6091 (Won't Fix): centos build should use redhat-rpm-config for debuginfo packages
- The specfile currently directly invokes the debuginfo macro for centos builds. It should instead use the redhat-rpm-...
- 10:16 AM rgw Bug #6088: rgw: When uploading via POST specifying text instead of file formdata input field, a s...
- Version used: 0.56.6-15-g8c6a912
- 10:11 AM rgw Bug #6088 (Resolved): rgw: When uploading via POST specifying text instead of file formdata input...
- Specific text from customer:
??When uploading via POST if user specifies the "file" formdata input field as "text"... - 09:55 AM devops Bug #6086 (Resolved): ceph-deploy needs to handle ClientInitExceptions from pushy
- Merged into ceph-deploy master branch with hash: dfaa9d3274b3c8c0dcfce94062532649bf212fb9
Basically, a simple try/... - 09:49 AM devops Bug #6086 (Fix Under Review): ceph-deploy needs to handle ClientInitExceptions from pushy
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/48
- 07:34 AM devops Bug #6086 (Resolved): ceph-deploy needs to handle ClientInitExceptions from pushy
- When pushy can't connect to a remote host it errors out and ceph-deploy does not handle the exception
resulting in a... - 09:53 AM devops Bug #6077 (Resolved): lsb_release should not be required for purging CentOS/Scientific
- Merged into ceph-deploy master branch with hash: 5d7304cabcca4bde7dd439d4578300777b28575c
Moved the helpers for ls... - 07:34 AM devops Bug #6077 (Fix Under Review): lsb_release should not be required for purging CentOS/Scientific
- Pull request opened: https://github.com/ceph/ceph-deploy/pull/47
- 09:42 AM Bug #6003: journal Unable to read past sequence 406 ...
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2013-08-22_01:00:13-rados-next-testing-basic-plana/804
- 09:36 AM Bug #6081 (Duplicate): osd crashed during upgrade tests from dumpling to next in the nightlies
- dup of #6082
- 09:05 AM rbd Bug #5636: krbd: crash in image refresh
- again on ubuntu@teuthology:/a/teuthology-2013-08-22_01:01:30-krbd-next-testing-basic-plana/1020...
- 08:57 AM rbd Bug #5426: librbd: mutex assert in perfcounters::tinc in librbd::AioCompletion::complete()
- ubuntu@teuthology:/a/teuthology-2013-08-22_01:01:01-rbd-next-testing-basic-plana/888
- 08:50 AM Bug #6047 (In Progress): mon: Assert and monitor-crash when attempting to create pool-snapshots wh...
- 08:24 AM Feature #5909 (Fix Under Review): mon: keep track of monitor store size estimate vs 'du $mon_data'
- wip-5909 / pr: #526
- 08:15 AM CephFS Bug #6087 (Resolved): mds: do not loop on old dirs missing backpointer xattrs
- ...
- 07:16 AM Bug #6085: specify filetype flag (-t) when calling mount
- This is the actual patch that fixes this problem:...
- 07:11 AM Bug #6085 (Resolved): specify filetype flag (-t) when calling mount
- When not specifying the filetype `mount` will refuse to mount the filesystem created as seen in this thread from ceph...
08/21/2013
- 05:10 PM Bug #6083 (Resolved): fedora18 rpm packages for ceph should be built with proper naming conventio...
- Currently, in the nightlies, the ceph-debug package is looked for in the format "ceph-debug-0.67.1-11.gf6fe74f.fc18", but th...
- 04:48 PM Feature #5984: mon: probe monitors to check on their status regardless of quorum
- 04:44 PM Support #6070 (Resolved): list_lockers() never returns after disk full (librbdpy)
- commit:38a0ca66a79af4b541e6322467ae3a8a4483cc72 in master, next, dumpling and cuttlefish
- 03:06 PM Support #6070 (In Progress): list_lockers() never returns after disk full (librbdpy)
- There's a fix waiting for review in the wip-6070-cuttlefish branch. It applies cleanly to the next branch as well.
- 04:24 PM Bug #6081 (Duplicate): osd crashed during upgrade tests from dumpling to next in the nightlies
- logs: ubuntu@teuthology:/a/teuthology-2013-08-21_01:35:03-upgrade-parallel-next-testing-basic-vps/5197...
- 03:21 PM rbd Fix #6079 (Resolved): libceph: osd_client does not handle PAUSERD or PAUSEWR or FULL flags in osdmap
- When these flags are present, reads and/or writes should not be sent. When these flags are removed, requests that wer...
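A minimal sketch of the intended gating, under the assumption that reads are held while PAUSERD is set and writes while PAUSEWR or FULL is set; the flag constants and `may_send` helper here are illustrative (the real kernel client defines CEPH_OSDMAP_PAUSERD/PAUSEWR/FULL in its osdmap headers):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel client's osdmap flags. */
#define OSDMAP_PAUSERD 0x1u
#define OSDMAP_PAUSEWR 0x2u
#define OSDMAP_FULL    0x4u

/* Decide whether a request may be sent under the current map flags.
 * Held requests would be queued and resent once the flags clear. */
static bool may_send(bool is_write, unsigned map_flags)
{
    if (is_write)
        return !(map_flags & (OSDMAP_PAUSEWR | OSDMAP_FULL));
    return !(map_flags & OSDMAP_PAUSERD);
}
```

Note that FULL only blocks writes; reads remain allowed on a full cluster.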
- 02:21 PM rgw Bug #6078 (Resolved): rgw: CORS not working
- 01:36 PM devops Feature #5847: Build own versions of most recent leveldb for all supported platforms.
- Both the native leveldb-1.12 and our locally compiled version work correctly for mon create. So the issue encountere...
- 11:47 AM devops Feature #5847 (In Progress): Build own versions of most recent leveldb for all supported platforms.
- Re-opening since the leveldb-1.12 backported from fedora19 hangs during monitor create on centos/rhel 6.3 & 6.4 as de...
- 11:51 AM Bug #6022 (Resolved): monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- Deleting leveldb-1.12 from the ceph-extras repo, and from the local mirror used by teuthology falls back to the level...
- 11:37 AM Bug #5951: osd: next: EEXIST on mkcoll
- No failures again. Switched the yaml to xfs; more failures so far with xfs than with ext4.
~/teuthology [mine?] » ./v... - 11:17 AM Bug #6071 (Resolved): rados api test LibRadosMisc.BigAttrPP failed on the arm set up
test passed....- 10:58 AM Bug #6071: rados api test LibRadosMisc.BigAttrPP failed on the arm set up
- yes, the test set up had different versions for ceph-test[defaults to master branch by install task as no branch was ...
- 11:13 AM rgw Bug #6046 (Pending Backport): rgw: empty pool created for control objects
- 11:05 AM devops Bug #6077 (Resolved): lsb_release should not be required for purging CentOS/Scientific
- lsb_release seems to still be required for some ceph-deploy actions; purge/purgedata use it and should really attempt...
- 11:04 AM Bug #5412 (Resolved): doc bug: incorrect reference to monitor quorum requirements
- http://ceph.com/docs/master/rados/deployment/ceph-deploy-mon/
- 10:57 AM devops Documentation #5968 (Resolved): typo in monmap
- http://ceph.com/docs/master/man/8/monmaptool/
- 10:53 AM rgw Documentation #5525 (Resolved): Radosgw 'add the ceph keyring entries' section should be updated ...
- http://ceph.com/docs/master/radosgw/config/#add-to-ceph-keyring-entries
http://ceph.com/docs/master/start/quick-rgw/... - 10:49 AM Fix #6075 (Rejected): ceph.client.admin.keyring doesn't allow read to non-root users
- The ceph-create-keys script creates the file with 0600 and this causes issues when deploying. Our docs have to add th...
- 10:47 AM rgw Bug #6056 (In Progress): rgw: sync agent is not propagating bucket delete
- You've got my Reviewed-by: with the comment change we discussed, assuming you've tested it.
- 10:44 AM Documentation #5926 (Resolved): 5 minute quick start should deploy cluster using ceph-deploy and ...
- mkcephfs references removed from documentation.
- 10:44 AM devops Documentation #5688 (Resolved): ceph-deploy: upgrade procedure has to be documented
- http://ceph.com/docs/master/install/upgrading-ceph/
- 10:43 AM Bug #6074 (Duplicate): [ERR] scrub mismatch
- This looks like #5754. It's a bug in leveldb on precise, but harmless.
- 10:29 AM Bug #6074 (Duplicate): [ERR] scrub mismatch
- While running the rados suite with...
- 10:33 AM rgw Feature #5604 (Resolved): rgw: teuthology tests to check various user creation issues on multi re...
- These tests were added via commit #a39e7f1b095d3cb07f15ed065b4841d8730ed584
- 10:29 AM rgw Feature #5603 (Resolved): rgw: teuthology test to check secondary region creation
- This test case is a subset of 5604. Closing it as resolved since 5604 has been resolved.
- 10:24 AM rgw Feature #5602 (Resolved): rgw: teuthology task to test default region as master region
- This test case was checked into ceph-qa-suite as suites/rgw/singleton/all/rados-convert-to-region.yaml
commit #c37fa... - 09:56 AM rgw Bug #6051 (Resolved): rgw: 404 during readwrite test
- The issue was that the tests were not specifying the 'domain root pool' and the pool name generated by the rgw.py tas...
- 08:52 AM Bug #6073 (Can't reproduce): osd: mark_me_down sequence is racy
- ...
- 03:06 AM Subtask #5879 (In Progress): jerasure plugin
- 02:06 AM rbd Bug #6072 (Resolved): librbd image rename breaks child backwards reference
- Renaming an rbd image that has clones, using a long name, will break Image().parent_info(), i.e. reverse lookup, and th...
08/20/2013
- 10:58 PM rgw Bug #6056: rgw: sync agent is not propagating bucket delete
- Comment on github.
- 09:48 AM rgw Bug #6056: rgw: sync agent is not propagating bucket delete
- Tag, Greg, you're it.
- 09:40 AM rgw Bug #6056: rgw: sync agent is not propagating bucket delete
- Josh - can you please review?
- 10:40 PM devops Bug #5599 (Pending Backport): ceph-disk: prepare should issue a partprobe on the journal device too
- 05:51 PM Bug #6071: rados api test LibRadosMisc.BigAttrPP failed on the arm set up
- It is sending a 40MB xattr and failing; it should be sending 64K. Note that 2 lines down from osd max attr in confi...
- 05:40 PM Bug #6071: rados api test LibRadosMisc.BigAttrPP failed on the arm set up
- logs with debug on is in: mira025: /home/ubuntu/bug_6071_latest
- 05:15 PM Bug #6071: rados api test LibRadosMisc.BigAttrPP failed on the arm set up
- xfs
- 05:12 PM Bug #6071: rados api test LibRadosMisc.BigAttrPP failed on the arm set up
- logs are copied to ubuntu@mira025:/home/ubuntu/bug_6071
- 05:09 PM Bug #6071: rados api test LibRadosMisc.BigAttrPP failed on the arm set up
- the test is...
- 05:04 PM Bug #6071 (Resolved): rados api test LibRadosMisc.BigAttrPP failed on the arm set up
- The rados API test failed on the ARM test setup when using the ceph-deploy task, while it still worked with the install task.
... - 05:27 PM Bug #5951: osd: next: EEXIST on mkcoll
- ~/teuthology [mine?] » ./virtualenv/bin/teuthology-schedule --name "samuelj-5951-7" -n 50 --owner samuelj@slider test...
- 03:47 PM Bug #5951: osd: next: EEXIST on mkcoll
- ~/teuthology [mine?] » ./virtualenv/bin/teuthology-schedule --name "samuelj-5951-6" -n 50 --owner samuelj@slider test...
- 01:10 PM Bug #5951: osd: next: EEXIST on mkcoll
- ubuntu@teuthology:/a/teuthology-2013-08-20_01:00:13-rados-next-testing-basic-plana/2690
- 09:18 AM Bug #5951: osd: next: EEXIST on mkcoll
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2013-08-19_20:00:16-rados-dumpling-testing-basic-plana/2087
- 05:23 PM Bug #5922: osd: unfound objects on next
- After examining the running process of one of these, it really looks like either the replica ignored the message for ...
- 04:58 PM CephFS Bug #6004 (Pending Backport): osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(lof...
- 04:40 PM Support #6070 (Resolved): list_lockers() never returns after disk full (librbdpy)
- In the situation where an OSD disk fills up and then we attempt to unmount a resource, we have a process which, among...
- 03:51 PM rbd Bug #5220: test_ls_snaps segfaults on the arm test setup
- teuthology logs are copied to ubuntu@mira025:/home/ubuntu/rbd_api_old
- 03:39 PM Bug #6040 (Resolved): Significant slowdown of osds since v0.67 Dumpling
- From ceph-users:
Hey Samuel,
I picked up 0.67.1-10-g47c8949 from the GIT-builder and the osd from
that seems t... - 12:53 PM devops Bug #6035: ceph-deploy: ceph-create-keys stuck on fedora 18 VMs
- ceph version 0.67.1 (e23b817ad0cf1ea19c0a7b7c9999b30bed37d533)
Looks like the mon create command doesn't hang anym... - 12:04 PM devops Bug #6035: ceph-deploy: ceph-create-keys stuck on fedora 18 VMs
- Tamil, I attempted to manually start the mon on one of those servers and got errors:...
- 12:26 PM Bug #6022: monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- Rebuilding leveldb-1.12 without the Basho patch seems to work ok. This patch is described as:
# Cherry-picked fro... - 12:06 PM Bug #6022: monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- Looks like the old leveldb-1.7.0 package works ok with dumpling on centos6.3, but the new leveldb-1.12 package does n...
- 08:02 AM Bug #6022: monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- 6.3 fails in the same way:...
- 12:20 PM Bug #6049 (Resolved): pgmap json output shows bytes_* values quadrupled
- 11:50 AM devops Feature #4954 (New): ceph-deploy: help and document need to be updated for osd create
- 11:28 AM Bug #6045 (Pending Backport): mon/OSDMonitor.cc: 1609: FAILED assert(err == 0)
- 10:14 AM devops Feature #6067 (Resolved): ceph-deploy: make mon create catch common errors
- a few ideas:
- add a --add argument that is needed to expand the mon cluster. if not present, we will only procee... - 09:52 AM devops Bug #6019 (Resolved): ceph-deploy needs to better detect yum/apt for bootstraping
- Merged into ceph-deploy master: 252c21dec59ba1ff407362a6b21f043b7b8947ef
We are now making sure we are adding the ... - 08:53 AM devops Bug #6019 (Fix Under Review): ceph-deploy needs to better detect yum/apt for bootstraping
- Opened pull request: https://github.com/ceph/ceph-deploy/pull/46
- 08:07 AM rbd Bug #5955: qemu deadlock when librbd caching enabled (writethru or writeback).
- This hang occurred frequently with qemu 1.4.0, but after a week of trying, I cannot reproduce this bug under qemu 1.5...
- 04:27 AM Subtask #6064 (Rejected): erasure code : convenience functions to code / decode
- It would be useful to have convenience functions that "work in terms of offset+length instead of chunks":http://arti...
08/19/2013
- 10:54 PM Bug #6057 (Resolved): osd: log bound mismatch after bobtail -> dumpling -> next upgrade
- yay, tested ok for me too. merged and backported
- 10:42 PM Bug #6057: osd: log bound mismatch after bobtail -> dumpling -> next upgrade
- Ran the above yaml on wip-6057, seems to work.
- 05:59 PM Bug #6057: osd: log bound mismatch after bobtail -> dumpling -> next upgrade
- wip-6057
- 04:55 PM Bug #6057: osd: log bound mismatch after bobtail -> dumpling -> next upgrade
- fatty:/home/sage/tmp/6057/ceph-osd.0.log for the full log
- 04:55 PM Bug #6057: osd: log bound mismatch after bobtail -> dumpling -> next upgrade
- ...
- 02:11 PM Bug #6057 (Resolved): osd: log bound mismatch after bobtail -> dumpling -> next upgrade
- 2013-08-19 06:24:27.814763 osd.2 0.0.0.0:6808/24619 1 : [ERR] 3.4 log bound mismatch, info (0''0,30''164] actual [21'...
- 10:47 PM Bug #5951: osd: next: EEXIST on mkcoll
- no failures in previous run
~/teuthology [mine?] » ./virtualenv/bin/teuthology-schedule --name "samuelj-5951-5" -n 5... - 03:43 PM Bug #5951: osd: next: EEXIST on mkcoll
- no failures in the last run
~/teuthology [mine?] » ./virtualenv/bin/teuthology-schedule --name "samuelj-5951-4" -n 5... - 11:08 AM Bug #5951: osd: next: EEXIST on mkcoll
- ~/teuthology [mine?] » ./virtualenv/bin/teuthology-schedule --name "samuelj-5951-3" -n 20 --owner samuelj@slider test...
- 08:58 AM Bug #5951: osd: next: EEXIST on mkcoll
- ubuntu@teuthology:/a/teuthology-2013-08-19_01:00:13-rados-master-testing-basic-plana/969
- 10:40 PM Bug #6040: Significant slowdown of osds since v0.67 Dumpling
- merged wip-dumpling-pglog-undirty with the config set to false into next and dumpling.
- 12:08 AM Bug #6040: Significant slowdown of osds since v0.67 Dumpling
- wip-dumpling-pglog-undirty may help with this.
- 06:09 PM devops Bug #5599: ceph-disk: prepare should issue a partprobe on the journal device too
- Actually this patch (attached) is probably more in keeping with the code style already used in ceph-disk, uses partpr...
- 05:13 PM devops Bug #5599: ceph-disk: prepare should issue a partprobe on the journal device too
- Fyi a tentative patch has been suggested, using partx rather than partprobe (no idea which might be best mind you):
... - 05:06 PM rgw Bug #6056 (Fix Under Review): rgw: sync agent is not propagating bucket delete
- We end up not removing the bucket entry point, although the bucket is unlinked from the user.
- 10:04 AM rgw Bug #6056: rgw: sync agent is not propagating bucket delete
- I should have noted that I would expect that info for the bucket should not be found on either the source or the dest...
- 10:02 AM rgw Bug #6056 (Resolved): rgw: sync agent is not propagating bucket delete
- A new test that deletes an existing bucket on the source, then does a sync, then tries to get info for that bucket on...
- 04:10 PM Bug #5902 (Resolved): s3tests failure during parallel upgrade test
- backported
- 03:42 PM Bug #6003: journal Unable to read past sequence 406 ...
- 200 runs later and no luck reproducing this with logs.
/var/lib/teuthworker/archive/sage-bug-6003-a
200 passes
- 09:38 AM Bug #6003: journal Unable to read past sequence 406 ...
- Run this tasks repeatedly with logging
- 03:24 PM Fix #6059 (Resolved): osd: block reads while repgather is writing across replicas
- Currently we use the ondisk_write/read locks to do mutual exclusion over the local filestore which avoids reading dat...
- 03:21 PM Feature #5905 (Resolved): hello world librados program (with explanatory comments!)
- Merged into master, commit:823435ce650a2be0523eba0d91dc9feb28b795f7
- 02:42 PM Bug #6058 (Duplicate): upgrading from bobtail to dumpling to next: log bound mismatch and wrong n...
- 02:39 PM Bug #6058 (Duplicate): upgrading from bobtail to dumpling to next: log bound mismatch and wrong n...
- These failures are seen when running the rgw upgrade tests from bobtail to dumpling to the next branch.
logs from the n... - 01:37 PM CephFS Bug #5039 (Resolved): client: unlinking files leaves the cached entry behind
- 01:04 PM Bug #6043: upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
- Getting somewhere, but it still can't find it....
- 12:31 PM Bug #6043 (In Progress): upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
- Is the ceph package still installed? Some older versions didn't stop the jobs before they uninstalled, which might e...
- 11:54 AM Bug #6043: upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
- well......
- 09:33 AM Bug #6043 (Rejected): upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
- stop ceph-osd id=0
or
stop ceph-osd-all
- 12:50 PM Bug #6052 (Resolved): ceph cli doesn't respect CEPH_ARGS
- 09:42 AM Bug #6052 (Fix Under Review): ceph cli doesn't respect CEPH_ARGS
- 09:01 AM Bug #6052 (Resolved): ceph cli doesn't respect CEPH_ARGS
- ...
- 12:29 PM Feature #6036 (Fix Under Review): cachepool: osd: add objecter
- 11:20 AM Bug #5988 (Resolved): librados: synchronous IO generally returns on ack instead of commit
- Merged into master, thanks Sage.
- 11:16 AM Bug #5988 (Fix Under Review): librados: synchronous IO generally returns on ack instead of commit
- wip-5988, commit:4e86be9232602ed595d885fcaeda5e47ad9a2a6a, pull request 512.
- 11:19 AM Bug #5979 (Resolved): librados: imposes internal tooling expectations on users
- Merged into master.
- 11:15 AM Bug #5979 (Fix Under Review): librados: imposes internal tooling expectations on users
- wip-5988, commit:f5636be742bffb19f16fdb832891fd1a43679ccf. Pull request 512.
- 09:53 AM Fix #5844: osd: snaptrimmer should throttle itself
- (09:57:41 AM) sjust: xdeller: right, the only way to fix that would be to increase OSDMap propagation speed
Regard... - 09:42 AM rgw Bug #6051: rgw: 404 during readwrite test
- Missed adding the traceback entry to an error dict in one codepath. I've pushed a one-line fix to the s3-tests branch...
- 09:00 AM rgw Bug #6051 (Resolved): rgw: 404 during readwrite test
- ...
- 09:39 AM rgw Bug #6046 (Fix Under Review): rgw: empty pool created for control objects
- 09:36 AM Bug #6041 (Resolved): Failing to add 3rd monitor
- Please upgrade to 0.67(.1) dumpling; 0.64 is an interim development release that doesn't get backported fixes (as 0.6...
- 08:53 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- bernhard glomm wrote:
> Sage Weil wrote:
> > bernhard: i think the problem in your case is that you have old keyrin... - 12:40 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage Weil wrote:
> bernhard: i think the problem in your case is that you have old keyrings in /etc/ceph from prior ... - 07:34 AM Subtask #5862: FileStore must work with ghobjects rather than hobjects
- Here is how I understand "stripe":https://www.usenix.org/legacy/events/fast09/tech/full_papers/plank/plank.pdf / shar...
- 07:11 AM devops Bug #6019 (In Progress): ceph-deploy needs to better detect yum/apt for bootstrapping
- 06:44 AM Subtask #5878: erasure plugin mechanism and abstract API
- Wido den Hollander wrote:
> Will this be case sensitive? I would suggest not, since that will confuse users. I pers... - 06:32 AM Subtask #5878: erasure plugin mechanism and abstract API
- I haven't looked at it in-depth, but one thing I noticed is that Reed-Solomon is always spelled with the first two le...
- 02:44 AM Bug #6047: mon: Assert and monitor-crash when attempting to create pool-snapshots while rbd-snapsh...
- This is pretty much the same as #5959, which was reported on cuttlefish and which we believed to have been fixed in commit...
08/18/2013
- 10:55 PM Bug #6022 (New): monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- Verified that I can reproduce this. My first guess is a problem with the leveldb package on 6.4. Does 6.3 pass?
- 10:55 PM Bug #6022: monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- 10:32 PM Bug #6049 (Resolved): pgmap json output shows bytes_* values quadrupled
- I was looking at the "ceph --format=json status" pgmap bytes_{used,avail,total} values with the goal of using them to...
- 09:08 PM Bug #5897 (Resolved): ceph_test_rados_api_watch_notify hang on LibRadosWatchNotify.WatchNotifyTim...
- fix preceded dumpling; backported to cuttlefish branch
- 09:06 PM rgw Bug #5949 (Resolved): radosgw: leaks
- 11:42 AM devops Bug #4924 (Resolved): ceph-deploy: gatherkeys fails on raring (cuttlefish)
- closing out this bug. i think i captured everything we learned in http://pad.ceph.com/p/quorum_pitfalls along with a...
- 11:36 AM rbd Bug #5919 (Resolved): qemu-1.4.0 and onwards, linux kernel 3.2.x, ceph-RBD, heavy I/O leads to ke...
- 11:08 AM Bug #6047 (Resolved): mon: Assert and monitor-crash when attempting to create pool-snapshots while...
- While playing around on my test-cluster, I ran into a problem that I've seen before, but have never been able to repr...
08/17/2013
- 09:24 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Hi Sage, sorted it just before your reply. Was idly scrolling back through the thread when I spotted the word 'iptabl...
- 09:09 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Michael Potter wrote:
> Hi Sage, took everything out of the host for except for #ipaddr# #subdomain-identifier#.#res... - 07:45 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Hi Sage, took everything out of the host for except for #ipaddr# #subdomain-identifier#.#resolveable-domain#
Cleaned... - 06:14 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Ah, I think the problem is
{ "rank": 0,
"name": "#subdomain-identifier#",
... - 05:54 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Sage Weil wrote:
> Michael Potter wrote:
> > Getting the same thing on a fresh install of CentOS 6.4
> >
> > [..... - 06:35 PM rgw Bug #6046 (Resolved): rgw: empty pool created for control objects
- 10:48 AM devops Feature #6017: ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no...
- how about
ceph-deploy mon create-initial
which will
1. do mon create on each mon in the mon_initial_quorum ... - 10:07 AM Bug #6045 (Fix Under Review): mon/OSDMonitor.cc: 1609: FAILED assert(err == 0)
- wip-6045
- 09:57 AM Bug #6045 (In Progress): mon/OSDMonitor.cc: 1609: FAILED assert(err == 0)
- we need to refresh any time we apply committed states to disk
- 09:47 AM Bug #6045 (Resolved): mon/OSDMonitor.cc: 1609: FAILED assert(err == 0)
- ...
- 10:01 AM CephFS Documentation #5797: Document unstable nature of CephFS
- My PR is still open?
- 09:10 AM Bug #5923 (Need More Info): osd: 6 up, 5 in; 91 active+clean, 1 remapped
- 09:10 AM Bug #5902 (Pending Backport): s3tests failure during parallel upgrade test
- 09:10 AM Bug #5901 (In Progress): stuck incomplete immediately after clean
- 09:09 AM Bug #6003 (Need More Info): journal Unable to read past sequence 406 ...
- 09:07 AM Bug #5959 (Resolved): Quorum is crashing on 'osd pool mksnap'
- backported by commit:64bef4ae4bab28b0b82a1481381b0c68a22fe1a4
- 09:00 AM Bug #5986: mon: FAILED assert(snaps.count(s)) when removing pool snap on 0.61.7
- backported in commit:411871f6bcc9a4b81140c2e98d13dc123860f6f7
- 09:00 AM Bug #5986 (Resolved): mon: FAILED assert(snaps.count(s)) when removing pool snap on 0.61.7
- 08:58 AM Bug #5985 (Pending Backport): very slow recovery for some objects
- 03:54 AM Bug #6043 (Won't Fix): upstart does not reflect running ceph-osd daemons (ubuntu 13.04 only)
- h3. Workaround
Using *restart* instead of *reload* restarts the daemons instead of sending them a signal that grac...
- 12:36 AM CephFS Bug #6004: osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(loff_t)length-opos)
- the fix looks good
- 12:00 AM Bug #6041 (Resolved): Failing to add 3rd monitor
- After adding an additional (3rd) monitor, that new monitor will crash during first sync.
Ceph version: 0.64
201...
08/16/2013
- 11:42 PM Feature #5964 (Resolved): ceph-post-file (to replace/supplement cephdrop)
- 11:35 PM Feature #6036 (In Progress): cachepool: osd: add objecter
- 03:17 PM Feature #6036 (Resolved): cachepool: osd: add objecter
- 10:08 PM CephFS Bug #4894 (Resolved): mds: standby shut itself down due to not having any data
- 10:56 AM CephFS Bug #4894 (Fix Under Review): mds: standby shut itself down due to not having any data
- wip-4894
saw this again in ubuntu@teuthology:/a/teuthology-2013-08-15_20:01:04-fs-cuttlefish-testing-basic-plana/1...
- 10:03 PM rbd Bug #5955 (Need More Info): qemu deadlock when librbd caching enabled (writethru or writeback).
- 09:09 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Diego Woitasen wrote:
> What do you think? https://github.com/ceph/ceph/pull/510
aha, i bet this is what is tripp...
- 06:50 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- What do you think? https://github.com/ceph/ceph/pull/510
- 05:55 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- I think the documentation is a little confusing. I had the same problem minutes ago and fixed it. In my scenario I ha...
- 05:43 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Michael Potter wrote:
> Getting the same thing on a fresh install of CentOS 6.4
>
> [...]
Can you post the out...
- 03:42 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Getting the same thing on a fresh install of CentOS 6.4...
- 09:58 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- Hi,
I'm hitting the same bug on Red Hat 6.3 (Santiago), purging /var/lib/ceph and /etc/ceph doesn't help.
- 09:26 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- bernhard: i think the problem in your case is that you have old keyrings in /etc/ceph from prior cluster instances. ...
- 06:59 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- ...
- 06:33 AM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- > runing ceph-create-keys manually gives:
>
> INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
> repea...
- 09:03 PM Bug #6040: Significant slowdown of osds since v0.67 Dumpling
- For completeness, the relevant part of "ceph.conf", the rest of which just defines a standard 3-node cluster, with mo...
- 06:20 PM Bug #6040: Significant slowdown of osds since v0.67 Dumpling
- Kernel: SMP Debian 3.2.46-1~bpo60+1 x86_64 GNU/Linux on Debian Squeeze.
QEMU/KVM: 1:1.1.2+dfsg-2~bpo60+1, recompiled... - 06:10 PM Bug #6040 (Resolved): Significant slowdown of osds since v0.67 Dumpling
- I'm running a Ceph-cluster with 3 nodes, each of which runs a mon, osd and mds. I'm using RBD on this cluster as sto...
- 06:02 PM rgw Bug #5953 (Pending Backport): rgw: drain requests when going down
- 05:23 PM rbd Bug #5220 (In Progress): test_ls_snaps segfaults on the arm test setup
- hitting this again on dumpling but when tried with rbd old format. hence, reopening the bug...
- 04:21 PM CephFS Bug #6004: osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(loff_t)length-opos)
- Zheng Yan wrote:
> Sage Weil wrote:
> > looks like a read vs truncate race...
> > [...]
>
> looks like client r... - 03:31 PM Feature #6000 (In Progress): EC: [link] erasure plugin mechanism and abstract API
- 03:26 PM Feature #6038 (Resolved): cachepool: filestore/osd: infrastructure for large object COPY atomic r...
- 03:24 PM Feature #6037 (Resolved): cachepool: osd: whiteout state
- 03:13 PM devops Bug #6035 (Closed): ceph-deploy: ceph-create-keys stuck on fedora 18 VMs
- logs: ubuntu@teuthology:/a/teuthology-2013-08-16_01:10:04-ceph-deploy-master-testing-basic-vps/109387...
- 02:39 PM Subtask #5862: FileStore must work with ghobjects rather than hobjects
Use a "generation" number (gen_t?) instead of version_t.
Use a "shard" or "slice" number (shard_t or slice_t) i...
- 02:20 PM Feature #6033 (Resolved): cachepool: osd: basic io decision: read/write from/to cache pool or EAG...
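The ghobject shape proposed for #5862 above can be sketched as a plain record; field names are taken from the comment, and the real Ceph types are C++ and differ in detail:

```python
# Hypothetical sketch: a ghobject is an hobject plus a generation number
# and a shard/slice id, so erasure-coded chunks of the same logical object
# can be addressed individually in the FileStore.
from collections import namedtuple

HObject = namedtuple("HObject", ["pool", "namespace_", "name"])
GHObject = namedtuple("GHObject", ["hobj", "gen", "shard"])

g = GHObject(HObject(pool=1, namespace_="", name="obj1"), gen=0, shard=2)
print(g.hobj.name, g.gen, g.shard)
```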
- 02:18 PM Feature #6032 (Resolved): cachepool: objecter: send requests to cache pool
- 02:17 PM Feature #6029: cachepool: osd: separate object version from pg version
- librados visible version separate from PG version. There must also be an Objecter interface usable (in the future) f...
- 02:14 PM Feature #6029 (Resolved): cachepool: osd: separate object version from pg version
- 02:17 PM Feature #6031 (Resolved): cachepool: osd: COPY from another pool; small objects only
- 02:15 PM Feature #5703 (Duplicate): Allow ceph-deploy to work with non-root account
- Duplicate of #3347
- 02:15 PM Feature #6030 (Resolved): cachepool: osd: pg_pool_t cache_pool property
- 02:08 PM Feature #5908 (Rejected): mon: formatted output sections should be consistent across services and...
- 02:06 PM Feature #5904 (Resolved): hello world osd class (with explanatory documentation/comments!)
- 02:04 PM Feature #6028 (Resolved): EC: [link] ensure that erasure coded pools don't work until the osds ca...
- 02:02 PM Subtask #6027 (Resolved): ensure that erasure coded pools don't work until the osds can handle it
- "work in progress":https://github.com/ceph/ceph/pull/941
Perhaps the OSDMap includes a lower bound set of feature ...
- 01:37 PM devops Feature #5775: create qemu rbd package for rhel 6.5 - qemu-rbd
- It's actually a symlink from /usr/lib64/qemu/librbd.so.1 to the librbd.so installed by the librbd package
- 01:31 PM devops Feature #5775: create qemu rbd package for rhel 6.5 - qemu-rbd
- depends on librbd
installs a symlink in /usr/lib/qemu to librbd.so installed by the librbd package
- 01:26 PM devops Feature #5775: create qemu rbd package for rhel 6.5 - qemu-rbd
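The qemu-rbd packaging layout described in the #5775 comments above can be sketched like this; a scratch directory stands in for the filesystem root, and the exact paths are taken from the comments:

```python
# Sketch: the qemu-rbd package ships only a symlink under /usr/lib64/qemu
# pointing at the librbd.so.1 that the separate librbd package installs.
import os
import tempfile

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "usr/lib64/qemu"))
# stand-in for the library installed by the librbd package
open(os.path.join(root, "usr/lib64/librbd.so.1"), "w").close()
# the only payload of the qemu-rbd package: a relative symlink
os.symlink("../librbd.so.1",
           os.path.join(root, "usr/lib64/qemu/librbd.so.1"))
print(os.readlink(os.path.join(root, "usr/lib64/qemu/librbd.so.1")))
```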
- 01:28 PM rbd Cleanup #5757 (Resolved): remove any fiemap reference from rbd.cc
- 01:25 PM rbd Feature #5774 (Resolved): test libvirt + qemu on rhel
- 01:22 PM rbd Feature #5774 (Need More Info): test libvirt + qemu on rhel
- ?
- 01:24 PM rgw Feature #2460 (Rejected): rgw: support multiple ceph backends
- 01:23 PM rgw Cleanup #3154 (Rejected): rgw: configurable auid when creating pools
- 01:20 PM devops Fix #5900: Create a Python package for ceph Python bindings
- I've made all the bindings individual packages and updated the `ceph-rest-api` script to use the right imports.
Instal...
- 01:12 PM rgw Documentation #5669: Default site in Apache interferes with Gateway
- This has been in the documentation for some time.
http://ceph.com/docs/master/radosgw/config/#enable-the-configurati... - 01:09 PM Bug #6022: monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- ...
- 01:09 PM Bug #6022 (Resolved): monitor crashed during ceph-deploy mon create on centos 6.4 and 6.3
- logs: ubuntu@teuthology:/a/teuthology-2013-08-16_01:10:04-ceph-deploy-master-testing-basic-vps/109409...
- 01:07 PM rgw Feature #5605 (In Progress): rgw: teuthology tests to check bucket issues in multi region env
- 11:59 AM devops Feature #6017: ceph-deploy mon create: create on all mons in ceph.conf + then do gatherkeys if no...
- 11:50 AM devops Feature #6017 (Resolved): ceph-deploy mon create: create on all mons in ceph.conf + then do gathe...
- For mon status, use ...
- 11:59 AM devops Feature #6020: radosgw-apache opinionated package
- 11:56 AM devops Feature #6020 (Rejected): radosgw-apache opinionated package
- 11:54 AM devops Bug #6019 (Resolved): ceph-deploy needs to better detect yum/apt for bootstrapping
- ceph-deploy will fail horribly installing in CentOS because the EPEL repo does not exist.
It needs that because it...
- 11:52 AM devops Feature #6018 (Resolved): Build ceph via jenkins
- Set up a jenkins instance to build Ceph and push to repos.
- it should pull from a private repo, not github, so th...
- 11:33 AM devops Feature #5845 (Rejected): Automate ceph-deploy push to ceph-extras.
- 11:10 AM CephFS Documentation #5797 (Resolved): Document unstable nature of CephFS
- 10:18 AM Fix #4635 (In Progress): mon: many ops expose uncommitted state
- 10:10 AM Bug #6005 (Resolved): config stringification bug
- 09:17 AM Bug #6005 (Resolved): config stringification bug
- use of std::copy leaves separator at the end. fix in next, needs backport to dumpling
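The stringification bug described above is the classic trailing-separator pattern; a Python rendering of the same mistake (illustrative only, not the actual C++ fix):

```python
# Mirrors the std::copy + ostream_iterator idiom: a separator is written
# after *every* element, including the last, so the stringified config
# value ends with a stray separator.
def stringify_copy_style(items, sep=","):
    out = ""
    for item in items:
        out += item + sep
    return out

print(stringify_copy_style(["a", "b", "c"]))  # trailing separator bug
print(",".join(["a", "b", "c"]))              # separator only between items
```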
- 09:57 AM Bug #5981: osd: journal didn't preallocate
- ext4, mounted with noatime,nodiratime,discard.
- 09:39 AM Bug #5981 (Need More Info): osd: journal didn't preallocate
- 09:39 AM Bug #5981: osd: journal didn't preallocate
- strange, it is doing an fallocate on the journal when it creates it, which should ensure there is sufficient disk spa...
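For reference, a minimal illustration (Python, not the ceph-osd code) of how fallocate-style preallocation sizes the journal file up front, so an undersized mountpoint fails at creation rather than at write time:

```python
# posix_fallocate reserves the blocks immediately; the file size reflects
# the full reservation as soon as the call returns. POSIX/Linux only.
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, 1 << 20)   # reserve 1 MiB up front
    size = os.fstat(fd).st_size
finally:
    os.close(fd)
    os.unlink(path)
print(size)
```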
- 03:59 AM Bug #5981: osd: journal didn't preallocate
- Ok, the issue has been that the journal mountpoint has been filled, since it has been undersized.
Would it be poss...
- 09:40 AM Bug #5988: librados: synchronous IO generally returns on ack instead of commit
- 09:37 AM Bug #5979: librados: imposes internal tooling expectations on users
- on second thought, i think we should just drop the mention of the -m and just say 'no monitors specified' or somethin...
- 12:49 AM devops Feature #5847 (Resolved): Build own versions of most recent leveldb for all supported platforms.
- Latest leveldb has been added to ceph-extras repo.
- 12:29 AM Bug #5492: scripts installing into /usr/usr/sbin (with --prefix=/usr)
- Gary Lowell wrote:
> Thanks Danny. I tested $(exec_prefix)$(sbindir) on rpm and debian builds and it looks like do...
08/15/2013
- 11:13 PM CephFS Bug #6004: osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(loff_t)length-opos)
- Sage Weil wrote:
> looks like a read vs truncate race...
> [...]
looks like client releases Fr cap too early or ...
- 10:24 PM CephFS Bug #6004 (Fix Under Review): osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(lof...
- 09:28 PM CephFS Bug #6004: osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(loff_t)length-opos)
- looks like a read vs truncate race......
- 09:20 PM CephFS Bug #6004 (Resolved): osdc/ObjectCacher.cc: 738: FAILED assert(bh->length() <= start+(loff_t)leng...
- ...
- 10:53 PM CephFS Bug #5021: ceph-fuse: crash on traceless reply
- hit this again,...
- 09:18 PM CephFS Bug #5927: kcephfs: ENOTEMPTY on rm -r
- 09:14 PM Bug #6003 (Resolved): journal Unable to read past sequence 406 ...
- * fix : commit:bae1f3eaa09c4747b8bfc6fb5dc673aa6989b695...
- 07:25 PM Feature #6002 (Resolved): EC: [link] erasure coding library plugin API documentation, including a...
- 07:24 PM Feature #6001 (Resolved): EC: [link] jerasure plugin
- 07:23 PM Feature #6000 (Resolved): EC: [link] erasure plugin mechanism and abstract API
- 07:22 PM Feature #5999 (Resolved): EC: [link] OSD internals must work in terms of cpg_t
- 07:21 PM Feature #5998 (Resolved): EC: [link] FileStore must work with ghobjects rather than hobjects
- 07:20 PM Feature #5997 (Resolved): EC: [link] Refactor scrub to use PGBackend methods
- 07:19 PM Feature #5996 (Resolved): EC: [link] PG::calc_acting and friends should always choose the shortes...
- 07:18 PM Feature #5995 (Resolved): EC: [link] Getinfo should use PGBackend methods to determine when peeri...
- 07:17 PM Feature #5994 (Resolved): EC: [link] Backfill should be able to handle multiple backfill peers
- 07:13 PM Feature #5993 (Resolved): EC: [link] Refactor recovery to use PGBackend methods
- 07:12 PM Feature #5992 (Resolved): EC: [link] Refactor Backfill to use PGBackend methods
- 07:12 PM Feature #5991 (Resolved): EC: [link] Backfill peers should not be included in the acting set
- 07:11 PM Feature #5990 (Resolved): EC: [link] Factor out the ReplicatedPG object replication and client wr...
- 07:09 PM Subtask #5433: Factor out the ReplicatedPG object replication and client IO logic as a PGBackend ...
- 07:09 PM Subtask #5046 (Resolved): Factor out PG logs, PG missing
- 06:35 PM Tasks #5848 (Resolved): add perf counter for each RecoveryMachine state
- 01:57 PM Tasks #5848 (Fix Under Review): add perf counter for each RecoveryMachine state
- 10:45 AM Tasks #5848 (In Progress): add perf counter for each RecoveryMachine state
- 05:45 PM Feature #5964 (Fix Under Review): ceph-post-file (to replace/supplement cephdrop)
- 01:45 PM Feature #5964 (In Progress): ceph-post-file (to replace/supplement cephdrop)
- 05:44 PM Feature #5904 (Fix Under Review): hello world osd class (with explanatory documentation/comments!)
- 05:27 PM Bug #5973 (Resolved): ceph --admin-daemon return code broken
- 04:51 PM Feature #5905 (Fix Under Review): hello world librados program (with explanatory comments!)
- wip-5905 and pull request https://github.com/ceph/ceph/pull/508
- 01:46 PM Feature #5905 (In Progress): hello world librados program (with explanatory comments!)
- 04:36 PM Fix #5989 (Resolved): librados: document that bufferlist usage model is inconsistent
- I discussed this on irc and it's not clear if we want to provide any guarantees or not, but it's certainly unpleasant...
- 04:33 PM Bug #5988 (Resolved): librados: synchronous IO generally returns on ack instead of commit
- This is not defaulting to data safety, and the synchronous functions don't provide any interface for doing something ...
- 04:32 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- This has just happened to me, so log with 'debug mon = 20' and 'debug ms = 1' and 'debug monc = 20' is attached.
...
- 12:32 PM devops Bug #4924: ceph-deploy: gatherkeys fails on raring (cuttlefish)
- So after reinstalling the server, this went away. Next time I run into this, I'll update.
- 04:13 PM Feature #5909 (In Progress): mon: keep track of monitor store size estimate vs 'du $mon_data'
- 04:10 PM Fix #4635: mon: many ops expose uncommitted state
- We've fixed a couple of cases on the OSDMonitor and merged them into master. I'll keep this open for a while longer ...
- 04:05 PM Bug #5959 (Pending Backport): Quorum is crashing on 'osd pool mksnap'
- I'm pretty sure this is fixed by d1501938f5d07c067d908501fc5cfe3c857d7281 on next.
- 03:21 PM Bug #5959 (In Progress): Quorum is crashing on 'osd pool mksnap'
- I haven't been able to reproduce this either on 0.61.7 or -earlier- *latest* versions.
I was however able to trigg...
- 04:03 PM Bug #5986 (Pending Backport): mon: FAILED assert(snaps.count(s)) when removing pool snap on 0.61.7
- Well, duh.
This was fixed by Sage (and reviewed by me) on d90683fdeda15b726dcf0a7cab7006c31e99f146
- 03:37 PM Bug #5986: mon: FAILED assert(snaps.count(s)) when removing pool snap on 0.61.7
- I can now confirm this is also really easy to trigger on cuttlefish HEAD.
- 03:20 PM Bug #5986 (Resolved): mon: FAILED assert(snaps.count(s)) when removing pool snap on 0.61.7
- While attempting to reproduce #5959, I managed to trigger this crash. It doesn't trigger on next, but I'm able to tr...
- 03:53 PM Documentation #5987 (Resolved): document requirement for monitor host time sync
- We used to say words about keeping monitor hosts in timesync with NTP or the like, but I can no longer find any menti...
- 03:14 PM Bug #5985 (Resolved): very slow recovery for some objects
- The snap cloning recovery logic can cause a push transaction to generate dozens of tiny writes followed by dozens of ...
- 03:11 PM Bug #5951: osd: next: EEXIST on mkcoll
- 2 failures in previous set, 1 mon clock, 1 stuck wait_backfill. Didn't have logging. D'oh. Rerunning.
~/teuthol...
- 02:32 PM rgw Feature #5611: rgw: swift GET request for object with custom metadata should show custom metadata
- 01:45 PM Feature #5906 (Resolved): mon: better ceph -s output
- 01:29 PM Feature #5984 (Resolved): mon: probe monitors to check on their status regardless of quorum
- This could be used to figure out if a monitor is up, and if it is what's its excuse for not being in the quorum.
W...
- 01:09 PM Bug #5982 (Rejected): injectargs seems to be broken for bools
- 01:08 PM Bug #5982: injectargs seems to be broken for bools
- Stefan Priebe wrote:
> at least to me this doesn't change anything:
> ceph osd tell \* injectargs -- "--osd_recover...
- 12:51 PM Bug #5982: injectargs seems to be broken for bools
- at least to me this doesn't change anything:
ceph osd tell \* injectargs -- "--osd_recover_clone_overlap false"
ok
...
- 12:49 PM Bug #5982: injectargs seems to be broken for bools
- you need a -- to make the cli stop parsing the option. or a space in there.
ceph tell osd.0 injectargs -- --some-...
- 12:33 PM Bug #5982: injectargs seems to be broken for bools
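The `--` convention the #5982 comments above rely on can be shown with a stand-alone parser (argparse here; the ceph CLI's parsing differs in detail): everything after a bare `--` is treated as a positional value, so an option-looking string survives the outer parser and reaches the daemon untouched.

```python
# Illustration of end-of-options handling: without "--" the outer parser
# would try to consume --osd_recover_clone_overlap=false as its own flag.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("injected", nargs="*")
ns = parser.parse_args(["--", "--osd_recover_clone_overlap=false"])
print(ns.injected)
```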
- also for admin socket
- 12:30 PM Bug #5982 (Rejected): injectargs seems to be broken for bools
- ceph2/src [wip-5910] » ./ceph tell osd.\* injectargs '--osd_recover_clone_overlap=false'
*** DEVELOPER MODE: setting...
- 12:22 PM Bug #5981: osd: journal didn't preallocate
- Oh, I forgot. This is on Ubuntu 13.04, with the following packages:
zoltan@signina:~$ dpkg -l | grep 0.61.
ii ce... - 12:16 PM Bug #5981 (Resolved): osd: journal didn't preallocate
- I had a node deployed using ceph-deploy. 7 disks in total, the journals are
on files on an SSD.
After rebooting t...
- 11:33 AM rgw Bug #5192: RGW: radosgw-admin user rm --access-key not working on bobtail
- Further updates on this issue have come in from the customer, details can be found here: https://inktank.zendesk.com/...
- 10:42 AM Bug #5979 (Resolved): librados: imposes internal tooling expectations on users
- ...
- 10:18 AM Feature #5978 (Rejected): ceph.conf: create hierarchy
- Currently we have semi-flat hierarchy. We have global section, section per entity type and section per entity. The va...
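The semi-flat layout described in the #5978 proposal above looks like this in a typical ceph.conf (illustrative values): a global section, a section per entity type, and a section per entity.

```
[global]
    auth cluster required = cephx

[osd]
    osd journal size = 1024

[osd.0]
    host = node1
```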
- 10:10 AM rbd Bug #5977 (Resolved): librbd: python bindings need docstrings to show up in online docs
- Some methods don't have docstrings, which means they don't show up in http://ceph.com/docs/master/rbd/librbdpy/ at al...
- 09:36 AM Bug #5972: Permissions on /var/run/ceph changed causing permission error messages
- this is teuthology? it does the 777 on /var/run/ceph so that we can use the asok for non-root processes. normal ins...
- 08:00 AM devops Bug #5895 (Resolved): ceph-deploy: mon create command hung on ceph-create-keys in cuttlefish bran...
- 07:48 AM devops Bug #5895: ceph-deploy: mon create command hung on ceph-create-keys in cuttlefish branch on RHEL 6.3
- Opened #5975; merged https://github.com/ceph/ceph-deploy/pull/44
- 07:41 AM devops Bug #5895: ceph-deploy: mon create command hung on ceph-create-keys in cuttlefish branch on RHEL 6.3
- I'll merge this pull request but I really want a ticket to stay open reminding us that this needs to be *fixed* and n...
- 07:38 AM devops Bug #5895 (Fix Under Review): ceph-deploy: mon create command hung on ceph-create-keys in cuttlef...
- I have opened a new pull request with some tested changes that fix this problem: https://github.com/ceph/ceph-deploy/...
- 05:34 AM devops Bug #5895 (In Progress): ceph-deploy: mon create command hung on ceph-create-keys in cuttlefish b...
- 07:46 AM devops Bug #5975 (Resolved): Find a real fix for the pushy issue of hanging/deadlocking during long-runn...
- Issue #5895 caused us to have to implement a really disappointing workaround to deal with a pushy problem: https://gi...
- 05:34 AM devops Bug #5971 (Duplicate): ceph-deploy: ceph-create-keys hung during mon create in dumpling release o...
- 12:16 AM devops Bug #5947 (Resolved): ceph-deploy RPM release is pointing to the wrong repo
- ceph-deploy-release packages have been rebuilt to point to http://ceph.com/packages/ceph-extras/rpm/${dist}/noarch
T...