Activity
From 10/30/2016 to 11/28/2016
11/28/2016
- 08:22 PM Bug #18054 (Resolved): os/bluestore/BlueStore.cc: 3576: FAILED assert(0 == "allocate failed, wtf")
- plenty of space, but bitmap allocator fails......
11/27/2016
- 04:19 PM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- "ceph_test_objectstore --gtest_filter=\*/0" also, see
https://jenkins.ceph.com/job/ceph-pull-requests/14959/consol...
11/25/2016
- 02:44 PM Bug #18043 (Closed): ceph-mon prioritizes public_network over mon_host address
- Problem description:
Not using sections to declare ceph monitors results in monitors listening on the public_clust...
11/24/2016
- 04:54 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14911/console from https://github.com/ceph/ceph/pull/12081
timesou...
- 03:52 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14906/console from https://github.com/ceph/ceph/pull/12061
It time...
- 06:47 AM Bug #17830 (Resolved): osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- 06:27 AM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- 08:55 AM Bug #18021: Assertion "needs_recovery" fails when balance_read reaches a replica OSD where the ta...
- In my test, when encountering a large number of "balance_reads", the OSDs can be so busy that they can't send heartbe...
- 08:43 AM Bug #18021 (Duplicate): Assertion "needs_recovery" fails when balance_read reaches a replica OSD ...
- 2016-10-25 19:00:00.626567 7f9a63bff700 -1 error_msg osd/ReplicatedPG.cc: In function 'void ReplicatedPG::wait_for_un...
- 08:27 AM Bug #17949: make check: unittest_bit_alloc get_used_blocks() >= 0
- https://jenkins.ceph.com/job/ceph-pull-requests/14894/console
11/22/2016
- 12:03 PM Bug #15653: crush: low weight devices get too many objects for num_rep > 1
- Does this issue explain our uneven distribution? We have four racks, with 7, 8, 8, 4 hosts in each, respectively. The...
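The skew described in #15653 can be reproduced with a toy simulation: pick num_rep distinct devices per object, each draw weighted by the remaining devices' weights (this is a stand-in model, not real CRUSH, and the osd names and weights below are made up). After the first replica is placed, the low-weight device competes against a smaller remaining pool, so it ends up with more than its weight share:

```python
import random
from collections import Counter

# Hypothetical weights: three large devices and one small one.
WEIGHTS = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 1.0, "osd.3": 0.2}

def place(num_rep, n_objects=50000, seed=1):
    """Place num_rep replicas per object by weighted draws
    without replacement; return per-device placement counts."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_objects):
        remaining = dict(WEIGHTS)
        for _ in range(num_rep):
            total = sum(remaining.values())
            r = rng.uniform(0.0, total)
            acc = 0.0
            for dev, w in remaining.items():
                acc += w
                if r <= acc:
                    counts[dev] += 1
                    del remaining[dev]   # no repeated device per object
                    break
    return counts

fair = WEIGHTS["osd.3"] / sum(WEIGHTS.values())  # weight share = 0.0625
for num_rep in (1, 3):
    c = place(num_rep)
    share = c["osd.3"] / sum(c.values())
    print(f"num_rep={num_rep}: osd.3 share={share:.4f} (weight share={fair:.4f})")
```

With num_rep=1 the small device's share matches its weight; with num_rep=3 it receives noticeably more than its weight share, which is the effect the bug describes.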
- 03:42 AM Bug #17830 (Resolved): osd-scrub-repair.sh is failing (intermittently?) on Jenkins
11/21/2016
- 04:02 PM Bug #17945 (Need More Info): ceph_test_rados_api_tier: failed to decode hitset in HitSetWrite test
- 06:15 AM Bug #17929: rados tool should bail out if you combine listing and setting the snap ID
- PR https://github.com/ceph/ceph/pull/12092
11/20/2016
- 09:41 AM Bug #17968 (Resolved): Ceph:OSD can't finish recovery+backfill process due to assertion failure
- Under some condition, OSD could be aborted during the recovery process due to the following assertion failure:
201...
- 12:10 AM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- More fixes and reenabled: https://github.com/ceph/ceph/pull/12072
11/18/2016
- 07:23 AM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- For the record: more than 25 failed "make check" runs have been restarted because of the eio failure.
- 06:55 AM Bug #17949 (Resolved): make check: unittest_bit_alloc get_used_blocks() >= 0
- https://jenkins.ceph.com/job/ceph-pull-requests/14471/console...
11/17/2016
- 11:59 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14411
- 11:08 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14400/console
- 11:05 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14402/
- 11:03 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- I propose to temporarily disable it while it is being worked on: https://github.com/ceph/ceph/pull/12058
- 10:47 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14398/console
- 10:45 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14395/console
- 09:54 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14397/console
- 02:50 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://jenkins.ceph.com/job/ceph-pull-requests/14340/console
- 06:40 AM Bug #17830 (Resolved): osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://github.com/ceph/ceph/pull/11926
- 09:51 PM Bug #17945 (Need More Info): ceph_test_rados_api_tier: failed to decode hitset in HitSetWrite test
- ...
- 10:27 AM Bug #17929 (New): rados tool should bail out if you combine listing and setting the snap ID
- Hi,
I've found a problem (or a feature?) with pool snapshots:
when I delete an object from a pool which was previously s...
11/16/2016
- 01:03 PM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- http://pulpito.ceph.com/kchai-2016-11-13_07:03:13-rados-wip-kefu-testing---basic-smithi/544085/
http://pulpito.ceph....
- 01:00 PM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- I just tested on ext4 and the problem disappears; it seems to be reproducible on btrfs.
- 07:31 AM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://github.com/ceph/ceph/pull/11979/commits/8854cca4164f9184cc549ba0b90b44515933de8c disables osd-scrub-repair.sh...
- 07:16 AM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- https://github.com/ceph/ceph/pull/11926
11/15/2016
- 09:25 AM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- Sage, I will try to fix this if you don't have enough bandwidth today.
11/14/2016
- 07:45 AM Bug #16279: assert(objiter->second->version > last_divergent_update) failed
- Yao Ning wrote:
> Hi, we got the crash because of the same reason in Ceph 0.94.5
>
> I think it is possible that ...
11/11/2016
- 07:55 PM Documentation #17871 (Closed): crush-map document could use clearer warning about impact of chang...
- In http://docs.ceph.com/docs/jewel/rados/operations/crush-map/ this section is the closest thing to documenting the i...
11/10/2016
- 10:58 PM Bug #17862: manager: add high level summary of pending scheduled and forced scrubs
- It's in the stats already I think (so we don't forget forced scrubs between intervals)? If so, the manager is alread...
- 10:56 PM Bug #17862 (New): manager: add high level summary of pending scheduled and forced scrubs
- 07:32 PM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- I've tried a few different machines now but I can't reproduce this.
Can you generate a filestore = 20 log for me?
- 06:25 PM Bug #17830 (In Progress): osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- Here is the portion of the test that ran. When osd.0 went down to perform the ceph-objectstore-tool list-attrs, th...
- 02:48 PM Bug #12659: Can't delete cache pool
- That didn't work. At. All.
I could not delete the alt images (OSDs kept crashing). I finally decided to just rip o...
- 12:00 PM Bug #12659: Can't delete cache pool
- As my development environment is down anyway, I'm now trying to:
* rename all images (mv foo foo.alt)
* copy them...
- 11:42 AM Bug #12659: Can't delete cache pool
- Ok, this is weird. I deleted all snapshots. This means effectively there *can't* be any clones any longer as I can on...
- 11:38 AM Bug #12659: Can't delete cache pool
- Ah, and I misread. It's not about snapshots, it's about clones. Right. So I do have clones, but all of them have been...
- 11:37 AM Bug #12659: Can't delete cache pool
- The specific check that triggers is the one from here:
http://tracker.ceph.com/issues/8003
I'm still trying to ...
- 11:30 AM Bug #12659: Can't delete cache pool
- I'm also being bitten by this. I shut down all VMs and supposedly all clients that talk to our ceph cluster, but I st...
11/09/2016
- 01:05 AM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- Yes, constantly. And reverting 933a1da6d7517b8215c0cc720e47374adedf381e helps.
ceph_test_objectstore --gtest_f...
11/08/2016
- 10:23 PM Bug #17830: osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- This is caused by a crashing osd; it is not related to any minor test changes.
0> 2016-11-04 13:29:30.5323...
- 09:48 PM Bug #17830 (Can't reproduce): osd-scrub-repair.sh is failing (intermittently?) on Jenkins
- For instance, https://jenkins.ceph.com/job/ceph-pull-requests/13489/console
An issue with this test was also repor...
- 08:46 PM Bug #17660: objecter_requests 'mtime": "1970-01-01 00:00:00.000000s"' and '"last_sent": "4.76809e...
- I can semi-reliably reproduce objecter requests getting stuck. I do large file writes to CephFS from multiple clients:...
- 05:01 PM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- Are you able to reproduce this locally?
- 12:08 PM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- git bisect shows that 933a1da6d7517b8215c0cc720e47374adedf381e is the offending commit.
11/07/2016
- 12:12 AM Bug #17806 (Resolved): OSD: do not open pgs when the pg is not in pg_map
- The PG may be removed before C_OpenPGs is called back by the finisher.
11/04/2016
- 05:10 PM Bug #17743: ceph_test_objectstore & test_objectstore_memstore.sh crashes in qa run (kraken)
- ...
11/03/2016
- 06:38 PM Backport #17445: jewel: list-snap cache tier missing promotion logic (was: rbd cli segfault when ...
- Samuel Just wrote:
> What ceph version is running on the osds?
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec...
11/02/2016
- 08:23 AM Bug #17660: objecter_requests 'mtime": "1970-01-01 00:00:00.000000s"' and '"last_sent": "4.76809e...
- This is an Objecter issue with its admin socket interfaces. Just because it's the Client's objecter doesn't make it a...
- 01:04 AM Bug #14115: crypto: race in nss init
- seeing this more often now, in 1/3 of 3 jobs: http://qa-proxy.ceph.com/teuthology/joshd-2016-11-02_00:23:24-rados-wip...
11/01/2016
- 10:41 PM Backport #17445: jewel: list-snap cache tier missing promotion logic (was: rbd cli segfault when ...
- What ceph version is running on the osds?
10/30/2016