Activity
From 05/14/2017 to 06/12/2017
06/12/2017
- 04:35 PM Bug #20256: "ceph osd df" is broken; asserts out on Luminous-enabled clusters
- So obviously what happened is I thought we had moved the osd df command into the monitor, but that didn't actually ha...
- 04:33 PM Bug #20256 (Resolved): "ceph osd df" is broken; asserts out on Luminous-enabled clusters
- I got a private email report:
When doing ‘ceph osd df’, ceph-mon always crashes. The stack info is as follows:...
- 08:46 AM Bug #18043: ceph-mon prioritizes public_network over mon_host address
- Thanks for the update, I look forward to seeing your PR :).
06/11/2017
- 07:52 PM Bug #13146 (Resolved): mon: creating a huge pool triggers a mon election
- We're throttling PG creates now.
- 07:28 PM Bug #11907: crushmap validation must not block the monitor
- Don't we internally time out crush map testing now? Does it behave sensibly if things take too long?
- 07:21 PM Bug #9523 (Closed): Both op threads and dispatcher threads could be stuck at acquiring the budget...
- Based on the PR discussion it seems the diagnosed issue wasn't the cause of the slowness. Closing since it hasn't (kn...
06/09/2017
- 07:51 PM Bug #20243 (Resolved): Improve size scrub error handling and ignore system attrs in xattr checking
Something similar to this was seen on a production system. If all the object_info_t matched there would be no erro...
- 06:39 PM Bug #20242 (Resolved): Make osd-scrub-repair.sh unit test run faster
- Most likely move some tests to the rados suite.
- 01:26 AM Bug #20169: filestore+btrfs occasionally returns ENOSPC
- ugh just saw this on xenial too. hrm.
/a/sage-2017-06-08_20:27:41-rados-wip-sage-testing2-distro-basic-smithi/127...
06/08/2017
- 06:52 PM Bug #20227 (Need More Info): os/bluestore/BlueStore.cc: 2617: FAILED assert(0 == "can't mark unlo...
- Hmm, I see the fault_range call (it's in the new ec unclone code), but it's only dirtying the range including extents...
- 06:18 PM Bug #20227: os/bluestore/BlueStore.cc: 2617: FAILED assert(0 == "can't mark unloaded shard dirty")
- /a/sage-2017-06-08_02:04:29-rados-wip-sage-testing-distro-basic-smithi/1269367 too
- 06:14 PM Bug #20227 (Resolved): os/bluestore/BlueStore.cc: 2617: FAILED assert(0 == "can't mark unloaded s...
- ...
- 06:44 PM Bug #20221: kill osd + osd out leads to stale PGs
- @Greg the original bug description was updated with a simpler reproducer which does not involve copying objects. I be...
- 06:34 PM Bug #20221: kill osd + osd out leads to stale PGs
- Right, but what you've said here is that if you have pool size one, and kill the only OSD hosting it, then no other O...
- 02:58 PM Bug #20221: kill osd + osd out leads to stale PGs
- FWIW it was reproduced by badone.
- 12:20 PM Bug #20221: kill osd + osd out leads to stale PGs
- @Greg the first reproducer was not trying to rados put the same object. It was trying to rados put another object. I ...
- 12:18 PM Bug #20221: kill osd + osd out leads to stale PGs
- The reproducer works as expected on 12.0.3. The behavior changed somewhere in master after 12.0.3 was released.
- 12:17 PM Bug #20221: kill osd + osd out leads to stale PGs
- I don't understand what behavior you're looking for. Hanging is the expected behavior when data is unavailable.
- 10:07 AM Bug #20221 (New): kill osd + osd out leads to stale PGs
- h3. description
When the OSD is killed before ceph osd out, the PGs stay in stale state.
h3. reproducer
From...
- 05:53 PM Bug #19960 (Pending Backport): overflow in client_io_rate in ceph osd pool stats
- 03:14 PM Bug #19960: overflow in client_io_rate in ceph osd pool stats
- > By which commit/PR?
554cf8394a9ac4f845c1fce03dd1a7f551a414a9
Merge pull request #15073 from liewegas/wip-mgr-stats
- 11:00 AM Bug #18746: monitors crashing ./include/interval_set.h: 355: FAILED assert(0) (jewel+kraken)
- Hi Greg,
Thank you for taking the time to look into this.
Following the incident of the present ticket the clus...
06/07/2017
- 08:57 PM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-06-07_16:25:35-rados-wip-sage-testing2-distro-basic-smithi/1268182
rados/thrash-erasure-code/{ceph.ya...
- 02:03 AM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-06-06_21:54:14-rados-wip-sage-testing-distro-basic-smithi/1265627
rados/thrash/{0-size-min-size-overr...
- 08:11 PM Documentation #20215 (New): librados documentation improvement for the use cases
- librados documentation improvement for the use cases including the tradeoffs of object size, i/o rate, and omap vs re...
- 04:44 PM Bug #18696: OSD might assert when LTTNG tracing is enabled
- Wonder if this PR https://github.com/ceph/ceph/pull/14304 fixes this issue as well.
- 04:01 PM Bug #18750: handle_pg_remove: pg_map_lock held for write when taking pg_lock
- I think I remember this one and it wasn't really feasible to fix (at the time). If doing code inspection you'll want ...
- 03:59 PM Bug #18746: monitors crashing ./include/interval_set.h: 355: FAILED assert(0) (jewel+kraken)
- Pretty weird, that assert appears to be an internal interval_set consistency thing: https://github.com/ceph/ceph/blob...
- 03:58 PM Bug #19198: Bluestore doubles mem usage when caching object content
- 03:50 PM Bug #18667: [cache tiering] omap data time-traveled to stale version
- Jason says this "seems to pop up randomly every few weeks or so", so it's definitely a live, going concern. :(
- 03:40 PM Bug #19086 (Rejected): BlockDevice::create should add check for readlink result instead of raise ...
- 03:36 PM Bug #18647: ceph df output with erasure coded pools
- Let's verify this prior to Luminous and write a test for it!
- 03:29 PM Bug #19023 (Fix Under Review): ceph_test_rados invalid read caused apparently by lost intervals d...
- https://github.com/ceph/ceph/pull/15555
- 01:23 PM Bug #19960: overflow in client_io_rate in ceph osd pool stats
- Aleksei Gutikov wrote:
> fixed in master
By which commit/PR?
- 12:04 PM Bug #19960: overflow in client_io_rate in ceph osd pool stats
- fixed in master
- 09:28 AM Bug #19783 (New): upgrade tests failing with "AssertionError: failed to complete snap trimming be...
- 06:34 AM Bug #19605: OSD crash: PrimaryLogPG.cc: 8396: FAILED assert(repop_queue.front() == repop)
- Zengran Zhang wrote:
> 2017-05-19 22:48:23.854608 7f14f1c1e700 0 -- 10.10.133.1:6823/2019 >> 10.10.133.1:6819/19544...
- 02:04 AM Bug #20000: osd assert in shared_cache.hpp: 107: FAILED assert(weak_refs.empty())
- ...
- 02:02 AM Bug #20169: filestore+btrfs occasionally returns ENOSPC
- /a/sage-2017-06-06_21:54:14-rados-wip-sage-testing-distro-basic-smithi/1265467
rados/thrash/{0-size-min-size-overrid... - 02:02 AM Bug #20169: filestore+btrfs occasionally returns ENOSPC
- /a/sage-2017-06-06_21:54:14-rados-wip-sage-testing-distro-basic-smithi/1265435
rados/thrash/{0-size-min-size-overr...
06/06/2017
- 07:02 PM Bug #20068 (Resolved): osd valgrind error in CrushWrapper::has_incompat_choose_args
- 01:19 PM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-06-05_22:19:51-rados-wip-sage-testing-distro-basic-smithi/1262663
rados/thrash/{0-size-min-size-overr...
- 01:16 PM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-06-05_22:19:51-rados-wip-sage-testing-distro-basic-smithi/1262583
rados/thrash/{0-size-min-size-overr...
- 01:13 PM Bug #20133: EnvLibradosMutipoolTest.DBBulkLoadKeysInRandomOrder hangs on rocksdb+librados
- /a/sage-2017-06-05_22:19:51-rados-wip-sage-testing-distro-basic-smithi/1262365
- 12:40 PM Bug #19605: OSD crash: PrimaryLogPG.cc: 8396: FAILED assert(repop_queue.front() == repop)
- 2017-05-19 22:58:05.142834 7f14de2a2700 0 osd.0 pg_epoch: 78440 pg[9.10cs0( v 78440'6350 (78438'4241,78440'6350] loc...
06/05/2017
- 09:32 PM Bug #19518: log entry does not include per-op rvals?
- /a/sage-2017-06-05_18:36:01-rados-wip-sage-testing2-distro-basic-smithi/1261843...
- 06:27 PM Bug #20188 (New): filestore: os/filestore/FileStore.h: 357: FAILED assert(q.empty()) from ceph_te...
- ...
- 06:25 PM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-06-05_14:47:27-rados-wip-sage-testing-distro-basic-smithi/1260424
teuthology:1260424 06:25 PM $ cat s...
- 06:24 PM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-06-05_14:47:27-rados-wip-sage-testing-distro-basic-smithi/1260344
teuthology:1260344 06:24 PM $ cat su...
- 09:24 AM Backport #16239 (Fix Under Review): 'ceph tell osd.0 flush_pg_stats' fails in rados qa run
- https://github.com/ceph/ceph/pull/15475
06/03/2017
- 06:46 PM Bug #20169: filestore+btrfs occasionally returns ENOSPC
- It's less likely on Centos, but I think we've seen this before and it's usually been a btrfs kernel bug that got reso...
06/02/2017
- 03:29 PM Bug #20169 (New): filestore+btrfs occasionally returns ENOSPC
- ...
- 03:14 PM Bug #19964: occasional crushtool timeouts
- /a/sage-2017-06-02_08:32:01-rados-wip-sage-testing-distro-basic-smithi/1255514
teuthology:1255514 03:14 PM $ cat su...
- 02:23 AM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-06-01_21:44:07-rados-wip-sage-testing---basic-smithi/1253654
teuthology:1253654 02:23 AM $ cat summary...
- 02:20 AM Bug #20134 (Rejected): test_rados.TestIoctx.test_aio_read AssertionError: 5 != 2
- 2017-06-01T22:57:09.649 INFO:tasks.workunit.client.0.smithi084.stderr:========================================...
06/01/2017
- 04:50 PM Bug #20133 (Can't reproduce): EnvLibradosMutipoolTest.DBBulkLoadKeysInRandomOrder hangs on rocksd...
- ...
- 04:43 PM Bug #19964: occasional crushtool timeouts
- /a/sage-2017-06-01_02:27:12-rados-wip-sage-testing2---basic-smithi/1249759
description: rados/singleton-bluestore/{a...
05/31/2017
- 11:06 PM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-05-31_18:45:30-rados-wip-sage-testing---basic-smithi/1248735
- 04:38 PM Bug #18043: ceph-mon prioritizes public_network over mon_host address
- and to elaborate on the fact that i have a branch and no pr, i do intend to finish this up soon, but likely only afte...
- 04:36 PM Bug #18043: ceph-mon prioritizes public_network over mon_host address
- fwiw, i've got a branch handling this from earlier this year: https://github.com/jecluis/ceph/commits/wip-mon-host
...
- 04:04 PM Support #18508 (Closed): PGs of EC pool stuck in peering state
- There was clearly a lot going on here and none of it was clear. If switching to SimpleMessenger fixed it, I presume t...
- 03:14 PM Bug #17138: crush: inconsistent ruleset/ruled_id are difficult to figure out
- Some work in progress on this here: https://github.com/ceph/ceph/pull/13683
- 03:21 AM Bug #20117 (Rejected): BlueStore.cc: 8585: FAILED assert(0 == "unexpected error")
- version:
root@node0:~# ceph -v
ceph version 12.0.2 (5a1b6b3269da99a18984c138c23935e5eb96f73e)
bluestore+rbd+ec+o...
- 03:19 AM Bug #20116 (Can't reproduce): osds abort on shutdown with assert(ceph/src/osd/OSD.cc: 4324: FAILE...
- version:
root@node0:~# ceph -v
ceph version 12.0.2 (5a1b6b3269da99a18984c138c23935e5eb96f73e)
bluestore+rbd+ec+o...
05/30/2017
- 01:46 PM Support #20108 (Resolved): PGs are not remapped correctly when one host fails
- I have run into the following problem:
in a 6 node cluster we have 2 nodes/chassis, and the crush rule set to distri...
- 01:45 PM Bug #20041: ceph-osd: PGs getting stuck in scrub state, stalling RBD
- Logs available on teuthology:/home/jdillaman/osd.23.log_try_rados_rm.gz
05/29/2017
- 11:30 PM Bug #19790 (In Progress): rados ls on pool with no access returns no error
- 11:28 PM Bug #19790: rados ls on pool with no access returns no error
- https://github.com/ceph/ceph/pull/15354
Greg, will talk to you about the per-object cap semantics separately.
- 07:45 PM Bug #19964: occasional crushtool timeouts
- /a/sage-2017-05-28_05:00:18-rados-wip-sage-testing---basic-smithi/1238511
description: rados/singleton-bluestore/{...
- 02:51 PM Bug #17968: Ceph:OSD can't finish recovery+backfill process due to assertion failure
- https://github.com/ceph/ceph/pull/15349
05/28/2017
- 09:17 PM Bug #19909: PastIntervals::check_new_interval: assert(lastmap->get_pools().count(pgid.pool()))
- I've no idea the repercussions (thinking I'll backup and recreate the cluster) but if you write an osdmap into all of...
- 03:09 AM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-05-27_03:43:09-rados-wip-sage-testing2---basic-smithi/1235222
- 03:07 AM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-05-27_03:43:09-rados-wip-sage-testing2---basic-smithi/1235419
- 02:03 AM Bug #19964: occasional crushtool timeouts
- /a/sage-2017-05-27_03:43:09-rados-wip-sage-testing2---basic-smithi/1235225
- 01:59 AM Bug #19964: occasional crushtool timeouts
- /a/sage-2017-05-27_01:05:11-rados-wip-sage-testing---basic-smithi/1233483
- 01:57 AM Bug #20105 (Resolved): LibRadosWatchNotifyPPTests/LibRadosWatchNotifyPP.WatchNotify3/0 failure
- ...
05/27/2017
- 08:06 AM Bug #17968: Ceph:OSD can't finish recovery+backfill process due to assertion failure
- I have a document that provides the detail of our analysis of this problem, but it's written in chinese. If needed, I...
- 08:03 AM Bug #17968: Ceph:OSD can't finish recovery+backfill process due to assertion failure
- Hi, everyone.
Sorry, I forgot to watch my issues.
We found that the problem is due to "librados::OPERATION_BALA...
- 07:59 AM Bug #19983: osds abort on shutdown with assert(/build/ceph-12.0.2/src/os/bluestore/KernelDevice.c...
- I pulled out a disk, and then there was the problem.
- 03:06 AM Bug #20099: osd/filestore: osd/PGLog.cc: 911: FAILED assert(last_e.version.version < e.version.ve...
- fang yuxiang wrote:
> i think this is not functional issue of ceph, maybe your local fs data is corrupted.
>
> ar... - 03:01 AM Bug #20099: osd/filestore: osd/PGLog.cc: 911: FAILED assert(last_e.version.version < e.version.ve...
- `read_log 406'6529418` and `read_log 346'6529418` have the same seq.
Also, ceph-kvstore-tool shows:
... - 02:46 AM Bug #20099: osd/filestore: osd/PGLog.cc: 911: FAILED assert(last_e.version.version < e.version.ve...
- I think this is not a functional issue in Ceph; maybe your local fs data is corrupted.
Are you using any block cache...
- My Ceph cluster went down when the server was powered off,
and when I restarted my OSD, it failed in read_log.
As follows:...
05/26/2017
- 09:44 PM Bug #19943: osd: enoent on snaptrimmer
- http://pulpito.ceph.com/gregf-2017-05-26_06:45:56-rados-wip-19931-snaptrim-pgs---basic-smithi/1231020/
- 03:36 PM Bug #20068 (Need More Info): osd valgrind error in CrushWrapper::has_incompat_choose_args
- https://github.com/ceph/ceph/pull/15244 was merged recently and modified how things are handled. Let see if it happen...
- 12:40 PM Bug #20092 (Duplicate): ceph-osd: FileStore::_do_transaction: assert(0 == "unexpected error")
- http://pulpito.ceph.com/jdillaman-2017-05-25_16:48:38-rbd-wip-jd-testing-distro-basic-smithi/1229611...
05/25/2017
- 10:07 PM Bug #20086 (Can't reproduce): LibRadosLockECPP.LockSharedDurPP gets EEXIST
- ...
- 06:11 AM Bug #19983: osds abort on shutdown with assert(/build/ceph-12.0.2/src/os/bluestore/KernelDevice.c...
- /a/bhubbard-2017-05-24_05:25:43-rados-wip-badone-testing---basic-smithi/1224591/teuthology.log...
- 05:56 AM Bug #19943: osd: enoent on snaptrimmer
- /a/bhubbard-2017-05-24_05:25:43-rados-wip-badone-testing---basic-smithi/1224546/teuthology.log
- 02:27 AM Bug #19964: occasional crushtool timeouts
- /a/sage-2017-05-24_22:20:09-rados-wip-sage-testing---basic-smithi/1225182
- 12:16 AM Bug #19790: rados ls on pool with no access returns no error
- Looking into this
05/24/2017
- 11:13 PM Bug #19939: OSD crash in MOSDRepOpReply::decode_payload
- Kefu, could you take a look at this one? Not sure if it's related to recent denc changes, or perhaps https://github.c...
- 10:26 AM Bug #19939: OSD crash in MOSDRepOpReply::decode_payload
- More instances from last night's master:
- http://pulpito.ceph.com/jspray-2017-05-23_22:31:39-fs-master-distro-basic...
- 10:01 PM Bug #19943: osd: enoent on snaptrimmer
- /a/sage-2017-05-24_18:40:38-rados-wip-sage-testing2---basic-smithi/1224933
- 03:44 PM Bug #16890 (Fix Under Review): rbd diff outputs nothing when the image is layered and with a writ...
- 03:43 PM Feature #16883: omap not supported by ec pools
- This is due to erasure coded pools not supporting omap operations. It's a limitation for the current cache pool code,...
- 03:25 PM Bug #17170 (Can't reproduce): mon/monclient: update "unable to obtain rotating service keys when ...
- 03:22 PM Bug #17929: rados tool should bail out if you combine listing and setting the snap ID
- There is discussion on that (closed) PR. We just don't want to do snap listing as it's even more expensive than norma...
- 03:13 PM Bug #17968 (Need More Info): Ceph:OSD can't finish recovery+backfill process due to assertion fai...
- 03:13 PM Bug #17968 (Can't reproduce): Ceph:OSD can't finish recovery+backfill process due to assertion fa...
- 12:05 PM Bug #20068 (In Progress): osd valgrind error in CrushWrapper::has_incompat_choose_args
- 10:34 AM Bug #20068: osd valgrind error in CrushWrapper::has_incompat_choose_args
- Oops, left off the actual link:
http://pulpito.ceph.com/jspray-2017-05-23_22:31:39-fs-master-distro-basic-smithi/122...
- 10:33 AM Bug #20068 (Resolved): osd valgrind error in CrushWrapper::has_incompat_choose_args
- Loic: assigning to you because it looks like you were working in this function recently....
- 10:47 AM Bug #20069 (New): PGs failing to create at start of test, REQUIRE_LUMINOUS not set?
- http://pulpito.ceph.com/jspray-2017-05-23_22:31:39-fs-master-distro-basic-smithi/1222407...
- 08:52 AM Bug #19790: rados ls on pool with no access returns no error
- For what it's worth, this is a regression. In Hammer, the appropriate EPERM is raised:...
05/23/2017
- 08:24 PM Bug #18165 (In Progress): OSD crash with osd/ReplicatedPG.cc: 8485: FAILED assert(is_backfill_tar...
- 07:37 PM Bug #19790: rados ls on pool with no access returns no error
- Well, it's obvious enough, we go into PrimaryLogPG::do_pg_op() before we check op_has_sufficient_caps().
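The ordering described above can be made concrete with a toy sketch; the function names echo the Ceph calls mentioned in this comment, but everything else is a hypothetical illustration, not the actual OSD dispatch code:

```python
# Hypothetical sketch (not actual Ceph code) of the bug discussed above:
# pg ops such as "rados ls" take the do_pg_op() path before the capability
# check runs, so a client with no read caps still gets an object listing.

def op_has_sufficient_caps(caps):
    return "r" in caps  # toy capability model

def do_pg_op():
    return ["obj1", "obj2"]  # toy pg-op result: an object listing

def handle_op(op, caps, check_caps_first=False):
    if check_caps_first and not op_has_sufficient_caps(caps):
        return "EPERM"
    if op == "pgls":
        return do_pg_op()  # buggy path: reached without a cap check
    if not op_has_sufficient_caps(caps):
        return "EPERM"
    return "ok"

# Buggy ordering: the listing leaks despite missing caps.
print(handle_op("pgls", caps=""))                         # ['obj1', 'obj2']
# Fixed ordering: the client gets EPERM instead.
print(handle_op("pgls", caps="", check_caps_first=True))  # EPERM
```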
I think t...
- 06:57 PM Bug #20059 (Resolved): miscounting degraded objects
- on bigbang,...
- 09:50 AM Bug #20053 (New): crush compile / decompile loses precision on weight
- The weight of an item is displayed with %.3f and loses precision that makes a difference in mapping.
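The loss is easy to demonstrate outside crushtool; a minimal sketch, where decompile_weight is a hypothetical stand-in for the %.3f formatting described above:

```python
# Minimal illustration of the %.3f round-trip loss: a weight that needs
# more than three decimal places changes value after decompile + compile.
# decompile_weight is a hypothetical stand-in, not a crushtool function.
def decompile_weight(w):
    return float("%.3f" % w)  # what the text crush map would contain

w = 1.0 / 3.0                 # true weight: 0.3333333333333333
w_text = decompile_weight(w)  # after a decompile/compile cycle: 0.333
assert w_text != w            # the round trip is lossy, so mappings can shift
```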
Steps to rep...
- 03:39 AM Bug #20050: osd: very old pg creates take a long time to build past_intervals
- partially addressed by patch in wip-bigbang.
- 03:33 AM Bug #20050 (Resolved): osd: very old pg creates take a long time to build past_intervals
- (bigbang)
OSDs were down for a long time and PGs never got created. When the OSDs finally came up, they have to go...
05/22/2017
- 11:05 PM Bug #19909: PastIntervals::check_new_interval: assert(lastmap->get_pools().count(pgid.pool()))
- Still happens in 12.0.3, with the patch [[https://github.com/ceph/ceph/pull/15046]] applied.
- 08:35 PM Bug #20041: ceph-osd: PGs getting stuck in scrub state, stalling RBD
- ...
- 05:22 PM Bug #20041: ceph-osd: PGs getting stuck in scrub state, stalling RBD
- I've seen this on scrub as well.
- 03:55 PM Bug #20041 (Resolved): ceph-osd: PGs getting stuck in scrub state, stalling RBD
- See the attached logs for the remove op against rbd_data.21aafa6b8b4567.0000000000000aaa...
- 04:34 PM Bug #19964: occasional crushtool timeouts
- See this log as well:
http://qa-proxy.ceph.com/teuthology/yuriw-2017-05-20_04:20:14-rados-master_2017_5_20---basic...
- 06:51 AM Bug #20000 (Can't reproduce): osd assert in shared_cache.hpp: 107: FAILED assert(weak_refs.empty())
- version:
root@node0:~# ceph -v
ceph version 12.0.2 (5a1b6b3269da99a18984c138c23935e5eb96f73e)
bluestore+ec+overw...
05/20/2017
- 06:41 AM Bug #19964: occasional crushtool timeouts
- This is not new; I've been spotting this occasionally in our Jenkins runs.
05/19/2017
- 04:38 PM Bug #19991 (New): dmclock-tests fail on my build VM
This fails on my build machine, which is a VM. It passes on Jenkins.
[ RUN ] test_client.full_bore_timing
/home/dz...
- 03:18 PM Bug #19909: PastIntervals::check_new_interval: assert(lastmap->get_pools().count(pgid.pool()))
- I have the same error with 12.0.3...
- 02:52 PM Bug #19803: osd_op_reply for stat does not contain data (ceph-mds crashes with unhandled buffer::...
- After switching to writeback cache mode, this error didn't occur again. So I'm confident the proxy mode of the cache ...
- 08:30 AM Bug #19983 (Closed): osds abort on shutdown with assert(/build/ceph-12.0.2/src/os/bluestore/Kerne...
- version:
root@node0:~# ceph -v
ceph version 12.0.2 (5a1b6b3269da99a18984c138c23935e5eb96f73e)
bluestore+rbd+ec+o...
05/17/2017
- 11:09 PM Bug #19971 (In Progress): osd: deletes are performed inline during pg log processing
- 11:09 PM Bug #19971 (Resolved): osd: deletes are performed inline during pg log processing
- With a large number of deletes in a client workload, this can easily saturate a disk and cause very high latency, sin...
- 09:42 PM Bug #19700: OSD remained up despite cluster network being inactive?
- Was the cluster performing IO while this happened? Do your public and private networks perhaps route to each other?
... - 07:34 PM Bug #19790: rados ls on pool with no access returns no error
- Same issue even with just @rw@:...
- 06:38 PM Bug #19790: rados ls on pool with no access returns no error
- I'm not at a computer to check, but I'm pretty sure the "allow *" is short-circuiting other security checks here and ...
- 04:10 PM Bug #19790: rados ls on pool with no access returns no error
- 09:12 AM Bug #19790: rados ls on pool with no access returns no error
- Just checking: is anyone looking at this? It's arguably a security issue, after all.
- 03:53 PM Bug #16567: Ceph raises scrub errors on pgs with objects being overwritten
- Hmm, similar reports have popped up (although with on-disk size 0) on the mailing list. Those involved cache tiers th...
- 03:51 PM Bug #16279: assert(objiter->second->version > last_divergent_update) failed
- xfs corruption means your setup was not safe for power failure, or your disk is dying. Neither is something that ceph...
- 03:38 PM Bug #15936: Osd-s on cache pool crash after upgrade from Hammer to Jewel
- Ping Joao? This looks to have been a crash in persisting/trimming HitSets, which I know underwent a bunch of changes/...
- 03:19 PM Bug #15741: librados get_last_version() doesn't return correct result after aio completion
- Any update on this, David? :)
- 02:23 PM Bug #19964 (Resolved): occasional crushtool timeouts
- ...
- 11:21 AM Bug #19960 (Resolved): overflow in client_io_rate in ceph osd pool stats
- luminous branch, v12.0.2
Output of ceph osd pool stats -f json contains overflowed values in client_io_rate sectio...
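For illustration only (this sketch names no Ceph internals and may not be the actual cause fixed for this bug): one common way such overflowed values appear is unsigned wrap-around when a rate is computed from a counter that decreased between samples:

```python
# Hypothetical illustration: a rate computed as an unsigned 64-bit
# difference wraps to an enormous number when the newer sample is smaller,
# which is exactly how "overflowed" values show up in JSON stats output.
U64 = 1 << 64

def rate_u64(prev, cur, dt=1):
    return ((cur - prev) % U64) // dt  # emulate uint64_t subtraction

print(rate_u64(100, 90))  # 18446744073709551606, not a small negative delta
```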
05/16/2017
- 07:59 PM Bug #19943: osd: enoent on snaptrimmer
- /a/yuriw-2017-05-15_22:59:10-rados-wip-yuri-testing_2017_5_16-distro-basic-smithi/1181575 (bluestore)
- 05:55 PM Bug #19943: osd: enoent on snaptrimmer
- Clone 269 was trimmed but it corresponds to a lot of other snapshots, so the object shouldn't be removed until all th...
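The invariant described in this comment can be modeled with a toy refcount map (a hypothetical structure, not Ceph's actual SnapSet bookkeeping; the clone and snap ids are made up to mirror the comment):

```python
# Toy model of the invariant above: a clone object may only be removed once
# every snapshot that references it has been trimmed.
clone_snaps = {269: {5, 6, 7}}  # clone 269 backs snapshots 5, 6 and 7

def trim_snap(snap):
    for clone in list(clone_snaps):
        clone_snaps[clone].discard(snap)
        if not clone_snaps[clone]:   # no snapshot needs this clone any more
            del clone_snaps[clone]   # only now is removal safe

trim_snap(5)
assert 269 in clone_snaps            # still referenced by snaps 6 and 7
trim_snap(6)
trim_snap(7)
assert 269 not in clone_snaps        # all referencing snaps gone
```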
- 04:04 PM Bug #19943 (Resolved): osd: enoent on snaptrimmer
- ...
- 04:54 PM Feature #19944 (Rejected): [RFE]: add option/support config persistence with ceph tell command
- We should have support in Ceph itself to make the conf changes persist; ceph tell has a good error checking mechanism a...
- 11:08 AM Bug #19939 (Resolved): OSD crash in MOSDRepOpReply::decode_payload
Seen on kcephfs suite, running against test branch based on Monday's master....
- 02:39 AM Bug #19936 (New): filestore ENOTEMPTY
- ...