Activity
From 11/23/2016 to 12/22/2016
12/22/2016
- 05:26 PM Bug #18309: TestVolumeClient.test_evict_client failure creating pidfile
- Alternative approach: https://github.com/ceph/ceph/pull/12628
- 02:17 PM Bug #18314 (Resolved): commit 41d46e492 "osd/ReplicatedPG: limit omap request by bytes" breaks la...
- 01:58 PM Bug #18314: commit 41d46e492 "osd/ReplicatedPG: limit omap request by bytes" breaks large directory
- http://tracker.ceph.com/issues/18334 to track a proper fix for OMAP_GETKEYS
- 10:14 AM Backport #17478 (Resolved): jewel: MDS goes damaged on blacklist (failed to read JournalPointer: ...
- 10:14 AM Backport #17582 (Resolved): jewel: monitor assertion failure when deactivating mds in (invalid) f...
- 10:14 AM Backport #17615 (Resolved): jewel: mds: false "failing to respond to cache pressure" warning
- 10:14 AM Backport #17617 (Resolved): jewel: [cephfs] fuse client crash when adding a new osd
- 10:14 AM Backport #17697 (Resolved): jewel: MDS long-time blocked ops. ceph-fuse locks up with getattr of ...
- 10:14 AM Backport #17706 (Resolved): jewel: multimds: mds entering up:replay and processing down mds aborts
- 10:14 AM Backport #17720 (Resolved): jewel: MDS: false "failing to respond to cache pressure" warning
- 10:13 AM Backport #17841 (Resolved): jewel: mds fails to respawn if executable has changed
- 10:13 AM Backport #17885 (Resolved): jewel: "[ FAILED ] LibCephFS.InterProcessLocking" in jewel v10.2.4
12/21/2016
- 08:02 PM Bug #18309 (Fix Under Review): TestVolumeClient.test_evict_client failure creating pidfile
- -https://github.com/ceph/ceph/pull/12606-
- 06:20 PM Bug #18309: TestVolumeClient.test_evict_client failure creating pidfile
- The problem is that global_init_prefork is calling pidfile_write, and we started using that from the client in 83aaa5...
- 06:17 PM Backport #18308 (Resolved): ceph-fuse not clearing setuid/setgid bits on chown
- 05:20 PM Backport #18308 (New): ceph-fuse not clearing setuid/setgid bits on chown
- 04:36 PM Backport #18308: ceph-fuse not clearing setuid/setgid bits on chown
- h3. original description
I had some test failures that showed up in my most recent fs suite run here:
http...
- 06:17 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- *master PR*: https://github.com/ceph/ceph/pull/12331
- 04:38 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- @Jeff: Which commit fixes the issue/should be backported to jewel?
- 05:18 PM Bug #18254 (Resolved): path restricted cephx caps not working correctly
- 05:16 PM Bug #18254: path restricted cephx caps not working correctly
- *master PR*: https://github.com/ceph/ceph/pull/12505
- 04:52 PM Bug #18254: path restricted cephx caps not working correctly
- @Jeff: We have a system/service in place for backporting bugfixes to our stable releases. Patches backported via this...
- 05:18 PM Backport #18307: path restricted cephx caps not working correctly
- (removed attachments that are available at #18254)
- 05:17 PM Backport #18307 (Resolved): path restricted cephx caps not working correctly
- 04:39 PM Backport #18307 (New): path restricted cephx caps not working correctly
- h3. original description
Ramana noticed this first while testing my ganesha patches to allow restricting exports. ...
- 02:24 PM Bug #18314 (Fix Under Review): commit 41d46e492 "osd/ReplicatedPG: limit omap request by bytes" b...
- https://github.com/ceph/ceph/pull/12599
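The "4M can only carry about 5k dentries" sizing comment from this ticket can be checked with back-of-the-envelope arithmetic; the ~800-byte average per omap entry below is an assumed illustrative figure, not a number measured from Ceph:

```python
# Rough arithmetic behind the "4M can only carry about 5k dentries" comment.
# avg_bytes_per_dentry is an assumed figure (key plus encoded inode metadata).
osd_max_omap_bytes_per_request = 4 << 20   # 4 MiB, the default in question
avg_bytes_per_dentry = 800                 # assumption for illustration only

dentries_per_request = osd_max_omap_bytes_per_request // avg_bytes_per_dentry
print(dentries_per_request)                # roughly 5k dentries per request
```

At that assumed entry size, a single OMAP_GETKEYS request tops out well below the size of a large directory, which is why the limit broke large-directory listing.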
- 08:37 AM Bug #18314: commit 41d46e492 "osd/ReplicatedPG: limit omap request by bytes" breaks large directory
- ...
- 08:32 AM Bug #18314 (Resolved): commit 41d46e492 "osd/ReplicatedPG: limit omap request by bytes" breaks la...
- OPTION(osd_max_omap_bytes_per_request, OPT_U64, 4<<20)
4M can only carry about 5k dentries. It's too small
- 11:14 AM Bug #15921 (Can't reproduce): segfault in cephfs-journal-tool (TestJournalRepair failure)
- Haven't seen this failure in a long time.
- 11:11 AM Bug #2375 (Closed): rrdtoll data malfuntion..
- Ancient, closing.
- 11:09 AM Bug #1206 (Closed): NFS reexport file creation lags 1-3 seconds
- Closing this because it's ancient (and if NFS creates were super-slow we'd notice on the knfs suite)
12/20/2016
- 06:54 PM Backport #18307: path restricted cephx caps not working correctly
- PR is up here:
https://github.com/ceph/ceph/pull/12592
- 01:00 PM Backport #18307 (Resolved): path restricted cephx caps not working correctly
- https://github.com/ceph/ceph/pull/12592
- 06:06 PM Bug #18311 (Fix Under Review): Decode errors on backtrace will crash MDS
- https://github.com/ceph/ceph/pull/12588
- 03:16 PM Bug #18311 (Resolved): Decode errors on backtrace will crash MDS
- Noticed by inspection:...
- 05:46 PM Bug #18225 (Resolved): MDS doesn't release memory after exceeding its cache size limit
- 05:34 PM Bug #9935 (Fix Under Review): client: segfault on ceph_rmdir path "/"
- https://github.com/ceph/ceph/pull/12550
- 01:19 PM Bug #18309 (Resolved): TestVolumeClient.test_evict_client failure creating pidfile
- Consistent on master
http://pulpito.ceph.com/jspray-2016-12-19_21:05:25-fs-master-distro-basic-smithi/648157
I ...
- 01:13 PM Backport #18308 (Resolved): ceph-fuse not clearing setuid/setgid bits on chown
- https://github.com/ceph/ceph/pull/12591
- 01:01 PM Bug #18131 (Pending Backport): ceph-fuse not clearing setuid/setgid bits on chown
- 12:59 PM Bug #18254 (Pending Backport): path restricted cephx caps not working correctly
- 12:21 PM Bug #18254: path restricted cephx caps not working correctly
- Patch merged. We'll also want this backported to jewel.
- 11:16 AM Bug #18306 (Resolved): segfault in handle_client_caps
- http://pulpito.ceph.com/jspray-2016-12-19_21:05:25-fs-master-distro-basic-smithi/648247...
12/16/2016
- 02:42 PM Backport #18283 (Closed): kraken: monitor cannot start because of "FAILED assert(info.state == MD...
- 02:42 PM Backport #18282 (Resolved): jewel: monitor cannot start because of "FAILED assert(info.state == M...
- https://github.com/ceph/ceph/pull/13123
12/14/2016
- 10:52 PM Bug #18254: path restricted cephx caps not working correctly
- The patch turns out to be pretty trivial:...
- 09:26 PM Bug #18254: path restricted cephx caps not working correctly
- Revised test program here, in patch format so it can build in tree. We should probably roll this up into a regression...
- 08:56 PM Bug #18254: path restricted cephx caps not working correctly
- Thanks Greg, I'll take a look at how all of that stuff gets set. FWIW, here's the log with the client debugging crank...
- 08:44 PM Bug #18254: path restricted cephx caps not working correctly
- Did you check the client log to see where it's failing out at?
I'd check the code flow from Client::mount() to the M...
- 08:28 PM Bug #18254: path restricted cephx caps not working correctly
- The program logs this in the MDS logs when run. I'm definitely passing in a real path there:...
- 08:12 PM Bug #18254: path restricted cephx caps not working correctly
- Oh, and you will need to overwrite the key in the reproducer program with the one for "alice".
- 08:05 PM Bug #18254 (Resolved): path restricted cephx caps not working correctly
- Ramana noticed this first while testing my ganesha patches to allow restricting exports. It appears that attempting t...
- 07:45 PM Bug #18119 (Closed): mds: check and get latest current logsegment to avoid trimming logsegment cr...
- This does not seem to be an issue in either master or Jewel.
- 01:54 PM Feature #12132 (Resolved): cephfs-data-scan: Cleanup phase
- https://github.com/ceph/ceph/pull/12337#pullrequestreview-12909728...
- 12:56 PM Bug #18166 (Pending Backport): monitor cannot start because of "FAILED assert(info.state == MDSMa...
12/13/2016
- 10:47 PM Bug #18151: Incorrect report of size when quotas are enabled.
- As commented in the users ML, we will be updating to 10.2.5 in early January. Will provide feedback once that is done.
- 07:58 PM Bug #18151: Incorrect report of size when quotas are enabled.
- In fact the quota tree changes already got backported so I think this is resolved. http://tracker.ceph.com/issues/16313
- 06:17 AM Bug #18151 (In Progress): Incorrect report of size when quotas are enabled.
- Well, this code changed a fair bit between Jewel and master, as Zheng ripped out the quota trees. However, there appe...
- 05:59 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- I'll lower the priority to Normal now.
Ok, this should be fixed in mainline kernels and coming to stable series ke...
- 02:24 PM Bug #18159 (Fix Under Review): "Unknown mount option mds_namespace"
- https://github.com/ceph/ceph/pull/12465
- 02:21 PM Bug #16691 (Resolved): sepia LRC lost directories
- 02:21 PM Feature #17853 (Resolved): More deterministic timing for directory fragmentation
- 01:47 PM Bug #18238 (Can't reproduce): TestDataScan failing due to log "unmatched rstat on 100"
This is almost certainly just something where we need to update the log whitelist, but I'm curious about how we got...
- 10:14 AM Bug #17270: [cephfs] fuse client crash when adding a new osd
- @Henrik: The fix appears to be to revert https://github.com/ceph/ceph/commit/1a48a8a2b222e41236341cb1241f0885a1b0b9d8...
- 09:39 AM Bug #17270: [cephfs] fuse client crash when adding a new osd
- Is there a chance to get this backported to hammer too? We had the same ceph-fuse crashes recently (0.94.9 ceph-fuse an...
- 08:17 AM Bug #18211: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays) failed at data pool empty ...
- 08:17 AM Bug #18211: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays) failed at data pool empty ...
- https://github.com/ceph/ceph-client/commit/6899bb08e4173b7dfc0aa232e589541da869411f
12/12/2016
- 02:48 PM Bug #18157: ceph-fuse segfaults on daemonize
- We worked around this somewhat badly in master/kraken, but Kefu's Preforker change is a better option.
- 02:48 PM Bug #18159: "Unknown mount option mds_namespace"
- Let's just make it silent in this case (unknown option) and let the kernel reject it
- 02:15 PM Bug #17193 (Pending Backport): truncate can cause unflushed snapshot data lose
- 02:07 PM Bug #9935 (In Progress): client: segfault on ceph_rmdir path "/"
- 12:03 PM Bug #18225 (Fix Under Review): MDS doesn't release memory after exceeding its cache size limit
- https://github.com/ceph/ceph/pull/12443
- 11:37 AM Bug #18225 (Resolved): MDS doesn't release memory after exceeding its cache size limit
In some circumstances the MDS may fail to enforce its own cache size limits. Because boost::pools are used for all...
12/09/2016
- 02:05 PM Bug #18211: test_snapshot_remove (tasks.cephfs.test_strays.TestStrays) failed at data pool empty ...
- the object is from pool permission check. it's kernel version of http://tracker.ceph.com/issues/13782
- 09:51 AM Bug #18211 (Resolved): test_snapshot_remove (tasks.cephfs.test_strays.TestStrays) failed at data ...
- http://pulpito.ceph.com/jspray-2016-12-06_12:37:38-kcephfs:recovery-master-testing-basic-smithi/611141/...
- 09:49 AM Bug #17193: truncate can cause unflushed snapshot data lose
- 2016-12-06T13:28:03.559 INFO:tasks.cephfs_test_runner: self.assertTrue(self.fs.data_objects_absent(file_a_ino, siz...
12/08/2016
- 05:38 PM Bug #18166 (Fix Under Review): monitor cannot start because of "FAILED assert(info.state == MDSMa...
- https://github.com/ceph/ceph/pull/12395
- 04:23 PM Bug #18166: monitor cannot start because of "FAILED assert(info.state == MDSMap::STATE_STANDBY)"
- It looks like MDSMonitor::maybe_promote_standby is iterating over pending_fsmap.standby_daemons, but inside the loop ...
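The failure mode described in the comment above, a loop that mutates the container it is iterating over, can be sketched in miniature. This is a generic illustration in Python, not Ceph code; the variable names merely echo the ticket:

```python
# Miniature analogue of modifying a container while iterating over it,
# as described for MDSMonitor::maybe_promote_standby. Names are illustrative.
standby_daemons = {4101: "mds.a", 4102: "mds.b", 4103: "mds.c"}

try:
    for gid in standby_daemons:
        # Promoting a standby removes it from the map mid-iteration,
        # which invalidates the iteration in progress.
        del standby_daemons[gid]
except RuntimeError as err:
    print("iteration broken by concurrent modification:", err)
```

In C++ the equivalent mutation of a `std::map` during iteration invalidates the erased element's iterator silently rather than raising an error, which is how it can surface later as an assertion failure.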
- 06:01 AM Bug #18166: monitor cannot start because of "FAILED assert(info.state == MDSMap::STATE_STANDBY)"
- The attachment is the log of crash monitor.
Thanks! - 03:37 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- Merged the chown part of this, and I think I have sorted out the problems I was having with the truncate and write co...
- 07:08 AM Backport #18195 (Resolved): jewel: cephfs: fix missing ll_get for ll_walk
- https://github.com/ceph/ceph/pull/13125
- 07:05 AM Backport #18192 (Resolved): jewel: standby-replay daemons can sometimes miss events
- https://github.com/ceph/ceph/pull/13126
12/07/2016
- 08:06 PM Bug #18179 (Resolved): MDS crashes on missing metadata object
- Saw this crash happening on a Jewel 10.2.3 MDS when it was missing an object in the metadata pool:...
- 05:01 PM Bug #18166: monitor cannot start because of "FAILED assert(info.state == MDSMap::STATE_STANDBY)"
- So this cluster is freshly-created with version 10.2.3?
Can you upload the monitor log with ceph-post-file? (Prefera...
- 09:43 AM Bug #18166 (Resolved): monitor cannot start because of "FAILED assert(info.state == MDSMap::STATE...
ceph version: v10.2.3
operation system: ubuntu 14.04
linux kernel version: 3.13.0
Description:
I test for c...
- 02:14 PM Bug #17954 (Pending Backport): standby-replay daemons can sometimes miss events
- 02:14 PM Bug #16924 (Resolved): Crash replaying EExport
- Not backporting because it's multi-mds
- 02:11 PM Bug #18016 (Duplicate): cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- Will assume this is duplicate of https://github.com/ceph/ceph/pull/12234 unless we can see evidence otherwise -- this...
- 12:53 PM Feature #17980: MDS should reject connections from OSD-blacklisted clients
- Yes, these two should work together: 9754 to blacklist things, and then this ticket to enforce that blacklist on the ...
- 12:06 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- Hmm, still nothing's jumping out at me.
It is noteworthy that mds_gid_t is a BOOST_STRONG_TYPEDEF (unlike other th...
- 09:58 AM Feature #17835 (In Progress): mds: enable killpoint tests for MDS-MDS subtree export
- 08:16 AM Bug #18157: ceph-fuse segfaults on daemonize
- an alternative fix at https://github.com/ceph/ceph/pull/12358
- 01:02 AM Documentation #18040 (Resolved): Documentation says not to run multiple MDS, but we can do that now
12/06/2016
- 11:39 PM Bug #18159 (Resolved): "Unknown mount option mds_namespace"
- I think this is just a spurious message coming from src/mount/mount.ceph.c because it was not updated when mds_namesp...
- 11:16 PM Bug #18157 (Fix Under Review): ceph-fuse segfaults on daemonize
- https://github.com/ceph/ceph/pull/12347
- 10:20 PM Bug #18157: ceph-fuse segfaults on daemonize
- Not detected in nightlies because of #18158
- 10:16 PM Bug #18157 (Resolved): ceph-fuse segfaults on daemonize
- ...
- 06:52 PM Bug #17193: truncate can cause unflushed snapshot data lose
- It looks like the patch hasn't eliminated the failure:
http://pulpito.ceph.com/jspray-2016-12-06_12:37:38-kcephfs:re...
- 03:34 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- Testing a PR now that fixes the setattr codepaths. That's pretty simple to do from those codepaths since we're sendin...
- 02:58 PM Feature #18154 (Fix Under Review): qa: enable mds thrash exports tests
Currently:...
- 05:40 AM Bug #18086 (Pending Backport): cephfs: fix missing ll_get for ll_walk
- 04:34 AM Bug #18151: Incorrect report of size when quotas are enabled.
- Due to another bug/issue, I've run ceph-fuse in debug mode with 'debug client = 20'. Right after launching ceph-fuse,...
- 01:21 AM Bug #18151: Incorrect report of size when quotas are enabled.
- 1) My environment:
- ceph/cephfs in 10.2.2.
- All infrastructure is in the same version (rados cluster, mons, m...
- 01:15 AM Bug #18151 (Resolved): Incorrect report of size when quotas are enabled.
12/05/2016
- 06:47 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- So the upshot here is that I don't think this is anything that has broken in ceph, per-se. FUSE changed its behavior,...
- 06:12 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- Ahh, I think I see the problem:...
- 04:25 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- Definitely a kernel problem. Works correctly on v4.8.0, broken on v4.8.6 kernel for sure. I haven't bisected to be su...
- 03:45 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- Sorry, I mean this patch in my v4.8.10 kernel:...
- 03:38 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- First evidence I see of this test failing is here:
http://pulpito.ceph.com/teuthology-2016-11-30_10:10:02-fs-j...
- 03:38 PM Backport #18026 (In Progress): jewel: ceph_volume_client.py : Error: Can't handle arrays of non-s...
- 03:36 PM Backport #18100: jewel: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- @John https://github.com/ceph/ceph/pull/12097/files conflicts in a non trivial way, could you take a look when you ha...
- 03:33 PM Backport #18103 (In Progress): jewel: truncate can cause unflushed snapshot data lose
12/02/2016
- 08:30 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- BTW as you might have guessed, the crash occurred for me when I added a metadata server to a fresh cluster. Then ceph...
- 08:28 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- I'm a bit puzzled why the last insert fails:...
- 07:24 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- I see MDSMonitor::prepare_beacon() proceed through the:...
- 07:05 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- Valgrind:...
- 07:00 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- w/ debugging symbols:...
- 06:52 PM Bug #16771: mon crash in MDSMonitor::prepare_beacon on ARM
- Hi,
I can confirm this issue on armhf; I was initially having it with 10.1.2-0ubuntu1 from xenial on a scaleway C1...
- 06:33 PM Bug #18131: ceph-fuse not clearing setuid/setgid bits on chown
- The tests I'm mainly concerned with are the pjd.sh tests in those. The rest seem to be transient failures of one sort ...
- 06:29 PM Bug #18131 (Resolved): ceph-fuse not clearing setuid/setgid bits on chown
- I had some test failures that showed up in my most recent fs suite run here:
http://pulpito.ceph.com/jlayton-... - 04:51 PM Bug #18016: cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- @jcsp did you run into this before ?
- 04:51 PM Bug #18016: cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- 04:50 PM Bug #18016: cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- https://jenkins.ceph.com/job/ceph-pull-requests/14992/console
exact same error
- 01:18 PM Bug #18118 (Duplicate): multimds: mds stuck in up:creating
- seems like dup of http://tracker.ceph.com/issues/18066. MDSRank::creating_done did not get called. Osd op with tid ==...
- 03:05 AM Bug #18118 (Duplicate): multimds: mds stuck in up:creating
- Seen here:
http://pulpito.ceph.com/pdonnell-2016-12-02_01:20:51-multimds-master-testing-basic-mira/594001/
mds ...
- 09:32 AM Bug #18119: mds: check and get latest current logsegment to avoid trimming logsegment crashing
- https://github.com/ceph/ceph/pull/12277
The fix here is to check whether mdr->ls/mut->ls is expired or not in call...
- 09:27 AM Bug #18119 (Closed): mds: check and get latest current logsegment to avoid trimming logsegment cr...
- ...
12/01/2016
- 11:58 PM Bug #17656: cephfs: high concurrent causing slow request
- william sheng wrote:
> Greg Farnum wrote:
> > Just from the description it sounds like we're backing up while the M...
- 08:12 PM Feature #17980: MDS should reject connections from OSD-blacklisted clients
- Related to http://tracker.ceph.com/issues/9754?
- 03:15 PM Bug #18066 (Resolved): objecter dropped op submitted before pool existed
- https://github.com/ceph/ceph/pull/12234
- 03:05 PM Backport #18103 (Resolved): jewel: truncate can cause unflushed snapshot data lose
- https://github.com/ceph/ceph/pull/12324
- 03:04 PM Bug #17982 (Resolved): fuse client failing to trim disconnected inode on unmount
- Fix is merged -- commit 2d02d2c95af9aed31a8579a2245b759f57b3a193.
- 03:03 PM Backport #18100 (Resolved): jewel: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- https://github.com/ceph/ceph/pull/13139
- 03:03 PM Bug #18013 (Resolved): ceph_test_libcephfs access failures when run without ceph-fuse mount in te...
- Fixed in commit 8a887edaffeed66f02f605ee78b9999ffd629a60.
- 02:05 PM Bug #8405 (Duplicate): multimds: FAILED assert(dir->is_frozen_tree_root())
- dup of http://tracker.ceph.com/issues/17606
11/30/2016
- 07:06 PM Bug #18066: objecter dropped op submitted before pool existed
- 01:24 AM Bug #18086: cephfs: fix missing ll_get for ll_walk
- fixed by : https://github.com/ceph/ceph/pull/12061
- 01:24 AM Bug #18086 (Resolved): cephfs: fix missing ll_get for ll_walk
- When exporting a cephfs with nfs-ganesha, segfault is encountered upon releasing file handles using 'systemctl stop n...
11/29/2016
- 08:06 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- Thanks. Potential fix is up here:
https://github.com/ceph/ceph/pull/12228
Waiting for build now so I can get a ...
- 02:15 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- Jeff Layton wrote:
> Thanks. To make sure i understand...
>
> Since the inode is released after the mutex is drop...
- 12:44 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- Thanks. To make sure I understand...
Since the inode is released after the mutex is dropped, then the refcount dec...
- 09:13 AM Bug #17982: fuse client failing to trim disconnected inode on unmount
- This looks like inode reference leak. I found something suspicious ...
- 08:05 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Potential fix up here. Still needs testing, but it seems to do the right thing on my box:
https://github.com/ceph/...
- 06:05 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Sounds reasonable to me Jeff.
- 04:25 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Actually, now that I look further, it turns out that access.cc looks like it has just the function we need:...
- 03:15 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- I think the right solution is to add a new function for setting the default UserPerm for the cmount, and have those j...
- 05:09 PM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- RPMs here: http://gitbuilder.ceph.com/ceph-rpm-centos7-x86_64-basic/ref/wip-17837-jewel/x86_64/
- 12:01 PM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- Alexander: I've pushed a backport of this to jewel to a branch called wip-17837-jewel. It will build in an hour or t...
- 11:50 AM Bug #17837 (Pending Backport): ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- 07:12 AM Bug #17837: ceph-mon crashed after upgrade from hammer 0.94.7 to jewel 10.2.3
- I could test the changes, do I have to compile this project?
- 02:29 PM Bug #16397: nfsd selinux denials causing knfs tests to fail
- From a recent run I see this is only happening on centos NFS servers, so I'm going to pin these tests to ubuntu (but ...
- 01:45 PM Bug #18066 (Resolved): objecter dropped op submitted before pool existed
http://pulpito.ceph.com/teuthology-2016-11-28_17:15:02-fs-master---basic-smithi/583404/
This is manifesting as a...
- 12:54 PM Bug #17193 (Pending Backport): truncate can cause unflushed snapshot data lose
- 12:40 PM Feature #12552 (Rejected): qa: test cephfs over cache tier in fs suite
- Cache tiers are deprecated.
- 08:23 AM Bug #18047 (Fix Under Review): assertion in MDSMap::get_up_features()
- https://github.com/ceph/ceph/pull/12208
11/28/2016
- 02:29 PM Feature #11950 (In Progress): Strays enqueued for purge cause MDCache to exceed size limit
- 01:25 PM Feature #18050 (New): Enable mounting a .snap directory from libcephfs/ceph-fuse
Manila is adding a "mountable snapshots" feature:
https://github.com/openstack/manila-specs/blob/master/specs/ocat...
- 09:43 AM Bug #18047 (Resolved): assertion in MDSMap::get_up_features()
- ...
- 07:03 AM Feature #12132 (In Progress): cephfs-data-scan: Cleanup phase
- The following xattrs are added to zeroth object:
#define XATTR_CEILING "scan_ceiling"
#define XATTR_MAX_MTIME "...
- 01:24 AM Bug #17911 (Resolved): ensure that we vet the ceph_statx flags masks in libcephfs API
- Merged in https://github.com/ceph/ceph/pull/12106
11/24/2016
- 08:35 PM Documentation #18040: Documentation says not to run multiple MDS, but we can do that now
- Merged to master -- I leave it up to you whether you want to backport (I don't usually do that for documentation)
- 07:31 PM Documentation #18040 (Fix Under Review): Documentation says not to run multiple MDS, but we can d...
- https://github.com/ceph/ceph/pull/12184
- 07:28 PM Documentation #18040 (Resolved): Documentation says not to run multiple MDS, but we can do that now
- http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mds/ has a red box that says "Do not run multiple metad...
- 01:25 PM Bug #18016 (Need More Info): cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- I'll keep posting on this issue whenever I find such an error. If nothing happens in a month or two we can probably f...
- 10:52 AM Bug #18016: cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- Can't tell what was going on here without logs from the services (which afaik we don't gather in these situations?)
...
- 08:10 AM Bug #18016: cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- https://jenkins.ceph.com/job/ceph-pull-requests/14881/console
- 06:07 AM Bug #18016 (Duplicate): cephtool-test-mds.sh waiting for an active MDS daemon (intermittent)
- ...
- 10:47 AM Backport #18026 (Resolved): jewel: ceph_volume_client.py : Error: Can't handle arrays of non-strings
- https://github.com/ceph/ceph/pull/12325
- 08:53 AM Bug #17656: cephfs: high concurrent causing slow request
- Greg Farnum wrote:
> Just from the description it sounds like we're backing up while the MDS purges deleted files fr...
11/23/2016
- 09:49 PM Backport #17285 (Resolved): ceph-mon leaks in MDSMonitor when ceph-mds process is running but MDS...
- 08:30 PM Bug #11258 (Resolved): cephfs-java ftruncate unit test failure
- 08:30 PM Backport #13927 (Resolved): hammer: cephfs-java ftruncate unit test failure
- 06:54 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
- http://pulpito.ceph.com/teuthology-2016-11-21_17:15:01-fs-master---basic-smithi/567231/
- 06:22 PM Bug #17982: fuse client failing to trim disconnected inode on unmount
Another instance
jspray-2016-11-23_14:06:32-fs-wip-jcsp-testing-20161122-distro-basic-smithi/572367
- 06:25 PM Bug #17800 (Pending Backport): ceph_volume_client.py : Error: Can't handle arrays of non-strings
- 03:05 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- This chmod may be the bit we're missing:
https://github.com/ceph/ceph-qa-suite/blob/master/tasks/cephfs/fuse_mount.p...
- 02:56 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- Looks like these tests don't specify a UserPerm on the ceph function invocations, nor set up a "global" one. I imagin...
- 02:49 PM Bug #18013: ceph_test_libcephfs access failures when run without ceph-fuse mount in teuthology (n...
- BTW, the only material difference between this test run and a normal FS suite run is that ceph-fuse is turned off. Co...
- 02:46 PM Bug #18013 (Resolved): ceph_test_libcephfs access failures when run without ceph-fuse mount in te...
- When ceph-fuse is turned off for the libcephfs_interface_tests task, we see access control failures:...
- 02:41 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- http://pulpito.ceph.com/sage-2016-11-23_14:40:24-upgrade:hammer-x-master---basic-smithi/
- 02:37 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- https://github.com/ceph/ceph-qa-suite/pull/1280
added debugging. the mon connection looks fine, but it isn't doin...
- 02:18 PM Bug #17847: "Fuse mount failed to populate /sys/ after 31 seconds" in jewel 10.2.4
- It's a hammer ceph-fuse trying to mount... my guess is a compat issue
- 08:05 AM Backport #17956 (In Progress): jewel: Clients without pool-changing caps shouldn't be allowed to ...
- 08:04 AM Backport #18008 (In Progress): jewel: Cannot create deep directories when caps contain "path=/som...
- 06:43 AM Backport #18008 (Resolved): jewel: Cannot create deep directories when caps contain "path=/somepath"
- https://github.com/ceph/ceph/pull/12154
- 07:57 AM Backport #18010 (In Progress): jewel: Cleanly reject "session evict" command when in replay
- 06:43 AM Backport #18010 (Resolved): jewel: Cleanly reject "session evict" command when in replay
- https://github.com/ceph/ceph/pull/12153