Activity
From 02/13/2016 to 03/13/2016
03/13/2016
03/11/2016
- 05:01 PM Bug #15060 (Duplicate): client: Leak_StillReachable from boost::detail::get_once_per_thread_epoch()
- http://tracker.ceph.com/issues/14794, which got fixed yesterday.
- 11:18 AM Bug #15060 (Duplicate): client: Leak_StillReachable from boost::detail::get_once_per_thread_epoch()
- http://pulpito.ceph.com/teuthology-2016-03-09_14:03:10-fs-jewel---basic-smithi/49376/...
- 02:30 PM Bug #14805: Hadoop tests failing with EPERM
- Maybe this has the same issue as the Python libcephfs tests did: they were creating files with mode 0, which used to work
- 02:23 PM Bug #15075 (Resolved): "suspicious RCU usage" message in knfs, kcephfs tests
- Zheng, does this look familiar to you?
http://pulpito.ceph.com/teuthology-2016-03-09_17:10:02-knfs-jewel-testing-b...
- 02:11 PM Bug #15061 (Duplicate): TestStrays.test_snapshot_remove failing, objects remain in data pool
- Ah, I thought it sounded familiar, but I couldn't find the ticket. Thanks.
- 11:57 AM Bug #15061: TestStrays.test_snapshot_remove failing, objects remain in data pool
- It's an OSD issue. dup of http://tracker.ceph.com/issues/14962
- 11:32 AM Bug #15061: TestStrays.test_snapshot_remove failing, objects remain in data pool
- ...
- 11:31 AM Bug #15061 (Duplicate): TestStrays.test_snapshot_remove failing, objects remain in data pool
http://pulpito.ceph.com/teuthology-2016-03-07_18:04:01-fs-master---basic-smithi/45775/
Reproduces locally on vst...
- 02:03 PM Feature #15074 (New): multifs: infer client's target filesystem based on its auth caps
- Usually, if a client subscribes to unqualified "mdsmap", it'll get whichever filesystem is marked as the legacy files...
- 01:53 PM Feature #15072 (New): mon: multifs: auth caps of MDS->mon connections to limit by FSCID
- MDSs already only receive a populated MDSMap once they have been assigned a rank.
These caps should be used by MDS...
- 01:51 PM Feature #15071 (New): mds: client: multifs: auth caps on client->MDS connections to limit by FSCID
- When a client is to be limited to a particular filesystem, it needs to not only be restricted to seeing that MDSMap, ...
- 01:49 PM Feature #15070 (Resolved): mon: client: multifs: auth caps on client->mon connections to limit th...
- Currently clients with 'mds allow r' capabilities can see any MDSMap.
We would like to be able to craft client aut...
- 01:47 PM Feature #15069 (Resolved): MDS: multifs: enable two filesystems to point to same pools if one of ...
- The 'damaged' flag on a filesystem prevents any MDS from being assigned a rank in that filesystem. While a filesyste...
- 01:43 PM Feature #15068 (Resolved): fsck: multifs: enable repair tools to read from one filesystem and wri...
- To create a workflow in which the user marks an existing filesystem damaged, and then goes through a repair process w...
- 01:41 PM Feature #15067 (Resolved): mon: client: multifs: enable clients to map a filesystem name to a FSCID
- Currently clients have to specify the ID of the filesystem they want to connect to (or specify no ID to get the leg...
- 01:40 PM Feature #15066 (Rejected): multifs: Allow filesystems to be assigned RADOS namespace as well as p...
- Everywhere we accept a pool argument currently (e.g. in "ceph fs new"), we should additionally accept a RADOS namespa...
- 01:37 PM Feature #15065 (Resolved): multifs: add standby_for_fscid setting on MDS and pass in MMDSBeacon
- Currently in MDSMonitor::prepare_beacon:...
- 01:33 PM Fix #15064 (Closed): multifs: tweak text on "flag set enable multiple"
- (Requested by Greg https://github.com/ceph/ceph/pull/6953#issuecomment-194731565)
Currently when someone tries to ...
- 01:30 PM Feature #15063 (Resolved): multifs: option to disable sanity() calls
- (Requested by Greg https://github.com/ceph/ceph/pull/6953#discussion-diff-55642577R359)
The MDSMonitor calls FSMap...
- 01:28 PM Fix #15062 (Resolved): multifs cleanup: pass feature bits into MDSMap from Filesystem::encode
- Currently a static set of bits is passed when encoding MDSMap in Filesystem objects.
https://github.com/ceph/ceph/...
- 05:20 AM Backport #15057: infernalis: deleting striped file in cephfs doesn't free up file's space
- NOTE: Original PR not yet merged.
- 05:16 AM Backport #15057 (Rejected): infernalis: deleting striped file in cephfs doesn't free up file's space
- https://github.com/ceph/ceph/pull/8041
- 05:19 AM Backport #15056: hammer: deleting striped file in cephfs doesn't free up file's space
- NOTE: Original PR not yet merged.
- 05:16 AM Backport #15056 (Resolved): hammer: deleting striped file in cephfs doesn't free up file's space
- https://github.com/ceph/ceph/pull/8042
- 05:10 AM Bug #13268 (Resolved): Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- 03:38 AM Bug #15050 (Fix Under Review): deleting striped file in cephfs doesn't free up file's space
- https://github.com/ceph/ceph/pull/8040
https://github.com/ceph/ceph/pull/8041
https://github.com/ceph/ceph/pull/8042
- 03:03 AM Bug #15050: deleting striped file in cephfs doesn't free up file's space
- Well spotted.
- 02:25 AM Bug #15050: deleting striped file in cephfs doesn't free up file's space
- we don't handle 'stripe_count > 1' properly when purging stray....
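For reference, a hedged sketch of the stripe arithmetic involved (the helper below is illustrative, not Ceph code): with stripe_count > 1, a file's bytes fan out across more RADOS objects than a naive size / object_size count suggests, so a purge loop driven only by object_size deletes too few objects and leaks space.

```python
import math

def striped_object_count(size, stripe_unit, stripe_count, object_size):
    """Number of RADOS objects touched by a file under a striped layout.

    Illustrative helper, not Ceph code: data is written round-robin in
    stripe_unit chunks across stripe_count objects per object set.
    """
    period = object_size * stripe_count          # bytes per full object set
    full_sets, tail = divmod(size, period)
    # A partial period still spreads across up to stripe_count objects.
    tail_objects = min(stripe_count, math.ceil(tail / stripe_unit)) if tail else 0
    return full_sets * stripe_count + tail_objects

# A 4 MB file with stripe_count=4: naive size // object_size says 1 object,
# but the data actually lives in 4 of them.
size, su, sc, osz = 4 << 20, 256 << 10, 4, 4 << 20
print(size // osz)                                # naive count: 1
print(striped_object_count(size, su, sc, osz))    # actual count: 4
```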
- 03:10 AM Backport #13809 (Resolved): hammer: Secondary groups are not read /w SAMBA 4.2.2 & CEPHFS VFS module
- 03:10 AM Backport #13813 (Resolved): hammer: Daily segfault ll_forget reader couldn't read tag
03/10/2016
- 11:55 PM Bug #15050: deleting striped file in cephfs doesn't free up file's space
- Can you describe in a little more detail how you were creating and deleting these files? You talk about them being 10...
- 07:08 PM Bug #15050 (Resolved): deleting striped file in cephfs doesn't free up file's space
- I've been creating large (16-64 GB) files in an otherwise empty test cephfs (mounted via the kernel client in linux 4...
- 01:48 PM Bug #15045 (Resolved): CephFSVolumeClient.evict should be limited by path, not just auth ID
- 11:08 AM Bug #15008: fuse expects root inode number to be FUSE_ROOT_ID
- *jewel PR*: https://github.com/ceph/ceph/pull/7976
- 09:05 AM Bug #12776 (Resolved): qa: standby MDS not shutting down, "reached maximum tries (50) after waiti...
- 09:04 AM Bug #13583 (Resolved): Client::_fsync() on a given file does not wait unsafe requests that create...
- 09:03 AM Bug #13675 (Resolved): Failure in LibCephFS.DirLs
- 09:02 AM Bug #13729 (Resolved): Daily segfault ll_forget reader couldn't read tag
- 09:00 AM Bug #14196 (Resolved): test_object_deletion fails (tasks.cephfs.test_damage.TestDamage)
- 08:59 AM Bug #14374 (Resolved): MDS asok handlers trigger lock cycle assertion if they take mds_lock
- 08:59 AM Bug #14380 (Resolved): "ceph mds setmap" crashes mon on invalid input
- 08:58 AM Bug #14379 (Resolved): Add confirmation flag to "ceph mds rmfailed"
- 08:52 AM Bug #10436 (Resolved): ceph-fuse: snapshot flushing from page cache to Client is not coherent
- 06:43 AM Bug #14996 (Resolved): libcephfs hangs on shutdown if an unclosed opendir handle exists
- 06:26 AM Bug #14800 (Resolved): [ceph-fuse] Fh ref might leak at umounting
- 06:24 AM Bug #14798 (Resolved): free fds being exhausted eventually because freed fds are never put back
- 06:12 AM Bug #15038 (Resolved): unittest_mds_types: inode_t.compare_equal fails
- 03:13 AM Bug #15038 (Fix Under Review): unittest_mds_types: inode_t.compare_equal fails
- https://github.com/ceph/ceph/pull/8014
- 01:11 AM Bug #15038 (Resolved): unittest_mds_types: inode_t.compare_equal fails
- This causes 'make check' to fail on master.
http://gitbuilder.sepia.ceph.com/gitbuilder-ceph-tarball-trusty-amd64-...
03/09/2016
- 02:05 PM Bug #14758 (Resolved): failed TestJournalRepair test on smithi
- In this case, mds.b was going into resolve (rank 0), while mds.a was remaining in standby. That's correct (rank 1 is...
- 11:53 AM Bug #14758 (New): failed TestJournalRepair test on smithi
- 11:52 AM Bug #14758 (Rejected): failed TestJournalRepair test on smithi
- The link in the report was wrong, it was actually:
http://pulpito.ceph.com/teuthology-2016-02-12_14:03:01-fs-jewel--...
- 06:34 AM Bug #14758: failed TestJournalRepair test on smithi
- Hmm, we haven't seen this since then, and there have been some smithi runs so it's apparently not just infrastructure...
- 11:50 AM Feature #14642: Validate layouts everywhere we load them
- Yes, and in testing I also found other places in the MDS that get upset with zeros, so the ticket probably either nee...
- 06:46 AM Feature #14642: Validate layouts everywhere we load them
- John, I think maybe you said in the user thread that we seem to guard against this on inputs, right? So we've no idea...
- 06:50 AM Bug #14255: qa: we are filling smithi disks with ffsb workloads
- This is still happening (http://pulpito.ceph.com/teuthology-2016-03-07_18:04:01-fs-master---basic-smithi/45723/, http...
- 06:45 AM Bug #14807 (Need More Info): MDS crashes repeatedly after upgrade to Infernalis from Hammer
- 06:44 AM Bug #14735: ceph-fuse does not mount at boot on Debian Jessie
- Bumping down priority as it's just the backport.
- 06:43 AM Bug #14608 (Need More Info): snaptests.yaml failure: [WRN] open_snap_parents has:" in cluster log
- For whatever reason we don't actually have the logs here (any more?), and it doesn't seem to have reproduced elsewher...
- 06:43 AM Bug #14996 (Fix Under Review): libcephfs hangs on shutdown if an unclosed opendir handle exists
- https://github.com/ceph/ceph/pull/7994
- 06:32 AM Bug #14196: test_object_deletion fails (tasks.cephfs.test_damage.TestDamage)
- Bumping down priority as it's just the backport now.
- 01:53 AM Bug #15008 (Fix Under Review): fuse expects root inode number to be FUSE_ROOT_ID
- https://jenkins.ceph.com/job/ceph-pull-requests/2751/
03/08/2016
- 03:27 PM Bug #15008 (Resolved): fuse expects root inode number to be FUSE_ROOT_ID
- It's not true when mounting into a subdir
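The underlying constraint: the FUSE kernel protocol always addresses the mount root as FUSE_ROOT_ID (1), so when ceph-fuse mounts a subdirectory the client has to translate between that fixed ID and the subdir's real inode number. A minimal sketch of the idea (names are hypothetical, not the actual ceph-fuse code):

```python
FUSE_ROOT_ID = 1  # fixed root inode number in the FUSE protocol

def make_ino_translators(subdir_root_ino):
    """Return (to_fuse, from_fuse) translators for a subdirectory mount.

    Illustrative only: the mounted subdir's real inode must be presented
    to the kernel as FUSE_ROOT_ID, and requests against FUSE_ROOT_ID must
    map back to the real inode.
    """
    def to_fuse(ceph_ino):
        return FUSE_ROOT_ID if ceph_ino == subdir_root_ino else ceph_ino

    def from_fuse(fuse_ino):
        return subdir_root_ino if fuse_ino == FUSE_ROOT_ID else fuse_ino

    return to_fuse, from_fuse

to_fuse, from_fuse = make_ino_translators(0x10000000123)
print(to_fuse(0x10000000123))          # → 1
print(from_fuse(1) == 0x10000000123)   # → True
```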
- 05:09 AM Bug #9679: Ceph hadoop terasort job failure
- I forgot to mention, I tested both Hadoop 2.7.1 and 2.7.2, with the same outcome.
64-bit Arch Linux circa Jan 2015. ...
- 05:01 AM Bug #9679: Ceph hadoop terasort job failure
- Greg,
My setup was simple: ceph JNI bindings were built from the ceph repo.
Ceph hadoop2 integration was built from h...
- 12:00 AM Bug #9679: Ceph hadoop terasort job failure
- Dmitry, can you elucidate more on your environment? Which Hadoop, and what bindings did you use to connect Ceph and H...
- 02:15 AM Bug #14805: Hadoop tests failing with EPERM
- Old libcephfs only had a permission check for open. Now it has full permission checks (open, lookup, setattr ...)
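To illustrate why previously-working files break under the new checks, here is a deliberately simplified Unix-style permission check (owner vs. other only, no groups; a hypothetical helper, not the libcephfs code): a file created with mode 0 grants no rights to anyone, so once the client actually enforces permissions, every later open or lookup is denied.

```python
import errno

def check_access(mode, uid, file_uid, want_bits):
    """Simplified permission check (owner vs. other only, no groups)."""
    bits = (mode >> 6) & 0o7 if uid == file_uid else mode & 0o7
    if (bits & want_bits) != want_bits:
        return -errno.EPERM  # new clients reject; old clients never checked
    return 0

READ = 0o4
print(check_access(0o644, 1000, 1000, READ))  # → 0 (allowed)
print(check_access(0o000, 1000, 1000, READ))  # mode-0 file: denied (-EPERM)
```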
03/07/2016
- 11:55 PM Bug #14805: Hadoop tests failing with EPERM
- Do you have any idea what about the client permission checking is busting Hadoop? We want to fix it properly (or at l...
- 02:25 PM Bug #14996: libcephfs hangs on shutdown if an unclosed opendir handle exists
- I think we should unify the open file/directory handles
- 10:41 AM Bug #14996 (Resolved): libcephfs hangs on shutdown if an unclosed opendir handle exists
This will hang on shutdown:...
03/03/2016
- 02:38 PM Bug #14684 (Resolved): test_scrub_checks fails
- 02:08 PM Bug #14970 (Rejected): Post-new-layouts clients crash talking to older MDSs
- Ah, I see: we reused the version, makes sense. Thanks!
- 02:01 PM Bug #14970: Post-new-layouts clients crash talking to older MDSs
- I think this only affects older MDS with feature bit (1<<58)
- 01:48 PM Bug #14970 (Rejected): Post-new-layouts clients crash talking to older MDSs
- By sheer coincidence I had a vstart cluster created before rebasing on master, and a client compiled after rebasing o...
- 02:38 AM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- The file by default goes to a file called "cachedump.*", like it says. I think it tends to go in /, but I don't remem...
03/02/2016
- 05:28 AM Bug #14805: Hadoop tests failing with EPERM
- tests passed
http://pulpito.ceph.com/zyan-2016-03-01_20:05:19-hadoop-jewel-testing-basic-mira/
02/29/2016
- 11:52 AM Bug #14685: dbench hang on native cifs mount
- http://qa-proxy.ceph.com/teuthology/teuthology-2016-02-26_19:14:10-samba-jewel---basic-mira/29964/teuthology.log
02/28/2016
- 06:09 PM Bug #10944: Deadlock, MDS logs "slow request", getattr pAsLsXsFs failed to rdlock
- We're also running into this with the latest versions of ceph and CentOS 7 kernel. Message on the mds is:...
- 09:04 AM Backport #14761 (In Progress): infernalis: ceph-fuse does not mount at boot on Debian Jessie
02/26/2016
- 02:04 PM Backport #14761: infernalis: ceph-fuse does not mount at boot on Debian Jessie
- Is there a reason why this was not backported to Infernalis before 9.2.1 release ?
- 02:04 PM Bug #14735: ceph-fuse does not mount at boot on Debian Jessie
- Is there a reason why this was not backported to Infernalis before 9.2.1 release ?
- 07:46 AM Bug #14732 (Duplicate): open returns EACCES when O_TRUNC is specified and write permission is den...
- @Yan, marking it as a duplicate of http://tracker.ceph.com/issues/13809 because it's only caused by this incomplete b...
- 07:09 AM Bug #14732 (Fix Under Review): open returns EACCES when O_TRUNC is specified and write permission...
02/25/2016
- 12:54 PM Bug #14732: open returns EACCES when O_TRUNC is specified and write permission is denied (hammer)
- Sorry, it's my fault.
I added two extra commits to https://github.com/ceph/ceph/pull/6604
- 04:01 AM Bug #14807: MDS crashes repeatedly after upgrade to Infernalis from Hammer
- Christopher Nelson wrote:
> It turns out the main files are too large. Is there any other way I can upload them?
...
02/24/2016
- 06:05 AM Backport #12350 (Resolved): Provided logrotate setup does not handle ceph-fuse correctly
- 05:22 AM Bug #14732: open returns EACCES when O_TRUNC is specified and write permission is denied (hammer)
- More runs with the same failure : http://pulpito.ceph.com/loic-2016-02-22_22:08:47-fs-hammer-backports---basic-multi/
02/23/2016
- 02:37 PM Bug #14807: MDS crashes repeatedly after upgrade to Infernalis from Hammer
- It turns out the main files are too large. Is there any other way I can upload them?
- 02:36 PM Bug #14807: MDS crashes repeatedly after upgrade to Infernalis from Hammer
- Apparently my institution blocks outbound scp, so I had to post the files here. Sorry for the delay.
- 09:21 AM Bug #13640 (Resolved): CephFS and page cache handling
- 09:15 AM Bug #14684 (Fix Under Review): test_scrub_checks fails
- ...
- 05:57 AM Backport #14843 (Rejected): infernalis: test_object_deletion fails (tasks.cephfs.test_damage.Test...
- 03:37 AM Bug #14759: ovh: failed snaptest-git-ceph.sh test
- might be a duplicate of #10436
- 03:20 AM Bug #14196: test_object_deletion fails (tasks.cephfs.test_damage.TestDamage)
- https://github.com/ceph/ceph-qa-suite/pull/829
- 03:12 AM Bug #14798 (Fix Under Review): free fds being exhausted eventually because freed fds are never pu...
02/22/2016
- 01:34 PM Bug #14732: open returns EACCES when O_TRUNC is specified and write permission is denied (hammer)
- strange, I can't find O_TRUNC in tests/open/06.t
- 12:19 PM Bug #14800 (Fix Under Review): [ceph-fuse] Fh ref might leak at umounting
- 12:11 PM Bug #14805 (Fix Under Review): Hadoop tests failing with EPERM
- I'm having trouble running the test on a local machine; let's try disabling client_permissions
https://github.com/ceph/ceph...
- 09:38 AM Bug #14684: test_scrub_checks fails
http://pulpito.ceph.com/teuthology-2016-02-19_14:03:01-fs-jewel---basic-smithi/17838/...
- 09:23 AM Bug #14196 (Pending Backport): test_object_deletion fails (tasks.cephfs.test_damage.TestDamage)
- http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-19_17:43:52-fs-infernalis---basic-openstack/559/
need ba...
02/20/2016
- 03:42 AM Bug #9679: Ceph hadoop terasort job failure
- I'm testing ceph version 10.0.3-1325-g98fba62 on hadoop 3.0.0 and I see the same error in hadoop terasort.
I'm not s...
02/19/2016
- 07:40 PM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- corresponding issue #14824
- 04:49 PM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- Next try commit 951339103d35bc8ee2de880f77aada40d15b592a
passed
http://pulpito.ceph.com/teuthology-2016-02-19_1...
- 04:01 PM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- Tests on ReplicatedPG commit 2817ffcf4e57f92551b86388681fc0fe70c386ec all failed in a similar way => ....
- 10:18 AM Bug #14818 (Resolved): Cython librados broke libcephfs/ceph_volume_client
- Specifically, the create_with_rados libcephfs function is no longer happy....
02/18/2016
- 10:31 PM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- Greg, the test passed on that commit.
http://pulpito.ceph.com/teuthology-2016-02-18_13:42:45-fs-wip-test-14716-2---...
- 01:11 AM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- Well at least it's consistent. Can you also try commit:2c8e57934284dae0ae92d1aa0839a87092ec7c51 against smithi/mira?
...
- 09:43 PM Bug #14807: MDS crashes repeatedly after upgrade to Infernalis from Hammer
- I was not aware of any issues prior to the upgrade. I am posting the files now, and I'll let you know the tag when it...
- 08:57 PM Bug #14807: MDS crashes repeatedly after upgrade to Infernalis from Hammer
- Do you have the MDS log from when this first started happening, and can you please upload it? (ceph-post-file will le...
- 03:44 PM Bug #14807 (Can't reproduce): MDS crashes repeatedly after upgrade to Infernalis from Hammer
- I have a small cluster (1xMON 1xMDS 2xOSD). It has been very stable for the last year.
However, after the upgrade ...
- 03:40 PM Bug #14672: MDS crashes with FAILED assert(inode_map.count(in->vino()) == 0) in 9.2.0
- I have this same problem, and I did not do a newfs. I simply upgraded from Hammer to Infernalis:...
- 03:35 PM Bug #14641: don't let users specify 0 on stripe count or object size
- It was a brand new 9.2 cluster. Haven't yet seen the issue again..
- 02:45 PM Feature #12144 (Resolved): cephfs-data-scan: integrated with sharded pgls
- https://github.com/ceph/ceph/pull/7034
- 10:44 AM Bug #14805 (Resolved): Hadoop tests failing with EPERM
Most recent instance:
http://pulpito.ceph.com/teuthology-2016-02-17_18:12:06-hadoop-jewel---basic-mira/
Here's ...
- 05:24 AM Bug #14800: [ceph-fuse] Fh ref might leak at umounting
- https://github.com/ceph/ceph/pull/7686
- 05:22 AM Bug #14800 (Resolved): [ceph-fuse] Fh ref might leak at umounting
- Recently we hit a ceph-fuse hang and had to kill the ceph-fuse process to continue. This issue is caused by forc...
- 03:22 AM Bug #14798: free fds being exhausted eventually because freed fds are never put back
- https://github.com/ceph/ceph/pull/7685
- 03:18 AM Bug #14798: free fds being exhausted eventually because freed fds are never put back
- The open and create operation in libcephfs will get a free fd from the free_fd_set. This free fd will be erased from ...
- 03:12 AM Bug #14798 (Resolved): free fds being exhausted eventually because freed fds are never put back
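The mechanism, as a hedged sketch (a toy allocator, not the actual libcephfs code): open pops the lowest free fd from the free set, and close must push it back; without the put-back, repeated open/close cycles exhaust the set.

```python
import heapq

class FdAllocator:
    """Toy model of a free-fd set: hand out the lowest free descriptor."""

    def __init__(self, first_fd, last_fd):
        self.free_fds = list(range(first_fd, last_fd + 1))
        heapq.heapify(self.free_fds)

    def get_fd(self):
        if not self.free_fds:
            raise OSError("EMFILE: free fds exhausted")
        return heapq.heappop(self.free_fds)

    def put_fd(self, fd):
        # The fix for this bug: return the fd to the set on close,
        # which the old code never did.
        heapq.heappush(self.free_fds, fd)

alloc = FdAllocator(10, 12)
fd = alloc.get_fd()
print(fd)              # → 10
alloc.put_fd(fd)       # without this, repeated open/close eventually raises
print(alloc.get_fd())  # → 10 again, since it was put back
```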
02/17/2016
- 11:09 PM Bug #14684: test_scrub_checks fails
- And again: http://pulpito.ceph.com/gregf-2016-02-15_18:08:49-fs-greg-fs-testing-215-1---basic-smithi/10771/
It doe...
02/16/2016
- 09:54 PM Bug #14714: three jobs in samba suite failing for hammer v0.94.6 QE validation
- The same tests on v0.94.4 failed the same way!?
Although they did pass during QE validation http://tracker.ceph.com/issue...
- 05:31 PM Bug #14716: "Thread.cc: 143: FAILED assert(status == 0)" in fs-hammer---basic-smithi
- Tried on different machines; all failed in a similar fashion:
http://pulpito.ceph.com/teuthology-2016-02-15_16:52:37-fs-hamm...
- 04:44 AM Bug #14754 (Duplicate): test_barrier (tasks.cephfs.test_full.TestClusterFull) fails on osdmap epo...
- I hope this was #14750 manifesting differently?
- 04:43 AM Bug #14698: Test failure: test_full_fsync (tasks.cephfs.test_full.TestQuotaFull)
- #14750
- 04:42 AM Bug #14698 (Duplicate): Test failure: test_full_fsync (tasks.cephfs.test_full.TestQuotaFull)
- 04:43 AM Bug #14258 (Duplicate): qa: failed test_full_fsync
- #14750
- 04:35 AM Bug #14697 (Resolved): mds: assert in SafeTimer while suiciding
- 03:49 AM Bug #14684: test_scrub_checks fails
- gregf-2016-02-15_13:33:12-fs-greg-fs-testing-215-sure---basic-smithi/10011/
original; and here we have it on maste...
02/15/2016
- 04:46 AM Backport #14761 (Rejected): infernalis: ceph-fuse does not mount at boot on Debian Jessie
- https://github.com/ceph/ceph/pull/7834
02/14/2016
- 08:22 PM Bug #14684: test_scrub_checks fails
- http://pulpito.ceph.com/teuthology-2016-02-05_14:03:02-fs-jewel---basic-smithi/8806/
Same as the original case
- 07:54 PM Bug #14684: test_scrub_checks fails
- http://pulpito.ceph.com/teuthology-2016-02-12_14:03:01-fs-jewel---basic-smithi/8245/...
- 08:04 PM Bug #14759 (Closed): ovh: failed snaptest-git-ceph.sh test
- http://pulpito.ovh.sepia.ceph.com:8081/teuthology-2016-02-06_18:04:02-fs-master---basic-openstack/17910/...
- 07:51 PM Bug #14758 (Resolved): failed TestJournalRepair test on smithi
- http://pulpito.ceph.com/teuthology-2016-02-12_14:03:01-fs-jewel---basic-smithi/8218/...
02/13/2016
- 07:08 AM Bug #14714: three jobs in samba suite failing for hammer v0.94.6 QE validation
- Hmm, we've got 3 failures in teuthology-2016-02-07_21:14:02-samba-infernalis---basic-openstack as well, but they're d...
- 06:10 AM Bug #14754 (Duplicate): test_barrier (tasks.cephfs.test_full.TestClusterFull) fails on osdmap epo...
- In an integration branch which might have messed it up, but this is a first appearance:
http://pulpito.ceph.com/greg...