Activity
From 11/04/2014 to 12/03/2014
12/03/2014
- 09:31 PM Bug #10229: Filer: lock inversion with Objecter
- 10:40 AM Bug #10229 (Resolved): Filer: lock inversion with Objecter
- Saw this on a next test (http://qa-proxy.ceph.com/teuthology/sage-2014-12-01_11:11:17-fs-next-distro-basic-multi/6289...
- 05:55 PM Feature #1398: qa: multiclient file io test
- Note to self:
Try: rbd import to create an image name, rbd resize the image, make sure reads return EOF at right...
- 06:56 AM Fix #10135 (Resolved): OSDMonitor: allow adding cache pools to cephfs pools already in use
- 26e8cf174b8e76b4282ce9d9c1af6ff12f5565a9
- 05:16 AM Bug #10164 (Fix Under Review): Dirfrag objects for deleted dir not purged until MDS restart
- https://github.com/ceph/ceph/pull/3071
12/02/2014
- 06:57 AM Bug #9997 (Resolved): test_client_pin case is failing
- Merged to next (https://github.com/ceph/ceph/pull/3056)
- 06:53 AM Bug #10217 (Resolved): old fuse should warn on flock
- This works in master.
- 06:19 AM Bug #10217: old fuse should warn on flock
- Yes, we need a recent version of ceph-fuse and the MDS; old versions do not support interrupting flock
- 03:37 AM Bug #10217 (Resolved): old fuse should warn on flock
Test failure: test_filelock (tasks.mds_client_recovery.TestClientRecovery):
http://pulpito.front.sepia.ceph.com/sa...
- 03:46 AM Fix #10135 (Fix Under Review): OSDMonitor: allow adding cache pools to cephfs pools already in use
- giant backport PR: https://github.com/ceph/ceph/pull/3055
- 03:35 AM Bug #10151 (Resolved): mds client cache pressure health warning oscillates on/off
- The version on next has a pass on client-limits (the one that exercises health): http://pulpito.front.sepia.ceph.com/...
12/01/2014
- 06:05 PM Fix #10135 (Pending Backport): OSDMonitor: allow adding cache pools to cephfs pools already in use
- merged to next in commit:25fc21b837ba74bab2f6bc921c78fb3c43993cf5
This also should go into giant (I think Firefly ...
- 05:58 PM Bug #10011 (Resolved): Journaler: failed on shutdown or EBLACKLISTED
- giant commit:65f6814847fe8644f5d77a9021fbf13043b76dbe
- 06:37 AM Bug #10011 (Fix Under Review): Journaler: failed on shutdown or EBLACKLISTED
- Haven't seen any failures around this, let's backport to giant: https://github.com/ceph/ceph/pull/3047
- 06:59 AM Bug #10164 (In Progress): Dirfrag objects for deleted dir not purged until MDS restart
- Zheng: assigning to you since you mentioned you were working on it
- 06:34 AM Bug #9997 (Fix Under Review): test_client_pin case is failing
- https://github.com/ceph/ceph/pull/3045
- 04:42 AM Bug #9994: ceph-qa-suite: nfs mount timeouts
- http://pulpito.ceph.com/teuthology-2014-11-23_23:10:01-knfs-next-testing-basic-multi/617093/
http://pulpito.ceph.com...
- 04:20 AM Feature #9881 (Resolved): mds: admin command to flush the mds journal
- Merged to master (forgot the Fixes:, doh)...
11/26/2014
- 10:26 PM Bug #9997: test_client_pin case is failing
- For 3.18+ kernels, I think we can iterate over all the dir inodes and invalidate their dentries one by one.
- 12:19 AM Bug #9997: test_client_pin case is failing
- Yes, I think it's caused by the d_invalidate change. In 3.18-rc kernels, d_invalidate() unhashes the dentry regardless of whether the...
11/25/2014
- 09:27 AM Fix #10135 (Fix Under Review): OSDMonitor: allow adding cache pools to cephfs pools already in use
- https://github.com/ceph/ceph/pull/3008
- 04:42 AM Bug #9997: test_client_pin case is failing
- After much head scratching and log examination, this appears to be a kernel regression (assuming our behaviour was va...
11/24/2014
- 05:19 PM Bug #10151 (Pending Backport): mds client cache pressure health warning oscillates on/off
- Merged to master as of commit:aa4d1478647ce416e9cf4e8fcd32411230639f40. I like to let things go through testing befor...
- 09:20 AM Bug #10151: mds client cache pressure health warning oscillates on/off
- Opened PR against master instead of next by mistake. Next PR is https://github.com/ceph/ceph/pull/2996
- 03:16 AM Bug #10151 (Fix Under Review): mds client cache pressure health warning oscillates on/off
- master: https://github.com/ceph/ceph/pull/2989
giant: https://github.com/ceph/ceph/pull/2990
- 09:07 AM Bug #9997 (In Progress): test_client_pin case is failing
11/23/2014
- 08:30 PM Bug #9997: test_client_pin case is failing
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-11-16_23:04:01-fs-next-testing-basic-multi/603971/
11/21/2014
- 12:34 PM Bug #9674 (Resolved): nightly failed multiple_rsync.sh
- I haven't seen this fail since then, hurray.
- 06:39 AM Bug #10151 (In Progress): mds client cache pressure health warning oscillates on/off
- Reproduced this locally by just allowing 3 mons in a vstart cluster and following the procedure from the mds_client_l...
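A minimal sketch of that local reproduction, assuming the cluster is started from a source checkout and that vstart.sh reads the MON/MDS/OSD environment variables (paths and daemon counts are illustrative):

    # start a local cluster with 3 mons so the leader/peon difference in
    # health reporting described in this ticket can actually occur
    MON=3 MDS=1 OSD=1 ./vstart.sh -n -d
    # drive a client past the MDS cache limit (per the mds_client_limits
    # procedure), then poll health and watch the warning flap on and off
    ./ceph -c ceph.conf health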
- 12:50 AM Bug #10151: mds client cache pressure health warning oscillates on/off
- Yes -- the leader is reporting the health warning but the peons are not.
The warning is "Client 2922132 failing to...
- 06:34 AM Fix #10135: OSDMonitor: allow adding cache pools to cephfs pools already in use
- Yeah, we didn't think about this the first time around because the focus was on cache tiers to EC pools, but it would mak...
- 03:58 AM Bug #10164: Dirfrag objects for deleted dir not purged until MDS restart
- Alternatively, a less contrived way to see the issue: just do a loop of "cp -r /etc . ; rm -rf ./etc" in a filesystem mo...
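A hedged sketch of that reproduction, assuming a CephFS mount at /mnt/cephfs and a metadata pool named "metadata" (both names are illustrative):

    # repeatedly create and delete a directory tree on the mounted filesystem
    for i in $(seq 1 20); do
        cp -r /etc /mnt/cephfs/etc
        rm -rf /mnt/cephfs/etc
    done
    # list metadata-pool objects; with this bug the dirfrag objects for the
    # deleted directories remain visible until the MDS is restarted
    rados -p metadata ls | head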
- 03:14 AM Bug #10164 (Resolved): Dirfrag objects for deleted dir not purged until MDS restart
Seen while playing with the #9881 flush functionality: the dirfrag objects for deleted directories are never cleane...
11/20/2014
- 09:59 AM Bug #10151 (Resolved): mds client cache pressure health warning oscillates on/off
- Seeing this on the lab cluster. Not sure if it is a problem in the mds health reporting or the mon, but it goes on and o...
11/19/2014
11/18/2014
- 11:51 PM Bug #10131: kclient: dentry still in use on umount
- It's a VFS bug. Fixed by...
- 11:04 PM Bug #10131 (In Progress): kclient: dentry still in use on umount
- 09:20 AM Bug #10131 (Resolved): kclient: dentry still in use on umount
- ...
- 03:40 PM Fix #10135 (Resolved): OSDMonitor: allow adding cache pools to cephfs pools already in use
- Right now we disallow this with _check_remove_tier(), I believe because we were worried about coordinating the switch...
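For context, a hedged sketch of the operation this fix is meant to permit, i.e. putting a cache tier in front of a pool that is already in use by CephFS (pool names and pg count are illustrative):

    # "cephfs_data" stands in for a data pool already attached to a filesystem
    ceph osd pool create cachepool 64
    ceph osd tier add cephfs_data cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay cephfs_data cachepool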
- 02:37 PM Feature #1398: qa: multiclient file io test
- Answering my own question: item 2 above. It looks like this can all be done from Python.
11/13/2014
- 06:58 AM Bug #10092 (Resolved): multiple_rsync.sh + ceph-fuse timing out on firefly
- Greg is right, these time out semi-regularly. Increased the timeout on master, giant, and firefly.
11/12/2014
- 08:59 PM Bug #10092 (Resolved): multiple_rsync.sh + ceph-fuse timing out on firefly
- teuthology-2014-11-11_23:04:01-fs-firefly-distro-basic-multi/598145
teuthology-2014-11-11_23:04:01-fs-firefly-distro...
11/11/2014
- 02:59 PM Bug #8090: multimds: mds crash in check_rstats
- ubuntu@teuthology:/var/lib/teuthworker/archive/teuthology-2014-11-10_23:18:02-multimds-giant-testing-basic-multi/595393
11/10/2014
- 11:46 PM Bug #10041: ceph-fuse: never exit when no MDS server is available
- Just wanted to add that the lack of a timeout causes havoc all over the place... Autofs, backup scripts mounting CephFS on d...
- 04:05 PM Bug #10041: ceph-fuse: never exit when no MDS server is available
- Although it terminates on "Ctrl+C", a timeout would be _very_ useful because it would prevent the system from hanging on b...
- 11:11 AM Bug #10041: ceph-fuse: never exit when no MDS server is available
- Was it blocking in the foreground? Did SIGINT (i.e., Ctrl-C) work on it?
We can add a configurable timeout but I...
- 01:07 AM Bug #10041 (Resolved): ceph-fuse: never exit when no MDS server is available
- I'm attempting to mount CephFS using the FUSE client (i.e. _ceph-fuse_), which does not exit if all MDS servers are down (I ...
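Until a configurable timeout exists, a hedged workaround sketch for the hangs described above, using the coreutils timeout wrapper (monitor address and mount point are illustrative, and whether the daemonizing parent exits cleanly on SIGTERM may vary by version):

    # bound the mount attempt to five minutes so autofs/backup jobs don't
    # hang indefinitely when no MDS is available; timeout sends SIGTERM
    timeout 300 ceph-fuse -m mon1.example.com:6789 /mnt/cephfs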
- 10:57 PM Bug #10061 (New): uclient: MDS: output cap data in messages
- MClientCaps messages don't dump the caps they're updating, and generally neither does anything else. We need to optio...
- 10:55 PM Feature #10060 (New): uclient: warn about stuck cap flushes
- It can be hard to diagnose issues that involve cap state. To help with that, the client should keep track of its cap ...
- 10:40 PM Bug #9977 (Resolved): cephfs-journal-tool falsely reports invalid start_ptr
- In next branch as commit:65c33503c83ff8d88781c5c3ae81d88d84c8b3e4 and in giant as commit:fc5354dec55248724f8f6b795e3a...
- 09:36 PM Bug #9341: MDS: very slow rejoin
- Thanks.
- 09:27 PM Bug #9341 (Resolved): MDS: very slow rejoin
- This is backported to giant as of commit:97e423f52155e2902bf265bac0b1b9ed137f8aa0. The test for it also got backporte...
- 09:26 PM Bug #9800 (Resolved): client-limits test is not passing
- Backported in commit:387efc5fe1fb148ec135a6d8585a3b8f8d97dbf8
- 05:20 PM Bug #10025 (Resolved): Journal undump causes MDS to crash when start pos is not on object boundary
- Merged into next in commit:69be8e9b30c18e47c17ff7dafc4ac8fbe00d48e7, and the appropriate backport bits were merged la...
- 11:24 AM Bug #9997: test_client_pin case is failing
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-11-09_23:04:01-fs-next-testing-basic-multi/593068/
- 11:23 AM Bug #6613: samba is crashing in teuthology
- Still happening: http://qa-proxy.ceph.com/teuthology/teuthology-2014-11-09_23:14:01-samba-next-testing-basic-multi/59...
11/09/2014
- 10:41 PM Bug #9995 (Resolved): failing test_filelock
- 09:19 AM Bug #9341: MDS: very slow rejoin
- Greg Farnum wrote:
> Hmm, we didn't put this in Giant initially because we were trying not to perturb it. Master has...
11/08/2014
- 08:07 AM Bug #9977 (Fix Under Review): cephfs-journal-tool falsely reports invalid start_ptr
- Backport to giant PR at:
https://github.com/ceph/ceph/pull/2887
11/07/2014
- 04:27 PM Bug #10011: Journaler: failed on shutdown or EBLACKLISTED
- Should be resolved by commit:6977d02f0d31c453cdf554a8f1796f290c1a3b89. We may want to backport once it's been through...
- 04:16 PM Feature #4138 (Resolved): MDS: forward scrub: add functionality to verify disk data is consistent
- This one ticket at least is definitely fulfilled by commit:daa9f9ffe82a811b5e0e69ef52241c4e0b7556bc
11/06/2014
- 11:43 PM Bug #9995: failing test_filelock
- 12:16 AM Bug #9995: failing test_filelock
- https://github.com/ceph/ceph-qa-suite/pull/228
- 09:46 PM Bug #9977 (Pending Backport): cephfs-journal-tool falsely reports invalid start_ptr
- Merged to next in commit:574c1d4bad37514ba941e3ae83e33a7d926697d9
Yes, let's please backport.
- 05:49 PM Bug #9674: nightly failed multiple_rsync.sh
- I messed up (didn't set sudo everywhere), newer commits will hopefully make it all good. giant:f66bf31b6743246fb1c882...
- 11:16 AM Bug #10025 (Resolved): Journal undump causes MDS to crash when start pos is not on object boundary
Related ML thread from Jasper Siero, who first encountered the issue on firefly (http://lists.ceph.com/pipermail/ce...
11/05/2014
- 08:58 AM Bug #9995: failing test_filelock
- We'll need to update the test then so that it detects this situation and aborts quietly instead of raising an error.
- 05:42 AM Bug #10011: Journaler: failed on shutdown or EBLACKLISTED
- Ah... I've just realised why the "respawn on blacklist" thing I put in a while back isn't kicking in here: because Jo...
- 04:32 AM Bug #10011: Journaler: failed on shutdown or EBLACKLISTED
mon.a says:...
11/04/2014
- 10:59 PM Bug #9995: failing test_filelock
- ...
- 08:54 PM Bug #9995: failing test_filelock
- Is there something we can do as a workaround to prevent this from blocking things? I expect people are going to use new ce...
- 07:36 PM Bug #9995 (Won't Fix): failing test_filelock
- It's a bug in old versions of libfuse: they call our setlk callback for both fcntl setlk and flock requests.
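A hedged way to check whether a client host is affected, assuming the separate flock callback arrived in libfuse 2.9 and that the fuse development package and fusermount are installed:

    # on libfuse older than 2.9, flock() requests are funnelled through the
    # same setlk callback as fcntl locks, which is what confuses this test
    pkg-config --modversion fuse
    fusermount -V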
- 05:46 PM Bug #9994: ceph-qa-suite: nfs mount timeouts
- teuthology-2014-11-03_23:10:01-knfs-giant-testing-basic-multi/585658/
- 05:40 PM Bug #10011 (Resolved): Journaler: failed on shutdown or EBLACKLISTED
- http://qa-proxy.ceph.com/teuthology/teuthology-2014-11-03_23:08:01-kcephfs-giant-testing-basic-multi/585648/
teuth... - 06:53 AM Bug #9869 (Resolved): Client: not handling cap_flush_ack messages properly