Ceph : Issues | https://tracker.ceph.com/ | 2022-05-20T11:48:38Z | Ceph
CephFS - Bug #55725 (Pending Backport): MDS allows a (kernel) client to exceed the xattrs key/val...
https://tracker.ceph.com/issues/55725 | 2022-05-20T11:48:38Z | Luis Henriques
<p>A client is allowed to set as many xattrs on an inode as it wants, as long as it holds CAP_XATTR_EXCL. This allows the client to buffer the xattrs and send them to the MDS later on, which will eventually crash the MDS (and possibly also require recovering the filesystem?).</p>
<p>There was an attempt to fix this a long time ago in <a class="external" href="https://tracker.ceph.com/issues/19033">https://tracker.ceph.com/issues/19033</a>, which resulted in commit eb915d0eeccb ("cephfs: fix write_buf's _len overflow problem"). This fix added a maximum size for the xattrs (keys + values), but that maximum is only enforced when a client uses the synchronous MDS_OP_SETXATTR operation.</p>
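<p>For illustration only, here's a minimal sketch (the names and the 64K limit are assumptions, not the actual MDS/client code) of the kind of total-size check that today only runs on the synchronous path, but would also need to run before a client buffers xattrs:</p>
<pre><code class="c">
/*
 * Illustrative sketch only: names and the 64K limit are assumptions.
 * The point is that the total size of all xattr keys + values has to
 * be checked on every path, including the buffered one used by
 * clients holding CAP_XATTR_EXCL.
 */
#include <stddef.h>
#include <errno.h>

#define MAX_XATTR_PAIRS_SIZE 65536  /* assumed keys+values limit */

static int check_xattr_limit(size_t current_total, size_t key_len,
                             size_t val_len)
{
        if (current_total + key_len + val_len > MAX_XATTR_PAIRS_SIZE)
                return -ENOSPC;  /* attr reports "No space left on device" */
        return 0;
}
</code></pre>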
<p>Here's a small test case:</p>
<pre>
# dd if=/dev/zero bs=1 count=65536 2> /dev/null | attr -s myattr /mnt/myfile
attr_set: No space left on device
Could not set "myattr" for /mnt/myfile
# dd if=/dev/zero bs=1 count=65536 2> /dev/null | attr -s myattr /mnt/myfile
Attribute "myattr" set to a 65536 byte value for /mnt/myfile:
#
</pre>

cephsqlite - Bug #55696 (Duplicate): vstart hangs when creating volume
https://tracker.ceph.com/issues/55696 | 2022-05-18T15:18:08Z | Luis Henriques
<p>As in the subject, when I create a vstart cluster it hangs with the following:</p>
<pre>
MON=2 MDS=3 OSD=2 RGW=0 ../src/vstart.sh -b --without-dashboard -i <IP-ADDR> -n
<...>
/build/bin/ceph -c /build/ceph.conf -k /build/keyring fs volume ls
</pre>
<p>While stuck at this stage, here's what I get with a <code>ceph -s</code>:<br /><pre>
  cluster:
    id:     680fb9ba-b496-47a1-9729-b801648b4ab3
    health: HEALTH_OK

  services:
    mon: 2 daemons, quorum a,b (age 2m)
    mgr: no daemons active (since 69s)
    osd: 2 osds: 2 up (since 103s), 2 in (since 2m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 200 GiB / 202 GiB avail
    pgs:     100.000% pgs unknown
             1 unknown
</pre></p>
<p>So, no MDS is shown here and the pgs don't look good (and that's why I'm selecting the 'OSD' category for the issue).</p>
<p>I'm attaching the logs collected from the 'out' directory.</p>

CephFS - Fix #52104 (Fix Under Review): qa: add testing for "copyfrom" mount option
https://tracker.ceph.com/issues/52104 | 2021-08-09T10:34:09Z | Luis Henriques
<p>We currently don't have any testing coverage for the copy_file_range syscall, which requires the "copyfrom" mount option. Add new workload variants to cover it.</p>
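<p>For reference, a minimal sketch of what such a workload would exercise (paths are hypothetical; per the above, the remote-copy path needs the "copyfrom" mount option):</p>
<pre><code class="c">
/*
 * Minimal copy_file_range(2) exerciser; hypothetical paths.  On CephFS
 * the remote object-copy path is only taken when the filesystem was
 * mounted with "-o copyfrom"; otherwise a regular read/write copy is
 * done behind the scenes.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int in = open("/mnt/cephfs/src", O_RDONLY);
        int out = open("/mnt/cephfs/dst", O_WRONLY | O_CREAT, 0644);
        ssize_t n;

        if (in < 0 || out < 0)
                return 1;
        /* NULL offsets: copy from the current file offsets */
        n = copy_file_range(in, NULL, out, NULL, 65536, 0);
        printf("copied %zd bytes\n", n);
        return n < 0;
}
</code></pre>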
Linux kernel client - Feature #46690 (Fix Under Review): Add fscrypt support to the kernel CephFS...
https://tracker.ceph.com/issues/46690 | 2020-07-23T11:20:21Z | Luis Henriques

<p>As per the documentation, fscrypt is a (kernel) "library which filesystems can hook into to support transparent encryption of files and directories". It basically allows users to transparently encrypt files and directories: a user can simply set a key on a directory and all files within that directory will be encrypted. Note that only file data and filenames are actually encrypted; all other metadata (timestamps, file size, xattrs, etc.) remains visible to other users as long as they have the permissions to access it.</p>
<p>So far, only local filesystems support it (ext4, f2fs and ubifs), but it looks like there's nothing preventing CephFS from supporting it.</p>
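<p>As an illustration of the generic (not CephFS-specific) userspace interface involved, setting a v2 encryption policy on a directory looks like the sketch below; it assumes the master key was already added with FS_IOC_ADD_ENCRYPTION_KEY:</p>
<pre><code class="c">
/*
 * Standard fscrypt v2 policy ioctl, nothing CephFS-specific; error
 * handling trimmed.  A CephFS client would have to hook into the same
 * interface that ext4/f2fs/ubifs already implement.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/fscrypt.h>

static int set_encryption_policy(int dirfd,
                const __u8 key_id[FSCRYPT_KEY_IDENTIFIER_SIZE])
{
        struct fscrypt_policy_v2 pol;

        memset(&pol, 0, sizeof(pol));
        pol.version = FSCRYPT_POLICY_V2;
        pol.contents_encryption_mode = FSCRYPT_MODE_AES_256_XTS;
        pol.filenames_encryption_mode = FSCRYPT_MODE_AES_256_CTS;
        memcpy(pol.master_key_identifier, key_id,
               FSCRYPT_KEY_IDENTIFIER_SIZE);

        /* every file created under dirfd will now be encrypted */
        return ioctl(dirfd, FS_IOC_SET_ENCRYPTION_POLICY, &pol);
}
</code></pre>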
CephFS - Fix #46070 (Resolved): client: fix snap directory atime
https://tracker.ceph.com/issues/46070 | 2020-06-18T09:29:24Z | Luis Henriques

<p>The fuse client gets almost all the .snap directory timestamps from its parent. The exception is atime, which is left at 0 (1970-01-01 00:00).</p>
<p>For consistency, atime should also be obtained from the parent directory.</p>
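<p>A tiny illustrative sketch of the intended behaviour (the real client code is C++; these names are made up):</p>
<pre><code class="c">
/* Illustrative only: fill every .snap timestamp, including atime,
 * from the parent directory. */
struct ts { long mtime, ctime, atime; };

static void fill_snapdir_times(struct ts *snapdir, const struct ts *parent)
{
        snapdir->mtime = parent->mtime;
        snapdir->ctime = parent->ctime;
        snapdir->atime = parent->atime;  /* previously left at 0 (1970-01-01) */
}
</code></pre>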
CephFS - Bug #39951 (Resolved): mount: key parsing fails when doing a remount
https://tracker.ceph.com/issues/39951 | 2019-05-16T11:06:11Z | Luis Henriques

<p>When doing a CephFS remount (-o remount), the secret is parsed from procfs, where it shows up as '<hidden>', and the mount fails with:</p>
<pre>
secret is not valid base64: Invalid argument.
adding ceph secret key to kernel failed: Invalid argument.
</pre>
<p>As the kernel already has the key, we simply need to use it, as in the attached patch.</p>
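<p>For context, a hedged sketch of the keyring side of things: the secret is handed to the kernel as a "ceph"-type key (the type libceph registers), which is what a remount should be able to reuse instead of the '<hidden>' procfs value. The key description and target keyring below are illustrative:</p>
<pre><code class="c">
/* Hedged sketch; build with -lkeyutils.  The secret payload is a
 * placeholder, not a real key. */
#include <keyutils.h>
#include <stdio.h>

int main(void)
{
        const char secret[] = "AQB...";  /* placeholder */
        key_serial_t key;

        key = add_key("ceph", "client.admin", secret, sizeof(secret) - 1,
                      KEY_SPEC_PROCESS_KEYRING);
        if (key < 0) {
                perror("add_key");
                return 1;
        }
        printf("ceph key serial: %d\n", key);
        return 0;
}
</code></pre>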
Linux kernel client - Bug #38482 (Resolved): Quotas: mounting quotas subdir doesn't respect quota...
https://tracker.ceph.com/issues/38482 | 2019-02-26T12:07:22Z | Luis Henriques

<p>The kernel client doesn't respect quotas set on directories that aren't visible from its mount point (for example, when mounting a subdirectory of the directory where the quota was set). Here's a report from the ceph-users mailing list:</p>
<p><a class="external" href="http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033357.html">http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-February/033357.html</a></p>
<p>The fuse client seems to be able to handle these scenarios correctly.</p>
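<p>A hedged repro sketch (the ceph.quota.max_bytes vxattr is real; paths are illustrative): set a quota on a directory via one mount, then mount a subdirectory of it directly and writes are no longer limited:</p>
<pre><code class="c">
/*
 * Illustrative repro sketch; paths assume the whole filesystem is
 * mounted at /mnt/cephfs.  The vxattr name is the real CephFS quota
 * knob; everything else is hypothetical.
 */
#include <stdio.h>
#include <sys/xattr.h>

int main(void)
{
        /* limit /mnt/cephfs/dir to ~100MB */
        if (setxattr("/mnt/cephfs/dir", "ceph.quota.max_bytes",
                     "100000000", 9, 0) != 0) {
                perror("setxattr");
                return 1;
        }
        /* A client that then mounts mon:/dir/subdir directly never sees
         * the quota realm above its root and can write past the limit. */
        return 0;
}
</code></pre>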
Linux kernel client - Bug #38224 (Fix Under Review): Race handling ceph_snap_context in kernel cl...
https://tracker.ceph.com/issues/38224 | 2019-02-07T14:59:16Z | Luis Henriques

<p>I've been seeing generic/013 (from the xfstests suite) failing occasionally with a kmemleak warning. I finally found some time to look at it and... I can't say how many hours I've already spent, but I can say it's an embarrassingly high number!</p>
<p>Here's the warning:</p>
<pre>
unreferenced object 0xffff8881fccca940 (size 32):
comm "kworker/0:1", pid 12, jiffies 4295005883 (age 130.648s)
hex dump (first 32 bytes):
01 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
backtrace:
[<00000000d741a1ea>] build_snap_context+0x5b/0x2a0
[<0000000021a00533>] rebuild_snap_realms+0x27/0x90
[<00000000ac538600>] rebuild_snap_realms+0x42/0x90
[<000000000e955fac>] ceph_update_snap_trace+0x2ee/0x610
[<00000000a9550416>] ceph_handle_snap+0x317/0x5f3
[<00000000fc287b83>] dispatch+0x362/0x176c
[<00000000a312c741>] ceph_con_workfn+0x9ce/0x2cf0
[<000000004168e3a9>] process_one_work+0x1d4/0x400
[<000000002188e9e7>] worker_thread+0x2d/0x3c0
[<00000000b593e4b3>] kthread+0x112/0x130
[<00000000a8587dca>] ret_from_fork+0x35/0x40
[<00000000ba1c9c1d>] 0xffffffffffffffff
</pre>
<p>After adding some debug code (tracepoints to collect some data), I found that it's a struct ceph_snap_context being leaked. Basically, after being created (which sets its initial refcount to 1), its refcount is incremented in ceph_queue_cap_snap():</p>
<pre><code class="c syntaxhl"><span class="CodeRay">...
update_snapc:
<span class="keyword">if</span> (ci->i_head_snapc) {
ci->i_head_snapc = ceph_get_snap_context(new_snapc);
dout(<span class="string"><span class="delimiter">"</span><span class="content"> new snapc is %p</span><span class="char">\n</span><span class="delimiter">"</span></span>, new_snapc);
}
spin_unlock(&ci->i_ceph_lock);
...
</span></code></pre>
<p>The only ceph_put_snap_context() call for that ceph_snap_context object occurs when unmounting the filesystem, which leaves the refcount at 1 and never frees its memory.</p>
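<p>For clarity, here's the invariant that appears to be violated, sketched abstractly (this assumes the kernel's ceph headers and is not the actual fs/ceph code):</p>
<pre><code class="c">
/*
 * Abstract sketch: every ceph_get_snap_context() must eventually be
 * matched by a ceph_put_snap_context() before the last pointer to the
 * context is dropped, otherwise the refcount never reaches 0 and
 * kmemleak fires.
 */
static void replace_head_snapc(struct ceph_inode_info *ci,
                               struct ceph_snap_context *new_snapc)
{
        struct ceph_snap_context *old = ci->i_head_snapc;

        ci->i_head_snapc = ceph_get_snap_context(new_snapc); /* +1 ref */
        ceph_put_snap_context(old); /* drop the old ref (NULL is safe) */
}
</code></pre>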
<p>There's obviously a race <em>somewhere</em>, and I suspect that either ci->i_head_snapc or ci->i_snap_realm is being set to NULL before a ceph_put_snap_context is done on that object. But the code is quite complex and difficult to follow, and I'm out of ideas on how to debug this, especially because the bug is really difficult to reproduce.</p>
<p>Has anyone seen this issue before? Does anyone have any idea on how to proceed?</p>
CephFS - Bug #37378 (Resolved): truncate_seq ordering issues with object creation
https://tracker.ceph.com/issues/37378 | 2018-11-23T10:17:14Z | Luis Henriques

<p>I'm seeing a bug with copy_file_range in recent clients. Here's a simple way to reproduce it:</p>
<pre>
# set files layouts
touch a b
setfattr -n ceph.file.layout -v "stripe_unit=65536 stripe_count=1 object_size=65536" a
setfattr -n ceph.file.layout -v "stripe_unit=65536 stripe_count=1 object_size=65536" b
# create 'a' and 'b' with 3 objects
xfs_io -f -c "pwrite -S 0x61 0 65536" a
xfs_io -f -c "pwrite -S 0x62 65536 65536" a
xfs_io -f -c "pwrite -S 0x63 131072 65536" a
xfs_io -f -c "pwrite -S 0x64 0 196608" b
# truncate 'b'
xfs_io -c "truncate 0" b
# copy 'a' into 'b'
xfs_io -c "copy_range -s 0 -d 0 -l 196608 a" b
</pre>
<p>at this point the contents of 'b' are:</p>
<pre>
hexdump b
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0030000
</pre>
<p>If I write 'b' again with 'xfs_io -f -c "pwrite -S 0x64 0 196608" b', its contents are restored, but running the copy_range again results in a file full of zeros.</p>
<p>Initially I thought this was a bug in copy_file_range, but at this point it looks more like an issue with the TRUNC operation, as the remote copy operation will always result in zeroed objects. Could this be an MDS bug? Or OSD...?</p>
Linux kernel client - Bug #36317 (Resolved): fallocate implementation on the kernel cephfs client
https://tracker.ceph.com/issues/36317 | 2018-10-04T13:57:23Z | Luis Henriques

<p>I remember seeing a comment somewhere (mailing list?) about this but couldn't find any reference to the issue, so I decided to open a bug.</p>
<p>The problem: fallocate doesn't seem to be doing what it's supposed to do. I haven't been able to spend time looking at the code to understand the details, but here's a summary of the issue, on a very small test cluster:<br /><pre>
node5:~ # df -Th /mnt
Filesystem Type Size Used Avail Use% Mounted on
192.168.122.101:/ ceph 14G 228M 14G 2% /mnt
</pre></p>
<p>So, I have ~14G available and fallocate a big file:<br /><pre>
node5:/mnt # xfs_io -f -c "falloc 0 1T" hugefile
node5:/mnt # ls -lh
total 1.0T
-rw------- 1 root root 1.0T Oct 4 14:17 hugefile
drwxr-xr-x 2 root root 6 Oct 4 14:17 mydir
</pre><br />I would expect this to fail; instead it succeeds, and the available space hasn't changed:<br /><pre>
node5:/mnt # df -Th /mnt
Filesystem Type Size Used Avail Use% Mounted on
192.168.122.101:/ ceph 14G 228M 14G 2% /mnt
</pre><br />Anyway, a successful call to fallocate(2) should mean that "subsequent writes into the range specified by offset and len are guaranteed not to fail because of lack of disk space", which isn't going to be the case in the above example.</p>
<p>I guess that a fix for this would require sending a CEPH_MSG_STATFS to the monitors to get the actual free space. But as I said, I haven't spent much time looking at the problem.</p>
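<p>For reference, a small sketch of the expectation that currently breaks (path hypothetical): fallocate(2) should fail with ENOSPC when the requested range can't be backed by free space:</p>
<pre><code class="c">
/*
 * Sketch of the expected behaviour; path is hypothetical.  Allocating
 * 1T on a ~14G filesystem should fail with ENOSPC, since fallocate(2)
 * guarantees that later writes in the range won't run out of space.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
        int fd = open("/mnt/hugefile", O_RDWR | O_CREAT, 0600);

        if (fd < 0)
                return 1;
        if (fallocate(fd, 0, 0, 1ULL << 40) == 0)
                printf("BUG: 1T allocation succeeded\n");
        else if (errno == ENOSPC)
                printf("expected: ENOSPC\n");
        return 0;
}
</code></pre>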
Ceph - Bug #23358 (Resolved): vstart.sh gives obscure error of dashboard dependencies missing
https://tracker.ceph.com/issues/23358 | 2018-03-14T10:56:03Z | Luis Henriques

<p>Here's the command line I'm using:<br /><pre>
MON=1 OSD=3 MDS=3 ../src/vstart.sh -x -N -i 192.168.155.1 --multimds 2 -b
</pre></p>
<p>And the error:<br /><pre>
/home/miguel/dev/ceph/ceph/build/bin/ceph -c /home/miguel/dev/ceph/ceph/build/ceph.conf -k /home/miguel/dev/ceph/ceph/build/keyring restful create-key admin -o /tmp/tmp.EiBEB4KsYf
/home/miguel/dev/ceph/ceph/build/bin/ceph -c /home/miguel/dev/ceph/ceph/build/ceph.conf -k /home/miguel/dev/ceph/ceph/build/keyring mgr module enable dashboard_v2
Error ENOENT: all mgr daemons do not support module 'dashboard_v2', pass --force to force enablement
</pre><br />If I edit vstart.sh and add a '--force', I'll get:<br /><pre>
/home/miguel/dev/ceph/ceph/build/bin/ceph -c /home/miguel/dev/ceph/ceph/build/ceph.conf -k /home/miguel/dev/ceph/ceph/build/keyring mgr module enable dashboard_v2 --force
/home/miguel/dev/ceph/ceph/build/bin/ceph -c /home/miguel/dev/ceph/ceph/build/ceph.conf -k /home/miguel/dev/ceph/ceph/build/keyring tell mgr dashboard set-login-credentials
admin admin
no valid command found; 10 closest matches:
restful list-keys
restful delete-key <key_name>
influx self-test
influx send
restful create-key <key_name>
prometheus self-test
balancer execute <plan>
balancer dump <plan>
influx config-show
influx config-set <key> <value>
Error EINVAL: invalid command
</pre></p>

CephFS - Bug #23289 (New): mds: xfstest generic/089 hangs in rename syscall in luminous
https://tracker.ceph.com/issues/23289 | 2018-03-09T15:46:07Z | Luis Henriques
<p>Running this test on a multimds Luminous cluster results in a stalled (very slow?) client. I'm using a kernel client, but I was able to reproduce the issue with fuse as well. Thus, I don't think this is a client issue. Also, I wasn't able to reproduce it with a mimic (master branch) cluster.</p>
<p>In the client, here's the test's stack:</p>
<pre>
cat /proc/1065/stack
[<0>] ceph_mdsc_do_request+0xef/0x2e0
[<0>] ceph_rename+0x125/0x1e0
[<0>] vfs_rename+0x335/0x810
[<0>] SyS_renameat2+0x47a/0x530
[<0>] do_syscall_64+0x60/0x110
[<0>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[<0>] 0xffffffffffffffff
</pre>
<p>I'm able to reproduce this very easily with a vstart cluster:</p>
<pre>
MON=1 OSD=2 MDS=3 ../src/vstart.sh -x -n -i 192.168.155.1 --multimds 2 -b
</pre>
<p>If, while the test is stalled, I reduce the number of MDSs, the test will proceed and finish. I'm still looking at the MDS logs (attached), but I'm not very proficient at reading those, so it may take a while for me to spot something that may be obvious to more trained eyes.</p>
sepia - Support #22559 (Resolved): Sepia Lab Access Request
https://tracker.ceph.com/issues/22559 | 2018-01-03T17:40:26Z | Luis Henriques

<p>1) Do you just need VPN access or will you also be running teuthology jobs?<br />I want to run teuthology jobs too.</p>
<p>2) Desired Username: <br />henrix</p>
<p>3) Alternate e-mail address(es) we can reach you at: <br /><a class="email" href="mailto:lhenriques@suse.com">lhenriques@suse.com</a><br /><a class="email" href="mailto:lhenriques@suse.de">lhenriques@suse.de</a></p>
<p>4) If you don't already have an established history of code contributions to Ceph, is there an existing community or core developer you've worked with who has reviewed your work and can vouch for your access request?<br />I have some contributions to the kernel client code already, and I'm currently working on implementing quota support in the kernel client. There isn't any pull request from me, only patchsets posted to the mailing-list. Here's an example: <a class="external" href="https://marc.info/?l=ceph-devel&m=151361156926781&w=2">https://marc.info/?l=ceph-devel&m=151361156926781&w=2</a><br />My main goal with sepia lab access is to actually run tests against a kernel client.</p>
<p style="padding-left:2em;">If you answered "No" to # 4, please answer the following (paste directly below the question to keep indentation):</p>
<p style="padding-left:2em;">4a) Paste a link to a Blueprint or planning doc of yours that was reviewed at a Ceph Developer Monthly.</p>
<p style="padding-left:2em;">4b) Paste a link to an accepted pull request for a major patch or feature.</p>
<p style="padding-left:2em;">4c) If applicable, include a link to the current project (planning doc, dev branch, or pull request) that you are looking to test.</p>
<p>5) Paste your SSH public key(s) between the <code>pre</code> tags<br /><pre>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZDKRAePFqjugc2ljw4QmM6r7B2dPI39OBB+2HKONtDZI4q+uwgZ1yqnhrbJaU9ux0W5wNzJyp2VfOPhw/9G9jo+efJPg5IySnTonhi1zeCXNpBMY70dJZVCyGEDdWKehvFM9xFBOxBC/h7gOyKqEc28BKoy4cHfULzm4er8PU30MyFBngu8NaoKqyQ3GYAfWOAXF5lUdalbJwl3a+HBb8PjUhbGuznAZF1Ji4JgnNuEZdlNVOKwhEkAFZ1JwXvgLfrEJVQs5b2V9Sp2Vm5ZngsLg7b3RzV5I4B3Lj7QTTraV1zLJGiS5iRdjY7D4W5Uxi2k1X7vaOjySJWWadHyeicNNBhLSfHl6JJ3aMuA8oAAmtgo/8IkJ96GFtgM9j31Wg8wkeTyQrwSeqCuv1uDFr8rp+OvMClhHnDgUklJLWjjYPCDVYR5KV+iOiHL58WJg/b4h6NsoP91wO8P8EcSaiMdq7P77w5nJxDI0KKb//dC8aHtWnRRRlrIS+PNPqKJSNTaSwOTieXvzplhzKFzu2dr79qs+/67iaNE6HBvIX5jVHWCFV3wllyjsC8WYPh4kEVtg2SY00SDOOrMrqANV8DDtHFwwxQTEjz1z5ESuwy9UcUSQfP1/zsAp2OQzPutKs6UipHyBK3pLm6MjaeeaEWu8fmetCqALxYV3h17O+Yw== miguel@hermes.olymp</pre></p>
<p>6) Paste your hashed VPN credentials between the <code>pre</code> tags (Format: <code>user@hostname 22CharacterSalt 65CharacterHashedPassword</code>)<br /><pre>henrix@hermes iPPDBfnLzP5Pe5FuTcJBmw b26aefb8a61451066f984e074f708ea9ca6b2c5d7cca35996c08b0b2bb2c2736</pre></p>

RADOS - Bug #20379 (Duplicate): bluestore assertion (KernelDevice.cc: 529: FAILED assert(r == 0))
https://tracker.ceph.com/issues/20379 | 2017-06-22T09:25:14Z | Luis Henriques
<p>There's already a bug (with lots of duplicates) that seems to be what I'm seeing in a vstart.sh cluster. Since that bug is already closed (<a class="external" href="http://tracker.ceph.com/issues/19511">http://tracker.ceph.com/issues/19511</a>), I've decided to open a new one. The recipe is simple:</p>
<p>1. Start the cluster with -b (I'm using -b -X -n --mon_num 1 --osd_num 3 --mds_num 1)<br />2. Start a client (I'm using an SP3 kernel), mount the cephfs, and run fio with a very simple script:</p>
<pre><code>[random-writers]
rw=randrw
size=32m
numjobs=8</code></pre>
<p>Running this script a few times will eventually kill the OSDs, changing the cluster status to HEALTH_WARN, after which the client starts seeing kernel messages:</p>
<pre>
[ 74.536976] libceph: osd1 192.168.155.1:6804 socket closed (con state OPEN)
[ 74.538087] libceph: osd1 192.168.155.1:6804 socket error on write
[ 74.567434] libceph: osd2 192.168.155.1:6808 socket closed (con state OPEN)
[ 74.568229] libceph: osd2 192.168.155.1:6808 socket error on write
[ 74.907989] libceph: osd1 down
[ 74.908322] libceph: osd2 down
[ 82.912261] libceph: osd0 192.168.155.1:6800 socket closed (con state OPEN)
[ 82.914071] libceph: osd0 192.168.155.1:6800 socket closed (con state CONNECTING)
[ 84.037899] libceph: osd0 192.168.155.1:6800 socket error on write
[ 85.033905] libceph: osd0 192.168.155.1:6800 socket error on write
[ 87.037925] libceph: osd0 192.168.155.1:6800 socket error on write
[ 91.045943] libceph: osd0 192.168.155.1:6800 socket closed (con state CONNECTING)
[ 99.045865] libceph: osd0 192.168.155.1:6800 socket error on write
[ 115.077906] libceph: osd0 192.168.155.1:6800 socket error on write
[ 147.141919] libceph: osd0 192.168.155.1:6800 socket error on write
</pre>
<p>Looking at the (dead) OSD logs, I see:</p>
<pre><code>    -2> 2017-06-21 11:03:31.509411 7f531a1fd700 -1 bdev(0x558c1d4dcb40 /home/miguel/dev/ceph/ceph/build/dev/osd0/block) aio_submit retries 16
    -1> 2017-06-21 11:03:31.509435 7f531a1fd700 -1 bdev(0x558c1d4dcb40 /home/miguel/dev/ceph/ceph/build/dev/osd0/block) aio submit got (11) Resource temporarily unavailable
     0> 2017-06-21 11:03:31.512526 7f531a1fd700 -1 /home/miguel/dev/ceph/ceph/src/os/bluestore/KernelDevice.cc: In function 'virtual void KernelDevice::aio_submit(IOContext*)' thread 7f531a1fd700 time 2017-06-21 11:03:31.509457
/home/miguel/dev/ceph/ceph/src/os/bluestore/KernelDevice.cc: 529: FAILED assert(r == 0)

 ceph version 12.0.3-1919-g782b63ae9c (782b63ae9c1eba1d0eb61a1bed1a8874329944ca) luminous (dev)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0xf5) [0x558c13f0f6a5]
 2: (KernelDevice::aio_submit(IOContext*)+0xb10) [0x558c13eaae30]
 3: (BlueStore::_deferred_submit(BlueStore::OpSequencer*)+0x713) [0x558c13d6da03]
 4: (BlueStore::_deferred_try_submit()+0x1c6) [0x558c13d6e356]
 5: (BlueStore::_txc_finish(BlueStore::TransContext*)+0x9c7) [0x558c13d82df7]
 6: (BlueStore::_txc_state_proc(BlueStore::TransContext*)+0xba) [0x558c13d9366a]
 7: (BlueStore::_kv_finalize_thread()+0xa0c) [0x558c13d951ec]
 8: (BlueStore::KVFinalizeThread::entry()+0xd) [0x558c13dea66d]
 9: (()+0x74e7) [0x7f532a78d4e7]
 10: (clone()+0x3f) [0x7f5329800a2f]
 NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.</code></pre>
<p>This is with current master branch.</p>
<p>My understanding is that this is just the IO queue being pushed a bit too hard, and the solution probably requires some sort of throttling mechanism.</p>
Linux kernel client - Bug #19958 (Resolved): missing i_nlink check while converting a file handle...
https://tracker.ceph.com/issues/19958 | 2017-05-17T11:16:00Z | Luis Henriques

<p>xfstest generic/426 is failing due to a missing i_nlink check while converting a file handle to a dentry. Here's the test output:</p>
<pre>
./check -d "generic/426"
FSTYP -- ceph
PLATFORM -- Linux/x86_64 rapido1 4.12.0-rc1+
generic/426
QA output created by 426
open_by_handle(326) opened an unlinked file!
open_by_handle(327) opened an unlinked file!
open_by_handle(363) opened an unlinked file!
open_by_handle(364) opened an unlinked file!
open_by_handle(365) opened an unlinked file!
[...]
Silence is golden
- output mismatch (see /fstests/xfstests-dev/results//generic/426.out.bad)
--- tests/generic/426.out 2017-05-03 10:19:33.000000000 +0000
+++ /fstests/xfstests-dev/results//generic/426.out.bad 2017-05-17 11:08:46.015583521 +0000
@@ -1,2 +1,700 @@
QA output created by 426
+open_by_handle(326) opened an unlinked file!
+open_by_handle(327) opened an unlinked file!
+open_by_handle(328) opened an unlinked file!
+open_by_handle(329) opened an unlinked file!
+open_by_handle(330) opened an unlinked file!
+open_by_handle(331) opened an unlinked file!
...
(Run 'diff -u tests/generic/426.out /fstests/xfstests-dev/results//generic/426.out.bad' to see the entire diff)
Ran: generic/426
Failures: generic/426
Failed 1 of 1 tests
</pre>
<p>I've tested the attached patch and it fixes the issue.</p>
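<p>For context, a hedged sketch of the general shape of such a check in an fh-to-dentry path (not necessarily the attached patch verbatim; lookup_inode() stands in for the filesystem-specific handle lookup and is hypothetical):</p>
<pre><code class="c">
/*
 * Hedged sketch: an unlinked inode (i_nlink == 0) must not be
 * resurrected through a file handle; the lookup should fail with
 * -ESTALE instead.
 */
static struct inode *lookup_inode_checked(struct super_block *sb, u64 ino)
{
        struct inode *inode = lookup_inode(sb, ino);  /* hypothetical */

        if (IS_ERR(inode))
                return inode;
        if (inode->i_nlink == 0) {
                iput(inode);
                return ERR_PTR(-ESTALE);
        }
        return inode;
}
</code></pre>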