Bug #47565

qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago"

Added by Patrick Donnelly over 3 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
Urgent
Assignee:
Xiubo Li
Category:
Testing
Target version:
% Done:

0%

Source:
Q/A
Tags:
Backport:
octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Client, MDS
Labels (FS):
qa, qa-failure
Pull request ID:
37629
Crash signature (v1):
Crash signature (v2):

Description

failure_reason: '"2020-09-20T16:27:35.946549+0000 mds.b (mds.0) 1 : cluster [WRN]
  client.4606 isn''t responding to mclientcaps(revoke), ino 0x200000007d5 pending
  pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago" in cluster log'

From: /ceph/teuthology-archive/pdonnell-2020-09-20_07:13:47-multimds-wip-pdonnell-testing-20200920.040823-distro-basic-smithi/5451587/teuthology.log

and

Failure: "2020-09-20T16:27:35.946549+0000 mds.b (mds.0) 1 : cluster [WRN] client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago" in cluster log
7 jobs: ['5451587', '5451424', '5451680', '5451482', '5451488', '5451475', '5451715']
suites intersection: ['conf/{client', 'mds', 'mon', 'osd}', 'whitelist_health']
suites union: ['begin', 'centos_latest', 'clusters/3-mds', 'clusters/9-mds', 'conf/{client', 'fuse-default-perm-no}', 'inline/no', 'inline/yes', 'k-testing}', 'mds', 'mon', 'mon-debug', 'mount', 'mount/fuse', 'mount/kclient/{k-testing', 'mount/kclient/{mount', 'ms-die-on-skipped}', 'ms-die-on-skipped}}', 'multimds/basic/{0-supported-random-distro$/{centos_8}', 'multimds/basic/{0-supported-random-distro$/{rhel_8}', 'multimds/basic/{0-supported-random-distro$/{ubuntu_latest}', 'multimds/verify/{begin', 'objectstore-ec/bluestore-bitmap', 'objectstore-ec/bluestore-comp', 'objectstore-ec/bluestore-comp-ec-root', 'objectstore-ec/bluestore-ec-root', 'osd}', 'overrides/{basic/{frag_enable', 'overrides/{distro/testing/{flavor/ubuntu_latest', 'overrides/{fuse-default-perm-no', 'q_check_counter/check_counter', 'tasks/cfuse_workunit_suites_blogbench}', 'tasks/cfuse_workunit_suites_ffsb}', 'tasks/cfuse_workunit_suites_fsstress', 'validater/valgrind}', 'verify/{frag_enable', 'whitelist_health', 'whitelist_wrongly_marked_down}', 'whitelist_wrongly_marked_down}}']

This looks like some kind of multimds bug with a client holding caps from 2+ MDSs. Here's the relevant MDS log for 5451587:

./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:35.055+0000 7f47c6334700 10 mds.0.4 send_message_client_counted client.4606 seq 1751 client_caps(revoke ino 0x200000007d5 182 seq 11 caps=pAsLsXsFscr dirty=- wanted=Fxcb follows 0 mseq 1 size 104857600/213909504 ts 1/18446744073709551615 mtime 2020-09-20T16:18:26.669617+0000) v11
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:35.055+0000 7f47c6334700  1 -- [v2:172.21.15.120:6835/1535229919,v1:172.21.15.120:6837/1535229919] --> 172.21.15.110:0/2458736152 -- client_caps(revoke ino 0x200000007d5 182 seq 11 caps=pAsLsXsFscr dirty=- wanted=Fxcb follows 0 mseq 1 size 104857600/213909504 ts 1/18446744073709551615 mtime 2020-09-20T16:18:26.669617+0000) v11 -- 0x55a9729bd200 con 0x55a972739800
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:35.055+0000 7f47c6334700 10 mds.0.cache.ino(0x200000007d5) auth_pin by 0x55a972d41a10 on [inode 0x200000007d5 [2,head] /client.1/tmp/tmp/data/datafile116 auth{1=1} v2245 ap=2 dirtyparent s=104857600 n(v0 rc2020-09-20T16:18:26.669617+0000 b104857600 1=1+0) (ifile excl->sync) (iversion lock) cr={4606=0-213909504@1} caps={4606=pAsLsXsFscr/pAsLsXsFsxcrwb/Fxcb@11},l=4606 | ptrwaiter=0 request=0 lock=0 importingcaps=0 caps=1 dirtyparent=1 replicated=1 dirty=1 waiter=0 authpin=1 0x55a972d41700] now 2
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:35.944+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 0.888669 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:40.944+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 5.888904 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:45.944+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 10.889030 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:50.944+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 15.888847 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:26:55.944+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 20.888905 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:00.944+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 25.889039 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:05.944+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 30.889069 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:10.945+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 35.889172 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:15.946+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 40.889251 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:20.945+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 45.889279 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:25.945+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 50.889367 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:30.945+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 55.889374 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:35.945+0000 7f47c4b31700 20 mds.0.locker caps_tick age = 60.889494 client.4606.0x200000007d5
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:35.945+0000 7f47c4b31700  0 log_channel(cluster) log [WRN] : client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago
./remote/smithi120/log/ceph-mds.b.log.gz:2020-09-20T16:27:35.945+0000 7f47c4b31700 20 mds.0.locker caps_tick client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago

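(Decoding the cap strings in the warning: the difference between issued pAsLsXsFsxcrwb and pending pAsLsXsFscr is the Fxwb file caps, i.e. exclusive/write/buffer. The (ifile excl->sync) lock transition in the inode dump above is the MDS revoking exactly those caps, and the client log later in this ticket confirms "revocation of Fxwb".)
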
For debugging this, I suggest looking at the messages going from the client to each MDS. Fortunately this is a ceph-fuse client, so logs of what it's doing are available. This does happen with the kernel client too. For example:

/ceph/teuthology-archive/pdonnell-2020-09-20_07:13:47-multimds-wip-pdonnell-testing-20200920.040823-distro-basic-smithi/5451482/teuthology.log


Related issues 2 (0 open, 2 closed)

Copied to CephFS - Backport #47990: nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago" (Resolved, Nathan Cutler)
Copied to CephFS - Backport #47991: octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago" (Resolved, Nathan Cutler)
Actions #1

Updated by Patrick Donnelly over 3 years ago

  • Assignee set to Xiubo Li
Actions #2

Updated by Xiubo Li over 3 years ago

  • Status changed from New to In Progress
Actions #3

Updated by Xiubo Li over 3 years ago

From 5451587/remote/smithi110/log/ceph-client.1.30354.log.gz:

We can see that client.4606 received the revoke request and started flushing the dirty data to the OSDs at "2020-09-20T16:26:35.056+0000":


2020-09-20T16:26:35.056+0000 7f55ceffd700  5 client.4606 handle_cap_grant on in 0x200000007d5 mds.0 seq 11 caps now pAsLsXsFscr was pAsLsXsFsxcrwb
2020-09-20T16:26:35.056+0000 7f55ceffd700 10 client.4606 update_inode_file_time 0x200000007d5.head(faked_ino=0 ref=5 ll_ref=3 cap_refs={4=0,1024=1,4096=0,8192=1} open={2=0,3=0} mode=100700 size=104857600/213909504 nlink=1 btime=0.000000 mtime=2020-09-20T16:18:26.669617+0000 ctime=2020-09-20T16:18:26.669617+0000 caps=pAsLsXsFsxcrwb(0=pAsLsXsFsxcrwb) objectset[0x200000007d5 ts 0/0 objects 25 dirty_or_tx 196608] parents=0x10000000731.head["datafile116"] 0x7f55c4063540) pAsLsXsFsxcrwb ctime 2020-09-20T16:18:26.669617+0000 mtime 2020-09-20T16:18:26.669617+0000
2020-09-20T16:26:35.056+0000 7f55ceffd700 10 client.4606   revocation of Fxwb
2020-09-20T16:26:35.056+0000 7f55ceffd700 15 inode.get on 0x7f55c4063540 0x200000007d5.head now 6
2020-09-20T16:26:35.056+0000 7f55ceffd700 10 client.4606 _flush 0x200000007d5.head(faked_ino=0 ref=6 ll_ref=3 cap_refs={4=0,1024=1,4096=0,8192=1} open={2=0,3=0} mode=100700 size=104857600/213909504 nlink=1 btime=0.000000 mtime=2020-09-20T16:18:26.669617+0000 ctime=2020-09-20T16:18:26.669617+0000 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[0x200000007d5 ts 0/0 objects 25 dirty_or_tx 196608] parents=0x10000000731.head["datafile116"] 0x7f55c4063540)
...

But the flush finished at "2020-09-20T16:29:20.386+0000":

...
2020-09-20T16:29:20.385+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 20171 ==== osd_op_reply(34991 200000007d5.00000005 [write 2269184~4096,write 2392064~4096] v26'11642 uv11642 ondisk = 0) v8 ==== 206+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:29:20.386+0000 7f55cffff700 10 client.4606 _flushed 0x200000007d5.head(faked_ino=0 ref=6 ll_ref=3 cap_refs={4=0,1024=1,4096=0,8192=1} open={2=0,3=0} mode=100700 size=104857600/213909504 nlink=1 btime=0.000000 mtime=2020-09-20T16:18:26.669617+0000 ctime=2020-09-20T16:18:26.669617+0000 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[0x200000007d5 ts 0/0 objects 25 dirty_or_tx 0] parents=0x10000000731.head["datafile116"] 0x7f55c4063540)
2020-09-20T16:29:20.386+0000 7f55cffff700  5 client.4606 put_cap_ref dropped last FILE_BUFFER ref on 0x200000007d5.head(faked_ino=0 ref=6 ll_ref=3 cap_refs={4=0,1024=0,4096=0,8192=0} open={2=0,3=0} mode=100700 size=104857600/213909504 nlink=1 btime=0.000000 mtime=2020-09-20T16:18:26.669617+0000 ctime=2020-09-20T16:18:26.669617+0000 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[0x200000007d5 ts 0/0 objects 25 dirty_or_tx 0] parents=0x10000000731.head["datafile116"] 0x7f55c4063540)
2020-09-20T16:29:20.386+0000 7f55cffff700  5 client.4606 put_cap_ref dropped last FILE_CACHE ref on 0x200000007d5.head(faked_ino=0 ref=6 ll_ref=3 cap_refs={4=0,1024=0,4096=0,8192=0} open={2=0,3=0} mode=100700 size=104857600/213909504 nlink=1 btime=0.000000 mtime=2020-09-20T16:18:26.669617+0000 ctime=2020-09-20T16:18:26.669617+0000 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[0x200000007d5 ts 0/0 objects 25 dirty_or_tx 0] parents=0x10000000731.head["datafile116"] 0x7f55c4063540)
2020-09-20T16:29:20.386+0000 7f55cffff700 10 client.4606 check_caps on 0x200000007d5.head(faked_ino=0 ref=6 ll_ref=3 cap_refs={4=0,1024=0,4096=0,8192=0} open={2=0,3=0} mode=100700 size=104857600/213909504 nlink=1 btime=0.000000 mtime=2020-09-20T16:18:26.669617+0000 ctime=2020-09-20T16:18:26.669617+0000 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[0x200000007d5 ts 0/0 objects 25 dirty_or_tx 0] parents=0x10000000731.head["datafile116"] 0x7f55c4063540) wanted - used Fc issued pAsLsXsFscr revoking Fxwb flags=0
2020-09-20T16:29:20.386+0000 7f55cffff700 10 client.4606  cap mds.0 issued pAsLsXsFscr implemented pAsLsXsFsxcrwb revoking Fxwb
2020-09-20T16:29:20.386+0000 7f55cffff700 10 client.4606 completed revocation of Fxwb

It took almost 165 seconds (16:29:20.386 - 16:26:35.056 ≈ 165.33 seconds).

In this case, since writing the data out to the OSD took too long, the OSDs may have been overloaded.

Actions #4

Updated by Xiubo Li over 3 years ago

While flushing the 0x200000007d5 inode, the client was also flushing many other inodes to the same osd.6 at the same time:

...
2020-09-20T16:27:45.084+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17565 ==== osd_op_reply(40514 3000000024f.00000003 [write 217088~4096] v26'10430 uv10430 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.099+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17566 ==== osd_op_reply(40516 3000000020c.00000003 [write 2367488~4096] v26'10432 uv10432 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.137+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17567 ==== osd_op_reply(30880 200000007d5.00000005 [write 2215936~4096] v26'10486 uv10486 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.153+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17568 ==== osd_op_reply(42340 30000000228.00000016 [write 995328~4096,write 4153344~4096] v26'11014 uv11014 ondisk = 0) v8 ==== 206+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.192+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17569 ==== osd_op_reply(42351 30000000203.00000006 [write 1839104~4096] v26'11016 uv11016 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.270+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17570 ==== osd_op_reply(36547 30000000208.0000000d [write 2760704~4096] v26'10391 uv10391 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.320+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17571 ==== osd_op_reply(40531 30000000200.00000004 [write 73728~4096] v26'10434 uv10434 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.356+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17572 ==== osd_op_reply(30881 30000000258.00000012 [write 2768896~4096] v26'10488 uv10488 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.356+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17573 ==== osd_op_reply(42360 30000000222.0000000c [write 1658880~4096] v26'11018 uv11018 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
2020-09-20T16:27:45.412+0000 7f55e1046700  1 -- 172.21.15.110:0/2458736152 <== osd.6 v2:172.21.15.120:6816/34366 17574 ==== osd_op_reply(36553 30000000208.00000016 [write 2854912~4096] v26'10393 uv10393 ondisk = 0) v8 ==== 164+0+0 (crc 0 0 0) 0x7f55d0025db0 con 0x7f55ac005420
...                                                                              

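(In the roughly 330 ms window shown, osd_op_reply messages for nine distinct inodes arrive from osd.6 alone.)
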
Flushing too many inodes to the same osd.6 at the same time could have overloaded it.

Actions #5

Updated by Xiubo Li over 3 years ago

@Patrick,

Maybe the MDS shouldn't report the WRN to the monitor when revoking the "Fwbl" caps? Since the client may need to flush the dirty buffer to disk, that can take longer than 60 seconds.

Actions #6

Updated by Patrick Donnelly over 3 years ago

Xiubo Li wrote:

@Patrick,

Maybe the MDS shouldn't report the WRN to the monitor when revoking the "Fwbl" caps? Since the client may need to flush the dirty buffer to disk, that can take longer than 60 seconds.

Okay, so the issue is that one of the OSDs was slow. That's not unusual for these tests. Is there a timeout config we can bump to say 300 seconds so this warning goes away? I do not want to ignorelist the warning.

Actions #7

Updated by Xiubo Li over 3 years ago

Patrick Donnelly wrote:

Xiubo Li wrote:

@Patrick,

Maybe the MDS shouldn't report the WRN to the monitor when revoking the "Fwbl" caps? Since the client may need to flush the dirty buffer to disk, that can take longer than 60 seconds.

Okay, so the issue is that one of the OSDs was slow. That's not unusual for these tests. Is there a timeout config we can bump to say 300 seconds so this warning goes away? I do not want to ignorelist the warning.

This depends on the `session_timeout` value, whose default is 60 seconds:

// exponential backoff of warning intervals
if (age > mds->mdsmap->get_session_timeout() * (1 << cap->get_num_revoke_warnings())) {
  cap->inc_num_revoke_warnings();
  CachedStackStringStream css;
  *css << "client." << cap->get_client() << " isn't responding to mclientcaps(revoke), ino "
       << cap->get_inode()->ino() << " pending " << ccap_string(cap->pending())
       << " issued " << ccap_string(cap->issued()) << ", sent " << age << " seconds ago";
  mds->clog->warn() << css->strv();
  dout(20) << __func__ << " " << css->strv() << dendl;
} else {
  dout(20) << __func__ << " silencing log message (backoff) for " << "client." << cap->get_client() << "." << cap->get_inode()->ino() << dendl;
}
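
With the default session_timeout of 60 seconds, the first warning is logged once the revoke has been outstanding for just over 60 seconds, the next after 120, then 240, and so on. A minimal standalone sketch of that schedule (only the timeout value and the doubling condition are taken from the snippet above; the rest is illustrative):

#include <cstdio>

int main() {
    // Mirrors the condition above: a warning is logged once
    // age > session_timeout * 2^num_revoke_warnings.
    const double session_timeout = 60.0;  // default, see `ceph fs get` output below
    int num_revoke_warnings = 0;
    for (int i = 1; i <= 4; ++i) {
        double threshold = session_timeout * (1 << num_revoke_warnings);
        std::printf("warning %d fires once age > %.0f seconds\n", i, threshold);
        ++num_revoke_warnings;  // cap->inc_num_revoke_warnings() in the real code
    }
    return 0;
}

This prints thresholds of 60, 120, 240, and 480 seconds, so the ~165-second flush above would be expected to produce two warnings (at ~60s and ~120s) before completing.
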
# ./bin/ceph fs get a 
Filesystem 'a' (1)
fs_name a
epoch   125
flags   12
created 2020-09-28T17:02:27.386794+0800
modified        2020-09-30T09:17:14.772135+0800
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
required_client_features        {}
last_failure    0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=4307}
failed
damaged
stopped
data_pools      [3]
metadata_pool   2
inline_data     disabled
balancer
standby_count_wanted    1
[mds.c{0:4307} state up:active seq 3190 addr [v2:10.72.37.222:6830/2699242870,v1:10.72.37.222:6831/2699242870]]

Currently, AFAIK we can change this only via the `fs set` command:

# ceph fs set a session_timeout 300
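
The new value can then be confirmed with the same `fs get` command shown above, e.g.:

# ceph fs get a | grep session_timeout
session_timeout 300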

300 seconds should be fine; there were other similar issues before that showed the same log message, and they all took less than 200 seconds.

Actions #8

Updated by Patrick Donnelly over 3 years ago

Sounds good. Please write up a PR for this, Xiubo.

Actions #9

Updated by Xiubo Li over 3 years ago

Patrick Donnelly wrote:

Sounds good. Please write up a PR for this, Xiubo.

Sure, will do.

Actions #10

Updated by Patrick Donnelly over 3 years ago

  • Category set to Testing
  • Labels (FS) qa, qa-failure added
Actions #11

Updated by Xiubo Li over 3 years ago

  • Pull request ID set to 37629
Actions #12

Updated by Xiubo Li over 3 years ago

  • Status changed from In Progress to Fix Under Review
Actions #13

Updated by Patrick Donnelly over 3 years ago

/ceph/teuthology-archive/yuriw-2020-10-07_19:13:19-multimds-wip-yuri5-testing-2020-10-07-1021-octopus-distro-basic-smithi/5504780/teuthology.log

Actions #14

Updated by Patrick Donnelly over 3 years ago

  • Backport set to octopus,nautilus

/ceph/teuthology-archive/yuriw-2020-10-07_19:13:19-multimds-wip-yuri5-testing-2020-10-07-1021-octopus-distro-basic-smithi/5504780/teuthology.log

Actions #15

Updated by Patrick Donnelly over 3 years ago

  • Status changed from Fix Under Review to Pending Backport
Actions #16

Updated by Nathan Cutler over 3 years ago

  • Copied to Backport #47990: nautilus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago" added
Actions #17

Updated by Nathan Cutler over 3 years ago

  • Copied to Backport #47991: octopus: qa: "client.4606 isn't responding to mclientcaps(revoke), ino 0x200000007d5 pending pAsLsXsFscr issued pAsLsXsFsxcrwb, sent 60.889494 seconds ago" added
Actions #18

Updated by Nathan Cutler over 3 years ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
