Bug #54360 (open): Dead job at "Finished running handlers" in rados/cephadm/osds/.../rm-zap-wait

Added by Laura Flores about 2 years ago. Updated almost 2 years ago.

Status: New
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Source:
Tags:
Backport: pacific
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

/a/yuriw-2022-02-17_22:49:55-rados-wip-yuri3-testing-2022-02-17-1256-distro-default-smithi/6691873

2022-02-18T07:37:28.157 DEBUG:teuthology.run_tasks:Unwinding manager cephadm
2022-02-18T07:37:28.169 INFO:tasks.cephadm:Teardown begin
2022-02-18T07:37:28.170 DEBUG:teuthology.orchestra.run.smithi116:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2022-02-18T07:37:28.201 DEBUG:teuthology.orchestra.run.smithi165:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring
2022-02-18T07:37:28.239 INFO:tasks.cephadm:Cleaning up testdir ceph.* files...
2022-02-18T07:37:28.239 DEBUG:teuthology.orchestra.run.smithi116:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2022-02-18T07:37:28.259 DEBUG:teuthology.orchestra.run.smithi165:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub
2022-02-18T07:37:28.298 INFO:tasks.cephadm:Stopping all daemons...
2022-02-18T07:37:28.298 INFO:tasks.cephadm.mon.smithi116:Stopping mon.smithi116...
2022-02-18T07:37:28.299 DEBUG:teuthology.orchestra.run.smithi116:> sudo systemctl stop ceph-cbcec422-908c-11ec-8c35-001a4aab830c@mon.smithi116
2022-02-18T07:37:28.495 INFO:journalctl@ceph.mon.smithi116.smithi116.stdout:Feb 18 07:37:28 smithi116 systemd[1]: Stopping Ceph mon.smithi116 for cbcec422-908c-11ec-8c35-001a4aab830c...
2022-02-18T07:37:28.496 INFO:journalctl@ceph.mon.smithi116.smithi116.stdout:Feb 18 07:37:28 smithi116 bash[80219]: Error: no container with name or ID "ceph-cbcec422-908c-11ec-8c35-001a4aab830c-mon.smithi116" found: no such container
2022-02-18T07:37:28.764 INFO:journalctl@ceph.mon.smithi116.smithi116.stdout:Feb 18 07:37:28 smithi116 conmon[29900]: 2022-02-18T07:37:28.493+0000 7f94fff61700 -1 received  signal: Terminated from /dev/init -- /usr/bin/ceph-mon -n mon.smithi116 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
2022-02-18T07:37:28.765 INFO:journalctl@ceph.mon.smithi116.smithi116.stdout:Feb 18 07:37:28 smithi116 conmon[29900]: 2022-02-18T07:37:28.493+0000 7f94fff61700 -1 mon.smithi116@0(leader) e2 *** Got Signal Terminated ***
2022-02-18T07:37:28.967 DEBUG:teuthology.orchestra.run.smithi116:> sudo pkill -f 'journalctl -f -n 0 -u ceph-cbcec422-908c-11ec-8c35-001a4aab830c@mon.smithi116.service'
2022-02-18T07:37:29.017 DEBUG:teuthology.orchestra.run:got remote process result: None
2022-02-18T07:37:29.017 INFO:tasks.cephadm.mon.smithi116:Stopped mon.smithi116
2022-02-18T07:37:29.018 INFO:tasks.cephadm.mon.smithi165:Stopping mon.smithi165...
2022-02-18T07:37:29.018 DEBUG:teuthology.orchestra.run.smithi165:> sudo systemctl stop ceph-cbcec422-908c-11ec-8c35-001a4aab830c@mon.smithi165
2022-02-18T07:37:29.375 INFO:journalctl@ceph.mon.smithi165.smithi165.stdout:Feb 18 07:37:29 smithi165 systemd[1]: Stopping Ceph mon.smithi165 for cbcec422-908c-11ec-8c35-001a4aab830c...
2022-02-18T07:37:29.375 INFO:journalctl@ceph.mon.smithi165.smithi165.stdout:Feb 18 07:37:29 smithi165 bash[53897]: Error: no container with name or ID "ceph-cbcec422-908c-11ec-8c35-001a4aab830c-mon.smithi165" found: no such container
2022-02-18T07:37:29.375 INFO:journalctl@ceph.mon.smithi165.smithi165.stdout:Feb 18 07:37:29 smithi165 conmon[35970]: 2022-02-18T07:37:29.195+0000 7f16f0e56700 -1 received  signal: Terminated from /dev/init -- /usr/bin/ceph-mon -n mon.smithi165 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
2022-02-18T07:37:29.376 INFO:journalctl@ceph.mon.smithi165.smithi165.stdout:Feb 18 07:37:29 smithi165 conmon[35970]: 2022-02-18T07:37:29.195+0000 7f16f0e56700 -1 mon.smithi165@1(peon) e2 *** Got Signal Terminated ***
2022-02-18T07:37:29.712 INFO:journalctl@ceph.mon.smithi165.smithi165.stdout:Feb 18 07:37:29 smithi165 bash[53936]: ceph-cbcec422-908c-11ec-8c35-001a4aab830c-mon-smithi165
2022-02-18T07:37:29.723 DEBUG:teuthology.orchestra.run.smithi165:> sudo pkill -f 'journalctl -f -n 0 -u ceph-cbcec422-908c-11ec-8c35-001a4aab830c@mon.smithi165.service'
2022-02-18T07:37:29.768 DEBUG:teuthology.orchestra.run:got remote process result: None
2022-02-18T07:37:29.769 INFO:tasks.cephadm.mon.smithi165:Stopped mon.smithi165
2022-02-18T07:37:29.769 DEBUG:teuthology.orchestra.run.smithi116:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid cbcec422-908c-11ec-8c35-001a4aab830c --force --keep-logs
2022-02-18T13:53:38.738 DEBUG:teuthology.exit:Got signal 15; running 1 handler...
2022-02-18T13:53:38.760 DEBUG:teuthology.task.console_log:Killing console logger for smithi116
2022-02-18T13:53:38.761 DEBUG:teuthology.task.console_log:Killing console logger for smithi165
2022-02-18T13:53:38.761 DEBUG:teuthology.exit:Finished running handlers
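The teardown reaches the `cephadm rm-cluster` step at 07:37 and then nothing happens until signal 15 arrives at 13:53, roughly six hours later. One defensive pattern for this kind of hang (a sketch only, not teuthology's actual API; `run_with_timeout` is a hypothetical helper) is to bound long teardown commands with an explicit timeout, so a wedged command fails fast instead of stalling the whole job:

```python
import subprocess
import sys

def run_with_timeout(cmd, timeout=900):
    """Run a teardown command, raising if it exceeds `timeout` seconds.

    Hypothetical guard around steps like `cephadm rm-cluster`: a
    subprocess.TimeoutExpired here surfaces the hang immediately
    (and kills the child) instead of leaving the job stuck until an
    external supervisor sends SIGTERM hours later.
    """
    return subprocess.run(cmd, timeout=timeout)

if __name__ == "__main__":
    # Demonstrate the guard with a stand-in command that hangs.
    try:
        run_with_timeout(
            [sys.executable, "-c", "import time; time.sleep(60)"],
            timeout=0.5)
        print("completed")
    except subprocess.TimeoutExpired:
        print("timed out")
```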
#1 Updated by Laura Flores about 2 years ago

/a/yuriw-2022-02-21_15:48:20-rados-wip-yuri7-testing-2022-02-17-0852-pacific-distro-default-smithi/6698643
Description: rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag}

#2 Updated by Laura Flores about 2 years ago

/a/yuriw-2022-02-24_22:04:22-rados-wip-yuri7-testing-2022-02-17-0852-pacific-distro-default-smithi/6704774
Description: rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag}

2022-02-25T05:12:53.617 INFO:teuthology.orchestra.run.smithi107.stderr:+ ceph osd dump
2022-02-25T05:12:53.617 INFO:teuthology.orchestra.run.smithi107.stderr:+ grep osd.1
2022-02-25T05:12:53.617 INFO:teuthology.orchestra.run.smithi107.stderr:+ grep 'up\s*in'
2022-02-25T05:12:53.919 INFO:journalctl@ceph.mon.smithi162.smithi162.stdout:Feb 25 05:12:53 smithi162 bash[14405]: cluster 2022-02-25T05:12:53.444898+0000 mgr.smithi107.fvoaft (mgr.14186) 12878 : cluster [DBG] pgmap v12101: 1 pgs: 1 active+clean; 0 B data, 36 MiB used, 626 GiB / 626 GiB avail
2022-02-25T05:12:53.930 INFO:journalctl@ceph.mon.smithi107.smithi107.stdout:Feb 25 05:12:53 smithi107 bash[12979]: cluster 2022-02-25T05:12:53.444898+0000 mgr.smithi107.fvoaft (mgr.14186) 12878 : cluster [DBG] pgmap v12101: 1 pgs: 1 active+clean; 0 B data, 36 MiB used, 626 GiB / 626 GiB avail
2022-02-25T05:12:53.946 INFO:teuthology.orchestra.run.smithi107.stderr:+ sleep 5
2022-02-25T05:12:54.919 INFO:journalctl@ceph.mon.smithi162.smithi162.stdout:Feb 25 05:12:54 smithi162 bash[14405]: audit 2022-02-25T05:12:53.926805+0000 mon.smithi107 (mon.0) 7692 : audit [DBG] from='client.? 172.21.15.107:0/3415946442' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2022-02-25T05:12:54.930 INFO:journalctl@ceph.mon.smithi107.smithi107.stdout:Feb 25 05:12:54 smithi107 bash[12979]: audit 2022-02-25T05:12:53.926805+0000 mon.smithi107 (mon.0) 7692 : audit [DBG] from='client.? 172.21.15.107:0/3415946442' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
2022-02-25T05:12:55.919 INFO:journalctl@ceph.mon.smithi162.smithi162.stdout:Feb 25 05:12:55 smithi162 bash[14405]: cluster 2022-02-25T05:12:55.445539+0000 mgr.smithi107.fvoaft (mgr.14186) 12879 : cluster [DBG] pgmap v12102: 1 pgs: 1 active+clean; 0 B data, 36 MiB used, 626 GiB / 626 GiB avail
2022-02-25T05:12:55.930 INFO:journalctl@ceph.mon.smithi107.smithi107.stdout:Feb 25 05:12:55 smithi107 bash[12979]: cluster 2022-02-25T05:12:55.445539+0000 mgr.smithi107.fvoaft (mgr.14186) 12879 : cluster [DBG] pgmap v12102: 1 pgs: 1 active+clean; 0 B data, 36 MiB used, 626 GiB / 626 GiB avail
2022-02-25T05:12:57.810 INFO:journalctl@ceph.mon.smithi162.smithi162.stdout:Feb 25 05:12:57 smithi162 bash[14405]: cluster 2022-02-25T05:12:57.446227+0000 mgr.smithi107.fvoaft (mgr.14186) 12880 : cluster [DBG] pgmap v12103: 1 pgs: 1 active+clean; 0 B data, 36 MiB used, 626 GiB / 626 GiB avail
2022-02-25T05:12:57.930 INFO:journalctl@ceph.mon.smithi107.smithi107.stdout:Feb 25 05:12:57 smithi107 bash[12979]: cluster 2022-02-25T05:12:57.446227+0000 mgr.smithi107.fvoaft (mgr.14186) 12880 : cluster [DBG] pgmap v12103: 1 pgs: 1 active+clean; 0 B data, 36 MiB used, 626 GiB / 626 GiB avail
2022-02-25T05:12:58.698 DEBUG:teuthology.exit:Got signal 15; running 1 handler...
2022-02-25T05:12:58.699 DEBUG:teuthology.task.console_log:Killing console logger for smithi107
2022-02-25T05:12:58.700 DEBUG:teuthology.task.console_log:Killing console logger for smithi162
2022-02-25T05:12:58.700 DEBUG:teuthology.exit:Finished running handlers
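The `+ ceph osd dump`, `+ grep osd.1`, `+ grep 'up\s*in'`, `+ sleep 5` trace lines above are the test's wait loop, polling for osd.1 to be reported up and in; the job was killed by signal 15 while this loop was still spinning. A minimal Python sketch of the same polling pattern (the `get_dump` hook is hypothetical, injected here so the loop can be exercised without a live cluster):

```python
import re
import time

def wait_for_up_in(osd, get_dump, interval=5, timeout=300):
    """Poll `ceph osd dump` output (via get_dump) until `osd` is up/in.

    Mirrors the shell loop in the log: grep for the OSD name, then for
    'up' followed by 'in', sleeping `interval` seconds between checks.
    Returns True when found, False once `timeout` seconds elapse.
    """
    pattern = re.compile(r"up\s*in")  # same check as grep 'up\s*in'
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for line in get_dump().splitlines():
            if osd in line and pattern.search(line):
                return True
        time.sleep(interval)
    return False
```

With no upper bound beyond the suite-level watchdog, a loop like this spins until the supervisor kills the job, which matches the abrupt "Got signal 15" ending seen here.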

#3 Updated by Laura Flores about 2 years ago

  • Backport set to pacific
#4 Updated by Laura Flores about 2 years ago

/a/yuriw-2022-03-01_17:45:51-rados-wip-yuri3-testing-2022-02-28-0757-pacific-distro-default-smithi/6715036

#5 Updated by Laura Flores about 2 years ago

/a/yuriw-2022-03-25_18:42:52-rados-wip-yuri7-testing-2022-03-24-1341-pacific-distro-default-smithi/6761081

#6 Updated by Laura Flores about 2 years ago

/a/yuriw-2022-04-01_01:23:52-rados-wip-yuri2-testing-2022-03-31-1523-pacific-distro-default-smithi/6771196

#7 Updated by Laura Flores about 2 years ago

  • Project changed from Ceph to Orchestrator
#8 Updated by Laura Flores almost 2 years ago

/a/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801437

#9 Updated by Laura Flores almost 2 years ago

Laura Flores wrote:

/a/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801437

The problem actually shows up in the supervisor log:
/a/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801437/supervisor.6801437.log

2022-04-23T17:58:06.315 INFO:teuthology.misc:Compressing logs...
2022-04-23T17:58:06.315 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@smithi032.front.sepia.ceph.com'
2022-04-23T17:58:06.561 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@smithi167.front.sepia.ceph.com'
2022-04-23T17:58:06.574 INFO:teuthology.orchestra.run.smithi032.stderr:find: '/var/lib/ceph/crash': No such file or directory
2022-04-23T17:58:06.856 INFO:teuthology.orchestra.run.smithi167.stderr:find: '/var/lib/ceph/crash': No such file or directory
2022-04-23T17:58:06.876 INFO:teuthology.orchestra.run.smithi032.stderr:tar: /var/lib/ceph/crash: Cannot open: No such file or directory
2022-04-23T17:58:06.876 INFO:teuthology.orchestra.run.smithi032.stderr:tar: Error is not recoverable: exiting now
2022-04-23T17:58:06.915 INFO:teuthology.orchestra.run.smithi167.stderr:tar: /var/lib/ceph/crash: Cannot open: No such file or directory
2022-04-23T17:58:06.915 INFO:teuthology.orchestra.run.smithi167.stderr:tar: Error is not recoverable: exiting now
2022-04-23T17:58:06.917 INFO:teuthology.misc:Compressing logs...
2022-04-23T17:58:07.150 INFO:teuthology.misc:Compressing logs...
2022-04-23T17:58:08.969 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.4.log: file size changed while zipping
2022-04-23T17:58:09.251 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.3.log: file size changed while zipping
2022-04-23T17:58:10.905 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.0.log: file size changed while zipping
2022-04-23T17:58:12.637 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.6.log: file size changed while zipping
2022-04-23T17:58:16.098 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-mon.smithi167.log: file size changed while zipping
2022-04-23T17:58:17.857 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.2.log: file size changed while zipping
2022-04-23T17:58:18.870 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-mon.smithi032.log: file size changed while zipping
2022-04-23T17:58:20.707 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.5.log: file size changed while zipping
2022-04-23T17:58:22.941 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.7.log: file size changed while zipping
2022-04-23T17:58:25.150 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-mgr.smithi032.gjzyzx.log: file size changed while zipping
2022-04-23T17:58:31.290 INFO:teuthology.kill:No teuthology processes running
2022-04-23T17:58:32.714 INFO:teuthology.kill:Nuking machines: ['smithi032', 'smithi167']

#10 Updated by David Galloway almost 2 years ago

Laura Flores wrote:

/a/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801437

Supervisor log looks normal to me.

2022-04-23T11:27:37.308 INFO:teuthology.dispatcher.supervisor:Running job 6801437
2022-04-23T11:27:37.308 DEBUG:teuthology.dispatcher.supervisor:Running: /home/teuthworker/src/git.ceph.com_git_teuthology_1b11219b9af33884aae5eb66700541df4b9620ed/virtualenv/bin/teuthology -v --owner scheduled_lflores@teuthology --archive /home/teuthworker/archive/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801437 --name lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi --description rados/cephadm/osds/{0-distro/ubuntu_18.04 0-nvme-loop 1-start 2-ops/rm-zap-flag} -- /home/teuthworker/archive/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801437/orig.config.yaml
2022-04-23T11:27:37.312 INFO:teuthology.dispatcher.supervisor:Job archive: /home/teuthworker/archive/lflores-2022-04-22_20:48:19-rados-wip-55324-pacific-backport-distro-default-smithi/6801437
2022-04-23T11:27:37.312 INFO:teuthology.dispatcher.supervisor:Job PID: 25468
2022-04-23T11:27:37.312 INFO:teuthology.dispatcher.supervisor:Running with watchdog
2022-04-23T17:58:04.735 WARNING:teuthology.dispatcher.supervisor:Job ran longer than 23400s. Killing...
2022-04-23T17:58:04.892 INFO:teuthology.kill:Killing Pids: {25468}
2022-04-23T17:58:06.314 INFO:teuthology.task.internal:roles: ubuntu@smithi032.front.sepia.ceph.com - ['host.a', 'client.0']
2022-04-23T17:58:06.315 INFO:teuthology.task.internal:roles: ubuntu@smithi167.front.sepia.ceph.com - ['host.b', 'client.1']
2022-04-23T17:58:06.315 INFO:teuthology.misc:Compressing logs...
2022-04-23T17:58:06.315 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@smithi032.front.sepia.ceph.com'
2022-04-23T17:58:06.561 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@smithi167.front.sepia.ceph.com'
2022-04-23T17:58:06.574 INFO:teuthology.orchestra.run.smithi032.stderr:find: '/var/lib/ceph/crash': No such file or directory
2022-04-23T17:58:06.856 INFO:teuthology.orchestra.run.smithi167.stderr:find: '/var/lib/ceph/crash': No such file or directory
2022-04-23T17:58:06.876 INFO:teuthology.orchestra.run.smithi032.stderr:tar: /var/lib/ceph/crash: Cannot open: No such file or directory
2022-04-23T17:58:06.876 INFO:teuthology.orchestra.run.smithi032.stderr:tar: Error is not recoverable: exiting now
2022-04-23T17:58:06.915 INFO:teuthology.orchestra.run.smithi167.stderr:tar: /var/lib/ceph/crash: Cannot open: No such file or directory
2022-04-23T17:58:06.915 INFO:teuthology.orchestra.run.smithi167.stderr:tar: Error is not recoverable: exiting now
2022-04-23T17:58:06.917 INFO:teuthology.misc:Compressing logs...
2022-04-23T17:58:07.150 INFO:teuthology.misc:Compressing logs...
2022-04-23T17:58:08.969 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.4.log: file size changed while zipping
2022-04-23T17:58:09.251 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.3.log: file size changed while zipping
2022-04-23T17:58:10.905 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.0.log: file size changed while zipping
2022-04-23T17:58:12.637 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.6.log: file size changed while zipping
2022-04-23T17:58:16.098 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-mon.smithi167.log: file size changed while zipping
2022-04-23T17:58:17.857 INFO:teuthology.orchestra.run.smithi167.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.2.log: file size changed while zipping
2022-04-23T17:58:18.870 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-mon.smithi032.log: file size changed while zipping
2022-04-23T17:58:20.707 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.5.log: file size changed while zipping
2022-04-23T17:58:22.941 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-osd.7.log: file size changed while zipping
2022-04-23T17:58:25.150 INFO:teuthology.orchestra.run.smithi032.stderr:gzip: /var/log/ceph/508d1ff8-c2f9-11ec-8c39-001a4aab830c/ceph-mgr.smithi032.gjzyzx.log: file size changed while zipping
2022-04-23T17:58:31.290 INFO:teuthology.kill:No teuthology processes running
2022-04-23T17:58:32.714 INFO:teuthology.kill:Nuking machines: ['smithi032', 'smithi167']

Might be worth running this job interactively, or running it and then SSHing to the testnode while cephadm is running.
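The key lines in the supervisor log are "Running with watchdog" and "Job ran longer than 23400s. Killing...": the dispatcher's watchdog, not anything inside the test, delivered the signal 15 that produced the "Finished running handlers" ending. That behaviour can be sketched as follows (a simplified illustration, not teuthology's actual implementation; `MAX_RUNTIME` stands in for the 23400 s limit reported in the log):

```python
import signal
import subprocess
import sys

MAX_RUNTIME = 23400  # seconds; the limit reported in the supervisor log

def supervise(cmd, max_runtime=MAX_RUNTIME):
    """Run a job, sending SIGTERM (signal 15) if it outlives max_runtime.

    Sketch of the watchdog behaviour: the job sees the same
    'Got signal 15; running 1 handler...' shutdown as in the logs above,
    regardless of where in the test it is currently stuck.
    """
    proc = subprocess.Popen(cmd)
    try:
        return proc.wait(timeout=max_runtime)
    except subprocess.TimeoutExpired:
        proc.terminate()  # SIGTERM, i.e. signal 15
        return proc.wait()
```

Under this model the "dead job" symptom is a consequence, not a cause: whatever hangs (the `rm-cluster` teardown in the description, the `up\s*in` wait loop in #2) eventually trips the watchdog, so the real bug is the hang itself.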

#11 Updated by Laura Flores almost 2 years ago

/a/yuriw-2022-06-03_20:44:47-rados-wip-yuri5-testing-2022-06-02-0825-quincy-distro-default-smithi/6863163

Description: rados/cephadm/osds/{0-distro/ubuntu_20.04 0-nvme-loop 1-start 2-ops/rm-zap-wait}
