Bug #20371

mgr: occasionally fails to send beacons (monc reconnect backoff too aggressive?)

Added by Sage Weil 2 months ago. Updated 25 days ago.

Status: Resolved
Priority: Urgent
Assignee:
Category: -
Target version: -
Start date: 06/21/2017
Due date:
% Done: 0%
Source:
Tags:
Backport:
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Release:
Needs Doc: No
Component(RADOS):
Description

For a while, the mgr sends beacons normally:

2017-06-21 05:14:24.083186 7f21f4323700  1 mgr send_beacon active
2017-06-21 05:14:24.083189 7f21f4323700 10 mgr send_beacon sending beacon as gid 4098
2017-06-21 05:14:24.083203 7f21f4323700  1 -- 172.21.15.80:0/2657294848 --> 172.21.15.80:6789/0 -- mgrbeacon mgr.x(21c823a4-dba0-4be6-940a-e295fbe30f86,4098, 172.21.15.80:6800/237445, 1) v2 -- ?+0 0x55d9d0440a00 con 0x55d9cfbf4640
2017-06-21 05:14:24.083221 7f21f4323700  0 mgr tick 
2017-06-21 05:14:24.083223 7f21f4323700 10 mgr update_delta_stats  v16
2017-06-21 05:14:24.083305 7f21f4323700 10 mgr.server operator() 12 pgs: 12 active+clean; 0 bytes data, 3183 MB used, 266 GB / 270 GB avail
2017-06-21 05:14:24.083331 7f21f4323700  1 -- 172.21.15.80:0/2657294848 --> 172.21.15.80:6789/0 -- monmgrreport v1 -- ?+1544 0x55d9cfc0bd40 con 0x55d9cfbf4640
2017-06-21 05:14:26.083459 7f21f4323700  0 mgr tick tick
2017-06-21 05:14:26.083470 7f21f4323700  1 mgr send_beacon active
2017-06-21 05:14:26.083472 7f21f4323700 10 mgr send_beacon sending beacon as gid 4098
2017-06-21 05:14:26.083486 7f21f4323700  1 -- 172.21.15.80:0/2657294848 --> 172.21.15.80:6789/0 -- mgrbeacon mgr.x(21c823a4-dba0-4be6-940a-e295fbe30f86,4098, 172.21.15.80:6800/237445, 1) v2 -- ?+0 0x55d9d0440c80 con 0x55d9cfbf4640

but then the beacons stop getting through:
2017-06-21 05:14:28.083689 7f21f4323700  1 mgr send_beacon active
2017-06-21 05:14:28.083692 7f21f4323700 10 mgr send_beacon sending beacon as gid 4098
2017-06-21 05:14:28.083697 7f21f4323700  0 mgr tick 
2017-06-21 05:14:28.083699 7f21f4323700 10 mgr update_delta_stats  v18
2017-06-21 05:14:28.083799 7f21f4323700 10 mgr.server operator() 12 pgs: 12 active+clean; 0 bytes data, 3183 MB used, 266 GB / 270 GB avail
2017-06-21 05:14:28.110377 7f21f231f700 10 _calc_signature seq 6 front_crc_ = 145679454 middle_crc = 0 data_crc = 0 sig = 14427909843967850292
2017-06-21 05:14:28.110426 7f21f131d700  1 -- 172.21.15.80:6800/237445 <== mon.0 172.21.15.80:0/237444 6 ==== mgrreport(+0-0 packed 1734) v2 ==== 1751+0+0 (145679454 0 0) 0x55d9d064a2c0 con 0x55d9d0492800
2017-06-21 05:14:28.110456 7f21f131d700  4 mgr.server handle_report from 0x55d9d0492800 name mon.a
2017-06-21 05:14:28.110460 7f21f131d700 20 mgr.server handle_report updating existing DaemonState for a
2017-06-21 05:14:28.110462 7f21f131d700 20 mgr update loading 0 new types, 0 old types, had 590 types, got 1734 bytes of data
2017-06-21 05:14:30.083915 7f21f4323700  0 mgr tick tick
2017-06-21 05:14:30.083927 7f21f4323700  1 mgr send_beacon active
2017-06-21 05:14:30.083929 7f21f4323700 10 mgr send_beacon sending beacon as gid 4098
2017-06-21 05:14:30.083935 7f21f4323700  0 mgr tick 
2017-06-21 05:14:30.083937 7f21f4323700 10 mgr update_delta_stats  v19
2017-06-21 05:14:30.083969 7f21f4323700 10 mgr.server operator() 12 pgs: 12 active+clean; 0 bytes data, 3183 MB used, 266 GB / 270 GB avail
2017-06-21 05:14:31.876473 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> - conn(0x55d9d0477000 :6800 s=STATE_ACCEPTING pgs=0 cs=0 l=0)._process_connection sd=8 -
2017-06-21 05:14:31.876683 7f21f2b20700 10 cephx: verify_authorizer decrypted service mgr secret_id=2
2017-06-21 05:14:31.876769 7f21f2b20700 10 cephx: verify_authorizer global_id=4111
2017-06-21 05:14:31.876799 7f21f2b20700 10 cephx: verify_authorizer ok nonce 4b588f54542289ec reply_bl.length()=36
2017-06-21 05:14:31.876845 7f21f2b20700 10 mgr.server ms_verify_authorizer  session 0x55d9cfbfc060 client.admin has caps allow * 'allow *'
2017-06-21 05:14:31.876857 7f21f2b20700 10 In get_auth_session_handler for protocol 2
2017-06-21 05:14:31.894450 7f21f1b1e700 10 _calc_signature seq 12 front_crc_ = 4255501585 middle_crc = 0 data_crc = 0 sig = 14575324513775164604
2017-06-21 05:14:31.894490 7f21f131d700  1 -- 172.21.15.80:6800/237445 <== osd.1 172.21.15.80:6809/237544 12 ==== pg_stats(3 pgs tid 0 v 0) v1 ==== 1862+0+0 (4255501585 0 0) 0x55d9d04e6300 con 0x55d9d0491000
2017-06-21 05:14:31.896013 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2685737263 conn(0x55d9d0477000 :6800 s=STATE_OPEN pgs=3 cs=1 l=1).read_bulk peer close file descriptor 8
2017-06-21 05:14:31.896037 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2685737263 conn(0x55d9d0477000 :6800 s=STATE_OPEN pgs=3 cs=1 l=1).read_until read failed
2017-06-21 05:14:31.896043 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2685737263 conn(0x55d9d0477000 :6800 s=STATE_OPEN pgs=3 cs=1 l=1).process read tag failed
2017-06-21 05:14:31.896048 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2685737263 conn(0x55d9d0477000 :6800 s=STATE_OPEN pgs=3 cs=1 l=1).fault on lossy channel, failing
2017-06-21 05:14:32.018762 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> - conn(0x55d9d0644000 :6800 s=STATE_ACCEPTING pgs=0 cs=0 l=0)._process_connection sd=8 -
2017-06-21 05:14:32.018912 7f21f2b20700 10 cephx: verify_authorizer decrypted service mgr secret_id=2
2017-06-21 05:14:32.018976 7f21f2b20700 10 cephx: verify_authorizer global_id=4112
2017-06-21 05:14:32.018997 7f21f2b20700 10 cephx: verify_authorizer ok nonce 436c6125628c895d reply_bl.length()=36
2017-06-21 05:14:32.019043 7f21f2b20700 10 mgr.server ms_verify_authorizer  session 0x55d9cfbfc220 client.admin has caps allow * 'allow *'
2017-06-21 05:14:32.019058 7f21f2b20700 10 In get_auth_session_handler for protocol 2
2017-06-21 05:14:32.030584 7f21f1b1e700 10 _calc_signature seq 12 front_crc_ = 1980360948 middle_crc = 0 data_crc = 0 sig = 12748525025159258443
2017-06-21 05:14:32.030633 7f21f131d700  1 -- 172.21.15.80:6800/237445 <== osd.2 172.21.15.80:6805/237543 12 ==== pg_stats(3 pgs tid 0 v 0) v1 ==== 1862+0+0 (1980360948 0 0) 0x55d9d04e6900 con 0x55d9d0475800
2017-06-21 05:14:32.031801 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2058161578 conn(0x55d9d0644000 :6800 s=STATE_OPEN pgs=2 cs=1 l=1).read_bulk peer close file descriptor 8
2017-06-21 05:14:32.031827 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2058161578 conn(0x55d9d0644000 :6800 s=STATE_OPEN pgs=2 cs=1 l=1).read_until read failed
2017-06-21 05:14:32.031836 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2058161578 conn(0x55d9d0644000 :6800 s=STATE_OPEN pgs=2 cs=1 l=1).process read tag failed
2017-06-21 05:14:32.031845 7f21f2b20700  1 -- 172.21.15.80:6800/237445 >> 172.21.15.80:0/2058161578 conn(0x55d9d0644000 :6800 s=STATE_OPEN pgs=2 cs=1 l=1).fault on lossy channel, failing
2017-06-21 05:14:32.031883 7f21f2b20700  1 -- 172.21.15.80:6800/237445 reap_dead start

/a/sage-2017-06-21_02:01:04-rados-wip-sage-testing2-distro-basic-smithi/1308454

Eventually the mgr times out and fails over; meanwhile the pg stat flush in the test times out with:

Exception: timed out waiting for mon to be updated with osd.1: 21474836492 < 21474836497
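The exception above comes from the test harness polling the mon for an updated osd stat version. A minimal sketch of that kind of wait loop is shown below; the function and parameter names are hypothetical, not the actual teuthology helper, but the raised message mirrors the failure seen here:

```python
import time

def wait_for_mon_osd_version(get_mon_version, target, timeout=60, interval=1):
    """Poll until the mon reports an osd stat version >= target.

    `get_mon_version` is a hypothetical callable standing in for the
    helper that queries the mon; when the mgr cannot reach the mon,
    the reported version never catches up and this times out.
    """
    deadline = time.time() + timeout
    while True:
        seen = get_mon_version()
        if seen >= target:
            return seen
        if time.time() >= deadline:
            raise Exception(
                'timed out waiting for mon to be updated: '
                '%d < %d' % (seen, target))
        time.sleep(interval)
```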

Related issues

Duplicated by RADOS - Bug #20507: "[WRN] Manager daemon x is unresponsive. No standby daemons available." in cluster log Duplicate 07/05/2017
Duplicated by RADOS - Bug #20624: "cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log Duplicate 07/14/2017

History

#1 Updated by Sage Weil 2 months ago

It looks like it wasn't aggressive enough about reconnecting to the mon:

2017-06-21 05:14:27.246438 7f21f031b700  0 -- 172.21.15.80:0/2657294848 >> 172.21.15.80:6789/0 pipe(0x55d9cfcec800 sd=8 :57248 s=2 pgs=16 cs=1 l=1 c=0x55d9cfbf4640).injecting socket failure
2017-06-21 05:14:27.246709 7f21f7c2b700  1 -- 172.21.15.80:0/2657294848 --> 172.21.15.80:6789/0 -- auth(proto 0 26 bytes epoch 1) v1 -- ?+0 0x55d9d0441180 con 0x55d9cfbf54e0
2017-06-21 05:14:27.246734 7f21f7c2b700  0 client.0 ms_handle_reset on 172.21.15.80:6789/0
2017-06-21 05:14:27.246737 7f21f7c2b700  0 client.0 ms_handle_reset on 172.21.15.80:6789/0
2017-06-21 05:14:37.246533 7f21f6428700  1 -- 172.21.15.80:0/2657294848 --> 172.21.15.80:6789/0 -- auth(proto 0 26 bytes epoch 1) v1 -- ?+0 0x55d9d0441180 con 0x55d9cfbf54e0
2017-06-21 05:14:37.247176 7f21f7c2b700  1 -- 172.21.15.80:0/2657294848 <== mon.0 172.21.15.80:6789/0 1 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (4232557749 0 0) 0x55d9cfc67980 con 0x55d9cfbf54e0
2017-06-21 05:14:37.247335 7f21f7c2b700  1 -- 172.21.15.80:0/2657294848 --> 172.21.15.80:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0 0x55d9d0440f00 con 0x55d9cfbf54e0
2017-06-21 05:15:01.246818 7f21f6428700  1 -- 172.21.15.80:0/2657294848 --> 172.21.15.80:6789/0 -- auth(proto 0 26 bytes epoch 1) v1 -- ?+0 0x55d9d0440c80 con 0x55d9cfbf54e0

and on the mon side:
2017-06-21 05:14:26.247957 7eff86846700  1 -- 172.21.15.80:6789/0 --> 172.21.15.80:0/2657294848 -- mgrdigest v1 -- 0x55758f353b80 con 0
2017-06-21 05:14:27.246496 7eff8183c700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f59d800 :6789 s=STATE_OPEN pgs=5 cs=1 l=1).read_bulk peer close file descriptor 34
2017-06-21 05:14:27.246523 7eff8183c700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f59d800 :6789 s=STATE_OPEN pgs=5 cs=1 l=1).read_until read failed
2017-06-21 05:14:27.246531 7eff8183c700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f59d800 :6789 s=STATE_OPEN pgs=5 cs=1 l=1).process read tag failed
2017-06-21 05:14:27.246552 7eff8183c700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f59d800 :6789 s=STATE_OPEN pgs=5 cs=1 l=1).fault on lossy channel, failing
2017-06-21 05:14:27.246584 7eff84041700 10 mon.a@0(leader) e1 ms_handle_reset 0x55758f59d800 172.21.15.80:0/2657294848
2017-06-21 05:14:27.246614 7eff84041700 10 mon.a@0(leader) e1 reset/close on session client.4098 172.21.15.80:0/2657294848
2017-06-21 05:14:27.246623 7eff84041700 10 mon.a@0(leader) e1 remove_session 0x55758f6dd880 client.4098 172.21.15.80:0/2657294848
2017-06-21 05:14:27.246903 7eff8083a700 10 mon.a@0(leader) e1 ms_verify_authorizer 172.21.15.80:0/2657294848 client protocol 0
2017-06-21 05:14:27.247040 7eff8083a700  0 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN_MESSAGE_READ_FRONT pgs=6 cs=1 l=1).read_until injecting socket failure
2017-06-21 05:14:27.247079 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=6 cs=1 l=1).read_bulk peer close file descriptor 34
2017-06-21 05:14:27.247088 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=6 cs=1 l=1).read_until read failed
2017-06-21 05:14:27.247093 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=6 cs=1 l=1).process read tag failed
2017-06-21 05:14:27.247099 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=6 cs=1 l=1).fault on lossy channel, failing
2017-06-21 05:14:27.247093 7eff84041700  1 -- 172.21.15.80:6789/0 <== client.4098 172.21.15.80:0/2657294848 1 ==== auth(proto 0 26 bytes epoch 1) v1 ==== 56+0+0 (1667460815 0 0) 0x55758f536580 con 0x55758f52e800
2017-06-21 05:14:27.247119 7eff84041700 10 mon.a@0(leader) e1 _ms_dispatch new session 0x55758f6dd880 MonSession(client.4098 172.21.15.80:0/2657294848 is open)
2017-06-21 05:14:27.247138 7eff84041700 10 mon.a@0(leader).paxosservice(auth 1..2) dispatch 0x55758f536580 auth(proto 0 26 bytes epoch 1) v1 from client.4098 172.21.15.80:0/2657294848 con 0x55758f52e800
2017-06-21 05:14:27.247166 7eff84041700 10 mon.a@0(leader).paxosservice(auth 1..2)  discarding message from disconnected client client.4098 172.21.15.80:0/2657294848 auth(proto 0 26 bytes epoch 1) v1
2017-06-21 05:14:27.247187 7eff84041700 10 mon.a@0(leader) e1 ms_handle_reset 0x55758f52e800 172.21.15.80:0/2657294848
2017-06-21 05:14:27.247199 7eff84041700 10 mon.a@0(leader) e1 reset/close on session client.4098 172.21.15.80:0/2657294848
2017-06-21 05:14:27.247202 7eff84041700 10 mon.a@0(leader) e1 remove_session 0x55758f6dd880 client.4098 172.21.15.80:0/2657294848
2017-06-21 05:14:37.246774 7eff8083a700 10 mon.a@0(leader) e1 ms_verify_authorizer 172.21.15.80:0/2657294848 client protocol 0
2017-06-21 05:14:37.246937 7eff84041700  1 -- 172.21.15.80:6789/0 <== client.4098 172.21.15.80:0/2657294848 1 ==== auth(proto 0 26 bytes epoch 1) v1 ==== 56+0+0 (1667460815 0 0) 0x55758fd49b80 con 0x55758f52e800
2017-06-21 05:14:37.246974 7eff84041700 10 mon.a@0(leader) e1 _ms_dispatch new session 0x55758f6dd880 MonSession(client.4098 172.21.15.80:0/2657294848 is open)
2017-06-21 05:14:37.246989 7eff84041700 10 mon.a@0(leader).paxosservice(auth 1..2) dispatch 0x55758fd49b80 auth(proto 0 26 bytes epoch 1) v1 from client.4098 172.21.15.80:0/2657294848 con 0x55758f52e800
2017-06-21 05:14:37.247014 7eff84041700 10 mon.a@0(leader).auth v2 preprocess_query auth(proto 0 26 bytes epoch 1) v1 from client.4098 172.21.15.80:0/2657294848
2017-06-21 05:14:37.247074 7eff84041700  1 -- 172.21.15.80:6789/0 --> 172.21.15.80:0/2657294848 -- auth_reply(proto 2 0 (0) Success) v1 -- 0x55758fd49e00 con 0
2017-06-21 05:14:37.247372 7eff8083a700  0 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN_TAG_ACK pgs=7 cs=1 l=1).read_until injecting socket failure
2017-06-21 05:14:37.247414 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=7 cs=1 l=1).read_bulk peer close file descriptor 34
2017-06-21 05:14:37.247422 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=7 cs=1 l=1).read_until read failed
2017-06-21 05:14:37.247417 7eff84041700  1 -- 172.21.15.80:6789/0 <== client.4098 172.21.15.80:0/2657294848 2 ==== auth(proto 2 32 bytes epoch 0) v1 ==== 62+0+0 (2545011323 0 0) 0x55758fd49e00 con 0x55758f52e800
2017-06-21 05:14:37.247426 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=7 cs=1 l=1).process read tag failed
2017-06-21 05:14:37.247432 7eff8083a700  1 -- 172.21.15.80:6789/0 >> 172.21.15.80:0/2657294848 conn(0x55758f52e800 :6789 s=STATE_OPEN pgs=7 cs=1 l=1).fault on lossy channel, failing
2017-06-21 05:14:37.247435 7eff84041700 20 mon.a@0(leader) e1 _ms_dispatch existing session 0x55758f6dd880 for client.4098 172.21.15.80:0/2657294848
2017-06-21 05:14:37.247450 7eff84041700 10 mon.a@0(leader).paxosservice(auth 1..2) dispatch 0x55758fd49e00 auth(proto 2 32 bytes epoch 0) v1 from client.4098 172.21.15.80:0/2657294848 con 0x55758f52e800
2017-06-21 05:14:37.247459 7eff84041700 10 mon.a@0(leader).paxosservice(auth 1..2)  discarding message from disconnected client client.4098 172.21.15.80:0/2657294848 auth(proto 2 32 bytes epoch 0) v1
2017-06-21 05:14:37.247476 7eff84041700 10 mon.a@0(leader) e1 ms_handle_reset 0x55758f52e800 172.21.15.80:0/2657294848
2017-06-21 05:14:37.247484 7eff84041700 10 mon.a@0(leader) e1 reset/close on session client.4098 172.21.15.80:0/2657294848
2017-06-21 05:14:37.247488 7eff84041700 10 mon.a@0(leader) e1 remove_session 0x55758f6dd880 client.4098 172.21.15.80:0/2657294848
2017-06-21 05:15:01.246995 7eff8103b700 10 mon.a@0(leader) e1 ms_verify_authorizer 172.21.15.80:0/2657294848 client protocol 0
2017-06-21 05:15:01.247134 7eff84041700  1 -- 172.21.15.80:6789/0 <== client.4098 172.21.15.80:0/2657294848 1 ==== auth(proto 0 26 bytes epoch 1) v1 ==== 56+0+0 (1667460815 0 0) 0x55758fdcbb80 con 0x55758f7f9000

I think the MonClient backoff should have been reset back to its initial value?
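The suggestion here, sketched below in Python for illustration (class and method names are hypothetical; the real logic is MonClient's hunting/backoff code in C++), is that the doubling reconnect interval should snap back to its initial value once a session is successfully established, rather than staying inflated across the next disconnect:

```python
class ReconnectBackoff:
    """Sketch of monc-style reconnect backoff (hypothetical names)."""

    def __init__(self, initial=1.0, max_interval=10.0, multiplier=2.0):
        self.initial = initial
        self.max_interval = max_interval
        self.multiplier = multiplier
        self.interval = initial

    def on_failure(self):
        # Back off: double the retry interval, capped at max_interval.
        self.interval = min(self.interval * self.multiplier,
                            self.max_interval)
        return self.interval

    def on_session_established(self):
        # Reset so the next disconnect is retried promptly again;
        # without this, repeated injected socket failures leave the
        # interval pinned near the cap, delaying beacon delivery.
        self.interval = self.initial
```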

#2 Updated by Sage Weil 2 months ago

  • Subject changed from mgr: occasional fails to send beacons to mgr: occasionally fails to send beacons (monc reconnect backoff too aggressive?)

#3 Updated by Joao Luis about 2 months ago

  • Assignee set to Joao Luis

#4 Updated by Sage Weil about 2 months ago

  • Priority changed from Immediate to High

#5 Updated by Sage Weil about 1 month ago

  • Priority changed from High to Immediate

/a/sage-2017-07-12_02:31:06-rbd-wip-health-distro-basic-smithi/1389750

This is about to trigger more test failures on master with wip-health, which will notice the unexpected MGR_DOWN.

#6 Updated by Sage Weil about 1 month ago

  • Duplicated by Bug #20507: "[WRN] Manager daemon x is unresponsive. No standby daemons available." in cluster log added

#7 Updated by Sage Weil about 1 month ago

  • Priority changed from Immediate to Urgent

#8 Updated by Joao Luis about 1 month ago

  • Related to Bug #20624: "cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log added

#9 Updated by Sage Weil about 1 month ago

/a/sage-2017-07-19_15:27:16-rados-wip-sage-testing2-distro-basic-smithi/1419525

#10 Updated by Joao Luis about 1 month ago

  • Related to deleted (Bug #20624: "cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log)

#11 Updated by Joao Luis about 1 month ago

  • Duplicated by Bug #20624: "cluster [WRN] Health check failed: no active mgr (MGR_DOWN)" in cluster log added

#12 Updated by Joao Luis about 1 month ago

  • Status changed from Verified to Need More Info

All suites end up getting stuck for quite a while (long enough to trigger the cutoff for a laggy/down mgr) somewhere during `send_beacon()`. I've got PR https://github.com/ceph/ceph/pull/16484 up to increase debugging on the mgr's monclient for the suites running with messenger failure injection. Let's see if that gets us enough information to figure out where this is coming from.
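For a local reproduction, the same monclient/messenger visibility can be had with a config override like the following (`debug monc` and `debug ms` are standard ceph debug subsystem settings; the levels shown are illustrative):

```ini
[mgr]
    debug monc = 20
    debug ms = 1
```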

#13 Updated by Sage Weil 26 days ago

  • Status changed from Need More Info to Need Review

#14 Updated by Sage Weil 26 days ago

/a/sage-2017-07-25_20:28:21-rados-wip-sage-testing2-distro-basic-smithi/1443641

#15 Updated by Kefu Chai 25 days ago

  • Status changed from Need Review to Resolved
  • Assignee changed from Joao Luis to Sage Weil
