Support #50887 (closed)

ERROR: Daemon not found: mgr.ceph-node1.ruuwlz. See cephadm ls

Added by lyd dragon almost 3 years ago. Updated over 2 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
% Done: 0%
Tags:
Reviewed:
Affected Versions:
Pull request ID:

Description

The previous access address of the Ceph dashboard was https://ceph-node1:8443, but now it is https://ceph-node2:443. When checking the logs, I found that some of the daemons' logs cannot be viewed.
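
For context, the dashboard is served by whichever mgr is currently active, so a mgr failover moves it to another host. A quick way to confirm where it currently lives (assuming the ceph CLI is reachable, e.g. via cephadm shell) is sketched below:

$ ceph mgr stat        # shows which mgr is currently active
$ ceph mgr services    # shows the URL the dashboard module is currently being served on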

$ cephadm version

Using recent ceph image ceph/ceph@sha256:c820cef23fb93518d5b35683d6301bae36511e52e0f8cd1495fd58805b849383
ceph version 16.2.3 (381b476cb3900f9a92eb95d03b4850b953cfd79a) pacific (stable)
$ ceph -v
ceph version 16.2.3 (381b476cb3900f9a92eb95d03b4850b953cfd79a) pacific (stable)

$ ceph orch ps

NAME                        HOST        PORTS        STATUS         REFRESHED  AGE  VERSION  IMAGE ID      CONTAINER ID  
alertmanager.ceph-node1     ceph-node1               running (3d)   106s ago   3d   0.20.0   0881eb8f169f  328944044d26  
crash.ceph-node1            ceph-node1               running (3d)   106s ago   3d   16.2.3   cb6dd59f250f  0ef17c490a0c  
crash.ceph-node2            ceph-node2               running (3d)   109s ago   3d   16.2.3   cb6dd59f250f  5688acf077f7  
crash.ceph-node3            ceph-node3               running (3d)   109s ago   3d   16.2.3   cb6dd59f250f  1b6d8031d3bb  
grafana.ceph-node1          ceph-node1  *:3000       running (3d)   106s ago   3d   6.7.4    ae5c36c3d3cd  a6c39e414253  
mds.myfs.ceph-node2.dobeeq  ceph-node2               running (10h)  109s ago   10h  16.2.3   cb6dd59f250f  610f5633162c  
mds.myfs.ceph-node3.ycembc  ceph-node3               running (10h)  109s ago   10h  16.2.3   cb6dd59f250f  17bf985bfbba  
mgr.ceph-node1.ruuwlz       ceph-node1  *:9283       running (20h)  106s ago   3d   16.2.3   cb6dd59f250f  2bb0eee82f4d  
mgr.ceph-node2.iangxl       ceph-node2  *:8443,9283  running (20h)  109s ago   3d   16.2.3   cb6dd59f250f  d7def8b9307f  
mon.ceph-node1              ceph-node1               running (20h)  106s ago   3d   16.2.3   cb6dd59f250f  ca6e1b8a36b5  
mon.ceph-node2              ceph-node2               running (20h)  109s ago   3d   16.2.3   cb6dd59f250f  06880b1bd610  
mon.ceph-node3              ceph-node3               running (20h)  109s ago   3d   16.2.3   cb6dd59f250f  766a2eb6ecbe  
node-exporter.ceph-node1    ceph-node1               running (3d)   106s ago   3d   0.18.1   e5a616e4b9cf  fef9911713ba  
node-exporter.ceph-node2    ceph-node2  *:9100       running (3d)   109s ago   3d   0.18.1   e5a616e4b9cf  09c0e4a119d3  
node-exporter.ceph-node3    ceph-node3  *:9100       running (3d)   109s ago   3d   0.18.1   e5a616e4b9cf  0c045e8e53d9  
osd.0                       ceph-node1               running (2d)   106s ago   2d   16.2.3   cb6dd59f250f  429f1247e825  
osd.1                       ceph-node2               running (2d)   109s ago   2d   16.2.3   cb6dd59f250f  2d8cf58c1b7d  
osd.2                       ceph-node3               running (2d)   109s ago   2d   16.2.3   cb6dd59f250f  d571254b1c4b  
prometheus.ceph-node1       ceph-node1               running (3d)   106s ago   3d   2.18.1   de242295e225  1e8bd0b90f7a 

$ cephadm logs --fsid 80b3a4fc-b339-11eb-9380-9e78e133aff7 --name osd.0| tail 
May 16 20:58:34 ceph-node1 bash[280449]: Uptime(secs): 255001.1 total, 0.0 interval
May 16 20:58:34 ceph-node1 bash[280449]: Flush(GB): cumulative 0.000, interval 0.000
May 16 20:58:34 ceph-node1 bash[280449]: AddFile(GB): cumulative 0.000, interval 0.000
May 16 20:58:34 ceph-node1 bash[280449]: AddFile(Total Files): cumulative 0, interval 0
May 16 20:58:34 ceph-node1 bash[280449]: AddFile(L0 Files): cumulative 0, interval 0
May 16 20:58:34 ceph-node1 bash[280449]: AddFile(Keys): cumulative 0, interval 0
May 16 20:58:34 ceph-node1 bash[280449]: Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
May 16 20:58:34 ceph-node1 bash[280449]: Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
May 16 20:58:34 ceph-node1 bash[280449]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
May 16 20:58:34 ceph-node1 bash[280449]: ** File Read Latency Histogram By Level [P] **

$ cephadm logs --fsid 80b3a4fc-b339-11eb-9380-9e78e133aff7 --name osd.1| tail  
ERROR: Daemon not found: osd.1. See `cephadm ls`

$ cephadm logs --fsid 80b3a4fc-b339-11eb-9380-9e78e133aff7 --name mgr.ceph-node2.iangxl | tail      
ERROR: Daemon not found: mgr.ceph-node2.iangxl. See `cephadm ls`

$ fsid=`cephadm shell ceph -s -f json 2>/dev/null | jq '.fsid'`
$ for name in $(cephadm ls | jq -r '.[].name') ; do
>   cephadm logs --fsid $fsid --name "$name" > $name;
> done

ERROR: Daemon not found: mon.ceph-node1. See `cephadm ls`
ERROR: Daemon not found: mgr.ceph-node1.ruuwlz. See `cephadm ls`
ERROR: Daemon not found: alertmanager.ceph-node1. See `cephadm ls`
ERROR: Daemon not found: crash.ceph-node1. See `cephadm ls`
ERROR: Daemon not found: grafana.ceph-node1. See `cephadm ls`
ERROR: Daemon not found: node-exporter.ceph-node1. See `cephadm ls`
ERROR: Daemon not found: prometheus.ceph-node1. See `cephadm ls`
ERROR: Daemon not found: osd.0. See `cephadm ls`
#1

Updated by Sebastian Wagner over 2 years ago

  • Project changed from Ceph to Orchestrator
  • Description updated (diff)
#2

Updated by Sebastian Wagner over 2 years ago

Were you able to solve this? I think this was due to cephadm logs only working on the local host.

#3

Updated by Sebastian Wagner over 2 years ago

Right, cephadm logs is just a thin wrapper around journalctl -u <unit name>. It only works on the local host. You'll need to ssh to ceph-node2 in order to see osd.1's logs.
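
For anyone hitting the same loop failure, a rough sketch of collecting each daemon's logs host by host over SSH (this assumes passwordless root SSH from the admin node to every host and jq installed locally; note jq -r, which prints the fsid without the surrounding quotes):

$ fsid=$(cephadm shell ceph -s -f json 2>/dev/null | jq -r '.fsid')
$ for host in ceph-node1 ceph-node2 ceph-node3; do
>   for name in $(ssh "$host" cephadm ls | jq -r '.[].name'); do
>     ssh "$host" cephadm logs --fsid "$fsid" --name "$name" > "${host}_${name}.log"
>   done
> done

Since each daemon runs as a systemd unit on its host, something like ssh ceph-node2 journalctl -u ceph-<fsid>@osd.1 should give the same output for a single daemon.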

#4

Updated by Sebastian Wagner over 2 years ago

  • Tracker changed from Bug to Support
#5

Updated by Sebastian Wagner over 2 years ago

  • Status changed from New to Closed