Bug #49076


cephadm: Bootstrapping fails: json.decoder.JSONDecodeError: Expecting value: line 3217 column 25 (char 114688)

Added by Gunther Heinrich over 3 years ago. Updated over 3 years ago.

Status: Duplicate
Priority: Normal
Assignee: -
Category: cephadm
Target version: -
% Done: 0%
Source: Community (user)
Tags: -
Backport: -
Regression: No
Severity: 3 - minor
Reviewed: -
Affected Versions: -
ceph-qa-suite: -
Pull request ID: -
Crash signature (v1): -
Crash signature (v2): -

Description

On an up-to-date Ubuntu 20.04.1 with Podman 2.2.1...

In relation to Bug #49013, I tried to bootstrap a new cluster for testing. At the moment it seems I can neither bootstrap a new cluster nor upgrade an existing one. What is going on with cephadm?

Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit systemd-timesyncd.service is enabled and running
Repeating the final host check...
podman|docker (/usr/bin/podman) is present
systemctl is present
lvcreate is present
Unit systemd-timesyncd.service is enabled and running
Host looks OK
Cluster fsid: 837fa9cc-6485-11eb-9d0f-47b10d24c3ce
Verifying IP 192.168.56.10 port 3300 ...
Verifying IP 192.168.56.10 port 6789 ...
Mon IP 192.168.56.10 is in CIDR network 192.168.56.0/25
Pulling container image docker.io/ceph/ceph:v15...
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network...
Creating mgr...
Verifying port 9283 ...
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/10)...
mgr not available, waiting (2/10)...
mgr not available, waiting (3/10)...
mgr not available, waiting (4/10)...
mgr is available
Enabling cephadm module...
Traceback (most recent call last):
  File "/usr/sbin/cephadm", line 6111, in <module>
    r = args.func()
  File "/usr/sbin/cephadm", line 1399, in _default_image
    return func()
  File "/usr/sbin/cephadm", line 3124, in command_bootstrap
    wait_for_mgr_restart()
  File "/usr/sbin/cephadm", line 3103, in wait_for_mgr_restart
    j = json.loads(out)
  File "/usr/lib/python3.8/json/__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.8/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.8/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 3217 column 25 (char 114688)
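The failure mode can be reproduced in isolation (a minimal sketch with hypothetical data, not the actual cephadm code or the real `ceph mgr dump` output): `wait_for_mgr_restart()` feeds command output to `json.loads`, and if that output is cut off mid-stream, the parser raises the same "Expecting value" error at the point where the data ends. Notably, 114688 bytes is exactly 112 KiB, which suggests the output was truncated on a buffer boundary rather than being malformed at the source.

```python
import json

# Hypothetical illustration: JSON cut off mid-stream (here right after a
# comma inside an array) makes json.loads raise the same "Expecting value"
# JSONDecodeError seen in the bootstrap traceback, positioned at the
# character where the data ends.
truncated = '{"epoch": 1, "active_name": "mgr.a", "modules": ["cephadm", '

try:
    json.loads(truncated)
except json.JSONDecodeError as exc:
    print(exc.msg, "at char", exc.pos)
```
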

In journalctl I found these log entries...
Feb 01 13:04:22 iz-ceph-v2-mon-01 systemd[1]: ceph-837fa9cc-6485-11eb-9d0f-47b10d24c3ce@mon.iz-ceph-v2-mon-01.service: Main process exited, code=dumped, status=6/ABRT
Feb 01 13:04:22 iz-ceph-v2-mon-01 systemd[1]: ceph-837fa9cc-6485-11eb-9d0f-47b10d24c3ce@mon.iz-ceph-v2-mon-01.service: Failed with result 'core-dump'.

...but it seems the containers started nonetheless:
645f722044ce  docker.io/ceph/ceph:v15  -n mgr.iz-ceph-v2...  16 minutes ago  Up 16 minutes ago          ceph-837fa9cc-6485-11eb-9d0f-47b10d24c3ce-mgr.iz-ceph-v2-mon-01.fcznbo
7dfb2c85fe02  docker.io/ceph/ceph:v15  -n mon.iz-ceph-v2...  16 minutes ago  Up 16 minutes ago          ceph-837fa9cc-6485-11eb-9d0f-47b10d24c3ce-mon.iz-ceph-v2-mon-01


Related issues (1 total: 0 open, 1 closed)

Is duplicate of Orchestrator - Bug #48993: cephadm: 'mgr stat' and/or 'pg dump' output truncated (Resolved)
