Bug #46154

open

unable to pull ceph/ceph-grafana: connection reset by peer

Added by Sebastian Wagner almost 4 years ago.

Status:
New
Priority:
Normal
Assignee:
-
Category:
Infrastructure Service
Target version:
-
% Done:
0%
Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Crash signature (v1):
Crash signature (v2):

Description

http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-smithi/5172323/

2020-06-23T12:20:41.349 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Deploy daemon grafana.a ...
2020-06-23T12:20:41.350 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Verifying port 3000 ...
2020-06-23T12:20:46.563 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Non-zero exit code 125 from /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Trying to pull docker.io/ceph/ceph-grafana:latest...
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Getting image source signatures
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Copying blob sha256:003efafe5a84678b585af8a06810c47079aa4705e60d07f1c31a52f0e35ce0b5
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr   read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Error: unable to pull ceph/ceph-grafana:latest: 1 error occurred:
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr  * Error writing blob: error storing blob to file "/var/tmp/storage459839576/1": read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr:Traceback (most recent call last):
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr:  File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 4825, in <module>
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr:    r = args.func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr:  File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1182, in _default_image
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr:    return func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr:  File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2863, in command_deploy
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr:    uid, gid = extract_uid_gid_monitoring(daemon_type)
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr:  File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2799, in extract_uid_gid_monitoring
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr:    uid, gid = extract_uid_gid(file_path='/var/lib/grafana')
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr:  File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1798, in extract_uid_gid
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr:    args=['-c', '%u %g', file_path]
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr:  File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2275, in run
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr:    self.run_cmd(), desc=self.entrypoint, timeout=timeout)
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr:  File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 861, in call_throws
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr:    raise RuntimeError('Failed command: %s' % ' '.join(command))
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr:RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
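For context, the command that fails is cephadm's uid/gid probe: before deploying grafana.a, extract_uid_gid runs "stat -c '%u %g' /var/lib/grafana" inside the monitoring image, which forces podman to pull the image first, and that pull is what hits the connection reset. A minimal sketch of what the probe does, assuming podman on the host; this is simplified and not the actual cephadm code:

    import subprocess

    def extract_uid_gid(image='ceph/ceph-grafana:latest', path='/var/lib/grafana'):
        # Run `stat -c '%u %g' <path>` inside the image; podman must pull the
        # image before it can run, which is where the pull failure surfaces.
        out = subprocess.run(
            ['podman', 'run', '--rm', '--entrypoint', 'stat', image,
             '-c', '%u %g', path],
            capture_output=True, text=True, check=True,
        ).stdout
        uid, gid = (int(x) for x in out.split())
        return uid, gid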

Does this mean we have to retry fetching containers?
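If retrying is the answer, a bounded retry with a short backoff around the image pull would absorb transient resets like this one. A rough sketch only; the retry count, the delay, and the explicit "podman pull" step are assumptions, not what cephadm does today:

    import subprocess
    import time

    def pull_with_retries(image, attempts=3, delay=5):
        # Retry transient registry failures (e.g. "connection reset by peer")
        # a few times with linear backoff, then re-raise the last error.
        for attempt in range(1, attempts + 1):
            try:
                subprocess.run(['podman', 'pull', image], check=True)
                return
            except subprocess.CalledProcessError:
                if attempt == attempts:
                    raise
                time.sleep(delay * attempt)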


Related issues 1 (0 open, 1 closed)

Related to Orchestrator - Bug #46412: cephadm trying to pull mimic based image (Can't reproduce)

#1

Updated by Kefu Chai almost 3 years ago

  • Related to Bug #46412: cephadm trying to pull mimic based image added