Bug #48157
test_cephadm.sh failure: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading
Status: Closed
Description
http://qa-proxy.ceph.com/teuthology/yuriw-2020-11-09_17:43:09-rados-wip-yuri8-testing-2020-11-09-0807-octopus-distro-basic-smithi/5606004/teuthology.log
http://qa-proxy.ceph.com/teuthology/yuriw-2020-11-09_17:43:09-rados-wip-yuri8-testing-2020-11-09-0807-octopus-distro-basic-smithi/5606056/teuthology.log
http://qa-proxy.ceph.com/teuthology/yuriw-2020-11-09_17:43:09-rados-wip-yuri8-testing-2020-11-09-0807-octopus-distro-basic-smithi/5605996/teuthology.log
2020-11-09T19:59:45.125 INFO:tasks.workunit.client.0.smithi183.stderr:Non-zero exit code 125 from /usr/bin/podman run --rm --ipc=host --net=host --entrypoint ceph -e CONTAINER_IMAGE=docker.io/ceph/daemon-base:latest-nautilus -e NODE_NAME=smithi183 docker.io/ceph/daemon-base:latest-nautilus --version
2020-11-09T19:59:45.125 INFO:tasks.workunit.client.0.smithi183.stderr:ceph:stderr Trying to pull docker.io/ceph/daemon-base:latest-nautilus...
2020-11-09T19:59:45.126 INFO:tasks.workunit.client.0.smithi183.stderr:ceph:stderr toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
2020-11-09T19:59:45.126 INFO:tasks.workunit.client.0.smithi183.stderr:ceph:stderr Error: unable to pull docker.io/ceph/daemon-base:latest-nautilus: unable to pull image: Error initializing source docker://ceph/daemon-base:latest-nautilus: Error reading manifest latest-nautilus in docker.io/ceph/daemon-base: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
2020-11-09T19:59:45.129 INFO:tasks.workunit.client.0.smithi183.stderr:Traceback (most recent call last):
2020-11-09T19:59:45.130 INFO:tasks.workunit.client.0.smithi183.stderr:  File "/tmp/tmp.2RH4gLOl67/cephadm", line 6041, in <module>
2020-11-09T19:59:45.130 INFO:tasks.workunit.client.0.smithi183.stderr:    r = args.func()
2020-11-09T19:59:45.130 INFO:tasks.workunit.client.0.smithi183.stderr:  File "/tmp/tmp.2RH4gLOl67/cephadm", line 1359, in _infer_image
2020-11-09T19:59:45.131 INFO:tasks.workunit.client.0.smithi183.stderr:    return func()
2020-11-09T19:59:45.131 INFO:tasks.workunit.client.0.smithi183.stderr:  File "/tmp/tmp.2RH4gLOl67/cephadm", line 2641, in command_version
2020-11-09T19:59:45.131 INFO:tasks.workunit.client.0.smithi183.stderr:    out = CephContainer(args.image, 'ceph', ['--version']).run()
2020-11-09T19:59:45.132 INFO:tasks.workunit.client.0.smithi183.stderr:  File "/tmp/tmp.2RH4gLOl67/cephadm", line 2632, in run
2020-11-09T19:59:45.132 INFO:tasks.workunit.client.0.smithi183.stderr:    self.run_cmd(), desc=self.entrypoint, timeout=timeout)
2020-11-09T19:59:45.132 INFO:tasks.workunit.client.0.smithi183.stderr:  File "/tmp/tmp.2RH4gLOl67/cephadm", line 1038, in call_throws
2020-11-09T19:59:45.132 INFO:tasks.workunit.client.0.smithi183.stderr:    raise RuntimeError('Failed command: %s' % ' '.join(command))
2020-11-09T19:59:45.133 INFO:tasks.workunit.client.0.smithi183.stderr:RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --net=host --entrypoint ceph -e CONTAINER_IMAGE=docker.io/ceph/daemon-base:latest-nautilus -e NODE_NAME=smithi183 docker.io/ceph/daemon-base:latest-nautilus --version
2020-11-09T19:59:45.149 INFO:tasks.workunit.client.0.smithi183.stderr:+ cleanup
2020-11-09T19:59:45.149 INFO:tasks.workunit.client.0.smithi183.stderr:+ '[' true = false ']'
2020-11-09T19:59:45.149 INFO:tasks.workunit.client.0.smithi183.stderr:+ dump_all_logs 00000000-0000-0000-0000-0000deadbeef
2020-11-09T19:59:45.150 INFO:tasks.workunit.client.0.smithi183.stderr:+ local fsid=00000000-0000-0000-0000-0000deadbeef
2020-11-09T19:59:45.150 INFO:tasks.workunit.client.0.smithi183.stderr:++ sudo /tmp/tmp.2RH4gLOl67/cephadm --image docker.io/ceph/daemon-base:latest-octopus ls
2020-11-09T19:59:45.150 INFO:tasks.workunit.client.0.smithi183.stderr:++ jq -r '.[] | select(.fsid == "00000000-0000-0000-0000-0000deadbeef").name'
2020-11-09T19:59:45.296 INFO:tasks.workunit.client.0.smithi183.stderr:+ local names=
2020-11-09T19:59:45.297 INFO:tasks.workunit.client.0.smithi183.stderr:+ echo 'dumping logs for daemons: '
2020-11-09T19:59:45.297 INFO:tasks.workunit.client.0.smithi183.stderr:+ rm -rf tmp.test_cephadm.sh.4Lg3Vq
2020-11-09T19:59:45.297 INFO:tasks.workunit.client.0.smithi183.stdout:dumping logs for daemons:
2020-11-09T19:59:45.298 INFO:tasks.workunit.client.0.smithi183.stderr:+ rm -rf /tmp/tmp.2RH4gLOl67
2020-11-09T19:59:45.300 DEBUG:teuthology.orchestra.run:got remote process result: 1
2020-11-09T19:59:45.300 INFO:tasks.workunit:Stopping ['cephadm/test_cephadm.sh'] on client.0...
2020-11-09T19:59:45.301 INFO:teuthology.orchestra.run.smithi183:> sudo rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0
2020-11-09T19:59:45.546 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 90, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 69, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-yuri8-testing-2020-11-09-0807-octopus/qa/tasks/workunit.py", line 127, in task
    coverage_and_limits=not config.get('no_coverage_and_limits', None))
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 84, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 98, in __next__
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 30, in resurrect_traceback
    raise exc.exc_info[1]
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 23, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/github.com_ceph_ceph-c_wip-yuri8-testing-2020-11-09-0807-octopus/qa/tasks/workunit.py", line 415, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 215, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 446, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 160, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 182, in _raise_for_status
    node=self.hostname, label=self.label
teuthology.exceptions.CommandFailedError: Command failed (workunit test cephadm/test_cephadm.sh) on smithi183 with status 1: 'mkdir -p -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && cd -- /home/ubuntu/cephtest/mnt.0/client.0/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=3b2df8c76c14ab5098113329edcfe1c0d82c9b44 TESTDIR="/home/ubuntu/cephtest" CEPH_ARGS="--cluster ceph" CEPH_ID="0" PATH=$PATH:/usr/sbin CEPH_BASE=/home/ubuntu/cephtest/clone.client.0 CEPH_ROOT=/home/ubuntu/cephtest/clone.client.0 adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/test_cephadm.sh'
2020-11-09T19:59:45.570 ERROR:teuthology.run_tasks: Sentry event:
https://sentry.ceph.com/ceph/sepia/?query=b7e85beb0d9b40ba9626032d1b29528c
Updated by Deepika Upadhyay over 3 years ago
- Project changed from Ceph to Orchestrator
Updated by Neha Ojha over 3 years ago
- Related to Bug #45631: Error parsing image configuration: Invalid status code returned when fetching blob 429 (Too Many Requests) added
Updated by Neha Ojha over 3 years ago
/a/teuthology-2020-11-09_07:01:01-rados-master-distro-basic-smithi/5605208
Updated by Michael Fritch over 3 years ago
This is becoming more frequent on various ceph images (v15.2.0, octopus, latest-devel-master, etc.) and also with the monitoring stack (prom, alertmanager, etc.):
/a/mgfritch-2020-11-13_04:28:21-rados:cephadm-wip-mgfritch-testing-2020-11-12-2008-distro-basic-smithi/5618529
/a/mgfritch-2020-11-13_04:28:21-rados:cephadm-wip-mgfritch-testing-2020-11-12-2008-distro-basic-smithi/5618542
/a/mgfritch-2020-11-13_04:28:21-rados:cephadm-wip-mgfritch-testing-2020-11-12-2008-distro-basic-smithi/5618554
/a/mgfritch-2020-11-13_04:28:21-rados:cephadm-wip-mgfritch-testing-2020-11-12-2008-distro-basic-smithi/5618530
/a/mgfritch-2020-11-13_04:28:21-rados:cephadm-wip-mgfritch-testing-2020-11-12-2008-distro-basic-smithi/5618540
/a/mgfritch-2020-11-13_04:28:21-rados:cephadm-wip-mgfritch-testing-2020-11-12-2008-distro-basic-smithi/5618509
We are also starting to see fairly frequent build failures against CentOS 8:
https://shaman.ceph.com/builds/ceph/master/c4b4e025b919c530b87f7855cacb2904a1d79d2b/default/239199/
https://shaman.ceph.com/builds/ceph/wip-mgfritch-testing-2020-11-12-1815/168c7ccf4e8bd270f8b9b6a91b15cfa6693cbb79/default/239101/
=== docker build ceph-ci/daemon-base:master-c4b4e02-master-centos-8-x86_64
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
STEP 1: FROM docker.io/centos:8
Error: error creating build container: Error initializing source docker://centos:8: Error reading manifest 8 in docker.io/library/centos: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
make[2]: *** [Makefile:25: build] Error 125
Updated by Michael Fritch over 3 years ago
Perhaps we could start mirroring these sepia-related container dependencies on quay.ceph.io?
https://wiki.sepia.ceph.com/doku.php?id=services:quay.ceph.io
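Roughly, mirroring one of these images could look like the sketch below; the quay.ceph.io destination path is an assumption for illustration, not an existing repo:
# hypothetical sketch: copy a rate-limited image from docker.io to a quay.ceph.io mirror
SRC=docker.io/ceph/daemon-base:latest-nautilus
DST=quay.ceph.io/ceph-ci/daemon-base:latest-nautilus   # assumed destination repo
podman pull "$SRC"        # one (rate-limited) pull from docker.io
podman tag "$SRC" "$DST"
podman push "$DST"        # needs a prior 'podman login quay.ceph.io'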
Updated by Michael Fritch over 3 years ago
Another example, but this time `prom/alertmanager:v0.20.0`, `ceph/ceph-grafana:6.6.2`, and `prom/prometheus:v2.18.1`
/a/mgfritch-2020-11-13_19:50:15-rados:cephadm-wip-mgfritch-testing-2020-11-13-1122-distro-basic-smithi/5620195
Updated by Deepika Upadhyay over 3 years ago
- Priority changed from Normal to High
Updated by Deepika Upadhyay over 3 years ago
Michael Fritch wrote:
Perhaps we could start mirroring these sepia related container deps on quay.ceph.io ??
https://wiki.sepia.ceph.com/doku.php?id=services:quay.ceph.io
I think that could be the best solution: https://www.openshift.com/blog/mitigate-impact-of-docker-hub-pull-request-limits
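(The blog above, and the error message itself, also mention authenticating pulls as a mitigation; a minimal sketch, assuming a Docker Hub account were available for the lab:)
podman login docker.io -u <dockerhub-user>              # prompts for the password/token
podman pull docker.io/ceph/daemon-base:latest-octopus   # authenticated pulls get a higher rate limit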
Updated by Deepika Upadhyay over 3 years ago
- Priority changed from High to Normal
Updated by Sebastian Wagner over 3 years ago
- Priority changed from Normal to Urgent
Updated by Sebastian Wagner over 3 years ago
We already have a caching registry for docker.io in sepia which is in active use.
It's just that it is only used by tasks/cephadm.py, not by the standalone tests.
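For reference, pointing podman at such a mirror is done in /etc/containers/registries.conf; a minimal sketch in the v2 TOML syntax (the exact file our tooling writes may differ), using the sepia mirror host that ceph-cm-ansible configures:
unqualified-search-registries = ["docker.io"]

[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "docker-mirror.front.sepia.ceph.com:5000"
insecure = true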
Updated by Deepika Upadhyay over 3 years ago
@Sebastian: I also observed it when SSHing directly to senta04/vossi04 and trying to pull any other image:
ERRO[0000] failed to dial gRPC: cannot connect to the Docker daemon. Is 'docker daemon' running on this host?: dial unix /var/run/docker.sock: connect: no such file or directory
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[ideepika@vossi04 dockerfiles]$ podman build . -t centos8-ceph
STEP 1: FROM centos:8
Error: error creating build container: The following failures happened while trying to pull image specified by "centos:8" based on search registries in /etc/containers/registries.conf:
* "localhost/centos:8": Error initializing source docker://localhost/centos:8: error pinging docker registry localhost: Get https://localhost/v2/: dial tcp [::1]:443: connect: connection refused
* "registry.access.redhat.com/centos:8": Error initializing source docker://registry.access.redhat.com/centos:8: Error reading manifest 8 in registry.access.redhat.com/centos: name unknown: Repo not found
* "registry.redhat.io/centos:8": Error initializing source docker://registry.redhat.io/centos:8: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
* "docker.io/library/centos:8": Error initializing source docker://centos:8: (Mirrors also failed: [vossi04.front.sepia.ceph.com:5000/library/centos:8: error pinging docker registry vossi04.front.sepia.ceph.com:5000: Get http://vossi04.front.sepia.ceph.com:5000/v2/: dial tcp 172.21.10.4:5000: connect: connection refused]): docker.io/library/centos:8: Error reading manifest 8 in docker.io/library/centos: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
ideepika@senta02:~$ docker pull centos:8
Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Updated by Sebastian Wagner over 3 years ago
Updated by Sebastian Wagner over 3 years ago
- Status changed from New to In Progress
- Assignee set to David Galloway
Updated by Stephen Longofono over 3 years ago
Is there an option for pulling and saving an image, and providing it as input to cephadm? It seems problematic that deploying a potentially large cluster pulls images for each node, every time. If bootstrapping or deployment fails for some reason, you could hit this limit quickly while troubleshooting.
Updated by David Galloway over 3 years ago
Stephen Longofono wrote:
Is there an option for pulling and saving an image, and providing it as input to cephadm? It seems problematic that deploying a potentially large cluster pulls images for each node, every time. If bootstrapping or deployment fails for some reason, you could hit this limit quickly while troubleshooting.
+1 this would be nice.
We are also considering hosting our container images elsewhere.
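As a rough interim sketch of what that could look like with the existing --image flag (the image tag and IP are examples only, and whether later orchestrator operations still re-contact the registry is a separate question):
# illustrative only: pre-seed the image on each host, then tell cephadm to use it
podman pull docker.io/ceph/ceph:v15                      # single pull against docker.io
podman save -o /tmp/ceph-v15.tar docker.io/ceph/ceph:v15
# copy /tmp/ceph-v15.tar to the other hosts, then on each of them:
podman load -i /tmp/ceph-v15.tar
cephadm --image docker.io/ceph/ceph:v15 bootstrap --mon-ip 10.0.0.1   # example IP; bootstrap may still attempt a pull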
Updated by Juan Miguel Olmo Martínez over 3 years ago
It continues happening:
2020-12-17T07:51:50.515 INFO:tasks.workunit.client.0.smithi086.stderr:ceph: stderr Trying to pull docker.io/ceph/daemon-base:latest-octopus...
2020-12-17T07:51:50.516 INFO:tasks.workunit.client.0.smithi086.stderr:ceph: stderr toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Updated by David Galloway over 3 years ago
Juan Miguel Olmo Martínez wrote:
It continues happening:
[...]
There is at least one spot in that logfile where registries.conf gets overwritten.
tasks:
- exec:
    all:
    - curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/Release.key | sudo apt-key add -
    - echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_18.04/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
    - sudo apt update
    - sudo apt -y install podman
    - echo -e "[registries.search]\nregistries = ['docker.io']" | sudo tee /etc/containers/registries.conf
That last line prevents the testnode from using our mirror. During the ceph-cm-ansible run at the start of the job, registries-conf-ctl writes registries.conf so that our mirror gets used.
2020-12-17T07:46:14.003 INFO:teuthology.task.ansible.out:changed: [smithi086.front.sepia.ceph.com] => {"changed": true, "cmd": ["registries-conf-ctl", "add-mirror", "docker.io", "docker-mirror.front.sepia.ceph.com:5000"], "delta": "0:00:00.123925", "end": "2020-12-17 07:46:13.688338", "rc": 0, "start": "2020-12-17 07:46:13.564413", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
But later...
2020-12-17T07:46:48.353 DEBUG:teuthology.orchestra.run.smithi086:> sudo TESTDIR=/home/ubuntu/cephtest bash -c 'echo -e "[registries.search]\nregistries = ['"'"'docker.io'"'"']" | sudo tee /etc/containers/registries.conf'
2020-12-17T07:46:48.416 INFO:teuthology.orchestra.run.smithi086.stdout:[registries.search]
2020-12-17T07:46:48.417 INFO:teuthology.orchestra.run.smithi086.stdout:registries = ['docker.io']
Perhaps that should be `tee -a` instead? Or just don't do that at all.
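One possible shape of the fix, as a sketch: rather than overwriting the file from the suite yaml, re-apply the mirror with the same command ceph-cm-ansible runs (copied from the ansible output above), e.g. as a final step after installing podman:
# hypothetical extra step for the yaml above; re-adds the mirror instead of clobbering it
sudo registries-conf-ctl add-mirror docker.io docker-mirror.front.sepia.ceph.com:5000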
Updated by Dan Mick over 3 years ago
https://github.com/ceph/ceph/pull/38650 should fix this (sorry, didn't know there was a bug open)
Updated by Deepika Upadhyay over 3 years ago
hey Dan,
still observing this issue:
2021-01-04T20:49:19.396 INFO:tasks.workunit.client.0.smithi031.stderr:/usr/bin/podman:stderr toomanyrequests: You have reached your pull rate limit. You may increase the l
2021-01-04T20:49:19.397 INFO:tasks.workunit.client.0.smithi031.stderr:/usr/bin/podman:stderr Error: Error initializing source docker://ceph/ceph:v15: Error reading manifest
2021-01-04T20:49:19.403 INFO:tasks.workunit.client.0.smithi031.stderr:Traceback (most recent call last):
/ceph/teuthology-archive/yuriw-2021-01-04_18:28:05-rados-wip-yuri2-testing-2021-01-04-0837-octopus-distro-basic-smithi/5754203/teuthology.log
Updated by Sebastian Wagner over 3 years ago
Deepika Upadhyay wrote:
hey Dan,
still observing this issue:
[...]/ceph/teuthology-archive/yuriw-2021-01-04_18:28:05-rados-wip-yuri2-testing-2021-01-04-0837-octopus-distro-basic-smithi/5754203/teuthology.log
Deepika: https://github.com/ceph/ceph/pull/38650 is not yet backported to octopus
Updated by Deepika Upadhyay over 3 years ago
aah, silly of me, pardon. Thanks for fixing it!
Updated by Deepika Upadhyay over 3 years ago
- Status changed from In Progress to Pending Backport
Updated by Deepika Upadhyay over 3 years ago
@Sebastian: seeing this issue still in octopus:
/ceph/teuthology-archive/yuriw-2021-01-18_19:17:40-rados-wip-yuri2-testing-2021-01-18-0815-octopus-distro-basic-smithi/5800014/teuthology.log
The batch is from Jan 18 and the backport was done around Jan 11, so the cause might be something else; can you take a look?
Also, is a nautilus version check expected in octopus?
Non-zero exit code 125 from /bin/podman run --rm --ipc=host --net=host --entrypoint ceph -e CONTAINER_IMAGE=docker.io/ceph/daemon-base:latest-n
nit.client.0.smithi100.stderr:ceph: stderr Trying to pull docker.io/ceph/daemon-base:latest-nautilus...
nit.client.0.smithi100.stderr:ceph: stderr toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.
nit.client.0.smithi100.stderr:ceph: stderr Error: unable to pull docker.io/ceph/daemon-base:latest-nautilus: Error initializing source docker://ceph/daemon-base:latest-nauti
nit.client.0.smithi100.stderr:ceph: stderr toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.
nit.client.0.smithi100.stderr:ceph: stderr Error: unable to pull docker.io/ceph/daemon-base:latest-nautilus: Error initializing source docker://ceph/daemon-base:latest-naut
Updated by Deepika Upadhyay about 3 years ago
- Status changed from Pending Backport to Resolved
Not seeing this issue anymore.