Bug #45631
Error parsing image configuration: Invalid status code returned when fetching blob 429 (Too Many Requests)
Status: Closed
Description
2020-05-20T00:27:53.331 INFO:tasks.workunit.client.0.smithi043.stderr:INFO:cephadm:Pulling latest docker.io/ceph/daemon-base:latest-master-devel container...
2020-05-20T00:27:55.743 INFO:tasks.workunit.client.0.smithi043.stderr:INFO:cephadm:Non-zero exit code 125 from /usr/bin/podman pull docker.io/ceph/daemon-base:latest-master-devel
2020-05-20T00:27:55.743 INFO:tasks.workunit.client.0.smithi043.stderr:INFO:cephadm:/usr/bin/podman:stderr Trying to pull docker.io/ceph/daemon-base:latest-master-devel...
2020-05-20T00:27:55.743 INFO:tasks.workunit.client.0.smithi043.stderr:INFO:cephadm:/usr/bin/podman:stderr Invalid status code returned when fetching blob 429 (Too Many Requests)
2020-05-20T00:27:55.743 INFO:tasks.workunit.client.0.smithi043.stderr:INFO:cephadm:/usr/bin/podman:stderr Error: error pulling image "docker.io/ceph/daemon-base:latest-master-devel": unable to pull docker.io/ceph/daemon-base:latest-master-devel: unable to pull image: Error parsing image configuration: Invalid status code returned when fetching blob 429 (Too Many Requests)
2020-05-20T00:27:55.747 INFO:tasks.workunit.client.0.smithi043.stderr:Traceback (most recent call last):
2020-05-20T00:27:55.748 INFO:tasks.workunit.client.0.smithi043.stderr: File "/tmp/tmp.fuiN265OKA/cephadm", line 4643, in <module>
2020-05-20T00:27:55.748 INFO:tasks.workunit.client.0.smithi043.stderr: r = args.func()
2020-05-20T00:27:55.748 INFO:tasks.workunit.client.0.smithi043.stderr: File "/tmp/tmp.fuiN265OKA/cephadm", line 1154, in _default_image
2020-05-20T00:27:55.748 INFO:tasks.workunit.client.0.smithi043.stderr: return func()
2020-05-20T00:27:55.748 INFO:tasks.workunit.client.0.smithi043.stderr: File "/tmp/tmp.fuiN265OKA/cephadm", line 2334, in command_bootstrap
2020-05-20T00:27:55.749 INFO:tasks.workunit.client.0.smithi043.stderr: call_throws([container_path, 'pull', args.image])
2020-05-20T00:27:55.749 INFO:tasks.workunit.client.0.smithi043.stderr: File "/tmp/tmp.fuiN265OKA/cephadm", line 838, in call_throws
2020-05-20T00:27:55.749 INFO:tasks.workunit.client.0.smithi043.stderr: raise RuntimeError('Failed command: %s' % ' '.join(command))
2020-05-20T00:27:55.749 INFO:tasks.workunit.client.0.smithi043.stderr:RuntimeError: Failed command: /usr/bin/podman pull docker.io/ceph/daemon-base:latest-master-devel
2020-05-20T00:27:55.767 INFO:tasks.workunit.client.0.smithi043.stderr:+ cleanup
2020-05-20T00:27:55.767 INFO:tasks.workunit.client.0.smithi043.stderr:+ '[' true = false ']'
2020-05-20T00:27:55.768 INFO:tasks.workunit.client.0.smithi043.stderr:+ dump_all_logs 00000000-0000-0000-0000-0000deadbeef
2020-05-20T00:27:55.768 INFO:tasks.workunit.client.0.smithi043.stderr:+ local fsid=00000000-0000-0000-0000-0000deadbeef
2020-05-20T00:27:55.768 INFO:tasks.workunit.client.0.smithi043.stderr:++ sudo /tmp/tmp.fuiN265OKA/cephadm --image docker.io/ceph/daemon-base:latest-master-devel ls
2020-05-20T00:27:55.769 INFO:tasks.workunit.client.0.smithi043.stderr:++ jq -r '.[] | select(.fsid == "00000000-0000-0000-0000-0000deadbeef").name'
2020-05-20T00:27:55.903 INFO:tasks.workunit.client.0.smithi043.stderr:+ local names=
2020-05-20T00:27:55.903 INFO:tasks.workunit.client.0.smithi043.stderr:+ echo 'dumping logs for daemons: '
2020-05-20T00:27:55.903 INFO:tasks.workunit.client.0.smithi043.stderr:+ rm -rf tmp.test_cephadm.sh.Q1RhZb
2020-05-20T00:27:55.904 INFO:tasks.workunit.client.0.smithi043.stdout:dumping logs for daemons:
2020-05-20T00:27:55.904 INFO:tasks.workunit.client.0.smithi043.stderr:+ rm -rf /tmp/tmp.fuiN265OKA
2020-05-20T00:27:55.906 DEBUG:teuthology.orchestra.run:got remote process result: 1
/a/nojha-2020-05-19_23:54:26-rados-wip-cephadm-test-distro-basic-smithi/5070557/
Updated by Sebastian Wagner almost 4 years ago
I think this is caused by our CI downloading too many monitoring images.
I think we have two options now:
1. copy the monitoring images to quay/ceph-ci
2. set up a caching registry in sepia itself.
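For the caching-registry option, the stock `registry:2` image supports a pull-through cache mode; clients that pull through it only hit Docker Hub on a cache miss, which keeps the request rate under the limit. A minimal config sketch (the port and storage path are placeholders, not the actual sepia deployment):

```yaml
# config.yml for the standard registry:2 image, acting as a
# pull-through cache in front of Docker Hub.
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
```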
Updated by Sebastian Wagner almost 4 years ago
- Category set to cephadm
- Source set to Q/A
Updated by Kefu Chai almost 4 years ago
2020-05-21T11:50:27.302 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr Trying to pull docker.io/prom/prometheus:latest...
2020-05-21T11:50:27.302 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr time="2020-05-21T11:46:18Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-05-21T11:50:27.302 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr time="2020-05-21T11:47:20Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-05-21T11:50:27.302 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr time="2020-05-21T11:48:22Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-05-21T11:50:27.303 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr time="2020-05-21T11:49:24Z" level=error msg="HEADER map[Cache-Control:[no-cache] Content-Type:[application/json] Retry-After:[60]]"
2020-05-21T11:50:27.303 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr too many request to registry
2020-05-21T11:50:27.303 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr Error: unable to pull prom/prometheus:latest: 4 errors occurred:
2020-05-21T11:50:27.303 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr * Error initializing source docker://registry.access.redhat.com/prom/prometheus:latest: Error reading manifest latest in registry.access.redhat.com/prom/prometheus: name unknown: Repo not found
2020-05-21T11:50:27.303 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr * Error initializing source docker://registry.fedoraproject.org/prom/prometheus:latest: Error reading manifest latest in registry.fedoraproject.org/prom/prometheus: manifest unknown: manifest unknown
2020-05-21T11:50:27.304 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr * Error initializing source docker://registry.centos.org/prom/prometheus:latest: Error reading manifest latest in registry.centos.org/prom/prometheus: manifest unknown: manifest unknown
2020-05-21T11:50:27.304 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr * Error parsing image configuration: too many request to registry
2020-05-21T11:50:27.304 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: INFO:cephadm:stat:stderr
2020-05-21T11:50:27.304 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: Traceback (most recent call last):
2020-05-21T11:50:27.304 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: File "<stdin>", line 4657, in <module>
2020-05-21T11:50:27.305 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: File "<stdin>", line 1155, in _default_image
2020-05-21T11:50:27.305 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: File "<stdin>", line 2734, in command_deploy
2020-05-21T11:50:27.305 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: File "<stdin>", line 2663, in extract_uid_gid_monitoring
2020-05-21T11:50:27.305 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: File "<stdin>", line 1736, in extract_uid_gid
2020-05-21T11:50:27.305 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: File "<stdin>", line 2185, in run
2020-05-21T11:50:27.306 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: File "<stdin>", line 839, in call_throws
2020-05-21T11:50:27.306 INFO:ceph.mgr.y.smithi077.stdout:May 21 11:50:26 smithi077 bash[25086]: RuntimeError: Failed command: /bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=prom/prometheus:latest -e NODE_NAME=smithi099 --entrypoint stat prom/prometheus:latest -c %u %g /etc/prometheus
/a/kchai-2020-05-21_10:34:02-rados-wip-kefu-testing-2020-05-21-1652-distro-basic-smithi/5076374
prom/prometheus:latest
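The repeated `Retry-After:[60]` headers in the log above show the registry telling clients when to retry. A minimal sketch of honoring that hint with capped exponential backoff (hypothetical helper names; this is not cephadm's actual pull-retry code):

```python
import time


class RateLimited(Exception):
    """Raised when the registry answers 429; may carry a Retry-After hint."""

    def __init__(self, retry_after=None):
        super().__init__("too many requests")
        self.retry_after = retry_after


def pull_with_backoff(pull, max_attempts=5, sleep=time.sleep):
    """Call pull() until it succeeds, backing off when rate-limited.

    Honors the server's Retry-After hint when present, otherwise
    falls back to capped exponential backoff.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return pull()
        except RateLimited as exc:
            if attempt == max_attempts:
                raise  # out of attempts: surface the 429 to the caller
            delay = exc.retry_after or min(2 ** attempt, 60)
            sleep(delay)
```

The same shape works for any throttled remote call: the server's hint is always preferred over the client's own backoff schedule.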
Updated by Kefu Chai almost 4 years ago
2020-05-23T11:57:06.581 INFO:ceph.node-exporter.b.smithi025.stdout:May 23 11:57:06 smithi025 bash[36716]: too many request to registry
2020-05-23T11:57:06.581 INFO:ceph.node-exporter.b.smithi025.stdout:May 23 11:57:06 smithi025 bash[36716]: Error: unable to pull prom/node-exporter: 3 errors occurred:
2020-05-23T11:57:06.581 INFO:ceph.node-exporter.b.smithi025.stdout:May 23 11:57:06 smithi025 bash[36716]: * Error initializing source docker://registry.access.redhat.com/prom/node-exporter:latest: Error reading manifest latest in registry.access.redhat.com/prom/node-exporter: name unknown: Repo not found
2020-05-23T11:57:06.581 INFO:ceph.node-exporter.b.smithi025.stdout:May 23 11:57:06 smithi025 bash[36716]: * Error initializing source docker://registry.redhat.io/prom/node-exporter:latest: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
2020-05-23T11:57:06.582 INFO:ceph.node-exporter.b.smithi025.stdout:May 23 11:57:06 smithi025 bash[36716]: * Error parsing image configuration: too many request to registry
"prom/node-exporter:latest" this time.
/a/kchai-2020-05-23_10:56:21-rados-wip-kefu-testing-2020-05-23-0054-distro-basic-smithi/5085285/
Updated by Kefu Chai almost 4 years ago
Sebastian Wagner wrote:
I think this is caused by our CI downloading too many monitoring images.
I think we have two options now:
1. copy the monitoring images to quay/ceph-ci
2. set up a caching registry in sepia itself.
Agreed. It seems docker.io thinks we are misusing it. I think a caching registry is better: we don't need to pick which images to copy, and it's faster.
Or, per https://docs.docker.com/docker-hub/download-rate-limit/, we could log into Docker Hub; see https://github.com/ceph/ceph/pull/35217
Updated by Kefu Chai almost 4 years ago
2020-05-23T15:40:31.757 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: INFO:cephadm:stat:stderr /usr/bin/docker: error pulling image configuration: toomanyrequests: Too Many Requests. Please see https://docs.docker.com/docker-hub/download-rate-limit/.
2020-05-23T15:40:31.757 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: INFO:cephadm:stat:stderr See '/usr/bin/docker run --help'.
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: Traceback (most recent call last):
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "<stdin>", line 4657, in <module>
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "<stdin>", line 1155, in _default_image
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "<stdin>", line 2734, in command_deploy
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "<stdin>", line 2669, in extract_uid_gid_monitoring
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "<stdin>", line 1736, in extract_uid_gid
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "<stdin>", line 2185, in run
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "<stdin>", line 839, in call_throws
2020-05-23T15:40:31.758 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: RuntimeError: Failed command: /usr/bin/docker run --rm --net=host --ipc=host -e CONTAINER_IMAGE=prom/alertmanager -e NODE_NAME=smithi185 --entrypoint stat prom/alertmanager -c %u %g /etc/alertmanager
2020-05-23T15:40:31.759 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: Traceback (most recent call last):
2020-05-23T15:40:31.759 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: File "/usr/share/ceph/mgr/cephadm/module.py", line 955, in _run_cephadm
2020-05-23T15:40:31.759 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: code, '\n'.join(err)))
2020-05-23T15:40:31.759 INFO:ceph.mon.smithi185.smithi185.stdout:May 23 15:40:31 smithi185 bash[9788]: RuntimeError: cephadm exited with an error code: 1, stderr:INFO:cephadm:Deploy daemon alertmanager.sm
prom/alertmanager
/a//yuriw-2020-05-23_15:15:01-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5085497
Updated by Brad Hubbard almost 4 years ago
/a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083319
INFO:cephadm:ceph:stderr Error: unable to pull docker.io/ceph/daemon-base:latest-octopus: unable to pull image: Error parsing image configuration: too many request to registry
/a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083423
/a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083209
Error: unable to pull docker.io/ceph/daemon-base:latest-octopus: unable to pull image: Error parsing image configuration: Invalid status code returned when fetching blob 429 (Too Many Requests)
Updated by Sebastian Wagner almost 4 years ago
- Status changed from New to In Progress
- Assignee set to Sebastian Wagner
Updated by Brad Hubbard almost 4 years ago
/a/yuriw-2020-05-22_19:55:53-rados-wip-yuri-master_5.22.20-distro-basic-smithi/5083351
2020-05-22T23:50:25.797 INFO:ceph.mgr.y.smithi156.stdout:May 22 23:50:25 smithi156 bash[10107]: INFO:cephadm:stat:stderr latest: Pulling from prom/prometheus ...
2020-05-22T23:50:25.799 INFO:ceph.mgr.y.smithi156.stdout:May 22 23:50:25 smithi156 bash[10107]: INFO:cephadm:stat:stderr /usr/bin/docker: error pulling image configuration: toomanyrequests: Too Many Requests. Please see https://docs.docker.com/docker-hub/download-rate-limit/
Updated by Kefu Chai almost 4 years ago
- Status changed from In Progress to Resolved
Updated by Kefu Chai almost 4 years ago
Solution:
1. set up a mirror at vossi04
2. added toml as a test dependency; see https://github.com/ceph/teuthology/pull/1493
3. pointed the qa/suites/cephadm tests at this mirror; see https://github.com/ceph/ceph/pull/35235
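Pointing podman at such a mirror is typically done via `/etc/containers/registries.conf` (v2 format); a sketch with a placeholder mirror endpoint (the actual host and port used by the PRs above may differ):

```toml
# Redirect docker.io pulls through a local pull-through cache.
# The mirror location below is a placeholder, not the real endpoint.
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "mirror.front.sepia.ceph.com:5000"
insecure = true
```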
Updated by Brad Hubbard almost 4 years ago
- Has duplicate Bug #45701: rados/cephadm/smoke-roleless fails due to CEPHADM_REFRESH_FAILED health check added
Updated by Brad Hubbard almost 4 years ago
- Status changed from Resolved to New
/a/yuriw-2020-05-28_02:23:45-rados-wip-yuri-master_5.27.20-distro-basic-smithi/5098001
2020-05-28T08:43:04.764 INFO:tasks.workunit.client.0.smithi047.stderr:INFO:cephadm:/usr/bin/podman:stderr Error: error pulling image "docker.io/ceph/daemon-base:latest-master-devel": unable to pull docker.io/ceph/daemon-base:latest-master-devel: unable to pull image: Error parsing image configuration: Invalid status code returned when fetching blob 429 (Too Many Requests)
Updated by Brad Hubbard almost 4 years ago
- Status changed from New to In Progress
Updated by Sebastian Wagner almost 4 years ago
Right. We're now using containers seriously. Thanks, David, for setting up the registries.
Updated by Brad Hubbard almost 4 years ago
- Related to Bug #45807: cephadm/test_cephadm.sh: unable to pull image: Error parsing image configuration: too many request to registry added
Updated by Brad Hubbard almost 4 years ago
Looks like this change needs to be applied more widely. See https://tracker.ceph.com/issues/45807
Updated by Brad Hubbard almost 4 years ago
- Related to Bug #45808: cephadm/test_adoption.sh: Error parsing image configuration: Invalid status code returned when fetching blob 429 (Too Many Requests) added
Updated by Sebastian Wagner almost 4 years ago
- Status changed from In Progress to Closed
Updated by Neha Ojha over 3 years ago
- Related to Bug #48157: test_cephadm.sh failure You have reached your pull rate limit. You may increase the limit by authenticating and upgrading added