Ceph Redmine : Issues
https://tracker.ceph.com/
2021-09-22T14:50:37Z

Orchestrator - Documentation #52704 (Won't Fix): network 'flow' diagram for cephadm
https://tracker.ceph.com/issues/52704
2021-09-22T14:50:37Z Sebastian Wagner
<p>It sounds like you're looking for a network 'flow' diagram for cephadm plus the normal Ceph architecture.</p>

Orchestrator - Feature #51010 (Resolved): add lamport clock
https://tracker.ceph.com/issues/51010
2021-05-27T15:50:42Z Sebastian Wagner

Orchestrator - Feature #51007 (Resolved): add cephadm command that push results every so often
https://tracker.ceph.com/issues/51007
2021-05-27T15:50:17Z Sebastian Wagner

Orchestrator - Feature #51006 (Resolved): add cephadm command to push results to endpoint
https://tracker.ceph.com/issues/51006
2021-05-27T15:50:08Z Sebastian Wagner

Orchestrator - Feature #51005 (Resolved): add mgr/cephadm endpoint
https://tracker.ceph.com/issues/51005
2021-05-27T15:49:58Z Sebastian Wagner

Orchestrator - Bug #50535 (Resolved): add local cephadm bootstrap dev env.
https://tracker.ceph.com/issues/50535
2021-04-27T09:05:17Z Sebastian Wagner
<pre>
sudo ./cephadm bootstrap \
--mon-ip 127.0.0.1 \
--ssh-private-key /home/<user>/.ssh/id_rsa \
--skip-mon-network \
--skip-monitoring-stack \
--single-host-defaults \
--shared_ceph_folder /path/to/ceph/repo
</pre>
<p>That should be all you need to set up a cluster.</p>

Orchestrator - Feature #48979 (New): bin/cephadm: add possibility to query default monitoring imag...
https://tracker.ceph.com/issues/48979
2021-01-25T16:28:09Z Sebastian Wagner
<p>Possibly add this to --help?</p>

Orchestrator - Bug #45462 (Resolved): 'https://download.ceph.com/debian-octopus focal Release' do...
https://tracker.ceph.com/issues/45462
2020-05-11T09:52:40Z Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
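The 404 above means download.ceph.com publishes no debian-octopus repository for focal. A minimal sketch (not part of cephadm; function names are hypothetical) of probing for the apt Release file before writing /etc/apt/sources.list.d/ceph.list, so the failure surfaces earlier:

```python
from urllib.error import HTTPError
from urllib.request import Request, urlopen


def release_url(release: str, codename: str) -> str:
    """URL apt resolves for 'deb https://download.ceph.com/debian-<release> <codename> main'."""
    return f"https://download.ceph.com/debian-{release}/dists/{codename}/Release"


def repo_exists(release: str, codename: str) -> bool:
    """Probe the repo before writing the apt source file (needs network access)."""
    try:
        with urlopen(Request(release_url(release, codename), method="HEAD")) as resp:
            return resp.status == 200
    except HTTPError:
        return False  # e.g. 404 Not Found, as in the log above
```

`repo_exists("octopus", "focal")` would have returned False here, letting the test skip or fail with a clear message instead of a broken `apt update`.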
<p>Are we going to get an Ubuntu repo for focal, or should I disable this test for it?</p>

Infrastructure - Bug #45453 (New): 'https://download.ceph.com/debian-octopus focal Release' does ...
https://tracker.ceph.com/issues/45453
2020-05-08T20:28:48Z Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/">http://pulpito.ceph.com/swagner-2020-05-08_13:52:54-rados-wip-swagner3-testing-2020-05-08-1329-distro-basic-smithi/5034812/</a></p>
<pre>
2020-05-08T15:24:54.694 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo /home/ubuntu/cephtest/clone.client.0/qa/workunits/cephadm/../../../src/cephadm/cephadm -v add-repo --release octopus
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:DEBUG:cephadm:Could not locate podman: podman not found
2020-05-08T15:24:54.798 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo GPG key from https://download.ceph.com/keys/release.asc...
2020-05-08T15:24:54.964 INFO:tasks.workunit.client.0.smithi204.stderr:INFO:root:Installing repo file at /etc/apt/sources.list.d/ceph.list...
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ test_install_uninstall
2020-05-08T15:24:54.983 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo apt update
2020-05-08T15:24:55.009 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
2020-05-08T15:24:55.010 INFO:tasks.workunit.client.0.smithi204.stderr:
2020-05-08T15:24:55.139 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
2020-05-08T15:24:55.270 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
2020-05-08T15:24:55.284 INFO:tasks.workunit.client.0.smithi204.stdout:Ign:3 https://download.ceph.com/debian-octopus focal InRelease
2020-05-08T15:24:55.320 INFO:tasks.workunit.client.0.smithi204.stdout:Err:4 https://download.ceph.com/debian-octopus focal Release
2020-05-08T15:24:55.321 INFO:tasks.workunit.client.0.smithi204.stdout: 404 Not Found [IP: 158.69.68.124 443]
2020-05-08T15:24:55.351 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:5 http://archive.ubuntu.com/ubuntu focal-updates InRelease
2020-05-08T15:24:55.434 INFO:tasks.workunit.client.0.smithi204.stdout:Hit:6 http://archive.ubuntu.com/ubuntu focal-backports InRelease
2020-05-08T15:24:56.423 INFO:tasks.workunit.client.0.smithi204.stdout:Reading package lists...
2020-05-08T15:24:56.442 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://security.ubuntu.com/ubuntu/dists/focal-security/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:E: The repository 'https://download.ceph.com/debian-octopus focal Release' does not have a Release file.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-updates/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.443 INFO:tasks.workunit.client.0.smithi204.stderr:W: http://archive.ubuntu.com/ubuntu/dists/focal-backports/InRelease: The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
2020-05-08T15:24:56.444 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo yum -y install cephadm
2020-05-08T15:24:56.452 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: yum: command not found
2020-05-08T15:24:56.453 INFO:tasks.workunit.client.0.smithi204.stderr:+ sudo dnf -y install cephadm
2020-05-08T15:24:56.459 INFO:tasks.workunit.client.0.smithi204.stderr:sudo: dnf: command not found
2020-05-08T15:24:56.460 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
<p>Are we going to get an Ubuntu repo for focal, or should I disable this test for it?</p>

Orchestrator - Bug #44122 (Won't Fix): bin/cephadm cannot read vstart's ceph.conf
https://tracker.ceph.com/issues/44122
2020-02-13T14:35:58Z Sebastian Wagner
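For context: vstart's ceph.conf repeats the [global] section, and the strict default ConfigParser (which, per the traceback below, is what read_config() uses) rejects that with DuplicateSectionError; strict=False would merge the duplicates instead. A minimal reproduction with hypothetical file contents:

```python
import configparser

# Hypothetical stand-in for a vstart-style ceph.conf that repeats [global]
# (the real file hits this at line 115 of the config, per the traceback).
conf_text = """\
[global]
fsid = 00000000-0000-0000-0000-000000000000

[global]
mon_host = 127.0.0.1
"""

# The default parser is strict: a duplicate section raises, as in the traceback.
try:
    configparser.ConfigParser().read_string(conf_text)
    duplicate_rejected = False
except configparser.DuplicateSectionError:
    duplicate_rejected = True

# strict=False merges duplicate sections, one possible workaround for read_config().
lenient = configparser.ConfigParser(strict=False)
lenient.read_string(conf_text)
```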
<pre>
Traceback (most recent call last):
File "./cephadm", line 3241, in <module>
r = args.func()
File "./cephadm", line 2166, in command_ls
legacy_dir=args.legacy_dir)
File "./cephadm", line 2192, in list_daemons
legacy_dir=legacy_dir)
File "./cephadm", line 873, in get_legacy_daemon_fsid
fsid = get_legacy_config_fsid(cluster, legacy_dir=legacy_dir)
File "./cephadm", line 852, in get_legacy_config_fsid
config = read_config(config_file)
File "./cephadm", line 586, in read_config
cp.read_file(s_io)
File "/usr/lib/python3.6/configparser.py", line 718, in read_file
self._read(f, source)
File "/usr/lib/python3.6/configparser.py", line 1066, in _read
lineno)
configparser.DuplicateSectionError: While reading from '<???>' [line 115]: section 'global' already exists
</pre>

Orchestrator - Bug #44039 (Rejected): bin/cephadm: Remove --allow-fqdn-hostname
https://tracker.ceph.com/issues/44039
2020-02-07T14:55:10Z Sebastian Wagner
<p>IMO, `hostname` should never return an FQDN. What about removing this flag until someone demands it?</p>
<p>If we remove this ability, then we should also adjust the 'orchestrator host add' check, which currently allows an FQDN (as long as it matches the configured hostname).</p>
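The stricter check proposed above could be sketched as follows (hypothetical helper names, not cephadm's actual code; `monitor2.example.com` is an invented example):

```python
def is_fqdn(hostname: str) -> bool:
    """True if the name carries a domain part, e.g. 'monitor2.example.com'."""
    return "." in hostname


def check_host_add(hostname: str, configured_hostname: str) -> None:
    """Sketch of a stricter 'orchestrator host add' validation: reject FQDNs
    outright instead of accepting them when they match the configured name."""
    if is_fqdn(hostname):
        raise ValueError(
            "%r looks like an FQDN; use the short hostname "
            "(`hostname` without -f)" % hostname
        )
    if hostname != configured_hostname:
        raise ValueError(
            "%r does not match the host's configured hostname %r"
            % (hostname, configured_hostname)
        )
```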
<p><a class="external" href="https://github.com/ceph/ceph/pull/33042">https://github.com/ceph/ceph/pull/33042</a></p>

Orchestrator - Bug #43932 (Resolved): bin/cephadm: All daemons should call port_in_use
https://tracker.ceph.com/issues/43932
2020-01-31T13:32:15Z Sebastian Wagner
<pre>
Jan 26 18:38:04 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Service RestartSec=10s expired, scheduling restart.
Jan 26 18:38:04 monitor2 systemd[1]: Stopped Ceph daemon for 6a888c04-4041-11ea-96dc-525400e6f794.
Jan 26 18:38:04 monitor2 systemd[1]: Starting Ceph daemon for 6a888c04-4041-11ea-96dc-525400e6f794...
Jan 26 18:38:04 monitor2 podman[22283]: Error: no container with name or ID ceph-6a888c04-4041-11ea-96dc-525400e6f794-rgw.realm1.default.aylrtn found: no such container
Jan 26 18:38:04 monitor2 systemd[1]: Started Ceph daemon for 6a888c04-4041-11ea-96dc-525400e6f794.
Jan 26 18:38:04 monitor2 systemd[1]: Started libpod-conmon-f076e67e3fa79d4bb597e4ed296d3963bf069eb3a72b0ee346ec104028a2db20.scope.
Jan 26 18:38:05 monitor2 systemd[1]: Started libcontainer container f076e67e3fa79d4bb597e4ed296d3963bf069eb3a72b0ee346ec104028a2db20.
Jan 26 18:38:05 monitor2 bash[22294]: debug 2020-01-26T17:38:05.461+0000 7f192a83fa80 0 framework: beast
Jan 26 18:38:05 monitor2 bash[22294]: debug 2020-01-26T17:38:05.461+0000 7f192a83fa80 0 framework conf key: port, val: 7480
Jan 26 18:38:05 monitor2 bash[22294]: debug 2020-01-26T17:38:05.473+0000 7f192a83fa80 0 deferred set uid:gid to 167:167 (ceph:ceph)
Jan 26 18:38:05 monitor2 bash[22294]: debug 2020-01-26T17:38:05.473+0000 7f192a83fa80 0 ceph version 15.0.0-9543-g1c7fc80ba1 (1c7fc80ba17319e7d50724ac7b32d47bdba4204a) octopus (dev), process radosgw, pid 1
Jan 26 18:38:05 monitor2 bash[22294]: debug 2020-01-26T17:38:05.809+0000 7f192a83fa80 0 starting handler: beast
Jan 26 18:38:05 monitor2 bash[22294]: debug 2020-01-26T17:38:05.813+0000 7f192a83fa80 -1 failed to bind address 0.0.0.0:7480: Address already in use
Jan 26 18:38:05 monitor2 bash[22294]: debug 2020-01-26T17:38:05.813+0000 7f192a83fa80 -1 ERROR: failed initializing frontend
Jan 26 18:38:05 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Main process exited, code=exited, status=98/n/a
Jan 26 18:38:06 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Unit entered failed state.
Jan 26 18:38:06 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Failed with result 'exit-code'.
Jan 26 18:38:16 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Service RestartSec=10s expired, scheduling restart.
Jan 26 18:38:16 monitor2 systemd[1]: Stopped Ceph daemon for 6a888c04-4041-11ea-96dc-525400e6f794.
Jan 26 18:38:16 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Start request repeated too quickly.
Jan 26 18:38:16 monitor2 systemd[1]: Failed to start Ceph daemon for 6a888c04-4041-11ea-96dc-525400e6f794.
Jan 26 18:38:16 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Unit entered failed state.
Jan 26 18:38:16 monitor2 systemd[1]: ceph-6a888c04-4041-11ea-96dc-525400e6f794@rgw.realm1.default.aylrtn.service: Failed with result 'exit-code'.
</pre>
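A pre-flight check along the lines of the proposed port_in_use() could be sketched as follows (hypothetical helper, not cephadm's actual implementation):

```python
import socket


def port_in_use(port: int, host: str = "0.0.0.0") -> bool:
    """Return True if (host, port) cannot be bound, i.e. something is
    already listening there."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR so lingering TIME_WAIT connections don't count as "in use"
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
        except OSError:
            return True
        return False
```

If this returned True for 0.0.0.0:7480, cephadm could refuse to deploy the rgw daemon up front instead of leaving it in the systemd restart loop above.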
<p>Because bin/cephadm deployed the daemon successfully, users get no helpful error message unless they manually inspect the journald logs.</p>
<p>If we call port_in_use() <strong>before</strong> deploying the daemons, users will get a useful error message.</p>

teuthology - Bug #40749 (New): /task/ansible.py: AnsibleFailedError: RepresenterError: ('cannot r...
https://tracker.ceph.com/issues/40749
2019-07-12T09:41:13Z Sebastian Wagner
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log</a></p>
<pre>
Thursday 11 July 2019 14:06:56 +0000 (0:00:00.272) 0:03:00.166 *********
===============================================================================
Check for /usr/bin/python ---------------------------------------------- 27.06s
2019-07-11T14:06:56.061 INFO:teuthology.task.ansible.out:users : Create all admin users with sudo access. ----------------------- 19.15s
users : Update authorized_keys using the keys repo --------------------- 18.43s
testnode : Zap all non-root disks --------------------------------------- 9.59s
testnode : Ensure packages are not present. ----------------------------- 9.53s
testnode : Install packages --------------------------------------------- 6.20s
testnode : ifdown and ifup ---------------------------------------------- 5.15s
users : Remove revoked users -------------------------------------------- 4.99s
common : Update apt cache ----------------------------------------------- 4.01s
testnode : Update apt cache. -------------------------------------------- 3.65s
testnode : Install python-apt ------------------------------------------- 3.11s
testnode : Blow away lingering OSD data and FSIDs ----------------------- 2.94s
testnode : Install apt keys --------------------------------------------- 2.09s
common : Install nrpe package and dependencies (Ubuntu) ----------------- 1.99s
testnode : Install packages via pip ------------------------------------- 1.72s
users : Update authorized_keys for each user with literal keys ---------- 1.72s
ansible-managed : Add authorized keys for the ansible user. ------------- 1.59s
Gathering Facts --------------------------------------------------------- 1.59s
testnode : Stop apache2 ------------------------------------------------- 1.45s
common : Upload megacli and cli64 for raid monitoring and smart.pl to /usr/sbin/. --- 1.18s
2019-07-11T14:06:56.319 ERROR:teuthology.task.ansible:Failed to parse ansible failure log: /tmp/teuth_ansible_failures_mF91TY (while parsing a flow mapping
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 54
expected ',' or '}', but got ':'
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 274)
2019-07-11T14:06:56.320 INFO:teuthology.task.ansible:Archiving ansible failure log at: /home/teuthworker/archive/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml
2019-07-11T14:06:56.323 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 426, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 268, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 295, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 319, in _handle_failure
raise AnsibleFailedError(failures)
AnsibleFailedError: 7
/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml</a></p>
<pre>
Failure object was: {'mira062.front.sepia.ceph.com': {'_ansible_no_log': False, u'invocation': {u'module_args': {u'name': u'mira062'}}, 'changed': False, u'msg': u"Command failed rc=1, out=, err=Could not get property: Failed to activate service 'org.freedesktop.hostname1': timed out\n"}}
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
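The dump fails because failure_log.py hands yaml.safe_dump a dict containing non-plain strings (u'mira062' is likely a str subclass such as Ansible's AnsibleUnsafeText), and the safe representer dispatches on the exact builtin type. One possible fix in the callback is to coerce everything to plain builtins first; a sketch, where UnsafeText is a stand-in for the Ansible type:

```python
def to_plain(obj):
    """Recursively coerce subclasses of builtin containers/scalars to plain
    builtins, so yaml.safe_dump (which matches exact types) can represent them."""
    if isinstance(obj, dict):
        return {to_plain(k): to_plain(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_plain(v) for v in obj]
    if isinstance(obj, bool):  # checked before int: bool is an int subclass
        return bool(obj)
    if isinstance(obj, str):
        return str(obj)
    if isinstance(obj, int):
        return int(obj)
    return obj


class UnsafeText(str):
    """Stand-in for Ansible's AnsibleUnsafeText, a str subclass the safe
    representer refuses to dump."""


failure = {UnsafeText("mira062"): {"msg": UnsafeText("timed out")}}
plain = to_plain(failure)
# yaml.safe_dump(plain) would now succeed: every type is an exact builtin.
```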
<p>Is this a Teuthology issue, a ceph-ansible issue, or just mira062 timing out?</p>

Ceph - Bug #38145 (New): /usr/bin/ld: cmdparse.cc.o: bad reloc symbol index
https://tracker.ceph.com/issues/38145
2019-02-01T10:09:43Z Sebastian Wagner
<p>Hey,</p>
<p>In the Sepia lab, with the "Ubuntu Xenial" build flavour, I'm getting a linker error:</p>
<pre>
/usr/bin/ld: common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: bad reloc symbol index (0x30317453 >= 0x2d1) for offset 0x4961534563497374 in section `.debug_info'
common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
src/CMakeFiles/ceph-common.dir/build.make:446: recipe for target 'lib/libceph-common.so.1' failed
make[4]: *** [lib/libceph-common.so.1] Error 1
make[4]: Leaving directory '/build/ceph-14.0.1-3099-g9e926e9/obj-x86_64-linux-gnu'
</pre>
<ul>
<li><a href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=xenial,DIST=xenial,MACHINE_SIZE=huge/17352//consoleFull" class="external">Jenkins Log</a></li>
<li><a href="https://shaman.ceph.com/builds/ceph/wip-swagner-testing/9e926e9927a4c9592403dbce959e526ba3860206/default/140455/" class="external">Shaman build</a></li>
</ul>
<p>I don't know whether this error is reproducible.</p>

rbd - Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
https://tracker.ceph.com/issues/22253
2017-11-27T14:37:58Z Sebastian Wagner
<p>Environment: a fairly small vstart cluster.</p>
<p>This is the stack trace:<br /><pre>
#3 0x00007fffed44711c in __GI___fortify_fail (msg=<optimized out>, msg@entry=0x7fffed4bd441 "stack smashing detected") at fortify_fail.c:37
#4 0x00007fffed4470c0 in __stack_chk_fail () at stack_chk_fail.c:28
#5 0x00007ffff78f0beb in librbd::ImageCtx::perf_start (this=this@entry=0x555555b7bf70, name="librbd-8c39e2ae8944a-rbd-huge2") at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:397
#6 0x00007ffff78f3cb4 in librbd::ImageCtx::init (this=0x555555b7bf70) at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:275
#7 0x00007ffff799dacd in librbd::image::OpenRequest<librbd::ImageCtx>::send_register_watch (this=this@entry=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:477
#8 0x00007ffff79a3102 in librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata (this=this@entry=0x555555b7fe60, result=result@entry=0x7fffb77fa374) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:471
#9 0x00007ffff79a351f in librbd::util::detail::rados_state_callback<librbd::image::OpenRequest<librbd::ImageCtx>, &librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata, true> (c=<optimized out>, arg=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/Utils.h:39
#10 0x00007ffff75d678d in librados::C_AioComplete::finish (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/librados/AioCompletionImpl.h:169
#11 0x0000555555613949 in Context::complete (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/include/Context.h:70
#12 0x00007fffeeab6010 in Finisher::finisher_thread_entry (this=0x555555acb3e8) at /home/sebastian/Repos/ceph/src/common/Finisher.cc:72
#13 0x00007fffee3a86ba in start_thread (arg=0x7fffb77fe700) at pthread_create.c:333
#14 0x00007fffed4353dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
</pre></p>