Ceph : Issues
https://tracker.ceph.com/
2024-03-24T14:13:34Z
Ceph
Redmine
Ceph - Bug #65098 (New): detect_odr_violation of ceph::buffer::list::always_empty_bptr
https://tracker.ceph.com/issues/65098
2024-03-24T14:13:34Z
Kefu Chai
tchaikov@gmail.com
<p>After building the tree with ASan enabled, running rbd fails with:</p>
<pre>
=================================================================
==117994==ERROR: AddressSanitizer: odr-violation (0x7fc152daf880):
[1] size=24 'ceph::buffer::list::always_empty_bptr' /home/jenkins-build/build/workspace/ceph-pull-requests/src/common/buffer.cc:1267:34
[2] size=24 'ceph::buffer::list::always_empty_bptr' /home/jenkins-build/build/workspace/ceph-pull-requests/src/common/buffer.cc:1267:34
These globals were registered at these points:
[1]:
#0 0x55856c66727a in __asan_register_globals (/home/jenkins-build/build/workspace/ceph-pull-requests/build/bin/rbd+0xa4c27a) (BuildId: 917e672500fe0c0d67124c88ecdfcf36cb68ff53)
#1 0x7fc1525f7cee in asan.module_ctor buffer.cc
#2 0x7fc155dea47d in call_init elf/./elf/dl-init.c:70:3
[2]:
#0 0x55856c66727a in __asan_register_globals (/home/jenkins-build/build/workspace/ceph-pull-requests/build/bin/rbd+0xa4c27a) (BuildId: 917e672500fe0c0d67124c88ecdfcf36cb68ff53)
#1 0x7fc1505c222e in asan.module_ctor buffer.cc
#2 0x7fc155dea47d in call_init elf/./elf/dl-init.c:70:3
==117994==HINT: if you don't care about these errors you may set ASAN_OPTIONS=detect_odr_violation=0
SUMMARY: AddressSanitizer: odr-violation: global 'ceph::buffer::list::always_empty_bptr' at /home/jenkins-build/build/workspace/ceph-pull-requests/src/common/buffer.cc:1267:34
==117994==ABORTING
</pre>
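<p>As the HINT line in the report suggests, the check can be disabled while the duplicate registration itself is investigated. This is a workaround only, and the rbd invocation below is illustrative:</p>

```shell
# Workaround from the ASan hint above: disable ODR checking for this run.
# This only hides the report; it does not fix the duplicate definition of
# ceph::buffer::list::always_empty_bptr.
ASAN_OPTIONS=detect_odr_violation=0 ./bin/rbd --help
```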
<p>To enable ASan, one can apply the changeset at <a class="external" href="https://github.com/ceph/ceph/pull/56241">https://github.com/ceph/ceph/pull/56241</a></p>
RADOS - Bug #64788 (Fix Under Review): EpollDriver::del_event() crashes when the nic is unplugged
https://tracker.ceph.com/issues/64788
2024-03-07T11:48:12Z
Kefu Chai
tchaikov@gmail.com
<p>librbd uses msgr to talk to its Ceph cluster. If the client's NIC is hot-unplugged, there is a chance that <code>EpollDriver::del_event()</code> crashes, because <code>epoll_ctl(epfd, EPOLL_CTL_DEL, fd, &……)</code> returns <code>-ENOENT</code> and its caller, <code>EventCenter::delete_file_event()</code>, treats that as a sign of a bug.</p>
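<p>A minimal Python reproduction of the underlying epoll behaviour (not Ceph code; Linux only): deleting an fd that the kernel does not have registered fails with ENOENT, which a caller may reasonably want to tolerate rather than treat as fatal.</p>

```python
import errno
import select
import socket

ep = select.epoll()
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# The socket was never registered with this epoll instance, so the
# EPOLL_CTL_DEL issued by unregister() fails and errno is set to ENOENT.
try:
    ep.unregister(sock.fileno())
    delete_errno = 0
except OSError as e:
    delete_errno = e.errno

sock.close()
ep.close()
```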
Ceph - Bug #58069 (Resolved): flake8 fails since Nov 23 2022
https://tracker.ceph.com/issues/58069
2022-11-24T05:42:10Z
Kefu Chai
tchaikov@gmail.com
<pre>
Traceback (most recent call last):
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/bin/flake8", line 8, in
sys.exit(main())
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8/main/cli.py", line 23, in main
app.run(argv)
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8/main/application.py", line 198, in run
self._run(argv)
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8/main/application.py", line 186, in _run
self.initialize(argv)
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8/main/application.py", line 165, in initialize
self.plugins, self.options = parse_args(argv)
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8/options/parse_args.py", line 51, in parse_args
option_manager.register_plugins(plugins)
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8/options/manager.py", line 259, in register_plugins
add_options(self)
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8_quotes/__init__.py", line 109, in add_options
cls._register_opt(parser, '--quotes', action='store',
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8_quotes/__init__.py", line 99, in _register_opt
parser.add_option(*args, **kwargs)
File "/home/jenkins-build/build/workspace/ceph-pull-requests/src/cephadm/.tox/flake8/lib/python3.10/site-packages/flake8/options/manager.py", line 281, in add_option
self._current_group.add_argument(*option_args, **option_kwargs)
File "/usr/lib/python3.10/argparse.py", line 1440, in add_argument
raise ValueError('%r is not callable' % (type_func,))
ValueError: 'choice' is not callable
</pre>
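<p>The underlying incompatibility can be reproduced without flake8 at all: <code>type='choice'</code> is an optparse-era convention, while argparse (which flake8's option manager delegates to in the traceback above) requires <code>type</code> to be a callable. A minimal sketch:</p>

```python
import argparse

# argparse rejects the optparse-style string type at registration time,
# which is exactly where the flake8_quotes plugin fails above.
parser = argparse.ArgumentParser()
try:
    parser.add_argument('--quotes', action='store', type='choice',
                        choices=['single', 'double'])
    error = None
except ValueError as e:
    error = str(e)
```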
Dashboard - Bug #57345 (Resolved): mgr/dashboard: "openapi-check" test fails
https://tracker.ceph.com/issues/57345
2022-08-31T07:31:57Z
Kefu Chai
tchaikov@gmail.com
<a name="Description-of-problem"></a>
<h3 >Description of problem<a href="#Description-of-problem" class="wiki-anchor">¶</a></h3>
<p>openapi-check test fails.</p>
<a name="Environment"></a>
<h3 >Environment<a href="#Environment" class="wiki-anchor">¶</a></h3>
<ul>
<li><code>ceph version</code> string: 91c57c3fb160db1c95d412b966d703ca08ee75ef</li>
<li>Platform (OS/distro/release): ubuntu jammy</li>
<li>Cluster details (nodes, monitors, OSDs):</li>
<li>Did it happen on a stable environment or after a migration/upgrade?:</li>
<li>Browser used (e.g.: <code>Version 86.0.4240.198 (Official Build) (64-bit)</code>):</li>
</ul>
<a name="How-reproducible"></a>
<h3 >How reproducible<a href="#How-reproducible" class="wiki-anchor">¶</a></h3>
<p>Not reliably reproducible; it fails roughly 1 out of 20 times.</p>
<p>Steps:</p>
<p>make check</p>
<a name="Actual-results"></a>
<h3 >Actual results<a href="#Actual-results" class="wiki-anchor">¶</a></h3>
<pre>
openapi-check run-test: commands[1] | diff openapi.yaml /home/jenkins-build/build/workspace/ceph-pull-requests/src/pybind/mgr/dashboard/.tox/openapi-check/tmp/openapi.yaml
10483c10483
< - description: Get Cluster Details
---
> - description: Get Ceph Users
ERROR: InvocationError for command /usr/bin/diff openapi.yaml .tox/openapi-check/tmp/openapi.yaml (exited with code 1)
</pre>
<a name="Expected-results"></a>
<h3 >Expected results<a href="#Expected-results" class="wiki-anchor">¶</a></h3>
<p>test passes</p>
<a name="Additional-info"></a>
<h3 >Additional info<a href="#Additional-info" class="wiki-anchor">¶</a></h3>
<p>found at <a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/102789/consoleText">https://jenkins.ceph.com/job/ceph-pull-requests/102789/consoleText</a></p>
mgr - Bug #57256 (Can't reproduce): test_force_no_gzip fails
https://tracker.ceph.com/issues/57256
2022-08-23T15:09:40Z
Kefu Chai
tchaikov@gmail.com
<pre>
2022-08-23T14:51:46.963 INFO:tasks.cephfs_test_runner:test_force_no_gzip (tasks.mgr.dashboard.test_requests.RequestsTest) ... FAIL
2022-08-23T14:51:46.964 INFO:tasks.cephfs_test_runner:
2022-08-23T14:51:46.965 INFO:tasks.cephfs_test_runner:======================================================================
2022-08-23T14:51:46.965 INFO:tasks.cephfs_test_runner:FAIL: test_force_no_gzip (tasks.mgr.dashboard.test_requests.RequestsTest)
2022-08-23T14:51:46.966 INFO:tasks.cephfs_test_runner:----------------------------------------------------------------------
2022-08-23T14:51:46.967 INFO:tasks.cephfs_test_runner:Traceback (most recent call last):
2022-08-23T14:51:46.967 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_4ef23433c76a897083753748dc9c5dc9509f7142/qa/tasks/mgr/dashboard/test_requests.py", line 21, in test_force_no_gzip
2022-08-23T14:51:46.968 INFO:tasks.cephfs_test_runner: 'Content-Type': 'application/json'
2022-08-23T14:51:46.968 INFO:tasks.cephfs_test_runner: File "/home/teuthworker/src/git.ceph.com_ceph-c_4ef23433c76a897083753748dc9c5dc9509f7142/qa/tasks/mgr/dashboard/helper.py", line 499, in assertHeaders
2022-08-23T14:51:46.969 INFO:tasks.cephfs_test_runner: self.assertEqual(self._resp.headers[name], value)
2022-08-23T14:51:46.969 INFO:tasks.cephfs_test_runner:AssertionError: 'application/vnd.ceph.api.v1.0+json' != 'application/json'
2022-08-23T14:51:46.970 INFO:tasks.cephfs_test_runner:- application/vnd.ceph.api.v1.0+json
2022-08-23T14:51:46.971 INFO:tasks.cephfs_test_runner:+ application/json
</pre>
<p>/a//kchai-2022-08-23_13:19:39-rados-wip-kefu-testing-2022-08-22-2243-distro-default-smithi/6987851/teuthology.log</p>
<p>This looks like a regression introduced by <a class="external" href="https://github.com/ceph/ceph/pull/47720">https://github.com/ceph/ceph/pull/47720</a></p>
Ceph - Bug #57116 (Resolved): TestMockMigrationHttpClient and HTTPManager "bind: Address already ...
https://tracker.ceph.com/issues/57116
2022-08-12T13:54:46Z
Kefu Chai
tchaikov@gmail.com
<pre>
[ RUN ] TestMockMigrationHttpClient.OpenCloseHttp
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.OpenCloseHttp (127 ms)
[ RUN ] TestMockMigrationHttpClient.OpenCloseHttps
[ OK ] TestMockMigrationHttpClient.OpenCloseHttps (131 ms)
[ RUN ] TestMockMigrationHttpClient.OpenHttpsHandshakeFail
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.OpenHttpsHandshakeFail (99 ms)
[ RUN ] TestMockMigrationHttpClient.OpenInvalidUrl
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.OpenInvalidUrl (109 ms)
[ RUN ] TestMockMigrationHttpClient.OpenResolveFail
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.OpenResolveFail (90 ms)
[ RUN ] TestMockMigrationHttpClient.OpenConnectFail
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.OpenConnectFail (87 ms)
[ RUN ] TestMockMigrationHttpClient.IssueHead
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.IssueHead (83 ms)
[ RUN ] TestMockMigrationHttpClient.IssueGet
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.IssueGet (89 ms)
[ RUN ] TestMockMigrationHttpClient.IssueSendFailed
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.IssueSendFailed (82 ms)
[ RUN ] TestMockMigrationHttpClient.IssueReceiveFailed
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.IssueReceiveFailed (78 ms)
[ RUN ] TestMockMigrationHttpClient.IssueResetFailed
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.IssueResetFailed (78 ms)
[ RUN ] TestMockMigrationHttpClient.IssuePipelined
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.IssuePipelined (76 ms)
[ RUN ] TestMockMigrationHttpClient.IssuePipelinedRestart
[ OK ] TestMockMigrationHttpClient.IssuePipelinedRestart (104 ms)
[ RUN ] TestMockMigrationHttpClient.ShutdownInFlight
[ OK ] TestMockMigrationHttpClient.ShutdownInFlight (105 ms)
[ RUN ] TestMockMigrationHttpClient.GetSize
[ OK ] TestMockMigrationHttpClient.GetSize (127 ms)
[ RUN ] TestMockMigrationHttpClient.GetSizeError
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.GetSizeError (190 ms)
[ RUN ] TestMockMigrationHttpClient.Read
unknown file: Failure
C++ exception with description "bind: Address already in use [system:98]" thrown in SetUp().
[ FAILED ] TestMockMigrationHttpClient.Read (202 ms)
</pre>
<p>see <a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/101149/consoleFull">https://jenkins.ceph.com/job/ceph-pull-requests/101149/consoleFull</a></p>
<p>I did some preliminary investigation but still have no clues: we bind on port 0, which means we let the OS pick a random usable port for us.</p>
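<p>For reference, this is what binding to port 0 does, and why an "Address already in use" error here is surprising: the kernel hands each listener a distinct unused ephemeral port, as this small sketch shows.</p>

```python
import socket

# Bind two listeners to port 0: the kernel assigns each one a free
# ephemeral port, so they cannot collide while both are open.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
b.bind(("127.0.0.1", 0))

port_a = a.getsockname()[1]
port_b = b.getsockname()[1]

a.close()
b.close()
```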
<p>Ilya, could you take a look please?</p>
sepia - Bug #57051 (Can't reproduce): unable to install librabbitmq on RHEL 8.6
https://tracker.ceph.com/issues/57051
2022-08-07T08:05:08Z
Kefu Chai
tchaikov@gmail.com
<p>This failed a bunch of tests on RHEL 8.6 with the failure reason:</p>
<blockquote>
<p>Command failed on smithi123 with status 1: 'sudo yum -y install ceph-radosgw'</p>
</blockquote>
<p>see <a class="external" href="https://pulpito.ceph.com/kchai-2022-08-07_07:44:14-rados-wip-kefu-testing-2022-08-07-1123-distro-default-smithi/">https://pulpito.ceph.com/kchai-2022-08-07_07:44:14-rados-wip-kefu-testing-2022-08-07-1123-distro-default-smithi/</a></p>
<pre>
[kchai@smithi162 ~]$ sudo dnf install librabbitmq
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) 151 kB/s | 2.8 kB 00:00
Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) 138 kB/s | 2.4 kB 00:00
No match for argument: librabbitmq
Error: Unable to find a match: librabbitmq
</pre>
<p>excerpt from /etc/yum.repos.d/redhat.repo</p>
<pre>
[rhel-8-for-x86_64-baseos-rpms]
name = Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs)
baseurl = https://satellite.front.sepia.ceph.com/pulp/repos/Ceph/Library/content/dist/rhel8/$releasever/x86_64/baseos/os
enabled = 1
gpgcheck = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
sslverify = 1
sslcacert = /etc/rhsm/ca/katello-server-ca.pem
sslclientkey = /etc/pki/entitlement/6358742043260333223-key.pem
sslclientcert = /etc/pki/entitlement/6358742043260333223.pem
metadata_expire = 1
enabled_metadata = 1
</pre>
RADOS - Bug #55519 (Resolved): should use TCMalloc for better performance
https://tracker.ceph.com/issues/55519
2022-05-03T07:22:03Z
Kefu Chai
tchaikov@gmail.com
<p>We had been using TCMalloc in older releases, but at some point we stopped doing so. Let's bring it back.</p>
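<p>If memory serves, the allocator is selected at configure time via a CMake cache variable; assuming the <code>ALLOCATOR</code> option the build exposes, switching back would look like the following (flag name and accepted values should be double-checked against the tree's CMakeLists.txt):</p>

```shell
# Assumed CMake cache variable; verify against the tree before relying on it.
cmake -DALLOCATOR=tcmalloc ..
```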
Dashboard - Bug #54605 (Triaged): KeyError: 'loki'
https://tracker.ceph.com/issues/54605
2022-03-17T11:10:03Z
Kefu Chai
tchaikov@gmail.com
<pre>
2022-03-17T07:37:04.200+0000 7fa3b3c53000 -1 mgr[py] Module not found: 'rook'
2022-03-17T07:37:04.200+0000 7fa3b3c53000 -1 mgr[py] Traceback (most recent call last):
File "/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/rook/__init__.py", line 5, in <module>
from .module import RookOrchestrator
File "/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/rook/module.py", line 38, in <module>
import orchestrator
File "/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/orchestrator/__init__.py", line 3, in <module>
from .module import OrchestratorCli
File "/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/orchestrator/module.py", line 20, in <module>
from ._interface import OrchestratorClientMixin, DeviceLightLoc, _cli_read_command, \
File "/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/orchestrator/_interface.py", line 759, in <module>
sum((service_to_daemon_types(t) for t in ServiceSpec.KNOWN_SERVICE_TYPES), []))
File "/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/orchestrator/_interface.py", line 759, in <genexpr>
sum((service_to_daemon_types(t) for t in ServiceSpec.KNOWN_SERVICE_TYPES), []))
File "/home/ubuntu/cephtest/clone.client.0/src/pybind/mgr/orchestrator/_interface.py", line 755, in service_to_daemon_types
return mapping[stype]
KeyError: 'loki'
</pre>
<p>/a/kchai-2022-03-17_05:18:57-rados-wip-cxx20-fixes-core-kefu-distro-default-smithi/6740903/remote/smithi164/log/mgr.x.log</p>
sepia - Bug #52765 (Resolved): smithi072 sda bad disk
https://tracker.ceph.com/issues/52765
2021-09-29T14:38:54Z
Kefu Chai
tchaikov@gmail.com
<pre>
2021-09-29T04:11:38.631007+00:00 smithi072 kernel: sd 4:0:0:0: [sda] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=2s
2021-09-29T04:11:38.631263+00:00 smithi072 kernel: sd 4:0:0:0: [sda] tag#11 Sense Key : Medium Error [current]
2021-09-29T04:11:38.656534+00:00 smithi072 kernel: sd 4:0:0:0: [sda] tag#11 Add. Sense: Unrecovered read error
2021-09-29T04:11:38.656751+00:00 smithi072 kernel: sd 4:0:0:0: [sda] tag#11 CDB: Read(10) 28 00 01 34 93 98 00 00 08 00
2021-09-29T04:11:38.687468+00:00 smithi072 kernel: blk_update_request: I/O error, dev sda, sector 20222872 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
2021-09-29T04:11:38.697464+00:00 smithi072 kernel: ata5: EH complete
2021-09-29T04:11:41.309620+00:00 smithi072 kernel: ata5.00: exception Emask 0x0 SAct 0x3fc00000 SErr 0x0 action 0x0
2021-09-29T04:11:41.309724+00:00 smithi072 kernel: ata5.00: irq_stat 0x40000008
2021-09-29T04:11:41.309746+00:00 smithi072 kernel: ata5.00: failed command: READ FPDMA QUEUED
2021-09-29T04:11:41.315163+00:00 smithi072 kernel: ata5.00: cmd 60/08:b0:98:93:34/00:00:01:00:00/40 tag 22 ncq dma 4096 in#012 res 43/40:08:98:93:34/00:00:01:00:00/00 Emask 0x408 (media error) <F>
2021-09-29T04:11:41.331981+00:00 smithi072 kernel: ata5.00: status: { DRDY SENSE ERR }
2021-09-29T04:11:41.336973+00:00 smithi072 kernel: ata5.00: error: { UNC }
</pre>
Dashboard - Bug #52764 (Resolved): mgr/dashboard: orchestrator/01-hosts.e2e-spec.ts fails
https://tracker.ceph.com/issues/52764
2021-09-29T14:31:26Z
Kefu Chai
tchaikov@gmail.com
<a name="Description-of-problem"></a>
<h3 >Description of problem<a href="#Description-of-problem" class="wiki-anchor">¶</a></h3>
<pre>
2021-09-29T01:36:02.434 INFO:tasks.workunit.client.0.smithi080.stdout: 6 passing (2m)
2021-09-29T01:36:02.434 INFO:tasks.workunit.client.0.smithi080.stdout: 1 failing
2021-09-29T01:36:02.435 INFO:tasks.workunit.client.0.smithi080.stdout:
2021-09-29T01:36:02.435 INFO:tasks.workunit.client.0.smithi080.stdout: 1) Hosts page
2021-09-29T01:36:02.435 INFO:tasks.workunit.client.0.smithi080.stdout: when Orchestrator is available
2021-09-29T01:36:02.435 INFO:tasks.workunit.client.0.smithi080.stdout: should delete a host and add it back:
2021-09-29T01:36:02.436 INFO:tasks.workunit.client.0.smithi080.stdout: AssertionError: Timed out retrying: Expected not to find content: '/^smithi148$/' within the selector: 'datatable-body-row datata
ble-body-cell:nth-child(2)' but continuously found it.
2021-09-29T01:36:02.436 INFO:tasks.workunit.client.0.smithi080.stdout: at HostsPageHelper.delete (https://172.21.15.80:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-spec.ts:614
:22)
2021-09-29T01:36:02.436 INFO:tasks.workunit.client.0.smithi080.stdout: at HostsPageHelper.delete (https://172.21.15.80:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-spec.ts:175
:21)
2021-09-29T01:36:02.437 INFO:tasks.workunit.client.0.smithi080.stdout: at HostsPageHelper.descriptor.value (https://172.21.15.80:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-s
pec.ts:409:20)
2021-09-29T01:36:02.437 INFO:tasks.workunit.client.0.smithi080.stdout: at Context.eval (https://172.21.15.80:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts.e2e-spec.ts:329:25)
</pre>
<a name="Environment"></a>
<h3 >Environment<a href="#Environment" class="wiki-anchor">¶</a></h3>
<ul>
<li><code>ceph version</code> string:</li>
<li>Platform (OS/distro/release):</li>
<li>Cluster details (nodes, monitors, OSDs):</li>
<li>Did it happen on a stable environment or after a migration/upgrade?:</li>
<li>Browser used (e.g.: <code>Version 86.0.4240.198 (Official Build) (64-bit)</code>):</li>
</ul>
<a name="How-reproducible"></a>
<h3 >How reproducible<a href="#How-reproducible" class="wiki-anchor">¶</a></h3>
<p>Steps:</p>
<ol>
<li> </li>
<li></li>
<li>...</li>
</ol>
<a name="Actual-results"></a>
<h3 >Actual results<a href="#Actual-results" class="wiki-anchor">¶</a></h3>
<p>/a/kchai-2021-09-28_23:40:16-rados-wip-kefu-testing-2021-09-28-2248-distro-basic-smithi/6412496</p>
<a name="Expected-results"></a>
<h3 >Expected results<a href="#Expected-results" class="wiki-anchor">¶</a></h3>
<a name="Additional-info"></a>
<h3 >Additional info<a href="#Additional-info" class="wiki-anchor">¶</a></h3>
Orchestrator - Bug #52279 (New): cephadm tests fail due to: error adding seccomp filter rule for ...
https://tracker.ceph.com/issues/52279
2021-08-16T08:44:57Z
Kefu Chai
tchaikov@gmail.com
<pre>
2021-08-16T02:38:47.473 INFO:teuthology.orchestra.run.smithi170.stderr:Running command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint ceph --init -e CONTAINER_IMAGE=quay.ce
ph.io/ceph-ci/ceph:f435859ec4ee4b5dc9903c209333ddbd17f7e1da -e NODE_NAME=smithi170 -e CEPH_USE_RANDOM_NONCE=1 quay.ceph.io/ceph-ci/ceph:f435859ec4ee4b5dc9903c209333ddbd17f7e1da --version
2021-08-16T02:38:59.955 INFO:teuthology.orchestra.run.smithi170.stderr:ceph: Error: OCI runtime error: container_linux.go:380: starting container process caused: error adding seccomp filter rule for syscall bdflush: requested action matches default action of filter
2021-08-16T02:38:59.957 INFO:teuthology.orchestra.run.smithi170.stderr:Non-zero exit code 126 from /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint ceph --init -e CONTAINER_IM
AGE=quay.ceph.io/ceph-ci/ceph:f435859ec4ee4b5dc9903c209333ddbd17f7e1da -e NODE_NAME=smithi170 -e CEPH_USE_RANDOM_NONCE=1 quay.ceph.io/ceph-ci/ceph:f435859ec4ee4b5dc9903c209333ddbd17f7e1da --version
2021-08-16T02:38:59.958 INFO:teuthology.orchestra.run.smithi170.stderr:ceph: stderr Error: OCI runtime error: container_linux.go:380: starting container process caused: error adding seccomp filter rule fo
r syscall bdflush: requested action matches default action of filter
2021-08-16T02:38:59.962 INFO:teuthology.orchestra.run.smithi170.stderr:Traceback (most recent call last):
2021-08-16T02:38:59.963 INFO:teuthology.orchestra.run.smithi170.stderr: File "/home/ubuntu/cephtest/cephadm", line 8456, in <module>
2021-08-16T02:38:59.963 INFO:teuthology.orchestra.run.smithi170.stderr: main()
2021-08-16T02:38:59.963 INFO:teuthology.orchestra.run.smithi170.stderr: File "/home/ubuntu/cephtest/cephadm", line 8444, in main
2021-08-16T02:38:59.963 INFO:teuthology.orchestra.run.smithi170.stderr: r = ctx.func(ctx)
2021-08-16T02:38:59.964 INFO:teuthology.orchestra.run.smithi170.stderr: File "/home/ubuntu/cephtest/cephadm", line 1782, in _default_image
2021-08-16T02:38:59.964 INFO:teuthology.orchestra.run.smithi170.stderr: return func(ctx)
2021-08-16T02:38:59.964 INFO:teuthology.orchestra.run.smithi170.stderr: File "/home/ubuntu/cephtest/cephadm", line 4210, in command_bootstrap
2021-08-16T02:38:59.964 INFO:teuthology.orchestra.run.smithi170.stderr: image_ver = CephContainer(ctx, ctx.image, 'ceph', ['--version']).run().strip()
2021-08-16T02:38:59.965 INFO:teuthology.orchestra.run.smithi170.stderr: File "/home/ubuntu/cephtest/cephadm", line 3439, in run
2021-08-16T02:38:59.965 INFO:teuthology.orchestra.run.smithi170.stderr: desc=self.entrypoint, timeout=timeout)
2021-08-16T02:38:59.965 INFO:teuthology.orchestra.run.smithi170.stderr: File "/home/ubuntu/cephtest/cephadm", line 1462, in call_throws
2021-08-16T02:38:59.965 INFO:teuthology.orchestra.run.smithi170.stderr: raise RuntimeError('Failed command: %s' % ' '.join(command))
2021-08-16T02:38:59.966 INFO:teuthology.orchestra.run.smithi170.stderr:RuntimeError: Failed command: /bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint ceph --init -e CONTAINER_
IMAGE=quay.ceph.io/ceph-ci/ceph:f435859ec4ee4b5dc9903c209333ddbd17f7e1da -e NODE_NAME=smithi170 -e CEPH_USE_RANDOM_NONCE=1 quay.ceph.io/ceph-ci/ceph:f435859ec4ee4b5dc9903c209333ddbd17f7e1da --version
2021-08-16T02:38:59.987 DEBUG:teuthology.orchestra.run:got remote process result: 1
</pre>
<p>see</p>
<ul>
<li><a class="external" href="https://pulpito.ceph.com/kchai-2021-08-15_13:10:58-rados-wip-kefu-testing-2021-08-15-1845-distro-basic-smithi/">https://pulpito.ceph.com/kchai-2021-08-15_13:10:58-rados-wip-kefu-testing-2021-08-15-1845-distro-basic-smithi/</a></li>
<li><a class="external" href="https://pulpito.ceph.com/kchai-2021-08-16_02:21:02-rados-master-distro-basic-smithi/">https://pulpito.ceph.com/kchai-2021-08-16_02:21:02-rados-master-distro-basic-smithi/</a></li>
</ul>
rgw - Bug #52278 (Resolved): check-generated.sh fails
https://tracker.ceph.com/issues/52278
2021-08-16T08:32:27Z
Kefu Chai
tchaikov@gmail.com
<pre>
**** rgw_log_entry test 2 binary reencode check failed ****
ceph-dencoder type rgw_log_entry select_test 2 encode export /tmp/typ-O8hVYK2Gb
ceph-dencoder type rgw_log_entry select_test 2 encode decode encode export /tmp/typ-G09itHHIC
cmp /tmp/typ-O8hVYK2Gb /tmp/typ-G09itHHIC
...
The following tests FAILED:
132 - check-generated.sh (Failed)
...
</pre>
crimson - Bug #52259 (Resolved): unittest-staged-fltree fails
https://tracker.ceph.com/issues/52259
2021-08-14T04:09:24Z
Kefu Chai
tchaikov@gmail.com
<pre>
...
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - BtreeLBAManager::get_root: reading root at paddr_t<0, 8192> depth 1
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - get_lba_btree_extent: reading leaf at offset paddr_t<0, 8192>, depth 1
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - get_lba_btree_extent: read leaf at offset paddr_t<0, 8192> CachedExtent(addr=0x601000a9c780, type=LADDR_LEAF, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), paddr=paddr_t<0, 8192>, state=DIRTY, last_committed_crc=326493981, refcount=4, size=1, meta=btree_node_meta_t(begin=0, end=18446744073709551615, depth=1)), parent CachedExtent(addr=0x601000176e00, type=ROOT, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), paddr=paddr_t<NULL_SEG, NULL_OFF>, state=DIRTY, last_committed_crc=0, refcount=7)
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - LBALeafNode::lookup_pin 0
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - BtreeLBAManager::get_mapping: got mapping LBAPin(0~16384->paddr_t<1, 8192>
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - TransactionManager::pin_to_extent(0x601000858140): getting extent LBAPin(0~16384->paddr_t<1, 8192>
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - TransactionManager::pin_to_extent(0x601000858140): got extent CachedExtent(addr=0x601000b56d00, type=ONODE_BLOCK_STAGED, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=850290427, refcount=3, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - OTree::LeafNode::insert_value(0x601000858140): insert NodeL0@0x0+4000Lv0$ with insert_key=key_hobj(4,4,4; "ns7_..__/128B","oid7..__/13B"; 3,3), insert_value=ValueConf(TEST_EXTENDED, 989B), insert_pos(END, END, END), history=history(EQ, GT, --), mstat(-2) ...
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - OTree::Seastore(0x601000858140): mutate CachedExtent(addr=0x601000b56d00, type=ONODE_BLOCK_STAGED, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=850290427, refcount=3, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)) ...
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - Cache::duplicate_for_write(0x601000858140): CachedExtent(addr=0x601000b56d00, type=ONODE_BLOCK_STAGED, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=850290427, refcount=6, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)) -> CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=850290427, refcount=2, laddr=18446744073709551615, pin=empty, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - TransactionManager::get_mutable_extent(0x601000858140): duplicating CachedExtent(addr=0x601000b56d00, type=ONODE_BLOCK_STAGED, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=850290427, refcount=5, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)) for write: CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=850290427, refcount=2, laddr=18446744073709551615, pin=empty, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - OTree::Layout::insert: begin at insert_pos(LAST, END, END), insert_stage=1, insert_size=1159B ...
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - OTree::Extent::Replay: decoding INSERT ...
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - OTree::Extent::Replay: apply key_hobj(4,4,4; "ns7_..__/128B","oid7..__/13B"; 3,3), ValueConf(TEST_EXTENDED, 989B), insert_pos(LAST, END, END), insert_stage=1, insert_size=1159B ...
DEBUG 2021-08-14 02:38:09,351 [shard 0] seastore - OTree::Layout::insert: done at insert_pos(0, 1, 0), insert_stage=1, insert_size=1159B
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - TransactionManager::submit_transaction(0x601000858140): about to await throttle
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - SegmentCleaner::log_gc_state(await_hard_limits): total 1073741824, available 1065328640, unavailable 8413184, used 20480, reclaimable 0, reclaim_ratio 0.0, available_ratio 0.9921646118164062, should_block_on_gc false, gc_should_reclaim_space false, journal_head journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), journal_tail_target journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), dirty_tail journal_seq_t(segment_seq=0, offset=paddr_t<1, 24576>), dirty_tail_limit journal_seq_t(segment_seq=0, offset=paddr_t<1, 24576>), gc_should_trim_journal false,
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - TransactionManager::submit_transaction_direct(0x601000858140): about to prepare
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - Cache::prepare_record(0x601000858140): enter
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - Cache::prepare_record(0x601000858140): read_set validated
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - Cache::prepare_record(0x601000858140): mutating CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=850290427, refcount=1, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - Cache::replace_extent: prev CachedExtent(addr=0x601000b56d00, type=ONODE_BLOCK_STAGED, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=850290427, refcount=3, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)), next CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=850290427, refcount=2, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - Cache::invalidate: invalidate begin -- extent CachedExtent(addr=0x601000b56d00, type=ONODE_BLOCK_STAGED, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=850290427, refcount=2, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - Cache::invalidate: invalidate end
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - TransactionManager::submit_transaction_direct(0x601000858140): about to submit to journal
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - write_record, mdlength 4096, dlength 0, target 28672
DEBUG 2021-08-14 02:38:09,352 [shard 0] seastore - segment_write to segment 1 at offset 28672, physical offset 8417280, len 4096, crc 335995726
DEBUG 2021-08-14 02:38:09,376 [shard 0] seastore - write_record: commit target 28672
DEBUG 2021-08-14 02:38:09,376 [shard 0] seastore - TransactionManager::submit_transaction_direct(0x601000858140): journal commit to paddr_t<1, 32768> seq journal_seq_t(segment_seq=1, offset=paddr_t<1, 28672>)
DEBUG 2021-08-14 02:38:09,376 [shard 0] seastore - Cache::complete_commit(0x601000858140): enter
DEBUG 2021-08-14 02:38:09,376 [shard 0] seastore - Cache::complete_commit(0x601000858140): mutated CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=1625198013, refcount=2, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,376 [shard 0] seastore - update_journal_tail_target: journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>)
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - SegmentCleaner::log_gc_state(GCProcess::run_until_halt): total 1073741824, available 1065324544, unavailable 8417280, used 20480, reclaimable 0, reclaim_ratio 0.0, available_ratio 0.9921607971191406, should_block_on_gc false, gc_should_reclaim_space false, journal_head journal_seq_t(segment_seq=1, offset=paddr_t<1, 28672>), journal_tail_target journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), dirty_tail journal_seq_t(segment_seq=0, offset=paddr_t<1, 28672>), dirty_tail_limit journal_seq_t(segment_seq=0, offset=paddr_t<1, 28672>), gc_should_trim_journal false,
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - Cache::create_transaction(0x601000858140): created source=MUTATE
DEBUG 2021-08-14 02:38:09,377 [shard 0] test - [2] insert key_hobj(7,7,7; "ns6_..__/255B","oid6..__/2048B"; 0,0) -> TestItem(#4, 992B)
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - Cache::get_root(0x601000858140): waiting root CachedExtent(addr=0x601000176e00, type=ROOT, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), paddr=paddr_t<NULL_SEG, NULL_OFF>, state=DIRTY, last_committed_crc=0, refcount=3)
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - Cache::get_root(0x601000858140): got root read CachedExtent(addr=0x601000176e00, type=ROOT, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), paddr=paddr_t<NULL_SEG, NULL_OFF>, state=DIRTY, last_committed_crc=0, refcount=4)
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - BtreeLBAManager::get_mapping: 0
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - Cache::get_root(0x601000858140): root already on transaction CachedExtent(addr=0x601000176e00, type=ROOT, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), paddr=paddr_t<NULL_SEG, NULL_OFF>, state=DIRTY, last_committed_crc=0, refcount=4)
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - BtreeLBAManager::get_root: reading root at paddr_t<0, 8192> depth 1
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - get_lba_btree_extent: reading leaf at offset paddr_t<0, 8192>, depth 1
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - get_lba_btree_extent: read leaf at offset paddr_t<0, 8192> CachedExtent(addr=0x601000a9c780, type=LADDR_LEAF, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), paddr=paddr_t<0, 8192>, state=DIRTY, last_committed_crc=326493981, refcount=4, size=1, meta=btree_node_meta_t(begin=0, end=18446744073709551615, depth=1)), parent CachedExtent(addr=0x601000176e00, type=ROOT, version=1, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), paddr=paddr_t<NULL_SEG, NULL_OFF>, state=DIRTY, last_committed_crc=0, refcount=7)
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - LBALeafNode::lookup_pin 0
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - BtreeLBAManager::get_mapping: got mapping LBAPin(0~16384->paddr_t<1, 8192>
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - TransactionManager::pin_to_extent(0x601000858140): getting extent LBAPin(0~16384->paddr_t<1, 8192>
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - TransactionManager::pin_to_extent(0x601000858140): got extent CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=1625198013, refcount=3, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - OTree::LeafNode::insert_value(0x601000858140): insert NodeL0@0x0+4000Lv0$ with insert_key=key_hobj(7,7,7; "ns6_..__/255B","oid6..__/2048B"; 0,0), insert_value=ValueConf(TEST_EXTENDED, 989B), insert_pos(END, END, END), history=history(GT, --, --), mstat(-2) ...
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - OTree::Seastore(0x601000858140): mutate CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=1625198013, refcount=3, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)) ...
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - Cache::duplicate_for_write(0x601000858140): CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=1625198013, refcount=6, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)) -> CachedExtent(addr=0x601000b56c00, type=ONODE_BLOCK_STAGED, version=3, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=1625198013, refcount=2, laddr=18446744073709551615, pin=empty, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - TransactionManager::get_mutable_extent(0x601000858140): duplicating CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=1625198013, refcount=5, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)) for write: CachedExtent(addr=0x601000b56c00, type=ONODE_BLOCK_STAGED, version=3, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=1625198013, refcount=2, laddr=18446744073709551615, pin=empty, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - OTree::Layout::insert: begin at insert_pos(END, END, END), insert_stage=2, insert_size=3336B ...
DEBUG 2021-08-14 02:38:09,377 [shard 0] seastore - OTree::Extent::Replay: decoding INSERT ...
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - OTree::Extent::Replay: apply key_hobj(7,7,7; "ns6_..__/255B","oid6..__/2048B"; 0,0), ValueConf(TEST_EXTENDED, 989B), insert_pos(END, END, END), insert_stage=2, insert_size=3336B ...
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - OTree::Layout::insert: done at insert_pos(1, 0, 0), insert_stage=2, insert_size=3336B
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - TransactionManager::submit_transaction(0x601000858140): about to await throttle
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - SegmentCleaner::log_gc_state(await_hard_limits): total 1073741824, available 1065324544, unavailable 8417280, used 20480, reclaimable 0, reclaim_ratio 0.0, available_ratio 0.9921607971191406, should_block_on_gc false, gc_should_reclaim_space false, journal_head journal_seq_t(segment_seq=1, offset=paddr_t<1, 28672>), journal_tail_target journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), dirty_tail journal_seq_t(segment_seq=0, offset=paddr_t<1, 28672>), dirty_tail_limit journal_seq_t(segment_seq=0, offset=paddr_t<1, 28672>), gc_should_trim_journal false,
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - TransactionManager::submit_transaction_direct(0x601000858140): about to prepare
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - Cache::prepare_record(0x601000858140): enter
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - Cache::prepare_record(0x601000858140): read_set validated
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - Cache::prepare_record(0x601000858140): mutating CachedExtent(addr=0x601000b56c00, type=ONODE_BLOCK_STAGED, version=3, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=1625198013, refcount=1, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - Cache::replace_extent: prev CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=1625198013, refcount=3, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0)), next CachedExtent(addr=0x601000b56c00, type=ONODE_BLOCK_STAGED, version=3, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=1625198013, refcount=2, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - Cache::invalidate: invalidate begin -- extent CachedExtent(addr=0x6010009cdd00, type=ONODE_BLOCK_STAGED, version=2, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=DIRTY, last_committed_crc=1625198013, refcount=2, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - Cache::invalidate: invalidate end
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - TransactionManager::submit_transaction_direct(0x601000858140): about to submit to journal
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - write_record, mdlength 4096, dlength 0, target 32768
DEBUG 2021-08-14 02:38:09,378 [shard 0] seastore - segment_write to segment 1 at offset 32768, physical offset 8421376, len 4096, crc 530964746
DEBUG 2021-08-14 02:38:09,379 [shard 0] seastore - write_record: commit target 32768
DEBUG 2021-08-14 02:38:09,379 [shard 0] seastore - TransactionManager::submit_transaction_direct(0x601000858140): journal commit to paddr_t<1, 36864> seq journal_seq_t(segment_seq=1, offset=paddr_t<1, 32768>)
DEBUG 2021-08-14 02:38:09,379 [shard 0] seastore - Cache::complete_commit(0x601000858140): enter
DEBUG 2021-08-14 02:38:09,379 [shard 0] seastore - Cache::complete_commit(0x601000858140): mutated CachedExtent(addr=0x601000b56c00, type=ONODE_BLOCK_STAGED, version=3, dirty_from_or_retired_at=journal_seq_t(segment_seq=1, offset=paddr_t<1, 24576>), paddr=paddr_t<1, 8192>, state=MUTATION_PENDING, last_committed_crc=1313456216, refcount=2, laddr=0, pin=LBAPin(0~16384->paddr_t<1, 8192>, fltree_header=headerL0(is_level_tail=1, level=0))
DEBUG 2021-08-14 02:38:09,379 [shard 0] seastore - update_journal_tail_target: journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>)
DEBUG 2021-08-14 02:38:09,379 [shard 0] seastore - SegmentCleaner::log_gc_state(GCProcess::run_until_halt): total 1073741824, available 1065320448, unavailable 8421376, used 20480, reclaimable 0, reclaim_ratio 0.0, available_ratio 0.992156982421875, should_block_on_gc false, gc_should_reclaim_space false, journal_head journal_seq_t(segment_seq=1, offset=paddr_t<1, 32768>), journal_tail_target journal_seq_t(segment_seq=1, offset=paddr_t<1, 4096>), dirty_tail journal_seq_t(segment_seq=0, offset=paddr_t<1, 32768>), dirty_tail_limit journal_seq_t(segment_seq=0, offset=paddr_t<1, 32768>), gc_should_trim_journal false,
DEBUG 2021-08-14 02:38:09,380 [shard 0] seastore - Cache::create_transaction(0x601000858140): created source=MUTATE
DEBUG 2021-08-14 02:38:09,380 [shard 0] test - [3] insert key_hobj(1,1,1; "ns3_..__/255B","oid3____"; 2,2) -> TestItem(#3, 576B)
DEBUG 2021-08-14 02:38:09,380 [shard 0] seastore - OTree::Seastore(0x601000858140): get root: trigger eagain
unittest-staged-fltree: ../src/crimson/os/seastore/onode_manager/staged-fltree/node.cc:420: crimson::os::seastore::onode::Node::load_root(crimson::os::seastore::onode::context_t, crimson::os::seastore::onode::RootNodeTracker&)::<lambda(auto:115&&)> [with auto:115 = std::unique_ptr<crimson::os::seastore::onode::Super>]: Assertion `_super' failed.
Aborting on shard 0.
...
</pre>
Dashboard - Bug #51728 (Resolved): mgr/dashboard: force maintenance e2e failing for host
https://tracker.ceph.com/issues/51728
2021-07-19T15:31:13Z
Kefu Chai
tchaikov@gmail.com
<pre>
The following e2e failures might be related:
2021-07-17T17:53:00.269 INFO:tasks.workunit.client.0.smithi080.stdout: Hosts page
2021-07-17T17:53:00.269 INFO:tasks.workunit.client.0.smithi080.stdout: when Orchestrator is available
2021-07-17T17:53:01.584 INFO:tasks.workunit.client.0.smithi080.stdout: 1) should force enter host into maintenance
2021-07-17T17:53:01.628 INFO:tasks.workunit.client.0.smithi080.stdout:
2021-07-17T17:53:01.629 INFO:tasks.workunit.client.0.smithi080.stdout:
2021-07-17T17:53:01.629 INFO:tasks.workunit.client.0.smithi080.stdout: 0 passing (1s)
2021-07-17T17:53:01.629 INFO:tasks.workunit.client.0.smithi080.stdout: 1 failing
2021-07-17T17:53:01.629 INFO:tasks.workunit.client.0.smithi080.stdout:
2021-07-17T17:53:01.630 INFO:tasks.workunit.client.0.smithi080.stdout: 1) Hosts page
2021-07-17T17:53:01.630 INFO:tasks.workunit.client.0.smithi080.stdout: when Orchestrator is available
2021-07-17T17:53:01.630 INFO:tasks.workunit.client.0.smithi080.stdout: should force enter host into maintenance:
2021-07-17T17:53:01.630 INFO:tasks.workunit.client.0.smithi080.stdout: TypeError: Cannot read property 'forEach' of undefined
2021-07-17T17:53:01.630 INFO:tasks.workunit.client.0.smithi080.stdout: at Context.eval (https://172.21.15.80:8443/__cypress/tests?p=cypress/integration/orchestrator/01-hosts-force-maintenance.e2e-spec.ts:325:30)
</pre>
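<p>The <code>TypeError: Cannot read property 'forEach' of undefined</code> points at the spec iterating over a collection that was never supplied. A minimal sketch of that failure mode and a defensive guard (the function and parameter names here are illustrative, not the actual dashboard e2e code):</p>
<pre>
// Hypothetical helper shape; names are illustrative only.
// Calling navs.forEach(...) directly throws
// "TypeError: Cannot read property 'forEach' of undefined"
// when the caller forgets to pass the list (e.g. a missing fixture).
function checkedNavs(navs?: string[]): string[] {
  // Guard with a nullish-coalescing fallback so an absent list
  // yields an empty result instead of a TypeError.
  return (navs ?? []).map(n =&gt; n.trim());
}

console.log(checkedNavs(undefined));        // []
console.log(checkedNavs([' maintenance '])); // ['maintenance']
</pre>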
<ul>
<li>/a/yuriw-2021-07-17_14:59:42-rados-wip-yuri-testing-master-7.16.21-distro-basic-smithi/6277686</li>
<li>/a/kchai-2021-07-17_15:53:41-rados-wip-kefu-testing-2021-07-17-1518-distro-basic-smithi/6277769</li>
</ul>