Ceph : Issues
https://tracker.ceph.com/
2020-08-10T01:03:45Z
Ceph
Redmine
rbd - Bug #46875 (New): TestLibRBD.TestPendingAio: test_librbd.cc:4539: Failure or SIGSEGV
https://tracker.ceph.com/issues/46875
2020-08-10T01:03:45Z
Sebastian Wagner
<pre>
[ RUN ] TestLibRBD.TestPendingAio
using new format!
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/librbd/test_librbd.cc:4539: Failure
Expected equality of these values:
1
rbd_aio_is_complete(comps[i])
Which is: 0
[ FAILED ] TestLibRBD.TestPendingAio (68 ms)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1">https://jenkins.ceph.com/job/ceph-pull-requests/57209/consoleFull#-361705261e840cee4-f4a4-4183-81dd-42855615f2c1</a></p>
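<p>The assertion reads <code>rbd_aio_is_complete()</code> exactly once, so if the completion races with the check it shows up as a failure. A minimal sketch (plain Python, deliberately not the librbd API) of polling with a deadline instead of a one-shot assert; <code>wait_until</code> is a hypothetical helper:</p>

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.01):
    """Poll predicate() until it returns True or the deadline passes.

    Hypothetical helper, not part of librbd or its test suite: it only
    sketches how a completion that races with the flush could be
    tolerated instead of asserted on immediately.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline
```

<p>In the test above, the equivalent would be polling <code>rbd_aio_is_complete(comps[i])</code> up to a deadline rather than expecting 1 on the first read.</p>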
sepia - Bug #46154 (New): unable to pull ceph/ceph-grafana: connection reset by peer
https://tracker.ceph.com/issues/46154
2020-06-23T13:20:34Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-smithi/5172323/">http://pulpito.ceph.com/swagner-2020-06-23_11:55:14-rados:cephadm-wip-swagner-testing-2020-06-23-1057-distro-basic-smithi/5172323/</a></p>
<pre>
2020-06-23T12:20:41.349 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Deploy daemon grafana.a ...
2020-06-23T12:20:41.350 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Verifying port 3000 ...
2020-06-23T12:20:46.563 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:Non-zero exit code 125 from /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Trying to pull docker.io/ceph/ceph-grafana:latest...
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Getting image source signatures
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Copying blob sha256:003efafe5a84678b585af8a06810c47079aa4705e60d07f1c31a52f0e35ce0b5
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.564 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr Error: unable to pull ceph/ceph-grafana:latest: 1 error occurred:
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr * Error writing blob: error storing blob to file "/var/tmp/storage459839576/1": read tcp 172.21.15.198:55100->104.18.125.25:443: read: connection reset by peer
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.565 INFO:tasks.workunit.client.0.smithi198.stderr:INFO:cephadm:stat:stderr
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr:Traceback (most recent call last):
2020-06-23T12:20:46.571 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 4825, in <module>
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: r = args.func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1182, in _default_image
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: return func()
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2863, in command_deploy
2020-06-23T12:20:46.572 INFO:tasks.workunit.client.0.smithi198.stderr: uid, gid = extract_uid_gid_monitoring(daemon_type)
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2799, in extract_uid_gid_monitoring
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: uid, gid = extract_uid_gid(file_path='/var/lib/grafana')
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 1798, in extract_uid_gid
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: args=['-c', '%u %g', file_path]
2020-06-23T12:20:46.573 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 2275, in run
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: self.run_cmd(), desc=self.entrypoint, timeout=timeout)
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: File "/tmp/tmp.ugRyqMTBeQ/cephadm", line 861, in call_throws
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr: raise RuntimeError('Failed command: %s' % ' '.join(command))
2020-06-23T12:20:46.574 INFO:tasks.workunit.client.0.smithi198.stderr:RuntimeError: Failed command: /usr/bin/podman run --rm --net=host --ipc=host -e CONTAINER_IMAGE=ceph/ceph-grafana:latest -e NODE_NAME=smithi198 --entrypoint stat ceph/ceph-grafana:latest -c %u %g /var/lib/grafana
</pre>
<p>Does this mean we have to retry fetching containers?</p>
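<p>Assuming the <code>connection reset by peer</code> is transient, one option is to wrap the pull in retries with exponential backoff. A sketch; <code>pull_with_retries</code> is a hypothetical helper, not cephadm code, and the real fix may belong in teuthology instead:</p>

```python
import subprocess
import time

def pull_with_retries(image, attempts=3, base_delay=1.0, run=subprocess.run):
    """Pull a container image, retrying failures with exponential backoff.

    Hypothetical sketch, assuming the network error is transient;
    `run` is injectable so the retry logic can be exercised without podman.
    """
    for attempt in range(attempts):
        result = run(["podman", "pull", image])
        if result.returncode == 0:
            return result
        if attempt < attempts - 1:
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("failed to pull %s after %d attempts" % (image, attempts))
```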
teuthology - Bug #44181 (New): Error in syslog: task.internal.syslog: random "*BUG*" in log message
https://tracker.ceph.com/issues/44181
2020-02-18T10:33:27Z
Sebastian Wagner
<p><a class="external" href="http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502">http://pulpito.ceph.com/sage-2020-02-18_02:48:28-rados-wip-sage4-testing-2020-02-17-1727-distro-basic-smithi/4776502</a></p>
<p>This job failure was caused by</p>
<p><a class="external" href="https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144">https://github.com/ceph/teuthology/blob/291d40053a7a5caedc1d683f08d25399bc3b9ccd/teuthology/task/internal/syslog.py#L98-L144</a></p>
<pre>
2020-02-18T06:48:18.388 ERROR:teuthology.task.internal.syslog:Error in syslog on ubuntu@smithi060.front.sepia.ceph.com: /home/ubuntu/cephtest/archive/syslog/misc.log:2020-02-18T06:42:25.361371+00:00 smithi060 bash[10468]: audit 2020-02-18T06:42:24.442267+0000 mon.a (mon.0) 82 : audit [INF] from='mgr.14124 172.21.15.60:0/1' entity='mgr.y' cmd=[{"prefix":"config-key set","key":"mgr/dashboard/key","val":"-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCnskmhDB10Jk6M\nXNpzP+7hOWVV7TYIeAGSapYoNcgQcPrQU0STPGuyUORmnKO4taTVz8EBvPL4p6Mv\niZFEIhL2OL07UexgDqKaAD4lne/KIhYQJVtkqPu/TYYemxa6xyl/V4LGPrSGYx0C\n8huP7RsqEMNLNMr/wG34hG7LCdGtcWk8aylma8XrXgukEHMsJJIeb+ZZKw3AnZCT\nbO++2B+V5DPtE4LId4x3G1PrVumH/whd6ciTtcImFspCRQgwlaPnDLf68bFXF67G\nBWJZZRFoTCc73fu/rUW1vGYk7WFiVi52WVbpgYrPc/AhWNOaH6d0xooBoohRjAYh\n7GDdVa+LAgMBAAECggEAM1HEZpymht0SPLJNx+dQ22wNLvahCoZvNLeZrESJLT7m\nAsr4uXZMHw3SV/SnxecQwr4JetawJJhowCuBYTBsTR2gC39OrzbLXAWm/ywOLfWw\netBz36I3KJw45zTfB9nbQTUuuCyIYngCcNxWwvz0yzLGEUXeudXR0bP1k/01RbZo\nhGe8oQSJzSN4zmfQtx/rSGCXJr23HUjPs0mVHqml2bZL9UZcuKu5RuN8PWSo0aOM\nZwGYa/1pcoo1OsN3XtujY9tU0Ykrd3rteAARMIBFzrktaWWhSdaiOQS0fYAnyGrX\njI7cjlsbtJfTt741wF0hmCZIGS40+HWTmwCTkapWwQKBgQDQkX4ZHyvFD96W81rz\nXLIdSEfgv0+andTC2v5kvlk4cxIYgic2g1R59gekZioOpVIQG/eCwiSFW5ndqjzI\nSGMj2bflL8vXv5q4EX+TL3W5LWOnR8k7FVxJpPsJbbyX93qbpcU/oKNSt5VDUPaL\neooha6lDP+HEAdfWHqe1PxP+9wKBgQDN1UzVWU8ur2tdlql2BVrwfi/J32/ZUFQG\nCPvC9RMdavZjKITu2Rg5LA6kYOnJ9MvVTU59Mf2c+6kKWQTaRhqTqhfAJYtjvmoC\naTGm7HGPywEOeMphF+LAb23DNcCzQFhBVduOfL8MSkTjJOjmaxyYc2qs+ts87NMt\nqCENAaPrDQKBgHvuV/1ZdkqsOVl81QhShku8DWnQg96d9jSqqAr4yE8woQoLHH3Z\n37JwrO3U/xygw3hrBdGextCvM2hxpZhk2vQMhKcclYVnhunlC+dLhio4fESD9WC0\nOphP/hMGL9Ak76fZArHiI+ocyAat7zHF6JofPP6G0QIFDlle8cxS5PDVAoGBAMRB\nByQ5JkV2HqG6YFNWYdICDuClOQj0DVk/wYSulY4sCUacQLtXpUAF4OQcP20/CgaT\n0i2Ot6ixTwi9veG8i+SVflXHtnLhAETSNfRZZyHaRmSdCSGwW5Rt6jMBkn2W8U9C\nZLgj+yjlu270J1hjcn1tNp4+BUG+8M+Mig7TrI4VAoGAYYltCD4bc2bBAPWnF6nk\nqrx16kKg0kjNdhATkBWt76jpsJYRmyo5NALLaB5/k0dS7ftu
TmGEZLSnNyl44O2B\n7QH6PaRoP0hX5LtLwSZhiJxd6tDrfwMFzpVGiJHeUNKGS/GKQzlvlxUJb2aOhNWu\nMgFlLWfPOMgxiRpwUhtg0Is=\n-----END PRIVATE KEY-----\n"}]: dispatch
</pre>
<p>Unfortunately, this log message contains "<code>BUG</code>" inside the base64-encoded private key, which the syslog error scan flags as a failure.</p>
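<p>A sketch of the false positive: an unanchored substring check fires on base64 payloads, while a pattern anchored to kernel-style bug reports does not. The regexes below are illustrative only; the real patterns live in the linked <code>syslog.py</code>:</p>

```python
import re

# Naive scan: any occurrence of "BUG" in a syslog line is treated as an
# error -- this is what fires on the base64-encoded key above.
def naive_match(line):
    return "BUG" in line

# Anchored alternative: only match kernel-style bug reports.
# (Illustrative pattern, not teuthology's actual expression.)
KERNEL_BUG = re.compile(r"kernel BUG at|BUG: unable to handle")

def anchored_match(line):
    return KERNEL_BUG.search(line) is not None
```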
rbd - Bug #43274 (Need More Info): unittest_rbd_mirror: Exception: SegFault
https://tracker.ceph.com/issues/43274
2019-12-12T09:30:06Z
Sebastian Wagner
<p>Unfortunately, I don't know what exactly went wrong:</p>
<pre>
185/191 Test #184: unittest_rbd_mirror .....................***Exception: SegFault 11.74 sec
[==========] Running 279 tests from 34 test suites.
[----------] Global test environment set-up.
[----------] 13 tests from TestMockImageMap
[ RUN ] TestMockImageMap.SetLocalImages
seed 1526
[ OK ] TestMockImageMap.SetLocalImages (8 ms)
[ RUN ] TestMockImageMap.AddRemoveLocalImage
[ OK ] TestMockImageMap.AddRemoveLocalImage (25 ms)
[ RUN ] TestMockImageMap.AddRemoveRemoteImage
[ OK ] TestMockImageMap.AddRemoveRemoteImage (15 ms)
[ RUN ] TestMockImageMap.AddRemoveRemoteImageDuplicateNotification
[ OK ] TestMockImageMap.AddRemoveRemoteImageDuplicateNotification (5 ms)
[ RUN ] TestMockImageMap.AcquireImageErrorRetry
[ OK ] TestMockImageMap.AcquireImageErrorRetry (2 ms)
[ RUN ] TestMockImageMap.RemoveRemoteAndLocalImage
[ OK ] TestMockImageMap.RemoveRemoteAndLocalImage (2 ms)
[ RUN ] TestMockImageMap.AddInstance
[ OK ] TestMockImageMap.AddInstance (4 ms)
[ RUN ] TestMockImageMap.RemoveInstance
[ OK ] TestMockImageMap.RemoveInstance (7 ms)
[ RUN ] TestMockImageMap.AddInstancePingPongImageTest
[ OK ] TestMockImageMap.AddInstancePingPongImageTest (34 ms)
[ RUN ] TestMockImageMap.RemoveInstanceWithRemoveImage
[ OK ] TestMockImageMap.RemoveInstanceWithRemoveImage (23 ms)
[ RUN ] TestMockImageMap.AddErrorAndRemoveImage
[ OK ] TestMockImageMap.AddErrorAndRemoveImage (35 ms)
[ RUN ] TestMockImageMap.MirrorUUIDUpdated
[ OK ] TestMockImageMap.MirrorUUIDUpdated (44 ms)
[ RUN ] TestMockImageMap.RebalanceImageMap
[ OK ] TestMockImageMap.RebalanceImageMap (40 ms)
[----------] 13 tests from TestMockImageMap (244 ms total)
[----------] 14 tests from TestMockImageReplayer
[ RUN ] TestMockImageReplayer.StartStop
Failed to load class: cas (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so: undefined symbol: _Z13cls_has_chunkPvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
Failed to load class: log (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
Failed to load class: rgw (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so: undefined symbol: _Z19cls_current_versionPv
Failed to load class: user (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
[ OK ] TestMockImageReplayer.StartStop (317 ms)
[ RUN ] TestMockImageReplayer.LocalImagePrimary
[ OK ] TestMockImageReplayer.LocalImagePrimary (146 ms)
[ RUN ] TestMockImageReplayer.LocalImageDNE
[ OK ] TestMockImageReplayer.LocalImageDNE (196 ms)
[ RUN ] TestMockImageReplayer.PrepareLocalImageError
[ OK ] TestMockImageReplayer.PrepareLocalImageError (194 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdDNE
[ OK ] TestMockImageReplayer.GetRemoteImageIdDNE (174 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdNonLinkedDNE
[ OK ] TestMockImageReplayer.GetRemoteImageIdNonLinkedDNE (224 ms)
[ RUN ] TestMockImageReplayer.GetRemoteImageIdError
[ OK ] TestMockImageReplayer.GetRemoteImageIdError (228 ms)
[ RUN ] TestMockImageReplayer.BootstrapError
[ OK ] TestMockImageReplayer.BootstrapError (154 ms)
[ RUN ] TestMockImageReplayer.StopBeforeBootstrap
[ OK ] TestMockImageReplayer.StopBeforeBootstrap (215 ms)
[ RUN ] TestMockImageReplayer.StartExternalReplayError
[ OK ] TestMockImageReplayer.StartExternalReplayError (152 ms)
[ RUN ] TestMockImageReplayer.StopError
[ OK ] TestMockImageReplayer.StopError (169 ms)
[ RUN ] TestMockImageReplayer.Replay
[ OK ] TestMockImageReplayer.Replay (177 ms)
[ RUN ] TestMockImageReplayer.DecodeError
[ OK ] TestMockImageReplayer.DecodeError (157 ms)
[ RUN ] TestMockImageReplayer.DelayedReplay
[ OK ] TestMockImageReplayer.DelayedReplay (2153 ms)
[----------] 14 tests from TestMockImageReplayer (4663 ms total)
[----------] 5 tests from TestMockImageSync
[ RUN ] TestMockImageSync.SimpleSync
[ OK ] TestMockImageSync.SimpleSync (198 ms)
[ RUN ] TestMockImageSync.RestartSync
[ OK ] TestMockImageSync.RestartSync (173 ms)
[ RUN ] TestMockImageSync.CancelNotifySyncRequest
[ OK ] TestMockImageSync.CancelNotifySyncRequest (159 ms)
[ RUN ] TestMockImageSync.CancelImageCopy
[ OK ] TestMockImageSync.CancelImageCopy (195 ms)
[ RUN ] TestMockImageSync.CancelAfterCopyImage
[ OK ] TestMockImageSync.CancelAfterCopyImage (166 ms)
[----------] 5 tests from TestMockImageSync (898 ms total)
[----------] 3 tests from TestMockInstanceReplayer
[ RUN ] TestMockInstanceReplayer.AcquireReleaseImage
[ OK ] TestMockInstanceReplayer.AcquireReleaseImage (16 ms)
[ RUN ] TestMockInstanceReplayer.RemoveFinishedImage
[ OK ] TestMockInstanceReplayer.RemoveFinishedImage (24 ms)
[ RUN ] TestMockInstanceReplayer.Reacquire
[ OK ] TestMockInstanceReplayer.Reacquire (2 ms)
[----------] 3 tests from TestMockInstanceReplayer (42 ms total)
[----------] 11 tests from TestMockInstanceWatcher
[ RUN ] TestMockInstanceWatcher.InitShutdown
[ OK ] TestMockInstanceWatcher.InitShutdown (23 ms)
[ RUN ] TestMockInstanceWatcher.InitError
[ OK ] TestMockInstanceWatcher.InitError (18 ms)
[ RUN ] TestMockInstanceWatcher.ShutdownError
[ OK ] TestMockInstanceWatcher.ShutdownError (15 ms)
[ RUN ] TestMockInstanceWatcher.Remove
[ OK ] TestMockInstanceWatcher.Remove (16 ms)
[ RUN ] TestMockInstanceWatcher.RemoveNoent
[ OK ] TestMockInstanceWatcher.RemoveNoent (12 ms)
[ RUN ] TestMockInstanceWatcher.ImageAcquireRelease
[ OK ] TestMockInstanceWatcher.ImageAcquireRelease (36 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageRemoved
[ OK ] TestMockInstanceWatcher.PeerImageRemoved (36 ms)
[ RUN ] TestMockInstanceWatcher.ImageAcquireReleaseCancel
[ OK ] TestMockInstanceWatcher.ImageAcquireReleaseCancel (31 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageAcquireWatchDNE
[ OK ] TestMockInstanceWatcher.PeerImageAcquireWatchDNE (17 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageReleaseWatchDNE
[ OK ] TestMockInstanceWatcher.PeerImageReleaseWatchDNE (32 ms)
[ RUN ] TestMockInstanceWatcher.PeerImageRemovedCancel
[ OK ] TestMockInstanceWatcher.PeerImageRemovedCancel (12 ms)
[----------] 11 tests from TestMockInstanceWatcher (250 ms total)
[----------] 11 tests from TestMockInstanceWatcher_NotifySync
[ RUN ] TestMockInstanceWatcher_NotifySync.StartStopOnLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartStopOnLeader (48 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelStartedOnLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelStartedOnLeader (49 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartStopOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartStopOnNonLeader (36 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelStartedOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelStartedOnNonLeader (41 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.CancelWaitingOnNonLeader
[ OK ] TestMockInstanceWatcher_NotifySync.CancelWaitingOnNonLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.InFlightPrevNotification
[ OK ] TestMockInstanceWatcher_NotifySync.InFlightPrevNotification (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.NoInFlightReleaseAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.NoInFlightReleaseAcquireLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartedOnLeaderReleaseLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartedOnLeaderReleaseLeader (34 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.WaitingOnLeaderReleaseLeader
[ OK ] TestMockInstanceWatcher_NotifySync.WaitingOnLeaderReleaseLeader (46 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.StartedOnNonLeaderAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.StartedOnNonLeaderAcquireLeader (29 ms)
[ RUN ] TestMockInstanceWatcher_NotifySync.WaitingOnNonLeaderAcquireLeader
[ OK ] TestMockInstanceWatcher_NotifySync.WaitingOnNonLeaderAcquireLeader (34 ms)
[----------] 11 tests from TestMockInstanceWatcher_NotifySync (456 ms total)
[----------] 4 tests from TestMockLeaderWatcher
[ RUN ] TestMockLeaderWatcher.InitShutdown
[ OK ] TestMockLeaderWatcher.InitShutdown (33 ms)
[ RUN ] TestMockLeaderWatcher.InitReleaseShutdown
[ OK ] TestMockLeaderWatcher.InitReleaseShutdown (19 ms)
[ RUN ] TestMockLeaderWatcher.AcquireError
[ OK ] TestMockLeaderWatcher.AcquireError (12 ms)
[ RUN ] TestMockLeaderWatcher.Break
[ OK ] TestMockLeaderWatcher.Break (2012 ms)
[----------] 4 tests from TestMockLeaderWatcher (2076 ms total)
[----------] 12 tests from TestMockMirrorStatusUpdater
[ RUN ] TestMockMirrorStatusUpdater.InitShutDown
[ OK ] TestMockMirrorStatusUpdater.InitShutDown (13 ms)
[ RUN ] TestMockMirrorStatusUpdater.InitStatusWatcherError
[ OK ] TestMockMirrorStatusUpdater.InitStatusWatcherError (26 ms)
[ RUN ] TestMockMirrorStatusUpdater.ShutDownStatusWatcherError
[ OK ] TestMockMirrorStatusUpdater.ShutDownStatusWatcherError (14 ms)
[ RUN ] TestMockMirrorStatusUpdater.SmallBatch
[ OK ] TestMockMirrorStatusUpdater.SmallBatch (24 ms)
[ RUN ] TestMockMirrorStatusUpdater.LargeBatch
[ OK ] TestMockMirrorStatusUpdater.LargeBatch (30 ms)
[ RUN ] TestMockMirrorStatusUpdater.OverwriteStatus
[ OK ] TestMockMirrorStatusUpdater.OverwriteStatus (11 ms)
[ RUN ] TestMockMirrorStatusUpdater.OverwriteStatusInFlight
[ OK ] TestMockMirrorStatusUpdater.OverwriteStatusInFlight (7 ms)
[ RUN ] TestMockMirrorStatusUpdater.ImmediateUpdate
[ OK ] TestMockMirrorStatusUpdater.ImmediateUpdate (9 ms)
[ RUN ] TestMockMirrorStatusUpdater.RemoveIdleStatus
[ OK ] TestMockMirrorStatusUpdater.RemoveIdleStatus (20 ms)
[ RUN ] TestMockMirrorStatusUpdater.RemoveInFlightStatus
[ OK ] TestMockMirrorStatusUpdater.RemoveInFlightStatus (9 ms)
[ RUN ] TestMockMirrorStatusUpdater.ShutDownWhileUpdating
[ OK ] TestMockMirrorStatusUpdater.ShutDownWhileUpdating (14 ms)
[ RUN ] TestMockMirrorStatusUpdater.MirrorPeerSitePing
[ OK ] TestMockMirrorStatusUpdater.MirrorPeerSitePing (24 ms)
[----------] 12 tests from TestMockMirrorStatusUpdater (201 ms total)
[----------] 6 tests from TestMockNamespaceReplayer
[ RUN ] TestMockNamespaceReplayer.Init_LocalMirrorStatusUpdaterError
[ OK ] TestMockNamespaceReplayer.Init_LocalMirrorStatusUpdaterError (55 ms)
[ RUN ] TestMockNamespaceReplayer.Init_RemoteMirrorStatusUpdaterError
[ OK ] TestMockNamespaceReplayer.Init_RemoteMirrorStatusUpdaterError (32 ms)
[ RUN ] TestMockNamespaceReplayer.Init_InstanceReplayerError
[ OK ] TestMockNamespaceReplayer.Init_InstanceReplayerError (12 ms)
[ RUN ] TestMockNamespaceReplayer.Init_InstanceWatcherError
[ OK ] TestMockNamespaceReplayer.Init_InstanceWatcherError (20 ms)
[ RUN ] TestMockNamespaceReplayer.Init
[ OK ] TestMockNamespaceReplayer.Init (16 ms)
[ RUN ] TestMockNamespaceReplayer.AcuqireLeader
[ OK ] TestMockNamespaceReplayer.AcuqireLeader (9 ms)
[----------] 6 tests from TestMockNamespaceReplayer (144 ms total)
[----------] 4 tests from TestMockPoolReplayer
[ RUN ] TestMockPoolReplayer.ConfigKeyOverride
[ OK ] TestMockPoolReplayer.ConfigKeyOverride (47 ms)
[ RUN ] TestMockPoolReplayer.AcquireReleaseLeader
[ OK ] TestMockPoolReplayer.AcquireReleaseLeader (55 ms)
[ RUN ] TestMockPoolReplayer.Namespaces
[ OK ] TestMockPoolReplayer.Namespaces (2075 ms)
[ RUN ] TestMockPoolReplayer.NamespacesError
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/40443/console">https://jenkins.ceph.com/job/ceph-pull-requests/40443/console</a></p>
<p><a class="external" href="https://github.com/ceph/ceph/pull/32182">https://github.com/ceph/ceph/pull/32182</a></p>
rbd - Bug #42768 (Duplicate): unittest_journal: TestFutureImpl.Getters failed: Timeout
https://tracker.ceph.com/issues/42768
2019-11-12T11:43:33Z
Sebastian Wagner
<p>This might be a rare deadlock?</p>
<pre>
189/189 Test #129: unittest_journal ........................***Timeout 3600.11 sec
did not load config file, using default settings.
[==========] Running 117 tests from 11 test suites.
[----------] Global test environment set-up.
[----------] 14 tests from TestFutureImpl
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 Errors while parsing config file!
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 Errors while parsing config file!
2019-11-11T18:46:38.616+0000 7f6bcf064e80 -1 parse_file: filesystem error: cannot get file size: No such file or directory [ceph.conf]
[ RUN ] TestFutureImpl.Getters
Failed to load class: cas (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_cas.so: undefined symbol: _Z13cls_has_chunkPvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
2019-11-11T18:46:38.672+0000 7f6bcf064e80 0 <cls> /home/jenkins-build/build/workspace/ceph-pull-requests/src/cls/cephfs/cls_cephfs.cc:198: loading cephfs
2019-11-11T18:46:38.676+0000 7f6bcf064e80 0 <cls> /home/jenkins-build/build/workspace/ceph-pull-requests/src/cls/hello/cls_hello.cc:313: loading cls_hello
Failed to load class: log (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_log.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
Failed to load class: rgw (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_rgw.so: undefined symbol: _Z19cls_current_versionPv
Failed to load class: user (/home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so): /home/jenkins-build/build/workspace/ceph-pull-requests/build/lib/libcls_user.so: undefined symbol: _Z24cls_cxx_map_write_headerPvPN4ceph6buffer7v14_2_04listE
[ OK ] TestFutureImpl.Getters (57 ms)
[ RUN ] TestFutureImpl.Attach
[ OK ] TestFutureImpl.Attach (8 ms)
[ RUN ] TestFutureImpl.AttachWithPendingFlush
[ OK ] TestFutureImpl.AttachWithPendingFlush (28 ms)
... snip successful tests ...
[ RUN ] TestObjectRecorder.AppendFlushByCount
[ OK ] TestObjectRecorder.AppendFlushByCount (12 ms)
[ RUN ] TestObjectRecorder.AppendFlushByBytes
[ OK ] TestObjectRecorder.AppendFlushByBytes (9 ms)
[ RUN ] TestObjectRecorder.AppendFlushByAge
[ OK ] TestObjectRecorder.AppendFlushByAge (11 ms)
[ RUN ] TestObjectRecorder.AppendFilledObject
99% tests passed, 1 tests failed out of 189
Total Test time (real) = 3830.24 sec
The following tests FAILED:
129 - unittest_journal (Timeout)
</pre>
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/38349/console">https://jenkins.ceph.com/job/ceph-pull-requests/38349/console</a></p>
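<p>If this really is a deadlock, the only symptom is the ctest timeout. A generic watchdog sketch (plain Python, assuming nothing about the journal code, and not ctest's actual mechanism) that distinguishes "finished" from "hung":</p>

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def run_with_deadline(fn, timeout):
    """Run fn() in a worker thread; report whether it finished in time.

    Generic watchdog sketch: a deadlocked callable never returns, so all
    the caller observes is the elapsed deadline.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return True, future.result(timeout=timeout)
    except FutureTimeout:
        return False, None
    finally:
        pool.shutdown(wait=False)  # don't block on a hung worker
```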
rbd - Bug #41931 (Closed): mgr/rbd_support: TypeError: '>' not supported between instances of 'st...
https://tracker.ceph.com/issues/41931
2019-09-19T12:20:08Z
Sebastian Wagner
<p>Hi, I've recently started getting this exception:</p>
<pre>
2019-09-19T14:15:44.614+0200 7f5e73f84700 0 mgr[rbd_support] Fatal runtime error: '>' not supported between instances of 'str' and 'int'
Traceback (most recent call last):
File "/home/sebastian/Repos/ceph/src/pybind/mgr/rbd_support/module.py", line 165, in run
self.query_condition.wait(stats_period)
File "/usr/lib/python3.6/threading.py", line 298, in wait
if timeout > 0:
TypeError: '>' not supported between instances of 'str' and 'int'
</pre>
<p>Might be a Python 3 issue: Python 2 allowed ordering comparisons between <code>str</code> and <code>int</code>, while Python 3 raises a <code>TypeError</code>, so a period option delivered as a string now blows up inside <code>Condition.wait()</code>.</p>
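<p>A defensive sketch, assuming the module option is delivered as a string: coerce it before it reaches <code>Condition.wait()</code>. <code>coerce_period</code> is a hypothetical helper, not the actual fix in <code>rbd_support</code>:</p>

```python
import threading

def coerce_period(value, default=30):
    """Return a usable stats period in seconds.

    Hypothetical helper: mgr module options can come back as strings,
    and Python 3's Condition.wait() then dies on its internal
    `timeout > 0` comparison with exactly this TypeError.
    """
    try:
        period = int(value)
    except (TypeError, ValueError):
        return default
    return period if period > 0 else default

# Once coerced, the wait in the module's run loop is safe:
cond = threading.Condition()
with cond:
    cond.wait(timeout=min(coerce_period("10"), 0.01))  # brief wait, no TypeError
```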
rgw - Bug #40902 (Duplicate): make check: unittest_rgw_reshard_wait failed (ReshardWait.wait_yield)
https://tracker.ceph.com/issues/40902
2019-07-23T09:13:40Z
Sebastian Wagner
<p><a class="external" href="https://jenkins.ceph.com/job/ceph-pull-requests/29742">https://jenkins.ceph.com/job/ceph-pull-requests/29742</a></p>
<pre>
155/178 Test #162: unittest_rgw_reshard_wait ...............***Failed 1.06 sec
Running main() from gmock_main.cc
[==========] Running 5 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 5 tests from ReshardWait
[ RUN ] ReshardWait.wait_block
[ OK ] ReshardWait.wait_block (10 ms)
[ RUN ] ReshardWait.stop_block
[ OK ] ReshardWait.stop_block (13 ms)
[ RUN ] ReshardWait.wait_yield
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/rgw/test_rgw_reshard_wait.cc:72: Failure
Expected equality of these values:
1u
Which is: 1
context.poll()
Which is: 2
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/rgw/test_rgw_reshard_wait.cc:73: Failure
Value of: context.stopped()
Actual: true
Expected: false
/home/jenkins-build/build/workspace/ceph-pull-requests/src/test/rgw/test_rgw_reshard_wait.cc:75: Failure
Expected equality of these values:
1u
Which is: 1
context.run_one()
Which is: 0
[ FAILED ] ReshardWait.wait_yield (15 ms)
[ RUN ] ReshardWait.stop_yield
[ OK ] ReshardWait.stop_yield (10 ms)
[ RUN ] ReshardWait.stop_multiple
[ OK ] ReshardWait.stop_multiple (20 ms)
[----------] 5 tests from ReshardWait (68 ms total)
[----------] Global test environment tear-down
[==========] 5 tests from 1 test suite ran. (68 ms total)
[ PASSED ] 4 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] ReshardWait.wait_yield
1 FAILED TEST
</pre>
<p>Sorry, but I cannot provide any details, as the PR was not related to this failure.</p>
teuthology - Bug #40749 (New): /task/ansible.py: AnsibleFailedError: RepresenterError: ('cannot r...
https://tracker.ceph.com/issues/40749
2019-07-12T09:41:13Z
Sebastian Wagner
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/teuthology.log</a></p>
<pre>
Thursday 11 July 2019 14:06:56 +0000 (0:00:00.272) 0:03:00.166 *********
===============================================================================
Check for /usr/bin/python ---------------------------------------------- 27.06s
2019-07-11T14:06:56.061 INFO:teuthology.task.ansible.out:users : Create all admin users with sudo access. ----------------------- 19.15s
users : Update authorized_keys using the keys repo --------------------- 18.43s
testnode : Zap all non-root disks --------------------------------------- 9.59s
testnode : Ensure packages are not present. ----------------------------- 9.53s
testnode : Install packages --------------------------------------------- 6.20s
testnode : ifdown and ifup ---------------------------------------------- 5.15s
users : Remove revoked users -------------------------------------------- 4.99s
common : Update apt cache ----------------------------------------------- 4.01s
testnode : Update apt cache. -------------------------------------------- 3.65s
testnode : Install python-apt ------------------------------------------- 3.11s
testnode : Blow away lingering OSD data and FSIDs ----------------------- 2.94s
testnode : Install apt keys --------------------------------------------- 2.09s
common : Install nrpe package and dependencies (Ubuntu) ----------------- 1.99s
testnode : Install packages via pip ------------------------------------- 1.72s
users : Update authorized_keys for each user with literal keys ---------- 1.72s
ansible-managed : Add authorized keys for the ansible user. ------------- 1.59s
Gathering Facts --------------------------------------------------------- 1.59s
testnode : Stop apache2 ------------------------------------------------- 1.45s
common : Upload megacli and cli64 for raid monitoring and smart.pl to /usr/sbin/. --- 1.18s
2019-07-11T14:06:56.319 ERROR:teuthology.task.ansible:Failed to parse ansible failure log: /tmp/teuth_ansible_failures_mF91TY (while parsing a flow mapping
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 54
expected ',' or '}', but got ':'
in "/tmp/teuth_ansible_failures_mF91TY", line 1, column 274)
2019-07-11T14:06:56.320 INFO:teuthology.task.ansible:Archiving ansible failure log at: /home/teuthworker/archive/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml
2019-07-11T14:06:56.323 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 89, in run_tasks
manager.__enter__()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/__init__.py", line 123, in __enter__
self.begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 426, in begin
super(CephLab, self).begin()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 268, in begin
self.execute_playbook()
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 295, in execute_playbook
self._handle_failure(command, status)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/task/ansible.py", line 319, in _handle_failure
raise AnsibleFailedError(failures)
AnsibleFailedError: 7
/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
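<p>The <code>RepresenterError</code> suggests the failure dict holds a type <code>yaml.safe_dump</code> refuses: <code>u'mira062'</code> is most likely an Ansible <code>str</code> subclass (<code>AnsibleUnsafeText</code>), not a plain string. A sketch of recursively coercing the structure to builtins before dumping; <code>to_plain</code> is hypothetical, not teuthology code:</p>

```python
def to_plain(obj):
    """Recursively coerce str/int/float subclasses, dicts, and lists
    into plain builtins so a SafeDumper can represent them.

    Hypothetical helper: SafeDumper looks representers up by exact
    type, so subclasses of str fall through to represent_undefined.
    """
    if isinstance(obj, dict):
        return {to_plain(k): to_plain(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_plain(v) for v in obj]
    if isinstance(obj, bool):   # bool before int: bool is an int subclass
        return obj
    if isinstance(obj, int):
        return int(obj)
    if isinstance(obj, float):
        return float(obj)
    if isinstance(obj, str):
        return str(obj)
    return obj
```

<p>Running the failure object through <code>to_plain()</code> before <code>yaml.safe_dump()</code> in <code>failure_log.py</code> would presumably avoid the error.</p>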
<p><a class="external" href="http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml">http://qa-proxy.ceph.com/teuthology/swagner-2019-07-11_11:08:19-rados:mgr-wip-swagner-testing-distro-basic-mira/4110614/ansible_failures.yaml</a></p>
<pre>
Failure object was: {'mira062.front.sepia.ceph.com': {'_ansible_no_log': False, u'invocation': {u'module_args': {u'name': u'mira062'}}, 'changed': False, u'msg': u"Command failed rc=1, out=, err=Could not get property: Failed to activate service 'org.freedesktop.hostname1': timed out\n"}}
Traceback (most recent call last):
File "/home/teuthworker/src/git.ceph.com_git_ceph-cm-ansible_master/callback_plugins/failure_log.py", line 44, in log_failure
log.error(yaml.safe_dump(failure))
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 309, in safe_dump
return dump_all([data], stream, Dumper=SafeDumper, **kwds)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/__init__.py", line 281, in dump_all
dumper.represent(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 29, in represent
node = self.represent_data(data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 58, in represent_data
node = self.yaml_representers[data_types[0]](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 227, in represent_dict
return self.represent_mapping(u'tag:yaml.org,2002:map', data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 125, in represent_mapping
node_value = self.represent_data(item_value)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 68, in represent_data
node = self.yaml_representers[None](self, data)
File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/virtualenv/local/lib/python2.7/site-packages/yaml/representer.py", line 251, in represent_undefined
raise RepresenterError("cannot represent an object", data)
RepresenterError: ('cannot represent an object', u'mira062')
</pre>
<p>Is this a Teuthology issue, a ceph-ansible issue, or simply a consequence of mira062 timing out?</p>
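<p>Independent of where the timeout came from, the traceback itself shows <code>yaml.safe_dump</code> refusing to represent a non-plain string type (Ansible wraps strings in its own <code>str</code> subclasses, which the SafeDumper does not know). A minimal sketch of a possible fix on the logging side, assuming the failure dict only nests dicts, lists, and strings, is to coerce everything to built-in types before dumping. The class name below is hypothetical, a stand-in for Ansible's string types:</p>

```python
import yaml

class AnsibleLikeText(str):
    """Hypothetical stand-in for Ansible's str subclasses, which
    yaml.safe_dump cannot represent (it raises RepresenterError)."""

def to_plain(obj):
    """Recursively coerce dict/list/str subclasses to plain built-in
    types so that yaml.safe_dump accepts the structure."""
    if isinstance(obj, dict):
        return {to_plain(k): to_plain(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_plain(v) for v in obj]
    if isinstance(obj, str):
        return str(obj)
    return obj

failure = {AnsibleLikeText('mira062'): {'msg': AnsibleLikeText('timed out')}}

try:
    yaml.safe_dump(failure)           # reproduces the RepresenterError
except yaml.representer.RepresenterError:
    pass

dumped = yaml.safe_dump(to_plain(failure))   # succeeds after coercion
```

<p>This would also silence the error for any other host name Ansible hands back, since the dumped structure no longer contains foreign types.</p>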
mgr - Bug #39642 (Resolved): JSON-formatting the mgrmap is huge
https://tracker.ceph.com/issues/39642
2019-05-09T08:23:22Z
Sebastian Wagner
<p>When the mgrmap is dumped as JSON (which teuthology runs do frequently), the output severely pollutes the logs.</p>
<p>Possible mitigations:</p>
<ul>
<li>make <code>ceph status --format json-pretty</code> omit the mgrmap</li>
<li>omit attributes that match their default when dumping the mgrmap</li>
<li>call <code>ceph status</code> less frequently in teuthology</li>
</ul>
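<p>As an illustration of the first two points, a hypothetical client-side filter could strip the bulk of the mgrmap from the JSON status before it is logged. The helper name is made up, and the retained field names are assumptions about the mgrmap layout, not shipped code:</p>

```python
import json

def slim_status(status_json):
    """Reduce the mgrmap in `ceph status -f json` output to a few
    summary fields before logging (hypothetical helper; not an
    actual ceph CLI option)."""
    status = json.loads(status_json)
    mgrmap = status.get('mgrmap')
    if isinstance(mgrmap, dict):
        # Keep only a small summary; drop module lists etc.
        status['mgrmap'] = {k: v for k, v in mgrmap.items()
                            if k in ('active_name', 'available')}
    return json.dumps(status, sort_keys=True)

raw = json.dumps({
    'health': {'status': 'HEALTH_OK'},
    'mgrmap': {'active_name': 'x', 'available': True,
               'modules': ['balancer'] * 50},   # the log-flooding part
})
slimmed = slim_status(raw)
```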
Ceph - Bug #38145 (New): /usr/bin/ld: cmdparse.cc.o: bad reloc symbol index
https://tracker.ceph.com/issues/38145
2019-02-01T10:09:43Z
Sebastian Wagner
<p>Hey,</p>
<p>In the Sepia lab, the "Ubuntu Xenial" flavour build fails with a linker error:</p>
<pre>
/usr/bin/ld: common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: bad reloc symbol index (0x30317453 >= 0x2d1) for offset 0x4961534563497374 in section `.debug_info'
common/CMakeFiles/common-common-objs.dir/cmdparse.cc.o: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
src/CMakeFiles/ceph-common.dir/build.make:446: recipe for target 'lib/libceph-common.so.1' failed
make[4]: *** [lib/libceph-common.so.1] Error 1
make[4]: Leaving directory '/build/ceph-14.0.1-3099-g9e926e9/obj-x86_64-linux-gnu'
</pre>
<ul>
<li><a href="https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=xenial,DIST=xenial,MACHINE_SIZE=huge/17352//consoleFull" class="external">Jenkins Log</a></li>
<li><a href="https://shaman.ceph.com/builds/ceph/wip-swagner-testing/9e926e9927a4c9592403dbce959e526ba3860206/default/140455/" class="external">Shaman build</a></li>
</ul>
<p>I don't know whether this error is reproducible.</p>
Ceph - Bug #37858 (Can't reproduce): Python 3: UnicodeDecodeError in /usr/bin/ceph in parse_json_...
https://tracker.ceph.com/issues/37858
2019-01-10T11:24:31Z
Sebastian Wagner
<p>I'm seeing this in the log of a recent nautilus build:</p>
<pre>
2019-01-10 11:05:18.613 7fce75ddb700 1 librados: init done
2019-01-10 11:05:18.613 7fce75ddb700 1 librados: init done
Traceback (most recent call last):
File "/usr/bin/ceph", line 1212, in <module>
retval = main()
File "/usr/bin/ceph", line 1136, in main
sigdict = parse_json_funcsigs(outbuf.decode('utf-8'), 'cli')
File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa0 in position 5827: invalid start byte
</pre>
<p>Unfortunately, I don't have <strong>any</strong> further information yet.</p>
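<p>From the traceback, the CLI decodes the mon's command-description blob with strict UTF-8, so a single stray byte (0xa0 here) aborts the whole command. A defensive decoding sketch, as one possible mitigation rather than the actual <code>/usr/bin/ceph</code> fix (the helper name is hypothetical):</p>

```python
def decode_funcsigs(outbuf):
    """Decode the command-signature blob tolerantly: substitute the
    Unicode replacement character for any undecodable byte instead of
    crashing with UnicodeDecodeError. (Hypothetical helper; a sketch
    of a mitigation, not the shipped code.)"""
    try:
        return outbuf.decode('utf-8')
    except UnicodeDecodeError:
        return outbuf.decode('utf-8', errors='replace')

good = decode_funcsigs(b'{"sig": "osd tree"}')
bad = decode_funcsigs(b'{"sig": "osd tree"}\xa0')   # stray 0xa0 byte
```

<p>The downside is that replacement silently mangles the offending signature, so logging the raw bytes alongside would help track down where the 0xa0 came from in the first place.</p>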
Ceph - Bug #37373 (New): Interactive mode CLI with Python 3: Traceback when pressing ^D
https://tracker.ceph.com/issues/37373
2018-11-22T15:06:43Z
Sebastian Wagner
<p>Hey,</p>
<p>Pressing ^D in the interactive mode (REPL) of the <code>ceph</code> command under Python 3 prints a traceback:</p>
<pre>
$ ceph
ceph>
ceph>
ceph> Traceback (most recent call last):
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1250, in <module>
retval = main()
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 1229, in main
raw_write(outbuf)
File "/home/sebastian/Repos/ceph/build/bin/ceph", line 172, in raw_write
raw_stdout.write(buf)
TypeError: a bytes-like object is required, not 'str'
</pre>
<p>Does anyone actually use this mode? Related:</p>
<p><a class="external" href="https://marc.info/?i=CALe9h7c5kJudfsQ6Vf_vczUG0CeoN8=dxznC=92RamBvxD9u0w%20()%20mail%20!%20gmail%20!%20com">https://marc.info/?i=CALe9h7c5kJudfsQ6Vf_vczUG0CeoN8=dxznC=92RamBvxD9u0w%20()%20mail%20!%20gmail%20!%20com</a></p>
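<p>The EOF path evidently hands <code>raw_write</code> a <code>str</code> while <code>raw_stdout</code> expects bytes. A sketch of a tolerant writer, with a <code>stream</code> parameter added here purely for illustration (the real helper in <code>/usr/bin/ceph</code> differs):</p>

```python
import io
import sys

def raw_write(buf, stream=None):
    """Write command output as bytes, accepting either bytes or str,
    so the ^D/EOF path can no longer raise
    "TypeError: a bytes-like object is required, not 'str'"."""
    if isinstance(buf, str):
        buf = buf.encode('utf-8')
    if stream is None:
        # sys.stdout.buffer is the underlying binary stream in Python 3.
        stream = sys.stdout.buffer
    stream.write(buf)

out = io.BytesIO()
raw_write('ceph> ', stream=out)   # str path, previously the crash
raw_write(b'ok\n', stream=out)    # bytes path, unchanged
```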
Ceph - Bug #23854 (Can't reproduce): Linking libceph_zstd.so sometimes fails
https://tracker.ceph.com/issues/23854
2018-04-25T13:00:23Z
Sebastian Wagner
<p>I've been getting this error intermittently for a few months when building from source:<br /><pre>
[ 20%] Linking CXX shared library ../../../lib/libceph_zstd.so
/usr/bin/ld: libzstd/lib/libzstd.a(error_private.c.o): relocation R_X86_64_PC32 against symbol `ERR_getErrorString' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
src/compressor/zstd/CMakeFiles/ceph_zstd.dir/build.make:95: recipe for target 'lib/libceph_zstd.so.2.0.0' failed
make[2]: *** [lib/libceph_zstd.so.2.0.0] Error 1
CMakeFiles/Makefile2:20785: recipe for target 'src/compressor/zstd/CMakeFiles/ceph_zstd.dir/all' failed
make[1]: *** [src/compressor/zstd/CMakeFiles/ceph_zstd.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
</pre></p>
<p>This mostly happens after running <code>make</code> incrementally over a longer period without a full rebuild.</p>
<p>Removing the build files helps as a workaround:<br /><pre>
rm -r build/src/compressor/zstd
</pre></p>
<p>Environment:</p>
<ul>
<li>Ubuntu 17.10</li>
<li>GNU ld (GNU Binutils for Ubuntu) 2.29.1</li>
<li>g++ (Ubuntu 7.2.0-8ubuntu3) 7.2.0</li>
<li>cmake version 3.9.1</li>
<li>git on master for the last few months.</li>
</ul>
rbd - Bug #22253 (Can't reproduce): "rbd info" crashed: stack smashing detected
https://tracker.ceph.com/issues/22253
2017-11-27T14:37:58Z
Sebastian Wagner
<p>Environment: a fairly small vstart cluster.</p>
<p>This is the stack trace:<br /><pre>
#3 0x00007fffed44711c in __GI___fortify_fail (msg=<optimized out>, msg@entry=0x7fffed4bd441 "stack smashing detected") at fortify_fail.c:37
#4 0x00007fffed4470c0 in __stack_chk_fail () at stack_chk_fail.c:28
#5 0x00007ffff78f0beb in librbd::ImageCtx::perf_start (this=this@entry=0x555555b7bf70, name="librbd-8c39e2ae8944a-rbd-huge2") at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:397
#6 0x00007ffff78f3cb4 in librbd::ImageCtx::init (this=0x555555b7bf70) at /home/sebastian/Repos/ceph/src/librbd/ImageCtx.cc:275
#7 0x00007ffff799dacd in librbd::image::OpenRequest<librbd::ImageCtx>::send_register_watch (this=this@entry=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:477
#8 0x00007ffff79a3102 in librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata (this=this@entry=0x555555b7fe60, result=result@entry=0x7fffb77fa374) at /home/sebastian/Repos/ceph/src/librbd/image/OpenRequest.cc:471
#9 0x00007ffff79a351f in librbd::util::detail::rados_state_callback<librbd::image::OpenRequest<librbd::ImageCtx>, &librbd::image::OpenRequest<librbd::ImageCtx>::handle_v2_apply_metadata, true> (c=<optimized out>, arg=0x555555b7fe60) at /home/sebastian/Repos/ceph/src/librbd/Utils.h:39
#10 0x00007ffff75d678d in librados::C_AioComplete::finish (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/librados/AioCompletionImpl.h:169
#11 0x0000555555613949 in Context::complete (this=0x7fffd0001b60, r=<optimized out>) at /home/sebastian/Repos/ceph/src/include/Context.h:70
#12 0x00007fffeeab6010 in Finisher::finisher_thread_entry (this=0x555555acb3e8) at /home/sebastian/Repos/ceph/src/common/Finisher.cc:72
#13 0x00007fffee3a86ba in start_thread (arg=0x7fffb77fe700) at pthread_create.c:333
#14 0x00007fffed4353dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
</pre></p>
Ceph - Bug #20619 (Closed): MgrClient.cc: 43: FAILED assert(msgr != nullptr)
https://tracker.ceph.com/issues/20619
2017-07-13T16:17:29Z
Sebastian Wagner
<p>I got this after creating a replicated pool with very few PGs.</p>
Environment:
<ul>
<li>Git revision: 7e12840db34f8a0fb</li>
<li>vstart.sh -X -l</li>
</ul>
<pre>
/ceph/src/mgr/MgrClient.cc: In function 'void MgrClient::init()' thread 7f34b5310700 time 2017-07-13 17:51:39.314068
/ceph/src/mgr/MgrClient.cc: 43: FAILED assert(msgr != nullptr)
ceph version 12.1.0-761-g3ad4123 (3ad4123c83b42bfd49dc3594c96a0c7539bd6511) luminous (rc)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x102) [0x7f34a9f847d2]
2: (MgrClient::init()+0x5d) [0x7f34a9fe4bed]
3: (librados::RadosClient::connect()+0x90e) [0x7f34b29cde0e]
4: (rados_connect()+0x1f) [0x7f34b298199f]
5: (()+0x5f6dc) [0x7f34b2ce16dc]
6: (PyEval_EvalFrameEx()+0x68a) [0x4c468a]
7: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
8: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
9: ../env/bin/python() [0x4de6fe]
10: (PyObject_Call()+0x43) [0x4b0cb3]
11: ../env/bin/python() [0x4f492e]
12: (PyObject_Call()+0x43) [0x4b0cb3]
13: ../env/bin/python() [0x4f46a7]
14: ../env/bin/python() [0x4b670c]
15: (PyObject_Call()+0x43) [0x4b0cb3]
16: (PyEval_EvalFrameEx()+0x5faf) [0x4c9faf]
17: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
18: ../env/bin/python() [0x4de6fe]
19: (PyObject_Call()+0x43) [0x4b0cb3]
20: ../env/bin/python() [0x4f492e]
21: (PyObject_Call()+0x43) [0x4b0cb3]
22: ../env/bin/python() [0x569a48]
23: (PyEval_EvalFrameEx()+0x6345) [0x4ca345]
24: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
25: (PyEval_EvalFrameEx()+0x6099) [0x4ca099]
26: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
27: (PyEval_EvalFrameEx()+0x6099) [0x4ca099]
28: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
29: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
30: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
31: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
32: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
33: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
34: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
35: ../env/bin/python() [0x4de6fe]
36: (PyObject_Call()+0x43) [0x4b0cb3]
37: (PyObject_CallFunctionObjArgs()+0x16a) [0x4b97fa]
38: (_PyObject_GenericGetAttrWithDict()+0x17c) [0x4b00dc]
39: (PyEval_EvalFrameEx()+0x4c1) [0x4c44c1]
40: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
41: ../env/bin/python() [0x4de6fe]
42: (PyObject_Call()+0x43) [0x4b0cb3]
43: ../env/bin/python() [0x4f492e]
44: (PyObject_Call()+0x43) [0x4b0cb3]
45: ../env/bin/python() [0x569a48]
46: ../env/bin/python() [0x589fb1]
47: ../env/bin/python() [0x50157c]
48: (PyEval_EvalFrameEx()+0x615e) [0x4ca15e]
49: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
50: ../env/bin/python() [0x4de8b8]
51: (PyObject_Call()+0x43) [0x4b0cb3]
52: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
53: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
54: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
55: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
56: ../env/bin/python() [0x4de8b8]
57: (PyObject_Call()+0x43) [0x4b0cb3]
58: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
59: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
60: ../env/bin/python() [0x4de8b8]
61: (PyObject_Call()+0x43) [0x4b0cb3]
62: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
63: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
64: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
65: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
66: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
67: ../env/bin/python() [0x4de6fe]
68: (PyObject_Call()+0x43) [0x4b0cb3]
69: ../env/bin/python() [0x4f492e]
70: (PyObject_Call()+0x43) [0x4b0cb3]
71: (PyEval_CallObjectWithKeywords()+0x30) [0x4ce5d0]
72: (_pyglib_handler_marshal()+0x39) [0x7f348a002759]
73: (()+0x4aab3) [0x7f348a24fab3]
74: (g_main_context_dispatch()+0x15a) [0x7f348a24f04a]
75: (()+0x4a3f0) [0x7f348a24f3f0]
76: (g_main_loop_run()+0xc2) [0x7f348a24f712]
77: (()+0xa534) [0x7f348a520534]
78: (PyEval_EvalFrameEx()+0x5780) [0x4c9780]
79: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
80: ../env/bin/python() [0x4de8b8]
81: (PyObject_Call()+0x43) [0x4b0cb3]
82: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
83: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
84: ../env/bin/python() [0x4de8b8]
85: (PyObject_Call()+0x43) [0x4b0cb3]
86: (PyEval_EvalFrameEx()+0x2ad1) [0x4c6ad1]
87: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
88: (PyEval_EvalFrameEx()+0x68d1) [0x4ca8d1]
89: (PyEval_EvalFrameEx()+0x5d8f) [0x4c9d8f]
90: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
91: (PyEval_EvalFrameEx()+0x6099) [0x4ca099]
92: (PyEval_EvalCodeEx()+0x255) [0x4c2765]
93: (PyEval_EvalCode()+0x19) [0x4c2509]
94: ../env/bin/python() [0x4f1def]
95: (PyRun_FileExFlags()+0x82) [0x4ec652]
96: (PyRun_SimpleFileExFlags()+0x191) [0x4eae31]
97: (Py_Main()+0x68a) [0x49e14a]
98: (__libc_start_main()+0xf0) [0x7f34b4b58830]
99: (_start()+0x29) [0x49d9d9]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
[1] 17661 abort (core dumped)
</pre>