Bug #65572

Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi155 with status 1

Added by Venky Shankar 13 days ago. Updated 1 day ago.

Status: New
Priority: Low
Assignee: -
Category: Correctness/Safety
Target version:
% Done: 0%
Source: Q/A
Tags:
Backport: quincy,reef,squid
Regression: No
Severity: 3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS): MDS
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

This has started to show up again (with fs/thrash). See: https://pulpito.ceph.com/yuriw-2024-04-05_22:36:11-fs-wip-yuri7-testing-2024-04-04-0800-distro-default-smithi/7642088/

Last few lines before the failure in teuthology.log:

2024-04-06T00:51:55.747 INFO:tasks.workunit.client.1.smithi155.stdout:'.snap/k/linux-2.6.33/virt/kvm/eventfd.c' -> './k/linux-2.6.33/virt/kvm/eventfd.c'
2024-04-06T00:51:55.747 INFO:tasks.workunit.client.1.smithi155.stdout:'.snap/k/linux-2.6.33/virt/kvm/ioapic.c' -> './k/linux-2.6.33/virt/kvm/ioapic.c'
2024-04-06T00:51:55.747 INFO:tasks.workunit.client.1.smithi155.stdout:'.snap/k/linux-2.6.33/virt/kvm/ioapic.h' -> './k/linux-2.6.33/virt/kvm/ioapic.h'
2024-04-06T00:51:55.747 INFO:tasks.workunit.client.1.smithi155.stdout:'.snap/k/linux-2.6.33/virt/kvm/iodev.h' -> './k/linux-2.6.33/virt/kvm/iodev.h'
2024-04-06T00:51:55.747 INFO:tasks.workunit.client.1.smithi155.stdout:'.snap/k/linux-2.6.33/virt/kvm/iommu.c' -> './k/linux-2.6.33/virt/kvm/iommu.c'
2024-04-06T00:51:55.747 INFO:tasks.workunit.client.1.smithi155.stdout:'.snap/k/linux-2.6.33/virt/kvm/irq_comm.c' -> './k/linux-2.6.33/virt/kvm/irq_comm.c'
2024-04-06T00:51:55.747 INFO:tasks.workunit.client.1.smithi155.stdout:'.snap/k/linux-2.6.33/virt/kvm/kvm_main.c' -> './k/linux-2.6.33/virt/kvm/kvm_main.c'
2024-04-06T00:51:55.769 DEBUG:teuthology.orchestra.run:got remote process result: 1
2024-04-06T00:51:55.770 INFO:tasks.workunit:Stopping ['fs/snaps'] on client.1...

I'm not sure if the earlier instance was resolved, but I hadn't seen this failure in a while.
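
For reference, a minimal sketch of the untar/snapshot/copy-back cycle this workunit exercises (inferred from the log paths above; the actual qa/workunits/fs/snaps/untar_snap_rm.sh script may differ in its details):

#!/bin/sh -ex
# Sketch only, not the verbatim workunit; names inferred from the log above.
tar xjf linux-2.6.33.tar.bz2   # populate a large tree on the CephFS mount
mkdir .snap/k                  # take a snapshot named "k"
rm -rf linux-2.6.33            # delete the live tree
cp -av .snap/k .               # copy the tree back out of the snapshot
rm -rf k                       # remove the copy
rmdir .snap/k                  # delete the snapshot

The cp -av step is where the run above exits with status 1, i.e. the copy out of the snapshot did not complete.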

#1

Updated by Venky Shankar 13 days ago

  • Category set to Correctness/Safety
  • Source set to Q/A
#2

Updated by Xiubo Li 1 day ago · Edited

There aren't even any ceph-side logs. It seems the cluster died suddenly.
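
If the archived teuthology.log is all that is left, one illustrative way to look for signs that the cluster went away around the failure (the pattern list is a guess, not an exhaustive triage recipe):

grep -nE 'laggy|slow request|wrongly marked|MDS_|OSD_' teuthology.log | tail -n 50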

#3

Updated by Venky Shankar 1 day ago

  • Priority changed from Normal to Low