Bug #8004

LibCephFS.HardlinkNoOriginal hang

Added by Sage Weil almost 10 years ago. Updated almost 10 years ago.

Status:
Resolved
Priority:
High
Assignee:
-
Category:
-
Target version:
-
% Done:

0%

Source:
Q/A
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Component(FS):
Labels (FS):
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

ubuntu@teuthology:/var/lib/teuthworker/archive/sage-2014-04-05_15:44:13-multimds:verify-wip-ms-dump-testing-basic-plana/172361

This one is confusing. The client log /log/ceph-client.0.12295.log.gz doesn't seem to show a hung ll_ operation.

The pointers all appear to be 32-bit even though it is an x86_64 box... I'm not sure why that would be the case. Note that ceph-fuse doesn't work with 32-bit inodes.
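
For context, the gtest case named in the title lives in the libcephfs test suite run by libcephfs/test.sh. The sketch below is a reconstruction of roughly what it exercises, not the verbatim test source; the scratch directory and file names are illustrative. The point is that only a hardlink (no original name) is left on the inode before the client unmounts, and the unmount is where the hang was observed:

    // Rough sketch of what LibCephFS.HardlinkNoOriginal exercises
    // (reconstruction, not the verbatim test; names are illustrative).
    #include <cephfs/libcephfs.h>
    #include <fcntl.h>
    #include <cstdio>
    #include <cstdlib>

    static void check(int rc, const char *what) {
      if (rc < 0) { std::fprintf(stderr, "%s failed: %d\n", what, rc); std::exit(1); }
    }

    int main() {
      struct ceph_mount_info *cmount;
      check(ceph_create(&cmount, NULL), "ceph_create");
      check(ceph_conf_read_file(cmount, NULL), "ceph_conf_read_file");
      check(ceph_mount(cmount, "/"), "ceph_mount");

      check(ceph_mkdir(cmount, "/hardlink_test", 0755), "ceph_mkdir");  // scratch dir
      check(ceph_chdir(cmount, "/hardlink_test"), "ceph_chdir");

      int fd = ceph_open(cmount, "original", O_CREAT | O_WRONLY, 0644);
      check(fd, "ceph_open");
      check(ceph_close(cmount, fd), "ceph_close");

      check(ceph_link(cmount, "original", "hardlink"), "ceph_link");  // second name for the inode
      check(ceph_unlink(cmount, "original"), "ceph_unlink");          // drop the original name

      // Only the hardlink remains; tearing down the mount here is where the
      // hang reported in this ticket showed up.
      check(ceph_unmount(cmount), "ceph_unmount");
      ceph_release(cmount);
      return 0;
    }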

Associated revisions

Revision 02aedbc4 (diff)
Added by Yan, Zheng almost 10 years ago

client: wake up umount waiter if receiving session open message

Wake up umount waiter if receiving session open message while
umounting. The umount waiter will re-close the session.

Fixes: #8004
Signed-off-by: Yan, Zheng <>
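
To illustrate the mechanism the commit message describes, here is a toy model (not Ceph code, and not the diff from 02aedbc4): the umount path waits on a condition variable until every MDS session is gone, but one session is still in the middle of opening when umount starts. Without a notify in the SESSION_OPEN handler, nothing ever wakes the waiter once that open completes, and the unmount hangs; with it, the waiter wakes up and re-closes the late-opened session.

    // Toy model of the hang and fix (not Ceph code); the session counters and
    // thread structure are simplifications for illustration.
    #include <chrono>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex client_lock;
    std::condition_variable mount_cond;
    bool unmounting = false;
    int opening_sessions = 1;  // a session-open request is still in flight
    int open_sessions = 0;

    void do_unmount() {
      std::unique_lock<std::mutex> l(client_lock);
      unmounting = true;
      mount_cond.wait(l, [] {
        if (open_sessions > 0) {
          // Real client: re-send a session close request and keep waiting for
          // the ack. The toy just closes the session on the spot.
          std::cout << "umount: re-closing late-opened session\n";
          open_sessions = 0;
        }
        return opening_sessions == 0 && open_sessions == 0;
      });
      std::cout << "umount: all sessions closed, done\n";
    }

    void handle_session_open() {  // models receiving a session open message
      std::this_thread::sleep_for(std::chrono::milliseconds(100));
      std::lock_guard<std::mutex> l(client_lock);
      --opening_sessions;
      ++open_sessions;
      std::cout << "handler: SESSION_OPEN received while unmounting\n";
      if (unmounting)
        mount_cond.notify_all();  // the fix: wake up the umount waiter
    }

    int main() {
      std::thread waiter(do_unmount);
      std::thread handler(handle_session_open);
      handler.join();
      waiter.join();
    }

Without the notify_all() in handle_session_open(), the waiter would sleep on mount_cond forever once the open completes, which matches the symptom described above.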

History

#1 Updated by Sage Weil almost 10 years ago

Seems easy to reproduce; just hit this again with:

ubuntu@teuthology:/var/lib/teuthworker/archive/sage-2014-04-06_16:36:32-multimds:verify-wip-7739-testing-basic-plana/174521$ cat orig.config.yaml 
archive_path: /var/lib/teuthworker/archive/sage-2014-04-06_16:36:32-multimds:verify-wip-7739-testing-basic-plana/174521
description: multimds/verify/{clusters/9-mds.yaml debug/mds_client.yaml fs/btrfs.yaml
  overrides/whitelist_wrongly_marked_down.yaml tasks/libcephfs_interface_tests.yaml
  validater/valgrind.yaml}
email: null
job_id: '174521'
kernel:
  kdb: true
  sha1: 00836fb008873a46a79a976f68f976bf04931067
last_in_suite: false
machine_type: plana
name: sage-2014-04-06_16:36:32-multimds:verify-wip-7739-testing-basic-plana
nuke-on-error: true
os_type: ubuntu
overrides:
  admin_socket:
    branch: wip-7739
  ceph:
    conf:
      client:
        debug client: 10
        debug ms: 1
      mds:
        debug mds: 20
        debug ms: 1
        ms dump on send: true
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug filestore: 20
        debug journal: 20
        debug ms: 1
        debug osd: 20
        osd op thread timeout: 60
        osd sloppy crc: true
    fs: btrfs
    log-whitelist:
    - slow request
    - wrongly marked me down
    sha1: 2f7522c83a35b2726e73d110ab07a25308f5ea13
    valgrind:
      mds:
      - --tool=memcheck
      mon:
      - --tool=memcheck
      - --leak-check=full
      - --show-reachable=yes
      osd:
      - --tool=memcheck
  ceph-deploy:
    branch:
      dev: wip-7739
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon:
        debug mon: 1
        debug ms: 20
        debug paxos: 20
        osd default pool size: 2
  ceph-fuse:
    client.0:
      valgrind:
      - --tool=memcheck
      - --leak-check=full
      - --show-reachable=yes
  install:
    ceph:
      sha1: 2f7522c83a35b2726e73d110ab07a25308f5ea13
  s3tests:
    branch: master
  workunit:
    sha1: 2f7522c83a35b2726e73d110ab07a25308f5ea13
owner: scheduled_sage@flab
roles:
- - mon.a
  - mon.c
  - mds.a
  - mds.b
  - mds.c
  - mds.d
  - osd.0
  - osd.1
  - osd.2
- - mon.b
  - mds.e
  - mds.f
  - mds.g
  - mds.h
  - mds.i
  - osd.3
  - osd.4
  - osd.5
- - client.0
tasks:
- chef: null
- clock.check: null
- install: null
- ceph: null
- ceph-fuse: null
- workunit:
    clients:
      client.0:
      - libcephfs/test.sh
teuthology_branch: master
verbose: true
worker_log: /var/lib/teuthworker/archive/worker_logs/worker.plana.16895

#2 Updated by Sage Weil almost 10 years ago

Oh, and the 32-bit pointer thing is because ceph-fuse is running under valgrind.

#3 Updated by Zheng Yan almost 10 years ago

  • Status changed from New to 7

#4 Updated by Zheng Yan almost 10 years ago

  • Status changed from 7 to Resolved