Bug #24579 (closed): client: returning garbage (?) for readdir

Added by Patrick Donnelly almost 6 years ago. Updated over 5 years ago.

Status: Resolved
Priority: Urgent
Category: Correctness/Safety
% Done: 0%
Source: Q/A
Backport: mimic,luminous
Regression: No
Severity: 1 - critical
Component(FS): Client

Description

2018-06-18T23:00:06.815 INFO:tasks.workunit.client.0.smithi175.stdout:FFSB version 6.0-RC2 started
2018-06-18T23:00:06.815 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.821 INFO:tasks.ceph.osd.1.smithi089.stderr:2018-06-18 23:00:06.803 7fc3c007e700 -1 received  signal: Hangup from /usr/bin/python /usr/bin/daemon-helper kill ceph-osd -f --cluster ceph -i 1  (PID: 10071) UID: 0
2018-06-18T23:00:06.823 INFO:tasks.workunit.client.0.smithi175.stdout:benchmark time = 300
2018-06-18T23:00:06.823 INFO:tasks.workunit.client.0.smithi175.stdout:ThreadGroup 0
2018-06-18T23:00:06.823 INFO:tasks.workunit.client.0.smithi175.stdout:================
2018-06-18T23:00:06.823 INFO:tasks.workunit.client.0.smithi175.stdout:   num_threads      = 32
2018-06-18T23:00:06.823 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.823 INFO:tasks.workunit.client.0.smithi175.stdout:   read_random      = off
2018-06-18T23:00:06.824 INFO:tasks.workunit.client.0.smithi175.stdout:   read_size        = 0   (0B)
2018-06-18T23:00:06.824 INFO:tasks.workunit.client.0.smithi175.stdout:   read_blocksize   = 0   (0B)
2018-06-18T23:00:06.824 INFO:tasks.workunit.client.0.smithi175.stdout:   read_skip        = off
2018-06-18T23:00:06.824 INFO:tasks.workunit.client.0.smithi175.stdout:   read_skipsize    = 0   (0B)
2018-06-18T23:00:06.824 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.824 INFO:tasks.workunit.client.0.smithi175.stdout:   write_random     = on
2018-06-18T23:00:06.824 INFO:tasks.workunit.client.0.smithi175.stdout:   write_size       = 5242880     (5MB)
2018-06-18T23:00:06.825 INFO:tasks.workunit.client.0.smithi175.stdout:   fsync_file       = 0
2018-06-18T23:00:06.825 INFO:tasks.workunit.client.0.smithi175.stdout:   write_blocksize  = 4096        (4KB)
2018-06-18T23:00:06.825 INFO:tasks.workunit.client.0.smithi175.stdout:   wait time        = 0
2018-06-18T23:00:06.825 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.825 INFO:tasks.workunit.client.0.smithi175.stdout:   op weights
2018-06-18T23:00:06.825 INFO:tasks.workunit.client.0.smithi175.stdout:                   read = 0 (0.00%)
2018-06-18T23:00:06.826 INFO:tasks.workunit.client.0.smithi175.stdout:                readall = 0 (0.00%)
2018-06-18T23:00:06.826 INFO:tasks.workunit.client.0.smithi175.stdout:                  write = 1 (100.00%)
2018-06-18T23:00:06.826 INFO:tasks.workunit.client.0.smithi175.stdout:                 create = 0 (0.00%)
2018-06-18T23:00:06.826 INFO:tasks.workunit.client.0.smithi175.stdout:                 append = 0 (0.00%)
2018-06-18T23:00:06.826 INFO:tasks.workunit.client.0.smithi175.stdout:                 delete = 0 (0.00%)
2018-06-18T23:00:06.826 INFO:tasks.workunit.client.0.smithi175.stdout:                 metaop = 0 (0.00%)
2018-06-18T23:00:06.826 INFO:tasks.workunit.client.0.smithi175.stdout:              createdir = 0 (0.00%)
2018-06-18T23:00:06.827 INFO:tasks.workunit.client.0.smithi175.stdout:                   stat = 0 (0.00%)
2018-06-18T23:00:06.827 INFO:tasks.workunit.client.0.smithi175.stdout:               writeall = 0 (0.00%)
2018-06-18T23:00:06.827 INFO:tasks.workunit.client.0.smithi175.stdout:         writeall_fsync = 0 (0.00%)
2018-06-18T23:00:06.827 INFO:tasks.workunit.client.0.smithi175.stdout:             open_close = 0 (0.00%)
2018-06-18T23:00:06.827 INFO:tasks.workunit.client.0.smithi175.stdout:            write_fsync = 0 (0.00%)
2018-06-18T23:00:06.827 INFO:tasks.workunit.client.0.smithi175.stdout:           create_fsync = 0 (0.00%)
2018-06-18T23:00:06.828 INFO:tasks.workunit.client.0.smithi175.stdout:           append_fsync = 0 (0.00%)
2018-06-18T23:00:06.828 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.828 INFO:tasks.workunit.client.0.smithi175.stdout:FileSystem .�r��U
2018-06-18T23:00:06.828 INFO:tasks.workunit.client.0.smithi175.stdout:==========
2018-06-18T23:00:06.828 INFO:tasks.workunit.client.0.smithi175.stdout:   num_dirs         = 0
2018-06-18T23:00:06.828 INFO:tasks.workunit.client.0.smithi175.stdout:   starting files   = 128
2018-06-18T23:00:06.828 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.829 INFO:tasks.workunit.client.0.smithi175.stdout:   min file size    = 104857600   (100MB)
2018-06-18T23:00:06.829 INFO:tasks.workunit.client.0.smithi175.stdout:   max file size    = 104857600   (100MB)
2018-06-18T23:00:06.829 INFO:tasks.workunit.client.0.smithi175.stdout:   directio         = off
2018-06-18T23:00:06.829 INFO:tasks.workunit.client.0.smithi175.stdout:   alignedio        = on
2018-06-18T23:00:06.829 INFO:tasks.workunit.client.0.smithi175.stdout:   bufferedio       = off
2018-06-18T23:00:06.829 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.829 INFO:tasks.workunit.client.0.smithi175.stdout:   aging is off
2018-06-18T23:00:06.830 INFO:tasks.workunit.client.0.smithi175.stdout:   current utilization = 184466490759049117696.00%
2018-06-18T23:00:06.830 INFO:tasks.workunit.client.0.smithi175.stdout:
2018-06-18T23:00:06.830 INFO:tasks.workunit.client.0.smithi175.stdout:checking existing fs: .�r��U
2018-06-18T23:00:06.830 INFO:tasks.workunit.client.0.smithi175.stdout:recreating new fileset
2018-06-18T23:00:06.830 INFO:tasks.workunit.client.0.smithi175.stderr:opendir: No such file or directory
2018-06-18T23:00:06.831 INFO:tasks.workunit.client.0.smithi175.stderr:Error creating .�r��U/data
2018-06-18T23:00:06.831 INFO:tasks.workunit.client.0.smithi175.stderr:mkdir: No such file or directory
2018-06-18T23:00:06.832 INFO:tasks.workunit:Stopping ['suites/ffsb.sh'] on client.0...
2018-06-18T23:00:06.832 INFO:teuthology.orchestra.run.smithi175:Running: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0'
2018-06-18T23:00:06.843 INFO:teuthology.orchestra.run.smithi047:Running: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 30 ceph --cluster ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops'
2018-06-18T23:00:06.923 INFO:tasks.ceph.osd.2.smithi096.stderr:2018-06-18 23:00:06.907 7fa85cc02700 -1 received  signal: Hangup from /usr/bin/python /usr/bin/daemon-helper kill ceph-osd -f --cluster ceph -i 2  (PID: 10077) UID: 0
2018-06-18T23:00:06.934 ERROR:teuthology.run_tasks:Saw exception from tasks.
Traceback (most recent call last):
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 86, in run_tasks
    manager = run_one_task(taskname, ctx=ctx, config=config)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/run_tasks.py", line 65, in run_one_task
    return task(**kwargs)
  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/workunit.py", line 206, in task
    cleanup=cleanup)
  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/workunit.py", line 356, in _spawn_on_all_clients
    timeout=timeout)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 85, in __exit__
    for result in self:
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 99, in next
    resurrect_traceback(result)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/parallel.py", line 22, in capture_traceback
    return func(*args, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_ceph_master/qa/tasks/workunit.py", line 479, in _run_tests
    label="workunit test {workunit}".format(workunit=workunit)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/remote.py", line 193, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 423, in run
    r.wait()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 155, in wait
    self._raise_for_status()
  File "/home/teuthworker/src/git.ceph.com_git_teuthology_master/teuthology/orchestra/run.py", line 177, in _raise_for_status
    node=self.hostname, label=self.label

From: /ceph/teuthology-archive/teuthology-2018-06-18_20:06:42-powercycle-master-distro-basic-smithi/2678604/teuthology.log

Recent example of correct output:

2018-06-16T00:07:27.318 INFO:tasks.workunit.client.0.smithi008.stdout:FFSB version 6.0-RC2 started
2018-06-16T00:07:27.318 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.328 INFO:tasks.workunit.client.0.smithi008.stdout:benchmark time = 300
2018-06-16T00:07:27.328 INFO:tasks.workunit.client.0.smithi008.stdout:ThreadGroup 0
2018-06-16T00:07:27.328 INFO:tasks.workunit.client.0.smithi008.stdout:================
2018-06-16T00:07:27.328 INFO:tasks.workunit.client.0.smithi008.stdout:   num_threads      = 32
2018-06-16T00:07:27.329 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.329 INFO:tasks.workunit.client.0.smithi008.stdout:   read_random      = off
2018-06-16T00:07:27.329 INFO:tasks.workunit.client.0.smithi008.stdout:   read_size        = 0   (0B)
2018-06-16T00:07:27.329 INFO:tasks.workunit.client.0.smithi008.stdout:   read_blocksize   = 0   (0B)
2018-06-16T00:07:27.330 INFO:tasks.workunit.client.0.smithi008.stdout:   read_skip        = off
2018-06-16T00:07:27.330 INFO:tasks.workunit.client.0.smithi008.stdout:   read_skipsize    = 0   (0B)
2018-06-16T00:07:27.330 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.330 INFO:tasks.workunit.client.0.smithi008.stdout:   write_random     = on
2018-06-16T00:07:27.331 INFO:tasks.workunit.client.0.smithi008.stdout:   write_size       = 5242880     (5MB)
2018-06-16T00:07:27.331 INFO:tasks.workunit.client.0.smithi008.stdout:   fsync_file       = 0
2018-06-16T00:07:27.340 INFO:tasks.workunit.client.0.smithi008.stdout:   write_blocksize  = 4096        (4KB)
2018-06-16T00:07:27.341 INFO:tasks.workunit.client.0.smithi008.stdout:   wait time        = 0
2018-06-16T00:07:27.341 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.341 INFO:tasks.workunit.client.0.smithi008.stdout:   op weights
2018-06-16T00:07:27.341 INFO:tasks.workunit.client.0.smithi008.stdout:                   read = 0 (0.00%)
2018-06-16T00:07:27.341 INFO:tasks.workunit.client.0.smithi008.stdout:                readall = 0 (0.00%)
2018-06-16T00:07:27.341 INFO:tasks.workunit.client.0.smithi008.stdout:                  write = 1 (100.00%)
2018-06-16T00:07:27.341 INFO:tasks.workunit.client.0.smithi008.stdout:                 create = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:                 append = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:                 delete = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:                 metaop = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:              createdir = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:                   stat = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:               writeall = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:         writeall_fsync = 0 (0.00%)
2018-06-16T00:07:27.342 INFO:tasks.workunit.client.0.smithi008.stdout:             open_close = 0 (0.00%)
2018-06-16T00:07:27.343 INFO:tasks.workunit.client.0.smithi008.stdout:            write_fsync = 0 (0.00%)
2018-06-16T00:07:27.343 INFO:tasks.workunit.client.0.smithi008.stdout:           create_fsync = 0 (0.00%)
2018-06-16T00:07:27.343 INFO:tasks.workunit.client.0.smithi008.stdout:           append_fsync = 0 (0.00%)
2018-06-16T00:07:27.343 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.343 INFO:tasks.workunit.client.0.smithi008.stdout:FileSystem .
2018-06-16T00:07:27.343 INFO:tasks.workunit.client.0.smithi008.stdout:==========
2018-06-16T00:07:27.344 INFO:tasks.workunit.client.0.smithi008.stdout:   num_dirs         = 0
2018-06-16T00:07:27.344 INFO:tasks.workunit.client.0.smithi008.stdout:   starting files   = 128
2018-06-16T00:07:27.344 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.344 INFO:tasks.workunit.client.0.smithi008.stdout:   min file size    = 104857600   (100MB)
2018-06-16T00:07:27.344 INFO:tasks.workunit.client.0.smithi008.stdout:   max file size    = 104857600   (100MB)
2018-06-16T00:07:27.344 INFO:tasks.workunit.client.0.smithi008.stdout:   directio         = off
2018-06-16T00:07:27.344 INFO:tasks.workunit.client.0.smithi008.stdout:   alignedio        = on
2018-06-16T00:07:27.345 INFO:tasks.workunit.client.0.smithi008.stdout:   bufferedio       = off
2018-06-16T00:07:27.345 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.345 INFO:tasks.workunit.client.0.smithi008.stdout:   aging is off
2018-06-16T00:07:27.345 INFO:tasks.workunit.client.0.smithi008.stdout:   current utilization = 0.00%
2018-06-16T00:07:27.345 INFO:tasks.workunit.client.0.smithi008.stdout:
2018-06-16T00:07:27.345 INFO:tasks.workunit.client.0.smithi008.stdout:checking existing fs: .
2018-06-16T00:07:27.345 INFO:tasks.workunit.client.0.smithi008.stdout:recreating new fileset

From: /ceph/teuthology-archive/pdonnell-2018-06-15_18:50:45-fs-wip-pdonnell-testing-20180615.150516-testing-basic-smithi/2668859/teuthology.log

There's no significant difference between those versions of master (44db1805f8b3d0f2ca39b5967dd715b93ad26b81..2d2293948066cae8d656dfe91bdb6695958a52e9).

Here's the MDS log for the final readdir:

2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.server reply_client_request 0 ((0) Success) client_request(client.4387:780 mkdir #0x10000000001/tmp 2018-06-18 23:00:06.788273 caller_uid=1000, caller_gid=1188{6,27,118,1188,10102,}) v4
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.server apply_allocated_inos 0x0 / [] / 0x10000000153
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.sessionmap mark_dirty s=0x563e98fb5500 name=client.4387 v=340
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker xlock_finish on (dn xlockdone x=1) [dentry #0x1/client.0/tmp/tmp [2,head] auth (dn xlockdone x=1) (dversion lock w=1 last_client=4387) v=2035 ap=2+0 ino=0x10000000153 state=1610612737|new | request=1 lock=2 inodepin=1 dirty=1 authpin=1 0x563e992fe800]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker eval_gather (dn xlockdone) on [dentry #0x1/client.0/tmp/tmp [2,head] auth (dn xlockdone) (dversion lock w=1 last_client=4387) v=2035 ap=2+0 ino=0x10000000153 state=1610612737|new | request=1 lock=1 inodepin=1 dirty=1 authpin=1 0x563e992fe800]
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.locker eval_gather finished gather on (dn xlockdone) on [dentry #0x1/client.0/tmp/tmp [2,head] auth (dn xlockdone) (dversion lock w=1 last_client=4387) v=2035 ap=2+0 ino=0x10000000153 state=1610612737|new | request=1 lock=1 inodepin=1 dirty=1 authpin=1 0x563e992fe800]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.den(0x10000000001 tmp) auth_unpin by 0x563e992fe8e0 on [dentry #0x1/client.0/tmp/tmp [2,head] auth (dversion lock w=1 last_client=4387) v=2035 ap=1+0 ino=0x10000000153 state=1610612737|new | request=1 lock=1 inodepin=1 dirty=1 authpin=1 0x563e992fe800] now 1+0
2018-06-18 23:00:06.810 7f9bbc4d1700 15 mds.0.cache.dir(0x10000000001) adjust_nested_auth_pins -1/-1 on [dir 0x10000000001 /client.0/tmp/ [2,head] auth v=2036 cv=0/0 ap=1+1+1 state=1610874881|complete f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 106=102+4) hs=3+0,ss=0+0 dirty=3 | child=1 dirty=1 authpin=1 0x563e990f0d00] by 0x563e992fe8e0 count now 1 + 1
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker simple_eval (dn sync) on [dentry #0x1/client.0/tmp/tmp [2,head] auth (dversion lock w=1 last_client=4387) v=2035 ap=1+0 ino=0x10000000153 state=1610612737|new | request=1 lock=1 inodepin=1 dirty=1 authpin=1 0x563e992fe800]
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.locker local_wrlock_finish  on (iversion lock w=1 last_client=4387) on [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=3+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl->sync w=1) (iversion lock w=1 last_client=4387) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=4 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=1 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.locker wrlock_finish on (ifile excl->sync w=1) on [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=3+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl->sync w=1) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=3 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=1 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker eval_gather (ifile excl->sync) on [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=3+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl->sync) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=1 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  next state is sync issued/allows loner s/scr xlocker /scr other /scr
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.locker eval_gather finished gather on (ifile excl->sync) on [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=3+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl->sync) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=1 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.ino(0x10000000001) auth_unpin by 0x563e991281c0 on [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=1 authpin=1 0x563e99127c00] now 2+1
2018-06-18 23:00:06.810 7f9bbc4d1700 15 mds.0.cache.dir(0x10000000000) adjust_nested_auth_pins -1/-1 on [dir 0x10000000000 /client.0/ [2,head] auth v=44 cv=0/0 ap=1+2+3 state=1610874881|complete f(v0 m2018-06-18 22:59:53.731313 1=0+1) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5)/n(v2 rc2018-06-18 23:00:06.228362 b965659 103=99+4) hs=1+0,ss=0+0 dirty=1 | child=1 dirty=1 authpin=1 0x563e990f1200] by 0x563e991281c0 count now 1 + 3
2018-06-18 23:00:06.810 7f9bbc4d1700 10 MDSInternalContextBase::complete: 18C_MDS_RetryRequest
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.server dispatch_client_request client_request(client.4387:781 readdir #0x10000000001 2018-06-18 23:00:06.792753 caller_uid=1000, caller_gid=1188{6,27,118,1188,10102,}) v4
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.server rdlock_path_pin_ref request(client.4387:781 cr=0x563e9945f380) #0x10000000001
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.cache traverse: opening base ino 0x10000000001 snap head
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache path_traverse finish on snapid head
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.server ref is [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker acquire_locks request(client.4387:781 cr=0x563e9945f380)
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker  must rdlock (isnap sync r=2) [inode 0x1 [...2,head] / auth v18 ap=2+0 snaprealm=0x563e98fc5900 dirtyparent f(v0 m2018-06-18 22:59:27.192322 1=0+1) n(v2 rc2018-06-18 23:00:00.922701 b774162 69=64+5)/n(v0 1=0+1) (isnap sync r=2) (inest lock dirty) (iversion lock) caps={4387=pAsLsXsFs/-@10} | dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=1 dirty=1 authpin=1 0x563e99059100]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker  must rdlock (idft sync) [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker  must rdlock (ifile sync) [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker  must rdlock (isnap sync r=2) [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker  must rdlock (isnap sync r=2) [inode 0x10000000000 [...2,head] /client.0/ auth v35 ap=2+1 dirtyparent f(v0 m2018-06-18 22:59:53.731313 1=0+1) n(v2 rc2018-06-18 23:00:06.228362 b965659 104=99+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl) (iversion lock) caps={4387=pAsLsXsFsx/Fx@4},l=4387 | ptrwaiter=0 dirtyscattered=1 request=0 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99128300]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  must authpin [inode 0x1 [...2,head] / auth v18 ap=2+0 snaprealm=0x563e98fc5900 dirtyparent f(v0 m2018-06-18 22:59:27.192322 1=0+1) n(v2 rc2018-06-18 23:00:00.922701 b774162 69=64+5)/n(v0 1=0+1) (isnap sync r=2) (inest lock dirty) (iversion lock) caps={4387=pAsLsXsFs/-@10} | dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=1 dirty=1 authpin=1 0x563e99059100]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  must authpin [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  must authpin [inode 0x10000000000 [...2,head] /client.0/ auth v35 ap=2+1 dirtyparent f(v0 m2018-06-18 22:59:53.731313 1=0+1) n(v2 rc2018-06-18 23:00:06.228362 b965659 104=99+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl) (iversion lock) caps={4387=pAsLsXsFsx/Fx@4},l=4387 | ptrwaiter=0 dirtyscattered=1 request=0 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99128300]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  already auth_pinned [inode 0x1 [...2,head] / auth v18 ap=2+0 snaprealm=0x563e98fc5900 dirtyparent f(v0 m2018-06-18 22:59:27.192322 1=0+1) n(v2 rc2018-06-18 23:00:00.922701 b774162 69=64+5)/n(v0 1=0+1) (isnap sync r=2) (inest lock dirty) (iversion lock) caps={4387=pAsLsXsFs/-@10} | dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=1 dirty=1 authpin=1 0x563e99059100]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  already auth_pinned [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  already auth_pinned [inode 0x10000000000 [...2,head] /client.0/ auth v35 ap=2+1 dirtyparent f(v0 m2018-06-18 22:59:53.731313 1=0+1) n(v2 rc2018-06-18 23:00:06.228362 b965659 104=99+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl) (iversion lock) caps={4387=pAsLsXsFsx/Fx@4},l=4387 | ptrwaiter=0 dirtyscattered=1 request=0 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99128300]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  already rdlocked (isnap sync r=2) [inode 0x1 [...2,head] / auth v18 ap=2+0 snaprealm=0x563e98fc5900 dirtyparent f(v0 m2018-06-18 22:59:27.192322 1=0+1) n(v2 rc2018-06-18 23:00:00.922701 b774162 69=64+5)/n(v0 1=0+1) (isnap sync r=2) (inest lock dirty) (iversion lock) caps={4387=pAsLsXsFs/-@10} | dirtyscattered=1 request=0 lock=1 dirfrag=1 caps=1 dirtyparent=1 dirty=1 authpin=1 0x563e99059100]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  already rdlocked (isnap sync r=2) [inode 0x10000000000 [...2,head] /client.0/ auth v35 ap=2+1 dirtyparent f(v0 m2018-06-18 22:59:53.731313 1=0+1) n(v2 rc2018-06-18 23:00:06.228362 b965659 104=99+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile excl) (iversion lock) caps={4387=pAsLsXsFsx/Fx@4},l=4387 | ptrwaiter=0 dirtyscattered=1 request=0 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99128300]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  already rdlocked (isnap sync r=2) [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.locker rdlock_start  on (ifile sync) on [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=2 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  got rdlock on (ifile sync r=1) [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile sync r=1) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=3 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700  7 mds.0.locker rdlock_start  on (idft sync) on [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (isnap sync r=2) (inest lock w=1 dirty) (ifile sync r=1) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=3 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.locker  got rdlock on (idft sync r=1) [inode 0x10000000001 [...2,head] /client.0/tmp/ auth v43 ap=2+1 dirtyparent f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 107=102+5) (idft sync r=1) (isnap sync r=2) (inest lock w=1 dirty) (ifile sync r=1) (iversion lock) caps={4387=pAsLsXsFs/Fx@9},l=4387 | dirtyscattered=1 request=1 lock=4 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=1 0x563e99127c00]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 Session check_access path /client.0/tmp
2018-06-18 23:00:06.810 7f9bbc4d1700 10 MDSAuthCap is_capable inode(path /client.0/tmp owner 1000:1188 mode 040775) by caller 1000:1188 mask 1 new 0:0 cap: MDSAuthCaps[allow rw]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.server  frag * offset '' offset_hash 0 flags 1
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.server handle_client_readdir on [dir 0x10000000001 /client.0/tmp/ [2,head] auth v=2036 cv=0/0 ap=1+1+1 state=1610874881|complete f(v0 m2018-06-18 23:00:06.788273 3=1+2) n(v2 rc2018-06-18 23:00:06.788273 b1838614 106=102+4) hs=3+0,ss=0+0 dirty=3 | child=1 dirty=1 authpin=1 0x563e990f0d00]
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.server snapid head
2018-06-18 23:00:06.810 7f9bbc4d1700 10  mds.0.cache.snaprealm(0x1 seq 1 0x563e98fc5900) have_past_parents_open [1,head]
2018-06-18 23:00:06.810 7f9bbc4d1700 12 mds.0.server including    dn [dentry #0x1/client.0/tmp/tmp [2,head] auth (dversion lock w=1 last_client=4387) v=2035 ap=1+0 ino=0x10000000153 state=1610612737|new | request=1 lock=1 inodepin=1 dirty=1 authpin=1 0x563e992fe800]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker issue_client_lease no/null lease on [dentry #0x1/client.0/tmp/tmp [2,head] auth (dversion lock w=1 last_client=4387) v=2035 ap=1+0 ino=0x10000000153 state=1610612737|new | request=1 lock=1 inodepin=1 dirty=1 authpin=1 0x563e992fe800]
2018-06-18 23:00:06.810 7f9bbc4d1700 12 mds.0.server including inode [inode 0x10000000153 [...2,head] /client.0/tmp/tmp/ auth v2035 dirtyparent f() n(v0 1=0+1) (iauth excl) (inest lock) (ifile excl) (ixattr excl) (iversion lock) caps={4387=pAsxLsXsxFsx/-@1},l=4387 | request=1 dirfrag=1 caps=1 dirtyparent=1 dirty=1 0x563e996a8a00]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.cache.ino(0x10000000153)  pfile 1 pauth 1 plink 1 pxattr 1 plocal 0 ctime 2018-06-18 23:00:06.788273 valid=1
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.ino(0x10000000153) encode_inodestat issuing pAsxLsXsxFsx seq 2
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.ino(0x10000000153) encode_inodestat caps pAsxLsXsxFsx seq 2 mseq 0 xattrv 0 len 0
2018-06-18 23:00:06.810 7f9bbc4d1700 12 mds.0.server including    dn [dentry #0x1/client.0/tmp/ffsb-6.0-rc2 [2,head] auth (dversion lock) v=2033 ino=0x10000000003 state=1610612737|new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x563e99206400]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker issue_client_lease no/null lease on [dentry #0x1/client.0/tmp/ffsb-6.0-rc2 [2,head] auth (dversion lock) v=2033 ino=0x10000000003 state=1610612737|new | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x563e99206400]
2018-06-18 23:00:06.810 7f9bbc4d1700 12 mds.0.server including inode [inode 0x10000000003 [...2,head] /client.0/tmp/ffsb-6.0-rc2/ auth v2033 dirtyparent f(v0 m2018-06-18 23:00:06.659948 84=82+2) n(v1 rc2018-06-18 23:00:06.659948 b1709683 104=101+3) (inest lock dirty) (ifile excl) (iversion lock) caps={4387=pAsLsXsFsx/Fx@593},l=4387 | ptrwaiter=0 dirtyscattered=1 request=0 lock=0 dirfrag=1 caps=1 dirtyrstat=0 dirtyparent=1 dirty=1 waiter=0 authpin=0 0x563e99129100]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.cache.ino(0x10000000003)  pfile 1 pauth 1 plink 1 pxattr 1 plocal 0 ctime 2018-06-18 23:00:06.659948 valid=1
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.ino(0x10000000003) encode_inodestat issuing pAsLsXsFsx seq 594
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.ino(0x10000000003) encode_inodestat caps pAsLsXsFsx seq 594 mseq 0 xattrv 0 len 0
2018-06-18 23:00:06.810 7f9bbc4d1700 12 mds.0.server including    dn [dentry #0x1/client.0/tmp/ffsb.tar.bz2 [2,head] auth (dversion lock) v=832 ino=0x10000000002 state=1610612736 | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x563e9911b400]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.locker issue_client_lease no/null lease on [dentry #0x1/client.0/tmp/ffsb.tar.bz2 [2,head] auth (dversion lock) v=832 ino=0x10000000002 state=1610612736 | request=0 lock=0 inodepin=1 dirty=1 authpin=0 0x563e9911b400]
2018-06-18 23:00:06.810 7f9bbc4d1700 12 mds.0.server including inode [inode 0x10000000002 [2,head] /client.0/tmp/ffsb.tar.bz2 auth v832 dirtyparent s=128931 n(v0 rc2018-06-18 22:59:54.292255 b128931 1=1+0) (iversion lock) caps={4387=pAsLsXsFsc/-@6},l=4387(-1) | ptrwaiter=0 request=0 lock=0 caps=1 dirtyparent=1 dirty=1 waiter=0 authpin=0 0x563e99128a00]
2018-06-18 23:00:06.810 7f9bbc4d1700 20 mds.0.cache.ino(0x10000000002)  pfile 1 pauth 1 plink 1 pxattr 1 plocal 0 ctime 2018-06-18 22:59:54.292255 valid=1
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.ino(0x10000000002) encode_inodestat issuing pAsLsXsFscr seq 7
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.cache.ino(0x10000000002) encode_inodestat caps pAsLsXsFscr seq 7 mseq 0 xattrv 0 len 0
2018-06-18 23:00:06.810 7f9bbc4d1700 10 mds.0.server reply to client_request(client.4387:781 readdir #0x10000000001 2018-06-18 23:00:06.792753 caller_uid=1000, caller_gid=1188{6,27,118,1188,10102,}) v4 readdir num=3 bytes=960 start=1 end=1

From: /ceph/teuthology-archive/teuthology-2018-06-18_20:06:42-powercycle-master-distro-basic-smithi/2678604/remote/smithi175/log/ceph-mds.a.log.gz


Related issues 2 (0 open, 2 closed)

Copied to CephFS - Backport #24718: luminous: client: returning garbage (?) for readdir (Resolved, Jos Collin)
Copied to CephFS - Backport #24719: mimic: client: returning garbage (?) for readdir (Resolved, Jos Collin)
#1 Updated by Patrick Donnelly almost 6 years ago

Here too:

/ceph/teuthology-archive/teuthology-2018-06-18_20:06:42-powercycle-master-distro-basic-smithi/2678660

/ceph/teuthology-archive/teuthology-2018-06-18_20:06:42-powercycle-master-distro-basic-smithi/2678772

#2 Updated by Patrick Donnelly almost 6 years ago

This seems to happen only on Ubuntu 18.04:

pdonnell@teuthology /a/teuthology-2018-06-18_20:06:42-powercycle-master-distro-basic-smithi$ grep ffsb */config.yaml -C2 | grep -C2 ubuntu
2678604/config.yaml-branch: master
2678604/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml
2678604/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_latest.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678604/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678604/config.yaml-email: ceph-qa@ceph.com
--
2678632/config.yaml-branch: master
2678632/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/filestore-xfs.yaml
2678632/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_16.04.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678632/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678632/config.yaml-email: ceph-qa@ceph.com
--
2678660/config.yaml-branch: master
2678660/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore.yaml
2678660/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_latest.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678660/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678660/config.yaml-email: ceph-qa@ceph.com
--
2678688/config.yaml-branch: master
2678688/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore.yaml
2678688/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_16.04.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678688/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678688/config.yaml-email: ceph-qa@ceph.com
--
2678716/config.yaml-branch: master
2678716/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-comp.yaml
2678716/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_latest.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678716/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678716/config.yaml-email: ceph-qa@ceph.com
--
2678744/config.yaml-branch: master
2678744/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-comp.yaml
2678744/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_16.04.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678744/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678744/config.yaml-email: ceph-qa@ceph.com
--
2678772/config.yaml-branch: master
2678772/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-bitmap.yaml
2678772/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_latest.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678772/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678772/config.yaml-email: ceph-qa@ceph.com
--
2678800/config.yaml-branch: master
2678800/config.yaml-description: powercycle/osd/{clusters/3osd-1per-target.yaml objectstore/bluestore-bitmap.yaml
2678800/config.yaml:  powercycle/default.yaml supported-all-distro/ubuntu_16.04.yaml tasks/cfuse_workunit_suites_ffsb.yaml
2678800/config.yaml-  thrashosds-health.yaml whitelist_health.yaml}
2678800/config.yaml-email: ceph-qa@ceph.com
#3 Updated by Zheng Yan almost 6 years ago

This is an ffsb issue; the patch below fixes it. I don't know how to update http://download.ceph.com/qa/ffsb.tar.bz2

diff -urp 1/parser.c 2/parser.c
--- 1/parser.c  2008-10-28 04:17:05.000000000 +0800
+++ 2/parser.c  2018-06-20 16:19:40.545402885 +0800
@@ -203,7 +203,7 @@ static char *get_optstr(char *buf, char
        len = strnlen(string, BUFSIZE);
        sprintf(search_str, "%s=%%%ds\\n", string, BUFSIZE - len-1);
        if (1 == sscanf(line, search_str, &temp)) {
-               len = strnlen(temp, 4096);
+               len = strnlen(temp, 4095) + 1;
                ret_buf = malloc(len);
                strncpy(ret_buf, temp, len);
                return ret_buf;
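
The root cause: strnlen() returns the string length excluding the terminating NUL, so malloc(len) leaves no room for a terminator and strncpy(ret_buf, temp, len) never copies one. ffsb later prints this unterminated filesystem-name buffer (the name here is "."), plus whatever heap bytes happen to follow it, which matches the "FileSystem .<garbage>" output in the failing log. Below is a minimal standalone sketch of the failure mode; copy_buggy() and copy_fixed() are hypothetical reductions for illustration, not ffsb's actual functions:

#define _POSIX_C_SOURCE 200809L  /* for strnlen() */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reduction of the get_optstr() bug: strnlen() does not count the
 * terminating NUL, so malloc(len) + strncpy(dst, src, len) yields a
 * buffer with no terminator. Reading it as a string then walks into
 * adjacent heap bytes. */
static char *copy_buggy(const char *temp)
{
    size_t len = strnlen(temp, 4096);      /* length WITHOUT the NUL */
    char *ret_buf = malloc(len);           /* no room for the NUL... */
    strncpy(ret_buf, temp, len);           /* ...and none is copied */
    return ret_buf;                        /* unterminated string */
}

static char *copy_fixed(const char *temp)
{
    size_t len = strnlen(temp, 4095) + 1;  /* reserve space for the NUL */
    char *ret_buf = malloc(len);
    strncpy(ret_buf, temp, len);           /* len > strlen(temp), so the
                                              NUL is copied as well */
    return ret_buf;
}

int main(void)
{
    char *bad = copy_buggy(".");   /* ffsb's fs name here is "." */
    char *good = copy_fixed(".");
    printf("fixed: %s\n", good);   /* always prints "fixed: ." */
    printf("buggy: %s\n", bad);    /* undefined behavior: may print "."
                                      followed by heap garbage, as in the
                                      failing log above */
    free(bad);
    free(good);
    return 0;
}

Whether the stray heap bytes are actually visible depends on allocator state and heap contents, which would plausibly explain the observation in comment #2 that the symptom surfaced on Ubuntu 18.04 (different glibc/allocator behavior) but not on 16.04.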

#4 Updated by Patrick Donnelly almost 6 years ago

  • Assignee set to Zheng Yan
#5 Updated by Zheng Yan almost 6 years ago

  • Status changed from New to Fix Under Review
#6 Updated by Patrick Donnelly almost 6 years ago

  • Status changed from Fix Under Review to Pending Backport
#7 Updated by Patrick Donnelly almost 6 years ago

  • Copied to Backport #24718: luminous: client: returning garbage (?) for readdir added
#8 Updated by Patrick Donnelly almost 6 years ago

  • Copied to Backport #24719: mimic: client: returning garbage (?) for readdir added
#9 Updated by Nathan Cutler over 5 years ago

  • Status changed from Pending Backport to Resolved