Activity
From 03/18/2020 to 04/16/2020
04/16/2020
- 09:56 PM Feature #12334 (In Progress): nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Reopening this bug. We've had some other reports of this upstream as well, and I'm convinced we'll need to add some w...
- 06:54 PM Backport #45027: nautilus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34343
merged
- 06:54 PM Backport #44480: nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34343
merged
- 06:54 PM Backport #44328: nautilus: client: bad error handling in Client::_lseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34308
merged
- 06:53 PM Backport #44337: nautilus: mds: purge queue corruption from wrong backport
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34307
merged
- 03:12 PM Bug #45114: client: make cache shrinking callbacks available via libcephfs
- ...we'll also need to expose the callback setting routines in libcephfs as well (they're only settable today via clas...
- 03:01 PM Bug #45114 (Duplicate): client: make cache shrinking callbacks available via libcephfs
- ganesha's FSAL_CEPH holds references to libcephfs Inode objects but it doesn't have a way to respond to cache pressur...
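For illustration, a rough sketch of what such a libcephfs hook could look like. Everything here is hypothetical -- the names and signature are invented for discussion, since the callbacks are currently only settable inside class Client and there is no C binding yet:
    // Hypothetical C binding -- NOT the current libcephfs API.
    // Sketches how an FSAL (e.g. ganesha's FSAL_CEPH) could be told to
    // drop some of its libcephfs Inode references under cache pressure.
    #include <cephfs/libcephfs.h>

    typedef void (*ceph_cache_shrink_cb)(void *handle, unsigned num_to_trim);

    // Would register 'cb' to be invoked when the client cache wants the
    // caller to release up to 'num_to_trim' Inode references.
    int ceph_set_cache_shrink_cb(struct ceph_mount_info *cmount,
                                 ceph_cache_shrink_cb cb, void *handle);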
- 02:07 PM Bug #43600 (Resolved): qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- Kefu helped build and upload the iozone package to the sepia lab repo, and enabled the iozone package in ceph-cm-...
- 01:35 PM Bug #44276: pybind/mgr/volumes: cleanup stale connection hang
- lowering priority since it happens (mostly, I guess) on libcephfs shutdown (in teuthology).
- 12:51 PM Bug #45107 (Closed): multi-mds: journal can't be flushed
- 12:10 PM Bug #45107: multi-mds: journal can't be flushed
- The simulation is wrong. Restarting the MDS will work.
Please close this issue.
- 05:29 AM Bug #45107 (Closed): multi-mds: journal can't be flushed
- I'm trying to learn MDS and fix https://tracker.ceph.com/issues/45024 . Meanwhile, I found maybe there's a bug in cur...
- 10:51 AM Feature #44891 (Fix Under Review): link ceph-fuse to libfuse3 if possible
- 10:51 AM Bug #45103 (Fix Under Review): TestVolumeClient is failing under new py3 runtime
- 03:18 AM Bug #45103 (In Progress): TestVolumeClient is failing under new py3 runtime
04/15/2020
- 06:50 PM Bug #45104 (Closed): NFS deployed using orchestrator watch_url not working and mkdirs permission ...
- I've run into two issues after deploying ceph using cephadm, all defaults, for the nfs side.
Running ceph orch apply...
- 05:29 PM Bug #45103 (Resolved): TestVolumeClient is failing under new py3 runtime
- in eg http://pulpito.front.sepia.ceph.com/teuthology-2020-04-12_03:15:03-fs-master-distro-basic-smithi/4946437/
> ...
- 04:00 PM Bug #45102 (New): nautilus: tasks/cfuse_workunit_suites_ffsb.yaml failure in multimds suite
- Hit this in multi-mds suite during yuri's run,
http://qa-proxy.ceph.com/teuthology/yuriw-2020-04-15_00:10:18-multimd...
- 03:05 PM Bug #45100 (Resolved): qa: Test failure: test_damaged_dentry (tasks.cephfs.test_damage.TestDamage)
- Hit this in the kcephfs suite of yuri's nautilus run, http://qa-proxy.ceph.com/teuthology/yuriw-2020-04-15_00:09:40-k...
- 02:44 PM Bug #45090 (Fix Under Review): mds: inode's xattr_map may reference a large memory.
- 01:18 PM Documentation #44441 (Resolved): document new "wsync" and "nowsync" kcephfs mount options in moun...
- Merged
- 11:43 AM Bug #43761 (In Progress): mon/MDSMonitor: "ceph fs authorize cephfs client.test /test rw" does no...
- Patrick Donnelly wrote:
> Ramana, I'm assigning this to you. The bug is arguably in ceph-ansible because it's enabli...
- 07:43 AM Backport #45050 (In Progress): nautilus: stale scrub status entry from a failed mds shows up in `...
- 06:07 AM Feature #44891 (In Progress): link ceph-fuse to libfuse3 if possible
- 02:24 AM Fix #44171 (Fix Under Review): pybind/cephfs: audit for unimplemented bindings for libcephfs
- 02:11 AM Bug #44962 (Fix Under Review): "ceph fs status" command outputs to stderr instead of stdout when ...
- 02:11 AM Bug #44962: "ceph fs status" command outputs to stderr instead of stdout when json formatting is ...
- Yes, you are right. Sent the PR.
- 01:01 AM Bug #43600: qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- Recently I hit this too.
Or how about we just clone and build it before using it, like the ./workunits/suites...
04/14/2020
- 06:45 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I have migrated data away and did a fresh install of ceph using cephadm for octopus. I am not quite to the point of t...
- 03:39 PM Bug #45090 (Resolved): mds: inode's xattr_map may reference a large memory.
- ...
- 01:22 PM Bug #45080: mds slow journal ops then cephfs damaged after restarting an osd host
- It seems this is caused by ms type = simple (and msgr2). I can reproduce the bug easily by restarting an osd -- the m...
- 10:31 AM Bug #45080 (New): mds slow journal ops then cephfs damaged after restarting an osd host
- We had a cephfs cluster get damaged following a routine osd restart.
* All MDS's have ms type = simple, which seem...
- 09:43 AM Bug #44902: ceph-fuse read the file cached data when we recover from the blacklist
- The master branch also has this problem.
- 07:31 AM Bug #44962: "ceph fs status" command outputs to stderr instead of stdout when json formatting is ...
- ...
- 07:31 AM Bug #44962: "ceph fs status" command outputs to stderr instead of stdout when json formatting is ...
- Sorry, my bad, this only happens when --format json is used.
Sorry for the confusion, I updated the description.
- 06:31 AM Bug #44962 (In Progress): "ceph fs status" command outputs to stderr instead of stdout when json ...
- 06:54 AM Bug #45078 (New): mds lock always waiting stable after a bunch rename test
- I ran a metadata bench test, about 30,000 file renames in one directory, and always got mds slow request a...
- 06:37 AM Bug #45071 (Fix Under Review): cephfs-shell: CI testing does not detect flake8 errors
- 03:01 AM Bug #43600: qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- FWIW, iozone is non-free software. see https://lists.fedorahosted.org/archives/list/legal@lists.fedoraproject.org/thr...
04/13/2020
- 06:41 PM Bug #44807 (Can't reproduce): cephfs kernel mount option "noshare" is useless
- This was fixed in mainline by this patch, but it was never backported to RHEL/Centos kernels. If you have a RHEL enti...
- 06:31 PM Bug #44807: cephfs kernel mount option "noshare" is useless
- I don't see that at all with a more recent kernel (5.7.0-rc1). I'll have to spin up a centos box and see if it's repr...
- 04:19 PM Bug #44962: "ceph fs status" command outputs to stderr instead of stdout when json formatting is ...
- I think there is slight confusion w.r.t. the fd numbers of stdout and stderr.
As per /usr/include/unistd.h
/* Sta...
- 01:48 PM Bug #44885: enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- About the big_writes option in libfuse: https://github.com/libfuse/libfuse/blob/master/ChangeLog.rst
- 01:47 PM Bug #44885: enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- As Zheng Yan advised, the fix will set the big_writes option to true by default.
Then ceph-fuse will just igno...
- 09:01 AM Bug #44885 (Fix Under Review): enable 'big_writes' fuse option if ceph-fuse is linked to libfuse ...
- 08:59 AM Bug #44885: enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- The PR: https://github.com/ceph/ceph/pull/34531
- 07:39 AM Bug #44885: enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- ...
- 03:08 AM Bug #44885 (In Progress): enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- 01:46 PM Bug #44922: When a large number of small files are written concurrently, the MDS getattr delay bl...
- There are some patches in the current kcephfs client testing branch that may help here:
https://github.com/cep...
- 01:37 PM Bug #44963 (Resolved): fix MClientCaps::FLAG_SYNC in check_caps
- PR merged for Octopus.
- 01:33 PM Bug #43515: qa: SyntaxError: invalid token
- As noted by Wei-Chung Cheng, the fix is already in Octopus:...
- 01:30 PM Bug #45071 (Resolved): cephfs-shell: CI testing does not detect flake8 errors
- On explicitly passing py3 to envlist in CI testing, flake8 errors are not detected....
- 01:27 PM Feature #23376 (Rejected): nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- There still doesn't seem to be any need for this, and if there were we'd want to open a new bug that articulated the ...
- 01:26 PM Bug #43039 (Can't reproduce): client: shutdown race fails with status 141
- I haven't heard of this cropping up anymore, and it seemed like more of a problem in low-level message handling code ...
- 01:24 PM Feature #24461 (Resolved): cephfs: improve file create performance buffering file unlink/create o...
- 09:00 AM Bug #44288 (Fix Under Review): MDSMap encoder "ev" (extended version) is not checked for validity...
- 01:13 AM Bug #45014 (Resolved): qa/cephfs/test_backtrace_repair: CalledProcessError: Command './bin/rados ...
- Fixed by:...
- 01:13 AM Bug #45012 (Resolved): qa/cephfs/test_backtrace_repair: stuck forever when ran by vstart_runner.sh
- Fixed by:...
04/11/2020
- 09:38 AM Backport #45050 (Resolved): nautilus: stale scrub status entry from a failed mds shows up in `cep...
- https://github.com/ceph/ceph/pull/34563
- 09:38 AM Backport #45049 (Resolved): octopus: stale scrub status entry from a failed mds shows up in `ceph...
- https://github.com/ceph/ceph/pull/34800
- 09:38 AM Backport #45046 (Resolved): octopus: ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT...
- https://github.com/ceph/ceph/pull/34769
04/10/2020
- 08:57 AM Backport #45028 (In Progress): octopus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 08:54 AM Backport #45028 (Resolved): octopus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- https://github.com/ceph/ceph/pull/34509
- 08:55 AM Backport #45027 (In Progress): nautilus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 08:53 AM Backport #45027 (Resolved): nautilus: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- https://github.com/ceph/ceph/pull/34343
- 08:53 AM Backport #45026 (Rejected): mimic: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 07:50 AM Backport #44480: nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- Zheng Yan wrote:
> original PR introduced a new issue. please also backport https://github.com/ceph/ceph/pull/34110
...
- 07:41 AM Backport #44480: nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- original PR introduced a new issue. please also backport https://github.com/ceph/ceph/pull/34110
- 01:51 AM Bug #45024: mds: wrong link count under certain circumstance
- I opened a pr at https://github.com/ceph/ceph/pull/34507 to help discuss this problem. The pr is just a sample, it do...
- 01:37 AM Bug #45024 (Resolved): mds: wrong link count under certain circumstance
- I'm simulating a condition where there are two active mds; when making a hard link across the two mds, both mds and cli...
04/09/2020
- 08:26 PM Feature #45021: client: new asok commands for diagnosing cap handling issues
- Greg Farnum wrote:
> In general: yes! This is a long-desired addition to our introspection and debugging abilities.
...
- 04:00 PM Feature #45021: client: new asok commands for diagnosing cap handling issues
- In general: yes! This is a long-desired addition to our introspection and debugging abilities.
But some questions ...
- 03:07 PM Feature #45021 (In Progress): client: new asok commands for diagnosing cap handling issues
- I've been working on some ganesha+cephfs issues that have been reported, and it's quickly becoming evident to me that...
- 05:50 PM Bug #44790: qa: adjust-ulimits invoked after being removed
- http://pulpito.ceph.com/yuriw-2020-04-07_17:47:23-fs-wip-octopus-rgw-msg-fixes-distro-basic-smithi/4931869/
http://p...
- 12:58 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay sure sounds good. As of right now I don't think I've seen the cache pressure messages in 2 days. I can't think o...
- 12:55 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- mitchell walls wrote:
>
> Here is an example of something that happens which gives the slow requests.
>
> [...]
I...
- 12:40 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay so far, I haven't seen any messages about cache pressure recently. That is without the Entries_HWMark set at all...
- 09:47 AM Backport #44668: nautilus: mgr/dashboard: backend API test failure "test_access_permissions"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34055
m...
- 09:41 AM Backport #44668 (Resolved): nautilus: mgr/dashboard: backend API test failure "test_access_permis...
- 09:42 AM Bug #42228 (Resolved): mgr/dashboard: backend API test failure "test_access_permissions"
- 09:40 AM Bug #45014: qa/cephfs/test_backtrace_repair: CalledProcessError: Command './bin/rados -p cephfs_d...
- This is because remote.sh() switches the args from a list to a str, and the '"' in "oh i'm sorry did i overwrite your...
- 09:38 AM Bug #45014 (In Progress): qa/cephfs/test_backtrace_repair: CalledProcessError: Command './bin/rad...
- 09:38 AM Bug #45014 (Resolved): qa/cephfs/test_backtrace_repair: CalledProcessError: Command './bin/rados ...
- ...
- 09:28 AM Bug #45012: qa/cephfs/test_backtrace_repair: stuck forever when ran by vstart_runner.sh
- This is because remote.sh in vstart_runner.sh calls teuthology/misc.py's sh(), which has no stdin/stdout p...
- 09:26 AM Bug #45012 (In Progress): qa/cephfs/test_backtrace_repair: stuck forever when ran by vstart_runne...
- 09:26 AM Bug #45012 (Resolved): qa/cephfs/test_backtrace_repair: stuck forever when ran by vstart_runner.sh
- ...
04/08/2020
- 08:51 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I will increase logging on the mds and do the same thing again tomorrow. Thanks for all your help.
- 08:45 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Ok thanks. I'll see what I can do with a multimds setup.
To be clear, I think there is more than one problem here ...
- 08:31 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- At this point I am okay with just running a single mds with 3 standbys. Every time I go to recreate the problem it is ...
- 07:38 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I have set that as well to 100. Still getting the same thing. The weird thing is that this only happens with multiple...
- 07:24 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Ok. If you're able to change settings, can you try setting the MDCACHE Entries_HWMark value _much_ lower? Like maybe ...
- 07:09 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Here are logs for all three mdses when this occurs. ...
- 05:56 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I have gotten further and it gets really weird. It isn't actually the python script; it was a move they did after look...
- 01:35 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Update on the cache message: the cache message was still there with the single mds, with the entries_used less than en...
- 11:41 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- More questions:
You said this happened when you updated from 14.2.7 to 14.2.8? That's probably coincidence unless ...
- 11:33 AM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- mitchell walls wrote:
> Okay thank you. I am working on it. So the script they were working with was dealing with a ...
- 01:16 PM Backport #44655 (In Progress): nautilus: qa: SyntaxError: invalid token
- 11:00 AM Bug #44380 (Fix Under Review): mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes...
- 09:48 AM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- The fixing PR: https://github.com/ceph/ceph/pull/34410
It will add accounted_rstat/rstat when building file dentry...
- 08:12 AM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- The root cause of this is the cephfs-data-scan tool didn't update the InodeStore->inode->rstat, and when the MDS daem...
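A toy sketch of the invariant being repaired (local stand-in types, not Ceph's real InodeStore; the actual change is in the PR referenced above): when rebuilding a file dentry, rstat has to be derived from the file and mirrored into accounted_rstat so the MDS consistency check cannot fire.
    // Toy model only -- local stand-in types, not Ceph's InodeStore.
    #include <cstdint>

    struct nest_info { uint64_t rbytes = 0; uint64_t rfiles = 0; };
    struct toy_inode { uint64_t size = 0; nest_info rstat, accounted_rstat; };

    // Populate rstat from the file and keep accounted_rstat in sync, per
    // the description above, so "unmatched rstat rbytes" is not tripped.
    void rebuild_file_dentry(toy_inode &in) {
      in.rstat.rbytes = in.size;
      in.rstat.rfiles = 1;
      in.accounted_rstat = in.rstat;
    }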
- 05:35 AM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- I could reproduce it 100% locally; the steps are:
1) run the test_fragmented_injection test case
2) manually rem...
- 06:25 AM Bug #44988 (Duplicate): client: track dirty inodes in a per-session list for effective cap flushing
- Jeff's PR https://github.com/ceph/ceph/pull/34455 improves cap flushing by setting FLAG_SYNC when flushing auth_caps....
- 04:35 AM Backport #44487 (In Progress): nautilus: pybind/mgr/volumes: add upgrade testing
04/07/2020
- 08:52 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay thank you. I am working on it. So the script they were working with was dealing with a group of files that were ...
- 07:05 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- ...and to be clear too:
If you're working on a reproducer, then setting Entries_HWMark to a really low value (100 ...
- 06:43 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- mitchell walls wrote:
> Okay so do you also think that is related to the mds slow requests from whenever I was runni...
- 06:33 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Okay so do you also think that is related to the mds slow requests from whenever I was running 3 mdses?
I am work...
- 06:17 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Thanks, it's not much above entries_hiwat but it is some above it, so it seems likely this is the same issue as the o...
- 05:54 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- (gdb) p lru_state
$1 = {entries_hiwat = 100000, entries_used = 100148, chunks_hiwat = 100000, chunks_used = 0,
f...
- 05:03 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I have another case opened internally at RH, and got them to force a coredump. From that I was able to dump ganesha's...
- 04:26 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I agree with Greg. The metadata damage warnings are probably unrelated to the cache pressure issues. Let's focus on t...
- 03:59 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- This also seems to be very closely related to this issue https://tracker.ceph.com/issues/44947
- 03:55 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- I forgot to mention all of the mds servers are pretty large (including whole cluster) with two cpus and 256-384 gb of...
- 03:35 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- this file was a bit too large, so I split it up
- 03:34 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- The backtrace "metadata damage" is probably erroneous: https://tracker.ceph.com/issues/43543; if we have busy files t...
- 03:25 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- not hours, minutes
- 03:24 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- Just as a heads up I was restarting mdses alot to troubleshoot this issue. Once I would restart it would work fine fo...
- 03:22 PM Bug #44976: MDS problem slow requests, cache pressure, damaged metadata after upgrading 14.2.7 to...
- attaching all mds logs for the 3rd and 4th when it started
- 02:40 PM Bug #44976 (Resolved): MDS problem slow requests, cache pressure, damaged metadata after upgradin...
- I originally posted this on the nfs-ganesha issues github. They referenced me here since they think it is related to ...
- 01:53 PM Feature #44455: cephfs: add recursive unlink RPC
- Venky Shankar wrote:
> Greg Farnum wrote:
> > Hmm one problem the issue description skips over is that this will ne...
- 01:27 PM Feature #44455: cephfs: add recursive unlink RPC
- Greg Farnum wrote:
> Hmm one problem the issue description skips over is that this will need to deal with hard-linke...
- 01:10 PM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- Xiubo Li wrote:
> Zheng Yan wrote:
> > Greg Farnum wrote:
> > >
> > > That's just disabling the debug checking; ...
- 12:22 PM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- Zheng Yan wrote:
> Greg Farnum wrote:
> >
> > That's just disabling the debug checking; I see this test pokes at ...
- 02:49 AM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- Greg Farnum wrote:
>
> That's just disabling the debug checking; I see this test pokes at our metadata but I think...
- 09:36 AM Bug #44947: Hung ops for evicted CephFS clients do not get cleaned up fully
- https://github.com/ceph/ceph/pull/32073 can also explain this. please try 14.2.8
- 09:30 AM Bug #44947: Hung ops for evicted CephFS clients do not get cleaned up fully
- It's an mds bug. If you can compile ceph from source, please try https://github.com/ceph/ceph/pull/34338
- 06:52 AM Bug #44947: Hung ops for evicted CephFS clients do not get cleaned up fully
- This is quite odd — the only way for a request to get marked as cleaned up like that is after it does what should be ...
- 08:48 AM Backport #44483 (In Progress): nautilus: mds: assertion failure due to blacklist
- 03:25 AM Bug #36171: mds: ctime should not use client provided ctime/mtime
see https://github.com/ceph/ceph/pull/32126. that commit adds a 'dirty_from' field to rstat, which is mds time. mds ca...
04/06/2020
- 10:11 PM Bug #36171: mds: ctime should not use client provided ctime/mtime
- Yeah, we previously assigned mtimes from the MDS and it was a disaster. That's not something we'll be going back to!
...
- 10:08 PM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- Zheng Yan wrote:
> test that triggered the assertion is test_fragmented_injection. I think we need to add following ...
- 11:38 AM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- Zheng Yan wrote:
> test that triggered the assertion is test_fragmented_injection. I think we need to add following ...
- 08:34 AM Bug #44380: mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == g_conf()->mds_...
- test that triggered the assertion is test_fragmented_injection. I think we need to add following settings
self.fs...
- 10:04 PM Feature #44455: cephfs: add recursive unlink RPC
- Hmm one problem the issue description skips over is that this will need to deal with hard-linked files underneath the...
- 04:23 PM Feature #44455: cephfs: add recursive unlink RPC
- (self assigning this) will start looking to add this support soon.
- 10:00 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Nathan Cutler wrote:
> Zheng, is this a follow-on fix for #44295 ?
>
> I'm asking because that issue is marked fo...
- 08:25 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Zheng, is this a follow-on fix for #44295 ?
I'm asking because that issue is marked for backport all the way to mi...
- 08:24 PM Bug #44295: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r == -2)
- Apparently, this caused #44680 - linking the two issues together
- 08:23 PM Backport #44479 (Need More Info): mimic: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r ==...
- Zheng said not to merge the nautilus backport because it introduced an issue, possibly #44680
- 04:26 PM Bug #44963 (Resolved): fix MClientCaps::FLAG_SYNC in check_caps
- Sidharth noticed that we were probably issuing more FLAG_SYNC cap messages than is really required. What we're mostly...
- 03:49 PM Bug #44962 (Resolved): "ceph fs status" command outputs to stderr instead of stdout when json for...
- ...
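For reference, a minimal generic demo (not from the ceph source) of why the stream matters here: JSON intended for piping must go to stdout (fd 1), while anything on stderr (fd 2) escapes the pipe.
    // Minimal illustration, unrelated to the ceph code itself.
    #include <cstdio>

    int main() {
      fprintf(stdout, "{\"status\": \"ok\"}\n");  // captured by `... | jq`
      fprintf(stderr, "progress note\n");         // bypasses the pipe
      return 0;
    }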
- 02:16 PM Bug #44546: cleanup: Can't lookup inode 1
- This seems to happen when:
* a cephfs subdirectory is mounted with the kernel
* kernel > 5.0 (not sure, may have ...
- 08:31 AM Bug #44947 (Need More Info): Hung ops for evicted CephFS clients do not get cleaned up fully
- Hello,
After noticing some hung CephFS operations on my client, I rebooted the client. Ceph has evicted and blackl...
04/05/2020
- 12:44 PM Bug #36171: mds: ctime should not use client provided ctime/mtime
> I think the right approach here is to have the client set its own ctime so that it's locally useful but not to tr...
04/04/2020
- 03:48 PM Bug #43600: qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- See also a recent instance in http://qa-proxy.ceph.com/teuthology/teuthology-2020-04-04_08:00:03-smoke-octopus-testing-basic-smi...
04/03/2020
- 07:53 PM Feature #20 (Resolved): client: recover from a killed session (w/ blacklist)
- 07:50 PM Bug #44389 (Resolved): client: fuse mount will print call trace with incorrect options
- 07:38 PM Bug #44680 (Pending Backport): mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 07:29 PM Bug #44677 (Pending Backport): stale scrub status entry from a failed mds shows up in `ceph status`
- 07:28 PM Bug #44771 (Pending Backport): ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Cl...
- 07:23 PM Bug #44790: qa: adjust-ulimits invoked after being removed
- And again: http://pulpito.ceph.com/gregf-2020-04-03_05:44:43-fs-wip-greg-testing-42-1-distro-basic-smithi/4921286/
- 01:29 PM Feature #44931 (Resolved): mgr/volumes: get the list of auth IDs that have been granted access to...
- Manila requires the list of auth IDs that have been granted access to a share/subvolume.
https://github.com/openstac...
- 12:21 PM Feature #44928 (Resolved): mgr/volumes: evict clients based on auth ID and subvolume mounted
- In manila, when an auth ID is denied access to a subvolume the auth ID's caps are suitably removed, and the clients t...
- 09:25 AM Bug #44922 (New): When a large number of small files are written concurrently, the MDS getattr de...
- In the multimds cluster, run the vdbench test tool to read and write small files (3000 small files in a single direct...
- 06:17 AM Feature #44277 (Fix Under Review): pybind/mgr/volumes: add command to return metadata regarding a...
04/02/2020
- 08:42 PM Bug #44821: Can not move directory when CephFS is lower layer for OverlayFS
- Ceph seems to be missing something in terms of xattrs. Here are some notes I took.
OverlayFS uses xattrs to determ...
- 01:13 AM Bug #44821 (Need More Info): Can not move directory when CephFS is lower layer for OverlayFS
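Regarding the xattr notes above, a small probe that can be used to look at the mechanism (this is an assumption about the failure mode, not a confirmed diagnosis): with redirect_dir=on, overlayfs records directory renames in the trusted.overlay.redirect xattr on the upper layer, so that xattr has to be readable (as root) for redirects to work.
    // Probe the overlayfs redirect xattr on a renamed directory (run as root).
    #include <sys/xattr.h>
    #include <cstdio>

    int main(int argc, char **argv) {
      if (argc < 2) { fprintf(stderr, "usage: %s <dir>\n", argv[0]); return 1; }
      char buf[256];
      ssize_t n = getxattr(argv[1], "trusted.overlay.redirect", buf, sizeof(buf));
      if (n < 0) { perror("getxattr trusted.overlay.redirect"); return 1; }
      printf("redirect -> %.*s\n", (int)n, buf);
      return 0;
    }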
- 01:13 AM Bug #44821: Can not move directory when CephFS is lower layer for OverlayFS
- This sounds like an overlayfs issue. (Unless there are CephFS quotas involved? https://tracker.ceph.com/issues/44791)...
- 04:00 PM Feature #24305 (Resolved): client/mds: allow renaming across quota boundaries
- The userspace Client has allowed this for a while and kclient is being worked on now.
- 03:51 PM Bug #44916 (Resolved): client: syncfs flush is only fast with a single MDS
- When we invoke Client::syncfs, we call into flush_caps_sync() and that invokes check_caps() for everything dirty, add...
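A toy model of the flush path as the report describes it (method names follow the report; the loop body is an assumption, not the actual Client.cc source):
    // Toy model only: syncfs -> flush_caps_sync() -> check_caps() for every
    // dirty inode. With several MDSes, each dirty inode's cap flush becomes
    // its own round trip, which is what makes this slow per the report.
    #include <vector>

    struct Inode { bool dirty = true; };

    struct ToyClient {
      std::vector<Inode*> dirty_list;
      void check_caps(Inode*) { /* send one cap update message */ }
      void flush_caps_sync() {
        for (Inode *in : dirty_list)
          check_caps(in);  // one message per dirty inode
      }
      void _sync_fs() { flush_caps_sync(); }
    };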
- 02:45 PM Bug #44904 (Fix Under Review): CephFSMount::run_shell does not run command with sudo
- 02:37 PM Bug #44904: CephFSMount::run_shell does not run command with sudo
- Hmm you're right it dropped the omit_sudo flag when passing it through. Looks like I screwed up my integration branch...
- 07:52 AM Bug #44904 (Resolved): CephFSMount::run_shell does not run command with sudo
- introduced by https://github.com/ceph/ceph/pull/33279
- 06:19 AM Bug #44902: ceph-fuse read the file cached data when we recover from the blacklist
- PR: https://github.com/ceph/ceph/pull/34364
- 06:10 AM Bug #44902 (Rejected): ceph-fuse read the file cached data when we recover from the blacklist
- reproduce steps:
1. [root@client.a]# echo 1234 > /mnt/fuse/file1
2. [root@client.b]# cat /mnt/fuse/file1
1234
3. ...
- 02:07 AM Bug #44790: qa: adjust-ulimits invoked after being removed
- Happened again in /a/gregf-2020-04-01_18:45:43-fs-wip-greg-testing-41-distro-basic-smithi/4914097
I did happen to ...
- 01:15 AM Bug #44546 (Need More Info): cleanup: Can't lookup inode 1
- What version is the CephFS cluster this client runs against?
This sounds like the thing where clients with limited...
- 01:01 AM Feature #38153 (Closed): client: proactively release caps it is not using
- Looks like a little bit got done with this ticket a while ago, and it is now superseded by #43748.
04/01/2020
- 05:20 PM Backport #44290 (In Progress): mimic: mds: SIGSEGV in Migrator::export_sessions_flushed
- 02:38 PM Feature #44891 (Resolved): link ceph-fuse to libfuse3 if possible
- 01:42 PM Bug #44885 (Resolved): enable 'big_writes' fuse option if ceph-fuse is linked to libfuse < 3.0
- big_writes is deprecated in libfuse 3.0. But if ceph-fuse is linked to libfuse < 3.0, without big_writes option, writ...
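A sketch of the compile-time guard this implies (an assumption mirroring the intent, not the actual ceph-fuse patch): pass big_writes only when built against libfuse 2.x, since libfuse 3.0 removed the option and enables large writes by default.
    // Sketch only -- not the actual ceph-fuse change.
    #include <fuse_lowlevel.h>  // provides FUSE_VERSION / FUSE_MAKE_VERSION

    static void maybe_enable_big_writes(struct fuse_args *args) {
    #if FUSE_VERSION < FUSE_MAKE_VERSION(3, 0)
      fuse_opt_add_arg(args, "-obig_writes");  // libfuse 2.x: allow >4 KiB writes
    #else
      (void)args;  // libfuse 3.x: option removed; big writes are the default
    #endif
    }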
- 10:19 AM Backport #44480 (In Progress): nautilus: mds: MDCache.cc: 6400: FAILED ceph_assert(r == 0 || r ==...
- 09:39 AM Bug #44807: cephfs kernel mount option "noshare" is useless
- Jeff Layton wrote:
> On my machine (recent Fedora), 'df' works by reading /proc/self/mountinfo to get a list of moun...
- 09:18 AM Backport #44478 (In Progress): nautilus: mds: assert(p != active_requests.end())
- 01:13 AM Bug #43149 (Resolved): kclient: umount will stuck for around 1 minutes sometimes
- 01:12 AM Bug #43270 (Resolved): kclient: retry the same mds later after the new session is opened
- 01:11 AM Bug #42894 (Resolved): kclient: if there has at least one MDS still not laggy the mount will fail
- 01:11 AM Bug #43295 (Resolved): kclient: keep the session state until it is released
03/31/2020
- 04:12 PM Bug #44850 (Rejected): sync on libcephfs and wait for CEPH_CAP_OP_FLUSH_ACK
- > What is that missing?
Nothing as it turns out. We weren't sure in the other bug discussion, so I opted to open t...
- 03:21 PM Bug #44850: sync on libcephfs and wait for CEPH_CAP_OP_FLUSH_ACK
- Greg Farnum wrote:
> Hmm, looking at the Client, I see:
> * Client::sync_fs() calls Client::_sync_fs()
> * Client:...
- 01:53 PM Bug #44850: sync on libcephfs and wait for CEPH_CAP_OP_FLUSH_ACK
- Is it possible that either FUSE or libcephfs allow you to interrupt an in-progress sync when umount is issued and tha...
- 01:52 PM Bug #44850: sync on libcephfs and wait for CEPH_CAP_OP_FLUSH_ACK
- Hmm, looking at the Client, I see:
* Client::sync_fs() calls Client::_sync_fs()
* Client::_sync_fs() issues calls w...
- 12:22 PM Bug #44850 (Rejected): sync on libcephfs and wait for CEPH_CAP_OP_FLUSH_ACK
- Opening this due to a conversation on the issue this one was copied from (https://tracker.ceph.com/issues/44744). The...
- 03:54 PM Bug #44807: cephfs kernel mount option "noshare" is useless
- On my machine (recent Fedora), 'df' works by reading /proc/self/mountinfo to get a list of mountpoints, and issuing s...
- 02:42 AM Bug #44807: cephfs kernel mount option "noshare" is useless
- if I create two cephfs named cephfs and zyli,
I want to mount them using the kernel mount:
mount -t ceph 10.192.35.164:...
- 01:44 AM Bug #44807: cephfs kernel mount option "noshare" is useless
- Jeff Layton wrote:
> This works for me. mounted the same filesystem twice (without noshare option):
>
> [...]
> ...
- 10:06 AM Bug #36370 (Resolved): add information about active scrubs to "ceph -s" (and elsewhere)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:01 AM Bug #42635 (Resolved): mgr: daemon state for mds not available
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:01 AM Feature #43294 (Resolved): mount.ceph: give a hint message when no mds is up or cluster is laggy
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:59 AM Bug #44208 (Resolved): mgr/volumes: support canceling in-progress/pending clone operations.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:58 AM Backport #44844 (Resolved): octopus: qa:test_config_session_timeout failed with incorrect options
- https://github.com/ceph/ceph/pull/34799
- 09:58 AM Backport #44843 (Resolved): octopus: LibCephFS::RecalledGetattr test failed
- https://github.com/ceph/ceph/pull/34798
- 09:57 AM Backport #44800 (In Progress): octopus: mds: 'if there is lock cache on dir' check is buggy
- 08:50 AM Bug #44114: test_cephfs_shell.TestDU test fail unexpectedly
- Facing the same failure again - https://github.com/ceph/ceph/pull/32288#issuecomment-606488282. In contrast, the test pas...
- 08:23 AM Backport #43502 (Resolved): mimic: mount.ceph: give a hint message when no mds is up or cluster i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/32911
m...
- 07:32 AM Bug #43567: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_directory
- Saw the same error again with a different testsuite -
2020-03-31 11:19:50,786.786 INFO:__main__:==================...
- 06:31 AM Backport #44328 (In Progress): nautilus: client: bad error handling in Client::_lseek
- 06:26 AM Backport #44337 (In Progress): nautilus: mds: purge queue corruption from wrong backport
- 03:49 AM Bug #44771 (Fix Under Review): ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Cl...
- 12:23 AM Bug #44380 (In Progress): mds: MDCache.cc: 2335: FAILED ceph_assert(!"unmatched rstat rbytes" == ...
- 12:19 AM Bug #6770 (Can't reproduce): ceph fscache: write file more than a page size to orignal file cause...
- 12:19 AM Bug #6770 (In Progress): ceph fscache: write file more than a page size to orignal file cause cac...
03/30/2020
- 10:34 PM Backport #44520 (Resolved): nautilus: qa: test_scrub_abort fails during check_task_status("idle")
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30704
m...
- 10:34 PM Backport #42713 (Resolved): nautilus: mgr: daemon state for mds not available
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30704
m...
- 10:33 PM Backport #41508 (Resolved): nautilus: add information about active scrubs to "ceph -s" (and elsew...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30704
m...
- 10:33 PM Bug #44393: pybind/mgr/volumes: add `mypy` support
- Just to clarify, there is no src/pybind/mgr/tox.ini in nautilus and fef6450834b665b8b05703a8cbb690a04c4ba217 which ad...
- 02:29 PM Bug #44393: pybind/mgr/volumes: add `mypy` support
- Michael Fritch wrote:
> a backport for this is more involved than it would appear due to lack of [...] support in na...
- 02:27 PM Bug #44393 (Resolved): pybind/mgr/volumes: add `mypy` support
- 08:45 PM Bug #44821: Can not move directory when CephFS is lower layer for OverlayFS
- Sorry, the error is when moving a directory, not specifically a file. The steps to reproduce are correct.
- 08:44 PM Bug #44821 (Need More Info): Can not move directory when CephFS is lower layer for OverlayFS
- When CephFS is used as a lower layer for OverlayFS and the redirect_dir=on option is supplied to OverlayFS, trying to...
- 02:44 PM Bug #44807: cephfs kernel mount option "noshare" is useless
- This works for me. mounted the same filesystem twice (without noshare option):...
- 09:52 AM Bug #44807 (Can't reproduce): cephfs kernel mount option "noshare" is useless
- I want to use the kernel mount to mount cephfs twice, and I can see the mount info using "mount",
but I cannot see the mount point ...
- 10:35 AM Bug #44677 (Fix Under Review): stale scrub status entry from a failed mds shows up in `ceph status`
- 09:26 AM Feature #21571 (Duplicate): mds: limit number of snapshots (global and subtree)
- dup of https://tracker.ceph.com/issues/41209
- 09:21 AM Bug #40613 (Can't reproduce): kclient: .handle_message_footer got old message 1 <= 648 0x558ceade...
- seems like a temporary issue with the testing branch
- 09:19 AM Bug #41728 (Can't reproduce): mds: hang during fragmentdir
- 09:17 AM Feature #5486 (Resolved): kclient: make it work with selinux
- upstreamed
commit ac6713ccb5a6d13b59a2e3fda4fb049a2c4e0af2
Author: Yan, Zheng <zyan@redhat.com>
Date: Sun May ...
- 09:09 AM Bug #42467 (Duplicate): mds: daemon crashes while updating blacklist
- dup of #44316
- 09:03 AM Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XL...
- Lock state is SYNC. I checked the code but can't figure out why. Could you try reproducing this with debug_mds=10?
- 08:00 AM Bug #44801 (Fix Under Review): client: write stuck at waiting for larger max_size
- 01:43 AM Bug #44801 (Resolved): client: write stuck at waiting for larger max_size
- 06:22 AM Bug #44645 (Fix Under Review): cephfs-shell: Fix flake8 errors (E302, E502, E128, F821, W605, E12...
- 06:16 AM Bug #44172: cephfs-journal-tool: cannot set --dry_run arg
- correction to note #2:
--dry-run must be specified only after the "recover_dentries" command, as an optional argument
- 01:35 AM Bug #44785: non-head batch requests may hold authpins and locks
- http://pulpito.ceph.com/zyan-2020-03-25_11:37:09-fs-wip-zyan-integration-testing-basic-smithi/4889018/
- 01:25 AM Backport #44800: octopus: mds: 'if there is lock cache on dir' check is buggy
- https://github.com/ceph/ceph/pull/34273
- 01:20 AM Backport #44800 (Resolved): octopus: mds: 'if there is lock cache on dir' check is buggy
- https://github.com/ceph/ceph/pull/34273
03/27/2020
- 07:59 PM Bug #44525 (Pending Backport): LibCephFS::RecalledGetattr test failed
- 07:58 PM Bug #44437 (Pending Backport): qa:test_config_session_timeout failed with incorrect options
- 07:55 PM Bug #44437 (Resolved): qa:test_config_session_timeout failed with incorrect options
- 07:56 PM Feature #44211 (Resolved): mount.ceph: stop printing warning message about mds_namespace
- 07:38 PM Backport #43502: mimic: mount.ceph: give a hint message when no mds is up or cluster is laggy
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32911
merged
- 07:28 PM Bug #44790: qa: adjust-ulimits invoked after being removed
- This popped up a couple times many years ago; I'm not sure if it's a very rare race in shutdown or what. No apparent ...
- 07:27 PM Bug #44790 (Duplicate): qa: adjust-ulimits invoked after being removed
- http://pulpito.ceph.com/gregf-2020-03-26_16:03:14-fs-wip-greg-testing-326-distro-basic-smithi/4891336/...
- 05:52 PM Documentation #44788: cephfs-shell: Missing documentation of quota, df and du
- Quota, df and du are not documented in cephfs-shell doc[1].
[1] https://docs.ceph.com/docs/master/cephfs/cephfs-sh...
- 05:51 PM Documentation #44788 (Resolved): cephfs-shell: Missing documentation of quota, df and du
- 05:48 PM Bug #38809 (Closed): cephfs-shell: python traceback with inexistant directory with cd
- It is fixed in Octopus.
This PR (https://github.com/ceph/ceph/pull/30365) fixes it.
- 05:38 PM Bug #38805 (Closed): cephfs-shell: put doesn't accept relative source file path of file in curren...
- This issue is resolved in octopus....
- 03:00 PM Bug #44785 (Fix Under Review): non-head batch requests may hold authpins and locks
- 02:51 PM Bug #44785 (Resolved): non-head batch requests may hold authpins and locks
- mds does not dispatch non-head batch requests; holding locks can cause deadlock
- 08:55 AM Bug #44779 (New): mimic: Command failed (workunit test suites/ffsb.sh) on smithi063 with status 124
- teuthology run: wip-yuri4-testing-2020-03-23-1858-mimic
log url: http://pulpito.ceph.com/yuriw-2020-03-25_15:52:31-m...
03/26/2020
- 03:28 PM Bug #44555 (Resolved): qa: tasks.cephfs.test_auto_repair.TestMDSAutoRepair failed
- 01:42 PM Bug #43598 (Fix Under Review): mds: PurgeQueue does not handle objecter errors
- 01:18 PM Feature #44277 (In Progress): pybind/mgr/volumes: add command to return metadata regarding a subv...
- 01:02 PM Bug #44771 (Resolved): ceph-fuse: ceph::__ceph_abort(): ceph-fuse killed by SIGABRT in Client::_d...
- Description of problem:
https://bugzilla.redhat.com/show_bug.cgi?id=1817179
Tried to mount as follows
[root@...
03/25/2020
- 11:18 PM Backport #44473 (Resolved): nautilus: pybind/mgr/volumes: add `mypy` support
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34036
m...
- 11:17 PM Backport #44473 (In Progress): nautilus: pybind/mgr/volumes: add `mypy` support
- 11:14 PM Backport #44670 (Resolved): mgr/volumes: support canceling in-progress/pending clone operations.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34036
m...
- 11:09 PM Backport #44291: nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33751
m...
- 02:45 PM Bug #44565: src/mds/SimpleLock.h: 528: FAILED ceph_assert(state == LOCK_XLOCK || state == LOCK_XL...
- ...
- 03:45 AM Bug #44382 (Fix Under Review): qa: FAIL: test_barrier (tasks.cephfs.test_full.TestClusterFull)
03/24/2020
- 03:14 PM Feature #44277: pybind/mgr/volumes: add command to return metadata regarding a subvolume
- Venky Shankar wrote:
> Kotresh Hiremath Ravishankar wrote:
> > Hi Shyam,
> >
> > I have discussed this tracker w...
- 01:45 PM Feature #44277: pybind/mgr/volumes: add command to return metadata regarding a subvolume
- Kotresh Hiremath Ravishankar wrote:
> Hi Shyam,
>
> I have discussed this tracker with Venky and require clarific...
- 12:43 PM Feature #44277: pybind/mgr/volumes: add command to return metadata regarding a subvolume
- Hi Shyam,
I have discussed this tracker with Venky and require clarification. The requirement to check the size a...
- 04:53 AM Bug #44657 (Fix Under Review): cephfs-shell: Fix flake8 errors (F841, E302, E502, E128, E305 and ...
- 02:07 AM Bug #44448 (Pending Backport): mds: 'if there is lock cache on dir' check is buggy
03/23/2020
- 05:19 PM Bug #44393: pybind/mgr/volumes: add `mypy` support
- a backport for this is more involved than it would appear due to lack of ...
- 01:53 PM Backport #44714 (Rejected): nautilus: mds: SimpleLock pointer is passed to Locker::wrlock_start
- no need to backport
- 12:17 PM Backport #44714 (Rejected): nautilus: mds: SimpleLock pointer is passed to Locker::wrlock_start
- 01:53 PM Backport #44713 (Rejected): mimic: mds: SimpleLock pointer is passed to Locker::wrlock_start
- no need to backport
- 12:17 PM Backport #44713 (Rejected): mimic: mds: SimpleLock pointer is passed to Locker::wrlock_start
- 01:52 PM Bug #44416 (Resolved): mds: SimpleLock pointer is passed to Locker::wrlock_start
- No need to backport.
- 12:40 PM Bug #43943: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
- /a/sage-2020-03-22_23:32:49-rados-wip-sage3-testing-2020-03-22-1327-distro-basic-smithi/4881104
- 12:16 PM Bug #42467 (Fix Under Review): mds: daemon crashes while updating blacklist
- 12:13 PM Bug #44294 (Resolved): mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- resolved, but it also introduced a new bug. The newest ticket is https://tracker.ceph.com/issues/44680
- 09:23 AM Bug #44680 (Fix Under Review): mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- 09:11 AM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- maybe revert
https://github.com/ceph/ceph/commit/73436961512bd87981244fa48212085faf7028c4 and https://github.com/...
03/20/2020
- 05:26 PM Bug #44416 (Pending Backport): mds: SimpleLock pointer is passed to Locker::wrlock_start
- I think we want to backport this as convenient, too?
03/19/2020
- 09:02 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Yeah definitely the fault of https://github.com/ceph/ceph/pull/33538, which was trying to prevent us from asserting o...
- 08:56 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- > [13:55:18] <@sage> it was triggered by the upgrade... i'm guessing when the old container was stopped and got blac...
- 08:49 PM Bug #44680: mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- Do we have any logs or more detail about what happened?
The only thing this flags in my head is https://github.com...
- 07:12 PM Bug #44680 (Resolved): mds/Mutation.h: 128: FAILED ceph_assert(num_auth_pins == 0)
- ...
- 01:01 PM Bug #44677 (Resolved): stale scrub status entry from a failed mds shows up in `ceph status`
- This happens intermittently. When an active mds (mds.b) is terminated, mds.c transitions to active, but task status s...
- 12:53 PM Bug #43748 (Fix Under Review): client: improve wanted handling so we don't request unused caps (a...
- 10:10 AM Backport #44668 (In Progress): nautilus: mgr/dashboard: backend API test failure "test_access_per...
- 03:16 AM Backport #44291 (Resolved): nautilus: mds: SIGSEGV in Migrator::export_sessions_flushed
- 03:16 AM Bug #43909 (Pending Backport): mds: SIGSEGV in Migrator::export_sessions_flushed
- Whoops wrong ticket.
- 03:15 AM Bug #43909 (Resolved): mds: SIGSEGV in Migrator::export_sessions_flushed
- 02:04 AM Bug #6770: ceph fscache: write file more than a page size to orignal file cause cachfiles bug on EOF
- Tested the latest ceph and kclient; it works well:...
03/18/2020
- 10:24 PM Backport #42440: mimic: mds: create a configurable snapshot limit
- Milind Changire wrote:
> do I need to post a PR against the mimic branch for this item ?
Yes, as far as I can tel...
- 05:50 PM Backport #44670 (In Progress): mgr/volumes: support canceling in-progress/pending clone operations.
- 05:48 PM Backport #44670 (Resolved): mgr/volumes: support canceling in-progress/pending clone operations.
- https://github.com/ceph/ceph/pull/34036
- 01:52 PM Bug #44208 (Pending Backport): mgr/volumes: support canceling in-progress/pending clone operations.
- 01:13 PM Bug #44638 (Fix Under Review): test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestSc...
- 11:23 AM Backport #44668 (Resolved): nautilus: mgr/dashboard: backend API test failure "test_access_permis...
- https://github.com/ceph/ceph/pull/34055
https://github.com/ceph/ceph/pull/34817
- 11:23 AM Bug #42228 (Pending Backport): mgr/dashboard: backend API test failure "test_access_permissions"
- 10:40 AM Bug #44525 (Fix Under Review): LibCephFS::RecalledGetattr test failed
- 09:45 AM Bug #44525: LibCephFS::RecalledGetattr test failed
- The cap grant may be delayed, and in the case where the inode locally didn't have the 'Fscr', set_deleg() will return -E...
- 07:38 AM Bug #43965 (Resolved): mgr/volumes: synchronize ownership (for symlinks) and inode timestamps for...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:37 AM Bug #44438 (Resolved): qa: ERROR: test_subvolume_snapshot_clone_different_groups (tasks.cephfs.te...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 07:36 AM Backport #44521 (Resolved): nautilus: qa: ERROR: test_subvolume_snapshot_clone_different_groups (...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33877
m...
- 07:36 AM Backport #44484 (Resolved): nautilus: mgr/volumes: synchronize ownership (for symlinks) and inode...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33877
m...
- 07:34 AM Backport #42441: nautilus: mds: create a configurable snapshot limit
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33295
m...
- 07:31 AM Bug #43901: qa: fsx: fatal error: libaio.h: No such file or directory
- Whoops https://github.com/ceph/ceph/pull/33959, in test now.