From 07/09/2019 to 08/07/2019

08/07/2019

09:48 PM Bug #41141 (Fix Under Review): mds: recall capabilities more regularly when under cache pressure
Patrick Donnelly
09:47 PM Bug #41140 (Fix Under Review): mds: trim cache more regularly
Patrick Donnelly
04:53 PM Bug #41140: mds: trim cache more regularly
We did much the same thing in the OSD. Previously we trimmed in a single thread at regular intervals, but now we tri... Mark Nelson
09:39 AM Bug #41140: mds: trim cache more regularly
Janek Bevendorff wrote:
> This may be obvious, but to put the whole thing into context: this cache trimming issue ca...
Dan van der Ster
09:29 AM Bug #41140: mds: trim cache more regularly
This may be obvious, but to put the whole thing into context: this cache trimming issue can make a CephFS permanently... Janek Bevendorff
09:07 PM Bug #41034 (Need More Info): cephfs-journal-tool: NetHandler create_socket couldn't create socket
Can you please elaborate on this? Which journal tool are you talking about?
If possible, could you provide steps to...
Neha Ojha
08:03 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
Patrick Donnelly
08:02 PM Feature #40617 (Resolved): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
08:02 PM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
Patrick Donnelly
08:02 PM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
02:42 PM Bug #41144: mount.ceph: doesn't accept "strictatime"
Changing subject. "nostrictatime" seems to be intercepted by /bin/mount, so the mount helper doesn't need to handle it. Jeff Layton
07:37 AM Bug #41148 (Fix Under Review): client: _readdir_cache_cb() may use the readdir_cache already clear
Zheng Yan
07:31 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
segmentation fault: inode=0x0
huanwen ren
07:25 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
huanwen ren wrote:
> Zheng Yan wrote:
> > I think getattr does not affect parent directory inode's completeness
> ...
huanwen ren
07:22 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
Zheng Yan wrote:
> I think getattr does not affect parent directory inode's completeness
From the log, there is a...
huanwen ren
07:00 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
I think getattr does not affect parent directory inode's completeness Zheng Yan
06:56 AM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
Calling function A means to get dir information from the cache, but in the while loop,
the contents of readdir_cach...
huanwen ren
04:55 AM Bug #41147: mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn->first)
this is temporarily fixed by wiping the session table Anonymous
04:34 AM Bug #41147 (Duplicate): mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn...
After creating a new FS and running it for 2 days, my MDS is in a crash loop. I didn't try anything yet so far as to... Anonymous

08/06/2019

10:36 PM Bug #41140: mds: trim cache more regularly
Dan van der Ster wrote:
> FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is eno...
Patrick Donnelly
05:01 PM Bug #41140: mds: trim cache more regularly
FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is enough, and increasing the thr... Dan van der Ster
04:42 PM Bug #41140 (Resolved): mds: trim cache more regularly
Under -create- workloads that result in the acquisition of a lot of capabilities, the MDS can't trim the cache fast e... Patrick Donnelly
07:56 PM Bug #41144 (Resolved): mount.ceph: doesn't accept "strictatime"
The cephfs mount helper doesn't support either strictatime or nostrictatime. It should intercept those options and se... Jeff Layton
04:46 PM Bug #41141 (Resolved): mds: recall capabilities more regularly when under cache pressure
If a client is doing a large parallel create workload, the MDS may not recall capabilities fast enough and the client... Patrick Donnelly
02:46 AM Bug #41133 (Closed): qa/tasks: update thrasher design
* Make the Thrasher class abstract by adding an abstract _do_thrash function.
* Change OSDThrasher, RBDMirrorThrasher, ...
Jos Collin
02:45 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
Zhi Zhang

08/05/2019

11:09 PM Backport #41129 (Resolved): mimic: qa: power off still resulted in client sending session close
https://github.com/ceph/ceph/pull/30233 Patrick Donnelly
11:09 PM Backport #41128 (Resolved): nautilus: qa: power off still resulted in client sending session close
https://github.com/ceph/ceph/pull/29983 Patrick Donnelly
11:07 PM Backport #41118 (Rejected): nautilus: cephfs-shell: add CI testing with flake8
Patrick Donnelly
11:07 PM Backport #41114 (Resolved): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
https://github.com/ceph/ceph/pull/31283 Patrick Donnelly
11:07 PM Backport #41113 (Resolved): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
https://github.com/ceph/ceph/pull/30032 Patrick Donnelly
11:06 PM Backport #41112 (Rejected): nautilus: cephfs-shell: cd with no args has no effect
Patrick Donnelly
11:06 PM Backport #41108 (Resolved): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
https://github.com/ceph/ceph/pull/29940 Patrick Donnelly
11:06 PM Backport #41107 (Resolved): nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
https://github.com/ceph/ceph/pull/29938 Patrick Donnelly
11:06 PM Backport #41106 (Resolved): nautilus: mds: add command that modify session metadata
https://github.com/ceph/ceph/pull/32245 Patrick Donnelly
11:06 PM Backport #41105 (Rejected): nautilus: cephfs-shell: flake8 blank line and indentation error
Patrick Donnelly
11:05 PM Backport #41101 (Rejected): luminous: tools/cephfs: memory leak in cephfs/Resetter.cc
Patrick Donnelly
11:05 PM Backport #41100 (Resolved): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29915 Patrick Donnelly
11:05 PM Backport #41099 (Resolved): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29879 Patrick Donnelly
11:05 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
Patrick Donnelly
11:05 PM Backport #41097 (Resolved): mimic: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29833 Patrick Donnelly
11:04 PM Backport #41096 (Resolved): nautilus: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29878 Patrick Donnelly
11:04 PM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
https://github.com/ceph/ceph/pull/29832 Patrick Donnelly
11:04 PM Backport #41094 (Resolved): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_...
https://github.com/ceph/ceph/pull/29812 Patrick Donnelly
11:04 PM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
https://github.com/ceph/ceph/pull/29811 Patrick Donnelly
11:04 PM Backport #41089 (Rejected): nautilus: cephfs-shell: Multiple flake8 errors
Patrick Donnelly
11:04 PM Backport #41088 (Resolved): mimic: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29751 Patrick Donnelly
11:03 PM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29750 Patrick Donnelly
04:05 PM Backport #40326: nautilus: mds: evict stale client when one of its write caps are stolen
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28583
merged
Yuri Weinstein
04:04 PM Backport #40324: nautilus: ceph_volume_client: d_name needs to be converted to string before using
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28609
merged
Yuri Weinstein
04:03 PM Backport #40839: nautilus: cephfs-shell: TypeError in poutput
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29156
merged
Yuri Weinstein
04:03 PM Backport #40842: nautilus: ceph-fuse: mount does not support the fallocate()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29157
merged
Yuri Weinstein
04:02 PM Backport #40843: nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29158
merged
Yuri Weinstein
04:02 PM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29186
merged
Yuri Weinstein
04:01 PM Backport #40438: nautilus: getattr on snap inode stuck
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29231
merged
Yuri Weinstein
04:00 PM Backport #40440: nautilus: mds: cannot switch mds state from standby-replay to active
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29233
merged
Yuri Weinstein
03:59 PM Backport #40443: nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29343
merged
Yuri Weinstein
03:58 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29344
merged
Yuri Weinstein
02:14 PM Bug #41072 (In Progress): scheduled cephfs snapshots (via ceph manager)
Patrick Donnelly
12:54 PM Bug #41072: scheduled cephfs snapshots (via ceph manager)
- also, interface for fetching snap metadata (`flushed` state, etc...)... Venky Shankar
12:23 PM Bug #41072 (Resolved): scheduled cephfs snapshots (via ceph manager)
outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
Specify a snapshot schedule on any (sub)direct...
Venky Shankar
01:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
Patrick Donnelly
01:46 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos based on token bucket algorithm
Patrick Donnelly
01:45 PM Backport #41000 (New): luminous: client: failed to drop dn and release caps causing mds stray sta...
Zheng, please take this one. The backport is non-trivial. Patrick Donnelly
01:14 PM Feature #41074 (Resolved): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
mirror scheduled (and temporary) snapshots a r...
Venky Shankar
01:13 PM Feature #41073 (Rejected): cephfs-sync: tool for synchronizing cephfs snapshots to remote target
Introduce a rsync like tool (cephfs-sync) for mirroring scheduled and temp cephfs snapshots to sync targets. sync tar... Venky Shankar
01:02 PM Backport #41070 (In Progress): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Ramana Raja
11:56 AM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
https://github.com/ceph/ceph/pull/29490 Ramana Raja
01:00 PM Backport #41071 (In Progress): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes...
Ramana Raja
11:58 AM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
https://github.com/ceph/ceph/pull/29490 Ramana Raja
10:50 AM Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fails with "...
Seen here in a nautilus run: http://qa-proxy.ceph.com/teuthology/yuriw-2019-07-30_20:57:10-fs-wip-yuri-testing-2019-07-... Venky Shankar
05:23 AM Bug #41066: mds: skip trim mds cache if mdcache is not opened
https://github.com/ceph/ceph/pull/29481 Zhi Zhang
05:18 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
```
2019-07-24 14:51:28.028198 7f6dc2543700 1 mds.0.940446 active_start
2019-07-24 14:51:39.452890 7f6dc2543700 1...
Zhi Zhang
12:29 AM Backport #41001 (In Progress): mimic: client: failed to drop dn and release caps causing mds star...
Xiaoxi Chen
12:20 AM Backport #41002 (In Progress): nautilus: client: failed to drop dn and release caps causing mds st...
Xiaoxi Chen

08/01/2019

05:46 PM Support #40906: Full CephFS causes hang when accessing inode.
Please confirm that I understand the process so that I can give it a try.
Thanks!
Robert LeBlanc
05:45 PM Bug #41049: adding ceph secret key to kernel failed: Invalid argument.
reason: accidentally double base64 encoded
MISSING old warning!: secret is not valid base64: Invalid argument.
Anonymous
05:32 PM Bug #41049 (New): adding ceph secret key to kernel failed: Invalid argument.
Fresh Nautilus Cluster.
Fresh cephfs.
This is not a common base64 error
mount -t ceph 10.3.2.1:6789:/ /mnt/ -o n...
Anonymous

07/31/2019

06:40 PM Feature #40811 (Pending Backport): mds: add command that modify session metadata
Patrick Donnelly
06:34 PM Bug #40927 (Pending Backport): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
Patrick Donnelly
05:37 PM Bug #41034 (Resolved): cephfs-journal-tool: NetHandler create_socket couldn't create socket
Ubuntu 18.04 with ceph Nautilus repo
Journal tool is broken:
2019-07-31 19:36:56.879 7f57bf308700 -1 NetHandler cre...
Anonymous
05:33 PM Bug #40999 (Pending Backport): qa: AssertionError: u'open' != 'stale'
Patrick Donnelly
05:10 PM Bug #41031 (Resolved): qa: malformed job
/ceph/teuthology-archive/pdonnell-2019-07-31_00:35:45-fs-wip-pdonnell-testing-20190730.205527-distro-basic-smithi/416... Patrick Donnelly
04:59 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
Please seek help on ceph-users. Provide more information about your cluster and how the error came about. Patrick Donnelly
04:00 PM Bug #41026: MDS process crashes on 14.2.2
... Anonymous
01:15 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
MDS processes on Ubuntu 18.04 Nautilus 14.2.2 are crashing, unable to recover
-7> 2019-07-31 13:29:46.888 7fb36a...
Anonymous
03:32 PM Bug #39395: ceph: ceph fs auth fails
merged https://github.com/ceph/ceph/pull/28666 Yuri Weinstein
09:08 AM Bug #40960: client: failed to drop dn and release caps causing mds stray stacking.
some more background of this issue is under
https://tracker.ceph.com/issues/38679#note-9
Xiaoxi Chen
02:49 AM Bug #41006 (Fix Under Review): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before...
Zheng Yan

07/30/2019

10:13 PM Backport #40845: nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29159
merged
Yuri Weinstein
06:35 PM Bug #39947 (Pending Backport): cephfs-shell: add CI testing with flake8
Patrick Donnelly
05:07 PM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
NAB Patrick Donnelly
12:09 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Bingo - mds_log_max_segments.
In Luminous, the description for this option is empty:...
Konstantin Shalygin
01:25 PM Bug #41006: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
looks like discontiguous free inode number can trigger the crash Zheng Yan
08:56 AM Bug #41006 (Resolved): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
Running cephfs-data-scan scan_links on a test 14.2.2 cluster I get this assertion:... Dan van der Ster
05:51 AM Feature #5520 (In Progress): osdc: should handle namespaces
Jos Collin
01:42 AM Backport #41002 (Resolved): nautilus: client: failed to drop dn and release caps causing mds stray...
https://github.com/ceph/ceph/pull/29478 Xiaoxi Chen
01:41 AM Backport #41001 (Resolved): mimic: client: failed to drop dn and release caps causing mds stray s...
https://github.com/ceph/ceph/pull/29479 Xiaoxi Chen
01:38 AM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
https://github.com/ceph/ceph/pull/29830 Xiaoxi Chen

07/29/2019

09:53 PM Bug #40603 (Pending Backport): mds: disallow setting ceph.dir.pin value exceeding max rank id
Patrick Donnelly
09:49 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
Patrick Donnelly
09:47 PM Bug #40939 (Pending Backport): mds: map client_caps been inserted by mistake
Patrick Donnelly
09:46 PM Bug #40960 (Pending Backport): client: failed to drop dn and release caps causing mds stray stack...
Patrick Donnelly
09:10 PM Bug #37681: qa: power off still resulted in client sending session close
backport note: also need fix for https://tracker.ceph.com/issues/40999 Patrick Donnelly
08:09 PM Bug #37681 (Pending Backport): qa: power off still resulted in client sending session close
Patrick Donnelly
09:09 PM Bug #40999 (Fix Under Review): qa: AssertionError: u'open' != 'stale'
Patrick Donnelly
09:06 PM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
... Patrick Donnelly
08:10 PM Bug #40968 (Pending Backport): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
Patrick Donnelly
06:17 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Konstantin Shalygin wrote:
> ??The MDS tracks opened files and capabilities in the MDS journal. That would explain t...
Patrick Donnelly
04:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
??The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata... Konstantin Shalygin
05:38 PM Cleanup #40992 (Pending Backport): cephfs-shell: Multiple flake8 errors
Patrick Donnelly
11:54 AM Cleanup #40992: cephfs-shell: Multiple flake8 errors
Not ignoring E501 instead limiting line length to 100.... Varsha Rao
06:59 AM Cleanup #40992 (Fix Under Review): cephfs-shell: Multiple flake8 errors
Varsha Rao
06:48 AM Cleanup #40992 (Resolved): cephfs-shell: Multiple flake8 errors
After ignoring E501 and W503 flake8 errors, following needs to be fixed:... Varsha Rao

07/27/2019

05:25 AM Support #40906: Full CephFS causes hang when accessing inode.
Okay, I'll give it a shot next week on some of my smaller directories.
Just to be sure I understand the process.
...
Robert LeBlanc

07/26/2019

10:18 PM Bug #40474 (Pending Backport): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
Patrick Donnelly
10:18 PM Bug #40474 (Resolved): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
Patrick Donnelly
10:16 PM Bug #40476 (Pending Backport): cephfs-shell: cd with no args has no effect
Patrick Donnelly
10:14 PM Bug #40695 (Resolved): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
Patrick Donnelly
10:12 PM Cleanup #40787 (Resolved): mds: reorg CInode header
Patrick Donnelly
10:11 PM Bug #40936 (Pending Backport): tools/cephfs: memory leak in cephfs/Resetter.cc
Patrick Donnelly
10:10 PM Bug #40967 (Pending Backport): qa: race in test_standby_replay_singleton_fail
Patrick Donnelly
07:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
... Patrick Donnelly
06:36 PM Bug #40965 (Fix Under Review): client: don't report ceph.* xattrs via listxattr
Patrick Donnelly
06:34 PM Bug #40927 (Fix Under Review): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
Patrick Donnelly
05:08 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata p... Patrick Donnelly
09:37 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
After another bunch of simulations I called `cache drop` on the MDS via the admin socket. Pool usage *198.8MB* -> *2.8MB*.
...
Konstantin Shalygin
06:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
I set aside the kernel client and ran some tests with the Samba VFS - this is a Luminous client.
First, I just copied...
Konstantin Shalygin
03:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
+10MBytes for last ~24H.
Actual pool's usage:
fs_data:
!fs_data_26.07.19.png!
fs_meta:
!fs_meta_26.07.19.png!
Konstantin Shalygin
03:23 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Patrick, this is definitely Luminous 12.2.12. My actual question is why metadata usage (bytes) is growing and new obj... Konstantin Shalygin
04:06 PM Feature #40986: cephfs qos: implement cephfs qos based on token bucket algorithm
I think there are two kinds of design:
1. all clients use the same QoS setting, just as the implementation
in this...
songbo wang
03:58 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos based on token bucket algorithm
The basic idea is as follows:
Set QoS info as one of the dir's xattrs;
All clients that can access the same dirs ...
songbo wang
10:04 AM Feature #18537: libcephfs cache invalidation upcalls
Jeff Layton wrote:
> I looked at this, but I think the real solution to this problem is to just prevent ganesha from...
huanwen ren
06:15 AM Backport #40445 (In Progress): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client...
https://github.com/ceph/ceph/pull/29344 Prashant D
06:12 AM Backport #40443 (In Progress): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH whe...
https://github.com/ceph/ceph/pull/29343 Prashant D
04:49 AM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
ACK Venky Shankar
02:42 AM Support #40906: Full CephFS causes hang when accessing inode.
after deleting the omap key, use the scrub_path asok command to repair the parent directory Zheng Yan
02:06 AM Support #40906: Full CephFS causes hang when accessing inode.
Will deleting the RADOS object for the inode going to cause more problems with the MDS because they get out of sync? Robert LeBlanc

07/25/2019

11:12 PM Feature #40617 (Pending Backport): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
11:11 PM Cleanup #40866 (Resolved): mds: reorg Capability header
Patrick Donnelly
11:09 PM Bug #40836 (Pending Backport): cephfs-shell: flake8 blank line and indentation error
Patrick Donnelly
11:06 PM Cleanup #40742 (Resolved): mds: reorg CDir header
Patrick Donnelly
10:18 PM Bug #40968 (Fix Under Review): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
Patrick Donnelly
10:15 PM Bug #40968 (Resolved): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_...
/ceph/teuthology-archive/pdonnell-2019-07-25_06:25:06-fs-wip-pdonnell-testing-20190725.023305-distro-basic-smithi/414... Patrick Donnelly
09:43 PM Bug #40967 (Fix Under Review): qa: race in test_standby_replay_singleton_fail
Patrick Donnelly
09:39 PM Bug #40967 (Resolved): qa: race in test_standby_replay_singleton_fail
... Patrick Donnelly
07:51 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
The convention with filesystems that have virtual xattrs is to not report them via listxattr(). A lot of archiving to... Jeff Layton
06:53 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
I see now that you're using 12.2.12? That wouldn't be the cause then I guess. The PR which backports the relevant cha... Patrick Donnelly
06:48 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Did you just upgrade your cluster to Nautilus? The data usage stats changed recently so that omaps in the metadata po... Patrick Donnelly
09:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Found another cluster with this Ceph version. Data usage 10x more, but meta is not much.
Metadata pool in this clust...
Konstantin Shalygin
09:37 AM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
Can't find a clue: single MDS fs without actual usage, but metadata is only growing (screenshot attached). MDS with ... Konstantin Shalygin
06:46 PM Bug #40939 (Fix Under Review): mds: map client_caps been inserted by mistake
Patrick Donnelly
08:52 AM Bug #40939 (Resolved): mds: map client_caps been inserted by mistake
SnapRealm.h: SnapRealm::remove_cap()
if the map client_caps does not have key client, 'client_caps[client]' will insert key cl...
guodong xiao
06:18 PM Bug #40936 (Fix Under Review): tools/cephfs: memory leak in cephfs/Resetter.cc
Patrick Donnelly
06:24 AM Bug #40936: tools/cephfs: memory leak in cephfs/Resetter.cc
function -- Resetter::_write_reset_event(Journaler *journaler) guodong xiao
06:22 AM Bug #40936 (Resolved): tools/cephfs: memory leak in cephfs/Resetter.cc
there is a memory leak in cephfs/Resetter.cc:
LogEvent *le = new EResetJournal;
forgot to use 'delete' to release ...
guodong xiao
06:02 PM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
does that work? Patrick Donnelly
03:25 AM Bug #40932: ceph-fuse: rm a dir failed, print "non-empty" error
use scrub_path asok command to fix it Zheng Yan
02:10 AM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
#rm -rf dir1
print "non-empty" error, but "ls dir1" is empty.
guodong xiao
05:53 PM Bug #40877 (Fix Under Review): client: client should return EIO when it's unsafe reqs have been d...
Patrick Donnelly
05:51 PM Bug #40960 (Fix Under Review): client: failed to drop dn and release caps causing mds stray stack...
Patrick Donnelly
02:32 PM Bug #40960 (Resolved): client: failed to drop dn and release caps causing mds stray stacking.
when the client gets a notification from the MDS that a file has been deleted (via
getting CEPH_CAP_LINK_SHARED cap for inode wi...
Xiaoxi Chen
02:47 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
Patrick Donnelly wrote:
> Also need a ticket to allow setting the uid/gid of the subvolume (Feature).
Done. http:...
Ramana Raja
02:43 PM Bug #40171 (Resolved): mds: reset heartbeat during long-running loops in recovery
Nathan Cutler
02:43 PM Backport #40222 (Resolved): mimic: mds: reset heartbeat during long-running loops in recovery
Nathan Cutler
02:37 PM Backport #40222: mimic: mds: reset heartbeat during long-running loops in recovery
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28918
merged
Yuri Weinstein
02:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Nathan Cutler
02:37 PM Backport #40875: mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29187
merged
Yuri Weinstein
02:41 PM Backport #39685 (Resolved): mimic: ceph-fuse: client hang because its bad session PipeConnection ...
Nathan Cutler
02:36 PM Backport #39685: mimic: ceph-fuse: client hang because its bad session PipeConnection to mds
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29200
merged
Yuri Weinstein
02:40 PM Backport #38099 (Resolved): mimic: mds: remove cache drop admin socket command
Nathan Cutler
02:35 PM Backport #38099: mimic: mds: remove cache drop admin socket command
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29210
merged
Yuri Weinstein
02:38 PM Bug #38270 (Resolved): kcephfs TestClientLimits.test_client_pin fails with "client caps fell belo...
Nathan Cutler
02:38 PM Backport #38687 (Resolved): mimic: kcephfs TestClientLimits.test_client_pin fails with "client ca...
Nathan Cutler
02:34 PM Backport #38687: mimic: kcephfs TestClientLimits.test_client_pin fails with "client caps fell bel...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29211
merged
Yuri Weinstein
02:19 PM Feature #40959 (Resolved): mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` commands. Ramana Raja
01:41 PM Documentation #40957 (Resolved): doc: add section to manpage for recover_session= option
Now that the recover_session= mount option has been added to the testing branch in the kernel, we need to update the ... Jeff Layton
08:56 AM Backport #40944 (Resolved): nautilus: mgr: failover during in qa testing causes unresponsive clie...
https://github.com/ceph/ceph/pull/29649 Nathan Cutler
03:14 AM Support #40934 (New): can't get connection to external cephfs frmo kubernetes pod
I need to get clear step by step instructions on how to set up cephfs provisioning to an external cephfs cluster without ... konstantin pupkov

07/24/2019

11:19 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
I'm not sure I have enough context to help here. Also, this question should begin on ceph-users, not here. Patrick Donnelly
11:11 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
New version is not allowed to connect to my external to kubernetes ceph cluster. It is OK if connecting to rdbms, but... konstantin pupkov
06:59 PM Feature #40929 (Resolved): pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configure M...
Create a new mgr plugin that adds and removes MDSs in response to degraded file systems. The plugin should monitor ch... Patrick Donnelly
06:23 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
Also need a ticket to allow setting the uid/gid of the subvolume (Feature). Patrick Donnelly
04:36 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
... Ramana Raja
04:55 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
Reopening this as I see now that Zheng asked you to create the ticket. Patrick Donnelly
04:53 PM Support #40906 (Rejected): Full CephFS causes hang when accessing inode.
This discussion should move to the ceph-users list: https://lists.ceph.io/postorius/lists/ceph-users.ceph.io/ Patrick Donnelly
01:48 AM Support #40906: Full CephFS causes hang when accessing inode.
The log does not contain information about the bad inode.
To use rados to delete inode
1. find inode number o...
Zheng Yan
11:51 AM Bug #24868 (Resolved): doc: Incomplete Kernel mount debugging
This is fixed by https://github.com/ceph/ceph/pull/28900 Jos Collin
11:14 AM Bug #40864: cephfs-shell: rmdir doesn't complain when directory is not empty
After this PR [https://github.com/ceph/ceph/pull/28514],
it does print an error message, but the message is not approp...
Varsha Rao
11:03 AM Bug #40863: cephfs-shell: rmdir with -p attempts to delete non-dir files as well
Varsha Rao

07/23/2019

04:14 PM Support #40906: Full CephFS causes hang when accessing inode.
The complete logs were 81GB, so I just filtered on the client session. If you need some more complete logs, let me kn... Robert LeBlanc
04:10 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
Our CephFS filled up and we are having trouble operating on a directory that has a file with a bad inode state. We wo... Robert LeBlanc
03:57 PM Backport #40440 (In Progress): nautilus: mds: cannot switch mds state from standby-replay to active
Nathan Cutler
03:53 PM Backport #40438 (In Progress): nautilus: getattr on snap inode stuck
Nathan Cutler
03:51 PM Backport #40437 (In Progress): mimic: getattr on snap inode stuck
Nathan Cutler
03:47 PM Backport #40218 (In Progress): luminous: TestMisc.test_evict_client fails
Nathan Cutler
03:46 PM Backport #40219 (In Progress): mimic: TestMisc.test_evict_client fails
Nathan Cutler
03:45 PM Backport #40163 (In Progress): luminous: mount: key parsing fail when doing a remount
Nathan Cutler
03:44 PM Backport #40165 (In Progress): mimic: mount: key parsing fail when doing a remount
Nathan Cutler
03:43 PM Backport #40162 (Need More Info): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "par...
non-trivial Nathan Cutler
03:41 PM Backport #39223 (In Progress): mimic: mds: behind on trimming and "[dentry] was purgeable but no ...
Nathan Cutler
03:36 PM Backport #39215 (In Progress): mimic: mds: there is an assertion when calling Beacon::shutdown()
Nathan Cutler
03:25 PM Backport #39212 (In Progress): mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
Nathan Cutler
03:22 PM Backport #39210 (In Progress): mimic: mds: mds_cap_revoke_eviction_timeout is not used to initial...
Nathan Cutler
03:19 PM Backport #38875 (In Progress): mimic: mds: high debug logging with many subtrees is slow
Nathan Cutler
03:15 PM Backport #38709 (In Progress): mimic: qa: kclient unmount hangs after file system goes down
Nathan Cutler
02:07 PM Bug #40867 (Pending Backport): mgr: failover during in qa testing causes unresponsive client warn...
Sage Weil
01:03 PM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
Patrick Donnelly wrote:
> Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
>
> As I said in issue des...
Venky Shankar
12:11 PM Backport #38687 (In Progress): mimic: kcephfs TestClientLimits.test_client_pin fails with "client...
Nathan Cutler
12:09 PM Backport #38643 (Need More Info): mimic: fs: "log [WRN] : failed to reconnect caps for missing in...
not trivial to backport because in master we have:... Nathan Cutler
12:04 PM Backport #38444 (Need More Info): mimic: mds: drop cache does not timeout as expected
see luminous backport https://github.com/ceph/ceph/pull/27342 Nathan Cutler
12:03 PM Backport #38350 (Need More Info): luminous: mds: decoded LogEvent may leak during shutdown
extensive changes that do not apply cleanly - elevated risk Nathan Cutler
12:03 PM Backport #38349 (Need More Info): mimic: mds: decoded LogEvent may leak during shutdown
extensive changes that do not apply cleanly - elevated risk Nathan Cutler
12:01 PM Backport #38339: mimic: mds: may leak gather during cache drop
The luminous backport https://github.com/ceph/ceph/pull/27342 involved cherry-picking a large number of commits. Assi... Nathan Cutler
12:00 PM Backport #38339 (Need More Info): mimic: mds: may leak gather during cache drop
non-trivial - the fix touches code that appears to have been refactored for nautilus Nathan Cutler
11:57 AM Backport #38099 (In Progress): mimic: mds: remove cache drop admin socket command
Nathan Cutler
11:52 AM Backport #37906 (Need More Info): mimic: make cephfs-data-scan reconstruct snaptable
non-trivial feature backport - assigning to dev lead for disposition Nathan Cutler
11:50 AM Backport #37761 (Need More Info): mimic: mds: deadlock when setting config value via admin socket
The master PR has two commits. The first commit touches the following files:
src/common/config_obs_mgr.h
src/comm...
Nathan Cutler
11:48 AM Backport #37637 (Need More Info): luminous: client: support getfattr ceph.dir.pin extended attribute
non-trivial feature backport - assigning to dev lead for disposition Nathan Cutler
11:47 AM Backport #37636 (Need More Info): mimic: client: support getfattr ceph.dir.pin extended attribute
non-trivial feature backport - assigning to dev lead for disposition Nathan Cutler
10:22 AM Backport #39685 (In Progress): mimic: ceph-fuse: client hang because its bad session PipeConnecti...
Nathan Cutler
08:28 AM Backport #40900 (Resolved): nautilus: mds: only evict an unresponsive client when another client ...
https://github.com/ceph/ceph/pull/30031 Nathan Cutler
08:28 AM Backport #40899 (Resolved): mimic: mds: only evict an unresponsive client when another client wan...
https://github.com/ceph/ceph/pull/30239 Nathan Cutler
08:25 AM Backport #40898 (Rejected): nautilus: cephfs-shell: Error messages are printed to stdout
Nathan Cutler
08:25 AM Backport #40897 (Resolved): nautilus: ceph_volume_client: fs_name must be converted to string bef...
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:25 AM Backport #40896 (Rejected): mimic: ceph_volume_client: fs_name must be converted to string before...
https://github.com/ceph/ceph/pull/30238 Nathan Cutler
08:25 AM Backport #40895 (Resolved): nautilus: pybind: Add standard error message and fix print of path as...
https://github.com/ceph/ceph/pull/30026 Nathan Cutler
08:25 AM Backport #40894 (Resolved): nautilus: mds: cleanup truncating inodes when standby replay mds trim...
https://github.com/ceph/ceph/pull/29591 Nathan Cutler
08:24 AM Backport #40892 (Resolved): luminous: mds: cleanup truncating inodes when standby replay mds trim...
https://github.com/ceph/ceph/pull/31286 Nathan Cutler
08:23 AM Backport #40887 (Resolved): nautilus: ceph_volume_client: to_bytes converts NoneType object str
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:23 AM Backport #40886 (Rejected): mimic: ceph_volume_client: to_bytes converts NoneType object str
Nathan Cutler
02:41 AM Backport #40875 (In Progress): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29187 Xiaoxi Chen
02:29 AM Bug #40877 (Resolved): client: client should return EIO when it's unsafe reqs have been dropped w...
When the session is closed, the client will drop unsafe reqs and cannot confirm whether its request had been complet... simon gao
02:21 AM Backport #40874 (In Progress): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Xiaoxi Chen
02:20 AM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29186 Xiaoxi Chen

07/22/2019

11:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29187 Xiaoxi Chen
11:41 PM Backport #40874 (Resolved): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29186 Xiaoxi Chen
11:34 PM Bug #40775 (Pending Backport): /src/include/xlist.h: 77: FAILED assert(_size == 0)
Patrick Donnelly
11:30 PM Bug #40477 (Pending Backport): mds: cleanup truncating inodes when standby replay mds trim log se...
Patrick Donnelly
11:29 PM Bug #40411 (Pending Backport): pybind: Add standard error message and fix print of path as byte o...
Patrick Donnelly
11:28 PM Bug #40800 (Pending Backport): ceph_volume_client: to_bytes converts NoneType object str
Patrick Donnelly
11:28 PM Bug #40369 (Pending Backport): ceph_volume_client: fs_name must be converted to string before usi...
Patrick Donnelly
11:26 PM Bug #40202 (Pending Backport): cephfs-shell: Error messages are printed to stdout
Patrick Donnelly
11:25 PM Feature #17854 (Pending Backport): mds: only evict an unresponsive client when another client wan...
Patrick Donnelly
09:10 PM Bug #40784 (Fix Under Review): mds: metadata changes may be lost when MDS is restarted
Patrick Donnelly
08:01 PM Bug #40873 (Duplicate): qa: expected MDS_CLIENT_LATE_RELEASE in tasks.cephfs.test_client_recovery...
This needs to be whitelisted:
/ceph/teuthology-archive/pdonnell-2019-07-20_01:58:35-fs-wip-pdonnell-testing-20190719.231...
Patrick Donnelly
06:19 PM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
As I said in issue description, I'd prefer if we clea...
Patrick Donnelly
03:26 PM Bug #40867 (Resolved): mgr: failover during in qa testing causes unresponsive client warnings
... Patrick Donnelly
01:48 PM Bug #23262 (Resolved): kclient: nofail option not supported
Nathan Cutler
01:48 PM Backport #39233 (Resolved): mimic: kclient: nofail option not supported
Nathan Cutler
01:48 PM Bug #38832 (Resolved): mds: fail to resolve snapshot name contains '_'
Nathan Cutler
01:48 PM Backport #39472 (Resolved): mimic: mds: fail to resolve snapshot name contains '_'
Nathan Cutler
01:47 PM Bug #39645 (Resolved): mds: output lock state in format dump
Nathan Cutler
01:47 PM Backport #39669 (Resolved): mimic: mds: output lock state in format dump
Nathan Cutler
01:46 PM Feature #39403 (Resolved): pybind: add the lseek() function to pybind of cephfs
Nathan Cutler
01:46 PM Backport #39679 (Resolved): mimic: pybind: add the lseek() function to pybind of cephfs
Nathan Cutler
01:25 PM Cleanup #40866 (Fix Under Review): mds: reorg Capability header
Varsha Rao
01:21 PM Cleanup #40866 (Resolved): mds: reorg Capability header
Varsha Rao
12:54 PM Backport #39689 (Resolved): mimic: mds: error "No space left on device" when create a large numb...
Nathan Cutler
12:22 PM Feature #24463: kclient: add btime support
Patches merged for v5.3. Jeff Layton
12:21 PM Feature #24463 (Resolved): kclient: add btime support
Jeff Layton
11:19 AM Backport #40168 (Resolved): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" ...
Nathan Cutler
11:18 AM Bug #40211 (Resolved): mds: fix corner case of replaying open sessions
Nathan Cutler
11:18 AM Backport #40342 (Resolved): mimic: mds: fix corner case of replaying open sessions
Nathan Cutler
11:14 AM Bug #40028 (Resolved): mds: avoid trimming too many log segments after mds failover
Nathan Cutler
11:14 AM Backport #40042 (Resolved): mimic: avoid trimming too many log segments after mds failover
Nathan Cutler
10:46 AM Bug #40864 (Resolved): cephfs-shell: rmdir doesn't complain when directory is not empty
Passing rmdir a non-empty directory doesn't remove the directory, as expected. But not printing anything mislea... Rishabh Dave
10:40 AM Bug #40863 (Resolved): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
Going through do_rmdir in cephfs-shell, I don't see any checks that stop the method from deleting non-directory files... Rishabh Dave
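The missing check described above could look something like this minimal sketch. This is a hypothetical helper, not cephfs-shell's actual do_rmdir: it simply refuses to remove anything that is not a directory instead of deleting regular files.

```python
import os
import stat

def rmdir_checked(path):
    # Hypothetical sketch: verify the target is a directory before
    # removing it, rather than silently deleting regular files.
    st = os.lstat(path)
    if not stat.S_ISDIR(st.st_mode):
        raise NotADirectoryError(path)
    os.rmdir(path)
```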
09:49 AM Backport #40845 (In Progress): nautilus: MDSMonitor: use stringstream instead of dout for mds rep...
Nathan Cutler
08:21 AM Backport #40845 (Resolved): nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
https://github.com/ceph/ceph/pull/29159 Nathan Cutler
09:48 AM Backport #40843 (In Progress): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
Nathan Cutler
08:20 AM Backport #40843 (Resolved): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
https://github.com/ceph/ceph/pull/29158 Nathan Cutler
09:47 AM Backport #40842 (In Progress): nautilus: ceph-fuse: mount does not support the fallocate()
Nathan Cutler
08:20 AM Backport #40842 (Resolved): nautilus: ceph-fuse: mount does not support the fallocate()
https://github.com/ceph/ceph/pull/29157 Nathan Cutler
09:47 AM Backport #40839 (In Progress): nautilus: cephfs-shell: TypeError in poutput
Nathan Cutler
08:20 AM Backport #40839 (Resolved): nautilus: cephfs-shell: TypeError in poutput
https://github.com/ceph/ceph/pull/29156 Nathan Cutler
09:20 AM Bug #40861 (Resolved): cephfs-shell: -p doesn't work for rmdir
... Rishabh Dave
09:18 AM Bug #40860 (Resolved): cephfs-shell: raises incorrect error when regfiles are passed to be deleted
Steps to reproduce the bug -... Rishabh Dave
08:23 AM Backport #40858 (Rejected): luminous: ceph_volume_client: python program embedded in test_volume_...
Nathan Cutler
08:23 AM Backport #40857 (Resolved): nautilus: ceph_volume_client: python program embedded in test_volume_...
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:23 AM Backport #40856 (Rejected): mimic: ceph_volume_client: python program embedded in test_volume_cli...
Nathan Cutler
08:23 AM Backport #40855 (Rejected): luminous: test_volume_client: test_put_object_versioned is unreliable
Nathan Cutler
08:23 AM Backport #40854 (Resolved): nautilus: test_volume_client: test_put_object_versioned is unreliable
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:22 AM Backport #40853 (Resolved): mimic: test_volume_client: test_put_object_versioned is unreliable
https://github.com/ceph/ceph/pull/30236 Nathan Cutler
08:21 AM Backport #40844 (Resolved): mimic: MDSMonitor: use stringstream instead of dout for mds repaired
https://github.com/ceph/ceph/pull/30235 Nathan Cutler
08:20 AM Backport #40841 (Resolved): mimic: ceph-fuse: mount does not support the fallocate()
https://github.com/ceph/ceph/pull/30228 Nathan Cutler
06:18 AM Bug #40836 (Fix Under Review): cephfs-shell: flake8 blank line and indentation error
Varsha Rao
06:06 AM Bug #40836 (Resolved): cephfs-shell: flake8 blank line and indentation error
Following are the flake8 errors... Varsha Rao

07/19/2019

05:39 PM Bug #40775 (Fix Under Review): /src/include/xlist.h: 77: FAILED assert(_size == 0)
Patrick Donnelly

07/18/2019

09:05 PM Bug #39405 (Pending Backport): ceph_volume_client: python program embedded in test_volume_client....
Patrick Donnelly
09:04 PM Bug #39510 (Pending Backport): test_volume_client: test_put_object_versioned is unreliable
Patrick Donnelly
06:06 PM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
Patrick Donnelly
06:06 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
Patrick Donnelly
05:35 PM Bug #40821 (Resolved): osdc: objecter ops output does not have useful time information
... Patrick Donnelly
01:22 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
kernel data structure for this... Zheng Yan
07:52 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
"echo 2 > /proc/sys/vm/drop_caches" can release the reference and work around the issue.... Xiaoxi Chen
06:16 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Analyzing the log further, it seems there is an overflow in ll_ref.
From the log below, it is pretty clear the pattern is 2 _ll...
Xiaoxi Chen
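The overflow reading above can be illustrated with a minimal sketch (not Ceph code; it just reinterprets the raw counter bits as a signed 32-bit value, matching the negative size reported in the assert):

```python
import ctypes

# If a 32-bit signed reference counter wraps past INT32_MAX, the raw
# bit pattern 0x80000002 reads back as a large negative number.
raw_bits = 0x80000002
as_signed = ctypes.c_int32(raw_bits).value
print(as_signed)  # -2147483646
```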
01:00 PM Feature #40811 (Fix Under Review): mds: add command that modify session metadata
Zheng Yan
07:34 AM Feature #40811 (Resolved): mds: add command that modify session metadata
Zheng Yan
10:52 AM Feature #40617 (Fix Under Review): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Ramana Raja

07/17/2019

07:56 PM Backport #40807 (In Progress): luminous: mds: msg weren't destroyed before handle_client_reconnec...
Patrick Donnelly
07:53 PM Backport #40807 (Resolved): luminous: mds: msg weren't destroyed before handle_client_reconnect r...
https://github.com/ceph/ceph/pull/29097 Patrick Donnelly
07:45 PM Backport #39233: mimic: kclient: nofail option not supported
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28090
merged
Yuri Weinstein
07:44 PM Backport #39472: mimic: mds: fail to resolve snapshot name contains '_'
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28186
merged
Yuri Weinstein
07:44 PM Backport #39669: mimic: mds: output lock state in format dump
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28274
merged
Yuri Weinstein
07:44 PM Backport #39679: mimic: pybind: add the lseek() function to pybind of cephfs
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28337
merged
Yuri Weinstein
07:43 PM Backport #39689: mimic: mds: error "No space left on device" when create a large number of dirs
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28381
merged
Yuri Weinstein
07:43 PM Backport #40168: mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nano...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28501
merged
Yuri Weinstein
07:41 PM Backport #40342: mimic: mds: fix corner case of replaying open sessions
Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28579
merged
Yuri Weinstein
07:40 PM Backport #40042: mimic: avoid trimming too many log segments after mds failover
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28650
merged
Yuri Weinstein
10:40 AM Bug #40800 (Fix Under Review): ceph_volume_client: to_bytes converts NoneType object str
Rishabh Dave
10:04 AM Bug #40800 (Resolved): ceph_volume_client: to_bytes converts NoneType object str
Precisely, it happens here - https://github.com/ceph/ceph/blob/master/src/pybind/ceph_volume_client.py#L29-L32
IMO...
Rishabh Dave
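A hedged sketch of the issue: a naive to_bytes turns None into the literal bytes b'None'. The guard below is hypothetical (not the actual ceph_volume_client fix) and shows one way to preserve None instead:

```python
def to_bytes(value):
    # Hypothetical sketch: without the None guard, str(None).encode()
    # would yield b'None' instead of preserving the missing value.
    if value is None:
        return None
    if isinstance(value, bytes):
        return value
    return str(value).encode('utf-8')
```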
10:12 AM Backport #40796 (In Progress): nautilus: mgr / volumes: support asynchronous subvolume deletes
Ramana Raja
02:39 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
The affected inode is a symlink... Xiaoxi Chen

07/16/2019

08:41 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
https://github.com/ceph/ceph/pull/29079 Patrick Donnelly
08:40 PM Feature #40036 (Pending Backport): mgr / volumes: support asynchronous subvolume deletes
Patrick Donnelly
12:57 PM Cleanup #40787 (Fix Under Review): mds: reorg CInode header
Varsha Rao
12:35 PM Cleanup #40787 (Resolved): mds: reorg CInode header
Varsha Rao
11:25 AM Bug #40695 (Fix Under Review): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
Zheng Yan
07:35 AM Bug #40784 (Resolved): mds: metadata changes may be lost when MDS is restarted
Assume a client copied some object to another location in ceph. When the early reply was received, the cp command would ... shen hang
04:07 AM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
Jos Collin

07/15/2019

02:03 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
Patrick Donnelly
01:58 PM Cleanup #40694 (Fix Under Review): mds: move MDSDaemon conf change handling to MDSRank finisher
Patrick Donnelly
01:49 PM Bug #40613 (Need More Info): kclient: .handle_message_footer got old message 1 <= 648 0x558ceadea...
Waiting to see if this happens again. Patrick Donnelly
01:46 PM Bug #40476 (Fix Under Review): cephfs-shell: cd with no args has no effect
Rishabh Dave
01:45 PM Bug #40603 (Fix Under Review): mds: disallow setting ceph.dir.pin value exceeding max rank id
Patrick Donnelly

07/13/2019

03:41 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
hmm, interesting.
-2147483646 = 0x80000002; it looks more like memory corruption?
Xiaoxi Chen
03:13 PM Bug #40775 (Resolved): /src/include/xlist.h: 77: FAILED assert(_size == 0)
It seems like we handle the inode ref wrongly? The number looks like an overflow.
-12> 2019-07-13 00:49:44.582 7...
Xiaoxi Chen
03:12 AM Bug #40746: client: removing dir reports "not empty" issue due to client side filled wrong dir of...
I don't see any problem. last paramter of fill_dirent() should be offset for next readdir. With your change, offset o... Zheng Yan
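The offset convention Zheng describes can be sketched as follows. This is a hypothetical illustration (fill_dirents, the dict layout, and the start parameter are invented names, not the Client::fill_dirent signature): each entry records the offset at which the next readdir should resume.

```python
# Hypothetical sketch of the readdir offset convention: the offset
# stored with each entry points past it, so a resumed readdir
# continues at the following entry rather than repeating this one.
def fill_dirents(names, start=0):
    out = []
    for i, name in enumerate(names, start=start + 1):
        out.append({'name': name, 'next_offset': i})
    return out
```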
12:17 AM Bug #40472 (Pending Backport): MDSMonitor: use stringstream instead of dout for mds repaired
Patrick Donnelly
12:16 AM Bug #40489 (Pending Backport): cephfs-shell: name 'files' is not defined error in do_rm()
Patrick Donnelly
12:15 AM Bug #40615 (Pending Backport): ceph-fuse: mount does not support the fallocate()
Patrick Donnelly
12:14 AM Bug #40679 (Pending Backport): cephfs-shell: TypeError in poutput
Patrick Donnelly

07/12/2019

11:16 PM Bug #40773 (Resolved): qa: 'ceph osd require-osd-release nautilus' fails
... Patrick Donnelly
05:27 PM Bug #40746 (Fix Under Review): client: removing dir reports "not empty" issue due to client side ...
Patrick Donnelly
08:30 AM Bug #40746 (Resolved): client: removing dir reports "not empty" issue due to client side filled w...
Recently, while using nfs-ganesha + cephfs, we found some "directory not empty" errors when removing an
existing directory....
Peng Xie

07/11/2019

07:50 PM Cleanup #40742 (Resolved): mds: reorg CDir header
Patrick Donnelly
07:30 PM Feature #40401 (Fix Under Review): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume a...
Ramana Raja
02:53 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
Zheng Yan wrote:
> is cephfs exported to nfs
No, it is ceph-fuse (13.2.5).
It seems like the customer has ~10 nod...
Xiaoxi Chen
01:35 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
Is cephfs exported to NFS? Zheng Yan
03:55 AM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
I think we need to try to reclaim the caps in this case. I am seeing num_stray accumulate in my env due to the inodes ... Xiaoxi Chen
03:44 AM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
Not a big issue even if the inode comes to have caps. Zheng Yan
01:09 PM Backport #40343 (Resolved): luminous: mds: fix corner case of replaying open sessions
Nathan Cutler

07/10/2019

07:17 PM Feature #40401 (In Progress): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and su...
Ramana Raja
02:53 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
... Xiaoxi Chen
02:52 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
It seems not to be a valid fix, as this is not the only path that produces the error.
We hit this in another way, not due to fr...
Xiaoxi Chen

07/09/2019

08:01 PM Cleanup #40694 (In Progress): mds: move MDSDaemon conf change handling to MDSRank finisher
Patrick Donnelly
05:45 PM Feature #40563 (Fix Under Review): client: query a single cache information, for example print a ...
Jos Collin wrote:
> @Patrick,
>
> Seems wenpengLi is already working on this?
> https://github.com/ceph/ceph/pul...
Patrick Donnelly
04:22 AM Feature #40563: client: query a single cache information, for example print a single inode cache ...
@Patrick,
Seems wenpengLi is already working on this?
https://github.com/ceph/ceph/pull/28853
Jos Collin
 
