Activity
From 03/21/2019 to 04/19/2019
04/19/2019
- 03:15 PM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> Zheng Yan wrote:
> > Sorry, I mean we don't need Lx
>
> I'm not sure I understand. What if ...
- 11:37 AM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> Sorry, I mean we don't need Lx
I'm not sure I understand. What if other clients have Ls on the...
- 01:19 AM Feature #39098: mds: lock caching for asynchronous unlink
- Sorry, I mean we don't need Lx
04/18/2019
- 09:15 PM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> With change in http://tracker.ceph.com/issues/39354, Fx is not needed for async unlink. (Other re...
- 07:36 AM Feature #39098: mds: lock caching for asynchronous unlink
- With change in http://tracker.ceph.com/issues/39354, Lx is not needed for async unlink. (Other request will release x...
- 11:06 AM Backport #39379 (In Progress): nautilus: cephfs-shell: python traceback with mkdir inside inexist...
- 09:20 AM Backport #39379 (Resolved): nautilus: cephfs-shell: python traceback with mkdir inside inexistant...
- https://github.com/ceph/ceph/pull/27677
- 11:05 AM Backport #39378 (In Progress): nautilus: cephfs-shell: support mkdir with non-octal mode
- 09:20 AM Backport #39378 (Resolved): nautilus: cephfs-shell: support mkdir with non-octal mode
- https://github.com/ceph/ceph/pull/27677
- 11:05 AM Backport #39377 (In Progress): nautilus: cephfs-shell: python traceback with mkdir when reattempt...
- 09:20 AM Backport #39377 (Resolved): nautilus: cephfs-shell: python traceback with mkdir when reattempt of...
- https://github.com/ceph/ceph/pull/27677
- 11:05 AM Backport #39376 (In Progress): nautilus: cephfs-shell: mkdir creates directory with invalid octal...
- 09:20 AM Backport #39376 (Resolved): nautilus: cephfs-shell: mkdir creates directory with invalid octal mode
- https://github.com/ceph/ceph/pull/27677
- 11:03 AM Backport #39197 (In Progress): nautilus: cephfs-shell: ls command produces error: no "colorize" a...
- 08:24 AM Bug #39349 (Closed): mds: cap revokes leak
- 08:03 AM Bug #39349: mds: cap revokes leak
- Zheng Yan wrote:
> already fixed by https://github.com/ceph/ceph/pull/26713
yes, this is already fixed, please cl...
- 07:14 AM Bug #39349: mds: cap revokes leak
- already fixed by https://github.com/ceph/ceph/pull/26713
- 05:35 AM Backport #38877 (In Progress): luminous: mds: high debug logging with many subtrees is slow
- 12:08 AM Feature #38740 (Pending Backport): cephfs-shell: support mkdir with non-octal mode
- 12:08 AM Bug #38739 (Pending Backport): cephfs-shell: python traceback with mkdir inside inexistant directory
- 12:08 AM Bug #38741 (Pending Backport): cephfs-shell: python traceback with mkdir when reattempt of mkdir
- 12:08 AM Bug #38743 (Pending Backport): cephfs-shell: mkdir creates directory with invalid octal mode
04/17/2019
- 08:59 PM Feature #39354: mds: derive wrlock from excl caps
- https://github.com/ceph/ceph/pull/27648
- 01:07 PM Feature #39354 (In Progress): mds: derive wrlock from excl caps
- 12:49 PM Feature #39354 (Closed): mds: derive wrlock from excl caps
- preparation for buffered create/unlink
- 08:42 PM Bug #39349 (Fix Under Review): mds: cap revokes leak
- 07:50 AM Bug #39349: mds: cap revokes leak
- There exist possibilities as follows:
1. some req make mds do simple_xlock on a filelock that's in LOCK_xlock...
- 07:25 AM Bug #39349 (Closed): mds: cap revokes leak
- Recently, one of our clusters, after updating to 12.2.11, occasionally reports that "XXX clients failing to respond t...
- 09:31 AM Bug #11314 (In Progress): qa: MDS crashed and the runs hung without ever timing out
- 09:31 AM Feature #5520 (New): osdc: should handle namespaces
- 08:01 AM Bug #39350 (Resolved): df command error
- Commit 417836d causes the df command to produce incorrect output....
- 06:01 AM Bug #39078 (Resolved): fs: we lack a feature bit for nautilus
- 06:01 AM Backport #39187 (Resolved): nautilus: fs: we lack a feature bit for nautilus
04/16/2019
- 11:54 PM Backport #39187: nautilus: fs: we lack a feature bit for nautilus
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27497
merged
- 08:11 PM Bug #39166 (Fix Under Review): mds: error "No space left on device" when create a large number o...
- 06:05 AM Bug #39166: mds: error "No space left on device" when create a large number of dirs
- Patrick Donnelly wrote:
> What version are you running?
Luminous 12.2.10. I have fixed this issue by pull request 2...
- 08:07 PM Bug #39305 (Fix Under Review): ceph-fuse: client hang because its bad session PipeConnection to mds
- 02:35 AM Bug #39305 (Resolved): ceph-fuse: client hang because its bad session PipeConnection to mds
- There is still a risk that the client will hang indefinitely, which happened in my environment
a few days ago. Well known,...
- 04:22 PM Bug #39329 (Won't Fix): ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connecte...
- ceph fs set <xxxxx> min_compat_client <yyyy> DOES NOT warn about connected incompatible clients
in spite of having...
- 10:56 AM Bug #38652 (Resolved): mds|kclient: MDS_CLIENT_LATE_RELEASE warning caused by inline bug on RHEL 7.5
- 10:56 AM Backport #39225 (Resolved): nautilus: mds|kclient: MDS_CLIENT_LATE_RELEASE warning caused by inli...
- 10:55 AM Bug #39060 (Resolved): ls -S command produces AttributeError: 'str' object has no attribute 'decode'
- 10:54 AM Backport #39260 (Resolved): nautilus: ls -S command produces AttributeError: 'str' object has no ...
- 10:54 AM Bug #38804 (Resolved): cephfs-shell: ls always lists hidden files and directories
- 10:54 AM Backport #39217 (Resolved): nautilus: cephfs-shell: ls always lists hidden files and directories
04/15/2019
- 09:10 PM Bug #39166: mds: error "No space left on device" when create a large number of dirs
- What version are you running?
- 08:05 PM Backport #39225: nautilus: mds|kclient: MDS_CLIENT_LATE_RELEASE warning caused by inline bug on R...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27500
merged
- 08:03 PM Backport #39260: nautilus: ls -S command produces AttributeError: 'str' object has no attribute '...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27531
merged
- 08:03 PM Backport #39217: nautilus: cephfs-shell: ls always lists hidden files and directories
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27531
merged
04/12/2019
- 11:54 AM Bug #39266 (Fix Under Review): There is no punctuation mark or blank between tid and client_id i...
- 03:13 AM Bug #39266: There is no punctuation mark or blank between tid and client_id in the output of "ce...
- https://github.com/ceph/ceph/pull/27537
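The report above boils down to two numeric ids being concatenated with no separator in the health-detail output. As a hedged sketch of the kind of fix involved (the field layout below is hypothetical, not the actual MDS code):

```python
def format_slow_request(tid: int, client_id: int) -> str:
    """Illustrative only: emit an explicit separator between the tid and
    the client id, so the two numbers cannot run together in the output.
    The exact format string is an assumption, not the real Ceph code."""
    return f"{tid}:client.{client_id}"
```

With a separator present, e.g. tid 4897 and client 24148 render as `4897:client.24148` instead of running together as one ambiguous number.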
- 02:58 AM Bug #39266 (Resolved): There is no punctuation mark or blank between tid and client_id in the ou...
- There is no punctuation mark or blank between tid and client_id.
The output of "ceph health detail" is listed bel...
- 04:49 AM Feature #17309 (Fix Under Review): qa: mon_thrash test for CephFS
04/11/2019
- 05:47 PM Backport #39217 (In Progress): nautilus: cephfs-shell: ls always lists hidden files and directories
- 05:46 PM Backport #39260 (In Progress): nautilus: ls -S command produces AttributeError: 'str' object has ...
- 05:43 PM Backport #39260 (Resolved): nautilus: ls -S command produces AttributeError: 'str' object has no ...
- https://github.com/ceph/ceph/pull/27531
- 04:47 PM Bug #39060 (Pending Backport): ls -S command produces AttributeError: 'str' object has no attribu...
- 08:45 AM Bug #38832 (Fix Under Review): mds: fail to resolve snapshot name contains '_'
- https://github.com/ceph/ceph/pull/27511
- 08:04 AM Bug #38832: mds: fail to resolve snapshot name contains '_'
- There's some confusion here -- we created a snapshot named 'hourly_2019-03-20_172736'. The internal _ chars are causi...
- 07:35 AM Bug #38832 (Need More Info): mds: fail to resolve snapshot name contains '_'
- 07:35 AM Bug #38832: mds: fail to resolve snapshot name contains '_'
- Server::handle_client_mksnap already contains following code:...
04/10/2019
- 09:55 PM Backport #39225 (In Progress): nautilus: mds|kclient: MDS_CLIENT_LATE_RELEASE warning caused by i...
- 09:08 PM Backport #39225 (Resolved): nautilus: mds|kclient: MDS_CLIENT_LATE_RELEASE warning caused by inli...
- https://github.com/ceph/ceph/pull/27500
- 09:14 PM Backport #39187 (In Progress): nautilus: fs: we lack a feature bit for nautilus
- 09:03 PM Backport #39187 (Resolved): nautilus: fs: we lack a feature bit for nautilus
- https://github.com/ceph/ceph/pull/27497
- 09:10 PM Backport #39233 (Resolved): mimic: kclient: nofail option not supported
- https://github.com/ceph/ceph/pull/28090
- 09:10 PM Backport #39232 (Resolved): nautilus: kclient: nofail option not supported
- https://github.com/ceph/ceph/pull/27851
- 09:10 PM Backport #39231 (Resolved): luminous: kclient: nofail option not supported
- https://github.com/ceph/ceph/pull/28436
- 09:07 PM Backport #39223 (Resolved): mimic: mds: behind on trimming and "[dentry] was purgeable but no lon...
- https://github.com/ceph/ceph/pull/29224
- 09:07 PM Backport #39222 (Resolved): nautilus: mds: behind on trimming and "[dentry] was purgeable but no ...
- https://github.com/ceph/ceph/pull/27879
- 09:07 PM Backport #39221 (Resolved): luminous: mds: behind on trimming and "[dentry] was purgeable but no ...
- https://github.com/ceph/ceph/pull/28432
- 09:06 PM Backport #39217 (Resolved): nautilus: cephfs-shell: ls always lists hidden files and directories
- https://github.com/ceph/ceph/pull/27531
- 09:06 PM Backport #39215 (Resolved): mimic: mds: there is an assertion when calling Beacon::shutdown()
- https://github.com/ceph/ceph/pull/29223
- 09:06 PM Backport #39214 (Resolved): nautilus: mds: there is an assertion when calling Beacon::shutdown()
- https://github.com/ceph/ceph/pull/27852
- 09:06 PM Backport #39213 (Resolved): luminous: mds: there is an assertion when calling Beacon::shutdown()
- https://github.com/ceph/ceph/pull/28438
- 09:06 PM Backport #39212 (Resolved): mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
- https://github.com/ceph/ceph/pull/29222
- 09:06 PM Backport #39211 (Resolved): nautilus: MDSTableServer.cc: 83: FAILED assert(version == tid)
- https://github.com/ceph/ceph/pull/27853
- 09:06 PM Backport #39210 (Resolved): mimic: mds: mds_cap_revoke_eviction_timeout is not used to initialize...
- https://github.com/ceph/ceph/pull/29220
- 09:06 PM Backport #39209 (Resolved): nautilus: mds: mds_cap_revoke_eviction_timeout is not used to initial...
- https://github.com/ceph/ceph/pull/27842
- 09:05 PM Backport #39208 (Resolved): luminous: mds: mds_cap_revoke_eviction_timeout is not used to initial...
- https://github.com/ceph/ceph/pull/27840
- 09:04 PM Backport #39200 (Resolved): mimic: mds: we encountered "No space left on device" when moving huge...
- https://github.com/ceph/ceph/pull/27917
- 09:04 PM Backport #39199 (Resolved): nautilus: mds: we encountered "No space left on device" when moving h...
- https://github.com/ceph/ceph/pull/27736
- 09:04 PM Backport #39198 (Resolved): luminous: mds: we encountered "No space left on device" when moving h...
- https://github.com/ceph/ceph/pull/27801
- 09:04 PM Backport #39197 (Resolved): nautilus: cephfs-shell: ls command produces error: no "colorize" attr...
- https://github.com/ceph/ceph/pull/27677
- 09:03 PM Backport #39193 (Resolved): mimic: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27916
- 09:03 PM Backport #39192 (Resolved): nautilus: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27714
- 09:03 PM Backport #39191 (Resolved): luminous: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27737
- 09:01 PM Backport #39176 (Resolved): nautilus: doc: add documentation for `fs set min_compat_client`
- https://github.com/ceph/ceph/pull/27900
- 04:51 PM Bug #39173: ceph-fuse: Ceph-fuse too slow compared to kernel client
- Sure, it would help to mention the level of performance difference in the documents here http://docs.ceph.com/docs/ma...
- 04:42 PM Bug #39173 (Won't Fix): ceph-fuse: Ceph-fuse too slow compared to kernel client
- Sorry this isn't a bug and it's well documented everywhere. The userspace client libraries (and FUSE) are known to be...
- 04:36 PM Bug #39173 (Won't Fix): ceph-fuse: Ceph-fuse too slow compared to kernel client
- I have a ceph cluster with 10 osd nodes, 50 osds, 5 mons, 5 mds (1 active, 4 standby).
Ceph cluster is running latest...
- 05:45 AM Bug #39166 (Resolved): mds: error "No space left on device" when create a large number of dirs
- When creating dirs under the ceph mount path quickly, it returns "No space left on device" after the number goes beyond the ...
- 04:32 AM Bug #38681: cephfs-shell: add commands to manipulate snapshots
- filed a separate tracker for quota management
- 04:31 AM Bug #39165 (Resolved): cephfs-shell: add commands to manipulate quotas
- The idea is to make this a trivial one-off command to manipulate quotas on CephFS directories without mounting.
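CephFS quotas are exposed as virtual xattrs (`ceph.quota.max_bytes` and `ceph.quota.max_files`), so a one-off quota command largely reduces to a setxattr call. A minimal sketch of the argument mapping such a command might use (the command shape and helper name are hypothetical, not the cephfs-shell implementation):

```python
# Map user-facing quota keys to the CephFS quota xattr names.
# A value of 0 clears the limit. Helper name is illustrative only.
QUOTA_XATTRS = {
    "max_bytes": "ceph.quota.max_bytes",
    "max_files": "ceph.quota.max_files",
}

def quota_setxattr_args(key: str, value: int):
    """Translate (key, value) into (xattr_name, xattr_value) for setxattr."""
    if key not in QUOTA_XATTRS:
        raise ValueError(f"unknown quota key: {key!r}")
    if value < 0:
        raise ValueError("quota value must be >= 0 (0 clears the limit)")
    return QUOTA_XATTRS[key], str(value).encode()
```

A `quota set max_bytes 10737418240 /dir` style command would then just pass the returned pair to the libcephfs setxattr entry point.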
04/09/2019
- 03:10 PM Bug #38681 (Fix Under Review): cephfs-shell: add commands to manipulate snapshots
- need review for snapshot management PR
- 01:18 PM Bug #38679 (Pending Backport): mds: behind on trimming and "[dentry] was purgeable but no longer ...
- 01:18 PM Documentation #39130 (Pending Backport): doc: add documentation for `fs set min_compat_client`
- 12:08 AM Bug #38996 (Pending Backport): cephfs-shell: ls command produces error: no "colorize" attribute f...
04/08/2019
- 11:13 PM Bug #39026 (Pending Backport): mds: crash during mds restart
- 11:12 PM Bug #38994 (Pending Backport): mds: we encountered "No space left on device" when moving huge num...
- 10:42 PM Bug #38803 (Resolved): qa: test_sessionmap assumes simple messenger
- 06:31 PM Bug #22249 (Can't reproduce): Need to restart MDS to release cephfs space
- 06:28 PM Bug #16322 (Can't reproduce): ceph mds getting killed for no reason
- 06:26 PM Bug #38832: mds: fail to resolve snapshot name contains '_'
- Zheng, what was the motivation for the change to append this information to the snap name after the final _? re: 07e7...
- 06:19 PM Cleanup #24745 (Won't Fix): Spurious empty files in CephFS root pool when multiple pools associated
- As John noted, this isn't a bug. It may be we eventually try to relax the requirement for backtraces to be stored on ...
- 06:08 PM Cleanup #4744: mds: pass around LogSegments via std::shared_ptr
- 06:07 PM Feature #38951: client: implement asynchronous unlink/create
- Actually it may be sufficient to just wait on any existing dirops to complete before we do a synchronous one. I alrea...
- 04:31 PM Feature #38951: client: implement asynchronous unlink/create
- I ran fsstress on these patches today and hit a deadlock:...
- 04:46 PM Feature #38849 (Duplicate): mds: proactively merge orphaned inodes into a remaining parent after ...
- 01:49 PM Bug #39061 (Duplicate): Test failure: test_drop_cache_command_timeout_asok (tasks.cephfs.test_mis...
- http://tracker.ceph.com/issues/38348
- 01:46 PM Bug #38996 (Fix Under Review): cephfs-shell: ls command produces error: no "colorize" attribute f...
- 08:06 AM Bug #38996: cephfs-shell: ls command produces error: no "colorize" attribute found error
- Patrick Donnelly wrote:
> I just ran into this issue. Sorry for assuming it's a duplicate.
>
> It looks like this...
- 12:14 PM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- Patrick Donnelly wrote:
> Rishabh, please do this backport. See also this guide: http://tracker.ceph.com/projects/ce...
- 11:57 AM Backport #38540 (In Progress): mimic: qa: fsstress with valgrind may timeout
04/06/2019
- 10:07 AM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> If mds revokes Lx or Fx, client should flush the async unlinks (on the file or under the director... - 03:11 AM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> Jeff Layton wrote:
> > Zheng Yan wrote:
> > > mds takes xlock on linklock when unlinking file, ...
- 03:08 AM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> Zheng Yan wrote:
> > mds takes xlock on linklock when unlinking file, which always revokes Lsx...
04/05/2019
- 11:10 PM Bug #38803 (Fix Under Review): qa: test_sessionmap assumes simple messenger
- 06:27 PM Bug #38803 (In Progress): qa: test_sessionmap assumes simple messenger
- 09:12 PM Documentation #39130 (Fix Under Review): doc: add documentation for `fs set min_compat_client`
- 07:26 PM Documentation #39130 (Resolved): doc: add documentation for `fs set min_compat_client`
- 06:40 PM Feature #38951: client: implement asynchronous unlink/create
- Ok, I have a prototype implementation that depends on a couple of small MDS patches that are discussed here https://t...
- 05:41 PM Feature #39129 (Resolved): create mechanism to delegate ranges of inode numbers to client
- Create a mechanism by which we can hand out ranges of inode numbers to MDS clients. The clients can then use those to...
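The delegated ranges described above (also discussed as a `delegated_inos` interval_set in #24461) can be modeled as an interval set the client draws from when buffering creates. A toy Python sketch, with illustrative names and behavior assumed rather than taken from the actual session code:

```python
class DelegatedInos:
    """Toy model of a client's delegated inode-number ranges: a simplified
    interval set, loosely analogous to the delegated_inos idea above."""

    def __init__(self):
        self.ranges = []  # sorted, disjoint [start, end) pairs

    def insert(self, start, length):
        """MDS hands the client a range of inode numbers."""
        self.ranges.append([start, start + length])

    def take(self):
        """Consume one delegated inode number, or return None if empty."""
        if not self.ranges:
            return None
        start, end = self.ranges[0]
        if start + 1 == end:
            self.ranges.pop(0)
        else:
            self.ranges[0][0] = start + 1
        return start
```

A client doing a buffered create would call `take()` to pick the inode number locally, falling back to a synchronous MDS request when no delegated numbers remain.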
- 05:23 PM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> This patch seems to do the right thing for me. I'm not sure whether this is the right approach ...
- 05:22 PM Feature #39098: mds: lock caching for asynchronous unlink
- Jeff Layton wrote:
> Zheng Yan wrote:
> > mds takes xlock on linklock when unlinking file, which always revokes Lsx...
- 03:31 PM Feature #39098: mds: lock caching for asynchronous unlink
- This patch seems to do the right thing for me. I'm not sure whether this is the right approach though, and we will ne...
- 12:00 PM Feature #39098: mds: lock caching for asynchronous unlink
- Continuing that thought, I see that handle_client_unlink does this unconditionally:...
- 10:51 AM Feature #39098: mds: lock caching for asynchronous unlink
- Zheng Yan wrote:
> mds takes xlock on linklock when unlinking file, which always revokes Lsx. why does client want ... - 09:07 AM Feature #39098: mds: lock caching for asynchronous unlink
- mds takes xlock on linklock when unlinking file, which always revokes Lsx. why does client want to keep Lx after sen...
04/04/2019
- 10:35 PM Bug #38804 (Pending Backport): cephfs-shell: ls always lists hidden files and directories
- 10:32 PM Bug #38996: cephfs-shell: ls command produces error: no "colorize" attribute found error
- I just ran into this issue. Sorry for assuming it's a duplicate.
It looks like this was caused by the cmd2 module ...
- 07:47 PM Bug #39020 (Resolved): qa: qa/suites/fs/upgrade testing with upgrades from luminous no longer work
- 07:46 PM Bug #39078 (Pending Backport): fs: we lack a feature bit for nautilus
- Please only backport dcd6e97944f1eefb236c5b680569c0e7c085a692.
- 07:45 PM Bug #39077 (Resolved): fs: add note to release process that new CEPHFS_FEATURE_X bit needs added ...
- Effected in a00151ac as a compiler error.
- 07:40 PM Bug #38835 (Pending Backport): MDSTableServer.cc: 83: FAILED assert(version == tid)
- 07:39 PM Bug #38652 (Pending Backport): mds|kclient: MDS_CLIENT_LATE_RELEASE warning caused by inline bug ...
- 07:33 PM Feature #39098: mds: lock caching for asynchronous unlink
- This does help the MDS to hand out Lx caps on newly created files, but I'm having some trouble getting the cap handli...
- 04:38 PM Feature #39098: mds: lock caching for asynchronous unlink
- Now that I think about it, Lx may not be required for rmdir anyway:
We only need that for files that can be hardli...
- 04:06 PM Feature #39098: mds: lock caching for asynchronous unlink
- Thanks Zheng, with both of those patches, I get Lx caps on newly-created files, but not on new directories. This is t...
- 02:22 PM Feature #39098: mds: lock caching for asynchronous unlink
- the following patch improves the chance that the client gets Fsx on a directory....
- 01:59 PM Feature #39098: mds: lock caching for asynchronous unlink
- please try the following mds patch...
- 10:57 AM Feature #39098: mds: lock caching for asynchronous unlink
- I added that to the userland client and then used cephfs-shell to create a directory and a file inside it. Neither cr...
- 09:25 AM Feature #39098: mds: lock caching for asynchronous unlink
- please try following patch...
- 01:59 PM Bug #38742 (Fix Under Review): cephfs-shell: entering unrecognized command does not print newline...
- Updated with PR number
- 01:57 PM Feature #38740 (Fix Under Review): cephfs-shell: support mkdir with non-octal mode
- updated with PR number
- 01:56 PM Bug #38739 (Fix Under Review): cephfs-shell: python traceback with mkdir inside inexistant directory
- yes; the shell should print a sensible error in such a case, in my opinion
- 01:54 PM Bug #38741 (Fix Under Review): cephfs-shell: python traceback with mkdir when reattempt of mkdir
- 01:52 PM Bug #38743 (Fix Under Review): cephfs-shell: mkdir creates directory with invalid octal mode
- 01:29 PM Backport #38340 (In Progress): luminous: mds: may leak gather during cache drop
- 12:05 AM Bug #23262: kclient: nofail option not supported
- Kenneth Waegeman wrote:
> Hi,
> Should I do something to backport it?
No that's not necessary.
04/03/2019
- 10:49 PM Bug #39060 (Fix Under Review): ls -S command produces AttributeError: 'str' object has no attribu...
- 07:53 PM Feature #39098 (Resolved): mds: lock caching for asynchronous unlink
- In order to allow the client to asynchronously delete files, we need Fx caps on the parent, and Lx caps on the inode ...
- 02:17 PM Bug #23262: kclient: nofail option not supported
- Hi,
Should I do something to backport it?
- 01:15 PM Feature #5520 (In Progress): osdc: should handle namespaces
- 02:32 AM Bug #38832: mds: fail to resolve snapshot name contains '_'
- ...
04/02/2019
- 03:24 PM Bug #37378: truncate_seq ordering issues with object creation
- Just a quick update: I've pushed another version of my copy_from/truncate OSD fix into https://github.com/ceph/ceph/p...
- 04:18 AM Backport #38445 (In Progress): luminous: mds: drop cache does not timeout as expected
- 02:00 AM Bug #39079 (Resolved): qa: simple messenger removal causes qa build failure
- 12:07 AM Bug #39079 (Fix Under Review): qa: simple messenger removal causes qa build failure
- 12:03 AM Bug #39079 (Resolved): qa: simple messenger removal causes qa build failure
- ...
- 12:19 AM Bug #39020 (Fix Under Review): qa: qa/suites/fs/upgrade testing with upgrades from luminous no lo...
- 12:19 AM Bug #39078 (Fix Under Review): fs: we lack a feature bit for nautilus
- https://github.com/ceph/ceph/pull/27303/commits/1aaa3a50c34ed4214d7d13a87b4011a557f2f14a
- 12:19 AM Bug #39077 (Fix Under Review): fs: add note to release process that new CEPHFS_FEATURE_X bit need...
04/01/2019
- 11:50 PM Bug #39078 (Resolved): fs: we lack a feature bit for nautilus
- This prevents an operator from configuring the file system to only allow nautilus clients.
- 11:48 PM Bug #39077 (In Progress): fs: add note to release process that new CEPHFS_FEATURE_X bit needs add...
- 11:31 PM Bug #39077 (Resolved): fs: add note to release process that new CEPHFS_FEATURE_X bit needs added ...
- Currently this is done ad-hoc and we just missed Nautilus.
- 11:11 PM Bug #38944 (Won't Fix): MDSMonitor: no mechanism for seeing value of allow_new_snaps
- The value is available in `ceph fs dump --format=json` in filesystems[].mdsmap.flags. The magic bit number is 2. See ...
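Given the note above, reading the flag from the JSON dump is a one-line bit test. A sketch assuming "the magic bit number is 2" means bit position 2 of the integer `flags` field (treat the bit position as an assumption):

```python
ALLOW_NEW_SNAPS_BIT = 2  # per the comment above; an assumption, not a stable API

def allow_new_snaps(mdsmap_flags: int) -> bool:
    """Return True if the allow_new_snaps bit is set in an MDSMap flags value,
    e.g. filesystems[].mdsmap.flags from `ceph fs dump --format=json`."""
    return bool(mdsmap_flags & (1 << ALLOW_NEW_SNAPS_BIT))
```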
- 10:06 PM Bug #38844 (Pending Backport): mds: mds_cap_revoke_eviction_timeout is not used to initialize Ser...
- 10:05 PM Bug #23262 (Pending Backport): kclient: nofail option not supported
- 09:46 PM Bug #38822 (Pending Backport): mds: there is an assertion when calling Beacon::shutdown()
- 06:02 PM Feature #38951: client: implement asynchronous unlink/create
- Jeff Layton wrote:
> I've started going through the kernel client, as I figured this would be more useful there init...
- 04:35 PM Feature #38951: client: implement asynchronous unlink/create
- I've started going through the kernel client, as I figured this would be more useful there initially (and because I u...
- 04:56 PM Backport #38445: luminous: mds: drop cache does not timeout as expected
- Venky, this is the backport you're looking for I think. Can you do it?
ACK -- I'll do the backport. Thanks! - 04:55 PM Backport #38877: luminous: mds: high debug logging with many subtrees is slow
- Rishabh, please do this backport. See also this guide: http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_back...
- 04:53 PM Backport #36462 (New): luminous: ceph-fuse client can't read or write due to backward cap_gen
- 04:53 PM Backport #36462 (New): luminous: ceph-fuse client can't read or write due to backward cap_gen
- 04:53 PM Backport #38735 (Resolved): luminous: qa: tolerate longer heartbeat timeouts when using valgrind
- 01:57 PM Backport #38735: luminous: qa: tolerate longer heartbeat timeouts when using valgrind
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26964
merged
- 04:44 PM Backport #38686 (In Progress): luminous: kcephfs TestClientLimits.test_client_pin fails with "cli...
- 04:44 PM Backport #38686 (Resolved): luminous: kcephfs TestClientLimits.test_client_pin fails with "client...
- 04:43 PM Backport #38669 (Resolved): luminous: "log [WRN] : Health check failed: 1 clients failing to resp...
- 01:56 PM Backport #38669: luminous: "log [WRN] : Health check failed: 1 clients failing to respond to capa...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/27024
merged
- 04:43 PM Backport #38545 (Resolved): luminous: qa: "Loading libcephfs-jni: Failure!"
- 01:58 PM Backport #38545: luminous: qa: "Loading libcephfs-jni: Failure!"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26820
merged
- 04:42 PM Backport #38543 (Resolved): luminous: qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs becau...
- 01:58 PM Backport #38543: luminous: qa: tasks.cephfs.test_misc.TestMisc.test_fs_new hangs because clients ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26805
merged
- 04:42 PM Backport #38541 (Resolved): luminous: qa: fsstress with valgrind may timeout
- 01:58 PM Backport #38541: luminous: qa: fsstress with valgrind may timeout
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26776
merged
- 04:42 PM Backport #38449 (Resolved): luminous: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- 01:59 PM Backport #38449: luminous: src/osdc/Journaler.cc: 420: FAILED ceph_assert(!r)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26642
merged
- 12:57 PM Bug #39061 (Duplicate): Test failure: test_drop_cache_command_timeout_asok (tasks.cephfs.test_mis...
- Seen in luminous run: http://qa-proxy.ceph.com/teuthology/yuriw-2019-03-28_20:44:19-fs-wip-yuri4-testing-2019-03-28-1...
- 12:42 PM Bug #39060 (Resolved): ls -S command produces AttributeError: 'str' object has no attribute 'decode'
- ...
03/30/2019
- 08:19 AM Backport #39051 (Resolved): nautilus: doc: add LAZYIO
- https://github.com/ceph/ceph/pull/27899
- 08:19 AM Backport #39050 (Resolved): nautilus: ceph_volume_client: Too many arguments for "WriteOpCtx"
- https://github.com/ceph/ceph/pull/27893
03/29/2019
- 09:44 PM Documentation #38729 (Pending Backport): doc: add LAZYIO
- 09:37 PM Bug #38804 (Fix Under Review): cephfs-shell: ls always lists hidden files and directories
- 04:58 PM Feature #38951: client: implement asynchronous unlink/create
- Jeff Layton wrote:
> Today, we have no support for asynchronous MDS requests. make_request always blocks on the requ...
- 04:35 PM Feature #38951: client: implement asynchronous unlink/create
- Today, we have no support for asynchronous MDS requests. make_request always blocks on the request. So the first step...
- 04:18 PM Bug #39026 (Fix Under Review): mds: crash during mds restart
- 05:33 AM Bug #39026: mds: crash during mds restart
- https://github.com/ceph/ceph/pull/27256
- 03:41 AM Bug #39026 (Resolved): mds: crash during mds restart
- On version 12.2.10...
- 02:36 PM Bug #38946 (Pending Backport): ceph_volume_client: Too many arguments for "WriteOpCtx"
03/28/2019
- 10:02 PM Bug #39020 (Resolved): qa: qa/suites/fs/upgrade testing with upgrades from luminous no longer work
- ...
- 07:36 PM Bug #38996 (Duplicate): cephfs-shell: ls command produces error: no "colorize" attribute found error
- 07:19 AM Bug #38996 (Resolved): cephfs-shell: ls command produces error: no "colorize" attribute found error
- Even though the package colorize-0.3.4-15.fc29.noarch is already installed.
The following error is produced....
- 05:06 PM Feature #38951: client: implement asynchronous unlink/create
- Jeff, please link the various tracker tickets you create to sub-task the project with "related issues" so they don't ...
- 04:46 PM Bug #38822: mds: there is an assertion when calling Beacon::shutdown()
- huanwen ren wrote:
> hi Patrick @Patrick Donnelly
> Can you pull me into the development of the tracker? I can't mo...
- 04:35 PM Feature #38838 (Fix Under Review): Expose CephFS snapshot creation time to clients
- 04:32 PM Bug #38994 (Fix Under Review): mds: we encountered "No space left on device" when moving huge num...
- 03:52 AM Bug #38994: mds: we encountered "No space left on device" when moving huge number of files into o...
- the same reason as adding maybe_fragment on the tail of handle_client_openc.
- 03:35 AM Bug #38994 (Resolved): mds: we encountered "No space left on device" when moving huge number of f...
- The issue was found in version 12.2.10.
client log is listed below:
.....
mv: cannot move ‘src1325’ to ‘../destf...
- 12:46 PM Bug #38835 (Fix Under Review): MDSTableServer.cc: 83: FAILED assert(version == tid)
- 10:06 AM Bug #38835: MDSTableServer.cc: 83: FAILED assert(version == tid)
- https://github.com/ceph/ceph/pull/27238
- 12:42 PM Bug #38946: ceph_volume_client: Too many arguments for "WriteOpCtx"
- I'm not sure it's worth the trouble to backport to luminous, and mimic. Maybe OK to backport to latest stable release...
- 12:29 PM Bug #38651 (Resolved): qa: powercycle suite reports MDS_SLOW_METADATA_IO
- 12:29 PM Backport #38666 (Resolved): mimic: qa: powercycle suite reports MDS_SLOW_METADATA_IO
- 12:22 PM Backport #38665 (Resolved): luminous: qa: powercycle suite reports MDS_SLOW_METADATA_IO
03/27/2019
- 07:58 PM Backport #38666: mimic: qa: powercycle suite reports MDS_SLOW_METADATA_IO
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26961
merged
- 07:25 PM Backport #38688 (Rejected): luminous: mds: inode filtering on 'dump cache' asok
- 07:22 PM Feature #38849: mds: proactively merge orphaned inodes into a remaining parent after deleting pri...
- Another approach may be to allow the stray directory to fragment. Are there any downsides to doing that?
- 07:10 PM Feature #38951: client: implement asynchronous unlink/create
- I'm taking the approach that if we have to contact the server at all, then we probably should just send a synchronous...
- 02:53 PM Feature #38951 (Resolved): client: implement asynchronous unlink/create
- We have an open project to teach the client how to buffer creates when it has the right caps (Fx), and a delegated se...
- 03:58 PM Feature #38838: Expose CephFS snapshot creation time to clients
- Samba changes have also been proposed: https://lists.samba.org/archive/samba-technical/2019-March/133089.html
- 01:51 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> I have a couple of starter patches that add a new delegated_inos interval_set to session_info_t... - 01:23 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I have a couple of starter patches that add a new delegated_inos interval_set to session_info_t.
Questions at this...
- 01:09 PM Bug #38835: MDSTableServer.cc: 83: FAILED assert(version == tid)
- after restarting the mds, the crash will not happen again.
- 10:23 AM Bug #38835: MDSTableServer.cc: 83: FAILED assert(version == tid)
- Yes this was upgraded from luminous and yes that was the first time the MDS was running in mimic....
- 08:25 AM Bug #38835: MDSTableServer.cc: 83: FAILED assert(version == tid)
Go to frame 4 and try casting the MDSIOContextBase into C_Prepare; tid is stored in C_Prepare.
was the cluster upgrade...
- 12:00 PM Bug #38946 (Fix Under Review): ceph_volume_client: Too many arguments for "WriteOpCtx"
- https://github.com/ceph/ceph/pull/27213
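As an aside, the class of error mypy reports here can be reproduced with a tiny example. This is a hedged sketch only: the names and signature below are hypothetical, standing in for the real WriteOpCtx in pybind/ceph_volume_client.py; the point is a call site passing more positional arguments than the constructor accepts.

```python
# Hypothetical minimal reproduction of a "Too many arguments" error.
# The class name mirrors the one in the report, but the signature is
# illustrative, not the real ceph_volume_client API.

class WriteOpCtx:
    """Toy context manager accepting only the ioctx it operates on."""
    def __init__(self, ioctx):
        self.ioctx = ioctx

    def __enter__(self):
        return self.ioctx

    def __exit__(self, exc_type, exc_val, exc_tb):
        return False

# A stale call site passing an extra positional argument is flagged
# statically by mypy ('Too many arguments for "WriteOpCtx"') and also
# fails at runtime with a TypeError:
try:
    WriteOpCtx("ioctx", "extra-arg")
except TypeError as e:
    print("caught:", e)
```

Running mypy over such a file catches the mismatch before the TypeError ever fires at runtime, which is what the linting run in this report did.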
- 09:45 AM Bug #38946 (Resolved): ceph_volume_client: Too many arguments for "WriteOpCtx"
Sebastian W hit this while linting:
pybind/ceph_volume_client.py:1480: error: Too many arguments for "WriteOpCtx"
...
- 06:52 AM Bug #38944: MDSMonitor: no mechanism for seeing value of allow_new_snaps
I also couldn't find a way to read the setting. I would have expected it to at least be part of `ceph fs dump`.
- 03:05 AM Bug #38652: mds|kclient: MDS_CLIENT_LATE_RELEASE warning caused by inline bug on RHEL 7.5
a new issue that can cause this warning (the file lock becomes sync state while Fcb is issued)
/ceph/teuthology-archive...
03/26/2019
- 11:46 PM Bug #38944 (Won't Fix): MDSMonitor: no mechanism for seeing value of allow_new_snaps
- Although it's obvious that cephfs requires "allow_new_snaps = true" in order to enable snapshots, there appears to be...
- 04:48 PM Backport #38665: luminous: qa: powercycle suite reports MDS_SLOW_METADATA_IO
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/26962
merged
- 02:48 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Great, so let's do a minor revision on the rules above. In order to buffer creates the client will need:
# an unus...
- 11:42 AM Bug #38835: MDSTableServer.cc: 83: FAILED assert(version == tid)
- Zheng Yan wrote:
> I have trouble to check the coredump file. please use gdb to print 'tid' and 'version' if you can... - 03:07 AM Bug #38835: MDSTableServer.cc: 83: FAILED assert(version == tid)
I'm having trouble examining the coredump file. Please use gdb to print 'tid' and 'version' if you can.
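For reference, the kind of gdb session suggested in the #38835 comments might look like the following. This is a sketch: the frame number and the C_Prepare cast come from the earlier comment, while the pointer name (`fin`) is an assumption and will depend on the actual binary and core file.

```
(gdb) bt                    # locate the MDSTableServer frame in the backtrace
(gdb) frame 4               # frame number may differ in your core
(gdb) print version
(gdb) print tid
(gdb) # if only an MDSIOContextBase* is in scope, try the suggested cast:
(gdb) print *(C_Prepare *) fin
```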
- 12:49 AM Bug #38822: mds: there is an assertion when calling Beacon::shutdown()
Hi Patrick @Patrick Donnelly
Could you add me to the tracker's developer group? I can't modify most of the states ...
03/25/2019
- 05:31 PM Feature #38838: Expose CephFS snapshot creation time to clients
- Kernel PR: https://www.spinics.net/lists/ceph-devel/msg44894.html
Both patch-sets have been acked, but the userspa...
03/22/2019
- 01:04 PM Backport #38877 (Resolved): luminous: mds: high debug logging with many subtrees is slow
- https://github.com/ceph/ceph/pull/27679
- 01:04 PM Backport #38876 (Resolved): nautilus: mds: high debug logging with many subtrees is slow
- https://github.com/ceph/ceph/pull/27892
- 01:04 PM Backport #38875 (Resolved): mimic: mds: high debug logging with many subtrees is slow
- https://github.com/ceph/ceph/pull/29219
- 10:26 AM Feature #38851 (Rejected): mount.ceph.fuse: support secretfile option
- Hi!
As far as I can see, secretfile doesn't work with mount.ceph.fuse; is this intentional? It would be nice to h...
- 09:28 AM Feature #38849: mds: proactively merge orphaned inodes into a remaining parent after deleting pri...
- Sage suggests that forward scrub probably touches the remote hard link locations enough to force re-integrating the s...
- 09:22 AM Feature #38849 (Duplicate): mds: proactively merge orphaned inodes into a remaining parent after ...
- https://www.mail-archive.com/ceph-users@lists.ceph.com/msg53368.html
While we move an unlinked file into the stray...
- 06:04 AM Backport #38736 (In Progress): mimic: qa: "[WRN] Health check failed: 1/3 mons down, quorum b,c (...
- -https://github.com/ceph/ceph/pull/27109-
03/21/2019
- 07:57 PM Bug #38844 (Resolved): mds: mds_cap_revoke_eviction_timeout is not used to initialize Server::cap...
- The config value is only used if the user changes it while the MDS is running.
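The bug pattern described here can be sketched as follows. This is a hypothetical Python illustration (the real code is C++ in the MDS Server class): a member is default-initialized instead of being seeded from the config at construction, so the configured value only takes effect once a runtime config-change handler fires.

```python
# Hypothetical sketch of the #38844 bug pattern: the timeout member is
# never seeded from the configured value at construction time, so the
# configured default only takes effect after a runtime config change.

DEFAULT_CONF = {"mds_cap_revoke_eviction_timeout": 300.0}

class BuggyServer:
    def __init__(self, conf):
        self.conf = conf
        self.cap_revoke_eviction_timeout = 0.0  # bug: ignores conf

    def handle_conf_change(self, changed_keys):
        # only here does the configured value ever get picked up
        if "mds_cap_revoke_eviction_timeout" in changed_keys:
            self.cap_revoke_eviction_timeout = \
                self.conf["mds_cap_revoke_eviction_timeout"]

class FixedServer(BuggyServer):
    def __init__(self, conf):
        super().__init__(conf)
        # fix: also initialize from the config at startup
        self.cap_revoke_eviction_timeout = \
            conf["mds_cap_revoke_eviction_timeout"]

print(BuggyServer(DEFAULT_CONF).cap_revoke_eviction_timeout)  # 0.0
print(FixedServer(DEFAULT_CONF).cap_revoke_eviction_timeout)  # 300.0
```

With the buggy version, the feature is effectively disabled (timeout 0) until an admin happens to touch the option at runtime, which matches the reported behavior.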
- 07:31 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> Zheng Yan wrote:
>
> > > On another note, I do have at least one minor concern about using F...
- 11:59 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> > On another note, I do have at least one minor concern about using Fb to indicate that buffere...
- 05:54 PM Feature #17835 (New): mds: enable killpoint tests for MDS-MDS subtree export
- 04:21 PM Feature #38838 (Resolved): Expose CephFS snapshot creation time to clients
- Samba requires the snapshot creation time for the purpose of tracking and identifying Previous Versions for files and...
- 12:50 PM Feature #24285: mgr: add module which displays current usage of file system (`fs top`)
- for completeness, linking John's performance probes blueprint here, which is really interesting: https://tracker.ceph...
- 12:42 PM Bug #38832: mds: fail to resolve snapshot name contains '_'
- Found the cause: the subdir snaps are not visible if they have a _ in the original snapshot name. Snaps without _ do ...
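A small sketch of why a '_' in the original snapshot name can break parsing. The MDS's "long" snapshot names look roughly like `_<name>_<inodeno>` (hedged: the exact encoding lives in the MDS source); splitting on every underscore breaks as soon as the user's snapshot name itself contains one, whereas splitting on the last underscore does not.

```python
# Hypothetical parsers for long snap names of the rough form
# "_<name>_<inodeno>". Only the second handles '_' inside <name>.

def parse_long_snap_naive(long_name):
    # buggy: splits on every '_' and expects exactly three fields,
    # so it raises ValueError when <name> contains an underscore
    _, name, ino = long_name.split("_")
    return name, int(ino)

def parse_long_snap(long_name):
    # correct: the inode number is everything after the LAST '_'
    name, _, ino = long_name[1:].rpartition("_")
    return name, int(ino)

print(parse_long_snap("_my_snap_1099511627776"))  # ('my_snap', 1099511627776)
```

The naive version works for `_snap_123` but fails on `_my_snap_123`, mirroring the observation that only snapshots with '_' in the name go missing.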
- 09:42 AM Bug #38835 (Resolved): MDSTableServer.cc: 83: FAILED assert(version == tid)
- We just hit this on a v13.2.5 cluster with 1 active MDS:...