Activity
From 12/19/2019 to 01/17/2020
01/17/2020
- 10:51 PM Tasks #4492 (New): mds: Define kill points involved in clustered migration and recovery
- 10:50 PM Feature #39129 (Fix Under Review): create mechanism to delegate ranges of inode numbers to client
- 09:49 PM Backport #42943: nautilus: mds: free heap memory may grow too large for some workloads
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/31802
merged
- 09:48 PM Backport #42631: nautilus: client: FAILED assert(cap == in->auth_cap)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/32065
merged
- 07:33 PM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- That's indeed very odd. I looked through the code but didn't find a good reason why this would happen. It is interest...
- 01:13 PM Bug #43596: mds: crash when enable msgr v2 due to lost contact
- Yes the MDS was upgraded to 14.2.6 also.
Below is the mon log from when I changed its addr. (Full file that day is at ceph...
- 06:49 PM Bug #43644 (Triaged): mds: Empty directory check is done on the importer side (at import finish) ...
- 06:48 PM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- Zheng Yan wrote:
> You are right. We can do the check at export_dir and export_frozen. If the directory is empty, abort. ...
- 12:13 PM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- You are right. We can do the check at export_dir and export_frozen. If the directory is empty, abort. But we still need to...
- 09:16 AM Bug #43644: mds: Empty directory check is done on the importer side (at import finish) during mig...
- Sidharth Anupkrishnan wrote:
> In the current MDS code, the migration of empty directories is prohibited but it is ...
- 09:13 AM Bug #43644 (Rejected): mds: Empty directory check is done on the importer side (at import finish)...
- In the current MDS code, the migration of empty directories is prohibited but it is actually exported during the migr...
- 03:59 PM Bug #43649: mount.ceph fails with ERANGE if name= option is longer than 37 characters
- It turns out that name= options can pretty much be arbitrarily long, so I reworked the code to remove the need for an...
- 03:57 PM Bug #43649 (In Progress): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- 03:13 PM Bug #43649 (Resolved): mount.ceph fails with ERANGE if name= option is longer than 37 characters
- Aaron reported on the cephfs mailing list that some mount attempts were failing with ERANGE. For example:...
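The failure mode here is a fixed-size buffer for the name= option, and the fix sized it from the input instead. A minimal Python sketch of the difference, using ctypes buffers to stand in for the C mount helper's fields (the 37-character limit is from the report; the entity name is made up):

```python
import ctypes

FIXED = 37
name = "client." + "x" * 40            # longer than 37 chars, as in the report

# Old behavior: copy into a fixed-size buffer; an oversized name cannot fit,
# which surfaced to the user as ERANGE from mount.ceph.
buf = ctypes.create_string_buffer(FIXED)
try:
    buf.value = name.encode()          # ctypes raises when the value won't fit
    fits = True
except ValueError:
    fits = False
assert not fits

# Reworked behavior: size the buffer from the option itself, so name= can be
# pretty much arbitrarily long.
buf = ctypes.create_string_buffer(name.encode())
assert buf.value.decode() == name
```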
- 10:01 AM Bug #43645 (Fix Under Review): mgr/volumes: subvolumes with snapshots can be deleted
- 09:28 AM Bug #43645 (Resolved): mgr/volumes: subvolumes with snapshots can be deleted
- ...
- 07:50 AM Feature #24880 (Fix Under Review): pybind/mgr/volumes: restore from snapshot
- clone from a snap: https://github.com/ceph/ceph/pull/32030
Most of this work will be required for restoring a subv...
- 05:46 AM Bug #42835 (Fix Under Review): qa: test_scrub_abort fails during check_task_status("idle")
01/16/2020
- 11:49 PM Bug #43640: nautilus: qa: test_async_subvolume_rm failure
- Just the lines from the teuthology log for the mgr connection:...
- 09:00 PM Bug #43640 (Need More Info): nautilus: qa: test_async_subvolume_rm failure
- ...
- 08:17 PM Bug #43638 (Duplicate): nautilus qa: tasks/cfuse_workunit_suites_ffsb.yaml failure
- 08:05 PM Bug #43638 (Duplicate): nautilus qa: tasks/cfuse_workunit_suites_ffsb.yaml failure
- ...
- 05:45 PM Bug #43637 (Triaged): nautilus: qa: Health check failed: Reduced data availability: 16 pgs inacti...
- ...
- 02:46 PM Backport #43629 (Resolved): nautilus: mgr/volumes: provision subvolumes with config metadata stor...
- https://github.com/ceph/ceph/pull/33122/
- 02:46 PM Backport #43628 (Resolved): nautilus: client: disallow changing fuse_default_permissions option a...
- https://github.com/ceph/ceph/pull/32915
- 02:46 PM Backport #43627 (Rejected): mimic: client: disallow changing fuse_default_permissions option at r...
- 02:45 PM Backport #43624 (Resolved): nautilus: mds: note features client has when rejecting client due to ...
- https://github.com/ceph/ceph/pull/32914
- 09:50 AM Bug #43601 (Fix Under Review): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- 12:25 AM Bug #43601 (Triaged): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- Looks like it's just that the MDS is responding to a getattr request on the root inode with EROFS:...
- 12:26 AM Bug #43125 (Can't reproduce): qa: ceph_volume_client not available "ModuleNotFoundError: No modul...
- 12:13 AM Documentation #43155 (Closed): CephFS Documentation Sprint 4
- 12:12 AM Bug #42637 (Fix Under Review): qa: ffsb suite causes SLOW_OPS warnings
- 12:00 AM Bug #16881 (Fix Under Review): RuntimeError: Files in flight high water is unexpectedly low (0 / 6)
01/15/2020
- 08:49 PM Bug #43599 (Fix Under Review): kclient: corrupt message failure on RHEL8 distribution kernel
- 08:25 PM Bug #43599 (In Progress): kclient: corrupt message failure on RHEL8 distribution kernel
- 08:20 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- This is just one of those places where the kernel client did not ever expect to see a struct be extended. I suspect t...
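The general pattern behind this bug: a decoder that assumes a wire struct's exact size breaks as soon as the sender extends it. A hedged Python illustration (the layout is invented, not the actual ceph wire format):

```python
import struct

# Decoding a versioned/extensible wire struct: parse the fields you know and
# ignore trailing bytes, instead of asserting the exact expected length.
OLD_FMT = "<IQ"                        # e.g. a v1 struct: u32 + u64
old_size = struct.calcsize(OLD_FMT)

msg = struct.pack("<IQH", 7, 42, 99)   # a "v2" sender appended a u16 field

# Brittle: an exact-length check fails once the struct grows.
assert len(msg) != old_size

# Tolerant: unpack the known prefix, skip what we don't understand.
a, b = struct.unpack_from(OLD_FMT, msg, 0)
assert (a, b) == (7, 42)
```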
- 06:43 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- Jeff Layton wrote:
> What kernel is this?...
- 05:50 PM Bug #43599: kclient: corrupt message failure on RHEL8 distribution kernel
- What kernel is this?
- 08:48 PM Bug #43600: qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- Unfortunately, CentOS 8 / RHEL 8 don't have this package. We'll need to filter out these distributions somehow.
Mo...
- 07:41 PM Bug #36507 (Duplicate): client: connection failure during reconnect causes client to hang
- Thanks huanwen!
- 07:39 PM Bug #42467: mds: daemon crashes while updating blacklist
- Zheng, I think you may have inadvertently fixed this in...
- 07:28 PM Bug #43216 (New): MDSMonitor: removes MDS coming out of quorum election
- 07:22 PM Bug #40608 (Duplicate): mds: assert after `delete gather` in C_Drop_Cache::recall_client_state
- Fixed by: https://tracker.ceph.com/issues/38445
- 07:16 PM Bug #42941 (Rejected): mds: stuck "waiting for osdmap 273 (which blacklists prior instance)"
- Cause was reverted.
- 05:19 AM Feature #43349: mgr/volumes: provision subvolumes with config metadata storage in cephfs
- backport note: additionally, include https://github.com/ceph/ceph/pull/32645
- 04:33 AM Feature #43349 (Pending Backport): mgr/volumes: provision subvolumes with config metadata storage...
- 04:33 AM Feature #43349 (Resolved): mgr/volumes: provision subvolumes with config metadata storage in cephfs
- 02:16 AM Bug #43362 (Pending Backport): client: disallow changing fuse_default_permissions option at runtime
- 02:15 AM Cleanup #43367 (Resolved): mds: reorg SimpleLock header
- 02:14 AM Cleanup #43386 (Resolved): mds: reorg SnapRealm header
- 02:13 AM Cleanup #43418 (Resolved): mds: reorg flock header
- 02:13 AM Cleanup #43424 (Resolved): mds: reorg inode_backtrace header
- 01:55 AM Bug #42986 (Fix Under Review): qa: Test failure: test_drop_cache_command_dead (tasks.cephfs.test_...
- 12:32 AM Bug #43513: qa: filelock_interrupt.py hang
- Zheng Yan wrote:
> Looks like flock syscall was restarted after handling signal alarm. The script does not work with...
- 12:14 AM Bug #43554 (Fix Under Review): qa: test_full racy check: AssertionError: 29 not greater than or e...
01/14/2020
- 11:52 PM Bug #43601 (Resolved): qa: ERROR: test_object_deletion (tasks.cephfs.test_damage.TestDamage)
- ...
- 11:15 PM Bug #43600 (Resolved): qa: workunits/suites/iozone.sh: line 5: iozone: command not found
- ...
- 10:37 PM Bug #43542 (Resolved): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- 10:32 PM Bug #43484 (Pending Backport): mds: note features client has when rejecting client due to feature...
- 10:10 PM Bug #43599 (Resolved): kclient: corrupt message failure on RHEL8 distribution kernel
- ...
- 08:44 PM Bug #43541 (Resolved): qa/cephfs: don't test client on latest RHEL
- 08:44 PM Bug #43539 (Resolved): qa/cephfs: don't test kclient RHEL 7
- 08:34 PM Backport #42279: nautilus: qa: logrotate should tolerate connection resets
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31082
merged
- 08:33 PM Backport #42129: nautilus: doc/ceph-fuse: -k missing in man page
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30765
merged
- 05:39 PM Bug #43598 (Resolved): mds: PurgeQueue does not handle objecter errors
- Here: https://github.com/ceph/ceph/blob/6ea89e01971462432e0bc8b128b950acec4d85fe/src/mds/PurgeQueue.cc#L555
The fi...
- 05:30 PM Bug #43596 (Need More Info): mds: crash when enable msgr v2 due to lost contact
- > It seems to be stable now after enabling v2 on all mons and restarting all mds's...
- 12:41 PM Bug #43596 (New): mds: crash when enable msgr v2 due to lost contact
- We just upgraded from mimic v13.2.7 to v14.2.6 and when we enable msgr v2 on the mon which an MDS is connected to, th...
- 04:15 PM Bug #43440 (Fix Under Review): client: chdir does not raise error if a file is passed
01/13/2020
- 09:44 PM Bug #43493 (Fix Under Review): osdc: fix null pointer caused program crash
- 04:29 PM Backport #42790: nautilus: mgr/volumes: add `fs subvolume resize infinite` command
- Jos Collin wrote:
> https://github.com/ceph/ceph/pull/31332
merged
- 04:28 PM Backport #42615: nautilus: mgr/volumes: add `fs subvolume extend/shrink` commands
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31332
merged
- 04:28 PM Backport #42142: nautilus: mds:split the dir if the op makes it oversized, because some ops maybe...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31302
merged
- 04:27 PM Backport #42424: nautilus: qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31084
merged
- 04:27 PM Backport #42422: nautilus: test_reconnect_eviction fails with "RuntimeError: MDS in reject state ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31083
merged
- 04:26 PM Backport #42158: nautilus: osdc: objecter ops output does not have useful time information
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/31081
merged
- 02:37 PM Bug #43567: qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_directory
- ...
- 12:12 PM Bug #43567 (Fix Under Review): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_t...
- 11:41 AM Bug #43567 (Resolved): qa: UnicodeDecodeError in TestGetAndPut.test_put_and_get_without_target_di...
- decode() is run on a type @str@ -...
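For background on this failure class (illustrative, not the qa code itself): in Python 3 only bytes has decode(), so Python 2-era code that calls decode() on an already-decoded str breaks:

```python
# bytes -> str is the only decode direction in Python 3.
data = b"r\xc3\xa9sum\xc3\xa9"         # raw bytes, e.g. read from a subprocess pipe
text = data.decode("utf-8")
assert text == "résumé"

# Python 2's str.decode() no longer exists in Python 3, and decoding bytes
# with the wrong codec raises UnicodeDecodeError.
assert not hasattr(text, "decode")
failed = False
try:
    data.decode("ascii")               # non-ASCII bytes under a strict codec
except UnicodeDecodeError:
    failed = True
assert failed
```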
- 12:31 PM Feature #22446 (Resolved): mds: ask idle client to trim more caps
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:28 PM Bug #40283 (Resolved): qa: add testing for lazyio
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:28 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:27 PM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:27 PM Bug #41310 (Resolved): client: lazyio synchronize does not get file size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:26 PM Bug #41835 (Resolved): mds: cache drop command does not drive cap recall
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:26 PM Bug #41837 (Resolved): client: lseek function does not return the correct value.
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:22 PM Backport #42161 (Resolved): nautilus: qa: add testing for lazyio
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30769
m...
- 12:22 PM Backport #41888 (Resolved): nautilus: client: lazyio synchronize does not get file size
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30769
m...
- 12:22 PM Backport #42147 (Resolved): nautilus: mds: mds returns -5 error when the deleted file does not exist
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30767
m...
- 12:22 PM Backport #42145 (Resolved): nautilus: client: return error when someone passes bad whence value t...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30766
m...
- 12:21 PM Backport #42121 (Resolved): nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30764
m...
- 12:21 PM Backport #42040 (Resolved): nautilus: client: _readdir_cache_cb() may use the readdir_cache alrea...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30763
m...
- 12:21 PM Backport #42035 (Resolved): nautilus: client: lseek function does not return the correct value.
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30762
m...
- 12:21 PM Backport #42339 (Resolved): nautilus: mds: move MDSDaemon conf change handling to MDSRank finisher
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m...
- 12:21 PM Backport #41899 (Resolved): nautilus: mds: cache drop command does not drive cap recall
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m...
- 12:21 PM Backport #41865 (Resolved): nautilus: mds: ask idle client to trim more caps
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30761
m...
- 11:48 AM Backport #43573 (Resolved): nautilus: cephfs-journal-tool: will crash without any extra argument
- https://github.com/ceph/ceph/pull/32913
- 11:48 AM Backport #43572 (Rejected): mimic: cephfs-journal-tool: will crash without any extra argument
- 11:48 AM Backport #43568 (Resolved): nautilus: qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- https://github.com/ceph/ceph/pull/32912
- 03:03 AM Bug #43218 (Rejected): kclient: when looking up the snap dirs sometime will hit WARN_ON
- This is not a bug and I will close it...
- 01:43 AM Feature #9477 (Closed): Handle kclient shutdown with dead network more gracefully
- This can be handled by 'umount -f'.
- 01:43 AM Feature #8368 (Resolved): kernel: Notify users of mds disconnect and allow them to react to it
- 01:30 AM Feature #8368: kernel: Notify users of mds disconnect and allow them to react to it
- resolved by https://tracker.ceph.com/issues/39967
01/12/2020
- 12:52 AM Bug #36635: mds: purge queue corruption from wrong backport
- I think we can just add an upgrade note to Octopus to not upgrade from 13.2.2.
01/11/2020
- 01:49 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- OK, I move the wait to client side. See commit "client: wait for async creating before sending request or cap message...
- 01:13 AM Bug #43543 (Triaged): mds: scrub on directory with recently created files may fail to load backtr...
- 12:25 AM Backport #43558 (In Progress): nautilus: mds: reject forward scrubs when cluster has multiple act...
- 12:19 AM Backport #43558 (Resolved): nautilus: mds: reject forward scrubs when cluster has multiple active...
- https://github.com/ceph/ceph/pull/32602
- 12:19 AM Backport #43559 (Rejected): mimic: mds: reject forward scrubs when cluster has multiple active MD...
- 12:18 AM Bug #43483 (Pending Backport): mds: reject forward scrubs when cluster has multiple active MDS (m...
- 12:16 AM Bug #43249 (Resolved): cephfs-shell: exit failure when non-interactive command fails
01/10/2020
- 10:24 PM Bug #43251 (Resolved): mds: track client provided metric flags in session
- 10:22 PM Cleanup #43366 (Resolved): mds: reorg SessionMap header
- 10:15 PM Bug #43554 (Resolved): qa: test_full racy check: AssertionError: 29 not greater than or equal to 30
- ...
- 09:24 PM Backport #43506 (In Progress): nautilus: MDSMonitor: warn if a new file system is being created w...
- 08:19 PM Backport #42161: nautilus: qa: add testing for lazyio
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30769
merged
- 08:19 PM Backport #41888: nautilus: client: lazyio synchronize does not get file size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30769
merged
- 08:18 PM Backport #42147: nautilus: mds: mds returns -5 error when the deleted file does not exist
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30767
merged
- 08:18 PM Backport #42145: nautilus: client: return error when someone passes bad whence value to llseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30766
merged
- 08:17 PM Backport #42121: nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30764
merged
- 08:17 PM Backport #42040: nautilus: client: _readdir_cache_cb() may use the readdir_cache already clear
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30763
merged
- 08:16 PM Backport #42035: nautilus: client: lseek function does not return the correct value.
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30762
merged
- 08:16 PM Backport #42339: nautilus: mds: move MDSDaemon conf change handling to MDSRank finisher
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged
- 08:15 PM Backport #41899: nautilus: mds: cache drop command does not drive cap recall
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged
- 08:15 PM Backport #41865: nautilus: mds: ask idle client to trim more caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30761
merged
- 07:42 PM Bug #43540 (Duplicate): qa: test_export_pin (tasks.cephfs.test_exports.TestExports) failure
- Real error is here:...
- 12:57 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I still don't understand what value this flag adds. Why not just always have requests involving an inode wait on the ...
- 03:09 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> Great! I'll still plan to add in a sanity check for this in the client too.
Patrick is right...
- 03:01 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Jeff Layton wrote:
> Zheng Yan wrote:
> > mainly for wait_for_create_inode() function in MDS. Also make mds print e...
- 11:02 AM Bug #43440: client: chdir does not raise error if a file is passed
- Not a cephfs shell bug. The error should be raised by ceph_chdir().
- 07:44 AM Bug #43543: mds: scrub on directory with recently created files may fail to load backtraces and r...
- This issue has existed since scrub was first implemented. Should be easy to fix, just ignore checking backtrace if dirty_pa...
01/09/2020
- 11:14 PM Bug #43543: mds: scrub on directory with recently created files may fail to load backtraces and r...
- If you flush the journal:...
- 11:08 PM Bug #43543 (Resolved): mds: scrub on directory with recently created files may fail to load backt...
- On a vstart cluster, copy a directory tree into CephFS and do a recursive scrub concurrently:...
- 08:25 PM Bug #43514 (Pending Backport): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- 07:57 PM Bug #43542 (Fix Under Review): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- 07:48 PM Bug #43542 (Resolved): mds/FSMap.cc: 1063: FAILED ceph_assert(count)
- ...
- 07:54 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Great! I'll still plan to add in a sanity check for this in the client too.
- 07:25 PM Feature #24461 (In Progress): cephfs: improve file create performance buffering file unlink/creat...
- Jeff Layton wrote:
> There's a potential problem I spotted today with copying the layouts from the first synchronous...
- 07:21 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- There's a potential problem I spotted today with copying the layouts from the first synchronous create.
Suppose we...
- 07:10 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> mainly for wait_for_create_inode() function in MDS. Also make mds print error if it failed to han...
- 02:05 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- mainly for wait_for_create_inode() function in MDS. Also make mds print error if it failed to handle async request.
- 11:42 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> please add a flag that tells if a request is async.
> https://github.com/ukernel/ceph/commit/54f...
- 07:13 PM Bug #43515 (Resolved): qa: SyntaxError: invalid token
- 07:12 PM Bug #43487 (Resolved): qa: test_acls does not detect rhel8
- 06:22 PM Feature #118 (Rejected): kclient: clean pages when throwing out dirty metadata on session teardown
- Excellent. In that case, let's go ahead and close this out.
- 06:43 AM Feature #118: kclient: clean pages when throwing out dirty metadata on session teardown
- Case1:
In the case when unmounting, the vfs will do this for us.
Case2:
In the case when the session is reconne...
- 06:09 PM Bug #43541 (Fix Under Review): qa/cephfs: don't test client on latest RHEL
- 06:08 PM Bug #43541 (Resolved): qa/cephfs: don't test client on latest RHEL
- 06:04 PM Bug #43540 (Duplicate): qa: test_export_pin (tasks.cephfs.test_exports.TestExports) failure
- ...
- 06:02 PM Bug #43539 (Fix Under Review): qa/cephfs: don't test kclient RHEL 7
- 05:42 PM Bug #43539 (Resolved): qa/cephfs: don't test kclient RHEL 7
- Just fix the symlink qa/cephfs/mount/kclient/overrides/distro/rhel/rhel_7.yaml.
- 11:34 AM Feature #42530 (Fix Under Review): cephfs-shell: add setxattr and getxattr
- 06:48 AM Feature #43435 (Fix Under Review): kclient:send client provided metric flags in client metadata
- Under review in V2's "[Patch v2 8/8] ceph: send client provided metric flags in client metadata"
https://patchwork...
- 06:45 AM Feature #43423 (Fix Under Review): mds: collect and show the dentry lease metric
- 06:44 AM Bug #37617: CephFS did not recover re-plugging network cable
- I also encounter the same problem. When I use ls or other operations on the mountpoint, it will fail. Even if the net...
- 05:50 AM Feature #4386 (Resolved): kclient: Mount error message when no MDS present
- 05:49 AM Feature #4386: kclient: Mount error message when no MDS present
- Fixed in:
https://github.com/ceph/ceph/pull/32164
https://patchwork.kernel.org/patch/11283665
01/08/2020
- 03:27 PM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- 02:38 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- please add a flag that tells if a request is async.
https://github.com/ukernel/ceph/commit/54f6bbdc85505ddea21583e9c...
- 01:54 PM Bug #43513: qa: filelock_interrupt.py hang
- Looks like flock syscall was restarted after handling signal alarm. The script does not work with python3, but work w...
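The python3 behavior described here is PEP 475: since Python 3.5, syscalls interrupted by a signal are retried automatically unless the handler raises, so a bare SIGALRM no longer interrupts a blocking flock(). A small sketch of the workaround (raise from the handler); the lock files are throwaway, and the two open() calls create separate open file descriptions so the second flock blocks:

```python
import fcntl
import signal
import tempfile

def on_alarm(signum, frame):
    # Raising here is what actually breaks out of the retried syscall (PEP 475).
    raise TimeoutError("flock timed out")

signal.signal(signal.SIGALRM, on_alarm)

tmp = tempfile.NamedTemporaryFile()
a = open(tmp.name)
b = open(tmp.name)                     # a second open file description
fcntl.flock(a, fcntl.LOCK_EX)          # hold an exclusive lock on the first

signal.alarm(1)
timed_out = False
try:
    fcntl.flock(b, fcntl.LOCK_EX)      # blocks; would retry forever on bare EINTR
except TimeoutError:
    timed_out = True
finally:
    signal.alarm(0)
assert timed_out
```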
- 08:58 AM Bug #43522 (Fix Under Review): qa: update xfstests_dev to install python2 instead of python on ub...
- 08:47 AM Bug #43522 (Resolved): qa: update xfstests_dev to install python2 instead of python on ubuntu 19
- 08:42 AM Bug #43393 (In Progress): qa: add support/qa for cephfs-shell on CentOS 9 / RHEL9
01/07/2020
- 10:11 PM Bug #42238 (Resolved): cephfs-shell: setxattr() is passed extra length argument
- 10:07 PM Bug #43517 (Resolved): qa: random subvolumegroup collision
- ...
- 09:59 PM Bug #43336 (Resolved): qa: test_unmount_for_evicted_client hangs
- 09:58 PM Cleanup #42563 (Resolved): mds: reorg MDSTableServer header
- 09:57 PM Cleanup #42690 (Resolved): mds: reorg Mutation header
- 09:56 PM Bug #43438 (Pending Backport): cephfs-journal-tool: will crash without any extra argument
- 09:21 PM Bug #43516 (Fix Under Review): qa: verify sub-suite does not define os_version
- 09:14 PM Bug #43516 (Resolved): qa: verify sub-suite does not define os_version
- ...
- 09:12 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Yes, setting a zero length i_xattrs buffer on the new inode seems to have corrected the problem. I believe what was h...
- 05:24 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- At this point, I'm 90% sure the problem is in xattrs. Basically, after creating the file async we're leaving the i_xa...
- 04:00 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I threw in a hack to do this:...
- 02:44 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- also see https://github.com/ceph/ceph/pull/30969
- 02:42 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- For current async create code. ceph_mds_reply_inode::max_size is 0. client can't write to the new file until it gets ...
- 02:00 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- dynamic debugging from the client, with async dirops disabled. This is during the write calls:...
- 01:32 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I can use strace to get timing statistics on individual calls though. With async dirops disabled:...
- 01:18 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Now that I look closer, I don't think strace -c is measuring what we need. It's looking at CPU time in each syscall. ...
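The same point can be made without strace: wrapping each call in a monotonic clock gives per-call wall-clock latency, which is what an untar-style workload actually waits on (a sketch; the file count and paths are arbitrary):

```python
import os
import tempfile
import time

d = tempfile.mkdtemp()
samples = []
for i in range(100):
    t0 = time.perf_counter()
    fd = os.open(os.path.join(d, f"f{i}"), os.O_CREAT | os.O_WRONLY, 0o644)
    os.close(fd)
    samples.append(time.perf_counter() - t0)   # wall time, not CPU time

avg_us = sum(samples) / len(samples) * 1e6
print(f"avg create+close latency: {avg_us:.1f} us over {len(samples)} files")
```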
- 01:16 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Thanks for checking. I'll have to play around with this more myself.
- 09:07 PM Bug #43515 (Fix Under Review): qa: SyntaxError: invalid token
- 09:06 PM Bug #43515 (Resolved): qa: SyntaxError: invalid token
- ...
- 08:53 PM Bug #43514 (Fix Under Review): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- 08:48 PM Bug #43514 (Resolved): qa: test setUp may cause spurious MDS_INSUFFICIENT_STANDBY
- ...
- 08:33 PM Bug #43513 (Resolved): qa: filelock_interrupt.py hang
- ...
- 02:45 PM Backport #43509 (Resolved): nautilus: 'ceph -s' does not show standbys if there are no filesystems
- https://github.com/ceph/ceph/pull/32912
- 02:44 PM Backport #43506 (Resolved): nautilus: MDSMonitor: warn if a new file system is being created with...
- https://github.com/ceph/ceph/pull/32600
- 02:44 PM Backport #43505 (Rejected): mimic: MDSMonitor: warn if a new file system is being created with an...
- 02:42 PM Backport #43503 (Resolved): nautilus: mount.ceph: give a hint message when no mds is up or cluste...
- https://github.com/ceph/ceph/pull/32910
- 02:42 PM Backport #43502 (Resolved): mimic: mount.ceph: give a hint message when no mds is up or cluster i...
- https://github.com/ceph/ceph/pull/32911
- 12:43 PM Documentation #43154 (In Progress): doc: migrate best practice recommendations to relevant docs
- 12:34 PM Bug #43496: qa: xfstest_dev.py crashes while calling teuthology.misc.get_system_type
- Oh, BTW, the crash happened locally, not on teuthology.
- 12:33 PM Bug #43496 (Fix Under Review): qa: xfstest_dev.py crashes while calling teuthology.misc.get_syste...
- 12:30 PM Bug #43496 (Resolved): qa: xfstest_dev.py crashes while calling teuthology.misc.get_system_type
- teuthology.misc.get_system_type calls teuthology.misc.sh. Fix: add a wrapper method of teuthology.misc.sh to vstart_r...
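A hedged sketch of such a wrapper (the name sh and its contract are assumed from teuthology.misc.sh, i.e. run a shell command locally and return its output):

```python
import subprocess

def sh(cmd):
    """Run a shell command locally and return its stdout, raising on failure.

    Local stand-in for teuthology.misc.sh for vstart_runner-style runs.
    """
    return subprocess.check_output(cmd, shell=True, text=True)

assert sh("echo hello").strip() == "hello"
```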
- 12:33 PM Bug #43486 (Fix Under Review): qa: test_acls: cannot find packages on centos 8
- 08:00 AM Bug #43486: qa: test_acls: cannot find packages on centos 8
- I am checking if xfstests-dev runs fine without btrfs-prog-devel. The reason for its absence on CentOS 8 is (AFAIS) ...
- 07:14 AM Bug #43486 (In Progress): qa: test_acls: cannot find packages on centos 8
- 11:07 AM Bug #43483 (In Progress): mds: reject forward scrubs when cluster has multiple active MDS (more t...
- 08:42 AM Backport #43338 (New): nautilus: qa/tasks: add remaining tests for fs volume
- 03:07 AM Bug #43493 (Can't reproduce): osdc: fix null pointer caused program crash
- PurgeRange.oncommit NULL error
01/06/2020
- 09:40 PM Bug #43329 (Resolved): cephfs-shell: AttributeError when undefined an conf opt is attemptted to read
- 08:42 PM Documentation #37746 (Resolved): doc: how to mount a subdir with ceph-fuse/kclient
- 08:35 PM Bug #43460 (Resolved): qa: loff_t type missing for fsync-tester
- 08:31 PM Fix #42450 (Pending Backport): MDSMonitor: warn if a new file system is being created with an EC ...
- 08:28 PM Bug #43326 (Resolved): mds: batch getattr/lookup bug
- 08:27 PM Bug #42088 (Pending Backport): 'ceph -s' does not show standbys if there are no filesystems
- 08:20 PM Feature #43294 (Pending Backport): mount.ceph: give a hint message when no mds is up or cluster i...
- 07:48 PM Bug #43487 (Fix Under Review): qa: test_acls does not detect rhel8
- 07:46 PM Bug #43487 (Resolved): qa: test_acls does not detect rhel8
- ...
- 07:44 PM Bug #43486 (Resolved): qa: test_acls: cannot find packages on centos 8
- ...
- 06:24 PM Bug #43484 (Fix Under Review): mds: note features client has when rejecting client due to feature...
- 06:15 PM Bug #43484 (Resolved): mds: note features client has when rejecting client due to feature incompat
- Currently we get a message like:...
- 03:17 PM Bug #43407: mds crash after update to v14.2.5
- The first ESubtreeMap in the journal was wrong. It should also contains dir 0x1...
- 02:52 PM Bug #43407 (Triaged): mds crash after update to v14.2.5
- 03:07 PM Bug #43483 (Resolved): mds: reject forward scrubs when cluster has multiple active MDS (more than...
- Forward scrub may cause the MDS to hit various assertions if there is more than one rank. Have the MDS check if there...
- 02:40 PM Bug #43440 (Triaged): client: chdir does not raise error if a file is passed
01/03/2020
- 11:49 PM Bug #43460 (Fix Under Review): qa: loff_t type missing for fsync-tester
- 11:44 PM Bug #43460 (Resolved): qa: loff_t type missing for fsync-tester
- ...
- 11:35 PM Bug #43459 (Fix Under Review): qa: FATAL ERROR: libtool does not seem to be installed.
- 11:28 PM Bug #43459 (In Progress): qa: FATAL ERROR: libtool does not seem to be installed.
- 11:24 PM Bug #43459 (Resolved): qa: FATAL ERROR: libtool does not seem to be installed.
- ...
- 07:34 PM Bug #43407: mds crash after update to v14.2.5
- Status update:
I have tried
cephfs-journal-tool event recover_dentries summary
followed with
cephfs-journal-tool...
- 03:29 PM Bug #43407: mds crash after update to v14.2.5
- > 2. recover journal events:
> cephfs-journal-tool journal export backup.bin
Do you mean
_cephfs-journal-tool ev...
- 02:23 PM Bug #43407: mds crash after update to v14.2.5
- mds shows there are some ENoOp log events. This means some region of the mds log was erased by cephfs-journal-tool. Why ...
- 11:18 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- I built a tree based on 1e2fe722c41d4cc34094afb157b3eb06b4a50972, which is the commit just before the merge of Zheng'...
- 03:14 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Patrick Donnelly wrote:
> Zheng Yan wrote:
> > Patrick Donnelly wrote:
> > > The baseline performance is surprisin...
- 01:51 AM Feature #43423: mds: collect and show the dentry lease metric
- Patches are ready and waiting for the PR they depend on [1] to be merged.
[1] https://github.com/ceph/ceph/pull/26004
01/02/2020
- 09:19 PM Bug #43407: mds crash after update to v14.2.5
- Yes I had 3 filesystems (namespaces), one for every mds daemon, and the setup was working up to the update to v14.2.5...
- 07:00 PM Bug #43407: mds crash after update to v14.2.5
- Were you using multiple MDS before?
Can you increase MDS debugging:
ceph config set mds debug_mds 10
and res...
- 08:10 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Looking at my home-grown testcase, the results look pretty good, but untarring a random kernel tarball is consider...
01/01/2020
- 09:38 AM Documentation #43154: doc: migrate best practice recommendations to relevant docs
- https://docs.ceph.com/docs/master/cephfs/fuse/ - This is the location of the FUSE docs.
12/31/2019
- 12:46 PM Bug #43440 (Resolved): client: chdir does not raise error if a file is passed
- ...
- 06:14 AM Feature #41566 (In Progress): mds: support rolling upgrades
- 04:13 AM Feature #43435: kclient:send client provided metric flags in client metadata
- Patch is ready and the test output is:...
- 04:11 AM Bug #43438 (Fix Under Review): cephfs-journal-tool: will crash without any extra argument
- 04:10 AM Bug #43438: cephfs-journal-tool: will crash without any extra argument
- The fixing PR: https://github.com/ceph/ceph/pull/32452
- 04:01 AM Bug #43438 (In Progress): cephfs-journal-tool: will crash without any extra argument
- 04:00 AM Bug #43438 (Resolved): cephfs-journal-tool: will crash without any extra argument
- ...
12/30/2019
- 04:16 PM Bug #41565 (Fix Under Review): mds: detect MDS<->MDS messages that are not versioned
- 05:42 AM Feature #43435 (In Progress): kclient:send client provided metric flags in client metadata
- 05:42 AM Feature #43435 (Resolved): kclient:send client provided metric flags in client metadata
- This will send the kclient-provided metric flags to the MDS server.
12/27/2019
12/26/2019
- 05:26 PM Cleanup #43425 (Fix Under Review): mds: reorg snap header
- 03:01 PM Cleanup #43425 (Resolved): mds: reorg snap header
- 03:03 PM Cleanup #43426 (Resolved): mds: reorg mdstypes header
- 03:02 PM Cleanup #43424 (Fix Under Review): mds: reorg inode_backtrace header
- 01:58 PM Cleanup #43424 (Resolved): mds: reorg inode_backtrace header
- 06:15 AM Feature #43423: mds: collect and show the dentry lease metric
- https://tracker.ceph.com/issues/24285
- 06:12 AM Feature #43423: mds: collect and show the dentry lease metric
- Locally the patch is ready, but it depends on https://github.com/ceph/ceph/pull/26004, which hasn't been merged yet.
...
- 06:10 AM Feature #43423 (Resolved): mds: collect and show the dentry lease metric
- Kclient will collect the dentry lease metric and send it to the MDS; currently this isn't shown in the perf stats.
12/25/2019
- 11:20 AM Bug #43410 (Won't Fix): mds:When the directory level is above 3000, the following assertions will...
12/24/2019
- 09:02 AM Cleanup #43418 (Fix Under Review): mds: reorg flock header
- 07:38 AM Cleanup #43418 (Resolved): mds: reorg flock header
- 06:19 AM Bug #43410: mds:When the directory level is above 3000, the following assertions will appear
- Zheng Yan wrote:
> mds call FOO::adjust_nested_auth_pins functions for each directory level, which caused stack over...
- 06:14 AM Bug #43410: mds:When the directory level is above 3000, the following assertions will appear
- Zheng Yan wrote:
> full calltrace ?
I use gdb for mounting; this information is complete
- 02:31 AM Bug #43410: mds:When the directory level is above 3000, the following assertions will appear
- The MDS calls FOO::adjust_nested_auth_pins for each directory level, which caused a stack overflow. Mimic and late...
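To illustrate the failure mode described above (one stack frame per directory level), here is a hypothetical Python sketch. This is not Ceph code; the function names only echo the MDS one, and the depth mirrors the ~3000-level reproducer:

```python
class Dir:
    """Toy stand-in for an MDS directory object with a parent pointer."""
    def __init__(self, parent=None):
        self.parent = parent
        self.nested_auth_pins = 0

def adjust_nested_recursive(d, delta):
    # One call frame per ancestor: blows the stack on very deep trees.
    d.nested_auth_pins += delta
    if d.parent is not None:
        adjust_nested_recursive(d.parent, delta)

def adjust_nested_iterative(d, delta):
    # Constant stack usage no matter how deep the tree is.
    while d is not None:
        d.nested_auth_pins += delta
        d = d.parent

# Build a directory chain ~3000 levels deep, like the reproducer script.
root = Dir()
leaf = root
for _ in range(3000):
    leaf = Dir(parent=leaf)

adjust_nested_iterative(leaf, 1)   # fine at any depth

try:
    adjust_nested_recursive(leaf, 1)
except RecursionError:
    pass  # 3000+ frames exceed Python's default recursion limit
```

The MDS hits the same wall in C++ (a real stack overflow rather than a catchable exception), which is why later releases moved this propagation to a loop.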
- 01:53 AM Bug #43410: mds:When the directory level is above 3000, the following assertions will appear
- full calltrace ?
12/23/2019
- 11:14 AM Cleanup #43408 (Fix Under Review): mds: reorg StrayManager header
- 10:51 AM Cleanup #43408 (Resolved): mds: reorg StrayManager header
- 11:14 AM Bug #43410: mds:When the directory level is above 3000, the following assertions will appear
- I don't see any exceptions in the log print
@Patrick Donnelly
@Zheng Yan
- 11:10 AM Bug #43410 (Won't Fix): mds:When the directory level is above 3000, the following assertions will...
- When I use the script to continuously create directories and the directory level goes above 3000, the following asse...
- 11:10 AM Bug #43409 (Closed): mds:When the directory level is above 3000, the following assertions will ap...
- 11:09 AM Bug #43409 (Closed): mds:When the directory level is above 3000, the following assertions will ap...
- When I use the script to continuously create directories and the directory level goes above 3000, the following asse...
12/22/2019
- 11:36 PM Bug #43407 (Triaged): mds crash after update to v14.2.5
- All MDS daemons crashed and were not able to restart after the update from v14.2.4 to v14.2.5
*systemctl status:*...
12/21/2019
- 03:57 AM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Zheng Yan wrote:
> Patrick Donnelly wrote:
> > The baseline performance is surprising I think. That's with the same...
12/20/2019
12/19/2019
- 08:31 PM Bug #43393 (Resolved): qa: add support/qa for cephfs-shell on CentOS 9 / RHEL9
- 08:00 PM Documentation #41688 (Resolved): doc: client config reference improvements
- 07:57 PM Bug #43250 (Resolved): qa/test_cephfs_shell: TestDu.test_du_works_for_hardlinks fails
- 07:46 PM Bug #43392 (Resolved): MDSMonitor: support automatic failover to standbys with stronger affinity
- Initial work by Sage: https://github.com/ceph/ceph/pull/32015
The next step is to failover to a standby with stron...
- 02:38 PM Bug #43329 (Fix Under Review): cephfs-shell: AttributeError when undefined an conf opt is attempt...
- 02:30 PM Cleanup #43387 (Fix Under Review): mds: reorg SnapServer header
- 02:23 PM Cleanup #43387 (Resolved): mds: reorg SnapServer header
- 02:16 PM Cleanup #43386 (Fix Under Review): mds: reorg SnapRealm header
- 02:08 PM Cleanup #43386 (Resolved): mds: reorg SnapRealm header
- 01:48 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- You're right. I just pushed a patch to be squashed in on top of the existing series. I'm testing it now with the clie...
- 01:40 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> That's not a bad idea. We'd have to keep track of a separate set of newly-added ino_t's to send...
- 12:09 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- That's not a bad idea. We'd have to keep track of a separate set of newly-added ino_t's to send in the reply, but tha...
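The delegation idea under discussion can be modeled with a small hypothetical sketch. The class name, range size, and starting number below are illustrative only, not the actual MDS/kernel protocol:

```python
class InoRangeDelegation:
    """Toy model: the MDS hands a client a half-open range
    [start, end) of inode numbers to allocate from locally,
    so a create does not need a round trip to obtain an ino."""
    def __init__(self, start, end):
        self.next = start
        self.end = end

    def remaining(self):
        return self.end - self.next

    def alloc(self):
        # When the range runs dry, the client must request another
        # delegation from the MDS before creating more files.
        if self.next >= self.end:
            raise RuntimeError("delegated range exhausted")
        ino = self.next
        self.next += 1
        return ino

# Client is delegated 100 inode numbers starting at 0x10000000.
deleg = InoRangeDelegation(0x10000000, 0x10000000 + 100)
first = deleg.alloc()
```

As the comments above note, the MDS would additionally need to track which delegated ino_t's were actually used, so newly-added ones can be reported back in replies.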
- 09:01 AM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> I have patches for this for the MDS, and the kernel, but I keep hitting a race where the client...