Activity
From 06/19/2020 to 07/18/2020
07/18/2020
- 10:16 PM Backport #46591: octopus: ceph-fuse: ceph-fuse process is terminated by the logrotate task and wh...
- Follow-up fix to add: https://github.com/ceph/ceph/pull/36194
- 08:20 PM Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
- Ramana Raja wrote:
> Wouldn't backporting this to nautilus, and octopus be useful?
I think only if a function is ...
- 04:51 PM Fix #44171 (Need More Info): pybind/cephfs: audit for unimplemented bindings for libcephfs
- 04:51 PM Fix #44171: pybind/cephfs: audit for unimplemented bindings for libcephfs
- Wouldn't backporting this to nautilus, and octopus be useful?
- 07:39 PM Backport #46480 (In Progress): nautilus: mds: send scrub status to ceph-mgr only when scrub is ru...
- 05:33 PM Backport #46592 (In Progress): nautilus: ceph-fuse: ceph-fuse process is terminated by the lograt...
- 05:33 PM Backport #46464 (In Progress): nautilus: mgr/volumes: fs subvolume clones stuck in progress when ...
- 05:32 PM Backport #46523 (In Progress): nautilus: mds: fix hang issue when accessing a file under a lost p...
- 05:27 PM Backport #46521 (In Progress): nautilus: mds: deleting a large number of files in a directory cau...
- 05:26 PM Backport #46517 (In Progress): nautilus: client: directory inode can not call release_callback
- 04:35 PM Bug #46612 (In Progress): mds: error: return-statement with no value, in function returning 'bool...
- 04:33 PM Bug #46612 (Rejected): mds: error: return-statement with no value, in function returning 'bool' [...
- beb12fa25315153e1a06a0104883de89776438a6 added an ALLOW_MESSAGES_FROM macro to src/mds/MDSRank.cc - this macro contai...
- 04:07 PM Backport #46611 (Need More Info): nautilus: cephfs.pyx: passing empty string is fine but passing ...
- cleanup that does not apply cleanly
- 04:06 PM Backport #46611 (Resolved): nautilus: cephfs.pyx: passing empty string is fine but passing None i...
- https://github.com/ceph/ceph/pull/37725
- 04:07 PM Backport #46610 (Need More Info): octopus: cephfs.pyx: passing empty string is fine but passing N...
- cleanup that does not apply cleanly
- 04:06 PM Backport #46610 (Resolved): octopus: cephfs.pyx: passing empty string is fine but passing None is...
- https://github.com/ceph/ceph/pull/37724
- 04:06 PM Bug #44415 (Pending Backport): cephfs.pyx: passing empty string is fine but passing None is not t...
- 04:04 PM Bug #44415 (Resolved): cephfs.pyx: passing empty string is fine but passing None is not to arg co...
- 04:04 PM Bug #44415: cephfs.pyx: passing empty string is fine but passing None is not to arg conffile in L...
- It doesn't apply cleanly to octopus, either.
- 04:01 PM Bug #44415: cephfs.pyx: passing empty string is fine but passing None is not to arg conffile in L...
- Patrick, can you confirm that this cleanup really needs to be backported to nautilus? This change does not apply clea...
- 03:55 PM Backport #46474 (In Progress): nautilus: mds: make threshold for MDS_TRIM warning configurable
- 03:51 PM Backport #46470 (In Progress): nautilus: client: release the client_lock before copying data in read
- 03:43 PM Backport #46409 (In Progress): nautilus: client: supplying ceph_fsetxattr with no value unsets xattr
- 03:40 PM Backport #46349 (Need More Info): nautilus: qa/tasks: make sh() in vstart_runner.py identical wit...
- 03:39 PM Backport #46349 (Rejected): nautilus: qa/tasks: make sh() in vstart_runner.py identical with teut...
- In nautilus, the "sh" function is the one from teuthology/misc:...
- 03:34 PM Backport #46310 (In Progress): nautilus: qa/tasks/cephfs/test_snapshots.py: Command failed with s...
- 03:31 PM Backport #46200 (In Progress): nautilus: qa: "[WRN] evicting unresponsive client smithi131:z (631...
- 03:30 PM Backport #46189 (In Progress): nautilus: mds: EMetablob replay too long will cause mds restart
- 03:27 PM Backport #46187 (In Progress): nautilus: client: fix snap directory atime
- 03:25 PM Backport #46151 (In Progress): nautilus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_che...
- 03:00 PM Backport #46466: nautilus: pybind/mgr/volumes: get_pool_names may indicate volume does not exist ...
- The "target version" value is set only when the backport PR is merged.
- 01:22 PM Backport #46466 (In Progress): nautilus: pybind/mgr/volumes: get_pool_names may indicate volume d...
- 01:25 PM Backport #46478 (In Progress): nautilus: pybind/mgr/volumes: volume deletion should check mon_all...
- 12:50 PM Backport #46235 (In Progress): nautilus: pybind/mgr/volumes: volume deletion not always removes t...
- 12:02 PM Backport #46527 (In Progress): nautilus: mgr/volumes: `protect` and `clone` operation in a single...
- 09:04 AM Feature #40401 (New): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and subvolume ...
- Previous attempt at fixing this https://github.com/ceph/ceph/pull/28973
- 03:31 AM Bug #46579 (Pending Backport): mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
- 03:29 AM Bug #46543 (Pending Backport): mds forwarding request 'no_available_op_found'
- 03:28 AM Bug #46533 (Pending Backport): mds: null pointer dereference in MDCache::finish_rollback
- 03:27 AM Bug #46302 (Pending Backport): mds: optimize ephemeral rand pin
- 03:27 AM Bug #43517 (Pending Backport): qa: random subvolumegroup collision
- 03:16 AM Bug #46572 (Pending Backport): mgr/nfs: help for "nfs export create" and "nfs export delete" says...
- 03:12 AM Bug #46609 (Triaged): mds: CDir.cc: 956: FAILED ceph_assert(auth_pins == 0)
- ...
- 03:09 AM Bug #46608 (Duplicate): qa: thrashosds: log [ERR] : 4.0 has 3 objects unfound and apparently lost
- ...
07/17/2020
- 08:58 PM Bug #46607 (Closed): nautilus: pybind/mgr/volumes: TypeError: bad operand type for unary -: 'str'
- ...
- 05:55 PM Backport #46002: nautilus: pybind/mgr/volumes: add command to return metadata regarding a subvolu...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35672
m...
- 01:12 PM Backport #46002 (Resolved): nautilus: pybind/mgr/volumes: add command to return metadata regardin...
- 05:48 PM Support #46580 (Rejected): About the ceph kernel client's relationship with physical memory
- Please direct your question to the ceph-users mailing list: https://lists.ceph.io/postorius/lists/ceph-users.ceph.io/
- 07:49 AM Support #46580 (Rejected): About the ceph kernel client's relationship with physical memory
The CephFS cluster is deployed in VMware virtual machines; the virtual machine memory configuration is 4 GB.
Test with F...
- 04:37 PM Bug #36349: mds: src/mds/MDCache.cc: 1637: FAILED ceph_assert(follows >= realm->get_newest_seq())
- I'm seeing this crash at the moment on a Nautilus 14.2.10 cluster which had 6 MDS active.
Running the cephfs data ...
- 03:39 PM Backport #42327 (Rejected): nautilus: cephfs-shell: not compatible with cmd2 versions after 0.9.13
- nautilus is no longer accepting cephfs-shell backports
first attempted backport - https://github.com/ceph/ceph/pul...
- 01:12 PM Feature #45237 (Resolved): pybind/mgr/volumes: add command to return metadata regarding a subvolu...
- 11:50 AM Bug #46597 (In Progress): qa: Fs cleanup fails with a traceback
- 11:50 AM Bug #46597 (Resolved): qa: Fs cleanup fails with a traceback
- When two consecutive tests are run on single node local teuthology setup, the fs cleanup fails with following traceba...
- 11:15 AM Backport #46592 (Resolved): nautilus: ceph-fuse: ceph-fuse process is terminated by the logrotate...
- https://github.com/ceph/ceph/pull/36181
- 11:15 AM Backport #46591 (Resolved): octopus: ceph-fuse: ceph-fuse process is terminated by the logrotate ...
- https://github.com/ceph/ceph/pull/36195
- 11:14 AM Backport #46585 (Resolved): octopus: mgr/nfs: Update about nfs ganesha cluster deployment using c...
- https://github.com/ceph/ceph/pull/36224
- 11:12 AM Bug #46583 (Resolved): mds slave request 'no_available_op_found'
- ...
- 11:10 AM Backport #46469 (Resolved): octopus: client: release the client_lock before copying data in read
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36046
m...
- 11:10 AM Backport #46410 (Resolved): octopus: client: supplying ceph_fsetxattr with no value unsets xattr
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36045
m...
- 11:10 AM Backport #46348 (Resolved): octopus: qa/tasks: make sh() in vstart_runner.py identical with teuth...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36044
m...
- 11:10 AM Backport #46311 (Resolved): octopus: qa/tasks/cephfs/test_snapshots.py: Command failed with statu...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36043
m...
- 11:10 AM Backport #46199 (Resolved): octopus: qa: "[WRN] evicting unresponsive client smithi131:z (6314), ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36042
m...
- 11:09 AM Backport #46188 (Resolved): octopus: mds: EMetablob replay too long will cause mds restart
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36040
m...
- 11:09 AM Backport #46186 (Resolved): octopus: client: fix snap directory atime
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36039
m...
- 11:09 AM Backport #46152 (Resolved): octopus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks....
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36038
m...
- 11:09 AM Backport #46190 (Resolved): octopus: mds: cap revoking requests didn't succeed when the client do...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35842
m...
- 11:04 AM Backport #46498: octopus: mgr/nfs: Update nfs-ganesha package requirements
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/36063
m...
- 06:07 AM Bug #46579 (Fix Under Review): mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
- 06:06 AM Bug #46579 (Resolved): mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
As setting them to a small value affects performance and they are not related
to metadata caching. https://review....
- 04:45 AM Bug #46426 (In Progress): mds: MMDSPing is not an MMDSOp type
07/16/2020
- 04:35 PM Backport #46469: octopus: client: release the client_lock before copying data in read
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36046
merged
- 04:35 PM Backport #46410: octopus: client: supplying ceph_fsetxattr with no value unsets xattr
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36045
merged
- 04:34 PM Backport #46348: octopus: qa/tasks: make sh() in vstart_runner.py identical with teuthology.orche...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36044
merged
- 04:34 PM Backport #46311: octopus: qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd',...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36043
merged
- 04:31 PM Backport #46188: octopus: mds: EMetablob replay too long will cause mds restart
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36040
merged
- 04:31 PM Backport #46186: octopus: client: fix snap directory atime
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36039
merged
- 04:30 PM Backport #46152: octopus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestScrubCo...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/36038
merged
- 04:29 PM Backport #46190: octopus: mds: cap revoking requests didn't succeed when the client doing reconne...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35842
merged
- 02:25 PM Documentation #46571 (Pending Backport): mgr/nfs: Update about nfs ganesha cluster deployment usi...
- 11:41 AM Documentation #46571 (In Progress): mgr/nfs: Update about nfs ganesha cluster deployment using ce...
- 11:16 AM Documentation #46571 (Resolved): mgr/nfs: Update about nfs ganesha cluster deployment using cepha...
- 01:46 PM Bug #46572 (Fix Under Review): mgr/nfs: help for "nfs export create" and "nfs export delete" says...
- 12:02 PM Bug #46572 (Resolved): mgr/nfs: help for "nfs export create" and "nfs export delete" says "<attac...
- ...
- 11:12 AM Bug #46565 (Fix Under Review): mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- 08:57 AM Bug #46565: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- Reported By: Nathan Cutler
https://github.com/ceph/ceph/pull/36125#issuecomment-659007980
- 08:09 AM Bug #46565 (Resolved): mgr/nfs: Ensure pseudoroot path is absolute and is not just /
- As NFS Ganesha requires pseudo path to be an absolute path....
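The constraint behind this bug can be sketched as a small validator. This is an illustrative sketch only; `check_pseudo_path` is a hypothetical helper name, not the actual mgr/nfs code:

```python
import posixpath

def check_pseudo_path(path: str) -> str:
    """Validate an NFS Ganesha pseudo path: it must be absolute and not just '/'."""
    if not path.startswith("/"):
        raise ValueError(f"pseudo path {path!r} must be absolute")
    # Normalize away trailing slashes and "." / ".." components.
    normalized = posixpath.normpath(path)
    if normalized == "/":
        raise ValueError("pseudo path must not be just '/'")
    return normalized
```

With this sketch, `check_pseudo_path("/cephfs/a/")` normalizes to `/cephfs/a`, while a relative path or a bare `/` is rejected.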
- 11:00 AM Bug #46380: libcephfs admin socket occurs segment fault
- Sorry, it was our own mistake; please close this bug. Thanks.
- 09:25 AM Bug #46158 (Closed): pybind/mgr/volumes: Persist snapshot size on snapshot creation
- As per the reasons stated in comment https://tracker.ceph.com/issues/46158#note-4 , this tracker is being closed.
07/15/2020
- 09:41 PM Backport #46528 (In Progress): octopus: mgr/volumes: `protect` and `clone` operation in a single ...
- 08:31 PM Bug #46278: mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_bytes" of s...
- The fix may not be required in the kernel clients in addition to libcephfs, as the volume manager is the entity that wo...
- 08:14 PM Feature #42451: mds: add root_squash
- Ramana Raja wrote:
> Would the following syntax in the MDS caps work?
>
> [mds] allow rw root_squash=true, allo...
- 06:28 PM Feature #42451 (Need More Info): mds: add root_squash
- Would the following syntax in the MDS caps work?
[mds] allow rw root_squash=true, allow r path=/foo
The clien...
- 07:35 PM Bug #46559 (Fix Under Review): Create NFS Ganesha Cluster instructions are misleading
- 06:22 PM Bug #46559 (Resolved): Create NFS Ganesha Cluster instructions are misleading
- At https://docs.ceph.com/docs/master/cephfs/fs-nfs-exports/#create-nfs-ganesha-cluster it says to do:...
- 06:02 PM Bug #46543 (Fix Under Review): mds forwarding request 'no_available_op_found'
- 08:11 AM Bug #46543: mds forwarding request 'no_available_op_found'
- PR: https://github.com/ceph/ceph/pull/36107...
- 08:10 AM Bug #46543 (Resolved): mds forwarding request 'no_available_op_found'
- ...
- 04:32 PM Bug #46158: pybind/mgr/volumes: Persist snapshot size on snapshot creation
- The requirement was from ceph-csi to return a snapshot size.
On further discussion and looking at the CSI spec and...
- 04:17 PM Feature #45747 (In Progress): pybind/mgr/nfs: add interface for adding user defined configuration
- 03:36 PM Bug #44546: cleanup: Can't lookup inode 1
- I'm seeing the same message logged on CentOS 8.2 (kernel 4.18.0-193.6.3.el8_2.x86_64) mounting a submount fileshare o...
- 10:40 AM Bug #46269 (Pending Backport): ceph-fuse: ceph-fuse process is terminated by the logrotate task a...
07/14/2020
- 11:12 PM Bug #46355: client: directory inode can not call release_callback
- wei liu wrote:
> fix:
> https://github.com/ceph/ceph/pull/35327
Hi Wei, the PR # is in the issue metadata. No ... - 09:29 AM Bug #46355: client: directory inode can not call release_callback
- fix:
https://github.com/ceph/ceph/pull/35327
- 06:44 PM Feature #20: client: recover from a killed session (w/ blacklist)
- Right. To summarize: the question whether it should be backported was asked, but got no answer, and in the meantime w...
- 05:55 PM Bug #44386 (Can't reproduce): qa: blogbench cleanup hang/stall
- 01:23 PM Bug #46535 (In Progress): mds: Importer MDS failing right after EImportStart event is journaled, ...
- An MDS hitting mds_kill_import_at = 7 (after EImportStart is journaled but before sending the ImportAck to the export...
- 12:54 PM Bug #45349: mds: send scrub status to ceph-mgr only when scrub is running (or paused, etc..)
- The original fix introduced a regression - https://tracker.ceph.com/issues/46495 - which is being fixed by https://gi...
- 12:53 PM Backport #46480 (Need More Info): nautilus: mds: send scrub status to ceph-mgr only when scrub is...
- The original fix introduced a regression - https://tracker.ceph.com/issues/46495 - which is being fixed by https://gi...
- 12:47 PM Bug #46533 (Fix Under Review): mds: null pointer dereference in MDCache::finish_rollback
- 12:43 PM Bug #46533 (Resolved): mds: null pointer dereference in MDCache::finish_rollback
- introduced by https://tracker.ceph.com/issues/45024
- 10:13 AM Backport #46528 (Resolved): octopus: mgr/volumes: `protect` and `clone` operation in a single tra...
- https://github.com/ceph/ceph/pull/36126
- 10:13 AM Backport #46527 (Resolved): nautilus: mgr/volumes: `protect` and `clone` operation in a single tr...
- https://github.com/ceph/ceph/pull/36166
- 07:58 AM Bug #45835 (Fix Under Review): mds: OpenFileTable::prefetch_inodes during rejoin can cause out-of...
- 07:06 AM Documentation #46449 (Resolved): mgr/nfs: Update nfs-ganesha package requirements
- 07:06 AM Backport #46498 (Resolved): octopus: mgr/nfs: Update nfs-ganesha package requirements
- Nathan Cutler wrote:
> Varsha, could you use the "src/script/backport-create-issue" script to create backport issues...
- 04:23 AM Backport #46498: octopus: mgr/nfs: Update nfs-ganesha package requirements
- Varsha, could you use the "src/script/backport-create-issue" script to create backport issues? It sets the release fi...
- 02:58 AM Feature #45371 (Pending Backport): mgr/volumes: `protect` and `clone` operation in a single trans...
07/13/2020
- 08:10 PM Bug #43543 (Resolved): mds: scrub on directory with recently created files may fail to load backt...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:09 PM Backport #46524 (Resolved): octopus: non-head batch requests may hold authpins and locks
- https://github.com/ceph/ceph/pull/37022
- 08:09 PM Bug #45024 (Resolved): mds: wrong link count under certain circumstance
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:09 PM Bug #45261 (Resolved): mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:09 PM Bug #45304 (Resolved): qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:09 PM Bug #45524 (Resolved): ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 PM Bug #45665 (Resolved): client: fails to reconnect to MDS
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 PM Bug #45699 (Resolved): mds may start to fragment dirfrag before rollback finishes
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 PM Bug #45723 (Resolved): vstart_runner: LocalFuseMount.mount should set set.mounted to True
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:08 PM Bug #45875 (Resolved): mds: add config to require forward to auth MDS
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:07 PM Backport #46523 (Resolved): nautilus: mds: fix hang issue when accessing a file under a lost pare...
- https://github.com/ceph/ceph/pull/36179
- 08:07 PM Backport #46522 (Resolved): octopus: mds: fix hang issue when accessing a file under a lost paren...
- https://github.com/ceph/ceph/pull/37020
- 08:07 PM Backport #46521 (Resolved): nautilus: mds: deleting a large number of files in a directory causes...
- https://github.com/ceph/ceph/pull/36178
- 08:07 PM Backport #46520 (Resolved): octopus: mds: deleting a large number of files in a directory causes ...
- https://github.com/ceph/ceph/pull/37034
- 08:06 PM Backport #46517 (Resolved): nautilus: client: directory inode can not call release_callback
- https://github.com/ceph/ceph/pull/36177
- 08:06 PM Backport #46516 (Resolved): octopus: client: directory inode can not call release_callback
- https://github.com/ceph/ceph/pull/37017
- 07:53 PM Backport #45687: luminous: mds: FAILED assert(locking == lock) in MutationImpl::finish_locking
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35345
m...
- 02:08 PM Backport #45687 (Resolved): luminous: mds: FAILED assert(locking == lock) in MutationImpl::finish...
- 07:53 PM Backport #45825: luminous: MDS config reference lists mds log max expiring
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35516
m...
- 07:52 PM Backport #44476: luminous: mds: assert(p != active_requests.end())
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34937
m...
- 07:52 PM Backport #42160: luminous: osdc: objecter ops output does not have useful time information
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33294
m...
- 07:52 PM Backport #42123: luminous: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33293
m...
- 07:52 PM Backport #41857: luminous: client: removing dir reports "not empty" issue due to client side fill...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/33292
m...
- 06:17 PM Bug #44785 (Pending Backport): non-head batch requests may hold authpins and locks
- 06:16 PM Feature #45267 (Resolved): ceph-fuse: Reduce memory copy in ceph-fuse during data IO
- 06:15 PM Bug #46355 (Pending Backport): client: directory inode can not call release_callback
- 06:14 PM Bug #46129 (Pending Backport): mds: fix hang issue when accessing a file under a lost parent dire...
- 06:13 PM Bug #46273 (Pending Backport): mds: deleting a large number of files in a directory causes the fi...
- 05:54 PM Bug #46507 (Triaged): qa: test_data_scan: "show inode" returns ENOENT
- ...
- 05:21 PM Bug #46504 (Can't reproduce): pybind/mgr/volumes: self.assertTrue(check < timo) fails
- ...
- 05:19 PM Bug #43902: qa: mon_thrash: timeout "ceph quorum_status"
- /ceph/teuthology-archive/pdonnell-2020-07-11_02:43:08-fs-wip-pdonnell-testing-20200711.001802-distro-basic-smithi/521...
- 05:17 PM Bug #43039: client: shutdown race fails with status 141
- /ceph/teuthology-archive/pdonnell-2020-07-11_02:43:08-fs-wip-pdonnell-testing-20200711.001802-distro-basic-smithi/521...
- 04:09 PM Bug #46374: ceph-fuse blocks forever, fails to start, emits no errors
- John Mulligan wrote:
> Apologies, I found one very important item that I had previously overlooked. Because we use t...
- 03:11 PM Feature #20: client: recover from a killed session (w/ blacklist)
- Patrick Donnelly wrote:
> Nathan, why was this changed to backport to Octopus?
I see: https://github.com/cep...
- 03:10 PM Feature #20: client: recover from a killed session (w/ blacklist)
- Nathan, why was this changed to backport to Octopus?
- 01:39 PM Bug #46452 (Duplicate): mgr/volumes: Python stack trace instead of a graceful error is thrown whe...
- Duplicate of https://tracker.ceph.com/issues/46496
- 11:04 AM Bug #46496 (Fix Under Review): pybind/mgr/volumes: subvolume operations throw exception if volume...
- 09:44 AM Bug #46496 (In Progress): pybind/mgr/volumes: subvolume operations throw exception if volume does...
- 09:44 AM Bug #46496 (Resolved): pybind/mgr/volumes: subvolume operations throw exception if volume doesn't...
- Most of the volume/subvolume operations throw exception if the volume doesn't exist.
e.g.
$ bin/ceph fs subvolu...
- 10:42 AM Backport #46498 (In Progress): octopus: mgr/nfs: Update nfs-ganesha package requirements
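The graceful handling requested in bug #46496 above can be sketched as follows. This is a hypothetical sketch, not the actual mgr/volumes code: `subvolume_ls` and `VolumeNotFound` are invented names, while the `(retcode, stdout, stderr)` return tuple mirrors the convention ceph-mgr command handlers use:

```python
import errno

class VolumeNotFound(Exception):
    """Raised when a volume name does not refer to an existing volume."""
    def __init__(self, name):
        super().__init__(f"volume '{name}' does not exist")
        self.name = name

def subvolume_ls(volumes, vol_name):
    # Translate the exception into the (retcode, stdout, stderr) tuple
    # that ceph-mgr command handlers return, instead of letting a raw
    # Python traceback escape to the user.
    try:
        if vol_name not in volumes:
            raise VolumeNotFound(vol_name)
        return 0, str(sorted(volumes[vol_name])), ""
    except VolumeNotFound as e:
        return -errno.ENOENT, "", str(e)
```

A caller listing subvolumes of a missing volume then gets `-ENOENT` and a one-line error message rather than a stack trace.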
- https://github.com/ceph/ceph/pull/36063
- 10:39 AM Backport #46498 (Resolved): octopus: mgr/nfs: Update nfs-ganesha package requirements
- https://github.com/ceph/ceph/pull/36063
07/11/2020
- 08:50 AM Backport #46479 (In Progress): octopus: mds: send scrub status to ceph-mgr only when scrub is run...
- 08:49 AM Backport #46469 (In Progress): octopus: client: release the client_lock before copying data in read
- 08:46 AM Backport #46410 (In Progress): octopus: client: supplying ceph_fsetxattr with no value unsets xattr
- 08:44 AM Backport #46402 (In Progress): octopus: client: recover from a killed session (w/ blacklist)
- 08:41 AM Backport #46348 (In Progress): octopus: qa/tasks: make sh() in vstart_runner.py identical with te...
- 08:40 AM Backport #46311 (In Progress): octopus: qa/tasks/cephfs/test_snapshots.py: Command failed with st...
- 08:32 AM Backport #46199 (In Progress): octopus: qa: "[WRN] evicting unresponsive client smithi131:z (6314...
07/10/2020
- 11:57 PM Documentation #46449 (Pending Backport): mgr/nfs: Update nfs-ganesha package requirements
- 11:36 AM Documentation #46449 (Fix Under Review): mgr/nfs: Update nfs-ganesha package requirements
- 11:29 AM Documentation #46449 (Resolved): mgr/nfs: Update nfs-ganesha package requirements
- 09:55 PM Backport #46188 (In Progress): octopus: mds: EMetablob replay too long will cause mds restart
- 09:50 PM Backport #46186 (In Progress): octopus: client: fix snap directory atime
- 09:49 PM Backport #46185: octopus: cephadm: mds permissions for osd are unnecessarily permissive
- cephadm fix being backported to octopus along with all other cephadm-related fixes and features
- 09:49 PM Backport #46185 (Resolved): octopus: cephadm: mds permissions for osd are unnecessarily permissive
- 09:49 PM Bug #46081 (Resolved): cephadm: mds permissions for osd are unnecessarily permissive
- 09:47 PM Backport #46152 (In Progress): octopus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_chec...
- 07:30 PM Backport #45825 (Resolved): luminous: MDS config reference lists mds log max expiring
- 06:14 PM Backport #45852 (Resolved): nautilus: mds: scrub on directory with recently created files may fai...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35400
m...
- 06:13 PM Backport #45847 (Resolved): nautilus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35399
m...
- 06:10 PM Backport #45843 (Resolved): nautilus: ceph-fuse: the -d option couldn't enable the debug mode in ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35398
m...
- 06:10 PM Backport #45839 (Resolved): nautilus: mds may start to fragment dirfrag before rollback finishes
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35397
m...
- 06:09 PM Backport #45774 (Resolved): nautilus: vstart_runner: LocalFuseMount.mount should set set.mounted ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35396
m...
- 06:09 PM Backport #45709 (Resolved): nautilus: mds: wrong link count under certain circumstance
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35394
m...
- 06:09 PM Backport #45898 (Resolved): nautilus: mds: add config to require forward to auth MDS
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35377
m...
- 06:08 PM Backport #45887 (Resolved): nautilus: client: fails to reconnect to MDS
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35403
m...
- 05:57 PM Bug #40746 (Resolved): client: removing dir reports "not empty" issue due to client side filled w...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:57 PM Bug #40821 (Resolved): osdc: objecter ops output does not have useful time information
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:56 PM Bug #42107 (Resolved): client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:55 PM Bug #44316 (Resolved): mds: assert(p != active_requests.end())
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:54 PM Backport #46480 (Resolved): nautilus: mds: send scrub status to ceph-mgr only when scrub is runni...
- https://github.com/ceph/ceph/pull/36183
- 05:54 PM Backport #46479 (Resolved): octopus: mds: send scrub status to ceph-mgr only when scrub is runnin...
- https://github.com/ceph/ceph/pull/36047
- 05:53 PM Backport #46478 (Resolved): nautilus: pybind/mgr/volumes: volume deletion should check mon_allow_...
- https://github.com/ceph/ceph/pull/36167
- 05:53 PM Backport #46477 (Resolved): octopus: pybind/mgr/volumes: volume deletion should check mon_allow_p...
- https://github.com/ceph/ceph/pull/36327
- 05:53 PM Backport #46474 (Resolved): nautilus: mds: make threshold for MDS_TRIM warning configurable
- https://github.com/ceph/ceph/pull/36175
- 05:52 PM Backport #46473 (Resolved): octopus: mds: make threshold for MDS_TRIM warning configurable
- https://github.com/ceph/ceph/pull/36970
- 05:52 PM Bug #45971 (Resolved): vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:52 PM Backport #46470 (Resolved): nautilus: client: release the client_lock before copying data in read
- https://github.com/ceph/ceph/pull/36294
- 05:52 PM Backport #46469 (Resolved): octopus: client: release the client_lock before copying data in read
- https://github.com/ceph/ceph/pull/36046
- 05:52 PM Bug #46079 (Resolved): handle multiple ganesha.nfsd's appropriately in vstart.sh
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 05:51 PM Backport #46466 (Resolved): nautilus: pybind/mgr/volumes: get_pool_names may indicate volume does...
- https://github.com/ceph/ceph/pull/36167
- 05:51 PM Backport #46465 (Resolved): octopus: pybind/mgr/volumes: get_pool_names may indicate volume does ...
- https://github.com/ceph/ceph/pull/36327
- 05:51 PM Backport #46464 (Resolved): nautilus: mgr/volumes: fs subvolume clones stuck in progress when lib...
- https://github.com/ceph/ceph/pull/36180
- 05:51 PM Backport #46463 (Resolved): octopus: mgr/volumes: fs subvolume clones stuck in progress when libc...
- https://github.com/ceph/ceph/pull/37350
- 03:29 PM Bug #46452 (Duplicate): mgr/volumes: Python stack trace instead of a graceful error is thrown whe...
- The following python stack trace is thrown whenever an non-existent vol-name is used in any subvolume command:
(ce...
- 02:03 PM Backport #44476 (Resolved): luminous: mds: assert(p != active_requests.end())
- 02:02 PM Backport #42160 (Resolved): luminous: osdc: objecter ops output does not have useful time informa...
- 02:01 PM Backport #42123 (Resolved): luminous: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- 02:00 PM Backport #41857 (Resolved): luminous: client: removing dir reports "not empty" issue due to clien...
- 10:56 AM Bug #46104 (Resolved): Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
- 10:56 AM Feature #44193 (Resolved): pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in ...
- 10:55 AM Bug #45740 (Resolved): mgr/nfs: Check cluster exists before creating exports and make exports per...
- 10:54 AM Bug #46046 (Resolved): Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- 10:54 AM Feature #45830 (Resolved): vstart: Support deployment of ganesha daemon by cephadm with NFS option
- 10:53 AM Feature #45742 (Resolved): mgr/nfs: Add interface for listing cluster
- 10:53 AM Feature #45741 (Resolved): mgr/volumes/nfs: Add interface for get and list exports
- 10:52 AM Bug #45744 (Resolved): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 10:52 AM Feature #45743 (Resolved): mgr/nfs: Add interface to show cluster information
- 06:07 AM Backport #46289: octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:07 AM Backport #46290: octopus: mgr/nfs: Add interface for listing cluster
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:07 AM Backport #46291: octopus: mgr/volumes/nfs: Add interface for get and list exports
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:07 AM Backport #46292: octopus: mgr/nfs: Check cluster exists before creating exports and make exports ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:07 AM Backport #46156: octopus: Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.Test...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:06 AM Backport #46155: octopus: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:06 AM Backport #46085: octopus: handle multiple ganesha.nfsd's appropriately in vstart.sh
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:06 AM Backport #46106: octopus: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in e...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:06 AM Backport #46003: octopus: vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:06 AM Backport #45953: octopus: vstart: Support deployment of ganesha daemon by cephadm with NFS option
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
- 06:06 AM Backport #46401: octopus: mgr/nfs: Add interface to show cluster information
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35499
m...
07/09/2020
- 09:34 PM Bug #46438 (Resolved): mds: add vxattr for querying inherited layout
- There was private discussion about adding a layout cap but that was dismissed because changing the layout would resul...
- 05:43 PM Backport #45852: nautilus: mds: scrub on directory with recently created files may fail to load b...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35400
merged
- 05:42 PM Backport #45847: nautilus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35399
merged
- 05:42 PM Backport #45843: nautilus: ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35398
merged
- 05:41 PM Backport #45839: nautilus: mds may start to fragment dirfrag before rollback finishes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35397
merged
- 05:41 PM Backport #45774: nautilus: vstart_runner: LocalFuseMount.mount should set set.mounted to True
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35396
merged
- 05:40 PM Backport #45709: nautilus: mds: wrong link count under certain circumstance
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35394
merged
- 05:40 PM Backport #45898: nautilus: mds: add config to require forward to auth MDS
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35377
merged
- 05:39 PM Backport #45887: nautilus: client: fails to reconnect to MDS
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35403
merged
- 04:56 PM Feature #44279 (Fix Under Review): client: provide asok commands to getattr an inode with desired...
- 07:22 AM Feature #44279 (In Progress): client: provide asok commands to getattr an inode with desired caps
- 04:28 PM Bug #46434: osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- Interesting that it was reproducible for two runs. We've not seen this before but I'm suspicious that it probably exi...
- 01:27 PM Bug #46434 (Resolved): osdc: FAILED ceph_assert(bh->waitfor_read.empty())
- During Yuri's nautilus backport test run, we hit the following failure in the multimds suite's xfs tests running in Ub...
- 01:28 PM Bug #45349 (Pending Backport): mds: send scrub status to ceph-mgr only when scrub is running (or ...
- 01:04 PM Feature #46432 (In Progress): cephfs-mirror: manager module interface to add/remove directory sna...
- 12:38 PM Feature #46432 (Resolved): cephfs-mirror: manager module interface to add/remove directory snapshots
- Mgr plugin should persist directories (snapshots) which are assigned to be replicated to a filesystem in a remote clu...
- 07:24 AM Tasks #23844 (In Progress): client: break client_lock
- 02:20 AM Bug #46277 (Pending Backport): pybind/mgr/volumes: get_pool_names may indicate volume does not ex...
- 01:59 AM Feature #45906 (Pending Backport): mds: make threshold for MDS_TRIM warning configurable
- 12:07 AM Bug #46025 (Pending Backport): client: release the client_lock before copying data in read
- 12:05 AM Bug #44415 (Pending Backport): cephfs.pyx: passing empty string is fine but passing None is not t...
- 12:04 AM Bug #46360 (Pending Backport): mgr/volumes: fs subvolume clones stuck in progress when libcephfs ...
07/08/2020
- 11:26 PM Bug #46426 (Resolved): mds: MMDSPing is not an MMDSOp type
- ...
- 10:31 PM Bug #45662 (Pending Backport): pybind/mgr/volumes: volume deletion should check mon_allow_pool_de...
- 08:46 PM Feature #45371 (Fix Under Review): mgr/volumes: `protect` and `clone` operation in a single trans...
- 01:22 PM Feature #45371 (In Progress): mgr/volumes: `protect` and `clone` operation in a single transaction
- Moved back to in progress, to handle backward compatibility for non-CSI use cases where protect/unprotect may be in us...
- 04:01 PM Backport #46289 (Resolved): octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 04:01 PM Backport #46290 (Resolved): octopus: mgr/nfs: Add interface for listing cluster
- 04:01 PM Backport #46291 (Resolved): octopus: mgr/volumes/nfs: Add interface for get and list exports
- 04:01 PM Backport #46292 (Resolved): octopus: mgr/nfs: Check cluster exists before creating exports and ma...
- 04:01 PM Backport #46156 (Resolved): octopus: Test failure: test_export_create_and_delete (tasks.cephfs.te...
- 04:01 PM Backport #46155 (Resolved): octopus: Test failure: test_create_multiple_exports (tasks.cephfs.tes...
- 04:01 PM Backport #46085 (Resolved): octopus: handle multiple ganesha.nfsd's appropriately in vstart.sh
- 04:00 PM Backport #46106 (Resolved): octopus: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway cl...
- 03:58 PM Backport #46003 (Resolved): octopus: vstart: set $CEPH_CONF when calling ganesha-rados-grace comm...
- 03:58 PM Backport #45953 (Resolved): octopus: vstart: Support deployment of ganesha daemon by cephadm with...
- 03:58 PM Backport #46401 (Resolved): octopus: mgr/nfs: Add interface to show cluster information
- 05:40 AM Backport #46401 (In Progress): octopus: mgr/nfs: Add interface to show cluster information
- https://github.com/ceph/ceph/pull/35499
- 03:38 PM Feature #45747: pybind/mgr/nfs: add interface for adding user defined configuration
- Add JSON ganesha config to the common ganesha config RADOS object
- 02:05 PM Bug #46278: mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_bytes" of s...
- Zheng Yan wrote:
> try following patch
> [...]
Thanks, tested with ceph-fuse and setting the quota as before and...
- 01:24 PM Bug #46163 (In Progress): mgr/volumes: Clone operation uses source subvolume root directory mode ...
- Need to address the following:
- pending integration with fix for https://tracker.ceph.com/issues/46278
- address...
- 12:50 PM Bug #46420 (Fix Under Review): cephfs-shell: Return proper error code instead of 1
- 12:41 PM Bug #46420 (Resolved): cephfs-shell: Return proper error code instead of 1
- 10:00 AM Bug #46282 (Fix Under Review): qa: multiclient connection interruptions by stopping one client
- 05:29 AM Backport #46410 (Resolved): octopus: client: supplying ceph_fsetxattr with no value unsets xattr
- https://github.com/ceph/ceph/pull/36045
- 05:29 AM Backport #46409 (Resolved): nautilus: client: supplying ceph_fsetxattr with no value unsets xattr
- https://github.com/ceph/ceph/pull/36173
- 03:17 AM Bug #46302 (Fix Under Review): mds: optimize ephemeral rand pin
07/07/2020
- 10:32 PM Bug #46084 (Pending Backport): client: supplying ceph_fsetxattr with no value unsets xattr
- 07:09 PM Bug #46403 (Triaged): mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- Not related to #44294.
Zheng, this looks like a lock cache bug.
- 04:24 PM Bug #46403 (Triaged): mds: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- This may be a recurrence of https://tracker.ceph.com/issues/44294
Saw this while testing ceph.dir.pin.distribute...
- 03:35 PM Bug #44294: "elist.h: 91: FAILED ceph_assert(_head.empty())"
- Possibly hit a new manifestation of this while testing ceph.dir.pin.distributed for the IO500 challenge:...
- 03:28 PM Backport #46402 (Resolved): octopus: client: recover from a killed session (w/ blacklist)
- https://github.com/ceph/ceph/pull/35962
- 03:28 PM Feature #20 (Pending Backport): client: recover from a killed session (w/ blacklist)
- 02:01 PM Backport #46401 (Resolved): octopus: mgr/nfs: Add interface to show cluster information
- https://github.com/ceph/ceph/pull/35499
- 02:01 PM Feature #45743 (Pending Backport): mgr/nfs: Add interface to show cluster information
- 12:54 PM Backport #46389 (In Progress): octopus: pybind/mgr/volumes: cleanup stale connection hang
- 11:12 AM Backport #46389 (Resolved): octopus: pybind/mgr/volumes: cleanup stale connection hang
- https://github.com/ceph/ceph/pull/35962
- 12:19 PM Feature #46399 (New): mgr/volumes: preserve xattrs during snapshot clone
- While we copy over certain attributes during `fs subvolume snapshot clone`, we don't preserve extended attributes (e....
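A sketch of what preserving xattrs during the clone copy loop might look like. The `client` handle and its method shapes follow the os-module style and are assumptions, not the actual mgr/volumes or libcephfs API:

```python
# Hedged sketch for Feature #46399: replicate selected extended
# attributes from the snapshot source path to the clone target path.
# Only whitelisted namespaces are copied; all names are illustrative.
def copy_xattrs(client, src_path, dst_path,
                prefixes=("ceph.quota.", "user.")):
    for name in client.listxattr(src_path):
        if name.startswith(prefixes):
            client.setxattr(dst_path, name,
                            client.getxattr(src_path, name))
```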
- 12:18 PM Bug #46278: mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_bytes" of s...
- try following patch...
- 11:12 AM Backport #46388 (Resolved): nautilus: pybind/mgr/volumes: cleanup stale connection hang
- https://github.com/ceph/ceph/pull/36215
- 06:36 AM Bug #43517 (Fix Under Review): qa: random subvolumegroup collision
- 06:36 AM Bug #43517 (In Progress): qa: random subvolumegroup collision
- 05:42 AM Bug #46380 (Closed): libcephfs admin socket occurs segment fault
- ceph version: 14.2.5
samba version: 4.9.18
OS version: CentOS 7.6.1810
Procedure:
1. deploy a ceph cluster and cr...
07/06/2020
- 11:52 PM Bug #44276 (Pending Backport): pybind/mgr/volumes: cleanup stale connection hang
- 05:48 PM Bug #46374: ceph-fuse blocks forever, fails to start, emits no errors
- Apologies, I found one very important item that I had previously overlooked. Because we use this environment to test ...
- 02:29 PM Bug #46374 (New): ceph-fuse blocks forever, fails to start, emits no errors
- For testing purposes we're running ceph-fuse in a container (code here [1]), that runs the ceph-fuse command to mount...
- 04:18 PM Bug #46360 (Fix Under Review): mgr/volumes: fs subvolume clones stuck in progress when libcephfs ...
- 01:52 PM Backport #46290 (In Progress): octopus: mgr/nfs: Add interface for listing cluster
- https://github.com/ceph/ceph/pull/35499
- 01:48 PM Bug #46278 (Triaged): mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_b...
- 01:44 PM Bug #46355 (Fix Under Review): client: directory inode can not call release_callback
- 12:25 PM Bug #46282: qa: multiclient connection interruptions by stopping one client
- Running the qa test locally, I can see the same connection-refused logs after the client gets the up:replay mdsmap:...
07/05/2020
- 11:24 AM Bug #46360 (Resolved): mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits cer...
- During `fs subvolume clone`, libcephfs hit a "Disk quota exceeded" error that caused the subvolume clone to be stuc...
07/04/2020
- 08:33 PM Bug #43817 (Resolved): mds: update cephfs octopus feature bit
- 04:12 PM Bug #46357 (Can't reproduce): qa: Error downloading packages
- ...
- 12:44 PM Bug #42688: Standard CephFS caps do not allow certain dot files to be written
- The problem persists in Octopus!
Just did a fresh MAAS/JUJU cephfs install.
I believe at least the documentation ...
- 03:28 AM Bug #46355 (Resolved): client: directory inode can not call release_callback
- I use Ganesha + Ceph to test the release_callback feature; I have merged the relevant modification code:
https://github....
07/03/2020
- 03:37 PM Backport #46349 (Rejected): nautilus: qa/tasks: make sh() in vstart_runner.py identical with teut...
- 03:37 PM Backport #46348 (Resolved): octopus: qa/tasks: make sh() in vstart_runner.py identical with teuth...
- https://github.com/ceph/ceph/pull/36044
- 12:42 PM Backport #46201: octopus: mds: add ephemeral random and distributed export pins
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35759
m...
- 05:38 AM Bug #46282: qa: multiclient connection interruptions by stopping one client
- Patrick Donnelly wrote:
> [...]
>
>
> From: /ceph/teuthology-archive/pdonnell-2020-06-29_23:19:06-fs-wip-pdonne...
- 01:01 AM Bug #46282 (In Progress): qa: multiclient connection interruptions by stopping one client
- 01:51 AM Bug #46273: mds: deleting a large number of files in a directory causes the file system to read only
- The original dirfrag is closed through close_dirfrag, and then the merged basedirfrag commits it.
More detailed log:...
07/02/2020
- 04:48 PM Bug #41541 (Resolved): mgr/volumes: ephemerally pin volumes
- 04:47 PM Backport #46201 (Resolved): octopus: mds: add ephemeral random and distributed export pins
- 04:47 PM Feature #41302 (Resolved): mds: add ephemeral random and distributed export pins
- 04:46 PM Backport #46315 (Resolved): octopus: mgr/volumes: ephemerally pin volumes
- 11:24 AM Bug #46158 (In Progress): pybind/mgr/volumes: Persist snapshot size on snapshot creation
- 06:27 AM Bug #46069 (Pending Backport): qa/tasks: make sh() in vstart_runner.py identical with teuthology....
07/01/2020
- 08:15 PM Bug #43039 (New): client: shutdown race fails with status 141
- /ceph/teuthology-archive/pdonnell-2020-07-01_06:37:23-fs-wip-pdonnell-testing-20200701.033411-distro-basic-smithi/519...
- 03:59 PM Backport #46315 (Resolved): octopus: mgr/volumes: ephemerally pin volumes
- https://github.com/ceph/ceph/pull/35759
- 03:56 PM Backport #46311 (Resolved): octopus: qa/tasks/cephfs/test_snapshots.py: Command failed with statu...
- https://github.com/ceph/ceph/pull/36043
- 03:56 PM Backport #46310 (Resolved): nautilus: qa/tasks/cephfs/test_snapshots.py: Command failed with stat...
- https://github.com/ceph/ceph/pull/36172
- 01:57 PM Bug #46302 (Resolved): mds: optimize ephemeral rand pin
- There can be two optimizations:
1. get_ephemeral_rand() is called for each loaded inode of dirfrag fetch. all calls ge...
- 03:04 AM Backport #46289 (In Progress): octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
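The first optimization suggested for Bug #46302 amounts to hoisting the get_ephemeral_rand() call out of the per-inode loop of a dirfrag fetch. A toy sketch of the idea (a Python stand-in with illustrative names, not the MDS C++ code):

```python
# Hedged sketch for Bug #46302: compute the ephemeral-random-pin
# probability once per dirfrag fetch and reuse it for every inode,
# instead of re-reading it for each loaded inode.
import random


def pick_random_pins(inodes, get_ephemeral_rand, rng=random.random):
    threshold = get_ephemeral_rand()  # hoisted out of the per-inode loop
    return [ino for ino in inodes if rng() < threshold]
```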
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46289 (Resolved): octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- https://github.com/ceph/ceph/pull/35499
- 03:04 AM Backport #46291 (In Progress): octopus: mgr/volumes/nfs: Add interface for get and list exports
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46291 (Resolved): octopus: mgr/volumes/nfs: Add interface for get and list exports
- https://github.com/ceph/ceph/pull/35499
- 03:03 AM Backport #46292 (In Progress): octopus: mgr/nfs: Check cluster exists before creating exports and...
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46292 (Resolved): octopus: mgr/nfs: Check cluster exists before creating exports and ma...
- https://github.com/ceph/ceph/pull/35499
- 02:51 AM Backport #46290 (Resolved): octopus: mgr/nfs: Add interface for listing cluster
- https://github.com/ceph/ceph/pull/35499
- 02:50 AM Bug #45744 (Pending Backport): mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
- 02:50 AM Bug #45740 (Pending Backport): mgr/nfs: Check cluster exists before creating exports and make exp...
- 02:50 AM Feature #45741 (Pending Backport): mgr/volumes/nfs: Add interface for get and list exports
- 02:50 AM Feature #45742 (Pending Backport): mgr/nfs: Add interface for listing cluster
06/30/2020
- 06:14 PM Feature #44279: client: provide asok commands to getattr an inode with desired caps
- Jeff Layton wrote:
> Do we need an asok interface for this? If you're planning to write testcases that link in libce...
- 01:40 PM Feature #44279: client: provide asok commands to getattr an inode with desired caps
- Do we need an asok interface for this? If you're planning to write testcases that link in libcephfs directly, then yo...
- 05:37 PM Bug #45530 (Pending Backport): qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: [...
- 04:51 PM Bug #46282 (Resolved): qa: multiclient connection interruptions by stopping one client
- ...
- 03:50 PM Bug #46278 (Resolved): mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_...
- NOTE: This is applicable even without using subvolume snapshots
A snapshot of a directory with quota set on it, vi...
- 03:35 PM Bug #46277 (Fix Under Review): pybind/mgr/volumes: get_pool_names may indicate volume does not ex...
- 03:32 PM Bug #46277 (Resolved): pybind/mgr/volumes: get_pool_names may indicate volume does not exist if m...
- ...
- 03:15 PM Bug #46269 (Fix Under Review): ceph-fuse: ceph-fuse process is terminated by the logratote task a...
- 06:18 AM Bug #46269 (Resolved): ceph-fuse: ceph-fuse process is terminated by the logratote task and what ...
- *1. reproduce the scene as shown below:*
(1) step 1:
Open the terminal_1, and
Prepare the cmd: "killall -q -1 ce...
- 03:14 PM Bug #46273 (Fix Under Review): mds: deleting a large number of files in a directory causes the fi...
- 11:32 AM Bug #46273 (Resolved): mds: deleting a large number of files in a directory causes the file syste...
- Log as follows:...
- 07:31 AM Backport #46190 (In Progress): octopus: mds: cap revoking requests didn't success when the client...
- 07:30 AM Backport #46191 (In Progress): nautilus: mds: cap revoking requests didn't success when the clien...
06/29/2020
- 10:03 PM Bug #46056 (Resolved): assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- The main patch that fixes this in ganesha is here:
https://github.com/nfs-ganesha/nfs-ganesha/commit/e45743b47...
- 09:52 PM Bug #41541 (Pending Backport): mgr/volumes: ephemerally pin volumes
- 04:28 PM Bug #37725 (Can't reproduce): mds: stopping MDS with subtrees pinnned cannot finish stopping
- Xiubo Li wrote:
> Patrick Donnelly wrote:
> > I believe this is fixed already but needs double-checked. It may also...
- 10:01 AM Bug #37725: mds: stopping MDS with subtrees pinnned cannot finish stopping
- Patrick Donnelly wrote:
> I believe this is fixed already but needs double-checked. It may also be fixed by the ephe...
- 01:42 PM Bug #46158 (Triaged): pybind/mgr/volumes: Persist snapshot size on snapshot creation
- 01:41 PM Bug #46218 (Triaged): mds: Add inter MDS messages to the corpus and enforce versioning
- 07:44 AM Feature #46059 (Fix Under Review): vstart_runner.py: optionally rotate logs between tests
06/26/2020
- 03:35 PM Backport #46235 (Resolved): nautilus: pybind/mgr/volumes: volume deletion not always removes the ...
- https://github.com/ceph/ceph/pull/36167
- 03:35 PM Backport #46234 (Resolved): octopus: pybind/mgr/volumes: volume deletion not always removes the a...
- https://github.com/ceph/ceph/pull/36327
- 05:29 AM Bug #46218 (Triaged): mds: Add inter MDS messages to the corpus and enforce versioning
- Now that the current and the new inter-MDS messages are "guaranteed to be of type MMDSOp":https://tracker.ceph.com/is...
06/25/2020
- 09:37 PM Bug #41565 (Resolved): mds: detect MDS<->MDS messages that are not versioned
- 09:33 PM Bug #45910 (Pending Backport): pybind/mgr/volumes: volume deletion not always removes the associa...
- 09:33 PM Bug #46101 (Resolved): qa: set omit_sudo to False for cmds executed with sudo
- 08:31 PM Bug #46213 (Rejected): qa: pjd test reports odd EIO errors
- Probably caused by: https://github.com/ceph/ceph/pull/35725/files
- 06:00 PM Bug #46213 (Rejected): qa: pjd test reports odd EIO errors
- ...
- 12:41 PM Feature #45729 (In Progress): pybind/mgr/volumes: Add the ability to keep snapshots of subvolumes...
- 12:40 PM Feature #45371 (Fix Under Review): mgr/volumes: `protect` and `clone` operation in a single trans...
- 09:41 AM Bug #23421 (Closed): ceph-fuse: stop ceph-fuse if no root permissions?
- 05:38 AM Bug #46163 (Fix Under Review): mgr/volumes: Clone operation uses source subvolume root directory ...
- 02:59 AM Backport #46201 (In Progress): octopus: mds: add ephemeral random and distributed export pins
- 01:26 AM Backport #46201 (Resolved): octopus: mds: add ephemeral random and distributed export pins
- https://github.com/ceph/ceph/pull/35759
- 01:26 AM Feature #41302 (Pending Backport): mds: add ephemeral random and distributed export pins
06/24/2020
- 11:03 PM Feature #45371: mgr/volumes: `protect` and `clone` operation in a single transaction
- Based on #note-3 reassigning it to myself.
- 08:45 PM Backport #46200 (Resolved): nautilus: qa: "[WRN] evicting unresponsive client smithi131:z (6314),...
- https://github.com/ceph/ceph/pull/36171
- 08:45 PM Backport #46199 (Resolved): octopus: qa: "[WRN] evicting unresponsive client smithi131:z (6314), ...
- https://github.com/ceph/ceph/pull/36042
- 08:43 PM Backport #46191 (Resolved): nautilus: mds: cap revoking requests didn't success when the client d...
- https://github.com/ceph/ceph/pull/35841
- 08:43 PM Backport #46190 (Resolved): octopus: mds: cap revoking requests didn't success when the client do...
- https://github.com/ceph/ceph/pull/35842
- 08:43 PM Backport #46189 (Resolved): nautilus: mds: EMetablob replay too long will cause mds restart
- https://github.com/ceph/ceph/pull/36170
- 08:43 PM Backport #46188 (Resolved): octopus: mds: EMetablob replay too long will cause mds restart
- https://github.com/ceph/ceph/pull/36040
- 08:42 PM Backport #46187 (Resolved): nautilus: client: fix snap directory atime
- https://github.com/ceph/ceph/pull/36169
- 08:42 PM Backport #46186 (Resolved): octopus: client: fix snap directory atime
- https://github.com/ceph/ceph/pull/36039
- 08:42 PM Backport #46185 (Resolved): octopus: cephadm: mds permissions for osd are unnecessarily permissive
- https://github.com/ceph/ceph/pull/35898
- 06:48 PM Bug #46100 (Resolved): vstart_runner.py: check for Raw instance before treating as iterable
- 06:35 PM Fix #46070 (Pending Backport): client: fix snap directory atime
- 06:34 PM Bug #45935 (Pending Backport): mds: cap revoking requests didn't success when the client doing re...
- 06:34 PM Bug #45815 (Resolved): vstart_runner.py: set stdout and stderr to None by default
- 06:33 PM Bug #43943 (Pending Backport): qa: "[WRN] evicting unresponsive client smithi131:z (6314), after ...
- 06:31 PM Bug #46057 (Resolved): qa/cephfs: run_as_user must args list instead of str
- 06:30 PM Bug #46042 (Pending Backport): mds: EMetablob replay too long will cause mds restart
- 06:30 PM Bug #46023 (Resolved): mds: MetricAggregator.cc: 178: FAILED ceph_assert(rm)
- 03:33 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Jeff Layton wrote:
> Looks like a legit bug. The Linux kernel does this in __vfs_setxattr:
>
> [...]
>
> ...so...
- 03:10 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Yeah, the latest scheme just ensures that handles (which are somewhat analogous to inodes) are never shared between e...
- 02:59 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Jeff Layton wrote:
> I started working on this, but there is a bit of a dilemma. My approach to fixing this was to s...
- 01:06 PM Bug #46081 (Pending Backport): cephadm: mds permissions for osd are unnecessarily permissive
- 09:43 AM Feature #45743 (Fix Under Review): mgr/nfs: Add interface to show cluster information
- 05:00 AM Bug #46167 (Rejected): pybind/mgr/volumes: xlist.h: 144: FAILED ceph_assert((bool)_front == (bool...
- Caused by https://github.com/ceph/ceph/pull/35410
- 04:56 AM Bug #46167 (Rejected): pybind/mgr/volumes: xlist.h: 144: FAILED ceph_assert((bool)_front == (bool...
- ...
- 03:03 AM Feature #46166 (Resolved): mds: store symlink target as xattr in data pool inode for disaster rec...
- Currently, the MDS only stores the symlink inode's backtrace information in the data pool. During disaster recovery o...
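A sketch of how the recovery side could use such an xattr. The "symlink" xattr name and the `ioctx` interface are assumptions (only the first-object naming convention is established), so this is an illustration of the idea in Feature #46166 rather than an implemented interface:

```python
# Hedged sketch for Feature #46166: if the symlink target were stored as
# an xattr on the inode's first data-pool object, a disaster-recovery
# scan could recreate the link from the data pool alone.
def recover_symlink_target(ioctx, inode_number, xattr_name="symlink"):
    # CephFS names the first object of an inode "<ino-hex>.00000000"
    oid = "{:x}.00000000".format(inode_number)
    return ioctx.get_xattr(oid, xattr_name).decode("utf-8")
```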
- 01:56 AM Bug #46163 (Triaged): mgr/volumes: Clone operation uses source subvolume root directory mode and ...
06/23/2020
- 11:53 PM Bug #46163 (Resolved): mgr/volumes: Clone operation uses source subvolume root directory mode and...
- If a subvolume's mode or uid/gid values are changed after a snapshot, and a clone of a snapshot prior to the change is ...
- 09:25 PM Backport #46155 (In Progress): octopus: Test failure: test_create_multiple_exports (tasks.cephfs....
- 02:41 PM Backport #46155 (Fix Under Review): octopus: Test failure: test_create_multiple_exports (tasks.ce...
- 02:40 PM Backport #46155 (Resolved): octopus: Test failure: test_create_multiple_exports (tasks.cephfs.tes...
- https://github.com/ceph/ceph/pull/35499
- 09:23 PM Bug #46104: Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
- Note: the Orchestrator project doesn't have a Backport tracker. Moved to "fs" Project which is where all the other NF...
- 01:44 PM Bug #46104 (Pending Backport): Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs...
- 09:22 PM Backport #46156 (In Progress): octopus: Test failure: test_export_create_and_delete (tasks.cephfs...
- 02:42 PM Backport #46156 (Fix Under Review): octopus: Test failure: test_export_create_and_delete (tasks.c...
- 02:42 PM Backport #46156 (Resolved): octopus: Test failure: test_export_create_and_delete (tasks.cephfs.te...
- https://github.com/ceph/ceph/pull/35499
- 05:51 PM Bug #46158 (Closed): pybind/mgr/volumes: Persist snapshot size on snapshot creation
- Due to issue [1], the subvolume snapshot info command returns incorrect snapshot size if it's requested after the cor...
- 01:47 PM Bug #46058 (Duplicate): qa: test_scrub_pause_and_resume KeyError: 'a'
- Thanks for tracking that down Venky!
- 05:48 AM Bug #46058: qa: test_scrub_pause_and_resume KeyError: 'a'
- This is due to: https://tracker.ceph.com/issues/44638
The PR got merged but the tracker status was never updated.
- 01:46 PM Bug #46046 (Pending Backport): Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs....
- 01:29 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Sidharth Anupkrishnan wrote:
> Jeff Layton wrote:
> > The kernel seems to key its behavior on the size parameter. ...
- 12:27 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Jeff Layton wrote:
> The kernel seems to key its behavior on the size parameter. When it's 0, the pointer passed in ...
- 10:26 AM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- The kernel seems to key its behavior on the size parameter. When it's 0, the pointer passed in is ignored and an empt...
- 11:38 AM Feature #45371: mgr/volumes: `protect` and `clone` operation in a single transaction
- With subvolume and snapshot decoupling feature[1], snapshot protect and unprotect would no longer be required.
[1]...
- 09:00 AM Backport #46152 (Resolved): octopus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks....
- https://github.com/ceph/ceph/pull/36038
- 09:00 AM Backport #46151 (Resolved): nautilus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks...
- https://github.com/ceph/ceph/pull/36168
- 09:00 AM Bug #45071 (Resolved): cephfs-shell: CI testing does not detect flake8 errors
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:59 AM Feature #45289 (Resolved): mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:59 AM Bug #45300 (Resolved): qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected keyword ar...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:59 AM Bug #45396 (Resolved): ceph-fuse: building the source code failed with libfuse3.5 or higher versions
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:58 AM Bug #45866 (Resolved): ceph-fuse build failure against libfuse v3.9.1
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 08:57 AM Backport #46001: octopus: pybind/mgr/volumes: add command to return metadata regarding a subvolum...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35670
m...
- 06:36 AM Backport #46001 (Resolved): octopus: pybind/mgr/volumes: add command to return metadata regarding...
- 08:57 AM Backport #45849: octopus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35671
m...
- 06:34 AM Backport #45849 (Resolved): octopus: mgr/volumes: create fs subvolumes with isolated RADOS namesp...
- 08:57 AM Backport #46013 (Resolved): octopus: qa: commit 9f6c764f10f break qa code in several places
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35600
m...
- 08:57 AM Backport #45851 (Resolved): octopus: mds: scrub on directory with recently created files may fail...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35555
m...
- 08:56 AM Backport #45848 (Resolved): octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpec...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35554
m...
- 08:56 AM Backport #45846 (Resolved): octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35451
m...
- 08:56 AM Backport #45941 (Resolved): octopus: ceph-fuse build failure against libfuse v3.9.1
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35450
m...
- 08:56 AM Backport #45845 (Resolved): octopus: ceph-fuse: building the source code failed with libfuse3.5 o...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35450
m...
- 08:56 AM Backport #45842 (Resolved): octopus: ceph-fuse: the -d option couldn't enable the debug mode in l...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35449
m...
- 08:55 AM Backport #45476 (Resolved): octopus: cephfs-shell: CI testing does not detect flake8 errors
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/34998
m...
- 08:55 AM Backport #45888 (Resolved): octopus: client: fails to reconnect to MDS
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35616
m...
- 08:55 AM Backport #45838 (Resolved): octopus: mds may start to fragment dirfrag before rollback finishes
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35448
m...
- 05:48 AM Bug #44638 (Pending Backport): test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestSc...
06/22/2020
- 11:34 PM Bug #46084 (Triaged): client: supplying ceph_fsetxattr with no value unsets xattr
- Jeff suggests the behavior should be to convert the nullptr to an empty string, to match the behavior of the kernel c...
- 01:54 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Patrick Donnelly wrote:
> Interesting design choice here. Is it causing issues for some application of yours? I'm in...
- 01:51 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Looks like a legit bug. The Linux kernel does this in __vfs_setxattr:...
- 01:44 PM Bug #46084: client: supplying ceph_fsetxattr with no value unsets xattr
- Interesting design choice here. Is it causing issues for some application of yours? I'm inclined not to change the be...
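A minimal sketch of the behavior change discussed in this thread, assuming the fix follows Jeff's suggestion of matching the kernel client; `normalize_xattr_value` is a hypothetical helper, not a libcephfs symbol:

```python
# Hypothetical sketch of the normalization suggested in #46084: a
# None/NULL value becomes an empty (zero-length) value, so the call
# stores an empty xattr rather than unsetting it, matching the
# kernel's __vfs_setxattr behavior.
def normalize_xattr_value(value):
    return b"" if value is None else value

# A zero-length value is stored; the xattr is not removed.
assert normalize_xattr_value(None) == b""
assert normalize_xattr_value(b"hi") == b"hi"
```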
- 09:15 PM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
- The issue is right there in the log?
2020-06-19T14:49:24.230+0530 7fba997fa700 -1 client.4311 failed to remount fo...
- 05:45 PM Backport #46001: octopus: pybind/mgr/volumes: add command to return metadata regarding a subvolum...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/35670
merged
- 05:45 PM Backport #45849: octopus: mgr/volumes: create fs subvolumes with isolated RADOS namespaces
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35671
merged
- 05:43 PM Backport #46013: octopus: qa: commit 9f6c764f10f break qa code in several places
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35600
merged
- 05:41 PM Backport #45851: octopus: mds: scrub on directory with recently created files may fail to load ba...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35555
merged
- 05:40 PM Backport #45848: octopus: qa/tasks/vstart_runner.py: TypeError: mount() got an unexpected keyword...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35554
merged
- 05:37 PM Backport #45846: octopus: qa/fuse_mount.py: tests crash when /sys/fs/fuse/connections is absent
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35451
merged
- 05:36 PM Backport #45941: octopus: ceph-fuse build failure against libfuse v3.9.1
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35450
merged
- 05:36 PM Backport #45845: octopus: ceph-fuse: building the source code failed with libfuse3.5 or higher ve...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35450
merged
- 05:34 PM Backport #45842: octopus: ceph-fuse: the -d option couldn't enable the debug mode in libfuse
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35449
merged
- 05:34 PM Backport #45476: octopus: cephfs-shell: CI testing does not detect flake8 errors
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/34998
merged
- 05:32 PM Backport #45888: octopus: client: fails to reconnect to MDS
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35616
merged
- 05:28 PM Backport #45838: octopus: mds may start to fragment dirfrag before rollback finishes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/35448
merged
- 04:11 PM Bug #41541 (Fix Under Review): mgr/volumes: ephemerally pin volumes
- 04:09 PM Bug #37725 (Triaged): mds: stopping MDS with subtrees pinnned cannot finish stopping
- I believe this is fixed already, but it needs to be double-checked. It may also be fixed by the ephemeral pinning branch for #4...
- 01:47 PM Feature #46074 (Triaged): mds: provide alternatives to increase the total cephfs subvolume snapsh...
- 01:41 PM Bug #46129 (Fix Under Review): mds: fix hang issue when accessing a file under a lost parent dire...
- 02:45 AM Bug #46129 (Resolved): mds: fix hang issue when accessing a file under a lost parent directory
- Once in a while we encountered a serious problem that resulted in metadata loss. After we brought the MDS up...
- 01:03 PM Bug #46100 (Fix Under Review): vstart_runner.py: check for Raw instance before treating as iterable
- 01:03 PM Bug #46101 (Fix Under Review): qa: set omit_sudo to False for cmds executed with sudo
- 12:47 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- Ok, I have a patchset that fixes this for ganesha. It turns out not to be terribly invasive, but it does need more te...
- 11:56 AM Bug #46140 (Resolved): mds: couldn't see the logs in log file before the daemon get aborted
- It seems the **assert()** call doesn't flush the log buffer to the relevant log files before aborting the daemon,...
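The failure mode can be illustrated in miniature with Python's buffered logging. This is an analogy only, not the Ceph logging code: records held in a memory buffer never reach their destination if the process aborts before a flush, which is why the fix idea is to flush log buffers before aborting.

```python
import io
import logging
from logging.handlers import MemoryHandler

# Analogy for #46140, not Ceph code: a memory-buffered handler holds
# records until flushed; an abort before flush() loses them.
sink = io.StringIO()                       # stands in for the on-disk log file
target = logging.StreamHandler(sink)
buffered = MemoryHandler(capacity=1000, target=target)

log = logging.getLogger("lost-log-sketch")
log.propagate = False                      # keep the sketch self-contained
log.addHandler(buffered)

log.warning("about to assert")             # buffered, not yet in the "file"
assert sink.getvalue() == ""               # aborting now would lose the message

buffered.flush()                           # flush before aborting, as the fix intends
assert "about to assert" in sink.getvalue()
```

The point of the sketch is the ordering: the flush must happen before the abort path runs, or the messages explaining the failure are exactly the ones that go missing.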
- 11:30 AM Bug #45593 (Rejected): qa: removing network bridge appears to cause dropped packets
- This is not a bug in the ceph qa test suite; the root cause is that the node itself was lost.
06/20/2020
- 09:55 PM Backport #45773 (Resolved): octopus: vstart_runner: LocalFuseMount.mount should set set.mounted t...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35447
m...
- 09:54 PM Backport #45680: octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/35256
m...
06/19/2020
- 04:37 PM Bug #45332 (Resolved): qa: TestExports is failure under new Python3 runtime
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 04:37 PM Bug #45398 (Resolved): mgr/volumes: Not able to resize cephfs subvolume with ceph fs subvolume cr...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 03:46 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- I think what I can do is have FSAL_CEPH embed the export ID inside the handle-key that it uses as the cache key, without e...
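The handle-key idea above could look roughly like this. Purely illustrative: `make_handle_key` is a hypothetical helper, and the real FSAL_CEPH key layout is not shown in this thread; the sketch only demonstrates why embedding the export ID keeps identical handles from different exports from colliding in the cache.

```python
import struct

# Hypothetical sketch of the proposal in #46056: prefix the cache key
# with the export ID so the same filesystem handle seen via two
# different exports maps to two distinct cache entries.
def make_handle_key(export_id: int, handle_bytes: bytes) -> bytes:
    return struct.pack("<H", export_id) + handle_bytes

k1 = make_handle_key(1, b"\x01\x02")
k2 = make_handle_key(2, b"\x01\x02")
assert k1 != k2          # same handle, different export -> distinct keys
```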
- 12:58 PM Bug #46056: assertion triggered in LRU::lru_touch in ganesha+libcephfs client
- sepia-liu also proposed a patch to ganesha to fix this: https://review.gerrithub.io/c/ffilz/nfs-ganesha/+/495911
...
- 02:17 PM Bug #45666 (Resolved): qa: AssertionError: '1' != b'1'
- 02:17 PM Backport #45886 (Resolved): octopus: qa: AssertionError: '1' != b'1'
- 01:04 PM Backport #46085 (In Progress): octopus: handle multiple ganesha.nfsd's appropriately in vstart.sh
- https://github.com/ceph/ceph/pull/35499
- 10:50 AM Backport #46002 (In Progress): nautilus: pybind/mgr/volumes: add command to return metadata regar...
- 10:47 AM Feature #44193 (Pending Backport): pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clus...
- 10:46 AM Backport #46106 (Resolved): octopus: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway cl...
- https://github.com/ceph/ceph/pull/35499
- 10:40 AM Backport #45849 (In Progress): octopus: mgr/volumes: create fs subvolumes with isolated RADOS nam...
- 10:37 AM Backport #46001 (In Progress): octopus: pybind/mgr/volumes: add command to return metadata regard...
- 09:39 AM Backport #46011 (Resolved): octopus: qa: TestExports is failure under new Python3 runtime
- Already backported via https://github.com/ceph/ceph/commit/bb8f7b0907a2557f4ef5b0455c0fce66aa896e4b
- 09:30 AM Bug #23421 (New): ceph-fuse: stop ceph-fuse if no root permissions?
- 09:29 AM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
- Patrick Donnelly wrote:
> That would indicate ceph-fuse maybe crashed. Please check the logs.
2020-06-19T14:49:24...
- 09:10 AM Bug #46046 (Fix Under Review): Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs....
- 09:09 AM Bug #46104 (Fix Under Review): Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs...
- 08:47 AM Bug #46104 (Resolved): Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
- ...
- 06:49 AM Bug #46101 (Resolved): qa: set omit_sudo to False for cmds executed with sudo
- There are 2 distinct sets of commands -- the first are commands that set up and tear down network namespaces and the second are ...
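A rough reduction of the omit_sudo pitfall, assuming the qa run() helper strips a leading sudo when omit_sudo is left at its default of True; `prepare_args` is an illustrative stand-in, not the actual vstart_runner API:

```python
# Illustrative sketch of #46101, not vstart_runner itself: with
# omit_sudo=True (the default) the leading 'sudo' is dropped, so a
# command that genuinely needs root -- e.g. network namespace setup
# and teardown -- silently runs unprivileged unless omit_sudo=False.
def prepare_args(args, omit_sudo=True):
    if omit_sudo and args[:1] == ["sudo"]:
        return args[1:]          # sudo stripped: runs as the test user
    return args

assert prepare_args(["sudo", "ip", "netns", "add", "ceph-ns"]) == \
    ["ip", "netns", "add", "ceph-ns"]
assert prepare_args(["sudo", "ip", "netns", "add", "ceph-ns"],
                    omit_sudo=False)[0] == "sudo"
```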
- 06:41 AM Bug #46100 (Resolved): vstart_runner.py: check for Raw instance before treating as iterable
- ...
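The fix direction named in the title can be sketched as follows; `Raw` here is a stand-in for the teuthology wrapper that marks a string as a raw shell fragment, and `stringify_args` is a hypothetical helper, not the actual vstart_runner code:

```python
class Raw:
    """Stand-in for teuthology's Raw: a shell fragment passed verbatim."""
    def __init__(self, value):
        self.value = value

def stringify_args(args):
    # Check for a Raw instance before treating the argument like a
    # plain string; iterating or quoting a Raw would corrupt shell
    # fragments such as '&&' or '|'.
    out = []
    for arg in args:
        if isinstance(arg, Raw):
            out.append(arg.value)    # pass the fragment through untouched
        else:
            out.append(str(arg))
    return out

assert stringify_args(["ls", Raw("&&"), "echo", "ok"]) == ["ls", "&&", "echo", "ok"]
```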
- 05:05 AM Fix #46070 (Fix Under Review): client: fix snap directory atime
- 03:46 AM Bug #43762: pybind/mgr/volumes: create fails with TypeError
- Victoria Martinez de la Cruz wrote:
> Adding more context to this
>
> This happened after creating a second volum... - 02:49 AM Backport #45680 (Resolved): octopus: mgr/volumes: Not able to resize cephfs subvolume with ceph f...
- 02:40 AM Feature #43435: kclient:send client provided metric flags in client metadata
- V1: https://patchwork.kernel.org/project/ceph-devel/list/?series=303647
V2: https://patchwork.kernel.org/project/cep...