Activity
From 09/11/2019 to 10/10/2019
10/10/2019
- 10:48 PM Bug #42251 (Fix Under Review): mds: no assert on frozen dir when scrub path
- 08:11 AM Bug #42251 (Resolved): mds: no assert on frozen dir when scrub path
- ...
- 10:47 PM Bug #42213: test_reconnect_eviction fails with "RuntimeError: MDS in reject state up:active"
- This looks like the same problem as #40999. Can't verify because there are no mds logs. The issue is that the hard r...
- 10:34 PM Bug #42117: MDS: daemon and cephfs-data-scan dump core on (probably) damaged omap entry
- I took a glance at the code. I don't see how that could happen even with an OOM situation.
- 08:23 PM Backport #41129: mimic: qa: power off still resulted in client sending session close
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/30233
merged
- 08:23 PM Backport #40444: mimic: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30234
merged
- 08:22 PM Backport #40844: mimic: MDSMonitor: use stringstream instead of dout for mds repaired
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30235
merged
- 08:22 PM Backport #40853: mimic: test_volume_client: test_put_object_versioned is unreliable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30236
merged
- 08:21 PM Backport #40896: mimic: ceph_volume_client: fs_name must be converted to string before using it
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30238
merged
- 08:21 PM Backport #40899: mimic: mds: only evict an unresponsive client when another client wants its caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30239
merged
- 08:19 PM Backport #41466: mimic: mount.ceph: doesn't accept "strictatime"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30240
merged
- 08:18 PM Backport #41487: mimic: client: client should return EIO when it's unsafe reqs have been dropped ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30241
merged
- 08:18 PM Backport #41852: mimic: mds: MDSIOContextBase instance leak
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30417
merged
- 08:17 PM Backport #41856: mimic: client: removing dir reports "not empty" issue due to client side filled ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30443
merged
- 12:25 PM Backport #41853 (In Progress): nautilus: mds: reject sessionless messages
- Updated automatically by ceph-backport.sh version 15.0.0.5775
- 10:30 AM Bug #40085 (Resolved): FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:30 AM Bug #40101 (Resolved): libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on .sn...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:29 AM Bug #40603 (Resolved): mds: disallow setting ceph.dir.pin value exceeding max rank id
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:29 AM Bug #40615 (Resolved): ceph-fuse: mount does not support the fallocate()
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:28 AM Bug #40939 (Resolved): mds: map client_caps been inserted by mistake
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 10:23 AM Backport #40162 (Resolved): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->i...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29609
m...
- 10:23 AM Backport #41088 (Resolved): mimic: qa: AssertionError: u'open' != 'stale'
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29751
m...
- 10:23 AM Backport #41094 (Resolved): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29812
m...
- 10:23 AM Backport #41097 (Resolved): mimic: mds: map client_caps been inserted by mistake
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29833
m...
- 10:22 AM Backport #41100 (Resolved): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29915
m...
- 10:22 AM Backport #41108 (Resolved): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29940
m...
- 10:22 AM Backport #40442 (Resolved): mimic: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when oper...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30108
m...
- 10:22 AM Backport #40841 (Resolved): mimic: ceph-fuse: mount does not support the fallocate()
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30228
m... - 08:43 AM Bug #42252 (Rejected): mimic: test_21501 (tasks.cephfs.test_volume_client.TestVolumeClient) failure
- ...
- 06:57 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- One more victim on 13.2.5...
10/09/2019
- 11:43 PM Backport #42239 (In Progress): nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and t...
- 08:35 AM Backport #42239 (Resolved): nautilus: mgr/volumes: list FS subvolumes, subvolume groups, and thei...
- https://github.com/ceph/ceph/pull/30827
- 08:30 PM Bug #41800 (Pending Backport): qa: logrotate should tolerate connection resets
- 01:10 PM Bug #41800 (Fix Under Review): qa: logrotate should tolerate connection resets
- 04:16 AM Bug #41800 (In Progress): qa: logrotate should tolerate connection resets
- 07:12 PM Backport #40162: mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29609
merged
- 07:11 PM Backport #41088: mimic: qa: AssertionError: u'open' != 'stale'
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29751
merged
- 07:11 PM Backport #41094: mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write...
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29812
merged
- 07:10 PM Backport #41097: mimic: mds: map client_caps been inserted by mistake
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29833
merged
- 07:10 PM Backport #41100: mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29915
merged
- 07:10 PM Backport #41108: mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29940
merged
- 07:09 PM Backport #40442: mimic: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on .s...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30108
merged
- 07:08 PM Backport #40841: mimic: ceph-fuse: mount does not support the fallocate()
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30228
merged
- 05:46 PM Bug #42228 (Fix Under Review): mgr/dashboard: backend API test failure "test_access_permissions"
- 05:33 PM Bug #42228: mgr/dashboard: backend API test failure "test_access_permissions"
- The value of mon_pg_warn_min_per_osd is used for selecting the number of PGs. For vstart clusters, its value is 3. T...
- 03:07 PM Bug #42228: mgr/dashboard: backend API test failure "test_access_permissions"
- Another reason could be https://github.com/ceph/ceph/pull/30463 as it introduced a new method one could use to mount ...
- 02:46 PM Bug #42228: mgr/dashboard: backend API test failure "test_access_permissions"
- The problem seems to be...
- 12:02 PM Bug #42228: mgr/dashboard: backend API test failure "test_access_permissions"
- Directly after vstart_runner.py is executed the cluster seems to be fine...
- 11:47 AM Bug #42228: mgr/dashboard: backend API test failure "test_access_permissions"
- I found out that the cluster somehow gets into an unhealthy state, which causes the problem....
- 11:15 AM Bug #42228: mgr/dashboard: backend API test failure "test_access_permissions"
- On my new build I get this error, so the change that broke the test comes from outside the dashboard.
- 02:15 PM Bug #42117: MDS: daemon and cephfs-data-scan dump core on (probably) damaged omap entry
- And trace for cephfs-data-scan:...
- 02:08 PM Bug #42117: MDS: daemon and cephfs-data-scan dump core on (probably) damaged omap entry
- Here's a trace of the MDS:...
- 06:24 AM Feature #41824: mds: aggregate subtree authorities for display in `fs top`
- Patrick, that's an informative metric to have. However, this is an *MDS* related metric. Currently, with `fs top`, al...
- 06:16 AM Bug #42238 (Fix Under Review): cephfs-shell: setxattr() is passed extra length argument
- 06:07 AM Bug #42238 (Resolved): cephfs-shell: setxattr() is passed extra length argument
- ...
10/08/2019
- 06:44 PM Feature #41842 (Pending Backport): mgr/volumes: list FS subvolumes, subvolume groups, and their s...
- 03:00 PM Bug #42228: mgr/dashboard: backend API test failure "test_access_permissions"
- I tested it on an older compiled cluster and it worked... As on newer builds it fails I assume it's a code change som...
- 11:59 AM Bug #42228 (In Progress): mgr/dashboard: backend API test failure "test_access_permissions"
- 11:41 AM Bug #42228 (Resolved): mgr/dashboard: backend API test failure "test_access_permissions"
- I got this error on my local system (based on master) and it also failed on a PR test (https://jenkins.ceph.com/job/c...
- 09:49 AM Bug #40411 (Resolved): pybind: Add standard error message and fix print of path as byte object in...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 AM Bug #41006 (Resolved): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 AM Bug #41031 (Resolved): qa: malformed job
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 AM Bug #41163 (Resolved): cephfs-shell: Convert files path type from string to bytes
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:48 AM Bug #41218 (Resolved): mgr/volumes: retry spawning purge threads on failure
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 09:43 AM Backport #40894 (Resolved): nautilus: mds: cleanup truncating inodes when standby replay mds trim...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29591
m...
- 09:42 AM Backport #41096 (Resolved): nautilus: mds: map client_caps been inserted by mistake
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29878
m...
- 09:42 AM Backport #41099 (Resolved): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29879
m...
- 09:42 AM Backport #41107 (Resolved): nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29938
m...
- 09:42 AM Backport #41128 (Resolved): nautilus: qa: power off still resulted in client sending session close
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29983
m...
- 09:41 AM Backport #40895 (Resolved): nautilus: pybind: Add standard error message and fix print of path as...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30026
m...
- 09:40 AM Backport #40897 (Resolved): nautilus: ceph_volume_client: fs_name must be converted to string bef...
- backport PR https://github.com/ceph/ceph/pull/30030
merge commit cbd6cc682bd335a00f63d0ae04cdaa705ce392be (v14.2.4-1...
- 09:40 AM Backport #40495 (Resolved): nautilus: test_volume_client: declare only one default for python ver...
- backport PR https://github.com/ceph/ceph/pull/30030
merge commit cbd6cc682bd335a00f63d0ae04cdaa705ce392be (v14.2.4-1...
- 09:40 AM Backport #40887 (Resolved): nautilus: ceph_volume_client: to_bytes converts NoneType object str
- backport PR https://github.com/ceph/ceph/pull/30030
merge commit cbd6cc682bd335a00f63d0ae04cdaa705ce392be (v14.2.4-1...
- 09:40 AM Backport #40857 (Resolved): nautilus: ceph_volume_client: python program embedded in test_volume_...
- backport PR https://github.com/ceph/ceph/pull/30030
merge commit cbd6cc682bd335a00f63d0ae04cdaa705ce392be (v14.2.4-1...
- 09:39 AM Backport #40854 (Resolved): nautilus: test_volume_client: test_put_object_versioned is unreliable
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30030
...
- 09:39 AM Backport #40900 (Resolved): nautilus: mds: only evict an unresponsive client when another client ...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30031
m...
- 09:39 AM Backport #41113 (Resolved): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30032
m...
- 09:38 AM Backport #41276 (Resolved): nautilus: qa: malformed job
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30038
m...
- 09:38 AM Backport #41465 (Resolved): nautilus: mount.ceph: doesn't accept "strictatime"
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30039
m...
- 09:38 AM Backport #41467 (Resolved): nautilus: mds: recall capabilities more regularly when under cache pr...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30040
m...
- 09:37 AM Backport #41477 (Resolved): nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second >=...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30041
m...
- 09:37 AM Backport #41269 (Resolved): nautilus: cephfs-shell: Convert files path type from string to bytes
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30057
m...
- 09:37 AM Backport #41851 (Resolved): nautilus: mds: MDSIOContextBase instance leak
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30418
m...
- 09:37 AM Backport #41855 (Resolved): nautilus: client: removing dir reports "not empty" issue due to clien...
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30442
m...
- 09:36 AM Backport #41889 (Resolved): nautilus: mgr/volumes: retry spawning purge threads on failure
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/30455
m...
- 07:59 AM Backport #40165: mimic: mount: key parsing fail when doing a remount
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29225
m...
- 07:57 AM Documentation #42220 (Resolved): doc: rearrange mounting with kernel doc
- Move mount commands for Ceph cluster with CephX up in doc since CephX is enabled by default.
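For context, the kind of kernel mount command that page documents looks roughly like the following (host name, mount point, and secret file path are placeholders, not taken from this ticket):
  # kernel-client mount authenticating as client.admin via CephX
  sudo mkdir -p /mnt/cephfs
  sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret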
- 07:42 AM Bug #41948 (Resolved): nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode does not ...
10/07/2019
- 07:41 PM Backport #40894: nautilus: mds: cleanup truncating inodes when standby replay mds trim log segments
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29591
merged
- 07:40 PM Backport #41096: nautilus: mds: map client_caps been inserted by mistake
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29878
merged
- 07:40 PM Backport #41099: nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29879
merged
- 07:40 PM Backport #41107: nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29938
merged
- 07:39 PM Backport #41128: nautilus: qa: power off still resulted in client sending session close
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/29983
merged
- 07:38 PM Backport #40895: nautilus: pybind: Add standard error message and fix print of path as byte objec...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30026
merged
- 07:37 PM Backport #40897: nautilus: ceph_volume_client: fs_name must be converted to string before using it
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30030
merged
- 07:37 PM Backport #40495: nautilus: test_volume_client: declare only one default for python version
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30030
merged
- 07:37 PM Backport #40887: nautilus: ceph_volume_client: to_bytes converts NoneType object str
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30030
merged
- 07:37 PM Backport #40857: nautilus: ceph_volume_client: python program embedded in test_volume_client.py u...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30030
merged
- 07:37 PM Backport #40854: nautilus: test_volume_client: test_put_object_versioned is unreliable
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30030
merged
- 07:36 PM Backport #40900: nautilus: mds: only evict an unresponsive client when another client wants its caps
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30031
merged
- 07:36 PM Backport #41113: nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/30032
merged
- 07:35 PM Backport #41276: nautilus: qa: malformed job
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30038
merged
- 07:35 PM Backport #41465: nautilus: mount.ceph: doesn't accept "strictatime"
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30039
merged
- 07:34 PM Backport #41467: nautilus: mds: recall capabilities more regularly when under cache pressure
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30040
merged
- 07:34 PM Backport #41477: nautilus: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30041
merged
- 07:33 PM Backport #41269: nautilus: cephfs-shell: Convert files path type from string to bytes
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30057
merged
- 07:33 PM Backport #41851: nautilus: mds: MDSIOContextBase instance leak
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30418
merged
- 07:32 PM Backport #41855: nautilus: client: removing dir reports "not empty" issue due to client side fill...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30442
merged
- 07:30 PM Backport #40165 (Resolved): mimic: mount: key parsing fail when doing a remount
- 07:29 PM Backport #41889: nautilus: mgr/volumes: retry spawning purge threads on failure
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/30455
merged
- 07:29 PM Bug #41948: nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode does not cleanup unn...
- merged https://github.com/ceph/ceph/pull/30508
- 07:08 PM Backport #41888 (In Progress): nautilus: client: lazyio synchronize does not get file size
- 07:06 PM Backport #42149 (In Progress): nautilus: mgr/volumes: missing protection for `fs volume rm` command
- 07:02 PM Documentation #40689: mgr/volumes: document mgr fs volumes CLI
- nautilus backport will be handled via #41841 which contains a follow-up commit on this
- 06:57 PM Backport #42147 (In Progress): nautilus: mds: mds returns -5 error when the deleted file does not...
- 06:56 PM Backport #42145 (In Progress): nautilus: client: return error when someone passes bad whence valu...
- 06:55 PM Backport #42129 (In Progress): nautilus: doc/ceph-fuse: -k missing in man page
- 06:54 PM Backport #42121 (In Progress): nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in l...
- 06:53 PM Backport #42040 (In Progress): nautilus: client: _readdir_cache_cb() may use the readdir_cache al...
- 06:52 PM Backport #42035 (In Progress): nautilus: client: lseek function does not return the correct value.
- 06:51 PM Backport #41899 (In Progress): nautilus: mds: cache drop command does not drive cap recall
- 06:50 PM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- @Zheng,
any update, or anything we can help with?
seems like we can change from...
- 04:54 PM Bug #42213 (Resolved): test_reconnect_eviction fails with "RuntimeError: MDS in reject state up:a...
- seen here: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-02_14:24:11-kcephfs-wip-yuri6-testing-2019-10-01-1605-na...
- 02:22 PM Bug #40102: qa: probable kernel deadlock/oops during umount on testing branch
- The backport to 4.19 was incorrect, 4.19.76 is busted. Fixed in 4.19.77.
- 02:21 PM Bug #40102 (Resolved): qa: probable kernel deadlock/oops during umount on testing branch
- https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=87bc5b895d94a0f40fe170d4cf5771c8e8f85d15
- 01:54 PM Bug #41415: mgr/volumes: AssertionError: '33' != 'new_pool'
- Failure on the linked job for reference -...
- 01:52 PM Bug #41415: mgr/volumes: AssertionError: '33' != 'new_pool'
- Couldn't reproduce this issue locally -
> self.assertEqual(desired_pool, new_pool)
(Pdb) p desired_pool
'new_poo...
- 01:42 PM Bug #42117 (Need More Info): MDS: daemon and cephfs-data-scan dump core on (probably) damaged oma...
- Do you have any core dumps available?
- 01:24 PM Documentation #42205 (Resolved): doc: update "mount using FUSE" page
- 11:56 AM Documentation #42196 (Resolved): doc: Document inter-mds export process
- Overview of the inter-MDS export process during subtree migrations.
- 11:51 AM Backport #41865 (New): nautilus: mds: ask idle client to trim more caps
- first attempted backport, https://github.com/ceph/ceph/pull/30750, was closed because it was incomplete
- 11:50 AM Backport #41865 (In Progress): nautilus: mds: ask idle client to trim more caps
- 10:54 AM Documentation #42195 (Resolved): Add doc for exporting cephfs over nfs server deployed using rook
- 09:16 AM Bug #42193 (New): luminous: MDS crash running upgrade test
- teuthology run here: http://qa-proxy.ceph.com/teuthology/yuriw-2019-10-05_14:51:39-fs-wip-yuri-testing-2019-10-04-144...
- 09:01 AM Cleanup #42191 (Fix Under Review): mds: reorg MDCache header
- 07:06 AM Cleanup #42191 (Resolved): mds: reorg MDCache header
- 08:19 AM Cleanup #42192 (Fix Under Review): mds: reorg MDLog header
- 08:10 AM Cleanup #42192 (Resolved): mds: reorg MDLog header
- 04:23 AM Documentation #42190 (Resolved): doc: document MDS journal event types
10/04/2019
- 09:02 AM Backport #42180 (Resolved): nautilus: mgr/volumes: creating subvolume and subvolume group snapsho...
- https://github.com/ceph/ceph/pull/31076
- 02:08 AM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> Patrick Donnelly wrote:
> > When the session is closed/blacklisted/evicted (or: the session ca...
10/03/2019
- 04:15 PM Bug #36192 (Resolved): Internal fragment of ObjectCacher
- 04:15 PM Backport #36664 (Rejected): jewel: Internal fragment of ObjectCacher
- tracker cleanup - jewel is EOL
- 01:45 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Patrick Donnelly wrote:
> When the session is closed/blacklisted/evicted (or: the session cannot be reclaimed) we sh...
- 01:20 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> Patrick Donnelly wrote:
> >
> > I haven't studied this bit of code but why can't we keep a c...
- 01:17 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Patrick Donnelly wrote:
>
> I haven't studied this bit of code but why can't we keep a closed/dead session in the ...
- 01:08 PM Backport #41508 (In Progress): nautilus: add information about active scrubs to "ceph -s" (and el...
- 06:26 AM Backport #41508: nautilus: add information about active scrubs to "ceph -s" (and elsewhere)
- Patrick Donnelly wrote:
> Venky, please do this backport.
ACK
- 01:02 PM Documentation #41316 (Resolved): doc: update documentation for LazyIO
- 08:38 AM Documentation #24641 (Resolved): Document behaviour of fsync-after-close
- 08:36 AM Documentation #41470 (Resolved): Document requirements for using cephfs
- 08:04 AM Backport #40130 (Resolved): mimic: Document behaviour of fsync-after-close
- This update was made using the script "backport-resolve-issue".
backport PR https://github.com/ceph/ceph/pull/29765
m...
- 06:42 AM Bug #42096 (Pending Backport): mgr/volumes: creating subvolume and subvolume group snapshot fails
10/02/2019
- 01:16 PM Backport #42162 (Rejected): mimic: qa: add testing for lazyio
- 01:16 PM Backport #42161 (Resolved): nautilus: qa: add testing for lazyio
- https://github.com/ceph/ceph/pull/30769
- 01:14 PM Backport #42160 (Resolved): luminous: osdc: objecter ops output does not have useful time informa...
- https://github.com/ceph/ceph/pull/33294
- 01:14 PM Backport #42159 (Resolved): mimic: osdc: objecter ops output does not have useful time information
- https://github.com/ceph/ceph/pull/31384
- 01:14 PM Backport #42158 (Resolved): nautilus: osdc: objecter ops output does not have useful time informa...
- https://github.com/ceph/ceph/pull/31081
- 01:14 PM Backport #42157 (Rejected): nautilus: cephfs-shell: rmdir doesn't complain when directory is not ...
- https://github.com/ceph/ceph/pull/31080
- 01:12 PM Backport #42156 (Resolved): mimic: mds: infinite loop in Locker::file_update_finish()
- https://github.com/ceph/ceph/pull/31284
- 01:12 PM Backport #42155 (Resolved): nautilus: mds: infinite loop in Locker::file_update_finish()
- https://github.com/ceph/ceph/pull/31079
- 01:10 PM Backport #42149 (Resolved): nautilus: mgr/volumes: missing protection for `fs volume rm` command
- https://github.com/ceph/ceph/pull/30768
- 01:10 PM Backport #42148 (Resolved): mimic: mds: mds returns -5 error when the deleted file does not exist
- https://github.com/ceph/ceph/pull/31381
- 01:10 PM Backport #42147 (Resolved): nautilus: mds: mds returns -5 error when the deleted file does not exist
- https://github.com/ceph/ceph/pull/30767
- 01:10 PM Backport #42146 (Resolved): mimic: client: return error when someone passes bad whence value to l...
- https://github.com/ceph/ceph/pull/31380
- 01:10 PM Backport #42145 (Resolved): nautilus: client: return error when someone passes bad whence value t...
- https://github.com/ceph/ceph/pull/30766
- 01:10 PM Backport #42143 (Resolved): mimic: mds:split the dir if the op makes it oversized, because some o...
- https://github.com/ceph/ceph/pull/31379
- 01:10 PM Backport #42142 (Resolved): nautilus: mds:split the dir if the op makes it oversized, because som...
- https://github.com/ceph/ceph/pull/31302
- 01:08 PM Backport #42130 (Resolved): mimic: doc/ceph-fuse: -k missing in man page
- https://github.com/ceph/ceph/pull/30936
- 01:08 PM Backport #42129 (Resolved): nautilus: doc/ceph-fuse: -k missing in man page
- https://github.com/ceph/ceph/pull/30765
- 01:07 PM Backport #42123 (Resolved): luminous: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- https://github.com/ceph/ceph/pull/33293
- 01:07 PM Backport #42122 (Resolved): mimic: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- https://github.com/ceph/ceph/pull/30918
- 01:07 PM Backport #42121 (Resolved): nautilus: client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- https://github.com/ceph/ceph/pull/30764
- 12:42 PM Documentation #40957 (Resolved): doc: add section to manpage for recover_session= option
- 11:10 AM Documentation #41952 (Resolved): doc: cleanup CephFS landing page
- 11:03 AM Backport #41508: nautilus: add information about active scrubs to "ceph -s" (and elsewhere)
- Venky, please do this backport.
- 10:31 AM Bug #41892 (Resolved): qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- nautilus backport being handled via #16656 - nautilus backport PR is https://github.com/ceph/ceph/pull/30521
- 07:24 AM Bug #42117 (Need More Info): MDS: daemon and cephfs-data-scan dump core on (probably) damaged oma...
- This was observed with ceph-12.2.10, but afaict the code path hasn't changed.
The root cause is not definitive, bu...
- 04:24 AM Bug #42107 (Pending Backport): client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
10/01/2019
- 02:05 PM Bug #42107 (Resolved): client: no method to handle SEEK_HOLE and SEEK_DATA in lseek
- There is no method to handle SEEK_HOLE and SEEK_DATA in lseek in ceph-fuse
- 02:03 PM Tasks #39998: client: audit ACL
- https://tracker.ceph.com/issues/17594#note-37
- 10:02 AM Bug #42101 (Resolved): test_cephfs_shell: test_help doesn't test help
- The test runs the help command without any arguments, which prints the list of commands instead of the help text. Pass "all" to help ins...
- 08:22 AM Bug #40864 (Pending Backport): cephfs-shell: rmdir doesn't complain when directory is not empty
- 08:18 AM Cleanup #42043 (Resolved): mds: reorg MDBalancer header
- 08:16 AM Bug #41871 (Pending Backport): client: return error when someone passes bad whence value to llseek
- 07:45 AM Bug #42100 (Resolved): cephfs-shell: always returns zero, even when a command has failed
- 07:40 AM Bug #42096 (Fix Under Review): mgr/volumes: creating subvolume and subvolume group snapshot fails
- 07:36 AM Bug #41841 (Pending Backport): mgr/volumes: missing protection for `fs volume rm` command
- 04:01 AM Documentation #40957 (Fix Under Review): doc: add section to manpage for recover_session= option
- 03:54 AM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- Jeff Layton wrote:
> I've started working on patches to add this, but I see a potential problem. The idea is to dele...
- 02:36 AM Backport #40131 (Resolved): nautilus: Document behaviour of fsync-after-close
09/30/2019
- 01:53 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- I've started working on patches to add this, but I see a potential problem. The idea is to delegate a range of inodes...
- 01:36 PM Bug #42096 (Resolved): mgr/volumes: creating subvolume and subvolume group snapshot fails
- ...
- 12:49 PM Tasks #42085: qa: create tests for new recover_session=clean option
- Note that this can only be run against the testing kernel.
- 12:38 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> > If a directory inode is ephemerally pinned, then we just note that in the inode as...
- 10:31 AM Feature #41302: mds: add ephemeral random and distributed export pins
> If a directory inode is ephemerally pinned, then we just note that in the inode as a boolean flag. It remains pin...
- 03:53 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> Patrick Donnelly wrote:
> > Sidharth Anupkrishnan wrote:
> > > At a first look, I...
- 11:27 AM Bug #41871 (In Progress): client: return error when someone passes bad whence value to llseek
- 11:26 AM Documentation #40957 (In Progress): doc: add section to manpage for recover_session= option
- 04:07 AM Documentation #41783 (Resolved): doc: document MDSs journaling mechanism and metadata pool
- 04:04 AM Feature #41910 (Resolved): qa: allow vstart_runner to perform tests on kclient mounts
09/27/2019
- 07:19 PM Bug #42088 (Resolved): 'ceph -s' does not show standbys if there are no filesystems
- - start up mon, mgr, osd
- start up mds (or two)
- but do not create a file system...
ceph -s...
- 04:04 PM Tasks #42085 (Resolved): qa: create tests for new recover_session=clean option
- Add new tests in ceph/qa to test the new recover_session=clean mount option in kcephfs, and set them up to run in teu...
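As a rough sketch of what such a test exercises (monitor address, mount point, credentials, and client id are placeholders, and the teuthology wiring is not shown):
  # mount the kernel client with the new option (requires the testing kernel)
  sudo mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,recover_session=clean
  # evict the client on the MDS side, then poke the mount to confirm it recovers cleanly
  ceph tell mds.0 client evict id=<client_id>
  ls /mnt/cephfs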
- 06:51 AM Documentation #41872 (Resolved): doc: update CephFS Quick Start guide
- 06:23 AM Documentation #42044 (Pending Backport): doc/ceph-fuse: -k missing in man page
- 06:20 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Patrick Donnelly wrote:
> Sidharth Anupkrishnan wrote:
> > At a first look, I think there is no need to make ephem...
- 04:48 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> Sidharth Anupkrishnan wrote:
> > At a first look, I think there is no need to make...
- 04:45 AM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> At a first look, I think there is no need to make ephemeral_export_random_pin an xat...
- 05:37 AM Bug #42057 (In Progress): cephfs-shell: not compatible with cmd2 versions after 0.9.13
09/26/2019
- 06:07 PM Feature #41302: mds: add ephemeral random and distributed export pins
- Sidharth Anupkrishnan wrote:
> At a first look, I think there is no need to make ephemeral_export_random_pin an xat...
- 04:01 PM Feature #41302: mds: add ephemeral random and distributed export pins
- At a first look, I think there is no need to make ephemeral_export_random_pin an xattr like export_pin because for th...
- 01:27 PM Bug #40283 (Pending Backport): qa: add testing for lazyio
- 01:25 PM Bug #40821 (Pending Backport): osdc: objecter ops output does not have useful time information
- 01:23 PM Bug #41434 (Pending Backport): mds: infinite loop in Locker::file_update_finish()
- 01:22 PM Bug #41880 (Pending Backport): mds:split the dir if the op makes it oversized, because some ops m...
- 01:20 PM Cleanup #41678 (Resolved): mds: reorg LogSegment header
- 01:19 PM Bug #41868 (Pending Backport): mds: mds returns -5 error when the deleted file does not exist
- 01:17 PM Bug #41892 (Pending Backport): qa: convert kcephfs qa tests to use mount.ceph auto-discovery feat...
- 10:41 AM Bug #42062 (Resolved): qa: AttributeError: 'MonitorThrasher' object has no attribute 'fs'
- ...
- 10:34 AM Bug #42061 (Won't Fix): volume_client: AssertionError: 237 != 8
- ...
- 10:05 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> > I plan to go a step further and not permit a tracker ticket to go backwards like this, i.e....
- 08:06 AM Bug #42057 (Resolved): cephfs-shell: not compatible with cmd2 versions after 0.9.13
- "-b" options fail since load command from cmd2 changed t run_script.
09/25/2019
- 09:36 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Oh, and one more thing: issues in Resolved status can be reverted to Need Review (or In Progress, or even New) as well.
- 09:29 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- > I plan to go a step further and not permit a tracker ticket to go backwards like this, i.e. from PB back to NR. Ins...
- 06:16 AM Documentation #42044 (Resolved): doc/ceph-fuse: -k missing in man page
- 05:47 AM Cleanup #42043 (Fix Under Review): mds: reorg MDBalancer header
- 05:40 AM Cleanup #42043 (Resolved): mds: reorg MDBalancer header
09/24/2019
- 07:54 PM Backport #42040 (Resolved): nautilus: client: _readdir_cache_cb() may use the readdir_cache alrea...
- https://github.com/ceph/ceph/pull/30763
- 07:54 PM Backport #42039 (Resolved): luminous: client: _readdir_cache_cb() may use the readdir_cache alrea...
- https://github.com/ceph/ceph/pull/30934
- 07:54 PM Backport #42038 (Resolved): mimic: client: _readdir_cache_cb() may use the readdir_cache already ...
- https://github.com/ceph/ceph/pull/30933
- 07:52 PM Backport #42035 (Resolved): nautilus: client: lseek function does not return the correct value.
- https://github.com/ceph/ceph/pull/30762
- 07:52 PM Backport #42034 (Resolved): mimic: client: lseek function does not return the correct value.
- https://github.com/ceph/ceph/pull/30932
- 06:05 PM Bug #42020 (Fix Under Review): qa: fuse_mount should check if mounted in umount_wait
- 08:20 AM Bug #42020 (Resolved): qa: fuse_mount should check if mounted in umount_wait
- ...
- 11:32 AM Feature #41311 (Resolved): deprecate CephFS inline_data support
- 11:30 AM Bug #41148 (Pending Backport): client: _readdir_cache_cb() may use the readdir_cache already clear
- 11:16 AM Cleanup #41665 (Resolved): mds: reorg Locker header
- 11:11 AM Bug #41837 (Pending Backport): client: lseek function does not return the correct value.
- 08:34 AM Bug #42022 (Need More Info): mgr/volumes: "Timed out after 30s waiting for ./volumes/_deleting to...
- ...
- 07:03 AM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Nathan Cutler wrote:
> When there are multiple PRs fixing a single tracker, it's a good idea to "unset" (depopulate)...
- 03:04 AM Documentation #41472 (Resolved): doc: add multiple active MDSs and Subtree Management in CephFS
- 03:00 AM Documentation #41872 (Fix Under Review): doc: update CephFS Quick Start guide
- 01:47 AM Documentation #42016 (Resolved): doc: layout rest of intro page
- Include links to different sections of CephFS Documentation: "Concepts" (architecture), "Getting Started", "Mounting"...
09/23/2019
- 05:30 PM Backport #41890 (In Progress): nautilus: mount.ceph: enable consumption of ceph keyring files
- 04:19 AM Backport #41890: nautilus: mount.ceph: enable consumption of ceph keyring files
- Should pick up backport for https://tracker.ceph.com/issues/41892 as well once it's merged.
- 05:16 PM Bug #39987: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- When there are multiple PRs fixing a single tracker, it's a good idea to "unset" (depopulate) the Pull request ID fie...
- 01:59 PM Documentation #41999 (Resolved): CephFS Documentation Sprint 2
- 01:24 PM Documentation #41952 (In Progress): doc: cleanup CephFS landing page
- 07:35 AM Documentation #41952 (Resolved): doc: cleanup CephFS landing page
- Remove or move links on CephFS landing page to Table of Contents on the left
- 12:53 PM Bug #41337 (Resolved): mgr/volumes: handle incorrect pool_layout setting during `fs subvolume/sub...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:53 PM Bug #41371 (Resolved): mgr/volumes: subvolume and subvolume group path exists even when creation ...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:53 PM Bug #41617 (Resolved): mgr/volumes: prevent negative subvolume size
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:52 PM Bug #41752 (Resolved): mgr/volumes: drop unused size in fs volume create
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:52 PM Bug #41903 (Resolved): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph-mgr dae...
- While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are ...
- 12:23 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
- Backport of follow-on fix: https://github.com/ceph/ceph/pull/30508
- 12:17 PM Backport #41933 (Resolved): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hangs a...
- 12:16 PM Backport #41884 (Resolved): nautilus: mgr/volumes: prevent negative subvolume size
- 12:16 PM Backport #41850 (Resolved): nautilus: mgr/volumes: drop unused size in fs volume create
- 12:16 PM Backport #41444 (Resolved): nautilus: mgr/volumes: handle incorrect pool_layout setting during `f...
- 12:16 PM Backport #41437 (Resolved): nautilus: mgr/volumes: subvolume and subvolume group path exists even...
- 06:59 AM Feature #41842 (Fix Under Review): mgr/volumes: list FS subvolumes, subvolume groups, and their s...
- 06:57 AM Documentation #40689 (Resolved): mgr/volumes: document mgr fs volumes CLI
- 06:50 AM Cleanup #41951 (Resolved): mds: obsolete mds_cache_size
mds_cache_memory_limit is preferred. Remove the last bits of support for mds_cache_size.
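For reference, the preferred knob is the memory target, set for example like this (the 4 GiB value is only illustrative, not from this ticket):
  # size the MDS cache by memory instead of the old inode-count limit
  ceph config set mds mds_cache_memory_limit 4294967296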
- 05:06 AM Backport #41865: nautilus: mds: ask idle client to trim more caps
- Should also backport https://tracker.ceph.com/issues/41899
- 04:22 AM Feature #41910 (Fix Under Review): qa: allow vstart_runner to perform tests on kclient mounts
- 04:17 AM Bug #41892 (Fix Under Review): qa: convert kcephfs qa tests to use mount.ceph auto-discovery feat...
- 02:11 AM Bug #41935 (Duplicate): ceph mdss keep on crashing
- 02:04 AM Bug #41948 (Fix Under Review): nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode d...
09/22/2019
- 07:34 AM Bug #41935: ceph mdss keep on crashing
- https://tracker.ceph.com/issues/41948
- 06:52 AM Bug #41948 (Resolved): nautilus: mds: incomplete backport of #40444 (MDCache::cow_inode does not ...
- backport #40445 incomplete
- 05:06 AM Documentation #41893 (Resolved): doc: mds state diagram color description mistake
09/20/2019
- 12:43 PM Feature #39129: create mechanism to delegate ranges of inode numbers to client
- At a high level, here's what I think we need to do:
Add a new delegated_inos field to session_info_t in the MDS co...
- 09:59 AM Backport #41933 (In Progress): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hang...
- https://github.com/ceph/ceph/pull/29926
09/19/2019
- 03:42 PM Bug #41935: ceph mdss keep on crashing
- Looks like the backport of https://tracker.ceph.com/issues/39987 to nautilus was incomplete, it's missing https://git...
- 02:23 PM Bug #41935: ceph mdss keep on crashing
- ...
- 02:21 PM Bug #41935 (Duplicate): ceph mdss keep on crashing
- I updated ceph to 14.2.3 yesterday. Everything was running fine, but today the MDSes started crashing. I tried restarting all...
- 01:30 PM Backport #41933 (Resolved): nautilus: mgr/volumes: issuing `fs subvolume getpath` command hangs a...
- https://github.com/ceph/ceph/pull/29926
- 01:29 PM Bug #41903 (Pending Backport): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph...
- 10:10 AM Bug #40283 (Fix Under Review): qa: add testing for lazyio
09/18/2019
- 02:12 PM Feature #41910 (Resolved): qa: allow vstart_runner to perform tests on kclient mounts
- Add a new --kclient switch to vstart_runner that tells it to use kernel mounts instead of FUSE.
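A hypothetical invocation, once the switch exists, would look something like this (run from the build directory; the test name is only an example):
  # run a CephFS qa test against a kernel mount instead of ceph-fuse
  python3 ../qa/tasks/vstart_runner.py --kclient tasks.cephfs.test_full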
- 10:57 AM Backport #41889 (In Progress): nautilus: mgr/volumes: retry spawning purge threads on failure
- 10:57 AM Bug #41892: qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- Instead, I think we'll just convert the existing kernel_mount.py code to use the new functionality so that this ...
- 08:17 AM Bug #41903 (Fix Under Review): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph...
- 05:55 AM Bug #41903 (Resolved): mgr/volumes: issuing `fs subvolume getpath` command hangs and ceph-mgr dae...
- $ ceph fs subvolume create vol00 subvol00
$ ceph fs subvolume getpath vol00 subvol00
The command just hangs and c...
- 03:17 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- (gdb) p m->peer
$1 = {cap_id = {v = 2782052343}, seq = {v = 4}, mseq = {v = 0}, mds = {v = 1}, flags = 2 '\002'}
- 01:06 AM Backport #41856 (In Progress): mimic: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/30443
- 01:05 AM Backport #41855 (In Progress): nautilus: client: removing dir reports "not empty" issue due to cl...
- https://github.com/ceph/ceph/pull/30442
09/17/2019
- 06:07 PM Backport #41899 (Resolved): nautilus: mds: cache drop command does not drive cap recall
- https://github.com/ceph/ceph/pull/30761
- 02:28 PM Backport #41884 (In Progress): nautilus: mgr/volumes: prevent negative subvolume size
- https://github.com/ceph/ceph/pull/29926
- 08:33 AM Backport #41884 (Resolved): nautilus: mgr/volumes: prevent negative subvolume size
- 02:00 PM Backport #41850 (In Progress): nautilus: mgr/volumes: drop unused size in fs volume create
- https://github.com/ceph/ceph/pull/29926
- 01:29 PM Bug #41835 (Pending Backport): mds: cache drop command does not drive cap recall
- 11:35 AM Documentation #41893 (Resolved): doc: mds state diagram color description mistake
- The document mds-states.rst has a mistaken description of the state diagram colors.
- 10:36 AM Bug #41892 (Resolved): qa: convert kcephfs qa tests to use mount.ceph auto-discovery features
- Recently, a patchset was merged that added the ability for mount.ceph to discover mon addrs and secrets from a local ...
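With that patchset the mount command can presumably be reduced to something like the following, letting mount.ceph pull the mon addresses and key from the local ceph.conf and keyring (the empty-device ":/" syntax here is an assumption, not confirmed by this entry):
  # mon addresses and secret discovered from /etc/ceph/ceph.conf and the admin keyring
  sudo mount -t ceph :/ /mnt/cephfs -o name=admin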
- 09:26 AM Feature #41842 (In Progress): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- 08:40 AM Backport #41890 (Resolved): nautilus: mount.ceph: enable consumption of ceph keyring files
- https://github.com/ceph/ceph/pull/30521
- 08:35 AM Backport #41889 (Resolved): nautilus: mgr/volumes: retry spawning purge threads on failure
- https://github.com/ceph/ceph/pull/30455
- 08:34 AM Backport #41888 (Resolved): nautilus: client: lazyio synchronize does not get file size
- https://github.com/ceph/ceph/pull/30769
- 08:34 AM Backport #41887 (Rejected): mimic: client: lazyio synchronize does not get file size
- https://github.com/ceph/ceph/pull/30931
- 08:34 AM Backport #41886 (Resolved): nautilus: mds: client evicted twice in one tick
- https://github.com/ceph/ceph/pull/30951
- 08:34 AM Backport #41885 (Resolved): mimic: mds: client evicted twice in one tick
- https://github.com/ceph/ceph/pull/30950
- 08:06 AM Bug #41836: qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or directory" i...
- I think we can just whitelist the error 'Error recovering journal'
- 07:04 AM Bug #41880 (Resolved): mds:split the dir if the op makes it oversized, because some ops maybe in ...
- 06:48 AM Bug #41841 (Fix Under Review): mgr/volumes: missing protection for `fs volume rm` command
- 04:00 AM Backport #41851 (In Progress): nautilus: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30418
- 03:59 AM Backport #41852 (In Progress): mimic: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30417
- 03:46 AM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- I'd like to know why the cap import message's seq is 1, mseq is 0. please use gdb to print cap import message's peer ...
- 12:46 AM Bug #41868: mds: mds returns -5 error when the deleted file does not exist
- Jeff Layton wrote:
> I agree that EIO makes no sense here, but since you're looking up files by inode number, -ESTAL...
09/16/2019
- 11:50 PM Feature #16656 (Pending Backport): mount.ceph: enable consumption of ceph keyring files
- 08:09 PM Bug #41799 (Fix Under Review): client: FAILED assert(cap == in->auth_cap)
- 08:04 PM Bug #41218 (Pending Backport): mgr/volumes: retry spawning purge threads on failure
- 08:04 PM Bug #41219 (Resolved): mgr/volumes: send purge thread (and other) health warnings to `ceph status`
- Backport tracked by #41218.
- 08:00 PM Bug #41310 (Pending Backport): client: lazyio synchronize does not get file size
- 07:59 PM Cleanup #41178 (Resolved): mds: reorg DamageTable header
- 07:59 PM Cleanup #41178 (Pending Backport): mds: reorg DamageTable header
- 07:58 PM Cleanup #41430 (Resolved): mds: reorg JournalPointer header
- 07:55 PM Bug #41585 (Pending Backport): mds: client evicted twice in one tick
- 07:06 PM Bug #41728 (Need More Info): mds: hang during fragmentdir
- 02:28 PM Bug #41728: mds: hang during fragmentdir
- Thanks!
- 01:56 PM Bug #41728: mds: hang during fragmentdir
- Nathan Fish wrote:
> When doing a parallel cp, the active MDS on the CephFS hung on a fragmentdir op.
> It might be...
- 06:44 PM Bug #41617 (Pending Backport): mgr/volumes: prevent negative subvolume size
- 06:37 PM Documentation #41451 (Resolved): Document distributed metadata cache
- 05:36 PM Bug #41868: mds: mds returns -5 error when the deleted file does not exist
- I agree that EIO makes no sense here, but since you're looking up files by inode number, -ESTALE would probably make ...
- 01:46 PM Bug #41868 (Fix Under Review): mds: mds returns -5 error when the deleted file does not exist
- 12:04 PM Bug #41868 (Resolved): mds: mds returns -5 error when the deleted file does not exist
- There are 2 nfs-ganesha ends:
1. The A side uses readdir to get all the file information in a directory,
and uses ...
- 03:01 PM Documentation #41872 (Resolved): doc: update CephFS Quick Start guide
- 02:56 PM Bug #41871: client: return error when someone passes bad whence value to llseek
- s/ceph_assert/ceph_abort/
- 01:52 PM Bug #41871 (Resolved): client: return error when someone passes bad whence value to llseek
- There are a number of ceph_assert calls in src/client/Client.cc that are probably not necessary. There are calls in l...
- 01:48 PM Bug #41837 (Fix Under Review): client: lseek function does not return the correct value.
- 02:41 AM Bug #41837 (Resolved): client: lseek function does not return the correct value.
- If pos is initialized to -1 in the lseek function, then when offset is 0, EINVAL may be returned.
- 11:36 AM Bug #41841 (In Progress): mgr/volumes: missing protection for `fs volume rm` command
- 06:10 AM Bug #41841 (Resolved): mgr/volumes: missing protection for `fs volume rm` command
- Currently one can remove a filesystem, its data and metadata pools, and MDSes with a `fs volume rm` ceph mgr command. May...
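One plausible shape for such a guard, sketched here as an assumption about the eventual fix rather than a confirmed interface, is an explicit confirmation flag:
  # refuse to remove the volume unless the caller explicitly confirms
  ceph fs volume rm vol00
  ceph fs volume rm vol00 --yes-i-really-mean-it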
- 10:52 AM Feature #40959 (Fix Under Review): mgr/volumes: allow setting uid, gid of subvolume and subvolume...
- 07:21 AM Backport #41865 (Resolved): nautilus: mds: ask idle client to trim more caps
- https://github.com/ceph/ceph/pull/30761
- 07:18 AM Backport #41861 (Rejected): nautilus: cephfs-shell: du must ignore non-directory files
- 07:17 AM Backport #41857 (Resolved): luminous: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/33292
- 07:17 AM Backport #41856 (Resolved): mimic: client: removing dir reports "not empty" issue due to client s...
- https://github.com/ceph/ceph/pull/30443
- 07:17 AM Backport #41855 (Resolved): nautilus: client: removing dir reports "not empty" issue due to clien...
- https://github.com/ceph/ceph/pull/30442
- 07:15 AM Backport #41854 (Rejected): mimic: mds: reject sessionless messages
- https://github.com/ceph/ceph/pull/30908
- 07:15 AM Backport #41853 (Resolved): nautilus: mds: reject sessionless messages
- https://github.com/ceph/ceph/pull/30843
- 07:15 AM Backport #41852 (Resolved): mimic: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30417
- 07:15 AM Backport #41851 (Resolved): nautilus: mds: MDSIOContextBase instance leak
- https://github.com/ceph/ceph/pull/30418
- 07:15 AM Backport #41850 (Resolved): nautilus: mgr/volumes: drop unused size in fs volume create
- 06:23 AM Feature #41842 (Resolved): mgr/volumes: list FS subvolumes, subvolume groups, and their snapshots
- Add commands to list FS subvolumes, subvolume groups, and their snapshots
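The new listing commands would plausibly follow the existing mgr/volumes CLI pattern, along these lines (exact command names are an assumption; vol00, group00, and subvol00 are placeholders):
  ceph fs subvolumegroup ls vol00
  ceph fs subvolume ls vol00 --group_name group00
  ceph fs subvolumegroup snapshot ls vol00 group00
  ceph fs subvolume snapshot ls vol00 subvol00 --group_name group00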
- 01:08 AM Bug #41836 (Resolved): qa: "cluster [ERR] Error recovering journal 0x200: (2) No such file or d...
- From: /ceph/teuthology-archive/pdonnell-2019-09-15_06:11:06-fs-wip-pdonnell-testing-20190915.030958-distro-basic-smit...
- 12:55 AM Bug #41835: mds: cache drop command does not drive cap recall
- Backport of #22446 is only for nautilus.
- 12:54 AM Bug #41835 (Fix Under Review): mds: cache drop command does not drive cap recall
- 12:50 AM Bug #41835 (Resolved): mds: cache drop command does not drive cap recall
- ...
09/13/2019
- 07:40 PM Bug #39641 (Resolved): cephfs-shell: 'du' command produces incorrect results
- Will be backported via #40371.
- 07:40 PM Bug #40371 (Pending Backport): cephfs-shell: du must ignore non-directory files
- 07:08 PM Documentation #40689 (Fix Under Review): mgr/volumes: document mgr fs volumes CLI
- 07:02 PM Bug #41752 (Pending Backport): mgr/volumes: drop unused size in fs volume create
- 06:25 PM Documentation #41826 (Resolved): doc: update CephFS summary and introduction
- 06:24 PM Documentation #41451 (Fix Under Review): Document distributed metadata cache
- 06:22 PM Documentation #41470 (Fix Under Review): Document requirements for using cephfs
- 06:19 PM Documentation #41738 (In Progress): Add documentation for that 'client direct access to data pool'
- 06:17 PM Documentation #41825 (Resolved): CephFS Documentation Sprint 1
- 06:12 PM Feature #41824 (New): mds: aggregate subtree authorities for display in `fs top`
- Each MDS is only aware of subtrees that border its own authoritative subtrees. This also affects rank 0.
Have each...
- 03:37 PM Feature #36608 (Resolved): mds: answering all pending getattr/lookups targeting the same inode in...
- 03:35 PM Feature #22446 (Pending Backport): mds: ask idle client to trim more caps
- 03:34 PM Bug #40746 (Pending Backport): client: removing dir reports "not empty" issue due to client side ...
- 03:32 PM Bug #41329 (Pending Backport): mds: reject sessionless messages
- 03:30 PM Bug #41346 (Pending Backport): mds: MDSIOContextBase instance leak
09/12/2019
- 11:24 PM Cleanup #41185 (Resolved): mds: reorg FSMapUser header
- 11:23 PM Cleanup #41428 (Resolved): mds: reorg InoTable header
- 11:22 PM Cleanup #41607 (Resolved): mds: reorg Anchor header
- 11:20 PM Bug #41654 (Resolved): mds: reorg LocalLock header
- 11:19 PM Cleanup #41679 (Resolved): mds: reorg LogEvent header
- 08:05 PM Bug #41800 (Resolved): qa: logrotate should tolerate connection resets
- During kclient runs, we reboot nodes. The logrotate exception causes the test to fail:...
- 04:49 PM Bug #41799: client: FAILED assert(cap == in->auth_cap)
- The issue affects all releases, including master
- 04:49 PM Bug #41799 (Resolved): client: FAILED assert(cap == in->auth_cap)
- The log below explains the issue clearly: the auth_cap was set to NULL in a previous remove_caps, and when add_update_cap...
- 06:17 AM Documentation #41472 (In Progress): doc: add multiple active MDSs and Subtree Management in CephFS
- 06:05 AM Documentation #41783 (Resolved): doc: document MDSs journaling mechanism and metadata pool
- 04:51 AM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- Patrick Donnelly wrote:
> [...]
>
> From the teuthology log.
yeh -- that masks the logging of the actual trace...
09/11/2019
- 10:03 PM Fix #41782 (Resolved): mds: allow stray directories to fragment and switch from 10 stray director...
- Stray directories can become too full which can result in unexpected ENOSPC errors. See for example, #41778.
Evalu...
- 05:54 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- ...
- 01:03 PM Bug #41759: mgr/volumes: test_async_subvolume_rm fails since purge threads did not cleanup trash ...
- As seen from the MDS log, there are no filesystem ops after the rename ack to the client. This hints that the purge t...
- 11:01 AM Bug #41759 (Can't reproduce): mgr/volumes: test_async_subvolume_rm fails since purge threads did ...
- Patrick saw this recently here: http://qa-proxy.ceph.com/teuthology/pdonnell-2019-09-11_00:33:51-fs-wip-pdonnell-test...
- 03:07 PM Bug #41778 (New): 'No space left on device' due to snapshots
- When using snapshots, we are getting 'no space left on device' when num_strays is close to a million.
We only have l...
- 01:05 PM Feature #41763 (New): Support decommissioning of additional data pools
- Adding additional data pools via @ceph fs add_data_pool@ is very easy, but once a pool is in use, it is very hard to ...
- 09:12 AM Feature #40959 (In Progress): mgr/volumes: allow setting uid, gid of subvolume and subvolume grou...
- 12:58 AM Bug #41752 (Resolved): mgr/volumes: drop unused size in fs volume create