Activity
From 10/12/2017 to 11/10/2017
11/10/2017
- 09:39 PM Bug #22008: Processes stuck waiting for write with ceph-fuse
- I've applied this patch to the latest luminous branch, rebuilt the MDS, and tested it in a test environment with the c...
- 02:47 PM Feature #22105 (Resolved): provide a way to look up snapshotted inodes by vinodeno_t
- An NFS client could conceivably present a filehandle that refers to a snapshot inode after ganesha has been stopped a...
- 07:22 AM Backport #21947: luminous: mds: preserve order of requests during recovery of multimds cluster
- https://github.com/ceph/ceph/pull/18871
- 07:17 AM Backport #21952: luminous: mds: no assertion on inode being purging in find_ino_peers()
- https://github.com/ceph/ceph/pull/18869
- 04:44 AM Backport #22077: luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.s...
- ** IN PROGRESS **
- 02:21 AM Feature #22097: mds: change mds perf counters can statistics filesystem operations number and lat...
- PR https://github.com/ceph/ceph/pull/18849 is for this feature
- 12:17 AM Backport #22078: luminous: ceph.in: tell mds does not understand --cluster
- -https://github.com/ceph/ceph/pull/18859-
- 12:15 AM Backport #22089: luminous: Scrub considers dirty backtraces to be damaged, puts in damage table e...
- https://github.com/ceph/ceph/pull/18858
11/09/2017
- 12:05 PM Feature #22097 (Resolved): mds: change mds perf counters can statistics filesystem operations num...
- The perf counters of the mds daemon can currently only report filesystem op counts and the overall replay latency. Sometimes we n...
- 06:07 AM Backport #22074: luminous: don't check gid when none specified in auth caps
- https://github.com/ceph/ceph/pull/18835
- 05:23 AM Backport #22076: luminous: 'ceph tell mds' commands result in 'File exists' errors on client admi...
- https://github.com/ceph/ceph/pull/18831
- 03:05 AM Bug #22091 (Duplicate): statfs get wrong fs size
- For the fs size in statfs, the right value should be the size of the data_pool plus the size of the metadata_pool; however, it return...
11/08/2017
- 10:45 PM Backport #22089 (Resolved): luminous: Scrub considers dirty backtraces to be damaged, puts in dam...
- https://github.com/ceph/ceph/pull/20341
- 10:28 AM Backport #22078 (Resolved): luminous: ceph.in: tell mds does not understand --cluster
- https://github.com/ceph/ceph/pull/18831
- 10:26 AM Backport #22077 (Resolved): luminous: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == ...
- https://github.com/ceph/ceph/pull/18912
- 10:25 AM Backport #22076 (Resolved): luminous: 'ceph tell mds' commands result in 'File exists' errors on ...
- https://github.com/ceph/ceph/pull/18831
- 10:25 AM Backport #22074 (Resolved): luminous: don't check gid when none specified in auth caps
- https://github.com/ceph/ceph/pull/18835
- 01:13 AM Bug #22058 (Need More Info): mds: admin socket wait for scrub completion is racy
- no log, wait for it to happen again.
11/07/2017
- 10:34 PM Bug #22058: mds: admin socket wait for scrub completion is racy
- Hrm, I missed that bit of logic. Yes, I don't know why it waits forever either.
- 02:10 PM Bug #22058: mds: admin socket wait for scrub completion is racy
- ...
- 06:01 AM Bug #22058 (Resolved): mds: admin socket wait for scrub completion is racy
- From: http://pulpito.ceph.com/pdonnell-2017-11-06_22:14:57-fs-wip-pdonnell-testing-20171106.200337-testing-basic-smit...
- 10:21 PM Backport #22068 (In Progress): luminous: mds: mds gets significantly behind on trimming while cre...
- https://github.com/ceph/ceph/pull/18783
- 10:21 PM Backport #22068 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
- https://github.com/ceph/ceph/pull/18783
- 10:19 PM Backport #22067 (In Progress): luminous: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 i...
- 10:19 PM Backport #22067 (Resolved): luminous: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is w...
- https://github.com/ceph/ceph/pull/18782
- 09:37 PM Bug #21405 (Resolved): qa: add EC data pool to testing
- 09:37 PM Backport #21955 (Resolved): luminous: qa: add EC data pool to testing
- 08:45 PM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
- Just checking if there is any viable patch that I could try for this issue in ceph-fuse. We are running into this pr...
- 08:48 AM Bug #22008 (Fix Under Review): Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/18787
- 07:54 AM Bug #21975: MDS: mds gets significantly behind on trimming while creating millions of files
- https://github.com/ceph/ceph/pull/18783
- 07:49 AM Bug #21985: mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- https://github.com/ceph/ceph/pull/18782
- 06:22 AM Bug #22009 (Pending Backport): don't check gid when none specified in auth caps
- 03:03 AM Bug #21584: FAILED assert(get_version() < pv) in CDir::mark_dirty
- For master: https://github.com/ceph/ceph/pull/18774
11/06/2017
- 05:41 PM Bug #22051 (Can't reproduce): tests: Health check failed: Reduced data availability: 5 pgs peerin...
- Saw this at a cephfs qe run, most likely an env. issue, http://pulpito.ceph.com/abhi-2017-11-05_16:49:08-fs-wip-abhi-...
- 04:59 PM Backport #22004: luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
- h3. description...
- 09:45 AM Bug #22008 (In Progress): Processes stuck waiting for write with ceph-fuse
- The second one is actually different from the first one. Seems like the first one was caused by 'client session gets ...
11/04/2017
- 04:02 AM Backport #21525: luminous: client: dual client segfault with racing ceph_shutdown
- -https://github.com/ceph/ceph/pull/18721-
- 03:58 AM Backport #21525 (In Progress): luminous: client: dual client segfault with racing ceph_shutdown
11/03/2017
- 09:41 PM Bug #18743 (Pending Backport): Scrub considers dirty backtraces to be damaged, puts in damage tab...
- 09:40 PM Bug #21928 (Pending Backport): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_m...
- 09:40 PM Bug #21975 (Pending Backport): MDS: mds gets significantly behind on trimming while creating mill...
- 09:39 PM Bug #21985 (Pending Backport): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- 09:39 PM Bug #21406 (Pending Backport): ceph.in: tell mds does not understand --cluster
- 09:39 PM Bug #21967 (Pending Backport): 'ceph tell mds' commands result in 'File exists' errors on client ...
- 09:25 PM Bug #22038 (Resolved): ceph-volume-client: rados.Error: command not known
- ...
- 07:38 PM Bug #22008: Processes stuck waiting for write with ceph-fuse
- Attached is the ceph-fuse cache dump. This is a different instance of the problem (all the same symptoms), so the pr...
- 08:33 AM Bug #22008: Processes stuck waiting for write with ceph-fuse
- No idea how it happened. Please use the admin socket to dump ceph-fuse's cache (ceph daemon client.xxx dump_cache).
- 03:50 PM Backport #22031 (Resolved): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
- https://github.com/ceph/ceph/pull/21156
- 02:22 PM Bug #21991 (Fix Under Review): mds: tell session ls returns vanila EINVAL when MDS is not active
- https://github.com/ceph/ceph/pull/18705
- 01:50 PM Backport #21481 (Resolved): jewel: "FileStore.cc: 2930: FAILED assert(0 == "unexpected error")" i...
- 09:36 AM Bug #22003: [CephFS-Ganesha]MDS migrate will affect Ganesha service?
- This issue can be reproduced easily by restarting ganesha.nfsd. I searched the Ceph FSAL code but couldn't find any code ...
- 07:09 AM Bug #21892 (Fix Under Review): limit size of subtree migration
- https://github.com/ceph/ceph/pull/18697
11/02/2017
- 08:33 PM Bug #22009 (Fix Under Review): don't check gid when none specified in auth caps
- https://github.com/ceph/ceph/pull/18689
- 07:59 PM Bug #22009 (In Progress): don't check gid when none specified in auth caps
- 07:57 PM Bug #22009 (Resolved): don't check gid when none specified in auth caps
- MDS auth caps allow for uid but not gids to be specified. In that case, we shouldn't check caller_gid or caller_gids_...
- 07:19 PM Bug #22007: auth: mds cap parsing should not depend on order
- For the record, we do this everywhere. We should probably refactor all of our auth caps grammar to make this work bet...
- 06:22 PM Bug #22007 (New): auth: mds cap parsing should not depend on order
- https://www.mail-archive.com/ceph-users@lists.ceph.com/msg41552.html...
- 06:30 PM Bug #22008 (Resolved): Processes stuck waiting for write with ceph-fuse
- We've been running into a strange problem with Ceph using ceph-fuse and the filesystem. All the back end nodes are o...
- 05:37 PM Bug #21406 (Fix Under Review): ceph.in: tell mds does not understand --cluster
- Jos, "In Progress" indicates the Assignee is working on a fix. "Need Review" indicates the fix is undergoing review/t...
- 08:56 AM Bug #21406 (In Progress): ceph.in: tell mds does not understand --cluster
- 05:37 PM Bug #21967 (Fix Under Review): 'ceph tell mds' commands result in 'File exists' errors on client ...
- Jos, "In Progress" indicates the Assignee is working on a fix. "Need Review" indicates the fix is undergoing review/t...
- 08:56 AM Bug #21967 (In Progress): 'ceph tell mds' commands result in 'File exists' errors on client admin...
- 07:54 AM Bug #21584 (Pending Backport): FAILED assert(get_version() < pv) in CDir::mark_dirty
- 07:52 AM Backport #22004 (In Progress): luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
- https://github.com/ceph/ceph/pull/18008
- 07:52 AM Backport #22004 (Resolved): luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
- https://github.com/ceph/ceph/pull/18008
- 07:34 AM Bug #22003 (Resolved): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
- [Ganesha Version]
Ganesha V2.4
630a35bef41aabf76f99532448d6154316a525e0
[Ceph Version]
ceph version 10.2.7 (50e...
- 01:15 AM Bug #21991 (In Progress): mds: tell session ls returns vanila EINVAL when MDS is not active
11/01/2017
- 06:55 PM Documentation #2982 (Resolved): doc: write add/remove a metadata server
- This has since been addressed.
- 06:51 PM Fix #6753 (Closed): cephx authentication for mds seem to accept both "allow" and "allow *"
- This appears to be an obsolete issue.
- 06:48 PM Documentation #2969 (Resolved): doc: expand/complete mds settings reference
- This appears to be resolved in http://docs.ceph.com/docs/master/cephfs/mds-config-ref/
- 06:48 PM Documentation #2988 (Resolved): doc: write MDS troubleshooting
- This appears to have been resolved.
- 06:38 PM Bug #21734 (Duplicate): mount client shows total capacity of cluster but not of a pool
- 09:20 AM Feature #21995: ceph-fuse: support nfs export
[Basic OS]
1.Suse-12-SP1(ceph、client)
2.CentOS-7.2 (client)
[Ceph Version]
1.upgrade from 0.94.5 to 10.2.7
...
- 08:18 AM Feature #21995 (Resolved): ceph-fuse: support nfs export
- Set the FUSE_EXPORT_SUPPORT flag on the fuse connection and make fuse_ll_lookup able to handle '<dir_ino>/.' (the dir inod...
10/31/2017
- 09:50 PM Bug #21991 (Resolved): mds: tell session ls returns vanila EINVAL when MDS is not active
- A more helpful error message would be desirable....
- 07:09 PM Bug #21406 (Fix Under Review): ceph.in: tell mds does not understand --cluster
- https://github.com/ceph/ceph/pull/18654
Found that the above PR also resolves this issue so I'm reassigning to mys...
- 07:08 PM Bug #21967 (Fix Under Review): 'ceph tell mds' commands result in 'File exists' errors on client ...
- https://github.com/ceph/ceph/pull/18654
- 07:04 PM Bug #21967: 'ceph tell mds' commands result in 'File exists' errors on client admin socket
- 09:37 AM Bug #21985 (Fix Under Review): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- https://github.com/ceph/ceph/pull/18646
- 08:58 AM Bug #21985 (Resolved): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
- Its ID is the same as MDS_FEATURE_INCOMPAT_NOANCHOR.
10/30/2017
- 09:05 PM Backport #21953 (In Progress): luminous: MDSMonitor commands crashing on cluster upgraded from Ha...
- https://github.com/ceph/ceph/pull/18628
- 08:43 PM Bug #21945 (Resolved): MDSCache::gen_default_file_layout segv on rados/upgrade
- 05:44 AM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
- /a/kchai-2017-10-29_15:49:18-rados-wip-kefu-testing-2017-10-28-2157-distro-basic-mira/1788809
- 06:31 PM Bug #21975: MDS: mds gets significantly behind on trimming while creating millions of files
- https://github.com/ceph/ceph/pull/18624
- 06:31 PM Bug #21975 (Resolved): MDS: mds gets significantly behind on trimming while creating millions of ...
- During creat() heavy workloads, the MDS gets behind on trimming its journal as the journal grows faster than it trims...
- 01:45 PM Bug #21884: client: populate f_fsid in statfs output
- Waiting for FUSE support on this.
- 12:04 PM Bug #21406: ceph.in: tell mds does not understand --cluster
- The workaround `--conf <conf file path>` doesn't work anymore with the latest source code in the Ceph master. I'm hi...
- 04:18 AM Bug #21967 (Resolved): 'ceph tell mds' commands result in 'File exists' errors on client admin so...
- ...
10/29/2017
- 06:14 PM Bug #21807 (Resolved): mds: trims all unpinned dentries when memory limit is reached
- 06:14 PM Backport #21810 (Resolved): luminous: mds: trims all unpinned dentries when memory limit is reached
- 06:13 PM Bug #19593 (Resolved): purge queue and standby replay mds
- 06:13 PM Backport #21658 (Resolved): luminous: purge queue and standby replay mds
- 06:13 PM Bug #21768 (Resolved): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
- 06:12 PM Backport #21806 (Resolved): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pi...
- 06:12 PM Bug #21191 (Resolved): ceph: tell mds.* results in warning
- 06:12 PM Backport #21324 (Resolved): luminous: ceph: tell mds.* results in warning
- 06:11 PM Bug #21726 (Resolved): limit internal memory usage of object cacher.
- 06:11 PM Backport #21804 (Resolved): luminous: limit internal memory usage of object cacher.
- 06:10 PM Bug #21746 (Resolved): client_metadata can be missing
- 06:10 PM Backport #21805 (Resolved): luminous: client_metadata can be missing
- 06:10 PM Backport #21627 (Resolved): luminous: ceph_volume_client: sets invalid caps for existing IDs with...
- 06:08 PM Backport #21600 (Resolved): luminous: mds: client caps can go below hard-coded default (100)
- 06:08 PM Bug #21476 (Resolved): ceph_volume_client: snapshot dir name hardcoded
- 06:08 PM Backport #21514 (Resolved): luminous: ceph_volume_client: snapshot dir name hardcoded
10/27/2017
- 08:32 PM Bug #21945 (In Progress): MDSCache::gen_default_file_layout segv on rados/upgrade
- WIP https://github.com/ceph/ceph/pull/18603
- 12:09 PM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
- Presumably 7adf0fb819cc98702cd97214192770472eab5d27
- 12:07 PM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
- ...
- 11:55 AM Bug #21945 (Resolved): MDSCache::gen_default_file_layout segv on rados/upgrade
- ...
- 08:23 PM Backport #21953: luminous: MDSMonitor commands crashing on cluster upgraded from Hammer (nonexist...
- Please hold off on merging any backport on this due to http://tracker.ceph.com/issues/21945
- 12:45 PM Backport #21953 (Resolved): luminous: MDSMonitor commands crashing on cluster upgraded from Hamme...
- https://github.com/ceph/ceph/pull/18628
- 07:46 PM Bug #21959 (Fix Under Review): MDSMonitor: monitor gives constant "is now active in filesystem ce...
- https://github.com/ceph/ceph/pull/18600
- 07:16 PM Bug #21959 (Resolved): MDSMonitor: monitor gives constant "is now active in filesystem cephfs as ...
Cluster log is filled with:...
- 04:58 PM Backport #21955 (In Progress): luminous: qa: add EC data pool to testing
- 12:45 PM Backport #21955 (Resolved): luminous: qa: add EC data pool to testing
- https://github.com/ceph/ceph/pull/18596
- 03:05 PM Bug #21337 (Resolved): luminous: MDS is not getting past up:replay on Luminous cluster
- 12:45 PM Backport #21952 (Resolved): luminous: mds: no assertion on inode being purging in find_ino_peers()
- https://github.com/ceph/ceph/pull/18869
- 12:44 PM Backport #21948 (Resolved): luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_...
- https://github.com/ceph/ceph/pull/19871
- 12:44 PM Backport #21947 (Resolved): luminous: mds: preserve order of requests during recovery of multimds...
- https://github.com/ceph/ceph/pull/18871
- 02:42 AM Bug #21722 (Pending Backport): mds: no assertion on inode being purging in find_ino_peers()
10/26/2017
- 06:04 PM Bug #21153 (Resolved): Incorrect grammar in FS message "1 filesystem is have a failed mds daemon"
- 06:03 PM Bug #21230 (Resolved): the standbys are not updated via "ceph tell mds.* command"
- 04:07 PM Backport #21657 (In Progress): luminous: StrayManager::truncate is broken
- 08:24 AM Bug #21928 (Fix Under Review): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_m...
- https://github.com/ceph/ceph/pull/18555
- 08:07 AM Bug #21928: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size() + snap_in...
- I think the cause is that the scrub function allocates a shadow inode for the base inode
CInode.cc...
- 12:06 AM Bug #21405 (Pending Backport): qa: add EC data pool to testing
10/25/2017
- 11:38 PM Bug #21568 (Pending Backport): MDSMonitor commands crashing on cluster upgraded from Hammer (none...
- 11:37 PM Bug #21821 (Pending Backport): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- 11:36 PM Bug #20596 (Resolved): MDSMonitor: obsolete `mds dump` and other deprecated mds commands
- 08:37 PM Bug #21843 (Pending Backport): mds: preserve order of requests during recovery of multimds cluster
- 08:35 PM Bug #21848 (In Progress): client: re-expand admin_socket metavariables in child process
- Zhi, please revisit this issue as the fix in https://github.com/ceph/ceph/pull/18393 must be reverted due to the reas...
- 06:14 PM Bug #21928 (Resolved): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size(...
- ...
- 02:55 PM Feature #21888: Adding [--repair] option for cephfs-journal-tool make it can recover all journal ...
- Assigning to Ivan, as he had already submitted a PR for this Feature.
- 01:50 PM Bug #18743 (Fix Under Review): Scrub considers dirty backtraces to be damaged, puts in damage tab...
- https://github.com/ceph/ceph/pull/18538
Inspired to fix this from working on today's "[ceph-users] MDS damaged" th...
- 07:53 AM Bug #21393 (In Progress): MDSMonitor: inconsistent role/who usage in command help
- 06:43 AM Bug #21903: ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
- Patrick Donnelly wrote:
> Another one during retest: http://pulpito.ceph.com/pdonnell-2017-10-24_16:24:48-fs-wip-pdo...
10/24/2017
- 09:15 PM Bug #21908: kcephfs: mount fails with -EIO
- ...
- 09:14 PM Bug #21908 (New): kcephfs: mount fails with -EIO
- ...
- 08:23 PM Bug #21884: client: populate f_fsid in statfs output
- Jeff, I liked your suggestion of a vxattr the client can lookup to check if the mount is CephFS.
- 05:17 PM Bug #21903: ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
- Another one during retest: http://pulpito.ceph.com/pdonnell-2017-10-24_16:24:48-fs-wip-pdonnell-testing-201710240410-...
- 05:17 PM Bug #21903 (New): ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
- ...
- 03:24 PM Bug #21393 (Fix Under Review): MDSMonitor: inconsistent role/who usage in command help
- https://github.com/ceph/ceph/pull/18512
10/23/2017
- 07:51 PM Bug #21884: client: populate f_fsid in statfs output
- I'm not sure we want to change f_type. It _is_ still FUSE, regardless of what userland daemon it's talking to.
If ...
- 10:43 AM Bug #21884: client: populate f_fsid in statfs output
- Here's how the kernel client fills this field out:...
- 11:46 AM Bug #21848 (Fix Under Review): client: re-expand admin_socket metavariables in child process
- 02:28 AM Bug #21393 (In Progress): MDSMonitor: inconsistent role/who usage in command help
- 01:51 AM Bug #21892 (Resolved): limit size of subtree migration
- ...
10/22/2017
- 09:19 AM Feature #21888: Adding [--repair] option for cephfs-journal-tool make it can recover all journal ...
- https://github.com/ceph/ceph/pull/18465/
- 06:15 AM Feature #21888 (Fix Under Review): Adding [--repair] option for cephfs-journal-tool make it can r...
- As described in the documentation, if a journal is damaged or an MDS is for any reason incapable of replaying it, attempt t...
10/21/2017
- 04:05 AM Bug #21884 (Resolved): client: populate f_fsid in statfs output
- We should just reuse the kclient -f_id- f_type as, in principle, the application should only need to know that the fi...
10/20/2017
- 03:21 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
- PR to master was https://github.com/ceph/ceph/pull/18139
- 10:08 AM Feature #21877 (Resolved): quota and snaprealm integation
- https://github.com/ceph/ceph/pull/18424/commits/4477f8b93d183eb461798b5b67550d3d5b22c16c
- 09:30 AM Backport #21874 (Resolved): luminous: qa: libcephfs_interface_tests: shutdown race failures
- https://github.com/ceph/ceph/pull/20082
- 09:29 AM Backport #21870 (Resolved): luminous: Assertion in EImportStart::replay should be a damaged()
- https://github.com/ceph/ceph/pull/18930
- 07:10 AM Bug #21861 (New): osdc: truncate Object and remove the bh which have someone wait for read on it ...
- ceph version: jewel 10.2.2
When one OSD is written over the full_ratio (default 0.95), it will lead the cluster t...
10/19/2017
- 11:22 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- This is NMI because we weren't able to reproduce the actual problem. We'll have to wait for QE to reproduce again wit...
- 06:35 PM Bug #21853 (Resolved): mds: mdsload debug too high
- ...
- 01:44 PM Feature #19578: mds: optimize CDir::_omap_commit() and CDir::_committed() for large directory
- this should help large directory performance
- 07:52 AM Bug #21848: client: re-expand admin_socket metavariables in child process
- https://github.com/ceph/ceph/pull/18393
- 07:51 AM Bug #21848 (Resolved): client: re-expand admin_socket metavariables in child process
- The default value of admin_socket is $run_dir/$cluster-$name.asok. If mounting multiple ceph-fuse instances on the sa...
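One way to avoid the collision described above is to give each ceph-fuse process its own socket path. A hypothetical ceph.conf fragment (`$pid` is a standard Ceph config metavariable; per this bug, it is only re-expanded correctly in the child process once the fix lands):

```ini
; Sketch only: per-process admin socket so multiple ceph-fuse mounts
; on the same host do not race for a single .asok path.
[client]
    admin socket = /var/run/ceph/$cluster-$name.$pid.asok
```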
- 07:44 AM Bug #21483 (Resolved): qa: test_snapshot_remove (kcephfs): RuntimeError: Bad data at offset 0
- 02:20 AM Bug #21749 (Duplicate): PurgeQueue corruption in 12.2.1
- dup of #19593
- 02:17 AM Backport #21658 (Fix Under Review): luminous: purge queue and standby replay mds
- https://github.com/ceph/ceph/pull/18385
- 01:53 AM Bug #21843 (Fix Under Review): mds: preserve order of requests during recovery of multimds cluster
- https://github.com/ceph/ceph/pull/18384
- 01:50 AM Bug #21843 (Resolved): mds: preserve order of requests during recovery of multimds cluster
- there are several cases where requests get processed in the wrong order
1)
touch a/b/f (handled by mds.1, early ...
10/17/2017
- 10:09 PM Bug #21821 (Fix Under Review): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- https://github.com/ceph/ceph/pull/18366
- 09:55 PM Bug #21821: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- A good opportunity to use the new min/max fields on the config option itself.
I suppose if we accept the idea that...
- 05:36 PM Bug #21821 (Resolved): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
- There should be a minimum acceptable value otherwise we see potential behavior where a blacklisted MDS is still writi...
- 02:01 AM Bug #21812 (Closed): standby replay mds may submit log
- I wrongly interpreted the log
10/16/2017
- 01:19 PM Bug #21812 (Closed): standby replay mds may submit log
- magna002://home/smohan/LOGS/cfs-mds.magna116.trunc.log.gz
mds submitted log entry while it's in standby replay sta...
- 12:07 AM Bug #21807 (Pending Backport): mds: trims all unpinned dentries when memory limit is reached
- 12:03 AM Backport #21810 (Resolved): luminous: mds: trims all unpinned dentries when memory limit is reached
- https://github.com/ceph/ceph/pull/18316
10/14/2017
- 08:49 PM Bug #21807 (Fix Under Review): mds: trims all unpinned dentries when memory limit is reached
- https://github.com/ceph/ceph/pull/18309
- 08:46 PM Bug #21807 (Resolved): mds: trims all unpinned dentries when memory limit is reached
- Generally dentries are pinned by the client cache so this was easy to miss in testing. Bug is here:
https://github...
- 08:17 AM Backport #21805 (In Progress): luminous: client_metadata can be missing
- 12:32 AM Backport #21805 (Resolved): luminous: client_metadata can be missing
- https://github.com/ceph/ceph/pull/18299
- 08:16 AM Backport #21804 (In Progress): luminous: limit internal memory usage of object cacher.
- 12:23 AM Backport #21804 (Resolved): luminous: limit internal memory usage of object cacher.
- https://github.com/ceph/ceph/pull/18298
- 12:39 AM Backport #21806 (In Progress): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export...
- 12:37 AM Backport #21806 (Resolved): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pi...
- https://github.com/ceph/ceph/pull/18300
- 12:16 AM Bug #21512 (Pending Backport): qa: libcephfs_interface_tests: shutdown race failures
- 12:14 AM Bug #21726 (Pending Backport): limit internal memory usage of object cacher.
- 12:14 AM Bug #21746 (Pending Backport): client_metadata can be missing
- 12:13 AM Bug #21759 (Pending Backport): Assertion in EImportStart::replay should be a damaged()
- 12:12 AM Bug #21768 (Pending Backport): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
10/13/2017
- 07:18 PM Feature #21601 (Resolved): ceph_volume_client: add get, put, and delete object interfaces
- 07:18 PM Backport #21602 (Resolved): luminous: ceph_volume_client: add get, put, and delete object interfaces
- 12:37 AM Bug #21777 (Fix Under Review): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- https://github.com/ceph/ceph/pull/18278
10/12/2017
- 10:47 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- MDS may send a MMDSCacheRejoin(MMDSCacheRejoin::OP_WEAK) message to an MDS which is not rejoin/active/stopping. Once ...
- 04:04 AM Bug #21768 (Fix Under Review): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
- https://github.com/ceph/ceph/pull/18261
- 03:58 AM Bug #21768 (Resolved): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
- ...