Activity

From 10/05/2017 to 11/03/2017

11/03/2017

09:41 PM Bug #18743 (Pending Backport): Scrub considers dirty backtraces to be damaged, puts in damage tab...
Patrick Donnelly
09:40 PM Bug #21928 (Pending Backport): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_m...
Patrick Donnelly
09:40 PM Bug #21975 (Pending Backport): MDS: mds gets significantly behind on trimming while creating mill...
Patrick Donnelly
09:39 PM Bug #21985 (Pending Backport): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
Patrick Donnelly
09:39 PM Bug #21406 (Pending Backport): ceph.in: tell mds does not understand --cluster
Patrick Donnelly
09:39 PM Bug #21967 (Pending Backport): 'ceph tell mds' commands result in 'File exists' errors on client ...
Patrick Donnelly
09:25 PM Bug #22038 (Resolved): ceph-volume-client: rados.Error: command not known
... Patrick Donnelly
07:38 PM Bug #22008: Processes stuck waiting for write with ceph-fuse
Attached is the ceph-fuse cache dump. This is a different instance of the problem (all the same symptoms), so the pr... Andras Pataki
08:33 AM Bug #22008: Processes stuck waiting for write with ceph-fuse
No idea how it happened. Please use the admin socket to dump ceph-fuse's cache (ceph daemon client.xxx dump_cache) Zheng Yan
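The admin-socket dump suggested above can be invoked like this (a sketch only; the client name and socket path are placeholders for your ceph-fuse instance):

```shell
# Dump the ceph-fuse client cache via its admin socket.
# "client.admin" and the .asok path below are placeholders.
ceph daemon client.admin dump_cache > /tmp/ceph-fuse-cache.dump

# Or address the socket file directly:
ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok dump_cache
```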
03:50 PM Backport #22031 (Resolved): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
https://github.com/ceph/ceph/pull/21156 Nathan Cutler
02:22 PM Bug #21991 (Fix Under Review): mds: tell session ls returns vanilla EINVAL when MDS is not active
https://github.com/ceph/ceph/pull/18705 Jos Collin
01:50 PM Backport #21481 (Resolved): jewel: "FileStore.cc: 2930: FAILED assert(0 == "unexpected error")" i...
Kefu Chai
09:36 AM Bug #22003: [CephFS-Ganesha]MDS migrate will affect Ganesha service?
This issue can be reproduced easily by restarting ganesha.nfsd. I searched the Ceph FSAL code but couldn't find any code ... Zheng Yan
07:09 AM Bug #21892 (Fix Under Review): limit size of subtree migration
https://github.com/ceph/ceph/pull/18697 Zheng Yan

11/02/2017

08:33 PM Bug #22009 (Fix Under Review): don't check gid when none specified in auth caps
https://github.com/ceph/ceph/pull/18689 Douglas Fuller
07:59 PM Bug #22009 (In Progress): don't check gid when none specified in auth caps
Douglas Fuller
07:57 PM Bug #22009 (Resolved): don't check gid when none specified in auth caps
MDS auth caps allow for uid but not gids to be specified. In that case, we shouldn't check caller_gid or caller_gids_... Douglas Fuller
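For context, a hedged sketch of the cap strings involved (the client names, ids, and pool name are illustrative, not from this issue):

```shell
# Illustrative only: MDS auth caps restricted to a uid, with and without gids.
ceph auth caps client.foo mds 'allow rw uid=1000 gids=1000,1001' \
    mon 'allow r' osd 'allow rw pool=cephfs_data'
# uid specified but no gids -- per this bug, caller_gid should then not be checked:
ceph auth caps client.bar mds 'allow rw uid=1000' \
    mon 'allow r' osd 'allow rw pool=cephfs_data'
```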
07:19 PM Bug #22007: auth: mds cap parsing should not depend on order
For the record, we do this everywhere. We should probably refactor all of our auth caps grammar to make this work bet... Douglas Fuller
06:22 PM Bug #22007 (New): auth: mds cap parsing should not depend on order
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg41552.html... Greg Farnum
06:30 PM Bug #22008 (Resolved): Processes stuck waiting for write with ceph-fuse
We've been running into a strange problem with Ceph using ceph-fuse and the filesystem. All the back end nodes are o... Andras Pataki
05:37 PM Bug #21406 (Fix Under Review): ceph.in: tell mds does not understand --cluster
Jos, "In Progress" indicates the Assignee is working on a fix. "Need Review" indicates the fix is undergoing review/t... Patrick Donnelly
08:56 AM Bug #21406 (In Progress): ceph.in: tell mds does not understand --cluster
Jos Collin
05:37 PM Bug #21967 (Fix Under Review): 'ceph tell mds' commands result in 'File exists' errors on client ...
Jos, "In Progress" indicates the Assignee is working on a fix. "Need Review" indicates the fix is undergoing review/t... Patrick Donnelly
08:56 AM Bug #21967 (In Progress): 'ceph tell mds' commands result in 'File exists' errors on client admin...
Jos Collin
07:54 AM Bug #21584 (Pending Backport): FAILED assert(get_version() < pv) in CDir::mark_dirty
Kefu Chai
07:52 AM Backport #22004 (In Progress): luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
https://github.com/ceph/ceph/pull/18008 Kefu Chai
07:52 AM Backport #22004 (Resolved): luminous: FAILED assert(get_version() < pv) in CDir::mark_dirty
https://github.com/ceph/ceph/pull/18008 Kefu Chai
07:34 AM Bug #22003 (Resolved): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
[Ganesha Version]
Ganesha V2.4
630a35bef41aabf76f99532448d6154316a525e0
[Ceph Version]
ceph version 10.2.7 (50e...
Gemini Chen
01:15 AM Bug #21991 (In Progress): mds: tell session ls returns vanilla EINVAL when MDS is not active
Jos Collin

11/01/2017

06:55 PM Documentation #2982 (Resolved): doc: write add/remove a metadata server
This has since been addressed. Patrick Donnelly
06:51 PM Fix #6753 (Closed): cephx authentication for mds seem to accept both "allow" and "allow *"
This appears to be an obsolete issue. Patrick Donnelly
06:48 PM Documentation #2969 (Resolved): doc: expand/complete mds settings reference
This appears to be resolved in http://docs.ceph.com/docs/master/cephfs/mds-config-ref/ Patrick Donnelly
06:48 PM Documentation #2988 (Resolved): doc: write MDS troubleshooting
This appears to have been resolved. Patrick Donnelly
06:38 PM Bug #21734 (Duplicate): mount client shows total capacity of cluster but not of a pool
Patrick Donnelly
09:20 AM Feature #21995: ceph-fuse: support nfs export

[Basic OS]
1. Suse-12-SP1 (ceph, client)
2.CentOS-7.2 (client)

[Ceph Version]
1.upgrade from 0.94.5 to 10.2.7
...
Gemini Chen
08:18 AM Feature #21995 (Resolved): ceph-fuse: support nfs export
Set the FUSE_EXPORT_SUPPORT flag on the fuse connection and make fuse_ll_lookup able to handle '<dir_ino>/.' (the dir inod... Zheng Yan

10/31/2017

09:50 PM Bug #21991 (Resolved): mds: tell session ls returns vanilla EINVAL when MDS is not active
A more helpful error message would be desirable.... Patrick Donnelly
07:09 PM Bug #21406 (Fix Under Review): ceph.in: tell mds does not understand --cluster
https://github.com/ceph/ceph/pull/18654
Found that the above PR also resolves this issue so I'm reassigning to mys...
Patrick Donnelly
07:08 PM Bug #21967 (Fix Under Review): 'ceph tell mds' commands result in 'File exists' errors on client ...
https://github.com/ceph/ceph/pull/18654 Patrick Donnelly
07:04 PM Bug #21967: 'ceph tell mds' commands result in 'File exists' errors on client admin socket
Patrick Donnelly
09:37 AM Bug #21985 (Fix Under Review): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
https://github.com/ceph/ceph/pull/18646 Zheng Yan
08:58 AM Bug #21985 (Resolved): mds: definition of MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 is wrong
Its ID is the same as MDS_FEATURE_INCOMPAT_NOANCHOR Zheng Yan

10/30/2017

09:05 PM Backport #21953 (In Progress): luminous: MDSMonitor commands crashing on cluster upgraded from Ha...
https://github.com/ceph/ceph/pull/18628 Patrick Donnelly
08:43 PM Bug #21945 (Resolved): MDSCache::gen_default_file_layout segv on rados/upgrade
Sage Weil
05:44 AM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
/a/kchai-2017-10-29_15:49:18-rados-wip-kefu-testing-2017-10-28-2157-distro-basic-mira/1788809 Kefu Chai
06:31 PM Bug #21975: MDS: mds gets significantly behind on trimming while creating millions of files
https://github.com/ceph/ceph/pull/18624 Patrick Donnelly
06:31 PM Bug #21975 (Resolved): MDS: mds gets significantly behind on trimming while creating millions of ...
During creat() heavy workloads, the MDS gets behind on trimming its journal as the journal grows faster than it trims... Patrick Donnelly
01:45 PM Bug #21884: client: populate f_fsid in statfs output
Waiting for FUSE support on this. Patrick Donnelly
12:04 PM Bug #21406: ceph.in: tell mds does not understand --cluster
The workaround `--conf <conf file path>` doesn't work anymore with the latest source code in the Ceph master. I'm hi... Jos Collin
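The failing invocation and the workaround under discussion look roughly like this (the cluster name and MDS name are placeholders):

```shell
# --cluster should resolve to /etc/ceph/mycluster.conf and its keyring,
# but per this bug the tell path does not honor it:
ceph --cluster mycluster tell mds.a session ls

# Workaround: pass the conf file explicitly (reported above as no longer
# working on current master):
ceph --conf /etc/ceph/mycluster.conf tell mds.a session ls
```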
04:18 AM Bug #21967 (Resolved): 'ceph tell mds' commands result in 'File exists' errors on client admin so...
... Brad Hubbard

10/29/2017

06:14 PM Bug #21807 (Resolved): mds: trims all unpinned dentries when memory limit is reached
Patrick Donnelly
06:14 PM Backport #21810 (Resolved): luminous: mds: trims all unpinned dentries when memory limit is reached
Patrick Donnelly
06:13 PM Bug #19593 (Resolved): purge queue and standby replay mds
Patrick Donnelly
06:13 PM Backport #21658 (Resolved): luminous: purge queue and standby replay mds
Patrick Donnelly
06:13 PM Bug #21768 (Resolved): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
Patrick Donnelly
06:12 PM Backport #21806 (Resolved): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pi...
Patrick Donnelly
06:12 PM Bug #21191 (Resolved): ceph: tell mds.* results in warning
Patrick Donnelly
06:12 PM Backport #21324 (Resolved): luminous: ceph: tell mds.* results in warning
Patrick Donnelly
06:11 PM Bug #21726 (Resolved): limit internal memory usage of object cacher.
Patrick Donnelly
06:11 PM Backport #21804 (Resolved): luminous: limit internal memory usage of object cacher.
Patrick Donnelly
06:10 PM Bug #21746 (Resolved): client_metadata can be missing
Patrick Donnelly
06:10 PM Backport #21805 (Resolved): luminous: client_metadata can be missing
Patrick Donnelly
06:10 PM Backport #21627 (Resolved): luminous: ceph_volume_client: sets invalid caps for existing IDs with...
Patrick Donnelly
06:08 PM Backport #21600 (Resolved): luminous: mds: client caps can go below hard-coded default (100)
Patrick Donnelly
06:08 PM Bug #21476 (Resolved): ceph_volume_client: snapshot dir name hardcoded
Patrick Donnelly
06:08 PM Backport #21514 (Resolved): luminous: ceph_volume_client: snapshot dir name hardcoded
Patrick Donnelly

10/27/2017

08:32 PM Bug #21945 (In Progress): MDSCache::gen_default_file_layout segv on rados/upgrade
WIP https://github.com/ceph/ceph/pull/18603 Patrick Donnelly
12:09 PM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
Presumably 7adf0fb819cc98702cd97214192770472eab5d27 Sage Weil
12:07 PM Bug #21945: MDSCache::gen_default_file_layout segv on rados/upgrade
... Sage Weil
11:55 AM Bug #21945 (Resolved): MDSCache::gen_default_file_layout segv on rados/upgrade
... Sage Weil
08:23 PM Backport #21953: luminous: MDSMonitor commands crashing on cluster upgraded from Hammer (nonexist...
Please hold off on merging any backport on this due to http://tracker.ceph.com/issues/21945 Patrick Donnelly
12:45 PM Backport #21953 (Resolved): luminous: MDSMonitor commands crashing on cluster upgraded from Hamme...
https://github.com/ceph/ceph/pull/18628 Nathan Cutler
07:46 PM Bug #21959 (Fix Under Review): MDSMonitor: monitor gives constant "is now active in filesystem ce...
https://github.com/ceph/ceph/pull/18600 Patrick Donnelly
07:16 PM Bug #21959 (Resolved): MDSMonitor: monitor gives constant "is now active in filesystem cephfs as ...

Cluster log is filled with:...
Patrick Donnelly
04:58 PM Backport #21955 (In Progress): luminous: qa: add EC data pool to testing
Nathan Cutler
12:45 PM Backport #21955 (Resolved): luminous: qa: add EC data pool to testing
https://github.com/ceph/ceph/pull/18596 Nathan Cutler
03:05 PM Bug #21337 (Resolved): luminous: MDS is not getting past up:replay on Luminous cluster
Nathan Cutler
12:45 PM Backport #21952 (Resolved): luminous: mds: no assertion on inode being purging in find_ino_peers()
https://github.com/ceph/ceph/pull/18869 Nathan Cutler
12:44 PM Backport #21948 (Resolved): luminous: MDSMonitor: mons should reject misconfigured mds_blacklist_...
https://github.com/ceph/ceph/pull/19871 Nathan Cutler
12:44 PM Backport #21947 (Resolved): luminous: mds: preserve order of requests during recovery of multimds...
https://github.com/ceph/ceph/pull/18871 Nathan Cutler
02:42 AM Bug #21722 (Pending Backport): mds: no assertion on inode being purging in find_ino_peers()
Patrick Donnelly

10/26/2017

06:04 PM Bug #21153 (Resolved): Incorrect grammar in FS message "1 filesystem is have a failed mds daemon"
Patrick Donnelly
06:03 PM Bug #21230 (Resolved): the standbys are not updated via "ceph tell mds.* command"
Patrick Donnelly
04:07 PM Backport #21657 (In Progress): luminous: StrayManager::truncate is broken
Patrick Donnelly
08:24 AM Bug #21928 (Fix Under Review): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_m...
https://github.com/ceph/ceph/pull/18555 Zheng Yan
08:07 AM Bug #21928: src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size() + snap_in...
I think the cause is that the scrub function allocates a shadow inode for the base inode
CInode.cc...
Zheng Yan
12:06 AM Bug #21405 (Pending Backport): qa: add EC data pool to testing
Patrick Donnelly

10/25/2017

11:38 PM Bug #21568 (Pending Backport): MDSMonitor commands crashing on cluster upgraded from Hammer (none...
Patrick Donnelly
11:37 PM Bug #21821 (Pending Backport): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
Patrick Donnelly
11:36 PM Bug #20596 (Resolved): MDSMonitor: obsolete `mds dump` and other deprecated mds commands
Patrick Donnelly
08:37 PM Bug #21843 (Pending Backport): mds: preserve order of requests during recovery of multimds cluster
Patrick Donnelly
08:35 PM Bug #21848 (In Progress): client: re-expand admin_socket metavariables in child process
Zhi, please revisit this issue as the fix in https://github.com/ceph/ceph/pull/18393 must be reverted due to the reas... Patrick Donnelly
06:14 PM Bug #21928 (Resolved): src/mds/MDCache.cc: 7421: FAILED assert(CInode::count() == inode_map.size(...
... Patrick Donnelly
02:55 PM Feature #21888: Adding [--repair] option for cephfs-journal-tool make it can recover all journal ...
Assigning to Ivan, as he had already submitted a PR for this Feature. Jos Collin
01:50 PM Bug #18743 (Fix Under Review): Scrub considers dirty backtraces to be damaged, puts in damage tab...
https://github.com/ceph/ceph/pull/18538
Inspired to fix this from working on today's "[ceph-users] MDS damaged" th...
John Spray
07:53 AM Bug #21393 (In Progress): MDSMonitor: inconsistent role/who usage in command help
Jos Collin
06:43 AM Bug #21903: ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
Patrick Donnelly wrote:
> Another one during retest: http://pulpito.ceph.com/pdonnell-2017-10-24_16:24:48-fs-wip-pdo...
Ramana Raja

10/24/2017

09:15 PM Bug #21908: kcephfs: mount fails with -EIO
... Patrick Donnelly
09:14 PM Bug #21908 (New): kcephfs: mount fails with -EIO
... Patrick Donnelly
08:23 PM Bug #21884: client: populate f_fsid in statfs output
Jeff, I liked your suggestion of a vxattr the client can lookup to check if the mount is CephFS. Patrick Donnelly
05:17 PM Bug #21903: ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
Another one during retest: http://pulpito.ceph.com/pdonnell-2017-10-24_16:24:48-fs-wip-pdonnell-testing-201710240410-... Patrick Donnelly
05:17 PM Bug #21903 (New): ceph-volume-client: File "cephfs.pyx", line 696, in cephfs.LibCephFS.open
... Patrick Donnelly
03:24 PM Bug #21393 (Fix Under Review): MDSMonitor: inconsistent role/who usage in command help
https://github.com/ceph/ceph/pull/18512 Jos Collin

10/23/2017

07:51 PM Bug #21884: client: populate f_fsid in statfs output
I'm not sure we want to change f_type. It _is_ still FUSE, regardless of what userland daemon it's talking to.
If ...
Jeff Layton
10:43 AM Bug #21884: client: populate f_fsid in statfs output
Here's how the kernel client fills this field out:... Jeff Layton
11:46 AM Bug #21848 (Fix Under Review): client: re-expand admin_socket metavariables in child process
Nathan Cutler
02:28 AM Bug #21393 (In Progress): MDSMonitor: inconsistent role/who usage in command help
Jos Collin
01:51 AM Bug #21892 (Resolved): limit size of subtree migration
... Zheng Yan

10/22/2017

09:19 AM Feature #21888: Adding [--repair] option for cephfs-journal-tool make it can recover all journal ...
https://github.com/ceph/ceph/pull/18465/ Ivan Guan
06:15 AM Feature #21888 (Fix Under Review): Adding [--repair] option for cephfs-journal-tool make it can r...
As described in the documentation, if a journal is damaged or for any reason an MDS is incapable of replaying it, attempt t... Ivan Guan

10/21/2017

04:05 AM Bug #21884 (Resolved): client: populate f_fsid in statfs output
We should just reuse the kclient -f_id- f_type as, in principle, the application should only need to know that the fi... Patrick Donnelly

10/20/2017

03:21 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
PR to master was https://github.com/ceph/ceph/pull/18139 Ken Dreyer
10:08 AM Feature #21877 (Resolved): quota and snaprealm integration
https://github.com/ceph/ceph/pull/18424/commits/4477f8b93d183eb461798b5b67550d3d5b22c16c Zheng Yan
09:30 AM Backport #21874 (Resolved): luminous: qa: libcephfs_interface_tests: shutdown race failures
https://github.com/ceph/ceph/pull/20082 Nathan Cutler
09:29 AM Backport #21870 (Resolved): luminous: Assertion in EImportStart::replay should be a damaged()
https://github.com/ceph/ceph/pull/18930 Nathan Cutler
07:10 AM Bug #21861 (New): osdc: truncate Object and remove the bh which have someone wait for read on it ...
ceph version: jewel 10.2.2
When one OSD is written past the full_ratio (default 0.95), it will lead the cluster t...
Ivan Guan

10/19/2017

11:22 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
This is NMI because we weren't able to reproduce the actual problem. We'll have to wait for QE to reproduce again wit... Patrick Donnelly
06:35 PM Bug #21853 (Resolved): mds: mdsload debug too high
... Patrick Donnelly
01:44 PM Feature #19578: mds: optimize CDir::_omap_commit() and CDir::_committed() for large directory
this should help large directory performance Zheng Yan
07:52 AM Bug #21848: client: re-expand admin_socket metavariables in child process
https://github.com/ceph/ceph/pull/18393 Zhi Zhang
07:51 AM Bug #21848 (Resolved): client: re-expand admin_socket metavariables in child process
The default value of admin_socket is $run_dir/$cluster-$name.asok. If mounting multiple ceph-fuse instances on the sa... Zhi Zhang
07:44 AM Bug #21483 (Resolved): qa: test_snapshot_remove (kcephfs): RuntimeError: Bad data at offset 0
Zheng Yan
02:20 AM Bug #21749 (Duplicate): PurgeQueue corruption in 12.2.1
dup of #19593 Zheng Yan
02:17 AM Backport #21658 (Fix Under Review): luminous: purge queue and standby replay mds
https://github.com/ceph/ceph/pull/18385 Zheng Yan
01:53 AM Bug #21843 (Fix Under Review): mds: preserve order of requests during recovery of multimds cluster
https://github.com/ceph/ceph/pull/18384 Zheng Yan
01:50 AM Bug #21843 (Resolved): mds: preserve order of requests during recovery of multimds cluster
There are several cases where requests get processed in the wrong order
1)
touch a/b/f (handled by mds.1, early ...
Zheng Yan

10/17/2017

10:09 PM Bug #21821 (Fix Under Review): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
https://github.com/ceph/ceph/pull/18366 John Spray
09:55 PM Bug #21821: MDSMonitor: mons should reject misconfigured mds_blacklist_interval
A good opportunity to use the new min/max fields on the config option itself.
I suppose if we accept the idea that...
John Spray
05:36 PM Bug #21821 (Resolved): MDSMonitor: mons should reject misconfigured mds_blacklist_interval
There should be a minimum acceptable value otherwise we see potential behavior where a blacklisted MDS is still writi... Patrick Donnelly
02:01 AM Bug #21812 (Closed): standby replay mds may submit log
I wrongly interpreted the log Zheng Yan

10/16/2017

01:19 PM Bug #21812 (Closed): standby replay mds may submit log
magna002://home/smohan/LOGS/cfs-mds.magna116.trunc.log.gz
mds submitted log entry while it's in standby replay sta...
Zheng Yan
12:07 AM Bug #21807 (Pending Backport): mds: trims all unpinned dentries when memory limit is reached
Patrick Donnelly
12:03 AM Backport #21810 (Resolved): luminous: mds: trims all unpinned dentries when memory limit is reached
https://github.com/ceph/ceph/pull/18316 Patrick Donnelly

10/14/2017

08:49 PM Bug #21807 (Fix Under Review): mds: trims all unpinned dentries when memory limit is reached
https://github.com/ceph/ceph/pull/18309 Patrick Donnelly
08:46 PM Bug #21807 (Resolved): mds: trims all unpinned dentries when memory limit is reached
Generally dentries are pinned by the client cache so this was easy to miss in testing. Bug is here:
https://github...
Patrick Donnelly
08:17 AM Backport #21805 (In Progress): luminous: client_metadata can be missing
Nathan Cutler
12:32 AM Backport #21805 (Resolved): luminous: client_metadata can be missing
https://github.com/ceph/ceph/pull/18299 Patrick Donnelly
08:16 AM Backport #21804 (In Progress): luminous: limit internal memory usage of object cacher.
Nathan Cutler
12:23 AM Backport #21804 (Resolved): luminous: limit internal memory usage of object cacher.
https://github.com/ceph/ceph/pull/18298 Patrick Donnelly
12:39 AM Backport #21806 (In Progress): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export...
Patrick Donnelly
12:37 AM Backport #21806 (Resolved): luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pi...
https://github.com/ceph/ceph/pull/18300 Patrick Donnelly
12:16 AM Bug #21512 (Pending Backport): qa: libcephfs_interface_tests: shutdown race failures
Patrick Donnelly
12:14 AM Bug #21726 (Pending Backport): limit internal memory usage of object cacher.
Patrick Donnelly
12:14 AM Bug #21746 (Pending Backport): client_metadata can be missing
Patrick Donnelly
12:13 AM Bug #21759 (Pending Backport): Assertion in EImportStart::replay should be a damaged()
Patrick Donnelly
12:12 AM Bug #21768 (Pending Backport): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
Patrick Donnelly

10/13/2017

07:18 PM Feature #21601 (Resolved): ceph_volume_client: add get, put, and delete object interfaces
Patrick Donnelly
07:18 PM Backport #21602 (Resolved): luminous: ceph_volume_client: add get, put, and delete object interfaces
Patrick Donnelly
12:37 AM Bug #21777 (Fix Under Review): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
https://github.com/ceph/ceph/pull/18278 Patrick Donnelly

10/12/2017

10:47 PM Bug #21777 (Need More Info): src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
MDS may send a MMDSCacheRejoin(MMDSCacheRejoin::OP_WEAK) message to an MDS which is not rejoin/active/stopping. Once ... Patrick Donnelly
04:04 AM Bug #21768 (Fix Under Review): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
https://github.com/ceph/ceph/pull/18261 Zheng Yan
03:58 AM Bug #21768 (Resolved): FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
... Zheng Yan

10/11/2017

09:38 PM Bug #21765: auth|doc: fs authorize error for existing credentials confusing/unclear
Doug, please take this one. Patrick Donnelly
09:37 PM Bug #21765 (Resolved): auth|doc: fs authorize error for existing credentials confusing/unclear
If you attempt to use `fs authorize` on a key that already exists you get an error like:
https://github.com/ceph/c...
Patrick Donnelly
03:47 PM Bug #21764 (Resolved): common/options.cc: Update descriptions and visibility levels for MDS/clien...
Go through the options in common/options.cc and figure out which should be LEVEL_DEV (hidden in the UI). BASIC/ADVANC... Patrick Donnelly
12:02 PM Bug #21748: client assertions tripped during some workloads
Huh. That is an interesting theory. I don't see how ganesha would do that, but maybe. Unfortunately, the original pro... Jeff Layton
08:27 AM Bug #21748: client assertions tripped during some workloads
This shouldn't happen even for a traceless reply. I suspect the 'in' passed to ceph_ll_setattr doesn't belong to the 'cmo... Zheng Yan
11:03 AM Bug #21759 (Fix Under Review): Assertion in EImportStart::replay should be a damaged()
https://github.com/ceph/ceph/pull/18244 John Spray
10:35 AM Bug #21759 (Resolved): Assertion in EImportStart::replay should be a damaged()

This is one of a number of assertions that still linger in journal.cc, but since it's been seen in the wild ("[ceph...
John Spray
09:11 AM Bug #21754: mds: src/osdc/Journaler.cc: 402: FAILED assert(!r)
... Zheng Yan
06:31 AM Bug #21749: PurgeQueue corruption in 12.2.1
Hi Yan,
yes, we had 3 MDS running in standby-replay mode (I switched them to standby now).
Thanks for the offer...
Daniel Baumann
02:58 AM Bug #21749: PurgeQueue corruption in 12.2.1
likely caused by http://tracker.ceph.com/issues/19593.
ping 'yanzheng' at ceph@OFTC, I will help you to recover th...
Zheng Yan
02:40 AM Bug #21745: mds: MDBalancer using total (all time) request count in load statistics
Although it is simple to add last_timestamp and last_reqcount so that we can get an average TPS, TPS may fluctuat... Xiaoxi Chen

10/10/2017

10:10 PM Bug #21754 (Rejected): mds: src/osdc/Journaler.cc: 402: FAILED assert(!r)
... Patrick Donnelly
04:09 PM Bug #21749: PurgeQueue corruption in 12.2.1
I saved all information/logs/objects, feel free to ask for any of it and further things.
Regards,
Daniel
Daniel Baumann
12:05 PM Bug #21749 (Duplicate): PurgeQueue corruption in 12.2.1
From "[ceph-users] how to debug (in order to repair) damaged MDS (rank)?"
Log snippet during MDS startup:...
John Spray
02:18 PM Bug #21748: client assertions tripped during some workloads
Actually this is wrong (as Zheng pointed out). The call is made with a zero-length path that starts from the inode on... Jeff Layton
10:44 AM Bug #21748: client assertions tripped during some workloads
The right fix is probably to just remove that assertion. I don't think it's really valid anyway. cephfs turns the ino... Jeff Layton
10:42 AM Bug #21748 (Can't reproduce): client assertions tripped during some workloads
We had a report of some crashes in ganesha here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/215
Dan and ...
Jeff Layton
12:51 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
The trimsnap states. The rmdir actually completes quickly, but the resulting operations throw the entire cluster int... Wyllys Ingersoll
02:51 AM Bug #21412: cephfs: too many cephfs snapshots chokes the system
What do you mean by "it takes almost 24 hours to delete a single snapshot"? 'rmdir .snap/xxx' took 24 hours, or pgs on ... Zheng Yan
09:55 AM Bug #21746 (Fix Under Review): client_metadata can be missing
https://github.com/ceph/ceph/pull/18215 Zheng Yan
09:52 AM Bug #21746 (Resolved): client_metadata can be missing
session opened by Server::prepare_force_open_sessions() has no client metadata. Zheng Yan
09:47 AM Bug #21745 (Resolved): mds: MDBalancer using total (all time) request count in load statistics
This was pointed out by Xiaoxi Chen
The get_req_rate() function is returning the value of l_mds_request, which is ...
John Spray

10/09/2017

06:25 PM Bug #21405 (Fix Under Review): qa: add EC data pool to testing
https://github.com/ceph/ceph/pull/18192 Sage Weil
05:50 PM Bug #21734 (Duplicate): mount client shows total capacity of cluster but not of a pool
SERVER:
ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
44637G...
Petr Malkov
01:56 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
Note, the bug says "10.2.7" but we have since upgraded to 10.2.9 and the same problem exists. Wyllys Ingersoll
01:55 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
Here is a dump of the cephfs 'dentry_lru' table, in case it is interesting. Wyllys Ingersoll
01:53 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
Here is data collected from a recent attempt to delete a very old and very large snapshot:
The snapshot extended a...
Wyllys Ingersoll
10:30 AM Bug #21726 (Fix Under Review): limit internal memory usage of object cacher.
https://github.com/ceph/ceph/pull/18183 Zheng Yan
10:22 AM Bug #21726 (Resolved): limit internal memory usage of object cacher.
https://bugzilla.redhat.com/show_bug.cgi?id=1490814
Zheng Yan
07:07 AM Bug #21722: mds: no assertion on inode being purging in find_ino_peers()
https://github.com/ceph/ceph/pull/18174 Zhi Zhang
07:06 AM Bug #21722 (Resolved): mds: no assertion on inode being purging in find_ino_peers()
Recently we hit an assertion on MDS only few times when MDS was very busy.... Zhi Zhang
03:43 AM Bug #20938: CephFS: concurrent access to file from multiple nodes blocks for seconds
The RHEL 7 kernel supports FUSE_AUTO_INVAL_DATA, but FUSE_CAP_DONT_MASK was added in libfuse 3.0. Currently no major li... Zheng Yan

10/06/2017

05:46 PM Feature #15066: multifs: Allow filesystems to be assigned RADOS namespace as well as pool for met...
we should default to using a namespace named after the filesystem unless otherwise specified. Douglas Fuller
05:45 PM Feature #21709 (New): ceph fs authorize should detect the correct data namespace
when per-FS data namespaces are enabled, ceph fs authorize should be updated to issue caps for them Douglas Fuller
02:21 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
I have a PR up that seems to fix this, but it may not be what we need. env_to_vec seems like it ought to be reworked ... Jeff Layton

10/05/2017

12:04 PM Bug #21512: qa: libcephfs_interface_tests: shutdown race failures
Looking now to see if we can somehow just fix up lockdep for this. Most of the problems I have seen are fal... Jeff Layton
 
