Activity
From 05/10/2018 to 06/08/2018
06/08/2018
- 09:08 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- We've talked about this quite a lot in the past. I thought we had a tracker ticket for it, but on searching the most ...
- 06:21 PM Feature #24461: cephfs: improve file create performance buffering file unlink/create operations
- Neat. NFS and SMB have directory delegations/leases, but I haven't studied the topic in detail.
So the idea is to ...
- 05:10 PM Feature #24461 (Resolved): cephfs: improve file create performance buffering file unlink/create o...
- **Serialized single-client** file creation (e.g. untar/rsync) is an area CephFS (and most distributed file systems) c...
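As a rough conceptual sketch only (LeasedDir, PendingCreate and flush_batch are hypothetical names, not CephFS client code), buffering creates under a directory lease and flushing them to the MDS in one batch might look like:
    // Hypothetical sketch: queue file creates locally while a directory lease
    // is held, then flush them in one batched request instead of one round
    // trip per create. Purely illustrative.
    #include <iostream>
    #include <string>
    #include <vector>

    struct PendingCreate {
        std::string name;   // file name to create in the leased directory
        unsigned mode;      // requested mode bits
    };

    class LeasedDir {
    public:
        // Queue a create locally; no network round trip yet.
        void create(const std::string& name, unsigned mode) {
            pending_.push_back({name, mode});
        }
        // Flush all buffered creates in a single (imaginary) batched request.
        std::size_t flush_batch() {
            std::size_t n = pending_.size();
            // A real client would send one message carrying all entries here.
            pending_.clear();
            return n;
        }
    private:
        std::vector<PendingCreate> pending_;
    };

    int main() {
        LeasedDir dir;
        for (int i = 0; i < 1000; ++i)
            dir.create("file" + std::to_string(i), 0644);
        std::cout << "flushed " << dir.flush_batch() << " creates in one batch\n";
    }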
- 07:08 PM Feature #24465 (Resolved): client: allow client to leave state intact on MDS when tearing down ob...
- When ganesha shuts down cleanly, it'll tear down all of its filehandle objects and release the files that it has open...
- 05:50 PM Feature #24464 (New): cephfs: file-level snapshots
- Use-case is to support dropbox-style versioning of files.
- 05:46 PM Feature #24463 (Resolved): kclient: add btime support
- 05:43 PM Feature #24462 (New): MDSMonitor: check for mixed version MDS
- And create a health error if it detects this.
- 09:00 AM Bug #24173 (In Progress): ceph_volume_client: allow atomic update of RADOS objects
- https://github.com/ceph/ceph/pull/22455
06/07/2018
- 01:30 PM Backport #24296: mimic: repeated eviction of idle client until some IO happens
- just replace 'cbegin()' with begin()
- 01:07 PM Backport #24296 (Need More Info): mimic: repeated eviction of idle client until some IO happens
- While backporting changes related to tracker 24052, I am getting a 'cbegin' not found compilation error:
/home/pdvian/backpo...
- 01:11 PM Bug #24435 (Resolved): doc: incorrect snaprealm format upgrade process in mimic release note
- 01:07 PM Bug #24435 (Pending Backport): doc: incorrect snaprealm format upgrade process in mimic release note
- 01:11 PM Backport #24451 (Rejected): mimic: doc: incorrect snaprealm format upgrade process in mimic relea...
- Nevermind, this doc doesn't exist in mimic.
- 01:08 PM Backport #24451 (Rejected): mimic: doc: incorrect snaprealm format upgrade process in mimic relea...
- 08:23 AM Feature #24444 (Resolved): cephfs: make InodeStat, DirStat, LeaseStat versioned
- Make InodeStat/DirStat/LeaseStat versioned, so client can decode InodeStat in request reply without checking mds feat...
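A minimal toy sketch of version-prefixed encoding (not Ceph's actual encode/decode macros; InodeStatLike is a hypothetical stand-in), showing how a decoder can read the fields it knows and skip the rest by length instead of consulting MDS feature bits:
    #include <cstdint>
    #include <cstring>
    #include <iostream>
    #include <vector>

    // Append the raw bytes of a trivially copyable value to the buffer.
    template <typename T>
    void put(std::vector<uint8_t>& buf, const T& v) {
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
        buf.insert(buf.end(), p, p + sizeof(T));
    }

    template <typename T>
    void get(const std::vector<uint8_t>& buf, size_t& off, T& v) {
        std::memcpy(&v, buf.data() + off, sizeof(T));
        off += sizeof(T);
    }

    // Hypothetical stand-in for InodeStat: v1 had only 'size', v2 added 'btime'.
    struct InodeStatLike {
        uint64_t size = 0;
        uint64_t btime = 0;

        void encode(std::vector<uint8_t>& buf) const {
            put<uint8_t>(buf, 2);                       // struct version
            put<uint32_t>(buf, 2 * sizeof(uint64_t));   // payload length
            put(buf, size);
            put(buf, btime);                            // v2 field
        }

        void decode(const std::vector<uint8_t>& buf, size_t& off) {
            uint8_t ver; uint32_t len;
            get(buf, off, ver);
            get(buf, off, len);
            size_t end = off + len;
            get(buf, off, size);
            if (ver >= 2)
                get(buf, off, btime);  // only present in v2 or later
            off = end;                 // skip any fields this decoder doesn't know
        }
    };

    int main() {
        InodeStatLike a;
        a.size = 4096;
        a.btime = 1528400000;
        std::vector<uint8_t> buf;
        a.encode(buf);
        InodeStatLike b; size_t off = 0;
        b.decode(buf, off);
        std::cout << b.size << " " << b.btime << "\n";
    }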
- 07:34 AM Feature #20598 (Fix Under Review): mds: revisit LAZY_IO
- https://github.com/ceph/ceph/pull/22450
- 06:31 AM Bug #24441: Ceph fs new cephfs command failed when meta pool already contains some objects
- ceph version 10.2.10:
when the meta pool has objects, running `ceph fs new cephfs meta data` could still create the fs successfully.
...
- 06:23 AM Bug #24441 (Closed): Ceph fs new cephfs command failed when meta pool already contains some objects
- ceph fs new cephfs meta4 data
Error EINVAL: pool 'meta4' already contains some objects. Use an empty pool instead.
- 03:04 AM Bug #24440: common/DecayCounter: set last_decay to current time when decoding decay counter
- https://github.com/ceph/ceph/pull/22357
- 03:03 AM Bug #24440 (Resolved): common/DecayCounter: set last_decay to current time when decoding decay co...
- Recently we found mds load might become zero on another MDS under multi-MDSes scenario. The ceph version is Luminous....
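A simplified illustration of the decode pitfall named in the title (not the real DecayCounter): if a stale last_decay timestamp is kept after decoding, the first read decays the imported value over the whole gap and the load collapses to roughly zero; resetting last_decay to the current time on decode preserves it.
    #include <chrono>
    #include <cmath>
    #include <iostream>

    // Simplified decay counter: value halves every 'half_life' seconds.
    // Not Ceph's DecayCounter, just an illustration of the decode pitfall.
    struct SimpleDecayCounter {
        double value = 0;
        double half_life = 5.0;
        std::chrono::steady_clock::time_point last_decay =
            std::chrono::steady_clock::now();

        double get(std::chrono::steady_clock::time_point now) {
            double secs = std::chrono::duration<double>(now - last_decay).count();
            value *= std::exp(-secs * std::log(2.0) / half_life);
            last_decay = now;
            return value;
        }
    };

    int main() {
        using namespace std::chrono;
        SimpleDecayCounter c;
        c.value = 100.0;

        // Buggy decode: keep whatever timestamp was encoded (here: far in the
        // past), so the first read decays the value over the whole gap -> ~0.
        SimpleDecayCounter buggy = c;
        buggy.last_decay = steady_clock::now() - seconds(3600);
        std::cout << "buggy decode reads: " << buggy.get(steady_clock::now()) << "\n";

        // Fixed decode: reset last_decay to "now", so the imported value is kept.
        SimpleDecayCounter fixed = c;
        fixed.last_decay = steady_clock::now();
        std::cout << "fixed decode reads: " << fixed.get(steady_clock::now()) << "\n";
    }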
06/06/2018
- 10:28 PM Documentation #24093 (Resolved): doc: Update *remove a metadata server*
- 09:23 PM Bug #24435 (Fix Under Review): doc: incorrect snaprealm format upgrade process in mimic release note
- https://github.com/ceph/ceph/pull/22445
- 09:17 PM Bug #24435 (In Progress): doc: incorrect snaprealm format upgrade process in mimic release note
- 01:55 PM Bug #24435 (Resolved): doc: incorrect snaprealm format upgrade process in mimic release note
- The commands to upgrade snaprealm format in release note are
ceph daemon <mds of rank 0> scrub_path /
ceph daemon...
- 08:49 AM Bug #24028: CephFS flock() on a directory is broken
- In a FUSE filesystem, flock on a directory is handled by the VFS; there is nothing ceph-fuse can do.
- 08:12 AM Bug #24028: CephFS flock() on a directory is broken
- In that case the flock() syscall on a FUSE-mounted directory should return ENOTSUPP? In any case we must not allow unsafe lo...
- 07:46 AM Bug #24028: CephFS flock() on a directory is broken
- ceph-fuse does not support file locks on directories. It's a limitation of the fuse kernel module.
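A quick stand-alone check of flock() behaviour on a directory fd, which could be pointed at a ceph-fuse mount to see what is actually returned (plain POSIX calls, not part of the ceph tree):
    #include <cstdio>
    #include <fcntl.h>
    #include <sys/file.h>
    #include <unistd.h>

    // Try to take an exclusive flock on a directory and report the result.
    int main(int argc, char** argv) {
        const char* path = argc > 1 ? argv[1] : ".";
        int fd = open(path, O_RDONLY | O_DIRECTORY);
        if (fd < 0) { std::perror("open"); return 1; }
        if (flock(fd, LOCK_EX | LOCK_NB) < 0)
            std::perror("flock");               // e.g. an error if unsupported
        else
            std::printf("flock on %s succeeded\n", path);
        close(fd);
        return 0;
    }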
- 07:12 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- http://tracker.ceph.com/issues/17177 can explain this issue. A full filesystem scrub should repair incorrect dirstat/rs...
- 06:24 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Zheng Yan wrote:
> there are lots of inodes with incorrect dirstat/rstat. Have you ever run 'journal reset' before t...
- 02:16 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- there are lots of inodes with incorrect dirstat/rstat. Have you ever run 'journal reset' before the crash?
- 02:07 AM Feature #24430 (Resolved): libcephfs: provide API to change umask
- The current use-case will be the CephFS shell.
06/05/2018
- 09:05 PM Feature #24429 (Duplicate): fs: implement snapshot count limit by subtree
- e.g. don't let a subtree have more than 7 snapshots. This should be configurable via an xattr.
Idea is from Dan va...
- 06:06 PM Feature #24426 (New): mds: add second level cache backed by local SSD or NVRAM
- Idea is to have a second level to the MDS cache to improve access time and reduce reads on the metadata pool. This wo...
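Purely as an illustration of the lookup path such a tier implies (none of these names exist in the MDS; an in-memory map stands in for L1 and a second map for the SSD/NVRAM tier, with the metadata pool consulted only on a double miss):
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>
    #include <unordered_map>

    // Illustrative two-level metadata cache: hot entries in RAM, a larger
    // second tier standing in for a local SSD/NVRAM store.
    class TieredCache {
    public:
        std::optional<std::string> lookup(const std::string& ino) {
            if (auto it = l1_.find(ino); it != l1_.end())
                return it->second;                       // L1 (RAM) hit
            if (auto it = l2_.find(ino); it != l2_.end()) {
                l1_[ino] = it->second;                   // promote to L1
                return it->second;                       // L2 (SSD) hit
            }
            return std::nullopt;                         // would read metadata pool
        }
        void insert(const std::string& ino, const std::string& meta) {
            l1_[ino] = meta;
            l2_[ino] = meta;                             // write-through to L2
        }
    private:
        std::unordered_map<std::string, std::string> l1_;  // small, fast tier
        std::map<std::string, std::string> l2_;            // larger, slower tier
    };

    int main() {
        TieredCache cache;
        cache.insert("0x10000000001", "dentry metadata");
        std::cout << cache.lookup("0x10000000001").value_or("miss -> RADOS read") << "\n";
        std::cout << cache.lookup("0x10000000002").value_or("miss -> RADOS read") << "\n";
    }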
- 02:47 PM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
- > change default of mds_snap_max_uid to 0
Use-cases such as Manila let the users mount with root so this will be i...
- 02:19 AM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
- maybe we can use 'auth string'
- 10:42 AM Bug #24403: mon failed to return metadata for mds
- I have updated first telegeo02 with no different result (as mds on telegeo02 was standby as last one rebooted)
The...
- 09:14 AM Feature #22446: mds: ask idle client to trim more caps
- Can I get a few implementation-specific details to get started working on this issue?
And for clarity on my side, we...
- 08:27 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Zheng Yan wrote:
> do you have the full log (the time mds started replay to mds crash)? thanks
Full MDS log starting...
- 12:50 AM Bug #23032 (Resolved): mds: underwater dentry check in CDir::_omap_fetched is racy
- 12:49 AM Backport #23157 (Resolved): luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
- 12:49 AM Backport #22696 (Resolved): luminous: client: dirty caps may never get the chance to flush
06/04/2018
- 10:57 PM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- do you have the full log (the time mds started replay to mds crash)? thanks
- 02:06 PM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Zheng Yan wrote:
> do you have mds log just before the crash
Excellent timing - we've just finished trawling thro...
- 01:55 PM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- do you have mds log just before the crash
- 08:02 AM Bug #24400: CephFS - All MDS went offline and required repair of filesystem
- Forgot to say - one of the logs was taken with debug enabled (thus the size). Can provide whole log if needed
- 07:45 AM Bug #24400 (Can't reproduce): CephFS - All MDS went offline and required repair of filesystem
- Hi,
Raising this in case we can get some more insight and/or it helps others.
We have a 12.2.5 cluster provising...
- 09:14 PM Bug #24241 (New): NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
- 06:15 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- Sage Weil wrote:
> A few questions:
>
> - What is the sha1 of? The object's content? That isn't necessarily kno...
- 05:59 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- John Spray wrote:
> Patrick Donnelly wrote:
> > John Spray wrote:
> > > I'm a fan. Questions that spring to mind:...
- 02:20 PM Bug #24403: mon failed to return metadata for mds
- The "sen2agriprod" server actually runs on centOS7 (kernel 3.10.0) which is in the recommended platforms.
If you t... - 01:30 PM Bug #24403: mon failed to return metadata for mds
- please try newer kernel
- 10:04 AM Bug #24403 (Resolved): mon failed to return metadata for mds
- Hello,
Redigging an error found into the ceph-users mailing list: http://lists.ceph.com/pipermail/ceph-users-ceph....
- 01:41 PM Bug #24306 (In Progress): mds: use intrusive_ptr to manage Message life-time
- 09:34 AM Bug #24172 (Resolved): client: fails to respond cap revoke from non-auth mds
- 05:39 AM Bug #23214 (Resolved): doc: Fix -d option in ceph-fuse doc
- 05:36 AM Bug #23248 (Resolved): ceph-fuse: trim ceph-fuse -V output
- 01:24 AM Backport #23704 (Resolved): luminous: ceph-fuse: broken directory permission checking
- 01:24 AM Backport #23770 (Resolved): luminous: ceph-fuse: return proper exit code
- 01:22 AM Backport #23818 (Resolved): luminous: client: add option descriptions and review levels (e.g. LEV...
- 01:22 AM Backport #23475 (Resolved): luminous: ceph-fuse: trim ceph-fuse -V output
- 01:21 AM Backport #23835 (Resolved): luminous: mds: fix occasional dir rstat inconsistency between multi-M...
- 01:21 AM Backport #23638 (Resolved): luminous: ceph-fuse: getgroups failure causes exception
- 01:20 AM Backport #23933 (Resolved): luminous: client: avoid second lock on client_lock
- 01:17 AM Backport #23931 (Resolved): luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < ...
- 01:16 AM Backport #23936 (Resolved): luminous: cephfs-journal-tool: segfault during journal reset
- 01:16 AM Backport #23950 (Resolved): luminous: mds: stopping rank 0 cannot shutdown until log is trimmed
- 01:15 AM Backport #23951 (Resolved): luminous: mds: stuck during up:stopping
- 01:15 AM Backport #23984 (Resolved): luminous: mds: scrub on fresh file system fails
- 01:14 AM Backport #23935 (Resolved): luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
- 01:13 AM Backport #23991 (Resolved): luminous: client: hangs on umount if it had an MDS session evicted
- 01:13 AM Backport #24050 (Resolved): luminous: mds: MClientCaps should carry inode's dirstat
- 01:12 AM Backport #24049 (Resolved): luminous: ceph-fuse: missing dentries in readdir result
- 01:12 AM Backport #23946 (Resolved): luminous: mds: crash when failover
- 01:10 AM Backport #24107 (Resolved): luminous: PurgeQueue::_consume() could return true when there were no...
- 01:09 AM Backport #24108 (Resolved): luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
- 01:03 AM Backport #24130 (Resolved): luminous: mds: race with new session from connection and imported ses...
- 01:02 AM Backport #24188 (Resolved): luminous: kceph: umount on evicted client blocks forever
- 01:01 AM Backport #24201 (Resolved): luminous: client: fails to respond cap revoke from non-auth mds
- 01:00 AM Backport #24207 (Resolved): luminous: client: deleted inode's Bufferhead which was in STATE::Tx w...
- 12:59 AM Bug #24289 (Resolved): mds memory leak
- 12:57 AM Backport #23982 (Resolved): luminous: qa: TestVolumeClient.test_lifecycle needs updated for new e...
- 12:55 AM Backport #24205 (Resolved): luminous: mds: broadcast quota to relevant clients when quota is expl...
- 12:53 AM Backport #24189 (Resolved): luminous: qa: kernel_mount.py umount must handle timeout arg
- 12:52 AM Backport #24341 (Resolved): luminous: mds memory leak
06/02/2018
06/01/2018
- 11:44 AM Documentation #24093 (Fix Under Review): doc: Update *remove a metadata server*
- https://github.com/ceph/ceph/pull/22338
- 02:43 AM Bug #24369 (Fix Under Review): luminous: checking quota while holding cap ref may deadlock
- https://github.com/ceph/ceph/pull/22354
- 12:58 AM Bug #24369: luminous: checking quota while holding cap ref may deadlock
- For example:
mds revokes an inode's Fw
mds freezes the subtree that contains the inode
client::_write() calls ...
- 12:52 AM Bug #24369 (Resolved): luminous: checking quota while holding cap ref may deadlock
- 02:10 AM Backport #24372 (Rejected): luminous: mds: root inode's snaprealm doesn't get journalled correctly
- 02:10 AM Backport #24372: luminous: mds: root inode's snaprealm doesn't get journalled correctly
- luminous does not support snapshots
- 02:07 AM Backport #24372 (Rejected): luminous: mds: root inode's snaprealm doesn't get journalled correctly
- 02:08 AM Bug #24370 (Duplicate): luminous: root dir's new snapshot lost when restart mds
- dup of https://tracker.ceph.com/issues/24372
- 01:59 AM Bug #24370 (Duplicate): luminous: root dir's new snapshot lost when restart mds
- affected versions: luminous & mimic
steps to reproduce:
1. ceph-fuse mount /cephfuse
2. write a file /cephfuse/file1
...
- 02:06 AM Backport #24340 (Resolved): mimic: mds memory leak
05/31/2018
- 01:47 PM Bug #24241: NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
- If you have time, it's probably worthwhile to roll a new testcase for ceph_ll_get_stripe_osd for this sort of thing. ...
- 11:57 AM Backport #24345 (Resolved): mimic: mds: root inode's snaprealm doesn't get journalled correctly
05/30/2018
- 12:04 PM Backport #24345 (In Progress): mimic: mds: root inode's snaprealm doesn't get journalled correctly
- https://github.com/ceph/ceph/pull/22322
- 11:52 AM Backport #24345 (Resolved): mimic: mds: root inode's snaprealm doesn't get journalled correctly
- 12:04 PM Bug #24343 (Resolved): mds: root inode's snaprealm doesn't get journalled correctly
- https://github.com/ceph/ceph/pull/22320
- 11:25 AM Bug #24343 (Resolved): mds: root inode's snaprealm doesn't get journalled correctly
- 03:47 AM Backport #24341 (In Progress): luminous: mds memory leak
- https://github.com/ceph/ceph/pull/22310
- 03:43 AM Backport #24341 (Resolved): luminous: mds memory leak
- https://github.com/ceph/ceph/pull/22310
- 03:42 AM Backport #24340 (In Progress): mimic: mds memory leak
- https://github.com/ceph/ceph/pull/22309
- 03:38 AM Backport #24340 (Resolved): mimic: mds memory leak
- https://github.com/ceph/ceph/pull/22309
- 03:36 AM Bug #24289 (Pending Backport): mds memory leak
05/29/2018
- 09:08 PM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
- We should actually discuss what kind of interface admins want. Dan van der Ster certainly has thoughts; others might ...
- 05:33 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- A few questions:
- What is the sha1 of? The object's content? That isn't necessarily known (e.g. 4 MB object whe...
- 09:58 AM Bug #22269 (Resolved): ceph-fuse: failure to remount in startup test does not handle client_die_o...
- 09:58 AM Backport #22378 (Resolved): jewel: ceph-fuse: failure to remount in startup test does not handle ...
- 09:57 AM Backport #23932 (Resolved): jewel: client: avoid second lock on client_lock
- 09:45 AM Backport #24189: luminous: qa: kernel_mount.py umount must handle timeout arg
- Prashant D wrote:
> This tracker should be closed as duplicate tracker for #24188.
Here's what I see happening he...
- 09:40 AM Backport #24331 (Resolved): luminous: mon: mds health metrics sent to cluster log independently
- https://github.com/ceph/ceph/pull/22558
- 09:40 AM Backport #24330 (Resolved): mimic: mon: mds health metrics sent to cluster log independently
- https://github.com/ceph/ceph/pull/22265
05/28/2018
- 04:39 PM Feature #24233 (Closed): Add new command ceph mds status
- 04:38 PM Feature #24233: Add new command ceph mds status
- Patrick Donnelly wrote:
>
> Why can't this information be from `ceph fs status --format=json`? I'm not really se...
- 03:47 AM Backport #24205 (In Progress): luminous: mds: broadcast quota to relevant clients when quota is e...
- https://github.com/ceph/ceph/pull/22271
- 12:49 AM Bug #24269 (Fix Under Review): multimds pjd open test fails
- https://github.com/ceph/ceph/pull/22266
05/27/2018
- 10:20 PM Bug #24308 (Pending Backport): mon: mds health metrics sent to cluster log independently
- mimic backport: https://github.com/ceph/ceph/pull/22265
05/25/2018
- 08:39 PM Backport #24311 (Resolved): luminous: pjd: cd: too many arguments
- https://github.com/ceph/ceph/pull/22883
- 08:39 PM Backport #24310 (Resolved): mimic: pjd: cd: too many arguments
- https://github.com/ceph/ceph/pull/22882
- 07:03 PM Bug #24307 (Pending Backport): pjd: cd: too many arguments
- 04:35 PM Bug #24307: pjd: cd: too many arguments
- https://github.com/ceph/ceph/pull/22233
- 04:21 PM Bug #24307 (Fix Under Review): pjd: cd: too many arguments
- -https://github.com/ceph/ceph/pull/22251-
- 04:20 PM Bug #24307 (Resolved): pjd: cd: too many arguments
- ...
- 04:44 PM Bug #24308 (Fix Under Review): mon: mds health metrics sent to cluster log independently
- 04:44 PM Bug #24308: mon: mds health metrics sent to cluster log independently
- https://github.com/ceph/ceph/pull/22252
- 04:42 PM Bug #24308 (Resolved): mon: mds health metrics sent to cluster log independently
- We generate a health warning, which has its own logging infrastructure. But MDSMonitor is *also* sending them to wrn...
- 03:23 PM Bug #24306 (Resolved): mds: use intrusive_ptr to manage Message life-time
- We're regularly getting bugs relating to messages not getting released. Latest one is #24289.
Use a boost::intrusi... - 03:10 PM Feature #24233: Add new command ceph mds status
- Vikhyat Umrao wrote:
> Thanks John and Patrick for the feedback. I think rename is not needed let us get a new comma...
- 02:51 PM Feature #24305 (Resolved): client/mds: allow renaming across quota boundaries
- Issue here: https://github.com/ceph/ceph/blob/77b35faa36f83d837a5fe2685efcd4b9be59406a/src/client/Client.cc#L12214-L1...
- 11:03 AM Backport #24296 (Resolved): mimic: repeated eviction of idle client until some IO happens
- https://github.com/ceph/ceph/pull/22550
- 11:03 AM Backport #24295 (Resolved): luminous: repeated eviction of idle client until some IO happens
- https://github.com/ceph/ceph/pull/22780
- 10:10 AM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- Patrick Donnelly wrote:
> John Spray wrote:
> > I'm a fan. Questions that spring to mind:
> >
> > - Do we apply...
- 08:16 AM Bug #24289 (Fix Under Review): mds memory leak
- https://github.com/ceph/ceph/pull/22240
- 08:09 AM Bug #24289 (Resolved): mds memory leak
- The code forgets to call message->put() in some cases.
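A toy illustration of the get()/put() protocol and why an RAII holder (what the boost::intrusive_ptr proposal in #24306 would provide) removes this class of leak; ToyMessage and MessageRef are made-up names, not Ceph code:
    #include <iostream>

    // Toy ref-counted message mimicking a get()/put() protocol. Forgetting a
    // put() on some code path leaks the message; an RAII holder releases it
    // on every path automatically.
    struct ToyMessage {
        int nref = 1;
        void get() { ++nref; }
        void put() { if (--nref == 0) delete this; }
    };

    class MessageRef {              // minimal intrusive_ptr-like holder
    public:
        explicit MessageRef(ToyMessage* m) : m_(m) {}   // takes over one ref
        ~MessageRef() { if (m_) m_->put(); }            // always released
        MessageRef(const MessageRef&) = delete;
        MessageRef& operator=(const MessageRef&) = delete;
        ToyMessage* operator->() const { return m_; }
    private:
        ToyMessage* m_;
    };

    void handle_leaky(ToyMessage* m, bool early_return) {
        if (early_return)
            return;                 // bug: forgot m->put() on this path -> leak
        m->put();
    }

    void handle_safe(ToyMessage* m, bool early_return) {
        MessageRef ref(m);          // released on every path, even early returns
        if (early_return)
            return;
    }

    int main() {
        handle_leaky(new ToyMessage, true);   // leaks
        handle_safe(new ToyMessage, true);    // freed
        std::cout << "done\n";
    }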
- 04:05 AM Bug #24052 (Pending Backport): repeated eviction of idle client until some IO happens
- 03:07 AM Feature #24286 (Resolved): tools: create CephFS shell
- > The Ceph file system (CephFS) provides for kernel driver and FUSE client access. In testing and trivial system admi...
- 02:54 AM Bug #24240 (Fix Under Review): qa: 1 mutations had unexpected outcomes
- https://github.com/ceph/ceph/pull/22234
- 02:31 AM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
- Zheng Yan wrote:
> change default of mds_snap_max_uid to 0
Okay, but we should enforce that as a file system opti...
- 02:25 AM Bug #24284: cephfs: allow prohibiting user snapshots in CephFS
- change default of mds_snap_max_uid to 0
- 02:28 AM Feature #24285 (Resolved): mgr: add module which displays current usage of file system (`fs top`)
- It would ideally provide a list of sessions doing I/O, what kind of I/O, bandwidth of reads/writes, etc. Also the sam...
- 02:24 AM Documentation #23775 (Resolved): PendingReleaseNotes: add notes for major Mimic features
- 02:15 AM Feature #9659 (Duplicate): MDS: support cache eviction
- 01:46 AM Bug #23715 (Closed): "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-dist...
- Problem seems to have gone away. Closing.
05/24/2018
- 10:43 PM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
- We're moving this to target 13.2.1.
- 10:40 PM Documentation #23775: PendingReleaseNotes: add notes for major Mimic features
- https://github.com/ceph/ceph/pull/22232
- 10:27 PM Bug #24284 (Resolved): cephfs: allow prohibiting user snapshots in CephFS
- Since snapshots can be used to circumvent (accidentally or not) the quotas as snapshot file data that has since been ...
- 09:03 PM Feature #22370 (Resolved): cephfs: add kernel client quota support
- 08:28 PM Backport #22378: jewel: ceph-fuse: failure to remount in startup test does not handle client_die_...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21162
merged
- 08:28 PM Backport #23932: jewel: client: avoid second lock on client_lock
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21734
merged
- 07:21 PM Backport #24209 (Resolved): mimic: client: deleted inode's Bufferhead which was in STATE::Tx woul...
- 07:21 PM Bug #24111 (Resolved): mds didn't update file's max_size
- 07:21 PM Backport #24187 (Resolved): mimic: mds didn't update file's max_size
- 07:20 PM Backport #24254 (Resolved): mimic: kceph: umount on evicted client blocks forever
- 07:20 PM Backport #24255 (Resolved): mimic: qa: kernel_mount.py umount must handle timeout arg
- 07:17 PM Backport #24186 (Resolved): mimic: client: segfault in trim_caps
- 07:15 PM Backport #24202 (Resolved): mimic: client: fails to respond cap revoke from non-auth mds
- 07:14 PM Backport #24206 (Resolved): mimic: mds: broadcast quota to relevant clients when quota is explici...
- 07:14 PM Bug #24118 (Resolved): mds: crash when using `config set` on tracked configs
- 07:13 PM Backport #24157 (Resolved): mimic: mds: crash when using `config set` on tracked configs
- 07:13 PM Backport #24191 (Resolved): mimic: fs: reduce number of helper debug messages at level 5 for client
- 05:05 PM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- John Spray wrote:
> I'm a fan. Questions that spring to mind:
>
> - Do we apply this to all files, or only large...
- 09:39 AM Feature #24263: client/mds: create a merkle tree of objects to allow efficient generation of diff...
- I'm a fan. Questions that spring to mind:
- Do we apply this to all files, or only large ones based on some heuri...
- 02:11 PM Backport #24201 (In Progress): luminous: client: fails to respond cap revoke from non-auth mds
- https://github.com/ceph/ceph/pull/22221
- 01:47 PM Bug #24240: qa: 1 mutations had unexpected outcomes
- The test case corrupted open file table’s omap header. One field in omap header is ‘num_objects’. The corrupted heade...
- 01:36 PM Bug #24241: NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
- Patrick Donnelly wrote:
> What version of Ceph are you using?
I run vstart cluster from master (last commit in on...
- 05:01 AM Bug #24241 (Need More Info): NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
- What version of Ceph are you using?
- 01:36 PM Feature #24233: Add new command ceph mds status
- Thanks John and Patrick for the feedback. I think rename is not needed let us get a new command which can give status...
- 09:57 AM Bug #23084 (Resolved): doc: update ceph-fuse with FUSE options
- 09:57 AM Backport #23151 (Resolved): luminous: doc: update ceph-fuse with FUSE options
- 09:49 AM Backport #24189 (In Progress): luminous: qa: kernel_mount.py umount must handle timeout arg
- This tracker should be closed as duplicate tracker for #24188.
- 09:42 AM Backport #24188 (In Progress): luminous: kceph: umount on evicted client blocks forever
- https://github.com/ceph/ceph/pull/22208
- 07:48 AM Bug #24269 (Resolved): multimds pjd open test fails
- http://qa-proxy.ceph.com/teuthology/pdonnell-2018-05-23_14:53:33-multimds-wip-pdonnell-testing-20180522.181319-mimic-...
- 04:53 AM Backport #24185 (In Progress): luminous: client: segfault in trim_caps
- 02:21 AM Bug #23972: Ceph MDS Crash from client mounting aufs over cephfs
- The crash was at "mdr->tracedn = mdr->dn[ 0].back()", because mdr->dn[ 0] is empty. request that triggered the crash ...
05/23/2018
- 08:55 PM Feature #24263 (New): client/mds: create a merkle tree of objects to allow efficient generation o...
- Idea is that the collection of objects representing a file would be arranged as a merkle tree. Any write to an object...
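A small illustrative sketch (std::hash stands in for a real digest, and this is not the proposed implementation) of a merkle root over a file's objects: a write to one object only changes the hashes on its path to the root, so two versions of a file can be compared and diffed cheaply.
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Combine two child hashes into a parent hash (toy digest).
    static size_t combine(size_t a, size_t b) {
        return std::hash<std::string>{}(std::to_string(a) + ":" + std::to_string(b));
    }

    // Build the merkle root over the per-object hashes of a file.
    static size_t merkle_root(const std::vector<std::string>& objects) {
        std::vector<size_t> level;
        for (const auto& obj : objects)
            level.push_back(std::hash<std::string>{}(obj));    // leaf hashes
        while (level.size() > 1) {
            std::vector<size_t> next;
            for (size_t i = 0; i < level.size(); i += 2) {
                size_t right = (i + 1 < level.size()) ? level[i + 1] : level[i];
                next.push_back(combine(level[i], right));       // internal node
            }
            level.swap(next);
        }
        return level.empty() ? 0 : level[0];
    }

    int main() {
        std::vector<std::string> v1 = {"obj0", "obj1", "obj2", "obj3"};
        std::vector<std::string> v2 = v1;
        v2[2] = "obj2-modified";                                 // one object written
        std::cout << std::boolalpha
                  << "roots differ: " << (merkle_root(v1) != merkle_root(v2)) << "\n";
    }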
- 07:09 PM Backport #24255 (In Progress): mimic: qa: kernel_mount.py umount must handle timeout arg
- 06:31 PM Backport #24255 (Resolved): mimic: qa: kernel_mount.py umount must handle timeout arg
- https://github.com/ceph/ceph/pull/22138
- 07:08 PM Backport #24254 (In Progress): mimic: kceph: umount on evicted client blocks forever
- 06:31 PM Backport #24254 (Resolved): mimic: kceph: umount on evicted client blocks forever
- https://github.com/ceph/ceph/pull/22138
- 12:44 PM Backport #24107 (In Progress): luminous: PurgeQueue::_consume() could return true when there were...
- https://github.com/ceph/ceph/pull/22176
- 10:50 AM Feature #24233: Add new command ceph mds status
- So I guess Vikhyat is suggesting an "MDS" command to match those for other daemons, but that wouldn't just be a renam...
- 08:11 AM Bug #23826 (Duplicate): mds: assert after daemon restart
- Checked again, it's likely fixed by https://github.com/ceph/ceph/pull/21883/commits/0a38a499b86c0ee13aa0e783a8359bcce...
- 08:08 AM Backport #24108 (In Progress): luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
- 08:07 AM Backport #24108: luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
- https://github.com/ceph/ceph/pull/22171
- 07:19 AM Backport #23946: luminous: mds: crash when failover
- @Nathan @Patrick I have cherry-picked pr21769 as well. Please review pr21900.
- 07:17 AM Bug #24241 (New): NFS-Ganesha libcephfs: Assert failure in object_locator_to_pg
- When calling ceph_ll_get_stripe_osd from nfs-ganesha fsal ceph in file mds.c, assertion failure causes segmentation f...
- 03:46 AM Backport #24207 (In Progress): luminous: client: deleted inode's Bufferhead which was in STATE::T...
- https://github.com/ceph/ceph/pull/22168
- 03:02 AM Bug #24239 (Fix Under Review): cephfs-journal-tool: Importing a zero-length purge_queue journal b...
- https://github.com/ceph/ceph/pull/22144
- 02:33 AM Bug #24239 (Resolved): cephfs-journal-tool: Importing a zero-length purge_queue journal breaks it...
- When we were importing a zero-length purge_queue journal exported previously, the last object and
the following one ...
- 02:57 AM Bug #24236 (Fix Under Review): cephfs-journal-tool: journal inspect reports DAMAGED for purge que...
- https://github.com/ceph/ceph/pull/22146
- 02:19 AM Bug #24236: cephfs-journal-tool: journal inspect reports DAMAGED for purge queue when it's empty
- https://github.com/ceph/ceph/pull/22146
before the fix, in a newly created cluster fs, run:
$cephfs-journal-tool --jour...
- 02:18 AM Bug #24236 (Fix Under Review): cephfs-journal-tool: journal inspect reports DAMAGED for purge que...
- When the purge queue is empty, journal inspect still reports DAMAGED
journal integrity.
- 02:46 AM Bug #24240 (Resolved): qa: 1 mutations had unexpected outcomes
- ...
- 02:46 AM Bug #24238 (Fix Under Review): test gets ENOSPC from bluestore block device
- https://github.com/ceph/ceph/pull/22165
- 02:43 AM Bug #24238: test gets ENOSPC from bluestore block device
- The underlying block device was thinly provisioned and ran out of space. Asserting on ENOSPC from a block IO is expe...
- 02:42 AM Bug #24238: test gets ENOSPC from bluestore block device
- 02:32 AM Bug #24238 (Resolved): test gets ENOSPC from bluestore block device
- ...
- 01:36 AM Feature #22372 (Resolved): kclient: implement quota handling using new QuotaRealm
05/22/2018
- 10:18 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- Zheng, do you think this is also resolved by the fix to #23826?
- 10:12 PM Bug #24129 (In Progress): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap...
- 10:11 PM Documentation #24093 (In Progress): doc: Update *remove a metadata server*
- 10:08 PM Bug #24101 (Closed): mds: deadlock during fsstress workunit with 9 actives
- Apparently resolved by the revert.
- 10:03 PM Feature #23689: qa: test major/minor version upgrades
- Partially addressed by the QA suite fs:upgrade:snaps.
- 09:48 PM Feature #24233: Add new command ceph mds status
- Why does the command need to be renamed?
- 08:49 PM Feature #24233 (Closed): Add new command ceph mds status
- Add new command ceph mds status
For more information please check - https://tracker.ceph.com/issues/24217
Changin...
- 09:42 PM Feature #20598: mds: revisit LAZY_IO
- See also: https://github.com/ceph/ceph/pull/21067
- 08:12 PM Bug #23972: Ceph MDS Crash from client mounting aufs over cephfs
- John Spray wrote:
> Any chance you can reproduce this with debuginfo packages installed, so that we can get meaningf...
- 01:54 PM Backport #24191 (In Progress): mimic: fs: reduce number of helper debug messages at level 5 for c...
- 01:53 PM Bug #24177: qa: fsstress workunit does not execute in parallel on same host without clobbering files
- I suspect the problem is in unpacking and building ltp. The fsstress commands already use a pid-specific directory. H...
- 01:52 PM Backport #24157 (In Progress): mimic: mds: crash when using `config set` on tracked configs
- 01:48 PM Backport #24209 (In Progress): mimic: client: deleted inode's Bufferhead which was in STATE::Tx w...
- https://github.com/ceph/ceph/pull/22136
- 01:48 PM Backport #24187 (In Progress): mimic: mds didn't update file's max_size
- https://github.com/ceph/ceph/pull/22137
- 01:46 PM Backport #24186 (In Progress): mimic: client: segfault in trim_caps
- https://github.com/ceph/ceph/pull/22139
- 01:45 PM Backport #24202 (In Progress): mimic: client: fails to respond cap revoke from non-auth mds
- 01:44 PM Backport #24206 (In Progress): mimic: mds: broadcast quota to relevant clients when quota is expl...
05/21/2018
- 01:28 PM Backport #24049 (In Progress): luminous: ceph-fuse: missing dentries in readdir result
- https://github.com/ceph/ceph/pull/22119
- 01:21 PM Backport #24050 (In Progress): luminous: mds: MClientCaps should carry inode's dirstat
- https://github.com/ceph/ceph/pull/22118
- 08:49 AM Backport #24209 (Resolved): mimic: client: deleted inode's Bufferhead which was in STATE::Tx woul...
- https://github.com/ceph/ceph/pull/22136
- 08:49 AM Backport #24208 (Rejected): jewel: client: deleted inode's Bufferhead which was in STATE::Tx woul...
- 08:49 AM Backport #24207 (Resolved): luminous: client: deleted inode's Bufferhead which was in STATE::Tx w...
- https://github.com/ceph/ceph/pull/22168
- 08:48 AM Backport #24206 (Resolved): mimic: mds: broadcast quota to relevant clients when quota is explici...
- https://github.com/ceph/ceph/pull/22141
- 08:48 AM Backport #24205 (Resolved): luminous: mds: broadcast quota to relevant clients when quota is expl...
- https://github.com/ceph/ceph/pull/22271
- 08:48 AM Backport #24202 (Resolved): mimic: client: fails to respond cap revoke from non-auth mds
- https://github.com/ceph/ceph/pull/22140
- 08:48 AM Backport #24201 (Resolved): luminous: client: fails to respond cap revoke from non-auth mds
- https://github.com/ceph/ceph/pull/22221
05/20/2018
- 11:56 PM Bug #24133 (Pending Backport): mds: broadcast quota to relevant clients when quota is explicitly set
- 11:55 PM Bug #23837 (Pending Backport): client: deleted inode's Bufferhead which was in STATE::Tx would le...
- 11:55 PM Bug #24172 (Pending Backport): client: fails to respond cap revoke from non-auth mds
05/19/2018
- 10:05 AM Backport #24191 (Resolved): mimic: fs: reduce number of helper debug messages at level 5 for client
- https://github.com/ceph/ceph/pull/22154
- 10:05 AM Backport #24190 (Resolved): luminous: fs: reduce number of helper debug messages at level 5 for c...
- https://github.com/ceph/ceph/pull/23014
- 10:04 AM Backport #24189 (Resolved): luminous: qa: kernel_mount.py umount must handle timeout arg
- https://github.com/ceph/ceph/pull/22208
- 10:04 AM Backport #24188 (Resolved): luminous: kceph: umount on evicted client blocks forever
- https://github.com/ceph/ceph/pull/22208
- 10:04 AM Backport #24187 (Resolved): mimic: mds didn't update file's max_size
- https://github.com/ceph/ceph/pull/22137
- 10:04 AM Backport #24186 (Resolved): mimic: client: segfault in trim_caps
- https://github.com/ceph/ceph/pull/22139
- 09:57 AM Bug #24054 (Pending Backport): kceph: umount on evicted client blocks forever
- 09:56 AM Bug #24053 (Pending Backport): qa: kernel_mount.py umount must handle timeout arg
- 04:45 AM Backport #24185 (Resolved): luminous: client: segfault in trim_caps
- https://github.com/ceph/ceph/pull/22201
- 04:38 AM Bug #24137 (Pending Backport): client: segfault in trim_caps
05/18/2018
- 09:33 PM Bug #24111 (Pending Backport): mds didn't update file's max_size
- 09:32 PM Bug #21014 (Pending Backport): fs: reduce number of helper debug messages at level 5 for client
- 07:38 PM Bug #24177 (Resolved): qa: fsstress workunit does not execute in parallel on same host without cl...
- ...
- 05:10 PM Bug #24172: client: fails to respond cap revoke from non-auth mds
- adding mimic backport because the PR targets master
- 12:17 PM Bug #24172: client: fails to respond cap revoke from non-auth mds
- this can also explain http://tracker.ceph.com/issues/23350
- 12:10 PM Bug #24172 (Fix Under Review): client: fails to respond cap revoke from non-auth mds
- 12:10 PM Bug #24172: client: fails to respond cap revoke from non-auth mds
- https://github.com/ceph/ceph/pull/22080
- 12:06 PM Bug #24172 (Resolved): client: fails to respond cap revoke from non-auth mds
- 02:33 PM Bug #24173: ceph_volume_client: allow atomic update of RADOS objects
- Greg Farnum's suggestions to do atomic RADOS object updates,
"If you've already got code that does all these t...
- 02:13 PM Bug #24173 (Resolved): ceph_volume_client: allow atomic update of RADOS objects
- The manila driver needs the ceph_volume_client to atomically update contents
of RADOS objects used to store ganesha'...
- 02:09 PM Bug #23393 (Resolved): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
- 12:14 AM Bug #24137 (In Progress): client: segfault in trim_caps
- https://github.com/ceph/ceph/pull/22073
05/17/2018
- 08:45 AM Backport #23946 (In Progress): luminous: mds: crash when failover
- 08:44 AM Bug #24137: client: segfault in trim_caps
- compile test_trim_caps.cc with the newest libcephfs. set mds_min_caps_per_client to 1, set mds_max_ratio_caps_per_cli...
- 04:44 AM Bug #24137: client: segfault in trim_caps
- Zheng Yan wrote:
> The problem is that the anchor only pins the current inode. Client::unlink() may still drop the reference on ...
- 12:44 AM Bug #24137: client: segfault in trim_caps
- The problem is that the anchor only pins the current inode. Client::unlink() may still drop the reference on its parent inode.
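As a generic illustration only (shared_ptr stands in for the client's inode refcounting; this is not the Client.cc fix), the difference between pinning just the current inode and also pinning its parent before calling something that may drop references:
    #include <iostream>
    #include <memory>
    #include <string>

    // Toy inode: a child keeps a reference to its parent.
    struct ToyInode {
        std::string name;
        std::shared_ptr<ToyInode> parent;
        ~ToyInode() { std::cout << "freed " << name << "\n"; }
    };

    // Stand-in for something like unlink(): may release the parent reference.
    void drop_parent_ref(std::shared_ptr<ToyInode>& in) {
        in->parent.reset();
    }

    int main() {
        auto parent = std::make_shared<ToyInode>();
        parent->name = "dir";
        auto child = std::make_shared<ToyInode>();
        child->name = "file";
        child->parent = parent;
        parent.reset();                       // only the child still pins "dir"

        auto anchor = child;                  // pins the current inode...
        auto parent_pin = child->parent;      // ...and explicitly pins its parent
        drop_parent_ref(child);               // would otherwise free "dir" here
        std::cout << "parent still usable: " << parent_pin->name << "\n";
    }   // everything released at end of scope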
- 08:41 AM Backport #24157 (Resolved): mimic: mds: crash when using `config set` on tracked configs
- https://github.com/ceph/ceph/pull/22153
- 04:14 AM Documentation #24093 (Fix Under Review): doc: Update *remove a metadata server*
- https://github.com/ceph/ceph/pull/22035
- 12:55 AM Bug #24052: repeated eviction of idle client until some IO happens
- https://github.com/ceph/ceph/pull/22026
- 12:52 AM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
- Ivan Guan wrote:
> Zheng Yan wrote:
> > caused by https://github.com/ceph/ceph/pull/21615
>
> Sorry, I don't unde...
05/16/2018
- 09:04 PM Bug #24118 (Pending Backport): mds: crash when using `config set` on tracked configs
- 07:59 PM Bug #24138: qa: support picking a random distro using new teuthology $
- @Warren - wonder if it is easily doable to add `yaml` configuration so if suites ^ run on `rhel` then `-k testing` is used...
- 05:43 PM Bug #24138: qa: support picking a random distro using new teuthology $
- FYI
merged PRs related to this:
https://tracker.ceph.com/issues/24138
https://github.com/ceph/ceph/pull/21932
h...
- 05:34 PM Bug #24138: qa: support picking a random distro using new teuthology $
- That's it I guess. Should also find a way to make `-k testing` the default unless distro == RHEL.
- 05:33 PM Bug #24138: qa: support picking a random distro using new teuthology $
- @batrick I assume suites are: `fs`, `kcephfs`, `multimds`? more?
- 06:16 PM Bug #24137: client: segfault in trim_caps
- Zheng Yan wrote:
> [...]
>
> I think above commit isn't quite right. how about patch below
>
> [...]
I'm no...
- 11:10 AM Bug #24137: client: segfault in trim_caps
- ...
- 12:47 PM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
- Zheng Yan wrote:
> caused by https://github.com/ceph/ceph/pull/21615
Sorry, I don't understand why this pr can cau...
- 03:39 AM Bug #24052 (Fix Under Review): repeated eviction of idle client until some IO happens
05/15/2018
- 11:35 PM Bug #21014 (Fix Under Review): fs: reduce number of helper debug messages at level 5 for client
- https://github.com/ceph/ceph/pull/21972
- 10:35 PM Backport #23991 (In Progress): luminous: client: hangs on umount if it had an MDS session evicted
- 09:37 PM Bug #24028: CephFS flock() on a directory is broken
- Марк Коренберг wrote:
> Patrick Donnelly, why you set version to 14 ? Will this change be merged to Luminous ?
Be...
- 08:25 PM Bug #24028: CephFS flock() on a directory is broken
- Patrick Donnelly, why you set version to 14 ? Will this change be merged to Luminous ?
- 08:23 PM Bug #24028: CephFS flock() on a directory is broken
- https://github.com/ceph/ceph/blob/master/src/client/fuse_ll.cc#L1037 ?
- 07:48 PM Bug #24028: CephFS flock() on a directory is broken
- Does ceph-fuse not have this problem?
- 03:38 AM Bug #24028: CephFS flock() on a directory is broken
- https://github.com/ceph/ceph-client/commit/ae2a8539ab7bb72f37306a544a555e9fc9ce8221
- 08:04 PM Bug #23837: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
- BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1576908
- 08:01 PM Bug #23837: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
- Fixed formatting.
- 11:37 AM Bug #23837 (Fix Under Review): client: deleted inode's Bufferhead which was in STATE::Tx would le...
- https://github.com/ceph/ceph/pull/22001
- 08:04 PM Bug #24087 (Duplicate): client: assert during shutdown after blacklisted
- Missed that. Thanks Zheng!
- 09:58 AM Bug #24087: client: assert during shutdown after blacklisted
- dup of http://tracker.ceph.com/issues/23837
- 07:55 PM Bug #24133 (Fix Under Review): mds: broadcast quota to relevant clients when quota is explicitly set
- 08:18 AM Bug #24133: mds: broadcast quota to relevant clients when quota is explicitly set
- https://github.com/ceph/ceph/pull/21997
- 08:13 AM Bug #24133 (Resolved): mds: broadcast quota to relevant clients when quota is explicitly set
- We found the client won't get the quota updated for a long time in the following case. We found this issue on Luminous, but it...
- 07:41 PM Bug #24138 (Resolved): qa: support picking a random distro using new teuthology $
- Similar to https://github.com/ceph/ceph/pull/22008/files
- 07:38 PM Bug #24137: client: segfault in trim_caps
- A reasonable assumption about this crash is that either the inode was deleted (in which case the Cap should have been delete...
- 07:18 PM Bug #24137 (Resolved): client: segfault in trim_caps
- ...
- 03:28 PM Backport #24136 (Resolved): luminous: MDSMonitor: uncommitted state exposed to clients/mdss
- https://github.com/ceph/ceph/pull/23013
- 01:45 PM Bug #23768 (Pending Backport): MDSMonitor: uncommitted state exposed to clients/mdss
- Mimic PR: https://github.com/ceph/ceph/pull/22005
- 03:53 AM Bug #24074: Read ahead in fuse client is broken with large buffer size
- try passing '--client_readahead_max_bytes=4194304' option to ceph-fuse
05/14/2018
- 10:21 PM Bug #24129 (Fix Under Review): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessi...
- https://github.com/ceph/ceph/pull/21992
- 08:13 PM Bug #24129 (Resolved): qa: test_version_splitting (tasks.cephfs.test_sessionmap.TestSessionMap) t...
- ...
- 10:10 PM Bug #24074 (Need More Info): Read ahead in fuse client is broken with large buffer size
- Chuan Qiu wrote:
> If the read is larger than 128K (e.g. 4M as our object size), the fuse client will receive read reques...
- 08:54 PM Backport #23935 (In Progress): luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
- https://github.com/ceph/ceph/pull/21990
- 08:36 PM Backport #24130 (In Progress): luminous: mds: race with new session from connection and imported ...
- https://github.com/ceph/ceph/pull/21989
- 08:33 PM Backport #24130 (Resolved): luminous: mds: race with new session from connection and imported ses...
- 08:32 PM Bug #24072 (Pending Backport): mds: race with new session from connection and imported session
- Mimic PR: https://github.com/ceph/ceph/pull/21988
- 04:29 AM Bug #24072: mds: race with new session from connection and imported session
- WIP: https://github.com/ceph/ceph/pull/21966
- 06:53 PM Documentation #24093: doc: Update *remove a metadata server*
- It should be sufficient to say that the operator can just turn the MDS off, however that is done for their environ...
- 05:57 PM Bug #24118 (Fix Under Review): mds: crash when using `config set` on tracked configs
- https://github.com/ceph/ceph/pull/21984
- 05:46 PM Bug #24118 (Resolved): mds: crash when using `config set` on tracked configs
- These configs: https://github.com/ceph/ceph/blob/7dbba9e54282e0a4c3000eb0c1a66e346c7eab98/src/mds/MDSDaemon.cc#L362-L...
- 04:03 PM Bug #24052: repeated eviction of idle client until some IO happens
- The log for that client at 128.142.160.86 is here: ceph-post-file: dd10811e-2790-43e4-b0a9-135725f70209
Thanks for...
- 01:47 PM Bug #24052: repeated eviction of idle client until some IO happens
- It's not expected. Could you upload the client log with debug_ms=1?
- 02:41 PM Bug #23972: Ceph MDS Crash from client mounting aufs over cephfs
- Any chance you can reproduce this with debuginfo packages installed, so that we can get meaningful backtraces?
- 01:57 PM Bug #24054 (Fix Under Review): kceph: umount on evicted client blocks forever
- https://github.com/ceph/ceph/pull/21941
- 01:48 PM Bug #23837 (In Progress): client: deleted inode's Bufferhead which was in STATE::Tx would lead a ...
- Kicking this back to In Progress. Please see comments in original PR. It has been reverted by https://github.com/ceph...
- 01:45 PM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
- Revert: https://github.com/ceph/ceph/pull/21975
- 01:39 PM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
- 11:53 AM Bug #24101: mds: deadlock during fsstress workunit with 9 actives
- caused by https://github.com/ceph/ceph/pull/21615
- 01:29 PM Bug #24030 (Fix Under Review): ceph-fuse: double dash meaning
- 07:45 AM Bug #24053 (Fix Under Review): qa: kernel_mount.py umount must handle timeout arg
- 07:44 AM Bug #24053: qa: kernel_mount.py umount must handle timeout arg
- https://github.com/ceph/ceph/pull/21941
- 04:05 AM Bug #24111 (Fix Under Review): mds didn't update file's max_size
- https://github.com/ceph/ceph/pull/21963
- 03:37 AM Bug #24111 (Resolved): mds didn't update file's max_size
- http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/2474517/
- 03:59 AM Bug #24039 (Closed): MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
- create new ticket for the fsstress hang http://tracker.ceph.com/issues/24111
close this one
05/13/2018
- 03:01 PM Backport #24108 (Resolved): luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
- https://github.com/ceph/ceph/pull/22171
- 03:01 PM Backport #24107 (Resolved): luminous: PurgeQueue::_consume() could return true when there were no...
- https://github.com/ceph/ceph/pull/22176
05/12/2018
05/11/2018
- 10:14 PM Bug #23837 (Pending Backport): client: deleted inode's Bufferhead which was in STATE::Tx would le...
- Mimic PR: https://github.com/ceph/ceph/pull/21954
- 10:09 PM Backport #23946: luminous: mds: crash when failover
- Prashant, pr21769 is merged.
- 10:06 PM Bug #24047 (Pending Backport): MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
- Mimic PR: https://github.com/ceph/ceph/pull/21952
- 10:01 PM Bug #24073 (Pending Backport): PurgeQueue::_consume() could return true when there were no purge ...
- Mimic PR: https://github.com/ceph/ceph/pull/21951
- 03:50 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
- dongdong tao wrote:
> Yeah, that's what I want to recommend to you; it can work as you expected.
Thank you:-) Tha...
- 03:46 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
- Yeah, that's what I want to recommend to you; it can work as you expected.
- 03:04 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
- dongdong tao wrote:
> Hi Xuehan,
> I'm just curious about how you repair your purge queue journal?
By t...
- 03:01 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
- dongdong tao wrote:
> Hi Xuehan,
> I'm just curious about how you repair your purge queue journal?
Actu...
- 02:34 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
- Hi Xuehan,
I'm just curious about how you repair your purge queue journal?
- 06:06 PM Bug #24101 (Closed): mds: deadlock during fsstress workunit with 9 actives
- http://pulpito.ceph.com/pdonnell-2018-05-11_00:47:01-multimds-wip-pdonnell-testing-20180510.225359-testing-basic-smit...
- 05:52 PM Feature #17230 (Fix Under Review): ceph_volume_client: py3 compatible
- https://github.com/ceph/ceph/pull/21948
- 04:26 AM Documentation #24093 (Resolved): doc: Update *remove a metadata server*
- Update: http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-mds/#remove-a-metadata-server
See:
http://d...
05/10/2018
- 10:09 PM Bug #24090 (Resolved): mds: fragmentation in QA is slowing down ops enough for WRNs
- http://pulpito.ceph.com/pdonnell-2018-05-08_18:15:09-fs-mimic-testing-basic-smithi/
http://pulpito.ceph.com/pdonnell...
- 08:48 PM Bug #24089 (Rejected): mds: print slow requests to debug log when sending health WRN to monitors ...
- Nevermind, it is actually printed earlier in the log. Sorry for the noise.
- 08:46 PM Bug #24089 (Rejected): mds: print slow requests to debug log when sending health WRN to monitors ...
- ...
- 08:22 PM Bug #24088 (Duplicate): mon: slow remove_snaps op reported in cluster health log
- ...
- 04:57 PM Bug #24087 (Duplicate): client: assert during shutdown after blacklisted
- ...
- 11:48 AM Bug #24039: MDSTableServer.cc: 62: FAILED assert(g_conf->mds_kill_mdstable_at != 1)
- the pjd: http://pulpito.ceph.com/pdonnell-2018-05-04_03:45:51-multimds-master-testing-basic-smithi/2475062/...
- 11:25 AM Feature #22446: mds: ask idle client to trim more caps
- Glad to see this :)
- Backport set to mimic,luminous
Thanks.
- 09:44 AM Bug #23332: kclient: with fstab entry is not coming up reboot
- I still don't think this is a kernel issue. Please patch the kernel with the change below and try again....
- 06:44 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
- Xuehan Xu wrote:
> In our online clusters, we encountered the bug #19593. Although we cherry-pick the fixing commits...
- 04:38 AM Bug #24073: PurgeQueue::_consume() could return true when there were no purge queue item actually...
- https://github.com/ceph/ceph/pull/21923
- 04:38 AM Bug #24073 (Resolved): PurgeQueue::_consume() could return true when there were no purge queue it...
- In our online clusters, we encountered the bug #19593. Although we cherry-pick the fixing commits, the purge queue's ...
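A simplified sketch of the return-value pitfall in the ticket title (not the actual PurgeQueue code): a _consume()-style function should report whether an item really was dequeued, not merely that it was asked to look.
    #include <deque>
    #include <iostream>

    // Toy purge queue illustrating the return-value pitfall.
    struct ToyPurgeQueue {
        std::deque<int> items;
        bool journal_readable = false;   // e.g. still waiting for journal data

        // Buggy variant: claims progress even when nothing could be consumed,
        // so callers believe purge work happened.
        bool consume_buggy() {
            if (!journal_readable)
                return true;             // wrong: nothing was consumed
            bool did = !items.empty();
            if (did) items.pop_front();
            return did;
        }

        // Fixed variant: only report true when an item really was dequeued.
        bool consume_fixed() {
            if (!journal_readable || items.empty())
                return false;
            items.pop_front();
            return true;
        }
    };

    int main() {
        ToyPurgeQueue q;                 // empty, journal not readable
        std::cout << std::boolalpha
                  << "buggy says consumed: " << q.consume_buggy() << "\n"
                  << "fixed says consumed: " << q.consume_fixed() << "\n";
    }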
- 06:39 AM Bug #24074 (Need More Info): Read ahead in fuse client is broken with large buffer size
- If the read is larger than 128K (e.g. 4M as our object size), the fuse client will receive read requests as multiple ll_re...
- 04:10 AM Backport #23984 (In Progress): luminous: mds: scrub on fresh file system fails
- https://github.com/ceph/ceph/pull/21922
- 04:07 AM Backport #23982 (In Progress): luminous: qa: TestVolumeClient.test_lifecycle needs updated for ne...
- https://github.com/ceph/ceph/pull/21921
- 04:05 AM Bug #23826: mds: assert after daemon restart
- The finish context of MDCache::open_undef_inodes_dirfrags() calls rejoin_gather_finish() without checking rejoin_gather. I t...