Activity
From 12/08/2017 to 01/06/2018
01/05/2018
- 09:40 PM Bug #22051 (Need More Info): tests: Health check failed: Reduced data availability: 5 pgs peering...
- 09:37 PM Bug #21575 (Resolved): mds: client caps can go below hard-coded default (100)
- 09:34 PM Feature #20752 (Resolved): cap message flag which indicates if client still has pending capsnap
- 09:32 PM Bug #21419 (Need More Info): client: is ceph_caps_for_mode correct for r/o opens?
- Jeff, any update on this?
- 09:30 PM Documentation #21172: doc: Export over NFS
- Ramana, any update on this?
- 07:25 PM Documentation #22599 (Fix Under Review): doc: mds memory tracking of cache is imprecise by a cons...
- https://github.com/ceph/ceph/pull/19807
- 07:19 PM Documentation #22599 (In Progress): doc: mds memory tracking of cache is imprecise by a constant ...
- 07:19 PM Documentation #22599 (Resolved): doc: mds memory tracking of cache is imprecise by a constant factor
- MDS currently can use much more memory than its mds_cache_memory_limit. This is more noticeable in deployments of a...
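A rough sizing sketch for this (an illustration, not from the ticket): if the overhead really is a roughly constant factor, a limit can be back-calculated from a desired RSS budget. The 1.5 factor below is an assumed example value, not a documented constant.
    def cache_limit_for_rss_target(rss_target_bytes, overhead_factor=1.5):
        # mds_cache_memory_limit tracks cache memory, not RSS, so RSS can
        # exceed the limit by some factor; divide the RSS budget by an
        # assumed overhead factor to pick a conservative limit.
        return int(rss_target_bytes / overhead_factor)

    # e.g. to keep the MDS around 8 GiB of RSS:
    print(cache_limit_for_rss_target(8 * 2**30))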
- 06:44 PM Bug #22548 (Need More Info): mds: crash during recovery
- 05:09 PM Bug #21539 (Resolved): man: missing man page for mount.fuse.ceph
- 02:51 PM Bug #21539: man: missing man page for mount.fuse.ceph
- follow-on fix: https://github.com/ceph/ceph/pull/19792
- 05:09 PM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
- 04:08 PM Documentation #2206 (Resolved): Need a control command to gracefully shutdown an active MDS prior...
- 03:02 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
- 02:56 PM Bug #22595 (Closed): doc: mount.fuse.ceph is missing in index.rst
- Luminous backport handled via #21539
- 01:57 PM Bug #22595 (Fix Under Review): doc: mount.fuse.ceph is missing in index.rst
- 01:57 PM Bug #22595: doc: mount.fuse.ceph is missing in index.rst
- https://github.com/ceph/ceph/pull/19792
- 01:56 PM Bug #22595 (Resolved): doc: mount.fuse.ceph is missing in index.rst
- mount.fuse.ceph is missing in http://docs.ceph.com/docs/master/cephfs/
- 12:19 PM Backport #22590 (Resolved): jewel: ceph.in: tell mds does not understand --cluster
- https://github.com/ceph/ceph/pull/19907
- 12:18 PM Backport #22587 (Resolved): luminous: mds: mdsload debug too high
- https://github.com/ceph/ceph/pull/19827
- 12:17 PM Backport #22563 (In Progress): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed(...
- 12:17 PM Backport #22564 (In Progress): luminous: Locker::calc_new_max_size does not take layout.stripe_co...
- 12:16 PM Backport #22580 (Resolved): luminous: qa: full flag not set on osdmap for tasks.cephfs.test_full....
- https://github.com/ceph/ceph/pull/19962
- 12:16 PM Backport #22579 (Resolved): luminous: mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full ...
- https://github.com/ceph/ceph/pull/19830
- 12:16 PM Backport #22573 (Resolved): luminous: AttributeError: 'LocalFilesystem' object has no attribute '...
- https://github.com/ceph/ceph/pull/19829
- 10:10 AM Backport #22569 (Fix Under Review): jewel: doc: clarify path restriction instructions
- https://github.com/ceph/ceph/pull/19795
- 09:39 AM Backport #22569 (Resolved): jewel: doc: clarify path restriction instructions
- https://github.com/ceph/ceph/pull/19795 and https://github.com/ceph/ceph/pull/19840
- 09:39 AM Documentation #16906 (Pending Backport): doc: clarify path restriction instructions
- 12:42 AM Bug #22483 (Pending Backport): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is...
- https://github.com/ceph/ceph/pull/19602
- 12:40 AM Bug #22475 (Pending Backport): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
01/04/2018
- 07:32 PM Bug #22562 (Fix Under Review): mds: fix dump last_sent
- 03:57 AM Bug #22562: mds: fix dump last_sent
- https://github.com/ceph/ceph/pull/19762
- 03:57 AM Bug #22562 (Resolved): mds: fix dump last_sent
- last_sent in capability is an integer
- 07:15 AM Backport #22564 (Resolved): luminous: Locker::calc_new_max_size does not take layout.stripe_count...
- https://github.com/ceph/ceph/pull/19776
- 07:10 AM Backport #22563 (Resolved): luminous: mds: optimize CDir::_omap_commit() and CDir::_committed() f...
- https://github.com/ceph/ceph/pull/19775
- 03:46 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
- 03:26 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- can't find any 'osd_op ... write' in mds logs. So I can't find any clue how the corruption happened.
- 01:48 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Zheng Yan wrote:
> can't find any log for "2017-12-16". Next time you do the experiment, please set debug_ms=1 for the mds
Dear...
01/03/2018
- 06:11 PM Bug #22536 (Fix Under Review): client:_rmdir() uses a deleted memory structure(Dentry) leading a ...
- 05:40 PM Bug #22546 (Fix Under Review): client: dirty caps may never get the chance to flush
- 02:42 PM Feature #16775 (Fix Under Review): MDS command for listing open files
- https://github.com/ceph/ceph/pull/19760
- 01:04 PM Feature #16775: MDS command for listing open files
could you please have a look at this PR:
https://github.com/ceph/ceph/pull/19760
- 02:04 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- can't find any log for "2017-12-16". Next time you do the experiment, please set debug_ms=1 for the mds
- 10:10 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Zheng Yan wrote:
> please upload ceph cluster log. So I can check timestamp of mds failovers
Dear zheng:
I ha...
- 03:45 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- please upload ceph cluster log. So I can check timestamp of mds failovers
- 04:00 AM Bug #22547: active mds session miss for client
- Zheng Yan wrote:
> Sorry, the whole process is:
>
> mds close client connection
> client's remote_reset call...
- 02:51 AM Bug #22547: active mds session miss for client
- Sorry, the whole process is:
mds close client connection
client's remote_reset callback gets called
client s...
01/02/2018
- 03:40 PM Bug #22547: active mds session miss for client
- Zheng Yan wrote:
> dongdong tao wrote:
> > zheng, if a client has been evicted by mds, the client should still thin...
- 01:47 AM Bug #22547: active mds session miss for client
- dongdong tao wrote:
> zheng, if a client has been evicted by mds, the client should still think the connection is av...
- 03:17 PM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
- 11:34 AM Backport #22552 (Fix Under Review): luminous: doc: epoch barrier mechanism not found
- 10:57 AM Backport #22552: luminous: doc: epoch barrier mechanism not found
- https://github.com/ceph/ceph/pull/19741
- 10:43 AM Backport #22552 (Resolved): luminous: doc: epoch barrier mechanism not found
- 11:14 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Jos Collin wrote:
> I don't see anything in the URLs provided. Additionally, this looks like a Support Case.
can ...
- 11:10 AM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- wangyong wang wrote:
> Hi all.
> ==============================
> version: jewel 10.2.10 (professional rpms)
> no...
01/01/2018
- 11:56 AM Bug #22547 (Need More Info): active mds session miss for client
- 06:47 AM Bug #22542 (Pending Backport): doc: epoch barrier mechanism not found
12/29/2017
- 04:17 PM Feature #22545: add dump inode command to mds
- I just noticed, it's almost the same as #11172
- 03:35 PM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
- 01:40 AM Bug #22551 (Need More Info): client: should flush dirty caps on backgroud
- dirty data already has a background thread to do the flush, so we may need to flush dirty caps in the background too
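A minimal conceptual sketch of that idea (Python, with a hypothetical client object exposing has_dirty_caps()/flush_caps(); the real code is the C++ Client class):
    import threading, time

    def start_background_cap_flusher(client, interval=5.0):
        # Periodically flush dirty caps in the background, analogous to the
        # existing background flusher for dirty data.
        def loop():
            while not client.unmounting:       # hypothetical attribute
                if client.has_dirty_caps():    # hypothetical method
                    client.flush_caps()        # hypothetical method
                time.sleep(interval)
        t = threading.Thread(target=loop, daemon=True)
        t.start()
        return t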
12/28/2017
- 03:29 PM Bug #22550 (New): mds: FAILED assert(probe->known_size[p->oid] <= shouldbe) when mds start
- I stopped the mds while copying files to the cluster; when I tried to start the mds again later, I encountered a failed assertion.
...
- 02:05 PM Bug #22548: mds: crash during recovery
- Just once.
It took quite a long time during recovery and then crashed. There are about 10M files in the file syst...
- 01:46 PM Bug #22548: mds: crash during recovery
- this probably can be fixed by. How many times have you encountered this issue...
- 07:15 AM Bug #22548: mds: crash during recovery
- Zheng Yan wrote:
> which line triggers the assertion
Hi, yan
this line:
0> 2017-12-27 23:27:05.892112 7f0...
- 07:04 AM Bug #22548: mds: crash during recovery
- which line triggers the assertion
- 04:42 AM Bug #22548 (Need More Info): mds: crash during recovery
- 2017-12-27 23:27:05.919710 7f08483d0700 -1 *** Caught signal (Aborted) **
in thread 7f08483d0700 thread_name:ms_dis...
- 12:53 PM Bug #22547: active mds session miss for client
- by saying evicted, I mean due to the auto_close_timeout.
- 12:50 PM Bug #22547: active mds session miss for client
- zheng, if a client has been evicted by mds, the client should still think the connection is available,
and when that...
- 10:25 AM Bug #22547: active mds session miss for client
- wei jin wrote:
> Ok. I will do it soon.
>
I cannot reproduce it after enabling the log, and it will have an impact ...
- 07:21 AM Bug #22547: active mds session miss for client
- Ok. I will do it soon.
This happened after I restarted the mds daemon last night. There is also another crash (bug ...
- 07:10 AM Bug #22547: active mds session miss for client
- please set debug_mds=10 and check why mds evicted the client. it's likely that docker host went to sleep or there was...
- 04:34 AM Bug #22547 (Need More Info): active mds session miss for client
- Our use case: k8s docker mounts cephfs using the cephfs kernel client.
If we do not use the 'mounted dir', after a wh...
- 06:58 AM Feature #21156: mds: speed up recovery with many open inodes
- thanks, that can explain the scenario we have met;
sometimes my standby-replay mds spends too much time in the rejoin stat...
- 06:56 AM Feature #21156: mds: speed up recovery with many open inodes
- besides, when there are lots of open inodes, it's not efficient to journal all of them in each log segment.
- 02:46 AM Feature #21156: mds: speed up recovery with many open inodes
- the mds needs to open all inodes with client caps during recovery. Some of these inodes may not be in the journal
- 02:00 AM Feature #21156: mds: speed up recovery with many open inodes
- hi zheng,
I'm not sure if I understand this correctly; do you mean the mds cannot recover the open inodes jus...
12/27/2017
- 04:32 PM Bug #22546: client: dirty caps may never get the chance to flush
- https://github.com/ceph/ceph/pull/19703
- 04:05 PM Bug #22546 (Resolved): client: dirty caps may never get the chance to flush
- currently, we flush the caps in the function Client::flush_caps_sync,
but there is a bug in this function,
because the ...
- 03:54 PM Feature #22545: add dump inode command to mds
- pull request:
https://github.com/ceph/ceph/pull/19677
- 03:53 PM Feature #22545 (Duplicate): add dump inode command to mds
- 1. when the mds cache is really big, it's hard to dump all of the cache
2. most of the time, we only want to know a speci...
- 10:58 AM Bug #22542 (Fix Under Review): doc: epoch barrier mechanism not found
- 10:21 AM Bug #22542: doc: epoch barrier mechanism not found
- https://github.com/ceph/ceph/pull/19701
12/26/2017
- 10:17 AM Bug #22542 (Resolved): doc: epoch barrier mechanism not found
- [[http://docs.ceph.com/docs/master/cephfs/full/]] says "For more on the epoch barrier mechanism, see Ceph filesystem ...
12/25/2017
- 03:57 AM Bug #22536: client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- fixed by https://github.com/ceph/ceph/pull/19672
- 03:45 AM Bug #22536 (Resolved): client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- Version: ceph-10.2.2
Bug description:
"::rmdir()" acquires the Dentry structure "by get_or_create(dir, name, &de...
12/22/2017
- 11:47 AM Bug #22523 (Need More Info): Jewel10.2.10 cephfs journal corrupt,later event jump into previous ...
- I don't see anything in the URLs provided. Additionally, this looks like a Support Case.
- 09:16 AM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
- We don't need to backport this fix to luminous. The commit that introduced
this bug, https://github.com/ceph/ceph/co...
- 04:53 AM Bug #22524 (Pending Backport): NameError: global name 'get_mds_map' is not defined
- 04:55 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
- 04:55 AM Bug #21853 (Pending Backport): mds: mdsload debug too high
- 04:55 AM Feature #19578 (Pending Backport): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
- 04:54 AM Bug #22492 (Pending Backport): Locker::calc_new_max_size does not take layout.stripe_count into a...
- 04:49 AM Backport #22503 (In Progress): luminous: mds: read hang in multiple mds setup
- https://github.com/ceph/ceph/pull/19646
- 02:37 AM Backport #22503: luminous: mds: read hang in multiple mds setup
- I'm on it.
- 12:28 AM Bug #22357: mds: read hang in multiple mds setup
- https://github.com/ceph/ceph/pull/19414
12/21/2017
- 11:45 PM Bug #22487: mds: setattr blocked when metadata pool is full
- right. full test should have no problem
- 10:27 PM Bug #22487: mds: setattr blocked when metadata pool is full
- Presumably that would be because with the vstart config the MDS writes cannot actually be written whereas with the te...
- 02:35 PM Bug #22487: mds: setattr blocked when metadata pool is full
- I reproduced this locally.
It was caused by stuck log flush ...
- 10:38 PM Bug #22526 (Pending Backport): AttributeError: 'LocalFilesystem' object has no attribute 'ec_prof...
- Fixed by: https://github.com/ceph/ceph/pull/19533
- 02:19 PM Bug #22526 (Resolved): AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
- Hit an error while running a ceph_volume_client test on a vstart Ceph cluster using the command
LD_LIBRARY_PATH=`pwd...
- 04:13 PM Bug #22357: mds: read hang in multiple mds setup
- I don't see any merged PR.
- 01:04 PM Bug #22524 (Fix Under Review): NameError: global name 'get_mds_map' is not defined
- https://github.com/ceph/ceph/pull/19633
- 12:48 PM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
- Hit an error while running test_volume_client on a dev vstart cluster (Ceph master) using
# LD_LIBRARY_PATH=`pwd`/lib...
- 12:05 PM Bug #22523: Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- type: fs
version: 10.2.10
- 11:59 AM Bug #22523 (Closed): Jewel10.2.10 cephfs journal corrupt,later event jump into previous position.
- Hi all.
==============================
version: jewel 10.2.10 (professional rpms)
nodes : 3 centos7.3
cephfs : k...
- 10:08 AM Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 's...
- https://github.com/ceph/ceph/pull/19628
- 10:05 AM Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
- https://github.com/ceph/ceph/pull/19627
- 10:03 AM Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
- https://github.com/ceph/ceph/pull/19626
- 08:36 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
12/20/2017
- 06:41 PM Backport #22493 (In Progress): luminous: mds: crash during exiting
- 02:47 AM Backport #22493 (Resolved): luminous: mds: crash during exiting
- https://github.com/ceph/ceph/pull/19610
- 11:56 AM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
- 11:54 AM Backport #22505 (Rejected): jewel: client may fail to trim as many caps as MDS asked for
- 11:54 AM Backport #22504 (Resolved): luminous: client may fail to trim as many caps as MDS asked for
- https://github.com/ceph/ceph/pull/24119
- 11:54 AM Backport #22503 (Resolved): luminous: mds: read hang in multiple mds setup
- https://github.com/ceph/ceph/pull/19646
- 11:54 AM Backport #22501 (Resolved): luminous: qa: CommandFailedError: Command failed on smithi135 with st...
- https://github.com/ceph/ceph/pull/19628
- 11:54 AM Backport #22500 (Resolved): luminous: cephfs: potential adjust failure in lru_expire
- https://github.com/ceph/ceph/pull/19627
- 11:54 AM Backport #22499 (Resolved): luminous: cephfs-journal-tool: tool would miss to report some invalid...
- https://github.com/ceph/ceph/pull/19626
- 11:39 AM Backport #21947 (In Progress): luminous: mds: preserve order of requests during recovery of multi...
- 03:41 AM Backport #22494 (Fix Under Review): jewel: unsigned integer overflow in file_layout_t::get_period
- https://github.com/ceph/ceph/pull/19611
- 03:35 AM Backport #22494 (Resolved): jewel: unsigned integer overflow in file_layout_t::get_period
- A customer encountered this
https://bugzilla.redhat.com/show_bug.cgi?id=1527548
- 02:09 AM Bug #22492 (Fix Under Review): Locker::calc_new_max_size does not take layout.stripe_count into a...
- https://github.com/ceph/ceph/pull/19609
- 02:07 AM Bug #22492 (Resolved): Locker::calc_new_max_size does not take layout.stripe_count into account
- if layout.stripe_count is N, size_increment is actually 'N * mds_client_writeable_range_max_inc_objs' objects
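A worked example of that arithmetic (a sketch; the 4 MiB object size and the 1024 value for mds_client_writeable_range_max_inc_objs are assumptions for illustration):
    def effective_increment_bytes(stripe_count, object_size=4 * 2**20,
                                  max_inc_objs=1024):
        # If the increment is computed in objects without considering
        # stripe_count, the byte increment scales with stripe_count.
        return stripe_count * max_inc_objs * object_size

    print(effective_increment_bytes(stripe_count=1))  # 4 GiB
    print(effective_increment_bytes(stripe_count=4))  # 16 GiB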
- 01:09 AM Bug #22360 (Pending Backport): mds: crash during exiting
- 12:35 AM Backport #22490 (In Progress): luminous: mds: handle client session messages when mds is stopping
- 12:35 AM Backport #22490 (Resolved): luminous: mds: handle client session messages when mds is stopping
- https://github.com/ceph/ceph/pull/19585
12/19/2017
- 11:45 PM Feature #22446: mds: ask idle client to trim more caps
- Zheng Yan wrote:
> An idle client holding so many caps is wasteful, because it increases the chance that the mds trims other re...
- With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1... - 09:53 PM Bug #22487 (Rejected): mds: setattr blocked when metadata pool is full
- With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1... - 07:25 PM Bug #22483 (In Progress): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obso...
- 07:12 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
- https://github.com/ceph/ceph/blob/06b7707cee87a54517630def0ad274340325a677/src/mds/Server.cc#L1742
since: b4ca5ae4...
- 06:56 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
- ...
- 05:00 PM Fix #15064 (Closed): multifs: tweak text on "flag set enable multiple"
- Okay, thanks for the info John. I'll just close this then.
- 03:36 PM Fix #15064: multifs: tweak text on "flag set enable multiple"
- I think there was an idea that the message ought to be scarier, but I'm not sure we need that at this point
- 03:05 PM Fix #15064 (Need More Info): multifs: tweak text on "flag set enable multiple"
- Not sure what the problem is here.
- 03:13 PM Tasks #22479 (Closed): multifs: review testing coverage
- 03:11 PM Feature #22478 (Rejected): multifs: support snapshots for shared data pool
- If two file systems use the same data pool with different RADOS namespaces, it is necessary for them to cooperate on ...
- 03:08 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
- Once all sub-tasks are complete.
- 09:26 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
- Flagging for luminous backport because b4ca5ae (which presumably introduced the test failure this is fixing (?)) was ...
- 05:51 AM Bug #22475 (Fix Under Review): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
- 05:51 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
- https://github.com/ceph/ceph/pull/19588
- 05:25 AM Bug #22475 (Resolved): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull...
- ...
- 05:53 AM Bug #22436 (Pending Backport): qa: CommandFailedError: Command failed on smithi135 with status 22...
- https://github.com/ceph/ceph/pull/19534
- 03:09 AM Bug #22460: mds: handle client session messages when mds is stopping
- luminous PR: https://github.com/ceph/ceph/pull/19585
- 02:01 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- https://github.com/ceph/ceph-client/commit/a2a44b35146e6ccf099e4326bc1a7e2cdaf02f65
12/18/2017
- 02:53 PM Bug #22460 (Pending Backport): mds: handle client session messages when mds is stopping
- 02:49 PM Bug #22428: mds: don't report slow request for blocked filelock request
- Perhaps reclassify the slow requests blocked by locks as "clients not releasing file locks" or similar to be differen...
- 02:47 PM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- This is also fixed in master already by one of Zheng's commits. We need to link to the commit in master where this is...
- 02:41 PM Bug #22353 (In Progress): kclient: ceph_getattr() return zero st_dev for normal inode
- 09:26 AM Feature #19578 (Fix Under Review): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
- https://github.com/ceph/ceph/pull/19574
- 12:10 AM Bug #22459 (Pending Backport): cephfs-journal-tool: tool would miss to report some invalid range
- 12:09 AM Bug #22458 (Pending Backport): cephfs: potential adjust failure in lru_expire
12/16/2017
- 12:01 AM Feature #22446: mds: ask idle client to trim more caps
- no, it's not about recovery time; clients already trim their cache aggressively when the mds recovers.
An idle client holding so many...
12/15/2017
- 10:24 PM Bug #21853 (Fix Under Review): mds: mdsload debug too high
- https://github.com/ceph/ceph/pull/19556
- 08:26 PM Feature #22446: mds: ask idle client to trim more caps
- Ah, the problem is recovery of the MDS takes too long. (from follow-up posts to "[ceph-users] cephfs mds millions of ...
- 08:16 PM Feature #22446: mds: ask idle client to trim more caps
- What's the goal? Prevent the situation where the client has ~1M caps for an indefinite period like what we saw on the...
- 01:42 AM Feature #22446 (Resolved): mds: ask idle client to trim more caps
- we can add a decay counter to the client session, tracking the rate at which we add new caps to the client
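A minimal sketch of the decay-counter idea (Python; this is not Ceph's actual DecayCounter class, and the half-life is an arbitrary assumption):
    import math, time

    class CapIssueRate:
        # Exponentially decaying counter: call hit() each time a cap is issued
        # to the session; get() returns a recency-weighted count that could be
        # compared against the session's total caps to spot idle clients.
        def __init__(self, half_life=60.0):
            self.half_life = half_life
            self.value = 0.0
            self.last = time.monotonic()

        def _decay(self):
            now = time.monotonic()
            self.value *= math.exp(-math.log(2) * (now - self.last) / self.half_life)
            self.last = now

        def hit(self, n=1.0):
            self._decay()
            self.value += n

        def get(self):
            self._decay()
            return self.value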
- 07:30 PM Bug #21393 (Pending Backport): MDSMonitor: inconsistent role/who usage in command help
- 07:30 PM Bug #22293 (Pending Backport): client may fail to trim as many caps as MDS asked for
- 07:30 PM Bug #22357 (Pending Backport): mds: read hang in multiple mds setup
- 07:29 PM Bug #21764 (Resolved): common/options.cc: Update descriptions and visibility levels for MDS/clien...
- 07:02 PM Bug #22460 (Resolved): mds: handle client session messages when mds is stopping
- https://github.com/ceph/ceph/pull/19234
- 06:59 PM Bug #22459 (Resolved): cephfs-journal-tool: tool would miss to report some invalid range
- https://github.com/ceph/ceph/pull/19421
- 06:58 PM Bug #22458 (Resolved): cephfs: potential adjust failure in lru_expire
- https://github.com/ceph/ceph/pull/19277
12/13/2017
- 11:09 PM Backport #22392: luminous: mds: tell session ls returns vanila EINVAL when MDS is not active
- https://github.com/ceph/ceph/pull/19505
- 09:28 PM Bug #22436 (Resolved): qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo ...
- Key bits:...
- 04:41 PM Feature #22417 (Fix Under Review): support purge queue with cephfs-journal-tool
- 09:34 AM Feature #22417: support purge queue with cephfs-journal-tool
- I have already submitted a pull request:
https://github.com/ceph/ceph/pull/19471
- 09:33 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
- Currently, luminous has introduced a new purge queue journal whose inode number is 0x500,
but cephfs-journal-tool does ...
- 03:12 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
- 12:58 PM Backport #22407 (In Progress): luminous: client: implement delegation support in userland cephfs
12/12/2017
- 01:31 PM Backport #22385 (In Progress): luminous: mds: mds should ignore export_pin for deleted directory
- 08:43 AM Backport #22385 (Resolved): luminous: mds: mds should ignore export_pin for deleted directory
- https://github.com/ceph/ceph/pull/19360
- 11:25 AM Backport #22398 (In Progress): luminous: man: missing man page for mount.fuse.ceph
- 08:44 AM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
- https://github.com/ceph/ceph/pull/19449
- 11:10 AM Bug #22374 (Fix Under Review): luminous: mds: SimpleLock::num_rdlock overloaded
- 06:16 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- I just submitted a PR for this: https://github.com/ceph/ceph/pull/19442
- 04:44 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- ...
- 04:43 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- @ 6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile sync) on [inode 0x10004cec485...
- 04:42 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- Sorry, I misedited the log:
@ -6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile...
- 04:41 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
- Recently, while doing a massive directory delete test, both the active mds and the standby mds aborted and can't be started agai...
- 08:45 AM Backport #22407 (Resolved): luminous: client: implement delegation support in userland cephfs
- https://github.com/ceph/ceph/pull/19480
- 08:43 AM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
- 08:43 AM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
- https://github.com/ceph/ceph/pull/21172
- 08:42 AM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
- https://github.com/ceph/ceph/pull/21173
- 08:42 AM Backport #22382 (Rejected): jewel: client: give more descriptive error message for remount failures
- 08:42 AM Backport #22381 (Rejected): luminous: client: give more descriptive error message for remount fai...
- 08:42 AM Backport #22380 (Resolved): jewel: client reconnect gather race
- https://github.com/ceph/ceph/pull/21163
- 08:42 AM Backport #22379 (Resolved): luminous: client reconnect gather race
- https://github.com/ceph/ceph/pull/19326
- 08:42 AM Backport #22378 (Resolved): jewel: ceph-fuse: failure to remount in startup test does not handle ...
- https://github.com/ceph/ceph/pull/21162
- 02:49 AM Feature #22372 (Resolved): kclient: implement quota handling using new QuotaRealm
- 02:46 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
- https://github.com/ceph/ceph/pull/18424
- 02:45 AM Feature #22370 (Resolved): cephfs: add kernel client quota support
12/11/2017
- 01:13 AM Bug #22360 (Fix Under Review): mds: crash during exiting
- https://github.com/ceph/ceph/pull/19424
- 12:43 AM Bug #22360 (Resolved): mds: crash during exiting
- 2017-12-07 21:30:39.323495 7fe6b8830700 0 -- 192.168.100.49:6800/227508323 >> 192.168.100.59:0/2802955003 pipe(0x560...
12/10/2017
- 07:16 PM Cleanup #22359 (New): mds: change MDSMap::in to a mds_rank_t which is the current size of the clu...
- Now that we no longer allow deactivating arbitrary ranks, it makes sense to change the `std::set<mds_rank_t> MDSMap::...
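A conceptual sketch of the proposed representation change (Python for illustration only; the real structure is the C++ MDSMap):
    class MDSMapSketch:
        # Because ranks can no longer be deactivated arbitrarily, the set of
        # "in" ranks is always {0, 1, ..., n-1}, so a single integer can stand
        # in for std::set<mds_rank_t>.
        def __init__(self, num_in: int = 0):
            self.num_in = num_in

        def is_in(self, rank: int) -> bool:
            return 0 <= rank < self.num_in

        def in_ranks(self):
            return range(self.num_in)  # equivalent to iterating the old set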
- 09:00 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Robert Sander wrote:
> Robert Sander wrote:
> > Zheng Yan wrote:
> > > I can't reproduce it on Fedora 26. please p...
12/09/2017
- 02:12 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Robert Sander wrote:
> Zheng Yan wrote:
> > I can't reproduce it on Fedora 26. please provide versions of kernel an...
- 09:33 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Zheng Yan wrote:
> I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on t...
- 08:39 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- the missing '+' sign is caused by the ls code...
- 05:17 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- If the fuse-libs version is < 2.8, ceph-fuse can't get the supplementary groups of a user. Group ACLs only apply to users who p...
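For illustration only, a Python sketch of the supplementary-group lookup that is unavailable through older fuse-libs (this is not the ceph-fuse implementation):
    import os, pwd

    def supplementary_groups(uid):
        # Resolve a user's supplementary groups from the passwd/group
        # databases so group ACLs could still be evaluated when the FUSE
        # interface does not pass the group list through.
        user = pwd.getpwuid(uid)
        return os.getgrouplist(user.pw_name, user.pw_gid)

    print(supplementary_groups(os.getuid()))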
- 02:03 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on the machine that ran ...
- 07:06 AM Bug #22357 (Fix Under Review): mds: read hang in multiple mds setup
- 07:06 AM Bug #22357 (Fix Under Review): mds: read hang in multiple mds setup
- http://tracker.ceph.com/issues/22357
- 07:04 AM Bug #22357 (Resolved): mds: read hang in multiple mds setup
12/08/2017
- 07:33 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- The kernel client in Ubuntu 17.10 (4.13.0-17-generic) does not have this issue, but it does not show if ACLs are set ...
- 04:40 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Now with better formatting:
Running Ceph 12.2.2
Create Filesystem fresh on this version.
FUSE-mounted files...
- 04:37 PM Bug #22353 (Resolved): kclient: ceph_getattr() return zero st_dev for normal inode
- Running Ceph 12.2.2
Create Filesystem fresh on this version.
FUSE-mounted filesystem with client_acl_type=posix...
- 07:20 AM Bug #22249: Need to restart MDS to release cephfs space
- junming rao wrote:
> Zheng Yan wrote:
> > please try remounting all cephfs with ceph-fuse option --client_try_dentr...
- 03:18 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> please try remounting all cephfs with ceph-fuse option --client_try_dentry_invalidate=false.
>
...
- 02:45 AM Bug #22249: Need to restart MDS to release cephfs space
- please try remounting all cephfs with ceph-fuse option --client_try_dentry_invalidate=false.
Besides, please creat...
- 02:26 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> It seems you have multiple clients mount cephfs. do you use kernel client or ceph-fuse? try execu...