Activity
From 11/22/2017 to 12/21/2017
12/21/2017
- 11:45 PM Bug #22487: mds: setattr blocked when metadata pool is full
- right. full test should have no problem
- 10:27 PM Bug #22487: mds: setattr blocked when metadata pool is full
- Presumably that would be because with the vstart config the MDS writes cannot actually be written whereas with the te...
- 02:35 PM Bug #22487: mds: setattr blocked when metadata pool is full
- I reproduced this locally.
It was caused by stuck log flush ...
- 10:38 PM Bug #22526 (Pending Backport): AttributeError: 'LocalFilesystem' object has no attribute 'ec_prof...
- Fixed by: https://github.com/ceph/ceph/pull/19533
- 02:19 PM Bug #22526 (Resolved): AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
- Hit an error while running a ceph_volume_client test on a vstart Ceph cluster using the command
LD_LIBRARY_PATH=`pwd...
- 04:13 PM Bug #22357: mds: read hang in multiple mds setup
- I don't see any merged PR.
- 01:04 PM Bug #22524 (Fix Under Review): NameError: global name 'get_mds_map' is not defined
- https://github.com/ceph/ceph/pull/19633
- 12:48 PM Bug #22524 (Resolved): NameError: global name 'get_mds_map' is not defined
- Hit an error while running test_volume_client on a dev vstart cluster (Ceph master) using
# LD_LIBRARY_PATH=`pwd`/lib...
- 12:05 PM Bug #22523: Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
- type: fs
version: 10.2.10
- 11:59 AM Bug #22523 (Closed): Jewel 10.2.10 cephfs journal corrupt, later event jumps into previous position.
- Hi all.
==============================
version: jewel 10.2.10 (professional rpms)
nodes : 3 centos7.3
cephfs : k...
- 10:08 AM Backport #22501: luminous: qa: CommandFailedError: Command failed on smithi135 with status 22: 's...
- https://github.com/ceph/ceph/pull/19628
- 10:05 AM Backport #22500: luminous: cephfs: potential adjust failure in lru_expire
- https://github.com/ceph/ceph/pull/19627
- 10:03 AM Backport #22499: luminous: cephfs-journal-tool: tool would miss to report some invalid range
- https://github.com/ceph/ceph/pull/19626
- 08:36 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
12/20/2017
- 06:41 PM Backport #22493 (In Progress): luminous: mds: crash during exiting
- 02:47 AM Backport #22493 (Resolved): luminous: mds: crash during exiting
- https://github.com/ceph/ceph/pull/19610
- 11:56 AM Backport #22508 (Closed): luminous: MDSMonitor: inconsistent role/who usage in command help
- 11:54 AM Backport #22505 (Rejected): jewel: client may fail to trim as many caps as MDS asked for
- 11:54 AM Backport #22504 (Resolved): luminous: client may fail to trim as many caps as MDS asked for
- https://github.com/ceph/ceph/pull/24119
- 11:54 AM Backport #22503 (Resolved): luminous: mds: read hang in multiple mds setup
- https://github.com/ceph/ceph/pull/19646
- 11:54 AM Backport #22501 (Resolved): luminous: qa: CommandFailedError: Command failed on smithi135 with st...
- https://github.com/ceph/ceph/pull/19628
- 11:54 AM Backport #22500 (Resolved): luminous: cephfs: potential adjust failure in lru_expire
- https://github.com/ceph/ceph/pull/19627
- 11:54 AM Backport #22499 (Resolved): luminous: cephfs-journal-tool: tool would miss to report some invalid...
- https://github.com/ceph/ceph/pull/19626
- 11:39 AM Backport #21947 (In Progress): luminous: mds: preserve order of requests during recovery of multi...
- 03:41 AM Backport #22494 (Fix Under Review): jewel: unsigned integer overflow in file_layout_t::get_period
- https://github.com/ceph/ceph/pull/19611
- 03:35 AM Backport #22494 (Resolved): jewel: unsigned integer overflow in file_layout_t::get_period
- A customer encountered this
https://bugzilla.redhat.com/show_bug.cgi?id=1527548
- 02:09 AM Bug #22492 (Fix Under Review): Locker::calc_new_max_size does not take layout.stripe_count into a...
- https://github.com/ceph/ceph/pull/19609
- 02:07 AM Bug #22492 (Resolved): Locker::calc_new_max_size does not take layout.stripe_count into account
- If layout.stripe_count is N, the size increment is actually 'N * mds_client_writeable_range_max_inc_objs' objects.
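A minimal sketch of that arithmetic, assuming made-up values for layout.object_size, layout.stripe_count and mds_client_writeable_range_max_inc_objs (an illustration of the over-count only, not the actual Ceph fix):

    // Hypothetical illustration: one stripe row spans stripe_count RADOS objects,
    // so growing max_size by max_inc_objs object-sized units touches
    // stripe_count * max_inc_objs objects when striping is in use.
    #include <cstdint>
    #include <iostream>

    int main() {
        const uint64_t object_size  = 4ull << 20; // assumed layout.object_size (4 MiB)
        const uint64_t stripe_count = 4;          // assumed layout.stripe_count (N)
        const uint64_t max_inc_objs = 1024;       // assumed mds_client_writeable_range_max_inc_objs

        const uint64_t objects_per_increment = stripe_count * max_inc_objs; // N * max_inc_objs
        std::cout << "objects covered by one max_size increment: " << objects_per_increment
                  << " (" << (objects_per_increment * object_size) / (1ull << 30) << " GiB)\n";
        return 0;
    }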
- 01:09 AM Bug #22360 (Pending Backport): mds: crash during exiting
- 12:35 AM Backport #22490 (In Progress): luminous: mds: handle client session messages when mds is stopping
- 12:35 AM Backport #22490 (Resolved): luminous: mds: handle client session messages when mds is stopping
- https://github.com/ceph/ceph/pull/19585
12/19/2017
- 11:45 PM Feature #22446: mds: ask idle client to trim more caps
- Zheng Yan wrote:
> Idle client holds so many caps is wasteful, because it increase the chance that mds trim other re...
- 09:55 PM Bug #22488 (New): mds: unlink blocks on large file when metadata pool is full
- With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
- 09:53 PM Bug #22487 (Rejected): mds: setattr blocked when metadata pool is full
- With both of these PRs on mimic-dev1:
https://github.com/ceph/ceph/pull/19588
https://github.com/ceph/ceph/pull/1...
- 07:25 PM Bug #22483 (In Progress): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obso...
- 07:12 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
- https://github.com/ceph/ceph/blob/06b7707cee87a54517630def0ad274340325a677/src/mds/Server.cc#L1742
since: b4ca5ae4...
- 06:56 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
- ...
- 05:00 PM Fix #15064 (Closed): multifs: tweak text on "flag set enable multiple"
- Okay, thanks for the info John. I'll just close this then.
- 03:36 PM Fix #15064: multifs: tweak text on "flag set enable multiple"
- I think there was an idea that the message ought to be scarier, but I'm not sure we need that at this point
- 03:05 PM Fix #15064 (Need More Info): multifs: tweak text on "flag set enable multiple"
- Not sure what the problem is here.
- 03:13 PM Tasks #22479 (Closed): multifs: review testing coverage
- 03:11 PM Feature #22478 (Rejected): multifs: support snapshots for shared data pool
- If two file systems use the same data pool with different RADOS namespaces, it is necessary for them to cooperate on ...
- 03:08 PM Feature #22477 (Resolved): multifs: remove multifs experimental warnings
- Once all sub-tasks are complete.
- 09:26 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
- Flagging for luminous backport because b4ca5ae (which presumably introduced the test failure this is fixing (?)) was ...
- 05:51 AM Bug #22475 (Fix Under Review): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClu...
- 05:51 AM Bug #22475: qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull.test_full_...
- https://github.com/ceph/ceph/pull/19588
- 05:25 AM Bug #22475 (Resolved): qa: full flag not set on osdmap for tasks.cephfs.test_full.TestClusterFull...
- ...
- 05:53 AM Bug #22436 (Pending Backport): qa: CommandFailedError: Command failed on smithi135 with status 22...
- https://github.com/ceph/ceph/pull/19534
- 03:09 AM Bug #22460: mds: handle client session messages when mds is stopping
- luminous PR: https://github.com/ceph/ceph/pull/19585
- 02:01 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- https://github.com/ceph/ceph-client/commit/a2a44b35146e6ccf099e4326bc1a7e2cdaf02f65
12/18/2017
- 02:53 PM Bug #22460 (Pending Backport): mds: handle client session messages when mds is stopping
- 02:49 PM Bug #22428: mds: don't report slow request for blocked filelock request
- Perhaps reclassify the slow requests blocked by locks as "clients not releasing file locks" or similar to be differen...
- 02:47 PM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- This is also fixed in master already by one of Zheng's commits. We need to link to the commit in master where this is...
- 02:41 PM Bug #22353 (In Progress): kclient: ceph_getattr() return zero st_dev for normal inode
- 09:26 AM Feature #19578 (Fix Under Review): mds: optimize CDir::_omap_commit() and CDir::_committed() for ...
- https://github.com/ceph/ceph/pull/19574
- 12:10 AM Bug #22459 (Pending Backport): cephfs-journal-tool: tool would miss to report some invalid range
- 12:09 AM Bug #22458 (Pending Backport): cephfs: potential adjust failure in lru_expire
12/16/2017
- 12:01 AM Feature #22446: mds: ask idle client to trim more caps
- It's not about recovery time; clients already trim their cache aggressively when the MDS recovers.
Idle client holds so many...
12/15/2017
- 10:24 PM Bug #21853 (Fix Under Review): mds: mdsload debug too high
- https://github.com/ceph/ceph/pull/19556
- 08:26 PM Feature #22446: mds: ask idle client to trim more caps
- Ah, the problem is recovery of the MDS takes too long. (from follow-up posts to "[ceph-users] cephfs mds millions of ...
- 08:16 PM Feature #22446: mds: ask idle client to trim more caps
- What's the goal? Prevent the situation where the client has ~1M caps for an indefinite period like what we saw on the...
- 01:42 AM Feature #22446 (Resolved): mds: ask idle client to trim more caps
- We can add a decay counter to the client session, tracking the rate at which we issue new caps to the client.
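A minimal sketch of the idea, assuming a simple per-session exponentially decaying counter (the names, half-life and threshold below are invented for illustration; this is not Ceph's DecayCounter):

    // Hypothetical sketch: count newly issued caps with exponential decay; a
    // session whose counter has decayed to ~0 is idle and can be asked to trim
    // more caps.
    #include <chrono>
    #include <cmath>

    struct CapIssueRate {
        using clock = std::chrono::steady_clock;
        double value = 0.0;                 // decayed count of recently issued caps
        clock::time_point last = clock::now();
        double half_life = 60.0;            // seconds; invented tunable

        double decayed_value() const {
            double dt = std::chrono::duration<double>(clock::now() - last).count();
            return value * std::exp(-dt * std::log(2.0) / half_life);
        }
        void note_cap_issued(double n = 1.0) {
            value = decayed_value() + n;    // fold the elapsed decay in, then add
            last = clock::now();
        }
        bool looks_idle(double threshold = 1.0) const {
            return decayed_value() < threshold;
        }
    };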
- 07:30 PM Bug #21393 (Pending Backport): MDSMonitor: inconsistent role/who usage in command help
- 07:30 PM Bug #22293 (Pending Backport): client may fail to trim as many caps as MDS asked for
- 07:30 PM Bug #22357 (Pending Backport): mds: read hang in multiple mds setup
- 07:29 PM Bug #21764 (Resolved): common/options.cc: Update descriptions and visibility levels for MDS/clien...
- 07:02 PM Bug #22460 (Resolved): mds: handle client session messages when mds is stopping
- https://github.com/ceph/ceph/pull/19234
- 06:59 PM Bug #22459 (Resolved): cephfs-journal-tool: tool would miss to report some invalid range
- https://github.com/ceph/ceph/pull/19421
- 06:58 PM Bug #22458 (Resolved): cephfs: potential adjust failure in lru_expire
- https://github.com/ceph/ceph/pull/19277
12/13/2017
- 11:09 PM Backport #22392: luminous: mds: tell session ls returns vanila EINVAL when MDS is not active
- https://github.com/ceph/ceph/pull/19505
- 09:28 PM Bug #22436 (Resolved): qa: CommandFailedError: Command failed on smithi135 with status 22: 'sudo ...
- Key bits:...
- 04:41 PM Feature #22417 (Fix Under Review): support purge queue with cephfs-journal-tool
- 09:34 AM Feature #22417: support purge queue with cephfs-journal-tool
- I have already opened a pull request:
https://github.com/ceph/ceph/pull/19471
- 09:33 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
- Luminous has introduced a new purge queue journal whose inode number is 0x500,
but cephfs-journal-tool does ...
- 03:12 PM Bug #22428 (Resolved): mds: don't report slow request for blocked filelock request
- 12:58 PM Backport #22407 (In Progress): luminous: client: implement delegation support in userland cephfs
12/12/2017
- 01:31 PM Backport #22385 (In Progress): luminous: mds: mds should ignore export_pin for deleted directory
- 08:43 AM Backport #22385 (Resolved): luminous: mds: mds should ignore export_pin for deleted directory
- https://github.com/ceph/ceph/pull/19360
- 11:25 AM Backport #22398 (In Progress): luminous: man: missing man page for mount.fuse.ceph
- 08:44 AM Backport #22398 (Resolved): luminous: man: missing man page for mount.fuse.ceph
- https://github.com/ceph/ceph/pull/19449
- 11:10 AM Bug #22374 (Fix Under Review): luminous: mds: SimpleLock::num_rdlock overloaded
- 06:16 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- I just submitted a PR for this: https://github.com/ceph/ceph/pull/19442
- 04:44 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- ...
- 04:43 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- @ 6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile sync) on [inode 0x10004cec485...
- 04:42 AM Bug #22374: luminous: mds: SimpleLock::num_rdlock overloaded
- Sorry, I misedited the log:
@ -6> 2017-12-04 17:59:45.509030 7ff4bb7fb700 7 mds.0.locker rdlock_finish on (ifile...
- 04:41 AM Bug #22374 (Duplicate): luminous: mds: SimpleLock::num_rdlock overloaded
- Recently, when doing massive directory delete test, both active mds and standby mds aborted and can't be started agai...
- 08:45 AM Backport #22407 (Resolved): luminous: client: implement delegation support in userland cephfs
- https://github.com/ceph/ceph/pull/19480
- 08:43 AM Backport #22392 (Resolved): luminous: mds: tell session ls returns vanila EINVAL when MDS is not ...
- 08:43 AM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
- https://github.com/ceph/ceph/pull/21172
- 08:42 AM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
- https://github.com/ceph/ceph/pull/21173
- 08:42 AM Backport #22382 (Rejected): jewel: client: give more descriptive error message for remount failures
- 08:42 AM Backport #22381 (Rejected): luminous: client: give more descriptive error message for remount fai...
- 08:42 AM Backport #22380 (Resolved): jewel: client reconnect gather race
- https://github.com/ceph/ceph/pull/21163
- 08:42 AM Backport #22379 (Resolved): luminous: client reconnect gather race
- https://github.com/ceph/ceph/pull/19326
- 08:42 AM Backport #22378 (Resolved): jewel: ceph-fuse: failure to remount in startup test does not handle ...
- https://github.com/ceph/ceph/pull/21162
- 02:49 AM Feature #22372 (Resolved): kclient: implement quota handling using new QuotaRealm
- 02:46 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
- https://github.com/ceph/ceph/pull/18424
- 02:45 AM Feature #22370 (Resolved): cephfs: add kernel client quota support
12/11/2017
- 01:13 AM Bug #22360 (Fix Under Review): mds: crash during exiting
- https://github.com/ceph/ceph/pull/19424
- 12:43 AM Bug #22360 (Resolved): mds: crash during exiting
- 2017-12-07 21:30:39.323495 7fe6b8830700 0 -- 192.168.100.49:6800/227508323 >> 192.168.100.59:0/2802955003 pipe(0x560...
12/10/2017
- 07:16 PM Cleanup #22359 (New): mds: change MDSMap::in to a mds_rank_t which is the current size of the clu...
- Now that we no longer allow deactivating arbitrary ranks, it makes sense to change the `std::set<mds_rank_t> MDSMap::...
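A rough before/after sketch of that representation change, based only on the ticket description (MDSMap's real fields and helpers differ, so treat the names below as hypothetical):

    // Hypothetical: since ranks can no longer be deactivated arbitrarily, the
    // "in" ranks are always the contiguous range [0, N), so a single count can
    // replace the std::set.
    #include <cstdint>
    #include <set>

    using mds_rank_t = std::int32_t;

    struct MDSMapBefore {
        std::set<mds_rank_t> in;                        // explicit set of in ranks
        bool is_in(mds_rank_t r) const { return in.count(r) > 0; }
    };

    struct MDSMapAfter {
        mds_rank_t in = 0;                              // current size of the cluster
        bool is_in(mds_rank_t r) const { return r >= 0 && r < in; }
    };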
- 09:00 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Robert Sander wrote:
> Robert Sander wrote:
> > Zheng Yan wrote:
> > > I can't reproduce it on Fedora 26. please p...
12/09/2017
- 02:12 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Robert Sander wrote:
> Zheng Yan wrote:
> > I can't reproduce it on Fedora 26. please provide versions of kernel an...
- 09:33 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Zheng Yan wrote:
> I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on t...
- 08:39 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- no '+ sign' is caused by ls code...
- 05:17 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- If fuse-libs version < 2.8, ceph-fuse can't get supplementary groups of an user. group ACL only apply for users who p...
- 02:03 AM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- I can't reproduce it on Fedora 26. please provide versions of kernel and fuse-libs installed on the machine that ran ...
- 07:06 AM Bug #22357 (Fix Under Review): mds: read hang in multiple mds setup
- http://tracker.ceph.com/issues/22357
- 07:04 AM Bug #22357 (Resolved): mds: read hang in multiple mds setup
12/08/2017
- 07:33 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- The kernel client in Ubuntu 17.10 (4.13.0-17-generic) does not have this issue, but it does not show if ACLs are set ...
- 04:40 PM Bug #22353: kclient: ceph_getattr() return zero st_dev for normal inode
- Now with better formatting:
Running Ceph 12.2.2
Create Filesystem fresh on this version.
FUSE-mounted files...
- 04:37 PM Bug #22353 (Resolved): kclient: ceph_getattr() return zero st_dev for normal inode
- Running Ceph 12.2.2
Create Filesystem fresh on this version.
FUSE-mounted filesystem with client_acl_type=posix...
- 07:20 AM Bug #22249: Need to restart MDS to release cephfs space
- junming rao wrote:
> Zheng Yan wrote:
> > please try remounting all cephfs with ceph-fuse option --client_try_dentr...
- 03:18 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> please try remounting all cephfs with ceph-fuse option --client_try_dentry_invalidate=false.
>
... - 02:45 AM Bug #22249: Need to restart MDS to release cephfs space
- please try remounting all cephfs with ceph-fuse option --client_try_dentry_invalidate=false.
Besides, please creat...
- 02:26 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> It seems you have multiple clients mount cephfs. do you use kernel client or ceph-fuse? try execu...
12/07/2017
- 03:09 AM Backport #22339: luminous: ceph-fuse: failure to remount in startup test does not handle client_d...
- https://github.com/ceph/ceph/pull/19370
- 03:01 AM Backport #22339 (Resolved): luminous: ceph-fuse: failure to remount in startup test does not hand...
- https://github.com/ceph/ceph/pull/19370
- 03:08 AM Bug #22338: mds: ceph mds stat json should use array output for info section
- Ji You wrote:
> When use `ceph mds stat -f json-pretty` would get output as below:
>
> [...]
>
> The proper ou...
- 02:58 AM Bug #22338 (Resolved): mds: ceph mds stat json should use array output for info section
- When use `ceph mds stat -f json-pretty` would get output as below:...
12/06/2017
- 09:12 PM Feature #20610: MDSMonitor: add new command to shrink the cluster in an automated way
- Hi Douglas, is this something that you're still planning on working on? If not, I'm willing to have a look at it.
- 02:51 PM Bug #22334 (New): client: throttle osd requests created by page-write
- If we create lots of small files in cephfs, page writeback may create hundreds of thousands of OSD requests. These many...
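The ticket only states the problem, but as a rough illustration of the kind of throttle it asks for (all names and limits below are invented, not Ceph's actual implementation), a counting limiter on in-flight writeback requests could look like:

    // Hypothetical sketch: block the submitter once 'limit' writeback OSD
    // requests are outstanding, and wake it as completions come back.
    #include <condition_variable>
    #include <mutex>

    class WritebackThrottle {
    public:
        explicit WritebackThrottle(unsigned limit) : limit_(limit) {}

        void get() {                        // call before submitting an OSD request
            std::unique_lock<std::mutex> l(m_);
            cv_.wait(l, [&] { return in_flight_ < limit_; });
            ++in_flight_;
        }
        void put() {                        // call from the request's completion callback
            std::lock_guard<std::mutex> l(m_);
            --in_flight_;
            cv_.notify_one();
        }

    private:
        std::mutex m_;
        std::condition_variable cv_;
        unsigned limit_;
        unsigned in_flight_ = 0;
    };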
- 08:51 AM Bug #22219: mds: mds should ignore export_pin for deleted directory
- https://github.com/ceph/ceph/pull/19360
- 06:37 AM Bug #22219 (Pending Backport): mds: mds should ignore export_pin for deleted directory
- 06:37 AM Feature #22097 (Resolved): mds: change mds perf counters can statistics filesystem operations num...
- 06:36 AM Bug #22269 (Pending Backport): ceph-fuse: failure to remount in startup test does not handle clie...
12/05/2017
- 03:00 AM Bug #22263: client reconnect gather race
- https://github.com/ceph/ceph/pull/19326
- 02:54 AM Bug #22249: Need to restart MDS to release cephfs space
- It seems you have multiple clients mount cephfs. do you use kernel client or ceph-fuse? try executing "echo 3 >/proc/...
- 12:33 AM Bug #22051: tests: Health check failed: Reduced data availability: 5 pgs peering (PG_AVAILABILITY...
- Please make sure this isn't a misconfigured run or a missing log whitelist; you can kick it to RADOS if not. :)
12/04/2017
- 06:54 PM Feature #18490 (Pending Backport): client: implement delegation support in userland cephfs
- Thanks for remembering to update this ticket Jeff. We need to backport this for Luminous as this is needed for 3.0.
...
- 02:52 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
- Greg also brought up some good points that we should also mark the directory as damaged (especially in a persistent w...
- 02:45 PM Bug #22292: mds: scrub may mark repaired directory with lost dentries and not flush backtrace
- Consensus during scrub is that this can be resolved by adding an appropriate warning to scrub output that the directo...
12/03/2017
- 10:55 AM Feature #18490 (Resolved): client: implement delegation support in userland cephfs
- Patches merged into both ceph and ganesha for this.
12/01/2017
- 10:03 PM Bug #22256 (Resolved): nfs-ganesha: crashes in free_delegrecall_context
- This was fixed by commit f332c172a2884c04a0d4e743c8858ff3e7f957a1 in ganesha (and the associated ntirpc changes).
- 09:39 PM Bug #22256 (In Progress): nfs-ganesha: crashes in free_delegrecall_context
- 09:50 PM Bug #21777: src/mds/MDCache.cc: 4332: FAILED assert(mds->is_rejoin())
- Reducing priority since we can't seem to get this reproduced.
- 02:49 AM Bug #22293: client may fail to trim as many caps as MDS asked for
- kernel patch https://github.com/ceph/ceph-client/commit/4f9b2bc31681f41fe73ddbabc6e9b9fd047af126
- 02:44 AM Bug #22293 (Fix Under Review): client may fail to trim as many caps as MDS asked for
- https://github.com/ceph/ceph/pull/19271
- 02:22 AM Bug #22293 (Resolved): client may fail to trim as many caps as MDS asked for
- Client::trim_caps() can't trim inode if it has null child dentries. If config option client_cache_size is large, Clie...
11/30/2017
- 11:18 PM Bug #22292 (New): mds: scrub may mark repaired directory with lost dentries and not flush backtrace
- Simple reproducer (with selected output):...
- 07:15 PM Tasks #22291 (New): add metadata thrasher to qa suite
- Add a tool to qa suite, possibly based on smallfile (https://github.com/bengland2/smallfile) to run filesystem operat...
- 06:56 PM Bug #22288 (Fix Under Review): mds: assert when inode moves during scrub
- https://github.com/ceph/ceph/pull/19263
- 04:03 PM Bug #22288 (In Progress): mds: assert when inode moves during scrub
- 03:40 PM Bug #22288 (Resolved): mds: assert when inode moves during scrub
- If an inode moves while on the scrub stack, it can be enqueued a second time and hit:
mds/CInode.cc: 4153: FAILED ...
- 06:51 PM Bug #22221 (Pending Backport): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual:...
- 06:50 PM Bug #22254 (Pending Backport): client: give more descriptive error message for remount failures
- 06:49 PM Bug #22263 (Pending Backport): client reconnect gather race
- 09:58 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> still no clue in the log. Do you still have this issue after restarting mds
Hi Zheng yan:
...
- 09:01 AM Bug #21539 (Pending Backport): man: missing man page for mount.fuse.ceph
- 09:01 AM Bug #21991 (Pending Backport): mds: tell session ls returns vanila EINVAL when MDS is not active
11/29/2017
- 02:25 PM Bug #22249: Need to restart MDS to release cephfs space
- still no clue in the log. Do you still have this issue after restarting mds
- 01:59 PM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> can't find any clue from log. Next time it happens, please set debug_mds=10 and capture some log ...
11/28/2017
- 11:05 PM Bug #22269 (Fix Under Review): ceph-fuse: failure to remount in startup test does not handle clie...
- https://github.com/ceph/ceph/pull/19218
- 08:41 PM Bug #22269 (Resolved): ceph-fuse: failure to remount in startup test does not handle client_die_o...
- https://github.com/ceph/ceph/blob/38f051c22af1def4a06427876ee2e5000046fd03/src/client/Client.cc#L10063-L10066
The ...
- 01:42 PM Bug #22249: Need to restart MDS to release cephfs space
- can't find any clue from log. Next time it happens, please set debug_mds=10 and capture some log before mds restart
- 03:08 AM Bug #22249: Need to restart MDS to release cephfs space
- Zheng Yan wrote:
> It seems the log was generated by pre-luminous mds. which version of ceph do you use.
OS Ver...
- 09:09 AM Bug #22263 (Fix Under Review): client reconnect gather race
- https://github.com/ceph/ceph/pull/19207
- 09:06 AM Bug #22263 (Resolved): client reconnect gather race
- #0 raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x0000555555ae8bde in reraise_fatal (signum...
11/27/2017
- 10:04 PM Bug #22249 (Need More Info): Need to restart MDS to release cephfs space
- 02:45 PM Bug #22249: Need to restart MDS to release cephfs space
- It seems the log was generated by pre-luminous mds. which version of ceph do you use.
- 07:52 AM Bug #22249 (Can't reproduce): Need to restart MDS to release cephfs space
- I used 'ceph df' to show the usage of the cluster was 238TB (2 copies), however, the result of using 'du -sh' into th...
- 09:21 PM Bug #22003 (Need More Info): [CephFS-Ganesha]MDS migrate will affect Ganesha service?
- Please retry with Ganesha -next.
- 07:37 PM Bug #22256: nfs-ganesha: crashes in free_delegrecall_context
- Here's my ganesha.conf as well. I bisected the change down to 46a5e8535f978b1e12dcb15cbdcbf6d5e757d24e (nfs_rpc_call)...
- 07:34 PM Bug #22256 (Resolved): nfs-ganesha: crashes in free_delegrecall_context
- I've been working on delegation support in cephfs for ganesha. The ceph pieces were recently merged, so I rebased my ...
- 06:49 PM Bug #22254 (Fix Under Review): client: give more descriptive error message for remount failures
- https://github.com/ceph/ceph/pull/19181
- 05:53 PM Bug #22254 (Resolved): client: give more descriptive error message for remount failures
- During remount failures:
https://github.com/ceph/ceph/blob/54e51fd3c39a38e72ed989f862e6e21515f41d3b/src/client/Cli...
- 01:11 PM Bug #21539 (Fix Under Review): man: missing man page for mount.fuse.ceph
- https://github.com/ceph/ceph/pull/19172
- 02:25 AM Backport #22237 (In Progress): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for...
- https://github.com/ceph/ceph/pull/19157
11/25/2017
- 12:32 AM Backport #22241: jewel: Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/19141
- 12:21 AM Backport #22240: luminous: Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/19137
- 12:16 AM Backport #22242: luminous: mds: limit size of subtree migration
- https://github.com/ceph/ceph/pull/19136
11/24/2017
- 09:57 PM Backport #22242 (Resolved): luminous: mds: limit size of subtree migration
- https://github.com/ceph/ceph/pull/20339
- 09:57 PM Backport #22241 (Resolved): jewel: Processes stuck waiting for write with ceph-fuse
- 09:57 PM Backport #22240 (Resolved): luminous: Processes stuck waiting for write with ceph-fuse
- https://github.com/ceph/ceph/pull/20340
- 09:56 PM Backport #22239 (Rejected): luminous: provide a way to look up snapshotted inodes by vinodeno_t
- 09:56 PM Backport #22237 (Resolved): luminous: request that is "!mdr->is_replay() && mdr->is_queued_for_re...
- https://github.com/ceph/ceph/pull/19157
- 01:31 PM Bug #21539 (In Progress): man: missing man page for mount.fuse.ceph
11/22/2017
- 09:52 PM Backport #22228: luminous: client: trim_caps may remove cap iterator points to
- https://github.com/ceph/ceph/pull/19105
- 09:49 PM Backport #22228 (In Progress): luminous: client: trim_caps may remove cap iterator points to
- 09:48 PM Backport #22228 (Resolved): luminous: client: trim_caps may remove cap iterator points to
- https://github.com/ceph/ceph/pull/19105
- 09:47 PM Bug #22157 (Pending Backport): client: trim_caps may remove cap iterator points to
- 09:47 PM Bug #22163 (Pending Backport): request that is "!mdr->is_replay() && mdr->is_queued_for_replay()"...
- 09:46 PM Bug #22058 (Resolved): mds: admin socket wait for scrub completion is racy
- 09:45 PM Feature #22105 (Pending Backport): provide a way to look up snapshotted inodes by vinodeno_t
- 09:44 PM Bug #22008 (Pending Backport): Processes stuck waiting for write with ceph-fuse
- 09:43 PM Bug #21892 (Pending Backport): limit size of subtree migration
- 05:36 AM Bug #22221 (Fix Under Review): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual:...
- https://github.com/ceph/ceph/pull/19095
- 05:32 AM Bug #22221 (Resolved): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
- ...
- 04:45 AM Bug #22219 (Fix Under Review): mds: mds should ignore export_pin for deleted directory
- https://github.com/ceph/ceph/pull/19092
- 03:53 AM Bug #22219 (Resolved): mds: mds should ignore export_pin for deleted directory
- Otherwise, subtree dirfrag may prevent stray inode from getting purged