Activity
From 03/20/2018 to 04/18/2018
04/18/2018
- 09:43 PM Feature #20606 (Resolved): mds: improve usability of cluster rank manipulation and setting cluste...
- 09:42 PM Subtask #20864 (Resolved): kill allow_multimds
- 09:42 PM Feature #20610 (Resolved): MDSMonitor: add new command to shrink the cluster in an automated way
- 09:41 PM Feature #20608 (Resolved): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs se...
- 09:41 PM Feature #20609 (Resolved): MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the ...
- 09:40 PM Bug #23764 (Pending Backport): MDSMonitor: new file systems are not initialized with the pending_...
- 09:39 PM Bug #23766 (Pending Backport): mds: crash during shutdown_pass
- 09:38 PM Bug #23762 (Pending Backport): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_...
- 06:36 PM Feature #3244 (New): qa: integrate Ganesha into teuthology testing to regularly exercise Ganesha ...
- Jeff, fixed the wording to be clear.
- 06:14 PM Feature #3244 (Rejected): qa: integrate Ganesha into teuthology testing to regularly exercise Gan...
- I'm going to suggest that we just close this bug. We're doing this as a matter of course with the current work to cle...
- 05:43 PM Bug #23421 (Need More Info): ceph-fuse: stop ceph-fuse if no root permissions?
- Jos, please get the client logs so we can diagnose.
- 01:01 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
- Thanks, dongdong! That seems to resolve the problem. Pull request is up here:
https://github.com/ceph/ceph/pull/21...
- 11:53 AM Backport #23770 (In Progress): luminous: ceph-fuse: return proper exit code
- https://github.com/ceph/ceph/pull/21495
- 08:49 AM Documentation #23775: PendingReleaseNotes: add notes for major Mimic features
- FYI: https://github.com/ceph/ceph/pull/21374 already includes the mds upgrade process
04/17/2018
- 11:10 PM Documentation #23775 (Resolved): PendingReleaseNotes: add notes for major Mimic features
- mds upgrade process, snapshots, kernel quotas, etc.
- 11:08 PM Bug #23421 (Closed): ceph-fuse: stop ceph-fuse if no root permissions?
- 08:31 AM Bug #23421: ceph-fuse: stop ceph-fuse if no root permissions?
- Patrick Donnelly wrote:
> Jos, can you get more detailed debug logs when this happens? It is probably not related to...
- 07:00 PM Backport #23771 (Resolved): luminous: client: fix gid_count check in UserPerm->deep_copy_from()
- https://github.com/ceph/ceph/pull/21596
- 07:00 PM Backport #23770 (Resolved): luminous: ceph-fuse: return proper exit code
- https://github.com/ceph/ceph/pull/21495
- 04:29 PM Bug #23755 (Fix Under Review): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
- 04:29 PM Bug #23755 (Pending Backport): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
- 01:11 PM Bug #23755 (Fix Under Review): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestS...
- https://github.com/ceph/ceph/pull/21472
- 04:16 PM Bug #23768 (Resolved): MDSMonitor: uncommitted state exposed to clients/mdss
- e.g.
https://github.com/ceph/ceph/pull/21458#discussion_r182041693
and
https://github.com/ceph/ceph/pull/214...
- 02:08 PM Backport #23704 (In Progress): luminous: ceph-fuse: broken directory permission checking
- https://github.com/ceph/ceph/pull/21475
- 01:51 PM Bug #23665: ceph-fuse: return proper exit code
- backporter note: please include https://github.com/ceph/ceph/pull/21473
- 11:59 AM Feature #23623 (Fix Under Review): mds: mark allow_snaps true by default
- by one commit in https://github.com/ceph/ceph/pull/21374
- 03:50 AM Bug #23762 (Fix Under Review): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_...
- https://github.com/ceph/ceph/pull/21458
- 01:50 AM Bug #23762: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
- 03:16 AM Bug #23766 (Fix Under Review): mds: crash during shutdown_pass
- https://github.com/ceph/ceph/pull/21457
- 03:13 AM Bug #23766 (Resolved): mds: crash during shutdown_pass
- ...
- 01:49 AM Bug #23764 (Fix Under Review): MDSMonitor: new file systems are not initialized with the pending_...
- https://github.com/ceph/ceph/pull/21456
- 01:41 AM Bug #23764 (In Progress): MDSMonitor: new file systems are not initialized with the pending_fsmap...
- 01:41 AM Bug #23764 (Resolved): MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
- Problem here: https://github.com/ceph/ceph/blob/60e8a63fdc21654d7f199a67f3f410a9e33c8134/src/mds/FSMap.cc#L234
FSM...
04/16/2018
- 08:10 PM Bug #23762 (In Progress): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
- 08:07 PM Bug #23762 (Resolved): MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
- https://github.com/ceph/ceph/blob/60e8a63fdc21654d7f199a67f3f410a9e33c8134/src/mon/MDSMonitor.cc#L162-L166
- 02:13 PM Backport #23750 (In Progress): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
- https://github.com/ceph/ceph/pull/21448
- 02:09 PM Backport #23703 (In Progress): luminous: MDSMonitor: mds health warnings printed in bad format
- https://github.com/ceph/ceph/pull/21447
- 12:20 PM Backport #23703: luminous: MDSMonitor: mds health warnings printed in bad format
- I'm on it.
- 01:50 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
- I tried to roll a standalone testcase for this, but it didn't stall out in the same way. I'm not quite sure what caus...
- 11:36 AM Feature #21156: mds: speed up recovery with many open inodes
- Patrick Donnelly wrote:
> Very unlikely because of the new structure in the metadata pool adds unacceptable risk for...
- 10:44 AM Backport #23702 (In Progress): luminous: mds: sessions opened by journal replay do not get dirtie...
- https://github.com/ceph/ceph/pull/21441
- 03:29 AM Bug #23652 (Pending Backport): client: fix gid_count check in UserPerm->deep_copy_from()
- 03:29 AM Bug #23665 (Pending Backport): ceph-fuse: return proper exit code
- 03:25 AM Bug #23755 (Resolved): qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestStrays)
- ...
- 03:16 AM Bug #22933 (Fix Under Review): client: add option descriptions and review levels (e.g. LEVEL_DEV)
- https://github.com/ceph/ceph/pull/21434
04/15/2018
- 06:30 PM Bug #23751: mon: use fs-client profile for fs authorize mon caps
- Actually I think this gives blanket permission to read from OSDs for all pools so we may actually want to remove this...
- 06:27 PM Bug #23751 (New): mon: use fs-client profile for fs authorize mon caps
- This is simpler and consistent.
- 05:40 PM Backport #23750 (Resolved): luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
- https://github.com/ceph/ceph/pull/21448
- 05:39 PM Bug #23724 (Resolved): qa: broad snapshot functionality testing across clients
- Ganesha FSAL
ceph-fuse
kclient
- 02:02 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
- I think patrick is right, maybe we should call flush_mdlog_sync to make mds do the mdlog flush before we wait on the ...
- 03:35 AM Bug #23714: slow ceph_ll_sync_inode calls after setattr
- Sounds like Ganesha is blocked on a journal flush by the MDS.
- 02:50 AM Bug #23723 (New): qa: incorporate smallfile workload
- Add smallfile workload
https://github.com/distributed-system-analysis/smallfile
to fs:workloads suite.
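For reference, a minimal sketch of exercising smallfile by hand against a CephFS mount (the mount path and parameter values below are illustrative, not from any suite definition; flag names follow the smallfile README):
    # fetch the workload generator
    git clone https://github.com/distributed-system-analysis/smallfile
    cd smallfile
    # create many small files under a CephFS mount, then read them back
    python smallfile_cli.py --top /mnt/cephfs/smf --operation create --threads 8 --files 4096 --file-size 4
    python smallfile_cli.py --top /mnt/cephfs/smf --operation read --threads 8 --files 4096 --file-size 4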
04/14/2018
- 03:30 PM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
- Patrick, any requirements against running k* suites on ovh?
- 07:10 AM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
- Yuri, I have not seen this on smithi, so I assume it only happens in virtual environments.
- 12:27 AM Cleanup #23718 (Resolved): qa: merge fs/kcephfs suites
- and remove redundant tests (e.g. inline on/off with administrative tests like changing max_mds).
- 12:15 AM Cleanup #23717 (New): cephfs: consider renaming max_mds to a better name
- It is no longer considered a "max" and having fewer ranks than max_mds is considered a bad configuration which genera...
- 12:12 AM Bug #23567 (Fix Under Review): MDSMonitor: successive changes to max_mds can allow hole in ranks
- https://github.com/ceph/ceph/pull/16608
QA and fix here.
04/13/2018
- 09:53 PM Bug #23715: "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-distro-basic-ovh
- Nathan, I am not sure if you have seen this.
Suspect also in http://pulpito.ceph.com/teuthology-2018-04-11_04:15:02-...
- 09:52 PM Bug #23715 (Closed): "Scrubbing terminated -- not all pgs were active and clean" in fs-jewel-dist...
- Run: http://pulpito.ceph.com/teuthology-2018-04-11_04:10:03-fs-jewel-distro-basic-ovh/
Jobs: 40 jobs
Logs: teutholo...
- 09:40 PM Feature #21156: mds: speed up recovery with many open inodes
- Webert Lima wrote:
> Hi, thank you very much for this.
>
> I see this
> > Target version: Ceph - v13.0.0
>
> ...
- 08:33 PM Feature #21156: mds: speed up recovery with many open inodes
- Hi, thank you very much for this.
I see this
> Target version: Ceph - v13.0.0
So I'm not even asking for a bac...
- 09:32 PM Backport #23698 (In Progress): luminous: mds: load balancer fixes
- 09:07 AM Backport #23698: luminous: mds: load balancer fixes
- https://github.com/ceph/ceph/pull/21412
- 08:54 AM Backport #23698 (New): luminous: mds: load balancer fixes
- backport https://github.com/ceph/ceph/pull/19220 to luminous
- 08:41 AM Backport #23698 (Pending Backport): luminous: mds: load balancer fixes
- 02:26 AM Backport #23698 (Resolved): luminous: mds: load balancer fixes
- https://github.com/ceph/ceph/pull/21412
- 06:49 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
- I'll see if I can cook up libcephfs standalone testcase for this.
- 06:48 PM Bug #23714: slow ceph_ll_sync_inode calls after setattr
- We recently added some calls to ceph_ll_sync_inode in ganesha, to be done after a setattr request. Testing with cthon...
- 06:40 PM Bug #23714 (Resolved): slow ceph_ll_sync_inode calls after setattr
- 04:55 PM Bug #23697 (Pending Backport): mds: load balancer fixes
- 08:56 AM Bug #23697: mds: load balancer fixes
- Sorry for the confused edits. The "backport-create-issue" script is much better at this than I am. It's enough to cha...
- 08:36 AM Bug #23697 (New): mds: load balancer fixes
- 02:23 AM Bug #23697 (Resolved): mds: load balancer fixes
- https://github.com/ceph/ceph/pull/19220
- 09:48 AM Bug #21848: client: re-expand admin_socket metavariables in child process
- Hi Patrick,
Sorry for missing this for a long time. I will take a look recently to see whether there is a better f...
- 08:35 AM Backport #23705 (Rejected): jewel: ceph-fuse: broken directory permission checking
- 08:35 AM Backport #23704 (Resolved): luminous: ceph-fuse: broken directory permission checking
- https://github.com/ceph/ceph/pull/21475
- 08:35 AM Backport #23703 (Resolved): luminous: MDSMonitor: mds health warnings printed in bad format
- https://github.com/ceph/ceph/pull/21447
- 08:34 AM Backport #23702 (Resolved): luminous: mds: sessions opened by journal replay do not get dirtied p...
- https://github.com/ceph/ceph/pull/21441
- 02:07 AM Feature #17434: qa: background rsync task for FS workunits
- Current work on this by Ramakrishnan: https://github.com/ceph/ceph/pull/12503
- 01:26 AM Bug #23509 (Pending Backport): ceph-fuse: broken directory permission checking
- 01:25 AM Bug #23582 (Pending Backport): MDSMonitor: mds health warnings printed in bad format
- 01:24 AM Bug #23625 (Pending Backport): mds: sessions opened by journal replay do not get dirtied properly
- 01:10 AM Feature #22417 (Resolved): support purge queue with cephfs-journal-tool
04/12/2018
- 10:50 PM Feature #20608 (In Progress): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs...
- Doug, thinking about this more, I'd like to keep "cluster_down" (as "joinable" or not) because it simplifies qa testi...
- 09:40 PM Feature #23695 (Resolved): VolumeClient: allow ceph_volume_client to create 'volumes' without nam...
- https://bugzilla.redhat.com/show_bug.cgi?id=1566194
to address the needs of
https://github.com/kubernetes-incub...
- 07:39 PM Documentation #23568 (Resolved): doc: outline the steps for upgrading an MDS cluster
- 07:39 PM Backport #23634 (Resolved): luminous: doc: outline the steps for upgrading an MDS cluster
- 07:27 PM Backport #23634: luminous: doc: outline the steps for upgrading an MDS cluster
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21352
merged
- 07:19 PM Bug #23665 (Fix Under Review): ceph-fuse: return proper exit code
- https://github.com/ceph/ceph/pull/21396
- 12:53 AM Bug #23665 (Resolved): ceph-fuse: return proper exit code
- from mailing list...
- 05:00 PM Feature #23689 (New): qa: test major/minor version upgrades
- We should verify the upgrade process [1] works and that older clients with parallel I/O still function correctly.
...
- 10:19 AM Backport #23637 (In Progress): luminous: mds: assertion in MDSRank::validate_sessions
- https://github.com/ceph/ceph/pull/21372
- 04:17 AM Backport #23636 (In Progress): luminous: mds: kicked out by monitor during rejoin
- https://github.com/ceph/ceph/pull/21366
- 01:35 AM Backport #23671 (Resolved): luminous: mds: MDBalancer using total (all time) request count in loa...
- https://github.com/ceph/ceph/pull/21412
- 01:34 AM Backport #23669 (Resolved): luminous: doc: create doc outlining steps to bring down cluster
- https://github.com/ceph/ceph/pull/22872
04/11/2018
- 07:05 PM Feature #20611 (New): MDSMonitor: do not show cluster health warnings for file system intentional...
- Doug, I was just thinking about this and a valid reason to not want a HEALTH_ERR is if you have dozens or hundreds of...
- 01:42 PM Feature #20611 (Fix Under Review): MDSMonitor: do not show cluster health warnings for file syste...
- 01:42 PM Feature #20611: MDSMonitor: do not show cluster health warnings for file system intentionally mar...
- See https://github.com/ceph/ceph/pull/16608, which implements the opposite of this behavior. Whenever a filesystem is...
- 06:54 PM Feature #20607 (Rejected): MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
- This is rejected in favor of removing `mds deactivate`.
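For context, a rough sketch of the shrink path this replaces versus the automated one referenced in #20610 (the filesystem name and rank are illustrative):
    # old way: lower max_mds, then explicitly deactivate the extra rank
    ceph fs set cephfs max_mds 1
    ceph mds deactivate 1
    # with the automated shrink work, lowering max_mds alone is intended to be enough
    ceph fs set cephfs max_mds 1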
- 05:31 PM Documentation #23427 (Pending Backport): doc: create doc outlining steps to bring down cluster
- 05:29 PM Bug #23658 (Resolved): MDSMonitor: crash after assigning standby-replay daemon in multifs setup
- From: https://github.com/rook/rook/issues/1027...
- 05:19 PM Bug #23567: MDSMonitor: successive changes to max_mds can allow hole in ranks
- Doug, I tested with master but I believe it also happened with your PR. I can't remember.
- 03:02 PM Bug #23567 (Need More Info): MDSMonitor: successive changes to max_mds can allow hole in ranks
- Was this before or after https://github.com/ceph/ceph/pull/16608 ?
- 03:14 PM Bug #23652 (Fix Under Review): client: fix gid_count check in UserPerm->deep_copy_from()
- https://github.com/ceph/ceph/pull/21341
- 03:10 PM Bug #23652 (Resolved): client: fix gid_count check in UserPerm->deep_copy_from()
- Fix gid_count check in UserPerm->deep_copy_from(). Allocate gids only if gid_count > 0.
- 02:34 PM Backport #23635 (In Progress): luminous: client: fix request send_to_auth was never really used
- https://github.com/ceph/ceph/pull/21354
- 01:46 PM Feature #20609 (In Progress): MDSMonitor: add new command `ceph fs set <fs_name> down` to bring t...
- https://github.com/ceph/ceph/pull/16608 overhauls this behavior, and re-implements the cluster_down flag for this fun...
- 01:45 PM Feature #20606 (Fix Under Review): mds: improve usability of cluster rank manipulation and settin...
- https://github.com/ceph/ceph/pull/16608
- 01:44 PM Feature #20610 (Fix Under Review): MDSMonitor: add new command to shrink the cluster in an automa...
- https://github.com/ceph/ceph/pull/16608
- 01:43 PM Feature #20608 (Rejected): MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs se...
- This behavior is overhauled in https://github.com/ceph/ceph/pull/16608 .
- 01:30 PM Backport #23634 (In Progress): luminous: doc: outline the steps for upgrading an MDS cluster
- https://github.com/ceph/ceph/pull/21352
- 01:30 PM Bug #23393 (Resolved): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
- 01:26 PM Bug #23643 (Resolved): qa: osd_mon_report_interval typo in test_full.py
- 10:27 AM Backport #23632 (In Progress): luminous: mds: handle client requests when mds is stopping
- https://github.com/ceph/ceph/pull/21346
- 09:09 AM Backport #23632: luminous: mds: handle client requests when mds is stopping
- I am on it.
- 05:14 AM Bug #21745 (Pending Backport): mds: MDBalancer using total (all time) request count in load stati...
- 01:00 AM Feature #22372: kclient: implement quota handling using new QuotaRealm
- by following commits in testing branch
ceph: quota: report root dir quota usage in statfs …
ceph: quota: add cou...
- 12:47 AM Bug #18730 (Closed): mds: backtrace issues getxattr for every file with cap on rejoin
- should be resolved by open file table https://github.com/ceph/ceph/pull/20132
- 12:44 AM Fix #5268 (Closed): mds: fix/clean up file size/mtime recovery code
- current code does parallel object checks.
- 12:39 AM Bug #4212 (Closed): mds: open_snap_parents isn't called all the times it needs to be
- with the new snaprealm format, there is no need to open past parent
- 12:37 AM Bug #21412 (Closed): cephfs: too many cephfs snapshots chokes the system
- 12:37 AM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- this is actually an osd issue. I talked to Josh at Cephalocon; he said it has already been fixed
04/10/2018
- 11:22 PM Feature #13688 (Resolved): mds: performance: journal inodes with capabilities to limit rejoin tim...
- Fixed by Zheng's openfile table: https://github.com/ceph/ceph/pull/20132
- 11:20 PM Feature #14456: mon: prevent older/incompatible clients from mounting the file system
- I think the right direction is to allow setting a flag on the MDSMap to prevent older clients from connecting to the ...
- 11:18 PM Bug #22482 (Won't Fix): qa: MDS can apparently journal new file on "full" metadata pool
- This is expected. The MDS is treated specially by the OSDs to allow some writes when the pool is full.
- 11:13 PM Feature #15507: MDS: support "watching" an inode/dentry
- https://bugzilla.redhat.com/show_bug.cgi?id=1561326
- 10:36 PM Bug #23615 (Rejected): qa: test for "snapid allocation/deletion mismatch with monitor"
- From Zheng:
> mds deletes old snapshots by MRemoveSnaps message. Monitor does not do snap_seq auto-increment wh...
- 06:45 PM Bug #23643 (Resolved): qa: osd_mon_report_interval typo in test_full.py
- https://github.com/ceph/ceph/blob/577737d007c05bc7a3972158be8c520ab73a1517/qa/tasks/cephfs/test_full.py#L137
- 06:33 PM Bug #23624 (Resolved): cephfs-foo-tool crashes immediately it starts
- 08:27 AM Bug #23624 (Fix Under Review): cephfs-foo-tool crashes immediately it starts
- https://github.com/ceph/ceph/pull/21321
- 08:14 AM Bug #23624 (Resolved): cephfs-foo-tool crashes immediately it starts
- http://pulpito.ceph.com/pdonnell-2018-04-06_22:48:23-kcephfs-master-testing-basic-smithi/
- 05:54 PM Backport #23642 (Rejected): luminous: mds: the number of inode showed by "mds perf dump" not corr...
- 05:54 PM Backport #23641 (Resolved): luminous: auth|doc: fs authorize error for existing credentials confu...
- https://github.com/ceph/ceph/pull/22963
- 05:53 PM Backport #23638 (Resolved): luminous: ceph-fuse: getgroups failure causes exception
- https://github.com/ceph/ceph/pull/21687
- 05:53 PM Backport #23637 (Resolved): luminous: mds: assertion in MDSRank::validate_sessions
- https://github.com/ceph/ceph/pull/21372
- 05:53 PM Backport #23636 (Resolved): luminous: mds: kicked out by monitor during rejoin
- https://github.com/ceph/ceph/pull/21366
- 05:53 PM Backport #23635 (Resolved): luminous: client: fix request send_to_auth was never really used
- https://github.com/ceph/ceph/pull/21354
- 05:53 PM Backport #23634 (Resolved): luminous: doc: outline the steps for upgrading an MDS cluster
- https://github.com/ceph/ceph/pull/21352
- 05:53 PM Backport #23632 (Resolved): luminous: mds: handle client requests when mds is stopping
- https://github.com/ceph/ceph/pull/21346
- 02:24 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- the two issues are not the same, but they are caused by the same reason: the mds takes too much time to handle MDSMap messa...
- 09:24 AM Bug #23625 (Fix Under Review): mds: sessions opened by journal replay do not get dirtied properly
- https://github.com/ceph/ceph/pull/21323
- 09:12 AM Bug #23625 (Resolved): mds: sessions opened by journal replay do not get dirtied properly
- http://pulpito.ceph.com/pdonnell-2018-04-06_01:22:39-multimds-wip-pdonnell-testing-20180405.233852-testing-basic-smit...
- 07:07 AM Backport #23158 (Resolved): jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
- 04:46 AM Bug #23380 (Pending Backport): mds: ceph.dir.rctime follows dir ctime not inode ctime
- 04:45 AM Bug #23452 (Pending Backport): mds: assertion in MDSRank::validate_sessions
- 04:45 AM Bug #23446 (Pending Backport): ceph-fuse: getgroups failure causes exception
- 04:44 AM Bug #23530 (Pending Backport): mds: kicked out by monitor during rejoin
- 04:43 AM Bug #23602 (Pending Backport): mds: handle client requests when mds is stopping
- 04:43 AM Bug #23541 (Pending Backport): client: fix request send_to_auth was never really used
- 04:42 AM Feature #23623 (Resolved): mds: mark allow_snaps true by default
- 04:41 AM Bug #23491 (Resolved): fs: quota backward compatibility
- 01:22 AM Bug #21745 (Fix Under Review): mds: MDBalancer using total (all time) request count in load stati...
- https://github.com/ceph/ceph/pull/19220/commits/e9689c1ff7e75394298c0e86aa9ed4e703391c3e
04/09/2018
- 11:29 PM Bug #22824 (Resolved): Journaler::flush() may flush less data than expected, which causes flush w...
- 11:29 PM Backport #22967 (Resolved): luminous: Journaler::flush() may flush less data than expected, which...
- 11:17 PM Backport #22967: luminous: Journaler::flush() may flush less data than expected, which causes flu...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20431
merged
- 11:29 PM Bug #22221 (Resolved): qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
- 11:29 PM Backport #22383 (Resolved): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
- 11:17 PM Backport #22383: luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21173
merged
- 11:17 PM Backport #23154 (Resolved): luminous: mds: FAILED assert (p != active_requests.end()) in MDReques...
- 11:16 PM Backport #23154: luminous: mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCach...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/21176
merged
- 10:59 PM Bug #23571 (Resolved): mds: make sure that MDBalancer uses heartbeat info from the same epoch
- 10:58 PM Backport #23572 (Resolved): luminous: mds: make sure that MDBalancer uses heartbeat info from the...
- 10:58 PM Bug #23569 (Resolved): mds: counter decay incorrect
- 10:58 PM Backport #23570 (Resolved): luminous: mds: counter decay incorrect
- 10:58 PM Bug #23560 (Resolved): mds: mds gets significantly behind on trimming while creating millions of ...
- 10:57 PM Backport #23561 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
- 10:34 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- dongdong tao wrote:
> Patrick Donnelly wrote:
> > Dongdong, I think fast dispatch may not be the answer here. We're...
- 08:12 PM Bug #23519 (In Progress): mds: mds got laggy because of MDSBeacon stuck in mqueue
- 02:27 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Patrick Donnelly wrote:
> Dongdong, I think fast dispatch may not be the answer here. We're not yet sure on the caus...
- 01:39 PM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- Dongdong, I think fast dispatch may not be the answer here. We're not yet sure on the cause. Do you have ideas?
- 09:07 PM Bug #10423 (Closed): update hadoop gitbuilders
- stale
- 09:03 PM Bug #20593 (Pending Backport): mds: the number of inode showed by "mds perf dump" not correct aft...
- 09:01 PM Feature #21156 (Resolved): mds: speed up recovery with many open inodes
- 09:00 PM Bug #21765 (Pending Backport): auth|doc: fs authorize error for existing credentials confusing/un...
- Please backport: https://github.com/ceph/ceph/pull/17678/commits/447b3d4852acd2db656c973cc224fb77d3fff590
- 08:56 PM Feature #22545 (Duplicate): add dump inode command to mds
- 08:52 PM Bug #6613 (Closed): samba is crashing in teuthology
- Closing as stale.
- 08:52 PM Feature #358: mds: efficient revert to snapshot
- Also consider cloning snapshots.
- 08:50 PM Documentation #21172 (Duplicate): doc: Export over NFS
- 08:48 PM Bug #21745: mds: MDBalancer using total (all time) request count in load statistics
- https://github.com/ceph/ceph/pull/19220/commits/fb8d07772ffd3b061d2752c6b3375f6cb187be4b
Zheng, please amend the a...
- 08:43 PM Bug #19101 (Closed): "samba3error [Unknown error/failure. Missing torture_fail() or torture_asser...
- Not looking at samba right now.
- 08:39 PM Bug #23234 (Won't Fix): mds: damage detected while opening remote dentry
- Sorry, we won't look at bugs for multiple actives pre-Luminous.
- 08:37 PM Bug #21412: cephfs: too many cephfs snapshots chokes the system
- Zheng, is this issue resolved with the snapshot changes for Mimic?
- 08:36 PM Bug #20494 (Closed): cephfs_data_scan: try_remove_dentries_for_stray assertion failure
- Closing due to inactivity.
- 08:31 PM Bug #19255 (Can't reproduce): qa: test_full_fclose failure
- 08:27 PM Bug #22788 (Won't Fix): ceph-fuse performance issues with rsync
- 08:26 PM Feature #12274 (In Progress): mds: start forward scrubs from all subtree roots, skip non-auth met...
- 08:21 PM Bug #23615 (Rejected): qa: test for "snapid allocation/deletion mismatch with monitor"
- See email thread.
- 08:11 PM Bug #23556 (Closed): segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- 07:45 PM Feature #21888 (Fix Under Review): Adding [--repair] option for cephfs-journal-tool make it can r...
- 07:29 PM Feature #23362 (In Progress): mds: add drop_cache command
- 06:39 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- The service map is a librados resource consumed by ceph-mgr. It periodically gets perfcounters, for example. When l...
- 06:23 PM Feature #23376: nfsgw: add NFS-Ganesha to service map similar to "rgw-nfs"
- I'm not aware of any bugs open on this. Is there any background on the rgw-nfs map at all? I've not looked at the ser...
- 05:22 PM Documentation #23611 (Need More Info): doc: add description of new fs-client auth profile
- On that page: http://docs.ceph.com/docs/master/rados/operations/user-management/#authorization-capabilities
https:...
- 04:00 PM Documentation #23568 (Pending Backport): doc: outline the steps for upgrading an MDS cluster
- 01:37 PM Bug #23518: mds: crash when failover
- Are you still hitting the issue or has it gone away? If so, `debug mds = 20` logs would be helpful.
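A minimal sketch of turning that logging up on the affected MDS (the daemon name "a" is illustrative):
    # at runtime, via the admin socket on the MDS host
    ceph daemon mds.a config set debug_mds 20
    # or remotely
    ceph tell mds.a injectargs '--debug-mds 20'
    # (or set "debug mds = 20" under [mds] in ceph.conf and restart the daemon)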
- 01:32 PM Bug #23393 (Fix Under Review): ceph-ansible: update Ganesha config for nfs_file_gw to use optimal...
- https://github.com/ceph/ceph-ansible/pull/2503
- 01:26 PM Bug #23538 (Fix Under Review): mds: fix occasional dir rstat inconsistency between multi-MDSes
- 01:25 PM Bug #23530 (Fix Under Review): mds: kicked out by monitor during rejoin
- 12:59 PM Bug #23602 (Resolved): mds: handle client requests when mds is stopping
- https://github.com/ceph/ceph/pull/21167
04/08/2018
- 04:35 PM Bug #23211 (Resolved): client: prevent fallback to remount when dentry_invalidate_cb is true but ...
- 04:35 PM Backport #23356 (Resolved): jewel: client: prevent fallback to remount when dentry_invalidate_cb ...
- 02:54 PM Bug #23332: kclient: with fstab entry is not coming up reboot
- Zheng Yan wrote:
> In messages_ceph-sshreeka-run379-node5-client
> [...]
>
> looks like fstab didn't include c...
- 01:12 AM Bug #23332: kclient: with fstab entry is not coming up reboot
- In messages_ceph-sshreeka-run379-node5-client ...
- 06:36 AM Bug #23503: mds: crash during pressure test
- Patrick Donnelly wrote:
> wei jin wrote:
> > Hi, Patrick, I have a question: after pinning base dir, will subdirs s...
04/06/2018
- 10:28 PM Bug #23332 (New): kclient: with fstab entry is not coming up reboot
- 10:25 PM Documentation #23583 (Resolved): doc: update snapshot doc to account for recent changes
- http://docs.ceph.com/docs/master/dev/cephfs-snapshots/
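For quick reference, the user-visible snapshot interface the doc covers is the per-directory .snap directory; a sketch (filesystem name and paths are illustrative; on Luminous-era clusters snapshots are gated behind the allow_new_snaps flag):
    ceph fs set cephfs allow_new_snaps true --yes-i-really-mean-it   # enable snapshots (off by default pre-Mimic)
    mkdir /mnt/cephfs/mydir/.snap/snap1    # take a snapshot of mydir
    ls /mnt/cephfs/mydir/.snap             # list snapshots
    rmdir /mnt/cephfs/mydir/.snap/snap1    # remove the snapshot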
- 10:23 PM Bug #22741 (Resolved): osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-wip-yuri-mas...
- 10:21 PM Backport #22696 (In Progress): luminous: client: dirty caps may never get the chance to flush
- https://github.com/ceph/ceph/pull/21278
- 10:11 PM Backport #22696: luminous: client: dirty caps may never get the chance to flush
- I'd prefer not to, I'll try to resolve the conflicts.
- 10:05 PM Bug #23582 (Fix Under Review): MDSMonitor: mds health warnings printed in bad format
- https://github.com/ceph/ceph/pull/21276
- 08:29 PM Bug #23582 (Resolved): MDSMonitor: mds health warnings printed in bad format
- Example:...
- 05:32 PM Bug #23033 (Resolved): qa: ignore more warnings during mds-full test
- 05:32 PM Backport #23060 (Resolved): luminous: qa: ignore more warnings during mds-full test
- 05:28 PM Bug #22483 (Resolved): mds: check for CEPH_OSDMAP_FULL is now wrong; cluster full flag is obsolete
- 05:27 PM Bug #21402 (Resolved): mds: move remaining containers in CDentry/CDir/CInode to mempool
- 05:27 PM Backport #22972 (Resolved): luminous: mds: move remaining containers in CDentry/CDir/CInode to me...
- 05:26 PM Bug #22288 (Resolved): mds: assert when inode moves during scrub
- 05:26 PM Backport #23016 (Resolved): luminous: mds: assert when inode moves during scrub
- 03:20 AM Backport #23572 (In Progress): luminous: mds: make sure that MDBalancer uses heartbeat info from ...
- 03:18 AM Backport #23572 (Resolved): luminous: mds: make sure that MDBalancer uses heartbeat info from the...
- https://github.com/ceph/ceph/pull/21267
- 03:17 AM Bug #23571 (Resolved): mds: make sure that MDBalancer uses heartbeat info from the same epoch
- Already fixed in: https://github.com/ceph/ceph/pull/18941/
- 03:15 AM Backport #23570 (In Progress): luminous: mds: counter decay incorrect
- 03:12 AM Backport #23570 (Resolved): luminous: mds: counter decay incorrect
- https://github.com/ceph/ceph/pull/21266
- 03:12 AM Bug #23569 (Resolved): mds: counter decay incorrect
- Fixed by https://github.com/ceph/ceph/pull/18776
Issue for backport.
- 03:07 AM Documentation #23427 (Fix Under Review): doc: create doc outlining steps to bring down cluster
- https://github.com/ceph/ceph/pull/21265
- 01:13 AM Bug #21584 (Resolved): FAILED assert(get_version() < pv) in CDir::mark_dirty
- 01:13 AM Backport #22031 (Resolved): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
- 12:02 AM Bug #23532 (Resolved): doc: create PendingReleaseNotes and add dev doc for openfile table purpose...
04/05/2018
- 10:52 PM Documentation #23568 (Fix Under Review): doc: outline the steps for upgrading an MDS cluster
- https://github.com/ceph/ceph/pull/21263
- 10:37 PM Documentation #23568 (Resolved): doc: outline the steps for upgrading an MDS cluster
- Until we have versioned MDS-MDS messages and feature flags obeyed by MDSs during upgrades (e.g. require_mimic_mds), t...
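A rough sketch of the kind of single-active-MDS upgrade sequence such a doc describes (filesystem name, rank, and systemd unit names are illustrative, not taken from the linked PR):
    ceph fs set cephfs max_mds 1            # shrink to a single active rank
    ceph mds deactivate 1                   # (pre-Mimic) stop any remaining ranks > 0
    systemctl stop ceph-mds@mds-b           # take standby daemons offline
    # upgrade packages on the last active MDS host, then restart it
    systemctl restart ceph-mds@mds-a
    # upgrade and restart the standbys, then restore the original max_mds
    ceph fs set cephfs max_mds 2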
- 09:22 PM Bug #23567 (Resolved): MDSMonitor: successive changes to max_mds can allow hole in ranks
- With 3 MDS, approximately this sequence:...
- 07:39 PM Backport #22384 (Resolved): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), act...
- 07:09 PM Bug #23503: mds: crash during pressure test
- wei jin wrote:
> Hi, Patrick, I have a question: after pinning base dir, will subdirs still be migrated to other act...
- 06:39 PM Bug #22263 (Resolved): client reconnect gather race
- 06:39 PM Backport #22380 (Resolved): jewel: client reconnect gather race
- 06:38 PM Bug #22631 (Resolved): mds: crashes because of old pool id in journal header
- 06:38 PM Backport #22764 (Resolved): jewel: mds: crashes because of old pool id in journal header
- 06:37 PM Bug #21383 (Resolved): qa: failures from pjd fstest
- 06:37 PM Backport #21489 (Resolved): jewel: qa: failures from pjd fstest
- 04:26 PM Bug #22821 (Resolved): mds: session reference leak
- 04:26 PM Backport #22970 (Resolved): jewel: mds: session reference leak
- 01:21 PM Bug #23529 (Resolved): TmapMigratePP.DataScan asserts in jewel
- 04:19 AM Backport #23561 (In Progress): luminous: mds: mds gets significantly behind on trimming while cre...
- https://github.com/ceph/ceph/pull/21256
- 04:14 AM Backport #23561 (Resolved): luminous: mds: mds gets significantly behind on trimming while creati...
- https://github.com/ceph/ceph/pull/21256
- 04:14 AM Bug #23560 (Pending Backport): mds: mds gets significantly behind on trimming while creating mill...
- 03:43 AM Bug #23491 (Fix Under Review): fs: quota backward compatibility
- https://github.com/ceph/ceph/pull/21255
04/04/2018
- 11:49 PM Bug #23560 (Fix Under Review): mds: mds gets significantly behind on trimming while creating mill...
- https://github.com/ceph/ceph/pull/21254
- 11:44 PM Bug #23560 (Resolved): mds: mds gets significantly behind on trimming while creating millions of ...
- Under create-heavy workloads, the MDS sometimes reaches ~60 untrimmed segments for brief periods. I suggest we bump mds_l...
- 09:26 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Luminous test revert PR: https://github.com/ceph/ceph/pull/21251
- 09:22 PM Bug #20988: client: dual client segfault with racing ceph_shutdown
- Jeff Layton wrote:
> To be clear, I think we may want to leave off the patch that adds the new testcase from this se...
- 09:13 PM Bug #20988 (Resolved): client: dual client segfault with racing ceph_shutdown
- 09:13 PM Backport #21526 (Closed): jewel: client: dual client segfault with racing ceph_shutdown
- Dropping this due to the age of jewel, dubious value, and lack of dual-client use-case for jewel.
- 08:48 PM Bug #23556: segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- Tentatively agreed to drop the PR, because "jewel is near EOL and we don't have a use-case with dual clients for jewel"
- 08:31 PM Bug #23556: segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- Looks to be coming from this commit, because it's the one that adds the ShutdownRace test case:
https://github.com...
- 08:23 PM Bug #23556 (Closed): segfault in LibCephFS.ShutdownRace (jewel 10.2.11 integration testing)
- The test runs libcephfs/test.sh workunit, which in turn runs the ceph_test_libcephfs binary from the ceph-test packag...
- 05:51 PM Bug #23210 (Resolved): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- Sorry, this should not be backported. ceph-fuse "support" for NFS is only for Mimic.
- 06:05 AM Bug #23210 (Pending Backport): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- @Patrick, please confirm that this should be backported to luminous and which master PR.
- 07:50 AM Bug #16807 (Resolved): Crash in handle_slave_rename_prep
- http://tracker.ceph.com/issues/16768 already fixed
- 07:47 AM Bug #22353 (Resolved): kclient: ceph_getattr() return zero st_dev for normal inode
- 07:44 AM Feature #4501 (Resolved): Identify fields in CDir which aren't permanently necessary
- 07:43 AM Tasks #4499 (Resolved): Identify fields in CInode which aren't permanently necessary
- 07:43 AM Feature #14427 (Resolved): qa: run snapshot tests under thrashing
- 07:41 AM Feature #21877 (Resolved): quota and snaprealm integation
- 07:38 AM Feature #22371 (Resolved): mds: implement QuotaRealm to obviate parent quota lookup
- https://github.com/ceph/ceph/pull/18424
- 07:36 AM Bug #3254 (Resolved): mds: Replica inode's parent snaprealms are not open
- 07:36 AM Bug #3254: mds: Replica inode's parent snaprealms are not open
- opening snaprealm parents is no longer required with the new snaprealm format
https://github.com/ceph/ceph/pull/16779
- 07:34 AM Bug #1938 (Resolved): mds: snaptest-2 doesn't pass with 3 MDS system
- https://github.com/ceph/ceph/pull/16779
- 07:34 AM Bug #925 (Resolved): mds: update replica snaprealm on rename
- https://github.com/ceph/ceph/pull/16779
04/03/2018
- 08:07 PM Documentation #23271 (Resolved): doc: create install/setup guide for NFS-Ganesha w/ CephFS
- 06:44 PM Bug #23436 (Resolved): Client::_read() always return 0 when reading from inline data
- https://github.com/ceph/ceph/pull/21221
Wrote/tested that before I saw your PR, sorry Zheng.
- 02:46 AM Bug #23436 (Fix Under Review): Client::_read() always return 0 when reading from inline data
- previous PR is buggy
https://github.com/ceph/ceph/pull/21186
- 06:41 PM Bug #23210 (Resolved): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- 01:38 PM Bug #23509: ceph-fuse: broken directory permission checking
- Pull request here:
https://github.com/ceph/ceph/pull/21181
- 12:04 PM Bug #23250 (Need More Info): mds: crash during replay: interval_set.h: 396: FAILED assert(p->firs...
- 11:54 AM Bug #21070 (Resolved): MDS: MDS is laggy or crashed When deleting a large number of files
- 10:08 AM Bug #23529 (Fix Under Review): TmapMigratePP.DataScan asserts in jewel
- https://github.com/ceph/ceph/pull/21208
- 09:30 AM Backport #21450 (Closed): jewel: MDS: MDS is laggy or crashed When deleting a large number of files
- jewel does not have this bug
- 09:14 AM Bug #23541 (Fix Under Review): client: fix request send_to_auth was never really used
- 03:20 AM Bug #23541: client: fix request send_to_auth was never really used
- https://github.com/ceph/ceph/pull/21191
- 03:20 AM Bug #23541 (Resolved): client: fix request send_to_auth was never really used
- Client request's send_to_auth was never really used in choose_target_mds, although it would be set to true when getti...
- 09:09 AM Bug #23532 (Fix Under Review): doc: create PendingReleaseNotes and add dev doc for openfile table...
- https://github.com/ceph/ceph/pull/21204
- 07:39 AM Bug #23491: fs: quota backward compatibility
- Will there be a feature bit for mimic? If there will be, it's easy to add an mds version check to the client.
- 06:59 AM Bug #23491: fs: quota backward compatibility
- An old userspace client talking to a new mds is OK; a new client talking to an old mds may have problems.
- 02:59 AM Backport #23356 (In Progress): jewel: client: prevent fallback to remount when dentry_invalidate_...
- 02:46 AM Backport #23157 (In Progress): luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
- 02:44 AM Backport #23158 (In Progress): jewel: mds: underwater dentry check in CDir::_omap_fetched is racy
04/02/2018
- 05:42 PM Bug #23448 (Fix Under Review): nfs-ganesha: fails to parse rados URLs with '.' in object name
- https://review.gerrithub.io/#/c/406085/
- 04:37 PM Bug #23448: nfs-ganesha: fails to parse rados URLs with '.' in object name
- Opened github bug here:
https://github.com/nfs-ganesha/nfs-ganesha/issues/283
- 01:47 PM Bug #23518: mds: crash when failover
- No. I did nothing.
During the pressure test, I ran into two crashes; the other one is #23503.
- 01:43 PM Bug #23518 (Need More Info): mds: crash when failover
- Did you evict the client session during this time?
- 01:42 PM Bug #23491: fs: quota backward compatibility
- There definitely needs to be something to keep this feature turned off until all the MDSes have upgraded. That may ju...
- 11:52 AM Bug #23446: ceph-fuse: getgroups failure causes exception
- New PR here:
https://github.com/ceph/ceph/pull/21132
- 10:41 AM Backport #23154 (In Progress): luminous: mds: FAILED assert (p != active_requests.end()) in MDReq...
- 10:40 AM Backport #23155 (Need More Info): jewel: mds: FAILED assert (p != active_requests.end()) in MDReq...
- @Patrick: This jewel backport appears to depend on https://github.com/ceph/ceph/commit/57c001335e180868088b329b575f0b...
- 10:29 AM Backport #22970 (In Progress): jewel: mds: session reference leak
- 10:20 AM Backport #22696 (Need More Info): luminous: client: dirty caps may never get the chance to flush
- @Patrick: Appears to depend on https://github.com/ceph/ceph/commit/8859ccf2824adbf836f32760b7e4f81c92cb47c4 - do you ...
- 10:19 AM Backport #22697 (Need More Info): jewel: client: dirty caps may never get the chance to flush
- @Patrick: Appears to depend on https://github.com/ceph/ceph/commit/8859ccf2824adbf836f32760b7e4f81c92cb47c4 - do you ...
- 10:15 AM Backport #22504 (Need More Info): luminous: client may fail to trim as many caps as MDS asked for
- non-trivial backport (the luminous and jewel conflicts are exactly the same, so once the luminous backport is done, t...
- 10:05 AM Backport #22505 (Need More Info): jewel: client may fail to trim as many caps as MDS asked for
- non-trivial backport (the luminous and jewel conflicts are exactly the same, so once the luminous backport is done, t...
- 09:56 AM Backport #22383 (In Progress): luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0...
- 09:55 AM Backport #22384 (In Progress): jewel: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), ...
- 07:10 AM Bug #23538: mds: fix occasional dir rstat inconsistency between multi-MDSes
- https://github.com/ceph/ceph/pull/21166
- 06:58 AM Bug #23538: mds: fix occasional dir rstat inconsistency between multi-MDSes
- I looked through the code and found in MDCache::predirty_journal_parents, the parent's rstat of a file will be update...
- 06:57 AM Bug #23538 (Resolved): mds: fix occasional dir rstat inconsistency between multi-MDSes
- Recently we found dir rstat inconsistency between multi-MDSes on ceph version Luminous.
For example, on client A,...
04/01/2018
- 08:52 PM Backport #22380 (In Progress): jewel: client reconnect gather race
- 08:50 PM Backport #22378 (In Progress): jewel: ceph-fuse: failure to remount in startup test does not hand...
03/31/2018
- 12:01 PM Backport #22031 (In Progress): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
- 12:00 PM Backport #22031 (Need More Info): jewel: FAILED assert(get_version() < pv) in CDir::mark_dirty
- @Zheng, this backport appears to depend on 7596f4b76a31d16b01ed733be9c52ecfffa7b21a
Please advise.
- 05:28 AM Bug #23532 (Resolved): doc: create PendingReleaseNotes and add dev doc for openfile table purpose...
- https://github.com/ceph/ceph/pull/20132
- 05:19 AM Backport #21526 (In Progress): jewel: client: dual client segfault with racing ceph_shutdown
- 05:12 AM Backport #21489 (In Progress): jewel: qa: failures from pjd fstest
- 05:07 AM Backport #21450 (Need More Info): jewel: MDS: MDS is laggy or crashed When deleting a large numbe...
- Zheng, this backport appears to depend on 2b396cab22c9faaa7496000e77ec5f2d7e7d553d which is not in jewel.
03/30/2018
- 03:45 PM Bug #23530: mds: kicked out by monitor during rejoin
- https://github.com/ceph/ceph/pull/21144
The above PR can fix this issue, and it has been verified in our cluster.
- 03:38 PM Bug #23530 (Resolved): mds: kicked out by monitor during rejoin
- function process_imported_caps might hold mds_lock too long,
mds tick thread will be starved, and leads to ...
- 03:36 PM Bug #23529 (Resolved): TmapMigratePP.DataScan asserts in jewel
- I'm not sure why this test is in the rados suite, when it tests a CephFS-specific tool(?), but here we are. After som...
- 03:35 AM Bug #23519: mds: mds got laggy because of MDSBeacon stuck in mqueue
- currently, no incoming mds messages can be fast dispatched.
so, and since the MDSBeacon's handler handle_mds_...
- 03:15 AM Bug #23519 (Resolved): mds: mds got laggy because of MDSBeacon stuck in mqueue
- the MDSBeacon message from the monitor may be stuck in mqueue for a long time,
because DispatcherQueue is currently dispatchi...
- 03:06 AM Bug #23503: mds: crash during pressure test
- Patrick Donnelly wrote:
> wei jin wrote:
> > After crash, the standby mds took it over, however, we observed anothe...
- 03:01 AM Bug #23518 (Resolved): mds: crash when failover
- 2018-03-29 10:25:04.719502 7f5ae5ad2700 -1 /build/ceph-12.2.4/src/mds/MDCache.cc: In function 'void MDCache::handle_c...
03/29/2018
- 09:04 PM Documentation #23334 (Resolved): doc: note client eviction results in a client instance blacklist...
- 07:44 PM Bug #23503: mds: crash during pressure test
- wei jin wrote:
> After crash, the standby mds took it over, however, we observed another crash:
This smells like ...
- 07:43 PM Bug #23503 (Duplicate): mds: crash during pressure test
- 09:08 AM Bug #23503: mds: crash during pressure test
- After crash, the standby mds took it over, however, we observed another crash:
2018-03-29 10:25:04.719502 7f5ae5ad...
- 09:05 AM Bug #23503 (Duplicate): mds: crash during pressure test
- ceph version: 12.2.4
10 mds, 9 active + 1 standby
disabled dir fragmentation
We created 9 directories, and pnine...
- 06:39 PM Bug #22754 (Resolved): mon: removing tier from an EC base pool is forbidden, even if allow_ec_ove...
- 06:39 PM Backport #22971 (Resolved): luminous: mon: removing tier from an EC base pool is forbidden, even ...
- 01:20 PM Backport #22971: luminous: mon: removing tier from an EC base pool is forbidden, even if allow_ec...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20433
merged
- 06:27 PM Bug #10915 (Fix Under Review): client: hangs on umount if it had an MDS session evicted
- https://github.com/ceph/ceph/pull/21065
- 02:28 PM Bug #23509: ceph-fuse: broken directory permission checking
- Kernel chdir syscall does:...
- 02:23 PM Bug #23509 (Resolved): ceph-fuse: broken directory permission checking
- Description of problem:
We have encountered cephfs-fuse mounted directory different behavior than base Linux or kern...
- 04:27 AM Backport #23474 (In Progress): luminous: client: allow caller to request that setattr request be ...
- https://github.com/ceph/ceph/pull/21109
- 12:21 AM Bug #23491 (Resolved): fs: quota backward compatibility
- Since...
03/28/2018
- 08:49 PM Backport #22968 (Resolved): jewel: Journaler::flush() may flush less data than expected, which ca...
- 08:48 PM Bug #22730 (Resolved): mds: scrub crash
- 08:48 PM Backport #22865 (Resolved): jewel: mds: scrub crash
- 08:47 PM Bug #22734 (Resolved): cephfs-journal-tool: may got assertion failure due to not shutdown
- 08:47 PM Backport #22863 (Resolved): jewel: cephfs-journal-tool: may got assertion failure due to not shut...
- 08:47 PM Backport #22861 (Resolved): jewel: osdc: "FAILED assert(bh->last_write_tid > tid)" in powercycle-...
- 08:46 PM Bug #22536 (Resolved): client:_rmdir() uses a deleted memory structure(Dentry) leading a core
- 08:46 PM Backport #22700 (Resolved): jewel: client:_rmdir() uses a deleted memory structure(Dentry) leadin...
- 08:45 PM Bug #22562 (Resolved): mds: fix dump last_sent
- 08:45 PM Backport #22695 (Resolved): jewel: mds: fix dump last_sent
- 08:44 PM Bug #22008 (Resolved): Processes stuck waiting for write with ceph-fuse
- 08:44 PM Backport #22241 (Resolved): jewel: Processes stuck waiting for write with ceph-fuse
- 03:00 PM Bug #20592 (Resolved): client::mkdirs not handle well when two clients send mkdir request for a s...
- 03:00 PM Backport #20823 (Resolved): jewel: client::mkdirs not handle well when two clients send mkdir req...
- 06:01 AM Backport #22862 (Resolved): luminous: cephfs-journal-tool: may got assertion failure due to not s...
- 06:01 AM Bug #22627 (Resolved): qa: kcephfs lacks many configurations in the fs/multimds suites
- 06:01 AM Backport #22891 (Resolved): luminous: qa: kcephfs lacks many configurations in the fs/multimds su...
- 06:00 AM Bug #22652 (Resolved): client: fails to release to revoking Fc
- 06:00 AM Backport #22688 (Resolved): luminous: client: fails to release to revoking Fc
- 06:00 AM Bug #22910 (Resolved): client: setattr should drop "Fs" rather than "As" for mtime and size
- 05:59 AM Backport #22935 (Resolved): luminous: client: setattr should drop "Fs" rather than "As" for mtime...
- 05:59 AM Bug #22909 (Resolved): client: readdir bug
- 05:59 AM Backport #22936 (Resolved): luminous: client: readdir bug
- 05:59 AM Bug #22886 (Resolved): kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClu...
- 05:58 AM Backport #22966 (Resolved): luminous: kclient: Test failure: test_full_same_file (tasks.cephfs.te...
- 05:58 AM Backport #22969 (Resolved): luminous: mds: session reference leak
- 05:56 AM Backport #23013 (Resolved): luminous: mds: LOCK_SYNC_MIX state makes "getattr" operations extreme...
- 05:55 AM Bug #22993 (Resolved): qa: kcephfs thrash sub-suite does not ignore MON_DOWN
- 05:55 AM Backport #23061 (Resolved): luminous: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
- 05:55 AM Bug #22990 (Resolved): qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set (OS...
- 05:55 AM Backport #23062 (Resolved): luminous: qa: mds-full: ignore "Health check failed: pauserd,pausewr ...
- 05:54 AM Bug #23094 (Resolved): mds: add uptime to status asok command
- 05:54 AM Backport #23150 (Resolved): luminous: mds: add uptime to status asok command
- 05:53 AM Bug #23041 (Resolved): ceph-fuse: clarify -i is not a valid option
- 05:53 AM Backport #23156 (Resolved): luminous: ceph-fuse: clarify -i is not a valid option
- 05:53 AM Bug #23028 (Resolved): client: allow client to use caps that are revoked but not yet returned
- 05:52 AM Backport #23314 (Resolved): luminous: client: allow client to use caps that are revoked but not y...
- 05:52 AM Backport #23355 (Resolved): luminous: client: prevent fallback to remount when dentry_invalidate_...
- 05:51 AM Backport #23475 (Resolved): luminous: ceph-fuse: trim ceph-fuse -V output
- https://github.com/ceph/ceph/pull/21600
- 05:51 AM Backport #23474 (Resolved): luminous: client: allow caller to request that setattr request be syn...
- https://github.com/ceph/ceph/pull/21109
03/27/2018
- 10:50 PM Backport #22862: luminous: cephfs-journal-tool: may got assertion failure due to not shutdown
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20251
merged
- 10:49 PM Backport #22891: luminous: qa: kcephfs lacks many configurations in the fs/multimds suites
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20302
merged
- 10:49 PM Backport #22688: luminous: client: fails to release to revoking Fc
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20342
merged
- 10:48 PM Backport #22935: luminous: client: setattr should drop "Fs" rather than "As" for mtime and size
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20354
merged
- 10:48 PM Backport #22936: luminous: client: readdir bug
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20356
merged
- 10:47 PM Backport #22966: luminous: kclient: Test failure: test_full_same_file (tasks.cephfs.te...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20417
merged
- 10:46 PM Backport #22969: luminous: mds: session reference leak
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20432
merged
- 10:45 PM Backport #23013: luminous: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow whe...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20455
merged
- 10:41 PM Backport #23061: luminous: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20523
merged
- 10:41 PM Backport #23062: luminous: qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20525
merged
- 10:38 PM Backport #23150: luminous: mds: add uptime to status asok command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20626
merged
- 10:37 PM Backport #23156: luminous: ceph-fuse: clarify -i is not a valid option
- Prashant D wrote:
> https://github.com/ceph/ceph/pull/20654
merged
- 10:37 PM Backport #23314: luminous: client: allow client to use caps that are revoked but not yet returned
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20904
merged
- 10:36 PM Backport #23355: luminous: client: prevent fallback to remount when dentry_invalidate_cb is true ...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/20960
merged
- 09:59 PM Bug #23248 (Pending Backport): ceph-fuse: trim ceph-fuse -V output
- 09:59 PM Bug #23293 (Resolved): client: Client::_read returns buffer length on success instead of bytes read
- 09:58 PM Bug #23288 (Resolved): ceph-fuse: Segmentation fault --localize-reads
- 09:58 PM Bug #23291 (Pending Backport): client: add way to sync setattr operations to MDS
- 09:57 PM Bug #23436 (Resolved): Client::_read() always return 0 when reading from inline data
- 01:52 PM Bug #23172 (Resolved): mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous ...
- 01:52 PM Backport #23414 (Resolved): luminous: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition b...
- 01:36 PM Bug #23421 (Need More Info): ceph-fuse: stop ceph-fuse if no root permissions?
- Jos, can you get more detailed debug logs when this happens? It is probably not related to sudo permissions.
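One way to capture them, as a sketch (mount point and log path are illustrative; the flags are ordinary Ceph config overrides):
    # run ceph-fuse in the foreground with verbose client and messenger logging
    ceph-fuse /mnt/cephfs -f --debug-client=20 --debug-ms=1 --log-file=/tmp/ceph-fuse.debug.log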
03/26/2018
- 09:52 AM Bug #23452 (Fix Under Review): mds: assertion in MDSRank::validate_sessions
- https://github.com/ceph/ceph/pull/21040
- 09:51 AM Bug #23380 (Fix Under Review): mds: ceph.dir.rctime follows dir ctime not inode ctime
- https://github.com/ceph/ceph/pull/21039
- 08:43 AM Backport #23014 (In Progress): jewel: mds: LOCK_SYNC_MIX state makes "getattr" operations extreme...
- https://github.com/ceph/ceph/pull/21038
03/23/2018
- 09:08 PM Bug #23452 (Resolved): mds: assertion in MDSRank::validate_sessions
This function is meant to make the MDS more resilient by killing any client sessions that have prealloc_inos that a...
- 01:39 PM Bug #23446 (In Progress): ceph-fuse: getgroups failure causes exception
- Pull request here:
https://github.com/ceph/ceph/pull/21025
- 11:58 AM Bug #23446: ceph-fuse: getgroups failure causes exception
- Ok, draft patch is building in shaman now. It should make ceph-fuse send the error back to the kernel when this occur...
- 10:10 AM Bug #23446: ceph-fuse: getgroups failure causes exception
- Looks like bad error handling. Here's getgroups:...
- 08:24 AM Bug #23446 (Resolved): ceph-fuse: getgroups failure causes exception
- Problem described here:
https://github.com/ceph/ceph-csi/pull/30#issuecomment-375331907...
- 12:28 PM Bug #23448 (Resolved): nfs-ganesha: fails to parse rados URLs with '.' in object name
- I added a ganesha config file to nfs-ganesha RADOS pool with the object name "ganesha.conf". I then added this to the...
03/22/2018
- 07:46 AM Bug #23394: nfs-ganesha: check cache configuration when exporting FSAL_CEPH
- The default should be the safe/efficient configuration and we should make an attempt to either fix it automatically o...
- 07:38 AM Bug #23210 (Fix Under Review): ceph-fuse: exported nfs get "stale file handle" when mds migrating
- https://github.com/ceph/ceph/pull/21003
- 06:13 AM Backport #23414 (In Progress): luminous: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definitio...
- https://github.com/ceph/ceph/pull/21001
- 03:50 AM Bug #23436 (Fix Under Review): Client::_read() always return 0 when reading from inline data
- https://github.com/ceph/ceph/pull/20997
- 03:00 AM Bug #23436 (Resolved): Client::_read() always return 0 when reading from inline data
- /a/pdonnell-2018-03-19_23:30:16-fs-wip-pdonnell-testing-20180317.202121-testing-basic-smithi/2306753/
03/21/2018
- 02:29 PM Bug #23429: File corrupt after writing to cephfs
- When did you create the filesystem? Was the filesystem created before luminous?
Do you know how these files were ...
- 09:38 AM Bug #23429: File corrupt after writing to cephfs
- Patrick Donnelly wrote:
> Can you paste `ceph fs dump` and `ceph osd dump`.
- 09:36 AM Bug #23429: File corrupt after writing to cephfs
- Patrick Donnelly wrote:
> Can you paste `ceph fs dump` and `ceph osd dump`.
dumped fsmap epoch 83
e83
enable_mu...
- 09:36 AM Bug #23429: File corrupt after writing to cephfs
- Zheng Yan wrote:
> Is file size larger than it should be? do you store cephfs metadata on ec pools?
metadata in r...
- 08:53 AM Bug #23429 (Need More Info): File corrupt after writing to cephfs
- Can you paste `ceph fs dump` and `ceph osd dump`.
- 07:50 AM Bug #23429: File corrupt after writing to cephfs
- Is file size larger than it should be? do you store cephfs metadata on ec pools?
- 07:08 AM Bug #23429 (Can't reproduce): File corrupt after writing to cephfs
- We found millions of corrupted files on our CephFS+ec+overwrite cluster. I selected some of them and used diff tools to c...
- 06:53 AM Bug #23425: Kclient: failed to umount
- Zheng Yan wrote:
> this is purely kernel issue. which kernel did you use?
3.10.0-693.el7.x86_64
- 01:01 AM Bug #23425: Kclient: failed to umount
- this is purely kernel issue. which kernel did you use?
- 03:59 AM Feature #4504 (Resolved): mds: trim based on total memory usage
- 01:03 AM Documentation #23427 (Resolved): doc: create doc outlining steps to bring down cluster
03/20/2018
- 05:48 PM Bug #23425 (Closed): Kclient: failed to umount
- my fault, we will file a bz for this.
- 05:46 PM Bug #23425 (Closed): Kclient: failed to umount
- I was trying to do mount on the kernel client, IOs, and umount (automated script). While umounting, the umount command: sudo um...
- 12:49 PM Bug #23210: ceph-fuse: exported nfs get "stale file handle" when mds migrating
- try modifying Client::ll_lookup, ignore may_lookup check for lookup . and lookup ..
- 12:31 PM Bug #23210: ceph-fuse: exported nfs get "stale file handle" when mds migrating
- but with the above patch, "lookup ." on a non-directory inode should work
- 11:56 AM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- Possibly related tracker:
http://tracker.ceph.com/issues/18537
- 11:44 AM Feature #12334: nfs-ganesha: handle client cache pressure in NFS Ganesha FSAL
- This is worth looking into, but I wonder how big of a problem this is once you disable a lot of the ganesha mdcache b...
- 11:35 AM Bug #23291 (In Progress): client: add way to sync setattr operations to MDS
- PR is here:
https://github.com/ceph/ceph/pull/20913
- 11:34 AM Bug #23394: nfs-ganesha: check cache configuration when exporting FSAL_CEPH
- I don't think we should do this. We have settings that we recommend, but not using those doesn't mean that anything i...
- 04:56 AM Bug #23421 (Closed): ceph-fuse: stop ceph-fuse if no root permissions?
- I think it would be a good idea to prevent ceph-fuse from proceeding if there are no appropriate permissions.
I requ...