Activity
From 07/03/2019 to 08/01/2019
08/01/2019
- 05:46 PM Support #40906: Full CephFS causes hang when accessing inode.
- Please confirm that I understand the process so that I can give it a try.
Thanks!
- 05:45 PM Bug #41049: adding ceph secret key to kernel failed: Invalid argument.
- reason: accidentally double base64 encoded
Missing old warning: secret is not valid base64: Invalid argument.
- 05:32 PM Bug #41049 (New): adding ceph secret key to kernel failed: Invalid argument.
- Fresh Nautilus Cluster.
Fresh cephfs.
This is not a common base64 error
mount -t ceph 10.3.2.1:6789:/ /mnt/ -o n...
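For reference, a small Python sketch of the failure mode (illustrative only; the key bytes below are made up): decoding a double-encoded secret once yields base64 text again instead of the raw key, which the kernel then rejects with "Invalid argument".
import base64

key = base64.b64encode(b"\x01" * 16).decode()        # single-encoded secret, as the kernel expects
double = base64.b64encode(key.encode()).decode()     # the same secret accidentally encoded a second time

# Decoding the double-encoded value once only recovers the base64 text, not the raw key bytes.
assert base64.b64decode(double).decode() == key
assert base64.b64decode(key) == b"\x01" * 16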
07/31/2019
- 06:40 PM Feature #40811 (Pending Backport): mds: add command that modifies session metadata
- 06:34 PM Bug #40927 (Pending Backport): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
- 05:37 PM Bug #41034 (Resolved): cephfs-journal-tool: NetHandler create_socket couldn't create socket
- Ubuntu 18.04 with ceph Nautilus repo
Journal tool is broken:
2019-07-31 19:36:56.879 7f57bf308700 -1 NetHandler cre...
- 05:33 PM Bug #40999 (Pending Backport): qa: AssertionError: u'open' != 'stale'
- 05:10 PM Bug #41031 (Resolved): qa: malformed job
- /ceph/teuthology-archive/pdonnell-2019-07-31_00:35:45-fs-wip-pdonnell-testing-20190730.205527-distro-basic-smithi/416...
- 04:59 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
- Please seek help on ceph-users. Provide more information about your cluster and how the error came about.
- 04:00 PM Bug #41026: MDS process crashes on 14.2.2
- ...
- 01:15 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
- MDS processes on Ubuntu 18.04 Nautilus 14.2.2 are crashing, unable to recover
-7> 2019-07-31 13:29:46.888 7fb36a...
- 03:32 PM Bug #39395: ceph: ceph fs auth fails
- merged https://github.com/ceph/ceph/pull/28666
- 09:08 AM Bug #40960: client: failed to drop dn and release caps causing mds stray stacking.
- Some more background on this issue is at
https://tracker.ceph.com/issues/38679#note-9
- 02:49 AM Bug #41006 (Fix Under Review): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before...
07/30/2019
- 10:13 PM Backport #40845: nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29159
merged
- 06:35 PM Bug #39947 (Pending Backport): cephfs-shell: add CI testing with flake8
- 05:07 PM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
- NAB
- 12:09 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Bingo - mds_log_max_segments.
In Luminous, the description for this option is empty:...
- 01:25 PM Bug #41006: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
- It looks like a discontiguous free inode number can trigger the crash.
- 08:56 AM Bug #41006 (Resolved): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
- Running cephfs-data-scan scan_links on a test 14.2.2 cluster I get this assertion:...
- 05:51 AM Feature #5520 (In Progress): osdc: should handle namespaces
- 01:42 AM Backport #41002 (Resolved): nautilus: client: failed to drop dn and release caps causing mds stray...
- https://github.com/ceph/ceph/pull/29478
- 01:41 AM Backport #41001 (Resolved): mimic: client: failed to drop dn and release caps causing mds stray s...
- https://github.com/ceph/ceph/pull/29479
- 01:38 AM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
- https://github.com/ceph/ceph/pull/29830
07/29/2019
- 09:53 PM Bug #40603 (Pending Backport): mds: disallow setting ceph.dir.pin value exceeding max rank id
- 09:49 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
- 09:47 PM Bug #40939 (Pending Backport): mds: map client_caps been inserted by mistake
- 09:46 PM Bug #40960 (Pending Backport): client: failed to drop dn and release caps causing mds stray stack...
- 09:10 PM Bug #37681: qa: power off still resulted in client sending session close
- backport note: also need fix for https://tracker.ceph.com/issues/40999
- 08:09 PM Bug #37681 (Pending Backport): qa: power off still resulted in client sending session close
- 09:09 PM Bug #40999 (Fix Under Review): qa: AssertionError: u'open' != 'stale'
- 09:06 PM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
- ...
- 08:10 PM Bug #40968 (Pending Backport): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
- 06:17 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Konstantin Shalygin wrote:
> ??The MDS tracks opened files and capabilities in the MDS journal. That would explain t...
- 04:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- ??The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata...
- 05:38 PM Cleanup #40992 (Pending Backport): cephfs-shell: Multiple flake8 errors
- 11:54 AM Cleanup #40992: cephfs-shell: Multiple flake8 errors
- Not ignoring E501; instead limiting line length to 100....
- 06:59 AM Cleanup #40992 (Fix Under Review): cephfs-shell: Multiple flake8 errors
- 06:48 AM Cleanup #40992 (Resolved): cephfs-shell: Multiple flake8 errors
- After ignoring the E501 and W503 flake8 errors, the following needs to be fixed:...
07/27/2019
- 05:25 AM Support #40906: Full CephFS causes hang when accessing inode.
- Okay, I'll give it a shot next week on some of my smaller directories.
Just to be sure I understand the process.
...
07/26/2019
- 10:18 PM Bug #40474 (Pending Backport): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 10:18 PM Bug #40474 (Resolved): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
- 10:16 PM Bug #40476 (Pending Backport): cephfs-shell: cd with no args has no effect
- 10:14 PM Bug #40695 (Resolved): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
- 10:12 PM Cleanup #40787 (Resolved): mds: reorg CInode header
- 10:11 PM Bug #40936 (Pending Backport): tools/cephfs: memory leak in cephfs/Resetter.cc
- 10:10 PM Bug #40967 (Pending Backport): qa: race in test_standby_replay_singleton_fail
- 07:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
- ...
- 06:36 PM Bug #40965 (Fix Under Review): client: don't report ceph.* xattrs via listxattr
- 06:34 PM Bug #40927 (Fix Under Review): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
- 05:08 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata p...
- 09:37 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- After another bunch of simulations I called `cache drop` on the MDS via the admin socket. Pool usage *198.8MB* -> *2.8MB*.
...
- 06:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- I set the kernel client aside and ran some tests with the Samba VFS - this is a Luminous client.
First, I just copied...
- 03:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- +10MBytes for last ~24H.
Actual pool's usage:
fs_data:
!fs_data_26.07.19.png!
fs_meta:
!fs_meta_26.07.19.png!
- 03:23 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Patrick, this is definitely Luminous 12.2.12. My actual question is why metadata usage (bytes) is growing and new obj...
- 04:06 PM Feature #40986: cephfs qos: implement cephfs qos based on token bucket algorithm
- I think there are two kinds of design:
1. all clients use the same QoS setting, just as the implementation
in this...
- 03:58 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos based on token bucket algorithm
- The basic idea is as follows:
Set QoS info as one of the dir's xattrs;
All clients that can access the same dirs ...
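As a rough illustration of the token-bucket idea only (a generic Python sketch, not the proposed CephFS implementation; the TokenBucket name and numbers are made up):
import time

class TokenBucket:
    """Allow roughly 'rate' operations per second with bursts up to 'capacity'."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def consume(self, n=1):
        now = time.monotonic()
        # refill proportionally to elapsed time, clamped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True      # request may proceed
        return False         # request should be throttled

bucket = TokenBucket(rate=100, capacity=200)   # e.g. ~100 ops/s with bursts of up to 200
allowed = bucket.consume()
In the proposed scheme, each client enforcing the per-directory QoS would presumably consume tokens along these lines against the settings stored in the directory xattr.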
- 10:04 AM Feature #18537: libcephfs cache invalidation upcalls
- Jeff Layton wrote:
> I looked at this, but I think the real solution to this problem is to just prevent ganesha from...
- 06:15 AM Backport #40445 (In Progress): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client...
- https://github.com/ceph/ceph/pull/29344
- 06:12 AM Backport #40443 (In Progress): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH whe...
- https://github.com/ceph/ceph/pull/29343
- 04:49 AM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
- ACK
- 02:42 AM Support #40906: Full CephFS causes hang when accessing inode.
- After deleting the omap key, use the scrub_path asok command to repair the parent directory.
- 02:06 AM Support #40906: Full CephFS causes hang when accessing inode.
- Is deleting the RADOS object for the inode going to cause more problems with the MDS because they get out of sync?
07/25/2019
- 11:12 PM Feature #40617 (Pending Backport): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
- 11:11 PM Cleanup #40866 (Resolved): mds: reorg Capability header
- 11:09 PM Bug #40836 (Pending Backport): cephfs-shell: flake8 blank line and indentation error
- 11:06 PM Cleanup #40742 (Resolved): mds: reorg CDir header
- 10:18 PM Bug #40968 (Fix Under Review): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
- 10:15 PM Bug #40968 (Resolved): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_...
- /ceph/teuthology-archive/pdonnell-2019-07-25_06:25:06-fs-wip-pdonnell-testing-20190725.023305-distro-basic-smithi/414...
- 09:43 PM Bug #40967 (Fix Under Review): qa: race in test_standby_replay_singleton_fail
- 09:39 PM Bug #40967 (Resolved): qa: race in test_standby_replay_singleton_fail
- ...
- 07:51 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
- The convention with filesystems that have virtual xattrs is to not report them via listxattr(). A lot of archiving to...
- 06:53 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- I see now that you're using 12.2.12? That wouldn't be the cause then I guess. The PR which backports the relevant cha...
- 06:48 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Did you just upgrade your cluster to Nautilus? The data usage stats changed recently so that omaps in the metadata po...
- 09:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
- Found another cluster with this Ceph version. Data usage 10x more, but meta is not much.
Metadata pool in this clust...
- 09:37 AM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
- Can't find a clue: single MDS fs without actual usage, but metadata is only growing (screenshot attached). MDS with ...
- 06:46 PM Bug #40939 (Fix Under Review): mds: map client_caps been inserted by mistake
- 08:52 AM Bug #40939 (Resolved): mds: map client_caps been inserted by mistake
- SnapRealm.h: SnapRealm::remove_cap()
if the map client_caps does not have the key 'client', 'client_caps[client]' will insert key cl...
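The pitfall being described is that C++ std::map::operator[] default-constructs and inserts a missing key. A Python analogy (not the MDS code) shows the same insert-on-lookup behaviour with collections.defaultdict:
from collections import defaultdict

caps = defaultdict(set)          # behaves like std::map::operator[] on lookup
_ = caps["client.4242"]          # intended as a read of a missing key...
assert "client.4242" in caps     # ...but the key has now been inserted with an empty set

plain = {}                       # a plain dict raises instead of inserting
try:
    plain["client.4242"]
except KeyError:
    assert "client.4242" not in plain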
- 06:18 PM Bug #40936 (Fix Under Review): tools/cephfs: memory leak in cephfs/Resetter.cc
- 06:24 AM Bug #40936: tools/cephfs: memory leak in cephfs/Resetter.cc
- function -- Resetter::_write_reset_event(Journaler *journaler)
- 06:22 AM Bug #40936 (Resolved): tools/cephfs: memory leak in cephfs/Resetter.cc
- there is a memory leak in cephfs/Resetter.cc:
LogEvent *le = new EResetJournal;
it forgets to use 'delete' to release ...
- 06:02 PM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
- does that work?
- 03:25 AM Bug #40932: ceph-fuse: rm a dir failed, print "non-empty" error
- use scrub_path asok command to fix it
- 02:10 AM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
- #rm -rf dir1
print "non-empty" error, but "ls dir1" is empty. - 05:53 PM Bug #40877 (Fix Under Review): client: client should return EIO when it's unsafe reqs have been d...
- 05:51 PM Bug #40960 (Fix Under Review): client: failed to drop dn and release caps causing mds stray stack...
- 02:32 PM Bug #40960 (Resolved): client: failed to drop dn and release caps causing mds stray stacking.
- When the client gets a notification from the MDS that a file has been deleted (via
getting the CEPH_CAP_LINK_SHARED cap for an inode wi...
- 02:47 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
- Patrick Donnelly wrote:
> Also need a ticket to allow setting the uid/gid of the subvolume (Feature).
Done. http:...
- 02:43 PM Bug #40171 (Resolved): mds: reset heartbeat during long-running loops in recovery
- 02:43 PM Backport #40222 (Resolved): mimic: mds: reset heartbeat during long-running loops in recovery
- 02:37 PM Backport #40222: mimic: mds: reset heartbeat during long-running loops in recovery
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28918
merged
- 02:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 02:37 PM Backport #40875: mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29187
merged
- 02:41 PM Backport #39685 (Resolved): mimic: ceph-fuse: client hang because its bad session PipeConnection ...
- 02:36 PM Backport #39685: mimic: ceph-fuse: client hang because its bad session PipeConnection to mds
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29200
merged
- 02:40 PM Backport #38099 (Resolved): mimic: mds: remove cache drop admin socket command
- 02:35 PM Backport #38099: mimic: mds: remove cache drop admin socket command
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29210
merged
- 02:38 PM Bug #38270 (Resolved): kcephfs TestClientLimits.test_client_pin fails with "client caps fell belo...
- 02:38 PM Backport #38687 (Resolved): mimic: kcephfs TestClientLimits.test_client_pin fails with "client ca...
- 02:34 PM Backport #38687: mimic: kcephfs TestClientLimits.test_client_pin fails with "client caps fell bel...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29211
merged
- 02:19 PM Feature #40959 (Resolved): mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
- ... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` commands.
- 01:41 PM Documentation #40957 (Resolved): doc: add section to manpage for recover_session= option
- Now that the recover_session= mount option has been added to the testing branch in the kernel, we need to update the ...
- 08:56 AM Backport #40944 (Resolved): nautilus: mgr: failover during in qa testing causes unresponsive clie...
- https://github.com/ceph/ceph/pull/29649
- 03:14 AM Support #40934 (New): can't get connection to external cephfs frmo kubernetes pod
- I need to get clear step by step instructions on how to setup cephfs provisioning to external cephfs cluster without ...
07/24/2019
- 11:19 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
- I'm not sure I have enough context to help here. Also, this question should begin on ceph-users, not here.
- 11:11 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
- New version is not allowed to connect to my external to kubernetes ceph cluster. It is OK if connecting to rdbms, but...
- 06:59 PM Feature #40929 (Resolved): pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configure M...
- Create a new mgr plugin that adds and removes MDSs in response to degraded file systems. The plugin should monitor ch...
- 06:23 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
- Also need a ticket to allow setting the uid/gid of the subvolume (Feature).
- 04:36 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
- ...
- 04:55 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
- Reopening this as I see now that Zheng asked you to create the ticket.
- 04:53 PM Support #40906 (Rejected): Full CephFS causes hang when accessing inode.
- This discussion should move to the ceph-users list: https://lists.ceph.io/postorius/lists/ceph-users.ceph.io/
- 01:48 AM Support #40906: Full CephFS causes hang when accessing inode.
- The log does not contain information about the bad inode.
To use rados to delete inode
1. find inode number o...
- 11:51 AM Bug #24868 (Resolved): doc: Incomplete Kernel mount debugging
- This is fixed by https://github.com/ceph/ceph/pull/28900
- 11:14 AM Bug #40864: cephfs-shell: rmdir doesn't complain when directory is not empty
- After this PR[https://github.com/ceph/ceph/pull/28514]
It does print an error message, but the message is not approp...
- 11:03 AM Bug #40863: cephfs-shell: rmdir with -p attempts to delete non-dir files as well
07/23/2019
- 04:14 PM Support #40906: Full CephFS causes hang when accessing inode.
- The complete logs were 81GB, so I just filtered on the client session. If you need some more complete logs, let me kn...
- 04:10 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
- Our CephFS filled up and we are having trouble operating on a directory that has a file with a bad inode state. We wo...
- 03:57 PM Backport #40440 (In Progress): nautilus: mds: cannot switch mds state from standby-replay to active
- 03:53 PM Backport #40438 (In Progress): nautilus: getattr on snap inode stuck
- 03:51 PM Backport #40437 (In Progress): mimic: getattr on snap inode stuck
- 03:47 PM Backport #40218 (In Progress): luminous: TestMisc.test_evict_client fails
- 03:46 PM Backport #40219 (In Progress): mimic: TestMisc.test_evict_client fails
- 03:45 PM Backport #40163 (In Progress): luminous: mount: key parsing fail when doing a remount
- 03:44 PM Backport #40165 (In Progress): mimic: mount: key parsing fail when doing a remount
- 03:43 PM Backport #40162 (Need More Info): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "par...
- non-trivial
- 03:41 PM Backport #39223 (In Progress): mimic: mds: behind on trimming and "[dentry] was purgeable but no ...
- 03:36 PM Backport #39215 (In Progress): mimic: mds: there is an assertion when calling Beacon::shutdown()
- 03:25 PM Backport #39212 (In Progress): mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
- 03:22 PM Backport #39210 (In Progress): mimic: mds: mds_cap_revoke_eviction_timeout is not used to initial...
- 03:19 PM Backport #38875 (In Progress): mimic: mds: high debug logging with many subtrees is slow
- 03:15 PM Backport #38709 (In Progress): mimic: qa: kclient unmount hangs after file system goes down
- 02:07 PM Bug #40867 (Pending Backport): mgr: failover during qa testing causes unresponsive client warn...
- 01:03 PM Bug #40867: mgr: failover during qa testing causes unresponsive client warnings
- Patrick Donnelly wrote:
> Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
>
> As I said in issue des...
- 12:11 PM Backport #38687 (In Progress): mimic: kcephfs TestClientLimits.test_client_pin fails with "client...
- 12:09 PM Backport #38643 (Need More Info): mimic: fs: "log [WRN] : failed to reconnect caps for missing in...
- not trivial to backport because in master we have:...
- 12:04 PM Backport #38444 (Need More Info): mimic: mds: drop cache does not timeout as expected
- see luminous backport https://github.com/ceph/ceph/pull/27342
- 12:03 PM Backport #38350 (Need More Info): luminous: mds: decoded LogEvent may leak during shutdown
- extensive changes that do not apply cleanly - elevated risk
- 12:03 PM Backport #38349 (Need More Info): mimic: mds: decoded LogEvent may leak during shutdown
- extensive changes that do not apply cleanly - elevated risk
- 12:01 PM Backport #38339: mimic: mds: may leak gather during cache drop
- The luminous backport https://github.com/ceph/ceph/pull/27342 involved cherry-picking a large number of commits. Assi...
- 12:00 PM Backport #38339 (Need More Info): mimic: mds: may leak gather during cache drop
- non-trivial - the fix touches code that appears to have been refactored for nautilus
- 11:57 AM Backport #38099 (In Progress): mimic: mds: remove cache drop admin socket command
- 11:52 AM Backport #37906 (Need More Info): mimic: make cephfs-data-scan reconstruct snaptable
- non-trivial feature backport - assigning to dev lead for disposition
- 11:50 AM Backport #37761 (Need More Info): mimic: mds: deadlock when setting config value via admin socket
- The master PR has two commits. The first commit touches the following files:
src/common/config_obs_mgr.h
src/comm...
- 11:48 AM Backport #37637 (Need More Info): luminous: client: support getfattr ceph.dir.pin extended attribute
- non-trivial feature backport - assigning to dev lead for disposition
- 11:47 AM Backport #37636 (Need More Info): mimic: client: support getfattr ceph.dir.pin extended attribute
- non-trivial feature backport - assigning to dev lead for disposition
- 10:22 AM Backport #39685 (In Progress): mimic: ceph-fuse: client hang because its bad session PipeConnecti...
- 08:28 AM Backport #40900 (Resolved): nautilus: mds: only evict an unresponsive client when another client ...
- https://github.com/ceph/ceph/pull/30031
- 08:28 AM Backport #40899 (Resolved): mimic: mds: only evict an unresponsive client when another client wan...
- https://github.com/ceph/ceph/pull/30239
- 08:25 AM Backport #40898 (Rejected): nautilus: cephfs-shell: Error messages are printed to stdout
- 08:25 AM Backport #40897 (Resolved): nautilus: ceph_volume_client: fs_name must be converted to string bef...
- https://github.com/ceph/ceph/pull/30030
- 08:25 AM Backport #40896 (Rejected): mimic: ceph_volume_client: fs_name must be converted to string before...
- https://github.com/ceph/ceph/pull/30238
- 08:25 AM Backport #40895 (Resolved): nautilus: pybind: Add standard error message and fix print of path as...
- https://github.com/ceph/ceph/pull/30026
- 08:25 AM Backport #40894 (Resolved): nautilus: mds: cleanup truncating inodes when standby replay mds trim...
- https://github.com/ceph/ceph/pull/29591
- 08:24 AM Backport #40892 (Resolved): luminous: mds: cleanup truncating inodes when standby replay mds trim...
- https://github.com/ceph/ceph/pull/31286
- 08:23 AM Backport #40887 (Resolved): nautilus: ceph_volume_client: to_bytes converts NoneType object str
- https://github.com/ceph/ceph/pull/30030
- 08:23 AM Backport #40886 (Rejected): mimic: ceph_volume_client: to_bytes converts NoneType object str
- 02:41 AM Backport #40875 (In Progress): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29187
- 02:29 AM Bug #40877 (Resolved): client: client should return EIO when its unsafe reqs have been dropped w...
- When the session is closed, the client will drop unsafe reqs and cannot confirm whether its request had been complet...
- 02:21 AM Backport #40874 (In Progress): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 02:20 AM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29186
07/22/2019
- 11:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29187
- 11:41 PM Backport #40874 (Resolved): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- https://github.com/ceph/ceph/pull/29186
- 11:34 PM Bug #40775 (Pending Backport): /src/include/xlist.h: 77: FAILED assert(_size == 0)
- 11:30 PM Bug #40477 (Pending Backport): mds: cleanup truncating inodes when standby replay mds trim log se...
- 11:29 PM Bug #40411 (Pending Backport): pybind: Add standard error message and fix print of path as byte o...
- 11:28 PM Bug #40800 (Pending Backport): ceph_volume_client: to_bytes converts NoneType object str
- 11:28 PM Bug #40369 (Pending Backport): ceph_volume_client: fs_name must be converted to string before usi...
- 11:26 PM Bug #40202 (Pending Backport): cephfs-shell: Error messages are printed to stdout
- 11:25 PM Feature #17854 (Pending Backport): mds: only evict an unresponsive client when another client wan...
- 09:10 PM Bug #40784 (Fix Under Review): mds: metadata changes may be lost when MDS is restarted
- 08:01 PM Bug #40873 (Duplicate): qa: expected MDS_CLIENT_LATE_RELEASE in tasks.cephfs.test_client_recovery...
- This needs to be whitelisted:
/ceph/teuthology-archive/pdonnell-2019-07-20_01:58:35-fs-wip-pdonnell-testing-20190719.231...
- 06:19 PM Bug #40867: mgr: failover during qa testing causes unresponsive client warnings
- Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
As I said in the issue description, I'd prefer if we clea...
- 03:26 PM Bug #40867 (Resolved): mgr: failover during qa testing causes unresponsive client warnings
- ...
- 01:48 PM Bug #23262 (Resolved): kclient: nofail option not supported
- 01:48 PM Backport #39233 (Resolved): mimic: kclient: nofail option not supported
- 01:48 PM Bug #38832 (Resolved): mds: fail to resolve snapshot name contains '_'
- 01:48 PM Backport #39472 (Resolved): mimic: mds: fail to resolve snapshot name contains '_'
- 01:47 PM Bug #39645 (Resolved): mds: output lock state in format dump
- 01:47 PM Backport #39669 (Resolved): mimic: mds: output lock state in format dump
- 01:46 PM Feature #39403 (Resolved): pybind: add the lseek() function to pybind of cephfs
- 01:46 PM Backport #39679 (Resolved): mimic: pybind: add the lseek() function to pybind of cephfs
- 01:25 PM Cleanup #40866 (Fix Under Review): mds: reorg Capability header
- 01:21 PM Cleanup #40866 (Resolved): mds: reorg Capability header
- 12:54 PM Backport #39689 (Resolved): mimic: mds: error "No space left on device" when create a large numb...
- 12:22 PM Feature #24463: kclient: add btime support
- Patches merged for v5.3.
- 12:21 PM Feature #24463 (Resolved): kclient: add btime support
- 11:19 AM Backport #40168 (Resolved): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" ...
- 11:18 AM Bug #40211 (Resolved): mds: fix corner case of replaying open sessions
- 11:18 AM Backport #40342 (Resolved): mimic: mds: fix corner case of replaying open sessions
- 11:14 AM Bug #40028 (Resolved): mds: avoid trimming too many log segments after mds failover
- 11:14 AM Backport #40042 (Resolved): mimic: avoid trimming too many log segments after mds failover
- 10:46 AM Bug #40864 (Resolved): cephfs-shell: rmdir doesn't complain when directory is not empty
- Passing rmdir a non-empty directory doesn't remove the directory, as expected. But not printing anything mislea...
- 10:40 AM Bug #40863 (Resolved): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
- Going through do_rmdir in cephfs-shell, I don't see any checks that stop the method from deleting non-directory files...
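As a rough sketch of the kind of guard being asked for (generic Python against the local filesystem via the os module, not cephfs-shell's do_rmdir; rmdir_p is a made-up helper):
import os

def rmdir_p(path):
    # refuse to act on anything that is not a directory
    if not os.path.isdir(path):
        raise NotADirectoryError(path)
    os.rmdir(path)                       # removes only empty directories
    parent = os.path.dirname(path.rstrip('/'))
    while parent and parent != '/':
        try:
            os.rmdir(parent)             # -p behaviour: remove now-empty parents
        except OSError:
            break                        # stop at the first non-empty (or protected) parent
        parent = os.path.dirname(parent)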
- 09:49 AM Backport #40845 (In Progress): nautilus: MDSMonitor: use stringstream instead of dout for mds rep...
- 08:21 AM Backport #40845 (Resolved): nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
- https://github.com/ceph/ceph/pull/29159
- 09:48 AM Backport #40843 (In Progress): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
- 08:20 AM Backport #40843 (Resolved): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
- https://github.com/ceph/ceph/pull/29158
- 09:47 AM Backport #40842 (In Progress): nautilus: ceph-fuse: mount does not support the fallocate()
- 08:20 AM Backport #40842 (Resolved): nautilus: ceph-fuse: mount does not support the fallocate()
- https://github.com/ceph/ceph/pull/29157
- 09:47 AM Backport #40839 (In Progress): nautilus: cephfs-shell: TypeError in poutput
- 08:20 AM Backport #40839 (Resolved): nautilus: cephfs-shell: TypeError in poutput
- https://github.com/ceph/ceph/pull/29156
- 09:20 AM Bug #40861 (Resolved): cephfs-shell: -p doesn't work for rmdir
- ...
- 09:18 AM Bug #40860 (Resolved): cephfs-shell: raises incorrect error when regfiles are passed to be deleted
- Steps to reproduce the bug -...
- 08:23 AM Backport #40858 (Rejected): luminous: ceph_volume_client: python program embedded in test_volume_...
- 08:23 AM Backport #40857 (Resolved): nautilus: ceph_volume_client: python program embedded in test_volume_...
- https://github.com/ceph/ceph/pull/30030
- 08:23 AM Backport #40856 (Rejected): mimic: ceph_volume_client: python program embedded in test_volume_cli...
- 08:23 AM Backport #40855 (Rejected): luminous: test_volume_client: test_put_object_versioned is unreliable
- 08:23 AM Backport #40854 (Resolved): nautilus: test_volume_client: test_put_object_versioned is unreliable
- https://github.com/ceph/ceph/pull/30030
- 08:22 AM Backport #40853 (Resolved): mimic: test_volume_client: test_put_object_versioned is unreliable
- https://github.com/ceph/ceph/pull/30236
- 08:21 AM Backport #40844 (Resolved): mimic: MDSMonitor: use stringstream instead of dout for mds repaired
- https://github.com/ceph/ceph/pull/30235
- 08:20 AM Backport #40841 (Resolved): mimic: ceph-fuse: mount does not support the fallocate()
- https://github.com/ceph/ceph/pull/30228
- 06:18 AM Bug #40836 (Fix Under Review): cephfs-shell: flake8 blank line and indentation error
- 06:06 AM Bug #40836 (Resolved): cephfs-shell: flake8 blank line and indentation error
- Following are the flake8 errors...
07/19/2019
07/18/2019
- 09:05 PM Bug #39405 (Pending Backport): ceph_volume_client: python program embedded in test_volume_client....
- 09:04 PM Bug #39510 (Pending Backport): test_volume_client: test_put_object_versioned is unreliable
- 06:06 PM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
- 06:06 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
- 05:35 PM Bug #40821 (Resolved): osdc: objecter ops output does not have useful time information
- ...
- 01:22 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- kernel data structure for this...
- 07:52 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- "echo 2 > /proc/sys/vm/drop_caches" can release the reference and work around the issue....
- 06:16 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- Analyzing the log more, it seems to be an overflow in ll_ref.
From the log below, it is pretty clear the pattern is 2 _ll...
- 01:00 PM Feature #40811 (Fix Under Review): mds: add command that modifies session metadata
- 07:34 AM Feature #40811 (Resolved): mds: add command that modifies session metadata
- 10:52 AM Feature #40617 (Fix Under Review): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
07/17/2019
- 07:56 PM Backport #40807 (In Progress): luminous: mds: msg weren't destroyed before handle_client_reconnec...
- 07:53 PM Backport #40807 (Resolved): luminous: mds: msg weren't destroyed before handle_client_reconnect r...
- https://github.com/ceph/ceph/pull/29097
- 07:45 PM Backport #39233: mimic: kclient: nofail option not supported
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28090
merged
- 07:44 PM Backport #39472: mimic: mds: fail to resolve snapshot name contains '_'
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28186
merged
- 07:44 PM Backport #39669: mimic: mds: output lock state in format dump
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28274
merged
- 07:44 PM Backport #39679: mimic: pybind: add the lseek() function to pybind of cephfs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28337
merged
- 07:43 PM Backport #39689: mimic: mds: error "No space left on device" when create a large number of dirs
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28381
merged
- 07:43 PM Backport #40168: mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nano...
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28501
merged
- 07:41 PM Backport #40342: mimic: mds: fix corner case of replaying open sessions
- Patrick Donnelly wrote:
> https://github.com/ceph/ceph/pull/28579
merged
- 07:40 PM Backport #40042: mimic: avoid trimming too many log segments after mds failover
- Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28650
merged
- 10:40 AM Bug #40800 (Fix Under Review): ceph_volume_client: to_bytes converts NoneType object str
- 10:04 AM Bug #40800 (Resolved): ceph_volume_client: to_bytes converts NoneType object str
- Precisely, it happens here - https://github.com/ceph/ceph/blob/master/src/pybind/ceph_volume_client.py#L29-L32
IMO...
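For context, a hedged sketch of the sort of guard under discussion (not the actual ceph_volume_client.py code): a to_bytes helper that handles None explicitly instead of letting str(None) become b'None':
def to_bytes(param):
    # illustrative only; the real helper in ceph_volume_client.py may differ
    if param is None:
        return b''
    if isinstance(param, bytes):
        return param
    return str(param).encode('utf-8')

assert to_bytes(None) == b''
assert to_bytes("group_a") == b"group_a"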
- 10:12 AM Backport #40796 (In Progress): nautilus: mgr / volumes: support asynchronous subvolume deletes
- 02:39 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- The affected inode is a symlink...
07/16/2019
- 08:41 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
- https://github.com/ceph/ceph/pull/29079
- 08:40 PM Feature #40036 (Pending Backport): mgr / volumes: support asynchronous subvolume deletes
- 12:57 PM Cleanup #40787 (Fix Under Review): mds: reorg CInode header
- 12:35 PM Cleanup #40787 (Resolved): mds: reorg CInode header
- 11:25 AM Bug #40695 (Fix Under Review): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
- 07:35 AM Bug #40784 (Resolved): mds: metadata changes may be lost when MDS is restarted
- Assume a client copied some object to another location in Ceph. When the early reply was received, the cp command would ...
- 04:07 AM Documentation #39620 (Resolved): doc: MDS and metadata pool hardware requirements/recommendations
07/15/2019
- 02:03 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- 01:58 PM Cleanup #40694 (Fix Under Review): mds: move MDSDaemon conf change handling to MDSRank finisher
- 01:49 PM Bug #40613 (Need More Info): kclient: .handle_message_footer got old message 1 <= 648 0x558ceadea...
- Waiting to see if this happens again.
- 01:46 PM Bug #40476 (Fix Under Review): cephfs-shell: cd with no args has no effect
- 01:45 PM Bug #40603 (Fix Under Review): mds: disallow setting ceph.dir.pin value exceeding max rank id
07/13/2019
- 03:41 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
- hmm, interesting.
- 2147483646 = 0x80000002, it is more like a memory corruption?
- 03:13 PM Bug #40775 (Resolved): /src/include/xlist.h: 77: FAILED assert(_size == 0)
- It seems like we handle the inode ref wrongly? the number looks like overflow.
-12> 2019-07-13 00:49:44.582 7...
- 03:12 AM Bug #40746: client: removing dir reports "not empty" issue due to client side filled wrong dir of...
- I don't see any problem. The last parameter of fill_dirent() should be the offset for the next readdir. With your change, offset o...
- 12:17 AM Bug #40472 (Pending Backport): MDSMonitor: use stringstream instead of dout for mds repaired
- 12:16 AM Bug #40489 (Pending Backport): cephfs-shell: name 'files' is not defined error in do_rm()
- 12:15 AM Bug #40615 (Pending Backport): ceph-fuse: mount does not support the fallocate()
- 12:14 AM Bug #40679 (Pending Backport): cephfs-shell: TypeError in poutput
07/12/2019
- 11:16 PM Bug #40773 (Resolved): qa: 'ceph osd require-osd-release nautilus' fails
- ...
- 05:27 PM Bug #40746 (Fix Under Review): client: removing dir reports "not empty" issue due to client side ...
- 08:30 AM Bug #40746 (Resolved): client: removing dir reports "not empty" issue due to client side filled w...
- Recently, while using nfs-ganesha+cephfs, we found some "directory not empty" errors when removing an
existing directory....
07/11/2019
- 07:50 PM Cleanup #40742 (Resolved): mds: reorg CDir header
- 07:30 PM Feature #40401 (Fix Under Review): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume a...
- 02:53 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Zheng Yan wrote:
> is cephfs exported to nfs
No, it is ceph-fuse (13.2.5).
It seems like the customer has ~10 nod...
- 01:35 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Is cephfs exported to NFS?
- 03:55 AM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- I think we need to try to reclaim the caps in this case. I am seeing num_stray accumulate in my env due to the inodes ...
- 03:44 AM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- Not a big issue even if the inode comes to have caps.
- 01:09 PM Backport #40343 (Resolved): luminous: mds: fix corner case of replaying open sessions
07/10/2019
- 07:17 PM Feature #40401 (In Progress): mgr/volumes: allow/deny r/rw access of auth IDs to subvolume and su...
- 02:53 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- ...
- 02:52 PM Bug #38679: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
- It seems this is not a valid fix, as this is not the only path that hits the error.
We hit this in another way, not due to fr...
07/09/2019
- 08:01 PM Cleanup #40694 (In Progress): mds: move MDSDaemon conf change handling to MDSRank finisher
- 05:45 PM Feature #40563 (Fix Under Review): client: query a single cache information, for example print a ...
- Jos Collin wrote:
> @Patrick,
>
> Seems wenpengLi is already working on this?
> https://github.com/ceph/ceph/pul...
- 04:22 AM Feature #40563: client: query a single cache information, for example print a single inode cache ...
- @Patrick,
Seems wenpengLi is already working on this?
https://github.com/ceph/ceph/pull/28853
07/08/2019
- 05:07 PM Bug #40695 (Resolved): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
- See also: https://lists.ceph.io/hyperkitty/list/dev@ceph.io/thread/6QL7XW72O4NJBZGQEPX6SOBXSTUZOZOZ/
- 05:05 PM Cleanup #40694 (Resolved): mds: move MDSDaemon conf change handling to MDSRank finisher
- In order to avoid checking the mds_lock state.
- 04:24 PM Cleanup #40578 (In Progress): mds: reorganize class members in headers to follow coding guidelines
- 12:53 PM Backport #40222 (In Progress): mimic: mds: reset heartbeat during long-running loops in recovery
- 12:44 PM Backport #40041 (Resolved): luminous: avoid trimming too many log segments after mds failover
- 12:43 PM Backport #40221 (Resolved): luminous: mds: reset heartbeat during long-running loops in recovery
- 10:47 AM Documentation #40689 (Resolved): mgr/volumes: document mgr fs volumes CLI
- Document ceph-mgr FS volumes CLI
07/07/2019
- 05:17 PM Feature #40681 (Fix Under Review): mds: show total number of opened files beneath a directory
07/06/2019
- 07:30 AM Feature #40681 (Rejected): mds: show total number of opened files beneath a directory
- In our online clusters, occasionally there exist some clients that open massive numbers of files/dirs under a directory. So, w...
07/05/2019
- 04:39 PM Bug #40679 (Fix Under Review): cephfs-shell: TypeError in poutput
- 04:14 PM Bug #40679 (Resolved): cephfs-shell: TypeError in poutput
- Recent changes in the signature of the poutput method from the cmd2 module cause the following error....
07/03/2019
- 06:20 PM Bug #40611 (Rejected): can I upload missing rpm package from my build to: https://download.ceph....
- Please repost to ceph-users or the dev list.
- 06:08 PM Feature #40285: mds: support hierarchical layout transformations on files
- Patrick Donnelly wrote:
> The main goal of this feature is to support moving whole trees to cheaper storage hardware...
- 04:46 AM Bug #40584: kernel build failure in kernel_untar_build.sh
- Patrick Donnelly wrote:
> Perhaps related to a new distro being used with luminous builds?
yeah, probably. this is...
- 04:15 AM Bug #40584: kernel build failure in kernel_untar_build.sh
- Perhaps related to a new distro being used with luminous builds?
- 04:14 AM Bug #40582 (Rejected): cephfs-journal-tool: Error 22 ((22) Invalid argument)
- Please seek help on the ceph-users mailing list.