Activity

From 07/18/2019 to 08/16/2019

08/16/2019

09:57 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
https://github.com/ceph/ceph/pull/29715 Patrick Donnelly
07:37 PM Bug #40773: qa: 'ceph osd require-osd-release nautilus' fails
http://pulpito.ceph.com/pdonnell-2019-08-15_13:19:31-fs-wip-pdonnell-testing-20190814.222632-distro-basic-smithi/
...
Patrick Donnelly
07:16 PM Documentation #41316 (Fix Under Review): doc: update documentation for LazyIO
Patrick Donnelly
03:44 PM Documentation #41316 (Resolved): doc: update documentation for LazyIO
Update the documentation with usage info about the LazyIO methods: lazyio_propagate() and lazyio_synchronize(). Also ... Sidharth Anupkrishnan
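For context, a minimal sketch of how the two calls named in this ticket might be exercised from the cephfs Python binding; the lazyio(), lazyio_propagate() and lazyio_synchronize() method names mirror the libcephfs calls mentioned above and are an assumption here, as are the paths and sizes.

```python
# Sketch only: assumes the cephfs pybind exposes lazyio(), lazyio_propagate()
# and lazyio_synchronize() mirroring the libcephfs calls named in the ticket.
import cephfs

fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()

fd = fs.open('/shared/data.bin', 'w+', 0o644)
fs.lazyio(fd, 1)                       # relax strict cache coherence for this fd

buf = b'hello from client A'
fs.write(fd, buf, 0)
fs.lazyio_propagate(fd, 0, len(buf))   # flush this client's buffered writes to the cluster

# A reading client would call lazyio_synchronize() first so propagated writes
# (and the updated file size) become visible before the read:
fs.lazyio_synchronize(fd, 0, len(buf))
print(fs.read(fd, 0, len(buf)))

fs.close(fd)
fs.unmount()
```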
06:52 PM Feature #41302: mds: add ephemeral random and distributed export pins
Sidharth, I've discussed this with Doug and we'll be assigning this to you.
Sidharth Anupkrishnan wrote:
> Nice!...
Patrick Donnelly
06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
Nice!
I have a doubt regarding how we could use consistent hashing for the 2nd case: "export_ephemeral_random" pinni...
Sidharth Anupkrishnan
04:04 PM Feature #41302: mds: add ephemeral random and distributed export pins
Here are some scripts shared by Dan from CERN that can be used to manually test random subtree pinning: https://githu... Patrick Donnelly
05:38 PM Support #40906: Full CephFS causes hang when accessing inode.
I've sent an e-mail to Zheng with a link to download the logs due to sensitive info. The client requested the file 0.... Robert LeBlanc
05:33 PM Bug #41319 (Can't reproduce): ceph.in: pool creation fails with "AttributeError: 'str' object has...
... Patrick Donnelly
04:42 PM Bug #41192 (In Progress): mds: atime not being updated persistently
I've dropped the PR for now, as Zheng pointed out that atime is not actually tied to Fr caps after all, but rather to... Jeff Layton
12:52 PM Bug #40298 (Resolved): cephfs-shell: 'rmdir *' does not remove all directories
The issue is resolved in this PR https://github.com/ceph/ceph/commit/4c968b1f30faab9f9013dee95043ccf5f38f5d20. Varsha Rao
11:41 AM Feature #41311 (Resolved): deprecate CephFS inline_data support
I sent a proposal to the various ceph mailing lists to deprecate inline_data support for Octopus. At this point, we m... Jeff Layton
10:51 AM Bug #41310 (Resolved): client: lazyio synchronize does not get file size
LazyIO synchronize fails to make the writes propagated by other clients/fds visible to the current f... Sidharth Anupkrishnan
08:08 AM Backport #37761 (In Progress): mimic: mds: deadlock when setting config value via admin socket
Nathan Cutler

08/15/2019

06:25 PM Feature #41302: mds: add ephemeral random and distributed export pins
It's worth noting that the only difference between the two options is that export_ephemeral_distributed is not hierar... Patrick Donnelly
06:15 PM Feature #41302 (Resolved): mds: add ephemeral random and distributed export pins
Background: export pins [1] are an effective way to distribute metadata load for large workloads without the metadata... Patrick Donnelly
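The export pins referenced as background here are set today through the ceph.dir.pin virtual xattr; below is a minimal sketch of the manual mechanism from a client mount (paths and ranks are illustrative), which the proposed ephemeral pins would automate.

```python
# Manually pin subtrees of a mounted CephFS to MDS ranks via the ceph.dir.pin
# xattr (-1 clears the pin). Mount path and ranks are illustrative.
import os

def pin_subtree(path, rank):
    os.setxattr(path, 'ceph.dir.pin', str(rank).encode())

pin_subtree('/mnt/cephfs/projects/a', 0)   # subtree served by rank 0
pin_subtree('/mnt/cephfs/projects/b', 1)   # subtree served by rank 1
pin_subtree('/mnt/cephfs/projects/c', -1)  # drop the explicit pin
```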
04:22 PM Support #40906: Full CephFS causes hang when accessing inode.
It seems that after some time (hour, days, weeks) things get fixed, but it sure would be nice to know how to get it i... Robert LeBlanc
01:46 AM Support #40906: Full CephFS causes hang when accessing inode.
please provide logs of both ceph-fuse and mds during accessing the bad file. Zheng Yan
01:55 PM Bug #41242 (Fix Under Review): mds: re-introduce mds_log_max_expiring to control expiring concurr...
Patrick Donnelly
09:50 AM Bug #40939: mds: map client_caps been inserted by mistake
keep the script happy (alternatively, we could delete #41098) Nathan Cutler
09:09 AM Backport #41283 (Rejected): nautilus: cephfs-shell: No error message is printed on ls of invalid ...
Nathan Cutler
09:08 AM Backport #41276 (Resolved): nautilus: qa: malformed job
https://github.com/ceph/ceph/pull/30038 Nathan Cutler
09:07 AM Backport #41269 (Resolved): nautilus: cephfs-shell: Convert files path type from string to bytes
https://github.com/ceph/ceph/pull/30057 Nathan Cutler
09:07 AM Backport #41268 (Rejected): nautilus: cephfs-shell: onecmd throws TypeError
Nathan Cutler
07:31 AM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
I think the first time I used the standard mimic workflow of @mds fail@ and once all MDSs are stopped, @fs remove@. T... Janek Bevendorff

08/14/2019

10:23 PM Bug #41031 (Pending Backport): qa: malformed job
Patrick Donnelly
10:06 PM Bug #40430 (Pending Backport): cephfs-shell: No error message is printed on ls of invalid directo...
Patrick Donnelly
10:04 PM Bug #41164 (Pending Backport): cephfs-shell: onecmd throws TypeError
Patrick Donnelly
10:03 PM Bug #41163 (Pending Backport): cephfs-shell: Convert files path type from string to bytes
Patrick Donnelly
09:24 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
> [14:11:31] <@gregsfortytwo> batrick: looking at tracker.ceph.com/issues/41228 and it's got a lot going on but part... Greg Farnum
09:10 PM Bug #41228: mon: deleting a CephFS and its pools causes MONs to crash
Can you include exactly what commands you ran? Did you still have clients mounted while deleting the FS? Greg Farnum
04:22 PM Bug #41192 (Fix Under Review): mds: atime not being updated persistently
Patrick Donnelly
04:18 PM Feature #41220: mgr/volumes: add test case for blacklisted clients
Related: test ceph-mgr getting blacklisted. It should recover somehow (probably close the libcephfs handle and get a ... Patrick Donnelly
04:17 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
PR here: https://github.com/ceph/ceph/pull/29642
It occurs to me though that now that we have a way to get to the ...
Jeff Layton
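For reference, Ceph keyring files are INI-style text, so the secret for an entity can be read without any Ceph libraries; a rough sketch of the file format under discussion (the path and entity name are examples, and the real mount.ceph work happens in C):

```python
# Rough sketch of the keyring file format only: pull the base64 secret for an
# entity out of an INI-style Ceph keyring. Path and entity name are examples.
import configparser

def secret_from_keyring(path, entity='client.admin'):
    cp = configparser.ConfigParser(interpolation=None)
    cp.read(path)
    return cp[entity]['key']

print(secret_from_keyring('/etc/ceph/ceph.client.admin.keyring'))
```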
04:14 PM Feature #16656 (Fix Under Review): mount.ceph: enable consumption of ceph keyring files
Patrick Donnelly
01:26 PM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
https://github.com/ceph/ceph/pull/29664 Venky Shankar
12:08 PM Feature #41209: mds: create a configurable snapshot limit
Zheng Yan wrote:
> Milind Changire wrote:
> > Thanks for the hint Zheng.
> >
> > How could the MDS return the st...
Milind Changire
11:09 AM Feature #41209: mds: create a configurable snapshot limit
Milind Changire wrote:
> Thanks for the hint Zheng.
>
> How could the MDS return the status "too many snaps" to t...
Zheng Yan
10:43 AM Feature #41209: mds: create a configurable snapshot limit
Thanks for the hint Zheng.
How could the MDS return the status "too many snaps" to the caller ?
There's no error ...
Milind Changire
08:07 AM Bug #41242 (Closed): mds: re-introduce mds_log_max_expiring to control expiring concurrency manually
In some cases, a huge number of MDS segments could be expired concurrently, which might bring very heavy load to OSDs and we c... Zhi Zhang
04:22 AM Backport #40944 (In Progress): nautilus: mgr: failover during in qa testing causes unresponsive c...
https://github.com/ceph/ceph/pull/29649 Prashant D
02:12 AM Support #40906: Full CephFS causes hang when accessing inode.
Okay, we had another data corruption incident, so I took some time to try looking deeper into the problem. I did some... Robert LeBlanc

08/13/2019

08:01 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
I tried again, this time with a replicated pool and just one MDS. I think it's too early to draw definitive conclusio... Janek Bevendorff
02:54 PM Bug #41140: mds: trim cache more regularly
I believe this problem may be particularly severe when the main data pool is an EC pool. I am trying the same thing w... Janek Bevendorff
01:17 PM Bug #41228 (Resolved): mon: deleting a CephFS and its pools causes MONs to crash
Disclaimer: I am not entirely sure if this is strictly related to CephFS or a general problem when deleting pools wit... Janek Bevendorff
11:14 AM Bug #41133 (In Progress): qa/tasks: update thrasher design
Jos Collin
06:46 AM Feature #41220 (New): mgr/volumes: add test case for blacklisted clients
see qa/tasks/cephfs/test_volumes.py Venky Shankar
06:44 AM Bug #41218: mgr/volumes: retry spawning purge threads on failure
thanks -- those were cached in my browser :P Venky Shankar
05:16 AM Bug #41218 (Resolved): mgr/volumes: retry spawning purge threads on failure
seen here: http://qa-proxy.ceph.com/teuthology/pdonnell-2019-08-07_15:57:31-fs-wip-pdonnell-testing-20190807.132723-d... Venky Shankar
05:18 AM Bug #41219 (Resolved): mgr/volumes: send purge thread (and other) health warnings to `ceph status`
so as to easily identify issues rather than scanning log files. Also, log any critical error(s) in cluster log. Venky Shankar
01:34 AM Feature #41209: mds: create a configurable snapshot limit
mds.0 has snaptable, which contains information about all snapshots. Each mds has a snapclient, which also caches sna... Zheng Yan

08/12/2019

09:00 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
Jeff Layton wrote:
> That's certainly another possibility. I'm not sure it's any easier though.
>
> We'd have to ...
Patrick Donnelly
06:18 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
That's certainly another possibility. I'm not sure it's any easier though.
We'd have to scrape and parse the outpu...
Jeff Layton
06:05 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
Jeff Layton wrote:
> My current thinking is to link in libceph-common, create a context and fetch the keys using the...
Patrick Donnelly
05:37 PM Feature #16656: mount.ceph: enable consumption of ceph keyring files
My current thinking is to link in libceph-common, create a context and fetch the keys using the same C++ routines tha... Jeff Layton
03:49 PM Feature #16656 (In Progress): mount.ceph: enable consumption of ceph keyring files
Patrick Donnelly
03:46 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
Sorry Jan, snapshots are not stable in Luminous and we don't spend time looking at snapshot related failures for that... Patrick Donnelly
03:42 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
Each file will have an object in the default data pool (the data pool used at file system creation time) with an exte... Patrick Donnelly
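A small sketch of inspecting that per-file object with the rados Python binding; the pool name and inode number are placeholders, and the 'parent' xattr holds the encoded backtrace.

```python
# Sketch: every file leaves its first object, <ino-in-hex>.00000000, in the
# default data pool; its 'parent' xattr carries the encoded backtrace.
# Pool name and inode number below are placeholders.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('cephfs_data')        # default data pool (placeholder)
    obj = '{:x}.00000000'.format(0x10000000000)      # inode number (placeholder)
    backtrace = ioctx.get_xattr(obj, 'parent')       # binary-encoded backtrace
    print('%d bytes of encoded backtrace' % len(backtrace))
    ioctx.close()
finally:
    cluster.shutdown()
```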
01:35 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
I had the same issue before our MDS and Mons died.
The journal was producing 2 files a few TB big and the metadata pool ...
Anonymous
01:34 PM Bug #41204: CephFS pool usage 3x above expected value and sparse journal dumps
referenced: https://tracker.ceph.com/issues/41026 Anonymous
01:28 PM Bug #41204 (New): CephFS pool usage 3x above expected value and sparse journal dumps
I am in the process of copying about 230 million small and medium-sized files to a CephFS and I have three active MDS... Janek Bevendorff
02:03 PM Backport #41098: luminous: mds: map client_caps been inserted by mistake
(snapshots are not supported/stable in luminous) Patrick Donnelly
01:43 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
Patrick Donnelly
01:59 PM Backport #40162: mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
Backport PR here:
https://github.com/ceph/ceph/pull/29609
Jeff Layton
12:59 PM Backport #40162 (In Progress): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "parent...
Jeff Layton
01:56 PM Feature #41209 (Resolved): mds: create a configurable snapshot limit
Add a new config option that imposes a limit on the number of snapshots in a directory. Zheng has found in the past t... Patrick Donnelly
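Snapshots of a directory are taken by mkdir under its .snap subtree, so the proposed option would cap how many entries may accumulate there; a toy client-side illustration follows (the limit value and paths are arbitrary, and real enforcement would live in the MDS).

```python
# Toy illustration of the proposed limit. Snapshots are taken by mkdir under a
# directory's .snap subtree; the MDS option would cap how many may exist.
# The limit value and paths here are arbitrary.
import os

SNAP_LIMIT = 100  # stand-in for the proposed MDS config option

def create_snapshot(dirpath, name):
    snapdir = os.path.join(dirpath, '.snap')
    if len(os.listdir(snapdir)) >= SNAP_LIMIT:
        raise OSError('too many snaps in %s' % dirpath)
    os.mkdir(os.path.join(snapdir, name))

create_snapshot('/mnt/cephfs/projects/a', 'daily-2019-08-12')
```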
01:31 PM Bug #40821 (In Progress): osdc: objecter ops output does not have useful time information
Varsha Rao
12:29 PM Bug #41192: mds: atime not being updated persistently
Now that I've done a bit more investigation, I think there are actually two parts to this bug:
1) the MDS only upd...
Jeff Layton
06:54 AM Backport #40894 (In Progress): nautilus: mds: cleanup truncating inodes when standby replay mds t...
https://github.com/ceph/ceph/pull/29591 Prashant D
05:32 AM Cleanup #41185 (Fix Under Review): mds: reorg FSMapUser header
Varsha Rao

08/09/2019

06:03 PM Bug #41192: mds: atime not being updated persistently
Tracepoints from adding and removing caps for the inode:... Jeff Layton
05:26 PM Bug #41192: mds: atime not being updated persistently
...maybe Fw too? Jeff Layton
05:16 PM Bug #41192 (Won't Fix): mds: atime not being updated persistently
xfstest generic/192 fails with kcephfs. It basically:
mounts the fs
creates a file and records the atime
waits a...
Jeff Layton
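Roughly, the shape of the failing check (paths and the sleep interval are illustrative, not taken from generic/192 itself):

```python
# Rough shape of the failing check: after a read, atime should move forward
# and stay updated. Path and sleep interval are illustrative.
import os, time

path = '/mnt/cephfs/atime-test'
with open(path, 'w') as f:
    f.write('x')

before = os.stat(path).st_atime
time.sleep(30)                       # give relatime room to allow an update

with open(path) as f:
    f.read()

after = os.stat(path).st_atime
assert after > before, 'atime was not updated'
```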
02:18 PM Bug #41187 (Rejected): src/mds/Server.cc: 958: FAILED assert(in->snaprealm)
We saw this on a cluster where two out of three mds servers needed to be rebooted. After the reboot both mds dumped c... Jan Fajerski
11:03 AM Cleanup #41185 (Resolved): mds: reorg FSMapUser header
Varsha Rao
10:16 AM Feature #41182 (Resolved): mgr/volumes: add `fs subvolume extend/shrink` commands
To extend and shrink a subvolume, set the desired quota.
And during shrink, maybe error out if the desired shrunk size i...
Ramana Raja
10:02 AM Cleanup #41181 (Fix Under Review): mds: reorg FSMap header
Varsha Rao
09:55 AM Cleanup #41181 (Resolved): mds: reorg FSMap header
Varsha Rao
09:12 AM Bug #41069 (Closed): nautilus: test_subvolume_group_create_with_desired_mode fails with "Assertio...
The test passed in a more recent nautilus test run that included the mgr/volumes backport PR https://github.com/ceph... Ramana Raja
08:04 AM Cleanup #41178 (Fix Under Review): mds: reorg DamageTable header
Varsha Rao
07:54 AM Cleanup #41178 (Resolved): mds: reorg DamageTable header
Varsha Rao
06:12 AM Bug #39395 (Resolved): ceph: ceph fs auth fails
Varsha Rao

08/08/2019

01:49 PM Bug #41164 (Fix Under Review): cephfs-shell: onecmd throws TypeError
Varsha Rao
01:41 PM Bug #41164 (Resolved): cephfs-shell: onecmd throws TypeError
cmd2 (0.9.15)
Python 3.7.4
Fedora 30
Due to changes in the recent cmd2 module, on any command the following error is ...
Varsha Rao
01:17 PM Bug #41163 (Fix Under Review): cephfs-shell: Convert files path type from string to bytes
Varsha Rao
01:09 PM Bug #41163 (Resolved): cephfs-shell: Convert files path type from string to bytes
Varsha Rao
09:36 AM Bug #41140: mds: trim cache more regularly
I have the following settings now, which seem to work okay-ish:... Janek Bevendorff
04:56 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
I forgot to say: the tool functions work, the warning/error doesn't affect its functionality, but it's confusing if you... Anonymous
04:55 AM Bug #41034: cephfs-journal-tool: NetHandler create_socket couldn't create socket
The cephfs-journal-tool, but I also see this with cephfs-data-scan.
But I made a mistake, it's Ubuntu 16.04, on the...
Anonymous

08/07/2019

09:48 PM Bug #41141 (Fix Under Review): mds: recall capabilities more regularly when under cache pressure
Patrick Donnelly
09:47 PM Bug #41140 (Fix Under Review): mds: trim cache more regularly
Patrick Donnelly
04:53 PM Bug #41140: mds: trim cache more regularly
We did much the same thing in the OSD. Previously we trimmed in a single thread at regular intervals, but now we tri... Mark Nelson
09:39 AM Bug #41140: mds: trim cache more regularly
Janek Bevendorff wrote:
> This may be obvious, but to put the whole thing into context: this cache trimming issue ca...
Dan van der Ster
09:29 AM Bug #41140: mds: trim cache more regularly
This may be obvious, but to put the whole thing into context: this cache trimming issue can make a CephFS permanently... Janek Bevendorff
09:07 PM Bug #41034 (Need More Info): cephfs-journal-tool: NetHandler create_socket couldn't create socket
Can you please elaborate on this, which journal tool are you talking about?
If possible, could you provide steps to...
Neha Ojha
08:03 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
Patrick Donnelly
08:02 PM Feature #40617 (Resolved): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
08:02 PM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
Patrick Donnelly
08:02 PM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
02:42 PM Bug #41144: mount.ceph: doesn't accept "strictatime"
Changing subject. "nostrictatime" seems to be intercepted by /bin/mount, so the mount helper doesn't need to handle it. Jeff Layton
07:37 AM Bug #41148 (Fix Under Review): client: _readdir_cache_cb() may use the readdir_cache already clear
Zheng Yan
07:31 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
segmentation fault: inode=0x0
huanwen ren
07:25 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
huanwen ren wrote:
> Zheng Yan wrote:
> > I think getattr does not affect parent directory inode's completeness
> ...
huanwen ren
07:22 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
Zheng Yan wrote:
> I think getattr does not affect parent directory inode's completeness
From the log, there is a...
huanwen ren
07:00 AM Bug #41148: client: _readdir_cache_cb() may use the readdir_cache already clear
I think getattr does not affect parent directory inode's completeness Zheng Yan
06:56 AM Bug #41148 (Resolved): client: _readdir_cache_cb() may use the readdir_cache already clear
Calling function A means to get dir information from the cache, but in the while loop,
the contents of readdir_cach...
huanwen ren
04:55 AM Bug #41147: mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn->first)
this is temporarily fixed by wiping session table Anonymous
04:34 AM Bug #41147 (Duplicate): mds: crash loop - Server.cc 6835: FAILED ceph_assert(in->first <= straydn...
After creating a new FS and running it for 2 days, my MDS is in a crash loop. I didn't try anything yet so far as to... Anonymous

08/06/2019

10:36 PM Bug #41140: mds: trim cache more regularly
Dan van der Ster wrote:
> FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is eno...
Patrick Donnelly
05:01 PM Bug #41140: mds: trim cache more regularly
FWIW i can trigger the same problems here without creates. `ls -lR` of a large tree is enough, and increasing the thr... Dan van der Ster
04:42 PM Bug #41140 (Resolved): mds: trim cache more regularly
Under -create- workloads that result in the acquisition of a lot of capabilities, the MDS can't trim the cache fast e... Patrick Donnelly
07:56 PM Bug #41144 (Resolved): mount.ceph: doesn't accept "strictatime"
The cephfs mount helper doesn't support either strictatime or nostrictatime. It should intercept those options and se... Jeff Layton
04:46 PM Bug #41141 (Resolved): mds: recall capabilities more regularly when under cache pressure
If a client is doing a large parallel create workload, the MDS may not recall capabilities fast enough and the client... Patrick Donnelly
02:46 AM Bug #41133 (Closed): qa/tasks: update thrasher design
* Make the Thrasher class abstract by adding _do_thrash abstract function.
* Change OSDThrasher, RBDMirrorThrasher, ...
Jos Collin
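A minimal sketch of the shape described in this ticket; everything except the Thrasher, _do_thrash, OSDThrasher and RBDMirrorThrasher names is illustrative.

```python
# Minimal sketch of the proposed refactor: an abstract Thrasher base class with
# _do_thrash() that each concrete thrasher overrides. Only the class and
# _do_thrash names come from the ticket; the rest is illustrative.
from abc import ABC, abstractmethod

class Thrasher(ABC):
    def __init__(self):
        self.exception = None

    @abstractmethod
    def _do_thrash(self):
        """Subclass-specific thrashing loop."""

    def do_thrash(self):
        try:
            self._do_thrash()
        except Exception as e:
            self.exception = e      # surface failures to the caller/teuthology

class OSDThrasher(Thrasher):
    def _do_thrash(self):
        pass  # e.g. mark OSDs down/out and revive them

class RBDMirrorThrasher(Thrasher):
    def _do_thrash(self):
        pass  # e.g. restart rbd-mirror daemons
```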
02:45 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
Zhi Zhang

08/05/2019

11:09 PM Backport #41129 (Resolved): mimic: qa: power off still resulted in client sending session close
https://github.com/ceph/ceph/pull/30233 Patrick Donnelly
11:09 PM Backport #41128 (Resolved): nautilus: qa: power off still resulted in client sending session close
https://github.com/ceph/ceph/pull/29983 Patrick Donnelly
11:07 PM Backport #41118 (Rejected): nautilus: cephfs-shell: add CI testing with flake8
Patrick Donnelly
11:07 PM Backport #41114 (Resolved): mimic: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
https://github.com/ceph/ceph/pull/31283 Patrick Donnelly
11:07 PM Backport #41113 (Resolved): nautilus: client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
https://github.com/ceph/ceph/pull/30032 Patrick Donnelly
11:06 PM Backport #41112 (Rejected): nautilus: cephfs-shell: cd with no args has no effect
Patrick Donnelly
11:06 PM Backport #41108 (Resolved): mimic: mds: disallow setting ceph.dir.pin value exceeding max rank id
https://github.com/ceph/ceph/pull/29940 Patrick Donnelly
11:06 PM Backport #41107 (Resolved): nautilus: mds: disallow setting ceph.dir.pin value exceeding max rank id
https://github.com/ceph/ceph/pull/29938 Patrick Donnelly
11:06 PM Backport #41106 (Resolved): nautilus: mds: add command that modify session metadata
https://github.com/ceph/ceph/pull/32245 Patrick Donnelly
11:06 PM Backport #41105 (Rejected): nautilus: cephfs-shell: flake8 blank line and indentation error
Patrick Donnelly
11:05 PM Backport #41101 (Rejected): luminous: tools/cephfs: memory leak in cephfs/Resetter.cc
Patrick Donnelly
11:05 PM Backport #41100 (Resolved): mimic: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29915 Patrick Donnelly
11:05 PM Backport #41099 (Resolved): nautilus: tools/cephfs: memory leak in cephfs/Resetter.cc
https://github.com/ceph/ceph/pull/29879 Patrick Donnelly
11:05 PM Backport #41098 (Rejected): luminous: mds: map client_caps been inserted by mistake
Patrick Donnelly
11:05 PM Backport #41097 (Resolved): mimic: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29833 Patrick Donnelly
11:04 PM Backport #41096 (Resolved): nautilus: mds: map client_caps been inserted by mistake
https://github.com/ceph/ceph/pull/29878 Patrick Donnelly
11:04 PM Backport #41095 (Resolved): nautilus: qa: race in test_standby_replay_singleton_fail
https://github.com/ceph/ceph/pull/29832 Patrick Donnelly
11:04 PM Backport #41094 (Resolved): mimic: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_...
https://github.com/ceph/ceph/pull/29812 Patrick Donnelly
11:04 PM Backport #41093 (Resolved): nautilus: qa: tasks.cephfs.test_client_recovery.TestClientRecovery.te...
https://github.com/ceph/ceph/pull/29811 Patrick Donnelly
11:04 PM Backport #41089 (Rejected): nautilus: cephfs-shell: Multiple flake8 errors
Patrick Donnelly
11:04 PM Backport #41088 (Resolved): mimic: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29751 Patrick Donnelly
11:03 PM Backport #41087 (Resolved): nautilus: qa: AssertionError: u'open' != 'stale'
https://github.com/ceph/ceph/pull/29750 Patrick Donnelly
04:05 PM Backport #40326: nautilus: mds: evict stale client when one of its write caps are stolen
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28583
merged
Yuri Weinstein
04:04 PM Backport #40324: nautilus: ceph_volume_client: d_name needs to be converted to string before using
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28609
merged
Yuri Weinstein
04:03 PM Backport #40839: nautilus: cephfs-shell: TypeError in poutput
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29156
merged
Yuri Weinstein
04:03 PM Backport #40842: nautilus: ceph-fuse: mount does not support the fallocate()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29157
merged
Yuri Weinstein
04:02 PM Backport #40843: nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29158
merged
Yuri Weinstein
04:02 PM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29186
merged
Yuri Weinstein
04:01 PM Backport #40438: nautilus: getattr on snap inode stuck
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29231
merged
Yuri Weinstein
04:00 PM Backport #40440: nautilus: mds: cannot switch mds state from standby-replay to active
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29233
merged
Yuri Weinstein
03:59 PM Backport #40443: nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH when operating on...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29343
merged
Yuri Weinstein
03:58 PM Backport #40445: nautilus: mds: MDCache::cow_inode does not cleanup unneeded client_snap_caps
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29344
merged
Yuri Weinstein
02:14 PM Bug #41072 (In Progress): scheduled cephfs snapshots (via ceph manager)
Patrick Donnelly
12:54 PM Bug #41072: scheduled cephfs snapshots (via ceph manager)
- also, interface for fetching snap metadata (`flushed` state, etc...)... Venky Shankar
12:23 PM Bug #41072 (Resolved): scheduled cephfs snapshots (via ceph manager)
outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
Specify a snapshot schedule on any (sub)direct...
Venky Shankar
01:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
Patrick Donnelly
01:46 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos based on token bucket algorithm
Patrick Donnelly
01:45 PM Backport #41000 (New): luminous: client: failed to drop dn and release caps causing mds stray sta...
Zheng, please take this one. The backport is non-trivial. Patrick Donnelly
01:14 PM Feature #41074 (Resolved): pybind/mgr/volumes: mirror (scheduled) snapshots to remote target
outline detailed here: https://pad.ceph.com/p/cephfs-snap-mirroring
mirror scheduled (and temporary) snapshots a r...
Venky Shankar
01:13 PM Feature #41073 (Rejected): cephfs-sync: tool for synchronizing cephfs snapshots to remote target
Introduce an rsync-like tool (cephfs-sync) for mirroring scheduled and temp cephfs snapshots to sync targets. Sync tar... Venky Shankar
01:02 PM Backport #41070 (In Progress): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Ramana Raja
11:56 AM Backport #41070 (Resolved): nautilus: mgr/volumes: Add `ceph fs subvolumegroup getpath` command
https://github.com/ceph/ceph/pull/29490 Ramana Raja
01:00 PM Backport #41071 (In Progress): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes...
Ramana Raja
11:58 AM Backport #41071 (Resolved): nautilus: mgr/volumes: unable to create subvolumegroups/subvolumes wh...
https://github.com/ceph/ceph/pull/29490 Ramana Raja
10:50 AM Bug #41069 (Need More Info): nautilus: test_subvolume_group_create_with_desired_mode fails with "...
Seen here in a nautilus run: http://qa-proxy.ceph.com/teuthology/yuriw-2019-07-30_20:57:10-fs-wip-yuri-testing-2019-07-... Venky Shankar
05:23 AM Bug #41066: mds: skip trim mds cache if mdcache is not opened
https://github.com/ceph/ceph/pull/29481 Zhi Zhang
05:18 AM Bug #41066 (Closed): mds: skip trim mds cache if mdcache is not opened
```
2019-07-24 14:51:28.028198 7f6dc2543700 1 mds.0.940446 active_start
2019-07-24 14:51:39.452890 7f6dc2543700 1...
Zhi Zhang
12:29 AM Backport #41001 (In Progress): mimic: client: failed to drop dn and release caps causing mds star...
Xiaoxi Chen
12:20 AM Backport #41002 (In Progress): nautilus: client: failed to drop dn and release caps causing mds st...
Xiaoxi Chen

08/01/2019

05:46 PM Support #40906: Full CephFS causes hang when accessing inode.
Please confirm that I understand the process so that I can give it a try.
Thanks!
Robert LeBlanc
05:45 PM Bug #41049: adding ceph secret key to kernel failed: Invalid argument.
reason: accidentally double base64 encoded
MISSING old warning!: secret is not valid base64: Invalid argument.
Anonymous
05:32 PM Bug #41049 (New): adding ceph secret key to kernel failed: Invalid argument.
Fresh Nautilus Cluster.
Fresh cephfs.
This is not a common base64 error
mount -t ceph 10.3.2.1:6789:/ /mnt/ -o n...
Anonymous

07/31/2019

06:40 PM Feature #40811 (Pending Backport): mds: add command that modify session metadata
Patrick Donnelly
06:34 PM Bug #40927 (Pending Backport): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
Patrick Donnelly
05:37 PM Bug #41034 (Resolved): cephfs-journal-tool: NetHandler create_socket couldn't create socket
Ubuntu 18.04 with ceph Nautilus repo
Journal tool is broken:
2019-07-31 19:36:56.879 7f57bf308700 -1 NetHandler cre...
Anonymous
05:33 PM Bug #40999 (Pending Backport): qa: AssertionError: u'open' != 'stale'
Patrick Donnelly
05:10 PM Bug #41031 (Resolved): qa: malformed job
/ceph/teuthology-archive/pdonnell-2019-07-31_00:35:45-fs-wip-pdonnell-testing-20190730.205527-distro-basic-smithi/416... Patrick Donnelly
04:59 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
Please seek help on ceph-users. Provide more information about your cluster and how the error came about. Patrick Donnelly
04:00 PM Bug #41026: MDS process crashes on 14.2.2
... Anonymous
01:15 PM Bug #41026 (Rejected): MDS process crashes on 14.2.2
MDS processes on Ubuntu 18.04, Nautilus 14.2.2, are crashing; unable to recover
-7> 2019-07-31 13:29:46.888 7fb36a...
Anonymous
03:32 PM Bug #39395: ceph: ceph fs auth fails
merged https://github.com/ceph/ceph/pull/28666 Yuri Weinstein
09:08 AM Bug #40960: client: failed to drop dn and release caps causing mds stray stacking.
some more background on this issue is at
https://tracker.ceph.com/issues/38679#note-9
Xiaoxi Chen
02:49 AM Bug #41006 (Fix Under Review): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before...
Zheng Yan

07/30/2019

10:13 PM Backport #40845: nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29159
merged
Yuri Weinstein
06:35 PM Bug #39947 (Pending Backport): cephfs-shell: add CI testing with flake8
Patrick Donnelly
05:07 PM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
NAB Patrick Donnelly
12:09 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Bingo - mds_log_max_segments.
In Luminous the description for this option is empty:...
Konstantin Shalygin
01:25 PM Bug #41006: cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
looks like discontiguous free inode number can trigger the crash Zheng Yan
08:56 AM Bug #41006 (Resolved): cephfs-data-scan scan_links FAILED ceph_assert(p->second >= before+len)
Running cephfs-data-scan scan_links on a test 14.2.2 cluster I get this assertion:... Dan van der Ster
05:51 AM Feature #5520 (In Progress): osdc: should handle namespaces
Jos Collin
01:42 AM Backport #41002 (Resolved): nautilus: client: failed to drop dn and release caps causing mds stray...
https://github.com/ceph/ceph/pull/29478 Xiaoxi Chen
01:41 AM Backport #41001 (Resolved): mimic: client: failed to drop dn and release caps causing mds stray s...
https://github.com/ceph/ceph/pull/29479 Xiaoxi Chen
01:38 AM Backport #41000 (Resolved): luminous: client: failed to drop dn and release caps causing mds star...
https://github.com/ceph/ceph/pull/29830 Xiaoxi Chen

07/29/2019

09:53 PM Bug #40603 (Pending Backport): mds: disallow setting ceph.dir.pin value exceeding max rank id
Patrick Donnelly
09:49 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
Patrick Donnelly
09:47 PM Bug #40939 (Pending Backport): mds: map client_caps been inserted by mistake
Patrick Donnelly
09:46 PM Bug #40960 (Pending Backport): client: failed to drop dn and release caps causing mds stray stack...
Patrick Donnelly
09:10 PM Bug #37681: qa: power off still resulted in client sending session close
backport note: also need fix for https://tracker.ceph.com/issues/40999 Patrick Donnelly
08:09 PM Bug #37681 (Pending Backport): qa: power off still resulted in client sending session close
Patrick Donnelly
09:09 PM Bug #40999 (Fix Under Review): qa: AssertionError: u'open' != 'stale'
Patrick Donnelly
09:06 PM Bug #40999 (Resolved): qa: AssertionError: u'open' != 'stale'
... Patrick Donnelly
08:10 PM Bug #40968 (Pending Backport): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
Patrick Donnelly
06:17 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Konstantin Shalygin wrote:
> ??The MDS tracks opened files and capabilities in the MDS journal. That would explain t...
Patrick Donnelly
04:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
??The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata... Konstantin Shalygin
05:38 PM Cleanup #40992 (Pending Backport): cephfs-shell: Multiple flake8 errors
Patrick Donnelly
11:54 AM Cleanup #40992: cephfs-shell: Multiple flake8 errors
Not ignoring E501; instead limiting line length to 100.... Varsha Rao
06:59 AM Cleanup #40992 (Fix Under Review): cephfs-shell: Multiple flake8 errors
Varsha Rao
06:48 AM Cleanup #40992 (Resolved): cephfs-shell: Multiple flake8 errors
After ignoring E501 and W503 flake8 errors, the following needs to be fixed:... Varsha Rao

07/27/2019

05:25 AM Support #40906: Full CephFS causes hang when accessing inode.
Okay, I'll give it a shot next week on some of my smaller directories.
Just to be sure I understand the process.
...
Robert LeBlanc

07/26/2019

10:18 PM Bug #40474 (Pending Backport): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
Patrick Donnelly
10:18 PM Bug #40474 (Resolved): client: more precise CEPH_CLIENT_CAPS_PENDING_CAPSNAP
Patrick Donnelly
10:16 PM Bug #40476 (Pending Backport): cephfs-shell: cd with no args has no effect
Patrick Donnelly
10:14 PM Bug #40695 (Resolved): mds: rework PurgeQueue on_error handler to avoid mds_lock state check
Patrick Donnelly
10:12 PM Cleanup #40787 (Resolved): mds: reorg CInode header
Patrick Donnelly
10:11 PM Bug #40936 (Pending Backport): tools/cephfs: memory leak in cephfs/Resetter.cc
Patrick Donnelly
10:10 PM Bug #40967 (Pending Backport): qa: race in test_standby_replay_singleton_fail
Patrick Donnelly
07:47 PM Bug #40989 (Resolved): qa: RECENT_CRASH warning prevents wait_for_health_clear from completing
... Patrick Donnelly
06:36 PM Bug #40965 (Fix Under Review): client: don't report ceph.* xattrs via listxattr
Patrick Donnelly
06:34 PM Bug #40927 (Fix Under Review): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph...
Patrick Donnelly
05:08 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
The MDS tracks opened files and capabilities in the MDS journal. That would explain the space usage in the metadata p... Patrick Donnelly
09:37 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
After another bunch of simulations I called `cache drop` on the MDS via the admin socket. Pool usage *198.8MB* -> *2.8MB*.
...
Konstantin Shalygin
06:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Setting the kernel client aside, I ran some tests with the Samba VFS - this is a Luminous client.
First, I just copied...
Konstantin Shalygin
03:25 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
+10MBytes for last ~24H.
Actual pool's usage:
fs_data:
!fs_data_26.07.19.png!
fs_meta:
!fs_meta_26.07.19.png!
Konstantin Shalygin
03:23 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Patrick, this is definitely Luminous 12.2.12. My actual question is why metadata usage (bytes) is growing and new obj... Konstantin Shalygin
04:06 PM Feature #40986: cephfs qos: implement cephfs qos based on token bucket algorithm
I think there are two kinds of design:
1. all clients use the same QoS setting, just as the implementation
in this...
songbo wang
03:58 PM Feature #40986 (Fix Under Review): cephfs qos: implement cephfs qos based on token bucket algorithm
The basic idea is as follows:
Set QoS info as one of the dir's xattrs;
All clients that can access the same dirs ...
songbo wang
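For reference, a bare-bones token bucket of the kind the proposal builds on; the rate/burst numbers and names below are illustrative only, not part of the design above.

```python
# Bare-bones token bucket. In the proposal the client would refill from the
# rate/burst stored in the directory's QoS xattr; numbers and names here are
# illustrative only.
import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = float(rate)      # tokens added per second
        self.burst = float(burst)    # bucket capacity
        self.tokens = float(burst)
        self.last = time.monotonic()

    def consume(self, n=1):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True              # request allowed
        return False                 # over limit; caller should throttle

bucket = TokenBucket(rate=1000, burst=2000)   # e.g. 1000 IOPS with a burst of 2000
if not bucket.consume():
    time.sleep(0.001)                # back off before retrying
```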
10:04 AM Feature #18537: libcephfs cache invalidation upcalls
Jeff Layton wrote:
> I looked at this, but I think the real solution to this problem is to just prevent ganesha from...
huanwen ren
06:15 AM Backport #40445 (In Progress): nautilus: mds: MDCache::cow_inode does not cleanup unneeded client...
https://github.com/ceph/ceph/pull/29344 Prashant D
06:12 AM Backport #40443 (In Progress): nautilus: libcephfs: returns ESTALE to nfs-ganesha's FSAL_CEPH whe...
https://github.com/ceph/ceph/pull/29343 Prashant D
04:49 AM Backport #37761: mimic: mds: deadlock when setting config value via admin socket
ACK Venky Shankar
02:42 AM Support #40906: Full CephFS causes hang when accessing inode.
After deleting the omap key, use the scrub_path asok command to repair the parent directory. Zheng Yan
02:06 AM Support #40906: Full CephFS causes hang when accessing inode.
Is deleting the RADOS object for the inode going to cause more problems with the MDS because they get out of sync? Robert LeBlanc

07/25/2019

11:12 PM Feature #40617 (Pending Backport): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Patrick Donnelly
11:11 PM Cleanup #40866 (Resolved): mds: reorg Capability header
Patrick Donnelly
11:09 PM Bug #40836 (Pending Backport): cephfs-shell: flake8 blank line and indentation error
Patrick Donnelly
11:06 PM Cleanup #40742 (Resolved): mds: reorg CDir header
Patrick Donnelly
10:18 PM Bug #40968 (Fix Under Review): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stal...
Patrick Donnelly
10:15 PM Bug #40968 (Resolved): qa: tasks.cephfs.test_client_recovery.TestClientRecovery.test_stale_write_...
/ceph/teuthology-archive/pdonnell-2019-07-25_06:25:06-fs-wip-pdonnell-testing-20190725.023305-distro-basic-smithi/414... Patrick Donnelly
09:43 PM Bug #40967 (Fix Under Review): qa: race in test_standby_replay_singleton_fail
Patrick Donnelly
09:39 PM Bug #40967 (Resolved): qa: race in test_standby_replay_singleton_fail
... Patrick Donnelly
07:51 PM Bug #40965 (Resolved): client: don't report ceph.* xattrs via listxattr
The convention with filesystems that have virtual xattrs is to not report them via listxattr(). A lot of archiving to... Jeff Layton
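Roughly the behavior being asked for, seen from a client mount (the path is illustrative; ceph.dir.entries is one of the virtual xattrs in question):

```python
# Expected behavior: ceph.* virtual xattrs stay readable via getxattr but no
# longer appear in listxattr(), so archivers stop trying to copy them.
# The path is illustrative.
import os

path = '/mnt/cephfs/projects'

names = os.listxattr(path)
assert not any(n.startswith('ceph.') for n in names)

# Explicit lookups of the virtual xattrs still work:
print(os.getxattr(path, 'ceph.dir.entries'))
```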
06:53 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
I see now that you're using 12.2.12? That wouldn't be the cause then I guess. The PR which backports the relevant cha... Patrick Donnelly
06:48 PM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Did you just upgrade your cluster to Nautilus? The data usage stats changed recently so that omaps in the metadata po... Patrick Donnelly
09:45 AM Bug #40951: (think this is bug) Why CephFS metadata pool usage is only growing?
Found another cluster with this Ceph version. Data usage 10x more, but meta is not much.
Metadata pool in this clust...
Konstantin Shalygin
09:37 AM Bug #40951 (Rejected): (think this is bug) Why CephFS metadata pool usage is only growing?
Can't find a clue: single-MDS fs without actual usage, but metadata is only growing (screenshot attached). MDS with ... Konstantin Shalygin
06:46 PM Bug #40939 (Fix Under Review): mds: map client_caps been inserted by mistake
Patrick Donnelly
08:52 AM Bug #40939 (Resolved): mds: map client_caps been inserted by mistake
SnapRealm.h: SnapRealm::remove_cap()
If the map client_caps does not have the key client, 'client_caps[client]' will insert key cl...
guodong xiao
06:18 PM Bug #40936 (Fix Under Review): tools/cephfs: memory leak in cephfs/Resetter.cc
Patrick Donnelly
06:24 AM Bug #40936: tools/cephfs: memory leak in cephfs/Resetter.cc
function -- Resetter::_write_reset_event(Journaler *journaler) guodong xiao
06:22 AM Bug #40936 (Resolved): tools/cephfs: memory leak in cephfs/Resetter.cc
there is a memory leak in cephfs/Resetter.cc:
LogEvent *le = new EResetJournal;
forgetting to use 'delete' to release ...
guodong xiao
06:02 PM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
does that work? Patrick Donnelly
03:25 AM Bug #40932: ceph-fuse: rm a dir failed, print "non-empty" error
use scrub_path asok command to fix it Zheng Yan
02:10 AM Bug #40932 (Need More Info): ceph-fuse: rm a dir failed, print "non-empty" error
# rm -rf dir1
prints a "non-empty" error, but "ls dir1" shows the directory is empty.
guodong xiao
05:53 PM Bug #40877 (Fix Under Review): client: client should return EIO when its unsafe reqs have been d...
Patrick Donnelly
05:51 PM Bug #40960 (Fix Under Review): client: failed to drop dn and release caps causing mds stray stack...
Patrick Donnelly
02:32 PM Bug #40960 (Resolved): client: failed to drop dn and release caps causing mds stray stacking.
when the client gets a notification from the MDS that a file has been deleted (via
getting CEPH_CAP_LINK_SHARED cap for inode wi...
Xiaoxi Chen
02:47 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
Patrick Donnelly wrote:
> Also need a ticket to allow setting the uid/gid of the subvolume (Feature).
Done. http:...
Ramana Raja
02:43 PM Bug #40171 (Resolved): mds: reset heartbeat during long-running loops in recovery
Nathan Cutler
02:43 PM Backport #40222 (Resolved): mimic: mds: reset heartbeat during long-running loops in recovery
Nathan Cutler
02:37 PM Backport #40222: mimic: mds: reset heartbeat during long-running loops in recovery
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/28918
merged
Yuri Weinstein
02:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Nathan Cutler
02:37 PM Backport #40875: mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Xiaoxi Chen wrote:
> https://github.com/ceph/ceph/pull/29187
merged
Yuri Weinstein
02:41 PM Backport #39685 (Resolved): mimic: ceph-fuse: client hang because its bad session PipeConnection ...
Nathan Cutler
02:36 PM Backport #39685: mimic: ceph-fuse: client hang because its bad session PipeConnection to mds
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29200
merged
Yuri Weinstein
02:40 PM Backport #38099 (Resolved): mimic: mds: remove cache drop admin socket command
Nathan Cutler
02:35 PM Backport #38099: mimic: mds: remove cache drop admin socket command
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29210
merged
Yuri Weinstein
02:38 PM Bug #38270 (Resolved): kcephfs TestClientLimits.test_client_pin fails with "client caps fell belo...
Nathan Cutler
02:38 PM Backport #38687 (Resolved): mimic: kcephfs TestClientLimits.test_client_pin fails with "client ca...
Nathan Cutler
02:34 PM Backport #38687: mimic: kcephfs TestClientLimits.test_client_pin fails with "client caps fell bel...
Nathan Cutler wrote:
> https://github.com/ceph/ceph/pull/29211
merged
Yuri Weinstein
02:19 PM Feature #40959 (Resolved): mgr/volumes: allow setting uid, gid of subvolume and subvolume group d...
... by passing optional arguments --uid and --gid to `ceph fs subvolume/subvolume group create` commands. Ramana Raja
01:41 PM Documentation #40957 (Resolved): doc: add section to manpage for recover_session= option
Now that the recover_session= mount option has been added to the testing branch in the kernel, we need to update the ... Jeff Layton
08:56 AM Backport #40944 (Resolved): nautilus: mgr: failover during in qa testing causes unresponsive clie...
https://github.com/ceph/ceph/pull/29649 Nathan Cutler
03:14 AM Support #40934 (New): can't get connection to external cephfs from kubernetes pod
I need clear step-by-step instructions on how to set up cephfs provisioning to an external cephfs cluster without ... konstantin pupkov

07/24/2019

11:19 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
I'm not sure I have enough context to help here. Also, this question should begin on ceph-users, not here. Patrick Donnelly
11:11 PM Bug #40931 (Rejected): Can't connect to my kubernetes pod
The new version is not allowed to connect to my ceph cluster external to kubernetes. It is OK if connecting to rdbms, but... konstantin pupkov
06:59 PM Feature #40929 (Resolved): pybind/mgr/mds_autoscaler: create mgr plugin to deploy and configure M...
Create a new mgr plugin that adds and removes MDSs in response to degraded file systems. The plugin should monitor ch... Patrick Donnelly
06:23 PM Bug #40927: mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is run as non-...
Also need a ticket to allow setting the uid/gid of the subvolume (Feature). Patrick Donnelly
04:36 PM Bug #40927 (Resolved): mgr/volumes: unable to create subvolumegroups/subvolumes when ceph-mgr is ...
... Ramana Raja
04:55 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
Reopening this as I see now that Zheng asked you to create the ticket. Patrick Donnelly
04:53 PM Support #40906 (Rejected): Full CephFS causes hang when accessing inode.
This discussion should move to the ceph-users list: https://lists.ceph.io/postorius/lists/ceph-users.ceph.io/ Patrick Donnelly
01:48 AM Support #40906: Full CephFS causes hang when accessing inode.
The log does not contain information about the bad inode.
To use rados to delete the inode:
1. find the inode number o...
Zheng Yan
11:51 AM Bug #24868 (Resolved): doc: Incomplete Kernel mount debugging
This is fixed by https://github.com/ceph/ceph/pull/28900 Jos Collin
11:14 AM Bug #40864: cephfs-shell: rmdir doesn't complain when directory is not empty
After this PR [https://github.com/ceph/ceph/pull/28514],
it does print an error message, but the message is not approp...
Varsha Rao
11:03 AM Bug #40863: cephfs-shell: rmdir with -p attempts to delete non-dir files as well
Varsha Rao

07/23/2019

04:14 PM Support #40906: Full CephFS causes hang when accessing inode.
The complete logs were 81GB, so I just filtered on the client session. If you need some more complete logs, let me kn... Robert LeBlanc
04:10 PM Support #40906 (New): Full CephFS causes hang when accessing inode.
Our CephFS filled up and we are having trouble operating on a directory that has a file with a bad inode state. We wo... Robert LeBlanc
03:57 PM Backport #40440 (In Progress): nautilus: mds: cannot switch mds state from standby-replay to active
Nathan Cutler
03:53 PM Backport #40438 (In Progress): nautilus: getattr on snap inode stuck
Nathan Cutler
03:51 PM Backport #40437 (In Progress): mimic: getattr on snap inode stuck
Nathan Cutler
03:47 PM Backport #40218 (In Progress): luminous: TestMisc.test_evict_client fails
Nathan Cutler
03:46 PM Backport #40219 (In Progress): mimic: TestMisc.test_evict_client fails
Nathan Cutler
03:45 PM Backport #40163 (In Progress): luminous: mount: key parsing fail when doing a remount
Nathan Cutler
03:44 PM Backport #40165 (In Progress): mimic: mount: key parsing fail when doing a remount
Nathan Cutler
03:43 PM Backport #40162 (Need More Info): mimic: FSAL_CEPH assertion failed in Client::_lookup_name: "par...
non-trivial Nathan Cutler
03:41 PM Backport #39223 (In Progress): mimic: mds: behind on trimming and "[dentry] was purgeable but no ...
Nathan Cutler
03:36 PM Backport #39215 (In Progress): mimic: mds: there is an assertion when calling Beacon::shutdown()
Nathan Cutler
03:25 PM Backport #39212 (In Progress): mimic: MDSTableServer.cc: 83: FAILED assert(version == tid)
Nathan Cutler
03:22 PM Backport #39210 (In Progress): mimic: mds: mds_cap_revoke_eviction_timeout is not used to initial...
Nathan Cutler
03:19 PM Backport #38875 (In Progress): mimic: mds: high debug logging with many subtrees is slow
Nathan Cutler
03:15 PM Backport #38709 (In Progress): mimic: qa: kclient unmount hangs after file system goes down
Nathan Cutler
02:07 PM Bug #40867 (Pending Backport): mgr: failover during in qa testing causes unresponsive client warn...
Sage Weil
01:03 PM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
Patrick Donnelly wrote:
> Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
>
> As I said in issue des...
Venky Shankar
12:11 PM Backport #38687 (In Progress): mimic: kcephfs TestClientLimits.test_client_pin fails with "client...
Nathan Cutler
12:09 PM Backport #38643 (Need More Info): mimic: fs: "log [WRN] : failed to reconnect caps for missing in...
not trivial to backport because in master we have:... Nathan Cutler
12:04 PM Backport #38444 (Need More Info): mimic: mds: drop cache does not timeout as expected
see luminous backport https://github.com/ceph/ceph/pull/27342 Nathan Cutler
12:03 PM Backport #38350 (Need More Info): luminous: mds: decoded LogEvent may leak during shutdown
extensive changes that do not apply cleanly - elevated risk Nathan Cutler
12:03 PM Backport #38349 (Need More Info): mimic: mds: decoded LogEvent may leak during shutdown
extensive changes that do not apply cleanly - elevated risk Nathan Cutler
12:01 PM Backport #38339: mimic: mds: may leak gather during cache drop
The luminous backport https://github.com/ceph/ceph/pull/27342 involved cherry-picking a large number of commits. Assi... Nathan Cutler
12:00 PM Backport #38339 (Need More Info): mimic: mds: may leak gather during cache drop
non-trivial - the fix touches code that appears to have been refactored for nautilus Nathan Cutler
11:57 AM Backport #38099 (In Progress): mimic: mds: remove cache drop admin socket command
Nathan Cutler
11:52 AM Backport #37906 (Need More Info): mimic: make cephfs-data-scan reconstruct snaptable
non-trivial feature backport - assigning to dev lead for disposition Nathan Cutler
11:50 AM Backport #37761 (Need More Info): mimic: mds: deadlock when setting config value via admin socket
The master PR has two commits. The first commit touches the following files:
src/common/config_obs_mgr.h
src/comm...
Nathan Cutler
11:48 AM Backport #37637 (Need More Info): luminous: client: support getfattr ceph.dir.pin extended attribute
non-trivial feature backport - assigning to dev lead for disposition Nathan Cutler
11:47 AM Backport #37636 (Need More Info): mimic: client: support getfattr ceph.dir.pin extended attribute
non-trivial feature backport - assigning to dev lead for disposition Nathan Cutler
10:22 AM Backport #39685 (In Progress): mimic: ceph-fuse: client hang because its bad session PipeConnecti...
Nathan Cutler
08:28 AM Backport #40900 (Resolved): nautilus: mds: only evict an unresponsive client when another client ...
https://github.com/ceph/ceph/pull/30031 Nathan Cutler
08:28 AM Backport #40899 (Resolved): mimic: mds: only evict an unresponsive client when another client wan...
https://github.com/ceph/ceph/pull/30239 Nathan Cutler
08:25 AM Backport #40898 (Rejected): nautilus: cephfs-shell: Error messages are printed to stdout
Nathan Cutler
08:25 AM Backport #40897 (Resolved): nautilus: ceph_volume_client: fs_name must be converted to string bef...
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:25 AM Backport #40896 (Rejected): mimic: ceph_volume_client: fs_name must be converted to string before...
https://github.com/ceph/ceph/pull/30238 Nathan Cutler
08:25 AM Backport #40895 (Resolved): nautilus: pybind: Add standard error message and fix print of path as...
https://github.com/ceph/ceph/pull/30026 Nathan Cutler
08:25 AM Backport #40894 (Resolved): nautilus: mds: cleanup truncating inodes when standby replay mds trim...
https://github.com/ceph/ceph/pull/29591 Nathan Cutler
08:24 AM Backport #40892 (Resolved): luminous: mds: cleanup truncating inodes when standby replay mds trim...
https://github.com/ceph/ceph/pull/31286 Nathan Cutler
08:23 AM Backport #40887 (Resolved): nautilus: ceph_volume_client: to_bytes converts NoneType object str
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:23 AM Backport #40886 (Rejected): mimic: ceph_volume_client: to_bytes converts NoneType object str
Nathan Cutler
02:41 AM Backport #40875 (In Progress): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29187 Xiaoxi Chen
02:29 AM Bug #40877 (Resolved): client: client should return EIO when its unsafe reqs have been dropped w...
When the session is closed, the client will drop unsafe reqs and cannot confirm whether its request had been complet... simon gao
02:21 AM Backport #40874 (In Progress): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Xiaoxi Chen
02:20 AM Backport #40874: nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29186 Xiaoxi Chen

07/22/2019

11:42 PM Backport #40875 (Resolved): mimic: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29187 Xiaoxi Chen
11:41 PM Backport #40874 (Resolved): nautilus: /src/include/xlist.h: 77: FAILED assert(_size == 0)
https://github.com/ceph/ceph/pull/29186 Xiaoxi Chen
11:34 PM Bug #40775 (Pending Backport): /src/include/xlist.h: 77: FAILED assert(_size == 0)
Patrick Donnelly
11:30 PM Bug #40477 (Pending Backport): mds: cleanup truncating inodes when standby replay mds trim log se...
Patrick Donnelly
11:29 PM Bug #40411 (Pending Backport): pybind: Add standard error message and fix print of path as byte o...
Patrick Donnelly
11:28 PM Bug #40800 (Pending Backport): ceph_volume_client: to_bytes converts NoneType object str
Patrick Donnelly
11:28 PM Bug #40369 (Pending Backport): ceph_volume_client: fs_name must be converted to string before usi...
Patrick Donnelly
11:26 PM Bug #40202 (Pending Backport): cephfs-shell: Error messages are printed to stdout
Patrick Donnelly
11:25 PM Feature #17854 (Pending Backport): mds: only evict an unresponsive client when another client wan...
Patrick Donnelly
09:10 PM Bug #40784 (Fix Under Review): mds: metadata changes may be lost when MDS is restarted
Patrick Donnelly
08:01 PM Bug #40873 (Duplicate): qa: expected MDS_CLIENT_LATE_RELEASE in tasks.cephfs.test_client_recovery...
This needs to be whitelisted:
/ceph/teuthology-archive/pdonnell-2019-07-20_01:58:35-fs-wip-pdonnell-testing-20190719.231...
Patrick Donnelly
06:19 PM Bug #40867: mgr: failover during in qa testing causes unresponsive client warnings
Sage's whitelist PR: https://github.com/ceph/ceph/pull/29169
As I said in issue description, I'd prefer if we clea...
Patrick Donnelly
03:26 PM Bug #40867 (Resolved): mgr: failover during in qa testing causes unresponsive client warnings
... Patrick Donnelly
01:48 PM Bug #23262 (Resolved): kclient: nofail option not supported
Nathan Cutler
01:48 PM Backport #39233 (Resolved): mimic: kclient: nofail option not supported
Nathan Cutler
01:48 PM Bug #38832 (Resolved): mds: fails to resolve snapshot name containing '_'
Nathan Cutler
01:48 PM Backport #39472 (Resolved): mimic: mds: fails to resolve snapshot name containing '_'
Nathan Cutler
01:47 PM Bug #39645 (Resolved): mds: output lock state in format dump
Nathan Cutler
01:47 PM Backport #39669 (Resolved): mimic: mds: output lock state in format dump
Nathan Cutler
01:46 PM Feature #39403 (Resolved): pybind: add the lseek() function to pybind of cephfs
Nathan Cutler
01:46 PM Backport #39679 (Resolved): mimic: pybind: add the lseek() function to pybind of cephfs
Nathan Cutler
01:25 PM Cleanup #40866 (Fix Under Review): mds: reorg Capability header
Varsha Rao
01:21 PM Cleanup #40866 (Resolved): mds: reorg Capability header
Varsha Rao
12:54 PM Backport #39689 (Resolved): mimic: mds: error "No space left on device" when creating a large numb...
Nathan Cutler
12:22 PM Feature #24463: kclient: add btime support
Patches merged for v5.3. Jeff Layton
12:21 PM Feature #24463 (Resolved): kclient: add btime support
Jeff Layton
11:19 AM Backport #40168 (Resolved): mimic: client: ceph.dir.rctime xattr value incorrectly prefixes "09" ...
Nathan Cutler
11:18 AM Bug #40211 (Resolved): mds: fix corner case of replaying open sessions
Nathan Cutler
11:18 AM Backport #40342 (Resolved): mimic: mds: fix corner case of replaying open sessions
Nathan Cutler
11:14 AM Bug #40028 (Resolved): mds: avoid trimming too many log segments after mds failover
Nathan Cutler
11:14 AM Backport #40042 (Resolved): mimic: avoid trimming too many log segments after mds failover
Nathan Cutler
10:46 AM Bug #40864 (Resolved): cephfs-shell: rmdir doesn't complain when directory is not empty
Passing rmdir a non-empty directory doesn't remove the directory, as expected. But not printing anything mislea... Rishabh Dave
10:40 AM Bug #40863 (Resolved): cephfs-shell: rmdir with -p attempts to delete non-dir files as well
Going through do_rmdir in cephfs-shell, I don't see any checks that stop the method from deleting non-directory files... Rishabh Dave
09:49 AM Backport #40845 (In Progress): nautilus: MDSMonitor: use stringstream instead of dout for mds rep...
Nathan Cutler
08:21 AM Backport #40845 (Resolved): nautilus: MDSMonitor: use stringstream instead of dout for mds repaired
https://github.com/ceph/ceph/pull/29159 Nathan Cutler
09:48 AM Backport #40843 (In Progress): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
Nathan Cutler
08:20 AM Backport #40843 (Resolved): nautilus: cephfs-shell: name 'files' is not defined error in do_rm()
https://github.com/ceph/ceph/pull/29158 Nathan Cutler
09:47 AM Backport #40842 (In Progress): nautilus: ceph-fuse: mount does not support the fallocate()
Nathan Cutler
08:20 AM Backport #40842 (Resolved): nautilus: ceph-fuse: mount does not support the fallocate()
https://github.com/ceph/ceph/pull/29157 Nathan Cutler
09:47 AM Backport #40839 (In Progress): nautilus: cephfs-shell: TypeError in poutput
Nathan Cutler
08:20 AM Backport #40839 (Resolved): nautilus: cephfs-shell: TypeError in poutput
https://github.com/ceph/ceph/pull/29156 Nathan Cutler
09:20 AM Bug #40861 (Resolved): cephfs-shell: -p doesn't work for rmdir
... Rishabh Dave
09:18 AM Bug #40860 (Resolved): cephfs-shell: raises incorrect error when regfiles are passed to be deleted
Steps to reproduce the bug -... Rishabh Dave
08:23 AM Backport #40858 (Rejected): luminous: ceph_volume_client: python program embedded in test_volume_...
Nathan Cutler
08:23 AM Backport #40857 (Resolved): nautilus: ceph_volume_client: python program embedded in test_volume_...
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:23 AM Backport #40856 (Rejected): mimic: ceph_volume_client: python program embedded in test_volume_cli...
Nathan Cutler
08:23 AM Backport #40855 (Rejected): luminous: test_volume_client: test_put_object_versioned is unreliable
Nathan Cutler
08:23 AM Backport #40854 (Resolved): nautilus: test_volume_client: test_put_object_versioned is unreliable
https://github.com/ceph/ceph/pull/30030 Nathan Cutler
08:22 AM Backport #40853 (Resolved): mimic: test_volume_client: test_put_object_versioned is unreliable
https://github.com/ceph/ceph/pull/30236 Nathan Cutler
08:21 AM Backport #40844 (Resolved): mimic: MDSMonitor: use stringstream instead of dout for mds repaired
https://github.com/ceph/ceph/pull/30235 Nathan Cutler
08:20 AM Backport #40841 (Resolved): mimic: ceph-fuse: mount does not support the fallocate()
https://github.com/ceph/ceph/pull/30228 Nathan Cutler
06:18 AM Bug #40836 (Fix Under Review): cephfs-shell: flake8 blank line and indentation error
Varsha Rao
06:06 AM Bug #40836 (Resolved): cephfs-shell: flake8 blank line and indentation error
Following are the flake8 errors... Varsha Rao

07/19/2019

05:39 PM Bug #40775 (Fix Under Review): /src/include/xlist.h: 77: FAILED assert(_size == 0)
Patrick Donnelly

07/18/2019

09:05 PM Bug #39405 (Pending Backport): ceph_volume_client: python program embedded in test_volume_client....
Patrick Donnelly
09:04 PM Bug #39510 (Pending Backport): test_volume_client: test_put_object_versioned is unreliable
Patrick Donnelly
06:06 PM Feature #40036 (Resolved): mgr / volumes: support asynchronous subvolume deletes
Patrick Donnelly
06:06 PM Backport #40796 (Resolved): nautilus: mgr / volumes: support asynchronous subvolume deletes
Patrick Donnelly
05:35 PM Bug #40821 (Resolved): osdc: objecter ops output does not have useful time information
... Patrick Donnelly
01:22 PM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
kernel data structure for this... Zheng Yan
07:52 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
"echo 2 > /proc/sys/vm/drop_caches" can release the reference and work around the issue.... Xiaoxi Chen
06:16 AM Bug #40775: /src/include/xlist.h: 77: FAILED assert(_size == 0)
Analyzing the log further, it seems there is an overflow in ll_ref.
From the log below, it is pretty clear the pattern is 2 _ll...
Xiaoxi Chen
01:00 PM Feature #40811 (Fix Under Review): mds: add command that modify session metadata
Zheng Yan
07:34 AM Feature #40811 (Resolved): mds: add command that modify session metadata
Zheng Yan
10:52 AM Feature #40617 (Fix Under Review): mgr/volumes: Add `ceph fs subvolumegroup getpath` command
Ramana Raja