Bug #19401
Status: Closed
MDS goes readonly writing backtrace for a file whose data pool has been removed
Description
Reproduce:
1. Create a pool.
2. Add it as a data pool for the filesystem.
3. Set that pool in a layout on the client, and write a file.
4. Unmount the client.
5. Remove the pool.
6. Run "ceph daemon mds.<id> flush journal".
You'll get:
2017-03-28 13:23:05.625372 mds.0 [ERR] failed to store backtrace on ino 10000000001 object, pool 3, errno -2
2017-03-28 13:23:05.625405 mds.0 [WRN] force file system read-only
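The reproduction can be sketched as a shell session. This is a hedged illustration, not output from the ticket: the pool name ("doomed_pool"), filesystem name ("cephfs"), mount point, and directory are made up, a running Ceph cluster is required, and the final step deliberately drives the MDS read-only.

```shell
# Create an extra pool and attach it to the filesystem as a data pool.
ceph osd pool create doomed_pool 8
ceph fs add_data_pool cephfs doomed_pool

# Point a directory's layout at the new pool and write a file into it
# (assumes the filesystem is mounted at /mnt/cephfs).
mkdir /mnt/cephfs/dir
setfattr -n ceph.dir.layout.pool -v doomed_pool /mnt/cephfs/dir
dd if=/dev/zero of=/mnt/cephfs/dir/file bs=1M count=1

umount /mnt/cephfs

# Delete the pool out from under the MDS (the monitors may need
# mon_allow_pool_delete enabled first), then flush the journal.
ceph osd pool rm doomed_pool doomed_pool --yes-i-really-really-mean-it
ceph daemon mds.<id> flush journal   # MDS fails writing the backtrace
```

At the flush, the MDS tries to store the file's backtrace into the now-missing pool, gets ENOENT, and forces the filesystem read-only as shown in the log above.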
There isn't a neat way to prevent users from doing this: even if they are well behaved and remove all their files from a pool before removing the pool itself, the MDS might not yet be done purging those files or writing their backtraces.
If we can't stop the pool from being removed, we need to cope gracefully with its absence in every code path that touches it: writing a backtrace, following a backtrace during hardlink resolution, file size recovery, and deleted-file purging.
Updated by John Spray about 7 years ago
- Status changed from New to Fix Under Review
So on reflection I realise that the deletion/recovery cases are not an issue because those code paths already handle ENOENT.
Updated by John Spray about 7 years ago
- Status changed from Fix Under Review to Pending Backport
Updated by Nathan Cutler about 7 years ago
- Copied to Backport #19668: jewel: MDS goes readonly writing backtrace for a file whose data pool has been removed added
Updated by Nathan Cutler about 7 years ago
- Copied to Backport #19669: kraken: MDS goes readonly writing backtrace for a file whose data pool has been removed added
Updated by Nathan Cutler almost 7 years ago
- Status changed from Pending Backport to Resolved