CephFS - Hadoop Support¶
Summary¶
Overview of the current status of Hadoop support on Ceph, what we are working on now, and the development roadmap.
Owners¶
- Noah Watkins (RedHat, UCSC)
- Name (Affiliation)
- Name
Interested Parties¶
- Name (Affiliation)
- Name (Affiliation)
- Name
Current Status¶
Results from HCFS Test Suite¶
The HCFS tests are now in hadoop-common. We are running them against our cephfs-hadoop bindings and have been squashing bugs for the past couple of weeks. This is the current state of issues:
HCFS Resources¶
- Documents describing semantics
- https://github.com/apache/hadoop-common/tree/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem
- https://github.com/apache/hadoop/tree/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract
- https://issues.apache.org/jira/browse/HADOOP-9371
Results¶
- Tests run: 61, Failures: 3, Errors: 1, Skipped: 4
- Errors:
- We reported a problem in the HCFS tests (https://issues.apache.org/jira/browse/HADOOP-11244)
- Skipped:
- File concatenation API
- void concat(final Path target, final Path[] sources)
- This is a little-used operation currently implemented only by HDFS.
- Could be supported with a simple re-write hack
- Optimized CephFS support?
- Root directory tests
- libcephfs bug: rmdir("/")
- #9935
- Failures:
- testRenameFileOverExistingFiles
- testRenameFileNonexistentDir?
- Rename semantics for HCFS are complicated.
- Is rename in Ceph atomic?
- According to HCFS we only need the core rename op to be atomic, and the rest of the semantics can be emulated in our binding.
- testNoMkdirOverFile?
BigTop/ceph-qa-suite Tests¶
- Not completed, supposedly very easy
- Integration
- ceph-qa-suite
- Jenkins?
Clock Sync¶
- I haven't seen this issue come up in a long time
- #1666
Snapshots and Quotas¶
Haven't investigated the Ceph side of this. There are documents describing HDFS behavior for reference:
- https://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html
- https://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html
Client Shutdown Woes¶
When processes using libcephfs exit without first unmounting, other clients may experience delays (e.g. `ls`) waiting for timeouts to expire. There are a few scenarios that we've run into.
Scenario 1¶
Some processes just don't shut down cleanly. These are relatively easy to identify on a case-by-case basis. For instance, it looks like this is true for MRAppMaster, and there is an open bug report for this: https://issues.apache.org/jira/browse/MAPREDUCE-6136. Generally the file systems will be closed automatically unless explicit control is requested. This hasn't been an issue.
Scenario 2¶
- Map tasks finish, broadcast success
- Simultaneously
- SIGTERM->map tasks, 250ms delay, SIGKILL->map tasks
- Application master examines file system to verify success
In this scenario SIGTERM invokes file system clean-up (i.e. libcephfs unmount) on all the clients, but 250 ms isn't enough time for libcephfs to unmount. The result is that the application master hangs for about 30 seconds. The solution is to increase the delay before SIGKILL is sent.
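If I have the property name right, the delay is the NodeManager's sleep-before-SIGKILL setting (default 250 ms), so one way to widen the window would be something like this in yarn-site.xml:

```xml
<property>
  <name>yarn.nodemanager.sleep-delay-before-sigkill.ms</name>
  <!-- give libcephfs clients several seconds to unmount before SIGKILL -->
  <value>10000</value>
</property>
```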
Curiously, it doesn't appear that libcephfs clients need to fully unmount, they only need to make it far enough through the process. Even when the processes are given a 30 second delay before SIGKILL (this is in YARN), many of the ceph client logs are truncated within ceph_unmount, so it appears they are exiting/killed through another path.
Generalization¶
This is really a generalization of the previous scenario, but it will occur for any reason the task can't reach ceph_unmount.
- YARN wants to kill a task that has mounted ceph, sends SIGTERM
- The task being killed isn't able to invoke shutdown within the delay before SIGKILL?
- Client stuck in fsync for 40 seconds due to laggy osds
- CephFS-Java prevents ceph_unmount from racing with other operations
- Perhaps this should cause other threads to abort their operations
- They could be stuck due to other clients' unclean shutdown
- Some sort of general cascading problem
- But could generally be stuck for any reason
Take Aways¶
- Always prefer clients to shut down cleanly
- Through normal process exit paths
- Asynchronously from signal (SIGTERM + delay + SIGKILL)
- Shorter (bounded?) unmount cost
- Process stuck in libcephfs
- Unmount can force clean-up of threads?
- Forced exit without reaching unmount
- Maybe not a common case, no big deal
- How to avoid cascading problems
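From the supervisor's side, the "SIGTERM + delay + SIGKILL" sequence above is the usual terminate/wait/kill pattern. A sketch (plain subprocesses standing in for YARN containers):

```python
import signal
import subprocess
import sys

def stop_task(proc, grace_seconds):
    """SIGTERM, bounded grace period, then SIGKILL as the backstop.

    A longer grace period gives libcephfs time to unmount; SIGKILL still
    covers tasks stuck inside the client.
    """
    proc.terminate()  # SIGTERM: ask for a clean shutdown (unmount)
    try:
        proc.wait(timeout=grace_seconds)  # exited within the delay
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL: forced exit, no unmount happens
        proc.wait()
    return proc.returncode
```

For example, `stop_task(proc, 10)` gives the task ten seconds to get through its unmount before it is killed outright.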
HCFS¶
- Doesn't appear to define any semantics for closing the file system, which suggests that all the important things are handled by the semantics of file.close/file.flush.
- In the process of clarifying these points
Next Steps¶
- Finishing with HCFS bugs
- 30+ OSD cluster for performance tests
- Profiling
- hdfs as baseline vs libcephfs benchmark tool...
- fio backend?
Work items¶
Coding tasks¶
- Task 1
- Task 2
- Task 3
Build / release tasks¶
- Task 1
- Task 2
- Task 3
Documentation tasks¶
- Task 1
- Task 2
- Task 3
Deprecation tasks¶
- Task 1
- Task 2
- Task 3
Updated by Jessica Mack almost 9 years ago · 2 revisions