Ceph
Roadmap
v12.0.0 Luminous
100% complete, 43 issues (43 closed, 0 open)
Time tracking: estimated time 0.00 hours, spent time 0.00 hours
Issues by tracker:
Bug: 31/31 closed
Fix: 1/1 closed
Feature: 11/11 closed
Related issues
CephFS - Bug #4829: client: handling part of MClientForward incorrectly?
CephFS - Bug #16768: multimds: check_rstat assertion failure
CephFS - Bug #16807: Crash in handle_slave_rename_prep
CephFS - Bug #16886: multimds: kclient hang (?) in tests
CephFS - Bug #16914: multimds: pathologically slow deletions in some tests
CephFS - Bug #16924: Crash replaying EExport
CephFS - Bug #16925: multimds: cfuse (?) hang on fsx.sh workunit
CephFS - Bug #17113: MDS EImport crashing with mds/journal.cc: 2929: FAILED assert(mds->sessionmap.get_version() == cmapv)
CephFS - Bug #17606: multimds: assertion failure during directory migration
CephFS - Bug #17670: multimds: mds entering up:replay and processing down mds aborts
CephFS - Bug #17731: MDS stuck in stopping with other rank's strays
CephFS - Bug #17858: Cannot create deep directories when caps contain "path=/somepath"
CephFS - Bug #17954: standby-replay daemons can sometimes miss events
Bug #18113: Don't lose deep-scrub information
Bug #18196: fail to build ceph-11.0.2 if yasm installed
CephFS - Bug #18311: Decode errors on backtrace will crash MDS
bluestore - Bug #18375: bluestore: bluefs_preextend_wal_files=true is not crash consistent
CephFS - Bug #18487: Crash in MDCache::split_dir -- FAILED assert(dir->is_auth())
CephFS - Bug #18579: Fuse client has "opening" session to nonexistent MDS rank after MDS cluster shrink
CephFS - Bug #18600: multimds suite tries to run quota tests against kclient, fails
Linux kernel client - Bug #18690: kclient: FAILED assert(0 == "old msgs despite reconnect_seq feature")
CephFS - Bug #18743: Scrub considers dirty backtraces to be damaged and puts them in the damage table even though it repairs them
CephFS - Bug #18759: multimds suite tries to run norstats tests against kclient
CephFS - Bug #18850: Leak in MDCache::handle_dentry_unlink
rgw - Bug #18940: ERROR RESTFUL_IO with S3 GET/PUT operations
CephFS - Bug #19245: Crash in PurgeQueue::_execute_item when deletions happen extremely quickly
Bug #19253: "failsafe engaged, dropping updates" clog message causing fs suite failures
rbd - Bug #19260: FreeBSD/Clang generates a linking issue about a missing dtor cls::rbd::MirrorImageStatus::~MirrorImageStatus()
mgr - Bug #19407: MgrMonitor doesn't update map when active mgr times out
mgr - Bug #19412: Crash in PyModules::shutdown when non-serve()'ing module is loaded
Bug #19454: src/arch/ppc.c FreeBSD build break
CephFS - Fix #19288: Remove legacy "mds tell"
CephFS - Feature #11950: Strays enqueued for purge cause MDCache to exceed size limit
rbd - Feature #13025: Add scatter/gather support to librbd C/C++ APIs
CephFS - Feature #16523: Assert directory fragmentation is occurring during stress tests
Linux kernel client - Feature #17204: Implement new-style ENOSPC handling in kclient
CephFS - Feature #17834: MDS Balancer overrides
CephFS - Feature #17853: More deterministic timing for directory fragmentation
CephFS - Feature #17855: Don't evict a slow client if it's the only client
CephFS - Feature #17980: MDS should reject connections from OSD-blacklisted clients
CephFS - Feature #19075: Extend 'p' mds auth cap to cover quotas and all layout fields
CephFS - Feature #19230: Limit MDS deactivation to one at a time
CephFS - Feature #19551: CephFS MDS health messages should be logged in the cluster log