Ceph Roadmap
v13.0.0 Mimic
97% complete: 157 issues (152 closed, 5 open)
Time tracking
Estimated time: 4.00 hours
Spent time: 0.00 hours
Issues by tracker (closed/total):
Bug: 122/124
Fix: 3/3
Feature: 17/19
Cleanup: 0/1
Documentation: 8/8
Subtask: 1/1
Backport: 1/1
Related issues
CephFS - Bug #925: mds: update replica snaprealm on rename
CephFS - Bug #1938: mds: snaptest-2 doesn't pass with 3 MDS system
CephFS - Bug #3254: mds: Replica inode's parent snaprealms are not open
CephFS - Bug #4212: mds: open_snap_parents isn't called all the times it needs to be
CephFS - Bug #10915: client: hangs on umount if it had an MDS session evicted
CephFS - Bug #16842: mds: replacement MDS crashes on InoTable release
CephFS - Bug #18730: mds: backtrace issues getxattr for every file with cap on rejoin
CephFS - Bug #20549: cephfs-journal-tool: segfault during journal reset
CephFS - Bug #20593: mds: the number of inodes shown by "mds perf dump" is not correct after trimming
CephFS - Bug #20596: MDSMonitor: obsolete `mds dump` and other deprecated mds commands
CephFS - Bug #20988: client: dual client segfault with racing ceph_shutdown
RADOS - Bug #21309: mon/OSDMonitor: deleting pool while pgs are being created leads to assert(p != pools.end) in update_creating_pgs()
CephFS - Bug #21412: cephfs: too many cephfs snapshots chokes the system
rgw - Bug #21500: list bucket with versioning enabled gets wrong result when user sets marker
CephFS - Bug #21745: mds: MDBalancer using total (all time) request count in load statistics
CephFS - Bug #21765: auth|doc: fs authorize error for existing credentials confusing/unclear
rgw - Bug #21896: Bucket policy evaluation is not carried out for DeleteBucketWebsite
rgw - Bug #21962: Policy parser may or may not dereference uninitialized boost::optional sometimes
rgw - Bug #22002: rgw: add cors header rule check in cors option request
CephFS - Bug #22038: ceph-volume-client: rados.Error: command not known
RADOS - Bug #22045: OSDMonitor: osd down by monitor is delayed
rgw - Bug #22296: Librgw shuts down incorrectly
RADOS - Bug #22330: ec: src/common/interval_map.h: 161: FAILED assert(len > 0)
RADOS - Bug #22413: can't delete object from pool when Ceph out of space
rgw - Bug #22416: s3cmd move object error
rgw - Bug #22418: RGW doesn't check time skew in auth v4 http header request
CephFS - Bug #22428: mds: don't report slow request for blocked filelock request
RADOS - Bug #22525: auth: ceph auth add does not sanity-check caps
CephFS - Bug #22526: AttributeError: 'LocalFilesystem' object has no attribute 'ec_profile'
RADOS - Bug #22530: pool create cmd's expected_num_objects is not correctly interpreted
CephFS - Bug #22536: client: _rmdir() uses a deleted memory structure (Dentry), leading to a core
rgw - Bug #22541: put bucket policy panics RGW process
Bug #22608: Missing TrackedOp events "header_read", "throttled", "all_read" and "all_read"
CephFS - Bug #22683: client: coredump when nfs-ganesha uses ceph_ll_get_inode()
mgr - Bug #22717: mgr: prometheus: impossible to join labels from metadata metrics
CephFS - Bug #22802: libcephfs: allow setting default perms
rgw - Bug #22820: rgw_file: avoid fragging thread_local log buffer
rgw - Bug #22827: rgw_file: FLAG_EXACT_MATCH actually should be enforced
CephFS - Bug #22829: ceph-fuse: uses up all snap tags
CephFS - Bug #22839: MDSAuthCaps (unlike others) still require "allow" at start
CephFS - Bug #22933: client: add option descriptions and review levels (e.g. LEVEL_DEV)
CephFS - Bug #22948: client: wire up ceph_ll_readv and ceph_ll_writev
CephFS - Bug #22990: qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set (OSDMAP_FLAGS)" in cluster log
CephFS - Bug #22993: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
CephFS - Bug #23028: client: allow client to use caps that are revoked but not yet returned
CephFS - Bug #23032: mds: underwater dentry check in CDir::_omap_fetched is racy
CephFS - Bug #23041: ceph-fuse: clarify -i is not a valid option
CephFS - Bug #23059: mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCache::request_get(metareqid_t)
CephFS - Bug #23084: doc: update ceph-fuse with FUSE options
CephFS - Bug #23094: mds: add uptime to status asok command
CephFS - Bug #23098: ceph-mds: prevent MDS names starting with a digit
CephFS - Bug #23116: cephfs-journal-tool: add time to event list
CephFS - Bug #23172: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous upgrades
CephFS - Bug #23211: client: prevent fallback to remount when dentry_invalidate_cb is true but root->dir is NULL
CephFS - Bug #23214: doc: Fix -d option in ceph-fuse doc
CephFS - Bug #23247: doc: distinguish versions in ceph-fuse
CephFS - Bug #23248: ceph-fuse: trim ceph-fuse -V output
CephFS - Bug #23288: ceph-fuse: Segmentation fault --localize-reads
CephFS - Bug #23291: client: add way to sync setattr operations to MDS
CephFS - Bug #23293: client: Client::_read returns buffer length on success instead of bytes read
rgw - Bug #23299: rgw_file: post deadlock fix, direct deleted RGWFileHandle objects can remain in handle table
Bug #23358: vstart.sh gives obscure error when dashboard dependencies are missing
CephFS - Bug #23393: ceph-ansible: update Ganesha config for nfs_file_gw to use optimal settings
CephFS - Bug #23394: nfs-ganesha: check cache configuration when exporting FSAL_CEPH
CephFS - Bug #23436: Client::_read() always returns 0 when reading from inline data
CephFS - Bug #23446: ceph-fuse: getgroups failure causes exception
CephFS - Bug #23448: nfs-ganesha: fails to parse rados URLs with '.' in object name
CephFS - Bug #23452: mds: assertion in MDSRank::validate_sessions
CephFS - Bug #23491: fs: quota backward compatibility
CephFS - Bug #23509: ceph-fuse: broken directory permission checking
CephFS - Bug #23518: mds: crash during failover
CephFS - Bug #23530: mds: kicked out by monitor during rejoin
CephFS - Bug #23532: doc: create PendingReleaseNotes and add dev doc for openfile table purpose and format
CephFS - Bug #23538: mds: fix occasional dir rstat inconsistency between multi-MDSes
CephFS - Bug #23541: client: fix request send_to_auth was never really used
CephFS - Bug #23560: mds: mds gets significantly behind on trimming while creating millions of files (cont.)
CephFS - Bug #23567: MDSMonitor: successive changes to max_mds can allow hole in ranks
CephFS - Bug #23569: mds: counter decay incorrect
CephFS - Bug #23571: mds: make sure that MDBalancer uses heartbeat info from the same epoch
CephFS - Bug #23582: MDSMonitor: mds health warnings printed in bad format
Messengers - Bug #23600: assert(0 == "BUG!") attached in EventCenter::create_file_event
CephFS - Bug #23602: mds: handle client requests when mds is stopping
CephFS - Bug #23615: qa: test for "snapid allocation/deletion mismatch with monitor"
Dashboard - Bug #23619: v13.0.2 tries to build dashboard on arm64
CephFS - Bug #23624: cephfs-foo-tool crashes immediately when it starts
CephFS - Bug #23625: mds: sessions opened by journal replay do not get dirtied properly
CephFS - Bug #23643: qa: osd_mon_report_interval typo in test_full.py
CephFS - Bug #23652: client: fix gid_count check in UserPerm->deep_copy_from()
CephFS - Bug #23658: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
RADOS - Bug #23662: osd: regression causes SLOW_OPS warnings in multimds suite
CephFS - Bug #23665: ceph-fuse: return proper exit code
CephFS - Bug #23697: mds: load balancer fixes
CephFS - Bug #23714: slow ceph_ll_sync_inode calls after setattr
RADOS - Bug #23753: "Error ENXIO: problem getting command descriptions from osd.4" in upgrade:kraken-x-luminous-distro-basic-smithi
CephFS - Bug #23755: qa: FAIL: test_purge_queue_op_rate (tasks.cephfs.test_strays.TestStrays)
CephFS - Bug #23762: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
CephFS - Bug #23764: MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
CephFS - Bug #23766: mds: crash during shutdown_pass
CephFS - Bug #23768: MDSMonitor: uncommitted state exposed to clients/mdss
RADOS - Bug #23769: osd/EC: slow/hung ops in multimds suite test
Bug #23774: common: build warning in Preforker
Bug #23778: rados commands suddenly take a very long time to finish
devops - Bug #23781: ceph-detect-init still uses python's platform lib
RADOS - Bug #23785: "test_prometheus (tasks.mgr.test_module_selftest.TestModuleSelftest) ... ERROR" in rados
RADOS - Bug #23787: luminous: "osd-scrub-repair.sh'" failures in rados
Bug #23794: strtoll is broken for hex number string
CephFS - Bug #23799: MDSMonitor: creates invalid transition from up:creating to up:shutdown
CephFS - Bug #23800: MDSMonitor: setting fs down twice will wipe old_max_mds
CephFS - Bug #23812: mds: may send LOCK_SYNC_MIX message to starting MDS
CephFS - Bug #23813: client: "remove_session_caps still has dirty|flushing caps" when thrashing max_mds
CephFS - Bug #23814: mds: newly active mds may abort in handle_file_lock
CephFS - Bug #23815: client: avoid second lock on client_lock
RADOS - Bug #23827: osd sends op_reply out of order
CephFS - Bug #23829: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
CephFS - Bug #23848: mds: stuck shutdown procedure
CephFS - Bug #23873: cephfs does not count st_nlink for directories correctly?
CephFS - Bug #23880: mds: scrub code stuck at trimming log segments
CephFS - Bug #23919: mds: stuck during up:stopping
CephFS - Bug #23923: mds: stopping rank 0 cannot shutdown until log is trimmed
CephFS - Bug #23927: qa: test_full failure in test_barrier
mgr - Bug #23928: qa: spurious cluster "[WRN] Manager daemon y is unresponsive. No standby daemons available." in cluster log
RADOS - Bug #23949: osd: "failed to encode map e19 with expected crc" in cluster log
CephFS - Bug #23960: mds: scrub on fresh file system fails
CephFS - Bug #23975: qa: TestVolumeClient.test_lifecycle needs updating for new eviction behavior
CephFS - Fix #4708: MDS: journaler pre-zeroing is dangerous
CephFS - Fix #5268: mds: fix/clean up file size/mtime recovery code
mgr - Fix #22718: mgr: prometheus: missed osd commit/apply latency metrics
RADOS - Feature #1894: mon: implement internal heartbeating
RADOS - Feature #6325: mon: mon_status should make it clear when the mon has connection issues
Feature #7567: mon: 'service network', 'cluster network', 'admin network'
CephFS - Feature #13688: mds: performance: journal inodes with capabilities to limit rejoin time on failover
CephFS - Feature #16775: MDS command for listing open files
Feature #18053: Minimize log-based recovery in the acting set
CephFS - Feature #20606: mds: improve usability of cluster rank manipulation and setting cluster up/down
CephFS - Feature #20607: MDSMonitor: change "mds deactivate" to clearer "mds rejoin"
CephFS - Feature #20608: MDSMonitor: rename `ceph fs set <fs_name> cluster_down` to `ceph fs set <fs_name> joinable`
CephFS - Feature #20609: MDSMonitor: add new command `ceph fs set <fs_name> down` to bring the cluster down
CephFS - Feature #20610: MDSMonitor: add new command to shrink the cluster in an automated way
RADOS - Feature #21084: auth: add osd auth caps based on pool metadata
CephFS - Feature #21156: mds: speed up recovery with many open inodes
CephFS - Feature #21995: ceph-fuse: support nfs export
CephFS - Feature #22097: mds: change mds perf counters so they can report filesystem operation counts and latency
CephFS - Feature #22371: mds: implement QuotaRealm to obviate parent quota lookup
CephFS - Feature #22417: support purge queue with cephfs-journal-tool
Feature #23513: ceph_authtool: add mode option
CephFS - Feature #23623: mds: mark allow_snaps true by default
RADOS - Cleanup #10506: mon: get rid of QuorumServices
rgw - Documentation #23081: docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searchfilter'
CephFS - Documentation #23271: doc: create install/setup guide for NFS-Ganesha w/ CephFS
CephFS - Documentation #23334: doc: note client eviction results in a client instance blacklisted, not an address
CephFS - Documentation #23427: doc: create doc outlining steps to bring down cluster
CephFS - Documentation #23568: doc: outline the steps for upgrading an MDS cluster
CephFS - Documentation #23583: doc: update snapshot doc to account for recent changes
RADOS - Documentation #23613: doc: add description of new fs-client auth profile
RADOS - Documentation #23777: doc: description of OSD_OUT_OF_ORDER_FULL problem
CephFS - Subtask #20864: kill allow_multimds
CephFS - Backport #23705: jewel: ceph-fuse: broken directory permission checking