Ceph v20.0.0 (open, T release)
Progress: 14%
154 issues (18 closed, 136 open)

Time tracking
Estimated time: 0:00 hours
Spent time: 0:00 hours
Issues by tracker (closed/total)
Bug: 11/124
Fix: 0/6
Feature: 4/19
Cleanup: 0/2
Tasks: 2/2
Documentation: 1/1
Related issues
RADOS - Bug #23565: Inactive PGs don't seem to cause HEALTH_ERR
CephFS - Bug #40159: mds: openfiletable prefetching large amounts of inodes lead to mds start failure
CephFS - Bug #40197: The command 'node ls' sometimes output some incorrect information about mds.
rgw - Bug #46702: rgw: lc: lifecycle rule with more than one prefix in RGWPutLC::execute() should throw error
RADOS - Bug #47813: osd op age is 4294967296
CephFS - Bug #48562: qa: scrub - object missing on disk; some files may be lost
Dashboard - Bug #49124: mgr/dashboard: NFS settings aren't updated after modifying them when working with Rook orchestrator
rgw - Bug #49615: can't get mdlog when rgw_run_sync_thread = false
rgw - Bug #50261: rgw: system users can't issue role policy related ops without explicit user policy
CephFS - Bug #50821: qa: untar_snap_rm failure during mds thrashing
CephFS - Bug #51197: qa: [WRN] Scrub error on inode 0x10000001520 (/client.0/tmp/t/linux-5.4/Documentation/driver-api) see mds.f log and `damage ls` output for details
CephFS - Bug #51282: pybind/mgr/mgr_util: .mgr pool may be created too early causing spurious PG_DEGRADED warnings
bluestore - Bug #52513: BlueStore.cc: 12391: ceph_abort_msg("unexpected error") on operation 15
mgr - Bug #52846: octopus: mgr fails and freezes while doing pg dump
CephFS - Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
CephFS - Bug #62123: mds: detect out-of-order locking
CephFS - Bug #62188: AttributeError: 'RemoteProcess' object has no attribute 'read'
rgw - Bug #63428: RGW: multipart get wrong storage class metadata
Bug #63494: all: daemonizing may release CephContext:: _fork_watchers_lock when its already unlocked
CephFS - Bug #63866: mount command returning misleading error message
CephFS - Bug #63931: qa: test_mirroring_init_failure_with_recovery failure
CephFS - Bug #64008: mds: CInode::item_caps used in two different lists
CephFS - Bug #64064: mds config `mds_log_max_segments` throws error for value -1
CephFS - Bug #64198: mds: Fcb caps issued to clients when filelock is xlocked
Linux kernel client - Bug #64471: kernel: upgrades from quincy/v18.2.[01]/reef to main|squid fail with kernel oops
CephFS - Bug #64477: pacific: rados/cephadm/mgr-nfs-upgrade: [WRN] client session with duplicated session uuid 'ganesha-nfs.foo.XXX' denied
CephFS - Bug #64486: qa: enhance labeled perf counters test for cephfs-mirror
CephFS - Bug #64502: pacific/quincy/v18.2.0: client: ceph-fuse fails to unmount after upgrade to main
bluestore - Bug #64511: kv/RocksDBStore: rocksdb_cf_compact_on_deletion has no effect on the default column family
bluestore - Bug #64533: BlueFS: l_bluefs_log_compactions is counted twice in sync log compaction
CephFS - Bug #64537: mds: lower the log level when rejecting a session reclaim request
CephFS - Bug #64542: Difference in error code returned while removing system xattrs using removexattr()
CephFS - Bug #64563: mds: enhance laggy clients detections due to laggy OSDs
CephFS - Bug #64572: workunits/fsx.sh failure
CephFS - Bug #64602: tools/cephfs: cephfs-journal-tool does not recover dentries with alternate_name
CephFS - Bug #64616: selinux denials with centos9.stream
CephFS - Bug #64641: qa: Add multifs root_squash testcase
CephFS - Bug #64685: mds: disable defer_client_eviction_on_laggy_osds by default
CephFS - Bug #64700: Test failure: test_mount_all_caps_absent (tasks.cephfs.test_multifs_auth.TestClientsWithoutAuth)
CephFS - Bug #64707: suites/fsstress.sh hangs on one client - test times out
CephFS - Bug #64711: Test failure: test_cephfs_mirror_cancel_mirroring_and_readd (tasks.cephfs.test_mirroring.TestMirroring)
CephFS - Bug #64717: MDS stuck in replay/resolve use
CephFS - Bug #64729: mon.a (mon.0) 1281 : cluster 3 [WRN] MDS_SLOW_METADATA_IO: 3 MDSs report slow metadata IOs" in cluster log
CephFS - Bug #64730: fs/misc/multiple_rsync.sh workunit times out
CephFS - Bug #64746: qa/cephfs: add MON_DOWN and `deprecated feature inline_data' to health ignorelist.
CephFS - Bug #64747: postgresql pkg install failure
CephFS - Bug #64751: cephfs-mirror coredumped when acquiring pthread mutex
CephFS - Bug #64752: cephfs-mirror: valgrind report leaks
CephFS - Bug #64761: cephfs-mirror: add throttling to mirror daemon ops
mgr - Bug #64799: mgr: update cluster state for new maps from the mons before notifying modules
rgw - Bug #64875: rgw: rgw-restore-bucket-index -- sort uses specified temp dir
CephFS - Bug #64912: make check: QuiesceDbTest.MultiRankRecovery Failed
CephFS - Bug #64947: qa: fix continued use of log-whitelist
RADOS - Bug #64968: mon: MON_DOWN warnings when mons are first booting
RADOS - Bug #64972: qa: "ceph tell 4.3a deep-scrub" command not found
CephFS - Bug #64985: qa: mgr logs do not include client debugging
CephFS - Bug #64986: qa: "cluster [WRN] Health detail: HEALTH_WARN 1 filesystem is online with fewer MDS than max_mds" in cluster log "
CephFS - Bug #64987: qa: "cluster [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log "
CephFS - Bug #64988: qa: fs:workloads mgr client evicted indicated by "cluster [WRN] evicting unresponsive client smithi042:x (15288), after 303.306 seconds"
CephFS - Bug #65001: mds: ceph-mds might silently ignore client_session(request_close, ...) message
CephFS - Bug #65018: PG_DEGRADED warnings during cluster creation via cephadm: "Health check failed: Degraded data redundancy: 2/192 objects degraded (1.042%), 1 pg degraded (PG_DEGRADED)"
CephFS - Bug #65019: qa/suites/fs/top: [WRN] Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)" in cluster log
CephFS - Bug #65020: qa: Scrub error on inode 0x1000000356c (/volumes/qa/sv_0/2f8f6bb4-3ea9-47a0-bd79-a0f50dc149d5/client.0/tmp/clients/client7/~dmtmp/PARADOX) see mds.b log and `damage ls` output for details" in cluster log
CephFS - Bug #65021: qa/suites/fs/nfs: cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)" in cluster log
CephFS - Bug #65022: qa: test_max_items_per_obj open procs not fully cleaned up
CephFS - Bug #65039: mds: standby-replay segmentation fault in md_log_replay
CephFS - Bug #65043: Unable to set timestamp to value > UINT32_MAX
CephFS - Bug #65073: pybind/mgr/stats/fs: log exceptions to cluster log
CephFS - Bug #65094: mds STATE_STARTING won't add root ino for root rank and not correctly handle when fails at STATE_STARTING
CephFS - Bug #65116: squid: kclient: "ld: final link failed: Resource temporarily unavailable"
CephFS - Bug #65136: QA failure: test_fscrypt_dummy_encryption_with_quick_group
CephFS - Bug #65157: cephfs-mirror: set layout.pool_name xattr of destination subvol correctly
CephFS - Bug #65171: Provide metrics support for the Replication Start/End Notifications
CephFS - Bug #65182: mds: quiesce_inode op waiting on remote auth pins is not killed correctly during quiesce timeout/expiration
rgw - Bug #65216: rgw: only accept valid ipv4 from host header
CephFS - Bug #65224: mds: fs subvolume rm fails
CephFS - Bug #65225: ceph_assert on dn->get_projected_linkage()->is_remote
CephFS - Bug #65246: qa/cephfs: test_multifs_single_path_rootsquash (tasks.cephfs.test_admin.TestFsAuthorize)
CephFS - Bug #65260: mds: Reduce log level for messages when mds is stopping
CephFS - Bug #65262: qa/cephfs: kernel_untar_build.sh failed due to build error
Orchestrator - Bug #65263: upgrade stalls after upgrading one ceph-mgr daemon
CephFS - Bug #65265: qa: health warning "no active mgr (MGR_DOWN)" occurs before and after test_nfs runs
CephFS - Bug #65271: qa: cluster [WRN] Health detail: HEALTH_WARN 1 pool(s) do not have an application enabled" in cluster log
CephFS - Bug #65276: MDS daemon is using 50% CPU when idle
rgw - Bug #65277: rgw: update options yaml file so LDAP uri isn't an invalid example
CephFS - Bug #65301: fs:upgrade still uses centos_8* distro
CephFS - Bug #65308: qa: fs was offline but also unexpectedly degraded
CephFS - Bug #65309: qa: dbench.sh failed with "ERROR: handle 10318 was not found"
CephFS - Bug #65314: valgrind error: Leak_PossiblyLost posix_memalign UnknownInlinedFun ceph::buffer::v15_2_0::list::refill_append_space(unsigned int)
rgw - Bug #65337: rgw: Segmentation fault in rgw::notify::Manager during realm reload
CephFS - Bug #65342: mds: quiesce_counter decay rate initialized from wrong config
CephFS - Bug #65350: mgr/snap_schedule: restore yearly spec from uppercase Y to lowercase y
CephFS - Bug #65372: qa: The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}
CephFS - Bug #65388: The MDS_SLOW_REQUEST warning is flapping even though the slow requests don't go away
CephFS - Bug #65389: The ceph_readdir function in libcephfs returns incorrect d_reclen value
CephFS - Bug #65472: mds: avoid recalling Fb when quiescing file
cephsqlite - Bug #65494: ceph-mgr critical error: "Module 'devicehealth' has failed: table Device already exists"
CephFS - Bug #65496: mds: ceph.dir.subvolume and ceph.quiesce.blocked is not properly replicated
CephFS - Bug #65508: qa: lockup not long enough to for test_quiesce_authpin_wait
CephFS - Bug #65518: mds: regular file inode flags are not replicated by the policylock
CephFS - Bug #65545: Quiesce may fail randomly with EBADF due to the same root submitted to the MDCache multiple times under the same quiesce request
Orchestrator - Bug #65546: quincy|reef: qa/suites/upgrade/pacific-x: failure to pull image causes dead jobs
CephFS - Bug #65564: Test failure: test_snap_schedule_subvol_and_group_arguments_08 (tasks.cephfs.test_snap_schedules.TestSnapSchedulesSubvolAndGroupArguments)
CephFS - Bug #65572: Command failed (workunit test fs/snaps/untar_snap_rm.sh) on smithi155 with status 1
CephFS - Bug #65580: mds/client: add dummy client feature to test client eviction
CephFS - Bug #65595: mds: missing policylock acquisition for quiesce
CephFS - Bug #65603: mds: quiesce timeout due to a freezing directory
CephFS - Bug #65606: workload fails due to slow ops, assert in logs mds/Locker.cc: 551 FAILED ceph_assert(!lock->is_waiter_for(SimpleLock::WAIT_WR) || lock->is_waiter_for(SimpleLock::WAIT_XLOCK))
Bug #65612: qa: logrotate fails when state file is already locked
CephFS - Bug #65614: client: resends request to same MDS it just received a forward from if it does not have an open session with the target
CephFS - Bug #65616: pybind/mgr/snap_schedule: 1m scheduled snaps not reliably executed (RuntimeError: The following counters failed to be set on mds daemons: {'mds_server.req_rmsnap_latency.avgcount'})
CephFS - Bug #65618: qa: fsstress: cannot execute binary file: Exec format error
CephFS - Bug #65647: Evicted kernel client may get stuck after reconnect
Orchestrator - Bug #65657: doc: lack of clarity for explicit placement analogue in yaml spec
CephFS - Bug #65658: mds: MetricAggregator::ms_can_fast_dispatch2 acquires locks
CephFS - Bug #65660: mds: drop client metrics during recovery
CephFS - Bug #65669: QuiesceDB responds with a misleading error to a quiesce-await of a terminated set.
bluestore - Bug #65678: Cannot use BtreeAllocator for blustore or bluefs
CephFS - Bug #65700: qa: Health detail: HEALTH_WARN Degraded data redundancy: 40/348 objects degraded (11.494%), 9 pgs degraded" in cluster log
CephFS - Bug #65701: qa: quiesce cache/ops dump not world readable
CephFS - Bug #65704: mds+valgrind: beacon thread blocked for 60+ seconds
CephFS - Bug #65705: qa: snaptest-multiple-capsnaps.sh failure
CephFS - Bug #65716: mds: quiesce_path blocks on acquiring auth_pins for dentries to root inode to be quiesced
CephFS - Bug #65733: mds: upgrade to MDS enforcing CEPHFS_FEATURE_MDS_AUTH_CAPS_CHECK with client having root_squash in any MDS cap causes eviction for all file systems the client has caps for
CephFS - Fix #63432: qa: run TestSnapshots.test_kill_mdstable for all mount types
nvme-of - Fix #64821: cephadm - make changes to ceph-nvmeof.conf template
CephFS - Fix #64984: qa: probabilistically ignore PG_AVAILABILITY/PG_DEGRADED
CephFS - Fix #65408: qa: under valgrind, restart valgrind/mds when MDS exits with 0
CephFS - Fix #65579: mds: use _exit for QA killpoints rather than SIGABRT
CephFS - Fix #65617: qa: increase debugging for snap_schedule
RADOS - Feature #54525: osd/mon: log memory usage during tick
CephFS - Feature #57481: mds: enhance scrub to fragment/merge dirfrags
CephFS - Feature #61334: cephfs-mirror: use snapdiff api for efficient tree traversal
CephFS - Feature #63374: mds: add asok command to kill/respond to request
CephFS - Feature #63663: mds,client: add crash-consistent snapshot support
CephFS - Feature #63664: mds: add quiesce protocol for halting I/O on subvolumes
CephFS - Feature #63665: mds: QuiesceDb to manage subvolume quiesce state
CephFS - Feature #63666: mds: QuiesceAgent to execute quiesce operations on an MDS rank
CephFS - Feature #63668: pybind/mgr/volumes: add quiesce protocol API
CephFS - Feature #64506: qa: update fs:upgrade to test from reef/squid to main
CephFS - Feature #64507: pybind/mgr/snap_schedule: support crash-consistent snapshots
CephFS - Feature #64531: mds,mgr: identify metadata heavy workloads
nvme-of - Feature #64777: mon: add NVMe-oF gateway monitor and HA
nvme-of - Feature #65259: cephadm - make changes to ceph-nvmeof.conf template
Orchestrator - Feature #65338: Add --continue-on-error for `cephadm bootstrap`
CephFS - Feature #65503: mgr/stats, cephfs-top: provide per volume/sub-volume based performance metrics to monitor / troubleshoot performance issues
nvme-of - Feature #65566: Change some default values for OMAP lock parameters in nvmeof conf file
CephFS - Feature #65637: mds: continue sending heartbeats during recovery when MDS journal is large
Feature #65747: common/admin_socket: support saving json output to a file local to the daemon
CephFS - Cleanup #65689: mds: move specialized cleanup for fragment_dir to MDCache::request_cleanup
CephFS - Cleanup #65690: mds: move specialized cleanup for export_dir to MDCache::request_cleanup
CephFS - Tasks #63707: mds: AdminSocket command to control the QuiesceDbManager
CephFS - Tasks #63708: mds: MDS message transport for inter-rank QuiesceDbManager communications
rgw - Documentation #49649: add information on the system objects holding notifications