Ceph
v18.0.0 Reef
35% complete
324 issues (108 closed, 216 open)
Time tracking
Estimated time: 0.00 hours
Spent time: 0.00 hours
Issues by tracker (closed/total):
Bug: 87/250
Fix: 3/6
Feature: 12/53
Support: 0/1
Cleanup: 2/3
Tasks: 0/1
Documentation: 2/7
Backport: 2/3
Related issues
CephFS - Bug #23724: qa: broad snapshot functionality testing across clients
CephFS - Bug #24403: mon failed to return metadata for mds
CephFS - Bug #24894: client: allow overwrites to files with size greater than the max_file_size config
CephFS - Bug #38452: mds: assert crash loop while unlinking file
RADOS - Bug #44092: mon: config commands do not accept whitespace style config name
CephFS - Bug #46438: mds: add vxattr for querying inherited layout
CephFS - Bug #48673: High memory usage on standby replay MDS
CephFS - Bug #48773: qa: scrub does not complete
CephFS - Bug #48812: qa: test_scrub_pause_and_resume_with_abort failure
rgw - Bug #50974: rgw: storage class: GLACIER lifecycle don't worked when STANDARD pool and GLACIER pool are equal
CephFS - Bug #51267: CommandFailedError: Command failed (workunit test fs/snaps/snaptest-multiple-capsnaps.sh) on smithi096 with status 1:...
CephFS - Bug #51278: mds: "FAILED ceph_assert(!segments.empty())"
CephFS - Bug #51824: pacific scrub ~mds_dir causes stray related ceph_assert, abort and OOM
CephFS - Bug #51964: qa: test_cephfs_mirror_restart_sync_on_blocklist failure
CephFS - Bug #52260: 1 MDSs are read only | pacific 16.2.5
RADOS - Bug #52513: BlueStore.cc: 12391: ceph_abort_msg("unexpected error") on operation 15
CephFS - Bug #52982: client: Inode::hold_caps_until should be a time from a monotonic clock
CephFS - Bug #53504: client: infinite loop "got ESTALE" after mds recovery
CephFS - Bug #53573: qa: test new clients against older Ceph clusters
CephFS - Bug #53597: mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
CephFS - Bug #53611: mds,client: can not identify pool id if pool name is positive integer when set layout.pool
RADOS - Bug #53729: ceph-osd takes all memory before oom on boot
CephFS - Bug #53811: standby-replay mds is removed from MDSMap unexpectedly
Dashboard - Bug #53950: mgr/dashboard: Health check failed: Module 'feedback' has failed: Not found or unloadable (MGR_MODULE_ERROR)" in cluster log
mgr - Bug #53951: cluster [ERR] Health check failed: Module 'feedback' has failed: Not found or unloadable (MGR_MODULE_ERROR)" in cluster log
CephFS - Bug #53979: mds: defer prefetching the dirfrags to speed up MDS rejoin
mgr - Bug #53986: mgr/prometheus: The size of the export is not tracked as a metric returned to Prometheus
CephFS - Bug #53996: qa: update fs:upgrade tasks to upgrade from pacific instead of octopus, or quincy instead of pacific
CephFS - Bug #54017: Problem with ceph fs snapshot mirror and read-only folders
Orchestrator - Bug #54026: the sort sequence used by 'orch ps' is not in a natural sequence
Orchestrator - Bug #54028: alertmanager clustering is not configured consistently
CephFS - Bug #54046: unaccessible dentries after fsstress run with namespace-restricted caps
CephFS - Bug #54049: ceph-fuse: If nonroot user runs ceph-fuse mount on then path is not expected to add in /proc/self/mounts and command should return failure
CephFS - Bug #54052: mgr/snap-schedule: scheduled snapshots are not created after ceph-mgr restart
CephFS - Bug #54066: mgr/volumes: uid/gid of the clone is incorrect
Linux kernel client - Bug #54067: fs/maxentries.sh test fails with "2022-01-21T12:47:05.490 DEBUG:teuthology.orchestra.run:got remote process result: 124"
CephFS - Bug #54081: mon/MDSMonitor: sanity assert when inline data turned on in MDSMap from v16.2.4 -> v16.2.[567]
CephFS - Bug #54106: kclient: hang during workunit cleanup
CephFS - Bug #54107: kclient: hang during umount
CephFS - Bug #54108: qa: iogen workunit: "The following counters failed to be set on mds daemons: {'mds.exported', 'mds.imported'}"
CephFS - Bug #54111: data pool attached to a file system can be attached to another file system
CephFS - Bug #54237: pybind/cephfs: Add mapping for Ernno 13:Permission Denied and adding path (in msg) while raising exception from opendir() in cephfs.pyx
CephFS - Bug #54271: mds/OpenFileTable.cc: 777: FAILED ceph_assert(omap_num_objs == num_objs)
Orchestrator - Bug #54311: cephadm/monitoring: monitoring stack versions are too old
rgw - Bug #54325: lua: elasticsearch example script does not check for null object/bucket
CephFS - Bug #54345: mds: try to reset heartbeat when fetching or committing.
CephFS - Bug #54374: mgr/snap_schedule: include timezone information in scheduled snapshots
CephFS - Bug #54384: mds: crash due to seemingly unrecoverable metadata error
CephFS - Bug #54459: fs:upgrade fails with "hit max job timeout"
CephFS - Bug #54460: snaptest-multiple-capsnaps.sh test failure
CephFS - Bug #54461: ffsb.sh test failure
CephFS - Bug #54463: mds: flush mdlog if locked and still has wanted caps not satisfied
CephFS - Bug #54501: libcephfs: client needs to update the mtime and change attr when snaps are created and deleted
CephFS - Bug #54557: scrub repair does not clear earlier damage health status
CephFS - Bug #54560: snap_schedule: avoid throwing traceback for bad or missing arguments
CephFS - Bug #54606: check-counter task runs till max job timeout
CephFS - Bug #54625: Issue removing subvolume with retained snapshots - Possible quincy regression?
CephFS - Bug #54701: crash: void Server::set_trace_dist(ceph::ref_t<MClientReply>&, CInode*, CDentry*, MDRequestRef&): assert(dnl->get_inode() == in)
CephFS - Bug #54760: crash: void CDir::try_remove_dentries_for_stray(): assert(dn->get_linkage()->is_null())
CephFS - Bug #54971: Test failure: test_perf_stats_stale_metrics (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
CephFS - Bug #54976: mds: Test failure: test_filelock_eviction (tasks.cephfs.test_client_recovery.TestClientRecovery)
mgr - Bug #55029: mgr/prometheus: ceph_mon_metadata is not consistently populating the ceph_version
Bug #55107: Getting "Could NOT find utf8proc (missing: utf8proc_LIB)" error while building from master branch
CephFS - Bug #55110: mount.ceph: mount helper incorrectly passes `ms_mode' mount option to older kernel
CephFS - Bug #55112: cephfs-shell: saving files doesn't work as expected
Dashboard - Bug #55133: mgr/dashboard: Error message of /api/grafana/validation is not helpful
CephFS - Bug #55134: ceph pacific fails to perform fs/mirror test
CephFS - Bug #55148: snap_schedule: remove subvolume(-group) interfaces
CephFS - Bug #55165: client: validate pool against pool ids as well as pool names
CephFS - Bug #55170: mds: crash during rejoin (CDir::fetch_keys)
CephFS - Bug #55173: qa: missing dbench binary?
CephFS - Bug #55196: mgr/stats: perf stats command doesn't have filter option for fs names.
CephFS - Bug #55216: cephfs-shell: creates directories in local file system even if file not found
CephFS - Bug #55217: pybind/mgr/volumes: Clone operation hangs
CephFS - Bug #55234: snap_schedule: replace .snap with the client configured snap dir name
CephFS - Bug #55236: qa: fs/snaps tests fails with "hit max job timeout"
CephFS - Bug #55240: mds: stuck 2 seconds and keeps retrying to find ino from auth MDS
CephFS - Bug #55242: cephfs-shell: put command should accept both path mandatorily and validate local_path
Linux kernel client - Bug #55258: lots of "heartbeat_check: no reply from X.X.X.X" in OSD logs
cephsqlite - Bug #55304: libcephsqlite: crash when compiled with gcc12 cause of regex treating '-' as a range operator
CephFS - Bug #55313: Unexpected file access behavior using ceph-fuse
CephFS - Bug #55331: pjd failure (caused by xattr's value not consistent between auth MDS and replicate MDSes)
CephFS - Bug #55332: Failure in snaptest-git-ceph.sh (it's an async unlink/create bug)
Bug #55351: ceph-mon crash in handle_forward when add new message type
RADOS - Bug #55355: osd thread deadlock
Linux kernel client - Bug #55377: kclient: mds revoke Fwb caps stuck after the kclient tries writebcak once
CephFS - Bug #55464: cephfs: mds/client error when client stale reconnect
rgw - Bug #55476: rgw: remove entries from bucket index shards directly in limited cases
rgw - Bug #55477: Gloal Ratelilmit is overriding the per user ratelimit
CephFS - Bug #55516: qa: fs suite tests failing with "json.decoder.JSONDecodeError: Extra data: line 2 column 82 (char 82)"
CephFS - Bug #55537: mds: crash during fs:upgrade test
CephFS - Bug #55538: Test failure: test_flush (tasks.cephfs.test_readahead.TestReadahead)
rgw - Bug #55546: rgw: trigger dynamic reshard on index entry count rather than object count
rgw - Bug #55547: rgw: figure out what to do with "--check-objects" option to radosgw-admin
Dashboard - Bug #55578: mgr/dashboard: Creating and editing Prometheus AlertManager silences is buggy
CephFS - Bug #55583: Intermittent ParsingError failure in mgr/volumes module during "clone cancel"
Orchestrator - Bug #55595: cephadm: prometheus: The generatorURL in alerts is only using hostname
Dashboard - Bug #55604: mgr/dashboard: form field validation icons overlap with other icons
rgw - Bug #55618: RGWRados::check_disk_state no checking object's storage_class
rgw - Bug #55619: rgw: input args poolid and epoch of fun RGWRados::Bucket::UpdateIndex::complete_del shold belong to index_pool
CephFS - Bug #55620: ceph pacific fails to perform fs/multifs test
Orchestrator - Bug #55638: alertmanager webhook urls may lead to 404
rgw - Bug #55655: rgw: clean up linking targets to radosgw-admin
RADOS - Bug #55670: osdmaptool is not mapping child pgs to the target OSDs
Orchestrator - Bug #55673: mgr/cephadm: Deploying a cluster with the Vagrantfile fails
CephFS - Bug #55710: cephfs-shell: exit code unset when command has missing argument
CephFS - Bug #55725: MDS allows a (kernel) client to exceed the xattrs key/value limits
CephFS - Bug #55759: mgr/volumes: subvolume ls with groupname as '_nogroup' crashes
CephFS - Bug #55762: mgr/volumes: Handle internal metadata directories under '/volumes' properly.
CephFS - Bug #55778: client: choose auth MDS for getxattr with the Xs caps
CephFS - Bug #55779: fuse client losing connection to mds
CephFS - Bug #55807: qa failure: workload iogen failed
CephFS - Bug #55822: mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
CephFS - Bug #55824: ceph-fuse[88614]: ceph mount failed with (65536) Unknown error 65536
Dashboard - Bug #55837: mgr/dashboard: After several days of not being used, Dashboard HTTPS website hangs during loading, with no errors
CephFS - Bug #55842: Upgrading to 16.2.9 with 9M strays files causes MDS OOM
RADOS - Bug #55851: Assert in Ceph messenger
CephFS - Bug #55858: Pacific 16.2.7 MDS constantly crashing
CephFS - Bug #55861: Test failure: test_client_metrics_and_metadata (tasks.cephfs.test_mds_metrics.TestMDSMetrics)
CephFS - Bug #55897: test_nfs: update of export's access type should not trigger NFS service restart
rgw - Bug #55904: RGWRados::check_disk_state no checking object's appendable attr
RADOS - Bug #55905: Failed to build rados.cpython-310-x86_64-linux-gnu.so
CephFS - Bug #55971: LibRadosMiscConnectFailure.ConnectFailure test failure
CephFS - Bug #55976: mgr/volumes: Clone operations are failing with Assertion Error
CephFS - Bug #55980: mds,qa: some balancer debug messages (<=5) not printed when debug_mds is >=5
Orchestrator - Bug #56000: task/test_nfs: ERROR: Daemon not found: mds.a.smithi060.ujwxef. See `cephadm ls`
CephFS - Bug #56003: client: src/include/xlist.h: 81: FAILED ceph_assert(_size == 0)
CephFS - Bug #56010: xfstests-dev generic/444 test failed
CephFS - Bug #56011: fs/thrash: snaptest-snap-rm-cmp.sh fails in mds5sum comparison
CephFS - Bug #56012: mds: src/mds/MDLog.cc: 283: FAILED ceph_assert(!mds->is_any_replay())
Orchestrator - Bug #56024: cephadm: removes ceph.conf during qa run causing command failure
CephFS - Bug #56063: Snapshot retention config lost after mgr restart
CephFS - Bug #56067: Cephfs data loss with root_squash enabled
CephFS - Bug #56116: mds: handle deferred client request core when mds reboot
CephFS - Bug #56162: mgr/stats: add fs_name as field in perf stats command output
CephFS - Bug #56169: mgr/stats: 'perf stats' command shows incorrect output with non-existing mds_rank filter.
CephFS - Bug #56249: crash: int Client::_do_remount(bool): abort
CephFS - Bug #56261: crash: Migrator::import_notify_abort(CDir*, std::set<CDir*, std::less<CDir*>, std::allocator<CDir*> >&)
CephFS - Bug #56269: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient(self)
CephFS - Bug #56270: crash: File "mgr/snap_schedule/module.py", in __init__: self.client = SnapSchedClient(self)
cephsqlite - Bug #56274: crash: pthread_mutex_lock()
CephFS - Bug #56282: crash: void Locker::file_recover(ScatterLock*): assert(lock->get_state() == LOCK_PRE_SCAN)
CephFS - Bug #56288: crash: Client::_readdir_cache_cb(dir_result_t*, int (*)(void*, dirent*, ceph_statx*, long, Inode*), void*, int, bool)
CephFS - Bug #56384: ceph/test.sh: check_response erasure-code didn't find erasure-code in output
CephFS - Bug #56446: Test failure: test_client_cache_size (tasks.cephfs.test_client_limits.TestClientLimits)
Bug #56480: std::shared_mutex deadlocks on Windows
CephFS - Bug #56483: mgr/stats: missing clients in perf stats command output.
bluestore - Bug #56488: BlueStore doesn't defer small writes for pre-pacific hdd osds
CephFS - Bug #56529: ceph-fs crashes on getfattr
rgw - Bug #56536: cls_rgw: nonexists object shoud not be accounted when check_index
CephFS - Bug #56537: cephfs-top: wrong/infinitely changing wsp values
CephFS - Bug #56577: mds: client request may complete without queueing next replay request
CephFS - Bug #56592: mds: crash when mounting a client during the scrub repair is going on
CephFS - Bug #56626: "ceph fs volume create" fails with error ERANGE
CephFS - Bug #56632: qa: test_subvolume_snapshot_clone_quota_exceeded fails CommandFailedError
CephFS - Bug #56633: mds: crash during construction of internal request
CephFS - Bug #56644: qa: test_rapid_creation fails with "No space left on device"
CephFS - Bug #56666: mds: standby-replay daemon always removed in MDSMonitor::prepare_beacon
Orchestrator - Bug #56667: cephadm install fails: apt:stderr E: Unable to locate package cephadm
mgr - Bug #56671: zabbix module does not process some config options correctly
mgr - Bug #56672: 'ceph zabbix send' can block (mon) ceph commands and messages
rgw - Bug #56673: rgw: 'bucket check' deletes index of multipart meta when its pending_map is noempty
CephFS - Bug #56694: qa: avoid blocking forever on hung umount
Orchestrator - Bug #56696: admin keyring disappears during qa run
CephFS - Bug #56697: qa: fs/snaps fails for fuse
CephFS - Bug #56698: client: FAILED ceph_assert(_size == 0)
CephFS - Bug #56808: crash: LogSegment* MDLog::get_current_segment(): assert(!segments.empty())
CephFS - Bug #56830: crash: cephfs::mirror::PeerReplayer::pick_directory()
Bug #56945: python: upgrade to 3.8 and/or 3.9
CephFS - Bug #56988: mds: memory leak suspected
Dashboard - Bug #57005: mgr/dashboard: Cross site scripting in Angular <11.0.5 (CVE-2021-4231)
CephFS - Bug #57014: cephfs-top: add an option to dump the computed values to stdout
CephFS - Bug #57044: mds: add some debug logs for "crash during construction of internal request"
CephFS - Bug #57048: osdc/Journaler: better handle ENOENT during replay as up:standby-replay
CephFS - Bug #57064: qa: test_add_ancestor_and_child_directory failure
CephFS - Bug #57065: qa: test_query_client_ip_filter fails with latest 'perf stats' structure changes
CephFS - Bug #57071: mds: consider mds_cap_revoke_eviction_timeout for get_late_revoking_clients()
CephFS - Bug #57072: Quincy 17.2.3 pybind/mgr/status: assert metadata failed
CephFS - Bug #57084: Permissions of the .snap directory do not inherit ACLs
CephFS - Bug #57087: qa: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan) failure
CephFS - Bug #57126: client: abort the client daemons when we couldn't invalidate the dentry caches from kernel
Bug #57138: mgr(snap-schedule): may TypeError in rm_schedule
RADOS - Bug #57152: segfault in librados via libcephsqlite
CephFS - Bug #57154: kernel/fuse client using ceph ID with uid restricted MDS caps cannot update caps
CephFS - Bug #57204: MDLog.h: 99: FAILED ceph_assert(!segments.empty())
CephFS - Bug #57205: Test failure: test_subvolume_group_ls_filter_internal_directories (tasks.cephfs.test_volumes.TestSubvolumeGroups)
CephFS - Bug #57206: ceph_test_libcephfs_reclaim crashes during test
CephFS - Bug #57210: NFS client unable to see newly created files when listing directory contents in a FS subvolume clone
CephFS - Bug #57218: qa: tasks/{1-thrash/mds 2-workunit/cfuse_workunit_suites_fsstress}} fails
CephFS - Bug #57244: [WRN] : client.408214273 isn't responding to mclientcaps(revoke), ino 0x10000000003 pending pAsLsXsFs issued pAsLsXsFs, sent 62.303702 seconds ago
CephFS - Bug #57248: qa: mirror tests should cleanup fs during unwind
CephFS - Bug #57249: mds: damage table only stores one dentry per dirfrag
CephFS - Bug #57280: qa: tasks/kernel_cfuse_workunits_untarbuild_blogbench fails - Failed to fetch package version from shaman
CephFS - Bug #57299: qa: test_dump_loads fails with JSONDecodeError
Orchestrator - Bug #57335: cephadm gather-facts reports disk size incorecctly for native 4k sectors
CephFS - Bug #57361: cephfs: rbytes seems not work correctly
Orchestrator - Bug #57449: qa: removal of host during QA
CephFS - Bug #57580: Test failure: test_newops_getvxattr (tasks.cephfs.test_newops.TestNewOps)
CephFS - Bug #57586: first-damage.sh does not handle dentries with spaces
CephFS - Bug #57589: cephfs-data-scan: scan_links is not verbose enough
CephFS - Bug #57594: pacific: Test failure: test_rebuild_moved_dir (tasks.cephfs.test_data_scan.TestDataScan)
CephFS - Bug #57597: qa: data-scan/journal-tool do not output debugging in upstream testing
CephFS - Bug #57598: qa: test_recovery_pool uses wrong recovery procedure
CephFS - Bug #57610: qa: timeout during unwinding of qa/workunits/suites/fsstress.sh
CephFS - Bug #57620: mgr/volumes: addition of human-readable flag to volume info command
CephFS - Bug #57641: Ceph FS fscrypt clones missing fscrypt metadata
CephFS - Bug #57655: qa: fs:mixed-clients kernel_untar_build failure
CephFS - Bug #57657: mds: scrub locates mismatch between child accounted_rstats and self rstats
CephFS - Bug #57674: fuse mount crashes the standby MDSes
CephFS - Bug #57676: qa: error during scrub thrashing: rank damage found: {'backtrace'}
CephFS - Bug #57677: qa: "1 MDSs behind on trimming (MDS_TRIM)"
CephFS - Bug #57682: client: ERROR: test_reconnect_after_blocklisted
CephFS - Bug #57764: Thread md_log_replay is hanged for ever.
mgr - Bug #57851: pybind/mgr/snap_schedule: use temp_store for db
rgw - Bug #57881: LDAP invalid password resource leak fix
Dashboard - Bug #57912: mgr/dashboard: Dashboard creation of NFS exports with RGW backend fails: "selected realm is not the default"
Bug #57923: log: writes to stderr (pipe) may not be atomic
CephFS - Bug #57985: mds: warning `clients failing to advance oldest client/flush tid` seen with some workloads
CephFS - Bug #58000: mds: switch submit_mutex to fair mutex for MDLog
CephFS - Bug #58008: mds/PurgeQueue: don't consider filer_max_purge_ops when _calculate_ops
CephFS - Bug #58028: cephfs-top: Sorting doesn't work when the filesystems are removed and created
CephFS - Bug #58030: mds: avoid ~mdsdir's scrubbing and reporting damage health status
rgw - Bug #58034: RGW misplaces index entries after dynamically resharding bucket
CephFS - Bug #58041: mds: src/mds/Server.cc: 3231: FAILED ceph_assert(straydn->get_name() == straydname)
CephFS - Bug #58058: CephFS Snapshot Mirroring slow due to repeating attribute sync
CephFS - Bug #58082: cephfs:filesystem became read only after Quincy upgrade
CephFS - Bug #58090: Non-existent pending clone shows up in snapshot info
CephFS - Bug #58109: ceph-fuse: doesn't work properly when the version of libfuse is 3.1 or later
Bug #58128: FTBFS with fmtlib 9.1.0
CephFS - Bug #58219: Test failure: test_journal_migration (tasks.cephfs.test_journal_migration.TestJournalMigration) [Command crashed: 'ceph-dencoder type JournalPointer import - decode dump_json']
CephFS - Bug #58220: Command failed (workunit test fs/quota/quota.sh) on smithi081 with status 1:
CephFS - Bug #58221: pacific: Test failure: test_fragmented_injection (tasks.cephfs.test_data_scan.TestDataScan)
CephFS - Bug #58228: mgr/nfs: disallow non-existent paths when creating export
CephFS - Bug #58244: Test failure: test_rebuild_inotable (tasks.cephfs.test_data_scan.TestDataScan)
rgw - Bug #58286: Subsequent request fails after PutObject to non-existing bucket
CephFS - Bug #58294: MDS: scan_stray_dir doesn't walk through all stray inode fragment
rgw - Bug #58330: RGW service crashes regularly with floating point exception
CephFS - Bug #58340: mds: fsstress.sh hangs with multimds
Orchestrator - Bug #58353: cephadm/ingress: default haproxy image not using 'LTS' release.
CephFS - Bug #58376: CephFS Snapshots are accessible even when it's deleted from the other client
RADOS - Bug #58379: no active mgr after ~1 hour
CephFS - Bug #58394: nofail option in fstab not supported
CephFS - Bug #58395: mds: in openc, if unlink is not finished we should reintegrate the dentry before continuing further.
CephFS - Bug #58411: mds: a few simple operations crash mds
rgw - Bug #58442: rgw-orphan-list tool can list all rados objects as orphans
rgw - Bug #58453: rgw-gap-list has insufficient error checking
CephFS - Bug #58482: mds: catch damage to CDentry's first member before persisting
CephFS - Bug #58489: mds stuck in 'up:replay' and crashed.
Orchestrator - Bug #58572: Rook: Recover device inventory
CephFS - Fix #51177: pybind/mgr/volumes: investigate moving calls which may block on libcephfs into another thread
rgw - Fix #54174: rgw dbstore test env init wrong
CephFS - Fix #54317: qa: add testing in fs:workload for different kinds of subvolumes
CephFS - Fix #57295: qa: remove RHEL from job matrix
CephFS - Fix #58023: mds: do not evict clients if OSDs are laggy
CephFS - Fix #58154: mds: add minor segment boundaries
CephFS - Feature #7320: qa: thrash directory fragmentation
CephFS - Feature #16745: mon: prevent allocating snapids allocated for CephFS
CephFS - Feature #41824: mds: aggregate subtree authorities for display in `fs top`
CephFS - Feature #48619: client: track (and forward to MDS) average read/write/metadata latency
CephFS - Feature #50150: qa: begin grepping kernel logs for kclient warnings/failures to fail a test
rgw - Feature #51017: rgw: beast: lack of 302 http -> https redirects
Feature #51537: use git `Prepare Commit Message` hook to add component in commit title
Orchestrator - Feature #54308: monitoring/prometheus: mgr/cephadm should support a data retention spec for prometheus data
Orchestrator - Feature #54309: cephadm/monitoring: Update cephadm web endpoint to provide scrape configuration information to Prometheus
Orchestrator - Feature #54310: cephadm: allow services to have dependencies on rbd
Orchestrator - Feature #54391: orch/cephadm: upgrade status output could be improved to make progress more transparent
Orchestrator - Feature #54392: orch/cephadm: Add a 'history' subcommand to the orch upgrade command
CephFS - Feature #54472: mgr/volumes: allow users to add metadata (key-value pairs) to subvolumes
rgw - Feature #54476: rgw: allow S3 delete-marker behavior to be restored via config
RADOS - Feature #54580: common/options: add FLAG_SECURE to Ceph options
CephFS - Feature #54978: cephfs-top: addition of filesystem menu (improving GUI)
CephFS - Feature #55041: mgr/volumes: display in-progress clones for a snapshot
CephFS - Feature #55121: cephfs-top: new options to limit and order-by
CephFS - Feature #55197: cephfs-top: make cephfs-top display scrollable like top
CephFS - Feature #55214: mds: add asok/tell command to clear stale omap entries
CephFS - Feature #55215: mds: fragment directory snapshots
CephFS - Feature #55401: mgr/volumes: allow users to add metadata (key-value pairs) for subvolume snapshot
CephFS - Feature #55414: mds: asok interface to cleanup permanently damaged inodes
CephFS - Feature #55463: cephfs-top: allow users to chose sorting order
CephFS - Feature #55470: qa: postgresql test suite workunit
Orchestrator - Feature #55489: cephadm: Improve gather facts to tolerate mpath device configurations
Dashboard - Feature #55520: mgr/dashboard: Add `location` field to [ POST /api/host ]
Orchestrator - Feature #55551: device ls-lights should include the host where the devices are
Orchestrator - Feature #55576: [RFE] Add a rescan subcommand to the orch device command
CephFS - Feature #55715: pybind/mgr/cephadm/upgrade: allow upgrades without reducing max_mds
rgw - Feature #55769: rgw: allow `radosgw-admin bucket stats` report more accurately
Orchestrator - Feature #55777: Add server serial number information to cephadm gather-facts subcommand
CephFS - Feature #55821: pybind/mgr/volumes: interface to check the presence of subvolumegroups/subvolumes.
CephFS - Feature #55940: quota: accept values in human readable format as well
CephFS - Feature #56058: mds/MDBalancer: add an arg to limit depth when dump loads for dirfrags
CephFS - Feature #56140: cephfs: tooling to identify inode (metadata) corruption
Orchestrator - Feature #56178: [RFE] add a --force or --yes-i-really-mean-it to ceph orch upgrade
Orchestrator - Feature #56179: [RFE] Our prometheus instance should scrape itself
CephFS - Feature #56442: mds: build asok command to dump stray files and associated caps
CephFS - Feature #56489: qa: test mgr plugins with standby mgr failover
CephFS - Feature #57090: MDSMonitor,mds: add MDSMap flag to prevent clients from connecting
CephFS - Feature #57091: mds: modify scrub to catch dentry corruption
Dashboard - Feature #57459: mgr/dashboard: add support for creating realm/zonegroup/zone
CephFS - Feature #57481: mds: enhance scrub to fragment/merge dirfrags
CephFS - Feature #58057: cephfs-top: enhance fstop tests to cover testing displayed data
CephFS - Feature #58129: mon/FSCommands: support swapping file systems by name
CephFS - Feature #58133: qa: add test cases for fscrypt feature in kernel CephFS client
Orchestrator - Feature #58150: Add high level host related information to the orch host ls command
CephFS - Feature #58193: mds: remove stray directory indexes since stray directory can fragment
mgr - Feature #58227: Expose additional OSD/PG related information to monitoring
CephFS - Feature #58488: mds: avoid encoding srnode for each ancestor in an EMetaBlob log event
CephFS - Feature #58550: mds: add perf counter to track (relatively) larger log events
Feature #58565: rgw: add replication status header to s3 GetObj response
CephFS - Support #57952: Pacific: the buffer_anon_bytes of ceph-mds is too large
Cleanup #53682: common: use fmt::print for stderr logging
CephFS - Cleanup #54362: client: do not release the global snaprealm until unmounting
Dashboard - Cleanup #54991: mgr/dashboard: don't log HTTP 3xx as errors
cleanup - Tasks #57172: Yield Context Threading
CephFS - Documentation #54551: docs.ceph.com/en/pacific/cephfs/add-remove-mds/#adding-an-mds cannot work
Documentation #55530: teuthology-suite -k option doesn't always override kernel
CephFS - Documentation #56730: doc: update snap-schedule notes regarding 'start' time
CephFS - Documentation #57115: Explanation for cache pressure
cephsqlite - Documentation #57127: doc: add debugging documentation
CephFS - Documentation #57673: doc: document the relevance of mds_namespace mount option
CephFS - Documentation #57737: Clarify security implications of path-restricted cephx capabilities
Dashboard - Backport #55201: cephadm/monitoring: monitoring stack versions are too old
Dashboard - Backport #55366: cephadm/monitoring: monitoring stack versions are too old
rgw - Backport #58470: pacific: It is not possible to set empty tags on buckets and objects.