Ceph
v12.2.9
98% complete: 128 issues (126 closed, 2 open)
Time tracking
Estimated time: 0.00 hours
Spent time: 0.00 hours
Issues by Tracker
Bug: 5 of 7 closed
Backport: 121 of 121 closed
Related issues
RADOS - Bug #36406: Cache-tier forward mode hang in luminous (again)
RADOS - Bug #36411: OSD crash starting recovery/backfill with EC pool
bluestore - Bug #36567: Segmentation fault in BlueStore::Blob::discard_unallocated
rbd - Bug #36626: couldn't rewatch after network was blocked and client blacklisted
RADOS - Bug #36725: luminous: Apparent Memory Leak in OSD
Bug #37280: librbd's generate_image_id() is not so random
RADOS - Bug #37299: ceph-disk: ceph osd start failed: Command '['/usr/bin/systemctl', 'disable', 'ceph-osd@0', '--runtime']'
CephFS - Backport #22504: luminous: client may fail to trim as many caps as MDS asked for
RADOS - Backport #23408: luminous: mgrc's ms_handle_reset races with send_pgstats()
rbd - Backport #23604: luminous: Discard ops should flush affected objects from in-memory cache
RADOS - Backport #23998: luminous: osd/EC: slow/hung ops in multimds suite test
RADOS - Backport #24478: luminous: read object attrs failed at EC recovery
rgw - Backport #24630: luminous: cls_bucket_list fails causes cascading osd crashes
CephFS - Backport #24842: luminous: qa: move mds/client config to qa from teuthology ceph.conf.template
CephFS - Backport #24862: luminous: ceph_volume_client: allow atomic update of RADOS objects
CephFS - Backport #24912: luminous: qa: multifs requires 4 mds but gets only 2
CephFS - Backport #24934: luminous: cephfs-journal-tool: wrong layout info used
rbd - Backport #24946: luminous: image create request should validate data pool for self-managed snapshot support
rgw - Backport #24983: luminous: 'radosgw-admin sync error trim' only trims partially
rgw - Backport #24985: luminous: multisite: object metadata operations are skipped by sync
RADOS - Backport #24988: luminous: Limit pg log length during recovery/backfill so that we don't run out of memory.
rgw - Backport #25025: luminous: cls_rgw test is only run in rados suite: add it to rgw suite as well
CephFS - Backport #25043: luminous: overhead of g_conf->get_val<type>("config name") is high
CephFS - Backport #25046: luminous: mds: create health warning if we detect metadata (journal) writes are slow
rgw - Backport #25087: luminous: change default rgw_thread_pool_size to 512
RADOS - Backport #25145: luminous: Automatically set expected_num_objects for new pools with >=100 PGs per OSD
RADOS - Backport #25177: luminous: osd,mon: increase mon_max_pg_per_osd to 300
RADOS - Backport #25199: luminous: FAILED assert(trim_to <= info.last_complete) in PGLog::trim()
RADOS - Backport #25203: luminous: rados python bindings use prval from stack
CephFS - Backport #25205: luminous: CephVolumeClient: delay required after adding data pool to MDSMap
rgw - Backport #25217: luminous: valgrind failures related to --max-threads prevent radosgw from starting
RADOS - Backport #25219: luminous: osd/PGLog.cc: use lgeneric_subdout instead of generic_dout
mgr - Backport #26838: luminous: Can't turn off mgrc stats with mgr_stats_threshold
RADOS - Backport #26840: luminous: librados application's symbol could conflict with the libceph-common
rgw - Backport #26844: luminous: rgw_file: "deep stat"/stats of unenumerated paths not handled
rgw - Backport #26846: luminous: Lifecycle rules number on one bucket should be limited.
rgw - Backport #26848: luminous: Delete marker generated by lifecycle has no owner
CephFS - Backport #26851: luminous: ceph_volume_client: py3 compatible
CephFS - Backport #26885: luminous: mds: reset heartbeat map at potential time-consuming places
CephFS - Backport #26889: luminous: mds: use self CPU usage to calculate load
CephFS - Backport #26904: luminous: qa: reduce slow warnings arising due to limited testing hardware
CephFS - Backport #26906: luminous: MDSMonitor: consider raising priority of MMDSBeacons from MDS so they are processed before other client messages
RADOS - Backport #26908: luminous: kv: MergeOperator name() returns string, and caller calls c_str() on the temporary
RADOS - Backport #26910: luminous: PGLog.cc: saw valgrind issues while accessing complete_to->version
CephFS - Backport #26915: luminous: handle ceph_ll_close on unmounted filesystem without crashing
rbd - Backport #26917: luminous: doc: Fix broken urls
rgw - Backport #26922: luminous: possibly wrong log level in gc_iterate_entries (src/cls/rgw/cls_rgw.cc:3291)
CephFS - Backport #26924: luminous: mds: mds got laggy because of MDSBeacon stuck in mqueue
CephFS - Backport #26930: luminous: MDSMonitor: note ignored beacons/map changes at higher debug level
Backport #26934: luminous: segv in OSDMap::calc_pg_upmaps from balancer
CephFS - Backport #26977: luminous: cephfs-data-scan: print the max used ino
rgw - Backport #26979: luminous: multisite: intermittent failures in test_bucket_sync_disable_enable
CephFS - Backport #26981: luminous: mds: crash when dumping ops in flight
CephFS - Backport #26983: luminous: client: requests that do name lookup may be sent to wrong mds
CephFS - Backport #26987: luminous: mds: explain delayed client_request due to subtree migration
CephFS - Backport #26990: luminous: mds: curate priority of perf counters sent to mgr
RADOS - Backport #26992: luminous: discover_all_missing() not always called during activating
mgr - Backport #27058: luminous: ceph-mgr package does not remove /usr/lib/ceph/mgr compiled files (Debian only?)
rbd - Backport #27061: luminous: run-rbd-unit-tests.sh test fails to finish in jenkin's "make check" run
rbd - Backport #27987: luminous: Refuses to release lock when cookie is the same at rewatch
mgr - Backport #32080: luminous: mgr balancer does not save optimized plan but latest
CephFS - Backport #32084: luminous: mds: MDBalancer::try_rebalance() may stop prematurely
CephFS - Backport #32088: luminous: mds: use monotonic clock for beacon sender thread waits
CephFS - Backport #32098: luminous: mds: optimize the way how max export size is enforced
CephFS - Backport #32103: luminous: mds: allows client to create ".." and "." dirents
RADOS - Backport #32106: luminous: object errors found in be_select_auth_object() aren't logged the same
rgw - Backport #32127: luminous: docs: radosgw: ldap-auth: wrong option name 'rgw_ldap_searchfilter'
rgw - Backport #35069: luminous: cls/rgw: add rgw_usage_log_entry type to ceph-dencoder
Backport #35072: luminous: osd/PGLog.cc: 60: FAILED assert(s <= can_rollback_to) after upgrade to luminous
Backport #35537: luminous: Bad URL for unmap.t in krbd run
rgw - Backport #35703: luminous: multisite: out of order updates to sync status markers
rbd - Backport #35704: luminous: "rbd import --export-format 2" fails when the input is a pipe
rgw - Backport #35707: luminous: A period pull occasionally raises "curl_easy_perform returned status 28 error: Operation too slow"
rgw - Backport #35709: luminous: deadlock on shutdown in RGWIndexCompletionManager::stop()
rbd - Backport #35711: luminous: Enabling journaling on an in-use image ignores any journal options
rbd - Backport #35713: luminous: [rbd-mirror] aborted in Operation::execute_snap_remove()
Messengers - Backport #35716: luminous: msg: "challenging authorizer" messages appear at debug_ms=0
CephFS - Backport #35718: luminous: mds: beacon spams is_laggy message
CephFS - Backport #35721: luminous: evicting client session may block finisher thread
CephFS - Backport #35838: luminous: mds: use monotonic clock for beacon message timekeeping
RADOS - Backport #35844: luminous: objecter cannot resend split-dropped op when racing with con reset
RADOS - Backport #35854: luminous: should remove mentioning of "scrubq" in ceph(8) manpage
rgw - Backport #35856: luminous: multisite: segfault on shutdown/realm reload
CephFS - Backport #35859: luminous: MDSMonitor: lookup of gid in prepare_beacon that has been removed will cause exception
Backport #35929: luminous: mon/OSDMonitor: cancel_report causes obsolete max_failed_since
CephFS - Backport #35931: luminous: mds: retry remounting in ceph-fuse on dcache invalidation
CephFS - Backport #35933: luminous: client: cannot list out files created by another ceph-fuse client
CephFS - Backport #35937: luminous: mds: add average session age (uptime) perf counter
CephFS - Backport #35939: luminous: client: statfs inode count odd
RADOS - Backport #35941: luminous: "ceph tell osd.x bench" writes resulting JSON to stderr instead of stdout.
rbd - Backport #35958: luminous: assert in execute_flatten() when flattening a clone with no overlap
Backport #35960: luminous: assert(total_data_size % sinfo.get_chunk_size() == 0) with ec overwrite flag set
RADOS - Backport #35962: luminous: choose_acting picked want > pool size
CephFS - Backport #35976: luminous: mds: configurable timeout for client eviction
rgw - Backport #35978: luminous: multisite: incremental data sync makes unnecessary call to RGWReadRemoteDataLogShardInfoCR
rgw - Backport #35980: luminous: multisite: data sync error repo processing does not back off on empty
Backport #35981: luminous: ceph-disk: is_mounted() returns None for mounted OSDs with Python 3
CephFS - Backport #35983: luminous: mds: change mds perf counters can statistics filesystem operations number and latency
Backport #35991: luminous: ceph-objectstore-tool apply-layout-settings optional target level can't be specified.
CephFS - Backport #36101: luminous: qa: remove knfs site from future releases
rbd - Backport #36116: luminous: [test] not valid to have different parents between image snapshots
rbd - Backport #36119: luminous: [rbd-mirror] failed assertion when updating mirror status
rgw - Backport #36124: luminous: Chunked encoding fails if chunk greater than 1MiB
Backport #36126: luminous: msg: AsyncConnection keeps previous message buffers until new message comes in
rgw - Backport #36128: luminous: abort_bucket_multiparts() fails on missing multipart meta objects
RADOS - Backport #36131: luminous: "symbol lookup error: ceph-osd: undefined symbol: _ZdaPvm" on centos 7.4
CephFS - Backport #36133: luminous: client: update ctime when modifying file content
CephFS - Backport #36135: luminous: mds: rctime may go back
rgw - Backport #36137: luminous: multisite: update index segfault on shutdown/realm reload
rgw - Backport #36139: luminous: multisite: make redundant data sync errors less scary
rgw - Backport #36141: luminous: rgw: return x-amz-version-id: null when delete obj in versioning suspended bucket
rbd - Backport #36143: luminous: Blacklisted client might not notice it lost the lock
CephFS - Backport #36152: luminous: qa: fsstress workunit does not execute in parallel on same host without clobbering files
Backport #36157: luminous: [simple/msg]Add heartbeat timeout beforeAccepter::entry break out for osd thread
CephFS - Backport #36196: luminous: mds: internal op missing events time 'throttled', 'all_read', 'dispatched'
CephFS - Backport #36198: luminous: ceph-fuse: add SELinux policy
rgw - Backport #36202: luminous: multisite: intermittent test_bucket_index_log_trim failures
CephFS - Backport #36210: luminous: mds: runs out of file descriptors after several respawns
rbd - Backport #36224: luminous: [rbd-mirror] object map is getting invalidated during rbd-mirror-fsx-workunit test
RADOS - Backport #36274: luminous: osd/PrimaryLogPG: fix potential pg-log overtrimming
CephFS - Backport #36277: luminous: qa: add timeouts to workunits to bound test execution time in the event of crashes/bugs
rgw - Backport #36311: luminous: multi-site: object name should be urlencoded when we put it into ES
CephFS - Backport #36322: luminous: qa: Command failed on smithi189 with status 1: 'rm -rf -- /home/ubuntu/cephtest/workunits.list.client.0 /home/ubuntu/cephtest/clone.client.0 /home/ubuntu/cephtest/mnt.0/client.0/tmp'
rgw - Backport #36382: luminous: resharding produces invalid values of bucket stats
rbd - Backport #36431: luminous: [qa] fsstress workunit uses unavailable "realpath" command
Backport #36514: luminous: add a missing dependency for e2fsprogs
Backport #38165: luminous: os/bluestore: avoid frequent and massive allocator's dump on bluefs rebalance failure