Ceph
v12.2.2

90% complete: 178 issues (160 closed, 18 open)
Time tracking
Estimated time: 0.00 hours
Spent time: 0.00 hours
Issues by tracker (closed/total):
Bug: 38/48
Feature: 2/2
Support: 7/15
Documentation: 2/2
Backport: 111/111
Related issues
CephFS - Bug #21337: luminous: MDS is not getting past up:replay on Luminous cluster
rgw - Bug #21590: fix a bug about inconsistent unit of comparison
mgr - Bug #21752: ceph.in: ceph fs status returns error
rgw - Bug #21983: rgw: modify s3 type subuser access permission fail
Bug #22154: ceph-disk: add deprecation warnings
rgw - Bug #22202: rgw_statfs should report the correct stats
rgw - Bug #22272: S3 API: incorrect error code on GET website bucket
Bug #22280: ceph-volume - ceph.conf parsing error, due to whitespace
ceph-volume - Bug #22297: ceph-volume should handle inline comments in the ceph.conf file
ceph-volume - Bug #22298: ceph-volume parsing errors on spaces / tabs
Bug #22311: Luminous 12.2.2: some of rpm packages is not signed.
rgw - Bug #22312: ERROR: keystone revocation processing returned error r=-22 on keystone v3 openstack ocata
rbd - Bug #22321: ceph 12.2.x Luminous: Build fails with --without-radosgw
mgr - Bug #22327: MGR dashboard doesn't update OSD's ceph version after updating from 12.2.1 to 12.2.2
rgw - Bug #22331: Luminous 12.2.2 multisite sync failed when objects bigger than 4MB
rbd - Bug #22362: cluster resource agent ocf:ceph:rbd - wrong permissions
ceph-volume - Bug #22435: fail to create bluestore osd with ceph-volume command on ubuntu 14.04 with ceph 12.2.2
mgr - Bug #22441: mgr prometheus module failed on 12.2.2
mgr - Bug #22472: restful module: Unresponsive restful API after marking MGR as failed
mgr - Bug #22474: prometheus plugin breaks if an osd is out
bluestore - Bug #22543: OSDs can not start after shutdown, killed by OOM killer during PGs load
bluestore - Bug #22616: bluestore_cache_data uses too much memory
rgw - Bug #22632: radosgw - s3 keystone integration doesn't work while using civetweb as frontend
mgr - Bug #22669: KeyError: 'pg_deep' from prometheus plugin
bluestore - Bug #22678: block checksum mismatch from rocksdb
ceph-volume - Bug #22720: Added an osd by ceph-volume, it got an error in systemctl enable ceph-volume@.service
Bug #22735: about mon_max_pg_per_osd
RADOS - Bug #22746: osd/common: ceph-osd process is terminated by the logrotate task
Bug #22757: color mode (original title in Chinese: 色方式)
CephFS - Bug #22788: ceph-fuse performance issues with rsync
rgw - Bug #22804: multisite Synchronization failed when read and write delete at the same time
rgw - Bug #22826: "x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD" is not supported by V4 auth through LDAPEngine
rgw - Bug #22838: slave zone, `bucket stats`, the result of 'num_objects' is incorrect.
rgw - Bug #22895: radosgw not working after all rgw RGWAsyncRadosProcessor threads had timed out
rgw - Bug #22908: [Multisite] Synchronization works only one way (zone2->zone1)
Bug #22943: c++: internal compiler error: Killed (program cc1plus)
RADOS - Bug #22952: Monitor stopped responding after a while
devops - Bug #23085: Assertion failure in Bluestore _kv_sync_thread()
rgw - Bug #23091: rgw + OpenLDAP = Failed the auth strategy, reason=-13
rgw - Bug #23149: Aws::S3::Errors::SignatureDoesNotMatch (rgw::auth::s3::LocalEngine denied with reason=-2027)
rgw - Bug #23188: Cannot list file in a bucket or delete it: illegal character code U+0001
bluestore - Bug #23206: ceph-osd daemon crashes - *** Caught signal (Aborted) **
devops - Bug #23209: Strange RGW behaviour after running Cosbench tests (heavy read/write on cluster, then delete objects and dispose buckets)
RADOS - Bug #23235: The randomness of the hash function causes objects to be distributed unevenly across PGs, so each OSD's utilization ratio is uneven.
RADOS - Bug #23283: os/bluestore: cache raises a Segmentation fault
Bug #23296: Cache-tier forward mode hang in luminous
RADOS - Bug #23365: CEPH device class not honored for erasure encoding.
rgw - Bug #23939: etag/hash and content_type fields were concatenated with \u0000
RADOS - Feature #3586: CRUSH: separate library
Dashboard - Feature #22495: mgr: dashboard: show per pool io
RADOS - Support #22132: OSDs stuck in "booting" state after catastrophic data loss
RADOS - Support #22422: Block fsid does not match our fsid
RADOS - Support #22520: nearfull threshold is not cleared when osd really is not nearfull.
RADOS - Support #22531: OSD flapping under repair/scrub after receiving inconsistent PG: LFNIndex.cc: 439: FAILED assert(long_name == short_name)
RADOS - Support #22566: Some osds remain at 100% CPU after upgrade jewel => luminous (v12.2.2) while others work
RADOS - Support #22749: dmClock OP classification
rgw - Support #22822: RGW multi site issue - data sync: ERROR: failed to fetch datalog info
RADOS - Support #22917: mon keeps on crashing (12.2.2)
RADOS - Support #23005: Implement rados for Python library with some problem
Support #23396: erasure-code pool cannot be configured as gnocchi data storage
rbd - Support #23401: rbd mirror leads to a potential risk that the primary image can be removed from a remote cluster
rbd - Support #23456: where is the log of src\journal\JournalPlayer.cc
rbd - Support #23457: rbd mirror: entries_behind_master will not be zero after mirroring is over
rbd - Support #23461: which rpm package is src/journal/ in?
RADOS - Support #23719: Three nodes shut down; booting only two of the nodes leaves many pgs down (host failure-domain, ec 2+1)
RADOS - Documentation #21733: OSD-Config-ref (osd max object size) section malformed
mgr - Documentation #22532: mgr: balancer: missing documentation
rbd - Backport #21299: luminous: [rbd-mirror] asok hook names not updated when image is renamed
RADOS - Backport #21307: luminous: Client client.admin marked osd.2 out, after it was down for 1504627577 seconds
mgr - Backport #21320: luminous: Quieten scary RuntimeError from restful module on startup
CephFS - Backport #21324: luminous: ceph: tell mds.* results in warning
RADOS - Backport #21343: luminous: DNS SRV default service name not used anymore
RADOS - Backport #21438: luminous: Daemons (OSD, Mon...) exit abnormally at injectargs command
Backport #21439: luminous: Performance: Slow OSD startup, heavy LevelDB activity
rbd - Backport #21441: luminous: [cli] mirror "getter" commands will fail if mirroring has never been enabled
mgr - Backport #21443: luminous: Prometheus crash when update
rgw - Backport #21444: luminous: rgw: setxattrs call leads to different mtimes for bucket index and object
rgw - Backport #21445: luminous: rgw: reversed account listing of Swift API should be supported
rgw - Backport #21446: luminous: rgw: multisite: Get bucket location which is located in another zonegroup will return "301 Moved Permanently"
rgw - Backport #21448: luminous: rgw: string_view instance points to expired memory in PrefixableSignatureHelper
rgw - Backport #21451: luminous: rgw: lc process only schedules the first item of lc objects
mgr - Backport #21452: luminous: prometheus module generates invalid output when counter names contain non-alphanum characters
rgw - Backport #21453: luminous: rgw: end_marker parameter doesn't work on Swift container's listing
rgw - Backport #21456: luminous: rgw: wrong error message is returned when putting container with a name that is too long
rgw - Backport #21457: luminous: rgw: /info lacks swift.max_meta_value_length
rgw - Backport #21458: luminous: rgw: /info lacks swift.max_meta_count
rgw - Backport #21459: luminous: rgw: wrong error code is returned when putting container metadata with too long name
rgw - Backport #21460: luminous: rgw: missing support for per storage-policy usage statistics
RADOS - Backport #21465: luminous: OSD metadata 'backend_filestore_dev_node' is "unknown" even for simple deployment
Backport #21504: ceph-disk omits "--runtime" when enabling ceph-osd@$ID.service units for device-backed OSDs
CephFS - Backport #21514: luminous: ceph_volume_client: snapshot dir name hardcoded
Backport #21523: luminous: selinux denies getattr on lnk sysfs files
mgr - Backport #21524: luminous: DaemonState members accessed outside of locks
RADOS - Backport #21543: luminous: bluestore fsck took 224.778802 seconds to complete, which caused "timed out waiting for admin_socket to appear after osd.1 restart"
RADOS - Backport #21544: luminous: mon osd feature checks for osdmap flags and require-osd-release fail if 0 up osds
rgw - Backport #21545: luminous: rgw file write error
mgr - Backport #21547: luminous: ceph-mgr gets process called "exe" after respawn
Backport #21548: luminous: ceph_manager: bad AssertionError: failed to recover before timeout expired
mgr - Backport #21549: luminous: the dashboard uses absolute links for filesystems and clients
CephFS - Backport #21600: luminous: mds: client caps can go below hard-coded default (100)
CephFS - Backport #21602: luminous: ceph_volume_client: add get, put, and delete object interfaces
CephFS - Backport #21627: luminous: ceph_volume_client: sets invalid caps for existing IDs with no caps
rgw - Backport #21633: luminous: s3:GetBucketWebsite/PutBucketWebsite fails with 403
rgw - Backport #21634: luminous: s3:GetBucketLocation bucket policy fails with 403
rgw - Backport #21635: luminous: s3:GetBucketCORS/s3:PutBucketCORS policy fails with 403
rgw - Backport #21637: luminous: encryption: PutObj response does not include sse-kms headers
mgr - Backport #21638: luminous: dashboard OSD list has servers and osds in arbitrary order
rbd - Backport #21639: luminous: rbd does not delete snaps in (ec) data pool
rbd - Backport #21640: luminous: [rbd-mirror] resync isn't properly deleting non-primary image
Backport #21643: luminous: upmap does not respect osd reweights
Backport #21645: luminous: Incomplete/missing get_store_prefixes implementations in OSDMonitor/MDSMonitor
Backport #21647: luminous: "mgr_command_descs" not synced to a newly joined Mon
mgr - Backport #21648: luminous: mgr[zabbix] float division by zero
rgw - Backport #21649: luminous: multisite: sync of bucket entrypoints fails with ENOENT
RADOS - Backport #21650: luminous: buffer_anon leak during deep scrub (on otherwise idle osd)
rgw - Backport #21651: luminous: rgw: avoid logging keystone revocation failures when no keystone is configured
rgw - Backport #21652: luminous: policy checks missing from Get/SetRequestPayment operations
rgw - Backport #21655: luminous: expose --sync-stats via admin api
mgr - Backport #21656: luminous: crash on DaemonPerfCounters::update
CephFS - Backport #21658: luminous: purge queue and standby replay mds
mgr - Backport #21659: luminous: Crash in get_metadata_python during MDS restart
rgw - Backport #21684: luminous: keystone: Thread::join assert when joining uninitialized revoke thread from TokenCache dtor
RADOS - Backport #21692: luminous: Kraken client crash after upgrading cluster from Kraken to Luminous
RADOS - Backport #21693: luminous: interval_map.h: 161: FAILED assert(len > 0)
rgw - Backport #21696: luminous: fix a bug about inconsistent unit of comparison
rgw - Backport #21698: luminous: radosgw-admin usage show loops indefinitely
mgr - Backport #21699: luminous: mgr status module uses base 10 units
RADOS - Backport #21701: luminous: ceph-kvstore-tool does not call bluestore's umount on exit
RADOS - Backport #21702: luminous: BlueStore::umount will crash when the BlueStore is opened by start_kv_only()
RADOS - Backport #21719: luminous: doc fails build with latest breathe
Backport #21729: luminous: ceph-disk: retry on OSError
Backport #21732: luminous: "cluster [WRN] Health check failed: noup flag(s) set (OSDMAP_FLAGS)" in rest-luminous-distro
Backport #21755: RADOS: Get "valid pg state" error at ceph pg ls commands
RADOS - Backport #21783: luminous: cli/crushtools/build.t sometimes fails in jenkins' "make check" run
Backport #21787: luminous: "Error EINVAL: invalid command" in upgrade:jewel-x-luminous-distro-basic-smithi
rgw - Backport #21789: luminous: user creation can overwrite existing user even if different uid is given
rgw - Backport #21790: luminous: RGW: Multipart upload may double the quota
rgw - Backport #21792: luminous: encryption: reject requests that don't provide all expected headers
Backport #21795: luminous: Ubuntu amd64 client can not discover the ubuntu arm64 ceph cluster
CephFS - Backport #21804: luminous: limit internal memory usage of object cacher.
CephFS - Backport #21805: luminous: client_metadata can be missing
CephFS - Backport #21806: luminous: FAILED assert(in->is_dir()) in MDBalancer::handle_export_pins()
CephFS - Backport #21810: luminous: mds: trims all unpinned dentries when memory limit is reached
rgw - Backport #21816: luminous: multisite: multipart uploads fail to sync
rgw - Backport #21817: zone compression type is not validated
rgw - Backport #21854: luminous: rgw_file: explicit NFSv3 open() emulation
rbd - Backport #21855: luminous: [object map] removing a large image (~100TB) with an object map may result in loss of OSD
rgw - Backport #21856: luminous: rgw: disable dynamic resharding in multisite environment
rgw - Backport #21857: luminous: rgw: We can't get torrents if objects are encrypted using SSE-C
RADOS - Backport #21899: luminous: [upgrade] buffer::list ABI broken in luminous release
rbd - Backport #21914: luminous: [rbd-mirror] peer cluster connections should filter out command line optionals
Backport #21917: luminous: some kernels don't understand crush compat weight-set
rbd - Backport #21918: luminous: Disable messenger logging (debug ms = 0/0) for clients unless overridden.
rgw - Backport #21919: luminous: rgw_file: full enumeration needs new readdir whence strategy, plus recursive rm side-effect of FLAG_EXACT_MATCH found
rgw - Backport #21937: luminous: source data in 'default.rgw.buckets.data' may not be deleted after inter-bucket copy
rgw - Backport #21938: luminous: multisite: data sync status advances despite failure in RGWListBucketIndexesCR
rgw - Backport #21939: luminous: listing a bucket with versioning enabled gives wrong result when using a marker
CephFS - Backport #21955: luminous: qa: add EC data pool to testing
rgw - Backport #21972: luminous: rgw_file: objects have wrong accounted_size
Backport #21974: luminous: ceph-disk dmcrypt does not unlock blockdevice for bluestore
RADOS - Backport #22019: luminous: "ceph osd create" is not idempotent
rgw - Backport #22020: luminous: multisite: race between sync of bucket and bucket instance metadata
mgr - Backport #22032: luminous: dashboard barfs on nulls where it expects numbers
mgr - Backport #22034: luminous: key mismatch for mgr after upgrade from jewel to luminous(dev)
mgr - Backport #22035: luminous: Spurious ceph-mgr failovers during mon elections
Backport #22079: luminous: "ceph osd df" crashes ceph-mon if mgr is offline
RADOS - Backport #22150: luminous: bluestore: segv during unmount
RADOS - Backport #22166: luminous: bluestore fsck took 224.778802 seconds to complete, which caused "timed out waiting for admin_socket to appear after osd.1 restart"
Backport #22212: luminous: ceph-disk: add deprecation warnings
Backport #22235: luminous: ceph-disk flake8 test fails on very old, and very new, versions of flake8
Backport #22251: luminous: macros expanding in spec file comment
Backport #22262: luminous: ceph-disk-test.py:test_activate_multipath fails because of nearfull on osd.2
RADOS - Backport #22270: ceph df reports wrong pool usage pct
RADOS - Backport #22275: luminous: mgr/PyModuleRegistry.cc: 139: FAILED assert(map.epoch > 0)
RADOS - Backport #22391: luminous: thrashosds defaults to min_in 3, some ec tests are (2,2)
rgw - Backport #22571: luminous: Stale bucket index entry remains after object deletion
Backport #22780: luminous: backfill cancellation makes target crash; now triggered by recovery preemption
Backport #23009: luminous: Filestore rocksdb compaction readahead option not set by default