Ceph
v12.2.5
97% complete (136 issues: 132 closed, 4 open)
Time tracking
Estimated time: 0.00 hours
Spent time: 0.00 hours
Issues by tracker (closed/total)
Bug: 11/14
Support: 1/1
Documentation: 0/1
Backport: 120/120
Related issues
ceph-volume - Bug #23140: ceph-volume lvm list should work with raw devices
rgw - Bug #23229: usage trim loops forever: infinite calls to rgw.user_usage_log_trim
Linux kernel client - Bug #23272: switch port down, cephfs kernel client loses session, blocked and does not recover until the port comes back up
Messengers - Bug #23329: async messenger loses session during IO performance testing, does not recover until restart
bluestore - Bug #23390: Identifying NVMe via PCI serial isn't sufficient (Bluestore/SPDK)
ceph-volume - Bug #23496: ceph-volume: lsblk: unknown column: PKNAME,PARTLABEL
ceph-volume - Bug #23497: ceph-volume: lvcreate: unrecognized option '--yes'
ceph-volume - Bug #23644: when no OSDs are found to activate, an AttributeError is raised
rgw - Bug #23817: Bucket policy and colons in filename
Bug #23831: bucket policy ipdeny not in effect
ceph-volume - Bug #23918: "ceph-volume lvm prepare" errors with "no valid command found"
Bug #23944: OSD going down randomly
rgw - Bug #24011: [rgw] Bucket Policy does not work with object tags
rgw - Bug #24603: rgw-multisite: endless loop in RGWBucketShardIncrementalSyncCR
Support #24602: bind unable to bind to 192.168.5.77:7300/0 on any port in range 6800-7300:
RADOS - Documentation #23765: librbd hangs if permissions are incorrect
CephFS - Backport #20823: jewel: client::mkdirs does not behave well when two clients send mkdir requests for the same dir
CephFS - Backport #22383: luminous: qa: src/test/libcephfs/test.cc:376: Expected: (len) > (0), actual: -34 vs 0
rgw - Backport #22635: luminous: s3cmd move object error
CephFS - Backport #22688: luminous: client: fails to release to revoking Fc
rgw - Backport #22766: luminous: RGW doesn't check time skew in auth v4 http header request
rgw - Backport #22812: luminous: Civetweb reports bad response code.
Backport #22856: luminous: build Debian installation packages failure
rbd - Backport #22857: luminous: librbd::object_map::InvalidateRequest: 0x7fbd100beed0 should_complete: r=0
rgw - Backport #22858: luminous: beast: bind to specific ip address
CephFS - Backport #22862: luminous: cephfs-journal-tool: may hit assertion failure due to not shutting down
rgw - Backport #22884: luminous: rgw: document civetweb ssl configuration
rgw - Backport #22889: luminous: rgw_file: avoid fragging thread_local log buffer
CephFS - Backport #22891: luminous: qa: kcephfs lacks many configurations in the fs/multimds suites
CephFS - Backport #22935: luminous: client: setattr should drop "Fs" rather than "As" for mtime and size
CephFS - Backport #22936: luminous: client: readdir bug
Backport #22940: luminous: Double free in rados_getxattrs_next
RADOS - Backport #22942: luminous: ceph osd force-create-pg causes all ceph-mon to crash and become unable to come up again
rbd - Backport #22964: luminous: [rbd-mirror] infinite loop is possible when formatting the status message
CephFS - Backport #22966: luminous: kclient: Test failure: test_full_same_file (tasks.cephfs.test_full.TestClusterFull)
CephFS - Backport #22967: luminous: Journaler::flush() may flush less data than expected, which causes flush waiter to hang
CephFS - Backport #22969: luminous: mds: session reference leak
CephFS - Backport #22971: luminous: mon: removing tier from an EC base pool is forbidden, even if allow_ec_overwrites is set
CephFS - Backport #22972: luminous: mds: move remaining containers in CDentry/CDir/CInode to mempool
mgr - Backport #22983: luminous: balancer should warn about missing requirements
rbd - Backport #23011: luminous: [journal] allocating a new tag after acquiring the lock should use on-disk committed position
CephFS - Backport #23013: luminous: mds: LOCK_SYNC_MIX state makes "getattr" operations extremely slow when lots of clients issue writes or reads to the same file
CephFS - Backport #23016: luminous: mds: assert when inode moves during scrub
rgw - Backport #23020: luminous: The max-uploads parameter doesn't work in List Multipart Uploads
rgw - Backport #23022: luminous: cannot set user quota with a specific value
RADOS - Backport #23024: luminous: thrash-eio + bluestore (hangs with unfound objects or read_log_and_missing assert)
rgw - Backport #23025: luminous: rgw: data sync of versioned objects, note updating bi marker
CephFS - Backport #23060: luminous: qa: ignore more warnings during mds-full test
CephFS - Backport #23061: luminous: qa: kcephfs thrash sub-suite does not ignore MON_DOWN
CephFS - Backport #23062: luminous: qa: mds-full: ignore "Health check failed: pauserd,pausewr flag(s) set (OSDMAP_FLAGS)" in cluster log
bluestore - Backport #23063: luminous: osd: BlueStore.cc: BlueStore::_balance_bluefs_freespace: assert(0 == "allocate failed, wtf");
rbd - Backport #23064: luminous: "[ FAILED ] TestLibRBD.LockingPP" in rbd-next-distro-basic-mult run
bluestore - Backport #23074: luminous: bluestore: statfs available can go negative
RADOS - Backport #23075: luminous: osd: objecter sends out of sync with pg epochs for proxied ops
RADOS - Backport #23077: luminous: mon: ops get stuck in "resend forwarded message to leader"
mgr - Backport #23101: luminous: ceph-mgr fails to start after a system reboot on Ubuntu 16.04
rgw - Backport #23102: luminous: Objects only serving first 512K
RADOS - Backport #23114: luminous: can't delete object from pool when Ceph out of space
CephFS - Backport #23150: luminous: mds: add uptime to status asok command
rbd - Backport #23152: luminous: TestLibRBD.RenameViaLockOwner may still fail with -ENOENT
CephFS - Backport #23154: luminous: mds: FAILED assert (p != active_requests.end()) in MDRequestRef MDCache::request_get(metareqid_t)
CephFS - Backport #23156: luminous: ceph-fuse: clarify -i is not a valid option
Backport #23159: luminous: Drop upgrade/jewel-x/point-to-point-x in luminous and master
RADOS - Backport #23160: luminous: Multiple asserts caused by DNE pgs left behind after lots of OSD restarts
bluestore - Backport #23173: luminous: BlueFS reports rotational journals if BDEV_WAL is not set
RADOS - Backport #23174: luminous: SRV resolution fails to lookup AAAA records
mgr - Backport #23175: luminous: mgr not reporting when ports conflict
rgw - Backport #23176: luminous: some rgw suites override frontend setting in frontend/beast.yaml
rbd - Backport #23177: luminous: [test] OpenStack tempest test is failing across all branches (again)
Backport #23178: luminous: run-make-check.sh thinks it needs debianutils on SUSE
rgw - Backport #23179: luminous: rgw: can't download object with range when compression enabled
rgw - Backport #23180: luminous: radosgw-admin data sync run crashes
RADOS - Backport #23186: luminous: ceph tell mds.* <command> prints only one matching usage
rgw - Backport #23192: rgw_log (and rgw_file): don't use undefined/unset RGWEnv key/value pairs
rgw - Backport #23221: luminous: possible issue with ssl + libcurl
mgr - Backport #23224: luminous: mgr log spamming about down osds
rgw - Backport #23225: luminous: rgw: list bilog will loop forever
bluestore - Backport #23226: luminous: bluestore_cache_data uses too much memory
mgr - Backport #23230: luminous: Update mgr/restful documentation
rgw - Backport #23239: luminous: Curl+OpenSSL support in RGW
rgw - Backport #23245: luminous: multisite: segfault in radosgw-admin realm pull
rgw - Backport #23252: luminous: The return value of auth v2/v4 in RGW is wrong when Expires/X-Amz-Expires missing
RADOS - Backport #23256: luminous: bluestore: should recalc_allocated when decoding bluefs_fnode_t
Backport #23268: luminous: osd: add numpg_removing metric
RADOS - Backport #23275: luminous: ceph-objectstore-tool command to trim the pg log
rgw - Backport #23302: luminous: rgw: add radosgw-admin sync error trim to trim sync error log
rbd - Backport #23304: luminous: parent blocks are still seen after a whole-object discard
Backport #23306: luminous: Assertion is raised when fetching file event in Ceph 12.2.1
rgw - Backport #23310: luminous: s3 website: some s3tests are failing because redirects include index doc suffix
RADOS - Backport #23312: luminous: invalid JSON returned when querying pool parameters
mgr - Backport #23313: luminous: mgr: prometheus: internal server error while new OSDs are being added to the cluster.
CephFS - Backport #23314: luminous: client: allow client to use caps that are revoked but not yet returned
RADOS - Backport #23315: luminous: pool create cmd's expected_num_objects is not correctly interpreted
rgw - Backport #23317: luminous: Cannot specify multiple ports for civetweb port/listening_ports due to config parsing
rgw - Backport #23318: luminous: rgw: crash with rgw_run_sync_thread=false
RADOS - Backport #23323: luminous: ERROR type entries of pglog do not update min_last_complete_ondisk, potentially ballooning memory usage
rgw - Backport #23346: luminous: RGWCopyObj silently corrupts the object that was multipart-uploaded in SSE-C
rgw - Backport #23347: luminous: rgw: inefficient buffer usage for PUTs
RADOS - Backport #23349: luminous: Couldn't init storage provider (RADOS)
RADOS - Backport #23351: luminous: filestore: do_copy_range replay bad return value
CephFS - Backport #23355: luminous: client: prevent fallback to remount when dentry_invalidate_cb is true but root->dir is NULL
rgw - Backport #23357: luminous: Admin API support for bucket quota change
rbd - Backport #23407: luminous: [cls] rbd.group_image_list is incorrectly flagged as R/W
mgr - Backport #23409: luminous: mgr: fix MSG_MGR_MAP handling
Backport #23410: luminous: Documentation license version is ambiguous
RADOS - Backport #23412: luminous: delete type mismatch in CephContext teardown
CephFS - Backport #23414: luminous: mds: fixed MDS_FEATURE_INCOMPAT_FILE_LAYOUT_V2 definition breaks luminous upgrades
rbd - Backport #23423: luminous: librados/snap_set_diff: don't assert on empty snapset
RADOS - Backport #23472: luminous: add --add-bucket and --move options to crushtool
RADOS - Backport #23478: should not check for VERSION_ID
RADOS - Backport #23485: luminous: scrub errors not cleared on replicas can cause inconsistent pg state when replica takes over primary
RADOS - Backport #23500: luminous: snapmapper inconsistency, crash on luminous
Backport #23501: luminous: OSD bind to IPv6 link-local address
rbd - Backport #23507: luminous: test_admin_socket.sh may fail on wait_for_clean
Backport #23520: luminous: ceph_authtool: add mode option
Backport #23522: luminous: tests: unittest_pglog timeout
rbd - Backport #23524: luminous: is_qemu_running in qemu_rebuild_object_map.sh and qemu_dynamic_features.sh may return false positive
rbd - Backport #23542: luminous: rbd-nbd: EBUSY when doing map
Backport #23544: luminous: aio_t::rval int type is not enough to contain io_event::res of unsigned long type, causing core dump
rbd - Backport #23545: luminous: "Message too long" error when appending journal
CephFS - Backport #23561: luminous: mds: mds gets significantly behind on trimming while creating millions of files (cont.)
CephFS - Backport #23570: luminous: mds: counter decay incorrect
CephFS - Backport #23572: luminous: mds: make sure that MDBalancer uses heartbeat info from the same epoch
Backport #23606: luminous: "ENGINE Error in 'start' listener <bound " in rados
Backport #23626: mon failed to read inc osdmap
RADOS - Backport #23630: luminous: pg stuck in activating
CephFS - Backport #23634: luminous: doc: outline the steps for upgrading an MDS cluster
RADOS - Backport #23654: luminous: Special scrub handling of hinfo_key errors
mgr - Backport #23667: luminous: mgr: prometheus: 'PG_STATES' still does not include all PG states
rgw - Backport #23686: luminous: radosgw-admin usage show loops indefinitely - again
rgw - Backport #23687: luminous: RGW Reshard error add failed to drop lock on <bucket>
rgw - Backport #23690: luminous: multisite Synchronization failed when read and write delete at the same time
rgw - Backport #23691: luminous: radosgw-admin: add an option to reset user stats
rgw - Backport #23720: luminous: radosgw-admin user stats --sync-stats without a user will create an empty object
rgw - Backport #23758: luminous: usage trim loops forever: infinite calls to rgw.user_usage_log_trim
rgw - Backport #24299: luminous: rgw: object download might fail due to an uninitialized local variable