Ceph
v12.2.11: 94% complete, 133 issues (125 closed, 8 open)
Time tracking
Estimated time: 0.00 hours
Spent time: 0.00 hours
Issues by tracker (closed/total):
Bug: 5/12
Feature: 0/1
Support: 1/1
Backport: 119/119
Related issues
CephFS - Bug #37540: luminous: MDSMap session timeout cannot be modified
RADOS - Bug #37582: luminous: ceph -s client gets all mgrmaps
rgw - Bug #37616: SignatureDoesNotMatch with multipart upload from minio-py
rgw - Bug #37668: AbortMultipartUpload causes data loss (NoSuchKey) when a CompleteMultipartUpload request times out
rgw - Bug #37754: bucket metadata not deleted after placement and bucket deleted
Linux kernel client - Bug #37769: __ceph_remove_cap caused kernel crash
rgw - Bug #37855: only first subuser can be exported to nfs
rgw - Bug #37879: rgw: fix prefix handling in LCFilter
ceph-volume - Bug #37946: ceph-volume simple scan: AttributeError:
Bug #38005: _scan_snaps no head for <object>
rgw - Bug #38119: rgw can't create bucket because it can't find the zonegroup; location constraint (default) can't be found
rgw - Bug #38226: rgw: data sync: ERROR: failed to read remote data log info: ret=-2
rgw - Feature #37522: Keystone type user creation
Support #37918: apt-get upgrade 12.2.8 to 12.2.10 failed
rgw - Backport #24358: luminous: SSL support for beast frontend
CephFS - Backport #24759: luminous: test gets ENOSPC from bluestore block device
Backport #24826: luminous: run-make-check.sh ccache tweaks
CephFS - Backport #24929: luminous: qa: test_recovery_pool tries asok on wrong node
mgr - Backport #25201: luminous: ceph-mgr: Module 'influx' has failed
Backport #26919: luminous: common: (mon) command sanitization accepts floats when Int type is defined, resulting in an exception fault in ceph-mon
bluestore - Backport #26943: luminous: os/bluestore/BlueStore.cc: 1025: FAILED assert(buffer_bytes >= b->length) from ObjectStore/StoreTest.ColSplitTest2/2
CephFS - Backport #32091: luminous: mds: migrate strays part by part when shutting down mds
bluestore - Backport #36145: luminous: fsck: cid is improperly matched to oid
CephFS - Backport #36200: luminous: mds: fix mds damaged due to unexpected journal length
CephFS - Backport #36206: luminous: nfs-ganesha: ceph_fsal_setattr2 returned Operation not permitted
CephFS - Backport #36217: luminous: Some cephfs tool commands silently operate on only rank 0, even if multiple ranks exist
rgw - Backport #36222: luminous: rgw: default quota not set in radosgw for Openstack users
CephFS - Backport #36279: luminous: qa: RuntimeError: FSCID 10 has no rank 1
CephFS - Backport #36281: luminous: mds: add drop_cache command
CephFS - Backport #36309: luminous: doc: Typo error on cephfs/fuse/
CephFS - Backport #36312: luminous: doc: fix broken fstab url in cephfs/fuse
RADOS - Backport #36321: luminous: Add support for osd_delete_sleep configuration value
devops - Backport #36391: luminous: No linker time hardening in ceph rpm builds
rbd - Backport #36407: luminous: [pybind/rbd] Flag RBD_FLAG_FAST_DIFF_INVALID is not exposed in Python bindings
rgw - Backport #36414: luminous: librgw: crashes in multisite configuration
rbd - Backport #36429: luminous: [qa] move OpenStack devstack test to rocky release
RADOS - Backport #36436: luminous: rados rm --force-full is blocked when cluster is in full status
CephFS - Backport #36456: luminous: client: explicitly show blacklisted state via asok status command
CephFS - Backport #36460: luminous: mds: rctime not set on system inode (root) at startup
mgr - Backport #36464: luminous: mgr crash on scrub of unconnected osd
CephFS - Backport #36502: luminous: qa: increase rm timeout for workunit cleanup
CephFS - Backport #36504: luminous: qa: infinite timeout on asok command causes job to die
RADOS - Backport #36506: luminous: mon osdmap cache too small during upgrade to mimic
rbd - Backport #36554: luminous: [rbd-mirror] periodic mirror status timer might fail to be scheduled
RADOS - Backport #36556: luminous: RBD client IOPS pool stats are incorrect (2x higher; includes IO hints as an op)
rbd - Backport #36568: luminous: [test] workunit teuthology tasks race with "git clone"
mgr - Backport #36575: luminous: mgr/status: fix fs status subcommand not showing standby-replay MDS' perf info
CephFS - Backport #36577: luminous: qa: teuthology may hang on diagnostic commands for fuse mount
RADOS - Backport #36630: luminous: potential deadlock in PG::_scan_snaps when repairing snap mapper
RADOS - Backport #36636: luminous: osd: race condition opening heartbeat connection
bluestore - Backport #36638: luminous: rename does not hold ref to replacement onode at old name
CephFS - Backport #36642: luminous: Internal fragmentation of ObjectCacher
rgw - Backport #36644: luminous: SSE encryption does not detect ssl termination in proxy
RADOS - Backport #36646: luminous: librados api aio tests race condition
RADOS - Backport #36657: luminous: Cache-tier forward mode hang in luminous (again)
rgw - Backport #36688: luminous: lock in resharding may expire before the dynamic resharding completes
CephFS - Backport #36691: luminous: client: request next osdmap for blacklisted client
CephFS - Backport #36695: luminous: mds: cache drop command requires timeout argument when it is supposed to be optional
mgr - Backport #36750: luminous: [restful] deep_scrub is not a valid OSD command
rgw - Backport #36757: luminous: rgw-admin: reshard add can add a non-existent bucket
CephFS - Backport #37092: luminous: mds: "src/mds/MDLog.cc: 281: FAILED ceph_assert(!capped)" during max_mds thrashing
Backport #37154: luminous: tests: ceph-admin-commands.sh workunit does not log what it's doing
Backport #37272: luminous: ceph-mgr: blocking requests sent to restful api server hang sometimes
RADOS - Backport #37274: luminous: debian: packaging needs to reflect move of /etc/bash_completion.d/radosgw-admin from radosgw to ceph-common
rgw - Backport #37284: luminous: rgw: radosgw-admin: reshard status prints status codes as enum value (e.g., "0" rather than something human-readable)
RADOS - Backport #37341: luminous: doc: Add bluestore memory autotuning docs
RADOS - Backport #37343: luminous: Prioritize user specified scrubs
rgw - Backport #37349: luminous: when using nfs-ganesha to upload a file, the rgw es sync module fails
mgr - Backport #37362: luminous: mgr: prometheus: not possible to determine wal/db devices
rbd - Backport #37363: luminous: Resize state machine missing unblock_writes if shrink is not allowed
Backport #37365: luminous: doc: edit on github
Backport #37383: luminous: test: Start using GNU awk and fix archiving directory
Backport #37397: luminous: "/usr/bin/ld: cannot find -lradospp" in rados mimic
mgr - Backport #37413: luminous: mgr/balancer: add crush_compat_metrics param to change optimization keys
mgr - Backport #37416: luminous: mgr: various python3 fixes
mgr - Backport #37420: luminous: mgr/balancer: add cmd to list all plans
CephFS - Backport #37423: luminous: qa: wrong setting for msgr failures
CephFS - Backport #37425: luminous: ceph-volume-client: cannot set mode for cephfs volumes as required by OpenShift
Messengers - Backport #37427: luminous: msg/async: crashes when authenticator provided by verify_authorizer not implemented
Backport #37429: luminous: common: WeightedPriorityQueue leaks memory
RADOS - Backport #37438: luminous: crushtool: add --reclassify operation to convert legacy crush maps to use device classes
rgw - Backport #37446: luminous: add a command to trim old bucket instances after resharding completes
Backport #37466: luminous: rgw: master zone deletion without a zonegroup rm would break rgw rados init
rgw - Backport #37475: luminous: multisite: bilog trimming crashes when pgnls fails with EINVAL
mgr - Backport #37478: luminous: src/mgr/DaemonServer.cc: 912: FAILED ceph_assert(daemon_state.exists(key))
rgw - Backport #37482: luminous: Bucket policy and colons in filename
bluestore - Backport #37495: luminous: bluefs-bdev-expand aborts
rgw - Backport #37519: luminous: rgw: fix max-size in radosgw-admin and REST Admin API
rbd - Backport #37535: luminous: rbd_snap_list_end() segfaults if rbd_snap_list() fails
Backport #37537: luminous: Incorrect upmap remove
rgw - Backport #37549: luminous: librgw does not sync s3 user info since started
rgw - Backport #37551: luminous: multisite: sync gets stuck retrying deletes that fail with ERR_PRECONDITION_FAILED
Backport #37553: luminous: linger op gets lost during ceph osd pause and ceph osd unpause
rgw - Backport #37555: luminous: rgw: resharding leaves old bucket info objs and index shards behind
rgw - Backport #37563: luminous: rgw: version bucket stats not correct
Backport #37600: luminous: doc: broken link on troubleshooting-mon page
CephFS - Backport #37602: luminous: mds: severe internal fragmentation when decoding xattr_map from log event
CephFS - Backport #37604: luminous: mds: PurgeQueue write error handler does not handle EBLACKLISTED
CephFS - Backport #37606: luminous: mds: pinned directories keep being replicated back and forth between exporting mds and importing mds
CephFS - Backport #37608: luminous: MDS admin socket command `dump cache` with a very large cache will hang/kill the MDS
CephFS - Backport #37610: luminous: qa: pjd test appears to require more than 3h timeout for some configurations
CephFS - Backport #37623: luminous: qa: client socket inaccessible without sudo
mgr - Backport #37625: luminous: fs status command broken in py3-only environments
CephFS - Backport #37627: luminous: mds: fix incorrect l_pq_executing_ops statistics when meeting an invalid item in purge queue
CephFS - Backport #37629: luminous: mds: do not call Journaler::_trim twice
CephFS - Backport #37631: luminous: client: do not move f->pos until the write succeeds
CephFS - Backport #37633: luminous: mds: remove duplicated l_mdc_num_strays perfcounter set
CephFS - Backport #37635: luminous: race when updating wanted caps
Backport #37643: luminous: ceph-create-keys: fix octal notation for Python 3 without losing compatibility with Python 2
Backport #37685: luminous: Remove capability reset command
Backport #37694: luminous: CephFS remove snapshot results in slow ops
RADOS - Backport #37697: luminous: osd_memory_target: failed assert when options mismatch
CephFS - Backport #37700: luminous: fuse client can't read file because it can't acquire Fr
CephFS - Backport #37737: luminous: MDSMonitor: ignores stopping MDS that was formerly laggy
CephFS - Backport #37739: luminous: extend reconnect period when mds is busy
Backport #37743: luminous: Mgr: OSDMap.cc: 4140: FAILED assert(osd_weight.count(i.first))
CephFS - Backport #37758: luminous: standby-replay MDS spews message to log every second
CephFS - Backport #37762: luminous: mds: deadlock when setting config value via admin socket
RADOS - Backport #37806: luminous: OSD logs are not logging slow requests
Backport #37811: luminous: Empty pg_temps are added to incremental map even if there are no changes in new epoch
Backport #37813: luminous: mon: segmentation fault during shutdown
CephFS - Backport #37820: luminous: mds: create separate config for heartbeat timeout
mgr - Backport #37827: luminous: mgr crash when handle_report updates existing DaemonState for rgw
CephFS - Backport #37829: luminous: ceph-fuse: hangs because it misses the reconnect phase when a hot standby mds switch occurs
rgw - Backport #37831: luminous: Configurable ListBucket max-keys limit
CephFS - Backport #37899: luminous: mds: purge queue recovery hangs during boot if PQ journal is damaged
RADOS - Backport #37903: luminous: osd: pg log hard limit can cause crash during upgrade
CephFS - Backport #37922: luminous: qa: test_damage expectations wrong for Truncate on some objects
CephFS - Backport #37924: luminous: qa: test_damage performs truncate test on same object repeatedly
rgw - Backport #37949: luminous: debug logging for v4 auth does not sanitize encryption keys
CephFS - Backport #37953: luminous: qa: test_damage needs to silence MDS_READ_ONLY
CephFS - Backport #37977: luminous: infinite loop in OpTracker::check_ops_in_flight
RADOS - Backport #37985: luminous: cli: dump osd-fsid as part of osd find <id>