Ceph
v12.2.6: 98% complete, 155 issues (152 closed, 3 open)
Time tracking
Estimated time: 0.00 hours
Spent time: 1.00 hour
Issues by tracker (closed/total)
Bug: 8/10
Documentation: 0/1
Backport: 144/144
Related issues
Bug #24225: AArch64 CRC32 crash with SIGILL
CephFS - Bug #24369: luminous: checking quota while holding cap ref may deadlock
CephFS - Bug #24370: luminous: root dir's new snapshot lost when restart mds
Bug #24421: async messager thread cpu high, osd service not normal until restart
RADOS - Bug #24511: osd crushed at thread_name:safe_timer
devops - Bug #24622: No module named rados
Bug #24661: os/bluestore: don't store/use path_block.{db,wal} from meta
rgw - Bug #24789: [rgw] ERROR: unable to remove bucket(2) No such file or directory (if --bypass-gc)
rgw - Bug #24947: Ceph Luminous radosgw: Couldn't init storage provider (RADOS)
bluestore - Bug #25098: Bluestore OSD failed to start with `bluefs_types.h: 54: FAILED assert(pos <= end)`
bluestore - Documentation #24712: Memory recommendations for bluestore
rgw - Backport #22637: luminous: rgw:lc: set LifecycleConfiguration without "Rule" tag return OK
CephFS - Backport #22696: luminous: client: dirty caps may never get the chance to flush
Backport #22769: luminous: allow client requests to preempt scrub
RADOS - Backport #22934: luminous: filestore journal replay does not guard omap operations
rgw - Backport #22937: luminous: beast: listen on multiple endpoints
CephFS - Backport #23151: luminous: doc: update ceph-fuse with FUSE options
CephFS - Backport #23157: luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
rgw - Backport #23227: luminous: clang compilation error in BoundedKeyCounter
rgw - Backport #23231: luminous: rgw_statfs should report the correct stats
CephFS - Backport #23308: luminous: doc: Fix -d option in ceph-fuse doc
CephFS - Backport #23474: luminous: client: allow caller to request that setattr request be synchronous
CephFS - Backport #23475: luminous: ceph-fuse: trim ceph-fuse -V output
rbd - Backport #23607: luminous: import-diff failed: (33) Numerical argument out of domain - if image size of the child is larger than the size of its parent
rbd - Backport #23631: luminous: python bindings fixes and improvements
CephFS - Backport #23632: luminous: mds: handle client requests when mds is stopping
CephFS - Backport #23635: luminous: client: fix request send_to_auth was never really used
CephFS - Backport #23636: luminous: mds: kicked out by monitor during rejoin
CephFS - Backport #23637: luminous: mds: assertion in MDSRank::validate_sessions
CephFS - Backport #23638: luminous: ceph-fuse: getgroups failure causes exception
rbd - Backport #23640: luminous: rbd: import with option --export-format fails to protect snapshot
Messengers - Backport #23666: luminous: SIGFPE, Arithmetic exception in AsyncConnection::_process_connection
RADOS - Backport #23668: luminous: There is no 'ceph osd pool get erasure allow_ec_overwrites' command
CephFS - Backport #23671: luminous: mds: MDBalancer using total (all time) request count in load statistics
bluestore - Backport #23672: luminous: bluestore: ENODATA on aio
RADOS - Backport #23675: luminous: qa/workunits/mon/test_mon_config_key.py fails on master
rgw - Backport #23681: luminous: mg_read() call has wrong arguments
rgw - Backport #23682: luminous: rgw:failed to pass test_bucket_list_maxkeys_unreadable in s3-test
rgw - Backport #23683: luminous: radosgw-admin should not use metadata cache when not needed
rgw - Backport #23685: luminous: rgw_file: post deadlock fix, direct deleted RGWFileHandle objects can remain in handle table
CephFS - Backport #23698: luminous: mds: load balancer fixes
bluestore - Backport #23700: luminous: osd: KernelDevice.cc: 539: FAILED assert(r == 0)
CephFS - Backport #23702: luminous: mds: sessions opened by journal replay do not get dirtied properly
CephFS - Backport #23703: luminous: MDSMonitor: mds health warnings printed in bad format
CephFS - Backport #23704: luminous: ceph-fuse: broken directory permission checking
CephFS - Backport #23750: luminous: mds: ceph.dir.rctime follows dir ctime not inode ctime
CephFS - Backport #23770: luminous: ceph-fuse: return proper exit code
CephFS - Backport #23771: luminous: client: fix gid_count check in UserPerm->deep_copy_from()
Backport #23782: luminous: table of contents doesn't render for luminous/jewel docs
RADOS - Backport #23784: luminous: osd: Warn about objects with too many omap entries
RADOS - Backport #23786: luminous: "utilities/env_librados.cc:175:33: error: unused parameter 'offset' [-Werror=unused-parameter]" in rados
CephFS - Backport #23791: luminous: MDSMonitor: new file systems are not initialized with the pending_fsmap epoch
CephFS - Backport #23792: luminous: MDSMonitor: MDSMonitor::encode_pending modifies fsmap not pending_fsmap
CephFS - Backport #23802: luminous: slow ceph_ll_sync_inode calls after setattr
RADOS - Backport #23808: luminous: upgrade: bad pg num and stale health status in mixed lumnious/mimic cluster
CephFS - Backport #23818: luminous: client: add option descriptions and review levels (e.g. LEVEL_DEV)
CephFS - Backport #23833: luminous: MDSMonitor: crash after assigning standby-replay daemon in multifs setup
CephFS - Backport #23835: luminous: mds: fix occasional dir rstat inconsistency between multi-MDSes
RADOS - Backport #23850: luminous: Read operations segfaulting multiple OSDs
RADOS - Backport #23852: luminous: OSD crashes on empty snapset
rgw - Backport #23861: luminous: rgw: admin rest api shouldn't return error when getting user's stats if the user hasn't create any bucket.
rgw - Backport #23862: luminous: aws4 auth not implemented for PutBucketRequestPayment
RADOS - Backport #23863: luminous: scrub interaction with HEAD boundaries and clones is broken
rgw - Backport #23864: luminous: compression ratio depends on block size, which is much smaller (16K vs 4M) in multisite sync
rgw - Backport #23865: luminous: [rgw] GET <object>?torrent returns object's body instead torrent-file
rgw - Backport #23866: luminous: No meaningful error when RGW cannot create pools due to lack of available PGs
rgw - Backport #23868: luminous: rgw: do not reflect period if not current in RGWPeriodPuller::pull
rgw - Backport #23869: luminous: rgw sends garbage meta.compression to ElasticSearch
RADOS - Backport #23870: luminous: null map from OSDService::get_map in advance_pg
bluestore - Backport #23881: luminous: Bluestore OSD hit assert((log_reader->buf.pos & ~super.block_mask()) == 0)
rgw - Backport #23886: luminous: Resharding hangs with versioning-enabled buckets
rbd - Backport #23900: luminous: [rbd-mirror] asok hook for image replayer not re-registered after bootstrap
rbd - Backport #23902: luminous: [rbd-mirror] local tag predecessor mirror uuid is incorrectly replaced with remote
RADOS - Backport #23904: luminous: Deleting a pool with active watch/notify linger ops can result in seg fault
rgw - Backport #23906: luminous: libcurl ignores headers with empty value, leading to signature mismatches
RADOS - Backport #23912: luminous: mon: High MON cpu usage when cluster is changing
rbd - Backport #23913: luminous: rbd-nbd can deadlock in logging thread
RADOS - Backport #23914: luminous: cache-try-flush hits wrlock, busy loops
RADOS - Backport #23915: luminous: monitors crashing ./include/interval_set.h: 355: FAILED assert(0) (jewel+kraken)
RADOS - Backport #23924: luminous: LibRadosAio.PoolQuotaPP failed
RADOS - Backport #23925: luminous: assert on pg upmap
CephFS - Backport #23930: luminous: mds: scrub code stuck at trimming log segments
CephFS - Backport #23931: luminous: qa: test_purge_queue_op_rate: self.assertTrue(phase2_ops < phase1_ops * 1.25)
CephFS - Backport #23933: luminous: client: avoid second lock on client_lock
CephFS - Backport #23934: luminous: client: "remove_session_caps still has dirty|flushing caps" when thrashing max_mds
CephFS - Backport #23935: luminous: mds: may send LOCK_SYNC_MIX message to starting MDS
CephFS - Backport #23936: luminous: cephfs-journal-tool: segfault during journal reset
rbd - Backport #23945: luminous: potential race in rbd-mirror disconnect QA test
CephFS - Backport #23946: luminous: mds: crash when failover
CephFS - Backport #23950: luminous: mds: stopping rank 0 cannot shutdown until log is trimmed
CephFS - Backport #23951: luminous: mds: stuck during up:stopping
rgw - Backport #23977: luminous: multisite: misleading error on normal shutdown
CephFS - Backport #23982: luminous: qa: TestVolumeClient.test_lifecycle needs updated for new eviction behavior
CephFS - Backport #23984: luminous: mds: scrub on fresh file system fails
rbd - Backport #23985: luminous: librbd::Watcher's handle_rewatch_complete might fire after object destroyed
RADOS - Backport #23986: luminous: recursive lock of objecter session::lock on cancel
CephFS - Backport #23987: luminous: cephfs does not count st_nlink for directories correctly?
RADOS - Backport #23988: luminous: luminous->master: luminous crashes with AllReplicasRecovered in Started/Primary/Active/NotRecovering state
CephFS - Backport #23991: luminous: client: hangs on umount if it had an MDS session evicted
mgr - Backport #24014: luminous: mgr/influx: Module fails to parse service names if RGW is present
RADOS - Backport #24015: luminous: UninitCondition in PG::RecoveryState::Incomplete::react(PG::AdvMap const&)
RADOS - Backport #24016: luminous: scrub interaction with HEAD boundaries and snapmapper repair is broken
RADOS - Backport #24042: luminous: ceph-disk log is written to /var/run/ceph
Backport #24043: luminous: java compile error: Source/Target option 1.5 is not supported since jdk 9
RADOS - Backport #24048: luminous: pg-upmap cannot balance in some case
CephFS - Backport #24049: luminous: ceph-fuse: missing dentries in readdir result
CephFS - Backport #24050: luminous: mds: MClientCaps should carry inode's dirstat
CephFS - Backport #24055: luminous: VolumeClient: allow ceph_volume_client to create 'volumes' without namespace isolation
RADOS - Backport #24059: luminous: Deleting a pool with active notify linger ops can result in seg fault
rgw - Backport #24060: luminous: RGW Multi-site radosgw-admin sync status and radosgw-admin bucket sync status commands addition of 'detail' flag
rgw - Backport #24063: luminous: Misnamed S3 operation
Backport #24070: luminous: build-integration-branch script can fail with UnicodeEncodeError
rbd - Backport #24084: luminous: [rbd-mirror] bootstrap should not raise -EREMOTEIO if local image still attached
rbd - Backport #24086: luminous: [rbd-mirror] potential races during PoolReplayer shut-down
CephFS - Backport #24107: luminous: PurgeQueue::_consume() could return true when there were no purge queue item actually executed.
CephFS - Backport #24108: luminous: MDCache.cc: 5317: FAILED assert(mds->is_rejoin())
rgw - Backport #24120: luminous: rgw: 403 error when creating an object with metadata containing sequence of spaces
Backport #24122: luminous: selinux denials with ceph-deploy/ceph-volume lvm device
CephFS - Backport #24130: luminous: mds: race with new session from connection and imported session
bluestore - Backport #24132: luminous: "122 - unittest_bluefs (OTHER_FAULT)" during ctest run
RADOS - Backport #24153: luminous: Eviction still raced with scrub due to preemption
rbd - Backport #24156: luminous: [rbd-mirror] potential deadlock when running asok 'flush' command
CephFS - Backport #24185: luminous: client: segfault in trim_caps
CephFS - Backport #24188: luminous: kceph: umount on evicted client blocks forever
CephFS - Backport #24189: luminous: qa: kernel_mount.py umount must handle timeout arg
RADOS - Backport #24198: luminous: mon: slow op on log message
CephFS - Backport #24201: luminous: client: fails to respond cap revoke from non-auth mds
CephFS - Backport #24205: luminous: mds: broadcast quota to relevant clients when quota is explicitly set
CephFS - Backport #24207: luminous: client: deleted inode's Bufferhead which was in STATE::Tx would lead a assert fail
RADOS - Backport #24214: luminous: Module 'balancer' has failed: could not find bucket -14
RADOS - Backport #24216: luminous: "process (unknown)" in ceph logs
RADOS - Backport #24245: luminous: Manager daemon y is unresponsive during teuthology cluster teardown
bluestore - Backport #24247: luminous: SharedBlob::put() racy
rgw - Backport #24252: luminous: Admin OPS Api overwrites email when user is modified
RADOS - Backport #24258: luminous: crush device class: Monitor Crash when moving Bucket into Default root
RADOS - Backport #24279: luminous: RocksDB compression is not supported at least on Debian.
RADOS - Backport #24290: luminous: common: JSON output from rados bench write has typo in max_latency key
rgw - Backport #24298: luminous: rgw: fix 'copy part' without 'x-amz-copy-source-range' when compression enabled
rgw - Backport #24302: luminous: rgw: (jewel) can't delete swift acls with swift command.
rgw - Backport #24314: luminous: multisite test failures in test_versioned_object_incremental_sync
RADOS - Backport #24328: luminous: assert manager.get_num_active_clean() == pg_num on rados/singleton/all/max-pg-per-osd.from-primary.yaml
CephFS - Backport #24331: luminous: mon: mds health metrics sent to cluster log indpeendently
Backport #24334: luminous: ARMv8 feature detection broken, leading to illegal instruction crashes
CephFS - Backport #24341: luminous: mds memory leak
RADOS - Backport #24351: luminous: slow mon ops from osd_failure
rgw - Backport #24353: luminous: rgw: request with range defined as "bytes=0--1" returns 416 InvalidRange
RADOS - Backport #24356: luminous: osd: pg hard limit too easy to hit
RADOS - Backport #24374: luminous: mon: auto compaction on rocksdb should kick in more often
rbd - Backport #24378: luminous: [rbd-mirror] entries_behind_master will not be zero after mirror over
rgw - Backport #24393: luminous: rgw: making implicit_tenants backwards compatible
rgw - Backport #24477: luminous: Bucket lifecycles stick around after buckets are deleted
bluestore - Backport #24503: luminous: ObjectStore/StoreTestSpecificAUSize.SyntheticMatrixNoCsum/2 failed fsck with stray objects
rbd - Backport #24740: luminous: CLI unit formatting tests are broken
Backport #24750: luminous: os/bluestore: don't store/use path_block.{db,wal} from meta
RADOS - Backport #24806: luminous: rgw workload makes osd memory explode