Ceph
v12.2.13
98% complete: 251 issues (246 closed, 5 open)
Time tracking: estimated 0.00 hours, spent 0.00 hours
Issues by tracker: Bug 16/21 closed, Backport 230/230 closed
Related issues
rgw - Bug #23147: RGW: metrics 'qlen', 'qactive' are not work
CephFS - Bug #39395: ceph: ceph fs auth fails
CephFS - Bug #40182: luminous: pybind: luminous volume client breaks against nautilus cluster
CephFS - Bug #40200: luminous: mds: does fails assert(session->get_nref() == 1) when balancing
CephFS - Bug #40286: luminous: qa: remove ubuntu 14.04 testing
CephFS - Bug #40584: kernel build failure in kernel_untar_build.sh
bluestore - Bug #41367: rocksdb: submit_transaction error: Corruption: block checksum mismatch code = 2
rgw - Bug #41370: [RGW] RGW in website mode: rgw_rados.h: 2150: FAILED assert(!obj.empty()
rgw - Bug #41401: rgw: api_name fixes from Nautilus (e.g., allows CreateBucket w/alternate placement)
rgw - Bug #42056: rgw: librgw write wrongly closed in NFS3
RADOS - Bug #42058: OSD reconnected across map epochs, inconsistent pg logs created
RADOS - Bug #42175: _txc_add_transaction error (2) No such file or directory not handled on operation 15
CephFS - Bug #42193: luminous: MDS crash running upgrade test
Bug #42316: msg/async: do not bump connect_seq for fault during ACCEPTING_SESSION
rbd - Bug #42828: rbd journal err assert(ictx->journal != __null) when release exclusive_lock
RADOS - Bug #43175: pgs inconsistent, union_shard_errors=missing
rgw - Bug #43269: rgw: lc: continue past get_obj_state() failure
RADOS - Bug #43421: mon spends too much time to build incremental osdmap
rgw - Bug #43562: Error in tcmalloc
rgw - Bug #44008: multi-part upload will lost part data when you abort and resume a multipart upload request by using aws java Signature Version 4 api
rgw - Bug #44967: rgw:rgw crash when putting object tagging and post object with malformedXML
rgw - Backport #23223: luminous: rgw: garbage collector removes objects slowly
rgw - Backport #23237: Corrupted downloads from civetweb when using multipart with slow connections
RADOS - Backport #24360: luminous: osd: leaked Session on osd.7
devops - Backport #36080: luminous: aarch64: Compiler-based detection of crc32 extended CPU type is broken
rgw - Backport #37497: luminous: get or set rgw realm zonegroup zone should check user's caps for security
devops - Backport #37612: luminous: rpm: missing dependency on python34-ceph-argparse from python34-cephfs (and others?)
rbd - Backport #37692: luminous: Image mirroring should be disabled when it is moved to trash
Backport #37748: luminous: Add clear-data-digest command to objectstore tool
rgw - Backport #37892: luminous: doc: wrong value of usage log default in logging section
RADOS - Backport #38205: luminous: osds allows to partially start more than N+2
Messengers - Backport #38242: luminous: msg/async: connection race + winner fault can leave connection in standby
RADOS - Backport #38276: luminous: osd_map_message_max default is too high?
CephFS - Backport #38340: luminous: mds: may leak gather during cache drop
rgw - Backport #38397: luminous: rgw: when exclusive lock fails due existing lock, log add'l info
RADOS - Backport #38436: luminous: crc cache should be invalidated when posting preallocated rx buffers
rbd - Backport #38440: luminous: compare-and-write skips compare after copyup without object map
RADOS - Backport #38442: luminous: osd-markdown.sh can fail with CLI_DUP_COMMAND=1
CephFS - Backport #38445: luminous: mds: drop cache does not timeout as expected
rbd - Backport #38508: luminous: [rbd-mirror] LeaderWatcher stuck in loop if pool deleted
RADOS - Backport #38551: luminous: core: lazy omap stat collection
rbd - Backport #38564: luminous: [librbd] race condition possible when validating RBD pool
RADOS - Backport #38567: luminous: osd_recovery_priority is not documented (but osd_recovery_op_priority is)
rbd - Backport #38674: luminous: Performance improvements for object-map
CephFS - Backport #38686: luminous: kcephfs TestClientLimits.test_client_pin fails with "client caps fell below min"
rgw - Backport #38714: luminous: rgw: gc entries with zero-length chains are not cleaned up
RADOS - Backport #38719: luminous: crush: choose_args array size mis-sized when weight-sets are enabled
rgw - Backport #38748: luminous: non existant mdlog failures logged at level 0
RADOS - Backport #38750: luminous: should report EINVAL in ErasureCode::parse() if m<=0
mgr - Backport #38781: luminous: mgr/balancer: blame if upmap won't actually work
RADOS - Backport #38873: luminous: Rados.get_fsid() returning bytes in python3
CephFS - Backport #38877: luminous: mds: high debug logging with many subtrees is slow
RADOS - Backport #38880: luminous: ENOENT in collection_move_rename on EC backfill target
rgw - Backport #38884: luminous: Lifecycle doesn't remove delete markers
rgw - Backport #38887: luminous: GetBucketCORS API returns "Not Found" error code when CORS configuration does not exist
RADOS - Backport #38902: luminous: Minor rados related documentation fixes
RADOS - Backport #38905: luminous: osd/PGLog.h: print olog_can_rollback_to before deciding to rollback
rgw - Backport #38908: luminous: rgw: read not exists null version success and return empty data
rgw - Backport #38920: luminous: "Caught signal (Aborted) thread_name:radosgw" in ceph dashboard tests Jenkins job
rgw - Backport #38925: luminous: beast frontend option to set the TCP_NODELAY socket option
rbd - Backport #38954: luminous: backport krbd discard qa fixes to stable branches
rgw - Backport #38958: luminous: multisite: sync status on master zone does not show "oldest incremental change not applied"
Backport #38962: luminous: DaemonServer::handle_conf_change - broken locking
rbd - Backport #38975: luminous: return ETIMEDOUT if we meet a timeout in poll
rgw - Backport #39016: luminous: unable to cancel reshard operations for buckets with tenants
RADOS - Backport #39042: luminous: osd/PGLog: preserve original_crt to check rollbackability
rgw - Backport #39177: luminous: rgw: remove_olh_pending_entries() does not limit the number of xattrs to remove
rgw - Backport #39180: luminous: rgw: orphans find perf improvments
CephFS - Backport #39191: luminous: mds: crash during mds restart
CephFS - Backport #39198: luminous: mds: we encountered "No space left on device" when moving huge number of files into one directory
RADOS - Backport #39204: luminous: osd: leaked pg refs on shutdown
CephFS - Backport #39208: luminous: mds: mds_cap_revoke_eviction_timeout is not used to initialize Server::cap_revoke_eviction_timeout
CephFS - Backport #39213: luminous: mds: there is an assertion when calling Beacon::shutdown()
RADOS - Backport #39218: luminous: osd: FAILED ceph_assert(attrs || !pg_log.get_missing().is_missing(soid) || (it_objects != pg_log.get_log().objects.end() && it_objects->second->op == pg_log_entry_t::LOST_REVERT)) in PrimaryLogPG::get_object_context()
CephFS - Backport #39221: luminous: mds: behind on trimming and "[dentry] was purgeable but no longer is!"
rgw - Backport #39227: luminous: rgw_file: can't retrieve etag of empty object written through NFS
CephFS - Backport #39231: luminous: kclient: nofail option not supported
RADOS - Backport #39239: luminous: "sudo yum -y install python34-cephfs" fails on mimic
Messengers - Backport #39243: luminous: msg/async: connection race + winner fault can leave connection stuck at replacing forever
bluestore - Backport #39247: luminous: os/bluestore: fix length overflow
bluestore - Backport #39254: luminous: occaionsal ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2 failure
rgw - Backport #39272: luminous: rgw: S3 policy evaluated incorrectly
Backport #39277: luminous: platform.linux_distribution() is deprecated; stop using it
rbd - Backport #39314: luminous: krbd: fix rbd map hang due to udev return subsystem unordered
Backport #39332: luminous: Build with lttng on openSUSE
RADOS - Backport #39343: luminous: ceph-objectstore-tool rename dump-import to dump-export
rgw - Backport #39358: luminous: Compliance to aws s3's relaxed query handling behaviour
rgw - Backport #39360: luminous: rgw:failed to pass test_bucket_create_naming_bad_punctuation in s3test
RADOS - Backport #39373: luminous: ceph tell osd.xx bench help : gives wrong help
rgw - Backport #39409: luminous: inefficient unordered bucket listing
RADOS - Backport #39420: luminous: Don't mark removed osds in when running "ceph osd in any|all|*"
mgr - Backport #39424: luminous: mgr: deadlock
rbd - Backport #39427: luminous: 'rbd mirror status --verbose' will occasionally seg fault
RADOS - Backport #39431: luminous: Degraded PG does not discover remapped data on originating OSD
bluestore - Backport #39444: luminous: OSD crashed in BitmapAllocator::init_add_free()
mgr - Backport #39457: luminous: mgr/prometheus: replace whitespaces in metric names
rbd - Backport #39460: luminous: [rbd-mirror] "bad crc in data" error when listing large pools
Messengers - Backport #39463: luminous: print client IP in default debug_ms log level when "bad crc in {front|middle|data}" occurs
CephFS - Backport #39468: luminous: There is no punctuation mark or blank between tid and client_id in the output of "ceph health detail"
RADOS - Backport #39474: luminous: segv in fgets() in collect_sys_info reading /proc/cpuinfo
rgw - Backport #39497: luminous: rgw admin: object stat command output's delete_at not readable
RADOS - Backport #39537: luminous: osd/ReplicatedBackend.cc: 1321: FAILED assert(get_parent()->get_log().get_log().objects.count(soid) && (get_parent()->get_log().get_log().objects.find(soid)->second->op == pg_log_entry_t::LOST_REVERT) && (get_parent()->get_log().get_log().object
RADOS - Backport #39563: luminous: Error message displayed when mon_osd_max_split_count would be exceeded is not as user-friendly as it could be
bluestore - Backport #39565: luminous: ceph-bluestore-tool: bluefs-bdev-expand silently bypasses main device (slot 2)
rgw - Backport #39572: luminous: send x-amz-version-id header in PUT response
rbd - Backport #39589: luminous: qa/tasks/rbd_fio: fixed missing delimiter between 'cd' and 'configure'
rgw - Backport #39603: luminous: document CreateBucketConfiguration for s3 PUT Bucket request
rgw - Backport #39615: luminous: civetweb frontend: response is buffered in memory if content length is not explicitly specified
bluestore - Backport #39638: luminous: fsck on mkfs breaks ObjectStore/StoreTestSpecificAUSize.BlobReuseOnOverwrite
rbd - Backport #39673: luminous: [test] possible race condition in rbd-nbd disconnect
CephFS - Backport #39691: luminous: mds: error "No space left on device" when create a large number of dirs
rgw - Backport #39696: luminous: rgw: success returned for put bucket versioning on a non existant bucket
RADOS - Backport #39719: luminous: short pg log+nautilus-p2p-stress-split: "Error: finished tid 3 when last_acked_tid was 5" in upgrade:nautilus-p2p
rbd - Backport #39727: luminous: [test] devstack is broken (again)
rgw - Backport #39732: luminous: rgw: allow radosgw-admin bucket list to use the --allow-unordered flag
rgw - Backport #39733: luminous: multisite: mismatch of bucket creation times from List Buckets
rgw - Backport #39747: luminous: Add support for --bypass-gc flag of radosgw-admin bucket rm command in RGW Multi-site
Backport #40004: luminous: do_cmake.sh: "source" not found
rgw - Backport #40032: luminous: rgw metadata search (elastic search): meta sync: ERROR: failed to read mdlog info with (2) No such file or directory
CephFS - Backport #40041: luminous: avoid trimming too many log segments after mds failover
RADOS - Backport #40082: luminous: osd: Better error message when OSD count is less than osd_pool_default_size
rgw - Backport #40092: luminous: Missing Documentation for radosgw-admin reshard commands (man pages)
rgw - Backport #40127: luminous: rgw: Swift interface: server side copy fails if object name contains `?`
rgw - Backport #40132: luminous: rgw: putting X-Object-Manifest via TempURL should be prohibited
rgw - Backport #40135: luminous: rgw: the Multi-Object Delete operation of S3 API wrongly handles the "Code" response element
rgw - Backport #40138: luminous: document steps to disable metadata_heap on existing zones
rgw - Backport #40143: luminous: multisite: 'radosgw-admin bucket sync status' should call syncs_from(source.name) instead of id
rgw - Backport #40149: luminous: rgw: bucket may redundantly list keys after BI_PREFIX_CHAR
CephFS - Backport #40160: luminous: FSAL_CEPH assertion failed in Client::_lookup_name: "parent->is_dir()"
CephFS - Backport #40163: luminous: mount: key parsing fail when doing a remount
CephFS - Backport #40166: luminous: client: ceph.dir.rctime xattr value incorrectly prefixes "09" to the nanoseconds component
CephFS - Backport #40218: luminous: TestMisc.test_evict_client fails
CephFS - Backport #40221: luminous: mds: reset heartbeat during long-running loops in recovery
Backport #40229: luminous: maybe_remove_pg_upmap can be super inefficient for large clusters
rbd - Backport #40233: luminous: [CLI]rbd: get positional argument error when using --image
Backport #40266: luminous: data race in OutputDataSocket
Backport #40318: luminous: "make: *** [hello_world_cpp] Error 127" in rados
CephFS - Backport #40343: luminous: mds: fix corner case of replaying open sessions
rgw - Backport #40347: luminous: ssl tests failing with SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed'),)",)
rgw - Backport #40350: luminous: rgw/OutputDataSocket: append_output(buffer::list&) says it will (but does not) discard output at data_max_backlog
rgw - Backport #40359: luminous: rgw: set null version object issues
bluestore - Backport #40422: luminous: Bitmap allocator return duplicate entries which cause interval_set assert
rbd - Backport #40463: luminous: possible crash when replaying journal with invalid/corrupted ranges
rgw - Backport #40496: luminous: Object Gateway multisite document read-only argument error
rbd - Backport #40499: luminous: [cli] 'export' should handle concurrent IO completions
RADOS - Backport #40502: luminous: osd: rollforward may need to mark pglog dirty
rgw - Backport #40506: luminous: rgw: conditionally allow builtin users with non-unique email addresses
bluestore - Backport #40534: luminous: pool compression options not consistently applied
Backport #40548: luminous: Keyrings created by ceph auth get are not suitable for ceph auth import
rbd - Backport #40551: luminous: [test] qemu-iotests tests fails under latest Ubuntu kernel
rgw - Backport #40559: luminous: rgw: the log output gets very spammy in multisite clusters
rbd - Backport #40574: luminous: Disabling journal might result in assertion failure
rbd - Backport #40592: luminous: rbd_mirror/ImageSyncThrottler.cc: 61: FAILED ceph_assert(m_queue.empty())
RADOS - Backport #40638: luminous: osd: report omap/data/metadata usage
RADOS - Backport #40650: luminous: os/bluestore: fix >2GB writes
RADOS - Backport #40653: luminous: Lower the default value of osd_deep_scrub_large_omap_object_key_threshold
Backport #40697: luminous: test_envlibrados_for_rocksdb.yaml fails installing g++-4.7 on 18.04
rgw - Backport #40735: luminous: multisite: failover docs should use 'realm pull' instead of 'period pull'
bluestore - Backport #40756: luminous: stupid allocator might return extents with length = 0
CephFS - Backport #40807: luminous: mds: msg weren't destroyed before handle_client_reconnect returned, if the reconnect msg was from non-existent session
rgw - Backport #40852: luminous: multisite: radosgw-admin commands should not modify metadata on a non-master zone
rbd - Backport #40880: luminous: Reduce log level for cls/journal and cls/rbd expected errors
CephFS - Backport #40892: luminous: mds: cleanup truncating inodes when standby replay mds trim log segments
RADOS - Backport #40947: luminous: Better default value for osd_snap_trim_sleep
ceph-volume - Backport #40978: luminous: missing string substitution when reporting mounts
CephFS - Backport #41000: luminous: client: failed to drop dn and release caps causing mds stary stacking.
ceph-volume - Backport #41020: luminous: simple: when 'type' file is not present activate fails
ceph-volume - Backport #41057: luminous: ceph-volume does not recognize wal/db partitions created by ceph-disk
rgw - Backport #41104: luminous: rgw: when usring radosgw-admin to list bucket, can set --max-entries excessively high
rgw - Backport #41111: luminous: rgw: fix drain handles error when deleting bucket with bypass-gc option
ceph-volume - Backport #41139: luminous: ceph-volume prints errors to stdout with --format json
ceph-volume - Backport #41202: luminous: ceph-volume prints log messages to stdout
rgw - Backport #41266: luminous: beast frontend throws an exception when running out of FDs
mgr - Backport #41278: luminous: mgr/prometheus: Setting scrape_interval breaks cache timeout comparison
bluestore - Backport #41281: luminous: BlueStore tool to check fragmentation
rbd - Backport #41285: luminous: error from replay does not stored in rbd-mirror status
bluestore - Backport #41289: luminous: fix and improve doc regarding manual bluestore cache settings.
rgw - Backport #41322: luminous: multisite: datalog/mdlog trim don't loop until done
Backport #41334: luminous: ceph-test RPM not built for SUSE
bluestore - Backport #41338: luminous: os/bluestore/BlueFS: use 64K alloc_size on the shared device
ceph-volume - Backport #41373: luminous: batch functional idempotency test fails since message is now on stderr
rgw - Backport #41382: luminous: rgw: housekeeping of reset stats operation in radosgw-admin and cls back-end
rbd - Backport #41421: luminous: `rbd mirror pool status --verbose` test is missing
rbd - Backport #41439: luminous: [rbd-mirror] cannot connect to remote cluster when running as 'ceph' user
Backport #41458: luminous: proc_replica_log need preserve replica log's crt
rgw - Backport #41480: luminous: rgw dns name is not case sensitive
CephFS - Backport #41489: luminous: client: client should return EIO when it's unsafe reqs have been dropped when the session is close.
bluestore - Backport #41510: luminous: 50-100% iops lost due to bluefs_preextend_wal_files = false
RADOS - Backport #41532: luminous: Move bluefs alloc size initialization log message to log level 1
rbd - Backport #41544: luminous: [test] rbd-nbd FSX test runs are failing
rgw - Backport #41579: luminous: rgw: api_name fixes from Nautilus (e.g., allows CreateBucket w/alternate placement)
ceph-volume - Backport #41613: luminous: ceph-volume lvm list is O(n^2)
rbd - Backport #41621: luminous: in rbd-ggate the assert in Log:open() will trigger
rgw - Backport #41626: luminous: multisite: ENOENT errors from FetchRemoteObj causing bucket sync to stall without retry
Backport #41644: luminous: QA run failures "Command failed on smithi with status 1: '\n sudo yum -y install ceph-radosgw\n ' "
RADOS - Backport #41697: luminous: Network ping monitoring
rgw - Backport #41706: luminous: in cls_bucket_list_unordered() listing of entries following an entry for which check_disk_state() returns -ENOENT may not get listed
bluestore - Backport #41709: luminous: Set concurrent max_background_compactions in rocksdb to 2
rgw - Backport #41713: luminous: can't remove rados objects after copy rgw-object fail
RADOS - Backport #41730: luminous: osd/ReplicatedBackend.cc: 1349: FAILED ceph_assert(peer_missing.count(fromshard))
Backport #41733: luminous: osd: need clear PG_STATE_CLEAN when repair object
rbd - Backport #41772: luminous: RBD image manipulation using python API crashing since Nautilus
rgw - Backport #41808: luminous: rgw: fix minimum of unordered bucket listing
RADOS - Backport #41845: luminous: tools/rados: allow list objects in a specific pg in a pool
RADOS - Backport #41864: luminous: Mimic MONs have slow/long running ops
mgr - Backport #41914: luminous: mgr/test_localpool.sh fails after multiple tries on luminous
RADOS - Backport #41919: luminous: osd: scrub error on big objects; make bluestore refuse to start on big objects
RADOS - Backport #41959: luminous: tools/rados: add --pgid in help
RADOS - Backport #41962: luminous: Segmentation fault in rados ls when using --pgid and --pool/-p together as options
RADOS - Backport #42037: luminous: Enable auto-scaler and get src/osd/PeeringState.cc:3671: failed assert info.last_complete == info.last_update
CephFS - Backport #42039: luminous: client: _readdir_cache_cb() may use the readdir_cache already clear
ceph-volume - Backport #42049: luminous: fix pytest warnings
RADOS - Backport #42127: luminous: mgr/balancer FAILED ceph_assert(osd_weight.count(i.first))
RADOS - Backport #42138: luminous: Remove unused full and nearful output from OSDMap summary
RADOS - Backport #42153: luminous: Removed OSDs with outstanding peer failure reports crash the monitor
RADOS - Backport #42199: luminous: osd/PrimaryLogPG.cc: 13068: FAILED ceph_assert(obc)
RADOS - Backport #42241: luminous: Adding Placement Group id in Large omap log message
Backport #42264: luminous: mimic and luminous still need to read ceph.conf.template from teuthology
RADOS - Backport #42361: luminous: python3-cephfs should provide python36-cephfs
mgr - Backport #42390: luminous: mgr/balancer: 'dict_keys' object does not support indexing
RADOS - Backport #42393: luminous: CephContext::CephContextServiceThread might pause for 5 seconds at shutdown
rbd - Backport #42415: luminous: sphinx spits warning when rendering doc/rbd/qemu-rbd.rst
rbd - Backport #42425: luminous: [rbd] rbd map hangs up infinitely after osd down
rbd - Backport #42527: luminous: concurrent "rbd unmap" failures due to udev
RADOS - Backport #42548: luminous: verify_upmaps can not cancel invalid upmap_items in some cases
mgr - Backport #42573: luminous: restful: Query nodes_by_id for items
RADOS - Backport #42580: luminous: p2p tests fail due to missing python3-cephfs package
Messengers - Backport #42586: luminous: out of order caused by letting old msg from down peer be processed to RESETSESSION
Backport #42663: luminous: RBD mirroring test cases broken in mimic due to bad backport
CephFS - Backport #42672: luminous: qa: cfuse_workunit_kernel_untar_build fails on Ubuntu 18.04
CephFS - Backport #42678: luminous: qa: malformed job
mgr - Backport #42698: luminous: Larger cluster using upmap mode balancer can block other balancer commands
CephFS - Backport #42774: luminous: mds: add command that modify session metadata
mgr - Backport #42784: luminous: mgr/prometheus: UnboundLocalError occurs when obj_store is neither filestore nor bluestore
RADOS - Backport #42796: luminous: unnecessary error message "calc_pg_upmaps failed to build overfull/underfull"
bluestore - Backport #42834: luminous: STATE_KV_SUBMITTED is set too early.
mgr - Backport #42849: luminous: ceph osd status - units invisible using black background
rgw - Backport #42895: luminous: rgw: add list user admin OP API
rbd - Backport #42988: luminous: update kernel.sh for read-only changes
rgw - Backport #43013: luminous: rgw: crypt: permit RGW-AUTO/default with SSE-S3 headers
RADOS - Backport #43093: luminous: Improve OSDMap::calc_pg_upmaps() efficiency
rgw - Backport #43234: luminous: rgw: radosgw_admin teuthology task: No module named bunch
rgw-testing - Backport #43278: luminous: "cd /home/ubuntu/cephtest/s3-tests && ./bootstrap" fails on ubuntu
RADOS - Backport #43325: luminous: wrong datatype describing crush_rule
rbd - Backport #43499: luminous: rbd-mirror daemons don't logrotate correctly
RADOS - Backport #43532: luminous: Change default upmap_max_deviation to 5
bluestore - Backport #43577: luminous: StupidAllocator.cc: 265: FAILED assert(intervals <= max_intervals)
RADOS - Backport #43651: luminous: Improve upmap change reporting in logs
ceph-volume - Backport #43759: luminous: functional tests only assume correct number is osds if branch tests is mimic or luminous
Backport #43926: luminous: kernel_untar_build.sh: bison: command not found