Ceph
v11.2.1
100% complete
266 issues (265 closed, 1 open)
Time tracking
Estimated time: 0.00 hours
Spent time: 0.00 hours
Issues by Tracker (closed/total)
Bug: 7/8
Tasks: 1/1
Backport: 257/257
Related issues
RADOS - Bug #18926: Why osds do not release memory?
CephFS - Bug #19593: purge queue and standby replay mds
devops - Bug #19821: apt-purge ceph-mon, apt-purge ceph-osd fails
rgw - Bug #20177: RGW lifecycle not expiring objects due to permissions on lc pool
RADOS - Bug #20843: assert(i->prior_version == last) when a MODIFY entry follows an ERROR entry
Bug #21021: Failed assert starting OSD after mark_unfound_lost caused process crash
Bug #21038: Upgrading from jewel to kraken - mgr create throws EACCESS: access denied
bluestore - Bug #21068: ceph-disk deploy bluestore fails to create correct block symlink for multipath devices
Stable releases - Tasks #19009: kraken v11.2.1
Backport #18378: kraken: msg/simple/SimpleMessenger.cc: 239: FAILED assert(!cleared)
Backport #18387: kraken: Cannot clone ceph/s3-tests.git (missing branch)
Backport #18403: kraken: cache tiering: base pool last_force_resend not respected (racing read got wrong version)
Backport #18418: kraken: leveldb corruption leads to "Operation not permitted not handled" and assert
Backport #18431: kraken: ceph-disk: error on _bytes2str
CephFS - Backport #18439: kraken: TestVolumeClient.test_evict_client failure creating pidfile
rbd - Backport #18456: kraken: Attempting to remove an image w/ incompatible features results in partial removal
CephFS - Backport #18463: kraken: Decode errors on backtrace will crash MDS
rbd - Backport #18493: kraken: rbd-mirror: sporadic image replayer shut down failure
rbd - Backport #18495: kraken: rbd: Possible deadlock performing a synchronous API action while refresh in-progress
Backport #18497: kraken: osd_recovery_incomplete: failed assert not manager.is_recovered()
rgw - Backport #18499: kraken: rgw: Realm set does not create a new period
rbd - Backport #18501: kraken: rbd-mirror: potential race mirroring cloned image
CephFS - Backport #18531: kraken: speed up readdir by skipping unwanted dn
CephFS - Backport #18540: kraken: Test failure: test_session_reject (tasks.cephfs.test_sessionmap.TestSessionMap)
rgw - Backport #18548: kraken: multisite: segfault after changing value of rgw_data_log_num_shards
rbd - Backport #18549: kraken: rbd: 'metadata_set' API operation should not change global config setting
CephFS - Backport #18552: kraken: ceph-fuse crash during snapshot tests
Backport #18554: kraken: peon wrongly deletes routed pg stats op before receiving pg stats ack
rbd - Backport #18555: kraken: rbd: Potential race when removing two-way mirroring image
rbd - Backport #18557: kraken: rbd: 'rbd bench-write' will crash if --io-size is 4G
CephFS - Backport #18562: kraken: Test Failure: kcephfs test_client_recovery.TestClientRecovery
CephFS - Backport #18566: kraken: MDS crashes on missing metadata object
Backport #18571: kraken: Python Swift client commands in Quick Developer Guide don't match configuration in vstart.sh
rbd - Backport #18601: kraken: rbd: Add missing parameter feedback to 'rbd snap limit'
CephFS - Backport #18604: kraken: cephfs test failures (ceph.com/qa is broken, should be download.ceph.com/qa)
Backport #18606: kraken: ceph-disk prepare writes osd log 0 with root owner
rbd - Backport #18609: kraken: Removing a clone that fails to open its parent might leave dangling rbd_children reference
Backport #18610: kraken: osd: ENOENT on clone
CephFS - Backport #18612: kraken: client: segfault on ceph_rmdir path "/"
CephFS - Backport #18616: kraken: segfault in handle_client_caps
rgw - Backport #18627: kraken: TempURL verification broken for URI encoded object names
rbd - Backport #18632: kraken: rbd: [qa] crash in journal-enabled fsx run
Backport #18659: kraken: /home/dzafman/ceph/src/osd/PG.h: 441: FAILED assert(needs_recovery_map.count(hoid))
rbd - Backport #18668: kraken: [ FAILED ] TestLibRBD.ImagePollIO in upgrade:client-upgrade-kraken-distro-basic-smithi
Backport #18677: kraken: OSD metadata reports filestore when using bluestore
CephFS - Backport #18678: kraken: failed to reconnect caps during snapshot tests
Backport #18682: kraken: mon: 'osd crush move ...' doesn't work on osds
CephFS - Backport #18700: kraken: client: fix the cross-quota rename boundary check conditions
rbd - Backport #18703: kraken: Prevent librbd from blacklisting the in-use librados client
CephFS - Backport #18706: kraken: fragment space check can cause replayed request fail
CephFS - Backport #18707: kraken: failed filelock.can_read(-1) assertion in Server::_dir_is_nonempty
rgw - Backport #18709: kraken: multisite: sync status reports master is on a different period
rgw - Backport #18711: kraken: slave zonegroup cannot enable the bucket versioning
rgw - Backport #18713: kraken: radosgw-admin period update reverts deleted zonegroup
Backport #18721: kraken: systemd restarts Ceph Mon too quickly after failing to start
Backport #18722: kraken: bluestore: full osd will not start. _do_alloc_write failed to reserve 0x10000, etc.
Backport #18723: kraken: osd: calc_clone_subsets misuses try_read_lock vs missing
rbd - Backport #18769: kraken: [ FAILED ] TestJournalTrimmer.RemoveObjectsWithOtherClient
rbd - Backport #18771: kraken: rbd: Improve compatibility between librbd + krbd for the data pool
rgw - Backport #18772: kraken: rgw crashes when updating period with placement group
rbd - Backport #18776: kraken: Qemu crash triggered by network issues
rbd - Backport #18777: kraken: rbd --pool=x rename y z does not work
rgw - Backport #18780: kraken: radosgw swift: error messages: spurious newline after http body causes weird errors.
Backport #18793: kraken: Client message throttles are not changeable without restart
Backport #18805: kraken: "ERROR: Export PG's map_epoch 3901 > OSD's epoch 3281" in upgrade:infernalis-x-jewel-distro-basic-vps
rgw - Backport #18810: kraken: librgw: RGWLibFS::setattr fails on directories
Backport #18814: kraken: PrimaryLogPG: up_osd_features used without the requires_kraken flag in kraken
rbd - Backport #18822: kraken: run-rbd-unit-tests.sh assert in lockdep_will_lock, TestLibRBD.ObjectMapConsistentSnap
Backport #18842: kraken: kernel client feature mismatch on latest master test runs
rgw - Backport #18843: kraken: rgw: usage stats and quota are not operational for multi-tenant users
Backport #18849: kraken: remove qa/suites/buildpackages
Backport #18870: kraken: tests: SUSE yaml facets in qa/distros/all are out of date
rbd - Backport #18892: kraken: Incomplete declaration for ContextWQ in librbd/Journal.h
Backport #18894: kraken: Possible lockdep false alarm for ThreadPool lock
rgw - Backport #18896: kraken: should parse the url to http host to compare with the container referer acl
rgw - Backport #18898: kraken: no http referer info in container metadata dump in swift API
CephFS - Backport #18899: kraken: Test failure: test_open_inode
rgw - Backport #18902: kraken: librgw: path segments neglect to ref parents
rgw - Backport #18904: kraken: rgw: first write also tries to read object
Backport #18907: kraken: "osd marked itself down" will not be recognised if host runs mon + osd on shutdown/reboot
rgw - Backport #18909: kraken: rgw: the swift container acl does not support field ".ref"
rbd - Backport #18910: kraken: rbd-nbd: check /sys/block/nbdX/size to ensure kernel mapped correctly
rbd - Backport #18947: kraken: rbd-mirror: additional test stability improvements
CephFS - Backport #18950: kraken: mds/StrayManager: avoid reusing deleted inode in StrayManager::_purge_stray_logged
Backport #18952: kraken: segfault in ceph-osd --flush-journal
Backport #18956: kraken: ceph-disk: bluestore --setgroup incorrectly set with user
rbd - Backport #18970: kraken: rbd: AdminSocket::bind_and_listen failed after rbd-nbd mapping
Backport #18973: kraken: ceph-disk does not support cluster names different than 'ceph'
rgw - Backport #18985: kraken: rgw: sending Content-Length in 204 and 304 responses should be controllable
Backport #18997: kraken: ceph-disk prepare gets wrong group name in bluestore
Backport #18999: kraken: "osd/PG.cc: 6896: FAILED assert(pg->is_acting(osd_with_shard) || pg->is_up(osd_with_shard))" in rados/upgrade
rbd - Backport #19037: kraken: rbd-mirror: deleting a snapshot during sync can result in read errors
CephFS - Backport #19045: kraken: buffer overflow in test LibCephFS.DirLs
rgw - Backport #19047: kraken: RGW leaking data
rgw - Backport #19049: kraken: multisite: some yields in RGWMetaSyncShardCR::full_sync() resume in incremental_sync()
rgw - Backport #19144: kraken: rgw_file: FHCache residence check should be exhaustive
rgw - Backport #19146: kraken: rgw: a few cases where rgw_obj is incorrectly initialized
rgw - Backport #19147: kraken: rgw daemon's DUMPABLE flag is cleared by setuid preventing coredumps
rgw - Backport #19149: kraken: rgw_file: ensure valid_s3_object_name for directories
rgw - Backport #19154: kraken: rgw_file: fix recycling of invalid mkdir handles
rgw - Backport #19156: kraken: rgw: typo in rgw_admin.cc
rgw - Backport #19157: kraken: RGW health check errors out incorrectly
rgw - Backport #19160: kraken: multisite: RGWMetaSyncShardControlCR gives up on EIO
rgw - Backport #19162: kraken: rgw_file: fix marker computation
rgw - Backport #19164: kraken: radosgw-admin: add the 'object stat' command to usage
rgw - Backport #19166: kraken: rgw_file: "exact match" invalid for directories, in RGWLibFS::stat_leaf()
rgw - Backport #19168: kraken: rgw_file: RGWReaddir (and cognate ListBuckets request) don't enumerate multi-segment directories
rgw - Backport #19170: kraken: rgw_file: allow setattr on placeholder (common_prefix) directories
rgw - Backport #19172: kraken: rgw: S3 create bucket should not do response in json
rbd - Backport #19173: kraken: rbd: rbd_clone_copy_on_read ineffective with exclusive-lock
rgw - Backport #19175: kraken: swift API: cannot disable object versioning with empty X-Versions-Location
rgw - Backport #19178: kraken: anonymous user's error code of getting object is not consistent with SWIFT
rgw - Backport #19180: kraken: rgw: 204 No Content is returned when putting illformed Swift's ACL
Backport #19181: kraken: mon: force_create_pg could leave pg stuck in creating state
Backport #19209: kraken: pre-jewel "osd rm" incrementals are misinterpreted
rgw - Backport #19212: kraken: rgw: "cluster [WRN] bad locator @X on object @X...." in cluster log
rbd - Backport #19227: kraken: rbd: Enabling mirroring for a pool with clones may fail
rgw - Backport #19229: kraken: librgw: objects created from s3 apis are not visible from nfs mount point
Backport #19315: kraken: osd: pg log split does not rebuild index for parent or child
rgw - Backport #19322: kraken: multisite: possible infinite loop in RGWFetchAllMetaCR
rbd - Backport #19324: kraken: rbd: [api] temporarily restrict (rbd_)mirror_peer_add from adding multiple peers
Backport #19326: kraken: bluestore bdev: flush no-op optimization is racy
Backport #19327: kraken: bluefs: missing flush_bdev in fsync path
Backport #19329: kraken: osd_snap_trim_sleep option does not work
rgw - Backport #19331: kraken: upgrade to multisite v2 fails if there is a zone without zone info
Backport #19333: kraken: brag fails to count "in" mds
CephFS - Backport #19335: kraken: MDS heartbeat timeout during rejoin, when working with large amount of caps/inodes
rbd - Backport #19336: kraken: rbd: refuse to use an ec pool that doesn't support overwrites
Backport #19340: kraken: An OSD was seen getting ENOSPC even with osd_failsafe_full_ratio passed
rgw - Backport #19342: kraken: 'period update' does not remove short_zone_ids of deleted zones
Backport #19351: kraken: RadosImport::import should return an error if Rados::connect fails
rgw - Backport #19354: kraken: multisite: some 'radosgw-admin data sync' commands hang
rgw - Backport #19356: kraken: when converting region_map we need to use rgw_zone_root_pool
Backport #19391: kraken: two instances of omap_digest mismatch
Backport #19460: kraken: rpm spec file mentions non-existent ceph-create-keys systemd unit file, causing ceph-mon units to not be enabled via preset
rgw - Backport #19462: kraken: rgw: admin ops: fix the quota section
Backport #19465: kraken: monitor creation with IPv6 public network segfaults
rbd - Backport #19467: kraken: [api] is_exclusive_lock_owner doesn't detect that it has been blacklisted
rgw - Backport #19470: kraken: rgw_file: leaf objects (which store Unix attrs) can be deleted when children exist
rgw - Backport #19471: kraken: rgw_file: RGWFileHandle dtor must also cond-unlink from FHCache
rgw - Backport #19472: kraken: cannot cover the object expiration
rgw - Backport #19475: kraken: rgw: multisite: EPERM when trying to read SLO objects as system/admin user
rgw - Backport #19477: kraken: rgw: S3 v4 authentication issue with X-Amz-Expires
rgw - Backport #19479: kraken: rgw: "zonegroupmap set" does not work
Backport #19480: kraken: ceph degraded and misplaced status output inaccurate
CephFS - Backport #19483: kraken: No output for "ceph mds rmfailed 0 --yes-i-really-mean-it" command
Backport #19485: kraken: Newly added OSD always down when full flag is set
Backport #19496: kraken: Objecter::epoch_barrier isn't respected in _op_submit()
rgw - Backport #19524: kraken: rgw: 'radosgw-admin zone create' command with specified zone-id creates a zone with different id
rgw - Backport #19526: kraken: rgwfs hung due to missing unlock within unlink operation
rgw - Backport #19534: kraken: rgw: Error parsing xml when getting bucket lifecycle
Backport #19537: kraken: ceph-disk list reports mount error for OSD having mount options with SELinux context
Backport #19544: kraken: ceph-disk: Add fix subcommand kraken back-port
Backport #19560: kraken: objecter: full_try behavior not consistent with osd
Backport #19561: kraken: "api_misc: [ FAILED ] LibRadosMiscConnectFailure.ConnectFailure"
Backport #19564: kraken: Ceph Xenial Packages - ceph-base missing dependency for psmisc
rgw - Backport #19573: kraken: rgw: Response header of swift API returned by radosgw does not contain "x-openstack-request-id". But Swift returns it.
rgw - Backport #19574: kraken: rgw: unsafe access in RGWListBucket_ObjStore_SWIFT::send_response()
rgw - Backport #19608: kraken: multisite: fetch_remote_obj() gets wrong version when copying from remote
rbd - Backport #19609: kraken: [librados_test_stub] cls_cxx_map_get_XYZ methods don't return correct value
rbd - Backport #19611: kraken: Issues with C API image metadata retrieval functions
rgw - Backport #19614: kraken: multisite: rest api fails to decode large period on 'period commit'
rgw - Backport #19616: kraken: multisite: bucket zonegroup redirect not working
Backport #19618: kraken: common/LogClient.cc: 310: FAILED assert(num_unsent <= log_queue.size())
CephFS - Backport #19620: kraken: MDS server crashes due to inconsistent metadata.
rbd - Backport #19621: kraken: rbd-nbd: add signal handler
Backport #19622: kraken: hammer client generated misdirected op against jewel cluster
Backport #19647: kraken: ceph-disk: directory-backed OSDs do not start on boot
rbd - Backport #19659: kraken: upgrade:client-upgrade/{hammer,jewel}-client-x/rbd failing in kraken 11.2.1 integration testing
rgw - Backport #19661: kraken: rgw_file: fix readdir after dir-change
rgw - Backport #19663: kraken: rgw_file: fix event expire check, don't expire directories being read
CephFS - Backport #19664: kraken: C_MDSInternalNoop::complete doesn't free itself
CephFS - Backport #19667: kraken: fs: the mount point breaks off when an mds switch happens
CephFS - Backport #19669: kraken: MDS goes readonly writing backtrace for a file whose data pool has been removed
Backport #19670: kraken: logrotate is missing from debian package (kraken, master)
CephFS - Backport #19672: kraken: MDS assert failed when shutting down
CephFS - Backport #19674: kraken: cephfs: mds crashes after setting about 400 64KB xattr kv pairs to a file
CephFS - Backport #19676: kraken: cephfs: Test failure: test_data_isolated (tasks.cephfs.test_volume_client.TestVolumeClient)
CephFS - Backport #19678: kraken: Jewel ceph-fuse does not recover after lost connection to MDS
CephFS - Backport #19680: kraken: MDS: damage reporting by ino number is useless
Backport #19685: kraken: Give requested scrubs a higher priority
rbd - Backport #19693: kraken: [test] test_notify.py: rbd.InvalidArgument: error updating features for image test_notify_clone2
Backport #19702: kraken: osd/PGLog.cc: 1047: FAILED assert(oi.version == i->first)
rgw - Backport #19704: civetweb-worker segmentation fault
CephFS - Backport #19710: kraken: Enable MDS to start when session ino info is corrupt
rgw - Backport #19723: kraken: rgw_file: introduce rgw_lookup type hints
rgw - Backport #19725: kraken: RGW S3 v4 authentication issue with X-Amz-Expires
rgw - Backport #19759: kraken: multisite: after CreateBucket is forwarded to master, local bucket may use different value for bucket index shards
Backport #19760: kraken: osd: leaked MOSDMap
CephFS - Backport #19763: kraken: non-local cephfs quota changes not visible until some IO is done
rgw - Backport #19766: kraken: rgw: when uploading objects continuously in a versioned bucket, some objects will not sync.
rgw - Backport #19776: kraken: multisite: realm rename does not propagate to other clusters
rgw - Backport #19777: kraken: rgw: implement support for OS-REVOKE extension of OpenStack Identity API v3
rbd - Backport #19794: kraken: [test] test_notify.py: assert(not image.is_exclusive_lock_owner()) on line 147
rbd - Backport #19807: kraken: [test] remove hard-coded image name from TestLibRBD.Mirror
rgw - Backport #19809: kraken: APIs to support Ragweed suite
rbd - Backport #19833: kraken: Cannot delete some snapshots after upgrade from jewel to kraken
rgw - Backport #19837: kraken: rgw: S3 object uploads using the AWSv4's multi-chunk mode hang RadosGW
rgw - Backport #19839: kraken: reduce log level of 'storing entry at' in cls_log
rgw - Backport #19840: kraken: civetweb frontend segfaults in Luminous
Backport #19841: kraken: clean up min/max span warning
rgw - Backport #19843: kraken: Add custom user data support in bucket index
CephFS - Backport #19845: kraken: write to cephfs mount hangs, ceph-fuse and kernel
rbd - Backport #19872: kraken: [rbd-mirror] failover and failback of unmodified image results in split-brain
Backport #19916: kraken: osd/OSD.h: 706: FAILED assert(removed) in PG::unreg_next_scrub
Backport #19928: kraken: mon crash on shutdown, lease_ack_timeout event
Backport #20010: kraken: ceph-disk: separate ceph-osd --check-needs-* logs
rgw - Backport #20015: kraken: multisite: bi_list() decode failures
rbd - Backport #20022: kraken: rbd-mirror replay fails on attempting to reclaim data to local site (LS) from distant-end after DE promotion.
Backport #20024: kraken: HEALTH_WARN pool rbd pg_num 244 > pgp_num 224 during upgrade
CephFS - Backport #20026: kraken: cephfs: MDS became unresponsive when truncating a very large file
CephFS - Backport #20028: kraken: Deadlock on two ceph-fuse clients accessing the same file
rgw - Backport #20031: kraken: rgw: Swift's at-root features (/crossdomain.xml, /info, /healthcheck) are broken
Backport #20033: kraken: osd_scrub_sleep option blocks op thread in jewel + later
Backport #20034: kraken: ceph-disk: Racing between partition creation & device node creation
Backport #20035: kraken: mon: MAX AVAIL calculation does not factor in mon_osd_full_ratio
Backport #20125: Kraken: Can't repair when only an attr object error
rgw - Backport #20147: kraken: rgw: 'gc list --include-all' command infinite loops over the first 1000 items
Backport #20150: kraken: ceph-disk fails if OSD udev rule triggers prior to mount of /var
rbd - Backport #20154: kraken: Potential IO hang if image is flattened while read request is in-flight
rgw - Backport #20156: kraken: fix: rgw crash caused by shard id out of range when listing data log
rgw - Backport #20158: kraken: rgw_file: handle chunked readdir
Backport #20173: kraken: PR #14886 creates a SafeTimer thread per PG
Backport #20191: kraken: SELinux denials (the files in /var/log/ceph get mislabeled)
Backport #20193: kraken: Speed up upgrade from non-SELinux enabled ceph to an SELinux enabled one
rgw - Backport #20195: kraken: rgw_file: restore (corrected) fix for dir "partial match" (return of FLAG_EXACT_MATCH)
rgw - Backport #20261: kraken: 'radosgw-admin usage show' listing 0 bytes_sent/received
rgw - Backport #20263: kraken: "datalog trim" can't work as expected
rbd - Backport #20264: kraken: [cli] ensure positional arguments exist before casting
rbd - Backport #20266: kraken: [api] is_exclusive_lock_owner shouldn't return -EBUSY
rgw - Backport #20268: kraken: wrong content returned when downloading an object with a specific range when compression is enabled
rgw - Backport #20269: kraken: wrong object size after copy of uncompressed multipart objects
Backport #20271: kraken: LibRadosMiscConnectFailure.ConnectFailure hang
rgw - Backport #20293: kraken: multisite: log_meta on secondary zone causes continuous loop of metadata sync
Backport #20315: kraken: mon: fail to form large quorum; msg/async busy loop
Backport #20345: kraken: make check fails with Error EIO: load dlopen(build/lib/libec_FAKE.so): build/lib/libec_FAKE.so: cannot open shared object file: No such file or directory
rgw - Backport #20347: kraken: rgw: meta sync thread crash at RGWMetaSyncShardCR
rbd - Backport #20351: kraken: test_librbd_api.sh fails in upgrade test
rgw - Backport #20363: kraken: VersionIdMarker and NextVersionIdMarker are not returned when listing object versions
Backport #20365: kraken: mon: osd crush set crushmap needs sanity check
RADOS - Backport #20366: kraken: kraken-bluestore 11.2.0 memory leak issue
rgw - Backport #20405: kraken: Lifecycle thread will still handle the bucket even if it has been removed.
Backport #20443: kraken: osd: client IOPS drops to zero frequently
Backport #20487: kraken: make check fails due to missing bc in ceph-helper.sh
Backport #20495: Bluestore memory leak (uninit)
RADOS - Backport #20497: kraken: MaxWhileTries: reached maximum tries (105) after waiting for 630 seconds from radosbench.yaml
Backport #20499: kraken: tests: ObjectStore/StoreTest.OnodeSizeTracking/2 fails on bluestore
CephFS - Backport #20500: kraken: src/test/pybind/test_cephfs.py fails
rbd - Backport #20517: kraken: [rbd CLI] map with cephx disabled results in error message
rgw - Backport #20520: kraken: rados/upgrade rgw swift test fails
Backport #20522: kraken: FAILED assert(object_contexts.empty()) (live on master only from Jan-Feb 2017, all other instances are different)
Backport #20523: kraken: on_flushed: object ... obc still alive
rbd - Backport #20634: kraken: [test] rbd-mirror teuthology task doesn't start daemon in foreground mode
RADOS - Backport #20638: kraken: EPERM: cannot set require_min_compat_client to luminous: 6 connected client(s) look like jewel (missing 0x800000000200000
Backport #20672: kraken: Bad status warning for mon_warn_osd_usage_min_max_delta
Backport #20881: Thrasher: update pgp_num of all expanded pools if not yet
RADOS - Backport #20884: kraken: bluestore: allocator fails for 0x80000000 allocations