Ceph
v15.2.5: 97% complete. 245 issues (237 closed, 8 open).
Time tracking: estimated time 0.00 hours, spent time 0.00 hours.
Issues by tracker (closed/total):
Bug: 47/55
Feature: 7/7
Support: 1/1
Cleanup: 1/1
Tasks: 1/1
Documentation: 3/3
Backport: 177/177
Related issues
Orchestrator - Bug #43681: cephadm: Streamline RGW deployment
Orchestrator - Bug #44252: cephadm: mgr,mds scale-down should prefer standby daemons
Dashboard - Bug #44458: octopus: mgr/dashboard: dropmenu item of column filters might exceed the viewport boundary
Dashboard - Bug #44877: mgr/dashboard: allow custom dashboard grafana url when set by cephadm
Orchestrator - Bug #44926: dashboard: creating a new bucket causes InvalidLocationConstraint
Orchestrator - Bug #45016: mgr: `ceph tell mgr mgr_status` hangs
Orchestrator - Bug #45097: cephadm: UX: Traceback, if `orch host add mon1` fails.
Orchestrator - Bug #45155: mgr/dashboard: Error listing orchestrator NFS daemons
Orchestrator - Bug #45252: cephadm: fail to insert modules when creating iSCSI targets
Orchestrator - Bug #45594: cephadm: weight of a replaced OSD is 0
Orchestrator - Bug #45726: Module 'cephadm' has failed: auth get failed: failed to find client.crash.<node_name> in keyring
Orchestrator - Bug #45872: ceph orch device ls exposes the `device_id` under the DEVICES column which isn't too useful for the user
Orchestrator - Bug #45961: cephadm: high load and slow disk make "cephadm bootstrap" fail
Orchestrator - Bug #45980: cephadm: implement missing "FileStore not supported" error message and update DriveGroup docs
Orchestrator - Bug #45999: cephadm shell: picking up legacy_dir
Orchestrator - Bug #46036: cephadm: killmode=none: systemd units failed, but containers still running
Orchestrator - Bug #46045: qa/tasks/cephadm: Module 'dashboard' is not enabled error
CephFS - Bug #46081: cephadm: mds permissions for osd are unnecessarily permissive
Orchestrator - Bug #46098: Exception adding host using cephadm
Orchestrator - Bug #46138: mgr/dashboard: Error creating iSCSI target
Orchestrator - Bug #46175: cephadm: orch apply -i: MON and MGR service specs must not have a service_id
Orchestrator - Bug #46231: translate.to_ceph_volume: no need to pass the drive group
Orchestrator - Bug #46233: cephadm: Add "--format" option to "ceph orch status"
Orchestrator - Bug #46245: cephadm: set-ssh-config/clear-ssh-config command doesn't take effect immediately
Orchestrator - Bug #46268: cephadm: orch apply -i: RGW service spec id might not contain a zone
Orchestrator - Bug #46271: podman pull: transient "Error: error creating container storage: error creating read-write layer with ID" failure
Orchestrator - Bug #46329: cephadm: Dashboard's ganesha option is not correct if there are multiple NFS daemons
rgw - Bug #46332: boost::asio::async_write() does not return error when the remote endpoint is not connected
Orchestrator - Bug #46398: cephadm: can't use custom prometheus image
Orchestrator - Bug #46429: cephadm fails bootstrap with new Podman Versions 2.0.1 and 2.0.2
Dashboard - Bug #46502: octopus: mgr/dashboard: fix issue introduced by https://github.com/ceph/ceph/pull/35926.
Orchestrator - Bug #46534: cephadm podman pull: Digest did not match
Orchestrator - Bug #46540: cephadm: iSCSI gateways problems.
Orchestrator - Bug #46560: cephadm: assigns invalid id to daemons
Dashboard - Bug #46566: octopus: mgr/dashboard: fix rbdmirroring dropdown menu
Orchestrator - Bug #46740: mgr/cephadm: restart of daemon reports host is empty
Orchestrator - Bug #46748: Module 'cephadm' has failed: auth get failed: failed to find osd.32 in keyring retval: -2
Orchestrator - Bug #46777: cephadm: Error bootstraping a cluster with '--registry-json' option
mgr - Bug #46808: prometheus stats reporting fails with "KeyError"
Orchestrator - Bug #46813: `ceph orch * --refresh` is broken
Orchestrator - Bug #46833: simple (ceph-disk style) OSDs adopted by cephadm must not call `ceph-volume lvm activate`
ceph-ansible - Bug #46979: Install ceph ansible
RADOS - Bug #47206: Ceph-mon crashes with zero exit code when no space left on device
RADOS - Bug #47592: extract-monmap changes permission on some files
rgw - Bug #47655: AWS put-bucket-lifecycle command fails on the latest minor Octopus release
rbd - Bug #47868: rbd-target-api / one of two service crash
Dashboard - Bug #47870: Unable to install/setup Ceph Manager Dashboard
rgw - Bug #47871: radosgw does not properly handle a roleArn when executing assume-role operation
rgw - Bug #47912: Problems with Rados Gateway installation (CEPH)
rgw - Bug #47913: Problems with Rados Gateway installation (CEPH)
Dashboard - Bug #47997: mgr/dashboard: OSD disk performance statistics not working in grafana
RADOS - Bug #48060: data loss in EC pool
mgr - Bug #48080: osd latency not showing data after applying label fix
rgw - Bug #48139: Ceph Dashboard Object Gateway InvalidRange bucket exception
Bug #48498: octopus: timeout when running the "ceph" command
Orchestrator - Feature #44548: cephadm: persist osd removal queue
Orchestrator - Feature #44628: cephadm: Add initial firewall management to cephadm
Orchestrator - Feature #44866: cephadm root mode: support non-root users + sudo
Orchestrator - Feature #44886: cephadm: allow use of authenticated registry
Orchestrator - Feature #45263: osdspec/drivegroup: not enough filters to define layout
Orchestrator - Feature #45859: cephadm: use fixed versions
mgr - Feature #46775: mgr/cephadm: Enhance AlertManagerSpec to allow adding additional webhook receiver URLs
Orchestrator - Support #47233: cephadm: orch apply mon "label:osd" crashes cluster
Orchestrator - Cleanup #45321: Servcie spec: unify `spec:` vs omitting `spec:`
Stable releases - Tasks #47173: octopus 15.2.5
Orchestrator - Documentation #45858: `ceph orch status` doesn't show in progress actions
Orchestrator - Documentation #46052: Module 'cephadm' has failed: DaemonDescription: Cannot calculate service_id:
Documentation #47130: Please add basic infos into the documentation
mgr - Backport #45209: octopus: monitoring: alert for pool fill up broken
bluestore - Backport #45426: octopus: ObjectStore/StoreTestSpecificAUSize.SpilloverTest/2 failed
Dashboard - Backport #45449: octopus: mgr/dashboard: The max. buckets field in RGW user form should be pre-filled
Dashboard - Backport #45475: octopus: qa: mgr/dashboard: Replace Telemetry module in REST API test
rgw - Backport #45645: octopus: [rfe] rgw: parallelize single-node lifecycle processing
mgr - Backport #45786: octopus: dashboard/rbd: Add button to copy the bootstrap token into the clipboard
Dashboard - Backport #45855: octopus: mgr/dashboard: Improve SummaryService's getCurrentSummary method
Dashboard - Backport #45889: octopus: mgr/dashboard: Pool form max size
rgw - Backport #45913: octopus: rgw crashes while accessing an invalid iterator in gc update entry
rgw - Backport #45922: octopus: [rfe] rgw: add lifecycle perfcounters
rgw - Backport #45924: octopus: radsgw-admin bucket list/stats does not list/stat all buckets if user owns more than 1000 buckets
rgw - Backport #45926: octopus: Bucket quota not check in copy operation
rgw - Backport #45928: octopus: rgw/ swift stat can hang
rgw - Backport #45931: octopus: Add support wildcard subuser on bucket policy
rgw - Backport #45933: octopus: Add user identity to OPA request
rgw - Backport #45951: octopus: add access log line to the beast frontend
CephFS - Backport #45953: octopus: vstart: Support deployment of ganesha daemon by cephadm with NFS option
CephFS - Backport #46003: octopus: vstart: set $CEPH_CONF when calling ganesha-rados-grace commands
rgw - Backport #46005: octopus: rgw: bucket index entries marked rgw.none not accounted for correctly during reshard
RADOS - Backport #46007: octopus: PrimaryLogPG.cc: 627: FAILED ceph_assert(!get_acting_recovery_backfill().empty())
bluestore - Backport #46009: octopus: ObjectStore/StoreTestSpecificAUSize.ExcessiveFragmentation/2 failed
Backport #46015: octopus: log: the time precision of log is only milliseconds because the option log_coarse_timestamps doesn't work well
RADOS - Backport #46016: octopus: osd-backfill-stats.sh failing intermittently in TEST_backfill_sizeup_out() (degraded outside margin)
Dashboard - Backport #46020: octopus: mgr/dashboard/rbd: throws 500s with format 1 RBD images
Dashboard - Backport #46048: octopus: mgr/dashboard: cropped actions menu in nested details
CephFS - Backport #46085: octopus: handle multiple ganesha.nfsd's appropriately in vstart.sh
RADOS - Backport #46086: octopus: osd: wakeup all threads of shard rather than one thread
rbd - Backport #46087: octopus: [prometheus] auto-configure RBD metric exports for all RBD pools
RADOS - Backport #46089: octopus: PG merge: FAILED ceph_assert(info.history.same_interval_since != 0)
RADOS - Backport #46095: octopus: Issue health status warning if num_shards_repaired exceeds some threshold
CephFS - Backport #46106: octopus: pybind/mgr/volumes: add API to manage NFS-Ganesha gateway clusters in exporting subvolumes
ceph-volume - Backport #46112: octopus: Report wrong rejected reason in inventory subcommand if device type is invalid
RADOS - Backport #46115: octopus: Add statfs output to ceph-objectstore-tool
mgr - Backport #46117: octopus: "ActivePyModule.cc: 54: FAILED ceph_assert(pClassInstance != nullptr)" due to race when loading modules
mgr - Backport #46121: octopus: mgr/k8sevents backport to sanitise the data coming from kubernetes
ceph-volume - Backport #46148: octopus: functional tests: pass pv_devices to ansible
rbd - Backport #46150: octopus: [object-map] possible race condition when disabling object map with active IO
CephFS - Backport #46152: octopus: test_scrub_pause_and_resume (tasks.cephfs.test_scrub_checks.TestScrubControls) fails intermittently
CephFS - Backport #46155: octopus: Test failure: test_create_multiple_exports (tasks.cephfs.test_nfs.TestNFS)
CephFS - Backport #46156: octopus: Test failure: test_export_create_and_delete (tasks.cephfs.test_nfs.TestNFS)
RADOS - Backport #46165: octopus: osd: make message cap option usable again
mgr - Backport #46171: octopus: mgr/prometheus: cache ineffective when gathering data takes longer than 5 seconds
Dashboard - Backport #46173: octopus: mgr/dashboard Replace broken osd
mgr - Backport #46183: octopus: ceph config show does not display fsid correctly
CephFS - Backport #46185: octopus: cephadm: mds permissions for osd are unnecessarily permissive
CephFS - Backport #46186: octopus: client: fix snap directory atime
CephFS - Backport #46188: octopus: mds: EMetablob replay too long will cause mds restart
CephFS - Backport #46190: octopus: mds: cap revoking requests didn't success when the client doing reconnection
bluestore - Backport #46193: octopus: BlueFS replay log grows without end
Dashboard - Backport #46197: octopus: mgr/dashboard: the RBD configuration table has incorrect values in source column in non-default locales
CephFS - Backport #46199: octopus: qa: "[WRN] evicting unresponsive client smithi131:z (6314), after 304.461 seconds"
CephFS - Backport #46201: octopus: mds: add ephemeral random and distributed export pins
Dashboard - Backport #46205: octopus: mgr/dashboard: telemetry module activation notification
Dashboard - Backport #46214: octopus: mgr/dashboard: Add host labels in UI
RADOS - Backport #46229: octopus: Ceph Monitor heartbeat grace period does not reset.
CephFS - Backport #46234: octopus: pybind/mgr/volumes: volume deletion not always removes the associated osd pools
ceph-volume - Backport #46251: octopus: add encryption support to raw mode
RADOS - Backport #46261: octopus: larger osd_scrub_max_preemptions values cause Floating point exception
RADOS - Backport #46286: octopus: mon: log entry with garbage generated by bad memory access
CephFS - Backport #46289: octopus: mgr/nfs: allow only [A-Za-z0-9-_.] in cluster ID
CephFS - Backport #46290: octopus: mgr/nfs: Add interface for listing cluster
CephFS - Backport #46291: octopus: mgr/volumes/nfs: Add interface for get and list exports
CephFS - Backport #46292: octopus: mgr/nfs: Check cluster exists before creating exports and make exports persistent
Backport #46307: octopus: unittest_lockdep failure
Dashboard - Backport #46308: octopus: mgr/dashboard: Display check icon instead of true|false in various datatables
rbd - Backport #46309: octopus: TestMockImageReplayerSnapshotReplayer.UnlinkRemoteSnapshot race on shut down
CephFS - Backport #46311: octopus: qa/tasks/cephfs/test_snapshots.py: Command failed with status 1: ['cd', '|/usr/libexec', ...]
Dashboard - Backport #46313: octopus: mgr/dashboard: Prometheus query error while filtering values in the metrics of Pools and OSDs
Dashboard - Backport #46314: octopus: mgr/dashboard: wal/db slots in create OSDs form do not work properly in firefox
CephFS - Backport #46315: octopus: mgr/volumes: ephemerally pin volumes
rbd - Backport #46322: octopus: profile rbd does not allow the use of RBD_INFO
Dashboard - Backport #46328: octopus: mgr/dashboard: cdCopy2ClipboardButton does no longer support 'formatted' attribute
rgw - Backport #46340: octopus: [rgw] listing bucket via s3 hangs on "ordered bucket listing requires read #1"
rgw - Backport #46343: octopus: rgw: orphan-list timestamp fix
CephFS - Backport #46348: octopus: qa/tasks: make sh() in vstart_runner.py identical with teuthology.orchestra.remote.sh
Dashboard - Backport #46351: octopus: mgr/dashboard: table details flicker if autoReload of table is on
Dashboard - Backport #46354: octopus: mgr/dashboard: Display users current bucket quota usage
RADOS - Backport #46372: osd: expose osdspec_affinity to osd_metadata
CephFS - Backport #46389: octopus: pybind/mgr/volumes: cleanup stale connection hang
mgr - Backport #46394: octopus: FAIL: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.PoolTest)
CephFS - Backport #46401: octopus: mgr/nfs: Add interface to show cluster information
CephFS - Backport #46402: octopus: client: recover from a killed session (w/ blacklist)
RADOS - Backport #46408: octopus: Health check failed: 4 mgr modules have failed (MGR_MODULE_ERROR)
CephFS - Backport #46410: octopus: client: supplying ceph_fsetxattr with no value unsets xattr
Dashboard - Backport #46418: octopus: mgr/dashboard: Password expiration notification is always shown if a date is set
Dashboard - Backport #46436: octopus: mgr/dashboard: Unable to edit iSCSI target which has active session
rgw - Backport #46457: octopus: [RGW]: avc denial observed for pid=13757 comm="radosgw" on starting RabbitMQ at port 5672
rgw - Backport #46459: octopus: rgw: orphan list teuthology test & fully-qualified domain issue
RADOS - Backport #46460: octopus: pybind/mgr/balancer: should use "==" and "!=" for comparing strings
rgw - Backport #46462: octopus: rgw: rgw-orphan-list -- fix interaction, quoting, and percentage calc
CephFS - Backport #46465: octopus: pybind/mgr/volumes: get_pool_names may indicate volume does not exist if multiple volumes exist
rgw - Backport #46467: octopus: rgw: radoslist incomplete multipart uploads fix marker progression
CephFS - Backport #46469: octopus: client: release the client_lock before copying data in read
rgw - Backport #46471: octopus: crash on realm reload during shutdown
rgw - Backport #46475: octopus: aws iam get-role-policy doesn't work
CephFS - Backport #46477: octopus: pybind/mgr/volumes: volume deletion should check mon_allow_pool_delete
mgr - Backport #46489: octopus: pybind/mgr/pg_autoscaler/module.py: do not update event if ev.pg_num== ev.pg_num_target
CephFS - Backport #46498: octopus: mgr/nfs: Update nfs-ganesha package requirements
rgw - Backport #46510: octopus: Adding data cache and CDN capabilities
rgw - Backport #46511: octopus: rgw: lc: Segmentation Fault when the tag of the object was not found in the rule
mgr - Backport #46514: octopus: mgr progress module causes needless load
rgw - Backport #46518: octopus: boost::asio::async_write() does not return error when the remote endpoint is not connected
CephFS - Backport #46528: octopus: mgr/volumes: `protect` and `clone` operation in a single transaction
Backport #46536: octopus: ceph_volume_client.py: python 3.8 compatibility
Dashboard - Backport #46570: octopus: mgr/dashboard: fix usage calculation to match "ceph df" way
Dashboard - Backport #46576: octopus: mgr/dashboard/api: CODEOWNERS
bluestore - Backport #46584: octopus: os/bluestore: simplify Onode pin/unpin logic.
CephFS - Backport #46585: octopus: mgr/nfs: Update about nfs ganesha cluster deployment using cephadm in vstart
RADOS - Backport #46586: octopus: The default value of osd_scrub_during_recovery is false since v11.1.1
Dashboard - Backport #46590: octopus: mgr/dashboard: Use same required field message accross the UI
CephFS - Backport #46591: octopus: ceph-fuse: ceph-fuse process is terminated by the logratote task and what is more serious is that one Uninterruptible Sleep process will be produced
rgw - Backport #46593: octopus: [notifications] reading topic info for every op overloads the osd
RADOS - Backport #46595: octopus: crash in Objecter and CRUSH map lookup
bluestore - Backport #46599: octopus: Rescue procedure for extremely large bluefs log
mgr - Backport #46602: octopus: Fix broken UiApi documentation endpoints and add warning
Backport #46629: octopus: The bandwidth of bluestore was throttled
CephFS - Backport #46631: octopus: mgr/nfs: Remove NParts and Cache_Size from MDCACHE block
CephFS - Backport #46632: octopus: mgr/nfs: help for "nfs export create" and "nfs export delete" says "<attach>" where the documentation says "<clusterid>"
rbd - Backport #46639: octopus: [iscsi-target-cli page]: add systemctl commands for enabling and starting rbd-target-gw in addition to rbd-target-api
rgw - Backport #46640: octopus: Headers are missing in abort multipart upload response if bucket has lifecycle.
CephFS - Backport #46642: octopus: qa: random subvolumegroup collision
Dashboard - Backport #46672: octopus: mgr/dashboard/api: reach 100% test coverage in API controllers
rbd - Backport #46674: octopus: importing rbd diff does not apply zero sequences correctly
Dashboard - Backport #46693: octopus: mgr/dashboard: Don't have two different unit test mechanics
RADOS - Backport #46707: octopus: Cancellation of on-going scrubs
RADOS - Backport #46709: octopus: Negative peer_num_objects crashes osd
rbd - Backport #46711: octopus: Object dispatch layers need to ensure all IO is complete prior to shut down
CephFS - Backport #46712: octopus: mgr/nfs: Ensure pseudoroot path is absolute and is not just /
mgr - Backport #46715: octopus: Module 'diskprediction_local' has failed: Expected 2D array, got 1D array instead
mgr - Backport #46717: octopus: mgr/prometheus: log time it takes to collect metrics in debug mode
rbd - Backport #46719: octopus: [librbd]assert at Notifier::notify's aio_notify_locker
rbd - Backport #46721: octopus: tools: ceph-immutable-object-cache can start without root permission
RADOS - Backport #46722: octopus: osd/osd-bench.sh 'tell osd.N bench' hang
rbd - Backport #46724: octopus: ceph-iscsi: selinux avc denial on rbd-target-api from ioctl access
Dashboard - Backport #46736: octopus: mgr/dashboard: cpu stats incorrectly displayed
RADOS - Backport #46739: octopus: mon: expected_num_objects warning triggers on bluestore-only setups
RADOS - Backport #46742: octopus: ceph_osd crash in _committed_osd_maps when failed to encode first inc map
Dashboard - Backport #46751: octopus: mgr/dashboard: Add hosts page unit tests
mgr - Backport #46753: octopus: FAIL: test_pool_update_metadata (tasks.mgr.dashboard.test_pool.PoolTest)
ceph-volume - Backport #46785: octopus: add subcommand to parse drive_groups
Dashboard - Backport #46788: octopus: mgr/dashboard: Cluster status messages overflow in the landing page
Dashboard - Backport #46794: octopus: mgr/dashboard: ExpressionChangedAfterItHasBeenCheckedError in OSD delete form
CephFS - Backport #46795: octopus: mds: Subvolume snapshot directory does not save attribute "ceph.quota.max_bytes" of snapshot source directory tree
rgw - Backport #46798: octopus: The append operation will trigger the garbage collection mechanism
rgw - Backport #46873: octopus: rgw: lc: fix backdward-compat decode
rgw - Backport #46874: octopus: rgw lifecycle versioned encoding mismatch
mgr - Backport #46895: octopus: Fix API test timeout issues
mgr - Backport #46896: octopus: The backend test fails in tasks.mgr.dashboard.test_rbd.RbdTest.test_move_image_to_trash test
Dashboard - Backport #46907: octopus: mgr/dashboard: Extract documentation link to a component
ceph-volume - Backport #46911: octopus: testing: flake8 uses py2
Dashboard - Backport #46924: octopus: mgr/dashboard: Unable to edit iSCSI logged-in client
rgw - Backport #46929: octopus: rgw: http requests state should be set before unlink
RADOS - Backport #46931: octopus: librados: add LIBRBD_SUPPORTS_GETADDRS support
RADOS - Backport #46934: octopus: "No such file or directory" when exporting or importing a pool if locator key is specified
mgr - Backport #46936: octopus: prometheus stats reporting fails with "KeyError"
rgw - Backport #46938: octopus: UnboundLocalError: local variable 'ragweed_repo' referenced before assignment
Dashboard - Backport #46944: octopus: mgr/dashboard: host labels not shown after adding them.
rbd - Backport #46945: octopus: Global and pool-level config overrides require image refresh to apply
rgw - Backport #46949: octopus: OLH entries pending removal get mistakenly resharded to shard 0
RADOS - Backport #46951: octopus: nautilis client may hunt for mon very long if msg v2 is not enabled on mons
rgw - Backport #46953: octopus: invalid principal arn in bucket policy grants access to all
rgw - Backport #46955: octopus: multisite: RGWAsyncReadMDLogEntries crash on shutdown
CephFS - Backport #46957: octopus: pybind/mgr/nfs: add interface for adding user defined configuration
mgr - Backport #46958: octopus: mgr/status: metadata is fetched async
RADOS - Backport #46964: octopus: Pool stats increase after PG merged (PGMap::apply_incremental doesn't subtract stats correctly)
rgw - Backport #46966: octopus: rgw: GETing S3 website root with two slashes // crashes rgw
rgw - Backport #46968: octopus: rgw: break up user reset-stats into multiple cls ops
Dashboard - Backport #46974: octopus: mgr/dashboard: Strange iSCSI discovery auth behavior
Dashboard - Backport #46993: octopus: mgr/dashboard: remove password field if login is using SSO and fix error message in confirm password
mgr - Backport #46996: octopus: mgr/crash: invalid crash remove example
Dashboard - Backport #47001: octopus: mgr/dashboard/api: reduce verbosity in API tests log output
Backport #47022: octopus: rbd_write_zeroes()
rgw - Backport #47114: octopus: rgw: hold reloader using unique_ptr
Dashboard - Backport #47121: octopus: mgr/dashboard: replace endpoint of "This week" time range for Grafana in dashboard
Dashboard - Backport #47155: octopus: mgr/dashboard: redirect to original URL after successful login
RADOS - Backport #47297: octopus: osdmaps aren't being cleaned up automatically on healthy cluster
rgw - Backport #47464: octopus: rgw:lc: fix (post-parallel) non-current expiration