Ceph : Issues
https://tracker.ceph.com/
2020-10-14T11:06:05Z
Ceph
Redmine
Dashboard - Bug #47857 (Won't Fix): mgr/dashboard: sensitive information stored in cleartext
https://tracker.ceph.com/issues/47857
2020-10-14T11:06:05Z
Ernesto Puerta
<p>Description</p>
<p>The application stores sensitive information (e.g. usernames, passwords and access tokens) in cleartext inside a RocksDB database. An attacker with read access to the database files could compromise the application and other systems.</p>
<p>Exploitation</p>
<p>The testing team identified a RocksDB instance in use by the Ceph Monitor daemon that contains sensitive data, such as S3 access and secret keys, usernames and passwords. It was possible to easily obtain the data either by searching for ASCII strings in the database files or by using the RocksDB command-line tool to dump the database.</p>
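<p>For illustration, a minimal sketch of the first technique described above (scanning the store files for printable ASCII runs). The store path and minimum string length are assumptions, and this is not the tooling the testing team used.</p>
<pre>
import re
import sys
from pathlib import Path

# Assumed location of a monitor's RocksDB store; adjust for your deployment.
STORE_DIR = Path("/var/lib/ceph/mon/ceph-a/store.db")
MIN_LEN = 8  # only report printable runs of at least this many characters

ascii_run = re.compile(rb"[ -~]{%d,}" % MIN_LEN)

def dump_strings(store_dir: Path) -> None:
    """Print printable ASCII runs found in the RocksDB SST/log files."""
    for path in sorted(store_dir.glob("*")):
        if not path.is_file():
            continue
        data = path.read_bytes()
        for match in ascii_run.finditer(data):
            print(f"{path.name}: {match.group().decode('ascii')}")

if __name__ == "__main__":
    dump_strings(Path(sys.argv[1]) if len(sys.argv) > 1 else STORE_DIR)
</pre>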
<p>Recommendation</p>
<p>Either encrypt the whole RocksDB database or perform application-level encryption.</p>
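<p>A minimal sketch of the application-level approach, using the <code>cryptography</code> package's Fernet recipe; the function names are hypothetical, and, as the caveat below notes, the key itself still has to be stored somewhere.</p>
<pre>
from cryptography.fernet import Fernet

# The key must come from somewhere secure (HSM, Vault, ...); generating it
# in place here is only for illustration.
key = Fernet.generate_key()
fernet = Fernet(key)

def encrypt_secret(plaintext: str) -> bytes:
    """Encrypt a secret before persisting it to the key-value store."""
    return fernet.encrypt(plaintext.encode("utf-8"))

def decrypt_secret(token: bytes) -> str:
    """Decrypt a secret read back from the key-value store."""
    return fernet.decrypt(token).decode("utf-8")

# Example: store the encrypted value instead of the cleartext S3 secret key.
stored = encrypt_secret("my-s3-secret-key")
assert decrypt_secret(stored) == "my-s3-secret-key"
</pre>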
<p>Caveat</p>
Application-level encryption still requires an encryption key to be saved somewhere, which simply shifts the problem to where to securely store this key:
<ul>
<li>Key-value store: unencrypted; same as the original issue.</li>
<li>Hardware Security Module (HSM), e.g. a FIPS 140-validated device</li>
<li>Remote key server (e.g. Vault)</li>
</ul>
Dashboard - Bug #47356 (Resolved): mgr/dashboard: some nfs-ganesha endpoints are not in correct s...
https://tracker.ceph.com/issues/47356
2020-09-08T08:11:20Z
Kiefer Chang
<p>The <a href="https://github.com/ceph/ceph/blob/056d8776ce1133928f4473a2f63e4537cecec5f2/src/pybind/mgr/dashboard/controllers/nfsganesha.py#L271-L311" class="external">endpoints</a> in the <code>/ui-api/ganesha-nfs</code> and <code>/api/ganesha-nfs/daemons</code> paths need to be scoped.</p>
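<p>As a conceptual illustration (not the dashboard's actual controller API), scoping an endpoint boils down to a check that rejects callers whose role does not grant the required scope; a self-contained sketch with made-up names:</p>
<pre>
# Conceptual sketch of per-endpoint scope checks (not the dashboard's real API).
from functools import wraps

class PermissionDenied(Exception):
    pass

def require_scope(scope):
    """Reject the call unless the current user holds access to `scope`."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(user, *args, **kwargs):
            if scope not in user.get("scopes", set()):
                raise PermissionDenied(f"missing scope: {scope}")
            return handler(user, *args, **kwargs)
        return wrapper
    return decorator

@require_scope("nfs-ganesha")
def list_ganesha_daemons(user):
    return []

# A user whose role only grants the pool scope is rejected:
try:
    list_ganesha_daemons({"scopes": {"pool"}})
except PermissionDenied as exc:
    print(exc)
</pre>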
Dashboard - Cleanup #47341 (Resolved): mgr/dashboard: securing CherryPy
https://tracker.ceph.com/issues/47341
2020-09-07T16:44:51Z
Ernesto Puerta
<p>Ensure we follow, as much as possible, the <a href="https://docs.cherrypy.org/en/3.3.0/progguide/security.html" class="external">CherryPy security guidelines</a> (a minimal configuration sketch follows the list below):</p>
<ul>
<li>Transmitting data:
<ul>
<li>Use Secure Cookies</li>
</ul>
</li>
<li>Rendering pages:
<ul>
<li>Set HttpOnly cookies</li>
<li>Set XFrame options</li>
<li>Enable XSS Protection</li>
<li>Set the Content Security Policy</li>
</ul></li>
</ul>
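<p>A minimal sketch of how those settings can be applied in CherryPy; the header values are illustrative defaults, not the dashboard's final policy.</p>
<pre>
import cherrypy

# Illustrative hardening config; adjust values to the dashboard's actual policy.
SECURE_HEADERS = [
    ('X-Frame-Options', 'SAMEORIGIN'),          # XFrame options
    ('X-XSS-Protection', '1; mode=block'),      # XSS protection (legacy header)
    ('X-Content-Type-Options', 'nosniff'),
    ('Content-Security-Policy', "default-src 'self'"),
]

config = {
    '/': {
        # Secure + HttpOnly session cookie: only sent over HTTPS and
        # not readable from JavaScript.
        'tools.sessions.on': True,
        'tools.sessions.secure': True,
        'tools.sessions.httponly': True,
        # Attach the security headers to every response.
        'tools.response_headers.on': True,
        'tools.response_headers.headers': SECURE_HEADERS,
    }
}

class Root:
    @cherrypy.expose
    def index(self):
        return 'ok'

if __name__ == '__main__':
    cherrypy.quickstart(Root(), '/', config)
</pre>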
Dashboard - Feature #45965 (Resolved): mgr/dashboard: Display users current quota usage
https://tracker.ceph.com/issues/45965
2020-06-10T10:45:50Z
Lenz Grimmer
<p>As a follow-up to <a class="issue tracker-2 status-3 priority-4 priority-default closed child" title="Feature: mgr/dashboard: Display users current bucket quota usage (Resolved)" href="https://tracker.ceph.com/issues/45011">#45011</a> it would probably make sense to show a user's quota in the data table in a similar fashion.</p>
<p>Currently, that information is somewhat hidden in the details when clicking the user's table row.</p>
<p>Having the quota information included in the top-level data table would make it possible to search and sort the user list by these values.</p>
Dashboard - Feature #45897 (Resolved): mgr/dashboard: Add host labels in UI
https://tracker.ceph.com/issues/45897
2020-06-04T13:44:12Z
Volker Theile
<p>Add the ability to assign labels to hosts via the UI.</p>
Dashboard - Bug #45873 (Resolved): mgr/dashboard: CRUSH map viewer inconsistent with output of "c...
https://tracker.ceph.com/issues/45873
2020-06-04T07:58:44Z
Lenz Grimmer
<p>This was reported by Marco Pizzolo on the ceph-users mailing list (<a href="https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/442I35QPCKUZ5PJOODKI4XC3IWGTKQG7/" class="external">15.2.3 Crush Map Viewer problem.</a>):</p>
<p>We're working on a new cluster and seeing some oddities. The crush map viewer is not showing all hosts or OSDs. Cluster is NVMe w/4 hosts, each having 8 NVMe. Using 2 OSDs per NVMe and Encryption. Using Max size of 3, Min size of 2:</p>
<p><img src="https://tracker.ceph.com/attachments/download/4904/crush-map-viewer.png" alt="" /></p>
<p>All OSDs appear to exist in: <code>ceph osd tree</code>:</p>
<pre>
root@prdhcistonode01:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 372.60156 root default
-3 93.15039 host prdhcistonode01
0 ssd 5.82190 osd.0 up 1.00000 1.00000
1 ssd 5.82190 osd.1 up 1.00000 1.00000
2 ssd 5.82190 osd.2 up 1.00000 1.00000
3 ssd 5.82190 osd.3 up 1.00000 1.00000
4 ssd 5.82190 osd.4 up 1.00000 1.00000
5 ssd 5.82190 osd.5 up 1.00000 1.00000
6 ssd 5.82190 osd.6 up 1.00000 1.00000
7 ssd 5.82190 osd.7 up 1.00000 1.00000
8 ssd 5.82190 osd.8 up 1.00000 1.00000
9 ssd 5.82190 osd.9 up 1.00000 1.00000
10 ssd 5.82190 osd.10 up 1.00000 1.00000
11 ssd 5.82190 osd.11 up 1.00000 1.00000
12 ssd 5.82190 osd.12 up 1.00000 1.00000
13 ssd 5.82190 osd.13 up 1.00000 1.00000
14 ssd 5.82190 osd.14 up 1.00000 1.00000
15 ssd 5.82190 osd.15 up 1.00000 1.00000
-5 93.15039 host prdhcistonode02
17 ssd 5.82190 osd.17 up 1.00000 1.00000
18 ssd 5.82190 osd.18 up 1.00000 1.00000
19 ssd 5.82190 osd.19 up 1.00000 1.00000
20 ssd 5.82190 osd.20 up 1.00000 1.00000
21 ssd 5.82190 osd.21 up 1.00000 1.00000
22 ssd 5.82190 osd.22 up 1.00000 1.00000
23 ssd 5.82190 osd.23 up 1.00000 1.00000
24 ssd 5.82190 osd.24 up 1.00000 1.00000
25 ssd 5.82190 osd.25 up 1.00000 1.00000
26 ssd 5.82190 osd.26 up 1.00000 1.00000
27 ssd 5.82190 osd.27 up 1.00000 1.00000
28 ssd 5.82190 osd.28 up 1.00000 1.00000
29 ssd 5.82190 osd.29 up 1.00000 1.00000
30 ssd 5.82190 osd.30 up 1.00000 1.00000
48 ssd 5.82190 osd.48 up 1.00000 1.00000
49 ssd 5.82190 osd.49 up 1.00000 1.00000
-7 93.15039 host prdhcistonode03
16 ssd 5.82190 osd.16 up 1.00000 1.00000
31 ssd 5.82190 osd.31 up 1.00000 1.00000
32 ssd 5.82190 osd.32 up 1.00000 1.00000
33 ssd 5.82190 osd.33 up 1.00000 1.00000
34 ssd 5.82190 osd.34 up 1.00000 1.00000
35 ssd 5.82190 osd.35 up 1.00000 1.00000
36 ssd 5.82190 osd.36 up 1.00000 1.00000
37 ssd 5.82190 osd.37 up 1.00000 1.00000
38 ssd 5.82190 osd.38 up 1.00000 1.00000
39 ssd 5.82190 osd.39 up 1.00000 1.00000
40 ssd 5.82190 osd.40 up 1.00000 1.00000
41 ssd 5.82190 osd.41 up 1.00000 1.00000
42 ssd 5.82190 osd.42 up 1.00000 1.00000
43 ssd 5.82190 osd.43 up 1.00000 1.00000
44 ssd 5.82190 osd.44 up 1.00000 1.00000
45 ssd 5.82190 osd.45 up 1.00000 1.00000
-9 93.15039 host prdhcistonode04
46 ssd 5.82190 osd.46 up 1.00000 1.00000
47 ssd 5.82190 osd.47 up 1.00000 1.00000
50 ssd 5.82190 osd.50 up 1.00000 1.00000
51 ssd 5.82190 osd.51 up 1.00000 1.00000
52 ssd 5.82190 osd.52 up 1.00000 1.00000
53 ssd 5.82190 osd.53 up 1.00000 1.00000
54 ssd 5.82190 osd.54 up 1.00000 1.00000
55 ssd 5.82190 osd.55 up 1.00000 1.00000
56 ssd 5.82190 osd.56 up 1.00000 1.00000
57 ssd 5.82190 osd.57 up 1.00000 1.00000
58 ssd 5.82190 osd.58 up 1.00000 1.00000
59 ssd 5.82190 osd.59 up 1.00000 1.00000
60 ssd 5.82190 osd.60 up 1.00000 1.00000
61 ssd 5.82190 osd.61 up 1.00000 1.00000
62 ssd 5.82190 osd.62 up 1.00000 1.00000
63 ssd 5.82190 osd.63 up 1.00000 1.00000
</pre>
<p>The output of <code>GET /health/full</code> also contains all OSDs, so the UI seems to have an issue rendering this tree (a small rendering sketch follows the dump below):</p>
<pre>
"tree": {
"nodes": [
{
"id": -1,
"name": "default",
"type": "root",
"type_id": 11,
"children": [
-9,
-7,
-5,
-3
]
},
{
"id": -3,
"name": "prdhcistonode01",
"type": "host",
"type_id": 1,
"pool_weights": {},
"children": [
15,
14,
13,
12,
11,
10,
9,
8,
7,
6,
5,
4,
3,
2,
1,
0
]
},
{
"id": 0,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.0"
},
{
"id": 1,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.1"
},
{
"id": 2,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.2"
},
{
"id": 3,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.3"
},
{
"id": 4,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.4"
},
{
"id": 5,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.5"
},
{
"id": 6,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.6"
},
{
"id": 7,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.7"
},
{
"id": 8,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.8"
},
{
"id": 9,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.9"
},
{
"id": 10,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.10"
},
{
"id": 11,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.11"
},
{
"id": 12,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.12"
},
{
"id": 13,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.13"
},
{
"id": 14,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.14"
},
{
"id": 15,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.15"
},
{
"id": -5,
"name": "prdhcistonode02",
"type": "host",
"type_id": 1,
"pool_weights": {},
"children": [
49,
48,
30,
29,
28,
27,
26,
25,
24,
23,
22,
21,
20,
19,
18,
17
]
},
{
"id": 17,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.17"
},
{
"id": 18,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.18"
},
{
"id": 19,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.19"
},
{
"id": 20,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.20"
},
{
"id": 21,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.21"
},
{
"id": 22,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.22"
},
{
"id": 23,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.23"
},
{
"id": 24,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.24"
},
{
"id": 25,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.25"
},
{
"id": 26,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.26"
},
{
"id": 27,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.27"
},
{
"id": 28,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.28"
},
{
"id": 29,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.29"
},
{
"id": 30,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.30"
},
{
"id": 48,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.48"
},
{
"id": 49,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.49"
},
{
"id": -7,
"name": "prdhcistonode03",
"type": "host",
"type_id": 1,
"pool_weights": {},
"children": [
45,
44,
43,
42,
41,
40,
39,
38,
37,
36,
35,
34,
33,
32,
31,
16
]
},
{
"id": 16,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.16"
},
{
"id": 31,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.31"
},
{
"id": 32,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.32"
},
{
"id": 33,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.33"
},
{
"id": 34,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.34"
},
{
"id": 35,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.35"
},
{
"id": 36,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.36"
},
{
"id": 37,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.37"
},
{
"id": 38,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.38"
},
{
"id": 39,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.39"
},
{
"id": 40,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.40"
},
{
"id": 41,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.41"
},
{
"id": 42,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.42"
},
{
"id": 43,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.43"
},
{
"id": 44,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.44"
},
{
"id": 45,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.45"
},
{
"id": -9,
"name": "prdhcistonode04",
"type": "host",
"type_id": 1,
"pool_weights": {},
"children": [
63,
62,
61,
60,
59,
58,
57,
56,
55,
54,
53,
52,
51,
50,
47,
46
]
},
{
"id": 46,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.46"
},
{
"id": 47,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.47"
},
{
"id": 50,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.50"
},
{
"id": 51,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.51"
},
{
"id": 52,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.52"
},
{
"id": 53,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.53"
},
{
"id": 54,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.54"
},
{
"id": 55,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.55"
},
{
"id": 56,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.56"
},
{
"id": 57,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.57"
},
{
"id": 58,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.58"
},
{
"id": 59,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.59"
},
{
"id": 60,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.60"
},
{
"id": 61,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.61"
},
{
"id": 62,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.62"
},
{
"id": 63,
"device_class": "ssd",
"type": "osd",
"type_id": 0,
"crush_weight": 5.8218994140625,
"depth": 2,
"pool_weights": {},
"exists": 1,
"status": "up",
"reweight": 1,
"primary_affinity": 1,
"name": "osd.63"
}
],
"stray": []
},
"crush": {
"devices": [
{
"id": 0,
"name": "osd.0",
"class": "ssd"
},
{
"id": 1,
"name": "osd.1",
"class": "ssd"
},
{
"id": 2,
"name": "osd.2",
"class": "ssd"
},
{
"id": 3,
"name": "osd.3",
"class": "ssd"
},
{
"id": 4,
"name": "osd.4",
"class": "ssd"
},
{
"id": 5,
"name": "osd.5",
"class": "ssd"
},
{
"id": 6,
"name": "osd.6",
"class": "ssd"
},
{
"id": 7,
"name": "osd.7",
"class": "ssd"
},
{
"id": 8,
"name": "osd.8",
"class": "ssd"
},
{
"id": 9,
"name": "osd.9",
"class": "ssd"
},
{
"id": 10,
"name": "osd.10",
"class": "ssd"
},
{
"id": 11,
"name": "osd.11",
"class": "ssd"
},
{
"id": 12,
"name": "osd.12",
"class": "ssd"
},
{
"id": 13,
"name": "osd.13",
"class": "ssd"
},
{
"id": 14,
"name": "osd.14",
"class": "ssd"
},
{
"id": 15,
"name": "osd.15",
"class": "ssd"
},
{
"id": 16,
"name": "osd.16",
"class": "ssd"
},
{
"id": 17,
"name": "osd.17",
"class": "ssd"
},
{
"id": 18,
"name": "osd.18",
"class": "ssd"
},
{
"id": 19,
"name": "osd.19",
"class": "ssd"
},
{
"id": 20,
"name": "osd.20",
"class": "ssd"
},
{
"id": 21,
"name": "osd.21",
"class": "ssd"
},
{
"id": 22,
"name": "osd.22",
"class": "ssd"
},
{
"id": 23,
"name": "osd.23",
"class": "ssd"
},
{
"id": 24,
"name": "osd.24",
"class": "ssd"
},
{
"id": 25,
"name": "osd.25",
"class": "ssd"
},
{
"id": 26,
"name": "osd.26",
"class": "ssd"
},
{
"id": 27,
"name": "osd.27",
"class": "ssd"
},
{
"id": 28,
"name": "osd.28",
"class": "ssd"
},
{
"id": 29,
"name": "osd.29",
"class": "ssd"
},
{
"id": 30,
"name": "osd.30",
"class": "ssd"
},
{
"id": 31,
"name": "osd.31",
"class": "ssd"
},
{
"id": 32,
"name": "osd.32",
"class": "ssd"
},
{
"id": 33,
"name": "osd.33",
"class": "ssd"
},
{
"id": 34,
"name": "osd.34",
"class": "ssd"
},
{
"id": 35,
"name": "osd.35",
"class": "ssd"
},
{
"id": 36,
"name": "osd.36",
"class": "ssd"
},
{
"id": 37,
"name": "osd.37",
"class": "ssd"
},
{
"id": 38,
"name": "osd.38",
"class": "ssd"
},
{
"id": 39,
"name": "osd.39",
"class": "ssd"
},
{
"id": 40,
"name": "osd.40",
"class": "ssd"
},
{
"id": 41,
"name": "osd.41",
"class": "ssd"
},
{
"id": 42,
"name": "osd.42",
"class": "ssd"
},
{
"id": 43,
"name": "osd.43",
"class": "ssd"
},
{
"id": 44,
"name": "osd.44",
"class": "ssd"
},
{
"id": 45,
"name": "osd.45",
"class": "ssd"
},
{
"id": 46,
"name": "osd.46",
"class": "ssd"
},
{
"id": 47,
"name": "osd.47",
"class": "ssd"
},
{
"id": 48,
"name": "osd.48",
"class": "ssd"
},
{
"id": 49,
"name": "osd.49",
"class": "ssd"
},
{
"id": 50,
"name": "osd.50",
"class": "ssd"
},
{
"id": 51,
"name": "osd.51",
"class": "ssd"
},
{
"id": 52,
"name": "osd.52",
"class": "ssd"
},
{
"id": 53,
"name": "osd.53",
"class": "ssd"
},
{
"id": 54,
"name": "osd.54",
"class": "ssd"
},
{
"id": 55,
"name": "osd.55",
"class": "ssd"
},
{
"id": 56,
"name": "osd.56",
"class": "ssd"
},
{
"id": 57,
"name": "osd.57",
"class": "ssd"
},
{
"id": 58,
"name": "osd.58",
"class": "ssd"
},
{
"id": 59,
"name": "osd.59",
"class": "ssd"
},
{
"id": 60,
"name": "osd.60",
"class": "ssd"
},
{
"id": 61,
"name": "osd.61",
"class": "ssd"
},
{
"id": 62,
"name": "osd.62",
"class": "ssd"
},
{
"id": 63,
"name": "osd.63",
"class": "ssd"
}
],
"types": [
{
"type_id": 0,
"name": "osd"
},
{
"type_id": 1,
"name": "host"
},
{
"type_id": 2,
"name": "chassis"
},
{
"type_id": 3,
"name": "rack"
},
{
"type_id": 4,
"name": "row"
},
{
"type_id": 5,
"name": "pdu"
},
{
"type_id": 6,
"name": "pod"
},
{
"type_id": 7,
"name": "room"
},
{
"type_id": 8,
"name": "datacenter"
},
{
"type_id": 9,
"name": "zone"
},
{
"type_id": 10,
"name": "region"
},
{
"type_id": 11,
"name": "root"
}
],
</pre>
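<p>For reference, a small sketch that renders the same <code>tree.nodes</code> structure (a flat list of nodes with <code>children</code> id arrays) the way the CRUSH map viewer is expected to; the abbreviated sample data is assumed from the dump above.</p>
<pre>
# Rebuild and print the CRUSH tree from the flat "nodes" list returned by
# GET /health/full (abbreviated sample data assumed from the dump above).
nodes = [
    {"id": -1, "name": "default", "type": "root", "children": [-9, -7, -5, -3]},
    {"id": -3, "name": "prdhcistonode01", "type": "host", "children": [1, 0]},
    {"id": 0, "name": "osd.0", "type": "osd", "children": []},
    {"id": 1, "name": "osd.1", "type": "osd", "children": []},
    {"id": -5, "name": "prdhcistonode02", "type": "host", "children": []},
    {"id": -7, "name": "prdhcistonode03", "type": "host", "children": []},
    {"id": -9, "name": "prdhcistonode04", "type": "host", "children": []},
]

by_id = {node["id"]: node for node in nodes}

def print_tree(node_id: int, indent: int = 0) -> None:
    """Recursively print a node and all of its children."""
    node = by_id[node_id]
    print(" " * indent + f"{node['type']} {node['name']}")
    for child_id in node.get("children", []):
        print_tree(child_id, indent + 2)

# Every bucket (negative id) and OSD reachable from the root should show up;
# anything missing here would point at the data rather than the renderer.
print_tree(-1)
</pre>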
Dashboard - Feature #45372 (New): mgr/dashboard: monitoring/grafana: any user can run any query o...
https://tracker.ceph.com/issues/45372
2020-05-04T11:34:20Z
Patrick Seidensal
<p>Any user with the viewer role can send any query to the Prometheus data source through Grafana.</p>
<p>According to Prometheus' security model, neither the data which could be accessed nor the access to Prometheus itself is considered an issue, though malicious actions might be possible and enhanced security might be a requirement in some cases.</p>
<blockquote>
<p>It is presumed that untrusted users have access to the Prometheus HTTP endpoint and logs.<br />-- <a class="external" href="https://prometheus.io/docs/operating/security/#prometheus">https://prometheus.io/docs/operating/security/#prometheus</a></p>
</blockquote>
<p>By default though, this access is limited to read-only operations (provided the admin API isn't enabled, which it is not in our default deployment). Measures to mitigate Denial of Service attacks are built into Prometheus itself, though they cannot be guaranteed to prevent all attacks. Secrets are not exposed.</p>
<p>Grafana enables users of its Enterprise version to control which queries can be sent to Prometheus; the Community version does not include such options.</p>
<p>-- <a class="external" href="https://grafana.com/docs/grafana/latest/installation/security/#limit-viewer-query-permissions">https://grafana.com/docs/grafana/latest/installation/security/#limit-viewer-query-permissions</a></p>
<p>A possible solution would be to implement a reverse proxy in Ceph Dashboard, which would be capable of filtering queries before they are relayed to Grafana. This solution should not replace the current integration with Grafana, where Grafana is enabled to allow anonymous access, but should be offered as an additional option.</p>
<p>By implementing such a proxy, the benefit of being able to access Grafana directly and outside of Ceph Dashboard will be lost, as Grafana will need to be configured to run behind a reverse proxy.</p>
<p>By implementing this solution, we will need to specify which operations can be performed by a Ceph Dashboard user (by their group). This involves continuous maintenance.</p>
<p>Please also note that Prometheus' HTTP API in our current deployments is not protected from being accessed, and that this unrestricted access to Prometheus' API should probably be resolved before this issue needs to be fixed.</p>
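<p>A minimal sketch of the filtering idea described above, using CherryPy and an allow-list of PromQL expressions; the upstream URL, allowed queries and parameter handling are assumptions, not a proposed final design.</p>
<pre>
import cherrypy
import requests

# Assumed upstream and allow-list; a real implementation would derive the
# allowed expressions from the dashboard's own panels.
PROMETHEUS_URL = 'http://localhost:9090'
ALLOWED_QUERIES = {
    'ceph_osd_up',
    'ceph_health_status',
}

class PrometheusQueryProxy:
    @cherrypy.expose
    @cherrypy.tools.json_out()
    def query(self, query=None, **params):
        # Relay only queries that are explicitly allowed.
        if query not in ALLOWED_QUERIES:
            raise cherrypy.HTTPError(403, 'query not allowed')
        resp = requests.get(f'{PROMETHEUS_URL}/api/v1/query',
                            params={'query': query, **params})
        return resp.json()

if __name__ == '__main__':
    cherrypy.quickstart(PrometheusQueryProxy(), '/prometheus')
</pre>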
Dashboard - Feature #45302 (New): mgr/dashboard: get/set bucket ACLs
https://tracker.ceph.com/issues/45302
2020-04-28T09:26:52Z
Alfonso Martínez
<p>Ability to get/set ACLs on a bucket:<br /><a class="external" href="https://docs.ceph.com/docs/master/radosgw/s3/bucketops/#get-bucket-acl">https://docs.ceph.com/docs/master/radosgw/s3/bucketops/#get-bucket-acl</a><br /><a class="external" href="https://docs.ceph.com/docs/master/radosgw/s3/bucketops/#put-bucket-acl">https://docs.ceph.com/docs/master/radosgw/s3/bucketops/#put-bucket-acl</a></p>
<p>UPDATE:<br />This feature's implementation should be discussed, as from an S3 perspective ACLs are considered a legacy mechanism:<br />"As a general rule, AWS recommends using S3 bucket policies or IAM policies for access control. S3 ACLs is a legacy access control mechanism that predates IAM." <br /><a class="external" href="https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/">https://aws.amazon.com/blogs/security/iam-policies-and-bucket-policies-and-acls-oh-my-controlling-access-to-s3-resources/</a></p>
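<p>For reference, a sketch of the underlying S3 operations using boto3 against an RGW endpoint; the endpoint, credentials and bucket name are placeholders.</p>
<pre>
import boto3

# Placeholder endpoint/credentials for an RGW S3 endpoint.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# GET Bucket ACL: returns the owner and the list of grants.
acl = s3.get_bucket_acl(Bucket='mybucket')
for grant in acl['Grants']:
    print(grant['Permission'], grant['Grantee'])

# PUT Bucket ACL: set a canned ACL (full grant documents are also supported).
s3.put_bucket_acl(Bucket='mybucket', ACL='public-read')
</pre>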
Dashboard - Bug #44591 (Resolved): CVE-2020-27839: mgr/dashboard: The ceph dashboard is vulnerabl...
https://tracker.ceph.com/issues/44591
2020-03-13T07:49:08Z
Anonymous
<p>To authenticate with the Ceph Dashboard, the user can exchange a username and password for a JWT token. This token will be stored inside the user's browser. On every request, the Ceph Dashboard attaches this token to the 'Authorization' header to get access to the API.<br />The problem is how we store this token. Currently, we just save it inside the local storage of the browser, which makes it vulnerable to XSS attacks. If any of our npm dependencies is compromised, an attacker could steal a user's token and use it. I also tried to inject malicious code into the translation files we host on Transifex. I did not manage to attack the dashboard with that approach, because the Angular i18n compiler rejected all script tags, but that does not mean that it is not possible.<br />Keeping the npm packages up to date is very important, but doesn't fix the problem either. Not everyone who is running a Ceph cluster is going to update it on a regular basis (every week or two).</p>
Currently I see three solutions:
<ul>
<li>Critical actions should always ask for user credentials<br />What are critical actions? From my point of view, at least all user management actions. Changing the role of a user, or enabling or disabling another user's account, should not be possible without entering the password of the current user.</li>
</ul>
<ul>
<li>2FA<br />Using two-factor authentication will mitigate attacks. At least the attacker cannot request a token with just the username and password. The XSS problem would still exist.</li>
</ul>
<ul>
<li>Double-submit cookies<br />Storing the JWT token in an httpOnly cookie prevents XSS attacks, but opens the door for CSRF attacks. Sending two cookies, one containing the JWT header and payload (httpOnly disabled) and one containing the signature (httpOnly enabled), could be a solution. JavaScript cannot read the content of httpOnly cookies, but it can read the content of cookies without httpOnly. So our frontend could read the header.payload and send it with every request inside a meta tag; the backend could then reconstruct the token (a sketch of this split-token approach follows below).</li>
</ul>
<p>Please see: <a class="external" href="https://medium.com/lightrail/getting-token-authentication-right-in-a-stateless-single-page-application-57d0c6474e3">https://medium.com/lightrail/getting-token-authentication-right-in-a-stateless-single-page-application-57d0c6474e3</a> for more details</p>
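<p>A minimal sketch of the split-token idea from the third option above, kept framework-agnostic; the cookie names are made up for illustration.</p>
<pre>
# Split a signed JWT ("header.payload.signature") across two cookies so the
# signature is never exposed to JavaScript (illustrative cookie names).
from typing import Dict

def split_token(jwt: str) -> Dict[str, Dict[str, object]]:
    header_payload, _, signature = jwt.rpartition('.')
    return {
        # Readable by the frontend, which echoes it back on every request.
        'jwt_payload': {'value': header_payload, 'httponly': False},
        # Never readable from JavaScript; sent automatically by the browser.
        'jwt_signature': {'value': signature, 'httponly': True},
    }

def reconstruct_token(payload_from_request: str, signature_cookie: str) -> str:
    """Rebuild the full token server-side before verifying its signature."""
    return f'{payload_from_request}.{signature_cookie}'

token = 'eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiJ9.sig-goes-here'
cookies = split_token(token)
assert reconstruct_token(cookies['jwt_payload']['value'],
                         cookies['jwt_signature']['value']) == token
</pre>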
Dashboard - Bug #44237 (Resolved): mgr/dashboard: security: some system roles allow accessing sen...
https://tracker.ceph.com/issues/44237
2020-02-21T12:11:04Z
Ernesto Puerta
<p>Some system roles (<code>pool-manager</code>, <code>cephfs-manager</code>, <code>ganesha-manager</code>, etc.) have the <code>configOpt</code> read permission enabled, which allows reading all cluster config options and manager module config options. The latter include RGW keys or the Grafana user/admin credentials, plus any sensitive information used by existing or new modules. As the dashboard cannot control what new information is exposed by these modules, the suggestion is to remove that read permission from all system roles except the specific management ones (<code>administrator</code> and <code>cluster-manager</code>).</p>
The reason why configOpts was added to those roles is that at some point they require access to some cluster configuration settings:
<ul>
<li><b><code>pool-manager</code></b>: checks <code>/api/cluster_conf/osd_pool_default_pg_autoscale_mode</code>. This parameter could/should also be exposed via <code>/api/pools/_info</code>, which already returns other ceph config params (e.g.: <code>bluestore_compression_algorithm</code>).</li>
<li><b><code>ganesha-manager</code>, <code>cephfs-manager</code>, <code>rgw-manager</code></b>: I couldn't find any direct dependency with cluster config options.</li>
</ul>
<p>A different case is the <code>read-only</code> role. While it initially makes sense to allow the <code>configOpt</code> read permission, a dashboard administrator might assume that <code>read-only</code> perfectly fits a <code>guest</code>/low-privileged user. On the contrary, a <code>read-only</code> user has access to the same sensitive data as mentioned above.</p>
Suggested next steps:
<ul>
<li>Discuss and agree on whether the <code>read-only</code> role should have access to <code>configOpts</code>. This could be improved by splitting it into two roles: <code>administrator-read-only</code> and <code>guest</code> (without the read permission on sensitive data). As I'm against adding more roles, I'd simply leave the low-privilege <code>guest</code> one.</li>
<li>Make <code>pool-form</code> get <code>osd_pool_default_pg_autoscale_mode</code> from <code>/pool/_info</code>.</li>
<li>Remove the <code>configOpt</code> read permission (and test) in all other roles (see the sketch after this list).</li>
</ul>
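<p>A conceptual sketch of the suggested change: system roles are modelled as maps from scope to permission set, and the read permission on config options is kept only for the management roles. The scope and role names mirror the issue text; the data structure is purely illustrative.</p>
<pre>
# Illustrative role -> scope -> permissions mapping (not the dashboard's
# real access_control structures).
READ = 'read'

system_roles = {
    'administrator':   {'config-opt': {READ, 'create', 'update', 'delete'}},
    'cluster-manager': {'config-opt': {READ}},
    'pool-manager':    {'pool': {READ, 'create', 'update', 'delete'},
                        'config-opt': {READ}},   # to be removed
    'rgw-manager':     {'rgw': {READ, 'create', 'update', 'delete'},
                        'config-opt': {READ}},   # to be removed
}

# Only the management roles keep read access to config options.
KEEP_CONFIG_OPT = {'administrator', 'cluster-manager'}

for role, scopes in system_roles.items():
    if role not in KEEP_CONFIG_OPT:
        scopes.pop('config-opt', None)

assert 'config-opt' not in system_roles['pool-manager']
assert 'config-opt' in system_roles['administrator']
</pre>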
Dashboard - Bug #43607 (Resolved): mgr/dashboard: fix improper URL checking
https://tracker.ceph.com/issues/43607
2020-01-15T12:59:37Z
Ernesto Puerta
<p>From <a class="external" href="https://github.com/rook/rook/issues/4635">https://github.com/rook/rook/issues/4635</a></p>
<p>Only releases 14.2.5 and above (including master) show this behaviour, which was introduced in <a class="external" href="https://github.com/ceph/ceph/pull/30694">https://github.com/ceph/ceph/pull/30694</a>.</p>
<p>Assigned CVE-2020-1699</p>
<p><a href="https://cwe.mitre.org/data/definitions/22.html" class="external">CWE-22</a></p>
Dashboard - Bug #43262 (New): mgr/dashboard: security: upgrade serialize-javascript
https://tracker.ceph.com/issues/43262
2019-12-11T16:38:46Z
Ernesto Puerta
<p><a href="https://github.com/advisories/GHSA-h9rv-jmmf-4pgx" class="external">GHSA-h9rv-jmmf-4pgx</a>. Seems it might have no impact on dashboard, as it's a dependency of webpack, and hence used only at build time.</p>
Dashboard - Feature #43146 (New): mgr/dashboard: Display available updates for a cluster
https://tracker.ceph.com/issues/43146
2019-12-05T12:02:33Z
Lenz Grimmer
<p>The SSH orchestrator will soon be able to provide <a href="https://github.com/ceph/ceph/pull/31827" class="external">information about pending updates</a>, so it would be useful to notify the administrator about available updates in the dashboard, e.g. via a toast notification or by visualizing the version/update information in the list of hosts.</p>
rbd - Feature #42620 (Rejected): rbd namespace support should support moving RBDs out of a namesp...
https://tracker.ceph.com/issues/42620
2019-11-04T14:44:03Z
Lenz Grimmer
<p>Currently, it does not seem to be possible to move an RBD into a different namespace or out of the current one after it has been created. As an administrator I would find it useful to have some more flexibility in moving RBDs between different namespaces or taking it out of the given namespace (e.g. if I want to preserve that image before deleting that entire namespace).</p>
rbd - Feature #42619 (Rejected): rbd namespace deletion should support deleting all RBDs of that ...
https://tracker.ceph.com/issues/42619
2019-11-04T14:40:27Z
Lenz Grimmer
<p>Currently, trying to remove a namespace that contains RBDs simply fails with <code>rbd: namespace contains images which must be deleted first.</code>, leaving the administrator with the task of manually removing all RBDs in there first.</p>
<p>It would be nice if there was a way to forcibly remove the namespace including all of its RBDs.</p>
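<p>A sketch of the manual workaround using the librados/librbd Python bindings: delete every image in the namespace, then remove the now-empty namespace. Pool and namespace names are placeholders, and the exact binding calls should be verified against the release in use.</p>
<pre>
import rados
import rbd

POOL = 'rbd'             # placeholder pool name
NAMESPACE = 'project-a'  # placeholder namespace to remove

with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx(POOL) as ioctx:
        rbd_inst = rbd.RBD()
        # Remove every image inside the namespace first...
        ioctx.set_namespace(NAMESPACE)
        for image_name in rbd_inst.list(ioctx):
            rbd_inst.remove(ioctx, image_name)
        # ...then the now-empty namespace itself can be removed.
        ioctx.set_namespace('')
        rbd_inst.namespace_remove(ioctx, NAMESPACE)
</pre>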