Bug #38634

closed

mgr/dashboard: creating a new iSCSI target disk failed

Added by 一帆 师 about 5 years ago. Updated about 3 years ago.

Status:
Can't reproduce
Priority:
Normal
Assignee:
Ricardo Marques
Category:
Component - iSCSI
Target version:
% Done:

0%

Source:
Tags:
Backport:
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature (v1):
Crash signature (v2):

Description

I created the first iSCSI target with a client and disks using gwcli, and it works. It is also displayed on the dashboard, as in pic-1.

Then I tried to create the second one via the dashboard.

I cannot create a new iSCSI disk while creating a target, which seems unreasonable: I can only choose from the existing images in the rbd pool.

OK, so I first created an RBD image, rbd.disk_1, and tried again.

I can choose an image now.

But after I filled in everything the dashboard required and clicked the Submit button, it failed again. A target was created, but an exception was thrown, and the target has no initiator and no disk, even though I had added an initiator.

Fine. I tried to edit the target and add an initiator and a disk.

It still doesn't work; it failed with the same error.

The exception is as follows:

traceback: "Traceback (most recent call last):↵
File "/usr/lib/python2.7/site-packages/cherrypy/_cprequest.py", line 656, in respond↵ response.body = self.handler()↵
File "/usr/lib/python2.7/site-packages/cherrypy/lib/encoding.py", line 188, in call__↵ self.body = self.oldhandler(*args, **kwargs)↵
File "/usr/lib/python2.7/site-packages/cherrypy/_cptools.py", line 221, in wrap↵ return self.newhandler(innerfunc, *args, **kwargs)↵
File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 88, in dashboard_exception_handler↵ return handler(*args, **kwargs)↵
File "/usr/lib/python2.7/site-packages/cherrypy/_cpdispatch.py", line 34, in _call_↵ return self.callable(*self.args, **self.kwargs)↵
File "/usr/share/ceph/mgr/dashboard/controllers/__init
.py", line 545, in inner↵ ret = func(*args, **kwargs)↵
File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 738, in wrapper↵ return func(*vpath, **params)↵
File "/usr/share/ceph/mgr/dashboard/controllers/__init__.py", line 350, in wrapper↵ raise ex↵KeyError: 'backstore'↵"


Files

1-create-gwcli.png (78.4 KB) 1-create-gwcli.png 一帆 师, 03/08/2019 03:16 AM
2-show-1.png (52.1 KB) 2-show-1.png 一帆 师, 03/08/2019 03:16 AM
3-show-1.png (43.4 KB) 3-show-1.png 一帆 师, 03/08/2019 03:16 AM
3-show-2.png (163 KB) 3-show-2.png 一帆 师, 03/08/2019 03:16 AM
3-show-3.png (130 KB) 3-show-3.png 一帆 师, 03/08/2019 03:16 AM
111111.png (120 KB) 111111.png 一帆 师, 03/12/2019 04:13 AM
QQ图片20190314111125.png (221 KB) QQ图片20190314111125.png 一帆 师, 03/14/2019 04:41 AM
11111.png (221 KB) 11111.png 一帆 师, 03/14/2019 04:41 AM
QQ图片20190314124617.png (183 KB) QQ图片20190314124617.png 一帆 师, 03/14/2019 04:46 AM
2.png (37 KB) 2.png 一帆 师, 03/15/2019 02:00 AM
Actions #1

Updated by Ricardo Marques about 5 years ago

Looks like your `ceph-iscsi` configuration is outdated. Can you please paste the output of:

# gwcli export copy

so we can check the content and version of your `ceph-iscsi` configuration?

(`backstore` is a `disk` field introduced in config version `5`, and the latest version is `6`)

If you don't have the latest config version, but you are using the latest `ceph-iscsi` (master branch), you just have to restart the `rbd-target-gw` service in order to get the config version updated.
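For reference, a quick way to check the current config version and then trigger the upgrade (a minimal sketch; the one-liner assumes the exported config is plain JSON with a top-level `version` field, as in the outputs below):

# gwcli export copy | python -c 'import json,sys; print(json.load(sys.stdin)["version"])'
# systemctl restart rbd-target-gw

After the restart, re-running the first command should show the new version.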

In the meanwhile, I'll submit a ceph dashboard PR to validate the ceph-iscsi config version.

Actions #2

Updated by Ricardo Marques about 5 years ago

PR that will validate the `ceph-iscsi` config version: https://github.com/ceph/ceph/pull/26835

Actions #3

Updated by Ricardo Marques about 5 years ago

  • Category set to 141
Actions #4

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

Looks like your `ceph-iscsi` configuration is outdated. Can you please paste the output of:

[...]

so we can check the content and version of your `ceph-iscsi` configuration?

[...]

{
  "clients": {
    "iqn.2019-03.com.cfdatas:1341310110": {
      "auth": {
        "chap": "000000000000/000000000000"
      },
      "created": "2019/03/06 06:09:15",
      "group_name": "",
      "luns": {
        "rbd.disk_1": {
          "lun_id": 0
        }
      },
      "updated": "2019/03/06 06:10:02"
    }
  },
  "created": "2019/03/06 06:06:18",
  "disks": {
    "rbd.disk_1": {
      "created": "2019/03/06 06:09:43",
      "image": "disk_1",
      "owner": "node1",
      "pool": "rbd",
      "pool_id": 6,
      "updated": "2019/03/06 06:09:43",
      "wwn": "db25bbf7-c5a4-4ded-b675-faa0b6c9638a"
    }
  },
  "epoch": 9,
  "gateways": {
    "created": "2019/03/06 06:08:28",
    "ip_list": [
      "192.168.20.1",
      "192.168.6.5"
    ],
    "iqn": "iqn.2019-03.com.cfdatas.iscsi-gw:iscsi-igw",
    "node1": {
      "active_luns": 1,
      "created": "2019/03/06 06:08:36",
      "gateway_ip_list": [
        "192.168.20.1",
        "192.168.6.5"
      ],
      "inactive_portal_ips": [
        "192.168.6.5"
      ],
      "iqn": "iqn.2019-03.com.cfdatas.iscsi-gw:iscsi-igw",
      "portal_ip_address": "192.168.20.1",
      "tpgs": 2,
      "updated": "2019/03/06 06:09:43"
    },
    "node2": {
      "active_luns": 0,
      "created": "2019/03/06 06:08:43",
      "gateway_ip_list": [
        "192.168.20.1",
        "192.168.6.5"
      ],
      "inactive_portal_ips": [
        "192.168.20.1"
      ],
      "iqn": "iqn.2019-03.com.cfdatas.iscsi-gw:iscsi-igw",
      "portal_ip_address": "192.168.6.5",
      "tpgs": 2,
      "updated": "2019/03/06 06:08:43"
    }
  },
  "groups": {},
  "updated": "2019/03/06 06:10:02",
  "version": 3
}

Uh... this is version 3?

I don't understand what you said about "restart the `rbd-target-gw` service in order to get the config version updated."

Can restarting rbd-target-gw update the version?

Actions #5

Updated by Ricardo Marques about 5 years ago

Exactly, your configuration is on version 3, which is an outdated version.

You should install the latest `ceph-iscsi` development version from master branch [1], and then restart the `rbd-target-gw` service.

When `rbd-target-gw` service starts, it checks your configuration version and performs the upgrade if needed.

[1] https://github.com/ceph/ceph-iscsi/tree/master
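For reference, one possible way to install it from a source checkout (a sketch only, assuming the repository's standard setuptools layout; adapt this to your own packaging workflow, e.g. building an RPM instead):

# git clone https://github.com/ceph/ceph-iscsi.git
# cd ceph-iscsi
# python setup.py install
# systemctl restart rbd-target-gw rbd-target-api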

Actions #6

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

Exactly, your configuration is on version 3, which is an outdated version.

You should install the latest `ceph-iscsi` development version from master branch [1], and then restart the `rbd-target-gw` service.

When `rbd-target-gw` service starts, it checks your configuration version and performs the upgrade if needed.

[1] https://github.com/ceph/ceph-iscsi/tree/master

In fact, what I used is exactly what you pointed to: I used https://github.com/ceph/ceph-iscsi/tree/master to build ceph-iscsi-3.0-1.el7.centos.noarch.rpm and installed it, as you saw.

I still didn't get version 6.

Actions #7

Updated by Ricardo Marques about 5 years ago

Did you restart the service?

Actions #8

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

Did you restart the service?

I remember restarting it, but I am not sure. Let me try again tomorrow, thanks.

Actions #9

Updated by Lenz Grimmer about 5 years ago

  • Status changed from New to Need More Info
Actions #10

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

Did you restart the service?

{
  "created": "2019/03/12 03:36:30",
  "discovery_auth": {
    "chap": "",
    "chap_mutual": ""
  },
  "disks": {},
  "epoch": 21,
  "gateways": {
    "node1": {
      "active_luns": 0,
      "created": "2019/03/12 04:01:36",
      "updated": "2019/03/12 04:01:36"
    },
    "node2": {
      "active_luns": 0,
      "created": "2019/03/12 04:01:37",
      "updated": "2019/03/12 04:01:37"
    }
  },
  "targets": {
    "iqn.2001-07.com.ceph:1552363257839": {
      "acl_enabled": true,
      "clients": {},
      "controls": {},
      "created": "2019/03/12 04:01:35",
      "disks": [],
      "groups": {},
      "ip_list": [
        "192.168.6.4",
        "192.168.6.6"
      ],
      "portals": {
        "node1": {
          "gateway_ip_list": [
            "192.168.6.4",
            "192.168.6.6"
          ],
          "inactive_portal_ips": [
            "192.168.6.6"
          ],
          "portal_ip_address": "192.168.6.4",
          "tpgs": 2
        },
        "node2": {
          "gateway_ip_list": [
            "192.168.6.4",
            "192.168.6.6"
          ],
          "inactive_portal_ips": [
            "192.168.6.4"
          ],
          "portal_ip_address": "192.168.6.6",
          "tpgs": 2
        }
      },
      "updated": "2019/03/12 04:01:37"
    },
    "iqn.2001-07.com.ceph:1552363316938": {
      "acl_enabled": true,
      "clients": {},
      "controls": {},
      "created": "2019/03/12 04:02:17",
      "disks": [],
      "groups": {},
      "ip_list": [
        "192.168.6.4",
        "192.168.6.6"
      ],
      "portals": {
        "node1": {
          "gateway_ip_list": [
            "192.168.6.4",
            "192.168.6.6"
          ],
          "inactive_portal_ips": [
            "192.168.6.6"
          ],
          "portal_ip_address": "192.168.6.4",
          "tpgs": 2
        },
        "node2": {
          "gateway_ip_list": [
            "192.168.6.4",
            "192.168.6.6"
          ],
          "inactive_portal_ips": [
            "192.168.6.4"
          ],
          "portal_ip_address": "192.168.6.6",
          "tpgs": 2
        }
      },
      "updated": "2019/03/12 04:02:19"
    }
  },
  "updated": "2019/03/12 04:02:19",
  "version": 6
}

I restarted it and the version is 6 now, but it still doesn't work.

Actions #11

Updated by 一帆 师 about 5 years ago

一帆 师 wrote:

Ricardo Marques wrote:

Did you restart the service?

I remember restarting it, but I am not sure. Let me try again tomorrow, thanks.

The error is still the `backstore` KeyError.

Actions #12

Updated by Ricardo Marques about 5 years ago

In that case, we have to check if your Ceph Dashboard already supports the 'backstore' field that was introduced in PRs https://github.com/ceph/ceph/pull/26575 and https://github.com/ceph/ceph/pull/26506.

If you are using Ceph Nautilus RC1, I'm afraid you have to wait for RC2 (...or compile the nautilus branch directly).
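For reference, a quick way to confirm which dashboard build is installed (assuming an RPM-based install, as used here):

# rpm -q ceph-mgr-dashboard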

Actions #13

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

In that case, we have to check if your Ceph Dashboard already supports the 'backstore' field that was introduced in PRs https://github.com/ceph/ceph/pull/26575 and https://github.com/ceph/ceph/pull/26506.

If you are using Ceph Nautilus RC1, I'm afraid you have to wait for RC2 (...or compile the nautilus branch directly).

The ceph dashboard I used is http://download.ceph.com/rpm-nautilus/el7/noarch/ceph-mgr-dashboard-14.1.0-0.el7.noarch.rpm

Actions #14

Updated by Ricardo Marques about 5 years ago

You need release candidate 2, which is now available here: http://download.ceph.com/rpm-nautilus/el7/noarch/ceph-mgr-dashboard-14.1.1-0.el7.noarch.rpm

Actions #15

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

You need release candidate 2, which is now available here: http://download.ceph.com/rpm-nautilus/el7/noarch/ceph-mgr-dashboard-14.1.1-0.el7.noarch.rpm

There are still 2 bugs I noticed.

The first one: when I create an RBD image, the default feature is only "layering", but the image can still be selected when creating an iSCSI target. The features are not sufficient, which causes the error "LUN allocation failure".

The second: with 14.1.0 I could create a target, but with 14.1.1 I can't create a target any more; the error is "Gateway creation failed on node4. Failed to create the gateway".

I have two nodes, named node4 and node5, and I can create the target with gwcli. After creating the target with gwcli, I can manage it fine in the dashboard, using other functions such as adding a disk (provided the image features are sufficient) and deleting the target.

Actions #16

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

You need release candidate 2, which is now available here: http://download.ceph.com/rpm-nautilus/el7/noarch/ceph-mgr-dashboard-14.1.1-0.el7.noarch.rpm

Actions #17

Updated by Ricardo Marques about 5 years ago

一帆 师 wrote:

Ricardo Marques wrote:

[...]

There are still 2 bugs I noticed.

The first one: when I create an RBD image, the default feature is only "layering", but the image can still be selected when creating an iSCSI target. The features are not sufficient, which causes the error "LUN allocation failure".

Yes, at the moment all images are listed, even the ones that cannot be used. This is a known issue, and we already have an issue open to track it: https://tracker.ceph.com/issues/38074
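Until that is fixed, a possible workaround is to create the image with the features iSCSI needs up front (a sketch; it assumes `exclusive-lock` is required in addition to `layering`, and the 10G size is only an example):

# rbd create rbd/disk_1 --size 10G --image-feature layering,exclusive-lock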

The second: with 14.1.0 I could create a target, but with 14.1.1 I can't create a target any more; the error is "Gateway creation failed on node4. Failed to create the gateway".

There may be an error in node4's `/var/log/rbd-target-api/rbd-target-api.log` file; can you paste it here?

I have two nodes, named node4 and node5, and I can create the target with gwcli. After creating the target with gwcli, I can manage it fine in the dashboard, using other functions such as adding a disk (provided the image features are sufficient) and deleting the target.

Actions #18

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

[...]

There may be an error in node4's `/var/log/rbd-target-api/rbd-target-api.log` file; can you paste it here?

2019-03-14 05:29:10,912 DEBUG [rbd-target-api:618:_gateway()] - Attempting create of gateway node5
2019-03-14 05:29:10,917 ERROR [rbd-target-api:636:_gateway()] - Unable to create an instance of the GWTarget class: gateway IP addresses provided do not match any ip on this host
2019-03-14 05:29:10,918 INFO [_internal.py:87:_log()] - ::1 - - [14/Mar/2019 05:29:10] "PUT /api/_gateway/iqn.2001-07.com.ceph:1552555742271/node5 HTTP/1.1" 500 -
2019-03-14 05:29:10,920 ERROR [rbd-target-api:2329:call_api()] - _gateway change on localhost failed with 500
2019-03-14 05:29:10,920 DEBUG [rbd-target-api:2351:call_api()] - failed on node4. Failed to create the gateway
2019-03-14 05:29:10,921 INFO [_internal.py:87:_log()] - ::ffff:192.168.20.1 - - [14/Mar/2019 05:29:10] "PUT /api/gateway/iqn.2001-07.com.ceph:1552555742271/node5 HTTP/1.1" 500 -
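The "gateway IP addresses provided do not match any ip on this host" message can be cross-checked by listing the IPv4 addresses actually configured on each gateway node and comparing them with the portal IPs the dashboard sent (a quick shell sketch):

# ip -4 -o addr show | awk '{print $4}'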

Actions #19

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

[...]

There may be an error in node4's `/var/log/rbd-target-api/rbd-target-api.log` file; can you paste it here?

And this is my IP information:

[root@node4 rbd-target-api]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens15f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:24:ec:f1:83:4b brd ff:ff:ff:ff:ff:ff
inet 192.168.10.1/24 brd 192.168.10.255 scope global noprefixroute ens15f0
valid_lft forever preferred_lft forever
inet6 fe80::9cb0:4352:ae3b:f83d/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:1b:cd:05:10:da brd ff:ff:ff:ff:ff:ff
inet 192.168.20.1/24 brd 192.168.20.255 scope global noprefixroute ens1f0
valid_lft forever preferred_lft forever
inet6 fe80::161b:ef42:f515:1090/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: ens15f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:24:ec:f1:83:4c brd ff:ff:ff:ff:ff:ff
5: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:1b:cd:05:10:db brd ff:ff:ff:ff:ff:ff
inet 192.168.30.1/24 brd 192.168.30.255 scope global noprefixroute ens1f1
valid_lft forever preferred_lft forever
inet6 fe80::4890:85f5:1082:f3c6/64 scope link noprefixroute
valid_lft forever preferred_lft forever

Actions #20

Updated by 一帆 师 about 5 years ago

一帆 师 wrote:

[...]

2019-03-14 05:29:10,917 ERROR [rbd-target-api:636:_gateway()] - Unable to create an instance of the GWTarget class: gateway IP addresses provided do not match any ip on this host

[...]

When I create a target, I need to choose the gateways; I chose node4 (192.168.20.1) and node5 (192.168.20.2).

Actions #21

Updated by Ricardo Marques about 5 years ago

Do you have any error on node5 ( `/var/log/rbd-target-api/rbd-target-api.log` )? What is the output of `ip addr` on node5?

Actions #22

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

Do you have any error on node5 ( `/var/log/rbd-target-api/rbd-target-api.log` )? What is the output of `ip addr` on node5?

[root@node5 rbd-target-api]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens15f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:24:ec:f1:83:4e brd ff:ff:ff:ff:ff:ff
inet 192.168.10.2/24 brd 192.168.10.255 scope global noprefixroute ens15f0
valid_lft forever preferred_lft forever
inet6 fe80::4f11:de28:c628:420/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: ens15f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:24:ec:f1:83:4f brd ff:ff:ff:ff:ff:ff
4: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:1b:cd:05:10:fe brd ff:ff:ff:ff:ff:ff
inet 192.168.20.2/24 brd 192.168.20.255 scope global noprefixroute ens1f0
valid_lft forever preferred_lft forever
inet6 fe80::9810:19cf:9db0:5e6f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:1b:cd:05:10:ff brd ff:ff:ff:ff:ff:ff
inet 192.168.30.2/24 brd 192.168.30.255 scope global noprefixroute ens1f1
valid_lft forever preferred_lft forever
inet6 fe80::a2fa:1135:82f:6290/64 scope link noprefixroute
valid_lft forever preferred_lft forever

There is no error output in node5's rbd-target-api.log.

It looks as if node5 never even received a request to create the target.

But I am sure node5 works, because if I create the target with gwcli, it works.
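Since node5 apparently never receives the request, one more thing worth checking is whether node5's rbd-target-api endpoint is reachable from node4 (a sketch, assuming the default `api_port` of 5000, HTTPS with a self-signed certificate, and node5's portal address from the output above):

# curl -k https://192.168.20.2:5000/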

Actions #23

Updated by 一帆 师 about 5 years ago

一帆 师 wrote:

[...]

Actions #24

Updated by 一帆 师 about 5 years ago

Ricardo Marques wrote:

Do you have any error on node5 ( `/var/log/rbd-target-api/rbd-target-api.log` )? What is the output of `ip addr` on node5?

This bug still exists on 14.2.0.

I used ceph-14.2.0

and ceph-iscsi-3.0 from "https://shaman.ceph.com/repos/ceph-iscsi/master/53802ae0957935907654a101270357d6f3eb5577/default/121940/"

Actions #25

Updated by Lenz Grimmer about 5 years ago

  • Assignee set to Ricardo Marques
Actions #26

Updated by Ricardo Marques about 5 years ago

  • Status changed from Need More Info to Can't reproduce

I'm closing this issue because I cannot reproduce this problem using the latest code from ceph (nautilus branch) and ceph-iscsi (master branch).

If the problem persists in your environment, please reopen this issue or create a new one.

Actions #27

Updated by Ernesto Puerta about 3 years ago

  • Project changed from mgr to Dashboard
  • Category changed from 141 to Component - iSCSI