Bug #47946

vstart.sh: failed to run with multi active mds, when setting max_mds.

Added by Jinmyeong Lee 3 months ago. Updated about 2 months ago.

Status:
Resolved
Priority:
Normal
Assignee:
Category:
-
Target version:
% Done:

0%

Source:
Community (dev)
Tags:
Backport:
octopus,nautilus
Regression:
No
Severity:
3 - minor
Reviewed:
Affected Versions:
ceph-qa-suite:
Pull request ID:
Crash signature:

Description

VSTART_DEST=ceph/build FS=1 MON=1 OSD=1 MDS=3 ../src/vstart.sh -n -d -x --smallmds --multimds 3

In vstart.sh, filesystems are created with bare alphabetical names (a, b, ...).
However, when setting max_mds (`fs set <name> max_mds`), the script prefixes the name with `cephfs_` (cephfs_a, cephfs_b, ...).
As a result, the command fails with Error ENOENT: Not found: 'cephfs_a'.
I would like to submit a PR against the master branch for this fix.
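The mismatch can be sketched as follows. This is a minimal shell illustration only: the `ceph` invocations are built as strings and echoed rather than executed (no cluster required), and the "fixed" form is an assumption about the intended behavior, not the merged PR itself.

```shell
# Sketch of the vstart.sh naming mismatch; names are taken from the log below.
fs_name="a"

# vstart.sh creates the volume with the bare name:
create_cmd="ceph fs volume create ${fs_name}"

# ...but then sets max_mds on a "cephfs_"-prefixed name that was never created,
# which is what produces "Error ENOENT: Not found: 'cephfs_a'":
broken_cmd="ceph fs set cephfs_${fs_name} max_mds 3"

# Assumed fix: reference the same name in both places.
fixed_cmd="ceph fs set ${fs_name} max_mds 3"

echo "$create_cmd"
echo "$broken_cmd"
echo "$fixed_cmd"
```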

  • going verbose
    ../src/vstart.sh: line 938: ss: command not found
    rm -f core*
    hostname

    ../src/vstart.sh: line 970: ifconfig: command not found
    ip 127.0.0.1
    port 40798

NOTE: hostname resolves to loopback; remote hosts will not be able to
connect. either adjust /etc/hosts, or edit this script to use your
machine's real IP.

/home1/irteamsu/ceph/build/bin/ceph-authtool --create-keyring --gen-key --name=mon. /home1/irteamsu/ceph/build/keyring --cap mon 'allow *'
creating /home1/irteamsu/ceph/build/keyring
/home1/irteamsu/ceph/build/bin/ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /home1/irteamsu/ceph/build/keyring
/home1/irteamsu/ceph/build/bin/ceph-authtool --gen-key --name=client.fs --cap mon 'allow r' --cap osd 'allow rw tag cephfs data=*' --cap mds 'allow rwp' /home1/irteamsu/ceph/build/keyring
/home1/irteamsu/ceph/build/bin/ceph-authtool --gen-key --name=client.rgw --cap mon 'allow rw' --cap osd 'allow rwx' --cap mgr 'allow rw' /home1/irteamsu/ceph/build/keyring
/home1/irteamsu/ceph/build/bin/monmaptool --create --clobber --addv a [v2:127.0.0.1:40798,v1:127.0.0.1:40799] --print /tmp/ceph_monmap.137008
/home1/irteamsu/ceph/build/bin/monmaptool: monmap file /tmp/ceph_monmap.137008
/home1/irteamsu/ceph/build/bin/monmaptool: generated fsid 2d2c92a7-ecff-42c2-9c7f-7b2489c2507c
epoch 0
fsid 2d2c92a7-ecff-42c2-9c7f-7b2489c2507c
last_changed 2020-10-22 16:30:51.381265
created 2020-10-22 16:30:51.381265
min_mon_release 0 (unknown)
0: [v2:127.0.0.1:40798/0,v1:127.0.0.1:40799/0] mon.a
/home1/irteamsu/ceph/build/bin/monmaptool: writing epoch 0 to /tmp/ceph_monmap.137008 (1 monitors)
rm -rf /home1/irteamsu/ceph/build/dev/mon.a
mkdir -p /home1/irteamsu/ceph/build/dev/mon.a
/home1/irteamsu/ceph/build/bin/ceph-mon --mkfs -c /home1/irteamsu/ceph/build/ceph.conf -i a --monmap=/tmp/ceph_monmap.137008 --keyring=/home1/irteamsu/ceph/build/keyring
rm -f /tmp/ceph_monmap.137008
/home1/irteamsu/ceph/build/bin/ceph-mon -i a -c /home1/irteamsu/ceph/build/ceph.conf
Populating config ...
Setting debug configs ...
creating /home1/irteamsu/ceph/build/dev/mgr.x/keyring
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring -i /home1/irteamsu/ceph/build/dev/mgr.x/keyring auth add mgr.x mon 'allow profile mgr' mds 'allow *' osd 'allow *'
added key for mgr.x
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring config set mgr mgr/dashboard/x/ssl_server_port 41798 --force
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring config set mgr mgr/prometheus/x/server_port 9283 --force
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring config set mgr mgr/restful/x/server_port 42798 --force
Starting mgr.x
/home1/irteamsu/ceph/build/bin/ceph-mgr -i x -c /home1/irteamsu/ceph/build/ceph.conf
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring tell mgr dashboard ac-user-create admin admin administrator
{"username": "admin", "lastUpdate": 1603351864, "name": null, "roles": ["administrator"], "password": "$2b$12$VO7CFPjYVpt1o2b0cVGutuI7l7b7ZyjrqY.B7GqgpFBp2/Vaq.A6S", "email": null}
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring tell mgr dashboard create-self-signed-cert
Self-signed certificate created
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring tell mgr restful create-self-signed-cert
Restarting RESTful API server...
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring restful create-key admin -o /tmp/tmp.Y4qtXKVgGD
add osd0 1b9ba7e3-1e15-4db3-864e-46bf2377d976
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring osd new 1b9ba7e3-1e15-4db3-864e-46bf2377d976 -i /home1/irteamsu/ceph/build/dev/osd0/new.json
0
2020-10-22 16:31:07.932 7f117b099a80 -1 auth: unable to find a keyring on /home1/irteamsu/ceph/build/dev/osd0/keyring: (2) No such file or directory
2020-10-22 16:31:07.941 7f117b099a80 -1 auth: unable to find a keyring on /home1/irteamsu/ceph/build/dev/osd0/keyring: (2) No such file or directory
2020-10-22 16:31:07.941 7f117b099a80 -1 auth: unable to find a keyring on /home1/irteamsu/ceph/build/dev/osd0/keyring: (2) No such file or directory
2020-10-22 16:31:07.993 7f117b099a80 -1 bluestore(/home1/irteamsu/ceph/build/dev/osd0/block) _read_bdev_label failed to open /home1/irteamsu/ceph/build/dev/osd0/block: (2) No such file or directory
2020-10-22 16:31:07.993 7f117b099a80 -1 bluestore(/home1/irteamsu/ceph/build/dev/osd0/block) _read_bdev_label failed to open /home1/irteamsu/ceph/build/dev/osd0/block: (2) No such file or directory
2020-10-22 16:31:07.993 7f117b099a80 -1 bluestore(/home1/irteamsu/ceph/build/dev/osd0/block) _read_bdev_label failed to open /home1/irteamsu/ceph/build/dev/osd0/block: (2) No such file or directory
2020-10-22 16:31:07.995 7f117b099a80 -1 bluestore(/home1/irteamsu/ceph/build/dev/osd0) _read_fsid unparsable uuid
adding osd0 key to auth repository
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring -i /home1/irteamsu/ceph/build/dev/osd0/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow profile osd'
start osd.0
/home1/irteamsu/ceph/build/bin/ceph-osd -i 0 -c /home1/irteamsu/ceph/build/ceph.conf
2020-10-22 16:31:14.450 7f07dbd3ba80 -1 Falling back to public interface
2020-10-22 16:31:16.588 7f07dbd3ba80 -1 osd.0 0 log_to_monitors {default=true}
mkdir -p /home1/irteamsu/ceph/build/dev/mds.a
/home1/irteamsu/ceph/build/bin/ceph-authtool --create-keyring --gen-key --name=mds.a /home1/irteamsu/ceph/build/dev/mds.a/keyring
creating /home1/irteamsu/ceph/build/dev/mds.a/keyring
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring -i /home1/irteamsu/ceph/build/dev/mds.a/keyring auth add mds.a mon 'allow profile mds' osd 'allow rw tag cephfs *=*' mds allow mgr 'allow profile mds'
added key for mds.a
/home1/irteamsu/ceph/build/bin/ceph-mds -i a -c /home1/irteamsu/ceph/build/ceph.conf
starting mds.a at
mkdir -p /home1/irteamsu/ceph/build/dev/mds.b
/home1/irteamsu/ceph/build/bin/ceph-authtool --create-keyring --gen-key --name=mds.b /home1/irteamsu/ceph/build/dev/mds.b/keyring
creating /home1/irteamsu/ceph/build/dev/mds.b/keyring
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring -i /home1/irteamsu/ceph/build/dev/mds.b/keyring auth add mds.b mon 'allow profile mds' osd 'allow rw tag cephfs *=*' mds allow mgr 'allow profile mds'
added key for mds.b
/home1/irteamsu/ceph/build/bin/ceph-mds -i b -c /home1/irteamsu/ceph/build/ceph.conf
starting mds.b at
mkdir -p /home1/irteamsu/ceph/build/dev/mds.c
/home1/irteamsu/ceph/build/bin/ceph-authtool --create-keyring --gen-key --name=mds.c /home1/irteamsu/ceph/build/dev/mds.c/keyring
creating /home1/irteamsu/ceph/build/dev/mds.c/keyring
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring -i /home1/irteamsu/ceph/build/dev/mds.c/keyring auth add mds.c mon 'allow profile mds' osd 'allow rw tag cephfs *=*' mds allow mgr 'allow profile mds'
added key for mds.c
/home1/irteamsu/ceph/build/bin/ceph-mds -i c -c /home1/irteamsu/ceph/build/ceph.conf
starting mds.c at
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring fs volume create a
Volume created successfully (no MDS daemons created)
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring fs authorize a client.fs_a / rwp
[client.fs_a]
key = AQBQNZFfrOh0NRAAa4fYwDEcIaKPTAeNGo3fUg==
/home1/irteamsu/ceph/build/bin/ceph -c /home1/irteamsu/ceph/build/ceph.conf -k /home1/irteamsu/ceph/build/keyring fs set cephfs_a max_mds 3
Error ENOENT: Not found: 'cephfs_a'


Related issues

Copied to Ceph - Backport #47953: nautilus: vstart.sh: failed to run with multi active mds, when setting max_mds. Resolved
Copied to Ceph - Backport #47954: octopus: vstart.sh: failed to run with multi active mds, when setting max_mds. Resolved

History

#1 Updated by Kefu Chai 3 months ago

  • Status changed from New to Fix Under Review
  • Pull request ID set to 37752

#2 Updated by Patrick Donnelly 3 months ago

  • Status changed from Fix Under Review to Pending Backport
  • Assignee set to Jinmyeong Lee
  • Target version set to v16.0.0
  • Source set to Community (dev)
  • Backport set to octopus,nautilus

#3 Updated by Nathan Cutler 3 months ago

  • Copied to Backport #47953: nautilus: vstart.sh: failed to run with multi active mds, when setting max_mds. added

#4 Updated by Nathan Cutler 3 months ago

  • Copied to Backport #47954: octopus: vstart.sh: failed to run with multi active mds, when setting max_mds. added

#5 Updated by Nathan Cutler about 2 months ago

  • Status changed from Pending Backport to Resolved

While running with --resolve-parent, the script "backport-create-issue" noticed that all backports of this issue are in status "Resolved" or "Rejected".
